[Remote] Data Engineer
Note: This is a remote position open to candidates in the USA.

Xsolla is a global commerce company that provides tools and services for the video game industry. The Data Engineer develops and optimizes data pipelines and models to support user insights and machine learning applications, while ensuring data accuracy and performance across platforms.

Responsibilities
- Build and optimize data pipelines, the data dictionary, and ETL workflows in Snowflake using Snowpark, Streams/Tasks, and Snowpipe
- Develop scalable data models supporting user-360 views, churn prediction, and recommendation engine inputs
- Support integration across data sources: MySQL, BigQuery, Redis, Kafka, GCP Storage, and API Gateway
- Implement CI/CD for data pipelines using Git, dbt, and automated testing
- Define data quality checks and auditing pipelines for the ingestion and transformation layers
- Tune warehouse performance and cost efficiency via query optimization, caching, and cluster sizing
- Establish data partitioning, clustering, and materialized views for fast query execution
- Build dashboards and monitors for pipeline health, job success, and data latency metrics (e.g., via Looker, Tableau, or Snowsight)
- Establish and enforce naming conventions, data lineage, and metadata standards across schemas
- Contribute to the company's evolving data mesh and streaming architecture vision

Skills
- 0-3 years of experience in data engineering and the surrounding database ecosystem
- SQL and Python skills, with proven experience building ETL/ELT at scale
- Understanding of Snowflake performance tuning, query optimization, and warehouse orchestration
- Understanding of data modeling (Kimball, Data Vault, or hybrid)
- Familiarity with API-based data integration and microservice architectures
- Excellent cross-functional communication: can translate between engineering and business
- Hands-on problem solver who balances velocity with reliability

Company Overview
Xsolla is a video game company that provides tools and services to help developers and publishers launch, monetize, and scale their games. It was founded in 2005 and is headquartered in Sherman Oaks, California, USA, with a workforce of 1001-5000 employees. Its website is