Senior Databricks Engineer - Fully Remote

Remote • Full-time
Key Details
• Experience Level: Senior (5 to 8 years)
• Job Type: Full Time
• Visa Sponsorship: Unknown
• Industries: Biotechnology, Scientific Research, Healthcare

Responsibilities
• Architect, build, and optimize data solutions to support Thermo Fisher Scientific's digital transformation strategy
• Build connections and workflows within cloud-based systems
• Build, develop, and deploy scalable data pipelines and ETL/ELT processes using Databricks
• Engineer robust data solutions to integrate enterprise data sources, including ERP, CRM, laboratory, and manufacturing systems
• Develop reusable frameworks and templates to accelerate data delivery and ensure consistency across domains
• Implement and maintain high-performance data connections across Databricks, Snowflake, and Iceberg environments
• Author and optimize complex SQL queries, transformations, and data models for analytics and reporting use cases
• Support data lakehouse and data mesh initiatives to enable seamless access to trusted data across the organization
• Apply data governance, lineage, and security controls using Unity Catalog, Delta Live Tables, and related technologies
• Partner with compliance and cybersecurity teams to uphold data privacy, GxP, and regulatory standards
• Establish monitoring, auditing, and optimization processes for ongoing data quality assurance
• Collaborate with data scientists, architects, and business partners to build and implement end-to-end data solutions
• Serve as a technical mentor and leader within the CRG data engineering community
• Contribute to critical initiatives for digital platform modernization and advanced analytics enablement

Requirements
• Bachelor's degree or equivalent (a combination of appropriate education, training, and/or directly related experience may be considered)
• Minimum of 8 years of professional experience in data engineering or data platform development
• Minimum of 5 years of hands-on experience with Databricks and Apache Spark in production environments
• Demonstrated expertise with Snowflake and Apache Iceberg
• Strong proficiency in SQL and experience optimizing queries on large, distributed datasets
• Proven experience with cloud-based data platforms (Azure preferred; AWS or GCP acceptable)
• Strong understanding of data modeling, ETL/ELT pipelines, and data governance practices
• Experience implementing Unity Catalog or CI/CD pipelines for data workflows (preferred)

Skills
• Databricks
• Apache Spark
• Snowflake
• Iceberg
• SQL
• ETL
• ELT
• Data Pipelines
• Cloud-Native Data Architectures