Data Pipeline Engineer

Remote Full-time
Description

As a Big Data Architect Contractor, you will support the project team by designing and implementing large-scale data solutions to meet business needs.

Responsibilities:
• Design and develop scalable big data architectures to handle large volumes of data.
• Collaborate with stakeholders to understand data requirements and translate them into technical solutions.
• Implement data integration, data processing, and data storage solutions using big data technologies.
• Ensure data security, data quality, and data governance standards are met.
• Optimize data architectures for performance, scalability, and cost-efficiency.

Primary Skill Required for the Role: Databricks
Level Required for Primary Skill: Advanced (6-9 years experience)

Additional Skills Requested for Role:
• Microsoft - SQL Azure - Advanced (6-9 years experience)
• Data Modeling - Advanced (6-9 years experience)
• Data Engineering - Advanced (6-9 years experience)

Additional Details for Role: Data Pipeline Engineer

Skills:
• Databricks (Python and SQL/PySpark)
• Azure SQL Server (Managed Instance, Azure SQL DB, SQL Server VMs)
• Data modeling and ETL/ELT design patterns
• Docker and Azure Kubernetes Service (AKS) for select automation
• Performance optimization and scalability

Experience:
• 7+ years enterprise data engineering
• Healthcare data integration experience strongly preferred
• Experience with large-scale data harmonization and normalization