Senior Data Engineer (Databricks) — U.S. Citizens Only

Remote · Full-time
This role requires U.S. citizenship and U.S. residence to support work in secure, regulated environments.

About the Role
Join a small, experienced engineering team building scalable data platforms that support large-scale, mission-driven programs. As a Senior Data Engineer, you'll design and operate modern data pipelines, optimize distributed processing workloads, and help maintain secure, production-grade data systems in the cloud. This role is well-suited for engineers who enjoy working deep in the data stack: solving performance challenges, improving reliability, and building systems that others rely on daily.

What You'll Work On
You'll collaborate with engineers, data scientists, and cloud specialists to:
• Design and optimize large-scale data pipelines using Databricks and Spark
• Build reliable data workflows to ingest, transform, and serve complex datasets
• Develop and maintain data models that support analytics and downstream use cases
• Automate infrastructure and data workflows using Infrastructure as Code
• Monitor, troubleshoot, and support production data jobs
• Implement CI/CD workflows for data pipelines and processing code
• Ensure data systems meet security, reliability, and performance standards
• Work with containerized services where appropriate
• Participate in Agile delivery and continuous improvement efforts

What You Need
To succeed in this role, you should have:
• U.S. citizenship and U.S. residence
• 5+ years of experience in data engineering or backend data infrastructure
• Hands-on experience with Databricks, Spark, or distributed data processing in production
• Strong proficiency with Python and PySpark
• Experience with data modeling and schema design
• Familiarity with orchestration and transformation tools (Airflow, dbt, or similar)
• Experience working in cloud environments (Azure preferred; AWS or GCP acceptable)
• Working knowledge of CI/CD pipelines and version control workflows
• Experience with relational and NoSQL databases
• Comfort monitoring, debugging, and scaling cloud-based data systems
• Strong communication skills and the ability to work independently on a remote team

Nice to Have
• Experience with Infrastructure as Code (Terraform or similar)
• Familiarity with Docker and containerized workloads
• Exposure to Kubernetes or workload orchestration tools
• Experience with lakehouse architectures or Delta Lake
• Background supporting regulated or public-sector programs
• Active or prior security clearance
• Databricks certification

Why It's Interesting
You'll work on data systems that operate at scale and matter in the real world. This role offers ownership, technical depth, and the opportunity to improve how critical data platforms are built and maintained, without the overhead of large, bureaucratic teams. If you enjoy solving hard data engineering problems and want your work to have visible impact, this role provides that opportunity.

Benefits
• Competitive compensation
• Remote-first work environment
• Health benefits and PTO
• Retirement plan with company contribution
• Bonus opportunities
• Growth and learning support
Apply Now