Software Engineer-1715

Remote · Full-time
About the Position

FreeWheel, a Comcast company, provides comprehensive ad platforms for publishers, advertisers, and media buyers. Powered by premium video content, robust data, and advanced technology, we make it easier for buyers and sellers to transact across all screens, data types, and sales channels. As a global company, we have offices in nine countries and can insert advertisements around the world.

Job Description

Duties:
• Contribute to a team responsible for designing, developing, testing, and launching critical systems within the data foundation team
• Perform data transformations and aggregations using Scala within the Spark framework, including the Spark APIs, Spark SQL, and Spark Streaming (a minimal sketch follows this list)
• Use Java within the Hadoop ecosystem, including HDFS, HBase, and YARN, to store and access data and to automate tasks
• Process data using Python and shell scripts
• Optimize the performance of applications running on the Java Virtual Machine (JVM)
• Architect and integrate data using Delta Lake and Apache Iceberg
• Automate the deployment, scaling, and management of containerized applications using Kubernetes
• Develop software infrastructure using AWS services, including EC2, Lambda, S3, and Route 53
• Monitor applications and platforms using Datadog and Grafana
• Store and query relational data using MySQL and Presto
• Support applications under development and customize current applications
• Assist with the software update process for existing applications and with roll-outs of software releases
• Analyze, test, and assist with the integration of new applications
• Document all development activity
• Research, write, and edit documentation and technical requirements, including software designs, evaluation plans, test results, technical manuals, and formal recommendations and reports
• Monitor and evaluate competitive applications and products
• Review literature, patents, and current practices relevant to assigned projects
• Collaborate with project stakeholders to identify product and technical requirements
• Conduct analysis to determine integration needs
• Perform unit, functional, integration, and performance tests to ensure functionality meets requirements
• Build CI/CD pipelines to automate the quality assurance process and minimize manual errors

The position is eligible to work remotely one or more days per week, per company policy.
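To make the headline duty concrete, here is a minimal, purely illustrative sketch of the kind of Spark transformation and aggregation work described above. This is not FreeWheel code: the input path, output path, schema, and column names (event_ts, publisher_id, user_id) are invented for the example.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, count, countDistinct, to_date}

object ImpressionRollup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("impression-rollup")
      .getOrCreate()

    // Read raw ad-impression events; path and schema are hypothetical.
    val events = spark.read.parquet("s3://example-bucket/raw/impressions/")

    // DataFrame API: roll impressions up by day and publisher.
    val daily = events
      .withColumn("event_date", to_date(col("event_ts")))
      .groupBy(col("event_date"), col("publisher_id"))
      .agg(
        count("*").as("impressions"),
        countDistinct(col("user_id")).as("unique_users"))

    // The equivalent rollup expressed in Spark SQL.
    events.createOrReplaceTempView("impressions")
    val dailyViaSql = spark.sql(
      """SELECT to_date(event_ts) AS event_date,
        |       publisher_id,
        |       COUNT(*) AS impressions,
        |       COUNT(DISTINCT user_id) AS unique_users
        |FROM impressions
        |GROUP BY to_date(event_ts), publisher_id
        |""".stripMargin)

    // Persist the batch result, partitioned for downstream reads.
    daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/rollups/daily_impressions/")

    spark.stop()
  }
}
```

The same rollup is shown twice because the role calls for both the DataFrame API and Spark SQL; in practice the two forms compile to the same physical plan, so the choice is a matter of team convention.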
Requirements

• Bachelor's degree, or foreign equivalent, in Computer Science, Engineering, or a related technical field, and two (2) years of experience:
• Performing data transformations and aggregations using Scala within the Spark framework, including the Spark APIs, Spark SQL, and Spark Streaming
• Using Java within the Hadoop ecosystem, including HDFS, HBase, and YARN, to store and access data and to automate tasks
• Processing data using Python and shell scripts
• Developing software infrastructure using AWS services, including EC2, Lambda, S3, and Route 53
• Monitoring applications and platforms using Datadog and Grafana
• Storing and querying relational data using MySQL and Presto

Of which one (1) year includes:
• Optimizing the performance of applications running on the Java Virtual Machine (JVM)
• Architecting and integrating data using Delta Lake and Apache Iceberg
• Automating the deployment, scaling, and management of containerized applications using Kubernetes

Disclaimer: This information is intended to indicate the general nature and level of work performed by employees in this role. It is not designed to contain, or be interpreted as, a comprehensive inventory of all duties, responsibilities, and qualifications.