Data Processing Engineer

Job Description


As a software engineer on the Data Pipeline team, you will build and support systems that process datasets describing customer behavior for our Client’s OTT streaming video applications. The team collects, transforms, and assembles data from access logs, services, web and native applications, and authoritative stores to create a unified view of the customer experience across web, mobile, gaming consoles, and connected screens. It provides the platform that enables near-real-time targeted customer experiences and dynamic business decisions.

The current data processing infrastructure is ready for an upgrade, and the team is looking for data-minded developers to design and implement the second version of the event-processing pipeline, supporting both batch and stream processing scenarios. You will work as part of a cross-functional team of software developers and business intelligence engineers to build a new processing infrastructure that is discoverable and traceable, and that produces high-quality data truthfully representing the online behavior of the Client’s customers.

Basic Qualifications

  • Bachelor’s degree in Computer Science, Software Engineering, or a related technical discipline
  • 5+ years of experience developing, configuring, and maintaining large-scale applications in a microservice architecture
  • 3+ years of professional development experience using languages such as TypeScript, Java, or C#
  • Strong understanding of Node.js, web services, workflow modeling, web application development, and industry-standard version control systems
  • Solid fundamentals in object-oriented design, data structures, and algorithm design

Preferred Qualifications

  • Graduate degree in Computer Science, Data Engineering, or a related technical discipline
  • Experience building data pipelines and applications that stream and process datasets at low latency (e.g., Apache Spark, Apache Kafka, Amazon Kinesis, Apache Beam)
  • Experience using native AWS services (preferably S3, Lambda, Redshift, and DynamoDB)
  • Experience with continuous ETL sourced from databases (Cassandra, Postgres, Redis) and third-party system APIs
  • Experience with data vault integration and with visualization tools such as Tableau, Periscope, Amplitude, and Snowflake
  • Hands-on experience with native mobile application development stacks, e.g., at least one of Android (Java) or iOS (Objective-C or Swift)

Reference Number: 5447