Principal Data Engineer I


Employee Type: Full Time

Position: Experienced Professional

Salary: Not Listed

Job Description

Spectrum is seeking a Principal Data Engineer I to develop scalable data pipelines, build API integrations, and collaborate with cross-functional teams to enhance AI/ML models. Proficiency with cloud services, SQL-based ETL, big data, and tools such as Snowflake and Airflow is required. The role offers the opportunity to work with cutting-edge technology, improving data accessibility and fostering data-driven decision making.

MAJOR DUTIES AND RESPONSIBILITIES

  • Develops and maintains scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity

  • Collaborates with data, data science, and business teams to improve the data models that feed AI/ML models, applications, and business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization

  • Implements processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it

  • Writes unit and integration tests, contributes to the engineering wiki, and documents work

  • Performs the data analysis required to troubleshoot and resolve data-related issues

  • Works closely with data scientists, frontend and backend engineers, analysts, and product managers

  • Designs data integrations and a data quality framework

  • Identifies and evaluates open source and vendor tools for data lineage

  • Works closely with business units and engineering teams to develop a strategy for long-term data platform architecture

REQUIRED QUALIFICATIONS

  • Proficiency with cloud-based services

  • Proficiency in coding/scripting and in the use of code repositories

  • Proficiency with SQL ETL techniques and familiarity with data platforms such as Teradata, Databricks, and Snowflake

  • Proficiency working with big data

  • Proficiency with data workflow/data prep platforms such as Airflow, Informatica, Pentaho, or Talend (a minimal Airflow sketch follows this list)

  • Basic background in Linux/Unix/CentOS or Windows installation and administration

  • Familiarity with REST, SOAP, RPC, and gRPC API protocols

  • Exposure to visualization or BI tools, such as Tableau and Looker

  • Exposure to AI/ML libraries and tooling, such as scikit-learn and TensorFlow

  • Familiarity with agile work practices and frameworks

  • Ability to identify and resolve end-to-end performance, network, server, and platform issues

  • Attention to detail with the ability to effectively prioritize and execute multiple tasks
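
For illustration of the pipeline and orchestration proficiencies above, here is a minimal sketch of a daily ETL job using Airflow's TaskFlow API (assumes Airflow 2.4 or later; the DAG name, sample rows, and data-quality rule are hypothetical, not part of the requirements):

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_sales_etl():
    """Hypothetical daily extract-transform-load pipeline."""

    @task
    def extract():
        # Stand-in for pulling raw rows from a source API or staging table.
        return [{"order_id": 1, "amount": 10.0}, {"order_id": 2, "amount": -1.0}]

    @task
    def transform(rows):
        # Simple data-quality gate: drop rows with non-positive amounts.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows):
        # Stand-in for writing cleaned rows to a warehouse such as Snowflake.
        print(f"loading {len(rows)} clean rows")

    load(transform(extract()))


daily_sales_etl()
```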

PREFERRED QUALIFICATIONS

  • Advanced knowledge of SQL and SQL optimization techniques

  • Experience with Snowflake

  • Experience with Airflow and Spark

  • Proficiency coding in Python and shell scripting; experience with Git

  • Proficiency in the AWS cloud environment

  • Experience with and/or certification in SAFe (Scaled Agile Framework)

  • Knowledge of best practices and IT operations for always-up, always-available services

  • Experience with Tableau and/or Looker

  • Experience in advertising sales, cable, or related industries

Education

Degree in an engineering discipline, computer science, or a related field, or equivalent work experience; Master of Science preferred.

Skills
Python, SQL, Communication, Interpersonal Skills, Management, Problem Solving, Time Management, Troubleshooting, AWS Platforms, AI/ML, Organizational Skills