WANTED

Senior Data Engineer – REF 114 – 02

  • Full Time
  • Permanent
  • Remote (South America)

Purpose of the Role

We are seeking a skilled Senior Data Engineer to contribute to the modernization and evolution of an enterprise data platform. This role involves designing, building, and maintaining reliable data pipelines, scalable data models, and robust infrastructure that support analytics and Business Intelligence initiatives.

The Senior Data Engineer will play a key role in migrating legacy data platforms to a modern, cloud-native architecture on AWS, leveraging technologies such as Redshift, dbt, and modern orchestration frameworks. This includes end-to-end ownership of data migration processes, ensuring data integrity, consistency, and performance throughout the transition.

The role requires a strong hands-on engineering mindset, the ability to make architectural and design decisions, and close collaboration with analysts, engineers, and business stakeholders to deliver reliable and high-quality data solutions.

Duties and Responsibilities

  • Design, build, and maintain scalable ETL/ELT pipelines using SQL, Python, and modern data integration frameworks (e.g., dbt, Talend, or similar).
  • Develop and optimize data models that support analytics and Business Intelligence, including dimensional models, star schemas, and performance-optimized datasets.
  • Lead and contribute to data platform modernization initiatives, including migrating legacy data warehouses (e.g., DB2, SQL Server) to AWS-based solutions such as Redshift.
  • Define and implement robust data validation and reconciliation strategies during migrations, ensuring data accuracy, completeness, and business consistency beyond basic checks.
  • Support and enhance existing data integrations and pipelines, including those connected to systems such as Semarchy as well as other upstream and downstream data flows.
  • Ensure strong data quality and observability by implementing testing frameworks, lineage tracking, monitoring, and alerting mechanisms.
  • Collaborate with analysts, engineers, and business stakeholders to design governed and accessible datasets that support data-driven decision-making.
  • Apply strong software engineering practices to data pipelines, including version control, code reviews, documentation, CI/CD workflows, and reproducible deployments.
  • Participate in or lead migrations between ETL tools or orchestration frameworks (e.g., Talend to Airflow), considering scalability, maintainability, and team adoption.
  • Take end-to-end ownership of data solutions—from requirements and design to deployment, monitoring, and continuous improvement.
  • Evaluate and design data architectures and pipeline strategies based on business requirements, performance considerations, and scalability trade-offs.

Required Experience & Knowledge

Mandatory:

  • 5+ years of professional experience in data engineering, data warehousing, or enterprise ETL development.
  • Advanced SQL expertise and strong experience working with relational databases (e.g., DB2, SQL Server, or similar legacy systems).
  • Hands-on experience developing ETL/ELT pipelines using tools such as dbt, Talend, Airflow, or similar frameworks.
  • Strong experience working with AWS data services, particularly Redshift, S3, Glue, and IAM.
  • Proficiency in Python (or similar scripting languages) for automation, data processing, and pipeline development.
  • Solid understanding of data modeling concepts for analytics and reporting, including dimensional modeling and performance optimization.
  • Experience with data migration projects, including moving data from legacy systems to cloud-based data platforms.
  • Experience implementing data validation, reconciliation, and data quality practices, especially in the context of migrations.
  • Hands-on experience with CI/CD pipelines and modern development workflows (e.g., Git-based version control).
  • Strong documentation and communication skills, with the ability to collaborate effectively across technical and non-technical teams.

Nice to Have:

  • Experience working with Master Data Management platforms such as Semarchy.
  • Familiarity with orchestration frameworks such as Airflow or AWS Glue workflows.
  • Experience implementing infrastructure-as-code practices for data platforms.
  • Experience building scalable and maintainable data platforms that support self-service analytics.
  • Experience working with streaming or near real-time data processing.

Skills and Attributes

  • Strong problem-solving and analytical skills with a focus on building scalable, reliable data solutions.
  • Ownership mindset with the ability to work independently and deliver high-quality outcomes with minimal supervision.
  • Ability to treat data pipelines and transformations as production-grade software—versioned, testable, observable, and maintainable.
  • Ability to make informed technical decisions and evaluate trade-offs between different architectural approaches.
  • Excellent communication skills to collaborate effectively with engineers, analysts, and business stakeholders.
  • Proactive, adaptable, and curious, with a strong interest in modern data engineering practices and cloud technologies.
  • Ability to thrive in a collaborative, fast-paced technical environment while maintaining attention to detail and operational reliability.

Required Education & Qualifications

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field (or equivalent practical experience).
  • Strong proficiency in spoken and written English.
  • Relevant certifications in AWS, data engineering, or cloud technologies are considered a plus.

Apply Online

A valid email address is required. Applications may be submitted as DOCX or PDF files up to 5 MB in size.