Senior Data Engineer (WFH - Mexico)
Remote
Full Time
Experienced
🚀 Senior Data Engineer
Join an innovative data engineering team and help revolutionize data pipeline automation and management. Leverage cutting-edge big data technologies and cloud tools to transform, automate, and optimize data pipelines for large-scale datasets. Collaborate with cross-functional teams to ensure seamless data extraction, transformation, and deployment, enabling efficient automation and high-quality results.
What You’ll Need to Succeed
• Proficiency in Apache Airflow for data pipeline creation.
• Strong experience with Apache Spark for handling large datasets.
• Hands-on expertise with AWS EMR for data extraction.
• Advanced programming skills in Scala and Java.
• Proven ability to manage and configure Jenkins pipelines for application automation.
• Familiarity with EKS deployment and container orchestration.
• Experience using Gradle to build JAR files and package them into Docker images.
• Familiarity with Spinnaker (limited use).
Role Requirements
• Ensure data jobs have accurate configurations, command-line arguments, and optimized scheduling.
• Automate existing manual Jenkins pipelines by understanding current technologies and redesigning them for end-to-end automation.
• Collaborate with cross-functional teams to ensure data quality and alignment with automation best practices.
Preferred Qualifications
• Experience with large-scale data migration and integration projects.
• Strong understanding of cloud-based data solutions and best practices.
• Background in working with diverse engineering teams to ensure pipeline efficiency.
Additional Details
• Location: Anywhere in Mexico (remote work from home).
• Eligibility: Applicants must be fluent English speakers and legal residents of Mexico (directly hired as a full-time employee).
• Contract Type: Direct hire, indefinite contract.
Starting Benefits
• Aguinaldo and vacation days, as required by law.
• Grocery Vouchers (Vales de Despensa).
• Life Insurance.
• Major Medical Insurance: Individual coverage.