Position Overview: We are seeking an experienced remote Data Engineer based in Mexico to join our dynamic data team. The ideal candidate has a strong background in building efficient, scalable data pipelines, designing robust data architectures, and implementing ETL/ELT processes. The role requires proficiency in T-SQL, PySpark, Python, Azure Synapse Analytics, Azure Data Factory, and SSIS. The Data Engineer will play a critical role in ensuring the availability, integrity, and reliability of our data assets, contributing to the success of our data-driven initiatives.
Responsibilities
Design, develop, and maintain end-to-end data pipelines, ensuring smooth and efficient data flow from various source systems to target destinations.
Collaborate with cross-functional teams to gather requirements, understand data needs, and implement data solutions that align with business objectives.
Build and optimize data architectures for storing, processing, and analyzing large volumes of structured and unstructured data.
Implement robust ETL/ELT processes to transform raw data into clean, usable formats, addressing data quality and consistency concerns.
Work with data stakeholders to define data integration and transformation requirements, and translate them into technical solutions.
Develop and maintain documentation for data pipelines, data models, and architectural designs, ensuring knowledge sharing across the team.
Monitor and troubleshoot data pipelines, addressing performance bottlenecks, data quality issues, and ensuring data accuracy.
Collaborate with the data infrastructure team to ensure data security, compliance, and privacy standards are upheld.
Stay up-to-date with industry best practices, emerging technologies, and trends in data engineering, contributing insights to the team’s continuous improvement efforts.
Qualifications
Bachelor’s degree in Computer Science, Information Technology, or a related field. Master’s degree is a plus.
6+ years of proven experience as a Data Engineer or in a similar role, with a strong track record of designing and implementing data pipelines and architectures.
Proficiency in T-SQL, PySpark, Python, and ETL/ELT methodologies, with hands-on experience in developing complex data transformations.
Expertise in Azure Synapse Analytics, including designing and implementing solutions within the Azure ecosystem; experience with Azure Data Factory, Azure DevOps, and Azure Databricks is a nice-to-have.
Solid understanding of data warehousing concepts, data modeling, and database design principles.
Experience with SSIS (SQL Server Integration Services) for ETL processes.
Familiarity with Spark SQL for big data processing.
Strong problem-solving skills and the ability to diagnose and resolve issues in data pipelines.
Excellent communication skills to collaborate effectively with both technical and non-technical stakeholders.
Attention to detail and a commitment to producing high-quality work.
Proactive attitude, adaptability to change, and willingness to learn and apply new technologies.
Relevant certifications in Azure, data engineering, big data, or related fields are a plus.
Must be located in Mexico.
#Remote