Data Architect Engineer
Location: Remote, must work EST hours
Duration: 6-month contract (potential to extend or convert)
Pay: $140K
JOB DESCRIPTION
Our global Fortune 500 client, with U.S. headquarters in Charlotte, NC, is a world-class food service provider with a strong presence across the nation. Celebrating almost 30 years in North America, this employee-focused company has received honors for diversity and inclusion, innovation, health and wellness, and company culture. CRG has successfully placed over 220 employees with this organization over the last 7 years, and it is known for its continuous growth opportunities, fantastic benefits package, innovative technology, flexible work environment, and collaborative culture.
The Data Architect Engineer will be responsible for designing and implementing robust, scalable, and high-performing data solutions on AWS. You will work closely with data engineers, software developers, and business stakeholders to ensure our cloud data infrastructure meets the needs of our growing organization.
Our current tech stack is Python and Angular/TypeScript, and we work with a range of AWS services such as S3, DynamoDB, Athena, Lambda, and Glue, alongside PostgreSQL and Snowflake.
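For context, here is a minimal sketch of the kind of Python Lambda code this stack implies: a function triggered by an S3 upload that records basic object metadata in DynamoDB. The bucket, table, and field names are hypothetical and used only for illustration.

```python
import json
import boto3

# Hypothetical DynamoDB table used only for illustration.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ingest_audit_log")


def handler(event, context):
    """Triggered by an S3 ObjectCreated event; logs object metadata to DynamoDB."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"].get("size", 0)

        # Write one audit row per uploaded object (key schema is hypothetical).
        table.put_item(
            Item={
                "object_key": key,
                "bucket": bucket,
                "size_bytes": size,
            }
        )

    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```

In practice a function like this would be deployed with an IAM role scoped to the specific bucket and table it touches.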
RESPONSIBILITIES
* Define, build, test, and implement scalable data pipelines.
* Design and implement cloud-native data architectures on AWS, including data lakes, data warehouses, and real-time data processing pipelines.
* Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.
* Collaborate with development, analytics, and reporting teams to develop data models that feed business intelligence tools.
* Design and build API integrations to support the needs of analysts and reporting systems.
* Develop, deploy, and manage AWS Lambda functions and AWS Glue jobs written in Python (a Glue job skeleton follows this list).
* Ensure efficient and scalable serverless operations.
* Debug and troubleshoot Lambda functions and Glue jobs.
* Collaborate with teams working on other AWS services to design and implement robust solutions.
* Optimize data storage, retrieval, and pipeline performance for large-scale distributed systems.
* Ensure data security, compliance, and privacy policies are integrated into solutions.
* Develop and maintain technical documentation and architecture diagrams.
* Stay current with AWS updates and industry trends to continuously evolve the data architecture.
* Mentor and provide technical guidance to junior team members and stakeholders.
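As referenced in the Lambda/Glue item above, a bare-bones AWS Glue job in Python generally follows the skeleton below: bootstrap the job, read a table from the Glue Data Catalog, apply transformations, and write curated Parquet to S3. The database, table, and bucket names are hypothetical, and the transformation step is left as a placeholder.

```python
import sys

from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments and initialize the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table from the Glue Data Catalog (names are hypothetical).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_sales_db",
    table_name="orders",
)

# Transformations would go here (deduplication, type casting, joins, etc.).

# Write the curated output to S3 as Parquet (bucket name is hypothetical).
glue_context.write_dynamic_frame.from_options(
    frame=orders,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)

job.commit()
```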
QUALIFICATIONS
* Bachelor's degree in Computer Science, Information Systems, Analytics, or a related field.
* 5+ years of experience in data architecture, engineering, or similar roles.
* 3+ years of programming experience with Python.
* 3+ years in an ETL or Data Engineering role building and implementing data pipelines.
* Strong understanding of design best practices for OLTP systems, ODS reporting, and dimensional data modeling.
* Hands-on experience with AWS Lambda, AWS Glue, and other AWS services.
* Proficient in Python and SQL with the ability to write efficient queries.
* Experience with API-driven data access (API development experience a plus).
* Solid experience with database technologies (SQL, NoSQL) and data modeling.
* Understanding of serverless architecture benefits and challenges.
* Experience working in agile development environments.
* Highly self-motivated, detail-oriented, and able to work independently.
* Strong analytical thinking, problem-solving, and communication skills.
* AWS certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect) are highly desirable.
NICE TO HAVE
* Experience with modern data stack technologies (e.g., dbt, Snowflake, Databricks).
* Familiarity with machine learning pipelines and AI-driven analytics.
* Background in DevOps practices and Infrastructure as Code (IaC) using tools like Terraform or AWS CloudFormation (a brief sketch follows this list).
* Knowledge of CI/CD pipelines for data workflows.
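The list above names Terraform or AWS CloudFormation for IaC; as a Python-flavored illustration in the same spirit, the sketch below uses the AWS CDK (which synthesizes CloudFormation templates) to define a single Lambda function. The stack, function, and asset path names are hypothetical.

```python
from aws_cdk import App, Stack, aws_lambda as _lambda
from constructs import Construct


class DataPipelineStack(Stack):
    """Hypothetical stack defining one Python Lambda for a data pipeline."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # The handler code is assumed to live in ./lambda_src/load_orders.py.
        _lambda.Function(
            self,
            "OrderLoaderFunction",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="load_orders.handler",
            code=_lambda.Code.from_asset("lambda_src"),
        )


app = App()
DataPipelineStack(app, "data-pipeline-dev")
app.synth()
```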
Category Code: JN008