Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
At IBM, we know technology. But technology is only as good as the results it delivers. Are you interested and adept in application development methodology and tools? The IBM Client Innovation Centre (CIC) occupies a unique position in the IT marketplace – at the intersection of business and technology.
As a member of our team, you will help IBM clients solve some of their most challenging business problems, capitalize on market opportunities and achieve real value in today’s tough economy.
We’re looking for passionate, original thinkers who want to drive progress, grow their network, and further their careers. If you see yourself as someone who never stops learning, who believes no problem is too big to solve, and who wants to unleash their potential, an Associate role is for you.
As an Associate Data Engineer, you will:
- Design, build, and maintain scalable data pipelines and architectures in the cloud.
- Develop and manage data models and databases, ensuring they meet the needs of various business functions.
- Collaborate with data scientists, analysts, and engineers to integrate data from diverse sources.
- Implement data storage solutions that are highly scalable, secure, and compliant.
- Ensure data quality and integrity across all systems.
- Develop processes and scripts to automate data cleaning, transformation, and loading (ETL/ELT).
- Participate in code reviews and ensure best practices are followed.
- Provide technical guidance and mentoring to junior team members as required.
Required Technical And Professional Expertise
- Eligibility to work in Canada without future visa sponsorship (valid work permit, permanent residency, or Canadian citizenship).
- Willingness to live and work in Calgary, AB.
Preferred Education
- Bachelor's degree in computer science, computer engineering, software engineering, or software development
Core Competencies
- Programming: Python, SQL (Java or Scala is a plus)
- Data Engineering Tools:
  - Apache Spark, Hadoop
  - Airflow, dbt, Kafka (preferred)
- Cloud Platforms:
  - AWS, Azure, or GCP (experience with services like S3, Redshift, BigQuery, or Azure Data Lake)
- Databases:
  - Relational (PostgreSQL, MySQL)
  - NoSQL (MongoDB, Cassandra)
- AI & Analytics Tools (for AI-focused roles):
  - TensorFlow, PyTorch, scikit-learn
  - Jupyter, pandas, NumPy
  - Data Visualization: Power BI, Tableau, or matplotlib/seaborn
Additional Skills
- Data modeling and ETL pipeline development
- Machine learning model development and evaluation
- CI/CD basics and version control (Git/GitHub)
Soft Skills
- Strong analytical and problem-solving ability
- Effective communication and teamwork
- Curiosity and eagerness to learn emerging tools/tech