Budget: Below 30 LPA
Notice Period: Less than 30 days
Job Summary
We are seeking a highly skilled Azure Data Engineer to design, develop, and implement data lakehouse projects using Microsoft Azure technologies. The ideal candidate will have expertise in data engineering, SQL, Python, PySpark, Azure Data Factory (ADF), Databricks, and CI/CD. You will work closely with cross-functional teams to build scalable, high-performance data solutions while ensuring best practices in data integration, security, and compliance.
Key Responsibilities
- Design and develop distributed data pipelines within the Azure cloud ecosystem.
- Implement data lakehouse architectures using Azure Data Lake, Databricks, and ADF.
- Develop ETL/ELT workflows and optimize data ingestion, processing, and storage.
- Build robust data solutions hands-on with Python, SQL, and PySpark (an illustrative sketch follows this list).
- Design and implement CI/CD pipelines for automated deployment and monitoring.
- Develop data models and implement best practices for data governance and security.
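For illustration only, the sketch below shows the kind of PySpark pipeline work this role involves on Databricks: reading raw files from an Azure Data Lake zone, applying basic cleansing, and writing a curated Delta table. All storage paths, column names, and table names are hypothetical placeholders, not details of any actual project.

```python
# Hypothetical lakehouse ingestion step (illustrative only).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Read raw files landed in the Data Lake "raw" zone (ABFS path is a placeholder).
raw = spark.read.format("json").load(
    "abfss://raw@examplestorageaccount.dfs.core.windows.net/orders/"
)

# Basic typing and deduplication before promoting data to the curated zone.
curated = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropDuplicates(["order_id"])
)

# Write as a Delta table, partitioned by ingest date.
(
    curated.withColumn("ingest_date", F.current_date())
           .write.format("delta")
           .mode("append")
           .partitionBy("ingest_date")
           .saveAsTable("curated.orders")
)
```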
Collaboration & Agile Processes
- Work closely with stakeholders to understand business requirements and translate them into scalable data solutions.
- Participate in Agile ceremonies including daily stand-ups, sprints, retrospectives, and code reviews.
- Document data pipelines, architectures, and best practices.
- Collaborate with cross-functional teams to develop and deliver high-quality solutions.
Key Skills & Competencies
- Data Engineering & SQL – Strong experience with SQL-based transformations and data modelling.
- Python & PySpark – Advanced skills in Python programming and distributed data processing with PySpark.
- Azure Data Services – Hands-on experience with Azure Data Lake, Azure Data Factory (ADF), and Databricks.
- CI/CD & DevOps – Experience in Azure DevOps, CI/CD pipelines, and automated testing.
- Data Integration – Proficiency in integrating data via APIs, Web Services, and Queues.
- Strong Communication & Problem-Solving Skills – Ability to collaborate with teams, solve complex problems, and deliver high-quality solutions.
Minimum Qualifications & Experience
Education: Bachelor’s degree in Computer Science, Information Technology, or a related field.
Experience:
- 7+ years of hands-on experience in data engineering and distributed data pipelines.
- 5+ years of experience in Azure data services (Azure Data Lake, ADF, Databricks).
- 5+ years of experience in Python, SQL, object-oriented programming, and ETL development.
- Experience in data integration using APIs, Web Services, and Message Queues.
- Experience with Agile methodologies, JIRA, Confluence, and Azure DevOps.
Additional Responsibilities
- Maintain comprehensive project documentation and follow industry best practices.
- Ensure compliance with privacy regulations, data security, and governance standards.
- Report any security breaches or unauthorized access to the compliance team.
- Stay current with the latest advancements in cloud data engineering.