**About the Job**
Quest Global is seeking a Senior Software Engineer specializing in Python data engineering to join our team in Chennai, Tamil Nadu. In this hands-on role, you will design, build, and maintain robust end-to-end data pipelines and solutions that support analytics and AI/ML applications. You'll have the opportunity to influence architecture, data standards, and engineering best practices. The position requires close collaboration with globally distributed engineering, data science, and product teams, so strong communication skills and the ability to work effectively across time zones and cultures are essential.
**Key Responsibilities:**
- Design, develop, deploy, and support scalable data pipelines and platforms using Python.
- Build comprehensive data solutions, including ingestion, transformation, validation, storage, and consumption layers.
- Develop and maintain ETL/ELT pipelines integrating data from diverse sources.
- Design and optimize data models and storage architectures for analytics and AI/ML workloads.
- Collaborate with global teams to gather requirements and deliver data solutions.
- Troubleshoot complex data issues, perform root-cause analysis, and implement lasting fixes.
- Ensure data quality, reliability, observability, and performance.
- Produce technical documentation, architecture diagrams, and operational runbooks.
**Required Skills & Qualifications:**
- Strong proficiency in Python with a focus on data engineering.
- Proven experience building production-grade data pipelines.
- Solid understanding of data warehousing concepts and architectures.
- Strong SQL skills (3–5+ years) with experience in performance tuning and optimization.
- Experience with Pandas and NumPy.
- Ability to design scalable data storage solutions.
- Strong analytical and communication skills.
- Experience working with global, cross-functional teams.
- Experience with relational databases (MySQL, PostgreSQL, Oracle) and NoSQL databases (MongoDB, Cassandra, DynamoDB).
- Experience with data cleaning, feature engineering, and transformation workflows for ML.
- Familiarity with scikit-learn and ML data pipelines.
- Demonstrated ownership of production-grade ETL pipelines and involvement in data modeling decisions.
- Experience handling pipeline reliability issues or production incidents.