About the Job
JPMorgan Chase & Co. is seeking a Senior Data Engineer to join our Corporate Technology team in Hyderabad. In this role, you will design and deliver innovative, scalable data solutions on AWS using Databricks/Spark, modernize data processing platforms, contribute to LLM-assisted development of secure and reliable solutions, and accelerate engineering productivity.

Responsibilities
- Design and implement Databricks/Spark pipelines on AWS with Delta Lake.
- Build high-quality Python/PySpark production code aligned with security best practices.
- Implement CI/CD delivery using infrastructure-as-code (Terraform/CloudFormation) and automated testing.
- Produce architecture and design artifacts, analyze data for continuous improvement, and identify system issues.

Required Qualifications
- Eight years of software engineering experience.
- Proficiency in Python and PySpark.
- Experience with Databricks/Spark, database querying, and the full Software Development Life Cycle.
- Familiarity with agile methodologies, CI/CD, application resiliency, security, and LLM-assisted development.

Preferred Qualifications
- Experience with agentic automation, LLM tooling patterns, Data Mesh, Airflow, and ThoughtSpot.
- Relevant AWS/Databricks certifications.