About the Job
JP Morgan Chase & Co. is seeking a Senior AWS Data Engineer to design, build, and optimize scalable data pipelines and architectures on Amazon Web Services (AWS). You will apply your expertise in AI, Python, Spark, Data Lakes, and Snowflake to deliver robust, AI-driven data solutions, collaborating with data scientists, analysts, and business stakeholders and using AI productivity tools to improve development efficiency and code quality. You will architect and implement Data Lake solutions, integrate and manage Snowflake environments, and develop optimized distributed data processing workflows with Apache Spark (PySpark). A key part of the role is enabling data-driven models and solutions for AI/ML teams while ensuring data quality, governance, and security.
Key Responsibilities:
* Design, develop, and maintain scalable data pipelines and ETL processes on AWS (S3, Glue, Lambda, Redshift, etc.) using Python and Spark (a minimal PySpark sketch follows this list).
* Architect and implement Data Lake solutions, ensuring efficient data ingestion, storage, and retrieval.
* Integrate and manage Snowflake environments for data warehousing and analytics (see the Snowflake load sketch below).
* Develop and optimize distributed data processing workflows using Apache Spark (PySpark).
* Collaborate with AI/ML teams to support feature engineering and model deployment.
* Leverage AI coding assistants (e.g., Copilot, Claude) to accelerate development and improve code quality.
* Optimize data workflows for performance, reliability, and cost efficiency.
* Ensure data quality, governance, and security across all platforms.
* Automate data processing tasks using Python, Spark, and AWS-native tools.
* Monitor, troubleshoot, and resolve issues in data pipelines and infrastructure.
* Document technical solutions and provide knowledge transfer.
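To give a concrete sense of the first responsibility, below is a minimal PySpark ETL sketch of the kind this role would own. The bucket, paths, and column names (my-bucket, raw/orders/, curated/orders/, order_id, order_ts) are placeholders, and the sketch assumes it runs on a cluster (for example AWS Glue or EMR) that already has S3 access configured.

```python
# Minimal PySpark ETL sketch: raw JSON in the Data Lake landing zone -> curated Parquet.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest raw JSON events from the landing zone on S3.
raw = spark.read.json("s3://my-bucket/raw/orders/")

# Basic cleansing: drop malformed rows, normalize types, derive a partition column.
curated = (
    raw.dropna(subset=["order_id", "order_ts"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])
)

# Write partitioned Parquet back to the curated zone for downstream consumers
# (e.g. Snowflake stages, Athena, or AI/ML feature pipelines).
(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://my-bucket/curated/orders/"))

spark.stop()
```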
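For the Snowflake integration responsibility, a companion sketch loads the curated DataFrame from the example above into a Snowflake table via the Snowflake Spark connector. It assumes the connector (net.snowflake.spark.snowflake) is on the cluster classpath; every connection value shown is a placeholder, and in practice credentials would come from a secrets manager rather than being hard-coded.

```python
# Illustrative Snowflake load using the Snowflake Spark connector.
# All connection values are placeholders, not real credentials.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",  # placeholder account URL
    "sfUser": "etl_user",                         # placeholder user
    "sfPassword": "********",                     # retrieve from a secrets manager in practice
    "sfDatabase": "ANALYTICS",
    "sfSchema": "CURATED",
    "sfWarehouse": "ETL_WH",
}

# Append the curated DataFrame (from the previous sketch) to a Snowflake table.
(curated.write
        .format("net.snowflake.spark.snowflake")
        .options(**sf_options)
        .option("dbtable", "ORDERS")
        .mode("append")
        .save())
```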