Data Engineer/Analyst, Senior Level
Date: Jun 19, 2025
Location: Pune, Maharashtra, IN
Company: sistemasgl
At Globant, we are working to make the world a better place, one step at a time. We enhance business development and enterprise solutions to prepare them for a digital future. With a diverse and talented team present in more than 30 countries, we are strategic partners to leading global companies in their business process transformation.
We seek a Data Engineer (AWS), Senior Level, who shares our passion for innovation and change. This role is critical to helping our business partners evolve and adapt to consumers' personalized expectations in this new technological era.
Job Description:
Job Location: Pune/Bangalore/Hyderabad/Indore/Ahmedabad
Work Mode: Hybrid
Total Experience: 5 to 9 years
What will help you succeed:
We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have a strong background in designing, developing, and managing data pipelines, working with cloud technologies, and optimizing data workflows. You will play a key role in supporting our data-driven initiatives and ensuring the seamless integration and analysis of large datasets.
Design Scalable Data Models: Develop and maintain conceptual, logical, and physical data models for structured and semi-structured data in AWS environments.
Optimize Data Pipelines: Work closely with data engineers to align data models with AWS-native data pipeline design and ETL best practices.
AWS Cloud Data Services: Design and implement data solutions leveraging AWS Redshift, Athena, Glue, S3, Lake Formation, and AWS-native ETL workflows.
Design, develop, and maintain scalable data pipelines and ETL processes using AWS services (Glue, Lambda, Redshift).
Write efficient, reusable, and maintainable Python and PySpark scripts for data processing and transformation.
Write complex SQL queries and optimize them for performance and scalability.
Monitor, troubleshoot, and improve data pipelines for reliability and performance.
Focus on ETL automation using Python and PySpark: design, build, and maintain efficient data pipelines, ensuring data quality and integrity for various applications.
Must-have skills: AWS, Redshift, Lambda, Glue, Python, PySpark, SQL, and CloudWatch.
Create with us digital products that people love. We will bring businesses and consumers together through AI technology and creativity, driving digital transformation to positively impact the world.