Data Engineer
Date: Nov 17, 2025
Location: Berlin, DE
Company: sistemasgl
At Globant, we are working to make the world a better place, one step at a time. We enhance business development and enterprise solutions to prepare them for a digital future. With a diverse and talented team present in more than 30 countries, we are strategic partners to leading global companies in their business process transformation.
We are looking for a Data Engineer (Senior 1 and Senior 3) who shares our passion for innovation and change. This role is critical to helping our business partners evolve and adapt to consumers' personalized expectations in this new technological era.
What will help you succeed:
- Experience with AWS services: Kinesis, Flink, Glue, ECS, S3, and VPC Endpoints for scalable, reliable cloud-based data solutions.
- Fluency in German and English.
- Strong programming proficiency in Java, Python, and Scala for robust, efficient data processing.
- Expertise in Apache Spark, NiFi, Airflow, and SQL for creating, managing, and optimizing large-scale workflows.
- Knowledge of storage systems: Iceberg, Impala, Hive, Hadoop, and Kafka.
- Familiarity with monitoring tools such as OpenTelemetry, New Relic, and Splunk to ensure pipeline reliability and system performance.
- Comprehensive understanding of ETL processes, database architecture, and database management.
- Proven ability to design and implement end-to-end data pipelines.
- Hands-on experience with Apache Spark, Hadoop, and Cloudera for scalable Big Data solutions.
- Excellent analytical and troubleshooting skills.
- Effective collaboration in cross-functional teams and clear communication of technical content to stakeholders.
Main Responsibilities:
- Design and implement end-to-end data pipelines, ensuring seamless data transformation and transfer across systems.
- Build cloud-based data solutions on AWS using Kinesis, Flink, Glue, ECS, S3, and VPC Endpoints.
- Develop robust, efficient data processing applications in Java, Python, and Scala.
- Create, manage, and optimize workflows for large-scale data processing with Apache Spark, NiFi, Airflow, and SQL.
- Efficiently organize and access large amounts of data using Iceberg, Impala, Hive, Hadoop, and Kafka.
- Ensure the reliability of data pipelines and system performance with monitoring tools such as OpenTelemetry, New Relic, and Splunk.
- Establish ETL processes, database architecture, and database management practices for efficient data integration and organization.
- Implement scalable Big Data solutions with Apache Spark, Hadoop, and Cloudera.
- Analyze, debug, and resolve complex data-related challenges.
- Work effectively in cross-functional teams and clearly communicate technical solutions to various stakeholders.
This job can be filled in Germany #LI-Hybrid or #LI-Remote.
Join us to create digital products that people love. We will bring businesses and consumers together through AI technology and creativity, driving digital transformation to impact the world positively.
We may use AI and machine learning technologies in our recruitment process. Globant is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, veteran status, or any other characteristic protected by applicable federal, state, or local law. Globant is also committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures. If you need assistance or an accommodation due to a disability, please let your recruiter know.
Final compensation offered is based on multiple factors such as the specific role, hiring location, as well as individual skills, experience, and qualifications. In addition to competitive salaries, we offer a comprehensive benefits package. Learn more about life at Globant here: Globant Experience Guide.