**What you'll do**
- Extract, Transform, and Load data from multiple sources and in multiple formats using Big Data technologies.
- Work across teams to integrate data from multiple business areas into a centralized data repository.
- Integrate our systems with existing internal systems, Data Fabric, and Corporate Services.
- Participate in a tight-knit engineering team employing agile software development practices.
- Triage product or system issues and debug, track, and resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
- Lead the effort for sprint deliverables and solve problems of medium complexity.
**What experience you need**
- Bachelor's degree in Computer Science, Systems Engineering, or equivalent experience
- 3+ years of experience working in Data Engineering using programming languages such as **Scala, Java, or Python; SQL is a must**
- 3+ years of experience working with **ETL (Extract, Transform, and Load) procedures**
- 3+ years of experience working with Big Data frameworks such as **Apache Spark, Apache Beam, or equivalent**
- 2+ years of experience working with workflow management technologies such as **Apache Airflow or equivalent**
- 1+ years of experience with software build management tools like **Maven or Gradle**
- 1+ years of experience with cloud technologies: **GCP, AWS, or Azure**
- English proficiency B2 or above
**What could set you apart**
- **Data Engineering using GCP technologies (BigQuery, Dataproc, Dataflow, Composer, etc.)**
- Experience with Big Data tools such as Hadoop, Hive
- Self-starter who identifies and responds to priority shifts with minimal supervision
- Source code control management systems (e.g., SVN, Git, GitHub) and build tools like Maven and Gradle
- Agile environments (e.g. Scrum, XP)
- Relational databases (e.g. SQL Server, Oracle, MySQL)
- Atlassian tooling (e.g., JIRA, Confluence) and GitHub