Cloud Data Engineer (Spark/Databricks)
Job Description
Bachelor’s degree in Computer Science, Information Technology, or a related field; an advanced degree is a plus.
Requirements:
- 7+ years of experience as a Data Engineer or in a similar role.
- Proven experience with major cloud platforms (AWS, Azure, or GCP).
- Hands-on experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc.
- Experience with other ETL tools like Informatica, SAP Data Intelligence, etc.
- Experience in building and managing data lakes and data warehouses.
- Proficiency with data platforms like Redshift, Snowflake, BigQuery, Databricks, and Azure Synapse.
- Experience with data extraction from SAP or ERP systems is a plus.
- Strong experience with Spark and Scala for data processing (see the illustrative sketch after this list).
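To give candidates a concrete sense of the Spark/Scala work this role involves, here is a minimal batch ETL sketch. The paths, table layout, and column names are hypothetical placeholders for illustration only, not part of this posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-etl").getOrCreate()

    // Extract: raw order events from a hypothetical data-lake landing zone.
    val raw = spark.read.parquet("s3://example-lake/raw/orders/")

    // Transform: keep completed orders and build a daily revenue aggregate.
    val daily = raw
      .filter(col("order_status") === "COMPLETED")
      .withColumn("order_date", to_date(col("order_ts")))
      .groupBy("order_date", "region")
      .agg(sum("amount").as("revenue"), count("*").as("order_count"))

    // Load: write to a curated zone, partitioned for downstream warehouse loads.
    daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-lake/curated/daily_revenue/")

    spark.stop()
  }
}
```

Day-to-day work follows this extract-transform-load shape, with the curated output feeding warehouse platforms such as Redshift, Snowflake, BigQuery, or Azure Synapse.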
Key Skills:
- Strong programming skills in Python, Java, or Scala.
- Proficient in SQL and query optimization techniques (see the join-tuning sketch after this list).
- Familiarity with data modeling, ETL/ELT processes, and data warehousing concepts.
- Knowledge of data governance, security, and compliance best practices.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
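As a small illustration of the query optimization skills listed above, the sketch below broadcasts a small dimension table so Spark can choose a broadcast hash join instead of a shuffle-heavy sort-merge join. The datasets and column names are again hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object JoinTuning {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("join-tuning").getOrCreate()

    // Hypothetical fact and dimension datasets.
    val facts = spark.read.parquet("s3://example-lake/curated/daily_revenue/")
    val regions = spark.read.parquet("s3://example-lake/reference/regions/")

    // Broadcasting the small dimension table avoids shuffling the large
    // fact table across the cluster for the join.
    val enriched = facts.join(broadcast(regions), Seq("region"))

    // Inspect the physical plan to verify the chosen join strategy.
    enriched.explain()

    enriched.show(10)
    spark.stop()
  }
}
```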
Preferred Qualifications:
- Experience with other data tools and technologies, such as Apache Spark or Hadoop.
- Certifications in cloud platforms (AWS Certified Data Analytics – Specialty, Google Professional Data Engineer, Microsoft Certified: Azure Data Engineer Associate).
- Experience with CI/CD pipelines and DevOps practices for data engineering.
The selected applicant will be subject to a background investigation, which will be conducted, and the results of which will be used, in compliance with applicable law.
Benefits:
Health insurance, commuting support, and lunch service.