Develop and manage ETL processes using Python, PySpark, and SQL.
Work with Databricks for data processing.
Utilize AWS services like S3, CloudWatch, IAM, SNS, and Lambda.
Manage infrastructure with Terraform.
Skills Required:
Strong experience in Python, PySpark, and SQL
Experience with graph databases is mandatory
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time