
Data Engineer - PySpark @ ValueLabs



Job Description

Job Title: PySpark Data Engineer

We're growing our Data Engineering team at ValueLabs and looking for a talented individual to build scalable data pipelines on Cloudera Data Platform!

Experience: 5 to 9 years.

PySpark Job Description:

  • Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
  • Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
  • Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
  • Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
  • Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
  • Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
  • Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
  • Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
  • Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
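The pipeline responsibilities above (ingestion, cleansing, validation, and loading) can be sketched in a minimal PySpark job. Column names (customer_id, amount, event_date) and paths are hypothetical, chosen only for illustration; running the Spark portion assumes a working Spark installation.

```python
def is_valid_record(row):
    """Pure-Python data-quality rule: non-empty id, non-negative amount.

    Kept separate from Spark so the rule itself is unit-testable.
    """
    return bool(row.get("customer_id")) and (row.get("amount") or 0) >= 0


def run_pipeline(input_path, output_path):
    """Sketch of an ETL pass: ingest CSV, cleanse/validate, write Parquet."""
    # Imported inside the function so the validation rule above can be
    # exercised without a Spark installation.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Ingestion: read raw files into a DataFrame.
    df = spark.read.option("header", True).csv(input_path)

    # Transformation, cleansing, and validation: cast types, drop invalid
    # rows, and de-duplicate on the (hypothetical) business key.
    cleansed = (
        df.withColumn("amount", F.col("amount").cast("double"))
          .filter(F.col("customer_id").isNotNull() & (F.col("amount") >= 0))
          .dropDuplicates(["customer_id", "event_date"])
    )

    # Load: write the curated dataset to the data lake in Parquet format.
    cleansed.write.mode("overwrite").parquet(output_path)
    spark.stop()
```

In a production setting on CDP, a job like this would typically be parameterized and scheduled through an orchestrator such as Oozie or Airflow, as described above.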

Qualifications

Education and Experience

  • Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
  • 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.

Technical Skills
  • PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
  • Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
  • Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
  • Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
  • Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
  • Scripting and Automation: Strong scripting skills in Linux.

Job Classification

Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time

Contact Details:

Company: Valuelabs
Location(s): Bengaluru



Keyskills: PySpark, Automation, Linux, Cloudera, Hadoop, Big Data Technologies, Data Warehousing, Python, SQL, HBase


₹ Not Disclosed

Similar positions

Data Architect

  • Accenture
  • 5 - 10 years
  • Bengaluru
  • 2 days ago
₹ Not Disclosed

Data Analyst

  • eClerx
  • 5 - 10 years
  • Pune
  • 2 days ago
₹ Not Disclosed

Excellent Opportunity- Teradata Developer - Bteq Sql - Any Exl

  • EXL
  • 4 - 8 years
  • Pune
  • 2 days ago
₹ Not Disclosed

Azure Databricks - 23rd April - Virtual Interview

  • Tata Consultancy
  • 4 - 9 years
  • Hyderabad
  • 2 days ago
₹ Not Disclosed

Valuelabs

For more information, call or WhatsApp: 99749 35572 (HR: Krushali)