Designation / Job title: Developer Programmer - 261312 / Developer (Other)
Primary or Mandatory skills:
• Big Data
• Cloudera Hadoop
• Spark, Scala
• Databricks
• Azure
Good to have skills:
Data lake, Azure, telecom domain knowledge.
Detailed Job description:
• Minimum 11 years of experience in Enterprise Data Warehouse solutioning, with exposure to Big Data technology stacks such as Cloudera, HBase, Hive, Impala, Kafka, and Spark, and to analytics tools such as Python and R
• In-depth knowledge of Teradata utilities and macros
• Experience with data lakes and Azure
• Strong SQL and analytical skills
• Knowledge of Unix systems
• Good to have: knowledge of Control-M scheduling
• Participate in business requirement gathering, requirements analysis, design, solution walkthroughs, and workshops; identify gaps between the solution and the business requirements together with the business and IT teams.
• Create detailed technical design documents based on the requirements and the high-level solution design.
• Basic knowledge of Spark with Scala or Python, and the ability to optimize the performance of Spark applications on the big data platform (see the tuning sketch after this list).
• Strong analytical mindset and the ability to work independently in a fast-paced, quickly changing environment.
• Work on and continuously improve the DevOps pipeline and tooling to actively manage continuous integration/continuous deployment (CI/CD) processes.
• Good to have: experience with any ETL tool.
• Experience with Teradata, Informatica, Linux, Hadoop, Spark, and Control-M.
• Experience working in an Agile delivery model
• Prepare implementation plans, reports, manuals, and other documentation on the status, operation, and maintenance of data warehousing applications.
• Design, develop, and unit test data ingestion and transformation pipelines using big data technologies, Hadoop, and Databricks (a minimal ingestion sketch follows below).
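
For the Spark tuning bullet above, a minimal Scala sketch follows. It is illustrative only: the paths, column names, and shuffle-partition count are hypothetical placeholders, not part of this role's actual environment. It shows two common levers for optimizing a Spark application: filtering early and caching a reused subset, and sizing spark.sql.shuffle.partitions to the workload.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object UsageAggregation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("usage-aggregation")
      // The default of 200 shuffle partitions is rarely right; size it
      // to the cluster and data volume (64 here is a placeholder).
      .config("spark.sql.shuffle.partitions", "64")
      .getOrCreate()

    // Hypothetical Parquet input; substitute the real data lake path.
    val usage = spark.read.parquet("/data/raw/usage_events")

    // Filter before the shuffle so less data moves; caching pays off
    // when the subset feeds more than one downstream action.
    val recent = usage.filter(col("event_date") >= "2024-01-01").cache()

    recent.groupBy("customer_id").count()
      .write.mode("overwrite").parquet("/data/curated/usage_per_customer")

    spark.stop()
  }
}
```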
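
For the ingestion and unit-testing bullet, here is a minimal Databricks-style sketch, again with hypothetical table, path, and column names (CDR stands in for a telecom-flavoured dataset). The design point is to keep the transformation a pure DataFrame-in, DataFrame-out function so it can be unit tested against a local SparkSession without touching storage; the Delta write is the only Databricks-specific piece.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.{col, to_date, trim}

object CdrIngestion {
  // Pure transformation: easy to cover with a unit test that builds a
  // small DataFrame in memory and asserts on the result.
  def transform(raw: DataFrame): DataFrame =
    raw
      .withColumn("msisdn", trim(col("msisdn")))             // normalise identifiers
      .withColumn("call_date", to_date(col("call_date"), "yyyy-MM-dd"))
      .filter(col("duration_sec").cast("int") >= 0)          // drop malformed rows

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("cdr-ingestion").getOrCreate()

    // Hypothetical landing zone; CSV columns arrive as strings, hence the casts above.
    val raw = spark.read.option("header", "true").csv("/mnt/landing/cdr/")

    // On Databricks the curated layer would typically be a Delta table.
    transform(raw).write.format("delta").mode("append").saveAsTable("curated.cdr_events")

    spark.stop()
  }
}
```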