Designation / Job title : 261312-Developer Programmer
Primary or Mandatory skills:
• Data warehousing
• Azure Databricks
• Big Data Platform - Cloudera
• Teradata Tools and utilities
• Linux
Good-to-have skills:
Azure Databricks, Telecom Domain Knowledge, DBMS
Detailed Job description:
Minimum 5 years of experience in Enterprise Data Warehousing, with exposure to big data platforms and tools such as Hive, Impala, Scala, Spark, Oracle, Teradata, SSMS, Apache Airflow, Azure Databricks, ADO, and CI/CD pipelines.
• In-depth knowledge of Teradata utilities and macros.
• Hands-on experience with Azure Databricks, ADO, and Pelican.
• Strong SQL and analytical skills.
• Knowledge of Unix/Linux systems.
• Good to have: knowledge of Control-M and Airflow scheduling.
• Involved in requirement planning and analysis, design, solution walkthroughs, and workshops; identify gaps between the solution and business requirements together with business and IT teams.
• Good knowledge of Hive, Impala, and Scala on big data platforms.
• Strong analytical mindset and the ability to work independently in a fast-paced, rapidly changing environment.
• Work on and continuously improve the DevOps pipeline and tooling to actively manage the continuous integration/continuous deployment processes.
• Good to have: experience with any ETL tool.
• Experience with Teradata, Informatica, Linux, Hadoop, Spark, Control-M, SSMS, Oracle, Azure Databricks, and Apache Airflow.
• Experience working in an Agile delivery model.
• Prepare implementation plans, reports, manuals, and other documentation on the status, operation, and maintenance of data warehousing applications.