• Implementation experience with big data platforms, preferably the Cloudera Hadoop platform
• Minimum 2 years of development experience using Hadoop ecosystem tools and utilities: MapReduce, Spark, Kafka, Sqoop, Impala, Hive, etc.
• Ability to work independently and contribute to overall architecture and design
• Experience writing shell scripts on the Linux platform
• Knowledge of API management concepts and design
• Experience developing Apache Spark applications; comfortable developing in Python (preferred)
• Experience debugging and performance tuning Spark applications
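For candidates reviewing the MapReduce requirement above, the model it names can be illustrated with a minimal word-count sketch in plain Python (a hypothetical illustration of the map and reduce phases, not an actual Hadoop or Spark job):

```python
from collections import defaultdict

def map_phase(line):
    # Map step: emit a (word, 1) pair for every word in the input line.
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Reduce step: sum the counts emitted for each distinct word.
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

lines = ["big data big platform", "data pipeline"]
pairs = [pair for line in lines for pair in map_phase(line)]
result = reduce_phase(pairs)
# result == {"big": 2, "data": 2, "platform": 1, "pipeline": 1}
```

In a real Hadoop or Spark deployment the framework distributes the map and reduce phases across the cluster and handles the shuffle between them; the sketch only shows the per-record logic a developer writes.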