Data Engineer
Opportunity
Stable and secure permanent full-time role
Competitive salary and benefits
Flexibility and access to professional development
Immediate start
$70k – $85k p.a. plus superannuation
About Us
We are an IT consulting and services company based in Victoria, focusing on data analysis, data visualisation, big data and cloud services.
Role Description
We have an opportunity for a Data Engineer to work on a large, greenfield data project in an enterprise environment. You will work closely with the business and other stakeholders as you develop a business-critical analytics and data platform.
Key Responsibilities:
As a Data Engineer, you will:
Build data lakes, data pipelines, big data solutions and data warehousing solutions.
Develop and deploy key cloud based infrastructure and maintain data pipelines.
Apply excellent working knowledge of Python and SQL.
Apply excellent working knowledge of AWS Redshift.
Use Docker and Kubernetes.
Use AWS services, including any or all of EC2, S3, IAM, EKS and RDS.
Apply a good understanding of Data Warehousing/ETL concepts.
Use CI/CD tools.
Translate business requirements into efficient technical design.
Collaborate with cross-functional teams and work with different stakeholders.
Work with the Lead Data Engineer to manage and maintain a global data warehouse environment/ETL framework and database integrations
Problem analysis, process improvements and efficient issue resolution
Design and implement technology best practices, guidelines, and repeatable operational processes.
Skills and Experience:
To meet the challenges of this role, you will ideally hold a bachelor's degree in Computer Science, Engineering, or a related field, and possess the following skills and experience:
5+ years of experience building cloud data infrastructure that scales easily with traffic
Experience with cloud platforms such as AWS, Azure and GCP
Experience with DevOps / Continuous Integration frameworks with configuration management tools such as Puppet (Chef, Salt, Docker also relevant).
Experienced with automation and orchestration concepts/skills.
Good knowledge of data management, data governance, and data storage principles.
Proficiency in PySpark for data processing and analytics.
Ability to work closely with functional teams to design and implement a Customer Data Platform (CDP) in the cloud.
Ensure data solutions adhere to best practices in security, scalability, and performance.
Develop, manage, and maintain ETL frameworks, data modelling and analysis using Kimball dimensional modelling methodology
Ability to work both individually and collaboratively and enjoy taking ownership of a project
Infrastructure as Code: Terraform, ARM Templates, CloudFormation, AWS CDK, Serverless Framework and Pulumi
Build / Release Tools: GitHub, Azure DevOps, Bitbucket and TeamCity
Containers: Docker, Kubernetes
Project management capabilities (Scrum, Agile and Waterfall) and good communication skills.
Job Type: Full-time, Permanent
Pay: $70,000.00 – $85,000.00 per year
Benefits:
Professional development assistance
Work from home
Schedule:
8 hour shift
Monday to Friday
Supplementary Pay:
Overtime pay
Education:
Bachelor Degree (Preferred)
Experience:
Data Engineer: 5 years (Preferred)
Ability to commute/relocate:
Melbourne VIC: Reliably commute or planning to relocate before starting work (Preferred)
Work Authorisation:
Australia (Preferred)
Date Posted: 15 Oct 2024
Work Location: Hybrid remote in Alfredton, VIC 3350