Big Data Engineer- Financial Services- Melbourne CBD
We are seeking a Big Data Engineer to join a fast-growing financial services firm based in the heart of the city. The ideal Big Data Engineer will join a small team with big plans, which enjoys a flat hierarchy where all ideas are heard. The benefits to the candidate include gaining exposure to different financial services companies and expanding your career progression.
The Data Engineer will be comfortable preparing big data infrastructure to be analysed by Data Scientists. The ideal Data Engineer will design, build and integrate data from various sources, and manage big data.
The Data Engineer will be experienced in running ETL on top of big datasets and creating big data warehouses that can be used for reporting or analysis by data scientists. You will also be comfortable using different data services on the cloud (ideally from GCP). An amazing role with the chance to learn many new things.
Requirements of the Data Engineer:
- Python, Hadoop, Hive, Scala, MongoDB, HBase, Cassandra
- 2+ Years Data Engineer exposure
- 3+ years with big data stream processing frameworks such as Kafka Streams, KSQL, Spark Streaming, Storm, Flume, etc.
- 4+ years' experience with the following technologies: Java, Spring Framework, Kafka, Spring Boot, NoSQL, AWS.
- Some experience with ETL tools such as Informatica/Talend advantageous.
- Research-based mindset
- The ideal Data Engineer will have a clear route of progression from the start and will be a key decision maker for the company.
Please contact Paulina for any further questions.
To find out more about Progressive Recruitment please visit our website.
Award Winner for:
Best Medium Recruitment Company of the Year by Recruitment International 2018
Training & Development Initiative of the Year by Recruitment International 2018