From an Australian-born company to a rapidly growing global business, we’re on the ride of a lifetime! We’re on a mission to be the world’s most loved way to pay.
We love connecting our customers with the brands they love and empowering them to spend responsibly on what they want. We’re all about building a high-performing team, where people come to work to be the best they can be. We are grounded in reality and work together to achieve the extraordinary.
It’s a fast-paced business and that’s the way we love it. We know that world-class talent is the only way to pave our future success, so come and work with some of the brightest minds and be part of the once-in-a-lifetime ride.
About the Opportunity
Afterpay is looking for a Mid-weight Data Engineer to be part of the Global Data Engineering and Platforms (GDP) team. The purpose of the GDP team is to design, develop, and maintain a scalable and easy-to-use Data Platform that helps Afterpay on its journey to become the world's most-loved way to pay. This is a great opportunity for someone with a DevOps or software engineering background who is experienced in cloud infrastructure and wants to build their career within a fast-growing, global company.
What you’ll be doing:
Procuring, processing, and providing data from myriad sources within the GDP
Writing secure, clear, well-structured, and performant code
Keeping up-to-date with advances in the Data space and sharing learnings with the team
Participating in after-hours and weekend support rotations
About you:
You love data engineering and get excited talking about data, analytics, and AI/ML
You love being aboard a rocketship with a high-performing team
You put customers first and embrace change as their needs evolve
You are a team player, which means being respectful, willing to co-create, and trusting your team
You believe in delivering useful solutions quickly and iteratively; we don’t care for perfection that takes months to show value
What you’ll bring:
Data engineering skills (SQL, Python, ETL, data pipelines, etc.)
Core data concepts such as data lakes, data warehousing, and ingestion patterns
Experience with cloud infrastructure (AWS, Azure, or GCP)
Experience with Git, Jenkins, and Jira
Bonus points for:
A background in distributed data processing
Spark experience (we use Python and Spark for big data processing)
Experience with Apache Airflow, Luigi, or Azkaban
A software engineering or DevOps background
An understanding of AI/ML fundamentals