We’re working with a fast-moving company that is investing heavily in its data platform and looking to strengthen its Data Engineering team.
This role focuses on building and improving data infrastructure, working closely with analysts, and helping shape a scalable foundation for future AI use cases.
What you’ll do
- Design, build, and maintain data pipelines across multiple data sources
- Work with analysts to translate data needs into technical solutions
- Develop and maintain data warehouse and lakehouse environments
- Improve pipeline performance, scalability, and reliability
- Ensure data quality through validation and monitoring processes
- Apply data security and privacy best practices
- Support data infrastructure as part of an on-call rotation
- Stay up to date with new data and AI technologies
Tech stack
- Python, SQL, Linux
- Airflow, dbt, PySpark, Talend
- Oracle, PostgreSQL, MariaDB
- AWS S3 and modern data lake technologies
- Docker, GitLab, VS Code
- Power BI / SAP BusinessObjects
What we’re looking for
- 2+ years of experience in Data Engineering or a similar role
- Strong SQL and Python skills
- Experience with data warehouses and data lakes
- Familiarity with cloud environments (AWS preferred)
- Understanding of data modelling and ETL/ELT processes
- Comfortable working in Linux environments
- Good problem-solving skills and ability to work independently
- Collaborative mindset and clear communication
Nice to have
- Experience with tools like Dremio, Snowflake, or Databricks
- Exposure to Kubernetes or containerised environments
- Background in telecom or database administration
To apply for this job, email your details to info@softwarejobs.io.
