**About the job: Senior Backend Data Engineer**
Join a dynamic team as a Python backend/data developer, contributing to an AI-powered analytics and data platform aimed at reducing the carbon footprint of industrial manufacturing. You will focus on the backend data pipeline and analytics stack, including tasks like implementing a scalable backend layer and continuous deployment. You will help shape the overall architecture, collaborate with cross-functional teams, and provide hands-on support during customer implementations.
**Profile**:
- You thrive in an early-stage startup environment.
- You work independently, self-directing your work to align with a broader technical and product vision, and are comfortable adapting to change.
- You are biased toward action, finding solutions that work for today's problems while keeping an eye on the future; you recognize that perfect is the enemy of good enough and are comfortable making tradeoffs to achieve velocity while managing technical debt.
- You can translate requirements into technical design and make technology recommendations. You are a strong communicator, engaging early and often with teammates and leaders to share and iterate on designs, building consensus, and adapting to feedback.
**Qualifications**:
- 10+ years of software engineering experience, with at least three years of generalist Python development experience
- Experience with building and operating Python-based backends and/or data pipelines
- Familiarity and/or experience with Python data/analytics libraries like Pandas and/or NumPy and data pipeline technologies like Parquet files and Apache Spark
- Computer science background, with practical working knowledge of concepts like algorithmic complexity and shared data structures like graphs and queues
- Experience contributing to open source projects, especially in the data / data processing space (this is the single best qualification for the profile we seek: a successful open source contributor is self-motivated and has a proven ability to design and communicate in the context of a larger effort)
- Experience with data-focused API design
- Experience with distributed data pipeline processing in AWS
- Experience setting up scalable development processes for data and analytics-based systems (CI/CD, testing, etc.)
- Knowledge of Docker-based pipeline deployment
- Experience with data lifecycle and lineage
- Experience with data processing orchestration
**Bonus qualifications**:
- SQL and RDBMS experience (Postgres, MySQL, etc.)
- GitHub Actions
- CloudFormation, Terraform, or another declarative IaC language
- Kubernetes
- AWS Kinesis and Glue for cloud-scale data pipelines
- Monitoring and alerting frameworks
- DSL design