We’re not just building better tech. We’re rewriting how data moves and what the world can do with it. With Confluent, data doesn’t sit still. Our platform puts information in motion, streaming in near real-time so companies can react faster, build smarter, and deliver experiences as dynamic as the world around them.
It takes a certain kind of person to join this team: someone who asks hard questions, gives honest feedback, and shows up for others. No egos, no solo acts. Just smart, curious humans pushing toward something bigger, together.
One Confluent. One Team. One Data Streaming Platform.
Confluent Cloud, based on Apache Kafka, is the leading cloud-native platform-as-a-service for streaming data infrastructure, but this is just the beginning. We are building a PaaS that enables customers around the globe to deliver streaming applications.
Design, build, and evolve internal infrastructure services written in Go, often as Kubernetes operators, that power the core platform behind Confluent Cloud
Own the systems that make cloud infrastructure secure, scalable, observable, and reliable, using GitOps, Terraform, Prometheus, Grafana, and a strong foundation in Linux, networking, and public cloud
Collaborate with engineers across Confluent to enable fast, safe, and autonomous deployment of services through shared platform tooling and best practices
Take shared responsibility for the full lifecycle of our infrastructure: availability, performance, monitoring, incident response, and capacity planning
Work on systems at scale, across tens of thousands of instances and multiple regions, with a focus on smooth, fast, and safe operations
Influence the architecture and operational strategy behind the critical infrastructure that supports all of Confluent’s cloud services
Participate in a 12-hour, follow-the-sun on-call rotation.
Strong programming skills, with a focus on reading, debugging, and evolving existing code in languages like Go, Java, or Python
Solid understanding of systems internals, including filesystems, memory management, network stacks, and kernel behavior
Hands-on experience running Kubernetes in production, and a deep understanding of containers and modern cloud-native workflows
Demonstrated experience with at least one major public cloud provider such as AWS, GCP, or Azure
A strong bias for automation and reproducibility; comfortable working with Kubernetes, GitOps, Terraform, CI/CD pipelines, and observability tools
Confidence in diagnosing complex systems, handling incidents, and driving continuous improvements to reliability
A genuine interest in how large-scale systems behave in the real world, and a drive to make them better every day
Exceptional teamwork and collaboration skills, with the ability to work both independently and as part of a globally distributed team
Solid written and verbal communication skills
Experience working on developer productivity tooling
Experience with managed Kubernetes distributions such as EKS, GKE, or AKS
Belonging isn’t a perk here. It’s the baseline. We work across time zones and backgrounds, knowing the best ideas come from different perspectives. And we make space for everyone to lead, grow, and challenge what’s possible.
We’re proud to be an equal opportunity workplace. Employment decisions are based on job-related criteria, without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by law.