Senior Technical Program Manager
Cerebras
Cerebras Systems builds the world's largest AI chip, 56 times larger than the largest GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute capacity, transforming key workloads with ultra-high-speed inference.
Thanks to its groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude speedup is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
About The Role
At Cerebras, we’re redefining the future of AI compute. We’re looking for a Senior Technical Program Manager to lead complex, cross-functional programs across our AI training and inference platforms.
This role sits at the intersection of engineering, product, and infrastructure. You’ll partner closely with technical teams to bring clarity to ambiguous problems, align stakeholders, and drive execution across some of the most challenging AI systems in the world.
We prefer candidates based in Silicon Valley who can work closely with our engineering teams in real time, but strong remote candidates are welcome.
Responsibilities
- Lead Complex Programs: Own delivery of large, cross-functional initiatives across Cerebras’ AI training and inference platforms.
- Align Stakeholders: Partner with engineering, product, hardware, and infrastructure teams to define scope, priorities, and timelines.
- Plan & Execute: Turn ambiguous goals into clear roadmaps, milestones, and measurable outcomes.
- Engage Technically: Participate in system design and architecture discussions to drive informed tradeoffs.
- Manage Risks & Dependencies: Surface blockers early, coordinate across teams, and resolve escalations quickly.
- Improve Execution: Establish strong planning cadences, tracking, and metrics to increase predictability and velocity.
- Scale Impact: Refine processes and share best practices to help teams operate more effectively.
Skills And Qualifications
- B.S./M.S. in Computer Science, Electrical Engineering, or related technical field (or equivalent experience)
- 7–10+ years of experience leading complex technical programs or product delivery
- Candidates with additional experience may be considered for Staff-level placement
- Experience delivering complex technical programs across software platforms, distributed systems, or infrastructure
- Strong technical understanding of AI/ML systems, including training and inference workflows
- Prior product or platform experience building AI training, inference, or large-scale compute systems strongly preferred
- Comfortable working closely with engineers and engaging in technical tradeoff discussions
- Excellent prioritization, risk management, and cross-functional alignment skills
- Ability to operate independently in fast-paced, ambiguous environments
- Experience collaborating with distributed or remote teams
Preferred Skills and Qualifications
- Experience productizing AI/ML platform, infrastructure, or distributed systems (training, serving, compilers, runtimes, or large-scale compute) into reliable, customer-facing products
- Familiarity with hardware–software co-design or accelerator-based systems
- Experience driving 0→1 or ambiguous platform initiatives
- Strong grasp of performance, scaling, and reliability metrics
- Exposure to customer-facing or field deployments
- Experience mentoring TPMs and improving planning/execution processes
- Comfortable working across distributed, cross-functional teams
Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Thrive in a simple, non-corporate work culture that respects individual beliefs.
Read our blog: Five Reasons to Join Cerebras in 2026.
Apply today and join us at the forefront of groundbreaking advancements in AI!
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.