Build the cloud
AI runs on.
Jarvis Labs serves builders in 65+ countries and has spent 6+ years making GPU compute simple, fast, and reliable. We are hiring serious builders for inference, training, forward-deployed engineering, and AI-native GTM.
// 01 — WHAT WE BUILD
The platform behind serious AI workloads.
This is not just a GPU rental interface. The work is making hard compute feel simple, reliable, and useful for people building across the world.
GPU containers and VMs
Fast compute that stays simple — from the first launch to repeated production runs.
Inference products
Serving paths for open and custom models across runtimes, GPUs, latency targets, and cost constraints.
Training clusters
Kubernetes, Slurm, Ray, multi-node workloads, observability, scheduling, and reliability.
Forward-deployed engineering
Stay close to real users, debug what breaks, and turn repeated pain into product direction.
AI-native GTM systems
Code, data, automation, AI tools, and customer understanding for a small technical team.
// 02 — WHO WE WANT
Evidence > Pedigree.
We care less about years of experience and more about proof that you have taken on hard work and made it real.
If you have built production systems, contributed deeply to open source, published useful research, solved hard ML problems, debugged difficult product issues, or built something impressive independently — we want to hear from you.
// 03 — HOW WE WORK
Small team. Flat hierarchy. Real ownership.
A short charter — what we expect of ourselves and what we expect of teammates.
- 01
Own hard problems end to end.
- 02
Use AI tools responsibly, but own the judgment.
- 03
Reliability, docs, and observability are part of the product.
- 04
Customer reality beats internal assumptions.
- 05
Build products that serve serious users around the world.
// 04 — OPEN ROLES
Pick the problem you want to own.
The titles matter less than the ownership. If the exact title is not right but the work is, use the open track.
Group
Inference
2 open roles
Senior Inference Platform Engineer
Build and evolve Jarvis Labs inference-as-a-service for open and custom AI models. Own performance, reliability, runtime integrations, benchmarks, APIs, and production behavior.
Inference Forward Deployed AI Engineer
Work with customers deploying real inference workloads. Help them choose runtimes, GPUs, precision, quantization, scaling patterns, and production deployment paths.
Group
Training
2 open roles
Senior Training Platform Engineer
Build the platform layer for serious training workloads: Kubernetes, Slurm, Ray, multi-node jobs, observability, scheduling, reliability, and customer workflows.
Training Forward Deployed AI Engineer
Help customers run distributed training workloads successfully. Understand their models, data, clusters, and failure modes, and turn repeated problems into better product direction.
Group
GTM Systems
1 open role
// 05 — HOW TO APPLY
Do not send only a CV.
Write to build@jarvislabs.ai. We read personal notes and proof of work. CV-only applications are not reviewed.
$ mail build@jarvislabs.ai < your-note.md
# include in your-note.md:
- 01
Why Jarvis Labs?
- 02
Which problem area interests you?
- 03
What can you contribute in the first 90 days?
- 04
2 to 3 proof links.
- 05
Attach a CV if useful, but never CV-only.
- 06
If there is a better way to show your work, use that.
// 06 — PROCESS
High-signal, respectful of your time.
Four steps. No LeetCode. No surprise panels. Real conversation about real work.
- 01
Personal note review
We read every personal note. Not every CV.
- 02
Technical conversation
Trade-offs, signal, scope. Real questions, real answers.
- 03
Problem deep dive
Working session on a real problem. Show how you think.
- 04
Founder + team conversation
Mutual fit, in both directions.
// 07 — WHERE WE WORK
Mostly in person. Sometimes remote.
We prefer people who can spend meaningful time with the team, but we are open to remote for exceptional candidates.
// 08 — ORIGIN
Built by staying close to hard problems.
Jarvis Labs started from a simple pain: serious GPU access was harder than it should have been, especially for builders outside the usual centers of cloud power.