What can you do with Jarvislabs.ai?

  • Launch a GPU/CPU powered instance.
  • Pause the instance.
  • Resume the instance.
  • Resume it with a different GPU type, or with more GPUs or storage.
  • SSH to an instance.
  • Run from your own Docker image.
  • Deploy your model using Flask/FastAPI (a minimal sketch follows this list).

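As a quick illustration of the last item, here is a minimal FastAPI sketch for serving a model from an instance. The route name, request schema, and the uvicorn command are placeholders for illustration, not a Jarvislabs-specific API:

```python
# minimal_app.py - a minimal sketch of serving a model with FastAPI.
# The request schema and prediction logic are placeholders; swap in your own model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    text: str

@app.post("/predict")
def predict(req: PredictRequest):
    # Replace this stub with a call to your loaded model.
    return {"prediction": f"echo: {req.text}"}

# Run on the instance with:
#   uvicorn minimal_app:app --host 0.0.0.0 --port 8000
```

Once it is running, you can send POST requests to the /predict endpoint from the instance, or from outside if the port is exposed.
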
Pick a modern Nvidia GPU to power your deep learning

We support the latest GPUs based on the Nvidia Ampere architecture (2022), which are usually 1.3x to 3x faster than GPUs based on the Turing architecture. The A100 is the fastest, followed by the A6000 and A5000.

| GPU | vCPUs | RAM | Reserved | Spot | Weekly | Monthly |
|---|---|---|---|---|---|---|
| A100 - 40GB | 7 | 32GB | $2.39/hr | $0.99/hr | $360/week | $1200/month |
| A6000 - 48GB | 7 | 32GB | $1.79/hr | $0.79/hr | $270/week | $900/month |
| A5000 - 24GB | 7 | 32GB | $1.29/hr | $0.59/hr | $195/week | $650/month |

You can also choose GPUs based on the Nvidia Turing architecture (2019), which offer performance comparable to the Nvidia V100 GPU.

| GPU | vCPUs | RAM | Reserved | Spot | Weekly | Monthly |
|---|---|---|---|---|---|---|
| RTX 6000 - 24GB | 7 | 32GB | $0.99/hr | $0.39/hr | $149/week | $500/month |
| RTX 5000 - 16GB | 7 | 32GB | $0.49/hr | $0.19/hr | $75/week | $245/month |
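
To see how the billing tiers compare, here is a rough calculation using the A100 rates from the table above. It is only arithmetic on the listed prices; actual billing rules (proration, minimums) may differ:

```python
# Rough cost comparison for an A100 - 40GB, using the rates listed above.
# This is illustrative arithmetic only, not the provider's billing logic.
reserved_hourly = 2.39   # $/hr, reserved
spot_hourly = 0.99       # $/hr, spot
weekly_plan = 360        # $/week
monthly_plan = 1200      # $/month

hours_per_week = 24 * 7      # 168 hours
hours_per_month = 24 * 30    # ~720 hours, assuming a 30-day month

print(f"Reserved hourly, 1 week : ${reserved_hourly * hours_per_week:.2f}")   # ~$401.52
print(f"Weekly plan             : ${weekly_plan:.2f}")                        # $360.00
print(f"Reserved hourly, 1 month: ${reserved_hourly * hours_per_month:.2f}")  # ~$1720.80
print(f"Monthly plan            : ${monthly_plan:.2f}")                       # $1200.00
```

In short, the weekly and monthly plans are cheaper than running around the clock at the reserved hourly rate, while spot pricing is the lowest-cost option if interruptions are acceptable.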

Need any assistance? Say 👋 hi to the team using the chat option available on the website, or drop us an email at hello@jarvislabs.ai.