
GPU Computing at Scale


Last updated 1 year ago

Q Blocks is designed primarily to provide scalable and highly affordable access to GPU computing power for AI/ML workloads.

We are not a cloud platform offering hundreds of services such as VPCs or storage. Today we focus on compute services, specialised for GPU workloads.

Unlike a traditional cloud platform, with its large, centralised data centers and prohibitively expensive infrastructure, Q Blocks developed an alternative approach to accessing GPU power: partnering with thousands of GPU server owners across the world.

We call these partners hosts, and they come in two types:

  1. Small facilities with 20-100 GPU servers

  2. Tier 2/3 data centers with 1,000+ GPU servers

This approach helps us offer three fundamental value additions for AI/ML businesses:

  1. Scalable access to GPU servers with high availability

  2. A wide choice of GPU types

  3. Significantly lower cost than traditional clouds

On our platform you can choose from a wide range of GPU options:

A screenshot of the Q Blocks GPU instance launch dashboard

GPUs ranging from 8 GB to 80 GB of VRAM are available.

A direct price comparison for one of the most widely used GPU servers, the Tesla V100 16GB, shows an immediate ~50% cost reduction on Q Blocks compared to AWS:

| Parameter | AWS | Q Blocks |
| --- | --- | --- |
| Instance Type | p3.2xlarge - Tesla V100 | QB16-v1-n1 - Tesla V100 |
| Cost | $3.06/hr | $1.50/hr |
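As a rough illustration, the hourly rates above can be projected into monthly figures. This is a minimal sketch, not a billing calculation; the 24/7 utilisation and 30-day month are assumptions for illustration only:

```python
# Hypothetical monthly cost projection for a single Tesla V100 16GB instance,
# using the hourly rates listed in the table above.
# Assumes the instance runs 24/7 for a 30-day month (illustrative only).
HOURS_PER_MONTH = 24 * 30  # 720 hours

aws_hourly = 3.06       # AWS p3.2xlarge, $/hr
qblocks_hourly = 1.50   # Q Blocks QB16-v1-n1, $/hr

aws_monthly = aws_hourly * HOURS_PER_MONTH
qblocks_monthly = qblocks_hourly * HOURS_PER_MONTH
savings_pct = (1 - qblocks_hourly / aws_hourly) * 100

print(f"AWS:      ${aws_monthly:,.2f}/month")
print(f"Q Blocks: ${qblocks_monthly:,.2f}/month")
print(f"Savings:  {savings_pct:.0f}%")
```

At these rates the saving works out to roughly half the AWS bill for an always-on instance.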

In addition:

  1. We don't charge for egress bandwidth.

  2. We offer technical support at no additional cost.

With our data center nodes, we support Tier 2-grade uptime and reliability for GPU instances.
