🌐 GPU Computing at Scale

Q Blocks is designed primarily to provide scalable, affordable access to GPU computing power for AI/ML workloads.

We are not a cloud platform that offers hundreds of services such as VPCs or storage. Today we focus on computing services, specialised for GPU compute.

Unlike a traditional cloud platform, with its centralised, large data centers and prohibitively expensive infrastructure, Q Blocks takes an alternative approach to providing GPU power: partnering with thousands of GPU server owners across the world.

We call these partners hosts, and they fall into two categories:

  1. Small facilities with 20-100 GPU servers

  2. Tier 2-3 data centers with 1000+ GPU servers

This approach helps us offer 3 fundamental value additions for AI/ML businesses:

  1. Scalable access to GPU servers with high availability

  2. A wide choice of GPU types

  3. Significantly lower cost (up to 50% less than traditional cloud providers)

On our platform we offer a wide range of GPU options to choose from:

A screenshot of Q Blocks GPU instance launch dashboard

A large variety of GPUs is available, from 8GB to 80GB of VRAM.

A direct price comparison for one of the most widely used GPU servers, the Tesla V100 16GB, shows an immediate ~50% cost reduction on Q Blocks versus AWS:

| Parameter | AWS | Q Blocks |
| --- | --- | --- |
| Instance Type | P3.2xlarge (Tesla V100) | QB16-v1-n1 (Tesla V100) |
| Cost | $3.06/hr | $1.50/hr |

  1. We don't charge for egress bandwidth.

  2. We offer technical support at no additional cost.
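To put the hourly rates from the table above into perspective, here is a minimal sketch of the savings for a single always-on Tesla V100 instance. The hourly prices come from the table; the 30-day month and continuous usage are simplifying assumptions.

```python
# Rough cost comparison for one Tesla V100 16GB instance,
# using the hourly rates from the table above.
aws_hourly = 3.06      # AWS P3.2xlarge ($/hr)
qblocks_hourly = 1.50  # Q Blocks QB16-v1-n1 ($/hr)

# Assumption: continuous usage over a 30-day month.
hours_per_month = 24 * 30

aws_monthly = aws_hourly * hours_per_month
qblocks_monthly = qblocks_hourly * hours_per_month
savings_pct = (1 - qblocks_hourly / aws_hourly) * 100

print(f"AWS:      ${aws_monthly:,.2f}/month")
print(f"Q Blocks: ${qblocks_monthly:,.2f}/month")
print(f"Savings:  {savings_pct:.0f}%")  # ≈ 51%
```

At sustained utilisation the hourly gap compounds into a difference of over a thousand dollars per month per GPU, before accounting for egress bandwidth charges.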

With the data center nodes, we support Tier 2-grade uptime and reliability for GPU instances.
