
How we use Micro Containers to ensure Workspace Privacy


One of my favorite parts of how we operate at WorqHat has always been Customer Obsession. The two biggest principles behind how we operate and build products have always been:

  1. Bringing a soul and a personality to every user interface we build (Hello, WorqKitty!)

  2. Obsessing over making sure our users have THE BEST experience when building their applications with us.

When we were building, one of our biggest concerns was how to give users private spaces with a containerized model across the Workspaces they build. Think of it as giving users a secure, closed experience without the hassle of managing infrastructure themselves. Early on, to attain the desired level of isolation, we used dedicated EC2 instances on AWS or Compute Engine instances on GCP for each and every account that was created. But soon enough we realized there were a lot of infrastructure problems. By a lot, I really mean a lot. The biggest issue was cost: provisioning a VM meant we had to charge the user for the complete VM lifecycle. There were processes in place for the virtual machines to go to sleep when not in use, but the waiting time was configured to a high value. Yes, it did allow us to meet our security goals, but with a massive, massive trade-off: deployment speed.

Every instance for a Workspace took a minimum of ~3-4 minutes to come up, and on top of that you add package installation times and network connection setup. At that time, as a very early-stage startup, we were still figuring out how people would put WorqHat to use and what they would think of the entire "Enterprise Private Workspace" model for building and hosting enterprise applications. Our plan was always to focus on delivering an amazing experience while we worked out how to optimize resources on the backend, make things more efficient, and drop costs for our users.

So until now we had to choose between containers, with fast startup times and high density, or VMs, with strong hardware-virtualization-based security and workload isolation. But there is a way out. Welcome QEMU (a general-purpose machine emulator and virtualizer, a type 2 hypervisor). QEMU enables us to deploy high-performance workloads in lightweight virtual machines, called MicroVMs, which provide enhanced security and workload isolation over traditional VMs while delivering the speed and resource efficiency of containers. In simpler terms: the same security features and capabilities of a traditional VM, but at a fraction of the cost and with faster setup times. Anything that costs us less means we can drop prices for our users as well, so it's a win-win for both of us.

Now, building on MicroVMs gives us a lot of advantages. We remove all unnecessary devices and guest functionality to reduce the device footprint and attack surface of each MicroVM. Each MicroVM runs in an individual user space on top of the Linux Kernel-based Virtual Machine (KVM). The low memory overhead allows us to serve a wide fleet of small MicroVMs without compromising speed or efficiency, and the fast startup time lets us pack thousands of MicroVMs onto the same machine. This means every function (every workflow run, container, or container group) can be encapsulated behind a virtual machine barrier, enabling workloads from different customers to run on the same machine without any trade-offs in security or efficiency.

Now, when we create a MicroVM, we configure each system to the requirements of the Workspace/Organization through a RESTful API, which lets us set the number of vCPUs, size the memory, and start the machine. With built-in rate limiters, we can granularly control the network and storage resources used by thousands of MicroVMs on the same machine, which lets us support burst-scalable operations. Each MicroVM is further isolated with common Linux user-space security barriers by a companion program called the "jailer". The jailer provides a second line of defense in case the virtualization barrier is ever compromised.
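As a sketch of what that per-Workspace configuration step can look like, here is a minimal Python helper that builds the machine-config payload for a hypothetical `PUT /machine-config` call on a MicroVM's local API socket. The endpoint and field names here are illustrative assumptions, not our production API:

```python
import json

def machine_config(vcpus: int, mem_mib: int) -> str:
    """Build the JSON body for a hypothetical PUT /machine-config
    request on a MicroVM's local API socket. Field names are
    illustrative, not a documented API."""
    cfg = {
        "vcpu_count": vcpus,      # sized per Workspace requirements
        "mem_size_mib": mem_mib,  # guest memory for this MicroVM
        "smt": False,             # keep the guest footprint minimal
    }
    return json.dumps(cfg)

# Example: a small Workspace gets 2 vCPUs and 512 MiB of memory.
print(machine_config(2, 512))
```

The same pattern extends to the rate limiters: each resource (network, storage) gets its own small JSON config, applied per MicroVM over the same local API.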


MicroVMs are the Lightning McQueens to build Containers

Here’s an overview of how it helps:

  • Security-based design: MicroVMs use KVM-based virtualization, which provides enhanced security over traditional VMs. This ensures that workloads from different end customers can run safely on the same machine, through a minimal device model that excludes all non-essential functionality.

  • I..Am…Speed: The minimal device model accelerates kernel loading, with initialization times as fast as 125 ms and up to 200 MicroVM startups per second per host.

  • Scale 📏 and efficiency: Each MicroVM runs with a memory overhead of less than 5 MiB, enabling a high density of MicroVMs to be packed onto each host while staying optimized for performance, even across hundreds of MicroVMs.
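To make the density claim concrete, here is a rough, memory-bound estimate of how many MicroVMs fit on one host. The host size and per-guest memory below are assumptions for illustration, not our production numbers:

```python
# Back-of-the-envelope density math using the <5 MiB overhead figure.
HOST_RAM_MIB = 256 * 1024  # assume a 256 GiB host
GUEST_MEM_MIB = 128        # assume a small 128 MiB guest per MicroVM
OVERHEAD_MIB = 5           # hypervisor overhead per MicroVM (from above)

per_vm_mib = GUEST_MEM_MIB + OVERHEAD_MIB
max_vms = HOST_RAM_MIB // per_vm_mib  # ceiling, ignoring host OS reserve
print(max_vms)  # → 1971, i.e. roughly two thousand MicroVMs per host
```

In practice CPU, network, and storage limits kick in well before the memory ceiling, but the overhead is small enough that it stops being the bottleneck.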


A Top View of how the entire QEMU Architecture works

It takes immense engineering to make simple operations feel simple, and it has been a delight to learn about the unique ways you've put our AI models and workflows to use to power and automate commercial applications globally. We eagerly anticipate the incredible things you'll build with WorqHat. This is a significant milestone, but it's merely the beginning. Our team looks forward to sharing more details and collaborating with you as we progress. Until then, have an extraordinary week and a fantastic 'Worq'ing' experience!

Sagnik Ghosh, Co-Founder & CEO @ WorqHat