I have a pet peeve about virtualization and security, and it happens to be a minor thing with syntax. It comes down to this question: what is the difference between virtual and virtualized, and why does it matter in the language of security?
From our friends at TheFreeDictionary:
1. Existing or resulting in essence or effect though not in actual fact, form, or name: the virtual extinction of the buffalo.
2. Existing in the mind, especially as a product of the imagination. Used in literary criticism of a text.
3. Computer Science Created, simulated, or carried on by means of a computer or computer network: virtual conversations in a chatroom.
To boil it down, something that is virtual exists as an idea, but not in reality. When a datacenter or an end-user system is virtualized, it doesn’t become ‘virtual’; it still exists, no matter how virtualized it may be. A virtual machine is still a machine, though it is abstracted from the underlying physical infrastructure. So why is there so much virtual security out there? I propose it is because much of it is exactly as it claims – virtual = vapor.
Virtualization of the datacenter encompasses a massive change in how datacenters are designed, built, and operated. The workloads, whether borne by servers or end-user systems, are being abstracted to sit atop not just a supervisor (the operating system that abstracts applications from hardware) but also a hypervisor (the operating system that abstracts the supervisors from hardware).
Virtualization has gained acceptance because it seamlessly performs this new layer of abstraction while taking advantage of deduplication. For example, if the same memory contents are used by several VMs, a single copy is kept. The same physical RAM that was dedicated to servicing a single supervisor can then pull multiple duty by deduplicating memory contents. Layered above the basic deduplication advantages are the management advantages: moving VMs from host to host, live snapshots, fault tolerance, and so on.
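The idea behind page-level memory deduplication can be sketched in a few lines. This is a simplified illustration of the concept, not how any particular hypervisor implements it (real implementations, such as transparent page sharing, work on fixed-size pages and handle copy-on-write when a shared page is modified):

```python
import hashlib

def deduplicate_pages(vm_pages):
    """Keep one copy of each unique page; map every (vm, index) to it."""
    store = {}    # content hash -> the single stored copy
    mapping = {}  # (vm_id, page_index) -> content hash
    for vm_id, pages in vm_pages.items():
        for i, page in enumerate(pages):
            h = hashlib.sha256(page).hexdigest()
            store.setdefault(h, page)  # store the page only once
            mapping[(vm_id, i)] = h
    return store, mapping

# Three VMs booted from the same image share most of their pages.
vm_pages = {
    "vm1": [b"kernel", b"libc", b"app-a"],
    "vm2": [b"kernel", b"libc", b"app-b"],
    "vm3": [b"kernel", b"libc", b"app-a"],
}
store, mapping = deduplicate_pages(vm_pages)
print(len(mapping), "logical pages backed by", len(store), "physical copies")
# -> 9 logical pages backed by 4 physical copies
```

Nine logical pages collapse into four physical copies; the same collapsing is what lets one host’s RAM service many supervisors at once.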
Why, then, is security still virtual, and not virtualized?
To answer this, we need to draw a box. The outside of the box represents the perimeter of a datacenter, or ‘slice’ therein (I’m staying away from networking terms on purpose; let’s keep this conceptual for this discussion). Inside the box are the workloads. Traditionally, physical network devices have kept the outside out, while selectively letting some things in. Endpoint security, that being primarily anti-malware, operated inside the box, happily doing lots of endpoint security things within each endpoint.
Now, consider that box as applied to a virtualized (not virtual!) datacenter. The perimeter may not have changed. In fact, in large environments, maintaining hefty and very physical security devices at the edge is likely to be the norm for years to come. Perimeters within the datacenter may take advantage of virtualization via virtualized versions of perimeter devices, an example being virtual appliances that run IDS/IPS. But edge-worthy throughput negotiated through a supervisor that is, in turn, negotiating with a hypervisor to access hardware sounds like the game of telephone that it is. Dedicated hardware at the edge of large networks shall remain. Within a datacenter, some would call it a software-defined network; let’s agree to call it network stuff that happens within the box.
Inside the box are the workloads. The applications work within the supervisors that work within the hypervisors. Where did the endpoint security go?
From my observations, most organizations that are virtualizing server workloads don’t hesitate to virtualize the endpoint security along with the workload. On some levels it makes sense; whilst virtualizing the application and the operating system within which the application resides, the endpoint security should be along for the ride. Unfortunately, this has led to a lot of ‘virtual’ endpoint security.
As organizations have moved from piloting server virtualization, through embracing the vision of a virtualized datacenter, and on to private cloud and end-user system virtualization (VDI), they begin to notice significant problems. Traditional endpoint security does not become ‘virtualized security’ simply because the endpoint that the security runs within is virtualized. Traditional endpoint security also cannot become virtualization-specific security via addition of work-around features; virtual security strikes again!
The problem comes down to duplication. A traditional anti-malware agent is designed to treat the operating system that it is protecting as an island. Scanning activity is done within that isolated system. Scanning engines and databases must be present and maintained (installed, updated, upgraded, etc.) within each system. We all know what impact that has – remember the last time you bought a new desktop or laptop and smiled as it booted in a fraction of the time that your old one did. Then you install anti-malware and sigh as that new-found thrill quickly fades and the boot time seems to double. After your shiny new system finally fires up, you check the resource usage and roll your eyes at the couple of hundred megabytes of memory being consumed solely to secure your system.
That simple problem of duplication leads to bigger problems, which can bring a virtualization project to a grinding halt. With virtualized servers, perhaps twenty to forty instances can be squeezed onto a particularly hefty host; with VDI, two to four hundred. That means there are twenty to four hundred anti-malware agents happily churning away. Organizations have learned to disable scheduled scans, lest every anti-malware agent grab as much computing power as it can and quickly exhaust the host resources. When the consolidation ratio (the holy grail of virtualization) gets high enough, even regular updates can create resource churn, and upgrades are even worse.
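The arithmetic is easy to sketch. Assuming roughly 200 MB of RAM per traditional agent (a hypothetical figure in line with the desktop example above), the duplicated overhead per host is:

```python
# Back-of-the-envelope: host RAM consumed by duplicated anti-malware
# agents, assuming ~200 MB per agent (a hypothetical figure).
AGENT_MB = 200

for vms in (20, 40, 200, 400):
    gb = vms * AGENT_MB / 1024
    print(f"{vms:3d} instances -> {gb:5.1f} GB of host RAM just for agents")
```

At the VDI end of the range, that is tens of gigabytes of host memory doing nothing but re-running the same security stack four hundred times.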
Organizations typically won’t see these issues when first jumping into virtualization. Pilot projects usually use over-spec’ed hardware. The reasonable assumption is that the pilot will help iron out the wrinkles, and when done, more physical systems will be migrated to the hardware. Only when the project moves past the initial stages does this hidden problem become apparent. However, if an organization starts with VDI, the problem is obvious from the start. If you’re wondering who jumps straight to VDI, consider small to medium-sized organizations. It can actually be much simpler for them to virtualize end-user systems with the help of solutions like Citrix’s VDI-in-a-Box.
To try to solve these problems, security vendors have tried a few different workarounds. One of my favorites (I’m being deeply cynical) is the randomized scheduled scan. In other words, a scan that is, well, scheduled, but actually runs at a random time within the scheduled interval, in the hope that not too many agents run their scans simultaneously. Amazingly, the same treatment is applied to updates. Take a moment to ponder that – an industry that has touted near real-time updates of protection actively hobbles that functionality and calls it a feature.
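Mechanically, the workaround amounts to adding random jitter to each agent’s start time. A sketch of the idea (the function name and the one-hour window are my own assumptions) also shows why it only dilutes the problem rather than solving it:

```python
import random

def jittered_start(window_start_min, window_len_min, rng):
    """Return a start time drawn uniformly at random within the window."""
    return window_start_min + rng.uniform(0, window_len_min)

# 400 VDI agents spreading their scans across a scheduled one-hour window:
rng = random.Random(42)
starts = sorted(jittered_start(0, 60, rng) for _ in range(400))

# Even perfectly uniform jitter still packs several scan launches into
# every minute, so the host sees sustained concurrent scanning anyway.
busiest_minute = max(sum(1 for s in starts if m <= s < m + 1)
                     for m in range(60))
print("agents starting in the busiest minute:", busiest_minute)
```

By pigeonhole alone, 400 agents in a 60-minute window means at least seven scans launch in some single minute; the duplication is smeared out, not removed.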
Other vendors have taken a more holistic approach. If duplicating a scan engine within each VM doesn’t work, use a single scan engine on a dedicated virtual appliance. The problem then becomes one of remote scanning; how can introspection of activity within a VM be achieved from a virtual appliance?
VMware has created an API and functionality for remote introspection, called vShield Endpoint. Vendors who are willing can create a virtual appliance that integrates with the vShield API. vShield handles the remote introspection by exposing file system events captured by a vShield driver embedded in VMware Tools within the protected VMs. This means that the single agent on the virtual appliance performs the scanning, deduplicating the impact and freeing up resources.
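Conceptually, the win comes from moving scanning into one place and caching verdicts by content, so a file that is identical across many VMs is scanned once. A minimal sketch of that pattern follows; the class, the event format, and the toy detection rule are all hypothetical illustrations, not the vShield API:

```python
import hashlib

class CentralScanner:
    """One scan engine serving many VMs; caches verdicts by content hash."""
    def __init__(self):
        self.verdicts = {}   # content hash -> "clean" / "infected"
        self.scans_run = 0

    def scan(self, content):
        h = hashlib.sha256(content).hexdigest()
        if h not in self.verdicts:  # each unique file is scanned only once
            self.scans_run += 1
            # Toy detection rule standing in for a real scan engine:
            self.verdicts[h] = "infected" if b"EICAR" in content else "clean"
        return self.verdicts[h]

# File-open events forwarded from three VMs; duplicate content hits the cache.
scanner = CentralScanner()
events = [("vm1", b"notepad.exe"), ("vm2", b"notepad.exe"),
          ("vm3", b"notepad.exe"), ("vm1", b"EICAR-test")]
for vm, content in events:
    scanner.scan(content)
print(scanner.scans_run, "scans for", len(events), "events")
# -> 2 scans for 4 events
```

Four events across three VMs cost two actual scans – the same deduplication that made virtualized RAM efficient, applied to security work.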
This approach works very well if the protected system is supported. Currently, that means it is limited to Windows VMs running on ESXi. Also, since the remote introspection is handled through ESXi, the virtual appliance is tied to the host (one per host, and they cannot be moved). Finally, although file system events are exposed, other areas, such as memory, processes, and registry, are not.
Other vendors have bypassed vShield and created their own remote introspection technologies. Not being tied to a particular hypervisor and the API of the virtualization vendor means that these solutions tend to be hypervisor agnostic. This is especially handy when the VMs that are being protected are running on infrastructure that doesn’t belong to the organization, namely public cloud. This approach also has the possibility of going beyond file system events to include memory and process inspection, and expanding that protection to include Linux.
If these approaches are so great, why isn’t every endpoint anti-malware company doing it?
Simply put, it’s not easy, and it’s all fairly new. Although the scanning engine on the virtual appliance is, more or less, simply a scanning engine, the architecture of the solution around that engine is new. That means doing more than tweaking scan and update schedules – it means building a new product. Vendors that have traditionally purchased innovation have had a hard time because this approach is new enough that there simply are no start-ups or small players available for acquisition. Also, the core is still an anti-malware engine – who would pitch a start-up built around creating a brand-new commodity technology? True, an anti-malware engine could be OEMed, but that makes the prospects of acquisition rather murky.
In the end, the existing endpoint anti-malware players need to come up with solutions, not workarounds, for virtualization security. As more organizations expand virtualization projects, ‘virtual security’ isn’t going to cut it. Organizations face a choice: continue with ‘virtual’ security, the product of imagination, or embrace security that is virtualized.