Comparison with Thin Client and Hosted Virtual Desktops
Three generations of system architectures: Mainframe, PCs, LivePCs
The moka5 LivePC™ technology is a result of a four-year NSF research project, called the Collective, at Stanford University.
- Thin-client computing inspired the original research project.
- We originally proposed to develop the hosted virtual desktop architecture.
- As our research in managing desktop environments matured, we realized that we could distribute the virtual machines to end-user PCs, giving rise to the concept of LivePCs.
Thin Client vs. PCs
The Collective research project was inspired by Sun Ray, a stateless thin-client architecture from Sun Laboratories that lets users resume computing from any terminal exactly where they left off, with the help of a smart card [1]. In response to the difficulty of managing and securing desktop computers, IT professionals have increasingly turned to thin-client computing so that they can centralize the management of desktops.
Thin-client computing, which includes other examples such as Citrix terminals, is a throwback to the mainframe days. It fails to take full advantage of the advances in PC computing technology. Running a data center to host all the users' computing state is costly and does not scale. The cost of a thin-client terminal is not much lower than that of a commodity PC, and the data center requires not just hardware but also floor space, cooling for a large facility, and significant operational cost.
Microsoft Windows applications were mostly written to run on "personal" computers. It may take a non-trivial amount of work to run these applications on a shared thin-client server for multiple users. It is also not easy to access users' USB peripherals and printers with thin-client technology.
Furthermore, the data center is a single point of failure, and downtime would disrupt all the users. In contrast, PCs are geographically distributed. Individuals can independently upgrade their PCs without having to compete for a shared resource.
Finally, thin clients cannot operate disconnected, and thin-client protocols are not latency tolerant, so interactive performance suffers when the latency between the server and the client is high.
| | Thin Client | Distributed PCs |
|---|---|---|
| Security | x | |
| Ease of Software Updates | x | |
| Low cost (no data center) | | x |
| PC Software Compatibility | | x |
| Distributed (no single point of failure) | | x |
| Disconnected operation | | x |
| Interactive performance over the WAN | | x |
Hosted Virtual Desktop Architecture
The Collective project was originally proposed to manage the large number of user computing states in a thin-client computing environment. Users could log on to a thin-client server but might never log off, leaving many idle sessions that take up resources on the server. The solution was to "suspend" these idle sessions and resume them as necessary. Virtual machines are a natural choice because inactive machines can be offloaded from the server. The original idea was to host these virtual desktops in the data center and let users connect to them with thin-client technology. Since then, this technology has also been developed commercially: VMware refers to this architecture as "Virtual Desktop Infrastructure", and IBM calls it the "Virtualized Hosted Client Infrastructure".
The hosted virtual desktop architecture shares many characteristics with thin-client computing. The computation is centralized, allowing the IT staff to service all the machines at once. It requires a data center, disallows disconnected operation, and suffers the performance overhead of virtualization and remote display. The IT staff still has to manage the large number of virtual machines. One advantage hosted virtual desktops have over thin clients is that they can run arbitrary PC software.
LivePCs: centralized management and local execution
In the course of our research at Stanford, we learned how to manage large numbers of virtual machines. Instead of applying standard software management and patching tools, we cast software management as a virtual disk update problem, taking full advantage of machine virtualization. Every update corresponds to publishing the next issue in a series of virtual machines, hence the term LivePC. We only need to ship the disk blocks that have changed with each update. We separate the system state from the user state; system administrators update the system state, which is shared across all the users, and users update their own user state.
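As a rough illustration of this disk-block update model (a sketch only, not moka5's actual implementation; the file names and block size are assumptions), the publisher of a LivePC only needs to identify and ship the blocks of the system disk that changed between two versions:

```python
# Illustrative sketch: find which fixed-size blocks of a virtual system disk
# changed between two published versions, so an update ships only those blocks.
# BLOCK_SIZE and the image file names are assumptions made for this example.
import hashlib

BLOCK_SIZE = 4096

def block_hashes(image_path):
    """Return one SHA-1 digest per fixed-size block of the disk image."""
    hashes = []
    with open(image_path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha1(block).hexdigest())
    return hashes

def changed_blocks(old_image, new_image):
    """Indices of blocks that differ from (or are absent in) the old system disk."""
    old = block_hashes(old_image)
    new = block_hashes(new_image)
    return [i for i, h in enumerate(new) if i >= len(old) or old[i] != h]

# Only these block indices (and their contents) would be published as the next
# "issue" of the LivePC; the user state lives on a separate disk and is untouched.
print(changed_blocks("system_v1.img", "system_v2.img"))
```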
As we solved the management problem, we realized there was no reason to centralize the computation. We simply let the virtual machines migrate to the users [2]. The system state is made available over the network, which means there is always a backup of the system state; the local disks serve only as a cache of it. We developed the LivePC Engine™, which applies techniques such as caching, demand paging, and prefetching of the system-state blocks to minimize the transfer overhead [3]. Disconnected operation is supported elegantly by keeping all the blocks in the cache.
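The sketch below shows the cache-based delivery idea in miniature (hypothetical class and parameter names, not the actual LivePC Engine): blocks are fetched from the network only when the virtual machine first touches them, a few following blocks are prefetched into the local cache, and the machine can run disconnected as long as every block it needs is already cached. A real engine would prefetch asynchronously and persist the cache on disk.

```python
# Illustrative sketch of demand paging plus prefetching of system-state blocks.
class BlockCache:
    def __init__(self, fetch_block, prefetch_window=8):
        self.fetch_block = fetch_block        # function: block index -> bytes (network read)
        self.prefetch_window = prefetch_window
        self.cache = {}                       # stands in for the local disk / USB drive cache

    def read(self, index):
        # Demand paging: fetch the block only when the VM first touches it.
        if index not in self.cache:
            self.cache[index] = self.fetch_block(index)
        # Prefetching: speculatively pull the next few blocks into the cache
        # (done synchronously here for simplicity).
        for i in range(index + 1, index + 1 + self.prefetch_window):
            if i not in self.cache:
                self.cache[i] = self.fetch_block(i)
        return self.cache[index]

    def can_run_disconnected(self, needed_blocks):
        # Disconnected operation works as long as every needed block is cached.
        return all(i in self.cache for i in needed_blocks)

# Example usage with a stand-in "network" fetch:
cache = BlockCache(fetch_block=lambda i: b"\0" * 4096)
cache.read(0)                                  # demand-fetches block 0, prefetches 1..8
print(cache.can_run_disconnected(range(9)))    # True: those blocks are now cached locally
```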
The advent of high-capacity yet tiny and cheap portable USB drives brings a new angle to this architecture. We can now carry our large cache, plus the code, with us. The USB drive is designed to be just a network accelerator, giving us quick access to a lot of information even when the computer we wish to use is disconnected. The drive holds no indispensable state, except for data written while the machine was disconnected; if the drive is lost, stolen, or forgotten, we can easily reconstitute its contents.
The LivePC model allows the effort of system management and integration to be shared across all users of the same configuration. It combines the advantage of central management found in thin clients with the local execution and cost effectiveness of distributed PCs.
The design of LivePCs is a bet on Moore's Law: the overheads of virtualization and network communication will diminish over time, whereas the cost of labor and human attention will continue to rise.
Bibliography
1. B. K. Schmidt, M. S. Lam, and J. D. Northcutt. The Interactive Performance of SLIM: A Stateless, Thin-Client Architecture. In Proc. 17th ACM Symposium on Operating Systems Principles, December 1999.
2. C. Sapuntzakis, R. Chandra, B. Pfaff, J. Chow, M. S. Lam, and M. Rosenblum. Optimizing the Migration of Virtual Computers. In Proc. 5th Symposium on Operating Systems Design and Implementation, December 2002.
3. R. Chandra, N. Zeldovich, C. P. Sapuntzakis, and M. S. Lam. The Collective: A Cache-Based System Management Architecture. In Proc. 2nd Symposium on Networked Systems Design and Implementation, May 2005.