We talk a lot about infrastructure -- the MobiledgeX offering is application infrastructure. To implement that, we run code on cloud system infrastructure and mobile operator infrastructure. All the pieces are interconnected by private or public network infrastructure. Knowing a little about modern infrastructure makes it a lot easier to understand where MobiledgeX fits in and creates value.
But what exactly is “infrastructure”? A common-sense answer is that infrastructure is all the stuff below the application and above the hardware that makes it work. Networks are a good example -- a whole bunch of hardware and software that connects things. Most infrastructure is pretty transparent when it’s working perfectly but painfully visible when it’s not. Like everything else, most modern infrastructure is software (although there are interesting hardware bits).
Viewed from 50,000 feet, pretty much all of this runs on hundreds of millions of X86 servers distributed throughout the world, interconnected locally and then globally using TCP/IP and Ethernet networking. All that complexity is made usable by the addition of software abstractions (like cloud computing). Most of the infrastructure we see and use is virtual infrastructure (virtual servers, storage and networking) carved out of this physical infrastructure.
What we want to do here is present an only slightly simplified view of how the various pieces can fit together because it will help explain how our system itself is structured, how we integrate with mobile operator infrastructure, and how what we are doing relates to other mobile operator infrastructure initiatives including Network Functions Virtualization (NFV), Mobile Edge Computing (MEC) and other 5G related initiatives.
In the following discussion, numbers refer to this Infrastructure diagram.
Raw Hardware / Hypervisors / Virtualization
Almost all cloud systems today are built on top of X86 architecture servers (1) (most commonly built using Intel microprocessors) and Ethernet networking (2). The first X86 implementation was introduced by Intel more than 40 years ago into a very different and much simpler world. Since its introduction, Intel has refined and extended the architecture to adapt to evolving needs, but the most important adaptation was the more recent introduction of hypervisor software. Here’s some explanatory context.
Twenty years ago, each X86 Windows Server based application was given its own server. Due to the design of Windows, when an application was deployed details of the hardware were embedded into the application configuration, so if the hardware failed it had to be replaced by exactly the same server configuration or the application had to be rebuilt for a new server configuration. As the number of applications grew, the complexity of a data center server infrastructure grew with the addition of new server models and configurations, complicating procurement, maintenance and operations. As servers continued to grow more powerful (“Moore’s Law” progress), most applications required only a fraction of a server, so average server hardware utilization (and return on the hardware investment) continued to fall.
These very real and growing financial and operational problems were fixed with the introduction of robust and efficient server virtualization software by VMware around 15 years ago. Virtualization enables software workloads (a “workload” is the entire software stack from application down to operating system) with nothing in common to share the same physical server hardware. The hypervisor “tricks” each workload into thinking it’s running on the specific server model and configuration it was built for, while at the same time running all these workloads on a standard and potentially different physical configuration. Virtualization enabled server consolidation, a substantial operational and cost-efficiency improvement.
When a server is virtualized, a thin layer of software -- a hypervisor -- is inserted between the software workload and the server hardware. The change is transparent and invisible to the software workload. The hypervisor detects and emulates (executes in software rather than letting the CPU execute the instruction) the X86 instructions that control the CPU and attached devices, and by doing so, the hypervisor enables multiple workloads with different operating systems to share the physical hardware. The native CPU hardware is designed to support only a single operating system at one time. In the same way, the hypervisor prevents direct control of the I/O devices attached to the server (storage drives and network adaptors). The hypervisor “virtualizes” the CPU and devices by creating an abstraction layer that enables sharing and isolation of what previously had been a hardware resource.
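The trap-and-emulate idea above can be sketched as a toy model. This is purely illustrative (a real hypervisor intercepts privileged instructions via hardware exceptions, not a Python loop); the instruction names and guest structure here are invented for the sketch. The point it shows: privileged operations trap to the hypervisor, which emulates them against per-guest virtual state, so each guest sees a private device while sharing the same "hardware."

```python
# Toy model of trap-and-emulate. Unprivileged instructions run
# "directly"; privileged ones trap to the hypervisor, which emulates
# them against per-guest virtual device state -- never the real device.

PRIVILEGED = {"OUT"}  # pretend device I/O is the only privileged op

class Guest:
    def __init__(self, name):
        self.name = name
        self.device_buffer = []   # this guest's private virtual device
        self.acc = 0              # a general-purpose register

def hypervisor_emulate(guest, op, arg):
    # The trap handler: emulate the privileged instruction in software,
    # touching only this guest's virtual device.
    if op == "OUT":
        guest.device_buffer.append(arg)

def run(guest, program):
    for op, arg in program:
        if op in PRIVILEGED:
            hypervisor_emulate(guest, op, arg)   # trap -> emulate
        elif op == "ADD":
            guest.acc += arg                     # runs "natively"

# Two guests share the "hardware" but each sees an isolated device.
g1, g2 = Guest("vm1"), Guest("vm2")
run(g1, [("ADD", 2), ("OUT", "hello from vm1")])
run(g2, [("OUT", "hello from vm2")])
print(g1.device_buffer)  # ['hello from vm1']
print(g2.device_buffer)  # ['hello from vm2']
```

Neither guest can see the other's buffer, which is the isolation property the surrounding text describes.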
In retrospect, virtualization came at just the right time. Modern CPUs have many internal processor cores (today up to 50 logically independent processors), and each core is more powerful than a typical application can use. Without virtualization, all the applications running on the CPU would have to run the same version and revision of the operating system (at least in the case of Windows Server, the dominant enterprise O/S). With virtualization that constraint is removed. Applications never get rewritten without an overwhelming reason. Without virtualization, cloud computing (being able to move most existing workloads to a shared, multi-tenant, on-demand platform) would not have been possible.
Hardware Augmentation (4)
VMware invented the hypervisor as a software technology, and its demanding engineering accuracy and performance goals made the resulting hypervisor (and competitive hypervisors) as elegant and useful as they are. Subsequently Intel (and AMD) added silicon support to speed things up (e.g., high-speed page table lookups) to make the virtualization-induced slowdown (and cost) even smaller.
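On Linux you can see whether this silicon support is present: the CPU "flags" line in /proc/cpuinfo advertises the virtualization extensions -- "vmx" for Intel VT-x, "svm" for AMD-V. The sketch below parses the file's text (passed in as a string so it can be demonstrated with a sample); on a real machine you would pass `open("/proc/cpuinfo").read()`.

```python
def hw_virt_support(cpuinfo_text):
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None,
    based on the flags line of /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "vmx"   # Intel VT-x present
            if "svm" in flags:
                return "svm"   # AMD-V present
    return None

# Sample flags line from an Intel part (abbreviated):
sample = "processor : 0\nflags : fpu vme msr sse2 vmx ept\n"
print(hw_virt_support(sample))  # vmx
```

Hypervisors such as KVM refuse to start hardware-assisted guests when these flags are absent, which is why this check is a common first troubleshooting step.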
The hyper-scale service operators (AWS, Google, Azure) are all adding purpose-engineered silicon parts (“ASICs”) to their infrastructures to make other aspects of virtualization more efficient and less dependent on host operating system software. Some of AWS’ effort has been described as part of their “Nitro Stack.”
Virtual Systems, Multitenancy (5)
Hypervisor virtualization is the means; multi-tenancy is the goal. The initial use of server virtualization was server consolidation -- running many independently created applications on the same shared server rather than giving each its own bespoke server. Before virtualization, each Windows Server application tended to get its own server. As a result, an enterprise data center would have lots of servers, each often different (sized to the application), and with overall CPU utilization of less than 10%. Using virtualization, the number of server types could be dramatically reduced (simplifying procurement, replacement, upgrade and operation) and the utilization of these shared servers increased very substantially. Server virtualization had an immediate, real return.
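The consolidation return is easy to see with back-of-envelope arithmetic. The numbers below are hypothetical (the text gives only the "less than 10% utilization" figure): 100 one-app-per-box servers each averaging 5% CPU, repacked onto shared hosts run at a target 60% utilization to leave headroom for spikes.

```python
import math

apps = 100
per_app_util = 0.05   # assumed: each app uses 5% of one server
target_util = 0.60    # assumed: utilization ceiling on shared hosts

# Total demand in "whole servers", divided by what one shared host
# can safely absorb, rounded up to whole machines.
consolidated_hosts = math.ceil(apps * per_app_util / target_util)
print(consolidated_hosts)                        # 9 hosts instead of 100
print(apps * per_app_util / consolidated_hosts)  # ~0.56 utilization each
```

Under these assumptions the fleet shrinks by roughly 10x while per-host utilization rises from 5% to over 50% -- the "immediate, real return" the paragraph describes.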
The virtualization software took the real, physical storage and networking resources in the data center and “virtualized” them, providing each software workload with what looked like a private disk or network (5) but was in fact a piece of a shared resource. To make this kind of sharing work, each workload had to be strongly isolated from the others (unable to read another’s storage or see another’s network traffic).
From this perspective, cloud computing is a straightforward generalization. Hardware virtualization enables multi-tenant sharing of infrastructure resources, augmenting the “private” resources that the workload needs to operate with new virtual resources -- storage devices that automatically tier the storage to move rarely used data to cheaper storage, virtual networks with features and capabilities not supported in many network devices.
Abstraction is probably the most important, recurring concept in application system design. Machine language programming is very detailed and complicated, so high-level programming languages (a new programming abstraction) were created. Some, like FORTRAN, were designed for a specific programming task (science and engineering). Using a disk drive directly is a challenging task -- for example, you have to deal with transient and permanent errors -- so a file system abstraction was created that focused on what was important to an application.
Today’s physical cloud infrastructure comprises (literally) hundreds of millions of servers and storage devices, interconnected with a comparable number of communication devices (switches and routers) and communications links. Viewed at that level, failure is massive and continuous. No human being could individually program something that complex to do something useful like run an ERP application.
Cloud computing is broadly useful because we keep piling abstractions on abstractions to create simplified views for specific tasks (a process that continues, e.g., the addition of “serverless” programming). At the bottom there is real hardware, as we’ve sketched above, but above that it’s layer upon layer of software built on compounding abstractions.
MobiledgeX is such an abstraction. We build on diverse physical and virtual resources to create a cloud-like system to deploy and manage applications near the edge of the Internet. Our developers, builders and mobile operators can in turn integrate those resources into the application, device or service abstractions that enable their product. And so on.
On Demand, Pay for Use Systems (8)
Server consolidation was an important improvement to data center computing, but by far the bigger contribution of virtual system technology is to enable generalized on-demand service platforms such as AWS, GCP and Azure. Multi-tenant service systems had certainly existed before, starting with the BASIC language time-sharing systems of more than 50 years ago, and more recently multi-tenant services such as the SalesForce CRM service. But what AWS offered 12 years ago went well beyond that, making entire virtual systems available on demand when needed, charged for as used, and then released when the customer finished, with the shared assets returned to the available pool. Being able to create and operate entire virtual server systems enabled any imaginable service or application to also be offered on an on-demand, pay-as-you-go basis. Smartphone application developers could now include backend services (data repositories or collaboration) that cost essentially nothing if there were few users but could scale pretty much arbitrarily as long as the costs were covered by fees (or ad revenues) from the additional users.
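The pay-as-you-go economics described above can be sketched numerically. Every figure here is hypothetical (the hourly rate, the users-per-instance capacity): the point is only the shape of the curve -- cost is near zero with few users and grows in steps as demand adds instances, so the service stays viable as long as per-user revenue exceeds per-user cost.

```python
import math

def monthly_backend_cost(users, hours=730, rate_per_instance_hour=0.05,
                         users_per_instance=1000):
    """Cost of an on-demand backend for a month (~730 hours).
    All parameters are illustrative assumptions, not real pricing."""
    # Instances are rented on demand and billed by the hour;
    # at least one instance is kept running.
    instances = max(1, math.ceil(users / users_per_instance))
    return instances * hours * rate_per_instance_hour

for users in (10, 1_000, 100_000):
    print(users, monthly_backend_cost(users))  # cost steps up with users
```

With these numbers, 10 users and 1,000 users cost the same $36.50/month (one instance either way), while 100,000 users cost $3,650 -- linear at scale, negligible at launch, with no up-front capital outlay.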
Virtual Systems Built on Virtual Systems (7)
Cloud computing is a very useful abstraction. It is enabled by server virtualization, which abstracts the hardware and lets it be shared. But cloud computing as an abstraction doesn’t stop there. It can be nested as deep as makes sense. If a SaaS offering is run on AWS, it’s clearly a different abstraction built using AWS resources. A new level of multi-tenancy appears -- there can be as many SaaS systems hosted on AWS as makes sense commercially.
We bring this up because the system that a MobiledgeX developer or builder partner sees is precisely a virtual system built on virtual systems (like almost everyone else today, we use hyper-scale cloud services as the infrastructure we build on).
MobiledgeX is a little more infrastructure, conceptually sitting between today’s cloud systems and the end users and devices. We create an abstraction that provides access to resources and integration with many mobile operators. Our abstraction provides homogenized, developer-friendly access to those resources and operates in a cloud-like fashion that enables on-demand access. Sometimes we run on bare metal, but mostly we run on other infrastructure (abstractions can be layered). Underneath it all are lots of X86 servers and Ethernet networking.