NPP Training Series – Nutanix Terminology – Part 1

I started with Nutanix about a month ago and concurrently began my Nutanix training, which entails going through the NPP (Nutanix Platform Professional) training course. I will be posting a series of blog posts on my learning track with Nutanix and will eventually tie it all together with the Microsoft stack. My track commenced about two weeks ago, and I have kept detailed notes on my learning progress. My plan is to post every few days in a multi-part series.

Background: The Nutanix solution consists of the “Nutanix Virtual Computing Platform,” which delivers enterprise compute and storage through the deployment of commodity computing hardware (called nodes) running a standard hypervisor and the Nutanix Operating System (NOS). Each server contains Intel processors, memory, solid-state drives and traditional hard drives, and when added into a cluster, the nodes aggregate their storage resources into a single storage pool. This technology is called hyper-convergence. I will get into the weeds of this in a later post.

My first training module was Nutanix Terminology. Understanding the terminology is important. There are many variants and interchangeable terms, and people often get confused between the compute layer and the hypervisor layer. Let me put some clarity around the issue; IMO, the Nutanix terminology should be a standard in the industry.

Nutanix Terminology

Basic Components of a Nutanix Cluster – Part 1

Node: The foundation unit of a Nutanix cluster. Each node runs a standard hypervisor and contains processors, memory and local storage, which is made up of both SSD and HDD storage. The term “node” is interchangeable with the term “host,” whether you are in Hyper-V or ESXi.

Block: A rack-able Nutanix unit containing up to four nodes. This is one of the great pieces of the Nutanix solution: you can fit up to four nodes in a 2U space. If you compare that to traditional solutions, four nodes with storage would take up a much larger footprint.

[Image: Example of traditional architecture versus Nutanix footprint]

Cluster: The set of Nutanix blocks and nodes that forms the Nutanix Distributed File System (NDFS). A minimum of three nodes is required to form a cluster.
[Image: Example of a Nutanix Cluster]

Storage Tiers: Storage tiers are made up of a combination of SSD and HDD drives. Nutanix has a technology called MapReduce tiering, which ensures that data is intelligently placed in the optimal storage tier – SSD or HDD – to yield the best performance. The most frequently accessed data lives in the SSD (cache) tier and is demoted to the higher-capacity HDD tier as it becomes cold. As you can see, with this type of tiering, hot data is always served from flash, which is where the performance win comes from.

[Image: Example of Storage Tiers]
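
To make the hot/cold idea concrete, here is a toy PowerShell sketch of a tiering policy. To be clear: this is purely illustrative and is nothing like Nutanix’s actual MapReduce implementation – the extent objects and the seven-day threshold are made-up assumptions.

```powershell
# Toy tiering sketch (illustrative only – not Nutanix code).
# Assumption: data extents carry a LastAccess timestamp, and anything
# untouched for 7+ days is demoted from the SSD tier to the HDD tier.
$coldThreshold = (Get-Date).AddDays(-7)

$extents = @(
    [pscustomobject]@{ Id = 1; Tier = 'SSD'; LastAccess = (Get-Date).AddHours(-2) }
    [pscustomobject]@{ Id = 2; Tier = 'SSD'; LastAccess = (Get-Date).AddDays(-30) }
)

foreach ($extent in $extents) {
    if ($extent.Tier -eq 'SSD' -and $extent.LastAccess -lt $coldThreshold) {
        $extent.Tier = 'HDD'   # cold data moves to the capacity tier
    }
}

$extents | Format-Table Id, Tier, LastAccess
```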

Summary

Today, we learned some of the basic building blocks of the Nutanix solution. As you can see, this is a mind shift from our traditional architecture. For Microsoft workloads like SQL and Exchange, this is a win/win, as we need all the IOPS we can get…

This is the first post in a series around my journey learning Nutanix. I will try to keep the blog posts palatable. So join me in my journey to learn about next-generation data center technology and see how you can bring web-scale into your life. Until next time, Rob.

Next up…Nutanix Terminology – Part 2

Back to Basics… Hyper-V… What is it?

To start the journey, one needs a foundation. I assume everyone knows what a hypervisor is, but if you don’t, check out Wikipedia.

VMware has been the leader for a number of years, along with other vendors and open-source options like Xen and KVM. Microsoft also had a hand in virtualization early on with Virtual Server, which was originally developed by Connectix, but it gained no real traction and, frankly, it sucked. For years, Microsoft virtualization had a bad rap compared to its competition, until Hyper-V was introduced with Windows Server 2008. Like any other product, it had its humble beginnings, but it started to become a real threat. With the introduction of Windows Server 2012 and SMB 3.0, Hyper-V is, in my opinion, as good as, if not better than, VMware’s ESXi. At this point, it is a numbers and education play.

Hyper-V Overview (from Wikipedia with edits from me)


Hyper-V implements isolation of virtual machines in terms of a partition. A partition is a logical unit of isolation, supported by the hypervisor, in which each guest operating system executes. A hypervisor instance has to have at least one parent partition, running a supported version of Windows Server (2008, 2008 R2, 2012 or 2012 R2). The virtualization stack runs in the parent partition and has direct access to the hardware devices. The parent partition then creates the child partitions which host the guest OSs. A parent partition creates child partitions using the hypercall API, which is the application programming interface exposed by Hyper-V.

A child partition does not have access to the physical processor, nor does it handle its real interrupts. Instead, it has a virtual view of the processor and runs in a guest virtual address space which, depending on the configuration of the hypervisor, might not necessarily be the entire virtual address space. Depending on VM configuration, Hyper-V may expose only a subset of the processors to each partition. The hypervisor handles the interrupts to the processor and redirects them to the respective partition using a logical Synthetic Interrupt Controller (SynIC). Hyper-V can hardware-accelerate the address translation of guest virtual address spaces by using second-level address translation (SLAT) provided by the CPU, referred to as EPT on Intel and RVI (formerly NPT) on AMD.
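
If you want to see whether your own hardware offers SLAT before turning Hyper-V on, systeminfo reports it on Windows 8 / Server 2012 and later. A quick check from PowerShell:

```powershell
# Quick SLAT (EPT/RVI) check. Note: on a box already running Hyper-V,
# systeminfo just reports that a hypervisor has been detected instead
# of listing the Hyper-V requirements.
systeminfo.exe | Select-String "Second Level Address Translation"
```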

Child partitions do not have direct access to hardware resources; instead, they have a virtual view of the resources, in terms of virtual devices. Any request to the virtual devices is redirected via the VMBus to the devices in the parent partition, which manages the requests. The VMBus is a logical channel which enables inter-partition communication. The response is also redirected via the VMBus. If the devices in the parent partition are also virtual devices, the request is redirected further until it reaches a parent partition where it can gain access to the physical devices. Parent partitions run a Virtualization Service Provider (VSP), which connects to the VMBus and handles device access requests from child partitions. Child partition virtual devices internally run a Virtualization Service Client (VSC), which redirects requests to VSPs in the parent partition via the VMBus. This entire process is transparent to the guest OS.
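
You can actually see these synthetic, VSC-backed devices from inside a Windows guest. A small sketch, assuming a Windows 8.1 / Server 2012 R2 or later guest (Get-PnpDevice is not available on older versions; Device Manager shows the same devices under System devices):

```powershell
# Run inside a Windows guest: lists the Hyper-V synthetic devices
# that communicate over the VMBus.
Get-PnpDevice -PresentOnly |
    Where-Object FriendlyName -like '*Hyper-V*' |
    Format-Table FriendlyName, Class, Status
```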

Virtual devices can also take advantage of a Windows Server Virtualization feature, named Enlightened I/O, for storage, networking and graphics subsystems, among others. Enlightened I/O is a specialized, virtualization-aware implementation of high-level communication protocols (like SCSI) that takes advantage of the VMBus directly, bypassing any device emulation layer. This makes the communication more efficient, but requires the guest OS to support Enlightened I/O.
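
From the parent partition, you can check which of these integration components a given guest has enabled using the Hyper-V PowerShell module (Server 2012 and later; 'Demo-VM' is a placeholder name):

```powershell
# Run on the Hyper-V host: shows the integration services (time sync,
# heartbeat, VSS, etc.) enabled for a VM. 'Demo-VM' is a placeholder.
Get-VMIntegrationService -VMName 'Demo-VM'
```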

Also, check out the poster below, which highlights all the current features of Hyper-V.

[Poster: Windows Server 2012 Hyper-V Architecture]

Hyper-V Install

The easiest way to start using Hyper-V is by adding the Hyper-V role in Windows Server (2008 and later). Roles are the services that a server provides, features are what a server does, and by default all roles and features are disabled when you do a clean install of Windows Server.
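
On Windows Server 2012 and later, you can also add the role from an elevated PowerShell prompt instead of clicking through Server Manager:

```powershell
# Install the Hyper-V role plus the management tools, then reboot.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```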

Having done that (which requires a reboot), you get Hyper-V Manager when you expand the role; connect to your server and you are ready to create or import virtual machines (VMs).

This is not the only way to run Hyper-V. There is a free version of Hyper-V called Hyper-V Server, which you can download here. It is based on Server Core, an install-time option in Windows Server (2008 and later) that is the bare-minimum operating system with no real graphical interface, and is either managed from the command line (e.g., with PowerShell, the command prompt, netsh, diskpart, etc.) or remotely. Hyper-V Server is even more cut down: it has all of the roles and features removed except Hyper-V, so you need to be pretty good at command-line work or know how to connect and set up the remote administration tools for Windows.
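
If the command line scares you, PowerShell remoting takes most of the pain out of a headless Hyper-V Server box. Assuming your server is named HV01 (a placeholder) and remoting is enabled, which it is by default on 2012 and later:

```powershell
# Open a remote session to the headless box and drive it from there.
Enter-PSSession -ComputerName HV01
Get-VM          # Hyper-V cmdlets now run against HV01
Exit-PSSession  # drop back to your local machine
```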

So, having got a Hyper-V environment set up, you can then use the New Virtual Machine wizard in Hyper-V Manager to create new virtual machines. The process is similar to the way you would specify requirements for a physical server, except that you are telling the wizard which resources of the physical server the VM will use. You can also import a VM from an export created on another Hyper-V environment, or complete the wizard using a VHD you have obtained from somewhere; e.g., Microsoft provides some pre-built VHDs to save you having to install and configure a Microsoft application in order to evaluate it.
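
For the PowerShell-inclined, the wizard’s output boils down to a couple of cmdlets (the names, sizes and paths below are placeholders to adjust):

```powershell
# Create a VM with a brand new, empty 60 GB virtual hard disk.
New-VM -Name 'Demo-VM' -MemoryStartupBytes 2GB `
       -NewVHDPath 'D:\VMs\Demo-VM.vhdx' -NewVHDSizeBytes 60GB

# Or build a VM around a VHD you downloaded or exported elsewhere.
New-VM -Name 'Eval-VM' -MemoryStartupBytes 2GB -VHDPath 'D:\VMs\Eval.vhdx'
```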

If you are creating a new virtual machine, the virtual hard disk will be empty and will need an operating system. Theoretically, this can be anything that runs on x86/x64 hardware, from DOS 3.3 to Windows 8, or even other OSes like Unix and Linux. However, Microsoft will only support its operating systems and applications that are supported to run on physical hardware; e.g., Windows 7, but not Windows 95, which is out of support. The word “support” here means you can get support from Microsoft, not a vague statement along the lines of “we got it to work but you’re on your own if you get stuck.” When it comes to Linux, the latest versions of Red Hat, SUSE and CentOS are also supported, because those distros are the subject of support arrangements between those Linux vendors and Microsoft, so you can get support from Microsoft for them.
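
Getting an OS onto that empty disk is just a matter of pointing the VM’s DVD drive at install media and booting it; for example (the paths and VM name are placeholders):

```powershell
# Attach an install ISO to the VM's virtual DVD drive and power it on.
Add-VMDvdDrive -VMName 'Demo-VM' -Path 'D:\ISO\WindowsServer2012.iso'
Start-VM -Name 'Demo-VM'
```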

I will post a YouTube video soon on how to install Hyper-V… but take a test drive and see for yourself. Until next time, Rob…