What Is Hyper-V?: The Authoritative Guide

What Is Hyper-V? [Definition & Uses For It]

Whether you’re just beginning to look into virtualization platform options for your company, or you’re a new Hyper-V user trying to get up to speed, it can be a challenge to find all the information you need in one place. That’s why we created this guide—to give you an all-in-one resource you can bookmark and refer back to as often as you need to, so you can get up and running on Hyper-V more smoothly.

Storage Spaces Direct Explained – Storage QOS & Networking

Yo everyone…This is going to be a short blog post in this series. I am just covering Networking and Storage QoS as they pertain to S2D. These are the technologies that bind S2D together.
Storage QoS

S2D uses the Storage Quality of Service (QoS) feature that ships with Windows Server 2016, which provides standard minimum/maximum IOPS and bandwidth control. A QoS policy can be applied at the VHD, VM, group-of-VMs, or tenant level. Benefits include:

  • Mitigate noisy neighbor issues. By default, Storage QoS ensures that a single virtual machine cannot consume all storage resources and starve other virtual machines of storage bandwidth.
  • Monitor end-to-end storage performance. As soon as virtual machines stored on a Scale-Out File Server are started, their performance is monitored. Performance details of all running virtual machines and the configuration of the Scale-Out File Server cluster can be viewed from a single location.
  • Manage storage I/O per workload's business needs. Storage QoS policies define performance minimums and maximums for virtual machines and ensure that they are met. This provides consistent performance to virtual machines, even in dense and overprovisioned environments. If policies cannot be met, alerts are available to track when VMs are out of policy or have invalid policies assigned. (A short example of creating and applying a policy follows this list.)
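To make that concrete, here is a minimal PowerShell sketch of creating a policy and applying it to a VM's virtual disk on a Windows Server 2016 cluster. The policy name, IOPS limits, and VM name are illustrative assumptions, not values from this post:

```powershell
# Create a Storage QoS policy on the cluster (name and limits are illustrative).
$policy = New-StorageQosPolicy -Name "Gold" -PolicyType Dedicated `
    -MinimumIops 500 -MaximumIops 5000

# Attach the policy to the VM's virtual hard disks so the min/max IOPS are enforced.
Get-VM -Name "VM01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# End-to-end monitoring: list the busiest storage flows.
Get-StorageQosFlow | Sort-Object InitiatorIOPS -Descending | Select-Object -First 10
```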

What’s New in Networking with S2D?
In Windows Server 2016, Microsoft added Remote Direct Memory Access (RDMA) support to the Hyper-V virtual switch.
For those who don’t know what RDMA is: it is a technology that allows direct memory access from one computer to another, bypassing the TCP stack, CPU, OS, and driver layers, which allows for low-latency, high-throughput connections. This is done with hardware transport offloads on network adapters that support RDMA.
Back to Hyper-V virtual switch support for RDMA. This allows you to configure regular or RDMA-enabled vNICs on top of a pair of RDMA-capable physical NICs. Windows Server 2016 also added embedded NIC teaming, or Switch Embedded Teaming (SET).
With SET, NIC teaming and the Hyper-V virtual switch are a single entity and can now be used in conjunction with RDMA NICs, whereas in Windows Server 2012 R2 you needed separate NICs for RDMA and for the Hyper-V switch.
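As an illustration of that converged setup, here is a rough PowerShell sketch; the switch name, physical NIC names, and vNIC names are assumptions for the example, not values from the post:

```powershell
# Create one SET team from two RDMA-capable physical NICs
# (switch and adapter names are illustrative).
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true

# Add host vNICs for storage (SMB) traffic on top of the same switch.
Add-VMNetworkAdapter -ManagementOS -Name "SMB01" -SwitchName "SETswitch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB02" -SwitchName "SETswitch"

# Enable RDMA on the vNICs so SMB Direct can use them.
Enable-NetAdapterRdma -Name "vEthernet (SMB01)","vEthernet (SMB02)"
```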
The image below illustrates the architecture changes between Windows Server 2012 R2 and Windows Server 2016.
[Image: Windows Server 2012 R2 vs. Windows Server 2016 virtual switch and RDMA networking architecture]
Next up…Management and Operations…

Until next time, Rob

Storage Spaces Direct Explained – Fault Tolerance and Multisite Replication

[Image: construction-mistake photo of several toilets installed side by side]
Fault Tolerance…What does it mean? Let me break it down simply. Pictured above is just a bad design, not fault tolerance. Having two or more of something is one factor, but how it’s implemented is just as important. Fault tolerance incorporates two very important principles: high availability and redundancy.
Now, if we had a few toilets side by side, kept only one open with the other two on standby, and could move the user automatically to another toilet during a failure, then technically it would be fault tolerant. Anyways, let’s move on from toilets to the real world. 🙂
Simply put, fault tolerance is the ability to continue non-stop when a hardware failure occurs. A fault-tolerant system is designed from the ground up for reliability by building multiples of all critical components, such as CPUs, memory, disks, and power supplies, into the same computer. In the event one component fails, another takes over without skipping a beat.
Many systems are designed to recover from a failure by detecting the failed component and switching to another computer system. These systems, although sometimes called fault tolerant, are more widely known as “high availability” systems, requiring that the software resubmit the job when the second system is available.
True fault-tolerant systems with redundant hardware are the most costly because the additional components add to the overall system cost. However, fault-tolerant systems provide the same processing capacity after a failure as before, whereas high-availability systems often provide reduced capacity. OK, let’s move on to fault tolerance in S2D.
Fault Tolerance in S2D

Storage Spaces Direct (S2D) uses 3-way mirroring and will spread those mirror copies across three different servers in the cluster. S2D supports full chassis and rack awareness and gives you the option to distribute data copies across these fault domains.
For disk failures, S2D also uses a self-healing approach… in basic terms, S2D offlines the failed disk and rebuilds the data copy on another node in the cluster. Replacing the drive adds capacity back into the system. This is an important note, as not all HCI vendors support self-healing. For example, on VSAN and some other vendors' platforms, disk failures take out entire vDisks.
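Below is a minimal PowerShell sketch of the two ideas above: describing the physical fault domains (racks) to the cluster, and creating a three-way mirrored volume. All rack, node, and volume names and sizes are illustrative assumptions:

```powershell
# Describe the physical layout so S2D can spread mirror copies across racks
# (rack and node names are illustrative).
New-ClusterFaultDomain -Name "Rack01" -Type Rack
New-ClusterFaultDomain -Name "Rack02" -Type Rack
New-ClusterFaultDomain -Name "Rack03" -Type Rack
Set-ClusterFaultDomain -Name "Node01" -Parent "Rack01"
Set-ClusterFaultDomain -Name "Node02" -Parent "Rack02"
Set-ClusterFaultDomain -Name "Node03" -Parent "Rack03"

# Create a volume that keeps three copies of the data
# (three-way mirror, i.e. it tolerates two failures).
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" `
    -FileSystem CSVFS_ReFS -Size 1TB `
    -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2
```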
Multisite Replication

S2D uses Storage Replica (which ships with Windows Server 2016) for synchronous or asynchronous replication. It supports both stretched clusters and cluster-to-cluster DR. Storage Replica is part of Windows Server and can be used for other data replication needs outside of S2D.
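As a rough illustration, a cluster-to-cluster replication partnership might be set up along these lines; the server, replication group, and volume names here are illustrative assumptions:

```powershell
# Create a synchronous replication partnership between two sites
# (computer, replication-group, and volume names are illustrative).
New-SRPartnership -SourceComputerName "SiteA-SR01" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "SiteB-SR01" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" `
    -ReplicationMode Synchronous

# Check replication mode and status for each replicated volume.
(Get-SRGroup).Replicas | Select-Object DataVolume, ReplicationMode, ReplicationStatus
```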
Ok…Next up, Storage QOS and Networking…

Until next time, Rob….

The Evolution of S2D

The intention of this blog post series is to give some history of how Microsoft Storage Spaces evolved into what is known today as Storage Spaces Direct (S2D). This first blog post will go into the history of Storage Spaces. Over my next few posts, I will delve further into the recent Storage Spaces Direct release with Windows Server 2016. I will conclude my series with where I think it’s headed and how it compares to other HCI solutions in general. Now let’s go for a ride down memory lane….

The Evolution of Storage Spaces

Let me remind everyone, Storage Spaces isn’t new.  Microsoft has been working on it for over 6 years and it first shipped with Windows Server 2012.  Back then, Microsoft’s goal was to replace the components of a traditional SAN with software intelligence running on cheaper commodity hardware… much like everyone else was starting to do back then.
Their somewhat unique approach was to replace traditional expensive hardware storage controllers with lightweight servers running software controllers connected to shared JBOD disk shelves. The software controllers were grouped together as a scale-out file server (2 or 4 controllers at that time) and presented an SMB storage target over a standard Ethernet network. The storage was consumed by VMs running on Hyper-V hosts or by workloads running on physical servers. Note that at this time, Microsoft was still maintaining a 3-tier architecture with a disaggregated compute layer. There was a reason for this: traditional storage network protocols can consume between 25% and 40% of system CPU to service IO.

To try to address this problem, Microsoft started making investments in SMB Direct (SMB over RDMA).  RDMA or Remote Direct Memory Access provides high network throughput with low latency while using significantly lower system CPU.  In Microsoft’s implementation, this allowed them to drive much higher VM densities on the compute layer and go very CPU light on the storage controller hardware.  A hyper-converged architecture didn’t make much sense for them at the time.

Also, one of the limitations of this architecture was that it still used ‘shared storage’. Because of that, Microsoft needed to use dual-ported SAS drives. But at the time, single-port SATA drives were available in higher capacities and at a much lower cost. Another factor was that all the hyper-scale cloud providers were using SATA drives, further driving the cost down. All of these factors, IMO, forced Microsoft to dump the shared-storage model and move to ‘shared nothing’ storage… which basically meant each storage controller had its own storage (local or in a direct-attached JBOD). Reminds me of an old saying in tech: if at first you don’t succeed, call it version 1.0. 🙂

Fast Forward to Today

Storage Spaces with ‘shared nothing’ storage is now referred to as Storage Spaces Direct, or S2D for short. With Windows Server 2016, Storage Spaces Direct can be deployed in either a more traditional disaggregated compute model or a hyper-converged model, as shown below:
[Image: disaggregated vs. hyper-converged Storage Spaces Direct deployment models]
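For a sense of how little is involved in standing up the hyper-converged model, here is a minimal PowerShell sketch; the cluster and node names are illustrative assumptions, and a real deployment needs validated hardware and networking first:

```powershell
# Build the cluster from the nodes that will both run VMs and contribute disks
# (cluster and node names are illustrative).
New-Cluster -Name "S2D-CL01" -Node "Node01","Node02","Node03","Node04" -NoStorage

# Claim the eligible local drives in every node and build the clustered storage pool.
Enable-ClusterStorageSpacesDirect

# Volumes carved from the pool are then consumed directly by Hyper-V VMs running
# on the same nodes (hyper-converged) or shared out over SMB (disaggregated).
```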
As mentioned above, over the next few posts in this series, I will dive into the basics, ReFS with S2D, Multi-Tier Volumes, Erasure Coding, Fault Tolerance, Multisite Replication, Storage QoS, Networking, Management, Native App Support, Performance claims, Scalability, Product Positioning, how to buy, and my final conclusions on S2D as it compares to other HCI solutions.

Until next time, Rob…