Storage Spaces Direct Explained – Fault Tolerance and Multisite Replication


Fault Tolerance…What does it mean?  Let me break it down simply. Pictured above is just a bad design, not fault tolerance. Having two or more of something is one factor, but how it’s implemented is just as important.  Fault tolerance incorporates two very important principles: high availability and redundancy.

Now, if we had a few toilets side by side, kept only one open with the other two on standby, and could move the user automatically to another toilet during a failure, then technically it would be fault tolerant. Anyway, let’s move on from toilets to the real world. 🙂


Simply put, fault tolerance is the ability to continue non-stop when a hardware failure occurs. A fault-tolerant system is designed from the ground up for reliability by building multiples of all critical components, such as CPUs, memory, disks, and power supplies, into the same computer. In the event one component fails, another takes over without skipping a beat.

Many systems are designed to recover from a failure by detecting the failed component and switching to another computer system. These systems, although sometimes called fault tolerant, are more widely known as “high availability” systems, requiring the software to resubmit the job once the second system is available.

True fault-tolerant systems with redundant hardware are the most costly because the additional components add to the overall system cost. However, fault-tolerant systems provide the same processing capacity after a failure as before, whereas high availability systems often provide reduced capacity. OK, let’s move on to fault tolerance in S2D.

Fault Tolerance in S2D

Storage Spaces Direct (S2D) uses 3-way mirroring and spreads those mirrors across three different servers in the cluster. S2D supports full chassis and rack awareness and gives you the option to distribute data copies across these fault domains.
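
If you want to see what that looks like in practice, here’s a minimal sketch using the Windows Server 2016 fault domain cmdlets. The rack and node names are placeholders for your own environment:

```powershell
# Describe the physical racks to the cluster (placeholder names)
New-ClusterFaultDomain -Name "Rack01" -Type Rack
New-ClusterFaultDomain -Name "Rack02" -Type Rack
New-ClusterFaultDomain -Name "Rack03" -Type Rack

# Tell the cluster which rack each node lives in
Set-ClusterFaultDomain -Name "Node01" -Parent "Rack01"
Set-ClusterFaultDomain -Name "Node02" -Parent "Rack02"
Set-ClusterFaultDomain -Name "Node03" -Parent "Rack03"

# Review the resulting fault domain tree
Get-ClusterFaultDomain
```

With the racks defined, the data copies get placed in different racks rather than just on different nodes.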

For disk failures, S2D also uses a self-healing approach… in basic terms, S2D takes the failed disk offline and rebuilds its data copies onto other drives in the cluster. Replacing the drive adds capacity back into the system. This is an important point, as not all HCI vendors handle disk failures this way; for example, on VSAN and some other platforms, a disk failure can take out entire vDisks.
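
If you want to watch the self-healing happen, here’s a quick sketch using the standard Storage cmdlets; the repair jobs kick off automatically, so this is purely read-only:

```powershell
# Any physical disks the pool has flagged or retired?
Get-PhysicalDisk |
    Where-Object { $_.HealthStatus -ne "Healthy" } |
    Select-Object FriendlyName, SerialNumber, Usage, HealthStatus

# Rebuild/repair jobs run automatically; watch their progress here
Get-StorageJob

# Overall health of the vDisks while the rebuild runs
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
```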


Multisite Replication

S2D uses Storage Replica (which ships with Windows Server 2016) for synchronous or asynchronous replication. It supports both stretched clusters and cluster-to-cluster DR. Storage Replica is part of Windows Server and can be used for other data replication needs outside of S2D.
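
Here’s a rough sketch of cluster-to-cluster replication with the Storage Replica cmdlets. Treat it as illustrative only: the cluster names, volume paths, log volumes, and replication group names are all placeholders, and for cluster-to-cluster you also need to grant access between the clusters first with Grant-SRAccess.

```powershell
# Validate the topology first; this produces a report with sizing and latency data
Test-SRTopology -SourceComputerName "Cluster-A" -SourceVolumeName "C:\ClusterStorage\Volume1" `
    -SourceLogVolumeName "L:" `
    -DestinationComputerName "Cluster-B" -DestinationVolumeName "C:\ClusterStorage\Volume1" `
    -DestinationLogVolumeName "L:" `
    -DurationInMinutes 30 -ResultPath "C:\Temp"

# Create the partnership; switch to -ReplicationMode Asynchronous for async DR
New-SRPartnership -SourceComputerName "Cluster-A" -SourceRGName "RG01" `
    -SourceVolumeName "C:\ClusterStorage\Volume1" -SourceLogVolumeName "L:" `
    -DestinationComputerName "Cluster-B" -DestinationRGName "RG02" `
    -DestinationVolumeName "C:\ClusterStorage\Volume1" -DestinationLogVolumeName "L:" `
    -ReplicationMode Synchronous
```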


OK…next up, Storage QoS and Networking. Until next time, Rob…

Storage Spaces Direct Basics – Explained



Like anything else, I’m going to start with the basics of the stack and then dive into details of each component over the next few blog posts. There’s a lot to digest…So let’s get rolling…

As mentioned in my previous post, S2D can be deployed in either a more traditional disaggregated compute model or as a hyperconverged model.
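
Before getting into the components, here’s roughly what standing up a hyperconverged S2D cluster looks like from PowerShell. This is a bare-bones sketch with placeholder node and cluster names, not a full deployment guide:

```powershell
# Validate the nodes, including the Storage Spaces Direct specific checks
Test-Cluster -Node "Node01","Node02","Node03","Node04" `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# Create the cluster with no shared storage attached
New-Cluster -Name "S2D-Cluster" -Node "Node01","Node02","Node03","Node04" -NoStorage

# Claim the local drives in every node and build the software-defined storage fabric
Enable-ClusterStorageSpacesDirect
```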


Here are the basic components of the stack…

Failover Clustering – The built-in clustering feature of Windows Server is used to connect the servers.

Software Storage Bus – The Software Storage Bus is new in S2D. The bus spans the cluster and establishes a software-defined storage fabric where all the servers can see all of each other’s local drives.

Storage Bus Layer Cache – The Software Storage Bus dynamically binds the fastest drives present (typically SSDs) to slower HDDs to provide server-side read/write caching. The cache is independent of pools and vDisks, always on, and requires no configuration.

Storage Pool – When an IT Admin enables Storage Spaces Direct, all of the eligible drives (boot drives and the like are excluded) discovered by the storage bus are grouped together to form a pool. The pool is created automatically on setup, and by default there is only one pool per cluster. IT Admins can configure additional pools, but Microsoft recommends against it.
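
A quick way to poke at what got pooled, as a minimal sketch with the standard Storage cmdlets:

```powershell
# The pool S2D created automatically (the primordial pool is excluded)
Get-StoragePool -IsPrimordial $false |
    Select-Object FriendlyName, Size, AllocatedSize, HealthStatus

# Which drives were eligible, and why any were left out (boot drives, etc.)
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, CanPool, CannotPoolReason
```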

Storage Spaces – From the pool, Microsoft carves out “storage spaces”, or essentially virtual disks. The vDisks can be defined as a simple space (no protection), a mirrored space (distributed 2-way or 3-way mirroring), or a parity space (distributed erasure coding). You can think of it as distributed, software-defined RAID using the drives in the pool. IT Admins can choose to use the new ReFS file system (more on this later) or traditional NTFS.
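
For example, carving a mirrored, ReFS-formatted volume out of the pool looks roughly like this (volume name and size are placeholders; by default the S2D pool’s friendly name starts with “S2D on”, and with three or more servers a mirror defaults to 3-way):

```powershell
# Minimal sketch: a mirrored Cluster Shared Volume from the S2D pool
New-Volume -StoragePoolFriendlyName "S2D*" `
    -FriendlyName "Volume01" `
    -FileSystem CSVFS_ReFS `
    -ResiliencySettingName Mirror `
    -Size 2TB
```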

Resilient File System (ReFS) – ReFS is the purpose-built filesystem for virtualization. This includes dramatic accelerations for .vhdx file operations such as creation, expansion, and checkpoint merging. It also has built-in checksums to detect and correct bit errors. ReFS also introduces real-time tiers, which rotate data between so-called “hot” and “cold” storage tiers in real time based on usage.

Cluster Shared Volumes – Each vDisk is a Cluster Shared Volume that lives within a single namespace, so every volume appears to each host server as if it were mounted locally.

Scale-Out File Server – The scale-out file server only exists in converged deployments and provides remote file access via SMB3.
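
In that converged case, exposing a volume to the compute layer looks roughly like this (the role name, share name, path, and the group being granted access are all placeholders):

```powershell
# Add the Scale-Out File Server role to the cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Share a Cluster Shared Volume out to the Hyper-V hosts over SMB3
New-SmbShare -Name "VMs01" -Path "C:\ClusterStorage\Volume01" `
    -FullAccess "CONTOSO\Hyper-V-Hosts"
```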

Networking Hardware – Storage Spaces Direct uses SMB3, including SMB Direct and SMB Multichannel, over Ethernet to communicate between servers. Microsoft strongly recommends 10+ GbE with remote direct memory access (RDMA). IT Admins can use either iWARP or RoCE (RDMA over Converged Ethernet).
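
A quick sketch for checking that RDMA is actually in play on the storage NICs. The adapter name is a placeholder, and with RoCE you would also have DCB/PFC to configure on the hosts and switches, which I’m leaving out here:

```powershell
# Which adapters are RDMA-capable, and is RDMA enabled on them?
Get-NetAdapterRdma | Select-Object Name, Enabled

# Enable RDMA on a specific adapter if it isn't already
Enable-NetAdapterRdma -Name "SLOT 3 Port 1"

# Confirm SMB sees RDMA-capable interfaces (SMB Direct will use them automatically)
Get-SmbClientNetworkInterface | Select-Object FriendlyName, RdmaCapable
```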

In Windows Server 2016, Microsoft has also incorporated Storage Replica, Storage QoS, and a new Health Service. I’ll cover each of these areas in a little more detail in later posts as they relate to S2D.


Storage Hardware

Microsoft supports hybrid and all-flash configurations. Each server must have at least two SSDs and four additional drives. Microsoft supports NVMe in the product today. IT Admins can use a mixture of NVMe, SSD, and HDD devices in a variety of tiering models. SATA and SAS devices should sit behind a host-bus adapter (HBA) and SAS expander.
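
A simple sanity check of the drive mix before enabling S2D might look like this (as a rule of thumb, NVMe devices report MediaType SSD with BusType NVMe):

```powershell
# Count the poolable drives in each media/bus combination
Get-PhysicalDisk -CanPool $true |
    Group-Object MediaType, BusType |
    Select-Object Count, Name
```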


Now that we have covered the basics, next I will dive into how each of the components works. Next up: ReFS, Multi-Tier Volumes, Erasure Coding, and tigers, oh my… 🙂

Until next time, Rob…