Storage Spaces Direct Basics – Explained

Like anything else, I’m going to start with the basics of the stack and then dive into details of each component over the next few blog posts. There’s a lot to digest… So let’s get rolling…
As mentioned in my previous post, S2D can be deployed in either a more traditional disaggregated compute model or as a Hyperconverged model as shown below:
(Diagram: disaggregated vs. hyperconverged deployment models)

Here are the basic components of the stack…

Failover Clustering – The built-in clustering feature of Windows Server is used to connect the servers.

Software Storage Bus – The Software Storage Bus is new in S2D. The bus spans the cluster and establishes a software-defined storage fabric where all the servers can see all of each other’s local drives.

Storage Bus Layer Cache – The Software Storage Bus dynamically binds the fastest drives present (typically SSDs) to slower HDDs to provide server-side read/write caching. The cache is independent of pools and vDisks, always on, and requires no configuration.
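To make the binding idea concrete, here is a toy Python sketch, assuming a simple round-robin pairing of capacity drives to cache drives. Microsoft doesn't publish the binding algorithm at this level of detail, so treat the pairing logic (and the drive names) as purely illustrative:

```python
# Toy illustration (not Microsoft's implementation): the storage bus cache
# binds each slower capacity drive to one of the faster cache drives,
# spreading the bindings evenly across the cache drives.
def bind_cache(cache_drives, capacity_drives):
    """Map each capacity drive to a cache drive, round-robin."""
    return {
        cap: cache_drives[i % len(cache_drives)]
        for i, cap in enumerate(capacity_drives)
    }

bindings = bind_cache(["ssd0", "ssd1"], ["hdd0", "hdd1", "hdd2", "hdd3"])
# hdd0 and hdd2 bind to ssd0; hdd1 and hdd3 bind to ssd1
```

The point of the even spread is that no single cache drive becomes a hot spot; if a cache drive fails, its capacity drives are simply re-bound to the survivors.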

Storage Pool – When an IT Admin enables Storage Spaces Direct, all of the eligible drives (excluding boot drives, etc.) discovered by the storage bus are grouped together to form a pool. The pool is created automatically on setup, and by default there is only one pool per cluster. IT Admins can configure additional pools, but Microsoft recommends against it.

Storage Spaces – From the pool, S2D carves out ‘storage spaces’, essentially virtual disks. A vDisk can be defined as a simple space (no protection), a mirrored space (distributed 2-way or 3-way mirroring), or a parity space (distributed erasure coding). You can think of it as distributed, software-defined RAID using the drives in the pool. IT Admins can choose to use the new ReFS file system (more on this later) or traditional NTFS.
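The capacity cost of each resiliency type comes down to quick arithmetic. The sketch below uses the textbook efficiencies (2 copies for 2-way mirror, 3 copies for 3-way mirror); the parity figure assumes a hypothetical 3-data+1-parity layout, and real erasure-coding efficiency depends on the actual column count:

```python
def usable_capacity(raw_tb, resiliency):
    """Approximate usable capacity for a given resiliency type.

    Illustrative efficiencies only: real numbers vary with column
    counts, reserve capacity, and node count.
    """
    efficiency = {
        "simple": 1.0,          # no protection
        "2-way-mirror": 1 / 2,  # two full copies of all data
        "3-way-mirror": 1 / 3,  # three full copies of all data
        "parity-3+1": 3 / 4,    # erasure coding: 3 data + 1 parity columns
    }
    return raw_tb * efficiency[resiliency]

print(usable_capacity(40, "3-way-mirror"))  # 40 TB raw yields about 13.3 TB usable
```

This is why mirroring is usually positioned for performance-sensitive (“hot”) data and parity for capacity-sensitive (“cold”) data: parity recovers far more usable space from the same raw drives, at the cost of write amplification.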

Resilient File System (ReFS) – ReFS is the purpose-built filesystem for virtualization. This includes dramatic accelerations for .vhdx file operations such as creation, expansion, and checkpoint merging. It also has built-in checksums to detect and correct bit errors. ReFS also introduces real-time tiering, which rotates data between so-called “hot” and “cold” storage tiers in real time based on usage.
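As a rough picture of real-time tiering, the sketch below places data on the hot or cold tier from a simple access-count threshold. ReFS’s actual heat tracking is considerably more sophisticated, so the threshold and tier names here are purely hypothetical:

```python
# Hypothetical sketch of tier placement (assumption: a bare access-count
# threshold; ReFS tracks write/access heat far more precisely than this).
def place_tier(access_count, threshold=10):
    """Frequently touched data stays on the fast mirror tier;
    rarely touched data rotates down to the parity tier."""
    return "mirror (hot)" if access_count >= threshold else "parity (cold)"

place_tier(25)  # busy data lands on the mirror tier
place_tier(1)   # stale data rotates to the parity tier
```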

Cluster Shared Volumes – Each vDisk is a cluster shared volume that exists within a single namespace so that every volume appears to each host server as being mounted locally.

Scale-Out File Server – The scale-out file server only exists in converged deployments and provides remote file access via SMB3.

Networking Hardware – Storage Spaces Direct uses SMB3, including SMB Direct and SMB Multichannel, over Ethernet to communicate between servers. Microsoft strongly recommends 10+ GbE with Remote Direct Memory Access (RDMA). IT Admins can use either iWARP or RoCE (RDMA over Converged Ethernet).

In Windows Server 2016, Microsoft has also incorporated Storage Replica, Storage QoS, and a new Health Service. I’ll cover each of these areas in a little more detail in a later post with regards to S2D.
Storage Hardware

Microsoft supports hybrid or all-flash configurations. Each server must have at least 2 SSDs and 4 additional drives, and NVMe is supported in the product today. IT Admins can use a mixture of NVMe, SSD, and HDD drives in a variety of tiering models. SATA and SAS devices should sit behind a host-bus adapter (HBA) and SAS expander.
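One way to picture the tiering models: the fastest drive type present tends to serve as the cache tier, with the slower types as capacity. The helper below is a hypothetical sketch of that rule of thumb, not an official planning tool:

```python
# Illustrative sketch (not an official sizing tool): given the drive types
# present in a server, treat the fastest type as cache and the rest as
# capacity, mirroring the hybrid/all-flash tiering models described above.
SPEED_ORDER = ["NVMe", "SSD", "HDD"]  # fastest to slowest

def plan_tiers(drive_types):
    present = [t for t in SPEED_ORDER if t in drive_types]
    if len(present) == 1:
        # Single-type all-flash (or all-HDD): no separate cache tier.
        return {"cache": None, "capacity": present}
    return {"cache": present[0], "capacity": present[1:]}

plan_tiers({"NVMe", "SSD", "HDD"})  # NVMe caches; SSD and HDD hold capacity
plan_tiers({"SSD", "HDD"})          # SSD caches; HDD holds capacity
```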
Now that we have covered the basics, next I will dive into how each of the components works. Next up: ReFS, Multi-Tier Volumes, Erasure Coding, and tigers, oh my… 🙂

Until next time, Rob…

The Evolution of S2D

The intention of this blog post series is to give some history of how Microsoft Storage Spaces evolved into what is known today as Storage Spaces Direct (S2D). This first blog post will go into the history of Storage Spaces. Over my next few posts, I will delve further into the recent Storage Spaces Direct release with Windows Server 2016. I will conclude my series with where I think it’s headed and how it compares to other HCI solutions in general. Now let’s go for a ride down memory lane…

The Evolution of Storage Spaces

Let me remind everyone, Storage Spaces isn’t new.  Microsoft has been working on it for over 6 years and it first shipped with Windows Server 2012.  Back then, Microsoft’s goal was to replace the components of a traditional SAN with software intelligence running on cheaper commodity hardware… much like everyone else was starting to do back then.
Their somewhat unique approach was to replace traditional expensive hardware storage controllers with lightweight servers running software controllers connected to shared JBOD disk shelves. The software controllers were grouped together as a scale-out file server (2 or 4 controllers at that time) and presented an SMB storage target over a standard Ethernet network. The storage was consumed by VMs running on Hyper-V hosts or by workloads running on physical servers. Note that at this point they were still maintaining a 3-tier architecture with a disaggregated compute layer, and for good reason: traditional storage network protocols can consume between 25% and 40% of system CPU to service IO.

To try to address this problem, Microsoft started making investments in SMB Direct (SMB over RDMA).  RDMA or Remote Direct Memory Access provides high network throughput with low latency while using significantly lower system CPU.  In Microsoft’s implementation, this allowed them to drive much higher VM densities on the compute layer and go very CPU light on the storage controller hardware.  A hyper-converged architecture didn’t make much sense for them at the time.

Also, one of the limitations of this architecture was that it still used ‘shared’ storage. Because of that, they needed to use dual-ported SAS drives. But at the time, single-port SATA drives were available in higher capacities and at a much lower cost. Another factor was that all the hyper-scale cloud providers were using SATA drives, further driving the cost down. All of these factors, IMO, forced Microsoft to dump the shared storage model and move to ‘shared nothing’ storage, meaning each storage controller had its own storage (local or in a direct-attached JBOD). Reminds me of an old saying in tech: if at first you don’t succeed, call it version 1.0. 🙂

Fast Forward to Today

Storage Spaces with ‘shared nothing’ storage is now referred to as Storage Spaces Direct or S2D for short.  With Windows Server 2016, Storage Spaces Direct can now be deployed in either a more traditional disaggregated compute model or as a Hyperconverged model as shown below:
(Diagram: disaggregated vs. hyperconverged deployment models)
As mentioned above, over the next few posts in this series, I will dive into the basics, ReFS with S2D, Multi-Tier Volumes, Erasure Coding, Fault Tolerance, Multisite Replication, Storage QoS, Networking, Management, Native App Support, performance claims, scalability, product positioning, how to buy, and my final conclusions on S2D as it compares to other HCI solutions.

Until next time, Rob…

Ignite 2016 highlights and pics….

Well, I’m fresh back from Microsoft Ignite 2016. It was a busy week of attending sessions and booth duty for Nutanix :). Here is a summary of highlights and pics from the conference. I plan on diving deep into each of the major highlights over the coming weeks/months. My first blog post will be on Storage Spaces Direct and its evolution. On to the updates…

Highlights by Solution

System Center 2016

  • GA in October
  • Announced 3rd-party extensibility

Azure Stack

Azure Marketplace

  • Azure Marketplace apps can be offered via Azure Stack as well
    • Conditions and compatibility will apply.
    • ISV owners need to ensure their solution is compatible and opt in to offer on Azure Stack
    • Roadmap of intentions for marketplace:
      • Highest
        • Always connected scenarios
        • Marketplace bring-your-own-license VM Images
      • Next
        • Connected and occasionally connected
        • Marketplace lifecycle (servicing)
        • Extensions and other artifacts in the marketplace
      • Late
        • Complete marketplace publishing pipeline
        • Billing based on usage data

Azure VM Sizes

  • New VM Sizes announced
    • NC (NVIDIA Compute) – just went into preview
    • NV (NVIDIA visualization) – just went into preview
    • H (high-performance compute, non-GPU)
    • L (close to the G-series, but with less memory so less expensive; i.e., Big Data workloads for less)

SAP HANA on Azure

  • Will be announced tomorrow
  • SAP BW & EE oftp
  • SAP Business Suite
  • S/4 HANA
  • HANA One

Azure Networking

  • Will be announced this week
  • IPv6 for Azure VMs (north-south)
  • VNET Peering
  • MAC Persistence
  • Azure DNS
  • Multiple IPs per NIC

Windows Server 2016

  • GA on October 12th
  • Security focus
    • Credential Guard
    • Just Enough and Just-in-Time Administration
    • Shielded VMs
    • Can utilize TPM 2.0 and UEFI 2.3.1
  • Server/VM Maximums
    • 24 TB of memory per host
    • 12 TB of memory per VM
    • 240 virtual processors per VM
  • Clustering
    • Fault domain awareness
    • Rolling upgrades
  • Networking
    • Quote from Jeffrey Snover (paraphrased): “Don’t buy hardware without RDMA-capable NICs”
    • Note from a Chelsio attendee: iWARP doesn’t require specific switch functionality (RoCE does)
    • Network virtualization: VXLAN support

Next post up, “The Evolution of Storage Spaces” and where we are now. Until next time, Rob…
PS….Pics from the conference…enjoy