The Evolution of S2D

The intention of this blog post series is to give some history of how Microsoft Storage Spaces evolved into what is known today as Storage Spaces Direct (S2D). This first post will cover the history of Storage Spaces. Over my next few posts, I will delve further into the Storage Spaces Direct release in Windows Server 2016, and I will conclude the series with where I think it's headed and how it compares to other HCI solutions in general. Now let's go for a ride down memory lane…

The Evolution of Storage Spaces

Let me remind everyone: Storage Spaces isn't new. Microsoft has been working on it for over six years, and it first shipped with Windows Server 2012. Back then, Microsoft's goal was to replace the components of a traditional SAN with software intelligence running on cheaper commodity hardware… much like everyone else was starting to do at the time.
Their somewhat unique approach was to replace traditional, expensive hardware storage controllers with lightweight servers running software controllers connected to shared JBOD disk shelves. The software controllers were grouped together as a scale-out file server (two or four controllers at the time) and presented an SMB storage target over a standard Ethernet network. The storage was consumed by VMs running on Hyper-V hosts or by workloads running on physical servers. Note that at this point they were still maintaining a three-tier architecture with a disaggregated compute layer, and there was a reason for this: traditional storage network protocols can consume between 25% and 40% of system CPU to service IO.

To address this problem, Microsoft started investing in SMB Direct (SMB over RDMA). RDMA, or Remote Direct Memory Access, provides high network throughput with low latency while using significantly less system CPU. In Microsoft's implementation, this allowed them to drive much higher VM densities on the compute layer and go very light on CPU in the storage controller hardware. A hyper-converged architecture didn't make much sense for them at the time.

Also, one of the limitations of this architecture was that it still used shared storage. Because of that, they needed to use dual-ported SAS drives, while single-port SATA drives were available in higher capacities and at a much lower cost. Another factor at the time was that all the hyper-scale cloud providers were using SATA drives, driving the cost down further. All of these factors, in my opinion, forced Microsoft to dump the shared-storage model and move to "shared nothing" storage, which basically meant each storage controller had its own storage, either local or in a direct-attached JBOD. It reminds me of an old saying in tech: if at first you don't succeed, call it version 1.0. 🙂

Fast Forward to Today

Storage Spaces with "shared nothing" storage is now referred to as Storage Spaces Direct, or S2D for short. With Windows Server 2016, Storage Spaces Direct can be deployed in either the more traditional disaggregated compute model or in a hyper-converged model, as shown below:
[Figure: Storage Spaces Direct deployment models, disaggregated and hyper-converged]
As mentioned above, over the next few posts in this series, I will dive into the basics, ReFS with S2D, multi-tier volumes, erasure coding, fault tolerance, multisite replication, Storage QoS, networking, management, native app support, performance claims, scalability, product positioning, how to buy, and my final conclusions about S2D as it compares to other HCI solutions.

Until next time, Rob…
