Joining 5nine Software as Director of Product Management

Today, I am excited to announce that I will be joining the awesome team at 5nine Software as Director of Product Management. My primary responsibility will be the product strategy and direction of 5nine’s security and management solutions.
So, you ask, why Product Management? It’s been a lifelong dream to be part of shaping the direction of a technology solution. By joining 5nine, I hope to simplify IT, cloud, and beyond, because there’s always a better way 🙂

“What prepared me for this was very surprising looking back.”

Life at Nutanix

Over the past 2 1/2 years at Nutanix, I managed 84 partners across 146 solutions. The partner solutions that my team managed and validated came from all corners of technology: monitoring, backup, DR, big data, DevOps, security, networking, databases, and the list goes on.

5nine Software was one of the first partners I validated. I was familiar with them: 5nine Manager was a tool I had used in the field during my consulting days, but I had not seen their security solution yet. It was during the Nutanix Ready process that I first got introduced to 5nine Security. I remember being super impressed at the time with how they integrated with Hyper-V.

Shortly after 5nine’s Nutanix Ready validation, my colleague and Alliance Manager Tommy Gustaveson and I interviewed 5nine’s former VP of Alliances, Symon Perriman. We enjoyed learning about 5nine’s vision and also getting to know a little more about Symon’s journey. Yes, I admit, I had a little hero worship for him. But who can blame me? Symon is a one-of-a-kind person, and I am proud to this day to call him a friend 🙂

So, on with the story. Part of my job at Nutanix was front-ending the Product Managers (PMs). The PMs were always pulled in ten different directions, and they came to trust us with some of their partner-facing activities. This included understanding the partner technology, how we could go to market together, and how the partner would integrate with Nutanix. We worked with Alliance Managers and PMs to determine whether a partnership would be a good fit.

Once the business side of alliances onboards a partner, the handoff to the Nutanix Ready team happens. The team spends a lot of time understanding each partner solution and does a deep investigation of any issues between the partner solution and Nutanix. This is vetted by Nutanix’s support and solutions teams, which in turn gives the customer a certain degree of comfort that the partner solutions were tested, validated, and will work on Nutanix 🙂

Over the course of my time at Nutanix, and my career for that matter, I have gotten to see many, many UIs/UXes and the engines (code) behind them. I’ve seen what works and what doesn’t. The common theme of what doesn’t work is overcomplicating the user experience.
We are in the age of managing multiple, geographically distributed data centers and clouds, plus backups, DR, networking, and SDNs, and we need to secure it all. If your UI even vaguely resembles an airplane cockpit, you’re doing it wrong. It is an inefficient use of an IT Pro’s time and energy. They just want to manage their production applications simply and have an easy management experience.

I will never trade the time I had at Nutanix, but times are a-changing 🙂 As I mentioned in a previous post, “Building Nutanix Ready”, “it was the best of times and the worst of times”. I have not finished that series yet, but needless to say, it prepared me for the next step in my journey.

So, keep an eye on my blog, Twitter feed, etc., because things are about to shift into high gear.

Until next time and happy holidays,
Robert Corradini, MVP – Cloud & Datacenter

Windows User Profiles…The Mysteries Untold – Part 1

Happy New Year, everyone! This is my first blog post of 2017. Woo hoo!! As always, I love to blog about questions from the field. This one came from a customer testing their new Virtual Desktop Infrastructure (VDI) on Nutanix who had 1 out of 50 user profiles come up corrupt. He asked why this happened and how he could avoid it in the future. Now, I would say that 1 corrupt profile out of 50 is fine during a test, but let’s understand why it happens. This topic is especially important because it directly relates to VDI and your end-user experience.

Windows User Profiles

What is a Windows User Profile? It’s not just your desktop 🙂

Let’s do a quick primer…

Windows creates a user profile the first time that a user logs onto a physical computer or VDI session. At subsequent logons, the system loads the user’s profile, and then other system components configure the user’s environment according to the information in the profile.

A user profile consists of the following elements:

  • A registry hive. The registry hive is the file NTuser.dat. The hive is loaded by the system at user logon and mapped to the HKEY_CURRENT_USER registry key. The user’s registry hive maintains the user’s registry-based preferences and configuration (the short sketch after this list shows how a per-user setting can be read from this hive).
  • A set of profile folders stored in the file system. User-profile files are stored in the Profiles directory, with one folder per user. The user-profile folder is a container that applications and other system components populate with sub-folders and per-user data such as documents and configuration files. Windows Explorer uses the user-profile folders extensively for items such as the user’s Desktop, Start menu, and Documents folder.
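To make the registry-hive part concrete, here is a minimal sketch (Windows-only, Python 3, standard library only) that reads one well-known per-user preference out of the loaded hive. The key and value names are just common examples, not an exhaustive map of what lives in NTuser.dat:

```python
import winreg

# HKEY_CURRENT_USER is the mounted view of the logged-on user's NTuser.dat,
# so anything read here ultimately comes from that file.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Desktop") as key:
    wallpaper, _type = winreg.QueryValueEx(key, "Wallpaper")
    print(f"This user's wallpaper preference: {wallpaper}")
```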

Types of User Profiles

  • Mandatory profiles:
    • Typically one pre-configured profile for many users.
    • Changes can be made during a session, but they are discarded. When the user logs on the next time, the locally cached copy of the mandatory profile is reset (replaced with the network copy).
    • The path to the mandatory profile needs to be assigned to users.
    • Useful mainly for kiosk systems.
  • Local profiles:
    • One profile per user per machine.
    • No dependency on the network.
    • Since the profile is available locally, logons are very fast.
    • No configuration is necessary; local profiles are assigned to users automatically.
    • Backing up local profiles is often a challenge because the profiles are distributed across many machines with potentially slow and/or only intermittent network connectivity.
    • Another difficulty is transferring local profiles between computers, which becomes necessary when machines are replaced.
    • Useful for users who do not switch computers often, or for computers without permanent network connectivity, like laptops. In VDI environments, local profiles should not be used, since users are directed to an arbitrary (least-loaded) host when they launch a new session. (The sketch after this list shows how to enumerate the local profiles cached on a single machine.)
  • Roaming profiles:
    • One profile per user.
    • The master copy of the profile is stored on a file server. During logon, it is copied to the local machine, which may slow down logons considerably depending on profile size and network speed.
    • During logoff, changed files are copied back to the master copy on the file server. Since a user’s registry hive is stored in a single file, this approach creates the “last writer wins” problem: when two sessions log off, the hive saved last silently overwrites the other session’s changes.
    • The path to the roaming profile needs to be assigned to users.
    • Useful for most setups where local profiles cannot be used.
  • Temporary User Profiles:
    • A temporary profile is issued each time that an error condition prevents the user’s profile from loading. Temporary profiles are deleted at the end of each session, and changes made by the user to desktop settings and files are lost when the user logs off.
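Before we move on, here is a rough sketch (again Windows-only Python, standard library) that lists the profiles cached on one machine by walking the well-known ProfileList registry key; each subkey is a user SID whose ProfileImagePath value points at the profile folder:

```python
import winreg

PROFILE_LIST = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PROFILE_LIST) as root:
    subkey_count = winreg.QueryInfoKey(root)[0]   # number of SID subkeys
    for i in range(subkey_count):
        sid = winreg.EnumKey(root, i)
        with winreg.OpenKey(root, sid) as profile:
            path, _ = winreg.QueryValueEx(profile, "ProfileImagePath")
            print(f"{sid} -> {path}")             # e.g. S-1-5-21-... -> C:\Users\rob
```

On a VDI host, a quick inventory like this is handy for spotting stale or orphaned local profiles.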


Windows User Profiles – The Reality

Ok, now let me paint a picture… A user calls the help desk to report a strange issue with an application running on their VDI desktop. What does the help desk technician do? Analyze the root cause of the problem? Probably not. Most likely, the user’s profile will be deleted and the problem will have gone away. Happy ending? Not at all!

Deleting entire user profiles because of malfunctions caused by small data inconsistencies reveals a great deal of helplessness. While the user can work with the faulting application again, the user has lost thousands of personal settings configured both implicitly and explicitly. The help desk technician, on the other hand, has learned nothing from the case, except a brute force way of closing a call. The next time a user rings with a weird problem the technician will be all the more eager to repeat the procedure.

Deleting is cheap. Who is to blame?  Nobody, really. Given the prehistoric user profile design Windows still uses in its latest incarnations, the help desk technician has no other choice but to delete the profile. Trying to get to the root cause is way too difficult and time-consuming a task to perform routinely several times a day. It is so much cheaper to just delete everything and have the user start from scratch.
Why is it like this? Finding a “needle in a haystack” is expensive. User profiles are a mess, a chaotic agglomeration of data. Applications can write what they want, where they want, however they want into the profile. Among the piles of data junk each Windows user profile stores, there are, however, quite a few hidden gems: the settings a user actually has configured. That is the stuff users care about.

Take your favorite web browser, for example. It comes with hundreds or thousands of factory presets, most of which you could not care less about. But I bet there are a few tweaks in your configuration you would not want to live without. Unfortunately, those settings dear to your heart are buried among all the other default stuff.

Configuration Craziness with some Applications

And it gets worse. Not only are the valuable settings from individual applications intermingled with worthless data, some applications store their configuration all over the place, effectively creating a mix of settings from multiple programs. This makes it virtually impossible to easily identify and extract a single program’s settings. By the way, Microsoft is especially good at this mixing business. Try to identify all the storage locations for (Internet) Explorer settings on your own. LOL 😉

Untangling the Knot – How?

The inadequacies of Windows user profiles have led to the development of quite a few profile management products and technologies.  My next post will dive into Best Practices and some of the solutions that help solve this problem.
Finally, at the beginning of the post I mentioned that this series was inspired by a customer in the field. Well, in the end, the problem was a bad registry setting in NTUSER.DAT, written by a third-party application. ;(

Until next time,  Rob.

Storage Spaces Direct Explained – Applications & Performance

Applications

The Microsoft SQL Server product group announced that SQL Server, either virtual or bare metal, is fully supported on Storage Spaces Direct. The Exchange team did not give a clear endorsement for Exchange on S2D; they clearly still prefer that Exchange be deployed on physical servers with local JBODs using Exchange Database Availability Groups, or that customers simply move to O365.
Performance

Microsoft showed all kinds of performance numbers, but these were generated on all-NVMe SSD systems with “real-world” workloads like 100% 4K random reads.
Much like VSAN, Storage Spaces is implemented in-kernel. Their messaging is very similar as well, claiming a more efficient IO path and CPU consumption typically well under 10% of system CPU. Like VSAN, the exact overhead of S2D is difficult to measure.
Microsoft is pushing NVMe flash devices for S2D, and here are some examples of their positioning. Their guidance was to avoid NVMe devices if your primary requirement is capacity, as today you will pay a significant premium in $/GB.
Where NVMe shines is in reduced latency and increased performance, with NVMe systems driving 3.4x more IOPS than similar SATA SSDs on S2D.
There is also a significant benefit in CPU consumption, with NVMe consuming nearly 50% less CPU than SATA SSDs on S2D.
I also want to point out that the Azure Storage team is working very closely with Intel and Micron and will be moving parts of Azure to 3D XPoint as soon as possible. This will filter down to S2D at some point, and we should expect Microsoft to stay close to the bleeding edge in supporting new storage-class memory technologies.
Scalability

Storage Spaces Direct will scale up to 16 nodes. In earlier Technical Preview releases, Microsoft supported a minimum cluster size of 4 nodes. Recently they dropped that to 3 nodes, and this week at Ignite they announced support for 2-node configurations. The 2-node configurations will use 2-way mirroring (see the quick capacity sketch below) and require a separate witness that can be deployed on-premises or as a remote witness in Azure. Support for minimum 2-node configs does give them an advantage in ROBO and the mid-market, especially where low cost is more important than high availability.

S2D will support both scale-up (adding local disks) and scale-out (adding nodes in increments of 1).
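To put rough numbers on the mirror options, here is a back-of-the-napkin sketch. It assumes straight mirroring only (2-way keeps two copies of the data for roughly 50% efficiency; 3-way keeps three for roughly 33%) and ignores cache devices and reserve capacity, so treat it as illustrative, not a sizing tool:

```python
def usable_tb(raw_tb: float, copies: int) -> float:
    """Usable capacity under N-way mirroring: raw divided by copies kept."""
    return raw_tb / copies

raw = 4 * 10.0  # hypothetical: 4 nodes x 10 TB of capacity devices each
print(f"2-way mirror: ~{usable_tb(raw, 2):.0f} TB usable")   # ~20 TB
print(f"3-way mirror: ~{usable_tb(raw, 3):.1f} TB usable")   # ~13.3 TB
```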

Product Positioning

Microsoft’s guidance is for customers to use smaller hyper-converged configurations for ROBO and small departmental workloads where cost efficiency is the primary driver. For larger enterprises and hosters/service providers, Microsoft recommends a converged model that allows independent scaling of compute and storage resources.
So How Do Customers Buy Storage Spaces Direct?

Storage Spaces Direct is a feature of Windows Server 2016, and customers get it for free with Datacenter Edition. Customers have the option to DIY or to purchase one of the new Storage Spaces Direct reference architecture solutions from one of 12 different partners.
With the previous Storage Spaces offerings in Server 2012 and 2012 R2, Microsoft put the technology out there for the DIY crowd and hoped that the server vendors would find it interesting enough to add to their portfolios. The problem was that it needed JBOD shelves, and in most server vendor organizations, JBODs fell under the storage teams, not the server teams. There was no way that any storage team was going to jeopardize its high-margin traditional storage business by offering low-margin Storage Spaces-based JBOD solutions. Most vendors didn’t even want to sell JBODs at all. For example, Dell typically overpriced their JBODs to make EqualLogic look like a good deal at just a 15% uplift from a basic JBOD shelf… much like movie theaters get us to buy the large popcorn for 50 cents more.

With Storage Spaces Direct, Microsoft is now dealing with the server part of these organizations… and all these guys care about is selling more servers. So Spaces went from having no partner interest to having support from all of the major server vendors.

However, since S2D is free with Windows and channel partners only get paid for the server sale, there is little incentive for them to push S2D over other HCI options on these platforms. Therefore, I suspect that the majority of S2D adoption will come from customers asking to buy it rather than partners pushing it as an option.
So here is what the partner ecosystem looks like today.

To formalize this, Microsoft created a new program called Windows Server Software Defined (WSSD) allowing partners to submit validated WSSD Reference Architectures. Microsoft provides the validation tools and methodology and the partner does the testing. They get a Windows Server 2016 Certified Logo plus SDDC Additional Qualifiers.

Partners can offer their choice of hyper-converged or converged configurations. Here’s where the classic, unnecessary Microsoft complexity comes in… Within hyper-converged there are two additional options, Standard and Premium. Premium has some additional SDN and security features turned on, but it’s simply a configuration difference. All of these come with Datacenter Edition, so there is no cost or licensing difference.

Here are a few examples of the offerings. S2D offerings will be available starting in mid-October as soon as Server 2016 goes GA.
You may be asking who is responsible for support. Because it’s just a reference architecture, there is a split support model: customers call the server vendor for hardware issues and Microsoft for software issues.

Conclusions…

Storage Spaces has come a long way since Server 2012 and will be considered a viable option for customers looking at software-defined storage solutions. Some of the customer-perceived advantages of S2D will be low cost, minimum 2-node configs, a broad choice of hardware vendors, storage QoS, NVMe support, a single-vendor software stack, and a choice of deployment model (hyper-converged or converged). Probably the most important of those is price. Understanding the differences will be key; it’s tough to compete against ‘good enough’ and ‘free’.

Microsoft has not been very successful driving Storage Spaces adoption over the last two releases. Part of this is due to product immaturity, but most of it is because they didn’t build any real sales program around it. This hasn’t really changed with the WSSD Reference Architecture program. The big boys like Dell, HP, and Cisco are not going to position S2D over their own HCI offerings, and the smaller players like SuperMicro, DataON, and RAID Inc will never drive any significant adoption. Regardless of hardware platform, there is very little incentive for the channel to sell S2D reference architectures over other HCI solutions (where they get paid for both the SW and HW sale). So without a strong sales program, I don’t believe we will see S2D capture significant market share anytime soon.

Until next time, Rob.

Microsoft Exchange Best Practices on Nutanix

To continue on my last blog post on Exchange…

As I mentioned previously, I support SEs from all over the world. And again today, I was asked about the best practices for running Exchange on Nutanix. Funny enough, this question comes in quite often, so I am going to help resolve that. There’s a lot of great info out there, especially from my friend Josh Odgers, who has been leading the charge on this for a long time. Some of his posts can be controversial, but the truth is always there. He’s getting a point across.

This blog post will be updated on a regular basis as things change, and it will also be moved to a permanent part of the netwatch.me resources section. This is meant to be a general best practice guide to help with planning and maintaining a healthy Exchange environment on Nutanix. I will call out hypervisor specifics when required. Now on to the post…


Let’s start out with the basics…

MS Exchange on Nutanix Support

Nutanix provides a 100% supported solution for MS Exchange running on vSphere, Hyper-V, or the Acropolis Hypervisor (AHV).
Here is a breakdown of supported configurations by hypervisor:

  • vSphere (ESXi): Use in-guest iSCSI (Volume Groups) for full support
  • Hyper-V: Use SMB 3.0
  • AHV: Use native vDisks (iSCSI) – SVVP Certification for AHV

Also, check out Josh’s post “Fight the FUD – Support for MS Exchange on Nutanix”, which outlines this very topic. In summary, the customer has the choice to deploy in multiple configurations to suit their needs. But one of the questions I get most often is, “does your SVVP Certification cover running Exchange on all your supported hypervisors?” The answer is not simple. The SVVP submission was for the Acropolis Hypervisor; while that does not cover all of them, we are technically supported on all hypervisors per Microsoft’s supported storage architectures. Microsoft does not specifically mention hyperconverged; it only mentions iSCSI in regard to SAN. IMO, that covers ESXi and AHV.

Now let me explain… SANs are one of the biggest modern datacenter bottlenecks. Data has gravity, so co-locating storage and compute eliminates network bottlenecks. Hyperconverged is way better than SAN, and hence supported, IMO 😉

To end this topic and move on: a Nutanix customer has the choice to deploy in multiple configurations to suit their needs. Pushing a customer to one particular hypervisor is not always in their best interest; having choices now and later is a much better approach, with the overall goal of simplifying the datacenter. As Josh said in one of his blog posts, “Running a standard platform and storage protocol for all workloads is a simple model which reduces the unnecessary complexity of multiple protocols and/or in-guest storage configurations”. I can’t agree more with that statement 🙂

Exchange Performance on Nutanix

Now, this subject will always be controversial and potentially subject to criticism. Internal testing performed by the Nutanix Performance and Engineering team shows that AHV and Hyper-V performance were roughly the same from a hypervisor perspective, while ESXi was about 10% higher. That being said, the next question is usually how performance compares to traditional SAN/NAS. And again, I have to point out, it’s all about data locality. You can’t change the laws of physics: data has gravity, hence we will always beat a traditional SAN architecture.

Check out Josh’s post “Peak Performance vs Real World – Exchange on Nutanix Acropolis Hypervisor”. It gives you a better understanding of what realistic benchmarks for Exchange look like, both in general and on Nutanix. I wholeheartedly agree with Josh when he says, “Benchmarks are of little value without context specific to customer requirements!” Having spent over 15 years building and maintaining Exchange systems, I learned one hard fact: no generic simulator (like Jetstress) can show real-world metrics.

Data Reduction Technologies with Exchange on Nutanix

Recommendation:
  • 1 vDisk per database, 1 vDisk per DB logs
  • 1 container with RF2, in-line compression & EC-X for databases
  • 1 container with RF2 for logs
  • Do not use dedupe with MS Exchange! Microsoft does not support data deduplication (note: underlying storage deduplication such as Nutanix dedupe is not mentioned, but implied).
Reference: https://technet.microsoft.com/en-us/library/ee832792(v=exchg.150).aspx

Data Reduction Estimates (see the sketch below):

  • Rule of thumb: always size without data reduction if possible.
  • Conservative assumption for compression for Exchange = 1.3:1
  • Aggressive assumption for compression for Exchange = 1.6:1
  • Conservative assumption for EC-X for Exchange = 1.1:1
  • Aggressive assumption for EC-X for Exchange = 1.25:1
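Here is a rough planning sketch that treats those ratios as simple multipliers on top of RF2 (which keeps two copies of every write). That is a simplification of how compression and EC-X actually interact, so use it for rough planning only:

```python
def effective_capacity_tb(raw_tb: float, rf: int = 2,
                          compression: float = 1.3, ecx: float = 1.1) -> float:
    """Usable capacity after replication, scaled by data-reduction ratios."""
    return (raw_tb / rf) * compression * ecx

raw = 80.0  # hypothetical raw TB in the cluster
print(f"Conservative: ~{effective_capacity_tb(raw):.1f} TB")  # 1.3:1 compression, 1.1:1 EC-X
print(f"Aggressive:   ~{effective_capacity_tb(raw, compression=1.6, ecx=1.25):.1f} TB")
```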

Questions to ask yourself when planning an Exchange environment (a rough sizing sketch follows these questions):

  • How many users? e.g. 10,000
  • How many user profiles do you need? e.g. 2: Standard and Executives
  • How large a mailbox (excluding archiving) per user? e.g. 1 GB, 2 GB, 5 GB
  • How many messages per day do you want to support per user? Light = 50, Medium = 100, Heavy = 150+
  • Do you require site resiliency?
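As a teaser before we get to the calculator, here is a back-of-the-napkin capacity sketch based on those questions. The 20% overhead figure for database whitespace, indexes, and the dumpster is my own illustrative assumption; the Exchange Server Role Calculator (below) does this properly with far more inputs:

```python
def db_storage_gb(users: int, quota_gb: float, overhead: float = 0.20) -> float:
    """Rough mailbox database footprint: quota plus assumed overhead."""
    return users * quota_gb * (1 + overhead)

standard = db_storage_gb(9500, 2.0)  # hypothetical: 9,500 standard users at 2 GB
execs = db_storage_gb(500, 5.0)      # hypothetical: 500 executives at 5 GB
total_tb = (standard + execs) / 1024
print(f"Estimated DB storage: ~{total_tb:.1f} TB before logs and DAG copies")
```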

These are some of the basic questions you need to answer, and this is where the Exchange Server Role Calculator comes in. It’s a great tool, but like any tool, you need to give it good input to get good output. The function of the tool is as the name implies.

Exchange Server Role Calculator Defined

At the time of this writing, version 7.8 is the latest and greatest. Do note, I would not call this tool perfect, but it gets you pretty close. Like anything else, the Exchange team is still learning real-world behavior, and this is where a good, experienced Exchange engineer comes into play.

IMO, there is an art and a science to sizing Exchange. The days of Exchange being just a simple mail server are long over. These days, it’s much more complex, supporting multiple forms of ingress and egress traffic for different functions (mobile, web, SMTP, Skype integration, etc.). Each of these functions has varying load considerations and supports more visible features like Outlook Web Access and Exchange ActiveSync. Also, I am still of the opinion that the calculator does not take into consideration the number of devices that one mailbox services.
Considering this complexity, you can see how undersizing or oversizing can happen easily. If you size correctly at the beginning with Nutanix, then it’s just an easy scale-out, buy-as-you-need-it situation. And then you know what happens: finally, for the first time, predictability in your budgets. I remember the days, not that long ago, when I had a client retire a SAN, not for space constraints, but for IO constraints. At the time, all I got from the client was “can’t we use it for something else?”, and yes, I replied with “use it as a WSUS repository for patching the Exchange environment” 😉

In my next post, I will dive into the Exchange Role Calculator in much more depth and go over some sizing examples for Exchange. We’ll mainly focus on mailbox storage and then move on to other role sizing considerations. I also plan to cover the other aspects of maintaining a healthy Exchange environment (i.e., message hygiene, global and local load balancing, integrations, and end-user experience) in subsequent posts.
Below are the official Best Practice Guides from Nutanix and some public case studies.

Until next time, Rob…..

Nutanix Official Best Practice Guides
MS Exchange on Nutanix / vSphere Best practice guide: http://go.nutanix.com/VirtualizingMicrosoftExchangeonWeb-ScaleConvergedInfrastructure.html

Public Case Studies for Nutanix customers using Exchange
Richter: http://go.nutanix.com/rs/nutanix/images/Nutanix-Case-Study-Richter.pdf
Riverside: http://www.nutanix.com/resource/riverside-for-riversides-server-and-storage-consolidation-nutanix-fits-like-a-glove/