What Is Microsoft Azure – A Complete Guide


Joining 5nine Software as Director of Product Management

Today, I am excited to announce that I will be joining the awesome team at 5nine Software as Director of Product Management. My primary responsibility will be product strategy and direction for 5nine’s security and management solutions.
So, you ask, why Product Management? It’s been a lifelong dream to be part of shaping the direction of a technology solution. By joining 5nine, I hope to simplify IT, Cloud and beyond, because there’s always a better way 🙂

“What prepared me for this was very surprising looking back.”

Life at Nutanix

Over the past two and a half years at Nutanix, I managed 84 partners across 146 solutions. The partner solutions that my team managed and validated spanned all aspects of technology: monitoring, backup, DR, big data, DevOps, security, networking, databases, and the list goes on.

5nine Software was one of the first partners I validated, and I was already familiar with them. 5nine Manager was a tool I had used in the field during my consulting days, but I had not seen their security solution yet. It was during the Nutanix Ready process that I was first introduced to 5nine Security. I remember being super impressed at the time with how they integrated with Hyper-V.

Shortly after 5nine’s Nutanix Ready validation, my colleague and Alliance Manager Tommy Gustaveson and I interviewed 5nine’s then VP of Alliances, Symon Perriman. We enjoyed understanding 5nine’s vision and also getting to know a little more about Symon’s journey. Yes, I admit, I had a little hero worship for him. But who can blame me? Symon is a one-of-a-kind person, and to this day I am proud to call him a friend 🙂

So, on with the story: part of my job at Nutanix was front-ending the Product Managers (PMs). The PMs were always pulled in ten different directions, and they came to trust us with some of these partner-facing activities. This included understanding the partner technology, how we could go to market together, and how the partner would integrate with Nutanix. We worked with Alliance Managers and PMs to determine whether a partnership would be a good fit.

Once the business side of alliances onboards a partner, the handoff to the Nutanix Ready team happens. The team spends a lot of time understanding each partner solution and does a deep investigation of any issues between the partner solution and Nutanix. This is vetted by Nutanix’s support and solutions teams, which in turn gives the customer a certain degree of comfort that the partner solutions were tested, validated, and will work on Nutanix 🙂

Over the course of my time at Nutanix, and my career to that extent, I have had the chance to see many, many UIs/UXes and the engines (code) behind them. I’ve seen what works and what doesn’t. The common theme of what doesn’t work is overcomplicating the user experience.
We are in the age of managing multiple, geographically distributed data centers and clouds, plus backups, DR, networking and SDNs, and we need to secure it all. If your UI even vaguely resembles an airplane cockpit, you’re doing it wrong. It is an inefficient use of an IT Pro’s time and energy. They just want to manage their production applications and have an easy management experience.

I will never trade the time I had at Nutanix, but times are a-changing 🙂  As I mentioned in a previous post, “Building Nutanix Ready”, “it was the best of times and the worst of times”. I have not finished that series yet, but needless to say, it prepared me for the next step in my journey.

So, keep an eye on my blog, Twitter feed, etc., because things are about to shift into high gear.

Until next time and happy holidays,
Robert Corradini, MVP – Cloud & Datacenter

Storage Spaces Direct Explained – Applications & Performance

Applications

The Microsoft SQL Server product group announced that SQL Server, either virtual or bare metal, is fully supported on Storage Spaces Direct. The Exchange team did not give a clear endorsement for Exchange on S2D and clearly still prefers that Exchange be deployed on physical servers with local JBODs using Exchange Database Availability Groups, or that customers simply move to O365.
image031
Performance

Microsoft showed all kinds of performance numbers, but these were generated on all-NVMe SSD systems using “real-world” workloads like 100% 4K random reads.
image032
Much like VSAN, Storage Spaces is implemented in-kernel. Their messaging is very similar as well, claiming a more efficient IO path and CPU consumption typically well under 10% of system CPU. Like VSAN, the exact overhead of S2D is difficult to measure.
image033
Microsoft is pushing NVMe flash devices for S2D, and here are some examples of their positioning.
Their guidance was to avoid NVMe devices if your primary requirement is capacity, as today you will pay a significant $/GB premium.
image034 image035
Where NVMe shines is in reduced latency and increased performance, with NVMe systems driving 3.4x more IOPS than similar SATA SSDs on S2D.
image036 image037
There is also a significant benefit to CPU consumption, with NVMe consuming nearly 50% less CPU than SATA SSDs on S2D.
image038
I also want to point out that the Azure Storage team is working very closely with Intel and Micron and will be moving parts of Azure to 3D XPoint as soon as possible. This will filter down to S2D at some point, and we should expect Microsoft to stay close to the bleeding edge in supporting new storage-class memory technologies.
Scalability

Storage Spaces Direct will scale up to 16 nodes. Earlier Technical Preview releases supported a minimum cluster size of 4 nodes. Recently Microsoft dropped that to 3 nodes, and this week at Ignite they announced support for 2-node configurations. The 2-node configurations will use 2-way mirroring and require a separate witness that can be deployed on-premises or as a remote witness in Azure. Support for minimum 2-node configurations gives them an advantage in ROBO and the mid-market, especially when low cost is more important than high availability.

S2D will support both scale-up (adding additional local disks) and scale-out (with support for adding nodes in increments of one).
image039 image040 image041 image042
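
To make that concrete, here is a minimal PowerShell sketch of standing up a small S2D cluster, including the Azure cloud witness that 2-node configurations rely on. The node names, volume size, and storage account details are placeholders, so treat it as an illustration rather than a production recipe.

# Prerequisites on every node (hypothetical node names throughout)
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, FS-FileServer -IncludeManagementTools

# Validate the nodes and create the cluster without any shared storage
Test-Cluster -Node "S2D-N1","S2D-N2" -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"
New-Cluster -Name "S2DCluster" -Node "S2D-N1","S2D-N2" -NoStorage

# Enable Storage Spaces Direct and carve out a mirrored CSV volume
Enable-ClusterStorageSpacesDirect
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -Size 1TB

# 2-node clusters need a witness; a cloud witness in Azure works well for ROBO sites
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-key>"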

Product Positioning

Microsoft’s guidance is for customers to use smaller hyper-converged configurations for ROBO and small departmental workloads where cost efficiency is the primary driver. For larger enterprises and hosters/service providers, Microsoft recommends a converged model that allows the independent scaling of compute and storage resources.
image043
So How Do Customers Buy Storage Spaces Direct?

Storage Spaces Direct is a feature of Windows Server 2016, and customers get it for free with Datacenter Edition. Customers will have the option to DIY or to purchase one of the new Storage Spaces Direct reference architecture solutions from one of 12 different partners.
image044
With the previous Storage Spaces offerings in Server 2012 and 2012 R2, Microsoft put the technology out there for the DIY crowd and hoped that the server vendors would find it interesting enough to add to their portfolios. The problem was that it needed JBOD shelves, and in most server vendor organizations, JBODs fell under the storage teams, not the server teams. There was no way any storage team was going to jeopardize its high-margin traditional storage business by offering low-margin Storage Spaces-based JBOD solutions. Most vendors didn’t even want to sell JBODs at all. For example, Dell typically overpriced their JBODs to make EqualLogic look like a good deal at just a 15% uplift from a basic JBOD shelf, much like movie theaters get us to buy the large popcorn for 50 cents more.

With Storage Spaces Direct, Microsoft is now dealing with the server part of these organizations… and all these guys care about is selling more servers. So Spaces went from having no partner interest to having support from all of the major server vendors.

However, since S2D is free with Windows and channel partners only get paid for the server sale, there is little incentive for them to push S2D over other HCI options on these platforms. Therefore, I suspect that the majority of S2D adoption will come from customers asking to buy it rather than partners pushing it as an option.
So here is what the partner ecosystem looks like today.
image045 image046

To formalize this, Microsoft created a new program called Windows Server Software Defined (WSSD) that allows partners to submit validated WSSD Reference Architectures. Microsoft provides the validation tools and methodology, and the partner does the testing. Partners get a Windows Server 2016 Certified logo plus the SDDC Additional Qualifiers.
image047 image048

Partners can offer their choice of Hyper-Converged or Converged configurations. Here’s where the classic Microsoft unnecessary complexity comes in… within Hyper-Converged there are two additional options, Standard and Premium. Premium has some additional SDN and security features turned on, but it’s simply a configuration difference. All of these come with Datacenter Edition, so there is no cost or licensing difference.
image049 image050

Here are a few examples of the offerings. S2D offerings will be available starting in mid-October as soon as Server 2016 goes GA.
image050 image051 image052
You may be asking who is responsible for support. Because it’s just a reference architecture, there is a split support model: customers will call the server vendor for hardware issues and Microsoft for software issues.

Conclusions…

Storage Spaces has come a long way since Server 2012 and will be considered a viable option for customers looking at software-defined storage solutions. Some of the customer-perceived advantages of S2D will be low cost, a minimum 2-node configuration, a broad choice of hardware vendors, storage QoS, NVMe support, a single-vendor software stack, and a choice of deployment model (Hyper-Converged or Converged). Probably the most important of those is price. Understanding the differences will be key; it’s tough to compete against ‘good enough’ and ‘free’.

Microsoft has not been very successful driving Storage Spaces adoption in the last two releases. Part of this is due to product immaturity, but most of it is because they didn’t build any real sales program around it. This hasn’t really changed with the WSSD Reference Architecture program. The big boys like Dell, HP, and Cisco are not going to position S2D over their own HCI offerings, and the smaller players like SuperMicro, DataON, and RAID Inc. will never drive any significant adoption. Regardless of hardware platform, there is very little incentive for the channel to sell S2D reference architectures over other HCI solutions (where they get paid for both the software and hardware sale). So without a strong sales program, I don’t believe we will see S2D capture significant market share anytime soon.

Until next time, Rob.

Microsoft Azure Cloud Series – Azure External Connectivity Options – Part 4

Hello everyone! Today I will go over the Azure external connectivity options. There is a lot of flexibility depending on the needs of your workload/application with Azure. So let’s dive in and go through each option, starting at the bottom as shown in the handy graphic below:

Azure External Connectivity Options

azureexternal Azure External Connectivity Options

Private Site-to-Site Connectivity – ExpressRoute

ExpressRoute provides organizations a private, dedicated, high-throughput network connection between Azure datacenters and their on-premises environment. See my blog post on ExpressRoute from last year for more details. Below is a comparison of a traditional site-to-site tunnel versus ExpressRoute.

ER-compare Azure External Connectivity Options
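
For reference, provisioning the Azure side of an ExpressRoute circuit from PowerShell (ARM mode) looks roughly like the sketch below. The resource group, provider, peering location, and bandwidth values are placeholders you would swap for what your carrier actually offers.

# List the connectivity providers, peering locations, and bandwidths available
Get-AzureRmExpressRouteServiceProvider

# Create the circuit; the service key it returns is what you hand to your provider
New-AzureRmExpressRouteCircuit -Name "ER-Circuit01" -ResourceGroupName "Networking-RG" `
    -Location "West US" -SkuTier Standard -SkuFamily MeteredData `
    -ServiceProviderName "Equinix" -PeeringLocation "Silicon Valley" -BandwidthInMbps 200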

Site-to-Site Connectivity

A Site-to-Site VPN allows you to create a secure connection between your on-premises site and your virtual network. To create a Site-to-Site connection, a VPN device that is located on your on-premises network is configured to create a secure connection with the Azure VPN Gateway. Once the connection is created, resources on your local network and resources located in your virtual network can communicate directly and securely. Site-to-Site connections do not require you to establish a separate connection for each client computer on your local network to access resources in the virtual network.
AzureS2S Azure External Connectivity Options
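
Assuming the virtual network and its GatewaySubnet already exist, a hedged PowerShell sketch of wiring up the Azure side of a Site-to-Site tunnel might look like this (the names, address ranges, and shared key are all placeholders):

# Represent the on-premises VPN device and the address space behind it
$local = New-AzureRmLocalNetworkGateway -Name "OnPremGateway" -ResourceGroupName "Networking-RG" `
    -Location "East US" -GatewayIpAddress "203.0.113.10" -AddressPrefix "192.168.0.0/24"

# Public IP and VPN gateway on the Azure side (gateway creation can take 30+ minutes)
$pip    = New-AzureRmPublicIpAddress -Name "VNetGwIP" -ResourceGroupName "Networking-RG" `
    -Location "East US" -AllocationMethod Dynamic
$vnet   = Get-AzureRmVirtualNetwork -Name "MyVNet" -ResourceGroupName "Networking-RG"
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$ipconf = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwipconfig" -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
$gw     = New-AzureRmVirtualNetworkGateway -Name "MyVNetGateway" -ResourceGroupName "Networking-RG" `
    -Location "East US" -IpConfigurations $ipconf -GatewayType Vpn -VpnType RouteBased

# Tie the two ends together with a pre-shared key that matches your on-premises device
New-AzureRmVirtualNetworkGatewayConnection -Name "S2SConnection" -ResourceGroupName "Networking-RG" `
    -Location "East US" -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $local `
    -ConnectionType IPsec -SharedKey "ReplaceWithYourSharedKey"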

Point-to-Site

A Point-to-Site configuration allows you to create a secure connection to your virtual network from a client computer, individually. A VPN connection is established by starting the connection from the client computer. This is an excellent solution when you want to connect to your VNET from a remote location, such as from home or a conference, or when you only have a few clients that need to connect to a virtual network. Point-to-Site connections do not require a VPN device or a public-facing IP address in order to work.
AzureP2S Azure External Connectivity Options
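
On the Azure side, a rough PowerShell sketch of a Point-to-Site setup is below. It assumes a route-based gateway already exists and that a self-signed root certificate is sitting in your user certificate store; the names and address pool are placeholders.

# Assign the address pool that VPN clients will receive
$gw = Get-AzureRmVirtualNetworkGateway -Name "MyVNetGateway" -ResourceGroupName "Networking-RG"
Set-AzureRmVirtualNetworkGatewayVpnClientConfig -VirtualNetworkGateway $gw -VpnClientAddressPool "172.16.201.0/24"

# Upload the public data of the root certificate used to validate client certificates
$rootCert = Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Subject -eq "CN=P2SRootCert" }
$certData = [System.Convert]::ToBase64String($rootCert.RawData)
Add-AzureRmVpnClientRootCertificate -VpnClientRootCertificateName "P2SRootCert" `
    -VirtualNetworkGatewayName "MyVNetGateway" -ResourceGroupName "Networking-RG" -PublicCertData $certData

# Download the VPN client package to install on each client machine
Get-AzureRmVpnClientPackage -ResourceGroupName "Networking-RG" `
    -VirtualNetworkGatewayName "MyVNetGateway" -ProcessorArchitecture Amd64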

Service Bus

Service Bus is a generic, cloud-based messaging system for connecting just about anything – applications, services, and devices – wherever they are. Here are some of the basic fundamentals.
Different situations call for different styles of communication, and Service Bus covers the more complex scenarios. Sometimes, letting applications send and receive messages through a simple queue is the best solution. In other situations, an ordinary queue isn’t enough; a queue with a publish-and-subscribe mechanism is better. And in some cases, all that’s really needed is a connection between applications; queues aren’t required. Service Bus provides all three options, enabling your applications to interact in several different ways.
Service Bus is a multi-tenant cloud service, which means that the service is shared by multiple users. Each user, such as an application developer, creates a namespace, then defines the communication mechanisms she needs within that namespace. See the picture below for how this looks.
Service Bus provides a multi-tenant service for connecting applications through the cloud.
svcbus_01_architecture Azure External Connectivity Options
Within a namespace, you can use one or more instances of four different communication mechanisms, each of which connects applications in a different way. The choices are:

  • Queues, which allow one-directional communication. Each queue acts as an intermediary (sometimes called a broker) that stores sent messages until they are received. Each message is received by a single recipient.
  • Topics, which provide one-directional communication using subscriptions; a single topic can have multiple subscriptions. Like a queue, a topic acts as a broker, but each subscription can optionally use a filter to receive only messages that match specific criteria.
  • Relays, which provide bi-directional communication. Unlike queues and topics, a relay doesn’t store in-flight messages; it’s not a broker. Instead, it just passes them on to the destination application.
  • Event Hubs, which provide event and telemetry ingress to the cloud at massive scale, with low latency and high reliability.

When you create a queue, topic, relay, or Event Hub, you give it a name. Combined with whatever you called your namespace, this name creates a unique identifier for the object. Applications can provide this name to Service Bus, then use that queue, topic, relay, or Event Hub to communicate with one another.
To use any of these objects, Windows applications can use Windows Communication Foundation (WCF). For queues, topics, and Event Hubs, Windows applications can also use Service Bus-defined messaging APIs. To make these objects easier to use from non-Windows applications, Microsoft provides SDKs for Java, Node.js, and other languages. You can also access queues, topics, and Event Hubs using REST APIs over HTTP.
It’s important to understand that even though Service Bus itself runs in the cloud (that is, in Microsoft’s Azure datacenters), applications that use it can run anywhere. You can use Service Bus to connect applications running on Azure, for example, or applications running inside your own datacenter. You can also use it to connect an application running on Azure or another cloud platform with an on-premises application or with tablets and phones. It’s even possible to connect household appliances, sensors, and other devices to a central application or to one another. Again, Service Bus is a generic communication mechanism in the cloud that’s accessible from pretty much anywhere. How you use it depends on what your applications need to do.
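
Since queues are reachable over plain HTTPS, here is a hedged PowerShell sketch of posting a message to a queue through the Service Bus REST API. The namespace and queue names are made up, and the SAS token is assumed to have been generated already from one of the namespace’s shared access policies.

# Hypothetical namespace and queue; the SAS token comes from a shared access policy
$namespace = "mynamespace"
$queue     = "orders"
$sasToken  = "SharedAccessSignature sr=...&sig=...&se=...&skn=..."

# POST to <namespace>.servicebus.windows.net/<queue>/messages sends a single message
Invoke-RestMethod -Method Post `
    -Uri "https://$namespace.servicebus.windows.net/$queue/messages" `
    -Headers @{ Authorization = $sasToken } `
    -ContentType "application/json" `
    -Body '{"orderId": 42, "status": "created"}'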

Azure Data Factory

Data Factory is a cloud-based data integration service that orchestrates and automates the movement and transformation of data. Just like a manufacturing factory that runs equipment to take raw materials and transform them into finished goods, Data Factory orchestrates existing services that collect raw data and transform it into ready-to-use information.
Data Factory works across on-premises and cloud data sources and SaaS to ingest, prepare, transform, analyze, and publish your data. Use Data Factory to compose services into managed data flow pipelines to transform your data using services like Azure HDInsight (Hadoop) and Azure Batch for your big data computing needs, and with Azure Machine Learning to operationalize your analytics solutions. Go beyond just a tabular monitoring view, and use the rich visualizations of Data Factory to quickly display the lineage and dependencies between your data pipelines. Monitor all of your data flow pipelines from a single unified view to easily pinpoint issues and set up monitoring alerts.

data-factory-overview Azure External Connectivity Options
Collect data from many different on-premises data sources, ingest and prepare it, organize and analyze it with a range of transformations, then publish ready-to-use data for consumption.
You can use Data Factory anytime you need to collect data of different shapes and sizes, transform it, and publish it to extract deep insights – all on a reliable schedule. Data Factory is used to create highly available data flow pipelines for many scenarios across different industries for their analytics pipeline needs. Online retailers use it to generate personalized product recommendations based on customer browsing behavior. Game studios use it to understand the effectiveness of their marketing campaigns and the use cases go on…..
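
To give a flavor of driving Data Factory from PowerShell, here is a minimal, hedged sketch. The resource group, factory name, and JSON definition files are placeholders, and the linked service and dataset referenced by the pipeline JSON are assumed to be described in those files.

# Create the data factory itself
New-AzureRmDataFactory -ResourceGroupName "Analytics-RG" -Name "ContosoADF" -Location "West US"

# Deploy the linked service, dataset, and pipeline from their JSON definitions
New-AzureRmDataFactoryLinkedService -ResourceGroupName "Analytics-RG" -DataFactoryName "ContosoADF" -File ".\StorageLinkedService.json"
New-AzureRmDataFactoryDataset -ResourceGroupName "Analytics-RG" -DataFactoryName "ContosoADF" -File ".\InputDataset.json"
New-AzureRmDataFactoryPipeline -ResourceGroupName "Analytics-RG" -DataFactoryName "ContosoADF" -File ".\CopyPipeline.json"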
So, as you can see, there are lots of options for connecting your on-premises environment with Azure, which dovetails nicely into the next topic in the series, Azure VNETs….

Until next time, Rob……

Microsoft Azure Cloud Series – Azure Resource Manager – Part 3

Hello everybody, time to get in-depth with Azure Resource Manager. But before I dive into Azure Resource Manager, I would like to quickly review some of the basics in Azure. I will start with a rundown of the Azure global footprint. Then I will go into how Azure charges are incurred. And finally, I will dive into Azure Resource Manager V2 and compare it to the older Azure Service Manager V1. Sit tight and let’s go for an Azure ride 😉

Azure Global Footprint
Azure Resource Manager

Microsoft Azure itself is deployed around the world and involves the concept of regions, which is where you select to place and run your code.  Each region has a Microsoft Azure data center.  These data centers are massive facilities that host tens of thousands or, in some cases, hundreds of thousands of servers.  Currently, Microsoft has:

  • Four regions in North America
  • Two regions in Europe
  • Two regions in Asia
  • One region in Japan

As shown above, Microsoft also has a number of Content Delivery Network (CDN) edge points.  They can be used to cache your content and deliver it even faster to end users.
Once you build an application, you can choose any location in the world where you want to run it, and you can move your workloads from region to region. You can also run your application in multiple regions simultaneously, or just direct traffic and end users to whichever version of the app is closest to them.

How are Azure Charges Incurred?

This may be different for many of you who are familiar with hosting providers and on-premises systems.
Simply put, with Microsoft Azure, you pay only for what you use:

  • There are no upfront costs
  • There is no need to buy any upfront server licenses; this is included in the price
  • VM (IaaS and web/worker role) usage is billed by the minute
  • For VMs (IaaS only) that are stopped in Microsoft Azure, only storage charges apply
  • Likewise, if you use a SQL database through the SQL Database feature in Microsoft Azure, you do not have to buy a SQL Server license; this is also included in the price
  • For compute services, such as VMs and websites, you only pay by the hour

This gives you the flexibility to run your applications very cost-effectively.
You can scale your solutions up and down, or even turn them on and off, as necessary. This also opens up a wide range of possibilities in terms of the new types of apps you can build.

Managing Azure Deployments

Microsoft Azure currently has two management models:

  • Azure Service Manager (ASM) has been around since 2009 and has been due for an upgrade.
  • Azure Resource Manager (ARM), released last summer, supports modern deployment practices. It is designed to be extensible to all current and future services.

Azure Service Manager V1

  • Traditional way to deploy and manage applications hosted in Azure
  • Azure Portal https://manage.windowsazure.com
  • PowerShell / CLI (default mode)
  • REST API

Azure Resource Manager V2

  • Modern way to deploy and manage applications hosted in Azure
  • Azure Portal https://portal.azure.com
  • PowerShell / CLI (ARM mode) – see the quick cmdlet comparison after these lists
  • REST API
  • Azure Resource Management Library for .NET
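
In day-to-day PowerShell terms, the split shows up as two separate sets of cmdlets. A minimal, hedged comparison (the VM and resource group names are placeholders):

# ASM V1 ("classic") cmdlets live in the Azure module
Add-AzureAccount
Get-AzureVM -ServiceName "ClassicCloudService" -Name "ClassicVM01"

# ARM V2 cmdlets live in the AzureRM modules and are resource-group aware
Login-AzureRmAccount
Get-AzureRmVM -ResourceGroupName "Production-RG" -Name "ArmVM01"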

Why and what is Azure Resource Manager?

Today’s challenge with Azure Service Manager V1: it’s difficult to…

  • Set and manage permissions – only co-admin and service admin
  • Monitor and have alerting rules – limited to Management Services and basic KPI in portal
  • Billing – through the billing portal
  • Deployment – complex PowerShell to gather all components for an application
  • Visualize a group of resources in a logical view, including monitoring/billing

ASM V1 Portal – Resource Centric Views

Azure Resource Manager
After working with the current ASM V1 for a number of years now, here’s the breakdown:

  • Resources are provisioned in isolation
  • Finding resources is not so easy
  • Deployment is more complex than on-premises
  • Management of the application is challenging
  • Proper use of resources becomes more abstract
  • Isolation makes communications a challenge

Ok, Rob, then why does Microsoft still keep ASM V1 in production?  
Answer: As of the writing of this blog post, not all features have been ported over to Azure Resource Manager V2. Once all features and services have been ported over, I expect Microsoft to end-of-life Azure Service Manager V1.

Azure Resource Manager Overview

Azure Resource Manager
Azure Resource Manager enables you to work with the resources in your solution as a group.  You can deploy, update or delete all of the resources for your solution in a single, coordinated operation.  You use a template for deployment and that template can work for different environments such as testing, staging and production.  Resource Manager provides security, auditing, and tagging features to help you manage your resources after deployment.
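
As a concrete illustration, here is a minimal, hedged PowerShell sketch of that workflow. The resource group, template, and parameter file names are placeholders, and the same template could be pointed at testing, staging, or production parameter files.

# Log in with the ARM cmdlets
Login-AzureRmAccount

# Create (or reuse) the resource group that will own everything in the deployment
New-AzureRmResourceGroup -Name "MyApp-Test-RG" -Location "East US"

# Deploy the whole solution from a JSON template in a single, coordinated operation
New-AzureRmResourceGroupDeployment -Name "MyAppDeployment" -ResourceGroupName "MyApp-Test-RG" `
    -TemplateFile ".\azuredeploy.json" -TemplateParameterFile ".\azuredeploy.parameters.test.json"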

Benefits of ARM

  • Desired-state deployment
    • ARM does desired-state deployment of resources. It does not do desired-state configuration inside these resources (e.g., VMs), although it can initiate the process of desired-state configuration.
  • Faster deployments
    • ARM can deploy in true parallel as compared to semi-sequential in ASM
  • Role-based access control (RBAC)
    • RBAC is fully integrated with Azure Active Directory
  • Resource-provider model
    • Resource-provider model is intended to be fully extensible.
  • Common interface for Azure and Azure Stack
    • When Azure Stack is released, the same API model will apply on-premises and in the cloud

ARM Definitions and What They Mean

  • Resource – Atomic unit of deployment
  • Resource group – Collection of resources
  • Resource provider – Manages specific kinds of resources
  • Resource type – Specifies the type of resource

Ok, let’s dive into the details of each now.

Resource Group (RG)
Azure Resource Manager

A Resource Group is a Unit of Management providing:

  • Application Life-Cycle Containment – Deployment, update, delete and status
    • You can deploy everything included in a resource group together, thereby maintaining versions of an application along with its resources
  • Declarative solution for Deployment – “Config as Code”
    • Resource group deployments are described in .json, as declarative/configuration code
  • Grouping – Metering, billing, quota: applied and rolled up to the group
    • Resource groups provide a logical grouping of resources
  • Consistent Management Layer
    • In the V2 portal, everything is controlled in an RG. RGs can be accessed via REST APIs and resource providers
  • Access Control – Scope for RBAC permissions
    • You can only use RBAC in the new portal, and the highest level generally used for RBAC is the resource group level (see the sketch just after this list)
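
Here is a small, hedged sketch of those last two ideas together: creating a resource group and scoping an RBAC role assignment to it. The group name, location, and user are placeholders.

# Create the resource group (tags for metering/billing roll-ups can also be applied here)
New-AzureRmResourceGroup -Name "Payroll-Prod-RG" -Location "West US"

# Grant a user Contributor rights scoped to just this resource group
New-AzureRmRoleAssignment -SignInName "jane@contoso.com" -RoleDefinitionName "Contributor" -ResourceGroupName "Payroll-Prod-RG"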

But Rob, that sounds great. Should these resources (VMs, DBs, storage, etc.) be in the same Resource Group or in different ones?
Hint:  Do they have common life cycle and management?
Azure Resource Manager
Answer: It’s up to you

Resource Groups Best Practices

  • Tightly coupled containers of multiple resources of similar or different types
    • When resources are in the container, they have a common life cycle. You can deploy these things together, put RBAC on them together with one request and they can know about each other
  • Every resource *must* exist in one and only one resource group
    • Every resource must be in ONE resource group, important for RBAC
  • Resource groups can span regions
    • Resources don’t have to live in the same location; you can deploy them to multiple regions

A few final thoughts on Resource Groups and their deployment scenarios before we move on.

  • The most significant question is one of life cycle and what to place in a resource group
  • You can apply RBAC, but is this right for a particular resource group?
  • Sometimes resources are shared across multiple applications; for example, a VM’s disks could live in a storage account in a different resource group
  • Sometimes the life cycles are distinct and managed by different people
  • There is no hard and fast rule

Resource Providers

A Resource Provider is used by Azure Resource Manager to manage distinct types of resources. In your JSON template, you describe what the resource provider expects to see so that the provider (sitting out in Azure) can build the resource you want, for example a SQL Server, a SQL database, or a VM.
Resource providers are an extensibility point, allowing new providers to be added in a consistent manner as new services are added to Azure; anyone can write their own provider.

Resource Provider Types Examples
Azure Resource Manager

Ok, Rob, how do I know what resource providers are available?
Using PowerShell, log in to your Azure account and then run:
Get-AzureRmResourceProvider
Azure Resource Manager
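
To dig a little deeper, here is a hedged sketch of exploring and registering providers; the provider namespaces shown are just common examples.

# List every provider, including ones not yet registered in your subscription
Get-AzureRmResourceProvider -ListAvailable | Select-Object ProviderNamespace, RegistrationState

# Drill into one provider to see the resource types it exposes
(Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Compute).ResourceTypes

# Register a provider before using its resource types in a template
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network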

Tools typically used with ARM

  • PowerShell – Blog Post coming soon
    • PowerShell is used to deploy the ARM templates and can be used to download log files from the resource group to analyze issues (see the troubleshooting sketch after this list)
  • Troubleshooting in the portal – Blog Post coming soon
  • Visual Studio
    • Although not required, Visual Studio will more than likely be the tool of choice for creating ARM templates – Blog Post coming soon
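
For the troubleshooting side, here is a hedged sketch of pulling deployment history and the individual operations for a resource group; the names are placeholders that match the deployment example earlier in this post.

# List deployments for a resource group and check their provisioning state
Get-AzureRmResourceGroupDeployment -ResourceGroupName "MyApp-Test-RG"

# Inspect the individual operations of one deployment to find the failing resource
Get-AzureRmResourceGroupDeploymentOperation -ResourceGroupName "MyApp-Test-RG" -DeploymentName "MyAppDeployment" | Format-List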

Well, that wraps up my blog post on Azure Resource Manager.  We covered a lot and have much more to go.  Stay tuned…..

Until next time, Rob…