About netwatch

Robert is a creative, savvy technical engineer in cloud computing, able to move masterfully back and forth from a specific point to the entire structure. As a Microsoft Solutions Architect with many years of project management experience, he has a solid, evolving skill set in cloud and hybrid computing. Robert is a dynamic collaborator who knows when to push his team in a new direction and when to pause and let the ideas of others percolate. Creativity, balance, input, analysis and synthesis are all processes at play when he leads a team. As a technology visionary with 20+ years leading the design, development and implementation of high-performance technology solutions, Robert has a strong record of success in managing robust IT High Reliability Organizations (HROs). He has a proven ability to bring the benefits of IT to bear on business issues while delivering applications and infrastructure, controlling costs and managing risks. Robert has spent the better part of 20+ years working with Microsoft workloads. For the 15 years before joining Nutanix, Robert spent his time leading System Integrators’ Microsoft-related practices. In April 2016, Robert was first awarded Microsoft® Most Valuable Professional (MVP) in Cloud & Datacenter Management, and he was subsequently re-awarded for 2017/2018. The Microsoft MVP Award is an annual award that recognizes exceptional technology community leaders worldwide who actively share their high-quality, real-world expertise with users and Microsoft. With just over 4,000 awardees worldwide, Microsoft MVPs represent a highly select group of experts. MVPs share a deep commitment to community and a willingness to help others, and they represent the diversity of today’s technical communities. MVPs are present in over 90 countries, speak more than 40 languages, and span numerous Microsoft technologies. MVPs share a passion for technology, a willingness to help others, and a commitment to the community. These are the qualities that make MVPs exceptional community leaders. MVPs’ efforts enhance people’s lives and contribute to our industry’s success in many ways. By sharing their knowledge and experiences, and providing objective feedback, they help people solve problems and discover new capabilities every day.

Lessons Learned – Managing your Critical IT Infrastructure during a Pandemic

Worldwide Craziness

The Novel Coronavirus has already devastated the global economy. Historically, most business continuity plans for data centers are based on local scenarios, where “acts of God” wreaked havoc in one place. Rarely had anyone considered that the one place might be the entire planet.

A Change in IT Mindset

It is not — at least not yet — the equivalent of a worldwide hurricane. Today, the world’s data centers are, for the most part, functional. Modern enterprise data centers have already been designed to operate with as few as three full-time staff members onsite.

You don’t have to look far to see how the global COVID-19 pandemic has fundamentally upended IT. As organizations in all sectors have rapidly emptied their offices and sent their employees home to comply with ever more expansive shelter-in-place and quarantine mandates, replicating the full breadth of services remotely has been IT’s biggest priority.

All of this is nothing short of a remote collaboration revolution. It is already rewriting how work gets done — and how technology gets supported — when direct access to traditional, physical infrastructure is no longer a given.

But this is merely one aspect of IT. As we begin to digest how these changes will shape technology best practices, both during the current crisis and well into the future, we can’t afford to ignore the often unseen underpinnings of IT infrastructure that don’t have the luxury of working remotely.

Not an Option

Put simply, mission-critical facilities like data centers can’t be relocated into employees’ home offices. While transferring end-user productivity out of a traditional office context is a fairly straightforward process, the same can’t be said for the highly specialized workloads that can only be managed within the framework of a data center. Beyond the uniquely physical and non-transferable capabilities of the facilities themselves — grid access, raw compute power, failover, security, etc. — there is the genuine accountability associated with the sheer volume and type of workloads managed within them.

Regulatory constraints around how incalculably vital data must be managed and protected throughout all phases of its lifecycle add even more complexity to data center protocols during a pandemic.

So while you can’t simply abandon your data center in the same manner as your end users have cleared out their offices, you can — and must — understand how to rebalance your provision of data center services in light of how the pandemic continues to evolve. And it would be best if you did so while you continue to keep the lights on for stakeholders who need uninterrupted access to data center services now more than ever.

Against this backdrop, if you haven’t already examined your data center management strategy through a COVID-19 lens, now is the time to do so. As with anything related to the data center, however, this will be a complex, multifaceted process. Position yourself to navigate it by looking at your strategy through the following contexts.

  • Capacity Management

    The historically unpredictable global business environment is putting unprecedented pressure on capacity management, with businesses barely able to forecast demand — or, in many cases, keep up with it. Global internet traffic is trending upward, with several exchanges routinely reaching record throughput as entire economies and workforces adjust to the new lockdown paradigm. Some organizations facing spiking demand have no choice but to move services out of their own data centers and lean more heavily on vendors. This makes absolute sense in an unpredictable landscape where scale needs to be implemented without delay. Still, it doesn’t make everyday issues like bandwidth, power, CPU, memory, and disk space disappear. Instead, it shifts the burden onto these external providers and their specific infrastructure. IT leadership must adapt these partnerships to keep pace because, if vendors don’t stay ahead of the curve, IT may find itself unable to serve the business adequately.

  • Connectivity

    The old adage about not putting all your eggs in one basket has never been more valid than it is now. This issue relates directly to capacity management, and, as the crisis deepens, the strain on all aspects of infrastructure will only increase. Diversify your upstream providers as much as possible to mitigate the risks associated with any one of them being compromised by pandemic-related resourcing constraints. This minimizes the potential for back-end interruptions to reach your customers. Leverage third-party user reviews and analyst resources to better assess and compare vendors, match provider capabilities to fast-changing business needs, and position yourself to make best-of-breed decisions faster.

  • Disaster Recovery

    The uptick in mission-critical services being deployed off-premises doesn’t only impact day-to-day service delivery and the service level agreements (SLAs) that set expectations and confirm accountabilities. It also has significant implications for disaster recovery (DR) planning and implementation, shifting a fair degree of risk over to the third-party providers now responsible for delivering these services. DR plans must be updated to reflect this new world of vendor-distributed work, and vendors must be integral to this process to ensure they are in a position to fulfill all requirements.

  • Security

    Cybercriminals have never missed an opportunity to take advantage of periods of uncertainty to ply their evil trade, and the COVID-19 pandemic is no exception. As more organizations move their services to centralized locations, bad actors suddenly have significantly more — and better defined — higher-value targets. From a cybercriminal’s perspective, why attack one company and net only one victim when you can strike a mission-critical data center and compromise many victims? This sobering reality reinforces the need to nail down end-to-end security protocols with all vendors, including, but not limited to, encryption, authentication, and onsite access control. Reaffirming your cybersecurity skills inventory — and closing any gaps with targeted training — should also be prioritized.

  • Colocation

    If you are either using or responsible for colocated resources or infrastructure, you must take immediate steps to reduce physical risks at all levels, including:

    • Focus on disease control and disinfection throughout the facility.
    • Enforce monitoring — including temperature checks — at tightly controlled entries, and turn away anyone exhibiting symptoms to avoid compromising the facility itself.
    • Reduce the number of people onsite, especially unknowns and other individuals not considered essential to the business.
    • Consider extending shift lengths from eight to 12 hours and moving to a two-shift schedule, if local labor laws will accommodate.
    • Take individual steps to protect technical staff with skills required to maintain data center uptime, including sequestering them in a third, unscheduled shift, and holding them in reserve if primary staff exhibit symptoms.
    • Incorporate in-person monitoring of tasks during shift rotations to ensure continuity of operations. Implement contactless handovers to minimize transmission risk during these critical periods.
    • Assign activities and technical resources to single buildings and prevent them from moving to other buildings within a more massive campus.
    • Prioritize the implementation of “smart hands” services to ensure trained, known resources handle tasks requiring onsite engagement.
    • Leverage guidance from local and regional health authorities to ensure nothing is missed, including physical traffic control methods in shared areas to support social distancing.

Focus on the Opportunity

Not everything about the current pandemic should incite fear — all significant disruptions offer opportunities to rethink how data center operations are planned, managed, and evolved over time. The possibilities can be game-changing, but only if you take the time to get out of firefighting mode and zero in on what your strategy should look like once COVID-19 is firmly behind us.

For example, as more data physically moves offsite toward data centers, hardware GPUs can be leveraged for compute-intensive artificial intelligence, machine learning, and related data analysis applications. Recognize that data has gravity and tends to pull surrounding apps with it. Position yourself to sell compute capacity to meet these shifting demands.

Don’t Reinvent the Wheel

As the pandemic continues to play out, expect the value of traditional data center best practice to be reinforced. This isn’t so much a time to rip apart and rebuild as it is to validate what you’ve been doing all along and double down on it.

Start by ensuring your basics are sound and that your existing slate of products and services is reliable, secure, and well-communicated to your stakeholders. The sudden increase in demand for data center services and capacity may be unique in history, but stakeholders will depend on you having a firm foundation. By taking the time to reaffirm that this is indeed the case, you’re in a much better position to scale and meet this demand.

Learn from experience

As unique as this experience seems to us all, recognize that we’ve been through this before — including the SARS, H1N1, and Ebola outbreaks in 2003, 2009, and 2014, respectively. Refer back to any documentation you may have from those periods to inform your thinking and responses for the current pandemic, but bear in mind that the impact in those previous cases was significantly smaller, and we “returned to normal” much more quickly.

This time out, the impact is unprecedented, and the future timeline won’t be resolving itself anytime soon. Expect it to take far longer than initially expected to return to anything remotely approaching “normal,” and, even then, expect the very definition of the word to evolve.

Many economic, technological, and social changes will indeed be permanent, which means your go-forward strategy to manage data center resources should not be to overutilize what you’ve got and hope to ride out the storm. Instead, now is the time to scale your investments in critical infrastructure and prepare for a changing world after that. This strategy will maximize your business continuity and minimize the risks associated with navigating these strange times.

Until next time, Rob.

Working Remotely as an IT Pro in this age of COVID-19 – Tips for a Successful WFH Strategy for your end-users

Hey All,

It’s been a few months since I did my last blog post. Not making excuses, but working as an IT Director at a large Biotech can be challenging :).  I finally have some time to share some good tips I shared with my team and end-users during this COVID-19 pandemic.  Having a good WFH (Work from Home) environment is key to keeping the balance between work and home.

Get your technology in order

Technology is what enables remote work in the first place. So make sure to take your laptop home, and don’t forget your dock and charger. Also, take home your mouse, keyboard, and monitors — anything that might make working on your laptop from home a little easier. Then there’s the software. Make sure you have the right applications. Lots of remote workers are leaning heavily on Microsoft Teams, Slack, WebEx, and Zoom.  In fact, Microsoft is making MS Teams available for free.

Iron out what your team is planning to use ASAP. And of course, you’ll want to make sure all your technology actually works from home.

Make sure you have bandwidth

Another thing? Internet access — is yours robust enough at home to allow you to video conference? Many conferences and almost all nonessential work travel are being canceled right now, so people want to use online video conferencing, which requires a good Internet connection.  If your bandwidth is low and you’re on a video call, try shutting down other programs to lighten the load on your connection. If your connection is really choppy, you can often shut off the video portion of a call and participate with audio only, which defeats the purpose of seeing your team but will still allow you to participate in the conversation.

Another Internet hog? Kids.  If your connection is not robust, set some ground rules about when kids can’t be online because mom is on a conference call, or stagger your video meetings with your partner or other family members if possible.

The kids are alright — but they’re home too

With school closures and concerns about putting kids in daycare, as well as staffing those places up, parents are faced with a challenge, especially parents who have to physically go to work because they have no remote work option. If you are working from home with kids in tow, you’ll need to make a plan for education and entertainment. Stock up on books and puzzles. Also, it’s OK to use streaming services (Common Sense Media has good recommendations for kid-appropriate content).

Manage expectations with Work

It’s wise to have a discussion with your manager about what can actually be accomplished from home.  Ask your manager what the priorities are, and discuss how tasks will get done.  How are teams going to track projects they’re working on? How will they meet to discuss this? Will you all be connecting on Microsoft Teams or email? Will there be standing meetings at a certain time to get everyone coordinated?  This should be an ongoing conversation. Remember, going fully remote is a new experience for many companies and their workers. Be honest about what isn’t working or can’t get done in these circumstances. More overall communication is going to be necessary.

Embrace the webcam

Conference calls are tough — there are time delays, not knowing who’s talking because you can’t see the person, people getting interrupted by accident. Webcams can solve a number of these issues: the sense of isolation and that confusion. Being able to see the person you’re talking to is important. And also, because we miss cues when we aren’t working together in person, make doubly sure all colleagues understand their marching orders. Personally, I tend to overcommunicate, and I think that’s a good default setting. Don’t be afraid to ask, “Is this clear?” You can even try repeating back what you heard the other person say, to make sure you interpreted the person’s meaning correctly.

Stay connected

One undeniable loss is the social, casual “water cooler” conversation that connects us to people — if you’re not used to that loss, full-time remote work can feel isolating. To fill the gap, some co-workers are scheduling online social time to have conversations with no agenda. Use Microsoft Teams\Slack chats and things like that if you miss real-time interaction.  Again, embrace video calling and webcams so you can see your colleagues. Try an icebreaker over your team chat: What’s everyone’s favorite TV show right now? What’s one good thing that someone read that day?

Are you a Manager of a Team? – Have a Daily Stand-Up Meeting with your Team or a Virtual Lunch and Learn

Keep them quick and make sure everyone participates. We do this in my IT team every day. It keeps the team engaged. I plan on having virtual lunch and learns and having the company pick up lunch for the team.  Again…Keep everyone engaged.

Hopefully everyone is staying safe at home and of course keeping social distancing…Until next time and Stay Safe…..Rob

Azure Powershell – How to Build and Deploy Azure IaaS VMs

Throughout my career, my primary role has always been to make things more efficient and automated.  And now more than ever, automation is needed to manage and deploy IT services at scale to support our ever-changing needs.

In my opinion, one of the most convenient aspects of public cloud-based services is the ability to host virtual machines (VMs). Hosting VMs in the cloud doesn’t just mean putting your VMs in someone else’s datacenter. It’s a way to achieve a scalable, low-cost and resilient infrastructure in a matter of minutes.

What once required hardware purchases, layers of management approval and weeks of work now can be done with no hardware and in a fraction of the time. We still probably have those management layers though 🙁

Microsoft Azure is in the lead pack along with Google (GCP) and Amazon (AWS). Azure has made great strides over the past few years in its Infrastructure as a Service (IaaS) offering, which allows you to host VMs in their cloud.

Azure provides a few different ways to build and deploy VMs in Azure.

  • You could choose to use the Azure portal, build VMs through Azure Resource Manager (ARM) templates and some PowerShell
  • Or you could simply use a set of PowerShell cmdlets to provision a VM and all its components from scratch.

Each has its advantages and drawbacks. However, the main reason to use PowerShell is for automation tasks. If you’re working on automated VM provisioning for various purposes, PowerShell is the way to go 😉

Let’s look at how we can use PowerShell to build all of the various components that a particular VM requires in Azure to eventually come up with a fully-functioning Azure VM.

To get started, you’ll first obviously need an Azure subscription. If you don’t, you can sign up for a free trial to start playing around. Once you have a subscription, I’m also going to be assuming you’re using at least Windows 10 with PowerShell version 6. Even though the commands I’ll be showing you might work fine on older versions of PowerShell, it’s always a good idea to work alongside me with the same version, if possible.

You’ll also need to have the Azure PowerShell module installed. This module contains hundreds of various cmdlets and sub-modules. The one we’ll be focusing on is AzureRM, which contains all of the cmdlets we’ll need to provision a VM in Azure.
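
If you don’t have the module yet, getting set up looks roughly like this (a minimal sketch assuming the AzureRM module this post uses; the subscription name is a placeholder):

## Install the AzureRM module from the PowerShell Gallery for the current user
Install-Module -Name AzureRM -Scope CurrentUser

## Sign in to Azure and, if you have several subscriptions, pick the one to work in
Connect-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName 'My Subscription'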

Building a VM in Azure isn’t quite as simple as running New-AzureVM; far from it actually. Granted, you might already have much of the underlying infrastructure required for a VM, but how do you build it out? I’ll be going over how to build every component necessary and will be assuming you’re starting from a blank Azure subscription.

At its most basic, an ARM VM requires eight individual components:

  1. A resource group
  2. A virtual network (VNET)
  3. A storage account
  4. A network interface with private IP on VNET
  5. A public IP address (if you need to access it from the Internet)
  6. An operating system
  7. An operating system disk
  8. The VM itself (compute)

All of the components from 2 through 7 must reside in a resource group, so we’ll need to build that first. We can then use it to place all the other components in. To create a resource group, we’ll use the New-AzureRmResourceGroup cmdlet. You can see below that I’m creating a resource group called NetWatchRG and placing it in the East US datacenter.

New-AzureRmResourceGroup -Name 'NetWatchRG' -Location 'East US'

Next, I’ll build the networking that is required for our VM. This requires both creating a virtual subnet and adding that to a virtual network. I’ll first build the subnet where I’ll assign my VM an IP address dynamically in the 10.0.1.0/24 network when it gets built.

$newSubnetParams = @{
'Name' = 'NetWatchSubnet'
'AddressPrefix' = '10.0.1.0/24'
}
$subnet = New-AzureRmVirtualNetworkSubnetConfig @newSubnetParams

Next, I’ll create my virtual network and place it in the resource group I just built. You’ll notice that the subnet’s network is a slice of the virtual network (my virtual network is a /16 while my subnet is a /24). This allows me to segment out my VMs.

$newVNetParams = @{
'Name' = 'NetWatchNetwork'
'ResourceGroupName' = 'NetWatchRG'
'Location' = 'East US'
'AddressPrefix' = '10.0.0.0/16'
'Subnet' = $subnet
}
$vNet = New-AzureRmVirtualNetwork @newVNetParams

Next, we’ll need somewhere to store the VM, so we’ll need to build a storage account. You can see below that I’m building a storage account called netwatchsa (storage account names have to be all lowercase).

$newStorageAcctParams = @{
'Name' = 'netwatchsa'
'ResourceGroupName' = 'NetWatchRG'
'Type' = 'Standard_LRS'
'Location' = 'East US'
}
$storageAccount = New-AzureRmStorageAccount @newStorageAcctParams

Once the storage account is built, I’ll now focus on building the public IP address. This is not required but if you’re just testing things out now it’s probably easiest to simply access your VM over the Internet rather than having to worry about setting up a VPN.

Here I’m calling it NetWatchPublicIP and I’m ensuring that it’s dynamic since I don’t care what the public IP address is. I’m using many of the same parameters as the other objects as well.

$newPublicIpParams = @{
'Name' = 'NetWatchPublicIP'
'ResourceGroupName' = 'NetWatchRG'
'AllocationMethod' = 'Dynamic' ## Dynamic or Static
'DomainNameLabel' = 'netwatchvm1'
'Location' = 'East US'
}
$publicIp = New-AzureRmPublicIpAddress @newPublicIpParams

Once the public IP address is created, I then need some way to get connected to my virtual network and ultimately the Internet. I’ll create a network interface, again using the same resource group and location. You can also see how I’m slowly building up all of the objects I need as I go along. Here I’m specifying the subnet ID I created earlier and the public IP address I just created. Each step requires objects from the previous steps.

$newVNicParams = @{
'Name' = 'NetWatchNic1'
'ResourceGroupName' = 'NetWatchRG'
'Location' = 'East US'
'SubnetId' = $vNet.Subnets[0].Id
'PublicIpAddressId' = $publicIp.Id
}
$vNic = New-AzureRmNetworkInterface @newVNicParams

Once we’ve got the underlying infrastructure defined, it’s now time to build the VM.

First, you’ll need to define the size (and therefore performance) of the VM. Here I’m choosing a smaller, cheaper option, Standard_A3. This is great for testing but might not be enough performance for your production environment.

$newConfigParams = @{
'VMName' = 'NETWATCHVM1'
'VMSize' = 'Standard_A3'
}
$vmConfig = New-AzureRmVMConfig @newConfigParams

Next, we need to configure the OS itself. Here I’m specifying that I need a Windows VM, the computer name it will have, the password for the local administrator account, and a couple of other Azure-specific parameters. By default, an Azure VM agent is installed but does not automatically update itself. You don’t explicitly need a VM agent, but it will come in handy if you begin to need more advanced automation capabilities down the road.

$newVmOsParams = @{
'Windows' = $true
'ComputerName' = 'NETWATCHVM1'
'Credential' = (Get-Credential -Message 'Type the name and password of the local administrator account.')
'ProvisionVMAgent' = $true
'EnableAutoUpdate' = $true
}
$vm = Set-AzureRmVMOperatingSystem @newVmOsParams -VM $vmConfig

Next, we need to pick which image our OS will come from. Here I’m picking Windows Server 2016 Datacenter with the latest patches. This will pick an image from the Azure image gallery to be used for our VM.

$newSourceImageParams = @{
'PublisherName' = 'MicrosoftWindowsServer'
'Version' = 'latest'
'Skus' = '2016-Datacenter'
'VM' = $vm
}

## Filter to the WindowsServer offer so a single offer name is passed below
$offer = Get-AzureRmVMImageOffer -Location 'East US' -PublisherName 'MicrosoftWindowsServer' | Where-Object { $_.Offer -eq 'WindowsServer' }
$vm = Set-AzureRmVMSourceImage @newSourceImageParams -Offer $offer.Offer

Next, we’ll attach the NIC we built earlier to the VM, referencing it by its ID so it’s easy to add more NICs later if we need to.

$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $vNic.Id

At this point, Azure still doesn’t know how you’d like the disk configuration on your VM. To define where the operating system will be stored, you’ll need to create an OS disk. The OS disk is a VHD that’s stored in your storage account. Here I’m putting the VHD in a "vhds" storage container (folder) in Azure. This step gets a little convoluted since we must specify the VhdUri, which is built from the URI of the storage account we created earlier.

$vmName = 'NETWATCHVM1'
$osDiskName = 'OSDisk'
$osDiskUri = $storageAccount.PrimaryEndpoints.Blob.ToString() + "vhds/" + $vmName + $osDiskName + ".vhd"

$newOsDiskParams = @{
'Name' = 'OSDisk'
'CreateOption' = 'fromImage'
'VM' = $vm
'VhdUri' = $osDiskUri
}

$vm = Set-AzureRmVMOSDisk @newOsDiskParams

Ok, whew! We now have all the components required to finally bring up our VM. To build the actual VM, we’ll use the New-AzureRmVM cmdlet. Since we’ve already done all of the hard work ahead of time, at this point, I simply need to pass the resource group name, the location, and the VM object which contains all of the configurations we just applied to it.

$newVmParams = @{
'ResourceGroupName' = 'NetWatchRG'
'Location' = 'East US'
'VM' = $vm
}
New-AzureRmVM @newVmParams

Your VM should now be showing up under the Virtual Machines section in the Azure portal. If you’d like to check on the VM from PowerShell you can also use the Get-AzureRmVM cmdlet.
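
For example, using the names from this walkthrough:

## Check the provisioning and power state of the new VM
Get-AzureRmVM -ResourceGroupName 'NetWatchRG' -Name 'NETWATCHVM1' -Status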

Now that you’ve got all the basic code required to build a VM in Azure, I suggest you go and build a PowerShell script from this tutorial. Once you’re able to bring this code together into a script, building your second, third or tenth VM will be a breeze!

One final tip: in addition to managing Azure through a browser, there are mobile apps for iOS and Android, and now the new Azure portal app (currently in preview).  It gives you the same experience as the Azure Portal without the need for a browser like Microsoft Edge or Google Chrome.  Great for environments that have restrictions on browsing.

Until next time, Rob…

Azure Active Directory, Active Directory Domain Services – What’s the difference?

Here is a subject I hear about and get asked about over and over again: is Azure Active Directory (AAD) the same as Active Directory Domain Services (AD DS)?

Let me be very clear.  Azure Active Directory is NOT a cloud version of Active Directory Domain Services, and in fact, it bears minimal resemblance to its on-premises namesake.

The number one question I get asked: “How do I join my servers to Azure AD?” IT admins expect (not unreasonably) to be able to use Azure AD just like they have always used Active Directory Domain Services. So let’s compare AD DS (and particularly the domain services part of AD DS) to AAD.  Let me educate you 🙂

What is Active Directory?

Most of us have probably worked with it for years, and now you’re looking to move to the cloud and understand what AAD is. Let’s start with a recap of what AD DS is. 

Active Directory Domain Services was introduced as a hierarchical authentication and authorization database system to replace the flat file Domain system in use on NT4 and previous servers.

By 2000, the NT4 domain model was straining at the seams to keep up with evolving corporate structures, hampered by some quite severe limitations – a maximum of 26,000 objects in a flat file “bucket”, only 5 kinds of fixed objects whose structure (properties etc.) could not be changed, a maximum database size of 40 MB, and so on. NT4 domains also primarily used NetBIOS (another flat, Microsoft-specific system) for name resolution.

For a lot of larger organizations, this necessitated multiple domain databases with very limited and complicated interactions between those domains. Active Directory Domain Services (just called Active Directory in those days) was released with Windows Server 2000 and was based upon the X.500 hierarchical directory standard that products such as Novell’s NDS and Banyan VINES were using at the time.

AD DS also used DNS as its name resolution system and the TCP/IP communication protocols in use on the internet. It brought in the idea of a directory system which contained a “schema” database (the set of “rules” that define the properties or attributes of objects created in the “domain” database) which could be added to or “extended” to create either entirely new objects or new properties of existing objects.

Size limitations were also thrown out the window, with Microsoft creating directory systems in the billions of objects (given enough storage!) in their test labs.

Here is a list of the essential functions that make up AD DS:

  • Secure Object store, including Users, Computers and Groups
  • Object organization using OUs, Domains and Forests
  • Common Authentication and Authorization provider
  • LDAP, NTLM, Kerberos
  • Group Policy
  • Customizable Schema

Along with Domain Services, there are also components like Certificate Services, Federation Services, and Privileged Access Management.

From its inception, AD DS quickly became the de facto directory system in most organizations, and it remains so even today.

What is Azure Active Directory?

So if you know what Active Directory Domain Services is, then how does this compare to Azure Active Directory? The answer is: not very closely. The decision to name AAD after AD, in my opinion, was more of a marketing decision than a technical one, and it has led to years of confusion. In many ways, AAD was designed for a world where PaaS and SaaS services were the default choice, not for IaaS in the cloud.

Azure Active Directory is a secure authentication store, which can contain users and groups, but that is about where the similarities end. AAD is a cloud-based identity management store for modern applications. AAD is designed to allow you to create users, groups, and applications that work with modern authentication mechanisms like SAML and OAuth.
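
To make that concrete, here is a minimal sketch using the AzureAD PowerShell module (the tenant domain, names, and password here are made-up placeholders):

## Connect to the Azure AD tenant
Connect-AzureAD

## Create a cloud-only user - no OU, no domain join involved
$passwordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$passwordProfile.Password = 'Sup3rS3cret!Demo'
New-AzureADUser -DisplayName 'Demo User' `
    -UserPrincipalName 'demo.user@netwatch.onmicrosoft.com' `
    -MailNickName 'demo.user' `
    -AccountEnabled $true `
    -PasswordProfile $passwordProfile

## Query the directory - this goes through the Graph-backed REST API, not LDAP
Get-AzureADUser -Filter "startswith(displayName,'Demo')"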

Applications are an object that exists in AAD but not in AD DS. Applications allow you to create an identity for your applications that you can grant access for users to, and to allow you to grant your users access to applications owned by others.

What AAD does not provide is any AD DS service beyond user management.

  • You can’t join computers to an Azure AD domain in the way you would with AD DS. There is something called Azure AD Join, but this is a different animal that I’ll address below. This means there are no computer objects in your AAD to apply things like GPOs to, and no centralized control of user rights on those machines.
  • There is no Group Policy. AAD has some policy tools like conditional access, but it is more focused on access to applications.
  • No support for LDAP, directory queries all use the REST API, Graph or PowerShell/CLI
  • There’s no support for NTLM or Kerberos. AAD is modern authentication protocols only
  • There’s no schema you have access to or can modify
  • Flat structure, no OU’s, Domains or Forests

So, at this point, it’s obvious that Azure AD is a very different thing from AD DS. AAD is for user, group, and application management in the cloud. If you’re building all new services using PaaS or SaaS and using modern authentication protocols, then you should be all set with AAD; it’s what it was designed for.

However, if you’re running IaaS in Azure and want AD DS to domain-join machines and create GPOs, then AAD won’t cut it for you (and that is by design).

Active Directory on Azure

Hopefully, now it’s clear what AAD is and isn’t, and if you’re building modern apps and AAD does what you need, then you can stop here.

However, if you are going down the IaaS route in Azure and you feel you still need the services of an AD domain, what alternatives are there?

Azure AD Join

I mentioned this briefly earlier; it is possible to join devices directly to Azure AD. AAD Join is limited to Windows 10 machines only and provides limited functionality, certainly nothing like a full AD join.

Once a device is Azure AD joined, it is possible to log in to it using Azure AD user accounts. You can apply conditional access policies that require machines to be AAD joined before accessing resources or applications. If you’re looking for a way to provide common user account management across Windows 10 machines, then this may work for you.
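
A quick way to check whether a given Windows 10 machine is actually Azure AD joined is the built-in dsregcmd tool:

## Look for "AzureAdJoined : YES" in the Device State section of the output
dsregcmd /status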

Azure AD Domain Services

If you need more than just user management, then it is possible to extend Azure AD to offer more AD based services using Azure AD Domain Services. AAD DS is an Azure product that you enable on your virtual network which deploys two domain controllers. They are managed by Microsoft and synchronized with your Azure AD tenant. This allows admins to grant machine access to users in your AAD tenant, but also to implement things like custom OU’s, group policy, LDAP queries, NTLM and Kerberos.

This is a domain managed by Microsoft, so you do not have to worry about patching your domain controllers or ensuring they are up. However, it also means you do not have full control of the domain. For example, you do not have domain admin rights, only enough rights to undertake the tasks Microsoft allows. You can see a full breakdown of AAD DS limitations here.

AD Domain Controllers on Azure

Nothing is stopping you from just deploying some virtual machines in Azure and turning them into domain controllers. This is a supported configuration and is in use by many people who need the full suite of services provided by AD inside Azure.
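
As a rough sketch of what that looks like once the VMs are deployed (run inside the VM; the domain name here is a placeholder):

## Install the AD DS role and management tools
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools

## Promote the server to the first domain controller of a new forest
Install-ADDSForest -DomainName 'corp.netwatch.local' `
    -SafeModeAdministratorPassword (Read-Host -AsSecureString 'DSRM password') `
    -InstallDns:$true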

The downside to this approach is that you need to manage this yourself. You need to take care of patching and updating your servers, backing up your domain and any other maintenance you require. You are also in charge of making sure it is highly available and implementing a DR strategy if you require it. If you need all that AD DS has to offer then, this can be a great option, but if all you want is a common user store for machine login, it might be overkill.

Access your On-Premises AD Domain

Finally, you can also extend your existing on-premises domain into Azure. Using ExpressRoute or VPN, you can connect your on-premises network to your Azure vNet and allow access to domain controllers. You can even deploy IaaS domain controllers in Azure that are joined to your on-premises domain. This then adds a dependency to your infrastructure of connectivity back to the on-premises network, so this connectivity becomes a key point of failure. You need to ensure that resiliency is built in.

Summary

If you’re new to Azure, and especially to identity in Azure, I hope this clears things up. Azure AD is a new, modern authentication provider and is not Active Directory Domain Services in the cloud. AAD does not behave like the AD DS you know and love and really shouldn’t be compared to it; it is a different service.

If you need AD DS in your cloud environment, then there are options to achieve this, but AAD is not going to give you that. Take a look at the options listed in this blog post and see what meets your needs.

Until next time, Rob

Windows Virtual Desktop and FSLogix – What you need to know?

Expanding on my last post on Windows Virtual Desktop, let’s talk about FSLogix. So, let’s start at the beginning: FSLogix was founded by Randy Cook and Kevin Goodman, VDI industry veterans, to tackle user experience problems with virtual desktops.

FSLogix was one of the first along with Liquidware to use virtual hard disks as a way to migrate the user’s profile data between virtual desktops and sessions.

Giving users local admin rights on Windows desktops has become a thing of the past. More and more apps (for example, Modern Apps) install themselves and their caches directly into the user profile (because the user always has permission to write there). While there are proven solutions for roaming only the required parts of the user profile and ignoring things like app installs, some administrators prefer the approach of just roaming everything and not trying to manage the contents of the profile.

In the last couple of years, the attention has shifted from user profile roaming to solving the problem of roaming Office 365 caches in virtual desktops, so that they perform and feel as fast as a physical desktop. Microsoft’s early attempt at this approach – User Profile Disks, introduced in Windows Server 2012 – was a step in the right direction but was lacking, and the acquisition of FSLogix allows them to accelerate their support for this capability.

When a user logs on to their Windows session, the Windows User Profile is loaded. The profile includes everything from the user’s download folder to their mouse scrolling speed preference and everything in between. So you can imagine that profiles can get big.  Check out my blog post on Windows Users Profiles – The Untold Mysteries to learn more.

There are also some programs that create massive amounts of profile data, like AutoCAD – which, thanks to NVIDIA GRID, works great in a VDI environment but easily generates GBs of profile data. If the user’s profile grows this big, a roaming profile solution won’t work. Logon will take minutes or, in some extreme cases, hours to complete because the FileServer will copy all the profile data to the endpoint. Even “just in time” profile technology like Zero Profiling isn’t able to handle the big application data quickly enough for a good user experience, because it also just copies the data from a FileServer to the endpoint, only not in one big chunk like roaming profiles.

So, how does FSLogix Profile Containers help?

FSLogix Profile Containers creates a Virtual Hard Disk (VHD) file on a FileServer and stores the user profile, including the registry, in the VHD file. Sounds relatively simple, right? So why does this improve speed? Well, during login the only thing that happens is that the endpoint mounts the VHD file as a virtual hard drive, and then the profile is just accessible. So there is NO data copy! This results in lightning-fast logons and eliminates FileServer and network bottlenecks from login storms.

FSLogix Profile Containers also has additional benefits for the end user: native support for Office 365 products such as Outlook, Windows Search, OneDrive for Business, SharePoint folder synchronization, Teams, and the Skype for Business GAL.

Profile Containers Cloud support

It’s worth mentioning that FSLogix has a cool tech called Cloud Cache. This functionality lets you add multiple storage repositories to the existing products to provide high availability in on-premises and cloud environments.

Imagine a workspace scenario where you are running a VDI\WVD environment in Microsoft Azure. Typically, you store your profile data on a Windows file share in Azure Infrastructure-as-a-Service. The Cloud Cache driver makes it possible to store the Containers directly on much less expensive Azure Blob Storage. This is just one of the significant use cases which FSLogix is solving with this tremendous new cloud technology.

Other uses of Cloud Cache include high availability in the event of storage or network interruptions, profile storage server migrations, cloud migrations, offline access to FSLogix containers, and more.

So, how do you setup FSLogix Profile containers?

As always first, download the software here.

Next, you need to push the installer to your endpoints.  To make your life easier, use these silent install parameters:

“FSLogixAppsSetup.exe /install /quiet /norestart ProductKey=YOURPRODUCTKEY”. 
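
If you’re scripting the rollout, the same thing from PowerShell looks roughly like this (a sketch; the share path and product key are placeholders):

## Run the FSLogix installer silently from a central software share
Start-Process -FilePath '\\fileserver\software\FSLogixAppsSetup.exe' `
    -ArgumentList '/install', '/quiet', '/norestart', 'ProductKey=YOURPRODUCTKEY' `
    -Wait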

With the install, you also get an FSLogix .ADML and .ADMX file. You need to copy these to your PolicyDefinitions folder in \YOURDOMAIN\SYSVOL\Policies. Next, you need to create a new GPO object and configure the FSLogix Profile Container settings.
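
At a minimum, the GPO needs to enable Profile Containers and point them at your profile share. For a quick test on a single endpoint, the equivalent registry values look like this (a minimal sketch; the share path is a placeholder):

## Equivalent of the "Enabled" and "VHD location" policy settings
New-Item -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' -Name 'Enabled' -Value 1 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' -Name 'VHDLocations' -Value '\\fileserver\FSLogixProfiles' -PropertyType String -Force | Out-Null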

Make sure you don’t forget to disable roaming profiles and enable local profiles on the endpoint. You can monitor if the Profile Container is working correctly with the easy FSLogix Tray application located in: “C:\Program Files\FSLogix\Apps\frxtray.exe”.

And that’s it. 🙂  Your users can now log in with the speed of Flash Gordon and you never have to worry about profile issues again. It’s a win\win!

FSLogix technology will be available to Microsoft customers with the following licenses, rather than just WVD customers as originally stated:

    • M365 E3, E5, F1  – These are subscriptions that include the Windows OS which also includes everything in the Office 365 license and additional tools and security software.
    • Windows E3, E5 – These are subscription licenses of the Windows OS
    • Any Microsoft RDS Server CAL holder (for example, Citrix XenApp users; this is the newly added part that makes it more widely available)

Now you have an understanding of how it works, along with the basics of the setup and licensing. My next blog post in this series will be a video walkthrough of the setup and usage.

Until next time,

Rob

My thoughts on the Future of the Cloud

Many people in IT consider containers, a technology used to isolate applications with their own environment, to be the future.

However, serverless geeks think that containers will gradually fade away. They will exist as a low-level implementation detail bubbling below the surface, but most software developers will not have to deal with them directly. It may seem premature to declare victory for serverless just yet, but there are enough positive signs already. Forward-thinking organizations like iRobot, Coca-Cola, Thomson Reuters, and Autodesk are experimenting with and adopting serverless technologies. All major and minor cloud providers — including Azure, AWS, GCP, IBM, Oracle, and Pivotal — are working on serverless offerings. If you want to learn more, take a quick look at this link: https://docs.microsoft.com/en-us/archive/blogs/wincat/validating-hybrid-cloud-scenarios-in-the-server-2012-technology-adoption-program-tap.

Together with the major players, a whole ecosystem of startups is emerging. These startups attempt to solve problems around deployment and observability, provide new security solutions, and help enterprises evolve their systems and architectures to take advantage of serverless. This isn’t, of course, to mention a vibrant community of enthusiasts who contribute to serverless open source projects, evangelize at conferences and online, and promote ideas within their organizations.

It would be great to close the book now and declare victory for the serverless camp, but the reality is different. There are challenges that the community and vendors have yet to solve. These challenges are cultural and technological: there’s tribal friction within the tech community, inertia to adoption within organizations, and issues around some of the technology itself.

Confusion and the Cloud

While adoption of serverless is growing, more work needs to be done by the serverless community to communicate what this technology is all about. The community needs to bring more people in and explain how serverless adds value. It’s inarguable that there are good questions coming from members of the tech community. These can range from trivial disagreements over “serverless” as a name to more philosophical arguments about fit, use case, and lock-in. This is a perfectly normal example of past successes (with other technologies) breeding inertia to change.

This isn’t to say that those who have objections are wrong. Serverless in its current incarnation isn’t suitable in all cases. There are limitations on how long functions can run, tooling is immature, and monitoring distributed applications made up of many functions and cloud services can be difficult (although some progress is being made to address this).

There’s also a need for a robust set of example patterns and architectures. After all, the best way to convince someone of the merit of technology is to build something with it and then show them how it was done.

Confusingly, there is a tendency by some vendors to label their offerings as serverless when they aren’t. This makes it look like they are jumping on the bandwagon rather than thoughtfully building services that adhere to serverless principles. Some of the bigger cloud vendors are guilty of this and unfortunately, this confuses people’s understanding of technology.

Go Big or Go Home

At the very large end of the scale, companies like Netflix and Uber are building their own internal serverless-like platforms. But unless you are the size of Netflix or Uber, building your own Functions-as-a-Service (FaaS) platform from scratch is a terrible idea. Think of it like this: it’s like building a toaster yourself rather than buying a commoditized, off-the-shelf product. Interestingly, Google recently released a product called Knative. This product — based on the open source Kubernetes container orchestration software — is designed to help build, deploy and manage serverless workloads on your own servers.

For example, Google’s Bret McGowen, at Serverlessconf San Francisco ’18, gave a real-life customer example: an oil rig out in the middle of the ocean with poor Internet connectivity. The customer needed to perform computation on terabytes of telemetry data, but uploading it to a cloud platform over a connection equivalent to a 3G modem wasn’t feasible. “They cannot use cloud and it’s totally unfair to say — sorry buddy, hosted functions-as-a-service or bust — their developers deserve to have the same serverless experience as the rest of us” was Bret’s explanation of why, in this case, running Knative locally on the oil rig made sense.

He is, of course, correct. Having a serverless system running in your own environment — when you cannot use a cloud platform — is better than nothing. However, for most of us, serverless solutions like Google Cloud Functions, Azure Functions, or AWS Lambda offer a far smaller barrier to entry and remove many administrative headaches. It’s fair to say that most companies should look at serverless solutions like Lambda first and, if they don’t satisfy requirements, look at other alternatives, like Knative and containers, second.

The Future…in my humble opinion

It’s likely that some of the major limitations with serverless functions are going to be solved in the coming years, if not months. Cloud vendors will allow functions to run for longer, support more languages, and allow deeper customizations. A lot of work is being done by cloud vendors to allow developers to bring their own containers to a hosted environment and then have those containers seamlessly managed by the platform alongside regular functions.

In the end, do you have a choice? “No, none whatsoever” was Bret’s succinct, brutal answer at the conference. Existing limitations will be solved, and serverless compute technologies will herald the rise of new, emerging architectural patterns and practices. We are yet to see what these are, but this is the future and it is unavoidable.

Cloud computing is where we are, and where the world is going for the next decade or two. After that, probably something new will come along.

But the reasons for going to cloud computing in general and the inevitable wind-down of on-premises to niche special functions are now pretty obvious.

  • Security – Big cloud operators have FAR more security people and capacity than even a big enterprise, and your own disgruntled employees don’t have the keys to the servers.
  • Cost-effectiveness – Economies of scale. The rule of big numbers.
  • Zero capital outlay – reduced costs.
  • For software developers, no more software piracy. That’s a big saving on the cost of developing software, especially for sales in certain countries.
  • Compliance – So much easier if your cloud vendor is fully certified, so you only have to worry about your part of the puzzle.
  • Energy efficiency – Big, well-designed datacentres use a LOT less global resources.

My next post in this series will be on “The Past and On-prem and the Cloud”.

Until next time, Rob