Working Remotely as an IT Pro in the age of COVID-19 – Tips for a Successful WFH Strategy for your end-users

Hey All,

It's been a few months since my last blog post. Not making excuses, but working as an IT Director at a large biotech can be challenging :).  I finally have some time to share some tips I gave my team and end-users during the COVID-19 pandemic.  Having a good WFH (Work from Home) environment is key to keeping the balance between work and home.

Get your technology in order

Technology is what enables remote work in the first place. So make sure to take your laptop home, and don't forget your dock and charger. Also, take home your mouse, keyboard, and monitors, plus anything else that might make working on your laptop from home a little easier. Then there's the software. Make sure you have the right applications. Lots of remote workers are leaning heavily on Microsoft Teams, Slack, WebEx, and Zoom.  In fact, Microsoft is currently offering Teams for free.

Iron out what your team is planning to use ASAP. And of course, you'll want to make sure all your technology actually works from home.

Make sure you have bandwidth

Another thing? Internet access. Is yours robust enough at home to allow you to video conference? Many conferences and almost all nonessential work travel are being canceled right now, so people want to use online video conferencing, which requires a good Internet connection.  If your bandwidth is low and you're on a video call, try shutting down other programs to lighten the load on your connection. If your connection is really choppy, you can often shut off the video portion of a call and participate with audio only; it defeats the purpose of seeing your team, but it will still allow you to participate in the conversation.

Another Internet hog? Kids.  If your connection is not robust, set some ground rules about when the kids can't be online because mom or dad is on a conference call, or stagger your video meetings with your partner or other family members if possible.

The kids are alright — but they’re home too

With school closures, and concerns about both putting kids in daycare and keeping those places staffed, parents are faced with a challenge, especially parents who have to physically go to work because they have no remote work option. If you are working from home with kids in tow, you'll need a plan for education and entertainment. Stock up on books and puzzles. Also, it's OK to use streaming services (Common Sense Media has good recommendations for kid-appropriate content).

Manage expectations with Work

It’s wise to have a discussion with your manager about what can actually be accomplished from home.  Ask your manager what the priorities are, and discuss how tasks will get done.  How are teams going to track projects they’re working on? How will they meet to discuss this? Will you all be connecting on Microsoft Teams or email? Will there be standing meetings at a certain time to get everyone coordinated?  This should be an ongoing conversation. Remember, going fully remote is a new experience for many companies and their workers. Be honest about what isn’t working or can’t get done in these circumstances. More overall communication is going to be necessary.

Embrace the webcam

Conference calls are tough: there are time delays, you can't always tell who's talking because you can't see the person, and people get interrupted by accident. Webcams solve a number of these issues, from the sense of isolation to that confusion. Being able to see the person you're talking to is important. And because we miss cues when we aren't working together in person, make doubly sure all colleagues understand their marching orders.  Personally, I tend to overcommunicate, and I think that's a good default setting. Don't be afraid to ask, "Is this clear?"  You can even try repeating back what you heard the other person say, to make sure you interpreted their meaning correctly.

Stay connected

One undeniable loss is the social, casual "water cooler" conversation that connects us to people; if you're not used to that loss, full-time remote work can feel isolating. To fill the gap, some co-workers are scheduling online social time to have conversations with no agenda. Use chat in Microsoft Teams or Slack if you miss real-time interaction.  Again, embrace video calling and webcams so you can see your colleagues. Try an icebreaker over your team chat: What's everyone's favorite TV show right now? What's one good thing someone read that day?

Are you a Manager of a Team? – Have a Daily Stand-Up Meeting with your Team or a Virtual Lunch and Learn

Keep them quick and make sure everyone participates. We do this on my IT team every day; it keeps the team engaged. I also plan on having virtual lunch and learns and having the company pick up lunch for the team.  Again, keep everyone engaged.

Hopefully everyone is staying safe at home and, of course, keeping up social distancing. Until next time, stay safe. Rob

Azure PowerShell – How to Build and Deploy Azure IaaS VMs

Throughout my career, my primary role has always been to make things more efficient and automated.  And now more than ever, automation is needed to manage and deploy IT services at scale to support our ever-changing needs.

In my opinion, one of the most convenient aspects of public cloud-based services is the ability to host virtual machines (VMs). Hosting VMs in the cloud doesn’t just mean putting your VMs in someone else’s datacenter. It’s a way to achieve a scalable, low-cost and resilient infrastructure in a matter of minutes.

What once required hardware purchases, layers of management approval and weeks of work now can be done with no hardware and in a fraction of the time. We still probably have those management layers though 🙁

Microsoft Azure is in the lead pack along with Google (GCP) and Amazon (AWS). Azure has made great strides over the past few years in its Infrastructure as a Service (IaaS) offering, which allows you to host VMs in its cloud.

Azure provides a few different ways to build and deploy VMs in Azure.

  • You could choose to use the Azure portal and build VMs through Azure Resource Manager (ARM) templates and some PowerShell
  • Or you could simply use a set of PowerShell cmdlets to provision a VM and all its components from scratch.

Each has its advantages and drawbacks. However, the main reason to use PowerShell is for automation tasks. If you’re working on automated VM provisioning for various purposes, PowerShell is the way to go 😉

Let’s look at how we can use PowerShell to build all of the various components that a particular VM requires in Azure to eventually come up with a fully-functioning Azure VM.

To get started, you'll obviously need an Azure subscription first. If you don't have one, you can sign up for a free trial to start playing around. Once you have a subscription, I'll also assume you're using at least Windows 10 with PowerShell version 6. Even though the commands I'll be showing you might work fine on older versions of PowerShell, it's always a good idea to work alongside me with the same version, if possible.

You'll also need to have the Azure PowerShell module installed. This module contains hundreds of cmdlets and sub-modules. The one we'll be focusing on is called AzureRM. It contains all of the cmdlets we'll need to provision a VM in Azure.
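
If you don't have the module yet, it can be pulled from the PowerShell Gallery. A quick sketch (note that AzureRM has since been superseded by the newer Az module, but I'll stick with AzureRM to match the commands in this post):

```powershell
# Install the AzureRM module from the PowerShell Gallery
Install-Module -Name AzureRM -Scope CurrentUser -Force

# Log in to Azure before running any of the provisioning commands below
Connect-AzureRmAccount

# Optionally, confirm which subscription you're targeting
Get-AzureRmContext
```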

Building a VM in Azure isn't quite as simple as New-AzureVM; far from it, actually. Granted, you might already have much of the underlying infrastructure a VM requires, but how do you build it out? I'll be going over how to build every component necessary, assuming you're working from a blank Azure subscription.

At its most basic, an ARM VM requires eight individual components:

  1. A resource group
  2. A virtual network (VNET)
  3. A storage account
  4. A network interface with private IP on VNET
  5. A public IP address (if you need to access it from the Internet)
  6. An operating system
  7. An operating system disk
  8. The VM itself (compute)

All of the components from 2 through 7 must reside in a resource group, so we'll need to build that first; we can then place all the other components in it. To create a resource group, we'll use the New-AzureRmResourceGroup cmdlet. You can see below that I'm creating a resource group called NetWatchRG and placing it in the East US region.

New-AzureRmResourceGroup -Name 'NetWatchRG' -Location 'East US'

Next, I’ll build the networking that is required for our VM. This requires both creating a virtual subnet and adding that to a virtual network. I’ll first build the subnet where I’ll assign my VM an IP address dynamically in the 10.0.1.0/24 network when it gets built.

$newSubnetParams = @{
'Name' = 'NetWatchSubnet'
'AddressPrefix' = '10.0.1.0/24'
}
$subnet = New-AzureRmVirtualNetworkSubnetConfig @newSubnetParams

Next, I'll create my virtual network and place it in the resource group I just built. You'll notice that the subnet's network is a slice of the virtual network (my virtual network is a /16 while my subnet is a /24). This allows me to segment out my VMs.

$newVNetParams = @{
'Name' = 'NetWatchNetwork'
'ResourceGroupName' = 'NetWatchRG'
'Location' = 'East US'
'AddressPrefix' = '10.0.0.0/16'
'Subnet' = $subnet
}
$vNet = New-AzureRmVirtualNetwork @newVNetParams

Next, we’ll need somewhere to store the VM so we’ll need to build a storage account. You can see below that I’m building a storage account called NetWatchSA.

$newStorageAcctParams = @{
'Name' = 'NetWatchSA'
'ResourceGroupName' = 'NetWatchRG'
'Type' = 'Standard_LRS'
'Location' = 'East US'
}
$storageAccount = New-AzureRmStorageAccount @newStorageAcctParams

Once the storage account is built, I'll focus on building the public IP address. This is not required, but if you're just testing things out, it's probably easier to access your VM over the Internet than to worry about setting up a VPN.

Here I’m calling it NetWatchPublicIP and I’m ensuring that it’s dynamic since I don’t care what the public IP address is. I’m using many of the same parameters as the other objects as well.

$newPublicIpParams = @{
'Name' = 'NetWatchPublicIP'
'ResourceGroupName' = 'NetWatchRG'
'AllocationMethod' = 'Dynamic' ## Dynamic or Static
'DomainNameLabel' = 'NETWATCHVM1'
'Location' = 'East US'
}
$publicIp = New-AzureRmPublicIpAddress @newPublicIpParams

Once the public IP address is created, I need a way to connect it to my virtual network and ultimately the Internet, so I'll create a network interface, again using the same resource group and location. You can see how I'm slowly building up all of the objects I need as I go along. Here I'm specifying the subnet ID I created earlier and the public IP address I just created; each step requires objects from the previous steps.

$newVNicParams = @{
'Name' = 'NetWatchNic1'
'ResourceGroupName' = 'NetWatchRG'
'Location' = 'East US'
'SubnetId' = $vNet.Subnets[0].Id
'PublicIpAddressId' = $publicIp.Id
}
$vNic = New-AzureRmNetworkInterface @newVNicParams

Once we've got the underlying infrastructure defined, it's time to build the VM itself.

First, you'll need to define the performance of the VM. Here I'm choosing one of the smaller (and cheaper) options, a Standard_A3. This is fine for testing but might not deliver enough performance for your production environment.

$newConfigParams = @{
'VMName' = 'NETWATCHVM1'
'VMSize' = 'Standard_A3'
}
$vmConfig = New-AzureRmVMConfig @newConfigParams

Next, we need to configure the OS itself. Here I'm specifying that I want a Windows VM, the computer name it will have, the password for the local administrator account, and a couple of other Azure-specific parameters. Note that the Azure VM agent is installed by default but does not automatically update itself. You don't strictly need the VM agent, but it will come in handy if you need more advanced automation capabilities down the road.

$newVmOsParams = @{
'Windows' = $true
'ComputerName' = 'NETWATCHVM1'
'Credential' = (Get-Credential -Message 'Type the name and password of the local administrator account.')
'ProvisionVMAgent' = $true
'EnableAutoUpdate' = $true
}
$vm = Set-AzureRmVMOperatingSystem @newVmOsParams -VM $vmConfig

Next, we need to pick the image our OS will come from. Here I'm picking Windows Server 2016 Datacenter with the latest patches. This will select an image from the Azure image gallery to be used for our VM.

$newSourceImageParams = @{
'PublisherName' = 'MicrosoftWindowsServer'
'Version' = 'latest'
'Skus' = '2016-Datacenter'
'VM' = $vm
}
$offer = Get-AzureRmVMImageOffer -Location 'East US' -PublisherName 'MicrosoftWindowsServer' | Where-Object { $_.Offer -eq 'WindowsServer' }
$vm = Set-AzureRmVMSourceImage @newSourceImageParams -Offer $offer.Offer

Next, we'll attach the NIC we built earlier to the VM by passing its ID, which also makes it easy to add more NICs later if we need to.

$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $vNic.Id

At this point, Azure still doesn't know how you'd like the disk configuration on your VM. To define where the operating system will be stored, you'll need to create an OS disk. The OS disk is a VHD stored in your storage account; here I'm putting the VHD in a 'vhds' storage container (folder) in Azure. This step gets a little convoluted since we must specify the VhdUri, which is the URI to the storage account we created earlier.

$osDiskUri = $storageAccount.PrimaryEndpoints.Blob.ToString() + 'vhds/NETWATCHVM1-OSDisk.vhd'

$newOsDiskParams = @{
'Name' = 'OSDisk'
'CreateOption' = 'fromImage'
'VM' = $vm
'VhdUri' = $osDiskUri
}

$vm = Set-AzureRmVMOSDisk @newOsDiskParams

OK, whew! We now have all the components required to finally bring up our VM. To build the actual VM, we'll use the New-AzureRmVM cmdlet. Since we've already done all of the hard work ahead of time, I simply need to pass the resource group name, the location, and the VM object that contains all of the configuration we just applied.

$newVmParams = @{
'ResourceGroupName' = 'NetWatchRG'
'Location' = 'East US'
'VM' = $vm
}
New-AzureRmVM @newVmParams

Your VM should now be showing up under the Virtual Machines section in the Azure portal. If you’d like to check on the VM from PowerShell you can also use the Get-AzureRmVM cmdlet.
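
For example, a quick status check using the names from this walkthrough might look like this:

```powershell
# Verify the VM exists and check its provisioning/power state
Get-AzureRmVM -ResourceGroupName 'NetWatchRG' -Name 'NETWATCHVM1' -Status
```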

Now that you’ve got all the basic code required to build a VM in Azure, I suggest you go and build a PowerShell script from this tutorial. Once you’re able to bring this code together into a script, building your second, third or tenth VM will be a breeze!

One final tip: in addition to managing Azure through the browser-based portal, there are mobile apps for iOS and Android and now the new Azure portal app (currently in preview).  It gives you the same experience as the Azure portal without the need for a browser like Microsoft Edge or Google Chrome.  Great for environments that have restrictions on browsing.

Until next time, Rob…

Azure Active Directory, Active Directory Domain Services – What’s the difference?

Here is a subject I hear about and get asked about over and over again: is Azure Active Directory (AAD) the same as Active Directory Domain Services (AD DS)?

Let me be very clear: Azure Active Directory is NOT a cloud version of Active Directory Domain Services, and in fact, it bears minimal resemblance to its on-premises namesake.

The number one question I get asked is: "How do I join my servers to Azure AD?" IT admins expect (not unreasonably) to be able to use Azure AD just like they have always used Active Directory Domain Services. So let's compare AD DS (and particularly the domain services part) to AAD.  Let me educate you 🙂

What is Active Directory?

Most of us have probably worked with it for years, and now you’re looking to move to the cloud and understand what AAD is. Let’s start with a recap of what AD DS is. 

Active Directory Domain Services was introduced as a hierarchical authentication and authorization database system to replace the flat-file domain system used on NT4 and earlier servers.

By 2000, the NT4 domain model was straining at the seams to keep up with evolving corporate structures, hampered by some quite severe limitations: a maximum of 26,000 objects in a flat-file "bucket", only 5 kinds of fixed objects whose structure (properties etc.) could not be changed, a maximum database size of 40 MB, and so on. NT4 domains also primarily used NetBIOS (another flat, Microsoft-specific system) for name resolution.

For a lot of larger organizations, this necessitated multiple domain databases with very limited and complicated interactions between those domains. Active Directory Domain Services (just called Active Directory in those days) was released with Windows Server 2000 and was based upon the X.500 hierarchical directory standard that products such as Novell's NDS and Banyan VINES used at the time.

AD DS also used DNS as its name resolution system and the TCP/IP communication protocols in use on the internet. It brought in the idea of a directory system which contained a “schema” database (the set of “rules” that define the properties or attributes of objects created in the “domain” database) which could be added to or “extended” to create either entirely new objects or new properties of existing objects.

Size limitations were also thrown out the window, with Microsoft creating directory systems in the billions of objects (given enough storage!) in their test labs.

Here is a list of the essential functions that make up AD DS:

  • Secure Object store, including Users, Computers and Groups
  • Object organization using OUs, Domains and Forests
  • Common Authentication and Authorization provider
  • LDAP, NTLM, Kerberos
  • Group Policy
  • Customizable Schema

Along with Domain Services, there are also components like Certificate Services, Federation Services, and Privileged Access Management.

From its inception, AD DS quickly became the de facto directory system in most organizations, and it remains so today.

What is Azure Active Directory?

So if you know what Active Directory Domain Services is, how does Azure Active Directory compare? The answer is: not very closely. The decision to name AAD after AD was, in my opinion, more of a marketing decision than a technical one, and it has led to years of confusion. In many ways, AAD was designed for a world where PaaS and SaaS services are the default choice, not for IaaS in the cloud.

Azure Active Directory is a secure authentication store, which can contain users and groups, but that is about where the similarities end. AAD is a cloud-based identity management store for modern applications. AAD is designed to allow you to create users, groups, and applications that work with modern authentication mechanisms like SAML and OAuth.

Applications are an object type that exists in AAD but not in AD DS. They let you create an identity for your applications, grant users access to those applications, and grant your users access to applications owned by others.

What AAD does not provide is any AD DS service beyond user management.

  • You can’t join computers to an Azure AD domain in the way you would with AD DS. There is something called Azure AD Join, but this is a different animal that I’ll address below. This means there are no computer objects in your AAD to apply things like GPOs to, and no centralized control of user rights on those machines.
  • There is no Group Policy. AAD has some policy tools like conditional access, but it is more focused on access to applications.
  • No support for LDAP; directory queries all use the REST (Graph) API or PowerShell/CLI
  • There's no support for NTLM or Kerberos; AAD supports modern authentication protocols only
  • There's no schema you have access to or can modify
  • Flat structure: no OUs, Domains or Forests
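
To make the LDAP point concrete, here's a rough sketch of the same directory lookup against each system. The cmdlets are real (from the ActiveDirectory and AzureAD modules), but the department value is just a hypothetical example:

```powershell
# AD DS: a classic LDAP-style query via the ActiveDirectory module
Get-ADUser -LDAPFilter '(department=IT)'

# AAD: the equivalent lookup goes through the Graph-backed AzureAD module - no LDAP involved
Connect-AzureAD
Get-AzureADUser -Filter "department eq 'IT'"
```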

So, at this point, it's obvious that Azure AD is a very different thing from AD DS. AAD is for user, group and application management in the cloud. If you're building all-new services using PaaS or SaaS and modern authentication protocols, then you should be all set with AAD; it's what it was designed for.

However, if you're running IaaS in Azure and want AD DS to domain-join machines and create GPOs, then AAD won't cut it for you (and that is by design).

Active Directory on Azure

Hopefully, it's now clear what AAD is and isn't. If you're building modern apps and AAD does what you need, then you can stop here.

However, if you are going down the IaaS route in Azure and you feel you still need the services of an AD domain, what alternatives are there?

Azure AD Join

I mentioned this briefly earlier; it is possible to join devices directly to Azure AD. AAD Join is limited to Windows 10 machines only and provides limited functionality, certainly nothing like a full AD join.

Once a device is Azure AD joined, it is possible to log in to it using Azure AD user accounts. You can also apply conditional access policies that require machines to be AAD joined before accessing resources or applications. If you're looking for a way to provide common user account management across Windows 10 machines, then this may work for you.

Azure AD Domain Services

If you need more than just user management, it is possible to extend Azure AD to offer more AD-based services using Azure AD Domain Services. AAD DS is an Azure product that you enable on your virtual network; it deploys two domain controllers that are managed by Microsoft and synchronized with your Azure AD tenant. This allows admins to grant machine access to users in your AAD tenant, but also to implement things like custom OUs, Group Policy, LDAP queries, NTLM, and Kerberos.

This is a domain managed by Microsoft, so you do not have to worry about patching your domain controllers or ensuring they stay up. However, it also means you do not have full control of the domain; for example, you do not have domain admin rights, only enough rights to undertake the tasks Microsoft allows. You can see a full breakdown of AAD DS limitations here.

AD Domain Controllers on Azure

Nothing is stopping you from simply deploying some virtual machines in Azure and turning them into domain controllers. This is a supported configuration and is in use by many people who need the full suite of AD services inside Azure.

The downside to this approach is that you need to manage this yourself. You need to take care of patching and updating your servers, backing up your domain and any other maintenance you require. You are also in charge of making sure it is highly available and implementing a DR strategy if you require it. If you need all that AD DS has to offer then, this can be a great option, but if all you want is a common user store for machine login, it might be overkill.

Access your On-Premises AD Domain

Finally, you can also extend your existing on-premises domain into Azure. Using ExpressRoute or VPN, you can connect your on-premises network to your Azure vNet and allow access to domain controllers. You can even deploy IaaS domain controllers in Azure that are joined to your on-premises domain. This then adds a dependency to your infrastructure of connectivity back to the on-premises network, so this connectivity becomes a key point of failure. You need to ensure that resiliency is built in.

Summary

If you're new to Azure, and especially to identity in Azure, I hope this clears things up. AAD is a new, modern authentication provider and is not Active Directory Domain Services in the cloud. AAD does not behave like the AD DS you know and love and really shouldn't be compared to it; it is a different service.

If you need AD DS in your cloud environment, then there are options to achieve this, but AAD is not going to give you that. Take a look at the options listed in this blog post and see what meets your needs.

Until next time, Rob

Windows Virtual Desktop and FSLogix – What You Need to Know

Expanding on my last post on Windows Virtual Desktop, let's talk about FSLogix. So, let's start at the beginning: FSLogix was founded by Randy Cook and Kevin Goodman, VDI industry veterans, to tackle user experience problems with virtual desktops.

FSLogix was one of the first along with Liquidware to use virtual hard disks as a way to migrate the user’s profile data between virtual desktops and sessions.

Giving users local admin rights on Windows desktops has become a thing of the past.  More and more apps (for example, Modern Apps) install themselves and their caches directly into the user profile (because the user always has permission to write there).  While there are proven solutions for roaming only the required parts of the user profile and ignoring things like app installs, some administrators prefer the approach of just roaming everything and not trying to manage the contents of the profile.

In the last couple of years, attention has shifted from user profile roaming to solving the problem of roaming Office 365 caches in virtual desktops, so that they perform and feel as fast as a physical desktop. Microsoft's early attempt at this approach, User Profile Disks, introduced in Windows Server 2012, was a step in the right direction but was lacking, and the acquisition of FSLogix allows Microsoft to accelerate its support for this capability.

When a user logs on to their Windows session, the Windows User Profile is loaded. The profile includes everything from the user’s download folder to their mouse scrolling speed preference and everything in between. So you can imagine that profiles can get big.  Check out my blog post on Windows Users Profiles – The Untold Mysteries to learn more.

There are also programs that create massive amounts of profile data, like AutoCAD, which (thanks to NVIDIA GRID) works great in a VDI environment but easily generates GBs of profile data. If a user's profile grows this big, a roaming profile solution won't work: logon will take minutes, or in some extreme cases hours, to complete because the file server has to copy all the profile data to the endpoint. Even "just in time" profile technology like zero profiling can't handle big application data quickly enough for a good user experience, because it also copies the data from a file server to the endpoint, just not in one big chunk like roaming profiles.

So, how does FSLogix Profile Containers help?

FSLogix Profile Containers creates a Virtual Hard Disk (VHD) file on a file server and stores the user profile, including the registry, in that VHD file. Sounds relatively simple, right? So why does this improve speed? During login, the only thing that happens is that the endpoint mounts the VHD file as a virtual hard drive, and the profile is immediately accessible. There is NO data copy! This results in lightning-fast logons and eliminates file server and network bottlenecks from login storms.

FSLogix Profile Containers also has additional benefits for the end user: native support for Office 365 products such as Outlook, Search, OneDrive for Business, SharePoint folder synchronization, Teams, and the Skype for Business GAL.

Profile Containers Cloud support

It's worth mentioning that FSLogix has a cool technology called Cloud Cache. This functionality adds the ability to use multiple storage repositories with the existing products, providing high availability for on-premises and cloud environments.

Imagine a workspace scenario where you are running a VDI/WVD environment in Microsoft Azure. Typically, you store your profile data on a Windows file share in Azure Infrastructure-as-a-Service. The Cloud Cache driver makes it possible to store the Containers directly on much less expensive Azure Blob Storage. This is just one of the significant use cases FSLogix is solving with this tremendous new cloud technology.

Other uses of Cloud Cache include high availability in the event of storage or network interruptions, profile storage server migrations, cloud migrations, offline access to FSLogix containers, and more.

So, how do you setup FSLogix Profile containers?

As always, first download the software here.

Next, you need to push the installer to your endpoints.  To make your life easier, use these silent install parameters:

“FSLogixAppsSetup.exe /install /quiet /norestart ProductKey=YOURPRODUCTKEY”. 

With the install, you also get FSLogix ADML and ADMX files. You need to copy these to the PolicyDefinitions folder in your domain's SYSVOL share. Next, create a new GPO and configure the Profile Container settings; at a minimum, enable Profile Containers and point the VHD location at your file share.
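
If you'd rather script it than use a GPO, the core ADMX settings map to registry values under HKLM\SOFTWARE\FSLogix\Profiles. A minimal sketch (the \\FILESERVER\Profiles share path is a placeholder for your own file share):

```powershell
# Minimal FSLogix Profile Container configuration via the registry
# (equivalent to the 'Enabled' and VHD location policy settings)
$regPath = 'HKLM:\SOFTWARE\FSLogix\Profiles'
if (-not (Test-Path $regPath)) { New-Item -Path $regPath -Force | Out-Null }

# Turn Profile Containers on
Set-ItemProperty -Path $regPath -Name 'Enabled' -Value 1 -Type DWord

# Point FSLogix at the share that will hold the per-user profile VHD(X) files
# (\\FILESERVER\Profiles is a placeholder - use your own share)
Set-ItemProperty -Path $regPath -Name 'VHDLocations' -Value '\\FILESERVER\Profiles' -Type MultiString
```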

Make sure you don't forget to disable roaming profiles and enable local profiles on the endpoint. You can monitor whether the Profile Container is working correctly with the handy FSLogix tray application located at "C:\Program Files\FSLogix\Apps\frxtray.exe".

And that's it. 🙂  Your users can now log in with the speed of Flash Gordon, and you never have to worry about profile issues again. It's a win-win!

FSLogix technology will be available to Microsoft customers with any of the following licenses, rather than just WVD as originally stated:

    • M365 E3, E5, F1 – subscriptions that include the Windows OS, everything in the Office 365 license, and additional tools and security software
    • Windows E3, E5 – subscription licenses for the Windows OS
    • Any Microsoft RDS Server CAL holder (for example, Citrix XenApp users; this is the newly added part that makes it more widely available)

You now have an understanding of how FSLogix works, along with the basics of setup and licensing.  My next blog post in this series will be a video walkthrough of the setup and usage.

Until next time,

Rob

My thoughts on the Future of the Cloud

Many people in IT consider containers, a technology used to isolate applications within their own environments, to be the future.

However, serverless geeks think that containers will gradually fade away: they will exist as a low-level implementation detail bubbling below the surface, but most software developers will not have to deal with them directly. It may seem premature to declare victory for serverless just yet, but there are enough positive signs already. Forward-thinking organizations like iRobot, Coca-Cola, Thomson Reuters, and Autodesk are experimenting with and adopting serverless technologies. All the major and minor cloud providers, including Azure, AWS, GCP, IBM, Oracle, and Pivotal, are working on serverless offerings.  If you want to learn more, take a quick look at this link: https://docs.microsoft.com/en-us/archive/blogs/wincat/validating-hybrid-cloud-scenarios-in-the-server-2012-technology-adoption-program-tap.

Together with the major players, a whole ecosystem of startups is emerging. These startups attempt to solve problems around deployment and observability, provide new security solutions, and help enterprises evolve their systems and architectures to take advantage of serverless. This isn’t, of course, to mention a vibrant community of enthusiasts who contribute to serverless open source projects, evangelize at conferences and online, and promote ideas within their organizations.

It would be great to close the book now and declare victory for the serverless camp, but the reality is different. There are challenges that the community and vendors have yet to solve. These challenges are both cultural and technological: there's tribal friction within the tech community, inertia to adoption within organizations, and issues with some of the technology itself.

Confusion and the Cloud

While adoption of serverless is growing, more work needs to be done by the serverless community to communicate what this technology is all about. The community needs to bring more people in and explain how serverless adds value. Members of the tech community have raised legitimate questions, ranging from trivial disagreements over "serverless" as a name to more philosophical arguments about fit, use cases, and lock-in. This is a perfectly normal example of past successes (with other technologies) breeding inertia to change.

This isn’t to say that those who have objections are wrong. Serverless in its current incarnation isn’t suitable in all cases. There are limitations on how long functions can run, tooling is immature and monitoring distributed applications made up of a lot of functions and cloud services can be difficult (although some progress is being made to address this).

There’s also a need for a robust set of example patterns and architectures. After all, the best way to convince someone of the merit of technology is to build something with it and then show them how it was done.

Confusingly, some vendors tend to label their offerings as serverless when they aren't. This makes it look like they are jumping on the bandwagon rather than thoughtfully building services that adhere to serverless principles. Some of the bigger cloud vendors are guilty of this and, unfortunately, it muddies people's understanding of the technology.

Go Big or Go Home

At the very large end of the scale, companies like Netflix and Uber are building their own internal serverless-like platforms. But unless you are the size of Netflix or Uber, building your own Functions as a Service (FaaS) platform from scratch is a terrible idea. Think of it this way: it's like building a toaster yourself rather than buying a commoditized, off-the-shelf product. Interestingly, Google recently released a product called Knative. This product, based on the open source Kubernetes container orchestration software, is designed to help build, deploy, and manage serverless workloads on your own servers.

For example, Google's Bret McGowen, at Serverlessconf San Francisco '18, gave a real-life customer scenario: an oil rig out in the middle of the ocean with poor Internet connectivity. The customer needed to perform computation on terabytes of telemetry data, but uploading it to a cloud platform over a connection equivalent to a 3G modem wasn't feasible. "They cannot use cloud, and it's totally unfair to say, sorry buddy, hosted functions-as-a-service or bust. Their developers deserve to have the same serverless experience as the rest of us" was Bret's explanation of why, in this case, running Knative locally on the oil rig made sense.

He is, of course, correct. Having a serverless system running in your own environment, when you cannot use a cloud platform, is better than nothing. However, for most of us, serverless solutions like Google Cloud Functions, Azure Functions, or AWS Lambda offer a far smaller barrier to entry and remove many administrative headaches. It's fair to say that most companies should look at serverless solutions like Lambda first and, only if those don't satisfy their requirements, look at alternatives like Knative and containers second.
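To make the "hosted functions" model concrete, here's a minimal sketch of what such a function looks like, using AWS Lambda's Python handler convention. The event payload and response shape are illustrative; a real deployment would wire this up behind an API gateway:

```python
import json

# Minimal Lambda-style handler: the platform invokes this function per
# request, so there is no server to provision or process to manage.
def handler(event, context):
    # 'event' carries the request payload; 'context' holds runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Everything outside this function (scaling, patching, process lifetime) is the platform's problem, which is the whole point.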

The Future…in my humble opinion

It’s likely that some of the major limitations with serverless functions are going to be solved in the coming years, if not months. Cloud vendors will allow functions to run for longer, support more languages, and allow deeper customizations. A lot of work is being done by cloud vendors to allow developers to bring their own containers to a hosted environment and then have those containers seamlessly managed by the platform alongside regular functions.

In the end, "do you have a choice?" "No, none whatsoever" was Bret's succinct, brutal answer at the conference. Existing limitations will be solved, and serverless compute technologies will herald the rise of new, emerging architectural patterns and practices. We are yet to see what these are, but this is the future, and it is unavoidable.

Cloud computing is where we are, and where the world is going for the next decade or two. After that, probably something new will come along.

But the reasons for moving to cloud computing in general, and the inevitable wind-down of on-premises to niche special functions, are now pretty obvious.

  • Security – Big cloud operators have FAR more security people and capacity than even a big enterprise, and your own disgruntled employees don’t have the keys to the servers.
  • Cost-effectiveness – Economies of scale. The rule of big numbers.
  • Zero capital outlay – reduced costs.
  • For software developers, no more software piracy. That’s a big saving on the cost of developing software, especially for sales in certain countries.
  • Compliance – So much easier if your cloud vendor is fully certified, so you only have to worry about your part of the puzzle.
  • Energy efficiency – Big, well-designed datacentres use a LOT less global resources.

My next post in this series will be on "The Past, On-prem, and the Cloud."

Until next time, Rob

Windows Virtual Desktop now in the Wild – Public Preview Now Available

The Windows Virtual Desktop (WVD) product and strategy announced last September are finally here in public preview.  This has been near and dear to my heart for the last six months. I've been in the private preview and had to keep a lid on it 🙂 Yea!!

What is it?

Simply put, it's a multi-session Windows 10 experience with optimizations for Office 365 ProPlus and support for Windows Server Remote Desktop Services (RDS) desktops. It means users can deploy and scale Windows desktops on Azure and on-premises quickly.

The service brings together single-user Windows 7 VDI and multi-user Windows 10 and Windows Server RDS, and is hosted on any of Azure's virtual machine tiers; you could call it DaaS (Desktop as a Service), in a way.

Licensing

Microsoft is pricing WVD aggressively by charging only for the virtual machine costs; the license requirements for the Windows 7- and Windows 10-based services will be fulfilled by Microsoft 365 F1/E3/E5, Windows 10 Enterprise E3/E5, and Windows VDA subscriptions. The Windows Server-based services are similarly fulfilled by existing RDS client access licenses. This means that for many Microsoft customers, there will be no additional licensing cost for provisioning desktop computing in the cloud.

The virtual machine costs can be further reduced by using Reserved Instances, which commit you to purchasing certain amounts of VM time in return for lower pricing.  All of this means simpler licensing for Office and Windows compared with the crazy license models of the past.  I am not saying that crazy licensing models are gone, but they have gotten much simpler.
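As a rough illustration of how Reserved Instances change the math, here's a back-of-the-envelope sketch. The hourly rate and discount below are hypothetical placeholders, not real Azure pricing; check the Azure pricing calculator for actual numbers:

```python
# Illustrative placeholders only -- not real Azure rates.
PAYG_PER_HOUR = 0.20        # hypothetical pay-as-you-go rate, USD/hour
RESERVED_DISCOUNT = 0.40    # hypothetical discount for a 1-year commitment
HOURS_PER_MONTH = 730       # average hours in a month

def monthly_cost(vm_count, reserved=False):
    """Estimated monthly VM spend, with or without a reservation."""
    rate = PAYG_PER_HOUR
    if reserved:
        rate *= (1 - RESERVED_DISCOUNT)
    return vm_count * rate * HOURS_PER_MONTH
```

With those placeholder numbers, ten always-on session hosts drop from roughly $1,460/month to about $876/month, which is why reservations matter for steady-state desktop workloads.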

What’s the deal with Windows 7 and Support?

The new service will be available for production environments by June, before Windows 7 support ends in January 2020.

But there is a big incentive: Windows 7 users will receive all three years of Extended Security Updates (ESU) at no extra cost. This should ease the cost of migration to the service. It is in contrast to on-premises deployments, which will cost either $25/$50/$100 or $50/$100/$200 for the three years of ESU availability, depending on the precise Windows license being used.
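That gap is easy to quantify. Here's a quick sketch of the per-device, three-year totals using the tiered year-by-year prices quoted above (which tier applies depends on your Windows license):

```python
# Year 1, year 2, year 3 per-device ESU prices from the two on-premises
# pricing tiers quoted above. On WVD, the same updates cost nothing extra.
lower_tier = [25, 50, 100]
higher_tier = [50, 100, 200]

def esu_total(yearly_prices, devices=1):
    """Total on-premises ESU spend for a fleet over all three years."""
    return devices * sum(yearly_prices)
```

That works out to $175 or $350 per device over the three years, so for a fleet of any size the "free ESU on WVD" incentive adds up quickly.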

WVD and O365

WVD will also provide particular benefits for Office 365 users. In November last year, Microsoft bought a company called FSLogix that develops software to streamline application provisioning in virtualized environments.

Outlook (with its offline data store) and OneDrive (with its synchronized file system) represent particular challenges for virtual desktops, as both applications store large amounts of data on the client machine.  This data is expected to persist across VM reboots and redeployments. FSLogix's software allows this data to be stored on separate disk images that are seamlessly grafted onto the deployed virtual machine. WVD will use this software for clients running Office 365, though it is optional.

Liquidware and WVD

The ProfileUnity and FlexApp technologies complement what Microsoft includes with FSLogix.  But do understand: if you just need a simple solution for Profile Disks, then FSLogix is the way to go, and you'll save yourself some money. Over my next few blog posts, I plan to show how to set up WVD, with a full walk-through of FSLogix running with WVD.

Sizing WVD?

Liquidware has a product called Stratusphere UX. It's an EUC monitoring tool that allows you to properly size your Azure environment for WVD, which helps you make smart decisions on migrations to WVD.  It doesn't stop there: Stratusphere provides ongoing metrics and alerting that help IT pros keep a WVD environment performing well into the future.
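As a rough sketch of the kind of sizing math this enables, here is a hypothetical estimate of session-host count from per-user resource figures, the kind of metrics an EUC monitoring tool collects. The VM size, headroom, and per-user numbers are all illustrative assumptions, not Stratusphere output:

```python
import math

def session_hosts_needed(users, cpu_per_user, ram_gb_per_user,
                         vm_vcpus=8, vm_ram_gb=32, headroom=0.2):
    """Estimate how many session-host VMs a user population needs.

    Reserves 'headroom' (e.g. 20%) of each VM for the OS and spikes,
    then sizes by whichever resource (CPU or RAM) runs out first.
    """
    usable_vcpus = vm_vcpus * (1 - headroom)
    usable_ram = vm_ram_gb * (1 - headroom)
    by_cpu = math.ceil(users * cpu_per_user / usable_vcpus)
    by_ram = math.ceil(users * ram_gb_per_user / usable_ram)
    return max(by_cpu, by_ram)
```

For example, 100 users averaging a quarter of a vCPU and 1.5 GB of RAM each would land on six of these hypothetical 8-vCPU/32 GB hosts, RAM being the constraint. Real measured per-user data is what makes this kind of estimate trustworthy.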

How do I get it?

The Azure Marketplace 🙂 The preview is available in the US East 2 and US Central Azure regions; when GA is announced, it will be available in all regions.

In Microsoft's eyes, it's time to kick ass and take names 😉

Check out my next post on WVD and FSLogix.

Until next time, Rob