Introduction to Azure AI and Azure AI Studio: Your Gateway to the Future of Artificial Intelligence

Please note: only the graphics were generated by OpenAI's DALL·E 3 model; the rest is me 😉

The age of artificial intelligence (AI) is upon us, with advancements coming at a blistering pace. Microsoft's Azure AI is at the forefront of this revolution, providing a comprehensive suite of tools and services that enable developers, data scientists, and AI enthusiasts to create intelligent applications and solutions. In this blog post, we will delve deep into Azure AI and explore Azure AI Studio, a powerful platform that simplifies the creation and deployment of AI models.

What is Azure AI?

Azure AI is a collection of cognitive services, machine learning tools, and AI apps designed to help users build, train, and deploy AI models quickly and efficiently. It is part of Microsoft Azure, the company’s cloud computing service, which offers a wide range of services, including computing, analytics, storage, and networking.

Azure AI is built on the principles of democratizing AI technology, making it accessible to people with various levels of expertise. Whether you’re a seasoned data scientist or a developer looking to integrate AI into your applications, Azure AI has something for you.

Key Components of Azure AI

Azure AI consists of several key components that cater to different AI development needs:

  1. Azure Machine Learning (Azure ML): A cloud-based platform for building, training, and deploying machine learning models. It supports various machine learning algorithms, including pre-built models for common tasks.
  2. Azure Cognitive Services: These are pre-built APIs for adding AI capabilities like vision, speech, language, and decision-making to your applications without requiring deep data science knowledge (see the sketch after this list).
  3. Azure Bot Service: It provides tools to build, test, deploy, and manage intelligent bots that can interact naturally with users through various channels.
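
To make the Cognitive Services point concrete, here is a minimal sketch of calling a Text Analytics sentiment endpoint from PowerShell. The resource name, key, and API path are placeholders/assumptions you would swap for your own Cognitive Services resource:

# Minimal sketch: call a Cognitive Services (Text Analytics) sentiment endpoint.
$endpoint = 'https://<your-resource>.cognitiveservices.azure.com'
$key      = '<your-cognitive-services-key>'
$body = @{ documents = @(@{ id = '1'; language = 'en'; text = 'Azure AI makes this easy!' }) } | ConvertTo-Json -Depth 5

$response = Invoke-RestMethod -Method Post `
    -Uri "$endpoint/text/analytics/v3.1/sentiment" `
    -Headers @{ 'Ocp-Apim-Subscription-Key' = $key } `
    -ContentType 'application/json' `
    -Body $body

# Each document comes back with a sentiment label and confidence scores.
$response.documents | Select-Object id, sentiment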

Introducing Azure AI Studio (In Preview)

Azure AI Studio, which builds on the foundation of Azure Machine Learning studio, is an integrated, end-to-end data science and advanced analytics solution. It combines a visual interface, where you can drag and drop machine learning modules to build your AI models, with a powerful backend that supports model training and deployment.

Features of Azure AI Studio

  • Visual Interface: A user-friendly, drag-and-drop environment to build and refine machine learning workflows.
  • Pre-Built Algorithms and Modules: A library of pre-built algorithms and data transformation modules that accelerate development.
  • Scalability: The ability to scale your experiments using the power of Azure cloud resources.
  • Collaboration: Team members can collaborate on projects and securely share datasets, experiments, and models within the Azure cloud infrastructure.
  • Pipeline Creation: The ability to create and manage machine learning pipelines that streamline the data processing, model training, and deployment processes.
  • MLOps Integration: Supports MLOps (DevOps for machine learning) practices with version control, model management, and monitoring tools to maintain the lifecycle of machine learning models.
  • Hybrid Environment: Flexibility to build and deploy models in the cloud or on the edge, on-premises, and in hybrid environments.

Getting Started with Azure AI Studio

To begin using Azure AI Studio, you usually follow these general steps:

  1. Set up an Azure subscription: If you don’t already have one, create a Microsoft Azure account and set up a subscription.
  2. Create a Machine Learning resource: Navigate to the Azure portal and create a new Azure Machine Learning workspace resource.
  3. Launch AI Studio: Launch Azure AI Studio from the Azure portal once your resource is ready.
  4. Import Data: Bring your datasets from Azure storage services or your local machine.
  5. Build and Train Models: Use the visual interface to drag and drop datasets and modules to create machine-learning models. Split your data, select algorithms, and train your models.
  6. Evaluate and Deploy: Evaluate your trained models against test data and deploy them as a web service for real-time predictions or batch processing once satisfied with the performance.
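
If you prefer to script the groundwork, here is a minimal sketch of the first two steps using the Az PowerShell module (the resource group name and region are placeholders; the workspace itself can then be created from the portal or with the Az.MachineLearningServices module):

# Minimal sketch, assuming the Az PowerShell module is installed (Install-Module Az).
Connect-AzAccount                                              # sign in (step 1)
Get-AzSubscription | Select-Object Name, Id                    # confirm which subscription you are using
New-AzResourceGroup -Name 'ai-studio-rg' -Location 'East US'   # resource group to hold the workspace (step 2)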

Use Cases for Azure AI

Azure AI powers a variety of real-world applications, including but not limited to:

  • Healthcare: Predictive models for patient care, diagnosis assistance, and medical imaging analysis.
  • Retail: Personalized product recommendations, customer sentiment analysis, and inventory optimization.
  • Banking: Fraud detection, risk management, and customer service chatbots.
  • Legal: Document creation, legal briefs, and the ability to analyze a case with or without bias.
  • Manufacturing: Predictive maintenance, quality control, and supply chain optimization.

Conclusion

Azure AI and Azure AI Studio are powerful tools in the arsenal of anyone looking to harness the power of artificial intelligence. With its comprehensive suite of services, Azure AI simplifies integrating AI into applications, while Azure AI Studio democratizes machine learning model development with its visual, no-code interface. The future of AI is bright, and platforms like Azure AI make it more accessible than ever.

Azure AI not only brings advanced capabilities to the fingertips of developers and data scientists but also ensures that organizations can maintain control over their AI solutions with robust security, privacy, and compliance practices. As AI continues to evolve, Azure AI and Azure AI Studio will undoubtedly remain at the cutting edge, empowering users to turn their most ambitious AI visions into reality.

Until next time,

Rob

Azure Site Recovery – An overview

Azure Site Recovery (ASR) is a powerful disaster recovery and business continuity solution provided by Microsoft Azure. It enables businesses to keep their critical applications and services up and running in the event of unexpected downtime, disasters, or disruptions. With ASR, you can replicate your on-premises virtual machines, physical servers, and even entire data centers to Azure, and quickly restore them when needed.

In this blog post, we will dive deep into the capabilities, benefits, and use cases of Azure Site Recovery, and explore its key features along the way.

Capabilities of Azure Site Recovery

Azure Site Recovery provides a range of capabilities that can help businesses ensure high availability, data protection, and disaster recovery. Here are some of the key capabilities of ASR:

  1. Replication: ASR can replicate virtual machines, physical servers, and even entire data centers to Azure. This enables businesses to keep their critical applications and services up and running in the event of unexpected downtime, disasters, or disruptions.
  2. Orchestration: ASR can orchestrate the failover and failback of replicated virtual machines and servers. This ensures that the entire failover process is automated, orchestrated, and monitored.
  3. Testing: ASR provides a non-disruptive way to test disaster recovery scenarios without impacting the production environment. This enables businesses to validate their disaster recovery plans and ensure that they are working as expected.
  4. Integration: ASR integrates with a range of Azure services, including Azure Backup, Azure Monitor, Azure Automation, and Azure Security Center. This enables businesses to have a holistic view of their disaster recovery and business continuity operations.
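
Everything in ASR hangs off a Recovery Services vault, so that is the natural starting point. Here is a minimal sketch using the Az.RecoveryServices PowerShell module (the names and region are placeholders; the replication policy and fabric configuration that follow depend on your source environment):

# Minimal sketch, assuming the Az.RecoveryServices module.
New-AzResourceGroup -Name 'asr-rg' -Location 'East US'
$vault = New-AzRecoveryServicesVault -Name 'asr-vault' -ResourceGroupName 'asr-rg' -Location 'East US'

# Point subsequent ASR cmdlets at this vault before configuring replication.
Set-AzRecoveryServicesAsrVaultContext -Vault $vault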

Benefits of Azure Site Recovery

Azure Site Recovery provides a range of benefits to businesses of all sizes and industries. Here are some of the key benefits of ASR:

  1. High availability: ASR enables businesses to achieve high availability of their critical applications and services. This ensures that their customers and employees have access to the applications and services they need, even in the event of unexpected downtime, disasters, or disruptions.
  2. Data protection: ASR ensures that data is protected and can be recovered in the event of data loss or corruption. This is essential for businesses that handle sensitive data or have compliance requirements.
  3. Reduced downtime: ASR can help businesses reduce downtime by providing a fast and efficient way to recover from disasters or disruptions. This can save businesses a significant amount of time, money, and resources.
  4. Simplified disaster recovery: ASR simplifies the disaster recovery process by automating failover and failback operations. This reduces the risk of human error and ensures that the entire process is orchestrated and monitored.
  5. Lower costs: ASR can help businesses reduce their disaster recovery costs by eliminating the need for expensive hardware and infrastructure. This is because businesses can replicate their virtual machines and servers to Azure, which provides a cost-effective disaster recovery solution.

Use cases for Azure Site Recovery

  • Business Continuity: ASR can help businesses ensure business continuity by providing a way to keep their critical applications and services up and running in the event of unexpected downtime, disasters, or disruptions. With ASR, businesses can replicate their on-premises virtual machines and servers to Azure and failover to them in the event of a disaster.
  • Data Protection: ASR can help businesses protect their data by replicating it to Azure and providing a way to recover it in the event of data loss or corruption. With ASR, businesses can set up a replication policy to replicate data to Azure and configure recovery points to restore data to a specific point in time.
  • Migration: ASR can be used to migrate virtual machines and servers from on-premises to Azure. With ASR, businesses can replicate their on-premises workloads to Azure and then failover to the replicated virtual machines in Azure. This can help businesses move their workloads to Azure in a seamless and efficient manner.
  • Testing: ASR provides a non-disruptive way to test disaster recovery scenarios without impacting the production environment. With ASR, businesses can test their disaster recovery plans and ensure that they are working as expected without interrupting their production environment.
  • DevOps: ASR can be used in DevOps scenarios to replicate development and test environments to Azure. This can help businesses reduce the time and cost of setting up and managing these environments. With ASR, businesses can replicate their development and test environments to Azure and then failover to them when needed.
  • Compliance: ASR can help businesses meet compliance requirements by ensuring that their data is protected and can be recovered in the event of data loss or corruption. With ASR, businesses can replicate their data to Azure and then configure recovery points to ensure that their data can be restored to a specific point in time.
  • Hybrid Cloud: ASR can be used in hybrid cloud scenarios to ensure high availability and disaster recovery across on-premises and Azure environments. With ASR, businesses can replicate their on-premises workloads to Azure and then failover to them in the event of a disaster.
  • Multi-Site Disaster Recovery: ASR can be used to provide disaster recovery across multiple sites. With ASR, businesses can replicate their virtual machines and servers to multiple Azure regions and then failover to the replicated virtual machines in the event of a disaster.

In summary, Azure Site Recovery provides a range of capabilities that can help businesses ensure high availability, data protection, and disaster recovery. It can be used in a wide range of use cases across different industries to provide a cost-effective and efficient disaster recovery solution.

Until next time,

Rob

Azure Sentinel: The Future of Security Information and Event Management

In today's digital world, protecting an organization's information and assets from cyber threats has never been more critical. The rise in cyber attacks and security breaches has made it crucial for organizations to have a centralized platform to manage their security operations and respond to incidents promptly and effectively. That's where Azure Sentinel comes in.

Azure Sentinel is a cloud-native Security Information and Event Management (SIEM) solution provided by Microsoft Azure. It provides a comprehensive security solution that integrates with existing security tools and cloud services to provide a complete view of an organization’s security landscape. Azure Sentinel is designed to help organizations quickly detect, investigate and respond to security threats and streamline their security operations.

Azure Sentinel Core

One of the key benefits of Azure Sentinel is its ability to provide a unified view of security events from various sources. It can collect data from on-premises, cloud, and hybrid environments and a wide range of security tools and services. This data is then aggregated and analyzed in real-time to provide organizations with a complete picture of their security posture. Azure Sentinel also uses machine learning algorithms to identify patterns and anomalies and to detect threats that might have gone unnoticed.
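
Under the hood, that aggregation happens in a Log Analytics workspace that Sentinel is enabled on. As a minimal sketch, the workspace can be created with the Az.OperationalInsights PowerShell module (names and region are placeholders; enabling Sentinel on the workspace is then done from the portal or the Az.SecurityInsights module):

# Minimal sketch, assuming the Az.OperationalInsights module.
New-AzResourceGroup -Name 'sentinel-rg' -Location 'East US'
New-AzOperationalInsightsWorkspace -ResourceGroupName 'sentinel-rg' `
    -Name 'sentinel-workspace' -Location 'East US' -Sku 'PerGB2018'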

Another essential feature of Azure Sentinel is its ability to automate security workflows. It provides a flexible and powerful security automation and orchestration platform that enables organizations to respond to incidents quickly and effectively. Azure Sentinel provides built-in playbooks and pre-configured security workflows that can be triggered by specific events or conditions. Organizations can also create custom playbooks to automate their security operations.

In addition to these capabilities, Azure Sentinel is highly scalable, allowing organizations to manage security operations at any scale. It is built on Microsoft Azure, which provides a secure and reliable platform for security operations. Azure Sentinel is also designed to be cost-effective, letting organizations manage their security operations without significant investments in hardware or software.

In conclusion, Azure Sentinel provides organizations with a comprehensive and centralized security solution that integrates with existing security tools and cloud services to provide a complete view of an organization’s security landscape. With its ability to detect and respond to threats quickly and effectively, automate security workflows, and provide a cost-effective solution, Azure Sentinel is the future of SIEM. Azure Sentinel is a solution worth considering if you’re looking to enhance your security posture and streamline your security operations.

Until next time, Rob

Azure Batch: A Comprehensive Guide

Azure Batch Example

Azure Batch is a cloud-based platform offered by Microsoft Azure that enables users to run large-scale parallel and batch computing workloads. With Azure Batch, users can manage, schedule, and run their applications and tasks on a pool of virtual machines. This provides a flexible and scalable solution for businesses and organizations looking to run complex computing tasks in the cloud.

Key Features of Azure Batch

Scalability: Azure Batch allows users to scale their computing resources on demand, enabling them to handle even the largest computing workloads. The platform can automatically allocate and manage the virtual machines needed to run your tasks, ensuring that your applications have the resources they need to run smoothly.

Flexibility: Azure Batch supports a wide range of applications and languages, including .NET and Python, on both Windows and Linux. This makes it easy for organizations to integrate their existing applications and tools with Azure Batch.

Monitoring and Management: Azure Batch provides real-time monitoring and management capabilities, making it easy to track your batch jobs’ progress and quickly identify and resolve any issues.

Cost-Effective: Azure Batch offers a pay-per-use pricing model, so you only pay for the resources you consume. This helps to keep costs down, making it an attractive solution for organizations looking to reduce their IT expenses.

How to Use Azure Batch

To get started with Azure Batch, you’ll need to create a Batch account in the Azure portal. Once your account is set up, you can create a pool of virtual machines to run your tasks on. These virtual machines can be managed and scaled using the Azure Batch API or the Azure portal.

Next, you’ll need to create a batch job to run your tasks on the virtual machines in your pool. A batch job is a collection of tasks executed on your pool’s virtual machines. You can submit your tasks to the job, and Azure Batch will automatically manage the distribution of the tasks across the virtual machines in your pool.

Once your batch job runs, you can monitor its progress in real-time using the Azure portal or the Azure Batch API. You can also retrieve detailed information about each task, such as its status and any errors that may have occurred during its execution.
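
Here is a minimal sketch of that flow with the Az and Az.Batch PowerShell modules. The account, pool, and job names are placeholders, and it assumes a pool named 'demo-pool' already exists in the account (pool creation has quite a few options, so I am leaving it out here):

# Minimal sketch, assuming the Az.Batch module and an existing pool named 'demo-pool'.
New-AzBatchAccount -AccountName 'netwatchbatch' -ResourceGroupName 'NetWatchRG' -Location 'East US'
$context = Get-AzBatchAccountKey -AccountName 'netwatchbatch'

# A job is simply a container for tasks that run on the pool's nodes.
$poolInfo = New-Object Microsoft.Azure.Commands.Batch.Models.PSPoolInformation
$poolInfo.PoolId = 'demo-pool'
New-AzBatchJob -Id 'demo-job' -PoolInformation $poolInfo -BatchContext $context

# Submit a task; Batch decides which node in the pool runs it.
New-AzBatchTask -JobId 'demo-job' -Id 'task1' -CommandLine 'cmd /c echo Hello from Azure Batch' -BatchContext $context

# Check on progress.
Get-AzBatchTask -JobId 'demo-job' -BatchContext $context | Select-Object Id, State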

Examples of Effective Usage

  • Use auto-scaling to save cost: Azure Batch provides an auto-scaling feature that automatically adds or removes compute nodes based on the demand for your applications. This helps you save cost by only paying for what you use and avoiding over-provisioning of compute resources. To enable auto-scaling, you can use the auto-pool and auto-scale features in the Azure portal or through the Azure Batch API (see the sketch after this list).
  • Utilize the cloud-init script: You can use the cloud-init script to customize the behavior of your compute nodes. For example, you can use the script to install necessary software, configure firewall rules, or download data. The cloud-init script is executed every time a new compute node is created, ensuring that all nodes are consistently configured.
  • Make use of custom images: Azure Batch allows you to use custom images to deploy your applications, which can greatly reduce the time required to set up your environment. By creating a custom image with all the necessary software pre-installed, you can quickly create new compute nodes and start processing your data.
  • Take advantage of the task dependencies: Azure Batch provides the capability to specify task dependencies, which can help you ensure that tasks are executed in the correct order. You can use task dependencies to specify the order in which tasks are executed, or to make sure that a task is not executed until its dependencies have been completed.
  • Utilize the Job Preparation task: The Job Preparation task is a special task that runs on each compute node before the other tasks are executed. You can use the Job Preparation task to perform any necessary setup or configuration, such as installing software, copying data, or configuring firewall rules.
  • Monitor your jobs: Azure Batch provides robust monitoring capabilities that allow you to monitor the status of your jobs, tasks, and compute nodes. You can use the Azure portal, Azure Monitor, or the Azure Batch API to monitor your resources and get insights into the performance of your applications.
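
For the auto-scaling item above, here is a rough sketch of what an auto-scale formula can look like and how it is applied. The formula itself (pending tasks over the last five minutes, capped at ten nodes) is only an illustrative starting point, not a recommendation:

# Minimal sketch, assuming the Az.Batch module and the pool/context from the earlier example.
$formula = @'
$pending = max($PendingTasks.GetSample(TimeInterval_Minute * 5));
$TargetDedicatedNodes = min(10, $pending);
$NodeDeallocationOption = taskcompletion;
'@

Enable-AzBatchAutoScale -Id 'demo-pool' -AutoScaleFormula $formula -BatchContext $context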

Conclusion

Azure Batch is a powerful and flexible platform for running large-scale batch computing workloads in the cloud. With its ability to scale resources on demand, support for a wide range of applications and languages, and real-time monitoring and management capabilities, it’s an attractive solution for organizations looking to take their computing to the next level. Whether you’re running scientific simulations, data processing, or any other type of batch computing workload, Azure Batch can help you get the job done quickly and efficiently.

Until next time, Rob

Azure Powershell – How to Build and Deploy Azure IaaS VMs

Throughout my career, my primary role has always been to make things more efficient and automated.  And now more than ever, automation is needed to manage and deploy IT services at scale to support our ever-changing needs.

In my opinion, one of the most convenient aspects of public cloud-based services is the ability to host virtual machines (VMs). Hosting VMs in the cloud doesn’t just mean putting your VMs in someone else’s datacenter. It’s a way to achieve a scalable, low-cost and resilient infrastructure in a matter of minutes.

What once required hardware purchases, layers of management approval and weeks of work now can be done with no hardware and in a fraction of the time. We still probably have those management layers though 🙁

Microsoft Azure is in the lead pack along with Google (GCP) and Amazon (AWS). Azure has made great strides over the past few years in its Infrastructure as a Service (IaaS) offering, which allows you to host VMs in its cloud.

Azure provides a few different ways to build and deploy VMs in Azure.

  • You could choose to use the Azure portal, build VMs through Azure Resource Manager(ARM) templates and some PowerShell
  • Or you could simply use a set of PowerShell cmdlets to provision a VM and all its components from scratch.

Each has its advantages and drawbacks. However, the main reason to use PowerShell is for automation tasks. If you’re working on automated VM provisioning for various purposes, PowerShell is the way to go 😉

Let’s look at how we can use PowerShell to build all of the various components that a particular VM requires in Azure to eventually come up with a fully-functioning Azure VM.

To get started, you'll first obviously need an Azure subscription. If you don't have one, you can sign up for a free trial to start playing around. Once you have a subscription, I'm also going to be assuming you're using at least Windows 10 with PowerShell version 6. Even though the commands I'll be showing you might work fine on older versions of PowerShell, it's always a good idea to work alongside me with the same version, if possible.

You'll also need to have the Azure PowerShell module installed. This module contains hundreds of various cmdlets and sub-modules. The one we'll be focusing on is AzureRM. This contains all of the cmdlets we'll need to provision a VM in Azure.

Building a VM in Azure isn't quite as simple as New-AzureVM; far from it, actually. Granted, you might already have much of the underlying infrastructure required for a VM, but if you don't, how do you build it out? I'll be going over how to build every component necessary and will be assuming you're beginning to work from a blank Azure subscription.

At its most basic, an ARM VM requires eight individual components:

  1. A resource group
  2. A virtual network (VNET)
  3. A storage account
  4. A network interface with private IP on VNET
  5. A public IP address (if you need to access it from the Internet)
  6. An operating system
  7. An operating system disk
  8. The VM itself (compute)

The components in numbers 2 through 8 must all reside in a resource group, so we'll need to build this first. We can then use it to place all the other components in. To create a resource group, we'll use the New-AzureRmResourceGroup cmdlet. You can see below that I'm creating a resource group called NetWatchRG and placing it in the East US region.

New-AzureRmResourceGroup -Name 'NetWatchRG' -Location 'East US'

Next, I’ll build the networking that is required for our VM. This requires both creating a virtual subnet and adding that to a virtual network. I’ll first build the subnet where I’ll assign my VM an IP address dynamically in the 10.0.1.0/24 network when it gets built.

$newSubnetParams = @{
'Name' = 'NetWatchSubnet'
'AddressPrefix' = '10.0.1.0/24'
}
$subnet = New-AzureRmVirtualNetworkSubnetConfig @newSubnetParams

Next, I'll create my virtual network and place it in the resource group I just built. You'll notice that the subnet's network is a slice of the virtual network (my virtual network is a /16 while my subnet is a /24). This allows me to segment out my VMs.

$newVNetParams = @{
'Name' = 'NetWatchNetwork'
'ResourceGroupName' = 'NetWatchRG' ## same resource group as everything else
'Location' = 'East US'
'AddressPrefix' = '10.0.0.0/16'
'Subnet' = $subnet
}
$vNet = New-AzureRmVirtualNetwork @newVNetParams

Next, we'll need somewhere to store the VM so we'll need to build a storage account. You can see below that I'm building a storage account called netwatchsa (storage account names must be globally unique and all lowercase).

$newStorageAcctParams = @{
'Name' = 'netwatchsa'
'ResourceGroupName' = 'NetWatchRG'
'Type' = 'Standard_LRS'
'Location' = 'East US'
}
$storageAccount = New-AzureRmStorageAccount @newStorageAcctParams

Once the storage account is built, I'll now focus on building the public IP address. This is not required, but if you're just testing things out, it's probably easiest to simply access your VM over the Internet rather than having to worry about setting up a VPN.

Here I’m calling it NetWatchPublicIP and I’m ensuring that it’s dynamic since I don’t care what the public IP address is. I’m using many of the same parameters as the other objects as well.

$newPublicIpParams = @{
'Name' = 'NetWatchPublicIP'
'ResourceGroupName' = 'NetWatchRG'
'AllocationMethod' = 'Dynamic' ## Dynamic or Static
'DomainNameLabel' = 'netwatchvm1' ## DNS name labels must be lowercase
'Location' = 'East US'
}
$publicIp = New-AzureRmPublicIpAddress @newPublicIpParams

Once the public IP address is created, I then need a way to connect to my virtual network and, ultimately, the Internet. I'll create a network interface, again using the same resource group and location. You can also see how I'm slowly building all of the objects I need as I go along. Here I'm specifying the subnet ID I created earlier and the public IP address I just created. Each step requires objects from the previous steps.

$newVNicParams = @{
'Name' = 'NetWatchNic1'
'ResourceGroupName' = 'NetWatchRG'
'Location' = 'East US'
'SubnetId' = $vNet.Subnets[0].Id
'PublicIpAddressId' = $publicIp.Id
}
$vNic = New-AzureRmNetworkInterface @newVNicParams

Once we've got the underlying infrastructure defined, it's now time to build the VM.

First, you'll need to define the size (and therefore the performance) of the VM. Here I'm choosing a small, inexpensive option, Standard_A3. This is great for testing but might not be enough performance for your production environment.

$newConfigParams = @{
'VMName' = 'NETWATCHVM1'
'VMSize' = 'Standard_A3'
}
$vmConfig = New-AzureRmVMConfig @newConfigParams

Next, we need to configure the operating system. Here I'm specifying that I need a Windows VM, its computer name, the credentials for the local administrator account, and a couple of other Azure-specific parameters. By default, the Azure VM agent is installed but does not automatically update itself, so I'm explicitly enabling both provisioning and automatic updates. You don't strictly need the VM agent, but it will come in handy if you begin to need more advanced automation capabilities down the road.

$newVmOsParams = @{
'Windows' = $true
'ComputerName' = 'NETWATCHVM1'
'Credential' = (Get-Credential -Message 'Type the name and password of the local administrator account.')
'ProvisionVMAgent' = $true
'EnableAutoUpdate' = $true
}
$vm = Set-AzureRmVMOperatingSystem @newVmOsParams -VM $vmConfig

Next, we need to pick what image our OS will come from. Here I'm picking Windows Server 2016 Datacenter with the latest patches. This will pick an image from the Azure image gallery to be used for our VM.

$newSourceImageParams = @{
'PublisherName' = 'MicrosoftWindowsServer'
'Version' = 'latest'
'Skus' = '2016-Datacenter'
'VM' = $vm
}
$offer = Get-AzureRmVMImageOffer -Location 'East US' -PublisherName 'MicrosoftWindowsServer' |
    Where-Object { $_.Offer -eq 'WindowsServer' } ## filter to the WindowsServer offer specifically
$vm = Set-AzureRmVMSourceImage @newSourceImageParams -Offer $offer.Offer

Next, we'll attach the NIC we've built earlier to the VM and specify the NIC ID on the VM that we'd like to add it as, in case we need to add more NICs later.

$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $vNic.Id

At this point, Azure still doesn't know how you'd like the disk configuration on your VM. To define where the operating system will be stored, you'll need to create an OS disk. The OS disk is a VHD that's stored in your storage account. Here I'm putting the VHD in a "vhds" storage container (folder) in Azure. This step gets a little convoluted since we must specify the VhdUri. This is the URI to the storage account we created earlier.

$vmName = 'NETWATCHVM1'
$osDiskName = 'OSDisk'
$osDiskUri = $storageAccount.PrimaryEndpoints.Blob.ToString() + "vhds/" + $vmName + $osDiskName + ".vhd"

$newOsDiskParams = @{
'Name' = 'OSDisk'
'CreateOption' = 'fromImage'
'VM' = $vm
'VhdUri' = $osDiskUri
}

$vm = Set-AzureRmVMOSDisk @newOsDiskParams

Ok, Whew! We now have all the components required to finally bring up our VM. To build the actual VM, we'll use the New-AzureRmVM cmdlet. Since we've already done all of the hard work ahead of time, at this point, I simply need to pass the resource group name, the location, and the VM object which contains all of the configurations we just applied to it.

$newVmParams = @{
'ResourceGroupName' = 'NetWatchRG'
'Location' = 'East US'
'VM' = $vm
}
New-AzureRmVM @newVmParams

Your VM should now be showing up under the Virtual Machines section in the Azure portal. If you’d like to check on the VM from PowerShell you can also use the Get-AzureRmVM cmdlet.
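
For example, using the names from this post:

Get-AzureRmVM -ResourceGroupName 'NetWatchRG' -Name 'NETWATCHVM1'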

Now that you’ve got all the basic code required to build a VM in Azure, I suggest you go and build a PowerShell script from this tutorial. Once you’re able to bring this code together into a script, building your second, third or tenth VM will be a breeze!

One final tip: in addition to managing Azure through the portal in a browser, there are mobile apps for iOS and Android, and now the new Azure portal app (currently in preview). It gives you the same experience as the Azure portal without the need for a browser like Microsoft Edge or Google Chrome. Great for environments that have restrictions on browsing.

Until next time, Rob…

Windows Virtual Desktop and FSLogix – What you need to know?

Expanding on my last post on Windows Virtual Desktop, let's talk about FSLogix. So, let's start at the beginning: FSLogix was founded by Randy Cook and Kevin Goodman, VDI industry veterans, to tackle user experience problems with virtual desktops.

FSLogix was one of the first, along with Liquidware, to use virtual hard disks as a way to migrate the user's profile data between virtual desktops and sessions.

Giving users local admin rights on Windows desktops has become a thing of the past. More and more apps (for example, Modern Apps) install themselves and their caches directly into the user profile (because the user always has permissions to write there). While there are proven solutions for roaming only the required parts of the user profile and ignoring things like app installs, some administrators prefer the approach of just roaming everything and not trying to manage the contents of the profile.

In the last couple of years, the attention has shifted from user profile roaming to solving the problem of roaming Office 365 caches in virtual desktops, so that they perform and feel as fast as a physical desktop. Microsoft's early attempt using this approach – User Profile Disks, as introduced in Windows Server 2012 – was a step in the right direction but was lacking, and the acquisition of FSLogix allows them to accelerate their support for this capability.

When a user logs on to their Windows session, the Windows User Profile is loaded. The profile includes everything from the user’s download folder to their mouse scrolling speed preference and everything in between. So you can imagine that profiles can get big.  Check out my blog post on Windows Users Profiles – The Untold Mysteries to learn more.

There are also some programs that create massive profile data, like AutoCAD, which (thanks to NVIDIA GRID) works great in a VDI environment but easily generates GBs of profile data. If the user's profile grows this big, a roaming profile solution won't work. Logon will take minutes, or in some extreme cases hours, to complete because the file server will copy all the profile data to the endpoint. Even "just in time" profile technology like zero profiling isn't able to handle the big application data quickly enough for a good user experience, because it also just copies the data from a file server to the endpoint, just not in one big chunk like roaming profiles.

So, how does FSLogix Profile Containers help?

FSLogix Profile Containers creates a virtual hard disk (VHD) file on a file server and stores the user profile, including the registry, in that VHD file. Sounds relatively simple, right? So why does this improve speed? Well, during login the only thing that happens is that the endpoint mounts the VHD file as a virtual hard disk, and then the profile is just accessible. So there is NO data copy! This results in lightning-fast logons and eliminates file server and network bottlenecks from login storms.

FSLogix Profile Containers also has additional benefits for the end user: native support for Office 365 products, such as Outlook, Search, OneDrive for Business, SharePoint folder synchronization, Teams, and the Skype for Business GAL.

Profile Containers Cloud support

It’s worth mentioning that FSLogix has a cool tech called Cloud Cache. This functionality adds the possibility to add multiple storage repositories to the existing products to provide high availability to on-premises and cloud environments.

Imagine a workspace scenario where you are running a VDI/WVD environment in Microsoft Azure. Typically, you store your profile data on a Windows file share in Azure Infrastructure-as-a-Service. The Cloud Cache driver makes it possible to store the containers directly on much less expensive Azure Blob Storage. This is just one of the significant use cases that FSLogix is solving with this tremendous new cloud technology.

Other uses of Cloud Cache include high availability in the event of storage or network interruptions, profile storage server migrations, cloud migrations, offline access to FSLogix containers, and more.

So, how do you setup FSLogix Profile containers?

As always first, download the software here.

Next, you need to push the installer to your endpoints.  To make your life easier, use these silent install parameters:

FSLogixAppsSetup.exe /install /quiet /norestart ProductKey=YOURPRODUCTKEY

With the install, you also get FSLogix .admx and .adml Group Policy template files. You need to copy these to the PolicyDefinitions folder in your domain's central store at \\YOURDOMAIN\SYSVOL\YOURDOMAIN\Policies\PolicyDefinitions. Next, you need to create a new GPO object and set the following options:
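
The screenshot with the exact GPO settings didn't carry over into this post, but as a rough sketch, the two settings that matter most map to the following registry values on the endpoint (the file share path is a placeholder; the ADMX-based GPO sets the same values under HKLM\SOFTWARE\FSLogix\Profiles):

# Minimal sketch of the core Profile Container settings (normally configured via the FSLogix GPO).
$regPath = 'HKLM:\SOFTWARE\FSLogix\Profiles'
New-Item -Path $regPath -Force | Out-Null
Set-ItemProperty -Path $regPath -Name 'Enabled' -Value 1 -Type DWord                         # turn Profile Containers on
Set-ItemProperty -Path $regPath -Name 'VHDLocations' -Value '\\fileserver\FSLogixProfiles'   # where the profile VHD(X) files live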

Make sure you don't forget to disable roaming profiles and enable local profiles on the endpoint. You can monitor if the Profile Container is working correctly with the easy FSLogix Tray application located in: "C:\Program Files\FSLogix\Apps\frxtray.exe".

And that's it. 🙂 Your users can now log in with the speed of Flash Gordon and you never have to worry about profile issues again. It's a win-win!

FSLogix technology will be available to Microsoft customers with the following licenses, rather than just WVD as Microsoft had originally stated:

    • M365 E3, E5, F1 – These are subscriptions that include the Windows OS, everything in the Office 365 license, and additional tools and security software.
    • Windows E3, E5 – These are subscription licenses of the Windows OS.
    • Any Microsoft RDS Server CAL holder (for example, Citrix XenApp users; this is the newly added part that makes it more widely available).

Now we understand how it works and have a basic understanding of the setup and licensing. My next blog post in this series will be a video walkthrough of the setup and usage.

Until next time,

Rob

My thoughts on the Future of the Cloud

Many people in IT consider containers, a technology used to isolate applications in their own environment, to be the future.

However, serverless geeks think that containers will gradually fade away. They will exist as a low-level implementation detail bubbling below the surface, but most software developers will not have to deal with them directly. It may seem premature to declare victory for serverless just yet, but there are enough positive signs already. Forward-thinking organizations like iRobot, Coca-Cola, Thomson Reuters, and Autodesk are experimenting with and adopting serverless technologies. All major and minor cloud providers, including Azure, AWS, GCP, IBM, Oracle, and Pivotal, are working on serverless offerings. If you want to learn more, take a quick look at this link: https://docs.microsoft.com/en-us/archive/blogs/wincat/validating-hybrid-cloud-scenarios-in-the-server-2012-technology-adoption-program-tap.

Together with the major players, a whole ecosystem of startups is emerging. These startups attempt to solve problems around deployment and observability, provide new security solutions, and help enterprises evolve their systems and architectures to take advantage of serverless. This isn’t, of course, to mention a vibrant community of enthusiasts who contribute to serverless open source projects, evangelize at conferences and online, and promote ideas within their organizations.

It would be great to close the book now and declare victory for the serverless camp, but the reality is different. There are challenges that the community and vendors are yet to solve. These challenges are cultural and technological: there's tribal friction within the tech community, inertia to adoption within organizations, and issues around some of the technology itself. Also, remember to make sure that you are properly certified if you are running cloud-based services; ISO 27017 is the certification you need for that.

Confusion and the Cloud

While adoption of serverless is growing, more work needs to be done by the serverless community to communicate what this technology is all about. The community needs to bring more people in and explain how serverless adds value. It's inarguable that there are good questions from members of the tech community. These can range from trivial disagreements over "serverless" as a name, to more philosophical arguments about fit, use-case, and lock-in. This is a perfectly normal example of past successes (with other technologies) breeding inertia to change.

This isn’t to say that those who have objections are wrong. Serverless in its current incarnation isn’t suitable in all cases. There are limitations on how long functions can run, tooling is immature and monitoring distributed applications made up of a lot of functions and cloud services can be difficult (although some progress is being made to address this).

There’s also a need for a robust set of example patterns and architectures. After all, the best way to convince someone of the merit of technology is to build something with it and then show them how it was done.

Confusingly, there is a tendency by some vendors to label their offerings as serverless when they aren’t. This makes it look like they are jumping on the bandwagon rather than thoughtfully building services that adhere to serverless principles. Some of the bigger cloud vendors are guilty of this and unfortunately, this confuses people’s understanding of technology.

Go Big or Go Home

At the very large end of the scale, companies like Netflix and Uber are building their own internal serverless-like platforms. But unless you are the size of Netflix or Uber, building your own Functions as a Service (FaaS) platform from scratch is a terrible idea. Think of it like this: it's like building a toaster yourself rather than buying a commoditized, off-the-shelf product. Interestingly, Google recently released a product called Knative. This product, based on the open source Kubernetes container orchestration software, is designed to help build, deploy, and manage serverless workloads on your own servers.

For example, Google's Bret McGowen, at Serverlessconf San Francisco '18, gave a real-life customer example: an oil rig out in the middle of the ocean with poor Internet connectivity. The customer needed to perform computation with terabytes of telemetry data, but uploading it to a cloud platform over a connection equivalent to a 3G modem wasn't feasible. "They cannot use cloud and it's totally unfair to say — sorry buddy, hosted functions-as-a-service or bust — their developers deserve to have the same serverless experience as the rest of us" was Bret's explanation of why, in this case, running Knative locally on the oil rig made sense.

He is, of course, correct. Having a serverless system running in your own environment, when you cannot use a cloud platform, is better than nothing. However, for most of us, serverless solutions like Google Cloud Functions, Azure Functions, or AWS Lambda offer a far smaller barrier to entry and remove many administrative headaches. It's fair to say that most companies should look at serverless solutions like Lambda first and, if those don't satisfy requirements, look at other alternatives like Knative and containers second.

The Future…in my humble opinion

It’s likely that some of the major limitations with serverless functions are going to be solved in the coming years, if not months. Cloud vendors will allow functions to run for longer, support more languages, and allow deeper customizations. A lot of work is being done by cloud vendors to allow developers to bring their own containers to a hosted environment and then have those containers seamlessly managed by the platform alongside regular functions.

In the end, "do you have a choice?" "No, none whatsoever" was Bret's succinct, brutal answer at the conference. Existing limitations will be solved, and serverless compute technologies will herald the rise of new, emerging architectural patterns and practices. We are yet to see what these are, but this is the future and it is unavoidable.

Cloud computing is where we are, and where the world is going for the next decade or two. After that, probably something new will come along.

But the reasons for going to cloud computing in general and the inevitable wind-down of on-premises to niche special functions are now pretty obvious.

  • Security – Big cloud operators have FAR more security people and capacity than even a big enterprise, and your own disgruntled employees don’t have the keys to the servers.
  • Cost-effectiveness – Economies of scale. The rule of big numbers.
  • Zero capital outlay – reduced costs.
  • For software developers, no more software piracy. That’s a big saving on the cost of developing software, especially for sales in certain countries.
  • Compliance – So much easier if your cloud vendor is fully certified, so you only have to worry about your part of the puzzle.
  • Energy efficiency – Big, well-designed datacentres use a LOT less global resources.

My next post in this series will be on "The Past, On-prem, and the Cloud".

Until next time, Rob

Windows Virtual Desktop now in the Wild – Public Preview Now Available

The Windows Virtual Desktop (WVD) product and strategy announced last September is finally here in public preview.  Something near and dear to my heart for the last 6 months.  I’ve been in private preview and had to keep a lid on it 🙂 Yea!!

What is it?

Simply put, it's a multi-session Windows 10 experience with optimizations for Office 365 ProPlus, and support for Windows Server Remote Desktop Services (RDS) desktops. It means you can deploy and scale Windows desktops and apps on Azure quickly.

The service brings together single-user Windows 7 VDI and multi-user Windows 10 and Windows Server RDS, and it is hosted on any of Azure's virtual machine tiers; in a way, it's what you could call DaaS (Desktop as a Service).

Licensing

Microsoft is pricing WVD aggressively by charging only for the virtual machine costs; the license requirements for the Windows 7 and Windows 10 based services will be fulfilled by Microsoft 365 F1/E3/E5, Windows 10 Enterprise E3/E5, and Windows VDA subscriptions. The Windows Server-based services are similarly fulfilled by existing RDS client access licenses. This means that for many Microsoft customers, there will be no additional licensing cost for provisioning desktop computing in the cloud.

The virtual machine costs can be further reduced by using Reserved Instances that commit to purchasing certain amounts of VM time in return for lower pricing.  All of this just means simpler licensing for Office and Windows as opposed to the crazy license models of the past.  I am not saying that crazy licensing models are gone but have gotten much simpler.

What’s the deal with Windows 7 and Support?

The new service will be available for production environments by June, before Windows 7 support ends in January 2020.

But there is a big incentive: Windows 7 users will receive all three years of Extended Security Updates (ESU) at no extra cost. This should ease the cost of migration to the service; this is in contrast to on-premises deployments, which will cost either $25/$50/$100 for the three years of ESU availability or $50/$100/$200, depending on the precise Windows license being used.

WVD and O365

WVD will also provide particular benefits for Office 365 users. In November last year, Microsoft bought a company called FSLogix that develops software to streamline application provisioning in virtualized environments.

Outlook (with its offline data store) and OneDrive (with its synchronized file system) represent particular challenges for virtual desktops, as both applications store large amounts of data on the client machine.  This data is expected to persist across VM reboots and redeployments. FSLogix’s software allows these things to be stored on separate disk images that are seamlessly grafted onto the deployed virtual machine. WVD will use this software for clients running Office 365, but this can be optional.

Liquidware and WVD

Liquidware's ProfileUnity and FlexApp technologies complement what Microsoft includes with FSLogix. But do understand: if you just need a simple profile disk solution, then FSLogix is the way to go, and you will save yourself some money. Over my next few blog posts, I plan to show how to set up WVD and give a full walk-through of FSLogix running with WVD.

Sizing WVD?

Liquidware has a product called Stratusphere UX. It's an EUC monitoring tool that allows you to properly size your Azure environment for WVD. This helps you make smart decisions on migrations to WVD. It doesn't stop there: Stratusphere provides ongoing metrics and alerting that help IT pros continue to maintain a high-performing WVD environment into the future.

How do I get it?

The Azure Marketplace 🙂 The preview is available in the US East 2 and US Central Azure regions; when GA is announced, it will be available in all regions.
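
If you want to poke at it from PowerShell once you have access, the preview ships with a dedicated module. Here is a rough sketch of the first steps (the tenant name and IDs are placeholders, and cmdlet details may change while the service is in preview):

# Minimal sketch, assuming the Windows Virtual Desktop preview PowerShell module.
Install-Module -Name Microsoft.RDInfra.RDPowerShell
Import-Module Microsoft.RDInfra.RDPowerShell

# Sign in to the WVD management service.
Add-RdsAccount -DeploymentUrl 'https://rdbroker.wvd.microsoft.com'

# Create the WVD tenant that will hold your host pools.
New-RdsTenant -Name 'NetWatchTenant' -AadTenantId '<your-AAD-tenant-id>' -AzureSubscriptionId '<your-subscription-id>'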

In Microsoft's eyes, it's time to kick ass and take names 😉

Check out my next post on WVD and FSLogix.

Until next time, Rob