Azure Site Recovery – An overview

Azure Site Recovery (ASR) is a powerful disaster recovery and business continuity solution provided by Microsoft Azure. It enables businesses to keep their critical applications and services up and running in the event of unexpected downtime, disasters, or disruptions. With ASR, you can replicate your on-premises virtual machines, physical servers, and even entire data centers to Azure, and quickly restore them when needed.

In this blog post, we will dive deep into the capabilities, benefits, and use cases of Azure Site Recovery. We will also explore the key features, architecture, and pricing model of ASR.

Capabilities of Azure Site Recovery

Azure Site Recovery provides a range of capabilities that can help businesses ensure high availability, data protection, and disaster recovery. Here are some of the key capabilities of ASR:

  1. Replication: ASR can replicate virtual machines, physical servers, and even entire data centers to Azure. This enables businesses to keep their critical applications and services up and running in the event of unexpected downtime, disasters, or disruptions.
  2. Orchestration: ASR can orchestrate the failover and failback of replicated virtual machines and servers. This ensures that the entire failover process is automated, orchestrated, and monitored.
  3. Testing: ASR provides a non-disruptive way to test disaster recovery scenarios without impacting the production environment. This enables businesses to validate their disaster recovery plans and confirm they work as expected (see the sketch after this list).
  4. Integration: ASR integrates with a range of Azure services, including Azure Backup, Azure Monitor, Azure Automation, and Azure Security Center. This enables businesses to have a holistic view of their disaster recovery and business continuity operations.
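
To make the testing capability concrete, here is a minimal sketch of driving a test failover from Python. It assumes the azure-identity and azure-mgmt-recoveryservicessiterecovery packages; every resource name below is a placeholder, and the exact client surface varies between SDK versions, so treat this as an illustration rather than a definitive implementation.

```python
# Illustrative sketch only: assumes the azure-mgmt-recoveryservicessiterecovery
# package. All names below are placeholders; constructor arguments and method
# names can differ between SDK versions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.recoveryservicessiterecovery import SiteRecoveryManagementClient
from azure.mgmt.recoveryservicessiterecovery.models import (
    TestFailoverInput,
    TestFailoverInputProperties,
)

client = SiteRecoveryManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",   # resource group of the vault
    resource_name="<recovery-vault>",         # Recovery Services vault
)

# Enumerate the replicated (protected) items in the vault.
for item in client.replication_protected_items.list():
    print(item.name, item.properties.protection_state)

# Kick off a non-disruptive test failover for one protected item. The fabric
# and container names come from the vault's replication topology.
poller = client.replication_protected_items.begin_test_failover(
    fabric_name="<fabric>",
    protection_container_name="<container>",
    replicated_protected_item_name="<protected-vm>",
    failover_input=TestFailoverInput(
        properties=TestFailoverInputProperties(
            failover_direction="PrimaryToRecovery",
            network_type="VmNetworkAsInput",
        )
    ),
)
poller.result()  # block until the test failover completes
```

After validating the result, the vault's test-failover cleanup operation removes the test VMs so the next drill starts clean.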

Benefits of Azure Site Recovery

Azure Site Recovery provides a range of benefits to businesses of all sizes and industries. Here are some of the key benefits of ASR:

  1. High availability: ASR enables businesses to achieve high availability of their critical applications and services. This ensures that their customers and employees have access to the applications and services they need, even in the event of unexpected downtime, disasters, or disruptions.
  2. Data protection: ASR ensures that data is protected and can be recovered in the event of data loss or corruption. This is essential for businesses that handle sensitive data or have compliance requirements.
  3. Reduced downtime: ASR can help businesses reduce downtime by providing a fast and efficient way to recover from disasters or disruptions. This can save businesses a significant amount of time, money, and resources.
  4. Simplified disaster recovery: ASR simplifies the disaster recovery process by automating failover and failback operations. This reduces the risk of human error and ensures that the entire process is orchestrated and monitored.
  5. Lower costs: ASR can help businesses reduce their disaster recovery costs by eliminating the need for expensive hardware and infrastructure. This is because businesses can replicate their virtual machines and servers to Azure, which provides a cost-effective disaster recovery solution.

Use cases for Azure Site Recovery

  • Business Continuity: ASR helps businesses keep their critical applications and services up and running in the event of unexpected downtime, disasters, or disruptions. Businesses can replicate their on-premises virtual machines and servers to Azure and fail over to them when disaster strikes.
  • Data Protection: ASR protects data by replicating it to Azure and providing a way to recover it after loss or corruption. Businesses can set up a replication policy and configure recovery points to restore data to a specific point in time.
  • Migration: ASR can be used to migrate virtual machines and servers from on-premises to Azure. Businesses replicate their on-premises workloads to Azure and then fail over to the replicated virtual machines, moving workloads to the cloud in a seamless and efficient manner.
  • Testing: ASR provides a non-disruptive way to test disaster recovery scenarios, letting businesses confirm their disaster recovery plans work as expected without interrupting the production environment.
  • DevOps: ASR can replicate development and test environments to Azure, reducing the time and cost of setting up and managing those environments, and letting teams fail over to them when needed.
  • Compliance: ASR helps businesses meet compliance requirements by ensuring that data is protected and recoverable. Replicating data to Azure and configuring recovery points ensures that data can be restored to a specific point in time.
  • Hybrid Cloud: ASR supports hybrid cloud scenarios, providing high availability and disaster recovery across on-premises and Azure environments; businesses replicate their on-premises workloads to Azure and fail over to them in the event of a disaster.
  • Multi-Site Disaster Recovery: ASR can provide disaster recovery across multiple sites by replicating virtual machines and servers to multiple Azure regions and failing over to the replicas in the event of a disaster.

In summary, Azure Site Recovery provides a range of capabilities that can help businesses ensure high availability, data protection, and disaster recovery. It can be used in a wide range of use cases across different industries to provide a cost-effective and efficient disaster recovery solution.

Until next time,

Rob

Azure vs AWS vs Google Cloud: The Ultimate Cloud Marketplace Showdown

In today’s rapidly evolving digital landscape, businesses and developers increasingly use cloud marketplaces to access various applications, services, and tools. The leading cloud providers—Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP)—each offer a unique marketplace experience catering to diverse needs and preferences. This post dives deep into the world of cloud marketplaces, comparing Azure, AWS, and Google Cloud on multiple dimensions, including user experience, available services, pricing, and more. Let’s get started!

  1. User Experience

Azure Marketplace: Microsoft Azure boasts an intuitive and visually appealing user interface, making it easy for users to navigate and discover relevant services. A well-organized layout and comprehensive search functionality simplify finding, deploying, and managing applications and services.

AWS Marketplace: The AWS Marketplace is similarly user-friendly, with a clean interface allowing users to browse and find services without hassle. It offers advanced filtering options, enabling users to narrow their search based on specific criteria such as pricing and ratings.

Google Cloud Marketplace: Google Cloud Marketplace is known for its simplicity and easy-to-use interface. It incorporates Google’s signature minimalist design, making it an enjoyable user experience. Like the other two, Google Cloud Marketplace also provides advanced search and filtering options to streamline the search process.

  2. Available Services

Azure Marketplace: Azure Marketplace offers various applications and services, including AI and machine learning, data analytics, security, and IoT solutions. Microsoft has a robust ecosystem of partners, allowing it to provide a wide variety of third-party applications and services that cater to the unique needs of its customers.

AWS Marketplace: AWS Marketplace has an extensive selection of applications and services, making it one of the most comprehensive cloud marketplaces available. It covers everything from machine learning and big data to application development and security, ensuring users can find the tools to build and maintain their cloud infrastructure.

Google Cloud Marketplace: While Google Cloud Marketplace may not have as many offerings as Azure and AWS, it still provides an impressive range of services, including data analytics, AI and machine learning, and security tools. Google has rapidly expanded its marketplace, consistently adding new applications and services to stay competitive.

  3. Pricing

Azure Marketplace: Microsoft Azure follows a pay-as-you-go pricing model for most services, meaning users only pay for what they use. Some services have a fixed monthly fee, while others provide a combination of free and paid tiers. Azure also offers cost management tools to help users monitor and control their spending.

AWS Marketplace: Like Azure, AWS employs a pay-as-you-go model for most services. It also provides several cost-saving options, such as reserved instances and savings plans. AWS’s cost management tools allow users to track and optimize their spending across various services effectively.

Google Cloud Marketplace: Google Cloud also adheres to a pay-as-you-go pricing model, with additional options for committed use contracts and sustained use discounts. Google’s pricing is often considered more competitive than that of Azure and AWS, making it an attractive choice for cost-conscious users.

  4. Support and Ecosystem

Azure Marketplace: Microsoft Azure has an extensive support network, including an active community forum, documentation, and tutorials. Additionally, users can access premium support services for a fee. The Azure ecosystem is expansive, with a multitude of partners offering a variety of services and applications.

AWS Marketplace: AWS provides many support options, including documentation, tutorials, and an active community forum. Like Azure, AWS also offers premium support for a fee. The AWS ecosystem is vast, and its marketplace continually grows as more partners and third-party providers join the platform.

Google Cloud Marketplace: Google Cloud offers a robust support system, including comprehensive documentation, tutorials, and a community forum. While premium support is available for a fee, Google also provides various free resources to help users navigate their cloud journey. The Google Cloud ecosystem is steadily growing, with new partners and third-party providers continually added to the marketplace.

  5. Compliance and Security

Azure Marketplace: Microsoft Azure is known for its commitment to security and compliance, offering various certifications and attestations to meet multiple industry standards. Azure’s Security Center provides users with an integrated security monitoring and policy management solution to safeguard their cloud resources.

AWS Marketplace: AWS is equally committed to security and compliance, with numerous certifications and attestations available to address industry-specific requirements. AWS offers robust security features, such as identity and access management, threat detection, and encryption, ensuring a secure cloud environment for users.

Google Cloud Marketplace: Google Cloud takes security and compliance seriously, with a strong focus on data protection and privacy. It offers certifications and attestations to meet industry standards and provides tools like Cloud Security Command Center to help users monitor and manage their cloud security.

Conclusion

The choice between Azure, AWS, and Google Cloud Marketplaces ultimately depends on your unique needs, preferences, and budget. Each provider offers a slightly different user experience, range of services, pricing model, and support ecosystem. When selecting a cloud marketplace, consider your organization’s infrastructure, technical requirements, and long-term growth plans.

Microsoft Azure is an excellent choice for organizations already using Microsoft products and services, as it offers seamless integration with their existing infrastructure. AWS Marketplace provides many applications and services, making it ideal for those seeking a comprehensive cloud solution. With its competitive pricing and a strong focus on data protection, Google Cloud Marketplace is an attractive option for cost-conscious users and organizations prioritizing data privacy.

Ultimately, the best cloud marketplace for your organization will depend on your specific requirements and goals. Take the time to explore each platform, evaluate its offerings, and select the one that best aligns with your organization’s vision for the future.

Until next time,

Rob

Microsoft Teams vs Slack: Which Collaboration Tool is Right for Your Team?

Microsoft Teams and Slack are two of the most popular collaboration and communication platforms used by organizations today. Both platforms offer a wide range of features, including instant messaging, file sharing, video conferencing, and more. However, some key differences between Microsoft Teams and Slack are worth considering when choosing the right platform for your organization.

Integration: Microsoft Teams integrates with several other Microsoft applications, such as SharePoint, OneDrive, and OneNote, making it an ideal choice for organizations already using Microsoft products. Slack, on the other hand, integrates with a broader range of third-party applications and services, making it a good option for organizations that use various tools.

File Management: Microsoft Teams offers built-in file management capabilities with its integration with OneDrive and SharePoint. This means you can store, share, and access all your files in one place. Slack also has robust file management capabilities, but you may need to integrate it with a third-party storage solution to get the same functionality as Teams.

Video Conferencing: Both Microsoft Teams and Slack offer video conferencing capabilities, but Teams has a clear advantage with its native meetings experience. Teams meetings offer advanced features such as screen sharing, recording, and scheduling and joining meetings directly from the Teams app. Slack also provides video conferencing, but its capabilities are not as comprehensive as those offered by Teams.

Pricing: Both Microsoft Teams and Slack offer free and paid plans, but Microsoft Teams is generally more expensive than Slack. However, organizations with a Microsoft 365 subscription may find Teams a more cost-effective solution, as it is included in their subscription.

In conclusion, if you want to spice up your office communication and make it a bit more fun, you could try sending your colleagues messages in Morse code or using carrier pigeons instead of Microsoft Teams or Slack. But on a serious note, while both platforms have their pros and cons, ultimately, the choice between them should be based on your organization’s needs and preferences. So, pick the one that suits you best, and don’t forget to send a GIF or two to keep things lighthearted!

Until next time,

Rob

Azure Sentinel: The Future of Security Information and Event Management

In today’s digital world, protecting an organization’s information and assets from cyber threats has never been more critical. The rise in cyber attacks and security breaches has made it crucial for organizations to have a centralized platform to manage their security operations and respond to incidents promptly and effectively. That’s where Azure Sentinel comes in.

Azure Sentinel is a cloud-native Security Information and Event Management (SIEM) solution provided by Microsoft Azure. It provides a comprehensive security solution that integrates with existing security tools and cloud services to provide a complete view of an organization’s security landscape. Azure Sentinel is designed to help organizations quickly detect, investigate and respond to security threats and streamline their security operations.

Azure Sentinel Core

One of the key benefits of Azure Sentinel is its ability to provide a unified view of security events from various sources. It can collect data from on-premises, cloud, and hybrid environments, as well as from a wide range of security tools and services. This data is then aggregated and analyzed in real time to give organizations a complete picture of their security posture. Azure Sentinel also uses machine learning algorithms to identify patterns and anomalies and to detect threats that might otherwise go unnoticed.
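
Because Sentinel’s analytics sit on a Log Analytics workspace, its data can also be queried from code. Here is a minimal sketch, assuming the azure-identity and azure-monitor-query Python packages (the post itself doesn’t name a client library), that summarizes recent high-severity alerts; the workspace ID and the KQL are placeholders you would adapt.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder: the Log Analytics workspace backing your Sentinel instance.
WORKSPACE_ID = "<workspace-id>"

client = LogsQueryClient(DefaultAzureCredential())

# KQL: count high-severity alerts per product over the last 24 hours.
QUERY = """
SecurityAlert
| where TimeGenerated > ago(24h)
| where AlertSeverity == 'High'
| summarize AlertCount = count() by ProductName
| order by AlertCount desc
"""

response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```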

Another essential feature of Azure Sentinel is its ability to automate security workflows. It provides a flexible and powerful security automation and orchestration platform that enables organizations to respond to incidents quickly and effectively. Azure Sentinel provides built-in playbooks and pre-configured security workflows that can be triggered by specific events or conditions. Organizations can also create custom playbooks to automate their security operations.
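
Playbooks themselves are authored as Azure Logic Apps, but incident data can also be pulled into custom automation. Below is a hedged sketch, assuming the azure-mgmt-securityinsight package (again, my choice, not one the post names), that lists a workspace’s incidents so an external script could triage or escalate them.

```python
# Hedged sketch: assumes the azure-mgmt-securityinsight package. Resource
# names are placeholders, and model attribute names vary by SDK version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.securityinsight import SecurityInsights

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"    # resource group of the workspace
WORKSPACE_NAME = "<workspace-name>"    # workspace Sentinel is attached to

client = SecurityInsights(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Enumerate incidents; a custom automation script could filter on
# severity or status here and call out to a ticketing system.
for incident in client.incidents.list(RESOURCE_GROUP, WORKSPACE_NAME):
    title = getattr(incident, "title", None)
    severity = getattr(incident, "severity", None)
    print(incident.name, title, severity)
```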

In addition to these capabilities, Azure Sentinel is highly scalable, allowing organizations to manage security operations at any scale. It is built on Microsoft Azure, which provides a highly scalable, secure, and reliable platform for security operations. Azure Sentinel is also designed to be cost-effective, letting organizations manage their security operations without significant investments in hardware or software.

In conclusion, Azure Sentinel provides organizations with a comprehensive and centralized security solution that integrates with existing security tools and cloud services to provide a complete view of an organization’s security landscape. With its ability to detect and respond to threats quickly and effectively, automate security workflows, and provide a cost-effective solution, Azure Sentinel is the future of SIEM. Azure Sentinel is a solution worth considering if you’re looking to enhance your security posture and streamline your security operations.

Until next time,

Rob

Azure Batch: A Comprehensive Guide

Azure Batch is a cloud-based platform offered by Microsoft Azure that enables users to run large-scale parallel and batch computing workloads. With Azure Batch, users can manage, schedule, and run their applications and tasks on a pool of virtual machines. This provides a flexible and scalable solution for businesses and organizations looking to run complex computing tasks in the cloud.

Key Features of Azure Batch

Scalability: Azure Batch allows users to scale their computing resources on demand, enabling them to handle even the largest computing workloads. The platform can automatically allocate and manage the virtual machines needed to run your tasks, ensuring that your applications have the resources they need to run smoothly.

Flexibility: Azure Batch supports a wide range of applications and languages, including .NET and Python, on both Windows and Linux compute nodes. This makes it easy for organizations to integrate their existing applications and tools with Azure Batch.

Monitoring and Management: Azure Batch provides real-time monitoring and management capabilities, making it easy to track your batch jobs’ progress and quickly identify and resolve any issues.

Cost-Effective: Azure Batch offers a pay-per-use pricing model, so you only pay for the resources you consume. This helps to keep costs down, making it an attractive solution for organizations looking to reduce their IT expenses.

How to Use Azure Batch

To get started with Azure Batch, you’ll need to create a Batch account in the Azure portal. Once your account is set up, you can create a pool of virtual machines to run your tasks on. These virtual machines can be managed and scaled using the Azure Batch API or the Azure portal.

Next, you’ll need to create a batch job to run your tasks on the virtual machines in your pool. A batch job is a collection of tasks executed on your pool’s virtual machines. You can submit your tasks to the job, and Azure Batch will automatically manage the distribution of the tasks across the virtual machines in your pool.
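
Here is a minimal sketch of that flow using the azure-batch Python SDK, following the shape of Microsoft’s quickstart; the account name, key, URL, and the pool, job, and task IDs are all placeholders.

```python
# Minimal sketch with the azure-batch Python SDK. Account values and IDs
# are placeholders you would replace with your own.
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
import azure.batch.models as batchmodels

ACCOUNT_NAME = "<batch-account>"
ACCOUNT_KEY = "<batch-account-key>"
ACCOUNT_URL = "https://<batch-account>.<region>.batch.azure.com"

client = BatchServiceClient(
    SharedKeyCredentials(ACCOUNT_NAME, ACCOUNT_KEY), batch_url=ACCOUNT_URL)

# 1. Create a small pool of Ubuntu nodes to run the work.
client.pool.add(batchmodels.PoolAddParameter(
    id="demo-pool",
    vm_size="STANDARD_D2S_V3",
    target_dedicated_nodes=2,
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="canonical",
            offer="0001-com-ubuntu-server-focal",
            sku="20_04-lts",
            version="latest",
        ),
        node_agent_sku_id="batch.node.ubuntu 20.04",
    ),
))

# 2. Create a job bound to the pool.
client.job.add(batchmodels.JobAddParameter(
    id="demo-job",
    pool_info=batchmodels.PoolInformation(pool_id="demo-pool"),
))

# 3. Submit tasks; Batch distributes them across the pool's nodes.
client.task.add_collection("demo-job", [
    batchmodels.TaskAddParameter(
        id=f"task-{i}",
        command_line=f"/bin/bash -c 'echo processing chunk {i}'",
    )
    for i in range(4)
])
```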

Once your batch job is running, you can monitor its progress in real time using the Azure portal or the Azure Batch API. You can also retrieve detailed information about each task, such as its status and any errors that may have occurred during its execution.
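
Continuing the sketch above (and reusing its client), polling task state until the job drains might look like this; the ten-second interval is an arbitrary choice.

```python
import time

import azure.batch.models as batchmodels

# Poll until every task in the job reaches the 'completed' state.
while True:
    tasks = list(client.task.list("demo-job"))
    incomplete = [t for t in tasks
                  if t.state != batchmodels.TaskState.completed]
    if not incomplete:
        break
    print(f"{len(incomplete)} task(s) still running...")
    time.sleep(10)

# Inspect per-task results, including failures surfaced by Batch.
for task in tasks:
    info = task.execution_info
    print(task.id, task.state, "exit code:", info.exit_code if info else None)
```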

Examples of Effective Usage

  • Use auto-scaling to save costs: Azure Batch provides an auto-scaling feature that automatically adds or removes compute nodes based on the demand for your applications. This helps you save money by paying only for what you use and avoiding over-provisioning of compute resources. To enable auto-scaling, you can use the auto-pool and auto-scale features in the Azure portal or through the Azure Batch API (a sketch of enabling an autoscale formula follows this list).
  • Utilize node startup scripts: You can customize the behavior of your compute nodes with a pool start task (or, on Linux custom images, a cloud-init script). For example, you can install necessary software, configure firewall rules, or download data. The start task is executed every time a new compute node joins the pool, ensuring that all nodes are consistently configured.
  • Make use of custom images: Azure Batch allows you to use custom images to deploy your applications, which can greatly reduce the time required to set up your environment. By creating a custom image with all the necessary software pre-installed, you can quickly create new compute nodes and start processing your data.
  • Take advantage of task dependencies: Azure Batch lets you specify task dependencies, ensuring that tasks are executed in the correct order: a dependent task is not scheduled until the tasks it depends on have completed.
  • Utilize the Job Preparation task: The Job Preparation task is a special task that runs on each compute node before the other tasks are executed. You can use the Job Preparation task to perform any necessary setup or configuration, such as installing software, copying data, or configuring firewall rules.
  • Monitor your jobs: Azure Batch provides robust monitoring capabilities that allow you to monitor the status of your jobs, tasks, and compute nodes. You can use the Azure portal, Azure Monitor, or the Azure Batch API to monitor your resources and get insights into the performance of your applications.
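
To make the auto-scaling bullet above concrete, here is a sketch of attaching an autoscale formula to an existing pool; the formula is the pending-tasks example from Microsoft’s documentation, and the pool ID and account values are placeholders.

```python
from datetime import timedelta

from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials

client = BatchServiceClient(
    SharedKeyCredentials("<batch-account>", "<batch-account-key>"),
    batch_url="https://<batch-account>.<region>.batch.azure.com")

# Batch autoscale formulas use their own expression language and are
# evaluated service-side: scale on pending tasks, capped at 25 nodes.
FORMULA = """
startingNumberOfVMs = 1;
maxNumberofVMs = 25;
pendingTaskSamplePercent = $PendingTasks.GetSamplePercent(180 * TimeInterval_Second);
pendingTaskSamples = pendingTaskSamplePercent < 70 ? startingNumberOfVMs : avg($PendingTasks.GetSample(180 * TimeInterval_Second));
$TargetDedicatedNodes = min(maxNumberofVMs, pendingTaskSamples);
"""

client.pool.enable_auto_scale(
    pool_id="demo-pool",
    auto_scale_formula=FORMULA,
    auto_scale_evaluation_interval=timedelta(minutes=10),
)
```

With the formula attached, Batch re-evaluates it on the given interval and resizes the pool for you, which pairs naturally with the pay-per-use pricing described earlier.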

Conclusion

Azure Batch is a powerful and flexible platform for running large-scale batch computing workloads in the cloud. With its ability to scale resources on demand, support for a wide range of applications and languages, and real-time monitoring and management capabilities, it’s an attractive solution for organizations looking to take their computing to the next level. Whether you’re running scientific simulations, data processing, or any other type of batch computing workload, Azure Batch can help you get the job done quickly and efficiently.

Until next time,

Rob

Lessons Learned – Managing your Critical IT Infrastructure during a Pandemic

Worldwide Craziness

The Novel Coronavirus has already devastated the global economy. Historically, most business continuity plans for data centers have been based on local scenarios, where “acts of God” wreaked havoc in one place. Rarely had anyone considered that the one place might be all of Earth.

A Change in IT Mindset

The pandemic is not, at least not yet, the equivalent of a worldwide hurricane. Today, the world’s data centers are, for the most part, functional. Modern enterprise data centers are already designed to operate with as few as three full-time staff members onsite.

You don’t have to look far to see how the global COVID-19 pandemic has fundamentally upended IT. As organizations in all sectors have rapidly emptied their offices and sent their employees home to comply with ever more expansive shelter-in-place and quarantine mandates, replicating the full breadth of services remotely has been IT’s biggest priority.

All of this is nothing short of a remote collaboration revolution. It is already rewriting how work gets done — and how technology gets supported — when direct access to traditional, physical infrastructure is no longer a given.

But this is merely one aspect of IT. As we begin to digest how these changes will shape technology best practices, both during the current crisis and well into the future, we can’t afford to ignore the often unseen underpinnings of IT infrastructure that don’t have the luxury of working remotely.

Not an Option

Put simply, mission-critical facilities like data centers can’t be relocated into employees’ home offices. While transferring end-user productivity out of a traditional office context is fairly straightforward, the same can’t be said for the highly specialized workloads that can only be managed within the framework of a data center. Beyond the uniquely physical and non-transferable capabilities of the facilities themselves (grid access, raw compute power, failover, security, etc.), there is the genuine accountability associated with the sheer volume and type of workloads managed within them.

Regulatory constraints around how incalculably vital data must be managed and protected throughout all phases of its lifecycle add even more complexity to data center protocols during a pandemic.

So while you can’t simply abandon your data center in the same manner as your end users have cleared out their offices, you can, and must, understand how to rebalance your provision of data center services in light of how the pandemic continues to evolve. And you must do so while continuing to keep the lights on for stakeholders who need uninterrupted access to data center services now more than ever.

Against this backdrop, if you haven’t already examined your data center management strategy through a COVID-19 lens, now is the time to do so. As with anything related to the data center, however, this will be a complex, multifaceted process. Position yourself to navigate it by viewing it through the following lenses.

  • Capacity Management

    The historically unpredictable global business environment is putting unprecedented pressure on capacity management, with businesses barely able to forecast demand — or, in many cases, keep up with it. Global internet traffic is trending upward, with several exchanges routinely reaching record throughput as entire economies and workforces adjust to the new lockdown paradigm. Some organizations facing spiking demand have no choice but to move services out of their own data centers and lean more heavily on vendors. This makes absolute sense in an unpredictable landscape where scale needs to be implemented without delay. Still, it doesn’t make everyday issues like bandwidth, power, CPU, memory, and disk space disappear. Instead, it shifts the burden onto these external providers and their specific infrastructure. IT leadership must adapt these partnerships to keep pace because, if vendors don’t stay ahead of the curve, IT may find itself unable to serve the business adequately.

  • Connectivity

    The old adage about not putting all your eggs in one basket has never been more valid than it is now. This issue relates directly to capacity management, and, as the crisis deepens, the strain on all aspects of infrastructure will only increase. Diversify your upstream providers as much as possible to mitigate the risks associated with any one of them being compromised by pandemic-related resourcing constraints. This minimizes the potential for back-end interruptions to reach your customers. Leverage third-party user reviews and analyst resources to better assess and compare vendors, match provider capabilities to fast-changing business needs, and position yourself to make best-of-breed decisions faster.

  • Disaster Recovery

    The uptick in adopting mission-critical services being deployed off-premises doesn’t only impact day-to-day service delivery and the service level agreements (SLAs) that set expectations and confirm accountabilities. It also has significant implications for disaster recovery (DR) planning and implementation. It shifts a fair degree of risk over to the third-party providers now responsible for delivering these services. DR plans must be updated to reflect this new world of vendor-distributed work, and vendors must be integral to this process to ensure they are in a position to fulfill all requirements.

  • Security

    Cybercriminals have never missed an opportunity to take advantage of periods of uncertainty to ply their evil trade, and the COVID-19 pandemic is no exception. As more organizations move their services to centralized locations, bad actors suddenly have significantly more — and better defined — higher-value targets. From a cybercriminal’s perspective, why attack one company and net only one victim when you can strike a mission-critical data center and compromise many victims? This sobering reality reinforces the need to nail down end-to-end security protocols with all vendors, including, but not limited to, encryption, authentication, and onsite access control. Reaffirming your cybersecurity skills inventory — and closing any gaps with targeted training — should also be prioritized.

  • Colocation

    If you are either using or responsible for colocated resources or infrastructure, you must take immediate steps to reduce physical risks at all levels, including:

    • Focus on disease control and disinfection throughout the facility.
    • Enforce monitoring — including temperature checks — at tightly controlled entries, and turn away anyone exhibiting symptoms to avoid compromising the facility itself.
    • Reduce the number of people onsite, especially unknowns and other individuals not considered essential to the business.
    • Consider extending shift lengths from eight to twelve hours and moving to a two-shift schedule, if local labor laws allow.
    • Take individual steps to protect technical staff with skills required to maintain data center uptime, including sequestering them in a third, unscheduled shift, and holding them in reserve if primary staff exhibit symptoms.
    • Incorporate in-person monitoring of tasks during shift rotations to ensure continuity of operations. Implement contactless handovers to minimize transmission risk during these critical periods.
    • Assign activities and technical resources to single buildings and prevent them from moving to other buildings within a larger campus.
    • Prioritize the implementation of “smart hands” services to ensure trained, known resources handle tasks requiring onsite engagement.
    • Leverage guidance from local and regional health authorities to ensure nothing is missed, including physical traffic control methods in shared areas to support social distancing.

Focus on the Opportunity

Not everything about the current pandemic should incite fear — all significant disruptions offer opportunities to rethink how data center operations are planned, managed, and evolved over time. The possibilities can be game-changing, but only if you take the time to get out of firefighting mode and zero in on what your strategy should look like once COVID-19 is firmly behind us.

For example, as more data physically moves offsite toward data centers, GPU hardware can be leveraged for compute-intensive artificial intelligence, machine learning, and related data analysis applications. Recognize that data has gravity and tends to pull surrounding apps with it. Position yourself to sell compute capacity to meet these shifting demands.

Don’t Reinvent the Wheel

As the pandemic continues to play out, expect the value of traditional data center best practices to be reinforced. This isn’t so much a time to rip apart and rebuild as it is to validate what you’ve been doing all along and double down on it.

Start by ensuring your basics are sound and that your existing slate of products and services is reliable, secure, and well-communicated to your stakeholders. The sudden increase in demand for data center services and capacity may be unique in history, but stakeholders will depend on you having a firm foundation. By taking the time to reaffirm that this is indeed the case, you’re in a much better position to scale and meet this demand.

Learn from Experience

As unique as this experience seems to us all, recognize that we’ve been through this before — including the SARS, H1N1, and Ebola outbreaks in 2003, 2009, and 2014, respectively. Refer back to any documentation you may have from those periods to inform your thinking and responses for the current pandemic, but bear in mind that the impact in those previous cases was significantly smaller, and we “returned to normal” much more quickly.

This time, the impact is unprecedented, and the timeline won’t resolve itself anytime soon. Expect it to take far longer than initially expected to return to anything remotely approaching “normal,” and, even then, expect the very definition of the word to evolve.

Many economic, technological, and social changes will be permanent, which means your go-forward strategy for managing data center resources should not be to overutilize what you’ve got and hope to ride out the storm. Instead, now is the time to scale your investments in critical infrastructure and prepare for the changed world that follows. This strategy will maximize your business continuity and minimize the risks associated with navigating these strange times.

Until next time,

Rob