Most companies would love to move enterprise applications to the public cloud and cut the costs, complexities, and limitations of current infrastructure.
Major public cloud providers, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud, have done well in enabling web, mobile, and content applications in the cloud. These are cloud-native applications designed to use cloud "object" storage. However, enterprise applications typically require "block" storage. Although native cloud block storage exists, it has several shortcomings, particularly around reliability, durability, and lack of data mobility. So, enterprise applications have largely stayed on-premises or in private clouds.
Now there's an innovative new way to move enterprise applications to the public cloud while actually reducing risks and trade-offs. It's called multicloud storage, and it's an insanely simple, reliable, secure way to deploy your enterprise apps in the cloud and also move them between clouds and on-premises infrastructure, with no vendor lock-in. Multicloud storage allows you to simplify your infrastructure, meet your service-level agreements, and save a bundle.
We know that enterprises are embracing the public cloud in record numbers. According to the RightScale 2017 State of the Cloud Report, 89 percent of enterprises have at least some applications in the public cloud. In fact, 41 percent of workloads are now run in the public cloud and 38 percent in the private cloud.
And we know that interest in the multicloud model is also soaring. According to the same report, 85 percent of enterprises now have a multicloud strategy.
In this chapter, I give you a closer look at what's driving multicloud adoption from several perspectives: the role of multicloud in digital transformation, the urgency of modernizing the storage infrastructure with multicloud, and the capabilities that make multicloud a smarter option for enterprise application storage.
Been to any industry conferences lately? Chances are good that there were multiple sessions and keynotes about the urgent need to modernize your data center infrastructure. What's behind that? Two words: digital transformation.
Business leaders are eager to take advantage of the opportunities of the digital age. The Internet of Things, cognitive computing, artificial intelligence, augmented reality, machine learning, consumerization of business applications, and automation of business processes all represent multi-billion-dollar market opportunities.
The common thread among all these opportunities is that they depend on infrastructure. But all too often today, infrastructure is the obstacle to digital transformation rather than the catalyst. You see the evidence in reports of stalled or delayed initiatives. According to a recent TechValidate survey, 64 percent of business leaders say their transformation initiative is behind schedule, and a MuleSoft survey found that only 18 percent of executives are confident they'll succeed in meeting digital transformation goals in the next 12 months.
That puts enormous pressure on IT managers to modernize their infrastructure so that they can quickly provision high-performance, reliable, scalable, agile resources to run a wide range of workloads.
The cloud offers a unique opportunity to modernize infrastructure while also cutting costs. More specifically, the private cloud can provide operational efficiencies that translate to reduced operational expenditures (OpEx), while the public cloud can help cut capital expenditures (CapEx) by reducing the amount of infrastructure you need to purchase. And the move to a multicloud model can increase the simplicity, agility, and cost-effectiveness of infrastructure provisioning — and thereby serve as an accelerant to the digital transformation agenda.
As the cloud model has caught on, we've all witnessed a big bang–style explosion in the buzzwords related to cloud computing. Unfortunately, there are significant differences in how these terms are used. It seems that every stakeholder or vendor attempts to spin the definition in a way that showcases their products or expertise, and the terms become meaningless, like "open computing" back in the 1990s. In particular, many people have started using the terms multicloud and hybrid cloud interchangeably, and that's going to lead to trouble. So, let's look to the U.S. National Institute of Standards and Technology (NIST) for some objective definitions of what's what.
Multicloud differs from hybrid cloud in that it refers to multiple cloud services rather than multiple deployment modes (public, private, and legacy).
Multicloud uses multiple cloud providers (Amazon Web Services, Azure, internal IT, and so on) for multiple workloads.
Storage infrastructure is particularly critical to digital transformation initiatives because it houses application data — which is the lifeblood of the enterprise. Data is the raw material for understanding customers, identifying new market opportunities, and creating the innovative new software, products, and services that deliver competitive advantages.
Storage infrastructure is also vital because the flow of data between the infrastructure and the application directly impacts the performance experienced by end users — and today's end users have zero patience for sluggish applications, services, and business processes.
So, if you're looking to modernize your storage infrastructure, why not look to the cloud (or, more correctly, the clouds)? You could use private cloud storage to minimize OpEx, leverage public cloud storage to minimize CapEx, and use multicloud services to optimize performance. Right?
But there's a problem: When it comes to storage, not all clouds are ready for all applications.
Public clouds typically offer object storage because it is massively scalable, which is great for content and web apps that store documents, videos, music, or social media content. But when it comes to mission-critical business applications, such as customer relationship management (CRM) and enterprise resource planning (ERP), object storage is not up to the task.
Object storage is a storage architecture that manages data as objects, unlike file storage (which manages data in a hierarchical file structure) and block storage (which manages data as blocks in sectors and tracks).
Business applications require the enterprise-grade features, flexibility, and performance provided by block storage. Block storage is usable by almost any application, file, database, or file system, and it delivers the low latency needed for business applications. Block storage also lets you use backup tools that are native to your various operating systems — without requiring extra steps or new processes.
So, companies that are looking to the cloud for storage need to look for block storage alternatives for their enterprise applications — but they also need the option of mixing public and private clouds. They need to be able to move data at will between private clouds and public clouds to optimize costs, performance, reliability, security, and so on — which is exactly what multicloud storage now delivers.
With all the myriad cloud types and cloud offerings out there, why go with a multicloud strategy for application data storage? Because it combines the best attributes of multiple cloud options and takes away many of the trade-offs to which IT and application administrators have become accustomed.
You can move enterprise application data to public clouds (such as Amazon Web Services [AWS], Microsoft Azure, and Google Cloud), transfer it between them, or migrate it back to your data center, your on-premises private cloud, or a private cloud from a trusted third-party service provider — with no vendor lock-in.
When you decide to move data, there are no data migration or costly egress charges — you just flip the switch from the multicloud portal, and the connection is switched to the new cloud provider instantly, without moving a single byte of data! Plus, if you decide to move data off the public cloud and back to your own data centers, you can do so easily without any egress charges. And you can manage your storage volumes through a simple web portal just as you do with AWS or Azure, but with data durability that's orders of magnitude higher.
Modern multicloud storage offerings can deliver the following:
According to Forrester Research, "Cloud maturity is not a one-lane road; it's a multilane highway. Cloud services have matured to the point that they can replace, augment, and host an increasingly wide range of enterprise workloads."
There's no question that the multicloud model holds considerable promise for achieving high-priority business goals such as improving the performance and agility of enterprise applications, cutting total infrastructure costs, and hitting the accelerator on digital transformation initiatives.
The new question is, can the multicloud model actually help address remaining concerns and debunk lingering misconceptions about cloud computing? When you examine these apprehensions head-on, you find that multicloud can be instrumental in overcoming objections and broadening your view of the value of the cloud. Forrester Research reports that enterprise cloud computing adoption accelerated in 2016 and predicts that it will do so again in 2017.
In this chapter, you explore the cloud concerns that multicloud can address and the frequently overlooked considerations in implementing a cloud strategy in terms of organizational, educational, and financial impacts.
The mere thought of moving enterprise applications to the cloud makes many IT managers and administrators break into a cold sweat. But fear and anxiety aren't good reasons to avoid action. There was a time when people also believed it wasn't safe to ride in an elevator without a human operator. Some cloud concerns are justified, but many are simply myths and misconceptions. Let's take a closer look:
Clouds may increase the risk of data loss. This misconception stems from the fact that cloud-based block storage services historically have had significant differences in their data durability (the degree of protection against data loss) when data is not backed up. In some cases, the annual failure rate can be as high as 1 in 500, which is clearly unacceptable for enterprise applications. The assumption is that multicloud storage services would be even less likely to provide reliable, consistently low data loss rates for enterprise applications.
However, the reality is that multicloud services can bring enterprise-grade reliability and security to the public cloud and deliver measured, proven data durability that is millions of times higher than cloud-native block storage.
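If you like to see what those failure rates mean in practice, here's a quick back-of-the-envelope sketch. The fleet size and the "millions of times more durable" rate are illustrative assumptions, not vendor measurements:

```python
# Back-of-the-envelope: expected volume losses per year at a given
# annual failure rate (AFR). All figures are illustrative only.

def expected_losses(num_volumes: int, annual_failure_rate: float) -> float:
    """Expected number of volumes lost per year across a fleet."""
    return num_volumes * annual_failure_rate

fleet = 10_000  # hypothetical fleet of storage volumes

# Native cloud block storage at the high end cited above: 1 in 500 per year.
print(expected_losses(fleet, 1 / 500))   # 20.0 volumes lost per year

# A service "millions of times" more durable (assumed AFR of 2e-9):
print(expected_losses(fleet, 2e-9))      # 0.00002 -- effectively none
```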
Vendors won't offer enterprise-grade support. Enterprise-grade support requires extensive expertise built on many years of real-world experience. But traditional cloud support models have largely been based on a "do-it-yourself" (DIY) approach via online support forums and knowledge base articles.
Today, multicloud service providers are prioritizing support as an integral part of their service. Many offer enterprise-grade capabilities such as support delivered by experienced technical support engineers; deep visibility into overall health, automated actionable reporting, and proactive troubleshooting; 24/7 service options for mission-critical deployments; and more.
Data gravity increases in the cloud. As workloads scale, data is typically the hardest component of a workload to move. The metric that describes this inertia is data gravity. It's easy to presume that data gravity would increase with multicloud models because you're talking about moving data not only to the cloud but also between clouds, as well as between clouds and on-premises data centers.
However, the reality is that multicloud services can actually "grease the skids" in data mobility as customers move workloads between their data centers and different cloud providers. For example, multicloud storage can remove data gravity by acting as a single repository for all your cloud providers while providing easy mobility — making the transitions not only faster, but easier as well — because there's no need to migrate when moving between clouds.
Cloud services create vendor lock-in. You don't have to be a cynic to suspect that cloud service providers might write "fishhooks" into the contract, such as data egress charges that make it so difficult and expensive to move your data back from their cloud to your data center that you have no real choice but to stick with their offering. However, some multicloud service providers recognize that it's in their interest to increase your agility rather than lock you in; they have an economic incentive to keep you nimble. For cloud service providers, the future favors the flexible.
Cloud storage will never be secure enough. It's true that when you push enterprise applications to the public cloud, you may be pushing sensitive data outside of your direct control — and that could open you up to increased security risks. But let's not neglect the other side of the coin — because the multicloud model can provide new ways to solve security problems.
For example, multicloud services can test security mechanisms across huge populations of users and possible attacks. This makes it possible for users to benefit from the experience and remedies of millions, as opposed to the efforts of a single organization trying to address all possible security threats.
In addition, multicloud services can allow you to take advantage of predictive analytics — which can make it possible for cloud security systems to identify critical security information, evaluate the significance of that information, analyze user behavior, and pinpoint risky activities to help keep your enterprise applications secure.
Multicloud can also incorporate data encryption technology, such as 256-bit Advanced Encryption Standard (AES) encryption, adding a layer of privacy and confidentiality protection for sensitive data. Moreover, multicloud makes it possible to utilize and extend the existing firewall and network architecture to cloud-based services, providing additional security with relatively little cost and effort.
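To make the encryption piece concrete, here's a minimal Python sketch of 256-bit AES-GCM encryption applied before data ever leaves your environment, using the widely available cryptography library. The key handling is deliberately simplified; a real deployment would use a key management service, not an in-memory key:

```python
# Minimal sketch: encrypt data with 256-bit AES-GCM before upload.
# Key management is simplified for brevity; production systems would
# hold keys in a KMS/HSM, not in process memory.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit AES key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # unique nonce per encryption

plaintext = b"sensitive enterprise record"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Only the ciphertext (plus nonce) needs to reach the cloud;
# decryption requires the key that stays under your control.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```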
Backup and recovery options will be limited. Not necessarily. Some multicloud services come with a rich set of data management capabilities, such as instant backups that don't impact production windows and performance, and instant thin clones so you can quickly create zero-copy clones for test/dev, analytics, and bursting. And by providing fast, easy restoration, they can help you meet accelerated recovery point objective (RPO) and recovery time objective (RTO) requirements without impacting production workloads.
Surprise fees will blow up the business case. One of the most common complaints you hear from public cloud users is that sooner or later they receive monstrous "surprise bills" several times larger than they expected. Stories about monthly bills spiking as much as five-fold are all too common. That kind of volatility can quickly kill the business case you carefully crafted for adopting a cloud strategy.
The root cause of these awful surprises is poor monitoring and tracking tools and lack of best practices on cloud use. Multicloud services can track your current usage, estimate future usage, charge only for the resources you use, and provide features that help minimize the bill (such as charging only for newly changed data rather than full copies) compared to cloud-native storage.
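To see why charging only for changed data matters so much to the bill, consider a simple worked example (the volume size, retention window, and change rate here are hypothetical):

```python
# Hypothetical comparison: 30 daily backups of a 10 TB volume with a
# 2% daily change rate, billed as full copies vs. changed data only.
volume_tb = 10
days = 30
daily_change = 0.02

full_copy_tb = volume_tb * days                     # every backup billed in full
changed_only_tb = volume_tb * daily_change * days   # only deltas billed

print(f"Full copies:  {full_copy_tb:.0f} TB billed")   # 300 TB
print(f"Changed data: {changed_only_tb:.0f} TB billed")  # 6 TB
```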
The "black box penalty" will increase costs. Simply moving applications to the cloud doesn't make all your problems go away. Cloud services are often like a black box — you can't see inside. The result is a black box penalty of spiraling costs where troubleshooting of issues is nearly impossible, leaving you no choice but to purchase and install additional third-party monitoring tools and licenses. Most of these tools are designed for the cloud and provide little or no visibility into your own data centers.
This concern naturally applies to multicloud storage offerings as well. However, multicloud services are now available that provide visibility whether your data is in the cloud or on-premises. These services also let you see up the stack into the virtualization layer, and this level of visibility is being extended to the entire stack, including the network, servers, storage, and even the application itself.
You'll need separate tools and processes to take advantage of analytics and automation. Two of the primary drivers for moving to the cloud for many organizations are lowering costs and increasing agility. Analytics and automation can play a huge role in accomplishing those goals — if they're easily accessible. Some cloud offerings require separate tools or new ways of working, but others are specifically designed to take advantage of analytics and intelligent automation using your existing tools and processes. Current offerings can also take analytics to new levels — for example, by using predictive analytics to anticipate and prevent issues across the stack, so you can optimize data placement and resource usage, and uncover opportunities for savings.
"Cloud-first" and "all-in-cloud" strategies are best. Lots of companies are looking to move every application and data set into the cloud as quickly as they can. There are some problems with this approach:
The fact is, there's still plenty of value in keeping some applications and data on your internal, on-premises infrastructure — and buying on-premises infrastructure to satisfy immediate needs — even if you have a cloud-first strategy.
Table 1 compares some of the features and capabilities of multicloud and native cloud storage.
Feature | Multicloud Storage | Native Cloud Block Storage
Assured enterprise storage reliability | Millions of times more durable | 0.1% to 0.2% annual failure rate
Snapshots and clones | Instant and efficient | Full and slow
Multicloud portable | Yes | No; costly egress charges and manual migration appliance
Hybrid/private cloud portable | Yes | No, manual migration only
Global visibility via predictive analytics | See and manage public cloud and on-premises | No, requires third-party tools
Cross-stack problem isolation | Yes | No, requires third-party tools
IT leaders look at the potential of multicloud services and see a host of new possibilities for transformation and increasing agility. But all too often, critical aspects of harnessing the cloud are overlooked, including organizational readiness, educational requirements, and financial implications. I take a closer look at a few of these in this section.
Adopting multicloud is a journey, not an action. It's going to require skills your IT staff may not currently possess; it's going to change your processes and procedures; it may even require a new organizational structure. You've got to ask, "How ready is my organization?" You'll need to carefully assess both your organizational maturity for cloud adoption and the attitudes of current IT staff about moving to multicloud — because their viewpoints and mindsets can ultimately determine the success or failure of any cloud adoption initiatives.
Organizational change also creates cultural considerations. Change can be disruptive, and it's important to understand how IT staff will react to that disruption. Do you have a culture that thrives on exploring new concepts and new ways of working? If not, have you considered incentives that could improve acceptance of the change or facilitate better collaboration among team members?
There are a couple of key questions to ask in assessing educational issues. How much training or retraining will be required to obtain the needed skillsets or best-practice knowledge for the transition to multicloud, and how will IT staff obtain those skills?
Equally important, it pays to give some thought to who among the existing staff is best qualified to take on a leadership role in implementing a multicloud model. Who has the right combination of skills, positive attitude, and ability to communicate with and educate others, including both technical staff and business leaders?
It's one thing to understand how multicloud will save money. It's quite another to do the math and determine precisely how much it will save in CapEx and OpEx, and over what period of time — and to incorporate your economic analysis into a comprehensive business case. Many individual financial factors come into play — from the cost of making your data center infrastructure cloud-ready to the cost of training and the incremental costs of supporting new services — and all of them need to be taken into account.
With cloud infrastructure, compute resources are ephemeral. They come and go, turn on and off, burst up and shut down. However, storage needs to be much more persistent because people and organizations tend to want to keep their data for ever-longer periods. As a result, the sheer volume of data and the storage required to keep it continue to grow and accumulate gravity.
In a multicloud environment, storage infrastructure becomes the lifeblood of your critical data. As such it must become more fluid, allowing data to move more freely among storage resources, whether they reside in your data center or in a public or private cloud. That means that when you're architecting a multicloud data center, you need to keep storage considerations front and center — and deploy storage infrastructure that is truly cloud-ready. Because the future belongs to the fast and flexible.
This chapter describes the key challenges and opportunities for architecting a data center that is both storage-centric and multicloud-ready.
No matter how excited you are about the possibilities of the multicloud model, and regardless of how far along you are in adopting a multicloud strategy, your data center isn't going anywhere any time soon. You're going to need to maintain and manage your on-premises data center infrastructure for many years to come.
Many workloads are not yet cloud-ready, many IT departments are not yet multicloud-skilled, and many business leaders are not yet ready to embrace multicloud as a safe, cost-effective alternative — particularly for extremely sensitive corporate or customer data. Let's take a closer look at the challenges that data center managers face in a multicloud environment.
While the opportunities of the digital age beckon, the realities of meeting current obligations remain unchanged. It seems ironic: Virtually every enterprise today is embarking on a digital transformation initiative — in fact, a recent Progress global survey found that 96 percent of organizations see digital transformation as critical or important. Yet 80 percent to 99 percent of current IT budgets are still allocated to traditional IT activities, also known as "keeping the lights on" (or, simply, KTLO).
IT departments need to continue delivering on current service-level agreements that specify metrics for such things as performance, availability levels, service request fulfillment time frames, and so on. In particular, they need to continue meeting the requirements for applications, such as fast load times, data accessibility, problem resolution, and a whole host of user experience metrics.
Many organizations continue to struggle with data center modernization or cloud-readiness initiatives. For example, integrating pools of existing resources — such as compute instances, storage volumes, and networking infrastructure — and making them more future-proof for the cloud era can be daunting. What's needed is a more structured approach to understanding and dealing with data center challenges in the multicloud era, and broader adoption of new innovations that can expedite cloud readiness.
Application performance has become increasingly critical to both end users and data center managers. Slow applications mean lower productivity, and in a time when acceleration of business processes is the key to competitiveness, no business can afford slow applications.
In a recent survey by Oxford Economics and Nimble Storage, almost half of employees said they lose more than 10 percent of their workdays — about 48 minutes — just waiting for software to load. IT decision makers feel that pain too: 43 percent said they lose 11 to 30 minutes each workday because of delays they encounter while trying to use applications (see Figure 1).
Businesses are also under increasing pressure to meet the instant gratification requirements of today's digital natives and "Millennialize" their applications. In the same Oxford/Nimble survey, more than three-quarters (77 percent) of Millennials say that suboptimal application performance affects their ability to achieve their personal best, compared with just half of Baby Boomers and 72 percent of Generation Xers. In fact, half of Millennials say they've stopped using a cloud-based application because it runs too slowly — significantly more than other groups.
Figure 1: IT decision makers and business users are tired of sluggish applications.
Of course, if applications become unavailable because of infrastructure downtime, performance is a secondary concern. Various recent research shows that the average cost of an hour of downtime can be as much as half-a-million dollars, and this will only increase with the continued digitization of industries.
The delicate balancing act between CapEx and OpEx gets even more complicated for data center managers in a cloud environment. The challenge is to minimize total infrastructure costs, even as total infrastructure capacity scales to unprecedented levels.
The physical systems you purchase must be both cloud-ready and future-ready. And the infrastructure you subscribe to as a service must have predictable costs. That means there must be no "shock fees."
New innovations in predictive analytics and flash storage infrastructure have arrived to address data center challenges — old and new — and the multicloud model is putting these innovations to work right now.
When users experience application performance issues, they have hit what's known as the app-data gap: any slowdown or disruption in the delivery of data to applications. We've all experienced it — that frustrating delay that forces you to wait and wait for your application to do something.
But closing the app-data gap is not as simple as adding more high-performance storage systems such as flash arrays. In fact, 54 percent of application performance issues are not related to storage, according to a recent study by Nimble Labs Research. The app-data gap can be caused by issues anywhere across the entire stack (storage, networks, servers, and software). And that means not only are blazing-fast systems required, but businesses also need predictive insights into their entire infrastructure stack if they want to get ahead of problems.
Predictive analytics anticipate and prevent virtually any barrier that slows down data velocity or leads to costly downtime before the issue occurs. By combining these capabilities with blazing-fast all-flash or hybrid-flash arrays and converged, integrated infrastructure systems, companies can not only close but obliterate the app-data gap.
Today's predictive analytics solutions collect more sensor data points than there are stars in our galaxy. They use data science and machine learning to analyze and correlate trillions of sensor data points to find problems and resolve complex infrastructure issues. Analytics solutions can diagnose and prevent problems even outside of storage. It's like having a team of data scientists watching over your infrastructure so that it runs perfectly without your having to lift a finger.
With predictive analytics, nine out of ten problems can be automatically detected by collecting and analyzing billions of sensor data points from each storage array. Non-storage problems, misconfigurations, and other user errors can be quickly diagnosed and resolved, resulting in higher availability levels.
Predictive analytics can also accelerate root-cause analysis and cut hours of tedious manual troubleshooting. Administrators can see across storage, networks, servers, and virtual machines; view correlated analysis on-demand to quickly resolve issues (even when they're not related to storage); and avoid all the vendor finger-pointing that slows down problem resolution. With predictive analytics, there shouldn't be a need to talk to a support engineer asking for your name, support contract number, and whether you turned on your storage — all the information is already there. For the relatively few problems that do require manual assistance, you can go straight to a Level 3 expert who uses "pre-collected" data to rapidly resolve the issue.
You can also take advantage of analytics to predict future infrastructure needs. Analytics solutions can accurately forecast capacity, performance, and bandwidth needs based on historical data, and correlate and match similar consumption patterns across the entire install base. They can even identify potential resource ceilings in your environment as your usage increases — and tell you how to avoid them.
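As a simplified illustration of the kind of forecasting involved, the sketch below fits a linear trend to historical capacity usage and estimates when a ceiling will be hit. Real predictive analytics platforms use far richer models and installed-base learning; the growth rate and ceiling here are hypothetical:

```python
# Simplified capacity forecast: fit a linear trend to historical usage
# and estimate when a capacity ceiling will be reached.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(90)                                   # 90 days of history
used_tb = 40 + 0.25 * days + rng.normal(0, 0.5, days.size)  # ~0.25 TB/day growth

slope, intercept = np.polyfit(days, used_tb, 1)        # linear fit
ceiling_tb = 100                                       # usable array capacity

days_until_full = (ceiling_tb - intercept) / slope
print(f"Growth: {slope:.2f} TB/day; ceiling reached in ~{days_until_full:.0f} days")
```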
Additionally, analytics solutions can provide prescriptive guidance to ensure optimal long-term performance of your entire infrastructure stack. Planning guesswork is eliminated by leveraging installed-base learning and statistical modeling to precisely predict future requirements.
Using predictive analytics has resulted in measured availability levels over 99.9999 percent (less than 32 seconds of downtime per year) across a storage vendor's entire installed base of customers. How is this extremely high system reliability achieved? It starts with features that are built into the storage platform: no single point of failure, dual controllers that allow for non-disruptive upgrades, a fault-tolerant software architecture, and extremely robust data integrity, including triple+ parity redundant array of independent disks (RAID) and end-to-end integrity validation. But the breakthrough innovation is the addition of predictive analytics.
For any new problem experienced across the company's installed base, the analytics solution uses pattern-matching algorithms to continuously search for health signatures across all systems. If a signature is detected, the analytics solution will either prevent the problem from occurring or proactively resolve it, even if the problem is outside of storage. False alerts are avoided because machine learning normalizes performance behavior across the installed base. Each system continually gets smarter, learning from the installed base, and downtime events are increasingly prevented.
With a combination of predictive analytics and intuitive dashboards, you can have complete visibility through the cloud to all information you need to maintain a resilient environment and ensure smooth operations. Executive dashboards give you peace of mind that everything is running perfectly and alert you to things you need to know, such as performance, capacity, and efficiency metrics. Correlated visualization can give you a view of what's happening across the stack, from applications to storage, so you can quickly see and resolve issues before they impact end users.
Previous economic models required you to pay for resources you weren't actually using. The multicloud storage model makes it possible to tie costs directly to actual resources used — so there's no waste. You also get visibility into usage before, during, and after. And with the use of predictive analytics, you can accurately predict your total costs even before you deploy your storage volumes. You can monitor usage on a midmonth cycle and reconcile against your end-of-the-month bill.
Because your data center isn't going anywhere, it makes sense to optimize your on-premises storage infrastructure for the cloud model. So, what are the key attributes to consider in selecting cloud-ready infrastructure? Focus on simplicity, reliability, performance, and mobility — and make sure these operate the same way in and out of the cloud. Here's how that translates into specific capabilities.
Cloud-ready storage infrastructure should simplify working with cloud services. That means it should have native support for the cloud model and shouldn't require additional hardware and software in the cloud, or any additional on-premises equipment to act as a bridge or gateway into the cloud. Moreover, it should simplify the scale-up process both in the cloud and on-premises so you can balance resources and capacity in the places they're most needed.
Speed is of the essence for workloads in the cloud, so storage infrastructure should enable a level of data velocity that eradicates the app-data gap. In most cases, this means it should be based on flash technology.
For primary workloads, such as enterprise applications, an all-flash approach may make the most sense. For other primary or even secondary workloads, hybrid flash arrays that allow you to change the service level of any volume at any time may be a better choice. Whether you choose all-flash, adaptive flash, hybrid flash, or secondary flash, flash technology gives you the combination of speed and scalability you need to build a multicloud environment.
In the era of big data, there's no reason to accept infrastructure issues as a given. Cloud-based predictive analytics solutions are available that can anticipate and resolve problems before they impact workloads, application users, or your business. These solutions have proven to automatically predict and resolve up to 86 percent of all issues.
Your data center infrastructure should make it simple to migrate data from anywhere to anywhere. From your data center to a public cloud. From a public cloud to a private cloud. Between clouds. Between multiple clouds and your data center. And this level of data mobility should not be expensive. It's your data. You should be able to put it wherever you want — painlessly.
What's the big picture of cloud adoption today, and where do multicloud storage services fit in that picture? What types of multicloud storage services are available today? What's coming next? How will new multicloud storage offerings impact the Infrastructure as a Service (IaaS) market? And, most important, what types of features and capabilities should you be looking for as you evaluate the various multicloud storage services available today? Read on. I answer all those questions in this chapter.
The Four Waves of Cloud Adoption: Where Are You?
As you contemplate your move to multicloud, it pays to start by understanding the 30,000-foot view of cloud adoption. This will give you insights into your company's cloud maturity, or readiness to take full advantage of the benefits of multicloud services. Here are the four primary waves of cloud adoption, according to an October 2016 Forrester Research paper titled Take the Wheel: Build Your Cloud Computing Strategic Plan Now:
Cloud storage was already a thing even before the term cloud computing was coined. It started back in 1983 when CompuServe offered its consumer users a small amount of disk space that could be used to store any files they chose to upload. Since then, several new iterations of cloud storage have emerged, leading up to the multicloud storage services we're beginning to see today, including those covered in the following sections.
These services are based on a proven, durable architecture that manages data as objects, as opposed to file hierarchies. Object storage is commonly used in the first wave of cloud adoption (systems of engagement): social, mobile, and other cloud-native applications. This provides a layer of abstraction that makes management easier for administrators but requires additional programming work against provider-specific APIs. Today, the majority of cloud storage services leverage this architecture, including Amazon S3, Microsoft Azure Blobs (object storage), and many others.
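For example, storing and retrieving an object in Amazon S3 with the boto3 SDK looks roughly like this (the bucket and key names are placeholders):

```python
# Object storage in a nutshell: data is written and read as whole
# objects via an HTTP API, not as blocks on a device.
import boto3

s3 = boto3.client("s3")

# Write an object (the whole payload at once).
s3.put_object(Bucket="example-bucket", Key="reports/q3.pdf", Body=b"...")

# Read it back: you get the entire object, not byte-level block access.
obj = s3.get_object(Bucket="example-bucket", Key="reports/q3.pdf")
data = obj["Body"].read()
```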
This more recent innovation presents cloud servers with logical storage that can be accessed in blocks, as if it were locally attached physical storage. The first major cloud block storage offering was Amazon's Elastic Block Store (EBS), which provides persistent block storage volumes for use with Amazon EC2 instances in the Amazon Web Services (AWS) cloud. In addition to Amazon EBS, virtual block storage offerings in the market today include Azure Disks from Microsoft Azure, Google Cloud Platform Persistent Disks, and Cloud Block Storage from DigitalOcean.
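To make the contrast with object storage concrete, here's roughly how an EBS volume is provisioned and attached with boto3 (the IDs and zone are placeholders); the instance then sees an ordinary block device it can format and mount:

```python
# Block storage: create a volume, attach it to an instance, and the OS
# sees a raw block device. IDs and zone below are placeholders.
import boto3

ec2 = boto3.client("ec2")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,            # GiB
    VolumeType="gp3",
)

# Wait until the volume is ready before attaching it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",   # the instance formats and mounts this device
)
```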
However, several challenges continue to plague cloud block storage offerings, including durability issues, lack of enterprise-grade features and data services, single host connectivity, limited scalability, lack of mobility, and the notorious black box penalty.
In addition to the problems mentioned earlier, cloud block storage services that are currently on the market have only been available for use within the service provider's public cloud. Today's enterprises want more flexibility, less risk of vendor lock-in, and more enterprise-grade capabilities so that they can move enterprise applications to the public cloud without having to worry.
To that end, HPE Cloud Volumes (formerly Nimble Cloud Volumes) is the first multicloud block storage service that combines simplicity with enterprise-grade durability and feature sets. With HPE Cloud Volumes, customers can attach storage volumes to their compute virtual machines in AWS or Microsoft Azure and use them the same way they would use the native storage volume offerings within those public clouds.
You just provision the storage volume from the HPE Cloud Volumes console, selecting your desired storage volume size, I/O operations per second (IOPS) performance, product tier, and the AWS or Azure instances you want to attach it to.
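The same steps can be scripted. The sketch below is purely illustrative: the endpoint, field names, and token are hypothetical stand-ins, not HPE's actual API, so consult the service's real API documentation:

```python
# Hypothetical sketch of scripted volume provisioning. The endpoint,
# fields, and token are illustrative placeholders only.
import requests

resp = requests.post(
    "https://cloudvolumes.example.com/api/v1/volumes",  # placeholder URL
    headers={"Authorization": "Bearer <token>"},        # placeholder token
    json={
        "name": "erp-data-01",
        "size_gib": 500,
        "iops": 5000,                         # desired IOPS performance
        "tier": "premium",                    # product tier
        "attach_to": "i-0123456789abcdef0",   # AWS or Azure instance
    },
)
resp.raise_for_status()
print(resp.json())
```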
A recent IDC Market Note offered an analysis of how multicloud storage services such as HPE Cloud Volumes would affect the overall IaaS public cloud market. To summarize the findings, IDC wrote that HPE Cloud Volumes "brings clear benefits to public cloud IaaS customers and existing cloud service providers [and] also represents an interesting new dimension of growth for infrastructure vendors."
More specifically, the IDC report highlighted three broad areas of impact:
With all the cloud storage options out there, and with the pace of innovation around cloud storage accelerating, what are the key features and capabilities you really should insist upon today? Read on.
When you're looking to move enterprise applications to the public cloud, take a hard look at the prospective service provider's enterprise-class capabilities:
Assess exactly how many hoops you're going to have to jump through to move your data. For example, do you have to rearchitect your applications before you can move them to the cloud? Is there an intuitive dashboard that doesn't require you to be a storage architect just to move data? Can you migrate data among the clouds or back to your data center at will? Can you containerize the application itself and move it, too? Do you need to purchase third-party devices and learn entirely new skillsets?
You need global visibility and insights across the entire infrastructure stack, no matter where your data lives. Make sure the prospective service provider can deliver advanced monitoring capabilities and harness predictive analytics to improve visibility:
A multicloud storage service should be its own separate cloud, not running on top of another cloud service. This allows you to separate cloud storage from cloud compute capabilities, enables mobility, and helps you avoid vendor lock-in.
The multicloud storage model makes it easy to lift and shift, or move data and applications from the data center to the cloud. Several innovations have had a profound impact on simplifying the process.
First, storage clouds that are built for enterprise applications and data have emerged. These storage clouds
Second, new multicloud storage offerings also provide replication with speed and efficiency that wasn't possible before. You can create instant clones by the dozens in no time and push them to the cloud. There is now an economical consumption model that makes multicloud storage eminently practical: You pay as you go, with no prohibitive data egress charges and no vendor lock-in.
Finally, multicloud storage services provide consistent data services. When you move data into the cloud and between clouds, you get consistent performance and reliability without having to change anything.
SQL databases often contain the enterprise's most valuable, critical, and sensitive data, and any SQL failure could spell disaster. Unfortunately, clustering to protect data is typically not an option with cloud block storage because data can't be shared: native cloud block storage can't be accessed by two compute instances at once, and shared access is a fundamental requirement for clustering.
However, the clustering capabilities provided by a multicloud storage service make it possible to host mission-critical SQL databases in the cloud — with peace of mind. Clustering can provide redundancy so that even if the server side fails, data remains available. Clustering can also give multiple compute instances access to the same storage while maintaining data protection, shared access to storage, and all the other benefits of the multicloud model.
With multicloud storage, developers can now easily create data sets and clones for building, testing, and deploying apps into production. Developers can make multiple copies — even a hundred or more — in an instant. So, if the company doesn't want to invest in more CapEx for development resources, it can adopt the multicloud storage model, copy data to the cloud, and give developers and testers fast access to the resources they need, paying only for what's actually used. And now that the data is portable, databases, file servers, file shares, and other applications that rely on them can be portable, too.
One way to speed up software development is to automate infrastructure provisioning. This requires support for public application programming interfaces (APIs) that enable DevOps teams to simplify existing workflows, and that capability is built into multicloud storage offerings today. According to a May 2016 report by 451 Research titled Automate or Die, "In order to sustain growth, everything in the business must be process-driven and automated in software to the furthest extent possible."
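Here's a sketch of what that kind of API-driven automation might look like in a CI job: clone production data for a test run, then tear the clone down. The endpoint, fields, and token are hypothetical placeholders, not a specific vendor's API:

```python
# Hypothetical sketch: a CI job creates a zero-copy clone of production
# data for an automated test run, then deletes it afterward.
import requests

BASE = "https://storage.example.com/api/v1"    # placeholder URL
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder token

def run_integration_tests(volume_id: str) -> None:
    """Stand-in for your real test harness."""
    print(f"running tests against clone {volume_id}")

# Create an instant, zero-copy clone of the production volume.
clone = requests.post(
    f"{BASE}/volumes/prod-db-01/clones",
    headers=HEADERS,
    json={"name": "ci-run-4711"},
).json()

try:
    run_integration_tests(clone["id"])
finally:
    # Tear the clone down so you pay only for the test window.
    requests.delete(f"{BASE}/volumes/{clone['id']}", headers=HEADERS)
```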
Docker containers are suddenly everywhere. Containers package applications and their dependencies into portable, self-contained units, and that makes it easier to move entire applications — not just application data.
Enterprise IT and DevOps teams want to extend this portability to enterprise-class applications and workloads, and Docker containers make that possible. Now you can build, ship, and run with persistent data anywhere, without sacrificing production performance or storage efficiency, and without having to retrain staff.
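As a small illustration of persistent data with containers, here's a sketch using the Docker SDK for Python. The named volume outlives the container, so the app stays portable while its data is managed by the storage layer underneath (the image and names are examples; enterprise arrays typically plug in via a Docker volume driver):

```python
# Sketch: run a containerized database with a persistent named volume.
# The volume survives container restarts and replacements.
import docker

client = docker.from_env()

client.containers.run(
    "postgres:15",
    name="erp-db",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={"erp-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
```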
Effective disaster recovery (DR) depends on "separation of storage." You need to have copies of data in multiple locations to protect against a failure at any given site. By allowing enterprises to put data in the cloud — or in multiple clouds — the multicloud storage model provides new options for DR while bringing down the cost. Instead of maintaining a separate DR site with redundant compute infrastructure that doesn't get used unless there's a disaster (or you're testing your DR capabilities), now you can simply move data copies to the cloud and avoid all that CapEx.
Cloud bursting is simply a model in which a spike in demand for computing capacity (a burst) is absorbed by public cloud resources. Typically, this model is used in scenarios where an unanticipated event creates a sudden surge in demand, but cloud bursting is also used for foreseeable spikes such as the end of a quarter, a seasonal sale (like "Black Friday"), and so on.
In the past, cloud bursting could be an expensive and somewhat risky proposition, particularly for enterprise applications and data. However, with new multicloud storage innovations, this is no longer the case. It is now possible to create many copies of data using cloning features without paying for multiple full copies. You can make many copies and host them where the demand is, making it possible to spin up needed infrastructure resources with very little incremental cost.
Another interesting version of this use case is also emerging: the ability to leverage more compute resources for a limited but predictable period of time — for example, to test a new ecommerce concept or to perform analytics on a data set — and then scale back to normal levels. The multicloud storage model also accommodates this use case with low and predictable incremental costs.
In the past, moving to the cloud sometimes meant the wholesale migration of data and applications; it required the purchase of additional hardware to handle migrations; and it exposed companies to unanticipated costs. Multicloud storage can give you the freedom to compare and contrast cloud service providers, pick the ones that make the most economic sense for your workloads, move whatever portion of your data and apps you want — and not worry about getting locked in.
You can even move data between multiple public and private cloud providers to meet specific service-level agreements or respond to downtime issues. For example, if Provider A is experiencing a server outage, you can quickly shift data to Provider B for any amount of time, with minimal incremental cost. You're just changing the connection, so there is no data migration and no egress charge.
By integrating with predictive analytics capabilities, multicloud storage services can now perform automated monitoring and tracking of the resources you're actually using, so you know what your costs are going to be. You can see precisely how much capacity you're using in any given period of time, so there are no surprises when you receive an invoice. Multicloud storage services can also help you estimate future usage based on any number of variables that you provide, and resize capacity on-the-fly to meet future requirements.
Does your company produce its own electricity? Does it maintain its own water treatment facility? No? Then why should it have to acquire and maintain all the physical systems needed to store data? The multicloud model makes it easy to on-ramp data to the cloud and still get enterprise-grade data capabilities and consistent data services — so you can actually start using the cloud the way you used to use storage area networks, but for a lot less money.