Hybrid Cloud: How to avoid implementation mistakes and re-examine your tools
Picking the wrong public Cloud
The first common mistake made during a hybrid cloud implementation is choosing the wrong type of public cloud storage. There are six types of public cloud storage:
- Block storage, which is local embedded disk or SAN storage for applications in the cloud that require higher performance.
- File or NAS storage, which is for applications that need NFS or SMB protocols.
- Object storage used for active archiving.
- Object storage used for cool archiving.
- Object storage used for cold archiving.
- Tape storage — typically a linear tape file system — which is also used for cold archiving.
Each type of cloud storage has distinctive performance characteristics and costs, and choosing the wrong type can have disastrous consequences for a hybrid cloud implementation. For example, block storage has the lowest latency and the highest IOPS and throughput, but it also has the highest storage cost. It can cost as much as 30 times more than active or cool archive storage. Choosing block cloud storage when object cloud storage will do the job is a very costly mistake.
A similar cost issue can occur if a shop inappropriately selects cold archive cloud storage. Cold archive storage is affordable, usually less than 1 cent per gigabyte per month. But if users need access to the data in that cold archive, they may run into some problems. First, it takes a long time to retrieve the data from the cold archive: The first byte of data can take five hours to retrieve. In addition, there are transit fees: The cloud storage service provider charges customers for reading more than a very small percentage of data from the archive. These fees can be as much as 12 times the storage costs.
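The economics above can be sketched with a simple cost model. The price points below are hypothetical illustrations, not any provider's actual rates; they only preserve the article's relative relationships (cold storage under 1 cent per gigabyte per month, retrieval fees that can dwarf the capacity charge).

```python
# Hypothetical price points for illustration only; real provider pricing varies.
COLD_STORAGE_PER_GB_MONTH = 0.004   # under 1 cent/GB/month, per the article
RETRIEVAL_FEE_PER_GB = 0.05         # assumed transit fee for cold-archive reads

def monthly_cost(gb_stored, gb_retrieved, storage_rate, retrieval_rate=0.0):
    """Total monthly bill: capacity charge plus any retrieval fees."""
    return gb_stored * storage_rate + gb_retrieved * retrieval_rate

# 100 TB kept cold and untouched is very cheap.
idle = monthly_cost(100_000, 0, COLD_STORAGE_PER_GB_MONTH)

# The same 100 TB with 10% read back in one month: retrieval fees dominate.
active_read = monthly_cost(100_000, 10_000, COLD_STORAGE_PER_GB_MONTH,
                           RETRIEVAL_FEE_PER_GB)
```

With these assumed rates, a single month of reading 10% of the archive more than doubles the bill, which is why cold tiers only pay off for data that is almost never touched.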
Avoiding this hybrid cloud implementation mistake requires accurately matching the characteristics of the data to where it will be stored. How frequently will users access the data? What are the performance requirements for reads? What are the data retention requirements? How much data will be kept on premises versus in public cloud storage? Answers to these questions also affect the second common mistake.
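The matching questions above can be turned into a rough decision heuristic. The thresholds below are illustrative assumptions drawn from the article's characterizations (sub-second reads need block storage; a cold archive can take hours to return the first byte), not provider guidance.

```python
def pick_cloud_tier(reads_per_month, max_first_byte_seconds):
    """Rough tier-matching heuristic based on access frequency and the
    longest acceptable time to first byte. Thresholds are assumptions."""
    if max_first_byte_seconds < 1:
        return "block"            # low latency, high IOPS, highest cost
    if reads_per_month >= 1:
        return "object-active"    # regularly accessed archive data
    if max_first_byte_seconds < 5 * 3600:
        return "object-cool"      # rarely read, but can't wait hours
    return "object-cold"          # hours-long first-byte retrieval is fine
```

A real assessment would also weigh retention requirements and the on-premises/cloud split, but even a crude filter like this catches the expensive mismatches, such as putting frequently read data in a cold archive.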
Picking the wrong on-premises storage
The second most common hybrid cloud implementation mistake is selecting the wrong on-premises storage. There are four primary ways to deploy hybrid cloud storage systems:
1. Use a primary NAS or SAN storage system that replicates snapshots or tiers data to the public cloud storage based on policy. When tiering, the system leaves a stub locally that makes it appear as though the public cloud storage data is still local.
2. Utilize a gateway or cloud integrated storage (CIS). The CIS looks like local NAS or SAN storage. It caches the data locally while it moves all or most data to the public cloud based on policies. It also leaves a stub that makes data in the public cloud appear to be local.
3. Install an on-premises object storage system that either provides the same de facto interface as public cloud storage or extends to it. When the on-premises object storage utilizes the same interface as the public cloud storage, applications can write to either — or both — based on their requirements. When the on-premises object storage system treats the public cloud storage as an extension or remote target of the object store, it replicates data to the public cloud based on policy, similar to NAS or SAN tiering storage to the cloud. If the public cloud uses the same object storage software, then it can become a geographic extension of the on-premises object storage.
4. Use backup or archive software that writes data directly to public cloud storage, treating the cloud as the backup or archive target.
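The stub mechanism that the tiering and gateway options rely on can be sketched in a few lines. This is a minimal simulation, with a local directory standing in for the public cloud target and a JSON file as the stub; real systems do this transparently in the file system or block layer.

```python
import json
import os
import shutil

def tier_to_cloud(local_path, cloud_dir):
    """Move a file to 'cloud' storage (simulated by a directory here) and
    leave a small stub behind so the data still appears to be local."""
    dest = os.path.join(cloud_dir, os.path.basename(local_path))
    shutil.move(local_path, dest)
    stub = {"tiered": True, "cloud_location": dest,
            "size": os.path.getsize(dest)}
    with open(local_path, "w") as f:    # the stub replaces the original file
        json.dump(stub, f)
    return stub
```

Reading through a stub is what triggers the recall-from-cloud penalty discussed below: the local entry is only a pointer, so every access to the real bytes crosses the internet.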
Every one of these options has pros and cons and works best with different use cases. Picking the wrong one can have severe consequences. CIS systems tend to be quite cost-effective, for example. Some public cloud storage service providers include them for zero or limited additional monthly cost, which can be a great deal. It can also be quite costly if the amount of data cached locally is less than what applications need. When that happens, the CIS constantly pulls data from the public cloud back to the on-premises storage. There is a large performance penalty from the internet and an additional latency penalty for data rehydration. There is also a high likelihood companies will have to pay transit fees to the service provider for reading out the data from the public cloud.

Disaster recovery (DR) can be problematic for the CIS and tiering storage system options. Data in the public cloud cannot be read directly without reading it through the CIS or an on-premises cloud tiering storage system. That means a duplicate of the CIS or cloud tiering storage system must be made available in the cloud provider's facility or at the DR facility. Several CIS and tiering storage system providers now offer software variations that can run as virtual machines in the cloud or at the DR provider's facilities. Regardless, the additional hardware and software variations add to the cost.
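The penalty for an undersized CIS cache follows directly from the hit ratio. The latencies below are illustrative assumptions (sub-millisecond local reads, tens of milliseconds for an internet round trip plus rehydration), but the shape of the result holds for any real numbers.

```python
def effective_read_latency(hit_ratio, local_ms=0.5, cloud_ms=80.0,
                           rehydrate_ms=20.0):
    """Average read latency for a cache/CIS front end. Cache misses pay
    the internet round trip plus rehydration; figures are illustrative."""
    miss_ratio = 1.0 - hit_ratio
    return hit_ratio * local_ms + miss_ratio * (cloud_ms + rehydrate_ms)

# A cache sized below the working set is punishing:
well_sized = effective_read_latency(0.98)   # roughly 2.5 ms on average
undersized = effective_read_latency(0.60)   # roughly 40 ms on average
```

Dropping the hit ratio from 98% to 60% makes the average read more than an order of magnitude slower in this sketch, before counting the transit fees charged for every miss.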
Object storage can be one of the simpler integrations between on-premises storage and a public cloud; however, object storage is not known for high performance. To avoid excessive user complaints about on-premises performance, it is imperative to make sure the object storage system's performance matches application requirements.
Additionally, most object storage systems use the de facto standard Amazon Web Services Simple Storage Service (S3) interface, but not all S3 implementations are the same. Many implement only a subset of S3. An application designed for the S3 interface must be certified to work with the subset that the on-premises object storage uses, as well as the one in the public cloud. Otherwise, administrators should expect irritation, aggravation and stress. Troubleshooting this problem takes time, effort and labor.
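One way to get ahead of the subset problem is a simple capability diff between what an application calls and what a backend implements. The operation lists below are hypothetical examples for illustration; a real check would come from vendor documentation or a conformance test suite.

```python
# Hypothetical example: S3 operations a given application depends on.
REQUIRED_S3_OPS = {"PutObject", "GetObject", "ListObjectsV2",
                   "CreateMultipartUpload", "UploadPart"}

def missing_ops(backend_ops):
    """Return the required S3 operations that a backend's
    S3-compatible subset does not implement."""
    return sorted(REQUIRED_S3_OPS - set(backend_ops))

# An on-premises backend that implements only a partial subset:
on_prem = {"PutObject", "GetObject", "ListObjectsV2"}
gaps = missing_ops(on_prem)   # multipart upload is unsupported here
```

Running a diff like this against both the on-premises system and the public cloud target before deployment is far cheaper than discovering a missing operation in production.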
Backing up or archiving to the cloud can be a significant cost saver, but it can also cause intense heartburn. Sending backups to the cloud is fairly simple, but recovering them may not be. Typically, a backup requires a media server to recover and restore the data. Most hybrid cloud implementations include one or more media servers on premises. That simplifies recoveries and restores on premises, and it makes them much faster than attempting to recover and restore from the public cloud. But when the data is recovered and restored in the cloud, it still requires a physical or virtual media server in the public cloud. If there is no media server in the cloud, then there are no recoveries or restores in the cloud. In addition, if the recoveries and restores are coming from one of the variations of object storage archive in public cloud storage, do not expect fast recoveries.
Archiving to create a hybrid cloud is often complicated. The on-premises source storage and the public cloud storage are totally ignorant of each other. Applications and users need to know where their data currently resides to be able to access it. Some archiving software will leave behind a stub; however, links can break. Users might become annoyed or angry at not being able to find their data. Most archiving software can help locate data with admins’ help, but troubleshooting is often a time-consuming exercise.
Just like with the public cloud, it is crucial to match the characteristics of the data stored on premises to the ability of the on-premises storage systems to meet them. Shops can avoid mistakes by spending time and effort doing the groundwork upfront.
Is 2017 the year of the hybrid cloud management platform?
Cloud growth, driven by both legacy migrations and new development, has resulted in a multicloud operating environment that needs to be managed with some degree of central control and unification. According to an October 2016 market research report from MarketsandMarkets, global spending on multicloud management is expected to grow from $939 million in 2016 to $3.4 billion by 2021, a compound annual growth rate of 29.6%.
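The growth rate cited above follows from the standard compound annual growth rate formula. The sketch below applies it to the report's rounded figures; the small gap versus the quoted 29.6% is just rounding in the dollar amounts.

```python
def cagr(start, end, years):
    """Compound annual growth rate from start to end over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

# Rounded figures from the report: $939 million (2016) to $3.4 billion (2021).
rate = cagr(939, 3400, 5)   # roughly 0.29, consistent with the cited 29.6%
```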
One trend, at least on the fringes for any hybrid cloud management platform, is cloud repatriation: the movement of applications previously migrated to the cloud back to an internal or on-premises private infrastructure.
Reasons for doing so may be related to regulatory compliance or data sovereignty. Cost may also be a factor, according to Ganthier, who said while cloud provider and resource costs can climb unceasingly as data volumes and compute power requirements grow, they tend to reach a plateau in a private, on-premises scenario.
More Cloud management options
Hurwitz expects to see a significant number of hybrid cloud management platform products hit the market in 2017. As businesses increase their reliance on cloud-based infrastructures, platforms and application services, a clear understanding of how those services are used is essential to keep operations uninterrupted, efficient and economical, she said. "Management will expand to include cost considerations."
That expectation is in line with the 2016 State of the Cloud survey from RightScale, one of several vendors providing hybrid cloud management platform products. Multicloud is central to the IT plans of 82% of survey respondents.
A year ago, Ganthier dubbed 2016 as the year of the multicloud. He was right. A year later, corralling and managing those assets in a consistent manner will continue to take on new importance.
Hybrid Cloud: What it is, what it does and Hybrid Cloud use cases
Hybrid cloud has been on the tip of nearly every IT professional’s tongue since the concept first hit the scene in the early 2000s. In essence, the term hybrid cloud refers to any environment that mixes private and public cloud services, though it can also refer to the ability to connect colocation to dedicated services with cloud resources. The hybrid cloud model owes its popularity to its ability to provide greater flexibility, its resource automation, the way it maximizes containers, its relatively low cost and its testing and development benefits.
Want to learn more about what a hybrid cloud is, hybrid cloud use cases and benefits? Start by reading these five quick tips.
What is Hybrid Cloud Computing?
Cloud computing has become an immediately recognizable term in modern IT that refers to a broad range of technologies that deliver hosted services over the internet. Included in this broad range are three classes of cloud computing: public, private and hybrid. The first class, public cloud, is made up of publicly available IT resources and provides greater levels of automation and orchestration than traditional hosting. The second class, private cloud, is similar in nature to the public cloud, with the exception that it’s dedicated to a single organization; private cloud is also highly resilient. The third and final class, hybrid cloud, allows workloads to coexist on either a vendor-run public cloud or a customer-run private cloud. The hybrid part of hybrid cloud comes from its networking; software-defined networking and hybrid WAN technologies are just two of the technologies that help ensure networking connectivity in a hybrid cloud.
Is there such a thing as a true hybrid cloud?
Just as there is no absolute definition for cloud computing, the concept of the hybrid cloud is equally broad and subject to interpretation, depending on whom you ask. To some, it’s an offering that uses automation and orchestration to take your on-premises cloud and infrastructure and extend it to the public cloud. To others, hybrid cloud refers to any IT services hosted in both public and private locations. Some even say that the hybrid cloud is an extension of private cloud services, and that the definition of a hybrid cloud depends on how you use in-house and off-site cloud. Although there isn’t a definitive answer to what a true hybrid cloud is, the general consensus is that it’s more complicated than just running workloads on and off premises, and that hybrid cloud will remain a buzzword for some time to come.
Assess your business’s Hybrid Cloud needs
A hybrid cloud platform is appealing to businesses because it can provide greater workflow agility, departmental autonomy and better security, but, as with all major purchasing decisions, buyers must first gauge whether the value of hybrid cloud merits the cost. The best way to do this is to figure out which hybrid cloud deployment and management tools your business needs; this depends on what you intend to use a hybrid cloud platform for. Some popular hybrid use cases include cloud bursting, security and compliance requirements, cost control, testing and development and storage capacity. Keep in mind that each of these hybrid cloud use cases comes with its own unique challenges.
VMware takes a big step forward with NSX
For an example of how businesses are implementing hybrid cloud, look no further than VMware. With a recent dip in revenue for its flagship vSphere product, the virtualization company has pinned its hopes on the latest version of NSX. This version of the networking and security product will allegedly allow customers to apply NSX security to Amazon Web Services (AWS) workloads. Hybrid cloud networking plays an important role in the future of NSX because it bridges the gap between private and public clouds, creating an overlay between in-house servers and AWS and allowing users to manage different endpoints homogeneously.
Big in 2017: Hybrid Cloud Management
As the hybrid cloud model continues to gain momentum in enterprise IT, hybrid cloud management tools have become a priority. Projections from the market research firm MarketsandMarkets indicate that global spending on multicloud management will increase exponentially by 2021, and for good reason — hybrid cloud management platforms make it easier to consistently apply policy changes and automate operations in multicloud environments. As the line between private and public cloud continues to blur and the hybrid cloud model undergoes rapid changes and advancements, hybrid cloud management platforms are expected to keep pace. As a result, experts predict that 2017 will be a huge year for hybrid cloud, with more hybrid cloud use cases emerging and more management platform products hitting the market.
Want better Cloud infrastructure management? Re-examine your IT tool set
A cloud migration brings a lot of change for enterprise IT teams, from how they monitor costs to staff organization. But one of the biggest changes, and challenges, is the need to evolve their infrastructure management tool sets.
Not only does a move to cloud infrastructure generally require a re-examination of existing system management tools, but those tools will differ depending on private, public or hybrid clouds. Add in the decision of whether to use a cloud provider’s native tooling or a third-party system, and the choices quickly become complex.
Private Cloud infrastructure management extends from legacy systems
Of the various cloud computing models, private cloud most likely aligns with an enterprise’s existing infrastructure vendors, and its tool sets. This is because, when choosing a private cloud platform, most organizations tend to go with what, or who, they know.
“When [many enterprises] decided to implement a private cloud solution, invariably they turned to the incumbent vendor, who was VMware,” said John Martin, president and founder of The Cavan Group, an independent cloud and data center consulting firm based in Boston.
For many organizations, their go-to private cloud stack was VMware’s vCloud Suite. As an extension of that, they would use vRealize, vSphere and other components of that suite to handle provisioning, self-service capabilities, monitoring and other essential cloud infrastructure management tasks.
The same concept rings true for OpenStack, the open source platform that also serves as a foundation for private cloud deployment.
“You need the cloud platform itself, be it OpenStack or vCloud, and then from there the management is an extension of that platform,” said Carl Brooks, an analyst at 451 Research. “So with OpenStack, for instance, you have a number of baked-in management tools and abilities.”
Another option is to layer on private cloud management tools from other vendors, ranging from BMC and CA Technologies to open source configuration management tools such as Chef and Puppet.
Third-party tools gain traction for public Cloud
The global public cloud services market is expected to grow 18% in 2017, totaling $246.8 billion, compared to $209.2 billion in 2016, according to analyst firm Gartner. Martin sees that growth reflected within his own enterprise client base.
“Many customers have dabbled — and I do mean dabble a little bit — in private cloud,” he said. “But they’ve also recognized that it’s unlikely they can ever deliver the level of quality service, resiliency, redundancy, performance and agility that an AWS (Amazon Web Services) or Azure or the other [public cloud providers] could do.”
Like the shift to private cloud, the move to public cloud demands a new set of infrastructure management tools. But, in this case, some of that management burden is offloaded onto the public cloud provider.
“When we’re talking about cloud from an infrastructure perspective, like an AWS or Azure, that’s when the vendor is managing from the hypervisor and below,” said Lauren E. Nelson, a principal analyst and private infrastructure-as-a-service cloud lead at Forrester Research. “You don’t need to have access to a vSphere portal or your hypervisor tool. You also don’t need access to a storage tool or a network tool — it kind of eliminates some of those native monitoring tools you need.”
Instead, most public cloud users adopt another kind of native management tool — those specific to their cloud vendor’s platform. An AWS user, for instance, would employ the AWS Management Console to manage and control access to AWS resources including Elastic Compute Cloud, Simple Storage Service (S3) and Elastic Load Balancing. The console also helps users monitor their AWS spending.
Some enterprises, however, supplement those provider-native tools with a third-party cloud infrastructure management tool, such as those from RightScale, Scalr or CloudHealth. There are various reasons for this, but one of the most common is simply that these tools provide independent insight into a cloud deployment, Nelson said.
For instance, when AWS suffered a major S3 outage in February 2017, the vendor’s health dashboard, which tracks the status of its cloud services, was also disrupted. A third-party monitoring tool would help fill that gap.
Hybrid, multicloud bring new management challenges
A second reason enterprises adopt these third-party tools is for a hybrid cloud deployment. While traditionally private cloud-centric tools such as vRealize have added cross-platform capabilities to work with public clouds like AWS, they sometimes fall short of the range of management features offered by third-party, born-in-the-cloud tools, Martin said.
“You have companies that are trying to shoehorn, quite frankly, a proprietary private cloud management platform into a public cloud governance, access management, service management and service optimization model, and what happens is that it’s just limited,” he said.
A third, and similar, reason that organizations opt for these third-party tools is that cloud provider-native tools are specific to that vendor — admins can’t use the AWS Management Console to manage resources on Azure, for example. That presents a challenge given the rise of multicloud computing, where organizations use a mix of different infrastructure as a service providers.
“Companies are now literally parsing through their application stack saying, ‘This one is probably better suited to our Microsoft cloud, and this one is probably more suited to our Amazon cloud,’” Martin said. “It really does drive you toward a tool that can manage multiple environments.”
The harsh reality of the single pane of glass
While third-party cloud infrastructure management tools can help bridge different platforms and supplement provider-native tools, IT buyers should be a bit skeptical when any tool is pegged as a “single pane of glass.”
“Every one of these cloud management platforms, especially from the larger vendors, really has to be examined in detail because a lot of them make sweeping claims, but when you get down to the details, it’s a lot more rickety than it would appear,” Brooks said.
For example, one tool could have sophisticated cost-containment functionality that alerts organizations when they overprovision their public cloud instances, but limited functionality for governance or access management.
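The overprovisioning alert mentioned above is, at its core, a threshold check over utilization history. This is a minimal sketch with assumed inputs: the instance names, the 20% CPU threshold and the 14-day window are all hypothetical, and a real tool would pull these metrics from the provider's monitoring API.

```python
def overprovision_alerts(instances, cpu_threshold=0.2, days=14):
    """Flag instances whose daily average CPU utilization stayed under
    the threshold for the whole window. `instances` maps a name to a
    list of daily average utilizations (0.0-1.0); limits are assumptions."""
    alerts = []
    for name, daily_cpu in instances.items():
        window = daily_cpu[-days:]
        if window and max(window) < cpu_threshold:
            alerts.append(name)
    return alerts

# Hypothetical fleet: one busy instance, one downsizing candidate.
fleet = {"web-1": [0.65, 0.70, 0.62],
         "batch-9": [0.05, 0.03, 0.04]}
```

The design choice worth noting is using the window's maximum rather than its mean: a single busy day clears the instance, which keeps the alert conservative and avoids flagging bursty workloads.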
The industry as a whole, however, is inching closer to end-to-end cloud management tools, according to Martin. “Within the next 12 months, you will see a rapid consolidation, either through [mergers and acquisitions] or other activity, of all of these functions into a unified cloud management platform,” he said.
Given all the complexity around choosing a cloud infrastructure management tool, another trend has taken hold in the enterprise: the use of managed services providers, or partners within public cloud vendors’ ecosystems, to lessen that management burden.
Companies including Rackspace and Datapipe will help take the “brunt” of it for you, Brooks said. “A lot of the promise of cloud management is being delivered by partners right now.”