Advancements in hardware, cloud, and container technology provide most data-center managers with many appealing options.
However, while these options provide ways to help improve user experience, they also affect data-center costs and security complexity.
Here’s a breakdown of the different options available, and how to choose the best one based on your budget and business.
Hyperconvergence combines computing, networking, and storage into one system.
These standardized building blocks simplify your data center while increasing its scalability, and they have the potential to reduce your overall costs.
Hyperconverged systems can be acquired either as hardware for your data center, or as a software platform that runs on standard servers. That flexibility helps to expand the data center quickly, and also to expedite recovery from hardware failure.
Research and advisory firm Gartner predicts that up to 20% of business applications will transition to a hyperconverged infrastructure by the end of 2020.
While flexibility usually pertains to software, hyperconverged hardware is also very adaptable. For example, Dell's PowerEdge MX servers put server memory, storage, GPUs, and other components into modules that can be hot-swapped and upgraded, keeping the server continuously up to date.
If you’re looking for a less expensive option, open-source hardware designs treat servers as cheap, modular, and effectively disposable: instead of constantly upgrading high-end hardware, data centers replace entire servers quickly and inexpensively.
Despite the flexibility offered by modular servers, certain applications need specialty hardware. For example, Apple now offers a rack-mountable Mac Pro that significantly reduces video rendering time. While video rendering isn’t important to everyone, the emergence of this specialty hardware provides a whole new set of options to consider.
If you’re like most other organizations, you’re probably already using some cloud technology.
Cloud environments reduce costs and offer plenty of flexibility in terms of expansion. Kubernetes, an open-source container orchestration platform, helps fuel the cloud transformation by providing greater independence from the underlying architecture. It scales without growing your ops team, runs anywhere, and delivers even the most complex applications consistently.
That sort of freedom makes multi-cloud data management a great option. Some companies pursue multi-cloud data management to decrease dependence upon any single cloud provider, which adds leverage during price negotiations. Multi-cloud can also include a hybrid cloud, which is a mix of in-house and cloud deployment.
Many companies use cloud environments because of their turnkey managed solutions.
Recently, Cisco started offering turnkey convenience by adding Kubernetes-as-a-Service to its HyperFlex hyperconverged infrastructure (HCI) environment.
The system, HyperFlex Application Platform (HXAP), minimizes service burden for data center managers. Cisco provides basic setup and licensing for the Kubernetes platform. Then, it manages security updates and checks for consistency between components when deploying a cluster to reduce ongoing service requirements.
Cisco’s services allow you to focus on deploying containers and on other issues, such as cost and security.
Initially, the cloud environment’s biggest incentive was the promise of lowered costs.
However, many companies never realize those savings.
According to Bain & Company, many companies that moved their data centers to the cloud actually saw cost increases due to several key factors, including:
- Direct-match migration, which can cost 10-15% more than keeping workloads on-premises; and,
- Overprovisioning, since 84% of on-premises workloads are overprovisioned.
Unfortunately, many companies fail to consider the cloud’s flexibility when they design their cloud-based environments. To implement cloud successfully, create dynamic, flexible solutions that adapt to your computing needs instead of simply copying your current data center.
Another issue is control. When moving to the cloud, costs shift from tightly controlled Capital Expenditures (CapEx) budgets to loosely controlled Operating Expenditures (OpEx). Sure, the dynamic nature of cloud computing can reduce overall costs, but it often creates surges and dips that make budgeting a nightmare.
Gartner estimates that, “through 2020, 80 percent of organizations will overshoot their cloud IaaS budgets due to a lack of cost optimization approaches.”
That excess spending occurs because inexperienced cloud computing teams:
- forget to shut down finished projects;
- accidentally spin up servers; and,
- provision an unnecessary amount of cloud resources.
To overcome those issues, it’s important to focus on visibility, coordination, and disciplined deployment.
Visibility and coordination are related. Assign one employee (or a focused team) as the internal cloud expert. This expert authorizes all cloud relationships for the organization, then reports on normal and peak storage usage and on the cores required for processing.
With this data in hand, the cloud expert can offer credible budget projections for standard OpEx. The expert should also monitor daily, weekly, and monthly usage to ensure consistency.
Lastly, the cloud expert should help teams practice disciplined deployment. Those teams must understand how to utilize cloud resources appropriately and take advantage of features such as cost calculators, prepaid-resource discounts, and automated scaling and resource termination.
Sure, this process takes time, but even casual planning controls costs considerably.
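As a minimal sketch of what that usage monitoring might look like in practice, the function below flags months whose spend deviates sharply from the average. The month labels and dollar figures are purely illustrative; real numbers would come from your provider's billing export.

```python
from statistics import mean, stdev

def flag_spend_surges(monthly_spend: dict, threshold: float = 1.0) -> list:
    """Return months whose spend exceeds mean + threshold * stdev.

    monthly_spend maps a period label (e.g. "2020-01") to a dollar amount.
    These names are illustrative, not tied to any particular provider's API.
    """
    values = list(monthly_spend.values())
    if len(values) < 2:
        return []  # not enough history to establish a baseline
    mu, sigma = mean(values), stdev(values)
    # Flag any month that sits more than `threshold` deviations above average
    return [month for month, spend in monthly_spend.items()
            if spend > mu + threshold * sigma]
```

A surge flagged by a check like this is the cloud expert's cue to look for a forgotten project or an accidentally spun-up server before it becomes a budget line item.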
Some organizations believe that, by moving to the cloud, the burden of security shifts to the provider.
Unfortunately, it doesn’t.
While cloud providers supply some protections, the organization remains responsible for most of the fundamental security requirements.
According to a survey at this year’s RSA Conference, 59% of IT and security professionals felt they were unable to secure their company’s cloud-based services as quickly as those services were being deployed.
With 41% of business workloads deployed in hybrid environments, it’s easy to see how weak cloud security could quickly lead to a devastating cybersecurity breach.
A huge factor is misconfiguration – for both cloud deployments and containers.
Just this year, Microsoft’s own team exposed 250 million customer records because of a security misconfiguration that made their customer service database publicly searchable.
Docker, the most popular way to create standardized containers, had similar misconfiguration issues in February. Researchers found over 900 exposed Docker registries, 117 of which required no authentication to access.
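To see how easy such exposure is to detect, consider a sketch of the probe researchers use: the Docker Registry HTTP API v2 answers an unauthenticated `GET /v2/` with 200 on an open registry and 401 when credentials are required. The helper names below are illustrative.

```python
import urllib.request
import urllib.error

def classify(status_code: int) -> str:
    """Map the HTTP status of an unauthenticated GET /v2/ probe to an auth posture."""
    if status_code == 200:
        return "open"           # registry served its API with no credentials
    if status_code == 401:
        return "auth-required"  # registry demanded a token or basic auth
    return "unknown"

def registry_auth_status(base_url: str, timeout: float = 5.0) -> str:
    """Probe a Docker Registry v2 endpoint (hypothetical helper for illustration)."""
    url = base_url.rstrip("/") + "/v2/"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as exc:
        return classify(exc.code)
```

A registry that comes back "open" here is readable, and often writable, by anyone who finds it, which is exactly the class of misconfiguration the researchers reported.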
Between the containers and Kubernetes, a mesh of services needs proper security.
Configuration automation may be on the horizon, but for now, focus on these key controls:
- Are role-based access controls enabled?
- Are resources set for read-only?
- Are containers blocked from making changes to the Kubernetes platform?
While there may be legitimate reasons to create exceptions, most application containers should be locked down more securely.
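As a rough sketch, the last two checks on that list translate into Kubernetes pod settings like the ones below. The names, namespace, and image are illustrative placeholders; the first check, role-based access control, is enabled cluster-wide via the API server's `--authorization-mode=RBAC` flag rather than per pod.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # illustrative name
spec:
  # Block the container from calling the Kubernetes API with pod credentials
  automountServiceAccountToken: false
  containers:
    - name: app
      image: example/app:1.0   # illustrative image
      securityContext:
        readOnlyRootFilesystem: true     # resources set for read-only
        allowPrivilegeEscalation: false  # no escalating beyond granted rights
```

Settings like these are the "locked down" defaults; any container that genuinely needs an exception should get it explicitly, not inherit it by omission.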
Finally, consider converging people, tools, and services to minimize the number of resources to monitor. By consolidating more systems, your data center manager can spend time analyzing data instead of gathering it.
The Right Data Center Support
Ultimately, as with all IT pursuits, expertise and experience matter.
As your data center strategies change, consult our experts to assist in your strategic decisions, implementations, and security. We’re always here to help – 24/7/365!