Preparing for Software-Defined Everything

The IT industry is seeing its most rapid change in decades. From Big Data analytics to flash storage to the cloud, the old rules are being upturned and IT teams have to find a way through the confusion. Things aren't helped by the hype on one side and the FUD on the other being heaped into the marketplace.

Together, the technologies mentioned above have put enormous pressure on the way basic operations are run. The old approach of manually controlling servers, storage and networks is giving ground to largely automated control systems, such as the "orchestration" software that runs the typical cloud. This software takes the pool of virtual server instances and controls what runs in them, when it runs or shuts down, and what happens when failures occur.
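The core of such an orchestration system is a reconciliation loop: compare the desired state of the instance pool with its actual state and repair the difference. The sketch below illustrates the idea only; the class and method names are hypothetical, not taken from any real orchestration product.

```python
# Minimal sketch of an orchestration reconciliation loop.
# All names (Orchestrator, reconcile, etc.) are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class Instance:
    name: str
    healthy: bool = True

@dataclass
class Orchestrator:
    pool: list = field(default_factory=list)

    def launch(self, name: str) -> Instance:
        """Start a new (simulated) virtual server instance."""
        inst = Instance(name)
        self.pool.append(inst)
        return inst

    def reconcile(self) -> list:
        """Detect failed instances and replace them with fresh ones."""
        restarted = []
        for inst in list(self.pool):  # iterate over a copy while mutating
            if not inst.healthy:
                self.pool.remove(inst)
                restarted.append(self.launch(inst.name))
        return restarted
```

Real orchestrators run this loop continuously, which is what lets them handle failures without an operator in the path.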

Clearly, orchestration is the only way to run very large clouds at places like Amazon's AWS or Google. These Cloud Service Provider (CSP) companies have created their own tools for the purpose, but the open-source community has rallied to develop OpenStack, based on tested technology from NASA and Rackspace, a major CSP. Vendors such as VMware are getting into the picture as well.

Orchestration solves the server management problem, but it leaves storage and networking out in the cold. With ever more complex configurations of instances, and app mashups that are transient, as-needed connections of apps and data in different places in the cloud, the need to add storage and networking to the orchestration mix has created an environment ripe for change.

The orchestration of all the resources has led the (very heretical) CSP engineering teams to question the distinction between basic platform hardware and data services. Driven by a never-ending price war and vast expenditures on new datacenters, this looks like a way to drive infrastructure prices down, with mere mortals in IT datacenters also reaping many of the rewards.

The result is that we are moving rapidly towards “Software-Defined Infrastructure” and a very radical new way to build IT setups. The concept is deceptively simple, at least before the spin-doctors and FUD-pushers get going. Instead of expensive, complicated fixed-function arrays, switches and servers, get the hardware as close to bare metal as possible and deliver all those value-add services inside virtual instances or containers.

This might just sound like a new way to spread the cost of gear, and maybe even increase prices and support costs, but the key is that the bare-metal gear is commodity hardware and that there are recognized APIs for talking to the services, which allows third-party software vendors to deliver services and features competitively.

There's more to this: the new gear is fresh and positioned for the SDI approach, on the one hand, while on the other the service software is designed around generic switch or data-store services. This will drive prices down by increasing competition and removing vendor lock-in.

We are seeing tremendous interest in SDI approaches and almost half the IT shops across several surveys are planning near-term implementations. What do you need to do to prepare for this veritable tsunami of change?

The first key is that IT has to change roles, from being a regulator of compliance and the sole source of computing to being a shopkeeper for all the other departments in the company.

Embrace app stores as a way to limit inefficient buying by departmental staff (so-called shadow IT), while encouraging better tools and creative ways to exploit them.

Plan to virtualize everything, but probably start with servers, then networks and then storage.

This reflects the maturity differences among the Software-Defined movements in each area. Figure out how to control resource allocation and how to bill for services. The more flexible you are, the more satisfied your customers will be. Why bill by the month if your software can handle one-minute increments, for instance?
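The per-minute billing point is simple arithmetic, as the sketch below shows. The rates here are invented purely for illustration: a department that ran a virtual server for only a few hundred minutes pays a fraction of a flat monthly charge.

```python
# Illustration of fine-grained vs. monthly billing.
# The rates below are hypothetical, chosen only to make the comparison concrete.
RATE_PER_MINUTE = 0.002   # dollars per instance-minute (assumed)
MONTHLY_RATE = 90.0       # flat dollars per instance-month (assumed)

def per_minute_bill(minutes_used: int) -> float:
    """Charge only for the minutes an instance actually ran."""
    return round(minutes_used * RATE_PER_MINUTE, 2)

def monthly_bill(months: int) -> float:
    """Charge a flat rate per month, regardless of actual usage."""
    return months * MONTHLY_RATE

# An instance used for 600 minutes in a month:
# per_minute_bill(600) -> 1.2, versus monthly_bill(1) -> 90.0
```

The gap between the two numbers is exactly the flexibility argument: fine-grained metering lets light users pay light bills.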

Now it’s time to look at hardware.

Virtualization is well in hand in most medium and large IT shops, so servers are probably already x64 COTS units. Switches are another issue. If you are in a comfort zone with Cisco, buying a white-box switch from China might be a stretch initially, but companies like Dell and HP are already selling suitable switches. Commodity storage boxes are appearing in the market, too, but software-defined storage (SDS) is more likely to be a 2016 need.

Software is a bit more challenging, but much more interesting.

Starting from scratch has allowed startups to invent new methods of controlling gear, targeting scale-out configurations in the cloud. Data services apps that are controlled by policy templates, for example, allow the departmental IT admin to build VLANs with little effort and provide automatic provisioning and tear-down based on app sets and workloads.
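A policy template in this style is just declarative data that a provisioning service interprets. The sketch below shows the concept under assumed names; the template fields, class, and methods are all hypothetical, not drawn from any specific SDN product.

```python
# Sketch of policy-template-driven VLAN provisioning.
# Template fields and class names are hypothetical illustrations.
POLICY_TEMPLATE = {
    "app_set": "web-tier",
    "vlan_range": (100, 199),   # VLAN IDs this policy may allocate
    "teardown_on_idle": True,   # release networks when the app set goes idle
}

class VlanProvisioner:
    """Allocates VLAN IDs for apps according to a policy template."""

    def __init__(self, template: dict):
        self.template = template
        self.allocated: dict[str, int] = {}

    def provision(self, app_name: str) -> int:
        """Assign the lowest free VLAN ID in the policy's range."""
        lo, hi = self.template["vlan_range"]
        used = set(self.allocated.values())
        for vid in range(lo, hi + 1):
            if vid not in used:
                self.allocated[app_name] = vid
                return vid
        raise RuntimeError("VLAN range exhausted")

    def teardown(self, app_name: str):
        """Release an app's VLAN back to the pool."""
        return self.allocated.pop(app_name, None)
```

The point of the template is that the departmental admin edits policy data, not switch configurations; the provisioning logic applies it uniformly.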

In fact, it's clear that network admin effort will be reduced with the new approach. This highlights one of the as-yet-undefined impacts of the Software-Defined Infrastructure approach: it will shift workload to other parts of the skills base, and likely reduce the focus on infrastructure support across the board. This implies retraining and reassignment, with more emphasis on service delivery, agility of use and efficiency.

Those admins who remain on the infrastructure side will need to reskill from the CLI model to a GUI-based, template-driven virtual-resource approach. Hardware maintenance will become a thing of the past, as the new software routes around failures, uses spare nodes, and allows failed systems to be turned off and replaced infrequently, or even left in place until the gear is retired at the end of its four-year useful life.

With all of these changes, SDI will drive costs down, and make the IT operation much more agile and responsive to the needs of departmental computing. This is why SDI is inevitable for any larger datacenter.


Jim O'Reilly, Contributor

Vice President of Engineering at Germane Systems, CEO at startups Scalant and CDS, and a senior consultant focused on storage and cloud computing.