Best Practices for Software-Defined Storage

Currently celebrated as the revolutionary new face of IT infrastructure, software-defined storage (SDS) has gained a lot of hype, but not much practical guidance in the industry press. In this article, we’ll suggest some best practices to help you make the most of the data storage capabilities of this evolving technology.

What is Software-Defined Storage?

Software-defined storage, or SDS, uses virtualisation technology to create a logical model of the data held on storage hardware. This abstraction decouples data management from the physical devices that carry the data, and makes it easier to reconfigure your network resources. Storage systems from different manufacturers may be used in tandem with each other, and may be managed from a central console, irrespective of their underlying physical hardware, location, or the operating platforms in use.
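To make the idea concrete, here is a minimal, purely illustrative Python sketch of the abstraction SDS provides: several vendor arrays presented to applications as one logical pool. The class and method names are invented for this example and don’t correspond to any particular SDS product.

```python
# Conceptual sketch only: class and method names are hypothetical and not
# drawn from any specific SDS product.

class BackendArray:
    """A physical array from any vendor, described only by what SDS needs."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

class VirtualStoragePool:
    """Presents many heterogeneous arrays as one logical pool."""
    def __init__(self, backends):
        self.backends = backends

    def total_capacity_gb(self):
        return sum(b.capacity_gb for b in self.backends)

    def allocate(self, size_gb):
        # Place the volume on the backend with the most free space,
        # regardless of vendor or location.
        target = max(self.backends, key=lambda b: b.capacity_gb - b.used_gb)
        if target.capacity_gb - target.used_gb < size_gb:
            raise RuntimeError("Pool has insufficient free capacity")
        target.used_gb += size_gb
        return f"volume of {size_gb} GB placed on {target.name}"

pool = VirtualStoragePool([
    BackendArray("vendor-a-array", 10_000),
    BackendArray("vendor-b-nas", 4_000),
])
print(pool.total_capacity_gb())   # 14000 -- seen by applications as one pool
print(pool.allocate(500))
```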

Derided by some as an attempt to rebrand such recent trends as private storage clouds, storage virtualisation and storage hypervisors, SDS is nonetheless gaining real traction. Some industry analysts expect the market for software-defined storage to exceed $5 billion by 2018.

SDS Characteristics

SDS is the technology at the heart of the ideal vision of the software-defined data centre or SDDC: an automated, self-healing and self-provisioning environment that runs on an infrastructure foundation of generic hardware, with minimal staff requirements.

Formal standards for the technology have yet to be laid down, but several manufacturers offer solutions that conform to these broad conditions:

· The data and control planes of all storage hardware are decoupled: This makes it possible to implement SDS on cheaper, generic storage hardware – and to extend the capacity and features of proprietary storage solutions.

· There’s a range of use cases: SDS technologies may be deployed as physical servers, virtual servers, or converged hardware appliances with storage and computational resources.

· The data pool and underlying storage resources are virtualised: Applications perceive all connected storage as a single storage sub-system – even with a range of storage products from different vendors.

· Data paths may be established from several sources: They may be based on storage blocks, objects, file interfaces, or combinations of these.

· Integration with application programming interfaces (APIs): These may be used to implement the programmed automation and provisioning of storage resources.

· Storage expansion scales out: Capacity grows by adding further nodes to the pool, in contrast with existing storage technologies, which tend to scale up by swapping in larger, more powerful arrays.

· A policy-based management interface: This allows for centralised control and enhanced management of your data storage infrastructure (a brief sketch of policy-driven provisioning follows this list).
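As a concrete illustration of the API-driven, policy-based provisioning described above, the following hedged Python sketch maps a named policy to storage characteristics and submits a request to an SDS controller. The endpoint URL, policy names, and payload fields are all assumptions made for the example, not any real vendor’s API.

```python
# Hypothetical sketch: the endpoint and policy names are invented for
# illustration; a real SDS product exposes its own API and policy model.
import requests

POLICIES = {
    # Policy name -> storage characteristics requested from the SDS layer.
    "gold":   {"replicas": 3, "tier": "ssd", "snapshots_per_day": 24},
    "bronze": {"replicas": 1, "tier": "hdd", "snapshots_per_day": 1},
}

def provision_volume(name, size_gb, policy):
    """Ask the SDS control plane for a volume that satisfies a named policy."""
    spec = {"name": name, "size_gb": size_gb, **POLICIES[policy]}
    resp = requests.post(
        "https://sds-controller.example.local/api/v1/volumes",  # hypothetical
        json=spec,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example: an orchestration script provisions storage without knowing which
# vendor's array will actually hold the data.
# provision_volume("erp-db", 500, "gold")
```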

Using Software Only

In principle, software is at the heart of the data writing process. The hardware you use for storing data can in theory come from anywhere. So the SDS approach of abstracting functionality away from the hardware layer is a logical progression, intended to make it easier for administrators to manage the provisioning and use of storage resources.

In a virtualized server environment, administrators need to know how best to allocate storage resources to applications and data as required – rather than necessarily having to know the ins and outs of the storage hardware. However, this shouldn’t be taken as an excuse for admins to shirk their responsibility to the hardware – or forget their knowledge of it. Rather, the goal should be to present resources to their users and applications in a simple manner, as made possible through SDS virtualisation.

Plan the Infrastructure

Approach the technology while bearing in mind the objectives of your enterprise, the performance levels you’re aiming for, and the nature of the technology you’re going to use.

Some manufacturers provide SDS solutions as software only. Here, you’ll have to assemble your own storage hardware infrastructure, either by building arrays from the ground up, or by adapting existing systems.

Other vendors bundle the virtualisation software with hardware as a packaged SDS service. This spares administrators the burden of matching the software to hardware that’s compatible with a particular vendor’s SDS solution, but it leaves the enterprise somewhat at the mercy of the provider, who is solely responsible for upgrades to both hardware and software. Those upgrades may not occur at the same time, leaving room for additional costs to creep in.

API or Mount Points?

The SDS storage virtualisation layer may be built up using APIs to exploit the “hooks” provided by the on-board software associated with a vendor’s storage hardware. For administrators, this approach is a potential headache if multiple storage vendors are involved, each with their own characteristic firmware and software configurations, which may change at any time. And if a manufacturer goes out of business, you could be left with an orphaned piece of kit that’s impossible to upgrade as other elements of the system evolve.

A better method is to use the mount points on the storage hardware: the connections that manufacturers include on their equipment for integration with popular operating platforms like Windows.
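As a simple illustration of why this is attractive, the Python sketch below (with a hypothetical mount point path) writes data through a standard filesystem path; nothing in the code depends on which vendor’s hardware backs that path.

```python
# Minimal sketch: the mount point path is hypothetical. The point is that an
# application (or the SDS layer) addresses storage through a standard
# filesystem path, not through any vendor-specific API.
from pathlib import Path

MOUNT_POINT = Path("/mnt/sds-pool")   # e.g. an NFS/SMB share or block volume

def write_report(name, payload):
    """Write data to whatever hardware currently backs the mount point."""
    target = MOUNT_POINT / "reports" / name
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(payload)
    return target

# If the array behind /mnt/sds-pool is later swapped for another vendor's
# hardware, this code does not change.
```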

Monitor the Hardware

As indicated previously, the fact that SDS is a software-based platform doesn’t excuse administrators from their responsibilities to the system at large. It’s still essential to properly configure and continuously monitor your hardware, as SDS solutions won’t do this for you.
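As one lightweight example of such monitoring, the following Python sketch checks how full each mounted volume is and flags anything above a threshold. The mount point paths and threshold are illustrative assumptions; a production setup would feed the same figures into whatever monitoring system you already run.

```python
# Simple capacity check, assuming the pool's volumes are visible as mount
# points on the monitoring host; paths and threshold are illustrative.
import shutil

MOUNTS = ["/mnt/sds-pool", "/mnt/sds-archive"]   # hypothetical mount points
ALERT_THRESHOLD = 0.85                            # alert above 85% used

def check_capacity():
    for mount in MOUNTS:
        usage = shutil.disk_usage(mount)
        used_fraction = usage.used / usage.total
        if used_fraction > ALERT_THRESHOLD:
            print(f"WARNING: {mount} is {used_fraction:.0%} full")
        else:
            print(f"OK: {mount} at {used_fraction:.0%}")

if __name__ == "__main__":
    check_capacity()
```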

Format in Stages

The initial formatting process for an SDS system is typically a lengthy one. Data may have to be migrated from each array that’s going to be virtualised and consigned to the common resource pool, before it’s shipped off to its relevant virtual storage volumes. It’s best to undertake this one business process or application at a time.
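The following Python sketch illustrates that staged approach: each application’s data set is copied into the virtualised pool and verified before the next stage begins. The paths, application names, and the simple file-count verification are all placeholders for this example.

```python
# Illustrative only: paths and application names are hypothetical. The idea is
# to migrate one application's data set at a time into the virtualised pool,
# verifying each stage before moving on.
import shutil
from pathlib import Path

MIGRATION_STAGES = [
    ("payroll",  Path("/mnt/legacy-array-1/payroll")),
    ("crm",      Path("/mnt/legacy-array-2/crm")),
    ("archives", Path("/mnt/legacy-array-2/archives")),
]
POOL_ROOT = Path("/mnt/sds-pool")

def migrate_stage(app_name, source):
    destination = POOL_ROOT / app_name
    shutil.copytree(source, destination, dirs_exist_ok=True)
    # Verify before declaring the stage complete (a simple file-count check;
    # a real migration would also compare checksums).
    assert sum(1 for _ in destination.rglob("*")) >= sum(1 for _ in source.rglob("*"))
    print(f"Stage complete: {app_name}")

for app, src in MIGRATION_STAGES:
    migrate_stage(app, src)
```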

Expand Gradually

There’s no need to pour everything into the resource pool at once. Begin with a small-scale deployment, work with it for a while to become familiar with the ins and outs of the system, then expand in stages.

Agnosticism is Key

With storage solutions from several manufacturers likely to be in play, it’s important to choose storage solutions that are hardware-agnostic (and thus capable of accommodating products from a range of vendors) and workload-agnostic (independent of whatever application software or hypervisor is running on a server).

Don’t Forget the Extras

Instead of buying yearly contracts for value-added software on individual storage arrays, you can use the capacity management and data protection services built into many SDS offerings, applying them at the software-defined storage layer.
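As a rough illustration, the Python sketch below applies a single data protection schedule at the SDS layer rather than licensing snapshot tooling per array. The volume names are invented, and the snapshot function stands in for whatever call your SDS platform actually exposes.

```python
# Hedged sketch: one protection policy applied at the SDS layer, covering
# every volume in the pool regardless of which hardware holds it.
import datetime

PROTECTED_VOLUMES = ["erp-db", "file-share", "crm"]   # illustrative names

def snapshot(volume):
    """Placeholder for a call to the SDS layer's snapshot API."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    print(f"snapshot taken: {volume}@{stamp}")

def run_protection_policy():
    for volume in PROTECTED_VOLUMES:
        snapshot(volume)

run_protection_policy()
```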

This is just one of the cost-saving features that make SDS such a promising technology for the present and future.

Des Nnochiri has a Master’s Degree (MEng) in Civil Engineering with Architecture, and spent several years at the Architectural Association, in London. He views technology with a designer’s eye, and is very keen on software and solutions which put a new wrinkle on established ideas and practices. He now writes for markITwrite across the full spectrum of corporate tech and design. In previous lives, he has served as a Web designer, and an IT consultant to The Learning Paper, a UK-based charity extending educational resources to underprivileged youngsters in West Africa. A film buff and crime fiction aficionado, Des moonlights as a novelist and screenwriter. His short thriller, “Trick” was filmed in 2011 by Shooting Incident Productions, who do location work on “Emmerdale”.

