Nexenta Blog

Global Leader in Software Defined Storage

Is 2016 the year of the software-defined data center? Part 2

May 20, 2016 By mletschin

In the last blog we determined that the world has changed since the early 2000s and that x86 has won the server war, now giving you a chance to have vendors compete for you. But that’s only part of the data center — we still have to look at the rest of the infrastructure.

Networks that can shift and deliver at the same time

In the past, if you wanted the fastest network infrastructure, you had to buy from one or two specific vendors. In reality, those vendors still hold a huge share of the networking market, but we have seen a transition in recent years.

That being said, it is not just about the software layer. With scale-out systems, the network has become critical not only for inter-rack and inter-data-center connectivity, but for processing power and speed as well. The move from 1GbE to 10GbE to 40GbE, and even up to 100GbE, is allowing larger amounts of traffic than ever before to move around the globe. This is all done at a scale that can give even the smallest enterprise access to faster connectivity, but it still comes at a cost.
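To put those link speeds in perspective, here is a quick back-of-the-envelope calculation, a minimal Python sketch using raw line rates; real-world throughput is lower once protocol overhead and storage bottlenecks are factored in.

    # Rough transfer times for a 1TB dataset over common Ethernet line rates.
    # Line rate only -- actual throughput is lower due to TCP/IP overhead.
    DATASET_BITS = 1 * 10**12 * 8  # 1TB expressed in bits

    for gbps in (1, 10, 40, 100):
        seconds = DATASET_BITS / (gbps * 10**9)
        print(f"{gbps:>3}GbE: {seconds / 60:6.1f} minutes")

At 1GbE, moving that terabyte takes over two hours; at 100GbE it is closer to a minute and a half, which is exactly why scale-out systems lean on the faster tiers.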

While 10GbE is starting to become commonplace, it is still a much more expensive option than 1GbE and not something that has trickled down to the end user yet. The true question is, does it need to? If the data center is driven by commodity hardware that is governed by software and all the compute happens there, do we really need to be faster at the endpoint? To answer that, we will have to wait and see.

The brains behind the brawn

The No. 1 buzzword of the last few years has been “software-defined,” and it rings true for the transition we have seen in the data center over the last 10 years. The first step was compute, with virtualization taking its place as a stalwart that provides the processing power needed on commodity hardware.

The next two parts that need to move to the software-defined space to allow for a commodity-driven data center are storage and networking. Some would argue that we are already there, with executives from major storage vendors making statements as early as 2010 that “we all run on commodity hardware.”

In reality, you are normally limited to the hardware that a vendor chooses. Moving into a software-defined model means we have choice in vendors and can truly liberate the storage needs from the hardware vendor’s grasp. The growth of open source-based solutions has helped this along the way, and we will see if 2016 is the year we take the leap to make software-defined the norm.

Networking, on the other hand, is the last piece to fall into place. You cannot yet buy a blank switch off the shelf, layer on the software of your choosing, and have it work at the speed and capacity you desire, but we are starting to see a transition toward this. The combination of software-defined solutions is what gives companies the agility to become the enterprises of tomorrow, custom-fitting a solution to the business instead of the other way around. It appears that software-defined is more than just a buzzword.

So is it the year?

My gut says the commodity-based data center is not ready to go mainstream, but for intrepid companies willing to source the right gear, it may be. We have seen so many changes in the data center since the ’90s and the early 2000s that I have to believe the pace of change will continue.

Before we cross into 2020, the data center will be in the domain of the business again, not the vendors. That will be the commodity data center that anyone who has managed a data center in the last 20 years has been dreaming about.

Is 2016 the year of the software-defined data center?

May 7, 2016 By mletschin

Even with my 20 years of experience working in technology, I am just a baby compared to some people I have encountered along the way.

Still, I sometimes think about the “old days” in IT. It was a time when data centers were dominated by mainframe systems and midrange monoliths like the AS/400. A time when virtualization was something done only on these big systems, and the only need for an x86 system was under your desk to access a mainframe when you didn’t have a classic thin client.

When my career in IT really began, I was working with systems like Windows NT and Exchange 5.5, and we were creating Web pages with this mysterious language called HTML that magically made things appear. Even in those days, there was this thought that there was no need for companies to be beholden to the largest vendors.

There was a belief in the early 2000s that, with the growth of VMware, the data center would eventually become a blend of multiple vendors, and companies would be able to take back their own infrastructure, enabling them to control the way they supported their business. This transition marked the evolution from the traditional data center to the commodity-based data center, driven by business needs and built with software, rather than the monolithic systems that big vendors charged exorbitant amounts for.

Fast-forward to 2016

Are we there yet? Not quite, but this could be the year. Most enterprises have nearly 90% of their systems virtualized on VMware, Xen or KVM, managed by newer solutions like OpenStack. 2016 could finally be the year we see the commodity-based data center become reality.
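As a rough illustration of that management layer, here is a minimal sketch using the openstacksdk Python library to enumerate the virtual machines a commodity data center is running. The cloud name "mycloud" is an assumed entry in your clouds.yaml, not anything from this post; adjust for your environment.

    # Minimal sketch: list compute instances with openstacksdk.
    # Assumes a cloud profile named "mycloud" exists in clouds.yaml.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # Enumerate every server the compute service knows about,
    # regardless of the underlying hypervisor (KVM, Xen, ...).
    for server in conn.compute.servers():
        print(f"{server.name}: {server.status}")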

To make this happen, there are so many components that need to be in the right place. We need to see x86 servers with enough power to support large database systems — check. We need the ability to choose between multiple vendors that all produce a similar-quality product, but differentiate themselves with their support and supply chain — check. We need to have networks that can handle multiple systems and the communication between them, without losing packets and without being based on a single large infrastructure — check. (Well, sorta… we will get to that one in a few.) Finally, we need software to drive all these systems and define the solutions, whether it is storage, compute or even networking.

We are just about there and are seeing an influx of companies that can provide these services, but can they unseat the incumbent large, publicly traded vendors that look to buy up all the small fish? That just may be the key to 2016 being the year of change for the data center we have all dreamed about since we walked away from worrying about what happens when the clock strikes midnight for the new millennium.

Over the next couple of blogs I will take a look at each of the categories and see if we are really as close as we think to 2016 being “The Year,” starting with the server and compute side.

Won the battle and the war

There is little doubt that the data center is now dominated by x86 server hardware. That hardware is available with CPUs from multiple manufacturers, more cores than we could imagine in the ’90s, and memory levels that exceed what helped put a man on the moon. All of this hardware supports everything from large single applications to scale-out big data apps that let us analyze every picture of kittens posted to Instagram or Facebook in the last 24 hours.

Let the vendors compete for you

Most users probably have a preferred server vendor by now, and it is all about the extras these companies provide, not the components within the servers. Most network cards come from one or two companies, and most processors come from one of two companies — you may have hard drives from multiple firms, but overall it is just different-shaped bent metal with a different label on the front.

The true differentiator for vendor choice comes in the form of supply chain and parts availability. The supply chain has begun to collapse around fewer global vendors, making it easier to price match between them. If you are only looking for regional or local support, that too has limited choice, and all the vendors know it. The choice is now truly in the hands of the consumer and, if planned out, can result in a race to the bottom on price, along with increased “hand-holding” when the equipment arrives.

Managing software-defined storage for your virtualized infrastructure just got a whole lot easier.

September 19, 2014 By mletschin

Nexenta is proud to announce our first vCenter Web Client plugin to support the NexentaStor platform. Built for vSphere 5.5 and NexentaStor 4.0.3, the plugin provides integrated management of NexentaStor storage systems within vCenter and allows the vCenter administrator to automatically configure NexentaStor nodes.

VMware administrators can provision, connect, and delete NexentaStor storage on the ESX host, and view the datastores within vCenter.

Not only can you provision the storage, but managing it is also simple with integrated snapshot management.

The plugin also allows for closer analytics and reporting on the storage through vCenter, as detailed below.

Check out the feature details below, and download the vCenter Web Client Plugin today from the NexentaStor product downloads page.

General details about Storage:

  • Volume Name
  • Connection Status
  • Provisioned IOPS
  • Provisioned Throughput
  • Volume available and used space

Storage Properties:

  • Datastore name
  • NFS Server IP address
  • Datastore Path and capacity details

Datastore Properties:

  • Total capacity
  • Used capacity
  • Free capacity
  • Block size
  • Datastore IOPS, Throughput, and Latency
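The same capacity numbers the plugin surfaces are also exposed through the standard vSphere API. As a generic illustration (this is a pyVmomi sketch against vCenter, not the plugin's own code; the hostname and credentials are placeholders), you could pull them for every datastore like this:

    # Sketch: read datastore capacity details from vCenter with pyVmomi.
    # The host, user, and password below are placeholders for your environment.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)

    GB = 1024 ** 3
    for ds in view.view:
        s = ds.summary
        print(f"{s.name}: {s.capacity / GB:.0f} GB total, "
              f"{s.freeSpace / GB:.0f} GB free")

    Disconnect(si)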

Snapshot Management:

  • List existing snapshots
  • Create new snapshots
  • Clone existing snapshots
  • Restore to a snapshot
  • Delete snapshots
  • Schedule snapshots

End-to-End Datastore Provisioning:

  • Create a new volume on the storage array
  • Attach the volume to a host as a datastore (a rough sketch of this flow follows below)
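Here is what that two-step flow could look like when automated. Note that the REST endpoints and payloads are hypothetical placeholders for illustration only, not NexentaStor's actual API; the plugin itself drives this workflow for you from within vCenter.

    # Hypothetical sketch of the two-step provisioning flow the plugin automates.
    # The NexentaStor REST endpoints and payloads below are invented placeholders,
    # NOT the product's real API -- consult the NexentaStor docs for that.
    import requests

    NEXENTA = "https://nexentastor.example.com:8443"
    AUTH = ("admin", "secret")  # placeholder credentials

    # Step 1: create a new volume on the storage array (hypothetical endpoint).
    requests.post(f"{NEXENTA}/rest/volumes", auth=AUTH, verify=False,
                  json={"name": "vmware-ds01", "size": "500G"}).raise_for_status()

    # Step 2: attach the volume to an ESXi host as a datastore (hypothetical
    # endpoint; the real plugin performs this step through vCenter on your behalf).
    requests.post(f"{NEXENTA}/rest/exports", auth=AUTH, verify=False,
                  json={"volume": "vmware-ds01", "host": "esxi01.example.com",
                        "protocol": "iscsi"}).raise_for_status()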

Accelerate your Horizon 6 deployment with NexentaConnect 3.0!

September 18, 2014 By mletschin

Nexenta is proud to announce the general availability of NexentaConnect 3.0 for VMware Horizon (with View). The VDI acceleration and automation tool provides increased desktop density and higher I/O performance for existing storage deployments as well as greenfield VDI rollouts. NexentaConnect 3.0 introduces many new features and enhancements, including:

  • Full support for VMware Horizon 6
  • Pass-through support for VMware GPU
  • Import Horizon View desktop pools
  • Fast desktop pool restoration from backup

Combining all these great new features allows you to accelerate and grow an existing Horizon deployment that may have been limited by traditional storage solutions.

To learn more about NexentaConnect for VMware Horizon, go to http://www.nexenta.com/products/nexentaconnect/nexentaconnect-horizon and download the 45-day free trial.

VMware View Acceleration on Display

October 9, 2013 By mletschin

As VMworld Europe 2013 approaches, Nexenta continues to drive home the advancements in application-centric storage with Nexenta VSA for VMware Horizon View. As the demo for the show is completed and desktop pools are created and tested, it is always exciting to see independent testing done and presented. Just ahead of VMworld, VMware’s EUC team posted a blog detailing the results of their testing of Nexenta’s VSA for VMware Horizon View. With a 72% increase in desktop density and a 38X reduction in physical SAN traffic, VMware found VSA for VMware Horizon View to be key to a successful VDI rollout. These performance statistics are not just for show: the reduction in traffic and increased density do not just help the balance sheet, they can help stalled deployments move forward.

“With VSA for Horizon View, Nexenta has introduced an amazing product that unlocks outstanding user experience at a low TCO and makes it possible to recover stalled deployments without requiring a disruptive and painful rip and replace scenario.” – John Dodge, VMware
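To make those headline figures concrete, here is some illustrative arithmetic against a hypothetical baseline of 100 desktops per host and 1,000 MB/s of SAN traffic (the baseline numbers are assumptions for the sake of the example, not from VMware’s tests):

    # Illustrative arithmetic for the headline VDI numbers, using an assumed
    # baseline of 100 desktops per host and 1,000 MB/s of physical SAN traffic.
    baseline_desktops = 100
    baseline_san_mbps = 1000.0

    desktops = baseline_desktops * 1.72  # a 72% density increase -> 172 desktops
    san_mbps = baseline_san_mbps / 38    # a 38X traffic reduction -> ~26 MB/s

    print(f"Desktops per host: {baseline_desktops} -> {desktops:.0f}")
    print(f"Physical SAN traffic: {baseline_san_mbps:.0f} -> {san_mbps:.1f} MB/s")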

If you would like to see this technology in action, whether for deployments, performance metrics, or the acceleration it provides, make sure to come by the Nexenta booth (Hall 8, S-300) at VMworld Europe.
