Nexenta Blog

Global Leader in Software Defined Storage

Raise Your SDS IQ (4 of 6): Practical Review of Hyperconverged “SDS”

June 14, 2016 By Nexenta

by Michael Letschin, Field CTO

This is the fourth of six posts (the last one was Scale-out) where we’re going to cover some practical details that help raise your SDS IQ and enable you to select the SDS solution that will deliver Storage on Your Terms. The fourth SDS flavor in our series is Hyperconverged “SDS”.

Hyperconverged systems are the subject of much industry hype and analyst debate. Some consider hyperconverged systems to be a form of SDS; others keep them out of the category for not having software-only options. What to know: a hyperconverged system is a single integrated hardware and software system comprising multiple head nodes that present all storage as one virtual pool (think Nutanix or VMware’s EVO:RAIL). This means that some of the software-only SDS benefits – like flexibility and cost effectiveness – are severely limited.

That said, because it’s fast and easy to set up and drop in a hyperconverged system, it’s a good choice for branch offices or green-field deployments, where there are no existing storage systems to integrate with. Hyperconverged systems are somewhat of a “black box”– meaning you’re not going to have access to software to tune – but you can dial up the performance by increasing the number of nodes.

The downside of Hyperconverged “SDS” is that it’s difficult or impossible to change the system later. Hyperconverged “SDS” provides building blocks only: you buy what the vendor is selling, which narrows your options. Because you’re tied to a vendor and their pricing models, cost efficiency is also limited. Plus, you’ll need to buy equal amounts of storage and compute capacity. Unless your organization’s requirements for storage and compute capacity scale in perfect step, this means you’ll end up with too much of one or the other, wasting part of your investment.

Overall grade: C

See below for a typical build and the report card:

[Image: typical Hyperconverged “SDS” build and report card]

Raise Your SDS IQ (3 of 6): Practical Review of Scale-out

June 7, 2016 By Nexenta

This is the third of six posts (the last one was Scale-up Software-Only “SDS”) where we’re going to cover some practical details that help raise your SDS IQ and enable you to select the SDS solution that will deliver Storage on Your Terms. The third SDS flavor in our series is Scale-out.

Scale-out is a fundamentally different approach from scale-up: with scale-out, multiple head nodes can be attached over the network to dramatically increase scalability. This is a broad category, and solutions in it can be either vendor-defined/hardware-based (think EMC’s Isilon) or software-only (Nexenta’s NexentaEdge); while we’d consider the software-only approach to be the SDS version, the technical benefits of either type of scale-out are similar. You use low-latency networking to connect as many nodes as you want and form a cluster that provides storage services out to applications as a unified namespace.

The scale-out approach works well for archive or Web 2.0 application use cases. Scalability is top notch, because you can start small and grow just by adding nodes. While it provides the performance needed to handle huge capacities, there is an important dependence on the network: the quality of your gear will significantly impact performance because of the amount of communication between nodes, which may mean that your IOPS aren’t great.

The flexibility of scale-out SDS is generally good, but protocol support is currently limited. Often the maturity of the platforms themselves limits your flexibility; for example, you can’t use Exchange to write to an object back end. Likewise, object-oriented applications won’t work with some back ends, either. Protocol support considerations also impact the cost effectiveness of scale-out: they may restrict your hardware choices and lock you in to more expensive purchases.

Overall grade: B

See below for a typical build and the report card:

[Image: typical Scale-out build and report card]

Raise Your SDS IQ (2 of 6): Practical Review of Scale-Up Software-Only SDS

May 24, 2016 By Nexenta

by Michael Letschin, Field CTO

This is the second of six posts (the last one was Scale-up Vendor-Defined “SDS”) where we’re going to cover some practical details that help raise your SDS IQ and enable you to select the SDS solution that will deliver Storage on Your Terms.  The second SDS flavor in our series is Scale-up Software-Only SDS.

Scale-up Software-Only SDS is just that – SDS benefits delivered via a software-only approach; the software sits on industry-standard servers (think Cisco, Dell, Supermicro) and can leverage a variety of JBODs (like Fujitsu, Supermicro, Quanta). The end result is similar to what you get with Scale-up Vendor-Defined “SDS” – one or two commodity head nodes with JBOD behind them – but with two big differences: much better cost efficiency, because it’s more vendor agnostic, and greater opportunity to scale up by leveraging big disks for the JBODs (like Supermicro’s 90-bay). Both provide REST-based management with rich APIs via front-end software, to enable easy, automatic provisioning and management of storage. Unlike Vendor-Defined, Scale-up Software-Only SDS offers good scalability and excellent cost efficiency and flexibility because of its software-only approach.
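To make the REST-based management point concrete, here is a minimal sketch of what scripted provisioning against such a front end could look like. The endpoint path, port, payload fields and credentials are hypothetical placeholders for illustration, not a documented Nexenta API.

```python
# Hypothetical example of driving a software-only SDS front end over REST.
# The URL, port, credentials and payload fields below are placeholders, not
# a documented API; substitute the real values from your vendor's API guide.
import requests

APPLIANCE = "https://sds-head-node.example.com:8443"   # hypothetical management address
AUTH = ("admin", "changeme")                            # hypothetical credentials

def provision_volume(name: str, size_gb: int) -> dict:
    """Ask the storage front end to carve out a new volume of the given size."""
    payload = {"name": name, "size": f"{size_gb}G"}
    resp = requests.post(f"{APPLIANCE}/api/volumes", json=payload,
                         auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(provision_volume("vm-datastore-01", 500))
```

The point is not the specific call, but that provisioning becomes a few lines of automation rather than a vendor-specific GUI workflow.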

Scale-up Software-Only SDS is an excellent option for companies running virtual machines or the legacy apps that help run their business. It’s a good choice if you have accounting applications and legacy hardware, because its performance in this use case is excellent. Unlike vendor-defined options, this one gives you the flexibility to design your solution as needed, and that applies both to the solution you need today and to creating flexibility for the future. Plus, you can choose the most cost-effective commodity-priced machines rather than paying a premium for proprietary hardware – we’ve seen as much as a 50-80% cost difference between identical solutions of the different types. Scale-up Software-Only SDS is limited to one or two head nodes, but you can put larger drives behind them, so you can scale up quite far; scale-out, however, is obviously limited.

Overall grade: A-

See below for a typical build and the report card:

[Image: typical Scale-up Software-Only SDS build and report card]

Is 2016 the year of the software-defined data center? Part 2

May 20, 2016 By mletschin

In the last blog we determined that the world has changed since the early 2000s and that x86 has won the server war, now giving you a chance to have vendors compete for you. But that’s only part of the data center — we still have to look at the rest of the infrastructure.

Networks that can shift and deliver at the same time

In the past, if you wanted to have the fastest network infrastructure you had to buy from one or two specific vendors. In reality, they still have a huge share of the networking market, but we’ve seen a transition in recent years.

That being said, it is not just about the software layer – with scale-out systems, the network has become critical for more than just inter-rack and inter-data center connectivity; it is critical for processing power and speed as well. The move from 1GbE to 10GbE to 40GbE, and even up to 100GbE, is allowing larger amounts of traffic than ever before to move around the globe. This is all done at a scale that can give even the smallest enterprise access to faster connectivity, but it still comes at a cost.

While 10GbE is starting to become commonplace, it is still a much more expensive option than 1GbE and not something that has trickled down to the end user yet. The true question is, does it need to? If the data center is driven by commodity hardware that is governed by software and all the compute happens there, do we really need to be faster at the endpoint? To answer that, we will have to wait and see.

The brains behind the brawn

The No. 1 buzzword of the last few years has been “software-defined,” and it rings true for the transition we have seen in the data center over the last 10 years. The first step is compute, with virtualization taking its place as a stalwart, providing the processing power needed on commodity hardware.

The next two parts that need to move to the software-defined space to allow for a commodity-driven data center are storage and networking. Some would argue that we are already there, with executives from major storage vendors making statements as early as 2010 that “we all run on commodity hardware.”

In reality, you are normally limited to the hardware that a vendor chooses. Moving into a software-defined model means we have choice in vendors and can truly liberate the storage needs from the hardware vendor’s grasp. The growth of open source-based solutions has helped this along the way, and we will see if 2016 is the year we take the leap to make software-defined the norm.

Networking, on the other hand, is the last step forward. You cannot buy a blank switch off the shelf and just layer on the software of your choosing and make it work at the speed and capacity that you desire, but we are starting to see a transition toward this. The combination of software-defined solutions is what gives companies the agility to transition to the enterprises of tomorrow, providing the ability to custom-fit a solution into the business instead of the other way around. It appears that software-defined is more than just a buzzword.

So is it the year?

My gut says the commodity-based data center is not ready to go mainstream, but for intrepid companies willing to source the right gear, it may be. We have seen so many changes in the data center since the ’90s and the early 2000s that I have to believe the pace of change will continue.

Before we cross into 2020, the data center will be in the domain of the business again, not the vendors. That will be the commodity data center that anyone who has managed a data center in the last 20 years has been dreaming about.

Three Dimensions of Storage Sizing & Design – Part 1: Overview

May 16, 2016 By Nexenta

By Edwin Weijdema, Sales Engineer Benelux & Nordics

Why do we build ICT infrastructures? What is the most important part of those infrastructures? It is data! In the digital era we need to store bits of information, so-called data, so we can process, interpret, organize or present it as information by providing context and adding value to the available data. By giving meaning and creating context around data, we get useful information. We use data carriers to protect this precious data against loss and alteration. Exchanging floppies and tapes with each other was neither efficient nor fast, so storage systems came into play to make it easier to use, protect and manage data.

Networks, and especially the internet, connect more and more data and people. Designing and sizing the storage systems that hold and protect our data can be a challenge, because there are several different dimensions you should take into consideration while designing and sizing your storage system(s). In the Three Dimensions of Storage Sizing & Design blog series we will dive deeper into the different dimensions of storage sizing and design.

Use

In this digital era we are using and creating more data than ever before. To put this into perspective: in the last two years we created more data than in all prior years combined! Social media, mobile devices and the smart devices that make up the Internet of Things (IoT) accelerate the creation of data tremendously. We use applications to work with different kinds of data sources and to help us streamline these workloads. We create valuable information from the available data by adding context to it. A few examples of data we use, store and manage: documents, photos, movies, applications and, with the rise of virtualization and especially the Software-Defined Data Center (SDDC), complete infrastructures as code serving as a backend for the cloud.

Workloads

Applications have been created to make it easier and faster to work with data. Today we use, protect and manage many different kinds of workloads on our storage systems. A workload, the amount of work expected to be done in a specific amount of time, has several characteristics that define its type:

  1. Speed – measured in IOPS (Input/Output Operations Per Second); defines the number of I/O operations per second. Read and/or write I/Os can follow different patterns (for example, sequential or random). The higher the IOPS, the better the performance.
  2. Throughput – measured in MB/s; defines the data transfer rate, often also called bandwidth. The higher the throughput, the more data can be processed per second.
  3. Response – measured as latency in ns/µs/ms; defines the amount of time an I/O needs to complete. The lower the latency, the faster a system/application responds and the more fluid it feels to the user. Latency can be measured at many points along the data path.

Which characteristic is the most important metric depends heavily on the type of workload. For example, stock trading is very latency dependent, while a backup workload needs throughput to fit within its backup window. If you know the workloads that will use the storage system(s), you can size and design the system accordingly. Knowing what will use the storage is the first dimension for sizing and designing storage systems correctly.
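As a rough illustration of how the three metrics interact (not from the original post; block sizes, queue depths and latencies are assumed example values), the sketch below relates them: throughput is IOPS times the I/O size, and by Little’s Law sustained IOPS is roughly the queue depth divided by the per-I/O latency.

```python
# Back-of-the-envelope relations between the three workload metrics.
# All block sizes, queue depths and latencies are assumed example values.

def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    """Data transfer rate implied by an IOPS figure at a given I/O size."""
    return iops * block_size_kb / 1024

def iops_from_latency(queue_depth: int, latency_ms: float) -> float:
    """Little's Law: sustained IOPS for a given concurrency and per-I/O latency."""
    return queue_depth / (latency_ms / 1000)

# Latency-sensitive workload (e.g. trading): small blocks, shallow queue.
print(iops_from_latency(queue_depth=4, latency_ms=0.2))      # 20,000 IOPS
print(throughput_mb_s(iops=20_000, block_size_kb=8))         # ~156 MB/s

# Backup-style workload: large sequential blocks, throughput is what matters.
print(throughput_mb_s(iops=1_000, block_size_kb=1024))       # 1,000 MB/s
```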

Protect

A key reason to use storage systems is to protect the data we have. We must know what level of insurance the organization and its users need, and are willing to pay for, so we can design the storage system accordingly. We need to know the required usable capacity combined with the level of redundancy so we can calculate the total amount of capacity needed in the storage system. Always ask yourself whether you want to protect against the loss of one system, an entire data center, or even a geographic region, so you can meet the availability requirements; a rough capacity calculation follows the list below.

  1. Capacity – measured in GB/TB/PB (or GiB/TiB/PiB). The amount of usable data we need to store on the storage system and protect.
  2. Redundancy – measured in the number of copies of the data or metadata from which the original can be rebuilt, e.g. parity, mirroring, or multiple object replicas.
  3. Availability – measured in RPO and RTO. The maximum amount of data we may lose is the Recovery Point Objective (RPO); the maximum amount of time it may take to restore service levels is the Recovery Time Objective (RTO).
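As a minimal sketch of the capacity calculation (the schemes and figures are illustrative assumptions, not from the post), the required usable capacity and the chosen redundancy scheme together determine the raw capacity you actually have to buy:

```python
# Translate required usable capacity plus a redundancy scheme into raw capacity.
# The schemes and example numbers are illustrative assumptions.

def raw_capacity_tb(usable_tb: float, scheme: str,
                    copies: int = 2, data: int = 8, parity: int = 2) -> float:
    """Raw TB needed for a given usable TB under mirroring or parity protection."""
    if scheme == "mirror":
        return usable_tb * copies                  # every block stored `copies` times
    if scheme == "parity":                         # e.g. an 8+2 parity/erasure layout
        return usable_tb * (data + parity) / data
    raise ValueError(f"unknown scheme: {scheme}")

print(raw_capacity_tb(100, "mirror", copies=2))            # 200 TB raw
print(raw_capacity_tb(100, "parity", data=8, parity=2))    # 125 TB raw
```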


Knowing how much data we must protect, and against which protection/service levels (SLAs), gives us the second dimension for sizing and designing the storage system correctly.

Manage

Working with storage systems and designing them takes some insight into the future or, as some colleagues like to call it, a magic whiteboard. How is the organization running, where is it heading, and what business goals must it accomplish? Are those goals changing rapidly, or will they be stable for the foreseeable future? Those are just a few examples of the questions you should ask. Another very important metric is the available budget; budgets aren’t limitless, so designing a superior system at any price will not work!

  1. Budget – an estimate of income and expenditure for a set period of time. What amount of money can be spent on the total solution? If departments are charged back, what is the expected income and how is it calculated, e.g. price per GB/IOPS/MB/s, price per protection level (SLA), or a mix of several? Storage efficiency features that reduce the amount of data stored should also be taken into account (see the cost sketch after this list).
  2. Limitations – know the limitations of the storage system you are sizing and designing. Every storage system has a high-watermark beyond which performance is impacted as the system fills up.
  3. Expectations – how flexible should the system be, and how far should it scale?
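To illustrate the budget item above (with assumed prices, an assumed 20% redundancy overhead and an assumed 80% fill high-watermark), a rough cost-per-usable-GB comparison might look like this:

```python
# Rough cost per GB you can actually fill, after redundancy overhead and after
# reserving headroom below the performance high-watermark. All numbers are
# illustrative assumptions, not vendor quotes.

def cost_per_usable_gb(system_price: float, raw_tb: float,
                       usable_fraction: float = 0.8,
                       high_watermark: float = 0.8) -> float:
    fillable_gb = raw_tb * 1024 * usable_fraction * high_watermark
    return system_price / fillable_gb

print(round(cost_per_usable_gb(250_000, raw_tb=200), 2))   # ~1.91 $/GB
print(round(cost_per_usable_gb(120_000, raw_tb=200), 2))   # ~0.92 $/GB
```

Running the same arithmetic against every quote keeps the budget discussion grounded in price per protected, fillable GB rather than raw capacity.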

It’s all about balancing cost and requirements and managing expectations to get the best possible solution within the limitations of the storage system and/or organization. Managing the surroundings of the proposed storage solution gives us the third and final dimension for sizing and designing storage systems.

Overview

Sizing and designing storage systems is, and will always be, a balancing act: managing expectations, limitations and available money to get the best possible solution, so the proposed workloads run with the right speed, throughput and responsiveness while fulfilling the defined availability requirements. With the rise of cloud and Infrastructure as a Service (IaaS) vendors, organizations increasingly just buy a service. Even then, understanding how your data is used, and against which protection levels, helps tremendously in selecting the correct service and managing it accordingly.

To get the best bang for your buck, it helps to understand how to determine the three dimensions correctly, so you can size and design the best solution possible. In the next parts of this blog series we will dive deeper into the three dimensions of storage sizing and design: Use, Protect and Manage. In Part 2 we will dive into the first dimension, Use, by determining workloads.
