Nexenta Blog

Global Leader in Software Defined Storage

Three Dimensions of Storage Sizing & Design – Part 1: Overview

May 16, 2016 By Nexenta

By Edwin Weijdema, Sales Engineer Benelux & Nordics

Why do we build ICT infrastructures? What is their most important part? Data! In the digital era we need to store bits of information, so-called data, so we can process, interpret, organize or present it as information by providing context and adding value. By giving meaning and creating context around data we get useful information. We use data carriers to protect precious data against loss and alteration. Exchanging floppies and tapes with each other was neither efficient nor fast, so storage systems came into play to make it easier to use, protect and manage data.

Networks, and especially the internet, connect more and more data and people. Designing and sizing the storage systems that hold and protect our data can be a challenge, because there are several dimensions you should take into consideration. In the Three Dimensions of Storage Sizing & Design blog series we will dive deeper into the different dimensions of storage sizing and design.

Use

In this digital era we are using and creating more data than ever before. To put this into perspective: in the last two years we created more data than in all the years before combined! Social media, mobile devices and smart devices combined in the Internet of Things (IoT) accelerate the creation of data tremendously. We use applications to work with different kinds of data sources and to help us streamline these workloads. We create valuable information from the available data by adding context to it. A few examples of data we use, store and manage are documents, photos, movies, applications and, with the rise of virtualization and especially the Software Defined Data Center (SDDC), complete infrastructures in code as a backend for the cloud.

Workloads

To make it easier and faster to work with data, applications have been created. Today we use, protect and manage many different kinds of workloads that run on our storage systems. A workload — the amount of work expected to be done in a specific amount of time — has several characteristics that define its type:

  1. Speed – measured in IOPS (Input/Output Operations Per Second). Read and/or write IOs can follow different patterns (for example, sequential or random). The higher the IOPS, the better the performance.
  2. Throughput – measured in MB/s, defines the data transfer rate, often called bandwidth. The higher the throughput, the more data can be processed per second.
  3. Response – measured as latency in ns/µs/ms, defines the amount of time an IO needs to complete. The lower the latency, the faster a system/application/data responds and the more fluid it feels to the user. Latency can be measured at many points along the data path.
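The relationship between the first two metrics can be sketched with a little arithmetic (an illustrative sketch, not a sizing tool — the workload figures below are hypothetical): throughput is roughly IOPS multiplied by IO size, which is why a small-block random workload and a large-block sequential workload can have the same throughput at wildly different IOPS.

```python
def throughput_mb_s(iops: float, io_size_kb: float) -> float:
    """Approximate throughput (MB/s) implied by an IOPS rate at a given IO size."""
    return iops * io_size_kb / 1024

# A random 4 KB workload at 20,000 IOPS moves less data per second
# than a sequential 1 MB backup stream at only 500 IOPS:
print(throughput_mb_s(20_000, 4))    # 78.125 MB/s
print(throughput_mb_s(500, 1024))    # 500.0 MB/s
```

This is also why quoting IOPS without an IO size (or throughput without a block size) says little about a workload on its own.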

Which characteristic is the most important metric depends heavily on the type of workload. For example, stock trading is very latency sensitive, while a backup workload needs throughput to fit within a backup window. If you know which workloads will be using the storage system(s), you can size and design the storage accordingly. Knowing what will use the storage is the first dimension of sizing and designing storage systems correctly.

Protect

A key reason to use storage systems is to protect the data we have. We must know the level of insurance the organization and its users need and are willing to pay for, so we can design the storage system accordingly. We need to know the required usable capacity combined with the level of redundancy, so we can calculate the total capacity needed in the storage system. Always ask yourself whether you want to protect against the loss of a single system, an entire data center, or even across geographic regions, so you can meet the availability requirements.

  1. Capacity – measured in GB/TB/PB (or GiB/TiB/PiB). The amount of usable data we need to store and protect on the storage.
  2. Redundancy – measured in the number of copies of the data, or the metadata that can rebuild the data to its original state, e.g. parity, mirroring or multiple objects.
  3. Availability – measured in RPO and RTO. The maximum amount of data we may lose is called the Recovery Point Objective (RPO); the maximum amount of time it may take to restore service levels is the Recovery Time Objective (RTO).
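As a rough sketch of how usable capacity and redundancy combine into raw capacity (the scheme names and overhead factors below are simplified illustrations, not tied to any particular product):

```python
# Raw capacity needed for a given usable capacity under a few common
# protection schemes. Overhead factors are simplified illustrations and
# ignore filesystem metadata and spare capacity.
PROTECTION_OVERHEAD = {
    "2-way mirror": 2.0,   # two full copies of the data
    "3-way mirror": 3.0,   # three full copies
    "parity 8+1": 9 / 8,   # 8 data units + 1 parity unit
    "parity 8+2": 10 / 8,  # 8 data units + 2 parity units
}

def raw_capacity_tb(usable_tb: float, scheme: str) -> float:
    return usable_tb * PROTECTION_OVERHEAD[scheme]

print(raw_capacity_tb(100, "2-way mirror"))  # 200.0 TB raw
print(raw_capacity_tb(100, "parity 8+2"))    # 125.0 TB raw
```

The same usable capacity can thus require anywhere from a quarter more raw capacity to several times as much, which is why the redundancy choice must be settled before the capacity calculation.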

Knowing how much data we must protect, and against which defined protection/service levels (SLAs), gives us the second dimension for sizing and designing the storage system correctly.

Manage

Working with storage systems and designing them takes some foresight, or as some colleagues like to call it: a magic whiteboard. How is the organization running, where is it heading, and what are the business goals to accomplish? Are business goals changing rapidly, or will they be stable for the foreseeable future? Those are just a few examples of questions you should ask. Another very important metric is the available budget. Budgets aren't limitless, so designing a superior system that is priceless will not work!

  1. Budget – an estimate of income and expenditure for a set period of time. What amount of money can be spent on the total solution? If departments are charged, what is the expected income and how is it calculated, e.g. price per GB/IOPS/MB, price per protection level (SLA), or a mix of several? Storage efficiency features that reduce the amount of data stored should also be taken into account.
  2. Limitations – know the limitations of the storage system you are sizing and designing. Every storage system has a high-water mark beyond which performance degrades as the system fills up.
  3. Expectations – how flexible should the system be, and how far should it scale?
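To illustrate the budget point, here is a simple (hypothetical) price-per-usable-GB calculation that folds in both the protection overhead and the data-reduction features mentioned above; all figures are made-up examples:

```python
def price_per_usable_gb(total_cost: float, raw_gb: float,
                        protection_overhead: float,
                        reduction_ratio: float) -> float:
    """Effective price per usable GB once protection overhead and data
    reduction (compression/deduplication) are factored in.
    All inputs are hypothetical examples, not vendor figures."""
    usable_gb = raw_gb / protection_overhead * reduction_ratio
    return total_cost / usable_gb

# $50,000 for 200 TB raw, 2-way mirroring, 1.5:1 data reduction:
print(round(price_per_usable_gb(50_000, 200_000, 2.0, 1.5), 3))  # 0.333 $/GB
```

Note how the protection level roughly doubles the effective price here while data reduction claws part of it back — which is why chargeback models are usually quoted per protection level (SLA).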

It's all about balancing cost and requirements and managing expectations to get the best possible solution within the limitations of the storage system and/or organization. Managing the surroundings of the proposed storage solution gives us the third and final dimension for sizing and designing storage systems.

Overview

Sizing and designing storage systems is, and will always be, a balancing act of managing expectations, limitations and available money to get the best possible solution, so that the proposed workloads run with the correct speed, throughput and responsiveness while fulfilling the defined availability requirements. With the rise of cloud and Infrastructure as a Service (IaaS) vendors, organizations tend to just buy a service. Even then, understanding how your data is used and against which protection levels helps tremendously in selecting the correct service and managing it accordingly.

To get the best bang for your buck, it helps to understand how to determine the three dimensions correctly, so you can size and design the best solution possible. In the next parts of this blog series we will dive deeper into the three dimensions of storage sizing & design: Use, Protect and Manage. In part 2 we will dive into the first dimension, Use, by determining workloads.

Raise Your SDS IQ (1 of 6): Practical Review of Scale-up Vendor-Defined “SDS”

May 10, 2016 By Nexenta

by Michael Letschin, Field CTO

This is the first of six posts (aside from the Introduction) where we’re going to cover some practical details that help raise your SDS IQ and enable you to select the SDS solution that will deliver Storage on Your Terms. The first SDS flavor in our series is Scale-up Vendor-Defined “SDS”.

Scale-up Vendor-Defined “SDS” is where most of the traditional “big box” vendor solutions lie – think EMC’s VNX, NetApp’s FAS, or IBM’s DS4000 series – each typically being one or two (usually commodity) head nodes with JBODs behind them. While sold as an appliance, the argument is that SDS comes into play as front-end software delivering REST-based management with rich APIs, enabling easy, automatic provisioning and management of storage.

Companies often choose scale-up vendor-defined “SDS” because it’s a well-recognized brand, the vendor is already in-house, and it appears to bring SDS benefits while maintaining a familiar in-house infrastructure. Scale-up Vendor-Defined “SDS” is often selected for legacy applications and some virtual apps, largely because its performance in these use cases is excellent. It’s a great choice if you’re running virtual machines with NFS, Exchange, or MS SQL. But it’s still vendor-defined and not true SDS, so your hardware choices, and your flexibility, are restricted. It also means giving up one of the core SDS benefits – cost effectiveness – because you’ll be paying a premium for proprietary hardware. And that hardware is generally only one or two head nodes, so scalability is limited too.

Overall grade: D+

See the accompanying figures for a typical build and the report card: scale-up_vendor-defined and scale-up_vendor-defined_checklist.

Watch this space for the next review in our series – Scale-up, Software-Only SDS

GleSYS’ joint NexentaStor/InfiniFlash system deployment to be focus of upcoming webinar

May 4, 2016 By Nexenta

In April, we announced that GleSYS Internet Services AB (GleSYS), a next-generation cloud platform provider based in Sweden, had deployed our and SanDisk’s joint All Flash Software-Defined Storage solution. The NexentaStor/InfiniFlash solution delivers a cost-effective, high-performance storage architecture, empowering GleSYS and its customers to quickly scale capacity as required.

Providing hosted internet solutions to nearly 3,500 customers around the world, GleSYS specializes in offering its public cloud services to small- and medium-sized businesses on a self-serve basis.  Prior to selecting the joint solution, the company was struggling to ensure the reliability of its storage solutions for customers, many of whom require extra performance provisioning instantly. However, GleSYS now has access to flexible and reliable storage architecture and, with unparalleled and limitless scalability to boot, the company has a tool that can support its impressive year-on-year growth predictions.

The deployment has been a huge success with GleSYS able to provide powerful and reliable cloud architectures to its increasing customer base. The solution has provided the company with numerous technical benefits, including:

  • The new architecture runs at a constant 20k Input/Output Operations per Second (IOPS), with a latency of less than 1.5ms in its daily operations. The previous solution was only able to reach a maximum of 12k IOPS
  • While IOPS currently peak at around 80k, GleSYS believes this is not the limit of the solution, but rather the max utilization that it has seen from its current hardware
  • With regard to data center footprint, GleSYS has a 64TB hybrid storage solution occupying 16U. The joint solution can hold up to 512TB in only 7U

The GleSYS success story will be discussed in more detail during an upcoming webinar, May 11th 2016, 7am PT.  Executives from GleSYS, Nexenta, SanDisk and Layer 8 will take a deep dive into the challenges that the organization was facing with its legacy storage set-ups, and how the NexentaStor/InfiniFlash system is ensuring better storage performance and reliability for the company and its customers. Register for the webinar here: https://www.brighttalk.com/webcast/12587/198547

For more information on the GleSYS deployment, the full case study can be viewed here (English) or here (Swedish).

 

 

It’s time to raise your SDS IQ

April 26, 2016 By Nexenta

by Michael Letschin, Field CTO

If you’re like other storage buyers about to invest in a solution, you want storage on your terms – optimized for your organization and its requirements, now and in the future. When it comes to distinguishing the wealth of Software Defined Storage (SDS) solutions from one another, you probably have a better shot at telling monkeys apart (note: there are 260 species of monkeys). Even respected analysts like Gartner, IDC, 451 Research and TechTarget have different SDS definitions – SDS must be software only, SDS can be hyperconverged, SDS is open source, or SDS can be hardware-based. What most people seem to agree on is that SDS enables storage services through a software interface and often runs on commodity hardware, enabled by the decoupling of storage software from hardware.

Yet that still doesn’t help answer the question, what meets YOUR needs? It may seem a little unconventional for a vendor blog, but our goal in this series (expect another six blogs after this one) is to give you some practical information on SDS types – what are the flavors, what works best where, how different SDS types rate against common use cases, and what you should select to bring up your organization’s SDS IQ.

We’re going to cover six types of SDS solutions:

  • Scale-up Vendor-Defined “SDS”
  • Scale-up Software Only
  • Scale Out
  • Hyperconverged
  • Virtual Storage Appliance
  • Containerized

Review our report cards to see whether your favorite SDS solution made the grade – we’ll look at each type and rate them on four critical categories: flexibility, scalability, performance, and cost; we’ll suggest the best use cases for each solution, and even share a few vendors to look at. We’ll be using a 5 point grading system:

  • A: Excellent; well-rounded and recommended solution
  • B: Very Good; above average solutions, especially for certain use cases
  • C: Passing; improvement needed for overall usage
  • D: Close fail; almost passing, solution with numerous gaps
  • F: Failing, not a workable solution

What’s on your SDS wish list?

To help you raise your SDS IQ, it’s helpful to start by doing your homework – what’s on your SDS wish list? For example, making sure you’re still in charge of managing drives, so you can handle predictable drive failures. Many organizations also want policy-based provisioning using REST-based APIs, specifically thin provisioning and scripted storage provisioning. Tiering is also often a must-have for SDS because of its ability to match data with storage types and maximize your return on investment. You might also be looking for SDS that’s independent of hypervisors. Hyperconvergence expands the portfolio of solutions even further. Take a few minutes to think through what matters most, and we’ll help you figure out how to get it.

Watch this space for the first review in our series – Scale-up, Vendor-Defined “SDS”

Nexenta has joined Intel’s new Storage Builders Program for Open Software-Defined Storage

April 19, 2016 By Nexenta

by Nexenta

Earlier in April, Intel hosted its Intel Data Center Group (DCG) Cloud Day. The event brought together its vast network of industry partners across the cloud and networking space, as well as journalists, analysts and other influencers, to discuss the industry’s progress in the adoption of Software-Defined Infrastructure (SDI), highlighting the importance of Software-Defined Storage (SDS) to SDI. As part of this discussion, Intel announced it would be launching a Storage Builders Program – an extension of the Intel Builders Program.

We were delighted to be invited to join the exclusive group of vendors supporting the program – many of which we’ve worked with before and we’re excited to extend those partnerships further.

The program is a cross-industry initiative designed to increase ecosystem alignment, ignite innovation, reduce development efforts, lead open storage standards development and accelerate adoption of intelligent, cost-effective and efficient SDS.

There is no doubt that SDS has become an increasingly important market segment, and the introduction of this program is yet more evidence of that fact. Organizations across the world are finding that their legacy storage systems just aren’t up to scratch, cost far too much and, on top of that, they’re tied to a single vendor. At Nexenta our long-term mission has been to make storage open and we’re excited that Intel has the same future in mind.

As a participant in the program, we’ll continue to build on our current OpenSDS offerings, but will have the added benefit of using Intel’s expertise to create joint solutions for customers. We’ll have the ability to collaborate with other Intel Storage Builders members to drive broader market adoption of software-defined technologies across the data center and seriously accelerate traction in the storage market.

Our participation in the Intel Storage Builders program supplements our existing and continued support of all Intel server providers including Cisco, Dell, Hewlett Packard Enterprise (HPE), Inspur, Lenovo, SuperCloud and Supermicro.

“Supermicro is raising the bar on performance and density with the Industry’s broadest portfolio of all-flash NVMe storage solutions and complete support for Intel’s latest Xeon E5-2600 v4 processors across our server platforms,” said Don Clegg, VP of Marketing and Business Development at Supermicro. “As a member of Intel Storage Builders with longstanding partner Nexenta, we are developing and deploying next generation software-defined storage solutions which deliver the most innovative, highly scalable, hyper-converged infrastructure, with lowest overall TCO.”

“We are proud to join Nexenta and select others as part of the Intel Storage Builders program. We’ve worked with Nexenta for a long time in supporting our mission to provide customers with the industry’s widest-range of tested and validated OpenSDS solutions,” said Travis Vigil, Executive Director Product Management at Dell Storage. “We are excited to continue working together along with Intel to produce high-end results for our customers.”

“Nexenta’s solutions will enable our customers to deploy flexible, software-defined storage environments on purpose-built HPE Apollo 4000 big data servers,” said Susan Blocher, Vice President of Compute Solutions at Hewlett Packard Enterprise. “The collaboration between HPE, Nexenta and Intel ensures our customers benefit from the combined innovation and high-quality of our next generation solutions.”

“Nexenta is a strong partner in supporting our growth in the enterprise storage market,” said Stuart McRae, Director of Product Marketing for the Lenovo Storage Business Unit. “At Lenovo, we strive to bring innovative customer value to our enterprise customers, and together with Intel and Nexenta, we will continue to expand our reach to bring new economics to data center storage.”

“We are pleased to partner once again with Nexenta, as they continue to support our company’s vision of shaping the future of software-defined storage by creating unprecedented value and opportunities for our customers and partners,” said Philippe Vincent, CEO at Load DynamiX. “Ensuring that software defined storage will perform and scale to enterprise data center requirements is a key focus of Load DynamiX. We are confident that, together with other Intel Storage Builder partners, we can achieve these goals.”

“Nexenta has proven to our customers the joint benefits of SDS in conjunction with our expertise in cloud computing hardware, software and services,” said Yuzhen Fang, CEO at SuperCloud. “We look forward to continuing development of these solutions with both Nexenta and Intel.”

“Nexenta and VMware have worked together for many years to provide mutual customers with flexible storage solutions to enable increased efficiency and availability, while lowering costs,” said Howard Hall, senior director, Global Technology Partnering Organization, VMware. “We look forward to working with both Nexenta and Intel to create additional value and drive broader adoption of software-defined technologies.”

For more information, visit: Storage on Your Terms: Nexenta Software Defined Storage with Intel

 
