Nexenta Blog

Global Leader in Software Defined Storage

Three Dimensions of Storage Sizing & Design – Part 1: Overview

May 16, 2016 By Nexenta

By Edwin Weijdema, Sales Engineer Benelux & Nordics

Why do we build ICT infrastructures? What is the most important part of those infrastructures? It is data! In the digital era we need to store bits of information, so-called data, so that we can process, interpret, organize or present it as information by providing context and adding value. By giving meaning and creating context around data we get useful information. We use data carriers to protect that precious data against loss and alteration. Exchanging floppies and tapes with each other was neither efficient nor fast, so storage systems came into play to make it easier to use, protect and manage data.

Networks, and especially the internet, connect more and more data and people. Designing and sizing the storage systems that hold and protect our data can be a challenge, because there are several dimensions you should take into consideration. In the Three Dimensions of Storage Sizing & Design blog series we will dive deeper into the different dimensions of storage sizing and design.

Use

In this digital era we are using and creating more data than ever before. To put this into perspective: in the last two years we created more data than in all prior years combined! Social media, mobile devices and the smart devices that make up the Internet of Things (IoT) accelerate the creation of data tremendously. We use applications to work with different kinds of data sources and to help us streamline these workloads, and we create valuable information by adding context to the available data. A few examples of the data we use, store and manage are documents, photos, movies and applications, and, with the rise of virtualization and especially the Software Defined Data Center (SDDC), complete infrastructures as code serving as the backend for the cloud.

Workloads

To make it easier and faster to work with data, applications have been created. Today we use, protect and manage many different kinds of workloads that run on our storage systems. A workload, the amount of work expected to be done in a specific amount of time, has several characteristics that define its type:

  1. Speed – measured in IOPS (Input/Output Operations Per Second). Read and/or write IOs can follow different patterns (for example, sequential or random). The higher the IOPS, the better the performance.
  2. Throughput – measured in MB/s; defines the data transfer rate, often called bandwidth. The higher the throughput, the more data can be processed per second.
  3. Response – measured as latency in ns/µs/ms; defines the amount of time an IO needs to complete. The lower the latency, the faster a system, application or dataset responds and the more fluid it feels to the user. Latency can be measured at many points along the data path.

Which characteristic is the most important metric depends heavily on the type of workload. For example, stock trading is very latency sensitive, while a backup workload needs enough throughput to fit within its backup window. Once you know which workloads will use the storage system(s), you can size and design it accordingly. Knowing what will use the storage is the first dimension of sizing and designing storage systems correctly.
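To make the relationship between these metrics concrete, here is a minimal sketch in Python. The numbers are purely illustrative assumptions, not measurements from any particular storage system:

# Minimal sketch of how IOPS, block size, throughput and per-IO latency relate.
# All figures below are illustrative assumptions.

def throughput_mb_per_s(iops: float, block_size_kb: float) -> float:
    """Throughput (MB/s) = IOPS x block size."""
    return iops * block_size_kb / 1024

def max_iops_at_queue_depth_one(latency_ms: float) -> float:
    """Upper bound on IOPS for a single outstanding IO stream."""
    return 1000 / latency_ms

if __name__ == "__main__":
    # A hypothetical random 8 KB workload at 20,000 IOPS.
    print(throughput_mb_per_s(20_000, 8))        # ~156 MB/s
    # With 0.5 ms latency, one IO at a time sustains at most 2,000 IOPS.
    print(max_iops_at_queue_depth_one(0.5))      # 2000.0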

Protect

A key reason to use storage systems is to protect the data we have. We must know what level of insurance the organization and its users need, and are willing to pay for, so we can design the storage system accordingly. We need to know the required usable capacity combined with the level of redundancy, so we can calculate the total amount of capacity needed in the storage system. Always ask yourself whether you need to protect against the loss of one system, an entire data center, or even a geographic region, so you can meet the availability requirements.

  1. Capacity – measured in GB/TB/PB (or GiB/TiB/PiB); the amount of usable data we need to store on the storage and protect.
  2. Redundancy – measured in the number of copies of the data, or of metadata that can rebuild the data to its original state, through measures such as parity, mirroring or multiple object copies.
  3. Availability – measured in RPO and RTO: the maximum amount of data we may lose, called the Recovery Point Objective (RPO), and the maximum amount of time it may take to restore service levels, the Recovery Time Objective (RTO).

Knowing how much data we must protect, and against which defined protection and service levels (SLAs), gives us the second dimension for sizing and designing the storage system correctly.
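As a minimal sketch of that calculation, the following Python snippet translates a usable-capacity requirement plus a redundancy scheme into raw capacity. The overhead factors and the 80% fill limit are illustrative assumptions, not figures for any specific product:

# Sketch: raw capacity needed for a given usable capacity and redundancy scheme.
# Overhead factors and fill limit are illustrative assumptions.

REDUNDANCY_OVERHEAD = {
    "2-way mirror": 2.0,     # every block stored twice
    "3-way mirror": 3.0,     # every block stored three times
    "parity 8+2":   10 / 8,  # stripe of 8 data + 2 parity segments
}

def raw_capacity_tb(usable_tb: float, scheme: str, fill_limit: float = 0.80) -> float:
    """Raw TB required, keeping usage below a high-watermark fill limit."""
    return usable_tb * REDUNDANCY_OVERHEAD[scheme] / fill_limit

if __name__ == "__main__":
    # 100 TB usable, protected with an 8+2 parity layout,
    # sized so the pool never fills beyond 80%.
    print(raw_capacity_tb(100, "parity 8+2"))  # 156.25 TB raw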

Manage

Working with storage systems and designing them takes some foresight, or, as some colleagues like to call it, a magic whiteboard. How is the organization running, where is it heading and what are the business goals it wants to accomplish? Are those goals changing rapidly, or will they be stable for the foreseeable future? Those are just a few examples of the questions you should ask. Another very important metric is the available budget. Budgets aren't limitless, so designing a superior system at any price will not work!

  1. Budget – an estimate of income and expenditure for a set period of time. What amount of money can be spent on the total solution, and, if departments are charged back, what is the expected income and how is it calculated, e.g. price per GB, IOPS or MB/s, price per protection level (SLA), or a mix of several? Storage efficiency features that reduce the amount of stored data should also be taken into account.
  2. Limitations – know the limitations of the storage system you are sizing and designing. Every storage system has a high-watermark beyond which performance is impacted as the system fills up.
  3. Expectations – how flexible should the system be, and how far should it scale?

It is all about balancing cost and requirements and managing expectations to get the best possible solution within the limitations of the storage system and/or the organization. Managing the surroundings of the proposed storage solution gives us the third and final dimension for sizing and designing storage systems.
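As a minimal sketch of the charge-back arithmetic mentioned above, the snippet below computes an effective price per GB once a data-reduction ratio is assumed. All figures are illustrative, not real pricing:

# Sketch: effective price per logical GB, given a budget, usable capacity
# and an assumed data-reduction (dedup/compression) ratio.

def price_per_effective_gb(total_cost: float, usable_gb: float,
                           data_reduction_ratio: float = 1.0) -> float:
    """Cost per GB of logical data stored after data reduction."""
    return total_cost / (usable_gb * data_reduction_ratio)

if __name__ == "__main__":
    # A hypothetical 200,000 budget for 500,000 GB (500 TB) usable,
    # with an assumed 2:1 data-reduction ratio.
    print(price_per_effective_gb(200_000, 500_000, 2.0))  # 0.2 per GB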

Overview

Sizing and designing storage systems is, and will always be, a balancing act between expectations, limitations and available money, so that the proposed workloads run with the right speed, throughput and responsiveness while fulfilling the defined availability requirements. With the rise of cloud and Infrastructure as a Service (IaaS) vendors, organizations increasingly just buy a service. Even then, understanding how your data is used and which protection levels it needs helps tremendously in selecting the correct service and managing it accordingly.

To get the best bang for your buck, it helps to understand how to determine the three dimensions correctly, so you can size and design the best solution possible. In the next parts of this blog series we will dive deeper into the three dimensions of storage sizing and design: Use, Protect and Manage. In part 2 we will dive into the first dimension, Use, by determining workloads.

Is 2016 the year of the software-defined data center?

May 7, 2016 By mletschin

Even with my 20 years of experience working in technology, I am just a baby compared to some people I have encountered along the way.

Still, I sometimes think about the “old days” in IT. It was a time when data centers were dominated by mainframe systems and midrange monoliths like the AS/400. A time when virtualization was something done only on these big systems, and the only need for an x86 system was under your desk to access a mainframe when you didn’t have a classic thin client.

When my career in IT really began, I was working with systems like Windows NT and Exchange 5.5, and we were creating Web pages with this mysterious language called HTML that magically made things appear. Even in those days, there was this thought that there was no need for companies to be beholden to the largest vendors.

There was a notion/belief in the early 2000s that with the growth of VMware, eventually the data center would become a blend of multiple vendors and companies would be able to take back their own infrastructure, enabling them to control the way they supported their business. This transition marked the evolution from the traditional data center to the commodity-based data center, driven by business needs and created using software, rather than the monolith systems that big vendors charged exorbitant amounts for.

Fast-forward to 2016

Are we there yet? Not quite, but this could be the year. Most enterprises have nearly 90% of their systems virtualized on VMware, Xen or KVM, managed by newer solutions like OpenStack. 2016 could finally be the year we see the commodity-based data center become reality.

To make this happen, there are so many components that need to be in the right place. We need to see x86 servers with enough power to support large database systems — check. We need the ability to choose between multiple vendors that all produce a similar-quality product, but differentiate themselves with their support and supply chain — check. We need to have networks that can handle multiple systems and the communication between them, without losing packets and without being based on a single large infrastructure — check. (Well, sorta . . . we will get to that one in a few.) Finally, we need software to drive all these systems and define the solutions, whether it is storage, compute or even networking.

We are just about there and are seeing an influx of companies that can provide these services, but can they unseat the incumbent large, publicly traded vendors that look to buy up all the small fish? That just may be the key to 2016 being the year of change for the data center we have all dreamed about since we walked away from worrying about what happens when the clock strikes midnight for the new millennium.

Over the next couple of blogs I will take a look at each of the categories and see if we are really as close as we think to 2016 being “The Year,” starting with the server and compute side.

Won the battle and the war

There is little doubt that the data center is now dominated by x86 server hardware. That hardware is available with CPUs from multiple manufacturers, has more cores than we could imagine in the ’90s, and offers memory levels that exceed what helped put a man on the moon. All of this hardware is going to support everything from large single applications to scale-out big data apps that let us analyze every picture posted on Instagram or Facebook about kittens in the last 24 hours.

Let the vendors compete for you

Most users probably have their preference now in server vendors, and it is all about the extras these companies provide. It is not about the components within the servers. Most network cards are from one or two companies, and most processors are from one of two companies — you may have hard drives from multiple firms, but overall it’s just different-shaped, bent metal with a different label on the front.

The true differentiator for vendor choice comes in the form of supply chain and parts availability. The supply chain has begun to collapse around fewer global vendors, making it easier to price match between them. If you are only looking for regional or local support, that too has limited choice, and all the vendors know it. The choice is now truly in the hands of the consumer and, if planned out, can result in a race to the bottom on price, along with increased “hand-holding” when the equipment arrives.

Nexenta has joined Intel’s new Storage Builders Program for Open Software-Defined Storage

April 19, 2016 By Nexenta

by Nexenta

Earlier in April, Intel hosted its Intel Data Center Group (DCG) Cloud Day. The event brought together its vast network of industry partners across the cloud and networking space, as well as journalists, analysts and other influencers, to discuss the industry’s progress in the adoption of Software-Defined Infrastructure (SDI), highlighting the importance of Software-Defined Storage (SDS) to SDI. As part of this discussion, Intel announced it would be launching a Storage Builders Program – an extension of the Intel Builders Program.

We were delighted to be invited to join the exclusive group of vendors supporting the program – many of which we’ve worked with before and we’re excited to extend those partnerships further.

The program is a cross-industry initiative designed to increase ecosystem alignment, ignite innovation, reduce development efforts, lead open storage standards development and accelerate adoption of intelligent, cost-effective and efficient SDS.

There is no doubt that SDS has become an increasingly important market segment, and the introduction of this program is yet more evidence of that fact. Organizations across the world are finding that their legacy storage systems just aren’t up to scratch, cost far too much and, on top of that, they’re tied to a single vendor. At Nexenta our long-term mission has been to make storage open and we’re excited that Intel has the same future in mind.

As a participant in the program, we’ll continue to build on our current OpenSDS offerings, but will have the added benefit of using Intel’s expertise to create joint solutions for customers. We’ll have the ability to collaborate with other Intel Storage Builders members to drive broader market adoption of software-defined technologies across the data center and seriously accelerate traction in the storage market.

Our participation in the Intel Storage Builders program supplements our existing and continued support of all Intel server providers including Cisco, Dell, Hewlett Packard Enterprise (HPE), Inspur, Lenovo, SuperCloud and Supermicro.

“Supermicro is raising the bar on performance and density with the Industry’s broadest portfolio of all-flash NVMe storage solutions and complete support for Intel’s latest Xeon E5-2600 v4 processors across our server platforms,” said Don Clegg, VP of Marketing and Business Development at Supermicro. “As a member of Intel Storage Builders with longstanding partner Nexenta, we are developing and deploying next generation software-defined storage solutions which deliver the most innovative, highly scalable, hyper-converged infrastructure, with lowest overall TCO.”

“We are proud to join Nexenta and select others as part of the Intel Storage Builders program. We’ve worked with Nexenta for a long time in supporting our mission to provide customers with the industry’s widest-range of tested and validated OpenSDS solutions,” said Travis Vigil, Executive Director Product Management at Dell Storage. “We are excited to continue working together along with Intel to produce high-end results for our customers.”

“Nexenta’s solutions will enable our customers to deploy flexible, software-defined storage environments on purpose-built HPE Apollo 4000 big data servers,” said Susan Blocher, Vice President of Compute Solutions at Hewlett Packard Enterprise. “The collaboration between HPE, Nexenta and Intel ensures our customers benefit from the combined innovation and high-quality of our next generation solutions.”

“Nexenta is a strong partner in supporting our growth in the enterprise storage market,” said Stuart McRae, Director of Product Marketing for the Lenovo Storage Business Unit. “At Lenovo, we strive to bring innovative customer value to our enterprise customers, and together with Intel and Nexenta, we will continue to expand our reach to bring new economics to data center storage.”

“We are pleased to partner once again with Nexenta, as they continue to support our company’s vision of shaping the future of software-defined storage by creating unprecedented value and opportunities for our customers and partners,” said Philippe Vincent, CEO at Load DynamiX. “Ensuring that software defined storage will perform and scale to enterprise data center requirements is a key focus of Load DynamiX. We are confident that, together with other Intel Storage Builder partners, we can achieve these goals.”

“Nexenta has proven to our customers the joint benefits of SDS in conjunction with our expertise in cloud computing hardware, software and services,” said Yuzhen Fang, CEO at SuperCloud. “We look forward to continuing development of these solutions with both Nexenta and Intel.”

“Nexenta and VMware have worked together for many years to provide mutual customers with flexible storage solutions to enable increased efficiency and availability, while lowering costs,” said Howard Hall, senior director, Global Technology Partnering Organization, VMware. “We look forward to working with both Nexenta and Intel to create additional value and drive broader adoption of software-defined technologies.”

For more information, visit: Storage on Your Terms: Nexenta Software Defined Storage with Intel

 

Questions from the Field: How do you define Software-defined?

April 15, 2016 By Nexenta

By Michael Letschin, Field CTO

If you’re in IT, and attending the usual industry events, you can’t help but notice the explosion of companies, from those making software to those making hardware, claiming to have Software-defined solutions.  I even bumped into someone (we’ll leave him nameless) who claimed that his company’s flash controller was “software-defined”, because after all, it was software that defined how the hardware should be managed; right?  Right …

Yes, software needs hardware, but that doesn’t make the resulting solution Software-defined.  While there are an increasing number of Software-defined solutions out there, it’s still a bit of the Wild West, and buyers best beware.  Have some healthy caution as you explore solutions and understand how your vendor defines Software-defined.  Getting that base-level understanding is important, because the solutions that flow from the definition have different characteristics that either will or won’t work with your use case.

So, how do we define Software-defined?  Well, we are Nexenta after all, the storage software company; we would say that the only true SDS solutions are ones where the software is hardware agnostic, architecturally flexible, and able to deliver business agility (along with the usual storage software features).  But don’t take our word for it: for a deeper dive into SDS definitions, check out George Crump’s latest blog “What Exactly is Software Defined Storage”.  The important takeaway here is that the definition of SDS matters when you’re trying to solve a problem – it defines the benefits that the solution is capable of delivering to you.

Next week we’ll be starting a blog series on how to Raise Your SDS IQ, where we’ll walk through the six different types of SDS, as the industry defines them, and explain where and why they excel, and fall down.  So, watch this space as we work to build out your buyer’s toolkit; that way you won’t be the guy (or gal) with the knife at the gun fight ;).

Using the Right Storage Protocol for the Right Use Case

April 5, 2016 By Nexenta

By Michael Letschin, Field CTO

IT professionals have no shortage of storage protocols to choose from, such as NFS, SMB, Fibre Channel (FC), iSCSI and Object. “Experts” are writing books about which protocol is best, usually taking the side of a vendor with a particular axe to grind. The truth is they each have their sweet spot. The key is to make sure that your storage solution is flexible enough to support all your data center’s needs at the same time.

Virtualization

In most data centers the virtual infrastructure supports the majority of the business-critical applications and workloads. The virtualization platform of choice, at least for now, is VMware. While FC is still very prevalent in VMware environments, and VVols make FC more flexible, we believe NFS is the ideal choice for most VMware environments. Let’s face it: VMs are essentially files, and what better way to store files than a protocol designed specifically for file-based data like NFS? The advantages of NFS are well documented, but the key is that NFS provides a much easier mapping of a VM to its datastore. You can now make decisions, like what tier of storage to place a VM on, at a discrete per-VM level.

Mission Critical Applications

Many environments, for a variety of reasons, choose not to virtualize certain mission-critical applications. They may already be clustered, or there may be too many performance concerns. For these situations, many data centers will leverage a block protocol like FC or iSCSI. If the workload requires low-latency access to high-performance storage, then FC is ideal, but iSCSI can hold its own for applications where latency is not critical. Again, your storage software should give you the flexibility to choose any or all of these protocols as it makes sense.

Files

Managing file data, or unstructured data for those who want to sound cool, is one of the biggest challenges facing IT. And just like applications, not all of this data is equal. Most IT professionals immediately think of user data here, created by office productivity applications. It needs to be put on moderately performing storage, but not the fastest storage, since most users today are connecting via Wi-Fi or even broadband. You need to keep it a long time, because users never want you to delete their files. For this data, assuming most users are running Windows, SMB is the protocol of choice.

Another type of file data comes from machines like cameras, recording devices and sensors. It can range in size from trillions of very small sensor files to a few very large files from video cameras. The industry will tell you that object storage is the way to go here, and it very well may be. But we encourage you to use NFS first. It takes a lot of data to exceed the maximum potential of a modern NFS server. Again, the storage solution should not force you to make a choice.

At the other end of the file spectrum is high performance data that you need to access rapidly or a process that needs to write data quickly. For this, NFS is ideal. It is a high performance file system and with the appropriate use of flash delivers the performance that these use cases demand.
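As a rough summary of the guidance above, here is a sketch of the use-case-to-protocol mapping discussed in this post. The use-case names and the lookup itself are simplifications for illustration, not an exhaustive or authoritative decision matrix:

# Sketch: the protocol guidance from this post expressed as a simple lookup.
# Use-case names and mappings are illustrative simplifications.

PROTOCOL_BY_USE_CASE = {
    "vmware_datastore":             "NFS",    # easy per-VM mapping to datastores
    "low_latency_mission_critical": "FC",     # non-virtualized, latency sensitive
    "general_block_workload":       "iSCSI",  # block access, latency less critical
    "windows_user_files":           "SMB",    # office/user data on Windows clients
    "machine_generated_files":      "NFS",    # consider Object only at very large scale
    "high_performance_files":       "NFS",    # fast access, backed by flash
}

def suggest_protocol(use_case: str) -> str:
    """Return the suggested protocol, or a fallback for unknown use cases."""
    return PROTOCOL_BY_USE_CASE.get(use_case, "evaluate per workload")

if __name__ == "__main__":
    print(suggest_protocol("vmware_datastore"))  # NFS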

Conclusion

As you may have noticed, NFS is the most appropriate choice in the majority of these use cases, but not all of them. We think your storage solution should not force you into a specific protocol; you should choose the one that makes sense for your specific use cases. And that’s why we built NexentaStor.
