Nexenta Blog

Global Leader in Software Defined Storage

Software-defined data center

Raise Your SDS IQ (3 of 6): Practical Review of Scale-out

June 7, 2016 By Nexenta

This is the third of six posts (the last one was Scale-up Software-Only “SDS”) where we’re going to cover some practical details that help raise your SDS IQ and enable you to select the SDS solution that will deliver Storage on Your Terms. The third SDS flavor in our series is Scale-out.

Scale-out is a fundamentally different approach from scale-up; with scale-out, multiple head nodes can be attached over the network to dramatically increase scalability. This is a broad category, and solutions could be either vendor-defined / hardware-based (think EMC’s Isilon) or software-only (Nexenta’s NexentaEdge); while we’d consider the software-only approach to be the SDS version, the technical benefits of either type of scale-out are similar. You use low-latency networking to connect as many nodes as you want and form a cluster that provides storage services to applications as a unified namespace.

The scale-out approach works well for archive or Web 2.0 application use cases. Scalability is top notch, because you can start small and grow just by adding nodes. While it provides the performance needed to handle huge capacities, there is an important dependence on the network: the quality of your gear will significantly impact performance because of the amount of communication between nodes, which may mean that your IOPS aren’t great.
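As a rough, back-of-the-envelope illustration of that trade-off, here is a toy model (all node and network figures are hypothetical, not measurements of any product): aggregate capacity and throughput grow linearly as you add nodes, while per-I/O latency picks up a network hop and therefore does not improve.

```python
# Toy model of scale-out behavior; every number here is a made-up example.

def cluster_profile(nodes, tb_per_node=100, mb_s_per_node=1000,
                    local_latency_ms=0.5, network_hop_ms=0.2):
    """Return aggregate capacity (TB), aggregate throughput (MB/s) and per-I/O latency (ms)."""
    capacity_tb = nodes * tb_per_node        # capacity scales linearly with node count
    throughput_mb_s = nodes * mb_s_per_node  # so does aggregate bandwidth
    # An I/O often has to cross the cluster network to reach the node that owns
    # the data, so latency (and single-stream IOPS) does not improve with scale.
    latency_ms = local_latency_ms + network_hop_ms
    return capacity_tb, throughput_mb_s, latency_ms

for n in (4, 16, 64):
    cap, tput, lat = cluster_profile(n)
    print(f"{n:>3} nodes: {cap} TB, {tput} MB/s aggregate, ~{lat:.1f} ms per I/O")
```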

The flexibility of scale-out SDS is generally good, but protocol support is currently limited. Often the maturity of the platforms themselves limits your flexibility; for example, you can’t use Exchange to write to an object back end. Likewise, object-oriented applications won’t work with some back ends either. Protocol support considerations also impact the cost effectiveness of scale-out: they may restrict your hardware choices and lock you in to more expensive purchases.

Overall grade: B

See below for a typical build and the report card:

[Image: typical build and report card]

Is 2016 the year of the software-defined data center? Part 2

May 20, 2016 By mletschin

In the last blog we determined that the world has changed since the early 2000s and that x86 has won the server war, now giving you a chance to have vendors compete for you. But that’s only part of the data center — we still have to look at the rest of the infrastructure.

Networks that can shift and deliver at the same time

In the past, if you wanted to have the fastest network infrastructure you had to buy from one or two specific vendors. In reality, they still have a huge share of the networking market, but we’ve seen a transition in recent years.

That being said, it is not just about the software layer. With scale-out systems, the network has become critical for more than just inter-rack and inter-data-center connectivity; it is now critical for processing power and speed as well. The move from 1GbE to 10GbE to 40GbE, and even up to 100GbE, is allowing larger amounts of traffic than ever before to move around the globe. This is all done at a scale that can give even the smallest enterprise access to faster connectivity, but it still comes at a cost.

While 10GbE is starting to become commonplace, it is still a much more expensive option than 1GbE and not something that has trickled down to the end user yet. The true question is, does it need to? If the data center is driven by commodity hardware that is governed by software, and all the compute happens there, do we really need to be faster at the endpoint? To answer that, we will have to wait and see.
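For perspective on what those link speeds buy you, here is a quick back-of-the-envelope calculation (assuming ideal line rate and ignoring protocol overhead) of how long it takes to move 1 TB at each speed:

```python
# Time to move 1 TB at various Ethernet line rates (ideal line rate, no overhead).
TB_IN_BITS = 1e12 * 8  # one terabyte expressed in bits

for gbps in (1, 10, 40, 100):
    seconds = TB_IN_BITS / (gbps * 1e9)
    print(f"{gbps:>3} GbE: {seconds / 60:6.1f} minutes to move 1 TB")
```

Roughly two hours at 1GbE shrinks to a little over a minute at 100GbE, which is why scale-out east-west traffic puts so much pressure on network upgrades.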

The brains behind the brawn

The No. 1 buzzword of the last few years has been “software-defined,” and it rings true for the transition we have seen in the data center over the last 10 years. The first step is compute, with virtualization taking its place as a stalwart to provide the processing power needed on commodity hardware.

The next two parts that need to move to the software-defined space to allow for a commodity-driven data center are storage and networking. Some would argue that we are already there, with executives from major storage vendors making statements as early as 2010 that “we all run on commodity hardware.”

In reality, you are normally limited to the hardware that a vendor chooses. Moving into a software-defined model means we have choice in vendors and can truly liberate the storage needs from the hardware vendor’s grasp. The growth of open source-based solutions has helped this along the way, and we will see if 2016 is the year we take the leap to make software-defined the norm.

Networking, on the other hand, is the last step forward. You cannot buy a blank switch off the shelf and just layer on the software of your choosing and make it work at the speed and capacity that you desire, but we are starting to see a transition toward this. The combination of software-defined solutions is what gives companies the agility to transition to the enterprises of tomorrow, providing the ability to custom-fit a solution into the business instead of the other way around. It appears that software-defined is more than just a buzzword.

So is it the year?

My gut says the commodity-based data center is not ready to go mainstream, but for intrepid companies willing to source the right gear, it may be. We have seen so many changes in the data center since the ’90s and the early 2000s that I have to believe the pace of change will continue.

Before we cross into 2020, the data center will be in the domain of the business again, not the vendors. That will be the commodity data center that anyone who has managed a data center in the last 20 years has been dreaming about.

Three Dimensions of Storage Sizing & Design – Part 1: Overview

May 16, 2016 By Nexenta

By Edwin Weijdema, Sales Engineer Benelux & Nordics

Why do we build ICT infrastructures? What is the most important part of those infrastructures? It is data! In the digital era we need to store bits of information, so-called data, so we can process, interpret, organize or present it as information by providing context and adding value to the available data. By giving meaning and creating context around data we get useful information. We use data carriers to protect precious data against loss and alteration. Exchanging floppies and tapes with each other was not efficient or fast, so storage systems came into play to make it easier to use, protect and manage data.

Networks, and especially the internet, connect more and more data and people. Designing and sizing storage systems that hold and protect our data can be a bit of a challenge, because there are several different dimensions you should take into consideration while designing and sizing your storage system(s). In the Three Dimensions of Storage Sizing & Design blog series we will dive deeper into the different dimensions around storage sizing and design.

Use

In this digital era we are using and creating more data than ever before. To put this into perspective, in the last two years we created more data than in all the years before combined! Social media, mobile devices and smart devices combined in the internet of things (IoT) accelerate the creation of data tremendously. We use applications to work with different kinds of data sources and to help us streamline these workloads. We create valuable information from the available data by adding context to it. A few examples of data we use, store and manage are documents, photos, movies, applications and, with the rise of virtualization and especially the Software Defined Data Center (SDDC), complete infrastructures in code as a backend for the cloud.

Workloads

To make it easier and faster to work with data, applications have been created. Today we use, protect and manage many different kinds of workloads that run on our storage systems. The amount of work expected to be done in a specific amount of time has several characteristics that define the workload type:

  1. Speed – measured in IOPS (Input/Output Operations Per Second), defines the number of I/Os per second. Read and/or write I/Os can follow different patterns (for example, sequential or random). The higher the IOPS, the better the performance.
  2. Throughput – measured in MB/s, defines the data transfer rate, also often called bandwidth. The higher the throughput, the more data can be processed per second.
  3. Response – measured as latency in ns/µs/ms, defines the amount of time an I/O needs to complete. The lower the latency, the faster a system/application/data responds and the more fluid it feels to the user. Latency can be measured at many points along the data path.

Which characteristic is the most important metric depends heavily on the type of workload. For example, stock trading is very latency dependent, while a backup workload needs throughput to fit within a backup window. If you know the workloads that will use the storage system(s), you can size and design the system accordingly. Knowing what will use the storage is the first dimension for sizing and designing storage systems correctly.
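These three metrics are linked: at a given block size, throughput is roughly IOPS multiplied by block size, and the IOPS a single stream can sustain is bounded by its latency and queue depth. A minimal sketch with made-up example numbers:

```python
# Rough relationships between IOPS, throughput and latency (illustrative numbers only).

def throughput_mb_s(iops, block_size_kb):
    """Throughput is approximately IOPS multiplied by the block size."""
    return iops * block_size_kb / 1024

def max_iops(latency_ms, queue_depth=1):
    """A stream with this latency and queue depth completes at most this many I/Os per second."""
    return queue_depth * 1000 / latency_ms

# Hypothetical 4 KB random-read workload at 0.5 ms latency and queue depth 32.
iops = max_iops(latency_ms=0.5, queue_depth=32)
print(f"~{iops:,.0f} IOPS -> ~{throughput_mb_s(iops, 4):,.0f} MB/s at 4 KB blocks")
```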

Protect

A key reason to use storage systems is to protect the data we have. We must know the level of insurance the organization and its users need and are willing to pay for, so we can design the storage system accordingly. We need to know the required usable capacity combined with the level of redundancy, so we can calculate the total amount of capacity needed in the storage system. Always ask yourself whether you want to protect against the loss of a single system, an entire data center, or even a geographic region, so you can meet the availability requirements:

  1. Capacity – measured in GB/TB/PB or GiB/TiB/PiB: the amount of usable data we need to store on the system and protect.
  2. Redundancy – measured in the number of copies of the data or metadata from which the original data can be rebuilt, e.g. parity, mirroring or multiple object replicas.
  3. Availability – measured in RPO and RTO: the maximum amount of data we may lose, called the Recovery Point Objective (RPO), and the maximum amount of time it may take to restore service levels, called the Recovery Time Objective (RTO).


Knowing how much data we must protect, and against which defined protection/service levels (SLAs), gives us the second dimension for sizing and designing the storage system correctly.
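As a simple illustration of the capacity-plus-redundancy calculation described above, the sketch below converts a usable-capacity requirement into raw capacity for a few common protection schemes (the overhead factors are simplified examples, not figures for any particular product):

```python
# Raw capacity required to deliver a given usable capacity under different
# protection schemes (simplified overhead factors, for illustration only).

PROTECTION_OVERHEAD = {
    "2-way mirror":    2.0,     # every block stored twice
    "3-way mirror":    3.0,     # every block stored three times
    "parity 8+2":      10 / 8,  # 8 data devices plus 2 parity devices
    "3 object copies": 3.0,     # object store keeping three replicas
}

def raw_capacity_tb(usable_tb, scheme):
    return usable_tb * PROTECTION_OVERHEAD[scheme]

usable = 500  # TB of usable capacity the organization requires
for scheme in PROTECTION_OVERHEAD:
    print(f"{scheme:>15}: {raw_capacity_tb(usable, scheme):7.1f} TB raw for {usable} TB usable")
```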

Manage

Working with storage systems and designing them takes some sort of future insight, or, as some colleagues like to call it, a magic whiteboard. How is the organization running, where is it heading and what are the business goals it wants to accomplish? Are business goals changing rapidly, or will they be stable for the foreseeable future? Those are just a few examples of questions you should ask. Another very important metric is the available budget. Budgets aren’t limitless, so designing a superior system at any price will not work!

  1. Budget – an estimate of income and expenditure for a set period of time. What is the amount of money that can be spent on the total solution, and, if departments are charged back, what is the expected income and how is it calculated, e.g. price per GB/IOPS/MB/s, price per protection level (SLA) or a mix of several? Storage efficiency features that reduce the amount of data stored should also be taken into account.
  2. Limitations – know the limitations of the storage system you are sizing and designing. Every storage system has a high-watermark beyond which performance is impacted as the system fills up.
  3. Expectations – how flexible should the system be and how far should it scale?

It’s all about balancing cost and requirements and managing expectations to get the best possible solution within the limitations of the storage system and/or the organization. Managing the surroundings of the proposed storage solution gives us the third and final dimension for sizing and designing storage systems.
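Pulling budget, the fill-level limitation and efficiency expectations together, here is a minimal sketch (hypothetical prices, an assumed 80% high-watermark and an assumed 2:1 data-reduction ratio) of how they combine into an effective price per GB:

```python
# Effective price per GB once the high-watermark and data reduction are taken
# into account. All figures are hypothetical examples.

def price_per_effective_gb(total_cost, raw_gb, high_watermark=0.80, data_reduction=1.0):
    """Cost divided by the capacity you can actually use after reduction."""
    fillable_gb = raw_gb * high_watermark        # stay below the performance cliff
    effective_gb = fillable_gb * data_reduction  # compression/dedup stretch capacity
    return total_cost / effective_gb

# Example: a $120,000 solution with 500 TB raw, an 80% fill limit and 2:1 data reduction.
price = price_per_effective_gb(120_000, 500_000, high_watermark=0.80, data_reduction=2.0)
print(f"${price:.3f} per effective GB")
```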

Overview

Sizing and designing storage systems is, and will always be, a balancing act of managing expectations, limitations and available money to get the best possible solution, so the proposed workloads will run with the correct speed, throughput and responsiveness while fulfilling the defined availability requirements. With the rise of cloud and Infrastructure as a Service (IaaS) vendors, organizations increasingly just buy a service. Even if that’s the case, it helps tremendously in selecting the correct service when you understand how your data is used and against which protection levels, so you can manage it accordingly.

To get the best bang for your buck, it helps to understand how to determine the three dimensions correctly, so you can size and design the best solution possible. In the next parts of this blog series we will dive deeper into the three dimensions of storage sizing and design: Use, Protect and Manage. In Part 2 we will dive into the first dimension, Use, by determining workloads.

Is 2016 the year of the software-defined data center?

May 7, 2016 By mletschin

Even with my 20 years of experience working in technology, I am just a baby compared to some people I have encountered along the way.

Still, I sometimes think about the “old days” in IT. It was a time when data centers were dominated by mainframe systems and midrange monoliths like the AS/400. A time when virtualization was something done only on these big systems, and the only need for an x86 system was under your desk to access a mainframe when you didn’t have a classic thin client.

When my career in IT really began, I was working with systems like Windows NT and Exchange 5.5, and we were creating Web pages with this mysterious language called HTML that magically made things appear. Even in those days, there was this thought that there was no need for companies to be beholden to the largest vendors.

There was a notion/belief in the early 2000s that with the growth of VMware, eventually the data center would become a blend of multiple vendors and companies would be able to take back their own infrastructure, enabling them to control the way they supported their business. This transition marked the evolution from the traditional data center to the commodity-based data center, driven by business needs and created using software, rather than the monolith systems that big vendors charged exorbitant amounts for.

Fast-forward to 2016

Are we there yet? Not quite, but this could be the year. Most enterprises have nearly 90% of their systems virtualized on VMware, Xen or KVM, managed by newer solutions like OpenStack. 2016 could finally be the year we see the commodity-based data center become reality.

To make this happen, there are so many components that need to be in the right place. We need to see x86 servers with enough power to support large database systems — check. We need the ability to choose between multiple vendors that all produce a similar-quality product, but differentiate themselves with their support and supply chain — check. We need to have networks that can handle multiple systems and the communication between them, without losing packets and without being based on a single large infrastructure — check. (Well, sorta . . . we will get to that one in a few.) Finally, we need software to drive all these systems and define the solutions, whether it is storage, compute or even networking.

We are just about there and are seeing an influx of companies that can provide these services, but can they unseat the incumbent large, publicly traded vendors that look to buy up all the small fish? That just may be the key to 2016 being the year of change for the data center we have all dreamed about since we walked away from worrying about what happens when the clock strikes midnight for the new millennium.

Over the next couple of blogs I will take a look at each of the categories and see if we are really as close as we think to 2016 being “The Year,” starting with the server and compute side.

Won the battle and the war

There is little doubt that the data center is now dominated by x86 server hardware. The hardware can run with multiple CPU manufacturers, and it has more cores than we could imagine in the ’90s and memory levels that exceed what helped put a man on the moon. All of this hardware supports everything from large single applications to scale-out big data apps that let us analyze every picture of kittens posted on Instagram or Facebook in the last 24 hours.

Let the vendors compete for you

Most users probably have their preference now in server vendors, and it is all about the extras these companies provide. It is not about the components within the servers. Most network cards are from one or two companies, and most processors are from one of two companies — you may have hard drives from multiple firms, but overall it’s just different-shaped, bent metal with a different label on the front.

The true differentiator for vendor choice comes in the form of supply chain and parts availability. The supply chain has begun to collapse around fewer global vendors, making it easier to price match between them. If you are only looking for regional or local support, that too has limited choice, and all the vendors know it. The choice is now truly in the hands of the consumer and, if planned out, can result in a race to the bottom on price, along with increased “hand-holding” when the equipment arrives.

Nexenta has joined Intel’s new Storage Builders Program for Open Software-Defined Storage

April 19, 2016 By Nexenta

by Nexenta

Earlier in April, Intel hosted its Intel Data Center Group (DCG) Cloud Day. The event brought together its vast network of industry partners across the cloud and networking space, as well as journalists, analysts and other influencers, to discuss the industry’s progress in the adoption of Software-Defined Infrastructure (SDI), highlighting the importance of Software-Defined Storage (SDS) to SDI. As part of this discussion, Intel announced it would be launching a Storage Builders Program – an extension of the Intel Builders Program.

We were delighted to be invited to join the exclusive group of vendors supporting the program, many of which we’ve worked with before, and we’re excited to extend those partnerships further.

The program is a cross-industry initiative designed to increase ecosystem alignment, ignite innovation, reduce development efforts, lead open storage standards development and accelerate adoption of intelligent, cost-effective and efficient SDS.

There is no doubt that SDS has become an increasingly important market segment, and the introduction of this program is yet more evidence of that fact. Organizations across the world are finding that their legacy storage systems just aren’t up to scratch, cost far too much and, on top of that, tie them to a single vendor. At Nexenta, our long-term mission has been to make storage open, and we’re excited that Intel has the same future in mind.

As a participant in the program, we’ll continue to build on our current OpenSDS offerings, but will have the added benefit of using Intel’s expertise to create joint solutions for customers. We’ll have the ability to collaborate with other Intel Storage Builders members to drive broader market adoption of software-defined technologies across the data center and seriously accelerate traction in the storage market.

Our participation in the Intel Storage Builders program supplements our existing and continued support of all Intel server providers including Cisco, Dell, Hewlett Packard Enterprise (HPE), Inspur, Lenovo, SuperCloud and Supermicro.

“Supermicro is raising the bar on performance and density with the Industry’s broadest portfolio of all-flash NVMe storage solutions and complete support for Intel’s latest Xeon E5-2600 v4 processors across our server platforms,” said Don Clegg, VP of Marketing and Business Development at Supermicro. “As a member of Intel Storage Builders with longstanding partner Nexenta, we are developing and deploying next generation software-defined storage solutions which deliver the most innovative, highly scalable, hyper-converged infrastructure, with lowest overall TCO.”

“We are proud to join Nexenta and select others as part of the Intel Storage Builders program. We’ve worked with Nexenta for a long time in supporting our mission to provide customers with the industry’s widest-range of tested and validated OpenSDS solutions,” said Travis Vigil, Executive Director Product Management at Dell Storage. “We are excited to continue working together along with Intel to produce high-end results for our customers.”

“Nexenta’s solutions will enable our customers to deploy flexible, software-defined storage environments on purpose-built HPE Apollo 4000 big data servers,” said Susan Blocher, Vice President of Compute Solutions at Hewlett Packard Enterprise. “The collaboration between HPE, Nexenta and Intel ensures our customers benefit from the combined innovation and high-quality of our next generation solutions.”

“Nexenta is a strong partner in supporting our growth in the enterprise storage market,” said Stuart McRae, Director of Product Marketing for the Lenovo Storage Business Unit. “At Lenovo, we strive to bring innovative customer value to our enterprise customers, and together with Intel and Nexenta, we will continue to expand our reach to bring new economics to data center storage.”

“We are pleased to partner once again with Nexenta, as they continue to support our company’s vision of shaping the future of software-defined storage by creating unprecedented value and opportunities for our customers and partners,” said Philippe Vincent, CEO at Load DynamiX. “Ensuring that software defined storage will perform and scale to enterprise data center requirements is a key focus of Load DynamiX. We are confident that, together with other Intel Storage Builder partners, we can achieve these goals.”

“Nexenta has proven to our customers the joint benefits of SDS in conjunction with our expertise in cloud computing hardware, software and services,” said Yuzhen Fang, CEO at SuperCloud. “We look forward to continuing development of these solutions with both Nexenta and Intel.”

“Nexenta and VMware have worked together for many years to provide mutual customers with flexible storage solutions to enable increased efficiency and availability, while lowering costs,” said Howard Hall, senior director, Global Technology Partnering Organization, VMware. “We look forward to working with both Nexenta and Intel to create additional value and drive broader adoption of software-defined technologies.”

For more information, visit: Storage on Your Terms: Nexenta Software Defined Storage with Intel

 
