March Madness of Home Brews Community Contest

They say it takes a village to raise a child, and at Nexenta we believe technology start-ups are similar to children. The community has been a critical part of Nexenta’s success, and we want to share our community members’ excitement with the world. To do this, we are kicking off monthly contests to showcase the growth, support and amazing ingenuity of the Nexenta Community.

The first of these contests begins today with the March Madness of Home Brews!  We ask that you submit a brief description of your home storage kit, along with a picture.  After all, a picture is worth 1000 words.  To coincide with the NCAA men’s basketball tournament, we will create a bracket of the entries for the community to vote on until we narrow it down to the final four and a champion.

The final four will each receive a gift set of Nexenta swag and be showcased as a featured build on the Nexenta Community website. The winner will receive an even larger kit that includes a custom-branded Nexenta basketball.

We are excited to see how creative you have all been.  Make sure to get your entry submitted no later than midnight PST on March 16th.  We will kick off voting on the 17th with the first round, just like the basketball tournament. Good luck!

To register, please click on the following link and fill out the form:

*Note: To win, you must be a registered member of the Nexenta Community.

Posted in Uncategorized

Citrix and Nexenta Deliver Flexible and Cost-efficient Software-Defined Approach to VDI

By Michael Letschin, Director of Product Management, Solutions

We had a great time at Citrix Summit 2015 and had a chance to talk to partners about our new converged infrastructure validated solution stack for VDI workloads using Citrix XenServer, XenDesktop and NexentaStor.

Built on industry-standard x86 architecture, this economically viable, converged and integrated solution is geared toward small and mid-sized businesses looking to embrace and benefit from the security and efficiencies of desktop virtualization and Software-Defined Storage. NexentaStor brings dramatic economic benefits through substantially higher performance, a lower cost per terabyte and a pre-integrated converged infrastructure stack based on Citrix XenDesktop MCS.

The solution starts at a minimal 4U of physical space and can present 355 desktops at full workload.  The Citrix XenDesktop and Nexenta converged architecture easily scales to provide up to 1,000 desktops in only 6U, a 25% density improvement compared to EVO:RAIL. The solution minimizes storage latency with a cache hit ratio of over 98 percent, giving the fastest possible end-user experience.

Over the past few years, we’ve collaborated with Citrix to produce multiple cost-efficient, flexible and scalable infrastructures for customers to build and scale VDI workloads. NexentaStor has proven to deliver the best performance and value on hybrid or pure storage configurations, as part of the Citrix Ready VDI Capacity Validation Program. This is just another step in the collaboration.

“Citrix sees great value for customers from the integration of Citrix XenDesktop and NexentaStor,” said Calvin Hsu, vice president of product marketing, Desktop and Apps, Citrix. “The end result is that our existing customers and prospects will benefit from the flexibility and cost-effectiveness of this solution.”

Details of this solution can be obtained by downloading the Citrix Validated design reference architecture.

Additional Resources: Nexenta Delivers a Converged Infrastructure Citrix Ready Validated Solution Stack

Posted in Uncategorized

2015 Predictions: What Goes Around Comes Around

by Michael Letschin, Director of Product Management, Solutions, Nexenta

Everything in culture has a way of repeating itself; it happens in every arena of our lives.  In fashion, we treat items as vintage, whether they come from the ’70s or the Roaring ’20s.  In music, artists like Justin Timberlake hark back to the days of early Michael Jackson, and some artists today are viewed as a modern-day Rat Pack from Sinatra’s era.  Technology is no different, and as we move into 2015, life is in fact repeating itself.  I have spent nearly 20 years in technology, starting with mainframes and green-screen clients, followed by the shift to the x86 server. Over the past few years we have seen virtual servers become mainstream, bringing us back to a centralized server setup, and as virtual desktops gain traction we move toward thin clients and back to what I remember from growing up: simplicity and efficiency.

What have we learned from all this?  The importance of versatility, simplicity and efficiency.  Over the past few years we have heard buzzwords drive technology decisions, but now that IT departments have finally shrunk to the point where you can’t “do any more with less”, CIOs have a choice: outsource all their products, or go with something that makes life easier for the staff they have.  The efficiency comes not only from simplicity but also from economics: you pay for a service the way you would for electricity.  During 2014 we talked of Software-Defined Data Centers, but I have yet to see a single enterprise truly adopt the notion that hardware is not the answer.  Deploying hardware in the traditional sense is falling by the wayside; with software controlling the hardware, the “bent metal” is no longer the treasure.  Add in the idea that an enterprise can have the freedom to deploy its choice of hardware, shedding the proprietary upgrades and processes of the past, and we move toward the software-defined future.

2015 will begin with more and more enterprises adopting the idea that hardware independence lets their staff be more efficient by concentrating on the software and letting the hardware vendors compete for their business.  The rise of DevOps will continue to make datacenters simpler and more automated.  Projects like OpenCompute can finally gain traction in the enterprise as hardware is bought simply as a platform, regardless of whether the need is for servers, storage or networking.  Software-Defined Storage will continue to grow in the enterprise as IT staff realize they can no longer support the complexity of forklift migrations just to gain some speed.  Software-Defined Networking has been lagging over the past year or so, but the need for efficiency will surely allow networking teams to build the global enterprise.

We used to say that no one got fired for buying IBM; now even IBM is a services company buying from cloud-based services.  What goes around comes around, and the giants of the IT industry in 2015 may just end up being the users and admins, not the hardware vendors of the last decade.

Posted in Corporate, Software-defined storage

Epidemics at the Speed of Software

By Michael Letschin, Director, Product Management, Solutions, Nexenta

In today’s global economy and 24/7 news culture, word of health risks spreads faster than ever before. If a child is diagnosed with a rare disease in Asia at 10 AM, the case could be sent to the US Centers for Disease Control and reported on US news outlets in time for the 11 o’clock news (a 2-hour time lapse). But even that is slow compared to how quickly it could spread through social media. This is not to say that health concerns are sensationalized, or that they should not be treated with the utmost urgency and concern.

The scare over Ebola has shed new light on how quickly news and information flow around the globe. It also showcases how quickly NGOs (non-governmental organizations) can be spun up around an issue. A quick search on the USAID web site lists 60 NGOs responding to the Ebola crisis. These organizations range from religious groups to relief groups to groups dedicated to specific continents. Some are smaller, scarcely known organizations, like BRAC (creating ecosystems in 11 countries in which the poor have the chance to seize control of their own lives), while others are among the largest organizations in the world, like the Red Cross. One thing that ties all of these groups together is their need to keep up to date on news and information, which is highly dependent on technology. The same technology networks that spread panic and concern via the news and social media also transport critical, life-saving information to these organizations in need.

The worldwide growth of this need-to-know data is exponential. Some Gartner reports have shown that enterprise data is growing at rates of 40-60% year over year. If enterprises are growing this fast, you can only imagine the growth rate of data collection during a health crisis. This explosion of data is also what drives innovation, enabling organizations to move away from legacy systems that slow down data accessibility.

A key innovation in the effort to easily track the spread of disease is the Software-Defined Data Center (SDDC). The SDDC changes the game by providing software-based solutions in which any hardware can be repurposed as needs change. In a time of crisis, these Software-Defined solutions help ensure your data includes the most up-to-date trends for the next airborne illness.

While server virtualization has made a huge impact by consolidating compute power in the data center, storage has previously lagged behind. Now, Software-Defined Storage, a robust and scalable storage solution, can be deployed on existing hardware, allowing organizations to rapidly analyze complex issues. For instance:

Imagine being a researcher or doctor in a remote location. You have critical information that could reveal trends in treatment practices, but no place to store the millions of data points that have been collected on paper. You also have no way to obtain or install traditional, limited legacy hardware solutions, nor a place to power and house them. The easy answer is to use existing industry standard hardware and deploy a software solution. I am in no way saying that the Software-Defined movement is going to save the world in a health crisis, but I hope that NGOs and world leaders see that Software-Defined technology can lead to more cost-effective solutions and a faster time to market. And, hopefully, a faster time to cure.

To hear more about the benefits of Software-Defined Data Centers and Storage easing health crises, please join Forrester, VM Racks and Nexenta on 12/16 at 8am PT. Click here to register for this webinar.

Posted in Corporate, Software-defined data center, Software-defined storage

Big Data Meets Software-Defined Storage at

Allison Darin, Director of Communications & Public Relations, Nexenta Systems, Inc.

Big Data and Software-Defined Storage go hand in hand as part of the NexentaStor implementation for – the world’s leading Secure Private Cloud Service for the entire Big Data Pipeline – a market leader in the innovative area of “high performance data analysis” (HPDA) for industries including media and entertainment, life sciences, financial services, oil & gas, engineering and more.

They recently added Software-Defined Storage to their IT toolkit through their deployment of NexentaStor and other leading Software-Defined Data Center solution providers including Brocade, SanDisk (FusionIO) and VMware. Take a look at our new case study here for lots more information.

Posted in Corporate, Software-defined storage

Managing Dynamic Storage Demands at ServerCentral

Allison Darin, Director of Communications & Public Relations, Nexenta Systems, Inc.

Scalability and economics go hand in hand as part of the NexentaStor implementation for ServerCentral. ServerCentral has been delivering managed data center solutions since 1999 for customers such as Ars Technica, CDW, DePaul University, Discovery Communications, New Relic, Outbrain, Shopify, TrueCar, and USG.

They recently added software-defined storage to their IT toolkit through their deployment of NexentaStor. Take a look at our new case study here for lots more information.

Posted in Uncategorized

Software-Defined Storage – Vertical Sectors Sharing the Love

Allison Darin, Director of Communications & Public Relations, Nexenta Systems, Inc.

First, the good news. We are in a year that promises to shake up the storage world with open source Software-Defined Storage (SDS) solutions revolutionizing the market and helping today’s organizations build toward the Software Defined Enterprise (SDE) of the future.

Even better, the SDS revolution, spearheaded by Nexenta, is spreading to more and more vertical markets and countries. And while the institutions and organizations adopting an SDS approach have made their choice for a number of reasons, they all have one thing in common: they’re very happy with what it delivers. But don’t take our word for it – read on for the stories of several customers and their journey towards becoming Software Defined.

In Germany, regional energy supplier Stadtwerke Tuttlingen (SWT) provides electricity, gas, and water utilities to over 34,200 residents across 980 square miles in the South West area of the country.

SWT’s existing storage solution was proving inadequate to run multiple high performance databases (Oracle, MSSQL) and incapable of supporting a planned VDI deployment. Searching for a storage system that offered high performance, flexibility and high availability, SWT found NexentaStor.

One of the key selling points for SWT was the simplicity of Nexenta’s products. There was no need for complex licensing and NexentaStor provided features such as unlimited snapshots, thin provisioning and hybrid storage pooling that helped SWT to implement cost-effective storage with high performance.

In the UK, the University of Sussex turned to Nexenta when its original solution started to approach end of life. The IT department at the University provides support to the entire university – 13,000 students and over 2,100 staff – with a single home directory service that enables users to access files from whatever device and operating system they are using, wherever they are on the campus.

NexentaStor was selected because of its flexibility, scalability and attractive economics. It delivered the performance the University sought, and paved the way for sustainable expansion. The stability, scalability and performance of Nexenta has proven so effective that it has prompted the IT department to look at other campus storage systems and further opportunities to consolidate, increase speed and grow efficiently.

In the media industry, London-based boutique VFX organization BlueBolt was searching for an adaptive storage solution to manage the growing workload of its creative and technical staff and to keep up with the evolving demands of the industry.

BlueBolt has provided the visual effects for many award-winning productions including the BBC’s Great Expectations, Game of Thrones (Season 1) and Mandela – Long Walk to Freedom. Needing to implement a secure, reliable, scalable storage solution, the company chose Nexenta’s SDS solution to centralize and manage its storage infrastructure and replace the various storage platforms in situ.

Best of all, Nexenta provided all of the features BlueBolt expected from an enterprise storage platform while remaining one of the most cost-effective solutions on the market.

There are many advantages for organizations like SWT, University of Sussex and BlueBolt that opt for Nexenta’s SDS approach. They escape from vendor lock-in, total cost of ownership (TCO) is radically improved, performance gets significantly better and the result is true scalability. In other words, it’s future-proof.

Nexenta helps customers bypass the MESS (Massively Expensive Storage Systems) produced by traditional vendors and concentrate on growing and building their business – boosting productivity and customer service. It’s a win-win for everybody.

No wonder customers in industries as diverse as media (BlueBolt), energy supply (SWT) and education (the University of Sussex) are turning to Nexenta, the global leader in SDS, to deliver easy to use, secure and ultra low cost storage software solutions.

By now, some of you are probably starting to ask: That’s great, but what happened to the bad news? Good news everybody, there isn’t any.

Posted in Corporate, Software-defined storage

Managing Software-Defined storage for your virtualized infrastructure just got a whole lot easier.

Nexenta is proud to announce our first vCenter Web Client plugin to support the NexentaStor platform. The NexentaStor vCenter Web Client Plugin supports vSphere 5.5 and NexentaStor 4.0.3, providing integrated management of NexentaStor storage systems from within vCenter. The plugin allows the vCenter administrator to automatically configure NexentaStor nodes via vCenter.

VMware administrators can provision, connect, and delete storage from NexentaStor to the ESX host, and view the datastores within vCenter.

Not only can you provision the storage; managing it is also simple, with integrated snapshot management.

The plugin also allows for deeper analytics and reporting on the storage through vCenter, as detailed below.

Check out the screenshots below, and download the vCenter Web Client Plugin today from the NexentaStor product downloads page.

General details about Storage:

  • Volume Name
  • Connection Status
  • Provisioned IOPS
  • Provisioned Throughput
  • Volume available and used space

Storage Properties

  • Datastore name
  • NFS Server IP address
  • Datastore Path and capacity details

Datastore Properties:

  • Total capacity
  • Used capacity
  • Free capacity
  • Block size
  • Datastore IOPS, Throughput, and Latency

Snapshot Management:

  • List existing snapshots
  • Create new snapshots
  • Clone existing snapshots
  • Restore to a snapshot
  • Delete snapshots
  • Schedule snapshots

End-to-End Datastore Provisioning:

  • Create a new volume on the storage array
  • Attach the volume to the host as a datastore

Posted in Virtualization

Accelerate your Horizon 6 deployment with NexentaConnect 3.0!

Nexenta is proud to announce the general availability of NexentaConnect 3.0 for VMware Horizon (with View).  This VDI acceleration and automation tool provides increased desktop density and higher I/O performance for existing storage deployments as well as greenfield VDI solutions.  NexentaConnect 3.0 introduces many new features and enhancements, including:

  • Full support for VMware Horizon 6
  • Pass Through Support for VMware GPU
  • Import Horizon View desktop pools
  • Fast desktop pool restoration from backup

Combining all these great new features allows you to now accelerate and grow your existing Horizon deployment, which may have been limited by traditional storage solutions.

To learn more about NexentaConnect for VMware Horizon go to and download the 45-day free trial.



Posted in Corporate, Software-defined data center, Software-defined storage, Virtualization

Nexenta Launches Revolutionary Software-Defined Scale Out Object Storage Solution for OpenStack and Big Data Infrastructures

NexentaEdge – Taking Nexenta’s ZFS DNA to Cloud Scale

Thomas Cornely, Chief Product Officer, Nexenta

VMworld 2014 in San Francisco promises to be an incredible event for Nexenta. In addition to our OpenSDx Summit on 8/28, as a VMworld 2014 Platinum Sponsor we’re gearing up for a slew of new product announcements and demos. We’re particularly excited about the opportunity to showcase NexentaEdge, the latest addition to our Software-Defined Storage portfolio, specifically targeting the petabyte scale, shared nothing, scale-out architectures required for next generation cloud deployments. NexentaEdge is a software only solution deployed on industry standard servers running Ubuntu, CentOS or RHEL Linux. Version 1.0 will support iSCSI Block services with OpenStack Cinder integration, Swift and S3 Object APIs as well as a Horizon management plugin. File services are part of our design and will be delivered in follow-on versions.

If you’re familiar with Nexenta, you know all about NexentaStor, our unified block and file Software-Defined Storage solution built on the ZFS file system. What made ZFS great were some key design choices that took advantage of emerging trends in the hardware landscape. Things like Copy On Write, delivering guaranteed file system consistency at very large scale, as well as high performance unlimited snapshots and clones. Things like block level checksums, trading increasingly cheap CPU cycles for way more valuable end-to-end data integrity. This strong technology heritage, paired with Nexenta’s continued investment in performance, scale and manageability, has led service providers across the globe and a growing number of enterprise customers to rely on NexentaStor as the storage backend for their legacy cloud and enterprise applications.
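The copy-on-write design that made ZFS great is simple to illustrate: live data is never overwritten in place; a write builds new state and atomically swaps the root reference, which is why consistent snapshots come essentially for free. A minimal Python sketch of the idea (illustrative only, not ZFS code; all names are invented):

```python
class CowStore:
    """Toy copy-on-write store: a snapshot is just a root reference."""

    def __init__(self):
        self.root = {}       # current version: block name -> data
        self.snapshots = {}  # label -> frozen root reference

    def write(self, name, data):
        # Never update in place: build a new root that shares unchanged
        # entries with the old one, then swap the reference atomically.
        new_root = dict(self.root)
        new_root[name] = data
        self.root = new_root

    def snapshot(self, label):
        # O(1) regardless of data size: old roots are never mutated.
        self.snapshots[label] = self.root


store = CowStore()
store.write("a", b"v1")
store.snapshot("snap1")
store.write("a", b"v2")
assert store.snapshots["snap1"]["a"] == b"v1"  # snapshot is unaffected
assert store.root["a"] == b"v2"
```

Because old roots are immutable, taking a snapshot costs the same whether the store holds a kilobyte or a petabyte, which is the property that makes ZFS snapshots and clones effectively free.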

If you’re in technology, you know that the only constant is change. Our service provider customers are busy scaling their infrastructure, moving to next generation open source cloud platforms like OpenStack and CloudStack, and looking for even more scalable and cost-efficient storage backends. For that, we’ve built NexentaEdge. Rather than quickly combining existing open source pieces and hacking on top of them, we deliberately took the time to design a solution from the ground up, making sure our design reflected the lessons we learned from our years of working with ZFS. We also made sure our design looked forward, ready for what we see as emerging trends in the storage landscape. The net result is a truly next generation scale-out solution that incorporates what we like to call our ZFS DNA.

For example, one core design aspect of NexentaEdge is something we call Cloud Copy On Write. While the system is truly distributed, without any single point of failure, data in the cluster is never updated in place. Very much like ZFS, but applied in a distributed context, this gives us a strong foundation for advanced data management and the ability to gracefully handle the variety of failure scenarios that affect large scale-out clusters. Another example is end-to-end data integrity. All chunks of objects in NexentaEdge are stored with cryptographic hash checksums that deliver ultimate data integrity, and the system automatically self-heals if corruption is detected in a chunk.
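One common way to get this kind of end-to-end integrity is content addressing: each chunk is stored under its cryptographic hash, so corruption is detectable on every read and repairable from a healthy replica. A hedged Python sketch of the pattern (not NexentaEdge internals; class and method names are invented):

```python
import hashlib


def chunk_id(data: bytes) -> str:
    """A chunk's identity is the SHA-256 of its contents."""
    return hashlib.sha256(data).hexdigest()


class ChunkStore:
    """Content-addressed store with verify-on-read and self-healing."""

    def __init__(self, n_replicas=3):
        self.replicas = [dict() for _ in range(n_replicas)]

    def put(self, data: bytes) -> str:
        cid = chunk_id(data)
        for replica in self.replicas:
            replica[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        for replica in self.replicas:
            data = replica.get(cid)
            if data is not None and chunk_id(data) == cid:
                # Found a verified copy: heal any corrupt or missing replica.
                for other in self.replicas:
                    if chunk_id(other.get(cid, b"")) != cid:
                        other[cid] = data
                return data
        raise IOError(f"all replicas corrupt or missing for {cid}")


store = ChunkStore()
cid = store.put(b"object chunk")
store.replicas[0][cid] = b"bit rot"      # simulate silent corruption
assert store.get(cid) == b"object chunk"            # detected on read
assert store.replicas[0][cid] == b"object chunk"    # and healed in place
```

The key property is that the chunk’s name and its checksum are the same thing, so a reader can always tell a good copy from a corrupt one without trusting any single node.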

Another critical aspect of the design is the recognition that building large scale out clusters is as much a storage challenge as a networking one. So we paid particular attention to how we consume precious network bandwidth and how we automatically route data around hot spots and busy nodes. These functions are implemented as part of what we call FlexHash (for dynamic flexible hashing that automatically selects the best targets for data storage, or data access based on actual load states) and Replicast (our own network protocol that minimizes data transfers and enables lower latency data access).
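Load-aware placement of this sort is often built on rendezvous (highest-random-weight) hashing, with each node’s score discounted by its current load so that busy nodes attract less new data while placement stays deterministic for a given chunk. A simplified sketch of that general technique (not the actual FlexHash algorithm):

```python
import hashlib


def _score(chunk_id: str, node: str) -> float:
    """Deterministic pseudo-random score in [0, 1) per (chunk, node) pair."""
    digest = hashlib.sha256(f"{chunk_id}:{node}".encode()).hexdigest()
    return int(digest[:16], 16) / float(1 << 64)


def pick_targets(chunk_id: str, node_loads: dict, replicas: int = 3) -> list:
    """Rank nodes by hash score discounted by load: stable placement that
    automatically routes new data around hot spots and busy nodes."""
    ranked = sorted(
        node_loads,
        key=lambda node: _score(chunk_id, node) / (1.0 + node_loads[node]),
        reverse=True,
    )
    return ranked[:replicas]


loads = {"node-a": 0.1, "node-b": 9.0, "node-c": 0.2, "node-d": 0.1}
targets = pick_targets("chunk-42", loads)
assert len(targets) == 3
# Placement is deterministic for the same inputs:
assert targets == pick_targets("chunk-42", loads)
```

With the heavily loaded node-b discounted by a factor of ten it will rarely win a top-three slot, yet the mapping needs no central directory: any client with the same view of node loads computes the same targets.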

Last but not least, a great design proves itself by how naturally advanced features flow out of it. One such feature is cluster-wide inline deduplication. NexentaEdge clusters get inline deduplication of all data at the object chunk or block level. Chunks are variable in size and can be as small as 4KB. At the Nexenta booth at VMworld 2014, we will demonstrate this by creating hundreds of virtual machines on iSCSI LUNs while consuming only slightly more than one copy’s worth of capacity in the cluster.
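Inline deduplication falls out naturally from content addressing: identical chunks hash to the same ID, so a chunk is written once no matter how many objects or virtual machines reference it. A toy sketch of the effect (chunk size and names chosen for illustration):

```python
import hashlib


class DedupStore:
    """Inline dedup: identical chunks are stored exactly once."""

    def __init__(self, chunk_size=4096):   # chunks as small as 4KB
        self.chunk_size = chunk_size
        self.chunks = {}                   # chunk hash -> chunk bytes
        self.objects = {}                  # object name -> list of hashes

    def put(self, name: str, data: bytes):
        refs = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            cid = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(cid, chunk)  # store only if new
            refs.append(cid)
        self.objects[name] = refs

    def get(self, name: str) -> bytes:
        return b"".join(self.chunks[cid] for cid in self.objects[name])


store = DedupStore()
image = b"".join(bytes([i]) * 4096 for i in range(10))  # a 40KB "VM image"
for n in range(100):
    store.put(f"vm-{n}", image)            # 100 identical clones
assert store.get("vm-0") == image
assert len(store.chunks) == 10             # one copy's worth of chunks stored
```

One hundred logical copies consume a single physical copy’s worth of chunk data plus per-object reference lists, which is the effect behind the VMworld demo described above.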

NexentaEdge is here. And we think it will be big. Over the coming weeks, we’ll dig a bit deeper into the technology and share details on Cloud Copy On Write, Flexhash and Replicast. See you at the Nexenta booth at VMworld 2014!

Posted in Corporate

Software-Defined Storage Saving the Economy

Jill Orhun, VP of Marketing and Strategy, Nexenta

Faced with an explosion of data driven by macro trends like social media, mobile, the Internet of Things, and Big Data, many organisations confront snowballing technology requirements alongside declining IT budgets that mean doing more with less.

Storage is often the highest single line item in these reduced or static IT budgets, making the strategy of throwing more storage hardware at the data explosion problem less and less acceptable. Many of today’s organisations, such as picturemaxx, the University of Sussex and the Institut Laue-Langevin, have found a way to step away from such a MESS (massively expensive storage systems) solution and have discovered more scalable, flexible, available and cost-effective storage solutions: Software-Defined Storage (SDS).

Open Source SDS solutions can be deployed in conjunction with industry standard hardware, avoiding the vendor lock-in of expensive proprietary models. This gives organisations the freedom to choose their hardware, ensuring they always get the right hardware for their requirements – at the right price. Democratising infrastructure in this way delivers cost savings of up to 80%.

Software-Defined Storage will change everything

2014 is the year that SDS is shaking up the market by delivering on its promise of a truly vendor agnostic approach, and providing a single management view across the data centre. Organisations are beginning to coalesce around a standard definition of Software-Defined Storage, and clearing up the confusion that arises from the proliferation of approaches taken by vendors purporting to provide “Software-Defined” solutions.

Some vendors claim to offer SDS but are merely providing virtualised storage, characterised by a separation and pooling of capacity from storage hardware resources. Others claim to have SDS solutions even though their solution is 100% reliant on a specific kind of hardware. Neither definition fulfills the fundamental SDS requirement of enabling enterprises to make storage hardware purchasing decisions independent from concerns about over or under-utilisation or interoperability of storage resources. It is important to be aware of these subtle distinctions, otherwise the key SDS benefits of increased flexibility, automated management and cost efficiency simply won’t be realised.

True SDS solutions let organisations work with any protocol stack to build specialised systems on industry standard hardware, rather than limiting their choices to the expensive specialised appliances sold by the ‘MESS’ vendors.

Software-Defined Storage changes the economic game for the Software-Defined Data Centre

SDS is one of the three legs of the stool that makes up the Software-Defined Data Centre (SDDC), along with server virtualisation and Software-Defined Networking (SDN). As the most costly leg, however, SDS is also a target for misdirection of terms and the capture of high margins. Many vendors claiming to deliver SDS are selling hardware products with the 60% to 70% margin that has come to define the enterprise storage market. SDS is about much more than new technology innovation. True SDS lets customers do things they couldn’t do before and, most critically, fundamentally changes the economics of the enterprise storage business by increasing the hardware choices available to end customers.

Making the right choices

Organisations are in the middle of a data tsunami. According to recent reports, the global tidal wave of data has been predominantly created in the last two years, and it is only going to grow faster as we all demand 24/7 connectivity.

According to Research and Markets Global Software Defined Data Centre report, the market is set to explode, growing at a CAGR of 97.48% between this year and 2018. Much of this growth is due to an increased demand for cloud computing, which creates a companion demand for Software-Defined technologies to achieve large scale, economically.

Customers realise that SDDC technologies offer flexibility, security, storage availability and scalability. All organisations should get informed about what true Software-Defined solutions are, so they can make better decisions about which vendors to invest in for their future SDDCs. Understanding the definitions, asking the right questions, and moving toward SDS is the first critical step on the Software-Defined journey.

Posted in Corporate, Software-defined storage

Welcome to the Software-Defined World

Thomas Cornely, VP of Product Management, Nexenta

It’s no secret that today’s organizations are experiencing an unprecedented data explosion. As much as 90% of the world’s data has been created in the last two years and the pace of data generation continues to accelerate thanks to trends like cloud, social, mobile, big data, and the Internet of Things. These developments create additional pressure on data centre managers already struggling to make do with flat or decreasing IT budgets.

Thankfully, help is on the horizon with the emergence of the Software-Defined Data Centre (SDDC). The SDDC promises to deliver new levels of scalability, availability and flexibility, and will do so with a dramatically lower total cost of ownership. As companies like Amazon, Google and Facebook prove every day, SDDC is the future and is built on three key pillars: compute, storage and networking.

The last decade saw the transformation of the compute layer thanks to technologies from the likes of VMware, Microsoft and the open source community. The next stages are storage and networking. While Software Defined Networking (SDN) was all the rage a couple of years ago, actual market traction and customer adoption has been slower than expected as industry players continue to work to align all the technology pieces required to deliver full SDN solutions. The story is quite different for storage. With storage typically being the most expensive part of an enterprise infrastructure, we are witnessing a dramatic acceleration in Software-Defined Storage (SDS) adoption.

2014 promises to be very significant for Software-Defined Storage as customers realize its potential for addressing their critical pain points: scalability, availability, flexibility and cost. As SDS increasingly takes centre stage, it is important to ensure customers see through legacy vendor marketing seeking to preserve their hegemony by dressing up high margin, inflexible proprietary hardware in SDS clothes. Thanks to fairly creative marketing teams most, if not all, storage vendors make some claim related to Software-Defined Storage. It is amusing to note, however, that almost all are selling closed hardware products with the 60% to 70% margin that has been the norm in the enterprise storage market over the past decade. Calling a product SDS does not make it so.

Having a lot of software in a given hardware product (as most storage arrays do) might make a product Software-Based, but it does not make it Software-Defined. Similarly, adding an additional management layer or abstraction layer on existing proprietary hardware (a la EMC ViPR) might increase the amount of software sold to customers, but really does not make the solution Software-Defined. What legacy storage vendors are doing is very similar to what Unix vendors of old (remember Sun, HP and IBM) did when they added virtualization and new management software to their legacy Unix operating systems to compete with VMware. While these were technically interesting extensions to legacy technology, it was VMware running on standard Intel based servers that truly unleashed Software-Defined compute and changed the economics of enterprise compute forever. The same is true for SDS.

Done right, Software-Defined Storage allows customers to build scalable, reliable, full featured, high performance storage infrastructure from a wide selection of (low cost) industry standard hardware. As such, SDS is about much more than the latest technology innovation. True SDS allows customers to do things they could not do before while fundamentally changing the economics of the enterprise storage business. True SDS allows customers to deal with their storage assets in the same way they deal with their virtualized compute infrastructure: pick a software stack for all their storage services and seamlessly swap industry standard hardware underneath as cost, scale and performance requirements dictate. Eliminating vendor lock-in without compromising on availability, reliability and functionality is how SDS will change the storage industry.

From a technology perspective, true SDS must be able to support any ecosystem (VMware, HyperV, OpenStack and CloudStack) and any access protocol (block, file and object), while running on a wide variety of hardware configurations, be they all flash, all disk, or hybrid. Having a strong open source DNA helps in getting an active community of users and developers around the SDS solution. SDS openness will play an increasingly important role as customers move towards converged software-led stacks that harness technologies such as cloud, hyperscale, big data, NoSQL, flash hybrids, all flash, object stores and intelligent automation.

As mentioned earlier, the SDDC will deliver new levels of scalability, availability and flexibility at significantly lower cost than today’s approaches. With storage playing such a critical role in the SDDC, the accelerating adoption of SDS in 2014 will make it a breakthrough year for what we like to call Software-Defined Everything, aka SDx. When the building blocks of software-defined compute, storage and networking have all been put in place, enterprises will be free from expensive vendor lock-in and free to scale more easily, innovate more rapidly and bring new solutions to market more efficiently. More than yet another technology fad, SDDC is poised to change core data center economics and free enterprises to invest more in their own business.

Posted in Corporate, Software-defined data center

Software-Defined Storage – Savior of The Internet of Things

Jill Orhun, VP of Marketing at Nexenta, investigates how a new, software-driven approach to storing data could end up saving hosting providers a small fortune…

‘The Internet of Things’, or connected devices, is an integral part of many people’s daily lives. From its beginnings in Internet banking and online grocery shopping, the Internet of Things has moved on to driverless cars, learning thermostats and wearable fitness technology – and the future only holds more opportunities. As these advancements continue and become more widely adopted, we will become increasingly reliant on the services they deliver and the data they generate. And the Internet of Things is only one of several ingredients contributing to today’s explosion of data – key trends like mobility, social media and big data are also driving strong demand.

The net effect of these trends and technical advancements is that data is growing at an exponential rate. Analyst firm IDC [1] predicts the digital universe will grow to 40 trillion gigabytes by 2020, equating to more than 5,200 gigabytes for every man, woman and child. It also forecasts that the digital universe will double every two years between now and 2020. This data growth is driven not only by people, but also by the huge number of devices permanently connected to the Internet, transmitting data 24/7. Important questions arise: where will all this data live? And how will we manage it?

Over the next 5 years, CIOs anticipate up to 44% growth in workloads in the Cloud, versus 8.9% growth for “on-premise” computing workloads [2]. While consumer behavior often lags behind that of enterprises, the expectation is that over time greater and greater amounts of data will live in Clouds. All of this data will put huge pressure on hosting providers to deliver industry-leading data management systems – ones that are simple, flexible, and economically friendly. We see evidence of this not only locally, but also globally.

For example, large multinational hosting providers like Korea Telecom manage 100+ Petabyte environments that will become multi-Exabyte ones in the not-so-distant future. With a demanding portfolio of enterprise clients, it’s critical to keep performance high, access flexible, and costs down. To do this, hosting providers need high-functioning data centers that take advantage of leading-edge technologies. Achieving that goal means addressing each layer of the data center – compute, network and storage – with storage having the most significant budget impact.

To address the storage component, hosting providers should explore the benefits of Software-Defined Storage (SDS) solutions. SDS is already helping 1000+ Nexenta hosting provider customers to keep a competitive edge by delivering high-performance environments with the economics required to address demanding trends like ‘The Internet of Things’. True SDS solutions – meaning software-defined, not software-based – provide flexible, simple, manageable, enterprise-class, high-performance infrastructure, with no vendor lock-in.

SDS technology is deployed on industry standard hardware rather than tied to expensive proprietary models. This freedom of choice gives organizations the flexibility they need to select hardware that matches their requirements, for both new and legacy environments. Combine this with all the expected storage services, plus a price point that is often 50-80% less than proprietary models, and organizations now have the means to handle the data explosion gracefully – and competitively.

And don’t just take our word for it: our customers agree.

“We have spent less than one third of the investment we could have made with one Oracle SAN unit. This cost saving provides us with additional capital, which can be used for other IT resources to ensure we are meeting all our service level agreements.”

“SDS has helped us to provide the highest performance, scalability and flexibility to our growing customer base at a fraction of the cost of legacy vendors. The new solution has met all of our needs, offering enterprise features with the open source background we trust.”

“SDS has resolved all of our storage issues. We are now able to provide our customers with instant access to mission critical data. This has helped us not only to save money and time, but also to remain competitive in an increasingly growing industry.”

To embrace growing trends like ‘The Internet of Things’, hosting providers should recognize that SDS solutions belong on their infrastructure roadmaps. Flexible, manageable, simple and low cost – SDS is the future.


[2] Tech Trader Daily

Posted in Corporate, Software-defined data center, Software-defined storage, Virtualization

Independence and Freedom is at the Heart of the Software-Defined Movement

Independence and freedom is at the heart of the software-defined movement in IT today. Whether you are choosing a hypervisor (from ESX to Microsoft to KVM), building a cloud with OpenStack or CloudStack, or rethinking networking with the likes of Nicira (VMware NSX), Big Switch and even Cisco, the options seem almost endless. The options available in the Software-Defined Storage world are no less plentiful. Traditional storage has been migrating from the big iron of yesteryear to choices that include all-flash arrays, hyper-converged solutions and hybrid arrays, all custom built for the requirements of today’s enterprise.

Nexenta has been at the leading edge of the Software-Defined Storage movement for years, with NexentaStor as one of the first software-only enterprise storage solutions. Over the past year we have continued to innovate, developing a hyper-converged solution around virtual desktops with NexentaConnect. Our software, however, is only part of the overall storage solution. Nexenta has built and continues to grow strategic relationships with some of the largest hardware companies in the IT world. Building upon success with Cisco, Dell, HP, IBM, Supermicro and others, Nexenta continues to give our customers flexibility in hardware options.

This unique flexibility not only lets customers separate the data and control planes, but also gives them the reassurance of knowing that the pieces will work together flawlessly. To further this reassurance and sense of reliability, Nexenta also provides reference architectures with tested and validated solutions for each vendor.

Let’s take our strategic partnership with Dell as an example. This week, CRN pointed out that Dell has built its software-defined vision around Nexenta. As a certified Dell technology partner, Nexenta continues to develop solutions with Dell covering most aspects of your enterprise storage needs, including the only solutions to deliver the lowest-cost deployment for non-HA environments with shared pooled desktops and Fibre Channel support. The partnership also means that customers can not only purchase their entire storage solution from Dell, with the flexibility of Nexenta’s Software-Defined Storage options, but also get Dell’s global supply chain and world-class support structure at the same time.

If you are looking for a next-generation storage solution that is cost effective and can easily scale from the smallest builds up to almost a petabyte in a single array, opt for the Dell PowerEdge storage array servers combined with PowerVault backup, powered by NexentaStor software-defined storage. Or, if the project you are working on requires you to deploy virtual desktops, opt for a solution based on NexentaConnect with Dell hardware. Validated solutions on Dell VRTX, PowerEdge R620s and R720s give you building blocks that make the transition from proof of concept to production less of a concern, delivering the independence that software-defined enterprise storage gives to the IT teams of today and tomorrow.


Michael Letschin
Director of Product Management, Solutions

Posted in Dell, Software-defined data center, Software-defined storage, Uncategorized

Software. Defined. Everything. The Next Big Thing.

100% Software. Total Freedom. All Love.

By Tarkan Maner, CEO Nexenta Systems
@tarkanmaner, #nexenta, #OpenSDx

Every day at Nexenta we start the day energized and pumped up because we are part of a true revolution – The Next Big Thing – the Software Defined Everything revolution. It’s going to fundamentally change enterprise computing, as we know it today – and tomorrow.

Our team has been blessed working with some of the most innovative companies in the past 25 years; from E-commerce to E-business, On-Demand computing to Enterprise Infrastructure Management and Thin Computing to Cloud Computing. We have been blessed working with some of the most innovative people around, including leading innovators, entrepreneurs and business and technology leaders with truly innovative ideas. So many ideas and technologies have come to pass but only a few have truly been disruptive: E-commerce, social media, virtualization, and cloud computing, to name a few. Having watched and experienced the success and failure of countless companies and technologies, we can say with 100% confidence that the Software Defined Everything trend is real. It’s disruptive. It’s now. It’s changing everything.

Over the past few years I kept hearing about Nexenta, especially over the past 12 months, when our field folks were always talking about Software Defined Storage-driven infrastructures and enterprises. We were a bit skeptical at first. We were working with our customers and partners on big data, mobility and virtualization projects, and there was a big buzz around becoming a Software Defined enterprise. We quickly realized that the big inhibitor to all of this innovation was the high cost of information management and storage. It was simply too expensive, and customers were locked into rigid and chaotic infrastructure systems, trapped in onerous, long-lasting vendor contracts. Customers wanted flexible and open alternatives to the old-school hardware vendors, who were holding them hostage by design with proprietary solutions developed with the single-dimensional IT vision of the 90s.

Finally, I met with the Nexenta team and was amazed by their story. The company has passionate customers worldwide across many verticals and engaged partners, from the largest players in the Western world to the smallest partners in the developing world, along with unbelievably well-designed IP. We have come to realize that Software Defined Everything is The Next Big Thing and Nexenta has built solid global leadership in Software Defined Storage. I joined the company as the CEO and, most importantly, as a humble servant for our customers who have been craving this game-changing solution for decades. We have over 5,000 customers, hundreds of partners and almost an Exabyte of storage under management. Nexenta has a start-up soul that comes through in its innovative products – like NexentaEdge for Object Storage Management and Nexenta Fusion, the only true and open single-pane-of-glass orchestration solution for any storage sub-system, hardware or software; it is your Nexenta Glass to your storage infrastructure.

After being with Nexenta over the past six months and talking with hundreds of customers, partners, investors, analysts and experts, I am more convinced than ever before that Software Defined Everything is the future. Its open and innovative solutions are what’s needed to deal with today’s big trends around social media, mobility, Internet of Things, Big Data, and cloud computing, and bring TCO of computing to the lowest levels we have ever seen. What we have seen has proven that Software Defined Everything is more than a buzz-word. It’s what’s needed to deliver a true Software Defined Storage platform, to build a true Software Defined Data Center, to support delivery of a true Software Defined Infrastructure, and achieve the Software Defined Enterprise end game – through a complete and open Software Defined Everything vision; which we simply call #OpenSDx.

VMware, Citrix, Microsoft and others have successfully liberated “Compute” over the past decade. Virtualizing infrastructures planted the undeniable seed for today’s cloud computing frenzy. That revolution led Amazon, Google, Facebook, Twitter and many leading public cloud service providers around the world to prove that cloud computing cannot be economically viable unless these service providers have completely open and disruptive Software Defined Infrastructures to service their own customers. From their rightful “selfish” drive to out-innovate each other came more innovation from countless developers and innovators who were finished with the dictatorships of the overall IT industry. Countless open source projects, including OpenStack and CloudStack, opened the way for enterprises to prove that they can Amazon-ize themselves in much more innovative and cost-effective ways. They learned fast that “Goliath” service providers like Amazon also crave long-lasting and onerous contracts – just like the old-school system vendors of the 90s, with similar shackles of punitive, locked-in contracts and little room for innovation and collaboration. And the divine comedy continues. Or does it?

We, the free people of innovation and collaboration, see the Year 2014, the Year of the Horse in Chinese Zodiac, as the “breakthrough” and “improvement” year for The Next Big Thing. Software Defined Everything, especially its critical ingredient of Software Defined Storage, is real. This year, we expect several more Fortune 1000 enterprises to become liberated and Software Defined in some shape or form. This collaborative and open revolution for liberation will result in greater agility, interoperability, faster speed to market, improved risk management, new opportunities for innovation and, most importantly, achieving all of this at TCO levels we have never seen before. So, maybe the divine comedy stops here and turns into divine innovation and collaboration by those who seek true Freedom via The Next Big Thing.

At Nexenta, we move fast. We listen. We rationalize. We deliver with open innovation and collaboration with our customers and partners. We believe we have the best team in the industry. We have the most innovative solutions with tens of patents and awards under our brand – and ALL built on an unashamed Open Source vision. We are committed to bringing Software Defined solutions for the enterprise to our customers and partners. We are now crossing that exciting chasm from a small start up to becoming a fast-growing world-class enterprise company. If the last six months are any indication, the next phase of Nexenta will be simply AMAZING!

Keep your eyes on Nexenta and our open, independent, free and friendly Software Defined Storage vision – it’s the first step towards The Next Big Thing.

Nexenta. 100% Software. Total Freedom. All Love.

Posted in Corporate

Nexenta Software-Defined Storage Ranked #1 with Lowest Cost, Highest Performance for Virtualization

Nexenta recently participated in the Citrix Ready VDI Capacity Validation Program for Storage Partners white paper, resulting in a series of reports looking individually at how a variety of storage solutions implemented Citrix XenDesktop using the VDI FlexCast approach.

As described in the program overview, Citrix constructed a turnkey “VDI Capacity” test rig in its Santa Clara Solutions Lab. The VDI farm was complete and fully operational with the exception of storage. Citrix storage partners were invited to connect their storage to the VDI farm and participate in a “VDI Capacity” test that simulated “a day in the life” of a 750 user Citrix farm. Upon completion, Nexenta and 11 other storage solutions became “750 User Verified” partners for XenDesktop. To read the full analysis of NexentaStor’s performance and ROI results, download the free white paper “Nexenta Liberates Storage to Deliver a Better ROI.”

What we found most interesting about the report, however, was the economics of storage vendors that was made transparent as part of this process.

We weren’t the only ones to notice. Take a look at this graphic from a recent blog post entitled The real cost of VDI storage by Gartner Research Director Gunnar Berger.

Storage vendors comparison chart

Berger created this comparison chart of the cost of storage per desktop from all twelve vendors that received verification from the program. Nexenta delivers the most cost-efficient storage solution for Citrix XenDesktop users, providing an unprecedented full-featured storage solution for $15 per desktop. This beats the nearest competitor, with savings of $22 per desktop. The last-place finisher in Citrix’s validation testing was an astounding 36x more expensive than NexentaStor; NetApp was approximately 2.5x more costly than Nexenta, and EMC more than 6.5x more expensive.
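As a back-of-the-envelope check, the quoted figures hang together: the nearest competitor at $15 + $22 = $37 per desktop lines up with the roughly 2.5x NetApp multiple. A quick sketch, using only the numbers quoted in the paragraph above:

```python
nexenta = 15.0                 # $/desktop from the Citrix validation
nearest = nexenta + 22         # "savings of $22 per desktop" -> $37.00
netapp = nexenta * 2.5         # ~2.5x Nexenta -> $37.50, matching the nearest competitor
emc = nexenta * 6.5            # >6.5x Nexenta -> $97.50
last_place = nexenta * 36      # 36x Nexenta -> $540.00

print(nearest, netapp, emc, last_place)  # 37.0 37.5 97.5 540.0
```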

This validation demonstrates what we’ve known for some time: that the Massively Expensive Storage System (MESS) vendors have been giving storage a bad name and preventing IT departments all over the world from developing a true VDI environment. In the Citrix validation study, Nexenta delivered the best operational performance with the best ROI.

Simply put, Nexenta shattered the competition.

Nexenta is working tirelessly to help customers implement a software-defined approach to data center storage. We’re proud of these results and will continue to innovate in order to help customers unlock the true ROI potential of their move to VDI and a software-defined data center.


Michael Letschin
Director of Product Management, Solutions
Nexenta Systems

Posted in Software-defined data center, Software-defined storage

10+ Lessons from my Software-Defined (SDx) Life

After an exciting week in Amsterdam, Paris and London – where we held Nexenta’s quarterly sales meeting, the first of our global OpenSDx Summits, our French launch in Paris, and my first TV interview for Nexenta in London (@cloudchantv) – a number of key themes are bubbling up.  Multiple industries, organizations, people and technologies are energized by the promise of Open, Software-Defined “everything”: from storage, servers, and networks, to data centers, infrastructure and ultimately enterprises.  Few vendors are delivering on this promise, and few organizations understand that Software Defined Storage (#softwaredefinedstorage) (SDS) is the first critical step on this journey.  Here’s what’s top of mind from a week with the movers, shakers and influencers of OpenSDx:

  1. Hip or Hype? The Future of OpenSDx
  2. Software-Based and Software-Defined are different
  3. “Open” is also for Organizations
  4. CIOs have new imperatives – and new opportunities 
  5. Evolution is the name of the game
  6. Scale changes everything
  7. Agility + Simplicity = Happy
  8. Macroeconomics affect storage economics
  9. CIOs need a better cost baseline
  10. Think twice about the cost of Cloud
  11. Civil rights mean data rights
  12. UK, France & Spain lead SDS pack

1. Hip or Hype? The Future of OpenSDx

There’s a lot of buzz in the industry right now around Software Defined “everything”– even traditional hardware vendors suddenly have “software-defined”, yet inexplicably hardware-based, solutions.   The resulting market fog means end users can’t get a clear view of OpenSDx.  What to do?  Get the facts straight.  Look to industry leaders like VMware who invite you to master the new reality of the Software Defined Enterprise.  Look to analysts like IDC pointing to the starring role that software increasingly plays in IT infrastructures, and survey data that show over 50% of companies in leading countries considering software defined solutions.  There’s a lot of hype, but at the end of the day, your enterprise will be software defined, and SDS is the first platform on which to achieve competitive advantage.

2. Software-Based and Software-Defined are different

To improve the view of SDx, we need standards and definitions that we can all agree on.  Analysts differentiate between Software-Based and Software-Defined solutions.  Why shouldn’t you?  If you want to tell the difference, ask, “How many hardware platforms does your Software Defined solution run on?”  One is not the right answer.  Software-Based means you should expect hardware dependency; Software-Defined means hardware-, application-, and protocol-agnostic, enabling a Software Defined Infrastructure for your Software Defined Enterprise.

3.  “Open” is also for Organizations

Open Source started a fundamental shift in how people think about technology – collaborative, flexible, cross-functional, team oriented.  OpenSDx is the next generation of Open Source for enterprise technology, be it compute, storage, or networking  –  like Nexenta – 100% Software.  Total Freedom.  All love. :)  Development of such solutions means breaking down barriers to create integration opportunities, increasing communication between both people and technologies.  As the gravitational pull of OpenSDx gets stronger, organizational movements are beginning to reflect this desire for open, collaborative environments, from the creation and empowerment of cross-functional Dev Ops teams, to industry collaborations like the Open Compute framework.

4. CIOs have new imperatives – and new opportunities 

The best CIOs have never been just about technology – they have a holistic view of the business and IT, and now they’ll be rewarded as they use this special insight to qualify OpenSDx solutions and understand where and why it best fits in their organization.  OpenSDx holds incredible power for CIOs – it helps transform them from IT service providers to strategic partners, capable of improving the speed of business and delivering not just technology but innovation.   OpenSDx = Efficiency = Innovation = Competitive Advantage.

5.  Evolution is the name of the game

Revolution is a scary thought for most IT leaders – few organizations want to be on the bleeding edge of technology innovation for their mission critical systems – but rapid, low risk evolution is oh so attractive, especially when your initial steps build a foundation giving a competitive advantage.  Organizations taking steps now to implement Software Defined solutions will find not only near term business benefits, but also longer term competitive ones.  This is why so many analysts and industry leaders are highlighting SDS as one of today’s big trends.  Storage is the bedrock of the data center; if you can evolve this expensive, growing component of your data center, all the change layered on top will be easier.

6.  Scale changes everything

The scale of IT has exponentially increased – environments are built with hundreds and thousands of devices and systems, and proofs of concept on 100 units no longer suffice.  The complexity associated with such environments is immense, and resources and knowledge must scale more efficiently.  Architectures, people and processes will all need to evolve, and simple, manageable tools are needed to do so effectively.  At Nexenta, our typical installation used to be on the order of tens of terabytes; now, our largest customer will grow to half an Exabyte in the next eighteen months.  We expect to see more customers and organizations moving in that direction.

7.  Agility + Simplicity = Happy

Much like taking part in a revolution, end users cheer for flexibility and choice – but with boundaries.  While Software Defined (SDx) solutions mean you can make more choices, the potential permutations can be overwhelming – analysis stops, paralysis ensues, innovation stalls.  It’s incumbent upon OpenSDx solution providers to develop simple, manageable solutions, so that agility is delivered with simplicity.  Choice is wonderful, but introduces complexity, and at the end of the day, IT managers need to be sure they can still run their own house. (#agilesimplehappy).  When we engaged customers in release planning for NexentaStor 4, the loudest chants were for simplicity and improved manageability, and delivery of those characteristics is critical to customer satisfaction.

8.  Macroeconomics affect storage economics

According to the IDC surveys shared by Donna Taylor (@Donna_IDC), macroeconomic trends have a trickle-down effect on storage buying behaviors.  Storage is the fastest growing, and often largest, line item in an organization’s IT budget.  How do you stave off additional CAPEX and OPEX costs?   Keep your storage longer, keep it off warranty, do more with less.  Customers are also willing to pay a little extra for flexibility and choice, so that longer-term options exist that extend the life of their storage assets.   You can also just buy NexentaConnect to get simpler deployment and better performance and density for your VDI environment. (Yes, shameless plug!)

9.  CIOs need a better cost baseline

Organizations have two related problems when trying to address their return on investment.  First, many IT organizations spend over 70% of their budget to keep the lights on – this inhibits innovation, because resources focus on maintenance instead of value add.  Second, most IT organizations lack true IT cost transparency.  Budgets are based on past behaviors and high-level estimates vs. on fact-based usage of IT services.  The highest benefit of the Software Defined Enterprise is that it balances Business and Technology, empowering Technology to deliver, price and project IT services against a business strategy – and make recommendations on the right course of action.  How to solve for these challenges and achieve SDE benefits?  Deliver simple, flexible, manageable solutions that free up time, and enable a better IT operating model with intelligence from cost data.  CIOs looking to improve their infrastructure economics via SDDC / SDI will quickly need to examine their costs.

10.  Think twice about the cost of Cloud

Like the Hotel California, when it comes to your data and the cloud, “you can check out any time you like, but you can never leave.”  (#ThomasCornely).  Many organizations choose public cloud services as an easy way to quickly add capacity or ramp up new products; however, cost needs to be examined carefully and holistically.  Remember that you’re not only paying for storage, but also for use and access.  It’s not a one-time cost, but a growing, year-over-year expenditure.   Nexenta’s CIO-validated cost analysis, based on list prices, reveals that our solutions are 70% cheaper than legacy system solutions over a 3-year time horizon, and 15% less than cloud providers.  Do you know the true cost of your storage?

And a few bonus items for our friends in Europe!

11.  Civil rights mean data rights

With Edward Snowden on the television screen, frequent discussion of the US Patriot Act and concerns about European data being on American soil, it was clear that data security and privacy are top of mind for our EMEA friends.  European organizations must honor end-user preferences on how their data is used, understand what constitutes consent and how long it lasts, and also permit the “right to be forgotten”.  While data security itself is generally handled in the application layer, storage solutions like Nexenta’s, with self-healing properties like those of ZFS, help reduce data corruption and ensure data integrity, thus making sure the right data is available to the right people at the right time.

12.  UK, France & Spain lead SDS pack

The localization of SDS solution adoption is evident in Europe not only by industry but also by geography; as I am finding, it’s incredibly important to understand not only the culture and expectations of the countries where we work, but also where they are in their SDS journey, and what’s needed to help them take the next step.  According to survey data presented by Donna Taylor, IDC’s EMEA storage analyst, there is a continuum of adoption in Europe, with the UK, France and Spain leading the pack in terms of interest and adoption around SDS solutions, and the Nordics at the other end, exhibiting some interest.

So, what’s the upshot?  OpenSDx is real.  It’s here.  Everyone’s talking about it.  In my interview with CloudTV (@CloudTV), I was asked what makes me passionate about Nexenta and Software Defined Storage.  My answer?  We are at the forefront of a fundamental shift in how business and technology operate today – one that’s going to make all industries more efficient, more innovative, more competitive, and better.  What better place to be than leading that revolution?

Join the conversation. Let us know what you think via @Nexenta and stay tuned for videos of Nexenta’s OpenSDx Summit EMEA sessions on our YouTube channel.


Jill Orhun
Marketing & Chief of Staff
Nexenta Systems

Posted in Software-defined data center, Software-defined storage, Uncategorized

A Bridge Over Troubled (v)Storage

Every great technology shift requires the means to move from the old way of doing things to a new and vastly improved approach. A bridge, if you will.

We found that analogy very fitting while reading some interesting and thought-provoking articles on the storage industry. These are A Major Shift in the Data Storage Market is on the Horizon by Kalen Kimm of TweakTown, and Understanding Storage Architectures by Chad Sakac at VirtualGeek.

Both articles are ambitious (nearly 6,000 words combined!). Of the many observations, however, these lines from Kalen Kimm led us to comment:

“The shared visibility between compute, application, and storage is a large step forward to a true software-defined data center. Instead of having to pre-configure LUNs and then presenting them to applications to be consumed, applications will be able to consume storage on an as-needed basis.”

At Nexenta, we are obviously big believers in the software defined data center (SDDC), and the importance of software defined storage (SDS). The SDDC makes too much sense not to take hold; the only variables are around timing and adoption speed. We see VMware’s release of their Virtual SAN SDS solution as an important catalyst to address these variables. A larger player such as VMware can have significant impact on the way that enterprises run their datacenter.

A clear example of the impact is with networking. When VMware first released the hypervisor, all network switching was contained within the host and you were wholly dependent upon the physical switching layer from traditional companies. Then the distributed virtual switch (DVS) was released, allowing network segments to traverse hosts and spread throughout the virtualization cluster. This moved switching into a software layer; VMware then added the Cisco Nexus 1000V as an option, which allowed management to be consistent throughout the data center, physical or virtual, while remaining software-based.

In our opinion, VSAN is the necessary bridge between legacy storage and a fully efficient SDS model. VSAN allows customers to utilize their existing internal storage and spread it across hosts, similar to the DVS. What is missing is the next layer that allows software defined storage to traverse the entire data center. This is where VSAN bridges internal storage and Nexenta extends the bridge across the data center: the ability to utilize not only internal storage but also third-party arrays and commodity hardware, all presented to both physical and virtual machines.

Nexenta takes this one step further with NexentaConnect, our solution that simplifies the process of deploying a VDI solution. It is a combined all-in-one VDI automation, storage auto-deployment and storage acceleration solution. NexentaConnect can be used either in conjunction with or as an alternative to VSAN. Think of it this way: if you have VSAN in place, deploying your virtual desktops on local storage means creating the VSAN and then using it much like any other traditional storage array. With NexentaConnect, you deploy the storage only after looking at the desktop needs. This gives you end-to-end SDS.

The technology industry is famous for forcing customers into either/or decisions. But while vendors want customers to choose one product over another, customers very often need and want both. VSAN strikes us as a great example of a savvy vendor realizing that customers want both the comfort of their existing legacy storage system and the gateway to SDS. VSAN plus NexentaStor delivers exactly that combination.

Posted in Software-defined storage, Virtualization

CCOW: Negotiating Rendezvous Groups

CCOW™ Replicast is a storage protocol that uses multicast communications to determine which storage servers will store content and then retrieve it for a consumer. It also allows content to be accepted/delivered/replicated using multicast communications. Content can be placed once and received concurrently on multiple storage servers. Replicast can also scale to very large clusters and can support multiple sites, and each site can be as large as the networking elements will allow.

In order to understand how Replicast works, you must first understand how it uses multicast addressing: specifically, how the roles of the Negotiating Group and Rendezvous Group differ from the Consistent Hashing algorithms that are the usual solution for distributed storage systems.

How Conventional Object Storage Systems use Consistent Hashing

The Object name (or, in the case of a chunk, its payload) is used to calculate a Hash ID. This ID is then mapped to an aggregate container for multiple objects/chunks (OpenStack Swift calls these “Swift Partitions”; CEPH calls them “Placement Groups”). Whatever the quality of the hashing algorithm, the defining property of consistent hashing is that the mapping is deterministic: the Object Name alone determines the set of storage servers, so if you start with the same set of storage servers, the same content will always map to the same storage servers.
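A minimal sketch of this name-to-servers mapping (the partition count, server names, and the simplified partition-to-server rule are all illustrative assumptions; real Swift uses a ring builder with weights and zones):

```python
import hashlib

NUM_PARTITIONS = 2 ** 10                      # illustrative "part power" space
SERVERS = [f"server-{i}" for i in range(8)]   # hypothetical cluster
REPLICAS = 3

def name_to_partition(object_name: str) -> int:
    """Hash the object name and map the Hash ID to a partition."""
    digest = hashlib.md5(object_name.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

def partition_to_servers(partition: int) -> list:
    """Deterministically map a partition to REPLICAS storage servers."""
    start = partition % len(SERVERS)
    return [SERVERS[(start + i) % len(SERVERS)] for i in range(REPLICAS)]

# The defining property: the same name always yields the same servers.
assert partition_to_servers(name_to_partition("photo.jpg")) == \
       partition_to_servers(name_to_partition("photo.jpg"))
```

The point of the sketch is that the server set falls out of the name alone; nothing about the servers' current workload enters the calculation.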

Promoters of Consistent Hashing point out that it limits the amount of content that must be moved when the set of storage servers changes: a 1% change in cluster membership requires relocating only 1% of the content. In the long run you actually want that 1% of the content to move to the new servers; and should 1% of the content be lost, you would want to create new replicas of the lost 1% on other servers anyway.
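That relocation property can be checked with a toy consistent-hash ring. The server names, virtual-node count, and key set below are illustrative, and the exact fraction moved varies slightly with the hash:

```python
import bisect
import hashlib

def build_ring(servers, vnodes=128):
    """Place each server at many pseudo-random points on a hash ring."""
    return sorted(
        (int(hashlib.md5(f"{s}:{v}".encode()).hexdigest(), 16), s)
        for s in servers for v in range(vnodes))

def lookup(ring, key):
    """A key belongs to the first server point at or after its hash."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    i = bisect.bisect(ring, (h,)) % len(ring)
    return ring[i][1]

before = build_ring([f"s{i}" for i in range(100)])
after = build_ring([f"s{i}" for i in range(101)])  # add 1 server: ~1% change

keys = [f"obj-{k}" for k in range(10_000)]
moved = sum(lookup(before, k) != lookup(after, k) for k in keys)
print(f"{moved / len(keys):.1%} of keys relocated")  # roughly 1 in 101
```

Growing the cluster from 100 to 101 servers relocates roughly 1% of the keys, exactly the proportionality the promoters describe.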

Where CCOW Replicast differs is that it can be far more flexible about when that replication occurs and more selective about which data is replicated. Replicast assigns locations by a different method, one that deals more efficiently with evolving cluster membership and achieves far higher utilization of cluster resources when the membership isn’t changing.

Negotiating Group

CCOW Replicast uses a “Negotiating Group” to effectively track a chunk’s “location”. An object name still yields a Name Hash ID (computed from the name of the Named Manifest), but that Hash ID maps to a Negotiating Group. When a Manifest references a Chunk, the chunk is found by mapping its Chunk ID (the Content Hash ID of the Chunk) to a Negotiating Group.

The Negotiating Group will be larger than the set of servers that Consistent Hashing would have assigned; ten to twenty members is typical. The key is that the client, or more typically the Putget Broker acting on the client’s behalf, uses multicast messaging to communicate with the entire Negotiating Group at once. Effectively the Putget Broker asks, “Hey, you guys in Group X, I need three of you to store this Chunk.” A negotiation then occurs amongst the members of the Negotiating Group to determine which three (or more) of them will accept the Chunk, when, and at what bandwidth.
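A rough sketch of that negotiation (the group count, group size, load metric, and chunk IDs are illustrative assumptions, not the Replicast wire protocol):

```python
import hashlib
import random

GROUPS = 64        # hypothetical number of Negotiating Groups in the cluster
GROUP_SIZE = 16    # ten to twenty members is typical
REPLICAS = 3

def negotiating_group(chunk_id: str) -> list:
    """Map a Chunk ID (a content hash) to its Negotiating Group."""
    g = int(hashlib.sha256(chunk_id.encode()).hexdigest(), 16) % GROUPS
    return list(range(g * GROUP_SIZE, (g + 1) * GROUP_SIZE))

def negotiate_put(chunk_id: str, load: dict) -> list:
    """The Putget Broker multicasts "I need three of you"; the three
    members with the lightest current load win the bid."""
    group = negotiating_group(chunk_id)
    return sorted(group, key=lambda s: load.get(s, 0))[:REPLICAS]

# Simulated per-server queue depths at the moment of the put.
load = {s: random.randint(0, 10) for s in range(GROUPS * GROUP_SIZE)}
winners = negotiate_put("chunk-abc123", load)
```

The same chunk always negotiates within the same group, but which members win depends on the load at the moment of the put, which is precisely what a pure consistent hash cannot express.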

“Negotiating” sounds complex, but the required number of message exchanges is actually the same as for any TCP/IP connection setup. The Negotiating Group can therefore determine where the Chunk will be stored in the same number of network interactions that Swift needs just to set up its first TCP/IP connection; for the default replication count of three, Swift requires three such connections.

More importantly, a consistent hashing algorithm (such as Swift uses) will always pick the same storage servers, independent of the current workload on those servers. Many consider this the price of eliminating the need for a central metadata server.

With Consistent Hashing, the 3 “least busy” servers are selected from a candidate set of exactly 3 (assuming a replication count of 3); in other words, there is no choice at all, and you may well be selecting the 3 busiest servers in the cluster. With CCOW Replicast you select the 3 servers with the least workload from all the available servers in the Negotiating Group.

Implications of Dynamic Selection

With dynamic load-sensitive selection, CCOW Replicast enables you to a) run your cluster at higher performance levels than Consistent Hashing would allow, and b) still have lower latency.

A well-balanced storage cluster will, at peak usage, want individual storage servers to be busy only 50% of the time. If they are heavily loaded less than 50% of the time, the cluster could accommodate heavier peak traffic and you have overspent on your cluster. If they are loaded more than 50% of the time, some requests will be badly delayed and your users will start complaining. So if the chance of a randomly selected storage server being busy is 50%, what are the chances that all 3 randomly selected storage servers are idle? Only 0.5 × 0.5 × 0.5 = 12.5%.
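A quick simulation of that arithmetic, comparing blind random placement against picking the three idlest members of a 16-server Negotiating Group (the 50% busy figure and the group size are assumptions carried over from the paragraph above):

```python
import random

random.seed(1)
TRIALS = 100_000
GROUP = 16

hits_random = hits_group = 0
for _ in range(TRIALS):
    # Each server is independently busy 50% of the time.
    busy = [random.random() < 0.5 for _ in range(GROUP)]
    # Consistent hashing: the same 3 fixed servers, busy or not.
    if not any(busy[:3]):
        hits_random += 1
    # Replicast: succeed if any 3 members of the group are idle.
    if busy.count(False) >= 3:
        hits_group += 1

print(f"3 fixed servers all idle:  {hits_random / TRIALS:.1%}")  # ~12.5%
print(f"group offers 3 idle picks: {hits_group / TRIALS:.1%}")   # ~99.8%
```

Under these assumptions a consistent hash lands on three idle servers only about one time in eight, while a 16-member group almost always contains three idle servers to choose from.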

When it is time to retrieve a Chunk, the client/Putget Broker does not need to know which servers were selected. It merely sends a request to the Negotiating Group; the group picks one of its members holding the desired chunk, and a rendezvous is scheduled to transfer the data.

Rendezvous Group

While the Negotiating Group plans a transfer, the transfer is executed by the Rendezvous Group. The Rendezvous Group implements Replicast’s most obvious feature: send once, receive many times. Transfers sent via the Rendezvous Group are efficient not only because the data needs to be sent only once, but also because all Rendezvous Transfers use reserved bandwidth and can start at full speed. There is no need for a TCP/IP ramp-up.

An important aspect of Rendezvous Groups is that they are easily understood and verified, with a known relationship to the membership of the Negotiating Group:

  • Put transactions – every member of the Rendezvous Group is either a member of the Negotiating Group with a planned Rendezvous Transfer or a slave drive under the active control of such a member.
  • Get transactions – the principal member of the Rendezvous Group is the client or Putget Broker that initiated the get transaction. Additional storage servers may also be included, just as in a put transaction; these additional targets piggy-back the creation of extra replicas on the delivery that the client required anyway.
Posted in Uncategorized

Cheers – It’s been a blast! – From Evan Powell

After six years I’m leaving Nexenta.

I could not be prouder of what we’ve built at Nexenta.  We took an idea that at the time was radical – let’s bring openness right to the foundation of IT, to the storage itself.  And we pulled it off.

Along the way I learned a lot including:

  • Team, team, team – the best team beats the brilliant individual every time
  • ZFS is great – and not perfect; thank you Sun for ZFS and Solaris (carried forward today as Illumos).  It says a lot about the requirements of storage that even Solaris, arguably the 2nd most deployed OS for mission-critical enterprise environments, was not fully mature for storage. We’ve added a lot of fixes to Illumos over the last few years.
  • And much more that I’ll just call “experience.” Much of what you learn when you start and build companies ends up sounding like common sense; for example, what kind of executive is right for what stage of a company or how do you make money on open source?

So, what am I up to now?  A few answers:

  • Continuing to spend time as an EIR with xSeed.  Over the last several months I’ve been blessed to meet many start-ups while an EIR at xSeed.  xSeed is an unusual seed fund.  They are deep in enterprise and have arguably the best ties of any early stage enterprise focused fund into Stanford and Berkeley.  They are old school in that they are definitely NOT spray and pray (which can be a great strategy actually).  Instead, they seek to go deep and to understand a domain and to find within that domain the very best entrepreneurs; they then surround that entrepreneur with coaches and advisors and much more than just capital.  In addition to meeting many companies I’ve been privy to many mind expanding discussions at xSeed about companies, and domains – about everything from the chess of strategy to the tough soft stuff of finding, retaining and sometimes transforming teams.  So I’ll be spending a bit more time with my colleagues at xSeed.
  • Finally, the rumors are true – I’m helping found another company.  We are in stealth mode.  I can tell you it is NOT in storage (although storage infects everything in IT).  And that it IS still very much on the side of openness and open source.  As you might imagine, I did a huge amount of diligence and founder dating before deciding to bet the next X years of my life on the opportunity.  So it is out of the frying pan and into the fire for me.

Here’s what I’d like you to do:

  • Stay in touch.  Maybe I’m getting to be sentimental in my middle age, but the people I’ve met as Nexenta has grown up are truly extraordinary.  You know who you are. You are the early channel partners who never shied away from offering “constructive feedback.”  You complain too much, but I love you.  And likewise you are the team members that parked your personal lives for weeks, months and years at a time as we fought the good fight.  Despite many missteps and the joy of starting a company during the worst financial collapse since the 1930s, you stayed true to a vision of more open storage and of what we now call software defined storage.  And maybe most importantly – you are the thousands of customers that bet your company’s data on Nexenta and sometimes bet your jobs and careers as well.  Follow me and, yes, complain to me at @epowell101.
  • Insist on excellence.  Like many entrepreneurs I think I start companies because half assed solutions to important problems make me sick (plus I probably have a complicated relationship with authority).  At a fundamental level – at the level of a life’s mission – I am dedicated to finding waste and calcification and cracking through it somehow.  How many of the world’s problems would be less serious if we all talked less and fixed stuff more?
  • Insist on openness and transparency.   Go open, it is the future, and it will make your company a smarter, more competitive business; don’t cop out and defer the decision onto the next guy in your job.  Eventually people will be fired for buying vendor X’s product; don’t be the last buyer to abandon ship when a legacy vendor goes aground.  Also, please be open within your organization as well.  Collectively we are all smarter than any one of us; by remaining open, we’ll get better technology, better team work, and happier lives.

I guess I did get sentimental.

Thanks for tolerating that sentimentality and for reading this blog and thanks everybody for helping a crazy vision come true.  Stay tuned here or via @epowell101 on twitter.

-Evan Powell

Posted in Corporate, Software-defined storage

How Does Software Defined Data Storage Equate to Savings?

Ask any CIO what their greatest concern is, and they’ll invariably come up with some variation of concern over the budget. It’s a Catch-22 for many businesses when it comes to technology: There is always a faster, more reliable option, but it always comes at a cost. So how did the big guys get to the top? How have they learned to strike a balance between cost and effectiveness without compromising either entirely? When it comes to data, more and more have chosen to look at software defined data centers. Here’s why:

There’s no question that data is growing exponentially as we frequently use technology for every aspect of our lives – from shopping and paying bills to reconnecting with friends on social media. All of these things produce data – a lot of data in fact. Studies have estimated that we create 2.5 quintillion bytes of data each day. All of this data needs to be stored, and it can be quite costly. Some organizations are spending as much as 40% of their IT budget on storage solutions.

Therefore, government agencies, credit card companies, health care facilities, social media sites, retailers and many other entities are constantly looking for ways to store this data efficiently and affordably. Software Defined Data Centers (SDDC) address all of these concerns and more because they take the virtues of virtualization and apply them to data storage.  In fact, our customers have reported that they have saved as much as 75% in storage costs.

Here are just a few of the ways businesses can save using software defined data storage:

Scalability: Did you know that 90% of the data out there has been created in the last two years? Just think about what this could mean in terms of data storage ten years from now. As data grows, more and more hardware must be purchased to keep up with demand. Software defined data storage solutions are more easily scalable, with greater savings.

Operational Cost Savings: We talked about scalability, but what about the operational costs that traditional storage methods present? Energy costs to cool the storage system, labor costs to monitor and troubleshoot it, and ongoing maintenance are just some of the expenses that a software defined storage system reduces.

Open Source Opportunities: Hardware storage systems are closed systems. You are bound by specific vendors, which could limit your flexibility and opportunities to adapt when needed. Because software defined data storage solutions are often open-source, you can take advantage of the latest technologies and adapt your storage solution for the lowest cost.

For more information about the benefits of software defined data storage solutions, contact us today.

Posted in Corporate, Software-defined storage

VMware View Acceleration on Display

As VMworld Europe 2013 approaches, Nexenta continues to drive home the advancements in application-centric storage with Nexenta VSA for VMware Horizon View.  As the demo for the show is completed and desktop pools are created and tested, it is always exciting to see independent tests done and presented.  Just ahead of VMworld, VMware’s EUC team posted a blog detailing the results of their testing of Nexenta’s VSA for VMware Horizon View.  With a 72% increase in desktop density and a 38X reduction in physical SAN traffic, VMware found VSA for VMware Horizon View to be key to a successful VDI rollout.  These performance statistics are not just for show.  The reduction in traffic and increased density does not just help the balance sheet; it can help stalled deployments move forward.

“With VSA for Horizon View, Nexenta has introduced an amazing product that unlocks outstanding user experience at a low TCO and makes it possible to recover stalled deployments without requiring a disruptive and painful rip and replace scenario.”  -John Dodge, VMware

If you would like to see this technology in action, dig into the performance metrics, or learn about the acceleration it provides for deployments, make sure to come by the Nexenta booth (Hall 8, S-300) at VMworld Europe.

Posted in Virtualization

Nexenta Systems-Powered Storage Solution Achieves 1.6 Million IOPS

Nexenta has achieved 1.6 million IOPS (Input/Output Operations per Second) and high-availability with no single point of failure. Comparable solutions from proprietary vendors cost significantly more than the Nexenta and Area Data Systems solution and cannot guarantee high-availability. With the combination of Nexenta’s Software-defined Storage, NexentaStor™, and high-performance, all-flash hardware, there is now a clear enterprise-class alternative to meet the scalability demands of big data.

“Our customers can now reach well over one million IOPS and capitalize on big data opportunities without breaking the bank on proprietary storage technologies that cost hundreds of thousands of dollars,” said Bridget Warwick, Chief Marketing Officer, Nexenta Systems. “This is further proof that Nexenta’s Software-defined Storage is changing the economics of the enterprise storage market.”

Nexenta is demonstrating the 1.6 million IOPS storage configuration at Intel® Solutions Summit 2013 from March 19-21, 2013 in Los Angeles, Calif. Nexenta is a Silver Sponsor and will be at its booth in the storage zone to discuss the enormous opportunity for Intel channel partners to drive ideal storage solutions, powered by Nexenta, to their customers. Architecture recipes using Nexenta and Intel products are listed on Intel’s website at:

Posted in Corporate, Software-defined data center

A Few Impressions from Dell’s Banking Day in NYC

Three Somewhat Surprising Trends

Contributed by Evan Powell, Chief Strategy Officer, Nexenta Systems

Back in May, I was thrilled to discuss Software Defined Storage at Dell’s banking day in their offices in One Penn in NYC. I was one of two guest speakers; the other was Gartner’s Joe Unsworth, who did a great job outlining the transition to flash-based storage. After our fairly brief presentations and some Q&A there was an open round table discussion. The attendees were a who’s who of global financial IT leaders including CIOs and VPs of technology and storage of most “too big to fail” banks; we had a couple of already highly referenceable customers in the audience as well, which was great. A friend at Dell estimated that the collective IT capital purchases of the attendees were approximately $20-$30bn per year. I cannot thank Dell enough for the opportunity and for the partnership.

As an aside – I think all of us in IT owe Dell a debt for their willingness to shift towards enterprise and towards a vision of enterprise IT that, for me, is more compelling, more open, and much more dynamic than many legacy system vendors from which Dell is rapidly taking market share. Maybe I should blog sometime soon about why we are Dell fans – I’d welcome the input of folks that read this blog. For now, suffice it to say that I think Dell is doing a good job leveraging their strengths including supply chain management and global support to both enable and benefit from the ongoing re-platforming of IT. Yes – I am biased since Dell recently started paying their sales teams on NexentaStor – so take those comments with a grain of salt. On the other hand – we targeted Dell as a preferred tier one vendor because they are so well positioned so our money and focus is where our mouth is.

The nature of the Banking Day conversations is that they are closed door and vendor neutral. I did not try to sell Nexenta’s products or even the Dell hardware and services we leverage to deliver software defined storage. Instead I tried to kick off a real conversation.

Here are a few observations. First – some comments and themes I expected and then 2-3 really surprising comments.

As expected, these buyers are more interested in agility than they are in cost savings. And, with one or two exceptions, they assented freely to the notion that legacy storage is done, finished, a thing of the past; it feels like the transition to a software defined data center is just the straw breaking the legacy camel’s back.

Perhaps most surprising to me were a few items:

  1. Increased recognition of the inevitability of cloud-based approaches. I’ll call this acquiescence #1. Many financials have been fighting the easy on-ramp of AWS for years as they struggled to get their thousands of developers to keep their IP on premise and protected. There seems to be a sense that only by building a better, safer, more performant and massively easier to deploy and manage IT platform could they attract developers to stay within the enterprise. I sensed a lot less willingness to fight their own users than in the past and much more confidence in their ability to deliver a better solution that will retain users.
  2. Acquiescence #2 – BYOD is here to stay. Again, maybe I’m just out of touch; however, RIM and BlackBerry rose to prominence in part because of the mandates of buyers (and their colleagues in the government). And now the iPad, Android devices and the like are a fact of life that Software Defined Storage and the rest of IT has got to accommodate.
  3. Nobody believes today’s all flash landscape will be with us in 18 months. Here I may be stealing Joe and Gartner’s thunder slightly. Suffice it to say that he presented a fairly provocative view of likely changes and everyone agreed that today’s apparent leaders are unlikely to win longer term. Hybrid players like Nexenta-based solutions and Nimble did receive more support.

I’d be remiss if I didn’t point out one final acquiescence which may be why the event was so well attended – I think there is more uncertainty over the fundamental structure of IT than I’ve seen since I first started partnering with and selling to these buyers 10-15 years ago. The storage teams feel like they are under threat – and they are. In a way it is similar to what I experienced when building Clarus Systems (now Riverbed) and the voice teams were realizing that voice and video convergence with the IP networks could mean “career convergence” as well. As the software defined data center progresses, you’ll see much more need for a true DevOps mindset and skill set. Service engineering is now the hot commodity, and folks that know a particular silo really well are increasingly being flanked by those that build IT platforms that deliver on the agility promised by software defined data centers.

Hopefully these few nuggets are of interest. All in all, it is tremendously exciting to see some of the most credible and financially powerful IT buyers and partners (again – thank you Dell!) assent to the notion that software defined storage has got to happen for IT to remain relevant and to deliver on the promise of a more agile platform. I learned a lot from the conversations.

Posted in Corporate, Software-defined storage

Congratulations to EMC!

Congratulations to EMC and their software teams for announcing ViPR. Since we have been selling software defined storage for a number of years – and now have many times more customers than VMware did when EMC bought them (and more than 10x what 3PAR had when they went public, for example) – I take exception to the lead in the press release proclaiming ViPR as “the world’s first Software Defined Storage platform…”

Nonetheless, ViPR appears to be a real step forward towards software defined storage. And EMC deserves a lot of credit for again showing a willingness to risk aspects of their core business in order to keep up with customer requirements.

If you are one of the folks to read this blog regularly, you know we have shared a simple definition of SDS. You can read more about it here. Our definition is based on countless discussions with our cloud and enterprise customers who have shared with us why they started down the journey to software defined storage in the first place.

Basically it is 1) Abstract away the underlying hardware. 2) Achieve flexibility through the ability to handle multiple data access methods and data types. 3) Be truly software defined – through an architecture and set of APIs that allows, for example, orchestration software to manage the storage and to determine to what extent it is meeting application requirements.

If you look at what we know about ViPR – I think it is software that is policy driven that delivers object storage and that also manages and possibly virtualizes block and file storage. I gathered this especially from the more detailed write up over on EFYTimes.

It’s difficult to glean much from a press storm and I know that things will be much clearer once we see more detail from EMC and customers but let’s look at early indications of how ViPR might shape up based on those criteria.

  1. Abstraction
      • ViPR: ViPR does not, it appears, add a consistent set of storage management capabilities over any hardware – it exposes and manages those that are already available on the hardware. If you are on an array with snapshots – congratulations, you’ve got (some sort of) snapshots. On the other hand, if you are on a JBOD, no luck. Additionally, of course, ViPR does not open up the on-disk format as it is generally not in the data path. This means vendor lock-in remains and arguably increases as ViPR hooks into your VMware environment.
      • NexentaStor: Conversely, NexentaStor runs on any hardware, including high performance SSDs to deliver caching, and of course JBODs, and does deliver that consistent set of capabilities irrespective of the underlying hardware. But NexentaStor really prefers JBODs to legacy storage arrays, and it is extremely likely that ViPR will be better able to manage heterogeneous storage arrays, especially those from EMC, than NexentaStor does; NexentaStor can virtualize them but is not aware of their underlying capabilities in the way that ViPR will be.
  2. Achieve flexibility. The basic difference is that NexentaStor is broader and more flexible than we think ViPR will be when it ships thanks, again, to controlling everything from the on-disk format to the access methods. On the other hand, while Nexenta has sponsored open source object approaches, we are not shipping an object storage solution today, whereas ViPR will include object. Whether we will ship object by the time ViPR ships remains to be seen.
      • A lot depends on the extent to which ViPR can actually virtualize the underlying resources by combining them into pools that include SSDs; NexentaStor has this ability today, which is why we have partners shipping JBODs with cache achieving 1 million IOPS and more. On the other hand, the promised capability of ViPR to turn object into file and vice versa could be important.
      • I am hopeful that in this area ViPR will be a massive step forward vs. legacy arrays which are essentially black-holes for your data, each requiring a different set of expertise to manage and built to address a different silo of data.
      • What needs to be seen is how ViPR will handle putting the right data on the right underlying array. Whereas with NexentaStor the configurations themselves, such as the block sizes used to write the data to disk, are variable, in the case of ViPR the software has to make sure that, for example, video files needed for streaming are stored on underlying Isilon arrays, structured data like Oracle remains on VNX, and presumably high random I/O workloads from larger cloud and VMware deployments are served from XtremIO.
  3. Being software defined. This is arguably the most vague section of our fairly vague definition of software defined storage. Today, however, IF ViPR is routing data sets based on application requirements to the right underlying array – per the point above – then it may well have the architecture necessary to close the application management loop. By comparison, NexentaStor can absolutely eliminate the need for deep storage engineering with solutions like VSA for VDI. In this solution the customer must simply enter the number and type of desktops and NexentaStor – with integration code for VDI – does the rest AND, crucially, tests and manages the system to ensure that the requirements are being met.
      • Nexenta, however, built the VSA for VDI business logic in part in hopes of seeing others in the industry run with the task. Arguably orchestration solutions like aspects of OpenStack and CloudStack and even VMTurbo should pick up the baton if they are truly going to be the brain inside the software defined data center. It may be that EMC with ViPR and of course Vmware will lead the industry in creating an open approach to characterizing application requirements and using them to simplify management.
      • Please note – plaintive request – what the storage layer really needs is something like the recently announced Project Daylight from IBM, Cisco, Juniper and of course the Linux Foundation. I think even Nicira / VMware / EMC is joining that effort to open up the control layer. Read more about Project Daylight here
      • In the meantime, Nexenta’s upcoming Metis utility – which ties application logic to details like pool configurations – is growing in value and importance, with integration into our and our partners’ Salesforce, for example, and into ServiceNow and other management solutions in the future. However, again, Nexenta cannot be the business logic of a software defined data center on our own. The industry needs to come together here, and maybe ViPR will be a catalyst to make that happen.
Posted in Corporate, Software-defined storage

A Few Observations From OpenStorage Summit About SDS

OSS EMEA 2013 was one of the more inspiring few days I’ve experienced recently at Nexenta.  It was not a marketing event.  It was war stories about the shift to OpenStorage and Software Defined Storage shared in sessions, over demos, and, yes, over beers.

A few things I learned included:

  • SDS is already real and leverages commodity hardware.  I had the opportunity to facilitate a cloud panel where I learned that the top hosting and cloud companies in Northern Europe are using NexentaStor as software defined storage – right now.  SDS is not done – we are furiously adding capabilities as is the broader community.  More below on that subject.  But – Schuberg Philis presented on how they manage NexentaStor via Chef and use it as the basis of their cloud infrastructure TODAY.  And how the use of commodity hardware means cost savings now and in the future and also – greater flexibility and supportability.  Schuberg Philis is rapidly moving towards an infrastructure comprised almost entirely of commodity hardware and software, including their use of CloudStack, KVM, Arista, Nicira, and open approaches to security and load balancing as well.
  • SDS is about to take the next step.  While approximately 190 customers and partners attended OSS and discussed in part their usage of NexentaStor as a version of software defined storage – the booths attracting the most attention were those showing forthcoming Nexenta capabilities that add infinite scalability of the management framework through further separating the control and the management frameworks as well as those that map application requirements to storage software and hardware configurations.
  • SDS isn’t just about deep and cheap.  While many larger enterprises use NexentaStor initially as second and third tier storage to save money, over time customers often come to rely on it for the flexibility of the solution and for its performance when used as hybrid flash or all-flash storage.  Marik Lubinski, LeaseWeb’s Virtualization and Storage Engineer, reported that as one of the largest hosting companies in Europe, with operations extending to the United States, they have used just about every legacy storage solution available on the market today.  And he reported that NexentaStor is “the fastest storage by far, of any storage we run.”

SDS and its foundation, OpenStorage, are not just about marketing.  Despite many blogs and statements from legacy vendors arguing either that they already have SDS or that they soon will, the simple fact is that they have neither the open approach and software-only business model needed, nor – as last week reminded me – the people, the community and the sheer number of progressive users that OpenStorage-based SDS has accumulated.  Together we are making a fundamentally better approach to enterprise-class storage a reality.

Posted in Software-defined data center
