Nexenta Blog

Global Leader in Software Defined Storage

Accelerate your Horizon 6 deployment with NexentaConnect 3.0!

September 18, 2014 By mletschin

Nexenta is proud to announce the general availability of NexentaConnect 3.0 for VMware Horizon (with View). The VDI acceleration and automation tool delivers increased desktop density and higher I/O performance for existing storage deployments as well as greenfield VDI deployments. NexentaConnect 3.0 introduces many new features and enhancements, including:

  • Full support for VMware Horizon 6
  • Pass-through support for VMware GPU
  • Import Horizon View desktop pools
  • Fast desktop pool restoration from backup

Combining all these great new features allows you to now accelerate and grow your existing Horizon deployment, which may have been limited by traditional storage solutions.

To learn more about NexentaConnect for VMware Horizon, go to http://www.nexenta.com/products/nexentaconnect/nexentaconnect-horizon and download the 45-day free trial.

Nexenta Launches Revolutionary Software-Defined Scale Out Object Storage Solution for OpenStack and Big Data Infrastructures

August 19, 2014 By Nexenta

NexentaEdge – Taking Nexenta’s ZFS DNA to Cloud Scale

Thomas Cornely, Chief Product Officer, Nexenta

VMworld 2014 in San Francisco promises to be an incredible event for Nexenta. In addition to our OpenSDx Summit on 8/28, as a VMworld 2014 Platinum Sponsor we’re gearing up for a slew of new product announcements and demos. We’re particularly excited about the opportunity to showcase NexentaEdge, the latest addition to our Software-Defined Storage portfolio, specifically targeting the petabyte-scale, shared-nothing, scale-out architectures required for next generation cloud deployments. NexentaEdge is a software-only solution deployed on industry standard servers running Ubuntu, CentOS or RHEL Linux. Version 1.0 will support iSCSI block services with OpenStack Cinder integration, Swift and S3 object APIs, as well as a Horizon management plugin. File services are part of our design and will be delivered in follow-on versions.
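
Since NexentaEdge exposes standard Swift and S3 object APIs, any off-the-shelf S3 client should be able to talk to it. As a rough illustration, here is a minimal Python sketch using boto3 against an S3-compatible endpoint; the gateway address, credentials and bucket name are hypothetical placeholders, not documented NexentaEdge values.

    # Minimal sketch of using a standard S3 client against an S3-compatible
    # object endpoint. The endpoint URL, credentials and bucket name are
    # hypothetical placeholders, not documented NexentaEdge values.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://edge-gateway.example.com:9982",  # hypothetical gateway address
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.create_bucket(Bucket="demo-bucket")
    s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, object storage")
    print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())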

If you’re familiar with Nexenta, you know all about NexentaStor, our unified block and file Software-Defined Storage solution built on the ZFS file system. What made ZFS great were some key design choices that took advantage of emerging trends in the hardware landscape. Things like Copy On Write, delivering guaranteed file system consistency at very large scale, as well as high performance unlimited snapshots and clones. Things like block-level checksums, trading increasingly cheap CPU cycles for far more valuable end-to-end data integrity. This strong technology heritage, paired with Nexenta’s continued investment in performance, scale and manageability, has led service providers across the globe and a growing number of enterprise customers to rely on NexentaStor as the storage backend for their legacy cloud and enterprise applications.

If you’re in technology, you know that the only constant is change. Our service provider customers are busy scaling their infrastructure, moving to next generation open source cloud infrastructure like OpenStack and CloudStack and looking for even more scalable and cost efficient storage backends. For that, we’ve built NexentaEdge. And rather than quickly combine existing open source pieces and hack on top of them, we deliberately took some time to design a solution from the ground-up, and we made sure our design reflected the lessons we learned from our years of working with ZFS. We also made sure our design looked forward and was ready for what we see as new emerging trends in the storage landscape. The net result is a truly next generation scale out solution that incorporates what we like to call our ZFS DNA.

For example, one core design aspect of NexentaEdge is something we call Cloud Copy On Write. While the system is truly distributed, with no single point of failure, data in the cluster is never updated in place. Very much like ZFS, but applied in a distributed context, this gives us a great foundation for advanced data management and the ability to gracefully handle the variety of failure scenarios that affect large scale-out clusters. Another example is end-to-end data integrity: all chunks of objects in NexentaEdge are stored with cryptographic hash checksums that deliver ultimate data integrity, and the system automatically self-heals when corruption is detected in a chunk.
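
To make the idea concrete, here is a toy Python sketch of content-addressed chunks with cryptographic checksums and self-healing reads: chunks are stored under the hash of their contents, every read verifies that hash, and a corrupted replica is repaired from a healthy one. This illustrates the concept only and is not NexentaEdge’s actual implementation.

    # Toy content-addressed chunk store: chunks are written under the SHA-256 of
    # their contents (never updated in place), reads verify the checksum, and a
    # corrupted replica is healed from a good one. Concept illustration only.
    import hashlib

    class ChunkStore:
        def __init__(self, replica_count: int = 3):
            self.replicas = [dict() for _ in range(replica_count)]  # hash -> chunk bytes

        def put(self, data: bytes) -> str:
            chunk_hash = hashlib.sha256(data).hexdigest()
            for replica in self.replicas:         # new content always gets a new address
                replica[chunk_hash] = data
            return chunk_hash

        def get(self, chunk_hash: str) -> bytes:
            for replica in self.replicas:
                data = replica.get(chunk_hash)
                if data is not None and hashlib.sha256(data).hexdigest() == chunk_hash:
                    self._heal(chunk_hash, data)  # repair replicas that failed verification
                    return data
            raise IOError(f"no intact replica for chunk {chunk_hash}")

        def _heal(self, chunk_hash: str, good: bytes) -> None:
            for replica in self.replicas:
                stored = replica.get(chunk_hash)
                if stored is None or hashlib.sha256(stored).hexdigest() != chunk_hash:
                    replica[chunk_hash] = good

    store = ChunkStore()
    h = store.put(b"immutable chunk")
    store.replicas[0][h] = b"bit rot"             # simulate silent corruption on one replica
    assert store.get(h) == b"immutable chunk"     # read detects it, returns a good copy, heals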

Another critical aspect of the design is the recognition that building large scale out clusters is as much a storage challenge as a networking one. So we paid particular attention to how we consume precious network bandwidth and how we automatically route data around hot spots and busy nodes. These functions are implemented as part of what we call FlexHash (for dynamic flexible hashing that automatically selects the best targets for data storage, or data access based on actual load states) and Replicast (our own network protocol that minimizes data transfers and enables lower latency data access).
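
As a rough sketch of the general idea (not the actual FlexHash algorithm), the following Python snippet ranks storage targets using a deterministic per-chunk hash weight discounted by each node’s current load, so busy nodes are naturally routed around; the weighting scheme here is our own assumption for illustration.

    # Illustration of load-aware target selection: a deterministic per-chunk
    # weight per node, discounted by that node's reported load. Assumed scheme
    # for illustration only, not the actual FlexHash algorithm.
    import hashlib

    def rank_targets(chunk_id: str, node_load: dict[str, float], copies: int = 3) -> list[str]:
        """node_load maps node id -> current load in [0.0, 1.0); returns preferred targets."""
        def score(node_id: str) -> float:
            digest = hashlib.sha256(f"{chunk_id}:{node_id}".encode()).hexdigest()
            weight = int(digest, 16) / float(1 << 256)   # deterministic rendezvous-style weight
            return weight * (1.0 - node_load[node_id])   # steer placement away from hot spots
        return sorted(node_load, key=score, reverse=True)[:copies]

    cluster_load = {"node-a": 0.10, "node-b": 0.85, "node-c": 0.20, "node-d": 0.05}
    print(rank_targets("chunk-3f9a", cluster_load))      # the busy node-b is rarely selected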

Last but not least, a great design proves itself by how naturally advanced features flow out of it. One such feature is cluster-wide inline deduplication. NexentaEdge clusters get inline deduplication of all data at the object chunk or block level. Chunks are variable in size and can be as small as 4KB. As we will demonstrate at the Nexenta booth at VMworld 2014, this lets us create hundreds of virtual machines on iSCSI LUNs while consuming little more than one copy’s worth of capacity in the cluster.
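
A toy Python sketch of chunk-level inline deduplication makes that demo scenario easy to picture: identical chunks are stored once and referenced by hash, so a hundred clones of the same image consume roughly one copy’s worth of unique data. Fixed 4KB chunking is used here for brevity; the variable chunk sizes mentioned above are not modelled.

    # Minimal sketch of inline deduplication at the chunk level. Identical chunks
    # are stored once and referenced by their hash; fixed 4KB chunks are a
    # simplification for illustration.
    import hashlib
    import os

    CHUNK_SIZE = 4096
    unique_chunks: dict[str, bytes] = {}             # chunk hash -> chunk data, stored once

    def write_object(data: bytes) -> list[str]:
        """Split data into chunks, keep each unique chunk once, return the recipe."""
        recipe = []
        for offset in range(0, len(data), CHUNK_SIZE):
            chunk = data[offset:offset + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            unique_chunks.setdefault(digest, chunk)  # a duplicate chunk costs no extra space
            recipe.append(digest)
        return recipe

    # One hundred "virtual disks" cloned from the same 1 MB image consume
    # roughly one copy's worth of unique chunk data in the store.
    golden_image = os.urandom(1024 * 1024)
    recipes = [write_object(golden_image) for _ in range(100)]
    stored_bytes = sum(len(chunk) for chunk in unique_chunks.values())
    print(f"logical: {100 * len(golden_image):,} bytes, stored: {stored_bytes:,} bytes")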

NexentaEdge is here. And we think it will be big. Over the coming weeks, we’ll dig a bit deeper into the technology and share details on Cloud Copy On Write, FlexHash and Replicast. See you at the Nexenta booth at VMworld 2014!

Software-Defined Storage Saving the Economy

August 6, 2014 By Nexenta

Jill Orhun, VP of Marketing and Strategy, Nexenta

Faced with an explosion of data from macro trends like social media, mobile, the Internet of Things, and Big Data, many organisations must meet snowballing technology requirements with flat or declining IT budgets; in short, they must do more with less.

Storage is often the highest single line item in these reduced or static IT budgets, making the strategy of throwing more storage hardware at the data explosion problem less and less acceptable. Many of today’s organisations, such as picturemaxx, the University of Sussex and the Institut Laue-Langevin, have found a way to step away from MESS (massively expensive storage systems) and have discovered a more scalable, flexible, available and cost-effective alternative: Software-Defined Storage (SDS) solutions.

Open Source SDS solutions can be deployed in conjunction with industry standard hardware, avoiding the vendor lock-in of expensive proprietary models. This gives organisations the freedom to choose their hardware, ensuring they always get the right hardware for their requirements – and at the right price. Democratising infrastructure in this way delivers cost savings of up to 80%.

Software-Defined Storage will change everything

2014 is the year that SDS is shaking up the market by delivering on its promise of a truly vendor agnostic approach, and providing a single management view across the data centre. Organisations are beginning to coalesce around a standard definition of Software-Defined Storage, and clearing up the confusion that arises from the proliferation of approaches taken by vendors purporting to provide “Software-Defined” solutions.

Some vendors claim to offer SDS but are merely providing virtualised storage, characterised by a separation and pooling of capacity from storage hardware resources. Others claim to have SDS solutions even though their solution is 100% reliant on a specific kind of hardware. Neither definition fulfills the fundamental SDS requirement of enabling enterprises to make storage hardware purchasing decisions independent from concerns about over or under-utilisation or interoperability of storage resources. It is important to be aware of these subtle distinctions, otherwise the key SDS benefits of increased flexibility, automated management and cost efficiency simply won’t be realised.

True SDS solutions let organisations work with any protocol stack to build specialised systems on industry standard hardware, rather than limiting their choices to the expensive specialised appliances sold by the ‘MESS’ vendors.

Software-Defined Storage changes the economic game for the Software-Defined Data Centre

SDS is one of the three legs of the stool that make up the Software-Defined Data Centre (SDDC), along with server virtualisation and Software-Defined Networking (SDN). As the most costly leg, however, SDS is also a target for misdirection of terms and capture of high margins. Many vendors claiming to deliver SDS are selling hardware products with the 60% to 70% margin that has come to define the enterprise storage market. SDS is about much more than new technology innovation. True SDS lets customers do things they couldn’t do before and, most critically, fundamentally changes the economics of the enterprise storage business by increasing the hardware choices available to end customers.

Making the right choices

Organisations are in the middle of a data tsunami. According to recent reports, most of the global tidal wave of data has been created in the last two years, and it is only going to grow faster as we all demand 24/7 connectivity.

According to Research and Markets Global Software Defined Data Centre report, the market is set to explode, growing at a CAGR of 97.48% between this year and 2018. Much of this growth is due to an increased demand for cloud computing, which creates a companion demand for Software-Defined technologies to achieve large scale, economically.
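
As a quick back-of-the-envelope check of what a 97.48% CAGR implies, compounding that rate over the four years from 2014 to 2018 gives roughly a 15x market multiplier; the calculation below is ours, not a figure from the report.

    # Back-of-the-envelope: what a 97.48% CAGR means compounded over 2014-2018.
    cagr = 0.9748
    years = 4
    print(f"{(1 + cagr) ** years:.1f}x growth over {years} years")   # roughly 15x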

Customers realise that SDDC technologies offer flexibility, security, storage availability and scalability. All organisations should get informed about what true Software-Defined solutions are, so they can make better decisions about which vendors to invest in for their future SDDCs. Understanding the definitions, asking the right questions, and moving towards SDS solutions is the first critical step on the Software-Defined journey.

Welcome to the Software-Defined World

July 28, 2014 By Nexenta

Thomas Cornely, VP of Product Management, Nexenta

It’s no secret that today’s organizations are experiencing an unprecedented data explosion. As much as 90% of the world’s data has been created in the last two years and the pace of data generation continues to accelerate thanks to trends like cloud, social, mobile, big data, and the Internet of Things. These developments create additional pressure on data centre managers already struggling to make do with flat or decreasing IT budgets.

Thankfully, help is on the horizon with the emergence of the Software-Defined Data Centre (SDDC). The SDDC promises to deliver new levels of scalability, availability and flexibility, and will do so with a dramatically lower total cost of ownership. As companies like Amazon, Google and Facebook prove every day, SDDC is the future and is built on three key pillars: compute, storage and networking.

The last decade saw the transformation of the compute layer thanks to technologies from the likes of VMware, Microsoft and the open source community. The next stages are storage and networking. While Software-Defined Networking (SDN) was all the rage a couple of years ago, actual market traction and customer adoption have been slower than expected as industry players continue to work to align all the technology pieces required to deliver full SDN solutions. The story is quite different for storage. With storage typically being the most expensive part of an enterprise infrastructure, we are witnessing a dramatic acceleration in Software-Defined Storage (SDS) adoption.

2014 promises to be very significant for Software-Defined Storage as customers realize its potential for addressing their critical pain points: scalability, availability, flexibility and cost. As SDS increasingly takes centre stage, it is important to ensure customers see through legacy vendor marketing seeking to preserve their hegemony by dressing up high margin, inflexible proprietary hardware in SDS clothes. Thanks to fairly creative marketing teams most, if not all, storage vendors make some claim related to Software-Defined Storage. It is amusing to note, however, that almost all are selling closed hardware products with the 60% to 70% margin that has been the norm in the enterprise storage market over the past decade. Calling a product SDS does not make it so.

Having a lot of software in a given hardware product (as most storage arrays do) might make a product Software-Based, but it does not make it Software-Defined. Similarly, adding a management or abstraction layer on top of existing proprietary hardware (a la EMC ViPR) might increase the amount of software sold to customers, but it does not make the solution Software-Defined. What legacy storage vendors are doing is very similar to what the Unix vendors of old (remember Sun, HP and IBM) did when they added virtualization and new management software to their legacy Unix operating systems to compete with VMware. While these were technically interesting extensions to legacy technology, it was VMware running on standard Intel-based servers that truly unleashed Software-Defined compute and changed the economics of enterprise compute forever. The same is true for SDS.

Done right, Software-Defined Storage allows customers to build scalable, reliable, full featured, high performance storage infrastructure from a wide selection of (low cost) industry standard hardware. As such, SDS is about much more than the latest technology innovation. True SDS allows customers to do things they could not do before while fundamentally changing the economics of the enterprise storage business. True SDS allows customers to deal with their storage assets in the same way they deal with their virtualized compute infrastructure: pick a software stack for all their storage services and seamlessly swap industry standard hardware underneath as cost, scale and performance requirements dictate. Eliminating vendor lock-in without compromising on availability, reliability and functionality is how SDS will change the storage industry.

From a technology perspective, true SDS must be able to support any ecosystem (VMware, HyperV, OpenStack and CloudStack) and any access protocol (block, file and object), while running on a wide variety of hardware configurations, be they all flash, all disk, or hybrid. Having a strong open source DNA helps in getting an active community of users and developers around the SDS solution. SDS openness will play an increasingly important role as customers move towards converged software-led stacks that harness technologies such as cloud, hyperscale, big data, NoSQL, flash hybrids, all flash, object stores and intelligent automation.

As mentioned earlier, the SDDC will deliver new levels of scalability, availability and flexibility with significantly lower cost than today’s approaches. With storage playing such a critical role in the SDDC, the accelerating adoption of SDS in 2014 will make it a breakthrough year for what we like to call Software-Defined Everything aka SDx. When the building blocks of software defined compute, storage and networking have all been put in place, enterprises will be free from expensive vendor lock-in and free to scale more easily, innovate more rapidly and bring new solutions to market more efficiently. More than yet another technology fad, SDDC is poised to change core data centre economics and free enterprises to invest more in their own business.

Software-Defined Storage – Savior of The Internet of Things

July 23, 2014 By Nexenta

Jill Orhun, VP of Marketing at Nexenta, investigates how a new, software-driven approach to storing data could end up saving hosting providers a small fortune…

‘The Internet of Things’, or connected devices, is an integral part of many people’s daily lives. From its beginnings in Internet banking and online grocery shopping, the Internet of Things has moved on to driverless cars, learning thermostats and wearable fitness technology – and the future only holds more opportunities. As these advancements in technology continue and become more widely adopted, we will become increasingly reliant on the services they deliver and the data they generate. And the Internet of Things is only one of several ingredients contributing to today’s explosion of data – key trends like mobility, social media and big data also are driving strong demand.

The net effect of these trends and technical advancements is that data is growing at an exponential rate. Analyst firm IDC [1] predicts the digital universe will increase to 40 trillion gigabytes by 2020, equating to more than 5,200 gigabytes for every man, woman and child. It also forecasts the digital universe will double every two years from now until 2020. This data growth is driven not only by people, but also by the huge number of devices permanently connected to the Internet, transmitting data 24/7. Important questions arise: where will all this data live? And how will we manage it?
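
A quick sanity check of IDC’s per-person figure: dividing 40 trillion gigabytes by a projected 2020 world population of roughly 7.6 billion people gives a little over 5,200 gigabytes each (the population estimate is our own assumption, not IDC’s).

    # Sanity check on the IDC projection: 40 trillion GB spread across a projected
    # 2020 world population of ~7.6 billion (population figure is our assumption).
    digital_universe_gb = 40e12
    population_2020 = 7.6e9
    print(f"{digital_universe_gb / population_2020:,.0f} GB per person")   # about 5,263 GB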

Over the next five years, CIOs anticipate up to 44% growth in workloads in the Cloud, versus 8.9% growth for “on-premise” computing workloads [2]. While consumer behavior often lags behind that of enterprises, the expectation is that over time greater and greater amounts of data will live in Clouds. All of this data will put huge pressure on hosting providers to deliver industry-leading data management systems – ones that are simple, flexible, and economically friendly. We see evidence of this not only locally, but also globally.

For example, large, multinational hosting providers like Korean Telecom manage 100+ petabyte environments that will become multi-exabyte ones in the not-so-distant future. With a demanding portfolio of enterprise clients, it’s critical to keep performance high, access flexible, and costs down. To do this, hosting providers need high-functioning data centers, ones that take advantage of leading-edge technologies. To achieve this goal, we need to address each layer of the data center – compute, network and storage – with storage having the most significant budget impact.

To address the storage component, hosting providers should explore the benefits of Software-Defined Storage (SDS) solutions. SDS is already helping 1000+ Nexenta hosting provider customers keep a competitive edge by delivering high performance environments with the economics required to address demanding trends like ‘The Internet of Things’. True SDS solutions – meaning software-defined, not merely software-based – provide flexible, simple, manageable, enterprise-class, high-performance infrastructure with no vendor lock-in.

SDS technology is deployed on industry standard hardware rather than tied to expensive proprietary models. This freedom of choice gives organizations the flexibility they need to select hardware that matches their requirements, for both new and legacy environments. Combine this with all the expected storage services, plus a price point that is often 50-80% less than proprietary models, and organizations now have the means to handle the data explosion gracefully – and competitively.

And don’t just take our word for it; our customers agree:

“We have spent less than one third of the investment we could have made with one Oracle SAN unit. This cost saving provides us with additional capital, which can be used for other IT resources to ensure we are meeting all our service level agreements.”

“SDS has helped us to provide the highest performance, scalability and flexibility to our growing customer base at a fraction of the cost of legacy vendors. The new solution has met all of our needs, offering enterprise features with the open source background we trust.”

“SDS has resolved all of our storage issues. We are now able to provide our customers with instant access to mission critical data. This has helped us not only to save money and time, but also to remain competitive in an increasingly growing industry.”

To embrace growing trends like ‘The Internet of Things’, hosting providers should recognize that SDS solutions belong on their infrastructure roadmaps. Flexible, manageable, simple and low cost – SDS is the future.

[1] IDC iView – The Digital Universe in 2020

[2] Tech Trader Daily
