Managing Software-Defined Storage for your virtualized infrastructure just got a whole lot easier.

Nexenta is proud to announce our first vCenter Web Client plugin to support the NexentaStor platform. The NexentaStor vCenter Web Client Plugin supports vSphere 5.5 and NexentaStor 4.0.3, providing integrated management of NexentaStor storage systems within vCenter. It allows the vCenter administrator to configure NexentaStor nodes automatically from vCenter.

VMware administrators can provision NexentaStor storage, connect it to ESX hosts, delete it, and view the resulting datastores within vCenter.

Not only can you provision the storage, but managing it is also simple with integrated snapshot management.

The plugin also allows for closer analytics and reporting on the storage through vCenter, as detailed below.

Check out the screenshots below, and download the vCenter Web Client Plugin today from the NexentaStor product downloads page.

General Storage Details:

  • Volume Name
  • Connection Status
  • Provisioned IOPS
  • Provisioned Throughput
  • Volume available and used space

Storage Properties:

  • Datastore name
  • NFS Server IP address
  • Datastore Path and capacity details

Datastore Properties:

  • Total capacity
  • Used capacity
  • Free capacity
  • Block size
  • Datastore IOPS, throughput and latency (see the query sketch after this list)
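
These properties map onto vSphere’s own datastore summary objects, so the same values the plugin surfaces can also be pulled programmatically. Below is a minimal sketch using pyVmomi, the open source vSphere Python SDK; the vCenter address and credentials are placeholders.

```python
# Minimal sketch using pyVmomi (the open source vSphere Python SDK).
# The vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab sketch; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    gib = 1024 ** 3
    for ds in view.view:
        s = ds.summary
        print(f"{s.name}: capacity={s.capacity / gib:.1f} GiB "
              f"free={s.freeSpace / gib:.1f} GiB type={s.type}")
        if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
            print(f"  block size: {ds.info.vmfs.blockSizeMb} MB")  # VMFS only
    view.Destroy()
finally:
    Disconnect(si)
```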

Snapshot Management (see the scripting sketch after this list):

  • List existing snapshots
  • Create new snapshots
  • Clone existing snapshots
  • Restore to a snapshot
  • Delete snapshots
  • Schedule snapshots
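
Everything the plugin does here can also be scripted against the NexentaStor appliance’s management API. The sketch below is illustrative only: the endpoint paths and payload fields are hypothetical placeholders, not the documented NexentaStor API.

```python
# Illustrative only: the endpoint paths and payload fields are hypothetical
# placeholders, not the documented NexentaStor management API.
import requests

BASE = "https://nexentastor.example.com:8443"  # placeholder appliance address
AUTH = ("admin", "secret")                     # placeholder credentials

def list_snapshots(volume):
    """List existing snapshots of a volume."""
    r = requests.get(f"{BASE}/snapshots", auth=AUTH, verify=False,
                     params={"volume": volume})
    r.raise_for_status()
    return r.json()

def create_snapshot(volume, name):
    """Create a new named snapshot of a volume."""
    r = requests.post(f"{BASE}/snapshots", auth=AUTH, verify=False,
                      json={"volume": volume, "name": name})
    r.raise_for_status()

def restore_snapshot(volume, name):
    """Roll a volume back to an existing snapshot."""
    r = requests.post(f"{BASE}/snapshots/{volume}@{name}/rollback",
                      auth=AUTH, verify=False)
    r.raise_for_status()
```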

End-to-End Datastore Provisioning (see the sketch after this list):

  • Create a new volume on the storage array
  • Attach the volume to the host as a datastore
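
For a feel of what the plugin automates end to end, the same two steps can be scripted: create the share on the array (again, a hypothetical endpoint), then attach it to an ESXi host as an NFS datastore using pyVmomi’s NAS datastore spec.

```python
# Sketch of the two provisioning steps. The NexentaStor endpoint is a
# hypothetical placeholder; the datastore attach uses pyVmomi.
import ssl
import requests
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Step 1: create an NFS-shared filesystem on the array (hypothetical endpoint).
requests.post("https://nexentastor.example.com:8443/filesystems",
              auth=("admin", "secret"), verify=False,
              json={"name": "pool1/vmstore", "share": "nfs"}).raise_for_status()

# Step 2: attach the new share to an ESXi host as an NFS datastore.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]  # first ESXi host, for illustration
    spec = vim.host.NasVolume.Specification(
        remoteHost="nexentastor.example.com",
        remotePath="/volumes/pool1/vmstore",  # placeholder export path
        localPath="nexenta-vmstore",
        accessMode="readWrite")
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
    view.Destroy()
finally:
    Disconnect(si)
```
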
Posted in Virtualization

Accelerate your Horizon 6 deployment with NexentaConnect 3.0!

Nexenta is proud to announce the general availability of NexentaConnect 3.0 for VMware Horizon (with View). The VDI acceleration and automation tool provides increased desktop density and higher IO performance for existing storage deployments as well as greenfield VDI solutions. NexentaConnect 3.0 introduces many new features and enhancements, including:

  • Full support for VMware Horizon 6
  • Pass-through support for VMware GPU
  • Import Horizon View desktop pools
  • Fast desktop pool restoration from backup

Together, these new features let you accelerate and grow an existing Horizon deployment that may have been limited by traditional storage solutions.

To learn more about NexentaConnect for VMware Horizon, go to http://www.nexenta.com/products/nexentaconnect/nexentaconnect-horizon and download the 45-day free trial.

Posted in Corporate, Software-defined data center, Software-defined storage, Virtualization

Nexenta Launches Revolutionary Software-Defined Scale Out Object Storage Solution for OpenStack and Big Data Infrastructures

NexentaEdge – Taking Nexenta’s ZFS DNA to Cloud Scale

Thomas Cornely, Chief Product Officer, Nexenta

VMworld 2014 in San Francisco promises to be an incredible event for Nexenta. In addition to our OpenSDx Summit on 8/28, as a VMworld 2014 Platinum Sponsor we’re gearing up for a slew of new product announcements and demos. We’re particularly excited about the opportunity to showcase NexentaEdge, the latest addition to our Software-Defined Storage portfolio, specifically targeting the petabyte-scale, shared-nothing, scale-out architectures required for next generation cloud deployments. NexentaEdge is a software-only solution deployed on industry standard servers running Ubuntu, CentOS or RHEL Linux. Version 1.0 will support iSCSI block services with OpenStack Cinder integration, Swift and S3 object APIs, as well as a Horizon management plugin. File services are part of our design and will be delivered in follow-on versions.
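
Because the object interface is S3 compatible, standard S3 tooling should work against a NexentaEdge gateway. Here is a minimal sketch with boto3; the endpoint URL, credentials and bucket name are placeholders, not NexentaEdge defaults.

```python
# Sketch: exercising an S3-compatible endpoint with boto3. The endpoint URL,
# credentials and bucket name are placeholders, not NexentaEdge defaults.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://edge-gateway.example.com:9982",  # placeholder gateway
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, edge")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```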

If you’re familiar with Nexenta, you know all about NexentaStor, our unified block and file Software-Defined Storage solution built on the ZFS file system. What made ZFS great were some key design choices that took advantage of emerging trends in the hardware landscape. Things like Copy On Write, delivering guaranteed file system consistency at very large scale, as well as high performance unlimited snapshots and clones. Things like block level checksums, trading increasingly cheap CPU cycles for way more valuable end-to-end data integrity. This strong technology heritage, paired with Nexenta’s continued investment in performance, scale and manageability, has led service providers across the globe and a growing number of enterprise customers to rely on NexentaStor as the storage backend for their cloud and enterprise applications.

If you’re in technology, you know that the only constant is change. Our service provider customers are busy scaling their infrastructure, moving to next generation open source cloud infrastructure like OpenStack and CloudStack and looking for even more scalable and cost efficient storage backends. For that, we’ve built NexentaEdge. And rather than quickly combine existing open source pieces and hack on top of them, we deliberately took some time to design a solution from the ground up, and we made sure our design reflected the lessons we learned from our years of working with ZFS. We also made sure our design looked forward and was ready for what we see as new emerging trends in the storage landscape. The net result is a truly next generation scale out solution that incorporates what we like to call our ZFS DNA.

For example, one core design aspect of NexentaEdge is something we call Cloud Copy On Write. While the system is truly distributed, without any single point of failure, data in the cluster is never updated in place. Very much like ZFS, but applied in a distributed context, this gives us a great foundation for advanced data management and the ability to gracefully handle a variety of failure scenarios that affect large scale out clusters. Another example is end-to-end data integrity. All chunks of objects in NexentaEdge are stored with cryptographic hash checksums that provide end-to-end data integrity. The system is also built with automatic self-healing in case corruption is detected in a chunk.
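
To make the idea concrete, here is a toy illustration (ours, not NexentaEdge code) of the two properties together: chunks are immutable and addressed by their cryptographic hash, and every read is verified against that hash.

```python
# Toy illustration of content-addressed, never-update-in-place chunk storage.
# A teaching sketch, not NexentaEdge code.
import hashlib

class ChunkStore:
    def __init__(self):
        self._chunks = {}  # hash -> bytes; entries are never modified in place

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        # Identical chunks hash to the same key, so duplicates cost nothing extra.
        self._chunks.setdefault(digest, data)
        return digest

    def get(self, digest: str) -> bytes:
        data = self._chunks[digest]
        if hashlib.sha256(data).hexdigest() != digest:
            # Corruption detected; a real system would self-heal from a replica.
            raise IOError(f"chunk {digest[:12]} failed verification")
        return data

store = ChunkStore()
ref = store.put(b"an immutable chunk of some object")
assert store.get(ref) == b"an immutable chunk of some object"
```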

Another critical aspect of the design is the recognition that building large scale out clusters is as much a storage challenge as a networking one. So we paid particular attention to how we consume precious network bandwidth and how we automatically route data around hot spots and busy nodes. These functions are implemented as part of what we call FlexHash (for dynamic flexible hashing that automatically selects the best targets for data storage or data access based on actual load states) and Replicast (our own network protocol that minimizes data transfers and enables lower latency data access).
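
FlexHash and Replicast are Nexenta’s own designs, and we’ll share details in later posts. Purely to convey the flavour of load-aware placement, here is a simplified load-weighted, rendezvous-style selection sketch; it is not the FlexHash algorithm.

```python
# Illustrative only: simplified load-weighted, rendezvous-style target
# selection. Conveys the flavour of load-aware placement; not FlexHash.
import hashlib

def score(node: str, chunk_id: str, load: float) -> float:
    digest = hashlib.sha256(f"{node}/{chunk_id}".encode()).digest()
    base = int.from_bytes(digest[:8], "big") / 2**64  # stable score in [0, 1)
    return base * (1.0 - load)                        # busy nodes score lower

def pick_targets(nodes: dict, chunk_id: str, replicas: int = 3) -> list:
    """nodes maps node name -> current load in [0, 1)."""
    ranked = sorted(nodes, key=lambda n: score(n, chunk_id, nodes[n]), reverse=True)
    return ranked[:replicas]

cluster = {"node-a": 0.10, "node-b": 0.85, "node-c": 0.20, "node-d": 0.05}
print(pick_targets(cluster, "chunk-42"))  # heavily loaded node-b tends to rank last
```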

Last but not least, a great design proves itself by how advanced features naturally flow out of it. One such feature is cluster-wide inline deduplication. NexentaEdge clusters get inline deduplication of all data at the object chunk or block level. Chunks are variable in size and can be as small as 4KB. At the Nexenta booth at VMworld 2014, we will demonstrate this by creating hundreds of virtual machines on iSCSI LUNs while consuming little more than one copy’s worth of capacity in the cluster.
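
Variable-size chunks are what let deduplication keep finding matches even after data shifts. A common general technique for this (not necessarily NexentaEdge’s implementation) is content-defined chunking, where cut points come from the data itself:

```python
# Illustrative content-defined chunking: cut points are derived from the data,
# so identical runs still dedupe after insertions shift their offsets.
# A generic technique, not necessarily NexentaEdge's implementation.
import os

MIN_CHUNK = 4 * 1024   # never emit a chunk below ~4KB
MASK = 0x1FFF          # 13-bit match => roughly 8KB beyond the minimum, on average

def chunks(data: bytes):
    start, h = 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF  # cheap rolling-style hash of the stream
        if i - start + 1 >= MIN_CHUNK and (h & MASK) == MASK:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]  # tail chunk may be smaller than the minimum

sizes = [len(c) for c in chunks(os.urandom(256 * 1024))]
print(sizes)  # variable sizes, each at least 4KB except possibly the tail
```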

NexentaEdge is here. And we think it will be big. Over the coming weeks, we’ll dig a bit deeper into the technology and share details on Cloud Copy On Write, FlexHash and Replicast. See you at the Nexenta booth at VMworld 2014!

Posted in Corporate

Software-Defined Storage Saving the Economy

Jill Orhun, VP of Marketing and Strategy, Nexenta

Faced with an explosion of data from macro trends like social media, mobile, the Internet of Things, and Big Data, many organisations confront snowballing technology requirements alongside declining IT budgets, and must do more with less.

Storage is often the highest single line item in these reduced or static IT budgets, making the strategy of throwing more storage hardware at the data explosion problem less and less acceptable. Many of today’s organisations, such as picturemaxx, the University of Sussex and the Institut Laue-Langevin, have found a way to step away from such a MESS (massively expensive storage systems) approach and have discovered more scalable, flexible, available and cost-effective storage solutions – Software-Defined Storage (SDS) solutions.

Open Source SDS solutions can be deployed in conjunction with industry standard hardware, avoiding the vendor lock-in of expensive proprietary models. This gives organisations the freedom to choose their hardware, ensuring they always get the right hardware for their requirements – at the right price. Democratising infrastructure in this way delivers cost savings of up to 80%.

Software-Defined Storage will change everything

2014 is the year that SDS is shaking up the market by delivering on its promise of a truly vendor-agnostic approach and providing a single management view across the data centre. Organisations are beginning to coalesce around a standard definition of Software-Defined Storage, clearing up the confusion that arises from the proliferation of approaches taken by vendors purporting to provide “Software-Defined” solutions.

Some vendors claim to offer SDS but are merely providing virtualised storage, characterised by a separation and pooling of capacity from storage hardware resources. Others claim to have SDS solutions even though their solution is 100% reliant on a specific kind of hardware. Neither definition fulfills the fundamental SDS requirement of enabling enterprises to make storage hardware purchasing decisions independent from concerns about over or under-utilisation or interoperability of storage resources. It is important to be aware of these subtle distinctions, otherwise the key SDS benefits of increased flexibility, automated management and cost efficiency simply won’t be realised.

True SDS solutions let organisations work with any protocol stack to build specialised systems on industry standard hardware, rather than limiting their choices to the expensive specialised appliances sold by the ‘MESS’ vendors.

Software-Defined Storage changes the economic game for the Software-Defined Data Centre

SDS is one of the three legs of the stool that make up the Software-Defined Data Centre (SDDC), along with server virtualisation and Software-Defined Networking (SDN). As the most costly leg, however, SDS is also a target for misdirection of terms and capture of high margins. Many vendors claiming to deliver SDS are selling hardware products with the 60% to 70% margin that has come to define the enterprise storage market. SDS is about much more than new technology innovation. True SDS lets customers do things they couldn’t do before and, most critically, fundamentally changes the economics of the enterprise storage business by increasing the hardware choices available to end customers.

Making the right choices

Organisations are in the middle of a data tsunami. According to recent reports, most of the global tidal wave of data has been created in the last two years, and it will only come faster as we all demand 24/7 connectivity.

According to Research and Markets’ Global Software-Defined Data Centre report, the market is set to explode, growing at a CAGR of 97.48% between this year and 2018. Much of this growth is due to increased demand for cloud computing, which creates a companion demand for Software-Defined technologies to achieve large scale economically.

Customers realise that SDDC technologies offer flexibility, security, storage availability and scalability. All organisations should learn what true Software-Defined solutions are, so they can make better decisions about which vendors to invest in for their future SDDCs. Understanding the definitions and asking the right questions comes first; moving to SDS solutions is then the critical first step on the Software-Defined journey.

Posted in Corporate, Software-defined storage

Welcome to the Software-Defined World

Thomas Cornely, VP of Product Management, Nexenta

It’s no secret that today’s organizations are experiencing an unprecedented data explosion. As much as 90% of the world’s data has been created in the last two years and the pace of data generation continues to accelerate thanks to trends like cloud, social, mobile, big data, and the Internet of Things. These developments create additional pressure on data centre managers already struggling to make do with flat or decreasing IT budgets.

Thankfully, help is on the horizon with the emergence of the Software-Defined Data Centre (SDDC). The SDDC promises to deliver new levels of scalability, availability and flexibility, and will do so with a dramatically lower total cost of ownership. As companies like Amazon, Google and Facebook prove every day, SDDC is the future and is built on three key pillars: compute, storage and networking.

The last decade saw the transformation of the compute layer thanks to technologies from the likes of VMware, Microsoft and the open source community. The next stages are storage and networking. While Software-Defined Networking (SDN) was all the rage a couple of years ago, actual market traction and customer adoption have been slower than expected as industry players continue to work to align all the technology pieces required to deliver full SDN solutions. The story is quite different for storage. With storage typically being the most expensive part of an enterprise infrastructure, we are witnessing a dramatic acceleration in Software-Defined Storage (SDS) adoption.

2014 promises to be very significant for Software-Defined Storage as customers realize its potential for addressing their critical pain points: scalability, availability, flexibility and cost. As SDS increasingly takes centre stage, it is important to ensure customers see through legacy vendor marketing seeking to preserve their hegemony by dressing up high margin, inflexible proprietary hardware in SDS clothes. Thanks to fairly creative marketing teams, most, if not all, storage vendors make some claim related to Software-Defined Storage. It is amusing to note, however, that almost all are selling closed hardware products with the 60% to 70% margin that has been the norm in the enterprise storage market over the past decade. Calling a product SDS does not make it so.

Having a lot of software in a given hardware product (as most storage arrays do) might make a product Software-Based, but it does not make it Software-Defined. Similarly, adding an additional management layer or abstraction layer on existing proprietary hardware (à la EMC ViPR) might increase the amount of software sold to customers, but really does not make the solution Software-Defined. What legacy storage vendors are doing is very similar to what Unix vendors of old (remember Sun, HP and IBM) did when they added virtualization and new management software to their legacy Unix operating systems to compete with VMware. While these were technically interesting extensions to legacy technology, it was VMware running on standard Intel based servers that truly unleashed Software-Defined compute and changed the economics of enterprise compute forever. The same is true for SDS.

Done right, Software-Defined Storage allows customers to build scalable, reliable, full featured, high performance storage infrastructure from a wide selection of (low cost) industry standard hardware. As such, SDS is about much more than the latest technology innovation. True SDS allows customers to do things they could not do before while fundamentally changing the economics of the enterprise storage business. True SDS allows customers to deal with their storage assets in the same way they deal with their virtualized compute infrastructure: pick a software stack for all their storage services and seamlessly swap industry standard hardware underneath as cost, scale and performance requirements dictate. Eliminating vendor lock-in without compromising on availability, reliability and functionality is how SDS will change the storage industry.

From a technology perspective, true SDS must be able to support any ecosystem (VMware, Hyper-V, OpenStack and CloudStack) and any access protocol (block, file and object), while running on a wide variety of hardware configurations, be they all flash, all disk, or hybrid. Having a strong open source DNA helps in getting an active community of users and developers around the SDS solution. SDS openness will play an increasingly important role as customers move towards converged software-led stacks that harness technologies such as cloud, hyperscale, big data, NoSQL, flash hybrids, all flash, object stores and intelligent automation.

As mentioned earlier, the SDDC will deliver new levels of scalability, availability and flexibility with significantly lower cost than today’s approaches. With storage playing such a critical role in the SDDC, the accelerating adoption of SDS in 2014 will make it a breakthrough year for what we like to call Software-Defined Everything aka SDx. When the building blocks of software defined compute, storage and networking have all been put in place, enterprises will be free from expensive vendor lock-in and free to scale more easily, innovate more rapidly and bring new solutions to market more efficiently. More than yet another technology fad, SDDC is poised to change core data centre economics and free enterprises to invest more in their own business.

Posted in Corporate, Software-defined data center