Late Night Infomercials and the Data Center

Today’s storage administrators are looking for the best performance for the lowest cost to satisfy their enterprise data requirements. Performance is often improved by adding more solid state drives, but these come at a cost premium. For enterprises looking to save money and resources while meeting performance requirements, data reduction is a key component in creating the ideal solution.

Options for data reduction fall into two main categories, each with its own purpose: data deduplication and compression.

Inline vs. Post-Process Deduplication

Data deduplication is, in its simplest form, a process of removing duplicates. Take this blog as an example: by the 11th word, we had already repeated the word “the”. Deduplicating that one word would remove 2 of the 59 words in the first paragraph. If we did this for every word in the first paragraph, we would be down to 47 total words, or about a 20% reduction.

Storage systems accomplish this with data blocks today using one of two methods: inline or post-process. Inline means that as the data is written, duplicates are noted and pointers are created instead of writing the full data. Post-process deduplication removes the duplicates on a scheduled basis.

Inline deduplication requires more compute resources, but post-process deduplication puts the user at risk: a large file loaded onto the system can fill its capacity before the system has a chance to deduplicate it.
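To make this concrete, here is a minimal Python sketch of hash-based block deduplication in the inline style (purely illustrative; not how NexentaStor or any particular array implements it): each incoming block is hashed, unique blocks are stored once, and duplicate blocks become pointers to data already stored.

import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; real systems may also chunk variably


def dedupe_write(data: bytes, block_store: dict, pointer_log: list) -> None:
    """Inline dedupe: hash each block as it is written; duplicates become pointers."""
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in block_store:
            block_store[digest] = block  # first copy: store the actual data
        pointer_log.append(digest)       # every write: record only a pointer


store, pointers = {}, []
dedupe_write(b"A" * 8192 + b"B" * 4096, store, pointers)
print(len(pointers), "blocks written,", len(store), "blocks stored")
# -> 3 blocks written, 2 blocks stored: the duplicate block was never written twice

A post-process system would run the same hashing pass on a schedule, over data already sitting on disk, rather than in the write path.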

Compression

Compression is the next option to reduce the amount of data stored and it is the one people are typically more familiar with, even if they don’t realize it. Anytime you download a .zip file, you are receiving a compressed file.

To give you a real-world example, think of the popular vacuum bags (in the event that you live under a rock and haven’t seen these in action, here is a 13-minute infomercial all about them). Traditionally, when you fold clothes, each item takes up a given amount of space. However, when you use space-saving bags to remove as much of the air as possible, you reduce the amount of space needed to store the contents. The good thing about compression is that, in many cases, it has a minimal impact on the compute resources of a storage array and can even make the system perform more efficiently.
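For a quick hands-on illustration (a generic sketch, not tied to any particular storage array), Python’s built-in zlib shows how repetitive data shrinks dramatically while random data barely compresses:

import os
import zlib

text = b"the quick brown fox jumps over the lazy dog " * 100  # highly repetitive
noise = os.urandom(len(text))                                 # already random

print(len(text), "->", len(zlib.compress(text)))    # 4500 -> roughly 100 bytes
print(len(noise), "->", len(zlib.compress(noise)))  # barely shrinks, if at all

Like the vacuum bag, compression squeezes the redundancy (the “air”) out of the data; data that is already dense, such as encrypted or pre-compressed files, has little air left to remove.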

So which is better?

Both compression and dedupe have a place in the datacenter, but it’s important to understand when each is most effective. You will normally get your highest data reduction ratios with dedupe, but those gains primarily apply to desktop and server virtualization workloads, which share a lot of common data.

Even in this case, the use of technologies like linked clones for VMware View can reduce the need for deduplication. Messaging and collaboration tools are another space where you will see deduplication used frequently, but there it is often built into the application layer and relies less on storage-level deduplication.

For most other workloads, compression is ideal: from files in Media and Entertainment to Databases, Analytics, and more. Increasingly, compression will become the most widely used and effective data reduction strategy.


How Backend Storage Impacts the Cloud

We all know that a variety of factors impact application performance on cloud infrastructure: network latency and bandwidth, server compute power, and one that is often overlooked, storage. Service providers need strong storage solutions supporting their infrastructure so they can differentiate themselves in this increasingly competitive space while providing superior service and performance to their end users.

WebSupport, a leading European Cloud Service Provider, supports thousands of customers, all requiring differing levels of performance and reliability. WebSupport needed to achieve specific performance metrics with their storage in a cost-effective manner. To solve their dilemma, they deployed an all-flash NexentaStor system that exceeds their performance requirements of 600K read IOPS and 300K write IOPS. Utilizing the advanced caching built into our filesystem, WebSupport is able to deliver 1.2M read IOPS. In addition, NexentaStor’s data reduction cuts their disk capacity consumption by 40%, providing additional cost savings for their business.

Earlier this year, WebSupport ran their VPS performance benchmark test against some top regional and global Cloud Service Provider competitors in the market today. To learn more about the results of the testing and to get an in-depth analysis of their findings, read their Benchmarking White Paper here.


Declare Your Independence From Legacy Storage

Independence: [in-di-pen-duhns], noun

Freedom from the control, influence, support, aid, or the like, of others.

Today’s enterprise IT administrators are looking to declare their independence from the legacy vendors that have dominated the technology landscape for decades. This freedom has manifested in the form of software-defined solutions.

The revolution began in the late 1990s, when VMware was founded to disrupt the server empire long controlled by proprietary systems from the likes of IBM and Sun Microsystems. The battle for dominance in the server world has now clearly been won by server virtualization software; still, the war rages on in the realm of enterprise storage.

Software-defined storage companies are now fighting to give power and flexibility back to the enterprise in key areas of their data centers. These solutions range from software-only products bundled into certified reference architectures to appliances that utilize a software-based solution to provide the freedom users need.

Freedom from Vendor Control

Storage administrators no longer have to rely solely on the limited and costly tools offered by legacy vendors. API-driven solutions, like NexentaStor, allow for simple automation and management. With software-defined storage, you now have control over what a configuration looks like for your business.
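For example, a routine provisioning task becomes a few lines of script instead of a GUI session. The sketch below is a generic illustration in Python: the management address, endpoint paths, and payload fields are hypothetical placeholders, not NexentaStor’s actual API.

import requests

ARRAY = "https://array.example.com:8443"  # hypothetical management address

session = requests.Session()
session.auth = ("admin", "secret")  # use proper credential handling in practice

# Hypothetical call: create a file system with a 1TB quota.
resp = session.post(f"{ARRAY}/filesystems",
                    json={"name": "vmware_ds01", "quotaSize": 2 ** 40})
resp.raise_for_status()

# Hypothetical call: list file systems, ready to feed any automation workflow.
for fs in session.get(f"{ARRAY}/filesystems").json():
    print(fs["name"])

Because every step is a plain HTTP call, the same configuration can be repeated, versioned, and reviewed like any other piece of automation.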

However, many companies want the benefits of software-defined without the hassle of piecing together their own configurations from scratch. Solutions like the Lenovo DX8200N give you the autonomy that software-defined affords, ranging from high performance all-flash configurations to all-disk archive systems, with the ease of an appliance model.

Freedom from Influence

Legacy vendors configure solutions to be as cost-effective as possible for themselves, placing less emphasis on the user’s needs or price challenges. Deploying a software-defined solution lets users choose the drives they need and the speed they desire at a lower price point than traditional vendors offer.

An example of this is the numerous drive and PCI card options available with the Lenovo DX8200N. Your options range from 200GB solid state drives to 4, 6, 8, and 10TB spinning disks, with connectivity options including the latest Fibre Channel (FC) and IP connectivity. The freedom to customize a solution around your needs in a simplified manner is core to Nexenta SDS.

Freedom to Choose Your Support

Choosing to liberate your datacenter with software-defined does not mean that support will be lacking. Many open source SDS products on the market rely solely on community support, where it can often take days, weeks, or even longer to get help with a problem. That is a timeframe enterprises cannot afford.

In contrast, a mature software-defined storage solution is not complicated to support. Nexenta has developed strong alliances with key partners such as Lenovo, ensuring customers receive full support for the entire solution through Lenovo. Ultimately, this makes it simple to implement software-defined storage with the full end-to-end support that enterprises need.

Freedom to Take Back Your Data Center

At its core, software-defined is about freedom: the freedom to make your own hardware choices, the freedom to run your enterprise with reliability and dependability, and the freedom to take control of your datacenter. Nexenta puts the power back in your hands and provides your company with the tools you need to win the battle and, as you move toward a fully software-defined datacenter, one day win the war.

To learn more about the Lenovo DX8200N Powered by Nexenta, register for our upcoming webinar.


3 Simple Ways To Protect Your Data From Ransomware

By: Michael B. Letschin, Field CTO

Ransomware attacks have become one of the biggest threats to enterprise data. WannaCry was released just a few months back, and yesterday an even more sophisticated attack, called Petya, was launched.

This new attack locks down a computer’s hard drive and demands payment in Bitcoin to unlock the user’s system. The attackers claim they will send a password after payment, but there is only a slim chance that a password will ever arrive.

The one thing these attacks have in common is that they are based on the EternalBlue tool that was leaked from the NSA. The tool specifically exploits vulnerabilities in Windows systems.

So how do you protect your corporate data? There are three simple ways.

  • Store your Data on a Centralized Storage Array like NexentaStor

Since the file services stack in NexentaStor is not based on Windows, these vulnerabilities are not present in our software and you can be more confident that your data won’t be at risk.

  • Create a Snapshot Schedule

Another benefit of using an enterprise storage array is the ability to create a snapshot schedule that allows you to roll back to a point in time before any attack on a connected system might have occurred. With unlimited snapshots in NexentaStor, you can be sure you have a timely and accurate copy of your data, regardless of user interactions.

  • Replicate your Data to a Secondary System

Finally, a centralized array offers the ability to replicate that data offsite to a secondary system within your own corporate firewall, or even to one of the many service providers that use NexentaStor as their replication target, as sketched below.
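To sketch the mechanics behind the last two points: NexentaStor is built on ZFS, so the underlying operations resemble the generic ZFS commands wrapped in the minimal Python sketch below (pool, dataset, and host names are placeholders, and in practice the array drives this through its own scheduler and management interface).

import subprocess
from datetime import datetime, timezone

DATASET = "pool1/corp_data"  # placeholder pool/dataset name
TARGET = "backup-host"       # placeholder secondary system


def take_snapshot() -> str:
    """Create a timestamped point-in-time copy; run this on a schedule."""
    snap = f"{DATASET}@auto-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"
    subprocess.run(["zfs", "snapshot", snap], check=True)
    return snap


def replicate(snap: str) -> None:
    """Stream the snapshot to a secondary system over SSH."""
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["ssh", TARGET, "zfs", "recv", "-F", "backup/corp_data"],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()


# Recovering from ransomware is then a single rollback to a pre-attack snapshot:
#   zfs rollback pool1/corp_data@auto-20170626-020000
replicate(take_snapshot())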

Data integrity has always been a core, fundamental component of the Nexenta software stack. We consistently make it a priority, ensuring you get the full benefit of that work and that your critical corporate data isn’t held for ransom. If you’d like to learn more about our storage solutions, click here.


How to Get More from All-Flash Using Software Defined Storage

By: George Crump, Storage Swiss

Flash often makes the hardware selection less important, but it makes the software selection very important. The right software can not only ensure optimal flash performance, but also extend flash implementation beyond the typical high-transaction database use cases and into modern apps and big data analytics. It is critical, though, that IT professionals make sure software-defined storage (SDS) solutions include the core enterprise features they have come to count on.

Why is Storage Hardware Less Important?

Storage systems used to be full of proprietary hardware so they could extract every last drop of performance from the array. These custom chips were used because CPUs were relatively slow, software was cumbersome to create and maintain, and rotational hard disks had high latency. Times have changed. CPU processing power is almost too plentiful, software development is much easier to start and continuously maintain, and, most importantly, flash media has almost no latency compared with the alternatives.

Although it matters what hardware IT uses, a mid-range configuration can get many organizations to the performance levels they require. While exceptions do exist, most data center workloads will not be able to take advantage of additional hardware investments to extract additional performance. A software-first approach provides IT professionals with flexibility, enabling them to select hardware components that are the most affordable while balancing the right quality and the right amount of performance.

Once the organization meets performance requirements, it needs to focus on the other values flash media brings to the data center. For example, flash is denser and requires less power, making it ideal for rapidly growing organizations that are trying to avoid the cost of building another data center.

Why is Software More Important?

Flash-enabled storage software really shouldn’t care what type of storage hardware it runs on. But it should take advantage of the reality of excess storage performance to provide a richer feature set to customers. For legacy data centers, this means a greater focus on data efficiency features like inline compression, data management enabled by quality of service, data protection that lowers recovery point objectives, and, most importantly, ease of use that lowers administrative costs.

Flash’s Next Steps

Flash, in the form of either all-flash arrays or hybrid arrays, is now the dominant storage type for core applications based on Oracle or MS-SQL as well as virtualized environments based on VMware or Hyper-V. These environments typically leverage a scale-up storage architecture, and SDS vendors need to support this very popular deployment model. Doing so allows the SDS solution to leverage storage hardware already in place instead of forcing the customer to upgrade or change hardware to support a scale-out solution.

Flash as Deep Storage

An increasing number of organizations are looking to use flash storage not only to take advantage of its performance, but also its density. A flash-based system can store petabytes of information in less than half a data center rack. When factoring in the cost of data center floor space, power, and cooling, these offerings may actually be less expensive than the hard drive alternative. Applications like Splunk and Hadoop can then leverage this density to access a broad data set that responds instantly to their queries, delivering deeper analysis and faster answers.

But these environments require a new approach to flash storage system architectures. They need to scale out the same way the applications do: via commodity servers that act as storage nodes, creating a cluster that delivers a single pool of storage. Once again, the software creating this storage cluster is critical, as it needs optimization for flash media, automatic scaling, and ease of management. In addition to storage management, the SDS software has the added responsibility of cluster management and network management.
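One common way scale-out software presents many nodes as a single pool is deterministic placement via consistent hashing. The Python sketch below illustrates the idea generically; it is not any particular vendor’s placement algorithm.

import bisect
import hashlib


def _h(key: str) -> int:
    """Stable 64-bit hash of a string key."""
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")


class Ring:
    """Consistent-hash ring: each object maps to a node, and adding a node
    moves only a small fraction of objects, which is what lets a cluster of
    commodity servers behave as one pool and scale out smoothly."""

    def __init__(self, nodes, vnodes=64):
        self._ring = sorted((_h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self._keys = [k for k, _ in self._ring]

    def node_for(self, obj_key: str) -> str:
        i = bisect.bisect(self._keys, _h(obj_key)) % len(self._ring)
        return self._ring[i][1]


ring = Ring(["node-a", "node-b", "node-c"])
print(ring.node_for("videos/cam01/frame-000123.jpg"))  # always the same node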

StorageSwiss Take

A storage vendor traditionally delivers three elements to their customers: hardware, software and service. While storage vendors need to be aware of and take advantage of flash performance and density, they don’t need to be as concerned about designing proprietary hardware. Flash’s performance-equalizing effect enables them to focus on the storage software and on delivering quality support and service to customers. A focus on software also enables flexibility, so customers can choose the hardware that makes the most sense for their environment.

https://storageswiss.com/2017/05/15/more-from-all-flash-software-defined-storage/


A Turnkey Storage Architecture for OpenStack and Docker

By George Crump, Storage Swiss

Click here to learn from George and the team about the Scale-Out All-Flash Solution with Nexenta, Micron and Supermicro…


Is DevOps changing storage from simple to API?

“Storage should be easy. It should be that anyone can manage it, it’s something you put in the closet and forget about.”

This was the mantra of storage vendors over the last 10-15 years. We saw vendors like EqualLogic make a dent and then get acquired after selling on simplicity. Then, EMC announced the VNXe and invited a third-grader to come onstage and configure it.

This worked well when selling into the small to medium business space, and many companies jumped on the bandwagon, but is it the future? As we see cross-cloud integration like VMware announced at VMworld, and the rise of containers into the enterprise, is simplicity really the key selling point?

I would argue it is not. The new paradigm is another one of the buzzwords of the past few years: DevOps.

If you are like most people in the market, you are still trying to figure out exactly what DevOps means. Do you need to have programmers on staff even though you sell insurance or auto parts? You don’t. In fact, you just need staffers who understand the underlying concepts.

The most general definition I have found was on Wikipedia: “A movement or practice that emphasizes the collaboration and communication of both software developers and other information-technology (IT) professionals while automating the process of software delivery and infrastructure changes.”

For this purpose, think about the integration you need as you move from an isolated enterprise to one that works with SaaS tools and newly developed applications. There is a glue that holds all these components together and allows us to achieve tight integration: the API.

“API” can be a scary term for many SysAdmins, since they are used to a GUI that lets them manage everything (back to the third-grader deploying storage). However, it does not need to be scary anymore, since more companies are making it easier than ever to work with an API.

The Open API Initiative (OAI) has influenced vendors to keep APIs consistent and simple for everyone. Combining REST APIs with something like the Swagger UI tool gives the general admin a simple representation of what an API can do. Swagger even provides a public-facing demo, “Swagger Petstore,” so that any administrator can understand how easy an API can be to use.

Most of the newer storage companies, and specifically those touting “software-defined,” utilize something like the Swagger UI as a web interface to clearly detail exactly what you need to put in a script to make the storage do what you want.

Take the Petstore example: when you use “Try It Out” under the store inventory, it produces a new command to run:

curl -X GET --header 'Accept: application/json' 'http://petstore.swagger.io/v2/store/inventory'

No longer is a developer needed; you simply go to a website and copy and paste the command into your script. This impact can be felt throughout the data center.
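The same call drops straight into a script; for instance, in Python it becomes:

import requests

# The same Petstore call as the curl command above.
resp = requests.get("http://petstore.swagger.io/v2/store/inventory",
                    headers={"Accept": "application/json"})
resp.raise_for_status()

for status, count in resp.json().items():  # the API returns status -> count
    print(status, count)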

This shift to simple APIs, and even more importantly to simple, powerful interfaces for those APIs, can be used by enterprises to change the way the SysAdmins of today work. This will not eliminate the need for vendors to make simple, easy-to-navigate graphical interfaces, but it will provide the freedom and flexibility needed as enterprises move more and more into the software-defined data center.


Designing an All-Flash Object Store

by George Crump, Storage Swiss

Object storage is set to become THE way organizations will store unstructured data. As this transition occurs, those same organizations are expecting more from object storage than just a “cheap and deep” way to store information. They expect the system to deliver data as fast as their analytics applications want it. The problem is that, in terms of performance, most object storage systems are sorely lacking. The reality is that the transition to high performance object storage will require more than simply throwing flash at the problem. The underlying object storage software needs to change.

More than Flash

Our entry “The Need for Speed – High Performance Object Storage” shows that the decision to use flash for object storage is supported by improved time to results and increased density. The problem is that “just throwing flash at the problem” will lead to less than desirable outcomes. The key to optimizing a flash investment is making sure the rest of the storage infrastructure does not add back the latency that flash removes. This is a particular problem for many object storage systems.

The Object Storage Bottlenecks to Flash Performance

One of the key inhibitors to maximizing flash performance is one of object storage’s biggest advantages….

To read this blog in its entirety, please visit: https://storageswiss.com/2016/10/04/designing-all-flash-object-store/


The difference between NexentaStor and NexentaEdge

By: Thomas Cornely, Chief Product Officer

Deciding between NexentaStor and NexentaEdge is relatively easy if you understand the products and your applications. NexentaStor delivers unified file and block storage services, whereas NexentaEdge is our scalable object storage platform. So the question is simple: what are you looking for, file protocols or object storage APIs?

Key Differentiators Between NexentaStor and NexentaEdge

It is true that both systems provide block services, although NexentaEdge’s block support is limited to iSCSI. One easy way to choose between the two: if you want shared file services (e.g., NFS, SMB), only NexentaStor offers that functionality. But if you want to start storing data the modern way, using an object storage API such as S3 or Swift, then only NexentaEdge has that capability.

Other differences: NexentaEdge is built on a Linux kernel, while NexentaStor is built on OpenSolaris and ZFS. NexentaEdge was also built from the ground up to be our most scalable product, so if scalability is important to you, NexentaEdge is the best choice.

Which one is right for you?

So the next question is, what applications and systems are you running, and what kind of storage are they looking for? If you are considering an OpenStack deployment, NexentaEdge was specifically designed with it in mind, along with full support for both the Swift and S3 APIs. NexentaEdge would, therefore, be the obvious choice for OpenStack.

What about containers, especially those with persistent storage? Cinder is one of the more promising ideas in that space, and NexentaEdge fully supports it. In fact, we are so convinced of the concept of containers that we built NexentaEdge itself on containers. Using containers in our core product gives us a lot of experience with the challenges of running persistent storage with containers, and that experience is reflected in NexentaEdge.

If you are running a legacy application that requires NFS or SMB access, then NexentaStor is your product of choice. In addition, if you need Fibre Channel block access, only NexentaStor offers it.

The question is a little harder when discussing iSCSI, since both platforms offer iSCSI block access. Perhaps the deciding factor will be scalability or performance. While both products offer some level of scalability, we made scalability one of the core goals of NexentaEdge. NexentaStor can scale to petabytes, but NexentaEdge can scale to hundreds of them. On the other hand, if performance and low latency are your primary concerns, then NexentaStor is for you.

Most data centers will need both systems. First, a high-performance, feature-rich NAS that supports a variety of protocols: legacy applications and data will be with us for decades, and consolidating them onto a single storage platform that reduces complexity and increases performance just makes sense. We deliver this with NexentaStor.

NexentaEdge is your choice for storing the petabytes of data that the Internet of Things (IoT) will generate, as well as delivering that data to modern applications like Splunk, Spark, Cassandra, and Couchbase.


Nexenta Releases Storage for Persistent Docker Containers by Storage Swiss

What if you do not consider one of the greatest advantages of containers to be an advantage to you? Many tout the stateless nature of containers as their single greatest feature. They start up, they accomplish their task, and they go away: no stateful storage necessary. Many within the container community consider a stateful container to be a violation of best practices.

And yet there is a growing desire by some to run stateful containers. One of the arguments for doing so is that it allows development and production to use the exact same infrastructure. This makes it easier to move an app from development to test to production, something essential to a DevOps workflow. Unfortunately, however, limitations of Docker and its associated partners create difficulties for those wanting to do this…

To read more, please visit https://storageswiss.com/2016/09/20/nexenta-releases-storage-for-persistent-docker-containers/
