Employee Perspective: Software Defined Infrastructure and Hybrid Cloud Systems

Companies consider cloud usage for enterprise IT infrastructure, yet the majority of their data stays within their own organizations. They have to decide which data and systems should go to the public cloud and which should remain private. Two big trends, Big Data and cybersecurity, do not allow them enough time to construct separate systems in public and private clouds. Data volumes and security threats are increasing enormously in a short period, and will continue to do so. IT organizations also need to face the fact that they cannot have both an unlimited budget and unlimited time to build infrastructure that solves those issues. In this case, I believe Software-Defined Infrastructure and hybrid cloud systems are the solution. They give you tremendous flexibility to deploy systems across public and private clouds on a commodity hardware platform.

Nexenta's software-based storage solutions are seeing big, promising opportunities in this area. We meet many customers who are stuck with legacy hardware-based storage and infrastructure, and who cannot move to other storage or infrastructure solutions because of vendor lock-in. With Nexenta software solutions, IT organizations can store their growing data securely in public and private clouds. The solutions are based on industry-standard hardware, so customers get true Software-Defined Infrastructure and keep their systems flexible for the long term. This is a solution for both now and the future, as IT organizations can enjoy the benefits of hybrid cloud flexibility at reasonable cost with tight security.

I personally believe hybrid cloud is the ultimate goal for everybody, not only enterprises but also individuals. For example, at home I keep my own private data on local hard drives, but I also connect to public cloud services like Google Photos so I can access my photos from any device, anytime, anywhere. We all personally select which data stays private while, at the same time, thinking about saving costs and increasing flexibility. Consideration of the infrastructure, in terms of the balance among security, cost, and flexibility, lags a little behind, because people find the polished surface of the services more appealing. What matters is the infrastructure behind those services: is it the right solution for the future of our personal and business lives? I have worked at Nexenta for about three and a half years, facing these critical subjects with enterprise customers. I strongly believe Software-Defined Infrastructure with hybrid cloud is the only solution that meets the goal for those enterprise customers. I am happy to be a part of this solution at Nexenta.

– Jun Matsuura, Country Manager, Japan

Posted in cloud, Corporate, Software-defined storage, Uncategorized

Employee Perspective: SDS – Bringing parity to the rest of the infrastructure stack

When you purchase any OS today, it doesn’t come locked to specific hardware (sorry, Apple). I can purchase Microsoft Office and it doesn’t care whether I’m running it on a Dell, an Acer, a Supermicro, or an HP. If you use applications like Salesforce you aren’t worried about the hardware. They are totally independent of the underlying hardware, and each is sized according to the job at hand. You buy hardware to fit your budget, on aesthetics, or on its gaming performance (if you are my son). You buy software similarly, and largely because it provides a function you need. So why are so many people still buying storage hardware and software that are tied together?

Now, there are things I purchase that I expect to be fully integrated. I don’t want to have to assemble a car by myself – the major parts are not typically standard and I don’t want to have to understand how to assemble an automobile. However, today’s enterprise-class storage products use the same devices, processors, motherboards, and add-on cards that you find in any industry-standard server. They run on variants of, or even standard versions of, common operating systems. Why are they bundled with a proprietary set of software? Most companies already have the talent to install, set up, customize, and manage their IT infrastructure (in fact, you are forced to, because those same storage vendors aren’t going to do it for you). You have the expertise to create solutions that match your needs. You don’t need to be forced into a bundled solution.

Not only does it not make sense technically, it doesn’t make sense economically. Why face three-year leases, after which you need to replace your storage subsystems (including software) and often face the daunting task of migrating all of your data? It’s archaic and barbaric. There is an alternative.

Software-defined. In this case, Software-Defined Storage (SDS). The ability to merge the best storage functionality with the best underlying hardware and then maintain the currency and validity of that software and hardware with no impact on your data. Stay current, stay competitive. Sounds like a no brainer.

Storage functionality is a software value and function. It has to be. You can’t stay current much less competitive if anything other than the most critical functions are in the hardware (hardware that anyone can purchase). Installing storage functionality software shouldn’t be any harder than installing an OS – and it isn’t with NexentaStor.

This gives you the flexibility to keep up with the emerging trends in the hybrid, multi-cloud shift, driven by Big Data getting even bigger with AI, machine learning, robotics, and cybersecurity. You don’t need to engage in long negotiations over bundled storage solutions that won’t keep pace with the rate at which data is accumulated, changed, and used. You wouldn’t take weeks to roll out a new server – you can’t afford the delays. Your data is passing you by.

With Nexenta you have all the features and functions, all the performance with all the flexibility and hardware independence you expect at every other level in your IT infrastructure. You’re not buying a car, you’re enabling your business. Go software defined.

-Bill Fuller, Vice President, Engineering

Posted in Corporate, Software-defined storage

Why Nexenta? Employee Perspectives

Nexenta knows the Software-Defined Storage world. We consider ourselves experts within the industry, across our various teams and job functions. Our tight-knit teams are constantly collaborating with different departments, partners, and customers around the world to leverage each other’s knowledge and points of view.

We also have passion for our products, company, and industry. Whether it’s an engineer working on the latest product update or a sales rep talking to a potential new customer, our work is driven by passion and excitement for the technology and for the possibility of making a global impact with SDS.

Here at Nexenta, our employees not only believe in our Software-Defined Storage solution, but can wholeheartedly speak to it. We asked a few of our employees to talk about why they believe in Nexenta. Over the next few weeks we will be releasing entries from our employees’ points of view in a series we’re calling “Why Nexenta – Employee Perspective”. Here are the dates and people you can expect to hear from:

March 8th – Bill Fuller, VP, Engineering

March 14th- Jun Matsuura, Country Manager, Japan

March 20th- Eric Cho, Sales Engineer

March 26th- Rick Hayes, VP, Customer Service

Posted in Uncategorized

Late Night Infomercials and the Data Center

Today’s storage administrators are looking for the best performance at the lowest cost to satisfy their enterprise data requirements. Performance is often improved by adding more solid state drives, but these come at a cost premium. For enterprises looking to save money and resources while meeting performance requirements, data reduction is a key component in creating the ideal solution.

Options for data reduction fall into two main categories, each with its own purpose: data deduplication and compression.

Inline vs. Post-Process Deduplication

Data deduplication is, in its simplest form, a process of removing duplicates. Take this blog as an example- by the 11th word we already repeated the word “the”. With deduplication, that would remove 3 of the 59 words in the first paragraph. If we did this for every word in the first paragraph we would be down to 47 total words or about a 20% reduction.

There are two methods by which storage systems accomplish this with data blocks today: inline and post-process. Inline means that as the data is written, duplicates are noted and pointers are created instead of writing the full data. Post-process deduplication removes the duplicates on a scheduled basis.

Inline requires more compute resources, but post-process deduplication puts the user at risk if a large file is loaded on the system and fills the capacity before the system has a chance to deduplicate it.
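To make the inline approach concrete, here is a toy sketch in Python: each incoming block is hashed as it arrives, and any block whose hash has been seen before is recorded as a pointer rather than written again. The class and method names are illustrative only, not NexentaStor's actual implementation.

```python
import hashlib

class InlineDedupStore:
    """Toy inline deduplication: store each unique block once, keyed by hash."""

    def __init__(self):
        self.blocks = {}    # sha256 digest -> block bytes (physical storage)
        self.pointers = []  # logical write order, recorded as digests

    def write(self, block: bytes) -> None:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in self.blocks:
            self.blocks[digest] = block   # new data: store it once
        self.pointers.append(digest)      # duplicates cost only a pointer

    def logical_bytes(self) -> int:
        return sum(len(self.blocks[d]) for d in self.pointers)

    def physical_bytes(self) -> int:
        return sum(len(b) for b in self.blocks.values())

store = InlineDedupStore()
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]:
    store.write(block)

print(store.logical_bytes())   # 16384 bytes written logically
print(store.physical_bytes())  # 8192 bytes actually stored
```

The hashing on every write is exactly where the extra compute cost of inline deduplication comes from; post-process systems defer that work to a scheduled scan.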


Compression

Compression is the next option to reduce the amount of data stored, and it is the one people are typically more familiar with, even if they don’t realize it. Anytime you download a .zip file, you are receiving a compressed file.

To give you a real world example, think of the popular vacuum bags (In the event that you live under a rock and haven’t seen these in action, here is a 13-minute infomercial all about them). Traditionally, when you fold clothes each item takes up a given amount of space. However, when you use space saving bags to remove as much of the air as possible, you reduce the amount of space needed to store the contents. The good thing about compression is that, in many cases, it has a minimal impact on the compute resources of a storage array and can even make the system perform more efficiently.
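The vacuum-bag effect is easy to see with a general-purpose codec such as Python's built-in zlib; the "laundry" data below is made up, and the exact ratio depends entirely on how repetitive the input is.

```python
import zlib

# Repetitive data compresses well, like air squeezed out of a vacuum bag.
clothes = b"sock sock sock shirt shirt " * 1000  # 27,000 bytes of laundry
packed = zlib.compress(clothes, level=6)

print(f"original: {len(clothes)} bytes, compressed: {len(packed)} bytes")
assert zlib.decompress(packed) == clothes  # lossless: everything comes back
```

Note that, unlike a vacuum bag, decompression restores the data bit-for-bit, which is why compression is safe to apply broadly.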

So which is better?

Both compression and dedupe have a place in the datacenter, but it’s important to understand when each is most effective. You will normally get your highest data reduction ratios with dedupe, but those are primarily going to apply to desktop and server virtualization workloads that have a lot of commonalities.

Even in this case, the use of technologies like linked clones for VMware View can reduce the need for deduplication. Messaging and collaboration tools are another space you will see deduplication used frequently, but this is often built into the application layer and relies less on the storage deduplication.

For most other workloads, compression is ideal – from files in media and entertainment to databases, analytics, and more. Increasingly, compression will become the most widely used and effective data reduction strategy.

Posted in Data Protection, Software-defined storage

How Backend Storage Impacts the Cloud

We all know that there are a variety of factors that impact application performance running on cloud infrastructure, such as network latency and bandwidth, server compute power and, one that is often overlooked, storage. Service providers need strong storage solutions supporting their infrastructure so they can differentiate themselves in this increasingly competitive space, while providing superior service and performance to their end users.

WebSupport, a leading European cloud service provider, supports thousands of customers, all requiring differing levels of performance and reliability. WebSupport needed to achieve specific performance metrics with their storage in a cost-effective manner. To solve their dilemma, they deployed an all-flash NexentaStor system that exceeds their performance requirements of 600K read IOPS and 300K write IOPS. Utilizing the advanced caching built into our filesystem, WebSupport is able to deliver 1.2M read IOPS. In addition, NexentaStor’s data reduction cuts their disk space consumption by 40%, providing additional cost savings for their business.

Earlier this year, WebSupport ran their VPS performance benchmark test against some top regional and global Cloud Service Provider competitors in the market today. To learn more about the results of the testing and to get an in-depth analysis of their findings, read their Benchmarking White Paper here.

Posted in all flash, cloud, Software-defined data center, Software-defined storage

Declare Your Independence From Legacy Storage

Independence: [in-di-pen-duhns], noun

Freedom from the control, influence, support, aid, or the like, of others.

Today’s enterprise IT administrators are looking to declare their independence from the legacy vendors that have dominated the technology landscape for decades. This freedom has manifested in the form of software-defined solutions.

The revolution began in the late 1990s, when VMware was founded to disrupt the server empire that had long been controlled by mainframe systems from the likes of IBM and Sun Microsystems. The battle for dominance in the server world has now clearly been won by server virtualization software; still, the war rages on in the realm of enterprise storage.

Software-defined storage companies are now fighting to give power and flexibility back to the enterprise in key areas of their data centers. These solutions range from software-only products bundled into certified reference architectures to appliances that utilize a software-based solution to provide the freedom users need.

Freedom from Vendor Control

Storage Administrators no longer have to rely solely on the limited and costly tools offered by legacy vendors. The use of API-driven solutions, like NexentaStor, allows for simple automation and management. With Software-Defined, you now have control over what a configuration looks like for your business.

However, many companies want the benefits of software-defined without the hassle of piecing together their own configurations from scratch. Solutions like the Lenovo DX8200N give you the autonomy that software-defined affords, ranging from high performance all-flash configurations to all-disk archive systems, with the ease of an appliance model.

Freedom from Influence

Legacy vendors configure solutions to be as cost effective as possible for themselves and don’t place as much emphasis on the user’s needs or price challenges. Deploying a software-defined solution lets the user choose the drives they need and the speed they desire at a lower price point than traditional vendors.

An example of this is the numerous drive and PCI card options available with the Lenovo DX8200N. Your options range from 200GB solid state drives to 4, 6, 8, and 10TB spinning disks, with connectivity options including the latest Fibre Channel (FC) and IP connectivity. The freedom to customize a solution around your needs in a simplified manner is core to Nexenta SDS.

Freedom to Choose Your Support

Choosing to liberate your datacenter with software-defined does not mean that support will be lacking. There are many open source SDS products on the market that rely solely on community support. It can often take days, weeks or even longer to get help with a problem. That is a timeframe that enterprises cannot afford.

In contrast, a mature software-defined storage solution is not complicated to support. Nexenta has developed strong partnerships with key alliance partners, such as Lenovo, which ensures customers receive full support for the entire solution through Lenovo. Ultimately, this makes it simple to implement software-defined storage with full end-to-end support that enterprises need.

Freedom to Take Back Your Data Center

At its core, software-defined is about freedom. The freedom to make your own hardware choices, the freedom to run your enterprise with reliability and dependability, and the freedom to take control of your datacenter. Nexenta puts the power back in your hands and provides your company with the tools you need to win the battle, and as you move towards a full software-defined datacenter, one day win the war.

To learn more about the Lenovo DX8200N Powered by Nexenta, register for our upcoming webinar.

Posted in Corporate, Software-defined data center, Software-defined storage

3 Simple Ways To Protect Your Data From Ransomware

By: Michael B. Letschin, Field CTO

Ransomware attacks have become one of the biggest threats to enterprise data. WannaCry was released just a few months back, and yesterday, an even more sophisticated attack was launched called Petya.

This new attack locks down a computer’s hard drive and demands payment in Bitcoin to unlock the user’s system. The attackers claim they will send a password after payment, but there is only a slim chance that a password will ever arrive.

The one thing these attacks have in common is they are based on the EternalBlue tool that was leaked from the NSA.  The tool specifically attacks vulnerabilities on Windows systems.

So how do you protect your corporate data? There are three simple ways.

  • Store your Data on a Centralized Storage Array like NexentaStor

Since the file services stack in NexentaStor is not based on Windows, these vulnerabilities are not present in our software and you can be more confident that your data won’t be at risk.

  • Create a Snapshot Schedule

Another benefit of using an enterprise storage array is the ability to create a snapshot schedule that will allow you to roll back to a point in time before any attack on a connected system might have occurred.  With unlimited snapshots in NexentaStor, you can be sure you have a timely and accurate copy of your data, regardless of user interactions.
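NexentaStor's snapshot mechanics are its own, but the rollback decision is easy to sketch generically: given timestamped snapshot names, pick the newest snapshot taken before the suspected infection. The ZFS-style names and schedule below are hypothetical, chosen only for illustration.

```python
from datetime import datetime

# Hypothetical timestamped snapshots from a six-hour schedule; the naming
# scheme here is illustrative, not NexentaStor's actual convention.
snapshots = [
    "tank/data@auto-2017-06-27-0000",
    "tank/data@auto-2017-06-27-0600",
    "tank/data@auto-2017-06-27-1200",
    "tank/data@auto-2017-06-27-1800",
]

def parse_ts(snap: str) -> datetime:
    """Extract the timestamp embedded after the '@auto-' marker."""
    return datetime.strptime(snap.split("@auto-")[1], "%Y-%m-%d-%H%M")

def rollback_target(snaps, infected_at: datetime) -> str:
    """Newest snapshot taken strictly before the suspected infection time."""
    clean = [s for s in snaps if parse_ts(s) < infected_at]
    return max(clean, key=parse_ts)

# Suspected infection at 14:30 -> roll back to the 12:00 snapshot.
target = rollback_target(snapshots, datetime(2017, 6, 27, 14, 30))
print(target)  # tank/data@auto-2017-06-27-1200
```

The tighter the schedule, the less work is lost on rollback, which is where unlimited snapshots pay off.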

  • Replicate your Data to a Secondary System

Finally, a centralized array offers you the ability to replicate that data offsite to a secondary system within your own corporate firewall or even to one of the many service providers that use NexentaStor as their replication targets.

Data integrity has always been a core, fundamental component of the Nexenta software stack. We consistently make it a priority to ensure you get the full benefit of that work and your critical corporate data isn’t held for ransom. If you’d like to learn more about our storage solutions, click here.

Posted in Data Protection, Software-defined storage

How to Get More from All-Flash Using Software Defined Storage

By: George Crump, Storage Swiss

Flash often makes the hardware selection less important, but makes the software selection very important. The right software can not only ensure optimal flash performance, but also extend flash implementations beyond the typical high-transaction database use cases and into modern apps and big data analytics. It is critical, though, that IT professionals make sure software-defined storage (SDS) solutions include the core enterprise features they have come to count on.

Why is Storage Hardware Less Important?

Storage systems used to be full of proprietary hardware so they could extract every last drop of performance from the array. These custom chips were used because CPUs were relatively slow, software was cumbersome to create and maintain, and rotational hard disks had high latency. Times have changed. CPU processing power is almost too plentiful, software development is much easier to start and continuously maintain, and most importantly, flash media has almost no latency compared with the alternatives.

Although it matters what hardware IT uses, a mid-range configuration can help many organizations reach the performance levels they require. While exceptions do exist, most data center workloads will not be able to take advantage of additional hardware investments to extract additional performance. A software-first approach provides IT professionals flexibility, enabling them to select the hardware components that are the most affordable while balancing the right quality and the right amount of performance.

Once the organization meets performance requirements, it needs to focus on the other values flash media brings to the data center. For example, flash is denser and requires less power, making it ideal for rapidly growing organizations that are trying to avoid the cost of building another data center.

Why is Software More Important?

If it is flash-enabled, storage software really shouldn’t care what type of storage hardware it runs on. But it should take advantage of the reality of excess storage performance to provide a richer feature set to customers. For legacy data centers, this means a greater focus on data efficiency features like inline compression, data management enabled by quality of service, data protection to lower recovery point objectives, and most importantly, ease of use to lower administrative costs.

Flash’s Next Steps

Flash, in the form of either all-flash arrays or hybrid arrays, is now the dominant storage type for core applications based on Oracle or MS-SQL as well as virtualized environments based on VMware or Hyper-V. These environments typically leverage a scale-up storage architecture and SDS vendors need to support this very popular deployment model. Doing so allows the SDS solution to leverage storage hardware already in-place instead of forcing the customer to upgrade or change hardware to support a scale-out solution.

Flash as Deep Storage

An increasing number of organizations are looking to use flash storage not only to take advantage of its performance, but also its density. A flash-based system can store petabytes of information in less than half a data center rack. When factoring the cost of data center floor space, power and cooling, these offerings may actually be less expensive than the hard drive alternative. Then, applications like Splunk and Hadoop can leverage this density to access a broad data set that responds instantly to their queries to deliver deeper analysis and faster answers.

But these environments require a new approach to flash storage system architectures. They need to scale-out the same way the applications do, via commodity servers that act as storage nodes creating a cluster delivering a single pool of storage. Once again, the software creating this storage cluster is critical, as it needs optimization for flash media, automatic scaling and ease of management. In addition to storage management, the SDS software has the added responsibility of cluster management and network management.

StorageSwiss Take

A storage vendor traditionally delivers three elements to their customers: hardware, software and service. While storage vendors need to be aware of and take advantage of flash performance and density, they don’t need to be as concerned about designing proprietary hardware. The flash performance equalizer enables them to focus on the storage software and delivering quality support and service to customers. A focus on software also enables flexibility so customers can choose the hardware that makes the most sense for their environment.


Posted in all flash, Software-defined data center, Software-defined storage

A Turnkey Storage Architecture for OpenStack and Docker

By George Crump, Storage Swiss

Click here to learn from George and the team about the Scale-Out All-Flash Solution with Nexenta, Micron and Supermicro…

Posted in all flash, Object Storage, Software-defined data center, Software-defined storage

Is DevOps changing storage from simple to API?

“Storage should be easy. It should be that anyone can manage it, it’s something you put in the closet and forget about.”

This was the mantra of storage vendors over the last 10-15 years. We saw vendors like EqualLogic make a dent and then get acquired after selling on simplicity. Then, EMC announced the VNXe and invited a third-grader to come onstage and configure it.

This worked well when selling into the small to medium business space, and many companies jumped on the bandwagon, but is it the future? As we see cross-cloud integration like VMware announced at VMworld, and the rise of containers into the enterprise, is simplicity really the key selling point?

I would argue it is not. The new paradigm is another one of the buzzwords of the past few years: DevOps.

If you are like most people in the market, you are still trying to figure out exactly what DevOps means. Do you need to have programmers on staff even though you sell insurance or auto parts? You don’t. In fact, you just need staffers who understand the underlying concepts.

The most general definition I have found was on Wikipedia: “A movement or practice that emphasizes the collaboration and communication of both software developers and other information-technology (IT) professionals while automating the process of software delivery and infrastructure changes.”

For this purpose, think about the integration you need as you move from an isolated enterprise to one that works with SaaS tools and newly developed applications. There is a glue that holds all these components together and allows us to achieve tight integration — that is the API.

“API” can be a scary term for many SysAdmins, since they are used to a GUI that lets them manage everything (back to the third-grader deploying storage). However, it does not need to be scary anymore, since more companies are making it easier than ever to work with an API.

The Open API Initiative (OAI) has influenced vendors to keep APIs more consistent and simpler for everyone. Combining REST APIs with a tool like the Swagger UI gives the general admin a simple representation of what an API can do. Swagger even provides a public-facing demo, “Swagger Petstore,” so that any administrator can understand how easy an API can be to use.

Most of the newer storage companies, and specifically those touting “software-defined,” utilize something like the Swagger UI as a web interface to clearly detail exactly what you need to put in a script to make the storage do what you want.

Take the Petstore example: when you use “Try It Out” under the store inventory, it produces a new command to run:

curl -X GET --header 'Accept: application/json' 'http://petstore.swagger.io/v2/store/inventory'

No longer is a developer needed; you simply go to a website and cut and paste into the script. This impact can be felt throughout the data center.
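The same call is just a few lines in a script. The sketch below builds the identical GET request with Python's standard library, but parses a canned response body rather than hitting the live Petstore service, so the inventory counts shown are made up.

```python
import json
from urllib import request

PETSTORE = "http://petstore.swagger.io/v2/store/inventory"

def get_inventory(fetch):
    """Build the same GET the curl command performs and parse the JSON reply."""
    req = request.Request(PETSTORE, headers={"Accept": "application/json"})
    return json.loads(fetch(req))

# Canned response so the example doesn't depend on the live service being up;
# real inventory counts vary from call to call.
canned = lambda req: '{"available": 7, "pending": 2, "sold": 4}'

inventory = get_inventory(canned)
print(inventory["available"])  # 7
```

Swapping `canned` for a real opener (e.g. one that calls `urllib.request.urlopen` and reads the body) turns the sketch into a live query, which is exactly the cut-and-paste workflow the Swagger UI enables.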

This shift to a simple API – and even more importantly, a simple and powerful interface to those APIs – can be used by enterprises to change the way the SysAdmins of today work. This will not eliminate the need for vendors to make simple, easy-to-navigate graphical interfaces, but it will give admins the freedom and flexibility that is needed as enterprises move more and more into the software-defined data center.

Posted in Software-defined data center