3 Simple Ways To Protect Your Data From Ransomware

By: Michael B. Letschin, Field CTO

Ransomware attacks have become one of the biggest threats to enterprise data. WannaCry was released just a few months back, and yesterday an even more sophisticated attack, Petya, was launched.

This new attack locks down a computer’s hard drive and demands payment in Bitcoin to unlock the user’s system. The attackers claim a password will be sent after payment, but there is only a slim chance that it will ever arrive.

The one thing these attacks have in common is that they are based on the EternalBlue exploit that was leaked from the NSA. The exploit specifically targets vulnerabilities in Windows systems.

So how do you protect your corporate data? There are three simple ways.

  • Store your Data on a Centralized Storage Array like NexentaStor

Since the file services stack in NexentaStor is not based on Windows, these vulnerabilities are not present in our software and you can be more confident that your data won’t be at risk.

  • Create a Snapshot Schedule

Another benefit of using an enterprise storage array is the ability to create a snapshot schedule that will allow you to roll back to a point in time before any attack on a connected system might have occurred.  With unlimited snapshots in NexentaStor, you can be sure you have a timely and accurate copy of your data, regardless of user interactions.

  • Replicate your Data to a Secondary System

Finally, a centralized array offers you the ability to replicate that data offsite to a secondary system within your own corporate firewall or even to one of the many service providers that use NexentaStor as their replication targets.

Data integrity has always been a core, fundamental component of the Nexenta software stack. We consistently make this a priority so that you get the full benefit of it and your critical corporate data isn’t held for ransom. If you’d like to learn more about our storage solutions, click here.

Posted in Data Protection, Software-defined storage

How to Get More from All-Flash Using Software Defined Storage

By: George Crump, Storage Swiss

Flash often makes the hardware selection less important, but makes the software selection very important. The right software can not only ensure optimal flash performance, but also extend flash implementation beyond the typical high-transaction database use cases and into modern apps and big data analytics. It is critical, though, that IT professionals make sure software-defined storage (SDS) solutions include the core enterprise features they have come to count on.

Why is Storage Hardware Less Important?

Storage systems used to be full of proprietary hardware so they could extract every last drop of performance from the array. These custom chips were used because CPUs were relatively slow, software was cumbersome to create and maintain, and rotational hard disks were highly latent. Times have changed. CPU processing power is almost too plentiful, software development is much easier to start and continuously maintain, and most importantly, flash media has almost no latency compared to the alternatives.

Although the hardware IT uses still matters, a mid-range configuration can get many organizations to the performance levels they require. Workloads that can exploit additional hardware investment do exist, but most data center workloads will not extract additional performance from it. A software-first approach gives IT professionals flexibility, enabling them to select the most affordable hardware components while balancing the right quality and the right amount of performance.

Once the organization meets performance requirements, it needs to focus on the other values flash media brings to the data center. For example, flash is denser and requires less power, making it ideal for rapidly growing organizations that are trying to avoid the cost of building another data center.

Why is Software More Important?

If it is flash-enabled, storage software really shouldn’t care what type of storage hardware it runs on. But it should take advantage of the reality of excess storage performance to provide a richer feature set to customers. For legacy data centers, this means a greater focus on data efficiency such as inline compression, data management enabled by quality of service, data protection to lower recovery point objectives and, most importantly, ease of use to lower administrative costs.

Flash’s Next Steps

Flash, in the form of either all-flash arrays or hybrid arrays, is now the dominant storage type for core applications based on Oracle or MS-SQL as well as virtualized environments based on VMware or Hyper-V. These environments typically leverage a scale-up storage architecture and SDS vendors need to support this very popular deployment model. Doing so allows the SDS solution to leverage storage hardware already in-place instead of forcing the customer to upgrade or change hardware to support a scale-out solution.

Flash as Deep Storage

An increasing number of organizations are looking to use flash storage not only to take advantage of its performance, but also its density. A flash-based system can store petabytes of information in less than half a data center rack. When factoring the cost of data center floor space, power and cooling, these offerings may actually be less expensive than the hard drive alternative. Then, applications like Splunk and Hadoop can leverage this density to access a broad data set that responds instantly to their queries to deliver deeper analysis and faster answers.

But these environments require a new approach to flash storage system architectures. They need to scale out the same way the applications do: commodity servers act as storage nodes, creating a cluster that delivers a single pool of storage. Once again, the software creating this storage cluster is critical, as it needs optimization for flash media, automatic scaling and ease of management. In addition to storage management, the SDS software has the added responsibility of cluster and network management.

StorageSwiss Take

A storage vendor traditionally delivers three elements to their customers: hardware, software and service. While storage vendors need to be aware of and take advantage of flash performance and density, they don’t need to be as concerned about designing proprietary hardware. Flash’s performance-equalizing effect lets them focus on the storage software and on delivering quality support and service to customers. A focus on software also enables flexibility, so customers can choose the hardware that makes the most sense for their environment.

https://storageswiss.com/2017/05/15/more-from-all-flash-software-defined-storage/

Posted in all flash, Software-defined data center, Software-defined storage

A Turnkey Storage Architecture for OpenStack and Docker

By George Crump, Storage Swiss

Click here to learn from George and the team about the Scale-Out All-Flash Solution with Nexenta, Micron and Supermicro…

Posted in all flash, Object Storage, Software-defined data center, Software-defined storage

Is DevOps changing storage from simple to API?

“Storage should be easy. It should be that anyone can manage it, it’s something you put in the closet and forget about.”

This was the mantra of storage vendors over the last 10-15 years. We saw vendors like EqualLogic make a dent and then get acquired after selling on simplicity. Then, EMC announced the VNXe and invited a third-grader to come onstage and configure it.

This worked well when selling into the small to medium business space, and many companies jumped on the bandwagon, but is it the future? As we see cross-cloud integration like VMware announced at VMworld, and the rise of containers into the enterprise, is simplicity really the key selling point?

I would argue it is not. The new paradigm is another one of the buzzwords of the past few years: DevOps.

If you are like most people in the market, you are still trying to figure out exactly what DevOps means. Do you need to have programmers on staff even though you sell insurance or auto parts? You don’t. In fact, you just need staffers that understand the underlying concepts.

The most general definition I have found was on Wikipedia: “A movement or practice that emphasizes the collaboration and communication of both software developers and other information-technology (IT) professionals while automating the process of software delivery and infrastructure changes.”

For this purpose, think about the integration you need as you move from an isolated enterprise to one that works with SaaS tools and newly developed applications. There is a glue that holds all these components together and allows us to achieve tight integration — that is the API.

“API” can be a scary term for many SysAdmins, since they are used to a GUI that lets them manage everything (back to the third-grader deploying storage). However, it does not need to be scary anymore, since more companies are making it easier than ever to work with an API.

The Open API Initiative (OAI) has influenced vendors to keep APIs more consistent and simpler for everyone. Combining REST APIs with something like the Swagger UI tool gives the general admin a simple representation of what an API can do. Swagger even provides a public-facing demo, “Swagger Petstore,” so that any administrator can understand how easy an API can be to use.

Most of the newer storage companies, and specifically those touting “software-defined,” utilize something like the Swagger UI as a web interface to clearly detail exactly what you need to put in a script to make the storage do what you want.

Take the Petstore example: when you use “Try It Out” under the store inventory, it produces a new command to run:

curl -X GET --header 'Accept: application/json' 'http://petstore.swagger.io/v2/store/inventory'

No longer is a developer needed; you simply go to a website and cut and paste into the script. This impact can be felt throughout the data center.
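
The same call drops straight into a script. Here is a minimal sketch, assuming Python with the requests library, against the public Swagger Petstore demo endpoint shown above:

    import requests

    def get_inventory(base_url="http://petstore.swagger.io/v2"):
        """Fetch the Petstore inventory -- the same call as the curl command above."""
        response = requests.get(f"{base_url}/store/inventory",
                                headers={"Accept": "application/json"},
                                timeout=10)
        response.raise_for_status()
        return response.json()          # e.g. {"available": 42, "sold": 7, ...}

    if __name__ == "__main__":
        for status, count in get_inventory().items():
            print(f"{status}: {count}")

No SDK and no developer ticket required; the REST endpoint documented in the Swagger UI is the whole integration surface.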

This shift to simpler APIs, and even more importantly to simple yet powerful interfaces to those APIs, can be used by enterprises to change the way the SysAdmins of today work. This will not eliminate the need for vendors to make simple, easy-to-navigate graphical interfaces, but it will give the freedom and flexibility that is needed as enterprises move more and more into the software-defined data center.

Posted in Software-defined data center

Designing an All-Flash Object Store

by George Crump, Storage Swiss

Object storage is set to become THE way organizations will store unstructured data. As this transition occurs, those same organizations are expecting more from object storage than just a “cheap and deep” way to store information. They expect the system to deliver data as fast as their analytics applications want it. The problem is that, in terms of performance, most object storage systems are sorely lacking. The reality is that the transition to high-performance object storage will require more than simply throwing flash at the problem. The underlying object storage software needs to change.

More than Flash

Our entry “The Need for Speed – High Performance Object Storage” shows that the decision to use flash for object storage is supported by improved time to results and increased density. The problem is that “just throwing flash at the problem” will lead to less than desirable outcomes. The key to optimizing a flash investment is making sure the rest of the storage infrastructure does not add the latency that flash removes. This is a particular problem for many object storage systems.

The Object Storage Bottlenecks to Flash Performance

One of the key inhibitors to maximizing flash performance is one of object storage’s biggest advantages….

To read this blog in its entirety, please visit: https://storageswiss.com/2016/10/04/designing-all-flash-object-store/

Posted in Object Storage, Software-defined data center, Software-defined storage

The difference between NexentaStor and NexentaEdge

By: Thomas Cornely, Chief Product Officer

Deciding between NexentaStor and NexentaEdge is relatively easy if you understand the products and your applications. NexentaStor delivers unified file and block storage services, while NexentaEdge is our scalable object storage platform. So the question is simple: what are you looking for, file protocols or object storage APIs?

Key Differentiators Between NexentaStor and NexentaEdge

It is true that both systems provide block services, although NexentaEdge’s block support is limited to iSCSI. But there is one easy way to choose between the two: if you want shared file services (e.g. NFS, SMB), only NexentaStor offers that functionality; if you want to start storing data the modern way, using an object storage API such as S3 or Swift, only NexentaEdge has that capability.

Another difference is that NexentaEdge is built on a Linux kernel, whereas NexentaStor is built on OpenSolaris and ZFS. NexentaEdge was also built from the ground up to be our most scalable product, so if scalability is important to you, NexentaEdge will be the best choice.

Which one is right for you?

So the next question is, what applications and systems are you running, and what kind of storage are they looking for? If you are considering an OpenStack deployment, NexentaEdge was specifically designed with it in mind, along with full support for both the Swift and S3 APIs. NexentaEdge would, therefore, be the obvious choice for OpenStack.

What about containers – especially those with persistent storage? Cinder is one of the more promising ideas in that space, something NexentaEdge fully supports. In fact, we are so convinced of the concept of containers that we built NexentaEdge itself on containers. Using containers in our core product gives us a lot of experience with the challenges of running persistent storage with containers, and that experience is reflected in NexentaEdge.

If you are running a legacy application that requires NFS or SMB access, then NexentaStor is your product of choice. In addition, if you need Fibre Channel block access, only NexentaStor offers this.

The question is a little bit harder when discussing iSCSI, since both platforms offer iSCSI block access. Perhaps the deciding factor will be scalability or performance. While both products offer some level of scalability, we made scalability one of the core things we wanted to accomplish with NexentaEdge. NexentaStor can scale to petabytes, but NexentaEdge can scale to hundreds of them. On the other hand, if performance and low latency are your primary concern then NexentaStor is for you.

Most data centers will need both systems. First, a high-performance, feature-rich NAS that supports a variety of protocols: legacy applications and data will be with us for decades, and consolidating them onto a single storage platform that reduces complexity and increases performance just makes sense. We deliver this with NexentaStor.

NexentaEdge is your choice for storing the petabytes of data that the internet of things (IoT) will generate, as well as delivering that data to modern applications like Splunk, Spark, Cassandra and CouchBase.

Posted in Uncategorized

Nexenta Releases Storage for Persistent Docker Containers by Storage Swiss

What if you do not consider one of the greatest advantages of containers to be an advantage to you? Many tout the stateless nature of containers as their single greatest feature. They start up, they accomplish their task, and they go away – no stateful storage necessary. Many within the container community consider a stateful container to be a violation of best practices.

And yet there is a growing desire by some to run stateful containers. One of the arguments for doing this is it allows development and production to use the exact same infrastructure. This makes it easier to move an app from development to test to production – something essential to a devops workflow. Unfortunately, however, limitations of Docker and its associated partners create difficulties for those wanting to do this….

To read more, please visit https://storageswiss.com/2016/09/20/nexenta-releases-storage-for-persistent-docker-containers/

Posted in Uncategorized

Efficient Replication using Multicast Protocols

By Oscar Wahlberg, Director of Product Management

Replication as Data Protection

The use of replication as a data protection method is not new. Historically, it was relegated to disaster recovery for tier-1 applications and did not have a role in day-to-day backups.

Recently, however, many customers are using replication as their primary mechanism for backup and recovery for all tiers of systems – primarily due to the advent of object storage systems with built in replication and versioning. Unfortunately, the side effect is that this significantly increases the amount of data that needs to be replicated.

If one is to rely on replication as the primary method of data protection, one must replicate each version of every object to multiple nodes. Many systems transmit the entire object when it changes, then replicate the object multiple times if they are replicating it to multiple destinations. This exacerbates the issue by requiring even more bandwidth.

There is another way to protect your data…

NexentaEdge reduces the amount of bandwidth necessary to send data to multiple nodes by doing two things: 1) using a multicast replication protocol that can send data to multiple nodes with a single transmission, and 2) sending only the portions of an object that have changed instead of the entire object.

Replicast is our multicast replication protocol that allows us to transmit data to as many nodes as necessary without having to transmit it multiple times. An object is split into many chunks, and each chunk is transmitted in a single Replicast transmission to multiple destinations.
During transmission, the chunk is split into multiple UDP datagrams. The destinations then reassemble the received datagrams and compare the result against a SHA-256 hash of the chunk, which provides a much stronger check against data corruption than typical network checksums. Replicast is intelligent enough to sustain the loss of UDP datagrams during chunk transmission and will retransmit only the lost datagrams, which greatly reduces bandwidth usage on busy networks.
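
As a rough receive-side sketch of that idea (not the actual Replicast implementation; the datagram payload size and data structures here are invented purely for illustration):

    import hashlib

    DATAGRAM_PAYLOAD = 8192  # hypothetical payload bytes per UDP datagram

    def reassemble_chunk(received, total_datagrams, expected_sha256):
        """received maps datagram sequence number -> payload bytes."""
        missing = [seq for seq in range(total_datagrams) if seq not in received]
        if missing:
            return None, missing                        # ask the sender for only these
        chunk = b"".join(received[seq] for seq in range(total_datagrams))
        if hashlib.sha256(chunk).hexdigest() != expected_sha256:
            return None, list(range(total_datagrams))   # corrupt: re-request the chunk
        return chunk, []                                # verified, nothing to retransmit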

Replicast also results in efficiencies when determining where to send data. Most replication systems decide which nodes to replicate to on a round-robin basis, without any consideration for how busy an individual node happens to be. Replicast uses multicast datagrams to dynamically select the storage servers for each chunk.

Instead of deciding among all servers, Replicast looks up a multicast group that is a subset of all nodes; those nodes then dynamically select which of them will ultimately store the chunk. This allows fewer nodes to serve a higher number of IOPS by ensuring that the most available nodes are the ones that receive new data, while congested nodes get a break.

Splitting each object into chunks also gives us the ability to use cryptographic hashes to identify which chunks have changed and which haven’t. To store a new version of an object, we only need to replicate the chunks that have changed and update the metadata for that object. This results in significant bandwidth savings over alternative approaches.
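
Conceptually, the changed-chunk detection looks something like the sketch below. This is a simplification that assumes fixed-size chunks; NexentaEdge’s actual chunking and metadata handling are more involved.

    import hashlib

    CHUNK_SIZE = 1024 * 1024  # assumed 1 MiB chunks, purely for illustration

    def chunk_hashes(data):
        """Split an object into chunks and identify each by its SHA-256 hash."""
        chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
        return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

    def chunks_to_replicate(new_version, known_hashes):
        """Only the chunks whose hashes the targets have not already stored."""
        return [(h, c) for h, c in chunk_hashes(new_version) if h not in known_hashes]

Unchanged chunks are simply referenced by hash in the new version’s metadata, so only the changed chunks ever cross the wire.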

Conclusion
Replication is playing a much greater role in day-to-day data protection, and there is significant room for improvement in its efficiency. Combining Replicast, our multicast replication protocol, with transmitting only the changed chunks for new object versions is about as efficient as replication can get.

Posted in Software-defined data center, Software-defined storage

Three Dimensions of Storage Sizing & Design – Part 3: Speed

By: Edwin Weijdema

In this third post of the multi-part Three Dimensions of Storage Sizing & Design series we will dive deeper into the dimension Use, and specifically into the application workload characteristic Speed. Depending on the application workload requirements, you will need to size the storage system for The Need for Speed. So let us dive deeper into IOPS.

The speed of computer storage devices like hard disk drives (HDD), Solid State Drives (SSD) and Storage Area Networks (SAN) is expressed in Input/Output Operations Per Second (IOPS). Applications interact with storage to retrieve and store data.


What are IOPS?

IOPS express how many read and write commands a storage device can complete in a second. We will look at a simplified example of how many IOPS a disk can deliver within the boundaries of physics. These are called the back-end IOPS: simply put, how many IOPS the disk(s) in the back-end can deliver. Every disk in your storage system has a maximum IOPS value that is based on a formula, namely:

IOPS = 1000ms / (Average Seek time in ms + Average Latency in ms)

  • Rotational speed is measured in revolutions per minute (RPM): how fast the disk platters rotate inside the disk.
  • Average Latency is the time taken for the platter to undergo half a disk rotation. Why half? At any one time the data can be a full disk rotation away from the head, or it might already be right under it; the time taken for half a rotation therefore gives us the average time it takes for the platter to spin far enough for the data to be retrieved. To calculate the average latency, take the RPM and use the following formula: Average latency = ((60 seconds / RPM) / 2) * 1000 ms.
  • Average Seek time is the time in milliseconds (ms) it takes for the disk’s head to position itself over the track being read or written. When an I/O request comes in for a particular bit of data, the head will usually not be above the correct track on the disk. The arm needs to move so that the head is over the correct track, where it must then wait for the platter spin to present the target data beneath it. As the data could potentially be anywhere on the platter, the average seek time is the time taken for the head to travel halfway across the disk. There are both read and write seek times; taking the average of those two values gives you the average seek time.
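
Putting the formula and the two terms above into a quick sketch (the 3.8 ms seek time is just an assumed figure for a 15K drive, not a vendor specification):

    def average_latency_ms(rpm):
        """Time for half a platter rotation: ((60 / RPM) / 2) * 1000 ms."""
        return (60.0 / rpm) / 2 * 1000

    def disk_iops(rpm, avg_seek_ms):
        """IOPS = 1000 ms / (average seek time + average rotational latency)."""
        return 1000.0 / (avg_seek_ms + average_latency_ms(rpm))

    # Example: 15K RPM drive with an assumed 3.8 ms average seek time
    # latency = ((60 / 15000) / 2) * 1000 = 2.0 ms  ->  1000 / 5.8 ≈ 172 IOPS
    print(round(disk_iops(15000, 3.8)))   # 172, in line with the 175-210 range in the table below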

Theoretical IOPS for Calculation Sizing

For disks (HDD and SSD) I normally use the numbers in the following table as a rule of thumb to calculate raw back-end random IOPS. Raw? Yes, raw, because we don’t factor in caching/buffering in the whole chain (disk, controller, head, adapter, hypervisor, operating system, application, workstation), RAID influence, interface connection, driver configuration, queue depths, etc. But let’s keep this simplified so we have a workable model to calculate back-end IOPS easily.

Disk RPM   | IOPS  | ~ Average IOPS range | Average Latency (ms)
5.4K HDD   | 55    | 50-65                | 5.6
7.2K HDD   | 80    | 75-100               | 4.2
10K HDD    | 120   | 120-140              | 3.0
15K HDD    | 180   | 175-210              | 2.0
SSD (WI)   | 40000 | 65000-115000         | 0.1**
SSD (MU)   | 8500  | 7000-25000           | 0.1**
SSD (RI)   | 3000  | 2000-15000           | 0.1**
BSSD (ICE) | 20000 | 5000-85000           | 0.1**

 

** Latency on SSDs is not caused by the spinning of a disk (there are no mechanical moving parts here; latency on the SSD itself is on average 0.03 ms!), but by the connection chain between the processor and the SSD. For more insight into different SSDs and their IOPS numbers, take a look at this wiki page.

Front-End versus Back-End IOPS

Often you will hear that an application needs 2,400 IOPS and that it communicates with a 64K block size. Take, for instance, Microsoft SQL Server 2014 OLTP log files. These are the so-called front-end IOPS, but to let that data land on the back-end storage system they generate back-end IOPS.

Size does matter!

Let’s compare a storage system that stores data with a block size of 128K and one that uses 4K blocks.


We will see that it requires 15 disks and 1,200 IOPS in the back-end when the storage system stores 128K blocks. If you store the same I/O stream on a storage system that uses 4K blocks, you will need a whopping 38,400 IOPS. To back those incoming IOPS you will need to run 480x 7.2K NL-SAS disks!
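
The arithmetic behind those numbers, as a simplified sketch (no caching, no RAID write penalty; the 80 IOPS per disk comes from the rule-of-thumb table above):

    import math

    def backend_iops(frontend_iops, frontend_block_kb, backend_block_kb):
        """Express the same data stream in back-end I/Os of a different block size."""
        return frontend_iops * frontend_block_kb / backend_block_kb

    def disks_needed(required_iops, iops_per_disk=80):   # 80 = 7.2K NL-SAS rule of thumb
        return math.ceil(required_iops / iops_per_disk)

    # 2,400 front-end IOPS at 64K, landing on 128K vs. 4K back-end blocks
    for backend_kb in (128, 4):
        iops = backend_iops(2400, 64, backend_kb)
        print(backend_kb, iops, disks_needed(iops))   # 128K: 1,200 IOPS / 15 disks
                                                      # 4K: 38,400 IOPS / 480 disks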

As with most applications, you can have different front-end IOPS and block sizes per workload, even within one application. Microsoft SQL Server 2014 is a good example; it uses different disk access patterns, as shown in the following table:

Typical SQL Server Disk Access Patterns

Info: Oracle DB default block size is 8K, and the Hadoop default block size is 64M.

OK, now we know that size does matter, but why should you care? Most SQL Server performance issues in (virtual) environments can be traced to improper storage configuration. SQL Server workloads are generally I/O heavy, and a misconfigured storage subsystem can increase I/O latency and significantly degrade SQL Server performance. Running Microsoft SQL Server on VMware? Check out this valuable resource about Architecting Microsoft SQL Server on VMware vSphere.

Playing the Tetris Storage Block Game

With virtualization we introduce another layer into the I/O path. So what about all those different layers? If you look at a complete stack in a data center, you could see an Application running on an Operating System that is installed within a VM. This VM runs on a Hypervisor, the hypervisor talks to a storage backend, and the I/O eventually lands on spinning disk and/or flash.

Example:

  1. Application – Microsoft SQL server – Logs 64K
  2. Operating System – Microsoft Windows 2008R2 – 4K
  3. Hypervisor – VMware vSphere 6 (ESXi) – runs VMFS-5 data store – 1M Unified Block
  4. Storage – NexentaStor 4 – 128K Block
  5. Disk – SanDisk BSSD – 16K block with a 4K Sector size

HDDs have a block size of 512 bytes or 4K. With the rise of flash we see the block size increasing. Look at the SanDisk Board Solid State Drive (BSSD) card with its ICE chips: the flash card offers 8TB of storage capacity, with a 16K physical block size (virtual page) and a logical 4K or 512-byte sector size.

Block Size versus Sector Size

While sector specifically means the physical disk area, the term block has been used loosely to refer to a small chunk of data. Block has multiple meanings depending on the context. In the context of data storage, a filesystem block is an abstraction over disk sectors possibly encompassing multiple sectors. Most file systems are based on a block device, which is a level of abstraction for the hardware responsible for storing and retrieving specified blocks of data, though the block size in file systems may be a multiple of the physical block size.

Determining block size while formatting the file system in an OS is a case of tradeoffs. Every file must occupy at least one block, even if the file is zero bytes, so there’s something for the file’s metadata to be attached to. Unless you can guarantee that your files will ALWAYS be some multiple of the block size in size (e.g. in a 4k block OS, all files are 4k), there’ll be a certain amount of wastage for the files that don’t exactly fit within that block.

It’s all about Balance

Going with small block sizes is good when you need to store many small files. On the other hand, more blocks mean more metadata, so you end up wasting a portion of your storage system on overhead, tracking the location of all the files. On the flip side, large blocks mean less metadata, but also greater wastage when you’re storing small files; e.g. a 4-byte file stored in a 4K block wastes over 99% of that block.
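
A quick sketch of that tradeoff, counting blocks (a proxy for metadata) and slack space for a handful of assumed sample file sizes at different block sizes:

    import math

    def usage(file_sizes, block_size):
        """Blocks allocated and slack bytes wasted for the given files."""
        blocks = sum(max(1, math.ceil(size / block_size)) for size in file_sizes)
        slack = blocks * block_size - sum(file_sizes)
        return blocks, slack

    files = [4, 500, 6_000, 70_000, 1_000_000]          # assumed sample files, in bytes
    for bs in (512, 4096, 65536):
        blocks, slack = usage(files, bs)
        print(f"{bs:>6}-byte blocks: {blocks:>5} blocks, {slack:>8} bytes of slack")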

Sectors are an obsolete concept in modern drives. They existed when “locations” on a drive were specified by the old CHS (cylinder, head, sector) scheme, which wasted a lot of space. All modern drives use LBA (logical block addressing), so sectors don’t really exist anymore. However, an OS can still chain multiple blocks/sectors into a single logical OS-level block to reduce overhead, e.g. “every 8 real blocks/sectors on the drive will be considered 1 block by the Operating System.” A sector is a unit that is normally 512 bytes, or 1K, 2K, 4K, etc., depending on the hardware.

When the logical block size of a drive is not in multiples of 512 bytes, geometry information is not available because the file system does not support other block sizes. Linux does not discover such drives, and Windows shows such drives in disk management, but does not allow you to execute any operations on them.

Partition Alignment

Aligning file system partitions is a well-known storage best practice for database workloads. Partition alignment on both physical machines and VMFS partitions prevents the I/O performance degradation caused by unaligned I/O. An unaligned partition results in additional I/O operations, incurring a penalty in latency and throughput. Let’s look at the example we used before with the SanDisk BSSD: this BSSD uses 16K blocks and a 4K sector size. To protect the data and increase the total IOPS and throughput, we stripe/mirror several cards together.
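
As a trivial illustration of what “aligned” means here (using the 16K physical block size from the BSSD example above; the sector counts are just the common partitioning defaults):

    def is_aligned(offset_bytes, physical_block_bytes=16 * 1024):
        """A partition start is aligned if it sits on a physical block boundary."""
        return offset_bytes % physical_block_bytes == 0

    # Offsets given in 512-byte sectors, the way partitioning tools report them
    for start_sector in (63, 2048):        # 63 = legacy MBR default, 2048 = modern default
        print(start_sector, is_aligned(start_sector * 512))   # 63 -> False, 2048 -> True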

Properly Aligned


If a workload’s block sizes are properly aligned, it looks like a game of Tetris filling up neatly; and because storage systems stripe over more than one disk, you really want to get the most out of every disk.

Mis-Aligned


If, for instance, a block size of 17K is chosen for the stripe, you will have all kinds of problems: reads and writes will cross sector, block and disk boundaries, carving the data up into all kinds of chunks and burdening the data stream with lots of additional I/O, latency and fragmentation.

Aligning with VMware vSphere 5.x or later

vSphere 5.0 and later automatically aligns VMFS-5 partitions along a 1 MB boundary (1 MB unified block). If a VMFS-3 partition was created using an earlier version of vSphere that aligned along a 64KB boundary, and that file system is then upgraded to VMFS-5, it will retain its 64KB alignment. 1 MB alignment can only be achieved when the VMFS volume is created using the vSphere Web Client.

More to the equation

There is more to the whole equation about Speed than discussed so far. Factors like:

  • Protection levels such as RAID, where you get write penalties or read multipliers that affect the overall Speed. I will dive deeper and explore this further in the next blog part(s) around the Storage Dimension Protect.
  • Caching at several levels in the whole I/O data path – application, OS, hypervisor, host, HBA, storage system, disk, etc. – which makes it much more complicated to predict the Speed and IOPS.
  • Data reduction techniques, which I will highlight in the Storage Dimension Manage.

In the next part of Three Dimensions of Storage Sizing & Design we will dive deeper into workload characteristic Throughput (MB/s).

 

Posted in Uncategorized

Raise Your SDS IQ (6 of 6): Practical Review of Containerized SDS

by Michael Letschin, Field CTO

This is the sixth of six posts (the last one was Virtual Storage Appliances) where we’re going to cover some practical details that help raise your SDS IQ and enable you to select the SDS solution that will deliver Storage on Your Terms. The sixth (and final) SDS flavor in our series is Containerized.

Containers are a relatively new entrant on the storage scene – and they’re hot because, unlike virtual machines, they use a shared operating system, which makes them far more efficient than hypervisors in terms of system resource usage. The big benefit is that you can get more apps (think four to six times more) on the same old servers, and you can run those apps basically anywhere. Because the container space is virtualized, storage via containers could be considered SDS. For storage, the containerization approach varies: it’s either all local storage, as in the diagram on the left, or, as on the right, external components sharing file, block, or object presentation that gets integrated into the container as stateful storage. You can use solutions like Flocker to get stateful storage, which is important because not every app is completely stateless.

Containerization is useful for testing in DevOps or for use in hyperscale environments, and the storage is highly portable, which means flexibility is high. Currently few enterprises are actually moving production apps into these environments, with key issues being the ability of apps to write to containers and the limited (but growing) knowledge of IT staff on this new virtualization paradigm. Containers aren’t designed to scale up to accommodate a lot of storage – which enterprise apps usually need – but a solution to this may yet emerge. You’ll notice that we’ve left off the grades for Performance and Cost Efficiency. For Performance, because containers run in a virtual environment, there are far too many variables to provide a standard ranking; for Cost Efficiency, again, there are lots of dependencies on the underlying environment, although the ability to use existing infrastructure is a big plus.

Overall grade: (soft) B – incomplete grading

See below for a typical build and the report card:

[Image: typical containerized SDS build and report card]

Posted in Raise Your SDS IQ, Software-defined data center, Software-defined storage, Uncategorized