File data analytics drives millions out of Enterprise IT budgets!


Get your team’s buy-in on Object Storage using these 4 Value Propositions  

 

For the better part of the last 25 years, NAS file servers have been the preferred method of storing unstructured data (i.e., data stored outside of a structured database format). The SMB protocol (formerly known as CIFS) is used to access files in Windows environments, while NFS is used to access file content on non-Windows systems. The explosive growth of unstructured data has pushed traditional NAS file server solutions to their limits. Performance, scale, data protection, and cost concerns have driven the need for a new approach…and Object Storage was born!

 

Object Storage

Object storage is different from NAS or SAN systems in several ways. Most notably, the concepts of volumes, LUNs, and RAID do not apply to object storage. Instead, data is stored as objects in containers (also called buckets) rather than as blocks. As with all innovation, object storage has its pluses and minuses. Unlike RAID-protected storage, which consumes large amounts of capacity for parity or mirrored disks, object storage uses erasure coding to reliably protect extremely large data sets at a fraction of the cost of traditional RAID protection schemes. Object storage solutions focus on scalability and resiliency, not performance. Object storage is a key innovation for the NAS world, but customers must have the ability to analyze their NAS data to determine which data is right for object.

 

  • Scalability

 

The underlying infrastructure upon which object storage is built is “scale-out”: capacity, processing, and networking resources can be added horizontally by adding nodes. Unlike the hierarchical structure of file storage, object storage systems are flat, with a single namespace in which each object is accessed via a unique object identifier. The use of unique object identifiers enables tremendous scale and avoids some of the inode limitations of traditional filesystems. In addition, object storage systems replace the limited attribute set of a file system with customizable metadata that not only captures common object characteristics but can also hold customized information. This can be particularly useful for application access, advanced analytics, and business intelligence use cases (see the sketch below). Initial object use cases for the enterprise are archive and backup of file storage. By moving infrequently used files to object storage, companies eliminate significant Tier 1 storage costs as well as massive backup software, server, network, and storage costs.
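
To make the custom-metadata point concrete, here is a minimal sketch using Python and the boto3 SDK against an S3-compatible endpoint. The endpoint, bucket name, object key, and metadata fields are illustrative assumptions, not a prescription for any particular product.

```python
# Minimal sketch: attaching custom metadata to an object in an
# S3-compatible store. Endpoint, bucket, key, and metadata keys
# are illustrative assumptions.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# Store a file as an object identified by its key, with custom metadata
# that travels with the object (project, retention class, source path).
with open("q4-report.pdf", "rb") as f:
    s3.put_object(
        Bucket="archive-bucket",
        Key="finance/2019/q4-report.pdf",
        Body=f,
        Metadata={
            "project": "year-end-close",
            "retention": "7y",
            "source-path": "/nas/finance/2019/q4-report.pdf",
        },
    )

# Later, the same key returns both the object and its custom metadata.
resp = s3.head_object(Bucket="archive-bucket", Key="finance/2019/q4-report.pdf")
print(resp["Metadata"])
```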

 

  • Resiliency

Traditional SAN and NAS storage systems have fundamental limitations in supporting massive amounts of data. For example, when it comes to data protection, it is not realistic to back up hundreds of petabytes of data. Object storage systems are designed not to require backups; instead, they store data with enough redundancy that the data is never lost. There are a couple of ways this can be achieved. The first is keeping multiple replicas of the data (RAID and replication). However, this can be very capacity intensive, since you need enough storage to hold the additional copies. The second, and more efficient, model is erasure coding. In its simplest terms, erasure coding uses math to create additional information that allows the original data to be recreated from a subset of its fragments. This is conceptually similar to RAID-5’s ability to rebuild data from the remaining drives after a drive failure, but it operates at much larger scale and with far less capacity overhead than keeping full replicas. The sketch below illustrates the idea.
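
Here is a minimal Python sketch of that recovery idea, reduced to simple XOR parity (the same principle behind RAID-5). Real object stores use more general codes such as Reed-Solomon that tolerate multiple simultaneous failures; the fragment sizes and data below are purely illustrative.

```python
# Conceptual sketch of erasure-coding-style recovery using XOR parity.
# Production systems use richer schemes (e.g., Reed-Solomon), but the
# principle is the same: lost fragments can be rebuilt from survivors.

def make_parity(fragments: list[bytes]) -> bytes:
    """XOR all data fragments together to produce one parity fragment."""
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild a single lost fragment from the survivors plus parity."""
    return make_parity(surviving + [parity])

# Split a piece of data into three equal fragments plus one parity fragment.
data = [b"OBJ-", b"STOR", b"AGE!"]
parity = make_parity(data)

# Simulate losing one fragment and recreating it from what remains.
lost = data.pop(1)
assert recover(data, parity) == lost == b"STOR"
```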

 

Additional Benefits

 

Cost

While there are several good use cases for object storage, one of the best is archiving and backing up files, and one of the primary reasons companies are turning to object storage is cost. Unlike enterprise NAS and SAN arrays, which tend to be among the most expensive infrastructure in most data centers, object storage can be significantly less expensive. For example, according to Gartner, the TCO for legacy storage systems is approximately $0.30/GB/month, while object/cloud storage is much closer to $0.02/GB/month. You can do the math (a rough example follows), but the savings can be substantial, and continuing to use enterprise NAS and SAN arrays for archival or backup data can be a poor use of resources. Note to the enterprise consumer: if you are a Data Domain user, look at archiving infrequently used data to object storage. This can save you millions in Data Domain costs!
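
As a back-of-the-envelope illustration of that math, here is a short sketch applying the per-GB figures quoted above to a hypothetical 500 TB archive (the capacity is an assumption for illustration only).

```python
# Back-of-the-envelope savings using the TCO figures quoted above.
# The 500 TB archive size is an illustrative assumption.
archive_tb = 500
gb = archive_tb * 1024

legacy_cost = gb * 0.30 * 12      # ~$0.30/GB/month on legacy NAS/SAN
object_cost = gb * 0.02 * 12      # ~$0.02/GB/month on object/cloud storage

print(f"Legacy storage: ${legacy_cost:,.0f}/year")   # ~$1.84M/year
print(f"Object storage: ${object_cost:,.0f}/year")   # ~$123K/year
print(f"Annual savings: ${legacy_cost - object_cost:,.0f}")
```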

 

Accessibility

While files and blocks are accessed through an operating system, object storage is accessed via RESTful APIs that perform the various storage functions. APIs like Amazon’s Simple Storage Service (S3) make objects accessible via HTTP(S) and facilitate management functions related to authentication, permissions, and object properties. Moreover, every interaction with an object uses simple HTTP verbs like PUT, GET, and DELETE (see the sketch below). Where possible, many companies are developing (or rewriting) their applications to leverage object storage instead of traditional SAN or NAS. In the meantime, most backup and archiving software solutions support writing data to S3.
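
The sketch below shows that request/response style of access in Python with boto3: each operation maps to an HTTP verb against the object’s key, and because the transport is plain HTTP(S), a presigned URL lets any HTTP client fetch an object. The endpoint, bucket, and key are illustrative assumptions.

```python
# Minimal sketch of object access over HTTP(S): each operation maps to
# an HTTP verb against the object's key. Endpoint, bucket, and key are
# illustrative assumptions.
import boto3
import requests

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# GET: read the object back by its key.
obj = s3.get_object(Bucket="archive-bucket", Key="finance/2019/q4-report.pdf")
data = obj["Body"].read()

# A time-limited presigned URL lets any HTTP client fetch the object
# without holding S3 credentials of its own.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "archive-bucket", "Key": "finance/2019/q4-report.pdf"},
    ExpiresIn=3600,
)
print(requests.get(url).status_code)

# DELETE: remove the object when it is no longer needed.
s3.delete_object(Bucket="archive-bucket", Key="finance/2019/q4-report.pdf")
```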

 

Getting Started with an Object Store

While StorageX can migrate data to less expensive storage tiers (e.g., from flash drives to magnetic disks), the most common use case we encounter is customers wanting to archive infrequently accessed data to an object store. StorageX natively supports the largest on-premises and cloud-based object store providers. Through an easy-to-use interface, StorageX users can analyze their unstructured data sets and archive that data to an object store immediately or on a scheduled basis. In addition, the archive job provides an opportunity to create additional custom tags specific to the job. These custom tags are added to a companion metadata object file and saved in the object store.

At any future time, customers can use the StorageX Retrieval Portal or any S3-compliant browser to query and retrieve files stored in the object store. Additionally, since the companion metadata object file is stored in industry-standard JSON format, customers can use analytics or BI tools to query the data (a small sketch follows).
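
As a rough illustration of that kind of query, here is a minimal Python sketch that reads a companion metadata object as JSON and filters it by a custom tag. The bucket, key, and JSON field names are hypothetical; the actual layout StorageX writes may differ.

```python
# Minimal sketch of querying companion metadata stored as JSON in the
# object store. Bucket, key, and JSON field names are hypothetical.
import json
import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# Fetch the companion metadata object and parse it as JSON.
resp = s3.get_object(Bucket="archive-bucket", Key="archive-job-42/metadata.json")
records = json.loads(resp["Body"].read())

# Filter archived files by a custom tag added during the archive job.
year_end = [r for r in records if r.get("tags", {}).get("project") == "year-end-close"]
for r in year_end:
    print(r["source_path"], "->", r["object_key"])
```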

The first step in an object storage evaluation is a complete analysis of your existing file data. Know the what, where, when, and why attributes hidden within your file metadata. From this analysis, you will be able to determine the most effective use of object storage while you maintain or improve SLAs and reduce cost!

 

Data Dynamics Can Help

Our team of file data management experts is available to assist you with a two-day free analytics assessment of your key file data. Please register here to receive your analytics-driven report.