Manage Orphaned Data

How to Manage Your Orphaned Data

With the exception of structured data, many companies are unaware of which files reside within their Windows shares or NFS exports. Over time, data has moved from department to department and project to project. It has been created, gone unused, and been left orphaned by users leaving the company or by corporate restructuring.

Managing Orphaned Data with StorageX 8.0

StorageX 8.0 introduces our File Analytics web portal.  The web portal displays a dashboard representing the results of data scans and their subsequent analysis.  Each data scan interrogates one or more specified shares or exports, tags the files, and compiles the file metadata into the file analytics database.  Once the metadata is in the database, we can query the tags and metadata to narrow the scope of data that is of concern.

File metadata can help a company determine the use, ownership, file type, file size, creation date, access date, last modified date, and many other criteria that can inform decisions about where the data should be stored.  For the purposes of this discussion, we will focus on ownership: specifically, unowned or orphaned data.

Read the rest of the whitepaper. 

Want to learn more about StorageX?

StorageX Analysis Portal Demo

StorageX Petabyte Scale Migration

 

StorageX is fast (baseline copy)

In a previous article, I stated that StorageX is multi-threaded. I also spent quite a bit of time discussing why I consider this fact to be (mostly) irrelevant to the administrator who is using StorageX to perform his file system migrations. What the user of StorageX really wants is for StorageX to do its job as fast as possible: when he is doing a baseline copy, he wants StorageX to fill his network pipe and move the data as quickly as possible, and when he is cutting over to his shiny new NAS hardware, he wants StorageX to do the final incremental copy within his allotted cutover window.

As I mentioned in my previous article, the techniques StorageX uses to fill the network pipe during a baseline copy are very different from those used to find changed files as quickly as possible during an incremental copy. In this article, I will focus on baseline copies.

Continue reading StorageX is fast (baseline copy)

Distributed File System

Petabyte Scale Migration – A Personal Story

Are you facing a large petabyte scale migration?

Here’s how one Fortune 100 company used StorageX to successfully manage a ten-petabyte migration in 20% of the time required by traditional tools.

Client Requirements:

  • Eliminate origin storage issues
  • Address unstructured data growth
  • Scale to accommodate hundreds of cutovers weekly
  • Move both CIFS & NFS in a mixed environment
  • Maintain security
  • Attain buy-in from business units

Project Details

  • CIFS and Mixed Mode shares
  • 4,000-5,000 total shares
  • Up to 40 TB per share
  • Up to 180 million files per share

Results

  • 10 years’ worth of work done in 2 years (2 PB in the first four months)
  • 10 PB optimized in total
  • 50% cost savings realized

How StorageX Completed the Project Successfully and Under Budget

  • Phased migration: Using the GUI, the customer could easily configure migration policies (baseline, incremental, and final copy). This reduced the downtime window required to migrate the NAS data.
  • Security permissions and file attributes: StorageX copied security permissions (in Windows) and mode bits and file attributes (in Unix). No manual intervention or correction was needed.
  • Migration summary: A detailed summary report of each migration, with start and end time stamps, bytes copied, number of files and folders copied, skipped or deleted files, and error logs.

Customer Story

Data growth, equipment end-of-life (EOL), and increasing support costs are leading organizations to consolidate and eliminate older, more expensive storage systems. Using traditional “free” tools, the cost of these data migrations is significant and can present substantial business and technical challenges.

The Drawbacks of Typical Approaches 

A variety of tools are typically required to move data between arrays from different vendors or to newer equipment, and traditional migration solutions do little to simplify the process.

Host-based tools can consume CPU cycles and I/O bandwidth, reducing the performance of other business applications. Array-based solutions either do not support heterogeneous storage environments or offer a one-way transfer that locks customers into a single-vendor solution. Appliance-based solutions require a service technician to enter the data center, install the appliance in the data path between the host and the storage device, and then remove the appliance after performing the migration. The result is expensive, intrusive, not readily scalable, and requires downtime before and after the migration. To efficiently meet ongoing migration needs, storage administrators need a simpler, less disruptive, and more cost-effective way to migrate data between heterogeneous storage arrays.

Why StorageX File Management?

StorageX is a software-based solution that facilitates file-based storage management. Its simple yet powerful GUI-based tool simplifies the migration, consolidation, and archiving of file data in large, complex, heterogeneous file storage environments. StorageX is fully automated and uses a policy-driven approach to data management.

StorageX supports various NAS devices (EMC VNX/VNX OE for File and Isilon/OneFS, NetApp/Data ONTAP 7-Mode and Cluster Mode, Windows, and Linux) as both sources and destinations, as well as stand-alone CIFS and NFS file storage resources. StorageX software can be installed on virtual or physical servers, and installation is a simple and straightforward process. We migrated more than 3,000 NAS shares between a variety of source and destination storage systems and servers. Some of the major benefits of StorageX are explained below.

Benefits of StorageX:

  1. GUI interface:
    The graphical user interface makes it very easy to handle bulk NAS migrations. With the GUI, we can add storage resources (source and destination) and create data movement policies for NFS/CIFS shares by specifying replication and migration options.
  2. Migration Policies:
    Migration policies define the data movement between sources and destinations. For CIFS shared folders and NFS exports, several configuration options are available. Enter the data in the migration template carefully, as it is the basis for the migration options. Options are available to copy directories only, delete orphaned files/folders on the destination, copy security settings, choose file attributes, filter files by age, exclude files/folders, etc.  After the initial baseline copy phase, continuous incremental copies replicate new, locked, or recently modified files from the source to the destination. In the final cutover phase, StorageX options allow you to remove user access to the source, perform a short final sync to copy any files recently added or updated, and then share the new destination with users. This phased migration approach (baseline, incremental, and final copy) reduces the overall downtime window required to migrate the NAS data.
  3. Migration Schedule:
    Once the baseline copy is completed, subsequent incremental copies can be automated through migration schedules. Migrations can be scheduled every minute, hourly, daily, weekly, etc.
  4. Automated Emails:
    Email notifications can be configured for effective monitoring. We did not need to log in to the tool to check migration status; emails alerted us under different conditions (completed successfully, completed with errors/warnings, or cancelled).
  5. Migration Summary Report:
    Summary reports give a detailed status of the migration, with start and end time stamps, bytes copied, and the number of files and folders copied. If the migration is cancelled with errors or is unsuccessful, the error logs clearly indicate the cause, so we could troubleshoot the issue and retry the data copy. If certain files are skipped or deleted, that is also indicated in the summary.
  6. Security Permissions and File Attributes:
    A major challenge in NAS data migration is copying the security permissions (in Windows) and the mode bits and file attributes (in Unix). StorageX copies security permissions and file attributes smoothly from source to destination based on the options specified in the migration template.  No manual intervention or correction is needed after the final data copy.
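The phased copy approach described under Migration Policies above can be sketched schematically. This toy Python example uses dicts to stand in for the source and destination file systems; it is an illustration of the baseline/incremental/final-copy pattern, not StorageX code.

```python
def phased_migration(source, destination, incremental_rounds=2):
    """Sketch of the baseline / incremental / final-copy phases.

    `source` and `destination` are dicts of {path: content}; in a real
    migration these would be live file systems, with users still writing
    to `source` between phases.
    """
    # Phase 1: baseline -- copy everything once. This is the long pass.
    destination.update(source)

    # Phase 2: incrementals -- repeatedly copy only what changed since
    # the last pass, so each pass is shorter than the one before.
    for _ in range(incremental_rounds):
        changed = {p: c for p, c in source.items() if destination.get(p) != c}
        destination.update(changed)

    # Phase 3: final cutover -- with user access to the source removed,
    # one last short sync catches any stragglers before users are
    # pointed at the destination.
    final = {p: c for p, c in source.items() if destination.get(p) != c}
    destination.update(final)
    return destination
```

The point of the pattern is that only the baseline pass touches every file; the cutover window only has to cover the small final sync.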

Have you been putting off the need to update or replace thousands of aging NAS volumes containing millions of old files?

If so, you need StorageX.

StorageX is the ONLY file management solution proven capable of managing massive petabyte migrations.  It is no wonder that six of the world’s twelve largest banks rely on StorageX.

Contact Data Dynamics

 

NAS Object Storage

The Future of NAS is Object

If you’ve been around the computer industry long enough, you’ve witnessed major shifts in technology.

One such shift took place in 1992, when NetApp, Inc. was founded.  NetApp pioneered a new enterprise storage technology called Network Attached Storage (NAS).  Compared to traditional direct-attached storage (DAS) and storage area network (SAN) storage, NAS offered access to large amounts of storage in an easy-to-manage appliance.  NAS was enabled by improved network technology and new high-capacity disk drives, and NetApp combined these technologies in a way that revolutionized the enterprise storage market.

In the 25 years since its introduction, NAS remains a highly viable enterprise storage solution, but some might say it’s beginning to show its age….

Continue reading the new white paper.

You Need the Right Equipment for the Job

The world’s largest dump truck is the Belaz 75710, with a payload capacity of nearly 500 tons and a top speed of 40 miles per hour.  When hauling enormous amounts of dirt, you need the Belaz.

But why are we talking about mining trucks?  To make a point.

If you are responsible for moving huge amounts of dirt, you would not use a shovel.  No question.  You would get something with MUCH larger capacity like a dump truck.

So, when you have to move petabytes of data, what do you use?  The answer:  StorageX.

Only StorageX is up to the task of moving and managing hundreds of thousands of volumes containing petabytes of data, and we have the proof to back up our claim.  Our customers, which include 24 of the Fortune 100, use StorageX to analyze, move, and manage millions of unstructured files.

Based on actual projects, customers report that with StorageX they cut total project time in half with 50% less effort.  Common file management tasks include migration, replication, and archiving.

The full power of StorageX is available via a robust RESTful API. You can program repetitive data movement policies to copy, move, archive, and delete files. The StorageX API powers custom large-scale migrations, customer service applications, and IT Help Desk applications (to name a few), orchestrating automatic file movement and file share provisioning.

As an example, StorageX can be integrated into an IT self-service portal to automate a routine task like provisioning a new file share. The developer kit features options to integrate data movement with the cloud for file archival and disaster protection.

The opportunities for file migration, replication, and archiving using the full power of the StorageX API are endless.
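As an illustration only, the sketch below shows what a client of such a REST API might look like in Python. The endpoint path, field names, and `copySecurity` option here are hypothetical stand-ins, not the documented StorageX API; consult the actual StorageX API reference for real resource names and payload shapes.

```python
import json

# Hypothetical base URL -- a real deployment would use its own host.
BASE_URL = "https://storagex.example.com/api"

def build_migration_policy(name, source, destination, schedule="hourly"):
    """Assemble the JSON body for a hypothetical 'create policy' call."""
    return {
        "name": name,
        "type": "migration",
        "source": source,            # e.g. a CIFS share or NFS export path
        "destination": destination,
        "schedule": schedule,        # baseline once, then scheduled incrementals
        "copySecurity": True,        # preserve ACLs / mode bits during the copy
    }

def create_policy_request(policy):
    """Return the (url, headers, body) a client would POST to create the policy."""
    url = f"{BASE_URL}/policies"
    headers = {"Content-Type": "application/json"}
    return url, headers, json.dumps(policy)

# A real client would then send the request, e.g. with the requests library:
#   requests.post(url, headers=headers, data=body, auth=(user, password))
```

A self-service portal could call a helper like this from a ticketing workflow, so that provisioning a new file share or kicking off an archive policy becomes a single automated step.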

  • Are you currently in the middle of a large technology refresh or file system re-structuring project?
  • Are you trying to move and re-locate thousands of volumes and petabytes of data?

You need StorageX.  Even if you are moving tens or hundreds of terabytes of data, you need StorageX.  To learn more, visit our web pages.