A Dilemma That Will Impact Many Startups in 2018!

The past six years have seen a ton of start-ups in the infrastructure management space. A flood of new companies has been created to address the massive challenge of managing the plumbing, the underlying compute and storage, that facilitates the growth of the Internet of Things, Public and Private Cloud and Artificial Intelligence. The ease of raising funding from angels, venture capital and private equity has given entrepreneurs an easy source of capital. What a great time to be a start-up….or is it? I liken the current start-up space to a gambler (in this case an investor) debating whether to bet more chips on another hand, double down on the hand that exists or walk away from the table.

2017 saw almost 4 billion internet users, Gartner predicted 8 billion connected things, and AI became more prevalent and started to make an impact on our daily lives. We truly are in an information technology renaissance, with the world becoming virtually smaller, new innovations shaping our daily routines and the workforce transforming to meet the needs of a digital economy. Every day we see new technologies make front-page news, from online shopping to self-driving cars to cryptocurrency to robotics and AI. Each and every one of these next-generation innovations requires core and edge computing capability, vast amounts of storage to keep data that can be mined and utilized, and networks through which the information can flow across the globe. The underlying infrastructure is evolving, with new innovations from legacy vendors and start-ups alike to meet the needs of the market.

This exciting era of technology has led to crowdfunding, angels, super angels, venture capitalists for different stages of growth and private equity all pumping money into new ideas and companies. From raising thousands to raising billions, staying private and raising as much capital as required has become the mantra of most start-ups, avoiding the scrutiny of the public markets and all that comes with it. Venture and private equity funds have raised tens, if not hundreds, of billions of dollars to invest in the next Amazon or Alibaba! Nobody wants to miss the party and everyone wants ‘in’ on the 10-20-50x return that awaits upon an exit! This is analogous to sitting at a blackjack table where everyone around is winning: the enthusiasm keeps building, players keep increasing their bets and doubling down, because there are no signs of a losing hand. Investors see their other investments or their peers making multi-fold returns via a unicorn exit, and the exuberance continues in stride.

Unfortunately, most start-ups never think about profitability and focus solely on customer acquisition, top-line growth or, worse yet, the number of users/clicks without any direct correlation to financial metrics. A focus on customer acquisition and top-line growth is an essential component of a company’s growth curve, but at some stage there has to be a path to profitability. Start-ups in today’s world don’t worry about profit because they are more focused on raising the next round of funding, and then the next, and the next, and before long the company has raised tens if not hundreds of millions without earning a dollar. What’s amazing to me is that investors continue to do round after round of investment despite knowing that throwing good money after bad doesn’t make sense. The challenge is that, once they are ‘in’, they have to keep investing because they need to show their limited partners (LPs) that the investments they’ve made are continuing to progress. Keeping the blackjack analogy in mind, think of that same exuberant table when a couple of the players lose a hand or two. The gambling mindset says the loss is a fluke: it won’t happen to me, or it definitely won’t happen twice in a row. If the gambler keeps playing, maybe even increasing the stake, a win will surely yield rewards.

What you will see in 2018 is that a large majority of these start-ups will end up closing or being sold for pennies on the dollar. The reason is not that the technologies are not good; it is that the companies are not profitable, they are not within sight of becoming profitable, and the investment dollars for new capital are drying up. There are several reasons investors are unwilling or unable to invest further. First, the investors have a time horizon that may be coming due. Most funds have a ‘life’, typically 10 years from inception, so a fund is bound to exit from its investments before that horizon runs out. Second, going public requires delivering on numbers. The public markets are rewarding companies that meet or exceed forecasts and just as harshly punishing those that don’t. Financials do matter, and the public market is clear that you must show profitability, or a path to it, in order to continue to be supported with a strong share price. There are exceptions to this, but even those exceptions face a crazy roller-coaster ride in their share price. The other option is a private exit, a sale to a strategic acquirer. The challenge is that most large companies are extremely smart and have fairly mature M&A processes, not to mention activist investors monitoring every major spend. They are not going to pay multiples if they know the company is going to run out of money and is on its last breath. In addition, they will not take on a transaction unless it is strategic and can be additive to their earnings, or has only a small short-term negative earnings impact. Going back to my gambling analogy: the gambler has a flight to catch and needs to leave the table soon, and must decide what to do. Bet more chips and double down, take a new hand, or simply walk away? My feeling is that many in 2018 will either take ‘even money’ or take the loss and walk away!

To my fellow entrepreneurs: we are the dealers of each hand, and making the gambler win is in our best interest. Focus on profit, and the adage ‘the house never loses’ will definitely come to fruition. Best of luck in 2018!

Cloud Computing Storage Infrastructure

The Cloud: Transforming How We Manage Storage Infrastructure

When I first heard of ‘The Cloud,’ I thought it was just marketing jargon used by technology companies to create a false new market.

In reality, The Cloud, in its various forms, is re-defining how we access, utilize, and manage software, hardware, and IT services.

Continue reading The Cloud: Transforming How We Manage Storage Infrastructure

Manage Orphaned Data

How to Manage Your Orphaned Data

With the exception of structured data, many companies are unaware of which files are present within their Windows shares or NFS exports. Over time, data has moved from department to department and project to project. It has been created, left unused, and orphaned by users leaving the company or by corporate restructurings.

Managing Orphaned Data with StorageX 8.0

StorageX 8.0 introduces our File Analytics web portal.  The web portal displays a dashboard representing the results of data scans and subsequent analysis.  Each data scan interrogates a specified share, export, or multiple shares and exports.  The scan tags and compiles the file metadata into the file analytics database.  Once the metadata is in the database, we can query the tags and metadata to narrow down the scope of data that is of concern.

File metadata can help a company determine the use, ownership, file type, file size, creation date, access date, last-modified date, and many other criteria that inform decisions about where the data should be stored. For the purposes of this discussion, we will focus on ownership, specifically unowned or orphaned data.
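To make the idea concrete, here is a minimal sketch, not StorageX itself, of using file metadata to surface candidate orphaned data on a POSIX export: it flags files whose owning UID no longer resolves to a named account, or that have not been accessed in a long time. The mount path and staleness threshold are illustrative assumptions.

```python
# Conceptual sketch only: flag files whose owning UID no longer maps to a
# known account, plus files untouched for a long time -- two common signals
# of orphaned data. Paths and thresholds are illustrative assumptions.
import os
import pwd
import time

SHARE_ROOT = "/mnt/nfs/projects"          # hypothetical NFS export mount
STALE_AFTER_DAYS = 365 * 2                # treat two years untouched as stale

def scan_for_orphans(root):
    cutoff = time.time() - STALE_AFTER_DAYS * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue                  # skip unreadable entries
            try:
                owner = pwd.getpwuid(st.st_uid).pw_name
            except KeyError:
                owner = None              # UID no longer resolves: likely orphaned
            stale = st.st_atime < cutoff
            if owner is None or stale:
                yield path, owner, st.st_size, stale

if __name__ == "__main__":
    for path, owner, size, stale in scan_for_orphans(SHARE_ROOT):
        print(f"{path}\towner={owner or 'UNRESOLVED'}\tsize={size}\tstale={stale}")
```

The output of a scan like this is the kind of raw material a metadata catalog can aggregate and report on.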

Read the rest of the whitepaper. 

Want to learn more about StorageX?

StorageX Analysis Portal Demo

StorageX Petabyte Scale Migration


Object Storage Popularity

Why is Object Storage Growing in Popularity?

Object Storage is the buzz of the storage industry, and for good reason: it encompasses file, block, and object access portals into the same pool of raw object storage.

Compared to traditional file and block storage, object storage offers massive petabyte scalability and built-in availability. Its distributed design removes the risk posed by any single drive failure. Object storage nodes can be combined to deliver near-unlimited capacity and consistent access performance.

At Data Dynamics, we are very excited about the potential for object storage. In the recent release of StorageX 8.0, we unveiled new file-to-object conversion to support S3-compliant object storage. Based on feedback from our customers, file-to-object conversion was the number one requested new feature.
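Conceptually, file-to-object conversion boils down to writing a file into an S3-compatible bucket while carrying its filesystem metadata along as object metadata. The snippet below is a minimal sketch of that idea using boto3; it is not the StorageX implementation, and the endpoint, bucket, and metadata keys are assumptions.

```python
# Minimal file-to-object sketch (not the StorageX implementation): upload a
# file to an S3-compatible bucket and preserve basic filesystem metadata as
# user-defined object metadata. Endpoint, bucket, and key names are examples.
import os
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example.internal")  # assumed endpoint

def file_to_object(path, bucket, key_prefix=""):
    st = os.stat(path)
    key = key_prefix + os.path.basename(path)
    s3.upload_file(
        Filename=path,
        Bucket=bucket,
        Key=key,
        ExtraArgs={
            "Metadata": {                      # stored as x-amz-meta-* headers
                "source-path": path,
                "mtime": str(int(st.st_mtime)),
                "mode": oct(st.st_mode & 0o7777),
            }
        },
    )
    return key

# Example: file_to_object("/mnt/shares/finance/report.xlsx", "archive-bucket")
```

Carrying the original path, timestamps, and permission bits as object metadata is what lets file semantics survive the move into an object namespace.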

StorageX is the file management solution of choice for 6 of the 12 largest banks in the world.  Large banks that actively manage petabytes of data rely on object storage for its massive scalability, availability and economies of scale.  One of our customers, whose name we cannot reveal, was kind enough to share how they recently completed a 10 PB filesystem refresh using StorageX.

When measured across all our customers, the benefits delivered by StorageX are truly incredible:

  • Reduce storage-related operational costs by 50%
  • Deploy new storage technology 66% faster
  • Modernize applications for a 10X productivity improvement

Data Dynamics is proud to work with all its technology partners who share our passion for object storage.  We are witnessing a MAJOR shift in the storage industry and we are excited about the future potential of object storage.

Get on the Cloud

How DevOps Adoption is Changing

To gather insights on the state of DevOps, we spoke with 22 executives at 19 companies implementing DevOps for themselves and helping clients to implement a DevOps methodology. We asked, “How has DevOps changed since you began using the methodology?” Here’s what they told us:

Adoption

  • As I talk to customers and prospects there’s greater awareness of DevOps and what it can do. It’s being taken more seriously. There is sufficient proof of organizations doing well with DevOps. This is a business process change.
  • A survey we just conducted reflected the insurgency within companies against the way things have always been done. One year ago, it was, “what is DevOps?” Today there’s a common understanding, along with a desire to know how to scale.
  • CD has become mainstream. Microservices are more commonplace and are a good way to be successful with DevOps. Being in the cloud gives you more flexibility. Containers are becoming more mainstream. Function as a service is a helpful way to solve scaling issues.
  • People are beginning to understand the benefits of DevOps. Best practices have been solidified. Allows you to get code from developers to customers in a fast and secure way.

Read the Full Article

StorageX is fast (baseline copy)

In a previous article, I stated that StorageX is multi-threaded. I also spent quite a bit of time discussing why I consider this fact to be (mostly) irrelevant to the administrator who is using StorageX to perform his file system migrations. What the user of StorageX really wants is for StorageX to do its job as fast as possible: when he is doing a baseline copy, he wants StorageX to fill his network pipe and move the data as quickly as possible, and when he is cutting over to his shiny new NAS hardware, he wants StorageX to do the final incremental copy within his allotted cutover window.

As I mentioned in my previous article, the techniques StorageX uses to fill the network pipe during a baseline copy are very different from those used to find changed files as quickly as possible during an incremental copy. In this article, I will focus on baseline copies.
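For readers who want a feel for the approach, here is a minimal, hypothetical sketch of a multi-threaded baseline copy: a pool of worker threads copies files concurrently so the network pipe stays full while individual copies wait on disk and network I/O. It is not StorageX's engine; the paths and thread count are illustrative assumptions.

```python
# Conceptual multi-threaded baseline copy (not StorageX's engine): walk the
# source tree and copy files in parallel so the network link stays busy while
# individual copies wait on I/O. Paths and thread count are illustrative.
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

SOURCE = "/mnt/source_share"      # assumed source mount
DEST = "/mnt/dest_share"          # assumed destination mount
WORKERS = 16                      # tune to saturate, not overwhelm, the link

def copy_one(src_path):
    rel = os.path.relpath(src_path, SOURCE)
    dst_path = os.path.join(DEST, rel)
    os.makedirs(os.path.dirname(dst_path), exist_ok=True)
    shutil.copy2(src_path, dst_path)   # copies data plus timestamps/permissions
    return rel

def baseline_copy():
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        futures = []
        for dirpath, _dirs, files in os.walk(SOURCE):
            for name in files:
                futures.append(pool.submit(copy_one, os.path.join(dirpath, name)))
        for f in futures:
            f.result()                  # surface any copy errors

if __name__ == "__main__":
    baseline_copy()
```

Parallelism helps here because each individual copy spends most of its time waiting on I/O, so many copies in flight keep the pipe full.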

Continue reading StorageX is fast (baseline copy)

GDPR Regulations

New Sweeping GDPR Regulations Set to become Law in 2018

With the rapid rise of technology and the borderless nature of the modern digital economy, governments have had to adapt to provide better data protection and improve the fundamental rights of data subjects. On May 25, 2018, the world’s most sweeping data privacy regulation, the European Union’s General Data Protection Regulation (GDPR), will become law.

The GDPR gives EU residents the right to request from organizations whatever personal data is being stored about them and to withdraw consent for its use, thus effectively ordering its destruction. Per Article 12 of the GDPR, this request must be free of charge, easy to make, and must be fulfilled without “undue delay and at the latest within one month.”

The GDPR contains four key mandates:

  • Accountability and Governance – Maintain relevant documentation on data processing activities and implement measures that demonstrate compliance, such as audits.
  • Storage Limitation – Personal data may not be kept for longer than is necessary for the purposes for which it was originally obtained.
  • Breach Notification – A notifiable breach must be reported to the relevant supervisory authority within 72 hours of the organization becoming aware of it.
  • Individual Rights – An individual may request the deletion or removal of personal data when there is no compelling reason for its continued existence.

GDPR aims to encourage organizations to be more accountable, transparent and responsible for any personal data they hold. Any entity that stores or processes the personal data of EU residents will be obligated to conform to this new law, regardless of where that organization resides.  Further, it empowers EU residents to control the data that an organization may hold on them.

Implications for File Management

GDPR demands improved data governance for files that contain personal information of a customer or employee. File shares may contain millions of files widely distributed across incompatible storage resources, making it a challenge to comply with GDPR rules. A file management solution that can work across heterogeneous storage resources and provide the ability to analyze, move and manage files for GDPR compliance is a necessity.

  • The StorageX Dynamic File Management platform empowers you to analyze, move and manage your files for GDPR compliance. StorageX is built using industry standards and operates seamlessly across heterogeneous storage resources, freeing your data from technology lock-in, complexity and risk.
  • Using StorageX’s integrated analytics, you can quickly analyze files based on file name, type, size, location, creation date, last access, attributes, SID and more. Files that contain personal data can be marked with custom tags so they can be easily managed in the future (a minimal illustration follows this list).
  • When action is required (move, copy, delete), StorageX’s automated data movement policies facilitate the transfer of SMB/NFS source files to file resources more suitable for GDPR management. Move entire shares or exports to a new location with speed and reliability.  StorageX reports record all file actions to document compliance for audits.
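As a simple illustration of the analyze-and-tag idea (not the StorageX analytics engine), the sketch below walks a share, flags files whose names match a few example personal-data patterns, and writes the resulting tags to a small catalog that later actions could consume. The share path, patterns, and catalog format are all assumptions for the example.

```python
# Minimal illustration (not the StorageX analytics engine): scan a share and
# attach a "personal-data" tag to files whose names match simple patterns, so
# tagged files can later be moved, reported on, or deleted for GDPR requests.
# The patterns, paths, and tag store are assumptions for the example.
import csv
import os
import re

SHARE_ROOT = "/mnt/smb/hr_share"                       # hypothetical share
PERSONAL_DATA_PATTERNS = [
    re.compile(r"payroll", re.IGNORECASE),
    re.compile(r"passport", re.IGNORECASE),
    re.compile(r"\bcv\b|resume", re.IGNORECASE),
]

def tag_personal_data(root, catalog_path="gdpr_tags.csv"):
    with open(catalog_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["path", "size_bytes", "last_access", "tag"])
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if any(p.search(name) for p in PERSONAL_DATA_PATTERNS):
                    path = os.path.join(dirpath, name)
                    st = os.stat(path)
                    writer.writerow([path, st.st_size, int(st.st_atime),
                                     "personal-data"])

if __name__ == "__main__":
    tag_personal_data(SHARE_ROOT)
```

In practice, name patterns are only one signal; a real GDPR workflow would combine them with ownership, location, and content-aware classification before acting on the tagged files.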

To learn how StorageX can help your organization manage its data for GDPR compliance, contact Data Dynamics Sales.


Distributed File System

Petabyte Scale Migration – A Personal Story

Are you facing a large petabyte scale migration?

Here’s how one Fortune 100 company used StorageX to successfully manage a ten-petabyte migration in 20% of the time required by traditional tools.

Client Requirements:

  • Eliminate origin storage issues
  • Address unstructured data growth
  • Scale to accommodate 100s of cutovers weekly
  • Move both CIFS & NFS in a mixed environment
  • Maintain security
  • Attain buy-in from business units

Project Details

  • CIFS and Mixed Mode shares
  • 4,000-5,000 total shares
  • Up to 40 TB per share
  • Up to 180 million files per share

Results

  • 10 years of work done in 2 years (2 PB in the first four months)
  • 10 PB optimized in total
  • 50% cost savings realized

How StorageX Completed the Project Successfully and Under-Budget

  • Phased migration: Using the GUI, the customer could easily configure migration policies (baseline, incremental, and final copy). This reduced the downtime window required to migrate the NAS data.
  • Security permissions and file attributes: StorageX copied security permissions (in Windows) and mode bits and file attributes (in Unix). No manual intervention or correction was needed.
  • Migration summary: A detailed summary report of the migration included start and end time stamps, bytes copied, the number of files and folders copied, skipped or deleted files, and error logs.

Customer Story

Data growth, equipment end-of-life (EOL), and increasing support costs are leading organizations to consolidate and eliminate older, more expensive storage systems. Using traditional “free” tools, the cost of these data migrations is significant and can present substantial business and technical challenges.

The Drawbacks of Typical Approaches 

A variety of tools are typically required to move data between arrays from different vendors or to newer equipment, and traditional migration solutions do little to simplify the process.

Host-based tools can consume CPU cycles and I/O bandwidth, reducing the performance of other business applications. Array-based solutions either do not support heterogeneous storage environments, or offer a one-way transfer that locks customers into a single-vendor solution. Appliance-based solutions require a service technician to enter the data center, install the appliance in the data path between the host and the storage device, and then remove the appliance after performing the migration. The result is expensive, intrusive, not readily scalable, and requires downtime before and after the migration. To efficiently meet ongoing migration needs, storage administrators need a simpler, less disruptive, and more cost-effective way to migrate data between heterogeneous storage arrays.

Why StorageX File Management?

StorageX is a software-based solution that facilitates file-based storage management. Its simple yet powerful GUI-based tooling simplifies the migration, consolidation, and archiving of file data in large, complex, heterogeneous file storage environments. StorageX is fully automated and uses a policy-driven approach to data management.

StorageX supports various NAS devices (EMC VNX/VNX OE for File and Isilon/OneFS, NetApp/Data ONTAP 7-Mode and Cluster Mode, Windows, and Linux) as both sources and destinations, as well as stand-alone CIFS and NFS file storage resources. StorageX software can be installed on virtual or physical servers, and installation is a simple and straightforward process. We completed more than 3,000 NAS migrations between a variety of source and destination storage systems and servers. Some of the major benefits of StorageX are explained below.

Benefits of StorageX:

  1. GUI interface:
    The graphical user interface makes it very easy to handle bulk NAS migrations. With the GUI, we can add storage resources (source and destination) and create data movement policies for NFS/CIFS shares by specifying the replication and migration options.
  2. Migration Policies:
    Migration policies define the data movement between sources and destinations. For CIFS shared folders and NFS exports, there are several configuration options available. Carefully input the data in the migration template, as it is the basis for the migration options. Options are available to copy directories only, delete orphaned files/folders on the destination, copy security settings, choose file attributes, filter files by age, exclude files/folders, and more. After the initial baseline data copy phase, continuous incremental copies replicate new, locked, or recently modified files from the source to the destination. In the final cut-over phase, StorageX options allow you to remove user access to the source, perform a short final sync to copy any files recently added or updated, and then share the new destination with users. This phased migration approach (baseline, incremental, and final copy) reduces the overall downtime window required to migrate the NAS data (see the sketch after this list).
  3. Migration Schedule:
    Once the baseline copy is completed, subsequent incremental copies can be automated through migration schedules. Migrations can be scheduled every minute, hourly, daily, weekly, etc.
  4. Automated Emails:
    Email notifications can be configured for effective monitoring. We did not need to log in to the tool to check the migration status; emails alerted us under different conditions (completed successfully, completed with errors/warnings, or cancelled).
  5. Migration Summary Report:
    Summary reports give a detailed status of the migration, including start and end time stamps, bytes copied, and the number of files and folders copied. If the migration is cancelled with an error or is unsuccessful, the error logs clearly indicate the cause, so we can troubleshoot, fix the issue, and retry the data copy. If certain files are skipped or deleted, that is also indicated in the summary.
  6. Security Permissions and File Attributes:
    A major challenge in NAS data migration is copying security permissions (in Windows) and mode bits and file attributes (in Unix). StorageX copies security permissions and file attributes smoothly from source to destination based on the options specified in the migration template. No manual intervention or correction is needed after the final data copy.
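To illustrate the phased flow described in the migration policies item above, here is a rough sketch of baseline, incremental, and final cut-over passes. It is not the StorageX policy engine; rsync stands in as a generic copy mechanism, and the hosts, paths, pass counts, and timings are assumptions.

```python
# Rough sketch of a phased migration flow (baseline -> incrementals -> final
# cutover sync). This is NOT the StorageX policy engine; rsync stands in as a
# generic copy mechanism, and hosts, paths, and timings are assumptions.
import subprocess
import time

SOURCE = "filer-old:/vol/projects/"     # assumed source export
DEST = "/mnt/filer-new/projects/"       # assumed destination mount

def sync(delete_extraneous=False):
    cmd = ["rsync", "-a", "--numeric-ids"]
    if delete_extraneous:
        cmd.append("--delete")          # mirror deletions on the final pass
    cmd += [SOURCE, DEST]
    subprocess.run(cmd, check=True)

def migrate(incremental_passes=5, interval_seconds=3600):
    sync()                               # 1. baseline copy (the longest phase)
    for _ in range(incremental_passes):  # 2. periodic incrementals shrink the delta
        time.sleep(interval_seconds)
        sync()
    # 3. cutover window: remove user access to the source out of band, run one
    #    short final sync so the destination is exact, then re-share it.
    sync(delete_extraneous=True)

if __name__ == "__main__":
    migrate()
```

The point of the repeated incrementals is that each pass leaves a smaller delta, so the final sync inside the cutover window is short and the downtime is minimal.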

Have you been putting off the need to update or replace thousands of aging NAS volumes containing millions of old files?

If so, you need StorageX.

StorageX is the ONLY file management solution proven capable of managing massive petabyte-scale migrations. It is no wonder that six of the twelve largest banks in the world rely on StorageX.

Contact Data Dynamics