StorageX 7.1 Release Notes

Last Updated: January 2, 2014

This version of the Data Dynamics StorageX product (StorageX) provides several new features. This version also improves usability and extends several capabilities. Many of these improvements were made in direct response to suggestions from our customers. These release notes outline why you should install this version, provide additions to the documentation, and identify the known and resolved issues.

Why install this version?

The Data Dynamics StorageX (StorageX) product is more than simply a data mover or storage migration tool. It is a software platform for adding storage management structure around unstructured local and distributed NAS file data. This includes heterogeneous vendor support across different storage system platforms, as well as support for both CIFS and NFS protocols.

StorageX provides storage infrastructure management capabilities that allow storage professionals to logically view distributed file storage, and then use policies to automate data movement across heterogeneous storage infrastructures.

The following sections outline the important new features and functionality provided in this version of StorageX:

Updates and enhancements to Migration Project designs

New Advanced Migration Project design capabilities

StorageX 7.1 provides new, advanced Migration Project design capabilities. When you create Migration Project designs for your Migration Projects, you can choose to create the following types of designs:

Like-to-Like Migration Project designs

Create Like-to-Like Migration Project designs when you want to create destination items on destination file storage resources that are similar to the source items on the source file storage resources. Consider the following examples:

    • Your source is a Data ONTAP file storage resource and you want to perform the following tasks:
      • Migrate file data from Data ONTAP volumes to volumes on aggregates on new Data ONTAP file storage resources, to file systems in storage pools on VNX OE for File file storage resources, or to the /ifs folder on OneFS file storage resources.
      • Migrate file data from Data ONTAP qtrees to volumes on Data ONTAP file storage resources, to file systems on VNX OE for File file storage resources, or to the /ifs folder on OneFS file storage resources.
    • Your source is a VNX OE for File file storage resource and you want to perform the following tasks:
      • Migrate file data from VNX OE for File file systems to volumes on aggregates on Data ONTAP file storage resources, to file systems on VNX OE for File file storage resources, or to the /ifs folder on OneFS file storage resources.
      • Migrate file data in tree quotas on VNX OE for File file storage resources to qtrees on volumes on Data ONTAP file storage resources, to file systems on VNX OE for File file storage resources, or to the /ifs folder on OneFS file storage resources.

Advanced Migration Project designs

Create Advanced Migration Project designs when you want to transform the way you store your file data and create items on destination file storage resources that are different from the source items on the source file storage resource. Consider the following examples:

    • You want to transform a qtree on a source Data ONTAP file storage resource to a volume on a destination Data ONTAP file storage resource.
    • You want to transform a qtree on a source Data ONTAP file storage resource to a file system on a destination VNX OE for File file storage resource or to a folder or subfolder under the /ifs folder on a destination OneFS file storage resource.
    • You want to transform a tree quota on a source VNX OE for File file storage resource to a volume on a destination Data ONTAP file storage resource or to a folder or subfolder under the /ifs folder on a destination OneFS file storage resource.
    • You want to transform a tree quota on a source VNX OE for File file storage resource to a file system on a destination VNX OE for File.

For more information about creating Migration Projects, Like-to-Like Migration Project designs, and Advanced Migration Project designs, see the Data Dynamics StorageX Administrator’s Guide or online help.

Ability to export Migration Project designs

StorageX 7.1 now allows you to export Migration Project designs to an .xls file. You can export existing Migration Project designs when you want an offline copy in Excel that others can use to review and verify source-to-destination mappings and other settings in the design. You can also quickly and easily export a Migration Project design when you want to save a copy of the design outside of StorageX for reporting or archival purposes.

Ability to edit and specify advanced options for existing Migration Project designs

StorageX 7.1 now allows you to specify advanced options for existing Migration Project designs. For example, you can edit or specify advanced options for an existing Migration Project design when you want to change or update some of the settings specified for the design based on your review of the design.

New StorageX reports

StorageX 7.1 now provides a new Reporting view with 10 new reports. Use the new Reporting view and new reports to help you better understand your file storage resource environment and how you are using StorageX Phased Migration policies and Migration Projects to migrate and manage your file data. The following types of reports are now included in the new Reporting view:

New Storage Resource reports

Provide information about CIFS shared folders and NFS exports on file storage resources managed by StorageX, as well as summary information about file storage resources managed by StorageX, including file storage resource names, IP addresses, the total number of shares and exports on resources, platform and status information, and more.

New Phased Migration Policy reports

Provide information about Phased Migration policies such as how long each policy run is taking, how much file data was migrated during each policy run, and policy run trends over time.

New Migration Project reports

Provide information about Phased Migration policies associated with Migration Projects and Migration Project designs. Use these reports to help you see how many files were copied from the source to the destination during the last policy run, the number of files StorageX was unable to copy during the last policy run, the file copy rate, and more.

You can also see summary information such as the number of Migration Project designs created for each Migration Project, the state of each design, the number of sources and destinations included in each design, the number of policies generated for a design if the design has been executed, and the number of policies in each design that are configured to run on a schedule versus the number of policies configured to be run manually.

New Agent reports

Help you discover and understand which replication agents are actively running policies, as well as see replication agent utilization trends day by day over the past 30 days.

Resolved issues

Platform field may display that platform API credentials are incorrect when they are correct

In StorageX 7.0, when you selected a file storage resource under My Resources in the Storage Resources view, if the selected file storage resource was offline, StorageX could incorrectly display the following error message in the Platform field:

The platform API credential specified is incorrect.

This issue has been addressed and no longer occurs in StorageX 7.1. In StorageX 7.1, once the file storage resource is online, Online displays correctly in the Platform field.

Migration Project designs with OneFS file storage resources as destinations fail when you include more than 25 source and destination mappings in the design

In StorageX 7.0, if you specified OneFS file storage resources in a Migration Project design and you had more than 25 source and destination mappings in the design, when you executed the design, the design execution failed and you had to manually delete any items on the destination file storage resource StorageX may have created when the Migration Project design executed, as well as manually delete any Phased Migration policies that StorageX created during design execution.

This issue has been addressed and no longer occurs in StorageX 7.1. In StorageX 7.1, there is no limitation on the number of source and destination mappings you can include in Migration Project designs with OneFS file storage resources as destinations.

Known issues

The following issues are known issues in StorageX 7.1:

  • Installation and configuration issues
    • StorageX database is not case sensitive
    • Changes to the Secure Shell (SSH) executable path do not take effect until you restart the StorageX server
    • VNXe file storage resources supported only in the Phased Migration Policy wizard in the Data Movement view
    • Data ONTAP, VNX OE for File, and OneFS file storage resources must be added to Storage Resources with correctly configured credentials when using the Phased Migration Policy wizard in the Data Movement view and specifying an entire file storage resource as a source or destination
    • NFS destination path invalid error displays even though valid NFS destination path was specified
  • Upgrade issues
    • No upgrade path from previous versions of Brocade StorageX
  • Agent management issues
    • Replication agent throttling settings are not honored
    • Replication agent port cannot be changed
    • New replication agent groups do not display in the Group field for other replication agents until you refresh the other replication agents
  • Storage resource management issues
    • NFS export name and local path must match
    • Cannot specify security style or oplocks settings when creating qtrees on Data ONTAP vFiler file storage resources
    • Newly created CIFS shared folders on Data ONTAP Vservers do not display until the parent folder is refreshed
    • Error creating or cloning an NFS export with a VNX server that has a large number of NFS exports
  • Migration Project issues
    • Information about provisioning actions not included in Phased Migration policy manifests
    • Migration Project design execution fails and Volume created successfully but not mounted error message displays
    • Migration Project design execution completes successfully, but VNX mount options are not applied when the destination file system is nested under a parent file system that is a regular, not an NMFS, file system
    • Final Phase of Phased Migration policies fail in NFS migrations if replication agents do not have root permissions on source exports prior to executing the Migration Project design
    • Volume quotas on Data ONTAP sources are not migrated to VNX OE for File destinations
    • User and group quotas on Data ONTAP sources are not migrated to VNX OE for File destinations
    • Migration Project design execution fails if the destination is a VNX OE for File Virtual Data Mover, an object on the source has the same name as an item on the destination, and the item on the destination with the same name is not visible to the destination VDM
    • Migration Project advanced design allows configuring the security style and oplocks settings when creating a volume on a Data ONTAP Vserver, but the settings are not used
    • Reading the quota for a domain user from a Data ONTAP source requires an HTTP or HTTPS connection when the user name contains a space
    • Export cloning fails when VNX is the destination, and the VNX server has a large number of NFS exports
  • Phased Migration policy issues
    • Unable to contact VNX device ‘VNXFileStorageResourceName’: The remote server returned an error: (503) Server Unavailable error message displays when running Phased Migration policies
    • Unable to mount the destination path error message displays when running Phased Migration policies
  • Reporting issues
    • Storage Resource Reports do not list the correct number of NFS exports for a VNX server when it has a large number of NFS exports

Installation and configuration issues

This section lists known issues related to StorageX installation and configuration.

StorageX database is not case sensitive

The StorageX database is not case sensitive. If you have two items with the same name specified in a different case, StorageX does not recognize the items as separate.

Consider the following examples:

  • Assume you have two Data ONTAP qtrees. The first qtree is named finance. The second qtree is named FINANCE. The Data ONTAP operating system recognizes these qtrees as different objects based on case. However, the StorageX database is not case sensitive. As a result, StorageX displays only one of the qtrees.
  • Assume you are using a Phased Migration policy to migrate NFS file data. You have two files on your source with file names that differ only by case. The first file is named 2013-Financials, and the second file is named 2013-FINANCIALS. In this scenario, StorageX migrates both files but reports that only one of the files was migrated in the policy manifest.
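The collapse behavior described in the first example can be sketched as follows. This is an illustrative model of a case-insensitive key lookup, not StorageX code; the function and names are hypothetical:

```python
# Illustrative sketch: a case-insensitive store keyed on a lowercased
# name collapses objects whose names differ only by case. This models
# the behavior described above; it is not StorageX code.

def register(store, name):
    """Add an item to a case-insensitive store keyed on the lowercased name."""
    store[name.lower()] = name  # a later item overwrites an earlier one
    return store

store = {}
for qtree in ["finance", "FINANCE"]:  # two distinct qtrees on Data ONTAP
    register(store, qtree)

# Only one entry survives, so only one qtree would be displayed.
print(len(store))  # 1
```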

Changes to the Secure Shell (SSH) executable path do not take effect until you restart the StorageX server

If you change the path specified for the Secure Shell (SSH) executable specified in the SSH Executable Location field on the File > Options > Shell Options tab in the StorageX Console, StorageX does not immediately recognize the change. You must stop and restart the StorageX server service before the change takes effect.

VNXe file storage resources supported only in the Phased Migration Policy wizard in the Data Movement view

StorageX currently only supports specifying VNXe file storage resources as sources and destinations in Phased Migration policies under the following conditions:

  • The Phased Migration policies that include VNXe file storage resources as sources or destinations are created using the Phased Migration Policy wizard in the Data Movement view.
  • You specify the full UNC path to the VNXe sources and destinations, including the share and any subfolders as appropriate.

StorageX does not support VNXe file storage resources as sources or destinations in Phased Migration policies created by Migration Projects in the Migration Projects view. You also cannot add VNXe file storage resources to the My Resources folder in the Storage Resources view and have StorageX manage the resource.

Data ONTAP, VNX OE for File, and OneFS file storage resources must be added to Storage Resources with correctly configured credentials when using the Phased Migration Policy wizard in the Data Movement view and specifying an entire file storage resource as a source or destination

If you want to use the Phased Migration Policy wizard in the Data Movement view and specify an entire Data ONTAP, VNX OE for File, or OneFS file storage resource as your source or destination, you must first add the file storage resource to the My Resources folder, or to a custom folder under the My Resources folder, in the Storage Resources view. You must also ensure that you configure appropriate credentials for the file storage resource. Once you have added the file storage resource to My Resources in the Storage Resources view and properly configured credentials, you can specify the entire file storage resource as a source or destination.

For more information about adding file storage resources to My Resources in the Storage Resources view and configuring credentials for file storage resources, see the StorageX Administrator’s Guide.

NFS destination path invalid error displays even though valid NFS destination path was specified

If you have specified a Data ONTAP 8 Cluster-Mode Vserver file storage resource as a source or destination in a Phased Migration policy but you have not added the Vserver to the My Resources folder, or to a custom folder under the My Resources folder, in the Storage Resources view, StorageX displays an NFS destination path invalid error even when the destination path is valid.

To resolve this issue, first add the Data ONTAP Vserver to the My Resources folder, or to a custom folder under the My Resources folder, in the Storage Resources view, and ensure that you configure appropriate credentials for the Vserver. Once you have added the Vserver to My Resources and properly configured credentials, this error message no longer displays.

For more information about adding file storage resources to My Resources in the Storage Resources view and configuring credentials for file storage resources, see the StorageX Administrator’s Guide.

Upgrade issues

This section lists known issues related to upgrading previous versions of StorageX to StorageX 7.1.

No upgrade path from previous versions of Brocade StorageX

There is no upgrade path from previous versions of Brocade StorageX to Data Dynamics, Inc. StorageX 7.1.

Agent management issues

This section lists known issues related to replication agent management.

Replication agent throttling settings are not honored

If you specify throttling settings for a replication agent on the Settings tab for a replication agent, StorageX does not honor the throttling settings you specify.

Replication agent port cannot be changed

The port specified for a replication agent on the General tab for a replication agent cannot be changed.

New replication agent groups do not display in the Group field for other replication agents until you refresh the other replication agents

When you create a new replication agent group on a replication agent, the new replication agent group displays immediately in the Group field drop-down list on the replication agent where the group was created. However, when you select a different replication agent and try to add this replication agent to the new group, the new group does not display in the Group field drop-down list.

To work around this issue, refresh the replication agent that you want to add to the group. After you refresh the replication agent, the new group now displays in the Group field drop-down list.

Storage resource management issues

This section lists known issues related to file storage resource management.

NFS export name and local path must match

Data ONTAP and VNX OE for File file storage resources support exporting a folder when the export name is different from the local path on the resource. NFS exports created using StorageX do not support this capability. When you create an NFS export in the Storage Resources view, StorageX allows you to specify only a path. You cannot specify both an export name and a path. In the Migration Projects view, when StorageX executes a Migration Project design and creates NFS exports on the destination, the NFS export name and local path are the same.

Cannot specify security style or oplocks settings when creating qtrees on Data ONTAP vFiler file storage resources

When you use StorageX to create qtrees on volumes on Data ONTAP file storage resources, typically you can specify a security style for the qtree, as well as specify whether you want to enable or disable oplocks. However, if you are creating a qtree on a volume that is owned by a Data ONTAP vFiler in the Storage Resources view, you cannot specify a security style or oplocks setting on the qtree. New qtrees created on volumes owned by vFilers inherit the security style and oplocks setting from the volume.

In addition, if you plan to use StorageX Migration Projects with Data ONTAP vFilers as destinations, consider the following scenarios:

  • If you are moving a volume from a source Data ONTAP file storage resource to a volume on a destination Data ONTAP vFiler, when you execute the Migration Project design, StorageX creates the qtree on a volume on the destination vFiler, and the qtree on the destination inherits the security style and oplocks setting from the destination volume.
  • If you are moving a VNX OE for File File System from a source VNX OE for File file storage resource to a volume on a destination Data ONTAP vFiler, when you execute the Migration Project design, StorageX creates the qtree on a volume on the destination vFiler, and the qtree inherits the security style and oplocks setting from the destination volume.
  • If you are moving a Data ONTAP qtree or VNX OE for File tree quota from a Data ONTAP or VNX OE for File file storage resources to a volume on a destination Data ONTAP vFiler file storage resource, StorageX creates the qtree on a volume on the destination vFiler, and the qtree inherits the security style and oplocks setting from the destination volume.

If you plan to use StorageX Migration Projects with Data ONTAP vFilers as destinations, after you execute the Migration Project design, ensure you verify the security style and oplocks settings on the destination qtrees before you migrate file data to the destinations using Migration Project Phased Migration policies.

If you want to change the qtree security style or oplocks settings on the destination, connect to the vFiler file storage resource directly and change the settings. For more information, see the Data ONTAP CLI command documentation.
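For example, on a 7-Mode system the settings can typically be changed with the qtree commands run in the vFiler context. The vFiler, volume, and qtree names below are placeholders; confirm the exact syntax in the Data ONTAP CLI documentation for your version:

```
# Run from the parent storage system; names are placeholders.
vfiler run vfiler1 qtree security /vol/vol1/qtree1 ntfs
vfiler run vfiler1 qtree oplocks /vol/vol1/qtree1 enable
```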

Newly created CIFS shared folders on Data ONTAP Vservers do not display until the parent folder is refreshed

When you create a new CIFS shared folder on a Data ONTAP Vserver in the Storage Resources view using the Share Creation wizard, when you click Finish, the StorageX user interface immediately refreshes. However, the new CIFS shared folder does not immediately display in the list of CIFS shared folders on the Data ONTAP Vserver. This is because the Data ONTAP Vserver has not yet finished initializing the new CIFS shared folder.

To work around this issue, right-click the parent folder, and then click Refresh. After the refresh, the new CIFS shared folder should now display in the list of CIFS shared folders on the Data ONTAP Vserver.

Error creating or cloning an NFS export with a VNX server that has a large number of NFS exports

When you run the NFS Create Export wizard on a VNX server, if the VNX server has a large number of NFS exports, the wizard may fail to create the export. A similar problem may occur when you run the NFS Clone Exports wizard and a VNX server is either the source or destination of the export cloning.

A VNX server has an upper limit on the buffer size it will use when responding to a mount request to enumerate the NFS exports on the server. Because of this limit, the VNX server may not return information for all NFS exports. This happens when the amount of data required to list all exports is greater than the maximum buffer size configured for the VNX server. In testing StorageX with the default VNX configuration, the VNX returned information for approximately 3,000 exports even though it had 8,000 exports configured (the exact number returned will vary based on the lengths of the export paths).

Because the VNX server does not return all NFS exports, the StorageX server attempts to create an export that already exists.

The fix to this issue is to increase the upper limit of the buffer size used by the VNX server when it responds to a mount request to enumerate its NFS exports. In the EMC VNX Series Release 7.1 Parameters Guide for VNX for File document, the upper limit is documented as being controlled by the mount facility parameter tcpResponseLimit. The default value is 262144, but it can be increased up to 1048576. The needed size depends on the number of NFS exports (and their corresponding path lengths) defined on the VNX server. See the VNX documentation for more details.
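As a sketch of the parameter change, the following can be run from the VNX Control Station. The Data Mover name (server_2) and value are examples; confirm the syntax against the EMC VNX Parameters Guide for your release:

```
# Raise the mount response buffer limit (Data Mover name is an example):
server_param server_2 -facility mount -modify tcpResponseLimit -value 1048576
# Verify the new setting:
server_param server_2 -facility mount -info tcpResponseLimit
```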

Migration Project issues

This section lists known issues related to StorageX Migration Projects.

Information about provisioning actions not included in Phased Migration policy manifests

If you are migrating file data using Migration Projects, information about provisioning actions performed when you execute the Migration Project design is not included in policy manifests.

For example, if you are migrating file data stored on a volume on a Data ONTAP 7 file storage resource using Migration Projects and the volume on the source has a quota, you can choose whether you want StorageX to create a new volume on the destination with the same quota, or whether you want StorageX to create a new volume on the destination with a different quota.

If you specify that you want to create a new volume on the destination with a different quota, when you execute your Migration Project design, StorageX creates the new volume on the destination and immediately applies the new quota. However, the information about the quota applied to the new volume is not written to the policy manifest.

To verify that StorageX created the volume with the correct quota, use a native tool such as NetApp OnCommand System Manager to check the new volume on the destination Data ONTAP file storage resource.

However, if there are user or group quotas on sources, these user and group quotas are applied on the destinations when the Phased Migration policy runs and you can see this information in the policy manifest.

Migration Project design execution fails and Volume created successfully but not mounted error message displays

If you create an Advanced Migration Project design to migrate file data to a Data ONTAP 8 Cluster-Mode file storage resource and you specify a mount path that already exists on the destination, when you execute the Migration Project design, the design execution fails and the following error message displays:

Volume created successfully but not mounted.

In addition, when you execute the design, StorageX will create the volume, but no mount path will exist for the volume.

To avoid this issue, ensure that you do not specify a mount path that already exists when creating Advanced Migration Project designs.

Migration Project design execution completes successfully, but VNX mount options are not applied when the destination file system is nested under a parent file system that is a regular, not an NMFS, file system

In VNX, when you mount a file system, you can mount the file system under the default VNX namespace, or you can mount the file system under another file system. For example, assume you have a regular file system, FS01, mounted as /FS01. Then assume that you have another regular file system, FS02, mounted as /FS01/FS02, which means the FS02 file system is nested under the FS01 file system. In addition, in VNX, there are regular file systems and Nested Mount File Systems (NMFS).

If you want to create a file system nested under another file system, ensure you create the parent file system as a Nested Mount File System (NMFS), not as a regular file system. You cannot change the mount option for a regular file system if the regular file system is nested under another regular file system. You can only change the mount option for a file system mounted under another file system if that file system is an NMFS file system. This behavior is due to limitations with VNX technology.

For example, assume you are creating an Advanced Migration Project design, and you specify the mount path you want StorageX to create on the VNX OE for File destination and whether you want to use the Oplocks, NT Credentials, Read Only, or Disable Virus Checker mount path options.

If you enter a nested mount path, and if the parent file system is a regular file system and not an NMFS file system, and you specify a mount option other than the default mount option, the Migration Project design will complete successfully and StorageX will create the new file system on the destination. However, StorageX will use the default mount path option, and will not apply any other mount path options you specified in your Advanced Migration Project design.

If you then want to change the mount option, you will be unable to do so.

If you want to be able to change the default mount option on a nested file system, ensure the parent file system is an NMFS file system.

Final Phase of Phased Migration policies fail in NFS migrations if replication agents do not have root permissions on source exports prior to executing the Migration Project design

When migrating file data using the NFS protocol and Phased Migration policies generated from a Migration Project design, ensure that the replication agents you want to use to migrate the file data using the NFS protocol have root permissions on all NFS exports on the source file storage resources before you execute the Migration Project design and generate Phased Migration policies for the project.

If the replication agents you want to use to migrate the file data using the NFS protocol do not have root permissions on the source NFS exports before you execute the Migration Project design and generate Phased Migration policies for the project, the Phased Migration policies fail when they run in the Final Phase. To continue, you must manually grant the replication agents root access on all of the NFS exports on the source and then manually run the policies again.

Volume quotas on Data ONTAP sources are not migrated to VNX OE for File destinations

StorageX does not support the migration of volume quotas from source Data ONTAP file storage resources to destination VNX OE for File file storage resources.

User and group quotas on Data ONTAP sources are not migrated to VNX OE for File destinations

StorageX does not support the migration of local user and group quotas or domain user and group quotas from source Data ONTAP file storage resources to destination VNX OE for File file storage resources.

Migration Project design execution fails if the destination is a VNX OE for File Virtual Data Mover, an object on the source has the same name as an item on the destination, and the item on the destination with the same name is not visible to the destination VDM

If your destination is a VNX OE for File Virtual Data Mover (VDM) and you have an object on your source, such as a Data ONTAP volume or VNX OE for File File System, with the same name as a VNX OE for File File System on your destination, and the VNX OE for File File System with the same name is not visible to the destination VDM, when you validate the Migration Project design, the design validates successfully. However, when you execute the Migration Project design, the design execution fails and a This name already exists error message displays.

To avoid this issue, ensure that source object names do not already exist on destination VNX OE for File VDM destinations when you create Migration Project designs.

Migration Project advanced design allows configuring the security style and oplocks settings when creating a volume on a Data ONTAP Vserver, but the settings are not used

When you use a StorageX Advanced Migration Project design to create a volume on a Data ONTAP Vserver file storage resource, the options are available to specify the security style and oplocks settings for the volume. However, the security style and oplocks settings in the advanced design are not used and StorageX creates a volume that inherits these settings from the root volume of the Vserver.

If you plan to use StorageX Migration Projects with Data ONTAP Vservers as destinations, after you execute the Migration Project design, ensure you verify the security style and oplocks settings on the destination volumes before you migrate file data to the destinations using Migration Project Phased Migration policies.

If you want to change the volume security style or oplocks settings on the destination, connect to the Vserver file storage resource directly and change the settings. For more information, see the Data ONTAP CLI command documentation.
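For illustration only, the following clustered Data ONTAP CLI commands sketch how these settings can be changed. The Vserver, volume, and qtree names are placeholders, and exact command names and options vary by Data ONTAP version, so verify them against the CLI reference for your release:

```
# Change the security style of an existing volume
# (vs1 and vol1 are placeholder names):
volume modify -vserver vs1 -volume vol1 -security-style ntfs

# Oplocks are managed per qtree in clustered Data ONTAP;
# enable them on a placeholder qtree path:
volume qtree oplocks -vserver vs1 -qtree-path /vol/vol1/qt1 -oplock-mode enable
```

After changing the settings, verify them again before migrating file data to the destination volumes.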

Reading the quota for a domain user from a Data ONTAP source requires an HTTP or HTTPS connection when the user name contains a space

StorageX fails to read quota information for a domain user from a Data ONTAP source when using an RPC connection and the user name contains a space. In this case, execution of the Migration Project design fails. To work around this issue, you must change the connection type for the Data ONTAP source to use HTTP or HTTPS.

Export cloning fails when VNX is the destination, and the VNX server has a large number of NFS exports

When you execute a Migration Project design, and the destination of the migration project is a VNX server with a large number of NFS exports, the export creation phase may fail with the following error in the “Sharing data on destination” node of the manifest:

Failed sharing resources '': Failed to create NFS export on destination machine.

A VNX server has an upper limit on the buffer size it will use when responding to a mount request to enumerate the NFS exports on the server. Because of this limit, the VNX server may not return information for all NFS exports. This happens when the amount of data required to list all exports is greater than the maximum buffer size configured for the VNX server. In testing StorageX with the default VNX configuration, the VNX returned information for approximately 3,000 exports even though it had 8,000 exports configured (the exact number returned will vary based on the lengths of the export paths).

Because the VNX server does not return all NFS exports, the StorageX server attempts to create an export that already exists, resulting in the error listed above.

The fix to this issue is to increase the upper limit of the buffer size used by the VNX server when it responds to a mount request to enumerate its NFS exports. In the EMC VNX Series Release 7.1 Parameters Guide for VNX for File document, the upper limit is documented as being controlled by the mount facility parameter tcpResponseLimit. The default value is 262144, but it can be increased up to 1048576. The needed size depends on the number of NFS exports (and their corresponding path lengths) defined on the VNX server. See the VNX documentation for more details.
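As a sketch, the parameter can be inspected and raised from the VNX Control Station with the server_param command. The Data Mover name server_2 is a placeholder, and you should consult the EMC VNX Series Release 7.1 Parameters Guide for VNX for File to confirm the syntax and whether the change requires a Data Mover restart:

```
# Display the current value of the mount facility parameter
# (server_2 is a placeholder Data Mover name):
server_param server_2 -facility mount -info tcpResponseLimit

# Raise the limit toward its documented maximum of 1048576:
server_param server_2 -facility mount -modify tcpResponseLimit -value 1048576
```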

Phased Migration policy issues

This section lists known issues related to StorageX Phased Migration policies.

Unable to contact VNX device ‘VNXFileStorageResourceName’: The remote server returned an error: (503) Server Unavailable error message displays when running Phased Migration policies

StorageX may display the error: Unable to contact VNX device 'VNXFileStorageResourceName': The remote server returned an error: (503) Server Unavailable under the following conditions:

  • You install the StorageX server on a Windows computer that does not have sufficient processing power or memory, and you then try to run several hundred Phased Migration policies at the same time, all of which have VNX OE for File file storage resources as destinations. In this scenario, when the Phased Migration policies run and try to clone shared folders from the source to the destination, the Unable to contact VNX device 'VNXFileStorageResourceName': The remote server returned an error: (503) Server Unavailable error message displays.
  • You have installed the StorageX server on a computer with sufficient processing power and memory, but other storage applications connecting to the VNX OE for File file storage resource are consuming many of the available connections. In this scenario, the same error message displays when the Phased Migration policies run and try to clone shared folders from the source to the destination.

This issue is caused by a limitation in the EMC VNX API. By design, VNX file storage resources allow a maximum of 16 connections. This 16-connection limit is designed to prevent denial-of-service attacks.

If you are running the StorageX server on a Windows computer without sufficient processing power or memory, you can resolve this issue by installing the StorageX server on a newer computer with more processing power and memory. For more information about StorageX server computer requirements, see the StorageX Administrator’s Guide.

If you are running the StorageX server on a Windows computer with sufficient processing power and memory but other storage applications are consuming the connections, you can resolve this issue by stopping the other storage applications that are consuming connections while you run a large number of Phased Migration policies at the same time.

Unable to mount the destination path error message displays when running Phased Migration policies

When you run a Phased Migration policy, and the source or destination of the migration project is a VNX server with a large number of NFS exports, the policy may fail with the following error message:

Unable to mount the destination path or Unable to mount the source path

A VNX server has an upper limit on the buffer size it will use when responding to a mount request to enumerate the NFS exports on the server. Because of this limit, the VNX server may not return information for all NFS exports. This happens when the amount of data required to list all exports is greater than the maximum buffer size configured for the VNX server. In testing StorageX with the default VNX configuration, the VNX returned information for approximately 3,000 exports even though it had 8,000 exports configured (the exact number returned will vary based on the lengths of the export paths).

Because the VNX server does not return all NFS exports, the StorageX replication agent does not find the NFS export, resulting in the error listed above.

The fix to this issue is to increase the upper limit of the buffer size used by the VNX server when it responds to a mount request to enumerate its NFS exports. In the EMC VNX Series Release 7.1 Parameters Guide for VNX for File document, the upper limit is documented as being controlled by the mount facility parameter tcpResponseLimit. The default value is 262144, but it can be increased up to 1048576. The needed size depends on the number of NFS exports (and their corresponding path lengths) defined on the VNX server. See the VNX documentation for more details.

Reporting issues

This section lists known issues related to StorageX Reports.

Storage Resource Reports do not list the correct number of NFS exports for a VNX server when it has a large number of NFS exports

When you view a report that lists NFS exports for a VNX server (e.g., the Exports report or the Storage Resource Summary report), the number of NFS exports listed in the report may be less than the number of exports configured on the VNX server if the VNX server has a large number of NFS exports.

A VNX server has an upper limit on the buffer size it will use when responding to a mount request to enumerate the NFS exports on the server. Because of this limit, the VNX server may not return information for all NFS exports. This happens when the amount of data required to list all exports is greater than the maximum buffer size configured for the VNX server. In testing StorageX with the default VNX configuration, the VNX returned information for approximately 3,000 exports even though it had 8,000 exports configured (the exact number returned will vary based on the lengths of the export paths).

Because the VNX server does not return all NFS exports, the StorageX reports that include information about NFS exports do not show the correct number of NFS exports for the VNX server.

The fix to this issue is to increase the upper limit of the buffer size used by the VNX server when it responds to a mount request to enumerate its NFS exports. In the EMC VNX Series Release 7.1 Parameters Guide for VNX for File document, the upper limit is documented as being controlled by the mount facility parameter tcpResponseLimit. The default value is 262144, but it can be increased up to 1048576. The needed size depends on the number of NFS exports (and their corresponding path lengths) defined on the VNX server. See the VNX documentation for more details.

Additions to documentation

The product documentation is up-to-date and provides the latest information. For more information about system requirements, installing the product, and using the product, see the StorageX Administrator’s Guide and the StorageX online help.