
EMC MULTI-TENANT FILE STORAGE SOLUTION

Multi-Tenant File Storage with EMC VNX and Virtual Data Movers

White Paper

Global Solutions Sales

Provide file storage services to multiple tenants from a single array

Monetize investments in existing VNX storage capacity

Realize ROI sooner and reduce storage TCO

Abstract

This white paper explains how Virtual Data Movers (VDMs) on EMC VNX systems can be configured and leveraged to provide multiple CIFS and NFS endpoints. This allows service providers to offer multiple file system containers to multiple tenants on a single or multiple physical EMC VNX storage arrays.

June 2013


    Copyright 2013 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

    All trademarks used herein are the property of their respective owners.

    Part Number H12051


    Table of contents

Executive summary
    Business case
    Solution overview
    Key results and recommendations
Introduction
    Purpose
    Scope
    Audience
    Terminology
Technology overview
    EMC VNX series
        Virtual Data Movers
        Physical Data Movers
        EMC Unisphere
Solution architecture and design
    Architecture overview
    Hardware components
    Software components
    Network architecture
        EMC VNX57XX network elements
    Design considerations
Solution validation
    Objective
    Test scenario
    Server and storage configuration
    VDM configuration
        Storage pool configuration
        Create a VDM
        Create a user file system
        Create a mount point
        Check VDM status
        Create the VDM network interface
        Attach a VDM interface to the VDM
    File system configuration
        Mount the file system to a VDM
        Export the file system to server hosts
    VDM configuration summary
    Scripted deployment
    Cloud platform-attached file systems
        VMware ESXi 5.1 NFS data stores
    Test procedures
        Use IOzone to generate I/O
    Test results
        Physical Data Mover high availability
        VNX Data Mover load
Conclusion
    Summary
References


    Executive summary

Business case

Multi-tenancy within private and public clouds includes any cloud architecture or infrastructure element within the cloud that supports multiple tenants. Tenants can be separate companies or business units within a company.

To provide secure multi-tenancy and address the concerns of cloud computing, mechanisms are required to enforce isolation of user and business data at one or more layers within the infrastructure. These layers include:

Application layer: A specially written multi-tenant application, or multiple separate instances of the same application, can provide multi-tenancy at this layer.

Server layer: Server virtualization and operating systems provide a means of separating tenants and application instances on servers, and of controlling utilization of and access to server resources.

Network layer: Various mechanisms, including zoning and VLANs, can be used to enforce network separation.

Storage layer: Mechanisms such as LUN masking and SAN zoning can be used to control storage access. Physical storage partitions segregate and assign resources into fixed containers.

Achieving secure multi-tenancy may require the use of one or more mechanisms at each infrastructure layer.

This white paper focuses on how to enforce separation at the network and storage layers to allow cloud providers and enterprises to deploy multi-tenant file storage on EMC VNX storage arrays. The deployment of multi-tenant file storage within the EMC VNX storage platform can act as an enabler for cloud providers and enterprise businesses to offer File System-as-a-Service to their customers or business units.

Solution overview

The solution described in this white paper uses EMC VNX unified storage and Virtual Data Mover (VDM) technology, which enables logical partitioning of the physical resources of the VNX into many containerized logical instances to serve multiple NAS tenants.

Key results and recommendations

This solution enables private and public cloud providers that are selling or supporting IT-as-a-Service (ITaaS) cloud storage services to host multiple NAS file storage environments on one or more physical EMC VNX storage platforms.

Cloud storage providers who want to offer a choice of multi-tenant NAS file storage services from multiple storage vendors can now offer EMC VNX file storage to multiple tenants.

Investments in existing VNX storage capacity can be monetized further through hosting multiple tenants on a single storage platform, helping to accelerate the return on investment and reduce the storage total cost of ownership (TCO).


    Introduction

Purpose

The purpose of this white paper is to provide the necessary level of detail for the design and deployment of secure multiple file systems within the Data Mover and VDM constructs of the EMC VNX storage platform, enabling public and private cloud providers to standardize multi-tenant file storage.

Scope

Throughout this white paper, we¹ assume that you have hands-on experience with the EMC VNX storage platform, including the CLI, and familiarity with EMC Unisphere.

You should also have a good understanding of networking fundamentals and a good overall grasp of the concepts related to virtualization technologies and their use in cloud and data center infrastructures. Detailed configuration and operational procedures are outlined, along with links to other white papers and documents.

Audience

This white paper is intended for EMC employees, partners, and customers, including IT planners, system architects and administrators, and anyone else who is interested in deploying file storage to multiple tenants on new or existing EMC VNX storage platforms.

Terminology

Table 1 shows the terminology that is used in this white paper.

Table 1. Terminology

802.1Q trunk: A trunk port is a network switch port that passes traffic tagged with an 802.1Q VLAN ID. Trunk ports are used to maintain VLAN isolation between physical switches or compatible network devices, such as the network ports on a storage array. An LACP port group can also be configured as a trunk port to pass tagged VLAN traffic.

Common Internet File System (CIFS): A file-sharing protocol based on the Microsoft Server Message Block (SMB) protocol that enables users to access shared file storage over a network.

Data Mover: Within the VNX platform offering file storage, the Data Mover is a hardware component that provides the NAS presence and protocol support to enable clients to access data on the VNX using NAS protocols such as NFS and CIFS. Data Movers are also referred to as X-Blades.

¹ In this white paper, "we" refers to the EMC engineering team that validated the solution.


Domain: A logical grouping of Microsoft Windows servers and other computers that share common security and user account information. All resources, such as computers and users, are domain members and have an account in the domain that uniquely identifies them. The domain administrator creates one user account for each user in the domain, and users log in to the domain once; they do not log in to each server.

LACP: A high-availability feature based on the IEEE 802.3ad Link Aggregation Control Protocol (LACP) standard, which allows Ethernet ports with similar characteristics on the same switch to combine into a single logical port, or link, with a single MAC address and potentially multiple IP addresses. This feature is used to group ports into what appear to be logically larger links with aggregated bandwidth.

Lightweight Directory Access Protocol (LDAP): An industry-standard information access protocol. It is the primary access protocol for Active Directory and LDAP-based directory servers. LDAP version 3 is defined in Internet Engineering Task Force (IETF) RFC 2251.

Network File System (NFS): A network file system protocol that allows a user on a client computer to access shared file storage over a network.

Network Information Service (NIS): A distributed data lookup service that shares user and system information across a network, including usernames, passwords, home directories, groups, hostnames, IP addresses, and netgroup definitions.

Storage pool: Groups of available disk volumes, organized by Automatic Volume Management (AVM), that are used to allocate available storage to file systems. They can be created automatically by AVM or manually by the user.

Virtual Data Mover (VDM): An EMC VNX software feature that enables the grouping of file systems, NFS endpoints, and CIFS servers into virtual containers. These run as logical components on top of a physical Data Mover.

VLAN: Logical networks that function independently of the physical network configuration and provide a means of segregating traffic across a physical network or switch.


    Technology overview

EMC VNX series

The VNX family of storage arrays is designed to deliver maximum performance and scalability, enabling private and public cloud providers to grow, share, and cost-effectively manage multiprotocol file and block systems. EMC VNX series storage is powered by Intel processors for intelligent storage that automatically and efficiently scales in performance, while ensuring data integrity and security.

    Virtual Data Movers

A VDM is an EMC VNX software feature that enables the grouping of file systems, CIFS servers, and NFS endpoints into virtual containers. Each VDM contains all the data necessary to support one or more CIFS servers and NFS endpoints and their associated file systems. The servers in a VDM store their dynamic configuration information (such as local users, local groups, shares, security credentials, audit logs, NS domain configuration files, and so on) in a configuration file system. A VDM can then be loaded (active state) and unloaded (mounted but inactive state), moved from Data Mover to Data Mover, or replicated to a remote Data Mover as an autonomous unit. The servers, their file systems, and their configuration data are available in one virtual container.

VDMs enable system administrators to group file systems and NFS server mount points. Each VDM contains the necessary information to support one or more NFS servers. Each VDM has access only to the file systems mounted to that VDM, providing logical isolation between VDMs and their NFS mount points.

    Physical Data Movers

A physical Data Mover is a component within the VNX platform that retrieves data from the associated disk storage and makes it available to a network client; the Data Mover can use the CIFS and NFS protocols.

    EMC Unisphere

EMC Unisphere is the central management platform for the EMC VNX series, providing a single combined view of file and block systems, with all features and functions available through a common interface. Figure 1 is an example of how the properties of a Data Mover, named server_2, are presented through the Unisphere interface on a VNX5700 system.


    Figure 1. The server_2 Data Mover on the Unisphere interface on VNX5700


    Solution architecture and design

Architecture overview

To validate the functionality and performance of VDMs on EMC VNX series storage, we implemented multiple VDMs to simulate a multi-tenant environment. Each VDM was used as a container that included the file systems exported by the NFS endpoint. The NFS exports of the VDM are visible through a subset of the Data Mover network interfaces assigned to the VDM, as shown in Figure 2. The clients can then access the Data Mover network via different VLANs for network isolation and secure access to the data.

    Figure 2. Architecture diagram

Within the EMC VNX57xx series used in this solution, the Data Movers and VDMs have the following features:

A single physical Data Mover supports the NFS services for different tenants, each with their own LDAP, NIS, and DNS configurations, by separating the services for each tenant in their own VDM.

The file systems exported by each VDM are not accessible by users of different VDMs.

Each tenant is served by a different VDM, addressed through a subset of logical network interfaces configured on the Data Mover.

The file systems exported by a VDM can be accessed by CIFS and by NFSv3 or NFSv4 over TCP. The VDM solution compartmentalizes the file system resources; consequently, only file systems mounted on a VDM can be exported by that VDM.


Hardware components

Table 2 lists the hardware components used in solution validation.

Table 2. Hardware components

Item (units): Description

EMC VNX5700 (1): File version 7.1.56.5; Block version 05.32.000.5.15

Cisco MDS 9509 (2): Version 5.2.1

Cisco UCS B200 M2 Blade Server (4): Intel Xeon X5680 six-core processors, 3.333 GHz, 96 GB RAM

Software components

Table 3 lists the software components used in solution validation.

Table 3. Software components

Item (version): Description

EMC Unisphere (1.2.2): Management tool for the EMC VNX5700

VMware vCenter Server (5.1): 2 vCPU, Intel Xeon X7450, 2.66 GHz, 4 GB RAM, Windows 2008 Enterprise Edition R2 (x64)

VMware vSphere (5.1, Build 799733)

CentOS (6.3): 2 vCPU, Intel Xeon X5680, 2 GB RAM

Cisco UCS Manager (2.0(4b)): Cisco UCS server management tool

Plink (Release 0.62): Scripting tool

IOzone (3.414): I/O generation tool

Network architecture

A key component of the solution is the aggregation and mapping of network ports onto VDMs. This makes use of industry-standard features of the EMC VNX Data Mover network ports, which can tag and identify traffic to a specific logical network or VLAN. The tagged traffic is then effectively isolated between different tenants and maintained across the network.

If multiple logical network connections are configured between the clients and the VNX, the network traffic can be distributed and aggregated across the multiple connections to provide increased network bandwidth and resilience. SMB3 clients, such as Windows 8 and Windows Server 2012, natively detect and take advantage of multiple network connections to the VNX. A similar benefit can be provided to NFS clients by logically grouping interfaces with the LACP protocol.

When using LACP, traffic is distributed across the individual links based on the chosen algorithm, which is determined by configuration on the EMC VNX and the network switch. The most suitable traffic distribution algorithm should be selected based on how hosts access and communicate with the storage. When configuring LACP, the choice of IP, MAC, or TCP port-based traffic distribution should be made based on the relationship of hosts to the server; this involves examining how conversations occur in the specific environment and whether any changes to the default policy are required. The default policy is IP address-based traffic distribution.

Individual network interfaces and LACP port groups can also be configured as an 802.1Q trunk to pass 802.1Q tagged traffic. An 802.1Q tag is used to identify that a packet belongs to a specific logical network or VLAN. By assigning multiple logical interfaces to a trunk port, a different VLAN can be associated with each interface. When each logical interface is configured for a different VLAN, a packet is accepted only if its destination IP address is the same as the IP address of the interface, and the packet's VLAN tag is the same as the interface's VLAN ID.

The Layer 2 network switch ports for servers, including the VNX, are configured to include 802.1Q VLAN tags on packets sent to the VNX. The server is responsible for interpreting the VLAN tags and processing the packets appropriately. This enables the server to connect to multiple VLANs and their corresponding subnets through a single physical connection.

The example in Figure 3 shows how a physical Data Mover is configured to support a tenant user domain in a VDM.

    Figure 3. VDM configuration within the physical Data Mover

In this example, we configured a VDM called VDM-Saturn, which represents a tenant user. The logical VDM network interface for VDM-Saturn is called Saturn-if. On the physical Data Mover we configured an LACP trunk interface, TRK-1, which uses two 10 Gb Ethernet ports, fxg-1-0 and fxg-2-0.

The trunk port TRK-1 was associated with VLAN A for accessing its defined host network to enforce tenant and domain isolation. VLAN A was associated with a VLAN ID on the network switch to allow communication between clients and the file system.
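The array-side commands behind this example are not reproduced in the transcript. As an illustrative sketch only, an LACP trunk device such as TRK-1 can be created from the Control Station CLI with server_sysconfig; the device and port names below are the ones used in this example, and the options should be verified against the EMC VNX Command Line Interface Reference for File:

# server_sysconfig server_2 -virtual -name TRK-1 -create trk -option "device=fxg-1-0,fxg-2-0 protocol=lacp"

The 802.1Q VLAN tag is then applied when a logical interface is created on the trunk device with server_ifconfig (see the interface example later in this paper).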


    EMC VNX57XX network elements

Within the EMC VNX57XX series, file system access is made via the network ports on the physical Data Mover. The EMC VNX can support between two and eight Data Movers, depending on the model; these are configured as either active or standby. A Data Mover can be configured with a combination of quad-port 1 Gb and dual-port 10 Gb network interface cards. Each network interface port supports the LACP and 802.1Q industry-standard features to allow either VLAN trunk or host mode. Network interfaces can also be combined using LACP to form logical links.

For more details on the networking aspects of the VNX platform, refer to Configuring and Managing Networking on VNX.

Design considerations

The current VDM implementation and functionality has the following characteristics:

The VDM supports the CIFS, NFSv3, and NFSv4 protocols over TCP. All other protocols, such as FTP, SFTP, FTPS, and iSCSI, are not supported.

NFSv3 clients must support NFSv3 over TCP to connect to an NFS endpoint.

A number of factors determine how many file systems can exist on a Data Mover: the number of mount points, storage pools, and other internal file systems. The total number of VDMs, file systems, and checkpoints cannot exceed 2048 per Data Mover.

The maximum number of VDMs per VNX array corresponds to the maximum number of file systems per Data Mover. Each VDM has a root file system, which reduces the total count by one, and any file systems created on those VDMs further reduce the total. The common practice is to create and populate each VDM with at least two file systems. This reduces the maximum number of VDMs per VNX as follows:

2048 / 2 = 1024, minus 1 (root file system) = 1023

Although this limit of 1023 exists, EMC currently supports a maximum of 128 VDMs configured on a physical Data Mover.

A physical Data Mover (including all the VDMs it hosts) does not support overlapping IP address spaces. It is therefore not possible to host two different tenants that use the same IP addresses or subnet ranges on the same Data Mover; such tenants must be placed on separate physical Data Movers.

In provisioning terms, when determining on which physical Data Mover to provision a new tenant, and hence a VDM, the provisioning logic must determine whether there is an IP space conflict between the new tenant and the existing tenants on that Data Mover. If there is no clash, the new tenant can be provisioned to the Data Mover; if there is a clash, the new tenant must be provisioned to a different Data Mover.

If a physical Data Mover crashes, all of its file systems, VDMs, IP interfaces, and other configuration are loaded by the standby Data Mover, which takes over the failed Data Mover's identity. The result is that everything comes back online as if it were the original Data Mover.


A manual planning exercise is required to accurately balance workloads between physical Data Movers because, in the current implementation, there is no automated load balancing of VDMs on a physical Data Mover.


    Solution validation

Objective

To validate this solution, the objective was to test the configuration of multiple VDMs for NFS and how they performed under I/O load. Specifically, network file-based NFS data stores were configured on NFS file shares. We deployed several open source CentOS 6.3 virtual machines to generate I/O activity against these data stores. The physical Data Mover was monitored to ensure that CPU and memory utilization remained in line with the design specifications while multiple VDMs were used for file access.

Test scenario

To simulate multi-tenant file systems, we configured multiple VDMs on a physical Data Mover and exported the NFS file systems associated with each VDM to VMware ESXi 5.1 hosts. These hosts are assigned to different tenants who access file storage from different networks and LDAP domains.

There were four VMware ESXi 5.1 hosts in the data center. Each host has data stores from different NFS shares exported by its designated VDMs. Each tenant can access only its designated file system and NFS data stores; other tenants are not permitted any access to those file systems and NFS data stores in the same data center.

Server and storage configuration

The server and storage configuration for this solution validation test consists of two VDMs configured on a physical Data Mover for two different tenants, Tenant A and Tenant B, as shown in Figure 4.

Figure 4. Server and storage topology for Tenant A and Tenant B

Each tenant had file access provided by their own VDM. These were named VDM-Saturn and VDM-Mercury and were attached to different network interfaces configured within each VDM. By implementing LDAP and VLANs, each tenant can limit file access and maintain distributed directory information over the network.

You can configure single or multiple resolver domain names for a VDM. You must specify the respective domain name and the resolver value. The VDM domain configuration includes the NIS, LDAP, DNS, and NFSv4 domain specifications.

For more details on how to manage the domain configuration for a VDM, refer to Configuring Virtual Data Movers on VNX 7.1.

In the following example, as shown in Figure 5, the VDM VDM-Saturn is configured to provide file access to Tenant A, and it is attached to Network A. The file system Saturn_File_System is mounted in VDM-Saturn. In the same way, the NFS clients of Tenant B have access to Mercury_File_System by mounting the NFS export to the IP address associated with Network B.

Figure 5. Network interface to NFS endpoint mapping

To configure an NFS server to exclusively serve tenants for a particular naming domain, the service provider and storage administrator must complete the following tasks:

Create a new VDM that houses the file system to export for the considered domain.

    Create the network interface(s) for the VDM.

Assign the interface to the VDM.

Configure the domain for the VDM.

Configure the lookup name service strategy for the VDM (optional; if it is not configured at the VDM, the services configured on the physical Data Mover are used).

    Mount the file system(s) on the VDM.

    Export the file system(s) for the NFS protocol.


VDM configuration

The Data Mover interfaces that are not attached to a VDM remain reserved for the CIFS servers and the NFS server of the physical Data Mover.

The VDM feature allows separation of several file system resources on one physical Data Mover. The solution described in this document implements an NFS server per VDM, named an NFS endpoint. The VDM is used as a container that includes the file systems exported by the NFS endpoint and/or the CIFS server. The file systems of the VDM are visible through a subset of the Data Mover network interfaces attached to the VDM.

The same network interface can be shared by both the CIFS and NFS protocols on that VDM. The NFS endpoint and CIFS server are addressed through the network interfaces attached to that VDM.

The command line interface (CLI) must be used to create the VDMs, using nasadmin or root privileges to access the VNX management console.

The following steps show how to create a VDM on a physical Data Mover for Tenant A in a multi-tenant environment. To support multiple tenants, multiple VDMs are required to provide file access. The procedure to create a VDM can be repeated for each additional tenant VDM as required.

    Storage pool configuration

Before a VDM can be created on a Data Mover, a storage pool must be configured on the VNX to store the user file systems. In this example, we configured a storage pool named FSaaS-Storage-Pool. Its properties are shown in Figure 6.

    Figure 6. Configuring a storage pool named FSaaS-Storage-Pool in Unisphere
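In this validation the pool was created through Unisphere. A user-defined pool can also be created from the Control Station CLI; the following is a minimal sketch, assuming the pool is built from two existing disk volumes named d126 and d127 (hypothetical names):

# nas_pool -create -name FSaaS-Storage-Pool -volumes d126,d127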


For more information on file systems, refer to Managing Volumes and File Systems with VNX Automatic Volume Management.

    Create a VDM

The VNX CLI command in Figure 7 shows how to create VDM-Saturn, which is used for Tenant A file access on Data Mover server_2.

    Figure 7. Creating the VDM named VDM-Saturn

    When using default values, the VDM is created in a loaded state.

Note: The system assigns default names for the VDM and its root file system.

You can use the same command to create VDM-Mercury for Tenant B, as shown in Figure 8.

Figure 8. Creating the VDM named VDM-Mercury
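The command text in Figures 7 and 8 is not reproduced in this transcript. A minimal sketch of the equivalent nas_server syntax, using the names from this example, is:

# nas_server -name VDM-Saturn -type vdm -create server_2 -setstate loaded pool=FSaaS-Storage-Pool

# nas_server -name VDM-Mercury -type vdm -create server_2 -setstate loaded pool=FSaaS-Storage-Pool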

    Create a user file system

The CLI command in Figure 9 shows how to create a file system named Saturn_File_System, with 200 GB of storage capacity, from the storage pool FSaaS-Storage-Pool.


Figure 9. Creating the Saturn file system
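The figure content is not shown in this transcript; a representative nas_fs command for this step would be:

# nas_fs -name Saturn_File_System -create size=200G pool=FSaaS-Storage-Pool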

    Create a mount point

The CLI command in Figure 10 shows how to create the mount point /SaturnFileSystem for the Saturn_File_System on VDM-Saturn.

Figure 10. Mount point setup for VDM-Saturn
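A sketch of the equivalent server_mountpoint command, using the names from this example, is:

# server_mountpoint VDM-Saturn -create /SaturnFileSystem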


    Check VDM status

To validate the VDM-Saturn properties that you configured, you can run the command shown in Figure 11.

Figure 11. Validating the VDM-Saturn setup
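A sketch of the status check, using the VDM name from this example, is shown below. The output includes the VDM state (for example, loaded), its root file system, and the interfaces attached to it:

# nas_server -info -vdm VDM-Saturn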

    Create the VDM network interface

The network interface Saturn-if is created for the device trunk1 with the following parameters, as shown in Figure 12:

IP address: 10.110.46.74

Network mask: 255.255.255.0

IP broadcast address: 10.110.46.255

Figure 12. VDM network interface setup
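The interface is created on the physical Data Mover and only attached to the VDM in the next step. A sketch of the equivalent server_ifconfig command with the parameters above is shown below; add a vlan= option if the interface must carry an 802.1Q tag:

# server_ifconfig server_2 -create -Device trunk1 -name Saturn-if -protocol IP 10.110.46.74 255.255.255.0 10.110.46.255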

To achieve maximum security and domain/tenant separation, each VDM must have its own dedicated VDM network interface. The same network interface cannot be shared between different VDMs.


    Attach a VDM interface to the VDM

The CLI command in Figure 13 shows how to attach the network interface Saturn-if to VDM-Saturn.

Figure 13. Attaching the VDM interface
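A sketch of the equivalent attach command, using the names from this example, is:

# nas_server -vdm VDM-Saturn -attach Saturn-if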

File system configuration

You can use the CLI for file system configuration by mounting the file system to a VDM and exporting it to server hosts.

    Mount the file system to a VDM

You can mount the Saturn_File_System on /SaturnFileSystem on the VNX NFS server, as shown in Figure 14.

Figure 14. Mounting the file system to the VDM
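A sketch of the equivalent server_mount command is:

# server_mount VDM-Saturn Saturn_File_System /SaturnFileSystem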

    Export the file system to server hosts

In the example in Figure 15, we exported the Saturn_File_System, using the NFS protocol, to a VMware ESXi 5.1 host with the IP address 10.110.46.73.

Figure 15. Exporting the Saturn file system to an ESXi host
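A sketch of the equivalent server_export command is shown below; the root= and access= options are illustrative and should be set according to the tenant's access policy:

# server_export VDM-Saturn -Protocol nfs -option root=10.110.46.73,access=10.110.46.73 /SaturnFileSystem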

Note: A state change of a VDM from loaded to mounted, temp-unloaded, or perm-unloaded shuts down the NFS endpoints in the VDM, making the file systems inaccessible to clients through the VDM.


VDM configuration summary

Table 4 summarizes the process of VDM creation and of exporting the file systems to vSphere ESXi 5.1 hosts.

Table 4. VDM tenant configuration

Parameter: Tenant A / Tenant B

VDM name: VDM-Saturn / VDM-Mercury

Storage pool: FSaaS-Storage-Pool / FSaaS-Storage-Pool

User file system: Saturn_File_System / Mercury_File_System

Mount point on VNX NFS server: /SaturnFileSystem / /MercuryFileSystem

VDM interface: Saturn-if with IP address 10.110.46.74 / Mercury-if with IP address 10.110.47.74

VDM network: VLAN-A / VLAN-B

File export host: Host A with IP address 10.110.46.73 / Host B with IP address 10.110.47.73

By accessing different VLANs and networks, both Tenant A and Tenant B have their own VDM interfaces and host networks. The user file systems for Tenant A and Tenant B can be created from either the same storage pool or different storage pools, depending on tenant service requirements.

Scripted deployment

For large-scale deployments, you should consider using scripting tools to speed up the process of VDM creation and its associated file system mount and export procedures.

You can use Plink to access the VNX Control Station via SSH. Plink can be downloaded from: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

Figure 16 shows an example of running Plink from a Windows 7 command console to create VDM-Mars from a script.

    Figure 16. Running Plink

A sample script file to create VDM-Mars for Tenant M and export its associated user file system is shown in Figure 17.

Tenant M has the same profile attributes as Tenant A and Tenant B, as listed in Table 4.

    Figure 17. Example Plink script
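The script content is visible only in the figure. As a sketch of the approach, assuming a Control Station address of 10.110.46.10 and Tenant M addresses on the 10.110.48.0/24 network (both hypothetical), the script simply repeats the per-tenant commands from the previous sections:

C:\> plink -ssh nasadmin@10.110.46.10 -m create_vdm_mars.txt

where create_vdm_mars.txt contains, for example:

nas_server -name VDM-Mars -type vdm -create server_2 -setstate loaded pool=FSaaS-Storage-Pool
nas_fs -name Mars_File_System -create size=200G pool=FSaaS-Storage-Pool
server_mountpoint VDM-Mars -create /MarsFileSystem
server_ifconfig server_2 -create -Device trunk1 -name Mars-if -protocol IP 10.110.48.74 255.255.255.0 10.110.48.255
nas_server -vdm VDM-Mars -attach Mars-if
server_mount VDM-Mars Mars_File_System /MarsFileSystem
server_export VDM-Mars -Protocol nfs -option root=10.110.48.73,access=10.110.48.73 /MarsFileSystem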


Cloud platform-attached file systems

VMware ESXi 5.1 NFS data stores

On VMware ESXi hosts, you can create data stores using an NFS file system exported from the VNX, as shown in Figure 18.

Figure 18. NFS data store on ESXi hosts

You must specify the NFS server, which is running on the specific VDM, and the shared folder, as shown in Figure 19.

    Figure 19. Selecting the server, folder, and data store

As shown in Figure 20, NFS-Datastore-Saturn is created from the NFS server 10.110.46.74; the shared folder is /SaturnFileSystem.


Figure 20. NFS data store details
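The data store can also be mounted from the ESXi command line instead of the vSphere Client. A minimal sketch using the standard esxcli NFS namespace on ESXi 5.1 is:

# esxcli storage nfs add --host 10.110.46.74 --share /SaturnFileSystem --volume-name NFS-Datastore-Saturn

# esxcli storage nfs list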

Test procedures

The tests documented in this white paper are as follows:

1. Creating the data stores on the NFS file systems exported by the VDMs.

2. Installing and configuring eight CentOS 6.3 virtual machines on the NFS data stores.

3. Running I/O workloads on all eight CentOS virtual machines with 128 threads, to simulate 128 VDMs, against the NFS data stores using IOzone.

4. Failing over the active physical Data Mover, with its VDMs configured, to the standby Data Mover.

5. Verifying that the benchmarking tests ran with no disruption during the physical Data Mover failover.

6. Monitoring the physical Data Mover CPU and memory utilization during the I/O workload using VNX Performance Monitor.

    Use IOzone to generate I/O

The CentOS 6.3 virtual machines generated I/O using the open source IOzone tool. IOzone is a file system workload generation tool that generates and measures a variety of file operations, and it is useful for performing broad file system analysis of a vendor's computer platform. The workload tests file I/O for the following operations:

Read, write, re-read, re-write, read backwards, read strided, fread, fwrite, random read, pread, mmap, aio_read, aio_write

IOzone is designed to create temporary test files, from 64 KB to 512 MB in size, for testing in automatic mode. However, the file size and I/O operation can be specified depending on the test. In our tests, we used the following I/O parameters:

Read: Reading a file that already exists in the file system.

Write: Writing a new file to the file system.

Re-read: After reading a file, the file is read again.


Re-write: Writing to an existing file.

We installed the latest IOzone build on the virtual machines and ran the following commands from the server console:

# wget http://www.iozone.org/src/current/iozone3_414.tar

# tar xvf iozone3_414.tar

# cd iozone3_414/src/current

# make

# make linux

Figure 21 shows a test run where all I/O reads and writes were set with a file size of 1024 KB.

Figure 21. Test run for 1024 KB file size
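The exact command line for this run is shown only in the figure. A representative invocation for the write/re-write and read/re-read operations with a 1024 KB file size is sketched below; the record size and the target file path inside the guest are assumptions:

# ./iozone -i 0 -i 1 -s 1024k -r 4k -f /root/iozone.tmp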

Note: In this test, IOzone was used to run read and write I/O on the CentOS 6.3 virtual machines to validate the VDM functionality. We did not undertake a full-scale performance test to evaluate how a physical Data Mover performs with multiple file systems and 128 VDMs configured while running intensive I/O.

Test results

The VDM functionality validation test results are summarized in Table 5.

Table 5. VDM functionality validation results

Action: Validation result

Create NFS data stores: Yes

Install virtual machines on NFS data stores: Yes

Power up/shut down virtual machines successfully: Yes

Run I/Os from virtual machines against NFS data stores: Yes

    Physical Data Mover high availability

By running all eight CentOS 6.3 virtual machines with 128 threads of read, write, re-read, and re-write I/O operations, we were able to produce approximately the same overhead as a physical Data Mover with 128 VDMs configured and I/O running on each VDM.

We executed the following command on the VNX Control Station to fail over the active Data Mover to the standby Data Mover:

    # server_standby server_2 -a mover

The failover process completed in less than 30 seconds and all I/O operations were restored without any disruption. For most applications running on NFS-based storage and data stores, the IOzone throughput observed for write, re-write, read, and re-read was well within acceptable performance levels.

    VNX Data Mover load

While running the 128 threads of I/O workload, we monitored the VNX Data Mover load. As shown in Figure 22, the Data Mover CPU utilization was approximately 55 percent and the free memory was approximately 50 percent. This is well within the system design specification range.

Figure 22. VNX Data Mover Performance Monitor


Based on the test results, clients should not expect any significant performance impact, because VDMs perform in the same way as the physical Data Mover, and a user's ability to access data from a VDM is no different from accessing data residing on the physical Data Mover, as long as the maximum number of supported VDMs on a physical Data Mover is not exceeded.


    Conclusion

Summary

EMC VNX VDMs provide a feasible way to support file system services for multiple tenants on one or more physical EMC VNX storage systems in private and public cloud environments.

VDMs can be configured via the VNX CLI and managed within the VNX Unisphere GUI. By adopting best practices for security and network planning, the implementation of VNX VDMs can enhance file system functionality and lay the foundation for multi-tenant File System-as-a-Service offerings.

This solution enables service providers that offer cloud storage services to host up to 128 VDMs on one physical EMC VNX storage platform while maintaining the required separation between tenants.

Cloud storage providers who want to offer a choice of multi-tenant NAS file system services from multiple storage vendors can now offer EMC VNX file systems to multiple tenants. This allows investments in existing VNX storage capacity to be monetized further, helping to accelerate their return on investment and reduce their storage TCO.


    References

For specific information related to the features and functionality described in this document, refer to:

    VNX Glossary

    EMC VNX Command Line Interface Reference for File

    Managing Volumes and File Systems on VNX Manually

    Managing Volumes and File Systems with VNX Automatic Volume Management

Problem Resolution Roadmap for VNX

VNX for File Man Pages

    EMC Unisphere online help

EMC VNX documentation can be found on EMC Online Support (https://support.emc.com/).