NetApp® OnCommand™ Console Administration Task Help
For Use with Core Package 5.0

NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089 USA
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 4-NETAPP
Documentation comments: [email protected]
Information Web: http://www.netapp.com
Part number: 215-05997_A0
July 2011



Contents

About this document .................................................................................. 13
Welcome to OnCommand console Help ................................................... 15

How to use OnCommand console Help .................................................................... 15

Bookmarking your favorite topics ................................................................. 15

Understanding how the OnCommand console works ............................................... 15

About the OnCommand console ................................................................... 15

Window layout and navigation ..................................................................... 15

Window layout customization ....................................................................... 16

How the OnCommand console works with the Operations Manager
console and NetApp Management Console ............................................ 17

Launching the Operations Manager console ................................................. 17

Installing NetApp Management Console ...................................................... 18

How the OnCommand console works with AutoSupport ............................. 19

Dashboard ................................................................................................... 21
Understanding the dashboard .................................................................................... 21

OnCommand console dashboard panels ....................................................... 21

Monitoring the dashboard ......................................................................................... 22

Monitoring dashboard panels ........................................................................ 22

Page descriptions ....................................................................................................... 22

Availability dashboard panel ......................................................................... 22

Events dashboard panel ................................................................................. 23

Full Soon Storage dashboard panel ............................................................... 24

Fastest Growing Storage dashboard panel .................................................... 24

Dataset Overall Status dashboard panel ........................................................ 25

Resource Pools dashboard panel ................................................................... 25

External Relationship Lags dashboard panel ................................................ 26

Unprotected Data dashboard panel ............................................................... 26

Events and alarms ...................................................................................... 29
Understanding events and alarms .............................................................................. 29

What events are ............................................................................................. 29

What alarms are ............................................................................................. 29

Guidelines for creating alarms ...................................................................... 30

Table of Contents | 3


How to know when an event occurs .............................................................. 31

Description of event severity types ............................................................... 31

Alarm configuration ...................................................................................... 32

Configuring alarms .................................................................................................... 32

Creating alarms for events ............................................................................. 32

Creating alarms for a specific event .............................................................. 33

Managing events and alarms ..................................................................................... 34

Resolving events ........................................................................................... 34

Editing alarm properties ................................................................................ 35

Configuring the mail server for alarm notifications ...................................... 36

Monitoring events and alarms ................................................................................... 36

Viewing event details .................................................................................... 36

Viewing alarm details .................................................................................... 37

Page descriptions ....................................................................................................... 37

Events tab ...................................................................................................... 37

Alarms tab ..................................................................................................... 40

Create Alarm dialog box ............................................................................... 41

Edit Alarm dialog box ................................................................................... 43

Jobs .............................................................................................................. 45
Understanding jobs .................................................................................................... 45

Understanding jobs ........................................................................................ 45

Managing jobs ........................................................................................................... 45

Canceling jobs ............................................................................................... 45

Monitoring jobs ......................................................................................................... 46

Monitoring jobs ............................................................................................. 46

Page descriptions ....................................................................................................... 46

Jobs tab .......................................................................................................... 46

Servers ......................................................................................................... 53
Understanding virtual inventory ................................................................................ 53

How virtual objects are discovered ............................................................... 53

Monitoring virtual inventory ..................................................................................... 53

Monitoring VMware inventory ..................................................................... 53

Monitoring Hyper-V inventory ..................................................................... 57

Managing virtual inventory ....................................................................................... 58

Adding virtual objects to a group .................................................................. 59

Adding a virtual machine to inventory .......................................................... 59

4 | OnCommand Console Help


Preparing a virtual object managed by the OnCommand console for
deletion from inventory ........................................................................... 60

Performing an on-demand backup of virtual objects .................................... 61

Restoring backups from the Server tab ......................................................... 64

Mounting and unmounting backups in a VMware environment ................... 67

Page descriptions ....................................................................................................... 71

VMware ......................................................................................................... 71

Hyper-V ......................................................................................................... 80

Storage ......................................................................................................... 85
Physical storage ......................................................................................................... 85

Understanding physical storage .................................................................... 85

Configuring physical storage ........................................................................ 87

Managing physical storage ............................................................................ 89

Monitoring physical storage .......................................................................... 94

Page descriptions ......................................................................................... 100

Virtual storage ......................................................................................................... 118

Understanding virtual storage ..................................................................... 118

Managing virtual storage ............................................................................. 120

Monitoring virtual storage ........................................................................... 123

Page descriptions ......................................................................................... 125

Logical storage ........................................................................................................ 134

Understanding logical storage ..................................................................... 134

Managing logical storage ............................................................................ 136

Monitoring logical storage .......................................................................... 141

Page descriptions ......................................................................................... 150

Policies ....................................................................................................... 167
Local policies .......................................................................................................... 167

Understanding local policies ....................................................................... 167

Configuring local policies ........................................................................... 170

Managing local policies .............................................................................. 172

Page descriptions ......................................................................................... 175

Datasets ...................................................................................................... 181
Understanding datasets ............................................................................................ 181

What a dataset is .......................................................................................... 181

Dataset concepts .......................................................................................... 181

Role of provisioning policies in dataset management ................................. 183


Role of protection policies in dataset management ..................................... 183

What conformance monitoring and correction is ........................................ 184

Datasets of physical storage objects ............................................................ 184

Datasets of virtual objects ........................................................................... 185

Data ONTAP licenses used for protecting or provisioning data ................. 195

Descriptions of dataset protection status ..................................................... 197

Configuring datasets ................................................................................................ 199

Adding a dataset of physical storage objects .............................................. 199

Adding a dataset of virtual objects .............................................................. 203

Editing a dataset to add virtual object members ......................................... 209

Editing a dataset to assign storage service and remote protection of
virtual objects ........................................................................................ 210

Editing a dataset of virtual objects to configure local policy and local
backup .................................................................................................... 211

Editing a dataset containing virtual objects to reschedule or modify local
backup jobs ............................................................................................ 212

Editing a dataset to remove protection from a virtual object ...................... 213

Adding a dataset of physical storage objects with dataset-level custom
naming ................................................................................................... 214

Adding a dataset of virtual objects with dataset-level custom naming ....... 215

Editing a dataset of virtual objects for dataset-level custom naming .......... 217

Editing a dataset of physical storage objects for dataset-level custom
naming ................................................................................................... 218

Selecting virtual objects to create a new dataset ......................................... 219

Selecting virtual objects to add to an existing dataset ................................. 220

Configuring local backups for multiple datasets of virtual Hyper-V
objects .................................................................................................... 221

Managing datasets ................................................................................................... 222

Performing an on-demand backup of a dataset ........................................... 222

Deleting a dataset of virtual objects ............................................................ 226

Suspending dataset protection and conformance checking ......................... 226

Resuming protection and conformance checking on a suspended dataset .. 227

Changing a storage service on datasets of storage objects .......................... 228

Attaching a storage service to existing datasets of storage objects ............. 229

Restoring data backed up from a dataset of physical storage objects ......... 230

Repairing datasets that contain deleted virtual objects ............................... 231


Evaluating and resolving issues displayed in the Conformance Details
dialog box .............................................................................................. 232

Monitoring datasets ................................................................................................. 236

Overview of dataset status types ................................................................. 236

How to evaluate dataset conformance to policy .......................................... 239

Monitoring dataset status ............................................................................ 244

Monitoring backup and mirror relationships ............................................... 245

Listing nonconformant datasets and viewing details .................................. 246

Evaluating and resolving issues displayed in the Conformance Details
dialog box .............................................................................................. 246

Page descriptions ..................................................................................................... 251

Datasets tab ................................................................................................. 251

Create Dataset dialog box or Edit Dataset dialog box ................................ 258

Backups ..................................................................................................... 269
Understanding backups ........................................................................................... 269

Types of backups ......................................................................................... 269

Backup version management ...................................................................... 269

Backup scripting information ...................................................................... 270

Retention of job progress information ........................................................ 271

Guidelines for mounting or unmounting backups in a VMware
environment ........................................................................................... 271

How the Hyper-V plug-in uses VSS ........................................................... 272

How the Hyper-V plug-in handles saved-state backups ............................. 274

Overlapping policies and Hyper-V hosts .................................................... 274

Co-existence of SnapManager for Hyper-V with the Hyper-V plug-in ...... 275

How to manually transition SnapManager for Hyper-V dataset
information ............................................................................................ 275

Managing backups ................................................................................................... 276

Performing an on-demand backup of virtual objects .................................. 276

Performing an on-demand backup of a dataset ........................................... 280

On-demand backups using the command-line interface ............................. 283

Mounting or unmounting backups in a VMware environment ................... 284

Manually mounting or unmounting backups in a Hyper-V environment
using SnapDrive for Windows .............................................................. 288

Locating specific backups ........................................................................... 292

Deleting a dataset backup ............................................................................ 293


Monitoring backups ................................................................................................. 294

Monitoring local backup progress ............................................................... 294

Page descriptions ..................................................................................................... 294

Backups tab ................................................................................................. 294

Restore ....................................................................................................... 297
Understanding restore ............................................................................................. 297

Restoring data from backups ....................................................................... 297

Restore scripting information ...................................................................... 298

Managing restore ..................................................................................................... 299

Restoring data from backups created by the OnCommand console ............ 299

Monitoring restore ................................................................................................... 302

Viewing restore job details .......................................................................... 302

Reports ...................................................................................................... 303
Understanding reports ............................................................................................. 303

Reports management ................................................................................... 303

Reports tab customization ........................................................................... 304

Types of object status .................................................................................. 305

What report scheduling is ............................................................................ 305

Managing reports ..................................................................................................... 306

Scheduling reports ....................................................................................... 306

Viewing the scheduled reports log .............................................................. 307

Sharing reports ............................................................................................ 307

Deleting a report .......................................................................................... 308

Page descriptions ..................................................................................................... 308

Reports tab ................................................................................................... 308

Schedule Report dialog box ........................................................................ 310

Share Report dialog box .............................................................................. 311

Events reports .......................................................................................................... 311

Understanding events reports ...................................................................... 312

Page descriptions ......................................................................................... 312

Inventory reports ..................................................................................................... 314

Understanding inventory reports ................................................................. 314

Page descriptions ......................................................................................... 314

Storage capacity reports .......................................................................................... 323

Understanding storage capacity reports ...................................................... 323

Monitoring storage capacity reports ............................................................ 328


Page descriptions ......................................................................................... 338

Database schema ..................................................................................................... 359

How to access DataFabric Manager server data ......................................... 359

Supported database views ........................................................................... 360

alarmView ................................................................................................... 361

cpuView ...................................................................................................... 362

designerReportView .................................................................................... 363

Database view datasetIOMetricView .......................................................... 363

Database view datasetSpaceMetricView .................................................... 364

Database view datasetUsageMetricCommentView .................................... 366

hbaInitiatorView .......................................................................................... 367

hbaView ...................................................................................................... 367

initiatorView ................................................................................................ 367

reportOutputView ........................................................................................ 367

sanhostlunview ............................................................................................ 368

usersView .................................................................................................... 368

volumeDedupeDetailsView ........................................................................ 369

Administration .......................................................................................... 371
Users and roles ........................................................................................................ 371

Understanding users and roles ..................................................................... 371

Configuring users and roles ......................................................................... 375

Groups ..................................................................................................................... 375

Understanding groups ................................................................................. 375

Configuring groups ..................................................................................... 377

Managing groups ......................................................................................... 379

Page descriptions ......................................................................................... 381

Alarms ..................................................................................................................... 385

Understanding alarms .................................................................................. 385

Configuring alarms ...................................................................................... 385

Managing events and alarms ....................................................................... 388

Page descriptions ......................................................................................... 390

Host services ........................................................................................................... 393

Understanding host services ........................................................................ 393

Configuring host services ............................................................................ 394

Managing host services ............................................................................... 400

Monitoring host services ............................................................................. 407


Page descriptions ......................................................................................... 409

Storage systems users .............................................................................................. 411

Understanding storage system users ........................................................... 411

Configuring storage system users ............................................................... 411

Storage system configuration .................................................................................. 412

Understanding storage system configuration .............................................. 412

Configuring storage systems ....................................................................... 413

vFiler configuration ................................................................................................. 414

Understanding vFiler unit configuration ..................................................... 414

Configuring vFiler units .............................................................................. 415

Options .................................................................................................................... 415

Page descriptions ......................................................................................... 415

Backup setup options .................................................................................. 419

Global naming settings setup options .......................................................... 421

Costing setup options .................................................................................. 445

Database backup setup options ................................................................... 448

Default thresholds setup options ................................................................. 453

Discovery setup options .............................................................................. 462

File SRM setup options ............................................................................... 469

LDAP setup options .................................................................................... 473

Monitoring setup options ............................................................................ 478

Management setup options .......................................................................... 491

Systems setup options ................................................................................. 496

Security and access ................................................................................... 505
Understanding RBAC ............................................................................................. 505

What RBAC is ............................................................................................. 505

How RBAC is used ..................................................................................... 505

How roles relate to administrators .............................................................. 505

Example of how to use RBAC to control access ........................................ 505

Administrator roles and capabilities ............................................................ 506

Access permissions for the Virtual Infrastructure Administrator role ........ 508

Understanding authentication .................................................................................. 509

Authentication methods on the DataFabric Manager server ....................... 509

Authentication with LDAP .......................................................................... 509

Plug-ins ...................................................................................................... 511
Hyper-V troubleshooting ......................................................................................... 511


Error: Vss Requestor - Backup Components failed with partial writer
error ....................................................................................................... 511

Error: Failed to start VM. Job returned error 32768 ................................... 512

Error: Failed to start VM. You might need to start the VM using
Hyper-V Manager .................................................................................. 512

Error: Vss Requestor - Backup Components failed. An expected disk did
not arrive in the system .......................................................................... 512

Error: Vss Requestor - Backup Components failed. Writer Microsoft
Hyper-V VSS Writer involved in backup or restore encountered a
retryable error ........................................................................................ 513

Hyper-V virtual objects taking too long to appear in OnCommand

console ................................................................................................... 514

Increasing SnapDrive operations timeout value in the Windows registry . . 514

MBR unsupported in the Hyper-V plug-in ................................................. 514

Some types of backup failures do not result in partial backup failure ........ 515

Space consumption when taking two snapshot copies for each backup ..... 515

Virtual machine snapshot file location change can cause the Hyper-V

plug-in backup to fail ............................................................................. 516

Virtual machine backups taking too long to complete ................................ 516

Virtual machine backups made while a restore operation is in progress

might be invalid ..................................................................................... 516

Volume Shadow Copy Service error: An internal inconsistency was

detected in trying to contact shadow copy service writers. ................... 517

Hyper-V VHDs do not appear in the OnCommand console ....................... 518

Copyright information ............................................................................. 519

Trademark information ........................................................................... 521

How to send your comments .................................................................... 523

Index ........................................................................................................... 525


About this document

This document is a printable version of the OnCommand console Help. It is intended to be used for offline searches when you do not have access to the Help on a management station. The Help contains administrative tasks, as well as conceptual and reference material that can be useful in understanding how to use the OnCommand console.


Welcome to OnCommand console Help

How to use OnCommand console Help

This Help includes information for all features included in the OnCommand console.

By using the table of contents, the index, or the search tool, you can find information about features and how to use them.

Help is available from each tab and from the menu bar of the OnCommand console, as follows:

• To learn about a specific parameter, click the Help icon next to that parameter.

• To view all the Help contents, click the Help menu and select Contents. You can expand any portion of the Table of Contents in the navigation pane to find more detailed information.

You can also print selected Help topics.

Note: The search tool does not work for partial terms, only whole words.

Bookmarking your favorite topics

In the Help Favorites tab, you can add bookmark links to Help topics that you use frequently. Help bookmarks provide fast access to your favorite topics.

Steps

1. Navigate to the topic you want to add as a favorite.

2. Click the Favorites tab, then click Add.

Understanding how the OnCommand console works

About the OnCommand console

The OnCommand console provides a centralized Web interface from which you can flexibly and efficiently manage your physical and virtual storage infrastructure.

Window layout and navigation

Most windows in the OnCommand console have the same general layout.

Not every window contains every element in the following diagram.


[Diagram: typical OnCommand console window layout, showing the menu bar (File, View, Administration, and Help menus), dashboard panel tabs, breadcrumb trail, command buttons, list of views, list of objects, list of related objects, details for the selected object, and tabs.]

Use a minimum display setting of 1280 by 1024 pixels.

By default, the Dashboard tab, Events tab, Storage tab, Server tab, Policies tab, Datasets tab, and Reports tab are open when you first log in to the OnCommand console.

Window layout customization

The OnCommand console enables you to customize the window layout. By customizing the windows, you can control which data is viewable or how it is displayed.

Sorting You can click the column headings to sort the column entries in ascending order and display the sort arrows. You can then use the sort arrows to specify the order in which entries appear.

Filtering You can use the filter icon to display only those entries that match the conditions provided. You can use the character filter (?) or string filter (*) to narrow your search. You can apply filters to one or more columns. The column heading is highlighted if a filter is applied. For example, you can search for alarms configured for a particular event type: Aggregate Overcommitted. In the Alarms tab, you can use the filter in the Event column. You can use the string filter to search for alarms configured for the event "Aggregate Overcommitted." In the string filter, when you type *aggr, all events whose names start with "aggr" are listed.


Note: If an entry in the column contains "?" or "*", to use the character filter or string filter, you must enclose "?" or "*" in square brackets.
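The wildcard behavior described above resembles shell-style pattern matching. As an illustration only (the console implements its own filtering; this library is not part of the product), Python's fnmatch module follows the same ? and * conventions, including square brackets for matching the literal characters:

```python
from fnmatch import fnmatch

# '*' matches any run of characters, so a pattern containing "aggr"
# finds the aggregate-related event names.
events = ["Aggregate Overcommitted", "Volume Full", "Aggregate Almost Full"]
print([e for e in events if fnmatch(e.lower(), "*aggr*")])
# → ['Aggregate Overcommitted', 'Aggregate Almost Full']

# Enclosing '?' in square brackets matches the literal character,
# mirroring the note above about entries that contain '?' or '*'.
print(fnmatch("Disk full?", "Disk full[?]"))  # → True
```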

Hiding or redisplaying the columns You can click the column display icon to select the columns you want to display.

Customizing the layout You can drag the bottom of the "list of objects" area up or down to resize the main areas of the window. You can also choose to display or hide the "list of related objects" and "list of views" panels. You can drag vertical dividers to resize the width of columns or other areas of the window.

How the OnCommand console works with the Operations Manager console and NetApp Management Console

The OnCommand console provides centralized access to a variety of storage capabilities. While you can perform most virtualization tasks directly in the OnCommand console graphical user interface, many physical storage tasks require the Operations Manager console or NetApp Management Console.

The OnCommand console automatically launches these other consoles when they are required to complete a task. You must install NetApp Management Console separately. You can also access the Operations Manager console from the OnCommand console File menu at any time.

Launching the Operations Manager console

You can launch the Operations Manager console from the OnCommand console to perform many of your physical storage tasks.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Step

1. Click the File menu, then click Operations Manager.

The Operations Manager console opens in a separate browser tab or window.


Related references

Administrator roles and capabilities on page 506

Installing NetApp Management Console

You can download and install NetApp Management Console through the OnCommand console. NetApp Management Console is required to perform many of your physical storage tasks.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Log in to the OnCommand console if necessary.

2. Click the File menu, then click Download Management Console.

A separate browser tab or window opens to the Management Console Software page in the Operations Manager console.

3. Click the download link for the Linux or Windows installation.

4. In the download dialog box, click Save File.

The executable file is downloaded to your local system, from the system on which the OnCommand Core Package was installed.

5. From the download directory, run the nmconsole-setup-xxx.xxx executable file.

The NetApp Management Console installation wizard opens.

6. Follow the prompts to install NetApp Management Console.

Result

After installation, you can access NetApp Management Console from the following locations:

• On Windows systems, the default installation path is C:\Program Files\NetApp\Management Console. You can launch the console from the NetApp directory on the Start menu.

• On Linux systems, the default installation path is /usr/lib/NetApp/management_console/. You can launch the console from /usr/bin.


Related references

Administrator roles and capabilities on page 506

How the OnCommand console works with AutoSupport

If you have AutoSupport enabled, weekly messages that contain information about your operating environment are automatically sent to your internal support organization, NetApp support personnel, or both.

If you have AutoSupport enabled for the OnCommand console, the weekly AutoSupport messages contain accounting information stored by DataFabric Manager server in addition to any other information that is sent by other applications.

The information sent from DataFabric Manager server includes, but is not limited to, the following counts:

• The total number of host services registered with DataFabric Manager server.
• Of the total number of host services, the number of VMware host services and the number of Hyper-V host services.
• The total number of host services with pending authorization.
• The total number of storage systems that have host services connected to them. This is the total number of unique storage systems known to all host services registered with DataFabric Manager server.
• Of the total number of storage systems that have host services connected to them, the number of each FAS system model.
• The total number of vFiler units that are connected to host services.
• The total number of VMware virtual centers.
• The total number of VMware datacenters.
• The total number of virtual machines.
• Of the total number of virtual machines, the number of VMware virtual machines and the number of Hyper-V virtual machines.
• The total number of hypervisors.
• Of the total number of hypervisors, the number of VMware hypervisors.
• The total number of Hyper-V parents.
• The total number of datastores.
• Of the total number of datastores, the number of SAN datastores and the number of NAS datastores.
• The maximum number of virtual machines in a datastore.
• The minimum number of virtual machines in a datastore.
• The average number of virtual machines per datastore.
• The maximum number of virtual machines on an ESX server.
• The minimum number of virtual machines on an ESX server.
• The average number of virtual machines per ESX server.
• The maximum number of virtual machines on a Hyper-V server.
• The minimum number of virtual machines on a Hyper-V server.
• The average number of virtual machines per Hyper-V server.
• The total number of virtual machines that span datastores.
• The total number of some other types of objects, including Open Systems SnapVault (OSSV) relationships, backup jobs, restore jobs, mount jobs, unmount jobs, and various types of datasets.


Dashboard

Understanding the dashboard

OnCommand console dashboard panels

The OnCommand console dashboard contains multiple panels that provide cumulative at-a-glance information about your storage and virtualization environment. The dashboard provides information about various aspects of your storage management environment, such as the availability of storage objects, events generated for storage objects, resource pools, and dataset overall status.

The following panels are available in the OnCommand console dashboard:

Availability Provides information about the availability of storage controllers (stand-alone controllers and HA pairs) and vFiler units that are discovered and monitored. You can also view the number of controllers and vFiler units that are either online or offline.

Events Provides information about the status of the objects by listing the top five events, based on their severity.

Resource Pools Displays the resource pools that have existing or potential space shortages.

Full Soon Storage Displays the top five aggregates and volumes that are likely to reach a configured threshold, based on the number of days before this threshold is reached. You can also view the trend and space utilization of a particular aggregate or volume.

Fastest Growing Storage Displays the top five aggregates and volumes whose space usage is rapidly increasing. You can also view the growth rate, trend, and space utilization of a particular aggregate or volume.

Dataset Overall Status Displays the number of datasets in one of the following statuses: Error, Warning, or Normal.

External Relationship Lags Displays the relative percentages of external SnapVault, qtree SnapMirror, and volume SnapMirror relationships with lag times in Error, Warning, and Normal status.

Unprotected Data Displays the number of unprotected storage and virtual objects that are being monitored.

Get Started Enables you to navigate to the Getting Started with NetApp Software page in the NetApp University Web site.


Related information

Getting Started with NetApp Software - http://communities.netapp.com/community/netapp_university/getting_started

Monitoring the dashboard

Monitoring dashboard panels

You can use the dashboard panels to monitor your physical storage, logical storage, virtual storage, and non-storage objects.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Log in to the OnCommand console. By default, the Dashboard tab is displayed.

2. To view details about any of the information displayed in the dashboard panels, click the panel heading to display the relevant OnCommand console tab.

You can also click any hypertext links in the individual dashboard panels to view detailed information.

Related references

Administrator roles and capabilities on page 506

Page descriptions

Availability dashboard panel

This panel provides information about the availability of storage controllers (stand-alone controllers and HA pairs) and vFiler units that are discovered and monitored by the OnCommand console.

Panel display

The icon links you to the Storage tab where you can access details about your storage controllers and vFiler units.

Controllers Displays the percentage of storage controllers that are either online or offline. For example, when ten storage controllers are being monitored and all the controllers are online, the Controllers area displays 100% up. If five controllers are offline, the Controllers area displays 50% down. You can also view the number of storage controllers that are online.

You can view more information about the storage controllers by clicking in the Controllers area.

vFiler Units Displays the percentage of vFiler units that are either online or offline. For example, when ten vFiler units are being monitored and all the vFiler units are online, the vFiler Units area displays 100% up. If five vFiler units are offline, the vFiler Units area displays 50% down. You can also view the number of vFiler units that are online.

You can view more information about the vFiler units by clicking in the vFiler Units area.

Events dashboard panel

This panel provides information about the status of the managed objects and the management host by listing the top five events that are generated by the OnCommand console. The Events dashboard panel displays only current events. The top five events are listed based on their severity levels. The highest severity level is represented as Emergency.

Panel display

The icon links you to the Events tab where you can view a list of events and their properties.

Events The events are displayed in the order of their severity as follows:

Emergency The event source unexpectedly stopped performing and experienced unrecoverable data loss. You must take corrective action immediately to avoid extended downtime.

Critical A problem occurred that might lead to service disruption if corrective action is not taken immediately.

Error The event source is still performing; however, corrective action is required to avoid service disruption.

Warning The event source experienced an occurrence that you should be aware of. Events of this severity do not cause service disruption, and corrective action might not be required.

Information The event may be of interest to the administrator. This severity does not represent an abnormal operation.

By clicking the specific event, you can view more information about the event from the Events tab.


Full Soon Storage dashboard panel

This panel displays the top five aggregates and volumes that will soon reach the configured threshold, based on the number of days before this threshold is reached. This enables you to take corrective action to prevent the aggregates or volumes from reaching the threshold.

Panel display

The icon links you to the Storage tab where you can view the aggregates and volumes approaching the specified threshold.

Resource Lists the volumes or aggregates that will soon reach the specified threshold. By clicking the name of a resource, you can view more information about it in the Volumes view or the Aggregates view, depending on the type of resource you select.

Days to Full Displays the estimated number of days remaining before the volume or aggregate reaches the threshold.

Trend Displays, as a trend line, information about the space used in the volume or the aggregate for the past 30 days.

Space Utilization Displays the amount of space used within the volume or aggregate.
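A days-to-full figure like the one in this panel can be sketched as follows. This is a hypothetical calculation, not the console's documented algorithm: it simply divides the space remaining below the threshold by the observed daily growth, assuming linear growth.

```python
def days_to_full(used_kb, threshold_kb, daily_growth_kb):
    """Estimate days until usage reaches the configured threshold.

    Hypothetical helper: the units (KB) and the linear-growth
    assumption are illustrative, not taken from the product.
    """
    if daily_growth_kb <= 0:
        return None  # not growing; the threshold is never reached
    return max(0.0, (threshold_kb - used_kb) / daily_growth_kb)

# A 500 GB volume with 450 GB used, a 95% threshold, growing 5 GB/day:
print(days_to_full(450e6, 0.95 * 500e6, 5e6))  # → 5.0
```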

Fastest Growing Storage dashboard panel

This panel displays the top five aggregates and volumes that are using the most space. You can also view the growth rate, trend, and space utilization.

Panel display

The icon links you to the Storage tab where you can view the aggregates and volumes using the most space.

Resource Displays the five fastest-growing aggregates and volumes. By clicking the name of a resource, you can view more information about it in the Volumes view or the Aggregates view, depending on the type of resource you select.

Growth Rate (%) Displays the percent growth rate of the space used by the fastest-growing storage systems. The growth rate is determined by dividing the daily growth rate by the total amount of space in the storage system.

Trend Displays, as a trend line, information about the space used in the aggregate or volume for the past 30 days.

Space Utilization Displays the amount of space used within the volume or aggregate.
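The growth-rate definition given for the Growth Rate (%) column (daily growth divided by total space) can be written out directly. The helper below is a sketch; the unit (KB) is an assumption for illustration.

```python
def growth_rate_percent(daily_growth_kb, total_space_kb):
    """Percent growth rate: the daily growth rate divided by the
    total amount of space in the storage system, as a percentage."""
    return 100.0 * daily_growth_kb / total_space_kb

# A 1 TB aggregate growing by 20 GB per day grows at 2% per day:
print(growth_rate_percent(20e6, 1e9))  # → 2.0
```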


Dataset Overall Status dashboard panel

This panel summarizes for you the number of datasets in overall Error, Warning, or Normal status.

Panel display

The icon links you to the Datasets tab where you can examine the details of a dataset's overall status.

This panel displays the number of datasets in overall Error status, overall Warning status, or overall Normal status.

Error The number of datasets with an overall status of Error. A dataset is designated with overall error status based upon the following status values:

DR status condition: Error
Protection status condition: Lag error or Baseline failed
Conformance status condition: Nonconformant
Space status condition: Error
Resource status condition: Emergency, Critical, or Error

Warning The number of datasets with an overall status of Warning. A dataset is designated with overall warning status based upon the following status values:

DR status condition: Warning
Protection status condition: Job failure, Lag warning, Uninitialized, or No protection policy for a non-empty dataset
Conformance status condition: NA
Space status condition: Warning
Resource status condition: Warning

Normal The number of datasets with an overall status of Normal.

Resource Pools dashboard panel

This panel displays the total space allocated to and the percentage of space utilized by each resource pool, listed by resource pool name.

Panel display

The icon links you to the Resource Pools window in the NetApp Management Console where you can access details about individual resource pools.

For each existing resource pool, this panel displays the following information:


Name The name of the resource pool

Total Size The total size in kilobytes, megabytes, gigabytes, or terabytes of the resource pool

Space Utilization The percentage of the resource pool's capacity that is being utilized

Items are sorted in decreasing order of available space.

External Relationship Lags dashboard panel

This panel summarizes for you the relative percentages of external SnapVault, qtree SnapMirror, and volume SnapMirror relationships with lag times of Normal, Warning, and Error status.

Panel display

The icon links you to the External Relationships window in the NetApp Management Console.

This panel uses colored bars to indicate the relative percentages of external SnapVault, qtree SnapMirror, and volume SnapMirror relationships with lag times in Error, Warning, and Normal status.

The Warning and Error status percentages indicate the portion of external SnapVault, qtree SnapMirror, and volume SnapMirror relationships whose current lag times have exceeded the time specified in the global Warning and Error threshold settings for those relationships. Normal status percentages indicate the portion of external relationships whose current lag times are still within normal range.

External relationships are protection relationships that are monitored but not managed by the OnCommand console. The lag is the time since the last successful data update associated with an external protection relationship was completed.

Unprotected Data dashboard panel

This panel summarizes the total number of storage objects (volumes and qtrees) and virtual objects (Hyper-V virtual machines, VMware virtual machines, VMware datastores, and VMware datacenters) that are not protected.

Panel display

Displays the number of unprotected objects being monitored.

Volumes Displays the number of unprotected volumes in your domain. The hypertext links you to the Unprotected Data window in the NetApp Management Console.

Qtrees Displays the number of unprotected qtrees in your domain. The hypertext links you to the Unprotected Data window in the NetApp Management Console.

Hyper-V VMs Displays the number of unprotected Hyper-V virtual machines in your domain. The hypertext links you to the Hyper-V VMs view of the Server tab.


VMware VMs Displays the number of unprotected VMware virtual machines in your domain. The hypertext links you to the VMware VMs view of the Server tab.

Datastores Displays the number of unprotected VMware datastores in your domain. The hypertext links you to the VMware Datastores view of the Server tab.

Datacenters Displays the number of unprotected VMware datacenters in your domain. The hypertext links you to the VMware Datacenters view of the Server tab.

Storage objects are unprotected if they do not belong to a dataset or if they belong to an unprotected dataset. Datasets are unprotected if they do not have an assigned protection policy or if they have an assigned protection policy but do not have an initial relationship created (the dataset has never conformed to the protection policy).

Virtual objects are unprotected if they do not belong to a dataset that has been assigned a local policy.


Events and alarms

Understanding events and alarms

What events are

Events are generated automatically when a predefined condition occurs or when an object crosses a threshold. Event messages inform you when specific events occur. All events are assigned a severity type and are automatically logged in the Events tab.

You can configure alarms to send notification automatically when specific events or severity types occur. If an application is not configured to trigger an alarm when an event is generated, you can find out about the event by checking the Events tab.

It is important that you take immediate corrective action for events with severity level Error, Critical, or Emergency. Ignoring such events can lead to poor performance and system unavailability.

Note: Event types are predetermined. Although you cannot add or delete event types, you can manage notification of events. However, you can modify the event severity type from the DataFabric Manager server command-line interface.

You can use the Events tab to acknowledge and resolve events, and also create alarms for specific events.

What alarms are

Alarms are configured notifications that are sent whenever a specific event or an event of a specific severity type occurs. You can create alarms for any defined group or the global group for which you want automatic notification of events.

Alarms are not the events themselves, only the notification of events. Alarms are not the same as user alerts.

Note: By default, the DataFabric Manager server sends user alerts (e-mail messages) to all users who exceed their quota limits.

You can use the Alarms tab to create, edit, delete, test, and enable or disable alarms.

Related concepts

What a global group is on page 376


Guidelines for creating alarms

You can create alarms to notify you whenever a specific event or an event of a specific severity type occurs. You can create alarms based on the event type, event severity, or event class. You can use the Alarms tab or the Events tab to create a new alarm.

• Group
You can create alarms only at the group level. You must decide the group for which the alarm is added. If you want to set an alarm for a specific object, you must first create a group with that object as the only member. For example, if you want to closely monitor a single aggregate by configuring an alarm, you must create a group, and add the aggregate into the group. You can then configure an alarm for the newly created group.

Note: By default, there exists a global group, and all objects and groups belong to the global group.

• Event
If you add an alarm based on the type of event generated, you should decide which events require an alarm.

• Event severity
You should decide if any event of a specified severity type should trigger the alarm and, if so, which severity type.

• Event class
You can configure a single alarm for multiple events using event class. If you add an alarm based on the event class, you should decide if an event in an event class should trigger the alarm and, if so, which event class. For example, the expression userquota.*|qtree.* matches all userquota or qtree events.

Note: You can view the list of event classes from the CLI by using the following command: dfm eventType list. You can view the list of events specific to an event class by using the following command: dfm eventType list -C event-class

• Modes of event notification
You should decide who or what needs to receive the event notification. You can specify one or more of the following modes of event notification:

• E-mails
You must provide the administrator user names or e-mail addresses of users other than the administrator.

• Pagers
You must provide the user names of the administrators or pager numbers of the nonadministrator users.

Note: You must ensure that proper e-mail addresses and pager numbers of administrators and nonadministrator users are configured.

• SNMP listener traps


You must provide the SNMP traphost. Optionally, you should provide the SNMP community name.

• Script
You must provide the complete path of a script that is executed when an alarm occurs and the user name that runs the script.

• Effective time for repeat notification
You can configure an alarm to repeatedly send notification to the recipients for a specified time. You should determine the time from which the event notification is active for the alarm. If you want the event notification repeated until the event is acknowledged, you should determine how often you want the notification to be repeated.
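The event-class expression mentioned in the guidelines above (userquota.*|qtree.*) can be exercised with a small sketch. This assumes the expression is treated as an ordinary regular expression matched against event names; the sample event names below are hypothetical, not taken from the product's event catalog.

```python
import re

# The expression from the guidelines: matches userquota or qtree events.
event_class = re.compile(r"userquota.*|qtree.*")

events = ["userquota.full", "qtree.files-almost-full", "aggregate.overcommitted"]
print([e for e in events if event_class.match(e)])
# → ['userquota.full', 'qtree.files-almost-full']
```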

How to know when an event occurs

You can learn about event occurrences by viewing the events list or by configuring alarms to automatically notify you when events occur.

• Viewing the events list
You can use the Events tab to view a list of all events that occurred and to view detailed information about any selected event.

• Configuring alarms
You can use the Alarms tab to add an alarm that sends notifications automatically when an event occurs.

Description of event severity types

Each event is associated with a severity type to help you prioritize the events that require immediate corrective action.

Emergency The event source unexpectedly stopped performing and experienced unrecoverable data loss. You must take corrective action immediately to avoid extended downtime.

Critical A problem occurred that might lead to service disruption if corrective action is not taken immediately.

Error The event source is still performing; however, corrective action is required to avoid service disruption.

Warning The event source experienced an occurrence that you should be aware of. Events of this severity do not cause service disruption, and corrective action might not be required.

Information The event occurs when a new object is discovered, or when a user action is performed. For example, when a group is created, an alarm is configured, or when a storage system is added, the event with severity type Information is generated. No action is required.

Normal A previous abnormal condition for the event source returned to a normal state and the event source is operating within the desired thresholds.


Alarm configuration

DataFabric Manager server uses alarms to notify you when events occur. DataFabric Manager server sends the alarm notification to one or more specified recipients in different formats, such as e-mail notification, pager alert, an SNMP traphost, or a script you wrote (you should attach the script to the alarm).

You should determine the events that cause alarms, whether the alarm repeats until it is acknowledged, and how many recipients an alarm has. Not all events are severe enough to require alarms, and not all alarms are important enough to require acknowledgment. Nevertheless, to avoid multiple responses to the same event, you should configure DataFabric Manager server to repeat notification until an event is acknowledged.

Note: DataFabric Manager server does not automatically send alarms for events.

Configuring alarms

Creating alarms for events

The OnCommand console enables you to configure alarms for immediate notification of events. You can also configure alarms even before a particular event occurs. You can add an alarm based on the event, event severity type, or event class from the Create Alarm dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirmyour authorization in advance.

You must have your mail server configured so that the DataFabric Manager server can send e-mailsto specified recipients when an event occurs.

You must have the following information available to add an alarm:

• The group with which you want the alarm associated.• The event name, event class, or event severity type that triggers the alarm.• The recipients and the modes of event notifications.• The period during which the alarm is active.

You must have the following capabilities to perform this task:

• DFM.Event.Write• DFM.Alarm.Write

About this task

Alarms you configure based on the event severity type are triggered when that event severity leveloccurs.


Steps

1. Click the Administration menu, then click the Alarms option.

2. From the Alarms tab, click Create.

3. In the Create Alarm dialog box, specify the condition for which you want the alarm to be triggered.

Note: An alarm is configured based on event type, event severity, or event class.

4. Specify one or more alarm notification properties.

5. Click Create, then click Close.

Related concepts

Guidelines for creating alarms on page 30

Related tasks

Configuring the mail server for alarm notifications on page 36

Related references

Administrator roles and capabilities on page 506

Creating alarms for a specific event

The OnCommand console enables you to configure an alarm when you want immediate notification for a specified event name or event class, or when events of a specified severity level occur. You can add an alarm from the Create Alarm dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

You must have your mail server configured so that the DataFabric Manager server can send e-mails to specified recipients when an event occurs.

You must have the following information available to add an alarm:

• The group with which you want the alarm associated
• The event name, event class, or severity type that triggers the alarm
• The recipients and the modes of event notifications
• The period during which the alarm is active

You must have the following capabilities to perform this task:

• DFM.Event.Write
• DFM.Alarm.Write


About this task

Alarms you configure for a specific event are triggered when that event occurs.

Steps

1. Click the View menu, then click the Events option.

2. From the events list in the Events tab, analyze the events, and then determine the event for which you want to create an alarm notification.

3. Select the event for which you want to create an alarm.

4. Click Create Alarm.

The Create Alarm dialog box opens, and the event is selected by default.

5. Specify one or more alarm notification properties.

6. Click Create, then click Close.

Related concepts

Guidelines for creating alarms on page 30

Related tasks

Configuring the mail server for alarm notifications on page 36

Related references

Administrator roles and capabilities on page 506

Managing events and alarms

Resolving events

After you have taken corrective action for a particular event, you should mark the event as resolved to avoid multiple event notifications. You can mark events as resolved from the Events tab.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Events option.

2. From the events list in the Events tab, select the event that you want to acknowledge.


3. Click Acknowledge.

If you do not acknowledge an event and mark it as resolved, you receive multiple notifications for the same event.

4. Find the cause of the event and take corrective action.

5. Click Resolve to mark the event as resolved.

Related references

Administrator roles and capabilities on page 506

Editing alarm properties

You can edit the configuration of an existing alarm from the Edit Alarm dialog box. For example, if you have created a script that is executed when there is an event notification, you can provide the complete script path in the Edit Alarm dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

You must have the following capabilities to perform this task:

• DFM.Event.Write
• DFM.Alarm.Write

Steps

1. Click the Administration menu, then click the Alarms option.

2. From the alarms list, select the alarm whose properties you want to modify.

3. From the Alarms tab, click Edit.

4. In the Edit Alarm dialog box, edit the properties of the alarm as required.

5. Click Edit.

Result

The new configuration is immediately activated and displayed in the alarms list.

Related references

Administrator roles and capabilities on page 506


Configuring the mail server for alarm notifications

You must configure the mail server so that when an event occurs the DataFabric Manager server can send e-mails to specified recipients.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. From the File menu, click Operations Manager.

2. In the Operations Manager console, click the Control Center tab.

3. Click the Setup menu, and then click Options.

4. In Edit options, click the Events and Alerts option.

5. In the Events and Alerts Options page, specify the name of your mail server.

Related references

Administrator roles and capabilities on page 506

Monitoring events and alarms

Viewing event details

You can view the details of each event, such as the source of the event, the event type, the condition that triggered the event, the time and date the event was triggered, and the event severity, from the Events tab. You can also view common details of multiple events.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.


Steps

1. Click the View menu, then click the Events option.

2. From the Events tab, select the event to view the details.

You can view the event details in the Details area.

Related references

Administrator roles and capabilities on page 506

Viewing alarm details

You can view the list of alarms created for various events from the Alarms tab. You can also view alarm properties such as the event name, the severity of the event, and the group associated with the alarm.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click the Alarms option.

2. From the Alarms tab, select the alarm to view the details.

You can view the alarm details in the Details area.

Related references

Administrator roles and capabilities on page 506

Page descriptions

Events tab

The Events tab provides a single location from which you can view a list of events and their properties. You can perform various actions on these events, such as navigating to the Alarms tab, configuring alarms (by clicking the Manage alarms link), acknowledging events, and resolving events.

• Command buttons on page 38
• Events list on page 38
• Details area on page 39


Command buttons

The command buttons enable you to perform the following management tasks for a selected event:

Acknowledge Acknowledges the selected events.

Your user name and the time are entered in the events list (Acknowledged By and Acknowledged) for the selected events. When you acknowledge an event, you take responsibility for managing that event.

Resolve Resolves the selected events.

Your user name and the time are entered in the events list (Resolved By and Resolved) for the selected events. After you have taken a corrective action for the event, you must mark the event as resolved.

Note: You can view the resolved event in the Events All report.

Create Alarm Launches the Create Alarm dialog box in which you can create alarms for the selected event.

Refresh Refreshes the list of events.

Events list

The Events list displays a list of all the events that occurred. By default, the most recent events are listed. The list of events is updated dynamically as events occur. You can select an event to see the details for that event.

ID Displays the ID of the event. By default, this column is hidden.

Source ID Displays the ID of the object with which the event is associated. By default, this column is hidden.

Triggered Displays the time and date the event was triggered.

Source Displays the full name of the object with which the event is associated.

Event Displays the event names. You can select an event to display the event details.

State Displays the event state: New, Acknowledged, or Resolved.

Severity Displays the severity type of the event. You can filter this column to show all severity types. The event severity types are Normal, Information, Warning, Error, Critical, and Emergency.

Acknowledged By Displays the name of the person who acknowledged the event. The field is blank if the event is not acknowledged. By default, this column is hidden.

Acknowledged Displays the date and time when the event was acknowledged. The field is blank if the event is not acknowledged. By default, this column is hidden.


Resolved By Displays the name of the person who resolved the event. This field is blank if the event is not resolved. By default, this column is hidden.

Resolved Displays the date and time at which the event was resolved. This field is blank if the event is not resolved. By default, this column is hidden.

Current Displays "Yes" if the event is a current event, and displays "No" if the event is a history event.

Details area

Apart from the event details displayed in the events list, you can view additional details of the events in the area below the events list.

Event Displays the event names. You can select an event to display the event details.

About A brief description of the event.

Triggered Displays the time and date the event was triggered.

State Displays the event state: New, Acknowledged, or Resolved.

Severity Displays the severity type of the event. The event severity types are Normal, Information, Warning, Error, Critical, and Emergency.

Source Displays the full name of the object with which the event is associated. By clicking the source, you can view the details of the object from the corresponding inventory view.

Type The type of the source that triggered the event.

Condition A description of the condition that triggered the event.

Notified The date and time at which the event notification was sent.

Acknowledged Displays the date and time when the event was acknowledged. The field is blank if the event is not acknowledged.

Resolved Displays the date and time at which the event was resolved. This field is blank if the event is not resolved.

Related references

Window layout customization on page 16


Alarms tab

The Alarms tab provides a single location from which you can view a list of alarms configured based on event, event severity type, and event class. You can also perform various actions from this window, such as edit, delete, test, and enable or disable alarms.

• Command buttons on page 40
• Alarms list on page 40
• Details area on page 41

Command buttons

The command buttons enable you to perform the following management tasks for a selected alarm:

Create Launches the Create Alarm dialog box in which you can create an alarm based on event,event severity type, and event class.

Edit Launches the Edit Alarm dialog box in which you can modify alarm properties.

Delete Deletes the selected alarm.

Test Tests the selected alarm to check its configuration, after creating or editing the alarm.

Enable Enables an alarm to send notifications.

Disable Disables the selected alarm when you want to temporarily stop its functioning.

Refresh Refreshes the list of alarms.

Alarms list

The Alarms list displays a list of all the configured alarms. You can select an alarm to see the details for that alarm.

Alarm ID Displays the ID of the alarm.

Event Displays the event name for which the alarm is created.

Event Severity Displays the severity type of the event.

Group Displays the group name with which the alarm is associated.

Enabled Displays “Yes” if the selected alarm is enabled or “No” if the selected alarm is disabled.

Start Displays the time at which the selected alarm becomes active. By default, this column is hidden.

End Displays the time at which the selected alarm becomes inactive. By default, this column is hidden.


Repeat Interval (Minutes) Displays the time period (in minutes) after which the DataFabric Manager server sends a repeated notification until the event is acknowledged or resolved. By default, this column is hidden.

Repeat Notify Displays “Yes” if the selected alarm is enabled for repeated notification, or displays “No” if the selected alarm is disabled for repeated notification. By default, this column is hidden.

Event Class Displays the class of event that is configured to trigger an alarm. By default, this column is hidden.

You can configure a single alarm for multiple events by using the event class. The event class is a regular expression that contains rules, or pattern descriptions, that typically use the word "matches" in the expression. For example, the userquota.*|qtree.* expression matches all user quota or qtree events.
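As a quick illustration of how such an event-class expression matches event names (a sketch using Python's re module; the product evaluates the expression on the server, and the event names below are illustrative):

```python
import re

# The event-class expression from the example above: matches any
# user quota or qtree event name.
event_class = re.compile(r"userquota.*|qtree.*")

# Hypothetical event names, used only to show the matching behavior.
for name in ("userquota.exceeded", "qtree.almost.full", "volume.offline"):
    print(name, "->", bool(event_class.match(name)))
```

The first two names match because they begin with "userquota" or "qtree"; the third does not.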

Details area

Apart from the alarm details displayed in the alarms list, you can view additional properties of the alarms in the area below the alarms list.

Effective Time Range The time during which an alarm is active.

Administrators (Email Address) The e-mail address of the administrator to which the alarm notification is sent.

Administrators (Pager Number) The pager number of the administrator to which the alarm notification is sent.

Email Addresses (Others) The e-mail addresses of nonadministrator users to which the alarm notification is sent.

Pager Numbers (Others) The pager numbers of nonadministrator users to which the alarm notification is sent.

SNMP Trap Host The SNMP traphost system that receives the alarm notification in the form of SNMP traps.

Script Path The name and path of the script that is run when an alarm is triggered.

Related references

Window layout customization on page 16

Create Alarm dialog box

The Create Alarm dialog box enables you to create alarms based on the event type, event severity, or event class. You can create alarms for a specific event or for many events.

• Event Options on page 42


• Notification Options on page 42
• Command buttons on page 42

Event Options

You can create an alarm based on event name, event severity type, or event class:

Group Displays the group that receives an alert when an event or event type triggers an alarm.

Event Displays the names of the events that trigger an alarm.

Event Severity Displays the severity types of the event that triggers an alarm. The event severity types are Normal, Information, Warning, Error, Critical, and Emergency.

Event Class Specifies the event classes that trigger an alarm.

The event class is a regular expression that contains rules, or pattern descriptions, that typically use the word "matches" in the expression. For example, the expression userquota.* matches all user quota events.

Notification Options

You can specify alarm notification properties by selecting one of the following check boxes:

SNMP Trap Host Specifies the SNMP traphost that receives the notification.

E-mail Administrator (Admin Name) Specifies the name of the administrator who receives the e-mail notification. You can specify multiple administrator names, separated by commas.

Page Administrator (Admin Name) Specifies the administrator who receives the pager notification. You can specify multiple administrator names, separated by commas.

E-mail Addresses (Others) Specifies the e-mail addresses of nonadministrator users who receive the notification. You can specify multiple e-mail addresses, separated by commas.

Pager Numbers (Others) Specifies the pager numbers of other nonadministrator users who receive the notification. You can specify multiple pager numbers, separated by commas.

Script Path Specifies the name of the script that is run when the alarm is triggered.

Repeat Interval (Minutes) Specifies whether an alarm notification is repeated until the event is acknowledged, and, if so, how often the notification is repeated.

Effective Time Range Specifies the time during which the alarm is active.
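The repeat-interval behavior can be sketched as a simple loop. This is a behavioral model only; the function, parameter names, and the repeat cap are illustrative assumptions, not a product API:

```python
import time

# Behavioral sketch of repeat notification: send once, then resend at
# the repeat interval until the event is acknowledged (or a cap is hit).
# All names and the max_repeats cap are illustrative assumptions.
def notify_until_acknowledged(send, is_acknowledged, interval_seconds, max_repeats=5):
    send()
    sent = 1
    while sent < max_repeats and not is_acknowledged():
        time.sleep(interval_seconds)
        send()
        sent += 1
    return sent
```

For example, a 15-minute repeat interval would correspond to interval_seconds=900 in this sketch.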

Command buttons

You can use command buttons to perform the following management tasks for a selected event:


Create Creates an alarm based on the properties that you specify.

Cancel Does not save the alarm configuration and closes the Create Alarm dialog box.

Edit Alarm dialog box

The Edit Alarm dialog box enables you to edit alarm properties such as the group with which the alarm is associated, event type, event severity, event class, and notification options.

• Event Options on page 43
• Notification Options on page 43
• Command buttons on page 44

Event Options

You can edit alarm properties such as the group with which the alarm is associated, event type, event severity, or event class.

Group Displays the group (and its subgroups) that receives an alert when an event or event type triggers an alarm.

Event Displays the name of the event that triggers an alarm.

Event Severity Displays the severity type of the event that triggers the alarm.

Event Class Specifies the class of the event that triggers the alarm.

The event class is a regular expression that contains rules, or pattern descriptions, that typically use the word "matches" in the expression. For example, the expression userquota.* matches all user quota events.

Notification Options

You can edit alarm notification properties by selecting one of the following check boxes:

SNMP Trap Host Specifies the SNMP traphost that receives the notification.

E-mail Administrator (Admin Name) Specifies the name of the administrator who receives the e-mail notification of the event. You can specify multiple administrator names, separated by commas.

Page Administrator (Admin Name) Specifies the administrator who receives the pager notification of the event. You can specify multiple administrator names, separated by commas.

E-mail Addresses (Others) Specifies the e-mail addresses of other users (other than the administrators) who receive the notification. You can specify multiple e-mail addresses, separated by commas.


Pager Numbers (Others) Specifies the pager numbers of other users (other than the administrators) who receive the notification. You can specify multiple pager numbers, separated by commas.

Script Path Specifies the name of the script that is run when the alarm is triggered.

Repeat Interval (Minutes) Specifies whether an alarm notification is repeated until the event is acknowledged, and how often the notification is repeated.

Effective Time Range Specifies the time during which the alarm is active.

Command buttons

You can use command buttons to perform the following management tasks for a selected event:

Edit Modifies the alarm properties that you specify.

Cancel Does not save the changes to the alarm configuration, and closes the Edit Alarm dialog box.


Jobs

Understanding jobs

Understanding jobs

A job is typically a long-running operation. The OnCommand console enables you to create, manage, and monitor jobs. From the Jobs tab, you can view all jobs that are currently running as well as jobs that have completed.

The following are three examples of jobs:

• A scheduled local backup of a dataset
• A mirror transfer job
• A mount or unmount of a VMware Snapshot copy

Managing jobs

Canceling jobs

You can use the Jobs tab to cancel a job if it is taking too long to complete, is encountering too many errors, or is no longer needed. You can cancel a job only if its status and type allow it. You can cancel any job that has the status Running or Running with failures.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Jobs option.

2. From the list of jobs, select one or more jobs.

3. Click Cancel.

Note: The Cancel button is enabled only for jobs that are either Running or Running with failures. If the Cancel button is not enabled, that job type cannot be canceled.

4. At the confirmation prompt, click Yes to cancel the selected job.


Related references

Administrator roles and capabilities on page 506

Monitoring jobs

Monitoring jobs

You can monitor job status and other job details by using the Jobs tab. For example, you can view the progress of an on-demand backup job and see whether there are any errors.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Jobs option.

2. Select a job in the jobs list to see information about that job.

The Groups selection list in the toolbar enables you to display only the data that pertains to objects in a selected group. This setting persists until you log out or choose a different group.

Related references

Administrator roles and capabilities on page 506

Page descriptions

Jobs tab

The Jobs tab enables you to view the current status and other information about all jobs that are currently running as well as jobs that have completed. You can use this information to see which jobs are still running and which jobs have succeeded. This tab displays up to 25,000 jobs in all states.

• Command buttons on page 47
• View Jobs drop-down list on page 47
• Jobs list on page 47
• Completed job steps on page 51
• Job details on page 52


Command buttons

Cancel Stops the selected jobs. You can select multiple jobs and cancel them simultaneously. This button is enabled only for certain job types and when the selected jobs are running.

Refresh Updates the jobs list.

View Jobs drop-down list

Selecting these options displays all jobs that were started during the specified time range; these displays do not include earlier but still in-process jobs. All ranges are based on a 24-hour day for which 00:00 represents midnight.

1 Day Displays all jobs that were started between midnight of the previous day and now. This period can cover up to 47 hours and 59 minutes.

For example, if you click 1 Day at 15:00 on February 14 (on a 24-hour clock), the list includes all jobs that were started from 00:00 (midnight) on February 13 to the current time on February 14. This list covers the full day of February 13 plus the partial current day of February 14.

1 Week Displays all jobs that were started between midnight of the same day in the previous week (seven days ago) and now. This period can cover up to seven days, 23 hours, and 59 minutes.

For example, if you click 1 Week at 15:00 on Thursday, February 14 (on a 24-hour clock), the list includes all jobs that were started from 00:00 (midnight) the previous Thursday (February 7) to the current time on February 14. This list covers seven full days plus the partial current day.

1 Month Displays all jobs that were started between midnight of the same day in the previous month and now. This period can cover from 28 through 32 days, depending on the month.

For example, if you click 1 Month at 15:00 on Thursday, February 14 (on a 24-hour clock), the list includes all jobs that were started from 00:00 (midnight) on January 14 to the current time on February 14.

All Displays all jobs.

Note: On very large or very busy systems, the Jobs tab might be unresponsive for long periods while loading 1 Month or All data. If the application appears unresponsive for these large lists, select a shorter time period (such as 1 Day).
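The "1 Day" window starts at midnight of the previous day, which is why it can span up to 47 hours and 59 minutes. A small sketch of that calculation (the function name is ours, not the product's):

```python
from datetime import datetime, timedelta

def one_day_window_start(now: datetime) -> datetime:
    """Start of the '1 Day' view: 00:00 (midnight) of the previous day."""
    midnight_today = now.replace(hour=0, minute=0, second=0, microsecond=0)
    return midnight_today - timedelta(days=1)

# Clicking 1 Day at 15:00 on February 14 lists jobs started
# from 00:00 (midnight) on February 13 onward.
now = datetime(2011, 2, 14, 15, 0)
print(one_day_window_start(now))   # 2011-02-13 00:00:00

# The span is longest just before the next midnight: 1 day, 23:59 (= 47 h 59 min).
latest = datetime(2011, 2, 14, 23, 59)
print(latest - one_day_window_start(latest))
```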

Jobs list

Displays a list of the jobs that are in progress. You can customize the display by using the following filtering and sorting options in the columns of the jobs list.

Note: You can display no more than 25,000 records simultaneously.


Job The identification number of the job. The default jobs list includes this column.

The job identification number is unique and is assigned by the server when it starts the job. You can search for a particular job by entering the job identification number in the text box provided by the column filter.

Job Type The type of job, which is determined by the policy assigned to the dataset or by the direct request initiated by a user. The default jobs list includes this column. The job types are as follows:

Backup Deletion A job that deletes backups of volumes of a dataset.

Backup Mount An operation that mounts a selected backup to an ESX server.

Backup Unmount An operation that unmounts a backup from an ESX server.

Failover A dataset failover from a primary node to a disaster recovery node. Applies only if the dataset is enabled for disaster recovery.

Host Service Resource Discovery A job that discovers virtual servers and the mapping between virtual and physical storage systems.

Host Service Settings Configuration A job that configures a host service.

Host Service Storage Configuration Import A job that imports the configuration of the storage system from the host service to the Express edition of the DataFabric Manager server.

LUN Destruction A job that deletes a LUN.

LUN Resizing A job that changes the LUN size.

Local Backup A local scheduled backup protection operation based on Snapshot technology.

Local Backup Confirmation A local scheduled backup protection operation based on Snapshot technology. Applies if a dataset is an application-generated dataset and if the application is responsible for creating local backups.

Local Then Remote Backup A local backup protection operation on the primary node of the dataset followed by a transfer of the backups to remote nodes of the dataset.

Member Dedupe A deduplication space savings operation initiated on volumes of a dataset.


Migration (One-Step) A job that begins migrating a dataset or vFiler unit to a new storage system and automatically performs the cutover operation.

Migration Cancellation A job that cancels a dataset or vFiler unit migration.

Migration Cleanup A job that deletes the old storage after a dataset or vFiler unit migration cutover.

Migration Completion A migration operation that switches the source of a dataset or vFiler unit from the old storage system to a new storage system.

Migration Relinquishment A job that transfers the migration capability of a dataset.

Migration Repair A CLI-based repair operation.

Migration Rollback A job that reverses the migration and restores the original source storage systems as the active accessible systems.

Migration Start An operation that begins migrating a dataset or vFiler unit to a new storage system.

Migration Update A job that updates the SnapMirror relationships that were created as part of the migration start operation.

Mirror A scheduled protection mirror operation based on SnapMirror technology.

On-Demand Protection A backup or mirror operation that is initiated by clicking the Backup Now button in the Datasets tab. The types of tasks performed are determined by the policy configured for the dataset.

Provisioning A job that provisions containers into a dataset based on the associated policy and dataset attributes.

Relationship Creation A job that creates a protection relationship based on protection technology.

Relationship Destruction A protection relationship delete operation based on protection technology.

Remote Backup A scheduled backup to secondary storage based on SnapVault or qtree SnapMirror technology.

Restore A protection data restore job that is initiated from the Datasets tab or the Backups tab.


Server Configuration A job that begins the initial setup and configuration of the Express edition of the DataFabric Manager server.

Snapshot Copies Deletion A job that deletes Snapshot copies of volumes of a dataset containing physical storage objects.

Snapshot Copy Deletion A job that deletes a Snapshot copy from any volume, not just members of a dataset.

Storage Deletion A job that deletes a volume, qtree, or LUN from the storage system.

Attention: This operation destroys the data in the deleted volume, qtree, or LUN, and cannot be reversed.

Storage Resizing A job that changes the storage size or quota limit. If the selected container is a volume, this job type changes the size, Snapshot reserve, and maximum size of the volume. If the selected container is a qtree, this job type changes the quota limit of the qtree.

Volume Dedupe A deduplication space savings operation initiated on any volume, not just members of a dataset.

Volume Migration A volume migration job in secondary or tertiary storage.

Volume Resizing A job that changes the storage size of any volume, not just members of a dataset.

Volume Undedupe A job that converts a deduplicated volume to a normal volume.

Object The name of the object on which the job was started. The default jobs list includes this column.

Object Type The type of object on which the job was started. The default jobs list includes this column. Examples of object types are Aggregate, Dataset, and vFiler unit.

Start The date and time the job was started. The default jobs list includes this column.

Bytes Transferred The amount of data (in megabytes, gigabytes, or kilobytes) that was transferred during the job. This column is not displayed in the jobs list by default.

Note: This number is an approximation and does not reflect an exact count; it is always less than the actual number of bytes transferred. For jobs that take a short time to complete, no data transfer size is reported.

Job Status The running status of the job. The default jobs list includes this column. The status options are as follows:


Failed All tasks in the job failed.

Partially Failed One or more of the tasks in the job failed and one or more ofthe tasks completed successfully.

Succeeded All tasks completed successfully.

Running withFailures

The job is currently running but one or more tasks in the jobfailed.

Running The job is currently running.

Queued The job is not running yet. However, it is scheduled to run after other provisioning or protection jobs on the same dataset are completed.

Canceled The job stopped because the Cancel button was clicked to stop the job before it was completed.

Canceling The Cancel button was clicked and the job is in the process of being canceled.
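A sketch of how an overall job status could be derived from individual task results, based on the definitions above. This is a hypothetical illustration; the function name and status strings are not part of the OnCommand API.

```python
# Hypothetical sketch: derive a finished job's overall status from its
# task results, following the Failed / Partially Failed / Succeeded
# definitions in the table above. Not an OnCommand API function.

def derive_job_status(task_results):
    """task_results is a list of "succeeded" / "failed" strings for a finished job."""
    if all(r == "failed" for r in task_results):
        return "Failed"            # all tasks in the job failed
    if any(r == "failed" for r in task_results):
        return "Partially Failed"  # some tasks failed, some succeeded
    return "Succeeded"             # all tasks completed successfully
```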

End The date and time the job ended. The default jobs list includes this column.

Policy The name of the data protection policy associated with the job. This column is not displayed in the jobs list by default.

Source Node The name of the storage resource that contains the data being protected. This column is not displayed in the jobs list by default.

Destination Node The name of the storage resource to which the data is transferred during the job. This column is not displayed in the jobs list by default.

Submitted By The policy that automatically started the job or the user name of the person who started the job. This column is not displayed in the jobs list by default.

Description A description of the job taken from the policy configuration or the job description entered when the job was manually started. This column is not displayed in the jobs list by default.

Completed job steps

Displays detailed information about each task in the selected job. You can select a step to see its details.

Time Stamp The date and time the step was completed.

Step A description of the step: for example, Start, In progress, or End.

Result The result of the step. Result options are as follows:

Error The step failed.


Warning The step succeeded but with a possible problem.

Normal The step succeeded.

Job details

Displays details for the currently highlighted job in the lower right window.

Dataset The name of the dataset to which the job belongs.

Job Description A description of the specified job.

Event Description A description of any events associated with the job.

Policy The name of the data protection policy associated with the job. This column is not displayed in the jobs list by default.

Job Type The type of job, which is determined by the policy assigned to the dataset or by the direct request initiated by a user.

Source The name of the storage resource that contains the data being protected. This column is not displayed in the jobs list by default.

Destination The name of the storage resource to which the data is transferred during the job. This column is not displayed in the jobs list by default.

Submitted By The policy that automatically started the job or the user name of the person who started the job. This column is not displayed in the jobs list by default.

Bytes Transferred The amount of data (in megabytes, gigabytes, or kilobytes) that was transferred during the job. This column is not displayed in the jobs list by default.

Note: This number is an approximation and does not reflect an exact count; it is always less than the actual number of bytes transferred. For jobs that take a short time to complete, no data transfer size is reported.

Related references

Window layout customization on page 16


Servers

Understanding virtual inventory

How virtual objects are discovered

After you successfully install and register a host service with DataFabric Manager server, DataFabric Manager server automatically begins a job to discover the virtual server inventory.

The storage credentials that you set when configuring the host service (and vCenter properties for VMware) are pushed to the storage inventory, and DataFabric Manager server begins to map each server to storage.

If this automatic discovery job is not successful, you can fix the errors noted in the event log and then manually start a discovery job from the Host Services tab.

When you make changes to the virtual infrastructure, the results do not immediately appear in the OnCommand console. To see the updated inventory, manually refresh the host service information from the Host Services tab.

Related concepts

What a host service is on page 393

Configuring a new host service on page 395

Rediscovering virtual object inventory on page 408

Monitoring virtual inventory

Monitoring VMware inventory

Viewing the VMware virtual center inventory

You can view your inventory of VMware virtual centers, add a virtual center to a group, and view related storage objects from the VMware Virtual Centers view.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.


About this task

When you make changes to the virtual infrastructure, the results do not immediately appear in the OnCommand console. To see the updated inventory, manually refresh the host service information from the Host Services tab.

Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, click the VMware folder, then click Virtual Centers.

Related tasks

Rediscovering virtual object inventory on page 408

Refreshing host service information on page 408

VMware Virtual Centers view on page 71

Related references

Administrator roles and capabilities on page 506

Viewing the VMware datacenter inventory

You can view your inventory of VMware datacenters, start a backup job, add a datacenter to a dataset, add a datacenter to a group, and view related storage objects from the VMware Datacenters view.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

When you make changes to the virtual infrastructure, the results do not immediately appear in the OnCommand console. To see the updated inventory, manually refresh the host service information from the Host Services tab.

Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, click the VMware folder, then click Datacenters.

Related tasks

Rediscovering virtual object inventory on page 408

Refreshing host service information on page 408


VMware Datacenters view on page 72

Related references

Administrator roles and capabilities on page 506

Viewing the VMware ESX server inventory

You can view your inventory of VMware ESX servers, add an ESX server to a group, and view related storage objects from the VMware ESX Servers view.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

When you make changes to the virtual infrastructure, the results do not immediately appear in the OnCommand console. To see the updated inventory, manually refresh the host service information from the Host Services tab.

Note: If you move an ESX server from one vCenter to another, DataFabric Manager server still shows the ESX server and its objects in the inventory for the original vCenter host service. In this case, you must explicitly remove the ESX server from the original vCenter.

Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, click the VMware folder, then click ESX Servers.

Related tasks

Rediscovering virtual object inventory on page 408

Refreshing host service information on page 408

VMware ESX Servers view on page 73

Related references

Administrator roles and capabilities on page 506


Viewing the VMware virtual machine inventory

You can view and monitor your inventory of VMware virtual machines, start a backup job, add a virtual machine to a dataset, start a restore job, mount or unmount a virtual machine, add a virtual machine to a group, and view related storage objects from the VMware VMs view.

Before you begin

For the OnCommand console to show guest virtual machine properties such as DNS name and IP address, VMware Tools must be installed and running on the guest virtual machine.

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

When you make changes to the virtual infrastructure, the results do not immediately appear in the OnCommand console. To see the updated inventory, manually refresh the host service information from the Host Services tab.

Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, click the VMware folder, then click VMware VMs.

Related tasks

Rediscovering virtual object inventory on page 408

Refreshing host service information on page 408

VMware VMs view on page 74

Related references

Administrator roles and capabilities on page 506

Viewing the VMware datastore inventory

You can view and monitor your inventory of VMware datastores, start a backup job, add a datastore to a dataset, start a restore job, mount or unmount a datastore, add a datastore to a group, and view related storage objects from the VMware Datastores view.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.


About this task

When you make changes to the virtual infrastructure, the results do not immediately appear in the OnCommand console. To see the updated inventory, manually refresh the host service information from the Host Services tab.

Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, click the VMware folder, then click Datastores.

Related tasks

Rediscovering virtual object inventory on page 408

Refreshing host service information on page 408

VMware Datastores view on page 77

Related references

Administrator roles and capabilities on page 506

Monitoring Hyper-V inventory

Viewing the Hyper-V server inventory

You can view your inventory of Hyper-V servers, add a server to a group, and view related storage objects from the Hyper-V Servers view.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

When you make changes to the virtual infrastructure, the results do not immediately appear in the OnCommand console. To see the updated inventory, manually refresh the host service information from the Host Services tab.

Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, click the Hyper-V folder, then click Hyper-V Servers.

Related tasks

Rediscovering virtual object inventory on page 408


Refreshing host service information on page 408

Hyper-V Servers view on page 80

Related references

Administrator roles and capabilities on page 506

Viewing the Hyper-V virtual machine inventory

You can view and monitor your inventory of Hyper-V virtual machines, start a backup job, add a virtual machine to a dataset, start a restore job, add a virtual machine to a group, and view related storage objects from the Hyper-V VMs view.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

When you make changes to the virtual infrastructure, the results do not immediately appear in the OnCommand console. To see the updated inventory, manually refresh the host service information from the Host Services tab.

Note: The Related Objects pane does not show LUNs that were created on the virtual machine using the Microsoft iSCSI software initiator.

Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, click the Hyper-V folder, then click Hyper-V VMs.

Related tasks

Rediscovering virtual object inventory on page 408

Refreshing host service information on page 408

Hyper-V VMs view on page 81

Related references

Administrator roles and capabilities on page 506

Managing virtual inventory


Adding virtual objects to a group

You can add a virtual object to an existing group. You can also create a new group and add the virtual object to it.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, select the type of virtual object in the list of views.

3. Select the virtual object you want to add to the group.

4. Click Add to Group.

5. In the Add to Group dialog box, perform the appropriate action:

If you want to... Then...

Add the virtual object to an existing group. Select the appropriate group.

Add the virtual object to a new group. Type the name of the new group in the New Group field.

6. Click OK.

Result

The virtual object is added to the group.

Related references

Administrator roles and capabilities on page 506

Adding a virtual machine to inventory

If you want to recover data from a virtual machine that has been deleted from the OnCommand console, you must manually add the virtual machine to the inventory.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the OnCommand Server tab.


2. Select Datastores and select the listing for the virtual machine that you want to add.

3. Click Add to Group.

Preparing a virtual object managed by the OnCommand console for deletion from inventory

Before you use a third-party virtual object management tool, such as vSphere, to delete from inventory a virtual object, such as a virtual machine or a datastore, that is currently managed by the OnCommand console, you must first remove that object from any dataset to which it belongs.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

Failure to remove a virtual object from a dataset before deleting it from inventory causes backup failures in the deleted object's former dataset.

Steps

1. After you decide to delete a specific virtual object from inventory, but before you actually delete it, click the OnCommand console Server tab.

2. Find and select the listing for the virtual object that you want to delete, and note whether any datasets are listed in that object's Dataset(s) column.

Datasets that are listed in the selected object's Dataset(s) column indicate that the virtual object is a member of those datasets.

3. If no datasets are listed for the selected virtual object, use your usual tool to delete that object from inventory.

4. If the selected object belongs to one or more datasets, click Datasets in the Related Objects pane to display the dataset hyperlinks, then complete the following actions for each hyperlink:

a. Click the dataset hyperlink.

The OnCommand console displays the Datasets tab with the dataset in question selected.

b. Click Edit to display the Edit Dataset dialog box for the selected dataset.

c. Click Data to display the Data area.

d. Remove the virtual object that you want to delete from its dataset.

e. Click OK to finalize the removal.

5. After you have removed the virtual object from all datasets, use your third-party management tool to delete the virtual object from inventory.
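The remove-before-delete flow described in the steps above can be sketched as follows. The functions here are hypothetical placeholders rather than OnCommand API calls; the sketch only illustrates the required order of operations.

```python
# Illustrative sketch of the workflow above: remove the virtual object from
# every dataset it belongs to, and only then delete it from inventory.
# All names are hypothetical; this is not an OnCommand API.

def prepare_for_deletion(virtual_object, remove_from_dataset, delete_from_inventory):
    """Remove the object from all datasets, then delete it from inventory."""
    for dataset in list(virtual_object.get("datasets", [])):
        remove_from_dataset(dataset, virtual_object)  # steps 4a-4e: edit each dataset
        virtual_object["datasets"].remove(dataset)
    delete_from_inventory(virtual_object)             # step 5: safe to delete now
```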


Related references

Administrator roles and capabilities on page 506

Performing an on-demand backup of virtual objects

You can protect your virtual machines or datastores by adding them to an existing or new dataset and performing an on-demand backup.

Before you begin

• You must have reviewed the Guidelines for performing an on-demand backup on page 277.
• You must have reviewed the Requirements and restrictions when performing an on-demand backup on page 279.
• You must have added the virtual objects to an existing dataset or have created a dataset and added the virtual objects that you want to back up.
• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.
• You must have the following information available:
  • Dataset name
  • Retention duration
  • Backup settings
  • Backup script location
  • Backup description

About this task

If you perform a backup of a dataset containing Hyper-V virtual machines and you are currently restoring those virtual machines, the backup might fail.

Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, choose the virtual machine or datastore that you want to back up.

If you want to back up... Then...

A virtual machine or datastore that does not belong to an existing dataset

You must first add it to an existing dataset.

A virtual machine or datastore, but no datasets currently exist

You must first create a dataset and then add to it the required virtual machines or datastores.

3. Click Backup and select the Back Up Now option.

4. In the Back Up Now dialog box, select the dataset that you want to back up.


If the virtual machine or datastore belongs to multiple datasets, you must select one dataset to back up.

5. Specify the local protection settings, backup script path, and backup description for the on-demand backup.

If you have already established local policies for the dataset, that information automatically appears for the local protection settings for the on-demand backup. If you change the local protection settings, the new settings override only the existing application policies for this on-demand backup.

6. If you want a remote backup to begin after the local backup has finished, select the Start remote backup after local backup check box.

7. Click Back Up Now.

After you finish

You can monitor the status of your backup from the Jobs tab.

Related references

Jobs tab on page 46

Administrator roles and capabilities on page 506

Guidelines for performing an on-demand backup

Before performing an on-demand backup of a dataset, you must decide how you want to assign resources and protection settings.

General properties information

When performing an on-demand backup, you need to provide information about what objects you want to back up, to assign protection and retention settings, and to specify scripts that run before or after the backup operation.

Dataset name You must select the dataset that you want to back up.

Local protection settings You can define the retention duration and the backup settings for your on-demand backup, as needed.

Retention You can choose to keep a backup until you manually delete it, or you can assign a retention duration. By specifying a length of time to keep the on-demand local backup, you can override the retention duration in the local policy you assigned to the dataset for this backup. The retention duration of a local backup determines the retention type for the remote backup.


A combination of both the remote backup retention type and storage service is used to determine the remote backup retention duration.

For example, if you specify a local backup retention duration of two days, the retention type of the remote backup is Daily. The dataset storage service then verifies how long daily remote backups are kept and applies this to the backup. This is the retention duration of the remote backup.

The following table lists the local backup retention durations and the equivalent remote backup retention type:

Local retention duration Remote retention type

Less than 24 hours Hourly

1 day up to, but not including, 7 days Daily

1 week up to, but not including, 31 days Weekly

More than 31 days Monthly
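The mapping in the table above can be expressed as a small function. This is an illustrative sketch, assuming the local retention duration is given in hours; a duration of exactly 31 days, which the table leaves ambiguous, is treated here as Monthly.

```python
# Illustrative sketch of the local-to-remote retention mapping described
# above. Assumes the duration is expressed in hours; not an OnCommand API.

def remote_retention_type(local_retention_hours):
    """Return the remote backup retention type for a local retention duration."""
    if local_retention_hours < 24:        # less than 24 hours
        return "Hourly"
    if local_retention_hours < 24 * 7:    # 1 day up to, but not including, 7 days
        return "Daily"
    if local_retention_hours < 24 * 31:   # 1 week up to, but not including, 31 days
        return "Weekly"
    return "Monthly"                      # more than 31 days
```

For example, a local retention of two days (48 hours) yields a Daily remote retention type, matching the worked example in the text.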

Backup settings You can choose your on-demand backup settings based on the type of virtual objects you want to back up.

Allow saved state backup (Hyper-V only) You can choose to skip the backup if it causes one or more of the virtual machines to go offline. If you do not choose this option, and your Hyper-V virtual machines are offline, backup operations fail.

Create VMware snapshot (VMware only) You can choose to create a VMware-formatted snapshot in addition to the storage system Snapshot copies created during local backup operations.

Include independent disks (VMware only) You can include independent disks, which are VMDKs that belong to VMware virtual machines in the current dataset but reside on datastores that are not part of the current dataset.

Backup script path You can specify a script that is invoked before and after the local backup. The script is invoked on the host service and the path is local to the host service. If you use a PowerShell script, you should use the drive letter convention. For other types of scripts, you can use either the drive letter convention or the Universal Naming Convention.

Backup description You can provide a description for the on-demand backup so you can easily find it when you need it.

Clustered virtual machine considerations (Hyper-V only)

Dataset backups of clustered virtual machines take longer to complete when the virtual machines run on different nodes of the cluster. When virtual machines run on different nodes, separate backup operations are required for each node in the cluster. If all virtual machines run on the same node, only one backup operation is required, resulting in a faster backup.

Requirements and restrictions when performing an on-demand backup

You must be aware of the requirements and restrictions when performing an on-demand backup. Some requirements and restrictions apply to all types of objects and some are specific to Hyper-V or VMware virtual objects.

Requirements Virtual machines or datastores must belong to a dataset before you can back them up. You can add virtual objects to an existing dataset or create a new dataset and add virtual objects to it.

Hyper-V specific requirements Each virtual machine contained in the dataset that you want to back up must contain at least 300 MB of free disk space. Each Windows volume in the virtual machine (guest OS) must have at least 300 MB free disk space. This includes the Windows volumes corresponding to VHDs, iSCSI LUNs, and pass-through disks attached to the virtual machine.

Hyper-V virtual machine configuration files, snapshot copy files, and VHDs must reside on Data ONTAP LUNs; otherwise, backup operations fail.

VMware specific requirements Backup operations of datasets containing empty VMware datacenters or datastores will fail. All datacenters must contain datastores or virtual machines to successfully perform a backup.

Virtual disks must be contained within folders in the datastore. If virtual disks exist outside of folders on the datastore, and that data is backed up, restoring the backup could fail.

NFS backups might take more time than VMFS backups because it takes more time for VMware to commit snapshots in an NFS environment.

Hyper-V specific restrictions Partial backups are not supported. If the Hyper-V VSS writer fails to back up one of the virtual machines in the backup and the failure occurs at the Hyper-V parent host, the backup fails for all of the virtual machines in the backup.

Restoring backups from the Server tab


Restoring a datastore using the Restore wizard

You can use the OnCommand console to recover a datastore from a local or remote backup. By doing so, you overwrite the existing content with the backup you select.

About this task

Once you start the restoration, you cannot stop the process.

Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, select All Datastores to sort the backup table by datastores.

3. Select a datastore that has the backup you want.

4. Click Restore.

The Restore wizard opens.

5. Select the datastore that contains the backup from the list of backed-up entities.

6. Select the following restore options:

Option Description

Start VM after restore Restores the contents of your virtual machine from a Snapshot copy and restarts the virtual machine after the operation completes.

Pre/Post Restore Script Runs a script that is stored on the host service server before or after the restore operation.

The Restore wizard displays the location of the virtual hard disk (.vhd) file.

7. From this wizard, click Restore to begin the restoration.

Restoring a VMware virtual machine using the Restore wizard

You can use the OnCommand console to recover a VMware virtual machine from a local or remote backup. By doing so, you overwrite the existing content with the backup you select.

About this task

The process for restoring a VMware virtual machine differs from restoring a Hyper-V virtual machine in that you can restore an entire virtual machine or its disk files. Once you start the restoration, you cannot stop the process, and you cannot restore from a backup of a virtual machine after you delete the dataset the virtual machine belonged to.


Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, click All VMware VMs to sort the backup table by VMware virtual machines.

3. Select a virtual machine from the list of backed-up entities.

4. Click Restore.

The Restore wizard opens and lists the dataset that includes the backup of the virtual machine.

5. Select one of the following recovery options:

Option Description

The entire virtual machine Restores the contents of your virtual machine from a Snapshot copy to its original location. The Restart VM checkbox is enabled if you select this option and the virtual machine is registered.

Particular virtual disks Restores the contents of the virtual disks on a virtual machine to a different location. This option is enabled if you uncheck the entire virtual machine option.

6. In the ESX host name field, select the name of the ESX host. The ESX host is used to mount the virtual machine components.

This option is available if you want to restore virtual disk files or the virtual machine is on a VMFS datastore.

7. In the Pre/Post Restore Script field, type the name of the script that you want to run before or after the restore operation.

8. Click Next.

9. From this wizard, review the summary of restore operations and click Restore to begin the restoration.

Related tasks

Adding a virtual machine to inventory on page 59

Where to restore a backup on page 297

Restoring a Hyper-V virtual machine using the Restore wizard

You can use the OnCommand console to recover a Hyper-V virtual machine from a local or remote backup. By doing so, you overwrite the existing content with the backup you select.

About this task

If you start a restore operation of a Hyper-V virtual machine while another backup or restoration of the same virtual machine is in process, the restore operation fails. Once you start the restoration, you cannot stop the process.


Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, select All Hyper-V VMs to sort the backup table by Hyper-V virtual machines.

3. Select a virtual machine that has the backup you want.

4. Click Restore.

The Restore wizard opens.

5. Select the dataset that contains the backup from the list of backed-up entities.

6. Select the following restore options:

Option Description

Start VM after restore Restores the contents of your virtual machine from a Snapshot copy and restarts the virtual machine after the operation completes.

Pre/Post Restore Script Runs a script that is stored on the host service server before or after the restore operation.

The Restore wizard displays the location of the virtual hard disk (.vhd) file.

7. From this wizard, click Restore to begin the restoration.

Mounting and unmounting backups in a VMware environment

Mounting backups in a VMware environment from the Server tab

You can mount existing backups onto an ESX server for backup verification prior to completing a restore operation or to restore a virtual machine to an alternate location. All the datastores and the virtual machines within the backup are mounted to the ESX server that you specify. Both the Mount and Unmount buttons are disabled for Hyper-V backups.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

In the OnCommand console Backups tab, deleting a mirror source backup copy prevents you from mounting its partner mirror destination backup copy. For a Mirror-generated destination backup copy to be mountable, its associated mirror source backup copy must still exist on the source node.

Steps

1. Click the View menu, then click the Server option.


2. In the Server tab, click the VMware option, then click VMware VMs or Datastores.

3. Select a virtual machine or datastore and click Mount.

You cannot mount Hyper-V backups using this button.

4. In the Mount Backup dialog box, select an unmounted backup that you want to mount.

You can mount only one backup at a time, and you cannot mount a backup that is already mounted.

5. Select from the drop-down list the name of the ESX server to which you want to mount the backup.

6. Click Mount.

A dialog box appears with a link to the mount job, and when you click the link, the Jobs tab appears.

After you finish

You can monitor the status of your mount and unmount jobs in the Jobs tab.

Related references

Administrator roles and capabilities on page 506

Guidelines for mounting or unmounting backups in a VMware environment on page 271

Unmounting backups in a VMware environment from the Server tab

After you are done using a mounted backup for verification or to restore a virtual machine to an alternate location, you can unmount the mounted backup from the ESX server that it was mounted to. When you unmount a backup, all the datastores in that backup are unmounted and can no longer be seen from the ESX server that you specify. Both the Mount and Unmount buttons are disabled for Hyper-V backups.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

If there are virtual objects in use from the previously mounted datastores of a backup, the unmount operation fails. You must manually clean up the backup prior to mounting the backup again because its state reverts to not mounted.

If all the datastores of the backup are in use, the unmount operation fails but this backup's state changes to mounted. You can unmount the backup after determining the datastores are not in use.


Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, click the VMware option, then click VMware VMs or Datastores.

3. Select a virtual machine or datastore and click Unmount.

4. In the Unmount Backup dialog box, select a mounted backup to unmount.

5. Click Unmount.

6. At the confirmation prompt, click Yes.

A dialog box opens with a link to the unmount job, and when you click the link, the Jobs tab appears.

After you finish

If the ESX server becomes inactive or reboots during an unmount operation, the job is terminated, the mount state remains mounted, and the backup stays mounted on the ESX server.

You can monitor the status of your mount and unmount jobs in the Jobs tab.

Related references

Administrator roles and capabilities on page 506

Guidelines for mounting or unmounting backups in a VMware environment on page 271

Mounting backups in a VMware environment from the Backups tab

You can mount existing backups onto an ESX server for backup verification prior to completing a restore operation or to restore a virtual machine to an alternate location. All the datastores and the virtual machines within the backup are mounted to the ESX server that you specify. Both the Mount and Unmount buttons are disabled for Hyper-V backups.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

In the OnCommand console Backups tab, deleting a mirror source backup copy prevents you from mounting its partner mirror destination backup copy. For a Mirror-generated destination backup copy to be mountable, its associated mirror source backup copy must still exist on the source node.

Steps

1. Click the View menu, then click the Backups option.

2. In the Backups tab, select an unmounted backup that you want to mount.

3. Click Mount.

You cannot mount Hyper-V backups using this button.

4. In the Mount Backup dialog box, select from the drop-down list the name of the ESX server to which you want to mount the backup.

You can mount only one backup at a time, and you cannot mount an already mounted backup.

5. Click Mount.

A dialog box appears with a link to the mount job; when you click the link, the Jobs tab appears.

After you finish

You can monitor the status of your mount and unmount jobs in the Jobs tab.

Related references

Administrator roles and capabilities on page 506

Guidelines for mounting or unmounting backups in a VMware environment on page 271

Unmounting backups in a VMware environment from the Backups tab

After you are done using a mounted backup for verification or to restore a virtual machine to an alternate location, you can unmount the mounted backup from the ESX server that it was mounted to. When you unmount a backup, all the datastores in that backup are unmounted and can no longer be seen from the ESX server that you specify. Both the Mount and Unmount buttons are disabled for Hyper-V backups.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

If there are virtual objects in use from the previously mounted datastores of a backup, the unmount operation fails. You must manually clean up the backup prior to mounting the backup again because its state reverts to not mounted.

If all the datastores of the backup are in use, the unmount operation fails but this backup's state changes to mounted. You can unmount the backup after determining that the datastores are not in use.

Steps

1. Click the View menu, then click the Backups option.

2. In the Backups tab, select a mounted backup to unmount.

3. Click Unmount.

4. At the confirmation prompt, click Yes.

A dialog box opens with a link to the unmount job; when you click the link, the Jobs tab appears.

After you finish

If the ESX server becomes inactive or restarts during an unmount operation, the job is terminated, the mount state remains mounted, and the backup stays mounted on the ESX server.

You can monitor the status of your mount and unmount jobs in the Jobs tab.

Related references

Administrator roles and capabilities on page 506

Guidelines for mounting or unmounting backups in a VMware environment on page 271

Page descriptions

VMware

VMware Virtual Centers view

The VMware Virtual Centers view lists the discovered virtual centers. You can access this view by clicking View > Server > VMware > Virtual Centers.

From the VMware Virtual Centers view, you can add a virtual center to a group, and view objects that are related to each virtual center.

• Breadcrumb trail on page 71
• Command buttons on page 72
• Virtual centers list on page 72
• Related Objects pane on page 72

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.
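The breadcrumb trail behaves like a simple navigation stack: drilling down appends a crumb, and clicking an earlier crumb truncates the trail back to that point. The following Python sketch only illustrates that behavior; the class and method names are invented and are not part of the OnCommand console.

```python
class BreadcrumbTrail:
    """Illustrative model of the OnCommand console breadcrumb trail."""

    def __init__(self):
        self.crumbs = []  # ordered list of visited object-list names

    def drill_down(self, object_list):
        # Double-clicking an item adds another "breadcrumb" to the "trail".
        self.crumbs.append(object_list)

    def click(self, object_list):
        # Clicking an earlier breadcrumb revisits that list and discards
        # everything that was navigated to after it.
        index = self.crumbs.index(object_list)
        self.crumbs = self.crumbs[:index + 1]
        return object_list


trail = BreadcrumbTrail()
for view in ("Virtual Centers", "Datacenters", "ESX Servers"):
    trail.drill_down(view)
trail.click("Datacenters")
print(trail.crumbs)  # ['Virtual Centers', 'Datacenters']
```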

Command buttons

Add to Group Opens the Add to Group dialog box that enables you to add the selected virtual center to the destination group.

Refresh Refreshes the list of virtual centers.

Virtual centers list

Displays information about the virtual centers that have been discovered by the DataFabric Manager server. You can double-click a virtual center to display the objects in that virtual center.

Virtual Center Name of the virtual center.

Related Objects pane

Displays the storage controllers and vFiler units that are related to the selected virtual center.

Related references

Window layout customization on page 16

VMware Datacenters view

The VMware Datacenters view lists the discovered datacenters. You can access this window by clicking View > Server > VMware > Datacenters.

From the VMware Datacenters view, you can start on-demand backup jobs, add a datacenter to a group or dataset, and view objects that are related to each datacenter.

• Breadcrumb trail on page 72
• Command buttons on page 72
• Datacenters list on page 73
• Related Objects pane on page 73

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command buttons

Back Up Opens a list of backup commands.

Using New Dataset Opens the Create Dataset dialog box to create a new dataset and add the selected datacenter to that dataset.

Using Existing Dataset Opens the Add to Existing Dataset dialog box to add the selected datacenter to an existing dataset.

Add to Group Opens the Add to Group dialog box that enables you to add the selected datacenter to the destination group.

Refresh Refreshes the list of datacenters.

Datacenters list

Displays information about the datacenters that have been discovered by the DataFabric Manager server. You can double-click a datacenter to display the objects in that datacenter.

Datacenter Name of the datacenter.

Protected Indicates whether the datacenter is protected. Valid values are "Yes" and "No."

A datacenter is protected if it is a member of a dataset that has a local policy assigned to it.

Virtual Center Name of the virtual center with which the datacenter is associated.

Dataset The names of the datasets of which the datacenter is a member.

Related Objects pane

Displays the storage controllers, vFiler units, and datasets that are related to the selected datacenter.

Related references

Window layout customization on page 16

VMware ESX Servers view

The VMware ESX Servers view lists the discovered VMware ESX servers. You can access this window by clicking View > Server > VMware > ESX Servers.

From the VMware ESX Servers view, you can add a server to a group, and view objects that are related to each server.

• Breadcrumb trail on page 73
• Command buttons on page 74
• ESX servers list on page 74
• Related Objects pane on page 74

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command buttons

Add to Group Opens the Add to Group dialog box that enables you to add the selected server to the destination group.

Refresh Refreshes the list of ESX servers.

ESX servers list

Displays information about the ESX servers that have been discovered by the DataFabric Manager server. You can double-click an ESX server to display the virtual machines that are mapped to that ESX server.

ESX Server Name of the VMware ESX server.

Datacenter Name of the datacenter with which the ESX server is associated.

Virtual Center Name of the virtual center with which the datacenter is associated.

Related Objects pane

Displays the storage controllers and vFiler units that are related to the selected ESX server.

Related references

Window layout customization on page 16

VMware VMs view

The VMware VMs view lists the discovered VMware virtual machines. You can access this window by clicking View > Server > VMware > VMware VMs.

From the VMware VMs view, you can start on-demand backup and restore jobs, add virtual machines to groups and datasets, mount or unmount virtual machines, and view objects that are related to each virtual machine.

• Breadcrumb trail on page 75
• Command buttons on page 75
• VMware VMs list on page 75
• VDisks tab on page 76
• Datasets tab on page 76
• Related Objects pane on page 77

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command buttons

Back Up Opens a list of backup commands.

Using New Dataset Opens the Create Dataset dialog box to create a new dataset and add the selected VMware virtual machine to that dataset.

Using Existing Dataset Opens the Add to Existing Dataset dialog box to add the selected VMware virtual machine to an existing dataset.

Back Up Now Opens the Back Up Now dialog box to start a backup job.

Restore Starts the Restore wizard to begin a restore job.

Mount Enables you to mount a selected backup to an ESX server if you want to verify its contents before restoring it.

Unmount Enables you to unmount a backup after you mount it on an ESX server and verify its contents.

Add to Group Opens the Add to Group dialog box that enables you to add the selected virtual machine to the destination group.

Refresh Refreshes the list of VMware virtual machines.

VMware VMs list

Displays information about the VMware virtual machines that have been discovered by the DataFabric Manager server.

Virtual Machine Name of the VMware virtual machine.

Protected Indicates whether the data in the virtual machine is protected. Valid values are "Yes" and "No."

A virtual machine is protected if any of the following conditions are true:

• The virtual machine is a member of a dataset that has a local policy assigned to it.

• The virtual machine is in a datastore that is a member of a dataset that has a local policy assigned to it.

• The virtual machine is in a datacenter that is a member of a dataset that has a local policy assigned to it.
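The conditions above amount to a simple boolean check: a virtual machine counts as protected if the machine itself, its datastore, or its datacenter belongs to a dataset with a local policy. A minimal Python sketch of that logic follows; the dataset data model here is invented for illustration and is not the DataFabric Manager schema.

```python
def has_local_policy(datasets, member):
    """Return True if any dataset containing `member` has a local policy assigned."""
    return any(member in ds["members"] and ds["local_policy"] is not None
               for ds in datasets)


def vm_protected(datasets, vm, datastore, datacenter):
    # A VM is protected if the VM itself, its datastore, or its
    # datacenter is a member of a dataset with a local policy assigned.
    return any(has_local_policy(datasets, obj)
               for obj in (vm, datastore, datacenter))


# The VM is protected here because its datastore is in a dataset with a policy.
datasets = [{"members": {"ds1"}, "local_policy": "Back up daily"}]
print(vm_protected(datasets, "vm1", "ds1", "dc1"))  # True
```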

ESX Server Name of the ESX server in which the virtual machine runs.

Datacenter Name of the datacenter with which the VM is associated.

Virtual Center Name of the virtual center with which the datacenter is associated.

State The state of the virtual machine.

Powered Off The virtual machine is down.

Powered On The virtual machine is up.

Suspended The operating system for the virtual machine is down.

DNS Name The DNS name for the virtual machine.

IP Address The IP address of the virtual machine. One or more IP addresses might be listed for a virtual machine.

Dataset(s) The names of the datasets of which the virtual machine is a member.

VDisks tab

Displays detailed information about the VDisks for the selected virtual machine.

VDisk The name of the VDisk.

Disk Type The disk type of the VDisk. Possible values are "Raw Device Mapping" or "Regular."

Datastore The datastore to which the VDisk is mapped.

Datasets tab

Displays detailed information about the dataset of which the selected virtual machine is a member.

Dataset The names of the datasets of which the virtual machine is a member.

Storage Service The storage service that is assigned to the dataset.

Local Policy The local policy that is assigned to the dataset. This policy might be the default policy associated with the dataset or it might be a local policy assigned by an administrator as part of a dataset modification.

Related Objects pane

Displays the datastores, ESX servers, storage controllers, vFiler units, volumes, LUNs, datasets, and backups that are related to the selected VMware virtual machine.

Related references

Window layout customization on page 16

VMware Datastores view

The VMware Datastores view lists the discovered datastores. You can access this window by clicking View > Server > VMware > Datastores.

From the VMware Datastores view, you can start on-demand backup and restore jobs, add datastores to groups and datasets, mount or unmount datastores, and view objects that are related to each datastore.

• Breadcrumb trail on page 77
• Command buttons on page 77
• Datastores list on page 78
• Storage Details tab on page 78
• Related Objects pane on page 80

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command buttons

Back Up Opens a list of backup commands.

Using New Dataset Opens the Create Dataset dialog box to create a new dataset and add the selected datastore to that dataset.

Using Existing Dataset Opens the Add to Existing Dataset dialog box to add the selected datastore to an existing dataset.

Back Up Now Opens the Back Up Now dialog box to start a backup job.

Restore Starts the Restore wizard to begin a restore job.

Mount Enables you to mount a selected backup to an ESX server if you want to verify its contents before restoring it.

Unmount Enables you to unmount a backup after you mount it on an ESX server and verify its contents.

Add to Group Opens the Add to Group dialog box that enables you to add the selected datastore to the destination group.

Refresh Refreshes the list of datastores.

Datastores list

Displays information about the datastores that have been discovered by the DataFabric Manager server. You can double-click a datastore to display the objects in that datastore.

Datastore The name of the datastore.

Protected Indicates whether the data in the datastore is protected. Valid values are "Yes" and "No."

A datastore is protected if any of the following conditions are true:

• The datastore is a member of a dataset that has a local policy assigned to it.

• The datastore is in a datacenter that is a member of a dataset that has a local policy assigned to it.

Type The type of datastore. Valid values are NFS and VMFS.

Datacenter The name of the datacenter with which the datastore is associated.

Virtual Center The name of the virtual center with which the datastore is associated.

Capacity (GB) The configured amount of space in the datastore.

Used Capacity (GB) The amount of space in the datastore that is used.

Dataset The names of the datasets of which the datastore is a member.

Hosted on Data ONTAP Indicates whether the datastore is hosted on Data ONTAP. Valid values are "Yes" and "No."

Storage Details tab

Displays detailed information about the selected datastore.

Details for NFS type datastores:

Overview Export Path The path used by the datastore to export data.

Volume Thin Provisioning Enabled Indicates whether the datastore is configured for thin provisioning. Valid values are "true" if the feature is configured and "false" if not.

Dedupe Indicates whether the datastore is configured for deduplication. Valid values are "true" if the feature is configured and "false" if not.

Autosize Indicates whether the datastore is configured for automatic storage sizing based on usage. Valid values are "true" if the feature is configured and "false" if not.

Capacity (GB) Datastore Usage The percentage of total capacity of the datastore that is used.

Volume Usage The percentage of total capacity of the volume that is used.

Space Savings The amount of space that was saved on the volume because deduplication is enabled.

Aggregate Usage The percentage of total capacity of the aggregate that is in use.

Details for VMFS type datastores:

LUN Paths A list of the LUNs that are mapped to the datastore.

Overview IGroup For SAN datastores, the VMware IGroup with which the datastore is associated. This field is not displayed for NAS datastores.

LUN Space Reservation Indicates whether the LUN is configured for space reservation. Valid values are "true" if the feature is configured and "false" if not.

Volume Thin Provisioning Enabled Indicates whether the volume is configured for thin provisioning. Valid values are "true" if the feature is configured and "false" if not.

Dedupe Indicates whether the volume is configured for deduplication. Valid values are "true" if the feature is configured and "false" if not.

Autosize Indicates whether the volume is configured for automatic storage sizing based on usage. Valid values are "true" if the feature is configured and "false" if not.

Capacity (GB) Datastore Usage The percentage of total capacity of the datastore that is in use.

LUN Usage The percentage of total capacity of the LUN that is in use.

Volume Usage The percentage of total capacity of the volume that is in use.

Space Savings The amount of space that was saved on the volume because deduplication is enabled.

Aggregate Usage The percentage of total capacity of the aggregate that is in use.
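The usage columns above all report the same ratio at different levels (datastore, LUN, volume, aggregate): used capacity divided by total capacity, expressed as a percentage. The helper below is a worked illustration of that arithmetic; the function name is invented and is not part of any NetApp API.

```python
def usage_percent(used_gb, capacity_gb):
    """Percentage of total capacity in use, as shown in the Capacity (GB) columns."""
    if capacity_gb <= 0:
        raise ValueError("capacity must be positive")
    return round(100.0 * used_gb / capacity_gb, 1)


# A 500 GB datastore with 125 GB used reports 25.0% usage.
print(usage_percent(125, 500))  # 25.0
```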

Related Objects pane

Displays the virtual machines, ESX servers, storage controllers, vFiler units, volumes, LUNs, datasets, and backups that are related to the selected datastore.

Related references

Window layout customization on page 16

Hyper-V

Hyper-V Servers view

The Hyper-V Servers view lists the discovered Hyper-V servers. You can access this window by clicking View > Server > Hyper-V > Hyper-V Servers.

From the Hyper-V Servers view, you can add a server to a group, and view objects that are related to each server.

• Breadcrumb trail on page 80
• Command buttons on page 80
• Hyper-V Servers list on page 81
• Related Objects pane on page 81

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command buttons

Add to Group Opens the Add to Group dialog box that enables you to add the selected server to the destination group.

Refresh Refreshes the list of Hyper-V servers.

Hyper-V Servers list

Displays information about the Hyper-V servers that have been discovered by the DataFabric Manager server. You can double-click a server to display the children of that server.

Hyper-V Server The name of the Hyper-V server.

Domain Name The domain name that is used by the Hyper-V server.

Related Objects pane

Displays the storage controllers and vFiler units that are related to the selected Hyper-V server.

Related references

Window layout customization on page 16

Hyper-V VMs view

The Hyper-V VMs view lists the discovered Hyper-V virtual machines. You can access this window by clicking View > Server > Hyper-V > Hyper-V VMs.

From the Hyper-V VMs view, you can start on-demand backup and restore jobs, add virtual machines to groups and datasets, and view objects that are related to each virtual machine.

• Breadcrumb trail on page 81
• Command buttons on page 81
• Hyper-V VMs list on page 82
• VDisks tab on page 82
• Datasets tab on page 82
• Related Objects pane on page 83

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command buttons

Back Up Opens a list of backup commands.

Using New Dataset Opens the Create Dataset dialog box to create a new dataset and add the selected Hyper-V virtual machine to that dataset.

Using Existing Dataset Opens the Add to Existing Dataset dialog box to add the selected virtual machine to an existing dataset.

Back Up Now Opens the Back Up Now dialog box to start a backup job.

Restore Starts the Restore wizard to begin a restore job.

Add to Group Opens the Add to Group dialog box that enables you to add the selected virtual machine to the destination group.

Refresh Refreshes the list of Hyper-V virtual machines.

Hyper-V VMs list

Displays information about the Hyper-V virtual machines that have been discovered by the DataFabric Manager server.

Virtual Machine The name of the Hyper-V virtual machine.

Protected Indicates whether the data in the virtual machine is protected. Valid values are "Yes" and "No."

A virtual machine is protected if it is a member of a dataset that has a local policy assigned to it.

Hypervisor The name of the server that manages the virtual machine.

State The state of the virtual machine.

DNS Name The DNS name for the virtual machine.

Dataset(s) The names of the datasets of which the virtual machine is a member.

VDisks tab

Displays detailed information about the VDisks for the selected virtual machine.

VDisk The name of the VDisk for the selected virtual machine.

VHD Type The virtual hard disk type for the selected virtual machine. Possible values are "Boot Disk," "Cluster Shared Volume," "Passthrough," or "Regular."

Datasets tab

Displays detailed information about the dataset of which the selected virtual machine is a member.

Dataset The name of the dataset of which the Hyper-V virtual machine is a member.

Storage Service The storage service that is assigned to the dataset.

Local Policy The local policy that is assigned to the dataset. This policy might be the default policy associated with the storage service or it might be a local policy assigned by an administrator as part of a storage service modification.

Related Objects pane

Displays the Hyper-V servers, storage controllers, vFiler units, volumes, LUNs, datasets, and backups that are related to the selected Hyper-V virtual machine.

Related references

Window layout customization on page 16

Storage

Physical storage

Understanding physical storage

What physical storage objects are

You can monitor and manage physical storage objects such as clusters, storage systems, aggregates, and disks by using the OnCommand console.

You can view detailed information about the physical storage objects that are discovered and monitored by clicking the appropriate view option.

Cluster A group of connected storage systems that share a global namespace that you can manage as a single virtual server or multiple virtual servers, providing performance, reliability, and scalability benefits. The Clusters view displays all the clusters that are monitored by the OnCommand console and all of the controllers that are part of the cluster.

Storage System Also known as a storage controller, a storage system is a hardware device running Data ONTAP that receives data from and sends data to native disk shelves, third-party storage, or both. The Storage Controllers view displays all the storage systems that are discovered and monitored by the OnCommand console.

Aggregate An aggregate contains a defined amount of RAID-protected physical storage that can be expanded dynamically at any time.

To support the differing security, backup, performance, and data sharing needs of your users, you should group the physical data storage resources on your storage system into one or more aggregates. These aggregates provide storage to the volume or volumes that they contain. The Aggregates view displays all the aggregates belonging to the storage systems and clusters that are discovered and monitored by the OnCommand console.

Disk The basic unit of physical storage for a Data ONTAP system. Multiple disks are contained by a disk shelf. A Data ONTAP node can accommodate multiple disk shelves; the number and capacity vary according to the node's specifications. Disk shelves provide the physical storage on which logical objects such as aggregates and volumes are located. The Disks view displays all the disks that are monitored by the OnCommand console.

What cluster-related objects are

OnCommand enables you to include cluster-related objects, such as controllers and virtual servers, in a group. This enables you to easily monitor cluster-related objects that belong to a particular group.

The cluster-related objects are as follows:

Virtual server A single file-system namespace. A virtual server has separate network access and provides the same flexibility and control as a dedicated node. Each virtual server has its own user domain and security domain. It can span multiple physical nodes.

A virtual server has a root volume that constitutes the top level of the namespace hierarchy; additional volumes are mounted to the root volume to extend the namespace. A virtual server is associated with one or more logical interfaces (LIFs) through which clients access the data on the storage server. Clients can access the virtual server from any node in the cluster, but only through the logical interfaces that are associated with the virtual server.

Namespace Every virtual server has a namespace associated with it. All the volumes associated with a virtual server are accessed under the virtual server's namespace. A namespace provides a context for the interpretation of the junctions that link together a collection of volumes.

Junction A junction points from a directory in one volume to the root directory of another volume. Junctions are transparent to NFS and CIFS clients.
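Because junctions stitch volumes into a single namespace tree, resolving a client path conceptually means finding the longest junction prefix and mapping the remainder into the volume mounted there. The toy Python sketch below only illustrates that idea; the paths, volume names, and function are invented and do not reflect Data ONTAP internals.

```python
def resolve(junctions, path):
    """Map a namespace path to (volume, path-within-volume).

    `junctions` maps a junction path to the volume mounted there;
    '/' is the virtual server's root volume.
    """
    best = "/"
    for jpath in junctions:
        # A junction matches if the path is the junction itself or lies below it.
        if path == jpath or path.startswith(jpath.rstrip("/") + "/"):
            if len(jpath) > len(best):
                best = jpath  # keep the longest (deepest) matching junction
    volume = junctions[best]
    remainder = path[len(best):].lstrip("/") or "."
    return volume, remainder


junctions = {"/": "vs1_root", "/projects": "vol_projects"}
print(resolve(junctions, "/projects/report.txt"))  # ('vol_projects', 'report.txt')
```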

Logical interface An IP address with associated characteristics, such as a home port, a list of ports to fail over to, a firewall policy, a routing group, and so on. Each logical interface is associated with a maximum of one virtual server to provide client access to it.

Cluster A group of connected storage systems that share a global namespace and that you can manage as a single virtual server or multiple virtual servers, providing performance, reliability, and scalability benefits.

Storage controller The component of a storage system that runs the Data ONTAP operating system and controls its disk subsystem. Storage controllers are also sometimes called controllers, storage appliances, appliances, storage engines, heads, CPU modules, or controller modules.

Ports A port represents a physical Ethernet connection. In a Data ONTAP cluster, ports are classified into the following three types:

• Data ports
Provide data access to NFS and CIFS clients.

• Cluster ports
Provide communication paths for cluster nodes.

• Management ports
Provide data access to the Data ONTAP management utility.

Data LIF A logical network interface mainly used for data transfers and operations. A data LIF is associated with a node or virtual server in a Data ONTAP cluster.

Node management LIF A logical network interface mainly used for node management and maintenance operations. A node management LIF is associated with a node and does not fail over to a different node.

Cluster management LIF A logical network interface used for cluster management operations. A cluster management LIF is associated with a cluster and can fail over to a different node.

Interface group A single virtual network interface that is created by grouping together multiple physical interfaces.

What deleted objects are

Deleted objects are the storage objects that you have deleted from the OnCommand console. When you delete a storage object, it is not removed from the OnCommand console database; it is only deleted from the OnCommand console display and is no longer monitored by the OnCommand console.

If you delete an object from the database, DataFabric Manager server also deletes all the child objects it contains. For example, if you delete a storage system, all volumes and qtrees in the storage system are deleted. Similarly, if a volume is deleted, all the qtrees in the volume are deleted. However, if you delete a SnapMirror object, only the SnapMirror destination object (volume or qtree) is deleted from the database.
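The cascade described here (deleting a storage system also removes its volumes, and deleting a volume also removes its qtrees) can be sketched as a recursive delete over a parent-child tree. The sketch below is illustrative only; the object names are invented, and the SnapMirror exception noted above is not modeled.

```python
def delete_object(children, obj, deleted=None):
    """Delete `obj` and, recursively, every child object it contains."""
    if deleted is None:
        deleted = []
    for child in children.get(obj, []):
        delete_object(children, child, deleted)  # children go first
    deleted.append(obj)
    return deleted


# storage system -> volumes -> qtrees
children = {
    "system1": ["vol1", "vol2"],
    "vol1": ["qtree1", "qtree2"],
}
print(delete_object(children, "system1"))
# ['qtree1', 'qtree2', 'vol1', 'vol2', 'system1']
```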

What happens when storage objects are deleted

With the OnCommand console, you can stop monitoring a storage object (aggregate, volume, or qtree) by deleting it from the Global group. When you delete an object, the DataFabric Manager server stops collecting and reporting data about it. Data collection and reporting is resumed only when the object is added back to the OnCommand console database.

Note: When you delete a storage object from any group other than Global, the object is deleted only from that group; DataFabric Manager server continues to collect and report data about it. You must delete the object from the Global group if you want the DataFabric Manager server to stop monitoring it.

Configuring physical storage

Adding clusters

You can add a new cluster and monitor it by using the Storage tab.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Clusters view.

3. Click Add.

The Clusters, All page is displayed in the Operations Manager console.

4. In the (New Storage System) text box, type the fully qualified domain name of the cluster you want to add, then click Add.

The cluster is added and displayed in the Global list of clusters.

Note: Identifying the cluster and determining its status might take a few minutes. The DataFabric Manager server displays an Unknown status until it determines the identity and status of the cluster.

Related references

Administrator roles and capabilities on page 506

Adding storage controllers

You can add a new storage controller and monitor it by using the Storage tab.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Storage Controllers view.

3. Click Add.

The Storage Systems, All page is displayed in the Operations Manager console.

4. In the (New Storage System) text box, type the fully qualified domain name of the storage controller you want to add, then click Add.

The storage controller is added and displayed in the Global list of storage systems.

Note: Identifying the controller and determining its status might take a few minutes. The DataFabric Manager server displays an Unknown status until it determines the identity and status of the controller.

Related references

Administrator roles and capabilities on page 506

Managing physical storage

Editing storage controller settings

You can edit the following controller settings of the storage system from the Edit Storage Controller Settings page: the primary IP address, remote platform management IP address, console terminal server address, login details, preferred SNMP version, owner details, resource tag, maximum active data transfers, and storage controller thresholds.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.


Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Storage Controllers view.

3. Select the storage controller you want to modify.

4. Click Edit.

The Edit Storage Controller Settings page is displayed in the Operations Manager console.

5. Modify the properties of the storage controller.

6. Click Update.

Changes to the settings are updated in the DataFabric Manager server.

7. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

Related references

Administrator roles and capabilities on page 506

Editing cluster settings

You can edit the following cluster settings from the Edit Cluster Settings page: the primary IP address, owner e-mail address, owner name, resource tag, monitoring options, management options, and threshold values.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Clusters view.

3. Select the cluster you want to modify.

4. Click Edit.


The Edit Cluster Settings page is displayed in the Operations Manager console.

5. Edit the cluster settings.

6. Click Update.

Changes to the settings are updated in the DataFabric Manager server.

7. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

Related references

Administrator roles and capabilities on page 506

Editing aggregate settings

You can edit the following settings for an aggregate from the Edit Aggregate Settings page: owner e-mail address, owner name, resource tag, threshold values, and alerts. When you edit the threshold settings of a specific aggregate, the edited settings override the global settings.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Aggregates view.

3. Select the aggregate you want to modify.

4. Click Edit.

The Edit Aggregate Settings page is displayed in the Operations Manager console.

5. Modify the storage capacity threshold settings for the selected aggregate.

6. Click Update.

Changes to the settings are updated in the DataFabric Manager server.

7. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.


Related references

Administrator roles and capabilities on page 506

Adding storage systems to a group

You can add storage systems to an existing group. You can also create a new group and add the storage systems to it. This enables you to easily monitor all the storage systems that belong to a group.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Storage Controllers view.

3. Select the storage systems you want to add to the group, and click Add to Group.

4. In the Add to Group dialog box, perform the appropriate action:

If you want to... Then...

Add the storage systems to an existing group Select the appropriate group.

Add the storage systems to a new group Type the name of the new group in the New Group field.

5. Click OK.

Result

The storage systems are added to the group.

Related references

Administrator roles and capabilities on page 506

Adding clusters to a group

You can add clusters to an existing group. You can also create a new group and add the clusters to it.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.


Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Clusters view.

3. Select the clusters you want to add to the group, and click Add to Group.

4. In the Add to Group dialog box, perform the appropriate action:

If you want to... Then...

Add the clusters to an existing group Select the appropriate group.

Add the clusters to a new group Type the name of the new group in the New Group field.

5. Click OK.

Result

The clusters are added to the group.

Related references

Administrator roles and capabilities on page 506

Adding aggregates to a group

You can add aggregates to an existing group. You can also create a new group and add aggregates to it.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Aggregates view.

3. Select the aggregates you want to add to the group, and click Add to Group.

4. In the Add to Group dialog box, perform the appropriate action:

If you want to... Then...

Add the aggregates to an existing group Select the appropriate group.

Add the aggregates to a new group Type the name of the new group in the New Group field.

5. Click OK.


Result

The aggregates are added to the group.

Related references

Administrator roles and capabilities on page 506

Monitoring physical storage

Overview of the monitoring process

Monitoring involves several processes. The DataFabric Manager server discovers the objects available on your network, and then periodically monitors the objects and data that it collects from the discovered objects, such as CPU usage, interface statistics, free disk space, qtree usage, and storage object ID.

The DataFabric Manager server generates events when it discovers a storage system, when the status is abnormal, or when a predefined threshold is breached. You can configure the DataFabric Manager server to send a notification to a recipient when an event triggers an alarm.
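The monitor, event, and alarm cycle described above can be sketched as follows. This is a conceptual illustration only, not DataFabric Manager code; the object layout, metric names, and threshold values are invented for the example.

```python
# Conceptual sketch of the monitoring cycle: poll discovered objects,
# generate an event when a threshold is breached, and pass it to a
# notification callback (the "alarm"). Not a real NetApp API.
def poll(storage_objects, thresholds, notify):
    for obj in storage_objects:
        sample = obj["sample"]  # collected data, e.g. CPU usage, used space
        for metric, limit in thresholds.items():
            if sample.get(metric, 0) >= limit:
                # A breached threshold generates an event, which can
                # trigger an alarm notification to a recipient.
                notify(f"{obj['name']}: {metric} threshold breached")

alerts = []
poll([{"name": "aggr1", "sample": {"used_pct": 92}}],
     {"used_pct": 90},
     alerts.append)
print(alerts)  # -> ['aggr1: used_pct threshold breached']
```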

Aggregate capacity thresholds and their events

You can configure capacity thresholds for aggregates and events for these thresholds from DataFabric Manager server. You can set alarms to monitor the capacity and committed space of an aggregate. You can also take corrective actions based on the event generated.

You can configure alarms to send notification whenever an event related to the capacity of an aggregate occurs. For the Aggregate Full threshold, you can also configure an alarm to send notification only when the condition persists over a specified time.

By default, if you have configured an alarm to alert you to an event, the DataFabric Manager server issues the alarm only once per event. You can configure the alarm to repeat until you receive an acknowledgment.

Note: If you want to set an alarm for a specific aggregate, you must create a group with that aggregate as the only member.

You can set the following aggregate capacity thresholds:

Aggregate Full (%)

Description: Specifies the percentage at which an aggregate is full.

Note: To reduce the number of Aggregate Full Threshold events generated, you can set an Aggregate Full Threshold Interval. This causes DataFabric Manager server to generate an Aggregate Full event only if the condition persists for the specified time.

Default value: 90 percent

Event generated: Aggregate Full


Event severity: Error

Corrective action

Perform one or more of the following actions:

• To free disk space, ask your users to delete files that are no longer needed from volumes contained in the aggregate that generated the event.

• Add one or more disks to the aggregate that generated the event.

Note: After you add a disk to an aggregate, you cannot remove it without first destroying all flexible volumes present in the aggregate to which the disk belongs. You must destroy the aggregate after all the flexible volumes are removed from the aggregate.

• Temporarily reduce the Snapshot reserve. By default, the reserve is 20 percent of disk space. If the reserve is not in use, reducing the reserve can free disk space, giving you more time to add a disk. There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. It is, therefore, important to maintain a large enough reserve for Snapshot copies so that the active file system always has space available to create new files or modify existing ones. For more information about the Snapshot reserve, see the Data ONTAP Data Protection Online Backup and Recovery Guide.

Aggregate Nearly Full (%)

Description: Specifies the percentage at which an aggregate is nearly full.

Default value: 80 percent

The value for this threshold must be lower than the value for Aggregate Full Threshold for DataFabric Manager server to generate meaningful events.

Event generated: Aggregate Almost Full

Event severity: Warning

Corrective action

Perform one or more of the actions mentioned in Aggregate Full.

Aggregate Overcommitted (%)

Description: Specifies the percentage at which an aggregate is overcommitted.

Default value: 100 percent

Event generated: Aggregate Overcommitted

Event severity: Error

Corrective action

You should perform one or more of the following actions:


• Create new free blocks in the aggregate by adding one or more disks to the aggregate that generated the event.

Note: You must add disks with caution. After you add a disk to an aggregate, you cannot remove it without first destroying all flexible volumes present in the aggregate to which the disk belongs. You must destroy the aggregate after all the flexible volumes are destroyed.

• Temporarily free some already occupied blocks in the aggregate by taking unused flexible volumes offline.

Note: When you take a flexible volume offline, it returns any space it uses to the aggregate. However, when you bring the flexible volume online again, it requires the space again.

• Permanently free some already occupied blocks in the aggregate by deleting unnecessary files.

Aggregate Nearly Overcommitted (%)

Description: Specifies the percentage at which an aggregate is nearly overcommitted.

Default value: 95 percent

The value for this threshold must be lower than the value for Aggregate Overcommitted Threshold for DataFabric Manager server to generate meaningful events.

Event generated: Aggregate Almost Overcommitted

Event severity: Warning

Corrective action

Perform one or more of the actions provided in Aggregate Overcommitted.

Aggregate Snapshot Reserve Nearly Full Threshold (%)

Description: Specifies the percentage of the Snapshot copy reserve on an aggregate that you can use before the system generates the Aggregate Snapshots Nearly Full event.

Default value: 80 percent

Event generated: Aggregate Snapshot Reserve Almost Full

Event severity: Warning

Corrective action: None

There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. If you disable the aggregate Snapshot autodelete option, it is important to maintain a large enough reserve.

See the Operations Manager Help for instructions on how to identify Snapshot copies you can delete. For more information about the Snapshot reserve, see the Data ONTAP Data Protection Online Backup and Recovery Guide.

Aggregate Snapshot Reserve Full Threshold (%)

Description: Specifies the percentage of the Snapshot copy reserve on an aggregate that you can use before the system generates the Aggregate Snapshots Full event.

Default value: 90 percent

Event generated: Aggregate Snapshot Reserve Full

Event severity: Warning

Corrective action: None

There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them.

Note: A newly created traditional volume tightly couples with its containing aggregate so that the capacity of the aggregate determines the capacity of the new traditional volume. Therefore, you should synchronize the capacity thresholds of traditional volumes with the thresholds of their containing aggregates.
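The aggregate capacity thresholds in this section can be summarized in a small sketch. The default percentages (80, 90, 95, and 100) come from the threshold descriptions above; the function itself is purely illustrative and is not part of any NetApp API.

```python
# Illustrative sketch only: the DataFabric Manager server evaluates these
# thresholds internally. Defaults follow the descriptions above:
# Nearly Full 80%, Full 90%, Nearly Overcommitted 95%, Overcommitted 100%.
def classify_aggregate(used_pct, committed_pct,
                       nearly_full=80, full=90,
                       nearly_over=95, over=100):
    """Return the capacity events an aggregate would raise, with severity."""
    events = []
    if used_pct >= full:
        events.append(("Aggregate Full", "Error"))
    elif used_pct >= nearly_full:
        events.append(("Aggregate Almost Full", "Warning"))
    if committed_pct >= over:
        events.append(("Aggregate Overcommitted", "Error"))
    elif committed_pct >= nearly_over:
        events.append(("Aggregate Almost Overcommitted", "Warning"))
    return events

print(classify_aggregate(85, 96))
# -> [('Aggregate Almost Full', 'Warning'),
#     ('Aggregate Almost Overcommitted', 'Warning')]
```

Note that the real server can additionally suppress an Aggregate Full event until the condition has persisted for the configured Aggregate Full Threshold Interval, which this sketch does not model.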

Related information

Data ONTAP Data Protection Online Backup and Recovery Guide - now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Viewing the cluster inventory

You can use the Clusters view to monitor your inventory of clusters and view information about related storage objects, capacity graphs, and cluster hierarchy details.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, click Clusters.

Result

The list of clusters is displayed.


Related references

Administrator roles and capabilities on page 506

Viewing the storage controller inventory

You can use the Storage Controllers view to monitor your inventory of storage controllers and view information about related storage objects and capacity graphs.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, click Storage Controllers.

Result

The list of storage controllers is displayed.

Related references

Administrator roles and capabilities on page 506

Viewing the aggregate inventory

You can use the Aggregates view to monitor your inventory of aggregates and view information about related storage objects, space usage details, and capacity graphs.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, click Aggregates.

Result

The list of aggregates is displayed.


Related references

Administrator roles and capabilities on page 506

Viewing the disk inventory

You can monitor your inventory of disks and view information about the disk properties and related storage objects from the Disks view.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, click Disks.

Result

The list of disks is displayed.

Related references

Administrator roles and capabilities on page 506

Viewing the deleted object inventory

You can use the Deleted Objects view to view a deleted object's properties, such as the object type, and gather information about the time at which the object was deleted, the user who deleted it, and whether the parent object was deleted. You can also retrieve the deleted objects by using this view.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, click Deleted Objects.

Result

The list of deleted objects is displayed.


Related references

Administrator roles and capabilities on page 506

Page descriptions

Clusters view

The Clusters view displays detailed information about the clusters you are monitoring, as well as their related objects, and also enables you to perform tasks such as editing the cluster settings, grouping the clusters, and refreshing the monitoring samples.

• Breadcrumb trail on page 100
• Command buttons on page 100
• List view on page 101
• Overview tab on page 101
• Graph tab on page 102
• Cluster Hierarchy tab on page 102
• Related Objects pane on page 102

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command buttons

The command buttons enable you to perform the following tasks for a selected cluster:

Add Launches the Storage Systems, All page in the Operations Manager console. You can add clusters from this page.

Edit Launches the Edit Cluster Settings page in the Operations Manager console. You can modify the cluster settings from this page.

Delete Deletes the selected cluster. Deleting a cluster does not also delete it from the OnCommand console database, but the deleted cluster is no longer monitored.

Add to Group Displays the Add to Group dialog box, which enables you to add the selected cluster to the destination group.


Refresh Monitoring Samples Refreshes the database sample of the selected cluster and enables you to view the updated details.

More Actions • View Events: Displays the events associated with the cluster in the Events tab. You can sort the information based on the event severity, source ID, date of event trigger, state, and so on.

Refresh Refreshes the list of clusters.

Note: You can modify cluster settings, delete a cluster, add clusters to a group, refresh monitoring samples, and view events for a cluster by right-clicking the selected cluster.

List view

The List view displays, in tabular format, the properties of all the discovered clusters. You can customize your view of the data by clicking the filters for the columns.

You can double-click a cluster to display its child objects. The breadcrumb trail is modified to display the selected cluster.

ID Displays the cluster ID. By default, this column is hidden.

Name Displays the name of the cluster.

Serial Number Displays the serial number of the cluster.

Controller Count Displays the number of controllers in the cluster.

Vserver Count Displays the number of Vservers created in the cluster.

Location Displays the physical location of the cluster.

Aggregate Used Capacity (GB) Displays the amount of space used in the aggregate.

Aggregate Total Capacity (GB) Displays the total capacity of the aggregate.

Primary IP Address Displays the IP address of the cluster.

Status Displays the current status of the cluster, based on the events generated for the cluster. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Overview tab

The Overview tab displays information about the selected cluster, such as the list of LIFs and ports.

Contact Email Displays the e-mail address of the administrator for the cluster.


Uptime Displays the duration for which the cluster is online.

LIFs Displays the list of logical interfaces on the cluster.

Ports Displays the list of physical ports on the controllers in the cluster.

Graph tab

The Graph tab visually represents the various statistics about the clusters, such as performance and capacity. You can select the graph you want to view from the drop-down list.

You can view the graphs representing a selected time period, such as one day, one week, one month, three months, or one year. You can also click the icon to export graph details, such as the space savings trend, used capacity, total capacity, and space savings achieved through deduplication.

Cluster Hierarchy tab

The Cluster Hierarchy tab displays details about the cluster objects in the selected cluster, such as LIFs, storage controllers, Vservers, and aggregates.

Related Objects pane

The Related Objects section enables you to view and navigate to the groups, storage controllers, aggregates, volumes, and Vservers related to the cluster.

Groups Displays the groups to which the selected cluster belongs.

Storage Controllers Displays the storage controllers in the selected cluster.

Aggregates Displays the aggregates in the selected cluster.

Volumes Displays the volumes in the selected cluster.

Vservers Displays the Vservers in the selected cluster.

Related references

Window layout customization on page 16

Storage Controllers view

The Storage Controllers view displays detailed information about the storage controllers that are monitored, as well as their related objects, and also enables you to perform tasks such as editing the storage controller settings, grouping the storage controllers, and refreshing the monitoring samples.

• Breadcrumb trail on page 103
• Command buttons on page 103
• List view on page 104
• Map view on page 104
• Overview tab on page 107
• Capacity tab on page 108
• More Info tab on page 108
• Graph tab on page 108
• Related Objects pane on page 109

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command buttons

The command buttons enable you to perform the following tasks for a selected storage controller:

Add Launches the Storage Systems, All page in the Operations Manager console. You can add storage controllers from this page.

Edit Launches the Edit Storage Controller Settings page in the Operations Manager console. You can modify the storage controller settings from this page.

Delete Deletes the selected storage controller.

Add to Group Displays the Add to Group dialog box, which enables you to add the selected storage controller to the destination group.

Refresh Monitoring Samples Refreshes the database sample of the selected storage controller and enables you to view the updated details.

More Actions • View Events: Displays the events associated with the storage controller in the Events tab. You can sort the information based on the event severity, source ID, date of event trigger, state, and so on.

Refresh Refreshes the list of storage controllers.

Grid ( ) Displays a list view of the storage controllers.

Note: You can modify storage controller settings, delete a storage controller, add storage controllers to a group, refresh monitoring samples, and view events for a storage controller by right-clicking the selected storage controller.


TreeMap ( ) Displays a map view of the storage controllers.

List view

The List view displays, in tabular format, the properties of all the discovered storage controllers. You can customize your view of the data by clicking the column filters.

You can double-click a storage controller to display its child objects. The breadcrumb trail is modified to display the selected storage controller.

ID Displays the storage controller ID. By default, this column is hidden.

Name Displays the name of the storage controller.

Type Displays the type of storage controller. The controller can be a stand-alone system, a clustered system, or a node in an HA pair.

Status Displays the current status of the storage controller based on the events generated. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Cluster Displays the name of the cluster to which the storage controller belongs.

Model Displays the model number of the storage controller.

Serial Number Displays the serial number of the storage controller (this number is also provided on the chassis).

System ID Displays the ID number of the storage controller.

Used Capacity (GB) Displays the amount of space used by the storage controller.

Total Capacity (GB) Displays the total space available in the storage controller.

IP Address Displays the IP address of the storage controller. By default, this column is hidden.

State Displays the current state of the storage controller. The state can be Up, Down, Error, or Unknown. By default, this column is hidden.

Map view

The Map view enables you to view the properties of the storage controllers, which are displayed as rectangles of different sizes and colors. The size and color of the rectangles are based on the options you select for the Size and Color fields in the properties area.

Storage Controller Filter Enables you to display capacity, CPU utilization, and status information about the storage controllers in varying rectangle sizes and colors:


Size Specifies the size of the rectangle based on the option you select from the drop-down list. You can choose one of the following options:

• Used Capacity (default): The amount of physical space (in GB) used by application or user data in the storage controller. The size of the rectangle increases when the value for used capacity increases.

• Available Capacity: The amount of physical space (in GB) that is available in the storage controller. The size of the rectangle increases when the value for available capacity increases.

• Committed Capacity: The amount of physical space (in GB) allocated to user and application data. The size of the rectangle increases when the value for committed capacity increases.

• Saved Capacity: The amount of space (in GB) saved in the storage controller. The size of the rectangle increases when the value for saved capacity increases.

• Status: The current status of the storage controller based on the events generated. The size of the rectangle varies from large to small in the following order: Emergency, Critical, Error, Warning, Normal, and Unknown. For example, a controller with an Emergency status is displayed as a larger rectangle than a controller with a Critical status.

• CPU Utilization: The CPU usage (in percentage) of the storage controller. The size of the rectangle increases when the value for CPU utilization increases.

Color Specifies the color of the rectangle based on the option you select from thedrop-down list. You can choose one of the following options:

• Status (default): The current status of the storage controller based onthe events generated. Each status displays a specific color: Emergency

( ), Critical ( ), Error ( ), Warning ( ), Normal ( ), and

Unknown ( ).• Available %: The percentage of space available in the storage

controller. The color varies based on the specified threshold values andthe space available in the controller. For example, in a storagecontroller with a size of 100 GB, if the Volume Nearly Full Thresholdand Volume Full Threshold are set to default values of 80% and 90%,respectively, the color of the rectangle depends on the followingconditions:

• If the available space in the controller is more than 20 GB and less

than 100 GB, the color displayed is green ( ). When theavailable space reduces, the green color changes to a lighter shade.

• If the available space in the controller is less than 20 GB but more

than 10 GB, the color displayed is orange ( ). When the

Storage | 105

Page 106: Admin help netapp

available space reduces, the orange color changes to a darkershade.

• If the available space in the controller is less than 10 GB, the color

displayed is red ( ). When the available space reduces, the redcolor changes to a darker shade.

• Used %: The percentage of space used in the storage controller. The color displayed varies based on the following conditions:

• If the used space in the controller is less than the Volume Nearly Full Threshold value of the controller, the color displayed is green ( ). When the used space reduces, the green color changes to a darker shade.

• If the used space in the controller exceeds the Volume Nearly Full Threshold value but is less than the Volume Full Threshold value of the controller, the color displayed is orange ( ). When the used space reduces, the orange color changes to a lighter shade.

• If the used space in the controller exceeds the Volume Full Threshold value of the controller, the color displayed is red ( ). When the used space reduces, the red color changes to a lighter shade.

• Committed Capacity: The amount of physical space (in GB) allocated to application or user data. The color displayed is blue ( ). When the committed capacity reduces, the blue color changes to a lighter shade.

• Saved Capacity: The amount of space (in GB) saved in the storage controller. The color displayed is blue ( ). When the saved capacity reduces, the blue color changes to a lighter shade.

• CPU Utilization: The CPU usage (in percentage) of the storage controller. The color displayed varies based on the following conditions:

• If the CPU usage of the controller is less than the Host CPU Too Busy Threshold value, the color displayed is green ( ). When the CPU usage reduces, the green color changes to a darker shade.

• If the CPU usage of the controller exceeds the Host CPU Too Busy Threshold value, the color displayed is red ( ). When the CPU usage reduces, the red color changes to a lighter shade.
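The threshold-to-color rules described above follow one pattern. The sketch below is an illustration only, not part of the console; the function and parameter names are hypothetical, and the defaults mirror the documented Volume Nearly Full (80%) and Volume Full (90%) threshold values.

```python
def capacity_color(used_pct, nearly_full=80.0, full=90.0):
    """Map a used-space percentage to the documented color bucket.

    Hypothetical helper illustrating the rules above; the console
    applies this logic internally.
    """
    if used_pct >= full:
        return "red"      # exceeds the Full Threshold
    if used_pct >= nearly_full:
        return "orange"   # exceeds the Nearly Full Threshold
    return "green"        # below the Nearly Full Threshold


def available_color(available_gb, size_gb, nearly_full=80.0, full=90.0):
    """The same rule expressed for available space: a 100 GB controller
    with default thresholds turns orange below 20 GB free and red
    below 10 GB free, matching the worked example in the text."""
    used_pct = 100.0 * (1 - available_gb / size_gb)
    return capacity_color(used_pct, nearly_full, full)
```

For example, `available_color(15, 100)` falls in the orange band, consistent with the "less than 20 GB but more than 10 GB" case above.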

General Enables you to filter storage controllers based on the name, status, or both.

Note: You can filter by entering regular expressions instead of the full name of the controller. For example, xyz* lists all the controllers that begin with the name xyz.

Capacity Enables you to filter storage controllers based on the used capacity, available capacity, committed capacity, and saved capacity. You can specify the capacity range by dragging the sliders.

Performance Enables you to filter storage objects based on the CPU utilization of the storage controller. You can specify the CPU utilization range by dragging the sliders.
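The name filter in the Note above behaves like a prefix match for a pattern such as xyz*. A minimal sketch of that matching in Python follows; the `filter_names` helper and its translation of `*` to `.*` are assumptions for illustration, and the console's actual pattern engine may differ.

```python
import re

def filter_names(names, pattern):
    """Return the names accepted by a filter pattern, anchored at the
    start of the name so that 'xyz*' lists only names beginning with
    'xyz', as in the documented example. Hypothetical helper."""
    # Interpret '*' as "any trailing characters", per the doc's example.
    rx = re.compile(pattern.replace("*", ".*"))
    return [name for name in names if rx.match(name)]
```

Usage: `filter_names(["xyz1", "abc", "xyzzy"], "xyz*")` keeps only the names beginning with xyz.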

Overview tab

The Overview tab displays information about the selected storage controller, such as the IP address, network interface connection, status, and AutoSupport details.

Name Displays the name of the storage controller.

Operating System Displays the version of the operating system that the storage controller is running.

IP Address Displays the IP address of the storage controller in IPv4 or IPv6 format.

Network Interface Count Displays the number of network interfaces on the storage controller.

Network Interfaces Displays the names of the network interfaces on the storage controller.

Status Displays the current status of the storage controller based on the events generated. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Up Time Displays the duration for which the storage controller has been online.

Remote Platform Management Displays the status of the Remote LAN Module (RLM) card that is installed on the controller. The status can be one of the following:

Online This status is displayed when the DataFabric Manager server is communicating with the RLM card using the IP address you have set.

Unavailable This status is displayed when the DataFabric Manager server is unable to communicate with the RLM card. This can be due to the following reasons:

• The IP address of the card is not set.
• The IP address of the card is set, but the DataFabric Manager server is not able to communicate with the card.
• The storage controller does not support an RLM card.

• The storage controller supports an RLM card, but the card is not accessible.

• The RLM card is not functioning.

You can perform remote maintenance operations for the storage controller by clicking the remote platform management link.

Contact Displays the contact information of the administrator for the storage controller.

Location Displays the location of the storage controller.

AutoSupport Displays the status of AutoSupport: "Yes" if AutoSupport is enabled and "Unknown" if not.

Capacity tab

The Capacity tab displays information about the capacity of storage objects and disks within the storage controller.

Storage Capacity Displays the number of aggregates, volumes, qtrees, or LUNs, if any, that the storage system contains, including the capacity that is currently in use. You can click the number corresponding to the storage capacity for more information.

Physical Space Displays the number of data, spare, and parity disks and their data capacities on the storage controller. The Total Disks field under Physical Space provides information about the disks on the storage controller. You can click the number corresponding to Total Disks to view more information from the Disks view.

More Info tab

The More Info tab displays the following additional information about a storage controller, including its broken disks:

Failed Disks Displays the number of failed disks on the storage controller.

Failed Disk Info Displays the location of the failed disk on the storage controller.

Initiators Displays the number of LUN initiators available in the storage controller. You can double-click the number corresponding to the initiator for more information.

Protocols Displays the list of protocols that are supported by the storage controller, such as NFS, CIFS, FCP, and iSCSI.

Graph tab

The Graph tab visually represents the various statistics about the storage controller, such as performance and capacity. You can select the graph you want to view from the drop-down list.

You can view the graphs representing a selected time period, such as one day, one week, one month, three months, or one year. You can also click to export graph details, such as the space savings trend, used capacity, total capacity, and space savings achieved through deduplication.

Related Objects pane

The Related Objects section enables you to view and navigate to the groups, volumes, aggregates, and vFiler units related to the storage controller.

Groups Displays the groups to which the storage controller belongs.

Volumes Displays the volumes in the selected storage controller.

Aggregates Displays the aggregates in the selected storage controller.

vFiler Units Displays the vFiler units in the selected storage controller.

Related references

Window layout customization on page 16

Aggregates view

The Aggregates view displays detailed information about the aggregates in storage systems that are monitored, as well as their related objects, and also enables you to perform tasks such as editing the aggregate settings, grouping the aggregates, and refreshing monitoring samples.

• Breadcrumb trail on page 109
• Command buttons on page 110
• List view on page 110
• Map view on page 111
• Overview tab on page 114
• Capacity tab on page 115
• Space Breakout tab on page 115
• Graph tab on page 115
• Related Objects pane on page 115

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.
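The breadcrumb behavior described above amounts to a simple stack: drilling into an object appends a crumb, and clicking an earlier crumb truncates the trail back to that point. A sketch follows; the class and method names are hypothetical, not part of the console.

```python
class BreadcrumbTrail:
    """Minimal model of the documented breadcrumb trail."""

    def __init__(self):
        self.crumbs = []

    def drill_into(self, obj_name):
        # Double-clicking an item adds another "breadcrumb" to the "trail".
        self.crumbs.append(obj_name)

    def revisit(self, obj_name):
        # Clicking a breadcrumb returns to that list, dropping later crumbs.
        index = self.crumbs.index(obj_name)
        self.crumbs = self.crumbs[: index + 1]
```

For example, drilling from Aggregates into aggr0 and then vol1, then revisiting aggr0, leaves only the first two crumbs on the trail.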

Command buttons

The command buttons enable you to perform the following tasks for a selected aggregate:

Edit Launches the Edit Aggregate Settings page in the Operations Manager console. You can modify the aggregate settings from this page.

Delete Deletes the selected aggregate.

Add to Group Displays the Add to Group dialog box, which enables you to add the selected aggregate to the destination group.

Refresh Monitoring Samples Refreshes the database sample of the selected aggregate and enables you to view the updated details.

More Actions • View Events: Displays the events associated with the aggregate in the Events tab. You can sort the information based on the event severity, source ID, date of event trigger, and state.

Refresh Refreshes the list of aggregates.

Grid ( ) Displays the list view of the aggregate.

Note: You can add an aggregate to a group, refresh monitoring samples, modify aggregate settings, view events for an aggregate, and delete an aggregate by right-clicking the selected aggregate.

TreeMap ( ) Displays the map view of the aggregate.

List view

The list view displays, in tabular format, the properties of all the discovered aggregates. You can customize your view of the data by clicking the column filters.

You can double-click the name of an aggregate to display its child objects. The breadcrumb trail is modified to display the selected aggregate.

ID Displays the aggregate ID. By default, this column is hidden.

Name Displays the name of the aggregate.

Storage System Displays the name of the storage system that contains the aggregate.

Type Displays the type of aggregate selected. By default, this column is hidden.

Block Type Displays the block format of the aggregate, either 32-bit or 64-bit.

RAID Type Displays the RAID protection scheme, if specified. By default, this column is hidden.

The RAID protection scheme can be one of the following:

RAID 0 All the RAID groups in the aggregate are of type raid0.

RAID 4 All the RAID groups in the aggregate are of type raid4.

RAID DP All the RAID groups in the aggregate are of type raid_dp.

Mirrored RAID 0 All the RAID groups in the mirrored aggregate are of type raid0.

Mirrored RAID 4 All the RAID groups in the mirrored aggregate are of type raid4.

Mirrored RAID DP All the RAID groups in the mirrored aggregate are of type raid_dp.

State Displays the current state of an aggregate. The state can be Online, Offline, or Unknown.

Status Displays the current status of an aggregate. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Used Capacity (GB) Displays the amount of space used in the aggregate.

Available Capacity (GB) Displays the amount of space available in the aggregate.

Committed Capacity (GB) Displays the total space reserved for all flexible volumes on an aggregate.

Total Capacity (GB) Displays the total space of the aggregate.

Host ID Displays the host ID to which the aggregate is related. By default, this column is hidden.

Map view

The Map view enables you to view the properties of the aggregates, which are displayed as rectangles with different sizes and colors. The size and color of the rectangles are based on the options you select for the Size and Color fields in the properties area.

Aggregate Filter Enables you to display capacity and status information about the aggregates in varying rectangle sizes and colors:

Size Specifies the size of the rectangle based on the option you select from the drop-down list. You can select one of the following options:

• Used % (default): The percentage of space used in the aggregate. The size of the rectangle increases when the value for used space increases.

• Available %: The percentage of space available in the aggregate. The size of the rectangle increases when the value for available space increases.

• Growth Rate: The rate at which data in the aggregate is increasing. The size of the rectangle increases when the value for growth rate increases.

• Unused Snapshot Reserve: The amount of unused Snapshot reserve space (in GB). The size of the rectangle increases when the value for unused Snapshot reserve increases.

• Days to Full: The number of days needed for the aggregate to reach full capacity. The size of the rectangle increases when the value for days to full increases.

• Size: The total size (in GB) of the aggregate. The size of the rectangle increases when the value for size increases.

• Used Capacity: The amount of physical space (in GB) used by application or user data in the aggregate. The size of the rectangle increases when the value for used capacity increases.

• Available Capacity: The amount of physical space (in GB) that is available in the aggregate. The size of the rectangle increases when the value for available capacity increases.

• Committed Capacity: The amount of physical space (in GB) allocated to user and application data. The size of the rectangle increases when the value for committed capacity increases.

• Saved Capacity: The amount of space (in GB) saved in the aggregate. The size of the rectangle increases when the value for saved capacity increases.

• Status: The current status of the aggregate based on the events generated. The size of the rectangle varies from large to small in the following order: Emergency, Critical, Error, Warning, Normal, and Unknown. For example, an aggregate with an Emergency status is displayed as a larger rectangle than an aggregate with a Critical status.
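The Status sizing described in the last bullet is an ordering rather than a formula. A sketch of that ordering follows; the numeric rank values are illustrative assumptions, not those used by the console.

```python
# Documented large-to-small order: Emergency, Critical, Error,
# Warning, Normal, Unknown. The numeric ranks are illustrative only.
STATUS_SIZE_RANK = {
    "Emergency": 6,
    "Critical": 5,
    "Error": 4,
    "Warning": 3,
    "Normal": 2,
    "Unknown": 1,
}

def larger_rectangle(status_a, status_b):
    """Return the status whose rectangle is drawn larger."""
    return max((status_a, status_b), key=STATUS_SIZE_RANK.get)
```

For example, an Emergency aggregate outranks a Critical one, so its rectangle is drawn larger.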

Color Specifies the color of the rectangle based on the option you select from the drop-down list. You can select one of the following options:

• Status (default): The current status of the aggregate based on the events generated. Each status displays a specific color: Emergency ( ), Critical ( ), Error ( ), Warning ( ), Normal ( ), and Unknown ( ).

• Used %: The percentage of space used in the aggregate. The color displayed varies based on the following conditions:

• If the used space in the aggregate is less than the Nearly Full Threshold value of the aggregate, the color displayed is green ( ). When the used space reduces, the green color changes to a darker shade.

• If the used space in the aggregate exceeds the Nearly Full Threshold value but is less than the Full Threshold value of the aggregate, the color displayed is orange ( ). When the used space reduces, the orange color changes to a lighter shade.

• If the used space in the aggregate exceeds the Full Threshold value of the aggregate, the color displayed is red ( ). When the used space reduces, the red color changes to a lighter shade.

• Aggregate with Snapshot Reserve: The size (in GB) of the Snapshot reserve in an aggregate. If the aggregate has an allocated Snapshot reserve, ( ) is displayed. If the aggregate does not have an allocated Snapshot reserve, ( ) is displayed.

• Unused Snapshot Reserve: The amount of unused Snapshot reserve space (in GB) in the aggregate. The color varies based on the specified threshold values and the unused Snapshot reserve space in the aggregate. For example, in an aggregate with a size of 100 GB, if the Aggregate Snapshot Reserve Nearly Full Threshold and Aggregate Snapshot Reserve Full Threshold are set to default values of 80% and 90%, respectively, the color of the rectangle depends on the following conditions:

• If the unused Snapshot reserve space in the aggregate is more than 20 GB but less than 100 GB, the color displayed is green ( ). When the unused Snapshot reserve space reduces, the green color changes to a lighter shade.

• If the unused Snapshot reserve space in the aggregate is less than 20 GB but more than 10 GB, the color displayed is orange ( ). When the unused Snapshot reserve space reduces, the orange color changes to a darker shade.

• If the unused Snapshot reserve space in the aggregate is less than 10 GB, the color displayed is red ( ). When the unused Snapshot reserve space reduces, the red color changes to a darker shade.

• Raid-4/DP: The RAID protection scheme, if specified. The color displayed varies based on the RAID protection types: raid0 ( ), raid4 ( ), raid_dp ( ), and mixed_raid_type ( ).

• Size: The total size (in GB) of the aggregate. The color displayed is blue ( ). When the size of the aggregate reduces, the blue color changes to a lighter shade.

• Available %: The percentage of space available in the aggregate. The color varies based on the specified threshold values and the space available in the aggregate. For example, in an aggregate with a size of 100 GB, if the Aggregate Nearly Full Threshold and Aggregate Full Threshold are set to default values of 80% and 90%, respectively, the color of the rectangle depends on the following conditions:

• If the available space in the aggregate is more than 20 GB but less than 100 GB, the color displayed is green ( ). When the available space reduces, the green color changes to a lighter shade.

• If the available space in the aggregate is less than 20 GB but more than 10 GB, the color displayed is orange ( ). When the available space reduces, the orange color changes to a darker shade.

• If the available space in the aggregate is less than 10 GB, the color displayed is red ( ). When the available space reduces, the red color changes to a darker shade.

• Committed Capacity: The amount of physical space (in GB) allocated to application or user data. The color displayed is blue ( ). When the committed capacity reduces, the blue color changes to a lighter shade.

• Saved Capacity: The amount of space (in GB) saved in the aggregate. The color displayed is blue ( ). When the saved capacity reduces, the blue color changes to a lighter shade.

General Enables you to filter aggregates based on the name, status, or both.

Note: You can filter by entering regular expressions instead of the full name of the aggregate. For example, xyz* lists all the aggregates that begin with the name xyz.

Capacity Enables you to filter aggregates based on the used %, growth rate, used capacity, available capacity, saved capacity, and so on. You can specify the capacity range by dragging the sliders.

Overview tab

The Overview tab displays details about the selected aggregate, such as the storage object name and options to enable Snapshot copies.

Full Name Displays the full name of the aggregate.

Storage System Displays the name of the storage system that contains the aggregate. You can view more information about the storage system by clicking the link.

Snapshot Copies Enabled Specifies whether automatic Snapshot copies are enabled.

Snapshot Auto Delete Specifies whether a Snapshot copy will be deleted to free space when a write to a volume fails due to lack of space in the aggregate.

Capacity tab

The Capacity tab displays information about the capacity of storage objects and disks within the storage system.

Storage Capacity Displays the number of volumes and qtrees, if any, that the aggregate contains, including the capacity used by each object. You can click the number to display the volumes or qtrees contained in the aggregate.

Physical Space Displays the number and capacity of data disks and parity disks assigned to the aggregate. You can click the number corresponding to Total Disks to view more information from the Disks view.

Space Breakout tab

The Space Breakout tab displays information about the space used, and the space available, for Snapshot copies and data.

Graph tab

The Graph tab visually represents the performance characteristics of an aggregate. You can select the graph you want to view from the drop-down list.

You can view the graphs representing a selected time period, such as one day, one week, one month, three months, or one year. You can also click to export graph details, such as the used capacity trend, used capacity, and total capacity.

Related Objects pane

The Related Objects section enables you to view and navigate to the groups, storage controllers, volumes, and disks related to the aggregate.

Groups Displays the groups to which the aggregate belongs.

Storage Controllers Displays the storage controllers that contain the selected aggregate.

Volumes Displays the volumes in the selected aggregate.

Disks Displays the disks in the selected aggregate.

Related references

Window layout customization on page 16

Disks view

The Disks view displays detailed information about disks and their related objects.

• Breadcrumb trail on page 116
• Command button on page 116
• List view on page 116
• Overview tab on page 117
• Related objects pane on page 117

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command button

The command button enables you to perform the following task for a selected disk:

Refresh Refreshes the list of disks.

List view

The List view displays, in tabular format, the properties of all the discovered disks. You can customize your view of the data by clicking the column filters.

ID Displays the disk ID. By default, this column is hidden.

Disk Name Displays the name of the disk.

Controller Displays the name of the storage controller that contains the disk.

Aggregate Displays the name of the parent aggregate of the disk.

Aggregate ID Displays the ID of the aggregate to which the disk belongs. By default, this column is hidden.

Type Displays the type of the disk.

Size (GB) Displays the size of the disk.

Shelf ID Displays the ID of the shelf on which the disk is located.

Bay ID Displays the ID of the bay within the shelf on which the disk is located.

Plex ID Displays the ID of the plex to which the disk is assigned.

Status Displays the current status of the disk, such as Active, Reconstruction in Progress, Scrubbing in Progress, Failed, Spare, or Offline.

Host ID Displays the ID of the host to which the disk is related. By default, this column is hidden.

Overview tab

The Overview tab displays the following information about the selected disk:

Firmware Revision Number Displays the latest version of the firmware installed on the disk.

Vendor Displays the name of the disk vendor.

Disk Model Displays the model number of the disk.

Related objects pane

The Related Objects section displays the aggregates related to the disk.

Aggregates Displays the aggregates that contain the selected disk.

Related references

Window layout customization on page 16

Deleted Objects view

The Deleted Objects view displays detailed information about the storage objects that you deleted. Deleting a storage object does not remove the object from the OnCommand console database, but the deleted object is no longer monitored by the OnCommand console. You can also restore deleted storage objects from the Deleted Objects view.

• Breadcrumb trail on page 117
• Command buttons on page 118
• List view on page 118

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command buttons

The command buttons enable you to perform the following tasks for a selected storage object:

Recover Enables you to recover a storage object that has been deleted. You cannot recover a storage object that has been deleted by the DataFabric Manager server. Also, you cannot recover a storage object whose parent object is deleted.

Data collection and reporting for a storage object resumes when you recover it.

Note: You can also recover a storage object by right-clicking the selected deleted object.
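The recovery restrictions above reduce to two checks. A sketch of them as a predicate follows; the function and its arguments are hypothetical, for illustration only.

```python
def can_recover(deleted_by, parent_deleted):
    """Apply the documented rules: an object deleted by the DataFabric
    Manager server, or whose parent object is deleted, cannot be
    recovered from the Deleted Objects view."""
    if deleted_by == "DataFabric Manager server":
        return False   # server-deleted objects are unrecoverable
    if parent_deleted:
        return False   # cannot recover when the parent is also deleted
    return True
```

An object deleted by an administrator whose parent still exists is the only recoverable case.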

Refresh Refreshes the list of deleted storage objects.

List view

The List view displays, in tabular format, the properties of the deleted storage objects. You can customize your view of the data by clicking the column filters.

ID Displays the storage object ID. By default, this column is hidden.

Name Displays the name of the storage object.

Type Displays the storage object type, such as volume, qtree, or aggregate.

Storage Server Displays the parent of the storage object, such as volume, vFiler unit, or Vserver.

Deleted Displays the date and time that the storage object was deleted.

Deleted By Displays the name of the user who deleted the storage object.

Parent Deleted Displays "Yes" if the parent object is deleted or "No" if the parent object is not deleted.

Parent ID Displays the ID of the parent object. By default, this column is hidden.

Parent Name Displays the name of the parent object. By default, this column is hidden.

Related references

Window layout customization on page 16

Virtual storage

Understanding virtual storage

What vFiler units are

A vFiler unit is a partition of a storage system and the associated network resources. Each vFiler partition appears to the user as a separate storage system on the network and functions as a storage system.

Access to vFiler units can be restricted so that an administrator can manage and view files only on an assigned vFiler unit, not on other vFiler units that reside on the same storage system. In addition, there is no data flow between vFiler units. When using vFiler units, you can be sure that no sensitive information is exposed to other administrators or users who store data on the same storage system.

You can assign volumes or LUNs to vFiler units in NetApp Management Console. You can create up to 65 vFiler units on a storage system.

To use vFiler units, you must have the MultiStore software licensed on the storage system that is hosting the vFiler units.

You can use vFiler templates to simplify the creation of vFiler units. You create a template by selecting a set of vFiler configuration settings, including CIFS, DNS, NIS, and administration host information. You can configure as many vFiler templates as you need.

Discovery of vFiler units

The OnCommand console monitors the hosting storage systems to discover vFiler units. You must set authentication credentials for the hosting storage system to ensure that the OnCommand console discovers the vFiler units.

Attention: If you encounter a "timed out" error message when setting the authentication credentials for the hosting storage system, you must set the credentials again.

The server monitors the hosting storage system once every hour to discover new vFiler units that you configured on the storage system. The server deletes from the database the vFiler units that you destroyed on the storage system.

You can change the default monitoring interval from the Monitoring setup options, or by using the following CLI command:

dfm option set vFilerMonInterval=1hour

You can disable the vFiler discovery from the Discovery setup options, or by using the dfm option set discovervfilers=no CLI command.

When the OnCommand console discovers a vFiler unit, it does not add the network to which the vFiler unit belongs to its list of networks on which it runs host discovery. In addition, when you delete a network, the server continues to monitor the vFiler units in that network.

What Vservers are

A Vserver represents a single file-system namespace. It has separate network access and provides the same flexibility and control as a dedicated node. Each Vserver has its own user domain and security domain that can span multiple physical nodes.

A Vserver has a root volume that constitutes the top level of the namespace hierarchy; additional volumes are mounted to the root volume to extend the namespace. A Vserver is associated with one or more logical interfaces through which clients access the data on the storage system (or Vserver). Clients can access the Vserver from any node in the cluster through the logical interfaces that are associated with the Vserver.

Note: A namespace provides a context for determining the junctions that link together a collection of volumes. All the volumes associated with a Vserver are accessed from the Vserver's namespace.
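The junction-based namespace described in the Note can be pictured as a mapping from mount points to volumes, with a client path resolving to the volume whose junction is its longest matching prefix. The sketch below is a conceptual illustration only; the data layout and names are assumptions, not Data ONTAP internals.

```python
def resolve_volume(junctions, path):
    """Resolve a namespace path to the volume mounted at the longest
    matching junction. Conceptual sketch of junction resolution."""
    best = None
    for junction, volume in junctions.items():
        prefix = junction.rstrip("/") + "/"
        if path == junction or path.startswith(prefix):
            # Keep the most specific (longest) matching junction.
            if best is None or len(junction) > len(best[0]):
                best = (junction, volume)
    return best[1] if best else None
```

For a hypothetical Vserver with junctions {"/": "vs1_root", "/projects": "vol_projects"}, the path /projects/a.txt resolves to vol_projects, while /etc/hosts resolves to the root volume.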

Managing virtual storage

Editing vFiler unit settings

You can edit the following vFiler unit settings from the Edit vFiler Settings page: primary IP address, login details, password, login protocol, owner name and e-mail address, resource tag, Host.equiv option, and vFiler unit thresholds.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the vFiler Units view.

3. Select the vFiler unit you want to modify.

4. Click Edit.

The Edit vFiler Settings page is displayed in the Operations Manager console.

5. Modify the properties of the vFiler unit, as required.

6. Click Update.

7. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

Result

The modified settings are updated in the DataFabric Manager server.

Related references

Administrator roles and capabilities on page 506

Editing Vserver settings

You can edit the following virtual server settings from the Edit Vserver Settings page: the primary IP address, resource tag, and owner e-mail address and name.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Vservers view.

3. Select the virtual server you want to modify.

4. Click Edit.

The Edit Vserver Settings page is displayed in the Operations Manager console.

5. Modify the properties of the virtual server, as required.

6. Click Update.

7. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

Result

The modified settings are updated in the DataFabric Manager server.

Related references

Administrator roles and capabilities on page 506

Adding vFiler units to a group

You can add a vFiler unit to an existing group. You can also create a new group and add a vFiler unit to it. This enables you to easily monitor all the vFiler units that belong to a group.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the vFiler Units view.

3. Select the vFiler unit you want to add to the group.

4. In the Add to Group dialog box, perform the appropriate action:

If you want to... Then...

Add the vFiler unit to an existing group. Select the appropriate group.

Add the vFiler unit to a new group. Type the name of the new group in the New Group field.

5. Click OK.

Result

The vFiler unit is added to the group.

Related references

Administrator roles and capabilities on page 506

Adding Vservers to a group

You can add a virtual server to an existing group. You can also create a new group and add a virtual server to it. This enables you to easily monitor all the virtual servers that belong to a group.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Vservers view.

3. Select the virtual server you want to add to the group and click Add to Group.

4. In the Add to Group dialog box, perform the appropriate action:

If you want to...                             Then...

Add the virtual server to an existing group.  Select the appropriate group.

Add the virtual server to a new group.        Type the name of the new group in the New Group field.

5. Click OK.

Result

The virtual server is added to the group.

Related references

Administrator roles and capabilities on page 506

Monitoring virtual storage

Prerequisites for monitoring vFiler units

Before you enable monitoring of vFiler units, you must ensure that the hosting storage system is running a supported Data ONTAP release and is part of the same routable network as the DataFabric Manager server. NDMP discovery must also be enabled.

You must meet the following requirements before monitoring vFiler units:

• Supported Data ONTAP release
The MultiStore monitoring feature supports hosting storage systems running Data ONTAP 6.5 or later.


Note: To run a command on a vFiler unit using a Secure Shell (SSH) connection, you must ensure that the hosting storage system is running Data ONTAP 7.2 or later.

• Network connectivity
To monitor a vFiler unit, the DataFabric Manager server and the hosting storage system must be part of the same routable network that is not separated by firewalls.

• Hosting storage system discovery and monitoring
You must first discover and monitor the hosting storage system before discovering and monitoring the vFiler units.

• NDMP discovery
The DataFabric Manager server uses NDMP as the discovery method to manage SnapVault and SnapMirror relationships between vFiler units. To use NDMP discovery, you must first enable SNMP and HTTPS discovery.

• Monitoring the default vFiler unit
When you enable your core license, which includes MultiStore, Data ONTAP automatically creates a default vFiler unit, called vfiler0, on the hosting storage system. The OnCommand console does not provide vfiler0 details.

• Editing user quotas
To edit user quotas that are configured on vFiler units, ensure that the hosting storage systems are running Data ONTAP 6.5.1 or later.

• Monitoring backup relationships
For hosting storage systems that are backing up data to a secondary system, you must ensure that the secondary system is added to the vFiler group. The DataFabric Manager server collects details about vFiler unit backup relationships from the hosting storage system. You can then view the backup relationships if the secondary storage system is assigned to the vFiler group, even though the primary system is not assigned to the same group.

• Monitoring SnapMirror relationships
For hosting storage systems that are mirroring data to a secondary system, you must ensure that the secondary system is added to the vFiler group. The DataFabric Manager server collects details about vFiler unit SnapMirror relationships from the hosting storage system. The DataFabric Manager server displays the relationships if the destination vFiler unit is assigned to the vFiler group, even though the source vFiler unit is not assigned to the same group.

Viewing the vFiler unit inventory

You can use the vFiler Units view to monitor your inventory of vFiler units and view information about their properties, related storage objects, and capacity graphs.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.


Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, click vFiler Units.

Result

The list of vFiler units is displayed.

Related references

Administrator roles and capabilities on page 506

Viewing the Vserver inventory

You can use the Vservers view to monitor your inventory of Vservers and view information about their properties, related objects, capacity graphs, and namespace hierarchy details.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, click Vservers.

Result

The list of Vservers is displayed.

Related references

Administrator roles and capabilities on page 506

Page descriptions

vFiler Units view

The vFiler Units view displays detailed information about the vFiler units that are monitored, as well as their related objects, and also enables you to perform tasks such as editing the vFiler unit settings, grouping the vFiler units, and refreshing the monitoring samples.

• Breadcrumb trail on page 126
• Command buttons on page 126
• List view on page 127


• Map view on page 127
• Overview tab on page 129
• Capacity tab on page 129
• Graph tab on page 129
• Related Objects pane on page 130

Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.
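The breadcrumb trail described above behaves like a simple navigation stack. The following is a hypothetical sketch of that behavior, not OnCommand code; in particular, the assumption that revisiting an earlier breadcrumb drops the later crumbs is an illustration, not something this help topic states explicitly.

```python
class BreadcrumbTrail:
    """Hypothetical model of the breadcrumb trail described above."""

    def __init__(self):
        self.crumbs = []

    def drill_down(self, label):
        # Double-clicking an item adds another "breadcrumb" to the "trail".
        self.crumbs.append(label)

    def revisit(self, label):
        # Clicking a breadcrumb revisits that list; dropping the later
        # crumbs here is an assumption about the console's behavior.
        self.crumbs = self.crumbs[: self.crumbs.index(label) + 1]
```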

Command buttons

The command buttons enable you to perform the following tasks for a selected vFiler unit:

Edit Launches the Edit vFiler Settings page in the Operations Manager console. You can modify the vFiler unit settings from this page.

Delete Deletes the selected vFiler unit.

Add To Group Displays the Add to Group dialog box, which enables you to add the selected vFiler unit to a destination group.

Refresh Monitoring Samples
Refreshes the database sample of the selected vFiler unit and enables you to view the updated details.

More Actions • View Events
Displays the events associated with the vFiler unit in the Events tab. You can sort the information based on the event severity, source ID, date of event trigger, and state.

Refresh Refreshes the list of vFiler units.

Grid ( ) Displays the list view of the vFiler units.

Note: You can add a vFiler unit to a group, refresh monitoring samples, modify the settings for a vFiler unit, view events for a vFiler unit, and delete a vFiler unit by right-clicking the selected vFiler unit.

TreeMap ( ) Displays the map view of the vFiler units.


List view

The list view displays, in tabular format, the properties of all the discovered vFiler units. You can customize your view of the data by clicking the column filters.

ID Displays the vFiler unit ID. By default, this column is hidden.

Name Displays the name of the vFiler unit.

Hosting Storage System Displays the full name of the hosting storage system of the vFiler unit.

IP Space Displays the IP space in which the vFiler unit is created and can subsequently participate.

Primary IP Address Displays the primary IP address of the network interface.

Status Displays the current status of a vFiler unit. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

State Displays "Up" if the vFiler unit is online and "Down" if not. By default, this column is hidden.

Map view

The Map view enables you to view the properties of the vFiler units, which are displayed as rectangles with different sizes and colors. The size and color of the rectangles are based on the options you select for the Size and Color fields in the properties area.

vFiler Filter Enables you to display capacity and status information about the vFiler units in varying rectangle sizes and colors:

Size Specifies the size of the rectangle based on the option you select from the drop-down list. You can select one of the following options:

• Used % (default): The percentage of space used in the vFiler unit. The size of the rectangle increases when the value for used space increases.
• Available %: The percentage of space available in the vFiler unit. The size of the rectangle increases when the value for available space increases.
• Used Capacity: The amount of physical space (in GB) used by application or user data in the vFiler unit. The size of the rectangle increases when the value for used capacity increases.
• Available Capacity: The amount of physical space (in GB) that is available in the vFiler unit. The size of the rectangle increases when the value for available capacity increases.
• Status: The current status of the vFiler unit based on the events generated. The size of the rectangle varies from large to small in the following order: Emergency, Critical, Error, Warning, Normal, and Unknown. For example, a vFiler unit with an Emergency status is displayed as a larger rectangle than a vFiler unit with a Critical status.

Color Specifies the color of the rectangle based on the option you select from the drop-down list. You can select one of the following options:

• Status (default): The current status of the vFiler unit based on the events generated. Each status displays a specific color: Emergency ( ), Critical ( ), Error ( ), Warning ( ), Normal ( ), and Unknown ( ).
• Used %: The percentage of space used in the vFiler unit. The color displayed varies based on the following conditions:
  • If the used space in the vFiler unit is less than the Volume Nearly Full Threshold value, the color displayed is green ( ). When the used space reduces, the green color changes to a darker shade.
  • If the used space in the vFiler unit exceeds the Volume Nearly Full Threshold value but is less than the Volume Full Threshold value, the color displayed is orange ( ). When the used space reduces, the orange color changes to a lighter shade.
  • If the used space in the vFiler unit exceeds the Volume Full Threshold value, the color displayed is red ( ). When the used space reduces, the red color changes to a lighter shade.
• Available %: The percentage of space available in the vFiler unit. The color varies based on the following conditions:
  • If the available space in the vFiler unit exceeds the Volume Full Threshold value, the color displayed is green ( ). When the available space reduces, the green color changes to a lighter shade.
  • If the available space in the vFiler unit is less than the Volume Full Threshold value but exceeds the Volume Nearly Full Threshold value, the color displayed is orange ( ). When the available space reduces, the orange color changes to a darker shade.
  • If the available space in the vFiler unit is less than the Volume Nearly Full Threshold value, the color displayed is red ( ). When the available space reduces, the red color changes to a darker shade.

General Enables you to filter vFiler units based on the name, status, or both.

128 | OnCommand Console Help

Page 129: Admin help netapp

Note: You can filter by entering regular expressions instead of the full name of the vFiler unit. For example, xyz* lists all the vFiler units that begin with the name xyz.

Capacity Enables you to filter storage objects based on used capacity, available capacity, used %, and available %. You can specify the capacity range by dragging the sliders.
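The Map view's size and color rules described above can be summarized in a short sketch. This is not OnCommand code; the function names and the example threshold values (80% nearly full, 90% full) are assumptions for illustration only, since the actual Volume Nearly Full and Volume Full Threshold values are configurable.

```python
# Illustrative sketch of the treemap rules described in this topic.
# Threshold defaults below are assumed, not documented product defaults.

SEVERITY_ORDER = ["Emergency", "Critical", "Error", "Warning", "Normal", "Unknown"]

def rect_color(used_pct, nearly_full=80.0, full=90.0):
    """Map used-space % to a rectangle color, per the Used % rules above."""
    if used_pct < nearly_full:
        return "green"   # below the Volume Nearly Full threshold
    if used_pct < full:
        return "orange"  # between the Nearly Full and Full thresholds
    return "red"         # at or above the Volume Full threshold

def status_size_rank(status):
    """Lower rank = larger rectangle when Size is set to Status."""
    return SEVERITY_ORDER.index(status)
```

For example, with the assumed defaults, `rect_color(95)` returns `"red"`, and an Emergency vFiler unit ranks ahead of (is drawn larger than) a Critical one.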

Overview tab

The Overview tab displays details about the selected vFiler unit, such as the system ID, information about protocols, ping status, and domains.

Name Displays the name of the vFiler unit.

Hosting Storage System Displays the full name of the hosting storage system of the vFiler unit.

System ID Displays the universal unique identifier (UUID) of the vFiler unit.

Ping timestamp Displays the date and time that this vFiler unit was last queried.

Ping status Displays the status of the ping request sent to the vFiler unit.

Protocols enabled Displays the list of protocols enabled for the vFiler unit.

NFS service Displays the service status (Up or Down) of the NFS protocol.

CIFS service Displays the service status (Up or Down) of the CIFS protocol.

iSCSI service Displays the service status (Up or Down) of the iSCSI protocol.

Capacity tab

The Capacity tab displays information about the capacity of the volumes and qtrees that were added at the time of creating the vFiler unit.

Volume Displays the number of volumes the vFiler unit contains, including the used and total capacity of the volumes. By clicking the number corresponding to the volumes, you can view more information about these volumes from the Volumes view.

Qtree Displays the number of qtrees contained within the volumes in the vFiler unit, including the used and total capacity of the qtrees. By clicking the number corresponding to the qtrees, you can view more information about these qtrees from the Qtrees view.

Graph tab

The Graph tab visually represents the performance of a vFiler unit. The graphs display the volume capacity used versus the total capacity in the vFiler unit, vFiler capacity used, volume capacity used, and CPU usage (%). You can select the graphs from the drop-down list.


You can view the graphs representing a selected time period, such as one day, one week, one month, three months, or one year. You can also click to export graph details, such as the space savings trend, used capacity, total capacity, and space savings achieved through deduplication.

Related Objects pane

The Related Objects section enables you to view and navigate to the groups, volumes, and qtrees related to the vFiler unit.

Groups Displays the groups to which the vFiler unit belongs.

Volumes Displays the volumes in the selected vFiler unit. The volumes that were added after the vFiler creation are displayed along with the volumes added during the vFiler creation.

Qtrees Displays the qtrees in the selected vFiler unit. The qtrees that were added after the vFiler creation are displayed along with the qtrees added during the vFiler creation.

Related references

Window layout customization on page 16

Vservers view

The Vservers view displays detailed information about the Vservers that are monitored, as well as their related objects, and also enables you to perform tasks such as editing the Vserver settings, grouping the Vservers, and refreshing monitoring samples.

• Breadcrumb trail on page 130
• Command buttons on page 131
• List view on page 131
• Map view on page 132
• Overview tab on page 133
• Graph tab on page 134
• Namespace Hierarchy tab on page 134
• Related Objects pane on page 134

Breadcrumb trail
The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.


Command buttons

Edit Launches the Edit Vserver Settings page in the Operations Manager console. You can modify the Vserver settings from this page.

Delete Deletes the selected Vserver.

Add To Group Displays the Add to Group dialog box, which enables you to add the selected Vserver to a destination group.

Refresh Monitoring Samples
Refreshes the database sample of the selected Vserver and enables you to view the updated details.

More Actions • View Events
Displays the events associated with the Vserver in the Events tab. You can sort the information based on the event severity, source ID, date of event trigger, and state.

Refresh Refreshes the list of Vservers.

Grid ( ) Displays the list view of the Vserver.

Note: You can add a Vserver to a group, refresh monitoring samples, modify the settings for a Vserver, view events for a Vserver, and delete a Vserver by right-clicking the selected Vserver.

TreeMap ( ) Displays the map view of the Vserver.

List view

The list view displays, in tabular format, the properties of all the discovered Vservers. You can customize your view of the data by clicking the column filters.

You can double-click a Vserver to display its child objects. The breadcrumb trail is modified to display the selected Vserver.

ID Displays the Vserver ID. By default, this column is hidden.

Name Displays the name of the Vserver.

Cluster Displays the name of the cluster to which the Vserver belongs.

Root Volume Displays the name of the root volume of the Vserver.

Name Service Switch Displays the information type gathered from hosts.

NIS Domain Displays the Network Information Service (NIS) domain name.

Status Displays the current status of a Vserver. The status can be Critical, Error, Warning, Normal, or Unknown.


Map view

The Map view enables you to view the properties of the Vservers, which are displayed as rectangles with different sizes and colors. The size and color of the rectangles are based on the options you select for the Size and Color fields in the properties area.

Vserver Filter Enables you to display capacity and status information about the Vservers in varying rectangle sizes and colors:

Size Specifies the size of the rectangle based on the option you select from the drop-down list. You can select one of the following options:

• Used % (default): The percentage of space used in the Vserver. The size of the rectangle increases when the value for used space increases.
• Available %: The percentage of space available in the Vserver. The size of the rectangle increases when the value for available space increases.
• Saved %: The percentage of space saved in the Vserver. The size of the rectangle increases when the value for saved space increases.
• Used Capacity: The amount of physical space (in GB) used by application or user data in the Vserver. The size of the rectangle increases when the value for used capacity increases.
• Available Capacity: The amount of physical space (in GB) that is available in the Vserver. The size of the rectangle increases when the value for available capacity increases.
• Saved Capacity: The amount of physical space (in GB) that is saved in the Vserver. The size of the rectangle increases when the value for saved capacity increases.
• Status: The current status of the Vserver based on the events generated. The size of the rectangle varies from large to small in the following order: Emergency, Critical, Error, Warning, Normal, and Unknown. For example, a Vserver with an Emergency status is displayed as a larger rectangle than a Vserver with a Critical status.

Color Specifies the color of the rectangle based on the option you select from the Color drop-down list. The options can be one of the following:

• Status (default): The current status of the Vserver based on the events generated. Each status displays a specific color: Emergency ( ), Critical ( ), Error ( ), Warning ( ), Normal ( ), and Unknown ( ).
• Used %: The percentage of space used in the Vserver. The color displayed varies based on the following conditions:
  • If the used space in the Vserver is less than the Volume Nearly Full Threshold value, the color displayed is green ( ). When the used space reduces, the green color changes to a darker shade.
  • If the used space in the Vserver exceeds the Volume Nearly Full Threshold value but is less than the Volume Full Threshold value, the color displayed is orange ( ). When the used space reduces, the orange color changes to a lighter shade.
  • If the used space in the Vserver exceeds the Volume Full Threshold value, the color displayed is red ( ). When the used space reduces, the red color changes to a lighter shade.
• Available %: The percentage of space available in the Vserver. The color varies based on the following conditions:
  • If the available space in the Vserver exceeds the Volume Full Threshold value, the color displayed is green ( ). When the available space reduces, the green color changes to a lighter shade.
  • If the available space in the Vserver is less than the Volume Full Threshold value but exceeds the Volume Nearly Full Threshold value, the color displayed is orange ( ). When the available space reduces, the orange color changes to a darker shade.
  • If the available space in the Vserver is less than the Volume Nearly Full Threshold value, the color displayed is red ( ). When the available space reduces, the red color changes to a darker shade.
• Saved Capacity: The amount of space (in GB) saved in the Vserver. The color displayed is blue ( ). When the saved capacity reduces, the blue color changes to a lighter shade.

General Enables you to filter Vservers based on the name, status, or both.

Note: You can filter by entering regular expressions instead of the full name of the Vserver. For example, xyz* lists all the Vservers that begin with the name xyz.

Capacity Enables you to filter storage objects based on used %, available %, saved %, used capacity, and available capacity. You specify the capacity range by dragging the sliders.
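The name filtering described in the note above (for example, xyz*) behaves like shell-style wildcard matching. The following is a minimal sketch of such filtering using Python's standard fnmatch module; the helper name is hypothetical and not part of any NetApp API.

```python
from fnmatch import fnmatch

def filter_by_name(names, pattern):
    """Return the names matching a shell-style wildcard pattern (e.g. 'xyz*')."""
    return [name for name in names if fnmatch(name, pattern)]

# With the pattern "xyz*", only names beginning with "xyz" are kept.
```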

Overview tab

The Overview tab displays details about the selected Vserver, such as the primary IP address, LIF information, and the capacity of the volume that the Vserver contains.


Primary IP Displays the primary IP address of the Vserver.

LIFs Displays the number of LIFs that are associated with the Vserver.

Volume Capacity Displays the number of volumes, if any, that the Vserver contains and the capacity of the volumes that are currently in use.

Graph tab

The Graph tab visually represents the performance of a Vserver. The graphs display the volume capacity used versus the total capacity in the Vserver, logical interface traffic per second, and volume capacity used. You can select the graphs from the drop-down list.

You can view the graphs representing a specified time period, such as one day, one week, one month, three months, or one year. You can also click to export graph details, such as the used capacity trend, used capacity, and total capacity.

Namespace Hierarchy tab

The Namespace Hierarchy tab displays the hierarchical view of the Vserver's namespace, including the junctions and volumes. You can browse through these volumes and navigate to the corresponding Volumes view.

Related Objects pane

The Related Objects section enables you to view and navigate to the groups, clusters, and volumes related to the Vserver.

Groups Displays the groups to which the Vserver belongs.

Clusters Displays the clusters in the selected Vserver.

Volumes Displays the volumes in the selected Vserver.

Related references

Window layout customization on page 16

Logical storage

Understanding logical storage


What logical storage objects are

Logical storage includes file system objects, such as volumes, qtrees, LUNs, and Snapshot copies.

Volumes File systems that hold user data that is accessible through one or more of the access protocols supported by Data ONTAP. The Volumes view displays all the volumes discovered and monitored by the OnCommand console.

Qtrees Logically defined file system that can exist as a special subdirectory of the root directory within either a traditional volume or a flexible volume. There is no maximum limit on the number of qtrees you can create in storage systems. The Qtrees view displays all the qtrees monitored by the OnCommand console.

LUN Logical unit of storage identified by a number. The LUNs view displays all the LUNs monitored by the OnCommand console.

About quotas

Quotas provide a way to restrict or track the disk space and number of files used by a user, group, or qtree. Quotas are applied to a specific volume or qtree.

Why you use quotas

You can use quotas to limit resource usage, to provide notification when resource usage reaches specific levels, or simply to track resource usage.

You specify a quota for the following reasons:

• To limit the amount of disk space or the number of files that can be used by a user or group, or that can be contained by a qtree

• To track the amount of disk space or the number of files used by a user, group, or qtree, without imposing a limit

• To warn users when their disk usage or file usage is high

Overview of the quota process

Quotas can cause Data ONTAP to send a notification (soft quota) or to prevent a write operation from succeeding (hard quota) when quotas are exceeded.

When Data ONTAP receives a request to write to a volume, it checks to see whether quotas are activated for that volume. If so, Data ONTAP determines whether any quota for that volume (and, if the write is to a qtree, for that qtree) would be exceeded by performing the write operation. If any hard quota would be exceeded, the write operation fails, and a quota notification is sent. If any soft quota would be exceeded, the write operation succeeds, and a quota notification is sent.
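The decision logic above can be sketched in a few lines. This is an illustration of the described behavior only, not Data ONTAP internals; the function name and boolean inputs are assumptions.

```python
def check_write(quotas_active, hard_exceeded, soft_exceeded):
    """Return (write_succeeds, notification_sent) for a write request,
    following the hard/soft quota rules described above."""
    if not quotas_active:
        return True, False   # no quotas activated: write proceeds silently
    if hard_exceeded:
        return False, True   # hard quota: write fails, notification sent
    if soft_exceeded:
        return True, True    # soft quota: write succeeds, notification sent
    return True, False       # within all quotas: write proceeds silently
```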


What Snapshot copies are

Snapshot copies are read-only images of a traditional volume, a FlexVol volume, or an aggregate that capture the state of the file system at a point in time. Snapshot copies are your first line of defense to back up and restore data.

Data ONTAP maintains a configurable Snapshot copy schedule that creates and deletes Snapshot copies automatically for each volume.

Managing logical storage

Editing volume quota settings

You can modify threshold conditions for volumes, Snapshot copies, and user quotas, and set alerts to receive a notification when a threshold is crossed.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Volumes view.

3. Select the volume you want to modify.

4. Click Edit.

The Edit Volume Quota Settings page is displayed in the Operations Manager console.

5. Modify the capacity threshold settings for the selected volume.

6. Click Update.

Changes in the settings are updated in the DataFabric Manager server.

7. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.


Related references

Administrator roles and capabilities on page 506

Editing qtree quota settings

You can modify threshold conditions for qtrees and set alerts to receive notification when a threshold is crossed.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Qtrees view.

3. Select the qtree you want to modify.

4. Click Edit.

The Edit Qtree Quota Settings page is displayed in the Operations Manager console.

5. Modify the capacity threshold settings for the selected qtree.

6. Click Update.

7. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

Related references

Administrator roles and capabilities on page 506


Editing LUN path settings

You can edit the following settings for a selected LUN path: owner e-mail address, owner name, resource tag, and the description of the LUN.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the LUNs view.

3. Select the LUN path you want to modify.

4. Click Edit.

The Edit LUN Path Settings page is displayed in the Operations Manager console.

5. Modify the attributes of the selected LUN.

6. Click Update.

Changes to the settings are updated in the DataFabric Manager server.

7. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

Related references

Administrator roles and capabilities on page 506


Editing quota settings

You can edit the following storage capacity threshold settings for a selected user quota: disk space used, disk space hard limit, disk space soft limit, disk space threshold, files used, files hard limit, and files soft limit.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Quota Settings view.

3. Select the user quota you want to modify.

4. Click Edit Settings.

The Edit Quota Settings page is displayed in the Operations Manager console.

5. Modify the properties of the selected quota on the storage system.

6. Click Update.

Changes to the settings are updated in the DataFabric Manager server.

7. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

Related references

Administrator roles and capabilities on page 506

Adding volumes to a group

You can add volumes to an existing group. You can also create a new group and add volumes to it.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.


Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Volumes view.

3. Select the volumes you want to add to the group and click Add to Group.

4. In the Add to Group dialog box, perform the appropriate action:

If you want to...                      Then...

Add the volumes to an existing group   Select the appropriate group.

Add the volumes to a new group         Type the name of the new group in the New Group field.

5. Click OK.

Result

The volumes are added to the group.

Related references

Administrator roles and capabilities on page 506

Adding qtrees to a group

You can add qtrees to an existing group. You can also create a new group and add qtrees to it.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the Qtrees view.

3. Select the qtrees you want to add to the group and click Add to Group.

4. In the Add to Group dialog box, perform the appropriate action:

If you want to...                     Then...

Add the qtrees to an existing group   Select the appropriate group.

Add the qtrees to a new group         Type the name of the new group in the New Group field.

5. Click OK.

140 | OnCommand Console Help

Result

The qtrees are added to the group.

Related references

Administrator roles and capabilities on page 506

Adding LUNs to a group

You can add LUNs to an existing group. You can also create a new group and add the LUNs to it.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, select the LUNs view.

3. Select the LUNs you want to add to the group and click Add to Group.

4. In the Add to Group dialog box, perform the appropriate action:

If you want to...                   Then...

Add the LUNs to an existing group   Select the appropriate group.

Add the LUNs to a new group         Type the name of the new group in the New Group field.

5. Click OK.

Result

The LUNs are added to the group.

Related references

Administrator roles and capabilities on page 506

Monitoring logical storage

Volume capacity thresholds and events

DataFabric Manager server features thresholds to help you monitor the capacity of flexible and traditional volumes. You can configure alarms to send notification whenever an event related to the capacity of a volume occurs. You can also take corrective actions based on the event generated. For the Volume Full threshold, you can configure an alarm to send notification only when the condition persists over a specified period.

By default, if you have configured an alarm to alert you to an event, the DataFabric Manager server issues the alarm only once per event. You can configure the alarm to repeat until it is acknowledged.

Note: If you want to set an alarm for a specific volume, you must create a group with that volume as the only member.

You can set the following volume capacity thresholds:

Volume Full Threshold (%)

Description: Specifies the percentage at which a volume is considered full.

Note: To reduce the number of Volume Full Threshold events generated, you can set the Volume Full Threshold Interval to a nonzero value. By default, the Volume Full Threshold Interval is set to zero. The Volume Full Threshold Interval specifies the time during which the condition must persist before the event is triggered. Therefore, if the condition persists for the specified time, DataFabric Manager server generates a Volume Full event.

• If the threshold interval is 0 seconds or a value less than the volume monitoring interval, DataFabric Manager server generates the Volume Full events.

• If the threshold interval is greater than the volume monitoring interval, DataFabric Manager server waits for the specified threshold interval, which includes two or more monitoring intervals, and generates a Volume Full event only if the condition persisted throughout the threshold interval.

For example, if the monitoring cycle time is 60 seconds and the threshold interval is 90 seconds, the threshold event is generated only if the condition persists for two monitoring intervals.

Default value: 90

Event generated: Volume Full

Event severity: Error

Corrective action

Perform one or more of the following actions:

• Ask your users to delete files that are no longer needed, to free disk space.

• For flexible volumes containing enough aggregate space, you can increase the volume size.

• For traditional volumes containing aggregates with limited space, you can increase the size of the volume by adding one or more disks to the aggregate.

Note: Add disks with caution. After you add a disk to an aggregate, you cannot remove it without destroying the volume and its aggregate.

• For traditional volumes, temporarily reduce the Snapshot copy reserve. By default, the reserve is 20 percent of the disk space. If the reserve is not in use, reducing it frees disk space, giving you more time to add a disk. There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. Therefore, it is important to maintain a large enough reserve for Snapshot copies. By maintaining the reserve for Snapshot copies, the active file system always has space available to create new files or modify existing ones. For more information about the Snapshot copy reserve, see the Data ONTAP Data Protection Online Backup and Recovery Guide.
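The threshold-interval behavior described above can be sketched as follows. This is an illustrative model only, not DataFabric Manager server code; the function and parameter names are hypothetical.

```python
def should_generate_volume_full_event(samples, threshold_pct=90,
                                      threshold_interval=0,
                                      monitoring_interval=60):
    """Decide whether a Volume Full event fires.

    samples: recent used-capacity percentages, one per monitoring
    cycle, newest last (all values are assumed illustrative).
    """
    if threshold_interval <= monitoring_interval:
        # Interval of 0 (the default) or no longer than one monitoring
        # cycle: a single sample over the threshold triggers the event.
        return samples[-1] >= threshold_pct
    # Otherwise the condition must persist across every monitoring
    # cycle covered by the threshold interval (rounded up).
    cycles = -(-threshold_interval // monitoring_interval)  # ceiling division
    recent = samples[-cycles:]
    return len(recent) >= cycles and all(s >= threshold_pct for s in recent)
```

With a 60-second monitoring cycle and a 90-second threshold interval, the event fires only when two consecutive samples are at or over the threshold, matching the example in the note above.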

Volume Nearly Full Threshold (%)

Description: Specifies the percentage at which a volume is considered nearly full.

Default value: 80. The value for this threshold must be lower than the value for the Volume Full Threshold in order for DataFabric Manager server to generate meaningful events.

Event generated: Volume Almost Full

Event severity: Warning

Corrective action

Perform one or more of the actions mentioned in Volume Full.

Volume Space Reserve Nearly Depleted Threshold (%)

Description: Specifies the percentage at which a volume is considered to have consumed most of its reserved blocks. This option applies to volumes with LUNs, Snapshot copies, no free blocks, and a fractional overwrite reserve of less than 100%. A volume that crosses this threshold is getting close to having write failures.

Default value: 80

Event generated: Volume Space Reservation Nearly Depleted

Event severity: Warning

Volume Space Reserve Depleted Threshold (%)

Description: Specifies the percentage at which a volume is considered to have consumed all its reserved blocks. This option applies to volumes with LUNs, Snapshot copies, no free blocks, and a fractional overwrite reserve of less than 100%. A volume that has crossed this threshold is getting dangerously close to having write failures.

Default value: 90

Event generated: Volume Space Reservation Depleted

Event severity: Error

When the status of a volume returns to normal after one of the preceding events, events with severity 'Normal' are generated. Normal events do not generate alarms or appear in default event lists, which display events of Warning or worse severity.

Volume Quota Overcommitted Threshold (%)

Description: Specifies the percentage at which a volume is considered to have consumed the whole of the overcommitted space for that volume.

Default value: 100

Event generated: Volume Quota Overcommitted

Event severity: Error

Corrective action

Perform one or more of the following actions:

• Create new free blocks by increasing the size of the volume that generated the event.

• Permanently free some of the occupied blocks in the volume by deleting unnecessary files.

Volume Quota Nearly Overcommitted Threshold (%)

Description: Specifies the percentage at which a volume is considered to have consumed most of the overcommitted space for that volume.

Default value: 95

Event generated: Volume Quota Almost Overcommitted

Event severity: Warning

Corrective action

Perform one or more of the actions mentioned in Volume Quota Overcommitted.
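The two quota-overcommitment thresholds above can be modeled as a simple comparison. The sketch below assumes, as the text implies, that overcommitment compares the total space promised through quotas to the volume's size; the function names and the definition of overcommitment are illustrative assumptions, not DataFabric Manager server internals.

```python
def quota_overcommitment_pct(quota_hard_limits_gb, volume_size_gb):
    """Percentage of the volume's capacity committed through quotas.

    A value over 100 means the quotas promise more space than the
    volume actually has (hypothetical definition for illustration).
    """
    return 100.0 * sum(quota_hard_limits_gb) / volume_size_gb


def overcommitment_event(pct, full_threshold=100, nearly_full_threshold=95):
    """Map an overcommitment percentage to the event named in the text."""
    if pct >= full_threshold:
        return "Volume Quota Overcommitted"         # severity: Error
    if pct >= nearly_full_threshold:
        return "Volume Quota Almost Overcommitted"  # severity: Warning
    return None
```

For example, three quotas of 40 GB, 40 GB, and 30 GB on a 100 GB volume commit 110% of its capacity, which would cross the default 100% Volume Quota Overcommitted threshold.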

Volume Growth Event Minimum Change (%)

Description: Specifies the minimum change in volume size (as a percentage of total volume size) that is acceptable. If the change in volume size is more than the specified value, and the growth is abnormal in relation to the volume-growth history, DataFabric Manager server generates a Volume Growth Abnormal event.

Default value: 1

Event generated: Volume Growth Abnormal

Volume Snap Reserve Full Threshold (%)

Description: Specifies the value (percentage) at which the space that is reserved for taking volume Snapshot copies is considered full.

Default value: 90

Event generated: Volume Snap Reserve Full

Event severity: Error

Corrective action: None

There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. If you disable the volume Snapshot autodelete option, it is important to maintain a large enough reserve so that there is always space available to create new files or modify existing ones. For instructions on how to identify Snapshot copies you can delete, see the Operations Manager Help.

User Quota Full Threshold (%)

Description: Specifies the value (percentage) at which a user is considered to have consumed all the allocated space (disk space or files used) as specified by the user quota. The user quota includes the hard limit in the /etc/quotas file. If this limit is exceeded, DataFabric Manager server generates a User Disk Space Quota Full event or a User Files Quota Full event.

Default value: 90

Event generated: User Quota Full

User Quota Nearly Full Threshold (%)

Description: Specifies the value (percentage) at which a user is considered to have consumed most of the allocated space (disk space or files used) as specified by the user quota. The user quota includes the hard limit in the /etc/quotas file. If this limit is exceeded, DataFabric Manager server generates a User Disk Space Quota Almost Full event or a User Files Quota Almost Full event.

Default value: 80

Event generated: User Quota Almost Full
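Both user-quota thresholds are evaluated against the hard limits defined in the storage system's /etc/quotas file. A minimal example of that file's columnar format is shown below; the user name, volume path, and limits are illustrative only, and you should confirm the exact syntax against your Data ONTAP documentation.

```
#Quota Target   type             disk   files  thold  sdisk  sfile
jdoe            user@/vol/vol1   500M   10K    -      -      -
*               user@/vol/vol1   100M   -      -      -      -
```

In this sketch, jdoe has a 500 MB disk hard limit and a 10,000-file hard limit on vol1, and all other users default to 100 MB. With the default thresholds above, jdoe's disk usage would be expected to trigger a User Disk Space Quota Almost Full event at about 400 MB (80 percent) and a User Disk Space Quota Full event at about 450 MB (90 percent).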

Volume No First Snapshot Threshold (%)

Description: Specifies the value (percentage) at which a volume is considered to have consumed all the free space for its space reservation. This is the space that the volume needs when the first Snapshot copy is created.

This option applies to volumes that contain space-reserved files, no Snapshot copies, a fractional overwrite reserve set to greater than 0, and where the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.

Default value: 90

Event generated: Volume No First Snapshot

Volume Nearly No First Snapshot Threshold (%)

Description: Specifies the value (percentage) at which a volume is considered to have consumed most of the free space for its space reservation. This is the space that the volume needs when the first Snapshot copy is created.

This option applies to volumes that contain space-reserved files, no Snapshot copies, a fractional overwrite reserve set to greater than 0, and where the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.

Default value: 80

Event generated: Volume Almost No First Snapshot

Note: When a traditional volume is created, it is tightly coupled with its containing aggregate so that its capacity is determined by the capacity of the aggregate. For this reason, you should synchronize the capacity thresholds of traditional volumes with the thresholds of their containing aggregates.

Related information

Data ONTAP Data Protection Online Backup and Recovery Guide - now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Qtree capacity thresholds and events

The OnCommand console enables you to monitor qtree capacity and set alarms. You can also take corrective actions based on the event generated.

DataFabric Manager server features thresholds to help you monitor the capacity of qtrees. Quotas must be enabled on the storage systems. You can configure alarms to send notification whenever an event related to the capacity of a qtree occurs.

By default, if you have configured an alarm to alert you to an event, the DataFabric Manager server issues the alarm only once per event. You can configure the alarm to continue to alert you with events until it is acknowledged. For the Qtree Full threshold, you can also configure an alarm to send notification only when the condition persists over a specified period.

Note: If you want to set an alarm for a specific qtree, you must create a group with that qtree as the only member.

You can set the following qtree capacity thresholds:

Qtree Full (%)

Description: Specifies the percentage at which a qtree is considered full.

Note: To reduce the number of Qtree Full Threshold events generated, you can set a Qtree Full Threshold Interval to a nonzero value. By default, the Qtree Full Threshold Interval is set to zero. The Qtree Full Threshold Interval specifies the time during which the condition must persist before the event is generated. If the condition persists for the specified amount of time, DataFabric Manager server generates a Qtree Full event.

• If the threshold interval is 0 seconds or a value less than the volume monitoring interval, DataFabric Manager server generates Qtree Full events.

• If the threshold interval is greater than the volume monitoring interval, DataFabric Manager server waits for the specified threshold interval, which includes two or more monitoring intervals, and generates a Qtree Full event only if the condition persisted throughout the threshold interval.

For example, if the monitoring cycle time is 60 seconds and the threshold interval is 90 seconds, the threshold event is generated only if the condition persists for two monitoring intervals.

Default value: 90 percent

Event generated: Qtree Full

Event severity: Error

Corrective action

Perform one or more of the following actions:

• Ask users to delete files that are no longer needed, to free disk space.
• Increase the hard disk space quota for the qtree.

Qtree Nearly Full Threshold (%)

Description: Specifies the percentage at which a qtree is considered nearly full.

Default value: 80 percent

Event severity: Warning

Corrective action

Perform one or more of the following actions:

• Ask users to delete files that are no longer needed, to free disk space.
• Increase the hard disk space quota for the qtree.

Related information

Data ONTAP Data Protection Online Backup and Recovery Guide - now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Why Snapshot copies are monitored

The Snapshot copy monitoring and space management features help you monitor and generate reports on Snapshot copies and how they influence your space management strategy.

By using DataFabric Manager server, you can determine the following information about Snapshot copies:

• How much aggregate and volume space is used for Snapshot copies?
• Is there adequate space for the first Snapshot copy?
• Which Snapshot copies can be deleted?
• Which volumes have high Snapshot copy growth rates?
• Which volumes have Snapshot copy reserves that are nearing capacity?

Viewing the volume inventory

You can use the Volumes view to monitor your inventory of volumes and view information about volume properties, related storage objects, capacity graphs, space usage details, and protection details.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, click Volumes.

Result

The list of volumes is displayed.

Related references

Administrator roles and capabilities on page 506

Viewing the LUN inventory

You can use the LUNs view to monitor your inventory of LUNs and view information about LUN properties, related storage objects, and capacity graphs.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, click LUNs.

Result

The list of LUNs is displayed.

Related references

Administrator roles and capabilities on page 506

Viewing the qtree inventory

You can use the Qtrees view to monitor your inventory of qtrees and view information about qtree properties, related storage objects, and capacity graphs.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, click Qtrees.

Result

The list of qtrees is displayed.

Related references

Administrator roles and capabilities on page 506

Viewing the quota settings inventory

You can use the Quota Settings view to monitor your inventory of quotas and the properties of the quotas.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, click Quota Settings.

Result

The list of quotas is displayed.

Related references

Administrator roles and capabilities on page 506

Viewing the Snapshot copies inventory

You can use the Snapshot Copies view to monitor your inventory of Snapshot copies, and view properties of Snapshot copies and information about related storage objects.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Storage option.

2. In the Storage tab, click Snapshot Copies.

Result

The list of Snapshot copies is displayed.

Related references

Administrator roles and capabilities on page 506

Page descriptions

Volumes view

The Volumes view displays detailed information about the volumes in the storage systems that are monitored, as well as their related objects, and also enables you to perform tasks such as editing the volume settings, grouping the volumes, and refreshing monitoring samples.

• Breadcrumb trail on page 151
• Command buttons on page 151
• List view on page 152
• Map view on page 153
• Overview tab on page 156
• Space Breakout tab on page 156
• Capacity tab on page 156
• Protection tab on page 156
• Storage Server tab on page 156
• Graph tab on page 156
• Related Objects pane on page 157

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command buttons

The command buttons enable you to perform the following tasks for a selected volume:

Edit Launches the Edit Volume Quota Settings in the Operations Manager console. You can modify the storage capacity threshold settings for a specific volume from this page.

Delete Deletes the selected volume.

Add to Group Displays the Add to Group dialog box, which enables you to add the selected volume to the destination group.

Refresh Monitoring Samples Refreshes the database sample of the selected volume and enables you to view the updated details.

More Actions • View Events: Displays the events associated with the storage system in the Events tab. You can sort the information based on the event severity, source ID, date of event trigger, and state.

Refresh Refreshes the list of volumes.

Grid ( ) Displays the list view of volumes.

Note: You can add a volume to a group, refresh monitoring samples, modify volume settings, view events for a volume, and delete a volume by right-clicking the selected volume.

TreeMap ( ) Displays the map view of volumes.

List view

The List view displays, in tabular format, the properties of all the discovered volumes. You can customize your view of the data by clicking the column filters.

You can double-click a volume to display its child objects. The breadcrumb trail is modified to display the selected volume.

ID Displays the volume ID. By default, this column is hidden.

Name Displays the name of the volume.

Aggregate Displays the name of the aggregate that contains the volume.

Storage Server Displays the parent of a storage object such as a volume: a storage system running Data ONTAP 7-Mode, a vFiler unit, or a Vserver.

Type Displays the type of volume selected.

Block Type Displays the block format of the volume as 32-bit or 64-bit.

RAID Displays the RAID protection scheme. The RAID protection scheme can be one of the following:

RAID 0 All the raid groups in the volume are of type raid0.

RAID 4 All the raid groups in the volume are of type raid4.

RAID DP All the raid groups in the volume are of type raid_dp.

Mirrored RAID 0 All the raid groups in the mirrored volume are of type raid0.

Mirrored RAID 4 All the raid groups in the mirrored volume are of type raid4.

Mirrored RAID DP All the raid groups in the mirrored volume are of type raid_dp.

State Displays the current state of a volume. The state can be Online, Offline, Initializing, Failed, Restricted, Partial, or Unknown.

Status Displays the current status of a volume. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Used Capacity (GB) Displays the amount of space used in the volume.

Total Capacity (GB) Displays the total space of the volume.

Aggregate ID Displays the ID of the aggregate. By default, this column is hidden.

Host ID Displays the ID of the host to which the volume is related. By default, this column is hidden.

Map view

The Map view enables you to view the properties of the volumes, which are displayed as rectangles with different sizes and colors. The size and color of the rectangles are based on the options you select for the Size and Color fields in the properties area.

Volume Filter Enables you to display capacity and status information about the volumes in varying rectangle sizes and colors:

Size Specifies the size of the rectangle based on the option you select from the drop-down list. You can select one of the following options:

• Used % (default): The percentage of space used in the volume. The size of the rectangle increases when the value for used space increases.

• Available %: The percentage of space available in the volume. The size of the rectangle increases when the value for available space increases.

• Growth Rate: The rate at which data in the volume is growing. The size of the rectangle increases when the value for growth rate increases.

• Near to Max Size: The threshold value specified to generate an alert before the volume reaches maximum size. The size of the rectangle increases when the value for near to maximum size increases.

• Inode Used %: The percentage of inode space used in the volume. The size of the rectangle increases when the value for inode used % increases.

• Days to Max Size: The number of days needed for the volume to reach maximum size. The size of the rectangle increases as the number of days to maximum size increases.

• Snapshot Used %: The percentage of space used in the Snapshot copy. The size of the rectangle increases when the value for Snapshot used % increases.

• Saved %: The percentage of space saved in the volume. The size of the rectangle increases when the value for saved % increases.

• Used Capacity: The amount of physical space (in GB) used by application or user data in the volume. The size of the rectangle increases when the value for used capacity increases.

• Available Capacity: The amount of physical space (in GB) that is available in the volume. The size of the rectangle increases when the value for available capacity increases.

• Saved Capacity: The amount of space (in GB) saved in the volume. The size of the rectangle increases when the value for saved capacity increases.

• Available Snapshot Reserve: The amount of Snapshot reserve space (in GB) available in the volume. The size of the rectangle increases when the value for available Snapshot reserve increases.

• Status: The current status of the volume based on the events generated. The size of the rectangle varies from large to small in the following order: Emergency, Critical, Error, Warning, Normal, and Unknown. For example, a volume with an Emergency status is displayed as a larger rectangle than a volume with a Critical status.

Color Specifies the color of the rectangle based on the option you select from the drop-down list. You can select one of the following options:

• Status (default): The current status of the volume based on the events generated. Each status displays a specific color: Emergency, Critical, Error, Warning, Normal, and Unknown.

• Used %: The percentage of space used in the volume. The color displayed varies based on the following conditions:

  • If the used space in the volume is less than the Nearly Full Threshold value of the volume, the color displayed is green. When the used space reduces, the green color changes to a darker shade.

  • If the used space in the volume exceeds the Nearly Full Threshold value but is less than the Full Threshold value of the volume, the color displayed is orange. When the used space reduces, the orange color changes to a lighter shade.

  • If the used space in the volume exceeds the Full Threshold value of the volume, the color displayed is red. When the used space reduces, the red color changes to a lighter shade.

• Available %: The percentage of space available in the volume. The color varies based on the following conditions:

  • If the available space in the volume exceeds the Full Threshold value of the volume, the color displayed is green. When the available space reduces, the green color changes to a lighter shade.

  • If the available space in the volume is less than the Full Threshold value but exceeds the Nearly Full Threshold value of the volume, the color displayed is orange. When the available space reduces, the orange color changes to a darker shade.

  • If the available space in the volume is less than the Nearly Full Threshold value of the volume, the color displayed is red. When the available space reduces, the red color changes to a darker shade.

• Saved Capacity: The amount of space (in GB) saved in the storage controller. The color displayed is blue. When the saved capacity reduces, the blue color changes to a lighter shade.

• Available Snapshot Reserve: The amount of Snapshot reserve space (in GB) available in the volume. The color displayed is blue. When the available Snapshot reserve space reduces, the blue color changes to a lighter shade.

• Inode Used %: The percentage of inode space used in the volume. The color varies based on the following conditions:

  • If the inode used space in the volume is less than the nearly full threshold value, the color displayed is green. When the inode used space reduces, the green color changes to a darker shade.

  • If the inode used space in the volume exceeds the nearly full threshold value but is less than the full threshold value, the color displayed is orange. When the inode used space reduces, the orange color changes to a lighter shade.

  • If the inode used space in the volume exceeds the full threshold value, the color displayed is red. When the inode used space reduces, the red color changes to a lighter shade.

  Note: The threshold values for inode used % are defined by the DataFabric Manager server.

• Auto Size: If the containing aggregate has sufficient space, the volume can automatically increase to a maximum size. An icon indicates whether auto size is enabled or disabled.

• Snapshot Overflow: The amount of additional space (in GB) used by the Snapshot copies apart from the allocated Snapshot reserve space. The color displayed is blue. When the Snapshot overflow reduces, the blue color changes to a lighter shade.
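The Used % color mapping described above reduces to a simple threshold comparison. The sketch below is illustrative only; the function name is hypothetical, and the defaults correspond to the volume Nearly Full (80 percent) and Full (90 percent) threshold defaults documented earlier.

```python
def used_pct_color(used_pct, nearly_full_threshold=80, full_threshold=90):
    """Map a volume's used-space percentage to its Map view color.

    Thresholds are the volume's Nearly Full and Full settings;
    boundary handling here is an assumption, not documented behavior.
    """
    if used_pct > full_threshold:
        return "red"      # exceeds the Full Threshold value
    if used_pct > nearly_full_threshold:
        return "orange"   # between Nearly Full and Full
    return "green"        # below the Nearly Full Threshold value
```

The Available % mapping is the mirror image of this function, with green and red swapped relative to the same two thresholds.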

General Enables you to filter volumes based on name, status, or both.

Note: You can filter by entering regular expressions instead of the full name of the volume. For example, xyz* lists all the volumes that begin with the name xyz.

Capacity Enables you to filter storage objects based on the growth rate, used capacity, available capacity, saved capacity, and so on. You can specify the capacity range by dragging the sliders.

Overview tab

The Overview tab displays details about the selected volume, such as storage space used, and the SRM path of the volume.

Full Name Displays the full name of the volume.

Storage Server Displays the name of the storage server that contains the volume. You can view more information about the storage server by clicking the link.

Total Capacity Displays the total amount of space available in the volume to store data.

Quota Committed Space Displays the space reserved in the quotas.

SRM Path Displays the SRM path to which the volume is mapped.

Space Breakout tab

The Space Breakout tab displays the percentage of space used for Snapshot copies and volume data. It also indicates the size of the Snapshot reserve as a proportion of total volume size.

Capacity tab

The Capacity tab displays the number of qtrees, LUNs, and Snapshot copies the volume contains, including the amount of capacity currently in use.

Protection tab

Displays information about the SnapMirror relationships of the volume and whether scheduled Snapshot copies are enabled.

Storage Server tab

Displays information about the total volume size and amount of space used in the volume.

Note: If multiple volumes belonging to different storage server types are selected, empty labels are displayed.

Graph tab

The Graph tab visually represents the performance characteristics of a volume. You can select the graph you want to view from the drop-down list.

You can view the graphs representing a selected time period, such as one day, one week, one month, three months, or one year. You can also click to export graph details, such as used capacity trend, used capacity, total capacity, and space savings achieved through deduplication.

Related Objects pane

The Related Objects section enables you to view and navigate to the groups, storage controllers, aggregates, qtrees, LUNs, Snapshot copies, datasets, and datastores related to the volume.

Groups Displays the groups to which the volumes belong.

Storage Controllers Displays the storage controllers that contain the selected volume.

Aggregates Displays the aggregates that contain the selected volume.

Qtrees Displays the qtrees in the selected volume.

LUNs Displays the LUNs in the selected volume.

Snapshot Copies Displays the Snapshot copies in the volume.

Datasets Displays the datasets in the volume.

Datastores Displays the datastores in the volume.

Related references

Window layout customization on page 16

LUNs view

The LUNs view displays detailed information about the LUNs that are monitored, as well as their related objects, and also enables you to perform tasks such as editing the LUN path settings, grouping the LUNs, and refreshing monitoring samples.

• Breadcrumb trail on page 157
• Command buttons on page 157
• List view on page 158
• Overview tab on page 158
• Graph tab on page 159
• Related Objects pane on page 159

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command buttons

The command buttons enable you to perform the following tasks for a selected LUN:


Edit Launches the Edit LUN Path Settings page in the Operations Manager console. You can modify the attributes for a selected LUN from this page.

Delete Deletes the selected LUN.

Add to Group Displays the Add to Group dialog box, which enables you to add the selected LUN to the destination group.

Refresh Monitoring Samples Refreshes the database sample for the selected LUN and enables you to view the updated details.

More Actions • View Events
Displays the events associated with the LUN in the Events tab. You can sort the information based on the event severity, source ID, date of event trigger, and state.

Refresh Refreshes the list of LUNs.

Note: You can add a LUN to a group, refresh monitoring samples, modify LUN path settings, view events for a LUN, and delete a LUN by right-clicking the selected LUN.

List view

The List view displays, in tabular format, the properties of all the discovered LUNs. You can customize your view of the data by clicking the column filters.

ID Displays the LUN ID. By default, this column is hidden.

LUN Path Displays the path to the LUN, including the volume and qtree name.

Initiator Group Specifies the initiator group (igroup) to which the LUN is mapped.

Description Displays the description you provide when creating the LUN on your storage system.

Size (GB) Displays the size of the LUN.

Storage Server Displays the name of the storage controller or vFiler unit on which the LUN resides.

Status Displays the current status of a LUN. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

File System ID Displays the ID of the file system that contains the LUN. By default, this column is hidden.

Overview tab

The Overview tab displays details about the selected LUN, such as SRM path, size, connected ports, and host details.


Full Path Displays the complete path to the LUN.

Size Displays the size of the LUN.

Contained File System Displays the name of the file system (volume or qtree) on which this LUN resides.

Mapped To Displays the following information about the LUN that is exported:

• The initiator group (igroup) to which the LUN is mapped.
• The total number of LUNs exported from the storage system.
This is represented as igroup_name(n), where n is the total number of LUNs exported.

Serial Number Displays the serial number of the LUN.

SRM Path Displays the SRM path to which the LUN is mapped. You can modify the SRM path by clicking the link.

SAN Host Displays the monitored host in a SAN that initiates requests to the storage systems to perform tasks.

Space Reservation Enabled Displays whether space reservation is enabled.

HBA Port Specifies the HBA ports that SAN hosts use to connect to each other in a SAN environment.

Graph tab

The Graph tab visually represents the performance characteristics of a LUN. You can select the graph you want to view from the drop-down list.

You can view the graphs representing a selected time period, such as one day, one week, one month, three months, or one year. You can also click the export icon to export graph details, such as LUN bytes read per second and LUN bytes written per second.

Related Objects pane

The Related Objects section enables you to view and navigate to the groups and storage objects related to the LUN.

Groups Displays the groups to which the selected LUN belongs.

Volumes Displays the volumes that contain the selected LUN.

Qtrees Displays the qtrees that contain the selected LUN.

Datastores Displays the datastores that reside on the selected LUN.


Related references

Window layout customization on page 16

Qtrees view

The Qtrees view displays detailed information about the qtrees that are monitored, as well as their related objects, and also enables you to perform tasks, such as editing the qtree settings, grouping the qtrees, and refreshing monitoring samples.

• Breadcrumb trail on page 160
• Command buttons on page 160
• List view on page 161
• Overview tab on page 161
• Graph tab on page 162
• Related Objects pane on page 162

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command buttons

The command buttons enable you to perform the following tasks for a selected qtree:

Edit Launches the Edit Qtree Settings page in the Operations Manager console. You can modify the capacity threshold settings for a selected qtree from this page.

Delete Deletes the selected qtree.

Add to Group Displays the Add to Group dialog box, which enables you to add the selected qtree to the destination group.

Refresh Monitoring Samples Refreshes the database samples of the qtree and enables you to view the updated details.

More Actions • View Events
Displays the events associated with the qtree in the Events tab. You can sort the information based on the event severity, source ID, date of event trigger, and state.

Refresh Refreshes the list of qtrees.


Note: You can add a qtree to a group, refresh monitoring samples, modify qtree settings, view events for a qtree, and delete a qtree by right-clicking the selected qtree.

List view

The List view displays, in tabular format, the properties of all the discovered qtrees. You can customize your view of the data by clicking the column filters.

You can double-click a qtree to display its child objects. The breadcrumb trail is modified to display the selected qtree.

ID Displays the qtree ID. By default, this column is hidden.

Qtree Name Displays the name of the qtree.

Storage Server Displays the name of the storage controller or vFiler unit containing the qtree.

Volume Displays the name of the volume containing the qtree.

Status Displays the current status of a qtree. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Quota Limit (MB) Displays the quota limit, if any.

Volume ID Displays the ID of the volume that contains the qtree. By default, this column is hidden.

Aggregate ID Displays the ID of the aggregate that contains the volume which, in turn, contains the qtree. By default, this column is hidden.

Storage Path Type Displays the direct or indirect storage path type of the qtree. By default, this column is hidden.

Overview tab

The Overview tab displays details about the selected qtree, such as storage capacity, SnapMirror relationships, and the SRM path.

Full Path Displays the complete path of the qtree.

SnapMirror Indicates whether the qtree is a source or destination in a SnapMirror relationship.

Days to Full Displays the estimated amount of time left before the storage space is full.

Scheduled Snapshot Copies Displays information about the SnapMirror relationships of the qtree and whether scheduled Snapshot copies are enabled.

SnapVault Displays whether or not the qtree is backed up. If the qtree is backed up, the SnapVault destination is displayed.


Last SnapVaulted Displays the date and time when the qtree was last backed up using SnapVault.

SRM Path Displays the SRM path to which the qtree is mapped.

Quota Limit Displays the quota limit, if any.

Used Capacity Displays the total capacity allocated and used by the qtree.

Daily Growth Rate Displays the change in the disk space (number of bytes) used in the qtree if the amount of change between the last two samples continues for 24 hours.

Growth Rate (%) Displays the change in the amount of used space in the qtree reserve.
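The Daily Growth Rate and Days to Full fields describe a straightforward extrapolation from the last two monitoring samples. A minimal sketch of that arithmetic (the function names and sample values are illustrative, not the console's actual implementation):

```python
def daily_growth_rate(used_prev, used_now, hours_between_samples):
    """Extrapolate the change between the last two samples to a 24-hour period."""
    return (used_now - used_prev) * (24.0 / hours_between_samples)

def days_to_full(total_capacity, used_now, growth_per_day):
    """Estimate the days remaining before the storage space is full."""
    if growth_per_day <= 0:
        return float("inf")  # usage is flat or shrinking; the space never fills
    return (total_capacity - used_now) / growth_per_day

# Two samples taken 12 hours apart, values in MB
growth = daily_growth_rate(used_prev=400, used_now=450, hours_between_samples=12)
days = days_to_full(total_capacity=1000, used_now=450, growth_per_day=growth)
```

With these sample numbers, a 50 MB change over 12 hours extrapolates to 100 MB per day, leaving 5.5 days before the 1000 MB space is full.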

Graph tab

The Graph tab visually represents the performance characteristics of a qtree. You can select the graphs you want to view from the drop-down list.

You can view the graphs representing a selected time period, such as one day, one week, one month, three months, or one year. You can also click the export icon to export graph details, such as used capacity trend and used capacity.

Related Objects pane

The Related Objects section enables you to view and navigate to the groups, volumes, LUNs, datasets, and datastores related to the qtree.

Groups Displays the groups to which the qtree belongs.

Volumes Displays the volumes that contain the selected qtree.

LUNs Displays the LUNs in the selected qtree.

Datasets Displays the datasets in the selected qtree.

Datastores Displays the datastores in the selected qtree.

Related references

Window layout customization on page 16

Quota Settings view

The Quota Settings view enables you to view detailed information about the user and user group quota settings, and also enables you to perform tasks such as editing the quota settings and refreshing the list of quotas.

• Breadcrumb trail on page 163
• Command buttons on page 163


• List view on page 163

Breadcrumb trail

The breadcrumb trail is created as you navigate from one list of storage objects to another. As you navigate, each time you double-click certain items in these lists, another “breadcrumb” is added to the “trail,” providing a string of hyperlinks that captures your navigation history. If you want to revisit a previously viewed list, you can click the breadcrumb that links to it. You can also click the icon in the breadcrumb to display a sub-navigation list of objects associated with the hyperlinked object.

Command buttons

The command buttons enable you to perform the following tasks for a selected quota:

Edit Launches the Edit Quota Settings page in the Operations Manager console. You can edit the selected quota from this page.

Refresh Refreshes the list of user and user group quotas.

Note: You can also modify the settings for a quota by right-clicking the selected quota.

List view

The List view displays, in tabular format, the properties of all the discovered user and user group quotas. You can customize your view of the data by clicking the column filters.

ID Displays the ID of the quota. By default, this column is hidden.

User Name Displays the name of the user or user group.

File System Displays the name and path of the volume or qtree on which the user quota resides.

Type Displays the type of quota. The quota can be either a user quota or a group quota.

Status Displays the current status of the quotas. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Disk Space Used (MB) Displays the total amount of disk space used.

Disk Space Threshold (MB) Displays the disk space threshold as specified in the /etc/quotas file of the storage system.

Disk Space Soft Limit (MB) Displays the soft limit on disk space as specified in the /etc/quotas file of the storage system.

Disk Space Hard Limit (MB) Displays the hard limit on disk space as specified in the /etc/quotas file of the storage system.


Disk Space Used (%) Displays the percentage of disk space used.

Files Used Displays the total number of files used.

Files Soft Limit Displays the soft limit on files as specified in the /etc/quotas file of the storage system.

Files Hard Limit Displays the hard limit on files as specified in the /etc/quotas file of the storage system.

Files Used (%) Displays the percentage of files used.
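The threshold, soft-limit, and hard-limit columns above are read from the storage system's /etc/quotas file. A hypothetical illustration of such entries (the names and values are invented; see the Data ONTAP documentation for the authoritative column format):

```
#Quota Target   Type             Disk   Files   Thold   Sdisk   Sfile
jdoe            user@/vol/vol1   500M   10K     450M    400M    8K
eng             group@/vol/vol1  750M   85K     -       -       -
```

Here Disk and Files are the hard limits, Thold is the disk space threshold, and Sdisk and Sfile are the soft limits; a dash leaves a limit unset.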

Related references

Window layout customization on page 16

Snapshot Copies view

The Snapshot Copies view enables you to view detailed information about Snapshot copies, related objects, and the storage servers with which the Snapshot copies are associated.

• Command button on page 164
• List view on page 164
• Related objects pane on page 165

Command button

The command button enables you to perform the following task for a selected Snapshot copy:

Refresh Refreshes the list of Snapshot copies.

List view

The List view displays, in tabular format, the properties of all the discovered Snapshot copies. You can customize your view of the data by clicking the column filters.

ID Displays the Snapshot copy ID. By default, this column is hidden.

Name Displays the name of the Snapshot copy.

Volume Displays the name of the volume that contains the Snapshot copy.

Aggregate Displays the name of the aggregate that contains the Snapshot copy.

Storage Server Displays the name of the storage controller, vFiler unit, or Vserver that contains the Snapshot copy.

Access Time Displays the time when the Snapshot copy was last accessed.

Dependency Displays the names of the applications that are accessing the Snapshot copy (for example, SnapMirror), if any.


Related objects pane

The Related objects section enables you to view and navigate to the aggregates and volumes related to the Snapshot copies.

Aggregates Displays the aggregates that contain the Snapshot copy.

Volumes Displays the volumes that contain the Snapshot copy.

Related references

Window layout customization on page 16


Policies

Local policies

Understanding local policies

Local policy and backup of a dataset's virtual objects

A dataset's local policy in the OnCommand console enables you to specify the start times, stop times, frequency, retention time, and warning and error event thresholds for local backups of its VMware or Hyper-V virtual objects.

What local protection of virtual objects is

Local protection of a dataset's virtual objects consists of the OnCommand console making Snapshot copies of the VMware virtual objects or Hyper-V virtual objects that reside as images on your storage systems and saving those Snapshot copies as backup copies locally on the same storage systems.

In case of data loss or corruption due to user or software error, you can restore the lost or damaged virtual object data from saved local Snapshot copies as long as the primary storage systems on which the virtual objects reside remain intact and operating.

What a local policy is

A local policy is a configured combination of Snapshot copy schedules, retention times, warning threshold, and error threshold levels that you can assign to a dataset. After you assign a local policy to a dataset, that policy applies to all the virtual objects that are included in that dataset.

The OnCommand console allows you to configure multiple local policies with different settings from which you can select one to assign to a dataset.

You can also use policies supplied by the OnCommand console.

Local protection and remote protection of virtual objects

After Snapshot copies of a dataset's virtual objects are generated as local backup, remote protection operations that are specified in an assigned storage service configuration can save these backup copies to secondary storage.

Secondary or tertiary protection of virtual objects cannot be accomplished unless Snapshot copies have been generated on the primary node by backup jobs carried out on demand or through local policy.


Local policies supplied by the OnCommand console

To simplify the task of providing local protection to virtual objects in a dataset, the OnCommand console provides a set of local policies (preconfigured combinations of local backup schedules, local backup retention settings, lag warning and lag error thresholds, and optional backup scripts) specifically to support local backup of certain types of data.

The preconfigured local policies include the following set:

VMware local backup policy template

This default policy enforces the following VMware environment-optimized settings related to local backup scheduling and retention. This policy can also be renamed and modified to accommodate different circumstances.

• Hourly backups without VMware snapshot (crash consistent) every hour between 7 am and 7 pm every day, including weekends
• Daily backups with VMware snapshot at 10 PM every night, including weekends
• Weekly backup with VMware snapshot every Sunday at midnight
• Retention settings: Hourly backups for 2 days, Daily backups for 1 week, Weekly backups for 2 weeks
• Issue a warning if there are no backups for: 1.5 days
• Issue an error if there are no backups for: 2 days
• Backup script path: empty

Hyper-V local backup policy template

This default policy enforces the following Hyper-V environment-optimized settings related to local backup scheduling and retention. This policy can also be renamed and modified to accommodate different circumstances.

• Hourly backups every hour between 7 am and 7 pm every day, including weekends
• Daily backups at 10 PM every night, including weekends
• Weekly backup every Sunday at midnight
• Retention settings: Hourly backups for 2 days, Daily backups for 1 week, Weekly backups for 2 weeks
• Issue a warning if there are no backups for: 1.5 days
• Issue an error if there are no backups for: 2 days
• Skip backups that will cause virtual machines to go offline
• Start remote backup after local backup
• Backup script path: empty
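Both templates share the same retention settings, which work as a minimum-retention rule per backup copy. A rough sketch of that rule (the function is only an illustration, not the console's implementation; the retention values mirror the templates above):

```python
from datetime import datetime, timedelta

# Retention periods per schedule type, as in the supplied policy templates
RETENTION = {"Hourly": timedelta(days=2),
             "Daily": timedelta(weeks=1),
             "Weekly": timedelta(weeks=2)}

def eligible_for_purge(schedule_type, created_at, now):
    """A copy is kept at least its type's retention period before it may be purged."""
    return now - created_at >= RETENTION[schedule_type]

now = datetime(2011, 7, 15, 12, 0)
hourly_purgeable = eligible_for_purge("Hourly", datetime(2011, 7, 12, 12, 0), now)  # 3 days old
daily_purgeable = eligible_for_purge("Daily", datetime(2011, 7, 12, 12, 0), now)    # inside 1 week
```

The same three-day-old copy is purgeable under the Hourly retention (2 days) but still retained under the Daily retention (1 week).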


Guidelines for configuring a local policy

Before you use the Create Local Policy dialog box or Edit Local Policy dialog box to create or edit a local policy, you must make some decisions about the policy to input in your command selection and in the configuration page.

Virtual object type

To open the Create Local Policy dialog box, you specify with the Create command whether the local policy you want to create is for a dataset of VMware objects or for a dataset of Hyper-V objects.

Name

Name Your company might have a naming convention for policies. When specifying a name for a new policy, make sure you follow that convention.

Description Use a description that helps someone unfamiliar with the policy to understand its purpose.

Schedule and Retention

Schedule You can set up multiple schedules of multiple types (Hourly, Daily, Weekly) for your local backups. Each schedule has a start time, a stop time, and a frequency with which its backups are executed.

If you intend to implement local backups on multiple datasets of Hyper-V objects that are associated with the same Hyper-V server, you must configure separate local policies with non-overlapping schedules to assign to each dataset.

Retention You can specify a retention period to be associated with each type of backup schedule (Hourly, Daily, or Weekly). A retention period specifies the minimum length of time that a backup copy is maintained before it is eligible to be purged. A retention period assigned to one type of backup schedule applies to all backup copies of that type.

Backup Options Depending on the type of virtual objects the dataset contains, you can enable additional operations to be performed on those objects during backup.

• Create VMware Snapshots
Creates VMware quiesced snapshots before taking the storage system Snapshot copies during local backup operations (displayed for datasets of VMware objects).

• Include independent disks
Includes independent VMDKs in local backups associated with this schedule for this dataset (displayed for datasets of VMware objects only).

• Allow saved state backups
Allows the local backup of a dataset of virtual machines to proceed even if some of the virtual machines in that dataset are in saved state or shut down. Virtual machines in saved state or that are shut down receive saved state or offline backup. Performing a saved state or offline backup can cause downtime (displayed for datasets of Hyper-V objects). If this option is not selected, encountering a virtual machine that is in saved state or that is shut down fails the dataset backup.

• Start remote backup after local backup
Starts a remote backup of data to secondary storage after the local backup is finished, if a storage service that specifies a remote backup is also assigned to the dataset.
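The saved state behavior described above amounts to a per-virtual-machine decision made during a dataset backup. A hedged sketch of that decision (the state names and return strings are invented for illustration):

```python
def plan_vm_backup(vm_state, allow_saved_state_backups):
    """Decide how one Hyper-V virtual machine is handled during a dataset backup."""
    if vm_state == "running":
        return "online backup"
    # The VM is in saved state or shut down
    if allow_saved_state_backups:
        return "saved state or offline backup"  # may cause downtime
    return "fail dataset backup"

decision = plan_vm_backup("saved", allow_saved_state_backups=False)
```

With the option cleared, a single saved state or shut down virtual machine fails the whole dataset backup, which is why the option matters for mixed-state datasets.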

Backup Settings

Issue a warning if there is no backup for: Decide how long the OnCommand console waits before issuing a warning event if no local backup has successfully finished during that time.

Issue an error if there is no backup for: Decide how long the OnCommand console waits before issuing an error event if no local backup has successfully finished during that time.

Backup script path: You can specify a path to a backup script (located on the system on which the host service runs) to specify additional operations to be executed with the local backup. If you use a PowerShell script, you should use the drive letter convention. For other types of scripts, you can use either the drive letter convention or the Universal Naming Convention.
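Taken together, the warning and error settings classify the lag since the last successful local backup. A minimal sketch of that classification (the thresholds match the supplied policy templates; the function itself is illustrative):

```python
from datetime import datetime, timedelta

def backup_lag_event(last_backup, now, warn_after, error_after):
    """Return the event level for the time elapsed since the last successful backup."""
    lag = now - last_backup
    if lag >= error_after:
        return "error"
    if lag >= warn_after:
        return "warning"
    return "normal"

level = backup_lag_event(last_backup=datetime(2011, 7, 13, 18, 0),
                         now=datetime(2011, 7, 15, 12, 0),
                         warn_after=timedelta(days=1.5),  # 36 hours
                         error_after=timedelta(days=2))   # 48 hours
```

A 42-hour gap falls between the 36-hour warning threshold and the 48-hour error threshold, so a warning event is raised.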

Configuring local policies

Adding a local policy

You can create local policies to schedule local backup jobs and designate retention periods for the local backup copies for datasets of virtual objects.

Before you begin

You must have reviewed the Guidelines for configuring a local policy on page 169.

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

The Create Local Policy dialog box enables you to create a new local policy, or add a preconfigured local policy supplied by the OnCommand console.


Steps

1. Click the View menu and click the Policies option.

2. Click Create in the Policies tab and select the option for the appropriate local policy type.

3. In the Create Local Policy dialog box, select the Name option and enter the requested information in the associated content area.

4. Select the Schedule and Retention option and make your selections in the associated content area.

5. Select the Backup Settings option and make your selections in the associated content area.

You can also change this information for this policy at a later time.

6. After you enter your desired amount of information about this policy, click OK.

Result

The OnCommand console creates your new policy and lists it in the Policies tab.

Related concepts

Guidelines for configuring a local policy on page 169

Related references

Administrator roles and capabilities on page 506

Name and Description area on page 177

Schedule and Retention area on page 177

Backup Settings area on page 179

Editing a local policy

You can edit an existing local policy if you want to modify the name or description, schedule and retention times, or the backup options assigned to the policy.

Before you begin

You must have reviewed the Guidelines for configuring a local policy on page 169.

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

The Edit Local Policy dialog box enables you to modify an existing local policy.


Steps

1. Click the View menu and click the Policies option.

2. From the Policies tab, select the row in the local policies list containing the policy you want to edit, and then click Edit Policy.

3. In the Edit Local Policy dialog box, click the selection button on one or more of the following options to display and modify their related settings:

• Name
• Schedule and Retention
• Backup Settings

4. After you have changed the desired information, click OK.

Result

The OnCommand console updates your policy and lists it in the Policies tab.

Related concepts

Guidelines for configuring a local policy on page 169

Related references

Administrator roles and capabilities on page 506

Name and Description area on page 177

Schedule and Retention area on page 177

Backup Settings area on page 179

Managing local policies

Editing a dataset of virtual objects to configure local policy and local backup

You can select, modify, or configure new local policies to automate local protection of datasets containing virtual VMware objects or virtual Hyper-V objects.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu and click the Datasets option.


2. In the Datasets tab, select the dataset on which you want to schedule and configure local backups and click Edit.

3. In the Edit Dataset dialog box, select the Local Policy option and its drop-down list, and complete one of the following actions:

• If you want to assign an existing local policy, select that policy from the Local Policy drop-down list.

• If you want to assign an existing local policy with some modifications, select that policy, make your modifications in the content area, and click Save.

• If you want to configure a new local policy to apply to this dataset, select the Create New option, configure the policy in the content area, and click Create.

4. After you finish assigning a new or existing local policy to this dataset, if you want to test whether your dataset's new configuration is in conformance with OnCommand console requirements before you apply it, click Test Conformance to display the Dataset Conformance Report.

• If the Dataset Conformance Report displays no warning or error information, click Close and continue.

• If the Dataset Conformance Report displays warning or error information, read the Action and Suggestion information to resolve the conformance issue, then click Close and continue.

5. Click OK.

Any local policy assignment, modification, or creation that you completed will be applied to the local protection of the virtual objects in the selected dataset.

Related references

Administrator roles and capabilities on page 506

Local Policy area on page 266


Copying a local policy

You can make a new local policy by copying an existing local policy and modifying it to meet your particular requirements.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

Making multiple copies of a local policy, then configuring the copies with non-overlapping schedules, and then assigning each copy to a different dataset is a good way to implement the local backup of multiple datasets of Hyper-V objects because a Hyper-V host does not allow simultaneous or overlapping local backups on virtual machines associated with it.
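Because overlapping schedules are the pitfall here, one way to reason about two copied policies is a simple interval check on their daily backup windows. A sketch under the simplifying assumption that each policy backs up within one hour-based window per day:

```python
def windows_overlap(start_a, stop_a, start_b, stop_b):
    """True if two daily backup windows, given as hours on a 24-hour clock, overlap."""
    return start_a < stop_b and start_b < stop_a

# Two copies of a policy, edited so each dataset backs up in its own window
policy_one = (7, 13)   # 7 am to 1 pm
policy_two = (13, 19)  # 1 pm to 7 pm
conflict = windows_overlap(*policy_one, *policy_two)
```

Back-to-back windows such as these do not conflict, so the two datasets on the same Hyper-V host can use the copied policies safely.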

Steps

1. Click the View menu and click the Policies option.

2. From the Policies tab, select the row in the local policies list containing the policy that you want to copy and then click Copy.

The OnCommand console lists a copy of the policy (labeled as "Copy of..") in the Policies tab.

3. Select the copied policy and click Edit.

4. Make any required name changes and schedule modifications in the Edit Local Policy dialog box and click OK.

The OnCommand console updates the copied policy with your changes and lists it in the Policies tab.

Related references

Administrator roles and capabilities on page 506

Deleting a local policy

You can delete local policies that are not currently assigned to datasets of virtual objects.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu and click the Policies option.

2. From the Policies tab, select the row in the local policies list containing the policy you want to delete.

3. Click the Dependencies tab to determine whether this local policy is currently assigned to one or more datasets.

4. If one or more assigned datasets are listed on the Dependencies tab, either edit those datasets to unassign this local policy or do not delete this local policy.

5. If no assigned datasets are listed on the Dependencies tab, click Delete, and click OK to confirm.

The OnCommand console removes this local policy and no longer lists it in the Policies tab.


Related references

Administrator roles and capabilities on page 506

Page descriptions

Policies tab

The Policies tab enables you to add, edit, copy, delete, and list local protection policies. You can assign the listed local policies to your virtual object datasets to configure local protection of their virtual object members.

• Command buttons on page 175
• Local policies list on page 175
• Overview tab on page 176
• Details area on page 176
• Dependencies tab on page 176

Command buttons

The command buttons enable you to perform the following tasks related to local policies:

Create Enables you to create a new local policy.

• If you select the VMware Policy sub-option, starts the Create Local Policy dialog box for adding a local protection policy specific to VMware objects.

• If you select the Hyper-V Policy sub-option, starts the Create Local Policy dialog box for adding a local protection policy specific to Hyper-V objects.

Edit Enables you to edit the selected local policy.

Copy Enables you to copy the selected local policy.

Delete Enables you to delete the selected local policy if that policy is not currently attached to a dataset.

Refresh Updates the information that is displayed for all local policies listed on the Policies tab.

Local policies list

Lists information about existing local policies. Select a row in the list to display information in the Details area.

Name The name of the policy.

Type The type of local policy: Hyper-V or VMware.

Description Briefly describes the local policy.


Overview tab

Schedules Displays the scheduled local backup jobs by schedule type (Hourly, Daily, Weekly, or Monthly) and time.

Retention Displays the period of time that the backup Snapshot copies associated with each backup schedule type are retained in storage before becoming subject to automatic purging.

Details area

The area below the Local policies list displays information about the selected local policy.

Issue a warning if there are no backups for Displays the period of time after which the OnCommand console issues a warning event if no local backup has successfully finished during that time.

Issue an error if there are no backups for Displays the period of time after which the OnCommand console issues an error event if no local backup has successfully finished during that time.

Backup Script Path Displays a path to an optional backup script (located on the system on which the host service is installed) that can specify additional operations to be executed in association with the local backup.

Dependencies tab

Displays information about datasets that are assigned this local policy.

Name Displays the names of datasets assigned this local policy.

Protection Status Displays whether or not the dataset is protected by a protection policy.

Related references

Window layout customization on page 16

Create Local Policy dialog box and Edit Local Policy dialog box

The Create Local Policy dialog box and the Edit Local Policy dialog box each enable you to configure a new or existing local protection policy to apply to virtual objects in your datasets.

Options

The options enable you to perform the following types of configuration on a local policy:

Name Enables you to edit the policy name and description.

Schedule and Retention Enables you to create, edit, and delete backup and retention schedules associated with this local policy.


Backup Settings Enables you to specify no-backup warning and error thresholds, and a path to an optional backup script.

Policy Summary section

Displays the current local policy name, description, backup script path (if any), no-backup warning and error times, and a list of the local backup schedules and their associated retention times.

Command buttons

Command buttons enable you to perform the following tasks for a local policy:

OK Saves the latest changes that you have made to the data in the Create Local Policy dialog box or Edit Local Policy dialog box as the latest configuration for this policy.

Cancel Cancels any changes you have made to the settings in the Create Local Policy dialog box or Edit Local Policy dialog box since the last time you opened it.

Name and Description area

The Name and Description area of the Create Local Policy dialog box or Edit Local Policy dialog box enables you to specify a name and description for a local policy.

Name Enables you to change the name of the current policy.

Unless you change it, the OnCommand console displays a default policy name that it has assigned.

Description Enables you to enter or modify a short description of the current policy.

Schedule and Retention area

The Schedule and Retention area of the Create Local Policy dialog box or Edit Local Policy dialog box enables you to configure a schedule of local backup jobs that you can apply to members of a virtual object dataset and also specify how long to retain the resulting Snapshot copies before their deletion.

Schedule and Retention

Either specifies the local backup schedule and retention settings assigned to the current local policy, or enables you to create a new backup and retention schedule for the current local policy.

Add Enables you to add a schedule to be applied to the current local policy backup.

Delete Enables you to delete the selected local policy backup schedule.

Schedule Type Displays the type of backup schedule (Hourly, Daily, Weekly, or Monthly).

Start Time Enables you to select the time of day the local backup starts.


End Time Enables you to select the time of day an hourly local backup ends (applies to Hourly schedules only).

Recurrence Enables you to select the frequency with which local policy backups occur for the associated schedule. Recurrence settings vary by schedule type:

• Hourly: You can specify recurrence by hours and minutes.

• Daily: Recurrence is fixed at once a day.

• Weekly: You can specify recurrence by days of the week.

• Monthly: You can specify recurrence by days of the month.

Retention Enables you to select the period of time that local backup copies generated by a schedule remain on the storage system before becoming subject to purging.

You can use any valid number and either Minutes, Hours, Days, or Weeks to set the backup retention time.

All schedules of one type use the same retention setting. For example, changing the retention setting for one Hourly schedule configured for this policy changes the retention setting for all the Hourly schedules configured for this policy.

Backup Options

Enables you to view and select additional options to be implemented with your local backups.

• Create VMware Snapshots: Creates VMware quiesced snapshots before taking the storage system Snapshot copies during local backup operations (displayed for datasets of VMware objects).

• Include independent disks: Includes independent VMDKs in local backups associated with this schedule for this dataset (displayed for datasets of VMware objects only).

• Allow saved state backups: Allows the backup of a dataset of virtual machines to proceed even if some of the virtual machines in that dataset are in saved state or shut down. Virtual machines in saved state or shut down receive a saved state or offline backup. Performing a saved state or offline backup can cause downtime (displayed for datasets of Hyper-V objects only). If this option is not selected, encountering a virtual machine that is in saved state or shut down fails the dataset backup.

• Start remote backup after local backup: If remote backup of data to secondary storage is specified by a storage service assigned to this dataset, this option specifies whether to start that remote backup immediately after the local backup is finished. If this option is not selected, any remote backup to secondary storage starts according to the schedules configured for the protection policy specified for the storage service.
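As a rough illustration of the schedule and retention behavior described in this area, the following sketch models per-schedule-type retention. The dictionary values and function name are hypothetical illustrations, not part of the OnCommand console or its API:

```python
from datetime import datetime, timedelta

# Hypothetical retention settings, one per schedule type. As described
# above, every schedule of a given type shares the same retention value.
RETENTION = {
    "Hourly": timedelta(hours=24),
    "Daily": timedelta(days=7),
    "Weekly": timedelta(weeks=4),
    "Monthly": timedelta(weeks=52),
}

def is_purgeable(schedule_type: str, created: datetime, now: datetime) -> bool:
    """Return True once a local backup copy has outlived its retention period."""
    return now - created > RETENTION[schedule_type]
```

With the example values above, an Hourly backup copy becomes subject to purging 24 hours after it is created, and changing `RETENTION["Hourly"]` changes the retention for every Hourly schedule at once.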

Backup Settings area

The Backup Settings area of the Create Local Policy dialog box or the Edit Local Policy dialog box enables you to specify lag threshold warning and error levels and an optional path to a backup script if you need a script to specify additional operations to be carried out in association with backups in this local policy.

Backup Settings Either specifies the current lag warning threshold, lag error threshold, and optional backup script path, or enables you to set the lag warning threshold, lag error threshold, and optional backup script path for the current local policy backup.

Issue a warning if there are no backups for

Specifies a period of time after which the OnCommand console issues a warning event if no local backup has successfully finished.

Issue an error if there are no backups for

Specifies a period of time after which the OnCommand console issues an error event if no local backup has successfully finished.

Backup Script Path Displays the existing backup script path, or, optionally, enables you to enter a path to a backup script (located on the system upon which the host service is installed) to specify additional operations to be executed with the backup.
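The no-backup warning and error thresholds can be thought of as lag checks against the time of the last successful local backup. The sketch below is an illustrative model only; the function and its arguments are assumptions, not the console's actual implementation:

```python
from datetime import datetime, timedelta
from typing import Optional

def lag_event(last_success: Optional[datetime], now: datetime,
              warn_after: timedelta, error_after: timedelta) -> Optional[str]:
    """Return "error", "warning", or None depending on how long it has been
    since the last successful local backup (illustrative logic only)."""
    if last_success is None:
        return "error"          # no successful backup at all
    lag = now - last_success
    if lag > error_after:
        return "error"
    if lag > warn_after:
        return "warning"
    return None
```

For example, with a one-day warning threshold and a two-day error threshold, a backup that last succeeded 30 hours ago raises a warning, and one that succeeded three days ago raises an error.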


Datasets

Understanding datasets

What a dataset is

A dataset is a set of physical or virtual data containers that you can configure as a unit for the purpose of group protection or group provisioning operations.

• You can use the OnCommand console to configure datasets that contain virtual VMware objects or virtual Hyper-V objects.

• You can also use the OnCommand console and the associated NetApp Management Console to configure datasets that contain physical storage systems with aggregates, volumes, qtrees, and LUNs.

During dataset configuration you can additionally configure or assign local protection or remote protection arrangements and schedules that apply to all objects in that dataset. You can also start on-demand protection operations for all objects in that dataset with one command.

Dataset concepts

Associating data protection, disaster recovery, a provisioning policy, or a storage service with a dataset enables storage administrators to automate tasks, such as assigning consistent policies to primary data, propagating policy changes, and provisioning new volumes, qtrees, or LUNs on primary and secondary dataset nodes.

Configuring a dataset combines the following objects:

Dataset of physical storage objects

For protection purposes, a collection of physical resources on a primary node, such as volumes, flexible volumes, and qtrees, and the physical resources for copies of backed-up data on secondary and tertiary nodes.

For provisioning purposes, a collection of physical resources, such as volumes, flexible volumes, qtrees, and LUNs, that are assigned to a dataset node. If the protection policy establishes a primary and one or more nonprimary nodes, each node of the dataset is a collection of physical resources that might or might not be provisioned from the same resource pool.

Dataset of virtual objects

A collection of supported VMware virtual objects or Hyper-V virtual objects that reside on storage systems. These virtual objects can also be backed up locally and backed up or mirrored on secondary and tertiary nodes.

Resource pool A collection of physical resources from which storage is provisioned. Resource pools can be used to group storage systems and aggregates by attributes, such as performance, cost, physical location, or availability. Resource pools can be assigned directly to the primary, secondary, or tertiary nodes of datasets of physical storage objects. They can be assigned indirectly both to datasets of virtual objects and to datasets of physical storage objects through a storage service.

Data protection policy

Defines how to protect primary data on primary, secondary, or tertiary storage, as well as when to create copies of data and how many copies to keep. Protection policies can be assigned directly to datasets of physical storage objects. They can be assigned indirectly both to datasets of virtual objects and to datasets of physical storage objects through a storage service.

Provisioning policy

Defines how to provision storage for the primary or secondary dataset nodes, and provides rules for monitoring and managing storage space and for allocating storage space from available resource pools. Provisioning policies can be assigned directly to the primary, secondary, or tertiary nodes of datasets of physical storage objects. They can be assigned indirectly both to datasets of virtual objects and datasets of physical storage objects through a storage service.

Storage service A single dataset configuration package that consists of a protection policy, provisioning policies, resource pools, and an optional vFiler template (for vFiler unit creation). You can assign a single uniform storage service to datasets with common configuration requirements as an alternative to separately assigning the same protection policy, provisioning policies, and resource pools, and setting up similar vFiler unit attachments, to each of them.

The only way to configure a dataset of virtual objects with secondary or tertiary backup and mirror protection and provisioning is by assignment of a storage service. You cannot configure secondary storage vFiler attachments for datasets of virtual objects.

Local policy A policy that schedules local backup jobs and designates retention periods for the local backup copies for datasets of virtual objects.

Related objects Are Snapshot copies, primary volumes, secondary volumes, or secondary qtrees that are generated as a result of local policy or storage service protection jobs or provisioning jobs.

The OnCommand console lists related objects for each dataset on the Datasets tab.

Naming settings

Are character strings and naming formats that are applied when naming related objects that are generated as a result of local policy or storage service protection jobs or provisioning jobs.


Role of provisioning policies in dataset management

Provisioning policies specify the resource pool location and the configuration requirements of the storage systems and their container objects that can be used to supply the primary, secondary, and tertiary storage needs of the storage objects or virtual objects in a dataset.

Direct or indirect assignment of a provisioning policy

Provisioning policies can be assigned to datasets directly or indirectly, through storage services.

• You can assign provisioning policies directly to each node in a dataset that is configured to include and manage physical storage objects as members.

• You can also assign provisioning policies to storage services, which are preconfigured combinations of protection policies, provisioning policies, and resource pools. You can then assign storage services directly both to datasets configured for physical storage objects and to datasets configured for virtual objects.

Where you can create, modify, and assign provisioning policies

You can create and modify provisioning policies, and assign them to storage dataset nodes or storage services, using the associated program, NetApp Management Console. Information on any of those tasks is in the NetApp Management Console help.

Role of protection policies in dataset management

A protection policy specifies a dataset's primary node, secondary node, and tertiary node data protection topology, backup or mirror schedules, backup copy retention times, backup bandwidth consumption, and other aspects related to backing up physical storage objects and virtual objects in a dataset.

Direct or indirect assignment of a protection policy

Protection policies can be assigned to datasets directly or indirectly, through storage services.

• You can assign protection policies directly to datasets that are configured to include and manage physical storage objects as members.

• You can also assign protection policies to storage services, which are preconfigured combinations of protection policies, provisioning policies, and resource pools. You can then assign storage services directly both to datasets configured for physical storage objects and to datasets configured for virtual objects.

Where you can create, modify, and assign protection policies

You can create and modify protection policies, and assign them to storage datasets or storage services, using the associated program, NetApp Management Console. Information on any of those tasks is in the NetApp Management Console help.


What conformance monitoring and correction is

The OnCommand console actively monitors a dataset's resources during remote protection configuration and at hourly intervals thereafter to ensure that the resource configuration remains in conformance with the protection policies and provisioning policies that are included in the storage service assigned to it.

When the OnCommand console encounters a condition that is not in conformance with the provisioning and protection policies, it first attempts to correct that condition automatically; however, if the condition requires user consent to correct or if the condition must be corrected manually, then the OnCommand console assigns nonconformant status to the dataset. If the dataset is tagged with this status, operations associated with that dataset's assigned protection and provisioning policies have a high likelihood of partial or full failure until the nonconformant condition is resolved.

Whether the nonconformant condition is encountered during initial storage service assignment or later, that condition is flagged for the administrator in the Datasets tab, from which the administrator can manually display the Conformance Details dialog box.

Datasets of physical storage objects

The OnCommand console and the associated application, NetApp Management Console, enable you to group physical storage systems and the container objects (the aggregates, volumes, qtrees, or LUNs) that reside on them into datasets for purposes of data protection.

You can set up and enhance automated protection and provisioning for a dataset of physical storage objects by performing the following dataset configuration tasks:

• assigning a protection policy

• assigning a disaster recovery capable protection policy

• assigning provisioning policies to the dataset's secondary and tertiary nodes

• assigning a resource pool to the dataset's secondary and tertiary nodes

• assigning a storage service (a preconfigured combination of a protection policy, provisioning policies, and resource pools)

• enabling an online migration capability

Objects that a dataset of physical storage objects can include

Datasets of physical storage objects can include containers and the physical storage systems on which they are located.

You can include the following types of objects as members of a dataset of physical storage objects:

• qtrees

• volumes

• aggregates

• hosts

• vFiler units

• Open Systems SnapVault directories

• Open Systems SnapVault hosts

Effect of time zone on scheduled protection operations in datasets of physical objects

The actual execution time of the protection jobs that are scheduled in the local policy and storage service that you assign to a dataset of physical objects depends on the time zone that is specified for the dataset, the time zones that are specified for the assigned resource pools, or (in the absence of those settings) on the time that is set on the DataFabric Manager server.

Administrators can assign optional dataset or resource pool time zone settings to datasets of physical objects in the NetApp Management Console interface. For additional information on the effect of time zone settings on datasets of physical objects, see the NetApp Management Console help.

Effect of no time zone settings assigned to datasets of physical objects or to resource pools

If a dataset of physical objects or its assigned physical resource pool elements are not assigned time zone settings, then by default the NetApp Management Console interface executes the protection schedule for its dataset members in accordance with the clock and time zone setting on the DataFabric Manager server.

For example, without dataset-level time zone settings configured for them, datasets in Los Angeles and London with a daily backup scheduled for 9 p.m. (Eastern Standard Time) from a NetApp Management Console and a DataFabric Manager server in New York will, by default, execute simultaneously at 6 p.m. (Pacific Standard Time) on a primary data node in Los Angeles, or at 2 a.m. (GMT) at a primary data node in London.
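The time arithmetic in this example can be verified with standard time zone conversions. The sketch below assumes Python's zoneinfo database and uses a January date so that all three locations are on standard time:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# A 9 p.m. daily backup evaluated on the DataFabric Manager server's
# clock in New York.
server_time = datetime(2011, 1, 10, 21, 0, tzinfo=ZoneInfo("America/New_York"))

la_time = server_time.astimezone(ZoneInfo("America/Los_Angeles"))
london_time = server_time.astimezone(ZoneInfo("Europe/London"))

print(la_time.strftime("%H:%M"))      # 18:00 on the Los Angeles node
print(london_time.strftime("%H:%M"))  # 02:00 the next day on the London node
```

Both conversions describe the same instant: the backups fire simultaneously, just at different local wall-clock times.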

Effect of time zone settings assigned to datasets

If those Los Angeles and London datasets are assigned Pacific Standard Time and GMT time zone settings, respectively, then the NetApp Management Console adjusts the schedule to execute non-simultaneous daily backups at 9 p.m. (Pacific Standard Time) in Los Angeles and at 9 p.m. (GMT) in London.

Effect of time zone settings assigned to resource pools

If the secondary dataset nodes in the Los Angeles and London datasets are assigned resource pools and those resource pools are assigned time zone settings, then any scheduled protection jobs from secondary to tertiary storage execute according to the assigned schedule and the time zone settings for those resource pools.

Datasets of virtual objects

The OnCommand console enables you to group VMware virtual objects or Hyper-V virtual objects that reside as data on your storage systems into datasets for purposes of data protection.

You can set up and enhance automated protection and provisioning for a dataset of virtual objects by configuring the following types of protection:


• You can assign a local policy to configure local backup job scheduling and local backup copy retention of your virtual object data.

• You can assign a storage service (a preconfigured combination of a protection policy, provisioning policies, and resource pools) to configure secondary storage and tertiary storage backup and mirroring of your virtual object data.

VMware objects that a dataset can include

Datasets that you configure to include and protect VMware objects are constrained by VMware-specific restrictions.

• A dataset designed for VMware virtual objects can include datacenter, virtual machine, and datastore objects.

• VMware datacenter objects that you include in a dataset cannot be empty. They must contain virtual machine objects, or datastore objects that also contain virtual machine objects, for successful backup.

• VMware and Hyper-V objects cannot coexist in one dataset.

• VMware objects and storage system container objects (such as aggregates, volumes, and qtrees) cannot coexist in one dataset.

• If you add a datastore object to a dataset, all the virtual machine objects that are contained in that datastore are protected by the dataset's assigned local backup policy or storage service.

• If a virtual machine resides on more than one datastore, you can exclude one or more of those datastores from the dataset. No local or remote protection is configured for the excluded datastores. You might want to exclude datastores that contain swap files that you want to exclude from backup.

• If a virtual machine is added to a dataset, all of its VMDKs are protected by default unless one of the VMDKs is on a datastore that is in the "exclusion list" of that dataset.

• VMDKs on a datastore object in a dataset must be contained within folders in that datastore. If VMDKs exist outside of folders on the datastore, and that data is backed up, restoring the backup could fail.

Hyper-V objects that a dataset can include

Datasets that you configure to include and protect Hyper-V objects are constrained by Hyper-V-specific restrictions.

• A dataset with Hyper-V objects can include only virtual machines.

• VMware and Hyper-V objects cannot coexist in one dataset.

• Hyper-V objects and storage system container objects (such as aggregates, volumes, and qtrees) cannot coexist in one dataset.

• Shared virtual machines and dedicated virtual machines cannot coexist in one dataset.


Best practices when adding or editing a dataset of virtual objects

When you create or edit a dataset of virtual objects, observing best practices specific to datasets of virtual objects helps you to avoid some performance and space usage problems after configuration.

General best practices for datasets of virtual objects

The following configuration practices apply to both datasets of Hyper-V objects and datasets of VMware objects:

• To avoid conformance and local backup issues caused by primary volumes reaching their Snapshot copy maximum of 255, best practice is to limit the number of virtual objects included in a primary volume, and limit the number of datasets in which each primary volume is directly or indirectly included as a member. A primary volume that hosts virtual objects that are included in multiple datasets is subject to retaining an additional Snapshot copy of itself for every local backup on any dataset that any of its virtual object children are members of.

• To avoid backup schedule inconsistencies, best practice is to include only virtual objects that are located in the same time zone in one dataset. The schedules for the local protection jobs and remote protection jobs specified in the local policies and storage services that are assigned to a dataset of virtual objects are carried out according to the time in effect on the host systems that are associated with the dataset's virtual objects.

Best practices specific to datasets of Hyper-V objects

The following configuration practices apply specifically to datasets containing Hyper-V objects:

• To ensure faster dataset backup of virtual machines in a Hyper-V cluster, best practice is to run all the virtual machines on one node of the Hyper-V cluster. When virtual machines run on different Hyper-V cluster nodes, separate backup operations are required for each node in the cluster. If all virtual machines run on the same node, only one backup operation is required, resulting in a faster backup.

Best practices specific to datasets of VMware objects

The following configuration practices apply specifically to datasets containing VMware objects:

• If a virtual machine resides on more than one datastore, you can exclude one or more of those datastores from the dataset. No local or remote protection is configured for the excluded datastores. You might want to exclude datastores that contain swap files that you want to exclude from backup.

• To avoid an excessive amount of secondary space provisioned for backup, best practice when creating volumes to host the VMware datastores whose virtual machines will be protected by the OnCommand console backup is to size those volumes to be not much larger than the datastores they host. The reason for this practice is that when provisioning secondary storage space to back up virtual machines that are members of datastores, the OnCommand console allocates secondary space that is equal to the total space of the volume or volumes in which those datastores are located. If the host volumes are much larger than the datastores they hold, an excessive amount of provisioned secondary space can result.
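The sizing concern behind this practice is simple arithmetic: secondary space is provisioned per host volume, not per datastore. A hypothetical illustration (the names and sizes are invented for the example):

```python
# Each entry pairs a datastore's size with the size of the volume hosting it.
datastores = [
    {"name": "ds1", "size_gb": 200, "volume_size_gb": 1000},  # oversized host volume
    {"name": "ds2", "size_gb": 300, "volume_size_gb": 350},   # volume sized to fit
]

# Secondary backup space is allocated from the host volume sizes...
provisioned_gb = sum(d["volume_size_gb"] for d in datastores)
# ...while the datastores themselves account for far less.
datastore_gb = sum(d["size_gb"] for d in datastores)
excess_gb = provisioned_gb - datastore_gb

print(provisioned_gb, datastore_gb, excess_gb)  # 1350 500 850
```

In this example, the oversized volume hosting `ds1` alone accounts for 800 GB of the 850 GB of excess provisioned secondary space.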

Storage services and remote backup of a dataset's virtual objects

You can specify remote backup of a dataset's VMware or Hyper-V virtual objects by assigning that dataset a storage service. A storage service specifies the type of remote protection (backup or mirror) to apply and the type of storage systems on which to store the secondary or tertiary data.

What remote protection is

Remote protection of a dataset's virtual objects consists of the OnCommand console taking the most recent Snapshot copies of the VMware virtual objects or Hyper-V virtual objects that reside as images on your storage systems and saving those Snapshot copies to remote storage systems at secondary or tertiary locations.

In case of data loss or corruption at the primary site, or even damage or disabled storage systems at the primary site, you can restore the lost or damaged virtual object data from Snapshot copies saved to a remote location.

Remote protection and local protection

The Snapshot copies that are saved to secondary storage are those that have been generated by local on-demand backup jobs or by backup jobs scheduled through the dataset's assigned local policy.

What storage services are

Storage services are preconfigured combinations of protection policies, provisioning policies, and resource pools that can be assigned as a package to a dataset.

Storage services are configured in NetApp Management Console, but the OnCommand console allows you to view the properties of existing storage services and assign one of them to the current dataset. When you are creating a dataset, you can select a storage service, review the storage service's data protection topology (its primary, secondary, and, if applicable, tertiary storage), review the storage service's protection schedule, specify supplementary backup scripts, and assign that storage service to the current dataset.

The OnCommand console supplies a set of storage services from which you can select that are specifically configured for protection of virtual objects in a small to mid-sized data storage management operation.

To support data management and protection of virtual objects at an enterprise level, you might need to use the NetApp Management Console to configure custom storage services, and custom protection and provisioning policy components.

Details on creating and customizing storage services are available in the NetApp Management Console help.


Storage services enabled for disaster recovery support cannot be assigned to datasets of virtual objects.

Preconfigured storage services supplied by the OnCommand console

To simplify the task of providing provisioning storage for virtual objects and remote protection to virtual objects in a dataset, the OnCommand console provides a set of storage services: preconfigured combinations of protection policies, provisioning policies, and resource pools specifically designed to facilitate remote protection of virtual objects.

The listed storage services are optimal for use in storage facilities with five or fewer storage systems in single resource pools. The preconfigured storage services are assigned with a Mirror protection policy or none. You can copy, clone, or modify these storage services with other protection policies using the NetApp Management Console. The preconfigured storage services include the following provisioning and protection policies:

Thin Provisioned Space for VMFS Datastores with Mirror

This storage service is configured to provide thin provisioning and mirror protection for VMFS datastore objects.

• Provisioning policy: Thin Provisioned Space for VMFS Datastores. This policy enables deduplication and thin provisioning. It facilitates maximized space savings. It is required for the 50% guarantee program.

• Protection policy: Mirror

• Resource pools: Default storage service primary pool; Default storage service mirror pool

Thin Provisioned Space for NFS Datastores with Mirror

This storage service is configured to provide thin provisioning and mirror protection for NFS datastore objects.

• Provisioning policy: Thin Provisioned Space for NFS Datastores. This policy enables deduplication and thin provisioning. It facilitates maximized space savings. It is required for the 50% guarantee program.

• Protection policy: Mirror

• Resource pools: Default storage service primary pool; Default storage service mirror pool

Thin Provisioned Space for Hyper-V Storage with Mirror

This storage service is configured to provide thin provisioning and mirror protection for backing storage LUNs in Hyper-V environments.

• Provisioning policy: Thin Provisioned Space for Hyper-V Storage. This policy is required for the 50% guarantee program.

• Protection policy: Mirror

• Resource pools: Default storage service primary pool; Default storage service mirror pool


Thin Provisioned Space for Hyper-V Delegated Storage with Mirror

This storage service is configured to provide thin provisioning and mirror protection in Hyper-V environments. It supports a delegation model where the volume is provisioned according to best practices. The LUNs will be provisioned by SnapDrive for Windows on the Hyper-V parent.

• Provisioning policy: Thin Provisioned Space for Hyper-V Delegated Storage. This policy is required for the 50% guarantee program.

• Protection policy: Mirror

• Resource pools: Default storage service primary pool; Default storage service mirror pool

Reserved Data Space for VMFS Datastores with Mirror

This storage service is configured to provide provisioning and mirror protection for write-guaranteed VMFS datastores.

• Provisioning policy: Reserved Data Space for VMFS Datastores

• Protection policy: Mirror

• Resource pools: Default storage service primary pool; Default storage service mirror pool

Reserved Data Space for NFS Datastores with Mirror

This storage service is configured to provide thick provisioning and mirror protection for NFS datastores.

• Provisioning policy: Reserved Data Space for NFS Datastores

• Protection policy: Mirror

• Resource pools: Default storage service primary pool; Default storage service mirror pool

Reserved Data Space for Hyper-V Storage with Mirror

This storage service is configured to provide thick provisioning and mirror protection for backing LUNs in Hyper-V environments. The LUNs could be backing store for Cluster Shared Volumes.

• Provisioning policy: Reserved Data Space for Hyper-V Storage

• Protection policy: Mirror

• Resource pools: Default storage service primary pool; Default storage service mirror pool

Reserved Data Space for Hyper-V Delegated Storage with Mirror

This storage service is configured to provide thick provisioning and mirror protection for backing storage for Hyper-V environments. It supports a delegation model where the volume is provisioned according to best practices. The LUNs are provisioned using SnapDrive for Windows on a Hyper-V parent.

• Provisioning policy: Reserved Data Space for Hyper-V Delegated Storage

• Protection policy: Mirror

• Resource pools: Default storage service primary pool; Default storage service mirror pool

Thin Provisioned Space for VMFS Datastores

This storage service is configured to provide thin provisioning for VMFS datastore objects.

• Provisioning policy: Thin Provisioned Space for VMFS Datastores

• Protection policy: None

• Resource pools: Default storage service primary pool

Thin Provisioned Space for NFS Datastores

This storage service is configured to provide thin provisioning for NFS datastore objects.

• Provisioning policy: Thin Provisioned Space for NFS Datastores

• Protection policy: None

• Resource pools: Default storage service primary pool

Thin ProvisionedSpace for Hyper-V

This storage service is configured to provide thin provisioning for backingstorage LUNs in Hyper-V environments.

• Provisioning policy: Thin Provisioned Space for Hyper-V Storage
• Protection policy: None
• Resource pools: Default storage service primary pool

Thin Provisioned Space for Hyper-V Delegated Storage

This storage service is configured to provide thin provisioning in Hyper-V environments. It supports a delegation model where the volume is provisioned according to best practices. The LUNs will be provisioned by SnapDrive for Windows on the Hyper-V parent.

• Provisioning policy: Thin Provisioned Space for Hyper-V Delegated Storage
• Protection policy: None
• Resource pools: Default storage service primary pool

Reserved Data Space for VMFS Datastores

This storage service is configured to provide provisioning for write-guaranteed VMFS datastores.

• Provisioning policy: Reserved Data Space for VMFS Datastores
• Protection policy: None
• Resource pools: Default storage service primary pool

Reserved Data Space for NFS Datastores

This storage service is configured to provide thick provisioning for NFS datastores.

• Provisioning policy: Reserved Data Space for NFS Datastores
• Protection policy: None


• Resource pools: Default storage service primary pool

Reserved Data Space for Hyper-V Storage

This storage service is configured to provide thick provisioning for backing LUNs in Hyper-V environments. The LUNs could be backing store for Cluster Shared Volumes.

• Provisioning policy: Reserved Data Space for Hyper-V Storage
• Protection policy: None
• Resource pools: Default storage service primary pool

Reserved Data Space for Hyper-V Delegated Storage

This storage service is configured to provide thick provisioning for backing storage for Hyper-V environments. It supports a delegation model where the volume is provisioned according to best practices. The LUNs are provisioned using SnapDrive for Windows on a Hyper-V parent.

• Provisioning policy: Reserved Data Space for Hyper-V Delegated Storage

• Protection policy: None
• Resource pools: Default storage service primary pool

Local policy and backup of a dataset's virtual objects

A dataset's local policy in the OnCommand console enables you to specify the start times, stop times, frequency, retention time, and warning and error event thresholds for local backups of its VMware or Hyper-V virtual objects.

What local protection of virtual objects is

Local protection of a dataset's virtual objects consists of the OnCommand console making Snapshot copies of the VMware virtual objects or Hyper-V virtual objects that reside as images on your storage systems and saving those Snapshot copies as backup copies locally on the same storage systems.

In case of data loss or corruption due to user or software error, you can restore the lost or damaged virtual object data from saved local Snapshot copies as long as the primary storage systems on which the virtual objects reside remain intact and operating.

What a local policy is

A local policy is a configured combination of Snapshot copy schedules, retention times, warning threshold, and error threshold levels that you can assign to a dataset. After you assign a local policy to a dataset, that policy applies to all the virtual objects that are included in that dataset.

The OnCommand console allows you to configure multiple local policies with different settings from which you can select one to assign to a dataset.

You can also use policies supplied by the OnCommand console.


Local protection and remote protection of virtual objects

After Snapshot copies of a dataset's virtual objects are generated as local backup, remote protection operations that are specified in an assigned storage service configuration can save these backup copies to secondary storage.

Secondary or tertiary protection of virtual objects cannot be accomplished unless Snapshot copies have been generated on the primary node by backup jobs carried out on demand or through local policy.

Local policies supplied by the OnCommand console

To simplify the task of providing local protection to virtual objects in a dataset, the OnCommand console provides a set of local policies (preconfigured combinations of local backup schedules, local backup retention settings, lag warning and lag error thresholds, and optional backup scripts) specifically to support local backup of certain types of data.

The preconfigured local policies include the following set:

VMware local backup policy template

This default policy enforces the following VMware environment-optimized settings related to local backup scheduling and retention. This policy can also be renamed and modified to accommodate different circumstances.

• Hourly backups without VMware snapshot (crash consistent) every hour between 7 am and 7 pm every day including weekends

• Daily backups with VMware snapshot at 10 PM every night including weekends

• Weekly backup with VMware snapshot every Sunday midnight
• Retention settings: Hourly backups for 2 days, Daily backups for 1 week, Weekly backups for 2 weeks
• Issue a warning if there are no backups for: 1.5 days
• Issue an error if there are no backups for: 2 days
• Backup script path: empty
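The retention settings above (hourly copies kept for 2 days, daily for 1 week, weekly for 2 weeks) amount to a per-schedule-type expiry rule. The following Python sketch illustrates that rule only; the function and data layout are hypothetical and are not part of the product.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the default retention rule: how long a local
# backup of each schedule type is kept before it becomes eligible for
# deletion. Illustration only, not the product's implementation.
RETENTION = {
    "hourly": timedelta(days=2),
    "daily": timedelta(weeks=1),
    "weekly": timedelta(weeks=2),
}

def expired_backups(backups, now):
    """Return the backups whose age exceeds the retention period
    for their schedule type ('hourly', 'daily', or 'weekly')."""
    return [b for b in backups
            if now - b["created"] > RETENTION[b["type"]]]

now = datetime(2011, 7, 15, 12, 0)
backups = [
    {"type": "hourly", "created": now - timedelta(days=3)},   # past 2-day retention
    {"type": "daily",  "created": now - timedelta(days=3)},   # still within 1 week
    {"type": "weekly", "created": now - timedelta(weeks=3)},  # past 2-week retention
]
for b in expired_backups(backups, now):
    print(b["type"])  # hourly, then weekly
```

The daily backup survives because three days is within its one-week retention window, while the three-day-old hourly copy and three-week-old weekly copy have both aged out.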

Hyper-V local backup policy template

This default policy enforces the following Hyper-V environment-optimized settings related to local backup scheduling and retention. This policy can also be renamed and modified to accommodate different circumstances.

• Hourly backups every hour between 7 am and 7 pm every day including weekends

• Daily backups at 10 PM every night including weekends
• Weekly backup every Sunday midnight
• Retention settings: Hourly backups for 2 days, Daily backups for 1 week, Weekly backups for 2 weeks
• Issue a warning if there are no backups for: 1.5 days
• Issue an error if there are no backups for: 2 days
• Skip backups that will cause virtual machines to go offline


• Start remote backup after local backup
• Backup script path: empty

Effect of time zone on scheduled protection operations in datasets of virtual objects

The actual execution time of the protection jobs that are scheduled in the local policy and storage service that you assign to a dataset of virtual objects depends on the times that are set on the systems that run the host services associated with that dataset's virtual object members.

Best practice to accommodate the time zone effect

The schedules for the local protection jobs and remote protection jobs specified in the local policies and storage services that are assigned to a dataset of virtual objects are carried out according to the time in effect on the systems that run the host services that are associated with the dataset's virtual objects.

Therefore, when you create datasets of virtual objects, the best practice is to include as members in any one dataset only those virtual objects that are associated with host services that are located in the same time zone.

However, if you cannot avoid including virtual object members whose host services are located in different time zones, you should be aware of how time zone differences can affect protection job times on the different virtual object members.

Local backup execution in a dataset containing virtual objects from different time zones

If a dataset contains a mix of virtual object members that are associated with different host services in different time zones, a local backup job, scheduled at a specified time by that dataset's local policies, executes for each individual virtual object member according to the time that is set on its associated host system. The DataFabric Manager server records the two actions using the time settings of the most lagging time zone.

For example, if virtual machines associated with a host service in California and virtual machines associated with a host service in New York are added to the same dataset, and that dataset's local policy schedule specifies a local backup at 9 a.m., the virtual machines in New York are locally backed up at 9 a.m. Eastern time (or 6 a.m. Pacific time), and the virtual machines in California are locally backed up at 9 a.m. Pacific time. The DataFabric Manager server records two backup versions: one at 6 a.m. Pacific time, containing the backup for the New York virtual machines only, and one at 9 a.m. Pacific time, containing the backup for the California virtual machines only.
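The arithmetic in the example above can be checked with a short Python sketch using the standard zoneinfo module. The dates and variable names are illustrative, not part of the product.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Illustration of the example above: a 9 a.m. local-backup schedule is
# interpreted in each host service's own time zone, while the DataFabric
# Manager server records both runs on a Pacific-time clock.
pacific = ZoneInfo("America/Los_Angeles")
eastern = ZoneInfo("America/New_York")

# The New York hosts back up at 9 a.m. Eastern ...
ny_run = datetime(2011, 7, 11, 9, 0, tzinfo=eastern)
# ... which the server records as 6 a.m. Pacific.
ny_recorded = ny_run.astimezone(pacific)

# The California hosts back up at 9 a.m. Pacific.
ca_run = datetime(2011, 7, 11, 9, 0, tzinfo=pacific)

print(ny_recorded.hour)  # 6
print(ca_run.hour)       # 9
```

The two runs land three hours apart in real time, which is why the server ends up recording two distinct backup versions.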

Remote backup execution in a dataset containing virtual objects from different time zones

If a dataset contains a mix of virtual object members that are associated with different host services in different time zones, a remote protection job, scheduled at a specified time by that dataset's storage service settings, executes for every virtual object member at a single common time that is determined by the host systems in the most lagging time zone.

For example, if virtual machines associated with a host service in California and virtual machines associated with a host service in New York are added to the same dataset, and the protection policy schedule in that dataset's storage service specifies a remote backup at 9 a.m., both the virtual machines in California and the virtual machines in New York are backed up at 9 a.m. Pacific time (or 12 noon Eastern time).
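Unlike local backups, the remote job picks one common instant: the scheduled time interpreted in the most lagging (smallest UTC offset) zone among the member host services. A hedged Python sketch of that selection rule, with hypothetical names:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Sketch of the remote-backup rule described above: all members run at a
# single instant, determined by interpreting the scheduled time in the
# most lagging host time zone. Illustration only.
zones = [ZoneInfo("America/New_York"), ZoneInfo("America/Los_Angeles")]
scheduled = datetime(2011, 7, 11, 9, 0)  # 9 a.m. per the protection policy

# The most lagging zone is the one with the smallest UTC offset.
lagging = min(zones, key=lambda z: scheduled.replace(tzinfo=z).utcoffset())
common = scheduled.replace(tzinfo=lagging)

# 9 a.m. Pacific corresponds to noon Eastern.
print(common.astimezone(ZoneInfo("America/New_York")).hour)  # 12
```

With Pacific time (UTC-7 in July) lagging Eastern time (UTC-4), the common run time resolves to 9 a.m. Pacific, which matches the 12 noon Eastern figure in the example.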

Data ONTAP licenses used for protecting or provisioning data

There are several Data ONTAP licensed options that you can use to protect or provision your data. After you have purchased the software licenses you need, you can assign these licenses to your primary and secondary storage from the Storage Systems Hosts window.

When you purchase a Data ONTAP option license, you receive a code composed of a string of characters, such as ABCDEFG, that is unique to a particular service. You receive license codes for every protocol and option, or service, that you purchase.

Not all purchased license codes are installed on a storage system before it is shipped from the factory. Some licenses are installed after the system is set up. You can purchase license codes to enable additional services at any time. If you misplace a license code, you can contact NetApp technical support or log in to the NetApp Support Site to obtain a copy.

You must enter a software license code on a storage system to enable the corresponding service. You do not need to indicate which license the code enables. The code is matched automatically to the appropriate service license.

Note: The Licenses area is visible only when the selected host is a single storage system running Data ONTAP. If you plan to use Open Systems SnapVault to back up data on a host that is not running Data ONTAP, you select the secondary storage system to license the necessary Data ONTAP services.

The following licenses are available for use with DataFabric Manager server:

SnapMirror license

You install a SnapMirror license on each of the source and destination storage systems for the mirrored data. If the source and destination volumes are on the same system, only one license is required.

SnapMirror replicates data to one or more networked storage systems. SnapMirror updates the mirrored data to keep it current and available for disaster recovery, offloading tape backup, read-only data distribution, testing on nonproduction systems, online data migration, and so on. You can also enable the SnapMirror license to use Qtree SnapMirror for backup.

To use SnapMirror software, you must update the snapmirror.access option in Data ONTAP to specify the destination systems that are allowed to access the primary data source system. For more information about the snapmirror.access option, see the Data ONTAP Data Protection Online Backup and Recovery Guide.


SnapVault Data ONTAP secondary license

You install the SnapVault Secondary license on storage systems that host the backups of protected data. SnapVault creates backups of data stored on multiple primary storage systems and copies the backups to a secondary storage system. If data loss or corruption occurs, backed-up data can be restored to a primary or open storage system with little of the downtime and uncertainty associated with conventional tape backup and restore operations. For versions of Data ONTAP 7.3 or later, a single storage system can contain a SnapVault Data ONTAP Primary license and a SnapVault Secondary license.

SnapVault Data ONTAP primary license

You install the SnapVault Data ONTAP Primary license on storage systems running Data ONTAP that contain host data to be backed up. For versions of Data ONTAP 7.3 or later, a single storage system can contain a SnapVault Data ONTAP Primary license and a SnapVault Secondary license.

SnapVault Windows Primary license

You install the SnapVault Windows Primary license on a secondary storage system, in addition to the SnapVault Secondary license, to support a Windows-based primary storage system running the Open Systems SnapVault agent. A Windows-based primary storage system running the Open Systems SnapVault agent does not require a SnapVault license.

SnapVault Windows Open File Manager license

You install the SnapVault Open File Manager license on a secondary storage system to enable the backup of open files on Windows primary storage systems running the Open Systems SnapVault agent.

You must install the SnapVault Windows Primary license and the SnapVault Data ONTAP Secondary license on the secondary storage system before installing the SnapVault Open File Manager license.

SnapVault UNIX primary license

You install the SnapVault UNIX Primary license on a secondary storage system, in addition to the SnapVault Secondary license, to support a UNIX-based primary storage system (AIX, HP-UX, or Solaris) running the Open Systems SnapVault agent. A UNIX-based primary storage system running the Open Systems SnapVault agent does not require a SnapVault license.

SnapVault Linux primary license

You install the SnapVault Linux Primary license on a secondary storage system, in addition to the SnapVault Secondary license, to support a Linux-based primary storage system running the Open Systems SnapVault agent. A Linux-based primary storage system running the Open Systems SnapVault agent does not require a SnapVault license.

NearStore Option license

The NearStore license enables your storage system to use transfer resources as conservatively as if it were optimized as a backup system. This approach is useful when the storage system on which you want to store backed-up data is not a system optimized for storing backups, and you want to minimize the number of transfer resources the storage system requires.

Storage systems using the NearStore license must meet the following criteria:


• The storage system must be a FAS30xx, FAS31xx, or FAS60xx system.
• The version of Data ONTAP must be 7.1 or later.
• If you plan to use the SnapVault service, the storage system must have a SnapVault secondary license enabled.

Deduplication license

The deduplication license (a_sis) enables you to consolidate blocks of duplicate data into single blocks to store more information using less storage space.

SnapMirror Sync license

The SnapMirror Sync license enables you to replicate data to the destination as soon as it is written to the source volume. SnapMirror Sync is a feature of SnapMirror.

MultiStore Option license

The MultiStore Option license enables you to partition the storage and network resources of a single storage system so that it appears as multiple storage systems on the network. Each virtual "storage system" created as a result of the partitioning is called a vFiler unit. A vFiler unit, using the resources assigned, delivers file services to its clients as a storage system does.

The storage resource assigned to a vFiler unit can be one or more qtrees or volumes. The storage system on which you create vFiler units is called the hosting storage system. The storage and network resources used by the vFiler units exist on the hosting storage system.

Be sure the host on which you intend to install the MultiStore Option license is running Data ONTAP 6.5 or later.

FlexClone license

The FlexClone license is necessary on storage systems that you intend to use as resources for secondary nodes for datasets of virtual objects.

Single file restore license

The Single file restore license is necessary on storage systems that you intend to use as primary storage for datasets of virtual objects.

Related information

Data ONTAP Data Protection Tape Backup and Recovery Guide - http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Descriptions of dataset protection status

A dataset is remotely protected only if the secondary storage system specified in the protection relationship is successfully backing up data, and if the copies of the data can be restored. You can monitor dataset status using the Datasets tab.

You should regularly monitor a dataset's protection, because the OnCommand console cannot sufficiently protect the dataset under the following conditions:


• If a secondary storage system runs out of storage space necessary to meet the retention duration required by the protection policy

• If the lag thresholds specified by the policy are exceeded

The following list describes protection status values and their descriptions:

Baseline Failed

No initial baseline data transfers have registered a backup version.

Initializing

The dataset is conforming to the protection policy and the initial baseline data transfer is in process.

Job Failure

The most recent protection job did not succeed.

This status might result for any of the following reasons:

• A backup from a SnapVault or Qtree SnapMirror relationship failed or could not be registered.

• A mirror copy from a SnapMirror relationship failed or could not be registered.

• Local backups (Snapshot copies) failed on the primary node.

Lag Error

The dataset has reached or exceeded the lag error threshold specified in the assigned protection policy. This value indicates that there has been no successful backup or mirror copy of a node's data within a specified period of time.

This status might result for any of the following reasons:

• The most recent local backup (Snapshot copy) on the primary node is older than the threshold setting permits.

• The most recent backup (SnapVault or Qtree SnapMirror) is older than the lag threshold setting or no backup jobs have completed since the dataset was created.

• The most recent mirror (SnapMirror) copy is older than the lag threshold setting or no mirror jobs have completed since the dataset was created.

Lag Warning

The dataset has reached or exceeded the lag warning threshold specified in the assigned protection policy. This value indicates that there has been no successful backup or mirror copy of a node's data within a specified period of time.

This status might result for any of the following reasons:

• The most recent local backup (Snapshot copy) on the primary node is older than the threshold setting permits.

• The most recent backup (SnapVault or Qtree SnapMirror) is older than the lag threshold setting or no backup jobs have completed since the dataset was created.

• The most recent mirror (SnapMirror) copy is older than the lag threshold setting or no mirror jobs have completed since the dataset was created.
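The Lag Warning and Lag Error statuses both reduce to comparing the age of the most recent successful copy against the policy's two thresholds (1.5 and 2 days in the supplied local policy templates). The Python sketch below is a hypothetical illustration of that comparison, not the console's actual status logic.

```python
from datetime import datetime, timedelta

# Hedged sketch: derive a lag status from the time of the most recent
# successful backup and the policy's warning/error thresholds.
# Illustration only; names and defaults are assumptions.
def lag_status(last_backup, now,
               warning=timedelta(days=1.5), error=timedelta(days=2)):
    age = now - last_backup
    if age >= error:
        return "Lag Error"
    if age >= warning:
        return "Lag Warning"
    return "OK"

now = datetime(2011, 7, 15, 12, 0)
print(lag_status(now - timedelta(hours=12), now))          # OK
print(lag_status(now - timedelta(days=1, hours=18), now))  # Lag Warning
print(lag_status(now - timedelta(days=3), now))            # Lag Error
```

A 42-hour-old backup exceeds the 36-hour warning threshold but not the 48-hour error threshold, so it reports a warning rather than an error.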


No Protection Policy

The dataset is managed by the OnCommand console but no protection policy has been assigned to the dataset.

Protected

The dataset has an assigned policy and it has conformed to that policy at least once.

Protection Suspended

An administrator has requested that scheduled backups be temporarily halted until the administrator requests that they be resumed.

Uninitialized

This status might result for any of the following reasons:

• The dataset has a protection policy that does not have any protection operations scheduled.

• The dataset does not contain any data to be protected.
• The dataset does not contain storage for one or more destination nodes.
• The single node dataset does not have any backup versions.
• An application dataset requires at least one backup version associated with it.
• The dataset does not contain any backup or mirror relationships.

No Local Policy

A dataset of virtual objects has no local policy assigned.

Attention: Importing an external relationship into a dataset temporarily changes the dataset's protection status to Uninitialized. When the next scheduled backup or mirror backup job runs or when you run an on-demand backup, the protection status changes to reflect the results of the protection job.

Configuring datasets

Adding a dataset of physical storage objects

You can use the Add Dataset wizard to add a dataset to manage protection for physical storage objects sharing the same protection requirements, or to manage provisioning for the dataset members.

Before you begin

• You must already be familiar with the Decisions to make before adding datasets of physical storage objects (for protection) on page 200.

• You must have NetApp Management Console installed.
• You must have gathered the protection information that you need to complete this task:

  • Dataset properties
  • Dataset naming properties
  • Group membership

• You must have gathered the provisioning information that you need to complete this task:


  • Provisioning policy for primary node
  • Migration capability
  • vFiler unit assignment
  • Provisioning policy for nonprimary dataset nodes

• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches NetApp Management Console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the NetApp Management Console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu and select the Datasets option.

2. In the Datasets tab, click Create and select the Dataset with storage objects option to start the NetApp Management Console Add Dataset wizard.

3. Complete the steps in the Add Dataset wizard to create a dataset of physical storage objects.

After you complete the wizard, the new dataset is listed in the Datasets tab.

4. To provide data protection or disaster recovery protection for the new dataset, select that dataset and click Protection Policy to start the Dataset Policy Change wizard.

5. When finished, press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

Related references

Administrator roles and capabilities on page 506

Decisions to make before adding datasets of physical storage objects (for protection)

Before you use the Add Dataset wizard to create a new dataset of physical storage objects and the Dataset Policy Change wizard to configure protection, you must decide how you want to protect the data.

Dataset properties

• Is there a dataset naming convention that you can use to help administrators easily locate and identify datasets?
Dataset names can include the following characters but cannot be only numeric:

a to z
A to Z
0 to 9
. (period)
_ (underscore)
- (hyphen)
space

If you use any other characters when naming the dataset, they do not appear in the name.

• What is a good description of the dataset membership?
Use a description that helps someone unfamiliar with the dataset to understand its purpose.

• Who is the owner of the dataset?
• If an event on the dataset triggers an alarm, who should be contacted?

You can specify one or more individual e-mail addresses or a distribution list of people to be contacted.

• Do you want scheduled operations on the dataset to be carried out according to the local time zone for the data?
You can specify a time zone in the wizard or use the default time zone, which is the system time zone used by the DataFabric Manager server.
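The dataset naming rules listed above (letters, digits, period, underscore, hyphen, and space allowed; other characters do not appear in the name; purely numeric names rejected) can be expressed as a small validation routine. This Python helper is hypothetical and not part of the product:

```python
import re

# Sketch of the dataset naming rules described above. Characters outside
# the allowed set are silently dropped, and the resulting name cannot be
# empty or only numeric. Hypothetical helper, not the product's code.
DISALLOWED = re.compile(r"[^A-Za-z0-9._\- ]")

def normalize_dataset_name(name):
    cleaned = DISALLOWED.sub("", name)  # disallowed characters do not appear
    if not cleaned or cleaned.isdigit():
        raise ValueError("dataset name cannot be empty or only numeric")
    return cleaned

print(normalize_dataset_name("finance/Q3#backups"))  # financeQ3backups
```

The slash and hash are dropped rather than rejected, mirroring the documented behavior that other characters simply do not appear in the name.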

Dataset naming properties

• Do you want to use the actual dataset name or a custom label in your dataset-level Snapshot copy, primary volume, secondary volume, or secondary qtree naming?

• For customizing the naming settings of object types, do you want the current default naming format to apply to one or more object types that are generated in this dataset?
If you want to customize the dataset-level naming formats for one or more object types, in what order do you want to enter the naming attributes for Snapshot copy, primary volume, secondary volume, or secondary qtree?

Group membership

• Do you need to create a collection of datasets and resource pools based on common characteristics, such as location, project, or owning organization?

• Is there an existing group to which you want to add this dataset?

Resources for primary storage

Will you assign a resource pool or individual physical resources as destinations for your primary storage?

If using a resource pool:

• For the primary node in the dataset, which resource pool meets its provisioning requirements?

• If no resource pool meets the requirements of the primary node, you can create a new resource pool for each node at the Resource Pools window.


• Verify that you have the appropriate software licenses on the storage you intend to use.

If using individual resources:

• If you prefer not to use resource pools for automatic provisioning, you can select individual physical resources as members of your dataset.

• Verify that you have the appropriate software licenses on the storage you intend to use.

Protection policy

After you create a new dataset of physical objects, you protect it by running the NetApp Management Console Dataset Policy Change wizard to assign a protection policy.

• Which protection policy meets the requirements of the dataset?
Review the policies listed on the Protection Policies window to see if any are suitable for your new dataset.

• If no protection policy meets the requirements of your new dataset, is there a protection policy that would be suitable with minor modifications?
If so, you can copy that protection policy to create a new policy you can modify as needed for the new dataset. If not, you can run the Add Protection Policy wizard to create a new policy for the dataset.

Resources for secondary or tertiary storage

When you assign a protection policy, will you assign a resource pool or individual physical resources as destinations for your backups and mirror copies?

You do not have to assign a resource pool or physical resources to a node to create a new dataset. However, the dataset will be nonconformant with its policy until resources are assigned to each node, because the NetApp Management Console data protection capability cannot carry out the protection specified by the policy.

If using a resource pool:

• For the secondary or tertiary nodes in the dataset, which resource pool meets their provisioning requirements?
For example, the resource pool you assign to a mirror node should contain physical resources that would all be acceptable destinations for mirror copies created of the dataset members.

• If no resource pool meets the requirements of a node, you can create a new resource pool for each node at the Resource Pools window.

• Verify that you have the appropriate software licenses on the storage you intend to use.

If using individual resources:

• If you prefer not to use resource pools for automatic provisioning, you can select individual physical resources as destinations for backups and mirror copies of your dataset.


• Verify that you have the appropriate software licenses on the storage you intend to use.

Adding a dataset of virtual objects

The OnCommand console enables you to create datasets to manage the protection and provisioning of virtual objects in VMware or Hyper-V environments.

Before you begin

• Review the Guidelines for adding a dataset of virtual objects on page 204.
• Review the Requirements and restrictions when adding a dataset of virtual objects on page 207.
• Review the Best practices when adding or editing a dataset of virtual objects on page 187.

• Have the protection information available that you need to complete this task:

  • The type of virtual objects, either VMware objects or Hyper-V objects, that you want to include

  • The name you want to give this dataset
  • The user group to whom you want this dataset visible
  • Whether you want to specify dataset-level custom naming formats for the Snapshot copy, volume, and qtree objects that are generated by local policy or storage service protection jobs on the virtual objects in this dataset

• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

The Create Dataset dialog box allows you to create an empty dataset, a populated but unprotected dataset, a populated, remotely protected dataset, a populated locally protected dataset, or a populated partially-configured or fully-configured dataset. A minimally configured dataset is an empty dataset.

Datasets of virtual objects must have any secondary protection and provisioning configured through a storage service that you assign to them using the OnCommand console.

Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, click Create and select the option for the appropriate virtual object dataset type.

• To create a dataset to manage VMware objects, select the Dataset with VMware objects option.

• To create a dataset to manage Hyper-V objects, select the Dataset with Hyper-V objects option.


3. In the Create Dataset dialog box, select the Name option and enter the requested information in the sub-tabs of the associated content area.

a. Enter the dataset name and administrative contact information in the General Properties tab.

b. If you want to specify dataset-level naming formats to apply to the Snapshot copy, volume, and qtree objects that are generated by local policy or storage service protection jobs that are run on this dataset, select options in the Naming Properties tab.

4. If you want to specify, at this time, the virtual objects to include in this dataset, select the Data option and make your selections in the associated content area.

You can also add or change this information for this dataset at a later time.

5. If you want to specify, at this time, a storage service that will execute remote protection for the objects in this dataset, select the Storage service option and make your selection.

You can also add or change this information for this dataset at a later time; however, changing this information later requires more time.

6. If you want to specify or create and configure, at this time, a local policy that will execute local protection for the objects in this dataset, select the Local Policy option and make your local policy selection or configuration.

You can also add or change this information for this dataset at a later time.

7. After you specify your desired information, if you want to test whether your dataset's new configuration is in conformance with the OnCommand console requirements before you apply it, click Test Conformance to display the Dataset Conformance Report.

• If the Dataset Conformance Report displays no warning or error information, click Close and continue.

• If the Dataset Conformance Report displays warning or error information, read the Action and Suggestion information to resolve the conformance issue, then click Close and continue.

8. Click OK.

The OnCommand console creates your new dataset and lists it in the Datasets tab.

Related references

Administrator roles and capabilities on page 506
Create Dataset dialog box or Edit Dataset dialog box on page 258
Name area on page 260
Data area on page 264
Storage Service area on page 265
Local Policy area on page 266

Guidelines for adding a dataset of virtual objects

Before you create a dataset of virtual objects, you must decide the type of virtual objects to include and the name of the dataset. If you want to fully configure your dataset in the initial session, you must also decide on the set of virtual objects to include, the local policy that you want to assign, and, if appropriate, the storage service that you want to assign.

204 | OnCommand Console Help

The virtual object types you want to include

The dataset that you create can include either VMware virtual objects or Hyper-V virtual objects. A dataset configured for VMware virtual objects can include Datacenter, Virtual Machine, and datastore objects. A dataset configured for Hyper-V virtual objects can include virtual machines. VMware objects and Hyper-V objects cannot coexist in one dataset.

The scope of your initial dataset configuration

If you are ready to do so, the Edit Dataset dialog box and its four content areas enable you to create a fully populated, fully protected dataset in one session.

However, if you are not ready to create a fully configured dataset in the initial session, the Edit Dataset dialog box also allows you to create an empty dataset, a dataset of unprotected virtual objects, a dataset of just locally protected virtual objects, or a dataset of virtual objects that are both locally protected and remotely protected.

You can later edit a partially configured dataset to fully configure its membership or data protection.

General properties information

When you create a dataset for virtual objects, the minimum information you need to provide is the general property information. If you complete a dataset configuration specifying only this information, your result is a named but empty dataset.

Name: Your company might have a naming convention for datasets. If so, the best practice is for the dataset name to follow that convention.

Description: A useful description is one that helps someone unfamiliar with the dataset to understand its purpose.

Owner: The name of the person or organization that is responsible for maintaining the virtual objects that are included in this dataset.

Contact: You can specify one or more individual e-mail addresses or a distribution list of people to be contacted when an event on the dataset triggers an alarm.

Group: If appropriate, you can specify the group to which you assign this dataset.

Naming properties information

When you create a dataset of virtual objects, you can specify dataset-level character strings and naming formats to be applied to Snapshot copy, primary volume, secondary volume, or secondary qtree objects that are generated by local policy or storage service protection jobs on this dataset.

You can also accept the globally configured default naming formats for those objects.


Custom Label: You can accept the default naming process of including the dataset name as part of the entire name of related objects created for this dataset, or you can specify a custom character string to use instead.

Snapshot Copy: You can accept the global default Snapshot copy naming format to be applied to all local policy or storage service generated Snapshot copies for this dataset, or you can specify an alternative custom dataset-level naming format.

Secondary Volume: You can accept the global default secondary volume naming format to be applied to all local policy or storage service generated secondary volumes for this dataset, or you can specify an alternative custom dataset-level naming format.

Secondary Qtrees: You can accept the global default secondary qtree naming format to be applied to all local policy or storage service generated secondary qtrees for this dataset, or you can specify an alternative custom dataset-level naming format.
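
The precedence rule described above (a dataset-level format overrides the global default, and a custom label replaces the dataset name) can be sketched as follows. This is an illustrative sketch only: the %D and %T tokens and the default format strings are assumptions, not the console's actual naming syntax.

```python
# Sketch of dataset-level naming formats overriding global defaults.
# %D (dataset name or custom label) and %T (timestamp) are hypothetical
# placeholder tokens, not the console's actual format attributes.
from datetime import datetime

GLOBAL_FORMATS = {
    "snapshot_copy": "%D_%T",
    "secondary_volume": "%D_backup",
    "secondary_qtree": "%D_qtree",
}

def object_name(object_type, dataset_name, custom_label=None,
                dataset_formats=None, now=None):
    """Resolve a generated object's name: a dataset-level format, if set,
    takes precedence over the global default for that object type."""
    fmt = (dataset_formats or {}).get(object_type) or GLOBAL_FORMATS[object_type]
    label = custom_label or dataset_name      # custom label replaces dataset name
    stamp = (now or datetime.now()).strftime("%Y-%m-%d_%H%M")
    return fmt.replace("%D", label).replace("%T", stamp)
```

For example, under these assumed defaults, a secondary volume for a dataset named hr_dataset would be named hr_dataset_backup, unless a custom label or a dataset-level format is supplied.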

Data information

If you want to populate your dataset with virtual objects in the same session that you create it, you must be ready to specify the following information:

Group: The resource group from which you want to select virtual objects to include in the dataset.

Resource type (applies to datasets configured for VMware objects): If you are configuring a dataset for VMware virtual objects, what class of supported VMware object (Datacenter, Virtual Machine, or datastore) you want to include.

Resources in the dataset: What virtual objects that meet your group and type selection criteria you want to include in the dataset. Any VMware datacenter objects that you include in a dataset cannot be empty. They must contain virtual machine objects or a datastore that contains virtual machine objects for successful backup.

Spanned entities (applies to datasets configured for VMware objects): If one of the VMware virtual machine objects that you want to include in a dataset spans two or more datastores, whether to include or exclude any of those datastores from that dataset.

Local policy information

If you want to set up local protection, you must decide whether to apply an existing local policy or to customize an additional local policy. Local policies specify the backup times, local copy retention times, no-backup warning and error times, and the path to any additional backup script you require.


If you want to set up remote protection, you must still generate local backup copies in primary storage that an assigned storage service can copy and store in secondary storage. The usual method of generating backups for this purpose is by scheduling local backup copies in the local policy.

If you intend to implement local backups on multiple datasets of Hyper-V objects that are associated with the same Hyper-V server, you must configure separate local policies with non-overlapping schedules to assign separately to each dataset.
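
The non-overlap requirement can be checked mechanically. The following sketch is illustrative only; it assumes backup windows expressed as (start, duration) pairs in minutes since midnight, which is not the console's actual schedule model.

```python
# Hypothetical sketch: verifying that local policies assigned to datasets on
# the same Hyper-V server have non-overlapping backup windows.

def windows_overlap(start_a, len_a, start_b, len_b):
    """Return True if two backup windows [start, start+len) overlap."""
    return start_a < start_b + len_b and start_b < start_a + len_a

def schedules_conflict(policy_a, policy_b):
    """Compare every backup window of two local policies; each policy is a
    list of (start_minute, duration_minutes) pairs."""
    return any(
        windows_overlap(sa, la, sb, lb)
        for (sa, la) in policy_a
        for (sb, lb) in policy_b
    )
```

For example, a 22:00 one-hour window and a 23:30 one-hour window do not conflict, but a 22:00 window and a 22:30 window on the same Hyper-V server would.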

Storage services information

If you want to set up remote protection (backup or mirroring to secondary and possibly tertiary storage locations), you must be ready to select a storage service that is configured with a protection policy that supports this possibility and to specify the path to any additional backup script that you require.

Requirements and restrictions when adding a dataset of virtual objects

You must be aware of the requirements and restrictions when creating or editing a dataset of virtual objects. Some requirements and restrictions apply to datasets of all types of virtual objects, and some are specific to datasets of Hyper-V objects or datasets of VMware virtual objects.

General requirements and restrictions

• OnCommand console administrators attempting to assign a storage service to a dataset require RBAC read permission (DFM.Policy.Read) to any protection policy included in that storage service. This is in addition to having the RBAC permissions DFM.Policy.Read, DFM.StorageService.Read, and DFM.StorageService.Attach to the storage service itself.

• Administrators of datasets of virtual objects who are attempting to attach storage services to those datasets require RBAC DFM.Resource.Control permission if the storage service they are attempting to attach assigns a provisioning policy to the dataset primary node. Even though provisioning policy assignment does not actually apply to datasets of virtual objects, the DFM.Resource.Control permission is necessary to allow access to the underlying storage on which the virtual objects are located.
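
The permission logic above can be summarized in a short sketch. The DFM.* permission names come from this document; the data model and the helper function itself are hypothetical.

```python
# Illustrative sketch of the storage-service attachment permission checks.
# Permission names match the document; everything else is an assumption.

def can_attach_storage_service(user_perms, service):
    """user_perms: set of (permission, object) pairs.
    service: dict describing the storage service, its protection policies,
    and whether it assigns a provisioning policy to the primary node."""
    svc = service["name"]
    required = {
        ("DFM.Policy.Read", svc),
        ("DFM.StorageService.Read", svc),
        ("DFM.StorageService.Attach", svc),
    }
    # Read permission is also needed on every protection policy in the service.
    required |= {("DFM.Policy.Read", p) for p in service["protection_policies"]}
    # A provisioning policy on the primary node additionally requires
    # DFM.Resource.Control to reach the underlying storage.
    if service.get("assigns_provisioning_policy"):
        required.add(("DFM.Resource.Control", service["primary_node"]))
    return required <= user_perms
```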

• VMware objects and Hyper-V objects cannot coexist in the same dataset.

• Virtual objects (VMware objects or Hyper-V objects) cannot coexist in the same dataset with storage system container objects (such as aggregates, volumes, and qtrees).

• To provide an application dataset with application-consistent backup protection, the OnCommand console operator must assign to that application dataset a storage service that is configured with a protection policy that uses a "Mirror then backup" type protection topology.

VMware specific restrictions

• VMware datacenter objects that you include in a dataset cannot be empty. They must contain datastore or virtual machine objects for successful backup.


• VMDKs on a datastore object in a dataset must be contained within folders in that datastore. If VMDKs exist outside of folders on the datastore, and that data is backed up, restoring the backup could fail.

Hyper-V specific restrictions

• A dataset with Hyper-V objects can only include virtual machines.

• Hyper-V objects and storage system container objects (such as aggregates, volumes, and qtrees) cannot coexist in the same dataset.

• Shared virtual machines and dedicated virtual machines cannot coexist in the same dataset.

• Each virtual machine contained in the dataset that you want to back up must contain at least 300 MB of free disk space. Each Windows volume in the virtual machine (guest OS) must have at least 300 MB of free disk space. This includes the Windows volumes corresponding to VHDs, iSCSI LUNs, and pass-through disks attached to the virtual machine.
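
A pre-backup check for this free-space requirement might look like the following sketch; the data layout is an assumption for illustration.

```python
# Sketch of the Hyper-V free-space rule above: every Windows volume in each
# guest needs at least 300 MB free before backup. The dict layout is assumed.

MIN_FREE_MB = 300

def vms_failing_free_space(dataset_vms):
    """dataset_vms: {vm_name: {volume_name: free_mb}}.
    Returns the VMs that have at least one volume under the 300 MB minimum."""
    return sorted(
        vm for vm, volumes in dataset_vms.items()
        if any(free < MIN_FREE_MB for free in volumes.values())
    )
```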

Best practices when adding or editing a dataset of virtual objects

When you create or edit a dataset of virtual objects, observing best practices specific to datasets of virtual objects helps you to avoid some performance and space usage problems after configuration.

General best practices for datasets of virtual objects

The following configuration practices apply to both datasets of Hyper-V objects and datasets of VMware objects:

• To avoid conformance and local backup issues caused by primary volumes reaching their Snapshot copy maximum of 255, best practice is to limit the number of virtual objects included in a primary volume, and to limit the number of datasets in which each primary volume is directly or indirectly included as a member. A primary volume that hosts virtual objects that are included in multiple datasets is subject to retaining an additional Snapshot copy of itself for every local backup on any dataset that any of its virtual object children are members of.
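
The arithmetic behind this limit can be sketched as follows; the per-dataset retention count is an illustrative assumption.

```python
# Sketch of the Snapshot copy arithmetic behind the best practice above: a
# primary volume retains extra Snapshot copies for each dataset that any of
# its virtual objects belongs to. Retention counts are illustrative.

SNAPSHOT_LIMIT = 255

def snapshots_required(volume_datasets, retained_per_dataset):
    """volume_datasets: names of the datasets whose members live on this
    volume; each dataset's local backups retain `retained_per_dataset`
    Snapshot copies on the volume."""
    return len(set(volume_datasets)) * retained_per_dataset

def near_limit(volume_datasets, retained_per_dataset, threshold=0.8):
    """Flag a volume approaching the 255-copy maximum."""
    return snapshots_required(volume_datasets, retained_per_dataset) >= threshold * SNAPSHOT_LIMIT
```

For example, a volume whose objects appear in eight datasets, each retaining 30 local backups, would need 240 Snapshot copies and would already be close to the 255-copy maximum.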

• To avoid backup schedule inconsistencies, best practice is to include in one dataset only virtual objects that are located in the same time zone. The schedules for the local protection jobs and remote protection jobs specified in the local policies and storage services that are assigned to a dataset of virtual objects are carried out according to the time in effect on the host systems that are associated with the dataset's virtual objects.

Best practices specific to datasets of Hyper-V objects

The following configuration practices apply specifically to datasets containing Hyper-V objects:

• To ensure faster dataset backup of virtual machines in a Hyper-V cluster, best practice is to run all the virtual machines on one node of the Hyper-V cluster.


When virtual machines run on different Hyper-V cluster nodes, separate backup operations are required for each node in the cluster. If all virtual machines run on the same node, only one backup operation is required, resulting in a faster backup.

Best practices specific to datasets of VMware objects

The following configuration practices apply specifically to datasets containing VMware objects:

• If a virtual machine resides on more than one datastore, you can exclude one or more of those datastores from the dataset. No local or remote protection is configured for the excluded datastores. You might want to do this for datastores that contain swap files that you do not want to back up.

• To avoid an excessive amount of secondary space provisioned for backup, best practice when creating volumes to host the VMware datastores whose virtual machines will be protected by the OnCommand console backup is to size those volumes to be not much larger than the datastores they host. The reason for this practice is that when provisioning secondary storage space to back up virtual machines that are members of datastores, the OnCommand console allocates secondary space that is equal to the total space of the volume or volumes in which those datastores are located. If the host volumes are much larger than the datastores they hold, an excessive amount of provisioned secondary space can result.
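
The sizing effect described above reduces to simple arithmetic; the following sketch (sizes in GB, values illustrative) shows how oversized host volumes inflate the provisioned secondary space.

```python
# Arithmetic behind the sizing advice above: secondary space is provisioned
# per hosting volume, not per datastore, so oversized host volumes inflate
# the secondary footprint. Sizes in GB; the data layout is assumed.

def provisioned_secondary_gb(host_volumes):
    """host_volumes: {volume_name: {"size_gb": ..., "datastore_gb": ...}}.
    The console allocates secondary space equal to the total size of each
    volume that hosts a protected datastore."""
    return sum(v["size_gb"] for v in host_volumes.values())

def excess_secondary_gb(host_volumes):
    """Secondary space provisioned beyond what the datastores actually use."""
    return sum(v["size_gb"] - v["datastore_gb"] for v in host_volumes.values())
```

For example, a 1000 GB volume holding a 200 GB datastore causes 1000 GB of secondary space to be provisioned, 800 GB more than the datastore itself needs.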

Editing a dataset to add virtual object members

You can add virtual VMware or virtual Hyper-V objects to existing datasets that were created to contain VMware objects or Hyper-V objects.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

• Datasets created as VMware datasets can only contain VMware objects. Datasets created as Hyper-V datasets can only contain Hyper-V objects. Hyper-V shared virtual machines and dedicated virtual machines cannot be included in the same dataset.

• VMware datacenter objects that you include in a dataset cannot be empty. They must contain datastore or virtual machine objects for successful backup.

Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset to which you want to add virtual objects, and click Edit.


3. If the selected dataset currently has no members or local policy assigned, specify, when prompted, whether the members that you want to add are VMware objects or Hyper-V objects.

4. In the Edit Dataset dialog box, select the Data option.

The associated content area displays all the virtual objects of the type (VMware or Hyper-V) that are compatible with the selected dataset type.

5. Select the virtual objects to include in this dataset.

6. After you finish adding the desired virtual objects to your dataset, click OK.

The set of virtual objects in this dataset is updated with your additions.

Related references

Administrator roles and capabilities on page 506

Data area on page 264

Editing a dataset to assign storage service and remote protection of virtual objects

You can configure remote protection of a dataset's virtual objects by assigning a storage service to that dataset.

Before you begin

• Local backup copies of the dataset's primary data, generated either by on-demand backups or by scheduled backups specified in the local policy, must exist on the primary node for transfer by the assigned storage service to a secondary node.

• You must have a storage service available for assignment that is configured to support the dataset's protection and provisioning requirements.

• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

• OnCommand console administrators attempting to assign a storage service to a dataset require RBAC read permission (DFM.Policy.Read) to any protection policy included in that storage service. This is in addition to having the RBAC permissions DFM.Policy.Read, DFM.StorageService.Read, and DFM.StorageService.Attach to the storage service itself.

• Administrators of datasets of virtual objects who are attempting to attach storage services to those datasets require RBAC DFM.Resource.Control permission if the storage service they are attempting to attach assigns a provisioning policy to the dataset primary node. Even though provisioning policy assignment does not actually apply to datasets of virtual objects, the DFM.Resource.Control permission is necessary to allow access to the underlying storage on which the virtual objects are located.


About this task

• Datasets of virtual objects must have any secondary protection and provisioning configured through a storage service that you assign to them using the OnCommand console.

• To provide an application dataset with application-consistent backup protection, the OnCommand console operator must assign to that application dataset a storage service that is configured with a protection policy that uses a "Mirror then backup" type protection topology.

Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset for whose virtual objects you want to configure remote protection, and click Edit.

3. In the Edit Dataset dialog box, use the Storage service option to view and assign a storage service that will execute remote protection for the objects in this dataset.

As you make your selection, the associated content area displays the administrative, topological, protection schedule, and backup retention information on the storage service that you select.

4. After you make your selection, if you want to test whether your dataset's settings and resources conform to the requirements of its newly assigned storage service before you apply that storage service, click Test Conformance to display the Dataset Conformance Report.

• If the Dataset Conformance Report displays no warning or error information, click Close and continue.

• If the Dataset Conformance Report displays warning or error information, read the Action and Suggestion information to resolve the conformance issue, then click Close and continue.

5. When you are ready to confirm the storage service assignment, click OK.

The OnCommand console saves your dataset with its newly assigned storage service and lists the dataset and its members in the Datasets tab.

Related references

Administrator roles and capabilities on page 506

Storage Service area on page 265

Editing a dataset of virtual objects to configure local policy and local backup

You can select, modify, or configure new local policies to automate local protection of datasets containing virtual VMware objects or virtual Hyper-V objects.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.


Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset on which you want to schedule and configure local backups and click Edit.

3. In the Edit Dataset dialog box, select the Local Policy option and complete one of the following actions from the drop-down list:

• If you want to assign an existing local policy, select that policy from the Local Policy drop-down list.

• If you want to assign an existing local policy with some modifications, select that policy, make your modifications in the content area, and click Save.

• If you want to configure a new local policy to apply to this dataset, select the Create New option, configure the policy in the content area, and click Create.

4. After you finish assigning a new or existing local policy to this dataset, if you want to test whether your dataset's new configuration conforms to OnCommand console requirements before you apply it, click Test Conformance to display the Dataset Conformance Report.

• If the Dataset Conformance Report displays no warning or error information, click Close and continue.

• If the Dataset Conformance Report displays warning or error information, read the Action and Suggestion information to resolve the conformance issue, then click Close and continue.

5. Click OK.

Any local policy assignment, modification, or creation that you completed will be applied to the local protection of the virtual objects in the selected dataset.

Editing a dataset containing virtual objects to reschedule or modify local backup jobs

You can modify the schedule of local backup jobs that are configured in the local policy assigned to a dataset containing virtual VMware objects or virtual Hyper-V objects.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

If you need to reschedule or modify the local backup jobs associated with the local policy of a dataset of virtual objects, you can edit the Local Policy settings of that dataset.


Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset on which you want to schedule and configure local backups and click Edit.

3. In the Edit Dataset dialog box, locate the Local settings option and click >.

4. Modify the local backup jobs as needed.

5. After you finish changing the schedule for the local policy to this dataset, click OK.

Result

Any local policy modification that you completed will be applied to the local protection of the virtual objects in all datasets that use that local policy.

Related references

Local Policy area on page 266

Administrator roles and capabilities on page 506

Editing a dataset to remove protection from a virtual object

You can permanently remove dataset-based local or remote protection from a virtual object by removing that object from the protected dataset.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset that contains the virtual object whose local or remote protection you want to remove and click Edit.

3. In the Edit Dataset dialog box, select the Data option.

The associated content area displays all the current members of the selected dataset in the list box that is labeled "Members in the dataset."

4. Select the virtual objects that you want to remove as dataset members and click the < button.

5. After you finish removing the virtual objects from your dataset, click OK.

The set of virtual objects in this dataset is updated with your removals.


Result

The virtual objects continue to exist as objects, but not as members of the dataset from which they are removed. Protection and provisioning jobs that are executed on the remaining objects in the dataset are no longer executed on the removed objects.

Related references

Administrator roles and capabilities on page 506

Data area on page 264

Adding a dataset of physical storage objects with dataset-level custom naming

When you create a new dataset of physical storage objects, to improve recognition and usability, you can customize the naming formats that are applied to that dataset's related objects (the Snapshot copies, primary volumes, secondary volumes, or secondary qtrees that are generated in that dataset by protection jobs run on that dataset).

Before you begin

• Have the custom naming information available to complete this task:

  • The related object types whose naming you want to customize
  • If you want to include a custom label for your dataset in your custom naming formats, the character string that you want to use
  • If you want to customize naming settings by selecting and ordering NetApp Management Console-supplied attributes in the Add Dataset wizard Naming Properties page, the naming attributes that you want to include in the naming format

• If you plan to assign a policy, you must be assigned a role that enables you to view policies.

• If you plan to assign a provisioning policy, you must be assigned a role that enables you to attach the resource pools configured for the policy.

• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

• During this task, the OnCommand console launches NetApp Management Console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the NetApp Management Console open, or you can close it to conserve bandwidth.

• Dataset-level naming properties, if customized for the related object types in a dataset, override the global naming settings for those object types in that dataset.


Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, click Create and select Dataset with Storage entities to start the Add Dataset wizard.

3. When the wizard displays the Naming Properties panel, specify how you want the dataset reference displayed in the names for your object types:

• If you want the OnCommand console to include the name of the dataset in the names of its generated Snapshot copy, primary volume, secondary volume, or secondary qtree objects, select Use dataset name.

• If you want the OnCommand console to use a custom character string in place of the dataset name in the names of its generated related objects, select Use custom label and enter the character string that you want to use.

4. Customize the naming settings for your related object types:

• If you want the current global naming format to apply to one or more object types that are generated in this dataset, select the Use global naming format option for those object types.

• If you want to customize the dataset-level naming formats for one or more object types that are generated in this dataset, select the Use custom format option for those object types, and type the naming attributes in the order that you want those attributes to appear.

5. When you complete your naming configuration, click Next and complete the dataset creation.

6. To return to the Datasets tab, press Alt-Tab.

Result

After dataset creation is complete, the OnCommand console applies the custom dataset-level naming formats to all objects created by protection and provisioning jobs for that dataset.

Related references

Administrator roles and capabilities on page 506

Adding a dataset of virtual objects with dataset-level custom naming

When you create a dataset of virtual objects, to improve recognition and usability, you can customize the naming formats that are applied to that dataset's related objects (Snapshot copies, secondary volumes, or secondary qtrees that are generated by protection jobs run on that dataset).

Before you begin

• Have the protection information available:

  • The name you want to give this dataset
  • The user group that you want this dataset to be visible to


  • The dataset-level custom naming formats that you want to specify for the related object types that are generated by local policy or storage service protection jobs on the virtual objects in this dataset

• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

Dataset-level naming properties customized for a related object type in a dataset override any conflicting global naming settings that might be configured for that object type.

Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, click Create and select the option for the appropriate virtual object dataset type.

• To create a dataset to manage VMware objects, select the Dataset with VMware entities option.

• To create a dataset to manage Hyper-V objects, select the Dataset with Hyper-V entities option.

3. In the Create Dataset dialog box, select the Name option and enter the requested information in the sub-tabs of the associated content area.

a. In the General Properties tab, enter the dataset name and administrative contact information.

b. In the Naming Properties tab, specify dataset-level naming formats to apply to the object types that are generated by protection jobs run on this dataset.

4. If you want to specify, at this time, the virtual objects to be included in this dataset, select the Data option and make your selections in the associated content area.

You can also add or change this information for this dataset at a later time.

5. If you want to specify, at this time, a storage service that executes remote protection for the objects in this dataset, select the Storage service option and make your selection.

You can also add or change this information for this dataset at a later time.

6. If you want to specify or create and configure, at this time, a local policy that executes local protection for the objects in this dataset, select the Local Policy option and make your local policy selection or configuration.

You can also add or change this information for this dataset at a later time.

7. After you specify your desired amount of information about this dataset, click OK.

The OnCommand console creates your new dataset and lists it in the Datasets tab.


Related references

Administrator roles and capabilities on page 506

Name area on page 260

Editing a dataset of virtual objects for dataset-level custom naming

To improve recognition and usability, you can edit an existing dataset of virtual objects to customize the dataset-level naming format that is applied to that dataset's related objects (the Snapshot copies, secondary volumes, or secondary qtrees that are generated by protection jobs run on that dataset).

Before you begin

• Have the custom naming information available:

  • The name of the dataset for which you want to configure custom naming
  • The related object types whose naming you want to customize
  • If you want to include a custom label for your dataset in your custom naming format, the character string that you want to use
  • If you want to customize the naming settings by entering attributes on the Naming Properties tab, the naming attributes that you want to include in the naming format
  • If you want to customize the naming settings by specifying a pre-authored naming script, the name and location of that script

• If you plan to assign a policy, you must be assigned a role that enables you to view policies.

• If you plan to assign a provisioning policy, you must be assigned a role that enables you to attach the resource pools configured for the policy.

• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

Dataset-level naming properties customized for the related object types in a dataset override any conflicting global naming settings that might be configured for those related object types.

Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset for which you want to configure dataset-level custom naming, and click Edit.

3. In the Edit Dataset dialog box, select the Name & Properties option.

4. Click the Naming Properties tab.

5. Select the options and enter the formats necessary to specify the particular dataset-level custom naming formats that you want.

6. After you finish the naming customizations that you want for this dataset, click OK.


Related references

Administrator roles and capabilities on page 506

Editing a dataset of physical storage objects for dataset-level custom naming

To improve recognition and usability, you can edit an existing dataset to customize the naming format that is applied to that dataset's protection-related objects (Snapshot copies, primary volumes, secondary volumes, or secondary qtrees that are generated by protection jobs run on that dataset).

Before you begin

• Have the custom naming information available to complete this task:

  • The name of the dataset for which you want to configure custom naming
  • The related object whose naming you want to customize
  • If you want to include a custom name for your dataset in your custom naming format, the character string that you want to use
  • If you want to customize naming settings by selecting and ordering attributes from the Naming Properties page, the naming attributes that you want to include in the naming format

• If you plan to assign a policy, you must be assigned a role that enables you to view policies.

• If you plan to assign a provisioning policy, you also need a role that enables you to attach the resource pools configured for the policy.

• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

• During this task, the OnCommand console launches NetApp Management Console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the NetApp Management Console open, or you can close it to conserve bandwidth.

• Dataset-level naming properties customized for the protection-related objects override any conflicting global naming settings that might be configured.

Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset for which you want to configure dataset-level custom naming, and click Edit to display the Edit Dataset window.

3. Click the Naming Properties option and customize the naming settings for your selected object type.

4. When you complete your naming configuration, click OK and complete the dataset edit.


5. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

Result

After the dataset edit is complete, the OnCommand console applies the custom dataset-level naming format that you specified to all future objects of that type that are generated by that dataset's protection and provisioning jobs.

Related references

Administrator roles and capabilities on page 506

Selecting virtual objects to create a new dataset

You can select a set of virtual objects (of the same type) that you want to manage as one group and combine them into a new dataset.

Before you begin

• Have the protection information available that you need to complete this task:

• The name you want to give this dataset
• The user group to which you want this dataset to be visible
• Whether you want to use the dataset name or a substitute name as a prefix to names assigned to backup copies that are generated for this dataset
• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

• The objects that you select must all be of the same virtual object type. The supported object types include VMware datacenter, virtual machine, or datastore objects or Hyper-V virtual machine objects.

• Any VMware datacenter objects that you include in a dataset cannot be empty. They must contain datastore or virtual machine objects for successful backup.

Steps

1. Click the View menu and click the Servers option.

2. In the Server tab, select the type of the virtual objects that you want to include in a new dataset.

Virtual objects of the selected type are displayed.

3. Select the objects that you want to include in the new dataset.

To select more than one object, hold down Ctrl while you make your selections.

Datasets | 219

Page 220: Admin help netapp

4. After you complete your selections, click Add to new dataset.

The Create Dataset dialog box appears, listing the selected objects as part of a new dataset.

5. In the Create Dataset dialog box, select the Name option and enter the requested information in the associated content area.

6. If you want to specify, at this time, a storage service that will execute remote protection for the objects in this dataset, select the Storage Service option and make your selection.

You can also add or change this information for this dataset at a later time.

7. If you want to specify or create and configure, at this time, a local policy that will execute local protection for the objects in this dataset, select the Local Policy option and make your local policy selection or configuration.

You can also add or change this information for this dataset at a later time.

8. After you specify your desired amount of information about this dataset, click OK.

9. When the OnCommand console displays a confirmation box that informs you that your new dataset is created, complete one of the following actions:

• Click Close to close the confirmation box and view the Server tab.
• Click the linked dataset name to view the listing of the new dataset in the Datasets tab.

Related references

Administrator roles and capabilities on page 506

Name area on page 260

Data area on page 264

Storage Service area on page 265

Selecting virtual objects to add to an existing dataset

You can select a group of virtual objects and add them directly into an existing dataset.

Before you begin

• Have the protection information available that you need to complete this task:

• The type of virtual objects that you want to add to an existing dataset
• The names of the virtual objects that you want to add
• The name of the dataset to which you want to add your selected objects

• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

• Although a dataset might contain more than one object type of the same family, the objects that you select for this operation must all be of the same virtual object type.


The supported object types include VMware datacenter, virtual machine, or datastore objects or Hyper-V virtual machine objects.

• Any VMware datacenter objects that you include in a dataset cannot be empty. They must contain datastore or virtual machine objects for successful backup.

Steps

1. Click the View menu and click the Servers option.

2. In the Server tab, select the type of the virtual objects that you want to add to an existing dataset.

The Server tab displays virtual objects of the selected type.

3. Select the objects that you want to add to the existing dataset.

To select more than one object, hold down Ctrl while you make your selections.

4. After you complete your selections, click Add to existing dataset.

The Add to existing dataset dialog box appears, listing the selected objects and the existing datasets that can accept objects of their type.

5. Select the dataset to which you want to add your selected objects and click Add.

6. When the OnCommand console displays a confirmation box that informs you that your existing dataset is updated, complete one of the following actions:

• Click Close to close the confirmation box and view the Server tab.
• Click the linked dataset name to view the listing of the updated dataset in the Datasets tab.

Related references

Administrator roles and capabilities on page 506

Configuring local backups for multiple datasets of virtual Hyper-V objects

If you want to protect multiple datasets of virtual machines that are associated with one Hyper-V parent host, you must configure separate local policies with non-overlapping schedules for each dataset.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

A Hyper-V parent host does not allow simultaneous or overlapping local backups on multiple virtual machines that are associated with it; therefore, each associated dataset of Hyper-V objects that you want to provide with local protection requires a separate local policy with a schedule that does not overlap the schedule of any other local policy.


Steps

1. In the Policies tab, select Hyper-V Local Policy Template and click Copy to create an alternative local policy for each dataset that is associated with the Hyper-V parent host.

2. Still in the Policies tab, edit the Schedule and Retention area of each alternative local policy that you just created so that none of those policies has a schedule that overlaps with the schedule of any other.

3. In the Datasets tab, edit the Local Policy area of each separate dataset that is associated with the Hyper-V parent host to assign it one of the distinct local policies that you just edited.
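The non-overlap requirement in the steps above can be sanity-checked programmatically. This is a minimal sketch, not an OnCommand feature: the policy names and daily backup windows are invented example data, and real local policy schedules would have to be read from the console.

```python
# Hypothetical check: local policy schedules for datasets on the same Hyper-V
# parent host must not overlap. Policy names and windows are example data.
from datetime import time

# Each policy's daily backup window as (start, end) times (assumed).
policy_windows = {
    "hyperv-local-ds1": (time(1, 0), time(2, 0)),
    "hyperv-local-ds2": (time(2, 30), time(3, 30)),
    "hyperv-local-ds3": (time(4, 0), time(5, 0)),
}

def windows_overlap(a, b):
    """Two same-day windows overlap if each starts before the other ends."""
    return a[0] < b[1] and b[0] < a[1]

def find_overlaps(windows):
    """Return every pair of policies whose backup windows overlap."""
    names = sorted(windows)
    return [
        (x, y)
        for i, x in enumerate(names)
        for y in names[i + 1:]
        if windows_overlap(windows[x], windows[y])
    ]

print(find_overlaps(policy_windows))  # an empty list means the schedules conform
```

An empty result means the alternative policies satisfy the Hyper-V parent host's restriction; any reported pair needs its Schedule and Retention area re-edited.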

Related references

Administrator roles and capabilities on page 506

Managing datasets

Performing an on-demand backup of a dataset

You can perform a local on-demand dataset backup to protect your virtual objects.

Before you begin

• You must have reviewed the Guidelines for performing an on-demand backup on page 277.
• You must have reviewed the Requirements and restrictions when performing an on-demand backup on page 279.
• You must have added the virtual objects to an existing dataset or have created a dataset and added the virtual objects that you want to back up.
• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.
• You must have the following information available:
  • Dataset name
  • Retention duration
  • Backup settings
  • Backup script location
  • Backup description

About this task

If you perform a backup of a dataset containing Hyper-V virtual machines and you are currently restoring those virtual machines, the backup might fail.


Steps

1. Click the View menu, then click the Datasets option.

2. In the Datasets tab, choose the dataset that you want to back up.

3. Click Back Up Now.

4. In the Back Up Now dialog box, specify the local protection settings, backup script path, and backup description for the on-demand backup.

If you have already established local policies for the dataset, that information automatically appears for the local protection settings for the on-demand backup. If you change the local protection settings, the new settings override any existing application policies for the dataset.

5. If you want a remote backup to begin after the local backup has finished, select the Start remote backup after local backup box.

6. Click Back Up Now.

After you finish

You can monitor the status of your backup from the Jobs tab.

Guidelines for performing an on-demand backup

Before performing an on-demand backup of a dataset, you must decide how you want to assign resources and assign protection settings.

General properties information

When performing an on-demand backup, you need to provide information about what objects you want to back up, to assign protection and retention settings, and to specify script information that runs before or after the backup operation.

Dataset name

You must select the dataset that you want to back up.

Local protection settings

You can define the retention duration and the backup settings for your on-demand backup, as needed.

Retention
You can choose to keep a backup until you manually delete it, or you can assign a retention duration. By specifying a length of time to keep the on-demand local backup, you can override the retention duration in the local policy you assigned to the dataset for this backup. The retention duration of a local backup defaults to a retention type for the remote backup.

A combination of both the remote backup retention type and storage service is used to determine the remote backup retention duration.

For example, if you specify a local backup retention duration of two days, the retention type of the remote backup is Daily. The dataset storage service then verifies how long daily remote backups are kept and applies this to the backup. This is the retention duration of the remote backup.

The following table lists the local backup retention durations and the equivalent remote backup retention type:

Local retention duration                     Remote retention type
Less than 24 hours                           Hourly
1 day up to, but not including, 7 days       Daily
1 week up to, but not including, 31 days     Weekly
More than 31 days                            Monthly
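The table above maps directly to a small lookup. This sketch is illustrative only; the function name and the hour-based input are assumptions, but the thresholds follow the table.

```python
# Minimal sketch of the documented mapping: the local backup retention
# duration determines the equivalent remote backup retention type.
def remote_retention_type(local_retention_hours):
    """Map a local retention duration (in hours) to a remote retention type."""
    if local_retention_hours < 24:        # less than 24 hours
        return "Hourly"
    if local_retention_hours < 7 * 24:    # 1 day up to, but not including, 7 days
        return "Daily"
    if local_retention_hours <= 31 * 24:  # 1 week up to 31 days
        return "Weekly"
    return "Monthly"                      # more than 31 days

print(remote_retention_type(2 * 24))  # two days of local retention -> "Daily"
```

This matches the worked example in the text: a two-day local retention duration yields a Daily remote retention type, and the storage service then determines how long Daily remote backups are kept.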

Backup settings

You can choose your on-demand backup settings based on the type of virtual objects you want to back up.

Allow saved state backup (Hyper-V only)
You can choose to skip the backup if it causes one or more of the virtual machines to go offline. If you do not choose this option, and your Hyper-V virtual machines are offline, backup operations fail.

Create VMware snapshot (VMware only)
You can choose to create a VMware-formatted snapshot in addition to the storage system Snapshot copies created during local backup operations.

Include independent disks (VMware only)
You can include independent disks: VMDKs that belong to VMware virtual machines in the current dataset but reside on datastores that are not part of the current dataset.

Backup script path

You can specify a script that is invoked before and after the local backup. The script is invoked on the host service and the path is local to the host service. If you use a PowerShell script, you should use the drive letter convention. For other types of scripts, you can use either the drive letter convention or the Universal Naming Convention.
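The path conventions above can be expressed as a simple validation rule. This is a hypothetical helper, not part of any OnCommand or host service API; the `.ps1` extension test and the regular expression are assumptions for illustration.

```python
# Hypothetical validator for the documented conventions: PowerShell scripts
# use the drive letter convention; other script types may use the drive
# letter convention or a UNC path.
import re

def is_valid_script_path(path):
    """Return True if a backup script path follows the documented conventions."""
    drive_letter = re.match(r"^[A-Za-z]:\\", path) is not None  # e.g. C:\...
    unc = path.startswith("\\\\")                               # e.g. \\server\share
    if path.lower().endswith(".ps1"):  # PowerShell: drive letter only (assumed test)
        return drive_letter
    return drive_letter or unc         # other scripts: either convention

print(is_valid_script_path(r"C:\scripts\pre_backup.ps1"))    # True
print(is_valid_script_path(r"\\fileserver\scripts\pre.ps1")) # False (UNC + PowerShell)
```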


Backup description

You can provide a description for the on-demand backup so you can easily find it when you need it.

Clustered virtual machine considerations (Hyper-V only)

Dataset backups of clustered virtual machines take longer to complete when the virtual machines run on different nodes of the cluster. When virtual machines run on different nodes, separate backup operations are required for each node in the cluster. If all virtual machines run on the same node, only one backup operation is required, resulting in a faster backup.
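The consideration above can be sketched as a count of distinct owner nodes: one backup operation is required per cluster node that hosts virtual machines in the dataset. The VM-to-node mappings below are example data, not anything read from a real cluster.

```python
# Sketch: spreading clustered Hyper-V VMs across nodes multiplies the number
# of backup operations, which lengthens the dataset backup.
def backup_operations_required(vm_to_node):
    """Number of separate backup operations = number of distinct owner nodes."""
    return len(set(vm_to_node.values()))

same_node = {"vm1": "node-a", "vm2": "node-a", "vm3": "node-a"}  # example data
spread = {"vm1": "node-a", "vm2": "node-b", "vm3": "node-c"}     # example data

print(backup_operations_required(same_node))  # 1 -> fastest backup
print(backup_operations_required(spread))     # 3 -> slower backup
```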

Requirements and restrictions when performing an on-demand backup

You must be aware of the requirements and restrictions when performing an on-demand backup. Some requirements and restrictions apply to all types of objects and some are specific to Hyper-V or VMware virtual objects.

Requirements
Virtual machines or datastores must first belong to a dataset before backing up. You can add virtual objects to an existing dataset or create a new dataset and add virtual objects to it.

Hyper-V specific requirements

Each virtual machine contained in the dataset that you want to back up must contain at least 300 MB of free disk space. Each Windows volume in the virtual machine (guest OS) must have at least 300 MB free disk space. This includes the Windows volumes corresponding to VHDs, iSCSI LUNs, and pass-through disks attached to the virtual machine.

Hyper-V virtual machine configuration files, snapshot copy files, and VHDs must reside on Data ONTAP LUNs; otherwise, backup operations fail.

VMware specific requirements

Backup operations of datasets containing empty VMware datacenters or datastores will fail. All datacenters must contain datastores or virtual machines to successfully perform a backup.

Virtual disks must be contained within folders in the datastore. If virtual disks exist outside of folders on the datastore, and that data is backed up, restoring the backup could fail.

NFS backups might take more time than VMFS backups because it takes more time for VMware to commit snapshots in an NFS environment.

Hyper-V specific restrictions

Partial backups are not supported. If the Hyper-V VSS writer fails to back up one of the virtual machines in the backup and the failure occurs at the Hyper-V parent host, the backup fails for all of the virtual machines in the backup.
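The 300 MB free-space requirement under the Hyper-V specific requirements above lends itself to a precheck. This sketch is hypothetical: the volume names and free-space figures are illustrative, and a real check would have to query each guest OS.

```python
# Precheck sketch for the Hyper-V requirement: every Windows volume in each
# guest OS needs at least 300 MB of free space, or the backup fails.
MIN_FREE_BYTES = 300 * 1024 * 1024  # 300 MB per Windows volume

def volumes_below_minimum(guest_volumes):
    """Return the volumes whose free space is below the 300 MB requirement."""
    return [name for name, free in guest_volumes.items() if free < MIN_FREE_BYTES]

# Example guest OS data: D: has only 200 MB free, so this VM would fail backup.
guest = {"C:": 5 * 1024**3, "D:": 200 * 1024**2}
print(volumes_below_minimum(guest))  # ['D:']
```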


Deleting a dataset of virtual objects

You can delete a dataset if you want to stop protection for all of its members and stop conformance checking against its assigned protection policies.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

When you delete a dataset, the physical resources that compose the dataset are not deleted.

Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset that you want to delete and click Delete.

3. Click Yes to confirm the delete request or No to cancel the request and close the dialog box.

After you finish

The deleted dataset is no longer listed in the Datasets tab.

Related references

Administrator roles and capabilities on page 506

Suspending dataset protection and conformance checking

You can temporarily stop data protection and provisioning conformance checks on a dataset and its members.

About this task

• Before performing maintenance on volumes used as destinations for backups or mirror copies, you might want to stop protection and conformance checking of the dataset to which the volume belongs to ensure that the protection application does not initiate a new backup or mirror relationship for the primary data.

Note: If you suspend protection for a dataset and the lag time exceeds the threshold defined for the dataset, no lag threshold event is generated until protection is resumed. After you resume protection for the dataset, the protection application generates the backlog of lag threshold events that would have been generated had protection been in effect and triggers any applicable alarms.

Note: When you suspend services on application datasets, the external application continues to perform local backups as scheduled.

• This task suspends all policies that are assigned to the dataset. You cannot choose to suspend only protection when both protection and provisioning policies are assigned to a dataset.

Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset whose protection you want to suspend, click More, then select Suspend.

3. In the confirmation dialog box, click Yes.

The Details area of the Datasets tab shows a Protection Suspended status if the dataset has a protection policy assigned, and a Nonconformant status if the dataset has a provisioning policy assigned.

4. Click Yes to confirm the suspend request or No to cancel the request and close the dialog box.

Result

All scheduled backups and provisioning are cancelled until service is resumed.

After you finish

After you bring the storage system volume online again, you must wait for the DataFabric Manager server to recognize that the volume is back online. You can check the backup volume status using Operations Manager.

You can resume data protection from the Datasets tab.

Resuming protection and conformance checking on a suspended dataset

You can resume data protection and conformance checking after those services were suspended on a dataset.

About this task

If you suspended protection and conformance operations on a dataset in order to perform maintenance on its volumes used as destinations for backups or mirror copies, you can resume operations after maintenance is complete.

Note: If you suspend protection for a dataset and the lag time exceeds the threshold defined for the dataset, no lag threshold event is generated until protection is resumed. After you resume protection for the dataset, the protection application generates the backlog of lag threshold events that would have been generated had protection been in effect and triggers any applicable alarms.


Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset on which you want to resume protection and provisioning activities, click More, then select Resume.

3. In the confirmation dialog box, click Yes.

4. Click Yes to confirm the resume request or No to cancel the request and close the dialog box.

Result

All scheduled backups and provisioning operations are resumed.

Changing a storage service on datasets of storage objects

You can use the Change Storage Service wizard to change a storage service on datasets of storage objects that already have another storage service attached.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

The following conditions must exist:

• The new storage service that you want to attach is available in the group in which you want to locate that dataset.

• All datasets to which you want to attach the new storage service are currently attached to the same current storage service.

About this task

The Change Storage Service wizard allows you to select an alternative storage service, presents you with possible node remapping alternatives along with rebaselining requirements for each alternative, carries out a dry run of your request, and then implements your request upon your approval.

During this task, the OnCommand console launches NetApp Management Console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the NetApp Management Console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset whose storage service you want to change, click More, then select Storage Service and Change to start the Change Storage Service wizard.

3. After you complete each property sheet in the wizard, click Next.


4. Confirm the details of the storage service and click Finish.

5. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

6. Refresh your browser to update the OnCommand console with the changes you made.

Result

The selected datasets are listed in the datasets table with their newly attached storage service named in the storage service column.

Related references

Administrator roles and capabilities on page 506

Attaching a storage service to existing datasets of storage objects

You can use the Attach Storage Service wizard to attach a storage service to existing datasets.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Confirm that all datasets to which you want to attach the storage service currently use the same protection policy or no protection policy.

About this task

The Attach Storage Service wizard allows you to select from a list of possible storage services, presents you with possible node remappings and associated rebaselining requirements for the storage service that you select, carries out a dry run of your request, and then implements your request upon your approval.

After you attach a storage service to an existing dataset, you cannot directly edit that dataset to change its individual protection policy selection, provisioning policy selections, or resource pool selections as long as that storage service is attached. You can only edit the attached storage service to change the protection policy, provisioning policy, or resource pool selections for all datasets attached to that storage service.

During this task, the OnCommand console launches NetApp Management Console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the NetApp Management Console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu and click the Datasets option.


2. In the Datasets tab, select the dataset to which you want to attach a storage service, click More, then select Storage Service and Attach to start the Attach Storage Service wizard.

3. After you complete each property sheet in the wizard, click Next.

4. Confirm the details of the storage service and click Finish.

5. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

6. Refresh your browser to update the OnCommand console with the changes you made.

Result

The selected datasets are listed in the datasets table with their newly attached storage service named in their storage service column.

Related references

Administrator roles and capabilities on page 506

Restoring data backed up from a dataset of physical storage objects

In addition to restoring backed up virtual objects, you can also restore data backed up from physical storage objects (aggregates, volumes, and qtrees).

Before you begin

Before restoring backed-up data, have the following information available:

• The backup version that you want to restore
• The volumes, qtrees, directories, files, or Open Systems SnapVault directories that you want to restore

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

You can restore files contained in volumes, qtrees, and vFiler units that were backed up as members of a dataset.

During this task, the OnCommand console launches NetApp Management Console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the NetApp Management Console open, or you can close it to conserve bandwidth.

Steps

1. Click the View menu and select the Datasets option.


2. In the Datasets tab, select the dataset whose data you want to restore, click More, then select Restore to start the Restore wizard.

The wizard displays the Backup Files window.

3. In the Backup Files window, select the backup copy containing the data that you want to restore.

4. Select the volumes, qtrees, directories, and files contained in the backup copies that you want to restore and continue running the Restore wizard.

5. Click Finish to end the wizard and begin the restore operation.

6. Press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

After you finish

You can use the Jobs window to track the progress of the restore job and monitor the job for possible errors.

Related references

Administrator roles and capabilities on page 506

Repairing datasets that contain deleted virtual objects

You can reenable full OnCommand console backup of a dataset after that backup has been partially disabled because third-party management tools deleted virtual object members from inventory before those members were removed from the dataset.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

If a virtual object that is a member of a dataset is deleted from inventory by third-party management tools before it is removed as a member from its dataset, any subsequent backup jobs attempted on that dataset are only partially successful until you complete the following steps to remove its references from the dataset.

Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset whose local or remote protection jobs have been achieving only partial success and click Edit.

3. In the Edit Dataset dialog box, select the Data option.


The associated content area displays all the current members of the selected dataset in the list box that is labeled "Members in the dataset." Any virtual object members that have been deleted from inventory and thus no longer exist are labeled as "Deleted."

4. Select the virtual objects that have been deleted from inventory and are marked "Deleted."

5. Click the < button.

The selected virtual objects are removed from the dataset.

6. After you finish removing the objects that were deleted from inventory from your dataset, click OK.

Result

After the update of the dataset is complete, the partial backup failures caused by the deleted virtual objects in the dataset stop, and fully successful backup jobs resume.

Related references

Administrator roles and capabilities on page 506

Evaluating and resolving issues displayed in the Conformance Details dialog box

You can use the Conformance Details dialog box to evaluate and resolve a dataset's conformance issues. You can evaluate the error and warning messages that the dialog box displays and attempt to resolve the conformance issues they describe.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

This procedure assumes you are viewing the Conformance Details dialog box that you displayed by selecting a nonconformant dataset in the Datasets tab and clicking after its nonconformant status display.

The dialog box displays Information, Error, Action, Reason, and Suggestion text about the nonconformance condition.

Steps

1. Evaluate the displayed Information, Error, Action, Reason, or Suggestion text.

Information Indicates configuration operations that the OnCommand console successfully completed without conformance issues.


Error Indicates the configuration operations that the OnCommand console cannot perform on this dataset due to conformance issues.

Action Indicates what the OnCommand console conformance engine did to discover the conformance issue.

Reason Indicates the probable cause of the conformance issue.

Suggestion Indicates a possible way of resolving the conformance issue. Sometimes the suggested resolution involves a baseline transfer of data, although it is usually desirable to avoid reinitiating a baseline transfer of data.

2. Based on the dialog box text, decide the best way to resolve the conformance issue.

• If the dialog box text indicates that the OnCommand console conformance monitor cannot automatically resolve the conformance issue, resolve this issue manually.

• If the Suggestion text indicates that automatically resolving the conformance issues requires a baseline transfer of data, first attempt to resolve this issue manually. If unsuccessful, consider resolving the issue automatically even if doing so requires a baseline transfer of data.

• If the Suggestion text indicates that the OnCommand console conformance monitor can resolve the conformance issue automatically without reinitiating a baseline transfer of data, first consider resolving the issue manually. If unsuccessful, resolve the issue automatically.
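The decision guidance above reduces to a small ordered plan. This sketch only encodes those bullets; the boolean inputs are stand-ins for what you read in the dialog box text, not values any API provides.

```python
# Sketch of the documented decision guidance for resolving conformance issues.
def resolution_plan(auto_resolvable, requires_baseline_transfer):
    """Return the ordered resolution attempts suggested by the guidelines."""
    if not auto_resolvable:
        return ["manual"]                # only manual resolution is possible
    if requires_baseline_transfer:
        # Try manual first; fall back to automatic despite the baseline cost.
        return ["manual", "automatic (baseline transfer)"]
    return ["manual", "automatic"]       # manual first, then automatic

print(resolution_plan(auto_resolvable=False, requires_baseline_transfer=False))
```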

After you finish

Proceed to the appropriate task to resolve the dataset conformance issues.

Related references

Administrator roles and capabilities on page 506

Resolving conformance issues manually without a baseline transfer of data

If the text displayed in the Conformance Details dialog box identifies conformance issues that you cannot resolve automatically, you can resolve the conformance issue manually.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

• Because of the probable time and bandwidth required for a baseline transfer completion, a resolution that avoids a baseline transfer of data is preferable to a resolution that triggers one.

• This procedure assumes you are viewing the Conformance Details dialog box that you displayed by selecting a nonconformant dataset in the Datasets tab and clicking after its nonconformant status display.


Steps

1. In the Conformance Details dialog box, confirm that the messages indicate that the conformance issues cannot be resolved automatically.

2. Using the conformance messages, determine what is causing the nonconformance problem and attempt to correct the condition manually.

You might need to log in to another GUI or CLI console to resolve the issues.

3. After you have attempted to correct the condition, wait at least one hour for the conformance monitor to update the dataset's conformance status.

4. Return to the Conformance Details dialog box and click Test Conformance to determine if the conformance issue is resolved.

If the conformance issue is resolved, the Conformance Details dialog box does not display the"Conform" button.

5. If the conformance issue is resolved, click Cancel.

6. If the conformance issue is not resolved, repeat Steps 2, 3, and 4.
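The fix-wait-retest cycle in Steps 2 through 6 can be sketched as a small loop. This is only a conceptual illustration; fix_manually and test_conformance are hypothetical stand-ins for your manual corrections and the Test Conformance check, not OnCommand APIs.

```python
import time

def resolve_manually(fix_manually, test_conformance,
                     wait_seconds=3600, max_attempts=3):
    """Correct the condition, wait for the conformance monitor to refresh,
    then re-test; repeat until conformant or attempts are exhausted."""
    for _ in range(max_attempts):
        fix_manually()              # Step 2: attempt the manual correction
        time.sleep(wait_seconds)    # Step 3: wait at least one hour
        if test_conformance():      # Step 4: click Test Conformance
            return True             # Step 5: resolved
    return False                    # still nonconformant after the retries
```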

After you finish

After you achieve dataset conformant status, continue with the operation that required the dataset to be conformant.

Related references

Administrator roles and capabilities on page 506

Resolving conformance issues automatically without a baseline transfer of data

If the text displayed in the Conformance Details dialog box identifies conformance issues that you can automatically resolve without reinitializing a baseline transfer of data, you can use the dialog box controls to do so.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

• Because of the probable time and bandwidth required to complete a baseline transfer, a resolution that avoids a baseline transfer of data is preferable to a resolution that triggers one.

• This procedure assumes you are viewing the Conformance Details dialog box that you displayed by selecting a nonconformant dataset in the Datasets tab and clicking the button after its nonconformant status display.


Steps

1. In the Conformance Details dialog box, read the text to determine the ability of the conformance engine to automatically resolve the nonconformant condition without reinitializing a baseline transfer of data.

2. If the text suggests that a simple automatic resolution is possible, click Conform.

The OnCommand console conformance engine closes the Conformance Details dialog box and attempts to reconfigure storage resources to resolve storage service protection and provisioning policy conformance issues automatically.

3. Monitor the conformance status on the Datasets tab for one of the following values:

• Conformant: The conformance issue is resolved.

• Nonconformant: The conformance issue is not resolved. Consider manual resolution of the issue.

After you finish

After you achieve dataset conformant status, continue with the operation that required the dataset to be conformant.

Related references

Administrator roles and capabilities on page 506

Resolving conformance issues manually when a baseline transfer of data might be necessary

If the text displayed in the Conformance Details dialog box identifies conformance issues whose automated resolution might require a baseline transfer of data, attempt a manual resolution before using automated resolution.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

• Because of the probable time and bandwidth required to complete a baseline transfer, a resolution that avoids a baseline transfer of data is preferable to a resolution that triggers one.

• This procedure assumes you are viewing the Conformance Details dialog box that you displayed by selecting a nonconformant dataset in the Datasets tab and clicking the button after its nonconformant status display.


Steps

1. In the Conformance Details dialog box, confirm that warning text is displayed that indicates that a reinitialized baseline transfer of data might be required.

You should try to resolve the conformance issues manually before initializing a time-consuming baseline transfer of your data.

2. Using the conformance messages, determine what is causing the conformance problem and attempt to correct the condition manually.

You might need to log in to another GUI or CLI console to resolve the issues.

3. After you have attempted to correct the condition, wait at least one hour for the conformance monitor to update the dataset's conformance status.

4. Return to the Conformance Details dialog box and click Test Conformance to determine if the conformance issue is resolved.

If the conformance issue is resolved, the Conformance Details dialog box does not display the "Conform" button.

5. If the conformance issue is not resolved, click Conform to attempt automated resolution and initiate a rebaseline of your data.

After you finish

After you achieve dataset conformant status, continue with the operation that required the dataset to be conformant.

Related references

Administrator roles and capabilities on page 506

Monitoring datasets

Overview of dataset status types

The OnCommand console reports on each dataset's protection status, conformance status, and resource status.

Although protection policies are not assigned directly to datasets of virtual objects, the OnCommand console still displays the statuses related to protection policies that are assigned indirectly to datasets of virtual objects, as components of assigned storage services.


Descriptions of dataset protection status

A dataset is remotely protected only if the secondary storage system specified in the protection relationship is successfully backing up data, and if the copies of the data can be restored. You can monitor dataset status using the Datasets tab.

You should regularly monitor a dataset's protection, because the OnCommand console cannot sufficiently protect the dataset under the following conditions:

• If a secondary storage system runs out of storage space necessary to meet the retention duration required by the protection policy

• If the lag thresholds specified by the policy are exceeded

The following list describes protection status values and their descriptions:

Baseline Failed: No initial baseline data transfers have registered a backup version.

Initializing: The dataset is conforming to the protection policy and the initial baseline data transfer is in process.

Job Failure: The most recent protection job did not succeed.

This status might result for any of the following reasons:

• A backup from a SnapVault or Qtree SnapMirror relationship failed or could not be registered.

• A mirror copy from a SnapMirror relationship failed or could not be registered.

• Local backups (Snapshot copies) failed on the primary node.

Lag Error: The dataset has reached or exceeded the lag error threshold specified in the assigned protection policy. This value indicates that there has been no successful backup or mirror copy of a node's data within a specified period of time.

This status might result for any of the following reasons:

• The most recent local backup (Snapshot copy) on the primary node is older than the threshold setting permits.

• The most recent backup (SnapVault or Qtree SnapMirror) is older than the lag threshold setting or no backup jobs have completed since the dataset was created.

• The most recent mirror (SnapMirror) copy is older than the lag threshold setting or no mirror jobs have completed since the dataset was created.

Lag Warning: The dataset has reached or exceeded the lag warning threshold specified in the assigned protection policy. This value indicates that there has been no successful backup or mirror copy of a node's data within a specified period of time.

This status might result for any of the following reasons:


• The most recent local backup (Snapshot copy) on the primary node is older than the threshold setting permits.

• The most recent backup (SnapVault or Qtree SnapMirror) is older than the lag threshold setting or no backup jobs have completed since the dataset was created.

• The most recent mirror (SnapMirror) copy is older than the lag threshold setting or no mirror jobs have completed since the dataset was created.

No Protection Policy: The dataset is managed by the OnCommand console but no protection policy has been assigned to the dataset.

Protected: The dataset has an assigned policy and it has conformed to that policy at least once.

Protection Suspended: An administrator has requested that scheduled backups be temporarily halted until the administrator requests that they be resumed.

Uninitialized: This status might result for any of the following reasons:

• The dataset has a protection policy that does not have any protection operations scheduled.

• The dataset does not contain any data to be protected.

• The dataset does not contain storage for one or more destination nodes.

• The single node dataset does not have any backup versions.

• An application dataset requires at least one backup version associated with it.

• The dataset does not contain any backup or mirror relationships.

No Local Policy: A dataset of virtual objects has no local policy assigned.

Attention: Importing an external relationship into a dataset temporarily changes the dataset's protection status to Uninitialized. When the next scheduled backup or mirror backup job runs or when you run an on-demand backup, the protection status changes to reflect the results of the protection job.
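The Lag Warning and Lag Error statuses above both reduce to comparing the age of the most recent successful backup or mirror copy against the thresholds in the protection policy. A minimal sketch of that comparison (the threshold values in the example are illustrative, not product defaults):

```python
from datetime import datetime, timedelta

def lag_status(last_successful_copy: datetime, now: datetime,
               warning_threshold: timedelta, error_threshold: timedelta) -> str:
    """Classify a node's protection lag against the policy thresholds."""
    lag = now - last_successful_copy
    if lag >= error_threshold:
        return "Lag Error"
    if lag >= warning_threshold:
        return "Lag Warning"
    return "In threshold"

# Example: a 30-hour-old backup with 24-hour warning and 48-hour error thresholds
now = datetime(2011, 7, 1, 12, 0)
status = lag_status(now - timedelta(hours=30), now,
                    timedelta(hours=24), timedelta(hours=48))  # "Lag Warning"
```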

Descriptions of dataset conformance status

The dataset conformance status indicates whether a dataset is configured according to its local policy or storage service's protection policy. To be in conformance, all secondary and tertiary storage that is part of the backup relationship must be successfully provisioned and the provisioned objects must match the requirements of the primary data. You can monitor dataset status using the Datasets tab.

The OnCommand console regularly checks a dataset for conformance. If it detects changes in the dataset's membership or policy definition, the console does one of three things:

• Automatically performs corrective steps to bring a dataset back into conformance

• Presents you with a list of actions for your approval prior to correction

• Lists conditions that it cannot resolve


You can view these actions and approve them in the Conformance Details dialog box.

A dataset might be nonconformant because there are no available resources from which to provision the storage or because the NetApp Management Console data protection capability does not have the necessary credentials to provision the storage resources.

The following list describes dataset conformance values:

Conformant: The dataset is conformant with all associated policies.

Conforming: The dataset is not in conformance with all associated policies. The OnCommand console is performing actions to bring the dataset into conformance.

Nonconformant: The OnCommand console cannot bring the dataset into conformance with all associated policies and might require your approval or intervention to complete this task.

Descriptions of dataset resource status

The dataset resource status indicates the event status for all resource objects that are assigned to the dataset. The resources include those that are members of the secondary and tertiary storage systems. If, for example, a tertiary member's status is critical, the dataset's resource status also is displayed as critical.

You can monitor dataset status using the Datasets tab. You can troubleshoot the resource objects by reviewing their related events.

The following list describes resource status values:

Normal: A previous abnormal condition for the resource returned to a normal state and the resource is operating within the desired thresholds. No action is required.

Information: A normal resource event occurred. No action is required.

Warning: The resource experienced an occurrence that you should be aware of. This event severity does not cause service disruption, and corrective action might not be required.

Error: The resource is still performing, but corrective action is required to avoid service disruption.

Critical: A problem occurred that might lead to service disruption if you do not take immediate corrective action.

Emergency: The resource unexpectedly stopped working and experienced unrecoverable data loss. You must take corrective action immediately to avoid extended downtime.
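Because the dataset's resource status reflects the worst status among its member resources (for example, one critical tertiary member makes the whole dataset critical), the rollup behaves like a maximum over a severity ordering. A sketch, assuming the ordering matches the list above:

```python
# Least to most severe, following the list above
SEVERITY_ORDER = ["Normal", "Information", "Warning",
                  "Error", "Critical", "Emergency"]

def dataset_resource_status(member_statuses):
    """Roll member resource statuses up to the most severe one."""
    return max(member_statuses, key=SEVERITY_ORDER.index)
```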

How to evaluate dataset conformance to policy

The OnCommand console periodically checks that the dataset conforms to its data protection policy. If either the dataset membership or policy changes, the OnCommand console either tries to bring the


dataset back into conformance or notifies the administrator that the dataset conformance status changed to nonconformant.

You can view the Conformance Results window from the Datasets tab by clicking the button next to Conformance in the Status area.

Why datasets fail to conform to policy

A dataset must meet several conditions to be conformant to its assigned policy.

A dataset is conformant with its policy when it meets the following conditions:

• Its member storage systems are properly configured.

• Its assigned secondary storage system is provisioned and has enough backup space.

• Its protection policy includes all necessary relationships to enforce data backups or mirror copies.

Following are some of the common reasons datasets fail to conform to their protection policy:

• Dataset storage service and protection policy definitions changed.

• Dataset membership changed.

• Volumes or qtrees were created or deleted at the storage system (external to the OnCommand console).

• The configuration of a dataset of virtual objects has been updated, but not yet communicated from the DataFabric Manager server to the host service. This nonconformance condition normally lasts only a few minutes until the DataFabric Manager server updates the host service with the latest configuration.

How the OnCommand console monitors dataset conformance

The OnCommand console conformance monitor regularly checks the DataFabric Manager server database for configuration information to determine if a dataset is in conformance with its assigned policy.

Conformance status is determined based on data gathered from SNMP queries by system monitors. The monitors update the DataFabric Manager server database at scheduled intervals. The conformance monitor queries the DataFabric Manager server database for the information, which is then displayed in the OnCommand console. As a result, the information displayed by the conformance monitor is not real-time data. This can result in the conformance monitor results being temporarily out of date with actual changes to a storage system or configuration.

Should a dataset be nonconformant, you can view the conformance details from the Datasets tab by selecting an item in the dataset list and then clicking the button next to Conformance in the Status area. This opens the Conformance Results window, which shows the results of the last conformance run on the selected dataset. The results provide a description of any problems found during the last conformance run and suggestions for resolving the problems. Depending on the results shown, you can then either make changes manually to your system configuration or click Conform to allow the OnCommand console to automatically make changes in an attempt to bring the dataset into conformance.

When you click Conform, you give the OnCommand console full control to do whatever it can to bring everything into conformance for the selected dataset. This could include initiating a rebaseline


of your data, which might require significant time and bandwidth. As a result, if you would not want a rebaseline to occur, you should try manual corrections to your system to resolve conformance issues before you choose to use the Conform option.

After making manual corrections to your system, you can return to the Conformance Results window and click the Test Conformance button to see if any changes made to the system have brought the dataset into conformance with the policy assigned to it. Test Conformance initiates a new check on the dataset but does not execute a conformance run. The results of the check reflect the latest system updates that have been identified by the monitors and captured in the DataFabric Manager server database. Therefore, the information displayed in the Conformance Results window might not reflect recent changes made to a storage system or configuration and could be outdated by a few minutes or a few hours, depending on the changes made and the scanning interval for each monitor.

You can view a list of monitor intervals by using the command dfm option list | grep Interval. Following are some common monitoring actions, the default update intervals for standard DataFabric Manager server configurations, and the associated monitors:

Discover new hosts Default interval: 15 minutes

Monitor: discover

Update sizes for aggregates, volumes, and free or used space Default interval: 30 minutes

Monitor: dfmon

Find new disks, aggregates, volumes, qtrees Default interval: 15 minutes

Monitors: fsmon, diskmon

Find vFiler units Default interval: 1 hour

Monitor: vfiler

SnapMirror, SnapVault, and OSSV directory discovery Default interval: 30 minutes

Monitor: relationships

Update license capabilities on storage systems Default interval: 4 hours

Monitor: license
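Programmatically, filtering the dfm option list output for interval settings mirrors the grep shown above. A sketch that parses sample output; the option names and column layout below are assumptions for illustration, not the exact dfm output format:

```python
SAMPLE_DFM_OUTPUT = """\
discoverEnabled        Yes
discoverInterval       15 minutes
fsMonInterval          15 minutes
vFilerMonInterval      1 hour
licMonInterval         4 hours
"""

def monitor_intervals(dfm_option_output: str) -> dict:
    """Keep only the *Interval options, mapped to their current values."""
    intervals = {}
    for line in dfm_option_output.splitlines():
        parts = line.split(None, 1)  # option name, then the value text
        if len(parts) == 2 and "Interval" in parts[0]:
            intervals[parts[0]] = parts[1].strip()
    return intervals
```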

Dataset conformance conditions

The OnCommand console displays a dataset's conformance status in the Datasets tab. If the dataset is nonconformant, there are several ways in which the dataset can be brought back into conformance.

You can view the Conformance Results window from the Datasets tab by clicking the button next to Conformance in the Status area.

When the conformance monitor detects a change in the dataset's membership or policy definition, the conformance monitor does one of three things:

• Automatically performs corrective steps to bring a dataset back into conformance

• Presents you with a list of actions for your approval prior to correction


• Lists conditions that it cannot resolve

Conditions the monitor can resolve

The following list describes some of the conditions that the conformance monitor can detect, the actions it can take to bring the dataset back into conformance with its policy, and whether those actions are automatic or require your approval for completion.

• The OnCommand console provisions a destination volume but the aggregate in which the volume is contained is no longer a member of the assigned resource pool.

Corrective action: The OnCommand console creates a new volume and moves the relationship to it.

Does the action require your approval? Yes

Corrective action: The OnCommand console moves the relationship to an existing volume.

Does the action require your approval? Yes


• The destination volume does not have enough backup space or it is over its "nearly full" threshold.

Corrective action: The OnCommand console expands the volume in an aggregate that is a member of the resource pool.

Does the action require your approval? No

Corrective action: The OnCommand console provisions the volume and migrates the physical relationship to a new destination volume.

Does the action require your approval? Yes

• The destination volume contains expired backup versions.

Corrective action: The OnCommand console deletes the backup versions. The console also deletes the copies of the data if those copies do not contain other backup versions.

Does the action require your approval? No

• Policy calls for the source data to be mirrored but the source volume is not protected in a mirror relationship.


Corrective action: The OnCommand console creates a new relationship and performs a baseline transfer of the data. A baseline transfer is defined as an initial backup (also known as a level-0 backup) of a primary volume to a secondary volume in which the entire contents of the primary volume are transferred.

Does the action require your approval? No

• Policy calls for the source data to be backed up but the source qtree is not protected in a backup relationship.

Corrective action: The OnCommand console creates a new relationship and performs a baseline transfer of the data.

Does the action require your approval? No

• The primary volume has extra mirror relationships.

Corrective action: The OnCommand console deletes the extra relationships.

Does the action require your approval? No

• The primary qtree has extra backup relationships.

Corrective action: The OnCommand console deletes the extra relationships.

Does the action require your approval? No
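Taken together, the conditions above pair each detected problem with a corrective action and an approval flag. Modeling that as data makes the approval rule easy to see; the entries below paraphrase a few of the listed cases and are illustrative only:

```python
# (condition, corrective action, requires approval?)
CORRECTIVE_ACTIONS = [
    ("aggregate no longer in resource pool",
     "create a new volume and move the relationship", True),
    ("destination volume nearly full",
     "expand the volume within a resource-pool aggregate", False),
    ("expired backup versions on destination",
     "delete the expired backup versions", False),
    ("primary volume has extra mirror relationships",
     "delete the extra relationships", False),
]

def actions_needing_approval(actions):
    """Return only the corrective actions held for administrator approval."""
    return [action for _, action, approval in actions if approval]
```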

Conditions the monitor cannot resolve

The following list describes some of the conditions that the conformance monitor can detect but cannot resolve. These conditions require manual intervention from an administrator to bring the dataset back into conformance with its policy.

• An imported relationship has been detected in which the secondary volume exceeds the volFullThreshold.

Corrective action: You must manually increase the secondary volume size. The conformance monitor cannot resolve this condition.

• A dataset's assigned secondary resources do not offer appropriate backup space.

Corrective action: You must reconfigure the resource pool membership so that the OnCommand console can successfully continue data protection.

• The application does not have the appropriate credentials to access the assigned resources.

Corrective action: You must provide the credentials for access to the hosts or storage systems.

What is a test conformance check?

When you create or edit a dataset, you can run a test conformance check of your dataset configuration or reconfiguration before you commit the OnCommand console to implementing those configuration changes.

Because initializing a dataset protection configuration requires a baseline transfer of data from primary storage to secondary storage and tertiary storage and can take a lot of time, common practice


is to run a quick test on your dataset configuration settings to ensure their conformance to the assigned protection and provisioning policies before you commit to that initializing process.

A Test Conformance button, located in the Edit Dataset dialog box and, when appropriate, in the Conformance Details dialog box, enables you to test conformance when necessary.

What is the difference between Test Conformance and Conform Now?

You can use both of the command buttons in the Conformance Details dialog box to help resolve conformance issues. However, these two buttons perform very different functions, and you should become familiar with them before using either button.

Test Conformance: Allows you to test whether manual changes that you have made to your dataset configuration have brought it into conformance with its protection and provisioning policies before you execute a conformance run. The results of the test reflect the latest system updates that have been identified by the monitors and changes you have specified but not yet executed in the current OnCommand console session.

Conform Now: Initiates the automated attempt by the OnCommand console to reconfigure the current dataset to be in conformance with its protection and provisioning policies. Clicking this button allows the conformance monitor to do whatever it can to bring the dataset into conformance.

Possible automated actions include re-initiating a baseline transfer of your data, which usually requires significant time and bandwidth. If the Conformance Details dialog box message text indicates that a reinitialized baseline transfer might be triggered, and you want to avoid the associated time and bandwidth cost, you should try manual corrections to your system to resolve conformance issues before you choose to use the Conform Now option.

Monitoring dataset status

You can monitor the status of your datasets for possible errors.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset whose status you want to monitor.

The Overview area of the Datasets tab displays the protection, conformance, resources, and space statuses of the selected dataset.


3. To view status details, click the button, if displayed.

If the button is displayed for resources, clicking it displays a dialog box that lists events related to warning-level or critical-level resource issues. You can use the Acknowledge button to mark an event as acknowledged. If you take actions outside of the dialog box that resolve an event issue, you can use the Resolve button to mark that event as resolved.

4. To view details about secondary or tertiary nodes, click the corresponding tabs for these nodes.

Related references

Administrator roles and capabilities on page 506

Window layout customization on page 16

Monitoring backup and mirror relationships

You can monitor backup or mirror relationships that are governed by a protection policy that is assigned to a dataset. For example, you can monitor the lag status of each relationship or monitor that backups are being created as scheduled.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

Although protection policies are not assigned directly to datasets of virtual objects, the OnCommand console still displays the backup and mirror relationships of a protection policy that is assigned indirectly to datasets of virtual objects, as a component of an assigned storage service.

Steps

1. Click the View menu and click the Datasets option.

2. In the Datasets tab, select the dataset whose backup or mirror relationships you want to monitor.

3. (Optional) You can customize the Datasets tab to your needs.

Related references

Administrator roles and capabilities on page 506

Window layout customization on page 16


Listing nonconformant datasets and viewing details

You can list existing datasets that have become nonconformant with their protection and provisioning policies. Then, you can view and attempt to resolve their nonconformance. Conformance issues can prevent completion of dataset protection and provisioning operations.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu and click the Datasets option to display the Datasets tab.

2. In the Datasets tab, click the column header labeled Conformance Status and select Nonconformant.

3. If the Datasets tab lists a dataset with nonconformant status, select that dataset to display its Details area.

4. In the selected dataset's Details area, click the button.

The Conformance Details dialog box displays the results of the most recent conformance check and suggestions for resolving the issues encountered.

After you finish

After you display the Conformance Details dialog box, you must address and resolve the issues that are indicated by its warning and error messages.

Related references

Administrator roles and capabilities on page 506

Evaluating and resolving issues displayed in the Conformance Details dialog box

You can use the Conformance Details dialog box to evaluate and resolve a dataset's conformance issues. You can evaluate the error and warning messages that the dialog box displays and attempt to resolve the conformance issues they describe.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.


About this task

This procedure assumes you are viewing the Conformance Details dialog box that you displayed by selecting a nonconformant dataset in the Datasets tab and clicking the button after its nonconformant status display.

The dialog box displays Information, Error, Action, Reason, and Suggestion text about the nonconformance condition.

Steps

1. Evaluate the displayed Information, Error, Action, Reason, or Suggestion text.

Information: Indicates configuration operations that the OnCommand console successfully completed without conformance issues.

Error: Indicates the configuration operations that the OnCommand console cannot perform on this dataset due to conformance issues.

Action: Indicates what the OnCommand console conformance engine did to discover the conformance issue.

Reason: Indicates the probable cause of the conformance issue.

Suggestion: Indicates a possible way of resolving the conformance issue. Sometimes the suggested resolution involves a baseline transfer of data, although it is usually desirable to avoid reinitiating a baseline transfer of data.

2. Based on the dialog box text, decide the best way to resolve the conformance issue.

• If the dialog box text indicates that the OnCommand console conformance monitor cannot automatically resolve the conformance issue, resolve this issue manually.

• If the Suggestion text indicates that automatically resolving the conformance issues requires a baseline transfer of data, first attempt to resolve this issue manually. If unsuccessful, consider resolving the issue automatically even if doing so requires a baseline transfer of data.

• If the Suggestion text indicates that the OnCommand console conformance monitor can resolve the conformance issue automatically without reinitiating a baseline transfer of data, first consider resolving the issue manually. If unsuccessful, resolve the issue automatically.

After you finish

Proceed to the appropriate task to resolve the dataset conformance issues.

Related references

Administrator roles and capabilities on page 506


Resolving conformance issues manually without a baseline transfer of data

If the text displayed in the Conformance Details dialog box identifies conformance issues that you cannot resolve automatically, you can resolve the conformance issue manually.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

• Because of the probable time and bandwidth required to complete a baseline transfer, a resolution that avoids a baseline transfer of data is preferable to a resolution that triggers one.

• This procedure assumes you are viewing the Conformance Details dialog box that you displayed by selecting a nonconformant dataset in the Datasets tab and clicking the button after its nonconformant status display.

Steps

1. In the Conformance Details dialog box, confirm that the messages indicate that the conformance issues cannot be resolved automatically.

2. Using the conformance messages, determine what is causing the nonconformance problem and attempt to correct the condition manually.

You might need to log in to another GUI or CLI console to resolve the issues.

3. After you have attempted to correct the condition, wait at least one hour for the conformance monitor to update the dataset's conformance status.

4. Return to the Conformance Details dialog box and click Test Conformance to determine if the conformance issue is resolved.

If the conformance issue is resolved, the Conformance Details dialog box does not display the "Conform" button.

5. If the conformance issue is resolved, click Cancel.

6. If the conformance issue is not resolved, repeat Steps 2, 3, and 4.

After you finish

After you achieve dataset conformant status, continue with the operation that required the dataset to be conformant.

Related references

Administrator roles and capabilities on page 506


Resolving conformance issues automatically without a baseline transfer of data

If the text displayed in the Conformance Details dialog box identifies conformance issues that you can automatically resolve without reinitializing a baseline transfer of data, you can use the dialog box controls to do so.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

• Because of the probable time and bandwidth required for a baseline transfer completion, a resolution that avoids a baseline transfer of data is preferable to a resolution that triggers one.

• This procedure assumes you are viewing the Conformance Details dialog box that you displayed by selecting a nonconformant dataset in the Datasets tab and clicking after its nonconformant status display.

Steps

1. In the Conformance Details dialog box, read the text to determine the ability of the conformance engine to automatically resolve the nonconformant condition without reinitializing a baseline transfer of data.

2. If the text suggests that a simple automatic resolution is possible, click Conform.

The OnCommand console conformance engine closes the Conformance Details dialog box and attempts to reconfigure storage resources to resolve storage service protection and provisioning policy conformance issues automatically.

3. Monitor the conformance status on the Datasets tab for one of the following values:

• Conformant: The conformance issue is resolved.
• Nonconformant: The conformance issue is not resolved. Consider manual resolution of the issue.

After you finish

After you achieve dataset conformant status, continue with the operation that required the dataset to be conformant.

Related references

Administrator roles and capabilities on page 506


Resolving conformance issues manually when a baseline transfer of data might be necessary

If the text displayed in the Conformance Details dialog box identifies conformance issues whose automated resolution might require a baseline transfer of data, attempt a manual resolution before using automated resolution.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

• Because of the probable time and bandwidth required for baseline transfer completion, a resolution that avoids a baseline transfer of data is preferable to a resolution that triggers one.

• This procedure assumes you are viewing the Conformance Details dialog box that you displayed by selecting a nonconformant dataset in the Datasets tab and clicking after its nonconformant status display.

Steps

1. In the Conformance Details dialog box, confirm that warning text is displayed that indicates that a reinitialized baseline transfer of data might be required.

You should try to resolve the conformance issues manually before initializing a time-consuming baseline transfer of your data.

2. Using the conformance messages, determine what is causing the conformance problem and attempt to correct the condition manually.

You might need to log in to another GUI or CLI console to resolve the issues.

3. After you have attempted to correct the condition, wait at least one hour for the conformance monitor to update the dataset's conformance status.

4. Return to the Conformance Details dialog box and click Test Conformance to determine if the conformance issue is resolved.

If the conformance issue is resolved, the Conformance Details dialog box does not display the "Conform" button.

5. If the conformance issue is not resolved, click Conform to attempt automated resolution and initiate a rebaseline of your data.

After you finish

After you achieve dataset conformant status, continue with the operation that required the dataset to be conformant.


Related references

Administrator roles and capabilities on page 506

Page descriptions

Datasets tab

The Datasets tab enables you to create, edit, survey, and manage protection of your datasets.

From the Datasets tab you can launch the configuration of datasets of both virtual objects and physical objects, monitor their status, back up dataset content on demand, suspend and resume protection, and initiate restore operations.

• Command buttons on page 251
• Datasets list on page 253
• Members list on page 255
• Related objects pane on page 255
• Graph area on page 255
• Overview tab on page 255
• Primary Node tab on page 256
• Primary Node to Backup tab on page 257
• Backup tab on page 257

Command buttons

Create Enables you to create datasets to manage physical storage objects or virtual objects. The Create command gives you the following sub-options:

• If you select the Dataset with Hyper-V objects sub-option, starts the Create Dataset dialog box for adding a Hyper-V virtual object dataset.

• If you select the Dataset with VMware objects sub-option, starts the Create Dataset dialog box for adding a VMware virtual object dataset.

• If you select the Dataset with Storage objects sub-option, starts the NetApp Management Console Add Dataset wizard for adding a storage dataset.

Edit Enables you to edit the configuration of a selected dataset.

• If you select a dataset of VMware virtual objects or Hyper-V virtual objects, starts the OnCommand console Edit Dataset dialog box for editing datasets that hold virtual objects.

• If you select a dataset of storage objects, starts the NetApp Management Console Edit Dataset window for editing a storage dataset.


• If you select a dataset that still contains no objects, enables you to specify the type of dataset that you want it to be (Dataset with Hyper-V objects, Dataset with VMware objects, or Dataset with Storage objects).

Delete Deletes the selected dataset or datasets and thereby removes the protection relationships among its member objects.

More Displays a menu of additional commands.

Suspend Suspends dataset protection for a dataset.

During the time that the OnCommand console suspends protection, it displays a Protection Suspended status for the dataset.

Resume Resumes suspended dataset backup protection for a dataset.

Restore (Applies to datasets of physical storage objects only).

For datasets of physical storage objects, opens the NetApp Management Console Restore wizard, from which you can select backed-up copies of a selected resource to restore as the primary data.

Restore of virtual objects is supported on the Backups tab and the Server tab of the OnCommand console.

Storage Service Attach

(Enabled for datasets of physical storage objects)

Attaches a storage service to the selected dataset of physical storage objects.

Storage Service Detach

(Enabled for datasets of physical storage objects)

Detaches a currently attached storage service from the selected dataset of physical storage objects.

Storage Service Change

Opens the NetApp Management Console Edit Dataset window to enable you to change the storage service that is assigned to the selected dataset.

Back Up Now

Enables you to perform on-demand backup or mirror protection operations on datasets.

• For datasets of VMware or Hyper-V objects, opens the Back Up Now dialog box in the OnCommand console.

• For datasets of physical storage objects, opens the Protect Now dialog box in NetApp Management Console.

Refresh Updates the datasets list.


Datasets list

A list that provides information about existing datasets. Click a row in the list to view information in the Details area about the selected dataset.

Name Displays the name of the dataset.

Data Type Displays the type of object that the selected dataset contains.

Valid data types include:

• Physical
• VMware
• Hyper-V
• Undefined (the dataset is empty of objects)

Overall Status Displays the status derived from the combined status conditions for disaster recovery, protection, conformance, space, and resources.

Overall status is computed based upon the following other status values:

Overall Status: Error

• DR status condition: Error
• Protection status condition: Lag error or Baseline failed
• Conformance status condition: Nonconformant
• Space status condition: Error
• Resource status condition: Emergency, Critical, or Error

Overall Status: Warning

• DR status condition: Warning
• Protection status condition: Job failure, Lag warning, Uninitialized, or No protection policy for a non-empty dataset
• Conformance status condition: NA
• Space status condition: Warning
• Resource status condition: Warning
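The precedence implied by the conditions above can be sketched as a simple check. This is a hypothetical illustration only: the function, its signature, and its "Normal" fallback value are assumptions, not an OnCommand API; the status names come from the tables above.

```python
def overall_status(dr, protection, conformance, space, resource):
    """Sketch of how Overall Status could be derived from the component
    status conditions listed above. Illustrative only."""
    if (dr == "Error"
            or protection in ("Lag error", "Baseline failed")
            or conformance == "Nonconformant"
            or space == "Error"
            or resource in ("Emergency", "Critical", "Error")):
        return "Error"
    if (dr == "Warning"
            or protection in ("Job failure", "Lag warning", "Uninitialized",
                              "No protection policy for a non-empty dataset")
            or conformance == "NA"
            or space == "Warning"
            or resource == "Warning"):
        return "Warning"
    return "Normal"  # assumed fallback when no error or warning condition applies
```

Note that any single error condition outweighs all warning conditions, which matches the two-tier derivation described above.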

Storage Service Displays what storage service, if any, is attached to the dataset. A dataset that is attached to a storage service uses the protection policy, provisioning policies, resource pools, and vFiler unit configurations specified by that storage service.

Local Policy Displays the name of the local protection policy, if any, that is attached to the selected dataset. Datasets of virtual objects might have a local policy attached to them.

Protection Policy

Displays the name of the protection policy that is either assigned directly to a dataset of physical storage objects or to a storage service that is then assigned to a dataset. This information is hidden by default.


Primary Provisioning Policy

Displays the name of the provisioning policy currently assigned to the primary node of the dataset. If a provisioning policy is assigned to a secondary node in the dataset, that name is displayed in the details area when you select the secondary node in the graph area. This information is hidden by default.

Space Status Displays the status of the available space for the selected dataset node (OK, Warning, Error, or Unknown).

Conformance Status

Indicates whether the dataset is Conformant, Nonconformant, or Conforming.

Protection Status

Displays protection status.

Valid status values, in alphabetical order, are as follows:

• Baseline Failure
• Initializing
• Job Failure
• Lag Error
• Lag Warning
• No Local Policy
• No Protection Policy Attached
• Not Protected
• Protected
• Protection Suspended
• Uninitialized

Resource Status Displays the most severe of all current events on all direct and indirect members of the dataset nodes. Values can be Emergency, Critical, Error, Warning, or Normal.

Failed Over Indicates whether a disaster recovery-capable dataset has failed over. This information is hidden by default. Valid values are as follows:

• Yes: Failover on the dataset was invoked and completed successfully, completed with warnings, or completed with errors.

• No: Failover on the dataset has not been invoked.

• In Progress: Failover on the dataset is currently in progress.

• Not Applicable: The dataset is not assigned a disaster recovery protection policy and, therefore, is not capable of failover.

Description Describes the dataset.


Application Displays the name of the application that created an application dataset, such as SnapManager for Oracle. This item is not included in the dataset list by default.

Application Version

Displays the version of the application that created the application dataset. This item is not included in the dataset list by default.

Application Server

Displays the name of the server that runs the application that created the application dataset. This item is not included in the dataset list by default.

Members list

A folder list of the physical storage object types or the virtual object types that are currently included as members of the selected dataset.

• Clicking the folder for an object type displays the names of the dataset members of that object type.

• The names of virtual objects are linked to a dataset inventory page for their object type.

• The names of physical storage objects are only listed.

• If a dataset of physical objects contains more than three members of an object type, clicking More displays all members of that type in a popup dialog box.

• If the dataset of virtual objects contains more than three members of an object type, clicking More displays all members of that type in the inventory page for that object type.

Related objects pane

A folder list of all the object types in primary storage and the backup copies that are created in relation to the protection activities executed by the OnCommand console on this dataset.

• Clicking the folder for an object type displays the names of the dataset's related objects of that object type.

• If the dataset contains more than three objects of that type, clicking More displays all objects of that type, either in a popup dialog box or in the inventory page for that object type.

Graph area

The graphical representation of the nodes for the selected dataset is displayed in the lower section of the page.

Overview tab

This tab displays the following status and general property details of the selected dataset:

Protection Displays protection status. Click next to the status value to view status details and a list of any jobs that are associated with the status.

• Baseline Failure
• Initializing
• Job Failure
• Lag Error
• Lag Warning
• No Protection Policy Attached
• Not Protected
• Protected
• Protection Suspended
• Uninitialized

Conformance Indicates whether the dataset is conformant. If a dataset is nonconformant, click to evaluate errors and warnings and to run the conformance checker.

Resources Represents the most severe of all current events on all direct and indirect members of the dataset nodes. Values can be Emergency, Critical, Error, Warning, or Normal. For Emergency, Critical, Error, or Warning conditions, click to evaluate the events and sources causing those conditions.

Space Displays the status of the available space for the selected dataset node (OK, Warning, Error, or Unknown). If any volume, qtree, or LUN of a dataset has space allocation error or warning conditions, the dataset's space status indicates that condition. You can select the dataset to scan its volumes, LUNs, or qtrees to determine which member is the cause of the warning or error condition.

Failed over Indicates whether a disaster recovery-capable dataset has failed over. This is displayed for disaster recovery-capable datasets only. Valid values are Yes, No, In Progress, and Not Applicable.

Description Displays a description of the dataset if one is entered.

Owner Displays the owner of the current dataset if one is specified.

Contact Displays the e-mail contact address for this dataset if one is specified.

Time Zone Displays the time zone in which the primary node of the selected dataset is located.

This detail applies only to empty datasets or datasets of physical objects.

Custom Label Displays values that the user might have defined for this dataset.

Primary Node tab

This subtab displays the schedule of local Snapshot copy backups to be executed on the primary dataset node. If you change the name of the primary node, the title of this subtab matches your change.

Local Backup Schedule

Displays the names of the local backup schedules that are assigned to the primary data node of this dataset.


Primary Node to Backup tab

If the selected dataset is configured with secondary backup or mirror protection, this subtab is displayed. If you change the default names of your primary or secondary nodes, the title of this subtab matches your changes.

This subtab lists the following information about the connection between the primary node and secondary node:

Relationships Displays the number of existing backup or mirror relationships for the connection between the volume and qtree objects on the primary node and volume and qtree objects on the secondary node.

Schedule Displays the name of the schedule that is assigned to the backup or mirror connection.

Throttle Displays the name of the throttle schedule, if any, that is assigned to the backup or mirror connection.

Lag Status Displays the worst current lag status for the backup or mirror connection.

Note: If the selected dataset contains multiple connections between the primary node and multiple secondary or tertiary nodes, then a subtab similar to this one is displayed for every such connection.

Backup tab

If the selected dataset is configured with secondary backup or mirror protection, this subtab is displayed. If you change the name of the secondary node, the title of this subtab matches your change.

This subtab lists the following information about the secondary node:

Provisioning Policy Lists the provisioning policy, if any, that is assigned to the secondary node.

Physical Resources Lists the physical resources that are assigned to the secondary node.

Resource Pools Lists the resource pools, if any, that are assigned to the secondary node.

Failed over Indicates whether a disaster recovery-capable dataset has failed over. This is displayed for disaster recovery-capable datasets only. Valid values are Yes, No, In Progress, and Not Applicable.

Note: If the selected dataset contains multiple secondary or tertiary nodes, then a subtab similar to this one is displayed for every such node.

Related references

Window layout customization on page 16

Descriptions of dataset protection status on page 197


Create Dataset dialog box or Edit Dataset dialog box

Both the Create Dataset dialog box and the Edit Dataset dialog box enable you to view and configure virtual object membership and protection of a dataset.

• Options on page 258
• Dataset name and topology diagram on page 258
• Dataset nodes table on page 258
• Command buttons on page 259

Options

Name Enables you to name, rename, view, and edit the dataset properties and naming formats of the current dataset.

Data Enables you to view and edit the virtual object membership of this dataset.

Local Policy Enables you to assign a local policy to this dataset (to execute local protection of this dataset's virtual object members).

• You can create and configure a new local policy.
• You can assign, and, if necessary, edit an existing local policy.

Storage service

Enables you to select a storage service for this dataset (to execute remote protection of this dataset's virtual object members) or review an existing storage service assignment.

A storage service assignment cannot be changed.

Dataset name and topology diagram

This area displays the dataset name and the protection topology of the current dataset after you configure the dataset Name area and assign a storage service.

The topology diagram reflects both the dataset's local policy and the type of protection specified by the protection policy that is associated with any assigned storage service.

Dataset nodes table

Below the dataset name and topology, a table provides the node and job schedule details of the current dataset.

Primary node name table column

Lists the following information related to primary storage:

• The names of virtual object members of the current dataset. These are the objects that are located in the primary storage node.

• The local backup jobs that are scheduled in primary storage by the current dataset's assigned local policy. The job times are based on the time settings of the systems running associated host services.

Secondary node name table column

If the current dataset's topology includes secondary storage, lists the following information related to secondary storage:

• Storage systems and resource pools that provision the secondary storage node of the current dataset.

• The schedule for remote backup and mirror protection jobs between primary and secondary storage nodes and retention times for the backed up data. The listed jobs and retention times are those that are specified by the protection policy that is associated with the storage service that is assigned to the current dataset. The times are based on the time settings of the DataFabric Manager server. If the retention duration and retention count for a secondary node are set to 0, the table displays the text "No Transfer (No retention)" to indicate no actual secondary backup of primary data has occurred.

Tertiary node name table column

If the current dataset's topology includes tertiary storage, lists the following information related to tertiary storage:

• Storage systems and resource pools that provision the tertiary storage node of the current dataset.

• The schedule for remote backup and mirror protection jobs between secondary and tertiary storage nodes and retention times for the backed up data. The listed jobs and retention times are those that are specified by the protection policy that is associated with the storage service that is assigned to the current dataset. If the retention duration and retention count for a tertiary node are set to 0, the table displays the text "No Transfer (No retention)" to indicate no actual tertiary backup of data has occurred.
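The "No Transfer (No retention)" behavior described for the secondary and tertiary columns amounts to a simple check. The following is a sketch under assumed parameter names, not an actual DataFabric Manager function:

```python
def retention_cell_text(retention_duration, retention_count):
    """Sketch: when both retention settings for a node are 0, the table
    reports that no data is actually transferred or retained there."""
    if retention_duration == 0 and retention_count == 0:
        return "No Transfer (No retention)"
    # Otherwise the table shows the scheduled jobs and retention times
    # specified by the protection policy (details omitted in this sketch).
    return f"{retention_count} copies retained for {retention_duration}"
```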

Command buttons

Test Conformance

Allows you to pretest the conformance of the latest modifications that you have made to the dataset configuration in this dialog box before you save and apply those modifications.

OK Saves the latest changes that you have made to the data in the Create Dataset dialog box or Edit Dataset dialog box as the latest configuration for this dataset.

Cancel Cancels any changes you have made to the settings in the Create Dataset dialog box or Edit Dataset dialog box since the last time you opened it.


Name area

The Name area displays administrative and naming property information about the current dataset of virtual objects.

General Properties tab

Displays the name of the dataset; enables you to enter dataset description, owner, and contact information about the dataset; and enables you to assign a resource group to the dataset.

Naming Properties tab

Enables you to accept global naming formats for the related objects of this dataset or enables you to configure dataset-level naming formats to be applied to related objects of this dataset.

Related objects are Snapshot copy, primary volume, secondary volume, or secondary qtree objects that are generated by local policy or storage service protection jobs on this dataset.

General Properties tab

The General Properties tab displays the name and administrative information about the current dataset.

Name Enables you to change the name of the current dataset.

Unless you change it, the OnCommand console displays a default dataset name that it has assigned.

Description Enables you to enter or modify a description of the current dataset.

Owner Enables you to enter the name for the owner of this dataset.

Contact Enables you to enter an e-mail contact address.

Naming Properties tab

The Naming Properties tab enables you to specify dataset-level naming properties of Snapshot copies, primary volumes, secondary volumes, and secondary qtrees that are generated when the OnCommand console runs protection jobs on this dataset.

• Custom label on page 261
• Snapshot copy on page 261
• Secondary volume on page 262
• Secondary qtree on page 263

The Naming Properties tab of the Create Dataset dialog box or Edit Dataset dialog box enables you to select or specify values for the following dataset-level Naming Settings.


Custom label

Specifies a dataset-specific identification string that can be included in the name of all objects that a protection job generates for this dataset.

Use dataset name Includes the dataset name in the name of all objects that a protection job generates for this dataset.

Use custom label Enables you to enter a custom character string to be included in the name of all objects that a protection job generates for this dataset.

Snapshot copy

Specifies the name format used for Snapshot copies that are generated by protection jobs run on this dataset. The display of some of these options depends on your previous configuration choices:

Use global naming format

If you previously chose this format in the Global Naming Settings Snapshot Copy area in the Setup Options dialog box, selecting this option applies the global naming format for Snapshot copies to all Snapshot copies that a protection job generates for this dataset.

Use custom format

Enables you to specify a dataset-level naming format to apply to all Snapshot copies that a protection job generates for this dataset. You can enter the following attributes (separated by the underscore character) in this field in any order:

• %T (timestamp attribute)
The year, month, day, and time of the Snapshot copy.

• %R (retention type attribute)
The Snapshot copy's retention class (Hourly, Daily, Weekly, or Monthly).

• %L (custom label attribute)
The custom label, if any, that is specified for the Snapshot copy's containing dataset. If no custom label is specified, then the dataset name is included in the Snapshot copy name. The custom label enables you to specify a custom alphanumeric character, . (period), _ (underscore), or - (hyphen) to include in the names of the related objects that are generated by protection jobs that are run on this dataset. If the naming format for a related object type includes the Custom label attribute, then the value that you specify is included in the related object names. If you do not specify a value, then the dataset name is used as the custom label. If you include a blank space in the custom label string, the blank space is converted to the letter x in any Snapshot copy, volume, or qtree object name that includes the custom label as part of its syntax.

• %H (storage system attribute)
The name of the storage system that contains the volume from which a Snapshot copy is made.

• %N (volume name attribute)
The name of the volume from which a Snapshot copy is made.

• %A (application field attribute)
Data inserted by outside applications into the name of the Snapshot copy. For the NetApp Management Console data protection capability, it is a list of qtrees present in the volume from which a Snapshot copy is made.

• %1, %2, %3 (digit suffix)
A one-digit, two-digit, or three-digit suffix, if required, to distinguish Snapshot copies with otherwise matching names.

Name preview

Displays a sample Snapshot copy name that uses the default or custom naming format that you selected or specified.
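The attribute substitution described above can be illustrated with a small sketch. The formatter and all attribute values here are invented for illustration; the actual naming engine is internal to the DataFabric Manager server:

```python
def format_name(fmt, attrs):
    """Expand %-attributes (such as %T, %R, %L, %H, %N) in a naming format.

    Hypothetical sketch. Blank spaces in the custom label (%L) are
    converted to the letter "x", as described for the custom label above.
    """
    attrs = dict(attrs)
    if "L" in attrs:
        attrs["L"] = attrs["L"].replace(" ", "x")
    name = fmt
    for key, value in attrs.items():
        name = name.replace("%" + key, value)
    return name

# Invented example values for each attribute:
sample = format_name(
    "%T_%R_%L_%H_%N",
    {"T": "20110721-1200", "R": "Hourly", "L": "payroll data",
     "H": "storage1", "N": "vol1"},
)
# sample == "20110721-1200_Hourly_payrollxdata_storage1_vol1"
```

Note how the blank space in the invented custom label "payroll data" becomes the letter x in the resulting name.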

Secondary volume

Specifies the name format used for secondary volumes that are generated by protection jobs run on this dataset. The display of some of these options depends on your previous configuration choices:

Use global naming format

If you previously chose this format in the Global Naming Settings Secondary Volume area in the Setup Options dialog box, selecting this option applies the global naming format for secondary volumes to all secondary volumes that a protection job generates for this dataset.

Use custom format

Selecting this option enables you to specify a dataset-level naming format to apply to all secondary volumes that a protection job generates for this dataset. You can enter the following attributes (separated by the underscore character) in this field in any order:

• %L (custom label attribute)
The custom label, if any, that is specified for the secondary volume's containing dataset. If no custom label is specified, then the dataset name is included in the secondary volume name. It enables you to specify a custom alphanumeric character, . (period), _ (underscore), or - (hyphen) to include in the names of the related objects that are generated by protection jobs that are run on this dataset. If the naming format for a related object type includes the Custom label attribute, then the value that you specify is included in the related object names. If you do not specify a value, then the dataset name is used as the custom label. If you include a blank space in the custom label string, the blank space is converted to the letter x in any Snapshot copy, volume, or qtree object name that includes the custom label as part of its syntax.

• %S (primary storage system name)
The name of the primary storage system.

• %V (primary volume name)
The name of the primary volume.

• %C (type)
The connection type (backup or mirror).

• %1, %2, %3 (digit suffix)
A one-digit, two-digit, or three-digit suffix, if required, to distinguish secondary volumes with otherwise matching names.

Name preview

Displays a sample secondary volume name that uses the default or custom naming format that you selected or specified.
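Similarly, a secondary volume name built from a %L_%S_%V_%C format, with a digit suffix appended only when needed to distinguish otherwise matching names, might be sketched as follows. The helper and all names are hypothetical:

```python
def secondary_volume_name(label, primary_system, primary_volume,
                          conn_type, existing_names):
    """Sketch: compose label_system_volume_type (the %L_%S_%V_%C
    attributes) and add a numeric suffix on collision. Spaces in the
    custom label become the letter "x", as described above."""
    base = "_".join(
        [label.replace(" ", "x"), primary_system, primary_volume, conn_type]
    )
    if base not in existing_names:
        return base
    suffix = 1
    while f"{base}_{suffix}" in existing_names:
        suffix += 1
    return f"{base}_{suffix}"
```

For example, with invented names, a second backup volume for the same primary volume would receive a `_1` suffix to keep the names distinct.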

Secondary qtree

Specifies how secondary qtrees for this dataset that are generated by local policies or storage service protection jobs are named. Options and information fields include the following:

Use global naming format

Selecting this option applies the global naming format for secondary qtrees to all secondary qtrees that a protection job generates for this dataset.

Use custom format

Selecting this option enables you to specify a dataset-level naming format to apply to all secondary qtrees that a protection job generates for this dataset. You can enter the following attributes (separated by the underscore character) in this field in any order:

• %L (custom label attribute)
The custom label, if any, that is specified for the secondary volume's containing dataset. If no custom label is specified, then the dataset name is included in the secondary volume name. It enables you to specify a custom alphanumeric character, . (period), _ (underscore), or - (hyphen) to include in the names of the related objects that are generated by protection jobs that are run on this dataset. If the naming format for a related object type includes the Custom label attribute, then the value that you specify is included in the related object names. If you do not specify a value, then the dataset name is used as the custom label. If you include a blank space in the custom label string, the blank space is converted to the letter x in any Snapshot copy, volume, or qtree object name that includes the custom label as part of its syntax.

• %S (primary storage system name)
The name of the primary storage system.

• %V (primary volume name)
The name of the primary volume.

• %C (type)
The connection type (backup or mirror).

• %1, %2, %3 (digit suffix)
A one-digit, two-digit, or three-digit suffix, if required, to distinguish secondary volumes with otherwise matching names.

Datasets | 263

Page 264: Admin help netapp

Name preview

Displays a sample secondary qtree name that uses the default or custom naming format that you selected or specified.

Data area

The Data area of the Create Dataset dialog box or Edit Dataset dialog box provides tabs and options that enable you to add various kinds of virtual object types to the current dataset.

Data tab

The Data tab enables you to filter and select the virtual objects to include in the dataset.

Group Specifies the OnCommand console resource group from which you want to select virtual objects to include in the dataset.

Resource Type

Specifies the virtual object types that you want to include in the dataset.

• VMware associated object types include datacenter, datastores, and virtual machines (VM).

• Hyper-V associated object types include virtual machines (VM).

You cannot include both VMware object types and Hyper-V object types in one dataset.

Available Resources

Lists the virtual objects that you can select for inclusion in the dataset.

Only virtual objects in the selected resource group that match the selected resource type are displayed.

Any VMware datacenter objects that you include in a dataset cannot be empty. They must contain datastore or virtual machine objects for successful backup.

Selected Resources

Lists the virtual objects that you have selected for inclusion in the dataset.

Spanned Entities tab

The Spanned Entities tab appears only for datasets that contain VMware objects. If one of your selected virtual machine objects spans two or more datastores, then the Spanned Entities tab displays all the datastores that include that virtual machine and allows you to include or exclude any of those datastores from this dataset's local or remote protection jobs by selecting or deselecting their selection boxes.

Any listed datastore that is not deselected will have the dataset's local and remote protection jobs applied to it.


Storage Service area

The Storage Service area of the Create Dataset dialog box or Edit Dataset dialog box displays basic information about the storage service that you selected to execute remote protection jobs on the virtual object members of this dataset.

Storage service Selects the storage service that you want to assign to this dataset.

Selecting a storage service using this option displays the basic information for that storage service in the content area.

Storage services enabled for disaster recovery support are not displayed.

After a storage service is selected and assigned to a dataset, the assignment cannot be changed.

Name Displays the name of the selected storage service.

Description Displays a short description of the selected storage service.

Owner Displays the name of the owner of the selected storage service.

Contact Displays the contact e-mail of the person in charge of the selected storage service.

Backup Script Path

Specifies a path to an optional backup script (located on the DataFabric Manager server) that can specify additional operations to be executed in association with backups specified by the protection policy.

Topology Displays the name and the graphical topology of the protection policy that the selected storage service uses to execute remote protection of the selected dataset.

Dataset nodes table

In the Storage Service area, a table provides the node and job schedule details of the protection policy associated with the storage service that is assigned to the current dataset.

Secondary node name table column

If the current dataset's topology includes secondary storage, lists the following information related to secondary storage:

• Storage systems and resource pools that provision the secondary storage node of the current dataset.

• The schedule for remote backup and mirror protection jobs between primary and secondary storage nodes and retention times for the backed-up data. The listed jobs and retention times are those that are specified by the protection policy that is associated with the storage service that is assigned to the current dataset.


Tertiary node name table column

If the current dataset's topology includes tertiary storage, lists the following information related to tertiary storage:

• Storage systems and resource pools that provision the tertiary storage node of the current dataset.

• The schedule for remote backup and mirror protection jobs between secondary and tertiary storage nodes and retention times for the backed-up data. The listed jobs and retention times are those that are specified by the protection policy that is associated with the storage service that is assigned to the current dataset.

Local Policy area

The Local Policy area enables you to configure local protection for this virtual object dataset by selecting an existing local policy, or creating a new local policy and assigning it to the dataset.

Local Policy Either specifies the local policy to be assigned to execute local backup protection of the current dataset, or enables you to start creation of a new local policy to execute local backup protection of the current dataset.

• You can select an appropriate OnCommand console-supplied local policy.

• You can create a new local policy.

Policy Name Specifies the name of the local policy.

Description Displays an editable description of the selected local policy.

Add Adds a schedule to be applied to local backups of virtual objects on this dataset.

Delete Deletes an existing schedule.

Schedule list Displays details of the local backup schedules that are in effect for the displayed local policy. For each local backup schedule, the following details are displayed:

Schedule Type

The type of schedule (Hourly, Daily, Weekly, or Monthly) whose details you are viewing

Start Time The time of day that the local backups start for the associated schedule

End Time The time of day at which new local backups stop occurring for the associated schedule (applies to Hourly type schedules only)

Recurrence The frequency with which local backups occur for the associated schedule


Retention The period of time that the backup copies associated with this schedule remain on the storage systems before becoming subject to automatic purging

Backup Options

Additional options that you can enable for the selected local policy.

Create VMware Snapshot

Whether to create VMware quiesced snapshots before taking the storage system Snapshot copies during local backup operations (displayed for datasets of VMware objects).

Include independent disks

Whether to include independent VMDKs in local backups associated with this schedule for this dataset (displayed for datasets of VMware objects only).

Allow saved state backups

Whether to allow the backup of a dataset of virtual machines to proceed even if some of the virtual machines in that dataset are in saved state or shut down. Virtual machines that are in saved state or shut down receive a saved-state or offline backup. Performing a saved-state or offline backup can cause downtime (displayed for datasets of Hyper-V objects).

If this option is not selected, encountering a virtual machine that is in saved state or that is shut down causes the dataset backup to fail.

Start a remote backup after local backup

If remote backup of data to secondary storage is specified by a storage service assigned to this dataset, whether to start that remote backup immediately after the local backup is finished.

If this option is not selected, any remote backup to secondary storage starts according to the schedules configured for the protection policy specified for the storage service.


Issue a warning if there are no backups for:

Specifies a period of time after which the OnCommand console issues a warning event if no local backup has successfully finished.

Issue an error if there are no backups for:

Specifies a period of time after which the OnCommand console issues an error event if no local backup has successfully finished.

Backup Script Path

Specifies a path to an optional backup script (located on the system upon which the host service is installed) that can specify additional operations to be executed in association with local backups.

Dataset Dependencies

If clicked, displays the other datasets that use this local policy and would thus be affected by changes made to the settings in this content area.

This button appears only if the local policy is also assigned to other datasets.

Save Saves any settings or changes made to settings in this content area for a new or an existing local policy.

The OnCommand console enables this button if you make or modify any setting in this content area.

Cancel Cancels any changes to the dataset's local policy settings that have not yet been saved during the current session.


Backups

Understanding backups

Types of backups

You can perform scheduled or on-demand local backups, or remote backups of datasets. Depending on what you need, the different types of backups offer variation in how you protect your data.

Scheduled local backups

You can create scheduled local backups by adding or editing datasets and their application policies. The host service runs local backups, so even when the OnCommand console is down, your backups continue to run.

On-demand local backups

You can create on-demand local backups as you need them. On-demand backups apply to datasets. You can add specific virtual machines or datastores to existing or new datasets for backup. You can also select specific settings for on-demand backups that might differ from the local policy that might be attached to a dataset, including starting a remote backup after the local backup operation.

Remote backups

You can create remote backups by assigning a storage service to the selected dataset. The storage service you assign to the dataset determines when and how the remote backup occurs. To create a remote backup, you must first create or edit a local dataset backup, or perform an on-demand dataset backup. During dataset creation, you can add the storage service to the dataset you want to back up.

Backup version management

Through backup version management, you can more easily locate and track information, and you can optimize your system by keeping only current or necessary information.

When you transfer a local backup to a secondary node, you can track which virtual objects are contained and which of those objects are restorable. These can be different from the virtual objects contained in the primary backup.

You can use retention settings in a dataset protection policy, local policy, or local on-demand backup setting to retain backups for set periods of time. When backups reach their expiration, they are deleted from the storage system.

You can locate any backup, across datasets, using a part of its description. You can also locate backups that are no longer in a dataset or that have been renamed. You can locate renamed backups using the current or former name of a virtual object in the backup.

Mounted backups are not deleted.


Backup scripting information

You can specify a script that is invoked before and after the local backup. The script is invoked on the host service and the path is local to the host service.

If you want to configure a script to run before or after your backup operation, you must first run the script independently from the DOS command line to determine whether the script generates any pop-up messages. Pop-up messages prevent the backup operation from completing properly. Clear the condition that generates the pop-up message to ensure that the script runs automatically without requiring any user interaction.

Backup script conventions

If you use a PowerShell script, you should use the drive-letter convention. For other types of scripts, you can use either the drive-letter convention or the Universal Naming Convention.

Pre-backup script arguments

Note: If the pre-backup script fails, the backup might fail.

The following arguments apply to scripts that run before the backup occurs:

prebackup Indicates that the script is invoked prior to the backup.

resourceids Specifies the colon-separated list of resource IDs that are backed up.

datasetid Specifies the dataset identifier.

Post-backup script arguments

The following arguments apply to scripts that run after the backup occurs:

postbackup Indicates that the script is invoked after the backup.

resourceids Specifies the colon-separated list of resource IDs that are backed up.

datasetid Specifies the dataset identifier.

backupid Specifies the host service backup identifier.

snapshots Specifies the comma-separated list of Snapshot copies that constitute the backup. You should use one of the following formats:

• storage system:/vol/volx:snapshot

• storage system:/vol/volx/lun:snapshot

• storage system:/vol/volx/lun/qtree:snapshot
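A post-backup script typically needs to split each snapshot specification of the forms above into its storage system, path, and Snapshot copy name. The following sketch shows one way to do that in Python; the function name is hypothetical, and a real script must also handle however the arguments are delivered by your shell.

```python
# Hypothetical helper for a post-backup script: split one snapshot
# specification (see formats above) into its parts. Splitting on the
# first two colons works because the system name precedes the first
# colon and the path precedes the second.

def parse_snapshot_spec(spec):
    """Split 'system:/vol/volx[/lun[/qtree]]:snapshot' into its parts."""
    system, path, snapshot = spec.split(":", 2)
    return {"system": system, "path": path, "snapshot": snapshot}

parse_snapshot_spec("filer1:/vol/volx/lun/qtree:nightly.0")
# -> {'system': 'filer1', 'path': '/vol/volx/lun/qtree', 'snapshot': 'nightly.0'}
```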


Example

The following script (.bat file) is invoked after the backup:

echo "********************* > c:\post.txt
IF %1 == postbackup echo %2 >> C:\post.txt
IF %1 == postbackup echo %3 >> C:\post.txt
IF %1 == postbackup echo %4 >> C:\post.txt
IF %1 == postbackup echo %5 >> C:\post.txt
IF %1 == postbackup echo %6 >> C:\post.txt
IF %1 == postbackup echo %7 >> C:\post.txt
IF %1 == postbackup echo %8 >> C:\post.txt
IF %1 == postbackup echo %9 >> C:\post.txt

The following output displays:

datasetid:17798
resourceids:netfs://172.17.167.151//vol/vol_nfs_rlabf1_new/
backupid:2ad744f8-97c7-48f6-8e1b-65aec531bb5d
snapshots:172.17.167.151:/vol/vol_nfs_rlabf1_new:2011-06-10_1605-0700_weekly_datastore_1_rlabf1_vol_nfs_rlabf1_new_novmsnap
172.17.164.186:/vol/vol_nfs_86:2011-06-10_1605-0700_weekly_datastore_1_sdwatf3_vol_nfs_86_novmsnap
ECHO is on.
ECHO is on.
ECHO is on.

Retention of job progress information

When you install or upgrade the OnCommand Core Package, the DataFabric Manager server retains the job progress information for your backup jobs. The default retention period is 90 days. If you need to retain this information longer than 90 days, for example because of legal or corporate requirements, you can specify a longer period of time to retain this information using the purgeJobsOlderThan option with the dfbm option set command.
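For example, to extend the retention period to 180 days, you could run a command like the following on the DataFabric Manager server. The option name comes from the text above; the exact value syntax is an assumption and may vary by release.

```shell
# Assumed syntax: raise the job-history purge threshold from the
# default 90 days to 180 days.
dfbm option set purgeJobsOlderThan=180
```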

Guidelines for mounting or unmounting backups in a VMware environment

After you create a full or partial backup of your virtual machine or datastore, you can mount the backup onto an ESX server to verify what the virtual machine or datastore contains. After you verify the newly created backup to compare its content to the original and to look for mismatches, you can unmount the mounted backup.

When you mount or unmount backups, follow these guidelines:

• You cannot mount or unmount Hyper-V backups by clicking a button, as you can with VMware backups. You can mount Hyper-V backups by using the Snapshot copy, virtual hard disk (VHD), LUN, and storage system information available in the OnCommand console GUI.

• You cannot mount a backup on the same or a different ESX server if that backup is already mounted. You must unmount this backup from the first ESX server prior to mounting a backup to a different ESX server.

• You can mount a local backup and a remote backup on any ESX host that is managed by the same host service that was used when the backup was created.

• If you include the same datastore in multiple backups and those backups are mounted, that datastore is mounted multiple times. These datastores can be differentiated because the name includes a mounted timestamp and contains the dataset name.

• Backup and restore of mounted objects is not supported.

• If there is some data written in the mounted datastore, that data is lost when you unmount the backup.

• If a backup is mounted, you cannot delete it, even if it has expired, until you unmount the backup.

• While mounting a remote mirror backup, if the corresponding primary mirror backup has already been deleted, the mount request fails with a backup not found error.

• After you mount a backup, the time it takes to copy data from the datastore depends on your network bandwidth and whether this datastore is on a secondary storage system.

How the Hyper-V plug-in uses VSS

The Hyper-V plug-in provides integration with the Microsoft Hyper-V Volume Shadow Copy Service (VSS) writer to quiesce a virtual machine before making an application-consistent Snapshot copy of the virtual machine. The Hyper-V plug-in is a VSS requestor and coordinates the backup operation to create a consistent Snapshot copy, using the VSS Hardware Provider for Data ONTAP.

The Hyper-V plug-in enables you to make application-consistent backups of a virtual machine if you have Microsoft Exchange, Microsoft SQL, or any other VSS-aware application running on virtual hard disks (VHDs) in the virtual machine. The Hyper-V plug-in coordinates with the application writers inside the virtual machine to ensure that application data is consistent when the backup occurs.

You can also restore a virtual machine from an application-consistent backup. The applications that exist in the virtual machine restore to the same state as at the time of the backup. The Hyper-V plug-in restores the virtual machine to its original location.

Overview of VSS

VSS (Volume Shadow Copy Service) is a feature of Microsoft Windows Server that coordinates among data servers, backup applications, and storage management software to support the creation and management of consistent backups. These backups are called shadow copies, or Snapshot copies.

VSS coordinates Snapshot copy-based backup and restore and includes these additional components:

• VSS requestor
The VSS requestor is a backup application, such as the Hyper-V plug-in or NTBackup. It initiates VSS backup and restore operations. The requestor also specifies Snapshot copy attributes for backups it initiates.

• VSS writer
The VSS writer owns and manages the data to be captured in the Snapshot copy. The Hyper-V plug-in is an example of a VSS writer.

• VSS provider
The VSS provider is responsible for the creation and management of the Snapshot copy. A provider can be either a hardware provider or a software provider:

• A hardware provider integrates storage array-specific Snapshot copy and cloning functionality into the VSS framework. The Data ONTAP VSS Hardware Provider integrates the SnapDrive service and storage systems running Data ONTAP into the VSS framework.

Note: The Data ONTAP VSS Hardware Provider is installed automatically as part of the SnapDrive software installation.

• A software provider implements Snapshot copy or cloning functionality in software that is running on the Windows system.

Note: To ensure that the Data ONTAP VSS Hardware Provider works properly, do not use the VSS software provider on Data ONTAP LUNs. If you use the VSS software provider to create Snapshot copies on a Data ONTAP LUN, you will be unable to delete that LUN using the VSS hardware provider.

Viewing installed VSS providers

To view the VSS providers installed on your host, complete these steps.

Steps

1. Select Start > Run and enter the following command to open a Windows command prompt:

cmd

2. At the prompt, enter the following command:

vssadmin list providers

The output should be similar to the following:

Provider name: ‘Data ONTAP VSS Hardware Provider’
Provider type: Hardware
Provider Id: {ddd3d232-a96f-4ac5-8f7b-250fd91fd102}
Version: 6.4.0.xxxx


Verifying that the VSS Hardware Provider was used successfully

To verify that the Data ONTAP VSS Hardware Provider was used successfully after a Snapshot copy was taken, complete this step.

Step

1. Navigate to System Tools > Event Viewer > Application in MMC and look for an event with the following values.

Source     Event ID    Description

Navsspr    4089        The VSS provider has successfully completed CommitSnapshots for SnapshotSetId id in n milliseconds.

Note: VSS requires that the provider initiate a Snapshot copy within 10 seconds. If this time limit is exceeded, the Data ONTAP VSS Hardware Provider logs Event ID 4364. This limit could be exceeded due to a transient problem. If this event is logged for a failed backup, retry the backup.

How the Hyper-V plug-in handles saved-state backups

Although the default behavior of the Hyper-V plug-in is to cause backups containing virtual machines that are in the saved state or shut down to fail, you can perform a saved-state backup by moving the virtual machines to a dataset that has a policy that allows saved-state backups.

You can also create or edit your dataset policy to allow a saved-state virtual machine backup. If you choose this option, the Hyper-V plug-in does not cause the backup to fail when the Hyper-V VSS writer backs up the virtual machine using the saved state or performs an offline backup of the virtual machine. However, performing a saved-state or offline backup can cause downtime.

For more information about online and offline virtual machine backups, see the Hyper-V Planning for Backup information in the Microsoft TechNet library.

Related information

Hyper-V Planning for Backup - http://technet.microsoft.com/en-us/library/

Overlapping policies and Hyper-V hosts

Because a Hyper-V parent host does not allow simultaneous or overlapping local backups on multiple virtual machines that are associated with it, each associated dataset of Hyper-V objects that you want to provide with local protection requires a separate local policy with a schedule that does not overlap the schedule of any other local policy in effect.

Co-existence of SnapManager for Hyper-V with the Hyper-V plug-in

To protect your Hyper-V virtual objects, you can install both SnapManager for Hyper-V and the Hyper-V plug-in on the same Hyper-V parent hosts. Some actions, like using both SnapManager for Hyper-V and the Hyper-V plug-in to back up the same virtual machines, are unsupported.

When the OnCommand console and the Hyper-V plug-in are installed on hosts that also use SnapManager for Hyper-V, the version of SnapDrive for Windows that SnapManager for Hyper-V uses is upgraded as well.

If you uninstall the OnCommand console using the host installer, so that you can use SnapManager for Hyper-V as a stand-alone interface, it also uninstalls SnapDrive for Windows. After uninstalling the OnCommand console, you must reinstall the appropriate version of SnapDrive for Windows. The following actions are unsupported when using both SnapManager for Hyper-V and the Hyper-V plug-in:

Backing up the same virtual machines

After you create a dataset with protection policies using the OnCommand console, you must delete the corresponding protection policy in SnapManager for Hyper-V. If you use the same protection schedule for the same group of Hyper-V virtual machines, and attempt to back them up using both SnapManager for Hyper-V and the OnCommand console, the backups fail.

Automatically deleting backups

SnapManager for Hyper-V does not automatically delete backups after you remove the protection policy. When you no longer need the backup, you must manually delete it in SnapManager for Hyper-V.

Reinstalling SnapManager for Hyper-V

After you have transitioned all of your SnapManager for Hyper-V dataset information to the OnCommand console, and uninstalled SnapManager for Hyper-V, you should not reinstall SnapManager for Hyper-V.

How to manually transition SnapManager for Hyper-V dataset information

You can use the stand-alone SnapManager for Hyper-V software to view your dataset protection schedules so that you can use them as a reference when adding and editing Hyper-V datasets in the OnCommand console.

You can still use the dataset and backup policies that you created using SnapManager for Hyper-V to protect your data until you have fully transitioned to the OnCommand console. As you create new datasets and protection schedules in the OnCommand console, you can begin deleting your protection policies in SnapManager for Hyper-V.

After you create a dataset and protection schedule in the OnCommand console, you should not use SnapManager for Hyper-V to back up the same virtual machines. If you use overlapping schedules for the same virtual machines, the backups might fail, because only one VSS backup can run at a time on a host. SnapManager for Hyper-V does not automatically delete old backups after you disable the protection policy, so you must manually delete them when they are no longer necessary.


You can also use SnapManager for Hyper-V backups to restore virtual machines until the retention policies have timed out. You should not restore the same virtual machines in SnapManager for Hyper-V and the OnCommand console at the same time. If you do so, the restore operation might fail.

After you transition all of your datasets and protection schedules to the OnCommand console, you can safely uninstall SnapManager for Hyper-V.

Managing backups

Performing an on-demand backup of virtual objects

You can protect your virtual machines or datastores by adding them to an existing or new dataset and performing an on-demand backup.

Before you begin

• You must have reviewed the Guidelines for performing an on-demand backup on page 277.

• You must have reviewed the Requirements and restrictions when performing an on-demand backup on page 279.

• You must have added the virtual objects to an existing dataset or have created a dataset and added the virtual objects that you want to back up.

• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

• You must have the following information available:

  • Dataset name
  • Retention duration
  • Backup settings
  • Backup script location
  • Backup description

About this task

If you perform a backup of a dataset containing Hyper-V virtual machines and you are currently restoring those virtual machines, the backup might fail.

Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, choose the virtual machine or datastore that you want to back up.


If you want to back up...    Then...

A virtual machine or datastore that does not belong to an existing dataset    You must first add it to an existing dataset.

A virtual machine or datastore, but no datasets currently exist    You must first create a dataset and then add to it the required virtual machines or datastores.

3. Click Backup and select the Back Up Now option.

4. In the Back Up Now dialog box, select the dataset that you want to back up.

If the virtual machine or datastore belongs to multiple datasets, you must select one dataset to back up.

5. Specify the local protection settings, backup script path, and backup description for the on-demand backup.

If you have already established local policies for the dataset, that information automatically appears for the local protection settings for the on-demand backup. If you change the local protection settings, the new settings override only the existing application policies for this on-demand backup.

6. If you want a remote backup to begin after the local backup has finished, select the Start remote backup after local backup check box.

7. Click Back Up Now.

After you finish

You can monitor the status of your backup from the Jobs tab.

Related references

Jobs tab on page 46

Administrator roles and capabilities on page 506

Guidelines for performing an on-demand backup

Before performing an on-demand backup of a dataset, you must decide how you want to assign resources and assign protection settings.

General properties information

When performing an on-demand backup, you need to provide information about what objects you want to back up, to assign protection and retention settings, and to specify script information that runs before or after the backup operation.

Dataset name

You must select the dataset that you want to back up.


Local protection settings

You can define the retention duration and the backup settings for your on-demand backup, as needed.

Retention You can choose to keep a backup until you manually delete it, or you can assign a retention duration. By specifying a length of time to keep the on-demand local backup, you can override the retention duration in the local policy you assigned to the dataset for this backup. The retention duration of a local backup maps to a retention type for the remote backup.

A combination of both the remote backup retention type and storage service is used to determine the remote backup retention duration.

For example, if you specify a local backup retention duration of two days, the retention type of the remote backup is Daily. The dataset storage service then verifies how long daily remote backups are kept and applies this to the backup. This is the retention duration of the remote backup.

The following table lists the local backup retention durations and the equivalent remote backup retention type:

Local retention duration                    Remote retention type

Less than 24 hours                          Hourly

1 day up to, but not including, 7 days      Daily

1 week up to, but not including, 31 days    Weekly

More than 31 days                           Monthly
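The mapping above can be sketched as a simple function. This is illustrative only: the console applies the logic internally, hours are used here as the unit, and treating exactly 31 days as Weekly is an assumption, since the table does not specify that boundary.

```python
# Illustrative sketch of the local-to-remote retention mapping.
# Durations are in hours; exactly 31 days -> Weekly is an assumption.

def remote_retention_type(local_retention_hours):
    if local_retention_hours < 24:
        return "Hourly"       # less than 24 hours
    if local_retention_hours < 7 * 24:
        return "Daily"        # 1 day up to, but not including, 7 days
    if local_retention_hours <= 31 * 24:
        return "Weekly"       # 1 week up to 31 days
    return "Monthly"          # more than 31 days

remote_retention_type(2 * 24)
# -> "Daily", matching the two-day example above
```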

Backup settings

You can choose your on-demand backup settings based on the type of virtual objects you want to back up.

Allow saved state backup (Hyper-V only)

You can choose to skip the backup if it causes one or more of the virtual machines to go offline. If you do not choose this option, and your Hyper-V virtual machines are offline, backup operations fail.

Create VMware snapshot (VMware only)

You can choose to create a VMware formatted snapshot in addition to the storage system Snapshot copies created during local backup operations.


Include independent disks (VMware only)

You can include independent VMDKs that belong to VMware virtual machines in the current dataset but reside on datastores that are not part of the current dataset.

Backup script path

You can specify a script that is invoked before and after the local backup. The script is invoked on the host service and the path is local to the host service. If you use a PowerShell script, you should use the drive-letter convention. For other types of scripts, you can use either the drive-letter convention or the Universal Naming Convention.

Backup description

You can provide a description for the on-demand backup so you can easily find it when you need it.

Clustered virtual machine considerations (Hyper-V only)

Dataset backups of clustered virtual machines take longer to complete when the virtual machines run on different nodes of the cluster. When virtual machines run on different nodes, separate backup operations are required for each node in the cluster. If all virtual machines run on the same node, only one backup operation is required, resulting in a faster backup.

Requirements and restrictions when performing an on-demand backup

You must be aware of the requirements and restrictions when performing an on-demand backup. Some requirements and restrictions apply to all types of objects and some are specific to Hyper-V or VMware virtual objects.

Requirements Virtual machines or datastores must first belong to a dataset before backing up. You can add virtual objects to an existing dataset or create a new dataset and add virtual objects to it.

Hyper-V specific requirements

Each virtual machine contained in the dataset that you want to back up must contain at least 300 MB of free disk space. Each Windows volume in the virtual machine (guest OS) must have at least 300 MB of free disk space. This includes the Windows volumes corresponding to VHDs, iSCSI LUNs, and pass-through disks attached to the virtual machine.

Hyper-V virtual machine configuration files, snapshot copy files, and VHDs must reside on Data ONTAP LUNs; otherwise, backup operations fail.

VMware specific requirements

Backup operations of datasets containing empty VMware datacenters or datastores will fail. All datacenters must contain datastores or virtual machines to successfully perform a backup.


Virtual disks must be contained within folders in the datastore. If virtual disks exist outside of folders on the datastore, and that data is backed up, restoring the backup could fail.

NFS backups might take more time than VMFS backups. This is because it takes more time for VMware to commit snapshots in an NFS environment.

Hyper-V specific restrictions

Partial backups are not supported. If the Hyper-V VSS writer fails to back up one of the virtual machines in the backup and the failure occurs at the Hyper-V parent host, the backup fails for all of the virtual machines in the backup.

Performing an on-demand backup of a dataset

You can perform a local on-demand dataset backup to protect your virtual objects.

Before you begin

• You must have reviewed the Guidelines for performing an on-demand backup on page 277.

• You must have reviewed the Requirements and restrictions when performing an on-demand backup on page 279.

• You must have added the virtual objects to an existing dataset or have created a dataset and added the virtual objects that you want to back up.

• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

• You must have the following information available:

  • Dataset name
  • Retention duration
  • Backup settings
  • Backup script location
  • Backup description

About this task

If you perform a backup of a dataset containing Hyper-V virtual machines and you are currently restoring those virtual machines, the backup might fail.

Steps

1. Click the View menu, then click the Datasets option.

2. In the Datasets tab, choose the dataset that you want to back up.

3. Click Back Up Now.

4. In the Back Up Now dialog box, specify the local protection settings, backup script path, and backup description for the on-demand backup.

280 | OnCommand Console Help


If you have already established local policies for the dataset, that information automatically appears in the local protection settings for the on-demand backup. If you change the local protection settings, the new settings override any existing application policies for the dataset.

5. If you want a remote backup to begin after the local backup has finished, select the Start remote backup after local backup box.

6. Click Back Up Now.

After you finish

You can monitor the status of your backup from the Jobs tab.

Related references

Jobs tab on page 46

Administrator roles and capabilities on page 506

Guidelines for performing an on-demand backup

Before performing an on-demand backup of a dataset, you must decide how you want to assign resources and assign protection settings.

General properties information

When performing an on-demand backup, you need to provide information about what objects you want to back up, to assign protection and retention settings, and to specify script information that runs before or after the backup operation.

Dataset name

You must select the dataset that you want to back up.

Local protection settings

You can define the retention duration and the backup settings for your on-demand backup, as needed.

Retention
You can choose to keep a backup until you manually delete it, or you can assign a retention duration. By specifying a length of time to keep the on-demand local backup, you can override the retention duration in the local policy you assigned to the dataset for this backup. The retention duration of a local backup defaults to a retention type for the remote backup.

A combination of both the remote backup retention type and storage service is used to determine the remote backup retention duration.

For example, if you specify a local backup retention duration of two days, the retention type of the remote backup is Daily. The dataset storage service then verifies how long daily remote backups are kept

Backups | 281


and applies this to the backup. This is the retention duration of the remote backup.

The following table lists the local backup retention durations and the equivalent remote backup retention type:

Local retention duration                     Remote retention type
Less than 24 hours                           Hourly
1 day up to, but not including, 7 days       Daily
1 week up to, but not including, 31 days     Weekly
More than 31 days                            Monthly
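As an illustration only (the function and its hour-based input are hypothetical, not part of the product), the mapping above can be sketched as a small function:

```python
def remote_retention_type(local_retention_hours):
    """Map a local backup retention duration (in hours) to the
    equivalent remote backup retention type from the table above."""
    if local_retention_hours < 24:          # less than 24 hours
        return "Hourly"
    if local_retention_hours < 7 * 24:      # 1 day up to, but not including, 7 days
        return "Daily"
    if local_retention_hours < 31 * 24:     # 1 week up to, but not including, 31 days
        return "Weekly"
    return "Monthly"                        # more than 31 days
```

For example, the two-day local retention duration from the text above maps to the Daily remote retention type. (The table leaves exactly 31 days unspecified; this sketch treats it as Monthly.)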

Backup settings

You can choose your on-demand backup settings based on the type of virtual objects you want to back up.

Allow saved state backup (Hyper-V only)

You can choose to skip the backup if it causes one or more of the virtual machines to go offline. If you do not choose this option, and your Hyper-V virtual machines are offline, backup operations fail.

Create VMware snapshot (VMware only)

You can choose to create a VMware-formatted snapshot in addition to the storage system Snapshot copies created during local backup operations.

Include independent disks (VMware only)

You can include independent disks: VMDKs that belong to VMware virtual machines in the current dataset but reside on datastores that are not part of the current dataset.

Backup script path

You can specify a script that is invoked before and after the local backup. The script is invoked on the host service and the path is local to the host service. If you use a PowerShell script, you should use the drive letter convention. For other types of scripts, you can use either the drive letter convention or the Universal Naming Convention.

Backup description

You can provide a description for the on-demand backup so you can easily find it when you need it.

282 | OnCommand Console Help


Clustered virtual machine considerations (Hyper-V only)

Dataset backups of clustered virtual machines take longer to complete when the virtual machines run on different nodes of the cluster. When virtual machines run on different nodes, separate backup operations are required for each node in the cluster. If all virtual machines run on the same node, only one backup operation is required, resulting in a faster backup.
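A minimal sketch of the rule above (all names are hypothetical, not product code): one backup operation is needed per distinct cluster node that owns a virtual machine in the dataset, so co-locating the virtual machines minimizes the number of operations.

```python
def backup_operations_needed(vm_to_node):
    """Return the number of separate backup operations required for a
    dataset of clustered Hyper-V VMs: one per distinct owning node."""
    return len(set(vm_to_node.values()))

# Four VMs spread across two nodes need two backup operations;
# the same four VMs on a single node need only one.
spread = {"vm1": "nodeA", "vm2": "nodeB", "vm3": "nodeA", "vm4": "nodeB"}
colocated = {"vm1": "nodeA", "vm2": "nodeA", "vm3": "nodeA", "vm4": "nodeA"}
```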

Requirements and restrictions when performing an on-demand backup

You must be aware of the requirements and restrictions when performing an on-demand backup. Some requirements and restrictions apply to all types of objects and some are specific to Hyper-V or VMware virtual objects.

Requirements
Virtual machines or datastores must first belong to a dataset before backing up. You can add virtual objects to an existing dataset or create a new dataset and add virtual objects to it.

Hyper-V specific requirements

Each virtual machine contained in the dataset that you want to back up must contain at least 300 MB of free disk space. Each Windows volume in the virtual machine (guest OS) must have at least 300 MB of free disk space. This includes the Windows volumes corresponding to VHDs, iSCSI LUNs, and pass-through disks attached to the virtual machine.

Hyper-V virtual machine configuration files, Snapshot copy files, and VHDs must reside on Data ONTAP LUNs; otherwise, backup operations fail.

VMware specific requirements

Backup operations of datasets containing empty VMware datacenters or datastores will fail. All datacenters must contain datastores or virtual machines to successfully perform a backup.

Virtual disks must be contained within folders in the datastore. If virtual disks exist outside of folders on the datastore, and that data is backed up, restoring the backup could fail.

NFS backups might take more time than VMFS backups. This is because it takes more time for VMware to commit snapshots in an NFS environment.

Hyper-V specific restrictions

Partial backups are not supported. If the Hyper-V VSS writer fails to back up one of the virtual machines in the backup and the failure occurs at the Hyper-V parent host, the backup fails for all of the virtual machines in the backup.

On-demand backups using the command-line interface
If you use the graphical user interface, the OnCommand console performs all backups at the dataset level. However, you can also use the DataFabric Manager server command-line interface to choose a subset of a dataset and perform an on-demand backup.

A subset of a dataset includes one or more virtual machines or datastores.

Backups | 283


You can also use Windows PowerShell cmdlets to perform on-demand backups. For more information on using Windows PowerShell, see the OnCommand Windows PowerShell Cmdlets Guide.

For more information on using the CLI, see the OnCommand Operations Manager Administration Guide.

Related information

Operations Manager Administration Guide - http://support.netapp.com
NetApp® OnCommand™ Windows™ PowerShell Cmdlets Guide for Use with Core Package 5.0 and Host Package 1.0 - http://support.netapp.com

Mounting or unmounting backups in a VMware environment

Mounting backups in a VMware environment from the Backups tab

You can mount existing backups onto an ESX server for backup verification prior to completing a restore operation or to restore a virtual machine to an alternate location. All the datastores and the virtual machines within the backup are mounted to the ESX server that you specify. Both the Mount and Unmount buttons are disabled for Hyper-V backups.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

In the OnCommand console Backups tab, deleting a mirror source backup copy prevents you from mounting its partner mirror destination backup copy. For a Mirror-generated destination backup copy to be mountable, its associated mirror source backup copy must still exist on the source node.

Steps

1. Click the View menu, then click the Backups option.

2. In the Backups tab, select an unmounted backup that you want to mount.

3. Click Mount.

You cannot mount Hyper-V backups using this button.

4. In the Mount Backup dialog box, select from the drop-down list the name of the ESX server to which you want to mount the backup.

You can mount only one backup at a time, and you cannot mount a backup that is already mounted.

5. Click Mount.

284 | OnCommand Console Help


A dialog box appears with a link to the mount job; when you click the link, the Jobs tab appears.

After you finish

You can monitor the status of your mount and unmount jobs in the Jobs tab.

Related references

Administrator roles and capabilities on page 506

Guidelines for mounting or unmounting backups in a VMware environment on page 271

Unmounting backups in a VMware environment from the Backups tab

After you are done using a mounted backup for verification or to restore a virtual machine to an alternate location, you can unmount the mounted backup from the ESX server that it was mounted to. When you unmount a backup, all the datastores in that backup are unmounted and can no longer be seen from the ESX server that you specify. Both the Mount and Unmount buttons are disabled for Hyper-V backups.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

If there are virtual objects in use from the previously mounted datastores of a backup, the unmount operation fails. You must manually clean up the backup prior to mounting the backup again because its state reverts to not mounted.

If all the datastores of the backup are in use, the unmount operation fails but this backup's state changes to mounted. You can unmount the backup after determining the datastores are not in use.

Steps

1. Click the View menu, then click the Backups option.

2. In the Backups tab, select a mounted backup to unmount.

3. Click Unmount.

4. At the confirmation prompt, click Yes.

A dialog box opens with a link to the unmount job; when you click the link, the Jobs tab appears.

Backups | 285


After you finish

If the ESX server becomes inactive or restarts during an unmount operation, the job is terminated, the mount state remains mounted, and the backup stays mounted on the ESX server.

You can monitor the status of your mount and unmount jobs in the Jobs tab.

Related references

Administrator roles and capabilities on page 506

Guidelines for mounting or unmounting backups in a VMware environment on page 271

Mounting backups in a VMware environment from the Server tab

You can mount existing backups onto an ESX server for backup verification prior to completing a restore operation or to restore a virtual machine to an alternate location. All the datastores and the virtual machines within the backup are mounted to the ESX server that you specify. Both the Mount and Unmount buttons are disabled for Hyper-V backups.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

In the OnCommand console Backups tab, deleting a mirror source backup copy prevents you from mounting its partner mirror destination backup copy. For a Mirror-generated destination backup copy to be mountable, its associated mirror source backup copy must still exist on the source node.

Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, click the VMware option, then click VMware VMs or Datastores.

3. Select a virtual machine or datastore and click Mount.

You cannot mount Hyper-V backups using this button.

4. In the Mount Backup dialog box, select an unmounted backup that you want to mount.

You can mount only one backup at a time, and you cannot mount a backup that is already mounted.

5. Select from the drop-down list the name of the ESX server to which you want to mount the backup.

6. Click Mount.

A dialog box appears with a link to the mount job; when you click the link, the Jobs tab appears.

286 | OnCommand Console Help


After you finish

You can monitor the status of your mount and unmount jobs in the Jobs tab.

Related references

Administrator roles and capabilities on page 506

Guidelines for mounting or unmounting backups in a VMware environment on page 271

Unmounting backups in a VMware environment from the Server tab

After you are done using a mounted backup for verification or to restore a virtual machine to an alternate location, you can unmount the mounted backup from the ESX server that it was mounted to. When you unmount a backup, all the datastores in that backup are unmounted and can no longer be seen from the ESX server that you specify. Both the Mount and Unmount buttons are disabled for Hyper-V backups.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

If there are virtual objects in use from the previously mounted datastores of a backup, the unmount operation fails. You must manually clean up the backup prior to mounting the backup again because its state reverts to not mounted.

If all the datastores of the backup are in use, the unmount operation fails but this backup's state changes to mounted. You can unmount the backup after determining the datastores are not in use.

Steps

1. Click the View menu, then click the Server option.

2. In the Server tab, click the VMware option, then click VMware VMs or Datastores.

3. Select a virtual machine or datastore and click Unmount.

4. In the Unmount Backup dialog box, select a mounted backup to unmount.

5. Click Unmount.

6. At the confirmation prompt, click Yes.

A dialog box opens with a link to the unmount job; when you click the link, the Jobs tab appears.

Backups | 287


After you finish

If the ESX server becomes inactive or reboots during an unmount operation, the job is terminated, the mount state remains mounted, and the backup stays mounted on the ESX server.

You can monitor the status of your mount and unmount jobs in the Jobs tab.

Related references

Administrator roles and capabilities on page 506

Guidelines for mounting or unmounting backups in a VMware environment on page 271

Manually mounting or unmounting backups in a Hyper-V environment using SnapDrive for Windows

You can use SnapDrive for Windows to assist you in manually mounting or unmounting a Hyper-V Snapshot copy so that you can verify the contents of the backup prior to restoring it.

You must have SnapDrive for Windows installed on your Hyper-V parent host to use the manual mount process.

For more information on connecting and disconnecting LUNs, see the SnapDrive for Windows documentation.

Locating specific backups

You can locate a specific backup copy by searching for different criteria such as names and descriptions. After you locate a backup, you can then restore it, view its status, or delete it.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

You can locate a specific backup copy by searching one of the following criteria:

• Whole or partial backup description
• Partial current name of a primary object in the backup
• Partial name of a virtual object when the backup was taken
• UUID of a virtual object within the backup

Steps

1. Click the View menu, then click the Backups option.

2. From the Backups tab, locate the Search field, and type in all or part of the backup description or version.

288 | OnCommand Console Help


You can locate multiple backup versions by inserting a comma between search terms, and you can clear the search field to view all backups.

3. Click Find.

Related references

Administrator roles and capabilities on page 506

How to find Snapshot copies within a backup

You can locate a specific Snapshot copy within a backup so that you can use its name to connect to a LUN within the copy using SnapDrive for Windows.

After you have located a specific backup, you can view the virtual machines that belong to it. If you select a virtual machine, you can view the list of associated Snapshot copies and VHDs. By selecting a Snapshot copy and VHD, you can use the following information to connect the Snapshot copy to a LUN using SnapDrive for Windows:

• Snapshot copy name
• Primary storage system name
• Volume name
• Mount point
• VHD LUN path

Two Snapshot copy names are displayed for a Hyper-V backup. You must choose the Snapshot copy with the suffix _backup to mount the backup. This ensures that you select the copy containing application-consistent data.
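A small illustrative sketch of the selection rule above (the function and sample names are hypothetical, not product code): given the Snapshot copy names that a Hyper-V backup displays, pick the one carrying the _backup suffix.

```python
def pick_application_consistent(snapshot_names):
    """Return the Snapshot copy name with the _backup suffix, which
    holds the application-consistent data for a Hyper-V backup."""
    for name in snapshot_names:
        if name.endswith("_backup"):
            return name
    raise ValueError("no Snapshot copy with the _backup suffix found")
```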

Connecting to a LUN in a Snapshot copy

You can connect to a LUN in a Snapshot copy using either a FlexClone volume or a read/write connection to a LUN in a Snapshot copy, depending on what version of Data ONTAP you have installed on your storage system.

Before you begin

You must have the FlexClone license enabled to connect to a LUN that resides on a volume with a SnapMirror or SnapVault destination.

Steps

1. Under SnapDrive in the left MMC pane, expand the instance of SnapDrive you want to manage, then expand Disks and select the disk you want to manage.

2. Expand the LUN whose Snapshot copy you want to connect, then click Snapshot Copies to display the list of Snapshot copies. Select the Snapshot copy you want to connect.

Backups | 289


Note: If you cannot see the Snapshot copy list, make sure that cifs.show_snapshot is set to on and vol options nosnapdir is set to off on your storage system.

3. From the menu choices at the top of MMC, navigate to Action > Connect Disk to launch the Connect Disk wizard.

4. In the Connect Disk Wizard, click Next.

5. In the Provide a Storage System Name, LUN Path and Name panel, the information for the LUN and Snapshot copy you selected is automatically filled in. Click Next.

6. In the Select a LUN Type panel, Dedicated is automatically selected because a Snapshot copy can be connected only as a dedicated LUN. Click Next.

7. In the Select LUN Properties panel, either select a drive letter from the list of available drive letters or type a volume mount point for the LUN you are connecting, then click Next.

When you create a volume mount point, type the drive path that the mounted drive will use: for example, G:\mount_drive1\.

8. In the Select Initiators panel, select the FC or iSCSI initiator for the LUN you are connecting and click Next.

9. In the Select Initiator Group management panel, specify whether you will use automatic or manual igroup management.

If you specify... Then...

Automatic igroup management

Click Next.

SnapDrive uses existing igroups, one igroup per initiator, or, when necessary, creates new igroups for the initiators you specified in the Select Initiators panel.

Manual igroup management

Click Next, and then perform the following actions:

a. In the Select Initiator Groups panel, select from the list the igroups to which you want the new LUN to belong.

Note: A LUN can be mapped to an initiator only once.

OR
Click Manage Igroups and, for each new igroup you want to create, type a name in the Igroup Name text box, select initiators from the initiator list, click Create, and then click Finish to return to the Select Initiator Groups panel.

b. Click Next.

10. In the Completing the Connect Disk Wizard panel, perform the following actions.

a. Verify all the settings

b. If you need to change any settings, click Back to go back to the previous Wizard panels.

c. Click Finish.

290 | OnCommand Console Help


Result

The newly connected LUN appears under Disks in the left MMC pane.

Viewing the contents of a LUN

You can view the contents of a LUN, including VHDs and other files. Viewing the contents of the LUN enables you to confirm that you have the correct data before performing a restore operation on the whole Hyper-V backup.

Before you begin

You must have installed Windows 2008 R2 and the Windows Disk Management Snap-In.

You must be authorized to perform all the steps of this task; your RBAC administrator can confirmyour authorization in advance.

Steps

1. To view the contents of a specific VHD, use Windows Explorer to locate the VHD in the LUN you mounted using SnapDrive for Windows.

2. Using the Windows Disk Management Snap-In, right-click the VHD and select Attach VHD.

3. Specify the VHD location and click OK.

The VHD is mounted on the Hyper-V parent host.

4. Verify the contents of the VHD.

5. Using the Windows Disk Management Snap-In, right-click the VHD and select Detach VHD.

After you finish

You can now disconnect the LUN using SnapDrive for Windows to unmount the disk mounted from the Snapshot copy.

Disconnecting a LUN

You can use the SnapDrive for Windows MMC snap-in to disconnect a dedicated or shared LUN, or a LUN in a Snapshot copy or in a FlexClone volume.

Before you begin

• Make sure that neither Windows Explorer nor any other Windows application is using or displaying any file on the LUN you intend to disconnect. If any files on the LUN are in use, you will not be able to disconnect the LUN except by forcing the disconnect.

• If you are disconnecting a disk that contains volume mount points, change, move, or delete the volume mount points on the disk first before disconnecting the disk containing the mount points; otherwise, you will not be able to disconnect the root disk. For example, disconnect G:\mount_disk1\, then disconnect G:\.

Backups | 291


• Before you decide to force a disconnect of a SnapDrive LUN, be aware of the following consequences:

  • Any cached data intended for the LUN at the time of forced disconnection is not committed to disk.
  • Any mount points associated with the LUN are also removed.
  • A pop-up message announcing that the disk has undergone "surprise removal" appears in the console session.
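The ordering rule above (disconnect nested mount points before the root disk that contains them) can be sketched as follows; the function and sample paths are illustrative only, not part of SnapDrive:

```python
def disconnect_order(mount_points):
    """Order volume mount points so that nested mount points are
    disconnected before the disks that contain them (deepest first)."""
    return sorted(mount_points,
                  key=lambda p: p.rstrip("\\").count("\\"),
                  reverse=True)

# G:\mount_disk1\ must be disconnected before the root disk G:\.
order = disconnect_order(["G:\\", "G:\\mount_disk1\\"])
```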

About this task

Under ordinary circumstances, you cannot disconnect a LUN that contains a file being used by an application such as Windows Explorer or the Windows operating system. However, you can force a disconnect to override this protection. When you force a disk to disconnect, it results in the disk being unexpectedly disconnected from the Windows host.

Steps

1. Under SnapDrive in the left MMC pane, expand the instance of SnapDrive you want to manage, then expand Disks and select the disk you want to manage.

2. From the menu choices at the top of MMC, navigate to either Action > Disconnect Disk to disconnect normally, or Action > Force Disconnect Disk to force a disconnect.

3. When prompted, click Yes to proceed with the operation.

Note: This procedure will not delete the folder that was created at the time the volume mount point was added. After you remove a mount point, an empty folder will remain with the same name as the mount point you removed.

The icons representing the disconnected LUN disappear from both the left and right MMC panels.

Locating specific backups
You can locate a specific backup copy by searching for different criteria such as names and descriptions. After you locate a backup, you can then restore it, view its status, or delete it.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

You can locate a specific backup copy by searching one of the following criteria:

• Whole or partial backup description
• Partial current name of a primary object in the backup

292 | OnCommand Console Help


• Partial name of a virtual object when the backup was taken
• UUID of a virtual object within the backup

Steps

1. Click the View menu, then click the Backups option.

2. From the Backups tab, locate the Search field, and type in all or part of the backup description or version.

You can locate multiple backup versions by inserting a comma between search terms, and you can clear the search field to view all backups.

3. Click Find.

Related references

Administrator roles and capabilities on page 506

Deleting a dataset backup
You can delete one or more dataset backups if you do not want to wait for the retention period associated with the dataset to expire. You cannot delete a mounted backup. You must first unmount the backup so that you can delete it.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

In the OnCommand console Backups tab, deleting a mirror source backup copy prevents you from mounting its partner mirror destination backup copy. For a Mirror-generated destination backup copy to be mountable, its associated mirror source backup copy must still exist on the source node.

Steps

1. Click the View menu, then click the Backups option.

2. In the Backups tab, choose the backup version that you want to delete.

3. Click Delete.

Related references

Administrator roles and capabilities on page 506

Backups | 293


Monitoring backups

Monitoring local backup progress
You can monitor the progress of your backup job to see whether it is running, succeeded, or failed.

Steps

1. Click the View menu, then click the Jobs option.

2. Select the backup job that you want to monitor.

You can view information about the backup in the Details pane.

Related references

Administrator roles and capabilities on page 506

Page descriptions

Backups tab
The Backups tab displays scheduled and on-demand backup versions that contain virtual objects, but does not display storage backup versions. You can view detailed information about each virtual object backup version and restore virtual objects.

• Command buttons on page 294
• Backups list on page 295
• Restorable Entities on page 295
• Resource Properties on page 295

Command buttons

Edit Enables you to modify the backup description.

Delete Deletes the dataset backup. The Delete button is disabled for mounted backups.

Mount Enables you to mount a selected VMware backup to an ESX server if you want to verify its content before restoring it.

Unmount Enables you to unmount a VMware backup after you mount it on an ESX server and verify its contents.

Refresh Updates the backups list.

294 | OnCommand Console Help


Search field Enables you to search for a backup by description, resource name, or vendor object ID.

Find Executes the search to find the specified backup.

Clear Clears the search and returns to the default backup list.

Backups list

Backup ID The unique identifier of the backup job.

Dataset The name of the dataset to which the backup belongs.

Version The time at which the backup was taken.

Description Description of the backup.

Retention Duration The length of time the backup is retained.

Node (Name) Specifies whether the backup is local or remote, as well as backup and mirror information. Values are Local, Local (Primary data) for Hyper-V, Remote (Backup), and Remote (Mirror).

VMware Snapshot Displays Yes if the backup contains a VMware snapshot, No if it does not, and Not Applicable if it contains Hyper-V virtual objects.

Mount State Specifies the mount state of the VMware backup. Values are Not Mounted, Mounted, Mounting, and Unmounting.

Type Specifies whether the backup contains VMware or Hyper-V virtual objects.

Restorable Entities

Restore button Restores the specified resource.

Resource Displays the virtual machines or datastores that belong to the specified backup in the Backups list.

Vendor Object ID The universal ID assigned to a virtual object by VMware or Hyper-V.

Resource Properties

Partial backup Specifies whether the selected resource is completely or partially backed up. The Entire virtual machine button and the Start virtual machine after restore check box are disabled.

Datastore type Displays the type of datastore. Values are VMFS or NFS. If the virtual machine resides on an NFS datastore and belongs to a local backup, the ESX Host Name field is disabled.

Is Data ONTAP Specifies whether the resource is stored on Data ONTAP storage or not.

Backups | 295


Snapshot taken for VM Specifies whether a Snapshot copy of the resource was created.

Clustered Specifies whether the resource is part of a cluster.

Is template Specifies whether the resource is a template. The Start virtual machine after restore check box is disabled.

State Displays the state of the resource.

The Groups drop-down list is not applicable to the Manage Backups window.

Related references

Window layout customization on page 16

296 | OnCommand Console Help


Restore

Understanding restore

Restoring data from backups
The OnCommand console allows you to restore your virtual machines and datastores from legacy backups and from backups taken of newly created datasets that contain your virtual machines and datastores. The OnCommand console supports restore from local and remote backups and from backups that contain VMware-based snapshots.

Backup selection

The OnCommand console allows you to browse for backups to restore from on the Backups tab panel or the Server tab when determining which datastore, virtual machine, or virtual disk files in the virtual machine to restore. When you select a datastore, all virtual machines in the datastore will be restored.

The table of backups provides centralized management of VMware and Hyper-V entities. You can filter the table to show only backups with a backup ID, dataset, description, node name (full or partial), VMware snapshot, mount state, or resource name of a datastore or virtual machine.

If you search for a comma-separated list, all backups that have either a matching backup ID or backup name are displayed. You can specify multiple backup names and IDs by entering them in a comma-separated list; the result lists all of the backups that match any one of the given names or IDs.
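A simplified sketch of that matching rule (the function, field names, and sample catalog are illustrative assumptions, not the console's implementation): each comma-separated term is matched as an OR against backup IDs and names.

```python
def filter_backups(backups, query):
    """Return backups whose ID or name matches any term in a
    comma-separated search query (an OR across all terms)."""
    terms = [t.strip() for t in query.split(",") if t.strip()]
    return [b for b in backups
            if b["id"] in terms or b["name"] in terms]

catalog = [
    {"id": "101", "name": "nightly"},
    {"id": "102", "name": "weekly"},
    {"id": "103", "name": "adhoc"},
]
```

Searching `"101, weekly"` against this catalog would match the first backup by ID and the second by name.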

Related tasks

Locating specific backups on page 288

Where to restore a backup

OnCommand console allows you to select a restore destination from the Restore Wizard.

• Original location
  The backup of an entire datastore, a single virtual machine, or a virtual machine's disk files is restored to the original location. You set this location by choosing The entire virtual machine option.
• Different location
  The backup of the virtual machine disk files is restored to a different location. You select the destination location by setting the Particular virtual disks option.

297


Note: If you are recovering data from a remote backup, data access from the ESX server to a Hyper-V controller using FC, iSCSI, or NFS protocols must exist to ensure that an NFS mount can be done from a secondary node to the ESX server.

Restore scripting information
You can specify a script that is invoked before and after a restore operation. The script is invoked on the host service and the path is local to the host service.

If you would like to configure a script to run before or after your restore operation, you must first run the script independently from the DOS command line to determine if the script generates any pop-up messages. Pop-up messages cause the restore operation to not complete properly. Clear out the condition that generates the user pop-up message to ensure that the script runs automatically without requiring any user interaction.

Restore script conventions

If you use a PowerShell script, you should use the drive-letter convention. For other types of scripts, you can use either the drive-letter convention or UNC. When using UNC (Universal Naming Convention) for the script path, the script executes on the system running the host service.

Pre-restore scripting arguments

Note: If the pre-restore script fails, the restore might fail.

The following arguments apply to scripts that run before the restore operation occurs:

prerestore Indicates that the script is invoked prior to the restore operation.

resourceids Specifies the colon-separated list of resource IDs that are backed up.

backupid Specifies the backup identifier.

snapshots Specifies the comma-separated list of Snapshot copies that constitute the backup. You should use one of the following formats:

• storage system:/vol/volx:snapshot

• storage system:/vol/volx/lun:snapshot

• storage system:/vol/volx/lun/qtree:snapshot

secondarysnapshots Specifies the comma-separated list of secondary Snapshot copies that constitute the backup. This argument applies only for a restore from a secondary backup.

Post-restore scripting arguments

The following arguments apply to scripts that run after the restore operation occurs:

postrestore Indicates that the script is invoked after the restore operation.


resourceids Specifies the colon-separated list of resource IDs that are being backed up.

backupid Specifies the backup identifier.
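As a concrete illustration of the argument tables above, the sketch below parses the pre-restore and post-restore arguments into a dictionary. The argument names come from the tables above, but the key=value invocation style, the example values, and the helper function are assumptions for illustration only; the host service defines how these arguments are actually passed to your script.

```python
def parse_restore_script_args(argv):
    """Parse restore-script arguments into a dictionary.

    Illustrative sketch only: the key=value invocation style is an
    assumption, not a documented NetApp interface.
    """
    args = {"phase": argv[0]}  # e.g. "prerestore" or the post-restore phase
    for item in argv[1:]:
        key, _, value = item.partition("=")
        if key == "resourceids":
            # colon-separated list of resource IDs
            args[key] = value.split(":")
        elif key in ("snapshots", "secondarysnapshots"):
            # comma-separated entries such as filer1:/vol/vol1:snap1;
            # split each entry into (storage path, Snapshot copy name)
            args[key] = [tuple(s.rsplit(":", 1)) for s in value.split(",")]
        else:
            args[key] = value  # e.g. backupid

    return args

parsed = parse_restore_script_args([
    "prerestore",
    "backupid=backup_20110715",
    "resourceids=vm-101:vm-102",
    "snapshots=filer1:/vol/vol1:snap1,filer1:/vol/vol2:snap2",
])
```

A script structured this way runs without any user interaction, which matters because pop-up messages prevent the restore operation from completing properly.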

Managing restore

Restoring data from backups created by the OnCommand console

You can restore a datastore, virtual machine, or its disk files to its original location or an alternate location. From the Backup Management panel, you can sort the backup listings by vendor type to help you find your backups.

From the Backup Management panel, you can do the following:

• Restore a datastore, virtual machine, or its disk files from a local or remote backup to an original location

• Restore virtual machine disk files from a local or remote backup to a different location.
• Restore from a backup that has a VMware snapshot.

Restoring a datastore

You can use the OnCommand console to restore a datastore. By doing so, you overwrite the existing content with the backup you select.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

If you use a PowerShell script, you should use the drive letter convention. For other types of scripts, you can use either the drive letter convention or the Universal Naming Convention.

Steps

1. Click the View menu, then click the Backups option.

2. In the Backups tab, select a backup of the datastore.

3. Click Restore.

Restoring a datastore powers off all virtual machines.

4. In the Restore dialog box, click Restore.


Restoring a VMware virtual machine

You can use the OnCommand console to recover a VMware virtual machine from a local or remote backup. By doing so, you overwrite the existing content with the backup you select.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

The process for restoring a VMware virtual machine differs from restoring a Hyper-V virtual machine in that you can restore an entire virtual machine or its disk files. Once you start the restoration, you cannot stop the process, and you cannot restore from a backup of a virtual machine after you delete the dataset the virtual machine belonged to.

If you use a PowerShell script, you should use the drive letter convention. For other types of scripts, you can use either the drive letter convention or the Universal Naming Convention.

Steps

1. Click the View menu, then click the Backups option.

2. In the Backups tab, click Type and Node (Name) to sort the backup table by VMware virtual machines and by local or remote backups.

3. Select a backup, and then select the virtual machine from the list of restorable entities.

4. Click Restore.

The Restore wizard opens and lists all backups which include the virtual machine.

5. Select one of the following recovery options:

Option Description

Entire virtual machine

Restores the contents of your virtual machine from a Snapshot copy to its original location. The Start virtual machine after restore checkbox is enabled if you select this option and the virtual machine is registered.

Particular virtual disks

Restores the contents of the virtual disks on a virtual machine to a different location. This option is enabled if you uncheck the Entire virtual machine option. You can set a Destination datastore for each virtual disk.

6. In the ESX host name field, select the name of the ESX host. The ESX host is used to mount the virtual machine components.

This option is available if you want to restore virtual disk files or the virtual machine is on a VMFS datastore.


7. In the Pre/Post Restore Script Path field, type the name of the script that you want to run before or after the restore operation.

8. From this wizard, click Restore to begin the restoration.

Related tasks

Adding a virtual machine to inventory on page 59

Where to restore a backup on page 297

Restoring a Hyper-V virtual machine

You can use the OnCommand console to recover a Hyper-V virtual machine from a local or remote backup. By doing so, you overwrite the existing content with the backup you select.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Before attempting a restore operation on a Hyper-V virtual machine, you must ensure that connectivity to the storage system exists. If there is no connectivity, the restore operation fails.

About this task

If you use a PowerShell script, you should use the drive letter convention. For other types of scripts, you can use either the drive letter convention or the Universal Naming Convention.

If storage system volumes are renamed, you cannot restore a virtual machine created prior to renaming the volumes.

Steps

1. Click the View menu, then click the Backups option.

2. In the Backups tab, click Type and Node (Name) to sort the backup table by VMware or Hyper-V virtual machines and by local or remote backups.

3. Select a dataset that has the backup you want, and then select the virtual machine from the list of backed-up entities.

4. Click Restore.

The Restore wizard opens.

5. Select one of the following recovery options:

Option Description

Start VM after restore Restores the contents of your virtual machine from a Snapshot copy and restarts the virtual machine after the operation completes.


Option Description

Pre/Post Restore Script: Runs a script that is stored on the host service before or after the restore operation.

The Restore wizard displays the location of the virtual hard disk (.vhd) file.

6. From this wizard, click Restore to begin the restoration.

Monitoring restore

Viewing restore job details

After you start the restore job, you can track the progress of the restore job from the Jobs tab and monitor the job for possible errors.

Steps

1. Click the View menu, then click the Jobs option.

2. From the Jobs tab, select Restore from the Jobs type field.


Reports

Understanding reports

Reports management

You can print, export, and share data in the reports that are generated by the OnCommand console. You can also schedule a report and send the report schedule to one or more users.

If you want to do this... Perform this action...

Print a report Select the report, click the toolbar icon, and select the Print option.

Export a report Select the report, click the toolbar icon, and select the Export Content option.

Export data in a report Select the report, click the toolbar icon, and select the Export Data option.

Specify parameter values Select the report, click the toolbar icon, and select the Parameters option. This option is valid only for the Events reports.

Save a report with the existing name Make changes to a report, click Save, and select the Save option. This option is enabled only for custom reports.

Save a report with a new name Make changes to a report, click Save, and select the Save As option.

Share a report Select the report, and then click Share.

Schedule a report Select the report, and then click Schedule.

Delete a report Select the report, and then click Delete. This option is enabled only for custom reports.


Reports tab customization

The OnCommand console enables you to customize the data in a report and the report layout.

If you want to do this... Perform this action...

Edit column labels Select the column, click the arrow on the right, and click the Header option.

Organize data into groups Select the column, click the arrow on the right, and click Group > Add Group.

Remove groups Select the column, click the arrow on the right, and click Group > Delete Inner Group. This option is displayed only when data is organized into groups.

Hide group details Select the column, click the arrow on the right, and click Group > Hide Detail. This option is displayed only when data is organized into groups.

Show group details Select the grouped column, click the arrow on the right, and click Group > Show Detail. This option is displayed only when the group details are hidden.

Start each group on a new page Select the column, click the arrow on the right, and click Group > Page Break.

Hide columns Select the column, click the arrow on the right, and click Column > Hide Column.

Show hidden columns Select the column, click the arrow on the right, and click Column > Show Column.

Delete a column Select the column, click the arrow on the right, and click Column > Delete Column.

Compute a column Select the column, click the arrow on the right, and click Column > New Computed Column.

Reorder columns Select the column, click the arrow on the right, and click Column > Reorder Columns.

Hide duplicate values in a column Select the column, click the arrow on the right, and click Column > Do Not Repeat Values.


Redisplay duplicate values in a column Select the column, click the arrow on the right, and click Column > Repeat Values.

Aggregate data Select the column, click the arrow on the right, and click Aggregation.

Filter data Select the column, click the arrow on the right, and click Filter > Filter.

Sort data Select the column, click the arrow on the right, and click Sort.

Format a column Select the column, click the arrow on the right, and click Format > Font.

Format data in a report based on conditions Select the column, click the arrow on the right, and click Format > Conditional Formatting.

Types of object status

The physical or logical objects can display one of six status types: Normal, Warning, Error, Critical, Emergency, or Unknown. An object's status alerts you when you must take action to prevent downtime or failure.

Normal The object is operating within the desired thresholds.

Warning The object experienced an occurrence that you should be aware of.

Error The object is still performing without service disruption, but its performance might be affected.

Critical The object is still performing but service disruption might occur if corrective action is not taken immediately.

Emergency The object unexpectedly stopped performing and experienced unrecoverable data loss. You must take corrective action immediately to avoid extended downtime.

Unknown The object is in an unknown transitory state. This status is displayed only for a brief period.

What report scheduling is

The scheduler service in the DataFabric Manager server enables you to schedule a report to be generated at a specific date and time. The report is sent by e-mail message to one or more users at the specified date and time.

Note: If you change the system time when the DataFabric Manager server scheduler service is running, schedules configured for a report might not work as expected. To ensure that the service functions correctly, you must stop and start the service using the dfm service command.


Managing reports

Scheduling reports

You can use the Reports tab to schedule reports to be generated and sent by e-mail message to one or more users on a recurring basis at a specified date and time. For example, you can schedule a report to be sent as e-mail, in the HTML format, every Monday.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Reports option.

2. In the navigation pane, select the report for which you want to create a schedule, and then click Schedule.

3. In the Schedule Report dialog box, specify the e-mail address or the alias of the administrator or the user to whom you want to send the report schedule.

The E-mail field is mandatory. You can specify one or more entries, separated by commas.

4. Specify other report schedule parameters, such as the format, frequency, and scope.

The scope specifies the group, storage system, resource pool, or quota user for which the report is generated.

Note: For groups, you can navigate only to the direct members of a group.

5. Click Apply, then click Ok.

Result

The new schedule for a report is saved in the DataFabric Manager server, and is displayed in the Saved Settings.

Related references

Administrator roles and capabilities on page 506


Viewing the scheduled reports log

You can view the scheduled reports log from the Scheduled Log tab. The log provides details about a scheduled report's status, run time, and the reason for any failed schedule.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Reports option.

2. In the Reports tab, click the Scheduled Log tab.

The list of scheduled reports is displayed in a tabular format.

3. Select a scheduled report to view the details.

Related references

Administrator roles and capabilities on page 506

Sharing reports

You can share a report with one or more users. The report is sent by e-mail message to the specified users instantly.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

If you customize a report, you should save the changes before you share it. If not, when you share the report, the changes are not displayed. You can save changes made to a custom report without changing the report's current name. For detailed reports, you can save the changes with a new report name.

Steps

1. Click the View menu, then click the Reports option.

2. In the navigation pane, select the report you want to share, and then click Share.

3. In the Share Report dialog box, specify the e-mail address or the alias of the administrator or the user with whom you want to share the report.


The E-mail field is mandatory. You can specify one or more entries, separated by commas.

4. Specify other report sharing parameters, such as subject, format, and scope.

By default, the name of the report is displayed as the subject. The scope specifies the group, storage system, resource pool, or quota user for which the report is generated.

Note: For groups, you can navigate only to the direct members of a group.

5. Click Ok.

Related references

Administrator roles and capabilities on page 506

Deleting a report

You can delete one or more custom reports when they are no longer necessary.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Reports option.

2. In the navigation pane, select Custom Reports.

3. Select one or more reports that you want to delete, and then click Delete.

Related references

Administrator roles and capabilities on page 506

Page descriptions

Reports tab

The Reports tab enables you to view detailed information about the reports that you generate. You can search for a specific report, save a detailed report as a custom report, and delete a custom report. You can also share and schedule a report.

Reports tab details

You can view the following information in the Viewer tab:


Navigation tree Displays detailed and custom reports along with the available subcategories (for example, Events, Inventory, and Storage capacity). You can click the name of a report in the navigation tree to view the report contents in the reporting area.

Search filter Enables you to search for a specific report by entering the report name in the search filter.

Toolbar Enables you to navigate to specific pages of the report, export the report, undo or redo your actions, and so on.

Reporting area Displays, in tabular format or as a combination of charts and tabular data, the contents of a selected report.

Command buttons

The command buttons enable you to perform the following tasks for a selected report:

Save
• Save Saves the changes made to a custom report without changing the report's current name.

Note: This option is disabled for detailed reports.

• Save As Displays the Save As dialog box, which enables you to save changes made to a report and to give the report a new name.

Delete Enables you to delete one or more custom reports when they are no longer necessary.

Note: This option is disabled for detailed reports.

Schedule Enables you to schedule the report to be generated on a recurring basis at a specified date and time. The report is sent by e-mail message to one or more users at the specified date and time.

Share Enables you to share a report with one or more users. The report is sent by e-mail message to the specified users instantly.

Refresh Refreshes the contents of the current report.

Note: You can also save, delete, schedule, or share a report by right-clicking the selected report in the navigation tree.

Scheduled Log tab

You can view the list of logs for scheduled reports by clicking the Scheduled Log tab. You can view details such as the status and run time of a schedule, and the reason for any schedule failure. The details are displayed in the details pane.


Operations Manager Reports link

In addition to the reports displayed in the OnCommand console, you can view other reports by clicking the Operations Manager Reports link. During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Schedule Report dialog box

The Schedule Report dialog box enables you to schedule the report to be generated on a recurring basis at a specified date and time. The report is sent by e-mail message to one or more users at the specified date and time.

• Saved Settings on page 310
• Properties on page 310
• Command buttons on page 310

Saved Settings

Saved Settings displays all of the schedules for a report. You can create a new schedule by selecting the New option. You can rename or delete a schedule by right-clicking the schedule and choosing the appropriate option. You can also modify a schedule by selecting the schedule and making the required changes.

Properties

E-mail Specifies either the e-mail address or the alias of the administrator or the user to whom you want to send the report schedule. You can specify one or more entries, separated by commas. This is a mandatory field.

Format Specifies the format in which you want to schedule the report. The HTML option isselected by default.

Frequency Specifies the frequency at which you want to schedule the report. The Hourly at Minute option is selected by default.

Scope Specifies the group, storage system, resource pool, or quota user for which the report is generated.

Note: For groups, you can navigate only to the direct members of a group.

Command buttons

The command buttons enable you to perform the following tasks:

Browse Enables you to browse through the available resources. The resource you select then defines the scope of the generated report.


Apply Updates the properties that you specify for a report schedule.

Ok Updates the properties that you specify for a report schedule, and closes the Schedule Report dialog box.

Cancel Enables you to undo the schedule report configuration and closes the Schedule Report dialog box.

Share Report dialog box

The Share Report dialog box enables you to e-mail a report to one or more users instantly. If you customize a report, you should save the changes before you share it. If not, when you share the report, the changes are not displayed.

• Properties on page 311
• Command buttons on page 311

Properties

E-mail Specifies either the e-mail address or the alias of the administrator or the user with whom you want to share the report. You can specify one or more entries, separated by commas. This is a mandatory field.

Subject Specifies the subject of the e-mail. By default, the name of the report is displayed.

Format Specifies the format in which you want to share the report. The HTML option is selected by default.

Scope Specifies the group, storage system, resource pool, or quota user for which the report is generated.

Note: For groups, you can navigate only to the direct members of a group.

Command buttons

The command buttons enable you to perform the following tasks:

Browse Enables you to browse through the available resources. The resource you select then defines the scope of the generated report.

Ok Updates the share report properties that you specify.

Cancel Enables you to undo the share report configuration and closes the Share Report dialog box.

Events reports


Understanding events reports

What event reports are

Event reports provide information about the name, severity, status, and cause of an event. By default, the report displays events with severity level Warning, Error, Critical, and Emergency. You can filter the reports based on severity, status, and so on, and save the result as a new report.

The event reports display the following information:

• Severity of the event
• Time that the event was triggered
• Status of the event
• The name of the administrator who acknowledged the event
• The date and time when the event was acknowledged

Related references

Description of event severity types on page 31

Page descriptions

Events Current report

The Events Current report displays information about events that are current and yet to be resolved. The information includes the name, severity, and cause of the event.

Note: To display the charts and icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Chart

You can customize the data displayed in the chart, change the subtype and format of the chart, and export the data from the chart.

Report details

Severity Displays the severity of the event.

By default, the report displays events with severity level Warning, Error, Critical, and Emergency. To view events with severity level Normal and Information, you can select the Parameters option from the toolbar, clear the prefilter, and run the report.

Event Displays the event name.


Triggered On Displays the date and time when the event was generated.

Acknowledged By Displays the name of the administrator who acknowledged the event.

Acknowledged Displays the date and time when the event was acknowledged.

Source Displays the name of the object with which the event is associated.

Related references

Description of event severity types on page 31

Reports tab customization on page 304

Reports management on page 303

Events All report

The Events All report displays, in tabular format, information about all the events, including resolved and unresolved events. The information includes the name, severity, and cause of the event, and details about the resolved event.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Severity Displays the severity of the event.

By default, the report displays events with severity level Warning, Error, Critical, and Emergency. To view events with severity level Normal or Information, you can select the Parameters option from the toolbar, clear the prefilter, and run the report.

Event Displays the event name.

Triggered On Displays the date and time when the event was generated.

Acknowledged By Displays the name of the administrator who acknowledged the event.

Acknowledged Displays the date and time when the event was acknowledged.

Source Displays the name of the object with which the event is associated.

Resolved On Displays the date and time when the event was resolved.

Resolved By Displays the name of the administrator who resolved the event.

Related references

Description of event severity types on page 31

Reports tab customization on page 304

Reports management on page 303


Inventory reports

Understanding inventory reports

What inventory reports are

Inventory reports provide information about objects such as aggregates, volumes, qtrees, and LUNs.

Inventory report types are as follows:

• Aggregates report
• File Systems report
• LUNs report
• Qtrees report
• Storage Systems report
• vFiler Units report
• Volumes report
• Storage Services report
• Storage Service Policies report
• Storage Service Datasets report

Page descriptions

Aggregates report

The Aggregates report displays information such as the type, state, and status of the aggregate, and the SnapLock feature in the aggregate.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Aggregate Displays the name of the aggregate.

Storage System Displays the name of the storage system that contains the aggregate.

Type Displays the type of the aggregate:

Traditional An aggregate that contains only one traditional volume. A traditional aggregate is created automatically in response to a traditional volume create request.


Aggregate An aggregate that contains one or more flexible volumes, RAID groups, and disks.

Striped Aggregate An aggregate that is striped across multiple nodes, allowing its volumes to also be striped across multiple nodes. Striped aggregates are made up of multiple member aggregates. Member aggregates are the aggregates of actual cluster nodes, which form a striped aggregate.

Block Type Displays the block format of the aggregate as 32_bit or 64_bit. By default, this column is hidden.

RAID Displays the RAID protection scheme. The RAID protection scheme can be one of the following:

raid0 All the raid groups in the aggregate are of type raid0.

raid4 All the raid groups in the aggregate are of type raid4.

raid_dp All the raid groups in the aggregate are of type raid_dp.

mixed_raid_type The aggregate contains RAID groups of different RAID types (raid0, raid4, and raid_dp).

Note: Data ONTAP uses raid4 or raid_dp protection to ensure data integrity within a group of disks even if one or two of those disks fail.

State Displays the current state of the aggregate. An aggregate can be in one of the following three states:

Offline Read or write access is not allowed.

Restricted Some operations such as parity reconstruction are allowed, but data access is not allowed.

Online Read and write access to volumes hosted on this aggregate is allowed.

Status Displays the current status of the aggregate based on the events generated for the aggregate. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Mirrored Displays whether the aggregate is mirrored.

Note: For an aggregate to be enabled for mirroring, the storage system must have a SyncMirror license for syncmirror_local or cluster_remote installed and enabled, and the storage system's disk configuration must support RAID-level mirroring.


SnapLock Type Displays the type of the SnapLock feature (if it is enabled) used in the aggregate:

SnapLock Compliance Provides WORM protection of files and restricts the storage administrator's ability to perform any operations that might modify or erase retained WORM records.

SnapLock Enterprise Provides WORM protection of files, but uses a trusted administrator model of operation that allows the storage administrator to manage the system with few restrictions.

No Indicates the SnapLock feature is disabled.

Related references

Types of object status on page 305

Reports tab customization on page 304

Reports management on page 303

File Systems report

The File Systems report displays information about the volumes and qtrees in a storage server. You can monitor and manage only qtrees created by the user. Therefore, the default qtree, qtree 0, is not monitored or managed.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Type Displays whether the volume or qtree is clustered or nonclustered.

File System Displays the name and path of the storage object.

Storage Server Displays the name of the storage server. The storage server can be a storage controller, Vserver, or a vFiler unit that contains the volume.

Status Displays the current status of the volume based on the events generated for a specific volume, qtree, or LUN. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Related references

Types of object status on page 305

Reports tab customization on page 304

Reports management on page 303


LUNs report

The LUNs report displays information about the LUNs and LUN initiator groups in your storage server.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

LUN Path Displays the path name of the LUN (volume or qtree that contains the LUN).

Initiator Group (LUN ID) Displays the name of the initiator group to which the LUN is mapped.

Description Displays the description (comment) that you specified when creating the LUN on your storage server. By default, this column is hidden.

Size (GB) Displays the size of the LUN.

Storage Server Displays the name of the storage server that contains the LUN. The storage server can be a storage controller or a vFiler unit.

Status Displays the current status of the LUN. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown. The status of a LUN is determined by the DataFabric Manager server based on the information that it obtains from the storage controller or the vFiler unit in which the LUN exists. For example, if the storage controller or the vFiler unit reports that a LUN is offline, the DataFabric Manager server displays the status of the LUN as Warning.

Read/Sec (Bytes) Displays the rate of bytes (number of bytes per second) read from the LUN. By default, this column is hidden.

Write/Sec (Bytes) Displays the rate of bytes (number of bytes per second) written to the LUN. By default, this column is hidden.

Operations/Sec Displays the rate of the total operations performed on the LUN. By default, this column is hidden.

Related references

Types of object status on page 305

Reports tab customization on page 304

Reports management on page 303

Qtrees report

The Qtrees report displays information about all the qtrees in a volume. You can monitor the capacity and status of the qtree, and the used and available space in the qtree.

You can monitor and manage only qtrees created by the user. Therefore, the default qtree, qtree 0, is not monitored or managed.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Qtree Displays the name of the qtree. The icon indicates whether the qtree is clustered or nonclustered.

Storage Server Displays the name of the storage server that contains the qtree. The storage server can be a storage controller or a vFiler unit that contains the qtree.

Volume Displays the name of the volume that contains the qtree.

Status Displays the current status of the qtree based on the events generated for the qtree. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Used Capacity (GB) Displays the amount of space used in the qtree.

Disk Space Limit (GB) Displays the hard limit on disk space as specified in the /etc/quotas file of the storage system. By default, this column is hidden.

Possible Addition (GB) Displays the amount of additional storage that can be installed on the storage server to increase the available space for this qtree.

Possible Available (GB) Displays the total amount of storage (currently available and possible addition) that is available for increasing the capacity of the qtree.

Related references

Types of object status on page 305

Reports tab customization on page 304

Reports management on page 303

Storage Systems report

The Storage Systems report displays information such as the type, model, serial number, and the operating system version of the storage system.

Note: To display the charts and icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Chart

You can customize the data displayed in the chart, change the subtype and format of the chart, and export the data from the chart.

Report details

Type Displays whether the storage system is a cluster or a controller. The controller can be a stand-alone system, a clustered system, or part of an HA pair.

Status Displays the current status of the storage system based on the events generated for the storage system. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Storage System Displays the name of the storage system.

Model Displays the model of the storage system.

Serial Number Displays the serial number of the storage system. The system serial number is usually provided on the chassis and is used to identify a system.

OS Version Displays the version of the operating system running on the storage system.

Firmware Version Displays the version of the firmware running on the storage system.

System ID Displays the product ID of the storage system. It is a unique number that identifies a system, and is usually specified for the storage system head. By default, this column is hidden.

Related references

Reports tab customization on page 304

Types of object status on page 305

Reports management on page 303

vFiler Units report

The vFiler Units report displays information about the vFiler units that are discovered by the DataFabric Manager server, and the network addresses that are assigned to them.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Status Displays the current status of the vFiler unit based on the events generated for the vFiler unit. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

vFiler Units Displays the name of the vFiler unit.

Hosting Storage System Displays the name of the hosting storage system.

Network Address Displays the IP address of the vFiler unit.

IP Space Defines a distinct IP address space in which vFiler units can participate. By default, this column is hidden.

System ID Displays the universal unique identifier (UUID) of the vFiler unit. By default, this column is hidden.

Ping Status Displays the status of the ping request sent to the vFiler unit. A vFiler unit might be up or down. The ping status is displayed as "Down" if the vFiler unit has stopped. The ping status is displayed as "Down (inconsistent)" if the vFiler unit state is inconsistent.

Ping Timestamp Displays the date and time when the vFiler unit was last queried.

Down Timestamp Displays the date and time when the vFiler unit went offline.

Related references

Types of object status on page 305

Reports tab customization on page 304

Reports management on page 303

Volumes report

The Volumes report displays information about the type, state, and status of volumes, and the RAID protection scheme.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Volume Displays the name of the volume. The icon indicates whether the volume is clustered or nonclustered.

Aggregate Displays the name of the aggregate that contains the volume.

Storage Server Displays the name of the storage server that contains the volume. The storage server can be a storage controller, Vserver, or a vFiler unit.

Type Specifies the type of volume that is selected: Traditional, Flexible, or Striped. By default, this column is hidden.

Block Type Displays the block format of the volume as 32_bit or 64_bit. By default, this column is hidden.

RAID Displays the RAID protection scheme. The RAID protection scheme can be one of the following:

raid0 All the raid groups in the volume are of type raid0.

raid4 All the raid groups in the volume are of type raid4.

raid_dp All the raid groups in the volume are of type raid_dp.

mixed_raid_type The volume contains RAID groups of different RAID types (raid0, raid4, and raid_dp).

Note: Data ONTAP uses raid4 or raid_dp protection to ensure data integrity within a group of disks even if one or two of those disks fail.

State Displays the state of the volume. A volume can be in one of the following three states (also called mount states):

Online Read and write access is allowed.

Offline Read and write access is not allowed.

Restricted Some operations, such as copying volumes and parity reconstruction, are allowed, but data access is not allowed.

Status Displays the current status of the volume based on the events generated for the volume. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Clones Displays the clone of the specified volume, if any.

Parent Clones Displays the parent volume from which the clone is derived. By default, this column is hidden.

Related references

Types of object status on page 305

Reports tab customization on page 304

Reports management on page 303

Storage Services report

The Storage Services report displays information about the available storage services.

Storage Service Displays the name of the storage service.

Storage Service Description Displays the description of the corresponding storage service.

Protection Policy Displays the protection policy that is assigned to the storage service.

Primary Provisioning Policy Displays the provisioning policy that is assigned to the primary node of the storage service.

Dataset Count Displays the number of datasets that are associated with the storage service.

Related references

Reports tab customization on page 304

Reports management on page 303

Storage Service Policies report

The Storage Service Policies report displays information about the policies and resource pools that are associated with each policy node of a storage service.

Storage Service Displays the name of the storage service.

Protection Policy Displays the protection policy that is assigned to the storage service.

Policy Node Displays the name of the data protection policy node, depending on the type of protection policy that is assigned to the storage service.

Provisioning Policy Displays the provisioning policy that is assigned to the policy node of the storage service.

vFiler Template Displays the vFiler template that specifies the configuration settings that are required to create a new vFiler unit. The vFiler template is assigned to the policy node of the storage service.

Resource Pools Displays the resource pools that are associated with the policy node of the storage service.

Related references

Reports tab customization on page 304

Reports management on page 303

Storage Service Datasets report

The Storage Service Datasets report displays information about the datasets associated with a storage service.

Storage Service Displays the name of the storage service.

Dataset Displays the name of the dataset that is associated with a storage service.

Protection Policy Displays the protection policy that is assigned to the storage service.

Primary Provisioning Policy Displays the provisioning policy that is assigned to the primary policy node of the storage service.

Related references

Reports tab customization on page 304

Reports management on page 303

Storage capacity reports

Understanding storage capacity reports

Overview of storage capacity reports

Storage capacity reports provide information about a storage object's capacity, committed space, growth, space savings and reservation, and usage metrics.

Storage capacity reports are as follows:

• Capacity reports
• Committed capacity reports
• Capacity growth reports
• Space reservation reports
• Space efficiency reports
• Usage metrics reports

Capacity reports

Capacity reports provide information about the total capacity, used space, and free space available in the storage object.

You can view the following capacity reports:

• Storage Systems Capacity report
• Aggregates Capacity report
• Volumes Capacity report
• Qtrees Capacity report
• User Quotas Capacity report

Committed capacity reports

Committed capacity reports provide information about the capacity, space usage, and committed space statistics. You can determine the amount of space committed in the qtrees or flexible volumes from the committed capacity reports.

You can view the following committed capacity reports:

• Aggregates Committed Capacity report
• Volumes Committed Capacity report

Capacity growth reports

Capacity growth reports provide information about the space usage and growth of the storage objects. These reports enable you to determine the daily growth rate of aggregates, volumes, or qtrees, and the time remaining before these storage objects run out of storage space.

You can view the following capacity growth reports:

• Aggregates Capacity Growth report
• Qtrees Capacity Growth report
• Volumes Capacity Growth report

Space reservation reports

Space reservation reports provide information about the space reservation statistics in a volume. You can determine the size and availability of the space reserve, and the thresholds at which the space reserve is likely to be depleted, using the space reservation reports.

You can view the following space reservation report:

• Volumes Space Reservation report

Space efficiency reports

Space efficiency reports provide information about the space usage and the space savings achieved for the volumes through deduplication.

You can determine the space savings achieved through deduplication from these reports.

You can view the following space savings reports:

• Aggregates Space Savings report
• Volumes Space Savings report

Usage metrics reports

Usage metric reports provide information about the space used by the dataset nodes.

You can determine the total space used by a dataset, space used by the dataset nodes in primary and secondary physical storage, allocated primary storage for a dataset node, the node's read and write statistics, and so on.

You can view the following usage metrics reports:

• Datasets Average Space Usage Metrics report
• Datasets Maximum Space Usage Metrics report
• Datasets IO Usage Metrics report
• Datasets Average Space Usage Metric Samples report
• Datasets Maximum Space Usage Metric Sample report
• Datasets IO Usage Metric Sample report

Understanding usage metrics reports

What usage metric reports are

Usage metric reports provide information about the space used by the dataset nodes, such as total space used by a dataset, space used by the dataset nodes in primary and secondary physical storage, allocated primary storage for a dataset node, and the node's read and write statistics.

Usage Metric reports are generated for each node of the dataset at specified intervals. You can set the interval at which you want a report to be generated. To set the interval, you must use the dfm option command.

There are six Usage Metric reports. The reports are broadly classified into two types: space utilization and input/output measurement.

The space utilization reports provide information about effective used data space, physical used data space, total data space, used Snapshot space, Snapshot reserve, total space, and guaranteed space. The space utilization values are presented in the following reports:

• Datasets Average Space Usage Metric report

• Datasets Maximum Space Usage Metric report
• Datasets Average Space Usage Metric Sample report
• Datasets Maximum Space Usage Metric Sample report

The input/output measurement reports provide information about the data read from or written to each dataset node. The input/output measurement values are presented in the following reports:

• Datasets IO Usage Metric report
• Datasets IO Usage Metric Sample report

The Usage Metric reports do not include information earlier than the most recent 12 months.

Guidelines for solving usage metric report issues

You must follow certain guidelines to avoid issues such as being unable to view or generate a usage metric report, or receiving excess data.

The following guidelines can help you to avoid issues related to report generation:

• Reports cannot be created for the destination node if a mirror relationship is not created for the primary node. If a mirror relationship is not created in the primary node, then the destination volumes are deleted. Therefore, metrics are not calculated because there are no volumes in the dataset of the destination node.

• Reports cannot be generated for the second node of a node pair if it is not accessible by the DataFabric Manager server.

• Reports cannot be generated for a dataset if the space utilization monitors or the input/output monitors are turned off.

• The Notes field of a usage metric report might display overcharge if a dataset has one or more qtrees in the primary node. Having one or more qtrees in the primary node results in the dataset's volume information being included in the metric computation of the qtrees.

How space utilization values are calculated

The space utilization values are calculated using different formulas. You can verify the values presented in the Usage Metric report by using either the volume-list-info API or the dfTables file.

You must use the volume-list-info API to view the sample data collected for the space saved by data sharing, the volume size, total data space, and the Snapshot reserve. To view the sample data collected for used data space, the Snapshot overflow, and the used Snapshot space, you must use the snmpwalk command on the dfTable file.

Effective Used Data Space formula: Effective Used Data Space = used data space - (Snapshot overflow + available overwrite reserve + hole reserve) + dedupe returns

Physical Used Data Space formula: Physical Used Data Space = used data space - (Snapshot overflow + available overwrite reserve + hole reserve)

Total Data Space formula: Total Data Space = total space of all the volumes in the dataset node

Used Snapshot Space formula: Used Snapshot Space = sum of the physical space used by the volume Snapshot copies

Snapshot Reserve formula: Snapshot Reserve = sum of the Snapshot reserves of all the volumes of the dataset node

Total Space formula: Total Space = Total Data Space + Snapshot Reserve

Guaranteed Space formula:

Guaranteed Space for volume = Total Data Space + Snapshot Reserve

Guaranteed Space for file = (used data space + Used Snapshot Space) - Snapshot overflow

Guaranteed Space for none = (Physical Used Data Space + Used Snapshot Space) - Snapshot overflow

Note: Guaranteed Space is calculated based on the guarantee level set on a volume.
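The formulas above can be sketched as plain functions. This is an illustration only; the parameter names mirror the terms used in this section and are not part of any DataFabric Manager API.

```python
# Sketch of the space utilization formulas above. All names are
# illustrative assumptions, chosen to mirror the terms in this section.

def effective_used_data_space(used, snap_overflow, overwrite_reserve,
                              hole_reserve, dedupe_returns):
    # used data space - (Snapshot overflow + available overwrite reserve
    # + hole reserve) + dedupe returns
    return used - (snap_overflow + overwrite_reserve + hole_reserve) + dedupe_returns

def physical_used_data_space(used, snap_overflow, overwrite_reserve, hole_reserve):
    # Same as above, without the dedupe returns term.
    return used - (snap_overflow + overwrite_reserve + hole_reserve)

def total_space(total_data_space, snapshot_reserve):
    # Total Space = Total Data Space + Snapshot Reserve
    return total_data_space + snapshot_reserve

def guaranteed_space(guarantee, total_data_space, snapshot_reserve,
                     used, physical_used, used_snapshot, snap_overflow):
    # Guaranteed Space depends on the guarantee level set on the volume.
    if guarantee == "volume":
        return total_data_space + snapshot_reserve
    if guarantee == "file":
        return (used + used_snapshot) - snap_overflow
    if guarantee == "none":
        return (physical_used + used_snapshot) - snap_overflow
    raise ValueError("unknown guarantee level: %s" % guarantee)
```

For example, with 100 GB of used data space, 10 GB of Snapshot overflow, 5 GB each of overwrite and hole reserve, and 2 GB of dedupe returns, the effective used data space works out to 82 GB.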

How maximum and average space utilization values are calculated

You can calculate the average space utilization and the maximum space utilization for the volumes in a dataset node at specified intervals.

Space utilization computation using sample data collected from different volumes

Assume the following about a dataset:

• volx, voly, and volz are the volumes of the dataset's primary node.
• Each volume has two samples. The samples are sx1 and sx2 for volx.
• All the samples are collected at the same time.

Maximum space utilization = MAX [(sx1 + sy1 + sz1), (sx2 + sy2 + sz2)]

Average space utilization = AVG [(sx1 + sy1 + sz1), (sx2 + sy2 + sz2)]

Example: Space utilization computation when sample data is not present for some volumes

Assume the following about a dataset:

• volx, voly, and volz are the volumes of the dataset's primary node.
• Sample sx1 is collected at time t1 for volume volx.
• Samples sy1 and sy2 are collected at time t1 and t2, respectively, for volume voly.

• Samples sz1, sz2, sz3, and sz4 are collected at time t1, t2, t3, and t4, respectively, for volume volz.

Maximum space utilization = MAX [(sx1 + sy1 + sz1), (sy2 + sz2), sz3, sz4]

Average space utilization = AVG [(sx1 + sy1 + sz1), (sy2 + sz2), sz3, sz4]
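Both examples follow the same rule: sum the samples collected at each timestamp across volumes, then take the MAX or AVG over those per-timestamp sums. A minimal sketch, assuming samples arrive as (volume, timestamp, value) tuples (an illustrative format, not a DataFabric Manager data structure):

```python
from collections import defaultdict

def space_utilization(samples):
    """samples: iterable of (volume, timestamp, value) tuples.
    Sums the values collected at each timestamp across volumes, then
    returns (maximum, average) over those per-timestamp sums.
    Timestamps with missing volumes simply sum over whatever samples
    are present, as in the second example above."""
    by_time = defaultdict(int)
    for _volume, timestamp, value in samples:
        by_time[timestamp] += value
    sums = list(by_time.values())
    return max(sums), sum(sums) / len(sums)

# First example: three volumes, two samples each, collected at the
# same two timestamps.
samples = [("volx", 1, 10), ("voly", 1, 20), ("volz", 1, 30),
           ("volx", 2, 40), ("voly", 2, 20), ("volz", 2, 30)]
maximum, average = space_utilization(samples)
# MAX[(10+20+30), (40+20+30)] = 90; AVG[60, 90] = 75.0
```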

How input/output measurement values are calculated

The total input/output measurement value is the sum of the total data read from and written to all the volumes in each dataset node. You can use either the Data ONTAP APIs or the dfTables file to view that sample data.

How total input/output measurement values are calculated

You can calculate the total data read and written for each dataset node. The total input/output measurement value is the sum of data read and written for all volumes in a dataset node, at specified intervals for a specified period.

Example: Input/output measurement for data collected from different volumes

Assume the following about a dataset:

• volx, voly, and volz are the volumes of the dataset's primary node.
• Each volume has three samples for an input/output metric. Let the samples be sx1, sx2, and sx3 for volx, collected at time t1, t2, and t3, respectively.

Total Data Read for volx between time t0 and t3 = (sx1-sx0) + (sx2-sx1) + (sx3-sx2)

Total Data Read from a dataset node between time t0 and t3 = { [(sx1-sx0) + (sx2-sx1) + (sx3-sx2)] + [(sy1-sy0) + (sy2-sy1) + (sy3-sy2)] + [(sz1-sz0) + (sz2-sz1) + (sz3-sz2)] }

The total data read between time t01 and t3, where t01 is a timestamp between t0 and t1, is calculated by normalizing the t0 and t1 samples.

Therefore, Total Data Read for volx for the interval between t01 and t3 = [(sx1-sx0) * (t1-t01)/(t1-t0)] + (sx2-sx1) + (sx3-sx2)
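This per-volume calculation can be sketched as follows. The function and its arguments are illustrative assumptions; they implement the delta-summing and normalization rule described above for one volume's cumulative counter.

```python
def total_data_read(samples, times, t_start):
    """samples: cumulative read counters [s0, s1, ..., sn] for one volume,
    collected at the matching timestamps in `times`.
    Returns the data read between t_start and times[-1], where t_start
    falls within the first interval (times[0], times[1]).
    Per the rule above: sum the per-interval deltas, scaling the first
    delta by the fraction (t1 - t_start) / (t1 - t0) of the first
    interval that falls after t_start (the normalization step)."""
    # Full deltas for every interval after the first.
    total = sum(samples[i] - samples[i - 1] for i in range(2, len(samples)))
    # Normalized share of the first interval's delta.
    first = (samples[1] - samples[0]) * (times[1] - t_start) / (times[1] - times[0])
    return first + total

# Counters 0, 100, 250, 300 at times 0, 10, 20, 30, measured from t01 = 5:
# first interval contributes 100 * (10-5)/(10-0) = 50; the rest add 200.
```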

Monitoring storage capacity reports

Aggregate capacity thresholds and their events

You can configure capacity thresholds for aggregates and events for these thresholds from the DataFabric Manager server. You can set alarms to monitor the capacity and committed space of an aggregate. You can also take corrective actions based on the event generated.

You can configure alarms to send notification whenever an event related to the capacity of an aggregate occurs. For the Aggregate Full threshold, you can also configure an alarm to send notification only when the condition persists over a specified time.

By default, if you have configured an alarm to alert you to an event, the DataFabric Manager server issues the alarm only once per event. You can configure the alarm to repeat until you receive an acknowledgment.

Note: If you want to set an alarm for a specific aggregate, you must create a group with that aggregate as the only member.

You can set the following aggregate capacity thresholds:

Aggregate Full (%) Description: Specifies the percentage at which an aggregate is full.

Note: To reduce the number of Aggregate Full Threshold events generated, you can set an Aggregate Full Threshold Interval. This causes the DataFabric Manager server to generate an Aggregate Full event only if the condition persists for the specified time.

Default value: 90 percent

Event generated: Aggregate Full

Event severity: Error

Corrective Action

Perform one or more of the following actions:

• To free disk space, ask your users to delete files that are no longer needed from volumes contained in the aggregate that generated the event.

• Add one or more disks to the aggregate that generated the event.

Note: After you add a disk to an aggregate, you cannot remove it without first destroying all flexible volumes present in the aggregate to which the disk belongs. You must destroy the aggregate after all the flexible volumes are removed from the aggregate.

• Temporarily reduce the Snapshot reserve. By default, the reserve is 20 percent of disk space. If the reserve is not in use, reducing the reserve can free disk space, giving you more time to add a disk. There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. It is, therefore, important to maintain a large enough reserve for Snapshot copies so that the active file system always has space available to create new files or modify existing ones. For more information about the Snapshot reserve, see the Data ONTAP Data Protection Online Backup and Recovery Guide.

Aggregate Nearly Full (%) Description: Specifies the percentage at which an aggregate is nearly full.

Default value: 80 percent

The value for this threshold must be lower than the value for the Aggregate Full threshold for the DataFabric Manager server to generate meaningful events.

Event generated: Aggregate Almost Full

Event severity: Warning

Corrective action

Perform one or more of the actions mentioned in Aggregate Full.

Aggregate Overcommitted (%) Description: Specifies the percentage at which an aggregate is overcommitted.

Default value: 100 percent

Event generated: Aggregate Overcommitted

Event severity: Error

Corrective action

Perform one or more of the following actions:

• Create new free blocks in the aggregate by adding one or more disks to the aggregate that generated the event.

Note: You must add disks with caution. After you add a disk to an aggregate, you cannot remove it without first destroying all flexible volumes present in the aggregate to which the disk belongs. You must destroy the aggregate after all the flexible volumes are destroyed.

• Temporarily free some already occupied blocks in the aggregate by taking unused flexible volumes offline.

Note: When you take a flexible volume offline, it returns any space it uses to the aggregate. However, when you bring the flexible volume online again, it requires the space again.

• Permanently free some already occupied blocks in the aggregate by deleting unnecessary files.

Aggregate Nearly Overcommitted (%) Description: Specifies the percentage at which an aggregate is nearly overcommitted.

Default value: 95 percent

The value for this threshold must be lower than the value for the Aggregate Overcommitted threshold for the DataFabric Manager server to generate meaningful events.

Event generated: Aggregate Almost Overcommitted

Event severity: Warning

Corrective action

Perform one or more of the actions provided in Aggregate Overcommitted.

Aggregate Snapshot Reserve Nearly Full Threshold (%) Description: Specifies the percentage of the Snapshot copy reserve on an aggregate that you can use before the system generates the Aggregate Snapshots Nearly Full event.

Default value: 80 percent

Event generated: Aggregate Snapshot Reserve Almost Full

Event severity: Warning

Corrective action: None

There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. If you disable the aggregate Snapshot autodelete option, it is important to maintain a large enough reserve.

See the Operations Manager Help for instructions on how to identify Snapshot copies you can delete. For more information about the Snapshot reserve, see the Data ONTAP Data Protection Online Backup and Recovery Guide.

Aggregate Snapshot Reserve Full Threshold (%) Description: Specifies the percentage of the Snapshot copy reserve on an aggregate that you can use before the system generates the Aggregate Snapshots Full event.

Default value: 90 percent

Event generated: Aggregate Snapshot Reserve Full

Event severity: Warning

Corrective action: None

There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them.

Note: A newly created traditional volume is tightly coupled with its containing aggregate, so the capacity of the aggregate determines the capacity of the new traditional volume. Therefore, you should synchronize the capacity thresholds of traditional volumes with the thresholds of their containing aggregates.
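As a sketch of how these thresholds interact, the default values above can be applied to an aggregate's usage percentages as follows. The threshold values, event names, and severities come from the descriptions in this section; the function itself is an illustration, not DataFabric Manager code.

```python
# Default aggregate capacity thresholds, from the descriptions above,
# ordered from most to least severe so the highest crossed threshold wins.
CAPACITY_THRESHOLDS = [
    (90, "Aggregate Full", "Error"),
    (80, "Aggregate Almost Full", "Warning"),
]
COMMITMENT_THRESHOLDS = [
    (100, "Aggregate Overcommitted", "Error"),
    (95, "Aggregate Almost Overcommitted", "Warning"),
]

def classify(used_pct, thresholds):
    """Return (event, severity) for the highest threshold crossed,
    or None if the aggregate is below every threshold."""
    for limit, event, severity in thresholds:
        if used_pct >= limit:
            return event, severity
    return None
```

This ordering also shows why each "nearly" threshold must stay below its "full" counterpart: if both were equal, the Warning-level event could never be generated.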

Related information

Data ONTAP Data Protection Online Backup and Recovery Guide - now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Qtree capacity thresholds and events

The OnCommand console enables you to monitor qtree capacity and set alarms. You can also take corrective actions based on the event generated.

The DataFabric Manager server features thresholds to help you monitor the capacity of qtrees. Quotas must be enabled on the storage systems. You can configure alarms to send notification whenever an event related to the capacity of a qtree occurs.

By default, if you have configured an alarm to alert you to an event, the DataFabric Manager server issues the alarm only once per event. You can configure the alarm to continue to alert you with events until it is acknowledged. For the Qtree Full threshold, you can also configure an alarm to send notification only when the condition persists over a specified period.

Note: If you want to set an alarm for a specific qtree, you must create a group with that qtree as the only member.

You can set the following qtree capacity thresholds:

Qtree Full (%) Description: Specifies the percentage at which a qtree is considered full.

Note: To reduce the number of Qtree Full Threshold events generated, you can set a Qtree Full Threshold Interval to a non-zero value. By default, the Qtree Full Threshold Interval is set to zero. The Qtree Full Threshold Interval specifies the time during which the condition must persist before the event is generated. If the condition persists for the specified amount of time, the DataFabric Manager server generates a Qtree Full event.

• If the threshold interval is 0 seconds or a value less than the volume monitoring interval, the DataFabric Manager server generates Qtree Full events.

• If the threshold interval is greater than the volume monitoring interval, the DataFabric Manager server waits for the specified threshold interval, which includes two or more monitoring intervals, and generates a Qtree Full event only if the condition persisted throughout the threshold interval.

For example, if the monitoring cycle time is 60 seconds and the threshold interval is 90 seconds, the threshold event is generated only if the condition persists for two monitoring intervals.

Default value: 90 percent

Event generated: Qtree Full

Event severity: Error

Corrective action

Perform one or more of the following actions:

• Ask users to delete files that are no longer needed, to free disk space.

• Increase the hard disk space quota for the qtree.

Qtree Nearly Full Threshold (%) Description: Specifies the percentage at which a qtree is considered nearly full.

Default value: 80 percent

Event severity: Warning

Corrective action

Perform one or more of the following actions:

• Ask users to delete files that are no longer needed, to free disk space.
• Increase the hard disk space quota for the qtree.
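The threshold-interval rule described above for Qtree Full events (the same rule applies to Volume Full events) can be sketched as a small decision function. This is an illustration of the documented behavior, not the server's actual implementation.

```python
def should_generate_event(breach_duration_sec, threshold_interval_sec,
                          monitoring_interval_sec):
    """Threshold-interval rule as described above:
    - if the threshold interval is 0, or less than the monitoring
      interval, the event is generated as soon as a breach is seen;
    - otherwise the event is generated only if the condition has
      persisted for the whole threshold interval, which spans two or
      more monitoring cycles."""
    if threshold_interval_sec <= monitoring_interval_sec:
        return True
    return breach_duration_sec >= threshold_interval_sec

# With a 60-second monitoring cycle and a 90-second threshold interval,
# a breach seen in only one cycle is suppressed; one that persists across
# two cycles (120 seconds) generates the event.
```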

Related information

Data ONTAP Data Protection Online Backup and Recovery Guide - now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

User quota thresholds

You can set a user quota threshold that applies to all the user quotas present in a volume or a qtree.

When you configure a user quota threshold for a volume or qtree, the settings apply to all user quotas on that volume or qtree.

The DataFabric Manager server uses the user quota thresholds to monitor the hard and soft quota limits configured in the /etc/quotas file of each storage system.

Volume capacity thresholds and events

The DataFabric Manager server features thresholds to help you monitor the capacity of flexible and traditional volumes. You can configure alarms to send notification whenever an event related to the capacity of a volume occurs. You can also take corrective actions based on the event generated. For the Volume Full threshold, you can configure an alarm to send notification only when the condition persists over a specified period.

By default, if you have configured an alarm to alert you to an event, the DataFabric Manager server issues the alarm only once per event. You can configure the alarm to repeat until it is acknowledged.

Note: If you want to set an alarm for a specific volume, you must create a group with that volume as the only member.

You can set the following volume capacity thresholds:

Volume Full Threshold (%) Description: Specifies the percentage at which a volume is considered full.

Note: To reduce the number of Volume Full Threshold events generated, you can set the Volume Full Threshold Interval to a non-zero value. By default, the Volume Full Threshold Interval is set to zero.

The Volume Full Threshold Interval specifies the time during which the condition must persist before the event is triggered. Therefore, if the condition persists for the specified time, the DataFabric Manager server generates a Volume Full event.

• If the threshold interval is 0 seconds or a value less than the volumemonitoring interval, DataFabric Manager server generates theVolume Full events.

• If the threshold interval is greater than the volume monitoringinterval, DataFabric Manager server waits for the specifiedthreshold interval, which includes two or more monitoring intervals,and generates a Volume Full event only if the condition persistedthroughout the threshold interval.

For example, if the monitoring cycle time is 60 seconds and thethreshold interval is 90 seconds, the threshold event is generated only ifthe condition persists for two monitoring intervals.
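The two interval rules above can be modeled in a few lines. The following Python sketch illustrates the suppression logic only; it is not DataFabric Manager server code, and the function and parameter names are invented:

```python
import math

def should_generate_volume_full(samples_over_threshold,
                                monitoring_interval,
                                threshold_interval):
    """Model of the Volume Full Threshold Interval rule.

    samples_over_threshold: consecutive monitoring samples in which the
    volume's used space was at or above the Volume Full Threshold.
    Intervals are in seconds. Returns True when a Volume Full event
    should be generated.
    """
    if threshold_interval <= monitoring_interval:
        # 0 seconds, or less than the monitoring interval:
        # the event fires on the first sample over the threshold.
        return samples_over_threshold >= 1
    # Otherwise the condition must persist for the whole threshold
    # interval, which spans two or more monitoring cycles.
    required = math.ceil(threshold_interval / monitoring_interval)
    return samples_over_threshold >= required

# 60 s monitoring cycle with a 90 s threshold interval requires the
# condition to hold for two consecutive samples, as in the example above.
```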

Default value: 90

Event generated: Volume Full

Event severity: Error

Corrective action

Perform one or more of the following actions:

• Ask your users to delete files that are no longer needed, to free disk space.

• For flexible volumes containing enough aggregate space, you can increase the volume size.

• For traditional volumes containing aggregates with limited space, you can increase the size of the volume by adding one or more disks to the aggregate.

Note: Add disks with caution. After you add a disk to an aggregate, you cannot remove it without destroying the volume and its aggregate.

• For traditional volumes, temporarily reduce the Snapshot copy reserve. By default, the reserve is 20 percent of the disk space. If the reserve is not in use, reducing the reserve frees disk space, giving you more time to add a disk. There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. Therefore, it is important to maintain a large enough reserve for Snapshot copies. By maintaining the reserve for Snapshot copies, the active file system always has space available to create new files or modify existing ones. For more information about the Snapshot copy reserve, see the Data ONTAP Data Protection Online Backup and Recovery Guide.

Volume Nearly Full Threshold (%)

Description: Specifies the percentage at which a volume is considered nearly full.

Default value: 80. The value for this threshold must be lower than the value for the Volume Full Threshold in order for DataFabric Manager server to generate meaningful events.

Event generated: Volume Almost Full

Event severity: Warning

Corrective action

Perform one or more of the actions mentioned in Volume Full.

Volume Space Reserve Nearly Depleted Threshold (%)

Description: Specifies the percentage at which a volume is considered to have consumed most of its reserved blocks. This option applies to volumes with LUNs, Snapshot copies, no free blocks, and a fractional overwrite reserve of less than 100%. A volume that crosses this threshold is getting close to having write failures.

Default value: 80

Event generated: Volume Space Reservation Nearly Depleted

Event severity: Warning

Volume Space Reserve Depleted Threshold (%)

Description: Specifies the percentage at which a volume is considered to have consumed all its reserved blocks. This option applies to volumes with LUNs, Snapshot copies, no free blocks, and a fractional overwrite reserve of less than 100%. A volume that has crossed this threshold is getting dangerously close to having write failures.

Default value: 90

Event generated: Volume Space Reservation Depleted

Event severity: Error

When the status of a volume returns to normal after one of the preceding events, events with severity 'Normal' are generated. Normal events do not generate alarms or appear in default event lists, which display events of warning or worse severity.

Volume Quota Overcommitted Threshold (%)

Description: Specifies the percentage at which a volume is considered to have consumed the whole of the overcommitted space for that volume.

Default value: 100


Event generated: Volume Quota Overcommitted

Event severity: Error

Corrective action

Perform one or more of the following actions:

• Create new free blocks by increasing the size of the volume that generated the event.

• Permanently free some of the occupied blocks in the volume by deleting unnecessary files.

Volume Quota Nearly Overcommitted Threshold (%)

Description: Specifies the percentage at which a volume is considered to have consumed most of the overcommitted space for that volume.

Default value: 95

Event generated: Volume Quota Almost Overcommitted

Event severity: Warning

Corrective action

Perform one or more of the actions mentioned in Volume Quota Overcommitted.
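To illustrate how the two overcommitment thresholds relate, the sketch below treats the committed percentage as the sum of the qtree hard disk limits divided by the volume size, and maps it to the events described above. This is an assumed model for illustration, not DataFabric Manager server code; all names are invented:

```python
def quota_committed_pct(qtree_hard_limits_gb, volume_size_gb):
    """Percentage of the volume's space committed through qtree quotas."""
    return 100.0 * sum(qtree_hard_limits_gb) / volume_size_gb

def overcommit_event(pct, nearly=95.0, full=100.0):
    """Map a committed percentage to the event the thresholds describe,
    using the default values of 95 (nearly) and 100 (full)."""
    if pct >= full:
        return "Volume Quota Overcommitted"         # severity: Error
    if pct >= nearly:
        return "Volume Quota Almost Overcommitted"  # severity: Warning
    return None

# Three 40 GB qtree hard limits in a 100 GB volume commit 120% of its
# space, which crosses the Volume Quota Overcommitted threshold.
```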

Volume Growth Event Minimum Change (%)

Description: Specifies the minimum change in volume size (as a percentage of total volume size) that is acceptable. If the change in volume size is more than the specified value, and the growth is abnormal in relation to the volume-growth history, DataFabric Manager server generates a Volume Growth Abnormal event.

Default value: 1

Event generated: Volume Growth Abnormal

Volume Snap Reserve Full Threshold (%)

Description: Specifies the value (percentage) at which the space that is reserved for taking volume Snapshot copies is considered full.

Default value: 90

Event generated: Volume Snap Reserve Full

Event severity: Error

Corrective action: None

There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. If you disable the volume Snapshot autodelete option, it is important to maintain a large enough reserve. Maintaining the reserve ensures that there is always space available to create new files or modify existing ones. For instructions on how to identify Snapshot copies that you can delete, see the Operations Manager Help.

User Quota Full Threshold (%)

Description: Specifies the value (percentage) at which a user is considered to have consumed all the allocated space (disk space or files used) as specified by the user quota (the hard limit in the /etc/quotas file). If this limit is exceeded, DataFabric Manager server generates a User Disk Space Quota Full event or a User Files Quota Full event.

Default value: 90

Event generated: User Quota Full

User Quota Nearly Full Threshold (%)

Description: Specifies the value (percentage) at which a user is considered to have consumed most of the allocated space (disk space or files used) as specified by the user quota (the hard limit in the /etc/quotas file). If this limit is exceeded, DataFabric Manager server generates a User Disk Space Quota Almost Full event or a User Files Quota Almost Full event.

Default value: 80

Event generated: User Quota Almost Full

Volume No First Snapshot Threshold (%)

Description: Specifies the value (percentage) at which a volume is considered to have consumed all the free space for its space reservation. This is the space that the volume needs when the first Snapshot copy is created.

This option applies to volumes that contain space-reserved files, no Snapshot copies, a fractional overwrite reserve set to greater than 0, and where the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.

Default value: 90

Event generated: Volume No First Snapshot

Volume Nearly No First Snapshot Threshold (%)

Description: Specifies the value (percentage) at which a volume is considered to have consumed most of the free space for its space reservation. This is the space that the volume needs when the first Snapshot copy is created.

This option applies to volumes that contain space-reserved files, no Snapshot copies, a fractional overwrite reserve set to greater than 0, and where the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.

Default value: 80


Event generated: Volume Almost No First Snapshot

Note: When a traditional volume is created, it is tightly coupled with its containing aggregate so that its capacity is determined by the capacity of the aggregate. For this reason, you should synchronize the capacity thresholds of traditional volumes with the thresholds of their containing aggregates.

Related information

Data ONTAP Data Protection Online Backup and Recovery Guide - now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Page descriptions

Aggregates Capacity report

The Aggregates Capacity report displays information about the used and available space in an aggregate and its capacity.

Note: To display the charts and icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Chart

You can customize the data displayed in the chart, change the subtype and format of the chart, and export the data from the chart.

Report details

Aggregate Displays the name of the aggregate.

Storage System Displays the name of the storage system that contains the aggregate.

Available Capacity (GB) Displays the amount of space available for data in the aggregate. By default, this column is hidden.

Total Capacity (GB) Displays the total space in the aggregate.

Used Capacity (%) Displays the percentage of space used for data in the aggregate.

Snap Reserve Total (GB) Displays the size of the Snapshot reserve for this aggregate.

Snap Reserve Used (%) Displays the percentage of the Snapshot reserve currently in use.

Aggregate Used Capacity (GB) Displays the amount of space used for data in the aggregate. By default, this column is hidden.

Aggregate Available Capacity (%) Displays the percentage of free space for data in the aggregate. By default, this column is hidden.

Snapshot Autodelete Specifies whether a Snapshot copy is deleted to free storage space when a write to the volume fails due to lack of space in the aggregate. By default, this column is hidden.

Snap Reserve Used (GB) Displays the amount of the Snapshot reserve currently in use. By default, this column is hidden.

Snapshots Disabled Specifies whether Snapshot copies are disabled for this aggregate. If the status is displayed as Off, then Snapshot copies are not disabled. By default, this column is hidden.

Status Displays the current status of the aggregate based on the events generated for the aggregate. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Full Threshold (%) Displays the percentage of physical space that is used in the aggregate before the system generates the Aggregate Full event.

Nearly Full Threshold (%) Displays the percentage of physical space that is used in the aggregate before the system generates the Aggregate Almost Full event.

Overcommitted Threshold (%) Displays the percentage of physical space that is used in the aggregate before the system generates the Aggregate Overcommitted event. By default, this column is hidden.

Nearly Overcommitted Threshold (%) Displays the percentage of physical space that is used in the aggregate before the system generates the Aggregate Nearly Overcommitted event. By default, this column is hidden.

Bytes Used (%) Displays the percentage of space in the aggregate that is used by flexible volumes. By default, this column is hidden.

Bytes Committed (%) Displays the percentage of space in the aggregate that is committed to flexible volumes. By default, this column is hidden.

Related references

Types of object status on page 305

Reports tab customization on page 304

Reports management on page 303


Qtrees Capacity report

The Qtrees Capacity report displays information about the status of a qtree, used and available space in a qtree, and its capacity. You can monitor and manage only qtrees created by the user. Therefore, the default qtree, qtree 0, is not monitored or managed.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Qtree Displays the name of the qtree. The icon indicates whether the qtree is clustered or nonclustered.

Storage Server Displays the name of the storage server that contains the qtree. The storage server can be a storage controller or a vFiler unit that contains the qtree.

Volume Displays the name of the volume that contains the qtree.

Status Displays the current status of the qtree based on the events generated for the qtree. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Used (%) Displays the percentage of space used in the qtree.

Soft Limit (GB) Displays the soft limit on disk space as specified in the /etc/quotas file of the storage system. By default, this column is hidden.

Disk Space Limit (GB) Displays the hard limit on disk space as specified in the /etc/quotas file of the storage system. By default, this column is hidden.

Available Capacity (%) Displays the percentage of available space in the qtree that is not committed. By default, this column is hidden.

Full Threshold (%) Displays the limit, as a percentage, at which a qtree is considered full.

Nearly Full Threshold (%) Displays the limit, as a percentage, at which a qtree is considered nearly full.

Bytes Used (%) Displays the percentage of storage space used by the qtree. By default, this column is hidden.

Files Used (%) Displays the percentage of space used by files in the qtree. By default, this column is hidden.

Related references

Types of object status on page 305

Reports tab customization on page 304

Reports management on page 303


Storage Systems Capacity report

The Storage Systems Capacity report displays information about the capacity and the space used in a storage system.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Type Displays whether the storage system is a cluster or a controller. The controller can be a stand-alone system, a clustered system, or part of an HA pair. By default, this column is hidden.

Status Displays the current status of the storage system based on the events generated for the storage system. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Storage System Displays the name of the storage system.

Volume Used Capacity (GB) Displays the amount of space used for data in all the volumes. By default, this column is hidden.

Volume Total Capacity (GB) Displays the total space available for data in all the volumes.

Volume Used Capacity (%) Displays the percentage of space used for data in all the volumes.

Aggregate Used Capacity (GB) Displays the amount of space used for data in an aggregate. By default, this column is hidden.

Aggregate Total Capacity (GB) Displays the total space available for data in the aggregate.

Aggregate Used Capacity (%) Displays the percentage of space used for data in the aggregate.

Related references

Types of object status on page 305

Reports tab customization on page 304

Reports management on page 303


User Quotas Capacity report

The User Quotas Capacity report displays information about the capacity of the quota users and the quota user groups that exist on the storage systems.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

User Name Displays the name of the user or user group.

The user name contains a comma-separated list of users when a storage system is configured to track quotas of multiple users as a single entity. For example, the /etc/quotas file of storage system F1 contains the following entry:

joe,finance\joe user@/vol/vol0/fin 50M - -

In this case, the DataFabric Manager server displays joe, finance\joe in the User Name column.

When the user name of a storage system cannot be reported, the DataFabric Manager server reports one of the following:

• User identifier (UID)
• Security identifier (SID)
• Group identifier (GID)

File System Displays the name, path, and quota information of the volumes or qtrees on which the user quota or group quota is enabled.

Status Displays the status of a user's quotas. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

If the status for a user is not Normal, an event related to the user's quotas has occurred. For details about the events, you must go to the Events tab.

Disk Space Used (MB) Displays the total amount of disk space used. By default, this column is hidden.

Disk Space Threshold (MB) Displays the disk space threshold as specified in the /etc/quotas file of the storage system.

Note: This threshold is different from the user quota thresholds that you can configure in the DataFabric Manager server.

Disk Space Soft Limit (MB) Displays the soft limit on disk space as specified in the /etc/quotas file of the storage system. By default, this column is hidden.


Disk Space Hard Limit (MB) Displays the hard limit on disk space as specified in the /etc/quotas file of the storage system. By default, this column is hidden.

Disk Space Used (%) Displays the total percentage of disk space used.

Files Used Displays the total number of files used. By default, this column is hidden.

Files Soft Limit Displays the soft limit on files as specified in the /etc/quotas file of the storage system. By default, this column is hidden.

Files Hard Limit (Million) Displays the hard limit on files as specified in the /etc/quotas file of the storage system. By default, this column is hidden.

Files Used (%) Displays the percentage of files used.

The percentage of files used is calculated using the following formula:

Percentage = (Files Used / Files Hard Limit) x 100

SID Displays the identifier of the user or user group.

The Security Identifier (SID) contains a comma-separated list of identifiers of users when a storage system is configured to track quotas of multiple users as a single entity. For Windows users or user groups, this identifier specifies the SID of the user or user group. For other platforms, the identifier specifies the Relative Identifier (RID) on the storage system.

Nearly Full Threshold (%) Displays the percentage value at which a user is likely to consume most of the allocated space (disk space or files used) as specified by the user's quota (hard limit in the /etc/quotas file).

If this threshold is crossed, the DataFabric Manager server generates a User Disk Space Quota Almost Full event when the disk space is consumed or a User Files Quota Almost Full event when the file space is consumed.

Full Threshold (%) Displays the percentage value at which a user is likely to consume the entire allocated space (disk space or files used) as specified by the user's quota (hard limit in the /etc/quotas file).

If this threshold is crossed, the DataFabric Manager server generates a User Disk Space Quota Full event when the disk space is consumed or a User Files Quota Full event when the file space is consumed.
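The Files Used (%) formula shown in this report's details is straightforward to express directly. A hypothetical Python version (the function name is invented):

```python
def files_used_pct(files_used, files_hard_limit):
    """Percentage of files used = (Files Used / Files Hard Limit) x 100."""
    return (files_used / files_hard_limit) * 100

# 7,500 files used against a 10,000-file hard limit is 75% used.
```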

Related references

Types of object status on page 305

Reports tab customization on page 304

Reports management on page 303


Volumes Capacity report

The Volumes Capacity report displays information about the capacity of the volumes and the used and available space in these volumes.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Volume Displays the name of the volume. The icon indicates whether the volume is clustered or nonclustered.

Aggregate Displays the name of the aggregate that contains the volume.

Storage Server Displays the name of the storage server that contains the volume. The storage server can be a storage controller, Vserver, or a vFiler unit.

Available Capacity (GB) Displays the amount of space available for data in the volume. By default, this column is hidden.

Used Capacity (GB) Displays the amount of space that is used for data, in GB, in the volume. By default, this column is hidden.

Total Capacity (GB) Displays the total space available for data in the volume.

Used Capacity (%) Displays the amount of space that is used for data, as a percentage, in the volume.

Used Snapshot Space (GB) Displays the amount of space used to store Snapshot copies in the volume. This value can be larger than the specified size of the Snapshot reserve.

Used Snapshot Space (%) Displays the percentage of space used to store Snapshot copies in the volume. This value can be larger than the specified size of the Snapshot reserve.

Available Capacity (%) Displays the percentage of space available for data in the volume. By default, this column is hidden.

Status Displays the current status of the volume based on the events generated for the volume. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Full Threshold (%) Displays the limit, as a percentage, at which a volume is considered full.

Nearly Full Threshold (%) Displays the limit, as a percentage, at which a volume is considered nearly full.

Files Used (%) Displays the percentage of files used by the volume. By default, this column is hidden.


Related references

Types of object status on page 305

Reports tab customization on page 304

Reports management on page 303

Aggregates Committed Capacity report

The Aggregates Committed Capacity report displays information about the type of the aggregate, and the space committed to flexible volumes in the aggregate.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Aggregate Displays the name of the aggregate.

Storage System Displays the name of the storage system that contains the aggregate.

Type Displays the type of the aggregate:

Traditional An aggregate that contains only one traditional volume. A traditional aggregate is created automatically in response to a volume create request on the storage system.

Aggregate An aggregate that contains one or more flexible volumes, RAID groups, and disks.

Striped Aggregate An aggregate that is striped across multiple nodes, allowing its volumes to also be striped across multiple nodes. Striped aggregates are made up of multiple member aggregates.

Bytes Committed (GB) Displays the amount of space committed to flexible volumes, in GB, in the aggregate.

Bytes Committed (%) Displays the amount of space committed to flexible volumes, as a percentage, in the aggregate.

Status Displays the current status of the aggregate based on the events generated for the aggregate. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Related references

Types of object status on page 305

Reports tab customization on page 304

Reports management on page 303


Volumes Committed Capacity report

The Volumes Committed Capacity report displays information about the capacity and space used in the volume, and the space committed in the qtrees.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Volume Displays the name of the volume. The icon indicates whether the volume is clustered or nonclustered.

Aggregate Displays the name of the aggregate that contains the volume.

Storage Server Displays the name of the storage server that contains the volume. The storage server can be a storage controller, Vserver, or a vFiler unit.

Quota Overcommitted Space (GB) Displays the amount of physical space in the qtrees that can be used before the system generates the Volume Quota Overcommitted event.

Used Capacity (GB) Displays the amount of space used for data in the volume.

Committed (GB) Displays the amount of space, in GB, committed in the qtrees. By default, this column is hidden.

Committed (%) Displays the amount of space, as a percentage, committed in the qtrees.

Total/Max Size (GB) • Displays the total amount of storage allocated for this volume, if the volume autosize option is disabled on this volume.

• Displays the maximum size to which the volume can grow, if the volume autosize option is enabled on this volume.

Related references

Reports tab customization on page 304

Reports management on page 303

Aggregates Capacity Growth report

The Aggregates Capacity Growth report displays information about the space used in and the growth in capacity of the aggregate.

Note: To display the charts and icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Chart

You can customize the data displayed in the chart, change the subtype and format of the chart, and export the data from the chart.

Report details

Aggregate Displays the name of the aggregate.

Storage System Displays the name of the storage system that contains the aggregate.

Data Days To Full Displays the number of days required for the aggregate to reach the Aggregate Full threshold (in terms of capacity), based on the daily growth rate (GB) value.

Daily Growth Rate (GB) Displays, in GB, the amount of disk space used in the aggregate if the amount of change between the last two samples continues for 24 hours. The default sample collection interval is four hours.

For example, if an aggregate uses 10 GB of disk space at 2 pm and 12 GB at 6 pm, the daily growth rate (GB) for this aggregate is 2 GB.

Daily Growth Rate (%) Displays the percentage of the rate of growth in capacity. This is determined by dividing the daily growth rate by the total amount of space in the aggregate.

Used Capacity (%) Displays the percentage of the total space currently in use in the aggregate.

Committed (%) Displays the amount of space, as a percentage, committed to flexible volumes in the aggregate.

Committed (GB) Displays the amount of space, in GB, committed to flexible volumes in the aggregate. By default, this column is hidden.

Total Capacity (GB) Displays the total amount of space in the aggregate. By default, this column is hidden.

Used Capacity (GB) Displays the amount of used space in the aggregate. By default, this column is hidden.

Related references

Reports tab customization on page 304

Reports management on page 303


Qtrees Capacity Growth report

The Qtrees Capacity Growth report displays information about the space usage and the growth in capacity of the qtree. You can monitor and manage only qtrees created by the user. Therefore, the default qtree, qtree 0, is not monitored or managed.

Note: To display the charts and icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Chart

You can customize the data displayed in the chart, change the subtype and format of the chart, and export the data from the chart.

Report details

Qtree Displays the name of the qtree. The icon indicates whether the qtree is clustered or nonclustered.

Storage Server Displays the name of the storage server that contains the qtree. The storage server can be a storage controller or a vFiler unit that contains the qtree.

Volume Displays the name of the volume that contains the qtree.

Data Days to Full Displays the estimated amount of time left before this qtree runs out of storage space.

If the time is less than one day, the current storage status of the qtree is displayed.

Daily Growth Rate (GB) Displays, in GB, the amount of disk space used in the qtree if the amount of change between the last two samples continues for 24 hours.

Daily Growth Rate (%) Displays the percentage of change in the disk space used in the qtree if the amount of change between the last two samples continues for 24 hours.

Related references

Reports tab customization on page 304

Reports management on page 303


Volumes Capacity Growth report

The Volumes Capacity Growth report displays information about the space usage and the growth in capacity of the volume.

Note: To display the charts and icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Chart

You can customize the data displayed in the chart, change the subtype and format of the chart, and export the data from the chart.

Report details

Volume Displays the name of the volume. The icon indicates whether the volume is clustered or nonclustered.

Aggregate Displays the name of the aggregate that contains the volume.

Storage Server Displays the name of the storage server that contains the volume. The storage server can be a storage controller, Vserver, or a vFiler unit.

Daily Growth Rate (GB) Displays, in GB, the amount of disk space used in the volume, if the amount of change between the last two samples continues for 24 hours.

Data Days To Full (GB) Displays the estimated time left before this volume runs out of storage space. If the estimated time is less than one day, the current storage status of the volume is displayed.

Daily Growth Rate (%) Displays the percentage of change in the used space in the volume reserve, if the change between the last two samples continues for 24 hours.

Related references

Reports tab customization on page 304

Reports management on page 303

Volumes Space Reservation report

The Volumes Space Reservation report displays information about the space reservation and the space-reserved files in the volume.

Note: To display the icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

The following details are displayed for volumes that have space-reserved files.


Volume Displays the name of the volume. An icon indicates whether the volume is clustered or nonclustered.

Aggregate Displays the name of the aggregate that contains the volume.

Storage Server Displays the name of the storage server that contains the volume. The storage server can be a storage controller, Vserver, or a vFiler unit.

Reserved Files Total Size (GB)

Displays the total size of the space-reserved files in this volume.

Fractional Reserve (%)

Controls the size of the overwrite reserve. If the fractional reserve is less than 100 percent, the reserved space for all the space-reserved files in that volume is reduced to the fractional reserve percentage. By default, this column is hidden.

Reservation Used (GB)

Displays the total space reservation used for overwrites in this volume. By default, this column is hidden.

Reservation Available (GB)

Displays the amount of free space remaining in the space reservation. By default, this column is hidden.

Space Reserve Total (GB)

Displays the total size of the space reserve for this volume.

Space Reservation Used (GB)

Displays the total space reserve currently in use.

Status Displays the current status of the volume based on the events generated for the volume. The status can be Normal, Warning, Error, Critical, Emergency, or Unknown.

Space Reserve Depleted Threshold (%)

Displays the percentage of the threshold at which a volume is considered to have consumed all of its reserved space. A volume that has crossed this threshold is getting dangerously close to having write failures.

Space Reserve Nearly Depleted Threshold (%)

Displays the percentage of the threshold at which the space reserve is likely to be nearly depleted. By default, this column is hidden.

Space Reserve Used (%)

Displays the percentage of the total space reserve currently in use.
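Fractional reserve scales the overwrite reserve linearly with the total size of the space-reserved files. A minimal sketch of that relationship (the function name and the example values are hypothetical):

```python
def overwrite_reserve_gb(reserved_files_total_gb, fractional_reserve_pct):
    """Space set aside for overwrites of space-reserved files.

    At 100 percent fractional reserve, the full size of the
    space-reserved files is held back; lower percentages reduce
    the reserve linearly, as described for the Fractional Reserve
    column above.
    """
    return reserved_files_total_gb * fractional_reserve_pct / 100.0

# 200 GB of space-reserved files with a 50 percent fractional reserve
print(overwrite_reserve_gb(200, 50))  # 100.0
```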

Related references

Types of object status on page 305

Reports tab customization on page 304

Reports management on page 303


Aggregates Space Savings report

The Aggregates Space Savings report displays information about the space savings achieved in the aggregate through deduplication.

Note: To display the charts and icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Chart

You can customize the data displayed in the chart, change the subtype and format of the chart, and export the data from the chart.

Report details

The following details are displayed when space savings is enabled on the aggregate.

Aggregate Displays the name of the aggregate.

Storage System Displays the name of the storage system that contains the aggregate.

Physical Used (GB) Displays the active file system data of all the deduplication-enabled volumes in the aggregate.

Dedupe Space Savings (GB)

Displays the savings achieved in an aggregate through deduplication.

Effective Used (GB) Displays the active file system data of all the deduplicated volumes in the aggregate without deduplication space savings.

Dedupe Space Savings (%)

Displays the percentage of savings achieved in the aggregate through deduplication.

Volume Enabled Displays the number of deduplication-enabled volumes in the aggregate.

Available Capacity (GB) Displays the amount of space available for data in the aggregate.

Total Capacity (GB) Displays the total amount of storage allocated for this aggregate.

Related references

Reports tab customization on page 304

Reports management on page 303


Volumes Space Savings report

The Volumes Space Savings report displays information about the space savings achieved in the volume through deduplication.

Note: To display the charts and icons in a report, you must ensure that a DNS mapping is established between the client machine from which you are starting the Web connection and the host name of the system on which the DataFabric Manager server is installed.

Chart

You can customize the data displayed in the chart, change the subtype and format of the chart, and export the data from the chart.

Report details

The following details are displayed when space savings is enabled on the volume.

Volume Displays the name of the volume. An icon indicates whether the volume is clustered or nonclustered.

Storage Server Displays the name of the storage server that contains the volume. The storage server can be a storage controller, Vserver, or a vFiler unit.

Dedupe Status Specifies whether deduplication is enabled or disabled on the volume.

Last Dedupe Run Displays the timestamp of the last deduplication operation run on the volume.

Physical Used (GB) Displays the active file system data in the volume with deduplication space savings.

Dedupe Space Savings (GB)

Displays the savings achieved in a volume through deduplication.

Effective Used (GB) Displays the active file system data in the volume without deduplication space savings (that is, if deduplication has not been enabled on the volume).

Space Savings (%) Displays the percentage of savings achieved in a volume through deduplication.

Available Capacity (GB)

Displays the amount of space available for data in the volume.

Total Capacity (GB) Displays the total amount of storage allocated for this volume.
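The three dedupe columns are related: Effective Used is what the data would occupy without deduplication, Physical Used is what it actually occupies on disk, and the savings figures are the difference. A sketch of that relationship (the numbers are hypothetical, and expressing the percentage against the effective size is an assumption about the report's convention):

```python
def dedupe_savings(physical_used_gb, effective_used_gb):
    """Derive dedupe savings (GB and %) from physical and effective usage."""
    savings_gb = effective_used_gb - physical_used_gb
    # Assumed convention: savings expressed against the effective
    # (pre-deduplication) size of the data.
    savings_pct = savings_gb / effective_used_gb * 100.0
    return savings_gb, savings_pct

# 100 GB of logical data stored in 60 GB on disk
print(dedupe_savings(60, 100))  # (40, 40.0)
```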

Related references

Reports tab customization on page 304


Reports management on page 303

Datasets Average Space Usage Metric report

The Datasets Average Space Usage Metric report provides the average space utilization for each node, computed at specified intervals—for example, one hour. The computed metrics are consolidated and reported for a specific period—for example, one month.

Dataset Displays the name of the dataset.

Storage Service Displays the name of the storage service associated with the dataset at the time of computing the metrics. If the storage service changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a storage service is not associated with the dataset.

Protection Policy Displays the name of the protection policy associated with the dataset at the time of computing the metrics. If the protection policy changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a protection policy is not associated with the dataset.

Dataset Node Displays the name of the dataset node.

Provisioning Policy

Displays the name of the provisioning policy associated with the dataset at the time of computing the metrics. If the provisioning policy changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a provisioning policy is not associated with the dataset.

Effective Used Data Space

Displays the space used by user data in the dataset node, without accounting for data sharing.

Physical Used Data Space

Displays the actual space used by user data in the dataset node, accounting for the space saved by data sharing.

Used Snapshot Space

Displays the physical space used by the volume Snapshot copies in the dataset node.

Total Data Space Displays the space allocated to the dataset's primary node. The field is blank for non-primary nodes.

Snapshot Reserve Displays the space allocated for Snapshot copies in the dataset's primary node. The field is blank for non-primary nodes.

Total Space Displays the space allocated for data and Snapshot copies in the dataset's primary node. The field is blank for non-primary nodes.

Guaranteed Space Displays the physical space allocated to the dataset node.

Metric Period Displays the period (in days) for which metrics are calculated for the dataset.

Deleted Displays the time at which the dataset is deleted.


Comments Displays the custom comments specified by the user. If custom comments change during the reporting period, both the new and the old comments are displayed.

Notes Displays any discrepancy in metrics computation. The field value is overcharge if the dataset has one or more qtrees in the primary node. The field value is partial if data of some volume does not exist for a period. The field is blank if there are no issues.

Related references

Reports tab customization on page 304

Reports management on page 303

Datasets Maximum Space Usage Metric report

The Datasets Maximum Space Usage Metric report provides the maximum space utilization for each node, computed at specified intervals—for example, one hour. The computed metrics are consolidated and reported for a specific period—for example, one month.

Dataset Displays the name of the dataset.

Storage Service Displays the name of the storage service associated with the dataset at the time of computing the metrics. If the storage service changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a storage service is not associated with the dataset.

Protection Policy Displays the name of the protection policy associated with the dataset at the time of computing the metrics. If the protection policy changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a protection policy is not associated with the dataset.

Dataset Node Displays the name of the dataset node.

Provisioning Policy

Displays the name of the provisioning policy associated with the dataset at the time of computing the metrics. If the provisioning policy changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a provisioning policy is not associated with the dataset.

Effective Used Data Space

Displays the space used by user data in the dataset node, without accounting for data sharing.

Physical Used Data Space

Displays the actual space used by user data in the dataset node, accounting for the space saved by data sharing.

Used Snapshot Space

Displays the physical space used by the volume Snapshot copies in the dataset node.

Total Data Space Displays the space allocated in the dataset's primary node. The field is blank for non-primary nodes.


Snapshot Reserve Displays the space allocated for Snapshot copies in the dataset's primary node. The field is blank for non-primary nodes.

Total Space Displays the space allocated for data and Snapshot copies in the dataset's primary node. The field is blank for non-primary nodes.

Guaranteed Space Displays the physical space allocated to the dataset node.

Metric Period Displays the period (in days) for which metrics are calculated for the dataset.

Deleted Displays the time at which the dataset is deleted.

Comments Displays the custom comments specified by the user. If custom comments change during the reporting period, both the new and the old comments are displayed.

Notes Displays any discrepancy in metrics computation. The field value is overcharge if the dataset has one or more qtrees in the primary node. The field value is partial if data of some volume does not exist for a period. The field is blank if there are no issues.

Related references

Reports tab customization on page 304

Reports management on page 303

Datasets IO Usage Metric report

The Datasets IO Usage Metric report provides the user data read and written for each dataset node for a specific period—for example, one week. The consolidated sample is reported for a specific period—for example, one month.

Dataset Displays the name of the dataset.

Storage Service Displays the name of the storage service associated with the dataset at the time of computing the metrics. If the storage service changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a storage service is not associated with the dataset.

Protection Policy Displays the name of the protection policy associated with the dataset at the time of computing the metrics. If the protection policy changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a protection policy is not associated with the dataset.

Dataset Node Displays the name of the dataset node.

Provisioning Policy

Displays the name of the provisioning policy associated with the dataset at the time of computing the metrics. If the provisioning policy changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a provisioning policy is not associated with the dataset.


Data Read Displays the total data read by the user from all volumes of the dataset node.

Data Written Displays the total data written by the user to all volumes of the dataset node.

Metric Period Displays the period (in days) for which metrics are calculated for the dataset.

Deleted Displays the time at which the dataset is deleted.

Notes Displays any discrepancy in metrics computation. The field value is overcharge if the dataset has one or more qtrees in the primary node. The field value is partial if data of some volume does not exist for a period. The field is blank if there are no issues.

Comments Displays the custom comments specified by the user. If custom comments change during the reporting period, both the new and the old comments are displayed.

Related references

Reports tab customization on page 304

Reports management on page 303

Datasets Average Space Usage Metric Samples report

The Datasets Average Space Usage Metric Samples report provides the average space utilization for each node, computed at specified intervals. The computed metrics are consolidated and reported at the intervals at which the samples are collected. For example, if the interval is set to one hour, the report can collect and display the average space utilization for a dataset at each hour.

Dataset Displays the name of the dataset.

Storage Service Displays the name of the storage service associated with the dataset at the time of computing the metrics. If the storage service changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a storage service is not associated with the dataset.

Protection Policy Displays the name of the protection policy associated with the dataset at the time of computing the metrics. If the protection policy changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a protection policy is not associated with the dataset.

Dataset Node Displays the name of the dataset node.

Provisioning Policy

Displays the name of the provisioning policy associated with the dataset at the time of computing the metrics. If the provisioning policy changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a provisioning policy is not associated with the dataset.

Timestamp Displays the time at which the sample is collected.


Effective Used Data Space

Displays the space used by user data in the dataset node, without accounting for data sharing.

Physical Used Data Space

Displays the actual space used by user data in the dataset node, accounting for the space saved by data sharing.

Used Snapshot Space

Displays the physical space used by the volume Snapshot copies in the dataset node.

Total Data Space Displays the space allocated in the dataset's primary node. The field is blank for non-primary nodes.

Snapshot Reserve Displays the space allocated for Snapshot copies in the dataset's primary node. The field is blank for non-primary nodes.

Total Space Displays the space allocated for data and Snapshot copies in the dataset's primary node. The field is blank for non-primary nodes.

Guaranteed Space Displays the physical space allocated to the dataset node.

Deleted Displays the time at which the dataset is deleted.

Notes Displays any discrepancy in metrics computation. The field value is overcharge if the dataset has one or more qtrees in the primary node. The field value is partial if data of some volume does not exist for a period. The field is blank if there are no issues.

Comments Displays the custom comments specified by the user. If custom comments change during the reporting period, both the new and the old comments are displayed.

Related references

Reports tab customization on page 304

Reports management on page 303

Datasets Maximum Space Usage Metric Samples report

The Datasets Maximum Space Usage Metric Samples report provides the maximum space utilization for each node, computed at specified intervals. The computed metrics are consolidated and reported at the intervals at which the samples are collected. For example, if the interval is set to one hour, the report can collect and display the maximum space utilization for a dataset at each hour.

Dataset Displays the name of the dataset.

Storage Service Displays the name of the storage service associated with the dataset at the time of computing the metrics. If the storage service changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a storage service is not associated with the dataset.


Protection Policy Displays the name of the protection policy associated with the dataset at the time of computing the metrics. If the protection policy changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a protection policy is not associated with the dataset.

Dataset Node Displays the name of the dataset node.

Provisioning Policy Displays the name of the provisioning policy associated with the dataset at the time of computing the metrics. If the provisioning policy changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a provisioning policy is not associated with the dataset.

Timestamp Displays the time at which the sample is collected.

Effective Used Data Space

Displays the space used by user data in the dataset node, without accounting for data sharing.

Physical Used Data Space

Displays the actual space used by user data in the dataset node, accounting for the space saved by data sharing.

Used Snapshot Space

Displays the physical space used by the volume Snapshot copies in the dataset node.

Total Data Space Displays the space allocated in the dataset's primary node.

Snapshot Reserve Displays the space allocated for Snapshot copies in the dataset's primary node.

Total Space Displays the space allocated for data and Snapshot copies in the dataset's primary node.

Guaranteed Space Displays the physical space allocated to the dataset node.

Deleted Displays the time at which the dataset is deleted.

Notes Displays any discrepancy in metrics computation. The field value is overcharge if the dataset has one or more qtrees in the primary node. The field value is partial if data of some volume does not exist for a period. The field is blank if there are no issues.

Related references

Reports tab customization on page 304

Reports management on page 303

Datasets IO Usage Metric Samples report

The Datasets IO Usage Metric Samples report provides the user data read and written for each dataset node for a specific period. The consolidated sample is reported at the same interval at which the sample is collected. For example, if the interval is set to one hour, the report can collect and display the data read for a dataset at each hour.

Dataset Displays the name of the dataset.

Storage Service Displays the name of the storage service associated with the dataset at the time of computing the metrics. If the storage service changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a storage service is not associated with the dataset.

Protection Policy Displays the name of the protection policy associated with the dataset at the time of computing the metrics. If the protection policy changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a protection policy is not associated with the dataset.

Dataset Node Displays the name of the dataset node.

Provisioning Policy

Displays the name of the provisioning policy associated with the dataset at the time of computing the metrics. If the provisioning policy changes for a dataset, the new dataset details are displayed in a new row. The field is blank if a provisioning policy is not associated with the dataset.

Timestamp Displays the time at which the sample is collected.

Data Read Displays the total data read by the user from all volumes of the dataset node.

Data Written Displays the total data written by the user to all volumes of the dataset node.

Deleted Displays the time at which the dataset is deleted.

Notes Displays any discrepancy in metrics computation. The field value is overcharge if the dataset has one or more qtrees in the primary node. The field value is partial if data of some volume does not exist for a period. The field is blank if there are no issues.

Related references

Reports tab customization on page 304

Reports management on page 303

Database schema

How to access DataFabric Manager server data

By using third-party tools, you can create customized reports from the data you export from the DataFabric Manager server. By default, you cannot access the DataFabric Manager server views. To gain access to the views that are defined within the embedded database of the DataFabric Manager server, you need to first create a database user and then enable database access for this user.

You can access the DataFabric Manager server data through views, which are dynamic virtual tables collated from data in the database. These views are defined and exposed within the embedded database of the DataFabric Manager server.

Note: A database user is user-created and authenticated by the database server. Database users are not related to the DataFabric Manager server users.

Before you can create and give access to a database user, you must have the CoreControl capability. The CoreControl capability allows you to perform the following operations:

• Creating a database user
• Deleting a database user
• Enabling database access to a database user
• Disabling database access to a database user

Disabling the database access denies the read permission on the DataFabric Manager server views for the user account.

• Changing the password for the database user

All of these operations can be performed only through the CLI. For more information about the CLI commands, see the DataFabric Manager server manual (man) pages.

You can use a third-party reporting tool to connect to the DataFabric Manager server database for accessing views. The following are the connection parameters:

• Database name: monitordb
• User name: <database user name>
• Password: <database user password>
• Port: 2638
• dobroad: none
• Links: tcpip

Note: The .jar files required for the iAnywhere and jConnect JDBC drivers are copied as part of the DataFabric Manager server installation. The new .jar files are saved in the following directory path: .../install/misc/dbconn.
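The parameters listed above can be assembled into a connection string for whichever client you use. The sketch below only demonstrates that assembly; the host name and credentials are placeholders, and the driver you would actually load (the iAnywhere or jConnect JDBC drivers mentioned in the note, or an ODBC bridge) is outside the scope of this document:

```python
# Placeholder host and credentials -- substitute your own values.
params = {
    "dbn": "monitordb",         # database name, as documented above
    "uid": "dbuser",            # database user created through the CLI
    "pwd": "dbpassword",        # that user's password
    "host": "dfm.example.com",  # hypothetical DataFabric Manager server host
    "port": 2638,               # port documented above
    "links": "tcpip",           # link type documented above
}

# Join the parameters into the key=value;key=value form that
# SQL Anywhere style clients commonly accept.
conn_str = ";".join(f"{key}={value}" for key, value in params.items())
print(conn_str)
```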

Supported database views

By using third-party tools, you can create customized reports from the data you export from the DataFabric Manager server. You can access the DataFabric Manager server database through views.

Following is the list of supported views:

• alarmView
• cpuView
• designerReportView


• datasetIOMetricView
• datasetSpaceMetricView
• datasetUsageMetricCommentView
• hbaInitiatorView
• hbaView
• initiatorView
• reportOutputView
• sanhostLunView
• usersView
• volumeDedupeDetailsView

alarmView

Column name Type Length Description

alarmId Unsigned integer 4 Primary key in identifying alarms

alarmScript Varchar 254 The script that is run when an alarm is triggered

alarmScriptRunAs Varchar 64 The user account specified to run the script

alarmTrapHosts Varchar 254 The SNMP traphost system that receives the alarm notification in the form of SNMP traps

alarmGroupId Unsigned integer 4 Group ID of the group with which the alarm is associated

alarmGroupName Varchar 1024 Name of the group with which the alarm is associated

alarmEventClass Varchar 254 The event class configured to trigger the alarm

alarmEventType Varchar 128 The event name that triggers the alarm

alarmEventSeverity Varchar 16 The event severity level

alarmEventTimeFrom Date time 8 The time at which the alarm becomes active

alarmEventTimeTo Date time 8 The time at which the alarm becomes inactive

alarmsRepeatNotify Varchar 8 The alarm repeat notification


alarmRepeatInterval Unsigned small integer 2 The time period (in minutes) before the DataFabric Manager server continues to send a repeated notification until the event is acknowledged or resolved

alarmDisabled Varchar 8 Indicates whether the alarm is disabled or enabled

alarmEmailAddrs Varchar 254 The e-mail address to which the alarm notification is sent

alarmEventName Varchar 256 The event name associated with the alarm

alarmEventPageLoginName Text 32767 The login name used by the DataFabric Manager server for sending pager notification

alarmAdminEmailLoginName Text 32767 The login name of the administrator who receives the alarm notification

alarmAdminPageAddress Text 32767 The pager address of the administrator, to which the alarm notification is sent

alarmAdminEmailAddress Text 32767 The e-mail address of the administrator, to which the alarm notification is sent

alarmPageAddrs Varchar 254 The pager address of the nonadministrator, to which the alarm notification is sent
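Once a database user has view access, alarmView can be queried with plain SQL through any DB-API style connection. A sketch of such a query (only the column names are taken from this document; the demo runs against an in-memory SQLite stand-in, not the actual monitordb database):

```python
import sqlite3  # stand-in engine for the demo; in practice you connect to monitordb

# Selects a few of the documented alarmView columns.
QUERY = (
    "SELECT alarmId, alarmEventSeverity, alarmEmailAddrs "
    "FROM alarmView ORDER BY alarmId"
)

def fetch_alarms(conn):
    """Run the query on an open DB-API connection and return all rows."""
    cur = conn.cursor()
    cur.execute(QUERY)
    return cur.fetchall()

# Demo: an in-memory table shaped like the documented view.
demo = sqlite3.connect(":memory:")
demo.execute("CREATE TABLE alarmView "
             "(alarmId INT, alarmEventSeverity TEXT, alarmEmailAddrs TEXT)")
demo.execute("INSERT INTO alarmView VALUES (2, 'Warning', '[email protected]')")
demo.execute("INSERT INTO alarmView VALUES (1, 'Critical', '[email protected]')")
print(fetch_alarms(demo))  # rows ordered by alarmId
```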

cpuView

Column name Type Length Description

cpuId Unsigned integer 4 Primary key to identify CPU


cpuBusyPercentInterval Float 4 The value (as a time interval) controls the generation of the "cpu too busy" event.

The value cpubusy (in percentage) is compared with the cpuTooBusyThreshold value. If this value is greater than or equal to cpuTooBusyThreshold, and persists throughout the cpuBusyThresholdInterval period, then the "cpu too busy" event is generated.

If cpuBusyThresholdInterval is set to "15 minutes" (the default interval), these values are checked every CPU monitoring cycle.

cpuStatTimestamp Timestamp 8 The timestamp when the CPU statistics are taken.

designerReportView

Column name Type Length Description

drId Unsigned integer 4 The designer report ID

drCliName Varchar 64 The CLI name of the designer report

drGuiName Varchar 64 The GUI name of the designer report

drIsCustom Unsigned integer 4 The flag to differentiate whether the designer report is a custom report or a canned report

0 indicates Canned and 1 indicates Custom

drDescription Varchar 1024 Brief description of the designer report

Database view datasetIOMetricView

Column Name Data type Length Description

dsIOMetricDatasetId unsigned int 4 Dataset ID

dsIOMetricDatasetName varchar 255 Dataset name

dsIOMetricProtectionPolicyName varchar 255 Protection policy name that is associated with the dataset

dsIOMetricProtectionPolicyId unsigned int 4 Protection policy ID that is associated with the dataset


dsIOMetricStorageServiceName varchar 255 Storage service name that is associated with the dataset

dsIOMetricStorageServiceId unsigned int 4 Storage service ID that is associated with the dataset

dsIOMetricNodeName varchar 255 Dataset node name

dsIOMetricProvisioningPolicyName varchar 255 Provisioning policy name that is associated with the dataset node

dsIOMetricProvisioningPolicyId unsigned int 4 Provisioning policy ID that is associated with the dataset node

dsIOMetricMetricTimestamp timestamp 4 Time at which the metric is calculated for the dataset

dsIOMetricTotalDataRead unsigned big int 8 Total data read by the user from all volumes of the dataset node

dsIOMetricTotalDataWritten unsigned big int 8 Total data written by the user to all volumes of the dataset node

dsIOMetricDeletedTimestamp timestamp 8 Time at which the dataset is deleted

dsIOMetricOvercharge bit 1 Flag true if the computed metric is overcharged

dsIOMetricPartialData bit 1 Flag true if some samples required for metric computation are missing

dsIOMetricCommentId unsigned int 4 ID of the custom comment name and custom value pair set for the dataset

Database view datasetSpaceMetricView

Column Name Data type Length Description

dsSpaceMetricDatasetId unsigned int 4 Dataset ID

dsSpaceMetricDatasetName varchar 255 Dataset name

dsSpaceMetricProtectionPolicyName varchar 255 Protection policy name that is associated with the dataset

dsSpaceMetricProtectionPolicyId unsigned int 4 Protection policy ID that is associated with the dataset

dsSpaceMetricNodeName varchar 255 Dataset node name


dsSpaceMetricStorageServiceName varchar 255 Storage service name that is associated with the dataset

dsSpaceMetricStorageServiceId unsigned int 4 Storage service ID that is associated with the dataset

dsSpaceMetricProvisioningPolicyName varchar 255 Provisioning policy name that is associated with the dataset node

dsSpaceMetricProvisioningPolicyId unsigned int 4 Provisioning policy ID that is associated with the dataset node

dsSpaceMetricTimestamp timestamp 4 Time at which the metric is calculated for the dataset

dsSpaceMetricAvgEffectiveUsedDataSpace unsigned big int 8 The average space used by user data in the dataset node, without accounting for data sharing

dsSpaceMetricAvgPhysicalUsedDataSpace unsigned big int 8 The average actual space used by user data in the dataset node, accounting for the space saved by data sharing

dsSpaceMetricAvgUsedSnapshotSpace unsigned big int 8 The average physical space used by the volume Snapshot copies in the dataset node

dsSpaceMetricAvgTotalDataSpace unsigned big int 8 The average space allocated in the dataset's primary node. The field is blank for non-primary nodes

dsSpaceMetricAvgSnapshotReserve unsigned big int 8 The average space allocated for Snapshot copies in the dataset's primary node. The field is blank for non-primary nodes

dsSpaceMetricAvgTotalSpace unsigned big int 8 The average space allocated for data and Snapshot copies in the dataset's primary node. The field is blank for non-primary nodes

dsSpaceMetricAvgGuaranteedSpace unsigned big int 8 The average physical space allocated to the dataset node

dsSpaceMetricMaxEffectiveUsedDataSpace unsigned big int 8 The maximum space used by user data in the dataset node, without accounting for data sharing


dsSpaceMetricMaxPhysicalUsedDataSpace unsigned big int 8 The maximum actual space used by user data in the dataset node, accounting for the space saved by data sharing

dsSpaceMetricMaxUsedSnapshotSpace unsigned big int 8 The maximum physical space used by the volume Snapshot copies in the dataset node

dsSpaceMetricMaxTotalDataSpace unsigned big int 8 The maximum space allocated in the dataset's primary node. The field is blank for non-primary nodes

dsSpaceMetricMaxSnapshotReserve unsigned big int 8 The maximum space allocated for Snapshot copies in the dataset's primary node. The field is blank for non-primary nodes

dsSpaceMetricMaxTotalSpace unsigned big int 8 The maximum space allocated for data and Snapshot copies in the dataset's primary node. The field is blank for non-primary nodes

dsSpaceMetricMaxGuaranteedSpace unsigned big int 8 The maximum physical space allocated to the dataset node

dsSpaceMetricDeletedTimestamp timestamp 8 Time at which the dataset is deleted

dsSpaceMetricOvercharge bit 1 Flag true if the computed metric is overcharged

dsSpaceMetricPartialData bit 1 Flag true if some samples required for metric computation are missing

dsSpaceMetricCommentId unsigned int 4 ID of the custom comment name and custom value pair set for the dataset

Database view datasetUsageMetricCommentView

Column Name Data type Length Description

datasetId unsigned int 4 Dataset ID

dsUsageMetricCommentId unsigned int 4 ID of the custom comment name and comment value set for the dataset at a specified time

dsUsageMetricCommentName varchar 255 Custom comment name


dsUsageMetricCommentValue varchar 255 Custom comment value

hbaInitiatorView

Column name | Type | Length | Description
initiatorId | Unsigned integer | 4 | Primary key for initiator
hbaId | Unsigned integer | 4 | WWPN ID of the HBA port

hbaView

Column name | Type | Length | Description
hbaId | Unsigned integer | 4 | Primary key for WWPN of the HBA port
hbaName | Varchar | 64 | WWPN of the HBA port

initiatorView

Column name | Type | Length | Description
initiatorId | Unsigned integer | 4 | Primary key for initiator
iGroupId | Unsigned integer | 4 | Initiator group ID to which the LUN is mapped
initiatorName | Varchar | 255 | Name of the initiator to which the LUN is mapped
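The three views above relate through their key columns: hbaInitiatorView bridges initiators to HBA ports. The following sketch mocks the schemas in an in-memory SQLite database to illustrate the join; the sample row values (the WWPN and initiator name) are invented for illustration and do not come from a real DataFabric Manager server database.

```python
import sqlite3

# Mock the three views as SQLite tables. Sample rows are invented
# for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hbaView (hbaId INTEGER PRIMARY KEY, hbaName TEXT);
CREATE TABLE initiatorView (initiatorId INTEGER PRIMARY KEY,
                            iGroupId INTEGER, initiatorName TEXT);
CREATE TABLE hbaInitiatorView (initiatorId INTEGER, hbaId INTEGER);

INSERT INTO hbaView VALUES (1, '10:00:00:00:c9:00:00:01');
INSERT INTO initiatorView VALUES (7, 3, 'iqn.1991-05.com.example:host1');
INSERT INTO hbaInitiatorView VALUES (7, 1);
""")

# hbaInitiatorView is the bridge table: joining it to initiatorView
# and hbaView resolves each initiator name to its HBA port WWPN.
rows = conn.execute("""
    SELECT i.initiatorName, h.hbaName
    FROM hbaInitiatorView b
    JOIN initiatorView i ON i.initiatorId = b.initiatorId
    JOIN hbaView h ON h.hbaId = b.hbaId
""").fetchall()
print(rows)  # [('iqn.1991-05.com.example:host1', '10:00:00:00:c9:00:00:01')]
```

The same join pattern applies when querying the real views through the DataFabric Manager server's database access, with the column names shown in the tables above.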

reportOutputView

Column name | Type | Length | Description
reportOutputId | Unsigned integer | 4 | Report output ID
reportScheduleId | Unsigned integer | 4 | ID of the report schedule that generates the report output
reportId | Unsigned integer | 4 | ID of the report for which the output is generated. If it is null, it means the report is a canned legacy report.
reportName | Varchar | 64 | Name of the report for which the output is generated
reportBaseCatalog | Varchar | 64 | The name of the base catalog that corresponds to the custom report. The DataFabric Manager server provides report catalogs that you use to customize reports. You can set basic report properties from the CLI. If reportBaseCatalog is null, it means the report is either a canned legacy report or a canned OnCommand report.
reportOutputTargetObjId | Unsigned big integer | 8 | ID of the object for which the report is scheduled
reportOutputTimestamp | Timestamp | 8 | The time at which the report output is generated
reportOutputRunStatus | Unsigned small integer | 2 | The status of the report output
reportOutputRunBy | Varchar | 255 | The user (with privileges) who generated the report
reportOutputFailureReason | Varchar | 255 | The reason for failure of a report generation
reportOutputFileName | Varchar | 128 | The file in which the report output is saved

sanhostlunview

Column name | Type | Length | Description
shlunId | Unsigned integer | 4 | LUN ID
hostId | Unsigned integer | 4 | SAN host ID
shInitiatorId | Unsigned integer | 4 | Initiator ID to which the LUN is mapped
shlunpathId | Unsigned integer | 4 | Path ID of the storage system on which the LUN is located

usersView

Column name | Type | Length | Description
userId | Unsigned integer | 4 | The User Quota ID
userNearlyFullThreshold | Unsigned small integer | 2 | The value (in percentage) at which most of the allocated space (disk space or files used), as specified by the user's quota (hard limit in the /etc/quotas file), is consumed. If this threshold is crossed, the DataFabric Manager server generates a User Disk Space Quota Almost Full event or a User Files Quota Almost Full event.
userFullThreshold | Unsigned small integer | 2 | The value (in percentage) at which most of the allocated space (disk space or files used), as specified by the user's quota (hard limit in the /etc/quotas file), is consumed. If this threshold is crossed, the DataFabric Manager server generates a User Disk Space Quota Full event or a User Files Quota Full event.
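The two thresholds above compare the percentage of the user's quota that is consumed against a nearly-full and a full limit. A minimal sketch of that check follows; the default percentages are invented example values, not DataFabric Manager server defaults, and the real event logic may differ in detail.

```python
def quota_events(used_pct, nearly_full=80, full=90):
    """Return the disk-space quota events that would fire for a given
    usage percentage, mimicking the userNearlyFullThreshold and
    userFullThreshold checks described above. The threshold defaults
    here are invented for illustration."""
    events = []
    if used_pct >= full:
        events.append("User Disk Space Quota Full")
    elif used_pct >= nearly_full:
        events.append("User Disk Space Quota Almost Full")
    return events

print(quota_events(85))  # ['User Disk Space Quota Almost Full']
print(quota_events(95))  # ['User Disk Space Quota Full']
```

The same shape of check applies to the files-used quota, with the corresponding User Files Quota events.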

volumeDedupeDetailsView

Column name | Type | Length | Description
volumeId | Unsigned integer | 4 | Volume ID
volumeOverDedupeThreshold | Unsigned small integer | 2 | The threshold (as an integer percentage) used to generate the "Volume Over Deduplicated" event on a dedupe-enabled volume. This event is generated if the sum of volume used space and saved space (as a result of deduplication), expressed as a percentage of the volume total size, exceeds the value of the volOverDeduplicatedThreshold set on the volume. If the threshold is set on the volume, the value overrides the one at the global level. If volumeOverDedupeThreshold is null, it means that the threshold is not set at the volume level.
volumeNearlyOverDedupeThreshold | Unsigned small integer | 2 | The threshold (as an integer percentage) used to generate the "Volume Nearly Over Deduplicated" event on a dedupe-enabled volume. This event is generated if the sum of volume used space and saved space (as a result of deduplication), expressed as a percentage of the volume total size, exceeds the value of the volNearlyOverDeduplicatedThreshold set on the volume. If the threshold is set on the volume, the value overrides the one at the global level. If volumeNearlyOverDedupeThreshold is null, it means that the threshold is not set at the volume level.
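The event condition described above, used space plus dedupe savings expressed as a percentage of total volume size and compared against a threshold, can be sketched as follows. The volume sizes and threshold defaults are invented example numbers, not product defaults.

```python
def dedupe_event(used, saved, total, over=140, nearly_over=120):
    """Evaluate the 'Volume Over Deduplicated' and 'Volume Nearly Over
    Deduplicated' conditions described above. Thresholds are integer
    percentages; the defaults here are invented for illustration."""
    pct = (used + saved) * 100 / total
    if pct > over:
        return "Volume Over Deduplicated"
    if pct > nearly_over:
        return "Volume Nearly Over Deduplicated"
    return None

# A 100-GB volume with 80 GB used and 50 GB saved by deduplication is
# at 130% of its size once the savings are added back.
print(dedupe_event(used=80, saved=50, total=100))
```

Because saved space is added back to used space, a heavily deduplicated volume can exceed 100% in this computation even though it physically has free space; that is exactly the condition these two thresholds are meant to flag.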


Administration

Users and roles

Understanding users and roles

What RBAC is

RBAC (role-based access control) provides the ability to control who has access to various features and resources in the DataFabric Manager server.

How RBAC is used

Applications use RBAC to authorize user capabilities. Administrators use RBAC to manage groups of users by defining roles and capabilities.

For example, if you need to control user access to resources, such as groups, datasets, and resource pools, you must set up administrator accounts for them. Additionally, if you want to restrict the information these administrators can view and the operations they can perform, you must apply roles to the administrator accounts you create.

Note: RBAC permission checks occur in the DataFabric Manager server. RBAC must be configured using the Operations Manager console or the command-line interface.

How roles relate to administrators

Role management allows the administrator who logs in with super-user access to restrict the use of certain DataFabric Manager server functions to other administrators.

The super-user can assign roles to administrators on an individual basis, by group, or globally (and for all objects in the DataFabric Manager server).

You can list the description of an operation by using the dfm role operation list [ -x ] [ <operation-name> ] command.

The ability to configure administrative users and roles is supported in the Operations Manager console, which can be accessed from the OnCommand console Administration menu.

Example of how to use RBAC to control access

This example describes how a storage architect can use RBAC to control the operations that can be performed by a virtual server administrator.

Suppose you are a storage architect and you want to use RBAC to enable the virtual server administrator (abbreviated to "administrator") to do the following operations: see the VMs associated with the servers managed by the administrator, create datasets to include them, and attach storage services and application policies to these datasets to back up the data.

Assume for this example that the host service registration, validation, and so on have been successfully completed, and that the DataFabric Manager server has discovered the virtual server and its VMs, datastores, and so on. However, at this point, only you and other administrators with the global read permission in the DataFabric Manager server can see these VMs. To enable the virtual server administrator to perform the desired operations, you need to perform the following steps:

1. Add the administrator as a user.
   You add the administrator as an authorized DataFabric Manager server user by using the Operations Manager console. If the DataFabric Manager server is running on Linux, you must add the administrator's UNIX identity or LDAP identity. If the DataFabric Manager server is running on Microsoft Windows, you can add an Active Directory user group that the administrator belongs to, which allows all administrators in that user group to log on to the DataFabric Manager server.

2. Create a resource group.
   You next create a resource group by using the Operations Manager console. For this example, we call the group "virtual admin resource group." Then you add the virtual server and its objects to the resource group. Any new datasets or policies that the administrator creates will be placed in this resource group.

3. Assign a role to the administrator.
   Assign a role, by using the Operations Manager console, that gives an appropriate level of control to the administrator. For example, if there is only one administrator, you can assign the default GlobalApplicationProtection role, or you can create a custom role by choosing custom capabilities. If there are multiple administrators, then a few of the capabilities assigned to any given administrator should be at that administrator's group level and not at the global level. That prevents an administrator from reading or modifying objects owned by other administrators.
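The distinction in step 3 between group-level and global-level capabilities can be shown with a conceptual sketch. This is not DataFabric Manager code; the role, operation, and group names are invented for illustration.

```python
# Conceptual RBAC sketch: a capability is granted either globally or
# on a specific resource group, and a check passes only if the user's
# role holds the capability at the right scope. Names are invented;
# this is not the DataFabric Manager server implementation.
GLOBAL = "Global"

class Role:
    def __init__(self, name, capabilities):
        # capabilities: set of (operation, scope) pairs
        self.name = name
        self.capabilities = set(capabilities)

    def allows(self, operation, group):
        # A global grant covers every group; otherwise the grant must
        # name the specific group being accessed.
        return ((operation, GLOBAL) in self.capabilities
                or (operation, group) in self.capabilities)

# A role scoped to one administrator's resource group...
vi_admin = Role("VIAdmin",
                {("Dataset.Write", "virtual admin resource group")})
# ...cannot touch datasets in another administrator's group.
print(vi_admin.allows("Dataset.Write", "virtual admin resource group"))  # True
print(vi_admin.allows("Dataset.Write", "other admin resource group"))    # False
```

This is why granting capabilities at the group level, rather than globally, keeps each administrator's objects isolated from the others.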

Related tasks

Accessing the Users and Roles capability (RBAC) on page 375

Administrator roles and capabilities

The RBAC administrator roles determine the tasks you can perform in the OnCommand console.

One or more capabilities must be specified for every role, and you can assign multiple capabilities if you want the administrator to have more control than a specific role provides. For example, if you want an administrator to perform both the backup and restore operations, you can create and assign to the administrator a single role that has both of these capabilities.

You can use the Operations Manager console to create new roles and to customize the default global roles provided by the DataFabric Manager server and the client applications. For more information about configuring RBAC, see the OnCommand Operations Manager Administration Guide.

Note: If you want a role with global host service management capability, create a role with the following properties:

• The role inherits from the GlobalHostService role.
• The role includes the DFM.Database.Read operation on a global level.

Note: A user who is part of the local administrators group is treated as a super-user and automatically granted full control.

Default global roles

GlobalApplicationProtection Enables you to create and manage application policies, create datasets with application policies for local backups, use storage services for remote backups, perform scheduled and on-demand backups, perform restore operations, and generate reports.

GlobalBackup Enables you to initiate a backup to any secondary volume and ignore discovered hosts.

GlobalDataProtection Enables you to initiate a backup to any secondary volume; view backup configurations, events and alerts, and replication or failover policies; and import relationships into datasets.

GlobalDataset Enables you to create, modify, and delete datasets.

GlobalDelete Enables you to delete information in the DataFabric Manager server database, including groups and members of a group, monitored objects, custom views, primary and secondary storage systems, and backup relationships, schedules, and retention policies.

GlobalHostService Enables you to authorize, configure, and unregister a host service.

GlobalEvent Enables you to view, acknowledge, and delete events and alerts.

GlobalFullControl Enables you to view and perform any operation on any object in the DataFabric Manager server database and configure administrator accounts. You cannot apply this role to accounts with group access control.

GlobalMirror Enables you to create, destroy, and update replication or failover policies.

GlobalRead Enables you to view the DataFabric Manager server database, backup and provisioning configurations, events and alerts, performance data, and policies.

GlobalRestore Enables you to restore the primary data to a point in time or to a new location.

GlobalWrite Enables you to view or write both primary and secondary data to the DataFabric Manager server database.

GlobalResourceControl Enables you to add members to dataset nodes that are configured with provisioning policies.

GlobalProvisioning Enables you to provision primary dataset nodes and attach resource pools to secondary or tertiary dataset nodes. The GlobalProvisioning role also includes all the capabilities of the GlobalResourceControl, GlobalRead, and GlobalDataset roles for dataset nodes that are configured with provisioning and protection policies.

GlobalPerfManagement Enables you to manage views, event thresholds, and alarms, apart from viewing performance information in Performance Advisor.

Related information

Operations Manager Administration Guide - http://now.netapp.com/NOW/knowledge/docs/DFM_win/dfm_index.shtml

Access permissions for the Virtual Infrastructure Administrator role

When you create a virtual infrastructure administrator, you must assign specific permissions to ensure that the administrator can view, back up, and recover the appropriate virtual objects.

A virtual infrastructure administrator role must have the following permissions for the resources:

Groups The VI administrator will need the following operation permissions for the group created for the VI administrator role.

DFM.Database All

DFM.BackManager All

DFM.ApplicationPolicy All

DFM.Dataset All

DFM.Resource Control

Policies The VI administrator will need the following operation permissions for each policy template, located under Local Policies, that you want the virtual administrator to be able to copy.

DFM.ApplicationPolicy Read

Storage services The VI administrator will need the following operation permissions for each of the storage services that you want to allow the VI administrator to use.

DFM.StorageService Attach, read, detach, and clear


Protection policies These are the policies contained within the storage services that you selected above.

DFM.Policy All

Configuring users and roles

Accessing the Users and Roles capability (RBAC)

You can configure users and roles from the Administrators page in the Operations Manager console, which you can access from the OnCommand console Administration menu.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the Administration menu, then click the Users and Roles option.

A separate browser window opens to the Administrators page in the Operations Manager console.

2. Configure users and roles.

For more information, see the Operations Manager Help.

3. When finished, press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

Related references

Administrator roles and capabilities on page 506

Groups

Understanding groups


What a global group is

By default, a group called Global exists in the DataFabric Manager server database. All objects monitored by the DataFabric Manager server belong to the global group. All subgroups created also belong to the global group.

You cannot delete or rename the global group. When you delete an object, the DataFabric Manager server stops monitoring and reporting data for that object. Data collection and reporting is not resumed until the object is added back ("recovered") to the database. You can recover the object from the Deleted Objects view.

Note: You can perform group management tasks, such as copying and moving, for groups that you create in the OnCommand console. However, you cannot perform management tasks for the global group.

Related references

Deleted Objects view on page 117

What groups and objects are

A group is a collection of objects that are discovered and monitored by the OnCommand console. You can group objects based on characteristics such as the operating system version, the location of the storage systems, or the projects to which all the file systems belong.

By default, a group called Global exists in the DataFabric Manager server database. If you create additional groups, you can select them from the Group menu.

Objects that are monitored by the OnCommand console, such as storage systems, aggregates, file systems (volumes and qtrees), logical unit numbers (LUNs), and datasets, are referred to as objects. Objects directly added to the groups are called direct members. You can also add child objects of the direct members to the groups and categorize them as indirect members.

You can add the following objects to a group.

Following are the server objects:

• Datacenters
• Datastores
• ESX Servers
• Host Agents
• Host services
• Hyper-V Servers
• Hyper-V VMs
• Virtual Centers
• VMware VMs

Following are the service automation objects:


• Datasets
• Local policies
• Resource pools
• Storage services

Following are the storage objects:

• Aggregates
• Clusters
• LUNs
• Qtrees
• SRM paths
• Storage controllers
• vFiler Units
• Vservers
• Volumes

Configuring groups

Creating groups

You can create groups to contain multiple objects so that you can easily manage these objects. You can create a group directly under the global group, or create a subgroup under a parent group you already created.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

• You must review the Guidelines for creating groups on page 378.

• The following information must be available:

  • Name of the group
  • Members of the group

• When you create a group, you can add an object to the group membership only if you have permission to view that object.

Steps

1. Click the Administration menu, then click the Groups option.

2. From the Groups tab, select the default global group.

3. Click Create.


4. Enter the name of the group in the name field.

5. Enter the owner's name, e-mail address, annual rate (per GB), and resource tag.

6. Select the appropriate member type from the drop-down menu.

7. Select the required member type and use appropriate arrow keys to move it to the list on the right.

The selected member types are added to the group and the membership list is updated.

8. Click Create.

Result

The new group appears in the Groups list. You can select any group in the Groups list from the Groups menu.

Related references

Guidelines for creating groups on page 378

Administrator roles and capabilities on page 506

Guidelines for creating groups

You should follow a set of guidelines when you create groups.

Use the following guidelines when you create groups:

• You can group similar or mixed-type objects in a group.
• An object can be a member of any number of groups.
• You can group a subset of group members to create a new group.
• You can create any number of groups.
• You can copy a group or move a group within a group hierarchy. However, a parent group cannot be moved into its own child group.

Deleting groups

You can delete groups that you no longer find useful.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

Deleting a group removes only the group container from the OnCommand database. The objects contained in the deleted group are not removed from the database. When you delete a group, you also delete all its subgroups, if any. If you want to preserve the subgroups, you must move them to a different parent group before deleting the current parent group.


Steps

1. Click the Administration menu, then click the Groups option.

2. From the Groups tab, select the group you want to delete from the list.

3. Click Delete.

4. Click Yes.

Related references

Administrator roles and capabilities on page 506

Managing groups

Editing groups

You can edit a group name, add or delete members of a group, and modify the contact information of a group from the Edit Group dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click the Groups option.

2. From the Groups tab, select the group you want to modify.

3. Click Edit.

4. In the Edit Group dialog box, specify the group settings, as required.

5. Click OK.

Related references

Administrator roles and capabilities on page 506


Copying a group

You can copy a group and assign it to a different parent group from the Copy To dialog box. When you copy a group, you create a copy of the selected group, and assign the copy to a different parent group.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

To copy a group to a new parent group, you must be logged in as an administrator with Database Write capability on the new parent group.

Steps

1. Click the Administration menu, then click the Groups option.

2. From the Groups tab, select the group you want to copy from the list.

3. Click Copy.

4. Click OK.

Result

The selected group is copied to the target group.

Related references

Administrator roles and capabilities on page 506

Moving a group

When you move a group, you assign the selected group to a new parent group. If the group you moved has any subgroups, those subgroups are also moved to the new parent, maintaining the same hierarchical structure and membership.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

To move a group to a new parent group, you must have the following capabilities:

• Database Delete capability on the group that you want to move
• Database Write capability on the new parent group

The group you want to move must not already exist in the target group. If the group already exists, an appropriate message is displayed, and you cannot perform the move operation.


Steps

1. Click the Administration menu, then click the Groups option.

2. From the Groups tab, select the group you want to move from the list.

3. Click Move.

4. Click OK.

Related references

Administrator roles and capabilities on page 506

Page descriptions

Groups tab

The Groups tab enables you to view existing groups and perform tasks such as creating, deleting, modifying, moving, and copying groups.

• Command buttons on page 381
• Groups list on page 382
• Members tab on page 382
• Graph tab on page 382

Command buttons

The command buttons enable you to perform the following management tasks for a selected group:

Create Launches the Create Group dialog box, which enables you to create groups with different member types.

Edit Launches the Edit Group dialog box. You can edit the group name, add or delete group members, or modify group contact information.

Delete Deletes the selected group.

Note: When you delete a group, you also delete all of its subgroups, if any. If you want to preserve the subgroups, you must move them to a different parent group before deleting the current parent group.

Copy Launches the Copy To dialog box. When you copy a group, you create a copy of the selected group and assign it to a different parent group.

Move Launches the Move To dialog box. When you move a group, you assign the selected group to a new parent group. If the group you moved has any subgroups, those subgroups are also moved to the new parent, maintaining the same hierarchical structure and membership.

Refresh Refreshes the list of groups.


Groups list

The groups list displays all of the groups that have been created. You can select the name of a group to see the details for that group.

Group Name Specifies the name of the group.

Owner Displays the name of the owner of the group.

Email Displays the e-mail address of the owner of the group.

Resource Tag Specifies the resource tag of the group. This is a system-generated custom comment field.

Annual Rate (currency unit/GB) Specifies the amount to charge for storage space usage per GB per year.

Total Capacity (GB) Displays the total space allocated for the group.

Used Capacity (GB) Displays the amount of storage used by the group.

Used (%) Displays the percentage of storage used by the group.

Status Displays the current status of each group. The status can be Normal, Information, Warning, Error, Critical, Emergency, or Unknown.

ID Displays the group ID. By default, this column is hidden.

Members tab

The Members tab displays detailed information about the selected group.

The Members tab displays the current status of each group as mentioned in the groups list, and includes the following additional information:

Member Name Specifies the name of the group member.

Member Type Specifies the object type of the group member.

Status Displays the current status of the group member. The status can be Normal, Information, Warning, Error, Critical, Emergency, or Unknown.

Member Of Specifies the name of the parent group to which the group member belongs.

Graph tab

The Graph tab displays information about the performance of the selected group. You can select the graph you want to view from the drop-down menu in the area.

You can display information for a specified time period, such as one day, one week, one month, three months, or one year. By clicking the export icon, you can export the graphical data in CSV format.


Related references

Window layout customization on page 16

Create Group dialog box

The Create Group dialog box enables you to create groups so that you can easily manage multiple storage objects.

• Properties on page 383
• Command buttons on page 383

Properties

You can create groups by specifying properties such as the group name, owner name, e-mail address of the owner, and annual rate.

Name Specifies the name of the group.

Owner Specifies the user who owns the group.

Email Specifies the e-mail address of the user who owns the group.

Resource Tag Specifies the resource tag of the group. This is a system-generated custom comment field.

Annual Rate (Per GB) Specifies the amount to charge for storage space usage per GB per year. You must enter a value in the x.y notation, where x is the integer part of the number and y is the fractional part. For example, to specify an annual charge rate of $150.55, you must enter 150.55.

Member Type Displays the object types of the group in the drop-down menu.

Available Members Displays the list of members based on the object type selected. You can use the filter to search for the objects. You can use the appropriate arrow keys to move the objects to the list on the right.

Selected Members Displays the selected object types.
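The x.y notation described for the Annual Rate property can be checked with a short sketch. The regular expression below is illustrative only and is not the console's actual input validation.

```python
import re

def valid_annual_rate(text):
    """Accept values in x.y notation (e.g. '150.55') or a plain
    integer part (e.g. '150'). Illustrative only; the OnCommand
    console's real validation rules may differ."""
    return re.fullmatch(r"\d+(\.\d+)?", text) is not None

print(valid_annual_rate("150.55"))   # True  -- enter this for $150.55
print(valid_annual_rate("$150.55"))  # False -- no currency symbol
```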

Command buttons

You can use command buttons to perform the following management tasks:

Create Creates a group based on the properties that you specify.

Cancel Does not save the group configuration and closes the Create Group dialog box.


Edit Group dialog box

The Edit Group dialog box enables you to edit the properties of the groups you created. You can edit group properties such as name, owner, owner e-mail address, members, and chargeback details.

• General tab on page 384
• Group Member tab on page 384
• Chargeback tab on page 384
• Command buttons on page 385

General tab

You can edit group properties such as the group name, owner name, and e-mail address of the owner.

Name Specifies the name of the group.

Owner Specifies the user who owns the group.

Email Specifies the e-mail address of the user who owns the group.

Resource Tag Specifies the resource tag of the group. This is a system-generated custom comment field.

Group Member tab

You can edit the properties of group members such as member type, available members, and selected members.

Member Type Displays the object types of the group in the drop-down menu.

Available Members Displays the list of members based on the object type selected. You can use the appropriate arrow keys to move the member types to the list on the right.

Selected Members Displays the selected member types.

Chargeback tab

You can edit chargeback properties of groups such as annual rate and format of the annual rate.

Annual Rate (Per GB) Specifies the amount to charge for storage space usage per GB per year. You must enter a value in the x.y notation, where x is the integer part of the number and y is the fractional part. For example, to specify an annual charge rate of $150.55, you must enter 150.55.

Formatted Annual Rate Displays the format of the annual rate.


Command buttons

You can use command buttons to perform the following management tasks:

OK Updates the group properties that you specify.

Cancel Does not save the modification of the group configuration, and closes the Edit Group dialog box.

Alarms

Understanding alarms

Alarm configuration

The DataFabric Manager server uses alarms to notify you when events occur. The DataFabric Manager server sends the alarm notification to one or more specified recipients in different formats, such as an e-mail notification, a pager alert, an SNMP traphost, or a script you wrote (you should attach the script to the alarm).

You should determine the events that cause alarms, whether the alarm repeats until it is acknowledged, and how many recipients an alarm has. Not all events are severe enough to require alarms, and not all alarms are important enough to require acknowledgment. Nevertheless, to avoid multiple responses to the same event, you should configure the DataFabric Manager server to repeat notification until an event is acknowledged.

Note: The DataFabric Manager server does not automatically send alarms for the events.

Configuring alarms

Creating alarms for events

The OnCommand console enables you to configure alarms for immediate notification of events. You can also configure alarms even before a particular event occurs. You can add an alarm based on the event, event severity type, or event class from the Create Alarm dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

You must have your mail server configured so that the DataFabric Manager server can send e-mails to specified recipients when an event occurs.

You must have the following information available to add an alarm:

• The group with which you want the alarm associated.


• The event name, event class, or event severity type that triggers the alarm.
• The recipients and the modes of event notifications.
• The period during which the alarm is active.

You must have the following capabilities to perform this task:

• DFM.Event.Write
• DFM.Alarm.Write

About this task

Alarms you configure based on the event severity type are triggered when that event severity level occurs.

Steps

1. Click the Administration menu, then click the Alarms option.

2. From the Alarms tab, click Create.

3. In the Create Alarm dialog box, specify the condition for which you want the alarm to be triggered.

Note: An alarm is configured based on event type, event severity, or event class.

4. Specify one or more means of alarm notification.

5. Click Create, then click Close.

Related concepts

Guidelines for creating alarms on page 30

Related tasks

Configuring the mail server for alarm notifications on page 36

Related references

Administrator roles and capabilities on page 506


Creating alarms for a specific event

The OnCommand console enables you to configure an alarm when you want immediate notification for a specified event name or event class, or if events of a specified severity level occur. You can add an alarm from the Create Alarm dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

You must have your mail server configured so that the DataFabric Manager server can send e-mails to specified recipients when an event occurs.

You must have the following information available to add an alarm:

• The group with which you want the alarm associated
• The event name, event class, or severity type that triggers the alarm
• The recipients and the modes of event notifications
• The period during which the alarm is active

You must have the following capabilities to perform this task:

• DFM.Event.Write
• DFM.Alarm.Write

About this task

Alarms you configure for a specific event are triggered when that event occurs.

Steps

1. Click the View menu, then click the Events option.

2. From the events list in the Events tab, analyze the events, and then determine the event for which you want to create an alarm notification.

3. Select the event for which you want to create an alarm.

4. Click Create Alarm.

The Create Alarm dialog box opens, and the event is selected by default.

5. Specify one or more alarm notification properties.

6. Click Create, then click Close.

Related concepts

Guidelines for creating alarms on page 30

Related tasks

Configuring the mail server for alarm notifications on page 36

Related references

Administrator roles and capabilities on page 506

Managing events and alarms

Resolving events

After you have taken corrective action on a particular event, you should mark the event as resolved to avoid multiple event notifications. You can mark events as resolved from the Events tab.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the View menu, then click the Events option.

2. From the events list in the Events tab, select the event that you want to acknowledge.

3. Click Acknowledge.

If you do not acknowledge an event and mark it as resolved, you continue to receive multiple notifications for the same event.

4. Find the cause of the event and take corrective action.

5. Click Resolve to mark the event as resolved.

Related references

Administrator roles and capabilities on page 506

Editing alarm properties

You can edit the configuration of an existing alarm from the Edit Alarm dialog box. For example, if you have created a script that is executed when there is an event notification, you can provide the complete script path in the Edit Alarm dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

You must have the following capabilities to perform this task:

• DFM.Event.Write
• DFM.Alarm.Write

Steps

1. Click the Administration menu, then click the Alarms option.

2. From the alarms list, select the alarm whose properties you want to modify.

3. From the Alarms tab, click Edit.

4. In the Edit Alarm dialog box, edit the properties of the alarm as required.

5. Click Edit.

Result

The new configuration is immediately activated and displayed in the alarms list.

Related references

Administrator roles and capabilities on page 506

Configuring the mail server for alarm notifications

You must configure the mail server so that, when an event occurs, the DataFabric Manager server can send e-mails to specified recipients.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. From the File menu, click Operations Manager.

2. In the Operations Manager console, click the Control Center tab.

3. Click the Setup menu, and then click Options.

4. In Edit options, click the Events and Alerts option.

5. In the Events and Alerts Options page, specify the name of your mail server.

Related references

Administrator roles and capabilities on page 506

Page descriptions

Alarms tab

The Alarms tab provides a single location from which you can view a list of alarms configured based on event, event severity type, and event class. You can also perform various actions from this window, such as edit, delete, test, and enable or disable alarms.

• Command buttons on page 390
• Alarms list on page 390
• Details area on page 391

Command buttons

The command buttons enable you to perform the following management tasks for a selected event:

Create Launches the Create Alarm dialog box in which you can create an alarm based on event, event severity type, and event class.

Edit Launches the Edit Alarm dialog box in which you can modify alarm properties.

Delete Deletes the selected alarm.

Test Tests the selected alarm to check its configuration, after creating or editing the alarm.

Enable Enables an alarm to send notifications.

Disable Disables the selected alarm when you want to temporarily stop its functioning.

Refresh Refreshes the list of alarms.

Alarms list

The Alarms list displays a list of all the configured alarms. You can select an alarm to see the details for that alarm.

Alarm ID Displays the ID of the alarm.

Event Displays the event name for which the alarm is created.

Event Severity Displays the severity type of the event.

Group Displays the group name with which the alarm is associated.

Enabled Displays “Yes” if the selected alarm is enabled or “No” if the selected alarm isdisabled.

Start Displays the time at which the selected alarm becomes active. By default, this column is hidden.

End Displays the time at which the selected alarm becomes inactive. By default, this column is hidden.

Repeat Interval (Minutes) Displays the interval (in minutes) at which the DataFabric Manager server repeats the notification until the event is acknowledged or resolved. By default, this column is hidden.

Repeat Notify Displays “Yes” if the selected alarm is enabled for repeated notification, or “No” if the selected alarm is disabled for repeated notification. By default, this column is hidden.

Event Class Displays the class of event that is configured to trigger an alarm. By default, this column is hidden.

You can configure a single alarm for multiple events using the event class. The event class is a regular expression that contains rules, or pattern descriptions, that typically use the word "matches" in the expression. For example, the userquota.*|qtree.* expression matches all user quota or qtree events.
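The matching behavior of such an event-class expression can be sketched as an ordinary regular-expression match against event names. The pattern below is the one from the example above; the event names themselves are illustrative, not the server's actual event catalog.

```python
import re

# Event-class expression from the example above: matches any event name
# beginning with "userquota" or "qtree".
event_class = re.compile(r"userquota.*|qtree.*")

# Illustrative event names (assumed for this sketch).
events = ["userquota.full", "qtree.almost-full", "volume.offline"]

# Only events whose names match the class would trigger the alarm.
matched = [name for name in events if event_class.match(name)]
print(matched)  # the user quota and qtree events match; the volume event does not
```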

Details area

Apart from the alarm details displayed in the alarms list, you can view additional properties of the alarms in the area below the alarms list.

Effective Time Range The time during which an alarm is active.

Administrators (Email Address) The e-mail address of the administrator to which the alarm notification is sent.

Administrators (Pager Number) The pager number of the administrator to which the alarm notification is sent.

Email Addresses (Others) The e-mail addresses of nonadministrator users to which the alarm notification is sent.

Pager Numbers (Others) The pager numbers of nonadministrator users to which the alarm notification is sent.

SNMP Trap Host The SNMP traphost system that receives the alarm notification in the form of SNMP traps.

Script Path The name, along with the path, of the script that is run when an alarm is triggered.
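The script at the configured path is simply executed when the alarm fires. A minimal sketch of such a script is shown below; exactly what the DataFabric Manager server passes to the script is not described here, so the sketch just appends whatever arguments it receives to a log file, and the log path is hypothetical.

```python
import sys
import time

def log_alarm(args, log_path="/var/log/alarm-script.log"):
    """Append a timestamped record of an alarm invocation to a log file.
    The argument layout and log path are assumptions for this sketch."""
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    with open(log_path, "a") as log:
        log.write(f"{stamp} alarm fired, args: {args}\n")

if __name__ == "__main__":
    # Record whatever the caller passed on the command line.
    log_alarm(sys.argv[1:])
```

A script like this is mainly useful for confirming that the alarm actually runs the configured path before replacing it with real remediation logic.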

Related references

Window layout customization on page 16

Create Alarm dialog box

The Create Alarm dialog box enables you to create alarms based on the event type, event severity, or event class. You can create alarms for a specific event or for many events.

• Event Options on page 392
• Notification Options on page 392
• Command buttons on page 393

Event Options

You can create an alarm based on event name, event severity type, or event class:

Group Displays the group that receives an alert when an event or event type triggers an alarm.

Event Displays the names of the events that trigger an alarm.

Event Severity Displays the event severity types that trigger an alarm. The event severity types are Normal, Information, Warning, Error, Critical, and Emergency.

Event Class Specifies the event classes that trigger an alarm.

The event class is a regular expression that contains rules, or pattern descriptions, that typically use the word "matches" in the expression. For example, the expression userquota.* matches all user quota events.

Notification Options

You can specify alarm notification properties by selecting one or more of the following check boxes:

SNMP Trap Host Specifies the SNMP traphost that receives the notification.

E-mail Administrator (Admin Name) Specifies the name of the administrator who receives the e-mail notification. You can specify multiple administrator names, separated by commas.

Page Administrator (Admin Name) Specifies the administrator who receives the pager notification. You can specify multiple administrator names, separated by commas.

E-mail Addresses (Others) Specifies the e-mail addresses of nonadministrator users who receive the notification. You can specify multiple e-mail addresses, separated by commas.

Pager Numbers (Others) Specifies the pager numbers of other nonadministrator users who receive the notification. You can specify multiple pager numbers, separated by commas.

Script Path Specifies the name of the script that is run when the alarm is triggered.

Repeat Interval (Minutes) Specifies whether an alarm notification is repeated until the event is acknowledged, and, if so, how often the notification is repeated.

Effective Time Range Specifies the time during which the alarm is active.
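The repeat-interval behavior can be modeled as a simple schedule: the notification recurs at the configured interval until the event is acknowledged or resolved. The sketch below is an illustrative model of that semantics, not server code; the dates are arbitrary.

```python
from datetime import datetime, timedelta

def notification_times(first_sent, repeat_minutes, acknowledged_at):
    """Return the times at which the alarm notification is (re)sent:
    starting at first_sent and repeating every repeat_minutes until the
    event is acknowledged (illustrative model of the Repeat Interval
    option)."""
    times = []
    t = first_sent
    while t < acknowledged_at:
        times.append(t)
        t += timedelta(minutes=repeat_minutes)
    return times

# An alarm first fires at 09:00 with a 15-minute repeat interval, and the
# event is acknowledged at 10:00.
sent = notification_times(datetime(2011, 7, 1, 9, 0), 15,
                          datetime(2011, 7, 1, 10, 0))
print(len(sent))  # 4 notifications: 09:00, 09:15, 09:30, 09:45
```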

Command buttons

You can use command buttons to perform the following management tasks for a selected event:

Create Creates an alarm based on the properties that you specify.

Cancel Closes the Create Alarm dialog box without saving the alarm configuration.

Host services

Understanding host services

What a host service is

The host service is software that runs on a physical machine, a Hyper-V parent, or in a virtual machine. The host service software includes plug-ins that enable the DataFabric Manager server to discover, back up, and restore virtual objects, such as virtual machines and datastores. The host service also enables you to view virtual objects in the OnCommand console.

Guidelines for managing host services

Resource discovery by a host service can be initiated manually by an administrator, and, by default, automatic notification is available in response to changes in resources. When you make changes to the virtual infrastructure, the results are available immediately because of the automatic notification from the host service to the DataFabric Manager server. You can also manually start a rediscovery job to see your changes. You might need to refresh the host service information to see the updates in the OnCommand console.

During the process of installing the NetApp OnCommand management software, you must register at least one host service with the DataFabric Manager server and with the virtual infrastructure (VMware or Hyper-V). You can register additional host services after installation, from the Host Services tab accessible from the Administration menu in the OnCommand console. After registration, you can monitor and manage host services from the Host Services tab.

Note: If the Hyper-V parent is part of a cluster, you must install the OnCommand Host Package on each node of the cluster, and all the cluster nodes must have the same TCP/IP port number to enable communication between host services on different nodes. You must register and authorize each node with the same DataFabric Manager server.

Note: When you register a host service with the DataFabric Manager server, you can type the fully qualified domain name or IP address in the IPv4 format.
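A quick way to pre-check that an address fits the accepted forms (fully qualified domain name, or IP address in the IPv4 format) is sketched below. This is an illustrative helper only, assuming the two accepted forms stated in the note; the console performs its own validation.

```python
import ipaddress

def valid_host_service_address(addr):
    """Return True if addr is an IPv4 address or a plausible FQDN.
    IPv6 is rejected because registration accepts the IPv4 format only.
    (Illustrative pre-check; not the console's actual validation.)"""
    try:
        return isinstance(ipaddress.ip_address(addr), ipaddress.IPv4Address)
    except ValueError:
        # Not an IP literal: accept a loose FQDN shape of dot-separated
        # labels made of letters, digits, and hyphens.
        labels = addr.split(".")
        return len(labels) >= 2 and all(
            label and all(c.isalnum() or c == "-" for c in label)
            for label in labels
        )

print(valid_host_service_address("192.0.2.10"))       # True  (IPv4)
print(valid_host_service_address("hs1.example.com"))  # True  (FQDN)
print(valid_host_service_address("2001:db8::1"))      # False (IPv6)
```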

Note: If you change the certificate on a host service, or uninstall a host service and then reinstall it, you must unregister the host service from the DataFabric Manager server using the -f flag. See the unregister man page for more information.

The host service is included as part of the installation of the OnCommand Host Package. You can install multiple host services on multiple vCenter Servers, virtual machines, or Hyper-V parents.

Note: In a Hyper-V cluster only, if you manually shut down a host service on the node that is designated as the owner of a cluster and the node is active, the host service on both the cluster and the node becomes inactive.

Note: The OnCommand Host Package upgrade does not force host services to reregister with the DataFabric Manager server. Therefore, if you unregister a host service from the DataFabric Manager server prior to an OnCommand Host Package upgrade, you must manually register the host service with the DataFabric Manager server after the upgrade is finished.

Messages from host services are stored persistently in the DataFabric Manager server database hsNotifications table, so that even if the DataFabric Manager server goes down, information is not lost, and incomplete operations are automatically restarted or resumed after the server comes back up again. This table continues to grow over time and can quickly become huge in a large environment. You can use the following global options to manage the size of this table:

• hsNotificationsMaxCount

• hsNotificationsPurgingInterval
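The effect of capping the table can be sketched as a bounded buffer: once the maximum count is reached, the oldest rows are discarded first. This is a toy model only; the real server's retention behavior, and the purging-interval timer, are more involved than this.

```python
from collections import deque

class HsNotificationsTable:
    """Toy model of a size-capped notification table: keep at most
    max_count messages, discarding the oldest first. Illustrates why a
    maximum-count option bounds table growth; not server code."""

    def __init__(self, max_count):
        # deque with maxlen silently drops the oldest entry when full.
        self.rows = deque(maxlen=max_count)

    def record(self, message):
        self.rows.append(message)

table = HsNotificationsTable(max_count=3)
for n in range(5):
    table.record(f"notification-{n}")
print(list(table.rows))  # only the 3 newest notifications remain
```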

Configuring host services

Adding and registering a host service

Before you can use a VMware or Hyper-V host, you must add the host service and register it with the DataFabric Manager server.

Before you begin

The host service firewall must be disabled for the administration and management ports.

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

DataFabric Manager server does not support a host service created as a generic service from failover cluster manager in Microsoft Windows.

If you unregister a cluster-level host service, DataFabric Manager server will not automatically register the host service when you re-register the node. You must re-register or add the host service using the cluster IP address.

Attention: If you change the name of the machine after installing the OnCommand Host Package, you must uninstall the OnCommand Host Package and perform a fresh installation.

Steps

1. Click the Administration menu, then click the Host Services option.

2. In the Host Services tab, click Add.

3. In the Add Host Service dialog box, type the IP address or the DNS name of the host on which the host service is installed.

4. The default administrative port number is entered automatically.

This is the port that is used by plug-ins to discover information about the host service. If the port number has been changed in the host service, type in the changed port number.

5. Click Add.

Result

The host service is added and registered with the DataFabric Manager server.

Tip: If you see an error stating that the requested operation did not complete in 60 seconds, wait several minutes and then click Refresh to see if the host service was actually added.

Attention: Host services can be registered with only one DataFabric Manager server at a time. Before you register a host service with a new DataFabric Manager server, you must first manually unregister the host service from the old DataFabric Manager server. To unregister a host service, you must use the DataFabric Manager server hsid command.

After you finish

To make the host service fully operational, you might need to authorize the host service. In a VMware environment, you must edit the host service to add the vCenter Server credentials.

Related references

Administrator roles and capabilities on page 506

Configuring a new host service

After you add a host service to the OnCommand console, you must configure the host service to communicate with the DataFabric Manager server, virtual centers, and storage systems that are required for backup and recovery of your data. You also must reconfigure a host service when the credentials on a storage server have been changed.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Verifying that a host service is registered with the DataFabric Manager server on page 396

2. Authorizing a host service to access storage system credentials on page 397

3. Associating a host service with the vCenter Server on page 397

4. Associating storage systems with a host service on page 399

5. Editing storage system login and NDMP credentials from the Host Services tab on page 400

Related references

Administrator roles and capabilities on page 506

Verifying that a host service is registered with the DataFabric Manager server

A host service must be properly registered with a DataFabric Manager server before the server can discover objects and before you can perform a backup in a virtual environment.

About this task

A host service can be registered with the DataFabric Manager server during installation, or later from the OnCommand console. However, you might want to verify that the registration is still valid when troubleshooting problems or prior to performing an action involving a host service, such as adding a storage system to a host service.

Steps

1. From the Administration menu, select Host Services.

The Host Services tab opens.

2. In the Host Services list, verify that the name of the host service is listed.

When a host service is registered with the DataFabric Manager server, it displays in the host services list.

After you finish

If the host service is not displayed in the list, you must add and configure the new host service.

Authorizing a host service to access storage system credentials

If the host service is not authorized, you must authorize the host service to access the storage system credentials before you can create backup jobs.

Before you begin

The host service must be registered with the DataFabric Manager server prior to performing this task.

About this task

DataFabric Manager server does not support a host service created as a generic service from failover cluster manager in Microsoft Windows.

Steps

1. From the Administration menu, select the Host Services option.

2. In the Host Services list, select the host service that you want and click Edit.

If the host service is not displayed in the list, you must add the host service and verify that it is registered with the DataFabric Manager server.

3. In the Edit Host Service dialog box, click Authorize, review the certificate, and then click OK.

If the Authorize area is unavailable, the host service is already authorized.

When authorization is complete, the Authorize area becomes disabled.

4. Click OK.

After you finish

If you do not have storage systems associated with the host service, you must associate at least one storage system to be able to perform backups.

After you finish editing the host service properties, you can view job progress from the Jobs subtab on the Manage Host Services window, and you can view details about each job from the Jobs tab.

Associating a host service with the vCenter Server

In a VMware environment, you must authorize each host service and associate it with a vCenter Server. This provides part of the communication needed for discovery, monitoring, backup, and recovery of virtual server objects such as virtual machines and datastores.

Before you begin

The host service must be registered with the DataFabric Manager server prior to performing this task.

Have the following information available:

• Name or IP address of the vCenter Server
You must specify the hostname or the fully qualified domain name for host service registration. Do not use localhost.

• User name and password for access to the vCenter Server

About this task

Authorization is required to create backup jobs because it allows the host service to access the storage system credentials.

DataFabric Manager server does not support a host service created as a generic service from failover cluster manager in Microsoft Windows.

Steps

1. From the Administration menu, select the Host Services option.

2. In the Host Services list, select the host service that you want and then click Edit.

If the host service is not displayed in the list, you must add the host service and verify that it is registered with the DataFabric Manager server.

The Edit Host Services dialog box opens.

3. Click Authorize, review the certificate, and then click OK.

If the Authorize area is disabled, the host service is already authorized.

4. Enter the vCenter Server properties.

If the properties fields are populated, then a server is already associated with the host service that you selected.

If the vCenter Properties section is not displayed, you might have selected a host service that is installed in an environment other than VMware.

5. Click OK.

After you finish

If you do not have storage systems associated with the host service, you must associate at least one storage system to be able to perform backups.

After you finish editing the host service properties, you can view job progress from the Jobs subtab on the Manage Host Services window, and you can view details about each job from the Jobs tab.

Associating storage systems with a host service

For each host service instance, you must associate one or more storage systems that host virtual machines for the host service. This enables communication between the service and storage to ensure that storage objects, such as virtual disks, are discovered and that host service features work properly.

Before you begin

If you add a new storage system to associate with the host service, you must have the following storage system information available:

• IP address or name
• Login and NDMP credentials
• Access protocol (HTTP or HTTPS)

Steps

1. From the Administration menu, select Host Services.

2. From the Host Services list, select the host service with which you want to associate storage.

The Storage Systems subtab lists the storage systems currently associated with the host service that you selected.

3. Click Edit to associate a new storage system.

The Edit Host Service dialog box opens.

4. In the Storage Systems area, click Associate.

5. Associate the storage systems with the host service, as follows:

• To associate storage systems shown in the Available Storage Systems list, select the system names and click OK.

• To associate a storage system not listed in Available Storage Systems, click Add, enter the required information, and click OK.

The newly associated storage system displays in the Storage Systems area.

6. In the list of storage systems, verify that the status is Good for the login and NDMP credentials for each storage system.

After you finish

If the login or NDMP status is other than Good for any storage system, you must edit the storage system properties to provide the correct credentials before you can use that storage system.

After you finish editing the host service properties, you can view job progress from the Jobs subtab on the Manage Host Services window, and you can view details about each job from the Jobs tab.

Editing storage system login and NDMP credentials from the Host Services tab

You must have valid login and NDMP credentials for storage systems so that they can be accessed by the DataFabric Manager server. If the server cannot access the storage, your backups might fail.

Before you begin

Have the following storage system information available:

• IP address or name• Login and NDMP credentials• Access protocol (HTTP or HTTPS)

Steps

1. From the Administration menu, select Host Services.

2. In the Host Services list, select a host service.

The storage systems associated with the selected host service display in the Host Services tab.

3. In the Host Services tab, select a storage system with login or NDMP status of Bad or Unknown.

4. Click Edit.

5. In the Edit Host Service dialog box, click Edit.

6. Enter the appropriate login and NDMP credentials and click OK.

7. In the Host Services tab, verify that the Login Status and NDMP Status are Good.

8. Click OK.

The storage system status columns in the Host Services tab update with the new status.

After you finish

After you finish editing the storage system properties, you can view job progress from the Jobs subtab on the Manage Host Services window, and you can view details about each job from the Jobs tab.

Managing host services

Editing a host service

After you add a host service to the OnCommand console, you can edit the properties of the host service, the storage systems associated with the host service, and the system login and NDMP credentials. If you did not authorize the host service when you added it, you can also authorize it from the Edit Host Service dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

If you are adding a new host service, see Configuring a new host service.

Attention: If you change the name of the machine after installing the OnCommand Host Package, you must uninstall the OnCommand Host Package and perform a fresh installation.

Steps

1. Verifying that a host service is registered with the DataFabric Manager server on page 401

2. Associating a host service with the vCenter Server on page 402

3. Associating storage systems with a host service on page 403

4. Editing storage system login and NDMP credentials from the Host Services tab on page 404

Related references

Configuring a new host service on page 395

Administrator roles and capabilities on page 506

Verifying that a host service is registered with the DataFabric Manager server

A host service must be properly registered with a DataFabric Manager server before the server can discover objects and before you can perform a backup in a virtual environment.

About this task

A host service can be registered with the DataFabric Manager server during installation, or later from the OnCommand console. However, you might want to verify that the registration is still valid when troubleshooting problems or prior to performing an action involving a host service, such as adding a storage system to a host service.

Steps

1. From the Administration menu, select Host Services.

The Host Services tab opens.

2. In the Host Services list, verify that the name of the host service is listed.

When a host service is registered with the DataFabric Manager server, it displays in the host services list.

After you finish

If the host service is not displayed in the list, you must add and configure the new host service.

Associating a host service with the vCenter Server

In a VMware environment, you must authorize each host service and associate it with a vCenter Server. This provides part of the communication needed for discovery, monitoring, backup, and recovery of virtual server objects such as virtual machines and datastores.

Before you begin

The host service must be registered with the DataFabric Manager server prior to performing this task.

Have the following information available:

• Name or IP address of the vCenter Server
You must specify the hostname or the fully qualified domain name for host service registration. Do not use localhost.

• User name and password for access to the vCenter Server

About this task

Authorization is required to create backup jobs because it allows the host service to access the storage system credentials.

DataFabric Manager server does not support a host service created as a generic service from failover cluster manager in Microsoft Windows.

Steps

1. From the Administration menu, select the Host Services option.

2. In the Host Services list, select the host service that you want and then click Edit.

If the host service is not displayed in the list, you must add the host service and verify that it is registered with the DataFabric Manager server.

The Edit Host Services dialog box opens.

3. Click Authorize, review the certificate, and then click OK.

If the Authorize area is disabled, the host service is already authorized.

4. Enter the vCenter Server properties.

If the properties fields are populated, then a server is already associated with the host service that you selected.

If the vCenter Properties section is not displayed, you might have selected a host service that is installed in an environment other than VMware.

5. Click OK.

After you finish

If you do not have storage systems associated with the host service, you must associate at least one storage system to be able to perform backups.

After you finish editing the host service properties, you can view job progress from the Jobs subtab on the Manage Host Services window, and you can view details about each job from the Jobs tab.

Associating storage systems with a host service

For each host service instance, you must associate one or more storage systems that host virtual machines for the host service. This enables communication between the service and storage to ensure that storage objects, such as virtual disks, are discovered and that host service features work properly.

Before you begin

If you add a new storage system to associate with the host service, you must have the following storage system information available:

• IP address or name
• Login and NDMP credentials
• Access protocol (HTTP or HTTPS)

Steps

1. From the Administration menu, select Host Services.

2. From the Host Services list, select the host service with which you want to associate storage.

The Storage Systems subtab lists the storage systems currently associated with the host service that you selected.

3. Click Edit to associate a new storage system.

The Edit Host Service dialog box opens.

4. In the Storage Systems area, click Associate.

5. Associate the storage systems with the host service, as follows:

• To associate storage systems shown in the Available Storage Systems list, select the system names and click OK.

• To associate a storage system not listed in Available Storage Systems, click Add, enter the required information, and click OK.

The newly associated storage system displays in the Storage Systems area.

6. In the list of storage systems, verify that the status is Good for the login and NDMP credentials for each storage system.

After you finish

If the login or NDMP status is other than Good for any storage system, you must edit the storage system properties to provide the correct credentials before you can use that storage system.

After you finish editing the host service properties, you can view job progress from the Jobs subtab on the Manage Host Services window, and you can view details about each job from the Jobs tab.

Editing storage system login and NDMP credentials from the Host Services tab

You must have valid login and NDMP credentials for storage systems so that they can be accessed by the DataFabric Manager server. If the server cannot access the storage, your backups might fail.

Before you begin

Have the following storage system information available:

• IP address or name
• Login and NDMP credentials
• Access protocol (HTTP or HTTPS)

Steps

1. From the Administration menu, select Host Services.

2. In the Host Services list, select a host service.

The storage systems associated with the selected host service display in the Host Services tab.

3. In the Host Services tab, select a storage system with login or NDMP status of Bad or Unknown.

4. Click Edit.

5. In the Edit Host Service dialog box, click Edit.

6. Enter the appropriate login and NDMP credentials and click OK.

7. In the Host Services tab, verify that the Login Status and NDMP Status are Good.

8. Click OK.

The storage system status columns in the Host Services tab update with the new status.

After you finish

After you finish editing the storage system properties, you can view job progress from the Jobs subtab on the Manage Host Services window, and you can view details about each job from the Jobs tab.

Deleting a host service

You can delete a VMware or Hyper-V host service from DataFabric Manager server.

Before you begin

The host service must not be in use.

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click the Host Services option.

2. In the Host Services tab, click Delete.

3. In the Delete Host Service Confirmation dialog box, click Yes to delete the host service or click No to cancel the deletion request.

Result

The host service is deleted from DataFabric Manager server and the associated virtual objects are removed from the inventory lists.

If you restart the host service on the server side, the host service attempts to register with the DataFabric Manager server. You can prevent this by manually changing the dfm_server attribute in HSServiceHost.exe.config to a different value before restarting the host service plug-in.
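The dfm_server edit described above could be scripted. The sketch below assumes the attribute is stored as an `<add key="dfm_server" .../>` appSettings entry, a common .NET config layout that this document does not confirm, so treat the file structure as a placeholder.

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch: change the dfm_server value in a .NET-style config.
# The appSettings layout below is an assumption, not confirmed by this doc.
def set_dfm_server(config_xml: str, new_value: str) -> str:
    root = ET.fromstring(config_xml)
    for entry in root.iter("add"):
        if entry.get("key") == "dfm_server":
            entry.set("value", new_value)   # point at a non-existent server
    return ET.tostring(root, encoding="unicode")

sample = ('<configuration><appSettings>'
          '<add key="dfm_server" value="10.0.0.1"/>'
          '</appSettings></configuration>')
print(set_dfm_server(sample, "invalid.example"))
```

Setting the value to an unreachable name keeps the restarted host service plug-in from re-registering until you are ready.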

Related references

Administrator roles and capabilities on page 506

Re-registering a host service if the hostname changes

If the hostname of the host service machine changes, but the IP address remains the same, you must unregister the host service and re-register it using the IP address.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Unregister the host service by completing the following steps:

a. Click the Administration menu, then click the Host Services option.

b. In the Host Services tab, click Delete.

c. In the Delete Host Service Confirmation dialog box, click Yes.

2. Re-register the host service with DataFabric Manager server using the IP address in the Add Host Service dialog box.

Related tasks

Adding and registering a host service on page 394

Related references

Administrator roles and capabilities on page 506

Bringing up a host service after restoring it to a different DataFabric Manager server

After you restore a host service to a different DataFabric Manager server, you must manually bring up the host service before it can receive notifications.

Steps

1. Copy the DataFabric Manager server keys to the new DataFabric Manager server.

2. Enter the following command to reload the SSL service: dfm ssl service reload

3. Enter the following command for each host service: dfm hs configure -i <new DataFabric Manager server IP> <host service name or ID>
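The two dfm commands above can be assembled per host service with a small helper. This sketch only builds the command strings shown in the steps; the IP address and host service names are placeholders, and actually running the commands still requires the dfm CLI on the server.

```python
# Illustrative only: assemble the dfm CLI commands from steps 2 and 3.
# The commands themselves come from this page; nothing is executed here.
def dfm_restore_commands(new_dfm_ip, host_services):
    cmds = ["dfm ssl service reload"]            # step 2: reload the SSL service
    for hs in host_services:                     # step 3: once per host service
        cmds.append(f"dfm hs configure -i {new_dfm_ip} {hs}")
    return cmds

for cmd in dfm_restore_commands("192.0.2.10", ["hs-vmware-01"]):
    print(cmd)
```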

Moving a host service to a different DataFabric Manager server

If you uninstall DataFabric Manager server and start with a fresh database, or if you point your host service to a new DataFabric Manager server with a brand-new database without first unregistering the host service, you might need to clean up the host service repository, either by reinstalling the host service or by removing the old, leftover data from the repository.

Before you begin

If any of the resources from the host service you want to move are in a dataset, they must be removed from the dataset prior to unregistering the host service.

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

This procedure describes how to manually delete the old dataset information rather than reinstalling the host service.

Steps

1. Unregister the host service by completing the following steps:

a. Click the Administration menu, then click the Host Services option.

b. In the Host Services tab, click Delete.

c. In the Delete Host Service Confirmation dialog box, click Yes.

2. Stop the host service by using the Service Control Manager on the host service machine.

3. Clean up the data by performing the following steps:

a. Remove policyenforcementdata.xml and eventrepository.xml from the data stores folder in the host service installation directory.

b. Delete any leftover messages in the message queues.

Message queue folders end with "queue" and are located in the installation directory.

c. Clean up the scheduled jobs.

This step is done from the Microsoft Windows Task Scheduler on the host service machine.

4. Restart the host service by using the Service Control Manager on the host service machine.

5. Re-register the host service with DataFabric Manager server.
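Step 3 of this cleanup can be sketched in code. The layout here (a "data stores" folder containing the two XML files, plus message-queue folders whose names end in "queue") follows the description above, but the exact paths on a real host service machine may differ, so treat this as illustrative.

```python
from pathlib import Path

# Sketch of cleanup step 3, under the assumed layout described above.
def clean_host_service_data(install_dir):
    base = Path(install_dir)
    removed = []
    # Step 3a: remove the two data-store XML files, if present.
    for name in ("policyenforcementdata.xml", "eventrepository.xml"):
        f = base / "data stores" / name
        if f.is_file():
            f.unlink()
            removed.append(f.name)
    # Step 3b: delete leftover messages in folders ending with "queue".
    for queue in base.glob("*queue"):
        if queue.is_dir():
            for msg in queue.iterdir():
                if msg.is_file():
                    msg.unlink()
                    removed.append(msg.name)
    return removed
```

Step 3c (removing the scheduled jobs) is done in the Windows Task Scheduler and is not covered by this sketch.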

Related tasks

Adding and registering a host service on page 394

Related references

Administrator roles and capabilities on page 506

Monitoring host services

Viewing configured hosts

You can view information about the configured VMware or Hyper-V hosts and the status of jobs associated with those hosts from the Host Services tab.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Step

1. Click the Administration menu, then click the Host Services option.

Related references

Administrator roles and capabilities on page 506

Rediscovering virtual object inventory

If you have made changes to the virtual infrastructure managed by a host service and do not want to wait for the host service to automatically notify DataFabric Manager server of the changes, you can update the list of configured VMware or Hyper-V hosts and virtual objects by manually starting a rediscovery job in the Host Services tab.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

Note: If the host service FQDN changes, DataFabric Manager server will be unable to discover any new virtual objects for the host service until you delete it and then add it again.

Steps

1. Click the Administration menu, then click the Host Services option.

2. In the Host Services tab, select a host service, then click Rediscover.

Result

A discovery job is started for the selected host service. When the discovery job finishes, DataFabric Manager server reflects the current list of configured VMware or Hyper-V hosts and virtual objects managed by the host service. You might need to refresh the host service information to see the updated list.

Related tasks

Refreshing host service information on page 408

Related references

Administrator roles and capabilities on page 506

Refreshing host service information

Because a host service can be added or deleted while you are viewing the Host Services tab, you might want to refresh the list of host services.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click the Host Services option.

2. In the Host Services tab, click Refresh.

Result

The current list of host services is retrieved from the DataFabric Manager server.

Related references

Administrator roles and capabilities on page 506

Page descriptions

Host Services tab

You can view information about registered virtual host services; add, configure, edit, and delete host services; rediscover virtual objects; and refresh the virtual object inventory display from the Host Services tab. You can access this window by clicking Administration > Host Services.

• Command buttons on page 409
• Host services list on page 410
• Storage Systems tab on page 410
• More Details tab on page 411
• Jobs tab on page 411

Command buttons

Add Opens the Add Host Service dialog box, which allows you to add a virtual host service to the OnCommand console.

You can add a host service by using the Add button and then configure it later by using the Edit button.

Edit Configures the credentials for a selected host service, which enables the host to be used by the OnCommand console. Typically, you first add a host service and then configure it. Thereafter, you use the Edit button to edit the configuration for a host, if needed.

Delete Deletes a host service from DataFabric Manager server.

Refresh Updates the virtual object inventory information that is displayed.

Rediscover Starts a discovery job for the selected host service.

Host services list

Displays information about the host services that are registered on DataFabric Manager server.

ID The ID of the host on which the host service is installed.

Name The name of the host on which the host service is installed. This might be a fully qualified domain name if the host is on a domain.

IP Address The IP address of the host on which the host service is installed.

Admin Port The host service port that is used for administrative operations.

Management Port The host service port that is used for management operations.

Version The version of the host service software.

Discovery Status Indicates whether the discovery of the host service was successful. "Error" indicates that the discovery was not completely successful. The Jobs tab at the bottom of the list displays the reason that the discovery failed.

Status Indicates whether the host service is running (up) or not running (down).

Storage Systems tab

This section displays information about the storage systems associated with the selected host service.

Storage System Name Indicates the name of the storage system associated with the host service.

You can click the storage system name to display the respective entry in the storage inventory.

IP Address Displays the IP address of the storage system.

System Status Indicates the current status of the storage system.

Login Status (Host Service) Indicates whether or not the host service has valid credentials for the storage system.

NDMP Status Indicates whether or not DataFabric Manager server has valid Network Data Management Protocol (NDMP) credentials for the storage system.

Login Status (Server) Indicates whether or not DataFabric Manager server has valid login credentials for the storage system.

Transport Protocol Specifies the transport protocol used by the host service to communicate with the storage system.

Valid values are http, https, and rpc.

More Details tab

This section displays detailed information about the components of the selected host service. The components that are listed vary depending upon the virtual infrastructure type the host service is managing.

Type Specifies the type of host service plug-in.

Version Specifies the version of the host service plug-in.

Jobs tab

This section displays information about the most recent jobs that ran on the host service. Host service jobs are typically discovery and host service software upgrade jobs.

Job ID The identification number of the job.

Job Type The type of job, which is determined by the policy assigned to the dataset or by the direct request initiated by a user.

Description A description of the job taken from the policy configuration or the job description entered when the job was manually started.

Started By The ID of the user who started the job.

Start The date and time the job started.

Status The running status of the job.

End The date and time the job ended.

Related references

Window layout customization on page 16

Storage systems users

Understanding storage system users

Who local users are

Local users are the users created on storage systems and vFiler units.

Configuring storage system users

Adding a storage system user

To manage your storage systems efficiently, you can configure a local user to a host from the Add Local User to Host page in the Operations Manager console. You can access this page from the OnCommand console Administration menu.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the Administration menu, then click the Storage Systems user option.

A separate browser window opens to the Host Users page in the Operations Manager console.

2. Click the Local Users tab.

3. Configure the local users.

For more information, see the Operations Manager Help.

4. When finished, press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

Related references

Administrator roles and capabilities on page 506

Storage system configuration

Understanding storage system configuration

Configuration of storage systems

You can remotely configure multiple storage systems using Operations Manager.

By creating configuration resource groups and applying configuration settings to them, administrators can remotely configure multiple storage systems from the server on which the DataFabric Manager server is installed. Administrators can also manage CIFS data through configuration management.

List of configuration management tasks for storage systems

You can perform a variety of configuration management tasks by using the storage system configuration management feature.

Following are some of the tasks you can perform:

• Pull a configuration file from a storage system.
• View the contents of each configuration file.
• Edit the configuration file settings (registry options and /etc files).
• Copy or rename configuration files.
• Edit a configuration file to create a partial configuration file.
• Compare configuration files against a standard template.
• View the list of existing configuration files.
• Upgrade or revert file versions.
• Delete a configuration file.
• Import and export configuration files.
• Remove an existing configuration file from a group's configuration list.
• Change the order of files in the configuration list.
• Specify configuration overrides for a storage system assigned to a group.
• Exclude configuration settings from being pushed to a storage system.
• View the Groups configuration summary for a version of Data ONTAP.
• Push configuration files to a storage system or a group of storage systems.
• Delete push configuration jobs.
• View the status of push configuration jobs.

Configuring storage systems

Configuring storage systems

You can configure storage systems from the Storage System Configurations page in the Operations Manager console, which you can access from the OnCommand console Administration menu.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the Administration menu, then click the Storage Systems Configuration option.

A separate browser window opens to the Storage System Configurations page in the Operations Manager console.

2. Configure the storage system.

For more information, see the Operations Manager Help.

3. When finished, press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

Related references

Administrator roles and capabilities on page 506

vFiler configuration

Understanding vFiler unit configuration

List of configuration management tasks for vFiler units

You can perform a variety of configuration management tasks by using the vFiler units configuration management feature.

Following are some of the tasks you can perform:

• Pull a configuration file from a vFiler unit.
• View the contents of each configuration file.
• Edit the configuration file settings (registry options and /etc files).
• Copy or rename configuration files.
• Edit a configuration file to create a partial configuration file.
• Compare configuration files against a standard template.
• View the list of existing configuration files.
• Upgrade or revert file versions.
• Delete a configuration file.
• Import and export configuration files.
• Remove an existing configuration file from a group's configuration list.
• Change the order of files in the configuration list.
• Specify configuration overrides for a vFiler unit assigned to a group.
• Exclude configuration settings from being pushed to a vFiler unit.
• View the Groups configuration summary for a version of Data ONTAP.
• Push configuration files to a group of vFiler units.
• Delete push configuration jobs.
• View the status of push configuration jobs.

Configuring vFiler units

Configuring vFiler units

You can configure vFiler units from the vFiler Configurations page in the Operations Manager console, which you can access from the OnCommand console Administration menu.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on your browser configuration, you can return to the OnCommand console by using the Alt-Tab key combination or clicking the OnCommand console browser tab. After the completion of this task, you can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Click the Administration menu, then click the vFiler Configuration option.

A separate browser window opens to the vFiler Configurations page in the Operations Manager console.

2. Configure the vFiler units.

For more information, see the Operations Manager Help.

3. When finished, press Alt-Tab or click the OnCommand console browser tab to return to the OnCommand console.

Related references

Administrator roles and capabilities on page 506

Options

Page descriptions

Setup Options dialog box

The Setup Options dialog box enables you to configure various options from the DataFabric Manager server, such as chargeback settings, database backup, default threshold values, SNMP settings, monitoring intervals, and naming settings.

• Backup on page 416
• Global Naming Settings on page 416
• Costing on page 417
• Database Backup on page 417
• Default Thresholds on page 417
• Discovery on page 417
• File SRM on page 418
• LDAP on page 418
• Monitoring on page 418
• Management on page 418
• Systems on page 419
• Command buttons on page 419

Setup Options

You can configure the following options from the Setup Options dialog box:

Backup

The Backup option enables you to configure the lag threshold values for all volumes or specific secondary volumes.

• Default Thresholds: Specifies the Backup Manager monitoring intervals.

Global Naming Settings

The Global Naming Settings option specifies how to name a dataset's related objects, such as Snapshot copies, volumes, and qtrees, that result from protection jobs run on that dataset.

Global naming settings apply to all related objects except those that belong to datasets configured with dataset-level naming settings.

• Snapshot copy: Specifies the global-level names for Snapshot copies that are generated by dataset protection jobs.

• Primary volume: Specifies the global-level names of primary volumes that are generated by protection jobs.

• Secondary volume: Specifies the global-level names of secondary volumes that are generated by protection jobs.

• Secondary qtree: Specifies the global-level names of secondary qtrees that are generated by protection jobs.

Costing

The Costing option enables you to configure the chargeback settings to obtain billing reports for the space used by a specific storage object or a group of objects.

• Chargeback: Specifies the parameters you can configure to generate billing reports for the amount of space used by a specific object or a group of objects.

Database Backup

The Database Backup option enables you to configure the backup destination directory and retention count for the DataFabric Manager server database backup, and also manages existing backups.

• Schedule: Specifies the parameters that you can configure to schedule a database backup.

• Completed: Displays the ongoing database backups and the associated database backup events.

Default Thresholds

The Default Thresholds option enables you to configure the global default threshold values for objects such as aggregates, volumes, qtrees, user quotas, resource pools, HBA ports, and hosts.

• Aggregates: Specifies the global default threshold values for monitored aggregates.

• Volumes: Specifies the global default threshold values for monitored volumes.

• Other: Specifies the global default threshold values for host agents, HBA ports, qtrees, user quotas, and resource pools.

Discovery

The Discovery option enables you to set host discovery options, discovery methods, and timeout and interval values. You can also configure networks, and settings for the discovery of storage objects such as networks, storage systems (including clusters), host agents, and Open Systems SnapVault agents.

• Options: Specifies the discovery methods and the monitoring interval for discovery.

• Addresses: Specifies the network addresses which are discovered for new hosts.

• Credentials: Specifies the credentials for network addresses that are used for network and host discovery.

File SRM

The File SRM option enables you to configure the File SRM settings, such as the number of largest files, recently modified files, least accessed files, and least modified files.

• Options: Specifies the file parameters that you can configure.

LDAP

The LDAP option enables you to configure the LDAP settings to successfully retrieve data from the LDAP server.

• Authentication: Specifies the authentication settings that help the DataFabric Manager server to communicate with the LDAP servers.

• Server Types: Specifies settings that are configured to establish compatibility with the LDAP server.

• Servers: Specifies the LDAP server properties and the last authentication status.

Monitoring

The Monitoring option enables you to configure the monitoring intervals for various storage objects monitored by the DataFabric Manager server.

• Storage: Specifies the monitoring parameters for storage objects.

• Protection: Specifies the monitoring parameters for the protection of storage objects.

• Networking: Specifies the monitoring parameters for networking objects.

• Inventory: Specifies the monitoring parameters for inventory objects.

• System: Specifies the monitoring parameters for system objects.

Management

The Management option enables you to configure the connection protocol settings for management purposes.

• Client: Specifies the HTTP and HTTPS settings that you can configure to establish a connection between the client and the DataFabric Manager server.

• Managed Host: Specifies settings that you can configure to establish a connection between the managed host and the DataFabric Manager server.

• Host Agent: Specifies settings that you can configure to establish a connection between the host agent and the DataFabric Manager server.

Systems

The Systems option enables you to configure the system settings, such as event notifications (e-mail notification, pager alerts, and SNMP traps), create custom comment fields, and set audit log options.

• Alarms: Specifies settings that you can configure to send event notifications in different formats.

• Annotations: Enables you to create annotations for the DataFabric Manager server that can be assigned to any resource objects.

• Miscellaneous: Specifies miscellaneous settings that you can configure, such as audit log options, credential TTL cache, and options to preserve your local configuration settings.

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

Backup setup options

Understanding backup options

Backup relationships and lag thresholds

The DataFabric Manager server uses the SnapVault technology of Data ONTAP to manage backup and restore operations. The DataFabric Manager server discovers and imports existing SnapVault relationships by using NDMP.

To use SnapVault, you must have separate SnapVault licenses for both the primary and secondary storage systems.

Lag thresholds are limits set on the time elapsed since the last successful backup. When those limits are exceeded, the DataFabric Manager server generates events that indicate the severity of the condition.

After you add a secondary volume, the default values for lag thresholds are applied. However, you can change these lag thresholds, either for all volumes or for specific secondary volumes, from the Setup Options dialog box.

Managing backup options

Editing backup default threshold options

You can edit the lag threshold values for all volumes or specific secondary volumes from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the Backup option.

3. In the Default Thresholds area, specify the new values, as required.

4. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Page descriptions

Backup Default Thresholds area

You can use the Backup Default Threshold options to configure the lag threshold values for all volumes or specific secondary volumes. When a secondary volume is added to Backup Manager, the default values for the lag thresholds are applied automatically, but you can change them for all volumes or specific secondary volumes.

• Options on page 420
• Command buttons on page 421

Options

You can configure the lag threshold values using the following options:

SnapVault Replica: Displays options to configure thresholds for secondary volumes.

• Out-of-Date Threshold: Specifies the limit at which the backups on a secondary volume are considered obsolete. If this limit is exceeded, DataFabric Manager server generates a SnapVault Replica Out of Date event. The default is 2 days.

• Nearly Out-of-Date Threshold: Specifies the limit at which the backups on a secondary volume are considered nearly obsolete. If this limit is exceeded, DataFabric Manager server generates the SnapVault Replica Nearly Out of Date event. The default is 1.5 days.

Purge Backup Jobs: Displays options to list purge backup job files.

• Older Than: Specifies whether backup job files that are older than the designated period of time are purged. The default is 12.86 weeks.
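The threshold behavior described above amounts to comparing the backup lag against the two limits. The sketch below is illustrative only; the default values (2 days and 1.5 days) and the event names come from this page, while the function itself is not part of the product.

```python
# Illustrative check: lag_days is the time since the last successful backup.
# Defaults match the documented thresholds (2 days, 1.5 days).
def backup_lag_event(lag_days, out_of_date=2.0, nearly_out_of_date=1.5):
    if lag_days > out_of_date:
        return "SnapVault Replica Out of Date"
    if lag_days > nearly_out_of_date:
        return "SnapVault Replica Nearly Out of Date"
    return None       # lag within limits: no event

print(backup_lag_event(1.7))   # exceeds only the nearly-out-of-date limit
```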

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

Global naming settings setup options

Understanding naming settings options

Differences between global and dataset-level naming settings

Global naming settings apply, by default, to all objects that are generated in datasets that are not configured with dataset-level naming settings. Dataset-level naming settings are configured for individual datasets and apply only to the objects that are generated in those datasets.

Global naming settings

Protection-related objects are, by default, automatically named using a global naming format for each object type.

The OnCommand console enables you to customize the global naming settings for one or more related object types. After you customize the global naming format for a particular object type, that customization then applies by default to all newly generated objects of that type that do not belong to a dataset with conflicting dataset-level naming settings.

However, if the newly generated object is part of a dataset that already has customized dataset-level Snapshot copy, primary volume, secondary volume, or secondary qtree naming formats, then the customized naming formats apply to that object.

Dataset-level naming settings

Both the OnCommand console and NetApp Management Console enable you to configure dataset-level naming settings for one or more object types in a particular dataset. Those dataset-level naming settings then apply to the naming of their object types within that dataset.

Within that dataset, dataset-level naming settings have priority over any conflicting global naming settings.

When to customize naming of protection-related objects

To support backup tasks, storage administration tasks, and application administration tasks, you can customize naming of the OnCommand console protection-related objects.

Custom naming and backup tasks

If you are a backup administrator, configuration of custom naming presents you with the following advantages:

• Easy location of secondary and tertiary storage backup items for restoration: Custom naming applied to Snapshot copies, secondary volumes, or secondary qtrees enables you to easily identify the backup objects in which to locate files for restoration.

• Easy identification of backup data by an assortment of criteria: Custom naming, configured with specific naming conventions and applied to the backup objects, enables you to identify those objects by priority, business unit, administrator, backup time, physical container, or logical container.

• Customization of formats to maintain naming used by imported protection relationships: Custom naming can be used by the OnCommand console to assign naming formats for related objects that match the related object naming conventions originally used by imported protection relationships.

• Easy identification of backup data related to protection, SnapManager, or SnapDrive operationsA consistent naming convention enables you to easily find the right data to restore even if thatbackup data is related to such disparate activities as the OnCommand console protectionoperations, SnapManager activity, or SnapDrive activity.

• Easy identification of source and secondary volumes in case of application shutdownIf the database application that is generating the backed up data shuts down, the naming formatsof the associated Snapshot copy, secondary volume, or secondary qtree objects, if set properly,enable you to identify primary volumes from the names of the secondary volumes and Snapshotcopies that are associated with that application.

422 | OnCommand Console Help


Custom naming and storage management tasks

If you are a storage administrator, configuration of custom naming enables you to specify a company-wide naming convention for related objects at the global level or at the dataset level.

Custom naming and application management tasks

If you are an application administrator and have specified a particular dataset in which to store data generated by a particular application, custom naming enables you to specify distinctive naming conventions for that dataset's related object types. The distinctive naming enables you to track the objects that are generated by that application more easily.

Naming settings by format strings

The OnCommand console naming settings enable you to enter custom format strings that contain letter attributes for identifying information to include in the names of protection-related objects.

The attributes that you can include in the format strings cause such identifying characteristics as timestamps, storage system names, custom labels, dataset names, and retention type to be included in the name of a protection-related object type.

Naming settings by naming script

Naming scripts are user-authored scripts for naming some protection-related object types (Snapshot copies, primary volumes, or secondary volumes) that are generated by protection jobs being executed on a dataset.

For each supported protection-related object type (secondary qtrees are the only related object type for which naming scripts are not supported), you can write a script that uses DataFabric Manager server supported environment variables to generate a name for objects of that type that are generated when a protection job is executed on a dataset. When you configure global naming settings in the Setup Options dialog box, you can specify this naming script and path as an alternative to accepting the default naming settings or to entering a custom naming format string in the Setup Options dialog box itself.

You can specify naming scripts for global naming settings only. Naming scripts are not applied to a dataset's protection-related objects if that dataset's dataset-level naming settings specify a custom naming format string instead.

Naming script restrictions and precautions

As you author and apply naming scripts for your dataset's protection-related object types, keep in mind the following points:

• You can specify naming scripts for global naming settings only.
  Naming scripts are not applied to a dataset's protection-related objects if that dataset's dataset-level naming settings specify a custom naming format string instead.

• The output of your naming scripts must be tested.

Administration | 423


The output of the naming script determines what name is generated and given to the dataset's protection-related objects for which the script is written. If the script generates an error message, part of that error message might be included in the names generated for the objects in question.

Naming script limitations

Special limitations apply to protection-related object names that are specified by naming scripts. These limitations are in addition to the naming limitations that apply to protection-related object names that are specified by custom format.

Special limitations that apply to the use of naming scripts include the following:

• The naming script must be in a location that is readable from the DataFabric Manager server.
• You can assign a naming script only to the global naming settings for Snapshot copy, primary volume, and secondary volume objects.
• You cannot assign a naming script to secondary qtree objects.
• A naming script does not apply to an object type in a dataset if that dataset has a dataset-level custom naming format enabled for that object type.
• If you specify an incomplete script path in the global naming settings, an error event and job failure results when a protection job is run.
• If you specify a script that does not exist, an error event and job failure results when a protection job is run.
• If the script generates the name of a volume that already exists, an error event and job failure results when a protection job is run.

Naming script environment variables for Snapshot copies

The OnCommand console supports a set of environment variables that you can reference or use when authoring a global naming script to apply to Snapshot copies that are generated when a protection job is run on a dataset.

You can author your Snapshot copy naming script to use the following variables when specifying the information to include in Snapshot copy names.

ENV_DATASET_ID             The system-assigned dataset ID

ENV_DATASET_NAME           The user-specified dataset name

ENV_DATASET_LABEL          An optional user-specified custom label

ENV_NODE_ID                The system-assigned ID of the node on which the operation is being performed

ENV_NODE_NAME              The user-specified name of the node on which the operation is being performed

ENV_STORAGE_SYSTEM_ID      The system-assigned ID of the storage system on which the operation is being performed

ENV_STORAGE_SYSTEM_NAME    The user-specified name of the storage system on which the operation is being performed

ENV_VOLUME_ID              The system-assigned ID of the volume on which the operation is being performed

ENV_VOLUME_NAME            The user-specified name of the volume on which the operation is being performed

ENV_TIMESTAMP              The timestamp indicating the date and time of Snapshot copy generation

ENV_RETENTION_TYPE         The retention type (Hourly, Daily, Weekly, or Monthly) of the Snapshot copy being generated
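A naming script receives these values as environment variables and emits the generated name on standard output, which the protection job then uses. The following Python sketch is a hypothetical example of such a script, not shipped logic; only the ENV_* variable names come from the list above:

```python
import os

def build_snapshot_name(env):
    """Assemble a Snapshot copy name from the DataFabric Manager server
    environment variables listed above (illustrative sketch)."""
    parts = [
        env.get("ENV_TIMESTAMP", ""),
        env.get("ENV_RETENTION_TYPE", ""),
        env.get("ENV_DATASET_NAME", ""),
        env.get("ENV_VOLUME_NAME", ""),
    ]
    # Skip any variable that is unset so the name has no empty segments.
    return "_".join(p for p in parts if p)

if __name__ == "__main__":
    # The protection job reads the script's standard output as the object name.
    print(build_snapshot_name(os.environ))
```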

Naming script environment variables for primary volumes

The OnCommand console supports a set of environment variables that you can reference or use when authoring a global naming script to apply to primary volumes that are generated when provisioning or protection jobs are run on a dataset.

You can author your primary volume naming script to use the following variables when specifying the information to include in primary volume names.

ENV_DATASET_ID       The system-assigned dataset ID

ENV_DATASET_NAME     The user-specified dataset name

ENV_DATASET_LABEL    The user-specified custom label

ENV_NODE_ID          The system-assigned ID of the node on which the operation is being performed (not supported if no protection policy is assigned)

ENV_NODE_NAME        The user-specified name of the node on which the operation is being performed (not supported if no protection policy is assigned)

Naming script environment variables for secondary volumes

The OnCommand console supports a set of environment variables that you can reference or use when authoring a global naming script to apply to secondary volumes that are generated when protection jobs are run on datasets.

You can author your secondary volume naming script to use the following variables when specifying the information to include in secondary volume names.

ENV_DATASET_ID       The system-assigned dataset ID

ENV_DATASET_NAME     The user-specified dataset name

ENV_DATASET_LABEL    The user-specified custom label

ENV_NODE_ID          The system-assigned ID of the node on which the operation is being performed

ENV_NODE_NAME        The user-specified name of the node on which the operation is being performed


ENV_PRI_STORAGE_SYSTEM_ID      The system-assigned ID of the primary storage system (not supported for Open Systems SnapVault configurations)

ENV_PRI_STORAGE_SYSTEM_NAME    The user-specified name of the primary storage system (not supported for Open Systems SnapVault configurations)

ENV_PRI_VOLUME_ID              The system-assigned ID of the primary root volume (not supported for Open Systems SnapVault configurations)

ENV_PRI_VOLUME_NAME            The user-specified name of the primary root volume (not supported for Open Systems SnapVault configurations)

ENV_CONNECTION_TYPE            The type of relationship ("backup" or "mirror") this secondary volume has with its primary volume

ENV_DP_POLICY_ID               The system-assigned ID of the attached protection policy

ENV_DP_POLICY_NAME             The user-specified name of the attached protection policy

Snapshot copy naming settings

Snapshot copy naming settings determine how Snapshot copies that are generated by OnCommand console protection jobs are named.

Default Snapshot copy naming format

The default global naming attribute format for Snapshot copies generated by OnCommand console protection operations is %T_%R_%L_%H_%N_%A (Timestamp_Retention type_Custom label_Storage system name_Volume name_Application field).

This format applies to the naming of all Snapshot copies generated by OnCommand console protection operations.

Snapshot copy naming attributes

Snapshot copy names can be configured to contain the following attributes:

%T (Timestamp)             Indicates the year, month, date, and time of the Snapshot copy. The timestamp is in the format yyyy-mm-dd_hhmm (along with the UTC offset).

%R (Retention type)        Indicates whether the Snapshot copy's retention class is hourly, daily, weekly, monthly, or unlimited.

%L (Custom label)          Enables you to specify a custom string of alphanumeric, . (period), _ (underscore), or - (hyphen) characters to include in the names of the related objects that are generated by protection jobs that are run on this dataset. If the naming format for a related object type includes the Custom label attribute, then the value that you specify is included in the related object names. If you do not specify a value, then the dataset name is used as the custom label. If you include a blank space in the custom label string, the blank space is converted to the letter x in any Snapshot copy, volume, or qtree object name that includes the custom label as part of its syntax.

%H (Storage system name)   Indicates the name of the storage system that contains the volume from which a Snapshot copy is made.

%N (Volume name) Indicates the name of the volume from which a Snapshot copy is made.

%A (Application fields)    Indicates application-inserted fields. For the NetApp Management Console data protection capability, it is a list of qtrees present in the volume from which a Snapshot copy is made.

%1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix)   Specifies a one-digit, two-digit, or three-digit suffix if required to distinguish Snapshot copies.

Example: Snapshot copy custom naming

If a dataset has a custom label of "my_data," includes a volume named "myVol" on a storage system named "mgt-u35," and is configured for Hourly backup, then the following Snapshot copy format strings result in the following names for the resulting Snapshot copies:

Format string                  Resulting name

%L_%R_%T_%H                    my_data_hourly_2010-03-04_0330+0430_mgt-u35

%T_myunit_%L-mysection-%R      2010-03-04_0330_myunit_my_data-mysection-hourly

myunit-mydept-%R_%H_%T         myunit-mydept-hourly_mgt-u35_2010-03-04_0403-0800

%R_%T_%N_%A                    hourly_2010-03-04_0330_myVol_qtree1_qtree2_qtree3

%T                             2010-03-04_0330 (if no UTC offset)
                               2010-03-04_0330+0530 (for IST)

%L_%R_%H_%2                    my_data_hourly_mgt-u35_01
                               my_data_hourly_mgt-u35_02
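The expansions shown above follow a simple substitution scheme. This sketch reproduces it with a hypothetical helper (the console performs the real expansion internally; UTC offsets and suffix counters are omitted here):

```python
def expand(fmt, values):
    """Expand %-letter attributes in a naming format string.

    values maps attribute letters (for example T, R, L, H, N) to their
    text; literal characters in the format string are copied through.
    """
    out, i = [], 0
    while i < len(fmt):
        if fmt[i] == "%" and i + 1 < len(fmt) and fmt[i + 1] in values:
            out.append(values[fmt[i + 1]])   # substitute the attribute value
            i += 2
        else:
            out.append(fmt[i])               # keep literal text as-is
            i += 1
    return "".join(out)

values = {"T": "2010-03-04_0330", "R": "hourly", "L": "my_data",
          "H": "mgt-u35", "N": "myVol"}
```

With these values, `expand("%L_%R_%T_%H", values)` yields `my_data_hourly_2010-03-04_0330_mgt-u35`, matching the first row of the table apart from the UTC offset.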


Snapshot copy naming exceptions

The following are Snapshot copy naming exceptions:

• When an SMHV plug-in creates a Snapshot copy in a host system, the plug-in creates two Snapshot copies for every backup.
  The second Snapshot copy has the string "_backup" appended to the end of the Snapshot copy name, irrespective of the order of attribute selection.

• When an SMHV plug-in creates a Snapshot copy, and the Snapshot copy name exceeds the SMHV Snapshot copy character limit, the SMHV plug-in does not truncate the name by removing the characters of the Application fields attribute.
  Instead, it truncates the name by removing characters before the Application fields attribute, from right to left.

• For Snapshot copies created by an SMHV plug-in, if the Application fields attribute is not specified, it is added automatically at the end of the naming format.

• For Snapshot copies created by an SMVI plug-in, if the Application fields attribute is not specified, it is not added to the naming format.

• For Snapshot copies created by the NetApp Management Console data protection capability, if the Application fields attribute is not mentioned, it is not added implicitly in the naming format.

• When an SMVI plug-in creates a Snapshot copy, and the Snapshot copy name exceeds the SMVI Snapshot copy character limit, the SMVI plug-in does not truncate the name by removing the characters of the Application fields attribute.
  Instead, it truncates the name by removing characters before the Application fields attribute, from right to left.

• If you want to use scripts to generate the Snapshot copy name, and the Snapshot copy is generated by the SnapManager plug-in in the host system, the plug-in does not use the user script.
  Instead, the plug-in uses the global naming format to create the Snapshot copy name. The user script is used only if the Snapshot copy is created by the NetApp Management Console data protection capability.

• Snapshot copies created by the host system are timestamped in the local time zone of the host system.

Snapshot copy naming restrictions

The following are Snapshot copy naming restrictions:

• The Timestamp attribute is mandatory.
• The Application fields attribute is mandatory in custom Snapshot copy naming for all application datasets controlled by the SMHV plug-in host services.
• The Snapshot copy name can contain only ASCII alphanumeric characters, _ (underscore), - (hyphen), + (plus sign), and . (dot). Any other characters cause errors.
• If there is no custom label for the dataset, the Snapshot copy name defaults to the dataset name.
• UTF-8 encoded characters are not supported.


• Including four digits reserved for suffixes, a Snapshot copy name cannot exceed 128 characters. The Snapshot copy name, excluding the suffixes, can be no more than 124 characters. If the generated Snapshot copy name exceeds 124 characters, then the name is truncated by removing characters from right to left. To avoid possible truncation of timestamp information from the Snapshot copy name, the best practice is to place the timestamp %T attribute at the left end of the format string.
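These character and length rules can be summarized as a normalization step. The sketch below is one interpretation of the restrictions above (the helper name is invented; "removing characters from right to left" is read as keeping the leftmost 124 characters):

```python
import re

# Characters permitted in a Snapshot copy name per the restrictions above.
_ALLOWED = re.compile(r"^[A-Za-z0-9_+.\-]*$")

def normalize_snapshot_name(name, limit=124):
    """Apply the documented Snapshot copy naming rules (sketch)."""
    name = name.replace(" ", "x")   # blank spaces become the letter x
    name = name[:limit]             # truncate, keeping the leftmost characters
    if not _ALLOWED.match(name):
        raise ValueError("unsupported character in Snapshot copy name")
    return name
```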

Primary volume naming settings

Primary volume naming settings determine how primary storage volumes generated by OnCommand console provisioning operations are named.

Default primary volume naming format

The default global naming format for primary volumes generated by OnCommand console provisioning operations is %L (Custom label).

The global format applies to the naming of all NAS primary volumes generated by OnCommand console provisioning operations. The default global naming format does not apply to SAN volumes, because they are named directly at the time of provisioning.

Primary volume naming attributes

Primary volume names can be configured to contain the following attributes:

%L (Custom label)    Enables you to specify a custom string of alphanumeric, . (period), _ (underscore), or - (hyphen) characters to include in the names of the related objects that are generated by protection jobs that are run on this dataset. If the naming format for a related object type includes the Custom label attribute, then the value that you specify is included in the related object names. If you do not specify a value, then the dataset name is used as the custom label. If you include a blank space in the custom label string, the blank space is converted to the letter x in any Snapshot copy, volume, or qtree object name that includes the custom label as part of its syntax.

%D (Dataset name)    Indicates the actual name of the dataset in which a volume is created.

%1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix)   Displays a one-digit, two-digit, or three-digit suffix if required to distinguish primary volumes.


Example: Primary volume custom naming

If a dataset named "mydataset" has a custom label of "mydata", and a new volume is provisioned through the OnCommand console, then the following primary volume format strings result in the following names for the resulting primary volumes:

Format string        Resulting name

%L_%D                mydata_mydataset

%L                   mydata

pri_%L               pri_mydata

myunit-privol        myunit-privol

%L_%D_%3             mydata_mydataset_001
                     mydata_mydataset_002

Primary volume naming exceptions

The following are primary volume naming exceptions:

• If the primary volume's naming format in one or more datasets is customized, then the primary volumes generated in those datasets are named according to the dataset-level format.

• For a primary volume to be provisioned, if you specify the primary volume name from the OnCommand console user interface, then the name that you specify in the user interface takes precedence over the options for primary volume naming settings.

Primary volume naming restrictions

The following are primary volume naming restrictions:

• All naming attributes are optional.
• At least one attribute must be enabled at any point of time, or there must be some free-form text.
• In case of a name conflict, numerical suffixes are appended to the names.
• If the Custom label attribute is included in the naming format, but no custom name exists for a dataset, the resulting names use the actual dataset name instead.
• Including four digits reserved for suffixes, a primary volume name cannot exceed 64 characters. The primary volume name, excluding the suffixes, can be no more than 60 characters. If the generated primary volume name exceeds 60 characters, then the name is truncated by removing characters from left to right.
• The primary volume name can contain only ASCII alphanumeric characters and _ (underscore). Any other characters cause errors. The primary volume name cannot start with a number.
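The volume-name rules above can likewise be sketched as a validation step (the helper name is invented; truncation "from left to right" is read as dropping leading characters so the rightmost 60 remain):

```python
import re

# A primary volume name: ASCII alphanumerics and underscore only,
# and it must not start with a number.
_VOLUME_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def check_primary_volume_name(name, limit=60):
    """Truncate and validate a generated primary volume name (sketch)."""
    if len(name) > limit:
        name = name[-limit:]        # drop characters from the left
    if not _VOLUME_NAME.match(name):
        raise ValueError("invalid primary volume name: " + name)
    return name
```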


Secondary volume naming settings

Secondary volume naming settings determine how secondary storage volumes that are generated by OnCommand console protection operations are named.

Default secondary volume naming format

The default global naming format for secondary volumes generated by OnCommand console protection operations is %V (Primary volume name).

This format applies to the naming of all secondary volumes generated by OnCommand console protection operations.

Secondary volume naming attributes

Secondary volume names can be configured to contain the following attributes:

%L (Custom label)    Enables you to specify a custom string of alphanumeric, . (period), _ (underscore), or - (hyphen) characters to include in the names of the related objects that are generated by protection jobs that are run on this dataset. If the naming format for a related object type includes the Custom label attribute, then the value that you specify is included in the related object names. If you do not specify a value, then the dataset name is used as the custom label. If you include a blank space in the custom label string, the blank space is converted to the letter x in any Snapshot copy, volume, or qtree object name that includes the custom label as part of its syntax.

%C (Type) Indicates the connection type (backup or mirror).

%S (Primary storage system name)   Indicates the name of the primary storage system.

%V (Primary volume name)   Indicates the name of the primary volume.

%1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix)   Displays a one-digit, two-digit, or three-digit suffix if required to distinguish secondary volumes.

Example: Secondary volume custom naming

If a dataset has a custom label of "mydata," includes a primary volume named "myVol1" on a primary storage system named "myhost1," and is configured for Hourly backup, then the following secondary volume format strings result in the following names for the resulting secondary volumes:


Format string        Resulting name

%L_%C_%S_%V          mydata_backup_myhost1_myVol1

%C-%S-%L-destVol     backup-myhost1-mydata-destVol

%C_%L                backup_mydata

%V                   myVol1

%C_%L_%1             backup_mydata_1
                     backup_mydata_2

Secondary volume naming exceptions

The following is the secondary volume naming exception:

• If a secondary volume's naming format in one or more datasets is customized, then the secondary volumes generated in those datasets are named accordingly.

Secondary volume naming restrictions

The following are secondary volume naming restrictions:

• All naming attributes are optional.
• At least one attribute must be enabled at any point of time, or there must be some free-form text.
• In case of a name conflict, numerical suffixes are appended to the names.
• If the Fan-in feature is enabled for a backup destination, and two or more qtrees from different primary volumes are backed up into the same secondary volume, then the Primary storage system name and Primary volume name attributes of the source volumes are randomly selected to form the names of the secondary volumes.
  For example, if host1:/vol1/qtr1, host2:/vol2/qtr2, and host3:/vol3/qt3 are backed up to one secondary volume, then all the names for the secondary qtrees in that volume include one common <Host name> and <Volume name> attribute combination character string. That common string is either "host1_vol1", "host2_vol2", or "host3_vol3".
• If the Custom label attribute is included in the naming format, but no custom name exists for a dataset, the resulting names use the actual dataset name instead.
• Including four digits reserved for suffixes, a secondary volume name cannot exceed 64 characters. The secondary volume name, excluding the suffixes, can be no more than 60 characters. If the generated secondary volume name exceeds 60 characters, then the name is truncated by removing characters from left to right.
• The secondary volume name can contain only ASCII alphanumeric characters and _ (underscore). Any other characters cause errors. The secondary volume name cannot start with a number.


• When taking a backup of Open Systems SnapVault (OSSV) directories, if you include the %V (Primary volume name) attribute in the naming format, %V is replaced with %S (Primary storage system name), because OSSV does not have the concept of a volume.
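The OSSV substitution can be pictured as a pre-processing pass over the format string before expansion (illustrative sketch; the real substitution is internal to the protection job):

```python
def ossv_adjust(fmt):
    """For Open Systems SnapVault sources, %V (primary volume name) has no
    meaning, so it is replaced by %S (primary storage system name)."""
    return fmt.replace("%V", "%S")
```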

Secondary qtree naming settings

Secondary qtree naming settings determine how secondary storage qtrees that are generated by OnCommand console protection operations are named.

Default secondary qtree naming format

The default global naming format for secondary qtrees generated by OnCommand console protection operations is %Q (Primary qtree name).

This format applies to the naming of all secondary qtrees generated by OnCommand console protection operations.

Secondary qtree attributes

Secondary qtree names can be configured to contain the following attributes:

%Q (Primary qtree name)   The name of the primary qtree.

%L (Custom label)    Enables you to specify a custom string of alphanumeric, . (period), _ (underscore), or - (hyphen) characters to include in the names of the related objects that are generated by protection jobs that are run on this dataset. If the naming format for a related object type includes the Custom label attribute, then the value that you specify is included in the related object names. If you do not specify a value, then the dataset name is used as the custom label. If you include a blank space in the custom label string, the blank space is converted to the letter x in any Snapshot copy, volume, or qtree object name that includes the custom label as part of its syntax.

%S (Primary storage system)   Indicates the name of the primary storage system.

%V (Primary volume name)   Indicates the name of the primary storage volume.

%1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix)   Displays a one-digit, two-digit, or three-digit suffix if required to distinguish secondary qtrees.

Example: Secondary qtree custom naming

If a dataset has a custom label of "mydata," includes a primary qtree named "qtree1" and a primary volume named "myVol1" on a primary storage system named "myhost1," and is configured for Hourly backup, then the following secondary qtree format strings result in the following names for the resulting secondary qtrees:

Format string          Resulting name

%L_%S_%V_%Q            mydata_myhost1_myVol1_qtree1

%L_%S_%Q               mydata_myhost1_qtree1

%V_%Q                  myVol1_qtree1

%L_%S_%V_%Q_%3         mydata_myhost1_myVol1_qtree1_001
                       mydata_myhost1_myVol1_qtree1_002

Secondary qtree naming exceptions

The following is the secondary qtree naming exception:

• If the secondary qtree naming format in one or more datasets is customized, the secondary qtrees generated in those datasets are named accordingly.

Secondary qtree naming restrictions

The following are secondary qtree naming restrictions:

• All naming attributes are optional.
• At least one attribute must be enabled at any point of time, or there must be some free-form text.
• In case of a name conflict, numerical suffixes are appended to the names.
• If the Custom label attribute is included in the naming format, but no custom name exists for a dataset, the resulting names use the actual dataset name instead.
• Including four digits reserved for suffixes, a secondary qtree name cannot exceed 64 characters. The secondary qtree name, excluding the suffixes, can be no more than 60 characters. If the generated secondary qtree name exceeds 60 characters, then the name is truncated by removing characters from left to right.
• The secondary qtree name can contain only ASCII alphanumeric characters, _ (underscore), - (hyphen), and . (dot). Any other characters cause errors.
• When taking a backup of Open Systems SnapVault directories, if you include the Primary volume name attribute in the naming format, it is replaced by the Primary storage system attribute.
• When taking a backup of Open Systems SnapVault directories, if you include the Primary qtree name attribute in the naming format, it is replaced by the path of the root directory. The characters of the root directory path that are not supported are converted to the letter x.
• When taking a backup of Open Systems SnapVault directories, if you include the Primary qtree name attribute in the naming format, and if the directory path contains non-ASCII characters, or if the directory path is / (slash), then the Primary qtree name attribute is replaced with the directory ID.

Managing naming settings options

Globally customizing naming of protection-related objects

To improve recognition and usability, you can globally customize the naming formats of all protection-related objects (Snapshot copies, primary volumes, secondary volumes, or secondary qtrees that are generated by the OnCommand console protection or provisioning operations).

Before you begin

• You must have reviewed the Guidelines for globally customizing naming of protection-related objects on page 436.

• You must have reviewed the Requirements and restrictions when customizing naming of protection-related objects on page 438.

• Have the following custom naming information available:

  • The protection-related object types whose naming you want to customize
  • If you want to customize naming by selecting and ordering OnCommand console-supported attributes in the Name format field, the naming attributes that you want to include in the naming format
  • If you want to customize naming by using a pre-authored naming script, the name and location of that script

• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

Configuring global custom naming for a related object type applies to all objects of that type that are generated by OnCommand console protection or provisioning jobs, unless those objects belong to a dataset that is already configured with dataset-level custom naming for the object type in question.

Steps

1. In the OnCommand console Setup Options dialog box, select Naming Settings and select the object type (Snapshot copy, primary volume, secondary volume, or secondary qtree) whose global naming format you want to customize.

2. Customize the naming settings for the selected object type.

• Select Use naming format if you want to customize naming by selecting and ordering naming attributes.
  Then use the Name format field to complete your selection and ordering of naming attributes.

• Select Use naming script if you want to customize naming by a pre-authored script.
  Then complete the Script path and Run as fields to specify the path to find the script and the authorized user under whose identity to execute the script.

3. Click OK.

Result

The OnCommand console applies the custom global naming format to all protection job or provisioning job generated objects of the type that you customized. The only exception is objects that belong to a dataset that has dataset-level naming customized for that object type.

Related references

Administrator roles and capabilities on page 506

Guidelines for globally customizing naming of protection-related objects

Before you use the Settings options to customize global naming for protection-related objects (the Snapshot copies, primary volumes, secondary volumes, or secondary qtrees that are generated by OnCommand console protection jobs or provisioning jobs), you need to decide how you want to customize the global naming formats.

Custom naming by script

You can use a script to customize global naming settings of your Snapshot copy, primary volume, or secondary volume objects if custom naming by OnCommand console-supported attribute selection and ordering is inadequate for your global Snapshot copy, primary volume, and secondary volume naming needs.

Note: The OnCommand console does not support global customization of secondary qtree naming by script.


If you intend to customize global naming by script, you should specify the path location to a pre-authored script and the name of an application to read and execute that script.

Note: The output of the naming script determines what name is generated and given to the dataset's related objects for which the script is written. If the script generates an error message, part of that error message might be included in the names generated for the objects in question.

Custom naming by attribute selection and ordering

The most common method of customizing global naming of your Snapshot copy, primary volume, secondary volume, or secondary qtree objects is by specifying the selection and ordering of their OnCommand console-supported naming attributes. Each object type for which custom naming is supported has its own set of attributes.


Snapshotcopy namingattributes

Attributes that can be specified and ordered to customize global Snapshot copynaming include the following:

• %T (Timestamp) (required)
• %R (Retention type) (retention class of the Snapshot copy)
• %L (Custom label) (custom label of the containing dataset, if one exists)
• %H (Storage system name) (storage system of the containing volume)
• %N (Volume name) (the containing volume)
• %A (Application fields)
• %1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix) (applied as necessary if naming differentiation is required)
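As a rough illustration of attribute-based naming, the format can be treated as a template in which each %X token is replaced by its value. The helper below is a hypothetical sketch (the function name and substitution logic are assumptions, not the console's actual implementation):

```python
# Hypothetical sketch of attribute-based name generation; the console's
# real substitution logic is not published.
def expand_name(fmt: str, values: dict) -> str:
    """Replace each %X token in fmt with its attribute value."""
    out = fmt
    for token, value in values.items():
        out = out.replace(token, value)
    return out

# Expanding a Snapshot copy format from the attributes listed above:
name = expand_name("%T_%R_%L_%H_%N", {
    "%T": "2010-03-04_03.30.45+0430",  # timestamp (required)
    "%R": "hourly",                     # retention type
    "%L": "mydata",                     # custom label
    "%H": "mgt-u35",                    # storage system name
    "%N": "myVol",                      # volume name
})
# name == "2010-03-04_03.30.45+0430_hourly_mydata_mgt-u35_myVol"
```

The same token-substitution idea applies to the primary volume, secondary volume, and secondary qtree formats; only the supported attribute set differs.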

Primary volume naming attributes

Attributes that can be specified and ordered to customize global primary volume naming include the following:

• %L (Custom label) (custom label of the containing dataset, if one exists)
• %D (Dataset name) (actual name of the containing dataset)
• %1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix) (applied as necessary if naming differentiation is required)

Secondary volume naming attributes

Attributes that can be specified and ordered to customize global secondary volume naming include the following:

• %L (Custom label) (custom label of the containing dataset, if one exists)
• %C (Type) (the connection type: backup or mirror)
• %S (Primary storage system name) (storage system of the volume being backed up or mirrored)
• %V (Primary volume name) (volume being backed up or mirrored)
• %1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix) (applied as necessary if naming differentiation is required)

Secondary qtree naming attributes

Attributes that can be specified and ordered to customize global secondary qtree naming include the following:

• %Q (Primary qtree name) (qtree being backed up)
• %L (Custom label) (custom label of the containing dataset, if one exists)
• %S (Primary storage system name) (storage system of the volume being backed up or mirrored)
• %V (Primary volume name) (volume being backed up or mirrored)
• %1 (One-digit suffix), %2 (Two-digit suffix), or %3 (Three-digit suffix) (applied as necessary if naming differentiation is required)

Administration | 437


Requirements and restrictions when customizing naming of protection-related objects

You must follow certain requirements and restrictions when customizing the naming of protection-related objects (Snapshot copies, volumes, or qtrees that are generated by protection jobs run on datasets).

Snapshot copies

The following are the naming restrictions for Snapshot copies:

• The Timestamp attribute is mandatory in the name.
• The Application fields attribute is mandatory in custom Snapshot copy naming for all application datasets controlled by Host services.
• Snapshot copy names can contain only ASCII alphanumeric characters, _ (underscore), - (hyphen), + (plus sign), and . (dot). Any other characters cause errors.
• If there is no custom label for the dataset, the Snapshot copy name defaults to the dataset name.
• UTF-8 encoded characters are not supported.
• Snapshot copy names can be up to 124 characters long. If the generated Snapshot copy name exceeds 124 characters, the name is truncated by removing characters from right to left. An additional 4 characters are reserved for suffixes; therefore, the maximum length of a Snapshot copy name, including the suffix, is 128 characters.
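The truncation rule can be sketched as follows. This is an illustrative helper (the names and the exact suffix handling are assumptions), not the console's code:

```python
SNAP_BASE_LIMIT = 124  # base-name limit; 4 more characters are reserved
                       # for a numerical suffix (128 characters total)

def fit_snapshot_name(name: str, suffix: str = "") -> str:
    # Truncate from right to left: characters are removed from the right
    # end, so the leftmost 124 characters are kept.
    return name[:SNAP_BASE_LIMIT] + suffix

short = fit_snapshot_name("2010-03-04_03.30.45+0430_hourly_mydata")
long_name = fit_snapshot_name("x" * 150, "_001")  # truncated, then suffixed
# len(long_name) == 128
```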

Primary volume

The following are the naming restrictions for primary volumes:

• All naming attributes are optional.
• At least one attribute must be enabled at any point in time, or there must be some free-form text.
• In case of a name conflict, numerical suffixes are appended to the names.
• If the Custom label attribute is included in the naming format, but no custom name exists for a dataset, the resulting names use the actual dataset name instead.
• Primary volume names can be up to 60 characters long. If the generated primary volume name exceeds 60 characters, the name is truncated by removing characters from left to right. An additional 4 characters are reserved for suffixes; therefore, the maximum length of a primary volume name, including the suffix, is 64 characters.
• Primary volume names can contain only ASCII alphanumeric characters and _ (underscore). Any other characters cause errors. A primary volume name cannot start with a number.
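A sketch of these volume-name rules (left-to-right truncation, the allowed character set, and the no-leading-digit rule); the helper is hypothetical and, for simplicity, does not handle the edge case of a truncated name that happens to start with a digit:

```python
import re

VOL_BASE_LIMIT = 60  # base-name limit; 4 characters reserved for a suffix

# ASCII alphanumerics and underscore only; must not start with a number.
VALID_VOLUME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def fit_volume_name(name: str) -> str:
    # Truncate from left to right: characters are removed from the left
    # end, so the rightmost 60 characters are kept.
    base = name[-VOL_BASE_LIMIT:]
    if not VALID_VOLUME.match(base):
        raise ValueError(f"invalid volume name: {base!r}")
    return base

assert fit_volume_name("mydata_mydataset") == "mydata_mydataset"
assert len(fit_volume_name("v" * 70)) == 60
```

The same limits apply to secondary volume names, described below.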

Secondary volume

The following are the naming restrictions for secondary volumes:

• All naming attributes are optional.


• At least one attribute must be enabled at any point in time, or there must be some free-form text.
• In case of a name conflict, numerical suffixes are appended to the names.
• If the Fan-in feature is enabled for a backup destination, and two or more qtrees from different primary volumes are backed up into the same secondary volume, then the Primary storage system name and Primary volume name attributes of the source volumes are randomly selected to form the names of the secondary volumes. For example, if host1:/vol1/qtr1, host2:/vol2/qtr2, and host3:/vol3/qt3 are backed up to one secondary volume, then all the names for the secondary qtrees in that volume include one common <Host name> and <Volume name> attribute combination character string. That common string is either "host1_vol1", "host2_vol2", or "host3_vol3".
• If the Custom label attribute is included in the naming format, but no custom name exists for a dataset, the resulting names use the actual dataset name instead.
• Secondary volume names can be up to 60 characters long. If the generated secondary volume name exceeds 60 characters, the name is truncated by removing characters from left to right. An additional 4 characters are reserved for suffixes; therefore, the maximum length of a secondary volume name, including the suffix, is 64 characters.
• Secondary volume names can contain only ASCII alphanumeric characters and _ (underscore). Any other characters cause errors. A secondary volume name cannot start with a number.

Secondary qtree

The following are the naming restrictions for secondary qtrees:

• All naming attributes are optional.
• At least one attribute must be enabled at any point in time, or there must be some free-form text.
• In case of a name conflict, numerical suffixes are appended to the names.
• If the Custom label attribute is included in the naming format, but no custom name exists for a dataset, the resulting names use the actual dataset name instead.
• Secondary qtree names can be up to 60 characters long. If the generated secondary qtree name exceeds 60 characters, the name is truncated by removing characters from left to right. An additional 4 characters are reserved for suffixes; therefore, the maximum length of a secondary qtree name, including the suffix, is 64 characters.
• Secondary qtree names can contain only ASCII alphanumeric characters, _ (underscore), - (hyphen), and . (dot). Any other characters cause errors.
• For non-qtree data (Qtree0) naming on the secondary, if the secondary qtree name ends up being - (hyphen) or "etc", then the naming format for the non-qtree data secondary qtree is (Custom label)_(Primary storage name)_(Primary volume name).
• When taking a backup of Open Systems SnapVault directories, if you include the Primary volume name attribute in the naming format, it is replaced by the Primary storage system attribute.
• When taking a backup of Open Systems SnapVault directories, if you include the Primary qtree name attribute in the naming format, it is replaced by the path of the root directory.


Characters in the root directory path that are not supported are converted to the letter "x".
• When taking a backup of Open Systems SnapVault directories, if you include the Primary qtree name attribute in the naming format, and if the directory path contains non-ASCII characters, or if the directory path is / (slash), then the Primary qtree name attribute is replaced with the directory ID.

Page descriptions

Global Naming Settings Snapshot Copy area

You can customize the Snapshot copy naming settings in the Setup Options dialog box to determine global-level names for Snapshot copies generated by dataset protection jobs.

Options

You can specify a Snapshot copy name at the global level by specifying either a naming format or the path to a naming script.

Note: The global-level Snapshot copy naming settings apply to any Snapshot copy that is generated by an OnCommand console protection job that is executed on a dataset that does not have dataset-level Snapshot copy naming settings specified. Dataset-level Snapshot copy naming settings take precedence over global-level naming settings.
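The precedence rule in the note reduces to a simple fallback, sketched below with a hypothetical helper (the function name is an assumption used only for illustration):

```python
from typing import Optional

def effective_format(global_fmt: str, dataset_fmt: Optional[str]) -> str:
    # Dataset-level naming settings take precedence; the global-level
    # format applies only when no dataset-level format is specified.
    return dataset_fmt if dataset_fmt is not None else global_fmt

assert effective_format("%T_%R_%L_%H_%N_%A", None) == "%T_%R_%L_%H_%N_%A"
assert effective_format("%T_%R_%L_%H_%N_%A", "%T_%L") == "%T_%L"
```

The same precedence applies to the primary volume, secondary volume, and secondary qtree naming settings described later in this section.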

Use Naming Format

Selecting this option specifies that global-level Snapshot copy naming is determined by the format that is specified in the Name Format field.

Name Format

Enables you to specify global-level naming attributes of any Snapshot copy that is generated by OnCommand console protection jobs. You can type the following attributes (separated by the underscore character) in this field in any order:

• %T (timestamp attribute)
The year, month, day, and time of the Snapshot copy.
• %R (retention type attribute)
The Snapshot copy's retention class (Hourly, Daily, Weekly, Monthly, or Unlimited).
• %L (custom label attribute)
The custom label, if any, that is specified for the Snapshot copy's containing dataset. If no custom label is specified, then the dataset name is included in the Snapshot copy name. The Custom label attribute enables you to specify custom alphanumeric characters, . (period), _ (underscore), or - (hyphen) to include in the names of the related objects that are generated by protection jobs that are run on this dataset. If the naming format for a related object type includes the Custom label attribute, then the value that you specify is included in the related object names. If you do not specify a value, then the dataset name is used as the custom label. If you include a blank space in the custom label string, the blank space is converted to the letter x in any Snapshot copy, volume, or qtree object name that includes the custom label as part of its syntax.
• %H (storage system name attribute)
The name of the storage system that contains the volume from which a Snapshot copy is made.
• %N (volume name attribute)
The name of the volume from which a Snapshot copy is made.
• %A (application fields attribute)
Data inserted by outside applications into the name of the Snapshot copy. In the case of regular datasets, %A contains a list of qtrees on the volume for which the Snapshot copy is made.
• %1, %2, %3 (digit suffix)
A one-digit, two-digit, or three-digit suffix, if required, to distinguish Snapshot copies with otherwise matching names.

The default Snapshot copy naming format is %T_%R_%L_%H_%N_%A.

Name Preview

Displays a sample Snapshot copy name based on the attributes that you entered in the Name Format field.

For example, if the Snapshot copy naming format is customized as %T_%R_%L_%H_%N_%A, and if a dataset with custom label "mydata" has some data backed up "hourly" from primary storage to the backup destination mgt-u35:/myVol, then the name of the Snapshot copy on myVol is "2010-03-04_03.30.45+0430_hourly_mydata_mgt-u35_myVol_(Application fields)".
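The timestamp portion of that preview follows a year-month-day_hour.minute.second+offset pattern. As a sketch, the apparent pattern (inferred from the sample; the console's actual format string is not documented here) can be reproduced with strftime:

```python
from datetime import datetime, timedelta, timezone

# Reconstructing the sample timestamp "2010-03-04_03.30.45+0430";
# the format string is an assumption inferred from the preview.
ts = datetime(2010, 3, 4, 3, 30, 45,
              tzinfo=timezone(timedelta(hours=4, minutes=30)))
stamp = ts.strftime("%Y-%m-%d_%H.%M.%S%z")
# stamp == "2010-03-04_03.30.45+0430"
```

Note that every character in the result comes from the restricted Snapshot copy character set (alphanumerics, hyphen, dot, plus, underscore) listed earlier.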

Use Naming Script

Selecting this option specifies at a global level the path and name of a user-authored naming script. The script specifies how a Snapshot copy that is generated by OnCommand console protection jobs is named. Naming scripts for Snapshot copies apply to datasets of physical storage objects only. They do not apply to datasets of virtual objects. Host services will use the Name Format instead of the script.

Script Path: The path and file name of a user-supplied naming script. The naming script must be in a location that is readable by the DataFabric Manager server.

Run As: The name of the authorized user under whose identity the operations that are specified in the naming script are executed.


Global Naming Settings Primary Volume area

You can customize the primary volume naming settings in the Setup Options dialog box to determine global-level names of primary volumes that are generated by OnCommand console protection jobs.

Options

You can specify a primary volume name at the global level by specifying either a naming format or the path to a naming script.

Note: The global-level primary volume naming settings apply to any primary volume that is generated by an OnCommand console protection job that is executed on a dataset that does not have dataset-level primary volume naming settings specified. Dataset-level primary volume naming settings take precedence over global-level naming settings.

Use Naming Format

Selecting this option specifies that global-level primary volume naming is determined by the format that is specified in the Name Format field.

Name Format

Enables you to specify at a global level the naming attributes that are to be included in the name of any primary volume that is generated by OnCommand console protection jobs. You can type the following attributes (separated by the underscore character) in this field in any order:

• %L (custom label attribute)
The custom label, if any, that is specified for the primary volume's containing dataset. If no custom label is specified, then the dataset name is included in the primary volume name.
• %D (dataset name)
The actual name of the dataset in which a volume was created.
• %1, %2, %3 (digit suffix)
A one-digit, two-digit, or three-digit suffix, if required, to distinguish primary volumes with otherwise matching names.

The default primary volume naming format is %L.

Name Preview

Displays a sample primary volume name based on the attributes that you entered in the Name Format field.

For example, if the primary volume naming format is customized as %L_%D, and a new volume is provisioned through the OnCommand console in a dataset named "mydataset" that has the custom label "mydata", then the name of the primary volume is "mydata_mydataset".

Use Naming Script

Selecting this option specifies at a global level the path location and name of a user-authored naming script. The script specifies how a primary volume that is generated by OnCommand console protection jobs is named. Host services will use the Name Format instead of the script.

Script Path: The path and file name of a user-supplied naming script. The naming script must be in a location that is readable by the DataFabric Manager server.

Run As: The name of the authorized user under whose identity the operations that are specified in the naming script are executed.

Global Naming Settings Secondary Volume area

You can customize the secondary volume naming settings in the Setup Options dialog box to determine the global-level names of secondary volumes that are generated by OnCommand console protection jobs.

Options

You can specify a secondary volume name at the global level by specifying either a naming format or the path to a naming script.

Note: The global-level secondary volume naming settings apply to any secondary volume that is generated by an OnCommand console protection job that is executed on a dataset that does not have dataset-level secondary volume naming settings specified. Dataset-level secondary volume naming settings take precedence over global-level naming settings.

Use Naming Format

Selecting this option specifies that global-level secondary volume naming is determined by the format that is specified in the Name Format field of this option.

Name Format

Enables you to specify at a global level the naming attributes that are to be included in the name of any secondary volume that is generated by OnCommand console protection jobs. You can type the following attributes (separated by the underscore character) in this field in any order:

• %L (custom label attribute)
The custom label, if any, that is specified for the secondary volume's containing dataset. If no custom label is specified, then the dataset name is included in the secondary volume name. The Custom label attribute enables you to specify custom alphanumeric characters, . (period), _ (underscore), or - (hyphen) to include in the names of the related objects that are generated by protection jobs that are run on this dataset. If the naming format for a related object type includes the Custom label attribute, then the value that you specify is included in the related object names. If you do not specify a value, then the dataset name is used as the custom label. If you include a blank space in the custom label string, the blank space is converted to the letter x in any Snapshot copy, volume, or qtree object name that includes the custom label as part of its syntax.

• %S (primary storage system name)
The name of the primary storage system.
• %V (primary volume name)
The name of the primary volume.
• %C (type)
The connection type (backup or mirror).
• %1, %2, %3 (digit suffix)
A one-digit, two-digit, or three-digit suffix, if required, to distinguish secondary volumes with otherwise matching names.

The default secondary volume naming format is %V.

Name Preview

Displays a sample secondary volume name based on the attributes that you entered in the Name Format field.

For example, if the secondary volume naming format is customized as %L_%C_%S_%V, and if a dataset with custom label "mydata" has some data backed up "hourly" from the primary volume myhost1:/myvol1 to the backup destination myhost2:/myvol2, then the name of the secondary volume is "mydata_backup_myhost1_myvol1".

Use Naming Script

Selecting this option specifies at a global level the path location and name of a user-authored naming script. The script specifies how a secondary volume that is generated by OnCommand console protection jobs is named. Host services will use the Name Format instead of the script.

Script Path: The path and file name of a user-supplied naming script. The naming script must be in a location that is readable by the DataFabric Manager server.

Run As: The name of the authorized user under whose identity the operations that are specified in the naming script are executed.

Global Naming Settings Secondary Qtree area

You can customize the secondary qtree naming settings in the Setup Options dialog box to determine the global-level names of secondary qtrees that are generated by OnCommand console protection jobs.

Options

You can specify a secondary qtree name at the global level by specifying a naming format.

Note: The global-level secondary qtree naming settings apply to any secondary qtree that is generated by an OnCommand console protection job that is executed on a dataset that does not have dataset-level secondary qtree naming settings specified. Dataset-level secondary qtree naming settings take precedence over global-level naming settings.

Name Format

Enables you to specify at a global level the naming attributes that are to be included in the name of any secondary qtree that is generated by OnCommand console protection jobs. You can type the following attributes (separated by the underscore character) in this field in any order:

• %L (custom label attribute)
The custom label, if any, that is specified for the secondary qtree's containing dataset. If no custom label is specified, then the dataset name is included in the secondary qtree name. The Custom label attribute enables you to specify custom alphanumeric characters, . (period), _ (underscore), or - (hyphen) to include in the names of the related objects that are generated by protection jobs that are run on this dataset. If the naming format for a related object type includes the Custom label attribute, then the value that you specify is included in the related object names. If you do not specify a value, then the dataset name is used as the custom label. If you include a blank space in the custom label string, the blank space is converted to the letter x in any Snapshot copy, volume, or qtree object name that includes the custom label as part of its syntax.

• %S (primary storage system name)
The name of the primary storage system.
• %V (primary volume name)
The name of the primary volume.
• %Q (primary qtree name)
The name of the primary qtree.
• %1, %2, %3 (digit suffix)
A one-digit, two-digit, or three-digit suffix, if required, to distinguish secondary qtrees with otherwise matching names.

The default secondary qtree naming format is %Q.

Name Preview

Displays a sample secondary qtree name based on the attributes that you entered in the Name Format field.

For example, if the secondary qtree naming format is customized as %L_%S_%V_%Q, and if a dataset with custom label "mydata" has some data backed up "hourly" from the primary qtree myhost1:/myvol1/qtree1 to the backup destination myhost2:/myvol2, then the name of the secondary qtree is "mydata_myhost1_myvol1_qtree1".

Costing setup options

Understanding costing options


What chargeback is

You can configure the DataFabric Manager server to collect data related to space usage by individual appliances and file systems, or by groups of them. The statistical information that you collect can be used for chargeback and for planning space utilization.

The chargeback reports provide an easy way to track space usage and to generate bills based on your specifications. If your organization bills other organizations or groups in your company for the storage services they use, you can use the chargeback reports.

Managing costing options

Editing chargeback options

You can edit the chargeback options for objects to customize the billing information.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click the Setup Options option.

2. In the Setup Options dialog box, click the Costing option.

3. Click the Chargeback option.

The Costing Chargeback area appears.

4. Specify the chargeback increment, currency display format, the amount to charge for disk space usage, and the day of the month when the billing cycle begins and ends.

5. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Costing Chargeback area on page 446

Page descriptions

Costing Chargeback area

The chargeback feature in the DataFabric Manager server enables you to obtain billing reports for the amount of space used by a specific object or a group of objects.

• Options on page 447


• Command buttons on page 447

Options

You can configure the following chargeback options:

Chargeback Increment

Displays how the charge rate is calculated. You can specify this setting only at the global level.

You can specify the following values for the Chargeback increment option:

• Daily: Charges are variable and are adjusted based on the number of days in the billing period. Formula used by the DataFabric Manager server to calculate the charges: Annual Rate / 365 x number of days in the billing period.

• Monthly: Charges are fixed, with a flat rate for each billing period regardless of the number of days in the period. Formula used by the DataFabric Manager server to calculate the charges: Annual Rate / 12.

The default is Daily.
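The two increment formulas work out as in this small sketch (the rate value is only an example, not a documented default):

```python
def daily_charge(annual_rate: float, days_in_period: int) -> float:
    # Daily increment: Annual Rate / 365 x days in the billing period
    return annual_rate / 365 * days_in_period

def monthly_charge(annual_rate: float) -> float:
    # Monthly increment: a flat Annual Rate / 12 per billing period
    return annual_rate / 12

# With an annual charge rate of $150.55 per GB, a 30-day period costs
# about $12.37 under the Daily increment, while the Monthly flat rate
# is about $12.55 regardless of the period's length.
```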

Currency Format

Displays the format of the currency to be specified.

The default is $ #,###.##.

Annual Charge Rate (Per GB)

Displays the amount to charge for storage space usage, per GB, per year.

You must specify the value in x.y notation, where x is the integer part of the number and y is the fraction. For example, to specify an annual charge rate of $150.55, you must enter 150.55.

Day of the Month for Billing

Displays the day of the month when the billing cycle begins.

You can specify the following values for the Day of the Month for Billing option:

• 1 through 28: These values specify the day of the month. For example, if you specify 15, the billing cycle begins on the 15th day of the month.

• -27 through 0: These values specify the number of days before the last day of the month; therefore, 0 specifies the last day of the month. For example, if you want the bill on the fifth day before the month ends every month, specify -4.

The default is 1 (the first day of the month).
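The two value ranges can be sketched as a date computation. The helper below is hypothetical; the July 2011 calendar (from this document's date) is used only as an example:

```python
import calendar
from datetime import date

def billing_start(year: int, month: int, setting: int) -> date:
    # 1..28 selects that day of the month; -27..0 counts back from the
    # last day of the month (0 is the last day itself).
    last_day = calendar.monthrange(year, month)[1]
    if 1 <= setting <= 28:
        return date(year, month, setting)
    if -27 <= setting <= 0:
        return date(year, month, last_day + setting)
    raise ValueError("setting must be 1..28 or -27..0")

# July 2011 has 31 days:
assert billing_start(2011, 7, 15) == date(2011, 7, 15)
assert billing_start(2011, 7, -4) == date(2011, 7, 27)  # 5th day before month end
assert billing_start(2011, 7, 0) == date(2011, 7, 31)   # last day
```

Restricting positive values to 28 keeps the setting valid for February as well.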

Command buttons

The command buttons enable you to save or cancel the setup options.


Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

Database backup setup options

Understanding database backup options

Overview of the DataFabric Manager server database backup process

You can back up the DataFabric Manager server database, script plug-ins, and performance data without stopping any of the DataFabric Manager server services. However, data collection and view modifications of Performance Advisor are suspended during the backup process.

There are two types of backups:

• Archive: This backup process backs up your critical data in compressed form as a .zip file. The DataFabric Manager server data is automatically converted to an archive format, and the DataFabric Manager server stores the backup in a local or remote directory. You can easily move an archive-based backup to a different system and restore it. However, this backup process is time-consuming.

• Snapshot: This backup process uses Snapshot technology to back up the database. This approach speeds up the backup process, but you cannot transfer a Snapshot backup to a different system and restore it.

Configuring database backup options

Deleting a database backup

You can delete a database backup that you no longer need from OnCommand to save space.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click the Setup Options option.

2. In the Setup Options dialog box, click the Database Backup option.

3. Click the Completed option.

4. In the Database Backup Completed area, select the backup file you want to delete.


5. Click Delete.

Related references

Database Backup Completed area on page 453

Administrator roles and capabilities on page 506

Managing database backup options

Scheduling a database backup

You can schedule a database backup to occur at a specific time on a recurring basis, to preserve your data.

Before you begin

• You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

• You must log in to the DataFabric Manager server as an administrator with the GlobalFullControl role.

Steps

1. Click the Administration menu, then click the Setup Options option.

2. In the Setup Options dialog box, click the Database Backup option.

3. Click the Schedule option.

4. In the Database Backup Schedule area, specify the database backup properties, such as backup type, backup path, retention count, and schedule.

You can select between the Archive and Snapshot backup types.

5. Select Schedule.

You can configure the time of your database backup schedule in minutes, hours, days, and weeks.

6. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Database Backup Schedule area on page 451


Changing the directory path for database backups

You can use the Setup Options dialog box to change the directory path for database backups if you want to back up to a different location.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click the Setup Options option.

2. In the Setup Options dialog box, click the Database Backup option.

3. Click the Schedule option.

4. In the Database Backup Schedule area, select a backup type:

• Archive
• Snapshot

5. Change the Backup path.

6. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Database Backup Schedule area on page 451

Starting a database backup

You must back up your data before any upgrade operation and before any maintenance on the system hosting the DataFabric Manager server. You can start a database backup from the OnCommand console in one of two ways: schedule-based or on-demand. Follow these steps to perform an on-demand backup.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click the Setup Options option.

2. In the Setup Options dialog box, click the Database Backup option.


3. Click the Schedule option.

4. In the Database Backup Schedule area, select a backup type:

• Archive
• Snapshot

5. Specify the location where you want to save the backup.

6. Specify a Retention Count.

7. Click Back Up Now.

A message is displayed confirming that the backup has started successfully.

Related references

Administrator roles and capabilities on page 506

Database Backup Schedule area on page 451

Page descriptions

Database Backup Schedule area

You can schedule a database backup, manage available database information, and view the list of completed backups and related events by using the Database Backup setup option.

• Options on page 451
• Command buttons on page 452

Options

The following options are available in the Database Backup Schedule area:

Status Displays the status of the scheduled backup and current backup.

• Pending: Indicates that a backup is scheduled but has not yet started.
• Schedule Active: Indicates that a backup is scheduled and enabled.
• Schedule Inactive: Indicates that a backup is scheduled and disabled.
• Running: Indicates that a backup is currently running.
• Not Scheduled: Indicates that a backup is not scheduled.

Backup Type

Enables you to select a backup type. You can select one of the following backup types:

• Archive: Performs only critical data backup in a compressed form using the ZIP format. You can transfer the backup to a different system and restore it with ease; however, this process is time-consuming. By default, Archive is enabled.


• Snapshot: Uses Snapshot technology to perform the database backup. Although this is a faster option, you cannot transfer the backup to a different system and restore it. You can export a Snapshot backup to an Archive backup by using the dfm backup export snapshot-name command.

Note: Snapshot backup is enabled only when both of the following conditions exist:

• Either SnapDrive for Linux 2.2.1 (and later) or SnapDrive for Windows 4.2 (and later) is installed.
• DataFabric Manager server data resides on a dedicated LUN managed by SnapDrive.

Backup Path Displays the location at which the backed-up database is stored.

Retention Count

Specifies the maximum number of backups that the OnCommand console can store simultaneously. If you exceed this limit, the old backups are automatically deleted to provide space for new backups. The default retention count is 0.
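The retention behavior can be sketched as follows. The helper is hypothetical, and treating a retention count of 0 as "no pruning" is an assumption (the option only documents 0 as the default):

```python
def prune_backups(backups, retention_count):
    # `backups` is assumed ordered oldest-first. When the limit is
    # exceeded, the oldest backups are dropped so only the newest remain.
    if retention_count <= 0:
        return list(backups)  # assumption: 0 disables pruning
    return list(backups)[-retention_count:]

assert prune_backups(["b1", "b2", "b3", "b4"], 2) == ["b3", "b4"]
assert prune_backups(["b1"], 3) == ["b1"]
```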

Schedule Enables you to plan the database backup.

The following options are available to schedule your backup:

• Hourly at Minute: Displays the time (in minutes) at which the hourly backup must be performed.
• Every (Hours): Displays the time (in hours) at which the backup must be performed.
• Starting Every Day At: Displays the time of day when the backup must start. This backup is performed based on the time interval set in the Every (Hours) field.
• Daily At: Displays the time of day when the backup must start. This backup is performed once every 24 hours.
• Weekly On: Displays the day for the weekly backup schedule. At: Displays the time for the weekly backup schedule.

Note: Hourly backups are not possible with Archive backup. By default, Snapshot backup is selected when the DataFabric Manager server data resides on a LUN.

Command buttons

The command buttons enable you to save or cancel the setup options, and back up data.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.


Back Up Now Enables you to start a manual backup.

Database Backup Completed area

You can view information about the database backups that have been completed and the related events.

• Options on page 453

• Command buttons on page 453

Options

You can view the following information about the completed database backup:

File Name Displays the name of the database backup file.

File Size Displays the size of the database backup file.

Creation Time Displays the time when the database backup file was created.

Backup Events Displays the events that are triggered during the database backup. You can also view the time when each event was triggered.

Command buttons

The command buttons enable you to save or cancel the setup options, and delete the database backup.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

Delete Enables you to delete a database backup.

Default thresholds setup options

Understanding default thresholds options

Default thresholds

Default thresholds are values or limits set on a storage object attribute that allow the DataFabric Manager server to trigger events when the value or limit is reached. You can modify these values to set specific thresholds for individual storage objects or groups of storage objects.

You can access the global default threshold values from the Setup Options dialog box. From the Default Thresholds options area, you can change the global default threshold values for the following objects:

• Aggregates


• Volumes

• Qtrees

• Hosts

• Resource pools

• User quotas

• HBA ports

Managing default thresholds options

Editing threshold conditions for an aggregate

You can change the default threshold settings for an aggregate. When you edit thresholds, you edit values that are associated with the individual aggregates, not the aggregate groups.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click the Setup Options option.

2. In the Setup Options dialog box, click the Default Thresholds option.

3. Click the Aggregates option.

4. In the Default Thresholds Aggregates area, specify the new values, as required.

5. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Default Thresholds Aggregates area on page 456

Editing threshold conditions for a volume

You can change the default threshold settings for a volume or all volumes in a group. If you modify the settings of a volume that belongs to more than one group, the settings are applied to the volume across all groups it belongs to.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.


Steps

1. Click the Administration menu, then click the Setup Options option.

2. In the Setup Options dialog box, click the Default Thresholds option.

3. Click the Volumes option.

4. In the Default Thresholds Volumes area, specify the new values, as required.

5. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Default Thresholds Aggregates area on page 456

Editing threshold conditions for other objects

You can change the default threshold settings for qtrees, hosts, user quotas, resource pools, and HBA ports.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

About this task

User quota thresholds that you configure for a qtree do not apply to quotas on a qtree.

Steps

1. Click the Administration menu, then click the Setup Options option.

2. In the Setup Options dialog box, click the Default Thresholds option.

3. Click the Other option.

4. In the Default Thresholds Other area, specify the new values, as required.

5. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Default Thresholds Aggregates area on page 456

Page descriptions


Default Thresholds Aggregates area

You can change the global default threshold values for monitored aggregates using the Default Thresholds setup option.

• Options on page 456

• Command buttons on page 457

Options

The following thresholds apply to all monitored aggregates. You can override the default values of any aggregate from the Aggregates View page.

Full Threshold (%) Displays the percentage of physical space on an aggregate that can be used before the system generates an Aggregate Full event. All requests for space exceeding this threshold are dropped.

The default is 90%.

Nearly Full Threshold (%) Displays the percentage of physical space on an aggregate that can be used before the system generates an Aggregate Almost Full event.

You should specify a limit that is less than the value specified for the Aggregate Full Threshold option.

The default is 80%.

Full Threshold Interval Displays the time that a condition can persist before the event is generated.

If the condition persists for the specified amount of time, the DataFabric Manager server generates an Aggregate Full event. Threshold intervals apply only to error and informational events.

If the threshold interval is 0 seconds, or a value less than the aggregate monitoring interval, the DataFabric Manager server continuously generates Aggregate Full events until an event is resolved. If the threshold interval is greater than the aggregate monitoring interval, the DataFabric Manager server waits for the specified threshold interval (which includes two or more monitoring intervals), and generates an Aggregate Full event only if the condition persisted throughout the threshold interval.

For example, if the monitoring cycle time is 60 seconds and the threshold interval is 90 seconds, the threshold event is generated only if the condition persists for two monitoring cycles.

The default is 0 seconds.
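The threshold-interval rule described above can be sketched as a small calculation (an illustration only; the function name and structure are invented, not the server's actual implementation). A breach must persist for the whole threshold interval, measured in whole monitoring cycles, before the event fires:

```python
import math

# Hypothetical sketch of the threshold-interval rule; not NetApp code.
def cycles_required(monitoring_interval_s, threshold_interval_s):
    """Consecutive monitoring cycles a breach must persist before an
    event is generated, per the rule described in the text."""
    if threshold_interval_s <= monitoring_interval_s:
        # 0 seconds, or any value below the monitoring interval:
        # an event is generated on every breached cycle.
        return 1
    return math.ceil(threshold_interval_s / monitoring_interval_s)
```

With the document's example of a 60-second monitoring cycle and a 90-second threshold interval, this yields two cycles.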

Overcommitted Threshold (%) Displays the percentage of physical space on an aggregate that can be used before the system generates an Aggregate Overcommitted event.

The default is 100%.


Nearly Overcommitted Threshold (%) Displays the percentage of physical space on an aggregate that can be used before the system generates an Aggregate Nearly Overcommitted event.

You should specify a limit that is less than the value specified for Aggregate Overcommitted Threshold.

The default is 95%.

Snapshot Reserve Full Threshold (%) Displays the percentage of Snapshot reserve on an aggregate that can be used before the system generates an Aggregate Snapshot Full event.

The default is 90%.

Snapshot Reserve Nearly Full Threshold (%) Displays the percentage of Snapshot reserve on an aggregate that can be used before the system generates an Aggregate Snapshot Nearly Full event.

You should specify a limit that is less than the value specified for Aggregate Snapshot Reserve Full Threshold.

The default is 80%.

Over-Deduplicated Threshold (%) Displays the percentage of user data that can be deduplicated and stored on a volume before the system generates a Volume Over Deduplicated event.

The default is 150%.

Nearly Over-Deduplicated Threshold (%) Displays the percentage of user data that can be deduplicated and stored on a volume before the system generates a Volume Nearly Over Deduplicated event.

The default is 140%.

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

Default Thresholds Volumes area

You can change the global default threshold values for monitored volumes using the Default Thresholds setup option.

• Options on page 457

• Command buttons on page 460

Options

The following thresholds apply to all monitored volumes.


Full Threshold (%) Displays the percentage at which a volume is considered full. If this limit is exceeded, the DataFabric Manager server generates a Volume Full event.

The default is 90%.

Nearly Full Threshold (%) Displays the percentage at which a volume is considered nearly full. If this limit is exceeded, the DataFabric Manager server generates a Volume Nearly Full event.

You should specify a limit that is less than the value specified for the Volume Full Threshold option.

The default is 80%.

Full Threshold Interval Displays the maximum duration of a threshold breach before a full threshold event is generated.

If the condition persists for the specified amount of time, the DataFabric Manager server generates a Volume Full event. Threshold intervals apply only to error and informational events.

If the threshold interval is 0 seconds or a value less than the volume monitoring interval, the DataFabric Manager server generates Volume Full events as they occur. If the threshold interval is greater than the volume monitoring interval, the DataFabric Manager server waits for the specified threshold interval, which includes two or more monitoring intervals, and generates a Volume Full event only if the condition persisted throughout the threshold interval. For instance, if the monitoring cycle time is 60 seconds and the threshold interval is 90 seconds, the threshold event is generated only if the condition persists for two monitoring cycles.

The default is 0 seconds.

Quota Overcommitted Threshold (%) Displays the percentage of allocated space (disk space or files used) on a quota, as specified by the user's quota file, that can be used before the system generates a Quota Overcommitted event.

The default is 50%.

Quota Nearly Overcommitted Threshold (%) Displays the percentage of allocated space (disk space or files used) on a quota, as specified by the user's quota file, that can be used before the system generates a Quota Nearly Overcommitted event.

The default is 40%.

Growth Event Minimum Change (%) Displays the minimum change in volume size (as a percentage of total volume size) that is considered normal. If the change in volume size is more than the specified value, the DataFabric Manager server generates a Volume Growth Abnormal event.

The default is 1%.


Snap Reserve Full Threshold (%) Displays the value at which the space reserved for making volume Snapshot copies is considered full.

The default is 90%.

No First Snapshot Threshold (%) Displays the value at which a volume is considered to have consumed all the free space that it needs when the first Snapshot copy is created.

This option applies to volumes that contain space-reserved files, no Snapshot copies, and a fractional overwrite reserve set to greater than 0, and for which the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.

The default is 90%.

Nearly No First Snapshot Threshold (%) Displays the value at which a volume is considered to have consumed most of the free space that it needs when the first Snapshot copy is created.

This option applies to volumes that contain space-reserved files, no Snapshot copies, and a fractional overwrite reserve set to greater than 0, and for which the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.

You should specify a limit that is less than the value specified for the Volume No First Snapshot Threshold option.

The default is 80%.

Space Reserve Depleted Threshold (%) Displays the value at which a volume is considered to have consumed all its reserved space.

This option applies to volumes with LUNs, Snapshot copies, no free space, and a fractional overwrite reserve of less than 100%. A volume that has crossed this threshold is getting dangerously close to having write failures.

The default is 90%.

Space Reserve Nearly Depleted Threshold (%) Displays the value at which a volume is considered to have consumed most of its reserved space.

This option applies to volumes with LUNs, Snapshot copies, no free space, and a fractional overwrite reserve of less than 100%. A volume that crosses this threshold is getting close to having write failures.

You should specify a limit that is less than the value specified for the Volume Space Reserve Depleted Threshold option.

The default is 80%.

Snapshot Count Threshold Displays the limit to the number of Snapshot copies allowed on the volume. A volume is allowed up to 255 Snapshot copies.


The default is 250 Snapshot copies.

Too Old Snapshot Threshold Specifies the limit to the age of a Snapshot copy allowed for the volume. The Snapshot copy age can be specified in seconds, minutes, hours, days, or weeks.

The default is 52 weeks.

Nearly Over-Deduplicated Threshold (%) Displays the percentage of user data that can be deduplicated and stored on a volume before the system generates a Nearly Over-Deduplicated event.

The default is 140%.

Over Deduplicated Threshold (%) Displays the percentage of user data that can be deduplicated and stored on a volume before the system generates an Over Deduplicated event.

The default is 150%.

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves all the configuration settings and then closes the Setup Options dialog box.

Cancel Enables you to undo all the configuration settings and then closes the Setup Options dialog box.

Save Saves all your configuration settings.

Default Thresholds Other area

You can change the global default threshold values for host agents, HBA ports, qtrees, user quotas, and resource pools using the Default Thresholds options.

• Options on page 460

• Command buttons on page 462

Options

The following information is available under Other thresholds:

HBA Port Too Busy Threshold (%) Displays the percentage of maximum traffic the HBA port can handle without adversely affecting performance. If this threshold is crossed, the DataFabric Manager server generates an HBA Port Traffic High event.

The default is 90%.

Host CPU Too Busy Threshold (%) Displays the percentage of maximum traffic the host CPU can handle without adversely affecting performance.

The default is 95%.


Host CPU Busy Threshold Interval Specifies the maximum duration of a threshold breach that can persist before the CPU busy event is generated.

If the condition persists for the specified amount of time, the DataFabric Manager server generates a CPU-too-busy event. Threshold intervals apply only to error and informational events.

• If the threshold interval is 0 seconds or a value less than the CPU monitoring interval, the DataFabric Manager server generates CPU-too-busy events as they occur.

• If the threshold interval is greater than the CPU monitoring interval, the DataFabric Manager server waits for the specified threshold interval, which includes two or more monitoring intervals, and generates a CPU-too-busy event only if the condition persisted throughout the threshold interval.

For example, if the monitoring cycle time is 60 seconds and the threshold interval is 90 seconds, the event is generated only if the condition persists for two monitoring cycles.

The default is 15 minutes.

Qtree Full Threshold (%) Displays the percentage at which a qtree is considered full. If this limit is exceeded, the DataFabric Manager server generates a Qtree Full event.

The default is 90%.

Qtree Nearly Full Threshold (%) Displays the percentage at which a qtree is considered nearly full. If this limit is exceeded, the DataFabric Manager server generates a Qtree Nearly Full event.

You should specify a limit that is less than the value specified for the Qtree Full Threshold option.

The default is 80%.

Qtree Full Threshold Interval Displays the maximum duration of a threshold breach before the Qtree Full threshold event is generated.

If the condition persists for the specified amount of time, the DataFabric Manager server generates a Qtree Full event. Threshold intervals apply only to error and informational events.

If the threshold interval is 0 seconds or a value less than the qtree monitoring interval, the DataFabric Manager server continuously generates Qtree Full events until an event is resolved. If the threshold interval is greater than the qtree monitoring interval, the DataFabric Manager server waits for the specified threshold interval, which includes two or more monitoring intervals, and generates a Qtree Full event only if the condition persisted throughout the threshold interval. For instance, if the monitoring cycle time is 60 seconds and the threshold interval is 90 seconds, the threshold event is generated only if the condition persists for two monitoring cycles.


The default is 0 seconds.

Qtree Growth Event Minimum Change (%) Displays the minimum change in qtree size (as a percentage of total volume size). If the change in qtree size is more than the specified value, and the growth is abnormal with respect to the qtree-growth history, the DataFabric Manager server generates a Qtree Growth Abnormal event.

The default is 1%.

User Quota Full Threshold (%) Displays the value at which a user is considered to have consumed all the allocated space (disk space or files used) as specified by the user's quota (hard limit in the /etc/quotas file).

If this limit is exceeded, the DataFabric Manager server generates a User Disk Space Quota Full or User Files Quota Full event.

The default is 90%.

User Quota Nearly Full Threshold (%) Displays the value at which a user is considered to have consumed most of the allocated space (disk space or files used) as specified by the user's quota (hard limit in the /etc/quotas file).

If this limit is exceeded, the DataFabric Manager server generates a User Disk Space Quota Almost Full or User Files Quota Almost Full event.

You should specify a limit that is less than the value specified for the User Quota Full Threshold option.

The default is 80%.

Resource Pool Full Threshold (%) Displays the percentage of space used in a resource pool before the DataFabric Manager server generates a Resource Pool Full event.

The default is 90%.

Resource Pool Nearly Full Threshold (%) Displays the percentage of space used in a resource pool before the DataFabric Manager server generates a Resource Pool Nearly Full event.

The default is 80%.

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

Discovery setup options


Understanding discovery options

What the discovery process is

The DataFabric Manager server discovers all the storage systems in your organization's network by default. You can add other networks to the discovery process or enable discovery on all the networks. Depending on your network setup, you can disable discovery entirely. You can disable auto-discovery if you do not want SNMP network walking.

SNMP version setup

You must know the settings that are used for the preferred SNMP version at the storage system level or network level.

If the SNMP version is... Then...

Specified at the storage system level The preferred version takes precedence over the network and global settings.

Not specified at the storage system level The network setting is used.

Not specified at the network level The global setting is used.

When the DataFabric Manager server is installed for the first time or updated, by default, the global and network settings use SNMPv1 as the preferred version. However, you can configure the global and network settings to use SNMPv3 as the default version.
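The precedence rules above amount to a simple fallback chain: storage-system setting, then network setting, then global setting. The sketch below illustrates the idea; the function and argument names are invented for illustration and are not part of any NetApp API.

```python
# Hypothetical sketch of SNMP version precedence; not NetApp code.
def preferred_snmp_version(storage_system_setting, network_setting,
                           global_setting="SNMPv1"):
    """Resolve the preferred SNMP version.

    A setting that is not specified is passed as None. The storage
    system setting wins, then the network setting, then the global
    setting (SNMPv1 by default, per the text above).
    """
    return storage_system_setting or network_setting or global_setting
```

For example, a storage system explicitly set to SNMPv3 uses SNMPv3 even if the network and global settings say SNMPv1.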

Guidelines for editing discovery options

You must follow a set of guidelines for changing the default values of the discovery options.

Interval This option specifies the period after which the DataFabric Manager server scans for new storage systems and networks.

You can change the default value if you want to increase the minimum time interval between system discovery attempts. This option affects the discovery interval only at the time of installation. After storage systems are discovered, you should determine the interval based on the number of networks and their size. If you choose a longer interval, there might be a delay in discovering new storage systems, but the discovery process is less likely to affect the network load.

The default is 15 minutes.

Timeout This option specifies the time interval after which the DataFabric Manager server considers a discovery query to have failed.

You can change the default value if you want to lengthen the time before considering a discovery to have failed (to avoid discovery queries on a local area network failing due to the long response times of a storage system).


The default is 5 seconds.

Host discovery This option enables the discovery of storage systems, host agents, and vFiler units through SNMP.

You can change the default value if any of the following situations exist:

• All storage systems that you expected the DataFabric Manager server to discover have been discovered and you do not want the DataFabric Manager server to continue scanning for new storage systems.

• You want to manually add storage systems to the DataFabric Manager server database. Manually adding storage systems is faster than discovering storage systems in the following cases:

• You want the DataFabric Manager server to manage a small number of storage systems.

• You want to add a single new storage system to the DataFabric Manager server database.

Host agent discovery This option allows you to enable or disable discovery of host agents.

You can change the default value if you want to disable the discovery of LUNs or storage area network (SAN) hosts and host agents.

Network discovery This option enables the discovery of networks, including SAN and cluster networks.

You can change the default value if you want the DataFabric Manager server to automatically discover storage systems on your entire network.

Note: When the Network Discovery option is enabled, the list of networks on the Networks to Discover page can expand considerably as the DataFabric Manager server discovers additional networks attached to previously discovered networks.

Network Discovery Limit (in hops) This option sets the boundary of network discovery as a maximum number of hops (networks) from the DataFabric Manager server.

You can increase this limit if the storage systems that you want the DataFabric Manager server to discover are connected to networks that are more than 15 hops (networks) away from the network to which the DataFabric Manager server is attached. The other method for discovering these storage systems is to add them manually.

You can decrease the discovery limit if a smaller number of hops includes all the networks with storage systems you want to discover. For example, reduce the limit to six hops if there are no storage systems that must be discovered on networks beyond six hops. Reducing the limit prevents the DataFabric Manager server from


using cycles to probe networks that contain no storage systems that you want to discover.

The default is 15 hops.
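Hop-limited discovery behaves like a breadth-first walk of the network graph that stops expanding once the hop limit is reached. The sketch below is an illustration of this idea only; the adjacency map and names are assumptions, not the DataFabric Manager server's actual algorithm.

```python
from collections import deque

# Hypothetical sketch of hop-limited network discovery; not NetApp code.
def discover_networks(adjacency, start, hop_limit=15):
    """Return {network: hop_count} for networks within `hop_limit` hops
    of `start`. `adjacency` maps each network to its attached networks."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        net = queue.popleft()
        if seen[net] == hop_limit:
            continue  # do not probe beyond the hop limit
        for neighbor in adjacency.get(net, ()):
            if neighbor not in seen:
                seen[neighbor] = seen[net] + 1
                queue.append(neighbor)
    return seen
```

With a limit of 2, a network chain a-b-c-d is discovered only up to c; d lies beyond the boundary and would have to be added manually.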

Networks to discover This option enables you to manually add or delete networks that the DataFabric Manager server scans for new storage systems.

You can change the default value if you want to add a network to the DataFabric Manager server that it cannot discover automatically, or you want to delete a network for which you no longer want storage systems to be discovered.

Network Credentials This option enables you to specify, change, or delete an SNMP community that the DataFabric Manager server uses for a specific network or host.

You can change the default value if storage systems and routers that you want to include in the DataFabric Manager server do not use the default SNMP community.

Configuring discovery options

Adding addresses for discovery

The DataFabric Manager server uses the automatic discovery process to find storage systems on your organization's network. You can add the addresses of your network or storage systems for discovery from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the Discovery option, then click the Addresses option.

3. In the Discovery Addresses area, click Add.

4. In the Add Network Address dialog box, specify your network address and the prefix length.

5. Click OK.

6. Click Save and Close.

Related references

Administrator roles and capabilities on page 506


Managing discovery options

Editing SNMP communities and network credentials

You can change the default network credentials and SNMP settings if storage systems and routers that you want to include in the DataFabric Manager server do not use the default SNMP community.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click the Setup Options option.

2. In the Setup Options dialog box, click the Discovery option.

3. Click the Credentials option.

4. In the Discovery Credentials area, select the network address you want to edit.

5. Click Edit.

6. In the Edit Network Credentials dialog box, under Preferred SNMP version, you can select one of the following options:

• SNMP v1; if you select SNMP v1, you can configure the SNMP communities.

• SNMP v3; if you select SNMP v3, you can configure the following:

• Auth protocol: You can choose either of the two protocol options: MD5 and SHA. The default option is MD5.

• Login: Type your login information.

• Password: Type your password.

• Privacy password: Type your privacy password.

7. Click OK.

8. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Guidelines for editing discovery options on page 463

Discovery Credentials area on page 469

Page descriptions


Discovery Options area

You can use Discovery options to discover networks, storage systems (including clusters), Open Systems SnapVault agents, and host agents.

• Options on page 467

• Command buttons on page 468

Options

Displays the settings you can configure to discover storage objects and the time taken for discovery.

Host discovery Enables (default) or disables the discovery of hosts.

Host agent discovery Enables (default) or disables the discovery of hosts running the NetApp Host Agent software.

vFiler Unit discovery Enables (default) or disables discovery of vFiler units. When disabled, the DataFabric Manager server does not discover new vFiler units but continues to monitor existing vFiler units.

Host-initiated discovery Enables (default) or disables host-initiated discovery. When enabled, the DataFabric Manager server accepts communication requests initiated by host agents.

Note: Currently, host-initiated discovery is supported only in NetApp Host Agent.

Other Discovery Methods Displays the methods you can use to discover storage systems and networks.

SAN discovery Enables (default) or disables the discovery of systems in a SAN environment.

Cluster discovery Enables (default) or disables the discovery of cluster systems.

Network discovery Enables or disables (default) the discovery of new networks.

Default Discovery Options (except vFiler Units) Displays default settings that are used by the DataFabric Manager server to start or end the discovery process.

Interval Displays the minimum time interval during which the DataFabric Manager server scans for new storage systems and networks.

The default is 15 minutes.

Timeout Displays the time interval after which the DataFabric Manager server considers a discovery query to have failed.

The default is 5 seconds.


Network Discovery Limit (in hops) Sets the boundary of network discovery as a maximum number of hops (networks) from the DataFabric Manager server.

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

Discovery Addresses area

The Discovery setup options enable the DataFabric Manager server to discover storage systems and host agents from the network.

• Options on page 468

• Command buttons on page 468

Options

Displays the properties of the discovered network address in a tabular format. You can sort data based on the filters applied to the columns. Multiple filters and single-column sorting can be applied at the same time.

Address Displays the network address of the object that will be discovered.

Prefix Length Displays the prefix length of the network address.

Hop Count Displays the hop count of the network address.

Last Searched Displays the date and timestamp of the last searched network address.

Command buttons

The command buttons enable you to save or cancel the setup options, and add, edit, or delete network addresses.

Add Enables you to add a network address and its prefix length.

Edit Enables you to edit the specified network address.

Delete Enables you to delete the specified network address.

You can choose to delete Network Only or Network and Hosts from the drop-down list.

Save and Close Saves all the configuration settings and then closes the Setup Options dialog box.


Cancel Enables you to undo all the configuration settings and then closes the Setup Options dialog box.

Save Saves all your configuration settings.

Discovery Credentials area

You can discover storage objects from their network credentials by using the Discovery credentials option.

• Options on page 469

• Command buttons on page 469

Options

Displays, in tabular format, the properties of the discovered network credential. You can sort data based on the filters applied to the columns. Multiple filters and single-column sorting can be applied at the same time.

Address Displays the network address.

Prefix Length Displays the prefix length of the network address.

Preferred SNMP Version Displays the preferred SNMP version.

Command buttons

The command buttons enable you to save or cancel the setup options, and add, delete, or edit network credentials.

Add Enables you to add a network and configure the SNMP settings.

Edit Enables you to edit the selected network and configure the SNMP settings.

Delete Enables you to delete the selected network.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

File SRM setup options

Understanding File SRM options


What File Storage Resource Management does

File Storage Resource Manager (FSRM) enables you to gather file-system metadata and generate reports on different characteristics of that metadata.

The DataFabric Manager server interacts with the NetApp Host Agent residing on remote Windows, Solaris, or Linux workstations or servers (called hosts) to recursively examine the directory structures (paths) you have specified.

For example, if you suspect that certain file types are consuming excessive storage space on your storage systems, you can perform the following tasks:

1. Deploy one or more host agents.

2. Configure FSRM to walk a path.

The host agents might have a NetApp LUN, volume, or qtree mounted. You can configure FSRM to generate reports periodically. These reports contain the following details:

• Files that are consuming the most space
• Files that are outdated or have been accessed recently
• Types of files (.doc, .gif, .mp3, and so on) on the file system

You can then decide how to most efficiently use your existing space.
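The kind of report FSRM produces can be approximated with a recursive walk of a path. The sketch below is illustrative only (FSRM itself runs inside the NetApp Host Agent); it gathers the largest files and counts by file type, mirroring the Largest Files (Max) option, which defaults to 20:

```python
import os
from collections import Counter

# Illustrative FSRM-style report: recursively walk a path, collecting
# the largest files and a count of files per extension type.
def srm_style_report(path, largest_max=20):
    sizes = []          # (size_in_bytes, filepath) pairs
    types = Counter()   # file counts keyed by extension (.doc, .mp3, ...)
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            try:
                size = os.path.getsize(full)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            sizes.append((size, full))
            ext = os.path.splitext(name)[1].lower() or "(none)"
            types[ext] += 1
    sizes.sort(reverse=True)  # largest files first
    return {"largest": sizes[:largest_max], "types": dict(types)}
```

From such a report you can see which file types dominate a path before deciding how to reclaim space.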

How FSRM monitoring works

The DataFabric Manager server monitors directory paths that are visible to the host agent. Therefore, if you want to enable FSRM monitoring of NetApp storage systems, the remote host must mount a NetApp share using NFS or CIFS, or the host must use a LUN on the storage system.

Note: The DataFabric Manager server cannot obtain FSRM data for files that are located in NetApp volumes that are not exported by CIFS or NFS. Host agents can also gather FSRM data about other file-system paths that are not on a NetApp storage system: for example, local disks or third-party storage systems.

Configuring File SRM options

Adding new file types for monitoring file-level statistics

The DataFabric Manager server collects file-system metadata using File SRM. You can add new file types from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.


Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the File SRM option, then click Options.

3. In the File SRM Options area, click Add to add an SRM file type.

4. Specify the file type in the Add SRM File Type dialog box, then click OK.

5. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Deleting File SRM file types

When you do not want the DataFabric Manager server to collect file-system metadata for particular file types, you can delete those file types from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the File SRM option, then click Options.

3. In the File SRM Options area, select one or more file types you want to delete from SRM File Types.

4. Click Delete.

5. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Managing File SRM options


Editing File SRM options

You can edit the values for the largest files, recently modified files, and recently accessed files reports from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the File SRM option, then click Options.

3. In the File SRM Options area, specify the new values, as required.

4. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Page descriptions

File SRM area

You can use the File SRM setup option to configure the number of largest files, recently modified files, accessed files, and file types.

• Options on page 472
• Command buttons on page 473

Options

You can configure the following File SRM setting options:

Largest Files (Max) Displays the maximum number of largest files for each File SRM path.

Least Recently Modified Files (Max) Displays the maximum number of least recently modified files for each File SRM path.

Least Recently Accessed Files (Max) Displays the maximum number of least recently accessed files for each File SRM path.

Recently Accessed Files (Max) Displays the maximum number of recently accessed files for each File SRM path.


SRM File Types Displays the SRM file types to add or delete.

Note: The default value for all the options is 20.

Command buttons

The command buttons enable you to save or cancel the setup options, and add or delete SRM file types.

Add Adds the specified SRM file type.

Delete Deletes the specified SRM file type.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

LDAP setup options

Understanding authentication

Authentication methods on the DataFabric Manager server

The DataFabric Manager server uses the information available in the native operating system for authentication. The server does not maintain its own database of administrator names and passwords.

You can also configure the DataFabric Manager server to use Lightweight Directory Access Protocol (LDAP). If you configure LDAP, then the server uses it as the preferred method of authentication.

Authentication with LDAP

You can enable LDAP authentication on the DataFabric Manager server and configure it to communicate with your LDAP servers in order to retrieve relevant data.

The DataFabric Manager server provides predefined templates for the most common LDAP server types. These templates provide predefined LDAP settings that make the DataFabric Manager server compatible with your LDAP server.
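One piece of what an LDAP template configures is the attribute used to look up login names (the UID Attribute option described later in this section). The sketch below is hypothetical, not DataFabric Manager code: it shows how a server might combine that attribute with a login name to build an LDAP search filter, escaping special characters as RFC 4515 requires:

```python
# Map each character that is special in an LDAP filter to its
# RFC 4515 escape sequence.
_ESCAPES = {"\\": r"\5c", "*": r"\2a", "(": r"\28", ")": r"\29", "\0": r"\00"}

def escape_ldap(value: str) -> str:
    """Escape a value for safe inclusion in an LDAP search filter."""
    return "".join(_ESCAPES.get(ch, ch) for ch in value)

def search_filter(uid_attribute: str, login: str) -> str:
    """Build the filter used to find a user entry, e.g. (uid=jdoe)."""
    return f"({uid_attribute}={escape_ldap(login)})"

print(search_filter("uid", "jdoe"))  # (uid=jdoe)
print(search_filter("uid", "a*b"))   # (uid=a\2ab)
```

The escaping step matters because unescaped login input could otherwise alter the meaning of the filter.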

Configuring LDAP options


Adding LDAP servers for authentication

You can enable LDAP authentication on the DataFabric Manager server and configure it to work with your LDAP servers. You can add the LDAP servers from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the LDAP option, then click Servers.

3. In the LDAP Servers area, click Add.

4. In the Add LDAP Server dialog box, specify the server address or host name, and port details.

5. Click OK.

6. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Deleting LDAP servers

If you want to disable authentication, you can delete your LDAP servers from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the LDAP option, then click Servers.

3. In the LDAP Servers area, select one or more LDAP servers you want to delete, and then click Delete.

4. Click Save and Close.


Related references

Administrator roles and capabilities on page 506

Managing LDAP options

Editing server type options for authentication

When configuring the DataFabric Manager server for LDAP authentication, you can use a template to automatically select the settings compatible with your LDAP server. You can edit the default template settings from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the LDAP option, then click Server Types.

3. In the LDAP Server Types area, modify the template settings, as required.

4. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Page descriptions

LDAP Authentication area

You can use the LDAP Authentication setup option to configure the DataFabric Manager server to communicate with your LDAP servers.

• Options on page 475
• Command buttons on page 476

Options

The LDAP authentication setting options are as follows:

LDAP Is Enabled Specifies whether LDAP is enabled or disabled. By default, LDAP is disabled.

LDAP Bind DN Specifies the bind distinguished name (DN) that the DataFabric Manager server uses to identify itself to the LDAP server.


LDAP Bind Password Specifies the password that the DataFabric Manager server uses to gain access to the bind distinguished name.

LDAP Base DN Specifies the directory on the LDAP server that the DataFabric Manager server uses as the starting point when searching for user entries.

For example, dc=domain,dc=com.

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

LDAP Server Types area

You can use the LDAP Server Types setup option to configure the DataFabric Manager server for LDAP authentication and use a template to automatically select the settings compatible with your LDAP server.

• Options on page 476
• Command buttons on page 477

Options

The LDAP server type setting options are as follows:

Templates Specifies the templates you can select to facilitate LDAP configuration.

You can select one of three templates; each provides different default information. Templates provide predefined LDAP settings to configure the DataFabric Manager server with your LDAP server.

Product Line Specifies the template that provides the predefined LDAP settings designed to make the DataFabric Manager server compatible with your LDAP server. You can select Netscape/iPlanet (default), UMich/OpenLDAP, or Lotus Domino from the drop-down menu.

Netscape/iPlanet Selects the access control settings to use with your Netscape/iPlanet servers.

UMich/OpenLDAP Selects the access control settings to use with your UMich/OpenLDAP servers.


Lotus Domino Selects the access control settings to use with your Lotus Domino servers.

Custom Displays LDAP server attributes that you can modify.

Protocol Version Specifies the LDAP protocol version. You can select either 2 or 3 from the drop-down menu.

The default is 3.

UID Attribute Specifies the name of the attribute in the LDAP directory that contains the user login names to be authenticated by the DataFabric Manager server.

The default is UID.

GID Attribute Specifies a value that assigns DataFabric Manager server group membership to LDAP users based on an attribute and value specified in their LDAP user objects.

UGID Attribute If the LDAP users are included as members of a GroupOfUniqueNames object in the LDAP directory, this option enables you to assign DataFabric Manager server group membership to them based on a specified attribute in that GroupOfUniqueNames object.

The default is CN.

Member Attribute Specifies the attribute name that your LDAP server uses to store information about the individual members of a group.

The default is uniqueMember.

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

LDAP Servers area

You can use the LDAP Servers setup option to identify the LDAP server that the DataFabric Manager server queries for authentication information.

• Options on page 477
• Command buttons on page 478

Options

The LDAP server setting options are as follows:


Address or Hostname Displays the IP address or host name of the LDAP server that is used to authenticate the user on the DataFabric Manager server.

Port Displays the port number of the LDAP server.

Last Used Displays the date and timestamp of the most recent authentication success.

Last Failed Displays the date and timestamp of the most recent permission failure.

Command buttons

The command buttons enable you to add, delete, save, or cancel the setup options.

Add Adds a new LDAP server.

Delete Deletes the selected LDAP server.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

Monitoring setup options

Understanding monitoring options

The DataFabric Manager server monitoring process

The DataFabric Manager server discovers the storage systems supported on your network. The DataFabric Manager server periodically monitors data that it collects from the discovered storage systems, such as CPU usage, interface statistics, free disk space, qtree usage, and chassis environmental status. The DataFabric Manager server generates events when it discovers a storage system, when the status is abnormal, or when a predefined threshold is breached. If configured to do so, the DataFabric Manager server sends a notification to a recipient when an event triggers an alarm.

The following flow chart illustrates the DataFabric Manager server monitoring process.


[Flowchart summary] Start: the DataFabric Manager server discovers a storage system, then periodically collects data from it. If abnormal status is received from the storage system, the server generates an event. If an alarm is configured for the event, the server generates the alarm. If the alarm is not acknowledged and repeat notification is configured for the alarm, the notification is repeated.
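The decision flow above can be sketched as a small function (assumed names, illustrative only, not actual server code):

```python
# Given the outcome of one monitoring pass, return the actions the
# flow described above would take.
def handle_status(abnormal, alarm_configured, acknowledged, repeat_configured):
    actions = []
    if not abnormal:
        return actions              # nothing abnormal: keep collecting data
    actions.append("generate event")
    if not alarm_configured:
        return actions              # event recorded, but no alarm to raise
    actions.append("generate alarm")
    if not acknowledged and repeat_configured:
        actions.append("repeat notification")
    return actions

print(handle_status(True, True, False, True))
# ['generate event', 'generate alarm', 'repeat notification']
```

An acknowledged alarm, or one without repeat notification configured, stops after the single alarm.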


SNMP queries

The DataFabric Manager server uses periodic SNMP queries to collect data from the storage systems it discovers. The data is reported by the DataFabric Manager server in the form of tabular and graphical reports and event generation.

The time interval at which an SNMP query is sent depends on the data being collected. For example, although the DataFabric Manager server pings each storage system every minute to ensure that the storage system is reachable, the amount of free space on the disks of a storage system is collected every 30 minutes.
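Interval-driven polling of this kind can be sketched as follows (illustrative only, not server code; the metric names and intervals mirror the example above):

```python
from datetime import datetime, timedelta

# Each metric has its own collection interval, so one scheduling pass
# simply polls whatever is due.
INTERVALS = {
    "ping": timedelta(minutes=1),             # reachability check
    "disk_free_space": timedelta(minutes=30), # free-space collection
}

def due_metrics(last_polled, now):
    """Return the metrics whose interval has elapsed since the last poll."""
    return [m for m, t in last_polled.items() if now - t >= INTERVALS[m]]

now = datetime(2011, 7, 1, 12, 0)
last = {"ping": now - timedelta(minutes=2),
        "disk_free_space": now - timedelta(minutes=10)}
print(due_metrics(last, now))  # ['ping'] -- disk free space is not due yet
```

Only the ping is due in this pass; the free-space query waits until its longer interval elapses.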

Guidelines for changing monitoring intervals

Although you should generally keep the default values, you might need to change some of the options to suit your environment. All the monitoring option values apply to all storage systems in all groups.

If you decrease the monitoring intervals, you receive more real-time data. However, the DataFabric Manager server queries the storage systems more frequently, thereby increasing the network traffic and the load on the DataFabric Manager server and the storage systems responding to the queries.

If you increase the monitoring interval, the network traffic and the storage system load are reduced. However, the reported data might not reflect the current status or condition of a storage system.
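The trade-off is roughly linear, as this back-of-the-envelope sketch shows (the system count of 200 is a hypothetical example):

```python
# Estimate how many queries per hour one metric generates across a
# monitored fleet: halving the interval doubles the load.
def queries_per_hour(num_systems: int, interval_minutes: float) -> float:
    return num_systems * (60 / interval_minutes)

print(queries_per_hour(200, 30))  # 400.0 at a 30-minute interval
print(queries_per_hour(200, 15))  # 800.0 -- fresher data, double the load
```

A calculation like this can help you judge whether a shorter interval is worth the extra traffic for your fleet size.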

Managing monitoring options

Editing storage options for monitoring

Although the DataFabric Manager server is configured with defaults that enable you to manage the global default threshold values immediately, you might need to change some of the storage options to suit your environment. You can change the storage options from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the Monitoring option, then click the Storage option.

3. In the Monitoring Storage area, specify the new values, as required.

4. Click Save and Close.


Related concepts

Guidelines for changing monitoring intervals on page 480

Related references

Administrator roles and capabilities on page 506

Editing protection options for monitoring

Although the DataFabric Manager server is configured with defaults that enable you to manage the global default threshold values immediately, you might need to change some of the protection options to suit your environment. You can change the protection options from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the Monitoring option, then click the Protection option.

3. In the Monitoring Protection area, specify the new values, as required.

4. Click Save and Close.

Related concepts

Guidelines for changing monitoring intervals on page 480

Related references

Administrator roles and capabilities on page 506

Editing network options for monitoring

Although the DataFabric Manager server is configured with defaults that enable you to manage the global default threshold values immediately, you might need to change some of the network options to suit your environment. You can change the network options from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.


Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the Monitoring option, then click the Networking option.

3. In the Monitoring Networking area, specify the new values, as required.

4. Click Save and Close.

Related concepts

Guidelines for changing monitoring intervals on page 480

Related references

Administrator roles and capabilities on page 506

Editing inventory options for monitoring

Although the DataFabric Manager server is configured with defaults that enable you to manage the global default threshold values immediately, you might need to change some of the inventory options to suit your environment. You can change the inventory options from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the Monitoring option, then click the Inventory option.

3. In the Monitoring Inventory area, specify the new values, as required.

4. Click Save and Close.

Related concepts

Guidelines for changing monitoring intervals on page 480

Related references

Administrator roles and capabilities on page 506


Editing system options for monitoring

Although the DataFabric Manager server is configured with defaults that enable you to manage the global default threshold values immediately, you might need to change some of the system options to suit your environment. You can change the system options from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the Monitoring option, then click the System option.

3. In the Monitoring System area, specify the new values, as required.

4. Click Save and Close.

Related concepts

Guidelines for changing monitoring intervals on page 480

Related references

Administrator roles and capabilities on page 506

Editing ping intervals

You can perform a ping operation to ensure that a storage object is available.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click the Setup Options option.

2. In the Setup Options dialog box, click the Monitoring option.

3. Click the Networking option.

4. In the Monitoring Networking area, specify the following settings:

• The ping monitoring interval
• The ping method that the DataFabric Manager server uses to contact a storage object


• The ping timeout interval
• The ping retry delay

The ping is declared successful only if the reply is received before the ping timeout interval.

5. Click Save and Close.
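The timeout and retry-delay settings above interact as this sketch shows (probe() is a hypothetical stand-in for the server's actual ping method; this is not NetApp code):

```python
import time

def ping_with_retries(probe, timeout=3.0, retry_delay=3.0, retries=2):
    """Declare success only if a reply arrives within the timeout."""
    for attempt in range(retries + 1):
        if probe(timeout):           # True if a reply arrived before timeout
            return True
        if attempt < retries:
            time.sleep(retry_delay)  # wait before retrying the host
    return False                     # host considered unresponsive

# Simulated probe that answers only on the second attempt:
replies = iter([False, True])
print(ping_with_retries(lambda t: next(replies), retry_delay=0))  # True
```

A longer retry delay reduces false Host Down events on flaky networks, at the cost of slower detection of a genuinely down host.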

Related references

Administrator roles and capabilities on page 506

Monitoring Networking area on page 486

Page descriptions

Monitoring Storage area

You can use the Setup Options dialog box to configure and customize global options for storage objects.

• Options on page 484
• Command buttons on page 485

Options

You can use the monitoring options to configure monitoring intervals for various storage objects that are monitored by the DataFabric Manager server.

Cluster Interval Specifies the time at which the DataFabric Manager server gathers status information from each cluster.

The default is 15 minutes.

Cluster Failover Interval Specifies the time at which the DataFabric Manager server gathers high-availability configuration status information from each controller.

The default is 5 minutes.

vFiler Unit Interval Specifies the time at which the DataFabric Manager server gathers information about vFiler units that are configured or destroyed on hosting storage systems.

The default is 1 hour.

Vserver Interval Specifies the time at which the DataFabric Manager server gathers information about virtual servers on hosting cluster systems.

The default is 1 hour.

User Quota Interval Specifies the time at which the DataFabric Manager server collects the user quota information from the monitored storage systems.

The default is 1 day.


Note: The process of collecting the user quota information is resource-intensive for storage systems. Decreasing the User Quota Interval option to a low value to increase the frequency of collection might affect the performance of the monitored storage systems.

Qtree Interval Specifies the time at which the DataFabric Manager server gathers statistics about monitored qtrees.

The default is 8 hours.

LUN Interval Specifies the time at which the DataFabric Manager server gathers information about LUNs on each storage system.

The default is 30 minutes.

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves all the configuration settings and then closes the Setup Options dialog box.

Cancel Enables you to undo all the configuration settings and then closes the SetupOptions dialog box.

Save Saves all your configuration settings.

Monitoring Protection area

You can use the Setup Options dialog box to configure and customize global options for the protection policies of your storage objects. You can configure the dataset details, resource pool space, and the time at which the DataFabric Manager server gathers SnapMirror, Snapshot, and SnapVault information from the storage system.

• Options on page 485
• Command buttons on page 486

Options

You can use the monitoring options to configure monitoring intervals for various protection objects that are monitored by the DataFabric Manager server.

Dataset Conformance Specifies the time at which the DataFabric Manager server checks whether each dataset conforms to its protection policy.

The default is 1 hour.

Dataset Disaster Recovery Status Specifies the time at which the DataFabric Manager server checks the disaster recovery status of each dataset.

The default is 15 minutes.


Dataset Protection Status Specifies the time at which the DataFabric Manager server checks the protection status of each dataset.

The default is 15 minutes.

Resource Pool Space Specifies the time at which the DataFabric Manager server checks the space usage in each resource pool.

The default is 1 hour.

SnapMirror Specifies the time at which the DataFabric Manager server gathers SnapMirror information from each storage system.

The default is 30 minutes.

SnapShot Specifies the time at which the DataFabric Manager server gathers Snapshot information from each storage system.

The default is 30 minutes.

SnapVault Specifies the time at which the DataFabric Manager server gathers SnapVault information from each storage system.

The default is 30 minutes.

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves all the configuration settings and then closes the Setup Options dialog box.

Cancel Enables you to undo all the configuration settings and then closes the SetupOptions dialog box.

Save Saves all your configuration settings.

Monitoring Networking area

You can use the Setup Options dialog box to configure and customize global options for networking. You can configure details about the ping methods, SNMP retries and timeout, SAN host, and Fibre Channel switch.

• Options on page 486
• Command buttons on page 488

Options

You can use the monitoring options to configure monitoring intervals for various networking objects that are monitored by the DataFabric Manager server.

Ping Specifies the time at which the DataFabric Manager server pings a storage system.


A short ping interval is recommended when you want to quickly detect whether a storage system is available. The minimum ping monitoring interval is 1 second.

The default is 1 minute.

Note: The actual ping interval depends on variables such as networking conditions and the number of monitored hosts. Therefore, the ping interval can be longer than the specified value.

Ping Method Specifies the ping method that the DataFabric Manager server uses to check that a storage system is accessible. By default, the ICMP echo and SNMP ping method is selected.

You can select one of the following options from the drop-down menu:

• ICMP echo and SNMP: Enables the DataFabric Manager server to use ICMP ping first, and then SNMP, to determine if the storage system is running. ICMP echo and SNMP is valid for all storage systems.

• ICMP echo: Enables the DataFabric Manager server to ICMP ping a storage system. ICMP echo is valid for all storage systems.

• HTTP (port 80): Enables HTTP connect on port 80 of the storage system. HTTP is valid for all storage systems.

Note: Do not use the HTTP ping method on networks where transparent caching is enabled, to avoid a transparent redirect when a storage system is down. In this scenario, the DataFabric Manager server mistakenly assumes that the storage system is running.

• SNMP (port 161): Enables SNMP service to listen to the storage system on port 161. SNMP is valid for all storage systems.

• NDMP (port 10000): Enables NDMP service to listen to the storage system on port 10000.

• All methods

The default is ICMP echo and SNMP.

Ping Timeout Specifies the time after which a storage system is considered to be not responsive if the DataFabric Manager server does not receive a reply from the storage system to a ping request.

The default is 3 seconds.

Ping Retry Delay Specifies the time period that the ping utility remains inactive before retrying an unresponsive host.


The default is 3 seconds.

You might want to select a different value if you are experiencing network difficulties that cause false Host Down events.

SNMP Retries Specifies the number of times the SNMP monitor attempts to reconnect to a device after an SNMP timeout occurs.

The default is 4.

If the number of retries is exceeded, the DataFabric Manager server generates a Host SNMP Not Responding event.

SNMP Timeout Specifies the time that can elapse before an SNMP timeout occurs.

The default is 5 seconds.

When an SNMP timeout occurs, the SNMP monitor attempts to reconnect to the device the number of times specified in the SNMP Retries option. If the number of retries is exceeded, the DataFabric Manager server generates a Host SNMP Not Responding event.

Fibre Channel Specifies the time at which the DataFabric Manager server gathers status information from each Fibre Channel switch.

The default is 5 minutes.

SAN Host Specifies the time at which the DataFabric Manager server gathers information from each SAN host.

The default is 5 minutes.
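The SNMP timeout-and-retry behavior described above can be sketched as follows (the query function is simulated and the code is illustrative only, not server code):

```python
# Retry an SNMP query up to `retries` times after the initial attempt;
# if every attempt times out (returns None), report the event the
# server would generate.
def snmp_poll(query, retries=4):
    for _ in range(retries + 1):  # initial attempt plus retries
        reply = query()
        if reply is not None:
            return reply
    return "Host SNMP Not Responding"  # event raised after retries exhausted

# Simulated device that times out twice, then answers:
attempts = iter([None, None, "sysUpTime=42"])
print(snmp_poll(lambda: next(attempts)))  # sysUpTime=42
```

With the default of 4 retries, a device gets five chances to answer before the Host SNMP Not Responding event fires.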

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves all the configuration settings and then closes the Setup Options dialog box.

Cancel Enables you to undo all the configuration settings and then closes the SetupOptions dialog box.

Save Saves all your configuration settings.

Monitoring Inventory area

You can use the Setup Options dialog box to configure and customize global options for storage inventory, such as the time at which the DataFabric Manager server gathers information about CPU usage, disk space and status, environmental status, file system, and global status from the storage system.

• Options on page 489
• Command buttons on page 489

Options

You can use the monitoring options to configure monitoring intervals for various inventory objects that are monitored by the DataFabric Manager server.

CPU Specifies the time at which the DataFabric Manager server gathers CPU usage information from each storage object.

The default is 5 minutes.

Disk Free Space Specifies the time at which the DataFabric Manager server gathers available disk space information from each storage object.

In addition to monitoring free disk space on storage systems, the DataFabric Manager server also monitors disk space on the workstation. If the workstation is running low on disk space, monitoring is turned off.

The default is 30 minutes.

Disk Specifies the time at which the DataFabric Manager server gathers disk status information, such as disks that are not functioning or the spare disk count.

The default is 4 hours.

Environmental Specifies the time at which the DataFabric Manager server gathers environmental status information from each storage object.

The default is 5 minutes.

File System Specifies the time at which the DataFabric Manager server gathers file system information from each storage object.

The default is 15 minutes.

Global Status Specifies the time at which the DataFabric Manager server gathers global status information from each storage object.

The default is 10 minutes.

Interface Specifies the time at which the DataFabric Manager server gathers network interface information from each storage object.

The default is 15 minutes.
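The defaults above amount to a simple schedule table. The following sketch computes when each monitor is next due; the monitor names and data structure are illustrative, not the DataFabric Manager server's internal representation.

```python
from datetime import datetime, timedelta

# Default monitoring intervals from the table above.
# The keys are illustrative names, not actual DataFabric Manager option names.
DEFAULT_INTERVALS = {
    "cpu": timedelta(minutes=5),
    "disk_free_space": timedelta(minutes=30),
    "disk": timedelta(hours=4),
    "environmental": timedelta(minutes=5),
    "file_system": timedelta(minutes=15),
    "global_status": timedelta(minutes=10),
    "interface": timedelta(minutes=15),
}

def next_poll(monitor, last_poll, intervals=DEFAULT_INTERVALS):
    """Return the time at which the given monitor is next due to run."""
    return last_poll + intervals[monitor]
```

For example, a CPU monitor last run at 12:00 is next due at 12:05, while a disk status monitor last run at 12:00 is next due at 16:00.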

Command buttons

The command buttons enable you to save or cancel the setup options.


Save and Close Saves all the configuration settings and then closes the Setup Options dialog box.

Cancel Enables you to undo all the configuration settings and then closes the Setup Options dialog box.

Save Saves all your configuration settings.

Monitoring System area

You can use the Setup Options dialog box to configure and customize global options for systems, such as the details for host agent discovery, configuration conformance of storage systems, RBAC monitoring, gathering license status, gathering information about the number of operations, and gathering storage system information.

• Options on page 490
• Command buttons on page 491

Options

You can use the monitoring options to configure monitoring intervals for various system objects that are monitored by the DataFabric Manager server.

Agent Specifies the time at which the DataFabric Manager server gathers status information from each host agent.

The default is 2 minutes.

Config Conformance Specifies the time at which the DataFabric Manager server verifies that the configuration on the storage system conforms with the configuration provided by the DataFabric Manager server.

The default is 4 hours.

Host RBAC Specifies the time interval at which the host RBAC Monitor should run.

The default is 1 day.

License Specifies the time at which the DataFabric Manager server gathers license status information from each appliance.

The default is 4 hours.

Operation Count Specifies the time at which the DataFabric Manager server gathers information about the number of operations that have taken place.

The default is 10 minutes.

SRM Host Specifies the time at which the DataFabric Manager server gathers information from each SRM host.

The default is 10 minutes.

System Information Specifies the time at which the DataFabric Manager server gathers system information from each system.

The default is 1 hour.

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves all the configuration settings and then closes the Setup Options dialog box.

Cancel Enables you to undo all the configuration settings and then closes the Setup Options dialog box.

Save Saves all your configuration settings.

Management setup options

Understanding management options

Managed host options

You can configure the managed host options to ensure secure communication between the DataFabric Manager server and storage systems.

You can select conventional (HTTP) or secure (HTTPS) administration transport for API communication and conventional (RSH) or secure (SSH) login protocol for the login connection. You can use an RSH or SSH connection for executing commands on a storage system. You can use an SSH connection for executing commands on a remote LAN module (RLM) card.

You can set managed host options globally (for all storage systems) or individually (for specific storage systems). If you set storage system-specific options, the DataFabric Manager server retains information about the security settings for each managed storage system. It references this information when choosing one of the following options to connect to the storage system:

• HTTP or HTTPS
• RSH or SSH
• Administration Port
• hosts.equiv authentication

Guidelines for changing managed host options

You can change managed host options, such as the login protocol, transport protocol, port, and hosts.equiv option.

Login Protocol This option enables you to set the login protocols (RSH or SSH) that the DataFabric Manager server uses when connecting to the managed hosts.

These connections are used for the following:

• Login connections
• Active/active configuration operations
• The dfm run command for running commands on the storage system

Change the default value if you want a secure connection for active/active configuration operations and for running commands on the storage system.

Administration Transport This option enables you to select a conventional (HTTP) or secure (HTTPS) connection to monitor and manage storage systems through APIs (XML).

Change the default value if you want a secure connection for monitoring and management.

Administration Port This option enables you to configure the administration port that, along with the administration transport, is used to monitor and manage storage systems.

If you do not configure the port option at the storage system level, the default value for the corresponding protocol is used.

hosts.equiv option This option enables users to authenticate storage systems when the user name and password are not provided.

You must change the default value if you have selected the global default option and if you do not want to set authentication for a specific storage system.

Note: If you do not set the transport and port options for a storage system, then the DataFabric Manager server uses SNMP to get storage system-specific transport and port options for communication. If SNMP fails, then the DataFabric Manager server uses the options set at the global level.
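The fallback described in the note above can be sketched as follows. The function and option names are illustrative; this models the documented selection order, not the server's actual code.

```python
def resolve_connection_options(system_options, snmp_lookup, global_options):
    """Choose transport/port options for one storage system.

    Preference order, per the note above:
    1. transport and port options set for that specific storage system
    2. options retrieved from the storage system over SNMP
    3. options set at the global level
    """
    if system_options.get("transport") and system_options.get("port"):
        return system_options
    snmp_options = snmp_lookup()  # returns None if SNMP fails
    if snmp_options is not None:
        return snmp_options
    return global_options
```

For a system with no per-system options and unreachable SNMP, the global settings win.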

Managing management options

Editing managed host options

You can modify the managed host options to control the connection between the DataFabric Manager server and storage systems from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the Management option, then click the Managed Host option.


3. In the Management Managed Host area, specify the new values, as required.

4. Click Save and Close.

Related concepts

Guidelines for changing managed host options on page 491

Related references

Administrator roles and capabilities on page 506

Editing Host Agent options

The DataFabric Manager server stores authentication credentials globally for all host agents, based on the Host Agent Options settings. You can change a login name or password for an individual host agent from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the Management option, then click Host Agent option.

3. In the Management Host Agent area, specify the new values, as required.

4. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Page descriptions

Management Client area

You can use the Setup Options dialog box for clients to configure a secure connection between the DataFabric Manager server and your storage systems and browsers. You can enable HTTP or HTTPS for establishing the secure connection.

• Options on page 494
• Command buttons on page 494


Options

You can configure the following options:

Enable HTTP Enables you to connect the clients to the DataFabric Manager server. By default, HTTP is enabled with the default port 8080.

Enable HTTPS Enables you to connect the clients to the DataFabric Manager server. By default, HTTPS is enabled with the default port 8443.

Note: To enable HTTPS on the DataFabric Manager server, you must configure SSL using the dfm ssl server setup command.

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

Management Managed Host area

You can use the Managed Host option in the Setup Options dialog box to configure connections between the DataFabric Manager server and the storage systems. You can select a conventional (HTTP) or secure (HTTPS) administration transport protocol, and a conventional (RSH) or secure (SSH) login protocol.

• Options on page 494
• Command buttons on page 495

Options

You can configure the following options:

Login Protocol Specifies the login protocol that the DataFabric Manager server must use when connecting to managed hosts.

The default is Remote Shell (RSH).

Administration Transport Specifies the transport protocol that the DataFabric Manager server must use when connecting to storage systems.

The default is HTTP.

Administration Port Specifies the port that the DataFabric Manager server must use when connecting to storage systems.

The default is port 80.

Enable hosts.equiv Specifies the use of the hosts.equiv option to authenticate the storage system. By default, the hosts.equiv option is disabled. For more information about configuring the hosts.equiv option, see the Operations Manager Administration Guide.

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

Management Host Agent area

You can use the Setup Options dialog box for host agents to configure login credentials for the NetApp Host Agent software. The DataFabric Manager server interacts with the NetApp Host Agent residing on remote Windows, Solaris, or Linux workstations or servers (called hosts) to recursively examine the directory structures (paths) you have specified.

• Options on page 495
• Command buttons on page 496

Options

You can configure the following options:

Login Specifies the administrator access level that the DataFabric Manager server uses to connect with the host agent.

The default login option is guest.

You can specify the following login options for NetApp Host Agent:

• guest
Enables the user to log in to the host agent to monitor its LUNs.

• admin
Enables the user to log in to the host agent to monitor and manage its LUNs and successfully execute file walk on directory structures (paths).

Monitoring Password Specifies the password that the NetApp Host Agent uses to authenticate a "guest" user that has monitoring access privilege on the host agent. This value is the NetApp Host Agent Software option, Monitoring API Password.

Management Password Specifies the password that the NetApp Host Agent uses to authenticate an "admin" user account that has monitoring and management access privileges on the host agent. This value is the NetApp Host Agent Software option, Management API Password.

Administration Transport Specifies the transport protocol that runs on the host agent. You can select either HTTP or HTTPS from the drop-down menu.

The default is HTTP.

Administration Port Specifies the port used for communication between the DataFabric Manager server and the NetApp Host Agent.

The default is port 4092.

CIFS Account Specifies the CIFS account name. This information is required for host agents running Windows during file walk on CIFS shares.

CIFS Password Specifies the Host Agent CIFS password. This information is required for Windows during file walk on CIFS shares.

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

Systems setup options

Understanding system options

Overview of system options

You can configure alarm settings, system properties such as owner e-mail, audit log settings, credential cache settings, storage system configurations, script plug-ins, and table settings by using the system setup options.

You can configure the following system options:

Alarms You can configure the following alarm properties:

• Mail server for sending the e-mail notification
• Time interval for deleting events
• SNMP trap (if you want to receive SNMP traps from storage systems)
• User quota alert details

Annotations You can add, edit, or delete user-defined properties such as the e-mail address of the owner, owner name, and resource tag information.

Audit Log You can use the audit log option to set the global auditLogForever option to keep the audit log files forever in the default log directory of the DataFabric Manager server. You can view the specific operation in the audit.log file and determine who performed certain actions from the CLI.

Credential Cache You can use the credential cache option to specify the Time-To-Live (TTL) for web responses cached by the DataFabric Manager server.

Storage System Configuration You can use the storage system configuration option to manage local configuration file changes on all storage systems that are discovered and managed by the DataFabric Manager server.

Script Plugins You can configure the script plug-ins to specify the search path that the DataFabric Manager server uses to find script interpreters.

Paged Tables You can configure the number of rows for display in a table. This is applicable only to the Operations Manager console.

Note: This option is not applicable to OnCommand console list tables. It is applicable only to the Operations Manager console tables.

Configuring system options

Adding custom annotations for alarm recipients

You can add customized annotations for the DataFabric Manager server that can be associated with storage systems, SAN hosts, FC switches, volumes, qtrees, LUNs, groups, and quota users. For example, you can include properties such as the asset number, department code, location name, or support contact for any storage object.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the Systems option, then click the Annotations option.

3. In the Systems Annotations area, click Add.

4. Type the name of your annotation in the Add System Annotation dialog box.

5. Click OK.

6. Click Save and Close.


Related references

Administrator roles and capabilities on page 506

Deleting custom annotations for alarm recipients

You can delete the customized annotations that you created. You cannot delete system-defined annotations such as the owner name, owner e-mail, and resource tag.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the Systems option, then click the Annotations option.

3. In the Systems Annotations area, select the annotation you want to delete.

4. Click Delete.

5. Click Yes.

6. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Managing system options

Editing alarm settings

You can modify the alarm settings such as mail server details, event purge intervals, SNMP trap details, and user quota alerts from the Setup Options dialog box.

Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm your authorization in advance.

Steps

1. Click the Administration menu, then click Setup Options.

2. In the Setup Options dialog box, click the Systems option, then click Alarms option.

3. In the Systems Alarms area, specify the new values, as required.


4. Click Save and Close.

Related references

Administrator roles and capabilities on page 506

Page descriptions

Systems Alarms area

You can use the Setup Options dialog box to configure the DataFabric Manager server to send event notifications in different formats. You can also configure the mail server, event purge interval, SNMP trap details, and user quota alert details.

• Options on page 499
• Command buttons on page 500

Options

You can configure the following options:

E-mail Specifies the following e-mail settings for the notification of alarms:

• Mail Server
Specifies the name of your mail server.

• From Field
Specifies the e-mail address of the owner who is marked on all the e-mails.

Events Specifies the event evaluation details.

• Purge Interval
Specifies the period of time after which events are removed from the DataFabric Manager server database. The DataFabric Manager server evaluates events for deletion on a daily basis.

The default is 25.71 weeks (about 180 days).

SNMP Traps Specifies the SNMP trap settings to be received from storage systems. By default, the SNMP trap is enabled.

• Listener Port
Specifies the UDP port on which the DataFabric Manager server Trap Listener receives traps. To use this feature, you must also configure the DataFabric Manager server as a trap destination in the systems you are monitoring. The DataFabric Manager server Trap Listener communicates through port 162, by default.

• Window Size
Displays the SNMP Maximum Traps Received per window option to determine the number of SNMP traps that can be received by the trap listener within a specified period of time. By default, 5 minutes is specified.

• Max Traps/Window
Specifies the maximum number of SNMP traps that the workstation receives within the time specified in the SNMP Trap Window Size option. The trap listener attempts to limit the incoming rate of traps to this value. The default is 250 traps per window.

User Quota Alerts Enables you to configure or change the values of options that specify e-mail domains and enable or disable alerts based on quota events. By default, the user quota alerts option is enabled.

• Default E-mail Domain
Specifies the domain that the DataFabric Manager server appends to the user name when sending a user quota alert.
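Taken together, the Window Size and Max Traps/Window options describe a fixed-window rate limit on incoming traps. The following is a rough model of that behavior, using the defaults above (5-minute window, 250 traps); it is an illustration, not the trap listener's actual implementation.

```python
class TrapWindowLimiter:
    """Accept at most max_traps per window of window_seconds."""

    def __init__(self, window_seconds=300, max_traps=250):
        self.window_seconds = window_seconds
        self.max_traps = max_traps
        self.window_start = None
        self.count = 0

    def accept(self, now):
        """Return True if a trap arriving at time `now` (seconds) is processed."""
        if self.window_start is None or now - self.window_start >= self.window_seconds:
            # The previous window has expired; start a new one.
            self.window_start = now
            self.count = 0
        if self.count < self.max_traps:
            self.count += 1
            return True
        return False  # over the per-window limit; the trap is not processed
```

With the defaults, trap number 251 arriving inside the same 5-minute window is rejected; the counter resets when a new window begins.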

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

Systems Annotations area

You can use the Setup Options dialog box to view and modify system-defined annotations such as the owner e-mail address, owner name, and resource tag.

• Options on page 500
• Command buttons on page 501

Options

You can configure the following options:

Name Displays the list of comment field names that include both system-defined and user-defined annotations:

• ownerEmail
Specifies the system-defined annotation used for the owner's e-mail address.

• ownerName
Specifies the system-defined annotation used for the name of the owner.

• resourceTag
Specifies the system-defined annotation used for the resource tag.

System Defined Displays Yes if the corresponding annotation is system generated, and No if it is not.

Command buttons

The command buttons enable you to save or cancel the setup options.

Add Adds a new system annotation name.

Edit Edits the selected system annotations.

Delete Deletes the selected system annotations.

Save and Close Saves the configuration settings for the selected option.

Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.

Systems Miscellaneous area

You can use the Setup Options dialog box to configure miscellaneous settings such as audit log options, credential TTL cache, storage system configuration, script plug-ins, and options to preserve your local configuration settings.

• Options on page 501
• Command buttons on page 502

Options

You can configure the following options:

Audit Log Enables you to set the global option auditLogForever to keep the audit log files forever in the default log directory of the DataFabric Manager server.

You can view the specific operation in the audit.log file and determine who performed certain actions in the OnCommand console, the command-line interface, and the APIs. By default, the audit log option is enabled.

• Keep Log Files
Enables you to keep the audit log files forever in the default log directory of the DataFabric Manager server.

Credential Cache Enables you to specify the Time-To-Live (TTL) for web responses cached by the DataFabric Manager server.

When a user authenticates to the DataFabric Manager server, the DataFabric Manager server caches the web response and reuses the information to satisfy subsequent authentication queries for the amount of time specified in this option.

• TTL
Displays the Time-To-Live for LDAP server responses cached by the DataFabric Manager server. By default, LDAP and Windows authentication information is cached for 20 minutes.

Storage System Configuration Enables you to manage local configuration file changes on all storage systems that are recognized and managed by the DataFabric Manager server.

• Preserve Local Configuration Changes
Enables you to keep the local configuration file changes on the storage systems. By default, this option is selected.

Note: The "Preserve Local Configuration Changes" setting is a global option and affects all storage systems that are managed by the DataFabric Manager server.

Script Plugins Enables you to specify the search path that the DataFabric Manager server uses to find script interpreters. The value you enter for this option must be a string that contains multiple paths, delimited by colons or semicolons, depending on your DataFabric Manager server's platform.

• Search Path
Displays the path for locating the script interpreters. The DataFabric Manager server uses this path information before using the system path when searching for script interpreters.

Paged Tables Enables you to specify the number of rows in a report for display.

• Rows Per Page
Displays the number of rows displayed per page. By default, the number of rows selected for display is 20.

Note: This option is applicable only to the Operations Manager console reports.
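The Script Plugins search path described above is a single string holding several paths, split on semicolons on Windows and colons elsewhere. The following sketch shows that parsing rule; it is illustrative, and the server's own parsing may differ in detail.

```python
import os

def split_search_path(value, platform=os.name):
    """Split a plug-in search-path string into individual paths.

    Windows ("nt") uses semicolons as the delimiter; other platforms
    use colons. Empty entries are dropped.
    """
    delimiter = ";" if platform == "nt" else ":"
    return [path for path in value.split(delimiter) if path]
```

For example, on a UNIX-based server the value "/opt/scripts:/usr/local/bin" yields two interpreter directories, while a Windows server would use "C:\Perl\bin;C:\Python27".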

Command buttons

The command buttons enable you to save or cancel the setup options.

Save and Close Saves the configuration settings for the selected option.


Cancel Does not save the recent changes and closes the Setup Options dialog box.

Save Saves the configuration settings for the selected option.


Security and access

Understanding RBAC

What RBAC is

RBAC (role-based access control) provides the ability to control who has access to various features and resources in the DataFabric Manager server.

How RBAC is used

Applications use RBAC to authorize user capabilities. Administrators use RBAC to manage groups of users by defining roles and capabilities.

For example, if you need to control user access to resources, such as groups, datasets, and resource pools, you must set up administrator accounts for them. Additionally, if you want to restrict the information these administrators can view and the operations they can perform, you must apply roles to the administrator accounts you create.

Note: RBAC permission checks occur in the DataFabric Manager server. RBAC must be configured using the Operations Manager console or command-line interface.

How roles relate to administrators

Role management allows the administrator who logs in with super-user access to restrict the use of certain DataFabric Manager server functions to other administrators.

The super-user can assign roles to administrators on an individual basis, by group, or globally (and for all objects in DataFabric Manager server).

You can list the description of an operation by using the dfm role operation list [ -x ] [ <operation-name> ] command.

The ability to configure administrative users and roles is supported in the Operations Manager console, which can be accessed from the OnCommand console Administration menu.

Example of how to use RBAC to control access

This example describes how a storage architect can use RBAC to control the operations that can be performed by a virtual server administrator.

Suppose you are a storage architect and you want to use RBAC to enable the virtual server administrator (abbreviated to "administrator") to do the following operations: see the VMs associated with the servers managed by the administrator, create datasets to include them, and attach storage services and application policies to these datasets to back up the data.


Assume for this example that the host service registration, validation, and so on have been successfully completed and that DataFabric Manager server has discovered the virtual server and its VMs, datastores, and so on. However, at this point, only you and other administrators with the global read permission in DataFabric Manager server can see these VMs. To enable the virtual server administrator to perform the desired operations, you need to perform the following steps:

1. Add the administrator as a user.
You add the administrator as an authorized DataFabric Manager server user by using the Operations Manager console. If DataFabric Manager server is running on Linux, then you must add the administrator's UNIX identity or LDAP identity. If DataFabric Manager server is running on Microsoft Windows, you can add an Active Directory user group that the administrator belongs to, which allows all administrators in that user group to log on to DataFabric Manager server.

2. Create a resource group.
You next create a resource group by using the Operations Manager console. For this example, we call the group "virtual admin resource group." Then you add the virtual server and its objects to the resource group. Any new datasets or policies that the administrator creates will be placed in this resource group.

3. Assign a role to the administrator.
Assign a role, by using the Operations Manager console, that gives an appropriate level of control to the administrator. For example, if there is only one administrator, you can assign the default GlobalApplicationProtection role or you can create a custom role by choosing custom capabilities. If there are multiple administrators, then a few of the capabilities assigned to any given administrator should be at that administrator's group level and not on the global level. That prevents an administrator from reading or modifying objects owned by other administrators.
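The point of step 3 — group-scoped capabilities keep one administrator out of another's objects, while global capabilities do not — can be modeled with a simple scope check. The data model below is illustrative, not the DataFabric Manager server's schema.

```python
def has_capability(assignments, user, operation, object_group):
    """Return True if any assignment grants `operation` on the object's group.

    Each assignment is a (user, operation, scope) tuple, where scope is
    either "global" or the name of a resource group.  A group-scoped
    capability applies only to objects in that group.
    """
    for assignee, op, scope in assignments:
        if assignee == user and op == operation:
            if scope == "global" or scope == object_group:
                return True
    return False
```

A virtual server administrator whose capabilities are scoped to "virtual admin resource group" can operate on objects in that group but not on objects in other administrators' groups.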

Related tasks

Accessing the Users and Roles capability (RBAC) on page 375

Administrator roles and capabilities

The RBAC administrator roles determine the tasks you can perform in the OnCommand console.

One or more capabilities must be specified for every role, and you can assign multiple capabilities if you want the administrator to have more control than a specific role provides. For example, if you want an administrator to perform both the backup and restore operations, you can create and assign to the administrator a single role that has both of these capabilities.

You can use the Operations Manager console to create new roles and to customize the default global roles provided by the DataFabric Manager server and the client applications. For more information about configuring RBAC, see the OnCommand Operations Manager Administration Guide.

Note: If you want a role with global host service management capability, create a role with the following properties:

• The role inherits from the GlobalHostService role.


• The role includes the DFM.Database.Read operation on a global level.

Note: A user who is part of the local administrators group is treated as a super-user and automatically granted full control.

Default global roles

GlobalApplicationProtection Enables you to create and manage application policies, create datasets with application policies for local backups, use storage services for remote backups, perform scheduled and on-demand backups, perform restore operations, and generate reports.

GlobalBackup Enables you to initiate a backup to any secondary volume and ignore discovered hosts.

GlobalDataProtection Enables you to initiate a backup to any secondary volume; view backup configurations, events and alerts, and replication or failover policies; and import relationships into datasets.

GlobalDataset Enables you to create, modify, and delete datasets.

GlobalDelete Enables you to delete information in the DataFabric Manager server database, including groups and members of a group, monitored objects, custom views, primary and secondary storage systems, and backup relationships, schedules, and retention policies.

GlobalHostService Enables you to authorize, configure, and unregister a host service.

GlobalEvent Enables you to view, acknowledge, and delete events and alerts.

GlobalFullControl Enables you to view and perform any operation on any object in the DataFabric Manager server database and configure administrator accounts. You cannot apply this role to accounts with group access control.

GlobalMirror Enables you to create, destroy, and update replication or failover policies.

GlobalRead Enables you to view the DataFabric Manager server database, backup and provisioning configurations, events and alerts, performance data, and policies.

GlobalRestore Enables you to restore the primary data to a point in time or to a new location.

GlobalWrite Enables you to view or write both primary and secondary data to the DataFabric Manager server database.

GlobalResourceControl Enables you to add members to dataset nodes that are configured with provisioning policies.


GlobalProvisioning Enables you to provision primary dataset nodes and attach resource pools to secondary or tertiary dataset nodes. The GlobalProvisioning role also includes all the capabilities of the GlobalResourceControl, GlobalRead, and GlobalDataset roles for dataset nodes that are configured with provisioning and protection policies.

GlobalPerfManagement Enables you to manage views, event thresholds, and alarms apart from viewing performance information in Performance Advisor.

Related information

Operations Manager Administration Guide - http://now.netapp.com/NOW/knowledge/docs/DFM_win/dfm_index.shtml

Access permissions for the Virtual Infrastructure Administrator role

When you create a virtual infrastructure administrator, you must assign specific permissions to ensure that the administrator can view, back up, and recover the appropriate virtual objects.

A virtual infrastructure administrator role must have the following permissions for the resources:

Groups The VI administrator will need the following operation permissions for the group created for the VI administrator role.

DFM.Database All

DFM.BackManager All

DFM.ApplicationPolicy All

DFM.Dataset All

DFM.Resource Control

Policies The VI administrator will need the following operation permissions for each policy template, located under Local Policies, that you want the virtual administrator to be able to copy.

DFM.ApplicationPolicy Read

Storage services The VI administrator will need the following operation permissions for each of the storage services that you want to allow the VI administrator to use.

DFM.StorageService Attach, read, detach, and clear

Protection policies

These are the policies contained within the storage services that you selected above.

DFM.Policy All

Understanding authentication

Authentication methods on the DataFabric Manager server

The DataFabric Manager server uses the information available in the native operating system for authentication. The server does not maintain its own database of administrator names and passwords.

You can also configure the DataFabric Manager server to use the Lightweight Directory Access Protocol (LDAP). If you configure LDAP, the server uses it as the preferred method of authentication.

Authentication with LDAP

You can enable LDAP authentication on the DataFabric Manager server and configure it to communicate with your LDAP servers to retrieve relevant data.

The DataFabric Manager server provides predefined templates for the most common LDAP server types. These templates provide predefined LDAP settings that make the DataFabric Manager server compatible with your LDAP server.

Plug-ins

Hyper-V troubleshooting

Error: Vss Requestor - Backup Components failed with partial writer error.

Description This message occurs when you back up a dataset by using the Hyper-V plug-in. This error causes the backup to fail for some of the virtual machines in the dataset.

The following message appears:

Error: Vss Requestor - Backup Components failed with partial writer error.
Writer Microsoft Hyper-V VSS Writer involved in backup or restore operation reported partial failure. Writer returned failure code 0x80042336. Writer state is 5.
Application specific error information:
Application error code: 0x1
Application error message: -
Failed component information:
Failed component: VM GUID XXX
Writer error code: 0x800423f3
Application error code: 0x8004230f
Application error message: Failed to revert to VSS snapshot on the virtual hard disk 'volume_guid' of the virtual machine 'vm_name'. (Virtual machine ID XXX)

The following errors appear in the Windows Application event log on the Hyper-V host:

Volume Shadow Copy Service error: Unexpected error calling routine GetOverlappedResult. hr = 0x80070057, The parameter is incorrect.

Operation: Revert a Shadow Copy

Context: Execution Context: System Provider

Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details RevertToSnapshot [hr = 0x80042302, A Volume Shadow Copy Service component encountered an unexpected error. Check the Application event log for more information.].

Operation: Revert a Shadow Copy

Context: Execution Context: Coordinator

Corrective action

Retry the dataset backup.

Error: Failed to start VM. Job returned error 32768

Description After a successful restore operation, you might get an error message stating that your Hyper-V virtual machine did not restart. The Hyper-V plug-in gives this error because the virtual machine is not yet ready to start.

Corrective action

Currently the Hyper-V plug-in waits two seconds before restarting the virtual machine. You can configure a longer delay by adding the following attribute in the Windows registry:

Key: System\CurrentControlSet\Services\OnCommandHyperV\Parameters
Attribute (DWORD): vm_restart_sleep
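As an illustration only, the change could be captured in a .reg file like the following sketch. The value name comes from the corrective action above; the data shown (15 seconds) and the assumption that the value is interpreted in seconds are examples, not documented defaults.

```ini
Windows Registry Editor Version 5.00

; Hypothetical example: delay the VM restart by 15 seconds (0xf).
; Assumes vm_restart_sleep is interpreted in seconds.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\OnCommandHyperV\Parameters]
"vm_restart_sleep"=dword:0000000f
```

Importing the file (for example, by double-clicking it on the Hyper-V host) merges the value into the registry.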

Error: Failed to start VM. You might need to start the VM using Hyper-V Manager

Description After a successful restore operation, you might get an error message stating that your Hyper-V virtual machine did not restart. The Hyper-V plug-in gives this error because the virtual machine is not yet ready to start.

Corrective action

Currently the Hyper-V plug-in waits two seconds before restarting the virtual machine. You can configure a longer delay by adding the following attribute in the Windows registry:

Key: System\CurrentControlSet\Services\OnCommandHyperV\Parameters
Attribute (DWORD): vm_restart_sleep

Error: Vss Requestor - Backup Components failed. An expected disk did not arrive in the system

Description This message occurs when you back up a dataset using the Hyper-V plug-in and the following error appears in the Windows Application event log on the Hyper-V host:

A Shadow Copy LUN was not detected in the system and did not arrive.

LUN ID guid
Version 0x0000000000000001
Device Type 0x0000000000000000
Device Type Modifier 0x0000000000000000

Command Queueing 0x0000000000000001
Bus Type 0x0000000000000006
Vendor Id vendor
Product Id LUN
Product Revision number
Serial Number serial_number
Storage Identifiers
Version 0
Identifier Count 0

Operation:
Exposing Disks
Locating shadow-copy LUNs
PostSnapshot Event
Executing Asynchronous Operation

Context:
Execution Context: Provider
Provider Name: Data ONTAP VSS Hardware Provider
Provider Version: 6.1.0.4289
Provider ID: {ddd3d232-a96f-4ac5-8f7b-250fd91fd102}
Current State: DoSnapshotSet

Corrective action

Retry the dataset backup.

Error: Vss Requestor - Backup Components failed. Writer Microsoft Hyper-V VSS Writer involved in backup or restore encountered a retryable error

Description If you receive a VSS retry error that causes your backup to fail, the Hyper-V plug-in retries the backup three times with a wait of one minute between each attempt.

The following error message is displayed in the Hyper-V plug-in report and the Windows Event log:

Error: Vss Requestor - Backup Components failed. Writer Microsoft Hyper-V VSS Writer involved in backup or restore encountered a retryable error. Writer returned failure code 0x800423f3. Writer state is XXX. For more information, see the Hyper-V-VMMS event log in the Windows Event Viewer.

Corrective action

You can configure the number of retries (retry count) and the duration of the wait time between retries (retry interval) by using the following registry keys:

Key: HKLM\System\CurrentControlSet\Services\OnCommandHyperV\Parameters
DWORD value in seconds: vss_retry_sleep (the time duration to wait between retries)
DWORD value: vss_retry (the number of retries)
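A sketch of the two values as a .reg fragment follows; the 120-second interval and five-retry count are example numbers, not recommendations from this guide.

```ini
Windows Registry Editor Version 5.00

; Example values only: wait 120 seconds (0x78) between retries, retry up to 5 times.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\OnCommandHyperV\Parameters]
"vss_retry_sleep"=dword:00000078
"vss_retry"=dword:00000005
```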

These settings are at the Hyper-V host level, and the keys and values should be set on the Hyper-V host for each virtual machine. If the virtual machine is clustered, the keys should be set on each node in the cluster.

Hyper-V virtual objects taking too long to appear in OnCommand console

Issue After you first configure the Hyper-V plug-in, or after a failover, Hyper-V virtual objects take a long time to appear in the OnCommand console.

Cause The Hyper-V plug-in uses SnapDrive for Windows to enumerate virtual machines. With large numbers of virtual machines in a clustered setup, it can take SnapDrive for Windows a significant amount of time to enumerate all of the virtual machines, so the discovery of Hyper-V objects takes time.

Corrective action

Depending on the size of your setup, discovery might take longer than you expect.

Increasing SnapDrive operations timeout value in the Windows registry

SnapDrive for Windows has a default operations timeout of 60 seconds; however, you can increase the timeout value by creating a new registry key in the Windows registry. You can change the timeout value when you need SnapDrive for Windows to wait longer for operations, such as backups, to complete.

To increase the SnapDrive for Windows operations timeout value, add a DWORD value named OperationsTimeout in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SWSvc\Parameters in the Windows registry, and set the timeout to a value greater than the default of 60 seconds.
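As a sketch, assuming the DWORD is specified in the same units as the 60-second default mentioned above (confirm the expected units against the SnapDrive for Windows documentation before applying this), the change might look like the following .reg fragment:

```ini
Windows Registry Editor Version 5.00

; Example only: raise the SnapDrive operations timeout to 300 (0x12c).
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SWSvc\Parameters]
"OperationsTimeout"=dword:0000012c
```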

MBR unsupported in the Hyper-V plug-in

Issue The Hyper-V plug-in does not support MBR LUNs for virtual machines running on shared volumes or cluster shared volumes.

Cause A Microsoft API issue returns different volume GUIDs when the cluster shared volume disk ownership changes from active to passive, for example, from Node A to Node B. The volume GUID is not the same as the GUID in the cluster disk resource property. This issue also applies to virtual machines made highly available by using Microsoft failover clustering.

Corrective action

See Knowledge Base article 2006163 on the Microsoft support site.

Related information

Knowledge Base article 2006163 - http://support.microsoft.com/

Some types of backup failures do not result in partial backup failure

If one virtual machine in a dataset has an error, the Hyper-V plug-in does not successfully complete the dataset backup and, in some scenarios, does not generate a partial failure. In these situations, the entire dataset backup fails.

The following examples illustrate some of the scenarios that do not generate a partial failure, even though the problem is associated with a subset of virtual machines in the dataset:

• One virtual machine has data on a non-Data ONTAP LUN
• One storage system volume exceeds the 255 Snapshot copy limit
• One virtual machine is in a Critical state

To successfully complete the backup operation, you need to fix the virtual machine that has the issue. If that is not possible, you can temporarily move the virtual machine out of the dataset, or create a dataset that contains only virtual machines known not to have a problem.

Space consumption when taking two snapshot copies for each backup

Issue For every backup containing Hyper-V objects, two snapshots are created, which can lead to concerns over space consumption.

Cause Microsoft Hyper-V VSS Writer creates both VM- and application-consistent backups within the VMs, with the applications residing on VHDs. To create both software- and VM-consistent backups, VSS employs the native auto-recovery process, which sets the VM to a state consistent with the software snapshot. The Hyper-V VSS writer contacts each VM in the backup and creates a software-consistent snapshot.

After the snapshots are created, the parent partition creates a VSS snapshot of the entire disk (LUN) that houses these VMs. After the parent partition snapshot is created, VSS requires mounting of the previously created parent partition snapshot to roll each of the VMs back to the software-consistent state and to remove any changes that were made to the VMs after the software snapshot was created. These modifications to the VHDs must be made persistent. Because these snapshots are read-only by default, a new snapshot must be made to retain the updated copies of the VHDs. For this reason, a second snapshot of the volume is created. This snapshot is labeled with the suffix _backup and is the backup used in restore operations.

Corrective action

The two snapshots are considered a pair. When the retention period ends for the backup, both snapshots are deleted. You should not manually delete the first snapshot, because it is necessary for restore operations.

Microsoft VSS supports backing up VMs only on the host that owns the Cluster Shared Volume (CSV), so CSV ownership moves between the nodes to create backups of the VMs on each host in the cluster.

When backing up a CSV, the Hyper-V plug-in creates two snapshots per host in the cluster that runs a VM from that CSV. This means that if you back up 15 VMs on a single CSV, and those VMs are evenly split across three Hyper-V servers, there will be a total of six snapshots per backup.
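The snapshot arithmetic above can be sketched as follows; this is only an illustration of the stated rule (two Snapshot copies per cluster node that runs a VM from the CSV), not plug-in code:

```python
def snapshots_per_csv_backup(hosts_running_vms: int) -> int:
    """Return the number of Snapshot copies created for one CSV backup.

    The Hyper-V plug-in creates two Snapshot copies (the software-consistent
    copy and the _backup copy) per cluster node that runs at least one VM
    from the CSV being backed up.
    """
    return 2 * hosts_running_vms

# 15 VMs evenly split across three Hyper-V servers: three hosts are involved.
print(snapshots_per_csv_backup(3))  # -> 6
```

Note that the VM count itself does not matter; only the number of hosts running VMs from the CSV does.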

Virtual machine snapshot file location change can cause the Hyper-V plug-in backup to fail

If you change a virtual machine snapshot file location to a different Data ONTAP LUN after creating the virtual machine, you should create at least one virtual machine snapshot using Hyper-V Manager before making a backup using the Hyper-V plug-in. If you change the snapshot file location to a different LUN and do not make a virtual machine snapshot before making a backup, the backup operation could fail.

Virtual machine backups taking too long to complete

Issue If a virtual machine contains several direct-attached iSCSI LUNs or pass-through LUNs, and SnapDrive for Windows is installed on the virtual machine, the virtual machine backup can take a long time.

Cause The Hyper-V writer takes a hardware snapshot of all the LUNs in the virtual machine by using the SnapDrive for Windows VSS hardware provider.

Corrective action

You can use a Microsoft hotfix that uses the default system provider (software provider) in the virtual machine to make the snapshot. As a result, the Data ONTAP VSS hardware provider is not used for snapshot creation inside the child OS and the backup speed increases. See Knowledge Base article 975354 on the Microsoft support site.

Related information

Knowledge Base article 975354 - http://support.microsoft.com/

Virtual machine backups made while a restore operation is in progress might be invalid

Issue A backup created while a restore operation is in progress might be invalid, because the virtual machine configuration information is missing from the backup copy. The backup operation succeeds, but the backup copy is invalid because the virtual machine configuration information is not included. Restoring a virtual machine from this incomplete backup results in data loss, and the virtual machine is deleted.

Cause The Hyper-V plug-in restore operations delete the virtual machine configuration information from the Hyper-V host before performing a restore operation. This behavior is by design from the Microsoft Hyper-V writer.

Corrective action

Ensure that the backup schedule does not coincide with the restore operation, or that the on-demand backup you want to perform does not overlap with a restore operation of the same data.

Manually locating a lost virtual machine

If your virtual machine is deleted after a failed restore operation, you can manually copy the virtual machine data from backup snapshot copies so that you can restore it.

Before you begin

You must have manually copied all of the virtual machine files, including the virtual machine configuration, VHDs, and virtual machine snapshot files, from the backup snapshot copy to the virtual machine's original path.

Steps

1. Enter the following registry key:

Key: HKLM\System\CurrentControlSet\Services\OnCommandHyperV\Parameters
DWORD value name: RestoreVMFromExistingData
Value data: 1

When you set the value to 1 and perform a restore operation, the Hyper-V plug-in does not copy the data from the Data ONTAP snapshot copies to the original virtual machine location.

2. Restore the virtual machine from any backup by using the Hyper-V plug-in or the Restore-Backup PowerShell cmdlet.

The Hyper-V plug-in notifies the Hyper-V VSS writer to restore the virtual machine from existing data.

3. When you are finished, delete the registry value created during Step 1.
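Step 1 above could be captured in a .reg file like the following sketch; remember that the value must be deleted again in Step 3.

```ini
Windows Registry Editor Version 5.00

; Step 1 sketch: tell the Hyper-V plug-in to restore from existing data.
; Delete this value after the restore completes (Step 3).
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\OnCommandHyperV\Parameters]
"RestoreVMFromExistingData"=dword:00000001
```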

Volume Shadow Copy Service error: An internal inconsistency was detected in trying to contact shadow copy service writers.

Description When you perform a backup of a virtual machine that uses Windows Server 2003, the backup repeatedly fails due to a retry error.

Corrective action

Check the Windows Application event log inside the virtual machine for any VSS errors. You can also see Knowledge Base article 940184 on the Microsoft support site if you see the following error:

Volume Shadow Copy Service error: An internal inconsistency was detected in trying to contact shadow copy service writers. Please check to see that the Event Service and Volume Shadow Copy Service are operating properly.

Related information

Knowledge Base article 940184 - http://support.microsoft.com/

Hyper-V VHDs do not appear in the OnCommand console

Issue Hyper-V VHDs do not appear in the OnCommand console after being properly added to a virtual machine.

Cause After a VHD is added to a virtual machine, a Windows WMI event for virtual machine changes is not generated, and pass-through LUNs are not properly listed in the OnCommand console.

Corrective action

Resend the storage system credentials for one of the storage systems managed by the host service.

Copyright information

Copyright © 1994–2011 NetApp, Inc. All rights reserved. Printed in the U.S.A.

No part of this document covered by copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer:

THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.

The product described in this manual may be protected by one or more U.S.A. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).

Trademark information

NetApp, the NetApp logo, Network Appliance, the Network Appliance logo, Akorri, ApplianceWatch, ASUP, AutoSupport, BalancePoint, BalancePoint Predictor, Bycast, Campaign Express, ComplianceClock, Cryptainer, CryptoShred, Data ONTAP, DataFabric, DataFort, Decru, Decru DataFort, DenseStak, Engenio, Engenio logo, E-Stack, FAServer, FastStak, FilerView, FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexSuite, FlexVol, FPolicy, GetSuccessful, gFiler, Go further, faster, Imagine Virtually Anything, Lifetime Key Management, LockVault, Manage ONTAP, MetroCluster, MultiStore, NearStore, NetCache, NOW (NetApp on the Web), Onaro, OnCommand, ONTAPI, OpenKey, PerformanceStak, RAID-DP, ReplicatorX, SANscreen, SANshare, SANtricity, SecureAdmin, SecureShare, Select, Service Builder, Shadow Tape, Simplicity, Simulate ONTAP, SnapCopy, SnapDirector, SnapDrive, SnapFilter, SnapLock, SnapManager, SnapMigrator, SnapMirror, SnapMover, SnapProtect, SnapRestore, Snapshot, SnapSuite, SnapValidator, SnapVault, StorageGRID, StoreVault, the StoreVault logo, SyncMirror, Tech OnTap, The evolution of storage, Topio, vFiler, VFM, Virtual File Manager, VPolicy, WAFL, Web Filer, and XBB are trademarks or registered trademarks of NetApp, Inc. in the United States, other countries, or both.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. A complete and current list of other IBM trademarks is available on the Web at www.ibm.com/legal/copytrade.shtml.

Apple is a registered trademark and QuickTime is a trademark of Apple, Inc. in the U.S.A. and/or other countries. Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the U.S.A. and/or other countries. RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered trademarks and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the U.S.A. and/or other countries.

All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.

NetApp, Inc. is a licensee of the CompactFlash and CF Logo trademarks.

NetApp, Inc. NetCache is certified RealSystem compatible.

How to send your comments

You can help us to improve the quality of our documentation by sending us your feedback.

Your feedback is important in helping us to provide the most accurate and high-quality information. If you have suggestions for improving this document, send us your comments by e-mail to [email protected]. To help us direct your comments to the correct division, include the product name, version, and operating system in the subject line.

You can also contact us in the following ways:

• NetApp, Inc., 495 East Java Drive, Sunnyvale, CA 94089
• Telephone: +1 (408) 822-6000
• Fax: +1 (408) 822-4501
• Support Telephone: +1 (888) 4-NETAPP

Index

A

access roles (RBAC)
    See RBAC

Add Dataset wizard
    decisions to make for protection 200

administration
    HTTP transport 494, 495
    port 494–496
    transport 495, 496

Administration Port option 491
Administration Transport option 491
administrator roles

    list and descriptions 372, 373, 506, 507
    See also RBAC

aggregate full threshold 94, 328
aggregate nearly overcommitted threshold 94, 328
aggregate overcommitted threshold 94, 328
aggregates

    editing threshold conditions 454
    monitoring inventory 98
    relation to traditional volume 141, 333
    viewing inventory 98

Aggregates Capacity Growth report 346, 347
Aggregates Capacity report 338
Aggregates Committed Capacity report 345
Aggregates report 314
Aggregates Space Savings report 351
Aggregates view 109–111, 114, 115
alarm conditions

    event class 32, 33, 385, 387
    event severity 32, 33, 385, 387

alarm formats
    e-mail format 32, 33, 385, 387
    pager alert 32, 33, 385, 387
    script 32, 33, 385, 387
    SNMP trap 32, 33, 385, 387

alarms
    adding 30, 32, 37–41, 385, 390, 391
    adding for a specific event 33, 387
    alarm begin 32, 33, 385, 387
    alarm end 32, 33, 385, 387
    configurations 32, 385
    configuring 41, 42, 392, 393
    creating 30, 41, 42, 392, 393
    deleting 40, 41, 390, 391

    details 37
    disabling 40, 41, 390, 391
    editing 35, 40, 41, 43, 44, 388, 390, 391
    enabling 40, 41, 390, 391
    guidelines for creating 30
    modifying 35, 43, 44, 388
    notification 32, 385
    repeat notification 32, 33, 385, 387
    settings 496
    testing 40, 41, 390, 391
    viewing details 37

Alarms tab 29, 40, 41, 390, 391
alarmView 361
archive backup 448
associating storage systems 399, 403
audit log

    file 501, 502
    settings 496

authentication
    LDAP 473, 509
    methods 473, 509

AutoSupport
    information provided for 19

Availability dashboard panel 22

B

backing up
    virtual objects on-demand 61, 276
    datasets on-demand 222, 280

Backup Management panel 299
backup options

    editing 420
backup relationships 419
Backup Settings area

    for configuring local policies 179
backups

    before performing on-demand backup 62, 64, 223, 225, 277, 279, 281, 283

    deleting 293
    guidelines for mounting or unmounting backups in a VMware environment 271
    guidelines for performing on-demand backup 62, 64, 223, 225, 277, 279, 281, 283
    Hyper-V saved-state backups 274
    locating 288, 292

    monitoring 294
    mounting using Backups tab 69, 284
    mounting using Server tab 67, 286
    on-demand 269
    performing on-demand 222, 280
    properties 294, 295
    remote 269
    requirements and restrictions when performing an on-demand backup 64, 225, 279, 283
    restoring data from 297
    retaining job progress information 271
    scheduled local 269
    scripting 270
    searching for 297
    selecting 297
    selecting the restore destination 297
    unmounting using Backups tab 70, 285
    unmounting using Server tab 68, 287
    using CLI 283
    version management 269

Backups tab 294, 295
bookmarks to favorite topics 15

C

calculations
    effective used data space 326
    guaranteed space 326
    physical used data space 326
    Snapshot reserve 326
    total data space 326
    total space 326
    used Snapshot space 326

chargeback rates for groups
    configuring 446

CIFS
    account 495, 496
    password 495, 496

cluster-related objects 86
clusters

    adding 100–102
    deleting 100–102
    editing settings 100–102
    grouping 100–102
    monitoring inventory 97
    viewing inventory 97

Clusters view 100–102
configured threshold
    reached 24
configuring

    mail server 36, 389
    non-overlapping schedules for Hyper-V objects 221
    storage systems 413
    users and roles 375
    vFiler units 415

conformance
    conditions for datasets 241–243
    dataset status 238
    datasets, failure to conform 240
    evaluating error text 232, 246
    evaluating for datasets 239
    how dataset conformance is monitored 240
    monitor intervals 240
    monitoring and correction 184
    resolving dataset issues manually with a baseline transfer of data 233, 248
    resolving issues automatically without a baseline transfer of data 234, 249
    resolving manually in datasets when a baseline transfer of data might be necessary 235, 250

    test button 243
    troubleshooting 246

costing options
    chargeback 446, 447

cpuView 362
Create Alarm dialog box 40–42, 390–393
Create Group dialog box 383
Create Local Policy dialog box 176, 177
credential cache
    settings 496
credentials

    storage systems 400, 404
custom annotations

    adding 497
    deleting 498
    for alarm recipients 497, 498

custom comment
    field names 500, 501

custom label 200
customization
    window layout 16

D

dashboard panels
    Availability 22
    Dataset Overall Status 25
    descriptions 21

    Events 23
    External Relationship Lags 26
    Fastest Growing Storage 24
    Full Soon Storage 24
    monitoring objects 22
    Resource Pools 25
    Unprotected Data 26

Data ONTAP
    licenses, described 195

database backup
    archive backup 448
    deleting 448
    process 448
    scheduling to reoccur 449
    Snapshot backup 448
    starting 450
    types 448

database backup option
    completed 453
    scheduling 451, 452

datacenters
    viewing VMware inventory of 54
    VMware Datacenters view 72, 73

DataFabric Manager server
    verifying host service registration 396, 401

Dataset Overall Status dashboard panel 25
dataset-level naming settings

    configuring while adding datasets of virtual objects 215
    editing a dataset of virtual objects for 217
datasets

    adding to manage physical storage objects 199
    adding virtual objects to 209
    adding, decisions to make for protection 200
    attaching storage services to 229
    best practices when configuring datasets of virtual objects 187, 208, 209
    changing storage services on 228
    conformance conditions 241–243
    conformance status values 238
    conformance to policy, evaluating 239
    creating for protection of virtual objects 203
    decisions to make before adding to manage physical storage objects 200
    deleting backups 293
    editing to specify naming settings 218
    evaluating conformance issues 232, 246
    general concepts 181
    guidelines for adding a dataset of virtual objects 204–207

    how conformance is monitored 240
    Hyper-V objects that a dataset can include 186
    listing nonconformant datasets 246
    monitoring backup and mirror relationships 245
    monitoring conformance to policy 239
    monitoring status 244
    names, acceptable characters 200
    object types that can be members of datasets of physical objects 184
    of virtual objects 185
    properties of, for protection 200
    protection of physical storage 184
    protection status values 197, 237
    provisioning policies

        role in dataset management 183
    reasons for failure to conform to policy 240
    removing a virtual object 213
    repairing datasets that contain deleted virtual objects 231
    resolving conformance issues automatically without a baseline transfer of data 234, 249
    resolving conformance issues manually when a baseline transfer of data might be necessary 235, 250
    resolving conformance issues manually without a baseline transfer of data 233, 248
    resource status values 239
    restoring data from backed up storage objects 230
    role of protection policies 183
    role of provisioning policy 183
    status types 236
    testing conformance 243
    troubleshooting 246
    types of conformance status, defined 238
    types of protection status, defined 197, 237
    types of resource status, defined 239
    virtual objects, best practices when configuring datasets of 187, 208, 209
    VMware objects that a dataset can include 186
    what a dataset is 181

Datasets Average Space Usage Metric report 353
Datasets Average Space Usage Metric Samples report 356
Datasets IO Usage Metric report 355
Datasets IO Usage Metric Samples report 358
Datasets Maximum Space Usage Metric report 354
Datasets Maximum Space Usage Metric Samples report 357
datastores

    restoring 65, 299
    selecting a backup 297
    viewing VMware inventory of 56
    VMware Datastores view 77, 78, 80

days to full 24
deduplication
    license, described 195
default role 371, 505
default tabs 15
default threshold options

    aggregates 456, 457
    volumes 457, 460

default thresholds options
    other 460, 462

deleted objects
    repairing datasets that contain deleted virtual objects 231
    retrieving 99
    viewing 117, 118
    viewing inventory 99
    what deleted objects are 87
    what happens when storage objects are deleted 87

Deleted Objects view 117, 118
designerReportView 363
directory path for archive backups
    changing 450
disconnecting

    a LUN 291
    forced (of LUN) 291

discovery option
    addresses 468
    credentials 469

disks
    monitoring inventory 99
    viewing inventory 99

Disks view 116, 117
display
    minimum setting 15

E

Edit Alarm dialog box 35, 43, 44, 388
Edit Group dialog box 384, 385
Edit Local Policy dialog box 176, 177
ESX host name field

    selecting the host server for a virtual machine restore 65, 300

ESX servers
    viewing VMware inventory of 55
    VMware ESX Servers view 73, 74

event purge interval 499, 500

event reports 312
event severity types

    critical 31
    emergency 31
    error 31
    information 31
    normal 31
    warning 31

events
    acknowledging 34, 37–39, 388
    aggregate almost full 94, 328
    aggregate almost overcommitted 94, 328
    aggregate full 94, 328
    aggregate overcommitted 94, 328
    definition of 29
    details 36–39
    how to know when events occur 31
    qtree full 146, 332
    qtree nearly full 146, 332
    resolving 34, 37–39, 388
    severity types 23, 31
    triggered 37–39
    viewing details 36
    volume almost full 141, 333

Events All report 313
Events Current report 312
Events dashboard panel 23
Events tab 37–39
External Relationship Lags dashboard panel 26

F

Fastest Growing Storage dashboard panel 24
favorite topics, adding to list 15
File SRM

    editing options 472
    See also FSRM

File SRM area 472, 473
File Storage Resource Management (FSRM)
    See FSRM
File Systems report 316
file types

    adding 470
    File SRM 470

file-level metadata
    monitoring 470

file-level statistics 470
FlexClone
    license, described 195
forced disconnect (of LUN) 291

FSRM
    monitoring requirements 470
    what FSRM does 470

Full Soon Storage dashboard panel 24

G

General Properties tab 260
global access control
    precedence over group access control 371, 505
global groups 376
global naming settings

    customizing 435
    guidelines for customizing 436
    requirements for customizing 438, 439
    use of a naming script 423

Global Naming Settings Primary Volume area 442
Global Naming Settings Secondary Qtree area 444
Global Naming Settings Secondary Volume area 443
Global Naming Settings Snapshot Copy area 440
group access control
    precedence over global access control 371, 505
groups

adding 381, 382chargeback 384, 385copying 380–382creating 377, 383deleting 378, 381, 382editing 379, 384, 385global 376managing 378member types 383–385moving 380–382what groups are 376

Groups tab 381, 382
growth rate
  of storage space utilization 24

H

hbaInitiatorView 367
hbaView 367
health
  of managed objects 23
Host Agent
  editing options 493
  login 495, 496

host services
  adding 394
  associating with vCenter Server 397, 402
  authorizing access to storage 397
  configuring 395
  deleting 405
  overview 393
  rediscovering virtual object inventory 408
  refreshing the list of 408
  registering 394
  registering with vCenter Server 397, 402
  verifying server registration 396, 401
  viewing configured hosts 407

Host Services tab 409–411
hosts
  Data ONTAP licenses, described 195
hosts.equiv 494, 495
hosts.equiv option 491
how to use Help 15
HTTP
  enabling 493, 494
HTTPS
  enabling 493, 494
Hyper-V
  best practices when configuring datasets of Hyper-V objects 187, 208, 209
  how virtual objects are discovered 53
  local protection of virtual objects 167, 192, 193
  objects that a dataset can include 186
  overlapping policies 274
  parent hosts, local policies 274
  remote protection of virtual objects 188
  viewing server inventory 57
  viewing VM inventory 58

Hyper-V plug-in
  application-consistent backups 272
  co-existence with SnapManager for Hyper-V 275
  saved-state backups 274
  VSS 272

Hyper-V Servers view 80, 81
Hyper-V VMs view 81–83

I

initiatorView 367
interface groups 86
inventory options
  editing 482
  monitoring 482

inventory reports
  overview 314


J

jobs
  canceling 45
  defined 45
  monitoring 46
  understanding 45
  viewing details of restore 302

Jobs tab 46, 47, 51, 52
junctions 86

L

lag thresholds 419
LDAP
  adding servers 474
  authentication 473, 475, 476, 509
  deleting servers 474
  disabling authentication 474
  editing authentication settings 475
  enabling 475, 476
  enabling authentication 474
  server types 476, 477
  servers 477, 478
  template settings 475

LDAP Authentication area 475, 476
LDAP Server Types area 476, 477
LDAP Servers area 477, 478
licenses
  Data ONTAP 195
LIFs
  cluster management LIF 86
  data LIF 86
  node management LIF 86

local backups
  scheduling for virtual objects 172, 211

local policies
  adding 170
  and local backup of virtual objects 167, 192, 193
  Backup Settings area 179
  copying 173
  deleting 174
  editing 171
  effect of time zones 194
  guidelines for adding or editing 169, 170
  Name area 177
  Policies tab 175, 176
  Schedule and Retention area 177
  scheduling local backup of virtual objects 172, 211

local protection
  of virtual objects 167, 192, 193
local users 411
locating
  backup copies 288, 292
  Snapshot copies in a backup 289

logical interfaces (LIFs)
  See LIFs

logical storage
  configuring
    LUN path settings 138
    qtree quota 137
    quota settings 139
    volume quota 136
  grouping
    LUNs 141
    qtrees 140
    volumes 139
  LUNs 135
  LUNs view 157–159
  monitoring
    qtree capacity threshold and events 146, 332
    volume capacity threshold and events 141, 333
  overview 135
  qtrees 135
  Qtrees view 160–162
  Quota Settings view 162, 163
  Snapshot Copies view 164, 165
  volumes 135
  Volumes view 150–153, 156, 157

login
  admin 495, 496
  guest 495, 496

login credentials 400, 404
login protocol
  RSH 494, 495
  SSH 494, 495

Login Protocol option 491
LUNs
  definition of 135
  disconnecting 291
  forced disconnect 291
  grouping 141
  monitoring inventory 148
  path settings 138
  viewing inventory 148

LUNs report 317
LUNs view 157–159


M

mail server
  configuring 499, 500
  configuring for alarm notifications 36, 389

managed host options
  editing 492
  guidelines for changing 491
  overview 491

Management Console
  how the OnCommand console works with 17
  installing 18

management options
  for clients 493, 494
  for Host Agent 495, 496
  for managed hosts 494, 495

miscellaneous options 501, 502
monitoring
  dataset conformance to policy 239
  flow chart of process 478
  local backup progress 294
  process 478
  query intervals 480
  system options 490, 491

monitoring options
  for inventory 488, 489
  for networking 486, 488
  for protection 485, 486
  for storage 484, 485
  guidelines for changing 480
  location to change 480

mounting backups
  manually in a Hyper-V environment 288

MultiStore Option
  license, described 195

N

Name area
  for configuring local policies 177

namespace 120
namespaces 86
naming properties
  custom label 200
Naming Properties tab 260–263
naming scripts
  environment variables for naming primary volumes 425
  environment variables for naming secondary volumes 425
  environment variables for naming Snapshot copies 424
  limitations 424
  understanding 423

naming settings
  adding datasets of virtual objects with dataset-level custom naming 215
  configuring naming settings using the Add Dataset wizard 214
  customizing global settings 435
  definition of 181
  descriptions for primary volumes 429, 430
  descriptions for secondary qtrees 433, 434
  descriptions for secondary volumes 431, 432
  descriptions for Snapshot copies 426, 428
  editing a dataset of virtual objects for dataset-level custom naming 217
  editing a dataset to specify dataset-level custom naming 218
  global and dataset-level differences 421, 422
  guidelines for customizing global settings 436
  requirements for customizing global settings 438, 439
  understanding naming scripts 423
  use of format strings 423
  when to customize for related objects 422, 423

navigation 15
NDMP credentials 400, 404
NearStore Option
  license, described 195
network address
  adding 465
  for discovery 465

network credentials
  modifying 466

network options
  editing 481
  monitoring 481

notification
  of events 29

O

objects
  status types 305
  what objects are 376

on-demand backups
  before performing 62, 64, 223, 225, 277, 279, 281, 283
  guidelines for performing 62, 64, 223, 225, 277, 279, 281, 283
  performing 222, 280
  requirements and restrictions 64, 225, 279, 283

Operations Manager console
  how the OnCommand console works with 17
  launching 17

P

paged tables 496
password
  management 495, 496
  monitoring 495, 496

percentage
  of space availability 22

physical storage
  adding
    clusters 88
    storage controllers 88
  aggregates 85
  Aggregates view 109–111, 114, 115
  clusters 85
  Clusters view 100–102
  configuring
    aggregate settings 91
    cluster settings 90
    storage controller settings 89
  Deleted Objects view 117, 118
  disks 85
  Disks view 116, 117
  grouping
    aggregates 93
    clusters 92
    storage systems 92
  monitoring
    aggregate capacity thresholds and events 94, 328
    discovery of storage systems 94
  overview 85
  Storage Controllers view 102–104, 107–109
  storage systems 85

ping intervals
  configuring 483

policies
  evaluating dataset conformance to 239
  monitoring dataset conformance to 239

Policies tab
  for listing, configuring, and managing local policies 175, 176

Pre/Post Restore Script field
  defining which script to run 65, 300

primary volumes
  naming settings descriptions 429, 430

properties
  of backups 294, 295
  of datasets 200
  of events 37–39
  of jobs 46, 47, 51, 52

protected data
  dataset conformance conditions 241–243
  dataset conformance status, described 238
  dataset protection status, described 197, 237
  dataset resource status, described 239
  evaluating conformance to policy 239
  how dataset conformance is monitored 240
  reasons for failure to conform to policy 240

protection
  decisions before adding a dataset 200

protection options
  editing 481
  monitoring 481

protection policies
  overview 181
  role in dataset management 183

provisioning policies
  overview 181

Q

qtree threshold conditions
  editing 455

qtrees
  configuring quota settings 137
  definition of 135
  grouping 140
  monitoring inventory 149
  viewing inventory 149

Qtrees Capacity Growth report 348
Qtrees Capacity report 340
Qtrees report 318
Qtrees view 160–162
quota settings
  monitoring inventory 149
  viewing inventory 149

Quota Settings view 162, 163
quotas
  configuring settings 139
  monitoring quota settings inventory 149
  process 135
  viewing quota settings inventory 149
  why you use 135

R

RBAC
  capabilities 372, 373, 506, 507
  default roles 372, 373, 506, 507
  definition 371, 505
  example of how to use 371, 505
  how RBAC is used 371, 505
  how roles relate to administrators 371, 505

related objects
  configuring naming settings using the Add Dataset wizard 214
  datasets
    configuring naming settings using the Add Dataset wizard 214
  definition of 181
  global and dataset-level naming settings 421, 422
  primary volume naming settings 429, 430
  secondary qtrees naming settings 433, 434
  secondary volume naming settings 431, 432
  Snapshot copy naming settings 426, 428
  specifying naming by editing a dataset 218
  when to customize naming for 422, 423

remote backups 269
remote configuration 412
remote protection
  assigning to virtual objects 210
  of virtual objects 188

reportOutputView 367
reports
  Aggregates 314
  Aggregates Capacity 338
  Aggregates Capacity Growth 346, 347
  Aggregates Committed Capacity 345
  Aggregates Space Savings 351
  aggregating data 304
  computing new columns 304
  Datasets Average Space Usage Metric 353
  Datasets Average Space Usage Metric Samples 356
  Datasets IO Usage Metric 355
  Datasets IO Usage Metric Samples 358
  Datasets Maximum Space Usage Metric 354
  Datasets Maximum Space Usage Metric Samples 357
  deleting 303, 308
  deleting columns 304
  displaying group details 304
  displaying hidden columns 304
  Events All 313
  Events Current 312
  exporting content 303
  exporting data 303
  File Systems 316
  filtering data 304
  formatting a column 304
  formatting data based on conditions 304
  hiding columns 304
  hiding duplicate values in a column 304
  hiding group details 304
  LUNs 317
  management 303
  parameters 303
  printing 303
  Qtrees 318
  Qtrees Capacity 340
  Qtrees Capacity Growth 348
  reordering columns 304
  scheduling 303, 306
  scheduling, defined 305
  sharing 303, 307
  sorting data 304
  starting each group on a new page 304
  Storage Service Datasets 323
  Storage Service Policies 322
  Storage Services 322
  Storage Systems 319
  Storage Systems Capacity 341
  tab 308–310
  usage metric reports
    overview 325
  User Quotas Capacity 342
  vFiler Units 320
  Volumes 321
  Volumes Capacity 344
  Volumes Capacity Growth 349
  Volumes Committed Capacity 346
  Volumes Space Reservation 349
  Volumes Space Savings 352

Reports tab 304, 308–310
resource groups
  global 376
Resource Pools dashboard panel 25
resource status, dataset 239
restore
  scripting 298
  viewing job details 302
  where to restore a backup 297

restoring
  a datastore 65, 299
  a Hyper-V virtual machine 66, 301
  a VMware virtual machine 65, 300
  data from backups 299

role-based access control
  See RBAC

roles
  global 371, 505

S

sanhostlunview 368
saved-state backups
  how Hyper-V plug-in handles 274
Schedule and Retention area
  for configuring local policies 177
Schedule Report dialog box 310
scheduled backups 269
scheduled reports log
  viewing 307
script plugins
  configuring 496
scripting
  arguments 270, 298
  backups 270
  restore 298

searching for backup copies 288, 292
secondary qtrees
  naming settings descriptions 433, 434
secondary volumes
  naming settings descriptions 431, 432
secure connections 493, 494
selection of backups 297
server types
  editing 475
servers
  grouping virtual objects 59
  Hyper-V Servers view 80, 81
  Hyper-V VMs view 81–83
  viewing Hyper-V server inventory 57
  viewing Hyper-V VM inventory 58
  viewing VMware datacenter inventory 54, 55
  viewing VMware datastore inventory 56
  viewing VMware virtual center inventory 53
  viewing VMware virtual machine inventory 56
  VMware Datacenters view 72, 73
  VMware Datastores view 77, 78, 80
  VMware ESX Servers view 73, 74
  VMware Virtual Centers view 71, 72
  VMware VMs view 74–77

Setup Options dialog box 495, 496
Share Report dialog box 311
SnapDrive for Windows
  Hyper-V manual mount and unmount process 288
SnapManager for Hyper-V
  co-existence with Hyper-V plug-in 275
  co-existence with OnCommand console 275
  manually transitioning dataset information 275

SnapMirror
  license, described 195

SnapMirror Sync
  license, described 195

Snapshot backup 448
Snapshot copies
  locating in a backup 289
  monitoring inventory 150
  naming settings descriptions 426, 428
  overview 136
  viewing inventory 150

Snapshot Copies view 164, 165
SnapVault 419
SnapVault Data
  primary license, described 195
  secondary license, described 195

SnapVault Linux
  license, described 195

SnapVault UNIX
  license, described 195

SnapVault Windows Open File Manager
  license, described 195

SNMP communities
  adding 466
  editing 466

SNMP traps 499, 500
space utilization 24
SRM, File
  See FSRM
status definitions
  dataset conformance 238
  dataset protection 197, 237
  dataset resource 239

status of an object
  critical 305
  emergency 305
  error 305
  normal 305
  unknown 305
  warning 305
storage capacity reports
  overview 323
storage chargeback
  configuring 446
storage controllers
  adding 102–104, 107–109
  deleting 102–104, 107–109
  editing settings 102–104, 107–109
  grouping 102–104, 107–109
  monitoring inventory 98
  viewing inventory 98

Storage Controllers view 102–104, 107–109
storage objects
  adding again 117, 118
  deleted by 117, 118
  deleted date 117, 118
  deleted objects 87
  deleting 117, 118
  restoring data from 230
  undeleting 117, 118
  viewing deleted objects 117, 118

storage options
  editing 480
  monitoring 480

Storage Service Datasets report 323
Storage Service Policies report 322
storage services
  assigning to a dataset of virtual objects 210
  attaching to existing datasets 229
  changing on a dataset 228
  for executing remote protection of virtual objects 188
  overview 181
  supplied with the product 189

Storage Services report 322
storage systems
  adding users 412
  associating with a host service 399, 403
  authorizing host service access 397
  configuration 501, 502
  configuring 413, 496
  Data ONTAP licenses, described 195
  discovering 467, 468
  login credentials 400, 404
  managing configuration files 413
  NDMP credentials 400, 404
  remote configuration 412

Storage Systems Capacity report 341
Storage Systems report 319

surprise removal (of LUN) 291
system annotations
  settings 496
system options
  configuring 496
  editing 483
  monitoring 483

Systems Alarms area 499, 500
Systems Annotations area 500, 501
Systems Miscellaneous area 501, 502

T

table settings 496, 501, 502
test conformance check 243
thresholds
  aggregate full 94, 328
  aggregate full interval 94, 328
  aggregate nearly full 94, 328
  aggregate nearly overcommitted 94, 328
  aggregate overcommitted 94, 328

time zone
  effect on protection job schedules in datasets of virtual objects 194
traditional volumes
  See volumes
transitioning legacy Hyper-V dataset information
  manually 275
trends
  of storage space utilization 24
troubleshooting
  dataset conformance conditions 241–243
  dataset conformance issues 246
  dataset failure to conform 240
  evaluating dataset conformance 239
  listing nonconformant datasets 246

U

unmounting backups
  manually in a Hyper-V environment 288
Unprotected Data dashboard panel 26
usage metric reports
  guidelines for solving issues 326
user quotas
  alerts 499, 500
  editing 123

User Quotas Capacity report 342
Users and Roles capability
  See RBAC
usersView 368


V

vCenter Server
  associating a host service 397, 402
  registering a host service 397, 402

version management
  backups 269

vFiler units
  configuration tasks to manage 414
  configuring 415
  defined 119
  deleting 125–127, 129, 130
  discovery 119
  editing settings 120, 125–127, 129, 130
  grouping 122, 125–127, 129, 130
  monitoring inventory 124
  viewing inventory 124

vFiler Units report 320
vFiler Units view 125–127, 129, 130
vFilers
  discovery 119
  editing settings 120
  grouping 122
  threshold settings 120

virtual centers
  viewing VMware inventory of 53
  VMware Virtual Centers view 71, 72

virtual disk files
  selecting a backup 297

virtual inventory
  adding virtual machines to 59
  deleting virtual objects from 60

virtual machines
  adding to inventory 59
  restoring Hyper-V virtual machines 66, 301
  restoring VMware virtual machines 65, 300
  selecting a backup 297
  viewing Hyper-V inventory 58
  viewing VMware inventory 56
  VMware VMs view 74–77

virtual object inventory
  rediscovery 408

virtual objects
  adding to a dataset 209
  best practices when configuring datasets of virtual objects 187, 208, 209
  configuring remote protection for 210
  definition of 181
  deleting from virtual inventory 60
  discovery 53
  grouping 59
  guidelines for adding datasets containing virtual objects 204–207
  local protection of 167, 192, 193
  performing on-demand backups 61, 276
  remote protection of 188
  removing from a dataset 213
  scheduling local backup for 172, 211
  unprotecting 213

virtual servers
  definition 120
  deleting 130–134
  editing settings 121, 130–134
  grouping 123, 130–134
  monitoring inventory 125
  viewing inventory 125

virtual storage
  grouping vFiler units 122
  grouping Vservers 123
  vFiler Units view 125–127, 129, 130
  Vservers view 130–134

VMs (virtual machines)
  See virtual machines

VMware
  best practices when configuring datasets of VMware objects 187, 208, 209
  guidelines for mounting or unmounting backups in a VMware environment 271
  how virtual objects are discovered 53
  local protection of virtual objects 167, 192, 193
  objects that a dataset can include 186
  remote protection of virtual objects 188
  viewing datacenter inventory 54, 55
  viewing datastore inventory 56
  viewing virtual center inventory 53
  viewing virtual machine inventory 56

VMware Datacenters view 72, 73
VMware Datastores view 77, 78, 80
VMware ESX Servers view 73, 74
VMware Virtual Centers view 71, 72
VMware VMs view 74–77
volume full threshold 141, 333
volume nearly full threshold 141, 333
volume threshold conditions
  editing 454
volumeDedupeDetailsView 369
volumes
  configuring quota settings 136
  definition of 135
  grouping 139
  monitoring inventory 148
  viewing inventory 148

Volumes Capacity Growth report 349
Volumes Capacity report 344
Volumes Committed Capacity report 346
Volumes report 321
Volumes Space Reservation report 349
Volumes Space Savings report 352
Volumes view 150–153, 156, 157
Vservers
  definition 120
  deleting 130–134
  editing settings 121, 130–134
  grouping 123, 130–134
  monitoring inventory 125
  viewing inventory 125
Vservers view 130–134
VSS
  about 272
  Hyper-V plug-in 272
  verifying provider used 274
  viewing installed providers 273

W

welcome 15
window layout
  customization 16
  navigation 15
