NetApp Commandline Cheatsheet

This is a quick and dirty NetApp commandline cheatsheet covering most of the common commands. It is not exhaustive, so check the man pages and the NetApp documentation. I will be updating this document as I become more familiar with the NetApp application.

Server Startup and Shutdown

Boot Menu
    1) Normal Boot.
    2) Boot without /etc/rc.
    3) Change password.
    4) Clean configuration and initialize all disks.
    5) Maintenance mode boot.
    6) Update flash from backup config.
    7) Install new software first.
    8) Reboot node.
    Selection (1-8)?

    Normal Boot - continue with the normal boot operation
    Boot without /etc/rc - boot with only default options and disable some services
    Change password - change the storage system's password
    Clean configuration and initialize all disks - wipes all disks and resets the filer to factory default settings
    Maintenance mode boot - file system operations are disabled; only a limited set of commands is available
    Update flash from backup config - restore the configuration information if it is corrupted on the boot device
    Install new software first - use this if the filer does not include support for the storage array
    Reboot node - restart the filer

Startup modes
    boot_ontap - boots the current Data ONTAP software release stored on the boot device
    boot_primary - boots the Data ONTAP release stored on the boot device as the primary kernel
    boot_backup - boots the backup Data ONTAP release from the boot device
    boot_diags - boots a Data ONTAP diagnostic kernel

    Note: there are other options but NetApp will provide these as and when necessary

Shutdown
    halt [-t <mins>] [-f]

    -t = shut down after the specified number of minutes
    -f = used with HA clustering; the partner filer does not take over

Restart
    reboot [-t <mins>] [-s] [-r] [-f]

    -t = reboot in the specified number of minutes
    -s = clean reboot that also power-cycles the filer (like pushing the off button)
    -r = bypasses the shutdown (not clean) and power-cycles the filer
    -f = used with HA clustering; the partner filer does not take over

System Privilege and System Shell

Privilege
    priv set [-q] [admin | advanced]

    Note: by default you are in administrative mode
    -q = quiet; suppresses warning messages

Access the systemshell
    ## First obtain the advanced privileges
    priv set advanced

    ## Then unlock and reset the diag user's password
    useradmin diaguser unlock
    useradmin diaguser password

    ## Now you should be able to access the systemshell and use all the standard Unix commands
    systemshell
    login: diag
    password: ********

Licensing and Version

Licenses (commandline)
    ## display licenses
    license

    ## Adding a license
    license add <license_code>

    ## Disabling a license
    license delete <service>

Data ONTAP version
    version [-b]

    -b = include name and version information for the primary, secondary and diagnostic kernels and the firmware

Useful Commands

Read the messages file
    rdfile /etc/messages

Write to a file
    wrfile -a <file> <text>

    ## Examples
    wrfile -a /etc/test1 This is line 6 # comment here
    wrfile -a /etc/test1 "This is line \"15\"."
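As a minimal sketch of the rdfile/wrfile pair in practice (the file name and text are hypothetical), an append followed by a read-back is the usual way to edit and verify a flat file from the ONTAP shell:

    ## append a line to a test file; -a appends rather than overwrites
    wrfile -a /etc/test1 "backup completed"
    ## read the file back to confirm the write landed
    rdfile /etc/test1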
System Configuration

General information
    sysconfig
    sysconfig -v
    sysconfig -a (detailed)

Configuration errors
    sysconfig -c

Display disk devices
    sysconfig -d
    sysconfig -A

Display RAID group information
    sysconfig -V

Display aggregates and plexes
    sysconfig -r

Display tape devices
    sysconfig -t

Display tape libraries
    sysconfig -m

Environment Information

General information
    environment status

Disk enclosures (shelves)
    environment shelf [adapter]
    environment shelf_power_status

Chassis
    environment chassis all
    environment chassis list-sensors
    environment chassis Fans
    environment chassis CPU_Fans
    environment chassis Power
    environment chassis Temperature
    environment chassis [PS1|PS2]

Fibre Channel Information

Fibre Channel stats
    fcstat link_status
    fcstat fcal_stat
    fcstat device_map

SAS Adapter and Expander Information

Shelf information
    sasstat shelf
    sasstat shelf_short

Expander information
    sasstat expander
    sasstat expander_map
    sasstat expander_phy_state

Disk information
    sasstat dev_stats

Adapter information
    sasstat adapter_state

Statistical Information

System       stats show system
Processor    stats show processor
Disk         stats show disk
Volume       stats show volume
LUN          stats show lun
Aggregate    stats show aggregate
FC           stats show fcp
iSCSI        stats show iscsi
CIFS         stats show cifs
Network      stats show ifnet

Storage

Storage Commands

Display
    storage show adapter
    storage show disk [-a|-x|-p|-T]
    storage show expander
    storage show fabric
    storage show fault
    storage show hub
    storage show initiators
    storage show mc
    storage show port
    storage show shelf
    storage show switch
    storage show tape [supported]
    storage show acp
    storage array show
    storage array show-ports
    storage array show-luns
    storage array show-config

Enable
    storage enable adapter

Disable
    storage disable adapter

Rename switch
    storage rename <switch_name> <new_name>

Remove port
    storage array remove-port <array_name> -p <port>

Load balance
    storage load balance

Power cycle
    storage power_cycle shelf -h
    storage power_cycle shelf start -c <channel>
    storage power_cycle shelf completed

Disks

Disk Information

Disk name - this is the physical disk itself. Normally the disk resides in a disk enclosure and has a pathname like 2a.17, depending on the type of disk enclosure:

    2a = SCSI adapter
    17 = disk SCSI ID

Any disks that are classed as spares will be used in any RAID group to replace failed disks. They can also be assigned to any aggregate. Disks are assigned to a specific pool.
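As a quick sketch of how the adapter.id naming above shows up in the output (the pathnames are from the simulator and purely illustrative):

    ## list disk devices with their adapter.id pathnames
    sysconfig -d
    ## show path information for each disk; useful on multipathed shelves
    storage show disk -p
    ## a pathname such as 2a.17 reads as: SCSI ID 17 on adapter 2a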
Disk Types

    Data     holds data stored within the RAID group
    Spare    does not hold usable data but is available to be added to a RAID group in an aggregate (also known as a hot spare)
    Parity   stores data reconstruction information within the RAID group
    dParity  stores double-parity information within the RAID group, if RAID-DP is enabled

Disk Commands

Display
    disk show
    disk_list
    sysconfig -r
    sysconfig -d

    ## list all unassigned/assigned disks
    disk show -n
    disk show -a

Adding (assigning)
    ## Add a specific disk to pool 1, the mirror pool
    disk assign <disk_name> -p 1

    ## Assign all disks to pool 0; by default disks are assigned to pool 0 if the "-p" option is not specified
    disk assign all -p 0

Remove (spin down disk)
    disk remove <disk_name>

Reassign
    disk reassign -d <sysid>

Replace
    disk replace start <disk_name> <spare_disk_name>
    disk replace stop <disk_name>

    Note: uses Rapid RAID Recovery to copy data from the specified file system disk to the specified spare disk; you can stop this process using the stop command

Zero spare disks
    disk zero spares

Fail a disk
    disk fail <disk_name>

Scrub a disk
    disk scrub start
    disk scrub stop

Sanitize
    disk sanitize start <disk_list>
    disk sanitize abort <disk_list>
    disk sanitize status
    disk sanitize release <disk_list>

    Note: release modifies the state of the disk from sanitize to spare. Sanitize requires a license.

Maintenance
    disk maint start -d <disk_list>
    disk maint abort <disk_list>
    disk maint list
    disk maint status

    Note: you can test a disk using maintenance mode

Swap a disk
    disk swap
    disk unswap

    Note: stalls all SCSI I/O until you physically replace or add a disk; can be used on SCSI disks only

Statistics
    disk_stat <disk_name>

Simulate a pulled disk
    disk simpull <disk_name>

Simulate a pushed disk
    disk simpush -l
    disk simpush <disk_name>

    ## Example
    ontap1> disk simpush -l
    The following pulled disks are available for pushing:
    v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448
    ontap1> disk simpush v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448

Aggregates

Aggregate States

    Online      read and write access to volumes is allowed
    Restricted  some operations, such as parity reconstruction, are allowed, but data access is not allowed
    Offline     no access to the aggregate is allowed

Aggregate Status Values

    32-bit           this aggregate is a 32-bit aggregate
    64-bit           this aggregate is a 64-bit aggregate
    aggr             this aggregate is capable of containing FlexVol volumes
    copying          this aggregate is currently the target of an active copy operation
    degraded         this aggregate contains at least one RAID group with a single-disk failure that is not being reconstructed
    double degraded  this aggregate contains at least one RAID group with a double-disk failure that is not being reconstructed (RAID-DP aggregates only)
    foreign          disks that the aggregate contains were moved to the current storage system from another storage system
    growing          disks are in the process of being added to the aggregate
    initializing     the aggregate is in the process of being initialized
    invalid          the aggregate contains no volumes and none can be added; typically this happens only after an aborted "aggr copy" operation
    ironing          a WAFL consistency check is being performed on the aggregate
    mirror degraded  the aggregate is mirrored and one of its plexes is offline or resynchronizing
    mirrored         the aggregate is mirrored
    needs check      a WAFL consistency check needs to be performed on the aggregate
    normal           the aggregate is unmirrored and all of its RAID groups are functional
    out-of-date      the aggregate is mirrored and needs to be resynchronized
    partial          at least one disk was found for the aggregate, but two or more disks are missing
    raid0            the aggregate consists of RAID 0 (no parity) RAID groups
    raid4            the aggregate consists of RAID 4 RAID groups
    raid_dp          the aggregate consists of RAID-DP RAID groups
    reconstruct      at least one RAID group in the aggregate is being reconstructed
    redirect         aggregate reallocation or file reallocation with the "-p" option has been started on the aggregate; read performance will be degraded
    resyncing        one of the mirrored aggregate's plexes is being resynchronized
    snapmirror       the aggregate is a SnapMirror replica of another aggregate (traditional volumes only)
    trad             the aggregate is a traditional volume and cannot contain FlexVol volumes
    verifying        a mirror verification operation is currently running on the aggregate
    wafl inconsistent  the aggregate has been marked corrupted; contact technical support

Aggregate Commands

Displaying
    aggr status
    aggr status -r
    aggr status [-v]

Check you have spare disks
    aggr status -s

Adding (creating)
    ## Syntax - if no option is specified then the default is used
    aggr create <aggr_name> [-f] [-m] [-n] [-t {raid0|raid4|raid_dp}] [-r raid_size] [-T disk_type] [-R rpm] [-L] [-B {32|64}] <disk_list>

    ## create an aggregate called newaggr whose RAID groups can contain a maximum of 8 disks
    aggr create newaggr -r 8 -d 8a.16 8a.17 8a.18 8a.19

    ## create an aggregate called newfastaggr using 20 x 15000rpm disks
    aggr create newfastaggr -R 15000 20

    ## create an aggregate called newFCALaggr (note: SAS and FC disks may be used)
    aggr create newFCALaggr -T FCAL 15

    Note:
    -f = overrides the default behaviour that does not permit disks in a plex to belong to different disk pools
    -m = specifies the optional creation of a SyncMirror
    -n = displays the results of the command but does not execute it
    -r = maximum size (number of disks) of the RAID groups for this aggregate
    -T = disk type: ATA, SATA, SAS, BSAS, FCAL or LUN
    -R = rpm, which includes 5400, 7200, 10000 and 15000

Remove (destroying)
    aggr offline <aggr_name>
    aggr destroy <aggr_name>

Unremove (undestroying)
    aggr undestroy <aggr_name>

Rename
    aggr rename <old_name> <new_name>

Increase size
    ## Syntax
    aggr add <aggr_name> [-f] [-n] [-g {raid_group_name | new | all}] <disk_list>

    ## add an additional disk to aggregate pfvAggr; use "aggr status" to get the RAID group name
    aggr status pfvAggr -r
    aggr add pfvAggr -g rg0 -d v5.25

    ## Add four 300GB disks to aggregate aggr1
    aggr add aggr1 4@300

Offline
    aggr offline <aggr_name>

Online
    aggr online <aggr_name>

Restricted state
    aggr restrict <aggr_name>

Change aggregate options
    ## display an aggregate's options
    aggr options <aggr_name>

    ## change an aggregate's RAID type
    aggr options <aggr_name> raidtype raid_dp

    ## change an aggregate's RAID size
    aggr options <aggr_name> raidsize 4

Show space usage
    aggr show_space <aggr_name>

Mirror
    aggr mirror <aggr_name>

Split mirror
    aggr split <aggr_name/plex_name> <new_aggr_name>

Copy from one aggregate to another
    ## Obtain the status
    aggr copy status

    ## Start a copy
    aggr copy start <source_aggr> <dest_aggr>

    ## Abort a copy - obtain the operation number by using "aggr copy status"
    aggr copy abort <operation_number>

    ## Throttle the copy: 10 = full speed, 1 = one-tenth full speed
    aggr copy throttle <operation_number> <1-10>
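Putting the creation and expansion commands together, a minimal worked sketch (the aggregate name and disk names are hypothetical):

    ## check for available spares before creating anything
    aggr status -s
    ## create a RAID-DP aggregate with a maximum RAID group size of 16 disks
    aggr create aggr2 -t raid_dp -r 16 -d 8a.20 8a.21 8a.22 8a.23
    ## confirm the layout, then grow it by two more disks in the same RAID group
    aggr status aggr2 -r
    aggr add aggr2 -g rg0 -d 8a.24 8a.25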
Scrubbing (parity)
    ## Media scrub status
    aggr media_scrub status
    aggr scrub status

    ## start a scrub operation
    aggr scrub start [aggrname | plexname | groupname]

    ## stop a scrub operation
    aggr scrub stop [aggrname | plexname | groupname]

    ## suspend a scrub operation
    aggr scrub suspend [aggrname | plexname | groupname]

    ## resume a scrub operation
    aggr scrub resume [aggrname | plexname | groupname]

    Note: starts parity scrubbing on the named online aggregate. Parity scrubbing compares the data disks to the parity disk(s) in their RAID group, correcting the parity disk's contents as necessary. If no name is given, parity scrubbing is started on all online aggregates. If an aggregate name is given, scrubbing is started on all RAID groups contained in the aggregate. If a plex name is given, scrubbing is started on all RAID groups contained in the plex.

    Look at the following system options:

        raid.scrub.duration 360
        raid.scrub.enable on
        raid.scrub.perf_impact low
        raid.scrub.schedule

Verify (mirroring)
    ## verify status
    aggr verify status

    ## start a verify operation
    aggr verify start [aggrname]

    ## stop a verify operation
    aggr verify stop [aggrname]

    ## suspend a verify operation
    aggr verify suspend [aggrname]

    ## resume a verify operation
    aggr verify resume [aggrname]

    Note: starts RAID mirror verification on the named online mirrored aggregate. If no name is given, then RAID mirror verification is started on all online mirrored aggregates. Verification compares the data in both plexes of a mirrored aggregate. In the default case, all blocks that differ are logged, but no changes are made.

Media Scrub
    aggr media_scrub status

    Note: prints the media scrubbing status of the named aggregate, plex, or group. If no name is given, then status is printed for all RAID groups currently running a media scrub. The status includes a percent-complete and whether it is suspended.

    Look at the following system options:

        raid.media_scrub.enable on
        raid.media_scrub.rate 600
        raid.media_scrub.spares.enable on
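A short sketch of kicking off a manual scrub and checking what the scheduled one is set to do (the aggregate name is hypothetical, and I am assuming the usual 7-mode behaviour where "options" with a prefix lists all matching options):

    ## start parity scrubbing on one aggregate and check its progress
    aggr scrub start aggr1
    aggr scrub status
    ## the scheduled scrub behaviour is controlled by the raid.scrub.* options
    options raid.scrub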
Volumes

Volume States

    Online      read and write access to this volume is allowed
    Restricted  some operations, such as parity reconstruction, are allowed, but data access is not allowed
    Offline     no access to the volume is allowed

Volume Status Values

    access denied     the origin system is not allowing access (FlexCache volumes only)
    active redirect   the volume's containing aggregate is undergoing reallocation (with the -p option specified); read performance may be reduced while the volume is in this state
    connecting        the caching system is trying to connect to the origin system (FlexCache volumes only)
    copying           the volume is currently the target of an active vol copy or snapmirror operation
    degraded          the volume's containing aggregate contains at least one degraded RAID group that is not being reconstructed after single-disk failure
    double degraded   the volume's containing aggregate contains at least one degraded RAID-DP group that is not being reconstructed after double-disk failure
    flex              the volume is a FlexVol volume
    flexcache         the volume is a FlexCache volume
    foreign           disks used by the volume's containing aggregate were moved to the current storage system from another storage system
    growing           disks are being added to the volume's containing aggregate
    initializing      the volume's containing aggregate is being initialized
    invalid           the volume does not contain a valid file system
    ironing           a WAFL consistency check is being performed on the volume's containing aggregate
    lang mismatch     the language setting of the origin volume was changed since the caching volume was created (FlexCache volumes only)
    mirror degraded   the volume's containing aggregate is mirrored and one of its plexes is offline or resynchronizing
    mirrored          the volume's containing aggregate is mirrored
    needs check       a WAFL consistency check needs to be performed on the volume's containing aggregate
    out-of-date       the volume's containing aggregate is mirrored and needs to be resynchronized
    partial           at least one disk was found for the volume's containing aggregate, but two or more disks are missing
    raid0             the volume's containing aggregate consists of RAID0 (no parity) groups (array LUNs only)
    raid4             the volume's containing aggregate consists of RAID4 groups
    raid_dp           the volume's containing aggregate consists of RAID-DP groups
    reconstruct       at least one RAID group in the volume's containing aggregate is being reconstructed
    redirect          the volume's containing aggregate is undergoing aggregate reallocation or file reallocation with the -p option; read performance to volumes in the aggregate might be degraded
    rem vol changed   the origin volume was deleted and re-created with the same name; re-create the FlexCache volume to re-enable the FlexCache relationship (FlexCache volumes only)
    rem vol unavail   the origin volume is offline or has been deleted (FlexCache volumes only)
    remote nvram err  the origin system is experiencing problems with its NVRAM (FlexCache volumes only)
    resyncing         one of the plexes of the volume's containing mirrored aggregate is being resynchronized
    snapmirrored      the volume is in a SnapMirror relationship with another volume
    trad              the volume is a traditional volume
    unrecoverable     the volume is a FlexVol volume that has been marked unrecoverable; contact technical support
    unsup remote vol  the origin system is running a version of Data ONTAP that does not support FlexCache volumes or is not compatible with the version running on the caching system (FlexCache volumes only)
    verifying         RAID mirror verification is running on the volume's containing aggregate
    wafl inconsistent the volume or its containing aggregate has been marked corrupted; contact technical support

General Volume Operations (Traditional and FlexVol)

Displaying
    vol status
    vol status -v (verbose)
    vol status -l (display language)

Remove (destroying)
    vol offline <vol_name>
    vol destroy <vol_name>

Rename
    vol rename <old_name> <new_name>

Online
    vol online <vol_name>

Offline
    vol offline <vol_name>

Restrict
    vol restrict <vol_name>

Decompress
    vol decompress status
    vol decompress start <vol_name>
    vol decompress stop <vol_name>

Mirroring
    vol mirror volname [-n] [-v victim_volname] [-f] [-d <disk_list>]

    Note: mirrors the currently unmirrored traditional volume volname, either with the specified set of disks or with the contents of another unmirrored traditional volume victim_volname, which will be destroyed in the process. The vol mirror command fails if either the chosen volname or victim_volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite.

Change language
    vol lang <vol_name> <language>

Change maximum number of files
    ## Display maximum number of files
    maxfiles <vol_name>

    ## Change maximum number of files
    maxfiles <vol_name> <max_num_files>

Change root volume
    vol options <vol_name> root

Media Scrub
    vol media_scrub status [volname | plexname | groupname -s disk-name] [-v]

    Note: prints the media scrubbing status of the named aggregate, volume, plex, or group. If no name is given, then status is printed for all RAID groups currently running a media scrub. The status includes a percent-complete and whether it is suspended.

    Look at the following system options:

        raid.media_scrub.enable on
        raid.media_scrub.rate 600
        raid.media_scrub.spares.enable on
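A small sketch of the routine offline/rename/online cycle using the commands above (the volume name is hypothetical):

    ## check the current state of the volume
    vol status vol1
    ## take it offline, rename it, and bring it back online
    vol offline vol1
    vol rename vol1 vol1_old
    vol online vol1_old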
FlexVol Volume Operations (only)

Adding (creating)
    ## Syntax
    vol create vol_name [-l language_code] [-s {volume|file|none}] <aggr_name> size{k|m|g|t}

    ## Create a 200MB volume using the English character set
    vol create newvol -l en aggr1 200M

    ## Create a 50GB FlexVol volume
    vol create vol1 aggr0 50g

Additional disks
    ## add an additional disk to aggregate flexvol1; use "aggr status" to get the group name
    aggr status flexvol1 -r
    aggr add flexvol1 -g rg0 -d v5.25

Resizing
    vol size <vol_name> [+|-] n{k|m|g|t}

    ## Increase the flexvol1 volume by 100MB
    vol size flexvol1 + 100m

Automatically resizing
    vol autosize vol_name [-m size {k|m|g|t}] [-i size {k|m|g|t}] on

    ## automatically grow in 10MB increments to a maximum of 500MB
    vol autosize flexvol1 -m 500m -i 10m on

Determine free space and inodes
    df -Ah
    df -i

Determine size
    vol size <vol_name>

Automatic free space preservation
    vol options <vol_name> try_first [volume_grow|snap_delete]

    Note: if you specify volume_grow, Data ONTAP attempts to increase the volume's size before deleting any Snapshot copies. Data ONTAP increases the volume size based on specifications you provided using the vol autosize command. If you specify snap_delete, Data ONTAP attempts to create more free space by deleting Snapshot copies before increasing the size of the volume. Data ONTAP deletes Snapshot copies based on the specifications you provided using the snap autodelete command.

Display a FlexVol volume's containing aggregate
    vol container <vol_name>

Cloning
    vol clone create clone_vol [-s none|file|volume] -b parent_vol [parent_snap]
    vol clone split start <vol_name>
    vol clone split stop <vol_name>
    vol clone split estimate <vol_name>
    vol clone split status <vol_name>

    Note: the vol clone create command creates a flexible volume named clone_vol on the local filer that is a clone of a "backing" flexible volume named parent_vol. A clone is a volume that is a writable snapshot of another volume. Initially, the clone and its parent share the same storage; more storage space is consumed only as one volume or the other changes.

Copying
    vol copy start [-S | -s snapshot] <source_vol> <dest_vol>
    vol copy status
    vol copy abort <operation_number>
    vol copy throttle <operation_number> <throttle_value>

    ## Example - copies the nightly snapshot named nightly.1 on volume vol0 on the local filer to the volume vol0 on the remote filer named toaster1
    vol copy start -s nightly.1 vol0 toaster1:vol0

    Note: copies all data, including snapshots, from one volume to another. If the -S flag is used, the command copies all snapshots in the source volume to the destination volume. To specify a particular snapshot to copy, use the -s flag followed by the name of the snapshot. If neither the -S nor the -s flag is used, the filer automatically creates a distinctively named snapshot when vol copy start is executed and copies only that snapshot to the destination volume. The source and destination volumes must either both be traditional volumes or both be flexible volumes; vol copy will abort if an attempt is made to copy between different volume types. The source and destination volumes can be on the same filer or on different filers. If the source or destination volume is on a filer other than the one on which the vol copy start command was entered, specify the volume name in the filer_name:volume_name format.
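A worked FlexVol provisioning sketch using the commands above (the volume and aggregate names are hypothetical; the -s none guarantee is just one choice):

    ## create a thin 20GB FlexVol in aggr1 with no space guarantee
    vol create projvol -s none aggr1 20g
    ## let it grow automatically in 1g steps up to 40g
    vol autosize projvol -m 40g -i 1g on
    ## confirm available space and inodes for the new volume
    df -h projvol
    df -i projvol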
Traditional Volume Operations (only)

Adding (creating)
    vol|aggr create vol_name -v [-l language_code] [-f] [-m] [-n] [-v] [-t {raid4|raid_dp}] [-r raidsize] [-T disk-type] [-R rpm] [-L] disk-list

    ## create a traditional volume using the aggr command
    aggr create tradvol1 -l en -t raid4 -d v5.26 v5.27

    ## create a traditional volume using the vol command
    vol create tradvol1 -l en -t raid4 -d v5.26 v5.27

    ## Create a traditional volume using 20 disks; each RAID group can have 10 disks
    vol create vol1 -r 10 20

Additional disks
    vol add volname [-f] [-n] [-g <raidgroup>] {ndisks[@size] | -d <disk_list>}

    ## add another disk to the already existing traditional volume
    vol add tradvol1 -d v5.28

Splitting
    aggr split <vol_name/plex_name> <new_vol_name>

Scrubbing (parity)
    ## The newer "aggr scrub" command is preferred
    vol scrub status [volname | plexname | groupname] [-v]
    vol scrub start [volname | plexname | groupname] [-v]
    vol scrub stop [volname | plexname | groupname] [-v]
    vol scrub suspend [volname | plexname | groupname] [-v]
    vol scrub resume [volname | plexname | groupname] [-v]

    Note: prints the status of parity scrubbing on the named traditional volume, plex or RAID group. If no name is provided, the status is given for all RAID groups currently undergoing parity scrubbing. The status includes a percent-complete as well as the scrub's suspended status (if any).

Verify (mirroring)
    ## The newer "aggr verify" command is preferred
    ## verify status
    vol verify status

    ## start a verify operation
    vol verify start [volname]

    ## stop a verify operation
    vol verify stop [volname]

    ## suspend a verify operation
    vol verify suspend [volname]

    ## resume a verify operation
    vol verify resume [volname]

    Note: starts RAID mirror verification on the named online mirrored volume. If no name is given, then RAID mirror verification is started on all online mirrored volumes. Verification compares the data in both plexes of a mirrored volume. In the default case, all blocks that differ are logged, but no changes are made.
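A short traditional-volume sketch combining the commands above (volume and disk names are hypothetical, simulator-style):

    ## create a RAID4 traditional volume from two spare disks, then extend it
    vol create tradvol2 -l en -t raid4 -d v5.30 v5.31
    vol add tradvol2 -d v5.32
    ## kick off a parity scrub on the new volume and watch it
    vol scrub start tradvol2
    vol scrub status tradvol2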
FlexCache Volumes

FlexCache Consistency

Delegations
    You can think of a delegation as a contract between the origin system and the caching volume; as long as the caching volume has the delegation, the file has not changed. Delegations are used only in certain situations. When data from a file is retrieved from the origin volume, the origin system can give a delegation for that file to the caching volume. Before that file is modified on the origin volume, whether due to a request from another caching volume or due to direct client access, the origin system revokes the delegation for that file from all caching volumes that have it.

Attribute cache timeouts
    When data is retrieved from the origin volume, the file that contains that data is considered valid in the FlexCache volume as long as a delegation exists for that file. If no delegation exists, the file is considered valid for a certain length of time, specified by the attribute cache timeout. If a client requests data from a file for which there are no delegations, and the attribute cache timeout has been exceeded, the FlexCache volume compares the file attributes of the cached file with the attributes of the file on the origin system.

Write operation proxy
    If a client modifies a file that is cached, that operation is passed back, or proxied through, to the origin system, and the file is ejected from the cache. When the write is proxied, the attributes of the file on the origin volume are changed. This means that when another client requests data from that file, any other FlexCache volume that has that data cached will re-request the data after the attribute cache timeout is reached.

FlexCache Status Values

    access denied     the origin system is not allowing FlexCache access; check the setting of the flexcache.access option on the origin system
    connecting        the caching system is trying to connect to the origin system
    lang mismatch     the language setting of the origin volume was changed since the FlexCache volume was created
    rem vol changed   the origin volume was deleted and re-created with the same name; re-create the FlexCache volume to re-enable the FlexCache relationship
    rem vol unavail   the origin volume is offline or has been deleted
    remote nvram err  the origin system is experiencing problems with its NVRAM
    unsup remote vol  the origin system is running a version of Data ONTAP that either does not support FlexCache volumes or is not compatible with the version running on the caching system

FlexCache Commands

Display
    vol status
    vol status -v <flexcache_name>

    ## How to display the options available and what they are set to
    vol help options
    vol options <flexcache_name>

Display free space
    df -L

Adding (create)
    ## Syntax
    vol create <flexcache_name> <aggr> [size{k|m|g|t}] -S origin:source_vol

    ## Create a FlexCache volume called flexcache1 with autogrow, in the aggr1 aggregate, with the source volume vol1 on the netapp1 storage server
    vol create flexcache1 aggr1 -S netapp1:vol1

Removing (destroy)
    vol offline <flexcache_name>
    vol destroy <flexcache_name>

Automatically resizing
    vol options <flexcache_name> flexcache_autogrow [on|off]

Eject a file from the cache
    flexcache eject <path> [-f]

Statistics
    ## Client stats
    flexcache stats -C <flexcache_name>

    ## Server stats
    flexcache stats -S <volume_name> -c <client>

    ## File stats
    flexcache fstat <path>

FlexClone Volumes

FlexClone Commands

Display
    vol status
    vol status -v <clone_name>
    df -Lh

Adding (create)
    ## Syntax
    vol clone create clone_name [-s {volume|file|none}] -b parent_name [parent_snap]

    ## create a FlexClone called flexclone1 from the parent flexvol1
    vol clone create flexclone1 -b flexvol1

Removing (destroy)
    vol offline <clone_name>
    vol destroy <clone_name>

Splitting
    ## Determine the free space required to perform the split
    vol clone split estimate <clone_name>

    ## Double-check you have the space
    df -Ah

    ## Perform the split
    vol clone split start <clone_name>

    ## Check up on its status
    vol clone split status <clone_name>

    ## Stop the split
    vol clone split stop <clone_name>

Log file
    /etc/log/clone

    The clone log file records the following information:
    - cloning operation ID
    - the name of the volume in which the cloning operation was performed
    - start time of the cloning operation
    - end time of the cloning operation
    - parent file/LUN and clone file/LUN names
    - parent file/LUN ID
    - status of the clone operation: successful, unsuccessful, or stopped, plus some other details
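The whole clone-then-split workflow from the commands above, as a sketch (the clone and parent names are hypothetical):

    ## clone an existing FlexVol, then split the clone off into an independent volume
    vol clone create flexclone2 -b flexvol1
    vol clone split estimate flexclone2
    ## confirm the containing aggregate has the space the estimate asks for
    df -Ah
    vol clone split start flexclone2
    vol clone split status flexclone2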
Deduplication

Deduplication Commands

Start/restart a deduplication operation
    sis start -s <path>
    sis start -s /vol/flexvol1

    ## Use the previous checkpoint
    sis start -sp <path>

Stop a deduplication operation
    sis stop <path>

Schedule deduplication
    sis config -s <schedule> <path>
    sis config -s mon-fri@23 /vol/flexvol1

    Note: schedule lists the days and hours of the day when deduplication runs. The schedule can take the following forms:

    - day_list[@hour_list] - if hour_list is not specified, deduplication runs at midnight on each scheduled day
    - hour_list[@day_list] - if day_list is not specified, deduplication runs every day at the specified hours
    - a hyphen (-) disables deduplication operations for the specified FlexVol volume

Enabling
    sis on <path>

Disabling
    sis off <path>

Status
    sis status -l <path>

Display saved space
    df -s

QTrees

QTree Commands

Display
    qtree status [-i] [-v]

    Note: the -i option includes the qtree ID number in the display. The -v option includes the owning vFiler unit, if the MultiStore license is enabled.

Adding (create)
    ## Syntax - by default the wafl.default_qtree_mode option is used
    qtree create path [-m mode]

    ## create a new qtree in the /vol/users volume using 770 as the permissions
    qtree create /vol/users/news -m 770

Remove
    rm -Rf <directory>

Rename
    mv <old_name> <new_name>

Convert a directory into a qtree directory
    ## Move the directory to a different directory
    mv /n/joel/vol1/dir1 /n/joel/vol1/olddir

    ## Create the qtree
    qtree create /n/joel/vol1/dir1

    ## Move the contents of the old directory back into the new qtree
    mv /n/joel/vol1/olddir/* /n/joel/vol1/dir1

    ## Remove the old directory name
    rmdir /n/joel/vol1/olddir

Stats
    qtree stats [-z] [vol_name]

    Note: -z = zero stats

Change the security style
    ## Syntax
    qtree security path {unix | ntfs | mixed}

    ## Change the security style of /vol/users/docs to mixed
    qtree security /vol/users/docs mixed

Quotas

Quota Commands

Quotas configuration file
    /mroot/etc/quotas

Example quota file
    ##                                        hard limit | thres | soft limit
    ## Quota Target    type                   disk  files| hold  | disk  file
    ## -------------   -----                  ----  ----- ------- ----- -----
    *                  tree@/vol/vol0         -     -     -       -     -     # monitor usage on all qtrees in vol0
    /vol/vol2/qtree    tree                   1024K 75k   -       -     -     # enforce qtree quota using kb
    tinh               user@/vol/vol2/qtree1  100M  -     -       -     -     # enforce user quota in the specified qtree
    dba                group@/vol/ora/qtree1  100M  -     -       -     -     # enforce group quota in the specified qtree

    # * = default user/group/qtree
    # - = placeholder, no limit enforced, just enable stats collection

    Note: there are many permutations, so check the documentation

Displaying
    quota report [<path>]

Activating
    quota on [-w] <vol_name>

    Note: -w = return only after the entire quotas file has been scanned

Deactivating
    quota off [-w] <vol_name>

Reinitializing
    quota off [-w] <vol_name>
    quota on [-w] <vol_name>

Resizing
    quota resize <vol_name>

    Note: this command rereads the quota file

Deleting
    edit the quota file, then
    quota resize <vol_name>

Log messaging
    quota logmsg
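A quota round trip as a sketch, assuming the quotas file is edited as /etc/quotas from the ONTAP shell (the user, qtree and volume names are hypothetical):

    ## append a user quota rule, then activate quotas on the volume and verify
    wrfile -a /etc/quotas "fred    user@/vol/vol2/qtree1    200M    -    -    -    -"
    quota on vol2
    quota report /vol/vol2/qtree1
    ## after later edits to the file, a resize is enough to pick up the changes
    quota resize vol2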
LUNs, igroups and LUN mapping

LUN Configuration

Display
    lun show
    lun show -m
    lun show -v

Initialize/configure LUNs, mapping
    lun setup

    Note: follow the prompts to create and configure LUNs

Create
    lun create -s 100m -t windows /vol/tradvol1/lun1

Destroy
    lun destroy [-f] /vol/tradvol1/lun1

    Note: "-f" forces the destroy

Resize
    lun resize <lun_path> <size>
    lun resize /vol/tradvol1/lun1 75m

Restart block protocol access
    lun online /vol/tradvol1/lun1

Stop block protocol access
    lun offline /vol/tradvol1/lun1

Map a LUN to an initiator group
    lun map /vol/tradvol1/lun1 win_hosts_group1 0
    lun map -f /vol/tradvol1/lun2 linux_host_group1 1
    lun show -m

    Note: use "-f" to force the mapping

Remove LUN mapping
    lun show -m
    lun offline /vol/tradvol1/lun1
    lun unmap /vol/tradvol1/lun1 win_hosts_group1 0

Display or zero read/write statistics for a LUN
    lun stats /vol/tradvol1/lun1

Comments
    lun comment /vol/tradvol1/lun1 "10GB for payroll records"

Check all LUN/igroup/FCP settings for correctness
    lun config_check -v

Manage LUN cloning
    # Create a Snapshot copy of the volume containing the LUN to be cloned
    snap create tradvol1 tradvol1_snapshot_08122010

    # Create the LUN clone
    lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010

Show the maximum possible size of a LUN on a given volume or qtree
    lun maxsize /vol/tradvol1

Move (rename) a LUN
    lun move /vol/tradvol1/lun1 /vol/tradvol1/windows_lun1

Display/change a LUN serial number
    lun serial -x /vol/tradvol1/lun1

Manage LUN properties
    lun set reservation /vol/tradvol1/hpux/lun0

Configure NAS file-sharing properties
    lun share <lun_path> {none | read | write | all}

Manage LUN and snapshot interactions
    lun snap usage -s <volume> <snapshot>

igroup configuration

Display
    igroup show
    igroup show -v
    igroup show iqn.1991-05.com.microsoft:xblade

Create (iSCSI)
    igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade

Create (FC)
    igroup create -f -t windows win_hosts_group1 <wwpn>

Destroy
    igroup destroy win_hosts_group1

Add initiators to an igroup
    igroup add win_hosts_group1 iqn.1991-05.com.microsoft:laptop

Remove initiators from an igroup
    igroup remove win_hosts_group1 iqn.1991-05.com.microsoft:laptop

Rename
    igroup rename win_hosts_group1 win_hosts_group2

Set O/S type
    igroup set win_hosts_group1 ostype windows

Enabling ALUA
    igroup set win_hosts_group1 alua yes

    Note: ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on Fibre Channel and iSCSI SANs. ALUA enables the initiator to query the target about path attributes, such as primary path and secondary path. It also enables the target to communicate events back to the initiator. As long as the host supports the ALUA standard, multipathing software can be developed to support any array; proprietary SCSI commands are no longer required.

iSCSI commands

Display
    iscsi initiator show
    iscsi session show [-t]
    iscsi connection show -v
    iscsi security show

Status
    iscsi status

Start
    iscsi start

Stop
    iscsi stop

Stats
    iscsi stats

Nodename
    iscsi nodename

    # to change the name
    iscsi nodename <new_name>

Interfaces
    iscsi interface show
    iscsi interface enable e0b
    iscsi interface disable e0b

Portals
    iscsi portal show

    Note: use the iscsi portal show command to display the target IP addresses of the storage system. The storage system's target IP addresses are the addresses of the interfaces used for the iSCSI protocol.

Access lists
    iscsi interface accesslist show

    Note: you can add or remove interfaces from the list

Port Sets

Display
    portset show
    portset show portset1
    igroup show linux-igroup1

Create
    portset create -f portset1 SystemA:4b

Destroy
    igroup unbind linux-igroup1 portset1
    portset destroy portset1

Add
    portset add portset1 SystemB:4b

Remove
    portset remove portset1 SystemB:4b

Binding
    igroup bind linux-igroup1 portset1
    igroup unbind linux-igroup1 portset1

FCP service

Display
    fcp show adapter -v

Daemon status
    fcp status

Start
    fcp start

Stop
    fcp stop

Stats
    fcp stats -i interval [-c count] [-a | adapter]
    fcp stats -i 1

Target expansion adapters
    fcp config <adapter> [down|up]
    fcp config 4a down

Target adapter speed
    fcp config <adapter> speed [auto|1|2|4|8]
    fcp config 4a speed 8

Set WWPN
    # fcp portname set [-f] adapter wwpn
    fcp portname set -f 1b 50:0a:09:85:87:09:68:ad

Swap WWPN
    # fcp portname swap [-f] adapter1 adapter2
    fcp portname swap -f 1a 1b

Change WWNN
    # display nodename
    fcp nodename
    fcp nodename [-f] nodename
    fcp nodename 50:0a:09:80:82:02:8d:ff

    Note: the WWNN of a storage system is generated by a serial number in its NVRAM, but it is stored on disk. If you ever replace a storage system chassis and reuse it in the same Fibre Channel SAN, it is possible, although extremely rare, that the WWNN of the replaced storage system is duplicated. In this unlikely event, you can change the WWNN of the storage system.

WWPN aliases - display
    fcp wwpn-alias show
    fcp wwpn-alias show -a my_alias_1
    fcp wwpn-alias show -w 10:00:00:00:c9:30:80:2f

WWPN aliases - create
    fcp wwpn-alias set [-f] alias wwpn
    fcp wwpn-alias set my_alias_1 10:00:00:00:c9:30:80:2f

WWPN aliases - remove
    fcp wwpn-alias remove [-a alias ... | -w wwpn]
    fcp wwpn-alias remove -a my_alias_1
    fcp wwpn-alias remove -w 10:00:00:00:c9:30:80:2f
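Tying the LUN, igroup and mapping commands above together, a minimal provisioning sketch (the igroup, initiator, LUN path and LUN ID are hypothetical):

    ## create an igroup for a Windows host, a LUN, and map one to the other
    igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade
    lun create -s 100g -t windows /vol/tradvol1/lun3
    lun map /vol/tradvol1/lun3 win_hosts_group1 2
    ## confirm the mapping
    lun show -m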
Snapshotting and Cloning

Snapshot and Cloning Commands

Display clones
    snap list

Create a clone
    # Create a LUN
    lun create -s 10g -t solaris /vol/tradvol1/lun1

    # Create a Snapshot copy of the volume containing the LUN to be cloned
    snap create tradvol1 tradvol1_snapshot_08122010

    # Create the LUN clone
    lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010

Destroy a clone
    # display the snapshot copies
    lun snap usage tradvol1 tradvol1_snapshot_08122010

    # Delete all the LUNs in the active file system that are displayed by the lun snap usage command
    lun destroy /vol/tradvol1/clone_lun1

    # Delete all the Snapshot copies that are displayed by the lun snap usage command, in the order they appear
    snap delete tradvol1 tradvol1_snapshot_08122010

Clone dependency
    vol options <vol_name> snapshot_clone_dependency on
    vol options <vol_name> snapshot_clone_dependency off

    Note: prior to Data ONTAP 7.3, the system automatically locked all backing Snapshot copies when Snapshot copies of LUN clones were taken. Starting with Data ONTAP 7.3, you can enable the system to lock only the backing Snapshot copies for the active LUN clone. If you do this, when you delete the active LUN clone, you can delete the base Snapshot copy without having to first delete all of the more recent backing Snapshot copies. This behaviour is not enabled by default; use the snapshot_clone_dependency volume option to enable it. If this option is set to off, you will still be required to delete all subsequent Snapshot copies before deleting the base Snapshot copy. If you enable this option, you are not required to rediscover the LUNs. If you perform a subsequent volume snap restore operation, the system restores whichever value was present at the time the Snapshot copy was taken.

Restoring a snapshot
    snap restore -s payroll_lun_backup.2 -t vol /vol/payroll_lun

Splitting the clone
    lun clone split start lun_path
    lun clone split status lun_path

Stop clone splitting
    lun clone split stop lun_path

Delete a snapshot copy
    snap delete vol-name snapshot-name
    snap delete -a -f <vol-name>

Disk space usage
    lun snap usage tradvol1 mysnap

Use volume copy to copy LUNs
    vol copy start -S source:source_volume dest:dest_volume
    vol copy start -S /vol/vol0 filerB:/vol/vol1

The estimated rate of change of data between Snapshot copies in a volume
    snap delta /vol/tradvol1 tradvol1_snapshot_08122010

The estimated amount of space freed if you delete the specified Snapshot copies
    snap reclaimable /vol/tradvol1 tradvol1_snapshot_08122010
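Before deleting an old snapshot, the delta/reclaimable pair above tells you what you stand to gain; a sketch (volume and snapshot names are hypothetical):

    ## estimate what deleting an old snapshot would free before removing it
    snap list tradvol1
    snap delta /vol/tradvol1 tradvol1_snapshot_08122010
    snap reclaimable /vol/tradvol1 tradvol1_snapshot_08122010
    snap delete tradvol1 tradvol1_snapshot_08122010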
File Access using NFS

Export Options

    actual=<path>      specifies the actual file system path corresponding to the exported file system path
    anon=<uid>|<name>  specifies the effective user ID (or name) of all anonymous or root NFS client users that access the file system path
    nosuid             disables setuid and setgid executables and mknod commands on the file system path
    ro | ro=clientid   specifies which NFS clients have read-only access to the file system path
    rw | rw=clientid   specifies which NFS clients have read-write access to the file system path
    root=clientid      specifies which NFS clients have root access to the file system path. If you specify the root= option, you must specify at least one NFS client identifier. To exclude NFS clients from the list, prepend the NFS client identifiers with a minus sign (-).
    sec=sectype        specifies the security types that an NFS client must support to access the file system path. To apply the security types to all types of access, specify the sec= option once. To apply the security types to specific types of access (anonymous, non-super user, read-only, read-write, or root), specify the sec= option at least twice, once before each access type to which it applies (anon, nosuid, ro, rw, or root, respectively).

    The security types can be one of the following:

    none   no security; Data ONTAP treats all of the NFS client's users as anonymous users
    sys    standard Unix (AUTH_SYS) authentication; Data ONTAP checks the NFS credentials of all of the NFS client's users, applying the file access permissions specified for those users in the NFS server's /etc/passwd file. This is the default security type.
    krb5   Kerberos Version 5 authentication; Data ONTAP uses data encryption standard (DES) key encryption to authenticate the NFS client's users
    krb5i  Kerberos Version 5 integrity; in addition to authenticating the NFS client's users, Data ONTAP uses message authentication codes (MACs) to verify the integrity of the NFS client's remote procedure requests and responses, thus preventing "man-in-the-middle" tampering
    krb5p  Kerberos Version 5 privacy; in addition to authenticating the NFS client's users and verifying data integrity, Data ONTAP encrypts NFS arguments and results to provide privacy

Examples
    rw=10.45.67.0/24
    ro,root=@trusted,rw=@friendly
    rw,root=192.168.0.80,nosuid

Export Commands

Displaying
    exportfs
    exportfs -q <path>

Create
    # create export in memory and write to /etc/exports (use default options)
    exportfs -p /vol/nfs1

    # create export in memory and write to /etc/exports (use specific options)
    exportfs -p sec=none,rw,root=192.168.0.80,nosuid /vol/nfs1

    # create export in memory only using specific options
    exportfs -io sec=none,rw,root=192.168.0.80,nosuid /vol/nfs1

Remove
    # Memory only
    exportfs -u <path>

    # Memory and /etc/exports
    exportfs -z <path>

Export all
    exportfs -a

Check access
    exportfs -c 192.168.0.80 /vol/nfs1

Flush
    exportfs -f
    exportfs -f <path>

Reload
    exportfs -r

Storage path
    exportfs -s <path>

Write exports to a file
    exportfs -w <path/file>

Fencing
    # Suppose /vol/vol0 is exported with the following export options:
    -rw=pig:horse:cat:dog,ro=duck,anon=0

    # The following command enables fencing of cat from /vol/vol0
    exportfs -b enable save cat /vol/vol0

    # cat moves to the front of the ro= list for /vol/vol0:
    -rw=pig:horse:dog,ro=cat:duck,anon=0

Stats
    nfsstat
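A small exportfs sketch using the flags above (the subnet, client address and volume are hypothetical):

    ## export a volume read-write to one subnet, in memory only, then verify a client's access
    exportfs -io rw=192.168.0.0/24,root=192.168.0.80 /vol/nfs1
    exportfs -c 192.168.0.80 /vol/nfs1
    ## persist the rule in /etc/exports and reload the exports
    exportfs -p rw=192.168.0.0/24,root=192.168.0.80 /vol/nfs1
    exportfs -r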
File Access using CIFS

Useful CIFS options

Change the security style
    options wafl.default_security_style {ntfs | unix | mixed}

Timeout
    options cifs.idle_timeout <time>

Performance
    options cifs.oplocks.enable on

    Note: under some circumstances, if a process has an exclusive oplock on a file and a second process attempts to open the file, the first process must invalidate cached data and flush writes and locks. The client must then relinquish the oplock and access to the file. If there is a network failure during this flush, cached write data might be lost.

CIFS Commands

Useful files
    /etc/cifsconfig_setup.cfg
    /etc/usermap.cfg
    /etc/passwd
    /etc/cifsconfig_share.cfg

    Note: use "rdfile" to read these files

CIFS setup
    cifs setup

    Note: you will be prompted to answer a number of questions, based on what requirements you need

Start
    cifs restart

Stop
    cifs terminate

    # terminate a specific client
    cifs terminate <client_name>|<IP address>

Sessions
    cifs sessions
    cifs sessions <user>
    cifs sessions <IP address>

    # Authentication
    cifs sessions -t

    # Changes
    cifs sessions -c

    # Security info
    cifs sessions -s

Broadcast message
    cifs broadcast * "message"
    cifs broadcast <client_name> "message"

Permissions
    cifs access <share> <user|group> <rights>

    # Examples
    cifs access sysadmins -g wheel Full Control
    cifs access -delete releases ENGINEERING\mary

    Note: rights can be Unix-style combinations of r w x - or NT-style "No Access", "Read", "Change", and "Full Control"

Stats
    cifs stat

Create a share
    # create a volume in the normal way
    # then using qtrees set the style of the volume {ntfs | unix | mixed}
    # Now you can create your share
    cifs shares -add TEST /vol/flexvol1/TEST -comment "Test Share" -forcegroup workgroup -maxusers 100

Change share characteristics
    cifs shares -change sharename {-browse | -nobrowse} {-comment desc | -nocomment} {-maxusers userlimit | -nomaxusers} {-forcegroup groupname | -noforcegroup} {-widelink | -nowidelink} {-symlink_strict_security | -nosymlink_strict_security} {-vscan | -novscan} {-vscanread | -novscanread} {-umask mask | -noumask} {-no_caching | -manual_caching | -auto_document_caching | -auto_program_caching}

    # example
    cifs shares -change TEST -novscan

Home directories
    # Display home directories
    cifs homedir

    # Add a home directory
    wrfile -a /etc/cifs_homedir.cfg /vol/TEST

    # check it
    rdfile /etc/cifs_homedir.cfg

    # Display from a Windows server
    net view \\<filer_ip>

    # Connect
    net use * \\192.168.0.75\TEST

    Note: make sure the directory exists

Domain controller
    # add a preferred domain controller
    cifs prefdc add lab 10.10.10.10 10.10.10.11

    # delete a domain controller
    cifs prefdc delete lab

    # List domain information
    cifs domaininfo

    # List the preferred controllers
    cifs prefdc print

    # Re-establish the connection
    cifs resetdc

Change the filer's domain password
    cifs changefilerpwd

Tracing permission problems
    sectrace add [-ip ip_address] [-ntuser nt_username] [-unixuser unix_username] [-path path_prefix] [-a]

    # Examples
    sectrace add -ip 192.168.10.23
    sectrace add -unixuser foo -path /vol/vol0/home4 -a

    # To remove
    sectrace delete all
    sectrace delete <index>

    # Display tracing
    sectrace show

    # Display error code status
    sectrace print-status <status_code>
    sectrace print-status 1:51544850432:32:78
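A share-creation sketch following the steps above (the qtree path, share name and group are hypothetical):

    ## create a qtree-backed share and grant a group full control
    qtree create /vol/flexvol1/TEST
    qtree security /vol/flexvol1/TEST ntfs
    cifs shares -add TEST /vol/flexvol1/TEST -comment "Test Share"
    cifs access TEST -g sysadmins Full Control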
File Access using FTP

Useful options

Enable
    options ftpd.enable on

Disable
    options ftpd.enable off

File locking
    options ftpd.locking delete
    options ftpd.locking none

    Note: to prevent users from modifying files while the FTP server is transferring them, you can enable FTP file locking; otherwise, you can disable FTP file locking. By default, FTP file locking is disabled.

Authentication style
    options ftpd.auth_style {unix | ntlm | mixed}

Bypassing of FTP traverse checking
    options ftpd.bypass_traverse_checking on
    options ftpd.bypass_traverse_checking off

    Note: if the ftpd.bypass_traverse_checking option is set to off, when a user attempts to access a file using FTP, Data ONTAP checks the traverse (execute) permission for all directories in the path to the file. If any of the intermediate directories does not have the "X" (traverse) permission, Data ONTAP denies access to the file. If the option is set to on, when a user attempts to access a file, Data ONTAP does not check the traverse permission for the intermediate directories when determining whether to grant or deny access to the file.

Restricting FTP users to a specific directory
    options ftpd.dir.restriction on
    options ftpd.dir.restriction off

Restricting FTP users to their home directories or a default directory
    options ftpd.dir.override ""

Maximum number of connections
    options ftpd.max_connections n
    options ftpd.max_connections_threshold n

Idle timeout value
    options ftpd.idle_timeout n s | m | h

Anonymous logins
    options ftpd.anonymous.enable on
    options ftpd.anonymous.enable off

    # specify the name for the anonymous login
    options ftpd.anonymous.name username

    # create the directory for the anonymous login
    options ftpd.anonymous.home_dir homedir

FTP Commands

Log files
    /etc/log/ftp.cmd
    /etc/log/ftp.xfer

    # specify the max number of logfiles (default is 6) and their size
    options ftpd.log.nfiles 10
    options ftpd.log.filesize 1G

    Note: use rdfile to view the logs

Restricting access
    /etc/ftpusers

    Note: use rdfile and wrfile to access /etc/ftpusers

Stats
    ftp stat

    # to reset
    ftp stat -z

File Access using HTTP

HTTP Options

Enable
    options httpd.enable on

Disable
    options httpd.enable off

Enabling or disabling the bypassing of HTTP traverse checking
    options httpd.bypass_traverse_checking on
    options httpd.bypass_traverse_checking off

    Note: this is similar to the FTP version

Root directory
    options httpd.rootdir /vol0/home/users/pages

Host access
    options httpd.access host=Host1 AND if=e3
    options httpd.admin.access host!=Host1

HTTP Commands

Log files
    /etc/log/httpd.log

    # use the below to change the logfile format
    options httpd.log.format alt1

    Note: use rdfile to view

Redirects
    redirect /cgi-bin/* http://cgi-host/*

Pass rule
    pass /image-bin/*

Fail rule
    fail /usr/forbidden/*

MIME types
    /etc/httpd.mimetypes

    Note: use rdfile and wrfile to edit

Interface firewall
    ifconfig f0 untrusted

Stats
    httpstat [-dersta]

    # reset the stats
    httpstat -z[derta]
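A minimal HTTP-serving sketch built from the options above (the root directory path is hypothetical):

    ## serve a volume's pages directory over HTTP and watch the counters
    options httpd.enable on
    options httpd.rootdir /vol/vol1/web
    httpstat
    ## reset the statistics once you have a baseline
    httpstat -z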
Network Interfaces

Display
    ifconfig -a
    ifconfig <interface>

IP address
    ifconfig e0 <IP address>
    ifconfig e0a <IP address>

    # Remove an IP address
    ifconfig e3 0

Subnet mask
    ifconfig e0a netmask <subnet mask>

Broadcast
    ifconfig e0a broadcast <broadcast address>

Media type
    ifconfig e0a mediatype 100tx-fd

Maximum transmission unit (MTU)
    ifconfig e8 mtusize 9000

Flow control
    ifconfig <interface> flowcontrol <value>

    # example
    ifconfig e8 flowcontrol none

    Note: value is the flow control type. You can specify the following values for the flowcontrol option:

        none    - no flow control
        receive - able to receive flow control frames
        send    - able to send flow control frames
        full    - able to send and receive flow control frames

    The default flowcontrol type is full.

Trusted
    ifconfig e8 untrusted

    Note: you can specify whether a network interface is trustworthy or untrustworthy. When you specify an interface as untrusted (untrustworthy), any packets received on the interface are likely to be dropped.

HA pair
    ifconfig e8 partner <IP address>

    ## You must enable takeover on interface failures by entering the following commands:
    options cf.takeover.on_network_interface_failure enable
    ifconfig interface_name {nfo|-nfo}

        nfo  - enables negotiated failover
        -nfo - disables negotiated failover

    Note: in an HA pair, you can assign a partner IP address to a network interface. The network interface takes over this IP address when a failover occurs.

Alias
    # Create alias
    ifconfig e0 alias 192.0.2.30

    # Remove alias
    ifconfig e0 -alias 192.0.2.30

Block/unblock protocols
    # Block
    options interface.blocked.cifs e9
    options interface.blocked.cifs e0a,e0b

    # Unblock
    options interface.blocked.cifs ""

Stats
    ifstat
    netstat

    Note: there are many options to both these commands, so I will leave those to the man pages

Bring up/down an interface
    ifconfig <interface> up
    ifconfig <interface> down

Routing

Default route
    # using wrfile and rdfile, edit the /etc/rc file with the below
    route add default 192.168.0.254 1

    # the full /etc/rc file will look something like the below
    hostname netapp1
    ifconfig e0 192.168.0.10 netmask 255.255.255.0 mediatype 100tx-fd
    route add default 192.168.0.254 1
    routed on

Enable/disable fast path
    options ip.fastpath.enable {on|off}

    Note: on enables fast path, off disables it

Enable/disable the routing daemon
    routed {on|off}

    Note: on turns the routed daemon on, off turns it off

Display the routing table
    netstat -rn
    route -s
    routed status

Add to the routing table
    route add 192.168.0.15 gateway.com 1

Hosts and DNS

Hosts
    # use wrfile and rdfile to read and edit the /etc/hosts file; it basically uses the same rules as a Unix hosts file

nsswitch file
    # use wrfile and rdfile to read and edit the /etc/nsswitch.conf file; it basically uses the same rules as a Unix nsswitch.conf file

DNS
    # use wrfile and rdfile to read and edit the /etc/resolv.conf file; it basically uses the same rules as a Unix resolv.conf file

    options dns.enable {on|off}

    Note: on enables DNS, off disables it

Domain name
    options dns.domainname <domain>

DNS cache
    options dns.cache.enable
    options dns.cache.disable

    # To flush the DNS cache
    dns flush

    # To see DNS cache information
    dns info

DNS updates
    options dns.update.enable {on|off|secure}

    Note: on enables dynamic DNS updates, off disables them, secure enables secure dynamic DNS updates

Time-to-live (TTL)
    options dns.update.ttl <time>
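Pulling the interface, routing and DNS pieces together, a basic network setup sketch (addresses and interface names are hypothetical; remember that ifconfig and route changes made at the prompt are lost at reboot unless they also go into /etc/rc):

    ## give e0a an address and add a default route
    ifconfig e0a 192.168.0.10 netmask 255.255.255.0
    route add default 192.168.0.254 1
    ## persist both in /etc/rc so they survive a reboot
    wrfile -a /etc/rc "ifconfig e0a 192.168.0.10 netmask 255.255.255.0"
    wrfile -a /etc/rc "route add default 192.168.0.254 1"
    ## enable DNS and verify the routing table
    options dns.enable on
    netstat -rn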