ZFS Root File System


Transcript of ZFS Root File System

To Migrate a UFS Root File System to a ZFS Root File System


# zpool create mpool mirror c1t2d0s0 c2t1d0s0

# lucreate -c ufs1009BE -n zfs1009BE -p mpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufs1009BE>.
Creating initial configuration for primary boot environment <ufs1009BE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufs1009BE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <ufs1009BE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t2d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfs1009BE>.
Source boot environment is <ufs1009BE>.
Creating boot environment <zfs1009BE>.
Creating file systems on boot environment <zfs1009BE>.
Creating <zfs> file system for </> in zone <global> on <mpool/ROOT/zfs1009BE>.
Populating file systems on boot environment <zfs1009BE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.

Creating compare databases for boot environment <zfs1009BE>.
Creating compare database for file system </mpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfs1009BE>.
Making boot environment <zfs1009BE> bootable.
Creating boot_archive for /.alt.tmp.b-qD.mnt
updating /.alt.tmp.b-qD.mnt/platform/sun4u/boot_archive
Population of boot environment <zfs1009BE> successful.
Creation of boot environment <zfs1009BE> successful.

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufs1009BE                  yes      yes    yes       no     -
zfs1009BE                  yes      no     no        yes    -

# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
mpool                 7.17G  59.8G  95.5K  /mpool
mpool/ROOT            4.66G  59.8G    21K  /mpool/ROOT
mpool/ROOT/zfs1009BE  4.66G  59.8G  4.66G  /
mpool/dump                2G  61.8G    16K  -
mpool/swap             517M  60.3G    16K  -

# luactivate zfs1009BE

A Live Upgrade Sync operation will be performed on startup of boot environment <zfs1009BE>.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

.

.

.

Modifying boot archive service

Activation of boot environment <zfs1009BE> successful.

# init 6

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufs1009BE                  yes      no     no        yes    -
zfs1009BE                  yes      yes    yes       no     -

ok boot -Z mpool/ROOT/zfs1009BE

ok boot -Z mpool/ROOT/zfs1009BE -F failsafe
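The ok prompts above are entered at the SPARC OpenBoot firmware: boot -Z boots a specific ZFS boot environment, and -F failsafe boots its failsafe archive. You can also list the bootable datasets in the root pool first and then choose one (a brief sketch; the exact menu output depends on your firmware and pool contents):

ok boot -L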

How to Replace a Disk in the ZFS Root Pool

# zpool offline rpool c1t0d0s0

# cfgadm -c unconfigure c1::dsk/c1t0d0

# cfgadm -c configure c1::dsk/c1t0d0

# zpool replace rpool c1t0d0s0

# zpool online rpool c1t0d0s0

# zpool status rpool

On a SPARC based system:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
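On an x86 based system, the equivalent step installs the GRUB boot blocks instead (a sketch that assumes the same replacement disk; adjust the device name for your configuration):

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0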

# zpool attach rpool c1t10d0s0 c1t9d0s0

# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 25.47% done, 0h4m to go
config:

        NAME           STATE     READ WRITE CKSUM
        rpool          ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c1t10d0s0  ONLINE       0     0     0
            c1t9d0s0   ONLINE       0     0     0

errors: No known data errors

How to Create Root Pool Snapshots

Create root pool snapshots for recovery purposes. The best way to create root pool snapshots is to do a recursive snapshot of the root pool.

The procedure below creates a recursive root pool snapshot and stores the snapshot as a file in a pool on a remote system. In the case of a root pool failure, the remote dataset can be mounted by using NFS and the snapshot file received into the recreated pool. You can also store root pool snapshots as the actual snapshots in a pool on a remote system. Sending and receiving the snapshots from a remote system is a bit more complicated because you must configure ssh or use rsh while the system to be repaired is booted from the Solaris OS miniroot.
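For example, the ssh alternative of keeping actual snapshots on the remote system might look like the following sketch; the destination pool name backuppool and working root-level ssh access are illustrative assumptions, not part of this procedure:

local# zfs send -Rv rpool@0804 | ssh remote-system zfs receive -Fdu backuppool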

Validating remotely stored snapshots, whether they are stored as files or as actual snapshots, is an important step in root pool recovery. With either method, snapshots should be recreated on a routine basis, such as when the pool configuration changes or when the Solaris OS is upgraded.
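As a lightweight validation of a snapshot file that is already stored remotely, a dry-run receive can confirm that the stream is readable; testpool is a hypothetical scratch pool, and the -n option means nothing is written, so this is a sanity check rather than a full restore test:

remote# zfs receive -nvFdu testpool < /rpool/snaps/rpool.0804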

In the following example, the system is booted from the zfs1009BE boot environment.

1. Create space on a remote system to store the snapshots.

For example:

remote# zfs create rpool/snaps

2. Share the space to the local system.

For example:

remote# zfs set sharenfs='rw=local-system,root=local-system' rpool/snaps

remote# share
-@rpool/snaps   /rpool/snaps   sec=sys,rw=local-system,root=local-system   ""

3. Create a recursive snapshot of the root pool.

local# zfs snapshot -r rpool@0804

local# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      6.17G  60.8G    98K  /rpool
rpool@0804                     0      -    98K  -
rpool/ROOT                 4.67G  60.8G    21K  /rpool/ROOT
rpool/ROOT@0804                0      -    21K  -
rpool/ROOT/zfs1009BE       4.67G  60.8G  4.67G  /
rpool/ROOT/zfs1009BE@0804   386K      -  4.67G  -
rpool/dump                 1.00G  60.8G  1.00G  -
rpool/dump@0804                0      -  1.00G  -
rpool/swap                  517M  61.3G    16K  -
rpool/swap@0804                0      -    16K  -

4. Send the root pool snapshots to the remote system.

For example:

local# zfs send -Rv rpool@0804 > /net/remote-system/rpool/snaps/rpool.0804
sending from @ to rpool@0804
sending from @ to rpool/swap@0804
sending from @ to rpool/ROOT@0804
sending from @ to rpool/ROOT/zfs1009BE@0804
sending from @ to rpool/dump@0804

How to Recreate a ZFS Root Pool and Restore Root Pool Snapshots

In this scenario, assume the following conditions:

ZFS root pool cannot be recovered

ZFS root pool snapshots are stored on a remote system and are shared over NFS

All steps below are performed on the local system.

1. Boot from CD/DVD or the network.

On a SPARC based system, select one of the following boot methods:

ok boot net -s

ok boot cdrom -s

If you don't use the -s option, you'll need to exit the installation program.

On an x86 based system, select the option for booting from the DVD or the network. Then, exit the installation program.

2. Mount the remote snapshot dataset.

For example:

# mount -F nfs remote-system:/rpool/snaps /mnt

If your network services are not configured, you might need to specify the remote-system's IP address.
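For example, assuming a hypothetical address of 192.168.1.10 for remote-system:

# mount -F nfs 192.168.1.10:/rpool/snaps /mnt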

3. If the root pool disk is replaced and does not contain a disk label that is usable by ZFS, you will have to relabel the disk.

For more information about relabeling the disk, go to the following site:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
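One common approach, if another disk in the system already has a suitable SMI (VTOC) label, is to copy that label to the new disk; this is only a sketch and the device names are hypothetical:

# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2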

4. Recreate the root pool.

For example:

# zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t1d0s0

5. Restore the root pool snapshots.

This step might take some time. For example:

# cat /mnt/rpool.0804 | zfs receive -Fdu rpool
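The cat pipeline follows the original example; a plain input redirection is equivalent and avoids the extra process:

# zfs receive -Fdu rpool < /mnt/rpool.0804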

6. Verify that the root pool datasets are restored.

For example:

# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      6.17G  60.8G    98K  /a/rpool
rpool@0804                     0      -    98K  -
rpool/ROOT                 4.67G  60.8G    21K  /legacy
rpool/ROOT@0804                0      -    21K  -
rpool/ROOT/zfs1009BE       4.67G  60.8G  4.67G  /a
rpool/ROOT/zfs1009BE@0804   398K      -  4.67G  -
rpool/dump                 1.00G  60.8G  1.00G  -
rpool/dump@0804                0      -  1.00G  -
rpool/swap                  517M  61.3G    16K  -
rpool/swap@0804                0      -    16K  -

7. Set the bootfs property on the root pool BE.

For example:

# zpool set bootfs=rpool/ROOT/zfs1009BE rpool

8. Install the boot blocks on the new disk.

On a SPARC based system:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t5d0s0

On an x86 based system:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t5d0s0

9. Reboot the system.

# init 6
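After the system comes back up, it can be worth confirming that it booted from the restored pool, for example (output will vary):

# zpool status rpool

# zfs list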