Operations grimoire/ZFS
Revision as of 16:41, 18 September 2024
ZFS pool administration
ZFS pool features
ZFS pools are versioned, so when we upgrade the OS (FreeBSD) or the ZFS driver (Linux), we can enable the new features:
zpool upgrade -a
The only bad moment to do this is if you intend to use the pool elsewhere with an incompatible ZFS implementation. For example, if you enable a feature on FreeBSD that is absent from ZFS on Linux, you would no longer be able to mount the disk read-write on a Linux machine. It's generally safe to do so, as we don't move pools from one OS to another.
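Before upgrading, you can check which feature flags a pool still has disabled; those are the ones zpool upgrade -a would enable. A minimal sketch, assuming a POSIX shell; the helper name is illustrative and not part of this page:

```shell
#!/bin/sh
# Filter helper (hypothetical name): expects the output of
#   zpool get -H -o property,value all <pool>
# on stdin and prints the feature flags that are still disabled,
# i.e. those a `zpool upgrade` would turn on.
list_disabled_features() {
    awk '$1 ~ /^feature@/ && $2 == "disabled" { print $1 }'
}

# Usage (requires a real pool, e.g. "arcology"):
#   zpool get -H -o property,value all arcology | list_disabled_features
```

Reviewing this list first tells you exactly which features would become incompatible with an older ZFS implementation.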
Boot loader
Any pool with the bootfs property set is bootable. When a ZFS pool has new features enabled, you need to ensure the boot loader is still able to manage them.
On Linux, ZFS is supported by GRUB2, but not all features are. So you're safe as long as the GRUB2 package is kept up to date.
On FreeBSD, you can simply update the boot code. On Ysul, for example:
$ gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
partcode written to ada0p1
bootcode written to ada0
The protective MBR /boot/pmbr is optional but typically used. This bootstrap code searches the GPT table for a freebsd-boot partition and runs the next bootstrap stage from it. That freebsd-boot partition then receives gptboot (GPT with UFS disks) or gptzfsboot (GPT with ZFS disks).
The -i argument here specifies the target partition to install gptzfsboot. To find it, run gpart show.
Here for example the output for WindRiver:
$ gpart show
=> 40 1000215136 ada0 GPT (477G)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 8388608 2 freebsd-swap (4.0G)
8390656 991823872 3 freebsd-zfs (473G)
1000214528 648 - free - (324K)
=> 40 1000215136 ada1 GPT (477G)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 8388608 2 freebsd-swap (4.0G)
8390656 991823872 3 freebsd-zfs (473G)
1000214528 648 - free - (324K)
As we can see, the `freebsd-boot` partition is located at index 1. That's the one to use, so -i 1 is passed here too.
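The index lookup can be scripted. In `gpart show` output the partition lines have the columns start, size, index, type, so a small awk filter recovers the value to pass to -i; the helper name here is illustrative:

```shell
#!/bin/sh
# Hypothetical helper: expects `gpart show <disk>` output on stdin
# and prints the index of each freebsd-boot partition, i.e. the value
# to pass to `gpart bootcode -i`.
boot_partition_index() {
    # $3 = partition index, $4 = partition type
    awk '$4 == "freebsd-boot" { print $3 }'
}

# Usage: gpart show ada0 | boot_partition_index
```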
Migrating data with ZFS send + receive
zfs send <pool>/<dataset> | ssh target-server zfs recv <pool>/<dataset>
Tips
- Systems created on FreeBSD 14+ use tank/home instead of tank/usr/home
- Long operation? CTRL+T (FreeBSD's SIGINFO) shows how many GB have been transferred so far
- Use or create a snapshot as the source to send, so you'll know exactly what you sent, and you can later create incremental snapshots and send those too.
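The send/recv pipeline can be wrapped in a small helper that builds the command for review before running it. This is a sketch; the helper name and the example dataset/host names are illustrative, not part of this page:

```shell
#!/bin/sh
# Hypothetical helper: build (but do not run) the send/recv pipeline,
# so the snapshot, target host, and target dataset are explicit and
# the command can be reviewed before execution.
send_pipeline() {
    # $1 = source snapshot, $2 = target host, $3 = target dataset
    printf 'zfs send %s | ssh %s zfs recv %s\n' "$1" "$2" "$3"
}

# Usage:
#   send_pipeline arcology/usr/home/luser@migration another-server arcology/home/luser
# prints the pipeline, which you can then copy-paste and run.
```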
Get snapshots out of the way
When migrating a home directory to a new devserver, snapshots on the destination server can block the migration:
$ zfs send arcology/usr/home/luser@zfs-auto-snap_frequent-2024-09-18-21h30 | ssh another-server zfs recv -F arcology/home/luser
cannot receive new filesystem stream: destination has snapshots (eg. arcology/home/luser@zfs-auto-snap_hourly-2024-09-16-21h00)
must destroy them to overwrite it
To delete all snapshots for a dataset, list them, then call zfs destroy:
zfs list -H -t snapshot -o name | grep '^arcology/home/luser@' | xargs -n1 zfs destroy
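An unanchored grep on the dataset name would also match look-alike datasets (e.g. arcology/home/luser2), destroying their snapshots too. A sketch of the filter as a reusable helper; the helper name is illustrative:

```shell
#!/bin/sh
# Hypothetical helper: keep only snapshots belonging to the exact
# dataset given as $1. Expects `zfs list -H -t snapshot -o name`
# output on stdin. Anchoring with ^ and @ avoids matching datasets
# whose name merely starts with $1.
snapshots_of() {
    grep "^$1@"
}

# Usage:
#   zfs list -H -t snapshot -o name | snapshots_of arcology/home/luser | xargs -n1 zfs destroy
```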
Once all the snapshots have been destroyed, you need to prevent zfs-auto-snapshot from creating new ones:
- New server migration? Delete the zfs-auto-snapshot crontab:
rm /etc/cron.d/zfs
- Regular operation for only one dataset? Set the com.sun:auto-snapshot property to false with
zfs set com.sun:auto-snapshot=false arcology/home/luser
Finally, if you overwrite a dataset instead of sending a snapshot, the dataset can't be in use on the target server; force-unmount it first: zfs umount -f arcology/home/luser
No root access, what to do?
Any user can use zfs send and recv on their own datasets, but setting the property or unmounting may require root access. If you need assistance, open a task on DevCentral under the Servers tag.
Send new snapshots
Note the last snapshot you sent, so you can ask ZFS to send incremental snapshots:
# Initial ZFS send
$ zfs send arcology/usr/home/luser@zfs-auto-snap_frequent-2024-09-18-21h30 | ssh another-server zfs recv -F arcology/home/luser

# Follow-up ZFS send
$ zfs send -i arcology/usr/home/luser@zfs-auto-snap_frequent-2024-09-18-21h30 arcology/usr/home/luser@zfs-auto-snap_frequent-2024-09-18-21h50 | ssh another-server zfs recv arcology/home/luser
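The incremental form follows the same pattern as the initial send, with zfs send -i taking the last snapshot already on the target plus the new one. A sketch that builds the command for review; the helper name and arguments are illustrative:

```shell
#!/bin/sh
# Hypothetical helper: build the incremental send/recv pipeline from
# the last snapshot that already exists on the target and the new
# snapshot to transfer.
incremental_send_pipeline() {
    # $1 = last snapshot on target, $2 = new snapshot,
    # $3 = target host, $4 = target dataset
    printf 'zfs send -i %s %s | ssh %s zfs recv %s\n' "$1" "$2" "$3" "$4"
}

# Usage:
#   incremental_send_pipeline ds@old ds@new another-server arcology/home/luser
# prints the pipeline to run for the next incremental transfer.
```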
Useful links
- https://github.com/jimsalterjrs/sanoid for full replication