Accumulated notes about my server shenanigans
Physically replace the failed drive

zpool list -v
The replaced drive should show up under its device ID.
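To find the new drive's stable device name before touching the pool, listing the by-id links is handy. A sketch; the commands are echoed rather than executed so the snippet is safe to paste anywhere:

```shell
# Find the new drive's stable /dev/disk/by-id/ name, then check the pool layout.
# Echoed, not executed -- run the printed commands on the actual host.
echo "ls -l /dev/disk/by-id/"
echo "zpool list -v"
```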

https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_zfs_change_failed_dev

Changing a failed device

zpool replace -f <pool> <old-device> <new-device>
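With the placeholders filled in, the replace looks like this. The pool and device names are hypothetical, and the command is echoed rather than executed so the sketch is safe to run as-is:

```shell
# Hypothetical pool and device names -- substitute your own.
POOL=tank
OLD=/dev/disk/by-id/ata-FAILED_DISK
NEW=/dev/disk/by-id/ata-NEW_DISK
# -f forces the replace even though the new disk carries no ZFS label yet.
echo zpool replace -f "$POOL" "$OLD" "$NEW"
```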

Changing a failed bootable device

Depending on how Proxmox VE was installed, it uses either systemd-boot or GRUB through proxmox-boot-tool, or plain GRUB, as its bootloader (see Host Bootloader).

You can check by running: proxmox-boot-tool status

The first steps of copying the partition table,

sgdisk <healthy bootable device> -R <new device>

reissuing GUIDs

sgdisk -G <new device>

and replacing the ZFS partition are the same.

zpool replace -f <pool> <old zfs partition> <new zfs partition>
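The three steps above, strung together with hypothetical device names. Everything is echoed rather than executed, since these commands rewrite partition tables; note that with sgdisk -R the healthy source disk comes first and the blank target second:

```shell
# Hypothetical names -- adjust to your disks and pool.
HEALTHY=/dev/disk/by-id/ata-HEALTHY_DISK        # surviving bootable mirror member
NEW=/dev/disk/by-id/ata-NEW_DISK                # blank replacement disk
OLDPART=/dev/disk/by-id/ata-FAILED_DISK-part3   # failed disk's ZFS partition
POOL=rpool
# 1. Copy the partition table from the healthy disk onto the new one.
echo sgdisk "$HEALTHY" -R "$NEW"
# 2. Give the new disk's partitions fresh GUIDs.
echo sgdisk -G "$NEW"
# 3. Replace the ZFS partition (partition 3 on a default Proxmox VE layout).
echo zpool replace -f "$POOL" "$OLDPART" "${NEW}-part3"
```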
Note

Use the zpool status -v command to monitor how far the resilvering process of the new disk has progressed.
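One way to keep an eye on the resilver is to wrap the status command in watch. A sketch with a hypothetical pool name, echoed rather than executed:

```shell
POOL=rpool   # hypothetical pool name
# Re-run zpool status every 60 s until the resilver finishes.
echo "watch -n 60 zpool status -v $POOL"
```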

To make the system bootable from the new disk, different steps are needed which depend on the bootloader in use.

With proxmox-boot-tool:

proxmox-boot-tool format <new disk's ESP>
proxmox-boot-tool init <new disk's ESP> [grub]
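Filled in with a hypothetical disk name, and echoed rather than executed since formatting the wrong partition is unrecoverable:

```shell
# ESP is partition 2 on the new disk by default (hypothetical device name).
ESP=/dev/disk/by-id/ata-NEW_DISK-part2
echo proxmox-boot-tool format "$ESP"
# Append "grub" to the init command only if the system boots GRUB in UEFI mode;
# omit it for systemd-boot.
echo proxmox-boot-tool init "$ESP"
```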
Note

ESP stands for EFI System Partition, which is set up as partition #2 on disks made bootable by the Proxmox VE installer since version 5.4.

With plain GRUB:

grub-install <new disk>

Note

Plain GRUB is only used on systems installed with Proxmox VE 6.3 or earlier, which have not been manually migrated to use proxmox-boot-tool yet.
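For the plain-GRUB case, the install targets the whole disk, not a partition. A sketch with a hypothetical device name, echoed rather than executed:

```shell
NEW=/dev/disk/by-id/ata-NEW_DISK   # hypothetical replacement disk
# GRUB is installed to the disk itself, not to one of its partitions.
echo grub-install "$NEW"
```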