I am having trouble with my Proxmox. I started about four years ago with Proxmox and installed it on a 250GB NVMe drive on my Intel NUC. As my disk space needs grew, I shut down my Proxmox server and cloned the NVMe to a new 1TB NVMe, because I didn't want to lose all of my settings, VMs, and so on.
It's now been around a year and I am still not able to use the full capacity of the 1TB NVMe. I searched a lot, didn't manage it, lost interest, searched again, tried different commands, and so on.
I hope that somebody here can help me with my problem. I'm really stuck!
This is a screenshot of my setup: sda is an external 500G HDD for pvb (on the same machine), sdb is a dangling NVMe attached via a USB-NVMe adapter (which I mounted as /media1tb for extra storage and backup), and nvme0n1 is the main storage for this server, whose full capacity I want to use on the root partition /.
Hopefully somebody can help me grow the root partition to its maximum possible size.
btw: I'm running pve 8.2.9
I resized a disk on my test VM and tested all of these commands.
Please note: Although I have tested all of these instructions, any changes you make are at your own risk. Make sure you have a backup before you begin. It might be worth testing the same thing on a virtual machine yourself.
In a nutshell, you need to resize your disk step by step (layer by layer):
I will show you the commands using the partition names I have, and you can replace them with your own.
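Since the question is about a Proxmox host: on a default PVE install (ext4 root on LVM, volume group "pve"), the same steps would most likely map to the names below. These names are an assumption on my part, so confirm them with lsblk on your own machine before running anything:
fdisk /dev/nvme0n1                      # recreate partition 3 with a larger end sector
pvresize /dev/nvme0n1p3                 # grow the LVM physical volume
lvextend -l +100%FREE /dev/pve/root     # grow the root logical volume
resize2fs /dev/mapper/pve-root          # grow the ext4 filesystem
The walkthrough below uses the names from my test VM.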
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 63.7M 1 loop /snap/core20/2434
loop1 7:1 0 61.9M 1 loop /snap/core20/1405
loop2 7:2 0 87M 1 loop /snap/lxd/29351
loop3 7:3 0 89.4M 1 loop /snap/lxd/31333
loop4 7:4 0 38.8M 1 loop /snap/snapd/21759
loop5 7:5 0 44.3M 1 loop /snap/snapd/23258
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1.8G 0 part /boot
└─sda3 8:3 0 18.2G 0 part
└─ubuntu--vg-ubuntu--lv 252:0 0 10G 0 lvm /
sr0 11:0 1 4M 0 rom
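Before changing anything, it can be worth dumping the current partition table to a file so it could be restored with sfdisk if something goes wrong (an optional precaution, not strictly required):
sfdisk -d /dev/sda > /root/sda-partition-table.backup    # restore later with: sfdisk /dev/sda < /root/sda-partition-table.backup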
Here, we need to extend the /dev/sda3 partition. To do this, we will remove it and create a new one in its place with an extended size (check this article for more info):
fdisk /dev/sda
In this step, make sure the starting sector number for partition /dev/sda3 hasn’t changed (3719168 in my case). It should automatically remain the same, but if not, update it manually.
Command (m for help): p
Disk /dev/sda: 22 GiB, 23622320128 bytes, 46137344 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 28349215-8C4C-4206-8906-6E808EE3E6CC
Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 3719167 3715072 1.8G Linux filesystem
/dev/sda3 3719168 41940991 38221824 18.2G Linux filesystem
Command (m for help): d
Partition number (1-3, default 3):
Partition 3 has been deleted.
Command (m for help): n
Partition number (3-128, default 3):
First sector (3719168-46137310, default 3719168):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (3719168-46137310, default 46135295):
Created a new partition 3 of type 'Linux filesystem' and of size 20.2 GiB.
Partition #3 contains a LVM2_member signature.
Do you want to remove the signature? [Y]es/[N]o: N
Command (m for help): p
Disk /dev/sda: 22 GiB, 23622320128 bytes, 46137344 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 28349215-8C4C-4206-8906-6E808EE3E6CC
Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 3719167 3715072 1.8G Linux filesystem
/dev/sda3 3719168 46135295 42416128 20.2G Linux filesystem
Command (m for help): w
The partition table has been altered.
Syncing disks.
Ensure that the new partition type matches the original one (e.g., Linux filesystem in my case).
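As a non-interactive alternative to the fdisk session above, the growpart utility (from the cloud-guest-utils package on Debian/Ubuntu-based systems, assuming it is available) performs the same delete-and-recreate in a single command:
apt install cloud-guest-utils    # provides growpart
growpart /dev/sda 3              # grow partition 3 to fill the remaining disk space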
Next, we need to extend the LVM Physical Volume (PV). Without any options, the size will automatically be extended to use all the available space:
pvresize /dev/sda3
Physical volume "/dev/sda3" changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized
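You can confirm that the PV now spans the enlarged partition and that the volume group has free extents available:
pvs /dev/sda3    # PSize should match the new partition size
vgs ubuntu-vg    # VFree shows the space available for lvextend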
After extending the PV, we need to extend the Logical Volume (LV). First, let's check the current state:
lvdisplay
--- Logical volume ---
LV Path /dev/ubuntu-vg/ubuntu-lv
LV Name ubuntu-lv
VG Name ubuntu-vg
LV UUID rzjkxf-nR8s-KnEG-2qzM-clzm-Zr1v-dXqbbj
LV Write Access read/write
LV Creation host, time ubuntu-server, 2022-07-19 23:06:49 +0000
LV Status available
# open 1
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
You'll see that the LV size is still 10.00 GiB. Extend it to use all the available space:
lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
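As a side note, lvextend can also grow the filesystem in the same step via its -r (--resizefs) option, which would make the separate resize2fs step further down unnecessary; the two-step approach shown here is simply more explicit:
lvextend -r -l +100%FREE /dev/ubuntu-vg/ubuntu-lv    # extend the LV and resize the filesystem in one go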
Verify the changes:
lvdisplay
--- Logical volume ---
LV Path /dev/ubuntu-vg/ubuntu-lv
LV Name ubuntu-lv
VG Name ubuntu-vg
LV UUID rzjkxf-nR8s-KnEG-2qzM-clzm-Zr1v-dXqbbj
LV Write Access read/write
LV Creation host, time ubuntu-server, 2022-07-19 23:06:49 +0000
LV Status available
# open 1
LV Size 20.22 GiB
Current LE 5177
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
Now that the LVM is updated, we need to update the filesystem:
# df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 392M 1.1M 391M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 9.8G 6.0G 3.3G 65% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sda2 1.8G 182M 1.5G 12% /boot
tmpfs 392M 12K 392M 1% /run/user/1001
When you run resize2fs without any options, the filesystem will automatically be extended to use all the available space:
resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
# df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 392M 1.1M 391M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 20G 6.0G 13G 32% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sda2 1.8G 182M 1.5G 12% /boot
tmpfs 392M 12K 392M 1% /run/user/1001
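One caveat: resize2fs only handles ext2/3/4 filesystems. That is the Proxmox VE default for the root LV, so it should apply here, but if your root filesystem happened to be XFS you would grow it with xfs_growfs instead:
xfs_growfs /    # grows a mounted XFS filesystem to fill its underlying device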
That's it!