LVM (Logical Volume Manager) is one of those tools that every Linux sysadmin should know well. It sits between your physical disks and your filesystems, giving you the flexibility to resize volumes on the fly, span storage across multiple disks, take snapshots before risky changes, and migrate data between drives. All of this without downtime in most cases.
This guide covers everything from initial setup to advanced operations like thin provisioning, snapshots, and live resizing. I’ve structured it as a practical reference that you can follow end-to-end or jump straight to the section you need.
All commands require root privileges. Use sudo -i or prefix individual commands with sudo.
How LVM works
LVM introduces three abstraction layers between your disks and your filesystems:
Physical Disks      /dev/sdb     /dev/sdc     /dev/sdd
                       │            │            │
                       ▼            ▼            ▼
Physical Volumes    PV (sdb)     PV (sdc)     PV (sdd)
                       │            │            │
                       └─────┬──────┘            │
                             ▼                   │
Volume Groups            VG: vgdata ◄────────────┘
                      (pooled storage)
                        │            │
                        ▼            ▼
Logical Volumes     LV: lvapps   LV: lvbulk
                      (ext4)       (xfs)
                        │            │
                        ▼            ▼
Mount Points        /srv/apps    /srv/bulk
- Physical Volume (PV): A disk or partition initialized for LVM. This is the raw storage that feeds into the pool.
- Volume Group (VG): A storage pool made from one or more PVs. Think of it as a single large virtual disk.
- Logical Volume (LV): A “virtual partition” carved from a VG. This is what you format with a filesystem and mount.
The key advantage: LVs aren’t tied to physical disk boundaries. A 200 GB logical volume can span two 120 GB disks. You can grow it later by adding another disk to the VG, with no reformatting and no data migration.
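You can see this mapping for yourself on a live system: lvs can list the physical devices backing each LV, and pvdisplay shows the reverse view. For example, once the stack from this guide exists:
# Which physical devices back each LV, and how (linear, striped, ...)
sudo lvs -a -o +devices,segtype
# The reverse view: how a PV's extents are allocated
sudo pvdisplay -m /dev/sdb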
Prerequisites
Identify your disks before doing anything destructive:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
sudo fdisk -l
Double-check that you’re targeting the right devices. Initializing a disk as a PV will destroy any existing data on it.
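One extra non-destructive check before initializing anything: wipefs in no-act mode lists any filesystem, RAID, or partition-table signatures already present on a device without modifying it.
# -n (no-act) only lists signatures; nothing is erased
sudo wipefs -n /dev/sdb
# No output means no known signatures were found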
Install LVM tools if they’re not already present:
# Debian / Ubuntu
sudo apt update && sudo apt install lvm2 xfsprogs
# RHEL / CentOS / Fedora
sudo dnf install lvm2 xfsprogs
Back up important data. This applies especially before any resize or removal operation.
Creating the storage stack
Initialize physical volumes
sudo pvcreate /dev/sdb
sudo pvcreate /dev/sdc
Verify:
sudo pvs
sudo pvdisplay /dev/sdb
You can use whole disks (/dev/sdb) or partitions (/dev/sdb1). Whole disks are simpler and avoid issues with partition tables, but partitions give you the option to flag them as LVM type (8e on MBR, or the LVM flag/type 8e00 on GPT) for clarity.
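If you do go the partition route, here’s a minimal sketch using parted (it assumes /dev/sdb is blank; adjust the device name for your system):
# Create a GPT label with one partition spanning the disk, flagged for LVM
sudo parted -s /dev/sdb mklabel gpt
sudo parted -s /dev/sdb mkpart primary 1MiB 100%
sudo parted -s /dev/sdb set 1 lvm on
sudo pvcreate /dev/sdb1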
Create a volume group
sudo vgcreate vgdata /dev/sdb /dev/sdc
Verify:
sudo vgs
sudo vgdisplay vgdata
The VG now pools all space from both PVs. If /dev/sdb is 100 GB and /dev/sdc is 200 GB, vgdata has ~300 GB available.
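Before carving LVs, it’s worth checking exactly how much allocatable space the pool reports (a little is always consumed by LVM metadata):
# VG size and free space in gigabytes
sudo vgs --units g -o vg_name,vg_size,vg_free vgdata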
Create logical volumes
Fixed-size LV:
sudo lvcreate -n lvapps -L 50G vgdata
Use all remaining free space:
sudo lvcreate -n lvbulk -l 100%FREE vgdata
Verify:
sudo lvs
sudo lvdisplay vgdata/lvapps
Thin provisioning (overcommit storage)
Thin provisioning lets you allocate more space to LVs than physically exists in the VG. The actual disk space is consumed only as data is written. This is useful when you want flexible allocation without committing real capacity up front, such as for VM storage or development environments.
# Create a thin pool (the actual storage backing)
sudo lvcreate -L 200G -n thinpool vgdata
sudo lvconvert --type thin-pool vgdata/thinpool
# Create a thin LV (can be larger than the pool)
sudo lvcreate -n lvthin -V 500G --thinpool thinpool vgdata
Monitor thin pool usage carefully. If a thin pool fills up completely, writes to every thin LV it backs will stall and eventually start failing:
sudo lvs -a -o +seg_monitor,segtype,chunk_size,data_percent
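You can also have lvm2 grow a thin pool automatically when it crosses a fill threshold (dmeventd must be monitoring the pool, which is the default on most distributions). The relevant settings live in /etc/lvm/lvm.conf; the values below are illustrative:
activation {
    # Auto-extend the thin pool once it reaches 80% full...
    thin_pool_autoextend_threshold = 80
    # ...growing it by 20% of its current size each time
    thin_pool_autoextend_percent = 20
}
Auto-extension only helps while the VG still has free space to give, so it complements monitoring rather than replacing it.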
Filesystems, mounting, and persistence
Create filesystems
# ext4: versatile, supports online grow and offline shrink
sudo mkfs.ext4 /dev/vgdata/lvapps
# XFS: high performance, supports online grow only (no shrink)
sudo mkfs.xfs /dev/vgdata/lvbulk
Which filesystem to choose?
- Use ext4 if you might need to shrink the volume later, or if you need broad compatibility.
- Use XFS for workloads with large files, high throughput, or many parallel I/O operations. XFS is the default on RHEL-family distributions for good reason.
Mount
sudo mkdir -p /srv/apps /srv/bulk
sudo mount /dev/vgdata/lvapps /srv/apps
sudo mount /dev/vgdata/lvbulk /srv/bulk
Verify:
df -hT | grep -E 'apps|bulk'
Persist mounts in /etc/fstab
Use UUIDs instead of raw device paths. LVM’s /dev/vgdata/* names are stable too, but UUIDs work the same way for every filesystem you manage and survive reboots even if disk ordering changes.
blkid /dev/vgdata/lvapps /dev/vgdata/lvbulk
Add entries to /etc/fstab:
UUID=<uuid-for-lvapps> /srv/apps ext4 defaults,noatime 0 2
UUID=<uuid-for-lvbulk> /srv/bulk xfs defaults,noatime 0 0
Test before rebooting:
sudo mount -a
If mount -a succeeds without errors, you’re safe. A typo in fstab can prevent your system from booting.
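On distributions with a recent util-linux, findmnt can also lint the whole file, and it catches issues a plain mount -a might miss:
# Verify fstab syntax, mount targets, and filesystem types
sudo findmnt --verify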
Resizing volumes
This is where LVM truly shines. Need more space? Add a disk and extend. Filesystem full? Grow it online.
Growing a logical volume and filesystem
The -r flag on lvextend is your friend. It resizes both the LV and the filesystem in a single step:
# Add 20 GiB
sudo lvextend -r -L +20G /dev/vgdata/lvapps
# Or use a percentage of remaining free space
sudo lvextend -r -l +50%FREE /dev/vgdata/lvapps
If you prefer doing it in two steps (useful if you want to verify the LV resize before touching the filesystem):
# Step 1: Extend the LV
sudo lvextend -L +20G /dev/vgdata/lvapps
# Step 2: Grow the filesystem
# ext4 (works while mounted)
sudo resize2fs /dev/vgdata/lvapps
# XFS (must be mounted; uses mount point, not device)
sudo xfs_growfs /srv/bulk
Both ext4 and XFS support online growing, with no downtime required.
Adding a new disk to an existing VG
When you run out of space in the volume group:
sudo pvcreate /dev/sdd
sudo vgextend vgdata /dev/sdd
Now the free space from /dev/sdd is available to any LV in vgdata. Extend your LVs as needed.
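For example, to hand all of the newly added space to lvbulk and grow its XFS filesystem in one step:
sudo lvextend -r -l +100%FREE /dev/vgdata/lvbulk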
Shrinking a logical volume (ext4 only)
XFS does not support shrinking. If you chose XFS and need less space, you’ll need to back up, recreate, and restore.
Shrinking is a destructive operation if done wrong. Always back up first, and always unmount before shrinking.
sudo umount /srv/apps
sudo e2fsck -f /dev/vgdata/lvapps
sudo resize2fs /dev/vgdata/lvapps 40G
sudo lvreduce -L 40G /dev/vgdata/lvapps
sudo mount /srv/apps
The order matters: shrink the filesystem first, then shrink the LV. If you shrink the LV first, you’ll truncate the filesystem and lose data.
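Recent lvm2 versions can also drive the sequence for you: lvreduce -r calls out to fsadm to shrink the filesystem first and refuses to proceed if that fails. A sketch, assuming the same 40 GiB target (I’d still run the manual steps above if the data matters):
sudo umount /srv/apps
# -r shrinks the filesystem (via fsadm) before reducing the LV
sudo lvreduce -r -L 40G /dev/vgdata/lvapps
sudo mount /srv/apps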
Snapshots
LVM snapshots create a point-in-time copy of a logical volume using copy-on-write. They’re useful for backups, testing upgrades, or any operation you might want to roll back.
# Create a 10 GiB snapshot of lvapps
sudo lvcreate -s -n lvapps_snap -L 10G /dev/vgdata/lvapps
The snapshot size (10G here) is the space reserved for storing changed blocks. If more than 10 GiB of data changes on the original LV while the snapshot exists, the snapshot will become invalid. Size it based on your expected change rate.
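While the snapshot exists, keep an eye on how full it is. In lvs output, the Data% column for a snapshot shows how much of its reserved space the changed blocks have consumed:
# Data% climbing toward 100 means the snapshot is at risk of being dropped
sudo lvs vgdata
# Or watch it during a long-running operation
sudo watch -n 60 lvs vgdata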
# Mount the snapshot read-only for backup
sudo mkdir -p /mnt/snap
sudo mount -o ro /dev/vgdata/lvapps_snap /mnt/snap
# Do your backup...
# Clean up
sudo umount /mnt/snap
sudo lvremove /dev/vgdata/lvapps_snap
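Snapshots can also be merged back into their origin, which is the rollback case mentioned above. The origin reverts to the snapshot’s point-in-time state, and the snapshot is consumed in the process; if the origin is in use, the merge is deferred until its next activation (typically the next reboot):
# Roll lvapps back to the state captured by the snapshot
sudo umount /srv/apps
sudo lvconvert --merge /dev/vgdata/lvapps_snap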
Shrinking from a Live CD
If you need to shrink a root volume (or any LV that’s in use and can’t be unmounted), boot from a Live CD/USB.
- Boot the Live environment (Ubuntu, Fedora, etc., and select “Try without installing”).
- Activate LVM:
sudo vgscan
sudo vgchange -ay
sudo lvs
- Shrink the filesystem, then the LV:
sudo e2fsck -f /dev/vgdata/lvapps
sudo resize2fs /dev/vgdata/lvapps 40G
sudo lvreduce -L 40G /dev/vgdata/lvapps
- Verify and reboot:
sudo lvdisplay /dev/vgdata/lvapps
sudo resize2fs /dev/vgdata/lvapps   # auto-adjusts to LV size
- Boot back into your normal system and confirm everything mounts correctly.
Removing volumes cleanly
Order matters. Work from the top of the stack down:
# 1. Unmount filesystems
sudo umount /srv/apps /srv/bulk
# 2. Remove logical volumes
sudo lvremove /dev/vgdata/lvapps
sudo lvremove /dev/vgdata/lvbulk
# 3. Remove the volume group
sudo vgremove vgdata
# 4. Wipe PV metadata
sudo pvremove /dev/sdb /dev/sdc
Don’t forget to remove the corresponding entries from /etc/fstab, or your system will complain (or hang) on the next boot.
Replacing a failed disk
If a PV is failing, you can migrate its data to another PV within the same VG:
# Add the replacement disk
sudo pvcreate /dev/sdd
sudo vgextend vgdata /dev/sdd
# Migrate all data off the failing disk
sudo pvmove /dev/sdb
# Remove the old disk from the VG
sudo vgreduce vgdata /dev/sdb
sudo pvremove /dev/sdb
pvmove copies extents from one PV to another while the LVs remain online and accessible. It’s slow for large volumes, but it works without downtime.
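Two pvmove options worth knowing: a progress-reporting interval, and restricting the move to a single LV (device names here follow the example above):
# Report progress every 10 seconds
sudo pvmove -i 10 /dev/sdb
# Move only lvapps' extents, and only onto /dev/sdd
sudo pvmove -n lvapps /dev/sdb /dev/sdd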
Quick reference
# View layout
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
sudo pvs; sudo vgs; sudo lvs
# Detailed info
sudo pvdisplay /dev/sdb
sudo vgdisplay vgdata
sudo lvdisplay -m /dev/vgdata/lvapps
# Activate/scan
sudo vgchange -ay
sudo vgscan && sudo lvscan
# Health and space
sudo vgs --segments
sudo lvs -a -o +seg_monitor,segtype,chunk_size
Closing notes
LVM adds a small layer of complexity, but the flexibility it provides is well worth it, especially on servers where storage needs change over time. Being able to resize volumes, add disks, and take snapshots without downtime is something you’ll appreciate the first time a disk fills up.
For production systems, consider layering RAID (mdadm or hardware RAID) under LVM for redundancy, and LUKS on top for encryption. Monitor thin pool usage if you’re using thin provisioning. And always, always keep backups. LVM snapshots are not a substitute for proper off-site backups.