Recovering a full BTRFS volume
I bought an SSD for my database and decided to use BTRFS on it, since I had read that it is better for SSDs. But after a few weeks my database application crashed because the device is seemingly full.
The SSD is 256 GB (the filesystem is now read-only), with ~118 GB of actual files on it (btrfs fi du agrees). But btrfs fi usage says:
Overall:
    Device size:                 238.47GiB
    Device allocated:            238.47GiB
    Device unallocated:            1.02MiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                        235.00GiB
    Free (estimated):             28.33MiB    (min: 28.33MiB)
    Free (statfs, df):            28.33MiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB    (used: 512.00MiB)
    Multiple profiles:                  no

Data,single: Size:226.46GiB, Used:226.43GiB (99.99%)
   /dev/sdd     226.46GiB

Metadata,DUP: Size:6.00GiB, Used:4.28GiB (71.40%)
   /dev/sdd      12.00GiB

System,DUP: Size:8.00MiB, Used:48.00KiB (0.59%)
   /dev/sdd      16.00MiB

Unallocated:
   /dev/sdd       1.02MiB
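If I add these up myself, the allocation at least looks self-consistent: 226.46 GiB of data chunks + 2 x 6.00 GiB of DUP metadata + 2 x 8 MiB of DUP system chunks comes to roughly 238.47 GiB, i.e. the whole device, which is why only ~1 MiB is left unallocated even though the metadata chunks themselves are only ~71% used.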
Is there any way to recover this filesystem, or should I just copy the files to another disk and reformat the SSD?
EDIT: I tried to remount the filesystem and truncate a file on it (one that I have backed up), but even when the logical file size goes to 0 B, it goes back to the full 10 GiB when I remount the volume again.
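Roughly the steps I used, in case it matters (the mount point and file name here are just placeholders, not my real paths):

    # mount point and file name are examples only
    mount -o remount,rw /mnt/ssd
    truncate -s 0 /mnt/ssd/old_dump.dat   # a file I have a copy of elsewhere
    sync
    # the file shows 0 B at this point, but after remounting the volume
    # it is back at the full 10 GiB
    mount -o remount,ro /mnt/ssd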
#BTRFS #filesystems
Jan 31 · 3 months ago
6 Comments ↓
Every time I start thinking that this is the 21st century and try a 'modern' filesystem, I inevitably lose the volume.
Last year it was ZFS on a FreeBSD system. I usually assume it's my fault because I don't really know what I'm doing, but maybe everything sucks if you ever go off the beaten path.
🤖 Namno [OP] · Feb 01 at 01:22:
And I had people on reddit recommending ZFS to me since it's supposed to recover itself or something
I think ZFS is supposed to be solid. I made a snapshot, later tried to use it, and was left with a screwed-up filesystem that I could not fix. Probably something I did. Life is too short for me to bother with new filesystems.
🐦 JustASillyBird · Feb 07 at 10:55:
I used btrfs for years (RAID 1) and then ZFS for years (RAID-Z1 and Z2), and never had a problem I couldn't recover from. These advanced filesystems are really essential these days, because drives have just gotten too big to handle with conventional RAID. A 1-in-10^14 error rate means you'll probably have at least one error come up in your 20 TB drive.
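(Back-of-the-envelope: 20 TB is about 1.6 x 10^14 bits, so at an unrecoverable read error rate of 1 in 10^14 bits you would expect one or two errors just from reading the drive end to end once.)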
🐦 JustASillyBird · Feb 07 at 10:56:
But that said, it doesn't matter how fancy your filesystem is with checksumming and redundancy: all of that is still no substitute for a proper backup.
I don't know why but my spinny drives last about two years or so of very low usage. I have a large box of crappy drives from this century waiting to be drilled. I have lost only one SSD so far.