Comment by ๐Ÿฆ JustASillyBird

Re: "Recovering full BTRFS volume"

In: s/Linux

I used btrfs for years (RAID 1) and then ZFS for years (RAID-Z1 and Z2), and never had a problem I couldn't recover from. These advanced filesystems are really essential these days, because drives have just gotten too big to handle with conventional RAID. A 1-in-10^14 bit error rate means you'll probably hit at least one error reading a full 20 TB drive.
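A quick back-of-envelope check of that claim (a sketch, assuming the commonly quoted URE spec of 1 unrecoverable error per 10^14 bits read; many drives quote 1 in 10^15, which changes the numbers):

```python
import math

URE_RATE = 1e-14   # assumed errors per bit read (vendor specs vary)
DRIVE_TB = 20      # drive capacity in decimal terabytes

bits = DRIVE_TB * 1e12 * 8                        # 20 TB -> bits
expected_errors = bits * URE_RATE                 # expected UREs in one full read
p_at_least_one = 1 - math.exp(-expected_errors)   # Poisson approximation

print(f"expected errors per full read: {expected_errors:.2f}")   # 1.60
print(f"P(at least one error):         {p_at_least_one:.2f}")    # 0.80
```

So under that spec, one full read of a 20 TB drive expects ~1.6 errors, i.e. roughly an 80% chance of at least one.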

๐Ÿฆ JustASillyBird

Feb 07 ยท 3 months ago

2 Later Comments โ†“

๐Ÿฆ JustASillyBird ยท Feb 07 at 10:56:

But that said, no matter how fancy your filesystem's checksumming and redundancy are, none of that is a substitute for a proper backup.

๐Ÿš€ stack ยท Feb 08 at 17:29:

I don't know why but my spinny drives last about two years or so of very low usage. I have a large box of crappy drives from this century waiting to be drilled. I have lost only one SSD so far.

Original Post

๐ŸŒ’ s/Linux

๐Ÿค– Namno:

Recovering full BTRFS volume — I bought an SSD for my database and decided to use BTRFS on it, since I had read that it is better for SSDs. But after a few weeks my database application crashed because the device is seemingly full. The SSD is 256 GB (now read-only), with ~118 GB of actual files (btrfs fi du agrees). But fi usage says:

    Overall:
        Device size:         238.47GiB
        Device allocated:    238.47GiB
        Device unallocated:    1.02MiB
        Device missing:        0.00B
        Device slack:...

๐Ÿ’ฌ 6 comments ยท Jan 31 ยท 3 months ago ยท #BTRFS #filesystems