[btrfs] Unable to mount newly cloned filesystem #163
Running into the same issue here, with a simple btrfs partition which was backed up by Clonezilla along with its disk (which contains an ESP partition, an ext4 partition and the btrfs partition, without any RAID or LVM). Clonezilla does not print any error message on the screen during the backup and restore process. ZSTD compression is also enabled. The broken filesystem after Clonezilla image restoration keeps generating the following messages in almost every mount attempt and in the kernel log (dmesg), even with all recovery options:
Tried
Was using
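For reference, a sketch of the kind of read-only rescue mounts usually meant by "all recovery options" here; the device path and mount point are placeholders, and the rescue= syntax needs a reasonably recent kernel:
mount -o ro,rescue=usebackuproot /dev/sdX /mnt
mount -o ro,rescue=nologreplay /dev/sdX /mnt
mount -o ro,rescue=all /dev/sdX /mnt   # on newer kernels, also ignores bad roots and data csums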
Same here. Clone and recover run with no errors, but the resulting filesystem is broken and won't mount. Also tested recovering on top of the same partition, and it worked fine. It does not look like partclone is breaking anything; it is just not copying something it should.
Same issue here with clonezilla-live-3.1.0-22-amd64, using LUKS as well.
After running partclone.btrfs from a raw md array to md -> bcache -> dm-crypt via the latest Arch Linux ISO live boot, I am unable to mount the newly cloned filesystem. Partclone finished without any errors.
I was running the command
partclone.btrfs -b -s /dev/md126 -o /dev/mapper/cryptroot
The error shown in a remote KVM window is:
Kernel version is 5.13.13, partclone.btrfs version is v0.3.17.
The original filesystem works without issue; scrubbing the fs before the clone also yielded no errors. The filesystem is ZSTD-compressed, if this matters.
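For reference, a sketch of such a pre-clone scrub; the mount point here is only an example:
mount /dev/md126 /mnt
btrfs scrub start -B /mnt   # -B runs the scrub in the foreground and reports the result
btrfs scrub status /mnt
umount /mnt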
I am sorry if this is missing information; unfortunately, this is running via web KVM on a remote server, so I might not be able to provide everything. For now, I had to cancel the maintenance and boot the original filesystem/device again, which still works without any issues.
Edit: This might be related to #158, but I am not running quotas at all.
Edit: I was not able to run
btrfsck --repair --force
on the block device. It simply refused with the same error message as btrfstune above.