BBenja Posted July 5, 2023
Hi all, I just updated to 6.12.2. After the reboot, my cache drives (formatted btrfs) show 'Unmountable: Unsupported or no file system'. (There is also a warning '... references non existent pool cache'.) Please find my diagnostics file attached. What can I do to fix the issue and prevent data loss? Thanks in advance for all the help and great work!
mule-diagnostics-20230705-1125.zip
JorgeB Posted July 5, 2023 (Solution)
Try this:
1. Stop the array and unassign both pool devices
2. Start the array (click the checkbox "I want to do this")
3. Stop the array
4. Reassign both pool devices
5. Start the array, post new diags.
BBenja Posted July 5, 2023 (Author)
42 minutes ago, JorgeB said: "Try this:"
Thanks for the fast reply. Done. Please find the new diags attached.
mule-diagnostics-20230705-1227.zip
JorgeB Posted July 5, 2023
Looks like it worked, everything looks OK.
BBenja Posted July 5, 2023 (Author)
23 minutes ago, JorgeB said: "Looks like it worked, everything looks OK."
Thanks so much!
nardiGray Posted July 5, 2023
I have this exact same issue; unfortunately the solution didn't seem to work for me. I have attached my diagnostics in case anyone happens to be able to help. I'll make a new thread if I don't hear back, but thought I would post here first to keep all the info in one thread.
tower-diagnostics-20230705-2108.zip
JorgeB Posted July 5, 2023
37 minutes ago, nardiGray said: "unfortunately the solution didn't seem to work for me."
It wouldn't, because while it may have the same end result, it's not the same problem: the log tree is damaged. If that is the only issue, this may help:
btrfs rescue zero-log /dev/sdj1
Then re-start the array.
nardiGray Posted July 5, 2023
50 minutes ago, JorgeB said: "It wouldn't, because while it may have the same end result, it's not the same problem: the log tree is damaged."
Thank you JorgeB, you were entirely correct and my cache is now normal. Apologies if my initial post confused anyone.
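For anyone landing here from a search, the fix above can be sketched as a small script. This is a sketch only: `/dev/sdj1` is the device from this thread and will differ on your system (check Tools > System Devices or `lsblk` first), and the `run` wrapper is a made-up dry-run guard that only prints each command unless `RUN=1` is exported.

```shell
#!/bin/sh
# Sketch of the log-tree repair, assuming the pool device is /dev/sdj1.
# The real device name differs per system -- check `lsblk` first.
DEV=/dev/sdj1

# Dry-run guard: prints the command instead of executing it unless RUN=1.
run() {
  if [ "${RUN:-0}" = "1" ]; then
    "$@"
  else
    echo "would run: $*"
  fi
}

# Clear the btrfs log tree (only the last few seconds of writes before the
# crash are lost), then restart the array from the web GUI.
run btrfs rescue zero-log "$DEV"
```

`zero-log` is safe in the sense that it discards only the unreplayed log tree, but as the later posts show, it only helps when the log tree is the sole damage.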
DeKa Posted July 8, 2023 (edited)
Hello @JorgeB, I think I'm experiencing something similar to this... A few weeks ago I updated Unraid to version 6.12.0, and after rebooting the server I get the same error message as in the title of this thread on the disks formatted as BTRFS. Could you help me resolve this? Thank you.
Edit: I have already tried both solutions mentioned in this thread, but without any positive results.
diagnostics-20230708-1621.zip
JorgeB Posted July 9, 2023
20 hours ago, DeKa said: "I think I'm experiencing something similar to this..."
Updating to v6.12.2 should fix your issue.
DeKa Posted July 9, 2023
Works, thanks JorgeB! Thanks all!
Server_home Posted November 30, 2023
Hi guys, I have lost all my VMs and the Docker service failed to start. I saw that my cache drive is "Unmountable: Unsupported or no file system". I can access all my disks. Can someone help me repair everything?
tower-diagnostics-20231130-2252.zip
JorgeB Posted December 1, 2023
11 hours ago, Server_home said: "Can someone help me repair everything?"
If the log tree is the only issue, this may help:
btrfs rescue zero-log /dev/nvme0n1p1
Then restart the array.
Server_home Posted December 1, 2023
20 minutes ago, JorgeB said: "If the log tree is the only issue, this may help: btrfs rescue zero-log /dev/nvme0n1p1"
JorgeB, I get this:
root@Tower:~# btrfs rescue zero-log /dev/nvme0n1p1
Clearing log on /dev/nvme0n1p1, previous log_root 3102204461056, level 0
ERROR: failed to write super block for devid 1: flush error: No data available
ERROR: failed to write dev supers: No data available
WARNING: fsync on device 1 failed: No data available
JorgeB Posted December 1, 2023
That suggests there are other issues. You can try the recovery options here:
https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=543490
Server_home Posted December 1, 2023
I think the problem is that the cache drive is "Unmountable: Unsupported or no file system". Can I format it? All my Docker data and VMs are saved and accessible, as you can see in the screenshot. By the way, how can I add my VMs again to the VM tab so I can access them?
JorgeB Posted December 1, 2023
36 minutes ago, Server_home said: "Can I format it?"
You can, but you will lose all data there, including all your appdata and VMs, if they were using that pool.
Server_home Posted December 1, 2023
But the cache drive is "Unmountable: Unsupported or no file system", so is there a way to recover it? I have the VM ISO files stored on other available disks; can I recover a VM from an ISO file?
JorgeB Posted December 1, 2023
14 minutes ago, Server_home said: "But the cache drive is 'Unmountable: Unsupported or no file system', so is there a way to recover it?"
1 hour ago, JorgeB said: "You can try the recovery options here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=543490"
Server_home Posted December 1, 2023
When I do this step:
mount -o usebackuproot,ro /dev/sdX1 /temp
I get this message:
root@Tower:/temp# mount -o usebackuproot,ro /dev/sdX1 /temp
mount: /temp: special device /dev/sdX1 does not exist.
       dmesg(1) may have more information after failed mount system call.
JorgeB Posted December 1, 2023
You need to replace sdX with the correct device; since it's an NVMe device it should be:
mount -o usebackuproot,ro /dev/nvme0n1p1 /temp
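As an aside for anyone else tripped up by the naming: NVMe devices insert a "p" before the partition number, while SATA/SAS disks do not. A tiny illustrative helper (the `part1` name is made up for this sketch) captures the pattern:

```shell
#!/bin/sh
# Build the first-partition path from a base device path.
# NVMe: /dev/nvme0n1 -> /dev/nvme0n1p1 (partition number prefixed with "p")
# SATA: /dev/sdj     -> /dev/sdj1
part1() {
  case "$1" in
    *nvme*) echo "${1}p1" ;;
    *)      echo "${1}1"  ;;
  esac
}

part1 /dev/nvme0n1   # prints /dev/nvme0n1p1
part1 /dev/sdj       # prints /dev/sdj1
```

In practice, `lsblk` on the server shows the exact device and partition names, so there is no need to guess.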
Server_home Posted December 1, 2023 (edited)
I used this one:
root@Tower:/temp# mount -o rescue=all,ro /dev/nvme0n1p1 /temp
Now in /temp I have 3 folders: appdata/ domains/ system/
Do I save them, format the NVMe, and then copy the files back to the NVMe?
JorgeB Posted December 1, 2023
Yes, copy everything to the array or somewhere else, confirm the data is good, and then re-format and restore the data.
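The copy-out-and-restore flow described above can be sketched as a script. Everything named here is an assumption for illustration: the pool device (`/dev/nvme0n1p1`), scratch mount point (`/temp`), and backup target (`/mnt/disk1/cache_backup`) are from this thread or invented, and the `run` wrapper is a dry-run guard that only prints each command unless `RUN=1` is exported.

```shell
#!/bin/sh
# Sketch of the recover-then-restore flow. Assumed names:
# pool device /dev/nvme0n1p1, scratch mount /temp,
# backup target /mnt/disk1/cache_backup (a share on the array).
DEV=/dev/nvme0n1p1
MNT=/temp
DEST=/mnt/disk1/cache_backup

# Dry-run guard: prints the command instead of executing it unless RUN=1.
run() {
  if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

# 1. Mount read-only with all rescue options so damaged metadata is tolerated.
run mkdir -p "$MNT"
run mount -o rescue=all,ro "$DEV" "$MNT"

# 2. Copy everything off the pool; rsync preserves attributes and can resume.
run rsync -a "$MNT/" "$DEST/"

# 3. Verify the copy, then unmount. Re-formatting the pool and copying the
#    data back is done from the Unraid web GUI, not from this script.
run umount "$MNT"
```

The verification step between the copy and the reformat matters: once the pool is formatted, the backup is the only copy.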
Server_home Posted December 1, 2023
Do you think there is a problem with the NVMe? Should I replace it?
Server_home Posted December 1, 2023
1 hour ago, JorgeB said: "Yes, copy everything to the array or somewhere else, confirm the data is good, and then re-format and restore the data."
Now that I already have the vmdk files, how can I put them back on again?