AntoineR

Members
  • Posts

    30
  1. Hey! Sorry for the delay in my answer; I didn't want to spam and didn't really have the time with work. These last few days I had the time to work on it, and your diagnosis was right! Here are the steps I went through, should someone else stumble upon this thread with the same issue: 1. I bought and installed a separate controller (a JMicron JMB585, to be precise) to bypass the problematic motherboard controller. 2. Unplugged the affected drive, swapped the SATA data cable, plugged it into the new controller, and restarted my server. 3. From there I unassigned the hard drive associated with the disabled disk and started the array in maintenance mode. 4. I then stopped the array (not the server), assigned the drive to the disk once more, and started the array *not* in maintenance mode. 5. From there it started to rebuild the disk, and after a few hours it's now back to business and behaving as expected! Thanks a lot for your help diagnosing the issue. Should anyone want to see how I chose the controller, I used the information from this thread: When I have time to tackle a bigger beast I'll try to update everything, hahaha. Thanks again, have a nice day and nice holidays!
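For anyone doing the same swap: before touching the array, I found it reassuring to confirm the new card and the moved drive were actually visible to the system. This is just a sketch of the commands I mean (`lspci` comes with pciutils; the grep pattern matches my JMicron card specifically, so adjust it for your controller):

```shell
# Confirm the kernel sees the new SATA controller (JMB585 in my case)
lspci | grep -i 'jmicron\|sata'

# List drives with model/serial to verify the moved disk shows up
lsblk -o NAME,SIZE,MODEL,SERIAL
```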
  2. Hi to anyone who can help me; I fear my server is having a really bad time at the moment. A monthly scheduled parity check was aborted because of errors found; it reported 177. While trying to find clear instructions on the right thing to do, I reran a new parity check, and in about two minutes it reported millions of errors. I have not yet rebooted the server, as I want to make sure nothing dies in the process. Trying to explore files on the server fails, and it seems every disk is empty. Here are diagnostics; any help with the next steps to take is greatly appreciated. Thank you. pegasus-diagnostics-20231201-1834.zip
  3. That would explain why I failed to find it hahaha, thanks a lot!
  4. Thanks for your help once more. I only now got to it, as I wanted the time to properly sit down, try it out, and document my efforts. I marked the corrupted libvirt file as the solution, since that is visibly what happened. If anybody as noobish as me stumbles upon this thread in the future, here are the steps I went through to repair my VMs: 1. In Settings > VM Manager, I deleted the libvirt file without changing any of its settings. 2. Rebooted the unRAID machine. 3. Upon reboot, I noticed the VM service started successfully, so I went into the VMs tab to create new ones. 4. Selected the right operating system and settings, and changed the VM's name to the same name it used to have. I don't know if CPU pinning matters, but I used the same pinning I used in the past, so YMMV; same for memory. 5. For the primary vdisk location, I pointed to the previous vdisk location. If it is the default, you can find it on the same page where you deleted the libvirt file, under the default VM storage path; remember to point to the .img file itself, not just the specific VM's folder. I left the vdisk size textbox empty to avoid screwing with the vdisk that was already there, and it worked perfectly. This was only tested on Ubuntu VMs, so once again your mileage may vary, but it fixed the issue for me, and I hope it will help anyone who finds this thread in the future. Also, I failed to find the plugin online; could you link to it, ideally in a 6.8.3-compatible version, so I can use it until I have enough free time to upgrade cleanly? Finally, huge thanks for helping noobs like me manage their issues. I'm certain it'll help more than just me, and I'm deeply grateful for your contributions to the unRAID community. Have a good one, and best wishes that none of your system files corrupt in the future, because that sucks!
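One small sanity check that would have reassured me before step 5: verify the old vdisk file is actually where you think it is, and non-empty, before pointing the new VM at it. The default path below is a made-up example from my setup; substitute your own default VM storage path and VM folder:

```shell
# check_vdisk: succeed only if the given vdisk file exists and is non-empty.
# The default path is a placeholder from my setup -- adjust to yours.
check_vdisk() {
    vdisk="${1:-/mnt/user/domains/MyVM/vdisk1.img}"
    if [ -s "$vdisk" ]; then
        echo "found vdisk: $vdisk"
    else
        echo "missing or empty vdisk: $vdisk" >&2
        return 1
    fi
}
```

For example, run `check_vdisk /mnt/user/domains/MyVM/vdisk1.img` before creating the VM, and only proceed if it reports the file as found.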
  5. That's unfortunate. I didn't know this file was so important (nor that it existed, frankly), so I never backed it up. Is there a chance the machine backed it up automatically, and if so, where would it be? Otherwise I'll try to point to the vdisks and hope for the best. Should that fail, does it mean I would have to create the VMs from scratch again, since the VM manager wouldn't be able to "manage" them? To avoid this issue happening in the future, is there any way to get an indication of what caused the libvirt file to become corrupted? And is there a clean method to set it to back up occasionally? Again, huge thanks!
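Since I asked about occasional backups, here is the kind of simple thing I had in mind: copy the libvirt image somewhere safe with a date suffix, with the VM service stopped so the image isn't copied mid-write. The default paths are assumptions based on typical unRAID locations (yours may differ), so treat this as a sketch:

```shell
# backup_libvirt: copy the libvirt image to a backup dir with a date suffix.
# Default paths are assumptions (typical unRAID locations) -- adjust to yours,
# and stop the VM service first so the image isn't copied while in use.
backup_libvirt() {
    src="${1:-/mnt/user/system/libvirt/libvirt.img}"
    dest_dir="${2:-/boot/backups/libvirt}"
    mkdir -p "$dest_dir"
    cp "$src" "$dest_dir/libvirt-$(date +%Y%m%d).img"
}
```

Something like this could then be scheduled (e.g. from a cron entry) to run on a quiet day of the week.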
  6. Ok, sorry! I reran it with the checkbox and it apparently corrected everything; I reran it again and it resulted in no errors. I have since rebooted the unRAID machine and reran the check, which gives this result:

     UUID: 2cdcedd8-7db1-4a5b-ac33-b942268ed85c
     Scrub started: Wed Nov 22 10:40:21 2023
     Status: finished
     Duration: 0:11:14
     Total to scrub: 584.17GiB
     Rate: 887.50MiB/s
     Error summary: no errors found

     However, when going into the VM tab I still get the error message about the libvirt service failing to start. What else should I try? I posted new diagnostics. Again, thanks immensely for your time and effort, which are deeply appreciated! pegasus-diagnostics-20231122-1051.zip
  7. Thanks again for your time! I read the thread and will inform myself further so I can upgrade in the future. On the matter at hand, I ran a scrub (without checking the "repair corrupted blocks" checkbox) on the first device of my cache pool, and these are the results:

     UUID: 2cdcedd8-7db1-4a5b-ac33-b942268ed85c
     Scrub started: Tue Nov 21 18:18:49 2023
     Status: finished
     Duration: 0:26:05
     Total to scrub: 584.82GiB
     Rate: 382.65MiB/s
     Error summary: verify=9957 csum=57513363
     Corrected: 0
     Uncorrectable: 0
     Unverified: 0

     As usual I attached new diagnostics, in case they help as well. If it matters, the drive was disconnected and reconnected to the cache pool a while back, and VMs worked between then and this error showing up. Could it be that the issue took a while to show up because it wasn't properly corrected in time? Thanks once more, and I hope you have a wonderful day. pegasus-diagnostics-20231121-1851.zip
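For anyone finding this later: the scrub I ran from the GUI can also be run from the command line. This is a sketch assuming a btrfs cache pool mounted at /mnt/cache (the mount point is an assumption; check yours). Leaving the GUI's repair checkbox unchecked corresponds roughly to the read-only form:

```shell
# Read-only scrub: reports checksum errors but fixes nothing (what I ran first)
btrfs scrub start -rB /mnt/cache

# Correcting scrub: repairs bad blocks from the redundant copy where possible
btrfs scrub start -B /mnt/cache

# Check progress or results of a running/finished scrub
btrfs scrub status /mnt/cache
```

The -B flag just keeps the scrub in the foreground so you see the summary when it finishes.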
  8. Thanks for your answer and your time! I indeed had a cache drive disconnect and added it back to the pool a while ago; however, the diagnostic line you quoted is not from the same day, which is interesting to note. How can I run a correcting scrub? For what it's worth, running a parity check yields this:

     Parity check finished (0 errors)
     Duration: 12 hours, 35 minutes, 1 second. Average speed: 132.5 MB/s

     I stayed on 6.8.3 because I use an NVIDIA GPU for transcoding in a Plex docker container, and from my understanding newer versions don't allow this due to the disconnect between unRAID and NVIDIA; am I mistaken? I added new diagnostics I ran after the parity check, in case that helps! Again, thanks immensely for your time and help; I'm in over my head and it's really appreciated! pegasus-diagnostics-20231121-1351.zip
  9. Hi! As mentioned in the title, after rebooting my server the VM service fails to start. I tried a parity check and rebooted again to see whether it was a problem with my data or a bug in the reboot, but I get the same problem. Diagnostics are attached; could you help me identify the issue? Thanks a lot! pegasus-diagnostics-20231119-1025.zip
  10. Alright, that did it: the server is back on track and working well. I also ran an extended SMART test on the disk that was initially having a bad time, and it passed, so I guess it's all good? For future reference, what do you think caused this issue in the first place? Simply unlucky timing of a power outage, and the disk is still fine? You also suggested getting an external SATA controller; do you have recommendations? I expect up to about 10 disks in total down the line. Here are the diagnostics as usual! Thanks again for your help and your time. pegasus-diagnostics-20220804-0847.zip
  11. Alright, thanks, I'll try that and report back afterwards. Thanks again to you both for your help and your time!
  12. Sorry for the delay in my answer, and thanks a lot for yours and for your time. Here are the diagnostics after rebooting the server; this time it could mount the drives seemingly without issue, but disk1 still appears as disabled. pegasus-diagnostics-20220803-0920.zip
  13. Hey, I'm writing on this forum because I'm having issues and want to avoid creating more problems by being a newbie. My unRAID server was forcibly shut down during a power outage while it was doing its parity check. After a reboot it could restart the array no problem and resumed its parity check. During the check it stopped, saying that disk1 was disabled (I didn't write down the actual error message, sorry for the dumb mistake). I stopped the array to try to help it restart, and upon restarting it I now get the error "unmountable: no file system" for all my drives. I'm lost and have no idea what to do to avoid losing data; any help would be appreciated. Thanks a lot! pegasus-diagnostics-20220802-2322.zip
  14. Hey! I might be a bit noobish, but I'm going crazy over this: is there something special I have to do to enable port forwarding to this container? I set it as privileged and set the router to forward ports to its address and port; however, when testing it, the port appears closed, while other ports set up the exact same way appear open! Thanks!
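One way I've been testing this, separate from the router config, is a tiny TCP probe run from another machine on the LAN. This is a sketch (it relies on bash's /dev/tcp and the coreutils `timeout`; the host and port you pass are whatever your container uses):

```shell
# check_port HOST PORT: print open/closed based on whether a TCP
# connection succeeds within 3 seconds (uses bash's /dev/tcp).
check_port() {
    host="$1"; port="$2"
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "open: $host:$port"
    else
        echo "closed: $host:$port"
        return 1
    fi
}
```

For example, `check_port 192.168.1.50 8080` from another LAN machine; comparing that with `docker port <container>` on the server helps tell a router problem apart from the port not actually being published by Docker.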