offroadguy56

Community Answers

  1. I do have access to the ISO, and I'm currently finishing backing up important files before I attempt fixes on the OS. I had UrBackup running for some files but not all. My Windows 10 Pro VM is stored on my cache drive.

     The other day the zero-log corrupted again. I was able to rescue it, as this was not the first time. However, when starting the VM, VNC would show "Guest has not initialized the display (yet)". Again, I've seen this issue before and fixed it in the past by creating a new VM template and manually referencing the old ISO. This time, though, when Windows boots I'm met with a BSOD that reads "Bad System Config Info". Windows Automatic Repair fails, and repairs from the Windows install disk fail too.

     One guide I followed suggested using DISM and SFC to repair the install, but I can't get the DISM commands to execute. Dism /Online /Cleanup-Image /RestoreHealth results in error 50. To fix that error I try dism /image:C: /cleanup-image /revertpendingactions, which results in error 3 saying the image cannot be accessed. I've had to change the drive letter in the command from C: to X:, as I guess that's a recovery-mode thing; as a result I'm greeted with error 2 instead. The rest of the search results for these errors involve software or tools that aren't available in recovery mode. So I give up on DISM and try SFC. SFC /scannow scans the disk and reports "Windows Resource Protection could not perform the requested operation." I try chkdsk /r next, which says Windows cannot run disk checking on this volume because it is write protected; search results don't lead me to any solutions for that either. Diskpart says there are no disks and no partitions; the only volumes listed are the two ISO drives attached to the VM, ESD-ISO and virtio.

     I'm ready to write off the install and start fresh. I've backed up what I care about, but I hate giving up without exhausting all my options. If you'd like to suggest some things for me to try, that would be neat. (A recovery-console command sketch follows after this list.)
  2. Well, the cache went unmountable again. The zero-log rescue command brought it back again, but since the first time it has only stayed working for less than a day. Any ideas on how to fix this, or should I back up my data, wipe the cache pool, and start fresh? It shouldn't be a full-drive issue; the pool has 800 GB free. I do have disk4 holding a duplicate of my VM vdisk, and that drive only has 80 GB free; I wonder if that is screwing up the cache pool. (A few btrfs checks are sketched after this list.)
  3. I'll put this information to use. Hopefully I can prevent future screw-ups on my part. Thanks again for the assistance. Y'all are great!
  4. That's basically what is on it: VM vdisks, appdata, the docker image, and the system folder. Right now I'd like to just back up appdata and the VM vdisks, and if possible the docker image and system folder too. Free space on the cache is about 200-300 GB, so it performs its cache duties for the most part. But I needed the RAID 0 setup because I wanted the fastest and biggest (affordable) storage I could offer the software in my VM, which I'm already considering increasing again depending on how my data hoarding goes.
  5. Holy smokes, looks like that worked. So glad I didn't screw up the file system trying to fix it myself like I did some months ago; last time I took my cache drive out and accidentally set it back up as btrfs instead of its original xfs. And you are absolutely correct, my mistake typing the original post: I did set the shares to Yes. I remember anxiously watching the GBs tick by as the pool emptied. I will clean up the duplicates. On a side note, do either of you have recommendations for automatically backing up the cache pool, since it is not part of the array? (A backup-script sketch follows after this list.) Thanks very much for the assistance, both of you!
  6. I wasn't able to access my services on my server. I logged in to find that most of my docker containers had stopped, with only a few still running. I attempted to start one of them but was given a 403 error code. A quick search suggested that points to a full cache pool, yet my cache pool still showed plenty of free space. I then restarted the server and was met with my cache drives being unmountable.

     The process that may have led to the cache's demise was this: I had an M.2 drive I was on a time crunch to back up. I had an M.2-to-USB adapter on order but was afraid it would not arrive in time, in which case I would need the M.2 slots in the server. To reduce the risk of data corruption if I pulled those, I began transferring data off the cache pool to the array by changing "Use cache pool: prefer" to "no" and invoking the mover. The USB adapter did arrive in time and I backed up the M.2 to the array. I left the mover running until it finished moving my various shares' files to the array. I then set my shares back to "prefer" and noticed that some files had a duplicate stored on both the array and the cache, specifically two appdata folders, the docker image, and my VM image; according to Unraid they were stored on both Disk4 (or Disk6) and Cache. I invoked the mover again and the duplicates didn't disappear. I restarted, invoked the mover once more, and the duplicates still remained. (A quick way to list such duplicates is sketched after this list.)

     Some time later I hit the 403 error, and after a restart the cache pool is now unmountable. I'm looking for assistance in troubleshooting the issue. I have had cache problems before due to my own incompetence: I once removed the cache pool, put it back in, and gave it the wrong file system. I have two cache drives, 1 TB each, set up as a RAID 0 equivalent. Most of their data should be duplicated on the array if the Unraid Shares tab is correct. Appdata is nowhere on the array, but I have an older backup on my personal computer. waffle-diagnostics-20230812-1700.zip
  7. Thanks JorgeB. Looks like the SATA passthrough to the VM was the root of the problem. I'm not entirely sure how it managed to get that way. All I remember from a week ago is plugging my GPU accelerator back in after changing its cooler, while also plugging in a second M.2 NVMe drive. The computer attempted to boot Windows off that second NVMe, as I had not wiped it; it tried several times before I caught on. After getting into Unraid I noticed Disk1 was disabled, so I restarted Unraid multiple times and tried changing cables/SATA ports. When I stopped the array to fix Disk1 (just a simple stop array -> start array) I also simultaneously added a second slot to my cache pool, which changed it from xfs to btrfs and disabled my working cache drive (the first NVMe). I don't believe losing the cache pool was the cause or a symptom, as Disk1 was disabled before I touched the cache pool. But I could be remembering wrong, because libvirt.img was in a share that was stored solely on the cache drive. So the SATA passthrough issue could have happened when I added that second slot and drive to the cache pool, which changed the file system and made the cache pool unreadable. Thanks again, the community here is great.
  8. Disk1 is down again. My docker.img was corrupted at some point, so I went to fix that. I also had two images on the array, docker.img and docker-xfs.img. I deleted both, then started the docker service with these settings. Do we know if this has caused Disk1 to go offline? This time it says it's enabled but "Unmountable: wrong or no file system". EDIT: I found a previous post referencing xfs_repair. I was able to execute the command and Disk1 appears to be back and operational (an xfs_repair sketch follows after this list). waffle-diagnostics-20230307-1507 - removed corupted docker.img_then made new docker img as btrfs.zip
  9. Looks like the array is back online. SMB shares are working again. Docker and VM are currently disabled. I can work on my own to get those back. Can I remove these historical disks without further issue?
  10. I thought the read check it offered would fix the disk being disabled. It did not. So I performed the start-the-array-without-the-disk-then-add-it-back trick. The array is performing a parity sync now, and the parity drive is enabled. I'll see how it goes; last time it completed its parity check but then the drive was disabled again, though that shouldn't happen now that the VM issue is removed. I'll post back here with results sometime tomorrow after some sleep.
  11. Array started, libvirt deleted (I believe). Here are the most recent diagnostics and the most recent screenshot of the Main tab, for when I attempt to fix the array. My normal SMB shares have shown up on Disk1 again; no more bare Linux file system. waffle-diagnostics-20230305-1047 - After libvirt deletion.zip
  12. Looks like the file was properly modified by the webUI, but Unraid failed to properly shut down the VM service. After a restart I can now modify the libvirt storage location path. Do I need the array running to see the option to delete libvirt.img? If the SATA controller passthrough was the culprit, then with the VM manager disabled I should in theory be able to start the array and repair it without issue, correct? Even without removing libvirt.img? (A sketch for inspecting the VM definition and locating libvirt.img follows after this list.) There is one more thing I want to point out: currently, if I look at the contents of Disk1, I do not see my usual SMB shares; instead I see a Linux file system. If I navigate to /mnt/ I can see Disk2, Disk3, Disk4, etc., but no Disk1. Just want to put this info out there before any more rebuilds or parity checks are performed.
  13. I have tried to disable the VM manager. It still says "running" in the top right, but the VMs tab is gone and Enable VMs is set to 'No'. Unraid hung on the loading icon for a few minutes, and I refreshed the page to regain control of the webUI. How should I go about removing the libvirt.img file? I assume I would see a button next to the path location on the settings page. This is the most recent line in the log:
  14. My VMs list is currently empty; I should have a Windows 10 VM there. Any suggestions on how to accomplish what you recommended above? I feel like I've done enough potential damage, so I'd like to play it slow and safe and see what the community suggests first. I assume I could fix this problem by disabling the VM manager?
  15. Unraid 6.11.5. I have something weird happening with my array; I will try to describe the series of events as best I can. TL;DR: I tried to bring my Disk1 online. It and the parity disk took turns being offline. After 3 or 4 parity syncs and disk rebuilds, the parity disk will not come online after a parity sync. Unraid has multiple notices saying Disk1 and Disk2 can't be written to, Disk1 has read errors, and the parity disk is disabled.

      Before I began my upgrade I had 7 disks total, with 1 being parity, plus 1 NVMe cache drive. I had planned to install a 2nd cache drive and assign it to the same pool. I installed the 2nd cache drive. On bootup Unraid asked me to assign it btrfs; I clicked yes, and that brought my 1st cache drive offline because it was formatted xfs. At the same time my Disk1 showed it was offline. I restarted Unraid multiple times and tried different cables and SATA ports. I thought the disk being offline meant it was not recognized by the OS/BIOS; I learned that was not the case and that the disk had to be rebuilt from the parity data. I took the array offline, removed the disk, started the array, stopped the array, and added the disk back. The rebuild began, and afterwards Unraid said all disks plus parity were online. I then removed the 2nd cache slot, set the cache pool back to xfs, and assigned my cache drive back to the pool. I moved the data off the cache drive by setting all of my "prefer cache" shares to "yes cache" and invoking the mover; all the data moved successfully. As a precaution I also copied the appdata folder contents to my main PC via SMB. I then noticed that my VM list and Docker list were empty, so I restarted Unraid.

      I should mention that at this point, and during the earlier restarts, Unraid was not able to shut down properly. It would either hang on trying to stop the array and do absolutely nothing for 30 minutes or more, or it would spit out IO errors on the local console. I didn't think anything of it. Three times now, on boot, my computer would not recognize any bootable devices, including the Unraid USB, except for the 2nd cache drive, which had a Windows install on it; after another restart Unraid would boot. After actually booting into Unraid, if I had just rebuilt Disk1, the parity disk would be offline and I would take the array down and back up to get it to parity sync. If I had previously done a parity sync before a restart, Disk1 would be offline and I would take the array offline and back online to rebuild it. Now, after each restart and parity sync, the parity disk remains offline, even before the restart and the "successful" parity sync. Also, as of now there is no longer a parity-sync button; it has been replaced with a read-check button.

      I'm sure some info has been left out and that certain things aren't very clear. Please ask me questions and point me in the right direction to recover my array. My only hypothesis is that I somehow swapped my parity and Disk1 positions (a quick serial-number check is sketched after this list). I have attached some diagnostic dumps, and here are screenshots of my current webUI. I also have a flash backup of Unraid from 02-15-2023; I believe it is version 6.9.5 in that backup? Thanks in advance for all that y'all do here. waffle-diagnostics-20230303-1230.zip waffle-diagnostics-20230303-1932 after parity sync.zip waffle-diagnostics-20230303-2004 after restart to normal OS mode.zip
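
For the recovery-console attempts in post 1, here is a minimal sketch of the offline DISM/SFC route, assuming the commands are run from the install ISO's command prompt inside the VM. Drive letters and the virtio ISO path are assumptions to be matched against what diskpart actually reports; the drvload step is usually what makes a virtio vdisk visible to the recovery environment at all.

    rem Load the virtio storage driver so the recovery environment can see the vdisk
    rem (path assumes the virtio ISO is mounted as E:)
    drvload E:\viostor\w10\amd64\viostor.inf

    rem Find the letter the recovery environment gave the Windows volume
    diskpart
    list volume
    exit

    rem Offline servicing against that volume (assumed C: here), pulling sources
    rem from the install media (assumed D:) instead of Windows Update; use
    rem esd:D:\sources\install.esd:1 if the media ships install.esd
    dism /Image:C:\ /Cleanup-Image /RestoreHealth /Source:wim:D:\sources\install.wim:1 /LimitAccess

    rem Offline SFC needs the boot and Windows directories spelled out
    sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows

Error 50 from /Online is expected in the recovery environment (there is no running installation to service), which is why the /Image: form is used above, and "no disks listed" in diskpart is the classic symptom of the missing virtio driver rather than a dead vdisk.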
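
For the repeatedly corrupting cache pool in post 2, a few btrfs commands help narrow down whether the log tree is really the problem or just a symptom. The device node below is an assumption; substitute the actual cache device, and note that only the rescue line changes anything on disk.

    # Read-only check of the unmounted filesystem; modifies nothing
    btrfs check --readonly /dev/nvme0n1p1

    # The rescue already used; it clears only the log tree
    btrfs rescue zero-log /dev/nvme0n1p1

    # Once the pool is mounted, per-device error counters
    btrfs dev stats /mnt/cache

If zero-log keeps being needed, the corruption is being reintroduced by something underneath (RAM, controller, unclean shutdowns), so a memtest and a look at SMART data and cabling are usually worth more than another rescue pass.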
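
On the cache-backup question in post 5, the usual community answer is the Appdata Backup plugin for appdata plus periodic copies of the vdisks, but a plain scheduled script works too. This is a minimal sketch, assuming the cache-only shares are appdata and domains and that a backups share exists on the array; vdisks should only be copied while their VMs are shut down so the image is consistent.

    #!/bin/bash
    # Nightly copy of cache-only shares to a dated folder on the array.
    # Share names and destination are assumptions; adjust to the real layout.
    DEST=/mnt/user/backups/cache/$(date +%F)
    mkdir -p "$DEST"
    rsync -a --delete /mnt/cache/appdata/ "$DEST/appdata/"
    rsync -a /mnt/cache/domains/ "$DEST/domains/"

Scheduling this through the User Scripts plugin (or a plain cron entry) covers the pool even though it sits outside the array and therefore outside parity protection.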
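
For the duplicates in post 6 that the mover refused to touch: the mover skips any file that already exists at the destination, so duplicated paths have to be found and reconciled by hand. A read-only way to list them, assuming the copies are split between the cache and disk4 (repeat for disk6 or any other disk):

    # Relative paths of every file on each side
    (cd /mnt/cache && find . -type f | sort) > /tmp/cache.txt
    (cd /mnt/disk4 && find . -type f | sort) > /tmp/disk4.txt

    # Paths present in both lists are the duplicates blocking the mover
    comm -12 /tmp/cache.txt /tmp/disk4.txt

Keep whichever copy is complete and current, delete the other, then invoke the mover again.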
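
For the xfs_repair mentioned in post 8, the form normally used on an Unraid array disk (array started in maintenance mode) targets the md device so parity stays in sync with the repair. On 6.11.x Disk1 maps to /dev/md1; that mapping is an assumption to verify against your own disk numbering.

    # Dry run first: reports what would be fixed, changes nothing
    xfs_repair -n /dev/md1

    # Actual repair
    xfs_repair -v /dev/md1

    # Only if it refuses because of a dirty log and a clean mount is impossible;
    # -L zeroes the log and can lose the last few in-flight transactions
    # xfs_repair -L /dev/md1

The same repair can be run from the GUI by clicking the disk on the Main tab while in maintenance mode and using the file system check section there.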
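
For posts 12-14, a couple of read-only checks before deleting anything: libvirt.img only holds the VM definitions (the vdisks live elsewhere), and the stock location for it is under the system share, though the path below should be confirmed against Settings -> VM Manager. The VM name passed to virsh is a placeholder.

    # With the VM service running, list definitions and look for a passed-through
    # controller (hostdev entries) in the suspect VM's XML
    virsh list --all
    virsh dumpxml "Windows 10" | grep -B2 -A8 hostdev

    # Default location of the libvirt image on a stock setup
    ls -lh /mnt/user/system/libvirt/libvirt.img

Deleting libvirt.img (with the VM manager stopped) removes the VM definitions but not the vdisks, so the Windows VM can be recreated afterwards from a new template pointed at the existing vdisk.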
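
For the swapped-parity hypothesis at the end of post 15: Unraid tracks assignments by drive identification (model and serial), so a swap can be ruled in or out by comparing the serials shown on the Main tab with what the drives themselves report. A read-only sketch:

    # Model and serial of every attached disk
    lsblk -o NAME,SIZE,MODEL,SERIAL

    # The persistent by-id names assignments are based on
    ls -l /dev/disk/by-id/ | grep -v -- -part

If the serial assigned to parity matches the drive that was meant to be Disk1, the assignment rather than the hardware is what needs fixing, and that is worth confirming before any further rebuilds or parity syncs.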