luisv

Everything posted by luisv

  1. That's great to hear... Been happy with mine as well, so I'll probably end up with this newer Prime board. Thanks for the quick reply!
  2. Indeed a great build you have there... now that it's a year or so old, I'm curious how the Asus Prime X570 Pro is working out for you? My Prime board is 7 years old and has one bad SATA port, so I'm considering a replacement before more problems start showing up. The Asus Prime X570 Pro and Pro WS X570-ACE have caught my attention, so I'm trying to decide between them.
  3. This worked perfectly for my needs... thanks! All empty drives have been removed from the array and parity synced. Currently preclearing the empty drives... @JorgeB - I understand this doesn't fix the fundamental issue of not being able to stop the array, but once the preclears complete and they are removed from the server, I'll provide an update and will see if I can stop the array normally.
  4. In normal operation, I can only stop the array if the VM and Docker services are stopped; that wasn't the case until I upgraded to 6.12.3. If I follow the instructions from the video posted above and use the script to clear a disk, the clear finishes within seconds... not hours. Since no errors are present in the log, I try to stop the array... it hangs. Something doesn't seem right. I appreciate the help, but this is becoming extremely time consuming, as parity checks / disk rebuilds take 24+ hrs. I'm doubly confused because the disks I want to remove have zero data on them, so the only bits on them are whatever the parity disk needs for a rebuild (see the parity sketch after this list). I understand the risks of not having parity; however, since five disks have zero data on them and the other four have gone through several parity checks without issue, how can I remove these empty disks? Can I disable parity, remove the five disks, then start the array and allow parity to rebuild? My fear is that the more testing I do, the more disk rebuilds / parity checks get performed; I'd rather not reach a point during troubleshooting where a disk fails. I'd rather remove the five empty disks and allow parity to rebuild, shuffle the remaining data to the new drives, then remove the last four disks and allow parity to rebuild one last time?
  5. In safe mode, with Docker and VM disabled, I can stop and start the array without issue. While in safe mode, should I go through the process of shrinking the array?
  6. Thanks... I'll post an update later tonight.
  7. To recap: in Settings, disable VM and Docker; reboot in safe mode; try to stop the array.
  8. No worries at all... I truly appreciate the help. Agreed... stopping the array is more important; however, I'm simply doing what I can during the day as I won't be in front of the server until later tonight. The pre-clear is currently at 62% and the data transfer is estimating 4 hrs 52 mins to go, so once they are complete, how should I proceed?
  9. A couple of quick updates: the Disk 1 rebuild completed and parity shows as valid; installed another 16TB drive and am currently pre-clearing it; using Unbalance to move data onto Disk 1.
  10. Yes... I set my array to not auto-start; upon successful parity sync yesterday, I rebooted, started, and then stopped the array one last time... same issue... so I gave up and I'm currently waiting for the rebuild to complete. Once the rebuild is complete, what's your recommendation... how should I proceed? For instance, I can preclear another 16TB drive and add it to the array, then scatter the data from Drives 4 through 7 onto Disk 1 (16TB) and Disk 2 (16TB). Once that is complete, how should I proceed? If you have other suggestions, please let me know... thanks for your replies!! Your time is truly appreciated...
  11. Yes... I tried to stop the array... several times. Dockers were stopped, VMs were not running, no additional browser windows open to the console and no SSH sessions.
  12. Correct... it's rebuilding. I have 4 drives to add and 10 to remove. Once the rebuild of Disk 1 is complete, I can scatter data from the array onto Disk 1. Once that is complete, Disks 2 through 7 will be empty / have no data. So my question is how to proceed, given the issue / failure of stopping the array? I'd rather have the ability to remove a zeroed drive without losing parity.
  13. Here's the diag. davault-diagnostics-20230720-0935.zip
  14. My server originally had dual 6TB parity with 4TB and 6TB data drives (11 data drives in total). I'm trying to swap four 16TB and two 12TB drives into the server. My drive bays are all full and I don't have any extra SATA ports, so I used Unbalance to offload the data from three 4TB drives onto the rest of the array. I figured I could swap one of the parity disks for a 16TB drive, allow parity to rebuild / sync, then follow the steps in this video to zero three data drives, remove them while retaining parity, and add three 16TB drives... rinse and repeat until the array consists of one 16TB parity drive with three 16TB and two 12TB data drives. My plan was to leave the 6TB parity disk until the end, as dual parity would provide more redundancy during this data disk shuffle. I swapped one of the 6TB parity drives for a 16TB; after parity was rebuilt / synced, I went through the process of zeroing out a 4TB drive (Disk 1). When I tried to stop the array, it hung with the "Retry unmounting shares" error; the server is running 6.12.3. VMs and dockers were stopped, nothing was accessing the array, and I don't limit drives within shares, so I was confused by the error. Since I couldn't stop the array, I shut down the server, swapped the zeroed 4TB (Disk 1) for a 16TB drive, and upon reboot I couldn't assign it as Disk 1, as it indicated that the new drive was larger than the dual parity disks (16TB and 6TB); see the size-rule sketch at the end of this list. If I unassigned the 6TB parity, I was able to assign the 16TB drive as Disk 1... frustrated and confused, I went ahead, unassigned the 6TB parity disk, and assigned the 16TB drive as Disk 1. I started the array and a data rebuild is underway for Disk 1... yep, my frustration got the best of me... now the wait... another 24+ hrs for this step to complete. At least two 16TB drives have been swapped into the server. Sorry for all the details, but I figured it best to convey as much info as possible, as I'm extremely confused about why it hung. I'm not sure whether my issue should have been addressed by 6.12.3, as it included a fix for the "Retry unmounting shares" error; however, I wanted to convey which version I was running while attempting this drive swap. Disks 2 and 3 do not have any data and I have an available SATA port. How should I proceed? Any help is appreciated...
  15. After I calmed down, I performed some basic troubleshooting. I connected a monitor and keyboard, restarted the server, and entered the BIOS; the MB has 8 SATA ports and one port was not showing a drive connected to it. So I shut down the system and connected drive 3 to a spare SATA port on the LSI controller card. I turned the server back on, logged in, and the array started as it recognized drive 3. I was able to browse disk 3 and open files via the Dynamix File Manager plugin. I also opened some files that reside on drive 3 via Windows Explorer; the drive seems good to go at the moment. So, it's either a bad SATA port on the MB or a bad cable?? I'm going to leave things alone for a bit to make sure the server is stable. By the way, I just realized the system is 5 years old... boy, time flies... it looks like I need to start buying some spare parts as I'm sure other things are going to go... Thanks for the reply... new diagnostics attached.
  16. The drives are labeled, so I know which one is drive #3; it's part of my OCD, as all drives are in proper order within the case. I have a backup of the USB key and there's an option to start the array without this drive, so I'm trying to figure out what my best options are... any help is appreciated.
  17. Updated the original post with the zip file... thanks for the quick reply.
  18. After upgrading to 6.10.0 from 6.10-rc8, upon reboot the array won't start as a drive is missing; any help is appreciated. I have a backup of the USB drive.
  19. Here's what I did: went to http://boinc.bakerlab.org/rosetta/ and created an account; after logging into the website, joined the Unraid team; installed the RDP-BOINC docker and set the RDP port to 3389; once the docker was running, logged into the docker with the ID I created in the first step.
  20. I logged in here and joined the team: http://boinc.bakerlab.org/rosetta/
  21. It was a "banner" across the top... similar to when a new version of Unraid is available. I assume it checked whether a server had the BOINC or Folding@Home docker installed, so if you already had one of them, you probably didn't see it. Not going to debate its usefulness here, but I don't see an issue with it.
  22. Doing my part... both servers are up and running using the BOINC-RDP container. Just to confirm... only Folding@Home supports GPUs?
  23. Just joined the team... using BOINC RDP on my backup server. Will be adding my main server shortly.
  24. Just upgraded one of my servers from 6.8.1 to 6.8.2; no issues.
  25. Updated 2 servers... so far so good.
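
A quick aside on the parity question raised in post 4 (a minimal sketch, assuming only that Unraid's single parity is a byte-wise XOR across the data disks; this is a toy illustration, not Unraid's actual code and not the clear-drive script referenced in the thread):

      from functools import reduce
      from operator import xor

      def parity(disks):
          """Byte-wise XOR parity across equal-length disk images."""
          return bytes(reduce(xor, block) for block in zip(*disks))

      # Toy "disks": two with data, one written full of zeros (i.e. cleared).
      d1 = bytes([0x12, 0x34, 0x56, 0x78])
      d2 = bytes([0xAB, 0xCD, 0xEF, 0x01])
      zeroed = bytes(4)

      # The all-zero disk contributes nothing to XOR parity, so parity stays
      # valid when it is removed; a formatted-but-empty disk is not all zeros
      # (filesystem metadata) and would invalidate parity if simply pulled.
      assert parity([d1, d2, zeroed]) == parity([d1, d2])
      print("parity unchanged:", parity([d1, d2]).hex())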
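And a similarly hedged sketch of the size rule hit in post 14: a data disk may not be larger than any assigned parity disk, which is why the 16TB drive could only be assigned as Disk 1 once the 6TB parity disk was unassigned (sizes in TB; the helper name is made up for illustration):

      def can_assign_data_disk(new_disk_tb, parity_disks_tb):
          """A data disk must be no larger than the smallest assigned parity disk."""
          return all(new_disk_tb <= p for p in parity_disks_tb)

      print(can_assign_data_disk(16, [16, 6]))  # False: rejected while the 6TB parity is assigned
      print(can_assign_data_disk(16, [16]))     # True: accepted after unassigning the 6TB parity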