xjumper84


  1. I checked the thread you linked; I'm using DDR4-3200 RAM with a Ryzen 4600G, set to XMP profile 1. The BIOS is one version out of date, but it was the latest when this server was put together, and it ran totally fine for almost two years. The board was rock stable until I disabled C-states, as the thread mentions, to solve issues with USB powering down and causing the Coral USB TPU to fail in Frigate, except it didn't help solve that issue. So I put those settings back, and now these issues seem to come up every 20 or so hours. Thanks for the info @JorgeB.
  2. Last night, while transferring files to another machine on the network, Unraid became inaccessible and this was the error on the screen. It keeps referencing CPU cores as being unavailable. I was also running three Docker containers at the same time (Plex, xmrig, Frigate). Frigate uses a Google Coral, which has been causing problems on the USB side. Diagnostics are attached; I had to reboot to get back into Unraid, so these diagnostics are from that first reboot. I'm just interested in learning what causes this. While not related, I'm already swapping in new motherboard/CPU/RAM hardware this weekend. Any insight would be appreciated. nas-diagnostics-20240203-0810.zip
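The on-screen CPU errors described above usually leave matching stall/lockup lines in the kernel log inside the diagnostics zip. A minimal sketch for pulling them out (the helper name and the exact message patterns are assumptions; the wording varies by kernel version):

```shell
# Filter kernel-log lines that indicate a stuck or unavailable CPU core.
# The patterns below are assumptions: exact wording varies by kernel version.
cpu_stall_events() {
  grep -E 'soft lockup|hard LOCKUP|rcu_sched|rcu: INFO'
}

# Usage on a live system (or against the syslog inside the diagnostics zip):
#   dmesg | cpu_stall_events
```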
  3. Yes, your summary is accurate. I will check out Dynamix File Manager. I'm finding Disk5 to be very loud when accessing files, and it keeps switching between 3 Gb/s and 6 Gb/s SATA 3.1 link speed. I suspect it's the controller card it's connected to, but that's for another day. I also need to plan out upgrading the server to 6.12.x. I'm finding hardware limitations with my current build that would warrant a hardware refresh. Thank you for your help. I hope the next drive I replace goes better.
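A link that keeps renegotiating between 3 Gb/s and 6 Gb/s normally shows up in the kernel log. A small sketch for listing those events (the helper name is made up; the message format shown is the usual libata one):

```shell
# List SATA link-speed negotiations from kernel-log input. A port that
# flaps between 3.0 and 6.0 Gbps often points at the cable, backplane,
# or controller rather than the drive itself.
sata_link_events() {
  grep -E 'SATA link up [0-9.]+ Gbps'
}

# Usage: dmesg | sata_link_events
```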
  4. New diagnostics. I spun up the failing 4TB drive (sdi) in Unassigned Devices because its Raw Read Error Rate has risen. nas-diagnostics-20240121-0918.zip
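The attribute in question can be read with smartctl. A sketch, assuming the drive is still at /dev/sdi as in the post (the parsing helper is hypothetical):

```shell
# Print the raw value of a named SMART attribute from `smartctl -A`
# output on stdin (the last column of the matching attribute row).
raw_smart_value() {
  awk -v attr="$1" '$2 == attr { print $NF }'
}

# Usage on a live system:
#   smartctl -A /dev/sdi | raw_smart_value Raw_Read_Error_Rate
```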
  5. Done. How should I proceed now? Should I run a parity check before I set the shares back up and copy files back? Also, @trurl, the drive that was failing now has a Raw Read Error Rate of 182, up from 177, so its replacement appears to have been imminent.
  6. OK, data is transferred. @trurl or @itimpi: how should I format the 10TB drive and get a good parity rebuild going? I'll copy the data back later.
  7. Updated. Copying data off the original disk4 now.
  8. Could I copy the data off the original 4TB drive currently in Unassigned Devices to my desktop, just to have a good copy of the data, then erase the rebuilt data on Disk 4? Then run a parity check to ensure parity is good, and then copy the data back to Disk 4?
  9. I checked the other WD disks; they are all zeros on attribute IDs 1, 197 and 200. Docker and VMs are set to No.
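The check above (raw values of SMART attributes 1, 197 and 200 all zero) can be scripted per disk. A sketch, assuming standard `smartctl -A` table output; the function name is made up:

```shell
# Exit 0 if the raw values (last column) of SMART attributes 1, 197
# and 200 are all zero in `smartctl -A` output on stdin, else exit 1.
smart_attrs_zero() {
  awk '($1 == 1 || $1 == 197 || $1 == 200) && $NF + 0 != 0 { bad = 1 }
       END { exit bad }'
}

# Usage: smartctl -A /dev/sdX | smart_attrs_zero && echo "all zero"
```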
  10. OK, so I should:
      • Go to Settings and set Enable Docker to No (this will keep the Docker containers I have, though, right?).
      • Go to Settings and set Enable VMs to No (this will also keep the VM files I already have).
      Then:
      • Stop the array.
      • Swap the 10TB out and put the 4TB in.
      • Start the array with no parity check.
      • Go to Tools -> New Config, then rebuild parity and wait.
      • Don't let any Docker container run, and don't copy anything in or out until it finishes.
      Is that correct?
  11. New diagnostics attached nas-diagnostics-20240120-0951.zip
  12. OK. I re-added the original disk4 and it showed up in Unassigned Devices. I mounted it and all my files are there. What is the proper way to proceed? I would think (and please let me know if this is correct):
      • Unmount the drive from Unassigned Devices.
      • Stop the array, change the 10TB drive back to the original 4TB drive, and then restart the array.
      • Check the files to make sure all is good, then do a parity check and ensure we're good there.
      • Once we are, format the 10TB drive in Unassigned Devices to XFS, copy the 4TB drive to the 10TB drive, and unmount the 10TB.
      • Stop the array, change the 4TB drive to the 10TB drive, set up the 4TB drive in Unassigned Devices, and restart the array.
  13. Understood. Edit: the errors showed on md1, md2 and md3 (same as shown in the picture before the last reboot) and read something similar to: "metadata i/o error in 'xfs_imap_to_bp+0x50/0x70 [xfs]' ... len 32 error 5". So I'll power off the machine, connect the drive, and start up. Please refresh my memory: with the new drive added, the array won't start? And then I'll be able to add the original 4TB drive into Unassigned Devices?
  14. The billion errors were on disks 1-3. The rebuild went onto the 10TB drive (which is disk4).