About ChatNoir

  • Rank: Not so Advanced Member
  • Birthday: January 8
  • Community Reputation: 229 (Very Good)
  • Followers: 1
  • Profile views: 2555

ChatNoir last won the day on March 29 and had the most liked content!
  1. No need to pre-clear. The plugin is used to test new drives, detect weaknesses, and RMA them before you put your data on them. For existing drives, an Extended SMART test should be enough; Unraid will clear the drives when you add them to the Array. One potential benefit of using preclear with existing drives would be to reduce the time Unraid takes to clear new Array drives to basically 0 and have the system up and running faster. But you don't seem to be in this situation. The system will not lose availability by doing SMART test + regular Unraid clear in
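As a rough sketch of the time at stake, here is a back-of-the-envelope estimate of how long Unraid's built-in clear (a full zero pass) takes; the 12 TB capacity and 180 MB/s sustained write speed are assumed example numbers, not measurements:

```shell
# Rough estimate of Unraid's built-in clearing time, assuming a
# hypothetical 12 TB drive and ~180 MB/s average sequential write speed.
capacity_mb=$((12 * 1000 * 1000))   # 12 TB expressed in MB
speed_mb_s=180                      # assumed sustained write speed
hours=$((capacity_mb / speed_mb_s / 3600))
echo "Estimated clearing time: ~${hours} hours"
```

With those assumptions the clear takes roughly 18 hours, which is the window preclearing would shave off when adding a new Array drive.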
  2. It really depends on your goal, the problem you are trying to fix, and why you don't write directly to the Array. How much data per day are we talking about? A single-HDD pool would only have slightly better write speed than the Array and no redundancy. A RAID0 SSD pool would be faster than the other solution and the Array, but would still provide no redundancy; you would, however, have more space than with RAID1. Are you looking for:
     • write speed
     • pool size
     • redundancy
     • limiting the spin-up time of the Array
     • something else
     • a mix of the above?
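To make the RAID0 vs RAID1 trade-off above concrete, here is a small sketch of usable capacity for a two-device pool; the two 1000 GB device sizes are assumed example values:

```shell
# Usable capacity of a two-device pool, assuming two 1000 GB SSDs.
dev1=1000; dev2=1000                    # assumed device sizes in GB
raid0=$((dev1 + dev2))                  # RAID0: sum of devices, no redundancy
raid1=$(( dev1 < dev2 ? dev1 : dev2 ))  # RAID1: limited by the smaller device
echo "RAID0 usable: ${raid0} GB (no redundancy)"
echo "RAID1 usable: ${raid1} GB (survives one device failure)"
```

So RAID0 doubles the usable space here, at the cost of losing the whole pool if either device fails.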
  3. Hmm, thanks, but I only participated as a moderator to mark the topic Solved.
  4. You should probably ask this in the ZFS plugin support thread.
  5. Hello, I guess that you mean this:
     Jun 14 13:37:01 Valhalla kernel: BTRFS error (device sdc1): parent transid verify failed on 498473369600 wanted 9328736 found 8804448
     Jun 14 13:37:01 Valhalla kernel: BTRFS: error (device sdc1) in __btrfs_free_extent:6803: errno=-5 IO failure
     Jun 14 13:37:01 Valhalla kernel: BTRFS info (device sdc1): forced readonly
     Jun 14 13:37:01 Valhalla kernel: BTRFS: error (device sdc1) in btrfs_run_delayed_refs:2935: errno=-5 IO failure
     Jun 14 13:37:01 Valhalla kernel: print_req_error: I/O error, dev loop2, sector 0
     Jun 14 13:37:01 Valhalla kernel: BTRF
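To pull only the relevant BTRFS lines out of a busy syslog, a simple grep on the error/warning levels works; the sketch below runs on a small sample file so it is self-contained (on a live server you would point it at /var/log/syslog instead):

```shell
# Filter BTRFS errors out of a syslog; demonstrated on a sample file,
# on a real server replace /tmp/sample_syslog with /var/log/syslog.
cat > /tmp/sample_syslog <<'EOF'
Jun 14 13:37:01 Valhalla kernel: BTRFS error (device sdc1): parent transid verify failed on 498473369600 wanted 9328736 found 8804448
Jun 14 13:37:01 Valhalla kernel: BTRFS info (device sdc1): forced readonly
Jun 14 13:37:02 Valhalla kernel: usb 1-4: new high-speed USB device
EOF
grep -E 'BTRFS (error|warning)' /tmp/sample_syslog
```

Only the first line matches; the "info" and unrelated kernel lines are filtered out.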
  6. You should check whether the errors still appear in your log. It is quite possible that they are just CRC errors and that the drives and their data are OK; we are just lacking information to decide. Try running Extended SMART tests on the drives with serial numbers ending in 6Y0 and T0G. When both are done, share a new set of Diagnostics.
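The CRC counter in question is SMART attribute 199; a value that is non-zero but not growing usually points to a past cabling problem rather than a failing drive. Here is a sketch of extracting it, run against a captured sample table so it is self-contained (live, you would parse the output of `smartctl -A /dev/sdX`):

```shell
# Extract the UDMA CRC error counter from a smartctl attribute table.
# The sample values below are illustrative, not from a real drive.
cat > /tmp/sample_smart <<'EOF'
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       12
EOF
awk '$2 == "UDMA_CRC_Error_Count" { print "CRC errors:", $NF }' /tmp/sample_smart
```

Re-checking the raw value after a few days tells you whether the errors are historical or still accumulating.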
  7. As long as no RAID controller is involved, Unraid should find your drives. If you have VMs with hardware passthrough, you'll probably need to disable Array and VM autostart and correct what needs to be corrected. If I were in your situation, I'd try to boot on your new hardware with a Trial key first to iron out any issues. We often see people having trouble booting properly on some OEM servers.
  8. Your diagnostics could provide information on the initialization of the NICs, their status, etc.
  9. The procedure to rebuild the disk from Parity is described here:
  10. It looks like it is pulled from Wikipedia. The official datasheets on the AMD website are pretty basic; I guess you'd have to dig into more detailed documents to find the specifics of the architecture's RAM management.
  11. You have 4 DIMMs of dual-rank memory, so 3200 MT/s is still too high as per the FAQ; you should aim for 2667 MT/s.
  12. Well, it is not present in 6.9, and that is the long-term solution; I doubt the 6.8 branch will be continued. For 6.8, this topic discusses the issue and proposes a workaround. Note that you would need to add the fix to your go file so that the workaround survives a reboot.
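Since the go file is re-run on every boot, persisting a fix just means appending it there. The sketch below uses a scratch copy and a placeholder command, because the actual workaround lives in the linked topic; on a real server the file is /boot/config/go:

```shell
# Persist a fix by appending it to the go file (scratch copy here;
# the real file is /boot/config/go). 'my-workaround-command' is a
# placeholder, not the actual workaround from the linked topic.
GO_FILE=/tmp/demo_go
printf '#!/bin/bash\n/usr/local/sbin/emhttp &\n' > "$GO_FILE"
# Append the workaround only if it is not already present (idempotent).
grep -qF 'my-workaround-command' "$GO_FILE" || \
    echo 'my-workaround-command   # placeholder for the actual fix' >> "$GO_FILE"
cat "$GO_FILE"
```

The grep guard keeps the line from being duplicated if you run the snippet more than once.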
  13. I suppose that a program is writing to rootfs instead of writing to a Share or drive. You should check that all paths are correct, probably starting from the latest additions/changes to your system. Also check the capitalization, since paths are case-sensitive.
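To track down which directory is actually filling up, `du` restricted to one filesystem is the usual tool. The sketch below builds a small scratch tree so it is self-contained; on Unraid you would instead run something like `du -xh -d1 / | sort -h` to stay on rootfs:

```shell
# Find which directory is eating space; shown on a scratch tree
# (on Unraid: du -xh -d1 / | sort -h, where -x stays on one filesystem).
mkdir -p /tmp/demo_root/appdata /tmp/demo_root/stray
dd if=/dev/zero of=/tmp/demo_root/stray/big.bin bs=1K count=512 2>/dev/null
du -h -d1 /tmp/demo_root | sort -h
```

The largest entry at the bottom of the sorted output points you at the misbehaving path.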
  14. I do not think this is an Unraid feature. I guess the USB drive did not appreciate the unclean shutdown. Do you have a recent backup of the drive?
  15. I am not using Sonarr, Radarr, and the like, so I am not sure what your issue is. However, the mover does not touch open files. Is it possible that your files are kept open by file-sharing software?
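One way to verify the open-files theory is to check whether any process still holds the file, either with `lsof -- /path/to/file` where lsof is installed, or directly via /proc. A minimal self-contained sketch, checking only the current shell for simplicity (on a server you would scan all of /proc or use lsof):

```shell
# Check whether a file is still held open, via /proc symlinks.
touch /tmp/demo_open_file
exec 3> /tmp/demo_open_file               # open the file and hold it on fd 3
find /proc/$$/fd -lname '*demo_open_file' # prints the fd entry while open
exec 3>&-                                 # close it; the mover could now move it
```

If the find prints nothing for a file the mover skipped, the open-file theory is ruled out and something else is blocking the move.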