windowslucker

Members
  • Posts: 9
Everything posted by windowslucker

  1. I just did another test copying directly to the disk share, as you suggested. The VMs were switched off again. As I can't think of any way to copy to a disk share using SMB, the only difference to my prior tests is that I used WinSCP to copy the files to /mnt/disk1/copyTestShare. As far as I can tell, the issue stayed the same: inaccessible Dockers and high CPU load. The Unraid web interface stayed accessible, though. Also, WinSCP got some weird disconnects while copying; the message in the top right window says: "The remote computer didn't send any data for more than 15 seconds." I think it has something to do with Unraid freezing, but of course I'm not completely sure that's really the case. Again, I recorded everything and uploaded it to YouTube for you to review.
  2. I'm not sure if I understand you correctly. Do you mean I should try copying straight to e.g. /mnt/disk1/folderxy instead of /mnt/user/folderxy?
  3. I just ran one test with the VM Manager switched off under Settings -> VM Manager and another test with the VM Manager switched on, but without starting any VMs. So no VMs were running in either of those two tests, and the results of both tests were the same. The only difference from the problem I described in my post above was that the CPU load didn't skyrocket like it did earlier, but rather increased slowly and gradually over the course of three minutes or so. The outcome, however, is the same: unresponsive Dockers and huge CPU load. This test was done by copying three 10 GB files from my desktop computer to an Unraid share. The share's configuration can be seen in the screenshot below. I've also recorded the main part of the copy, where the unresponsive Dockers, switched-off VMs and huge CPU load can be observed. The video can be found here: Unraid Bug Report CPU/IOWAIT
  4. Well, I may have been unclear. The VMs are active in the screenshot above, but they weren't actively used or copied from. They were more or less sitting idle at that time.
  5. No VMs are involved in this case, just copying from my PC to an SMB share. Also, the problem persists regardless of VMs or Docker containers: if I switch them all off, I still get the same high CPU utilization and iowait while copying to the array.
  6. Since I started using Unraid on version 6.8.3 I've had a strange issue when copying files directly to the array. CPU load gets to nearly 100%, and the UI, as well as access to my shares via SMB, becomes extremely slow and buggy. Additionally, the web UIs of Docker containers can't be accessed at all. The whole server becomes basically unusable until copying is done. VMs are not affected, though. Checking top shows me a very high wa value, with values up to 70-90 (a small monitoring sketch is at the end of this list). After copying has finished, the value drops within a couple of seconds and everything goes back to normal. This only happens when copying directly to the array without using the cache drive, or when the Mover is running. I'm currently using an LSI/Dell SAS 9207-8i in IT mode, but the same issue occurred with my previous PCIe-to-SATA controller, as well as when the drives were connected directly to the motherboard's SATA connectors. Over the last year I've gone through different HDDs and a CPU and mainboard swap (from desktop drives to NAS drives, and from an Intel i7 4770 to a Ryzen 5 2600 on a Gigabyte B450M DS3H mainboard). The issue has stayed the same throughout all the changes. srv-diagnostics-20210526-1044.zip
  7. Thank you guys for your kind and quick support. A reboot indeed seems to have solved the problem for now, and the rebuild of the parity drive is currently in progress. As you suggested, I'm planning to buy a new SATA controller, though. Thanks again for your support. Cheers
  8. Hi trurl, I just checked. It's the ones that are not connected to the Marvell controller. Do you think it's safe to turn Unraid off and swap one of the unmountable drives over to the remaining SATA port on the Marvell controller? Thanks for your help
  9. Hey guys, right now I am having a tough time with Unraid. Yesterday Unraid informed me that my 2 TB parity drive's SMART value for "reallocated sector count" is continuously rising. Today I bought a new 4 TB HDD, stopped my array, pulled the damaged parity disk, replaced it with the new 4 TB one (hot-swap) and assigned it to the parity slot in Unraid (the array was emulated the whole time). The rebuild of the array started, and at around 20 percent I noticed that I was getting a 503 error in the OPNsense web UI (which was running in a virtual machine on Unraid), so I restarted the VM. The VM never came back to life. At around 25% I saw 49,000,000 errors across all my HDDs on the Main screen of Unraid. Then I stopped the rebuild, stopped the array, started the array in maintenance mode and tried to repair the filesystem on one of the HDDs (xfs_repair -v /dev/md1), but I am getting a fatal error, "superblock read failed", in Phase 1 (the commands involved are sketched at the end of this list). So here I am, not knowing what to do; all my shares and data are gone. Does anyone have any advice? srv-diagnostics-20210416-1950.zip
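
For post 6, a minimal sketch of how the wa/iowait figure can be watched from a shell while a copy is running. It assumes the sysstat tools (iostat) are available on the server; the 2-second interval and 30-sample count are illustrative values, not ones taken from the original posts.

    # Per-CPU view: the %iowait column here is the "wa" value seen in top.
    iostat -c 2 30

    # Per-disk view: high await/%util on the array disks during the copy
    # points at the disks (or their controller) rather than the CPU itself.
    iostat -x 2 30

    # Plain top works too: batch mode, grab the CPU summary line and watch "wa".
    top -b -d 2 -n 30 | grep '%Cpu'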
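For post 9, a short sketch of the corresponding commands. /dev/sdX and /dev/md1 are placeholders for the affected devices; the real names come from the Unraid Main page, and xfs_repair should only be run against the mdN device with the array started in maintenance mode.

    # Read the SMART attributes of a suspect disk and pick out the reallocated-sector count.
    smartctl -a /dev/sdX | grep -i reallocated

    # Dry run first: -n only reports problems and writes nothing to the disk.
    xfs_repair -n /dev/md1

    # The actual repair (the -v used in the post just adds verbose output).
    xfs_repair -v /dev/md1

A "superblock read failed" already in Phase 1 generally means the device itself could not be read at all rather than that the filesystem is corrupt, which fits the controller problem discussed in posts 7 and 8.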