Raki72

Everything posted by Raki72

  1. Thanks JorgeB for the quick solution. I didn't even know this setting existed. 🙂
  2. Well, that was quick. Problem is back and diagnostics is attached. Any help is appreciated. unraid-diagnostics-20240121-2107.zip
  3. You are right. I actually expected this to be a known issue, so I didn't add the diagnostics file. However, the problem is gone now. I'll post again when the problem occurs.
  4. My Unraid is getting full: of 168 TB, 158 TB are now used and 9.88 TB are left. I guess this is causing a weird problem. When I try to move files from one share to another I get these errors: - Windows: something like "0 bytes needed to move files, but not available". - Linux (from the Unraid GUI): see screenshot. Any idea what is going wrong? Thanks!
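For what it's worth, a minimal sketch of how this can happen (illustrative only — the allocation function, disk sizes and threshold below are assumptions, not Unraid's actual code): Unraid writes each file to a single data disk, and a share's "minimum free space" setting excludes disks that are too full, so a move can fail even though the array as a whole still has terabytes free.

```python
# Hypothetical sketch of why "no space" errors can appear despite ~10 TB free:
# each file must fit on ONE disk that also keeps its minimum-free headroom.
# All numbers are made up for illustration.

def pick_target_disk(free_gb_per_disk, file_gb, min_free_gb):
    """Return the index of the first disk that can take the file, or None."""
    for i, free in enumerate(free_gb_per_disk):
        if free - file_gb >= min_free_gb:
            return i
    return None

# 900 GB free in total, but spread across three nearly-full disks:
disks = [400, 300, 200]
print(pick_target_disk(disks, file_gb=250, min_free_gb=200))  # -> None
print(pick_target_disk(disks, file_gb=100, min_free_gb=200))  # -> 0
```

If this is the cause, comparing the share's minimum-free-space setting against the size of the largest file being moved would be the first thing to check.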
  5. Would it be possible to share this script? I am looking for something like that, but I don't want to develop it from scratch...
  6. Well, more than that I would like to have multiple arrays per server. Because then I could simply split my Unraid into 2 arrays and the number limitation would be kind of irrelevant. Is there an ETA for the "multiple arrays" or multiple pools concept?
  7. I am considering buying an X400. These things are dirt cheap, but documentation is nowhere to be found. Could you let me know how you integrate the X400 with Unraid? How can one set the X400 to DAS mode? Thanks for your support!
  8. I am desperately waiting for this functionality so that I can build a second array, place larger disks in there and move data disk by disk to the second array. 😀
  9. Is there an ETA for the Multiple Array feature? I am eagerly waiting for it.
  10. Part of the data is documents and documentation, which could be compressed massively. But with performance issues that won't help me. 😁 Thanks for your answers.
  11. Thanks a lot! Do you happen to know what wget-at-gnutls is doing? It keeps popping in and out of top.
  12. I was not aware of performance issues with ZFS. Thanks for letting me know. Where did you get that information? I am looking for the compression functionality in ZFS, but I am not in a hurry.
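On the compression question: ZFS compression is transparent and set per dataset (e.g. `zfs set compression=lz4 pool/dataset`). As a rough illustration of how well document-like text compresses — using zlib here only because it is easy to run, not because ZFS uses it — a quick check with made-up sample text:

```python
import zlib

# Rough illustration using zlib (not ZFS's lz4/zstd): repetitive,
# document-like text typically shrinks to a small fraction of its size.
sample = b"The quick brown fox jumps over the lazy dog. " * 1000
compressed = zlib.compress(sample)
print(f"{len(sample)} -> {len(compressed)} bytes "
      f"({len(sample) / len(compressed):.0f}x smaller)")
```

Real-world ratios on mixed documents will be far lower than this repetitive sample suggests, but text-heavy shares usually still benefit noticeably.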
  13. Okay, I guess I identified one issue on my own - the find processes came from the Dynamix Cache Directories plugin. But the high CPU load from shfs is still there, even after changing /mnt/user to /mnt/cachea in the Docker and VM settings. Any idea?
  14. Hello fellow Unraiders, I was about to unbalance my HDDs to follow invaderone's video to convert disk1 to ZFS. While doing so, I realized that when trying to move all data from disk1 to disk2, disk3, disk4, ... the performance drops from 100 MB/s at the beginning to 15 MB/s after 10 hours. I then stopped the unbalance/rsync task and started some investigation. 1. I disabled Docker, VMs and array auto-start to be on the safe side, then restarted my server (22 data and 2 parity HDDs). 2. Using top, I found this task consuming 60+% CPU: /usr/local/bin/shfs /mnt/user -disks 8388607 -o default_permissions,allow_other,noatime -o remember=0 This task never seems to end. 3. There are also several find commands running, like: root 9382 15493 0 13:33 ? 00:00:00 /bin/timeout 7102 find /mnt/user/Exchange -noleaf -maxdepth 10 root 9383 9382 9 13:33 ? 00:00:10 find /mnt/user/Exchange -noleaf -maxdepth 10 But these find commands end after a while. Any idea what this shfs task is doing and why it consumes almost one CPU core? Thanks! unraid-diagnostics-20230716-1329.zip
  15. I remember having seen this message before the reboot but didn't consider it important. So, yes, it seems there is an issue with this.
  16. I rebooted the server and the problem is gone, also with Firefox. Strange...
  17. Hi, I am using v6.11.5 and after a new Nvidia driver was available, I rebooted my server. After the reboot, I started the array via GUI and everything is normal - the array is available, also via network, VMs and Dockers started. One thing is very weird though - in the GUI the array still shows as "off-line", also in the Main view. Any idea what is causing this issue or a workaround? Thanks for your support!
  18. Thanks, it works with v6.11.3 again. Now I just have to wait until the array has been rebuilt for two empty HDDs. 😒
  19. Hi, my server allows up to 24 HDDs. Currently I use 21 of the HDD slots: 2 parity drives and 19 data drives. Now I had the idea to add 3 more drives (Disk 20, 21 and 22) at the same time, and something went wrong. What happened? 1. After adding the three HDDs, I received the normal message that new devices have been discovered and need to be cleared first. This was done without any problem. 2. The next step was to shut down the array, assign the 3 new drives to the array and start the array again. 3. Now I got the message that the drives must be formatted. I acknowledged this and the formatting started. 4. For whatever reason the formatting went wrong, because one drive (Disk 20) showed multiple sector errors. Now I had my old array with 21 HDDs plus 3 drives that showed "unmountable: unsupported partition layout". 5. I removed Disk 20, which had errors, and started the array again. Now Disk 20 showed as being emulated although there never was any data on it. Disks 21 and 22 still showed "unmountable: unsupported partition layout", so I tried to format them. 6. The format stopped after some minutes without an error message. 7. I removed Disk 21 from the array. Disks 20 and 21 are now shown as emulated although there never was any data or any parity built for them. 8. Disk 22 is now the only new HDD in the existing array (Disks 1-19), but it shows "unmountable: unsupported partition layout" again. When trying to format Disk 22, the format command stops immediately. The "old" array (Disks 1-19) is still working fine and no data is lost. But now I have the old array plus two drives (Disks 20 and 21) that I removed and that are now emulated, and Disk 22 that is still in the server but I cannot format. I think the best way to solve this issue would be to remove Disks 20, 21 and 22 from the array and then add them disk by disk (instead of all at once). But does anybody know how I can delete Disks 20, 21 and 22 from the array without losing any data?
Thanks for your support! 😀 unraid-diagnostics-20221114-1324.zip
  20. Not sure how to do this. How can you exclude disks from shares?
  21. Hi, on my Unraid server two data drives were disabled due to an error. I have two parity drives and two spare drives, but I wonder how I can rebuild the two data drives most effectively. Can I just put the two new drives in the system, add them to the array, and let Unraid do its magic with both disks in parallel at the same time? Or do I have to add one new drive first, let Unraid rebuild the array, then add the second drive and let Unraid rebuild the array for the second drive? Any sound idea is greatly appreciated. 😀
  22. Maybe you can elaborate on this? The only way I know is to replace, disk by disk, the smaller drive with the bigger one, for all drives of the existing array. But when upgrading 21 HDDs from 6 to 16 TB, the capacity would increase by 190 TB... way too much in terms of new capacity and also price. I am searching for a way to replace the 21x 6TB drives with 10x 16TB drives (2x parity plus 8x 16TB data). This would increase the overall capacity to 128 TB, compared to the current 114 TB, and I could add more drives later on. Is there a way to achieve this without having a second array?
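The capacity figures above can be double-checked with simple arithmetic, given that parity drives contribute no usable capacity in Unraid (drive counts taken from the post):

```python
def usable_tb(total_drives, parity_drives, drive_tb):
    # In Unraid, parity drives hold no data, so usable capacity is
    # (data drives) * (drive size).
    return (total_drives - parity_drives) * drive_tb

current = usable_tb(21, 2, 6)        # 19 data drives * 6 TB  = 114 TB
all_upgraded = usable_tb(21, 2, 16)  # 19 data drives * 16 TB = 304 TB
proposed = usable_tb(10, 2, 16)      # 8 data drives * 16 TB  = 128 TB
print(current, all_upgraded - current, proposed)  # 114 190 128
```

So replacing every drive would indeed add 190 TB, while the 10x 16TB layout lands at 128 TB — a modest 14 TB gain over the current 114 TB.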