golli53

Everything posted by golli53

  1. Thanks, I think I got confused. So could I add the new drive, reassign the old parity to a new data drive, reassign the new drive to parity, and then start the array?
  2. I'm trying to replace my 10TB parity with a new 14TB drive (my current setup has 2x 10TB parity). I've read through the parity swap procedure below, but am wondering if there's anything different I need to do for a dual parity setup, and whether I'll still be protected if a drive goes bad during the procedure. https://wiki.unraid.net/The_parity_swap_procedure
  3. Thanks! I played around with this, but unfortunately I'm hitting another problem. If I set 10 containers all to "3,4,10,11", I get 100% usage on CPU 3 and 0% on 4, 10, and 11. Those 4 CPUs are all isolated.
  4. Is there a way to change these settings from the shell? It's still not working through the GUI. I restarted and still have the same issues. I also tried disabling VMs, since my interest is changing the Docker pinning, but still no cigar.
  5. If I try to select new pinning options or deselect old ones and hit APPLY or DONE, nothing seems to happen. No errors come up and nothing new appears in the log, so it looks as though the change was successful, but the settings are back to the old ones when I return to the page. I'm using version 6.8.3.
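     For reference, the closest thing I've found from the shell in the meantime is plain `docker update`, which repins a running container. A rough sketch (the container name is a placeholder, and as far as I can tell this doesn't persist into the unRAID template, so the GUI would still need fixing for a permanent change):
     # repin a running container to CPUs 3,4,10,11 (placeholder container name)
     docker update --cpuset-cpus="3,4,10,11" my_container
     # check which CPUs the container is actually allowed to use
     docker inspect --format '{{.HostConfig.CpusetCpus}}' my_container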
  6. @Fiservedpi wondering what ended up happening here and if you got any resolution. I just got the same error on my server.
  7. Ok, finally solved it. In case anyone runs into this, `umount -l /dev/loop2` worked
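     For anyone who lands here from a search, the condensed sequence that got me unstuck (the device name and PID are from my box; check your own `losetup` and `lsof` output):
     losetup                # shows /dev/loop2 backed by /mnt/cache/system/docker/docker.img
     lsof /dev/loop2        # a leftover containerd shim was still holding it in my case
     kill 15050             # the PID reported by lsof above
     umount -l /dev/loop2   # losetup -d did nothing, but a lazy unmount released it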
  8. I think the docker image on /mnt/cache that's mounted on /dev/loop2 is preventing the unmount. I killed a zombie container process accessing /dev/loop2, but I still cannot detach /dev/loop2 and am still stuck trying to unmount. I tried everything here: https://stackoverflow.com/questions/5881134/cannot-delete-device-dev-loop0
     root@Tower:/# losetup
     NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                           DIO LOG-SEC
     /dev/loop1         0      0         1  1 /boot/bzfirmware                      0     512
     /dev/loop2         0      0         1  0 /mnt/cache/system/docker/docker.img   0     512
     /dev/loop0         0      0         1  1 /boot/bzmodules                       0     512
     root@Tower:/# lsof /dev/loop2
     COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
     container 15050 root    4u  FIFO   0,82      0t0 2917 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stdout.log
     container 15050 root    7u  FIFO   0,82      0t0 2917 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stdout.log
     container 15050 root    8u  FIFO   0,82      0t0 2918 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stderr.log
     container 15050 root    9u  FIFO   0,82      0t0 2918 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stderr.log
     root@Tower:/# kill 15050
     root@Tower:/# lsof /dev/loop2
     root@Tower:/# losetup -d /dev/loop2   # fails silently
     root@Tower:/# echo $?
     0
     root@Tower:/# losetup
     NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                           DIO LOG-SEC
     /dev/loop1         0      0         1  1 /boot/bzfirmware                      0     512
     /dev/loop2         0      0         1  0 /mnt/cache/system/docker/docker.img   0     512
     /dev/loop0         0      0         1  1 /boot/bzmodules                       0     512
     root@Tower:/# lsof | grep loop2
     loop2     12310 root  cwd   DIR      0,2   440   2 /
     loop2     12310 root  rtd   DIR      0,2   440   2 /
     loop2     12310 root  txt   unknown            /proc/12310/exe
     root@Tower:/# kill -9 12310   # not sure what this is, but killing it fails
     root@Tower:/# lsof | grep loop2
     loop2     12310 root  cwd   DIR      0,2   440   2 /
     loop2     12310 root  rtd   DIR      0,2   440   2 /
     loop2     12310 root  txt   unknown            /proc/12310/exe
     root@Tower:/# modprobe -r loop && modprobe loop   # try to reload the module, but it's builtin
     modprobe: FATAL: Module loop is builtin.
  9. I tried to Stop my array and it's currently still stuck on `Retry unmounting disk share(s)...` after 30 minutes. Some diagnostics from the command line are below (I cannot access diagnostics from the GUI anymore). Prior to this, I noticed one of my dockers was having weird issues... it seemingly stopped after I killed it, but kept being listed as running in `docker ps`. I was using `docker exec` to execute some commands in that container and I think some processes got stuck in the container.
     root@Tower:/# tail -n 5 /var/log/syslog
     Apr 28 14:11:36 Tower emhttpd: Unmounting disks...
     Apr 28 14:11:36 Tower emhttpd: shcmd (43474): umount /mnt/cache
     Apr 28 14:11:36 Tower root: umount: /mnt/cache: target is busy.
     Apr 28 14:11:36 Tower emhttpd: shcmd (43474): exit status: 32
     Apr 28 14:11:36 Tower emhttpd: Retry unmounting disk share(s)...
     root@Tower:/# lsof /mnt/cache
     root@Tower:/# fuser -mv /mnt/cache
                          USER        PID ACCESS COMMAND
     /mnt/cache:          root     kernel mount  /mnt/cache
  10. Gotcha. It seemed aggressive for it to change remote permissions by default, as this affects permissions locally on the remote server. For instance, for any home share, this will break SSH authentication for all clients to that remote server. Any read-only permissions would also be permanently changed for all clients. I'll use a script as a workaround - thanks for the suggestion.
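     For reference, the workaround is just a chmod in the Unassigned Devices device script, roughly like this (the mount point and mode are from my setup, and I'm assuming the stock UD script layout where `$ACTION` is set to 'ADD' on mount):
     #!/bin/bash
     # UD device script sketch: restore the remote permissions that UD forces to 0777 on mount
     MOUNTPOINT="/mnt/disks/myname"   # placeholder: wherever UD mounts the NFS share
     case "$ACTION" in
       'ADD')
         # a chmod through the NFS mount propagates back to the remote home folder
         chmod 755 "$MOUNTPOINT"
         ;;
     esac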
  11. @dlandon just trying to understand whether this is a bug in unRAID or in Unassigned Devices, so I can report it in the right place - thanks in advance
  12. When I mount an NFS share, the remote directory gets its permissions changed to 0777. Since I am mounting my home folder (/home/myname), this breaks SSH authentication on my remote Ubuntu server, which only works if the home folder is 0755. Right now I have to manually run chmod each time after Unassigned Devices mounts the directory.
  13. +1, I think this is a great idea. In addition to it being generally useful, sometimes I need to manually cancel Mover because it's slowing down my array, but if it's almost done, I'd prefer not to cancel.
  14. Which version are you using? I saw a significant performance drop starting in 6.8.x, with only partial recovery after modifying Tunable Direct IO and the SMB case settings. 6.6.7 at least should be quite a bit faster and doesn't suffer from the multi-stream read/write issues in 6.7.x.
  15. This fixes the stat issue for very large folders - thanks for your hard work! Unfortunately, SMB is still quite slow - I think the listdir calls are still ~2x slower than with prior versions, despite Hard Links being disabled. With the tweaks, my scripts now run instead of stalling, though they are still noticeably slower. I'll try to reproduce and compare when I get a chance to try 6.8.2 again. Regardless, thanks for your efforts here.
  16. All on cache (which is 2x SSD RAID1 btrfs for me). The same issue occurs with a folder that's on the array though (spread across disks). It seems to be an SMB issue, because I don't see extra lag when calling stat from the unRAID shell or through NFS from a Linux client.
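     For anyone wondering what I mean by the SMB case settings, this is roughly the kind of per-share override I mean (the share name is a placeholder; on unRAID it can be pasted into Settings > SMB > SMB Extras):
     [myshare]
         case sensitive = yes
         preserve case = yes
         short preserve case = yes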
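     The quick comparison I mean is roughly this (the mount points are placeholders; the same call is fast locally and over NFS, and only slow over SMB):
     # on the unRAID shell: effectively instant
     time stat /mnt/user/myshare/testdir/somefile
     # on a Linux client, over an NFS mount of the same share: also fast
     time stat /mnt/nfs/myshare/testdir/somefile
     # on the same client, over the SMB mount: noticeably slower on 6.8.x
     time stat /myshare/testdir/somefile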
  17. @limetech First of all, thank you for taking the time to dig into this. From my much more limited testing, the issue seems to be a painful one to track down. I upgraded yesterday, and while this tweak solves the listdir times, stat times for missing files in large directories are still bugged (observation 2 in the post below). For convenience, I reproduced it in Linux and wrote this simple bash script:
     # unraid
     cd /mnt/user/myshare
     mkdir testdir
     cd testdir
     touch dummy{000000..200000}

     # client
     sudo mkdir /myshare
     sudo mount -t cifs -o username=guest //192.168.1.100/myshare /myshare
     while true; do start=$SECONDS; stat /myshare/testdir/does_not_exist > /dev/null 2>&1; end=$SECONDS; echo "$((end-start)) "; done
     On 6.8.x, each call takes 7-8s (vs 0-1s on previous versions), regardless of hard link support. The time complexity is nonlinear in the number of files (calls go up to 15s if I increase the number of files by 50%, to 300k).
  18. I don't have a test server for unRAID, so I can only try out these suggestions on a weekend when I don't need my production environment up and running. For now, I'm going back to 6.6.7 to avoid slow SMB and the concurrent disk access problem in 6.7.2. Also, I think there was something else going on in addition to the 3-4x slower directory listings: some of my apps would lag for 20 minutes compared to 5 seconds, so I think there were additional SMB performance regressions. I detailed some other slow behavior in the Prerelease thread, but those were just the regressions I happened to notice from debugging the code in a couple of my apps one weekend, so I may have missed others.
  19. I guess it's the definition of near instantaneously. In my testing over many thousands of calls, I was averaging 2.5s vs 0.7s (for 6.7.2) for 3k items. When 2 programs are accessing SMB simultaneously, that becomes 5s vs 1.4s. For 10 programs, 25s vs 7s. I think it's common for services to access SMB shares on a server simultaneously.
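     Something like this rough sketch reproduces the concurrency effect (/myshare/testdir is a placeholder for the SMB-mounted folder with ~3k items):
     #!/bin/bash
     # launch N simultaneous listing loops against the SMB mount and time each one
     N=10
     for i in $(seq "$N"); do
       (
         start=$SECONDS
         ls /myshare/testdir > /dev/null
         echo "worker $i: $((SECONDS - start))s"
       ) &
     done
     wait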
  20. Will WireGuard eventually be integrated into unRAID? I think WireGuard is very cool and I definitely appreciate the work on new functionality, but I'm curious about the design decision here. It's unrelated to storage (and I would personally hesitate to host my VPN server on my storage server, in case I need to reboot my storage remotely). WireGuard can also be run as a docker container if the kernel supports it (here's a popular one: https://github.com/cmulk/wireguard-docker). The Plugin/Community App/Docker route may be cleaner and less demanding on developer resources.
  21. Security measures like fail2ban are much easier to set up on Ubuntu than on unRAID (to prevent brute-force attacks on password-protected services). Otherwise, there's no difference as long as you only expose the port for the specific service.
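     To illustrate the docker route (a rough sketch only; check the linked repo for the image's actual options - I'm assuming the usual WireGuard-in-Docker pattern of NET_ADMIN/SYS_MODULE capabilities, a mounted wg0.conf, and a published UDP port):
     docker run -d --name wireguard \
       --cap-add NET_ADMIN --cap-add SYS_MODULE \
       -v /path/to/wg0.conf:/etc/wireguard/wg0.conf \
       -p 51820:51820/udp \
       cmulk/wireguard-docker:buster   # tag per the repo's README; adjust as needed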
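     For concreteness, the kind of setup I mean on Ubuntu is only a few lines (a minimal sketch; the thresholds are arbitrary examples):
     sudo apt install fail2ban
     # minimal sshd jail: ban an IP for an hour after 5 failed logins within 10 minutes
     printf '%s\n' '[sshd]' 'enabled = true' 'maxretry = 5' 'findtime = 600' 'bantime = 3600' \
       | sudo tee /etc/fail2ban/jail.local > /dev/null
     sudo systemctl restart fail2ban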
  22. There are two issues, and the sample code has a section for each (preceded by a comment header). Part 1 of the code gets directory listings. That seems to be slower on 6.8.0 for all directories, and the slowdown is noticeable to a human on a single call, without any concurrency, starting at a couple thousand files. Part 2 of the sample code only calls stat. I can only reproduce this issue for very large directories, but maybe that's because it takes a large directory to produce a measurable difference.
  23. I never call a directory listing in that directory. I only open specific files by naming convention, so adding subdirs would make things more inefficient because I would have to check for a file in each subdir. My current setup works very fast with a normal Samba server, e.g. 6.7.2 or Ubuntu. The first issue is a problem for much smaller directories (a few thousand files).
  24. 😀 Yes, it's a very big one for automatically archiving JSON files. There's no natural categorization for assigning subdirectories, so it wouldn't improve the speed for my app.