golli53

Members
  • Posts: 81
  • Joined
  1. Thanks - I think I got confused. So could I add the new drive, reassign the old parity to a new data drive, and reassign the new drive to parity, then start the array?
  2. I'm trying to replace my 10TB parity with a new 14TB drive (current setup has 2x 10TB parity). I've read through the parity swap procedure below, but am wondering if there's anything different I need to do for a dual parity setup, and whether I'd still be protected if a drive goes bad during the procedure. https://wiki.unraid.net/The_parity_swap_procedure
  3. Thanks! I played around with this and am getting another problem unfortunately. If I set 10 containers all to "3,4,10,11", I get 100% usage on CPU 3 and 0% on 4,10,11. Those 4 CPUs are all isolated.
  4. Is there a way to change these settings from the shell? Still not working through GUI. Restarted and still have the same issues. I also tried disabling VMs as my interest is changing the docker pinning, but still no cigar.
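In case it helps anyone else, pinning for an already-running container can also be changed from the shell with `docker update` (a sketch, not an official unRAID method; "mycontainer" is a placeholder name):

```shell
# Re-pin a running container to CPUs 3,4,10,11 without recreating it.
docker update --cpuset-cpus="3,4,10,11" mycontainer

# Confirm the new pinning took effect.
docker inspect --format '{{.HostConfig.CpusetCpus}}' mycontainer
```

Note this only changes the live container; settings applied from the GUI would presumably overwrite it the next time the container is recreated.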
  5. If I try to select new pinning options or deselect old ones and hit APPLY or DONE, nothing seems to happen. No errors come up and nothing new appears in the log, so it looks like the change was successful. But the settings are the same as the old ones when I go back to the page. I'm using version 6.8.3.
  6. @Fiservedpi wondering what ended up happening here and if you got any resolution. I just got the same error on my server.
  7. Ok, finally solved it. In case anyone runs into this, `umount -l /dev/loop2` worked
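For anyone hitting the same thing, a sketch of the recovery (device names and paths are specific to my system; the follow-up step is my assumption about what unRAID then did, not an official procedure):

```shell
# Lazy unmount: detach the busy loop device now; the kernel finishes
# the cleanup once nothing references it any more.
umount -l /dev/loop2

# Afterwards the loop device could be detached and the array stop
# proceeded (assumption: unRAID handled this once loop2 was released).
losetup -d /dev/loop2
```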
  8. I think the docker image on /mnt/cache that's mounted on /dev/loop2 is preventing the unmount. I killed a zombie container process accessing /dev/loop2, but I still cannot detach /dev/loop2 and am still stuck trying to unmount. I tried everything here: https://stackoverflow.com/questions/5881134/cannot-delete-device-dev-loop0

```
root@Tower:/# losetup
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                           DIO LOG-SEC
/dev/loop1         0      0         1  1 /boot/bzfirmware                      0     512
/dev/loop2         0      0         1  0 /mnt/cache/system/docker/docker.img   0     512
/dev/loop0         0      0         1  1 /boot/bzmodules                       0     512
root@Tower:/# lsof /dev/loop2
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
container 15050 root    4u  FIFO   0,82      0t0 2917 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stdout.log
container 15050 root    7u  FIFO   0,82      0t0 2917 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stdout.log
container 15050 root    8u  FIFO   0,82      0t0 2918 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stderr.log
container 15050 root    9u  FIFO   0,82      0t0 2918 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stderr.log
root@Tower:/# kill 15050
root@Tower:/# lsof /dev/loop2
root@Tower:/# losetup -d /dev/loop2   # fails silently
root@Tower:/# echo $?
0
root@Tower:/# losetup
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                           DIO LOG-SEC
/dev/loop1         0      0         1  1 /boot/bzfirmware                      0     512
/dev/loop2         0      0         1  0 /mnt/cache/system/docker/docker.img   0     512
/dev/loop0         0      0         1  1 /boot/bzmodules                       0     512
root@Tower:/# lsof | grep loop2
loop2     12310 root  cwd   DIR      0,2   440     2 /
loop2     12310 root  rtd   DIR      0,2   440     2 /
loop2     12310 root  txt   unknown              /proc/12310/exe
root@Tower:/# kill -9 12310   # not sure what this is, but killing it fails
root@Tower:/# lsof | grep loop2
loop2     12310 root  cwd   DIR      0,2   440     2 /
loop2     12310 root  rtd   DIR      0,2   440     2 /
loop2     12310 root  txt   unknown              /proc/12310/exe
root@Tower:/# modprobe -r loop && modprobe loop   # try to reload the module, but it's builtin
modprobe: FATAL: Module loop is builtin.
```
  9. I tried to stop my array and it's been stuck on `Retry unmounting disk share(s)...` for the last 30 minutes. Some diagnostics from the command line are below (I can no longer access diagnostics from the GUI). Prior to this, I noticed one of my dockers was having weird issues... it seemingly stopped after I killed it, but kept being listed as running in `docker ps`. I was using `docker exec` to execute some commands in that container and I think some processes got stuck in the container.

```
root@Tower:/# tail -n 5 /var/log/syslog
Apr 28 14:11:36 Tower emhttpd: Unmounting disks...
Apr 28 14:11:36 Tower emhttpd: shcmd (43474): umount /mnt/cache
Apr 28 14:11:36 Tower root: umount: /mnt/cache: target is busy.
Apr 28 14:11:36 Tower emhttpd: shcmd (43474): exit status: 32
Apr 28 14:11:36 Tower emhttpd: Retry unmounting disk share(s)...
root@Tower:/# lsof /mnt/cache
root@Tower:/# fuser -mv /mnt/cache
                     USER        PID ACCESS COMMAND
/mnt/cache:          root     kernel mount      /mnt/cache
```
  10. Gotcha. It seemed aggressive for it to change remote permissions by default, as this affects permissions locally on the remote server. For instance, for any home share, this will break SSH authentication for all clients to that remote server. Any read-only permissions would also be permanently changed for all clients. I'll use a script as a workaround - thanks for the suggestion.
  11. @dlandon just trying to understand whether this is a bug in unRAID or in Unassigned Devices, so I can report the issue in the right place - thanks in advance
  12. When I mount an nfs share, the remote directory gets permissions changed to 0777. Since I am mounting my home folder (/home/myname), this screws up SSH authentication on my remote Ubuntu server, since it only works if the home folder is 0755. Right now I have to manually invoke chmod each time after Unassigned Devices mounts the directory.
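For now my workaround is a one-line script run after the mount completes (the mount point below is a placeholder standing in for my setup):

```shell
# Put the remote home directory back to 0755 after Unassigned Devices
# mounts it, so sshd on the remote server accepts key auth again.
# "/mnt/remotes/ubuntu_home" is a placeholder mount point.
chmod 755 /mnt/remotes/ubuntu_home
```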
  13. +1 think this is a great idea. In addition to it being generally useful, sometimes I need to manually cancel Mover because it's slowing down my array, but if it's almost done, would prefer not to cancel.
  14. Which version are you using? I saw a significant performance drop starting in 6.8.X, with only partial recovery of performance after modifying Tunable Direct IO and SMB case settings. 6.6.7 at least should be quite a bit faster, and it doesn't suffer from the multistream read/write issues in 6.7.X.
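For reference, the tweaks I mean are roughly these (a sketch of my own settings, not an official recommendation): setting Tunable (enable Direct IO) to Yes (under Settings → Global Share Settings, if I remember the location right), plus case handling options in the SMB extras:

```
[global]
   case sensitive = yes
   preserve case = yes
   short preserve case = yes
```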
  15. This fixes the stat issue for very large folders - thanks for your hard work! Unfortunately, SMB is still quite slow - I think the listdir calls are still ~2x slower than with prior versions, even with Hard Links disabled. With the tweaks, my scripts now run instead of stalling, though they are still noticeably slower. I'll try to reproduce and compare when I get a chance to try 6.8.2 again. Regardless, thanks for your efforts here.
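The comparison I'm doing is essentially just timing directory listings over the mounted share, along these lines (the share path is a placeholder, not my actual mount):

```python
import os
import time

def time_listdir(path, repeats=5):
    """Return the average seconds per os.listdir() call on `path`."""
    start = time.perf_counter()
    for _ in range(repeats):
        os.listdir(path)
    return (time.perf_counter() - start) / repeats

if __name__ == "__main__":
    # Point this at an SMB mount of the share to compare unRAID versions.
    # "/mnt/smb/share" would be the placeholder path on a client machine;
    # "." is used here so the script runs anywhere.
    print(f"{time_listdir('.'):.6f} s per listdir")
```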