jj_uk

Members
  • Content Count

    104
  • Joined

  • Last visited

Community Reputation

5 Neutral

About jj_uk

  • Rank
    Advanced Member

  1. jj_uk

    Server got stuck

    I also had to force a shutdown (hold power button) yesterday.... I'm not sure what happened or why everything stopped. How do you go about figuring out what happened?
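
    For anyone hitting the same thing: the first place I'd look is the system log, bearing in mind that on unRAID /var/log lives in RAM, so anything from before a forced power-off is lost unless it was being mirrored elsewhere. A rough sketch (standard tools; nothing unRAID-specific beyond the diagnostics command, which if I remember right writes a zip to /boot/logs on the flash drive):

        # scan the current syslog for obvious trouble
        tail -n 200 /var/log/syslog | grep -iE "error|oom|panic|blocked"

        # capture a full diagnostics bundle to attach to a forum post
        diagnostics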
  2. jj_uk

    Preclear plugin

    I tried that, and having now done it again, they are clearing. However, the 1st disk is clearing at 143MB/s, the other 5 preclear processes are around 10MB/s. Previously, they all ran at full speed.
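
    As a sanity check (a sketch only; the /dev/sd[b-g] range is a placeholder for whatever letters the six new disks actually got), the raw read speed of each drive can be measured outside the plugin to rule out a drive or controller bottleneck:

        # non-destructive buffered read benchmark per drive
        for d in /dev/sd[b-g]; do
            echo "== $d =="
            hdparm -t "$d"
        done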
  3. jj_uk

    Preclear plugin

    The recent update to the preclear plugin seems to have broken it for me. I am trying to "erase and clear" 6 disks. Only one of the disks is actually doing something; the others are stuck with the message "Starting...". Thoughts?
  4. Ahh, it is the Win10 VM stopping the array from stopping. If I stop the VM and then stop the array, the array stops. If I don't stop the VM, so that unRAID has to stop it, the array never stops. I've installed the guest agent, but it still doesn't stop. At least I now know what's causing it.
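
    One way to test this outside the GUI is to send the guest an ACPI shutdown from the host with virsh and see whether it ever powers down on its own (the domain name "Windows 10" is assumed from the VM tab; confirm it with the first command):

        virsh list --all                 # confirm the exact domain name and state
        virsh shutdown "Windows 10"      # polite ACPI shutdown request
        virsh domstate "Windows 10"      # should eventually report "shut off"
        # virsh destroy "Windows 10"     # last resort: hard power-off of the guest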
  5. jj_uk

    [Plug-In] Community Applications

    Squid - sell your plugin to the unRAID team! This plugin should come pre-installed with unRAID.
  6. In the Windows VM's CD drive (virtio-win-0.1.1) I have a folder called "guest-agent". Is this the guest tools? In VMs -> Windows 10 -> Edit, I can't see a shutdown method. The only reference to shutting down VMs I can find is in the unRAID GUI -> Settings -> VM Manager -> "Upon host shutdown", and it's already set to Hibernate.
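
    As far as I can tell, that guest-agent folder on the virtio-win ISO holds the QEMU guest agent installer (an MSI), which is separate from the VirtIO drivers themselves. Once it's installed inside Windows, the host can check whether the agent answers; the domain name below is an assumption:

        # from the unRAID console, ping the agent running inside the guest
        virsh qemu-agent-command "Windows 10" '{"execute":"guest-ping"}'
        # an empty {"return":{}} reply means the agent is installed and reachable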
  7. root@tower1:~# lsof /mnt/disk1
     COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
     bash 11502 root 0u CHR 136,4 0t0 7 /dev/pts/4
     bash 11502 root 1u CHR 136,4 0t0 7 /dev/pts/4
     bash 11502 root 2u CHR 136,4 0t0 7 /dev/pts/4
     bash 11502 root 255u CHR 136,4 0t0 7 /dev/pts/4
     lsof 11620 root 0u CHR 136,4 0t0 7 /dev/pts/4
     lsof 11620 root 1u CHR 136,4 0t0 7 /dev/pts/4
     lsof 11620 root 2u CHR 136,4 0t0 7 /dev/pts/4
     preclear_ 11634 root 0u CHR 136,2 0t0 5 /dev/pts/2
     ttyd 12438 root 9u CHR 5,2 0t0 31 /dev/ptmx
     bash 15944 root 0u CHR 136,2 0t0 5 /dev/pts/2
     bash 15944 root 1u CHR 136,2 0t0 5 /dev/pts/2
     bash 15944 root 2u CHR 136,2 0t0 5 /dev/pts/2
     bash 15944 root 255u CHR 136,2 0t0 5 /dev/pts/2
     preclear_ 15955 root 0u CHR 136,2 0t0 5 /dev/pts/2
     preclear_ 15955 root 1u CHR 136,2 0t0 5 /dev/pts/2
     preclear_ 15964 root 1u CHR 136,2 0t0 5 /dev/pts/2
     preclear_ 15964 root 2u CHR 136,2 0t0 5 /dev/pts/2
     qemu-syst 21174 root 10u CHR 5,2 0t0 31 /dev/ptmx
     qemu-syst 21174 root 22r REG 254,0 316628992 2166604 /mnt/disk1/isos/virtio-win-0.1.141-1.iso (deleted)
     qemu-syst 21174 root 23r REG 254,0 316628992 2166604 /mnt/disk1/isos/virtio-win-0.1.141-1.iso (deleted)
     preclear_ 21632 root 0u CHR 136,2 0t0 5 /dev/pts/2
     preclear_ 21632 root 1u CHR 136,2 0t0 5 /dev/pts/2
     tmux:\x20 22245 root 7u CHR 5,2 0t0 31 /dev/ptmx
     tmux:\x20 22245 root 10u CHR 5,2 0t0 31 /dev/ptmx
     tmux:\x20 22245 root 11u CHR 5,2 0t0 31 /dev/ptmx
     bash 22246 root 0u CHR 136,1 0t0 4 /dev/pts/1
     bash 22246 root 1u CHR 136,1 0t0 4 /dev/pts/1
     bash 22246 root 2u CHR 136,1 0t0 4 /dev/pts/1
     bash 22246 root 255u CHR 136,1 0t0 4 /dev/pts/1
     bash 32057 root 0u CHR 136,3 0t0 6 /dev/pts/3
     bash 32057 root 1u CHR 136,3 0t0 6 /dev/pts/3
     bash 32057 root 2u CHR 136,3 0t0 6 /dev/pts/3
     bash 32057 root 255u CHR 136,3 0t0 6 /dev/pts/3
     root@tower1:~# lsof /mnt/cache
     COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
     bash 11502 root 0u CHR 136,4 0t0 7 /dev/pts/4
     bash 11502 root 1u CHR 136,4 0t0 7 /dev/pts/4
     bash 11502 root 2u CHR 136,4 0t0 7 /dev/pts/4
     bash 11502 root 255u CHR 136,4 0t0 7 /dev/pts/4
     ttyd 12438 root 9u CHR 5,2 0t0 31 /dev/ptmx
     bash 15944 root 0u CHR 136,2 0t0 5 /dev/pts/2
     bash 15944 root 1u CHR 136,2 0t0 5 /dev/pts/2
     bash 15944 root 2u CHR 136,2 0t0 5 /dev/pts/2
     bash 15944 root 255u CHR 136,2 0t0 5 /dev/pts/2
     preclear_ 15955 root 0u CHR 136,2 0t0 5 /dev/pts/2
     preclear_ 15955 root 1u CHR 136,2 0t0 5 /dev/pts/2
     preclear_ 15964 root 1u CHR 136,2 0t0 5 /dev/pts/2
     preclear_ 15964 root 2u CHR 136,2 0t0 5 /dev/pts/2
     sleep 17657 root 0u CHR 136,2 0t0 5 /dev/pts/2
     sleep 17657 root 1u CHR 136,2 0t0 5 /dev/pts/2
     lsof 17711 root 0u CHR 136,4 0t0 7 /dev/pts/4
     lsof 17711 root 1u CHR 136,4 0t0 7 /dev/pts/4
     lsof 17711 root 2u CHR 136,4 0t0 7 /dev/pts/4
     qemu-syst 21174 root 10u CHR 5,2 0t0 31 /dev/ptmx
     qemu-syst 21174 root 20u REG 0,33 107374182400 335 /mnt/cache/domains/001 Windows 10/vdisk1.img
     qemu-syst 21174 root 21u REG 0,33 107374182400 335 /mnt/cache/domains/001 Windows 10/vdisk1.img
     preclear_ 21632 root 0u CHR 136,2 0t0 5 /dev/pts/2
     preclear_ 21632 root 1u CHR 136,2 0t0 5 /dev/pts/2
     tmux:\x20 22245 root 7u CHR 5,2 0t0 31 /dev/ptmx
     tmux:\x20 22245 root 10u CHR 5,2 0t0 31 /dev/ptmx
     tmux:\x20 22245 root 11u CHR 5,2 0t0 31 /dev/ptmx
     bash 22246 root 0u CHR 136,1 0t0 4 /dev/pts/1
     bash 22246 root 1u CHR 136,1 0t0 4 /dev/pts/1
     bash 22246 root 2u CHR 136,1 0t0 4 /dev/pts/1
     bash 22246 root 255u CHR 136,1 0t0 4 /dev/pts/1
     bash 32057 root 0u CHR 136,3 0t0 6 /dev/pts/3
     bash 32057 root 1u CHR 136,3 0t0 6 /dev/pts/3
     bash 32057 root 2u CHR 136,3 0t0 6 /dev/pts/3
     bash 32057 root 255u CHR 136,3 0t0 6 /dev/pts/3
     root@tower1:~#

     I don't know what to look for? I've just tried to stop the array and again it won't stop.
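
     Worth noting in output like this: the CHR entries are just terminal devices and can be ignored; the lines that actually pin a mount are the REG (regular file) ones, and here they all belong to qemu-syst, i.e. the Windows 10 VM holding its vdisk on /mnt/cache and a (deleted) virtio ISO on /mnt/disk1. A quicker way to filter, assuming fuser is available:

        lsof /mnt/disk1 /mnt/cache | grep ' REG '    # only open regular files on those mounts
        fuser -vm /mnt/disk1 /mnt/cache              # list the processes keeping each mount busy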
  8. Would Krusader do this? I was browsing around with Krusader earlier, but can't recall which directory I was in. I also used unRAID's "Shares -> Browse Files" (right side of the share listing) to look into a few folders. Maybe one of those could cause this?
  9. I've looked in syslog.txt and I see that 'Disk 1' and 'cache' are causing the issue with unmounting:
     Aug 26 11:08:27 tower1 emhttpd: Unmounting disks...
     Aug 26 11:08:27 tower1 emhttpd: shcmd (1152): umount /mnt/disk1
     Aug 26 11:08:27 tower1 root: umount: /mnt/disk1: target is busy.
     Aug 26 11:08:27 tower1 emhttpd: shcmd (1152): exit status: 32
     Aug 26 11:08:27 tower1 emhttpd: shcmd (1153): umount /mnt/disk2
     Aug 26 11:08:27 tower1 kernel: XFS (dm-1): Unmounting Filesystem
     Aug 26 11:08:27 tower1 emhttpd: shcmd (1154): rmdir /mnt/disk2
     Aug 26 11:08:27 tower1 emhttpd: shcmd (1155): umount /mnt/disk3
     Aug 26 11:08:27 tower1 kernel: XFS (dm-2): Unmounting Filesystem
     Aug 26 11:08:27 tower1 emhttpd: shcmd (1156): rmdir /mnt/disk3
     Aug 26 11:08:27 tower1 emhttpd: shcmd (1157): umount /mnt/disk4
     Aug 26 11:08:27 tower1 kernel: XFS (dm-3): Unmounting Filesystem
     Aug 26 11:08:27 tower1 emhttpd: shcmd (1158): rmdir /mnt/disk4
     Aug 26 11:08:27 tower1 emhttpd: shcmd (1159): umount /mnt/cache
     Aug 26 11:08:27 tower1 root: umount: /mnt/cache: target is busy.
     Aug 26 11:08:27 tower1 emhttpd: shcmd (1159): exit status: 32
     Aug 26 11:08:27 tower1 emhttpd: Retry unmounting disk share(s)...
     Aug 26 11:08:32 tower1 emhttpd: Unmounting disks...
     Aug 26 11:08:32 tower1 emhttpd: shcmd (1160): umount /mnt/disk1
     Aug 26 11:08:32 tower1 root: umount: /mnt/disk1: target is busy.
     Aug 26 11:08:32 tower1 emhttpd: shcmd (1160): exit status: 32
     Aug 26 11:08:32 tower1 emhttpd: shcmd (1161): umount /mnt/cache
     Aug 26 11:08:32 tower1 root: umount: /mnt/cache: target is busy.
     Aug 26 11:08:32 tower1 emhttpd: shcmd (1161): exit status: 32
     Aug 26 11:08:32 tower1 emhttpd: Retry unmounting disk share(s)...
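
     A quick way to pull just these lines out of a long syslog (plain grep, nothing unRAID-specific):

        grep -E "umount|target is busy|Retry unmounting" /var/log/syslog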
  10. I have CA set to update plugins automatically, but to wait 3 days before doing so. I've now updated it, thanks!
  11. I rely on Preclear to stress-test new drives with intense read/writes for a week before adding them to the array. /boot/logs/ contains the following files:
      - tower1-diagnostics-20180826-1137.zip
      - unbalance.log
      Do you need these?
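
      For what it's worth, a comparable manual burn-in can be run with badblocks while the plugin is misbehaving. This is a destructive write test, so the device name below is strictly a placeholder for a brand-new, empty drive:

         badblocks -wsv -b 4096 /dev/sdX   # four full write+verify passes over the whole drive (wipes it)
         smartctl -a /dev/sdX              # then check reallocated/pending sector counts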
  12. I followed the above and turned off Docker in Settings, but it still didn't stop, so I used the GUI to shut down. It has now restarted and diagnostics are attached. Any ideas? tower1-diagnostics-20180826-1149.zip
  13. Did you solve this? I have the same issue..
  14. jj_uk

    Mover, is it running?

    Ahh ok, I assumed that mover balanced data between disks. I'll just replace them one at a time and allow the server to rebuild them.
  15. jj_uk

    Mover, is it running?

    Is there any way in the GUI to see if mover is running, and if it is, roughly how long is left, either as a time estimate or a percentage? I've set all my shares to exclude 2 disks because I'm replacing them soon, and kicked off mover. I'd like to see when mover has finished moving the files from those disks over to other drives.
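
    Outside the GUI, one rough way to tell is to check for the mover process and watch how much data is left on the disks being emptied (standard tools only; the disk numbers below are placeholders):

        pgrep -lf mover                # any output means mover is still running
        du -sh /mnt/disk5 /mnt/disk6   # remaining data on the excluded disks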