MvL

Members
  • Posts

    593
  • Joined

  • Last visited

About MvL

  • Birthday 08/04/1972

  • Gender
    Male
  • Location
    Netherlands

Recent Profile Visitors

2768 profile views

MvL's Achievements

Enthusiast (6/14)

Reputation

5

  1. Apologies for the late response. It's set to Auto.
  2. When checking Settings --> Management Access I see a blue link (see screenshot). I don't think this is normal... The link leads to "main".
  3. I updated to the latest beta and now have problems with Docker and network settings: for some reason the custom br0 network is not accessible. I struggled with this problem for an hour yesterday, so I decided to re-install unRAID on my USB stick, again with the latest beta, but the problem still exists. Does anyone know if this is a bug in the latest beta?
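A quick way to check from the console whether the custom network survived the update (just a sketch; br0 is the network name from my setup, and it assumes the Docker daemon is running):

```shell
# Sketch: check whether the custom br0 Docker network still exists.
# "br0" is my network name; replace it with yours.
check_network() {
  if docker network inspect "$1" >/dev/null 2>&1; then
    echo "$1 present"
  else
    echo "$1 missing"
  fi
}
check_network br0
```

If it prints "missing" after the update, the network definition itself was lost rather than just being unreachable.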
  4. Not sure where to post this, but I'm searching for a Docker container that can host PDF magazines, so users can visit the site and read the PDFs. I've searched the Apps section on my unRAID server but can't find anything. Anyone!?
  5. Testing a bit. Is it just me, or is the Nextcloud container a bit slow? By the way, I use Postgres as the database; is this not advised?
  6. The latest picture is after a reboot, so everything is normal again... If you look at the first picture, it is missing. I also have this error in the logs:
  7. Indeed it is missing!

     root@Tower:~# df -h
     Filesystem      Size  Used Avail Use% Mounted on
     rootfs           63G  627M   63G   1% /
     tmpfs            32M  372K   32M   2% /run
     devtmpfs         63G     0   63G   0% /dev
     tmpfs            63G     0   63G   0% /dev/shm
     cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
     tmpfs           128M  308K  128M   1% /var/log
     /dev/sda1        29G  438M   29G   2% /boot
     /dev/loop0      9.2M  9.2M     0 100% /lib/modules
     /dev/loop1      7.3M  7.3M     0 100% /lib/firmware
     tmpfs           1.0M     0  1.0M   0% /mnt/disks
     /dev/md1        9.1T  9.1T   66M 100% /mnt/disk1
     /dev/md2        9.1T  9.1T  7.3G 100% /mnt/disk2
     /dev/md3        9.1T  867G  8.3T  10% /mnt/disk3
     /dev/md5        9.1T  9.0T  135G  99% /mnt/disk5
     /dev/md6        9.1T  9.0T  182G  99% /mnt/disk6
     /dev/md7        9.1T  7.6T  1.6T  83% /mnt/disk7
     /dev/md8        9.1T  7.3T  1.9T  80% /mnt/disk8
     /dev/md9        9.1T  7.4T  1.7T  82% /mnt/disk9
     /dev/md10       9.1T  5.5T  3.7T  60% /mnt/disk10
     /dev/md13       9.1T  3.3T  5.9T  36% /mnt/disk13
     /dev/sdb1       448G  336G  111G  76% /mnt/cache
     shfs             91T   68T   24T  75% /mnt/user0
     shfs             92T   68T   24T  75% /mnt/user
     /dev/sdi1       1.9T  156G  1.7T   9% /mnt/disks/downloads
     /dev/sdc1       1.9T  1.4T  520G  73% /mnt/disks/WDC_WD20EARS-00MVWB0_WD-WMAZA2219709
     /dev/loop2       64G  3.2G   59G   6% /var/lib/docker

     MariaDB is the only container I also used yesterday when I was messing around. Going to try another database. Let's see what happens! (Don't want to report something if I'm not completely sure.)
  8. Okay, it happened again... I was messing with two containers when it happened. To be clear, nothing was mounted via Unassigned Devices. I was messing with linuxserver/mariadb and linuxserver/nextcloud; yesterday, when it happened, I was also messing with containers: linuxserver/mariadb and linuxserver/piwigo. So if it is a container problem, then it must be linuxserver/mariadb, or is there something wrong with Docker in combination with SHFS? What uses SHFS? Unassigned Devices? If it were the linuxserver/mariadb container, there should be more reports? Update: No, of course, SHFS is also part of unRAID.

     df -h
     df: /mnt/user: Transport endpoint is not connected
     Filesystem      Size  Used Avail Use% Mounted on
     rootfs           63G  640M   63G   1% /
     tmpfs            32M  372K   32M   2% /run
     devtmpfs         63G     0   63G   0% /dev
     tmpfs            63G     0   63G   0% /dev/shm
     cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
     tmpfs           128M  392K  128M   1% /var/log
     /dev/sda1        29G  438M   29G   2% /boot
     /dev/loop0      9.2M  9.2M     0 100% /lib/modules
     /dev/loop1      7.3M  7.3M     0 100% /lib/firmware
     tmpfs           1.0M     0  1.0M   0% /mnt/disks
     /dev/md1        9.1T  9.1T   66M 100% /mnt/disk1
     /dev/md2        9.1T  9.1T  7.3G 100% /mnt/disk2
     /dev/md3        9.1T  867G  8.3T  10% /mnt/disk3
     /dev/md5        9.1T  9.0T  135G  99% /mnt/disk5
     /dev/md6        9.1T  9.0T  182G  99% /mnt/disk6
     /dev/md7        9.1T  7.6T  1.6T  83% /mnt/disk7
     /dev/md8        9.1T  7.3T  1.9T  80% /mnt/disk8
     /dev/md9        9.1T  7.4T  1.7T  82% /mnt/disk9
     /dev/md10       9.1T  5.5T  3.7T  60% /mnt/disk10
     /dev/md13       9.1T  3.3T  5.9T  36% /mnt/disk13
     /dev/sdb1       448G  336G  111G  76% /mnt/cache
     shfs             91T   68T   24T  75% /mnt/user0
     *********************************
     /dev/sdi1       1.9T  156G  1.7T   9% /mnt/disks/downloads
     /dev/sdc1       1.9T  1.4T  520G  73% /mnt/disks/WDC_WD20EARS-00MVWB0_WD-WMAZA2219709
     /dev/loop2       64G  3.2G   59G   6% /var/lib/docker
     /dev/loop3      1.0G   17M  905M   2% /etc/libvirt

     I think I'm missing "/mnt/user"? I'll have to compare after I've rebooted the server.
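For next time, a small check to spot a dead mount without reading through the whole df output (just a sketch; /mnt/user is the shfs path from my listing above):

```shell
# Sketch: a crashed FUSE/shfs mount gives "Transport endpoint is not
# connected", which makes stat on the mount point fail.
# /mnt/user is the path from my df output; adjust for your system.
check_mount() {
  if stat "$1" >/dev/null 2>&1; then
    echo "$1 OK"
  else
    echo "$1 BROKEN"
  fi
}
check_mount /mnt/user
```

Could be run from cron to catch the moment it breaks instead of finding out when the containers fail.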
  9. You can check the logs! It's on the right side, next to the Docker container. Maybe it gives you some clues...
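The same logs can also be pulled from the console instead of the web UI (a sketch; "nextcloud" is just a stand-in for whatever container is misbehaving):

```shell
# Sketch: print the last lines of a container's log from the CLI.
# "nextcloud" is an example name; replace with your container's name.
show_logs() {
  docker logs --tail "${2:-50}" "$1" 2>&1 || echo "no container named $1"
}
show_logs nextcloud
```

`docker logs -f <name>` follows the log live, which is handy while reproducing the problem.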
  10. In this post they also mention the error. https://forums.unraid.net/topic/77422-666-plex-smb-shares-unresponsive/
  11. Hi Johnnie, thanks for having a look. I have the impression that it happens when I mount my NAS with Unassigned Devices and forget to unmount it before I switch it off. I'm absolutely not certain this is the problem. To start, I won't mount any remote shares; if it happens again, I'll move on to the containers and VMs.
  12. It happened again, and this time I downloaded the diagnostics file! I have to correct my earlier report: the user directory is not gone, but it is colored red. The containers also stop working, and I have to reboot the server to make everything work again. Does this have something to do with the overlay file system? I think the problems start with a NAS that was mounted via Unassigned Devices: I powered down the NAS before I unmounted the share in Unassigned Devices, and I think something goes wrong there. tower-diagnostics-20200402-2007.zip
  13. I had a strange issue and I'd like to know what could be causing it. The "user" directory vanished from my /mnt directory, and the "user0" directory was colored red. I discovered it when my Docker containers were failing. I don't have a log file because I forgot to generate one before I rebooted.
  14. Johnnie, thanks for your reply. After some searching on the forums I found these useful commands: "lsof" and "kill". I think they are very useful for finding open files and killing the processes that hold them.
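Roughly how the two fit together (a sketch; /mnt/disks/remote is a made-up mount point, and lsof must be installed):

```shell
# Sketch: list PIDs holding files open under a mount point, then stop them.
# /mnt/disks/remote is a placeholder path; requires lsof.
busy_pids() {
  lsof -t "$1" 2>/dev/null | sort -u   # -t prints bare PIDs only
}
for pid in $(busy_pids /mnt/disks/remote); do
  kill "$pid" 2>/dev/null                        # SIGTERM first, give it a chance
  sleep 2
  kill -0 "$pid" 2>/dev/null && kill -9 "$pid"   # force-kill if still alive
done
```

Once nothing is holding files under the path, the unmount should go through cleanly.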
  15. I'm not sure, but maybe I posted this in the wrong forum. Maybe it should be moved to bug reports. Please move if needed!?