
Everything posted by caplam

  1. Trying to upgrade from 6.12.4 and stuck on "trying to unmount disk share: target is busy". The docker daemon is stopped, all VMs are stopped, and no file is open on the disk share. The share is on a 2-SSD btrfs pool. fuser tells me there is no process using the disk share. Edit: libvirt.img is still mounted. Edit 2: umount /etc/libvirt solved it.
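For anyone stuck in the same loop, these are the kinds of checks that found the culprit for me. This is only a sketch: the share path below is an example, not my actual path.

```shell
fuser -vm /mnt/user/share      # processes holding files under the mount (none, in my case)
lsof /mnt/user/share           # second opinion on open files
grep libvirt /proc/mounts      # this is what revealed libvirt.img was still mounted
umount /etc/libvirt            # releasing it let the share unmount normally
```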
  2. Thank you, it worked. 😀 A parity check has been triggered. I'll wait for it to finish and then manually clear the drive.
  3. I was trying to shrink the array with the clear-me script. I think I forgot to update the script and it throws an error (I think it tries to write zeros to md3 instead of md3p1). As stated in the log, I tried to stop the array but it's impossible, and now Unraid is stalled in the "retry unmounting disk share" loop. It tries to unmount disk3, which is not mounted. What should I do?
  4. Deleted docker.cfg and network.cfg and rebooted. The custom docker network eth0, with the macvlan driver, IPv6 enabled, and parent interface vhost0, gets created upon reboot. I had to set up the docker parameters again, and it started correctly. I'm now going to try to enable IPv6 on my entire network. Until now I had ignored IPv6, but I'm trying to understand how it works and how I can use it.
  5. I've been able to create a custom docker network with macvlan, but IPv6 is not enabled for it. Can I delete network.cfg and docker.cfg?
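For reference, an IPv6-enabled macvlan network can also be created by hand from the command line. This is only a hedged sketch: the subnets, gateways, and parent interface below are example values, not taken from my configuration.

```shell
# Example values only: adjust parent interface, subnets and gateways to your LAN.
docker network create -d macvlan \
  -o parent=vhost0 \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  --ipv6 --subnet=fd00:1::/64 --gateway=fd00:1::1 \
  eth0
```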
  6. It seems that I also have a problem in docker.cfg. Here's my file:
     DOCKER_ENABLED="yes"
     DOCKER_IMAGE_FILE="/mnt/user/docker/"
     DOCKER_IMAGE_SIZE="60"
     DOCKER_APP_CONFIG_PATH="/mnt/user/appdata/"
     DOCKER_APP_UNRAID_PATH=""
     DOCKER_READMORE="yes"
     DOCKER_CUSTOM_NETWORKS="eth1 "
     DOCKER_LOG_ROTATION="yes"
     DOCKER_LOG_SIZE="10m"
     DOCKER_LOG_FILES="2"
     DOCKER_AUTHORING_MODE="no"
     DOCKER_USER_NETWORKS="preserve"
     DOCKER_TIMEOUT="120"
     DOCKER_IMAGE_TYPE="folder"
     DOCKER_AUTO_ETH2="no"
     DOCKER_DHCP6_ETH0="fdd6:c963:62f:4879:2001:7e8:f883:a401:::/72"
     DOCKER_DHCP_ETH0=""
     DOCKER_ALLOW_ACCESS="yes"
     and the GUI:
  7. It's really odd. I rebooted to regain access to network settings. I deactivated bridging on eth0 and the network settings page disappeared again. Edit: Unraid doesn't create a custom docker network with the macvlan driver. I tried to create one with Portainer, but after validation the network gets created with driver "none".
  8. It seems to be the macvlan call traces problem. So if I understand correctly, I have to disable bridging on eth0.
  9. Each time I change a network setting I'm stuck. I no longer have access to the network settings page, only the choice of giving a name to eth0. The custom docker network br0 is no longer selectable in the docker template.
  10. I had unclean shutdown problems. I solved them by setting longer timeouts for VM and docker shutdown. If I remember correctly it was in 6.12.1, but I'm not entirely sure. Lately, on 6.12.3, I had no problems (but few reboots). Attached are my diagnostics. I think the server hasn't rebooted since the upgrade.
  11. I think it was linuxserver's docker.
  12. I had a tvheadend docker which ran smoothly, but that was 2 years ago. It had 3 DVB-T USB tuners attached. And if I remember correctly, I was also running tvhproxy to get DVB-T into Plex. Sorry for the lack of info, but this is a bit old.
  13. Upgraded from 6.12.3 - the reboot triggered a parity check (unclean shutdown detected) - the custom docker network interface changed from br0 to br2. I stopped docker to revert to br0, as eth0 is a 10Gb interface. My server was not subject to call traces with macvlan. For now it runs smoothly.
  14. Just a post to thank @mgutt. I installed it under 6.11.5 with a sync interval of 60 min. Works like a charm; my SSD says thanks. I use a single SSD (xfs) for the docker directory. Upgraded to 6.12.1 with no problem.
  15. Afaik there is only one docker, at least accessible through Unraid Apps. I have a very small server, and I'm its only user. It runs on an HP Z620 (2x E5-2650 v2) with 128GB RAM. All the files are on a 2x1TB NVMe btrfs pool. Right after the installation, I think it took a day or two to generate previews for my 80k photos. But I mainly use it for files, calendar, and contacts. The main advantage of AIO, at least for me, is the ease of installation (full-text search and Collabora out of the box is a must-have) and the integrated backups. The AIO data size is 40GB (photos are mounted as external files, so not on the pool).
  16. Yes, sorry. I updated the template with the new variable and it's running fine. Thank you again. 😀
  17. Damn it, I was looking in the luckybackup settings. Thank you. Edit: I have this in my template; I guess I just have to add it.
  18. Hello, I'm still setting up my backups. The last thing is Nextcloud AIO. The share used by the Nextcloud AIO container sits exclusively on an SSD pool. The AIO comes with a backup solution based on borg, and the backups go to a share on the array. Everything runs perfectly, but I think it could make sense to export these backups to an unassigned disk. The problem is file permissions. As with appdata, libvirt, and the Unraid flash, the owner of the backup directory is root, but the backup files have 700 permissions, so luckybackup won't transfer any files. Is there a simple solution that would allow using luckybackup for this without affecting the other backups?
  19. Yes, my bad. The primary problem was the impossibility of unmounting the pool. I read that umount -l could be "dangerous", as it simply masks the target, so I tried mount -o remount,ro, which failed.
  20. It didn't fail to mount but to unmount.
  21. I needed to move a disk, so I stopped the array. But it wouldn't stop, and in the log I had: "target is busy". Sorry, I forgot to take diagnostics. I checked with the Open Files plugin and nothing was open. I checked with lsof /mnt/pool and nothing either. So I tried to remount the pool in read-only mode: failure, with a message indicating dmesg could have more details. I don't remember the exact message, but something about a failed btrfs transaction. I ran btrfs scrub with no errors. I needed my server up, so I rebooted (from the GUI). The pool is clean, but a parity check was triggered due to an unclean shutdown. I had that a few weeks ago, but it was due to an open file. Is there anything to verify in such a situation? Edit: the pool holds appdata, domains, system, and nextcloud-aio.
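Roughly, the sequence of checks above as commands. This is a reconstruction, not exactly what I typed, and the pool path is an example:

```shell
lsof /mnt/pool                   # nothing open, yet the unmount still failed
mount -o remount,ro /mnt/pool    # failed; the error pointed at dmesg
dmesg | tail -n 20               # look for the btrfs transaction error details
btrfs scrub start -B /mnt/pool   # -B waits for completion and prints a summary
```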
  22. It syncs subfolders and files. But I don't know why I had to delete and recreate the exclude part. Also, this should have worked without exclude and with only include. Anyway, thanks, I appreciate your help.
  23. Thank you, I think it's good. For whatever reason I had to delete and recreate the exclude part, but when I look at the command line (validate button) it's the same. For now it has started to transfer the folder starting with "O", right in the root of the destination disk.
  24. Using this, it starts to transfer the whole directory: a folder starting with a number, and I stopped it.
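A note on why include alone "should have worked" but didn't: luckybackup just builds an rsync command line, and in rsync an --include on its own does not restrict the transfer; everything is still included by default, so the includes only matter when followed by a catch-all exclude. A runnable sketch with scratch directories (the folder names are examples, not my shares):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/Old photos" "$src/2019"     # example folders
echo a > "$src/Old photos/f"; echo b > "$src/2019/f"

# Include top-level O* directories and their contents, then exclude the rest;
# without the final --exclude='*', rsync would copy 2019 as well.
rsync -a --include='O*/' --include='O*/**' --exclude='*' "$src/" "$dst/"
ls "$dst"                                  # only "Old photos"
```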