Everything posted by mackid1993

  1. Ok that makes sense. I appreciate your help. Sorry for bothering you. 😊
  2. @dlandon This file has the shutdown sequence. It took about 4 minutes to shut down. syslog-previous
  3. I rebooted my server to repro the issue. My unclean shutdown timeout is set to 420 seconds for issues just like this, thankfully! Attached is my diagnostics zip. Thanks!! I guess UD doesn't expect the SMB share to be coming from within the house lol! apollo-diagnostics-20240327-1208.zip
  4. Unfortunately umount -f or umount -l at array stop doesn't seem to work as a workaround. It seems to be a bug with UD. Fortunately my timeout is so high that it eventually unmounts. It just takes 3 minutes or so.
  5. Thanks so much! For now I made a user script that runs on array stop and does umount -f /path/to/mount; sketch below.
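For anyone else hitting this, the whole script is just the following (a minimal sketch; /path/to/mount is a placeholder for wherever UD mounts the share, and the script is set to run "At Stopping of Array" in User Scripts):

#!/bin/bash
# force-unmount the remote SMB share before UD tries to and hangs
# /path/to/mount is a placeholder for the actual UD mount point
mountpoint -q /path/to/mount && umount -f /path/to/mount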
  6. That is a Windows VM hosted on my Unraid server that powers down when the system shuts down but before the SMB share is unmounted.
  7. I'll update this post in a moment. Looking at the syslog, it seems to hang on:

Mar 27 11:21:12 Apollo unassigned.devices: Unmounting All Devices...
Mar 27 11:21:34 Apollo unassigned.devices: Remote server '10.0.0.2' port '445' is not open; server apears to be offline.

then after a while it says:

Mar 27 11:23:02 Apollo kernel: CIFS: VFS: \\10.0.0.2 has not responded in 180 seconds. Reconnecting...

Edit: It seems to force unmount and then shut down gracefully. It just delays shutdown. I set my unclean shutdown timeout really high to avoid dirty shutdowns. Maybe I'll just add a user script to umount -f on array stop.
  8. Hey @dlandon, my SMB shares from my VM were working great, but the problem I'm running into now is that when I shut my server down, the VM shuts down but the SMB shares don't seem to want to automatically unmount. Is there anything I can do, possibly a user script, to unmount these?
  9. @dlandon Setting a 180 second delay worked great. I was also able to use the device script to easily start up the related docker container on mount so it sees the SMB share properly. Thanks for your help!
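In case it helps anyone else, the relevant part of my device script looks roughly like this (a sketch; UD's default device-script template switches on the $ACTION variable, and "mycontainer" is a placeholder for the real container name):

#!/bin/bash
case "$ACTION" in
  'ADD' )
    # share just mounted: start the container that depends on it
    docker start mycontainer
    ;;
  'REMOVE' )
    # share unmounted: stop the container again
    docker stop mycontainer
    ;;
esac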
  10. Yeah, it seems that it doesn't automount when the array starts because the VM hasn't fully booted. Is there a workaround to delay the automount, or to automount via a script, like using User Scripts? Something like the sketch below is what I had in mind.
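A rough, untested sketch of the User Scripts approach (assumptions: 10.0.0.2 is the VM's address as in my logs above, and UD's rc.unassigned helper can kick off the automounts):

#!/bin/bash
# wait until the VM's SMB port (445) answers, polling every 10 seconds
until (echo > /dev/tcp/10.0.0.2/445) 2>/dev/null; do
    sleep 10
done
# then trigger UD to mount everything flagged for automount
/usr/local/sbin/rc.unassigned mount auto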
  11. Thanks! Either way, it seems like if it can't connect it retries on some sort of interval. Is that correct?
  12. Just a question: I'm mounting an SMB share from my Windows VM (the files are on Dropbox and are only accessible from within the VM). The files are for use in a docker container. Everything works really well so far. I have automount set up, and if I reboot the VM it seems to mount on its own again just fine. I'm wondering if this will still work when I reboot the server. Obviously the VM will take a minute or two to start up. With automount selected, will my SMB share properly mount once the VM is up and running on a fresh boot of Unraid? I'm just trying to look for any points of failure, as so far this seems to be working really well for my use case.
  13. Thanks! The goal is not to reboot until the old drives have been DoD-wiped and are ready to be pulled.
  14. Thanks so much! I'm replacing all of my very old 3 and 4 TB drives with new higher density 12 TB drives so I have to break parity to do the transfer. My goal was to add the two precleared drives and start moving data and when the third one finishes it'll be my parity disk. I have redundant backups of everything so I'm not worried about data loss.
  15. Are you running it with @jch's shell script? That made a difference for me.
  16. I have a preclear going on 3 drives. They are nearly done, but one is about 2 hours behind the others. If I stop my array to add the two completed drives, will that break the preclear that has yet to finish?
  17. I'm replacing my drives. Once I move the data to my new drives (I have plenty of bays for the old and new drives), I'm looking for an easy way, either with a Docker container, VM, or plugin, to wipe my old drives. I was thinking I could run Parted Magic in a VM and pass the disks through by ID using Unassigned Devices. Has anyone done this before? My old drives are really old, so at some point I may want to recycle them rather than hang on to them.
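If nothing turns up, I may just skip the VM and wipe them from the Unraid console. A sketch of what I have in mind (not a strict DoD 5220.22-M implementation, just three overwrite passes; shred ships with coreutils so it should already be on Unraid):

#!/bin/bash
# three verbose random-overwrite passes over the whole disk
# /dev/sdX is a placeholder -- triple-check the device before running!
shred -v -n 3 /dev/sdX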
  18. Thanks, I'm not going to go that far; I'll take your word for it. It is crazy fast now. Given the FUSE layer that Unraid has, maybe it's best to leave cache disabled, especially given your point about the mover running.
  19. I'm curious why we are disabling caching. It's supposed to improve performance, is it not?
  20. Maybe open an issue report on GitLab. I would test in Linux first to verify that it isn't a Windows driver issue. Then, if it's isolated to Windows, open a report with virtio-win.
  21. From NVMe on the server to the NVMe VM boot drive I was getting 1+ GB/s. I'm not sure why your system behaves this way.
  22. Did you make the shell script in nano on the server? If you try to make it in Windows and it isn't UNIX-formatted, it won't work. Also, where are you placing the rust virtiofsd? Did you run chmod +x ./script.sh? If you create the shell scripts on the server, copy and paste (and modify) what I provided into the go file, and modify the XML properly, it should work. You do have to reboot.
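If the scripts were ever touched on Windows, you can strip the CR line endings on the server instead of recreating them. A minimal sketch (assumes the scripts live in /boot/virtiofsd as in my setup):

#!/bin/bash
# remove Windows CR line endings so bash doesn't choke on them
sed -i 's/\r$//' /boot/virtiofsd/*.sh
# make every mount script executable
chmod +x /boot/virtiofsd/*.sh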
  23. So I had a chance to test this with multiple shares, and I have to say this is an incredible speed improvement with Rust virtiofsd. @SimonF this should really be implemented in 6.13. Here is what I did.

First I created a directory on my flash: /boot/virtiofsd

In that directory I placed the rust version of virtiofsd and a shell script for each share I want to mount. In my case I have:

appdata.sh
archives.sh
backup.sh
communityapplicationsappdatabackup.sh
downloads.sh
movies.sh
music.sh
software.sh
tv.sh

Note: Make these scripts using nano on the server. They must be formatted for UNIX. Making them in Notepad or Notepad++ on Windows will format them MS-DOS, and virtiofsd will die when booting the VM.

Each shell script is exactly what @jch shared, modified for the individual share I want to mount in Windows; I also modified it to run virtiofsd from /usr/libexec. Here is an example for my Music share; all of the other shares are the same:

#!/bin/bash
# process the -o option but ignore it, because Unraid generates the command for us
VALID_ARGS=$(getopt -o o -l fd: -- "$@")
if [[ $? -ne 0 ]]; then
    exit 1;
fi
eval set -- "$VALID_ARGS"
while [ : ]; do
  case "$1" in
    --fd )
      FD="$2"
      shift 2
      ;;
    -o )
      shift 1
      ;;
    -- )
      shift
      break
      ;;
    * )
      shift
      ;;
  esac
done

# https://gitlab.com/virtio-fs/virtiofsd
/usr/libexec/virtiofsd \
  --fd="$FD" \
  --shared-dir="/mnt/user/Music" \
  --xattr \
  --cache="never" \
  --sandbox="chroot" \
  --inode-file-handles="mandatory" \
  --announce-submounts

Change --shared-dir= and rinse and repeat for each individual share with a new shell script.

Next I appended the following to my go file (/boot/config/go) so this is persistent upon boot. It renames the original virtiofsd to virtiofsd.old, copies the rust version to /usr/libexec, then copies each of my mount scripts to /usr/libexec and makes them executable:

#replace virtiofsd with Rust version
mv /usr/libexec/virtiofsd /usr/libexec/virtiofsd.old
cp /boot/virtiofsd/virtiofsd /usr/libexec/virtiofsd
chmod +x /usr/libexec/virtiofsd

#copy mount scripts for each virtiofs share
#appdata
cp /boot/virtiofsd/appdata.sh /usr/libexec/appdata.sh
chmod +x /usr/libexec/appdata.sh
#archives
cp /boot/virtiofsd/archives.sh /usr/libexec/archives.sh
chmod +x /usr/libexec/archives.sh
#backup
cp /boot/virtiofsd/backup.sh /usr/libexec/backup.sh
chmod +x /usr/libexec/backup.sh
#communityapplicationsappdatabackup
cp /boot/virtiofsd/communityapplicationsappdatabackup.sh /usr/libexec/communityapplicationsappdatabackup.sh
chmod +x /usr/libexec/communityapplicationsappdatabackup.sh
#downloads
cp /boot/virtiofsd/downloads.sh /usr/libexec/downloads.sh
chmod +x /usr/libexec/downloads.sh
#movies
cp /boot/virtiofsd/movies.sh /usr/libexec/movies.sh
chmod +x /usr/libexec/movies.sh
#music
cp /boot/virtiofsd/music.sh /usr/libexec/music.sh
chmod +x /usr/libexec/music.sh
#software
cp /boot/virtiofsd/software.sh /usr/libexec/software.sh
chmod +x /usr/libexec/software.sh
#tv
cp /boot/virtiofsd/tv.sh /usr/libexec/tv.sh
chmod +x /usr/libexec/tv.sh

Now for the XML, this part is easy.
For each mount, replace:

<binary path='/usr/libexec/virtiofsd' xattr='on'>

with:

<binary path='/usr/libexec/script.sh' xattr='on'>

This is what my finished XML looks like for my environment:

<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <binary path='/usr/libexec/appdata.sh' xattr='on'>
    <cache mode='always'/>
    <sandbox mode='chroot'/>
  </binary>
  <source dir='/mnt/user/appdata'/>
  <target dir='appdata'/>
  <alias name='fs0'/>
  <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</filesystem>
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <binary path='/usr/libexec/archives.sh' xattr='on'>
    <cache mode='always'/>
    <sandbox mode='chroot'/>
  </binary>
  <source dir='/mnt/user/Archives'/>
  <target dir='Archives'/>
  <alias name='fs1'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</filesystem>
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <binary path='/usr/libexec/backup.sh' xattr='on'>
    <cache mode='always'/>
    <sandbox mode='chroot'/>
  </binary>
  <source dir='/mnt/user/Backup'/>
  <target dir='Backup'/>
  <alias name='fs2'/>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</filesystem>
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <binary path='/usr/libexec/communityapplicationsappdatabackup.sh' xattr='on'>
    <cache mode='always'/>
    <sandbox mode='chroot'/>
  </binary>
  <source dir='/mnt/user/CommunityApplicationsAppdataBackup'/>
  <target dir='CommunityApplicationsAppdataBackup'/>
  <alias name='fs3'/>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</filesystem>
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <binary path='/usr/libexec/downloads.sh' xattr='on'>
    <cache mode='always'/>
    <sandbox mode='chroot'/>
  </binary>
  <source dir='/mnt/user/Downloads'/>
  <target dir='Downloads'/>
  <alias name='fs4'/>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</filesystem>
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <binary path='/usr/libexec/movies.sh' xattr='on'>
    <cache mode='always'/>
    <sandbox mode='chroot'/>
  </binary>
  <source dir='/mnt/user/Movies'/>
  <target dir='Movies'/>
  <alias name='fs5'/>
  <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
</filesystem>
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <binary path='/usr/libexec/music.sh' xattr='on'>
    <cache mode='always'/>
    <sandbox mode='chroot'/>
  </binary>
  <source dir='/mnt/user/Music'/>
  <target dir='Music'/>
  <alias name='fs6'/>
  <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
</filesystem>
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <binary path='/usr/libexec/software.sh' xattr='on'>
    <cache mode='always'/>
    <sandbox mode='chroot'/>
  </binary>
  <source dir='/mnt/user/Software'/>
  <target dir='Software'/>
  <alias name='fs7'/>
  <address type='pci' domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
</filesystem>
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <binary path='/usr/libexec/tv.sh' xattr='on'>
    <cache mode='always'/>
    <sandbox mode='chroot'/>
  </binary>
  <source dir='/mnt/user/TV'/>
  <target dir='TV'/>
  <alias name='fs8'/>
  <address type='pci' domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
</filesystem>

Once done, reboot the server so the go file moves all of the files and sets permissions. This is a huge improvement over the stock config. Thanks for this @jch. Hopefully we can get these settings implemented in 6.13.
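After the reboot, a quick sanity check from the console (a sketch; assumes the rust build prints its version with --version, which current releases do):

# confirm the rust binary replaced the stock one
/usr/libexec/virtiofsd --version
# confirm the mount scripts are in place and executable
ls -l /usr/libexec/*.sh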