MowMdown

Everything posted by MowMdown

  1. My guess is that no data under /…/…/media/TV Shows had been written to the array yet, so Sonarr didn't see the full array size; it was only looking at the SSD by itself. Once a folder was created on the array, it probably picked up on the fact that the storage size was now SSD + array.
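     If you want to check what an app actually sees, a quick sketch (the share path here is hypothetical, assuming a share that spans the cache SSD and the array):
         # Free space as reported for the user share (cache + array combined):
         df -h /mnt/user/media
         # Free space of the cache pool alone, for comparison:
         df -h /mnt/cache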
  2. Yes, Sonarr and Radarr have an import function. It will rename and move the files when they've finished downloading, so you shouldn't have to do anything manually. The *arr apps also have batch renaming for already-imported files.
  3. You would use the "unBALANCE" plugin, but typically there's no need to balance drives.
  4. You tell the *arrs what to rename them to on import: *arrs > Settings > Media Management (example format below).
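     For instance, a typical Sonarr standard episode format looks something like this; treat it as an illustration and check the in-app token help for the full list:
         {Series Title} - S{season:00}E{episode:00} - {Episode Title}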
  5. Unraid's biggest selling feature is being able to mix disks of all sizes from any manufacturer.
  6. Click the little icon to the left of "Disk 1" and it will let you browse the contents of that disk. Alternatively, if you SSH into your machine or open the web GUI terminal, you can navigate through /mnt/user or /mnt/disk1 (never copy data from/to /mnt/user to another disk).
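     A minimal terminal example (the share name is made up, use your own):
         # Browse a single array disk directly:
         ls -lh /mnt/disk1/media
         # Browse the merged user-share view of all disks and pools:
         ls -lh /mnt/user/media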
  7. Use the array as Unraid intends, but you can have as many "cache pools" as you want. Cache pools can be in just about any configuration: just disks, RAID0, RAID1, RAIDZ, ZFS mirror, etc. Each cache pool can be its own setup, too: Pool 1 can be four SSDs that aren't in any kind of RAID, while Pool 2 is a RAID1 pool.
  8. Do you have "Cache Dirs" plugin installed?
  9. I think I found the culprit, maybe:
         nobody 19635 0.8 2.3 1609296 388820 ? Ssl 10:06 5:39 python3 -m homeassistant -c /config
     (Screenshots: mDNS log before and after stopping the HA docker and restarting Unraid.) Clearly Home Assistant is what I think is the source of the mDNS conflict.
  10. @JorgeB I went ahead and disabled IPv6, and that seemed to have worked; however, it's not the most ideal solution, as it doesn't fix the underlying cause. Here's some more log from yesterday before I shut IPv6 off. I'm thinking whatever is causing the WARNING is the cause of the log spam. The only docker I can think of that would maybe cause this is Pi-hole? Do you see anything else in my diags pointing to a culprit? (There's also a quick port check sketched after the log.)
         Mar 9 22:11:34 Tower emhttpd: Starting services...
         Mar 9 22:11:34 Tower emhttpd: shcmd (265570): chmod 0777 '/mnt/user/docker'
         Mar 9 22:11:34 Tower emhttpd: shcmd (265571): chown 'nobody':'users' '/mnt/user/docker'
         Mar 9 22:11:34 Tower emhttpd: shcmd (265574): /etc/rc.d/rc.avahidaemon restart
         Mar 9 22:11:34 Tower root: Stopping Avahi mDNS/DNS-SD Daemon: stopped
         Mar 9 22:11:34 Tower avahi-daemon[17101]: Got SIGTERM, quitting.
         Mar 9 22:11:34 Tower avahi-daemon[17101]: Leaving mDNS multicast group on interface br0.IPv6 with address fdf0:6a76:b27e:1:ec4:7aff:fe64:9e01.
         Mar 9 22:11:34 Tower avahi-daemon[17101]: Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.4.100.
         Mar 9 22:11:34 Tower avahi-dnsconfd[17111]: read(): EOF
         Mar 9 22:11:34 Tower avahi-daemon[17101]: avahi-daemon 0.8 exiting.
         Mar 9 22:11:35 Tower root: Starting Avahi mDNS/DNS-SD Daemon: /usr/sbin/avahi-daemon -D
         Mar 9 22:11:35 Tower avahi-daemon[26563]: Found user 'avahi' (UID 61) and group 'avahi' (GID 214).
         Mar 9 22:11:35 Tower avahi-daemon[26563]: Successfully dropped root privileges.
         Mar 9 22:11:35 Tower avahi-daemon[26563]: avahi-daemon 0.8 starting up.
         Mar 9 22:11:35 Tower avahi-daemon[26563]: Successfully called chroot().
         Mar 9 22:11:35 Tower avahi-daemon[26563]: Successfully dropped remaining capabilities.
         Mar 9 22:11:35 Tower avahi-daemon[26563]: Loading service file /services/sftp-ssh.service.
         Mar 9 22:11:35 Tower avahi-daemon[26563]: Loading service file /services/ssh.service.
         Mar 9 22:11:35 Tower avahi-daemon[26563]: *** WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***
         Mar 9 22:11:35 Tower avahi-daemon[26563]: *** WARNING: Detected another IPv6 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***
         Mar 9 22:11:35 Tower avahi-daemon[26563]: Joining mDNS multicast group on interface br0.IPv6 with address fdf0:6a76:b27e:1:ec4:7aff:fe64:9e01.
         Mar 9 22:11:35 Tower avahi-daemon[26563]: New relevant interface br0.IPv6 for mDNS.
         Mar 9 22:11:35 Tower avahi-daemon[26563]: Joining mDNS multicast group on interface br0.IPv4 with address 192.168.4.100.
         Mar 9 22:11:35 Tower avahi-daemon[26563]: New relevant interface br0.IPv4 for mDNS.
         Mar 9 22:11:35 Tower avahi-daemon[26563]: Network interface enumeration completed.
         Mar 9 22:11:35 Tower avahi-daemon[26563]: Registering new address record for fdf0:6a76:b27e:1:ec4:7aff:fe64:9e01 on br0.*.
         Mar 9 22:11:35 Tower avahi-daemon[26563]: Registering new address record for 2604:2d80:968e:b300:ec4:7aff:fe64:9e01 on br0.*.
         Mar 9 22:11:35 Tower avahi-daemon[26563]: Registering new address record for 192.168.4.100 on br0.IPv4.
         Mar 9 22:11:35 Tower emhttpd: shcmd (265575): /etc/rc.d/rc.avahidnsconfd restart
         Mar 9 22:11:35 Tower root: Stopping Avahi mDNS/DNS-SD DNS Server Configuration Daemon: stopped
         Mar 9 22:11:35 Tower root: Starting Avahi mDNS/DNS-SD DNS Server Configuration Daemon: /usr/sbin/avahi-dnsconfd -D
         Mar 9 22:11:35 Tower avahi-dnsconfd[26572]: Successfully connected to Avahi daemon.
         Mar 9 22:11:35 Tower avahi-daemon[26563]: Server startup complete. Host name is Tower.local. Local service cookie is 2084411687.
         Mar 9 22:11:36 Tower avahi-daemon[26563]: Registering new address record for fd91:1ac0:7f43:c44:ec4:7aff:fe64:9e01 on br0.*.
         Mar 9 22:11:36 Tower avahi-daemon[26563]: Service "Tower" (/services/ssh.service) successfully established.
         Mar 9 22:11:36 Tower avahi-daemon[26563]: Service "Tower" (/services/sftp-ssh.service) successfully established.
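     For anyone else chasing a duplicate mDNS stack, a quick check (assuming lsof/netstat are present on your build) is to see what's bound to the mDNS port:
         # Show every process listening on 5353/udp (mDNS):
         lsof -i UDP:5353
         # Or, without lsof:
         netstat -ulnp | grep 5353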
  11. I've searched the forums, Reddit, Google, etc. I'm struggling to figure out what's causing the avahi daemon to spam my logs with "withdrawing/registering" over and over. There are no host-name conflicts that I'm aware of on my network, and nothing else is using the .local TLD either. There are no static IPv6 addresses assigned to any of my devices; it's even set to automatic/DHCP in the network settings on Unraid. Attached are the diagnostics. tower-diagnostics-20240309-1118.zip
  12. Well, that's to be expected. You've set your share to be moved off of your pool onto the array. No idea why you'd want to snapshot an empty dataset that's never going to have data in it? Edit: @Iker, I have a QOL suggestion for the snapshot admin pop-up window. Instead of the pop-up, could the snapshots just expand below the selected dataset? I have issues with the pop-up window on small screens, where the width is very narrow and requires horizontal scrolling even in landscape orientation.
  13. @MilkSomelier Stop the Docker service entirely, enable destructive mode in the ZFS Master settings, click the action button > Convert to Dataset, then also convert each individual appdata folder, or else you won't be able to restore individual docker appdata.
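     For context, my understanding is the conversion amounts to roughly this (illustrative only, assuming a pool named cache where appdata is already a dataset; let the plugin do the real work):
         # Set the plain folder aside, create a real dataset in its place,
         # copy the data back in, then drop the temporary copy:
         mv /mnt/cache/appdata/someapp /mnt/cache/appdata/someapp_tmp
         zfs create cache/appdata/someapp
         rsync -a /mnt/cache/appdata/someapp_tmp/ /mnt/cache/appdata/someapp/
         rm -rf /mnt/cache/appdata/someapp_tmp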
  14. Just change --vfs-cache-mode full to --vfs-cache-mode off.
  15. That is strange, I'm not sure why it would write to the _vfs upstream if you have your local path listed first in the union when writing to the union... For the record, you can export /mnt/disks/some_dir as a share through SMB. Example:
         #unassigned_devices_start
         #Unassigned devices share includes
         include = /tmp/unassigned.devices/smb-settings.conf
         #unassigned_devices_end
         [some_dir]
         path = /mnt/disks/some_dir
         comment =
         browsable = yes
         # Public
         public = yes
         writeable = yes
         vfs object =
     You simply add this to the SMB extras under Settings > SMB.
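     To sanity-check the export afterwards, something like this should list the share (smbclient ships with Samba; "Tower" stands in for your server's hostname):
         # List the shares the server is exporting, without credentials:
         smbclient -L //Tower -N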
  16. When writing to the union mount directory "media" (the non-"vfs" one), it shouldn't be touching the cache, because it should only write to the local drives, i.e. your first upstream in the union setup. Sounds like maybe you should check the spelling/case of that first path. You might need to add the -vv flag to the mount command so you can verbosely debug the issue further.
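     For example, taking the union mount line from my setup post and just adding the verbosity flag:
         # Same union mount, with debug-level logging to spot path/case mismatches:
         rclone mount --allow-other -vv union: /mnt/disks/media &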
  17. @axeman, no, if you pay close attention to the path for that move command, I'm using "/mnt/user/media", not "/mnt/disks/media" (I don't mount to /user/). My rclone mount is under "/mnt/disks/media", so it does not interfere with the move. I'm essentially moving the files from /mnt/user/media to the "crypt:media" mount, but as far as Unraid is concerned the file isn't actually moving, since no matter where I put the file it always shows up in /mnt/disks/media.
  18. I just run this nightly at 3am using User Scripts, super simple (obviously I don't have a folder named "files", but you can use your imagination):
         rclone move /mnt/user/media/files crypt:files -v --delete-empty-src-dirs --fast-list --drive-stop-on-upload-limit --order-by size,desc
     I have a single 500GB drive that I fill up with whatever I want moved to the cloud, and that small script does it.
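     Wrapped as a User Scripts script, it's just this; set the schedule to Custom with the cron expression 0 3 * * * for the nightly 3am run:
         #!/bin/bash
         # Move local files to the encrypted cloud remote, then prune empty dirs:
         rclone move /mnt/user/media/files crypt:files -v \
           --delete-empty-src-dirs --fast-list \
           --drive-stop-on-upload-limit --order-by size,desc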
  19. Might be related to the mount command you're using. The ":nc" suffix is simply "No Create" and shouldn't really affect the reading of files, so I assume the mount you're using to mount the "media_vfs" directory is possibly the culprit.
     Edit: No, Plex/Emby would not be able to write unencrypted data to the mount, since rclone is the one encrypting anything that gets written to it. I simply want to avoid writing NEW files to it to avoid corruption, because writing to a mount is not best practice. You can also use the :ro suffix to essentially mount it "read only"; however, that's also not what I want, because with :nc I am able to upgrade media using Sonarr/Radarr, which requires those programs to be able to delete files, and they can't do that when it's read-only. (I'm not actually sure :nc or :ro is necessary, since we are using the "ff" policy, which essentially only deals with the first listed upstream, which is our local array drives.) See the comparison below.
     When those programs do upgrade the media, they actually delete the old files off the cloud mount and then write the new file/data to the local array drives, where my upload script will essentially write it back to the cloud. It's actually kinda clever the way I set it up.
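     To make the difference concrete, here's roughly how the two suffixes would look on the union upstreams (local path as in my setup post):
         # :nc (no create): reads and deletes allowed, but new files never land here
         upstreams = /mnt/user/plexmedia/ /mnt/disks/media_vfs:nc
         # :ro (read only): no writes or deletes at all on this upstream
         upstreams = /mnt/user/plexmedia/ /mnt/disks/media_vfs:ro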
  20. You don't need any physical unassigned devices, I don't have any. It's just where I mounted my cloud mounts. Post your "union" rclone config like I did above. "I'm not sure I understand where to do this." When using /mnt/disks/ as your path in the docker configs, Unraid will throw a warning that the path is not using the slave option. If you edit the docker container config and go to edit one of the path variables, you will see "Access Mode"; that needs to be changed from Read/Write to RW Slave. Super easy to change.
  21. I'm very time-limited this week, but here is a crude setup of what you need to do, using an rclone union instead of the MergerFS setup. (This assumes you are already familiar with rclone mounts and are using the latest version, 1.53.2.)
     My mount script is pretty straightforward. I create the two directories needed: the first one mounts the cloud remote, which will utilize rclone's VFS caching, and the second mount unionizes the VFS mount with the local media.
         #!/bin/bash
         mkdir -p /mnt/disks/media_vfs
         mkdir -p /mnt/disks/media
         rclone mount --allow-other \
           --dir-cache-time 100h \
           --fast-list \
           --poll-interval 15s \
           --cache-dir=/mnt/user/system/rclone \
           --vfs-cache-mode full \
           --vfs-cache-max-size 500G \
           --vfs-cache-max-age 168h \
           --vfs-read-chunk-size 128M \
           --vfs-read-chunk-size-limit off \
           --vfs-read-ahead 128M \
           crypt: /mnt/disks/media_vfs &
         rclone mount --allow-other union: /mnt/disks/media &
     In the first rclone mount command I'm using my "crypt:" remote; you will need to replace --> crypt: <-- with your own. You must edit the "--cache-dir=" variable to where you want rclone to cache your media on your local Unraid machine, as well as "--vfs-cache-max-size" to the largest size you are willing to cache on your disk. All the other VFS flags should remain the same.
     The next step is using rclone to configure the "union" remote needed to union the VFS mount with the local media directory. Enter rclone config, select "n" for a new remote, name it union, then select the union option. It's going to ask for the "upstreams": first type the local path to your media, put in a space, then the path to the mount location we just made, /mnt/disks/media_vfs, and then I personally add the :nc modifier to avoid accidentally creating files on the cloud mount. Next rclone will ask for the action_policy; enter ff. Next will be the create_policy; enter ff. Next will be the search_policy; enter all. Last will be the cache time; leave the default of 120. Once it's done it should look something like this:
         [union]
         type = union
         upstreams = /mnt/user/plexmedia/ /mnt/disks/media_vfs:nc
         action_policy = ff
         create_policy = ff
         search_policy = all
         cache_time = 120
     Remember to replace /plexmedia/ with the root of your media location. (Your remote should follow the same directory structure, or this may cause issues.) Once you actually mount the mounts after the union is created, you should be able to browse /mnt/disks/media (the non-_vfs one) and see a complete list of all your media, whether it's in the cloud or local.
     One last thing: you will need to change your docker paths from /mnt/user/etc/etc/etc/ to /mnt/disks/media/ so they read from this mount, and change them from Read/Write to RW Slave.
     To unmount at array shutdown:
         #!/bin/bash
         fusermount -uz /mnt/disks/media
         fusermount -uz /mnt/disks/media_vfs
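     A quick sanity check after the mount script runs (mount point names as above):
         # Both FUSE mounts should appear:
         mount | grep rclone
         # The union view should merge local and cloud content:
         ls /mnt/disks/media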
  22. Just giving an update to one of my comments from a month or so ago: the built-in rclone union backend/mount is quite good and much less complicated than the MergerFS setup. I found a way that lets you use rclone's VFS caching with the cloud mount and the local storage in tandem. Movies play instantly and it's wonderful. If anybody is interested, let me know and I'll share my setup.
  23. I had no issues building my kernel while I was running the Nvidia Unraid build from LSIO.
  24. Or you can use @ich777's Unraid-Kernel-Helper docker to build the 6.8.3 kernel with the latest NVIDIA drivers, or just download the pre-compiled 6.8.3 with them from the docker thread located here!