bugsysiegals

Members
  • Posts: 99
  • Joined
  • Last visited

bugsysiegals's Achievements

Apprentice (3/14)

Reputation: 1
  1. If I'm not mistaken, when unRAID creates the top-level dataset, mover can still move everything from cache to array or vice versa; however, any datasets you create yourself, at least if done with SpaceInvaderOne's auto dataset script, can no longer be moved by mover. It's been a while, and I think I posted the details in another thread, but if memory serves, the files get moved yet the source folders/files never get deleted, so they end up existing on both cache and array. Perhaps it's a permissions error when deleting the dataset? The issue is easy to reproduce: make a folder with subfolders/files on the array, have mover move it to the cache, use the auto dataset script to convert those subfolders to datasets, change the share from cache to array, and run mover again; it will not move everything back. (A quick way to see what mover left behind is sketched after this list.)
  2. No, I never got VPN protection to work and received a letter from my ISP in the past, so I moved to Usenet. I've shut down Radarr/Sonarr and will check activity tomorrow morning.
  3. Hi, thanks for the quick reply @trurl ... attached. FWIW, I installed the File Activity plugin and see several movie/show files. I'm not sure whether the Radarr/Sonarr tasks that run multiple times per hour, and can't be adjusted, would cause this? I also noticed some CCTV activity, which is probably Frigate, but I'm not sure why it would be doing anything with a share that's set to Cache and should only be moved to the Array by mover whenever it runs. I also notice a photo is OPEN, and while I pass /data/media into my media dockers, they do not have my photos added to their libraries, so this is really bizarre (a quick way to see which process actually has a file open is sketched after this list). I also see a few .tmp files and am not sure what that's about.
  4. I've set my disks to spin down when not in use since they shouldn't be accessed very often at all; however, I've been randomly finding at least one disk and the parity drive spun up. Whenever I see them online I check active streams and open files but see nothing connecting to them. That said, I just clicked on the log and see a lot of disk notifications for SMART and spin down. I've read it's not healthy to have the disks spinning up/down more than 5 times per day, and I'm way over that amount ... how do I find the cause of this? (A quick way to tally these events per drive from the syslog is sketched after this list.)
     Jan 2 10:02:27 unRAID emhttpd: read SMART /dev/sdg
     Jan 2 10:02:27 unRAID emhttpd: read SMART /dev/sdc
     Jan 2 10:18:58 unRAID emhttpd: spinning down /dev/sdg
     Jan 2 10:18:58 unRAID emhttpd: spinning down /dev/sdc
     Jan 2 10:44:38 unRAID emhttpd: read SMART /dev/sdf
     Jan 2 10:44:49 unRAID emhttpd: read SMART /dev/sdc
     Jan 2 10:59:39 unRAID emhttpd: spinning down /dev/sdf
     Jan 2 10:59:50 unRAID emhttpd: spinning down /dev/sdc
     Jan 2 11:02:51 unRAID emhttpd: read SMART /dev/sdg
     Jan 2 11:02:51 unRAID emhttpd: read SMART /dev/sdc
     Jan 2 11:19:23 unRAID emhttpd: spinning down /dev/sdg
     Jan 2 11:19:23 unRAID emhttpd: spinning down /dev/sdc
     UPDATE: I installed the File Activity plugin and noticed Frigate was offloading files every hour for its retention policy. I've requested they do this once per day instead and in the meantime turned retention up to 365 days. Radarr/Sonarr have scheduled tasks running more than once per hour that cause the array to spin up. Since the developers don't trust people to manage their own media server settings, I powered these down and have switched to streaming media rather than downloading it.
  5. I moved all my folders/files off my Cache drive to upgrade its firmware. I switched my share settings to move back from Array to Cache and ran Mover, and while the folders/files moved, only the appdata folder became a dataset. Is this normal, or should the sub-folders also have become datasets? (A way to check which paths actually became datasets is sketched after this list.)
  6. I recently converted my Cache drive to ZFS and discovered that only the top-level directories are automatically converted into ZFS datasets. Since I want to take snapshots of specific Dockers or specific VMs, I need the appdata and domains sub-folders to be datasets. I used SpaceInvaderOne's Auto Dataset script to convert the sub-folders into datasets; however, once you do this, mover will move some but not all files from Cache to Array. I know I can't delete the folders/files from Cache using MC, as it says they're in use. I suspect the script does something during conversion that prevents deletion, and that this is what affects mover. Is this a bug, or an enhancement Mover needs to have added? (Once the sub-folders are datasets, per-container snapshots look like the sketch after this list.)
  7. It says I may lose the data on the drives. I have all my appdata, domains, etc. on my 2-NVMe ZFS cache pool ... should I move the data to the array first, or is it unlikely to lose all my data?
  8. I upgraded my cache pool to ZFS and would like to start taking snapshots of my Docker apps, but I've noticed Lidarr, Radarr, and Sonarr seem unnecessarily large because they include MediaCover data. In my mind, if I can move this folder out of the Docker, I can keep snapshots very small, and if everything falls apart these files are automatically re-downloaded anyway. Does this make sense, and does anybody do this today? I presume I would have to create a folder on the array, pass the folder into the Docker, and then somehow point the MediaCover folder within the Docker at the array folder? (A possible volume mapping is sketched after this list.)
  9. Do you know if the script will convert a root folder into a dataset if it isn't one already, or will it only convert sub-folders into datasets if the root folder is already a dataset?
  10. Thanks for the feedback. This is good if your root directories are datasets, but it doesn't work if they're folders. I took the long route of having mover move everything from the array to the cache and then move it all back, which created all the root folders on cache as datasets. That said, I just found that if I use the plugin to create a dataset, I can then move all the sub-folders/files into that dataset, delete the original folder, and rename the dataset (a rough sketch of this is after this list). If I'm correct that it works this way, it would have saved me several hours ... hopefully this helps somebody else down the road.
  11. I've got several folders on my ZFS cache which I moved from the array using MC instead of Mover, so the folders and all their sub-directories/files are not datasets. I believe I can move these back to the array and then move them again with mover to make them datasets, but is there an easier way with some command/script?
  12. Thanks for the clarity, learned something new today! I assume XFS behaves the same way, which is why unRAID allows you to add different-size disks to the array? Since my cache disks are the same size and I'd like the advanced features of ZFS, I'll go with raid0. Thanks again!
  13. I'd not chosen raid0 because I wanted to avoid wasting time rebuilding the "appdata" drive, assuming it wasn't the one to fail, but I think I should have chosen raid0 ... I was planning to run a script to back up critical files anyway, so the downside of raid0 is having to copy critical files back from the array, where with a mirror I had a 50% chance of avoiding that, but the benefit is using all the disk space of both drives and a potential speed increase?
  14. Thanks for sharing how you can assign shares to different pools, but this would mean I could only assign one pool/drive to "movies", and the data could not "spill over" onto the NVMe which only has appdata, domains, isos, docker.img, and libvirt, so that disk would always sit with 1.75TB of unused space. Instead, I'd like the NVMe pool to behave like the array, just without parity, so /mnt/cache/movies/ will find all movies regardless of whether they're on NVMe1 or NVMe2, just like the array does with /mnt/user/movies. This lets me use both NVMe drives fully, instead of one sitting there only holding 250GB, since appdata, domains, isos, docker.img, and libvirt will never grow much bigger for me. I was thinking single mode did what I'm describing, but perhaps it's something not yet implemented in unRAID?
  15. I've around 250GB being used for appdata, data (recently downloaded media before offloading to the array), domains, isos, system (libvirt), and docker.img. I was hoping to use the remaining 1.75TB on this drive and the 2TB on the other drive for downloading media and hosting it there until it's 90% full, at which point I'd begin offloading the oldest files to the array. I wanted to keep these in the same pool in order to use a single path in all of my dockers for downloaded media. I suspect I'll lose this ability if I put them in 2 separate pools?
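
Regarding the duplicates described in post 1: a minimal sketch for confirming what mover left behind, assuming the pool is named "cache" and the share is called "data" (both names are only examples):

```bash
# Datasets that still exist under the share on the cache pool
# (plain folders will NOT appear here, only real ZFS datasets).
zfs list -r -o name,mountpoint,used cache/data

# Compare what is still sitting on the cache with what landed on the array.
# /mnt/user0 is the array-only view of a user share on unRAID.
diff <(cd /mnt/cache/data && find . | sort) <(cd /mnt/user0/data && find . | sort)
```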
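
For the open photo and .tmp files in post 3, something like the following can show which process (or container) actually has a path open. The share name "photos" and the file name are placeholders:

```bash
# List processes with files open anywhere under the share (can be slow on large trees).
lsof +D /mnt/user/photos

# Or check a single suspicious file and show the owning process name/PID.
fuser -v /mnt/user/photos/IMG_0001.jpg
```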
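
For the spin-up counting in post 4, a rough tally of events per device can be pulled straight from the syslog, since emhttpd logs both the SMART reads and the spin-downs shown above:

```bash
# Count spin-down events per device in the current log.
grep 'spinning down' /var/log/syslog | awk '{print $NF}' | sort | uniq -c

# Same idea for the SMART reads that accompany a disk waking up.
grep 'read SMART' /var/log/syslog | awk '{print $NF}' | sort | uniq -c
```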
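
For post 5, a quick way to see which paths actually became datasets is to list the pool recursively; anything under /mnt/cache that does not show up is just a plain folder inside its parent dataset (pool name "cache" assumed):

```bash
# Only real datasets are listed; plain directories are invisible to zfs.
zfs list -r -o name,mountpoint cache
```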
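
For the snapshot goal in post 6, once a container's appdata folder is its own dataset, per-app snapshots become one-liners. The dataset name cache/appdata/plex is only an example:

```bash
# Take a dated snapshot of a single container's appdata dataset.
zfs snapshot cache/appdata/plex@manual-$(date +%Y-%m-%d)

# List snapshots for that dataset.
zfs list -t snapshot -r cache/appdata/plex

# Roll back to a snapshot if something breaks (destroys data written after it):
# zfs rollback cache/appdata/plex@manual-2024-01-02
```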
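
For the MediaCover idea in post 8, one possible approach is to bind-mount an array folder over the MediaCover path inside the container, so the covers never land in the snapshotted appdata dataset. This is only a sketch: the host path /mnt/user/mediacover/radarr and the in-container path /config/MediaCover are assumptions, so verify where your particular Radarr image keeps MediaCover before relying on it:

```bash
# Example run of the linuxserver Radarr image with an extra path mapping;
# in Unraid you would add the same mapping as an extra Path in the template.
docker run -d --name radarr \
  -p 7878:7878 \
  -v /mnt/user/appdata/radarr:/config \
  -v /mnt/user/mediacover/radarr:/config/MediaCover \
  -v /mnt/user/data/media:/data/media \
  lscr.io/linuxserver/radarr
```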
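
Finally, the in-place conversion described in post 10, written out as a rough sketch. Stop anything using the folder (Docker, VMs) first; the folder name "appdata" and pool name "cache" are examples:

```bash
# 1. Create a new dataset alongside the existing plain folder.
zfs create cache/appdata_new

# 2. Copy the contents across, preserving permissions and attributes.
rsync -a /mnt/cache/appdata/ /mnt/cache/appdata_new/

# 3. Remove the original folder once the copy has been verified.
rm -rf /mnt/cache/appdata

# 4. Rename the dataset into place; it then mounts at /mnt/cache/appdata.
zfs rename cache/appdata_new cache/appdata
```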