loond

Members
  • Posts: 19
  • Joined: Converted
  • Gender: Undisclosed
  • Personal Text: Dual Xeon 2670 - ASRock EP2C602-4L/D16 - LSI HBA - GTX970 - Dual Parity 18TB array w/Cache Pool

loond's Achievements
  • Rank: Noob (1/14)
  • Reputation: 1

  1. Adding "-delete" to the above does clean up the empty folders:
     find /mnt/mediapool/TV/ -type d -empty -delete
     There is no impact when looking at /mnt/user/*, meaning the files and directories are still on the array, as desired.
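     For anyone copying this, a slightly safer form (a sketch; same path as above): the first command is a dry run that only prints, and -mindepth 1 keeps the top-level TV directory itself from ever being removed:
     # Dry run: list the empty directories under the share without touching anything
     find /mnt/mediapool/TV/ -mindepth 1 -type d -empty -print
     # Actual cleanup: -delete implies depth-first, so nested empty folders go too
     find /mnt/mediapool/TV/ -mindepth 1 -type d -empty -delete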
  2. Just had another thought: what if the folder has a different date than the files within it? If the folder is newer and doesn't meet the age cutoff yet, it isn't moved/removed, even if the files within it do meet the age criteria and are moved, which results in an empty directory. I'm not sure of the logic being used, but there should be a catch-all that removes a directory if it is empty, regardless of the age setting; basically, a cleanup function at the end of the move (sketched below). I can find all of the empty directories from the terminal via:
     find /mnt/mediapool/TV/ -type d -empty
     mediapool being the zfs dataset and TV being the directory.
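     To illustrate the mismatch, a quick check from the terminal (a sketch, assuming a 30-day cutoff; substitute whatever the share's age override actually is):
     # Files old enough for the mover to pick up (older than 30 days)
     find /mnt/mediapool/TV/ -type f -mtime +30
     # Directories old enough by the same test; a freshly created season folder
     # won't show up here even after everything inside it has been moved off
     find /mnt/mediapool/TV/ -type d -mtime +30
     The catch-all would simply be the empty-directory find above with -delete appended, run after the move completes.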
  3. Configured it to run again last night; manual runs didn't change the situation, and the empty folders are still there this morning. Some additional details and responses to questions from other respondents: It's a zfs pool, and the folders not being removed are just directories, not datasets. These are for my movie and TV shares, where the pool is the cache for the array. Files/directories are set to move based on age, and the mover is set to run at a % of pool used; basically, newer stuff stays on the pool and older stuff is moved onto the array. I have multiple pools and a couple of legacy btrfs caches that fall under the global mover tuning; however, on the above shares I override the age settings. I haven't tried a reboot yet, but will give that a go this evening. Open to other ideas, or did I just find an edge case?
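     For anyone wanting to double-check the dataset vs. plain directory point, a quick way from the terminal (a sketch, assuming the pool name mediapool from above):
     # Lists every real dataset in the pool; anything under the share path
     # that does NOT appear here is just an ordinary directory inside a dataset
     zfs list -r -t filesystem mediapool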
  4. Running 6.12.3. So the mover ran, and while it did move files from a zfs pool to the array as expected, I'm left with a bunch of empty directories in the share-created dataset; I also have the CA Mover Tuning plugin. I've tried to re-run the mover in the hope it would clean up the empty directories, but no dice. My setup is essentially tiered storage, with files moved off of the zfs pool based on age, so I don't want everything moved at once. Is this a known issue? If yes, is there any simple solution to clean them up (a reboot, etc.)? Thanks in advance.
  5. I would not call that a resolution, more a workaround. The values are retained in the smart-one.cfg file, and the behavior is the same in the top three browsers: Chrome, Firefox, and Edge. It seems like the new file isn't being read. Any idea how to get this issue into the Limetech bug queue? I've found half a dozen different threads across several forums all reporting the same thing.
  6. Just an update on this: I decided not to remove the docker, rather just the transcoding-related options, and once it was recreated, disabling HW transcoding in Plex, restarting the container, and then re-enabling HW transcoding seemed to solve the issue. However, I did have some challenges with using the /tmp directory (a RAM disk, essentially) as a temp transcode folder (rough sketch below). Disabling this was required to get it working, though it could have just been an issue with me trying to test from various browsers on the same client machine. Confirmed with external clients that everything was working as expected, but I will revisit the tmp transcode folder issue when I have more time; it was working just fine with 6.8.3.
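     For context, what I mean by the tmp transcode folder is along these lines (a rough sketch; the container path and subfolder name are just examples, not my exact template):
     # Path mapping on the Plex container; /tmp on Unraid lives in RAM,
     # so this makes /transcode a RAM-backed scratch area
     -v /tmp/plex-transcode:/transcode
     # Then, in Plex: Settings > Transcoder > "Transcoder temporary directory" = /transcode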
  7. Appreciate it, that's the one I'm using; I'll give that a shot
  8. Still having an issue with Plex and transcoding after updating my main server to 6.9.1 from 6.8.3. Jellyfin works just fine and uses the GPU, but not so with Plex; thoughts? Have tried restarts, etc., and nothing. Anyone else?
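     In case it helps anyone else hitting this, the basic checks would be (a sketch; "plex" is whatever the container is actually named, and this assumes the Nvidia driver plugin is installed):
     # Is the GPU visible to the host?
     nvidia-smi
     # Is the GPU visible inside the container? (needs --runtime=nvidia plus the
     # NVIDIA_VISIBLE_DEVICES / NVIDIA_DRIVER_CAPABILITIES variables on the template)
     docker exec -it plex nvidia-smi
     # Watch for a Plex transcode process appearing here while playing a transcoded stream
     watch nvidia-smi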
  9. Figured it out; here is a brief how-to for Unraid specifically. Alternately, @SpaceInvaderOne made an excellent video about the plugin a few years ago, which inspired me to try the following:
     Pull the container using CA, and make sure you enter the mount name like "NAME:"
     In the host (aka Unraid) terminal, run the command provided on page 1 of this post:
     docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" config
     Follow the on-screen guide; most of it follows other tutorials and the video referenced above. UNTIL you get to the part about using "auto config" or "manual". Turns out it is WAY easier to just use "manual", as you'll get a one-time URL to allow rclone access to your GDrive. After logging in and associating the auth request with your Gmail account, you'll get an auth key with a super easy copy button.
     Paste the auth/token key into the terminal window.
     Continue as before, and complete the config setup.
     CRITICAL - go to:
     cd /mnt/disks/
     ls -la
     Make sure the rclone_volume is there, and then correct the permissions so the container can see the folder, as noted previously in this thread:
     chown 911:911 /mnt/disks/rclone_volume/
     *assuming you're logged in as root, otherwise add "sudo"
     Restart the container, and verify you're not seeing any connection issues in the logs.
     From the terminal:
     cd /mnt/disks/rclone_volume
     ls -la
     Now you should see your files from GDrive.
     I was just testing to see if I could connect without risking anything in my drive folder, so everything was in read-only, including the initial mount creation with the config. As such, I didn't confirm any other containers could see the mount, but YMMV. Have a great evening and weekend.
  10. I don't think I'm the only one missing something; a brief tutorial would be super helpful. In Unraid, I can't create the conf file until I install the container using CA; not a big deal, moving on. I pulled the container and then tried to create the config file (note the container has to be up); it also wasn't clear whether I should run the command from the host terminal or the container console, so I did the former. I made it partway through the guided setup and got to the part where I need to load a webpage to allow rclone access to my personal GDrive (just testing). The URL uses localhost (127.0.0.1/etc.), so I'm basically stuck, as I can't get an API key from Google to continue. Did anyone else encounter this? Any other tutorials I could find seem to be running in a VM/bare metal instead of a container, and I didn't see any that gave the base URL plus whatever auth code request is being made to Google. Appreciate any help in advance.
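      For reference, the usual way around the 127.0.0.1 URL when configuring on a headless box is rclone's remote-authorize flow (a sketch; "drive" is the Google Drive backend name):
      # On any machine that has both a browser and rclone installed:
      rclone authorize "drive"
      # Log in, approve access, then copy the token blob it prints
      # and paste it back into the waiting config prompt on the server.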
  11. Admittedly I'm probably an oddball use case, but I have 2 Unraid servers: one internal (focused on a household Plex server and the requisite automation), and one external/utility server (Pi-hole, UniFi, external Plex server, etc.). I have the Pi-hole container on both, but only want one instance running at any one time so traffic isn't divided. Why 2? So if I manually take down the external/utility server, my clients will still have DNS. So, my question is whether it would be possible to have a listener on the server with the backup Pi-hole container that watches for the first one to go down and then automatically starts up if it misses a heartbeat (rough sketch below). Not an issue for manual maintenance, but more for if there is an unexpected outage. Basically, active/passive failover. This plugin functions in the opposite fashion, but I'm curious if it's possible. Thanks in advance, and awesome plugin!
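      What I have in mind is roughly this, run periodically (cron / a user script) on the server hosting the passive copy (a sketch; the IP and container name are placeholders):
      #!/bin/bash
      PRIMARY_IP=192.168.1.2   # primary Pi-hole's address (placeholder)
      CONTAINER=pihole         # name of the local, passive Pi-hole container (placeholder)
      if ping -c 2 -W 2 "$PRIMARY_IP" >/dev/null 2>&1; then
          # Primary answered its heartbeat: make sure the passive copy stays down
          docker stop "$CONTAINER" >/dev/null 2>&1
      else
          # Primary missed the heartbeat: bring the backup Pi-hole up
          docker start "$CONTAINER" >/dev/null 2>&1
      fi
      # A stricter health check would be an actual DNS query against the primary
      # (e.g., dig/nslookup) rather than a ping, if those tools are available.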
  12. Same issue with VyOS; I even tried pfSense with the same result. Downgrading to 6.3.5 at least has them showing up; however, I have 2 servers, one on 6.4.0 and now the "vswitch" one on 6.3.5. Something is off now, as latency is terrible with huge amounts of dropped packets. This setup was working perfectly fine for my 10Gb network with both machines on 6.3.5; what gives?
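      A quick way to quantify the latency/drop issue on the 10Gb link, independent of the vswitch config (a sketch; the IP is a placeholder, and it assumes iperf3 is available on both servers, e.g., via a plugin or container):
      # On server A:
      iperf3 -s
      # On server B: 30-second throughput test against server A
      iperf3 -c 10.0.0.10 -t 30
      # Basic latency and packet-loss numbers
      ping -c 100 10.0.0.10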
  13. Failed again 2TB in, and unfortunately upon restart it started at the beginning instead of where it left off. It seems there are a few possible causes: 1) the session/key may be timing out on the backend, which might be resolved by setting the "don't reuse connections" flag, and 2) the app shouldn't mark a backup as completed based on the time difference between the last start/finish and the new start. It should index all the files first so it knows the overall package specifics: what's actually in the backend vs. the source. I thought it was able to do incrementals, but the behavior suggests otherwise. Also, I have this set to keep only 1 version of the backup, however it's registering 3 for some reason.
  14. Try 8.6TB; turns out that when you get the timeout error you just have to wait 5 min or so and rerun the backup. It seems to pick up where it left off. Still, it should be able to better utilize the available resources; RAM is at 25%, so I would expect the CPU utilization to be better.
  15. Some small challenges getting this working reliably with Amazon Cloud Drive for large files, and a lot of options had to be added. I think I'm getting the timeout error when increasing either the number of asynchronous files and/or the volume size, because it takes so long to encrypt/compress the files. Looking at the docker CPU utilization, I might get 5% during a backup. Any ideas on how to take more advantage of the hardware? I already have the max thread and high thread priority options set. The problem with the timeout error is that it essentially breaks the backup due to incomplete files (tmp and backend). Even with the option to clean up unused files, it still won't re-run after this kind of error; basically, delete the backup and start over. I've only been able to get maybe 20GB or so before receiving it, and have almost 9TB to go. I have the hardware to theoretically complete a full backup to Amazon in just 24 hours, assuming their pipe is bigger than mine: Verizon FiOS just upgraded to 1Gb/s speeds, and I have dual Xeon 2670s w/128GB of RAM. I can spare some cycles to get the full backup complete, and slow things down when it's just incremental. Ideas, thoughts, etc. are very much appreciated, as at this rate it might be done in a few months.