loond

Posts posted by loond

  1. 7 minutes ago, loond said:

    Just had another thought: what if the folder has a different date than the files within it? If the folder is newer and doesn't meet the age cutoff yet, it isn't moved/removed, even if the files within it do meet the age criteria and are moved, which results in an empty directory.

    Not sure on the logic being used, but there should be a catch-all that removes a directory if it's empty, regardless of the age setting; basically, a cleanup function at the end of the move. I can find all of the empty directories from the terminal via:

     

    find /mnt/mediapool/TV/ -type d -empty

     

    mediapool being the zfs dataset and TV being the directory

     

    Adding "-delete" the above does clean up the empty folders.

     

    find /mnt/mediapool/TV/ -type d -empty -delete

     

    No impact when looking at /mnt/user/*, meaning the files and directories are still on the array as desired.
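     

    For anyone wanting to automate that, here's a minimal sketch of a cleanup that could run on a schedule (e.g. via the User Scripts plugin) after the mover; the path is from my setup, and -mindepth just keeps the top-level share folder itself from being removed:

      #!/bin/bash
      # Prune empty directories the mover leaves behind on the pool.
      # Path is specific to my setup -- adjust for your own share.
      find /mnt/mediapool/TV/ -mindepth 1 -type d -empty -delete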

  2. Just had another thought: what if the folder has a different date than the files within it? If the folder is newer and doesn't meet the age cutoff yet, it isn't moved/removed, even if the files within it do meet the age criteria and are moved, which results in an empty directory.

    Not sure on the logic being used, but there should be a catch-all that removes a directory if it's empty, regardless of the age setting; basically, a cleanup function at the end of the move. I can find all of the empty directories from the terminal via:

     

    find /mnt/mediapool/TV/ -type d -empty

     

    mediapool being the zfs dataset and TV being the directory

  3. 23 hours ago, hugenbdd said:

    It should delete the empty directories second/next time it's run.

     

    Configured it to run again last night; manual runs didn't change the situation. The empty folders are still there this morning. Some additional details and responses to questions from other respondents:

    It's a ZFS pool, and the folders not being removed are just directories, not datasets (quick check for that sketched below). These are for my movies and TV shares, where the pool is the cache for the array. Files/directories are set to move based on age, and the mover is set to run at a percentage of pool used. Basically, newer stuff stays on the pool and older stuff is moved onto the array. I have multiple pools and a couple of legacy btrfs caches that fall under the global mover tuning; however, on the above shares I override the age settings. I haven't tried a reboot yet, but will give that a go this evening.

     

    Open to other ideas, or did I just find an edge case?
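     

    A quick way to double-check that the leftovers are plain directories rather than child datasets (pool/dataset names are the ones from my setup):

      zfs list -r -t filesystem mediapool
      find /mnt/mediapool/TV/ -type d -empty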

  4. Running 6.12.3

     

    So the mover ran, and while it did move files from a ZFS pool to the array as expected, I'm left with a bunch of empty directories in the share-created dataset; I also have the CA Mover Tuning plugin. I've tried to re-run the mover in the hope it would clean up the empty directories, but no dice. My setup is essentially tiered storage with files moved off of the ZFS pool based on age, so I don't want everything moved at once.

     

    Is this a known issue? If yes, is there a simple solution to clean them up, like a reboot, etc.?

     

    Thanks in advance.

  5. I would not call that a resolution, more a workaround. The values are retained in the smart-one.cfg file, and the same thing happens in the top three browsers: Chrome, Firefox, and Edge. Seems like the new file isn't being read. Any idea how to get this issue into the Limetech bug queue? I've found half a dozen different threads across several forums all reporting the same thing.
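     

    If anyone else wants to verify the values are at least being written to disk, a quick check from the terminal (path is my assumption; I believe the per-disk SMART settings live on the flash drive):

      cat /boot/config/smart-one.cfg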

  6. Just an update on this: I decided not to remove the Docker container, rather just the transcoding-related options, and once it was recreated, disabled HW transcoding in Plex, restarted the container, and then re-enabled HW transcoding. This seemed to solve the issue; however, I did have some challenges with using the /tmp directory (essentially a RAM disk) as a temp transcode folder. Disabling this was required to get it working, though it could have just been an issue with me trying to test from various browsers on the same client machine. Confirmed with external clients that everything was working as expected, but I will revisit the tmp transcode folder issue when I have more time; it was working just fine with 6.8.3.
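     

    For anyone curious about the RAM-disk transcode setup, the idea is just to map a path under the host's /tmp (which lives in RAM on Unraid) into the container and point Plex's "Transcoder temporary directory" setting at it. A rough sketch only; the container name, image, and paths are examples, not my exact template:

      # Map a RAM-backed host folder to /transcode inside the container,
      # then set Plex's transcoder temp directory to /transcode in its settings.
      docker run -d --name plex \
        -v /tmp/plex-transcode:/transcode \
        plexinc/pms-docker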

  7. On 7/24/2020 at 4:40 PM, loond said:

    I don't think I'm the only one missing something; a brief tutorial would be super helpful. In Unraid, I can't create the conf file until I install the container using CA; not a big deal, but moving on. I pulled the container and then tried to create the config file (note the container has to be up); it's also not clear if I should run the command from the host terminal or the container console; I did the former. I made it partway through the guided setup and got to the part where I need to load a webpage to allow rclone access to my personal gdrive (just testing). The URL uses localhost (127.0.0.1, etc.). Basically stuck, as I can't get an API key from Google to continue. Did anyone else encounter this?

    Any other tutorials I could find seem to be running in a VM/bare metal instead of a container, and I didn't see any that gave the base URL plus whatever auth code request is being made to Google. Appreciate any help in advance.

    Figured it out; here is a brief how-to for Unraid specifically. Alternatively, @SpaceInvaderOne made an excellent video about the plugin a few years ago, which inspired me to try the following:

     

    1. Pull the container using CA, and make sure you enter the mount name like "NAME:"
    2. In the host (aka Unraid) terminal run the provided command on page 1 of this post:
      docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" config
    3. Follow the on-screen guide; most of it flows with other tutorials and the video referenced above. UNTIL you get to the part about using "auto config" or "manual". Turns out it is WAY easier to just use "manual", as you'll get a one-time URL to allow rclone access to your GDrive.
    4. After logging in and associating the auth request with your Gmail account, you'll get an auth key with a super easy copy button.
    5. Paste the auth/token key into the terminal window
    6. Continue as before, and complete the config setup
    7. CRITICAL - go to:
      cd /mnt/disks/
      ls -la

      Make sure the rclone_volume is there, and then correct the permissions so the container can see the folder, as noted previously in this thread:

      chown 911:911 /mnt/disks/rclone_volume/

      *assuming you're logged in as root, otherwise add "sudo"

    8. Restart the container, and verify you're not seeing any connection issues in the logs

    9. From the terminal

      cd /mnt/disks/rclone_volume
      ls -la

      Now you should see your files from GDrive

     

    I was just testing to see if I could connect without risking anything in my drive folder, so everything was read-only, including the initial mount creation in the config. As such, I didn't confirm any other containers could see the mount, but YMMV (one extra verification command below). Have a great evening and weekend.
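     

    On top of step 9, a quick way to check the remote from the host side without touching the mount (same config path and "NAME:" placeholder as in step 2):

      docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" lsd NAME: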

  8. I don't think I'm the only one missing something; a brief tutorial would be super helpful. In Unraid, I can't create the conf file until I install the container using CA; not a big deal, but moving on. I pulled the container and then tried to create the config file (note the container has to be up); it's also not clear if I should run the command from the host terminal or the container console; I did the former. I made it partway through the guided setup and got to the part where I need to load a webpage to allow rclone access to my personal gdrive (just testing). The URL uses localhost (127.0.0.1, etc.). Basically stuck, as I can't get an API key from Google to continue. Did anyone else encounter this?

    Any other tutorials I could find seem to be running in a VM/bare metal instead of a container, and I didn't see any that gave the base URL plus whatever auth code request is being made to Google. Appreciate any help in advance.

  9. Admittedly I'm probably an oddball use case, but I have two Unraid servers: one internal (focused on a household Plex server and the requisite automation), and one external/utility server (Pi-hole, UniFi, external Plex server, etc.). I have the Pi-hole container on both, but only want one instance running at any one time so traffic isn't divided. Why two? So if I manually take down the external/utility server, my clients will still have DNS.

    So, my question is whether it would be possible to have a listener on the server with the backup Pi-hole container that watches for the first one to go down, and then automatically starts the backup if it misses a heartbeat (rough sketch of what I mean below). Not an issue for manual maintenance, but more for when there is an unexpected outage. Basically, active/passive failover. This plugin functions in the opposite fashion, but I'm curious if it's possible.

     

    Thanks in advance, and awesome plugin!
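     

    To illustrate the idea, a minimal sketch of the kind of heartbeat watcher I mean -- something that could run on the backup box on a cron schedule (e.g. via the User Scripts plugin); the IP and container name are placeholders:

      #!/bin/bash
      # Watch the primary Pi-hole; start the local backup instance only if
      # the primary stops answering DNS queries.
      PRIMARY_DNS="192.168.1.2"       # IP of the primary Pi-hole (placeholder)
      BACKUP_CONTAINER="pihole"       # local backup container name (placeholder)

      if dig +time=2 +tries=1 @"$PRIMARY_DNS" pi.hole > /dev/null 2>&1; then
          # Primary is healthy: keep the backup stopped so traffic isn't divided.
          docker stop "$BACKUP_CONTAINER" > /dev/null 2>&1
      else
          # Missed heartbeat: bring the backup Pi-hole up.
          docker start "$BACKUP_CONTAINER"
      fi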

  10. Same issue with VyOS; I even tried pfSense with the same result. Downgrading to 6.3.5 at least has them showing up; however, I now have two servers, one on 6.4.0 and the "vswitch" one on 6.3.5. Something is off now, as latency is terrible with huge amounts of dropped packets. This setup was working perfectly fine for my 10Gb network with both machines on 6.3.5; what gives?
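     

    If it helps anyone narrow it down, these are the kinds of quick checks I'd use to quantify the latency/drop problem from the Unraid terminal (interface and target are just examples):

      ping -c 100 -i 0.2 <gateway-ip>     # rough latency/loss numbers
      ethtool -S eth0 | grep -i drop      # NIC-level drop counters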

  11. Failed again 2TB in, and unfortunately upon restart it started at the beginning instead of where it left off. It seems there are a few possible causes: 1) the session/key may be timing out on the backend, which might be resolved by setting the "don't reuse connections" flag, and 2) the app shouldn't mark a backup as completed just because of the time difference between the last start/finish and the new start. It should index all the files first so it knows the overall package specifics, i.e. what's actually in the backend vs. the source. I thought it was able to do incrementals, but the behavior suggests otherwise. Also, I have this set to keep only 1 version of the backup; however, it's registering 3 for some reason.

  12. Some small challenges getting this working reliably with Amazon Cloud Drive for large files, and I had to have a lot of options added. I think I'm getting the timeout error when increasing either the number of asynchronous files and/or the volume size, because it's taking so long to encrypt/compress the files. Looking at the Docker CPU utilization, I might get 5% during a backup. Any ideas on how to take more advantage of the hardware? I already have the max-thread and high-thread-priority options set. The problem with the timeout error is it essentially breaks the backup due to incomplete files, both tmp and backend. Even with the option to clean up unused files, it still won't re-run after this kind of error; basically, delete the backup and start over. I've only been able to get maybe 20GB or so before receiving it, and have almost 9TB to go ;-)

     

    I have the hardware to theoretically complete a full backup to Amazon in just 24 hours, assuming their pipe is bigger than mine. Verizon FiOS just upgraded to 1Gb/s speeds, and I have dual Xeon 2670s with 128GB of RAM. I can spare some cycles to get the full backup complete, and slow things down when it's just incremental. Ideas, thoughts, etc. are very much appreciated, as at this rate it might be done in a few months.
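     

    Rough math behind the "about 24 hours" figure, ignoring encryption/compression time and protocol overhead (pure transfer of ~9TB at a sustained 1Gb/s):

      echo "scale=1; (9 * 1000^4 * 8) / (1000^3) / 3600" | bc
      # ~20.0 hours of wire time, so ~24 hours with some overhead is the hope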

  13. I still get the errors (although it didn't happen last night for some reason; I turned network off, but event monitoring is still on. Will change back to confirm); however, the temps are now closer with the new RAM. One processor might have been working harder shuffling data through the bus? BTW, this is where I purchased all of my RAM: http://www.ebay.com/itm/162349088753?_trksid=p2057872.m2749.l2649&ssPageName=STRK%3AMEBIDX%3AIT

  14. Thank you both for the quick response, and I appreciate your contribution to the community. This has turned into a bit of a lab for me, this being the 3rd iteration of the system. Once I figured out/understood the capability of Unraid, I just had to keep going; don't talk to my wife though ;-)

     

    A few observations, and I'm not sure whether you might have experienced the same. The plugin reports errors for both processors, whereas the web GUI only reports the primary CPU. Also, in each instance the event lasted exactly 5 seconds according to the event log in the GUI. I had another strange problem previously where I was getting machine check errors with the RAM evenly distributed in the primary memory slots between each CPU. When I was getting those errors I wasn't getting the temp errors, but one of the CPUs consistently indicated 5C hotter. After verifying via memtest that the RAM was good, I "solved" the issue by adding another 32GB, maxing out the primary CPU memory slots. Also, when I was getting the machine check errors, the 4 NICs would constantly fail over due to an IRQ 16 error. They definitely have some issues with their BIOS. It does make me wonder, though, if the BIOS just expects to see all the slots full given its expected use case, and if not just can't handle the delta.
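     

    For anyone chasing similar symptoms, a couple of quick checks from the Unraid terminal (nothing fancy, just where I'd look first):

      dmesg | grep -iE "machine check|mce"    # machine check events in the kernel log
      grep " 16:" /proc/interrupts            # see what the NICs share on IRQ 16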

  15. Excellent plugin. Quick question on fan control. I have an ASRock board, but fan control is disabled. After reading through the changelog, it looks like you make an assumption about the fan naming convention. I assume this works for a single-socket MB, but it does not align with how things are named in a dual-socket configuration: CPU_FAN1_1, CPU_FAN1_2, CPU_FAN2_1, and CPU_FAN2_2. Any thoughts on supporting this type of configuration? The MB is an EP2C602-4L/D16. Unfortunately, ASRock doesn't let you make changes via their web interface, so this was my best option to avoid having to reboot into the BIOS.

     

    Currently chasing down an Upper Critical non-recoverable error for both CPUs, where one pegs at 101C and the other is at 40C but also has the error. I seriously doubt the CPU actually hit 101C even if the fan stalled, and am guessing this is some config error; open to suggestions on this one as well. There is enough airflow from the drive fans (3x140 - 1000), the other CPU fan (Hyper 212s), the exhaust fan (1x120 - 1500), and the side/MB fan (1x140 - 1100) to provide good passive cooling, especially since the load at the time of the error was basically idle.
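     

    In case it's useful for mapping the naming convention or sanity-checking the 101C reading, the raw sensor names and values can be pulled straight from the BMC with standard ipmitool queries (not specific to the plugin):

      ipmitool sdr type Fan            # shows the CPU_FANx_y names the board exposes
      ipmitool sdr type Temperature    # raw CPU temp readings behind the error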