Personal Text: Dual Xeon 2670 - ASrock EP2C602-4L/D16 - LSI HBA - GTX970 - Dual Parity 18TB array w/Cache Pool

loond's Achievements





  1. I would not call that a resolution, more of a workaround. The values are retained in the smart-one.cfg file, and the same behavior shows in the top three browsers (Chrome, Firefox, and Edge); it seems the new file isn't being read. Any idea how to get this issue into the Limetech bug queue? I've found half a dozen different threads across several forums all reporting the same thing.
  2. Just an update on this: I decided not to remove the Docker container, just the transcoding-related options. Once it was recreated, I disabled HW transcoding in Plex, restarted the container, and then re-enabled HW transcoding. This seemed to solve the issue; however, I did have some challenges using the /tmp directory (essentially a RAM disk) as a temp transcode folder. Disabling this was required to get it working, though it could have just been an issue with me testing from various browsers on the same client machine. I confirmed with external clients that everything was working as expected, but will revisit the tmp transcode folder issue when I have more time; it was working just fine on 6.8.3.
  3. Appreciate it, that's the one I'm using; I'll give that a shot
  4. Still having an issue with Plex and transcoding after updating my main server from 6.8.3 to 6.9.1. Jellyfin works just fine and uses the GPU, but not so with Plex; thoughts? I have tried restarts, etc., and nothing. Anyone else?
  5. Figured it out; here is a brief how-to for Unraid specifically. Alternately, @SpaceInvaderOne made an excellent video about the plugin a few years ago, which inspired me to try the following:
     - Pull the container using CA, and make sure you enter the mount name like "NAME:"
     - In the host (aka Unraid) terminal, run the command provided on page 1 of this post: docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" config
     - Follow the onscreen guide; most of it flows with other tutorials and the video referenced above, UNTIL you get to the part about using "auto config" or "manual". It turns out it is WAY easier to just use "manual", as you'll get a one-time URL to allow rclone access to your GDrive. After logging in and associating the auth request with your Gmail account, you'll get an auth key with a super easy copy button.
     - Paste the auth/token key into the terminal window, continue as before, and complete the config setup.
     - CRITICAL: run cd /mnt/disks/ followed by ls -la, make sure the rclone_volume folder is there, and then correct the permissions so the container can see the folder, as noted previously in this thread: chown 911:911 /mnt/disks/rclone_volume/ (assuming you're logged in as root; otherwise add "sudo")
     - Restart the container, and verify you're not seeing any connection issues in the logs.
     - From the terminal, cd /mnt/disks/rclone_volume and ls -la; now you should see your files from GDrive.
     I was just testing to see if I could connect without risking anything in my Drive folder, so everything was read-only, including the initial mount creation with the config. As such, I didn't confirm any other containers could see the mount, but YMMV. Have a great evening and weekend.
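     The post-setup checks in the steps above can be sketched as a small script. This is illustrative only: the container name Rclone-mount, the config path, and the mount path come from the post, while verify_mount is a hypothetical helper, not part of rclone or the container.

     ```sh
     #!/bin/sh
     # Sketch of the post-config verification steps. The interactive config
     # itself (choose "manual" auth when prompted) would be run first:
     #   docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" config

     # Check that the mount directory exists before fixing its ownership so the
     # container (UID/GID 911 in many container images) can see the folder.
     verify_mount() {
       dir="$1"
       if [ -d "$dir" ]; then
         echo "found: $dir"
       else
         echo "missing: $dir"
         return 1
       fi
     }

     # Usage (commented out; chown requires root):
     # verify_mount /mnt/disks/rclone_volume && chown 911:911 /mnt/disks/rclone_volume/
     ```

     Keeping the chown behind the existence check avoids creating or touching a path before the rclone config has actually produced the mount.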
  6. I don't think I'm the only one missing something; a brief tutorial would be super helpful. In Unraid, I can't create the conf file until I install the container using CA; not a big deal, but moving on. I pulled the container and then tried to create the config file (note the container has to be up); it also wasn't clear whether I should run the command from the host terminal or the container console, so I did the former. I made it partway through the guided setup and got to the part where I need to load a webpage to allow rclone access to my personal GDrive (just testing). The URL uses localhost... (etc.). Basically stuck, as I can't get an API key from Google to continue. Did anyone else encounter this? Any other tutorials I could find seem to be running in a VM/bare metal instead of a container, and I didn't see any that gave the base URL plus whatever AUTH code request is being made to Google. Appreciate any help in advance.
  7. Admittedly I'm probably an oddball use case, but I have two Unraid servers: one internal (focused on a household Plex server and the requisite automation), and one external/utility server (Pi-hole, UniFi, external Plex server, etc.). I have the Pi-hole container on both, but only want one instance running at any one time so traffic isn't divided. Why two? So if I manually take down the external/utility server, my clients will still have DNS. So, my question is whether it would be possible to have a listener on the server with the backup Pi-hole container that listens for the first one to go down, and then automatically starts up if it misses a heartbeat. Not an issue for manual maintenance, but more for an unexpected outage. Basically, active/passive failover. This plugin functions in the opposite fashion, but I'm curious if it is possible. Thanks in advance, and awesome plugin!
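     The heartbeat-listener idea above can be sketched roughly as follows. This is a hedged sketch, not anything the plugin provides: the hostname pihole-primary, the container name pihole, and the miss threshold are all assumptions for illustration.

     ```sh
     #!/bin/sh
     # Active/passive failover sketch: the backup server polls the primary
     # Pi-hole and starts its own container only after several consecutive
     # missed heartbeats (to ride out a single dropped ping).

     MISS_LIMIT=3

     # Pure decision logic: fail over once the consecutive-miss count
     # reaches the limit.
     should_failover() {
       misses="$1"
       [ "$misses" -ge "$MISS_LIMIT" ]
     }

     # Main loop (commented out; would run under cron or the User Scripts plugin):
     # misses=0
     # while true; do
     #   if ping -c1 -W2 pihole-primary >/dev/null 2>&1; then
     #     misses=0
     #   else
     #     misses=$((misses + 1))
     #   fi
     #   should_failover "$misses" && docker start pihole
     #   sleep 10
     # done
     ```

     Requiring multiple consecutive misses before starting the backup keeps a momentary network blip from briefly running two DNS instances at once.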
  8. Same issue with VyOS; I even tried pfSense with the same result. Downgrading to 6.3.5 at least has them showing up; however, I now have two servers, one on 6.4.0 and the "vswitch" one on 6.3.5. Something is off now, as latency is terrible with huge amounts of dropped packets. This setup was working perfectly fine for my 10Gb network with both machines on 6.3.5; what gives?
  9. Failed again 2TB in, and unfortunately upon restart it started from the beginning instead of where it left off. It seems there are a few possible causes: 1) the session/key may be timing out on the backend, which might be resolved by setting the "don't reuse connections" flag, and 2) the app shouldn't mark a backup as completed based on the time difference between the last start/finish and the new start. It should index all the files first so it knows the overall package specifics: what's actually in the backend vs. the source. I thought it was able to do incrementals, but the behavior suggests otherwise. Also, I have this set to keep only one version of the backup, yet it's registering three for some reason.
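     The "index first" suggestion above amounts to diffing a listing of the source against what the backend already holds, and only uploading the difference. A minimal sketch, assuming sorted newline-separated listings (the pending_files helper is illustrative, not part of the backup app):

     ```sh
     #!/bin/sh
     # Compare a source file listing against a backend listing and print only
     # the files still pending upload. Both inputs must be sorted, since comm
     # works line-by-line on sorted input.
     pending_files() {
       src_list="$1"   # path to sorted listing of the source
       dst_list="$2"   # path to sorted listing of the backend
       # comm -23 prints lines unique to the first file (source-only entries).
       comm -23 "$src_list" "$dst_list"
     }
     ```

     With an index like this, an interrupted run could resume by uploading only the pending set instead of restarting from the beginning.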
  10. Try 8.6TB; turns out when you get the timeout error you just have to wait five minutes or so and rerun the backup. It seems to pick up where it left off. Still, it should be able to better utilize the available resources; RAM is at 25%, so I would expect the CPU utilization to be higher.
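     The wait-and-rerun workaround above can be wrapped in a simple retry loop. This is a sketch only; the command being retried and the five-minute interval are taken from the post, and retry_backup is an illustrative wrapper, not a feature of the app.

     ```sh
     #!/bin/sh
     # Rerun a backup command until it succeeds, pausing between attempts
     # (the backup picks up where it left off after a timeout error, per
     # the observation above).
     retry_backup() {
       cmd="$1"
       wait_secs="${2:-300}"   # default ~5 minutes between attempts
       attempts=0
       until $cmd; do
         attempts=$((attempts + 1))
         echo "attempt $attempts failed; retrying in ${wait_secs}s"
         sleep "$wait_secs"
       done
     }

     # Usage (hypothetical command name):
     # retry_backup "my-backup-job" 300
     ```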
  11. Some small challenges getting this working reliably with Amazon Cloud Drive for large files; I had to have a lot of options added. I think I'm getting the timeout error when increasing either the number of asynchronous files and/or the volume size, because it's taking so long to encrypt/compress the files. Looking at the Docker CPU utilization, I might get 5% during a backup. Any ideas on how to take more advantage of the hardware? I already have the max threads and high thread priority options set. The problem with the timeout error is that it essentially breaks the backup due to incomplete files, both tmp and backend. Even with the option to clean up unused files, it still won't re-run after this kind of error; you basically have to delete the backup and start over. I've only been able to get maybe 20GB or so before receiving it, and have almost 9TB to go. I have the hardware to theoretically complete a full backup in just 24 hours to Amazon, assuming their pipe is bigger than mine: Verizon Fios just upgraded to 1Gb/s speeds, and I have dual Xeon 2670s with 128GB of RAM. I can spare some cycles to get the full backup complete, and slow things down when it's just incremental. Ideas, thoughts, etc. are very much appreciated, as at this rate it might be done in a few months.
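     The "full backup in about 24 hours" estimate above checks out as a back-of-the-envelope calculation, assuming the link runs at full line rate with no compression, encryption, or protocol overhead:

     ```sh
     #!/bin/sh
     # Rough transfer-time estimate: ~9 TB over a 1 Gb/s link, decimal units,
     # ignoring encryption/compression time and protocol overhead.
     tb=9                          # terabytes to upload
     link_gbps=1                   # line rate in gigabits per second
     bits=$((tb * 8 * 1000))       # 9 TB = 72,000 gigabits
     seconds=$((bits / link_gbps)) # 72,000 seconds at 1 Gb/s
     hours=$((seconds / 3600))
     echo "${hours} hours"         # prints "20 hours"
     ```

     So roughly 20 hours at full line rate; at the observed ~5% CPU utilization, the encryption/compression stage, not the link, is the bottleneck.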
  12. I still get the errors (although it didn't happen last night for some reason; I turned the network off, but event monitoring is still on; will change back to confirm). However, the temps are now closer with the new RAM; one processor might have been working harder shuffling data through the bus? BTW, this is where I purchased all of my RAM: http://www.ebay.com/itm/162349088753?_trksid=p2057872.m2749.l2649&ssPageName=STRK%3AMEBIDX%3AIT
  13. Thank you both for the quick response, and I appreciate your contribution to the community. This has turned into a bit of a lab for me, this being the 3rd iteration of the system. Once I figured out/understood the capability of Unraid, I just had to keep going; don't talk to my wife though. A few observations, and I'm not sure if you have experienced the same. The plugin reports errors for both processors, whereas the web GUI only reports the primary CPU. Also, in each instance the event lasted exactly 5 seconds according to the event log in the GUI. I had another strange problem previously where I was getting machine check errors with the RAM evenly distributed in the primary memory slots between each CPU. When I was getting those errors I wasn't getting the temp errors, but one of the CPUs consistently indicated 5C hotter. After verifying via memtest that the RAM was good, I "solved" the issue by adding another 32GB, maxing out the primary CPU memory slots. Also, when I was getting the machine check errors, the 4 NICs would constantly fail over due to an IRQ 16 error. They definitely have some issues with their BIOS. It does make me wonder, though, if the BIOS just expects to see all the slots full given its expected use case, and if not, just can't handle the delta.
  14. Excellent plugin. Quick question on fan control. I have an Asrock board, but fan control is disabled. After reading through the changelog, it looks like you make an assumption about the fan naming convention. I assume this works for a single-socket MB, but it does not align with how things are named in a dual-socket configuration: CPU_FAN1_1, CPU_FAN1_2, CPU_FAN2_1, and CPU_FAN2_2. Any thoughts on supporting this type of configuration? The MB is an EP2C602-4L/D16. Unfortunately, Asrock doesn't let you make changes via their web interface, so this was my best option to avoid having to reboot into the BIOS. I'm currently chasing down an Upper Critical non-recoverable error for both CPUs, where one pegs at 101C and the other is at 40C but also has the error. I seriously doubt the CPU actually hit 101C even if the fan stalled, and am guessing this is some config error; open to suggestions on this one as well. There is enough airflow from the drive fans (3x140 - 1000), the other CPU fan (Hyper 212s), the exhaust fan (1x120 - 1500), and the side/MB fan (1x140 - 1100) to provide good passive cooling, especially since the load at the time of the error was basically idle.
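    Supporting the dual-socket naming above might come down to broadening the match pattern for fan sensor names. An illustrative sketch (not the plugin's actual code): a pattern that accepts both single-socket names like CPU_FAN1 and the dual-socket CPU_FAN1_1 / CPU_FAN2_2 style this board reports.

    ```sh
    #!/bin/sh
    # Match fan sensor names with an optional per-socket suffix, so both
    # "CPU_FAN1" (single socket) and "CPU_FAN1_2" (dual socket) are accepted,
    # while unrelated sensors like "SYS_FAN1" are not.
    is_cpu_fan() {
      printf '%s\n' "$1" | grep -Eq '^CPU_FAN[0-9]+(_[0-9]+)?$'
    }
    ```

    A check like this would let the plugin enumerate whatever CPU fan headers the board exposes rather than assuming a fixed single-socket name.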