loond

Members
  • Content Count

    14
  • Joined

Community Reputation

0 Neutral

About loond

  • Rank
    Newbie

Converted

  • Gender
    Undisclosed
  • Personal Text
    Dual Xeon 2670 - ASrock EP2C602-4L/D16 - LSI HBA - GTX970 - Dual Parity 18TB array w/Cache Pool
  1. I would not call that a resolution, more a workaround. The values are retained in the smart-one.cfg file and in the top 3 browsers: Chrome, Firefox, and Edge. It seems like the new file isn't being read. Any idea how to get this issue into the Limetech bug queue? I've found half a dozen different threads across several forums all reporting the same thing.
  2. Just an update on this; I decided not to remove the docker, rather just the transcoding-related options, and once recreated, disabling HW transcoding in Plex, restarting the container, and then re-enabling HW transcoding. This seemed to solve the issue; however, I did have some challenges with using the /tmp directory (essentially a RAM disk) as a temp transcode folder. Disabling this was required to get it working, though it could have just been an issue with me trying to test from various browsers on the same client machine. Confirmed with external clients that everything w
  3. Appreciate it, that's the one I'm using; I'll give that a shot
  4. Still having an issue with Plex and transcoding after updating my main server to 6.9.1 from 6.8.3. Jellyfin works just fine and uses the GPU, but not so with Plex; thoughts? I have tried restarts, etc., and nothing. Anyone else?
  5. Figured it out; here is a brief how-to for Unraid specifically. Alternately, @SpaceInvaderOne made an excellent video about the plugin a few years ago, which inspired me to try the following: Pull the container using CA, and make sure you enter the mount name like "NAME:" In the host (aka Unraid) terminal, run the provided command on page 1 of this post: docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" config Follow the on-screen guide; most of it flows with other tutorials and the video referenced above. UNTIL - you get to the part about using "auto con
  6. I don't think I'm the only one missing something; a brief tutorial would be super helpful. In Unraid, I can't create the conf file until I install the container using CA; not a big deal, but moving on. Pull the container and then try to create the config file (note the container has to be up); it's also not clear whether I should run the command from the host terminal or the container console; I did the former. I made it partway through the guided setup and got to the part where I need to load a webpage to allow rclone access to my personal gdrive (just testing). The URL uses localhost...(127.0.0.1/
  7. Admittedly I'm probably an oddball use case, but I have 2 Unraid servers: one internal (focused on a household Plex server and the requisite automation), and one external/utility server (pihole, unifi, external Plex server, etc.). I have the pihole container on both, but only want one instance running at any one time so traffic isn't divided. Why 2? So that if I manually take down the external/utility server, my clients will still have DNS. So, my question is whether it would be possible to have a listener on the server with the backup pihole container to listen for the 1st to go down, and then a
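The listener idea above can be sketched as a small watchdog that probes the primary Pi-hole and starts or stops the backup container accordingly. This is only a sketch: the IP address, the container name, and the `backup_is_running` helper mentioned in the comment are all hypothetical placeholders, not anything from the post.

```python
import socket
import subprocess

PRIMARY_DNS = "192.168.1.2"   # hypothetical IP of the primary Pi-hole
BACKUP_CONTAINER = "pihole"   # hypothetical name of the backup container

def primary_alive(host=PRIMARY_DNS, port=53, timeout=2.0):
    """Return True if the primary answers on TCP port 53 (DNS)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def decide(primary_up, backup_running):
    """Return the docker action to take, or None if nothing to do."""
    if not primary_up and not backup_running:
        return "start"            # primary died: bring up the backup
    if primary_up and backup_running:
        return "stop"             # primary is back: hand DNS over again
    return None

# On the backup server, the watchdog loop would periodically run e.g.:
#   action = decide(primary_alive(), backup_is_running())
#   if action:
#       subprocess.run(["docker", action, BACKUP_CONTAINER], check=True)
```

Separating the decision from the side effects keeps the logic easy to test without docker on hand; polling every 10-30 seconds would be plenty for home DNS.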
  8. Same issue with VyOS; I even tried pfSense with the same issue. Downgrading to 6.3.5 at least has them showing up; however, I have 2 servers, 1 on 6.4.0 and now the "vswitch" on 6.3.5. Something is off now, as latency is terrible with huge amounts of dropped packets. This setup was working perfectly fine for my 10Gb network with both machines on 6.3.5; what gives?
  9. Failed again 2TB in, and unfortunately upon restart it started at the beginning instead of where it left off. It seems there are a few possible causes: 1) the session key may be timing out on the backend, which might be resolved by setting the "don't reuse connections" flag, and 2) the app shouldn't mark a backup as completed due to the time difference between the last start/finish and the new start. It should index all the files first so it knows the overall package specifics: what's actually in the backend vs. the source. I thought it was able to do incremental, but the behavior suggests otherwise
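The "index first" idea above boils down to diffing a listing of the source against a listing of the backend before uploading anything. A minimal sketch, with made-up file metadata (path mapped to size and mtime) standing in for whatever the backup tool actually records:

```python
def plan_incremental(source, backend):
    """Given {path: (size, mtime)} maps for the source and the backend,
    return (files still needing upload, files already done)."""
    to_upload = {p for p, meta in source.items()
                 if backend.get(p) != meta}
    already_done = set(source) - to_upload
    return to_upload, already_done

# Example: a.bin is untouched, b.bin changed, c.bin is new.
src = {"a.bin": (10, 1), "b.bin": (20, 2), "c.bin": (30, 3)}
dst = {"a.bin": (10, 1), "b.bin": (20, 1)}
up, done = plan_incremental(src, dst)
print(sorted(up))    # ['b.bin', 'c.bin']
print(sorted(done))  # ['a.bin']
```

With a plan like this built up front, a restart after a 2TB failure would re-upload only what the backend is actually missing instead of starting from scratch.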
  10. Try 8.6TB; turns out when you get the timeout error you just have to wait 5 min or so and rerun the backup. It seems to pick up where it left off. Still, it should be able to better utilize the available resources; RAM is at 25%, so I would expect CPU utilization to be higher.
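The wait-and-rerun trick above is easy to automate with a small retry wrapper. A sketch under stated assumptions: `run_backup` is a hypothetical callable that kicks off the backup and raises `TimeoutError` when the tool hits its timeout, and the 5-minute wait mirrors what worked in practice here.

```python
import time

def run_with_retries(run_backup, max_attempts=5, wait_seconds=300):
    """Rerun a backup that dies with a timeout; the tool is assumed to
    pick up where it left off on the next run (as observed above)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return run_backup()
        except TimeoutError:
            if attempt == max_attempts:
                raise            # give up after the last attempt
            time.sleep(wait_seconds)
```

Capping the attempts keeps a genuinely broken backend from looping forever, while transient backend timeouts just cost a few quiet waits.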
  11. Some small challenges to get this working reliably with Amazon Cloud Drive for large files, and I had to have a lot of options added. I think I'm getting the timeout error when increasing either the number of asynchronous files and/or the volume size because it's taking so long to encrypt/compress the files. Looking at the docker CPU utilization, I might get 5% during a backup. Any ideas on how to take more advantage of the hardware? I already have the max threads and high thread priority options set. The problem with the timeout error is it essentially breaks the backup due to incomplete files; tmp an
  12. I still get the errors (although it didn't happen last night for some reason; I turned the network off, but event monitoring is still on. Will change back to confirm); however, the temps are now closer with the new RAM; one processor might have been working harder shuffling data through the bus? BTW, this is where I purchased all of my RAM: http://www.ebay.com/itm/162349088753?_trksid=p2057872.m2749.l2649&ssPageName=STRK%3AMEBIDX%3AIT
  13. Thank you both for the quick response, and I appreciate your contribution to the community. This has turned into a bit of a lab for me; this is the 3rd iteration of the system. Once I figured out/understood the capability of Unraid, I just had to keep going; don't talk to my wife, though. A few observations; I'm not sure if you've experienced the same. The plugin reports errors for both processors, whereas the web GUI only reports the primary CPU. Also, in each instance the event lasted exactly 5 sec according to the event log in the GUI. I had another strange problem pre
  14. Excellent plugin. Quick question on fan control. I have an ASRock board, but fan control is disabled. After reading through the changelog, it looks like you make an assumption about the fan naming convention. I assume this works for a single-socket MB, but it does not align with how things are named in a dual-socket configuration: CPU_FAN1_1, CPU_FAN1_2, CPU_FAN2_1, and CPU_FAN2_2. Any thoughts on supporting this type of configuration? The MB is EP2C602-4L/D16. Unfortunately, ASRock doesn't let you make changes via their web interface, so this was my best option to avoid having to reboot into the BIOS
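The dual-socket naming the post describes could be handled with a pattern that accepts both styles. The fan labels are the ones from the post; the regex and function are only a sketch of how a plugin might normalize them, not the plugin's actual code.

```python
import re

# Matches single-socket labels ("CPU_FAN1") and dual-socket labels
# ("CPU_FAN1_2", "CPU_FAN2_1"); groups capture socket and fan index.
FAN_RE = re.compile(r"^CPU_FAN(\d+)(?:_(\d+))?$")

def parse_fan_label(label):
    """Return (socket, fan_index) for a CPU fan label, or None."""
    m = FAN_RE.match(label)
    if not m:
        return None
    socket = int(m.group(1))
    index = int(m.group(2)) if m.group(2) else 1  # single-socket: fan 1
    return socket, index

print(parse_fan_label("CPU_FAN1"))     # (1, 1)
print(parse_fan_label("CPU_FAN2_2"))   # (2, 2)
```

Treating the second number as optional lets one code path cover both the single-socket boards the plugin already supports and dual-socket boards like the EP2C602-4L/D16.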