About rclifton

  1. The RTL8117 on that board is for management only, using Asus's Control Center software. You should be using the Intel NIC as the default network connection (eth0) in your setup.
  2. Never mind, I figured out what the issue was. User error =(
  3. Fixed!! As soon as I ran that command it started running as it should. Thanks a bunch!!
  4. Here are the results of the docker exec command:

     -rw------- 1 root root 5654 Feb 6 13:15 /ddclient.conf

     The run command when I update the container is:

     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='ddclient' --net='bridge' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/cache/appdata/ddclient/':'/config':'rw' 'linuxserver/ddclient'
     38e494fbb43ced3a2a5262aba904fc95d8592d23dc2ec4ac14c0b47c68f5b5a3

     I've tried completely removing and reinstalling a couple of times now and end up with the same results every time.
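To illustrate why that listing matters: /ddclient.conf is owned by root:root with mode 600, while linuxserver containers run their service as the PUID/PGID user (99/100 here), which then has no read permission on the file. Here is a rough sketch of the permission check involved (a simplified model I wrote for illustration, not the container's actual code; it ignores root's override and supplementary groups):

```python
import stat

def can_read(mode: int, file_uid: int, file_gid: int, uid: int, gid: int) -> bool:
    """Simplified model of a Unix read-permission check."""
    if uid == file_uid:
        return bool(mode & stat.S_IRUSR)   # owner read bit
    if gid == file_gid:
        return bool(mode & stat.S_IRGRP)   # group read bit
    return bool(mode & stat.S_IROTH)       # other read bit

# /ddclient.conf as listed: -rw------- root root -> mode 0o600, uid 0, gid 0
print(can_read(0o600, 0, 0, 99, 100))  # False -> "Permission denied" for PUID 99
print(can_read(0o600, 0, 0, 0, 0))     # True  -> only root can read it
```

So any process inside the container that isn't root will get exactly the "Permission denied" from my log until the file's owner or mode is changed.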
  5. Loving the container, I've had it up and running for a while now, but today I noticed that something has broken. I'm not sure exactly when it happened, but I do know that I updated a domain this past weekend and ddclient updated its IP, so it was working as late as Sunday. However, my log is now filled with:

     readline() on closed filehandle FD at /usr/bin/ddclient line 1130.
     stat() on closed filehandle FD at /usr/bin/ddclient line 1117.
     Use of uninitialized value $mode in bitwise and (&) at /usr/bin/ddclient line 1118.
     readline() on closed filehandle FD at /usr/bin/ddclient line 1130.
     WARNING: file /ddclient.conf: Cannot open file '/ddclient.conf'. (Permission denied)

     I'm not sure how the permissions on the config file were changed, and I'm really unsure of how to fix it, as the Linux command line is not something I have a lot of experience with. Thanks!
  6. Thanks for the heads up!! Never even thought to look at the GitHub page for some reason lol.
  7. Hi, not sure if this is the right place. I installed the container "nut influxdb exporter" and it points to this post as the support thread. I'm trying to use it to bring in my UPS stats and have run into a problem. My UPS is a CyberPower OR2200PFCRT2Ua. When I install the container, if I delete the entry for WATTS the container works, but the data is incorrect: it shows a usage of only about 44W, when the UPS front panel indicates the load is actually 172W. In NUT I have to configure the setup as:

     UPS Power and Load Display Settings: Manual
     UPS Output Volt Amp Capacity (VA): 2200
     UPS Output Watt Capacity (Watts): 1320

     If I do this, then in Unraid all the UPS information is displayed correctly on the dashboard. However, if I enter 1320 into the WATTS entry of the container, it instantly stops after starting and displays the following error message:

     [DEBUG] Connecting to host
     Connected successfully to NUT
     [DEBUG] list_vars called...
     Traceback (most recent call last):
       File "/src/nut-influxdb-exporter.py", line 107, in <module>
         json_body = construct_object(ups_data, remove_keys, tag_keys)
       File "/src/nut-influxdb-exporter.py", line 85, in construct_object
         fields['watts'] = watts * 0.01 * fields['ups.load']
     TypeError: can't multiply sequence by non-int of type 'float'

     So to get accurate data I need to enter the WATTS info, but then the container doesn't like it. If I omit the WATTS info, the container runs but reports the wrong numbers. Any help is appreciated, and sorry if this is perhaps the wrong thread...

     *EDIT* As an aside, I did some digging: my UPS is reporting ups.load as 14. If I do the math in the last line, watts (1320) * 0.01 * ups.load (14), I get 184.8W. The front panel is reporting 185W currently. So the math is right; it just appears that one of the entries isn't being treated as an actual number for some reason.
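For what it's worth, that TypeError is exactly what Python produces when a string is multiplied by a float: container environment variables always arrive as strings, so if the script uses the WATTS value without converting it, `watts` is the string "1320" rather than the number 1320. A minimal reproduction (my own sketch, not the exporter's actual code):

```python
watts = "1320"    # environment variables are always strings
ups_load = 14.0   # ups.load as reported by NUT

try:
    watts * 0.01 * ups_load
except TypeError as e:
    print(e)      # can't multiply sequence by non-int of type 'float'

# Converting the string to a number first gives the value the front panel shows:
result = float(watts) * 0.01 * ups_load
print(round(result, 1))  # 184.8
```

Which would explain why the math checks out by hand but blows up inside the container.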
  8. I think what he was saying is that now that he is back on 6.6.7, he checked and the queue depth is 1 on 6.6.7 as well. That would mean the speculation that NCQ in 6.7 might be part of the problem with that release is incorrect, since for him the queue depth was 1 on both 6.6.7 and 6.7, yet he has no issues on 6.6.7. Or at least that's how I read what he said.
  9. After pulling my hair out for the last week looking for what I originally assumed was probably a network issue, I found this thread, which describes the issue I'm having exactly. My system is a dual Xeon 2650 setup with 96GB of RAM, dual LSI SAS2008 cards, and two cache drives in RAID1 connected to the onboard SATA controller (Intel C600/X79 chipset). Mover is currently configured to run hourly, as my cache drives are relatively small at 120GB for the number of users in my household (8). I was already planning to jump to a 1TB NVMe drive, but I guess I may need to seriously consider downgrading, as my wife's identical twin lives with us, which means WAFx2 is a major issue! 😱 Is there anything major to look out for when downgrading?
  10. Thanks, any idea why I'm just now seeing this message? Since I've never seen it before, I would assume that means it actually was working up until yesterday, when I first started seeing these messages.
  11. I'm suddenly having a very strange problem with my cache drive and I'm not really sure what the cause could be. This morning, while checking logs/updating containers etc., I noticed the following entries:

      Jan 3 18:00:36 Tower kernel: print_req_error: critical target error, dev sdd, sector 230729786
      Jan 3 18:00:36 Tower kernel: BTRFS warning (device sdd1): failed to trim 1 device(s), last error -121
      Jan 3 18:00:36 Tower root: /etc/libvirt: 920.8 MiB (965472256 bytes) trimmed
      Jan 3 18:00:36 Tower root: /var/lib/docker: 8.6 GiB (9238429696 bytes) trimmed

      The first line doesn't concern me too much; I've seen it ever since I first set Unraid up and assume it's just a bad sector on the SSD (it's over 4 years old at least). The second line, however, is new and concerns me. I did a Google search and found a post by someone from December, and the consensus seemed to be to check your cables. All my drives are in hot-swap bays on my Supermicro server, so I powered the system down, swapped the drive into a new bay, and powered it back on. The problem remained. What's interesting is that even though it says the trim failed, the next two lines appear to show that it actually did complete. I'm really not sure what the problem could be and was hoping someone more experienced with the inner workings of Unraid could point me in the right direction. Thanks, diagnostic info is posted below... tower-diagnostics-20190103-1850.zip
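One small clue I found while digging: the "-121" in the BTRFS warning is a negative kernel errno. On Linux, errno 121 is EREMOTEIO ("Remote I/O error"), i.e. the device itself rejected the discard/TRIM command, which would line up with the "critical target error" logged at the same second rather than a filesystem problem. A quick way to look it up (assumes a Linux Python; errno values are platform-dependent):

```python
import errno
import os

# BTRFS reported "last error -121"; kernel errors are negative errno values.
code = 121
print(errno.errorcode[code])  # 'EREMOTEIO' on Linux
print(os.strerror(code))      # 'Remote I/O error'
```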
  12. Yes actually, I did read them. Neither of them really explains the massively random difference in transfer speeds that I see. I mean, 22MB/s vs 108MB/s is a pretty wide margin, is it not? Wouldn't you be wondering if there was something seriously wrong if you frequently saw a swing like that moving files around? By the way, that first transfer was my kids' movie folder, so all fairly large files. That second transfer, as you can see, is music. Obviously much smaller individual files, and yet it's FASTER!
  13. Sorry, real life pulled me away for a bit, but I came back to finish moving data off that drive and thought I would give another example of why the randomly crappy writes are sooooo annoying. Here is another set of screenshots, same exact two drives (source is a 7200rpm Seagate 6TB, approx 2 years old, and destination is a brand new 8TB WD White Label Red), and as you can see, a decent transfer speed that actually got slightly faster as it went on. It's this randomness, where I can't really explain why it's slow as molasses one time and about what I would expect another, that is really getting to me. I have the same issue with my monthly parity syncs as well. One time they will finish in about 15 hours averaging 140+MB/s, and the next time they will run for almost 30 hours with an average speed of 77MB/s. Any ideas?
  14. I've struggled with this issue for a while; every time I think I've finally figured it out, I'm proven wrong. Currently I'm using Unbalanced to empty the contents of a drive. It's plodding along at 22MB/s. If I transfer that same content to an external USB3 drive and then copy it back to the array, I'll get 130+ MB/s. Why? I do not use the cache drive for any file copies, so that isn't it. I have reconstruct writes enabled, all my drives are connected at SATA 6Gb/s, and I am not CPU/RAM limited (96GB of RAM, dual Xeon), so what gives? This is the one issue I have with Unraid that really bothers me. I get that I'm not going to see the same performance as a RAID5 array, but when a parity sync can average 140+MB/s and a file copy can't manage half that, something seems very wrong. Anyone have ideas? Thanks!
  15. "What are the variables on the NUT details page in settings? What UPS?" The UPS I'm using is a CyberPower OR2200PFCRT2Ua. When in Manual mode, I just enter the ratings that the manufacturer gives, which are:

      UPS Output Volt Amp Capacity (VA): 2200
      UPS Output Watt Capacity (Watts): 1320

      I notice in the NUT settings the variable ups.realpower.nominal always equals 296, and I think that might be the cause of the differences being displayed. In Manual mode I assume the 1320 I'm entering overrides that 296 in its calculations, whereas in Auto mode it uses that 296, and that's why the UPS load in watts is way off in Auto.
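A quick sanity check on that theory, using the same formula that appeared in the exporter traceback earlier (watts * 0.01 * ups.load): plugging in ups.realpower.nominal (296) versus the Manual rating (1320) reproduces both the wrong low reading and the correct one. (My own arithmetic sketch; the variable names are made up.)

```python
ups_load = 14          # percent load reported by NUT

auto   = 296  * 0.01 * ups_load   # using ups.realpower.nominal
manual = 1320 * 0.01 * ups_load   # using the Manual Watts rating

print(round(auto, 2))    # 41.44 -> roughly the ~44W the exporter was showing
print(round(manual, 1))  # 184.8 -> matches the front panel (~185W)
```

So the formula itself seems fine; the result just depends entirely on which capacity figure gets fed into it.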