Ambrotos

Everything posted by Ambrotos

  1. Hm. Seems I may have provided a misleading example. True, I was forced to do a hard shutdown a couple days ago after the unRAID system locked up due to a kernel panic. But that's definitely something I try to avoid at all costs. I have a 9kVA UPS and NUT/Powerdown plugins installed deliberately to try to minimize this. This is also the only system of mine that's still running btrfs for the cache drive. I too have moved pretty much exclusively to xfs, but I haven't gone through the hassle of migrating this system yet. I guess it might be time. Anyway, apologies for my lack of attention to detail. At this point, when I see a csum error in the log I guess I just assume it's a loop0 csum error. I've attached a log from one of my friends' servers which IS running xfs on all drives, and still sees csum errors. Cheers, -A tower-diagnostics-20160910-0822.zip
  2. I just wanted to chime in and say that I see exactly the same thing. Every once in a while a docker app (usually Plex, but not always) will crash, and then error out when I try to restart it. I simply delete docker.img, re-add all my dockers from the my-* templates, and continue on until it happens again. It's honestly pretty annoying. I manage 5 systems (mine, my test machine, my father's, and 2 friends') and I've seen this problem on 3 of them. Interestingly, the 3 machines I've seen the problem on all have Samsung SSDs for cache drives (1x 750 EVO, 2x 840 EVO); the other machines use platter drives for cache. Maybe it's coincidence. I've seen this in all of the recent 6.x builds I've been running, though it's tough to remember exactly which ones at this point. I can certainly say it's present in 6.1.9 and 6.2RC5, since those are the two versions I'm currently running on various machines and it just happened again tonight. I've attached the diagnostics file from my main machine, which had a burst of CSUM errors this evening at 21:34. Anyone else seeing this? Anyone have any suggestions? Cheers, -A nas-diagnostics-20160909-2147.zip
  3. I don't suppose you noticed whether the network activity LEDs on your router were going nuts, did you? I've seen issues in the past where network hosts get stuck continuously sending the same packet out their network interface at full line rate, which at 100Mbps or 1Gbps would probably choke a residential router's packet processor. I've personally only seen this with lab equipment at work, never with unRAID. But if we're theorizing that the trigger was a power surge/lightning strike then I'd wager it's possible. Just a thought, -A
  4. Hm. Odd. I just deleted the container (and its resulting orphan from Advanced view) and then recreated it from the my-syncthing template. I didn't even modify the configuration, and now it's upgraded to 0.13.10. Sorry, I guess I should have tried that before posting. It just didn't occur to me that it would do anything. -A
  5. It appears that my install of syncthing is stuck at v0.12.22 while all of my desktop-based peers have upgraded automatically to the latest stable version, v0.13.10. I've restarted the docker, checked the appdata/syncthing folder to make sure it's writable, and verified that <autoUpgradeIntervalH> is set to 12 in config.xml. Generally I would expect to be able to trigger an upgrade by logging into the syncthing webUI and clicking the "Upgrade" button in the banner when one is available; for my docker-based syncthing peer that button has never appeared. Everything looks good environment-wise as far as I can tell. One thing I noticed in the logs is that on startup the docker appears to check for a new version of syncthing at the following URL, which returns a 404 error:

       http://apt.syncthing.net syncthing/release

     Has this URL changed recently? Is this something that should be updated? Has anyone else successfully upgraded their Linuxserver.io based syncthing docker to v0.13.10? Thanks, -A
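     For anyone else chasing this, a couple of quick checks from the host should confirm what the container is actually running. This is a sketch only: the container name "syncthing", the /config mapping, and the syncthing binary being on the container's PATH are all assumptions; adjust to your setup.

       # report the version of the binary actually running inside the container
       docker exec syncthing syncthing --version
       # confirm the auto-upgrade interval persisted to the config
       docker exec syncthing grep autoUpgradeIntervalH /config/config.xml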
  6. Quick tangential question: Over in the Preclear Plugin conversation someone made the comment that the primary purpose of the script is largely moot now, since the 6.2 beta can zero new drives in the background while the array is active. I thought I was keeping pretty close tabs on the beta, and usually comb through release notes with each new announcement... but this was news to me. Is this a confirmed feature of the 6.2 beta now? A quick search of the forum didn't turn anything up, but is there a discussion somewhere with more details? I don't have a spare drive to add to my test rig or I'd just try it for myself... -A
  7. They're just teasing us now with that "...similar to v6.2" comment. Both servers updated smoothly. -A
  8. I guess now it's my turn to apologize for a delayed reply. I didn't see a response for a while, and then sort of stopped tracking this thread. The ability to specify a custom value for pollfreq would be nice, you're right; I find the default of 30sec a little long. As for SNMP traps, those aren't new to SNMPv3. The main difference between v2c and v3 is the inclusion of security/encryption, which I'm not really interested in on my private network. My UPS supports SNMPv2c, and supports most if not all of the traps defined in the IETF MIB (https://tools.ietf.org/html/rfc1628). I haven't really played around with traps though, since with a polling freq of 15sec and 9000VA of backup capacity, I've got more than enough time to react to a power outage. I just trigger a powerdown when battery capacity reaches 25%, which takes about 45 minutes. (One way to express that threshold in NUT itself is sketched below.) Thanks for considering the update. Happy to test/debug for you if that would help. Cheers, -A
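     A minimal ups.conf sketch for that 25% trigger, assuming the plugin ever exposes raw driver directives. The ignorelb flag tells the driver to ignore the hardware low-battery flag and instead assert LB from the overridden battery.charge.low:

       [ups]
           driver = snmp-ups
           port = 192.168.0.31
           # ignore the UPS's own LB flag; assert low battery when
           # battery.charge falls below the overridden threshold
           ignorelb
           override.battery.charge.low = 25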
  9. I'd like to echo my thanks for putting this plugin together. I've been hoping unRAID would eventually support non-APC UPSs for a while now. I was wondering if you'd consider adding the snmp-ups driver as a dropdown option in the next release? I'm using the plugin with a TrippLite SMART5000RT3U, and this UPS is really only supported by NUT through SNMP using the standard IETF 1.4 MIBs. Because the plugin considers the SNMP driver "custom", my ups.conf file is missing a couple of key parameters every time my server boots. If you could add a dropdown for 'snmp_version' and an editbox for 'community', then we'd have all the variables needed to generate an SNMP config file. Currently, to make it work, my go file copies a custom ups.conf to /etc/ups/ with the following contents:

       [ups]
           driver = snmp-ups
           port = 192.168.0.31
           community = public
           snmp_version = v2c
           pollfreq = 15
           desc = TrippLite SMART5000UPS

     ...and then starts the SNMP driver with the command:

       /usr/bin/snmp-ups -a ups -u root

     ...and everything seems to work nicely. Thanks again. -A
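     For anyone wanting to replicate this, the go-file lines look something like the following. The source path /boot/config/ups.conf is illustrative; keep the master copy wherever suits you:

       # /boot/config/go (excerpt)
       # copy the custom NUT config into place (source path is illustrative),
       # then start the SNMP driver against it
       cp /boot/config/ups.conf /etc/ups/ups.conf
       /usr/bin/snmp-ups -a ups -u root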
  10. Pretty sure it doesn't go to the root of the flash drive. I believe you should put the script in /boot/config/plugins/preclear.disk/ -A
  11. Just thought I'd submit an additional data point for you: I didn't really notice much improvement after executing that command. But then, I never really noticed a write performance problem to begin with. I'm running unRAID 6.1.3 Pro, and my cache drive is a Samsung 840 EVO 250GB connected to an m1015 flashed to IT mode.

       root@nas:/mnt/cache# dd if=/dev/zero of=/mnt/cache/dump.me bs=1M count=4096
       4096+0 records in
       4096+0 records out
       4294967296 bytes (4.3 GB) copied, 7.63911 s, 562 MB/s
       root@nas:/mnt/cache# rm dump.me
       root@nas:/mnt/cache# echo deadline > /sys/block/sdf/queue/scheduler
       root@nas:/mnt/cache# dd if=/dev/zero of=/mnt/cache/dump.me bs=1M count=4096
       4096+0 records in
       4096+0 records out
       4294967296 bytes (4.3 GB) copied, 7.52125 s, 571 MB/s
       root@nas:/mnt/cache#

     -A
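     As an aside, you can confirm which I/O scheduler is actually in effect after the echo: the kernel puts brackets around the active one. Output below is illustrative; the exact scheduler list varies by kernel build:

       root@nas:~# cat /sys/block/sdf/queue/scheduler
       noop [deadline] cfq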
  12. Did you edit the .cfg file in Windows? It's sort of telling that your username/password lines are appended with \r\n. The \r and \n mean carriage return and line feed respectively; Windows uses \r\n as line endings, whereas Unix just uses \n. It's a common text-formatting compatibility problem when moving files between OSs. Try running your autoProcessTV.cfg file through the "fromdos" utility to strip out any Windows formatting (i.e. run the command "fromdos autoProcessTV.cfg"). Similarly, if you want to move a document from Unix to Windows, you can run it through the "todos" utility. -A
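     If fromdos isn't handy, the same check and fix can be done with standard tools (sed's -i flag rewrites the file in place, so keep a copy if you're nervous):

       # show hidden carriage returns: Windows-edited lines end in ^M
       cat -v autoProcessTV.cfg | head
       # strip them in place
       sed -i 's/\r$//' autoProcessTV.cfg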
  13. The URL that unRAID sends you to when you click the WebUI link for a docker on the dashboard is specified separately from the container's port settings. It's in the Advanced Settings section, and not shown by default; usually the docker container author sets it (along with a bunch of other unRAID-specific customization) upon publication. Open up the docker config, click Advanced Settings in the top right of the popup window, and change the WebUI setting in the Advanced Fields section to use the appropriate port. It's of the format http://[IP]:[PORT:8081]/ (for example). This just modifies the WebUI link, not the actual operational config of the container, so if you mix them up you might find that clicking on sab takes you to plexWatch, or vice versa. Just make sure that the port you specify in the WebUI field matches that container's configured host port. -A
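     For example, hypothetical WebUI values for the two containers mentioned above (the port numbers are illustrative, not your actual mappings):

       sab:       http://[IP]:[PORT:8080]/
       plexWatch: http://[IP]:[PORT:8081]/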
  14. With the release of RC1, is it safe to assume that NUT support isn't being included in 6.1? I have a Tripplite SMART5000 networked UPS which I have been unsuccessful in integrating with APCUPSD, and was really looking forward to being able to add UPS functionality to my unRAID box. What are the current plans for integrating NUT into unRAID? -A
  15. I have a Samsung 840 EVO as my cache drive. Force NCQ disabled has always been set to 'yes' on my systems. I, too, am seeing the same error in my syslog. I only mention this for diagnostic purposes. I haven't seen any issues with performance/stability. From what I can tell, trim is working fine. Since I've got NCQ disabled, I'm not terribly concerned by this error. FYI, the drive is connected via a m1015 flashed to IT mode. -A
  16. A few years ago when I first started playing with unRAID I ran it as a guest in my ESXi box. I eventually found the need to manually do RDM every time I added/removed a disk to be a bit annoying, so I moved unRAID to its own box. This was around the time that v5 was released, so there was no VM hosting option within unRAID and I kept my ESXi guests running separately. Recently, with the availability of dockers and VMs in unRAID 6, I've moved much of the functionality that was done in VMs to dockers. I like the ability to map folders directly instead of having to mount CIFS/NFS shares. After the dust settled I was left with two CentOS servers remaining, which I moved to unRAID KVM guests before shutting down the ESXi box. I'm sure the process would be a bit more challenging with Windows-based guests, but it was a breeze with mine; the KVM website has a really straightforward VMware/KVM migration guide. -A
  17. > Don't know exactly what your problem was, but this was not it. It is completely normal for the drive "letter" to change between boots. unRAID keeps track of which disk is which by using the drive's serial number.

     Fair enough. Now that you mention it, I think I knew that actually... In spite of that, I still suspect a problem with changing drive assignment, regardless of whether unRAID keeps track of the drives by label, or uuid, or path, etc. I realize a drive's reported serial number isn't likely to change... the only response I have to that is to shrug. As I mentioned, while I was seeing the errors and the array was started, the GUI showed the cache drive as green-balled. When I stopped the array, the cache was listed as "unassigned", and /dev/sdk was the only drive available in the dropdown. I mentally latched onto the most obvious difference, the drive path, but there were likely other differences I didn't notice. To me, this implies that the drive was at one location when the array started, and that the reference to it was somehow invalidated afterwards. Anyway, if it happens again I'll pay more attention. At the time I was just trying to restore functionality ASAP. -A
  18. I've upgraded my two systems from b15 to RC3 as well. Overall it's been a smooth experience. I really like the improvements that have been made to docker and VM management. If nothing else, it's allowed me to shut down a separate ESXi host I had running a couple CentOS servers and consolidate some of my hardware, which is much appreciated. I do have two issues to report though. One of my systems has 10x 3TB platter drives in the array and 1x 500GB cache drive (all via SAS expander), all connected to an IBM m1015 flashed to IT mode. The first time I booted the machine after applying the upgrade to RC3 (via the GUI), I encountered all sorts of btrfs errors and timeouts in the log. Eventually I figured out that for whatever reason the cache drive (which is formatted btrfs) had changed from /dev/sde to /dev/sdk. Unfortunately the array had already started automatically, and was "using" /dev/sde as the cache drive. I put "using" in quotes since there was obviously no device at that location, even though the GUI showed the cache as green-balled, and docker was trying to use /mnt/cache/docker-data/ to load the docker configs. Once I noticed the change, I stopped the array, reassigned the cache to /dev/sdk, and restarted. This seemed to resolve the issue, though I had to delete and rebuild docker.img. The second issue is as jimbobulator reported: I was seeing btrfs checksum errors in the syslog for /dev/loop0. Originally I assumed the corruption was a result of the cache issue mentioned above, but now that I see someone else experiencing the same thing I'm wondering if they're not related. After all, if the cache drive was assigned to the wrong location, docker shouldn't have been able to access it to write to it at all, corruption or otherwise. Anyway, my systems are now up and running smoothly. I just thought I'd chime in and document my experiences. I'll be watching to see if the btrfs errors or the drive-assignment issues recur. -A
  19. Jeff, the error message appears to be accurate: you don't have a DNS server configured. For example, my network.cfg looks like this:

       # Generated settings:
       USE_DHCP="no"
       IPADDR="192.168.0.15"
       NETMASK="255.255.255.0"
       GATEWAY="192.168.0.1"
       DHCP_KEEPRESOLV="no"
       DNS_SERVER1="192.168.0.1"
       DNS_SERVER2=""
       DNS_SERVER3=""
       BONDING="no"
       BONDING_MODE="1"
       BRIDGING="yes"
       BRNAME="br0"
       BRSTP="no"

     Yours is missing the "DNS_SERVER1" field. What have you got configured there in the Settings -> Network Settings page of the unRAID GUI? If yours is like 99.9% of home networks, it's the same as your default gateway, 192.168.1.1. -A
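     Once it's set, a quick sanity check from the console (these DNS values end up in /etc/resolv.conf, as on most Linux systems):

       # the nameserver line should match DNS_SERVER1
       cat /etc/resolv.conf
       # then confirm name resolution actually works
       ping -c 3 google.com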
  20. I'm having an odd problem with the latest H5AI container. I've got what I think is a pretty basic setup (80/tcp <-> host:81, /config <-> /mnt/user/docker-data/h5ai, /var/www <-> /mnt/user/Videos). When I start the container it technically "works", in that I can access the h5ai interface at host_ip:81. However, if I check the log it is constantly cycling through the following errors, and never stops:

       [20-Mar-2015 10:50:01] ERROR: An another FPM instance seems to already listen on /var/run/php5-fpm.sock
       [20-Mar-2015 10:50:01] ERROR: FPM initialization failed
       nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
       nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
       nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
       nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
       nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
       nginx: [emerg] still could not bind()
       [20-Mar-2015 10:50:03] ERROR: An another FPM instance seems to already listen on /var/run/php5-fpm.sock
       [20-Mar-2015 10:50:03] ERROR: FPM initialization failed
       nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
       nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
       nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)

     Has anyone seen this before? Any thoughts? Cheers, -A
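     The errors read as though a second copy of nginx and php5-fpm is being launched inside the container while the first is still running. One way to check, assuming the container is named "h5ai" and its image includes ps:

       # list nginx/php5-fpm processes inside the running container;
       # more than one master of each suggests a duplicated startup script
       docker exec h5ai ps aux | grep -E '[n]ginx|[p]hp5-fpm'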
  21. lol. well don't I feel dumb! In my defense though, b12 is the first I've played with v6, and those sliders blend in REALLY well. I understand the need to make advanced settings unobtrusive, but maybe they should be darkened a few shades of grey. -A
  22. It looks like if you add a container from one of the built-in LimeTech-supported repositories (e.g. needo's), the template's resulting .xml contains values for the <banner> and <icon> tags, which causes that container's entry in the docker.json file to be populated properly. If you add a container from a "generic" docker repository (e.g. tutum's apache-php server), those values are populated with the default "#", resulting in the grey question-mark icon. I think this is probably because these images aren't part of a standard docker file definition, and are extra eye candy added by unRAID's implementation. As such, docker files downloaded from generic repositories aren't going to have these tags populated. I've already asked about having a couple fields added to the container creation page that would allow us to specify/upload a couple of images to be used by the container at creation time. I haven't seen any response yet though; seems we're the only two interested in this at the moment. I should mention that I'm making a bunch of potentially ignorant assumptions here. Can anyone with more docker expertise/experience jump in? -A
  23. Found it. You can manually specify the filename of the icon and banner images in the file /usr/local/emhttp/state/plugins/dynamix.docker.manager/docker.json. If the filename is "#" it will display the grey question mark. Otherwise, it will look for the file in two places: first it will check the folder defined as "images-ram"; if the file isn't found there, it will copy the specified file from "images-storage". By default, these two folders are:

       'images-ram' => '/usr/local/emhttp/state/plugins/dynamix.docker.manager/images',
       'images-storage' => '/boot/config/plugins/dockerMan/images',

     So, I just placed tutum-apache-php-*.png files in /boot/config/plugins/dockerMan/images and then edited the docker.json file. The next time I loaded the docker webpage my icon was displayed properly. The only downside is that I think it's a volatile change (i.e. it won't persist across reboots). I suppose you could make a backup of the .json file on your /boot and then add a one-liner to your go script to overwrite the default on startup (see the sketch below). I'm sure there's a more elegant solution that I'm not thinking of though... Hope this helps. -A
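     Something like this would do it, assuming you keep the backup at /boot/config/docker.json.bak (the path is illustrative):

       # /boot/config/go (excerpt): restore the edited docker.json at boot
       cp /boot/config/docker.json.bak /usr/local/emhttp/state/plugins/dynamix.docker.manager/docker.json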
  24. I was having similar trouble last week trying to get the Apache feather logo to show as the banner for my Tutum webserver container. I'm pretty sure the my-*.xml files just specify potential templates/repositories to be used during container creation, not which image to show during pageload. Those <banner> and <icon> tags specify from where to download the images to be used by the container being created... though I could be wrong. Anyway, I'm still poking around looking for a config file that refers to the specific container ID instead of the repository. I'd be interested to hear if you make any progress! -A
  25. Maybe there's already a way to do this, or maybe this is a feature request... If I add a container from one of needo's or gfjardim's repositories (e.g. sickbeard or SAB etc.) then the container's app has a nice icon and banner image associated with it. Anything else gets a generic grey question-mark icon. For example, I'm using the tutum apache-php container to host a couple of simple websites, and would like to associate the stylized Apache feather with its Docker app. I've noticed that the images for the built-in repositories are all downloaded during container creation to /boot/config/plugins/dockerMan/images/<repositoryName>-banner.png, but just placing a properly named and sized image at that location doesn't work. I've also found that the pre-canned repository .xml files have <Banner></Banner> and <Icon></Icon> tags, but those only apply during container creation; I need somewhere to specify the icon/banner for an existing container during page load. So I guess my question is two-fold: First, is there a config file somewhere where I can specify the icon/banner images to be used for an existing "custom" Docker container? Second, can I make a feature request that a couple fields be added to the container creation page which would allow us to specify a locally-stored icon and banner image? -A