Ambrotos

Everything posted by Ambrotos

  1. I know this thread is a bit stale, but for anyone else landing here after searching for the same issue, I also initially saw "bad gateway" on a fresh install. In my case, the problem was that I didn't notice that the required seafile-mc environment variable is "DB_ROOT_PASSWD", not "DB_ROOT_PASSWORD". (Notice the missing "OR"!)
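     For illustration only — the variable name is the key point here; the image name, password, and ports below are placeholder assumptions, not taken from my actual setup:

         # note the variable is DB_ROOT_PASSWD, not DB_ROOT_PASSWORD
         docker run -d --name seafile \
           -e DB_HOST=seafile-db \
           -e DB_ROOT_PASSWD=changeme \
           -p 8080:80 \
           seafileltd/seafile-mc:latest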
  2. Yeah, I actually didn't wind up using it after all and just went back to customized BIOS fan curves. Still nice to finally have all the RPM and temp values available to the System Temp plugin -A
  3. @Forty Two man, am I glad I found your post. I've been scratching my head for a long time about why my Asus Z270 Prime board was detecting the NCT6775 module but not detecting any of the PWM controllers attached to it. I tried all the other posts' suggestions about making sure you've got the latest lm_sensors, forcing the modprobe address, etc. Your acpi_enforce_resources=lax tip was the missing piece! This is the only place I've seen reference to that. Out of curiosity, where did you find that?! -A
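     In case it helps anyone else landing here: on unRAID you can add that kernel parameter from Main > Flash > Syslinux Configuration in the webGUI (or by editing /boot/syslinux/syslinux.cfg directly). A rough sketch of what the boot entry ends up looking like — your existing append line may already carry other parameters, so just tack it onto the end:

         label Unraid OS
           menu default
           kernel /bzimage
           append initrd=/bzroot acpi_enforce_resources=lax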
  4. Take a look through this thread from back when CrashPlan originally changed their pricing model and caused the exodus. The strategy I described back then is still pretty much what I do today, with the exception that I no longer use the ProFTPd plugin. It's been working great for 3+ years. https://forums.unraid.net/topic/61234-moving-from-crashplan-to-duplicati-requesting-guidance/?tab=comments#comment-600970
  5. That error message is referring to the configured Access Mode of the UD volume you've configured for that container. Edit the docker, switch to the Advanced View, edit the volume configuration that you have mapped to your Unassigned Device, and change the Access mode from the default Read/Write to RW/Slave. -A
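     For reference, as far as I know "RW/Slave" in the template just maps to the rw,slave mount-propagation option on the bind mount. In plain docker run terms it would look something like this (the image name and paths are placeholders, not from your template):

         docker run -d --name mycontainer \
           -v /mnt/disks/MyUnassignedDisk:/data:rw,slave \
           myimage:latest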
  6. Hm. You're probably right. I was never super clear on the difference between volumes and binds. Thanks for the clarification! -A
  7. I know this is a bit of a zombie thread, but I recently ran into the same thing and thought I'd post here for anyone else who's having trouble with this docker. The reason that the nxfilter-base docker throws the Java exception seems to be that it's not able to deal with an empty conf folder. In unRAID-land we're used to dockers which will populate their /conf folders with default configuration when they're empty on first startup. Whether it's intended or not, that doesn't seem to work with this docker. When you don't map any host folders to the container's /nxfilter/conf then it starts up and uses the configuration files in the image itself. My quick hacky solution was just to start the docker with all the port and folder mappings *except* the /nxfilter/conf one. I temporarily mapped /mnt/user/appdata/nxfilter/conf : /tmp/conf, and connected to the container using docker exec -it <container-id> /bin/bash. Then I just got the default config out of the image and into my host with a cp -r /nxfilter/conf/* /tmp/conf, stopped the container, deleted the temporary mapping, and added the usual /mnt/user/appdata/nxfilter/conf : /nxfilter/conf. (Rough command sequence below.) After that, it fired up no problem. Hope someone finds this useful. -A
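     Roughly what that looks like from the host — the container name "nxfilter" is whatever you called yours, and adjust the appdata path to taste:

         # with /mnt/user/appdata/nxfilter/conf temporarily mapped to /tmp/conf in the template,
         # copy the image's default config out to the host
         docker exec -it nxfilter /bin/bash -c 'cp -r /nxfilter/conf/* /tmp/conf'
         docker stop nxfilter
         # now remove the temporary /tmp/conf mapping in the template and add the permanent one:
         #   /mnt/user/appdata/nxfilter/conf : /nxfilter/conf
         # then start the container again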
  8. Back when Crashplan changed their business model I jumped ship and started searching for alternatives. I went the Duplicati + online storage route, and selected Backblaze B2 for the online half. Actually, I just did a forum search and there was a conversation about it here: Two-and-a-bit years later, and I can confirm that this is still working great for me. I've got about a terabyte stored in B2, and my bill last month was just over $3. Hope that helps. -A
  9. @Taddeusz thanks for the tip. I made the change, rebooted, and haven't seen the message recur since yesterday. Out of curiosity, does anyone have an idea if this is actually a difference between 6.6 and 6.7, or did I just happen to coincidentally notice it after the upgrade and this has been happening for a while now?
  10. I had considered that, but I did have visions of one day finding some spare time to install a graphics card and build a Steam in-home streaming VM. Ideally I'd like to figure out how to fix this without disabling IOMMU -A
  11. Since upgrading to 6.7.0 a couple days ago, I have started seeing the following message in my system log:

      May 16 07:00:02 nas kernel: DMAR: [DMA Read] Request device [03:00.0] fault addr ffabc000 [fault reason 06] PTE Read access is not set

      Some quick Googling suggests that this is related somehow to IOMMU, though I don't use hardware passthrough for any of my VMs, and in any case I've confirmed that IOMMU is reported as enabled by unRAID. The error message is similar to one raised during the 6.7 RC. Maybe the patch that was included to fix the previous issue had an unintended side effect? Unlike the issue reported by Duggie264, I am not using any HP240 controllers; mine are all IT-reflashed m1015s or H310s. Also, note that the PCI device it's complaining about is my Intel NVMe drive that's currently not part of the array and is mounted by UA. Maybe that's related? Attached are my diagnostics. Does anyone have any thoughts on this? Cheers, -A

      P.S. - I should mention that I upgraded direct from 6.6.7. I don't play with RCs on this server, I have a test server for that.

      nas-diagnostics-20190516-1634.zip
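      (For anyone else chasing a similar DMAR fault: you can double-check which device the PCI address in the message corresponds to with something like the commands below. The 03:00.0 address is just the one from my log above; substitute yours.)

          lspci -s 03:00.0           # identify the device at that PCI address
          dmesg | grep -i dmar       # review all DMAR/IOMMU-related messages since boot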
  12. Yup, I noticed that there was an update to the plugin this morning. Looks good! Thanks. -A
  13. Is it just my system, or has anyone else noticed that the Settings icon for NUT doesn’t display anymore since the January 24th update to the fa (Font Awesome) icon font? I initially thought it was just a browser cache thing, but I’ve since tried multiple browsers, multiple computers, etc. The NUT icon in the plugins page loads fine, but I’m assuming that’s because it hasn’t been changed to be “awesome” (Running 6.6.6 if that matters. I haven’t had a chance to try out the 6.7RC yet) -A
  14. Ah yes, I do see the mlx4_core reference in the second trace message there. I must have made a typo when I searched the first time and didn't find anything. So alright, is Mellanox a common element of this issue? @AcidReign, what drivers are you using? @Hoopster? -A
  15. I reported a similar issue a while ago which at the time I suspected might have been related to the 10G Mellanox drivers I was using not playing nicely with macvlan. I don't see any reference to the Mellanox drivers in your stack trace though; it looks like you're using Intel drivers. So maybe this is a more common issue than I'd assumed. I haven't really been following it since I last saw it in August. I can confirm at least that it was present in version 6.5.3. Is it safe to assume that you're running 6.6.3? If so, that begins to put a range on the affected software versions... Anyway, as I mentioned in my post linked above, I just avoided the issue by removing VLANs and using multiple hardware NICs. I'm just chiming in here to report that I've seen this issue as well. Cheers, -A
  16. Yes, I saw that thread. I stuck with "Dark" -A
  17. Updated from 6.5.3. Everything went nice and smoothly. Initial impressions? Overall I like the UI changes. It's definitely going to take some getting used to, but I like the new look. It does seem a little less... space efficient (?) vertically than the previous layout but that's not a huge thing. If I had to highlight one feature, I'd say I particularly like the new all-in-one-place CPU pinning functionality. MUCH more intuitive! Cheers! -A
  18. I realize this may be somewhat of a niche use case, but I thought I'd document it here in case anyone else encounters similar issues. I've recently been encountering stack traces, network connectivity issues, and the occasional hard crash on my unRAID box. I'm running the latest version of unRAID (6.5.3 at the time of writing), and have an Asus P8B-X mobo with a Xeon E3-1240v2 and 2 on-board Intel 82574L NICs. I've also added a Mellanox ConnectX-2 10GbE NIC. Originally I had 2 VLANs configured on eth0 (the Mellanox card), and the on-board Intel NICs were unconnected.

      I began noticing the stack traces shortly after upgrading to 6.5.3, though I can't say for sure that it's software version related, or whether that's just coincidental with when I happened to check the syslog at just the right time. I'm not terribly familiar with diagnosing stack traces, but I did notice that each of the traces specified "last unloaded: mlx4_core" (the driver for my 10GbE NIC), and that there was always the following sequence in the trace:

      Aug 19 16:29:23 nas kernel: do_softirq+0x46/0x52
      Aug 19 16:29:23 nas kernel: netif_rx_ni+0x1a/0x20
      Aug 19 16:29:23 nas kernel: macvlan_broadcast+0x117/0x14f [macvlan]
      Aug 19 16:29:23 nas kernel: macvlan_process_broadcast+0xc5/0x10c [macvlan]
      Aug 19 16:29:23 nas kernel: process_one_work+0x155/0x237

      The simplistic interpretation of this combination of messages is that there's some bug in the way the Mellanox drivers are interacting with the macvlan network drivers, and it's causing the traces/crashes when it encounters some particular broadcast traffic. So as a test I removed all VLAN configuration from the Mellanox card, configured an unused port on my switch to be an access port to the second VLAN, and connected it to one of the Intel on-board NICs. I haven't seen any stack traces or issues otherwise since completing this change back on August 19th (just over a week ago). Previously I'd see traces/crashes about once a day or so. So, I'm pretty satisfied that I've addressed my problem.

      I'm not sure how common this scenario is... how many of us are using 10GbE Mellanox cards, and how many of that subset have VLANs configured? All the same, I figured I'd post my experience here sort of as a PSA in case anyone else with a similar setup is trying to troubleshoot. Cheers, -A
  19. As Frank said above, feel free to use the Preclear plugin for convenience if you like, but since its original purpose (minimizing downtime required to insert new drives) has been integrated into unRAID itself, I've stopped using it. A lot of people (myself included) continued to use it for a while to detect "infant mortality" in newly installed hard drives, but since the plugin doesn't seem to be supported anymore (regardless of whether it technically runs or not) we probably shouldn't even be doing that much. Myself, to address the need to detect flaky drives on install, I've been opening up the handy new web-based terminal window, starting a "screen" session, and issuing a badblocks command (example below). This does much the same thing as the preclear plugin did: read/write the entire disk and compare with expected contents. badblocks -nvs /dev/sdx will do a non-destructive test of the entire disk. If you don't care about the data on the disk you can do badblocks -wvs /dev/sdx. Just my $0.02 -A
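      For example (substitute your actual device, and double-check it with lsblk or the Main tab first, since -w destroys whatever is on the disk):

          screen -S badblocks-sdx       # named screen session so the test survives a closed browser tab
          badblocks -nvs /dev/sdx       # non-destructive read/write test of the whole disk
          # or, if the disk has no data you care about:
          badblocks -wvs /dev/sdx       # destructive write test (four patterns, much more thorough)

      Detach from the screen session with Ctrl-A then D, and reattach later with screen -r badblocks-sdx.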
  20. That definitely seems to be what I'm encountering. Thanks for the tip. I'll follow up in that thread. Cheers, -A
  21. Just updated from 6.5.2 to 6.5.3 a couple days ago. Initially I thought everything went smoothly, as I haven't noticed any behavioral/performance symptoms. However when I logged in this morning I noticed in the system log that I have a call trace in my syslog. (Diagnostics attached). Obviously I can't be 100% certain that this is related to 6.5.3 specifically, but I have never seen any traces at all in the past. (I've run pretty much every stable release on this hardware since the unRAID v5 days). Just seems coincidental that the call trace happened shortly after the software upgrade. It seems noteworthy that the trace is concerning netfilter/macvlan and that I do have somewhat of a "non-standard" networking configuration, in that I'm using VLAN tagging and a Mellanox ConnectX-2 10G NIC. As I said, I haven't noticed any behavioral symptoms or anything like that so I'm not too panicked about this. Just thought someone might like to take a look to see if any corner cases or incompatibilities of some recent change weren't covered during testing. Let me know if more information beyond the diagnostics file would be helpful. I'll monitor the logs to see if it happens again, and whether I can correlate it to a specific event. Cheers, -A nas-diagnostics-20180624-0755.zip
  22. No problem. I'm assuming you're mostly asking about the unRAID side of the equation, since from the Duplicati side of things it's really just setting the backup destination storage type to FTP and filling in the address. unRAID has a built-in FTP server, but it's not configurable, so I turned that off and installed SlrG's ProFTPd plugin (just search the Apps tab in the unRAID UI for it to install). Check the ProFTPd page in unRAID's Settings menu to make sure it's Enabled and "RUNNING". Instead of having its own separate user database, this plugin uses the built-in unRAID user profiles. Basically, any configured unRAID user with the description "ftpuser" will be added to the ProFTPd user database. If you also add a storage path to the user's description then that will be set as the FTP user's home directory. So I created a new unRAID user called "backup" with the description "ftpuser /mnt/users/Backup". Then just create the unRAID share called "Backup" (for backups I un-check use of the cache disk), and you're done. You can test that your FTP user has been created properly by using Chrome/Firefox to browse to ftp://<username>:<password>@<unraidIP> and see if the resulting page lists the contents of the folder you specified as the FTP user's home directory. Hope that helps. -A
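      (If you prefer the command line to a browser for that last test, something like this from another machine on the LAN works too; the credentials and IP here are obviously placeholders:)

          curl ftp://backup:yourpassword@192.168.1.10/    # should list the contents of the FTP user's home directory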
  23. I don't have any problem with backing up a repository of duplicati backups to B2. They are after all just a bunch of zip files. In fact, while troubleshooting an unrelated backup completion problem I was having a few months ago I asked the very same question in passing on the Duplicati forums. Here, if you're curious. Anyway, as long as you don't mind the lack of additional de-duplication, you're correct in that the only downside to this plan is that due to the nested nature of the backups you'd have to restore the entire outer backup even if you're only looking to recover a single file from the inner one. -A
  24. I believe the issue I was referring to is particular to the Dell H310s, so you should be OK.
  25. I have several 9211-based LSI cards (some Dell Perc H310s, and some IBM m1015s). I've used them with every (stable/released) version since the v5 days and haven't had any problems. I've also used a few of these cards prior to getting into unRAID, when I was using more of a JBOD-based solution and the SAS controllers were operating in IR mode. Based on my experience:

      • If you receive the SAS controller and it's currently in IT mode, I wouldn't bother flashing it, even if it isn't the latest and greatest v20.00.07. The only real change I noticed after updating an older P19 IT firmware to a newer one is that the boot ROM says "Avago" on boot instead of "LSI".

      • If you receive the controller and it's currently in IR mode, then I would definitely recommend flashing it to the latest IT firmware. I'm not sure that you'll see significantly better/worse performance in terms of MB/s read/write rates, but a controller in IT mode which just acts as a simple HBA is way more convenient to manage than having to expose each drive as a single-disk RAID-0 via the card's BIOS configuration utility or via megacli. IT mode just strips out all of the RAID functionality and turns the card into a simple HBA controller. And since you're using unRAID for your redundancy, presumably 90% of the time you're not looking for hardware RAID functionality anyway.

      By the way, just as a side note, you didn't mention which 9211-based card you bought, but if you got a Dell H310 and you're planning to install it into a desktop-class motherboard's PCI-E x16 port, you may need to apply the pin B5/B6 "tape mod" in order to get the system to boot properly.
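      In case it's useful, the IR-to-IT flash is usually done from an EFI shell (or DOS) with LSI's sas2flash utility. Only as a rough outline — the firmware file names below are the ones from the 9211-8i package and will differ for other cards, so follow a proper crossflash guide for your exact model before erasing anything:

          sas2flash.efi -listall                          # confirm the card is detected; note its SAS address first
          sas2flash.efi -o -e 6                           # erase the existing flash (do not reboot before the next step)
          sas2flash.efi -o -f 2118it.bin -b mptsas2.rom   # write the IT firmware and (optionally) the boot ROM
          sas2flash.efi -o -sasadd 500605bxxxxxxxxx       # restore the SAS address you noted earlier (placeholder shown)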