About Ambrotos
  1. @Taddeusz thanks for the tip. I made the change, rebooted, and haven't seen the message recur since yesterday. Out of curiosity, does anyone know whether this is actually a difference between 6.6 and 6.7, or did I just happen to coincidentally notice it after the upgrade and it's been happening for a while now?
  2. I had considered that, but I did have visions of one day finding some spare time to install a graphics card and build a Steam in-home streaming VM. Ideally I'd like to figure out how to fix this without disabling IOMMU. -A
  3. Since upgrading to 6.7.0 a couple of days ago, I have started seeing the following message in my system log:
     May 16 07:00:02 nas kernel: DMAR: [DMA Read] Request device [03:00.0] fault addr ffabc000 [fault reason 06] PTE Read access is not set
     Some quick Googling suggests that this is somehow related to IOMMU, though I don't use hardware passthrough for any of my VMs, and in any case I've confirmed that unRAID reports IOMMU as enabled. The error message is similar to one raised during the 6.7 RC; maybe the patch that was included to fix the previous issue had an unintended side effect? Unlike the issue reported by Duggie264, I am not using any HP240 controllers; mine are all IT-reflashed M1015s or H310s. Also, note that the PCI device it's complaining about is my Intel NVMe drive, which is currently not part of the array and is mounted by UA. Maybe that's related? Attached are my diagnostics. Does anyone have any thoughts on this? Cheers, -A P.S. I should mention that I upgraded directly from 6.6.7. I don't play with RCs on this server; I have a test server for that. nas-diagnostics-20190516-1634.zip
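For anyone triaging a log full of these, the DMAR fault line above has a regular enough shape that you can pull the device address out of it and cross-reference it against lspci. A minimal Python sketch (the regex and helper name are my own, based only on the single log line quoted above; other DMAR message variants may not match):

```python
import re

# Matches kernel lines like:
#   DMAR: [DMA Read] Request device [03:00.0] fault addr ffabc000 [fault reason 06] PTE Read access is not set
DMAR_RE = re.compile(
    r"DMAR: \[(?P<op>DMA \w+)\] Request device \[(?P<dev>[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f])\] "
    r"fault addr (?P<addr>[0-9a-f]+) \[fault reason (?P<reason>\d+)\]"
)

def parse_dmar_fault(line):
    """Return the fault fields as a dict, or None if the line isn't a DMAR fault."""
    m = DMAR_RE.search(line)
    return m.groupdict() if m else None

line = ("May 16 07:00:02 nas kernel: DMAR: [DMA Read] Request device [03:00.0] "
        "fault addr ffabc000 [fault reason 06] PTE Read access is not set")
print(parse_dmar_fault(line))
# {'op': 'DMA Read', 'dev': '03:00.0', 'addr': 'ffabc000', 'reason': '06'}
```

The `dev` field ("03:00.0") is the PCI bus:device.function, so `lspci -s 03:00.0` should confirm which card is generating the faults.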
  4. Yup, I noticed that there was an update to the plugin this morning. Looks good! Thanks. -A
  5. Is it just my system, or has anyone else noticed that the Settings icon for NUT doesn't display anymore since the January 24th update to the fa (Font Awesome) icon font? I initially thought it was just a browser cache thing, but I've since tried multiple browsers, multiple computers, etc. The NUT icon on the Plugins page loads fine, but I'm assuming that's because it hasn't been changed to be "awesome". (Running 6.6.6, if that matters. I haven't had a chance to try out the 6.7 RC yet.) -A
  6. Ah yes, I do see the mlx4_core reference in the second trace message there. I must have made a typo when I searched the first time and didn't find anything. So alright, is Mellanox a common element of this issue? @AcidReign, what drivers are you using? @Hoopster? -A
  7. I reported a similar issue a while ago which at the time I suspected might have been related to the 10G Mellanox drivers I was using not playing nicely with macvlan. I don't see any reference to the Mellanox drivers in your stack trace though; it looks like you're using Intel drivers. So maybe this is a more common issue than I'd assumed. I haven't really been following it since I last saw it in August. I can confirm at least that it was present in version 6.5.3. Is it safe to assume that you're running 6.6.3? If so, that begins to put a range on the affected software versions... Anyway, as I mentioned in my post linked above, I just avoided the issue by removing VLANs and using multiple hardware NICs. I'm just chiming in here to report that I've seen this issue as well. Cheers, -A
  8. Yes, I saw that thread. I stuck with "Dark" -A
  9. Updated from 6.5.3. Everything went nice and smoothly. Initial impressions? Overall I like the UI changes. It's definitely going to take some getting used to, but I like the new look. It does seem a little less... space efficient (?) vertically than the previous layout but that's not a huge thing. If I had to highlight one feature, I'd say I particularly like the new all-in-one-place CPU pinning functionality. MUCH more intuitive! Cheers! -A
  10. I realize this may be somewhat of a niche use case, but I thought I'd document it here in case anyone else encounters similar issues. I've recently been encountering stack traces, network connectivity issues, and the occasional hard crash on my unRAID box. I'm running the latest version of unRAID (6.5.3 at the time of writing) and have an Asus P8B-X mobo with a Xeon E3-1240v2 and 2 on-board Intel 82574L NICs. I've also added a Mellanox ConnectX-2 10GbE NIC. Originally I had 2 VLANs configured on eth0 (the Mellanox card), and the on-board Intel NICs were unconnected.
      I began noticing the stack traces shortly after upgrading to 6.5.3, though I can't say for sure whether it's software-version related or just coincidental with when I happened to check the syslog at just the right time. I'm not terribly familiar with diagnosing stack traces, but I did notice that each of the traces specified "last unloaded: mlx4_core" (the driver for my 10GbE NIC), and that the following sequence always appeared in the trace:
      Aug 19 16:29:23 nas kernel: do_softirq+0x46/0x52
      Aug 19 16:29:23 nas kernel: netif_rx_ni+0x1a/0x20
      Aug 19 16:29:23 nas kernel: macvlan_broadcast+0x117/0x14f [macvlan]
      Aug 19 16:29:23 nas kernel: macvlan_process_broadcast+0xc5/0x10c [macvlan]
      Aug 19 16:29:23 nas kernel: process_one_work+0x155/0x237
      The simplistic interpretation of this combination of messages is that there's some bug in the way the Mellanox driver interacts with the macvlan network driver, and that it causes crashes when it encounters some particular broadcast traffic. So as a test I removed all VLAN configuration from the Mellanox card, configured an unused port on my switch as an access port on the second VLAN, and connected it to one of the Intel on-board NICs. I haven't seen any stack traces or other issues since completing this change back on August 19th (just over a week ago). Previously I'd see traces/crashes about once a day or so.
So, I'm pretty satisfied that I've addressed my problem. I'm not sure how common this scenario is... how many of us are using 10GbE Mellanox cards, and how many of that subset of us have VLANs configured? All the same, I figured I'd post my experience here sort of as a PSA in case anyone else with a similar setup is trying to troubleshoot. Cheers, -A
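If anyone with a similar setup wants to check their own syslog for the same signature, the "look for the recurring call sequence" step above can be sketched in a few lines of Python. This is a hypothetical helper of my own (the marker strings are just the macvlan frames from the trace quoted above):

```python
# Frames from the macvlan broadcast sequence in the trace above.
MACVLAN_MARKERS = ("netif_rx_ni", "macvlan_broadcast", "macvlan_process_broadcast")

def has_macvlan_trace(syslog_lines):
    """True if all three macvlan-related frames appear somewhere in the log."""
    seen = set()
    for line in syslog_lines:
        for marker in MACVLAN_MARKERS:
            if marker in line:
                seen.add(marker)
    return seen == set(MACVLAN_MARKERS)

log = [
    "Aug 19 16:29:23 nas kernel: netif_rx_ni+0x1a/0x20",
    "Aug 19 16:29:23 nas kernel: macvlan_broadcast+0x117/0x14f [macvlan]",
    "Aug 19 16:29:23 nas kernel: macvlan_process_broadcast+0xc5/0x10c [macvlan]",
]
print(has_macvlan_trace(log))  # True
```

Feed it the lines of /var/log/syslog (e.g. `has_macvlan_trace(open("/var/log/syslog"))`) and a True result means your traces match the pattern discussed in this thread.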
  11. As Frank said above, feel free to use the Preclear plugin for convenience if you like, but since its original purpose (minimizing the downtime required to insert new drives) has been integrated into unRAID itself, I've stopped using it. A lot of people (myself included) continued to use it for a while to detect "infant mortality" in newly installed hard drives, but since the plugin doesn't seem to be supported anymore (regardless of whether it technically runs or not), we probably shouldn't even be doing that much. Myself, to detect flaky drives on install I've been opening the handy new web-based terminal, starting a "screen" session, and issuing a badblocks command. This does much the same thing the Preclear plugin did: read/write the entire disk and compare with the expected contents. badblocks -nvs /dev/sdx will do a non-destructive test of the entire disk; if you don't care about the data on the disk, you can do badblocks -wvs /dev/sdx. Just my $0.02 -A
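To make the flag choice above explicit, here's a tiny Python sketch that builds the badblocks command line. The helper is hypothetical (my own wrapper, not part of unRAID or badblocks); the flags are the ones from the post:

```python
def badblocks_cmd(device, destructive=False):
    """Build the badblocks invocation described above.

    -n : non-destructive read/write test (preserves existing data)
    -w : destructive write test (WIPES the disk)
    -v : verbose output, -s : show progress
    Hypothetical helper; pass e.g. device="/dev/sdx".
    """
    mode = "-wvs" if destructive else "-nvs"
    return ["badblocks", mode, device]

print(badblocks_cmd("/dev/sdx"))        # ['badblocks', '-nvs', '/dev/sdx']
print(badblocks_cmd("/dev/sdx", True))  # ['badblocks', '-wvs', '/dev/sdx']
```

You could hand the result to subprocess.run() if you wanted to script it, but for a multi-day surface scan I'd still run it by hand inside screen, as described above, so it survives a dropped browser session.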
  12. That definitely seems to be what I'm encountering. Thanks for the tip. I'll follow up in that thread. Cheers, -A
  13. Just updated from 6.5.2 to 6.5.3 a couple of days ago. Initially I thought everything went smoothly, as I hadn't noticed any behavioral/performance symptoms. However, when I logged in this morning I noticed a call trace in my syslog (diagnostics attached). Obviously I can't be 100% certain that this is related to 6.5.3 specifically, but I have never seen any traces at all in the past (I've run pretty much every stable release on this hardware since the unRAID v5 days); it just seems coincidental that the call trace happened shortly after the software upgrade. It seems noteworthy that the trace concerns netfilter/macvlan, and that I have a somewhat "non-standard" networking configuration, in that I'm using VLAN tagging and a Mellanox ConnectX-2 10G NIC. As I said, I haven't noticed any behavioral symptoms or anything like that, so I'm not too panicked about this. Just thought someone might like to take a look to see whether any corner cases or incompatibilities from some recent change weren't covered during testing. Let me know if more information beyond the diagnostics file would be helpful. I'll monitor the logs to see if it happens again, and whether I can correlate it to a specific event. Cheers, -A nas-diagnostics-20180624-0755.zip
  14. No problem. I'm assuming you're mostly asking about the unRAID side of the equation, since on the Duplicati side it's really just a matter of setting the backup destination storage type to FTP and filling in the address. unRAID has a built-in FTP server, but it's not configurable, so I turned that off and installed SlrG's ProFTPd plugin (just search the Apps tab in the unRAID UI for it to install). Check the ProFTPd page in unRAID's Settings menu to make sure it's Enabled and "RUNNING". Instead of having its own separate user database, this plugin uses the built-in unRAID user profiles. Basically, any configured unRAID user with the description "ftpuser" will be added to the ProFTPd user database; if you also add a storage path to the user's description, that path will be set as the FTP user's home directory. So I created a new unRAID user called "backup" with the description "ftpuser /mnt/users/Backup", then just created an unRAID share called "Backup" (for backups I un-check use of the cache disk), and you're done. You can test that your FTP user has been created properly by using Chrome/Firefox to browse to ftp://<username>:<password>@<unraidIP> and seeing whether the resulting page lists the contents of the folder you specified as the FTP user's home directory. Hope that helps. -A
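The description convention above ("ftpuser" optionally followed by a home path, e.g. "ftpuser /mnt/users/Backup") can be illustrated with a small parser. This is only my sketch of how I understand the convention; the plugin's actual parsing rules may differ:

```python
def parse_ftp_description(description):
    """Interpret an unRAID user description per the ProFTPd plugin convention above.

    Returns (is_ftp_user, home_dir); home_dir is None when no path is given.
    Sketch only -- SlrG's plugin may parse this differently.
    """
    parts = description.split(None, 1)  # split on first run of whitespace
    if not parts or parts[0] != "ftpuser":
        return (False, None)
    home = parts[1].strip() if len(parts) > 1 else None
    return (True, home)

print(parse_ftp_description("ftpuser /mnt/users/Backup"))  # (True, '/mnt/users/Backup')
print(parse_ftp_description("just a regular user"))        # (False, None)
```

So the "backup" user in the example above would come out as an FTP user with /mnt/users/Backup as its home directory, which is exactly what the ftp://... browser test should show.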
  15. I don't have any problem with backing up a repository of Duplicati backups to B2; they are, after all, just a bunch of zip files. In fact, while troubleshooting an unrelated backup completion problem I was having a few months ago, I asked this very question in passing on the Duplicati forums. Here, if you're curious. Anyway, as long as you don't mind the lack of additional de-duplication, you're correct that the only downside to this plan is that, due to the nested nature of the backups, you'd have to restore the entire outer backup even if you're only looking to recover a single file from the inner one. -A