Ambrotos

Everything posted by Ambrotos

  1. I'm not sure what drivers (if any) Windows would load if the card were in recovery mode. What is shown if you execute "mst status" from the WinMFT installation directory? If you get any Mellanox device names listed there, you can do a device query to see if it's in "failsafe" mode with "flint -d /dev/mst/<device_name> query" http://www.mellanox.com/page/firmware_HCA_FW_identification If flint can identify it, there's a good chance it can flash it. Failing all that, you're probably right that the card's a brick. You don't sound optimistic that you'll be able to get your money back. I guess you didn't buy it on eBay, where worst case if you get a crusty vendor you can exercise your buyer protection? Cheers, -A
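     For reference, assuming flint does identify the card, the whole recovery sequence looks roughly like this (<device_name> is whatever "mst status" reports, and the firmware image has to be the one matching your card's exact PSID from Mellanox's download page):

       mst status
       flint -d /dev/mst/<device_name> query
       flint -d /dev/mst/<device_name> -i <firmware_image>.bin burn

     Only attempt the burn if the query succeeds; flashing the wrong image is a good way to turn a maybe-brick into a definite one.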
  2. Actually, assuming you did get the correct Ethernet variant of the ConnectX-2, it's possible your card might be in Firmware Recovery mode. Some quick Googling suggests that the "ConnectX IB Flash Recovery" device shows up if your card failed to properly load its firmware image. Maybe you could try flashing the latest firmware image into the card using the flint utility. http://www.mellanox.com/page/management_tools ...although for Linux Mellanox only makes DEB and RPM installers available, neither of which unRAID (being Slackware-based) can use directly, so unless you can find a pre-compiled binary somewhere it might be easier to move the card to a Windows machine and try to recover the firmware with WinMFT.
  3. What is the exact model of the ConnectX card you got? Are you sure you didn't get one of their Infiniband or FibreChannel cards instead of an Ethernet card? Does yours say "MNPA19-XTR" anywhere on it? My ConnectX-2 detects as follows in the system devices list:

       [15b3:6750] 01:00.0 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)
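     If it's easier, you can also check from the console; lspci is included in unRAID:

       lspci -nn | grep -i mellanox

     If nothing comes back at all, the card isn't even being detected as a PCI device, which points at a seating or hardware problem rather than drivers.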
  4. I'd still probably be inclined to get the Mellanox MNPA19-XTR, but get a copper SFP+ module instead of a fiber one. Aside from being cheaper than a straight-up RJ45 10Gig NIC, lots of people seem to use these cards w/ unRAID and we know driver support is pretty good. Something like this: https://www.ebay.ca/itm/Intel-Compatible-10G-RJ45-Copper-SFP-Transceiver-Module-10GBase-T/232569948684?hash=item36263fca0c:g:BHwAAOSwGzhaE~zl
  5. I picked up a couple of the Mellanox ConnectX-2 cards (MNPA19-XTR) off eBay for $40 CAD each, and they're working great. Since it's in the same rack as my switch, I just got a 2m DAC cable to connect unRAID into one of my 10Gig switch ports. The second Mellanox card is in a desktop in another room, so it's too far for a DAC. The necessary SFP+ modules and LC to SC duplex cable were surprisingly cheap on FibreStore.com. https://www.fs.com/products/65334.html https://www.fs.com/products/42995.html https://www.fs.com/products/30856.html That's just my setup. May not work for everyone. -A
  6. Just updated my two servers; one from 6.3.5 stable and one from rc20. No troubles to report. Of particular interest to me in this release is the new ability to bind a docker to a VLAN-tagged bridge and assign it an IP address in an isolated subnet so I can route its traffic out a separate gateway interface. Works like a charm! Cheers, -A
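     For anyone curious what that looks like under the hood: unRAID's webGui handles it for you, but it's roughly equivalent to creating a macvlan network on a VLAN sub-interface. The interface name, subnet, and image below are placeholders, not my actual values:

       docker network create -d macvlan \
         --subnet=192.168.100.0/24 --gateway=192.168.100.1 \
         -o parent=br0.100 vlan100
       docker run -d --network=vlan100 --ip=192.168.100.10 some/container

     The gateway at 192.168.100.1 can then be a separate routing interface, which is what lets the container's traffic leave via a different path than the host's.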
  7. 1. Support for using network shares as "pseudo-local drive" backup targets seems to be better in OSX than in Windows. Since Duplicati doesn't officially support the use of SMB-based backup destinations, I chose to use FTP. I use the plugin instead of the stock unRAID FTP server because I wanted finer control over FTP user accounts. I'm not aware of any specific downsides of doing it the way you did; if it's working for you then by all means carry on.
     2. This is likely to depend a lot on your specific setup. Personally, I have a 10Gbps LAN, so my bottleneck is definitely the array. Assuming you're running a 1Gbps LAN, newer NAS drives like the Seagate IronWolf and WD Red can write as fast as ~175MB/s at the outer edge of the platter, so it's possible the network is your bottleneck. This all assumes, of course, that your Duplicati client PC can read off its local drive, split the data into blocks, and encrypt it all fast enough to saturate your LAN/array. There are guides out there on optimizing Duplicati backup performance from a client configuration perspective, but increasing the volume size and reducing the compression level are good first steps for local backups if you feel like you're not getting the expected transfer rates (see the sketch after this post).
     3. Correct. I am using Backblaze B2. I've found it to be the cheapest per GB of the big cloud storage providers Duplicati supports (e.g. Dropbox, Microsoft Azure, Amazon S3, Google Drive, etc.)
     4. Based on my research from back when the CrashPlan announcement was made public, I chose to go the Duplicati route. I've set it up on several different systems for friends and family with good success. There may be new/better options out there, but I haven't really gone looking for alternatives. Honestly, I like this setup better than what I had set up with CrashPlan, and it's cheaper per month to boot! Cheers, -A
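     As a rough illustration of those two knobs: Duplicati exposes them as the advanced options --dblock-size (the upload volume size) and --zip-compression-level. A sketch using the command-line client (the destination URL, credentials, source path, and values here are placeholders, not my actual job settings):

       duplicati-cli backup \
         "ftp://tower/PC-Backups?auth-username=backup&auth-password=secret" \
         /path/to/source \
         --dblock-size=200MB \
         --zip-compression-level=1

     The same options can be set per-job in the web UI under "Advanced options". Larger volumes mean fewer round-trips on a fast local target, and compression level 1 trades a slightly bigger backup for less CPU time on the client.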
  8. Works like a charm. Thanks for taking a look at this so quickly. Cheers, -A
  9. Thanks for the updated plugin! I've been running without UPS integration since development of the v1 plugin stalled a couple of years ago. Much appreciated! One bit of feedback: you seem to have put a constraint on the length of the text field for "port". I have a networked Tripp Lite SMART5000RT, which can be polled via SNMP. However, the allowed length of the port field when the snmp-ups driver is selected is too short to enter "192.168.0.31". I got it to work by configuring the plugin for "manual only" mode and putting the following into the ups.conf file:

       [ups]
         driver = snmp-ups
         port = 192.168.0.31
         community = public
         snmp_version = v2c
         pollfreq = 15

     If you increased the allowed length of the port text box, I'd be able to use auto config as well. Thanks again, -A
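     For anyone else going the manual-config route, you can confirm the driver is actually talking to the UPS with NUT's own client tools (assuming the plugin exposes them; "ups" here is just the section name from ups.conf above):

       upsc ups@localhost

     That should dump the UPS's variables (battery charge, runtime remaining, etc.) if SNMP polling is working.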
  10. Ah, interesting. I don't have any Macs, so I couldn't tell you how well that'll work. Personally I try to use as few plugins as possible on my unRAID box, so if it works reliably as you have it now then I wouldn't mess with a good thing. I could be wrong, but I doubt you'll see a noticeable speed difference between the two protocols. Honestly, I would prefer it if Duplicati supported SMB, and it seems that you Mac users have it a bit easier in that regard than Windows users. https://github.com/duplicati/duplicati/issues/1257 Cheers, -A
  11. We must be having a miscommunication or something. The PCs are using a share on unRAID as a backup destination. They connect to these shares via FTP (SMB isn't a supported option in Duplicati). The built-in unRAID FTP server isn't configurable enough for my liking, so I used the ProFTPd plugin on unRAID to allow better user account control, etc. I'm not sure how you could use "Local Drive" as a backup destination for the PCs, unless you were planning to map a network drive or something like that, but I've never had much success with persistent network mapping in Windows. Cheers, -A
  12. I went through exactly the same thought process as you when I first read about CrashPlan halting their residential service offering, and in the end wound up selecting option #1. I chose not to back up any of the videos/movies/etc. I have on my unRAID server, since I can easily re-rip or re-download them as necessary if they were to be lost. But I do have several terabytes of photographs on my array, as well as a significant music collection, both of which would be practically irreplaceable.
      So I installed Duplicati on each of my family's four PCs, which back up to a "PC-Backups" share on unRAID via FTP (making use of SlrG's ProFTPd plugin). Then I created two jobs on the unRAID Duplicati docker, one for PC backups and one for array backups, each with its own separate bucket in Backblaze B2 and on its own schedule: PCs every other day, unRAID shares on a weekly basis.
      My main justification for backing the PCs up to unRAID instead of directly offsite is the significantly improved time-to-recover. I figured it was unlikely for both a PC and unRAID to fail simultaneously, so being able to restore data over a local 1Gbps LAN connection will be much faster than trying to restore a PC over the internet. In the event of something more catastrophic (like a house fire) where the unRAID server is lost too, I still have my offsite backups; it'll just take longer to recover.
      I've had it running this way for a couple of months now and so far I'm pleased. I'm paying way less than I was with CrashPlan too. Since Backblaze B2 is priced purely on the basis of storage consumption, in my particular case it works out way better than when I was being charged per-PC. I think my bill last month for four PCs and several shares on unRAID itself was around $4. YMMV, but it's working well for my needs. -A
  13. Quick followup: I was able to get the plugin to start correctly by simply running the following in the docker container:

        docker exec -it LogitechMediaServer apt-get install libcrypt-openssl-bignum-perl libcrypt-openssl-random-perl libcrypt-openssl-rsa-perl
  14. That's what I'm trying to confirm. Right now, it appears that the only things missing are the following modules:

        Bignum.pm
        RSA.pm
        Random.pm

      ...but I'm trying to make sure that if I provide these modules manually then it starts properly. I don't want to ask for a list of modules that winds up being incomplete. The current error message only lists RSA.pm, but that's just because that's where it currently fails. I can't be sure that it won't fail on some dependency further down the list until I try it for myself. Hopefully I'll have an opportunity to dig a bit more after work this evening. I'll keep you posted. Cheers, -A
  15. Hm, unfortunately that didn't resolve the startup exception. But, digging into it a bit, I think I know why. The log reports the error in the file Plugin.pm, line 33. If you open that file, it's trying to load Crypt::OpenSSL::RSA, a Perl module. If you check earlier in that same file, you see the line

        use lib catdir($Bin, 'Plugins', 'ShairTunes2W', 'lib');

      ...so the module is expecting the ./Plugins/ShairTunes2W/lib folder to contain the library of Perl modules for loading. The lib folder seems to contain OpenSSL module files for each of several recent versions of Perl: 5.12, 5.14, 5.18, and 5.20. Unfortunately, if you run "docker exec -it LogitechMediaServer perl -v" in the latest LMS docker, you can see that we're running Perl v5.22. Just on a whim, I tried copying the 5.20 directory to a new directory called 5.22, but upon restart that resulted in a different error about undefined symbols (the compiled module files are built against a specific Perl version, so that was a long shot anyway).
      Anyway, it seems that the ShairTunes2 plugin is looking for OpenSSL Perl modules in the config/cache/InstalledPlugins/Plugins/ShairTunes2W/lib folder, not the OpenSSL executable in /usr/bin. I'm going to see if I can download the OpenSSL modules specifically for Perl 5.22 from CPAN and load them into the correct library folder. I'll let you know how it goes. Cheers, -A
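      In case anyone wants to try the same thing, the plan sketched above would look something like this. This assumes cpanminus and a build toolchain are available inside the container, which I haven't verified; in the end the apt-get route in my follow-up post above turned out to be simpler:

        docker exec -it LogitechMediaServer cpanm \
          --local-lib-contained /config/cache/InstalledPlugins/Plugins/ShairTunes2W/lib \
          Crypt::OpenSSL::Bignum Crypt::OpenSSL::Random Crypt::OpenSSL::RSA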
  16. I'm really sorry to do this to you again, but now I'm trying to load up the latest ShairTunes2 (fork) plugin in order to enable my LMS as an AirPlay receiver, and I'm getting a "Plugin Failed to load" error. If you check the LMS docker log file, it's pretty clear why:

        [17-07-11 11:44:17.8246] Slim::bootstrap::tryModuleLoad (286) Warning: Module [Plugins::ShairTunes2W::Plugin] failed to load:
        Can't locate Crypt/OpenSSL/RSA.pm in @INC (you may need to install the Crypt::OpenSSL::RSA module)
        (@INC contains: /usr/sbin/Plugins/ShairTunes2W/lib /config/cache/InstalledPlugins/Plugins/ShairTunes2W/lib /config/cache/InstalledPlugins /usr/share/squeezeboxserver/CPAN/arch/5.22/x86_64-linux-thread-multi /usr/share/squeezeboxserver/CPAN/arch/5.22/x86_64-linux-thread-multi/auto /usr/share/squeezeboxserver/CPAN/arch/5.22.1/x86_64-linux-gnu-thread-multi /usr/share/squeezeboxserver/CPAN/arch/5.22.1/x86_64-linux-gnu-thread-multi/auto /usr/share/squeezeboxserver/CPAN/arch/5.22/x86_64-linux-gnu-thread-multi /usr/share/squeezeboxserver/CPAN/arch/5.22/x86_64-linux-gnu-thread-multi/auto /usr/share/squeezeboxserver/CPAN/arch/x86_64-linux-gnu-thread-multi /usr/share/squeezeboxserver/CPAN/arch/5.22 /usr/share/squeezeboxserver/lib /usr/share/squeezeboxserver/CPAN /usr/share/squeezeboxserver /usr/sbin /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1 /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .)
        at /config/cache/InstalledPlugins/Plugins/ShairTunes2W/Plugin.pm line 33.
        BEGIN failed--compilation aborted at /config/cache/InstalledPlugins/Plugins/ShairTunes2W/Plugin.pm line 33.

      Any chance you could add OpenSSL to the Docker OS when you get a chance? Cheers, -A
  17. Awesome. Thanks! That seems to have addressed the issue. After updating, the server's Web UI can now properly load the feed.xml and show the tracklist. I'll have to wait until I'm home tonight in order to test actually playing the feed, but I'm assuming that won't be a problem. Cheers, -A
  18. Could you consider adding IO::Socket::SSL to the docker? I'm trying to use the Podcast plugin to listen to a podcast that's available via an HTTPS feed, and I keep getting the error:

        There was an error loading the remote feed for : (Can't connect to https URL lack of IO::Socket::SSL

      Thanks!
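      As a stopgap until the image is rebuilt, installing the Debian package inside the running container should work (libio-socket-ssl-perl is the stock Debian package name; I haven't verified it against this exact image, and note that changes made via docker exec are lost whenever the container is recreated):

        docker exec -it LogitechMediaServer apt-get update
        docker exec -it LogitechMediaServer apt-get install libio-socket-ssl-perl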
  19. I'd also love to see NUT support built in. I'm using a Tripp Lite SMART5000RT3U, which obviously isn't supported by the built-in apcupsd. Also, since Macester didn't get a chance to include SNMP support before he stopped developing the plugin, I've had to fork his repository and make my own hacks. It works, but I would obviously prefer to be using a supported solution for something as important as UPS monitoring and safe shutdown. There used to be a similar feature request for baked-in NUT back in the unRAID v5 days. If I remember correctly, the talk was of moving away from apcupsd and onto NUT. Last I checked it was still in the "Unscheduled" sub-forum, though I can't seem to find the topic anymore. Anyway, +1 from me on this. -A
  20. The RSS URL changed with the migration to IPS. I also primarily check the forum using RSS. You can get feeds either per-category, or there's a forum-wide feed now that jonp re-enabled that shows the last 100 new posts. Just click the RSS icon in the bottom right of the forum page to get that particular category's feed URL. For example, the forum-wide feed URL is: https://forums.lime-technology.com/rss/1-all-unraid-topics.xml
  21. Was it a conscious decision not to enable a forum-wide RSS Feed? I see links for sub-category feeds, but haven't found the forum-wide URL. It'd be a shame if I couldn't keep monitoring via QuiteRSS.
  22. I just installed the official phpMyAdmin docker from the Docker Hub phpmyadmin/phpmyadmin repository. Just make sure to add 3 variables (PMA_HOST, PMA_USER, PMA_PASSWORD). If you want to do any config-file-based configuration instead of storing it all in tables, you can add an extra parameter to map an appdata configuration file into the container's /etc/phpmyadmin directory:

        -v /mnt/cache/appdata/phpMyAdmin/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php

      It's spelled out pretty clearly in the repo info. https://hub.docker.com/r/phpmyadmin/phpmyadmin/ Cheers, -A
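      Put together, the full run command looks something like this (the host, credentials, and host port below are placeholders, not my actual values):

        docker run -d --name phpMyAdmin \
          -p 8080:80 \
          -e PMA_HOST=192.168.0.10 \
          -e PMA_USER=admin \
          -e PMA_PASSWORD=secret \
          -v /mnt/cache/appdata/phpMyAdmin/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php \
          phpmyadmin/phpmyadmin

      The container serves on port 80 internally, so the -p mapping determines which host port you reach the web UI on.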
  23. "There is a plugin conflict, eg, what is 'hpfax'?"
      I don't know what hpfax is, as it's not something I've explicitly installed. However, given the directory it's trying to access, I suspect that error is being logged by gfjardim's CUPS Docker image that I was playing around with a couple of days ago. That docker was not started at the time of the crash, since I couldn't find the right PPD file for my network printer. I'm not at home at the moment so I can't give you an explicit list of plugins, but all of the plugins I have installed are pretty 'vanilla'... stuff like Nerd Pack, Powerdown, CA, Squid's User Scripts and Tips&Tricks, and several of the Dynamix packages like System Temp and Cache Dirs. -A
  24. I upgraded my system from 6.1.9 to RC5 yesterday evening. Everything seemed to be OK afterwards; however, this morning I noticed that I can no longer access anything in my user shares. When I log into SSH and try to browse to /mnt/user/ I get the following error:

        /bin/ls: cannot access 'user': Transport endpoint is not connected

      This appears to be the same error that Capt.Insano was reporting back on Sept 8th. I never saw this problem prior to upgrading to RC5. Attached is my diagnostics file. Rebooting fixes the problem, at least temporarily. I'll be watching to see if it recurs. -A nas-diagnostics-20160912-1044.zip