Leaderboard

Popular Content

Showing content with the highest reputation on 03/10/21 in all areas

  1. Upgraded from the last stable as I had a spin-up problem with my Seagate Ironwolf parity drive under RC2. I see the same quirk again under the new kernel - this time I've attached diagnostics. From what I can tell, it appears once the mover kicks in and forces the spin-up of the parity. It tripped up once, as you can see from the logs, but came up and wrote fine. I've done repeated manual spin-downs of the parity, writing into the array via the cache, and forcing a move, hence bringing up the parity again. No errors as of yet. This is a new drive under exactly the same hardware setup as 6.8.3, so it is a software/timing issue buried deep. If this continues, I will move the parity off my 2116 controller (16 port) and over to my 2008 (8 port) to see if that relieves any issues. Past that, perhaps over to the motherboard connector to isolate the issue. FYI. Kev.
     Update: I've disabled the cache on all shares to force spin-ups much faster. Just had another reset on spin-up. I'll move to the next controller now.
     Update 2: The drive dropped outright on my 2008 controller and Unraid dropped the parity and invalidated it. I'm going to replace the Seagate with an 8TB WD unit and rebuild. Definitely an issue somewhere with timings between the 2 controllers.
     Update 3: After some testing offline with the unit and digging, it looks like the Ironwolf may be set too aggressively at the factory for a lazy spin-up. This behavior must be tripping up some recent changes in the mpt2/3sas driver. Seagate does have advanced tools for setting internal parameters ("SeaChest"). I set the drive to disable EPC (extended power conditions), which has a few stages of timers before various power-down states. For good measure I also disabled the low-current spin-up to ensure a quick start. Initial tests didn't flag any opcodes in the log. I'm rebuilding with it and testing. For note, the commands are:
     SeaChest_PowerControl_xxxx -d /dev/sgYY --EPCfeature disable
     SeaChest_Configure_xxxx -d /dev/sgYY --lowCurrentSpinup disable
     Update 4: I've submitted all info to Seagate tech support citing the issue with the 10TB and my suspicions. Deeper testing today once my rebuild is done to see if the fixes clear the issue.
     Update 5: Full parity rebuild with the Ironwolf. No issues. I've done repeated manual drive power-downs and spin-ups to force the kernel errors across both the controllers above and I have not seen any errors. Looks like the two tweaks above are holding. Kev.
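     For anyone wanting to script the two SeaChest tweaks above, a minimal sketch; the device path and the exact SeaChest binary names (the _xxxx suffix) are placeholders, not from the post - check what your SeaChest download actually ships as and which /dev/sgYY node your drive is:
     #!/bin/bash
     # Apply the two Ironwolf tweaks from the post above to a single drive.
     DRIVE=/dev/sg5   # assumption: replace with your drive's SG device
     SeaChest_PowerControl_xxxx -d "$DRIVE" --EPCfeature disable
     SeaChest_Configure_xxxx -d "$DRIVE" --lowCurrentSpinup disable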
    3 points
  2. You don't have to do that. If the share is empty it will delete all top-level dirs for it on all devices.
    2 points
  3. Normally I would have, but I just didn't have a chance to yet. My life is still a bit of a mess after moving (I'm currently sitting on the floor while I type this because most of my furniture won't arrive for another week haha). I'm planning on incrementing the version later today though, but I wanted to get the quick and dirty fix out sooner rather than later.
    2 points
  4. Do you have an active internet connection on boot? The plugin will automatically download the new version for you, but you need to have an internet connection on boot, otherwise it will fail. No, you only need an active internet connection on boot and it will download the new driver (keep in mind that the boot will take a little longer since it's downloading the new driver, ~130MB). As @tjb_altf4 said, if you don't have an internet connection the worst thing that can happen is that you have to reinstall the Plugin and also disable and enable the Docker daemon or reboot once more. Hopefully the next time an update is released this won't happen again. I check for new versions every 15 minutes now and have everything automated, so that about 1 hour and 15 minutes after a release the Plugins are updated, even if I'm sleeping...
    2 points
  5. Personally, and I do mean personally, I always do the following:
     Stop all Dockers
     Spin up all drives (the step below does it anyway, but......)
     Stop the array
     Shutdown/Reboot
     I do that simply because if a docker hangs I can wait for it to shut down vs wondering what's hung and why my machine isn't shutting down. So I assume control of each step because I don't like unclean shutdowns and having to wait for a parity check to fire up if something goes sideways. I've not done that a few times and had good results, but there were a few times in the past when I had to eventually log in and pray that nothing would go wrong when I forced it to shut down.
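     If you prefer doing the first step from a terminal instead of clicking each container, a minimal sketch (the array stop and the shutdown/reboot themselves are still done from the webGUI in this sequence):
     # stop all running containers and wait for each one to exit cleanly
     # (assumes at least one container is currently running)
     docker stop $(docker ps -q)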
    2 points
  6. Yeah just a min, actually about 15. I put it on the wrong branch 😆 Ok good to go now, sheesh
    2 points
  7. After starting to play around with UnRaid a couple of weeks ago I decided to build a proper system. I want to share build progress and key learnings here. Key requirements have been: AMD system, plenty of CPU cores, low wattage, ECC memory, IPMI, good cooling since the system sits in a warm closet, prosumer build quality.
     Config: Runs 24/7 and has been rock stable since day 1.
     UnRaid OS: 6.10 RC1
     Case: Fractal Design Define 7
     PSU: Be Quiet! Straight Power 550W
     Board: AsRockRack X570D4U w/ BIOS 1.20; latest version as of 2021/10
     CPU: Ryzen 9 3900 (65W, PN: 100-000000070) locked to 35W TDP through a BIOS setting; the CPU was difficult to source since it is meant for OEMs only.
     Cooler: Noctua NH-L12S
     Case fans: 5x Arctic P14 PWM - noise level is close to zero / not noticeable
     Memory: 64 GB ECC (2x32 GB) Kingston KSM32ED8/32ME @ 3200MHz (per memory QVL)
     Data disks: 3x 4TB WD40EFRX + 1x 4TB WD40EFRX for parity (all same disks, same size)
     Cache 0: 2x 512GB Transcend MTE220S NVMe SSDs, RAID 1
     Cache 1: 4x 960GB Corsair MP510 NVMe SSDs, RAID 10. Set up with an ASUS Hyper M.2 in the PCIe x16 slot (BIOS PCI bifurcation config: 4x4x4x4x)
     Todos:
     Replace the 4 SATA cables with Corsair Premium Sleeved 30cm SATA cables
     Eventually install an AIO water cooler
     Figure out the dual channel memory setting, atm. single channel config. That's done.
     Eventually configure memory for 3200MHz. Done.
     Eventually install a 40mm PWM cooler for the X570. Update: After a few weeks of 24/7 uptime this seems to be unnecessary since the temps of the X570 settled at 68 - 70°
     Get the IPMI Fan Control plugin working
     Temperatures (in degrees Celsius) / throughput:
     CPU @ 35W: 38° - 41° basic usage (Docker / VMs) / 51° - 60° load
     CPU @ 65W: 78° - 80° load (this pushes fans to 1300 - 1500 RPM, which lowers the X570 temps to 65°)
     Disks: 28° - 34° load
     SSDs: 33° - 38° load
     Mainboard: 50° on average
     X570: 67° - 72° during normal operations, 76° during parity check
     Fan config: 2x front (air intake), 1x bottom (air intake), 1x rear & 1x top (air out); 800 - 1000 RPM
     Network throughput: 1 Gbit LAN - read speed: 1 Gbit / write speed 550 - 600 Mbit max. (limited by the UnRaid SMB implementation?). Write tests done directly to shares. So far meeting expectations. Final config: 2x 1 Gbit bond attached to a TP-Link TL-SG108E.
     Learnings from the build process:
     Finding the 65W version of the Ryzen 9 3900 CPU was difficult; finally found a shop in Latvia where I ordered it. Some shops in Japan sell these too.
     The case / board config requires an ATX cable with min. 600mm length
     IPMI takes up to 3 mins after power disconnect to become available
     The BIOS does not show more than 2 M.2 SSDs which are connected to the ASUS M.2 card in the x16 slot. However, unRaid has no problem seeing them.
     Mounting the CPU before mounting the board was a good decision; I should have also installed the ATX and 8-pin cables on the board before mounting it, since installing the two cables on the mounted board was a bit tricky
     Decided to go with the Noctua top blower to allow airflow for the components around the CPU socket; seems to work well so far
     Picked the case primarily because it allows great airflow for the HDDs and a clean cable setup
     The front fans may require PWM extension cables for a proper cable setup, depending on where on the board the fan connectors are located
     The X570 is hot, however with a closed case airflow seems to be decent (vs. open case) and temps settled at 67° - 68°
     Removed the fan from the ASUS M.2 card; learned later that it has a fan switch too. Passive cooling seems to work for the 4 SSDs
     PCIe bifurcation works well for the x16 slot, so far no trouble with the 4x SSD config
     Slotting (& testing) the two RAM modules should be done before the board is mounted, since any changes to the RAM slots, or just ins/outs, are a true hassle: the slots can only be opened on one side (looking down at the board, on the left side towards the external connectors) and the modules have to be pushed rather hard to click in.
     IPMI works well, but still misses some data in the system inventory. Also, the password can only have a max. length of 16 bytes; used an online generator to meet that. Used a 32-char password at first and locked the account. Had to unlock it with the second default IPMI user (superuser). ASRock confirmed the missing data in the IPMI system inventory and suggested refreshing the BMC, which I haven't done yet.
     Performance:
     With the CPU @ 35W the system performs well for day to day tasks, however it feels like it could be a bit faster here and there. Nothing serious. VMs are not as fluid as expected. The system is ultra silent.
     With the CPU @ 65W the system, especially VMs and docker tasks such as media encoding, is blazing fast. VM performance is awesome and a Win10 VM through RDP on a MacBook feels 99% like a native desktop. The app performance in the VM is superior to usual laptops from my view, given the speed of the cache drive where I have the VM sitting and the 12-core CPU. Fans are noticeable but not noisy.
     45W Eco Mode seems to be the sweet spot, comparing performance vs. wattage vs. costs.
     Transcoding of a 1.7GB 4K .mov file using a Handbrake container:
     65W config - 28 FPS / 3 mins 30 sec - 188W max.
     45W (called ECO Mode in BIOS) - 25 FPS / 3 min 45 sec - 125W max.
     35W config - 4 FPS / 25 mins - 79W max.
     Power consumption:
     Off (IPMI on) - 4W
     Boot - 88W
     BIOS - 77 - 87W
     Unraid running & ECO Mode (can be set in BIOS) - 48W
     Unraid running & TDP limited to 35W - 47W
     Parity check with CPU locked to 35W - 78W
     Without any power related adjustments and the CPU running at stock 65W the system consumes:
     80W during boot
     50 - 60W during normal operations, e.g. docker starts / restarts
     84 - 88W during parity check and array start up (with all services starting up too)
     184 - 188W during full load when transcoding a 4K video
     CPU temps at full load went up to 86° (degrees Celsius).
     Costs: If I did the math right - the 35W config has lower peak power consumption, however since calculations take longer the costs (€/$) are higher compared to the 65W config. In this case 0.3 (188W over 3.5 minutes) vs. 2.3 (78W over 25 minutes) Euro cents. So one might look for the sweet spot in the middle.
     January 2021 - Update after roughly a month of runtime - No issues, freezes etc. so far. The system is rock stable and just does its job. Details regarding IOMMU groupings further below. I will revisit and edit the post while I am progressing with the build.
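     For reference, the per-transcode cost can be sanity checked as energy (kWh) = watts × minutes / 60 / 1000, multiplied by the electricity tariff. Assuming a tariff of roughly 0.30 €/kWh (my assumption, not stated in the post), the 65W run works out to 188 W × 3.5 min / 60 ≈ 11 Wh ≈ 0.011 kWh, i.e. about 0.3 Euro cents, which matches the figure quoted above.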
    1 point
  8. That's awesome! Thanks!
    1 point
  9. This seems to be very difficult to achieve, although I can fully understand your "wish". Keep in mind that even an 80 Plus Titanium PSU has 10% loss at 10% load. ARM SBCs are very low on idle usage, but they're not super powerful.
    1 point
  10. Thank you. I will merge the pull request. Give Community Apps a bit to pick up the updated language pack and then you can update it and see the translation changes.
    1 point
  11. I edited your first post and title to reflect that; it will be better for clarity in the future.
    1 point
  12. That's because I'm an idiot. I don't mean wireshark, I mean wireguard 🤯
    1 point
  13. How full a drive is will have no effect on a parity check, as the parity check works at the raw sector level and is unaware of the meaning of the contents of the sectors - just that they have a bit pattern it is going to use/check.
    1 point
  14. Yes, since it's currently empty. Before deleting the share, add Disk 3 back to it (ie. include Disk 3). Then delete the share. Then recreate it, including just the encrypted Disk 5. Alternatively, if you don't mind using the command line, just delete the empty folder: rmdir /mnt/disk3/Work
    1 point
  15. I'd say that you have an empty directory called "Work" in the root of Disk 3 and therefore the reported available free space is the sum of that on Disk 3 and Disk 5. I'm guessing that directory was created when you first created the Work share and that you later changed the share to include only Disk 5.
    1 point
  16. I'll be out of town for a couple of days and can see there's a new update (6.9.1). I'm going to update it and as soon as I return, I'll try the previous solutions; otherwise I'll try your new suggestion. But I really don't like deleting things :E Feels like there's a high probability of error from my side, but hey, gotta learn somehow
    1 point
  17. My script now looks like this:
     #!/bin/bash
     cat /mnt/user/<share>/<file>;
     With 1 line per disk, that just outputs the file to the console, or in that case the log. So same as before: script as a cronjob for the time that you want them to keep spinning, spindown settings for the rest of the time. Yeah, that remains to be seen; my script will start soon and I'll report back if it kept working through the evening. Still, this is a clunky workaround, there has got to be something like the old version with 'sdspin'
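     A possible cron entry for that approach (the script path and the schedule are assumptions - adjust them to wherever you saved the script and to the hours you want the disks kept awake); this runs it every 15 minutes during the hours 08-23:
     */15 8-23 * * * /boot/config/keep_spinning.sh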
    1 point
  18. @Gico Do you know which client on your network is 192.168.168.10? It's connecting to your server repeatedly via /login and nginx is complaining about it.
    1 point
  19. No, these are host keys - what the server uses to identify itself to the user - and all the files in /boot/config/ssh will be installed into /etc/ssh upon startup of the ssh server. If the ssh server starts up and cannot find these host keys, it will generate new ones and scare anybody trying to ssh in with a warning message about a possible Man in the Middle attack due to the host key mismatch. Needless to say, the new ones will be saved to /boot/config/ssh as well. For those who use the ssh plugin or know what they are doing, the configuration of the ssh service can be changed and persisted by copying the modified /etc/ssh/sshd_config file here.
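     For example, a minimal way to persist a changed ssh configuration, assuming you have already edited /etc/ssh/sshd_config on the running system (run as root):
     cp /etc/ssh/sshd_config /boot/config/ssh/sshd_config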
    1 point
  20. A noble goal, but a lot of technology/expense for very little benefit (ecological or economic) Solar is highly variable in how much energy is produced based on time of day, cloud conditions, etc. What's going to happen to the excess energy when the sun is shining but your server only needs a small amount of what's generated? What's going to happen to the energy generated when it isn't quite enough to power your server? (your server will need to switch to mains, and the power that is generated is wasted?) Have you looked at getting a more standard roof-top solar/inverter system? If you can afford the expense, the cost/benefit ratio would likely be much better (again, both ecologically and economically)
    1 point
  21. Thanks for the fix. Trouble though with what you've done is that many users may not be aware of the issue (ie: the plugin may have auto updated), and if at some point they do check out the webUI they'll hit the issue on a version of the plugin which has, in fact, already been fixed. Why not bump the version of the plugin so that everyone gets the fix and avoids any issues?
    1 point
  22. This is one way: https://forums.unraid.net/topic/93846-btrfs-error-with-new-system-m2-ssd/?tab=comments#comment-867992
    1 point
  23. Also, if you are constantly pushing large amounts of data, going to MTU 9000 can be worthwhile. With 1GbE you can reckon with a gain of about 10 MB/s on average. With 10GbE that could already make a difference of 100 MB/s. All devices have to support it, and for the CSS610 the documentation mentions something like 10218 bytes somewhere. Typical MT, as usual: instead of using the standard, universally known 9038 bytes, MT uses something non-standard or unusual. IMHO it's similar to the PoE port that only supports passive PoE while everyone else uses 802.3af/at. Better leave it at 1500 bytes if you're not having any problems. By now I'm of the opinion that a different switch would have been better, even if considerably more expensive. The documentation is wrong. Resetting to SwOS 2.12 as described doesn't work; I always stay on 2.13RC5. With a password set, the box hangs forever at login and then spits out an error.. Yes, I can type... It's just a cheap piece of kit, I can't put it any other way anymore. If it weren't for the two SFP+ ports I would even call it a bad purchase; as it is, I can say the device at least works normally for standard applications. 10G is fast and problem-free. I'm curious what problems I'll run into once I want to split the network into VLANs... possibly with a router from the same company..
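     If you do experiment with jumbo frames later, the MTU on the Linux/Unraid side can be checked and temporarily changed from a terminal as a quick test (eth0 is an assumption - use your actual interface; the change does not survive a reboot, the permanent value belongs in the network settings):
     ip link show eth0              # shows the current MTU
     ip link set dev eth0 mtu 9000  # temporary test setting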
    1 point
  24. @ich777 I have confirmed this works. Thanks, I'll happily wait for the fix.
    1 point
  25. @all for all that are using the Nvidia-Driver Plugin: @Kaldek & @sterofuse If you are using the Nvidia-Driver Plugin I have a workaround for you if you boot into GUI mode:
     When you get to the blinking cursor press CTRL+ALT+F1
     Login with your root account and password
     Type in 'nvidia-xconfig' (without quotes)
     Type in '/etc/rc.d/rc.4' (without quotes)
     These commands have to be done from the local console (not a remote console). I will fix this ASAP and keep you updated here in the thread, sorry for the inconvenience... Fixed! Update the Nvidia-Driver Plugin (make sure that you are on version 2021.03.10) and reboot your server into GUI mode again.
    1 point
  26. Thanks for your reply, I've literally just finished using the kernel helper for 6.9.1 stable. I think I know where I was going wrong: a long time ago I was using the linuxserver.io unraid Nvidia and Unraid DVB plugins. They overwrote each other, not your drivers. Re-looking just now I see there are two plugins I need in your repo: Nvidia Driver and DVB Driver. Thanks for your efforts! I'll try them next build rather than the Kernel Builder. P.S. the Kernel Builder has been amazing.
    1 point
  27. That drive is very sick - each Pending sector indicates a sector that cannot be read reliably (and can thus result in the corresponding sector on the parity drive potentially having the wrong contents). Reallocated sectors, while not necessarily a problem if they are stable, are a big warning sign if the number is not small. With that drive in the system I would not assume that the contents of the parity drive are valid enough that parity plus the remaining drives can rebuild any failed drive without serious file system corruption on the rebuilt drive. Since you say the content of that drive is unimportant, I would suggest: doing Tools -> New Config and selecting the option to retain all current settings; returning to the Main tab and changing the problem drive slot to its replacement; rebuilding parity with the new drive set. Hopefully this time it will build without drive level errors so it can be assumed valid. You can then format the replacement drive to create an empty file system on it so it is ready to receive data.
    1 point
  28. Good deal. The --delete option uses extra memory. Let's eliminate the variables one at a time. Just for the sake of troubleshooting, remove that option and try running again. Also, add -P and -h. Those will make the output a little easier to read for you.
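     A sketch of what the adjusted command might look like (source and destination paths are placeholders, not from this thread): keep -a, drop --delete while troubleshooting, and add -P (progress + keep partial transfers) and -h (human-readable sizes):
     rsync -aPh /mnt/user/some_share/ /mnt/disks/external_hd/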
    1 point
  29. Don't touch those as they are the SSH host keys (deleting them will regenerate them on sshd restart). If they are regenerated, you'll get warnings about Man In The Middle attacks (ssh will consider your Unraid host as never before seen)
    1 point
  30. Hi @coblck, can you advise on how much RAM your system running rsync has? Also, what is the file system format of the external HD? Sometimes using the -a flag for rsync between different file system formats can cause problems, so I'm wondering if using -r and -t, and maybe some other options, would help solve the problem of using -a. There could be a few different things to try here
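     As a rough sketch of that idea (paths are placeholders): -a is shorthand for -rlptgoD, and the permission/owner/group parts of it can misbehave on targets like NTFS or exFAT, so a trimmed-down run might look like:
     rsync -rtvh --progress /mnt/user/some_share/ /mnt/disks/external_hd/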
    1 point
  31. Done as you wrote and with all default settings everything works. Public posting to the server list and stable for a few hours. I'm going to start changing the server name, world name, and password, one by one, to see what happens. I did make sure the password was not the world name. Will report back with results.
    1 point
  32. @cortana - please type this in a Terminal session and then reboot. If still no local video post diags again: touch /boot/config/modprobe.d/amdgpu.conf
    1 point
  33. Next point release, can you update Docker to 20.10.5? It patches 3 CVEs, including these 2 ugly ones: https://github.com/moby/moby/security/advisories/GHSA-6fj5-m822-rqx8 https://github.com/moby/moby/security/advisories/GHSA-7452-xqpj-6rpc
    1 point
  34. IIRC GPU stats plugin can cause this.
    1 point
  35. Hi all. I had this problem too after upgrading to 6.9 this week. My Win10 VM and even unRAID itself were running dog slow. System Interrupts was using 100% CPU in Windows. For me what worked was changing a setting in the "Tips and Tweaks" plugin. I had "CPU Scaling Governor:" set to "Power Save", which I think is the default for Intel CPUs. Changed it to "On Demand", and now my VM and unRAID are running much better. Disclaimer: I don't understand much of that, but it seems to work. Maybe there are downsides. YMMV. Seeing a lot of posts with this issue, hopefully this helps you.
    1 point
  36. Hello, both Sonarr and Radarr recently have been unable to connect to Deluge. Both return the following error under System: I suspect this may be related to Privoxy, as my Deluge log shows the following repeating over and over: 2021-03-04 19:35:47,139 DEBG 'watchdog-script' stdout output: [info] Privoxy not running 2021-03-04 19:35:47,143 DEBG 'watchdog-script' stdout output: [info] Attempting to start Privoxy... 2021-03-04 19:35:48,153 DEBG 'watchdog-script' stdout output: [info] Privoxy process started [info] Waiting for Privoxy process to start listening on port 8118... 2021-03-04 19:35:48,159 DEBG 'watchdog-script' stdout output: [info] Privoxy process listening on port 8118 Up until recently everything was working fine. My VPN is PIA. Anyone know where to start? --- ✔️SOLVED: I had to add my server's IP in both Sonarr and Radarr under Settings > General > Proxy > "Addresses for the proxy to ignore" This is described in Q26 in the documentation here: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
    1 point
  37. Looks like it's time to close my feature request now.
    1 point
  38. If you want a container to use multiple cores, then you can't pin it to the same isolated cores. Sure the OS will possibly take some cycles on the cores, but since you're not running any VM, the hit would be minimal. If you really want it to have more or less unhindered access to certain cores, then you pin it to those cores, and then pin every other container to all the cores except for those ones.
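     As a rough illustration of that pinning idea (container and image names are made up; on Unraid the same thing is normally done from the container's CPU pinning settings, which translate to --cpuset-cpus under the hood):
     # give the heavy container cores 4-5 and keep the other container on cores 0-3
     docker run -d --name heavy_app --cpuset-cpus="4,5" some/heavy-image
     docker run -d --name other_app --cpuset-cpus="0-3" some/other-image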
    1 point
  39. I'd like to request the NordVPN client docker from bubuntux. It's designed to provide a connection to other containers via --net=container:vpn. The advantage over normal OpenVPN clients is that it uses the NordLynx connection, so you get better performance when using this VPN. I tried to create a docker template myself and put it in dockerman/templates-user, but can't get it to work. Thanks for your time.
    1 point
  40. I'd like to request Remotely. It would be very nice to use for customer remote support for Windows, Linux or maybe VM access. This YouTube video (hope it's okay to post) got me interested, and it can be self-hosted. https://github.com/lucent-sea/Remotely
    1 point
  41. I think the Meshify allows even better cable management, especially for the ATX connector, since it has one more rubber-protected cable grommet on top of the other two upper grommets. With the experience from my Define build I'd recommend the Meshify. And the optics of the Meshify are cooler too. The switch is fine. Not knowing your home / infrastructure, I'd probably throw away the other small switch and get a device which allows Link Aggregation. The full PCIe4 x1 slot of the board allows a 10G upgrade at a later stage.
    1 point
  42. So far I have no issues with the board. Even the relatively high temps of the X570 seem to be no issue. The maximum I have seen is 76° during a parity check while copying data back and forth to the NVMEs. I am using 2 Linux (server related) hosts and 1 Windows VM atm. Given that the Windows VM is only using a virtual VGA adapter, the performance is ok. I get between 5 - 7 GB/sec read speed (RAID 10) and 2.5 - 4.2 GB/sec write speed (RAID 1 & RAID 10). Still testing the network speed, which should give max. 1 GBit, given I have a WiFi-only network. The UnRaid host is connected to a Fritzbox 1GBit LAN port. The main purpose of the large cache is to host docker containers and a large picture DB. I use the slower disks primarily as a 1st backup instance. Both cache drives are also being used to store VM disks. I got the 12-core CPU for encoding purposes and to be able to pin cores more granularly to containers and VMs. Still, I wanted a low power CPU and not a 105W one. Bottom line, I think the board (and probably the 10G version) is worth the money. 10G version only if the network allows the speed. The BIOS is basically a server grade BIOS with added desktop (overclocking) features. The 4x ASUS card works flawlessly. Even though the SSDs stay within their thermal tolerance range I am thinking of adding the fan again, and since it's rather noisy, adding a resistor to lower the fan speed. The SSDs are ok up to 70°, so there is still plenty of headroom atm. If a dedicated GPU would accelerate the desktop VMs I might get one, but only a cheap 2D card like the Nvidia GT710 passive. Still need to figure that out. I did a quick disk benchmark with the Linux desktop, which used a 60G disk on the RAID 10. The 8.7 GB figure is read, the 1.2 GB is write speed.
    1 point
  43. I was able to pull bubuntux/nordvpn, which is a NordVPN service utilizing WireGuard, as an ordinary docker from Docker Hub. I was able to modify the config file and get it to run using docker-compose. I have to launch it from the command line, and while it's running I can pause it and stop it from the Unraid interface. I haven't yet figured out how to configure it for other dockers that might want to use the tunnel. I think more than anything it needs a docker menu to help reconfigure the service, but I haven't figured out how to create a menu file. Any tips on borrowing another app's docker menu and modifying it to work for bubuntux/nordvpn would be appreciated.
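     Until a proper template/menu exists, a rough command-line sketch of the pattern from the earlier request (container names are placeholders and the NordVPN image's required environment variables are omitted - check its README for those; NET_ADMIN is needed for the tunnel):
     # start the VPN container
     docker run -d --name vpn --cap-add=NET_ADMIN bubuntux/nordvpn
     # route another container's traffic through it by sharing its network stack
     docker run -d --name someapp --net=container:vpn some/image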
    1 point
  44. Thanks, everything is looking good for now. If someone else has the problem, this is what I did:
     Disable Docker under Settings -> Docker -> Enable Docker
     Disable the VM Manager under Settings -> VM Manager -> Enable VMs
     Backup the cache drive: /usr/bin/rsync -avrtH --delete /mnt/cache/ /mnt/(SOMEDISK)/cache_backup >> /var/log/cache_backup.log
     Format the disk:
     Stop the array
     Under Main -> Cache change "File system type:" to something different
     Start the array
     Under the stop array button you should find a format button -> format the disk
     Stop the array
     Change "File system type:" back to the old value
     Start the array
     Format again
     Done
     Restore the cache disk (note the trailing slash on the source so the contents go back into /mnt/cache/ rather than into a nested folder): /usr/bin/rsync -avrtH --delete /mnt/(SOMEDISK)/cache_backup/ /mnt/cache/ >> /var/log/cache_backup2.log
     DONE
    1 point
  45. SOLVED! Solved solved solved! I came across this post from MHzTweaker: http://lime-technology.com/forum/index.php?topic=28727.msg256252#msg256252. In turn, he had posted this link: http://yannickdekoeijer.blogspot.com/2012/04/modding-dell-perc-6-sas-raidcontroller.html Sure enough, after sticking a piece of electrical tape over pins B5 and B6, the card now boots on all of my systems. Attached is a picture. Once I had flashed the card, the electrical tape was literally the thing that got it to work. I've attached a picture of the card, and below is a link to the zip file I used to flash the card. Instructions are in the file. I'll also update the LSI controller thread. https://www.dropbox.com/s/l4kadyukkh2w497/Flash%20Dell%20PERC%20H310%20for%20unRAID.zip
    1 point