About mfwade

  1. So, I haven't tried one of their HBAs, the QSFP-to-QSFP type. I am speculating here; however, provided it uses a supported LSI chipset, you "should" be OK. That being said, if you go this route, I would use one of the cards generally agreed upon in this forum, with the understanding that you will need the appropriate QSFP-to-SFF-8088 cable. Regarding the power supply noise: the tray noise isn't terrible, although I kept mine in the unfinished basement. I suppose you could modify them; however, that's outside anything I would be willing to do. Keep in mind that, depending on the firmware, etc. on the IOM controller, it may not support drives larger than 3TB. Good luck.
  2. Keep in mind, you will need an SFF-8436 (QSFP) to SFF-8088 cable to connect the NetApp DS4243 tray to most HBAs. I have 2 of these shelves lying around; they came with 1 controller and IOM each. You will need, at a minimum, 2 power supplies in the enclosure for it to work properly and to keep the fan noise down "some". These are proprietary power supplies, not easily hacked. The IOM3 module is all you need for connectivity to the server HBA; it is nothing more than a SAS controller/interface to the drives in the chassis. Most trays that can be had on eBay already come with them. You will also want the baffles/blanks for all the unused drive slots in the front and the controller/power supply slots in the rear. If not, the fans will run at full speed, since the chassis detects an airflow anomaly. They do work very nicely with Unraid though.
  3. Good evening everyone, Just as the title suggests, does Unraid support monitoring multiple UPSes via USB? I have several rack-mount APC units and would like to see them detailed under the dashboard pane. Currently I am monitoring only one unit. Thanks again.
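For anyone else looking into this: one possible workaround (a sketch, not a stock Unraid feature) is to run a second apcupsd instance with its own config file and NIS port alongside the one the Unraid UI manages. The file paths, UPS name, and port below are my own examples, not defaults.

```shell
# Hypothetical second-UPS setup -- paths, names, and port numbers are examples.
# Write a second apcupsd config; UPSNAME/UPSCABLE/UPSTYPE/DEVICE/NISPORT are
# standard apcupsd.conf directives.
cat > /boot/config/apcupsd2.conf <<'EOF'
UPSNAME rack-ups-2
UPSCABLE usb
UPSTYPE usb
DEVICE
NISPORT 3552
EOF
# DEVICE left blank lets apcupsd autodetect the USB UPS; the stock instance
# normally listens on NIS port 3551, so the second instance needs its own.

# Start the second daemon alongside the stock one (example pid-file path)
apcupsd -f /boot/config/apcupsd2.conf -P /var/run/apcupsd2.pid
```

The dashboard pane would still only show the first unit; the second would have to be queried with `apcaccess -h localhost:3552`.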
  4. Set it up to use the custom br0 and set a static IP.
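On the command line, that advice looks roughly like this (container name, image, and addresses are placeholders; the Unraid Docker template UI exposes the same Network Type and Fixed IP fields):

```shell
# Placeholder names and addresses -- adjust to your LAN.
# br0 must already exist as a custom Docker network on the host.
docker run -d \
  --name example-app \
  --network br0 \
  --ip 192.168.1.50 \
  example/image:latest
```

`--ip` only takes effect on a user-defined network such as the custom br0, which is why the container has to be switched off the default bridge first.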
  5. Well, I tried creating 3 different USB drives with no luck. Tried with the backup zip file and then without (image from the Unraid site). Still no luck. The USB would appear to boot, bring up the Unraid boot screen, count down from 5 to 0, then start over again. Tried to change the menu entry, i.e. w/ Web GUI, and nothing; it just remains on that screen. I made all of the USB drives from my Mac. Each of the drives was from a different manufacturer; all were 8G in size and USB 2.0. Tried the 6.7.0 and 6.7.2 images from the Unraid site. Gave up... Reinserted the perceived faulty drive and it booted right up, just like last time (just rebooted without any troubleshooting). Seems odd that it happened right around ~60 days ago. Will keep an eye on it, and if it starts to go wonky again, I will replace it with one made from a Windows machine (maybe there is a difference...). Or, if it fails right around the 60-day mark, I will formally request assistance from the big guy, as then it would no longer be a coincidence; the timing is too specific. Still can't run a parity check, as the machine becomes completely unusable to the point that it drops pings intermittently, the web interface is unresponsive, shares aren't accessible, etc. Maybe one day the big guy @limetech will be able to duplicate this (doubtful) and offer up a solution. Maybe I am the only one running a full enterprise SSD SAS array with external shelves on an unlimited-drives license - 28 data + 2 parity. At any rate, I can offer my support for testing if he/they would like... Thanks again to those that answered. As always, a huge thank you for this forum!
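For reference, the manual macOS prep I attempted looks roughly like this. The disk identifier and zip filename are examples only; the `make_bootable_mac` script ships inside the Unraid zip, assuming the stock layout.

```shell
# Example device and filenames -- verify the disk identifier with
# 'diskutil list' before erasing anything!
diskutil list                                     # find the flash drive, e.g. /dev/disk4
diskutil eraseDisk FAT32 UNRAID MBR /dev/disk4    # FAT32 + MBR; volume label must be UNRAID
unzip unRAIDServer-6.7.2-x86_64.zip -d /Volumes/UNRAID
cd /Volumes/UNRAID && sudo bash make_bootable_mac # writes the boot sector
```

The boot-then-restart loop I saw is the classic symptom of the boot sector step not taking, which is why I suspect a difference between the Mac and Windows creation paths.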
  6. Thanks Johnnie. I did see that after I sent the request for help. Is there an official replacement procedure to swap it out? I do have a backup of the drive, albeit dated 5/31/2019, so I at least have something... I have more current backups (as of last week) on an unassigned device; however, I can't access it in the current state. Regardless, I still have to power it down, so maybe I can grab a good copy after a reboot. I should be OK with parity as well; however, if needed I will kick one off and wait patiently.
  7. All, I need your help. For the 2nd time after ~60 days (last time was 59 if memory serves), my array is acting weird. When logging in to view the array, only some of the tabs are shown, and the web interface is slow and close to unresponsive. Shares are no longer visible on the network; however, Docker containers and virtual machines appear to be running. There is a constant "Array Started . starting services..." flashing in the lower left-hand corner of the web interface. Additionally, I am concerned that if I just hard power down the array, the parity check will kick off for over 20 hours. This also results in unresponsive performance, slow GUI, etc. I did post about this some time ago and no errors were observed. Original post: https://forums.unraid.net/topic/80455-unable-to-run-a-parity-check/?tab=comments#comment-747515
     Current setup:
     UNRAID Version 6.7.0 2019-05-08
     HP DL365 G7, 192G RAM, AMD Opteron™ 6282 SE @ 2600 MHz
     Qty. (1) H200 flashed to IT Mode – 1 Parity and 2 Cache drives
     Qty. (1) H200 flashed to IT Mode – 1 Parity and 2 Cache drives
     Qty. (1) H200E flashed to IT Mode - connected to external EMC SAS shelves
     Qty. (30) 4TB Enterprise SAS Flash Drives (2 Parity, 28 Data)
     Qty. (4) 4TB Enterprise SAS Flash Drives (BTRFS Cache Pool)
     Only modification: /boot/config/go => rmmod acpi_power_meter
     Plugins: CA Auto Update Applications, CA Backup / Restore Appdata, Community Applications, Dynamix SSD TRIM, Fix Common Problems, Unassigned Devices, unBALANCE
     I don't remember ever experiencing these conditions back on 6.6.7; however, I didn't have as many SAS trays and drives attached. I am beginning to wonder if there is an underlying issue with the total number of drives, albeit SAS, and the SSD combination. That being said, in the array's current condition there is no method available via the web GUI to go back. Diagnostics attached. Thank you all in advance! unraid-1-diagnostics-20190920-0058.zip
  8. Well, new information. Not sure if it pertains to the finally-running parity check though... I am seeing the following in the logs:
     Jun 1 10:55:12 unRAID-1 kernel: CPU: 24 PID: 5156 Comm: unraidd Tainted: G O 4.19.41-Unraid #1
     Jun 1 10:55:12 unRAID-1 kernel: Call Trace:
     Could this be a new issue, or a symptom of the now-running parity check? Latest diagnostics attached. -MW unraid-1-diagnostics-20190601-1459.zip
  9. Thanks Johnnie.Black, I did try those settings with the same result. The server web interface was hung and the network interfaces dropped; however, the parity check appears to keep running. It seems the only setting that has made any difference was Tunable (md_write_method) => Reconstruct Write. Nonetheless, it is running now, albeit at slower speeds than I would expect from SSDs; it should complete in under 24 hours. In the storage world, this same operation could take days/weeks to complete (background verify tasks), so I guess I should be grateful for what I have. If @limetech is reading these, I would be more than happy to perform any type of testing you may need on this or another array. I can stand up another one relatively quickly with all SAS drives and a few scattered SATA SSDs and HDDs if you need these as well. Thanks again! -MW
  10. Had some time this morning to work on the UNRAID server. I made one change in the Disk Settings section: Tunable (md_write_method) => Reconstruct Write. I wouldn't expect this to make a difference unless there is an issue with the current parity hash or checksums. So far, so good. The network connectivity to the array has not dropped; however, the web interface is still very, very sluggish. I am pleased that the parity check is currently running, even if it's at a staggering 44-47MB/s per drive x qty. 30. I would expect significantly faster speeds from each of the SSD drives. I know there are a few of you ( @johnnie.black is one? ) with all-flash arrays. I suspect that most are all SATA, however (mine is all SAS); if you wouldn't mind posting your Disk Settings (or any other relevant settings) so that I may try to incorporate them to help speed up my array, I would be sincerely grateful. Thanks again, -MW
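For anyone comparing settings: as far as I can tell, the GUI toggle I changed corresponds to the mdcmd call below. The numeric mapping (1 = reconstruct write) is my understanding from forum posts, not something I've seen officially documented.

```shell
# Equivalent of Settings -> Disk Settings -> Tunable (md_write_method).
# 0 = read/modify/write (default), 1 = reconstruct write ("turbo write").
# The setting does not persist across reboots unless added to the go file.
/usr/local/sbin/mdcmd set md_write_method 1
```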
  11. I am considering starting over, per se, or rather formatting my USB stick with a fresh UNRAID image. Thoughts? This is really only an exercise to rule out any 'weirdness' that may be going on with the currently installed image. Ultimately, my real concern is a disk failure, and as it currently stands, the inability to run a parity check and/or verify if in fact parity is correct. There are 'green' health indicators; however, erring on the side of caution, I need to ensure the data is intact and can be rebuilt. If I do this, I would really like to preserve the following:
     NO plugins will be installed - initially
     Docker container / configuration
     Virtual Machine configuration and layout
     Disk layout
     Shares
     User names / passwords
     Cache pool (4 disk BTRFS pool)
     Appdata resides on cache
     Domains reside on cache
     Other shares reside here as well
     I don't mind reconfiguring the hostname, network, etc.; it's simple enough to redo, as I have 4 connections set up in failover mode. Is there a document that details the config files I need to preserve and move over to the newly formatted USB? Anything else I am missing? Should I even go down this path? Other than the inability to run or verify parity, UNRAID appears to be running phenomenally! And, it goes without saying... the new web GUI looks great! -MW
  12. I would tend to agree with you on some server-type gear and the staggered spin-up of drives; however, in the storage world (Pure, NetApp, EMC, etc.), all drives are on all of the time. In this case they are all flash drives, so the overall power draw is significantly lower than that of a tray with spinners. Power draw: all DAE trays contain either dual 850-watt or dual 1150-watt power supplies. The server contains dual 750-watt power supplies, currently configured for high-efficiency / balanced mode. Total combined usage detailed below.
     Server (0 drives): In use - 268 watts
     DAE Tray 1 (20 drives): Idle - 32 watts, In use - 161 watts
     DAE Tray 2 (14 drives): Idle - 32 watts, In use - 118 watts
     DAE Tray 3 (0 drives): Idle - 32 watts
     I don't believe this to be a power supply issue. As stated earlier, the parity check does kick off and it does appear to run; however, when it is running I lose all connectivity to the UNRAID server, and when logged in to the console the web interface becomes unresponsive. -MW
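As a quick sanity check, summing the quoted in-use figures (tray 3 has no drives, so it only idles) gives the combined draw:

```shell
# Watt figures as quoted in the post above
server=268; tray1=161; tray2=118; tray3_idle=32
echo "total: $((server + tray1 + tray2 + tray3_idle)) watts"
```

That total is a small fraction of the available supply capacity, which supports the point that this is not a power issue.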
  13. To clarify a bit further... The server contains no drives at this time.
     H200E Connection 1 => SAS DAE tray 1: 14 data drives, 2 parity drives, 4 cache drives (all 2.5")
     H200E Connection 2 => SAS DAE tray 2: 14 data drives (all 2.5")
     DAE tray 2 SAS expansion => SAS DAE tray 3: 0 drives - will be used for Unassigned Devices (at some point)
     The server and all SAS DAEs are connected to the same rack-mount APC Smart-UPS 1500, which is running at around 65% load. -MW
  14. Server power supplies are rated @ 750 watts x 2. Power draw at startup is 392 watts; at idle, 242 watts. The SAS trays each have 1150-watt x 2 power supplies. At idle with no drives they are running @ 40 watts total. These are enterprise-rated disk trays and are rated for 25 HDDs. The SSDs I am using consume much less than a spinner.
  15. Good afternoon, I am having an issue when trying to manually run or schedule a parity check. Every time I try to kick one off, the entire system hangs. More specifically, when I am on the console and kick it off, I see all the drive lights light up like it's doing a parity check; however, the web instance becomes unresponsive. Furthermore, when monitoring from a remote computer, I see that the network (ping test) drops altogether. I have verified this behavior both on my older Supermicro server and on the newly provisioned (this past weekend) HP DL365 G7. Please note, I have tried safe mode, GUI mode without plugins, etc., all with the same results.
     Current setup:
     UNRAID Version 6.7.0 2019-05-08
     HP DL365 G7, 192G RAM, AMD Opteron™ 6282 SE @ 2600 MHz
     Qty. (2) H200 flashed to IT Mode - not connected to any drives
     Qty. (1) H200E flashed to IT Mode - connected to external EMC SAS shelves
     Qty. (30) 4TB Enterprise SAS Flash Drives (2 Parity, 28 Data)
     Qty. (4) 4TB Enterprise SAS Flash Drives (BTRFS Cache Pool)
     All plugins removed
     Only modification: /boot/config/go => rmmod acpi_power_meter
     I am attaching the diagnostics file for your review. All that said, I suspect that if I just let it run, it would complete in a day or so; however, none of the VMs and containers are available, nor can UNRAID be managed. This has been going on for quite some time, and the only way to recover is to perform a hard shutdown and power-on. When it comes up, I have to be fairly quick to cancel the parity check. Any help is appreciated. Thanks again! -MW unraid-1-diagnostics-20190528-1430.zip