Whaler_99

Everything posted by Whaler_99

  1. I'm not sure specifically, as it doesn't explicitly state it one way or the other, but I assume it does. On most of these 3x5 units the backplane is pretty dumb and just passes signals through, so it will support SATA I/II/III just fine. I know a couple of models state support for I and II but not III specifically, so there you might have an issue.
  2. Those cages will work fine and are a great alternative, but having run unRAID for years now and needed to swap drives in and out, the hot-swap cages are so worth the money. They make it so much easier! And in reality they are not really hot-swap per se - you still need to shut down the server to swap/move/change a drive, etc.
  3. When I did my review of the 3x5 cages I looked high and low on this, and there was a ton of conflicting info on what to plug in and what not to. All I could find officially was the FAQ on their website, which seems to indicate you should use only the two SATA power plugs or only the three Molex connectors.
  4. Just figured I'd put this here in case anyone is interested. I am in the Barrie area and can meet, and can also come further south to, say, the Vaughan area if someone is interested. I also have a box from an ML350 Gen9 unit, so if you wanted it shipped somewhere in Canada we could possibly work something out, but it would be pretty expensive - this thing is heavy. This is used, about 3 years old, pulled in working condition from a production environment. It has an HP Support Pack assigned to it, giving it hardware warranty until June 2017.

     Form Factor: Tower, 5U
     Processor: 1 x Intel Xeon E5-2620 / 2 GHz (6-core)
     Cache Per Processor: 15 MB L3
     RAM: 56 GB (installed) / 384 GB (max), DDR3 SDRAM
     Storage Controller: RAID (Serial ATA-300 / SAS 2.0), PCI Express 3.0 x8 (Smart Array P420i)
     Server Storage Bays: 8 x hot-swap 2.5"
     Hard Drives: None
     Graphics Controller: Matrox G200
     Networking: 4 x Gigabit Ethernet
     Power: 2 x redundant 750W power supplies

     Note - does NOT include any hard drives. The locking mechanism on the door panel has been removed to make access easier. Does include all drive blanking panels for the drive cage, the server feet and two power cords. Cash only; if shipping, EMT preferred but could do PayPal. Contact me with any questions. I'm looking for about $1200 CAD for this but am open to negotiation.
  5. Anyone with some insight? Thanks!
  6. Sorry for delay... unraidbackup-diagnostics-20160406-1331.zip
  7. Have a basic setup and noticed the drives in the server won't spin down. I get the following in the log any time the spin-down command gets sent:

        Mar 31 15:35:51 UnRaidBackup kernel: mdcmd (1979): spindown 1
        Mar 31 15:35:51 UnRaidBackup emhttp: mdcmd: write: Input/output error
        Mar 31 15:35:51 UnRaidBackup kernel: md: disk1: ATA_OP e0 ioctl error: -5
        Mar 31 15:43:10 UnRaidBackup kernel: mdcmd (1980): spindown 4
        Mar 31 15:43:10 UnRaidBackup emhttp: mdcmd: write: Input/output error
        Mar 31 15:43:10 UnRaidBackup kernel: md: disk4: ATA_OP e0 ioctl error: -5
        Mar 31 15:43:27 UnRaidBackup emhttp: mdcmd: write: Input/output error
        Mar 31 15:43:27 UnRaidBackup kernel: mdcmd (1981): spindown 3
        Mar 31 15:43:27 UnRaidBackup kernel: md: disk3: ATA_OP e0 ioctl error: -5
        Mar 31 15:45:14 UnRaidBackup emhttp: mdcmd: write: Input/output error
        Mar 31 15:45:14 UnRaidBackup kernel: mdcmd (1982): spindown 0
        Mar 31 15:45:14 UnRaidBackup kernel: md: disk0: ATA_OP e0 ioctl error: -5
        Mar 31 15:46:00 UnRaidBackup kernel: mdcmd (1983): spindown 2
        Mar 31 15:46:00 UnRaidBackup emhttp: mdcmd: write: Input/output error
        Mar 31 15:46:00 UnRaidBackup kernel: md: disk2: ATA_OP e0 ioctl error: -5
        Mar 31 15:56:59 UnRaidBackup kernel: mdcmd (1984): spindown 1
        Mar 31 15:56:59 UnRaidBackup emhttp: mdcmd: write: Input/output error
        Mar 31 15:56:59 UnRaidBackup kernel: md: disk1: ATA_OP e0 ioctl error: -5

     These are SAS drives (WD2001FYYG) connected to an SM controller. The only thing I can think of is that the SAS drives don't support the command? Running 6.1.9.
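     A minimal sketch of how that theory could be checked from the console, assuming sg3_utils is available and with /dev/sdb as a placeholder for one of the WD2001FYYG drives (adjust to your own device):

        # ATA STANDBY IMMEDIATE (opcode 0xE0) - the command the spindown sends;
        # a pure SAS drive is expected to reject it just like the syslog shows.
        hdparm -y /dev/sdb

        # SCSI START STOP UNIT via sg3_utils - the SAS-native equivalent;
        # if this works, the drives are fine and only the ATA ioctl is rejected.
        sg_start --stop /dev/sdb     # spin down
        sg_start --start /dev/sdb    # spin back up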
  8. Hey guys, just got my hands on what I think is a BR10i card. These are some of the numbers on it: SAS3082e-R, L3-25116-01h, and it looks like the IBM FRU is 44E8690. Been doing some reading on this thread and it looks pretty straightforward. Just two questions: what is the latest firmware for this card in IT mode, and does it support the large drives? Thanks,
  9. Not fixed, not really. Anyone have more than two Windows guest systems running? I got my third loaded, but the second one was off. If I try to run all three guests at once, though, any one of them will just randomly turn off. In fact I actually had all three turn off on the last attempt. Bloody weird. I have no clue what is going on. Could this be a RAM issue? I have 16GB in my system, and between them the three guests would use about 14GB. With guests one and two running normally, when I check the stats it says 16GB of 16.3GB allocated. I assume a bunch is being used by dockers and Plex transcoding. So, are the VMs randomly powering off because I am out of memory?
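     A minimal sketch of how the memory theory could be checked from the console (the domain name Windows7-1 is just a placeholder for whatever the guests are called):

        free -m                                    # overall host memory and cache usage
        virsh dominfo Windows7-1 | grep -i memory  # memory assigned to / used by one guest
        dmesg | grep -iE "out of memory|oom"       # if the OOM killer is stopping guests, it shows up here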
  10. I may have it sorted out. I shut down one of the other VMs and started comparing: the new VM was by default using the i440fx-2.3 machine type, but my older ones were using the 2.2 version. I switched the new one down to 2.2 and I am actually getting through the install process now.
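      For anyone who wants to check or change this outside the GUI, a minimal sketch using virsh (NewVM is a placeholder domain name):

         virsh dumpxml NewVM | grep machine
         #   e.g.  <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
         virsh edit NewVM   # change machine='pc-i440fx-2.3' to 'pc-i440fx-2.2', save, then restart the VM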
  11. I currently have two Windows 7 VMs running, one using cores 6 and 7, the other using cores 4 and 5. I am trying to create a new VM (I've tried this with both Windows 7 and 8.1). I go and do the initial creation in unRAID, the VM starts up, and I start loading the four drivers from the virtio-win drive. Every time I then select the drive to install on (there's only one drive), the VM just turns off. Twice one of my other VMs also turned off. Any ideas what is going on? I have tried this VM with cores 2 and 3, 6 and 7, etc...
  12. He talks about using the local display for unRAID and the GTX for the VM. At about 6:05 he says that if you're "planning to use the unRAID desktop GUI, make sure you plug your monitor into the onboard graphics". Then at 6:32 you see the boot and what is definitely some sort of desktop GUI login screen. Again at around 16:20, you see more of this unRAID desktop GUI logon screen.
  13. Nothing confirmed, but it definitely looks like it. The way Linus was talking about needing the screen local for setup, it looks like we are getting a local GUI interface.
  14. And here is the latest video, where you get some decent screenshots of the 6.2 beta with dual parity support...
  15. "That means the pool is using ~550GB, should have about 450GB free. BTRFS raid works different." I understand what you're saying about RAID1 in BTRFS, it's pretty much the same as Windows Storage Spaces... but when I look at the stats:

      btrfs filesystem df:
         Data, RAID1: total=552.00GiB, used=479.66GiB
         System, RAID1: total=32.00MiB, used=96.00KiB
         Metadata, RAID1: total=3.00GiB, used=1.15GiB
         GlobalReserve, single: total=400.00MiB, used=0.00B

      I read this as having a 552GB partition of which 479GB is used. The 479GB used is correct; that does correspond to all the data on the cache pool. If I have 500GB still free, why doesn't it either show that, or why does "Data, RAID1..." read the way it does? I'm not saying you're wrong, it's just that the output on the page in unRAID seems confusing.
  16. "Using default settings with 4 x 500GB drives you would get a btrfs raid1 4 drive cache pool with 1TB usable space, if you only see 500GB something is wrong, check 'btrfs filesystem show' on the cache webpage for the number of devices in use." The issue as I see it is that a true RAID1 set is only two drives. Any more than that and you are using some combination of RAID1+0 or RAID0+1, or, if you are Intel, you just call it RAID1E. From what I have seen and read, using more than two drives at setup you still only get a two-drive mirror; then you manually have to run the command to rebalance it as raid10 (sketched below). But then there has been mention not to run the cache array as raid5 or raid10 yet. Here is what is on the Cache page:

      btrfs filesystem show:
         Label: none  uuid: 8eaec535-5a88-45fc-a40c-86442c538cf5
         Total devices 4  FS bytes used 480.81GiB
         devid 1 size 465.76GiB used 277.03GiB path /dev/sdl1
         devid 2 size 465.76GiB used 278.00GiB path /dev/sdo1
         devid 3 size 465.76GiB used 277.03GiB path /dev/sdm1
         devid 4 size 465.76GiB used 278.00GiB path /dev/sdp1

      btrfs filesystem df:
         Data, RAID1: total=552.00GiB, used=479.66GiB
         System, RAID1: total=32.00MiB, used=96.00KiB
         Metadata, RAID1: total=3.00GiB, used=1.15GiB
         GlobalReserve, single: total=400.00MiB, used=0.00B

      What I read from this is that I have a 500GB data partition....
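      A minimal sketch of the two commands being discussed, assuming the pool is mounted at the usual /mnt/cache (placeholder path, adjust as needed):

         # shows allocated vs. unallocated space per device, which is easier to
         # read than the plain "filesystem df" output above
         btrfs filesystem usage /mnt/cache

         # the manual rebalance to raid10 mentioned above (data and metadata);
         # only if you actually want to run the pool as raid10
         btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache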
  17. Ya, I just checked and I have 500GB and not 1TB, so definitely not raid10. I think something like this needs to be made clearer in the GUI as well. And if it is only recommended to run raid1 and not raid5 or raid10, make that clear.
  18. Just found this and it got me thinking: when I set up my Cache Pool, I used 4 x 500GB drives. But based on what you have said, Jon, is my setup, since I have made no changes other than the defaults, actually only using two drives in RAID1 and not using the others? I guess from what you are saying, I should really only be running with two drives at this point?
  19. Good point, something to keep an eye on when putting everything together. Guess I have been lucky till now.
  20. Received the cages, controller and cables today. Very well packaged and fast shipping. Thanks for a smooth transaction.
  21. "Are they all using XFS (or BTRFS)? I suspect it's localized to SAS2LP-MV8 cards + ReiserFS + write operations + (possibly) some additional factor in my setup." I will ask and find out... Confirmed, all three of these new systems are running XFS.