HP MS Gen8 Rebuild



G'Day Folks,

WOW, it's been 5-6 years since I last looked at unRAID; my first attempt was way before features such as KVM (Kernel-based Virtual Machine) and Docker existed. Dang, look how much you've grown, unRAID!!

 

Due to my own OCD, or should I say my UCD, I am again reviving an HP Gen8 with a few modifications; my last build was back in 2014 for MAIOS HERE.

 

This HP ProLiant MicroServer Gen8 originally had the following upgrades inside the HP case, until the PSU went POP while preclearing a WD 12TB HDD via USB. Why the PSU blew up is still a mystery, but I wasn't going to shell out for another PSU, so I decided to butcher the Gen8 and move it to another case I had lying around:

  • CPU <> Xeon E3-1265L V2 with a modified Noctua NH-L9i HSF
  • RAM <> 16GB ECC RAM
  • Real RAID <> HP Smart Array P420 flashed to IT mode, with 2GB FBWC and BBU, installed in PCIe slot 1
  • Case mod <> added an Addonics AESN5DA35-A Snap-In Disk Array PRO (holds 5 drives)
  • Swapped the SATA 1 cabling with the ODD port to allow booting off an SSD, since BIOS limitations mean HP doesn't allow booting a HDD off the ODD port. This was done previously and is no longer required since moving to a more flexible case and cabling.
  • PSU <> Delta 400W PSU, replaced with a Corsair 650W

 

Originally I was using ESXi and ran unRAID alongside a few other VMs like Xpenology and FreeNAS. But honestly, thanks to one dude, "SpaceInvaderOne", and his awesome tutorials showing how good unRAID has become, I want to take another look, rebuild my crippled NAS with the current hardware, and try out unRAID again.

 

My primary use is centralized storage for my household of 6 users, including Plex Media Server storage. I am not planning on doing much with Docker yet; I will have camera capture / surveillance, shared folders across the 6 users, and backups of Apple devices. Possibly build pfSense in a VM to replace the RasPi running Pi-hole.

 

I'd planned to use a 250GB SSD as a cache drive, but I'm thinking of replacing it with 2 x 480GB SSDs in RAID 0, as I've seen 2 CRC errors on the single 250GB SSD. Not sure if this is due to cabling or the Samsung 840 just getting old, so I may as well dump it for the 2 x Intel S3500 480GB units.
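For what it's worth, those CRC errors live in SMART attribute 199 (UDMA_CRC_Error_Count), and a rising raw value usually points at the cable or connector rather than the drive itself. A rough sketch of how to pull that counter out (the attribute line below is sample data, not from my drive):

```shell
# SMART attribute 199 counts interface CRC errors; the raw value is the
# last column. This uses a sample line so it runs anywhere -- on the real
# box you'd feed it the output of: smartctl -A /dev/sdX
sample='199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 2'
crc_count=$(printf '%s\n' "$sample" | awk '$1 == 199 { print $NF }')
echo "CRC errors: $crc_count"
```

If the count stops climbing after reseating or replacing the SATA cable, the drive itself is probably fine.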

 

Eventually I want to use all 12TB drives, which will last a while. I've just picked up 2 of them from Amazon recently; the plan is to sort and store all my folders and data on one of them before moving files to the NAS. Yep, I've data-hoarded across multiple drives, with backups and duplicates that I need to consolidate!

 

List of HDDs that need consolidating:
3 x 3TB WD Green; 1 x 3TB Seagate: these are already free of data and ready to sell off to fund additional larger HDDs.

 

Still to sort and wipe:
6 x 4TB Seagate (4 of them are new, only powered up a few times if that, with no data written or preclear run yet, so they could potentially be added to unRAID with one of the 12TB drives as the parity drive); sell off unused drives to fund additional larger HDDs.
1 x 4TB WD; 1 x 5TB Toshiba; 1 x 8TB Seagate; once all data is moved to unRAID, sell off to fund additional larger HDDs.

 

Due to some limitations of the HP Gen8 board (SATA ports, BIOS, etc.), my build plan includes additional backup options. The loose plan is to use the 8TB as the parity drive and 3 x 4TB as data drives in unRAID, then back that data up to another single drive outside the array, using rsync or similar to keep it updated, eventually replacing the 8TB with a 12TB or larger.

 

I'm wanting to use the B120i onboard controller on the HP for this, as it's a crap controller where only the first 2 ports are SATA III 6Gb/s and the others are SATA II 3Gb/s. Hence I want to use the HP P420 Smart Array for unRAID, giving me 6 drives total towards unRAID plus 2 spare ports.

 

Something like this:

B120i --> 2 x SATA III 6Gb/s ports: 2 x 12TB in a mirror for backing up unRAID
B120i --> 2 x SATA II 3Gb/s ports: drives for slow storage, or simply leave them unused?

ODD SATA port: either use it for backup purposes, add it to the above as a new pool, or don't use it at all?
HP Smart Array P420 --> Port 1: 4 drives, 1 x 8TB + 3 x 4TB
HP Smart Array P420 --> Port 2: 2 x 480GB SSD cache, leaving 2 SATA ports for expansion, a hot spare, or preclearing

 

Feel free to advise on the best options with the current hardware. For those keen to see what the transplant looked like, pix are HERE.

 

PEACE

Kosti


So, looking at the logs, I see the following filling them up:

Jun  8 16:28:53 Medusa kernel: ACPI Error: SMBus/IPMI/GenericSerialBus write requires Buffer of length 66, found length 32 (20180810/exfield-390)
Jun  8 16:28:53 Medusa kernel: ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20180810/psparse-514)
Jun  8 16:28:53 Medusa kernel: ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)

Checking what is going on points to a few pages, in particular this one HERE.

I'm using the latest BIOS and all updates on the HP motherboard:

sudo dmidecode -s bios-release-date
04/04/2019

HP ProLiant MicroServer Gen8, BIOS J06 04/04/2019

I believe the version of unRAID is 6.8.3.

Is there anything I can do to stop these from filling up the logs?

PEACE

Kosti


Thanks

 

I'm sure I read that thread, and I am also running the latest BIOS, so the other option, which I missed, was to add this line to the /boot/config/go file:

 

rmmod acpi_power_meter

Will try this and see if it suppresses the spamming of the log file.
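For anyone following along, the go-file change can be made from the console like this. The snippet defaults to a temp file so it's harmless to run anywhere; on the server GO_FILE would be /boot/config/go:

```shell
# Append the acpi_power_meter workaround to unRAID's go file, but only
# if it isn't already there (so re-running this is idempotent).
GO_FILE="${GO_FILE:-$(mktemp)}"   # on unRAID: GO_FILE=/boot/config/go
grep -qxF 'rmmod acpi_power_meter' "$GO_FILE" 2>/dev/null \
  || echo 'rmmod acpi_power_meter' >> "$GO_FILE"
cat "$GO_FILE"
```

The go file runs at every boot, so the module gets unloaded again after each restart without further action.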

 

Will let you know how it goes

PEACE

Kosti

9 hours ago, Kosti said:

I'm sure I read that thread, and I am also running the latest BIOS, so the other option, which I missed, was to add this line to the /boot/config/go file:

rmmod acpi_power_meter

OK, added this to my go file and no more errors seen; hope this workaround isn't really required long-term..?

 

Now the next issue is NTP; it seems I am not getting the right sync, or at least that's what the logs are telling me:

 

Jun  9 19:32:36 Medusa ntpd[1622]: ntpd [email protected] Fri Aug  2 18:40:41 UTC 2019 (1): Starting
Jun  9 19:32:36 Medusa ntpd[1622]: Command line: /usr/sbin/ntpd -g -u ntp:ntp
Jun  9 19:32:36 Medusa ntpd[1624]: proto: precision = 0.045 usec (-24)
Jun  9 19:32:36 Medusa ntpd[1624]: basedate set to 2019-07-21
Jun  9 19:32:36 Medusa ntpd[1624]: gps base set to 2019-07-21 (week 2063)
Jun  9 19:32:36 Medusa ntpd[1624]: Listen normally on 0 lo 127.0.0.1:123
Jun  9 19:32:36 Medusa ntpd[1624]: Listen normally on 1 lo [::1]:123
Jun  9 19:32:36 Medusa ntpd[1624]: Listening on routing socket on fd #18 for interface updates
Jun  9 19:32:36 Medusa ntpd[1624]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jun  9 19:32:36 Medusa ntpd[1624]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized

I had a look at the date settings in unRAID and changed the NTP server to oceania.pool.ntp.org, but this made it worse? Even setting 0.pool.ntp.org, 1 and 2 respectively didn't seem to make any improvement.

 

Jun  9 20:06:48 Medusa ntpd[17600]: ntpd exiting on signal 1 (Hangup)
Jun  9 20:06:48 Medusa ntpd[17600]: 127.127.1.0 local addr 127.0.0.1 -> <null>
Jun  9 20:06:48 Medusa ntpd[17600]: 162.159.200.1 local addr 192.168.1.122 -> <null>
Jun  9 20:06:48 Medusa root: Stopping NTP daemon...
Jun  9 20:06:49 Medusa ntpd[18559]: ntpd [email protected] Fri Aug  2 18:40:41 UTC 2019 (1): Starting
Jun  9 20:06:49 Medusa ntpd[18559]: Command line: /usr/sbin/ntpd -g -u ntp:ntp
Jun  9 20:06:49 Medusa root: Starting NTP daemon:  /usr/sbin/ntpd -g -u ntp:ntp
Jun  9 20:06:49 Medusa ntpd[18561]: proto: precision = 0.050 usec (-24)
Jun  9 20:06:49 Medusa ntpd[18561]: basedate set to 2019-07-21
Jun  9 20:06:49 Medusa ntpd[18561]: gps base set to 2019-07-21 (week 2063)
Jun  9 20:06:49 Medusa ntpd[18561]: Listen normally on 0 lo 127.0.0.1:123
Jun  9 20:06:49 Medusa ntpd[18561]: Listen normally on 1 br0 192.168.1.122:123
Jun  9 20:06:49 Medusa ntpd[18561]: Listen normally on 2 lo [::1]:123
Jun  9 20:06:49 Medusa ntpd[18561]: Listening on routing socket on fd #19 for interface updates
Jun  9 20:06:49 Medusa ntpd[18561]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
Jun  9 20:06:49 Medusa ntpd[18561]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized

I've tried using several different NTP servers but get the same message in the system log every time I apply the new settings. I checked the output from the console:

root@Medusa:~# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 LOCAL(0)        .LOCL.          10 l  37m   64    0    0.000    0.000   0.000
*time.cloudflare 10.26.8.4        3 u   37   64  377    9.078    0.673   2.823
-tick.chi1.ntfo. 206.55.64.77     3 u   31   64  377  201.290   -1.028   0.359
+ntp3.ds.network 162.159.200.1    4 u   35   64  377   57.536   -0.631   2.780
+ec2-13-55-50-68 203.206.205.83   3 u   28   64  377    9.605    0.734   1.597
root@Medusa:~#
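For what it's worth, that ntpq output actually looks healthy: a `*` in the first column marks the peer ntpd has selected for sync (time.cloudflare here), and reach 377 means the last eight polls all succeeded. The "Clock Unsynchronized" lines are typical right at ntpd startup, before the first sync completes. A quick sketch of checking for a selected peer, using a line from the output above as sample input:

```shell
# ntpq -p prefixes the currently selected sync peer with '*'.
# Sample line taken from the post; on the real box you'd pipe in
# the live output of: ntpq -p
sample='*time.cloudflare 10.26.8.4        3 u   37   64  377    9.078    0.673   2.823'
if printf '%s\n' "$sample" | grep -q '^\*'; then
    status="synced"
else
    status="no sync peer selected"
fi
echo "$status"
```

So despite the log noise, the clock is being disciplined once ntpd settles.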

 

14 hours ago, Kosti said:

OK added this to my go file and no more errors seen, hope this isn't really required..?

 

Required until HP fixes it, but don't hold your breath.

 

14 hours ago, Kosti said:

I've tried using several different NTP servers but get the same message in the system log every time I apply the new settings. I check the output from the console

 

My HP workstations all complain about it, but I just ignore them as it doesn't seem mission-critical for what I do.


Thanks mate, you're a champ for taking the time to chime in, really appreciate it.

 

I created another bunch of questions in the general section HERE, thinking this area wasn't for such questions. Shame, as it's had 85 views and not much in the way of clues as to why I experienced a catastrophic failure during preclear.

 

Don't know what your name is, dude, can't work out the emoji LOL
