MAIOS - My All In One Server - Water cooled



G'day All! As you may have guessed, I come from a land down under and I have UDC! I'm not the best photographer or story teller, but I have completed my first DIY unRAID build, so I'll share my journey

 

First up I have the parts list, with some links to where these were bought or what they look like. Some of the linked prices may be in either AUD or USD; I'll go through the numbers later.

 

As you will find, if something is worth doing, I tend to go to extremes even when it's really not necessary; that's just me!

 

So here we go..

 

Parts List - some have seen this already - like I said, overkill, and it probably sucks too much power!

 

Here is the new list of parts that I have purchased and installed in the case

 

The pictures and some steps taken to put MAIOS together

 

The Case - Xigmatek Elysium

IMG_0709_zps4143b2ef.jpg

DSC00497_zps2fea34b9.jpg

 

Quick Mod - Larger Side Window

IMG_0708_zps656f6635.jpg

 

The motherboard - Supermicro X8DTH-iF - bought reasonably cheap with RAM and 2xCPU (but they were all sold off to fund better gear)

SupermicroX8DTH-iF_zpsfd2c4abf.jpg

 

CPU - Intel Xeon X5670

X5670_zps575a0ed1.jpg

 

HDD Caddy Units - 3 x Addonics AESN5DA35-A Snap-In Disk Array PRO

IMG_0713_zps45350dcf.jpg

IMG_0714_zps97f20b31.jpg

IMG_0716_zpsbd89da2b.jpg

IMG_0718_zps3426891f.jpg

 

Water Cooling Gear

DSC00508_zps96da6419.jpg

DSC00500_zps8fda3a0f.jpg

DSC00505_zpsce6a4ef4.jpg

DSC00515_zps865ce4ca.jpg

DSC00517_zps6adabab2.jpg

DSC00583_zps8b92f1d7.jpg

DSC00591_zpsacd88cc8.jpg

DSC00609_zps99c2d6ce.jpg

 

RAM

DSC00543_zpsebf92d8a.jpg

DSC00542_zpsaa07a524.jpg

 

Fans

DSC00504_zps3b8d87e3.jpg

DSC00502_zps93b05b1c.jpg

DSC00645_zps2dfa3535.jpg

DSC00647_zps63d8fbe9.jpg

 

PSU

DSC00520_zpsa1868544.jpg

 

Raid Card

DSC00509_zps2b717720.jpg

 

I began by getting the reservoir mounted, making sure there was enough room for the caddies and that the PSU cabling didn't interfere

DSC00526_zpse4a44459.jpg

DSC00527_zpsb2d05683.jpg

 

Next, fitting the board in and getting the radiator ready with the fans

IMG_0719_zps1c212773.jpg

DSC00525_zpsb68b1a90.jpg

DSC00529_zps6a2058b0.jpg

DSC00531_zps6c4b43ed.jpg

 

Now that I knew the space I needed for the Rad, I made up some spacers and got ready for the final mounting of the RAD & FANS

DSC00551_zpsde240a75.jpg

DSC00552_zpsbddcae3d.jpg

DSC00553_zpse6f46587.jpg

DSC00554_zps2d3c7f59.jpg

DSC00555_zps62c2f081.jpg

DSC00556_zps9de45816.jpg

DSC00561_zps8ea369cf.jpg

DSC00562_zps279a9c25.jpg

 

Now that the main water components were installed it was time for the tubing

 

Here is the tool I used to cut those lengths

DSC00584_zps431b25a4.jpg

 

From RES to first CPU block

DSC00585_zps41a8ed02.jpg

DSC00590_zps58b51e44.jpg

 

From RAD back to RES

DSC00587_zps0cc6ce39.jpg

DSC00588_zps590cb0ed.jpg

 

CPU Block to next CPU Block

DSC00592_zps7a111c5e.jpg

DSC00593_zps70d5e867.jpg

 

How did I get 1/2 inch barbs to fit on 5/8 inch tubing? Easy

DSC00594_zps00df6508.jpg

DSC00595_zpsac308299.jpg

DSC00596_zps2ba54023.jpg

DSC00597_zps1b4b012e.jpg

 

Finishing off the loop - used some cable ties just to keep some pressure and hold things in place until I completed it

DSC00598_zps058826ef.jpg

DSC00650_zps79df4134.jpg

DSC00651_zpsb5d29fdf.jpg

 

Added in a flow meter - at this stage I'd done some other mods to the IOH heatsink and also added RAM fans over all 24 sticks of RAM

DSC00652_zps2cd4893a.jpg

DSC00654_zpsf22b961e.jpg

DSC00653_zps1ffd05c6.jpg

 

Taking a step back from the pictures above: I'd removed all the motherboard heatsinks, cleaned them, re-installed them with fresh thermal paste, and added the CPUs & waterblocks

 

IOH chipset - these get really hot, hence the fan unit I also added on it

DSC00532_zps135e77da.jpg

DSC00533_zpsa020d31c.jpg

DSC00534_zps8216f841.jpg

DSC00535_zpsce0f527e.jpg

 

Power regulator heatsinks removed and cleaned

DSC00537_zps6a92f494.jpg

DSC00536_zps75e67bb6.jpg

 

Thermal Paste I used

DSC00545_zpsfc822b10.jpg

 

CPU Water Blocks

DSC00538_zps2d6cfc35.jpg

DSC00539_zps8d0174b6.jpg

DSC00540_zps446e6774.jpg

 

RAID card heatsink removed, cleaned, sanded, and refitted

DSC00546_zpscb8b6e1b.jpg

DSC00547_zps70a41c66.jpg

DSC00550_zps828f7678.jpg

DSC00563_zps2fbc6ab7.jpg

 

Somewhat Completed and ready for water testing

DSC00677_zpsb663413d.jpg

DSC00658_zpsd971aec7.jpg

DSC00657_zps696939b4.jpg

 

It runs - little video

http://vid161.photobucket.com/albums/t214/Kostiz/Server/IMG_0910_zpstdbjdpiz.mp4

 

HDD added in

Pictures to come

 

Software loaded

IMG_0930_zpsdf6a1db6.jpg

 

I will add in a cost breakdown soon; I dread to think what it will come to once I add it all up  :'(

 

Link to comment

unRAID boots from a flash drive. It unpacks the OS from that flash drive, and then the OS runs completely in RAM. The unRAID license is tied to the GUID of that flash drive.

 

Your RAID0 SSDs would be suitable for the cache drive though.

 

As for the 4TB drives, one could be a data drive and the other should be parity, since parity must be at least as large as the largest data drive.

 

Welcome and don't hesitate to ask lots of questions.

Link to comment

VERY nice setup => definitely overkill on the power supply, but the only thing that will hurt is your electric bill, as it will be running well below the 80+ certification level => the bottom of the 80+ range (20%) is still FAR more power than your system will draw, so it could drop as low as the mid-60's in efficiency.    Not a big deal, but I'd have used a PSU with perhaps half that load rating.

 

Link to comment

Also, pretty sure unRAID doesn't have drivers for bluray burners. You can run VMs within unRAID and use it there maybe.

No doubt I missed the point and didn't pay attention to the "All In One Server". Obviously VMs are what this hardware will be about.

 

Are you planning to run unRAID as a guest or as the host?

Link to comment

Wow!

 

Not sure what you have in mind with this beast, but you are ready for it.

 

unRAID is actually pretty CPU-insensitive, but you will be able to run VMs out the ying yang.

 

I actually tend to try to talk people out of water cooling, as unRAID doesn't tax the CPU, and the risk of a leak over 5 (or more) years of 24x7 operation seems unjustified. But what you have seems very heavy duty and the risk should be low.

 

The area where you seem light is in the disks themselves! I would have thought you'd have a couple of 8T Helios in there! ;)

 

Enjoy your array. Make sure to read my "What is parity" (see my sig) writeup and really understand how this system works. So many users make silly mistakes with new arrays by not understanding some of the basics. A common misconception is that the parity disk is sufficient to reconstruct a failed disk. It is not. Parity PLUS every other disk in the array must be present to do a rebuild. If you get a red-ball, do NOT assume a disk has failed. Ask on the forums for advice before attempting a recovery, which could easily make things worse not better.
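To make the "parity PLUS every other disk" point concrete, here's a toy sketch of single parity in plain sh, with one byte standing in for each disk (the numbers are illustrative only):

# toy single-parity example: one byte per "disk"
d1=170 ; d2=85 ; d3=51
p=$(( d1 ^ d2 ^ d3 ))        # parity = XOR of ALL the data disks
# rebuilding a failed disk 2 needs parity plus every surviving data disk
rebuilt=$(( p ^ d1 ^ d3 ))
echo "rebuilt d2 = $rebuilt (should equal $d2)"

Drop any one of p, d1 or d3 and d2 is unrecoverable, which is exactly why a second failure during a rebuild is fatal.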

 

You have some work in front of you, take your time. And make sure you have data backups of everything, at least until the array is thoroughly burned in and you are confident in it. Remember, unRAID may make losing data less likely, but it is not a backup solution so you should always have backups at least of your critical data you can't afford to lose.

Link to comment


unRAID boots from a flash drive. It unpacks the OS from that flash drive, and then the OS runs completely in RAM. The unRAID license is tied to the GUID of that flash drive.
 
Your RAID0 SSDs would be suitable for the cache drive though.
 
As for the 4TB drives, one could be a data drive and the other should be parity, since parity must be at least as large as the largest data drive.
 
Welcome and don't hesitate to ask lots of questions.
 
I intend to ask heaps; I've been reading and reading until my eyes are about to pop!
 

VERY nice setup => definitely overkill on the power supply, but the only thing that will hurt is your electric bill, as it will be running well below the 80+ certification level => the bottom of the 80+ range (20%) is still FAR more power than your system will draw, so it could drop as low as the mid-60's in efficiency.    Not a big deal, but I'd have used a PSU with perhaps half that load rating.
 
Yeah, it was some of the stuff I had sitting unused, especially the PSU, so considering I plan to add as many large drives as I can, it may come in handy
 

Also, pretty sure unRAID doesn't have drivers for bluray burners. You can run VMs within unRAID and use it there maybe.
 
Planned for ripping my movie collection from disc; it was also another spare unit I had lying around. BTW, I changed this to another brand, an ASUS Blu-ray burner, only 'cause the LG will go into another project I have brewing  :P
 


Also, pretty sure unRAID doesn't have drivers for bluray burners. You can run VMs within unRAID and use it there maybe.
No doubt I missed the point and didn't pay attention to the "All In One Server". Obviously VMs are what this hardware will be about.
 
Are you planning to run unRAID as a guest or as the host?
 
YES, this machine is planned to do a whole lot more via VMs, for the following
 
    Storage - the main one is for photos, as with digital pictures/video these days you seldom delete or print, so they need to be stored safely
 
    Plex/XBMC - wanting a simple solution for all the WiFi devices, including ATV, XIOS and other media-capable devices, to access and play movies and TV shows. Transcoding is a must for the various file formats
 
    Torrents/downloads - A place where they can be downloaded, scanned and stored as required
 
    TV - Currently have PayTV (too expensive) and wish to move to a simpler and/or less expensive option for TV shows, news and other FTA programs. A PVR option as well
 
    Security - Surveillance: record and capture from IP cameras
 
    Remote Access - Ability to securely access files or manage the AIO machine remotely
 
I am also running unRAID as a guest VM in VMware ESXi 5.5 via a vmdk that was uploaded here: http://lime-technology.com/forum/index.php?topic=26639.255
 

Wow!
 
Not sure what you have in mind with this beast, but you are ready for it.
 
unRAID is actually pretty CPU-insensitive, but you will be able to run VMs out the ying yang.
 
I actually tend to try to talk people out of water cooling, as unRAID doesn't tax the CPU, and the risk of a leak over 5 (or more) years of 24x7 operation seems unjustified. But what you have seems very heavy duty and the risk should be low.
 
The area where you seem light is in the disks themselves! I would have thought you'd have a couple of 8T Helios in there! ;)
 
Enjoy your array. Make sure to read my "What is parity" (see my sig) writeup and really understand how this system works. So many users make silly mistakes with new arrays by not understanding some of the basics. A common misconception is that the parity disk is sufficient to reconstruct a failed disk. It is not. Parity PLUS every other disk in the array must be present to do a rebuild. If you get a red-ball, do NOT assume a disk has failed. Ask on the forums for advice before attempting a recovery, which could easily make things worse not better.
 
You have some work in front of you, take your time. And make sure you have data backups of everything, at least until the array is thoroughly burned in and you are confident in it. Remember, unRAID may make losing data less likely, but it is not a backup solution so you should always have backups at least of your critical data you can't afford to lose.
 
Thanks! - A few things have done my head in already; the biggest one is SHARES. I still can't wrap my brain around them and the split levels etc.
 
Parity I think I get, but I've run into some problems and need some ideas
 
I have 2 SSDs, both 600GB, on the onboard SATA controller. I will use one as the VM datastore drive and keep the VMs on it for fast spin-ups; the other I wanted to use as a cache drive for unRAID. Now I also just found a spare 500GB WD Black drive, which I installed on the onboard SATA controller as well and have ready for preclear, but I have a couple of questions
 
Which drive is suitable for cache?
  • SSD or mechanical - if SSD, will its life be reduced if I am constantly writing, wiping and reading from it?
  • I also read that the motherboard's onboard SATA should be faster than a PCIe card, in this case my RAID card?

 

  • My other issue is I am having trouble trying to pass through the onboard SATA controller to the VM; is it because the datastore drive is on the same controller?

 

Does that make any sense?

 

As for the 8T Helios, these are well priced, but Seagate just aren't doing anyone favours with their current failure rate. This technology is bloody awesome; soon datacentres will be a thing of the past as everyone will have heaps of storage

Even though they are priced at about $260 USD, by the time they get here it's cheaper to buy WD Red 4TBs  :o

 

Nice rig.

Thanks!

 

Now I've also noticed a few flaws in the caddy designs

 

The temps during the preclear.sh process seemed high compared to what average use is showing in other people's logs

 

My 3TB came in at about 38°C and the 4TB came in at about 42°C, so the plan is now to try reversing the fans so that they blow air into the caddy instead of pulling it out. It will be a PITA, but I need to try it as I do not like these temps at all

 

The other is how hot the RAID card heatsink gets, so I plan to dig out an old VGA card that had a simple fan on it to help keep it cool. I did lap/sand the heatsink down to make better contact, but I've read these still get hot, so adding a little fan may help keep the temp regulated. I've seen there may be a provision on the card for a 2-pin fan header, so I will measure the voltage coming from it and see if it's suitable for a small fan.

 

Thanks guys, I really need some help on the cache side and ideas on how to set up shares.

 

Cheers

Kosti

Link to comment

As for the 8T Helios, these are well priced, but Seagate just aren't doing anyone favours with their current failure rate. This technology is bloody awesome; soon datacentres will be a thing of the past as everyone will have heaps of storage

 

HGST makes the HelioSeals. They are filled with helium to reduce friction. They are a tad more expensive than the Seagate SMRs. ;)

 

HGST 8TB HelioSeal

Link to comment

My other issue is I am having trouble trying to pass through the onboard SATA controller to the VM; is it because the datastore drive is on the same controller?

 

My understanding is that you have to pass through the whole controller, so wherever you put your datastore drives, you can't pass those SATA ports through to a VM.

 

Certainly this was the case when I tried ESX...  I didn't have fast drives for datastores so I just used an ancient LSI card for them....  gave up with ESX in the end as I was getting purple screens of death all the time and I didn't have the inclination to work out why.

Link to comment



My other issue is I am having trouble trying to pass through the onboard SATA controller to the VM; is it because the datastore drive is on the same controller?

 

My understanding is that you have to pass through the whole controller, so wherever you put your datastore drives, you can't pass those SATA ports through to a VM.

 

Certainly this was the case when I tried ESX...  I didn't have fast drives for datastores so I just used an ancient LSI card for them....  gave up with ESX in the end as I was getting purple screens of death all the time and I didn't have the inclination to work out why.

 

Hey Mate

Yes, it seems that way, but I was reading a thread on here about XenServer, which appears to be able to share single ports off a controller rather than the whole controller, which is really good

 

I've got ESXi up and running and unRAID working, so next will be to assign the drives

 

My plan was to use the 3 x 3TB as storage, one 4TB as parity and the other 4TB as a backup for the parity (is that a waste?). Then use the 500GB as a cache drive. That leaves me with one spare 600GB SSD, which I am not sure is a good idea to use as a cache drive due to the amount of writing and purging that will be done on it; I figured it would reduce its life?

 

Also I am getting a weird display in unmenu with duplicated drives?

unmenumain_zps1ba49511.png

 

It's about 32°C today and my house is hot. The HDDs are hot as well, and reversing the fans around on the caddy didn't really reduce the temps at all  :(

 

My thermal gun reads them at 32°C but unmenu reads them at 39-40°C

 

Cheers

Kosti

Link to comment

My plan was to use the 3 x 3TB as storage, one 4TB as parity and the other 4TB as a backup for the parity (is that a waste?)

 

It's not necessary to do that.  But if you are, set parity up as a RAID-1 array of the two 4TB drives.  But there's really no particular advantage to that -- it would give you an extra layer of protection for ONE drive, but all your others would still have the same single-failure level of fault tolerance.    Your parity drive is already protected from failure by the other drives.

 

Link to comment

Power consumption would be interesting, but I doubt it's too bad.

 

The X5670s have a 95W TDP, but will pull FAR less than that when not under load.

 

Nevertheless, it's true that the newer generations are appreciably more power-efficient.  I just built my wife a new system last month with a Haswell Core i7-4790, 16GB of RAM, and a 480GB SSD.  The system draws about 35 watts in normal operation, INCLUDING a 23" touch screen display  :)    Ramping it up to full CPU load it tops 100w, but that's a VERY rare occurrence.    ... and the equivalent Xeon would draw even less  8)

 

Link to comment

My plan was to use the 3 x 3TB as storage, one 4TB as parity and the other 4TB as a backup for the parity (is that a waste?)

 

It's not necessary to do that.  But if you are, set parity up as a RAID-1 array of the two 4TB drives.  But there's really no particular advantage to that -- it would give you an extra layer of protection for ONE drive, but all your others would still have the same single-failure level of fault tolerance.    Your parity drive is already protected from failure by the other drives.

 

Yep, makes sense, but if the 4TB goes I don't have a replacement immediately, which is no huge problem as I can afford it to be down for a few days.

 

OK, so what's the recommendation I should lean towards to get the best performance and basic protection from what I have at my disposal to build my storage array?

 

  • 3 x 3TB WD Red (precleared - I assume passed?)
  • 2 x 4TB WD Red (precleared - I assume passed?)
  • 2 x 600GB Intel SSD (one used for the VM/ESXi datastore, one unused)
  • 1 x 1TB Samsung (will be spare once I move the data off) (has not been precleared)
  • 1 x 3TB WD Green (will be spare once I move the data off) (has not been precleared)
  • 1 x 4TB WD MyBook (will be spare once I move the data off) (has not been precleared)
  • 1 x 500GB WD Blue (precleared - I assume passed?)

 

One thing that concerns me is the write performance of some of the older drives; when the 500GB drive finished preclear, the read/write results were pretty average, but OK for an older drive

500GB HDD Preclear Successful
... Total time 7:50:38
... Pre-Read time 2:00:39 (69 MB/s)
... Zeroing time 1:56:12 (71 MB/s)
... Post-Read time 3:52:47 (35 MB/s)

 

Cheers

Kosti

Link to comment

Write performance for any older drive with lower areal density than the new 1TB/platter (or more) drives will be significantly below what's possible with new drives.    This isn't a significant problem in most applications; but if you want the absolute best write performance, then I'd stay with drives that are 1TB/platter and use your older drives as backups (that's what I do).

 

Link to comment

Appreciate the feedback Garycase!

 

I will use a 1TB drive instead and find a use for the 500GB somewhere else. As for the drives showing up as sd1, that was explained when I posted it in another thread, though I should have just kept it in here

 

Once I finish the preclear of the 1TB I will start the array - Hooray!

 

Thanks again!

Kosti

Link to comment

My plan was to use the 3 x 3TB as storage, one 4TB as parity and the other 4TB as a backup for the parity (is that a waste?)

 

It's not necessary to do that.  But if you are, set parity up as a RAID-1 array of the two 4TB drives.  But there's really no particular advantage to that -- it would give you an extra layer of protection for ONE drive, but all your others would still have the same single-failure level of fault tolerance.    Your parity drive is already protected from failure by the other drives.

 

 

Don't raid-1 your parity drive. That's a waste of resources and it's going to slow down all writes to the array.

If you want speed, use RAID 0 with a hardware controller.

If you want redundancy or a warm spare, then prepare it with preclear and spin it down manually with hdparm -y /dev/disk/by-id/(drivemodelserial) in the /boot/config/go script.
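For example, a warm-spare line at the end of the go script might look like this (the device id below is a made-up placeholder; substitute the real node from /dev/disk/by-id/ for your spare):

# /boot/config/go -- spin the precleared warm spare down at boot
hdparm -y /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-PLACEHOLDER0

hdparm -y puts the drive straight into standby, so it sits idle and cold until you need it.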

 

When you have an issue, the drive will already be in the machine, precleared and ready to go.

Stop the array, re-assign the drive and let it rebuild.

RAID1 on parity is going to hold back everything in the array unless you use a caching raid controller.

The parity drive is no more important than any other data drive.

The parity drive is (sorta) protected by all the data drives, i.e. you can rebuild parity at any time from all the data drives.

A single failed data drive is protected by the other data drives and parity.

The key would be, how fast you are able to acquire a replacement drive and rebuild it.

Using modern 1TB-platter drives provides the speed to do a parity sync or recreate a drive within a reasonable amount of time. I wouldn't waste the electricity or SATA slot on old/slow drives.
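As a rough back-of-the-envelope (assuming ~130MB/s averaged across the whole disk; that's an assumed figure, not a measurement), a 4TB parity sync comes out to:

# capacity / average throughput: 4,000,000 MB at an assumed ~130 MB/s
echo $(( 4000000 / 130 / 3600 )) hours    # => 8 (integer maths; really about 8.5 hours)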

 

FWIW, I would suggest acquiring and using the HGST 4TB 7200 RPM drives for a parity drive.

They are NAS-rated drives and have good speed.

For one or two write streams it will be a small boost in writes, but once you write a few things simultaneously the extra speed of the drive helps.

 

The HGST 4TB 7200 RPM drives get about 160-180MB/s on the outer tracks.

If money is no object, the HGST 6TB 7200 RPM drives get about 225MB/s on the outer tracks.

I'm really pleased with these two model of drives.

The other speedy drives are the Seagate 3TB 7200 RPM drives.

When I get a chance I plan to RAID0 a pair and use them for Parity and relegate all 6TB drives for data.

 

With all that horsepower in the machine, I would use one of the SSD's as the cache and get a fast parity drive set up.

With the right configuration you can burst at almost 115MB/s (essentially the ceiling of gigabit Ethernet).

 

With today's SSDs, as long as the drive is not 80% full, there are plenty of cells for re-writes. You can write over 20GB a day and it will still last over 5 years.
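That 20GB/day figure is easy to sanity-check (actual endurance ratings vary by model, so treat it as a ballpark):

# total writes over 5 years at 20 GB/day
echo $(( 20 * 365 * 5 )) GB    # => 36500 GB, about 36.5 TB written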

 

In past years, we used to recommend that people purchase a cache drive equivalent of the parity drive.

In a failure condition you could move the data off the cache drive and re-use it for parity or any other replacement.

 

With dockers and apps folders, that changes things a bit, so I might just use the SSD and have a regular backup schedule to the array.

 

 

In any case, that's a real sweetheart of a setup. Even I'm sweating it...  ;D

Although these days, I've come to the conclusion that a few smaller machines are easier to maintain.

Link to comment

Thanks WeeboTech, really appreciate you taking the time to provide some ideas.

 

The original plan was for Toshiba drives, but the WD Reds came up, so I am stuck with them for now. Sadly we don't get the good HDD deals here in OZ, so we either pay a premium for shipping or look for alternatives like used drives. In fact I may be grabbing 2 more 3TB Reds, so that will fill one caddy up nicely

 

If I can sell the current drives and grab some 8TB ones without having to add in too much, then I may go down that path in a few months. But it's too early for these helium drives, and they may not be suitable for something like this?

 

Also the heat of my drives is causing some concern, as our hot summer temps are making them hit 40°C, and the wife is not happy with the room heating up, so the quicker I can move it to the man cave the better for all at home  :P

 

For now the plan is this

3 x 3TB reds for all storage

1 x 4TB red for parity

1 x 1TB Samsung F3 for Cache

 

Leave the 4TB as a hot spare and spin it down as you suggested. As for the SSD, since it is on the motherboard controller I cannot pass it through without the whole controller, so for now I am going to use it for cache in ESXi, or I may use it as an unRAID cache if I find I need that 1TB for other storage

 

Now, I've not touched Docker, as my next learning curve will be SHARES, since I am not sure how I am going to set those up

 

I want to fill each 3TB HDD before moving to the next, and also keep each HDD to a specific category, something like the below

 

[slot1] 3TB HDD - Movies, TV Shows, Documentaries, Downloads

 

[slot2] 3TB HDD - Music Videos, Music Albums, iTunes

 

[slot3] 3TB HDD - Photo, Home Movies, ISO images

 

I have so many questions about unmenu as well, like now that I have precleared the drives, do I just add them in any order?

 

I also noticed the 500GB has come up as HPA - if the drive went through the preclear, why did it come back as HPA? Did my motherboard's BIOS add it?

 

As this is the 500GB drive, I will be taking it out of the equation anyway

 

Cheers

Kosti

Link to comment

You don't have to pass through the SSD in order to use it as a cache. You can make the drive an RDM drive by SSHing into ESXi, logging in as root, and creating an RDM for the SSD.

 

I do this with my drives since I cannot pass through controllers in the HP AMD version of the micro servers.

 

SSH in as root to ESXi (you will have to enable SSH via vSphere)

my data store is VMDS1 so

cd /vmfs/volumes/VMDS1/

mkdir _RDMS

 

find the disk you want to RDM with

/vmfs/volumes/51d5b2e5-7bbdb7fb-4391-28924a2f176c/_RDMS # ls -1 /vmfs/devices/disks/t10.ATA* |grep -v :
/vmfs/devices/disks/t10.ATA_____HGST_HDN726060ALE610____________________NAG1D7TP____________
/vmfs/devices/disks/t10.ATA_____HGST_HDN726060ALE610____________________NAG1DEKP____________
/vmfs/devices/disks/t10.ATA_____ST4000DM0002D1F2168__________________________________W300B0G2
/vmfs/devices/disks/t10.ATA_____ST4000DM0002D1F2168__________________________________Z300JE0D
/vmfs/devices/disks/t10.ATA_____ST6000DX0002D1H217Z__________________________________Z4D0EE7M
/vmfs/devices/disks/t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEAD350663M_____

 

Here's an example shell script I use

#!/bin/sh
# Create a physical-mode RDM pointer (.vmdk) for one physical disk

# the raw device node of the disk to map
DISK=/vmfs/devices/disks/t10.ATA_____ST6000DX0002D1H217Z__________________________________Z4D0EE7M

# derive a short .vmdk name: strip the path, collapse runs of underscores,
# then drop the t10.ATA_ prefix
RDMP=`echo ${DISK} | sed -e 's#/vmfs/devices/disks/##g' -e 's#_\{1,\}#_#g' -e 's#t10.ATA_##g'`.vmdk

# set -v -x
ls -l ${DISK}
# -z = physical compatibility (pass-through) RDM, -a = virtual adapter type
vmkfstools -a pvscsi -z ${DISK} ${RDMP}
ls -ltr ${RDMP}

exit

 

which created the following files.

-rw-------    1 root     root     6001175126016 Dec 22 15:56 ST6000DX0002D1H217Z_Z4D0EE7M-rdmp.vmdk
-rw-------    1 root     root           520 Dec 23 03:07 ST6000DX0002D1H217Z_Z4D0EE7M.vmdk

 

then in vsphere I added a hard disk.

use existing virtual disk

browse to the datastore

choose the _RDMS folder

and select the .rdmp disk node I just created.

 

I make the .rdm nodes in a separate folder for maintenance rather than directly in the virtual machine folder.

This allows me to rsync the virtual machine without the rdmp files (which expand to full size when rsyncing).
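The backup itself is then just an ordinary copy of the VM folder; something along these lines (host and paths are illustrative, run from wherever you have rsync access to the datastore):

# copy the VM folder; the RDM pointers live in _RDMS, and the explicit
# exclude guards against expanding any stray mapping files to full size
rsync -av --exclude='*-rdmp.vmdk' /vmfs/volumes/VMDS1/MyVM/ /backups/MyVM/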

 

 

There are plenty of guides on how to do this on the forum, with pictures too!

 

 

Preclearing drives is easy, it just takes time. Be careful about the drive you are writing to. You don't want to make a mistake and clear a data drive. There are some protections, but still be mindful.

 

If you have an HPA on the 500GB drive, I would double-check and make sure this motherboard's BIOS did not put it there.

 

Also if heat is a concern, then consider a different drive to replace/consolidate the 1TB and 500GB. 

That's 1 drive to replace 2. Thus less heat/electricity.

 

Link to comment

I will use a 1TB drive instead and find a use for the 500GB somewhere else

 

It's not the size of the drive that's important vis-à-vis the speed .... it's the areal density.  What you want to use are drives that have at least 1TB/platter density ... the WD Reds, the Seagate and HGST NAS drives, etc.

 

Link to comment

I want to fill each 3TB HDD before moving to the next, and also keep each HDD to a specific category, something like the below

 

[slot1] 3TB HDD - Movies, TV Shows, Documentaries, Downloads

 

[slot2] 3TB HDD - Music Videos, Music Albums, iTunes

 

[slot3] 3TB HDD - Photo, Home Movies, ISO images

 

To have drives fill up before moving on to the next drive in a share, just set the allocation method to "fill up".    To keep the shares on specific drives (as you've indicated), just use an "Include" so the desired drive is the only drive allowed for the specified share.    But if you do this, and a drive gets full, you won't be able to copy to that share unless you've got another drive assigned to the share for it to use.    So think carefully about just how you assign the shares.
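For reference, those per-share settings end up as plain key/value lines in <sharename>.cfg under /boot/config/shares/ on the flash drive. The key names below are from memory, so treat them as illustrative and make the actual changes via the web GUI:

# /boot/config/shares/Movies.cfg -- written by the unRAID web GUI
shareAllocator="fillup"    # fill one disk before moving to the next
shareInclude="disk1"       # only disk1 is allowed to hold this share
shareSplitLevel="1"        # keep each top-level folder together on one disk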

 

 

... now that I have precleared the drives, do I just add them in any order ...

 

Yes.

 

Link to comment

 

 

[slot1] 3TB HDD - Movies, TV Shows, Documentaries, Downloads

 

[slot2] 3TB HDD - Music Videos, Music Albums, iTunes

 

[slot3] 3TB HDD - Photo, Home Movies, ISO images

 

Each to their own but personally I would use the following setup:

 

[slot1] 3TB HDD - TV Shows

 

[slot2] 3TB HDD - Movies, Documentaries, Downloads

 

[slot3] 3TB HDD - Music Videos, Music Albums, iTunes, Photo, Home Movies, ISO images

 

However this is just due to knowing my personal usage over time. TV for me is by far the biggest space eater and music is barely even the same size as some TV shows.

 

Take a look at your current setup and usage habits and base it off that rather than spreading the categories between the drives. You may have already planned this out though, just offering some advice :-)

Link to comment

Note that there's NO real reason to keep specific categories limited to a single disk.  That effectively eliminates one of the key advantages of the user shares -- the ability to reference a share without worrying about which physical drive it's on.    By setting the appropriate split levels for your shares, you can ensure there are no individual movies or episodes that are split among drives (thus not requiring multiple spin-ups to stream, potentially resulting in pauses during playback) ... but that you also won't run out of space when writing to a share because one drive is full but the others have TB's of free space.

 

If you simply create a share for each category you want to segregate [e.g. Movies, TV Shows, Music Videos, Photos, etc.] the UnRAID user shares will keep those nicely separate for you, regardless of which physical disk they're stored on.

 

Link to comment

Note that there's NO real reason to keep specific categories limited to a single disk.  That effectively eliminates one of the key advantages of the user shares -- the ability to reference a share without worrying about which physical drive it's on.    By setting the appropriate split levels for your shares, you can ensure there are no individual movies or episodes that are split among drives (thus not requiring multiple spin-ups to stream, potentially resulting in pauses during playback) ... but that you also won't run out of space when writing to a share because one drive is full but the others have TB's of free space.

 

If you simply create a share for each category you want to segregate [e.g. Movies, TV Shows, Music Videos, Photos, etc.] the UnRAID user shares will keep those nicely separate for you, regardless of which physical disk they're stored on.

 

Thanks mate, really appreciate you giving me some pointers. I am trying to read up about SHARES so I don't bugger it up, and to prevent unnecessary spin-ups of other drives.

 

I am reading a lot of threads with great information, but most have stalled, so I'm concerned people are either moving on to other adventures or have become too busy

 

Still the wiki is getting a thrashing ;D

Link to comment
