Upgrade from Socket 775?


mazma


Currently my unRaid setup is:

 

Gigabyte EP43-DS3L Socket 775

http://www.gigabyte.com/products/product-page.aspx?pid=2847#ov

Core 2 Duo Quad (2.4 GHz I think, not sure)

6GB DDR2 RAM

 

3x HGST HDN724040ALE640 4TB

7x WD WD20EARS-00MVWB0 2TB

1x Seagate Barracuda ES 750GB (cache)

No hot swap bays

ReiserFS file system

unRaid 6.1.0

 

It runs Plex, SABnzbd, CouchPotato, and Sickbeard, streams music to Sonos, and handles general file storage.

 

Overall the box has been solid for 5+ years but I do have some gripes.

 

It streams Plex and so forth just fine, but it starts getting laggy when file operations are running at the same time as video streams are being played. Copying large amounts of data to the cache drive takes too long, and browsing files via Finder (Mac) over AFP or SMB is sluggish.

 

The motherboard only has the slower SATA 3Gb/s ports.

 

So, would I notice a significant difference in performance, particularly read/write, if I upgraded to one of the popular Supermicro server boards with a Xeon processor and an SSD for cache? Would changing to the new XFS file system make a difference too?

 

unRAID has served me well, but as a busy photographer I'm finding it harder and harder to make time to look after the box and browse the forums (which have been excellent). I can't help looking at Synology's website (I love the idea of having one small NAS where I dump RAW images, which then replicates to the larger NAS on its own).

 

Thanks in advance.


A few thoughts ...

 

(a)  3Gb/s SATA ports are NOT a bottleneck when you're using rotating-platter drives.  Only SSDs can sustain data rates faster than a 3Gb/s port, so this is not something to be concerned about.  [The only transfers that would be faster with 6Gb/s SATA ports are those between the drive's buffer and the PC -- a VERY tiny % of transfer activity.]
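To put rough numbers on that claim (a back-of-envelope sketch, not a benchmark -- the figures are approximate):

```bash
# SATA II: 3 Gb/s line rate, 8b/10b encoding leaves ~80% as payload,
# and 8 bits per byte -> roughly 300 MB/s of usable bandwidth per port.
echo $(( 3000 * 8 / 10 / 8 ))   # ~300 (MB/s)
# Even a modern 1TB/platter 7200rpm drive only sustains ~150-180 MB/s
# on its outer cylinders, so a 3Gb/s link still has plenty of headroom.
```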

 

(b)  Your motherboard only has 6 SATA ports, but you've got 11 drives.  What are you using to provide the other ports?    I'm wondering if this may be a bottleneck in your system.

 

(c)  The copies to your cache drive seem "slow" because you're using an old drive with very low areal density.    The Barracuda ES 750GB drives used either four 188GB platters or three 250GB platters, depending on which model you have.    These are VERY low density compared to modern drives.    If you replace the cache drive with a new 1TB/platter drive or an SSD you'll see a BIG improvement in performance [assuming, of course, that you're using a Gb network].    This improvement doesn't need a better processor -- just a better drive  :)

 

(d)  Switching to XFS may help some with writes ... particularly if your drives are fairly full [> 80% or so].    It's not likely to make any difference in read speeds.    If you decide to do this, be VERY careful, as I've seen several folks lose data when attempting this conversion [be sure you understand the "user share data loss" issue and don't do anything that will result in that problem].    Personally, I wouldn't do this until you've at least switched to a better cache drive and, possibly, replaced your add-on controller (if that's an issue -- once you post the make/model as I request above we can see if that's the case).

 

(e)  I'm assuming (as I noted above) that your network is Gb => if that's not the case be sure you update everything to Gb, as that makes a BIG difference.

 

(f)  Slow browsing of files may be because the drives aren't spun up.    Click the "Spin Up" button in the GUI to spin up all of your drives, and then see if the browsing "feels" better.    If so, you may want to adjust how the drives are spun up by setting a spin-up group, so all disks in the share you're using are spun up together.

 

Bottom line:  I wouldn't upgrade the system until you've at least replaced the cache drive and possibly the controller.    Then see how things are working.    The only thing you've listed that MAY need more "horsepower" is Plex => if you're running multiple streams at once AND those streams require transcoding, then you may indeed need more CPU "horsepower" than you have now -- if that's the case, a nice new Xeon will indeed be an improvement.

 


Thanks SO much for the reply garycase!

 

The remaining drives are hooked up to these:

 

3x Adaptec RAID 1220SA 2255900-R SATAII PCI Express x1

http://www.newegg.com/Product/Product.aspx?Item=N82E16816103104

 

The network is Gigabit.

 

Most drives are over 85% full - some around 98% etc.

 

Spin-up groups are enabled but I've never set any. The drives power down after 15 minutes.

 

I have a spare WD WD1001FALS 1TB lying around - use this for cache instead?


The WD1001FALS also has a larger onboard cache, which should help.  Not like an SSD by any stretch, but a little better.

 

To the OP, which CPU do you have, a Core 2 Duo or Quad?  I replaced my Core 2 Duo E6400 2.13GHz with a Core 2 Quad Q9550 2.83GHz and got a decent bump in CPU performance.  It's certainly no Xeon, but I went from a 1,300-Passmark CPU to a 4,000-Passmark CPU for $90 ($75 for the CPU on eBay and a $15 cooler).  How much CPU do you think you are using, and how much transcoding is Plex doing for you?

 

If I was depending on this for my business, though - I'd be very tempted to modernize the whole thing rather than piecemeal it.


Currently it's an Intel Core 2 Quad Q9650 3.00GHz.

 

Right now it's idling around 20% utilization while mover is running. Streaming a single movie over wifi to an Xbox One, utilization is 50-75%. Playing a second movie to an ATV over wired Ethernet, the average utilization doesn't change too much, but it has higher peaks (85% or so). The average seems to be around the high-50% to 60% mark. Memory usage is at 27%.

 

With all the drives spun up, browsing files in my movies folder takes 2-3 seconds to view the contents of each folder. Sitting here randomly selecting folders, it's now got to the point where 20+ seconds have elapsed and it's still thinking about showing that folder's contents. Only a single movie is playing and utilization is low.


It sounds like you have fairly high quality/high bit rate source material, and Plex is doing a lot of transcoding to support your players, the XBox in particular.  The Core 2 Quad 9650 is the top of the LGA775 food chain - so no upgrades available.  I'm not sure why file browsing is quite that slow, but I'm also not sure whether it's worth trying to solve that problem independently.

 

You're maxing out your current box for streaming, and personally I'd go with a full upgrade.  It's 7-year-old technology - you got your money's worth out of it ;). Among commercial NAS units, only a high-end QNAP (Core i5/i7) might be able to keep up with your requirements.  I'd stay with unRAID and go with a high-end Core i5 at a minimum - and why not go with a Xeon or Core i7?


A few more thoughts ...

 

First, the WD1001FALS may be a 320GB platter unit (as mr hexen noted) OR it may have 500GB platters.    There were several versions.    If you look at the actual drive, it will have some digits at the end of the model number.  If those are 00E3A0 or 00Y6A0 it's got 500GB platters.  Otherwise it's 320GB/platter.

 

That's not a big bump from what you have.  The FIRST thing I'd do is buy an SSD cache drive.  You can get 250GB SSD's these days for ~ $100, and a 500GB is under $200.   

http://www.amazon.com/Crucial-BX100-500GB-Internal-Solid/dp/B00RQA6M5Y/ref=sr_1_2?s=pc&ie=UTF8&qid=1442438607&sr=1-2&keywords=crucial+BX

 

Replace your cache with one of those, and then see what you think.

 

Your CPU has a PassMark of 4267, which is PLENTY of "horsepower" for a couple of Plex streams.  You can certainly do a full upgrade if you want, but it's unlikely that CPU power is the primary issue here.

 

Your WD20EARS drives are 667GB/platter units, so they're "okay"  ... although not as good as your 1TB/platter HGST's.

 

I suspect your relatively slow browsing is likely due to the generation of thumbnail images of the content.  If you turn off that feature and just browse the names, it'll likely be MUCH faster.    That's easy to do in Windows, but I have no idea how you do it on a Mac.  Note that if this is indeed what's happening, a faster system won't help at all ... the rendering is done on the client (i.e. your Mac) -- the only thing UnRAID is doing is transferring the data from the disks, which isn't going to be any faster with a newer system and the same disks.
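For what it's worth, on the Mac side two things often help Finder feel snappier over SMB/AFP (treat these as suggestions to test rather than a guaranteed fix): turn off "Show icon preview" in the folder's View Options, and stop Finder from writing .DS_Store files to network shares:

```bash
# Run in Terminal on the Mac: stop Finder creating .DS_Store files on
# network volumes (a common cause of sluggish SMB/AFP browsing).
defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true
# Relaunch Finder so the setting takes effect.
killall Finder
```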

 

Note also that if you improve your cache drive with a good SSD, the fact that Reiser is slow in writing to the last 10% or so of your disks becomes essentially irrelevant ... since those writes will happen overnight when the Mover runs => the write speeds YOU see will be to the cache, which will be VERY nicely improved with a new cache drive.

 

Finally, your add-in controllers are old PCIe v1 units that only support a single 250MB/s lane.  So with two drives attached, they're limited to about 125MB/s each.    This is a bottleneck for your 4TB units, and even for the EARS units on their outer cylinders ... although only when you're running multiple drives at once.    I'd be sure that your 4TB drives, and especially your new SSD cache if you add one, are connected to motherboard ports -- NOT to one of these controllers.

 

A nice new system with a high-end Xeon, SATA 6Gb/s ports, a better add-in controller, etc. would certainly be a nice overall improvement => but it's not at all clear it would make the issues you've noted go away.    Before spending the money on that, I'd buy a good SSD for your cache drive, turn off thumbnail images, and be sure your fastest drives are on motherboard ports.    You may find that this is all you need to do to feel a lot better about your system's performance.    And you won't be spending any money at all except for the SSD -- which you'd clearly use on any new system you build anyway.

Thanks so much for all the replies.

 

All my apps etc. are stored on the cache; if I mount the cache drive in an external enclosure, I presume the contents are visible?

 

I'm not using the motherboard's NIC; instead there is a PCI NIC installed. Would it be possible to combine the two for link aggregation?

 

For file browsing, all I was looking at was my movies folder, so just folders with one file inside each. When I left it, Finder (the Mac equivalent of Windows Explorer) had the beach ball of death on it. I went out for an hour or so, came back, and it was fine again - I could browse just fine. At the time a single movie was streaming (which I stopped, and it didn't make a difference) and mover was moving a large number of small files into the array. When I came back none of those operations were running.

 

To avoid the add-in controllers and just use the motherboard's SATA ports, I'd have to condense 18.2TB of data down to four data disks (the other two ports going to parity and cache), which would be expensive and wouldn't leave me any options after I max out that storage.


... and mover was moving a large number of small files into the array. When I came back none of those operations were running.

 

THIS is a MAJOR slow-down factor.  This is causing the cache drive, the current data drive, and the parity drive to all be moving about constantly => it can have a BIG impact on streaming, browsing, or any other activity that also needs to use the disks.    There's a reason Mover is generally set to run during "off hours"  (middle of the night) ... so you won't have this kind of activity while the array is in use.

 

... To avoid the add-in controllers and just use the motherboard's SATA ports ...

 

There's no reason to do that.  Your add-in controllers are just fine for the WD20EARS drives.    Just be sure you use motherboard ports for the 4TB HGST drives and for your cache drive -- which, as I've noted a couple of times, you should replace with an SSD.

 

As I noted before, all you need to buy right now is a nice SSD => I'd go with a 500GB unit so you have plenty of room for your apps in addition to its cache functions.    And change your Mover setting so it runs at a time when you're not using the array!
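For reference, the mover schedule is normally changed from the web GUI (Settings -> Scheduler on 6.x, if I recall correctly); under the hood it's just a cron entry along these lines -- the 3:40 AM time and the script path are examples, so check your own box:

```bash
# Run the mover at 3:40 every morning, when nobody is streaming.
# /usr/local/sbin/mover is the stock script location on unRAID 6.x.
40 3 * * * /usr/local/sbin/mover
```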

 

 


The browsing symptoms exist with or without mover, though obviously not as bad without it. I rarely use mover and have never changed the default schedule; it just so happens that I have over 1TB of data to move into the array, and I've been trying to get it on there quickly all week so I could free up the external drive that data currently resides on. Ugh.

 

I'm pretty sure that the three HGSTs in there are already on the motherboard's SATA ports, as they replaced 2TB EARS drives that were on those ports.

 

The other thing I'd like is a second box where I can quickly offload RAW images, Photoshop files, and large ProRes videos from my laptop, which would then replicate over to the main NAS overnight. Sure, I could dump them straight onto the current unRAID box, but its current location would make that a pain, and it would take too long to restore some of those files back to my laptop when needed. I saw a BT Sync Docker which would do the replication, but I'm not sure unRAID is the right platform for this second box. I was literally just thinking of a two-drive box that was mirrored but super quick. Right now I could achieve this with the Synology 1815+ and one of their two-drive NAS units.

 


The Synology pair would probably work fine for that ... but so would a large cache SSD on your UnRAID box -- the copies would be over 100MB/s if you had an SSD.    You could even use a 1TB SSD if you are concerned about having enough space to cache all of your files  [A 1TB SSD is as low as $325:  http://www.bhphotovideo.com/bnh/controller/home?O=&sku=1116795&gclid=CP-wnODZ_McCFYM-aQodNuECQw&Q=&m=Y&is=REG&A=details ]

 

Reads from UnRAID should also be over 100MB/s, regardless of where they are currently stored (cache or array) ... although you may not achieve those speeds with the EARS drives due to their lower density.

 

 


I have a very similar setup to you. Same CPU, fewer HDDs (but some on a PCIe x1 card), but I have an older spare SSD I use as my cache drive. I even have an Intel PCI-X NIC in a PCI slot for Gb networking.

 

How much data are you moving onto the array in a given day? The mover runs by default at 3 am once a day, so it should move all the data you put there overnight. Are you using the array heavily at 3am (or check your mover times, maybe you just need to change its schedule?)

 

My cache is a crappy OEM 128GB SSD, but I think it works great. I also have all my dockers on there, and any non-array data (like my incoming data folder for dockers to process).

 

Is it possible that your cache drive, while holding a bunch of data/files/etc for moving later, is also being accessed (like you just copied a few files there, but then read them/play them/etc before they actually get moved), and all your dockers are using the drive to run things, copy things, process files/etc?

 

This is exactly why I knew getting an SSD for cache was the way to go, not as much for faster write speeds, but because it never "spins up/down" and can always be on and available for use for the always running dockers/etc.

 

Have you done a parity check lately? What kind of speed do you see there? I think there is also an HDD speed test you can run in unRAID; maybe one of your drives is just slowing everything down (as gary mentioned, because of the add-in controllers, etc.), or one of them is just starting to go?
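If you want a quick-and-dirty read test without hunting down the diskspeed script, something like this from the unRAID console gives a rough per-drive number (the device names are examples -- substitute your own, and note this only measures sequential reads from the start of the disk):

```bash
# Buffered ("-t") and cached ("-T") read timings for a few drives.
# A healthy modern drive should show well over 100 MB/s sequential;
# a failing or very old drive will stand out immediately.
hdparm -tT /dev/sdb
hdparm -tT /dev/sdc
hdparm -tT /dev/sdd
```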

 

Lastly, I'll just add that I'm also ready for an upgrade, and looking at jumping to Haswell, getting a motherboard that is VT-d ready, going cheap with a G3258 (similar horsepower to my Q9650, just fewer cores) and some RAM, and leaving it a lot more upgrade-ready to add an i5 down the road. This route isn't the "BEST", but it's certainly cheap, and leaves me $$$ to start buying bigger HDDs as well.


I don't see any way that Synology would work for you unless you set up a dedicated Plex machine.  You currently have a 4200-Passmark CPU that you're pushing to 85% when serving two streams.  The Synology 1815+ has a 2300-Passmark Atom.  The Synology would be great for your photography work, but there's no way it can meet your media streaming needs.  If you decide to go commercial, check out the QNAP TVS series - they have i5, i7, and Xeon options which could probably handle your transcoding requirements.

 

How large do your photography shoots get?  I think of unRAID 6 cache pools as sort of a NAS within a NAS - with automatic sync to the array.  Gary mentioned several SSD options, but you could also go with a pair of large, fast hard drives if the data requirements are too big for an SSD.


00b5

 

Up until now all my work images etc. have been kept on my laptop and external drives. I wanted to get smarter with my workflow, so I decided to utilize unRAID more by keeping an archive of the data on unRAID, which is then backed up to the cloud via CrashPlan. Outside of this, unRAID was just used as a media streamer with Plex, Sickbeard, etc. One of the reasons I'd stayed away from using it as a file server was because it was too slow, i.e. slow browsing and slow copying of data from the array. Up until this week I never used mover, but with my new plan in mind I had a bunch of stuff that needed archiving so I could free up external drives. It's only this week I've had mover running during the day, as I need the space on the cache drive to do the next batch - certainly not a normal usage scenario. Right now I'm copying a 350GB batch; it's been copying to the cache drive for well over 8 hours and it's got 100GB to go.

 

Parity check was done last week. I think it was somewhere around 55-75 MB/s.

 

tdallen

 

I agree - until yesterday I hadn't considered what the power of my current CPU actually was, but luckily with people like yourself and garycase around I have a much better understanding :)

 

Right now I'm doing a time-lapse movie; each clip is 5-10 seconds long, and each clip will contain 150-300 images at 75MB per image. All of those RAWs then have to be converted into TIFFs, which are 50MB each. Eats space real quick.


Hmm, 30fps uncompressed RAWs are a little scary to deal with :o.  Can your camera compress them, and would that be acceptable quality?  14bit would cut the file sizes in half...

 

But that's still a lot of data.  Just curious, are they on a pile of SD/CF cards, or are you capturing directly to magnetic/SSD?


Interesting problem...  Well, thinking out loud (and hopefully someone will correct my math if I have it wrong):

 

The first thing is to make sure you are getting the data off your cards as quickly as possible.  My old laptop has an SD card reader but it works at USB 2.0 speeds ~20MB/s.  I haven't picked up a USB 3.0 card reader yet for my D7100, but I've been meaning to.  You'll need that in order to take advantage of the 95MB/s of an SD card, or 160MB/s of a CF card.

 

At 95MB/s you are reading from an SD card slightly slower than it's possible to write to a magnetic hard drive.  Gigabit Ethernet should also be able to keep up with a 95MB/s copy operation, though we're pushing close to the theoretical limit of 125MB/s.  So, the next thing I'd do is make sure that you have a cache drive that can keep up with your SD card and your network.  An SSD is ideal, but a nice fast HD with high-density platters should be fine as well.  I don't see a need to write to a local hard drive first - in theory you should be able to write over the network to an unRAID cache drive as fast as you can read from the SD card.
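A quick worked example of why the reader speed matters (the 64 GB card size is just an assumption to show the scale):

```bash
# Time to offload a hypothetical 64 GB card at different reader speeds.
echo "USB 3.0 reader @ ~95 MB/s: $(( 64000 / 95 / 60 )) min"   # ~11 minutes
echo "USB 2.0 reader @ ~20 MB/s: $(( 64000 / 20 / 60 )) min"   # ~53 minutes
# Gigabit Ethernet (~110 MB/s real-world) can keep pace with the USB 3.0
# reader, so the copy can go straight over the wire to a fast cache drive.
```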

 

With 160MB/s CF cards the picture changes a bit.  In this case you'd be better off writing to a locally attached SSD and then copying across the network later.  Or, is it possible to have a card reader directly in your unRAID server?  I think I've seen references to that, not sure.

 

In theory you could benefit from having multiple machines copying SD/CF card data to your server, and having a server with two Ethernet connections - common on server class Xeon machines.  You'd want an SSD cache drive in this case, though.

 

I think you are a good candidate to implement an unRAID 6 cache pool.  A couple of redundant high speed cache devices should provide the fastest, safest way to move data onto your server.

 

I still don't understand why you are experiencing slowdowns browsing your server.  As someone who owns hardware from the same era, though, I'm just not sure it's worth putting a lot of effort into figuring that out.  When I'm honest with myself I'll admit that I've already put too much time and money into this old server.  If I were faced with another significant issue on my server, I'd pull the trigger on a new one.


... Right now I'm copying a 350GB batch; it's been copying to the cache drive for well over 8 hours and it's got 100GB to go.

... each clip will contain 150-300 images at 75MB per image. All of those RAWs then have to be converted into TIFFs, which are 50MB each.

 

Well, if your cache fills up, your copy will keep going; it will just write directly to the array. I guess we should make sure that you are in fact using a user share which has the cache drive enabled. In that case, you are writing all the files to that 750GB drive (we'll assume it has well more than 350GB of free space); then, in the middle of the night (or if you invoke it yourself), the mover should start to copy those files to one of your array-protected disks while calculating parity. Now, I rarely copy smaller files - most of what I deal with is 1GB+, and never 300GB worth at once.
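To make the share/cache relationship concrete, this is roughly how the stock mount points line up on an unRAID 6 box (the "images" share name is just an example):

```bash
# /mnt/cache          -> the cache drive itself
# /mnt/disk1..diskN   -> the individual, parity-protected array disks
# /mnt/user/<share>   -> the merged user-share view of cache + array
#
# Writing to \\server\images (a user share with "Use cache disk: Yes")
# lands on /mnt/cache first; the mover later migrates it onto an array disk.
ls /mnt/user/images    # lists files whether they're still on cache or already moved
```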

 

But if it takes 8 hours to copy 350GB to that cache drive** (over wired Gb Ethernet), then when it gets moved to the array it would surely take LONGER than that, since there's the overhead of calculating parity along the way (and lots of little files always take longer than a single big file). I think you have something else going on; you should check the speed on that 750GB drive, check its free space, etc.

 

Also, maybe you need to run some sanity checks (there's a sketch of one after the footnote below). Take some of these RAWs (say 50GB worth) as a test. Copy them to a share/test share. Then put them all into one big zip/rar file, copy that huge file to the same share, and see if it seems any better. Also, how are the shares set up (SMB, etc.), and is the laptop in question a Mac? And it IS hardwired, right? Maybe we all just assumed that and it's really on wifi.

 

 

** Again, just making sure, but you should be copying to a user share like \\server\images\workflow\incoming as opposed to \\server\cache\images......
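A sketch of that sanity check from the Mac side -- the paths and the share name are made up, so adjust them to your setup:

```bash
# 1) Copy ~50 GB of individual RAWs to the share and watch the throughput.
rsync -a --progress ~/Pictures/raw_test_batch/ /Volumes/images/incoming_test/
# 2) Bundle the same files into one big archive (-0 = store, no compression)
#    and copy that, to see whether lots of small files are the real problem.
zip -r -0 ~/Desktop/raw_test_batch.zip ~/Pictures/raw_test_batch
cp ~/Desktop/raw_test_batch.zip /Volumes/images/incoming_test/
```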

 

 


Current Workflow goes:

 

SD > MacBook Pro (it's pretty close to the 95MB/s). I have a Lexar Pro USB 3.0 reader and it's super fast. Getting data off the cards isn't an issue.

 

Final edits and RAWs are then transferred from the laptop to an external drive. I recently set up CrashPlan (after MUCH messing about) to back up the external drive over wi-fi to unRAID. unRAID will then (soon enough) back up to CrashPlan Central.

 

The server isn't in an easily accessible location (long story), so it's too much hassle to always go to it and dump final edits onto it, hence why I have an external drive for when I need to restore and re-export something for a client.

 

Backing the external drive up via CrashPlan over wired Ethernet (it's a FireWire 800 drive) I get anywhere from 14 to 65 Mb/s. It seems to hang around the low 20s for a while, and eventually it might make it up to 65 Mb/s.

 

The cache drive does indeed seem central to the issues I'm having.

 

Right now I'm leaning towards upgrading to a Supermicro board with a Xeon plus an SSD cache, or a cache pool. At some point the WD EARS drives on the add-on controllers are going to need replacing, and I won't see the full performance of whatever I replace them with. Also, I can't be spending all my time messing with this stuff; I need it done so I can go make $$! It seems if I do it now I'll be set for some time. This current setup has done well - it's been running since 2010, I think, so I can't complain.

 


I have an old, slow cache drive.  Diskspeed.sh has it at 55MB/s.  In comparison, my 3TB array drives are 114MB/s and my 6TB drives are 142MB/s.  Shows you what happens with older hardware... I don't even write to my cache drive - writes to the protected array are almost as fast as writes to that old clunker.  Works fine for Docker, though.
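If you want to run the same comparison on your box, a rough version from the unRAID console looks like this (the paths are just the stock mount points; conv=fdatasync makes dd wait for the data to actually reach the disk before reporting a speed):

```bash
# Write 2 GB to the cache drive and to one array disk, then clean up.
dd if=/dev/zero of=/mnt/cache/dd_test.bin bs=1M count=2048 conv=fdatasync
dd if=/dev/zero of=/mnt/disk1/dd_test.bin bs=1M count=2048 conv=fdatasync
rm /mnt/cache/dd_test.bin /mnt/disk1/dd_test.bin
```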

 

You mentioned both wired and wifi for copying data from the external hard drive to the server - unless you have a serious wifi setup, wired will be faster.

 

Your build list looks good to me.  The stock CPU cooler is fine.  8GB of RAM is probably fine based on your current usage, but 16GB wouldn't hurt.  I think ECC is required for that board, and always a good idea.  I like Samsung SSDs.

 

 


Your PSU is fine - it's probably overkill, actually, but at least it's newer and Gold-rated, so I'd stick with it.

 

I'd also suggest ECC; it can only make your server better in the long run (and is probably required), and for what you are storing (lots of files that are small by video standards but important to you), anything you can do to avoid bit rot in the long run is a plus.

 

I'd only do a cache pool (or something more than a single drive) if you really want to make sure you don't lose data from it before the mover can move it to the array in the middle of the night, but it sounds like going with a new setup will prove beneficial for you.

