Posts posted by bman

  1. It depends on how much you need your cache data to be safe.

     

    I don't like surprises, so I wouldn't trade an EVO drive for a cheap SSD of any kind. I have thrown away many lesser SSDs, but some of my forty or so Samsung SSDs have already been in service for 8 years. They make the NAND flash memory, they make the controller, and they make the firmware. Their reputation is all over their drives, inside and out.

     

    I discovered that Intel's 5-year warranty means precious little if you don't have your data when you come home from a long day of video production. In that particular hard-learned lesson, I had eight SSDs, all of which failed more than once within the first year.

     

  2. Not that I'm an expert on this, but as far as I know, the special flag that tells Unraid a drive is "cleared" is only set at the end of a successful preclear script run.

     

    If a new disk is not properly cleared, Unraid must clear it before use as part of the array. This would be (among other reasons) to prevent any possibility of old data on a drive becoming part of a parity calculation and wreaking havoc on your once-good data elsewhere in the array.

     

    Clearing just writes zeroes to every spot on the disk, whereas preclear reads, then writes, then verifies on each of its cycles (unless you change its settings to skip some of those steps). That's why you saw the large time difference.
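
    For illustration only, a plain clear amounts to a single zero-writing pass over the whole device, something like the line below (the device name /dev/sdX is a placeholder - triple-check it before running anything of the sort, since dd will happily flatten the wrong disk):

    dd if=/dev/zero of=/dev/sdX bs=1M status=progress

    Preclear does full read and verify passes on top of that single write, hence the much longer runtime.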

  3. I don't have the answer to your issue, but I ran into very much the same kind of trouble when I wanted to run 4 RAM sticks on my Gigabyte EP45-UD3P based computer. Two sticks were fine, but I had to tweak many BIOS settings to get four sticks stable. I found the correct tweaks via an internet search; maybe someone here has ideas for your motherboard?

  4. I'm sure there are better ways, especially if by chance your Synology disks are each singles with valid file systems on them, but since we don't know how you've got things configured, here's how I would go about it:

     

    Create desired shares on UNRAID and use rsync to transfer files categorically over the network.

     

    For example, if you have a large Apple Music library and create a "music" share on UNRAID, you would copy all the music files from your Synology device to that share in one session. Repeat for TV shows, photos, etc. as required. That way, if something breaks along the way, you can easily find where things left off, double-check the most recently copied file for corruption, and continue from there.

     

    For mounting shares on UNRAID check https://docs.unraid.net/legacy/FAQ/transferring-files-from-a-network-share-to-unraid/

    I usually mount network shares in the /mnt directory on UNRAID but you may choose anywhere you like.
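
    As a sketch of that mounting step (the host name "synology" and the export path /volume1/music are assumptions - yours will differ, and NFS has to be enabled on the Synology side first):

    mkdir -p /mnt/synmusic
    mount -t nfs synology:/volume1/music /mnt/synmusic

    An SMB mount (mount -t cifs, with credentials) works just as well if that's what your Synology shares out.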

     

    rsync can show progress, verify, and then safely remove source files with a single command, such as:

     

    rsync -av --progress --remove-source-files /mnt/synmusic/* .

     

    This copies and verifies each file, then erases the source copy. If you leave off the --remove-source-files option, you'll retain a backup of each file until you know everything is safe.
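
    As far as I know, rsync already checksums each file during the transfer, and --remove-source-files only deletes files that transferred successfully. If you want an extra belt-and-braces pass before deleting anything, run the copy without that option first, then do a checksum-only comparison (the trailing-slash form also picks up hidden files that a * glob would miss):

    rsync -avc --dry-run /mnt/synmusic/ .

    Any file it lists differs between source and destination; a quiet run means everything matches.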

     

    That command assumes you're already in the UNRAID destination share before executing - e.g. cd /mnt/user/music. "synmusic" is the name I chose for the mounted music share from the Synology device. If you have just one master share on the Synology with a bunch of subdirectories, perhaps "synology" is a more appropriate label, and your rsync command might then reference /mnt/synology/music/*

  5. 1 minute ago, olehj said:

    As stated earlier for the same request: I cannot do that as the sub-design does not permit it in an easy way. Empty trays are simply marked as a number and nothing else and currently I won't add a second layer of empty number as it might conflict and corrupt existing databases.

    Okay, thanks for the explanation. This is a useful plugin; thanks very much for your efforts!

  6. On 2/7/2019 at 3:25 PM, olehj said:

    I will make it possible to hide the contents of "empty" in the near future, but it will be either all or none of the empty slots - not selective.

    I'd like to request this go one step further: the ability to hide individual slots, or mark them in some way as N/A (not available) with their own colour. The reasoning is that sometimes, through poor quality control or overuse/abuse, individual slots on a backplane may no longer be functional.

     

    I have two Supermicro 24-bay chassis where this is the case, and I have to remember not to use those slots, so I physically mark them with tape. It would be nice if the plugin could also mark individual slots as defunct, so that when I am several miles from the physical server I can easily see which slot(s) are faulty and plan upgrades or drive swaps before visiting the site.

     

  7. tbh I haven't given Intel SSDs a fair shake since premature failures on 6 of the 8 520-series drives I purchased left students without their day's worth of video footage when they returned. I swore off them because even though the warranty was good, the reliability wasn't.

     

    Seeing the price of the D3-4510 960G versus the competition has me rethinking things, albeit for different use cases. I think you're on the right track with that one.

  8. 28 minutes ago, IamSpartacus said:

     

    Yea I think I am going to keep it and use it as place for my DVR and maybe my Windows Home drives as those get backed up daily anyway so they don't need to be protected.  Now I'm just shopping for a pair of 500GB-1TB SSD's with decent write endurance so I'll probably look at the Intel DC line.

    My $0.02 on write endurance: the Samsung 850 or 860 Pro series wins. I've seen only one dud out of the couple dozen I've used over the past several years. Write endurance seems to be a Sammy strong point: https://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead

     

  9. On 2017-05-09 at 8:07 AM, Fireball3 said:

    I wonder how one can be "impressed by the performance" of a PSU... O.o

    I guess if you measured its outputs with an oscilloscope under different loads, checking ripple and general signal quality as well as deviation from the expected voltages, you could be suitably impressed by a PSU.

     

    Or maybe it's nicely lit and has an LED readout telling you how much power you're using... not performance related, but lights and gizmos generally impress on a subconscious level if nothing else!

  10. Oooh, good call BRiT. I'll start playing around with that. This could prove useful at work, where I use unRAID for video archives: lots of students on all sorts of computers and OSes doing who knows what, and some of them have brief access to the primary archive server to drop off their newly finished masterpieces. If all the previous work were root-owned and RO, that could potentially save days of rsyncing from the backup server if something unsavoury happened.

     

    Cheers!

  11. I started my unRAID life after IDE (okay, the motherboard had the controller, but I always used SATA drives and ignored the IDE ports)... and never really had much use for spin up groups.

     

    Now that hard drives are very large and also very power efficient (<= 5W each), I have no concerns about letting them spin for several extra hours, or even 24/7/365. Where I am, 100W of constant consumption for 23 HDDs costs me $0.31 daily (100W for 24 hours is 2.4kWh, at roughly $0.13/kWh), which is far less than the cost of a daily coffee -- and I don't drink coffee, so I'm still well ahead!

     

    I generally set my spin-down timer to about 6 hours, which lets the drives stop while I sleep; usually they won't be called upon again until the next afternoon.

     

    There are always delays the first time or two I access the array, but it's no big deal after that: the rest of the day is hiccup-free thanks to the long spin-down timer.

  12. 1 hour ago, BRiT said:

    I like it. I think I'm going to do exactly that on my media shares for any file larger than 100 Megs (or whatever size is just a bit larger than the trailers videos).

     

    I might start off with the scripts for the Accelerator Drives but invert the checks and change out the action so it operates on the large files instead.

     

    I like the application to files larger than xx bytes in size. Any idea on a command line for such? A chmod -R piped through some grep on file size? (My bash knowledge is sorely lacking!)
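
    Thinking out loud, a single find invocation is probably the cleanest route - a sketch only, where the share path /mnt/user/media and the 100M cutoff are assumptions to adapt:

    find /mnt/user/media -type f -size +100M -exec chown root:root {} + -exec chmod 444 {} +

    That should leave every file over 100 MiB root-owned and read-only while ignoring the small stuff, with no chmod -R or grep gymnastics required.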

  13. Having searched briefly for anything that's not rackmount and still holds that many drives, I come up short. As far as I can tell, any other form factor (like Lian Li's PC-D8000) adds extra space but no increase in available drive slots. I'm not sure there's a way around it: the market for a single chassis holding as many drives as you want to house is too small, so your choices are few and expensive.

  14. I'm handy with a soldering iron, so if it were needed I'd just add another 8-pin connector from old PSU bits I have lying around - but that's just me, and it assumes the mod wouldn't exceed any power limits and blow up my power supply :)

  15. Nice motherboard!  I got the non-IPMI version for a recent build that I haven't quite gotten around to yet.

     

    In my previous experience, though, I have learned to connect ALL motherboard power connectors, no matter how many there are. I've had it happen before where I skipped one (thinking I wouldn't need that extra 12V feed), only to scratch my head a year later when a new add-on card wouldn't work, or hit some other silly problem I could have avoided.

     

    Each 12V wire and each circuit trace on the motherboard is designed to carry only so much current. If there are extra power connectors, it's probably because in some situations extra current needs to be available to certain slots on the board. Plug in whichever connectors fit; they're keyed, so you can't plug them in wrong no matter how they're labelled.

  16. On 2017-04-28 at 8:26 AM, nuttytech said:

    Hi Unraid forum 

    I looking for advice on where I can get a server case that supports 40+ drivers 50 would be good. And a psu that can power the drivers plus 2 titan gpus and 2 power hungry CPUs.

     

    Where I am, the Supermicro 847E2C-R1K28JBO chassis (which includes two 1280W power supplies, SAS backplanes, and front+rear hot-swap bays for 44 3.5-inch hard drives) costs the same as 6.37 10TB enterprise (5-year warranty) hard drives.

     

    Round that up to seven drives (five for data, two for parity) and that's 50TB of parity-protected data for your rendering needs.

     

    I've never seen a chassis as large as the one you're after on the used market (eBay or otherwise - yet; I'm sure I will one day!), so as far as I'm concerned you're buying a new chassis at full price. I don't see how you're going to fit that into such a small budget.

     

    The best forward-thinking logic is as already suggested: spend the money on larger drives so you can use a cheaper, smaller chassis, like one of the ones you already have... unless you can barter a deal with a good metal bender in your area who can make you what you need.

  17. 5 hours ago, the_cook said:

     

    Hi y'all!

    What do you think of this PSU for my build? EVGA SuperNOVA 650W GQ

     

    650W will work fine for what you're up to. It looks like your load will put that supply in its highest-efficiency zone, so I'd say you're good to go!

  18. Definitely overkill. Grab a used Supermicro 24-bay case from eBay for a few hundred dollars and start from there. SAS expander backplanes are okay, but as you add more drives you will eventually saturate their 6Gbps or 12Gbps uplink, leaving each drive only a fraction of the bandwidth it could use on a dedicated port. Normal unRAID use wouldn't show any problem, but parity checks will stretch from hours to days as you add drives without full throughput available to each.

     

    I stick with SATA backplanes and multiple controllers so every drive gets its full rate, just so I never again have 5-day-long parity checks like I did with my first PCI-based system.

     

    If you really feel the need to cram a whack ton of physical drives into 4RU, you may wish to visit https://www.backuppods.com/ to grab a chassis from the folks who make the Backblaze pods, and go from there. Keep in mind, though, that if unRAID is your OS of choice, your limit for data storage is basically 30 physical drives (28 data and 2 parity, not counting cache drives). Using more than that is likely more troublesome than helpful.

     

  19.  

    Warning!!!:

    Please be VERY careful when doing this. You do NOT want to pass through your unRAID USB by mistake. unRAID needs the USB present to function properly. 

     

     

    So if, in my messing around, I DID do the above and my UnRAID system no longer works properly... how do I edit the XML from the command line? And where do I find it to edit?

    :-[
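
    For anyone else who ends up in this state: I believe the VM definitions live in libvirt, so the virsh tool can fix it from the command line. A sketch (the VM name is a placeholder):

    virsh list --all          # list every defined VM and its state
    virsh edit "Windows 10"   # open that VM's XML in the default editor

    Delete the offending USB/hostdev passthrough entry, save, and the change takes effect the next time the VM starts.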

  20. I see that the 'SeaSonic X Series X650 Gold 650W' is recommended on the first page of this thread; however, it doesn't seem to be available any more. All I can find is the Seasonic X-660 660W - http://www.amazon.co.uk/Seasonic-X-660-Watt-Power-Supply/dp/B004DRZI4Y/

     

    Is this a typo on page 1? Or, in the event this is a different PSU, would this be suitable for an unRAID server?

     

    Different supply... but in my experience so far, just as good for UnRAID and other purposes. I've used both the X650 and X660 Seasonics with great results, whether loading a system up with 22 drives or just 13.

  21. I have used a couple of Seasonic X650 supplies with a few different Supermicro motherboards, all without problems as yet.

     

    Having said that, this train of thought piques my interest because I too am soon to be a (hopefully proud) owner of an X8SIL-F-O motherboard.

     

    Crossing my fingers that no issues rear up and bite me.

  22. Nice! Do you use 5-in-3 cages with your rig, or if you did, do you think they'd fit without any messing about?

     

    I'm not looking for another round of bending tabs out of the way just to be able to slide in my drive cages...

  23. Wow those Xigmatek cases look the part - but do they play well with others?

     

    Since I already have three 5-in-3 cages, I could get a big-ass Xig case and one more 5-in-3 cage to bring my capacity up a bit... maybe more reliable than a Norco RPC-4224?

     

    Hmm... anyone had cause to purchase/use one of these 12x5.25" monsters yet?

     

  24.  

    You are correct, you have a split rail PSU with 17A on each rail.  Generally one rail powers the CPU and GPU and the other powers the motherboard, fans, and HDDs...  Either way, your PSU is definitely under-powered and you should look into upgrading it immediately.

     

    I'm surprised that your server still boots with that PSU.  There are a few factors that could be at play.  First, some PSU manufacturers advertise their power supplies as split rail when they are actually single rail... Even still, that only saves you 1.5A total, which doesn't account for the 13A (or larger) gap that you currently appear to have.  I still recommend assuming the WD Greens pull 2A just so you have a bit of overhead.

     

    Well, I ran my server on that Seasonic 'split rail' power supply (I have to think it was really a single rail, as you suggest, but I won't be disassembling and analyzing it to find out) for many months with 15, and recently 16, hard drives. If it truly were a split-rail supply, that leaves only about 1A per drive (17A shared across 16 drives).

     

    So I can see two possibilities: the one you mentioned, that it's a single rail even though the specs say otherwise, or Seasonic supplies are just awfully darn good!

     

    For info, I never had any separate spin-up groups, so every time I came home with new files to transfer I would spin up all the drives at once, and of course every restart of the server did the same. Somehow it worked well and faithfully even though I was oblivious to the current loads versus the ratings.

     

    I am more at ease now, however, as I bought a Seasonic X650 Gold with 54A of hard drive lovin' under the hood.

     

    I can't wait for Tom to raise the maximum drive count in UnRAID to 40 so I can give this supply a proper workout, too  ;D
