Decto

Everything posted by Decto

  1. Looks fine; if any one drive fails, the data will remain available and you can rebuild the array, similar to RAID 5. Unraid is not a backup though - ensure you have a backup for any irreplaceable data.
  2. Decto

    PCI lanes?

    The 9211-8 is PCI-E 2.0, so each lane carries about 500 MB/s - roughly enough for 2 HDDs at full speed (SATA SSDs are faster and will saturate a lane sooner). You need at least a PCI-E x4 electrical slot to avoid the array slowing down during parity checks / drive recovery etc., when all drives need to be read at once. I expect this is a Z590 board with x8/x8/x4 electrical for the GPU, 9300-16i and LSI 9211. As well as the electrical lanes, the card has to fit into the slot: the LSI 9211 is physically an x8 card, so you either need the x4 electrical slot to be physically x8/x16, or a slot with a notched (open) back so a longer card can plug in and just connect its first 4 lanes. Board specific.
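As a rough sketch of that lane math (the ~500 MB/s per PCI-E 2.0 lane and ~250 MB/s per HDD figures are round-number assumptions):

```python
# Minimum electrical PCI-E 2.0 lanes needed so every drive on an HBA can be
# read at full speed at once, as happens during a parity check or rebuild.
# Assumptions: ~500 MB/s usable per PCI-E 2.0 lane, ~250 MB/s peak per HDD.
PCIE2_MBPS_PER_LANE = 500
HDD_PEAK_MBPS = 250

def lanes_needed(num_drives: int) -> int:
    total_mbps = num_drives * HDD_PEAK_MBPS
    return -(-total_mbps // PCIE2_MBPS_PER_LANE)  # ceiling division

print(lanes_needed(8))  # 8 drives on a 9211-8 -> 4 lanes, hence x4 electrical
```

With all 8 ports populated the card wants its x4 electrical minimum; fewer drives need proportionally fewer lanes.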
  3. I snipped the 3.3V wire between the PSU and the first SATA power plug. AFAIK it doesn't serve any useful purpose, and if it's ever really needed I can reconnect it. Saves all the hassle with tape. Under technical data you can check compatible accessories > Power Cables, and it seems the CS-6740 is compatible.
  4. Useful information and a data point, but without a clear trend of failures it may be excessive to write off the MX500 so completely. Perhaps it's an issue with a specific firmware version that only shows up, as they say, in an 'edge case'. My 500GB drive in the cache is over 2 years old with no issues apart from the nuisance alerts for 'pending sector', which I disabled. When I look at the SMART data, no sectors or NAND blocks have actually been reallocated etc., so it's just the way the drive reports rather than any indication of reliability or pending failure. The other (mirror) cache drive is a different brand, to split the risk of any systemic failure; I'd always recommend splitting the risk in a pool that way. My main array uses a deliberate mix of drive models and purchase dates. I have around 10 MX500s around the house (PC, Xbox, PS4, set top box) as they are one of the SSDs that still have some DRAM, and while some of these are up to 4 years old with 24/7 running, I'm yet to have an issue with any one of them. They're also widely installed in (guessing 30+) PCs I've updated for friends and family over the last few years, again with no reported failures or issues. TBH I usually pick up a couple in the Prime sales so I have a drive or two on hand.
  5. CPU should be fine. One of my backup servers is an HP N40L with a Turion 1.5GHz dual core, a slow AMD part from over 10 years ago. That runs just fine with 4 HDDs + parity using 3-4TB drives. I don't always manage to saturate the Gbit connection on upload, depending on the file sizes I'm transferring, but it's been perfectly robust. When I used it previously as my main NAS, it would struggle to stream hi-res video if I was dumping a lot of data to it, but that's somewhat a limit of the drives when writing direct to the array. Do you get the same issue from both drives? If possible, test with files / directories that are on one drive or the other. I would probably start with a SMART short self-test on each drive, then check/swap the SATA cables etc. It's possible that read errors are causing the poor performance; the read error count resets during a reboot, so what you see would be consistent with a drive issue or a drive connection issue.
  6. With gigabit you should get ~90-100MB/s each way depending on file size. Lots of small files is slow; big files should max the connection. Each drive takes 7-10W during read/write depending on the model, so the PSU with this CPU etc. should be good for at least 10 spinners and some SSDs (2-5W each). You can start with 3 drives for now (2 + parity); it's really easy to add extra drives later as long as each is the same size as or smaller than the parity drive. For disks, have a look at price/performance. 8TB is a good minimum, but don't turn down a deal on larger drives if it's too good. For backup, you may also want to consider whether you can store a copy of your most precious files in an online account, or keep a second backup with a relative and swap it every 3 months. I'd say 10% of my data is critical (multiple redundancy), perhaps 20% gets some extra precautions, and the remaining data is not that important or can be easily replaced, so I don't really back it up. (I do have some of it copied onto old high-hours drives sitting on a shelf for convenience, but I don't consider this a real backup.) This splitting strategy gives me quite good bang for the buck on backup. I've actually been using some 2TB externals which get rotated for the critical data, plus it's duplicated in online accounts.
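To put the PSU sizing above in numbers (a minimal sketch using the post's rough per-drive figures; spin-up surge is not modelled):

```python
# Peak steady-state power budget for the drives alone.
# Assumptions from the post: spinners ~10W worst case during read/write,
# SATA SSDs ~5W worst case. Brief spin-up surge is ignored here.
HDD_ACTIVE_W = 10
SSD_ACTIVE_W = 5

def drive_budget_watts(num_hdd: int, num_ssd: int) -> int:
    return num_hdd * HDD_ACTIVE_W + num_ssd * SSD_ACTIVE_W

print(drive_budget_watts(10, 2))  # 10 spinners + 2 SSDs -> 110 W
```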
  7. The E5-2640 V4 is a similar TDP. I have just had one delivered to replace an E5-2660 V3, as it has similar cores/clocks but is a later generation and very inexpensive, so I'm interested to see if there is any significant difference. I would not expect any noise/power benefit from switching the CPU; you would just have more cores + HT. However, applications will be a lot snappier, as these CPUs can boost to 3.4GHz vs the original CPUs which are static at 1.7GHz, so peak power consumption may increase but be offset as the system returns to idle more quickly. Unless you need the cores, the greatest efficiency saving in the Dell R430 would be switching to 1 CPU + its memory, perhaps stretching to a 14-core part if you go single CPU and need more. These processors have 4 DIMM channels, so for maximum memory bandwidth you need 4 DIMMs per CPU. As the memory is slow by modern standards, single channel (1 DIMM) is likely to be terrible; 2 DIMMs minimum and 4 DIMMs ideal. As a starter I would try 1 new CPU (with HT, boosting > 3GHz) + 4 DIMMs and see if that is enough performance.
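The DIMM-channel point is easy to quantify; a sketch assuming DDR4-2133 (typical for this Xeon generation) and a 64-bit bus per channel:

```python
# Theoretical memory bandwidth scales linearly with populated channels.
# Assumptions: DDR4-2133 (2133 MT/s), 64-bit (8-byte) bus per channel.
def mem_bandwidth_gbs(channels: int, mts: int = 2133) -> float:
    return channels * mts * 8 / 1000  # MT/s x bytes = MB/s, then -> GB/s

for ch in (1, 2, 4):
    print(f"{ch} channel(s): {mem_bandwidth_gbs(ch):.1f} GB/s")
```

One DIMM leaves three quarters of the theoretical bandwidth on the table, which is why 4 DIMMs per CPU is the ideal population.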
  8. The My Cloud EX4100 should be capable of saturating the gigabit connection. If you consider this slow, you are not likely to get a significant speed-up with Unraid unless you add a 2.5Gb or 10Gb network card etc. Even then, spinning HDDs would not be more than ~1.5x faster. 18TB drives take a long time (>1 day) to run a parity check, clear, recovery etc., so personally I feel the optimum size is around 8TB. Otherwise the config looks fine. Unraid is not a backup; do you have a backup plan at least for the 'important family photos and videos (converted VHS, iPhone video, and h264 from mirrorless camera)'? Movies etc. can be redownloaded.
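The >1 day figure falls straight out of capacity divided by sustained read speed; a sketch assuming ~150 MB/s averaged across the whole platter (outer tracks are faster, inner tracks slower):

```python
# A parity check must read every sector, so runtime ~ capacity / average rate.
# Assumption: ~150 MB/s sustained average across the full surface.
def parity_check_hours(capacity_tb: float, avg_mbps: float = 150) -> float:
    return capacity_tb * 1_000_000 / avg_mbps / 3600

print(round(parity_check_hours(8), 1))   # ~14.8 h for an 8TB drive
print(round(parity_check_hours(18), 1))  # ~33.3 h for 18TB, well over a day
```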
  9. Are any of the M.2 slots occupied? *1. When a device in SATA mode is installed in the M.2_1 socket, the SATA_1 port cannot be used. *2. When a device is installed in the M.2_2 socket, the SATA_5/6 ports cannot be used.
  10. Hi, workstation class boards are expensive; you'll need to shop around for a price, but £350-£600 / €420-€720 is common depending on features (Gigabyte, Supermicro). Cheaper: LGA1200 may be a little less expensive, as I think it was PCI-E 4 and DDR4; a personal preference of mine is to stay at least one generation behind for Linux, to let others beta test my data storage hardware + drivers (LGA1200: Supermicro). Also check ASRock under their 'ASRock Rack' branding. Personally I still feel 8TB drives are a sweet spot between parity cost, SATA port usage, cost/TB and rebuild / parity check time for small to mid size arrays. With the caveat on fresh hardware support in the kernel, workstation boards are generally well supported if they mainly use the core chipset. If there is a non-standard network / RAID etc. chip built onto the board, you may want to look at how well that is supported.
  11. My previous server had a couple of 2.5" SMR drives in the array and these were fine, though they were used for fairly static storage with occasional large writes but mostly read-only. The parity drive was always CMR though. No issues with parity checks etc. If you have a lot of active writes on a drive while running parity, then SMR may become more of an issue. Is one of your cache drives connected by USB? Has that always been the case? SSK_SSK_Storage_DB98765432116DA-0-0-20230206-1619 cache_256_sata (sdb) /dev/sdb: Unknown USB bridge [0x7825:0xa2a4 (0x1507)]
  12. Idle is still spinning but not active. Powersave is spun down.
  13. Hi, the UPS question was in case you had a UPS inline and it was pulling excess current to charge a failing battery. How are the drives powered? Are they all fed from the EVGA, or do you have a powered drive shelf etc.? I think there are two main options: 1) The Tasmota is giving a totally incorrect power reading. Can you read this from any other device, smartphone / web interface etc.? I'd swap it out or add a second power meter inline to validate; even a $10 plug-in meter will differentiate between 50W and 200W. 2) The HDDs are idle but not in power save. Typically HDDs are approx 5W at idle, so that is easily 120W or 140W from the wall. In power save they would be ~0.8W each, or ~20W total, hence the question of what happens if you remove the HDD power.
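Option 2 above is simple arithmetic; a sketch using the post's own per-drive figures (~5W idle, ~0.8W standby) with the drive count assumed:

```python
# Idle vs standby for a rack of spinners: the difference is easily visible
# on even a cheap wall meter. Per-drive figures from the post; the 24-drive
# count is an assumption for illustration.
IDLE_W = 5.0      # per spinning-but-idle HDD
STANDBY_W = 0.8   # per spun-down HDD

def array_watts(n_drives: int, per_drive_w: float) -> float:
    return n_drives * per_drive_w

print(array_watts(24, IDLE_W))     # 120.0 W idle
print(array_watts(24, STANDBY_W))  # ~19 W spun down
```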
  14. I'd start with troubleshooting your existing build; it seems a good spec for your current workload. Run with one DIMM, then the other, for a period to see if you get a crash. Check / swap the SATA cables. Turn on syslogging and see what errors you get. If you do want to upgrade, I'd still avoid the old servers; you'll be buying someone's e-waste at this point. Here is an equivalent workstation LGA1700 board as an example: https://www.supermicro.com/en/products/motherboard/x13sae-f Built-in IPMI, ECC support, 3x M.2 NVME, 8x SATA
  15. When they spec a base of 2.1GHz, this is a minimum guaranteed frequency at the specified TDP; with a higher CPU TDP, the base and boost frequencies can be higher. The CPU can then boost to use 'spare' thermal headroom if it is only using a small number of cores, or for short periods etc. Most CPUs have a minimum idle frequency around 0.8GHz at reduced voltage, so I expect all of these CPUs will have exactly the same base idle consumption, somewhere around 5W. Often there is little power consumption difference between the TDP variants for occasional boost loads: a higher TDP CPU boosts its clock for a shorter period than the lower TDP one to get the same task done, and though there are some minor efficiency losses at full boost, it's not worth considering in my opinion unless you have a lot of regular sustained loads. The only reason I see to go with the lower TDP CPU is if you are thermally or power limited, e.g. you can only install a very small heatsink, want to make the unit as silent as possible by limiting power consumption, or are using a small low wattage PSU (Pico PSU) etc. Personally I'd go for the standard CPU: I wouldn't want to overclock my data store, so the K is cash I don't need to spend, and the T model usually costs the same or even more, due to rarity, for less performance.
  16. 250W idle is insane. The CPU temp is only 1C higher than the motherboard temp, which would confirm to me that the CPU isn't drawing any significant power. I'd question whether the kernel has full power-saving support for Z790 / 13xxx yet, so it may be a power state issue for connected peripherals; I've always stayed a generation or 2 behind to make sure I get decent Linux support. Can you detect anything getting hot in the case? 200W doesn't disappear; are 'spun down' HDDs actually cold? Do you have additional drives / devices connected but not in the array? Do you have a UPS inline? Can you remove the HBA / expander, remove the power to the HDDs, then start the system (not the array)? Give it a few minutes to idle. That should give you a baseline for the platform; I'd expect this to be 50W or lower. Even my XEON E5 V3 (10C) idles at 35W for just the board, memory and cooler with an 850W PSU.
  17. I'm less convinced that a 7-8 year old ex data center server is any more reliable than new consumer class hardware over any future 5+ year period. They can be pretty noisy and inefficient, as they are designed to run in an AC-cooled rack at near full load continuously. In a home location it will run hotter and noisier than designed, and the screamer fans in these can draw 50W+ just forcing the air through. I've run various home servers / PCs over the last 30 years for me, the kids etc., and out of all of it I had 1 mainboard fail (AMD B450) and a couple of HDDs that threw SMART errors. Everything else was retired / sold without failure, plenty of it 10+ years old. I even have an old Phenom X3 from 15 years ago that boots and runs in the shed, just a little short on memory in the modern day. It also really depends on the applications you want to run and the spec of the server: many low clocked cores with generations-old IPC vs modern 5GHz+ and high IPC. My main Unraid uses a workstation grade Supermicro ATX board, standard case (10 HDD + 2 SSD), E5 XEON and ECC. A few years ago that was ideal, though I'd build somewhat differently if starting today. For my use case it really isn't worth the investment to update, as it's doing everything I need and only needs the occasional poke for a version update, or to clean the dust from the filters.
  18. Interesting to compare. With increasing energy costs I would like to reduce my idle power consumption, though I think the ROI on a potential ~15W saving (£40 / $50 per year) isn't going to pay off a modern platform, so I'll be sticking with this for now. The one thing I may change is the UPS, as it adds 15-20W onto the base load according to my power socket, while a more expensive line interactive unit adds just 3-5W from what I can see in a bit of random forum feedback; so the UPS is likely the lowest hanging fruit and will pay for itself in a couple of years.
Supermicro X10SRA
Xeon E5 2660 V3 (10C / 20T)
4 x 16GB ECC
6 x 8TB + 1 x 8TB parity
3 x SSD (2x cache, 1 scratch)
Quadro P2000
Corsair RM850X PSU
Idle (~51W, +/- 4W resolution on the UPS) - HDDs spun down
  19. 18 months on from the OP, this thread just fixed my high idle power for a P2000 on version 6.11.5: ~20W at P0 down to 7W at P8. Thanks!
  20. Decto

    Too much?

    Seems like a case for a simple NAS with streaming. You can save some cash with an i3-10100, which is a 4c/8t part; this will be plenty for NAS / streaming, and the built-in GPU does a good job of transcoding in Plex or other media streaming apps. If you want to transcode 4K so you can stream to non-4K devices, then you need either the Intel iGPU or an add-in GPU, as it can be very hard on the cores. This chip will easily run a good number of dockers for gathering and streaming media, and even a lightweight VM or two if you need it. More cores would be for heavy VM usage, e.g. running gaming servers, or if you were using this as both a NAS and your main desktop (via a VM). For motherboard, a B460 is a more budget choice with plenty of expansion; the main thing you lose is the option to split the main x16 PCI-E into a pair of x8 PCI-E for expansion. Look for a board with at least a x16 and a x4 PCI-E slot; a couple of extra x1 slots are useful as well, but don't get too carried away with premium boards unless you will use the 2.5Gb network etc. Each x1 PCI-E can support 2 extra spinning drives at native speed, while the x4 will give 8 extra drives. If you can get a deal on a B365 and similar CPU then that's a good option too; there is little difference other than Intel offering a little more for the money in the 10 series and adding a 10-core part, so discounted last gen would be fine. The areas where you may want to spend a little extra are PSU and drives. That PSU has power plugs for around 8 drives, and splitting power cables isn't recommended; an extra drive per string is OK, more than that can cause power issues. If you plan significantly more than 8 drives in the future then I'd recommend a PSU with more SATA power strings, or Molex that can be adapted to SATA. For drives, depending on how much storage you need, fewer larger drives are recommended, e.g. 3x 8TB is preferred to 5x 4TB.
You do waste a little more capacity using a larger drive as parity; however, the expensive part of the whole system is usually the number of physical drives, as each needs power, cooling, a slot in your case and a SATA port. So if you run out of capacity to add drives you may need a better PSU, a bigger case, a more complex SATA add-in card etc., or even a motherboard with more expansion slots. The cost of the extra 'wasted' space on the parity drive can soon become trivial in comparison. Also, each drive is a potential point of failure. Good luck
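The fewer-larger-drives argument can be sketched with a couple of helper functions (the price used in the cost helper is a made-up placeholder, not a real market price):

```python
# In an Unraid-style array one drive is parity, so usable capacity is
# (n - 1) * size. Larger drives 'waste' more TB on parity but need fewer
# ports, bays and power plugs for the same usable space.
def usable_tb(n_drives: int, size_tb: int) -> int:
    return (n_drives - 1) * size_tb

def cost_per_usable_tb(n_drives: int, size_tb: int, price_per_drive: float) -> float:
    return n_drives * price_per_drive / usable_tb(n_drives, size_tb)

print(usable_tb(3, 8))  # 3x 8TB -> 16 TB usable
print(usable_tb(5, 4))  # 5x 4TB -> also 16 TB usable, but 2 more ports/bays/plugs
```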
  21. It should idle low, but it will depend on the system. A more basic motherboard with fewer VRM phases will use less power at idle. Use a suitable PSU, gold rated or better and not oversized. My Xeon E5 2660 V3 idles at 40W for the bare board, a GT710 and 4 registered DIMMs, so I would expect the i3 will be ~20W. You then need to add a watt or two for each spun-down drive and 3-4W for each 120mm cooling fan. Ideally the board will have a decent fan controller so the fans respond to case temp and can be spun down. Cooling is usually the issue with lots of drives, so it's best to have them mounted right behind a fan wall.
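A back-of-envelope version of that idle estimate (per-item figures are from the post; the base platform wattage and counts in the example are assumptions):

```python
# Idle power sketch: base platform + spun-down drives + case fans.
# Per-item figures from the post: ~1.5 W per spun-down drive ("a watt or two"),
# ~3.5 W per 120mm fan ("3-4W"). The 20 W base platform figure is a guess.
def idle_estimate_w(base_w: float, n_drives: int, n_fans: int,
                    drive_w: float = 1.5, fan_w: float = 3.5) -> float:
    return base_w + n_drives * drive_w + n_fans * fan_w

print(idle_estimate_w(20, 6, 3))  # ~20W i3 platform + 6 drives + 3 fans -> 39.5 W
```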
  22. Decto

    Too much?

    Hi and welcome. Before anyone can help, you need to post your requirements. Is this a simple NAS or a media server? Will it host VMs, and if so, what will these be for? What apps / dockers do you plan to run? How much storage do you plan to attach?
  23. SATA 2 is good for ~250 MB/s; 195 MB/s is about right for a WD 8TB drive at the start of the preclear, but it will slow down as it moves to the inner disk tracks.
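Where the ~250 MB/s ceiling comes from: SATA 2 runs a 3 Gb/s line rate with 8b/10b encoding, so only 80% of the bits are payload, and real-world throughput sits a bit below that again due to protocol overhead:

```python
# SATA 2 payload ceiling from the line rate and 8b/10b encoding.
def sata2_payload_mb_s() -> int:
    line_rate_mbit = 3000                     # 3 Gb/s expressed in Mb/s
    payload_mbit = line_rate_mbit * 8 // 10   # 8b/10b: 8 data bits per 10 line bits
    return payload_mbit // 8                  # bits -> bytes: MB/s

print(sata2_payload_mb_s())  # 300 MB/s before protocol overhead
```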
  24. The EDAZ drives are air filled and run 5C or so hotter than other WD drives.