
  1. Over in the UK, I can get the 8TB WD external drives for ~£120-£130 on sale or from the EU; you just have to wait for the very common sales. The cheapest bare drives are £180 for the Seagate SMR, while Ironwolf are £200 and WD Red are £220. I've shucked ~20 drives in the last 12 years and only had one fail in use, which was a 500GB Seagate 7200.10. Even then, it didn't fail outright; it developed a hum and has since been used as a scratch drive on the bench. I think it didn't like mounting on its narrow edge as it's quite flat. I'll take the chance on warranty rather than paying top price to get back a refurb warranty drive I wouldn't really trust. For the WD shucks I was getting WD80EMAZ or WD80EZAZ, which both have the helium counter in SMART, so they are likely the HGST helium drives. I did have one very recent drive which is a WD80EDAZ; Google suggests this is the 5-platter air drive. I can confirm it has no helium counter in SMART, runs ~5-6C warmer than other drives in the same enclosure, and actually increased the temperature of the drive next to it by 2-3C. Most likely this is the newly launched WD Ultrastar DC HC320 8TB SATA Enterprise HDD 7200rpm (HUS728T8TALE6L4). I had to place a fan on the enclosure during a pre-shuck test as the drive hit 60C in the plastic shroud without one; in my enclosure it stays below 40C during a parity check, so it's not really an issue in a case with reasonable cooling. Ironwolf and Ironwolf Pro are stated as CMR, and the HGST He8 is also CMR. PMR is Perpendicular Magnetic Recording; effectively it's a way to make the read/write head smaller by using a vertical alignment. To my knowledge you can use PMR heads to write either CMR (conventional) or SMR (shingled) tracks to the disk. Unraid doesn't need matching drives, so to avoid a bad batch you can always buy a mix of drives. My 8TB drives were bought over ~12 months as I upgraded from 3/4TB.
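A quick way to tell the helium shucks from the air drives is the SMART attribute mentioned above: HGST/WD helium drives report attribute 22 (Helium_Level), which air-filled drives lack. A minimal sketch of that check, parsing the attribute table as printed by `smartctl -A` (the sample output below is illustrative, not captured from a real drive):

```python
# Illustrative smartctl -A attribute table; a real helium drive (e.g. WD80EMAZ)
# lists attribute 22, an air drive (e.g. WD80EDAZ) does not.
sample_smartctl_output = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  0
 22 Helium_Level            0x0023   100   100   025    Pre-fail  100
194 Temperature_Celsius     0x0002   120   120   000    Old_age   37
"""

def is_helium_drive(smart_table: str) -> bool:
    """Return True if SMART attribute 22 (Helium_Level) is present."""
    for line in smart_table.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "22" and "Helium" in fields[1]:
            return True
    return False

print(is_helium_drive(sample_smartctl_output))  # True for a helium drive
```

In practice you'd feed this the output of `smartctl -A /dev/sdX` before committing to the shuck.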
  2. AFAIK that's an SMR drive, so I wouldn't use it for parity; I've used SMR for data and they were OK. Personally I've been shucking 8TB WD MyBook or Elements drives, which had HGST He8 drives inside; however, these are now being replaced by a 5-platter drive which uses air rather than helium and runs 5-6C hotter. Warranty will be void, but the retail drive has a 5yr warranty so it's likely to be reasonably robust in Unraid.
  3. Do you have a monitor or HDMI dummy plug connected to both GPUs? My Nvidia card doesn't fire up for streaming without a 'screen'.
  4. Hi, pending sectors are considered 'suspect' by the drive. If the drive subsequently reads OK from them, the pending count will reduce; effectively they will disappear. If the drive continues to be unable to read from them, they will be remapped; in that case the pending counter is reset to zero and the remapped counter incremented. It looks like your drive is starting to fail, but not consistently enough for the drive to remap the sectors yet. https://kb.acronis.com/content/9133
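The pending/remap behaviour described above maps onto two SMART attributes: 197 (Current_Pending_Sector) and 5 (Reallocated_Sector_Ct). A rough sketch of how you might read the two counts together (the wording of the verdicts is my own, not an official smartmontools classification):

```python
# A read failure marks a sector "pending" (SMART 197); a later successful
# read clears it, while a persistent failure remaps it, moving the count
# into the reallocated attribute (SMART 5).
def classify(pending: int, reallocated: int) -> str:
    if pending == 0 and reallocated == 0:
        return "healthy"
    if pending > 0 and reallocated == 0:
        return "suspect: sectors awaiting re-read, watch this drive"
    return "failing: sectors have been remapped, plan a replacement"

print(classify(pending=8, reallocated=0))   # the situation in this thread
print(classify(pending=0, reallocated=8))   # after the drive gives up and remaps
```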
  5. If you want to move Docker and VMs to cache then you need to disable them first; check the share is set to cache 'Prefer' (move from array to cache), then run the mover. If the services are running, the mover won't touch those files.
  6. The soon-to-be-released high-end RTX 3080 / 3090 have TDPs of 320W/350W respectively. The still very powerful RTX 3070 (supposedly 2080 Ti performance) is a more reasonable 220W. AIB cards can be higher than that; some of the current Radeon 5700 XT cards are hitting 300W, and it is likely similar for the OC versions of the current Nvidia cards. The current high-end Intel CPUs are hitting 200W+ for the turbo period, or continuously if the turbo timer is deactivated in the BIOS. Same with AMD, where the 125W CPUs hit ~170W in the short-term boost. You mostly won't see this when gaming as it tends to be GPU-bound, but there are likely to be peaks. I'd expect this is only an issue if you are gaming when you have all 15 drives spinning, e.g. during a parity check or rebuild. A parity check with a mix of 4 and 8TB drives takes me around 17 hours. 850W is a good balance. I was planning for 850W, but due to Covid-related supply issues and pricing, a 1000W unit from a high-street retailer was cheaper than any of the 750W-850W models I was looking at on the common online retailers, and it was platinum vs gold. The only downside is that it's 200mm long. Pricing and supply look to be getting back to normal, so that may not be such an issue now.
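Putting the figures above into a back-of-envelope budget shows why 850W is a sensible target for this build. The overhead figure for board, RAM and fans is my own assumption; the rest are the numbers quoted in the post:

```python
# Worst-case simultaneous draw: GPU boost + CPU boost + all drives active.
gpu_peak_w = 320          # RTX 3080 TDP, from the post
cpu_peak_w = 170          # 125W-class AMD CPU during short-term boost
drives = 15
drive_active_w = 8        # upper end of the quoted per-drive read/write figure
overhead_w = 60           # board, RAM, fans, peripherals (assumed)

peak_w = gpu_peak_w + cpu_peak_w + drives * drive_active_w + overhead_w
print(f"Estimated peak draw: {peak_w}W")            # 670W
print(f"Load on an 850W PSU: {peak_w / 850:.0%}")   # ~79%, a comfortable margin
```

Running a PSU around 70-80% of rated load also tends to sit near its best efficiency.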
  7. Hi, the power supply looks fine; it has 4 outlets for peripherals. I'd aim for 4 to 6 drives per outlet. 6 drives @ 15W is 90W on a single conductor, whereas the 3-conductor PCI-E cable is rated at 75-150W, so the load adds up. In reality WD specs the 12TB drives at an average of 6W read/write, but others are closer to 8W; it's just the start-up current that can be high. If you think you may add a mid-range GPU at some point, it would be worth looking at ~850W. I have a semi-modular Seasonic Core Gold in a desktop PC; I wouldn't recommend that for your use as it only has 2 peripheral outputs, so the load would be too high.
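The per-cable arithmetic above is worth spelling out, since spin-up is the real constraint. The 25W spin-up figure below is a commonly quoted ballpark for 3.5" drives, not a measured value:

```python
# Load on one peripheral cable, per the 4-6 drives-per-outlet guideline.
def cable_load_w(drives_on_cable: int, watts_per_drive: float) -> float:
    return drives_on_cable * watts_per_drive

print(cable_load_w(6, 15))   # 90W steady-state, the figure in the post
print(cable_load_w(6, 25))   # 150W if all six spin up at once - the worst case
```

That worst case sits right at the top of the 75-150W cable rating, which is why spreading drives across all four outlets matters.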
  8. A few observations. The CPU/board combination would benefit from quad-channel RAM, so 4 sticks and 32GB as a minimum. The RTX 20xx range is being replaced right now by the RTX 3070 range, where the RTX 3070 offers roughly RTX 2080 Ti performance at 2070 prices, so I wouldn't buy that generation unless it is very well discounted. For Plex transcoding, I don't think the CPU has an iGPU, so you are working with 1 GPU: if you pass through the GPU to Plex, it's not available to the VMs; if you pass it through to the VMs, then Plex can't use it. The favoured card for heavy Plex usage is the Quadro P2000 as it has unlimited streams, whereas the consumer cards are limited in software to 2/3. How are you planning to use the VMs? Will these be used concurrently (multiple users etc.), or are you just launching different VMs for different activities? This is quite a high-end setup for light gaming and a couple of work/browsing VMs. What is it that your current setup can't do?
  9. Both parity drives need to be the same size as, or larger than, any other drive in the array. If you want an 8TB array drive, they both need to be 8TB.
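The rule above reduces to a one-line check: every parity drive must be at least as large as the largest data drive. A sketch, with sizes in TB (the values are just examples):

```python
# Unraid parity sizing rule: min(parity sizes) >= max(data drive sizes).
def parity_ok(parity_sizes: list, data_sizes: list) -> bool:
    return min(parity_sizes) >= max(data_sizes)

print(parity_ok([8, 8], [8, 4, 3]))   # True: both parity drives match the largest data drive
print(parity_ok([8, 6], [8, 4, 3]))   # False: the 6TB parity is too small for an 8TB data drive
```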
  10. Many people use Unraid on their main PC and may have multiple VMs to use, e.g. with the same hardware you can use OSX, different Windows versions, Linux etc. and hop between them quickly. With many-core processors becoming common, there's plenty of headroom to run Unraid and all the dockers in the background, then game, work, etc. in VMs which are not cluttered by 100 apps. My server is remote; I use Parsec (a game-streaming app) to stream games to a low-powered laptop for the kids. Another VM streams my games to the shed or wherever I am with some low-powered hardware; a Celeron J1900 in a NUC-type device works fine as a client. I also have a finances VM which I use for my banking etc., and it never does anything else. I also have a sacrificial VM or two which I'll use to open and scan files from torrents, or when trying to find files, drivers etc. on those sites that are deliberately obtuse. Then there are multiple VMs for distros I've tested, etc. While all of these could be on my local machine, having them on the server means I can access them from any PC in the house, or even remote in via VPN when I'm away from home. I agree that Docker eliminates a lot of VMs, but VMs are what made me upgrade from a more basic file server to something with more power.
  11. Hi, If you have all but one disk and the parity is good, the array should be recoverable. Best advice, leave it well alone until one of the experts drops in with some guidance or you are very likely to lose some data. It can take a day or so to get support depending on time zones etc. Good luck.
  12. I have two LSI HBAs, one internal, one external. It makes no difference if drives are moved between these, or even to SATA, as they are both flashed to IT mode. Take a screenshot of your array before you start, ideally with the array stopped; then, if there is any translation issue, you have the IDs and slots for all drives.
  13. As I understand it, you get two x8 PCIe 4.0 links from the CPU to the first two slots, and the chipset gets 4 lanes of PCIe 4.0. The chipset then provides an x8 link to the 3rd GPU. The chipset connection is x4 PCIe 4.0, which is the same bandwidth as x8 PCIe 3.0. This third slot's bandwidth is shared with other devices, but it is currently the best option for 3 GPUs: you get up to PCIe gen 3 x8 bandwidth depending on contention from other devices, whereas other boards give you at best an x4 electrical connection, so you are limited with a PCIe 3.0 GPU. Theoretically, if your VM uses the 3rd slot along with SATA/NVMe devices also on the chipset, direct memory transfers reduce requirements on the CPU link. Bifurcation would give you x4 links at PCIe 3.0, as the GTX 16x0 cards are PCIe 3.0.
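The "x4 gen 4 equals x8 gen 3" equivalence above follows directly from the per-lane rates. Using the approximate post-encoding throughput figures (PCIe 3.0 ~0.985 GB/s per lane with 128b/130b encoding, PCIe 4.0 double that):

```python
# Approximate usable throughput per lane, in GB/s, after encoding overhead.
GBPS_PER_LANE = {3: 0.985, 4: 1.969}

def link_gbps(gen: int, lanes: int) -> float:
    return GBPS_PER_LANE[gen] * lanes

print(f"x4 gen4: {link_gbps(4, 4):.2f} GB/s")   # ~7.88 GB/s via the chipset uplink
print(f"x8 gen3: {link_gbps(3, 8):.2f} GB/s")   # ~7.88 GB/s - effectively the same
print(f"x4 gen3: {link_gbps(3, 4):.2f} GB/s")   # ~3.94 GB/s - the bifurcation case
```

This is why the chipset-fed x8 slot beats a bifurcated x4 link for a PCIe 3.0 card, contention aside.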
  14. Bifurcation, while possible, is a bit of a lash-up. Better to buy a board designed for 3x PCIe x8 electrical, with the benefit of PCIe gen 4 bandwidth, e.g. the Asus Pro WS X570-ACE.
  15. I'm having a similar experience with USB drives. My current server has been running happily on a 16GB USB 3.0 Sandisk Ultra Fit for around 3 years. I'm in the process of upgrading to a new server, so I'm running a trial with the intent of converting this to a Basic licence and reusing an old HP N40L as parity-protected cold storage. I like the small drives as they are difficult to break, though I could use the internal USB port as discussed here. Anyhow, the 16GB USB 3.0 Sandisk Ultra Fit wasn't available, so then the fun started:
      16GB Sandisk Ultra Fit USB 3.1 (all black) - no GUID.
      32GB Sandisk Ultra Fit USB 3.0 (old one with metal insert) - won't boot in any device; these look identical to the 16GB that works in any device.
      32GB Samsung Fit Plus USB 3.1 - won't boot in the N40L at all, and will only boot in the SuperMicro X10 if I enable 'allow UEFI' in customise during creation. The 16GB Ultra Fit boots without it.
      A 16GB USB 2.0 Cruzer Fit arrived today, so I'll see if that still works, and maybe stock up on a few spares for the future as they are very cheap in this size. I think larger is better to a point as it helps the wear levelling even though the number of writes is small. I'll also take a look at the Kingston SE9; I have at least one genuine USB 2.0 version of these. Another looks fake as there is only a logo on one side, and I had a couple stop working in general use, but those could have been fake. I also have a couple of the later USB 3.0 versions. The triple pack could be useful if the GUID is common to all sticks and compatible with Unraid. Could these be a good backup option? If a stick went bad, you could just copy the latest backup to a replacement stick without needing to go through licence replacement, which could be handy if you're running pfSense or something on the server.