Decto

Everything posted by Decto

  1. So, an update. Summary of the initial post: when originally added to an array, the drive had read errors in the 10-20% zone. These were not sufficient to trigger SMART or pending reallocation, and a non-correcting parity check did not find any errors. The errors seemed to occur during write, as if the drive was re-reading what it had just written. A preclear (Read/Zero/Read) in a completely different system had read errors on the pre-read in the 10-20% zone, but no errors in the zero or post-read. Update: the drive has since had two additional full preclear cycles (Read/Zero/Read), which completed at the expected speed and without any errors in the logs. I then added it as parity 2, let the parity rebuild and then ran a non-correcting check. Not a single error in the logs, and the SMART data remains clean. So I guess for now I'll keep it in the parity 2 slot. I'm about to add a 5th array drive, with a 6th already precleared for future expansion (Prime Day deals). There is little information about how drives behave, so I wonder if the sectors were initially a little lazy, or if the drive has somehow compensated. I have read that all drives are effectively read and written to in the factory post-assembly, and during this any substandard sectors are excluded. I was curious whether a few sectors would get remapped and the drive would then behave normally; however, it seems to have returned to normal without any remapped sectors. Let's see how it gets on.
  2. There are a number of options. I went for a single E5-2660 V3; these and the E5-2680/2690 V3 are increasingly affordable. I then used an ATX format motherboard and ECC DDR4. As it's a workstation board, I have 4x PCI-E x16 slots which can all do PCI-E x8 electrical, plus 10 SATA ports. For Ryzen, the Asus Pro WS X570 gives you 3 x16 slots that operate at x8 electrical; however, it has a limited number of SATA ports. Some of the Intel desktop solutions with an iGPU may allow you to need only one GPU, as the iGPU will handle Plex transcodes. It really depends on the workload; for a NAS and occasional desktop you don't need high core counts or multiple processors. I'd also consider how many drives you really need. It's easy to get caught up in lots of low-capacity drives; it's much better to have a small number of larger drives and avoid the costs of cooling, power, SATA ports, space etc.
  3. A pair of 60W CPUs probably produce less than 10W each when ticking over. Even under load they should be easy to keep cool. I have an E5-2660 V3 with a 3U Noctua cooler; with the motherboard fan controller it's barely audible. Active cooling is definitely needed, as there's no way you can get enough air moving in a standard case, though you don't need all that much air; even a 600-1000 rpm fan on a cooler is likely to keep the CPU in check, though I'd recommend something with heat pipes. One thing to watch is that the low-profile motherboards tend to expect very high fan speeds, so you may have some fun getting the board to do anything other than run the fans at full speed. You could consider selling off the server and swapping to a single E5-2660 V3 (10 core) or E5-2680 V3 (12 core), as you can then use a more compact ATX board and will have lower cooling requirements etc.
  4. I wouldn't recommend that controller. It is a native 2-port SATA PCI-E x1 controller with port multipliers. A PCI-E x1 card only has enough bandwidth for 2 modern HDDs, so it will quickly become a bottleneck and you are likely to get drive errors, drives ejected from the array etc. SATA 3Gb/s is way faster than any conventional HDD.
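To put rough numbers on that bottleneck, here's a quick sketch. The bandwidth figures are approximations I'm assuming, not values from any datasheet:

```python
# Rough sketch of why a PCIe x1 card behind a port multiplier bottlenecks
# modern HDDs. Per-lane figures below are my approximations.

PCIE_X1_MB_S = {"2.0": 500, "3.0": 985}  # approx usable MB/s per lane

def per_drive_bandwidth(gen: str, drives: int) -> float:
    """MB/s each drive gets if all of them transfer at once."""
    return PCIE_X1_MB_S[gen] / drives

HDD_MB_S = 200  # a modern large HDD sustains roughly 190-250 MB/s on outer tracks

for n in (2, 4, 6):
    share = per_drive_bandwidth("2.0", n)
    print(f"{n} drives on PCIe 2.0 x1: {share:.0f} MB/s each "
          f"-> {'OK' if share >= HDD_MB_S else 'bottleneck'}")
```

With more than two drives sharing the lane, each drive's share falls below what a single modern disk can sustain, which is when timeouts and ejected drives start.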
  5. The Mybook uses drive encryption, so you can't put any old drive in it. I have multiple WD Elements and WD Mybook boards; the Elements boards can use any disk, but the Mybook boards only seem compatible with the drives they arrived with. It's a hardware limitation. You can try this
  6. Hi, personally I prefer mATX, such as the Node 804, though I'm actually using full ATX; mITX can be very limiting and expensive. A few other thoughts. For this type of build I would look to Intel. The iGPU on the Intel CPUs works very well for transcoding in Plex, Emby etc. AMD doesn't have the same support, and some modern formats could easily hit 100% CPU during transcode, whereas the iGPU can do that on ~20W. If you set the drives to spin down, then the only noise will be the air passing through the machine. If you use the BIOS fan management, it will be audible only if you listen for it. The exception may be parity checks, when all drives are spinning; however, you can use a plugin to manage the timings. I do use ECC, but I read it is not a big issue with Unraid, no different to your home PC. Other OSes with the ZFS file system are more at risk from memory errors. It is important to run memory at the JEDEC standard speed for the CPU, not the XMP profile speed, so high-speed memory is not needed. You'd need to check, but that could be 2400/2666 on the AMD chip.
  7. Hi, a couple of thoughts. Unless you really need the compact form factor, an mATX build will often be less expensive and have more features / expansion; the Node 804 is a compact option. I don't understand your drive selection: there is no benefit in multiple drives, as each file is stored on a single drive, so read speed is limited to individual drive speed. For 4TB of storage, a 4TB parity and a single 4TB data drive would be better. You can then add 4TB drives to your heart's content... or until you exceed your licence / number of SATA ports. SATA ports quickly become limiting, so it is recommended to go for the biggest disks (within reason) up front. No issues with the cache size, but given your 4TB of storage it seems a bit large. Usually this would be for VMs, appdata and any regular transfers; however, you don't mention VMs, so it may go mostly unused. Perhaps the cash would be better spent on 8TB + 8TB drives and a smaller cache. Any reasonable quality USB 2.0 drive or better with a GUID is fine. I use both the 16GB SanDisk Ultra Fit USB 3.0 (3 years) and a metal 16GB Kingston DataTraveler USB 2.0 (1 year). There is no need for a fast drive; Unraid is compact and runs in memory once loaded. USB 2.0 ports and drives are generally preferred as more robust. The USB 3.1 SanDisk Ultra Fit I had didn't seem to have a GUID, and I've also had some mixed results with 32GB drives, whereas 16GB drives work fine in all systems. In reality you'll be using less than 1GB; my flash backup is 400MB, so the only thing a large drive gives you is slightly better endurance.
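The parity point above (each file lives on one drive; a single parity drive can rebuild any one failed disk) can be illustrated with a toy XOR sketch. This is a simplified illustration of the general single-parity technique, not Unraid's actual implementation:

```python
# Toy illustration of single-parity protection: parity is the XOR of all
# data drives at each byte position, so any one missing drive can be
# recovered from the survivors plus parity. Simplified sketch only.
from functools import reduce

def parity(drives: list[bytes]) -> bytes:
    """XOR the same-offset bytes of every data drive together."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*drives))

def rebuild(surviving: list[bytes], par: bytes) -> bytes:
    """Recover a failed drive's contents from the survivors plus parity."""
    return parity(surviving + [par])

d1, d2, d3 = b"file-on-1", b"file-on-2", b"file-on-3"
p = parity([d1, d2, d3])
assert rebuild([d1, d3], p) == d2  # drive 2 'failed' and is recovered
```

Because parity is computed across all data drives position by position, the parity drive must be at least as large as the largest data drive, which is why a 4TB parity + 4TB data pairing works.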
  8. This could be the start of an issue with the drive. Every drive I've seen reads 100; less than 100 indicates a leak. The SMART in the full diagnostics shows 086, was this taken earlier?
  9. No issues with EZAZ or EMAZ; both are He10 drives and work fine in my array. However, WD has now replaced these in the Mybook and Elements with EDAZ (air-filled) drives. Most of the other Red/Gold/Purple etc. drives have also moved away from helium until you get to 16TB+. The EDAZ are ~10% faster across the platter but run noticeably hotter. E.g. these three have been preclearing for nearly 2 days; the EDAZ and EMAZ are under almost identical load, but the EDAZ temps show a notable uplift. I can bring this difference down if I turn the fans up, but it's cool weather now, so silence is good. Nothing wrong with the air drives, and you can check the drive ID in CrystalDiskInfo etc. before you crack them open. Just an opportunity for disappointment if you expect He10... you may get lucky with old stock; 1 in my last 4 was EMAZ. From data in the forums, it's also likely that all 8TB WD drives are 7200rpm, detuned but still spinning at 7200rpm, hence the 8-10W power consumption; WD markets them as '5400rpm class'. Hence there is no efficiency saving from WD 5400rpm 'class' drives, so the Toshiba N300 7200rpm can be an alternative; performance should be similar.
  10. As said above, USB drives, especially multi-disk devices like the Mediasonic, are not recommended as array devices for many reasons, including the way disks are identified by the controller, connections dropping and so requiring rebuilds, etc. The board you have in option 3 is a reasonable start: there are 6 Intel SATA ports and 2 ASMedia SATA ports, all of which should be fine with Unraid. A lot of boards of that era had Marvell controllers for 2 of the ports, which cause issues. Add $10 for another 2-port PCI-E 2.0 x1 ASMedia ASM1061/1062 card and you are quickly at 10 available ports, with the two x16 physical slots still available. One of those (x4 electrical) would be fine with either an 8-port LSI HBA in IT mode or a 5-port JMB585. You can test Unraid on a trial for 4 weeks + 2 x 2-week extensions (from the menu); if it doesn't work out, then you can always use the drives in a different device. Like most, once you've tried it, you'll probably stick with it due to the flexibility. Power consumption should be ~30-40W idle for the base system, then ~8W per spinning drive. One important point though: Unraid is not a backup. It will cover 1 or 2 drive failures while maintaining availability (depending on parity drives); however, it can't protect you from theft, fire, complete hardware failure or fat-finger syndrome... user errors probably destroy more data than hardware failures in any system. If it's your business, make sure you have an independent backup for at least the critical data.
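For planning purposes, the idle figures above can be sketched as a back-of-envelope estimate. The base and per-drive numbers are assumptions drawn from the post, not measurements of any particular build:

```python
# Back-of-envelope idle power estimate: ~35W base system (my assumed
# midpoint of the 30-40W range quoted above) plus ~8W per spinning drive.
# An estimate, not a measurement.

def idle_watts(spinning_drives: int, base: float = 35.0,
               per_drive: float = 8.0) -> float:
    """Estimated draw with a given number of drives spun up."""
    return base + per_drive * spinning_drives

print(idle_watts(0))  # all drives spun down
print(idle_watts(6))  # e.g. during a parity check across 6 drives
```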
  11. The preclear tool writes to the USB stick and keeps its logs / resume files on there. I'd assume this is so it can run independently of the array being started, e.g. in maintenance mode etc. I'm halfway through the second step ('write') of 3 parallel preclears, and I'm at 19k USB writes since the reboot from installing the disks. You can run all the preclears together; no need to do them individually.
  12. RMA is a sticky point, as all the SMART data is good. Usually you need some sort of logged failure, like reallocated sectors etc., so I am a bit stuck until it really fails. Currently running another set of preclears. It's already passed through the 'issue' area on the first pass; interestingly, it also seems to be running faster than previously.

Oct 19 22:11:08 preclear_disk_2SGAWHDJ_28904: Pre-Read: progress - 10% read @ 192 MB/s
Oct 19 23:24:22 preclear_disk_2SGAWHDJ_28904: Pre-Read: progress - 20% read @ 186 MB/s

The current plan is to run another cycle or two and see what happens. If it holds up, I'll throw it in as parity 2 to give it a good workout and get some data for interest. Had I run the full three cycles up front, it may well have 'worked through' this issue, since nothing gets reported. That's also another reason for discussing it here: these errors may have been present in the first clearance I ran many months ago, but as they aren't reported and I wasn't looking for them in the logs, I could easily have been sitting on a precleared but defective drive which would have fallen over in a rebuild.
  13. First off, there is no data at risk. When this originally occurred, I ran a non-correcting parity check to confirm there were zero errors, then I replaced the drive, which rebuilt successfully. I am now deciding what to do with the removed drive, and whether I need to change my preclear approach to include a review of server logs to look for unreported errors.

Details:
- A new shucked WD He10 8TB drive passed 1 x WD test then 1 x preclear cycle.
- On being added to the array after 6 months as a hot spare, it generated errors once >10% of drive capacity was used. The errors were shown on the main Unraid page and logged as SMART 001 'Raw read error rate'. The 'Value / Worst' SMART reading actually increased from 099/099 to 100/100 after the non-correcting parity check was complete. The SMART 001 error count reset on reboot.
- Swapping to a different controller port did not eliminate the errors.
- An extended SMART test passed with no errors.
- SMART data is clean: no pending / reallocated sectors etc.
- Moved the drive to a different server (different controller, cables etc.) and ran another preclear cycle (read/write/read). Got errors between 10% and 20% of the disk scan during the initial read. No errors in the write or second read. No errors in SMART.

unRAID Server Preclear of disk 2SGAWHDJ
Cycle 1 of 1, partition start on sector 64.

Step 1 of 5 - Pre-read verification: [17:03:16 @ 130 MB/s] SUCCESS
Step 2 of 5 - Zeroing the disk: [14:53:06 @ 149 MB/s] SUCCESS
Step 3 of 5 - Writing unRAID's Preclear signature: SUCCESS
Step 4 of 5 - Verifying unRAID's Preclear signature: SUCCESS
Step 5 of 5 - Post-Read verification: [16:04:22 @ 138 MB/s] SUCCESS

Cycle elapsed time: 48:00:48 | Total elapsed time: 48:00:48

S.M.A.R.T. Status (device type: default)

ATTRIBUTE                     INITIAL  CYCLE 1  STATUS
5-Reallocated_Sector_Ct       0        0        -
9-Power_On_Hours              5749     5798     Up 49
194-Temperature_Celsius       32       32       -
196-Reallocated_Event_Count   0        0        -
197-Current_Pending_Sector    0        0        -
198-Offline_Uncorrectable     0        0        -
199-UDMA_CRC_Error_Count      0        0        -

SMART overall-health self-assessment test result: PASSED

--> ATTENTION: Please take a look into the SMART report above for drive health issues.
--> RESULT: Preclear Finished Successfully!

The log is spammed with these errors between 10% and 20% of drive capacity. The drive is sdb / ata1.00. The pre-read zone 10-20% was slow due to these errors:
Oct 17 21:23:15 MyNas preclear_disk_2SGAWHDJ[18758]: Pre-Read: progress - 20% read @ 98 MB/s

Whereas the post-read ran as expected, with no errors and at the expected speed:

Oct 19 03:35:05 MyNas preclear_disk_2SGAWHDJ[18758]: Post-Read: progress - 10% verified @ 187 MB/s
Oct 19 04:49:52 MyNas preclear_disk_2SGAWHDJ[18758]: Post-Read: progress - 20% verified @ 167 MB/s
Oct 19 06:07:32 MyNas preclear_disk_2SGAWHDJ[18758]: Post-Read: progress - 30% verified @ 161 MB/s
Oct 19 07:29:11 MyNas preclear_disk_2SGAWHDJ[18758]: Post-Read: progress - 40% verified @ 156 MB/s

So, based on this being used in two totally independent servers, the errors must be the drive. The questions I have are:

1) Is the disk actually good? The errors seem to reduce with usage. Is this part of some early-life adaptation process? If this was just some early-life behaviour, then I'd rather keep it (it's a He10) than junk it or attempt a return. Currently there are no 'faults' to return it for.
2) Why do these errors not show up in SMART / preclear? There are the same errors between the servers at the same points. Given there were originally SMART 001 errors when the drive was part of the array, I am surprised not to see them now after the read failures.
3) I have some more drives to preclear; it feels like I need to keep a close eye on the logs as well as just relying on SMART/preclear.

Does anyone have any experience of anything similar? Interested in your thoughts.

Edit: I don't have the logs for the original error on the other server or the extended SMART test I ran. Latest logs attached. mynas-diagnostics-20201019-1943.zip
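On point 3, the 'watch the logs as well as SMART' idea could be automated with something like the sketch below. The error patterns and sample lines are my assumptions, not an official or exhaustive list:

```python
# Sketch: scan syslog text for low-level ATA/read errors that never
# surface in SMART counters. The regex patterns here are assumptions,
# not an official list of kernel error strings.
import re

ATA_ERROR = re.compile(
    r"ata\d+\.\d{2}: (failed command|exception|error)|I/O error|media error",
    re.IGNORECASE,
)

def drive_error_lines(syslog_text: str) -> list[str]:
    """Return the lines that look like drive-level errors."""
    return [l for l in syslog_text.splitlines() if ATA_ERROR.search(l)]

# Inline sample in place of a real syslog file:
sample = (
    "Oct 17 21:23:10 MyNas kernel: ata1.00: failed command: READ FPDMA QUEUED\n"
    "Oct 17 21:23:15 MyNas preclear_disk_2SGAWHDJ[18758]: Pre-Read: progress - 20%\n"
)
print(drive_error_lines(sample))  # only the ata1.00 kernel line matches
```

Running something like this over the syslog after each preclear would flag errors in a zone like the 10-20% area even when SMART stays clean.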
  14. I think the Mybook uses some type of onboard drive encryption. I have a few of these boards about my desk from shucking; I just tried a Mybook board, with no joy on a couple of drives. A board from an Elements 8TB drive works fine with either, which confirms it's a hardware issue. It may be that the board looks for a drive feature relating to encryption, however that's just a guess. I tried to clear it in the WD tools and that also fails. SMART data looks OK, but you can't delete or create partitions.
  15. How are they not as described? The external drive isn't marketed as helium filled; buyers are making assumptions about the product ID containing a specific base drive, and there will be text that allows WD to make reasonable variations. I have a couple of the 'air' drives. They work fine, with ~10% higher throughput than the helium drives. They do run a little warmer, but with moderate case cooling it isn't an issue; I'm seeing +3-4C under sustained load at a low fan speed. The high temps reported are for continuous load in a poorly ventilated plastic shell. WD is moving away from using the He10 drives in anything other than the highest capacities; take a look at the WD Gold product sheets, which no longer state helium fill at these lower capacities. Unless the drives are actually faulty in some way, the OP is unlikely to get anything at a better cost-per-TB ratio, so what's the benefit in returning them? Someone here in the EU sent back the bare drive recently and got it replaced with a new external. Always worth a punt; the worst that can happen is they don't replace it.
  16. That card is visually identical to two that I am currently using with no issues, one in my main Unraid server and one in the backup. I just pulled the third one (not yet installed) out of its box for a look, and all the silk-screen labelling and component positioning matches the card you posted, though I note that the card does not specifically identify the controller chip used, which is a little unusual. In the Unraid device list it is "ASM1062 Serial ATA Controller (rev 02)". If you look closely, something similar should be written on the main chip - in my case ASM1061. A mobile phone camera zoom, balanced on an empty cup, is a good way to see the small writing; it often needs a torch or some light from the side to get contrast. No drivers are required; support is native to Unraid. If you watch the screen during boot, is the ASMedia card detected? It should 'splash' on the screen during POST. Does Unraid see the controller in Tools > System Devices? If not, try a different slot, or you may need to take a look around the BIOS settings. Good luck
  17. That's normal. I'm just running preclear on a shucked Prime Day 8TB WD drive, and it took a little over 17 hours for the first read, averaging 130MB/s. Drives slow down quite a lot on the inner tracks at the end. Keep in mind that parity checks etc. will take the same time.
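The 17-hour figure is just capacity divided by average speed; a quick sketch using the numbers above:

```python
# Rough arithmetic behind the ~17 hour pre-read: 8 TB (decimal, as drives
# are marketed) at an average of ~130 MB/s. These are the figures quoted
# above, not a general rule for all drives.

def pass_hours(capacity_tb: float, avg_mb_s: float) -> float:
    """Hours for one full sequential pass over the drive."""
    return capacity_tb * 1e12 / (avg_mb_s * 1e6) / 3600

print(f"{pass_hours(8, 130):.1f} h")  # one read pass of an 8TB drive
```

A full Read/Zero/Read cycle is roughly three of these passes, which lines up with the ~48 hour cycle times seen in preclear reports.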
  18. Unraid can be fun and is very flexible with Community Applications. I haven't tried passthrough of an iGPU other than for docker transcoding; however, a GT710 can be picked up for ~$15 at auction and will be fine for a Windows VM, with the benefit of a dedicated GPU output, HDMI audio etc. Plex would be far better in docker, passed through to the iGPU, than running in Windows. It reads like you want a main VM with some storage on the side, so give it a go. The only other thing you may want to consider is a USB card based on the Fresco Logic FL1100 for another $15, as passthrough of onboard USB can be an issue. Being able to pass through a full controller rather than individual devices will make a VM much more user friendly, as you'll have hotplug for devices.
  19. That's not so bad; one of mine ran away a little but is still going strong otherwise. 9000 is barely run in.
  20. Hi, while this is an Unraid forum, I'm struggling to see what using Unraid gives you over Windows 10 and Storage Spaces for mirror / parity, given you currently plan to run most of the apps in Windows.
  21. In my opinion, the mounting of those drives is unsuitable for continuous operation. The location is fine for a game drive with intermittent access, but when you have the drives spinning for 18hr+ during a parity check or rebuild, they will get warm, possibly too warm on a hot summer day. To keep the case, you would need to fabricate brackets and then fix the drives in the path of the air from the front fans; side-on to the fan is fine, as that doesn't impact the airflow to the rest of the case. By the time you have bought and fitted something, you could be a good way towards a new case. A case such as the Antec P101S or the Fractal Design R6 (and others) allows you to mount multiple drives behind the front fans. My current Unraid build is in a P101S, and I have the FD R6 as my water-cooled main PC, though a front radiator isn't compatible with keeping the drive bays (a top 280/360 is still possible). I was going to get a second R6; however, the P101S has 8 expansion slots, which means I can use a double-slot card in my last PCI-E slot, plus it came with more fans and was less expensive.
  22. There are many options open; an idea of budget for hardware (excluding drives) would be a good start. Do you plan to buy a case to hold 12-16 drives? Larger drives seem to need a lot more direct cooling than 4TB drives, so they really need to be in directed airflow, which can be an issue to arrange when you have a lot of drives. Usually it is better to go with larger drives for fewer spindles and more simplicity. The 'wasted' cost of a single large parity drive will likely be a lot less than a larger case, bigger PSU, more cooling, more controllers and cables etc. Currently the 12TB and 14TB WD externals are very cheap in the Prime Day deals (ends today) but are likely to be similar for Black Friday. A pair of those drives + your old drives would give you a significant storage upgrade even with a modest number of SATA ports available, and you would then gain 8-10TB for every one of your 4TB drives you replaced over time. For lots of drives, the mainboard needs a spare PCI-E 2.0 x4 (electrical) / x8 or x16 physical or better slot available for an LSI SAS HBA, which will give you 8 extra SATA ports above whatever the board supports; these can be picked up used for $30. If you just need a couple of extra ports, a $10 ASMedia dual-SATA PCI-E x1 card is fine.
  23. The iGPU in the 6700K will be fine for most uses unless you are trying to transcode a very modern format. If you are doing a new build primarily for media, a modern Intel iGPU would be my preference over the P2000, and cheaper than a Ryzen + P2000. If you need lots of cores for other workloads, or server levels of PCI-E lanes, then a P2000 is a good option; however, that is 12+ core / mass PCI-E territory rather than a mid-range Ryzen.
  24. Are these the new SanDisk Ultra Fit USB 3.1 version? The 16GB drives I bought don't have a GUID, so the creator won't recognise them. The standard USB 2.0 Cruzer Fit works fine. There is no performance or quality benefit to USB 3.0.
  25. CSM - Compatibility Support Module (a BIOS setting) - when enabled, it may force the board to boot from a chipset GPU. The x4 slot is connected to the PCH (chipset). PCI-E 2.0 x4 is enough for 8 HDDs; PCI-E 3.0 x4 is enough for 16 HDDs with no loss in speed. Could you use an x4 to x8/x16 riser for the SATA card, with the GT710 in the second slot? Assuming your SATA card is half height.