Vr2Io

Everything posted by Vr2Io

  1. Yes, exactly. 3U and 2U rack cases work this way too. Sure, because I use two PSUs; the loads are 16 disks and 12 disks. Always one wire set per 4 disks, never one PSU for all 28 disks.
  2. Yes, most PSUs won't have 6 independent peripheral wire sets. ** Never think of using only 2 wire sets to power up 24 disks ** My 28-disk config is 16 (4 Molex) + 12 (3 Molex); both cases use a 2U PSU with 4 peripheral wires, each carrying 2 Molex, so no problem with wiring. The model is the Enhance ENH-2180; it is not a high-quality PSU, but I don't have much choice. The EVGA SuperNOVA 1300W G2 has 6 peripheral sockets, but you need DIY to turn the SATA 1-4 wires into Molex: cut them and insert them into DIY Molex plugs. The Corsair HX-850, 1000 and 1200 also support 6 peripheral sockets, and the 1200 has 5v at 30A plus one more SATA cable, but you also need to cut those SATA wires into Molex.
  3. My setup has similar power usage: 28 disks in two cases, 2 PSUs, 1 10G NIC, 2 HBAs, 1 expander and 1 low-power GPU. During parity sync it uses ~422W (522W minus the 100W floor). I like Corsair more than EVGA.
  4. You could try pbertera/syslogserver. I run it in Docker with a standalone IP (you need to enable host network access in the Docker settings) and redirect the syslog file to a UD disk path.
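     For example, a minimal sketch of how I launch it (br0, the IP and the UD path are examples from my setup; the container-side log path is an assumption, check the image README):
        # run the syslog container on its own IP on Unraid's custom Docker network
        docker run -d --name syslogserver \
          --network br0 --ip 192.168.1.50 \
          -v /mnt/disks/UD_DISK/syslog:/var/log \
          pbertera/syslogserver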
  5. I suggest you focus on the 5v rating. From a quick check, on the CX750M or RM__ / HX__i the 5v rating is 25A (the combined limit is 130W or 150W). In my opinion, the major problem of the CX750M should be that the 5v rail is group-regulated together with 12v; btw, you should avoid using a single PSU for 24 disks. (If you can, please visually check the CX750M for any exploded capacitors.) https://linustechtips.com/topic/1122694-why-group-regulated-units-shouldnt-be-boughtsold-in-2019-and-on/
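     Rough arithmetic on why the 5v rail matters (the ~0.7A per idle 3.5" drive is my assumption, it varies by model):
        24 drives x ~0.7A = ~17A on 5v, already a big share of the 25A limit,
        with no headroom left for load spikes or regulation sag.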
  6. I made some mistakes and correct them below: 1. The Synology E10G18-T1 not having WOL should be because the NIC doesn't implement 3.3v-aux standby power for the chips. 2. The TP-Link not working on the ASRock should be motherboard-related; could you check whether the motherboard BIOS has a "NIC" UEFI WOL setting or status?
  7. Really great info. For the ASRock X570M Pro4, I suppose it may be due to no 3.3v-aux being present when the system is in S5; you can check this PCIe slot pin further and solve it if you know electronic circuits. https://electronics.stackexchange.com/questions/225389/pcie-power-when-system-off From some checking, the QNAP card's B10 pin has the circuit trace and components (C21 C25 C26 C222 C223) but the Synology has the trace only (missing C21 C25 & C2X).
  8. There is no best way. I like to pool those backup disks in RAID0 pool(s) but mount / manage them under UD. I would consider them as individual disks if Unraid supported multiple array pools or I had a 2nd Unraid license; that would provide maximum protection but lose the high performance of RAID0.
  9. ~17 WD shucked disks (all my WD are from shucking); the oldest are ~4 yrs. Only 1 (non-helium) got bad sectors.
  10. If there is no spare cable then there is no solution. Since WOL is based on the magic packet (MAC address), nothing needs to be blocked at the router. I use optical and the suggested method for WOL. I also notice TP-Link says it supports WOL while most other 10G NICs don't mention it. As for WOL on those power-hungry NICs: unless they have an advanced power management design, they would draw too much power in S5, and I think that's why 10G NICs usually don't have WOL. ** edit : If you have a wireless NIC that supports WoWLAN (wake on WiFi), you could try setting it up under a VM **
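     Sending the magic packet from any Linux box on the LAN is a one-liner (the MAC address is a placeholder; etherwake works the same way):
        wakeonlan 00:11:22:33:44:55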
  11. Simply connect the onboard NIC (without assigning an IP) for WOL purposes.
  12. No, you must fix it or try another USB SATA bridge; otherwise the files or the file system on the USB storage will corrupt.
  13. Yes, it allocates dynamically to different disks according to the different settings. Yes, if a file with the same name exists on other disks it will be written too, and as a result you get duplicate files on different disks.
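     You can spot such duplicates with something like this (a sketch; "Media" is a placeholder share name):
        # print relative paths that appear on more than one array disk
        find /mnt/disk*/Media -type f -printf '%P\n' | sort | uniq -d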
  14. It may save a bit of power, but reliability & stability are more important. Please do a simple test: "rsync -a /mnt/disk1/<share>/<folder> /mnt/disk1/test/" (-a is needed so rsync recurses into the folder) to check whether it crashes, then delete /mnt/disk1/test.
  15. There is no link aggregation in SATA; I think you are talking about the aggressive link power management (sleep) feature. Btw, you should set it to disabled, and I don't think it would be the cause.
  16. Your setup is quite simple and nothing abnormal seems to show. First, are you sure only rsync crashes? How about the cp or mc commands? BTW, I guess the problem is caused by transferring to the USB device and the USB bridge/device having a problem. Also, what is the filesystem of the USB storage? FYR, I have never had a crash with rsync, even with USB storage.
  17. I use an Intel X520 and a ConnectX-3 (needs flashing from IB to Eth firmware); both work well with Unraid. https://forums.unraid.net/topic/84705-mellanox-connectx-2-mhrh2a-xsr-10gb/ I also have an old-gen Emulex OCE-10xxx adapter; it works with Unraid (be2net driver) after flashing it to another firmware, something like 4.x.x (can't remember). The 49Y7952 should be the newer-gen OCE-11xxx adapter. You may search for / try any firmware that makes it work. https://forums.servethehome.com/index.php?threads/beware-emulex-10gbe-virtual-fabric-adapter-ii-x8-pci-e-slot.3491/page-3
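     On the ConnectX-3 the IB-to-Eth switch is usually just a port-type change with the Mellanox MFT tools rather than a full reflash (a sketch; the mst device name varies, check "mst status"):
        mst start
        # LINK_TYPE 2 = Ethernet; reboot afterwards for the new port type to take effect
        mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2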
  18. Enjoy !! BTW, only two cables for 5 cages (20 bays in total) is still not quite enough.
  19. I haven't found attribute 200 on WD disks; I just finished rearranging disks and now all array disks are WD. 😈 I tried monitoring attribute 1, but sometimes it changes to some number and returns to zero in a short time (can't remember whether that was on WD or not), so in the end I don't monitor it.
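     You can check which attributes a disk actually reports with smartctl (sdX is a placeholder; attribute 200 shows as Multi_Zone_Error_Rate where present):
        smartctl -A /dev/sdX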
  20. Note: I just reinstalled UD and rebooted. I have 10 data disks, 16 UD disks and 2 parity. Below is the screen capture when the array starts: the Disk 1-10 counter figures show at Dev 1-10, i.e. Dev 1 = Disk 1, Dev 2 = Disk 2 ..... and Dev 11-16 stay zero. As for the spindown issue, I also reproduced it by:
     - Start the array
     - Spin down all disks
     - Clear all counters with "clear stats"
     - Say there is activity on Disks 1,5; then Dev 1,5 also spin up
     - The same counter figures show in UD
     @bonienl @limetech @dlandon So if I perform a parity check/sync, with my 10 data disks, UD Dev 1-10 will spin up and waste 100W+ unnecessarily.
  21. I got the same issue when switching to the R/W counter mode. All figures are copied from the array R/W figures and placed on different UD disks. To state the symptom more clearly: the array Disk 3,4 figures are copied to Dev 3,4 ........ (due to the delay between capture and update they may show different figures, but they are actually the same numbers). Array: [screenshot] UD: [screenshot] And this also prevents UD Dev 3,4 from spinning down.
  22. I am not sure how zpools look in UD. I have two RAID0 pools in UD; I mask the member disks by setting them to passthrough, then I get a clean view.
  23. There are some interesting findings about the UD disk spindown issue after I rearranged most disks. My setup was 10 data + 2 parity and 16 UD. When the 10 array data disks have activity, UD Dev 1-10 won't spin down, but Dev 11-16 will. When only array Disks 3,4 have activity, likewise only UD Dev 3,4 stay spun up. I notice UD states that spindown is not controlled by UD after 6.9+, so is this an Unraid OS issue? FYR, a parity sync was in progress with no other disk activity, OS 6.10.0-RC1.