ldrax

Members
  • Content Count: 63
  • Joined
  • Last visited

Community Reputation
  0 Neutral

About ldrax
  • Rank: Advanced Member

Converted
  • Gender: Undisclosed
  1. I find the Code Server app quite useful, but it may be far from what you're looking for. https://forums.unraid.net/topic/81306-support-linuxserverio-code-server/
  2. Oh, now I'm confused. I always thought putting a drive into standby meant telling it to spin down.
  3. So perhaps I will set up a script on a cron schedule matching the spin-down delay (a sketch of what I mean is after this list). I think that will do; it's better than having to remember to spin the drives down manually. Thanks @Zonediver
  4. I take it this command will put the drives into standby mode immediately? Ideally there would still be a way to respect the global spin-down delay setting. Anyway, this issue is not a big deal for me for the time being, because accessing the SMART page of a drive is not something I do very often.
  5. I see. Thanks @johnnie.black! I'll make a mental note of that.
  6. I gave a wrong example; the drive in question is part of the array, so perhaps I shouldn't have said /dev/sdd but rather "the fourth drive in the array". After observing for a few hours and noting that the drive wasn't spun down, I browsed the drive's contents to generate some drive activity. Sure enough, after the global spin-down delay elapsed, the drive was spun down as it should be. This makes me think that if an array drive in the spun-down state is woken up by a SMART-accessing command, the array doesn't "notice" it, and hence won't spin it down after the spin-down delay elapses.
  7. Not sure if this is known, but if a drive is in the spun-down state and I click its device name, say /dev/sdd, on the Dashboard page, the page that follows seems to wake up the HDD (presumably to get SMART data), but from then on the spin-down delay never takes effect and the drive stays Active/Idle indefinitely. The Main page keeps showing * (standby) in the status column for that drive, while `hdparm -C /dev/sdd` on the command line shows Active/Idle (a quick way to compare the two is sketched after this list).
  8. My PSU has 4 SATA power cables (each with 4 connectors, so 16 in total), with 3 cables used to power 12 hard drives. I have "dedicated" the fourth cable to all the fans and AIOs, as follows:
     • 1x connector to the first AIO cooler (powering the pump and 1x radiator fan)
     • 1x connector to the second AIO cooler (powering the pump and 1x radiator fan)
     • 1x connector to a fan controller hub
     • 1x connector free
     The fan controller hub has 8 3-pin channels grouped under 4 control knobs (1 knob per 2 channels). If I use 4-pin cables I can only use 4 channels due to space constraints. I have 12 fans that I want to connect to this hub, which would mean connecting 3 fans to a single channel (using a daisy-chained fan splitter). A single fan's rated current varies from 0.2 to 0.4 A, so an estimated 3 fans on one channel would draw about 0.9 A, and all 12 fans on the hub combined about 3.6 A. So my questions are (a rough current budget is worked out after this list):
     • Will all of this (AIOs + fans) overload the main SATA cable coming out of the PSU?
     • Will the fan controller hub overload the single SATA connector when powering 12 fans? I couldn't find exact specs, but I've seen 1 SATA connector quoted as rated for 4.5 A; I'm not sure whether that applies to a single connector or to the whole SATA cable coming out of the PSU.
     • Will a single channel on the hub be overloaded by a splitter powering 3 fans?
  9. Update: iommu=pt didn't work in my case, but replacing the drive with a different model did (and without iommu=pt). The new one is a Samsung PM981. I guess that solves it. Thanks @johnnie.black!
  10. Thank you for the pointer @johnnie.black, I'll try it out later. When you said "those", are you referring to the controller on the motherboard or the controller inside the NVMe drive? And by "latest models", do you mean the NVMe drives or the motherboard?
  11. I'm building a new rig with an ASUS Z10PE-D16 WS and a Xeon 2637 v3. unRAID (6.7.2) boots fine, but as soon as certain tasks are initiated, such as logging in to the Main web interface or entering `diagnostics` on the command line, the system freezes after 1-2 minutes, with the CATERR LED lit up red on the motherboard. After much stripping of peripherals, `tail -f /var/log/syslog` reveals: `nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xffff`. I removed the NVMe drive and the system no longer freezes. The NVMe drive is a WD Black 500GB (the first model), and it's a Windows 10 OS drive. I have booted the same build into Windows 10 on that drive and it runs well, with a few tests such as Prime95 and Unigine Heaven running fine overnight. So I think this error is Linux- and/or unRAID-specific. Does anyone have an idea of what I should do next to troubleshoot? (One thing I'm considering is sketched after this list.)
  12. Never mind me. There is apparently a whole new world of GPU passthrough with VMs on unRAID. I'll go read up on it.
  13. Is the VM able to use the GPU directly? I'm not sure of the term; hardware passthrough?
  14. Hi everyone, I'm just wondering what the use case is for having a mid- (or high-) end GPU in an unRAID build. I recently upgraded my build to higher specs, namely an i7 3770 with a Z77 board (they were great specs in their time). These days, a used GTX 980, 980 Ti, 1060 or RX 580 can be had at a very reasonable price. Normally the question would be "can I use this for purpose A, B or C?", but this is the reverse question: for what purposes X, Y, Z could I use this? Thank you!
  15. I recently reformatted my disk1 from ReiserFS to XFS and then ran rsync -avPX /mnt/disk4/ /mnt/disk1. When it was done I ran xfs_db's frag -f report, and to my surprise the fragmentation level was 27%; I was hoping to see something close to 0%. Does rsync write temporary files somewhere on the destination drive and then delete them after completion? Other than that, I'm not sure what might cause such significant fragmentation. (The exact commands are written out below.)
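
Regarding the cron-scheduled spin-down script mentioned in post 3: a minimal sketch of the idea, assuming the drives can simply be forced into standby with hdparm. The device list is a placeholder, and whether this interferes with unRAID's own spin-down tracking is an open question, not something confirmed in the thread.

```bash
#!/bin/bash
# Hypothetical spin-down helper: put the listed drives into standby if
# they are still active. Adjust the device list for your own array.
for dev in /dev/sdd /dev/sde; do
    # hdparm -C reports "active/idle" or "standby" for the drive.
    if hdparm -C "$dev" | grep -q "active/idle"; then
        hdparm -y "$dev"   # -y requests immediate standby (spin-down)
    fi
done
```

It could be run from the User Scripts plugin or a cron entry whose interval matches the global spin-down delay, e.g. hourly.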
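
For the standby mismatch described in post 7, a quick way to compare what the GUI claims with what the drives actually report. This is only an illustrative loop, not an official unRAID tool, and it assumes the drives of interest show up as /dev/sd? devices.

```bash
#!/bin/bash
# Print the power state each SATA drive reports via hdparm, to compare
# against the standby (*) indicator on the Main page.
for dev in /dev/sd?; do
    state=$(hdparm -C "$dev" 2>/dev/null | awk '/drive state/ {print $NF}')
    echo "$dev: ${state:-unknown}"
done
```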
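
A rough current budget for the fan wiring in post 8, using the figures from the post (a 0.3 A typical per-fan draw and a 4.5 A per-connector rating are assumptions that should be checked against the actual fan and PSU specs): 3 fans on one channel is about 3 × 0.3 A = 0.9 A, and 12 fans on the hub about 12 × 0.3 A = 3.6 A, which stays under a 4.5 A connector rating but leaves little headroom if the fans really draw 0.4 A each (12 × 0.4 A = 4.8 A). When judging the cable as a whole, the two AIO pumps and their radiator fans on the other connectors have to be added on top of the hub's draw.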
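
For the NVMe controller resets in post 11, one common first step (and what the iommu=pt suggestion later taken up in post 9 amounts to) is adding kernel parameters to the flash drive's syslinux configuration and rebooting. A sketch of that edit is below; the nvme_core.default_ps_max_latency_us=0 parameter, which disables NVMe power-state transitions, is my own assumption of something worth trying, not a fix confirmed in this thread.

```
# /boot/syslinux/syslinux.cfg (excerpt)
label Unraid OS
  menu default
  kernel /bzimage
  # iommu=pt and nvme_core.default_ps_max_latency_us=0 are troubleshooting
  # parameters; remove them again if they make no difference.
  append iommu=pt nvme_core.default_ps_max_latency_us=0 initrd=/bzroot
```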
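
For post 15, the commands written out in full. The xfs_db invocation is my reconstruction of the fragmentation check, and /dev/md1 is a placeholder for whatever device actually backs disk1 on your system.

```bash
# Copy disk4's contents onto the freshly formatted XFS disk1,
# preserving permissions and extended attributes.
rsync -avPX /mnt/disk4/ /mnt/disk1

# Read-only fragmentation report for regular files on disk1.
xfs_db -r -c 'frag -f' /dev/md1

# If the fragmentation turns out to matter, xfs_fsr can reorganize
# files in place on a mounted XFS filesystem:
# xfs_fsr -v /mnt/disk1
```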