
ldrax

Everything posted by ldrax

  1. Thank you @johnnie.black! I have done all those steps, and now all 4 drives are back in the cache pool. Hope nothing changes underneath.
  2. I finally got to this; I'm now at step 3: unassign all cache devices. Since there is no 'Unassign' button or drop-down, I assume you meant choosing 'No Device' next to each of the 4 cache slots. So I chose 'No Device' for Cache 1, 2, and 4. As for Cache 3, there is no option other than choosing one of the available drives, so I left it as 'Unassigned'. Right now, this is how it looks:
     Cache 1: Missing (x)
     Cache 2: Missing (x)
     Cache 3: Unassigned
     Cache 4: Missing (x)
     Before I proceed to 'Start the array', is this how these states are supposed to look? Just want to make sure.
  3. Alright, I'll do it shortly. Is this a known issue? Normally if an array disk is missing, its slot is marked as missing, and after fixing it physically and rebooting, it shows up back in that slot. I wonder why that isn't the case with the cache pool?
  4. Thank you @johnnie.black. So, unassign while still leaving all 4 cache slots in place, right, and not setting the total number of cache slots to 0?
  5. I have 4 drives assigned to a RAID 1 BTRFS cache pool. Everything was fine when I turned the system off to do some maintenance (rearranging cables). When I turned the system back on, one of the cache drives (Cache 3) wasn't detected by the system, probably due to a loose cable. I noticed this before anything else, so I didn't start the array. Surprisingly, unRAID reported 'Configuration is valid', and slot 3 of the cache pool was simply marked as 'Unassigned'. I turned the system off and reseated all the cables. After booting up, the missing drive now shows up. But the Cache 3 slot is still marked as Unassigned, and the drive that was supposed to be there is now listed under Unassigned Devices. I would like to seek advice here on what I should do next. Do I reassign the drive to the Cache 3 slot, or do I just start the array without the Cache 3 drive? Thanks!
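     If it helps to confirm which devices btrfs still considers members of the pool before touching anything, this can be run from the console (a diagnostic sketch; it reads the btrfs superblocks and works with the array stopped):
        # List every btrfs filesystem the kernel can see, with member devices
        btrfs filesystem show
     Each pool is listed with its UUID and member devices, so the reseated drive should appear there if its pool metadata is intact.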
  6. Just wondering, how do you discover such machines, by chance or by scanning the intranet/internet? And what are the most common mistakes from the users here, other than being oblivious to public exposure? Do they forget to change the default password?
  7. I find the Code Server app quite useful, but it might be far from what you want: https://forums.unraid.net/topic/81306-support-linuxserverio-code-server/
  8. Oh, now I'm confused. I always thought putting it into standby meant telling it to spin down.
  9. So perhaps I will set up the script with a cron schedule matching the spin-down delay. I think that will do; better than having to remember to spin it down manually. Thanks @Zonediver
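     For reference, a minimal sketch of what such a script could look like, assuming hdparm's -y flag (immediate standby) and -C (power state check); /dev/sdd is just an example device:
        #!/bin/bash
        # Spin the drive down only if it is currently reported active/idle
        DEV=/dev/sdd
        if hdparm -C "$DEV" | grep -q "active/idle"; then
            hdparm -y "$DEV"   # issue an immediate standby (spin-down) command
        fi
     Scheduled via cron (or the User Scripts plugin) at an interval matching the spin-down delay, it would catch drives the GUI has lost track of.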
  10. I take it that this command will put the drives into standby mode immediately? I'd prefer a way to respect the global spin-down delay settings. Anyway, this issue is not a big deal for me for the time being, because accessing the SMART page of a drive is not something I do very often.
  11. I see. Thanks @johnnie.black! I'll make a mental note of that.
  12. I gave a wrong example; the drive in question is part of the array, so perhaps I shouldn't have said /dev/sdd, but rather: the fourth drive in the array. After observing for a few hours and noting that the drive wasn't spun down, I browsed the drive's contents to give it some 'drive activity'. Sure enough, after the global spin-down delay elapsed, the drive was spun down as it should be. This makes me think that if an array drive in the spun-down state is woken up by a SMART-accessing command, the array doesn't 'notice' it, and hence won't spin it down after the spin-down delay elapses.
  13. Not sure if this is known, but if a drive is in the spun-down state and I click the device name, say /dev/sdd, on the Dashboard page, the page that follows seems to wake up the HDD (presumably to get SMART data), but from then on the spin-down delay never takes effect; the drive stays Active/Idle indefinitely. The Main page keeps showing * (standby) in the status column for said drive, while the command line hdparm -C /dev/sdd shows Active/Idle.
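     For what it's worth, hdparm -C only queries the power state and should not wake the drive, and smartctl has a standby-aware mode that skips the query when the disk is spun down; a sketch:
        # Check power state without waking the drive
        hdparm -C /dev/sdd
        # Read SMART data only if the drive is already spun up; skip otherwise
        smartctl -n standby -a /dev/sdd
     That makes it possible to poll SMART without defeating the spin-down timer.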
  14. My PSU has 4 SATA cables out (each with 4 connectors, so 16 total), with 3 cables used to power 12 hard drives. I "dedicated" the fourth cable to all the fans and AIOs, as follows:
     1x connector to the first AIO cooler (powering the pump and 1x radiator fan)
     1x connector to the second AIO cooler (powering the pump and 1x radiator fan)
     1x connector to a fan controller hub
     1x connector free
     The fan controller hub has 8 3-pin channels grouped under 4 control knobs (1 knob per 2 channels). If I use 4-pin cables I can only use 4 channels due to space constraints. I have 12 fans that I want to connect to this hub, which would mean connecting 3 fans to a single channel (using a daisy-chained fan splitter). A single fan's rated current varies from 0.2-0.4 A, so by estimate 3 fans on a single channel would draw about 0.9 A, and all 12 fans on the hub combined about 3.6 A. So my questions are:
     Will all these (AIOs + fans) overload the main SATA cable coming out of the PSU?
     Will the fan controller hub overload the single SATA connector by powering 12 fans? I couldn't find exact specs, but I've sporadically seen that 1 SATA connector is rated 4.5 A, though I'm not sure if that is for a single SATA connector or for the main SATA cable coming out of the PSU.
     Will a single channel on the hub be overloaded by the fan splitter powering 3 fans?
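     As a rough worked budget (assuming ~0.3 A per fan at 12 V, and the commonly cited 1.5 A per pin x 3 pins = 4.5 A per voltage rail for a single SATA power connector): 12 fans x 0.3 A = 3.6 A, or about 43 W on the 12 V rail through the one connector feeding the hub. That is under the ~4.5 A (54 W) connector rating, but not by much, and any 0.4 A fans would push it closer. Three fans on one splitter at 3 x 0.3 A = 0.9 A should be fine if the hub's channels are rated around the typical ~1 A, though that figure is hub-specific. The whole PSU cable would then carry the hub's 3.6 A plus the two AIO connectors on top; the per-cable rating is PSU-specific, so the Seasonic documentation is the number to check there.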
  15. Update: the iommu=pt didn't work in my case, but replacing the drive with a different model did (and without iommu=pt). The new one is a Samsung PM981. I guess that solves it. Thanks @johnnie.black!
  16. Thank you for the pointer @johnnie.black, I'll try it out later. When you said 'those', are you referring to the controller on the motherboard, or the controller inside the NVMe drive? And 'latest models', is that of the NVMe drives, or of the motherboard?
  17. I'm building a new rig with an ASUS Z10PE-D16 WS and a Xeon 2637 v3. unRAID (6.7.2) boots fine, but as soon as some tasks are initiated, such as logging in to the Main web interface or entering `diagnostics` on the command line, the system freezes after 1-2 minutes, with the CATERR LED lit up red on the motherboard. After much stripping of peripherals, tail -f /var/log/syslog reveals: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xffff. I removed the NVMe drive, and the system no longer freezes. The NVMe drive is a WD Black 500GB (the first model), and it's a Windows 10 OS drive. I have booted the same build into Windows 10 on this drive, and it runs well, with a few tests such as Prime95 and Unigine Heaven running fine overnight. So I think this error is Linux- and/or unRAID-specific. Does anyone have an idea what I should do next to troubleshoot?
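     One workaround frequently suggested for 'controller is down ... CSTS=0xffffffff' resets on Linux is disabling NVMe autonomous power state transitions (APST) via a kernel parameter; whether it helps this particular WD Black is untested here, but it can be tried by editing the append line in /boot/syslinux/syslinux.cfg (Main -> Flash in the web UI):
        # Disable NVMe APST; iommu=pt is another commonly tried parameter
        append initrd=/bzroot nvme_core.default_ps_max_latency_us=0
     A reboot is needed for the change to take effect.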
  18. Never mind. There is apparently a whole new world of GPU passthrough with VMs on unRAID. I'll go read up on it.
  19. Is the VM able to use the GPU directly? Not sure of the term; hardware passthrough?
  20. Hi everyone, I'm just wondering what the use case is for having a mid (or high)-end GPU in an unRAID build. I have recently upgraded my build to higher specs, that is, an i7 3770 with a Z77 board (they were great specs in their time). These days, a used GTX 980, 980 Ti, 1060, or RX 580 can be had at a very reasonable price. Normally the question would be: can I use this for purpose A, B, or C? But this is the reverse question: for what purposes X, Y, Z can I use it? Thank you!
  21. I recently reformatted my disk1 from ReiserFS to XFS, and then I did rsync -avPX /mnt/disk4/ /mnt/disk1. When it was done I ran the xfs_db frag command; to my surprise the fragmentation level is 27%. I was kind of hoping to see 0%. Does rsync write some temporary files somewhere on the destination drive and then delete them after completion? Other than that, I'm not sure what might cause the rather significant fragmentation.
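     For anyone checking the same thing, a sketch of the relevant commands, assuming disk1 maps to /dev/md1 as is usual for unRAID array disks:
        # Report the fragmentation factor (read-only)
        xfs_db -r -c frag /dev/md1
        # Defragment in place, verbosely (xfs_fsr ships with xfsprogs)
        xfs_fsr -v /mnt/disk1
     Worth noting that the frag factor is roughly (extents - files) / extents, so a modest number of large multi-extent files can report a scary percentage while performing perfectly fine.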
  22. Right now I'm running 4 concurrent file copy operations. Command used: rsync -avPX /source /destination. Parity: disabled (unassigned).
     Thread 1: from disk 3 -> 2
     Thread 2: from disk 4 -> 1
     Thread 3: from disk 7 -> 8
     Thread 4: from disk 9 -> 5
     I'm only getting about 30 MB/s for each of these 4. I tried both reconstruct write and Auto in Disk Settings; the results are the same, but I figure since parity is disabled, this doesn't matter. If I run just 2 operations, the speed of each is around 45-55 MB/s. Some disks are connected to SATA ports on the motherboard and some to an LSI 8i RAID card (PCIe 3.0 x4). Does this mean that, as a whole, my entire build's disk bandwidth is only about 100-120 MB/s? A while back I did an rsync from most of these disks, one disk at a time, to another unRAID build; the max speed was around 120 MB/s, but I figure that's the max of my gigabit LAN speed.
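     To separate the disks themselves from the controller or link as the limit, each disk's raw sequential read can be sampled while the copies are stopped; a sketch, with example device names:
        # ~3-second buffered read test; repeat for each member disk
        hdparm -t /dev/sdb
        hdparm -t /dev/sdc
     If every disk alone reads at 120+ MB/s but four concurrent copies drop to 30 MB/s each, the shared path is the more likely bottleneck than the individual drives.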
  23. Autofan questions.
     Q1: I understand that autofan controls speed via PWM. Does it also work with 3-pin fan headers with no PWM, that is, will it do voltage-based speed control? If not, is there such a plugin?
     Q2: I'm wondering about the minimum PWM issue. I clicked DETECT next to the "Minimum PWM value:" setting a few times, but the field remains blank. Meanwhile, syslog is reporting 0 rpm at certain PWM values; logs below. Is this all fine, or do I need to adjust something somewhere?
     Oct 27 05:38:09 Tower autofan: Highest disk temp is 40C, adjusting fan speed from: 189 (74% @ 2265rpm) to: 145 (56% @ 1771rpm)
     Oct 27 05:43:14 Tower autofan: Highest disk temp is 39C, adjusting fan speed from: 145 (56% @ 1767rpm) to: 123 (48% @ 1500rpm)
     Oct 27 06:08:23 Tower autofan: Highest disk temp is 38C, adjusting fan speed from: 123 (48% @ 1500rpm) to: 101 (39% @ 0rpm)
     Oct 27 06:13:28 Tower autofan: Highest disk temp is 34C, adjusting fan speed from: 101 (39% @ 0rpm) to: OFF (0% @ 0rpm)
     Oct 27 06:23:33 Tower autofan: Highest disk temp is 36C, adjusting fan speed from: OFF (0% @ 0rpm) to: 57 (22% @ 0rpm)
     Oct 27 06:33:38 Tower autofan: Highest disk temp is 37C, adjusting fan speed from: 57 (22% @ 0rpm) to: 79 (30% @ 0rpm)
     Oct 27 06:48:43 Tower autofan: Highest disk temp is 38C, adjusting fan speed from: 79 (30% @ 0rpm) to: 101 (39% @ 0rpm)
     Oct 27 07:03:49 Tower autofan: Highest disk temp is 0C, adjusting fan speed from: 101 (39% @ 0rpm) to: OFF (0% @ 0rpm)
     Oct 27 07:28:54 Tower autofan: Highest disk temp is 39C, adjusting fan speed from: OFF (0% @ 0rpm) to: 123 (48% @ 0rpm)
     Oct 27 07:33:59 Tower autofan: Highest disk temp is 37C, adjusting fan speed from: 123 (48% @ 1513rpm) to: 79 (30% @ 0rpm)
     Oct 27 07:44:04 Tower autofan: Highest disk temp is 38C, adjusting fan speed from: 79 (30% @ 0rpm) to: 101 (39% @ 0rpm)
     Oct 27 07:54:10 Tower autofan: Highest disk temp is 39C, adjusting fan speed from: 101 (39% @ 0rpm) to: 123 (48% @ 1500rpm)
     Oct 27 07:59:15 Tower autofan: Highest disk temp is 37C, adjusting fan speed from: 123 (48% @ 1510rpm) to: 79 (30% @ 0rpm)
     Oct 27 08:04:20 Tower autofan: Highest disk temp is 38C, adjusting fan speed from: 79 (30% @ 0rpm) to: 101 (39% @ 0rpm)
     Oct 27 08:14:25 Tower autofan: Highest disk temp is 39C, adjusting fan speed from: 101 (39% @ 0rpm) to: 123 (48% @ 1496rpm)
     Oct 27 08:19:30 Tower autofan: Highest disk temp is 37C, adjusting fan speed from: 123 (48% @ 1510rpm) to: 79 (30% @ 0rpm)
     Oct 27 08:24:35 Tower autofan: Highest disk temp is 38C, adjusting fan speed from: 79 (30% @ 0rpm) to: 101 (39% @ 0rpm)
     Oct 27 08:34:40 Tower autofan: Highest disk temp is 39C, adjusting fan speed from: 101 (39% @ 0rpm) to: 123 (48% @ 1490rpm)
     Oct 27 08:39:45 Tower autofan: Highest disk temp is 37C, adjusting fan speed from: 123 (48% @ 1510rpm) to: 79 (30% @ 0rpm)
     Oct 27 08:44:51 Tower autofan: Highest disk temp is 38C, adjusting fan speed from: 79 (30% @ 0rpm) to: 101 (39% @ 0rpm)
     Oct 27 08:54:56 Tower autofan: Highest disk temp is 39C, adjusting fan speed from: 101 (39% @ 0rpm) to: 123 (48% @ 1486rpm)
     Oct 27 09:00:01 Tower autofan: Highest disk temp is 37C, adjusting fan speed from: 123 (48% @ 1510rpm) to: 79 (30% @ 0rpm)
     Oct 27 09:05:06 Tower autofan: Highest disk temp is 38C, adjusting fan speed from: 79 (30% @ 0rpm) to: 101 (39% @ 0rpm)
     Oct 27 09:15:11 Tower autofan: Highest disk temp is 39C, adjusting fan speed from: 101 (39% @ 0rpm) to: 123 (48% @ 1503rpm)
     Oct 27 09:20:16 Tower autofan: Highest disk temp is 37C, adjusting fan speed from: 123 (48% @ 1510rpm) to: 79 (30% @ 0rpm)
     Oct 27 09:25:21 Tower autofan: Highest disk temp is 38C, adjusting fan speed from: 79 (30% @ 0rpm) to: 101 (39% @ 0rpm)
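     If DETECT keeps coming back blank, the minimum PWM can be probed by hand through the same hwmon sysfs interface autofan drives; a sketch (the hwmon index and channel numbers vary per board, so these paths are examples):
        # Take manual control of the PWM channel (1 = manual)
        echo 1 > /sys/class/hwmon/hwmon2/pwm1_enable
        # Step the raw 0-255 duty cycle down and note where the fan stalls
        echo 100 > /sys/class/hwmon/hwmon2/pwm1
        cat /sys/class/hwmon/hwmon2/fan1_input   # rpm readback; 0 = stalled
     The first number in each log line (189, 145, 123, ...) is that same raw 0-255 PWM value, and the 0 rpm readings at 39% and below suggest those duty cycles are under the fans' stall threshold, so a minimum PWM of at least roughly 123 looks plausible for this setup.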
  24. I've been re-reading this guide: https://wiki.unraid.net/File_System_Conversion I have an array of 11 data disks currently in ReiserFS, plus 1 parity. The 'most recommended' option in that guide is the "Mirror each disk with rsync, preserving parity" method. I've been mulling over whether to use the "Share based with inclusions, no parity" method instead. I have the option to move 4 disks' worth of data to another build, with no need to transfer it back; hence I should be able, I think, to convert 4 disks in one go using 4 screen sessions. All user shares have inclusions set, but honestly, if it's too much trouble to keep this config, I don't mind losing all the share config. I would like a second opinion on what I figure the steps would be (see the sketch after the list for the copy/verify commands):
     1. Empty 4 disks (say, 1-4) by moving all data from them to another NAS.
     2. Stop the array, change the file system type of disks 1-4 to XFS.
     3. Unassign the parity disk.
     4. Start the array; disks 1-4 will be formatted to XFS, and the data on them will be lost.
     5. Copy data from disks 5-8 to the newly formatted disks 1-4, respectively.
     6. Verify the data on disks 1-4.
     7. Stop the array, change the file system type of disks 5-8 to XFS.
     8. Start the array; disks 5-8 will be formatted to XFS, and the data on them will be lost.
     9. Copy data from disks 9-11 to the newly formatted disks 5-7.
     10. Verify the data on disks 5-7.
     11. Stop the array, change the file system type of disks 9-11 to XFS.
     12. Assign the parity disk back.
     13. Start the array; disks 9-11 will be formatted to XFS, and parity will be built.
     I hope there are no unforeseen issues with this plan.
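     A sketch of the copy and verify commands for steps 5-6, assuming the same rsync flags as in the wiki guide:
        # Copy disk5's contents onto the freshly formatted disk1
        rsync -avPX /mnt/disk5/ /mnt/disk1/
        # Verify by checksum: a dry run that lists any files that differ
        rsync -rcnv /mnt/disk5/ /mnt/disk1/
     An empty file list from the second command means source and destination match.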
  25. Just an update for my case: it turned out that the 300 W SFX unit (SSP-300SFB Active PFC F3, http://www2.seasonic.com/product/ssp-300sfb/) I got wasn't enough to power 7 drives (5x 7200 rpm, 2x 5400 rpm). The PSU has 1 cable with 1 Molex connector and 1 cable with 3 SATA connectors.
     Trial 1: the 1 Molex was split into 3 Molex and connected to the hot-swap backplane, while 2 SATA connectors powered the 2 bottom drives.
     Trial 2: the 1 Molex was used to power 4 drives, while the other 3 drives were powered by the SATA connectors.
     In both trials I experienced 'power resets' on random drives, usually not long after pressing the parity check or sync/rebuild button. I changed the PSU to a Seasonic 500 W (ATX) and then to a Corsair SFX 600 W; the problem didn't appear with either of these PSUs. Fingers crossed.
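     As a rough worked number for why 300 W was marginal (assuming the commonly cited ~2 A spin-up draw on the 12 V rail for a 7200 rpm 3.5" drive): 7 drives waking at once is roughly 7 x 2 A = 14 A, i.e. ~170 W on 12 V alone before the board, CPU, and fans, a big slice of a 300 W unit's 12 V budget (the exact rail rating is on the PSU label). A parity check or rebuild that spins up every drive simultaneously is exactly the worst case, which fits the timing of the resets.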