Report Comments posted by Andiroo2
-
This explains my hangs over the last few weeks. Tdarr running must be causing this.
-
I've turned off "Scan my library periodically" in Plex...let's see if that makes a difference. I (like everyone else) am running Tdarr on my libraries at the moment so it may be a month or two before my disks spin down to test this for reals.
-
I’ve had this issue since beta 35 of the last release. I’m on the latest stable release now and I still see my drives spinning up for SMART and then down again after the 15 min set time. Over and over again.
-
Confirming that this solved my issues. Delete the 2 files and reboot...thanks SO much!!
-
Some info from my log right after I manually spin down the array:
Mar 2 12:37:17 Tower emhttpd: spinning down /dev/sdl
Mar 2 12:37:20 Tower emhttpd: spinning down /dev/sdk
Mar 2 12:37:20 Tower emhttpd: spinning down /dev/sdh
Mar 2 12:37:21 Tower emhttpd: spinning down /dev/sdj
Mar 2 12:38:02 Tower kernel: sd 7:0:4:0: attempting task abort!scmd(0x000000007e3428ef), outstanding for 7017 ms & timeout 7000 ms
Mar 2 12:38:02 Tower kernel: sd 7:0:4:0: [sdl] tag#3214 CDB: opcode=0x85 85 06 20 00 d8 00 00 00 00 00 4f 00 c2 00 b0 00
Mar 2 12:38:02 Tower kernel: scsi target7:0:4: handle(0x000d), sas_address(0x4433221105000000), phy(5)
Mar 2 12:38:02 Tower kernel: scsi target7:0:4: enclosure logical id(0x5c81f660f5419d00), slot(6)
Mar 2 12:38:02 Tower kernel: sd 7:0:4:0: task abort: SUCCESS scmd(0x000000007e3428ef)
Mar 2 12:38:04 Tower emhttpd: read SMART /dev/sdl
Mar 2 12:38:12 Tower kernel: sd 7:0:3:0: attempting task abort!scmd(0x00000000637eab23), outstanding for 7016 ms & timeout 7000 ms
Mar 2 12:38:12 Tower kernel: sd 7:0:3:0: [sdk] tag#3223 CDB: opcode=0x85 85 06 20 00 d8 00 00 00 00 00 4f 00 c2 00 b0 00
Mar 2 12:38:12 Tower kernel: scsi target7:0:3: handle(0x000c), sas_address(0x4433221106000000), phy(6)
Mar 2 12:38:12 Tower kernel: scsi target7:0:3: enclosure logical id(0x5c81f660f5419d00), slot(5)
Mar 2 12:38:12 Tower kernel: sd 7:0:3:0: task abort: SUCCESS scmd(0x00000000637eab23)
Mar 2 12:38:15 Tower emhttpd: read SMART /dev/sdk
Mar 2 12:38:22 Tower kernel: sd 7:0:2:0: attempting task abort!scmd(0x000000005b9e412a), outstanding for 7063 ms & timeout 7000 ms
Mar 2 12:38:22 Tower kernel: sd 7:0:2:0: [sdj] tag#3236 CDB: opcode=0x85 85 06 20 00 d8 00 00 00 00 00 4f 00 c2 00 b0 00
Mar 2 12:38:22 Tower kernel: scsi target7:0:2: handle(0x000b), sas_address(0x4433221101000000), phy(1)
Mar 2 12:38:22 Tower kernel: scsi target7:0:2: enclosure logical id(0x5c81f660f5419d00), slot(2)
Mar 2 12:38:22 Tower kernel: sd 7:0:2:0: task abort: SUCCESS scmd(0x000000005b9e412a)
Mar 2 12:38:25 Tower emhttpd: read SMART /dev/sdj
Mar 2 12:38:32 Tower kernel: sd 7:0:0:0: attempting task abort!scmd(0x0000000000b292e2), outstanding for 7069 ms & timeout 7000 ms
Mar 2 12:38:32 Tower kernel: sd 7:0:0:0: [sdh] tag#3248 CDB: opcode=0x85 85 06 20 00 d8 00 00 00 00 00 4f 00 c2 00 b0 00
Mar 2 12:38:32 Tower kernel: scsi target7:0:0: handle(0x0009), sas_address(0x4433221100000000), phy(0)
Mar 2 12:38:32 Tower kernel: scsi target7:0:0: enclosure logical id(0x5c81f660f5419d00), slot(3)
Mar 2 12:38:32 Tower kernel: sd 7:0:0:0: task abort: SUCCESS scmd(0x0000000000b292e2)
Mar 2 12:38:35 Tower emhttpd: read SMART /dev/sdh
I have 4 array drives (3 + parity) attached to a Dell PERC H310 (flashed to LSI 9211-8i). I AM using Telegraf, but [inputs.smart] is disabled in the config files.
SSDs are connected directly to motherboard headers.
UPDATE before I posted: I found a separate [inputs.hddtemp] section in telegraf.conf and disabled it... that appears to have fixed my immediate spin-up issue.
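For anyone chasing the same symptom: both of these Telegraf input plugins poll the drives on every collection interval, which can wake (or keep awake) spun-down disks. A minimal sketch of the relevant telegraf.conf sections, commented out to disable them (the parameter values shown are assumptions and will vary by install):

```toml
# telegraf.conf -- disable the drive-polling collectors by commenting
# out (or deleting) their sections, then restart the Telegraf container.

# [[inputs.smart]]
#   # runs smartctl against each drive, spinning it up
#   path_smartctl = "/usr/sbin/smartctl"

# [[inputs.hddtemp]]
#   # queries the hddtemp daemon, which also touches the drives
#   address = "127.0.0.1:7634"
```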
-
On 2/12/2021 at 9:20 PM, trurl said:
"Don't make me tap the sign..."
-
On 12/3/2020 at 1:30 AM, craigr said:
I just set up a second cache pool with Beta 35 using two Samsung 850 PRO SSDs and it worked well. However, I couldn't help but notice that capital letters, numbers, and special characters are not permitted in pool names. I would have liked to just call this cache2 or SSD_Pool. Not really a big deal, but are there plans to allow this, or is it just a beta limitation?
So far I am loving having all my VM shares (isos, vdisks, system) on the SSD pool. Everything is running great.
craigr
How did you get your Docker containers and VMs to move over to the new cache pool? Did you have to manually move the files between pool A and pool B? I have set my appdata and system shares to prefer the new second cache pool, but nothing happens when I run the mover.
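In case it helps others hitting the same wall: mover skips files that are held open, so the services using appdata and system have to be stopped first. A sketch of the commonly recommended sequence (the exact mover invocation is an assumption; the GUI buttons do the same thing):

```
# 1. Stop the services holding files open:
#    Settings -> Docker -> Enable Docker: No
#    Settings -> VM Manager -> Enable VMs: No
# 2. Set each share's "Select cache pool" to Prefer: <new pool>.
# 3. Invoke mover (Main -> Move Now, or from a terminal):
mover start
# 4. Wait for mover to finish, then re-enable Docker and VM Manager.
```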
Unraid OS version 6.12.0-rc2 available
-
in Prereleases
Posted
Is there any support for converting a BTRFS cache pool to ZFS, or will it need to be a complete wipe and reformat?