sdesbure

Community Developer
Everything posted by sdesbure

  1. What I'll do:
* replace the controller card, from Broadcom to LSI
* upgrade disk4 (I was planning to upgrade disk3, as it's the smallest / nearly full, but let's go with disk4)
I'll keep you posted. Do you prefer that I mark your proposal as the "solution", or do we wait for the hardware?
  2. So I first did it with Docker / VMs enabled:

Linux 5.15.43-Unraid (nas) 05/31/2022 _x86_64_ (4 CPU)

11:38:38 AM DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
11:38:43 AM loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:43 AM loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:43 AM loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:43 AM loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:43 AM sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:43 AM sdc 11.20 1792.00 0.00 160.00 0.05 4.32 2.55 2.86
11:38:43 AM sdb 19.60 2234.40 0.00 114.00 0.18 9.21 6.06 11.88
11:38:43 AM sdd 21.60 1872.00 69.60 89.89 0.13 5.78 2.38 5.14
11:38:43 AM sde 13.40 1827.20 0.00 136.36 0.04 2.84 2.72 3.64
11:38:43 AM sdf 31.60 2077.60 68.80 67.92 6.10 186.13 27.04 85.46
11:38:43 AM sdg 0.20 0.00 8.80 44.00 0.00 0.00 1.00 0.02
11:38:43 AM sdh 11.60 1792.80 0.80 154.62 0.04 3.16 2.53 2.94

11:38:43 AM DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
11:38:48 AM loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:48 AM loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:48 AM loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:48 AM loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:48 AM sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:48 AM sdc 20.00 1760.00 0.00 88.00 0.04 1.83 2.75 5.50
11:38:48 AM sdb 55.40 2312.80 0.00 41.75 0.72 12.98 9.03 50.02
11:38:48 AM sdd 43.60 1989.60 247.20 51.30 0.28 5.11 2.71 11.82
11:38:48 AM sde 25.20 1823.20 0.00 72.35 0.09 3.76 3.55 8.94
11:38:48 AM sdf 58.80 2248.00 212.00 41.84 5.92 93.95 16.08 94.54
11:38:48 AM sdg 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:48 AM sdh 20.40 1761.60 0.00 86.35 0.03 1.38 1.81 3.70

11:38:48 AM DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
11:38:53 AM loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:53 AM loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:53 AM loop2 3.40 8.00 22.40 8.94 0.00 0.59 0.41 0.14
11:38:53 AM loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:53 AM sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:38:53 AM sdc 28.40 2008.80 0.00 70.73 0.06 1.99 2.15 6.12
11:38:53 AM sdb 82.60 2552.80 0.00 30.91 1.47 17.76 10.31 85.14
11:38:53 AM sdd 101.40 2341.60 523.20 28.25 11.64 113.16 3.55 35.98
11:38:53 AM sde 33.40 1680.80 0.00 50.32 0.12 3.46 3.51 11.74
11:38:53 AM sdf 102.60 2134.40 558.40 26.25 28.23 272.41 7.57 77.64
11:38:53 AM sdg 22.20 24.00 201.60 10.16 0.01 0.21 0.25 0.56
11:38:53 AM sdh 25.80 1558.40 0.00 60.40 0.02 0.88 1.59 4.10
^C

Average: DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
Average: loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: loop2 1.13 2.67 7.47 8.94 0.00 0.59 0.41 0.05
Average: loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: sdc 19.87 1853.60 0.00 93.30 0.05 2.38 2.43 4.83
Average: sdb 52.53 2366.67 0.00 45.05 0.79 15.02 9.33 49.01
Average: sdd 55.53 2067.73 280.00 42.28 4.01 70.96 3.18 17.65
Average: sde 24.00 1777.07 0.00 74.04 0.08 3.45 3.38 8.11
Average: sdf 64.33 2153.33 279.73 37.82 13.42 203.91 13.35 85.88
Average: sdg 7.47 8.00 70.13 10.46 0.00 0.21 0.26 0.19
Average: sdh 19.27 1704.27 0.27 88.47 0.03 1.51 1.86 3.58

(I stopped before the end, maybe I shouldn't have...)
Then I stopped them and waited for the load to cool down (from ~11 to ~2):

Linux 5.15.43-Unraid (nas) 05/31/2022 _x86_64_ (4 CPU)

11:50:26 AM DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
11:50:31 AM loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:31 AM loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:31 AM loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:31 AM loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:31 AM sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:31 AM sdc 399.00 134265.60 0.00 336.51 0.60 1.51 1.47 58.76
11:50:31 AM sdb 200.00 134400.00 0.00 672.00 6.87 34.34 5.00 99.92
11:50:31 AM sdd 399.60 134265.60 0.00 336.00 0.60 1.49 1.46 58.48
11:50:31 AM sde 399.80 134400.00 0.00 336.17 1.33 3.32 2.45 97.94
11:50:31 AM sdf 397.00 134400.00 0.00 338.54 1.87 4.71 2.51 99.80
11:50:31 AM sdg 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:31 AM sdh 397.20 134400.00 0.00 338.37 1.84 4.64 2.51 99.80

11:50:31 AM DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
11:50:36 AM loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:36 AM loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:36 AM loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:36 AM loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:36 AM sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:36 AM sdc 389.00 132115.20 0.00 339.63 0.65 1.66 1.51 58.86
11:50:36 AM sdb 196.60 132115.20 0.00 672.00 6.85 34.85 5.07 99.70
11:50:36 AM sdd 393.40 132115.20 0.00 335.83 0.58 1.47 1.45 57.22
11:50:36 AM sde 393.20 132115.20 0.00 336.00 1.30 3.30 2.44 95.86
11:50:36 AM sdf 390.60 132115.20 0.00 338.24 1.83 4.69 2.54 99.28
11:50:36 AM sdg 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:36 AM sdh 387.00 131968.00 0.00 341.00 1.85 4.78 2.56 98.94

11:50:36 AM DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
11:50:41 AM loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:41 AM loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:41 AM loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:41 AM loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:41 AM sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:41 AM sdc 397.40 135072.00 0.00 339.89 0.63 1.59 1.49 59.08
11:50:41 AM sdb 200.80 134937.60 0.00 672.00 6.84 34.05 4.96 99.54
11:50:41 AM sdd 402.00 135072.00 0.00 336.00 0.59 1.46 1.44 58.04
11:50:41 AM sde 401.40 135072.00 0.00 336.50 1.32 3.28 2.41 96.86
11:50:41 AM sdf 390.40 134937.60 0.00 345.64 1.89 4.83 2.52 98.36
11:50:41 AM sdg 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:41 AM sdh 394.60 135084.80 0.00 342.33 1.85 4.69 2.51 99.04

11:50:41 AM DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
11:50:46 AM loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:46 AM loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:46 AM loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:46 AM loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:46 AM sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:46 AM sdc 400.00 134400.00 0.00 336.00 0.60 1.50 1.47 58.82
11:50:46 AM sdb 200.00 134400.00 0.00 672.00 6.87 34.33 5.00 100.02
11:50:46 AM sdd 400.00 134400.00 0.00 336.00 0.60 1.50 1.46 58.46
11:50:46 AM sde 399.60 134265.60 0.00 336.00 1.33 3.33 2.46 98.18
11:50:46 AM sdf 399.20 134400.00 0.00 336.67 1.87 4.68 2.50 99.88
11:50:46 AM sdg 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:46 AM sdh 399.40 134400.00 0.00 336.50 1.84 4.61 2.50 99.90

11:50:46 AM DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
11:50:51 AM loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:51 AM loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:51 AM loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:51 AM loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:51 AM sda 0.60 0.00 0.30 0.50 0.00 3.00 3.33 0.20
11:50:51 AM sdc 380.00 129293.60 0.00 340.25 0.94 2.49 1.58 60.00
11:50:51 AM sdb 192.60 129293.60 0.00 671.31 6.49 33.71 4.91 94.64
11:50:51 AM sdd 384.20 129293.60 0.00 336.53 0.59 1.54 1.44 55.20
11:50:51 AM sde 383.00 129293.60 0.00 337.58 1.26 3.29 2.39 91.58
11:50:51 AM sdf 365.80 129297.60 0.00 353.47 2.05 5.59 2.58 94.38
11:50:51 AM sdg 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:50:51 AM sdh 366.80 129300.00 0.00 352.51 1.83 5.00 2.57 94.20

Average: DEV tps rkB/s wkB/s areq-sz aqu-sz await svctm %util
Average: loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: sda 0.12 0.00 0.06 0.50 0.00 3.00 3.33 0.04
Average: sdc 393.08 133029.28 0.00 338.43 0.68 1.74 1.50 59.10
Average: sdb 198.00 133029.28 0.00 671.87 6.78 34.26 4.99 98.76
Average: sdd 395.84 133029.28 0.00 336.07 0.59 1.49 1.45 57.48
Average: sde 395.40 133029.28 0.00 336.44 1.31 3.30 2.43 96.08
Average: sdf 388.60 133030.08 0.00 342.33 1.90 4.89 2.53 98.34
Average: sdg 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: sdh 389.00 133030.56 0.00 341.98 1.84 4.74 2.53 98.38

sdg is the cache and sda is the flash drive.
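For anyone who wants to reproduce the measurement: the output above is the sysstat per-device disk report. An invocation along these lines should produce the same columns (the exact flags are my assumption, not taken from the original post):

    # report block-device activity every 5 seconds until interrupted with Ctrl-C
    # -d = per-device statistics, -p = print real device names (sda, sdb, ...)
    sar -d -p 5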
  3. Yep, that's what I was planning to use (an HBA that works out of the box). Thanks!
  4. And thanks to both of you for helping me!
  5. I've just stopped all Docker containers and the VM (Jeedom) and launched a diagnostic. FYI, instead of ~15 MB/s, I now get ~33 MB/s (still less than on 6.9.x with all containers and the VM running). nas-diagnostics-20220531-1100.zip
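(In case it helps anyone reproduce this: stopping every running container from the console can be done with a one-liner like the following; a sketch assuming the stock Docker CLI, the VM itself being stopped from the webUI.)

    # collect the IDs of all running containers and stop them in one go
    docker stop $(docker ps -q)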
  6. Yes, of course, sorry. Here it is (from yesterday, but nothing has changed since). nas-diagnostics-20220530-2000.zip
  7. Hello, since I upgraded to 6.10.1 (and then 6.10.2), the parity check seems to be very slow (it went from ~1.5 days at 59.8 MB/s to 6 days at 15.3 MB/s)... I've looked at the logs and I can see that one of the drives negotiates at a low SATA speed (it's the purple one in the screenshot). It's still "quite" fast (and not that much slower than the others), so I was wondering whether something else could be the cause?
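For reference, the negotiated link speed can be read straight from the kernel log; a sketch of what I mean, assuming the standard libata boot messages:

    # each ATA link logs its negotiated speed; a drive that should run at
    # 6.0 Gbps but reports "SATA link up 1.5 Gbps" is the suspect one
    dmesg | grep -i "SATA link up"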
  8. Hello, I'm using a Marvell 88SE9215 controller for 4 of my 7 disks. It seems to work OK (those drives are the bottom ones, except the "purple" one and the cache), but you can see that their speed is lower. I was thinking of moving the drives to an LSI controller. Can I do it by just swapping the cards (and changing the cables)?
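Before swapping, it may help to confirm which disks sit on which controller; something like this should show it (my assumption, not taken from the original thread):

    # the by-path symlinks encode the PCI address of the controller each disk
    # is attached to, so the Marvell and LSI ports are easy to tell apart
    ls -l /dev/disk/by-path/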
  9. Hello, it seems to be a user-permission issue. Check whether the permissions on your folder are OK.
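A minimal sketch of what to check, assuming the usual Unraid ownership of nobody:users (the share name below is a placeholder):

    # inspect the current owner and permissions of the folder
    ls -ld /mnt/user/yourshare
    # if they are wrong, reset them to the usual Unraid defaults
    chown -R nobody:users /mnt/user/yourshare
    chmod -R u+rw,g+rw /mnt/user/yourshare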
  10. Well, I think that's because there's no answer from the trackers (I have the same issue with t411.io...). They tried to answer only with results from the category, but it breaks Sonarr...
  11. The update is there; you'll have to check it yourself, because I never had the issue...
  12. Hello, the update for Jackett is in progress and should be there within an hour!
  13. Hello, I was on holiday without my laptop, sorry. I updated Jackett to 0.6.3 two days ago and to 0.6.4 today.
  14. Hello, this is the continuation of the questions I asked in the preclear topic. Since I'm running Unraid 6.0, it seems more appropriate to post it here: I tried a new preclear, but I stopped it at the "start" of the pre-read (it was running at less than 1 MB/s...). Attached are the SMART status (which seems clean...) and the syslog (where we can see the errors shown before, but no other ones...). I tried another HDD on the same SATA card (but not the same port) and it worked smoothly. To be honest, I think I'll return the HDD to Amazon. What do you think? smart.txt syslog-pre-clear-2015-08-02.txt
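(The attached smart.txt is the kind of report smartmontools produces; a command along these lines, with the exact flags being my assumption:)

    # print identity, health status, attributes and self-test log for the drive
    smartctl -a /dev/sde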
  15. Hello RobJ. First, thank you for answering. Second, I didn't see the "Read Me First" and I apologize for that. To check whether the issue came from the disk or from something else, I've started a preclear on a spare drive (a 1 TB this time). So I had to reboot... I'll wait for that preclear to finish, then restart the preclear on the 3 TB (since it's so far so good on the 1 TB disk). And I'll create a topic accordingly. Thanks again, Sylvain
  16. Hello guys, I bought a new hard drive on Friday and installed it in my NAS (Unraid 6.0.1). It's a WD 3 TB (WDC_WD30EZRX-00D8PB0_WD-WMC4N0JAPE9L). So I used the preclear script to clear it, via the web plugin (from gfjardim). After a long pre-read phase (starting at 4.4 MB/s, ending at ~75 MB/s), the disk moved on to the preclear itself, which went a lot faster (~220 MB/s). At the end of the process the system behaved strangely (there was no way to see the output of the command), so I decided to reboot and redo it (without pre/post read). And I got this error:

================================================================== 1.15
= unRAID server Pre-Clear disk /dev/sde
= cycle 1 of 1, partition start on sector 1
=
= Step 1 of 10 - Copying zeros to first 2048k bytes DONE
= Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
= Step 3 of 10 - Disk is now cleared from MBR onward. DONE
= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4 DONE
= Step 5 of 10 - Clearing MBR code area DONE
= Step 6 of 10 - Setting MBR signature bytes DONE
= Step 7 of 10 - Setting partition 1 to precleared state DONE
= Step 8 of 10 - Notifying kernel we changed the partitioning DONE
= Step 9 of 10 - Creating the /dev/disk/by* entries DONE
= Step 10 of 10 - Verifying if the MBR is cleared. DONE
= Elapsed Time: 3:34:17
========================================================================1.15
==
== SORRY: Disk /dev/sde MBR could NOT be precleared
==
== out4= 00000
== out5= 00000
============================================================================
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000240382 s, 0.0 kB/s
0000000

I took a look into /var/log/messages and there's a huge number of these messages:

Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 CDB: opcode=0x88 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
Aug 1 13:54:19 nas kernel: blk_update_request: 139 callbacks suppressed
Aug 1 13:54:19 nas kernel: blk_update_request: I/O error, dev sde, sector 0
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 CDB: opcode=0x88 88 00 00 00 00 01 5d 50 a3 00 00 00 00 08 00 00
Aug 1 13:54:19 nas kernel: blk_update_request: I/O error, dev sde, sector 5860532992
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 CDB: opcode=0x88 88 00 00 00 00 01 5d 50 a3 00 00 00 00 08 00 00
Aug 1 13:54:19 nas kernel: blk_update_request: I/O error, dev sde, sector 5860532992
Aug 1 13:54:19 nas kernel: buffer_io_error: 134 callbacks suppressed
Aug 1 13:54:19 nas kernel: Buffer I/O error on dev sde, logical block 732566624, async page read
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 CDB: opcode=0x88 88 00 00 00 00 01 5d 50 a3 a0 00 00 00 08 00 00
Aug 1 13:54:19 nas kernel: blk_update_request: I/O error, dev sde, sector 5860533152
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 CDB: opcode=0x88 88 00 00 00 00 01 5d 50 a3 a0 00 00 00 08 00 00
Aug 1 13:54:19 nas kernel: blk_update_request: I/O error, dev sde, sector 5860533152
Aug 1 13:54:19 nas kernel: Buffer I/O error on dev sde, logical block 732566644, async page read
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 CDB: opcode=0x88 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
Aug 1 13:54:19 nas kernel: blk_update_request: I/O error, dev sde, sector 0
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 CDB: opcode=0x88 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
Aug 1 13:54:19 nas kernel: blk_update_request: I/O error, dev sde, sector 0
Aug 1 13:54:19 nas kernel: Buffer I/O error on dev sde, logical block 0, async page read
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 CDB: opcode=0x88 88 00 00 00 00 00 00 00 00 08 00 00 00 08 00 00
Aug 1 13:54:19 nas kernel: blk_update_request: I/O error, dev sde, sector 8
Aug 1 13:54:19 nas kernel: Buffer I/O error on dev sde, logical block 1, async page read
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 CDB: opcode=0x88 88 00 00 00 00 01 5d 50 a3 a8 00 00 00 08 00 00
Aug 1 13:54:19 nas kernel: blk_update_request: I/O error, dev sde, sector 5860533160
Aug 1 13:54:19 nas kernel: Buffer I/O error on dev sde, logical block 732566645, async page read
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Aug 1 13:54:19 nas kernel: sd 3:0:0:0: [sde] tag#0 CDB: opcode=0x88 88 00 00 00 00 01 5d 50 a3 a8 00 00 00 08 00 00
Aug 1 13:54:19 nas kernel: blk_update_request: I/O error, dev sde, sector 5860533160

and also some of these errors:

Aug 1 10:21:14 nas kernel: ata2.00: exception Emask 0x0 SAct 0x1fffffff SErr 0x0 action 0x6 frozen
Aug 1 10:21:14 nas kernel: ata2.00: failed command: WRITE FPDMA QUEUED
Aug 1 10:21:14 nas kernel: ata2.00: cmd 61/e8:00:b8:b8:09/07:00:00:00:00/40 tag 0 ncq 1036288 out
Aug 1 10:21:14 nas kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Aug 1 10:21:14 nas kernel: ata2.00: status: { DRDY }
...
Aug 1 10:21:14 nas kernel: ata2.00: status: { DRDY }
Aug 1 10:21:14 nas kernel: ata2: hard resetting link
Aug 1 10:21:24 nas kernel: ata2: softreset failed (timeout)
Aug 1 10:21:24 nas kernel: ata2: hard resetting link
Aug 1 10:21:34 nas kernel: ata2: softreset failed (timeout)
Aug 1 10:21:34 nas kernel: ata2: hard resetting link
Aug 1 10:22:09 nas kernel: ata2: softreset failed (timeout)
Aug 1 10:22:09 nas kernel: ata2: limiting SATA link speed to 1.5 Gbps
Aug 1 10:22:09 nas kernel: ata2: hard resetting link
Aug 1 10:22:14 nas kernel: ata2: softreset failed (timeout)
Aug 1 10:22:14 nas kernel: ata2: reset failed, giving up
Aug 1 10:22:14 nas kernel: ata2.00: disabled
Aug 1 10:22:14 nas kernel: ata2.00: device reported invalid CHS sector 0
...
Aug 1 10:22:14 nas kernel: ata2: EH complete
Aug 1 10:22:14 nas kernel: sd 3:0:0:0: [sde] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Aug 1 10:22:14 nas kernel: sd 3:0:0:0: [sde] tag#0 CDB: opcode=0x8a 8a 00 00 00 00 00 00 09 b0 d0 00 00 07 e8 00 00
Aug 1 10:22:14 nas kernel: blk_update_request: I/O error, dev sde, sector 635088
Aug 1 10:22:14 nas kernel: Buffer I/O error on dev sde, logical block 79386, lost async page write
Aug 1 10:22:14 nas kernel: Buffer I/O error on dev sde, logical block 79387, lost async page write
Aug 1 10:22:14 nas kernel: Buffer I/O error on dev sde, logical block 79388, lost async page write
Aug 1 10:22:14 nas kernel: Buffer I/O error on dev sde, logical block 79389, lost async page write
Aug 1 10:22:14 nas kernel: Buffer I/O error on dev sde, logical block 79390, lost async page write
Aug 1 10:22:14 nas kernel: Buffer I/O error on dev sde, logical block 79391, lost async page write
Aug 1 10:22:14 nas kernel: Buffer I/O error on dev sde, logical block 79392, lost async page write
Aug 1 10:22:14 nas kernel: Buffer I/O error on dev sde, logical block 79393, lost async page write
Aug 1 10:22:14 nas kernel: Buffer I/O error on dev sde, logical block 79394, lost async page write
Aug 1 10:22:14 nas kernel: Buffer I/O error on dev sde, logical block 79395, lost async page write

Any clue about what happened? Do I have to RMA the disk?
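Before RMA-ing it, a quick read test can confirm whether the drive (or its link) is really gone; a sketch, so double-check the device name before running anything like it:

    # try to read the first 1 GiB of the disk; on this drive it should fail
    # immediately with the same I/O errors as in the syslog above
    dd if=/dev/sde of=/dev/null bs=1M count=1024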
  17. Still having the same problem on my 2 trackers....
Yep, some trackers sometimes stop working. Can you retest (with version 0.5.1)?
  18. Thanks for the topic! I'm currently running Plex directly on Unraid and was considering moving to a container (especially for auto-updates...). I'm waiting for the owners' answers now.
  19. I built a new image with one more volume (/opt/Jackett/.config/Jackett), which should allow it to work correctly. I have tested it and it works with my two trackers (and the config is "saved").
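For anyone running it outside the Unraid template, a hypothetical run command showing the new volume mapping (the image name, port, and host path are assumptions; only the container-side path comes from the post above):

    # map a host folder onto the new config volume so the tracker settings
    # survive a container re-creation
    docker run -d --name jackett \
      -p 9117:9117 \
      -v /mnt/user/appdata/jackett:/opt/Jackett/.config/Jackett \
      sdesbure/jackett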
  20. It's built directly from git (my fork, which adds frenchtorrentdb). I'll check tonight, sorry!
  21. Just woke up. Gimme an hour or two. Added. You should add the <Date> entry to the template so that it shows up in the new / updated list. If you're running CA using Kode's feed, then the app should be available within two hours. If you're not using Kode's feed with CA, the app is available immediately. (Working on an update to alleviate this to a certain extent.)
Hi, and thanks! I'll test it in the next 2 hours!
  22. Hi, I tested only with t411.io and frenchtorrentdb (which I coded), where I have accounts. Which trackers did you test with?