bunkermagnus

Members
  • Posts: 15
  • Joined
  • Last visited

bunkermagnus's Achievements

Newbie (1/14)

Reputation: 1

  1. First off, I'd like to say that my entry into the UnRAID universe happened last spring, and I began with 6.8.3. This platform's viability and stability have blown me away. Now 6.9.x brings my first major update. I held it off for a while since 6.8.3 had been so good, but last night I decided it was time to make the jump to 6.9.1. Everything went really smoothly; I ran the upgrade assistant and followed its advice, like uninstalling the VFIO-PCI plugin etc. I have a question pertaining to VMs, and Windows VMs in particular. I noticed that since 6.9.1 runs on a newer version of the hypervisor layer, there is a new virtIO ISO version available, so which of the statements below is the most accurate?
     * You should upgrade the drivers in your Windows 10 guests to the latest (virtio-win-0.1.190-1) as soon as possible.
     * If it works fine, don't touch it!
     * It's not necessary, but you could see some performance benefits by upgrading your VirtIO drivers.
     Thank you.
  2. Thank you for clearing that up! It's the small nuances that get me; the devil is, as usual, in the details between "prefer" and "yes". I must have read these several times in the F1 help in the web UI, but the mover part escaped me somehow.
  3. Hi, the order you list here is how I was going to do this. I just wanted to ask you about step 3: don't you mean that you set the Domains and Appdata shares' Cache setting to "No" and then ran mover to move any data FROM the SSD? I just want to clarify that I understand the intent correctly before proceeding.
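     For anyone following the same steps, here is a minimal sketch of kicking off mover from the CLI and checking what is left on the pool. The paths and the `mover` command assume a standard Unraid install, and the share name is just an example:

     ```shell
     # Hedged sketch - assumes a standard Unraid layout (/mnt/cache, /mnt/user).
     SHARE="appdata"   # hypothetical share name used for illustration

     # With a share's "Use cache" set to Yes, mover migrates cache -> array.
     # Trigger it manually, then see whether anything is left on the pool:
     if [ -x /usr/local/sbin/mover ]; then
         /usr/local/sbin/mover start
         du -sh "/mnt/cache/$SHARE" 2>/dev/null || echo "nothing left on cache for $SHARE"
     fi
     ```

     The guard makes it a no-op on anything that isn't an Unraid box.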
  4. Yeah I figured it must be something network related closer to my end of things as I haven't seen a single post about someone with the same problem. The thing is that 5 minutes later the appfeed is there again like nothing happened. As for the DNS settings in my unraid server, they are locked down statically server side from day one.
  5. Hi, is there some maintenance or a problem going on with CA-related infrastructure? I've been getting this error (see image) often over the last couple of days. I've gone through my network, rebooted routers, checked DNS settings etc., but everything seems to be OK. This occurs randomly a couple of times per day, during the last week I would say. I'm on UnRAID 6.8.3 and CA plugin 2020.07.13, and I hadn't seen this error before the last week.
  6. Thank you. However, there is an issue that appeared after the 06.19 version for me and still persists: the dashboard widget is stuck on "Retrieving stream data..." If I click the stream icon in the widget it shows the correct data, but the widget simply refuses to update. I tried different browsers, Edge and Chrome, with the same result.
  7. First, thank you for putting this container together to tap into the potential of our unused CPU cycles. I've noticed a quirk: the container does its job and uploads the WU fine, but it fails to clean up the work folder afterwards, and the log gets spammed with the lines below. If I restart the container it cleans up fine on startup, but when the next WU is done it fails to clean up again.

     13:02:30:WU00:FS00:0xa7: Version: 0.0.18
     13:02:30:WU00:FS00:0xa7: Author: Joseph Coffland <joseph@cauldrondevelopment.com>
     13:02:30:WU00:FS00:0xa7: Copyright: 2019 foldingathome.org
     13:02:30:WU00:FS00:0xa7: Homepage: https://foldingathome.org/
     13:02:30:WU00:FS00:0xa7: Date: Nov 5 2019
     13:02:30:WU00:FS00:0xa7: Time: 06:13:26
     13:02:30:WU00:FS00:0xa7: Revision: 490c9aa2957b725af319379424d5c5cb36efb656
     13:02:30:WU00:FS00:0xa7: Branch: master
     13:02:30:WU00:FS00:0xa7: Compiler: GNU 8.3.0
     13:02:30:WU00:FS00:0xa7: Options: -std=c++11 -O3 -funroll-loops -fno-pie
     13:02:30:WU00:FS00:0xa7: Platform: linux2 4.19.0-5-amd64
     13:02:30:WU00:FS00:0xa7: Bits: 64
     13:02:30:WU00:FS00:0xa7: Mode: Release
     13:02:30:WU00:FS00:0xa7:************************************ Build *************************************
     13:02:30:WU00:FS00:0xa7: SIMD: avx_256
     13:02:30:WU00:FS00:0xa7:********************************************************************************
     13:02:30:WU00:FS00:0xa7:Project: 16806 (Run 7, Clone 236, Gen 36)
     13:02:30:WU00:FS00:0xa7:Unit: 0x0000002d82ed0b915eb41c47fe4bf238
     13:02:30:WU00:FS00:0xa7:Reading tar file core.xml
     13:02:30:WU00:FS00:0xa7:Reading tar file frame36.tpr
     13:02:30:WU00:FS00:0xa7:Digital signatures verified
     13:02:30:WU00:FS00:0xa7:Calling: mdrun -s frame36.tpr -o frame36.trr -cpt 15 -nt 2
     13:02:30:WU00:FS00:0xa7:Steps: first=18000000 total=500000
     13:02:31:WU00:FS00:0xa7:Completed 1 out of 500000 steps (0%)
     13:02:35:WU01:FS00:Upload 50.94%
     13:02:41:WU01:FS00:Upload complete
     13:02:42:WU01:FS00:Server responded WORK_ACK (400)
     13:02:42:WU01:FS00:Final credit estimate, 3275.00 points
     13:02:42:WU01:FS00:Cleaning up
     13:02:42:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
     13:02:42:WU01:FS00:Cleaning up
     13:02:42:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
     13:03:42:WU01:FS00:Cleaning up
     13:03:42:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
     13:05:19:WU01:FS00:Cleaning up
     13:05:19:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
     13:07:56:WU01:FS00:Cleaning up
     13:07:56:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
     13:08:20:WU00:FS00:0xa7:Completed 5000 out of 500000 steps (1%)
     13:12:11:WU01:FS00:Cleaning up
     13:12:11:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
     13:14:09:WU00:FS00:0xa7:Completed 10000 out of 500000 steps (2%)
     13:19:02:WU01:FS00:Cleaning up
     13:19:02:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
     13:19:58:WU00:FS00:0xa7:Completed 15000 out of 500000 steps (3%)
     13:25:47:WU00:FS00:0xa7:Completed 20000 out of 500000 steps (4%)
     13:30:08:WU01:FS00:Cleaning up
     13:30:08:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
     13:31:35:WU00:FS00:0xa7:Completed 25000 out of 500000 steps (5%)
     13:37:25:WU00:FS00:0xa7:Completed 30000 out of 500000 steps (6%)
     13:43:13:WU00:FS00:0xa7:Completed 35000 out of 500000 steps (7%)
     13:48:04:WU01:FS00:Cleaning up
     13:48:04:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
     13:49:02:WU00:FS00:0xa7:Completed 40000 out of 500000 steps (8%)
     13:54:51:WU00:FS00:0xa7:Completed 45000 out of 500000 steps (9%)
     14:00:40:WU00:FS00:0xa7:Completed 50000 out of 500000 steps (10%)
     14:06:29:WU00:FS00:0xa7:Completed 55000 out of 500000 steps (11%)
     14:12:19:WU00:FS00:0xa7:Completed 60000 out of 500000 steps (12%)
     14:17:07:WU01:FS00:Cleaning up
     14:17:07:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
     14:18:09:WU00:FS00:0xa7:Completed 65000 out of 500000 steps (13%)
     14:23:58:WU00:FS00:0xa7:Completed 70000 out of 500000 steps (14%)
     14:29:47:WU00:FS00:0xa7:Completed 75000 out of 500000 steps (15%)
     14:35:35:WU00:FS00:0xa7:Completed 80000 out of 500000 steps (16%)
     14:41:24:WU00:FS00:0xa7:Completed 85000 out of 500000 steps (17%)
     14:47:14:WU00:FS00:0xa7:Completed 90000 out of 500000 steps (18%)
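     A restart clears the slot because the stale files get removed before cleanup runs again; the same thing can be done by hand. This is a hedged illustration using a throwaway temp directory in place of the container's real data volume (e.g. the appdata share), so it is safe to run anywhere:

     ```shell
     # Hypothetical stand-in for the F@H data directory - NOT the real path.
     WORKROOT=$(mktemp -d)
     mkdir -p "$WORKROOT/work/01"
     touch "$WORKROOT/work/01/leftover.dat"  # simulates the stale file the client can't remove

     # Deleting the stale slot directory by hand is roughly what a container
     # restart does for you; afterwards the next WU's cleanup can succeed.
     rm -rf "$WORKROOT/work/01"
     [ -d "$WORKROOT/work/01" ] || echo "work/01 removed"
     ```

     On the real system you would point this at the container's work folder instead of the temp directory, with the container stopped first.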
  8. Hi Dan, I switched back to 6.8.3 after my Kernel panic and haven't had any crashes since then. For what it's worth, I did not use any remote SMB shares through unassigned devices.
  9. You're probably right, it's a fringe-case problem.
  10. Well, the lockup happened a couple of hours ago with the version that was new yesterday when I installed it. I notice you published a new version just recently, but I haven't tried it again. As I said, it's no biggie, but I suppose it could cause problems for someone who doesn't know about this. I only realized it by chance while searching Google for "Crashed webui interface"; someone mentioned that pausing certain Docker containers could cause this, so I quickly had to learn the CLI commands for listing and unpausing containers. Just to be clear, it happened when the Plex container was paused, not stopped; I'm guessing a paused container confuses the TCP/IP stack in this case.
  11. Hi there, and thanks for a great plugin. I'm sorry if this has been covered before, but I believe this plugin can hang the whole UnRAID web UI if the Plex Docker container is paused for too long, around an hour or so. I recently paused my Plex container while using the UnBalance plugin to consolidate some media folders; after letting the container sit in a paused state for about an hour, my UnRAID web UI was completely unreachable. The syslog extract below was spammed repeatedly until I SSH'ed into the server and unpaused the Plex container, at which point everything unlocked and started working again. I can't see any other connection between the web UI and the Plex container than this plugin, which I recently installed. This is not a major problem, as I'll just turn off the Plex container instead; I just thought I'd give you a heads-up. It's also entirely possible that I'm wrong about this; if so, I apologize.

     Jun 11 19:41:44 Sauron nginx: 2020/06/11 19:41:44 [error] 11020#11020: *516793 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.200, server: , request: "GET /plugins/plexstreams/ajax.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.1.250", referrer: "http://192.168.1.250/Dashboard"
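     For anyone else stuck at a hung web UI over SSH, these are the stock Docker CLI commands for finding and resuming paused containers (guarded so the snippet is a no-op on machines without Docker); a sketch, not Unraid-specific:

     ```shell
     FILTER="status=paused"   # docker ps filter that matches paused containers

     if command -v docker >/dev/null 2>&1; then
         docker ps --filter "$FILTER" --format '{{.Names}}'        # list paused containers
         docker ps --filter "$FILTER" -q | xargs -r docker unpause  # resume them all
     fi
     ```

     `docker unpause <name>` also works directly if you already know which container you paused.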
  12. Thank you for your reply. For now I have stepped back to the stable release, as I really don't need the beta for anything other than being able to see the CPU temp. Over the last week on the beta I have seen that my Ryzen 5 3600 never goes over 75C, so I feel comfortable letting the BIOS handle the fans from here. If the problem persists, I will probably set up the syslog server and/or start swapping out hardware parts like RAM or the PSU. Thank you!
  13. UnRAID noob that I am, I'm still just one week into my trial period. I have been working to transfer all my data and set up backup and media solutions with Docker containers. The system had been rock solid for the whole week, but just now, when I was nearly done with the containers I need, I had an unexpected kernel panic. I couldn't resist upgrading to 6.9.0 Beta 1, but I suppose I'll downgrade to the stable version again to make sure it's not my hardware that's at fault. My rig is a brand new Ryzen 5 3600 + Aorus X570 Elite build with 16 GB of Corsair RAM, a bunch of WD Red 3 TB disks, and a Samsung 970 Plus 512 GB NVMe M.2 drive as cache. I have no VMs installed, just a bunch of Docker containers and what seem to be the most common plugins. I'm attaching the diagnostics below, and I took a photo of my console's "last breath". sauron-diagnostics-20200606-1230.zip
  14. Thank you for your input, I'll be replacing that drive because it continues to throw errors in the log, all other 4 disks are working perfectly. 👍
  15. Hi there, I'm a new UnRAID user since a few days back. I bought a new motherboard (Aorus X570 Elite) and a Ryzen 5 3600 to use as the foundation of my next-gen NAS / Docker app server. I threw in a few disks of different quality and brand; I have built my first array with 3 WD Red 3 TB disks, but I haven't moved any critical data to the server. I have also attached 2 older disks, of which one is a WD Green 2 TB. This disk has been giving me warnings from the start. Sequence of events:
     * Upon installing the disk, UnRAID warned me about SMART "current pending sectors".
     * I ran an extended SMART test over approx. 8 hrs, which cleared the disk of errors.
     * I started a 1-cycle pre-clear of the disk, and new errors started appearing, which eventually led to the pre-clear plugin stating that the job had been aborted, with the following output:

     May 31 22:00:57 Sauron kernel: ata10: SATA max UDMA/133 abar m2048@0xf6400000 port 0xf6400380 irq 85
     May 31 22:00:57 Sauron kernel: ata10: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
     May 31 22:00:57 Sauron kernel: ata10.00: ATA-8: WDC WD20EARS-07MVWB0, WD-WCAZA4240147, 51.0AB51, max UDMA/133
     May 31 22:00:57 Sauron kernel: ata10.00: 3907029168 sectors, multi 16: LBA48 NCQ (depth 32), AA
     May 31 22:00:57 Sauron kernel: ata10.00: configured for UDMA/133
     May 31 22:00:57 Sauron kernel: sd 10:0:0:0: [sdf] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
     May 31 22:00:57 Sauron kernel: sd 10:0:0:0: [sdf] 4096-byte physical blocks
     May 31 22:00:57 Sauron kernel: sd 10:0:0:0: [sdf] Write Protect is off
     May 31 22:00:57 Sauron kernel: sd 10:0:0:0: [sdf] Mode Sense: 00 3a 00 00
     May 31 22:00:57 Sauron kernel: sd 10:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     May 31 22:00:57 Sauron kernel: sdf:
     May 31 22:00:57 Sauron kernel: sd 10:0:0:0: [sdf] Attached SCSI removable disk
     May 31 22:01:18 Sauron emhttpd: WDC_WD20EARS-07MVWB0_WD-WCAZA4240147 (sdf) 512 3907029168
     May 31 22:28:13 Sauron emhttpd: WDC_WD20EARS-07MVWB0_WD-WCAZA4240147 (sdf) 512 3907029168
     May 31 22:28:26 Sauron emhttpd: WDC_WD20EARS-07MVWB0_WD-WCAZA4240147 (sdf) 512 3907029168
     May 31 22:28:28 Sauron emhttpd: WDC_WD20EARS-07MVWB0_WD-WCAZA4240147 (sdf) 512 3907029168
     May 31 22:31:42 Sauron emhttpd: WDC_WD20EARS-07MVWB0_WD-WCAZA4240147 (sdf) 512 3907029168
     May 31 22:38:11 Sauron preclear_disk_WD-WCAZA4240147[1316]: Command: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --notify 2 --frequency 1 --cycles 1 --no-prompt /dev/sdf
     May 31 22:38:14 Sauron preclear_disk_WD-WCAZA4240147[1316]: Pre-Read: dd if=/dev/sdf of=/dev/null bs=2097152 skip=0 count=2000398934016 conv=notrunc,noerror iflag=nocache,count_bytes,skip_bytes
     Jun 1 04:55:04 Sauron preclear_disk_WD-WCAZA4240147[1316]: Zeroing: dd if=/dev/zero of=/dev/sdf bs=2097152 seek=2097152 count=2000396836864 conv=notrunc iflag=count_bytes,nocache,fullblock oflag=seek_bytes
     Jun 1 08:23:27 Sauron preclear.disk: Pausing preclear of disk 'sdf'
     Jun 1 08:24:05 Sauron preclear.disk: Resuming preclear of disk 'sdf'
     Jun 1 08:38:57 Sauron preclear.disk: Pausing preclear of disk 'sdf'
     Jun 1 08:39:41 Sauron preclear.disk: Resuming preclear of disk 'sdf'
     Jun 1 10:58:44 Sauron preclear_disk_WD-WCAZA4240147[1316]: Post-Read: cmp /tmp/.preclear/sdf/fifo /dev/zero
     Jun 1 10:58:44 Sauron preclear_disk_WD-WCAZA4240147[1316]: Post-Read: dd if=/dev/sdf of=/tmp/.preclear/sdf/fifo count=2096640 skip=512 conv=notrunc iflag=nocache,count_bytes,skip_bytes
     Jun 1 10:58:45 Sauron preclear_disk_WD-WCAZA4240147[1316]: Post-Read: cmp /tmp/.preclear/sdf/fifo /dev/zero
     Jun 1 10:58:45 Sauron preclear_disk_WD-WCAZA4240147[1316]: Post-Read: dd if=/dev/sdf of=/tmp/.preclear/sdf/fifo bs=2097152 skip=2097152 count=2000396836864 conv=notrunc iflag=nocache,count_bytes,skip_bytes
     Jun 1 12:09:48 Sauron kernel: ata10.00: exception Emask 0x0 SAct 0x98017 SErr 0x0 action 0x0
     Jun 1 12:09:48 Sauron kernel: ata10.00: irq_stat 0x40000008
     Jun 1 12:09:48 Sauron kernel: ata10.00: failed command: READ FPDMA QUEUED
     Jun 1 12:09:48 Sauron kernel: ata10.00: cmd 60/40:78:00:e4:f6/05:00:2e:00:00/40 tag 15 ncq dma 688128 in
     Jun 1 12:09:48 Sauron kernel: ata10.00: status: { DRDY ERR }
     Jun 1 12:09:48 Sauron kernel: ata10.00: error: { UNC }
     Jun 1 12:09:48 Sauron kernel: ata10.00: configured for UDMA/133
     Jun 1 12:09:48 Sauron kernel: sd 10:0:0:0: [sdf] tag#15 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=9s
     Jun 1 12:09:48 Sauron kernel: sd 10:0:0:0: [sdf] tag#15 Sense Key : 0x3 [current]
     Jun 1 12:09:48 Sauron kernel: sd 10:0:0:0: [sdf] tag#15 ASC=0x11 ASCQ=0x4
     Jun 1 12:09:48 Sauron kernel: sd 10:0:0:0: [sdf] tag#15 CDB: opcode=0x28 28 00 2e f6 e4 00 00 05 40 00
     Jun 1 12:09:48 Sauron kernel: blk_update_request: I/O error, dev sdf, sector 787932336 op 0x0:(READ) flags 0x84700 phys_seg 146 prio class 0
     Jun 1 12:09:48 Sauron kernel: ata10: EH complete
     Jun 1 12:31:10 Sauron kernel: ata10.00: exception Emask 0x0 SAct 0x2102ec04 SErr 0x0 action 0x0
     Jun 1 12:31:10 Sauron kernel: ata10.00: irq_stat 0x40000008
     Jun 1 12:31:10 Sauron kernel: ata10.00: failed command: READ FPDMA QUEUED
     Jun 1 12:31:10 Sauron kernel: ata10.00: cmd 60/40:58:00:6a:eb/05:00:3d:00:00/40 tag 11 ncq dma 688128 in
     Jun 1 12:31:10 Sauron kernel: ata10.00: status: { DRDY ERR }
     Jun 1 12:31:10 Sauron kernel: ata10.00: error: { UNC }
     Jun 1 12:31:10 Sauron kernel: ata10.00: configured for UDMA/133
     Jun 1 12:31:10 Sauron kernel: sd 10:0:0:0: [sdf] tag#11 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=3s
     Jun 1 12:31:10 Sauron kernel: sd 10:0:0:0: [sdf] tag#11 Sense Key : 0x3 [current]
     Jun 1 12:31:10 Sauron kernel: sd 10:0:0:0: [sdf] tag#11 ASC=0x11 ASCQ=0x4
     Jun 1 12:31:10 Sauron kernel: sd 10:0:0:0: [sdf] tag#11 CDB: opcode=0x28 28 00 3d eb 6a 00 00 05 40 00
     Jun 1 12:31:10 Sauron kernel: blk_update_request: I/O error, dev sdf, sector 1038838432 op 0x0:(READ) flags 0x84700 phys_seg 148 prio class 0
     Jun 1 12:31:10 Sauron kernel: ata10: EH complete
     Jun 1 12:31:15 Sauron kernel: ata10.00: exception Emask 0x0 SAct 0x80c00565 SErr 0x0 action 0x0
     Jun 1 12:31:15 Sauron kernel: ata10.00: irq_stat 0x40000008
     Jun 1 12:31:15 Sauron kernel: ata10.00: failed command: READ FPDMA QUEUED
     Jun 1 12:31:15 Sauron kernel: ata10.00: cmd 60/40:b0:00:04:ec/05:00:3d:00:00/40 tag 22 ncq dma 688128 in
     Jun 1 12:31:15 Sauron kernel: ata10.00: status: { DRDY ERR }
     Jun 1 12:31:15 Sauron kernel: ata10.00: error: { UNC }
     Jun 1 12:31:15 Sauron kernel: ata10.00: configured for UDMA/133
     Jun 1 12:31:15 Sauron kernel: sd 10:0:0:0: [sdf] tag#22 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=4s
     Jun 1 12:31:15 Sauron kernel: sd 10:0:0:0: [sdf] tag#22 Sense Key : 0x3 [current]
     Jun 1 12:31:15 Sauron kernel: sd 10:0:0:0: [sdf] tag#22 ASC=0x11 ASCQ=0x4
     Jun 1 12:31:15 Sauron kernel: sd 10:0:0:0: [sdf] tag#22 CDB: opcode=0x28 28 00 3d ec 04 00 00 05 40 00
     Jun 1 12:31:15 Sauron kernel: blk_update_request: I/O error, dev sdf, sector 1038878848 op 0x0:(READ) flags 0x84700 phys_seg 24 prio class 0
     Jun 1 12:31:15 Sauron kernel: ata10: EH complete
     Jun 1 12:31:19 Sauron kernel: ata10.00: exception Emask 0x0 SAct 0x3420a SErr 0x0 action 0x0
     Jun 1 12:31:19 Sauron kernel: ata10.00: irq_stat 0x40000008
     Jun 1 12:31:19 Sauron kernel: ata10.00: failed command: READ FPDMA QUEUED
     Jun 1 12:31:19 Sauron kernel: ata10.00: cmd 60/08:70:88:08:ec/00:00:3d:00:00/40 tag 14 ncq dma 4096 in
     Jun 1 12:31:19 Sauron kernel: ata10.00: status: { DRDY ERR }
     Jun 1 12:31:19 Sauron kernel: ata10.00: error: { UNC }
     Jun 1 12:31:19 Sauron kernel: ata10.00: configured for UDMA/133
     Jun 1 12:31:19 Sauron kernel: sd 10:0:0:0: [sdf] tag#14 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=2s
     Jun 1 12:31:19 Sauron kernel: sd 10:0:0:0: [sdf] tag#14 Sense Key : 0x3 [current]
     Jun 1 12:31:19 Sauron kernel: sd 10:0:0:0: [sdf] tag#14 ASC=0x11 ASCQ=0x4
     Jun 1 12:31:19 Sauron kernel: sd 10:0:0:0: [sdf] tag#14 CDB: opcode=0x28 28 00 3d ec 08 88 00 00 08 00
     Jun 1 12:31:19 Sauron kernel: blk_update_request: I/O error, dev sdf, sector 1038878856 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
     Jun 1 12:31:19 Sauron kernel: Buffer I/O error on dev sdf, logical block 129859857, async page read
     Jun 1 12:31:19 Sauron kernel: ata10: EH complete
     Jun 1 12:31:22 Sauron kernel: ata10.00: exception Emask 0x0 SAct 0x348000 SErr 0x0 action 0x0
     Jun 1 12:31:22 Sauron kernel: ata10.00: irq_stat 0x40000008
     Jun 1 12:31:22 Sauron kernel: ata10.00: failed command: READ FPDMA QUEUED
     Jun 1 12:31:22 Sauron kernel: ata10.00: cmd 60/08:90:88:08:ec/00:00:3d:00:00/40 tag 18 ncq dma 4096 in
     Jun 1 12:31:22 Sauron kernel: ata10.00: status: { DRDY ERR }
     Jun 1 12:31:22 Sauron kernel: ata10.00: error: { UNC }
     Jun 1 12:31:22 Sauron kernel: ata10.00: configured for UDMA/133
     Jun 1 12:31:22 Sauron kernel: sd 10:0:0:0: [sdf] tag#18 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=2s
     Jun 1 12:31:22 Sauron kernel: sd 10:0:0:0: [sdf] tag#18 Sense Key : 0x3 [current]
     Jun 1 12:31:22 Sauron kernel: sd 10:0:0:0: [sdf] tag#18 ASC=0x11 ASCQ=0x4
     Jun 1 12:31:22 Sauron kernel: sd 10:0:0:0: [sdf] tag#18 CDB: opcode=0x28 28 00 3d ec 08 88 00 00 08 00
     Jun 1 12:31:22 Sauron kernel: blk_update_request: I/O error, dev sdf, sector 1038878856 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
     Jun 1 12:31:22 Sauron kernel: Buffer I/O error on dev sdf, logical block 129859857, async page read
     Jun 1 12:31:22 Sauron kernel: ata10: EH complete
     Jun 1 12:31:25 Sauron preclear_disk_WD-WCAZA4240147[1316]: Post-Read: dd output: dd: error reading '/dev/sdf': Input/output error DONE

     unRAID Server Preclear of disk WD-WCAZA4240147
     Cycle 1 of 1, partition start on sector 64.
     Step 1 of 5 - Pre-read verification: [6:16:50 @ 88 MB/s] SUCCESS
     Step 2 of 5 - Zeroing the disk: [6:01:32 @ 92 MB/s] SUCCESS
     Step 3 of 5 - Writing unRAID's Preclear signature: SUCCESS
     Step 4 of 5 - Verifying unRAID's Preclear signature: SUCCESS
     Step 5 of 5 - Post-Read verification: FAIL
     Cycle elapsed time: 13:51:04 | Total elapsed time: 13:51:04

     S.M.A.R.T. Status (device type: default)
     ATTRIBUTE                        INITIAL   STATUS
     5-Reallocated_Sector_Ct          0         -
     9-Power_On_Hours                 34221     -
     194-Temperature_Celsius          27        -
     196-Reallocated_Event_Count      0         -
     197-Current_Pending_Sector       229       -
     198-Offline_Uncorrectable        0         -
     199-UDMA_CRC_Error_Count         0         -
     SMART overall-health self-assessment test result: PASSED

     --> FAIL: Post-Read verification failed. Your drive is not zeroed.

     To me, this sounds like a classic case of an old disk dying from surface read/write errors. So I just wanted to verify with the expertise here:
     * Should the disk be thrown away/recycled?
     * Does the log show any other kind of error (controller, cable, UDMA etc.), or is this a typical HDD failure?
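     The attribute that matters most in the report above is Current_Pending_Sector. Here is a hedged sketch of pulling it out; the hard-coded line stands in for live smartctl output, and the threshold check is just an illustration:

     ```shell
     # Stand-in for one line of the preclear/SMART report above:
     LINE="197-Current_Pending_Sector 229"
     PENDING=$(echo "$LINE" | awk '{print $2}')

     if [ "$PENDING" -gt 0 ]; then
         echo "$PENDING pending sectors - back up the data and plan to replace the drive"
     fi

     # On the server itself (smartmontools ships with Unraid), roughly:
     #   smartctl -A /dev/sdf | awk '/Current_Pending_Sector/ {print $10}'
     ```

     A nonzero pending-sector count that keeps growing is the usual sign the surface is failing, even while the overall SMART self-assessment still reports PASSED.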