TexasUnraid Posted June 4, 2021

So I've been playing around with Chia and pulling old hardware out of the closet, one piece of which is a 6TB drive that was dropped years ago and picked up some bad sectors. The sector count has not increased since, and after ~10 badblocks runs at this point, not a single error has been detected. The issue is that while this drive works perfectly fine on Windows or Ubuntu, when I try to use it on Unraid I instantly get errors and it locks the drive into read-only mode. The format doesn't seem to matter, although with XFS it sometimes gives me a few minutes before locking the drive. Here are some examples of the errors it gives:

Jun 1 13:06:27 NAS emhttpd: read SMART /dev/sdh
Jun 1 13:06:57 NAS kernel: sd 1:1:7:0: [sdh] tag#122 UNKNOWN(0x2003) Result: hostbyte=0x07 driverbyte=0x08 cmd_age=0s
Jun 1 13:06:57 NAS kernel: sd 1:1:7:0: [sdh] tag#122 Sense Key : 0xb [current] [descriptor]
Jun 1 13:06:57 NAS kernel: sd 1:1:7:0: [sdh] tag#122 ASC=0x0 ASCQ=0x0
Jun 1 13:06:57 NAS kernel: sd 1:1:7:0: [sdh] tag#122 CDB: opcode=0x8a 8a 08 00 00 00 01 80 00 80 78 00 00 00 08 00 00
Jun 1 13:06:57 NAS kernel: blk_update_request: I/O error, dev sdh, sector 6442483832 op 0x1:(WRITE) flags 0x29800 phys_seg 1 prio class 0
Jun 1 14:05:25 NAS emhttpd: shcmd (40875): /usr/sbin/cryptsetup luksOpen /dev/sdh1 Chia_Farm_3
Jun 1 14:05:59 NAS kernel: sd 1:1:7:0: [sdh] tag#184 UNKNOWN(0x2003) Result: hostbyte=0x07 driverbyte=0x08 cmd_age=0s
Jun 1 14:05:59 NAS kernel: sd 1:1:7:0: [sdh] tag#184 Sense Key : 0xb [current] [descriptor]
Jun 1 14:05:59 NAS kernel: sd 1:1:7:0: [sdh] tag#184 ASC=0x0 ASCQ=0x0
Jun 1 14:05:59 NAS kernel: sd 1:1:7:0: [sdh] tag#184 CDB: opcode=0x8a 8a 08 00 00 00 01 80 00 80 78 00 00 00 08 00 00
Jun 1 14:05:59 NAS kernel: blk_update_request: I/O error, dev sdh, sector 6442483832 op 0x1:(WRITE) flags 0x29800 phys_seg 1 prio class 0
Jun 1 18:06:07 NAS emhttpd: spinning down /dev/sdh
Jun 4 08:33:59 NAS kernel: BTRFS info (device sdh1): disk space caching is enabled
Jun 4 08:33:59 NAS kernel: BTRFS info (device sdh1): has skinny extents
Jun 4 08:33:59 NAS unassigned.devices: Successfully mounted '/dev/sdh1' on '/mnt/disks/Chia_Farm_3'.
Jun 4 08:34:29 NAS kernel: sd 1:1:7:0: [sdh] tag#309 UNKNOWN(0x2003) Result: hostbyte=0x07 driverbyte=0x08 cmd_age=0s
Jun 4 08:34:29 NAS kernel: sd 1:1:7:0: [sdh] tag#309 Sense Key : 0xb [current] [descriptor]
Jun 4 08:34:29 NAS kernel: sd 1:1:7:0: [sdh] tag#309 ASC=0x0 ASCQ=0x0
Jun 4 08:34:29 NAS kernel: sd 1:1:7:0: [sdh] tag#309 CDB: opcode=0x8a 8a 08 00 00 00 00 00 0f a0 80 00 00 00 08 00 00
Jun 4 08:34:29 NAS kernel: blk_update_request: I/O error, dev sdh, sector 1024128 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
Jun 4 08:34:29 NAS kernel: BTRFS warning (device sdh1): lost page write due to IO error on /dev/sdh1 (-5)
Jun 4 08:34:29 NAS kernel: BTRFS error (device sdh1): bdev /dev/sdh1 errs: wr 1, rd 0, flush 0, corrupt 0, gen 0
Jun 4 08:34:29 NAS kernel: BTRFS error (device sdh1): error writing primary super block to device 1
Jun 4 08:34:29 NAS kernel: BTRFS: error (device sdh1) in write_all_supers:3915: errno=-5 IO failure (1 errors while writing supers)
Jun 4 08:34:29 NAS kernel: BTRFS info (device sdh1): forced readonly
Jun 4 08:34:29 NAS kernel: BTRFS warning (device sdh1): Skipping commit of aborted transaction.
Jun 4 08:34:29 NAS kernel: BTRFS: error (device sdh1) in cleanup_transaction:1942: errno=-5 IO failure
Jun 4 08:37:53 NAS unassigned.devices: Unmount cmd: /sbin/umount '/dev/sdh1' 2>&1

I have the same drive with the same format in an Ubuntu machine right now, reading and writing happily. Going to plot a Chia plot on it to give it a workout, I think. Any ideas why the drive works fine on other systems but Unraid/Unassigned Devices refuses to work with it?
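For anyone triaging a log like this, it helps to notice that the failures repeat at the same LBAs. A quick sketch using standard awk/sort to tally which sectors are failing per device; the sample lines below are copied from the log above, but in practice you would feed it something like `grep blk_update_request /var/log/syslog`:

```shell
#!/bin/sh
# Tally failing sectors per device from kernel blk_update_request lines.
summarize_errors() {
  awk '/blk_update_request/ {
         for (i = 1; i <= NF; i++) {
           if ($i == "dev")    { dev = $(i + 1); gsub(",", "", dev) }
           if ($i == "sector") { sec = $(i + 1) }
         }
         print dev, sec
       }' "$@" | sort | uniq -c | sort -rn
}

# Sample lines copied from the syslog excerpt above:
summarize_errors <<'EOF'
Jun 1 13:06:57 NAS kernel: blk_update_request: I/O error, dev sdh, sector 6442483832 op 0x1:(WRITE) flags 0x29800 phys_seg 1 prio class 0
Jun 1 14:05:59 NAS kernel: blk_update_request: I/O error, dev sdh, sector 6442483832 op 0x1:(WRITE) flags 0x29800 phys_seg 1 prio class 0
Jun 4 08:34:29 NAS kernel: blk_update_request: I/O error, dev sdh, sector 1024128 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
EOF
```

If the same sector keeps appearing, that points at a specific bad spot (or a command the controller consistently rejects) rather than a drive that is failing all over.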
TexasUnraid Posted June 12, 2021 (Author)

Just to add a bit more to this: I have been using this drive, formatted as BTRFS, on Ubuntu for over a week now as a transfer drive for Chia plots. I've filled it up a few times so far, with no issues at all on the Ubuntu machine. If I plug it into Unraid, though, it either refuses to mount the drive at all or locks it into read-only mode. What is different between them that would cause Unraid to refuse to use the drive when it obviously is still usable on Ubuntu?
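When BTRFS hits a write error it flips the filesystem read-only (the "forced readonly" line in the log), so a quick way to see the current state on either machine is to check `/proc/mounts` and the per-device error counters with `btrfs device stats /mnt/disks/Chia_Farm_3`. A minimal sketch of the mount-flag check, written as a helper that parses `/proc/mounts`-style lines (the sample line below mirrors the state the log above ends in):

```shell
#!/bin/sh
# Report whether a given mount point is mounted read-only,
# judging by the options field of /proc/mounts-style input.
is_readonly() {  # $1 = mount point, stdin = /proc/mounts content
  awk -v mp="$1" '$2 == mp {
    n = split($4, opts, ",")
    for (i = 1; i <= n; i++)
      if (opts[i] == "ro") { print "readonly"; exit }
    print "readwrite"
  }'
}

# Example (sample line; on a live box use: is_readonly <mp> < /proc/mounts):
is_readonly /mnt/disks/Chia_Farm_3 <<'EOF'
/dev/sdh1 /mnt/disks/Chia_Farm_3 btrfs ro,noatime,space_cache 0 0
EOF
# prints: readonly
```

A rising `wr` counter in `btrfs device stats` on Unraid, versus zeros on Ubuntu, would confirm the writes are only failing on the one box.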
JorgeB Posted June 13, 2021

18 hours ago, TexasUnraid said: What is different between them that would cause Unraid to refuse to use the drive when it obviously is still usable on Ubuntu?

There shouldn't be anything that would cause this just from using Unraid. Same or different controller with Ubuntu?
TexasUnraid Posted June 13, 2021 (Author)

5 hours ago, JorgeB said: There shouldn't be anything that would cause this just from using Unraid. Same or different controller with Ubuntu?

I have swapped controllers a few times during testing, but at some point they were both on LSI cards in IT mode. All my HBAs are in IT/passthrough mode, so would they have any say in this? I could try booting Unraid on the secondary system to see if it can mount the drive there, I suppose, so the hardware is completely ruled out.
JorgeB Posted June 13, 2021

If it's not the hardware, it could be an issue with the LSI driver, if a different version is in use; I can't think of anything else that would make sense.
TexasUnraid Posted June 13, 2021 (Author)

2 minutes ago, JorgeB said: If it's not the hardware, it could be an issue with the LSI driver, if a different version is in use; I can't think of anything else that would make sense.

It is currently on an Adaptec HBA. I will try booting the other system with Unraid once it finishes what it is doing and see if the drive will mount there, to completely rule out hardware and driver issues.

Edited June 13, 2021 by TexasUnraid
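To be sure which controller and driver a disk is actually behind on each boot, you can follow the sysfs link with `readlink -f /sys/block/sdh/device` and then look up the PCI address with `lspci -k`. As a sketch, the helper below just extracts the last PCI address from such a sysfs path; the sample path is made up for illustration, not taken from this system:

```shell
#!/bin/sh
# Pull the last PCI bus address (domain:bus:dev.fn) out of a sysfs
# device path such as the one readlink -f /sys/block/sdX/device prints.
pci_slot_of() {
  printf '%s\n' "$1" \
    | grep -o '[0-9a-f]\{4\}:[0-9a-f]\{2\}:[0-9a-f]\{2\}\.[0-7]' \
    | tail -n 1
}

# Hypothetical example path for an HBA-attached disk:
pci_slot_of "/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host1/target1:1:7/1:1:7:0"
# prints: 0000:01:00.0
```

Feeding that address to `lspci -k -s 0000:01:00.0` shows the controller model and which kernel driver bound to it, so you can compare the Unraid and Ubuntu boots directly.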
TexasUnraid Posted June 13, 2021 (Author)

Looks like you are on to something. It turns out this drive is now connected to an onboard SAS port on the secondary machine (I forgot it was connected to some of the bays), and when I booted Unraid there it was able to mount the drive and write to it briefly. So if it is the SAS controller, why would this be the case? Could it be that the onboard Intel SAS controller doesn't have the advanced SAS features, and that is what is causing issues on the other controllers?

Edited June 13, 2021 by TexasUnraid
JorgeB Posted June 14, 2021

14 hours ago, TexasUnraid said: So if it is the SAS controller, why would this be the case?

Sometimes there are compatibility issues between a specific device and a controller. For example, some recent 2.5" SATA HGST disks don't work correctly with LSI controllers, and I also seem to remember reading that some implementations of the Intel SCU SAS controller could be kind of iffy.
TexasUnraid Posted June 14, 2021 (Author)

Odd, it is just a regular WD Blue 6TB drive, and it is the one working on the SCU SAS but not on the LSI or Adaptec.