tr0910

Everything posted by tr0910

  1. @jortan After a week, this dodgy drive had only got up to 18% resilvered, and I got fed up with the process. I dropped a known good drive into the server and pulled the dodgy one. This is what it said:

       zpool status
         pool: MFS2
        state: DEGRADED
       status: One or more devices is currently being resilvered. The pool will
               continue to function, possibly in a degraded state.
       action: Wait for the resilver to complete.
         scan: resilver in progress since Mon Jul 26 18:28:07 2021
               824G scanned at 243M/s, 164G issued at 48.5M/s, 869G total
               166G resilvered, 18.91% done, 04:08:00 to go
       config:
               NAME                       STATE     READ WRITE CKSUM
               MFS2                       DEGRADED     0     0     0
                 mirror-0                 DEGRADED     0     0     0
                   replacing-0            DEGRADED   387 14.4K     0
                     3739555303482842933  UNAVAIL      0     0     0  was /dev/sdf1/old
                     5572663328396434018  UNAVAIL      0     0     0  was /dev/disk/by-id/ata-WDC_WD30EZRS-00J99B0_WD-WCAWZ1999111-part1
                     sdf                  ONLINE       0     0     0  (resilvering)
                   sdg                    ONLINE       0     0     0
       errors: No known data errors

     It passed the test; 4 hours later we have this:

       zpool status
         pool: MFS2
        state: ONLINE
       status: Some supported features are not enabled on the pool. The pool can
               still be used, but some features are unavailable.
       action: Enable all features using 'zpool upgrade'. Once this is done,
               the pool may no longer be accessible by software that does not
               support the features. See zpool-features(5) for details.
         scan: resilvered 832G in 04:09:34 with 0 errors on Tue Aug 3 07:13:24 2021
       config:
               NAME        STATE     READ WRITE CKSUM
               MFS2        ONLINE       0     0     0
                 mirror-0  ONLINE       0     0     0
                   sdf     ONLINE       0     0     0
                   sdg     ONLINE       0     0     0
       errors: No known data errors
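Incidentally, the status output in this post suggests running 'zpool upgrade'. A minimal sketch of that step, assuming the pool name from the output (note that enabling feature flags is one-way: an older ZFS build, e.g. after an unRaid plugin downgrade, may refuse to import the upgraded pool):

```shell
# Sketch only -- review before running on the unRaid console.
# Enabling all feature flags is irreversible for this pool.
POOL=MFS2
UPGRADE_CMD="zpool upgrade $POOL"   # enables all supported feature flags
echo "$UPGRADE_CMD"
```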
  2. Yep, and just as it passed 8% we had a power blink from a lightning storm; I intentionally did not have this plugged into the UPS. It failed gracefully, but restarted from zero. I have perfect drives that I will replace this one with, but why not experience all of ZFS's quirks while I have the chance? If the drive fails during resilvering I won't be surprised. If ZFS can manage resilvering without getting confused by this dodgy hard drive, I will be impressed.
  3. @glennv @jortan I have installed a drive that is not perfect and started the resilvering (this drive has some questionable sectors). Might as well start with the worst possible case and see what happens if resilvering fails (grin). I have Docker and a VM running from the degraded mirror while the resilvering is going on; hopefully this doesn't confuse the resilvering. How many days should a resilver take to complete on a 3tb drive? It's been running for over 24 hours now.

       zpool status
         pool: MFS2
        state: DEGRADED
       status: One or more devices is currently being resilvered. The pool will
               continue to function, possibly in a degraded state.
       action: Wait for the resilver to complete.
         scan: resilver in progress since Mon Jul 26 18:28:07 2021
               545G scanned at 4.30M/s, 58.6G issued at 473K/s, 869G total
               49.1G resilvered, 6.75% done, no estimated completion time
       config:
               NAME                       STATE     READ WRITE CKSUM
               MFS2                       DEGRADED     0     0     0
                 mirror-0                 DEGRADED     0     0     0
                   replacing-0            DEGRADED     0     0     0
                     3739555303482842933  FAULTED      0     0     0  was /dev/sdf1
                     sdi                  ONLINE       0     0     0  (resilvering)
                   sdf                    ONLINE       0     0     0
       errors: No known data errors
  4. (bump) Has anyone done the zpool replace? What is the unRaid syntax for the replacement drive? zpool status is reporting strange device names above.
  5. unRaid has a dedicated following, but there are some areas of general data integrity and security that unRaid hasn't developed as far as its Docker and VM support. I would like OpenZFS baked in at some point, and I have seen some interest from the developers, but they have to get around the Oracle legal bogeyman. I have seen no discussion around SnapRAID. Check out ZFS here.
  6. I need to do a zpool replace, but what is the syntax for use with unRaid? I'm not sure how to reference the failed disk, and I need to replace it without trashing the ZFS mirror. A 2-disk mirror has dropped one device; Unassigned Devices does not even see the failing drive at all any more. I rebooted and swapped the slots for these 2 mirrored disks, and the same problem remains; the failure follows the missing disk.

       zpool status -x
         pool: MFS2
        state: DEGRADED
       status: One or more devices could not be used because the label is missing or
               invalid. Sufficient replicas exist for the pool to continue
               functioning in a degraded state.
       action: Replace the device using 'zpool replace'.
          see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
         scan: scrub repaired 0B in 03:03:13 with 0 errors on Thu Apr 29 08:32:11 2021
       config:
               NAME                     STATE     READ WRITE CKSUM
               MFS2                     DEGRADED     0     0     0
                 mirror-0               DEGRADED     0     0     0
                   3739555303482842933  FAULTED      0     0     0  was /dev/sdf1
                   sdf                  ONLINE       0     0     0

     My ZFS pool had 2 x 3tb Seagate spinning rust for VMs and Docker. Both VMs and Docker seem to continue to work with a failed drive, but aren't working as fast. I have installed another 3tb drive that I want to replace the failed drive with. Here is Unassigned Devices with the current ZFS disk and a replacement disk that was in the array previously. I will do the zpool replace, but what is the syntax for use with unRaid? Is it:

       zpool replace 3739555303482842933 sdi
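For reference, the general form is 'zpool replace <pool> <old-device> <new-device>', where the old device can be referenced by the numeric GUID that zpool status prints. A sketch assuming the pool and device names from the status above (a /dev/disk/by-id path for the new disk is usually safer on unRaid, since sdX letters can shuffle between boots):

```shell
# Sketch: replace the FAULTED mirror member, referenced by its GUID,
# with the new disk.
POOL=MFS2
OLD_GUID=3739555303482842933    # GUID zpool status shows for the faulted device
NEW_DEV=sdi                     # replacement disk; a /dev/disk/by-id path is safer
REPLACE_CMD="zpool replace $POOL $OLD_GUID $NEW_DEV"
echo "$REPLACE_CMD"
# Afterwards, watch the resilver with: zpool status -v MFS2
```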
  7. Pass through of iGPU is not often done, and not required for most Win10 use via RDP. Mine was not passed through.
  8. I don't have this combo, but a similar one. Windows 10 will load and run fine with the integrated graphics on mine. I'm using Windows RDP for most VM access. The only downside is that video performance is nowhere near bare metal. Perfectly usable for Office applications and Internet browsers, and totally fine for programming, but weak for anything where you need quick response from keyboard and mouse, such as gaming. The upside is that RDP runs over the network, so no separate cabling for video or mouse. For bare metal performance a dedicated video card for each VM is required, and then you need video and keyboard/mouse cabling.
  9. Every drive has a death sentence. But just like Mark Twain, "the rumors of my demise are greatly exaggerated". It's not so much the number of reallocated sectors that is worrying, but whether the drive is stable and is not adding more reallocated sectors on a regular basis. Use it with caution, (maybe run a second preclear to see what happens) and if it doesn't grow any more bad sectors, put it to work. I have had 10 yr old drives continue to perform flawlessly, and I have had them die sudden and violent deaths much younger. Keep your parity valid, and also backup important data separately. Parity is not backup.
  10. I've attempted to move the docker image to ZFS along with appdata. VM's are working. Docker refuses to start. Do I need to adjust the BTRFS image type? Correction, VM's are not working once the old cache drive is disconnected.
  11. ZFS was not responsible for the problem. I have a small cache drive, and some of the files for Docker and the VMs still come from there at startup. This drive didn't show up on boot. Powering down, and making sure this drive came up, resulted in VMs and Docker behaving normally. I need to get all appdata files moved to ZFS and off this drive, as I am not using it for anything else.
  12. I have had one server on 6.9.2 since initial release, and a pair of ZFS drives are serving Docker and VMs without issue. I just upgraded a production server from 6.8.3 to 6.9.2, and now Docker refuses to start and the VMs on the ZFS are not available. Zpool status looks fine:

         pool: MFS2
        state: ONLINE
       status: Some supported features are not enabled on the pool. The pool can
               still be used, but some features are unavailable.
       action: Enable all features using 'zpool upgrade'. Once this is done,
               the pool may no longer be accessible by software that does not
               support the features. See zpool-features(5) for details.
         scan: scrub repaired 0B in 02:10:49 with 0 errors on Fri Mar 26 10:24:18 2021
       config:
               NAME        STATE     READ WRITE CKSUM
               MFS2        ONLINE       0     0     0
                 mirror-0  ONLINE       0     0     0
                   sdp     ONLINE       0     0     0
                   sdo     ONLINE       0     0     0

      The mount points for Docker are visible in /mnt, and everything looks similar to the working 6.9.2 server. This server had been running 6.8.3 for an extended period of time with ZFS working fine. kim-diagnostics-20210428-1423.zip
  13. If I understand you right, you are suggesting that I just monitor the error and not worry about it. As long as it doesn't deteriorate, it's no problem. Yes, this is one approach. However, if these errors are spurious and not real, resetting them to zero is also OK. I take it there is no unRaid parity check equivalent for ZFS? (In my case, the disk with these problems is generating phantom errors. The parity check just confirms that there are no errors.)
  14. I have a 2-disk ZFS mirror being used for VMs on one server. These are older 3tb Seagates, and one is showing 178 pending and 178 uncorrectable sectors. An unRaid parity check usually finds these errors are spurious and resets everything to zero. Is there anything similar to do with ZFS?
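The nearest ZFS equivalent of an unRaid parity check is a scrub, which reads every block and verifies it against its checksum, repairing from the mirror copy where possible. A sketch, assuming the pool name from these posts (note the pending/uncorrectable counts themselves live in the drive's SMART data, which a scrub exercises but does not reset):

```shell
# Sketch: verify the pool, check the result, then clear ZFS error counters.
POOL=MFS2
SCRUB_CMD="zpool scrub $POOL"       # starts the scrub in the background
STATUS_CMD="zpool status -v $POOL"  # shows progress and per-device error counts
CLEAR_CMD="zpool clear $POOL"       # zeroes the READ/WRITE/CKSUM counters
echo "$SCRUB_CMD"
```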
  15. Thx, using ZFS for VMs and Dockers now. Yes, it's good. The only issue is updating ZFS when you update unRaid. Regarding unRaid and enterprise, it seems that the user base is more the Blu-ray and DVD hoarders. There are only a few of us that use unRaid outside of this niche. I'll be happy when ZFS is baked in.
  16. A few months ago, there was chatter about ZFS being part of unRaid supported file systems. @limetech was expressing frustration with btrfs. What is the current status of this?
  17. Likely the top one will perform best if you can get all the cores busy, but I have never used this Dell. What processor does it have? The second machine has higher clock rate so individual core performance will be better. It will really depend on your planned usage, and whether your apps will be able to utilize all the cores. Often, you find that some utilities or apps are single core only, so having massive numbers of cores is useless for that app. I am using Xeon 2670v1 based servers and find that sticking a second cpu in with another 16 cores is not that much of a benefit for what I do.
  18. Well, in your case you want all your storage in the fast zone. I also want ZFS to continue to work and VMs to continue to run even if the unRaid array is stopped and restarted. Then unRaid will be perfectly able to run our firewalls and pfSense, without the "your firewall shuts down if the array is stopped" problem.
  19. I've been running my VMs off a pair of old 2tb spinners using ZFS. I have been amazed that it just works, and I don't notice the slowness of the spinners for running VMs. The reason I switched was the snapshot backups. I love the ability to just snapshot my Windows VM back to a known good state. I look forward to having ZFS baked in more closely to unRaid. I still have one VM on the SSD with BTRFS, but I can't see any speed benefit to the SSD compared with the ZFS spinners. I have a Xeon 2670 with 64GB ECC RAM. Lesser RAM and/or non-ECC RAM may not be such a good option. I can see benefits via a 2-stage storage system in the future, with ZFS for the speed and the unRaid array for the near-line storage that can be spun down most of the time.
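The snapshot workflow described here can be sketched as follows; MFS2/vms and the snapshot name are hypothetical placeholders, and the VM should be shut down before a rollback:

```shell
# Sketch of the snapshot / rollback cycle (MFS2/vms is a placeholder dataset).
DATASET=MFS2/vms
SNAP="$DATASET@known-good"
SNAPSHOT_CMD="zfs snapshot $SNAP"   # instant point-in-time copy of the dataset
LIST_CMD="zfs list -t snapshot"     # review existing snapshots
ROLLBACK_CMD="zfs rollback $SNAP"   # revert the dataset to the snapshot
echo "$SNAPSHOT_CMD"
```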
  20. So the process for those who have 6.8.3 and some version of your plugin is: update the ZFS plugin, then update unRaid to 6.9rc1, then reboot? My unRaid only finds an update to the ZFS plugin from November, not your December one.
  21. Here is the new diagnostics file: robin-diagnostics-20201130-0353.zip. Disk is WDC - 7804.
  22. @Frank1940 The rebuild completed successfully, but another drive showed 200 errors during the rebuild process. Does this mean that errors were replicated into the newly built drive and it is not fully perfect? I have 2 parity drives, so that other WDC EZRS can go bad without further issue...
  23. Here is part of the syslog on the old server just before it locked up.

     Nov 27 20:06:54 Robin root: Starting Avahi mDNS/DNS-SD Daemon: /usr/sbin/avahi-daemon -D
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Found user 'avahi' (UID 61) and group 'avahi' (GID 214).
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Successfully dropped root privileges.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: avahi-daemon 0.7 starting up.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Successfully called chroot().
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Successfully dropped remaining capabilities.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Loading service file /services/sftp-ssh.service.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Loading service file /services/smb.service.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Loading service file /services/ssh.service.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Joining mDNS multicast group on interface br0.IPv6 with address 2605:a601:ae0f:2700:230:48ff:fe7d:3760.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: New relevant interface br0.IPv6 for mDNS.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Joining mDNS multicast group on interface br0.IPv4 with address 192.168.13.75.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: New relevant interface br0.IPv4 for mDNS.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Joining mDNS multicast group on interface bond0.IPv6 with address fe80::230:48ff:fe7d:3760.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: New relevant interface bond0.IPv6 for mDNS.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Network interface enumeration completed.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Registering new address record for 2605:a601:ae0f:2700:230:48ff:fe7d:3760 on br0.*.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Registering new address record for 192.168.13.75 on br0.IPv4.
     Nov 27 20:06:54 Robin avahi-daemon[9235]: Registering new address record for fe80::230:48ff:fe7d:3760 on bond0.*.
     Nov 27 20:06:54 Robin emhttpd: shcmd (20): /etc/rc.d/rc.avahidnsconfd start
     Nov 27 20:06:54 Robin root: Starting Avahi mDNS/DNS-SD DNS Server Configuration Daemon: /usr/sbin/avahi-dnsconfd -D
     Nov 27 20:06:54 Robin avahi-dnsconfd[9245]: Successfully connected to Avahi daemon.
     Nov 27 20:06:54 Robin emhttpd: shcmd (25): /etc/rc.d/rc.php-fpm start
     Nov 27 20:06:54 Robin root: Starting php-fpm done
     Nov 27 20:06:54 Robin emhttpd: shcmd (26): /etc/rc.d/rc.nginx start
     Nov 27 20:06:54 Robin root: Starting Nginx server daemon...
     Nov 27 20:06:54 Robin emhttpd: stale configuration
     Nov 27 20:06:54 Robin root: error: /plugins/preclear.disk/Preclear.php: wrong csrf_token
     Nov 27 20:06:55 Robin avahi-daemon[9235]: Server startup complete. Host name is Robin.local. Local service cookie is 3056464356.
     Nov 27 20:06:56 Robin avahi-daemon[9235]: Service "Robin" (/services/ssh.service) successfully established.
     Nov 27 20:06:56 Robin avahi-daemon[9235]: Service "Robin" (/services/smb.service) successfully established.
     Nov 27 20:06:56 Robin avahi-daemon[9235]: Service "Robin" (/services/sftp-ssh.service) successfully established.
     Nov 27 20:07:00 Robin login[9505]: ROOT LOGIN on '/dev/pts/0'
     Nov 27 20:07:48 Robin kernel: ata20: link is slow to respond, please be patient (ready=0)
     Nov 27 20:07:52 Robin kernel: ata20: SRST failed (errno=-16)
     Nov 27 20:07:57 Robin kernel: ata20: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
     Nov 27 20:07:57 Robin kernel: ata20.00: ATA-8: WDC WD30EZRX-00MMMB0, WD-WCAWZ2389747, 80.00A80, max UDMA/133
     Nov 27 20:07:57 Robin kernel: ata20.00: 5860533168 sectors, multi 0: LBA48 NCQ (depth 31/32)
     Nov 27 20:07:58 Robin kernel: ata20.00: configured for UDMA/133
     Nov 27 20:07:58 Robin kernel: scsi 21:0:0:0: Direct-Access ATA WDC WD30EZRX-00M 0A80 PQ: 0 ANSI: 5
     Nov 27 20:07:58 Robin kernel: sd 21:0:0:0: [sdh] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
     Nov 27 20:07:58 Robin kernel: sd 21:0:0:0: [sdh] 4096-byte physical blocks
     Nov 27 20:07:58 Robin kernel: sd 21:0:0:0: [sdh] Write Protect is off
     Nov 27 20:07:58 Robin kernel: sd 21:0:0:0: [sdh] Mode Sense: 00 3a 00 00
     Nov 27 20:07:58 Robin kernel: sd 21:0:0:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Nov 27 20:07:58 Robin kernel: sd 21:0:0:0: Attached scsi generic sg8 type 0
     Nov 27 20:07:58 Robin kernel: sdh: sdh1
     Nov 27 20:07:58 Robin kernel: sd 21:0:0:0: [sdh] Attached SCSI disk
     Nov 27 20:07:58 Robin rc.diskinfo[7275]: SIGHUP received, forcing refresh of disks info.
     Nov 27 20:07:58 Robin unassigned.devices: Disk with serial 'WDC_WD30EZRX-00MMMB0_WD-WCAWZ2389747', mountpoint 'WDC_WD30EZRX-00MMMB0_WD-WCAWZ2389747' is not set to auto mount and will not be mounted...
     Nov 27 20:07:58 Robin emhttpd: shcmd (93): rmmod md-mod
     Nov 27 20:07:58 Robin kernel: md: unRAID driver removed
     Nov 27 20:07:58 Robin emhttpd: shcmd (94): modprobe md-mod super=/boot/config/super.dat
     Nov 27 20:07:58 Robin kernel: md: unRAID driver 2.9.4 installed
     Nov 27 20:07:58 Robin emhttpd: Device inventory:
     Nov 27 20:07:58 Robin emhttpd: WDC_WD30EZRX-00MMMB0_WD-WCAWZ2389747 (sdh) 512 5860533168
     Nov 27 20:07:58 Robin emhttpd: WDC_WD30EURS-63R8UY0_WD-WCAWZ1111556 (sdg) 512 5860533168
     Nov 27 20:07:58 Robin emhttpd: ST3000DM001-9YN166_W1F0N4JK (sdd) 512 5860533168
     Nov 27 20:07:58 Robin emhttpd: ST3000DM001-9YN166_W1F0ZVBK (sde) 512 5860533168
     Nov 27 20:07:58 Robin emhttpd: ST3000DM001-9YN166_Z1F0YAZG (sdb) 512 5860533168
     Nov 27 20:07:58 Robin emhttpd: WDC_WD30EZRS-00J99B0_WD-WCAWZ0361944 (sdf) 512 5860533168
     Nov 27 20:07:58 Robin emhttpd: WDC_WD30EZRS-00J99B0_WD-WCAWZ2007804 (sdc) 512 5860533168
     Nov 27 20:07:58 Robin emhttpd: SanDisk_Cruzer_Fit_4C532000060115109422-0:0 (sda) 512 31266816
     Nov 27 20:07:58 Robin kernel: mdcmd (1): import 0 sde 64 2930266532 0 ST3000DM001-9YN166_W1F0ZVBK
     Nov 27 20:07:58 Robin kernel: md: import disk0: (sde) ST3000DM001-9YN166_W1F0ZVBK size: 2930266532
     Nov 27 20:07:58 Robin kernel: mdcmd (2): import 1 sdb 64 2930266532 0 ST3000DM001-9YN166_Z1F0YAZG
     Nov 27 20:07:58 Robin kernel: md: import disk1: (sdb) ST3000DM001-9YN166_Z1F0YAZG size: 2930266532
     Nov 27 20:07:58 Robin kernel: mdcmd (3): import 2 sdd 64 2930266532 0 ST3000DM001-9YN166_W1F0N4JK
     Nov 27 20:07:58 Robin kernel: md: import disk2: (sdd) ST3000DM001-9YN166_W1F0N4JK size: 2930266532
     Nov 27 20:07:58 Robin kernel: mdcmd (4): import 3
     Nov 27 20:07:58 Robin kernel: mdcmd (5): import 4
     Nov 27 20:07:58 Robin kernel: md: import_slot: 4 missing
     Nov 27 20:07:58 Robin kernel: mdcmd (6): import 5 sdf 64 2930266532 0 WDC_WD30EZRS-00J99B0_WD-WCAWZ0361944
     Nov 27 20:07:58 Robin kernel: md: import disk5: (sdf) WDC_WD30EZRS-00J99B0_WD-WCAWZ0361944 size: 2930266532
     Nov 27 20:07:58 Robin kernel: mdcmd (7): import 6 sdg 64 2930266532 0 WDC_WD30EURS-63R8UY0_WD-WCAWZ1111556
     Nov 27 20:07:58 Robin kernel: md: import disk6: (sdg) WDC_WD30EURS-63R8UY0_WD-WCAWZ1111556 size: 2930266532
     Nov 27 20:07:58 Robin kernel: mdcmd (8): import 7 sdh 64 2930266532 0 WDC_WD30EZRX-00MMMB0_WD-WCAWZ2389747
     Nov 27 20:07:58 Robin kernel: md: import disk7: (sdh) WDC_WD30EZRX-00MMMB0_WD-WCAWZ2389747 size: 2930266532
     Nov 27 20:07:58 Robin kernel: mdcmd (9): import 8
     Nov 27 20:07:58 Robin kernel: mdcmd (10): import 9
     Nov 27 20:07:58 Robin kernel: md: import_slot: 9 missing
     Nov 27 20:07:58 Robin kernel: mdcmd (11): import 10
     Nov 27 20:07:58 Robin kernel: mdcmd (12): import 11
     Nov 27 20:07:58 Robin kernel: mdcmd (13): import 12
     Nov 27 20:07:58 Robin kernel: mdcmd (14): import 13
     Nov 27 20:07:58 Robin kernel: mdcmd (15): import 14 sdc 64 2930266532 0 WDC_WD30EZRS-00J99B0_WD-WCAWZ2007804
     Nov 27 20:07:58 Robin kernel: md: import disk14: (sdc) WDC_WD30EZRS-00J99B0_WD-WCAWZ2007804 size: 2930266532
     Nov 27 20:07:58 Robin kernel: mdcmd (16): import 15
     Nov 27 20:07:58 Robin kernel: mdcmd (17): import 16
     Nov 27 20:07:58 Robin kernel: mdcmd (18): import 17
     Nov 27 20:07:58 Robin kernel: mdcmd (19): import 18
     Nov 27 20:07:58 Robin kernel: mdcmd (20): import 19
     Nov 27 20:07:58 Robin kernel: mdcmd (21): import 20
     Nov 27 20:07:58 Robin kernel: mdcmd (22): import 21
     Nov 27 20:07:58 Robin kernel: mdcmd (23): import 22
     Nov 27 20:07:58 Robin kernel: mdcmd (24): import 23
     Nov 27 20:07:58 Robin kernel: mdcmd (25): import 24
     Nov 27 20:07:58 Robin kernel: mdcmd (26): import 25
     Nov 27 20:07:58 Robin kernel: mdcmd (27): import 26
     Nov 27 20:07:58 Robin kernel: mdcmd (28): import 27
     Nov 27 20:07:58 Robin kernel: mdcmd (29): import 28
     Nov 27 20:07:58 Robin kernel: mdcmd (30): import 29
     Nov 27 20:07:58 Robin kernel: md: import_slot: 29 empty
     Nov 27 20:07:58 Robin emhttpd: import 30 cache device: no device
     Nov 27 20:07:58 Robin emhttpd: import flash device: sda
     Nov 27 20:08:42 Robin login[12726]: ROOT LOGIN on '/dev/pts/1'
     Nov 27 20:09:56 Robin kernel: ata17: link is slow to respond, please be patient (ready=0)
     Nov 27 20:09:59 Robin kernel: ata17: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
     Nov 27 20:09:59 Robin kernel: ata17.00: ATA-9: WDC WD30EZRX-00DC0B0, WD-WMC1T0416128, 80.00A80, max UDMA/133
     Nov 27 20:09:59 Robin kernel: ata17.00: 5860533168 sectors, multi 0: LBA48 NCQ (depth 31/32)
     Nov 27 20:09:59 Robin kernel: ata17.00: configured for UDMA/133
     Nov 27 20:09:59 Robin kernel: scsi 18:0:0:0: Direct-Access ATA WDC WD30EZRX-00D 0A80 PQ: 0 ANSI: 5
     Nov 27 20:09:59 Robin kernel: sd 18:0:0:0: [sdi] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
     Nov 27 20:09:59 Robin kernel: sd 18:0:0:0: [sdi] 4096-byte physical blocks
     Nov 27 20:09:59 Robin kernel: sd 18:0:0:0: [sdi] Write Protect is off
     Nov 27 20:09:59 Robin kernel: sd 18:0:0:0: [sdi] Mode Sense: 00 3a 00 00
     Nov 27 20:09:59 Robin kernel: sd 18:0:0:0: [sdi] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Nov 27 20:09:59 Robin kernel: sd 18:0:0:0: Attached scsi generic sg9 type 0
     Nov 27 20:10:00 Robin kernel: sdi: sdi1
     Nov 27 20:10:00 Robin kernel: sd 18:0:0:0: [sdi] Attached SCSI disk
     Nov 27 20:10:02 Robin kernel: ------------[ cut here ]------------
     Nov 27 20:10:02 Robin kernel: kernel BUG at drivers/ata/sata_mv.c:2118!
     Nov 27 20:10:02 Robin kernel: invalid opcode: 0000 [#1] SMP PTI
     Nov 27 20:10:02 Robin kernel: CPU: 1 PID: 1223 Comm: scsi_eh_18 Not tainted 4.18.20-unRAID #1
     Nov 27 20:10:02 Robin kernel: Hardware name: Supermicro X7DB8-X/X7DB8-X, BIOS 6.00 08/13/2007
     Nov 27 20:10:02 Robin kernel: RIP: 0010:mv_qc_prep+0x153/0x1c4 [sata_mv]
     Nov 27 20:10:02 Robin kernel: Code: 66 89 48 0a eb 26 0f b6 57 2a 80 ce 11 66 89 50 0a 0f b6 4f 2f 48 8d 50 0e 80 cd 11 66 89 48 0c eb 0a 48 8d 50 0a 84 c9 74 02 <0f> 0b 0f b6 47 30 80 cc 12 66 89 02 0f b6 47 2c 80 cc 13 66 89 42
     Nov 27 20:10:02 Robin kernel: RSP: 0018:ffffc90000f83a90 EFLAGS: 00010006
     Nov 27 20:10:02 Robin kernel: RAX: ffff880222c59060 RBX: ffff880222b9df30 RCX: 0000000000000047
     Nov 27 20:10:02 Robin kernel: RDX: ffff880222c5906a RSI: ffff880222c59000 RDI: ffff880222b9df30
     Nov 27 20:10:02 Robin kernel: RBP: ffff880222b9c000 R08: 0000000000000002 R09: 0000000000000001
     Nov 27 20:10:02 Robin kernel: R10: 0000000000000002 R11: 0000000000000200 R12: ffff880222b9e040
     Nov 27 20:10:02 Robin kernel: R13: ffffc90000f83bd0 R14: ffff880222b9c000 R15: 0000000000000000
     Nov 27 20:10:02 Robin kernel: FS: 0000000000000000(0000) GS:ffff88022fd00000(0000) knlGS:0000000000000000
     Nov 27 20:10:02 Robin kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Nov 27 20:10:02 Robin kernel: CR2: 000015239dfe7480 CR3: 0000000222768000 CR4: 00000000000006e0
     Nov 27 20:10:02 Robin kernel: Call Trace:
     Nov 27 20:10:02 Robin kernel: ata_qc_issue+0x162/0x194
     Nov 27 20:10:02 Robin kernel: ata_exec_internal_sg+0x2b1/0x4c9
     Nov 27 20:10:02 Robin kernel: ata_exec_internal+0x6a/0x89
     Nov 27 20:10:02 Robin kernel: ata_read_log_page+0x11a/0x16b
     Nov 27 20:10:02 Robin kernel: ata_eh_analyze_ncq_error+0xad/0x282
     Nov 27 20:10:02 Robin kernel: ata_eh_link_autopsy+0x126/0x83c
     Nov 27 20:10:02 Robin kernel: ? ata_set_mode+0xdc/0xe5
     Nov 27 20:10:02 Robin kernel: ? __accumulate_pelt_segments+0x1d/0x2a
     Nov 27 20:10:02 Robin kernel: ? __update_load_avg_se.isra.1+0xe9/0x19c
     Nov 27 20:10:02 Robin kernel: ata_eh_autopsy+0x23/0xbe
     Nov 27 20:10:02 Robin kernel: sata_pmp_error_handler+0x3a/0x7a9
     Nov 27 20:10:02 Robin kernel: ? __switch_to_asm+0x40/0x70
     Nov 27 20:10:02 Robin kernel: ? __switch_to_asm+0x34/0x70
     Nov 27 20:10:02 Robin kernel: ? __switch_to_asm+0x40/0x70
     Nov 27 20:10:02 Robin kernel: ? __switch_to_asm+0x34/0x70
     Nov 27 20:10:02 Robin kernel: ? __switch_to_asm+0x40/0x70
     Nov 27 20:10:02 Robin kernel: ? __switch_to_asm+0x34/0x70
     Nov 27 20:10:02 Robin kernel: ? __switch_to_asm+0x40/0x70
     Nov 27 20:10:02 Robin kernel: ? lock_timer_base+0x4b/0x71
     Nov 27 20:10:02 Robin kernel: ata_scsi_port_error_handler+0x221/0x53c
     Nov 27 20:10:02 Robin kernel: ? scsi_eh_get_sense+0xda/0xda
     Nov 27 20:10:02 Robin kernel: ata_scsi_error+0x8c/0xb5
     Nov 27 20:10:02 Robin kernel: ? scsi_try_target_reset+0x74/0x74
     Nov 27 20:10:02 Robin kernel: scsi_error_handler+0x9d/0x36c
     Nov 27 20:10:02 Robin kernel: kthread+0x10b/0x113
     Nov 27 20:10:02 Robin kernel: ? kthread_flush_work_fn+0x9/0x9
     Nov 27 20:10:02 Robin kernel: ret_from_fork+0x35/0x40
     Nov 27 20:10:02 Robin kernel: Modules linked in: md_mod nfsd lockd grace sunrpc bonding e1000e coretemp kvm sr_mod ipmi_si i2c_i801 i2c_core cdrom i5000_edac i5k_amb ata_piix sata_mv button pcc_cpufreq acpi_cpufreq [last unloaded: md_mod]
     Nov 27 20:10:02 Robin kernel: ---[ end trace 0ba3e512db1142cf ]---
     Nov 27 20:10:02 Robin kernel: RIP: 0010:mv_qc_prep+0x153/0x1c4 [sata_mv]
     Nov 27 20:10:02 Robin kernel: Code: 66 89 48 0a eb 26 0f b6 57 2a 80 ce 11 66 89 50 0a 0f b6 4f 2f 48 8d 50 0e 80 cd 11 66 89 48 0c eb 0a 48 8d 50 0a 84 c9 74 02 <0f> 0b 0f b6 47 30 80 cc 12 66 89 02 0f b6 47 2c 80 cc 13 66 89 42
     Nov 27 20:10:02 Robin kernel: RSP: 0018:ffffc90000f83a90 EFLAGS: 00010006
     Nov 27 20:10:02 Robin kernel: RAX: ffff880222c59060 RBX: ffff880222b9df30 RCX: 0000000000000047
     Nov 27 20:10:02 Robin kernel: RDX: ffff880222c5906a RSI: ffff880222c59000 RDI: ffff880222b9df30
     Nov 27 20:10:02 Robin kernel: RBP: ffff880222b9c000 R08: 0000000000000002 R09: 0000000000000001
     Nov 27 20:10:02 Robin kernel: R10: 0000000000000002 R11: 0000000000000200 R12: ffff880222b9e040
     Nov 27 20:10:02 Robin kernel: R13: ffffc90000f83bd0 R14: ffff880222b9c000 R15: 0000000000000000
     Nov 27 20:10:02 Robin kernel: FS: 0000000000000000(0000) GS:ffff88022fd00000(0000) knlGS:0000000000000000
     Nov 27 20:10:02 Robin kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Nov 27 20:10:02 Robin kernel: CR2: 000015239dfe7480 CR3: 0000000222768000 CR4: 00000000000006e0
     Nov 27 20:12:00 Robin emhttpd: device /dev/sdi problem getting id
     Nov 27 20:13:00 Robin kernel: INFO: rcu_sched detected stalls on CPUs/tasks:
     Nov 27 20:13:00 Robin kernel: 0-...!: (0 ticks this GP) idle=c06/1/4611686018427387904 softirq=46628/46628 fqs=0
     Nov 27 20:13:00 Robin kernel: (detected by 1, t=60002 jiffies, g=19733, c=19732, q=1927)
     Nov 27 20:13:00 Robin kernel: Sending NMI from CPU 1 to CPUs 0:
     Nov 27 20:13:00 Robin kernel: NMI backtrace for cpu 0
     Nov 27 20:13:00 Robin kernel: CPU: 0 PID: 1253 Comm: kworker/u4:20 Tainted: G D 4.18.20-unRAID #1
     Nov 27 20:13:00 Robin kernel: Hardware name: Supermicro X7DB8-X/X7DB8-X, BIOS 6.00 08/13/2007
     Nov 27 20:13:00 Robin kernel: Workqueue: events_freezable_power_ disk_events_workfn
     Nov 27 20:13:00 Robin kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x15f/0x16d
     Nov 27 20:13:00 Robin kernel: Code: 00 f0 0f b1 37 75 eb eb 13 48 8b 0a 48 85 c9 75 04 f3 90 eb f4 c7 41 08 01 00 00 00 65 ff 0d a6 da f9 7e c3 85 d2 74 0a 8b 07 <84> c0 74 04 f3 90 eb f6 66 c7 07 01 00 c3 49 b8 eb 83 b5 80 46 86