ryanborstelmann

Members
  • Posts: 51
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


ryanborstelmann's Achievements

Rookie (2/14)

Reputation: 3

  1. Oh my, I can't believe myself! 😬 Thanks!
  2. How can I run "zfs status -x" to see the status of the pool(s)? I swear this worked at one point, but now it throws an error that it doesn't know what "status" is as a command: root@NAS01:~# zfs status -x unrecognized command 'status' usage: zfs command args ... where 'command' is one of the following: create [-p] [-o property=value] ... <filesystem> create [-ps] [-b blocksize] [-o property=value] ... -V <size> <volume> destroy [-fnpRrv] <filesystem|volume> destroy [-dnpRrv] <filesystem|volume>@<snap>[%<snap>][,...] destroy <filesystem|volume>#<bookmark> snapshot|snap [-r] [-o property=value] ... <filesystem|volume>@<snap> ... rollback [-rRf] <snapshot> clone [-p] [-o property=value] ... <snapshot> <filesystem|volume> promote <clone-filesystem> rename [-f] <filesystem|volume|snapshot> <filesystem|volume|snapshot> rename [-f] -p <filesystem|volume> <filesystem|volume> rename -r <snapshot> <snapshot> bookmark <snapshot> <bookmark> list [-Hp] [-r|-d max] [-o property[,...]] [-s property]... [-S property]... [-t type[,...]] [filesystem|volume|snapshot] ... set <property=value> ... <filesystem|volume|snapshot> ... get [-rHp] [-d max] [-o "all" | field[,...]] [-t type[,...]] [-s source[,...]] <"all" | property[,...]> [filesystem|volume|snapshot|bookmark] ... inherit [-rS] <property> <filesystem|volume|snapshot> ... upgrade [-v] upgrade [-r] [-V version] <-a | filesystem ...> userspace [-Hinp] [-o field[,...]] [-s field] ... [-S field] ... [-t type[,...]] <filesystem|snapshot> groupspace [-Hinp] [-o field[,...]] [-s field] ... [-S field] ... [-t type[,...]] <filesystem|snapshot> mount mount [-vO] [-o opts] <-a | filesystem> unmount [-f] <-a | filesystem|mountpoint> share <-a [nfs|smb] | filesystem> unshare <-a [nfs|smb] | filesystem|mountpoint> send [-DnPpRvLec] [-[i|I] snapshot] <snapshot> send [-Lec] [-i snapshot|bookmark] <filesystem|volume|snapshot> send [-nvPe] -t <receive_resume_token> receive [-vnsFu] [-o <property>=<value>] ... [-x <property>] ... <filesystem|volume|snapshot> receive [-vnsFu] [-o <property>=<value>] ... [-x <property>] ... [-d | -e] <filesystem> receive -A <filesystem|volume> allow <filesystem|volume> allow [-ldug] <"everyone"|user|group>[,...] <perm|@setname>[,...] <filesystem|volume> allow [-ld] -e <perm|@setname>[,...] <filesystem|volume> allow -c <perm|@setname>[,...] <filesystem|volume> allow -s @setname <perm|@setname>[,...] <filesystem|volume> unallow [-rldug] <"everyone"|user|group>[,...] [<perm|@setname>[,...]] <filesystem|volume> unallow [-rld] -e [<perm|@setname>[,...]] <filesystem|volume> unallow [-r] -c [<perm|@setname>[,...]] <filesystem|volume> unallow [-r] -s @setname [<perm|@setname>[,...]] <filesystem|volume> hold [-r] <tag> <snapshot> ... holds [-r] <snapshot> ... release [-r] <tag> <snapshot> ... diff [-FHt] <snapshot> [snapshot|filesystem] Each dataset is of the form: pool/[dataset/]*dataset[@name] For the property list, run: zfs set|get For the delegated permission list, run: zfs allow|unallow Thanks!
  3. Feature question: I'm looking to run a script while CA Backup & Restore has my VMs stopped. I'd like to run a Plex database maintenance script that must run while Plex is stopped, so ideally my nightly CA Backup window would be the best time for it. I know from the comment by @Squid (see below) that the custom scripts in CA Backup/Restore run prior to the containers stopping and after the containers start, and that nothing can run while the containers are stopped. Is that the case? Is there any way I can do what I'm looking for? I suppose I could use a custom stop script (run prior to the containers stopping) that begins with "docker stop plex", and Backup/Restore will then start the Plex container again after the backup is complete (see the sketch after this list)? Will that work?
  4. Hey all. Are there any SSH debug logs to find out more about what SSH clients are doing during login/auth requests? My issue is that I am using Termius, a cross-platform SSH/SFTP client. On my Ubuntu desktop it works fine for connecting to my Unraid boxes (two of them, and both show this issue). From their iOS app, Unraid throws bad-password errors and SSH fails. I've of course triple-checked that my root password is correct, but for whatever reason Unraid isn't having it. The Unraid boxes are running 6.6.1 & 6.6.2; both exhibit this behavior.
     Good (Ubuntu Termius client):
     Oct 14 13:25:52 NAS02 sshd[19889]: SSH: Server;Ltype: Version;Remote: 10.1.1.100-35774;Protocol: 2.0;Client: libssh2_1.8.1_DEV
     Oct 14 13:25:52 NAS02 sshd[19889]: SSH: Server;Ltype: Kex;Remote: 10.1.1.100-35774;Enc: aes128-ctr;MAC: hmac-sha2-256;Comp: none [preauth]
     Oct 14 13:25:53 NAS02 sshd[19889]: SSH: Server;Ltype: Authname;Remote: 10.1.1.100-35774;Name: root [preauth]
     Oct 14 13:25:53 NAS02 sshd[19889]: Accepted none for root from 10.1.1.100 port 35774 ssh2
     Oct 14 13:25:53 NAS02 sshd[19889]: SSH: Server;Ltype: Kex;Remote: 10.1.1.100-35774;Enc: aes128-ctr;MAC: hmac-sha2-256;Comp: none
     Bad (iOS Termius client):
     Oct 14 13:27:00 NAS02 sshd[20558]: SSH: Server;Ltype: Version;Remote: 10.1.255.3-55983;Protocol: 2.0;Client: libssh2_1.8.1_DEV
     Oct 14 13:27:00 NAS02 sshd[20558]: SSH: Server;Ltype: Kex;Remote: 10.1.255.3-55983;Enc: aes128-ctr;MAC: hmac-sha2-256;Comp: none [preauth]
     Oct 14 13:27:01 NAS02 sshd[20558]: SSH: Server;Ltype: Authname;Remote: 10.1.255.3-55983;Name: root [preauth]
     Oct 14 13:27:01 NAS02 sshd[20558]: Failed password for root from 10.1.255.3 port 55983 ssh2
     Oct 14 13:27:01 NAS02 sshd[20558]: Received disconnect from 10.1.255.3 port 55983:11: Normal Shutdown [preauth]
     Oct 14 13:27:01 NAS02 sshd[20558]: Disconnected from authenticating user root 10.1.255.3 port 55983 [preauth]
     Both clients are on my LAN, and this occurs on both of my Unraid boxes. I have a support request open with Termius to find out how their two clients differ, but thought I could dig into the SSH server side of things in Unraid while I wait. Thoughts on how I can dig further? (A debug-logging sketch follows this list.)
  5. Will look into that in the coming week or two, and will report back findings. Thanks!
  6. It happened first on 6.5.3, but only because I'd never used disk spin-down prior to that. I updated to 6.6.0 this weekend and the issue persists.
  7. Update: I spent ~10 days with disk spin-down disabled and saw zero read errors. I also ran a full parity check in that window, with zero errors reported. I decided to turn disk spin-down back on last night, and within 13 minutes the read errors hit: https://hastebin.com/ikejufadok.nginx I disabled it again, cleared the stats, and have seen zero errors since. This is clearly tied to disk spin-down, and I can't figure out why it'd be occurring. Can anyone shed some light on this for me?
  8. Update: Since disabling disk spin-down, I have had 0 read errors. I honestly have no idea what the issue would be; maybe my HBA freaked out or something. I'll let it run for a week or two to make 100% sure it's stable without errors, then might do some poking around to determine what the cause could be. Thanks for the guidance, folks!
  9. I was able to resolve it by running:
     chmod 777 /mnt/docker/*
     I'm unsure why this works while the /mnt/user/docker copy has all the non-777 perms, but that's fine. I should probably go back someday and find out what's going on, but for my Plex server it doesn't matter that things are 777 (a narrower alternative is sketched after this list). Thanks!
  10. No dice. I set /mnt/docker as my ZFS mountpoint, and I still get permissions issues. Here's my zpool setup and the transfer of my CA Backup to the new pool; maybe there's something I need to add or change in this (a few verification checks are sketched after this list)? I stop Docker prior to this configuration, change where my docker.img file is located in the Unraid settings, then start it back up afterwards.
      zpool create docker -m /mnt/docker mirror scsi-350000393a819195c scsi-350000393a8195a74 mirror scsi-350000393b82b5d94 scsi-350000393b82b6a2c
      zfs set compression=lz4 docker
      zfs set atime=off docker
      tar xzvf /mnt/user/backups/docker/2018-09-12\@03.01/CA_backup.tar.gz -C /mnt/docker/
      cp /mnt/user/docker/docker.img /mnt/docker/
  11. Trying this out. Will report back findings. Thanks!
  12. Hey all, is there anything I should be aware of regarding folder permissions when using ZFS on UnRAID? I've moved my docker appdata folder from /mnt/user/docker to /zfs/docker (using the ZFS plugin to create a zpool outside my array):
      root@NAS01:~# zfs list
      NAME     USED  AVAIL  REFER  MOUNTPOINT
      docker  20.1G   518G  20.1G  /zfs/docker
      root@NAS01:~# zpool status
        pool: docker
       state: ONLINE
        scan: none requested
      config:
              NAME                        STATE     READ WRITE CKSUM
              docker                      ONLINE       0     0     0
                mirror-0                  ONLINE       0     0     0
                  scsi-350000393a819195c  ONLINE       0     0     0
                  scsi-350000393a8195a74  ONLINE       0     0     0
                mirror-1                  ONLINE       0     0     0
                  scsi-350000393b82b5d94  ONLINE       0     0     0
                  scsi-350000393b82b6a2c  ONLINE       0     0     0
      errors: No known data errors
      I restored my CA Backup of my docker appdata folder, and it seems to have kept all my permissions in place:
      root@NAS01:~# ls -ln /zfs/docker
      total 4267660
      drwxrwxrwx 8 0 0 19 Jul 27 16:14 Community_Applications_USB_Backup/
      drwxrwxrwx 3 0 0 3 Jul 27 16:14 appdata/
      drwxrwxrwx 3 0 0 7 Sep 12 00:11 1/
      drwxr-xr-x 3 0 0 4 Aug 9 13:59 2/
      drwxrwxrwx 3 0 0 3 Aug 8 14:51 3/
      drwxrwxrwx 3 0 0 4 Jul 27 16:14 4/
      -rw-rw-rw- 1 99 100 53687091200 Sep 12 11:23 docker.img
      drwxr-xr-x 4 911 911 4 Jul 27 16:14 5/
      drwxrwxrwx 6 911 911 6 Jul 27 16:14 6/
      drwxrwxrwx 7 1000 1000 18 Sep 11 10:10 7/
      drwxr-xr-x 6 0 0 15 Jul 27 16:14 8/
      drwxr-xr-x 2 1000 100 3 Jul 27 16:14 9/
      drwxrwxrwx 3 1000 100 6 Sep 12 03:07 10/
      drwx------ 6 1000 100 10 Jul 27 16:38 11/
      drwxrwxrwx 4 0 0 4 Jul 27 16:38 12/
      drwxrwxrwx 7 1000 100 15 Sep 12 02:19 13/
      drwxr-xr-x 2 911 911 4 Jul 27 16:38 14/
      drwxrwxr-x 7 1000 100 15 Sep 12 02:36 15/
      drwxr-xr-x 2 1000 100 2 Jul 27 16:39 16/
      drwxrwxrwx 8 1000 100 13 Sep 12 03:07 17/
      drwx------ 4 0 0 6 Jul 27 16:39 18/
      drwxr-xr-x 5 1000 100 10 Jul 27 16:39 19/
      drwxrwxrwx 5 1000 100 5 Jul 27 16:39 20/
      drwx------ 7 911 911 7 Jul 27 16:40 21/
      root@NAS01:~# ls -ln /mnt/user/docker
      total 8357876
      drwxrwxrwx 1 0 0 328 Jul 27 16:14 Community_Applications_USB_Backup/
      drwxrwxrwx 1 0 0 20 Jul 27 16:14 appdata/
      drwxrwxrwx 1 0 0 147 Sep 12 11:11 1/
      drwxr-xr-x 1 0 0 47 Aug 9 13:59 2/
      drwxrwxrwx 1 0 0 38 Aug 8 14:51 3/
      drwxrwxrwx 1 0 0 37 Jul 27 16:14 4/
      -rw-rw-rw- 1 99 100 53687091200 Sep 12 10:50 docker.img
      drwxr-xr-x 1 911 911 34 Jul 27 16:14 5/
      drwxrwxrwx 1 911 911 57 Jul 27 16:14 6/
      drwxrwxrwx 1 1000 1000 4096 Sep 11 10:10 7/
      drwxr-xr-x 1 0 0 212 Jul 27 16:14 8/
      drwxr-xr-x 1 1000 100 25 Jul 27 16:14 9/
      drwxrwxrwx 1 1000 100 53 Sep 12 11:23 10/
      drwx------ 1 1000 100 191 Jul 27 16:38 11/
      drwxrwxrwx 1 0 0 40 Jul 27 16:38 12/
      drwxrwxrwx 1 1000 100 238 Sep 12 11:11 13/
      drwxr-xr-x 1 911 911 44 Jul 27 16:38 14/
      drwxrwxr-x 1 1000 100 232 Sep 12 11:12 15/
      drwxr-xr-x 1 1000 100 6 Jul 27 16:39 16/
      drwxrwxrwx 1 1000 100 208 Sep 12 11:11 17/
      drwx------ 1 0 0 66 Jul 27 16:39 18/
      drwxr-xr-x 1 1000 100 184 Jul 27 16:39 19/
      drwxrwxrwx 1 1000 100 41 Jul 27 16:39 20/
      drwx------ 1 911 911 71 Jul 27 16:40 21/
      (obfuscated the folder names)
      Yet when I move my docker volumes from /mnt/user/docker/xyz to /zfs/docker/xyz, the containers have all sorts of permissions issues writing to their config folders. For example, Plex won't start, UNMS throws permissions errors on its config folder, etc. I can't find any differences in UID/GIDs between the two docker folders, but every container I've tried so far has the same issue. Any thoughts on what I'm missing? (A diagnostic sketch follows this list.)
  13. Hey all, I've moved my docker appdata folder from /mnt/user/docker to /zfs/docker (using the ZFS plugin to create a zpool outside my array). I restored my CA Backup of my docker appdata folder, and it seems to have kept all my permissions in place:
      root@NAS01:~# ls -ln /zfs/docker
      total 4267660
      drwxrwxrwx 8 0 0 19 Jul 27 16:14 Community_Applications_USB_Backup/
      drwxrwxrwx 3 0 0 3 Jul 27 16:14 appdata/
      drwxrwxrwx 3 0 0 7 Sep 12 00:11 1/
      drwxr-xr-x 3 0 0 4 Aug 9 13:59 2/
      drwxrwxrwx 3 0 0 3 Aug 8 14:51 3/
      drwxrwxrwx 3 0 0 4 Jul 27 16:14 4/
      -rw-rw-rw- 1 99 100 53687091200 Sep 12 11:23 docker.img
      drwxr-xr-x 4 911 911 4 Jul 27 16:14 5/
      drwxrwxrwx 6 911 911 6 Jul 27 16:14 6/
      drwxrwxrwx 7 1000 1000 18 Sep 11 10:10 7/
      drwxr-xr-x 6 0 0 15 Jul 27 16:14 8/
      drwxr-xr-x 2 1000 100 3 Jul 27 16:14 9/
      drwxrwxrwx 3 1000 100 6 Sep 12 03:07 10/
      drwx------ 6 1000 100 10 Jul 27 16:38 11/
      drwxrwxrwx 4 0 0 4 Jul 27 16:38 12/
      drwxrwxrwx 7 1000 100 15 Sep 12 02:19 13/
      drwxr-xr-x 2 911 911 4 Jul 27 16:38 14/
      drwxrwxr-x 7 1000 100 15 Sep 12 02:36 15/
      drwxr-xr-x 2 1000 100 2 Jul 27 16:39 16/
      drwxrwxrwx 8 1000 100 13 Sep 12 03:07 17/
      drwx------ 4 0 0 6 Jul 27 16:39 18/
      drwxr-xr-x 5 1000 100 10 Jul 27 16:39 19/
      drwxrwxrwx 5 1000 100 5 Jul 27 16:39 20/
      drwx------ 7 911 911 7 Jul 27 16:40 21/
      root@NAS01:~# ls -ln /mnt/user/docker
      total 8357876
      drwxrwxrwx 1 0 0 328 Jul 27 16:14 Community_Applications_USB_Backup/
      drwxrwxrwx 1 0 0 20 Jul 27 16:14 appdata/
      drwxrwxrwx 1 0 0 147 Sep 12 11:11 1/
      drwxr-xr-x 1 0 0 47 Aug 9 13:59 2/
      drwxrwxrwx 1 0 0 38 Aug 8 14:51 3/
      drwxrwxrwx 1 0 0 37 Jul 27 16:14 4/
      -rw-rw-rw- 1 99 100 53687091200 Sep 12 10:50 docker.img
      drwxr-xr-x 1 911 911 34 Jul 27 16:14 5/
      drwxrwxrwx 1 911 911 57 Jul 27 16:14 6/
      drwxrwxrwx 1 1000 1000 4096 Sep 11 10:10 7/
      drwxr-xr-x 1 0 0 212 Jul 27 16:14 8/
      drwxr-xr-x 1 1000 100 25 Jul 27 16:14 9/
      drwxrwxrwx 1 1000 100 53 Sep 12 11:23 10/
      drwx------ 1 1000 100 191 Jul 27 16:38 11/
      drwxrwxrwx 1 0 0 40 Jul 27 16:38 12/
      drwxrwxrwx 1 1000 100 238 Sep 12 11:11 13/
      drwxr-xr-x 1 911 911 44 Jul 27 16:38 14/
      drwxrwxr-x 1 1000 100 232 Sep 12 11:12 15/
      drwxr-xr-x 1 1000 100 6 Jul 27 16:39 16/
      drwxrwxrwx 1 1000 100 208 Sep 12 11:11 17/
      drwx------ 1 0 0 66 Jul 27 16:39 18/
      drwxr-xr-x 1 1000 100 184 Jul 27 16:39 19/
      drwxrwxrwx 1 1000 100 41 Jul 27 16:39 20/
      drwx------ 1 911 911 71 Jul 27 16:40 21/
      (obfuscated the folder names)
      Yet when I move my docker volumes from /mnt/user/docker/xyz to /zfs/docker/xyz, the containers have all sorts of permissions issues writing to their config folders. For example, Plex won't start, UNMS throws permissions errors on its config folder, etc. I can't find any differences in UID/GIDs between the two docker folders, but every container I've tried so far has the same issue. Any thoughts on what I'm missing?
  14. They're hardware errors, but they make no sense, so I'm looking to roll back the only things that changed just before I started seeing the errors. There's no way the exact same sector on 6 disks failed at once, and there's even less of a chance that it happened 100 times in one second, all on the same sectors. Even a bad cable, bad seating, an HBA failure, etc. would show different sectors going bad, or an entire drive going bad. Even if 6 or 9 drives went bad at once, there is about a 0% chance it's all the same sector, and even less of a chance that it happens many times. How is UnRAID even looking at the exact same sector on 6 drives at the same time to determine there's a read error? Is there a possibility that this is a kernel bug and it's not actually reporting correctly? How is SMART not seeing any read errors, but the UnRAID kernel is? Sorry for all the questions, but this is so odd that I'd like to wrap my head around it before getting new hardware.
  15. Again overnight, at 4:33 AM, 6 of the drives all reported the same bad sectors. The only things that happened were that CA Backup ran at 3:00 and finished at 3:30, and the drives all spun down at 3:58 AM. Full log: https://ghostbin.com/paste/ja7mh Relevant snippet:
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:20:0: timing out command, waited 180s
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:20:0: [sdw] tag#18 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:20:0: [sdw] tag#18 Sense Key : 0x2 [current]
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:20:0: [sdw] tag#18 ASC=0x4 ASCQ=0x2
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:20:0: [sdw] tag#18 CDB: opcode=0x28 28 00 43 95 86 78 00 04 00 00
      Sep 12 04:33:33 NAS01 kernel: print_req_error: I/O error, dev sdw, sector 1133872760
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:23:0: timing out command, waited 180s
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:23:0: [sdz] tag#21 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:23:0: [sdz] tag#21 Sense Key : 0x2 [current]
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:23:0: [sdz] tag#21 ASC=0x4 ASCQ=0x2
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:23:0: [sdz] tag#21 CDB: opcode=0x88 88 00 00 00 00 00 43 95 86 78 00 00 04 00 00 00
      Sep 12 04:33:33 NAS01 kernel: print_req_error: I/O error, dev sdz, sector 1133872760
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:24:0: timing out command, waited 180s
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:24:0: [sdaa] tag#22 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:24:0: [sdaa] tag#22 Sense Key : 0x2 [current]
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:24:0: [sdaa] tag#22 ASC=0x4 ASCQ=0x2
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:24:0: [sdaa] tag#22 CDB: opcode=0x28 28 00 43 95 86 78 00 04 00 00
      Sep 12 04:33:33 NAS01 kernel: print_req_error: I/O error, dev sdaa, sector 1133872760
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:25:0: timing out command, waited 180s
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:25:0: [sdab] tag#23 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:25:0: [sdab] tag#23 Sense Key : 0x2 [current]
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:25:0: [sdab] tag#23 ASC=0x4 ASCQ=0x2
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:25:0: [sdab] tag#23 CDB: opcode=0x28 28 00 43 95 86 78 00 04 00 00
      Sep 12 04:33:33 NAS01 kernel: print_req_error: I/O error, dev sdab, sector 1133872760
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:27:0: timing out command, waited 180s
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:27:0: [sdac] tag#24 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:27:0: [sdac] tag#24 Sense Key : 0x2 [current]
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:27:0: [sdac] tag#24 ASC=0x4 ASCQ=0x2
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:27:0: [sdac] tag#24 CDB: opcode=0x28 28 00 43 95 86 78 00 04 00 00
      Sep 12 04:33:33 NAS01 kernel: print_req_error: I/O error, dev sdac, sector 1133872760
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:29:0: timing out command, waited 180s
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:29:0: [sdae] tag#25 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:29:0: [sdae] tag#25 Sense Key : 0x2 [current]
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:29:0: [sdae] tag#25 ASC=0x4 ASCQ=0x2
      Sep 12 04:33:33 NAS01 kernel: sd 7:0:29:0: [sdae] tag#25 CDB: opcode=0x88 88 00 00 00 00 00 43 95 86 78 00 00 04 00 00 00
      Sep 12 04:33:33 NAS01 kernel: print_req_error: I/O error, dev sdae, sector 1133872760
      Sep 12 04:34:33 NAS01 kernel: md: disk10 read error, sector=1133872696
      Sep 12 04:34:33 NAS01 kernel: md: disk11 read error, sector=1133872696
      Sep 12 04:34:33 NAS01 kernel: md: disk15 read error, sector=1133872696
      Sep 12 04:34:33 NAS01 kernel: md: disk21 read error, sector=1133872696
      Sep 12 04:34:33 NAS01 kernel: md: disk22 read error, sector=1133872696
      Sep 12 04:34:33 NAS01 kernel: md: disk23 read error, sector=1133872696
      Sep 12 04:34:33 NAS01 kernel: md: disk10 read error, sector=1133872704
      Sep 12 04:34:33 NAS01 kernel: md: disk11 read error, sector=1133872704
      Sep 12 04:34:33 NAS01 kernel: md: disk15 read error, sector=1133872704
      Sep 12 04:34:33 NAS01 kernel: md: disk21 read error, sector=1133872704
      Sep 12 04:34:33 NAS01 kernel: md: disk22 read error, sector=1133872704
      Sep 12 04:34:33 NAS01 kernel: md: disk23 read error, sector=1133872704
      Sep 12 04:34:33 NAS01 kernel: md: disk10 read error, sector=1133872712
      Sep 12 04:34:33 NAS01 kernel: md: disk11 read error, sector=1133872712
      Sep 12 04:34:33 NAS01 kernel: md: disk15 read error, sector=1133872712
      Sep 12 04:34:33 NAS01 kernel: md: disk21 read error, sector=1133872712
      Sep 12 04:34:33 NAS01 kernel: md: disk22 read error, sector=1133872712
      Sep 12 04:34:33 NAS01 kernel: md: disk23 read error, sector=1133872712
      Sep 12 04:34:33 NAS01 kernel: md: disk10 read error, sector=1133872720
      Sep 12 04:34:33 NAS01 kernel: md: disk11 read error, sector=1133872720
      Sep 12 04:34:33 NAS01 kernel: md: disk15 read error, sector=1133872720
      Sep 12 04:34:33 NAS01 kernel: md: disk21 read error, sector=1133872720
      Sep 12 04:34:33 NAS01 kernel: md: disk22 read error, sector=1133872720
      Sep 12 04:34:33 NAS01 kernel: md: disk23 read error, sector=1133872720
      Sep 12 04:34:33 NAS01 kernel: md: disk10 read error, sector=1133872728
      Sep 12 04:34:33 NAS01 kernel: md: disk11 read error, sector=1133872728
      Sep 12 04:34:33 NAS01 kernel: md: disk15 read error, sector=1133872728
      Sep 12 04:34:33 NAS01 kernel: md: disk21 read error, sector=1133872728
      Sep 12 04:34:33 NAS01 kernel: md: disk22 read error, sector=1133872728
      Sep 12 04:34:33 NAS01 kernel: md: disk23 read error, sector=1133872728
      Thoughts? My next course of action is to roll back some of the changes I made this past weekend (disable drive spin-down, etc.) just in case something isn't happy. (A quick syslog-tally sketch follows this list.)
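
A note on item 2: in the ZFS command set, "status" is a subcommand of zpool rather than zfs, which is why the zfs binary rejects it. A minimal sketch of the equivalent health checks, using the pool name "docker" from the later posts as the example:
     # Print only pools with problems; reports "all pools are healthy" otherwise
     zpool status -x
     # Full per-vdev detail (READ/WRITE/CKSUM counters) for one pool
     zpool status -v docker
     # Optionally verify the on-disk data with a scrub, then re-check status
     zpool scrub docker
     zpool status docker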
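
For item 3, one possible shape for the custom stop script, assuming it really does run before CA Backup stops the containers: stop Plex yourself at the top, run the maintenance while it is down, and let the backup's normal restart bring Plex back afterwards. The container name "plex" and the maintenance script path below are placeholders, not anything CA Backup prescribes:
     #!/bin/bash
     # Hypothetical CA Backup custom stop script (runs before containers are stopped)
     docker stop plex                      # quiesce Plex so its database is not in use
     /boot/scripts/plex_db_maintenance.sh  # placeholder path for the maintenance job
     # No "docker start" here: CA Backup stops the remaining containers next and
     # restarts everything once the backup completes.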
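
For item 4, OpenSSH itself can log far more auth detail than the lines quoted above. Two common approaches, sketched under the assumption that Unraid uses the stock /etc/ssh/sshd_config and logs to /var/log/syslog:
     # One-off: run a second sshd in foreground debug mode on a spare port,
     # then point the iOS client at port 2222 and watch the full auth exchange.
     /usr/sbin/sshd -ddd -p 2222
     # Persistent: raise sshd's log level, restart sshd, then compare the two clients.
     #   LogLevel DEBUG3        (add/edit this line in /etc/ssh/sshd_config)
     grep 'sshd\[' /var/log/syslog | tail -50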
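
On the chmod 777 workaround in item 9: if you'd rather not leave everything world-writable, a narrower alternative is to hand each config folder to the UID:GID its container actually runs as (the listings in items 12 and 13 show 99:100, 911:911 and 1000:100 in use). The folder names below are illustrative only:
     # Example: Unraid's nobody:users pair (99:100) for a container running as that user
     chown -R 99:100 /mnt/docker/someapp
     # Containers that take PUID/PGID settings (e.g. 911 or 1000 in the listings) get those IDs instead
     chown -R 911:911 /mnt/docker/otherapp
     # Then loosen modes only as far as needed
     chmod -R u+rwX,g+rwX /mnt/docker/someapp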
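
For the sequence in item 10, a few read-only checks can confirm the pool settings and the restore before containers are pointed at it; this is only a sketch using the paths from that post:
     zpool status docker                              # pool healthy, both mirrors online
     zfs get compression,atime,mountpoint docker      # expect lz4 / off / /mnt/docker
     # tar preserves ownership only when extracted as root; spot-check against the array copy
     ls -ln /mnt/docker | head
     ls -ln /mnt/user/docker | head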
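
For items 12 and 13, the top-level ls -ln output can match while something deeper in the trees differs. A hedged diagnostic sketch that dumps mode/UID/GID for every path in both trees and diffs the results (assumes GNU find, which Unraid ships):
     cd /mnt/user/docker && find . -printf '%m %U %G %p\n' | sort > /tmp/perms_user.txt
     cd /zfs/docker && find . -printf '%m %U %G %p\n' | sort > /tmp/perms_zfs.txt
     diff /tmp/perms_user.txt /tmp/perms_zfs.txt | head -50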
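
For item 15, a quick way to confirm that the errors really are identical sectors hitting many disks at the same instant, and to line them up against spin-down times, is to tally the md read-error lines straight out of the syslog. This is a sketch against the log format quoted above; the exact spin-down wording may differ between Unraid versions:
     # Count occurrences per (disk, sector) pair from lines like
     # "... kernel: md: disk10 read error, sector=1133872696"
     grep 'md: disk.* read error' /var/log/syslog | awk '{print $7, $NF}' | sort | uniq -c | sort -rn | head -20
     # Pull spin-down/spin-up events alongside the errors to compare timestamps
     grep -iE 'spindown|spinup|read error' /var/log/syslog | tail -60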