Posts posted by WonkoTheSane
-
The forum seems to be experiencing some problems. The upload of my diagnostics archive keeps
failing. I'll try again later.
-
Hi all,
my server had been stable for about a year. Since a BIOS update, I'm seeing frequent crashes: the
server becomes completely inaccessible and can't even be pinged anymore. To isolate the issue, I've
fiddled with a couple of BIOS settings and stopped using my VMs, but the problem remains.
I'm attaching my diagnostics to this message, hopefully someone will be able to help.
Thanks & best regards
Matthias
-
Thanks for the info. Was really doubting myself there.
-
1 hour ago, SimonF said:
Drives need to be spun up for that info to show. Are they spun up?
Alright, I feel very stupid now. Has this always been the case? Coincidence that I've never bumped into this issue before, I guess.
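In case it helps anyone else who trips over this: a quick way to check the spin state from the shell before expecting SMART data (a sketch; /dev/sdb is just an example device):
hdparm -C /dev/sdb                 # reports "active/idle" or "standby"
smartctl -n standby -a /dev/sdb    # reads SMART data, but skips the drive instead of waking it if it's in standby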
-
Hey all,
I've got two Unraid servers running v6.9.2. On both servers, when I click on an array disk in the Array Devices tab, neither
SMART attributes nor capabilities are displayed for any disk.
"Can not read attributes"
"Can not read capabilities"
The information for my cache disks displays fine.
I've checked the system log and was unable to find any entries regarding
SMART issues.
Thanks for your time!
-
Thanks for your answer.
As you can see, the free space and the capacity of the
share differ by several terabytes in each case. Just as a test, I invoked the
mover and checked again after it completed. I'm seeing exactly the same numbers
alternating when refreshing the directory on that share. I've got another share
that also uses the cache disk for new files. There the numbers don't alternate.
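For reference, this is roughly how I compared the numbers from the server side (a sketch; the share name is an example, paths follow the standard Unraid layout):
df -h /mnt/user/myshare    # size/free as reported by the user share (FUSE) layer
df -h /mnt/cache           # the cache pool on its own
df -h /mnt/disk1           # an individual array disk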
-
Hi all,
this is probably a minor issue, but I've noticed for quite some time
that the free space reported for an SMB share sometimes differs
dramatically between refreshes of said share.
See attached screenshots of the same share taken before and after
a refresh/reload of the current directory on the SMB share.
Best regards
Matthias
-
Hi Johnnie,
I just tried that; it doesn't seem to make a difference. Still 'no exportable user shares', and access
is denied for the disk shares.
BUT: I compared ownership and permission flags under /mnt/ with my other Unraid server.
It turns out everything except /mnt/disks was set to 'rw-rw-rw-', whereas on my working Unraid
instance it is 'rwxrwxrwx'. I'm not really sure how this happened, but it looks like everything is okay for now.
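In case someone else runs into this, this is essentially what I compared and then fixed (a sketch; which mount points exist depends on your disk/pool setup):
ls -ld /mnt/*                                # compare ownership and permission flags between servers
chmod 777 /mnt/user /mnt/cache /mnt/disk*    # restore the expected rwxrwxrwx on the affected mount points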
Thanks & best regards
Matthias
-
Hi all,
as of this morning, all of my user shares have disappeared. Rebooting the
server did not fix this issue. The only share I'm seeing is 'flash'. The
share configuration on the flash drive is present and looks fine. The
disk shares seem to be configured correctly, but when I try to access them
I get an 'access denied' error.
I attached my diagnostics to this message.
Any help is appreciated.
-
Hi all,
I'm facing a problem I'm not sure how to solve.
I have two parity drives and have just replaced two data drives in my
array. Shortly after starting the server to do a data rebuild, one
of my parity drives was marked as disabled.
What options do I have now?
Any help is appreciated.
-
21 hours ago, dlandon said:
Oct 2 10:08:05 BigNAS kernel: nfs: server lochnas not responding, still trying
Your server is going off-line.
Hi again,
what I don't understand is this: when I go to the Unraid Main tab, I see the "Please wait, retrieving information ..." message from Unassigned Devices.
This takes forever; in the syslog I see this:
Oct 5 18:12:42 xxxxx unassigned.devices: benchmark: shell_exec(/usr/bin/timeout 20 /bin/df '/mnt/disks/lochnas_roms' --output=size,used,avail|/bin/grep -v '1K-blocks' > /tmp/unassigned.devices/df 2>/dev/null) took 20.003387s.
Oct 5 18:12:55 xxxxx unassigned.devices: benchmark: shell_exec(/usr/bin/timeout 20 /bin/df '/mnt/disks/lochnas_vm_bup' --output=size,used,avail|/bin/grep -v '1K-blocks' > /tmp/unassigned.devices/df 2>/dev/null) took 20.002427s.
Oct 5 18:12:57 xxxxx proftpd[10268]: 127.0.0.1 (103.78.12.72[103.78.12.72]) - FTP session closed.
Oct 5 18:13:02 xxxxx unassigned.devices: benchmark: shell_exec(/usr/bin/timeout 20 /bin/df '/mnt/disks/lochnas_sys' --output=size,used,avail|/bin/grep -v '1K-blocks' > /tmp/unassigned.devices/df 2>/dev/null) took 20.003904s.
Oct 5 18:13:15 xxxxx unassigned.devices: benchmark: shell_exec(/usr/bin/timeout 20 /bin/df '/mnt/disks/lochnas_work_bup' --output=size,used,avail|/bin/grep -v '1K-blocks' > /tmp/unassigned.devices/df 2>/dev/null) took 20.003625s.
Oct 5 18:13:22 xxxxx unassigned.devices: benchmark: shell_exec(/usr/bin/timeout 20 /bin/df '/mnt/disks/lochnas_vm_bup' --output=size,used,avail|/bin/grep -v '1K-blocks' > /tmp/unassigned.devices/df 2>/dev/null) took 20.003307s.
Oct 5 18:13:35 xxxxx unassigned.devices: benchmark: shell_exec(/usr/bin/timeout 20 /bin/df '/mnt/disks/lochnas_ebooks' --output=size,used,avail|/bin/grep -v '1K-blocks' > /tmp/unassigned.devices/df 2>/dev/null) took 20.003602s.
Oct 5 18:13:42 xxxxx unassigned.devices: benchmark: shell_exec(/usr/bin/timeout 20 /bin/df '/mnt/disks/lochnas_work_bup' --output=size,used,avail|/bin/grep -v '1K-blocks' > /tmp/unassigned.devices/df 2>/dev/null) took 20.004085s.
Oct 5 18:13:55 xxxxx unassigned.devices: benchmark: shell_exec(/usr/bin/timeout 20 /bin/df '/mnt/disks/lochnas_movies' --output=size,used,avail|/bin/grep -v '1K-blocks' > /tmp/unassigned.devices/df 2>/dev/null) took 20.003528s.
Oct 5 18:14:02 xxxxx unassigned.devices: benchmark: shell_exec(/usr/bin/timeout 20 /bin/df '/mnt/disks/lochnas_ebooks' --output=size,used,avail|/bin/grep -v '1K-blocks' > /tmp/unassigned.devices/df 2>/dev/null) took 20.003560s.
Oct 5 18:14:22 xxxxx unassigned.devices: benchmark: shell_exec(/usr/bin/timeout 20 /bin/df '/mnt/disks/lochnas_movies' --output=size,used,avail|/bin/grep -v '1K-blocks' > /tmp/unassigned.devices/df 2>/dev/null) took 20.003188s.
These are obviously all timing out, so something is definitely up.
Also, in these situations, when I connect to the Unraid server running Unassigned Devices via SSH and manually mount one of those NFS shares, it works without a problem:
I can list the contents of the share and copy files to/from it. So the server is not offline, but the mounts created
by Unassigned Devices somehow become inaccessible after a while.
I'm not sure where to go from here.
Adding to this: when I do a lazy unmount of the Unassigned Devices mounts, I'm afterwards able to remount them with
the mount buttons on the Unraid Main page.
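For reference, the lazy unmount I mentioned looks like this (mount point taken from my log above):
umount -l /mnt/disks/lochnas_roms    # lazy unmount: detaches the stale NFS mount immediately, cleans up once it's no longer busy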
-
Update: I downgraded both my servers back to v. 6.5.3 and the problem persists. Let me know if you need specific information/logs.
-
+1 for NFS issues since 6.6.0
See my post here:
-
Hi all,
since the Unraid 6.6.0 update I've been having issues with NFS shares mounted by the Unassigned Devices plugin. I've got two Unraid servers: server B mounts a couple of NFS shares on server A and runs a number of rsync scripts on a schedule to push new/modified files to server A. These NFS mounts seem to become "stale" pretty quickly. Right after a server reboot, manually triggering my sync scripts works just fine; a day later, rsync hangs at "sending incremental file list" and I'm unable to "cd" into the NFS mount points.
Any clues on how to fix this problem are appreciated.
I'm currently on Unraid 6.6.1 on both of my servers. The Unassigned Devices plugin version is 2018.09.23.
Also, I'm still able to manually mount the NFS shares from the command line.
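To illustrate what "manually mount" means here: something like this still works fine from a shell on server B (host, export path and mount point are examples from my setup):
mount -t nfs serverA:/mnt/user/myshare /mnt/disks/serverA_myshare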
Cheers!
-
Hi all,
I had to roll back from 'latest' to 2.3.3-1-01 since sabnzbd was unable to resolve the names of my Usenet servers.
Did I miss something about configuration changes required on my part, or is this a known issue?
-
Hi all,
I just updated my sabnzbd container and it does not seem to be working any more. Here is the log,
any help is appreciated.
2017-05-24 21:06:20,185 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,185::INFO::[SABnzbd:1275] SSL supported protocols ['TLS v1.2', 'TLS v1.1', 'TLS v1']
2017-05-24 21:06:20,189 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,188::INFO::[SABnzbd:1386] Starting web-interface on 0.0.0.0:8090
2017-05-24 21:06:20,189 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,189::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Bus STARTING
2017-05-24 21:06:20,193 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,192::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Started monitor thread '_TimeoutMonitor'.
2017-05-24 21:06:20,357 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,357::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Serving on http://0.0.0.0:8080
2017-05-24 21:06:20,359 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,358::ERROR::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Error in 'start' listener <bound method Server.start of <cherrypy._cpserver.Server object at 0x2b1999457950>>
Traceback (most recent call last):
File "/opt/sabnzbd/cherrypy/process/wspbus.py", line 207, in publish
output.append(listener(*args, **kwargs))
File "/opt/sabnzbd/cherrypy/_cpserver.py", line 167, in start
self.httpserver, self.bind_addr = self.httpserver_from_self()
File "/opt/sabnzbd/cherrypy/_cpserver.py", line 158, in httpserver_from_self
httpserver = _cpwsgi_server.CPWSGIServer(self)
File "/opt/sabnzbd/cherrypy/_cpwsgi_server.py", line 64, in __init__
self.server_adapter.ssl_certificate_chain)
File "/opt/sabnzbd/cherrypy/wsgiserver/ssl_builtin.py", line 56, in __init__
self.context.load_cert_chain(certificate, private_key)
SSLError: [SSL: CA_MD_TOO_WEAK] ca md too weak (_ssl.c:2699)
2017-05-24 21:06:20,359 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,359::ERROR::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Shutting down due to error in start listener:
Traceback (most recent call last):
File "/opt/sabnzbd/cherrypy/process/wspbus.py", line 245, in start
self.publish('start')
File "/opt/sabnzbd/cherrypy/process/wspbus.py", line 225, in publish
raise exc
ChannelFailures: SSLError(336245134, u'[SSL: CA_MD_TOO_WEAK] ca md too weak (_ssl.c:2699)')
2017-05-24 21:06:20,359 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,359::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Bus STOPPING
2017-05-24 21:06:20,361 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,361::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('0.0.0.0', 8080)) shut down
2017-05-24 21:06:20,361 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,361::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE HTTP Server None already shut down
2017-05-24 21:06:20,361 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,361::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Stopped thread '_TimeoutMonitor'.
2017-05-24 21:06:20,361 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,361::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Bus STOPPED
2017-05-24 21:06:20,362 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,361::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Bus EXITING
2017-05-24 21:06:20,362 DEBG 'sabnzbd' stderr output:
2017-05-24 21:06:20,362::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Bus EXITED
2017-05-24 21:06:20,364 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 47147513872536 for <Subprocess at 47147433790064 with name sabnzbd in state STARTING> (stdout)>
2017-05-24 21:06:20,364 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 47147513480472 for <Subprocess at 47147433790064 with name sabnzbd in state STARTING> (stderr)>
2017-05-24 21:06:20,364 INFO exited: sabnzbd (exit status 70; not expected)
2017-05-24 21:06:20,364 DEBG received SIGCLD indicating a child quit
2017-05-24 21:06:21,365 INFO gave up: sabnzbd entered FATAL state, too many start retries too quickly
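Edit: from the traceback, CherryPy refuses to load sabnzbd's self-signed HTTPS certificate because it was signed with a digest that newer OpenSSL versions consider too weak (CA_MD_TOO_WEAK, typically MD5). One possible workaround is regenerating the certificate with SHA-256; a sketch, with file names guessed for this container (check the https_cert/https_key entries in sabnzbd.ini for the real paths):
openssl req -x509 -newkey rsa:2048 -sha256 -nodes -days 3650 \
  -subj "/CN=sabnzbd" \
  -keyout server.key -out server.cert
# then point https_cert/https_key at the new files (or replace the old ones) and restart the container
-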
*I just realized I posted to the wrong thread. Sorry about that.
-
Okay. I'll see if I have another slot available for the controller.
This issue is very concerning.
-
Hi again,
I finally got around to following your suggestions. I flashed the latest firmware I could find
on the Supermicro website, disabled VT-d and updated to Unraid 6.3.1.
When I restarted the server, the drive was present and (obviously) still marked with a red X.
I then tried to start the short S.M.A.R.T. self-test, but got an error stating that a mandatory
command had failed. When I checked the "Main" tab again, the drive was suddenly marked as missing.
I rebooted again; the drive was present once more and I could see its S.M.A.R.T. information,
which I'm attaching now.
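For the record, this is the command-line equivalent of what the GUI tried (a sketch; /dev/sdX stands for whatever the failing drive currently enumerates as):
smartctl -t short /dev/sdX    # start the short self-test
smartctl -a /dev/sdX          # dump SMART attributes plus the self-test log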
Thanks for your help.
-
Hi all,
I woke up this morning to find that one of the disks in my Unraid server (running 6.3.0)
has been disabled. I hope someone can tell me what exactly to look for
(failing disk, cable issue, something else?).
Here is an extract from my syslog which will hopefully help sort this out.
Thanks for your time.
*The data on the disk is accessible; I am currently copying it to another disk via the shell.
Feb 7 03:54:06 LochNAS kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
Feb 7 03:54:06 LochNAS kernel: sas: trying to find task 0xffff8801ee3c6900
Feb 7 03:54:06 LochNAS kernel: sas: sas_scsi_find_task: aborting task 0xffff8801ee3c6900
Feb 7 03:54:06 LochNAS kernel: sas: sas_scsi_find_task: task 0xffff8801ee3c6900 is aborted
Feb 7 03:54:06 LochNAS kernel: sas: sas_eh_handle_sas_errors: task 0xffff8801ee3c6900 is aborted
Feb 7 03:54:06 LochNAS kernel: sas: ata14: end_device-1:5: cmd error handler
Feb 7 03:54:06 LochNAS kernel: sas: ata9: end_device-1:0: dev error handler
Feb 7 03:54:06 LochNAS kernel: sas: ata10: end_device-1:1: dev error handler
Feb 7 03:54:06 LochNAS kernel: sas: ata11: end_device-1:2: dev error handler
Feb 7 03:54:06 LochNAS kernel: sas: ata12: end_device-1:3: dev error handler
Feb 7 03:54:06 LochNAS kernel: sas: ata13: end_device-1:4: dev error handler
Feb 7 03:54:06 LochNAS kernel: sas: ata14: end_device-1:5: dev error handler
Feb 7 03:54:06 LochNAS kernel: sas: ata15: end_device-1:6: dev error handler
Feb 7 03:54:06 LochNAS kernel: ata14.00: exception Emask 0x0 SAct 0x10000000 SErr 0x0 action 0x6 frozen
Feb 7 03:54:06 LochNAS kernel: ata14.00: failed command: READ FPDMA QUEUED
Feb 7 03:54:06 LochNAS kernel: sas: ata16: end_device-1:7: dev error handler
Feb 7 03:54:06 LochNAS kernel: ata14.00: cmd 60/40:00:80:70:3b/00:00:39:00:00/40 tag 28 ncq dma 32768 in
Feb 7 03:54:06 LochNAS kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 7 03:54:06 LochNAS kernel: ata14.00: status: { DRDY }
Feb 7 03:54:06 LochNAS kernel: ata14: hard resetting link
Feb 7 03:54:07 LochNAS kernel: sas: sas_form_port: phy1 belongs to port5 already(1)!
Feb 7 03:54:08 LochNAS kernel: drivers/scsi/mvsas/mv_sas.c 1435:mvs_I_T_nexus_reset for device[1]:rc= 0
Feb 7 03:54:14 LochNAS kernel: ata14.00: qc timeout (cmd 0xec)
Feb 7 03:54:14 LochNAS kernel: ata14.00: failed to IDENTIFY (I/O error, err_mask=0x4)
Feb 7 03:54:14 LochNAS kernel: ata14.00: revalidation failed (errno=-5)
Feb 7 03:54:14 LochNAS kernel: ata14: hard resetting link
Feb 7 03:54:14 LochNAS kernel: sas: sas_form_port: phy1 belongs to port5 already(1)!
Feb 7 03:54:16 LochNAS kernel: drivers/scsi/mvsas/mv_sas.c 1435:mvs_I_T_nexus_reset for device[1]:rc= 0
Feb 7 03:54:22 LochNAS kernel: ata14.00: qc timeout (cmd 0x27)
Feb 7 03:54:22 LochNAS kernel: ata14.00: failed to read native max address (err_mask=0x4)
Feb 7 03:54:22 LochNAS kernel: ata14.00: HPA support seems broken, skipping HPA handling
Feb 7 03:54:22 LochNAS kernel: ata14.00: revalidation failed (errno=-5)
Feb 7 03:54:22 LochNAS kernel: ata14: hard resetting link
Feb 7 03:54:22 LochNAS kernel: sas: sas_form_port: phy1 belongs to port5 already(1)!
Feb 7 03:54:24 LochNAS kernel: drivers/scsi/mvsas/mv_sas.c 1435:mvs_I_T_nexus_reset for device[1]:rc= 0
Feb 7 03:54:39 LochNAS kernel: ata14.00: qc timeout (cmd 0xef)
Feb 7 03:54:39 LochNAS kernel: ata14.00: failed to set xfermode (err_mask=0x4)
Feb 7 03:54:39 LochNAS kernel: ata14.00: disabled
Feb 7 03:54:39 LochNAS kernel: ata14: hard resetting link
Feb 7 03:54:39 LochNAS kernel: sas: sas_form_port: phy1 belongs to port5 already(1)!
Feb 7 03:54:41 LochNAS kernel: drivers/scsi/mvsas/mv_sas.c 1435:mvs_I_T_nexus_reset for device[1]:rc= 0
Feb 7 03:54:42 LochNAS kernel: ata14: EH complete
Feb 7 03:54:42 LochNAS kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#2 CDB: opcode=0x88 88 00 00 00 00 00 39 3b 70 80 00 00 00 40 00 00
Feb 7 03:54:42 LochNAS kernel: blk_update_request: I/O error, dev sdg, sector 960196736
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196672
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#4 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#4 CDB: opcode=0x35 35 00 00 00 00 00 00 00 00 00
Feb 7 03:54:42 LochNAS kernel: blk_update_request: I/O error, dev sdg, sector 0
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196680
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196688
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196696
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196704
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196712
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196720
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196728
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#6 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#2 CDB: opcode=0x88 88 00 00 00 00 00 39 3b 70 c0 00 00 04 00 00 00
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#6 CDB: opcode=0x88 88 00 00 00 00 00 39 3b 7c c0 00 00 04 00 00 00
Feb 7 03:54:42 LochNAS kernel: blk_update_request: I/O error, dev sdg, sector 960199872
Feb 7 03:54:42 LochNAS kernel: blk_update_request: I/O error, dev sdg, sector 960196800
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196736
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199808
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196744
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199816
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196752
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199824
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196760
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199832
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196768
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#10 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199840
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#10 CDB: opcode=0x88 88 00 00 00 00 00 39 3b 84 c0 00 00 04 00 00 00
Feb 7 03:54:42 LochNAS kernel: blk_update_request: I/O error, dev sdg, sector 960201920
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#3 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#3 CDB: opcode=0x88 88 00 00 00 00 00 39 3b 74 c0 00 00 04 00 00 00
Feb 7 03:54:42 LochNAS kernel: blk_update_request: I/O error, dev sdg, sector 960197824
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196776
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199848
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196784
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199856
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196792
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199864
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196800
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199872
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196808
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199880
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196816
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#15 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#15 CDB: opcode=0x88 88 00 00 00 00 00 39 3b 8c c0 00 00 04 00 00 00
Feb 7 03:54:42 LochNAS kernel: blk_update_request: I/O error, dev sdg, sector 960203968
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199888
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#5 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#5 CDB: opcode=0x88 88 00 00 00 00 00 39 3b 78 c0 00 00 04 00 00 00
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196824
Feb 7 03:54:42 LochNAS kernel: blk_update_request: I/O error, dev sdg, sector 960198848
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199896
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196832
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199904
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196840
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199912
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196848
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199920
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196856
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#17 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199928
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#17 CDB: opcode=0x88 88 00 00 00 00 00 39 3b 94 c0 00 00 03 c0 00 00
Feb 7 03:54:42 LochNAS kernel: blk_update_request: I/O error, dev sdg, sector 960206016
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196864
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199936
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#8 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] tag#8 CDB: opcode=0x88 88 00 00 00 00 00 39 3b 80 c0 00 00 04 00 00 00
Feb 7 03:54:42 LochNAS kernel: blk_update_request: I/O error, dev sdg, sector 960200896
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196872
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199944
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196880
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199952
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196888
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960199960
Feb 7 03:54:42 LochNAS kernel: md: disk11 read error, sector=960196896
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] Read Capacity(16) failed: Result: hostbyte=0x04 driverbyte=0x00
Feb 7 03:54:42 LochNAS kernel: sd 1:0:5:0: [sdg] Sense not available.
-
@ChatNoir: Thanks, it works now.