WonkoTheSane

Members
  • Posts: 39
Everything posted by WonkoTheSane

  1. The forum seems to be experiencing some problems. The upload of my diagnostics archive keeps failing. I'll try again later.
  2. Hi all, my server has been stable for about a year. After a BIOS update, I'm seeing frequent crashes: the server becomes completely inaccessible and can't even be pinged anymore. To isolate the issue, I've fiddled with a couple of BIOS settings and stopped using my VMs, but the problem remains. I'm attaching my diagnostics to this message; hopefully someone will be able to help. Thanks & best regards, Matthias nasferatu-diagnostics-20220130-0834.zip
  3. Thanks for the info. Was really doubting myself there.
  4. Alright, I feel very stupid now. Has this always been the case? Coincidence that I've never bumped into this issue before, I guess.
  5. Hey all, I've got 2 Unraid servers running v. 6.9.2. On both servers, when I click on an array disk in the Array Devices tab, neither SMART attributes nor capabilities are displayed for any disk: "Can not read attributes" / "Can not read capabilities". The information for my cache disks displays fine. I've checked the system log and was unable to find any entries regarding SMART issues. Thanks for your time!
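A quick cross-check for the post above, sketched with an example device name (not from the post): if smartctl can read the attributes directly from the shell, the drives and controller are fine and the fault lies in the web UI.

```shell
# Print just the SMART attribute table for a disk, bypassing the Unraid
# web UI. /dev/sdb is an example device name.
smart_attrs() {
    smartctl -A "$1"
}
# smart_attrs /dev/sdb    # run on the server; try a cache disk as a control
```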
  6. Thanks for your answer. As you can see, the free space and the capacity of the share differ by several terabytes in each case. Just as a test, I invoked the mover and checked again after it completed. I'm seeing exactly the same numbers alternating when refreshing the directory on that share. I've got another share that also uses the cache disk for new files; there the numbers don't alternate.
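One way to make sense of the alternating numbers, as a sketch (standard Unraid mount points assumed): a cache-enabled user share is backed by both the cache pool and the array, and the free space SMB reports can reflect either, so comparing the backing paths side by side shows the two candidate values.

```shell
# Print mount point and available space for each backing path that exists.
# /mnt/cache (cache pool), /mnt/user0 (array only) and /mnt/user (merged
# view) are the standard Unraid paths.
report_free() {
    for p in "$@"; do
        [ -d "$p" ] && df -h --output=target,avail "$p" | tail -n 1
    done
    true    # skipping missing paths is not an error
}
report_free /mnt/cache /mnt/user0 /mnt/user
```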
  7. Hi all, this is probably a minor issue, but I've noticed for quite some time that the free space reported for an SMB share sometimes differs dramatically between refreshes of said share. See the attached screenshots of the same share, taken before and after a refresh/reload of the current directory on the SMB share. Best regards, Matthias
  8. Hi Johnnie, I just tried that, doesn't seem to make a difference. Still 'no exportable user shares', access denied for disk shares. BUT, I compared ownership and access right flags under /mnt/ to my other Unraid server. Turns out, everything except for /mnt/disks was set to 'rw-rw-rw-' whereas on my working Unraid instance, it is 'rwxrwxrwx'. I'm not really sure how this happened, but it looks like everything is okay for now. Thanks & best regards Matthias
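The permission comparison described above can be scripted; a sketch follows (the chmod line is left commented out and should only be run after confirming a difference against a known-good server).

```shell
# Print the octal permission bits and name of each entry so the output of
# two servers can be diffed (same idea as comparing ls -ld /mnt/* by eye).
perm_of() {
    stat -c '%a %n' "$@"
}
perm_of /mnt/* 2>/dev/null || true
# Restore world-rwx on the share roots once a difference is confirmed:
# chmod 777 /mnt/user /mnt/user0 /mnt/disk*
```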
  9. Hi all, as of this morning, all of my user shares have disappeared. Rebooting the server did not fix this issue. The only share I'm seeing is 'flash'. The share configuration on the flash drive is present and looks fine. The disk shares seem to be configured correctly but when trying to access them I get an access denied error. I attached my diagnostics to this message. Any help is appreciated. bignas-diagnostics-20200324-0848.zip
  10. Hi Johnnie, diagnostics attached to this post. Thanks! lochnas-diagnostics-20200222-1145.zip
  11. Hi all, I'm facing a problem I'm not sure how to solve. I have two parity drives and have just replaced 2 data drives in my array. Shortly after starting the server to do a data rebuild, one of my parity drives was marked as disabled. What options do I have now? Any help is appreciated.
  12. Hi again, what I don't understand is this. When I go to the unraid main tab, I see the "Please wait, retrieving information ..." message from unassigned devices. This takes forever; in the syslog I see this: These are obviously all timing out, so something is definitely up. When I connect to the unraid server running unassigned devices via ssh and manually mount one of those NFS shares in these situations, it works without a problem. I can list the contents of the share and copy stuff to/from it. So the server is not offline, but somehow the mounts done by unassigned devices become inaccessible after a while. I'm not sure where to go from here. Adding to this: when I do a lazy unmount of the unassigned devices mounts, I'm afterwards able to remount them again with the mount buttons on the unraid main page.
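The lazy-unmount/remount cycle described above can be sketched as a small guard (the paths and server name are examples; the health probe is bounded with a timeout because a stale NFS handle would otherwise hang the shell):

```shell
# If the mount point answers within 5 seconds it is healthy; otherwise
# lazy-unmount the stale handle and mount the share again.
remount_if_stale() {
    local mnt="$1" src="$2"
    if timeout 5 stat "$mnt" >/dev/null 2>&1; then
        echo "ok"                     # mount is responsive; nothing to do
    else
        umount -l "$mnt"              # detach the stale mount immediately
        mount -t nfs "$src" "$mnt"    # remount from the exporting server
        echo "remounted"
    fi
}
# remount_if_stale /mnt/disks/serverA_share serverA:/mnt/user/share
```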
  13. Update: I downgraded both my servers back to v. 6.5.3 and the problem persists. Let me know if you need specific information/logs.
  14. +1 for NFS issues since 6.6.0 See my post here:
  15. Hi all, since the unraid 6.6.0 update I've been having issues with NFS shares mounted by the unassigned devices plugin. I've got 2 unraid servers. Server B mounts a couple of NFS shares on server A and runs a number of rsync scripts on a schedule to push new/modified files to server A. It looks like these NFS mounts become "stale" pretty quickly. Right after a server reboot manually triggering my sync scripts works just fine. A day later, rsync hangs at "sending incremental file list" and I'm unable to "cd" to the NFS mount points. Any clues on how to fix this problem are appreciated. I'm currently on unraid 6.6.1 on both of my servers. Unassigned devices plugin version is 2018.09.23. Also, I'm still able to manually mount the NFS shares from the command line. Cheers!
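A sketch of a guarded version of such a scheduled sync (paths are examples): probing the destination with a timeout first makes a stale mount fail the script fast instead of rsync hanging at "sending incremental file list".

```shell
# Sync src to dst, but bail out early if the NFS destination is stale.
safe_sync() {
    local src="$1" dst="$2"
    # A stale NFS handle makes even ls hang, so bound the probe.
    if ! timeout 5 ls "$dst" >/dev/null 2>&1; then
        echo "destination stale, skipping sync" >&2
        return 1
    fi
    rsync -a --partial "$src" "$dst"
}
# safe_sync /mnt/user/media/ /mnt/disks/serverA_media/
```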
  16. Hi all, I had to rollback from latest to 2.3.3-1-01 since sabnzbd was unable to resolve the names of my usenet servers. Did I miss anything about some configuration changes required on my part or is this a known issue?
  17. Hi all, I just updated my sabnzbd container and it does not seem to be working any more. Here is the log, any help is appreciated.

     2017-05-24 21:06:20,185 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,185::INFO::[SABnzbd:1275] SSL supported protocols ['TLS v1.2', 'TLS v1.1', 'TLS v1']
     2017-05-24 21:06:20,189 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,188::INFO::[SABnzbd:1386] Starting web-interface on 0.0.0.0:8090
     2017-05-24 21:06:20,189 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,189::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Bus STARTING
     2017-05-24 21:06:20,193 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,192::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Started monitor thread '_TimeoutMonitor'.
     2017-05-24 21:06:20,357 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,357::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Serving on http://0.0.0.0:8080
     2017-05-24 21:06:20,359 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,358::ERROR::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Error in 'start' listener <bound method Server.start of <cherrypy._cpserver.Server object at 0x2b1999457950>>
     Traceback (most recent call last):
       File "/opt/sabnzbd/cherrypy/process/wspbus.py", line 207, in publish
         output.append(listener(*args, **kwargs))
       File "/opt/sabnzbd/cherrypy/_cpserver.py", line 167, in start
         self.httpserver, self.bind_addr = self.httpserver_from_self()
       File "/opt/sabnzbd/cherrypy/_cpserver.py", line 158, in httpserver_from_self
         httpserver = _cpwsgi_server.CPWSGIServer(self)
       File "/opt/sabnzbd/cherrypy/_cpwsgi_server.py", line 64, in __init__
         self.server_adapter.ssl_certificate_chain)
       File "/opt/sabnzbd/cherrypy/wsgiserver/ssl_builtin.py", line 56, in __init__
         self.context.load_cert_chain(certificate, private_key)
     SSLError: [SSL: CA_MD_TOO_WEAK] ca md too weak (_ssl.c:2699)
     2017-05-24 21:06:20,359 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,359::ERROR::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Shutting down due to error in start listener:
     Traceback (most recent call last):
       File "/opt/sabnzbd/cherrypy/process/wspbus.py", line 245, in start
         self.publish('start')
       File "/opt/sabnzbd/cherrypy/process/wspbus.py", line 225, in publish
         raise exc
     ChannelFailures: SSLError(336245134, u'[SSL: CA_MD_TOO_WEAK] ca md too weak (_ssl.c:2699)')
     2017-05-24 21:06:20,359 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,359::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Bus STOPPING
     2017-05-24 21:06:20,361 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,361::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('0.0.0.0', 8080)) shut down
     2017-05-24 21:06:20,361 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,361::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE HTTP Server None already shut down
     2017-05-24 21:06:20,361 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,361::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Stopped thread '_TimeoutMonitor'.
     2017-05-24 21:06:20,361 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,361::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Bus STOPPED
     2017-05-24 21:06:20,362 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,361::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Bus EXITING
     2017-05-24 21:06:20,362 DEBG 'sabnzbd' stderr output:
     2017-05-24 21:06:20,362::INFO::[_cplogging:219] [24/May/2017:21:06:20] ENGINE Bus EXITED
     2017-05-24 21:06:20,364 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 47147513872536 for <Subprocess at 47147433790064 with name sabnzbd in state STARTING> (stdout)>
     2017-05-24 21:06:20,364 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 47147513480472 for <Subprocess at 47147433790064 with name sabnzbd in state STARTING> (stderr)>
     2017-05-24 21:06:20,364 INFO exited: sabnzbd (exit status 70; not expected)
     2017-05-24 21:06:20,364 DEBG received SIGCLD indicating a child quit
     2017-05-24 21:06:21,365 INFO gave up: sabnzbd entered FATAL state, too many start retries too quickly
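For reference, CA_MD_TOO_WEAK is OpenSSL refusing to load a certificate signed with an obsolete digest (typically MD5). One workaround, sketched here with example paths (the location of the certificate this container actually loads may differ), is to regenerate the self-signed pair with SHA-256:

```shell
# Generate a fresh self-signed certificate/key pair signed with SHA-256 so
# newer OpenSSL accepts it. CERT_DIR is an example location; point it at
# the certificate the container actually loads before restarting it.
CERT_DIR=${CERT_DIR:-/tmp/sab_certs}
mkdir -p "$CERT_DIR"
openssl req -x509 -newkey rsa:2048 -sha256 -nodes -days 3650 \
    -subj "/CN=sabnzbd" \
    -keyout "$CERT_DIR/server.key" \
    -out "$CERT_DIR/server.cert" 2>/dev/null
```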
  18. *I just realized I posted to the wrong thread. Sorry about that. Hi all, I just updated my sabnzbd container and it does not seem to be working any more. The log is the same as in my previous post; any help is appreciated.
  19. Okay. I'll see if I have another slot available for the controller. This issue is very concerning.
  20. Hi again, I finally got around to following your suggestions. I flashed the latest firmware I could find on the supermicro website, disabled VT-D and updated to Unraid 6.3.1. When I restarted the server, the drive was present and (obviously) still marked with a red X. I then tried to start the short S.M.A.R.T. selftest, but got an error stating that a mandatory command failed. I checked the "Main" tab again and the drive was marked as missing all of a sudden. I rebooted again, the drive was present again and I could see the S.M.A.R.T. information for it. I'm attaching it now. Thanks for your help. lochnas-smart-20170209-1801.zip
  21. Hey, thanks for your reply, attaching diagnostics.zip now. lochnas-diagnostics-20170207-1910.zip
  22. Hi all, I woke up this morning to find that one of the disks of my Unraid server (running 6.3.0) has been disabled. I hope someone is able to tell me what exactly to look for (failing disk, cable issue, ...?). Here is an extract of my syslog which hopefully helps sort this out. Thanks for your time. *The data on the disk is accessible; I am currently copying it to another disk via shell.
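On the "failing disk or cable?" question, a rough triage sketch from SMART attributes (the device name is an example): a climbing UDMA_CRC_Error_Count usually points at cabling or the connection, while Reallocated or Pending sectors point at the disk itself.

```shell
# Filter a smartctl attribute dump down to the three counters that usually
# distinguish a bad cable from a dying disk.
triage_smart() {
    grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|UDMA_CRC_Error_Count'
}
# smartctl -A /dev/sdc | triage_smart   # run against the disabled disk
```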