
bb12489

Members
  • Content Count

    56
  • Joined

  • Last visited

Community Reputation

1 Neutral

About bb12489

  • Rank
    Advanced Member

  1. This helped! I deleted the server and cert files from my appdata/sabnzbd/admin/ directory. Started up normally. Thanks!
  2. After an update to the Sab docker this morning, it appears that there is an error keeping it from starting up. I've tried removing the container completely and re-creating it, but it still ends with the same error as posted below. Is anyone else running into this? My Sab docker was working just fine before going to bed.

2019-02-27 08:52:31,667::ERROR::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Error in 'start' listener <bound method Server.start of <cherrypy._cpserver.Server object at 0x150cfe541b90>>
Traceback (most recent call last):
  File "/usr/share/sabnzbdplus/cherrypy/process/wspbus.py", line 207, in publish
    output.append(listener(*args, **kwargs))
  File "/usr/share/sabnzbdplus/cherrypy/_cpserver.py", line 167, in start
    self.httpserver, self.bind_addr = self.httpserver_from_self()
  File "/usr/share/sabnzbdplus/cherrypy/_cpserver.py", line 158, in httpserver_from_self
    httpserver = _cpwsgi_server.CPWSGIServer(self)
  File "/usr/share/sabnzbdplus/cherrypy/_cpwsgi_server.py", line 64, in __init__
    self.server_adapter.ssl_certificate_chain)
  File "/usr/share/sabnzbdplus/cherrypy/wsgiserver/ssl_builtin.py", line 56, in __init__
    self.context.load_cert_chain(certificate, private_key)
SSLError: [SSL: CA_MD_TOO_WEAK] ca md too weak (_ssl.c:2779)
2019-02-27 08:52:31,769::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Serving on http://0.0.0.0:8080
2019-02-27 08:52:31,770::ERROR::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Shutting down due to error in start listener:
Traceback (most recent call last):
  File "/usr/share/sabnzbdplus/cherrypy/process/wspbus.py", line 245, in start
    self.publish('start')
  File "/usr/share/sabnzbdplus/cherrypy/process/wspbus.py", line 225, in publish
    raise exc
ChannelFailures: SSLError(336245134, u'[SSL: CA_MD_TOO_WEAK] ca md too weak (_ssl.c:2779)')
2019-02-27 08:52:31,770::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Bus STOPPING
2019-02-27 08:52:31,773::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('0.0.0.0', 8080)) shut down
2019-02-27 08:52:31,773::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE HTTP Server None already shut down
2019-02-27 08:52:31,774::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Bus STOPPED
2019-02-27 08:52:31,774::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Bus EXITING
2019-02-27 08:52:31,774::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Bus EXITED
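The CA_MD_TOO_WEAK failure above means the newer OpenSSL in the updated container refuses to load SABnzbd's old self-signed certificate because it was signed with a digest (MD5/SHA-1) that falls below the current default security level. The fix from the first post (delete the certificate files so SABnzbd generates a fresh pair on startup) can be sketched roughly like this; the file names server.cert and server.key are assumptions based on SABnzbd defaults, not something confirmed in the thread:

```python
from pathlib import Path

# Hypothetical helper: remove SABnzbd's stale self-signed certificate files
# so a fresh pair with a stronger signature is regenerated on next startup.
def remove_stale_certs(admin_dir, names=("server.cert", "server.key")):
    removed = []
    for name in names:
        path = Path(admin_dir) / name
        if path.exists():
            path.unlink()           # delete the stale cert/key file
            removed.append(name)
    return removed                  # names of the files actually deleted
```

Pointed at appdata/sabnzbd/admin/ (with the container stopped), this removes only the files that exist, so running it twice is harmless.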
  3. That did it! Thank you. I must have mounted it for some stupid reason. Everything is working properly now!
  4. I currently have two 500GB Samsung 850 EVO SSDs in a RAID 0 setup for cache drives; I needed space and speed over redundancy. I recently came into possession of a third 500GB EVO drive, and when adding it to the cache pool, it kicks off a re-balance as expected. Once the balance has finished, though, the array size still sits at 1TB instead of the expected 1.5TB. I thought this might be because Unraid switched the RAID level during the automatic balance, but upon checking, it says I'm still running a RAID 0 config. So I tried yet another balance to RAID 0 with the same result, then finally tried balancing to RAID 1 and back to RAID 0 with, again, the same result... The only thing that stands out to me as odd (aside from the space not increasing) is that the drive shows no read or write activity (shown in the screenshot). Does anyone have any recommendations?
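The sizes expected here can be sanity-checked with quick arithmetic — a back-of-the-envelope sketch of the btrfs space rules, not output from btrfs itself. Three 500GB drives in RAID 0 should expose the plain sum, roughly 1.5TB, while RAID 1 would expose half that; a pool still reporting 1TB suggests the data profile never actually spread onto the third drive, which is worth confirming with `btrfs filesystem usage /mnt/cache` rather than the GUI.

```python
# Back-of-the-envelope usable capacity for btrfs pool profiles (sizes in GB).

def btrfs_raid0_capacity(sizes_gb):
    # RAID 0 stripes data across all members: usable space is the plain sum.
    return sum(sizes_gb)

def btrfs_raid1_capacity(sizes_gb):
    # btrfs RAID 1 keeps exactly two copies of every chunk: usable space is
    # half the total, capped by how much the other drives can mirror.
    total = sum(sizes_gb)
    return min(total // 2, total - max(sizes_gb))
```

For three 500GB drives these give 1500GB for RAID 0 and 750GB for RAID 1, so a pool stuck at 1TB behaves as if only two of the three members hold data.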
  5. I did add the cores I wanted isolated in my syslinux.cfg. It's just that the XML looked odd to me.
  6. Hey guys, I'm finally getting started with setting up a Gamestream VM to use with my Nvidia Shield TV. I think I've got the CPU pinning set correctly, but I'm hoping someone could give it a second look. My system is running dual Xeon L5640s (6 cores / 12 threads each), so I have 24 threads to work with. My thought was to isolate the last 3 cores, which would give me 6 threads for the VM. Is my thinking correct? The only thing that looks off to me in the XML is the "cputune" section. Shouldn't it be showing 9,21,10,22,11,23? My thread pairing is as follows:
cpu 0 <===> cpu 12
cpu 1 <===> cpu 13
cpu 2 <===> cpu 14
cpu 3 <===> cpu 15
cpu 4 <===> cpu 16
cpu 5 <===> cpu 17
cpu 6 <===> cpu 18
cpu 7 <===> cpu 19
cpu 8 <===> cpu 20
cpu 9 <===> cpu 21
cpu 10 <===> cpu 22
cpu 11 <===> cpu 23
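The expected vcpu list can be derived mechanically: on this topology, thread N pairs with thread N + 12, so isolating cores 9-11 should indeed yield the 9,21,10,22,11,23 ordering expected in the cputune section. A small sketch of that pairing rule (function names here are illustrative, not libvirt API):

```python
# Map physical cores to their hyperthread siblings on a 24-thread box
# where thread N pairs with thread N + 12.

def hyperthread_sibling(core, total_threads=24):
    return core + total_threads // 2

def cputune_order(cores, total_threads=24):
    # Interleave each core with its sibling: the order in which vcpus
    # would be pinned in a libvirt <cputune> block for these cores.
    threads = []
    for core in cores:
        threads.append(core)
        threads.append(hyperthread_sibling(core, total_threads))
    return threads
```

Here `cputune_order([9, 10, 11])` returns `[9, 21, 10, 22, 11, 23]`, matching the pairing the XML should show.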
  7. Update: I think SMB is broken in Unraid at this point. I can't access my shares from my Windows 10 client, and my Kodi-Headless docker can't map its sources to the shares either. The only device that still seems to have access is my Android phone.
  8. Just wanted to chime in here and say that I have the same problem on and off. Sometimes I can access my shares, and other times I can't. It was suggested to just enter random usernames into the credential box when prompted on Windows 10. This works sometimes, and other times it does not. Either way, after a reboot the solution stops working.
  9. Just a heads up for anyone who is wondering... I tried installing this on my Unraid 6.2.2 server, and the WebUI would fail to load. I finally had to add extra parameters when setting up the app: --net=bridge -p 0.0.0.0:8123:8123. Found the solution here: https://community.home-assistant.io/t/docker-on-mac-install-front-end-nowhere-to-be-found/5553/5
  10. Just a question here... By any chance, have the BTRFS balancing options been updated in the stable build? I remember from a few beta builds ago that there was going to be a fix for switching the BTRFS cache pool from RAID 1 to RAID 0 and having it not rebuild the pool back to RAID 1 every time you added or removed a drive. I know I may be one of the very few users on here running their cache pool in RAID 0, but I do have my reasons (space and speed).
  11. This is a Windows issue. If your shares are set to public, type a random username without any password, e.g., "user", and it should work; if it does, save the credentials. I had the same issue with all my shares. They are all set to public, but after the upgrade from RC5 to Stable, I was no longer able to access the shares from my Windows 10 desktop. It kept prompting for a username and password. I even checked Credential Manager and there are no saved credentials. I did end up trying your solution of typing in "user" for the username and no password, and now I'm back in. Very odd.
  12. I never set up an icon, so that's why it looks like that. Oh weird. In community apps it shows an icon, so I thought it had an icon. My bad! Sent from my XT1575 using Tapatalk
  13. For some reason the Grafana icon isn't showing on my Unraid Dashboard. Just shows as a broken img. I don't believe it's an issue with my server. All other dashboard icons are working.