bobokun

Posts posted by bobokun

  1. Sorry if this question has been asked a lot, but is it best practice to use all the SATA ports on your motherboard before using the ones on the controller? I have 8 ports on my motherboard and I'm using all 8 (2x for SSDs and 6x for HDDs), but I want to add more drives. Is it better to put the 8x HDDs on the controller and leave the 2x SSDs on the motherboard?

  2. I recently upgraded my parity drive by swapping the old parity disk out and recalculating parity onto the new, larger drive. Once that was complete, I added the old drive back in (Unraid cleared the drive and added it to the array). I noticed that I have a few Buffer I/O errors, plus other errors like "OCSP responder prematurely closed connection while requesting certificate status". I'm not sure what these are or whether I should worry about them.
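
    A rough way to check which device those Buffer I/O errors refer to (just a sketch, assuming the default Unraid syslog location) would be:

        # Show which device the kernel is reporting Buffer I/O errors for
        grep -i "buffer i/o error" /var/log/syslog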

     


    unnas-diagnostics-20190403-1746.zip

  3. My Plex docker container has a memory leak and I'm not sure how to fix it. When I restart the container it barely uses any memory, but usage slowly climbs and doesn't stop; after 1 or 2 days it's using 5-6 GB of RAM. I've tried deleting my docker img and recreating it, pulling a fresh copy of the binhex-plex-pass container. I would prefer not to start from scratch as I have quite a number of users on my server. Is there anything I can do to figure out what is causing this?
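
    One idea for narrowing it down (just a sketch; the container name and log path are assumptions, adjust as needed) is to log the container's memory usage over time with docker stats and see how fast it grows:

        # Record the Plex container's memory usage every 10 minutes
        while true; do
            echo "$(date) $(docker stats --no-stream --format '{{.Name}} {{.MemUsage}}' binhex-plex-pass)" >> /mnt/user/appdata/plex-mem.log
            sleep 600
        done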

  4. No, I need to wait until the next time it happens to find out which one is causing the issue. The problem is that when I try to stop a docker container it just hangs (I think because the CPU and RAM are both at 100%), so I can't kill the container or shut it down from the GUI. Is there a way to stop all docker containers using SSH commands? Or, if that doesn't work, can I kill the process running the docker container?
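
    Something along these lines is what I'm after (a sketch; whether it still works while the server is in that state is another question):

        # Ask every running container to stop gracefully
        docker stop $(docker ps -q)
        # If the stop itself hangs, force-kill them instead
        docker kill $(docker ps -q)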

  5. The hard drives start making a lot of noise when this happens. I think there is an issue with I/O and it gets stuck? Is there any way I can figure out whether it's a hardware issue or a software one? My drives are all relatively new (1-2 years) and went through several runs of preclear before I used them, so I don't think it should be the hard drives. Any way I can check?
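
    For what it's worth, a minimal check from the command line (with /dev/sdX standing in for the drive in question) would be something like:

        # Dump SMART attributes and the drive's error log
        smartctl -a /dev/sdX
        # Start an extended self-test; check the result later with smartctl -a
        smartctl -t long /dev/sdX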

  6. Unfortunately, changing to binhex-sonarr didn't fix the issue. I noticed that all my CPU and RAM are being used as well when the docker containers stop responding, but htop shows otherwise. Attached are some new diagnostics.

     

    Edit: a weird thing happened, but once the docker containers automatically updated, the CPU usage all went back to normal. The docker containers that got updated were:

     

    Community Applications - Docker Auto Update
    Automatically updated (normal): airsonic (was stopped), bazarr, binhex-sonarr, nextcloud, radarr, rutorrent, sonarr (was stopped), tautulli, unifi-controller, jackett, letsencrypt

     

    How can I figure out which docker container is causing the hang-ups?
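
    Next time it hangs, one thing worth trying from SSH (a sketch, assuming a shell is still reachable) is to look for processes stuck in uninterruptible I/O wait and grab a one-shot per-container snapshot:

        # Processes in D state (uninterruptible sleep) usually point at hung I/O
        ps -eo state,pid,cmd | awk '$1 == "D"'
        # One-shot CPU/memory snapshot for every container
        docker stats --no-stream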

     


    unnas-diagnostics-20190325-0529.zip

  7. I noticed over the past couple of days that, after leaving the server running, all my docker containers would stop responding, and the only way to fix it is to restart the server.

    I thought it was a one-time thing, but it just happened again and I'm not sure what the cause is.

     

    Going to the Docker tab takes longer than usual to load (around 1 min), and pressing "Stop all Dockers" just ends up with a looping animation. I can't restart any docker containers, and the only option is to restart the server.
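
    A quick way to tell whether it's just the web UI or the docker daemon itself that is stuck (a sketch, run over SSH) would be:

        # If these hang as well, the docker daemon itself is stuck, not just the web UI
        docker info
        docker ps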

    unnas-diagnostics-20190313-1221.zip

  8. 1 hour ago, CHBMB said:

    Try

     

    
    sudo -u abc php7 /config/www/nextcloud/occ db:convert-filecache-bigint

     

    Thank you, that worked perfectly!


     

    I am also having an issue with caldav/carddav not being set up properly. I've tried searching through the thread, and I've added this code to appdata\letsencrypt\nginx\site-confs\default, but it's still giving me the error. I've also tried adding the same code to my nextcloud\nginx\site-confs\default and it's not working either. Does anyone have an idea how to fix this issue?

    	location = /.well-known/carddav {
    		return 301 $scheme://$host/remote.php/dav;
    	}
    	location = /.well-known/caldav {
    		return 301 $scheme://$host/remote.php/dav;
    	}

     

    EDIT: I have fixed this issue! For anyone else who is having this error and is using YOURDOMAIN.COM/nextcloud as the URL, you need to change it to the code below. A very silly mistake on my part, but I'm glad all issues are resolved now :)

    	location = /.well-known/carddav {
    		return 301 $scheme://$host/nextcloud/remote.php/dav;
    	}
    	location = /.well-known/caldav {
    		return 301 $scheme://$host/nextcloud/remote.php/dav;
    	}
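
    A quick way to confirm the redirects are working after reloading nginx (using YOURDOMAIN.COM as the placeholder domain):

        # Both should return a 301 pointing at /nextcloud/remote.php/dav
        curl -I https://YOURDOMAIN.COM/.well-known/carddav
        curl -I https://YOURDOMAIN.COM/.well-known/caldav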

     

     

  9. On 1/18/2019 at 10:26 AM, CorneliousJD said:

    Sorry to bother, I did try searching for this and found others with the issue but not a resolution.

     

    I had 13.0.0 installed, updated to 14.x and then 15.0.2 after that - all via WebUI and that went very smoothly.

     

    I just now have these warnings; before the upgrades I didn't have any warnings or issues listed here. I also still get an A+ rating on the Nextcloud security scan, but I would like to resolve all of these issues for good measure.

     

    EDIT - got the tables updated w/ the sudo -u abc command in the docker shell, but I'm still not sure why the referrer-policy check is kicking that back; I thought I had that issue on 13.x originally and fixed it. I'll have to look around some more, but if someone has a link or info handy, feel free to send it my way!

      

    Any help is appreciated!

     

    [screenshot of the warnings attached]

    I have this same error; what was the command you used to fix it?

    I tried "sudo -u abc php occ db:convert-filecache-bigint" and it gives me an error "Could not open input file: occ"

     

  10. 12 minutes ago, johnnie.black said:

    While you should avoid preclearing SSDs, doing it once shouldn't kill it unless it was already failing. Try doing a secure erase with the Intel SSD Toolbox; if that fails, you'll need a new SSD.

    I tried installing the SSD Toolbox and this is what I see... Do any of the logs indicate a failure during the preclear? Once the preclear was complete I checked the SMART status and it showed 0 reallocated sectors and nothing unusual that would suggest the device is failing...

     

    I also tried to boot into Intel's firmware update ISO, but it said it couldn't detect any Intel SSDs...

     

    [screenshot attached]