Napper198

Members
  • Posts

    14
  • Joined

  • Last visited

Converted

  • Gender
    Undisclosed


Napper198's Achievements

Noob (1/14)

  1. Just for documentation: after updating my instance, it immediately stopped itself after a successful start with the error "Failed to initialize CoreCLR, HRESULT: 0x80004005". Running Docker Safe New Perms fixed this (rough manual equivalent sketched below).
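     As far as I understand, Docker Safe New Perms runs Unraid's New Permissions pass over the user shares while skipping appdata. A minimal sketch of the same thing done by hand on one share; the share path is an example, and the exact mode bits are an approximation of what the tool applies:

         # Hedged sketch: reset ownership/permissions on a single share.
         # Unraid's defaults are nobody:users (UID 99 / GID 100).
         SHARE=/mnt/user/Media            # example path, not from my setup
         chown -R nobody:users "$SHARE"
         chmod -R u+rwX,g+rwX,o+rX "$SHARE"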
  2. Of course there had to be a third place where I have to change that. Works flawlessly now, thanks @ijuarez. (For anyone else chasing this: a quick way to find all of them is sketched below.)
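     A hedged sketch for finding every upload limit in play; the paths assume the linuxserver.io container layout and may differ on other setups:

         # List every client_max_body_size nginx actually loads, then the
         # PHP limits Nextcloud enforces on top of nginx.
         nginx -T 2>/dev/null | grep -n client_max_body_size
         grep -Rn client_max_body_size /config/nginx/
         grep -En 'upload_max_filesize|post_max_size' /config/php/php-local.ini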
  3. If you mean client_max_body_size, that is also zero in the config. Any idea about the wording of that post? I can't seem to find it, unfortunately.

     user abc;
     worker_processes 4;
     pid /run/nginx.pid;
     include /etc/nginx/modules/*.conf;

     events {
         worker_connections 768;
         # multi_accept on;
     }

     http {
         # Basic Settings
         sendfile on;
         tcp_nopush on;
         tcp_nodelay on;
         keepalive_timeout 65;
         types_hash_max_size 2048;
         # server_tokens off;
         # server_names_hash_bucket_size 64;
         # server_name_in_redirect off;
         client_max_body_size 0;

         include /etc/nginx/mime.types;
         default_type application/octet-stream;

         # Logging Settings
         access_log /config/log/nginx/access.log;
         error_log /config/log/nginx/error.log;

         # Gzip Settings
         gzip off;
         gzip_disable "msie6";
         # gzip_vary on;
         # gzip_proxied any;
         # gzip_comp_level 6;
         # gzip_buffers 16 8k;
         # gzip_http_version 1.1;
         # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

         # Virtual Host Configs
         include /etc/nginx/conf.d/*.conf;
         include /config/nginx/site-confs/*;

         ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
         ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
         ssl_prefer_server_ciphers on;
         ssl_session_cache shared:SSL:10m;
         add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
         add_header X-Frame-Options SAMEORIGIN;
         add_header X-Content-Type-Options nosniff;
         add_header X-XSS-Protection "1; mode=block";
         add_header X-Robots-Tag none;
         ssl_stapling on;        # Requires nginx >= 1.3.7
         ssl_stapling_verify on; # Requires nginx >= 1.3.7
     }
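     After any config change, a hedged reload-and-watch sketch; the container name "letsencrypt" is an assumption:

         # Syntax-check and reload nginx inside the proxy container, then
         # watch the error log while retrying the upload.
         docker exec letsencrypt nginx -t
         docker exec letsencrypt nginx -s reload
         docker exec letsencrypt tail -f /config/log/nginx/error.log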
  4. For the most part I'm happy with the Docker container and it does what I need it to do; however, I'm running into "413 Request Entity Too Large" when I try to upload anything larger than 10 MB to my Nextcloud Docker. Downloading works fine, and uploading directly to Nextcloud (not using the reverse proxy) works fine as well.

     Example error from the log:

     2017/11/14 16:17:29 [error] 339#339: *79 client intended to send too large body: 16014220 bytes, client: 79.223.239.124, server: cloud.*, request: "PUT /remote.php/webdav/Photos/2017/11/17-11-14%2013-30-11%200175.mov HTTP/1.1", host: "cloud.dixl.me"

     My config for this particular reverse proxy:

     server {
         listen 443 ssl;
         root /config/www;
         index index.html index.htm index.php;
         server_name cloud.*;

         ssl_certificate /config/keys/letsencrypt/fullchain.pem;
         ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
         ssl_dhparam /config/nginx/dhparams.pem;
         ssl_ciphers 'stuff I'm not sure I want to share';
         ssl_prefer_server_ciphers on;

         client_max_body_size 0;
         client_body_temp_path /unraid/www/cache/;
         proxy_buffering off;
         proxy_request_buffering off;

         location / {
             # auth_basic "Restricted";
             # auth_basic_user_file /config/nginx/.htpasswd;
             include /config/nginx/proxy.conf;
             proxy_pass https://192.168.59.140:443;
         }
     }

     Any help would be appreciated. (A quick way to reproduce this without the Nextcloud client is sketched below.)
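     A hedged reproduction sketch: the host and WebDAV path follow the post, while the credentials and test file are placeholders:

         # PUT a file just over the suspected 10 MB limit through the proxy.
         dd if=/dev/zero of=/tmp/test16m.bin bs=1M count=16
         curl -u user:password -T /tmp/test16m.bin \
             https://cloud.dixl.me/remote.php/webdav/test16m.bin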
  5. After deleting and rebuilding the Docker image it seems mostly fine, but I may need to delete some files from the Emby Docker config folder as they appear to be corrupted. If nothing comes up, I'll mark this as solved on Monday. Thanks for the quick help again.
  6. Oh, I see. A scrub on the Docker image turned up no errors, so I guess I'll try to delete it now (rough steps sketched below).
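     A hedged sketch of the scrub-and-recreate steps; the docker.img path is Unraid's default location and an assumption here:

         # Scrub the cache pool and wait for the result (-B = foreground).
         btrfs scrub start -B /mnt/cache
         btrfs scrub status /mnt/cache
         # To recreate docker.img: stop the Docker service under
         # Settings -> Docker, delete the image, then re-enable the
         # service so Unraid rebuilds it from the templates.
         rm /mnt/user/system/docker/docker.img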
  7. That is the system. I changed the cable and the port on the motherboard, and the drive labels have changed now. Unraid picked sdc as Parity 1 (which should be fine, since this is the good drive). Should I try to reformat the bad drive?
  8. Uh, that doesn't look too good: Edit: it keeps scrolling as well.
  9. root@Iduna:~# btrfs dev stats /mnt/cache
     [/dev/sdb1].write_io_errs    8699682
     [/dev/sdb1].read_io_errs     9130479
     [/dev/sdb1].flush_io_errs    10864
     [/dev/sdb1].corruption_errs  433
     [/dev/sdb1].generation_errs  1
     [/dev/sdc1].write_io_errs    0
     [/dev/sdc1].read_io_errs     0
     [/dev/sdc1].flush_io_errs    0
     [/dev/sdc1].corruption_errs  0
     [/dev/sdc1].generation_errs  0

     Thanks for the quick reply.
     Edit: since the corruption errors match the scrub I did after reseating the SSD, that data might be old; because I wasn't at home, the pool was running degraded for about 4-5 hours.
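     A hedged follow-up sketch: zero the counters after fixing the cabling so any new errors are unambiguous, then scrub again:

         btrfs dev stats -z /mnt/cache    # print the counters, then reset them
         btrfs scrub start -B /mnt/cache  # foreground scrub of the pool
         btrfs dev stats /mnt/cache       # anything non-zero now is a new error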
  10. I don't know exactly what happened, but it appears that either my main cache SSD is straight up broken or the motherboard has some sort of issue. I started getting emails with the following text:

      fstrim: /mnt/cache: FITRIM ioctl failed: Input/output error

      After restarting the server, the drive was not detected anymore. So I popped the SSD into an external enclosure, plugged it into another PC to see if the drive was indeed bad, and it got detected. Back in the server it got detected by the BIOS, and after booting unRAID back up everything seemed fine at first. Then I noticed that my Emby Docker was acting up; a short search showed that the given error is drive-related, and a look at the unRAID log revealed the error mentioned in the title. So, what now? (Basic checks are sketched below.) iduna-diagnostics-20171028-2118.zip
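      A hedged sketch of checks to separate a dying SSD from a bad link or cable; /dev/sdb follows the dev-stats post above:

          fstrim -v /mnt/cache          # retrigger the failing FITRIM ioctl
          dmesg | grep -iE 'ata|sdb'    # look for link resets and I/O errors
          smartctl -a /dev/sdb          # CRC errors point at cabling;
                                        # reallocated sectors point at the drive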
  11. The external storage plugin keeps unmounting as well, but at least it reconnects automatically, so it's usable. Semi-solved, but I'm still open to other ideas.
  12. Had it installed but hadn't looked at it until now. It seems somewhat slow, but fast enough for internet usage, so I guess I'll look into that and see if it also has a habit of random dismounts. Thanks for pointing it out.
  13. So, I somehow successfully installed Nextcloud on an Ubuntu Server VM (I know, I'm lazy, don't judge me) and got it mostly working the way I want it to. The way it's set up, I have a share named Cloud which contains the actual data from Nextcloud itself, but since I want to access other shares like Media from within Nextcloud, I mount those into the Cloud share in order to maintain some rights management and to save on disk space. The mount for Cloud works perfectly fine, and the nested mounts also work as intended for a while, but the nested folders keep unmounting randomly, even while being used, e.g. during a rescan by Nextcloud. It doesn't matter whether I let the VM mount them via NFS or let unRAID bind-mount them locally and share them through nohide; the nested folders keep unmounting without any apparent reason or even an entry in the log. I can remount them simply by re-running the mount command. Any ideas would be appreciated (a sketch of what I'm trying is below).

      Log pointers (ignore the network bond messages; there is only one cable plugged in because of a performance problem I still need to investigate):
      2017-06-04 20:31 last attempt to mount all folders via NFS
      2017-06-05 17:28 testing local mounts
      2017-06-05 17:31 restarting the VM
      2017-06-05 18:18 manual local remount
      iduna-syslog-20170605-1829.zip
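      A hedged sketch of the NFS-from-the-VM variant as I understand it from the post; the server name, export, and target paths are illustrative, not taken from the syslog:

          # /etc/fstab on the VM, one line per nested share. hard,bg makes
          # the client retry in the background instead of silently dropping.
          #   tower:/mnt/user/Media  /srv/cloud/data/Media  nfs  hard,bg,noatime  0  0
          sudo mount -a        # apply the fstab entries
          mount -t nfs         # verify what is currently mounted
          # Blunt fallback while debugging: remount anything that dropped,
          # every 5 minutes via cron:
          #   */5 * * * * mount -a -t nfs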