Debaser

Everything posted by Debaser

  1. ooh i see, i think there was a new version released 2 days ago (0.4.4); when i installed the mod pack via curse a few days ago it was 0.4.3. I see the container is using 0.4.4, so upgrading the client now to test. thanks for the info edit: yep that was it, all good now, thanks!
  2. Hey, thanks for the container, makes it a lot easier than uploading mods to mineos. Running into an issue with AllTheMods7 container. It's up and running, I can see it, I can see my connection attempt in the logs, but I get an error `mismatched mod channel list` in the client when connecting. This seems to be a mismatch of mods or mod versions on the client vs the server. I don't play minecraft, just hosting a server for friends so I don't really know how to troubleshoot the mods. I downloaded the package and launched from the CurseForge app. Any easy way to compare what mods are mismatched? Thanks
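     (For anyone else comparing the client and server mod lists to chase down a `mismatched mod channel list` error, a rough sketch; the container name and mods path are assumptions for this kind of setup:)

        # list the server-side mods from inside the container
        docker exec AllTheMods7 ls /data/mods | sort > server-mods.txt
        # copy a listing of the client's CurseForge instance mods folder over as client-mods.txt, then:
        diff client-mods.txt server-mods.txt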
  3. hey all, checking in here, system has been stable for about 2 weeks now. it seems the culprit may have been a pi-hole docker container i was running that was already having issues on its own. ever since removing that container, it seems to have been running stable. going to keep monitoring for a while, but hopeful that may have been the solution. appreciate the help!
  4. So, had a few good days with zero issues, then around 1am today (Tuesday) it looks like the server crashed and went unresponsive again, and I had to power cycle through the iDRAC interface. Here's the syslog from the past 2 days, plus another kernel panic on the console output. Any ideas? SyslogCatchAll-2020-06-29.txt SyslogCatchAll-2020-06-30.txt
  5. Thanks. I've deleted the docker image and re-installed my containers from community applications. I stepped away during the install of the containers. It looks like it finished, as all my containers are there, but sometime after that the system had a kernel panic and became unresponsive. I had to do a power cycle via the idrac. I've attached a screenshot of the unresponsive console with the kernel panic, as well as the syslog output from today. After power cycling, unraid is back up and everything seems to be running ok for now. I'll continue to monitor to see what happens SyslogCatchAll-2020-06-25.txt
  6. Got it, thanks. diagnostics attached. stevenet-diagnostics-20200625-1059.zip
  7. Hey guys, update here. I've been running the syslog server since last night. Everything as far as I can tell is still running smooth other than Grafana showing my Cache read/write at a constant 0. I have noticed this usually happens before I notice any major issues. Most of my docker containers are running in cache, so there should be some constant read/write activity I believe. I've attached my syslog output. There are plenty of critical errors such as the following:

        2020-06-24 21:22:55 Kernel.Info 192.168.1.204 Jun 24 21:22:53 stevenet kernel: BTRFS info (device loop2): no csum found for inode 4004 start 0
        2020-06-24 21:22:55 Kernel.Critical 192.168.1.204 Jun 24 21:22:53 stevenet kernel: BTRFS critical (device loop2): corrupt leaf: root=7 block=12937510912 slot=43, bad key order, prev (18446744073709551606 128 12001943552) current (18446744073709551606 16 72057081092011947)
        2020-06-24 21:22:55 Kernel.Critical 192.168.1.204 Jun 24 21:22:53 stevenet kernel: BTRFS critical (device loop2): corrupt leaf: root=7 block=12937510912 slot=43, bad key order, prev (18446744073709551606 128 12001943552) current (18446744073709551606 16 72057081092011947)
        2020-06-24 21:22:55 Kernel.Info 192.168.1.204 Jun 24 21:22:53 stevenet kernel: BTRFS info (device loop2): no csum found for inode 4004 start 4096

     After some googling, it appears loop2 would be my docker.img? Could this be the cause of my system crashing on me? What would my troubleshooting steps be here, delete my docker image and reinstall my containers? Or am I going down the wrong path here? SyslogCatchAll-2020-06-24.txt SyslogCatchAll-2020-06-25.txt
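     (A quick way to confirm what loop2 is actually backing and to check the docker image's btrfs error counters; on Unraid the docker.img is normally loop-mounted at /var/lib/docker, but treat the paths as assumptions:)

        # show which file backs /dev/loop2 (usually docker.img on Unraid)
        losetup -a | grep loop2

        # per-device btrfs error counters for the docker image's filesystem
        btrfs device stats /var/lib/docker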
  8. Thanks guys for the responses. I've gone and set up a remote syslog server, as well as tailing the syslog in my idrac console. I'll post back with any information on the next crash. @jonp - this issue did just start happening in the past few weeks, after more than a year running perfectly stable. I did notice the cache ssd temperature warnings and realized that unraid has some generic temperature warning levels. I found my SSDs should warn around 60c and be critical around 75c, and I have adjusted those levels accordingly, so heat does not seem to be the issue. I haven't made any hardware changes recently, and according to grafana and the unraid dashboard, my resources don't seem to be strapped. i've been transcoding a large video library from x264 to x265 using tdarr since January, which may be putting some strain on the CPUs. I've dropped it from 3 workers to 1 to lessen the strain, but it does not seem like anything has changed.
  9. so since my last post about a week ago, my server has needed a hard reboot 2-3 times. it's currently in a state where some containers and services are working, but some are not. not sure what is going on. i'm pretty sure if i try a graceful shutdown at this point it will just hang and require another hard reboot. pages in unraid such as Docker and Dashboard are unresponsive or do not load, while others such as Main and Shares seem to be loading fine. anyone have any advice on how to troubleshoot this?
  10. gotcha, thanks for the info. so you're saying the SSD trim isn't likely what is causing these issues with unraid? it's been stable for the last 24 hours or so, but has been pretty unstable for the past week
  11. Thanks for the info. I'll have to take a look when I get home. I thought I read that on the R720XD the onboard SATA is disabled. how important is it to run trim on the SSDs? would I be better off disabling the trim plugin if I can't use the onboard SATA?
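     (A couple of quick checks for whether TRIM is even being passed through the controller, assuming the cache pool is mounted at /mnt/cache:)

        # zeros in the DISC-GRAN / DISC-MAX columns mean discard isn't supported on that path
        lsblk --discard

        # try a one-off manual trim and see whether it errors out
        fstrim -v /mnt/cache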
  12. sorry, diagnostics attached stevenet-diagnostics-20200614-1835.zip
  13. Hey all, been having some strange issues lately that I'm not even sure how to properly describe. Over the past 2 weeks or so, I've been experiencing issues like my array becoming unresponsive, the webui not loading all elements, the docker service becoming stuck, and being unable to reboot gracefully. There have been about 3-4 times in this period where I've had to power cycle the server via idrac, with no graceful shutdown. I can use the idrac console or ssh into unraid to run a reboot command, but it just says the server is going down for a reboot and sits there until I power cycle. I believe the issue is with my cache drives. I have a 2x800gb ssd cache pool. I have been seeing that they have been running hot often, but usually coming back down in temp shortly after I get the temperature notifications. In my /root directory, I see a file called dead.letter. Here are the contents of that file:

        Event: Unraid Parity check
        Subject: Notice [STEVENET] - Parity check started
        Description: Size: 8 TB
        Importance: warning
        fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
        fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
        fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
        fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error

     When I console in through idrac, I see some messages like this (attached). My SMART checks have all passed when I last ran them, which was a day or two ago, after I started experiencing these issues. Is this a sign of my cache drive(s) failing, or possibly something else? Not sure which steps to take next. Thanks
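     (Some checks that go a bit beyond the basic SMART pass/fail; the device names are examples, substitute the actual cache pool members:)

        # full SMART attributes for each cache SSD
        smartctl -a /dev/sdb
        smartctl -a /dev/sdc

        # btrfs keeps its own per-device error counters for the pool
        btrfs device stats /mnt/cache

        # scrub the pool to verify checksums (runs in the background)
        btrfs scrub start /mnt/cache
        btrfs scrub status /mnt/cache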
  14. hmm ok so i'm stupid, the drive just isn't mounted apparently. looking at `df -h` i don't see it listed anywhere, but when i run `fdisk -l` i see it. i was thinking the disk and partitions would automatically be mounted. should i be mounting this manually with `/etc/fstab`?
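     (For reference, a minimal sketch of mounting it by hand and then making it persistent; the device name, mount point and filesystem type are placeholders:)

        # mount it manually first to confirm the partition is usable
        mkdir -p /mnt/data
        mount /dev/vdb1 /mnt/data

        # then make it persistent via fstab, using the UUID rather than the device name
        blkid /dev/vdb1
        echo 'UUID=<uuid-from-blkid>  /mnt/data  ext4  defaults  0  2' >> /etc/fstab
        mount -a    # sanity check that the new entry parses and mounts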
  15. Hey there, trying to figure out how to pre-allocate the full disk size on my Ubuntu 18.04 server VM. I'm running unifi nvr and there's a setting to keep at least X space free, and the lowest setting is 10gb. my VM only has ~500mb free, seemingly because the vdisk is an expanding (sparse) image. Trying to figure out the best way to go about this. I suppose I could mount one of my unraid shares, but i haven't been able to get that to work either. any ideas?
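     (One rough way to do the pre-allocation from the Unraid host with the VM shut down; the share path, vdisk name and qcow2 format are assumptions:)

        cd /mnt/user/domains/ubuntu-nvr
        qemu-img info vdisk1.img    # check the format and virtual vs. allocated size

        # rewrite the sparse qcow2 as a fully allocated copy, then swap it in for the original
        qemu-img convert -p -O qcow2 -o preallocation=falloc vdisk1.img vdisk1-full.img

        # if the guest itself is nearly full, the virtual size needs growing too,
        # e.g. qemu-img resize vdisk1.img +100G, then grow the partition/filesystem inside the VM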
  16. if anyone else stumbles across this with the same issue, it's because of the network type on the docker container. at least it was in my case. i had it set to custom with its own IP. needed to set it to Host to allow inbound connections. here's the reddit thread where someone helped me understand this: here's my config for the reverse proxy if interested

        server {
            listen 443 ssl;
            server_name nvr.redacted.xyz;

            root /config/www;
            index index.html index.htm index.php;

            ###SSL Certificates
            ssl_certificate /config/keys/letsencrypt/fullchain.pem;
            ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

            ###Diffie–Hellman key exchange ###
            ssl_dhparam /config/nginx/dhparams.pem;

            ###SSL Ciphers
            ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:$

            ###Extra Settings###
            ssl_prefer_server_ciphers on;
            ssl_session_cache shared:SSL:10m;

            ### Add HTTP Strict Transport Security ###
            add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
            add_header Front-End-Https on;

            client_max_body_size 0;
            client_body_buffer_size 400M;

            location / {
                proxy_set_header X-Real_IP $remote_addr;
                proxy_pass https://192.168.1.204:7443/;
                proxy_max_temp_file_size 2048m;
                include /config/nginx/proxy.conf;
            }
        }
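     (The same network change expressed as plain docker flags, in case it helps anyone on a non-Unraid setup; the image name is just a placeholder:)

        # host networking lets the container accept inbound connections directly on the host's IP
        docker run -d --name unifi-video --network host some/unifi-video-image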
  17. having issues getting the unifi video container working behind nginx reverse proxy. anyone having any luck? getting a 502 bad gateway. here's my config

        map $http_upgrade $connection_upgrade {
            default upgrade;
            ''      close;
        }

        server {
            listen 80;
            server_name nvr.domain.xyz;
            client_max_body_size 4G;

            location / {
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_pass http://192.168.1.10:7080/;
            }
        }

        server {
            listen 443 ssl http2;
            server_name nvr.domain.xyz;

            ssl_certificate /config/keys/letsencrypt/fullchain.pem;
            ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
            ssl on;

            set $upstream 192.168.1.10:7443;

            location / {
                proxy_pass https://$upstream;
                proxy_redirect https://$upstream https://$server_name;

                proxy_cache off;
                proxy_store off;
                proxy_buffering off;
                proxy_http_version 1.1;
                proxy_read_timeout 36000s;

                proxy_set_header Host $http_host;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection $connection_upgrade;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Referer "";

                client_max_body_size 0;
            }
        }
  18. Spoke too soon, looks like it's throwing a 502 timeout after the upload completes 1.5 hrs later. Where would I be looking to change the timeout interval?
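     (The relevant knobs are nginx's proxy timeout directives in the letsencrypt container's configs; the appdata path is an assumption and 600s is just an example value:)

        # find where the timeouts are currently set
        grep -R "proxy_read_timeout" /mnt/cache/appdata/letsencrypt/nginx/

        # then raise them in proxy.conf / the site conf and restart the container, e.g.:
        #   proxy_connect_timeout 600;
        #   proxy_send_timeout    600;
        #   proxy_read_timeout    600;
        #   send_timeout          600;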
  19. Good call. I ran a grep on my cache drives where my containers are installed:

        grep -R client_max_body_size /mnt/cache/appdata

     Most configs were already set to `client_max_body_size 0;`, but I missed that the letsencrypt nginx proxy.conf was set to `client_max_body_size 10m;`. So I ran a quick command to change it over:

        sed -i 's/client_max_body_size 10m;/client_max_body_size 0;/g' /mnt/cache/appdata/letsencrypt/nginx/proxy.conf

     Seems to be uploading now without a problem. Thanks for the shove in the right direction
  20. Hey guys, trying to upload large files to my nextcloud, but getting an error 413 Request Entity Too Large. I'm running this behind my letsencrypt docker on a reverse proxy. I've tried editing the site config for both the letsencrypt nginx as well as nextcloud's nginx as follows.

     letsencrypt nginx:

        client_max_body_size 20G;
        client_body_buffer_size 400M;

     nextcloud nginx:

        # set max upload size
        client_max_body_size 20G;
        fastcgi_buffers 64 4K;

     Restarted both containers from unRAID and still getting this error when I try to sync my large file. The file is 14GB if that makes a difference. Any suggestions? Thanks
  21. Ok so turns out, the weird command not found error stems from the new password I was using. The password was a mix of characters which included a '&', followed by the '9QS5yKwDR' that it is complaining about. Turns out that threw the mount command for a loop, as the & didn't get escaped, causing the shell to treat it as two separate commands. Changed the password again, this time with no &, but having trouble loading the shares on my Synology. Giving everything a quick reboot then testing again. EDIT: @Squid yeah i was typing up my reply when you posted this. Changed the password and testing again EDIT2: Looks like all is good. I'm just an idiot
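     (For anyone curious why an unescaped & does this: the shell treats everything after the & as a separate command, so the mount is backgrounded with a truncated password and the rest of the password gets executed as its own command. The password prefix 'abc' below is made up; the rest matches the failing mount command:)

        # unquoted: '&' splits the line, so 'password=abc' is used and '9QS5yKwDR' runs as a command
        mount -t cifs -o username=sstoveld,password=abc&9QS5yKwDR '//NAS/MOVIES' '/mnt/disks/NAS_MOVIES'
        # -> sh: 9QS5yKwDR: command not found

        # quoting the whole option string keeps the & as part of the password
        mount -t cifs -o 'username=sstoveld,password=abc&9QS5yKwDR' '//NAS/MOVIES' '/mnt/disks/NAS_MOVIES'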
  22. Hey guys, having an issue mounting my Synology NAS via SMB with the Unassigned Devices plugin. This has previously worked without problems. After a clean reboot, I suddenly cannot mount the share anymore. I recently changed my password on my Synology and re-created the SMB share in UnRAID, yet I'm getting this strange unknown command error in the log. Here's the snip from the log:

        Mar 22 10:25:22 stevenet unassigned.devices: Mount SMB share '//NAS/MOVIES' using SMB1 protocol.
        Mar 22 10:25:22 stevenet unassigned.devices: Mount SMB/NFS command: mount -t cifs -o rw,nounix,iocharset=utf8,_netdev,file_mode=0777,dir_mode=0777,,vers=1.0,username=sstoveld,password=******* '//NAS/MOVIES' '/mnt/disks/NAS_MOVIES'
        Mar 22 10:25:22 stevenet unassigned.devices: Mount of '//NAS/MOVIES' failed. Error message: sh: 9QS5yKwDR: command not found
        Mar 22 10:26:50 stevenet unassigned.devices: Mount SMB share '//NAS/MOVIES' using SMB3 protocol.
        Mar 22 10:26:50 stevenet unassigned.devices: SMB3 mount failed: sh: 9QS5yKwDR: command not found .
        Mar 22 10:26:50 stevenet unassigned.devices: Mount SMB share '//NAS/MOVIES' using SMB2 protocol.
        Mar 22 10:26:50 stevenet unassigned.devices: SMB2 mount failed: sh: 9QS5yKwDR: command not found .
        Mar 22 10:26:50 stevenet unassigned.devices: Mount SMB share '//NAS/MOVIES' using SMB1 protocol.
        Mar 22 10:26:50 stevenet unassigned.devices: Mount SMB/NFS command: mount -t cifs -o rw,nounix,iocharset=utf8,_netdev,file_mode=0777,dir_mode=0777,,vers=1.0,username=sstoveld,password=******* '//NAS/MOVIES' '/mnt/disks/NAS_MOVIES'
        Mar 22 10:26:50 stevenet unassigned.devices: Mount of '//NAS/MOVIES' failed. Error message: sh: 9QS5yKwDR: command not found

     Any ideas?
  23. Still battling with this, anyone have any ideas? Half the time it's fine, the other half it's unusable remotely without a VPN
  24. Still trying to get this sorted out. It seems that it might be slow DNS resolution for the docker containers. Anyone know any settings I can tweak to maybe help speed it up? Been digging into this for a while now with no success
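     (One way to see whether it's actually the DNS lookup that's slow versus the proxy/backend itself; the hostname and resolver IP are placeholders:)

        # break the request time down into lookup vs connect vs first byte
        curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n' https://nvr.example.xyz/

        # and time the raw query against whatever resolver the LAN clients are using
        dig nvr.example.xyz @192.168.1.1 | grep "Query time"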
  25. So I made sure everything is running off my cache drive pool, which consists of 2x800gb SSDs, so everything should be nice and snappy: both the docker containers as well as the download paths. Still getting poor performance from the webui for all docker containers when accessed via domain name rather than IP. Any ideas? Thanks!