furian
Posts posted by furian
-
For a good while I had fail2ban working in a separate Docker container, with Nginx Proxy Manager and Authelia in separate containers as well.
Now it has suddenly stopped working (it worked like a charm, so I stopped checking the logs; stupid of me).
In the fail2ban logs I can see that it is actually banning the IP, and inside the fail2ban container I can also see this happening with iptables -nL.
However, it is no longer pushing the chain through to the Unraid host, so it is not blocking the actual incoming connections.
Here is my Authelia jail.local:
[authelia-auth]
enabled = true
logpath = %(remote_logs_path)s/authelia/authelia.log
chain = DOCKER-USER
action = iptables-multiport[name=HTTP, port="http,https,9091,4443,18443,8181,7818,8080,1880", protocol=tcp]
ignoreip = 127.0.0.1/8 ::1
172.18.0.0/16
192.168.0.0/24
bantime = -1
findtime = 24h
maxretry = 1
The Docker container is running with network type Host, btw.
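For reference: in fail2ban, the per-jail `chain` option is substituted into the `<chain>` tag of the action, so with `chain = DOCKER-USER` the `iptables-multiport` action should insert its jump rule into DOCKER-USER instead of INPUT. Below is a minimal sketch of the rule it is expected to create on jail start; the helper function name and the shortened port list are illustrative assumptions, not part of fail2ban.

```shell
# Sketch (assumption): what fail2ban's iptables-multiport action should
# insert on jail start when the jail sets chain=DOCKER-USER.
# build_jump_rule and the shortened port list are illustrative only.
build_jump_rule() {
    chain="$1"; name="$2"; ports="$3"
    echo "iptables -I $chain -p tcp -m multiport --dports $ports -j f2b-$name"
}

# With network type Host, this rule has to land in the host's ruleset;
# verify on the host with: iptables -nL DOCKER-USER
build_jump_rule DOCKER-USER HTTP "http,https,9091"
```

If the host's DOCKER-USER chain only ever shows a RETURN rule, the jump above was never created there, which matches the symptom described.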
The below output is from inside the docker container:
fail2ban-client status
Status
|- Number of jail: 13
`- Jail list: authelia-auth, nextcloud-auth, nginx-418, nginx-bad-request, nginx-badbots, nginx-botsearch, nginx-deny, nginx-http-auth, nginx-limit-req, nginx-unauthorized, sabnzbd-auth, sonarr-auth, vaultwarden-auth
Status for the jail: authelia-auth
|- Filter
| |- Currently failed: 0
| |- Total failed: 3
| `- File list: /remotelogs/authelia/authelia.log
`- Actions
|- Currently banned: 645
|- Total banned: 645
iptables -L
# Warning: iptables-legacy tables present, use iptables-legacy to see them
Chain INPUT (policy ACCEPT)
target prot opt source destination
f2b-HTTP tcp -- anywhere anywhere multiport dports http,https,9091,4443,18443,8181,7818,http-alt,1880
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain f2b-HTTP (1 references)
target prot opt source destination
REJECT all -- 45.128.232.213 anywhere reject-with icmp-port-unreachable
REJECT all -- love.zonogicism.nl anywhere reject-with icmp-port-unreachable
REJECT all -- 95.214.55.115 anywhere reject-with icmp-port-unreachable
REJECT all -- 95.214.27.9 anywhere reject-with icmp-port-unreachable
REJECT all -- ecs-94-74-90-173.compute.hwclouds-dns.com anywhere reject-with icmp-port-unreachable
REJECT all -- ecs-94-74-88-143.compute.hwclouds-dns.com anywhere reject-with icmp-port-unreachable
REJECT all -- ecs-94-74-74-175.compute.hwclouds-dns.com anywhere reject-with icmp-port-unreachable
REJECT all -- ecs-94-74-120-130.compute.hwclouds-dns.com anywhere reject-with icmp-port-unreachable
REJECT all -- 94.232.43.74 anywhere reject-with icmp-port-unreachable
REJECT all -- 94.156.69.209 anywhere reject-with icmp-port-unreachable
REJECT all -- 94.156.66.33 anywhere reject-with icmp-port-unreachable
REJECT all -- cloud.census.shodan.io anywhere reject-with icmp-port-unreachable
REJECT all -- 232.190.205.92.host.secureserver.net anywhere reject-with icmp-port-unreachable
REJECT all -- 91.92.255.83 anywhere reject-with icmp-port-unreachable
REJECT all -- 91.92.253.56 anywhere reject-with icmp-port-unreachable
REJECT all -- 91.92.251.33 anywhere reject-with icmp-port-unreachable
REJECT all -- 91.92.250.119 anywhere reject-with icmp-port-unreachable
REJECT all -- 91.92.246.41 anywhere reject-with icmp-port-unreachable
REJECT all -- 91.92.246.219 anywhere reject-with icmp-port-unreachable
And in the log from fail2ban itself:
Quote2024-04-05 15:47:00,311 14BA6F7ECB38 INFO [nginx-bad-request] Found 45.128.232.213 - 2024-04-05 15:46:59
2024-04-05 15:47:00,808 14BA6F5E9B38 NOTIC [nginx-bad-request] Ban 45.128.232.213
2024-04-05 15:47:04,313 14BA6F7ECB38 INFO [nginx-bad-request] Found 45.128.232.213 - 2024-04-05 15:47:03
2024-04-05 15:47:04,819 14BA6F5E9B38 NOTIC [nginx-bad-request] 45.128.232.213 already banned
2024-04-05 15:47:15,518 14BA6F7ECB38 INFO [nginx-bad-request] Found 45.125.66.34 - 2024-04-05 15:47:15
2024-04-05 15:47:16,030 14BA6F5E9B38 WARNI [nginx-bad-request] 45.125.66.34 already banned
2024-04-05 15:47:31,522 14BA6F7ECB38 INFO [nginx-bad-request] Found 45.128.232.213 - 2024-04-05 15:47:30
2024-04-05 15:47:32,042 14BA6F5E9B38 NOTIC [nginx-bad-request] 45.128.232.213 already banned
2024-04-05 17:42:38,197 14BA6F7ECB38 INFO [nginx-bad-request] Found 80.75.212.75 - 2024-04-05 17:42:37
2024-04-05 17:42:38,505 14BA6F5E9B38 WARNI [nginx-bad-request] 80.75.212.75 already banned
2024-04-05 17:56:31,035 14BA7092CB38 INFO [authelia-auth] Found 31.132.200.11 - 2024-04-05 17:56:31
2024-04-05 17:56:31,296 14BA70725B38 WARNI [authelia-auth] 31.132.200.11 already banned
on the unraid host:
iptables -nL
Chain INPUT (policy ACCEPT)
target prot opt source destination
ts-input 0 -- 0.0.0.0/0 0.0.0.0/0
LIBVIRT_INP 0 -- 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER 0 -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-1 0 -- 0.0.0.0/0 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER 0 -- 0.0.0.0/0 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0
ts-forward 0 -- 0.0.0.0/0 0.0.0.0/0
LIBVIRT_FWX 0 -- 0.0.0.0/0 0.0.0.0/0
LIBVIRT_FWI 0 -- 0.0.0.0/0 0.0.0.0/0
LIBVIRT_FWO 0 -- 0.0.0.0/0 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER 0 -- 0.0.0.0/0 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0
WIREGUARD 0 -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
LIBVIRT_OUT 0 -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT 6 -- 0.0.0.0/0 172.18.0.2 tcp dpt:3306
ACCEPT 6 -- 0.0.0.0/0 172.17.0.3 tcp dpt:27017
ACCEPT 6 -- 0.0.0.0/0 172.18.0.3 tcp dpt:80
ACCEPT 6 -- 0.0.0.0/0 172.18.0.4 tcp dpt:6379
ACCEPT 6 -- 0.0.0.0/0 172.17.0.4 tcp dpt:8181
ACCEPT 6 -- 0.0.0.0/0 172.17.0.4 tcp dpt:8080
ACCEPT 6 -- 0.0.0.0/0 172.17.0.4 tcp dpt:4443
ACCEPT 6 -- 0.0.0.0/0 172.18.0.5 tcp dpt:9091
ACCEPT 6 -- 0.0.0.0/0 172.18.0.6 tcp dpt:8080
ACCEPT 6 -- 0.0.0.0/0 172.18.0.7 tcp dpt:9696
ACCEPT 6 -- 0.0.0.0/0 172.18.0.8 tcp dpt:7878
ACCEPT 6 -- 0.0.0.0/0 172.18.0.9 tcp dpt:8090
ACCEPT 6 -- 0.0.0.0/0 172.18.0.9 tcp dpt:8080
ACCEPT 6 -- 0.0.0.0/0 172.18.0.10 tcp dpt:9897
ACCEPT 6 -- 0.0.0.0/0 172.18.0.10 tcp dpt:8989
ACCEPT 6 -- 0.0.0.0/0 172.18.0.11 tcp dpt:8181
ACCEPT 6 -- 0.0.0.0/0 172.18.0.12 tcp dpt:6767
ACCEPT 6 -- 0.0.0.0/0 172.18.0.13 tcp dpt:8266
ACCEPT 6 -- 0.0.0.0/0 172.18.0.13 tcp dpt:8265
ACCEPT 6 -- 0.0.0.0/0 172.18.0.13 tcp dpt:8264
ACCEPT 6 -- 0.0.0.0/0 172.18.0.14 tcp dpt:9897
ACCEPT 6 -- 0.0.0.0/0 172.18.0.14 tcp dpt:8989
ACCEPT 6 -- 0.0.0.0/0 172.18.0.15 tcp dpt:8500
ACCEPT 6 -- 0.0.0.0/0 172.18.0.16 tcp dpt:9696
ACCEPT 6 -- 0.0.0.0/0 172.18.0.17 tcp dpt:8191
ACCEPT 6 -- 0.0.0.0/0 172.18.0.18 tcp dpt:5055
ACCEPT 6 -- 0.0.0.0/0 172.17.0.5 tcp dpt:5454
ACCEPT 6 -- 0.0.0.0/0 172.18.0.19 tcp dpt:8189
ACCEPT 6 -- 0.0.0.0/0 172.18.0.19 tcp dpt:8182
ACCEPT 6 -- 0.0.0.0/0 172.18.0.19 tcp dpt:8118
ACCEPT 6 -- 0.0.0.0/0 172.18.0.19 tcp dpt:6881
ACCEPT 17 -- 0.0.0.0/0 172.18.0.19 udp dpt:6881
ACCEPT 6 -- 0.0.0.0/0 172.18.0.19 tcp dpt:2831
ACCEPT 6 -- 0.0.0.0/0 172.18.0.19 tcp dpt:1080
ACCEPT 6 -- 0.0.0.0/0 172.18.0.20 tcp dpt:8090
ACCEPT 6 -- 0.0.0.0/0 172.18.0.20 tcp dpt:8080
ACCEPT 17 -- 0.0.0.0/0 172.17.0.2 udp dpt:10001
ACCEPT 6 -- 0.0.0.0/0 172.17.0.2 tcp dpt:8880
ACCEPT 6 -- 0.0.0.0/0 172.17.0.2 tcp dpt:8843
ACCEPT 6 -- 0.0.0.0/0 172.17.0.2 tcp dpt:8443
ACCEPT 6 -- 0.0.0.0/0 172.17.0.2 tcp dpt:8080
ACCEPT 6 -- 0.0.0.0/0 172.17.0.2 tcp dpt:6789
ACCEPT 17 -- 0.0.0.0/0 172.17.0.2 udp dpt:5514
ACCEPT 17 -- 0.0.0.0/0 172.17.0.2 udp dpt:3478
ACCEPT 17 -- 0.0.0.0/0 172.17.0.2 udp dpt:1900
ACCEPT 6 -- 0.0.0.0/0 172.18.0.21 tcp dpt:7878
ACCEPT 6 -- 0.0.0.0/0 172.18.0.22 tcp dpt:9897
ACCEPT 6 -- 0.0.0.0/0 172.18.0.22 tcp dpt:8989
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 0 -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-2 0 -- 0.0.0.0/0 0.0.0.0/0
RETURN 0 -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP 0 -- 0.0.0.0/0 0.0.0.0/0
DROP 0 -- 0.0.0.0/0 0.0.0.0/0
RETURN 0 -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN 0 -- 0.0.0.0/0 0.0.0.0/0
Chain LIBVIRT_FWI (1 references)
target prot opt source destination
ACCEPT 0 -- 0.0.0.0/0 192.168.122.0/24 ctstate RELATED,ESTABLISHED
REJECT 0 -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
Chain LIBVIRT_FWO (1 references)
target prot opt source destination
ACCEPT 0 -- 192.168.122.0/24 0.0.0.0/0
REJECT 0 -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
Chain LIBVIRT_FWX (1 references)
target prot opt source destination
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0
Chain LIBVIRT_INP (1 references)
target prot opt source destination
ACCEPT 17 -- 0.0.0.0/0 0.0.0.0/0 udp dpt:53
ACCEPT 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
ACCEPT 17 -- 0.0.0.0/0 0.0.0.0/0 udp dpt:67
ACCEPT 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:67
Chain LIBVIRT_OUT (1 references)
target prot opt source destination
ACCEPT 17 -- 0.0.0.0/0 0.0.0.0/0 udp dpt:53
ACCEPT 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
ACCEPT 17 -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68
ACCEPT 6 -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:68
Chain WIREGUARD (1 references)
target prot opt source destination
Chain ts-forward (1 references)
target prot opt source destination
MARK 0 -- 0.0.0.0/0 0.0.0.0/0 MARK xset 0x40000/0xff0000
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0 mark match 0x40000/0xff0000
DROP 0 -- 100.64.0.0/10 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0
Chain ts-input (1 references)
target prot opt source destination
ACCEPT 0 -- 100.85.90.34 0.0.0.0/0
RETURN 0 -- 100.115.92.0/23 0.0.0.0/0
DROP 0 -- 100.64.0.0/10 0.0.0.0/0
ACCEPT 0 -- 0.0.0.0/0 0.0.0.0/0
ACCEPT 17 -- 0.0.0.0/0 0.0.0.0/0 udp dpt:54283
No matter what I try, I'm not able to get it running again.
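One thing the container output hints at is the "iptables-legacy tables present" warning: the bans may be written into a table (or network namespace) the host never consults. A quick host-side check is whether the f2b jump exists in DOCKER-USER at all, for example by scanning an `iptables-save` dump. A small sketch follows; the helper name and the sample rule lines are mine, not from any tool.

```shell
# Sketch: check a dumped ruleset (iptables-save output) for a fail2ban
# jump inside DOCKER-USER. has_f2b_jump and the sample line are
# illustrative assumptions.
has_f2b_jump() {
    # reads an iptables-save style dump on stdin
    grep -q -- '-A DOCKER-USER .* -j f2b-'
}

# On the real host you would pipe in: iptables-save | has_f2b_jump
sample='-A DOCKER-USER -j RETURN'
if printf '%s\n' "$sample" | has_f2b_jump; then
    echo "jump present"
else
    echo "jump missing"
fi
```

If the jump is missing on the host while the container shows it, fail2ban's netfilter changes are not reaching the host's ruleset, which would explain banned IPs still getting through.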
-
Hi,
I want to see how my cache drive is doing, but I have no idea what the wear level of this drive is.
Looking on Google, Intel says you should look at the E9 media wearout indicator; that is attribute 233 in Unraid.
However, the attached picture shows what I'm seeing, and I have no idea how to convert this into reliable information.
Thanks in advance.
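For what it's worth, on Intel SSDs the media wearout indicator (attribute E9/233) is a normalized value that starts at 100 on a new drive and counts down toward 1 as the NAND wears, so a rough "percent of rated endurance used" is 100 minus the normalized value. A tiny sketch of that conversion; the function name and the smartctl one-liner in the comment are my own illustration, not Unraid's.

```shell
# Sketch: convert the normalized value of SMART attribute 233
# (Intel media wearout indicator; starts at 100, counts down)
# into an approximate percentage of rated endurance used.
# wear_used_pct is an illustrative helper name.
wear_used_pct() {
    normalized="$1"
    echo $((100 - normalized))
}

# On the host you could read the normalized value with something like:
#   smartctl -A /dev/sdX | awk '$1 == 233 {print $4}'
wear_used_pct 97
```

So a normalized value of 97 would mean only about 3% of the rated endurance has been consumed.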
-
I just can't get this to work. My fail2ban log shows that my work IP is blocked, but the page still loads and I can still enter my username and password and keep trying and refreshing.
I'm just at a loss here...
I've tried everything I could think of. The only thing I just changed is privileged mode, and I'm going to try that now.
Edit: privileged mode did not change anything. I can still keep pressing F5 and trying new passwords, etc. I also added my iptables -nvL output from the host. As you can see, the REJECT rules are present, but something is preventing the connections from being blocked.
authelia.txt fail2banlog.txt filter authelia-auth.local.txt jail authelia-auth.local.txt
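One way to tell whether those REJECT rules are ever actually matched is the packet counters in `iptables -nvL`: if the pkts column for the banned IP stays at 0 while the page keeps loading, the traffic is reaching the service via a path those rules never see (for example, forwarded through a Docker bridge rather than INPUT). A small parsing sketch; the sample rule line and the helper name are mine.

```shell
# Sketch: pull the packet counter (first column) out of an
# `iptables -nvL` rule line to see if a ban rule ever matches.
# rule_pkts and the sample line are illustrative assumptions.
rule_pkts() {
    # $1 = one rule line from iptables -nvL
    echo "$1" | awk '{print $1}'
}

sample=' 0     0 REJECT     all  --  *      *       45.128.232.213       0.0.0.0/0            reject-with icmp-port-unreachable'
rule_pkts "$sample"
```

A counter of 0 on a rule for an IP that is visibly still connecting is strong evidence the rule sits in the wrong chain or the wrong table for that traffic.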
-
I just noticed this while trying to get fail2ban working (damn hard, btw; I could really use some help getting fail2ban to work with Authelia and nginx, all in Docker).
There is a red dot showing the error: incorrect autostart order.
I don't know where I can check what the order should be, though...
This is my list as it is right now:
mariadb
redis
fail2ban
adguard-home
nginx
authelia
unifi-controller
adguard-sync
nextcloud
*arrs and downloaders
-
On 9/16/2023 at 12:20 AM, blender50 said:
I took the above script as a starting point and added a few things that made it more useful to me. Disclaimer: I'm no Bash god, and I'm sure it can be improved upon, but for me it does the trick. There are some usage remarks in the comments at the top of the script.
#!/bin/bash
# It takes a number of seconds as an argument. That controls how often the script updates.
#
# To run the script save it to /tmp and set it to execute (chmod +x scriptname)
# then do
# "/tmp/syncscript xx" where xx is the refresh time in seconds (without the quotes ;-)
#
# You can leave this script running while pausing and resuming the parity check
# from the GUI Main page without issue. :-)
#
# To stop the script Press [CTRL+C]
#
# This script will display like this:
#
# RUNNING
# Press [CTRL+C] to stop.
# Array Status: STARTED
# Parity Check: Running
# Speed: 119.9 MB/sec
# Progress: 1 GB of 12000 GB
# #
# (0.0%)
# Completing in: 27h 48m
# On approximately: Sat Sep 16 18:38:31 PDT 2023
# Total Errors: 0
# Refreshing in 0
#
# PAUSED
# Press [CTRL+C] to stop.
# Array Status: STARTED
# Parity Check: Paused
# Speed: 0 MB/sec
# Progress: 13 GB of 0 GB
# #
# (0%)
# Completing in: N/A
# On approximately: N/A
# Total Errors: 0
# Refreshing in 9
#
# STOPPED
# Press [CTRL+C] to stop.
# Array Status: STARTED
# Parity Check: Not Running
# Speed: 0 MB/sec
# Progress: 0 GB of 0 GB
# #
# (0%)
# Completing in: N/A
# On approximately: N/A
# Total Errors: 0
# Refreshing in 6

# Check to see the argument is provided. Otherwise exit with a helpful message.
if [ -z "$1" ]
then
    echo " To run this script please supply a refresh time in x seconds, like: # /tmp/syncstatus x"
    exit 0
fi

# Define colored text for use in the countdown
RED='\033[0;31m'
NC='\033[0m' # No Color

# Main loop of script
while true
do
    # Set refresh to the argument's value plus 1 (necessary for the countdown)
    refresh=$(($1+1))

    # Clear the screen and display the script at the top of the window (one row down for readability)
    tput clear
    tput cup 1 0

    # Grab various datapoints from the output of the mdcmd status command
    status=$(mdcmd status | sed -n 's/mdState=//p')
    size=$(mdcmd status | sed -n 's/mdResync=//p')
    pos=$(mdcmd status | sed -n 's/mdResyncPos=//p')
    dt=$(mdcmd status | sed -n 's/mdResyncDt=//p')
    db=$(mdcmd status | sed -n 's/mdResyncDb=//p')

    # Calculate sizes and data processed
    gbsize=$(awk "BEGIN{printf \"%.0f\", $size * 1024 / 1000^3}")
    gbpos=$(awk "BEGIN{printf \"%.0f\", $pos * 1024 / 1000^3}")

    # Error checking for stopped, paused, ...
    if [ $size == 0 ]
    then
        progress="0"
    else
        progress=$(awk "BEGIN{printf \"%.1f\", ($pos / $size) * 100}")
    fi

    # Check for 0 speed, indicating stopped or paused
    if [ $dt == 0 ]
    then
        speed="0"
    else
        speed=$(awk "BEGIN{printf \"%.1f\", ($db / $dt) * 1024 / 1000^2}")
    fi

    # If there is a speed value greater than 0, the check is running and we report the time left
    if [ $speed == 0 ]
    then
        finish="N/A"
    else
        # Since it is running we can calculate and display the projected end date
        finish=$(awk "BEGIN{
            m=(($dt*(($size-$pos)/($db/100+1)))/100)/60
            print int(m/60) \"h \" int(m%60) \"m\"
        }")
    fi

    # Check to see if the parity check is paused, stopped or running
    if [[ $gbpos -gt "0" ]] && [[ $gbsize == 0 ]]
    then
        parityCheck="Paused"
    elif [ $size == 0 ]
    then
        parityCheck="Not Running"
    else
        parityCheck="Running"
    fi

    # Begin outputting info
    echo "Press [CTRL+C] to stop."
    echo "Array Status: $status"
    echo "Parity Check: $parityCheck"
    echo "Speed: $speed MB/sec"
    echo "Progress: $gbpos GB of $gbsize GB"

    # Round progress to the nearest whole number for the progress bar display
    progressRounded=$(printf %.0f $progress)

    # Calculate the progress bar and display the percentage completed
    for i in $(eval echo "{0..$progressRounded}")
    do
        echo -n "#"
    done
    echo " (${progress}%)"
    echo "Completing in: $finish"

    # Account for stopped or paused processes
    # (note: the comparison needs spaces around ==; a bare
    # $parityCheck="Running" inside [[ ]] is always true)
    if [[ $parityCheck == "Running" ]]
    then
        # Calculate the end date: extract hours and minutes
        hours=$(echo "$finish" | awk '{print $1}' | sed 's/h//')
        minutes=$(echo "$finish" | awk '{print $2}' | sed 's/m//')
        # Convert to seconds
        if [[ -n "$hours" ]] && [[ -n "$minutes" ]]
        then
            seconds_left=$((hours*3600 + minutes*60))
            endDate=$(date -d "now + $seconds_left seconds")
        else
            endDate="N/A"
        fi
        echo "On approximately: $endDate"
    fi

    # Parse all mdcmd status data, summing all rdevNumErrors values
    input_string=$(mdcmd status)
    # Use grep to filter lines starting with rdevNumErrors
    rdev_errors=$(echo "$input_string" | grep -Eo 'rdevNumErrors\.[0-9]+=[0-9]+' | awk -F= '{sum+=$2} END{print sum}')
    # Print the total value
    echo "Total Errors: $rdev_errors"

    # Countdown display: the count displays in red from 5 to 0 before refresh
    while [ $refresh -gt 0 ]
    do
        refresh=$((refresh-1))
        if [ $refresh -le 5 ]
        then
            printf " Refreshing in ${RED}$refresh${NC} "
            printf "\r"
            sleep 1
        else
            printf " Refreshing in $refresh "
            printf "\r"
            sleep 1
        fi
    done # end of countdown while
done # end of 'whole script' while
I hope others find it useful.
thank you!
My GUI stopped responding after starting the array when I replaced the parity disk (14 TB) with a new 18 TB disk, so I had no idea what the server was doing.
I selected the 14 TB disk as a replacement for an old 4 TB disk. I can't format it right now, since the GUI stopped responding and I had no idea how long it would take.
Now that I see this, I think I will have to wait for it to complete... but at least now I know what is happening!
-
-
So I know there is the SMB bug with Unraid 6.12.2 (RC included),
but I don't know if the below is also part of that error.
This happens when trying to use a remote SMB share through the Unassigned Devices plugin.
(IPs and hostnames removed)
Jul 12 13:12:05 unassigned.devices: Mount SMB share '///Backup' using SMB 1.0 protocol.
Jul 12 13:12:05 unassigned.devices: Mount SMB command: /sbin/mount -t 'cifs' -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,vers=1.0,credentials='/tmp/unassigned.devices/credentials_Backup' '///Backup' '/mnt/remotes/Backup'
Jul 12 13:12:05 kernel: Use of the less secure dialect vers=1.0 is not recommended unless required for access to very old servers
Jul 12 13:12:05 kernel:
Jul 12 13:12:05 kernel: CIFS: VFS: Use of the less secure dialect vers=1.0 is not recommended unless required for access to very old servers
Jul 12 13:12:05 kernel: CIFS: Attempting to mount \\\Backup
Jul 12 13:12:06 kernel: CIFS: VFS: Error connecting to socket. Aborting operation.
Jul 12 13:12:06 kernel: CIFS: VFS: cifs_mount failed w/return code = -111
Jul 12 13:12:06 unassigned.devices: SMB 1.0 mount failed: 'mount error(111): could not connect to IP Unable to find suitable address. '.
Jul 12 13:12:06 unassigned.devices: Share '///Backup' failed to mount.
-
I went back to 6.12.0-rc5 to get it fixed.
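Aside from the Unraid bug, the log above shows two separate things: a fallback to the insecure `vers=1.0` dialect and an outright connection failure ("could not connect to IP"). It may be worth checking whether the share mounts at all with a modern dialect forced. A sketch of building such a mount command; the server path, mountpoint, and credentials file are placeholders (mirroring the redacted `///Backup` in the log), and the helper name is mine.

```shell
# Sketch: build a CIFS mount command that forces SMB 3.1.1 instead of
# falling back to the insecure 1.0 dialect. build_cifs_mount, the UNC
# path, mountpoint and credentials file are illustrative placeholders.
build_cifs_mount() {
    unc="$1"; mountpoint="$2"
    echo "mount -t cifs -o rw,uid=99,gid=100,vers=3.1.1,credentials=/root/.smbcred $unc $mountpoint"
}

build_cifs_mount '//server/Backup' /mnt/remotes/Backup
```

If the SMB 3 mount fails with the same "could not connect" error, the problem is reachability (firewall, name resolution, the 6.12.2 SMB bug) rather than the dialect negotiation.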
-
2 minutes ago, bonienl said:
A fix will come in a future release.
It will allow users to specify additional interfaces or IP addresses to listen to, this may include any custom tunnels.
Wonderful! Any idea how long I would have to wait for it?
-
And here I was trying to get it working... it turned out that @Angeloc posted the exact reason why I could not get it working.
One of my nodes is on the latest version.
Hoping for a fix soon...
-
Same here.
I removed everything from the plugin and reinstalled; it does nothing, just shows 0 files added.
I tried the older file, but it won't install.
-
I'm getting an invalid response when trying to download it. Any way around this?
Edit: it worked now.
-
so.. this is enabled... (i allready enabled it weeks ago during the first install of unraid) and it should put the logs in the appdata folder.. but there are no files there...
ah it seems i needed to put the servers own ip to the remote part.. now i can see logs generating.
-
So I was running 6.9 stable for quite a while, but decided that I wanted to try 6.10.0-rc2.
After 2 days it suddenly stopped responding overnight.
The fun part is that ping still works.
After a while this shows up on the Unraid web page (while trying to connect):
500 Internal Server Error
nginx
I will do a reboot shortly after posting this, so let me know what I can do to provide support files.
Edit: added the diagnostics from the Tools page.
-
Replace the parity drive first, then do a one-by-one replacement of the 6 TB disks.
-
Any clue on when the 11th-gen CPUs will be fully supported? I still get artifacts during transcoding of 2160p content.
-
Is this version working with the 11th-gen CPUs? Because the linuxserver version still shows artifacts during transcoding of 2160p content...
Damn, I wish Plex would just make it a priority...
-
2 minutes ago, ich777 said:
That's not true, see the post from @alturismo and follow the link:
Intel 11th gen is capable of SR-IOV.
Alright... but is it implemented somewhere at this point, then? (Don't get me wrong, I'm glad it can be done.)
-
Any timing on when the 11th-gen Intel CPUs will be supported?
Edit:
Never mind, I just found that Intel page basically saying: nope, you messed up when buying our newest hardware.
-
No fix yet for the issues with 11th-gen CPUs and Unraid 6.10.0-rc1?
fail2ban not pushing CHAIN to unraid host
in General Support
Posted
bump?