Leaderboard

Popular Content

Showing content with the highest reputation on 09/15/21 in all areas

  1. Hello, thanks for the welcome. I'll try to get up to speed as soon as possible. As I told Spencer, we intend to promote UnRAID as best we can in Spain and the Spanish-speaking community. Over the last few weeks we have been working intensively to put in place the minimum infrastructure needed to give the Spanish-speaking community the support it needs. We hope to bring the current group of members up to a level where we can let go of their hand and they can happily enjoy their servers. For my part, as could not be otherwise, I am grateful for the trust placed in us by UnRAID's Community Manager in this new stage opening up for us. Warm regards to everyone, and once again many thanks for the confidence placed in us. 😎
    4 points
  2. Hello everyone 👋 It's a pleasure to introduce @EUGENI_CAT as the new moderator of the Spanish forum. @EUGENI_CAT runs a website and a Spanish-speaking Unraid community at unraid.es. There is also a fast-growing Telegram group focused on Unraid. He has kindly agreed to moderate here as well. Help me welcome @EUGENI_CAT 👏 👏 👏
    3 points
  3. Congratulations Eugeni!!! Trust that is more than deserved, for everything you are doing for the community.
    3 points
  4. Congratulations Eugeni! What a great signing you've made 😉
    2 points
  5. Thanks Spencer. We will live up to expectations. Kind regards 😎
    2 points
  6. This community couldn't have better moderators! I'll try to do my bit.
    2 points
  7. Long live the Unraid Furia Roja!
    2 points
  8. Hi, let me first start off by saying that I would have liked to contribute to the Unraid community a long time ago, but time has been getting in my way. Now that LastPass has decided to limit their free service, which forced many users to change password managers, and a lot of people are turning to Bitwarden, I thought it would be a good idea for me to share in detail exactly how to protect your self-hosted Bitwarden (bitwarden_rs) with fail2ban and connect it to Cloudflare IP Access Rules. And also protect /admin 😋

     In this guide we are going to be using the following:

     1. Swag from linuxserver/swag
     2. Bitwardenrs from bitwardenrs/server
     3. GilbN's cloudflare-apiv4 template

     So let's say some unknown user comes along and tries to log in with unauthorized credentials or starts hammering the portal: 30 seconds to 1 minute later — poof, banned via Cloudflare. Here is how we set this up (my bitwardenrs container is named bitwarden).

     First we go into the settings of the bitwardenrs container and edit the template like I have in the images below, adding the variables for the bitwarden log (a hedged sketch of these variables is appended at the end of this post). The /data/ path is equal to /mnt/cache/appdata/bitwarden/. Now it's important to know that the bitwarden.log I have there is not going to be generated until you add the variables to the container and then restart the container. Once this is done, the .log will show up in /appdata/bitwarden/.

     Let's now head into the fail2ban folder located at /mnt/cache/appdata/swag/fail2ban and open jail.local (yours might be different, but these are the settings I use for my fail2ban). Scroll down to the very end and add your own jail for bitwarden — of course, add your own IP to the ignore list (a sketch of such a jail is appended at the end of this post). Save the jail and cd into /mnt/cache/appdata/swag/fail2ban/filter.d. Create a new file in here called bitwarden_rs.conf with the filter definition, then save it (again, see the sketch at the end of this post).

     Now cd into /mnt/cache/appdata/swag/fail2ban/action.d and create a new file in here called cloudflare-apiv4.conf. Inside this file you copy the excellent template from GilbN, which you can find on his blog here: https://technicalramblings.com/blog/cloudflare-fail2ban-integration-with-automated-set_real_ip_from-in-nginx/ Not much needs to be changed in here — just, at the bottom, cfuser = and cftoken =, and you can find those at https://dash.cloudflare.com/profile/api-tokens (Global API Key).

     Next let's head into the Swag container and add a path like I have done in the image below. Same thing here: the /bitwarden/ path is equal to /mnt/cache/appdata/bitwarden/. Next you save and restart Swag. The following commands should now work. In the Unraid webUI > web terminal, type:

         docker exec -it swag bash
         cat bitwarden/bitwarden.log

     A tail should also work from the web terminal:

         docker exec swag tail -f /bitwarden/bitwarden.log

     Don't know if you need it, but this is how my volume mapping looks for bitwarden; yours should look the same.

     I wanted to add this section, as I have CF Real IP in my nginx.conf, located at /mnt/cache/appdata/swag/nginx/nginx.conf, right under my Gzip settings:

         ##
         # CF Real IP
         ##
         include /config/nginx/cf_real-ip.conf;
         real_ip_header X-Forwarded-For;

     If I do a simple cat on cf_real-ip.conf, it looks like this:
         :~# cat /mnt/cache/appdata/swag/nginx/cf_real-ip.conf
         ## Version 2021/02/06
         set_real_ip_from 103.21.244.0/22;
         set_real_ip_from 103.22.200.0/22;
         set_real_ip_from 103.31.4.0/22;
         set_real_ip_from 104.16.0.0/12;
         set_real_ip_from 108.162.192.0/18;
         set_real_ip_from 131.0.72.0/22;
         set_real_ip_from 141.101.64.0/18;
         set_real_ip_from 162.158.0.0/15;
         set_real_ip_from 172.64.0.0/13;
         set_real_ip_from 173.245.48.0/20;
         set_real_ip_from 188.114.96.0/20;
         set_real_ip_from 190.93.240.0/20;
         set_real_ip_from 197.234.240.0/22;
         set_real_ip_from 198.41.128.0/17;
         set_real_ip_from 2400:cb00::/32;
         set_real_ip_from 2606:4700::/32;
         set_real_ip_from 2803:f800::/32;
         set_real_ip_from 2405:b500::/32;
         set_real_ip_from 2405:8100::/32;
         set_real_ip_from 2c0f:f248::/32;
         set_real_ip_from 2a06:98c0::/29;

     When it comes to the reverse proxy, for some reason I could not get the default template from linuxserver.io to work for me. So I decided to add my reverse proxy and the ssl.conf that I use for bitwarden (gives an A+ on securityheaders.com). It's based on Roxedus' template (I think...):

         ## Version 2020/12/09
         server {
             listen 443 ssl http2;
             listen [::]:443 ssl http2;

             server_name bw.FQDN.TLD;

             # TLS settings
             include /config/nginx/bitssl.conf;

             # Organizr Authentication
             include /config/nginx/auth.conf;

             # Custom Error Pages
             include /config/nginx/errorpages.conf;

             # Maxmind Geographic IP Block
             # include /config/nginx/geoblock.conf;

             client_max_body_size 128M;

             location / {
                 proxy_pass http://IP:PORT;
                 proxy_redirect off;
                 proxy_set_header Host $http_host;
                 proxy_set_header X-Real-IP $remote_addr;
                 proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                 proxy_set_header X-Forwarded-Proto $scheme;
                 proxy_set_header X-Forwarded-Protocol $scheme;
                 proxy_set_header X-Url-Scheme $scheme;
                 proxy_hide_header X-Frame-Options;
                 proxy_hide_header "x-webkit-csp";
                 proxy_hide_header "content-security-policy";
                 proxy_set_header Accept-Encoding "";
                 sub_filter '</head>' '<link rel="stylesheet" type="text/css" href="https://gilbn.github.io/theme.park/CSS/themes/bitwarden/plex.css"> </head>';
                 sub_filter_once on;
             }

             location /admin {
                 auth_request /auth-0;
                 include /config/nginx/proxy.conf;
                 resolver 127.0.0.11 valid=30s;
                 set $upstream_app bitwarden;
                 set $upstream_port 8242;
                 set $upstream_proto http;
                 proxy_pass http://IP:PORT;
             }

             location /notifications/hub/negotiate {
                 proxy_pass http://IP:PORT;
             }

             location /notifications/hub {
                 proxy_pass http://IP:PORT;
                 proxy_set_header Upgrade $http_upgrade;
                 proxy_set_header Connection "upgrade";
             }
         }

     Error pages can be found here: https://docs.organizr.app/books/setup-features/page/custom-error-pages

     Here is my bitssl.conf:

         ## Version 2019/06/19 - Changelog: https://github.com/linuxserver/docker-letsencrypt/commits/master/root/defaults/ssl.conf

         # session settings
         ssl_session_timeout 1d;
         ssl_session_cache shared:SSL:50m;
         ssl_session_tickets off;

         # Diffie-Hellman parameter for DHE cipher suites
         ssl_dhparam /config/nginx/dhparams.pem;

         # ssl certs
         ssl_certificate /config/keys/letsencrypt/fullchain.pem;
         ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

         # protocols
         ssl_protocols TLSv1.2 TLSv1.3;
         ssl_prefer_server_ciphers on;
         ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';

         # OCSP Stapling
         ssl_stapling on;
         ssl_stapling_verify on;
         resolver 127.0.0.11 valid=30s; # Docker DNS Server
         ## Header security settings to reach A+ grade on htbridge.com/websec | securityheaders.io | ssllabs.com
         add_header Content-Security-Policy "form-action 'self' https://xy.FQDN.TLD https://FQDN.TLD; base-uri 'self'; upgrade-insecure-requests; block-all-mixed-content; frame-ancestors https://xy.FQDN.TLD https://yx.FQDN.TLD https://FQDN.TLD";
         add_header "Content-Security-Policy-Report-Only" "report-uri https://xy.report-uri.com/r/d/csp/reportOnly";
         add_header Expect-CT max-age=604800,enforce,report-uri="https://xy.report-uri.com/r/d/ct/reportOnly";
         add_header Expect-Staple "max-age=31536000; includeSubDomains; preload";
         add_header Referrer-Policy "no-referrer-when-downgrade" always;
         add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
         add_header X-Content-Type-Options "nosniff" always;
         add_header X-Frame-Options "ALLOW-FROM https://yx.FQDN.TLD";
         add_header X-Robots-Tag none;
         ##/SSL SETTINGS

     And since I'm using Organizr as my HTTP authentication, I'll include that .conf too. If the bitwarden.conf fails, just comment the auth out with a #, or ask here in the comments and I'll do my best to help out.

         # Version 2020/12/09
         ######################
         # Organizr Auth v2
         ######################
         # auth_request /auth-0; #=Admin
         # auth_request /auth-1; #=Co-Admin
         # auth_request /auth-2; #=Super User
         # auth_request /auth-3; #=Power User
         # auth_request /auth-4; #=User
         # auth_request /auth-8; #=Logged in
         # auth_request /auth-9; #=Guest

         location ~ /auth-([0-9]+) {
             internal;
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_organizr organizrv2;
             proxy_pass http://IP:PORT/api/v2/auth?group=$1;
             proxy_set_header Content-Length "";
         }

     After this, restart Swag. And that's it 🙂 Everything should now be working as intended. This guide was written & created by me, Zidichy, on February 21, 2021. ~ Fin
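     For anyone reading this without the images: a minimal sketch of the logging variables referenced above, assuming bitwarden_rs's standard LOG_FILE / EXTENDED_LOGGING options — double-check them against your own container template:

         # Environment variables on the bitwardenrs container (hypothetical reconstruction)
         LOG_FILE=/data/bitwarden.log    # /data/ maps to /mnt/cache/appdata/bitwarden/
         EXTENDED_LOGGING=true           # log failed logins so fail2ban has something to match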
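     Likewise, a hedged sketch of the bitwarden jail added to jail.local — the filter and action names match the files created in this guide and the logpath matches the Swag mapping above, while the retry/ban times and the ignoreip range are placeholders to tune:

         [bitwarden]
         enabled  = true
         port     = http,https
         filter   = bitwarden_rs
         action   = cloudflare-apiv4
         logpath  = /bitwarden/bitwarden.log
         maxretry = 3
         findtime = 3600
         bantime  = 14400
         ignoreip = 192.168.1.0/24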
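     And a sketch of filter.d/bitwarden_rs.conf, based on the failed-login line bitwarden_rs writes to its log — check your own bitwarden.log for the exact wording; a second filter matching the "Invalid admin token" line can cover /admin the same way:

         [INCLUDES]
         before = common.conf

         [Definition]
         failregex = ^.*Username or password is incorrect\. Try again\. IP: <HOST>\. Username:.*$
         ignoreregex =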
    1 point
  9. You can look at this https://forums.unraid.net/topic/85495-unraid-newenckey-change-your-drive-encryption-unlock-key/, but without the current key it won't work. The system working that way is pretty much what you would have wanted, since you made the choice to encrypt.
    1 point
  10. I run monthly parity checks. I don't understand how anyone could be comfortable with only running 2 full parity checks over an entire year. At the least, I would pick a schedule that allows a full parity check to complete within the month.
    1 point
  11. It logs the command it uses. Just run a user script and modify to suit.
    1 point
  12. Congratulations Eugeni.... Well deserved..... Regards.
    1 point
  13. Thanks @Cifu2. I expected nothing less from a group as close-knit as we UNRAIDES are. A hug 😎
    1 point
  14. Thanks Jorge. What can I tell you that you don't already know!!! A hug 😎
    1 point
  15. If the other nine are working, and it looks like they are... https://arkbrowser.com/servers?q=cluster%3AD1R7Y ...you are going to have to keep digging in your server's file system. There is no way for us to know how your system is behaving...
     - Are the other containers behaving as you expect?
     - What are the permissions of /mnt/cache/appdata/ark-se/ARK6-Valguero_P? Do they match the other ARK* folders?
     - What are the contents? Are they laid out the same as the other ARK* folders?
     - Is the .ark file present? (ls -l /mnt/cache/appdata/ark-se/ARK6-Valguero_P/SavedArks/Valguero_P.ark)
     - Does the server container start? What do the logs say?
     - If it does start, can you join the game on it? Transfer in or out to the others in the cluster?
     You are going to have to do some of the digging legwork before anyone is going to be able to assist (a sketch of these checks follows below).
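     As a starting point, those checks could look like this in the Unraid web terminal — a rough sketch; the container name ark6 is a placeholder for whatever yours is called:

         # Compare ownership/permissions and layout against the working ARK* folders
         ls -ld /mnt/cache/appdata/ark-se/ARK*
         # Is the save file present?
         ls -l /mnt/cache/appdata/ark-se/ARK6-Valguero_P/SavedArks/Valguero_P.ark
         # Does the container start, and what do the logs say?
         docker logs --tail 50 ark6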
    1 point
  16. I've been running the Cloudberry Docker, backing up to a Backblaze bucket, for a while now. But I am missing files and folders from my B2 bucket. I purchased the Linux license of CB, and I'm only trying to back up about 600G of data (well under the limit of the license I have). I've had a look at the settings in CB, and everything looks like it's configured correctly. Below is the retention policy of the backup job, which overrides the global settings:
     - NOT CONFIGURED to delete any files automatically
     - Keep 3 versions of each file
     - Delete files that have been deleted locally after 30 days
     My B2 bucket is configured to keep all versions of the file (i.e. allow CB to manage it). Yet, with these settings, I'm still not seeing all of my folders backed up. Is this a bad config, or is CB just that unreliable?
    1 point
  17. File system is fully allocated, run a balance: https://forums.unraid.net/topic/62230-out-of-space-errors-on-cache-drive/?do=findComment&comment=610551
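     For reference, running a balance on a btrfs cache pool is usually a one-liner along these lines — a sketch assuming the pool is mounted at /mnt/cache; the 75% usage filter is a common starting value, and the linked post has the details:

         # Rewrite data chunks that are at most 75% full, releasing fully-allocated space
         btrfs balance start -dusage=75 /mnt/cache
         # Check allocation before/after
         btrfs filesystem usage /mnt/cache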
    1 point
  18. I just checked again in Grafana; there was no data in any of the tables because most of the data was in German, so I changed to the English metrics and everything seems to be fine. I was worried that the MSI Afterburner values weren't being exported — what I saw was in Go format...

         # HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
         # TYPE go_memstats_next_gc_bytes gauge
         go_memstats_next_gc_bytes 6.080416e+06

     ...but anyway, all the important metrics are OK, I think. I will stay on the English MSI Afterburner for now. Thank you for all your help again.
    1 point
  19. No problem, I overlooked your post... sorry... This looks fine to me and should not cause any issues. What exactly is the problem you're seeing? Would it be possible for you to let the exporter run in English and edit the values, or at least the dashboard in Grafana, to fit your needs?
    1 point
  20. I know about that one but I would prefer an app made by limetech.
    1 point
  21. Unraid has a built-in NTP server by default. Just point your devices at Unraid's IP.
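     As a quick sketch, pointing a Linux client at it would look something like this — 192.168.1.10 is a placeholder for your Unraid server's IP:

         # /etc/chrony/chrony.conf on the client
         server 192.168.1.10 iburst

         # verify the source is being used
         chronyc sources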
    1 point
  22. Try forcing an update of the binhex container; from what I know, it should theoretically work there without a problem.
    1 point
  23. Sure. Will give it a shot tomorrow and let you know.
    1 point
  24. Maybe I'll have time later today to send you a screenshot from my Jellyfin instance with NVENC running, if you don't get it to work.
    1 point
  25. I ended up caving and buying a 6TB as a replacement; the array is currently rebuilding. Thanks for your help.
    1 point
  26. This version is terrific... thanks to all involved for the work put into it.
    1 point
  27. Love this feature....now if we could just get a cell phone app!
    1 point
  28. Oh, yeah, right... I hadn't thought of that. I somehow had it in the back of my mind that I could only use RAID1 when pooling disks. But RAID0 works too. I think that would certainly solve it... Not necessarily. I only pull data from the array. The two 6TB WD Blues are meant to be pure backups. That means a sync from array to backup happens maybe once a week. However, it should be possible to move individual files from the backup back into the array without having to do a full restore every time. Example: I'm sorting and editing pictures and I delete/overwrite/lose one. Then I'd want to jump into the backup, find the picture, and copy it back into the array.
    1 point
  29. Thank you binhex. It is indeed database corruption:

         Sep 15, 2021 09:25:55.942 [0x1469927bb640] INFO - Plex Media Server v1.21.3.4021-5a0a3e4b2 - unknown PC unknown - build: linux-x86_64 redhat - GMT 01:00
         Sep 15, 2021 09:25:55.942 [0x1469927bb640] INFO - Linux version: 5.10.19-Unraid (#1 SMP Sat Feb 27 08:00:30 PST 2021), language: en-GB
         Sep 15, 2021 09:25:55.942 [0x1469927bb640] INFO - Processor Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
         Sep 15, 2021 09:25:55.942 [0x1469927bb640] INFO - /usr/lib/plexmediaserver/Plex Media Server
         Sep 15, 2021 09:25:55.941 [0x146992ca7200] DEBUG - BPQ: [Idle] -> [Starting]
         Sep 15, 2021 09:25:55.941 [0x146992ca7200] VERBOSE - BPQ: delaying processing 120 second(s)
         Sep 15, 2021 09:25:55.942 [0x146992ca7200] DEBUG - FeatureManager: Using cached data for features list
         Sep 15, 2021 09:25:55.942 [0x146992ca7200] DEBUG - Opening 20 database sessions to library (com.plexapp.plugins.library), SQLite 3.26.0, threadsafe=1
         Sep 15, 2021 09:25:55.952 [0x146992ca7200] INFO - SQLITE3:(nil), 283, recovered 705 frames from WAL file /config/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db-wal
         Sep 15, 2021 09:25:55.952 [0x146992ca7200] ERROR - SQLITE3:(nil), 11, database corruption at line 66053 of [bf8c1b2b7a]
         Sep 15, 2021 09:25:55.952 [0x146992ca7200] ERROR - SQLITE3:(nil), 11, database disk image is malformed in "PRAGMA cache_size=2000"
         Sep 15, 2021 09:25:55.952 [0x146992ca7200] ERROR - Database corruption: sqlite3_statement_backend::prepare: database disk image is malformed for SQL: PRAGMA cache_size=2000

     I still have the CA Backup from 2 days ago. I'll wait for the disk rebuild process to finish and then I'll try to restore the appdata folder. I'm guessing the process should be as simple as this: But I might recreate the docker.img file before. Just in case.
    1 point
  30. I used to have this issue too which was also solved by the VTI trick. Maybe worth trying anyway?
    1 point
  31. For 4 VMs, in my view you need at least an 8-core, if not more, since unRAID needs some cores for itself. For my part, I'm currently testing 2 Windows 10 VMs + 1 Linux VM on 3 cores of my i7-3770, and it works really well. But I was very meticulous about it: you have to optimize Windows 10 properly; I removed everything superfluous. Each machine runs on 1 "processor" (1 core) with peaks at 80% utilization, but in general it hovers around 40%. So it all depends on how the OSes are used and optimized!
    1 point
  32. That's right there in the help text for the cache.
    1 point
  33. Do you force a transcode from within the web player or the application where you play the file? What kind of media do you want to transcode? For Plex you need a Plex Pass to get hardware transcoding to work. What container are you using? What does the overview section in Jellyfin tell you? Don't forget that the file you transcode may also need its audio transcoded, and that also produces load on the CPU.
    1 point
  34. That's the network stack which JDownloader then uses as well, so that part is fine. Otherwise, take an ovpn container that also provides an HTTP proxy or SOCKS proxy (which does work in JDownloader) and limit it to that only — i.e. a setup that ONLY works through the proxy. As for your "speed" problem: the comparison is flawed, since the Windows client is surely not using ovpn... and 125 kb looks almost like a 1 Mbit limit... hard to say, since their site says nothing about it... With a "proxy" container you can also "test" via a browser etc. — that would be my first approach. I'll maybe take a look at the ovpn file tomorrow.
    1 point
  35. ...the Intel 82576 DUAL will work fine. I had one of these running for years.
    1 point
  36. Do you have any newer diagnostics that includes those activities?
    1 point
  37. That sounds odd. The pool where appdata currently lives has to stay selected. Then you set the cache to Yes, so the pool gets emptied toward the array. After that, change the pool and set it to Prefer. Then let the mover move everything toward the new pool. The mover can't go directly from pool to pool; you would indeed have to do that manually with a "cp -ar source destination" command.
    1 point
  38. This should only happen when you are on the Dashboard in unRAID, or am I wrong? It is very unlikely to be caused by the plugin; also consider enabling nvidia-persistenced mode.
    1 point
  39. It would be for more than just the UI.
    1 point
  40. Sorry for the trouble, nice job figuring out it was related to the license. I've recreated this and confirmed the problem is in the My Servers plugin, we'll get that fixed.
    1 point
  41. It seems like the error is related to the MariaDB container being upgraded and, as a result, MariaDB being upgraded from 10.1 to 10.2. To get rid of this, shell into the MariaDB container and delete the binary log files in /config/databases (these log files are called "ib_logfile0", "ib_logfile1", etc.). After deleting them, restart the MariaDB container. https://mariadb.com/kb/en/upgrading-from-mariadb-101-to-mariadb-102/+comments/2903
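     A sketch of those steps from the Unraid terminal, assuming the container is simply named mariadb — the paths are the ones from the post:

         # Remove the old InnoDB redo logs from inside the container
         docker exec mariadb bash -c 'rm /config/databases/ib_logfile*'
         # Restart so MariaDB 10.2 recreates them in the new format
         docker restart mariadb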
    1 point
  42. Hi, if I execute the script I get the following error:

         root@Avalon:/boot/config/scripts# sh backup_rsync.sh
         backup_rsync.sh: line 74: syntax error near unexpected token `>'
         backup_rsync.sh: line 74: ` exec &> >(tee "${backup_path}/logs/${new_backup}.log")'
         root@Avalon:/boot/config/scripts#

     How can I solve this? Thank you!
    1 point
  43. It is; you need to install it and log into the webUI to pull your server's data. Then use the IP of the container (http://xxx.xxx.xxx.xxx:3005/api/getServers) as a JSON API data source in Grafana.
    1 point
  44. It's easy for us to get overwhelmed by new issues, especially coinciding with new features and new kernel releases. Our lack of immediate reply does not mean your report is being ignored. We very much appreciate all hints, testing results, etc. Remember, for very odd issues, please reboot in "Safe Mode" to ensure no strange interaction with a plugin.
    1 point