Doublemyst

Members
  • Posts: 41
  • Joined

  • Last visited


Doublemyst's Achievements

Rookie (2/14)

Reputation: 7

  1. Is Nvidia supported in Big Sur? I thought Nvidia cards were supported in High Sierra and nothing newer.
  2. Ah, alright .. I just saw the 480 MB/s read/write and assumed it would be the average. Is there an SSD you would suggest where these speeds will "maintain"? Or what should I look for?
  3. Hi, here are the diagnostic files: tower-diagnostics-20220115-1409.zip
  4. Thank you! It finally works! I was on the edge of giving up!
  5. Did you manage to find a solution? I have the same problem with my Nvidia GTX 1060 6GB. Here is a reddit thread (https://www.reddit.com/r/unRAID/comments/myup6h/gpu_passthrough_vm_no_display/) with a good explanation of all the points that "normally" need to be done, but it didn't help in my case. I have tried to boot Unraid in both legacy and UEFI mode, but with no luck .. I am out of ideas right now. At boot there is screen output (right before Unraid starts booting; then it hangs on that screen until the VM autostarts), but as soon as the VM starts, the screens turn black and go into "sleep" mode.
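     (For reference, one of the steps that guide describes is binding the GPU and its HDMI audio device to vfio-pci so the host never grabs it. A minimal sketch, assuming legacy syslinux boot; the IDs below are examples of what a GTX 1060 might report and must be replaced with the pair from `lspci -nn | grep -i nvidia` on your own machine:)

     ```
     # /boot/syslinux/syslinux.cfg -- example IDs, check yours with: lspci -nn | grep -i nvidia
     label Unraid OS
       kernel /bzimage
       append vfio-pci.ids=10de:1c03,10de:10f1 initrd=/bzroot
     ```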
  6. Hi Squid, thanks again for the reply. Yeah, sorry I didn't explain it well enough. Basically, my problem is exactly the same as in the original post: I don't have write access to folders (which is the case for my VM folders). In addition, I have problems with Dockers like Nextcloud, which needed a permission reset on the share in order to be able to write to it. I have now checked the Radarr app, and its permission setting (chmod Folder) was indeed set to 0755; I have changed it to 0777. I guess there might be a similar setting for all the apps somewhere. Thanks for the hint, I think this might be the reason the app is writing files with the wrong permissions and blocking folders. Cheers
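     (For reference, a minimal sketch of what that chmod setting changes; the path here is a made-up example, not an actual share:)

     ```shell
     # 0755: owner can write, but group/others can only read and enter the
     # directory -- so other containers running as a different user cannot write.
     mkdir -p /tmp/demo_share/Movies
     chmod 0755 /tmp/demo_share/Movies
     stat -c '%a' /tmp/demo_share/Movies    # prints 755

     # 0777: everyone can write, which is what Unraid's New Permissions
     # effectively restores for directories.
     chmod 0777 /tmp/demo_share/Movies
     stat -c '%a' /tmp/demo_share/Movies    # prints 777
     ```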
  7. Hi all, I am a noob and have very little knowledge in this stuff. My goal is to make the hCaptcha from the Epicgames Freegames Node Docker available from everywhere, not just my local WiFi. I have tried to find a solution with the information on the Docker page:

     Webserver setup
     • Expose port 3000 in your Docker run config (e.g. -p 81:3000 maps the host machine's port 81 to the container's port 3000)
     • If you want to access the Captcha solving page from outside your network, set up any port forwarding/reverse proxy/DNS
     • Set the webPortalConfig.baseUrl in the config
     • The web portal uses WebSocket to communicate. If you're using a reverse proxy, you may need additional configuration to enable WebSocket. This guide from Uptime Kuma covers most scenarios.

     The way I tried it didn't work. I am using SWAG as my reverse proxy (got it working today for Nextcloud and Deluge with some YouTube tutorials). Has anyone managed to make epicgames available from outside the local network with SWAG? What configuration files are you using? Here is what I have at the moment (I have tried a lot of possible configurations, commenting out lines that were generating problems .. I know I was just poking and hoping it would work, but it didn't ..):

     server {
         listen 443 ssl http2;
         server_name epic.*;
         include /config/nginx/ssl.conf;
         location / {
             # proxy_set_header X-Real-IP $remote_addr;
             # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             include /config/nginx/proxy.conf;
             include /config/nginx/resolver.conf;
             set $upstream_app XXXXX;    # local IP of my server (e.g. 192.168.1.100)
             set $upstream_port 3055;    # the port on my server
             set $upstream_proto http;
             proxy_pass $upstream_proto://$upstream_app:$upstream_port;
             # proxy_pass http://XXXXX:3055/;
             # proxy_http_version 1.1;
             proxy_set_header Upgrade $http_upgrade;
             proxy_set_header Connection "upgrade";
         }
     }

     I'd be glad for any help, thanks!
     Cheers

     Edit: I've solved the problem. Here is the domain config:

     ## Version 2021/05/18
     # make sure that your dns has a cname set for epicgames and that your
     # epicgames container is not using a base url
     server {
         listen 443 ssl http2;
         listen [::]:443 ssl;
         server_name epic.*;
         include /config/nginx/ssl.conf;
         client_max_body_size 0;
         location / {
             include /config/nginx/proxy.conf;
             include /config/nginx/resolver.conf;
             set $upstream_app XXX.XXX.XXX.XXX;  # local IP of my server (e.g. 192.168.1.100)
             set $upstream_port 3055;            # docker port of epicgames - yours may be different
             set $upstream_proto http;
             proxy_pass $upstream_proto://$upstream_app:$upstream_port;
             # proxy_set_header X-Real-IP $remote_addr;
             # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             # proxy_pass http://XXX.XXX.XXX.XXX:3055/;
             # proxy_http_version 1.1;
             # proxy_set_header Upgrade $http_upgrade;
             # proxy_set_header Connection "upgrade";
         }
     }

     And, very important: in the epicgames-freegames appdata you have to set webPortalConfig -> baseUrl to the base URL, for example "https://epicgames.myserver.com". VERY IMPORTANT: keep the https:// at the beginning (or try http://; I don't know which one will work for you), but without the https:// part it wasn't even starting properly for me.
  8. Hi Squid, thanks for your reply. I don't think this problem is related to Nextcloud, as I have the same problem with files downloaded by Deluge, and even on a VM share to which I download YouTube videos with youtube-dl (now yt-dlp). The problem cuts across Dockers and the system overall.
  9. I have had the exact same problem for as long as I have been using Unraid. Even now: I tried to use an older share for my Nextcloud setup, and when the Nextcloud app tried to upload photos, I got messages that there was an issue .. Well, I knew it was the shares problem again; I ran New Permissions on this folder and, as expected, everything worked fine afterwards, but this won't hold for long. Does anyone know how to fix this permanently? Cheers
  10. Hey son_goolong (and maybe spyd4r), I don't know if this will help, but I had to set it up like this (in the MariaDB console) in order to let Firefly access the Firefly SQL DB:

      GRANT ALL PRIVILEGES ON `FIREFLYDBNAME-HERE`.* TO 'YOURFIREFLYUSER-HERE'@'%';

      As much as I understand it (and I don't, really), with this command your Firefly DB user will be able to access those Firefly tables from any local network address. I think (not sure) it is because the Firefly container has a different internal IP than MariaDB, and MariaDB blocks the request since the IP is not the one Maria's container expects ... Maybe someone with actual knowledge can explain it better :D. But long story short, the "%" in the command stands for any local IP. I hope this helps (I think it will help son_goolong, as I had similar errors, but I'm not sure about spyd4r).
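      (For context, a minimal sketch of the full sequence in the MariaDB console; the database name, user, and password below are placeholders, not an actual setup:)

      ```sql
      -- Allow the Firefly III user to connect from any host ('%'), since the
      -- Firefly container reaches MariaDB from its own container IP.
      CREATE USER 'fireflyuser'@'%' IDENTIFIED BY 'changeme';
      GRANT ALL PRIVILEGES ON `fireflydb`.* TO 'fireflyuser'@'%';
      FLUSH PRIVILEGES;
      ```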
  11. Hello, thanks for the image and your work, guys! I (kind of a newbie) managed to launch it. The only problem I have now is that I can't change the date format from MM/DD/YYYY to DD/MM/YYYY. In the settings (inside Firefly) it says: "For a language other than English to work properly, your operating system must be equipped with the correct locale-information. If these are not present, currency data, dates and amounts may be formatted wrong." I have really tried to make it work ... googled dozens of forums, but the solutions written there don't work when I open the Firefly console and try them. The solution that should have worked is:

      RUN apt-get install -y locales locales-all
      ENV LC_ALL en_US.UTF-8
      ENV LANG en_US.UTF-8
      ENV LANGUAGE en_US.UTF-8

      (Here I would just change to the language I want, for example en_GB.UTF-8 or de_DE.) But I get an error message when I run "apt-get install -y locales locales-all":

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Package locales-all is not available, but is referred to by another package.
      This may mean that the package is missing, has been obsoleted, or
      is only available from another source
      E: Package 'locales-all' has no installation candidate

      I've googled it, and the reason seems to be a missing universe repository. The command to activate it doesn't really work either:

      add-apt-repository universe
      sh: 9: add-apt-repository: not found

      At the moment I'm stuck at:

      Command: locale
      Output:
      LANG=
      LANGUAGE=
      LC_CTYPE="POSIX"
      LC_NUMERIC="POSIX"
      LC_TIME="POSIX"
      LC_COLLATE="POSIX"
      LC_MONETARY="POSIX"
      LC_MESSAGES="POSIX"
      LC_PAPER="POSIX"
      LC_NAME="POSIX"
      LC_ADDRESS="POSIX"
      LC_TELEPHONE="POSIX"
      LC_MEASUREMENT="POSIX"
      LC_IDENTIFICATION="POSIX"
      LC_ALL=

      Command: locale -a
      Result:
      C  C.UTF-8  POSIX
      bg_BG.utf8  cs_CZ.utf8  de_DE.utf8  el_GR.utf8  en_GB.utf8  en_US.utf8
      es_ES.utf8  fi_FI.utf8  fr_FR.utf8  hu_HU.utf8  it_IT.utf8  lt_LT.utf8
      nb_NO.utf8  nl_NL.utf8  pl_PL.utf8  pt_BR.utf8  pt_PT.utf8  ro_RO.utf8
      ru_RU.utf8  sk_SK.utf8  sv_SE.utf8  vi_VN.utf8  zh_CN.utf8  zh_TW.utf8

      Which, as far as I've read on the internet, means that no locale is set in this Docker container. Can anyone help me out on this? Thanks
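      (A possible workaround, assuming the container image is Debian-based: "universe" is an Ubuntu repository concept, which would explain why add-apt-repository doesn't exist inside the container. Instead of installing locales-all, you can generate only the locale you need. A Dockerfile sketch, using de_DE.UTF-8 as an example:)

      ```
      # Generate a single locale instead of pulling in locales-all
      RUN apt-get update \
          && apt-get install -y locales \
          && sed -i 's/^# *de_DE.UTF-8 UTF-8/de_DE.UTF-8 UTF-8/' /etc/locale.gen \
          && locale-gen
      ENV LANG=de_DE.UTF-8 LC_ALL=de_DE.UTF-8 LANGUAGE=de_DE.UTF-8
      ```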
  12. No idea what you mean by that, but thanks anyway! At least now I know the rule to keep such stuff at the top.
  13. Woah, you are like a magician .. after I rearranged the Docker folders, it worked ... so strange, as I thought that if I added a "wait" on the Dockers themselves, each would wait X seconds and then start; but rearranging did the trick for me (in this case I not only updated the SSD, but also added the Docker folders, which caused the issues). Thanks for the help, Squid and Trurl!