HojojojoWololo

Everything posted by HojojojoWololo

  1. Actually, I have no clue since I haven't tried that, and because I stopped using Matrix/Element, I won't try it any time soon. Sorry!
  2. In my defense, the post is almost two years old (so it's pretty outdated), and it took me a days-long odyssey to get it to work, too. I mentioned that here, though, so you could have been forewarned. But I can absolutely understand your annoyance, and when an update of Jitsi failed last year, I decided to get rid of it because the setup was so painful.
  3. Hi guys, I need some help, too, because I can't figure out what to do even after some hours of research.

     Problem: I have been using WireGuard for some months and everything works fine, since everyone who connects via WireGuard is supposed to have complete access to the server's LAN. But on my server there is one Docker container that I let some friends access. For that purpose I used an OpenVPN container, since it let me restrict the VPN access to just that one container (within the OpenVPN config I could map certain users to certain IPs within the server's Docker network). Now the OpenVPN Docker container is EOL for Unraid and, coincidentally, my OpenVPN setup broke.

     My question: how can I set this up with WireGuard in Unraid? I do not want those people to access my whole server/LAN/..., only that one specific Docker container (its IP is only "fixed" by the boot order of the containers, not by assigning a fixed IP to the container itself). Hopefully someone has some tips for me.
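     One way this is sometimes handled is with firewall rules on the Unraid host - a peer's AllowedIPs only influence what the client routes into the tunnel, so the actual restriction has to be enforced on the server side. A minimal sketch, assuming a WireGuard interface wg0, a peer tunnel address of 10.253.0.2 and the target container at 172.17.0.5 (all placeholder values; giving the container a fixed IP on a custom Docker network keeps the rule stable across reboots):

         # Allow this peer to reach only the one container and drop everything else it sends.
         # Interface, peer and container addresses are examples - adjust them to your setup.
         iptables -I FORWARD -i wg0 -s 10.253.0.2 -j DROP
         iptables -I FORWARD -i wg0 -s 10.253.0.2 -d 172.17.0.5 -j ACCEPT   # -I puts this above the DROP rule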
  4. No, you can't. The mobile game is a completely different product and technically has nothing to do with the console/PC version.
  5. Yeah, it seems like it works without any problems, it just looks ugly when opening the log 😅 Thanks for your reply 😀
  6. But I still have some problems here. The biggest one: Rocket.Chat isn't able to connect to Rocket.Chat Cloud. I set up an account and had to register it manually - the link for the online registration with my Rocket.Chat Cloud account didn't work. When I want to connect to the cloud services and click on the sign-in button within my Rocket.Chat server, I am redirected to a cloud.rocket.chat URI to log in, which returns an error. I tried to click sync - Rocket.Chat even tells me that the sync was successful, but the error message comes up again when trying to sign in. I logged into cloud.rocket.chat and removed the server from my workspaces. I also noticed that I had some kind of dummy second workspace called "Your Workspace" registered; I deleted that one, too. In a second step, I was able to register again - with the online token. I was directed to the correct URI and was able to log in with my credentials, but another error message appeared. I also tried to delete the cloud sync info from my database using the MongoDB container's console:

         mongo
         use rocketchat
         db.rocketchat_settings.remove({"_id": "Cloud_Workspace_Id"});
         db.rocketchat_settings.remove({"_id": "Cloud_Workspace_Client_Id"});
         db.rocketchat_settings.remove({"_id": "uniqueID"});

     But there is still the "associated with a different Rocket.Chat Cloud account" error. Is there anyone who has a solution for this problem?
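     If it is unclear which cloud-related settings are still stored, the remaining entries can be listed before removing anything - a small sketch, assuming the database is called rocketchat as in the commands above:

         # List every setting whose _id starts with "Cloud_" (run from the MongoDB container's console)
         mongo rocketchat --eval 'db.rocketchat_settings.find({_id: /^Cloud_/}).forEach(printjson)'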
  7. I think I found a solution. I had a problem with the authorization while installing Rocket.Chat, too. After reading that working with a MongoDB replica set requires both an account and a keyfile, I opened a terminal and changed into the MongoDB appdata folder:

         cd /mnt/user/appdata/mongodb

     Then I created a keyfile and set its permissions with the following two commands:

         openssl rand -base64 741 > mongodb.key
         chmod 600 mongodb.key

     Afterwards, I edited the mongodb.conf file in the MongoDB appdata folder and added the path to the keyfile:

         security:
           authorization: enabled
           keyFile: /data/db/mongodb.key

     Now the enabled authorization works for me 💪😁
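     A quick way to confirm that authorization is actually enforced after restarting the container (the admin user name below is just a placeholder for whatever account was created during setup):

         # Without credentials this should now fail with an authorization error:
         mongo rocketchat --eval 'db.rocketchat_settings.findOne()'

         # With credentials it should succeed:
         mongo -u rocketchatadmin -p --authenticationDatabase admin rocketchat --eval 'db.stats()'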
  8. Don't lose hope. I think you've nearly got it. Reminds me of my status some days ago 😁 Try what I posted above: in your homeserver.yaml, put client and federation in single quotes:

         - port: 8008
           tls: false
           type: http
           x_forwarded: true
           bind_addresses: ['0.0.0.0']
           resources:
             - names: ['client', 'federation']

         - port: 8448
           tls: false
           type: http
           x_forwarded: true
           bind_addresses: ['0.0.0.0']
           resources:
             - names: ['federation']

     Also set the database path to /data/homeserver.db:

         database:
           name: sqlite3
           args:
             database: /data/homeserver.db

     Additionally, try to add a TCP TURN URI (remember to forward port 3478 to your server for both protocols):

         turn_uris: ["turn:bridge.mydomain.xyz:3478?transport=udp", "turn:bridge.mydomain.xyz:3478?transport=tcp"]

     And finally, remember to change your secrets and keys, since you shared them with us. That's not the meaning of shared_secret 😉😁 I hope you get that thing running. If this doesn't help, please post your turnserver.conf (without the shared secret, of course).
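     Two quick checks that can help narrow things down once the config is in place (hostnames are the placeholders from the snippets above):

         # Synapse should answer on the plain HTTP listener:
         curl http://your.unraid.server.ip:8008/_matrix/client/versions

         # coturn should be reachable on 3478/TCP (a UDP probe with nc is not conclusive):
         nc -zv bridge.mydomain.xyz 3478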
  9. Yep, the tutorial kinda works with a few adjustments. But since I had to work my way through multiple posts and other sites, I would love to spare you the pain.

     My initial setup was an Unraid server running Swag (since the Letsencrypt docker won't be supported anymore in the future due to naming rights - spaceinvaderone made a great tutorial on how to switch from the Letsencrypt to the Swag docker). Yinzer's tutorial for the Letsencrypt docker still seems fine, though you really should use the Swag docker instead. Furthermore, Jitsi was already up and running when I started to install Matrix (thanks to spaceinvaderone, again 😄), so I will skip that part. If you have to set up a reverse proxy (be sure to use the Swag container instead of the Letsencrypt container) or want to switch to Swag, spaceinvaderone's videos are really helpful.

     My adjustments to @yinzer's Matrix setup:

     Setting up Swag (formerly Letsencrypt)

     matrix.subdomain.conf - thanks to @akamemmnon for his config:

         server {
             listen 443 ssl;
             listen 8448 ssl;
             server_name bridge.*;
             include /config/nginx/ssl.conf;
             client_max_body_size 0;

             location / {
                 include /config/nginx/proxy.conf;
                 resolver 127.0.0.11 valid=30s;
                 set $upstream_app your.unraid.server.ip;
                 set $upstream_port 8008;
                 set $upstream_proto http;
                 proxy_pass $upstream_proto://$upstream_app:$upstream_port;
                 proxy_set_header X-Forwarded-For $remote_addr;
             }

             location /.well-known/matrix/server {
                 default_type application/json;
                 return 200 '{"m.server": "yourdomain.com:443"}';
                 add_header Access-Control-Allow-Origin *;
             }
         }

     Make sure to change your.unraid.server.ip to your Unraid server's IP address and yourdomain.com to your domain name 😁

     Since Riot was renamed to Element, there is a new container, so we will use that one instead of Riot and have to adjust the Swag configuration file.

     element-web.subdomain.conf:

         server {
             listen 443 ssl;
             server_name chat.*;
             include /config/nginx/ssl.conf;
             client_max_body_size 0;

             location / {
                 include /config/nginx/proxy.conf;
                 resolver 127.0.0.11 valid=30s;
                 set $upstream_app element-web;
                 set $upstream_port 80;
                 set $upstream_proto http;
                 proxy_pass $upstream_proto://$upstream_app:$upstream_port;
             }
         }

     Install Matrix and configure it according to yinzer's tutorial. Adjustments:

     Setting up Matrix

     homeserver.yaml, under "listeners" in the "# Unsecure HTTP listeners: for when matrix traffic passes through a reverse proxy" section:

         - port: 8008
           tls: false
           type: http
           x_forwarded: true
           bind_addresses: ['0.0.0.0']
           resources:
             - names: ['client', 'federation']

         - port: 8448
           tls: false
           type: http
           x_forwarded: true
           bind_addresses: ['0.0.0.0']
           resources:
             - names: ['federation']

     Make sure you respect the .yaml syntax - that's what caused the syntax errors of @lewisd19, @jafi and @l2evy. No tabs, just spaces! Additionally, the resource names have to be put in single quotes: 'text'. The examples above can help you with this.

     If you use the standard SQLite database, make sure you change the database path - thanks to @spyd4r for your input.

         database:
           name: sqlite3
           args:
             database: /homeserver.db

     should become

         database:
           name: sqlite3
           args:
             database: /data/homeserver.db

     turnserver.conf

     Delete the first line, which says "lt-cred-mech", since we use "use-auth-secret". Also think about adding the pidfile and userdb entries yinzer posted in his tutorial. My turnserver.conf looks like this:

         use-auth-secret
         static-auth-secret=YOUR-STATIC-AUTH-SECRET
         realm=turn.bridge.yourdomain.com
         cert=/data/bridge.yourdomain.com.tls.crt
         pkey=/data/bridge.yourdomain.com.tls.key
         dh-file=/data/bridge.yourdomain.com.tls.dh
         cipher-list="HIGH"
         pidfile=/data/turnserver.pid
         userdb=/data/turnserver.db

     Setting up Element-Web (based on @yinzer's tutorial for Riot Chat)

     1. Before we start, we need to manually create the config path and pull in the default config, so open a terminal/SSH to your server.
     2. Create the config path by executing

            mkdir -p /mnt/user/appdata/element-web/config

     3. Download the default config by executing

            wget -O /mnt/user/appdata/element-web/config/config.json https://raw.githubusercontent.com/vector-im/element-web/develop/element.io/app/config.json

     4. In Community Applications, search for `element-web` by vectorim.
     5. Set the `Network Type` to `Custom: ssl proxy`.
     6. Set the `Fixed IP address` to `172.20.0.20` (or whatever).
     7. The rest of the settings should be fine. Create the container and run it.

     Now let's edit our Element config. It's a JSON file, so make sure you respect JSON syntax.

     1. Edit /mnt/user/appdata/element-web/config/config.json
     2. Change 'default_server_name' to

            "default_server_name": "bridge.yourdomain.com",

     3. Insert your domain into the 'roomDirectory':

            "roomDirectory": {
                "servers": [
                    "bridge.yourdomain.com",
                    "matrix.org",
                    "gitter.im"
                ]
            }

     4. Add the following lines to the config:

            "jitsi": {
                "preferredDomain": "meet.yourdomain.com"
            },

     Caution: using a Jitsi server with authentication enabled doesn't work with Element!

     Jitsi setup: just follow spaceinvaderone's instructions in his video.

     But for setting up a working Matrix Synapse and the Element-web container, that should be it. @yinzer, feel free to update your initial post with these adjustments 😃
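     Once everything is up, two quick checks can confirm that the reverse proxy and the delegation work as intended (hostnames are the placeholders from the configs above; the Matrix Federation Tester website performs similar checks):

         # Should return the delegation record configured in matrix.subdomain.conf:
         curl https://yourdomain.com/.well-known/matrix/server
         # expected output: {"m.server": "yourdomain.com:443"}

         # The federation API behind the proxy should answer with the server version:
         curl https://bridge.yourdomain.com/_matrix/federation/v1/version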
  10. Thanks for the tutorial and all your work @yinzer - although it could use a little update. Finally, I got Matrix working... after nearly 3 days of work and reading this thread as well as various other sites again and again 😀 Element-Web is also connected now and I was able to create a user account on my Matrix Synapse. Federation and the integration manager seem to work, too. *edit* Problem solved. Stupid mistake because of lack of sleep. Night shifts ^^ But the log of my Matrix container shows the same "Socket: protocol not supported" errors @xthursdayx described in this GitHub post. Must be the TURN part of the Matrix container. @xthursdayx: did you find a solution for your problem?
  11. Nice workaround. Thanks for sharing your thoughts, it seems to work so far. But if I remember it right, the Letsencrypt container has to be added to the Jitsi Docker network each time the Letsencrypt container restarts, correct? I seem to recall that Docker containers are only able to save one network.
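     If that is the case, the reconnect can be scripted - a small sketch, assuming the proxy container is called letsencrypt and the Jitsi network is called jitsi (both names are examples; check yours with docker network ls):

         # Re-attach the proxy container to the Jitsi network, e.g. from a post-start user script:
         docker network connect jitsi letsencrypt

         # Show which networks the container is currently attached to:
         docker inspect -f '{{json .NetworkSettings.Networks}}' letsencrypt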
  12. Sounds like an interesting idea. +1 demand unit for your proposal and +1 demand unit for the thread starter's request.
  13. Hey Jones, as you can see in my signature, I switched to a Ryzen 2600 and the server runs like a charm - despite some config errors during the setup (first unRAID setup ;). The CPU's cores are only fully utilized when I run foldingathome - even during three streams with transcoding via Jellyfin, the max. CPU usage is at about 40-60 percent (no graphics card at the moment). Max. CPU temp is around 37°C (~30°C idle) with the boxed cooler. I'm planning to upgrade the server with a graphics card and watercooling in the future, though. The SSD cache seems crucial to me, otherwise copying data to the server would definitely be too slow for me. The SanDisk NVMe SSDs get a little warm under heavy load (about 45-55°C), but that's just during intense copying sessions. Picking the WD HDDs was a good choice, too - no problems; even my 5-year-old WD Red 3TBs from my old FreeNAS still work fine without SMART errors or other issues. The case keeps the machine very silent. And finally, I am very happy about the two Gbit LAN ports - makes lots of stuff easier. The IPMI feature I only used for the setup... it made things a little bit easier, but since I never used it again, I'd say it is rather nice-to-have than a must-have. If you have any other questions, just PM me. Best wishes.
  14. Okay, thanks for the info. I decided to use a Nanoxia Deep Silence 6 Rev. B for now :)
  15. Ouch, the height. Yes, new case. Thanks. But no idea where to put a tower. Have to figure that out first.
  16. Yeah, you've got a point there. Although the estimated 20 hours are more than a little bit of time for me, I really don't want to mess around with the zpool... 😅 Thanks for your reply. - priest
  17. 16 Slots for 40 bucks? Congrats 💪😄 - priest
  18. I am new to unRAID, too. But I based my SATA controller decision on... *drum roll* ... the Hardware Compatibility List from the unRAID wiki 😁 It seems like the easiest way for you is buying a HighPoint RocketRAID 2740 or the recommended LSI SAS 9201-16i, both with 16 slots. I have no idea about the prices, though.
  19. Hi guys,

     since my old FreeNAS in a Fractal Node case has just 600 GB of free space left and I do not dare to fill the last two 3.5" slots with HDDs because of the ventilation, I plan to build a new server. After having read a lot about FreeNAS, Proxmox and unRAID, I am quite sure that I will like the easy setup and the whole storage system of unRAID with its easy storage upgrades. Furthermore, I really want to try virtualization; perhaps I can get rid of my gaming tower in the future. From what I've read, Proxmox seems to be problematic in terms of hardware passthrough, and the FreeNAS virtualization apparently is immature. Primarily, I want to use the unRAID server as a NAS for personal data and videos. But being sick of running multiple Raspis and of migrating the Kodi SQL databases every time I update Kodi, I also plan to run:

     - (4 cores, up to 8192 MB RAM) Jellyfin Docker (open-source Emby fork)
     - (1 core, up to 512 MB RAM) small VM for a Pi-hole
     - (1 core, up to 1024 MB RAM) Ubuntu Server VM for openHAB (home automation - runs on a Raspi 3+ at the moment)
     - (4 cores, up to 8192 MB RAM) perhaps Ubuntu Desktop to use as a personal workspace (at the moment, Ubuntu is installed on its own SSD in my Windows 10 gaming PC - but I would love to separate Windows from my personal Linux workspace)
     - would love to additionally run an instance of Debian for a max2play server (multiroom audio) - 2 cores with 1024 MB RAM would be enough

     So, I finally ended up with the following configuration for my future unRAID server (please correct me if my assumptions are wrong - that's the purpose of this post):

     Setup
     - Case: Nanoxia Deep Silence 6 Rev. B - lots of space for HDDs, SSDs and ventilation
     - PSU: 650 W Corsair TX650M Modular 80+ Gold - 650 W because I plan to buy a GeForce GTX 1650 with H.265 transcoding support for Jellyfin in the future and/or to upgrade the CPU for a gaming VM - modular
     - Mainboard: ASRock Rack X470D4U - AM4 X470 server/consumer mainboard - I decided to go for an AM4 board because of 1. the TDP of the Ryzens and Threadrippers, 2. the price-performance ratio of AMD's consumer multicore CPUs and 3. the flexibility (since I am thinking about upgrading the server to a gaming VM host later) - the X470D4U has 2x Gbit LAN plus a third RJ45 port for IPMI 2.0, supports up to 128 GB DDR4 ECC RAM in 2x2 dual-channel slots, has 8x SATA 6Gb onboard, an onboard ASpeed AST2500 256 MB GPU and 2x M.2 full-profile slots (1x PCIe 3.0 x2, 1x PCIe 2.0 x4)
     - CPU: AMD Ryzen 5 1600, 6x 3.20 GHz - Socket AM4, no iGPU - since I have IPMI 2.0 and an onboard GPU, I do not need an iGPU, although it would be nice if it could be used by Jellyfin for transcoding - but from what I've read, passthrough of iGPUs seems complicated - 6 cores/12 threads for under 100 Euro
     - RAM: 16 GB Kingston KSM26ED8/16ME DDR4-2666 ECC - 2x for 32 GB RAM - listed on ASRock's Qualified Vendor List for the X470D4U mainboard, ECC works - 2x 16 GB because this way I will be able to upgrade to 64 GB without any problems (I am pretty sure I won't need 128 GB RAM during the next decade)
     - SSD cache: SanDisk Extreme PRO 500 GB M.2 NVMe 3D SSD - 2x (in RAID 1) for 500 GB cache - equivalent to the WD Black 3D NVMe SSD
     - HDDs: WD Red 4 TB - 4x 3.5" (+ 4x 3 TB 3.5" after data migration) - never had a problem with the Reds in my FreeNAS for 5 years now, so I bought new ones when they were on sale recently - after data migration, I will have the 4x 3 TB WD Reds from the old FreeNAS to extend the unRAID storage
     - SSDs: SanDisk Ultra 3D 512 GB - 2x 2.5" - got them on sale, not sure what to do with them tbh XD - thought about passing them through as native storage for the private Ubuntu VM and/or a possible Windows 10 VM?
     - HBA: LSI Broadcom 9201-8i 6Gbps SATA SAS HBA controller - should work out of the box (unRAID hardware compatibility list) - I only need 2 more SATA ports, but if I have to upgrade the server some day, I will be happy about the extra slots

     Plans for the future:
     - install a low-profile GeForce GTX 1650 and pass it through to Jellyfin for transcoding
     - upgrade the CPU (8-core AMD Ryzen 7 2700X or 12-core AMD Ryzen 9 3900X - depending on my budget ^^)
     - get a 10 Gbit LAN card when the house is built and Cat. 7 cables are installed - 'till then I am stuck on 1 Gbit powerline/DLAN adapters

     Plan for migration:
     - already installed unRAID on a SanDisk Cruzer Fit 16 GB USB 2.0 stick - tested on my Windows machine, was able to boot and start unRAID
     - assemble the server and start unRAID for the first time
     - build an array of 2x 4 TB WD Reds with 2x 4 TB WD Reds as two parity disks - data on the old FreeNAS: 7.11 TiB - capacity of the new array: 7.12 TiB (2 x 4 TB x 0.89 TiB per TB) - if it doesn't fit, my gaming machine has some additional space left - two parity disks because I really do not want to lose my media; I have a little PTSD from losing all my 380 music videos back in the times of cable modems - the personal data is additionally backed up end-to-end encrypted on a dedicated Nextcloud instance
     - copy the data (FreeNAS 1x Gbit -> Gbit 4-port managed switch -> unRAID 2x Gbit with the NVMe cache - limited to the 1x Gbit of the FreeNAS)
     - wait - wait - ... (think again about my idea to host a FreeNAS VM, install the 4x 3 TB in the new server, import the zpool from the old FreeNAS and copy the stuff directly, still not being sure if that would have worked out)... - wait - done!
     - after copying the data, remove the 4x 3 TB HDDs from the FreeNAS, install them in the rack and preclear them
     - add them to the array

     Sooooo, my questions:
     1. Have I overlooked something, or is my plan reasonable?
     2. Since I am new to virtualization: is it possible and advisable to assign more cores than my CPU has threads? In my case, can I assign more than 12 cores with a 6-core/12-thread CPU?
     3. Is the combination of CPU and RAM strong enough to virtualize an additional Windows 10 VM? With a GTX 1650, I should be able to in-home-stream Steam games, right?
     4. What should I do with the 'additional' 2x 2.5" SanDisk Ultra 3D 512 GB? Is it possible to use the 2x 500 GB M.2 NVMe as cache and the 2x 512 GB 2.5" as backup? In doing so, I would increase my SSD cache to 1 TB. Or asked differently, would a RAID 1 of 2x 500 GB NVMe M.2 and 2x 512 GB 2.5" slow down the cache because of the significantly lower read/write speed of the 2.5" SSDs?
     5. What do you think about the WD Reds? During my research, I saw lots of people using non-NAS-specific HDDs like the WD Greens. I am still not sure if the NAS HDDs can play to their strengths in terms of 24/7 activity with the unRAID file system - since I don't know enough about it. But I already bought them and didn't read anything bad about them in posts about unRAID.
     6. What about my migration plan? I am really not sure about copying via LAN - but adding a FreeNAS VM, passing through the SATA controller and importing the old zpool to copy the data within the machine seems a little bit risky to me, since I am inexperienced with VMs. Somewhere I read that preclearing isn't necessary anymore because unRAID is able to clear disks without array downtime now - is that correct?

     Thanks to those who took the time to read my post... and to those who try to answer my questions - priest