
HojojojoWololo


Posts posted by HojojojoWololo

  1. On 8/28/2022 at 5:36 PM, HojojojoWololo said:

    Hi guys,

     

    I need some help, too, because I can't figure out what to do even after several hours of research.

     

    Problem: I have been using WireGuard for a few months and everything works fine, since everyone who connects via WireGuard is supposed to have complete access to the server's LAN (my wife and I). But on my server, there is one Docker container that I allow some friends to access. For that purpose, I used an OpenVPN container, since I was able to restrict the VPN access to just that one container (within the OpenVPN config, I was able to map certain users to certain IPs within the server's Docker network). Now the OpenVPN docker is EOL for Unraid and, coincidentally, my OpenVPN setup broke. My problem: how can I set this up via WireGuard in Unraid?

     

    I do not want those people to access my whole server/LAN/... but only one specific Docker container (its IP is only "fixed" by the boot order of the Docker containers - not by assigning a static IP to the container itself).

     

    Hopefully, someone has some tips for me :)

     

     

    Up :)

  2. On 8/23/2022 at 11:17 AM, Gazeley said:

    I've probably spent a dozen hours on this, but I'm still hitting snags all over the place. An updated guide seems necessary, because apparently a lot has changed since the first couple of pages in this thread.

     

    After combing through every post here I have a semi-working matrix server but:

    • I can't invite or chat with people outside of my server. I assume this has something to do with federation, but I have no idea what's wrong.

    • The Element docker doesn't work. I get a generic "Your Element is misconfigured - Unexpected error resolving homeserver configuration" message. Pretty stumped on this, because according to the setup guide all there is to do is add my domain in two spots in the config.json file... not sure how I could have screwed that up.

    • I can't get Jitsi installed at all. I followed spaceinvaderone's video precisely but when it comes time to download/install the 4 docker images with that script, 2 of them throw a bunch of errors and fail to start.

     

    If anyone has any input it'd be appreciated.

     

    ------------------------------------------------------

    ------------------------------------------------------

     

    (EDIT 2)

     

    Finally fixed the federation issue! I've been tearing my hair out over this. It's been a Cloudflare issue all along - you have to toggle the "Proxy Status" on the CNAME record from the default "Proxied" to "DNS Only".

     

    I pass all checks at https://federationtester.matrix.org/

     

    Also I fixed element by adding the following to the matrix homeserver.yaml:

    web_client_location: https://chat.mydomain.com

     

    ------------------------------------------------------

    ------------------------------------------------------

     

    (EDIT 3)

     

    😭 I spoke too soon. Switching the CNAME from 'Proxied' to 'DNS Only' did fix federation, but eventually it broke my subdomain to where I couldn't reach bridge.mydomain.com anymore.

     

    Somehow the issue is with DNS and Cloudflare and Federation but it's all over my head and I can't find any good documentation.

     

    ------------------------------------------------------

    ------------------------------------------------------

     

    (EDIT 4)

     

    Apparently, if you want federation, the subdomain (bridge.mydomain.com) needs to be an A record, NOT a CNAME record. You also need to create an SRV record like so:

     

    (screenshot: Cloudflare SRV record settings)
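
    For reference, the SRV record described here is Matrix's federation delegation record. In BIND zone-file syntax it would look roughly like this - the domain names are placeholders, 8448 is the default federation port, and the priority/weight values (10, 5) are just examples:

```
_matrix._tcp.mydomain.com. 3600 IN SRV 10 5 8448 bridge.mydomain.com.
```

    Whatever values you use, https://federationtester.matrix.org/ will tell you whether the record is being picked up.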

     

    After days of banging my head against the wall this finally got federation working for me.

     

    ------------------------------------------------------

    ------------------------------------------------------

    (EDIT 5)


    AAAAAHHHHHHHHHHH!!!! I spoke too ****ing soon again! After changing the Cloudflare settings above, I finally pass the federation check - but I still can't chat with other users.

     

    This is a living nightmare but I'm in too deep to give up.

     

    ------------------------------------------------------

    ------------------------------------------------------

    (EDIT 6)

     

    *incoherent cursing and sobbing*

     

    ------------------------------------------------------

    ------------------------------------------------------

    (EDIT 7)

     

    Finally got it!!! 😄 The A record on Cloudflare has to be toggled to "DNS Only". I knew it was going to be something stupidly simple. I've never had an issue with Cloudflare proxies before - but apparently Matrix federation does not like it one bit.

     

    ------------------------------------------------------

    ------------------------------------------------------

    (EDIT 8)

     

    This is a goddamn Greek tragedy.

     

    I made the above edit while I was at work - on a different IP. But I just got home from work to discover I can't access bridge.mydomain.com at all, presumably because DNS resolution doesn't work properly when the source and destination IP are the same (NAT reflection). If I turn the proxy back on, everything works great internally, but then I'm isolated to my own server with no federation again.

     

    Curse you @yinzer!!!! And you @HojojojoWololo!!! And everyone else who made this look easy and led me down this dark path! I rue the day I ever found this thread.

     

    ------------------------------------------------------

    ------------------------------------------------------

    (EDIT 9)

     

    I'm almost scared to write this for fear of tempting fate - but I seem to have resolved the issue.

     

    Turns out the issue was my firewall. I had to enable "Automatic outbound NAT for Reflection" in OPNsense under Firewall > Settings > Advanced.

     

    It's been a long hard road and this journey has transformed me. I'm no longer the same naive boy who thought setting up a matrix server would be a fun Saturday project. I'm now a grizzled veteran of unraid networking, a guru of OPNsense, and a master of matrix. But I've lost the gleam in my eye and the wind in my soul.

     

     

     


     

     

     

    Attached: homeserver.yaml, config.json, turnserver.conf, matrix.subdomain.conf, element-web.subdomain.conf

     

    In my defense, the post is almost two years old (so it's pretty outdated), and it took me a days-long odyssey to get it to work, too. I mentioned that here, though, so you could have been forewarned :D But I can absolutely understand your annoyance, and when an update of Jitsi failed last year, I decided to get rid of it because the setup was so painful.

  3. Hi guys,

     

    I need some help, too, because I can't figure out what to do even after several hours of research.

     

    Problem: I have been using WireGuard for a few months and everything works fine, since everyone who connects via WireGuard is supposed to have complete access to the server's LAN. But on my server, there is one Docker container that I allow some friends to access. For that purpose, I used an OpenVPN container, since I was able to restrict the VPN access to just that one container (within the OpenVPN config, I was able to map certain users to certain IPs within the server's Docker network). Now the OpenVPN docker is EOL for Unraid and, coincidentally, my OpenVPN setup broke. My problem: how can I set this up via WireGuard in Unraid?

     

    I do not want those people to access my whole server/LAN/... but only one specific Docker container (its IP is only "fixed" by the boot order of the Docker containers - not by assigning a static IP to the container itself).

     

    Hopefully, someone has some tips for me :)
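
    A possible direction (an untested sketch, not something confirmed in this thread): WireGuard has no per-user access rules like the OpenVPN client configs, but the host's firewall can enforce the same restriction. In the fragment below, the interface name wg0, the peer address 10.253.0.2 and the container address 172.18.0.5 are all placeholders:

```
# Untested sketch: let one WireGuard peer reach a single container only.
# wg0, 10.253.0.2 (peer) and 172.18.0.5 (container) are placeholders.
iptables -A FORWARD -i wg0 -s 10.253.0.2 -d 172.18.0.5 -j ACCEPT
iptables -A FORWARD -i wg0 -s 10.253.0.2 -j DROP
```

    On that friend's client config, AllowedIPs = 172.18.0.5/32 would additionally keep other traffic out of the tunnel, though only the server-side rules actually enforce the restriction.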

     

     

  4. But I still have some problems here. The biggest one: Rocket.Chat isn't able to connect to the Rocket.Chat Cloud. I set up an account and had to register it manually - the link for the online registration with my Rocket.Chat Cloud account didn't work. When I want to connect to the cloud services and click on the sign-in button within my Rocket.Chat server, I am redirected to a cloud.rocket.chat URI to log in, which returns this error:

    Quote

    Error: Invalid redirect uri. If you are attempting to login to your workspace, please go back and click the "Sync" button (a3da349d-e022-4604-8f67-1e9a19feef8c).

    I tried clicking Sync - Rocket.Chat even tells me that the sync was successful. But the error message appears again when trying to sign in.

     

    I logged into cloud.rocket.chat and removed the server from my workspaces. I also noticed that I had some kind of dummy second workspace called "Your Workspace" registered. I deleted this one, too. In a second step, I was able to register again - with the online token. I was directed to the correct URI and was able to log in with my credentials. But another error message appeared:

    Quote

    The Rocket.Chat Workspace you are attempting to sign into is currently associated with a different Rocket.Chat Cloud account than the one you are signed in with. (09f1ef34-7813-40a8-926d-3498e1827f2a)

     

    I also tried to delete the cloud sync info from my database using the MongoDB container's console:

    mongo
    
    use rocketchat
    
    db.rocketchat_settings.remove({"_id": "Cloud_Workspace_Id"});
    
    db.rocketchat_settings.remove({"_id": "Cloud_Workspace_Client_Id"});
    
    db.rocketchat_settings.remove({"_id": "uniqueID"});

    But the "associated with a different Rocket.Chat Cloud account" error is still there.

     

    Is there anyone who has a solution for this problem?

  5. On 10/7/2020 at 5:45 PM, MrKevbo said:

    Exact same thing happened to me.  It looks like it broke something with the replication, but I haven't been able to fix it yet.

    Any luck on your end?

    I think I found a solution. I had a problem with the authorization while installing Rocket.Chat, too. After reading that working with a MongoDB replica set requires both an account and a keyfile, I opened a terminal and changed into the MongoDB appdata folder:

    cd /mnt/user/appdata/mongodb

    Then I created a keyfile and set read/write permissions with the following two commands:

    openssl rand -base64 741 > mongodb.key
    
    chmod 600 mongodb.key

    Afterwards, I edited the mongodb.conf file in the MongoDB appdata folder and added the path to the keyfile:

    security:
      authorization: enabled
      keyFile: /data/db/mongodb.key

    Now the enabled authorization works for me 💪😁

  6. 7 hours ago, jafi said:

    Thank you to everybody who has contributed to this topic.

     

    I tried to run matrix again, but still no luck. Different problem now, I think.

     

    When I try to access bridge.mydomain.xyz via browser I get 502 Bad Gateway.

    Don't lose hope - I think you've nearly got it. Reminds me of where I was a few days ago 😁

    Try what I posted above: in your homeserver.yaml, put client and federation in 'inverted commas':

      - port: 8008
        tls: false
        type: http
        x_forwarded: true
        bind_addresses: ['0.0.0.0']
        
        resources:
          - names: ['client', 'federation']
    
      - port: 8448
        tls: false
        type: http
        x_forwarded: true
        bind_addresses: ['0.0.0.0']
        
        resources:
          - names: ['federation']

     

    Also set the database path to /data/homeserver.db:

     

    database:
      name: sqlite3
      args:
        database: /data/homeserver.db

     

    Additionally, try adding a TCP TURN URI (remember to forward port 3478 to your server for both protocols):

     

    turn_uris: ["turn:bridge.mydomain.xyz:3478?transport=udp", "turn:bridge.mydomain.xyz:3478?transport=tcp"]

     

    And finally, remember to change your secrets and keys, since you shared them with us - that's not the meaning of shared_secret 😉😁 I hope you get that thing running. If that doesn't help, please post your turnserver.conf (without the shared secret, of course).

     

  7. On 10/20/2020 at 5:13 PM, waymon said:

    Is this guide still good? I would love to set up a Matrix chat client on my unraid box.

     

    Yep, the tutorial kinda works with a few adjustments. But since I had to work my way through multiple posts and other sites, I would love to spare you the pain.

     

    My initial setup was an unraid server running Swag (since the Letsencrypt docker won't be supported anymore due to naming rights - spaceinvaderone made a great tutorial on how to switch from the Letsencrypt docker to Swag). Yinzer's tutorial for the Letsencrypt docker still seems fine, though you really should use the Swag docker instead. Furthermore, Jitsi was already up and running when I started to install Matrix (thanks to spaceinvaderone, again 😄), so I will skip that part. If you have to set up a reverse proxy (be sure to use the Swag container instead of the Letsencrypt container) or want to switch to Swag, spaceinvaderone's videos are really helpful. My adjustments to @yinzer's Matrix setup:

     

    Setting up Swag (formerly Letsencrypt)

     

    matrix.subdomain.conf - thanks to @akamemmnon for his config

    server {
    	listen 443 ssl;
    	listen 8448 ssl;
    	
    	server_name bridge.*;
    
    	include /config/nginx/ssl.conf;
    	
    	client_max_body_size 0;
    
    	location / {
    		include /config/nginx/proxy.conf;
    		resolver 127.0.0.11 valid=30s;
    		set $upstream_app your.unraid.server.ip;
    		set $upstream_port 8008;
    		set $upstream_proto http;
    		proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    		proxy_set_header X-Forwarded-For $remote_addr;
    	}
    	
    	location /.well-known/matrix/server {
    		default_type application/json;
    		return 200 '{"m.server": "yourdomain.com:443"}';
    		add_header Access-Control-Allow-Origin *;
    	}
    }

    Make sure to change your.unraid.server.ip to your unraid server's IP address and yourdomain.com to your domain name 😁

     

    Since Riot was renamed to Element, there is a new container, so we will use that one instead of Riot and have to adjust the Swag configuration file.

     

    element-web.subdomain.conf

     server {
           listen 443 ssl;
           server_name chat.*;
           include /config/nginx/ssl.conf;
           client_max_body_size 0;
    
           location / {
                   include /config/nginx/proxy.conf;
                   resolver 127.0.0.11 valid=30s;
                   set $upstream_app element-web;
                   set $upstream_port 80;
                   set $upstream_proto http;
                   proxy_pass $upstream_proto://$upstream_app:$upstream_port;
           }
    }

     

    Install Matrix and configure it according to yinzer's tutorial. Adjustments:

     

    Setting up Matrix

     

    homeserver.yaml

     

    under "listeners" in the "# Unsecure HTTP listeners: for when matrix traffic passes through a reverse proxy" section:

      - port: 8008
        tls: false
        type: http
        x_forwarded: true
        bind_addresses: ['0.0.0.0']
        
        resources:
          - names: ['client', 'federation']
    
      - port: 8448
        tls: false
        type: http
        x_forwarded: true
        bind_addresses: ['0.0.0.0']
        
        resources:
          - names: ['federation']

    Make sure you respect the YAML syntax - that's what caused the syntax errors for @lewisd19, @jafi and @l2evy. No tabs, just spaces! Additionally, the resource names have to be wrapped in inverted commas: 'text'. The examples above this section can help you with this.
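
    A quick way to catch the tab problem before restarting Synapse is a one-liner like this (the helper name and paths are just examples):

```shell
# Hypothetical helper: flag tab characters in a YAML file, since Synapse's
# YAML parser only accepts spaces for indentation.
check_tabs() {
  if grep -q "$(printf '\t')" "$1"; then
    echo "tabs found in $1"
    return 1
  fi
  echo "no tabs in $1"
}
```

    Then run e.g. `check_tabs /mnt/user/appdata/matrix/homeserver.yaml` before restarting the container.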

     

    If you use the standard SQLite database, make sure you change the database path - thanks to @spyd4r for your input.

    database:
      name: sqlite3
      args:
        database: /homeserver.db
    
    SHOULD BECOME
    
    database:
      name: sqlite3
      args:
        database: /data/homeserver.db

     

    turnserver.conf

     

    Delete the first line, which says "lt-cred-mech", since we use "use-auth-secret". Also think about adding the pidfile and userdb lines yinzer posted in his tutorial. My turnserver.conf looks like this:

    use-auth-secret
    static-auth-secret=YOUR-STATIC-AUTH-SECRET
    realm=turn.bridge.yourdomain.com
    cert=/data/bridge.yourdomain.com.tls.crt
    pkey=/data/bridge.yourdomain.com.tls.key
    dh-file=/data/bridge.yourdomain.com.tls.dh
    cipher-list="HIGH"
    pidfile=/data/turnserver.pid
    userdb=/data/turnserver.db

     

    Setting up Element-Web (based on @yinzer's tutorial for Riot Chat)

     

    1. Before we start, we need to manually create the config path and pull in the default config, so open a terminal/SSH into your server.

    2. Create the config path by executing

    mkdir -p /mnt/user/appdata/element-web/config

    3. Download the default config by executing

    wget -O /mnt/user/appdata/element-web/config/config.json https://raw.githubusercontent.com/vector-im/element-web/develop/element.io/app/config.json

    4. In Community Applications, search for `element-web` by vectorim
    5. Set the `Network Type` to `Custom: ssl proxy`
    6. Set the `Fixed IP address` to `172.20.0.20` (or whatever)
    7. The rest of the settings should be fine. Create the container and run it.

     

    Now let's edit our Element config. It's a JSON file, so make sure you respect the JSON syntax.


    1. Edit /mnt/user/appdata/element-web/config/config.json

    2. Change 'default_server_name' to

    "default_server_name": "bridge.yourdomain.com",

    3. Insert your domain to the 'roomDirectory'

    "roomDirectory": {
            "servers": [
                "bridge.yourdomain.com",
                "matrix.org",
                "gitter.im"
            ]
        }

    4. Add the following entry to the top level of the config, alongside the existing keys:

    "jitsi": {
      "preferredDomain": "meet.yourdomain.com"
    },

     

    Caution: using a Jitsi server with authentication enabled doesn't work with Element! And this should also be noted:

    Quote

    Currently the Element [iOS app] does not support custom Jitsi servers and will instead use the default jitsi.riot.im server. When users on the mobile apps join the call, they will be joining a different conference which has the same name, but not the same participants. (see here)

     

     

    Jitsi Setup

     

    Just follow spaceinvaderone's instructions in this video.

     

    But for setting up a working Matrix synapse and the Element-web container, that should be it. @yinzer Feel free to update your initial post with these adjustments 😃

  8. Thanks for the tutorial and all your work @yinzer - although it could use a little update. Finally, I got Matrix working... after nearly 3 days of work and reading this thread as well as various other sites again and again 😀 Element-Web is also connected now, and I was able to create a user account on my Matrix synapse. Federation and the integrations manager seem to work, too.

     

    *edit* Problem solved. Stupid mistake because of lack of sleep. Night shifts ^^

     

    But the log of my Matrix container shows the same "Socket: protocol not supported" errors @xthursdayx described in this GitHub post. It must be the TURN part of the Matrix container. @xthursdayx: did you find a solution for your problem?

  9. On 4/17/2020 at 11:11 PM, Wouterr said:

    Got it working by installing the NerdPack (in Community Applications), from which I installed pip.

    With pip you can then install docker-compose.

     

    Then just follow the quick start at https://github.com/jitsi/docker-jitsi-meet and play around with the .env configuration file.

    Make sure to set the secrets and passwords to something, or the application might not start.

    Also, some important settings to change:

    CONFIG=/mnt/user/appdata/jitsi-meet-cfg

    DOCKER_HOST_ADDRESS=<your unraid server IP>

     

    If you can forward ports 80 and 443 to that docker, great - set ENABLE_LETSENCRYPT and you're done. Since I had already set up an nginx reverse proxy in a Letsencrypt docker (as per @SpaceInvaderOne's tutorial), I could not use the built-in functionality.

     

    A quick solution is to join your Letsencrypt docker to the jitsi.meet docker network. This can be done through the terminal with the command

    
    docker network connect docker-jitsi-meet_meet.jitsi letsencrypt

    where letsencrypt is my Letsencrypt docker acting as the reverse proxy and docker-jitsi-meet_meet.jitsi is the docker network name created during docker-compose.

     

    Use the following jitsi.subdomain.conf in appdata\letsencrypt\nginx\proxy-confs,

    where server_name meet.* points to the subdomain I have configured to serve the Jitsi web interface:

    
    # https://community.jitsi.org/t/issues-with-using-nginx-on-separate-server/15783/8
    server {
    	listen 80;
    	listen 443 ssl;
    	server_name meet.*;
    
    
    	location / {
    		ssi on;
    		proxy_pass http://meet.jitsi;
    		proxy_set_header X-Forwarded-For $remote_addr;
    		proxy_set_header Host $http_host;
    	}
    	# BOSH
    	location /http-bind {
    		proxy_pass http://xmpp.meet.jitsi:5280/http-bind;
    		proxy_set_header X-Forwarded-For $remote_addr;
    		proxy_set_header Host $http_host;
    	}
    
    	# xmpp websockets
    	location /xmpp-websocket {
    		proxy_pass              http://xmpp.meet.jitsi:5280/xmpp-websocket;
    		proxy_http_version      1.1;
    		proxy_set_header        Upgrade $http_upgrade;
    		proxy_set_header        Connection "upgrade";
    		proxy_set_header        Host $host;
    		tcp_nodelay             on;
    	}
    }

     

     

    Nice workaround. Thanks for sharing your thoughts; it seems to work so far. But if I remember right, the Letsencrypt container has to be added to the Jitsi docker network each time the Letsencrypt container restarts, correct? I seem to recall that the docker containers are only able to save one network.
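
    If the network membership does indeed get lost on restart, one workaround could be a small script run periodically (e.g. via the User Scripts plugin). This is an untested sketch - the helper name is mine, and the container/network names are the ones from the post above:

```shell
# Hypothetical helper: re-attach a container to a docker network if it is
# not currently a member ("docker network connect" fails if already joined).
ensure_on_network() {
  net="$1"; ctr="$2"
  if ! docker network inspect "$net" --format '{{range .Containers}}{{.Name}} {{end}}' 2>/dev/null | grep -qw "$ctr"; then
    docker network connect "$net" "$ctr"
  fi
}
```

    Then a scheduled call like `ensure_on_network docker-jitsi-meet_meet.jitsi letsencrypt` keeps the proxy attached without erroring when it already is.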

  10. On 3/26/2020 at 10:07 PM, hernandito said:

    I offer +100 demand units....

     

    We need to implement a demand-multiplying factor based on how long people have been using unRAID. You earn 2 demand unit credits a month. Developers can submit bids to create the docker. All demand points accumulate, and once the units are reached and the developer follows through, they get the units added to their account. Developers can then award back the units in their accounts to donors. I am saying this mostly in jest, but hey.... you never know.

    Sounds like an interesting idea. +1 demand unit for your proposal and +1 demand unit for the thread starter's request.

  11. Hey Jones,

     

    as you can see in my signature, I switched to a Ryzen 2600 and the server runs like a charm - despite some config errors during the setup (my first unRAID setup ;)). The CPU's cores are only fully utilized when I run Folding@home - even during three streams with transcoding via Jellyfin, the max CPU usage is about 40-60 percent (no graphics card at the moment). The max CPU temp is around 37°C (~30°C at idle) with the boxed cooler. I'm planning to upgrade the server with a graphics card and water cooling in the future, though.

     

    The SSD cache seems crucial to me; otherwise, copying data to the server would definitely be too slow for me. The SanDisk NVMe SSDs get a little warm under high workload (about 45-55°C), but that's just during intense copying sessions. Picking the WD HDDs was a good choice, too - no problems; even my 5-year-old WD Red 3TBs from my old FreeNAS still work fine without SMART errors or other issues.

     

    The case keeps the machine very quiet.

     

    And finally, I am very happy with the two Gbit LANs - makes lots of stuff easier. The IPMI feature I only used for the setup... it made things a little bit easier, but since I never used it again, I'd say the feature is nice-to-have rather than a must-have.

     

    If you have any other questions, just PM me.

     

    Best wishes.

  12. 58 minutes ago, Benson said:

    The ventilation of a rack case is actually poorer than that of a tower case due to its length, and 4x 80mm fans are even worse. The height (2U) is also a limitation for the CPU heatsink. If there is no special benefit (i.e. form factor), then there is no reason to go with a rack case.

     

    This should be the main reason why we use Unraid instead of the others, but I suggest you first try to set everything up (VM / docker / networking ...) on existing hardware or with only a new CPU / MB, because you may change the design in the end.

    Ouch, the height. Yes, new case. Thanks. But I have no idea where to put a tower - I'll have to figure that out first.

  13. Just now, trurl said:

    LAN makes more sense to me. Why go to a lot of extra trouble to save a little bit of time for a one time operation?

    Yeah, you've got a point there. Although the estimated 20 hours are more than a little bit of time for me, I really don't want to mess around with the zpool... 😅 Thanks for your reply.

    - priest

  14. Hi guys

     

    Since my old FreeNAS in a Fractal Node case has just 600GB of free space left and I do not dare to fill the last two 3.5" slots with HDDs because of the ventilation, I plan to build a new server. After having read a lot about FreeNAS, Proxmox and unRAID, I am quite sure that I will like the easy setup and the whole storage system of unRAID with its easy storage upgrades. Furthermore, I really want to try virtualization - perhaps I can get rid of my gaming tower in the future. From what I've read, Proxmox seems to be problematic in terms of hardware passthrough, and the FreeNAS virtualization apparently is immature.

     

    Primarily, I want to use the unRAID server as a NAS for personal data and videos. But being sick of running multiple Raspis and of migrating the Kodi SQL databases every time I update Kodi, I also plan to run

    - (4 cores, up to 8192 RAM) Jellyfin Docker (OpenSource Emby fork)

    - (1 core, up to   512 RAM) small VM for a PiHole

    - (1 core, up to 1024 RAM) VM for Ubuntu Server for OpenHAB (home automation - runs on a Raspi 3+ at the moment)

    - (4 cores, up to 8192 RAM) perhaps Ubuntu Desktop to use as a personal workspace (at the moment, Ubuntu is installed on its own SSD in my Windows 10 gaming PC - but I would love to separate Windows from my personal Linux workspace)

    - would love to additionally run an instance of Debian for a max2play server (Multiroom-Audio) - 2 cores with 1024 RAM would be enough

     

    So, I finally ended up with the following configuration for my future unRAID server (please correct me if my assumptions are wrong - that's the purpose of this post) :)

     

     

     

    Setup

    Case:  Nanoxia Deep Silence 6 Rev. B

      -  lots of space for HDDs, SSDs and ventilation

     

    PSU: 650 Watt Corsair TX650M Modular 80+ Gold

      -  650 Watt because I plan to buy a Geforce 1650 with h.265-transcoding support for Jellyfin in the future and/or to upgrade the CPU for a gaming VM

      -  modular

     

    Mainboard: AsRock Rack X470D4U - AM4 X470 server/consumer mainboard

      -  I decided to go for an AM4 board because of 1. the TDP of the Ryzens and Threadrippers 2. the price-performance ratio of AMDs consumer multicore CPUs and 3. because I am more flexible (since I think about upgrading the server to a gaming VM host later)

    - the X470D4U has 2x Gbit LAN + a third RJ45 port for IPMI 2.0, supports up to 128 GB DDR4 ECC-RAM in 2x2 dual channel slots, has 8x SATA 6Gb onboard, an onboard ASpeed AST2500 256 MB GPU and 2x M.2 full profile slots (1x PCIe 3.0 x2, 1x PCIe 2.0 x4)

     

    CPU: AMD Ryzen 5 1600 6x 3.20 GHz - Socket AM4, no iGPU

      -  since I have IPMI 2.0 and an onboard GPU, I do not need an iGPU, although it would be nice if it could be used by Jellyfin for transcoding - but from what I've read, passthrough of iGPUs seems complicated

      -  6 cores/12 threads for <100 Euro

     

    RAM: 16GB Kingston KSM26ED8/16ME DDR4-2666 ECC - 2x for 32GB RAM

      -  listed on ASRocks Qualified Vendor List for the X470D4U mainboard, ECC works

      -  2x 16GB because this way I will be able to upgrade to 64GB without any problems (I am pretty sure that I won't need 128GB of RAM in the next decade)

     

    SSD-Cache: SanDisk Extreme PRO 500 GB M.2 NVMe 3D SSD - 2x (in RAID1) for 500GB Cache

      -  equivalent to WD Black 3D NVMe SSD

     

    HDDs: WD Red 4000 GB - 4x 3.5" (+ 4x 3000 GB 3.5" after data migration)

      -  never had a problem with the Reds in my FreeNAS for 5 years now, so I bought new ones when they were on sale recently

      -  after data migration, I will have the 4x 3000 GB WD Reds from the old FreeNAS to extend the unRAID storage

     

    SSDs: SanDisk Ultra 3D 512 GB - 2x 2.5"

      -  got them on sale; not sure what to do with them, tbh XD I thought about passing them through as native storage for the private Ubuntu VM and/or a possible Windows 10 VM?

     

    HBA: LSI Broadcom 9201-8i 6Gbps SATA SAS HBA Controller

      -  should work out of the box (unRAID hardware compatibility list)

      -  I just need 2 more SATA ports, but if I have to upgrade the server some day, I think I will be happy about the extra ports

     

    Plans for the future:

      -  install a low profile Geforce 1650 and pass it through to Jellyfin for transcoding

      -  update the CPU (8 core AMD Ryzen 7 2700X or 12 core AMD Ryzen 9 3900X - depending on my budget ^^)

      -  get a 10 Gbit LAN card when the house is built and Cat. 7 cables are installed - till then I am stuck with 1 Gbit powerline/DLAN adapters

     

     

     

     

    Plan for migration:

    - already installed unRAID on a SanDisk Cruzer Fit 16GB USB2.0 - tested on my Windows machine, was able to boot and start unRAID

    - assemble the server and start unRAID the first time

    - build an array of 2x 4 TB WD Reds with 2x 4 TB WD Reds as two parity disks

      -  data on the old FreeNAS: 7.11 TiB

      -  capacity of the new array: 7.12 TiB (2 × 4 TB × 0.89 TiB per TB) - if it doesn't fit, my gaming machine has some additional space left

      -  Two parity disks because I really do not want to lose my media - I have a little PTSD from losing all my 380 music videos back in the times of cable modems. The personal data is additionally E2EE-backed-up on a dedicated Nextcloud instance.

    - copy the data (FreeNAS 1x Gbit -> 4-port managed Gbit switch -> unRAID 2x Gbit with the NVMe cache - limited to the 1x Gbit of the FreeNAS)

    - wait

    - wait

    - ... (think again about my idea to host a FreeNAS VM, install the 4x 3 TB in the new server, import the zpool from the old FreeNAS and copy the stuff directly, still not being sure if that would have worked out)...

    - wait

    - done!

    - :) after copying the data, remove the 4x 3 TB HDDs from the FreeNAS, install them in the rack and preclear them

    - add them to the array
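
    As a cross-check of the capacity math above: the exact TB-to-TiB factor is 10^12 / 2^40 ≈ 0.9095, so the 0.89 used above is a slightly conservative rounding - two 4 TB data disks give about 7.28 TiB usable. A quick sketch (the helper name is just an example):

```shell
# Vendors rate drives in decimal terabytes (10^12 bytes); filesystems
# report binary tebibytes (2^40 bytes). Convert between the two via awk.
tb_to_tib() {
  awk -v tb="$1" 'BEGIN { printf "%.2f\n", tb * 10^12 / 2^40 }'
}

tb_to_tib 8   # two 4 TB data disks (the other two are parity)
```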

     

     

     

    Sooooo, my questions:

     

    1. Have I overlooked something, or is my plan reasonable?

     

    2. Since I am new to virtualization: is it possible and advisable to assign more cores than my CPU has threads? In my case, can I assign more than 12 cores with a 6-core/12-thread CPU?

     

    3. Is the combination of CPU and RAM strong enough to virtualize an additional Windows 10 VM? With a GTX 1650, I should be able to in-home-stream Steam games, right?

     

    4. What should I do with the 'additional' 2x 2.5" SanDisk Ultra 3D 512GB? Is it possible to use the 2x 500GB M.2 NVMe as cache and the 2x 512GB 2.5" as a back-up? In doing so, I would increase my SSD cache to 1TB. Or, asked differently, would a RAID1 of 2x 500GB NVMe M.2 and 2x 512GB 2.5" slow down the cache because of the significantly lower read/write speed of the 2.5" SSDs?

     

    5. What do you think about the WD Reds? During my research, I saw lots of people use non-NAS-specific HDDs like the WD Greens. I am still not sure if the NAS HDDs can play to their strengths in terms of 24/7 activity with the unRAID file system - since I don't know enough about it. But I already bought them and didn't read anything bad about them in posts about unRAID.

     

    6. What's with my migration plan?

    - I am really not sure about copying via LAN - but adding a FreeNAS VM, passing through the SATA controller and importing the old zpool to copy the data within the machine seems a little bit risky to me since I am inexperienced with VMs

    - somewhere I read that preclearing isn't necessary anymore because unRAID is able to preclear without array downtime now; is that correct?

     

    Thanks to those who took the time to read my post... and to those who try to answer my questions :)

     

    - priest
