Leaderboard

Popular Content

Showing content with the highest reputation on 10/14/21 in all areas

  1. AutoAdd Issue in Deluge

First, let me say that I've benefited greatly from this container and the help in this thread, so thank you all. And although I'm running the container on a Synology unit, I thought I'd finally give something back here for anyone who may be having a similar issue.

Background

The container was running great for me up to binhex/arch-delugevpn:2.0.3-2-01, but any time I upgraded past that I had problems with Deluge's AutoAdd functionality (basically its "watch" directory capability, formerly an external plugin, now baked into Deluge itself). Certain elements worked, like Radarr/Sonarr integration, but other elements broke, like manual search inside Jackett, which relies on a blackhole watch folder. I ended up just rolling back the container, it worked fine again, and I stopped worrying about it for a while. However, with the new (rare) potential for IP leakage, it's been nagging at me to move to the new container versions.

Initially, I wasn't sure if it was the container, the VPN, or Deluge itself, but it always kind of felt like Deluge, given the VPN was up, I could download torrents, and Radarr/Sonarr integration worked -- it was only AutoAdd turning itself off and misbehaving on Jackett manual searches. I'm actually surprised I haven't seen more comments about this here because of how AWESOME using Jackett this way is! (Y'all are missing out, LOL).

The Fix

I finally put my mind to figuring this out once and for all yesterday, and I believe I tracked it down. It turns out the baked-into-Deluge AutoAdd is currently broken for certain applications (like watching for magnets), and that version is in the current binhex containers. Even though the fix hasn't been promoted into Deluge yet (so of course not into the container yet either), there is a manual fix available, and it's easy: just drop an updated AutoAdd egg into the Deluge plugins folder and it will take precedence over the baked-in version. I will say that I literally just implemented and tested this, so it's possible I'll still run into problems, but it's looking promising at the moment. Thanks again for this container and this thread, enjoy! The temporary fix can be found here
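For anyone wondering what "drop the egg in" looks like in practice, here is a rough sketch. The container name and host paths are assumptions based on a typical binhex-delugevpn setup, and the egg filename is a placeholder (adjust for your own appdata mapping and the actual file you downloaded):

# stop Deluge so it doesn't scan plugins mid-copy
docker stop binhex-delugevpn
# copy the fixed AutoAdd egg into the config's plugins folder, where it
# takes precedence over the version bundled with Deluge
cp AutoAdd-fixed.egg /mnt/user/appdata/binhex-delugevpn/plugins/
docker start binhex-delugevpn

After restarting, re-enable the AutoAdd plugin in the Deluge preferences.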
    2 points
  2. I'm running unRAID 6.8.1 as a VM under Proxmox. All is fine so far, though I admit I don't use VMs inside unRAID (that can be very tricky performance-wise). I'm using quite a few dockers and plugins, and everything runs fine. Proxmox as the hypervisor uses KVM/QEMU itself to run VMs, so it would be great to have QEMU Agent support for unRAID when it runs as a VM (the QEMU agent would then report some information back to the hypervisor). There's one guy who did a VMware Tools build for ESXi as the hypervisor, but (and that's fine) he won't/can't do it for the QEMU Agent. So if anybody, or LimeTech, could bring support for the standard QEMU Agent, that would be great. Thanks a lot. PS: Proxmox is free (based on Debian) and has no limitations whatsoever. It can use all hardware supported by Debian, offers built-in ZFS as a filesystem, and has quite a good GUI plus a respectable user base & forums. Professional support is offered if wanted.
    1 point
  3. Hello all, I use the appdata backup plugin for all my apps, however I want to make double sure that I have no issues retrieving my passwords for Bitwarden in the event of a system failure. As such, would the following script work for backing up my Bitwarden passwords to G-Suite via rclone crypt (I'll set it to run daily)?

sudo docker stop bitwardenrs
rclone copy /mnt/user/appdata/bitwarden/ GoogleDriveBackupCrypt:unraid/Bitwarden_Backup
sudo docker start bitwardenrs

This is my first attempt at any sort of script that I haven't simply copied and pasted from a tutorial. Many thanks.

***Edit*** It works, at least when I press "run in background", so it should be OK running on a schedule. It's the simplest script I think you could run, but I'm quietly pleased I managed it myself. 😂
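One small tweak worth considering, sketched with the same commands: the `||` fallback keeps the container from staying stopped if the copy ever fails:

sudo docker stop bitwardenrs
rclone copy /mnt/user/appdata/bitwarden/ GoogleDriveBackupCrypt:unraid/Bitwarden_Backup || echo "rclone copy failed" >&2
sudo docker start bitwardenrs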
    1 point
  4. I posted this on the serverbuilds.net forums, and noticed that several users here were interested, so cross-posting! This is a somewhat complex yet in-demand installation, so I figured I'd share my steps in getting a Riot.im chat server syndicated through a Matrix bridge that supports a Jitsi voip/video conference bridge. The end result is a self-hosted discord-like chat server where any chat room can become a video conference with a single click! It has some other neat features like end-to-end encryption and syndication with other Matrix servers AND other types of chat servers (you can have a chat room that links to a discord room, irc channel, etc). We'll do almost all of this using apps from the Unraid Community Applications repo!

Summary:

We'll set up some domains for each of our components, then use a LetsEncrypt proxy to generate certificates. Matrix will run the back-end, Riot Chat will run the front-end, and Jitsi will handle the A/V.

DNS Setup:

You're gonna want a few subdomains, even if you have a dyndns setup pointing to your host. They can all point to the same IP, or you can use CNAME or ALIAS records to point to the root domain. A DNS setup for somedomain.gg might look like this:

Type - Host - Value
A - @ - 1.2.3.4 (Your WAN IP)
CNAME - bridge - somedomain.gg
CNAME - chat - somedomain.gg
CNAME - meet - somedomain.gg

In the above, the `@` A-record will set the IP for your domain root, and the CNAME-records will cause the 3 subdomains to resolve to whatever domain name you point them at (the root domain, in this case). Each subdomain will host the following:

bridge: matrix - The core communications protocol
chat: riot - The chat web UI
meet: jitsi - The video conferencing bridge

Firewall Setup:

You'll need the following ports forwarded from your WAN to your Unraid server:

LetsEncrypt: WAN TCP 80 -> LAN 180, WAN TCP 443 -> LAN 1443, WAN TCP 8448 -> LAN 1443, all on your Unraid server IP
- 80: Used by LetsEncrypt to validate your certificate signing request -- this can be disabled after setup, then only enabled when you need to renew a certificate.
- 443: LetsEncrypt proxy for encrypted web, duh
- 8448: Matrix Integrations port for enabling plugins. Also proxied via LetsEncrypt. Make sure this points to 1443, not 8443!
STUN: TCP and UDP 3478 on WAN -> 3478 on Unraid (or changed to suit your needs)
Jitsi: UDP Port 10000 -> 10000 on Unraid

We'll be assuming you used these ports in the rest of the guide, so if you needed to change any, compensate as needed!

Docker Networking:

This is a fairly complex configuration that will use at least 7 docker containers. To make this easier we'll create a custom docker network that these containers will all live on, so that they can communicate with each other without having to worry about exposing unnecessary ports to your LAN:

1. In Unraid, go to Settings->Docker.
2. Disable docker so you can make changes: set `Enable Docker` to `No`
3. Set `Preserve user defined networks` to `Yes`
4. Re-enable Docker
5. Open the Unraid console or SSH in.
6. Create a new Docker network by executing `docker network create --subnet 172.20.0.0/24 sslproxy` or whatever subnet works for you (adjusted below as needed).

We're now done with the pre-install stuff! I'd suggest testing your DNS and that the ports are all open on your FW and are getting directed to the right places. If everything looks good, then let's get some dockers!

LetsEncrypt Install:

Before proceeding, wait for your DNS server to update and make sure you can resolve the 3 subdomains remotely.
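A quick way to verify this from any machine with dig on it (subdomains from the example above; each should print your WAN IP):

dig +short bridge.somedomain.gg
dig +short chat.somedomain.gg
dig +short meet.somedomain.gg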
This is REQUIRED for LetsEncrypt to validate the domains! LetsEncrypt will need to listen on port 80 and port 443 of your WAN (public-facing) interface so that it can validate your ownership of the domains. We're going to use a container from the Unraid Community Applications repo (we already enabled user-defined networks in the Docker settings above).

1. In Community Applications, search for `LetsEncrypt` and install the container from `linuxserver`
2. Set the `Network Type` to `Custom: sslproxy`
3. Set the `Fixed IP address` to `172.20.0.10` (or whatever works for you)
4. Make sure `Privileged` is set to `On`
5. Set the `http` port to `180` and the `https` port to `1443`
6. Supply an email
7. Enter your domain name, ie `somedomain.gg`
8. Enter your subdomains: `chat,bridge,meet` (and any others you want to encrypt)
9. Optional: set `Only Subdomains` to false if you want the root domain to also have a cert!

The rest of the options should be fine as-is. If you do NOT have a domain, but use a dynamic dns service, you can still manage, but you might be limited to a single domain. Make sure `Only Subdomains` is set to `True`, otherwise your install will fail, as LetsEncrypt will expect you to be running on your dyndns service's web server! The following steps will also require you to do some nginx subdirectory redirection instead of domain proxying. SpaceInvader has a great video that demonstrates this in detail.

Once you've created the docker instance, review the log. It might take a minute or two to generate the certificates. Let it finish and make sure there are no errors. It should say `Server ready` at the end if all goes well! Try browsing to your newly encrypted page via https://somedomain.gg (your domain) and make sure all looks right. You should see a LetsEncrypt landing page for now. If all went well, your LetsEncrypt certificates and proxy configuration files should be available in /mnt/user/appdata/letsencrypt/

LetsEncrypt Proxy Configuration:

LetsEncrypt listens on ports 80 and 443, but we also need it to listen on port 8448 in order for Riot integrations via the public integration server to work properly. Integrations let your hosted chatrooms include bots, helper commands (!gif etc), and linking to other chat services (irc, discord, etc). This is optional! If you're happy with vanilla Riot, you can skip this. Also, you can run your own private Integrations server, but I'm not getting into that here. So assuming you want to use the provided integrations, we need to get nginx listening on port 8448. To do that, edit `/mnt/user/appdata/letsencrypt/nginx/site-confs/default` (the original/new snippets were attachments that didn't survive here; the gist is adding a `listen 8448 ssl;` line alongside the existing 443 listener in the SSL server block).

Next, we are going to need 3 proxy configurations inside LetsEncrypt's nginx server (one each for matrix, riot, and jitsi). These live in `/mnt/user/appdata/letsencrypt/nginx/proxy-confs/`. Create the following files (their contents were attachments in the original post):

matrix.subdomain.conf:
riot-web.subdomain.conf:
jitsi.subdomain.conf:

^^^ NOTE!!! Make sure you saw the `CHANGE THIS` part of the `$upstream_app` setting. This should be the LAN IP of your Unraid server!

Done! To test, try visiting https://<subdomain>.somedomain.gg/ and you should get a generic gateway error message. This means that the proxy files attempted to route you to their target services, which don't yet exist. If you got the standard LetsEncrypt landing page, then something is wrong!
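You can also check from the command line; a sketch using the example domain (the status code is what I'd expect, not guaranteed):

# a 502 means the proxy config loaded but the backend doesn't exist yet
curl -s -o /dev/null -w '%{http_code}\n' https://bridge.somedomain.gg/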
Matrix

A Matrix container is available from avhost in Community Applications.

1. In Community Applications, search for `Matrix` and install the container from `avhost`
2. Set the `Network Type` to `Custom: sslproxy`
3. Set the `Fixed IP address` to `172.20.0.30` or whatever works for you
4. Set the `Server Name` to `bridge.somedomain.gg` (your domain)
5. The rest of the settings should be fine, and I suggest not changing the ports if you can get away with it.

Create the container and run it. Now we need to edit our Matrix config.

1. Edit `/mnt/user/appdata/matrix/homeserver.yaml`
2. Change `server_name: "bridge.somedomain.gg"`
3. Change `public_baseurl: "https://bridge.somedomain.gg/"`
4. Under `listeners:` and `- port: 8008`, change `bind_address: ['0.0.0.0']`
5. Change `enable_registration: true`
6. Change `registration_shared_secret: xxxx` to some random value. It doesn't matter what it is, just don't use the one from the default config!
7. Change `turn_uris` to point to your domain, ie `"turn:bridge.somedomain.gg:3478?transport=udp"`
8. Set a good long random value for `turn_shared_secret`

If you have errors at start-up about your turnserver.pid file or database, you can try editing your /mnt/user/appdata/matrix/turnserver.conf file and adding:

pidfile=/data/turnserver.pid
userdb=/data/turnserver.db

There are a ton of other settings you can play with, but I'd wait until after it's working to get too fancy! Now restart the Matrix container, and check that https://bridge.somedomain.gg/ now shows the Matrix landing page. If not, something's wrong!

Riot Chat

Riot Chat serves as the web front-end chat interface. There's also a great mobile app called RiotIM. For the web interface, there's a Community Applications image for that!

1. Before we start, we need to manually create the config path and pull in the default config. So open a console/SSH to your server.
2. Create the config path by executing `mkdir -p /mnt/user/appdata/riot-web/config`
3. Download the default config by executing `wget -O /mnt/user/appdata/riot-web/config/config.json https://raw.githubusercontent.com/vector-im/riot-web/develop/config.sample.json` (**NOTE**: This is a different URL than the one suggested in the Docker!)
4. In Community Applications, search for `riot web` and install the container from `vectorim`. Watch out, there are two -- use the one with the fancy icon, which doesn't end with an asterisk (`*`)!
5. Set the `Network Type` to `Custom: sslproxy`
6. Set the `Fixed IP address` to `172.20.0.20` (or whatever)
7. The rest of the settings should be fine.

Create the container and run it. Now let's edit our Riot config. It's a JSON file, so make sure you respect JSON syntax.

1. Edit `/mnt/user/appdata/riot-web/config/config.json`
2. Change `"base_url": "https://bridge.somedomain.gg",`
3. Change `"server_name": "somedomain.gg",`
4. Under the `"Jitsi:"` subsection near the bottom, change `"preferredDomain": "meet.somedomain.gg"`

If all went well, you should see the Riot interface at http://chat.somedomain.gg! If not, figure out why... Now let's create our first account!

1. From the welcome page, click `Create Account`
2. If the prior config was correct, `Advanced` should already be selected and it should say something like `Create your Matrix account on somedomain.gg`. If the `Free` option is set, then your RiotChat web client is using the public matrix.org service instead of your private instance! Make sure your `base_url` setting in your config.json is correct. Or just click Advanced, and enter `https://bridge.somedomain.gg` in the `Other Servers: Enter your custom homeserver URL` box.
3. Set your username and password
4. Setup encryption by following the prompts (or skip if you don't care). This may require that you whitelist any browser script blockers that you have running.

Done! You now have a privately hosted Discord-alternative! Let's add some voice and video chat so we can stop using Zoom 😛

Jitsi

This part doesn't have a solid Docker image in the Community Applications store, so there are a few more steps involved. We're gonna need to clone their docker setup, which uses docker-compose.

1. Open a console/SSH to your server
2. Install docker-compose by executing `curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose`
3. Make it executable: `chmod u+x /usr/local/bin/docker-compose`
4. Move to your appdata folder: `cd /mnt/user/appdata`
5. Make and enter a folder for your docker-compose projects: `mkdir docker-compose; cd docker-compose`
6. Clone and enter the `docker-jitsi-meet` repo: `git clone https://github.com/jitsi/docker-jitsi-meet ; cd docker-jitsi-meet`
7. Create an install environment: `cp env.example .env`
8. Populate some random secrets in your environment: `./gen-passwords.sh`
9. Edit the install environment (I'm using nano, but edit however you want): nano .env
10. Change `CONFIG=/mnt/user/appdata/jitsi-meet/`
11. Set TZ to your timezone, ie `TZ=America/Denver`
12. Change `PUBLIC_URL=https://meet.somedomain.gg`
13. Change `DOCKER_HOST_ADDRESS=192.168.0.1` or whatever the LAN address of your Unraid server is
14. Create the CONFIG path that you defined in step 10: `mkdir /mnt/user/appdata/jitsi-meet/`
15. Create and start the containers: `docker-compose -p jitsi-meet -f docker-compose.yml -f etherpad.yml up -d`
16. This will create 4 Jitsi containers as part of a Docker stack -- see your list of dockers. You can't edit them, but take note of the `jitsi-meet_web_1` ports, which should be `8000` and `8443`.

If you got any errors, it's likely a port conflict somewhere, so find the corresponding setting in your `.env` file and adjust as needed, reflecting any relevant changes in the next step. When we were setting up our nginx proxy configs, you'll recall that the Jitsi config's `$upstream_app` had to be set manually, rather than relying on the internal DNS. That's because the docker-compose stack names are not 100% predictable, so it's better to just hard-code it. You might want to double-check that setting if you have issues from here on.

To test Jitsi, go to https://meet.somedomain.gg/ and hopefully you see the Jitsi page. Try to create a meeting. In the future, it may be wise to enable Authentication on your Jitsi server if you don't want any random person to be able to host conferences on your server! See the docs (or SpaceInvader's video) for details on that.

Now find a friend and get them to register a Riot account on your server at https://chat.somedomain.gg (or use the mobile app and connect to the custom host). Get in a chat room together, then click the Video icon next to the text input box and make sure it works. It's worth noting that Jitsi works differently when there are only 2 people chatting -- they'll communicate directly. With 3 or more, they'll communicate with the Jitsi server and use the TURN service. So it's a good idea to try to get a 3rd person to join as well, just to test out everything.
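If something misbehaves, a quick status check of the stack looks like this (a sketch; the service names are the usual docker-jitsi-meet ones and may differ by version):

# the web, prosody, jicofo and jvb containers should all show as Up
docker ps --filter name=jitsi-meet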
That's it, hope this helps! Enjoy!

To Do:
* Custom Integrations Server
* Etherpad Integration

Edit: While I was making this guide, SpaceInvader came out with a great video covering the Jitsi part! It covers some authentication options that I didn't get into, but would highly suggest. Check it out!
    1 point
  5. With Nvidia unlocking GPUs for multiple streams/sessions and officially supporting GPU passthrough in the latest round of driver releases, would it be possible to include paravirtualization in the same fashion that Hyper-V allows, so that multiple VMs can share resources on a single card? I will not pretend to understand the intricacies of how this works, so forgive me if there is already a solution to this, but it is absolutely a game changer in my home, allowing me to run hugely powerful Windows 10 instances through Parsec on Pi thin clients all over the house without the huge expense of multiple GPUs. The flip side is that it's consuming a lot of resources from my Windows 10 workstation that I would rather hand over to a dedicated GPU in my Unraid server.
    1 point
  6. assuming the controller is giving that information correctly. That is one of the reasons RAID controllers and USB connections aren't recommended. Looks like your hardware is good for that.
    1 point
  7. If it is the parity drive that isn't working, just don't assign any disk to parity, keep cache as assigned, and assign any other disk as disk1, then you can start the array and that will give you access to cache.
    1 point
  8. A disk actually dying is rare compared to other problems such as bad connections. People will often disturb connections any time they are messing around in the case. In your situation you do have multiple disks with known problems, but since these are both going to be eliminated during the double rebuild (or the double parity swap) that shouldn't be an issue. You should either rebuild both at the same time to new disks, or do both parity swaps at the same time. You don't want either to be involved in rebuilding the other. Parity swap doesn't seem any more risky to me than just a rebuild. The parity copy part of the procedure happens with the array offline, so there are no changes to parity during the copy. And that copy only concerns the new disk and parity. While copying parity you still have the original parity and could just New Config it back into the array and be back where you were. And the other part of the procedure is just a rebuild. Ultimately, I always trust the advice given by this person:
    1 point
  9. Yes, it's due to the br0 network interface on the PiHole Docker container. To make it work, you need to enable the following setting under Docker settings > Advanced view.
    1 point
  10. You should also install the Fix Common Problems plugin. Not only can it help you fix common problems, but it will tell you about problems you didn't know about.
    1 point
  11. This is exactly what I did. I may try something more advanced, but the system moves the files over fairly quickly. I was able to mount the shares so I can access them within the Dockers.
    1 point
  12. You already have disk7 disabled, and you can only have one disabled/missing device with single parity, so you first need to replace that one.
    1 point
  13. Please stop the SWAG docker. Let's get the basic system working and then you can add SWAG back in. What version of Unraid is this? And please double-check that you have the latest My Servers Plugin (currently 2021.10.12.1921) Please go to Settings -> Management Access and let me know what the "Use SSL/TLS", "HTTP Port" and "HTTPS port" values are. Then go to Settings -> Management Access -> My Servers and let me know what the WAN Port is. On your router, what ports do you have forwarded to this server? When you press "Check" on the My Servers page, what happens? Also, please upload your diagnostics (from Tools -> Diagnostics)
    1 point
  14. Ok, I think I found the issue. The plex pool was looking for the sdae drive, but that drive path was reassigned to another disk after the shuffle. That disk is located in the virtualmachine pool which is initialized first before the plex pool normally. The reason it was causing problems when starting the array is because plex is where my appdata and system shares are located. Disabling docker allowed the array to start fine and I was able to format the other pools with changes and begin clearing the new data disks in the array. I do not currently have docker running and the plex pool is still read only. I am initiating a full back up of all files on the pool and then I will figure out what to do. I am considering just deleting the pool and recreating it and then restoring the backups since the mover isn't doing anything after I switch from prefer to yes for the cache option or to another cache pool. Unless you have any other ideas to make it work. I will not have physical access to the machine for the next week and a half so any solutions would have to be software solutions.
    1 point
  15. It was just a way of me getting ljm42 to pipe in.
    1 point
  16. Download the gparted live iso and add that iso to the VM you are using, then boot from it (either change the boot order in the VM settings or in the VM bios). You will boot into the gparted live environment and can resize the partitions you want. I just used the gparted live iso to:

- convert mbr to gpt
- create a bios_grub partition to boot legacy bios + gpt
- create an efi partition to migrate from legacy bios to uefi
- move the efi partition from "right" to "left"
- resize (increase) the ext4 partition from 50 GB to 150 GB

gparted is very easy to use, it has a nice gui. I don't know if there is any difference, but I prefer to use qemu-img to increase the disk size:

qemu-img resize path/to/raw/img/vdisk.img +100G

will increase the size of a raw img by +100GB. Make a backup first! Playing with partitions can destroy all your data.
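A sketch of the safe order of operations (paths here are examples only):

# back up the image before touching anything
cp /mnt/user/domains/MyVM/vdisk1.img /mnt/user/domains/MyVM/vdisk1.img.bak
# grow the virtual disk by 100GB
qemu-img resize /mnt/user/domains/MyVM/vdisk1.img +100G
# confirm the new virtual size
qemu-img info /mnt/user/domains/MyVM/vdisk1.img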
    1 point
  17. Got ya. It just went quiet so I was like “did he give up?” Glad to see you’re still going strong.
    1 point
  18. qcow2 is not a huge difference from raw, but it should be mentioned that manual upkeep is always necessary here too, because the "shrink" doesn't happen automatically, at least in my experience. That is, "sectors" in the "middle" of the vdisk where something gets deleted don't shrink the image; it only shrinks when something at the "end" is freed, or when the appropriate command is run to compact it. Initially it just writes through from front to back with every update... it's not "real time", just as an FYI.
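For reference, one common way to do that manual compaction is an offline copy with qemu-img (a sketch; the VM must be shut down and the filenames are examples):

# rewriting the image drops the unused "middle" sectors
qemu-img convert -O qcow2 vdisk1.qcow2 vdisk1.compact.qcow2
mv vdisk1.compact.qcow2 vdisk1.qcow2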
    1 point
  19. Apparently you have been ignoring SMART warnings on the Dashboard page, or at least never noticed them. You must set up Notifications to alert you immediately by email or another agent as soon as a problem is detected. Don't let one problem become multiple problems and data loss.
    1 point
  20. It's not built-in. There's a plugin, and there's also a docker container (command line) which may work better in some situations
    1 point
  21. Pre-clear is NOT built into Unraid, but there is a plugin you can install to do it from the GUI.
    1 point
  22. Not a possibility, as once you have swapped out the old parity drives they are not available to support a rebuild.

This is a possibility, especially if you swap out the drive that failed its SMART test first. However, you will not be protected against any other drive failing until the first parity rebuild completes.

I have not tried doing simultaneous Parity Swap procedures, but as long as Unraid lets you do that, it would be the fastest. Whichever route you go, keep the old data disks intact until you are fully protected again. If the Parity Swap goes wrong in any way, there is a good chance that most of the data off these drives would be recoverable.
    1 point
  23. If you need an extra 50-60W for an idle VM, I'd assume something is wrong. In my case, my whole system needs ~10W less with the VM idling than with it turned off and the GPU not in low-power mode... and I'm at ~60-65W in total with 3 running VMs (1 VM x GT1030, 1 VM x RTX3070, 1 VM x gvt-g igpu) and some docker(s) etc. When I turn my Media VM (RTX 3070) off, my power consumption increases to 70-75W in total, so it's senseless for me to keep it off; I just have some small idle CPU consumption. Cooling would be a little better, but in sum I lose by turning it off.
    1 point
  24. But in your xml only 04:00.0 is passed through. It could be related to properly resetting the gpu. Try to replace this in your xml:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</hostdev>

With this:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
</hostdev>

Moreover, you have both vnc and gpu passthrough: I'm not sure it can be done; some report that it works, others that it doesn't. I would also delete this:

<graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
  <listen type='address' address='0.0.0.0'/>
</graphics>
<video>
  <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>

If you want vnc, install a vnc server inside the windows os or use remote desktop. Since you have a br0 network, the vm is reachable from the same lan.
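To confirm the card really exposes both functions before editing the xml, something like this on the Unraid console helps (the 04:00 address is taken from the xml above):

# expect two lines: 04:00.0 (VGA controller) and 04:00.1 (audio device)
lspci -nn | grep '04:00'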
    1 point
  25. My hero. I saw the setting, but somehow forgot to fiddle with it. I feel a bit silly now. I've donated to you again. Thank you!
    1 point
  26. I just copied the contents of caves/cave_server.ini to caves.server.ini and deleted cave_server.ini, and it works fine now. The resetting part was mainly me being stupid and editing files while the container was running.
    1 point
  27. No, that's technically not the same thing. To put it simply, a USB stick is detected as a stick and a hard disk is detected as a hard disk, and unRAID needs a USB stick. The write load is shown in the unRAID dashboard (not the TBW, but you can see during operation how much is being written to it). Only config files (a few KB) are downloaded to it, and it is written to on changes, so it really stays within limits. The Nvidia driver, for example, is also downloaded to it and then just installed at every boot, so it is mainly read from.

There are also SLC sticks from Transcend, but by now I consider that overkill. I understand the concerns, but the principle is completely different from a Pi; I think the understanding for that is often missing... With unRAID, the operating system, kernel, modules, and firmware all live in the bz* files, which are written once at first installation and rewritten on updates. Then there are the other required directories: syslinux for booting, and the config directory holding practically all persistent config files, which are only written when something changes in the unRAID WebGUI, just a few bytes to kilobytes. When unRAID starts, the bzimage (kernel) is loaded, then bzmodules, bzfirmware, and bzroot are unpacked into RAM, and unRAID then runs completely autonomously from RAM, not from the SD card/USB stick like RaspberryOS. I hope that explains it a bit better.

I used to use a 1GB SLC Transcend stick but have now switched to a 32GB one from Transcend. The recommendation here is always USB 2.0, since those don't get as hot, which of course is easier on the chips, and as mentioned, speed doesn't matter much since the stick is only read intensively at boot (that's another important difference and advantage over RaspberryOS). I'm currently using something like this: Click

Before I forget: I of course also have to test the Nvidia drivers (and various other things I compile) for every unRAID release, which is about 150 to 200MB written additionally on an unRAID update, and I've never had a problem so far, and I really test a lot of versions and releases... Of course there's always the chance of catching a faulty stick, but you can still have CA Backup create an automated backup, use MyServers for the stick backup, or even write an automated script yourself that backs the stick up somewhere else.
    1 point
  28. @yayitazale maybe... Maybe the slot you are putting it in is not fully compatible with the PCIe specification, since the Dual Edge TPU makes use of two PCIe 2.0 x1 lanes and most M.2 E-keyed slots are not "really" compatible with the PCIe specifications. I also learned that the hard way, because I also needed an adapter for a PCIe slot (those are really hard to find), and now only one TPU is working because I only have a single PCIe x1 slot free, but better than nothing... Here is an entire issue on GitHub about what is compatible: Click

My recommendation would be to stick with an A+E-key Coral TPU or a B+M-key Coral TPU; these are some weird devices, or at least the compatibility is a little weird... An extra question from my side: may I ask how you cool the Dual Edge TPU, since it needs extra cooling?
    1 point
  29. Hi, sorry, Machinaris makes calls against the Chia wallet. This doesn't preclude you from using a cold wallet, though, by changing the reward address and/or payout instructions under Settings | Farming, I believe. Hope this helps!
    1 point
  30. You need to add the wireguard-assigned network to the value of the env var LAN_NETWORK, using a comma to separate the values, so you should end up with a LAN_NETWORK value defining both your LAN and your wireguard network, along the lines of the example below.
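For example (these subnets are made up; substitute your actual LAN range and WireGuard tunnel network):

LAN_NETWORK=192.168.1.0/24,10.253.0.0/24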
    1 point
  31. Thanks again for your input. Just wanted to briefly share how I'm going to solve this for myself. I've really dug around a lot, but I couldn't find a way to enable a semi-passive mode on the Quadro. The Quadro will be replaced with a passively/semi-passively cooled card in the near future. I'm currently working through the options, but the choices in that segment are very limited.
    1 point
  32. Sorted this. Make sure you create the container initially with EXACTLY THE SAME NAME AS YOUR OLD WORLD. If you're trying to swap a container to a different world after it's been created, it will not work. Create a fresh one with the exact name. Once you've done that, override the world files with your old ones. Hey presto, it'll work.
    1 point
  33. Is this not a setting/issue with Nextcloud? Does that have a redirect / force domain option?
    1 point
  34. A while back I posted the right command for turning an existing file into a sparse file. Is that what you mean?
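(For reference, one common way of doing this; whether it matches the command that was originally posted is an assumption on my part, and the path is an example:)

# punch holes into zeroed regions of an existing file, making it sparse
fallocate --dig-holes /mnt/user/domains/MyVM/vdisk1.img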
    1 point
  35. Hello everyone, I know this may not be the right place for it, but I want to thank all the active members of this forum. You have all made my entry into the unRAID world much easier. I've been a silent reader for a few weeks now and have built my system, whether it was the hardware advice or the solutions to small/large problems. As a newcomer, I have so far been able to find an answer to every question here in the forum. Great! My system is running and I'm already starting to want to do more with it 🙈 Many, many thanks! 👏 Wishing you all only the best.
    1 point
  36. Only with extreme workarounds as far as I know; you'd probably be faster setting it up fresh. Here's the relevant unRAID help text:
    1 point
  37. The advice is to wait for an official Unraid release with ovmf secure boot + tpm support, but if you can't wait, you can emulate tpm and run windows 11 with 6.9.2 too. All you need to do is add the emulated tpm to the xml (put it inside the <devices></devices> section):

<tpm model='tpm-tis'>
  <backend type='emulator' version='2.0'/>
</tpm>

AND add the additional swtpm as described here: https://www.linkedin.com/pulse/swtpm-unraid-zoltan-repasi/ AND use OVMF compiled with secure boot and tpm flags. If you want to compile it yourself: Or if you like "black boxes", just download the attached files and edit your vm xml template to point to these OVMF_CODE_SECBOOT.fd and OVMF_VARS_SECBOOT.fd files.

Note: secure boot is not enabled in these files, but they are secure-boot capable; windows 11 will not complain about it. If you need secure boot enabled (but really... do you want it??) you need to run EnrollDefaultKeys.efi from inside a uefi shell. EnrollDefaultKeys will inject the microsoft certificates.

Another way is to download and extract the edk2 rpm file from the fedora 36 package: https://kojipkgs.fedoraproject.org//packages/edk2/20210527gite1999b264f1f/3.fc36/noarch/edk2-ovmf-20210527gite1999b264f1f-3.fc36.noarch.rpm (this is v. 202105, not the latest). Then extract the files from the rpm:

rpm2cpio edk2-ovmf-20210527gite1999b264f1f-3.fc36.noarch.rpm | cpio -idmv

And you will find OVMF_CODE.secboot.fd and OVMF_VARS.secboot.fd inside ExtractedDirectory/usr/share/edk2/ovmf/. Again, point the xml code of the vm template to these files. The files from Fedora have Secure Boot enabled; the certificates are already imported.

At the time of writing, to install windows 11 without "hacks", the bios (ovmf) must be secure boot "capable", and you must have a tpm device (emulated or passed through), plus enough ram and storage (I didn't test these, but this should not be a great issue; just increase them if storage/ram is not enough). Luckily, unsupported cpus are not a stopper!

OVMF_202108_Stable_RELEASE_TPM_SECBOOT.zip
    1 point
  38. Update... Fresh install from the latest Windows 11 Insider Preview also works just fine (please ignore that it says not compatible because I've only assigned 50GB to the vdisk instead of the required 64GB *doh*) :
    1 point
  39. It's funny to look through this thread from the beginning on May 14, 2014 up to now and see how the amount of RAM has increased. It started with often 2GB, and now it's sometimes above 200 or even 300. I'm currently at 16GB and see no need for more, using some dockers (an increasing number) and no VMs.
    1 point
  40. It looks like the cycle of alternate good and bad releases of Windows is continuing...

95 - good
98 - bad
98 SE - good
ME - bad
XP - good
Vista - bad
7 - good
8 - bad
10 - good
11 - ????

Ok, that's a little contrived, but from everything that I hear about it, I am getting the feeling that upgrading to 11 any time soon may not be worth the effort.
    1 point
  41. I wanted to thank the team for how seamless they've made the backup and restore process! My USB drive failed last night, but luckily, the latest configuration was stored safely in the cloud. Restoring the configuration was extremely easy and got the server up-and-running in just a few minutes. A+ work, everyone!
    1 point
  42. I added nfsv4 support in the kernel starting with 6.10-rc2. nfsv3 still works and the v4 protocol is definitely enabled, but I can't get a client (another Unraid server) to mount a share using the v4 protocol. I spent a couple of hours on it this morning but have no more time to spend on it now. If someone wants to test this and let me know what has to happen, then we can add it to the 6.10 release. But please post in the Prerelease Board.
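For anyone who wants to try it, a v4 mount attempt from another Linux client looks something like this (a sketch; hostname, share, and mountpoint are examples):

mount -t nfs -o vers=4 tower:/mnt/user/share /mnt/remote
# then check which protocol version was actually negotiated
nfsstat -m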
    1 point
  43. Just wanted to provide an update. I haven't gotten to the bottom of why the network (and routing tables) suddenly disappear, but when it happens I'm able to fix it with the following commands (I reverse-engineered the /etc/rc.d/rc.docker script):

ip link add shim-br0 link br0 type macvlan mode bridge
ip link set shim-br0 up
ip route add 192.168.1.0/25 dev shim-br0
ip route add 192.168.1.128/25 dev shim-br0

Hopefully this helps someone else fix the issue in the future.

Edit: And for those using ipvlan:

ip link add shim-br0 link br0 type ipvlan mode l3
ip link set shim-br0 up
ip route add 192.168.1.0/25 dev shim-br0
ip route add 192.168.1.128/25 dev shim-br0
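To confirm the shim took effect afterwards (a sketch):

ip link show shim-br0
ip route show dev shim-br0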
    1 point
  44. Welcome! I'm glad Machinaris is working well for you. To see the full listing on your Farming page, be sure to set a Variable in the Machinaris docker config named 'plots_dir' which is a colon-separated list of all your in-container plots paths. Please note that editing the Machinaris docker in Unraid will cause the container to restart so wait until you are taking a pause from plotting in-container before you change it. Details on the wiki. Hope this helps!
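For example (the container paths here are hypothetical; use whatever paths your plot drives are mapped to inside the container):

plots_dir=/plots:/plots2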
    1 point
  45. find * -maxdepth 9999 -mtime +5 -exec rm -vf {} \; -exec /bin/echo {} \; | wc -l | { read count; echo "Done. $count files deleted."; }

I grabbed this syntax from a thread on stackoverflow; it works well. The trailing `{ read count; ... }` group is what puts the count and the message on one line: it reads the number from `wc -l` into a variable and echoes it inside a single string.
    1 point
  46. I have made a script to delete all cached images in Plex's PhotoTranscoder folder that are older than 7 days. See below: in the output, the number of files deleted is on one line and then the "files were deleted" text is on the line under it. I have tried to make the text appear directly after the count but can't seem to make it work. Any ideas?
    1 point
  47. I'm thinking of doing this now, but when following the video I don't see where in 6.9 I can set my xfs encryption password. Where are the settings for that?
    1 point