Leaderboard

Popular Content

Showing content with the highest reputation on 05/07/21 in Posts

  1. I have a feature suggestion which may (for valid reasons I'm not able to think of) get shot down. Right now everyone has their own https://hash.unraid.net address with this plugin. It would be fantastic if you could implement a setting to change the hash to a custom value. The plugin could then check whether that custom value is already in use and, if not, assign it as the new subdomain prefix. It would look like this:
     - Default URL: https://hash.unraid.net
     - Change the custom URL field in the Unraid plugin settings to (for example) metallica.
     - The plugin checks whether https://metallica.unraid.net is already assigned. If it is, it rejects the change with an error that the requested value is unavailable. If it is not, the change is made.
     Right now we have to go to the Unraid forum first, get redirected to the URL, and then bookmark that page for easier access. Implementing this change would make it easier to access your server because you would (hopefully) always know what the URL is. There might be a completely valid reason that this cannot be done (or even that it can be done but won't be implemented), but in case this is possible and just wasn't something anyone thought of, I figured I would mention it. Thanks for this plugin! I love it.
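     A rough sketch of the availability check the suggestion describes, under the (possibly wrong) assumption that unassigned hashes simply don't resolve in DNS - the real plugin would presumably ask an Unraid backend API instead of doing a lookup:
```bash
#!/bin/bash
# Hypothetical sketch only: reject a requested custom prefix if it already
# resolves, assuming unassigned subdomains return NXDOMAIN. The actual
# backend check would likely be an API call, not a DNS lookup.
requested="metallica"
if host "${requested}.unraid.net" >/dev/null 2>&1; then
    echo "Error: ${requested}.unraid.net is already assigned." >&2
    exit 1
fi
echo "${requested}.unraid.net is available - assigning it as the new prefix."
```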
    3 points
  2. This thread is meant to replace the now outdated old one about recommended controllers. These are some controllers known to be generally reliable with Unraid.
     Note: RAID controllers are not recommended for Unraid, and this includes all LSI MegaRAID models. That doesn't mean they cannot be used, but there can be various issues as a result, such as no SMART info and/or temps being displayed, disks not being recognized by Unraid if the controller is replaced with a different model, and in some cases the partitions becoming invalid, requiring rebuilding all the disks.
     - 2 ports: Asmedia ASM1061/62 (PCIe 2.0 x1) or JMicron JMB582 (PCIe 3.0 x1)
     - 4 ports: Asmedia ASM1064 (PCIe 3.0 x1) or ASM1164 (PCIe 3.0 x4 physical, x2 electrical, though I've also seen some models using just x1)
     - 5 ports: JMicron JMB585 (PCIe 3.0 x4 - x2 electrically). These JMB controllers are available in various different SATA/M.2 configurations.
     - 6 ports: Asmedia ASM1166 (PCIe 3.0 x4 physical, x2 electrical)*. These exist with both x4 (x2 electrical) and x1 PCIe interfaces; for some use cases the PCIe x1 may be a good option, e.g., if you don't have larger slots available, though bandwidth will be limited.
       * There have been some reports that some of these need a firmware update for stability and/or PCIe ASPM support; see here for instructions.
     - 8 ports: any LSI with a SAS2008/2308/3008/3408/3808 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, 9500-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed. Most of these require a x8 or x16 slot; older models like the 9201-8i and 9211-8i are PCIe 2.0, newer models like the 9207-8i, 9300-8i and later are PCIe 3.0.
     For these, when not using a backplane, you need forward SAS-to-SATA breakout cables: SFF-8087 to SATA for SAS2 models, SFF-8643 to SATA for SAS3 models. Keep in mind that they need to be forward breakout cables (reverse breakout cables look the same but won't work; as the name implies they work for the reverse direction, with SATA on the board/HBA and the miniSAS on a backplane). Sometimes they are also called Mini SAS (SFF-8xxx Host) to 4X SATA (Target), which is the same as forward breakout.
     If more ports are needed you can use multiple controllers, controllers with more ports (there are 16- and 24-port LSI HBAs, like the 9201-16i, 9305-16i, 9305-24i, etc.), or one LSI HBA connected to a SAS expander, like the Intel RES2SV240 or HP SAS expander.
     P.S. Avoid SATA port multipliers with Unraid, and also avoid any Marvell controller. For some performance numbers on most of these, see below.
    2 points
  3. Overview: Support thread for Partition Pixel/Chia in CA.
     Application: Chia - https://github.com/Chia-Network/chia-blockchain
     "Docker Hub": https://github.com/orgs/chia-network/packages/container/package/chia
     GitHub: https://github.com/Chia-Network/chia-docker
     This is not my docker, nor my blockchain, and I'm not a developer for them either. I simply made an Unraid template for the already existing docker so that it will be easier for me and others to install it on an existing Unraid server. I can support any changes required to the xml template and provide assistance on how to use the parameters or how to use the docker itself. Please read up on SSD endurance if you don't know about Chia and you plan on farming it: https://github.com/Chia-Network/chia-blockchain/wiki/SSD-Endurance
     Instructions:
     1. Install Partition Pixel's Chia via CA.
     2. Create a 'chia' directory inside your appdata folder. (Skip to step 4 if you do not have an existing chia wallet.)
     3. Inside this new folder, create a new file called 'mnemonic.txt' and copy and paste the 24-word mnemonic from your wallet into it (every word one after another on the same line, with one space in between, like this sentence).
     4. Back on the docker template, choose a location for your plotting if you plan on plotting on your server (preferably a fast SSD here).
     5. Choose a location for storing your plots (this is where they will be used to 'farm'; preferably an HDD here).
     6. Feel free to click on "show more settings" and change any other variable or path you would like.
     7. Save changes, pull down the container, and enjoy!
     If you have some unassigned or external HDDs that you want to use for farming, edit /mnt/user/appdata/chia/mainnet/config/config.yaml and add more plot directories like so:
       plot_directories:
       - /plots
       - /plots2
     Then create a new path in the docker template like so: config type: Path, container path: /plots2, host path: /mnt/an_unassigned_hdd/plots/ (see the sketch after this post).
     Here are some often-used command lines to get you started. Open a console in the docker container, then type:
       venv/bin/chia farm summary
       venv/bin/chia wallet show
       venv/bin/chia show -s -c
       venv/bin/chia plots check
     Command to start plotting:
       venv/bin/chia plots create -b 5000 -r 2 -n 1 -t /plotting/plot1 -d /plots
     -b is the amount of RAM you want to give, -r is the number of threads, -n is the number of plots you want to queue, -t is the temp dir, and -d is the completed directory.
     From user ropes: If you only want to harvest on this docker, then you don't need to create a mnemonic file with your passphrase. Instead you can do the following (more secure imo): chia plots create [other plot options] -f <farmer key> -p <pool key>
     If you want to run plots in parallel, just run the command in another terminal window as many times as your rig will allow.
     Here are all the available CLI commands for chia: https://github.com/Chia-Network/chia-blockchain/wiki/CLI-Commands-Reference
     From user tjb_altf4:
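     A small console sketch of the config edit described above (paths follow the template defaults; the /plots2 mapping and the unassigned-disk path are examples):
```bash
# Sketch: register a second plot directory with the harvester by editing
# config.yaml (the same edit the instructions above describe).
nano /mnt/user/appdata/chia/mainnet/config/config.yaml
# ...make the plot_directories section read:
#   plot_directories:
#   - /plots
#   - /plots2
# Then restart the container so the harvester picks up the new directory.
```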
    2 points
  4. Since I've been very happy with my old Supermicro so far, it has now become the X11SCH-LN4F and the E-2226G (all other Xeons with an iGPU are currently too expensive for me). From my point of view this means the fewest compromises: a more potent CPU and iGPU, two more SATA ports and one more NVMe connector, plus the option of a 4x1Gb bond. Thanks for your support, even though it has now cost quite a bit more. I will probably sell the old board and the i3; the parts are still under warranty and both are considerably more expensive than in 2020 - crazy world. If anyone is interested, feel free to contact me 🙂
    2 points
  5. Check out both the NVENC and NVDEC lists, and check the Consumer and Professional tabs, for supported cards and limitations.
    2 points
  6. This issue isn't due to Unraid OS. For anyone getting this issue, spin up your drives first before running a controller benchmark. The cause is that my spin-up logic isn't working. The first second of the 15 seconds of data read is discarded to account for any drive head relocation, but the spin-up time is causing at least one second of 0 MB/sec reads, which is hurting the average. This issue isn't evident on the individual drive benchmarks because they perform seek & latency tests first, so the drive is spun up before it gets to the benchmark.
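     To put a number on it (figures invented for illustration): with the first of the 15 seconds discarded, one remaining 0 MB/sec spin-up second among fourteen samples pulls a 150 MB/sec drive down to about 139 MB/sec, roughly a 7% error:
```bash
# Toy illustration with made-up numbers: one zeroed spin-up second
# among fourteen one-second samples drags the mean down.
samples="0 150 150 150 150 150 150 150 150 150 150 150 150 150"
echo "$samples" | awk '{s = 0; for (i = 1; i <= NF; i++) s += $i;
    printf "average: %.1f MB/sec instead of 150 MB/sec\n", s / NF}'
```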
    2 points
  7. My response, not the official Limetech response. Every single user of CA has had to acknowledge two distinct popups regarding applications prior to any installation being allowed: a general one and another specifically geared towards plugins, which link to https://forums.unraid.net/topic/87144-ca-application-policies-notes/?tab=comments#comment-809115 and https://forums.unraid.net/topic/87144-ca-application-policies-notes/?tab=comments#comment-817555 respectively, and which also detail some (but nowhere near all) of the security procedures in place. The lists within CA are not just a compendium of "applications at large". They are moderated and constantly being reviewed. Great pains are also taken to have the ability to notify users of any malicious software installed, either through FCP or CA's own emergency notification system (opt-in). Now, if you do decide to go and install an app that is outside of CA's control, then you are on your own. But to directly answer your question: all plugins and many containers are code-reviewed when being added into CA. ALL plugins MUST be open source except in exceptional circumstances. Provisions exist for removal / notification to the user if anything malicious happens after the fact. (To be quite honest though, I would worry far, far more about a docker container installed via the command line (or a dockerHub search) that's not listed within CA than about a plugin. E.g.: how many examples are there already of mining software being embedded in a random container on dockerHub?)
    2 points
  8. Hello, I'd like to request the ability to set a maximum size for a given share. I read this post, but I believe it would be possible if you allowed a script to run that basically checks the share size with du (or similar) and then makes the share read-only once it is at or above the given size. I wanted to allow SFTP before I made an Ubuntu VM for it, and I noticed that with each reboot it would revert the files I changed (because the OS runs in RAM to limit write usage on the USB stick), so DIY-ing this isn't possible as far as my knowledge of Unraid goes. Thanks in advance.
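     For illustration, a minimal sketch of the kind of script being described, with a hypothetical share name and limit; stripping write permission with chmod is crude, and how Unraid's SMB/SFTP layers react to it would need testing:
```bash
#!/bin/bash
# Hypothetical quota sketch: measure a share with du and make it read-only
# once it reaches a limit. Share path and limit are example values.
SHARE="/mnt/user/sftp_drop"
LIMIT_GB=500

used_gb=$(du -s --block-size=1G "$SHARE" | cut -f1)
if [ "$used_gb" -ge "$LIMIT_GB" ]; then
    echo "$SHARE is at ${used_gb} GiB (limit ${LIMIT_GB} GiB) - locking writes"
    chmod -R a-w "$SHARE"
fi
```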
    1 point
  9. Don't take this the wrong way, mate, but using a NAS commits you to a lifetime of occasional drive purchases. If you need another drive to migrate data, you should just buy the drive. You will use it and need more in the future anyway. I'm not being unsympathetic; this is the easiest way, and you are going to buy more drives anyway as time goes by.
    1 point
  10. Your USB device is annoying. 😁 I added an existence check for that variable, looked for other potential misses, and repushed version 2.9.3.
    1 point
  11. Don't take this the wrong way, but please don't spread misinformation. At the moment you can get pretty lucky and win some with 1 plot, but it is indeed just pure luck. In about 1-2 weeks the devs are supposed to release the pool protocol, which will allow everyone to farm together in a pool. This will give you more consistent rewards, and everyone will have to replot to participate in a pool. I personally still think it is worth it if you already have some storage that's doing nothing (which I suspect a lot of Unraid users have), but I'm not here to advocate for the blockchain, just to provide an easier way to install for those who are interested in the project. Also, you are wrong about the price: chia is currently $550 USD each, and it is now tradeable on a few exchanges. (It was around $1300 USD at the time of release on exchanges and is slowly stabilizing in the 500-600 range.)
     EDIT: It also takes less space than you think. I have 14 TB right now and an estimated time to win of 1 month. Of course the network is growing very quickly, but it's just not in the 40 TB range yet.
     2nd EDIT: Seems like it's back in the $1100 range lol
     3rd EDIT: I also suggest you look into the project in more depth. Chia does not take a lot of electricity, especially if you already have a server running; only plotting really takes a bit, but farming itself is very green, especially compared to other cryptos that use proof of work (like bitcoin).
    1 point
  12. Great, thanks Hoopster. Ah, thanks, Ich777, I must have missed it.
    1 point
  13. Please look at the first post (in the Troubleshooting section); I've linked a page from Nvidia where you can check which cards are compatible with the drivers.
    1 point
  14. Thank you very much. I guess I was overthinking it; this approach never even occurred to me.
    1 point
  15. Absolutely, added the command in my first post, thank you 😃
    1 point
  16. It has improved significantly with Ryzen 4000 and 5000, but as I said, unfortunately only with the B550 chipset; the X570 alone adds at least 20W. There is an IPMI card from Asrock. Otherwise there is also https://pikvm.org/, and I simply use an HDMI capture stick and my laptop. For reboot/power-on, a smart plug works too.
    1 point
  17. My 2 cents: keep the RAM, and get a Xeon without HT - HT brings nothing for VMs - but with an iGPU, and the X11SCH-LN4F as the motherboard. If you're already used to IPMI, you know what it gets you. A 4x1Gb bond is nothing to sneeze at. Alternatively, as @mguttoben already wrote.
    1 point
  18. Epyc is not exactly power-efficient and also has relatively low single-core performance. You get one when you want to install lots of GPUs and M.2 SSDs (because of all the lanes Epyc offers). Otherwise, better to go with something from the consumer segment - with an iGPU, that means a 4000-series Ryzen. But none of them come close to the efficiency of an 1151v2, except with a B550 board, and those usually have only 4 SATA ports. In that respect C246 and W480 are unbeatable. I love my Gigabyte C246N-WU2, for example; the Gigabyte C246M-WU4 and the Gigabyte W480M Vision D are also nice. In W480 ITX there is unfortunately nothing recommendable - there the manufacturers are developing right past the market.
    1 point
  19. No. Currently you must have 1 data disk defined. That shouldn't be an issue; you could technically assign a scrap 1GB USB stick to it if you want.
    1 point
  20. ...exactly - for me, none... papermerge often has new bugs in its releases, so updates of that Docker are to be enjoyed with caution, and only with paperless-ng do I get the cache/HDD spin-down "problem" handled without issues. Functionally I also found papermerge took more getting used to. Switching apps is difficult, I think, since the data model and the DB are not simply migratable/interoperable, right? But that's just as true of EcoDMS and the like... once you have such a substantial stock of data, I find it difficult - all the more so once the paper has already been shredded.
    1 point
  21. Ahead of you :-) I also noticed that and changed my post... And at the moment I am actually checking out SSDs :-) ROFL!
    1 point
  22. Yes, it works with multiple pools. Gotchas?
     1. Try to avoid spaces in share or pool names.
     2. All fields from the GUI are applied to all pools, i.e., it's all the same, with one exception at the moment:
        2a. You can specify a different percentage per pool. Create an entry in the ca.mover.tuning.cfg file (example: in /boot/config/plugins/ca.mover.tuning, cachetv="65") - "cachetv" being the name of the pool. If using single digits, a leading zero is required (i.e., cachetv="01" for 1 percent).
     3. I plan to expand this type of entry to a few other fields in the future.
     4. As with any share setting, be very careful when setting cache-prefer; there are several posts where users filled up their cache drive when using that setting.
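     As a concrete illustration of item 2a (the cfg file name, path, and quoting are taken from the post above; double-check against your existing file before appending):
```bash
#!/bin/bash
# Sketch: give the pool "cachetv" its own 65% threshold by appending an
# entry to the plugin's cfg file. Verify the exact file location first.
echo 'cachetv="65"' >> /boot/config/plugins/ca.mover.tuning/ca.mover.tuning.cfg

# Single-digit percentages need a leading zero, e.g. for 1 percent:
# echo 'cachetv="01"' >> /boot/config/plugins/ca.mover.tuning/ca.mover.tuning.cfg
```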
    1 point
  23. Incremental backup to the Synology NAS works now. The Unix extensions can in fact be enabled in the GUI; here is the feedback from Synology support on the matter. I currently have it running with SMB3, the first shares have already completed their backup successfully, and so far there have been no errors in the logs. I would still appreciate help with running the script on the Synology NAS: I was able to mount the Unraid shares via SMB successfully, but the script seems to have problems right in the first line of its executable part.
    1 point
  24. 1 point
  25. It's the other way around. On GitHub for paperless (https://github.com/the-paperless-project/paperless) it says: ... so I read from that that paperless-ng is new and that paperless itself is no longer actively maintained by the original author. Above all, with paperless-ng I can use the cache. That never worked with papermerge or paperless: the array disks (cache settings "yes/prefer") never went to sleep, caused by the monitoring of the scanner's input folder.
     Edit: Oh man... you mean papermerge. That never ran really stably for me and also had the "problem" with the cache/waking the array for the scan folder.
    1 point
  26. Maybe a path in a docker or VM that is not correct?
    1 point
  27. Thanks for the reply. I've added a static route now on both configs - some improvement, but I'm still having issues. On the simple server I can now connect to the Unraid UI and the pfSense UI, but I still can't access anything else on the network. I've tried firewall rules based on the firewall log (in the first entry, 192.168.0.31:8123 is the source and 10.253.0.2 is the destination): it looks like the remote device (the VPN peer) tries to talk to the local service, but when the local service tries to talk back there's an issue. On the complex server it's basically the same, except that I can't access the main UI at all, as it automatically forwards to the local domain (unraid.privateFQDN.org) and stops there; dockers on the Unraid server (accessed by IP address) connect perfectly.
     Edit: Found the fix for the issue - not sure why my config is causing it, but the scenario here is asymmetric routing. The solution is to enable "Bypass firewall rules for traffic on the same interface" under System/Advanced/Firewall & NAT. That fixes both of the issues described above.
    1 point
  28. Yep, on downgrading the firmware to 20.00.07.00 the disks are now showing up correctly in 6.9.2. Super weird - the .11 firmware is what it shipped with from the reseller. Regardless, I really appreciate your help, and I'll go ahead and mark this resolved.
    1 point
  29. ### 2021.05.06
     - Added 0 to the options for days old (will move anything greater than 24 hours old).
     - Added 6, 7, 8, 9 to the options for days old.
     - Increased the days-old limit to 765.
     - Removed the "Original Mover" button after the PayPal donation link.
     - Added "Move Now button follows plug-in filters:" to make the "Move Now" button run based on the plug-in settings (Yes) or run the original mover (No).
     - Moved some code around in mover.php so ionice would be initiated.
     - Updated the mover code for moving from array to cache (generally cache-prefer) to follow Squid's update of 4/11/2021 (fix for shares with a space in their name).
     ********** CAUTION ****************
     The Move Now button now calls Unraid's original move-now code! It will move almost everything; to change this, set "Move Now button follows plug-in filters:" to Yes (last question in the section).
     Future changes: add an option to invoke the original mover (i.e., move everything) if percent used reaches a user-selectable percent (must be higher than the entry for "Only move at this threshold of used cache space:").
    1 point
  30. The biggest advantage my method has is that each step is discrete per drive, so there's much less risk, as you can take your time and verify that everything is still where it should be after each step. Once you are done, the only difference you should see is more free space on each replaced drive slot and a totally empty 4TB in slot 7. All data should remain available throughout, with no major interruptions. I'd treat each line as a new "job", and look for help and ask questions as needed, checking off each one before attempting to start the next item.
     Looking back over what I wrote, step 2 may require additional steps; I can't remember if Unraid will allow you to drop a parity drive without setting a new config. You definitely can't remove a data slot. If not, then it's a simple matter of setting the new config with "save all", then unassigning parity2, checking that parity is valid, starting the array, and letting it check. After that's done you would add the old parity2 disk to slot 7 and let it clear. The array would still be available throughout.
     If you don't care to keep parity valid, you could combine steps 1 and 2 for a considerable time savings by simply doing the new config saving all data and cache, assigning the new single 8TB parity and the old parity2 to slot 7, NOT checking the "parity is valid" box, and letting it build. Proceed from there. @JorgeB would know for sure; he plays around with test arrays much more than I do.
     It's fine, but it will be slow, and it will slow down the check by more than the sum of the two tasks. Just to throw out some random figures: if a parity check with no data access takes 12 hours, and copying a block of data would normally take 1 hour without the check running, it might take 2 hours to copy and extend the parity check to 16 hours. That's an example for effect - I haven't actually done hard benchmarks, but it feels pretty close based on observations.
    1 point
  31. Pretty sure this is not going to work with 6.9.2 as-is. Please unplug the device, open a terminal window, and type (or paste) the command above. This creates a file called /etc/udev/rules.d/99-removable.rules. After that file is present, hot plug the device and see if it's recognized. We will be publishing 6.10-beta soon; any further work on this will need to be accomplished in that release. Note, for anyone wondering: we simply added Linux kernel support for Thunderbolt/USB4 per user request. We made no "promise" that we are committed to Thunderbolt/USB4 support in any particular release; however, during the 6.10-beta/rc series we will take a hard look at this, assuming assistance from those interested in helping to get this working.
    1 point
  32. Shortly after I came to unRaid, I became aware of TGF's channel - probably thanks to the YouTube algorithm. 😉 In my opinion the videos make people aware of, and curious about, unRaid and its features. I then got the in-depth information elsewhere. But I think that's exactly what's intended: the videos are meant to entertain. And they do.
    1 point
  33. Lots of UDMA CRC errors on the disabled disk; you should replace the SATA cable before rebuilding.
    1 point
  34. In principle that is correct. However, when you start the array with the drive unassigned, Unraid should be emulating the missing drive, and you should check that it contains the expected content before proceeding with the rebuild steps. The rebuild will only end up with what is on the emulated drive, so you do not want to overwrite the physical disk with that if the emulated contents do not look correct.
    1 point
  35. The error in the USB bus tree stopped the rest of the page from displaying, which prevented you from seeing the Benchmark All button. I pushed version 2.9.3, which checks for the existence of the idProduct variable and sets it if it isn't created by the USB device.
    1 point
  36. The new repository is: vaultwarden/server:latest
     Change it in the docker settings:
     1. Stop the container.
     2. Rename the repository to vaultwarden/server.
     3. Hit Apply and start the container.
     That's it. Don't forget to go to Unraid Settings >> Fix Common Problems (if the scan doesn't start automatically, click RESCAN); you will receive a notification to apply a fix for the *.xml file change. I just went through this procedure and can verify everything went smoothly.
    1 point
  37. Good news, no more AMD reset bug with RX 6xxx XT series GPUs in macOS 11.4 beta 1 on UnRAID.
    1 point
  38. The other disks aren't dangerously full, and with user shares it doesn't matter all that much which disk a file is on. You can adjust the thresholds for all disks in Disk Settings, or for a specific disk by clicking on the disk to get to its settings page. There is no "automatic" way to do what you ask, but there are several manual approaches; the unBALANCE plugin can probably help with this.
    1 point
  39. No, it lapses again afterwards. The current suspicion is that the SMART test deactivates the firmware "brake", but the SMART test then can't complete because the disk is busy with something else. As soon as the SMART test does complete, the "brake" kicks back in. That's why the order has to be:
     1. Start the parity check/File Integrity run/whatever
     2. Start a SMART short test for the disk(s) in question
     3. Enjoy 😉
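     If you'd rather start step 2 from the console than from the GUI, a sketch with smartctl (the device name is an example; repeat per disk):
```bash
#!/bin/bash
# Sketch: kick off a SMART short self-test while the parity check is running.
# /dev/sdb is an example device; adjust (or loop) for your disks.
smartctl -t short /dev/sdb
```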
    1 point
  40. How about this for a gag bit every few podcasts - like once a month/quarter for a topic: "Well, it would have worked if..." The most spectacular fails in Unraid; if it burned down, we want to know.
    1 point
  41. It would be nice to see some highlighted builds that are interesting from a hardware perspective or in the ways Unraid itself is being used; maybe that falls under the community interview option.
    1 point
  42. Ok guys... I could not resist and did a fresh install using the latest Jitsi, and it worked for me. Follow my REINSTALL guide below; it's 99.9% Spaceinvaderone, in my own words. Needless to say, you have to have a working SWAG reverse proxy.
     REINSTALL GUIDE
     Precondition: This tutorial requires that your letsencrypt/SWAG reverse proxy works and your firewall port forwards are set correctly.
     UNINSTALL
     1. Open Portainer and select the "stack" of Jitsi containers. Select all 4, stop them, and then click remove. In the Unraid web UI, click stop on the displayed Jitsi container.
     2. Open Krusader and rename the existing jitsi folder in appdata, e.g. jitsi_bak.
     INSTALL
     1. Open a terminal into Unraid from the web interface.
     2. Paste: curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
     3. sudo chmod +x /usr/local/bin/docker-compose
     4. Install Portainer if not already done (see Spaceinvader One's video, as in Community Applications).
     5. mkdir -p /mnt/user/appdata/jitsi/jitsi-meet-cfg/{web/letsencrypt,transcripts,prosody,jicofo,jvb,jigasi,jibri} (press ENTER)
     6. mkdir -p /mnt/user/appdata/jitsi/github && cd /mnt/user/appdata/jitsi/github (press ENTER)
     7. git clone https://github.com/jitsi/docker-jitsi-meet && cd docker-jitsi-meet (press ENTER)
     8. Use nano to edit the .env file -> enter: nano env.example (press ENTER)
        -> leave the 6 passwords blank at the beginning of the file
        -> edit the location of the config from CONFIG=~/.jitsi-meet-cfg to CONFIG=/mnt/user/appdata/jitsi/jitsi-meet-cfg/
        -> edit the timezone if needed (optional)
        -> enable the public URL for the web service (remove the # in front of the line) and put in YOUR full URL of your Jitsi web service, under which users can reach Jitsi, e.g. PUBLIC_URL=https://geocities.meetmeonjitsi.org
        -> remove the # from the line "DOCKER_HOST_ADDRESS=192.168.1.1" and edit the IP to your current Unraid IP address
        DON'T IGNORE the authentication part. USE authentication:
        -> remove the # from ENABLE_AUTH=1
        -> remove the # from ENABLE_GUESTS=1
        -> remove the # from AUTH_TYPE=internal
        When using nano, press CTRL+O then ENTER to write the file, and CTRL+X to exit.
     9. cp env.example .env (press ENTER)
     10. ./gen-passwords.sh (press ENTER)
     11. docker-compose up -d (press ENTER)
        Note: if you have done any install before, old Jitsi containers may already exist and show up with _1 at the end of the name. Unused old containers can be removed or pruned later.
     12. Log in to Portainer -> you will see 1 stack has been created -> select it and click docker-jitsi-meet to see all four containers. Use the video guide to see how to rename them. Essentially, assign proxynet to each container (join network), delete (leave network) the already existing network, and rename the containers as below, for all 4:
        - jicofo -> focus.meet.jitsi
        - jvb -> video.meet.jitsi
        - prosody -> xmpp.meet.jitsi
        - web -> meet.jitsi
     13. Assuming letsencrypt/SWAG is already running correctly, all the necessary files are in place -> stop SWAG, start SWAG.
     14. Open a console window into the xmpp.meet.jitsi container: in the Unraid dashboard, left-click on the xmpp.meet.jitsi container (running) and select console.
        USE as an example: prosodyctl --config /config/prosody.cfg.lua register <<YOUR USERNAME>> meet.jitsi <<YOUR PASSWORD>>
        PASTE (EXAMPLE): prosodyctl --config /config/prosody.cfg.lua register Johnny meet.jitsi PIW*$@$(J822189429
        Press ENTER. Close the console. Restart the xmpp.meet.jitsi container (optional/untested: restart SWAG again; this may save you from having to reboot Unraid).
     15. Reboot Unraid.
     16. Test Jitsi.
     Just did this myself and it worked correctly. If it's not working for you, there may be an issue with your reverse proxy. Thanks and good luck!
    1 point
  43. Major new release of UD: "Where are the switches?"
     - The "Pass Through", "Read Only", "Automount", and "Share" switches have been moved to a new Edit Settings dialog. This is also where the device script is modified. This saves some real estate on the UD page and keeps the page refresh smooth.
     - A "Show Partitions" switch has been added so that showing all partitions is set per specific hard drive device and not generically as before. The option to show all partitions has been removed from the UD settings.
     - Reads, writes, and all other status are updated every 3 seconds; you don't have to refresh the page with the browser to get an update. Mounts, unmounts, disk spin, and read/write status updates are much smoother. This is the same as the array pages: the values just update in place.
     - You can see the current status of the device switches in a tooltip by hovering your mouse over the settings icon.
     - Automount of remote shares and ISO files is now off by default. If you previously had a remote share that automounted and it doesn't now, just edit its settings and turn on the Automount switch.
     I've been trying to achieve this for a long time, and I think I finally got there.
    1 point
  44. Copy the config folder, redo the flash drive, and copy it back.
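     A minimal sketch of that sequence (the backup destination is just an example; the middle step is the flash re-creation itself, e.g. with the Unraid USB Creator):
```bash
#!/bin/bash
# Sketch: preserve the flash configuration across re-creating the drive.
cp -r /boot/config /mnt/user/backups/flash-config-backup

# ...redo the flash drive here (USB Creator or manual)...

cp -r /mnt/user/backups/flash-config-backup/* /boot/config/
```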
    1 point