Leaderboard

Popular Content

Showing content with the highest reputation on 01/18/24 in all areas

  1. Go to the "Tools" tab, then to "SYSTEM DEVICES", and please take a screenshot of "USB Devices" there; it's fairly far down. Your board has no "normal" USB on the rear IO, only 10 Gbit/s USB and faster... maybe Unraid doesn't handle those well yet. Your mainboard itself still has a USB 2 header; I would attach the Unraid stick to that via an adapter. The errors in the log look like an incompatibility with the fancy high-speed USB standards. (A quick console check is sketched below this item.)
    2 points
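     As a supplement (my addition, not from the post): a minimal way to inspect the USB topology from the Unraid console, assuming the usual lsusb tool is present. The negotiated speed column helps tell whether the boot stick sits on a USB 2 or a high-speed port:

       # Show the USB device tree with negotiated speeds:
       # 480M = USB 2.0, 5000M/10000M = USB 3.x. A boot stick on a
       # high-speed port is a candidate for the xhci resets in the log.
       lsusb -t

       # List vendor/product IDs to identify the Unraid boot stick.
       lsusb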
  2. Nextcloud based on the official Docker Hub image. Nextcloud with full Office integration. Based on: https://hub.docker.com/_/nextcloud/ Tag: apache (latest). Please make sure you get the volume mounting correct. Note that you can mount any share to, for example, /mnt/Share and mount it in Nextcloud with the "external storage" app.
     -- DONATE: Please buy me a pizza > https://www.buymeacoffee.com/maschhoff
     -- GUIDES:
     ---- FOLDER RIGHTS
     To get the right folder/file rights, add this to Extra Parameters and Post Arguments:
     ExtraParams: --user 99:100 --sysctl net.ipv4.ip_unprivileged_port_start=0
     PostArgs: && docker exec -u 0 NAME_OF_THIS_CONTAINER /bin/sh -c 'echo "umask 000" >> /etc/apache2/envvars'
     ---- REVERSE PROXY
     For security reasons, and to get HTTPS, you should use a reverse proxy. If you want to run Nextcloud behind one, make sure you get your proxy configuration right. As a reverse proxy I recommend LetsEncrypt with nginx; you will find a lot of example configurations. Then add 'overwriteprotocol' => 'https' to your Nextcloud config (see the sketch below this item). You will no longer be able to access it without your HTTPS reverse proxy.
     ---- SPLIT DNS
     If you are running Nextcloud in your home network, it is a good choice to use split DNS so the connection goes directly to your Nextcloud. You will also need it to make ONLYOFFICE accessible from both inside and outside of your network. To get this done you need a matching reverse proxy configuration and the right DNS entry. Example for a dnsmasq or Pi-hole DNS server config:
     address=/cloud.mydomain.com/192.168.100.100
     address=/cloud.mydomain.com/1a01:308:3c1:f360::35e
     192.168.100.100 should be your reverse proxy. You need a proxy configuration that points "cloud.mydomain.com" to your real Nextcloud IP.
     ---- SECURITY CHECK
     You can schedule a task that automatically checks the security level and sends you a push notification if there are any issues. It is based on https://scan.nextcloud.com/ and https://github.com/maschhoff/nextcloud_securipy
     Any questions? Don't hesitate to ask
    1 point
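     As an illustration (not part of the template above): the overwriteprotocol setting can also be applied from the console with Nextcloud's occ tool instead of editing config.php by hand. NAME_OF_THIS_CONTAINER is the same placeholder used in the template:

       # Set 'overwriteprotocol' => 'https' in config.php via occ
       # (run as the www-data user inside the official apache image).
       docker exec -u www-data NAME_OF_THIS_CONTAINER php occ \
           config:system:set overwriteprotocol --value="https"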
  3. If anyone else ever wants to play Planetary Annihilation and needs a container, this is my take on that setup. Note that it requires a non-anonymous Steam login, which makes setup slightly trickier. https://hub.docker.com/r/obiwantoby/pa-dedicated-server
    1 point
  4. Per their FAQ: "We will provide the tools to create these servers yourself at launch." Release is tomorrow, so I will look at it then. I understand what you are saying, though, about making sure the parameters are correct. I'll come back if I hit any roadblocks. I appreciate all that you do! I have used your Craftopia, Valheim, V Rising, and Core Keeper servers in the past and never had any issues!
    1 point
  5. Currently: Seagate Exos X18 18TB at Mindfactory/Mindstars for 250 euros, shipping included https://www.mydealz.de/deals/18tb-seagate-exos-x-x18-st18000nm000j-7200umin-256mb-35-89cm-sata-6gbs-2305408 That works out to roughly 13.89 euros/TB. EDIT 23.01.2024 20:50: After the offer disappeared around noon today, it has now reappeared at Mindfactory in the Mindstars category for 249 euros, shipping included.
    1 point
  6. As a rule of thumb, simplified: system, domains, and appdata on the cache (that is the default anyway). User data, including the user data of Docker containers, on the array. There is nothing more I can say about it; we are going in circles.
    1 point
  7. That appears to have been the problem. Now I was able to remove the drive and it's balancing as I type this! Thank you for the help!
    1 point
  8. @DataCollector @domrockt Thanks for your help. As a first step I will try the adapter and see how it behaves. By the way, today's restarts (2x) worked cleanly... (I changed nothing in the settings 😮) I will report back as soon as it arrives
    1 point
  9. If you use btrfs, it has built-in checksumming for detecting bitrot (see the scrub sketch below).
    1 point
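     As a supplement: btrfs validates those checksums on every read, and a scrub walks the entire pool to surface silent corruption proactively. A minimal sketch, assuming a btrfs cache pool mounted at /mnt/cache:

       # Read all data and metadata and verify it against the stored checksums.
       btrfs scrub start /mnt/cache

       # Check progress and any checksum errors found so far.
       btrfs scrub status /mnt/cache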
  10. Okay, no suspicious other USB devices. If it really is a USB power loss, the question for me is: why does the voltage on the USB port drop? Unfortunately, all USB ports of the mainboard appear to be driven directly by the chipset. If there were an additional chip for that, one could hope its ports were powered separately. You have surely already tried other USB ports for booting. If you still have a USB add-in card somewhere, you could check whether you can boot from it and whether things work again with it. That could then mean, though, that the hard shutdown killed some control circuitry on the mainboard, in the worst case even in the chipset. On many cheap mainboards the 5V for USB comes straight from the power supply connector; somewhat better ones at least have a fuse (polyfuse) in between, and the more ambitiously designed mainboards have their own voltage regulation for USB. Your mainboard really is not one of the cheaper ones, but I could not find much about its USB power supply (MSI seems to have put together only a very rudimentary manual). However, the note that extra software should be installed to charge an iPad or the like does suggest there is a controllable extra circuit on board. So (in my eyes) the only option left is to keep experimenting. As nice as it is to find and solve/work around a fault, I find the searching until you get closer to the cause just as annoying. Good luck!
    1 point
  11. No, it's not. Did you read the "Source path" info box? Is the volume mapping in question used by another container?
     Never saw this one before. Tar is saying it could only write x blocks out of y. How is the destination connected to Unraid? Is it a network path or an internal drive? It feels like the file was being modified by some other process; I don't know exactly. Any container using the same mapping?
     Are those mentioned paths empty on the source? You set it to exclude /mnt/user, but everything to back up is in there, so it gets excluded. Please check your Duplicacy exclusions.
     You say this message appears even when the container is not started?
     There are small outstanding fixes in the next release, but basically, currently: PreRun is executed directly after checking the existence of the destination. PreBackup is executed ONE TIME after the Docker XMLs got backed up, right before backing up the first container. PostBackup is executed right after the last container backup, also one time. PostRun fires last. (The hook order is sketched below this item.) But your per-container scripts idea sounds useful; I will note that.
     Also a "file shrank" issue. Is there any other container using this mapping?
     The stop method actually waits DOCKER_TIMEOUT seconds for the container to stop; this variable is set via the Docker settings page (Stop timeout). So, yes, it seems some containers take too much time. I have to check whether the timeout means "wait x seconds and kill the container" or whether it just stops waiting; I believe it's the first. I believe that if the timeout hits, I get a non-success message back and therefore test the docker-stop method directly, which succeeds (the container is already stopped by then). No issue there, but please check how long the container really needs to stop. Maybe you should increase the timeout.
     That being said: since Christmas I have had nearly zero time for anything here. There will be some movement in the next 2 weeks. The current beta should go live (770 downloads and NO feedback; I guess that's good?). Directly after that, the plugin will be able to detect volume multi-usage and report it to the user, or it will create auto-grouping, I don't know yet. There is also work needed on the restore wizard then.
    1 point
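     For illustration only (my sketch, not the plugin's actual code): the hook ordering described above, as pseudo-shell. All function names are placeholders:

       check_destination_exists        # abort if the backup target is missing
       run_hook PreRun                 # once, directly after that check
       backup_docker_xmls
       run_hook PreBackup              # once, right before the first container
       for container in $CONTAINERS; do
           backup_container "$container"
       done
       run_hook PostBackup             # once, right after the last container
       run_hook PostRun                # fires last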
  12. I went ahead and guessed that it might be that or the PSU. I overnighted a 750W PSU (better than the Delta 550 I had) and a Dell H200 in IT mode. Installed them this morning, and so far it's rebuilding the 2 "failed" disks. Also, I'm seeing about 30MB/s more write speed. In the end: new PSU, HBA, and cables. Fingers crossed!
    1 point
  13. Yeah, I'll give the check a go, and then I guess I'll try the rebuild. If that succeeds, it's possible there was something about the unassigned drive (Toshiba MG07ACA14TE) that was causing an issue in the external enclosure, so I'll leave that out of the system for a while.
    1 point
  14. Yes, exactly, so I think USB 1-6. The errors appear to be tied to the Unraid stick: your stick gets full speed via xhci_hcd, then the thing resets itself and reports a power loss; the machine tries to get it working again, at first without success, and in the end it does come back up 🙂
    1 point
  15. Thank you. My gut said it was provider-side. I pulled new OVPN files last night and ran into the same thing, but after reading your post today I went to grab new files, and lo and behold, there were new servers for the locations I had been trying. Worked like a charm. Must have been some sort of outage over the past week or so; every time I tried new files it still didn't work, and all 5 of the servers I had tried are removed from the new list. Thanks for confirming.
    1 point
  16. [post content not captured in this extract]
    1 point
  17. I'm constantly getting a "tar creation failed" on every backup for Plex and other containers. The error I get for Plex and tdarr, for example:
     [17.01.2024 04:13:23][ℹ️][tdarr] Should NOT backup external volumes, sanitizing them...
     [17.01.2024 04:13:23][ℹ️][tdarr] Calculated volumes to back up: /mnt/cache/appdata/tdarr/logs, /mnt/cache/appdata/tdarr/server, /mnt/cache/appdata/tdarr/configs, /mnt/cache/appdata/tdarr/transcode
     [17.01.2024 04:13:23][ℹ️][tdarr] Backing up tdarr...
     [17.01.2024 04:16:07][❌][tdarr] tar creation failed! Tar said: tar: /mnt/cache/appdata/tdarr/server/Tdarr/Backups/Backup-version-2.17.01-date-17-January-2024-00-00-01-ts-1705467601057.zip: File shrank by 52828677 bytes; padding with zeros
     [17.01.2024 04:16:09][ℹ️][plex] Should NOT backup external volumes, sanitizing them...
     [17.01.2024 04:16:09][ℹ️][plex] Calculated volumes to back up: /mnt/cache/appdata/plex
     [17.01.2024 04:16:09][ℹ️][plex] Backing up plex...
     [17.01.2024 06:14:11][ℹ️][plex] tar creation failed! Tar said: tar: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Media/localhost/3/55b946d9ba0e353f5c12c3ee2911e38d686bad5.bundle/Contents/Indexes/index-sd.bif: File shrank by 5892895 bytes; padding with zeros; tar: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Media/localhost/8/bce816be2aa4181d0bcc1dd4f5b3e4fa07adda6.bundle/Contents/Indexes/index-sd.bif: File shrank by 1206301 bytes; padding with zeros; tar: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Media/localhost/a/889476413724d73a052dac429bd3da5040d5201.bundle/Contents/Indexes/index-sd.bif: File shrank by 5933323 bytes; padding with zeros; tar: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Media/localhost/e/e081ca77978af00b9f7bed630525f53c728535d.bundle/Contents/Indexes/index-sd.bif: File shrank by 4318187 bytes; padding with zeros
     Not sure if this is the correct place to post this issue, but please let me know if it is not. Does anyone know what settings could be causing these issues? Debug ID: e0a37b9a-e599-408a-942d-e6855e3d6ed9 Thanks!
    1 point
  18. I was just being dramatic on the percentage, appreciate the help. Yeah through a proxy. Thanks for the config I will take a look. 👍
    1 point
  19. Thank you so much for your support
    1 point
  20. Hello alturismo, that is a great hint; it helped me a lot. Many thanks. Best regards, StarSurfer
    1 point
  21. Assuming you can get the packages installed where you want them... everything that sits in /extra on the stick gets installed along at every boot, if that was the question (see the sketch below this item). It cannot be persistent, because the system "reinstalls itself" into RAM at every boot from those sources; nothing is installed onto a disk here. Apart from that, whether this is really the best idea is debatable... most people rather use a Docker container, an LXC container, a VM, ... as a "development environment" to run such things (compiling, ...). Your decision.
    1 point
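     A minimal sketch of the /extra mechanism described above, assuming a Slackware-style .txz package (the package file name is a made-up example):

       # Packages placed in /extra on the boot stick are installed
       # automatically at every boot, since the running system lives in RAM.
       mkdir -p /boot/extra
       cp mypackage-1.0-x86_64-1.txz /boot/extra/

       # To install it right away without rebooting:
       installpkg /boot/extra/mypackage-1.0-x86_64-1.txz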
  22. "This is how I would do it": rather, this is how you could do it. You don't really need Unraid for that, though; all the others can do it too
    1 point
  23. e1000e? Which museum did you dig that out of? These days it's really only emulated by hypervisors, for "backwards compatibility" reasons. Could it be that your Unraid is a VM?
    1 point
  24. What you're describing is the issue I had (and that was fixed in this latest version). The Docker Compose documentation doesn't really explain that specifying the env-file in the compose.yaml just sends the entire .env to the container, hence you can't access those values in the compose.yaml file. Having the .env file in the compose.yaml directory (or specifying it on the CLI with the --env-file flag) loads the env into the compose.yaml, and then you can access the values (see the sketch below this item). Very confusing and annoying! I have around 30 containers and wanted to be able to easily keep all the "config" values, such as appdata and document directories, database passwords, domains... all in one .env file so it's easier to update, change, and migrate everything. This plugin, together with the Folder View plugin, makes Unraid's Docker management much easier.
    1 point
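     A small demonstration of the distinction described above (file names are assumed examples): variables from env_file: reach only the container's environment, while a .env file next to compose.yaml, or one passed explicitly, is also used for ${...} interpolation inside compose.yaml itself:

       # Make the values in ./settings.env available for ${...}
       # interpolation in compose.yaml, then start the stack.
       docker compose --env-file ./settings.env up -d

       # Render the fully interpolated config to verify what each
       # ${VARIABLE} actually resolved to.
       docker compose --env-file ./settings.env config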
  25. Not for me... CF Zero Trust, domain, you access it via the external domain, for example https://paperless.deine_domain.de. Does Paperless work now, or does it not? If your question is about the "local" forwarding to Paperless, that doesn't matter... HTTPS is established between CF and the client, if that was the question...
    1 point
  26. I ended up implementing this my own way inside of Unraid, since I needed the additional functionality this brings. It's pretty easy to do, actually. In /boot/config/go, I added this line:
     until [ -f /etc/samba/smb.conf ]; do sleep 1; done && echo "\tinclude = /boot/config/smb-override.conf" >> /etc/samba/smb.conf
     Then I added my custom Samba changes to /boot/config/smb-override.conf. What the above does:
     1. Wait for Unraid's smb.conf to be present in the ramdisk (realistically it already should be by the time this executes, but we need to be certain, so we check and wait if not).
     2. Append a line saying to include /boot/config/smb-override.conf after the include for smb-shares.conf, which is generated by Unraid.
     Samba processes these config files from top to bottom, and includes are processed inline. This means anything declared in the first include happens there, then the second, then the third, and each of those respective files is processed top to bottom completely when Samba stitches the configurations together. This is good for us, since we can now override any per-share settings we want, or create additional shares outside of what Unraid provides. (An example override is sketched below this item.)
    1 point
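     A hypothetical smb-override.conf to go with the mechanism above; the share name and settings are placeholders, not from the original post. These settings win because Samba reads this include after Unraid's generated ones:

       # Contents of /boot/config/smb-override.conf (illustrative):
       [backups]
           path = /mnt/user/backups
           read only = yes
           valid users = alice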
  27. Update 2024.01.18b - HUGE update, really.
     IMPROVEMENT: Rewrote-ish the cronjob script; works in the plugin GUI as before. Also added back the possibility to run it via CLI again, for manual control: php -f /usr/local/emhttp/plugins/disklocation/pages/cron_disklocation.php cronjob|force [silent]
     BUG: Yes, to both - probably (fixed old ones and created new ones).
     IMPROVEMENT: Some minor design tweaks to fit larger systems and not use too much space on the screen/page.
     IMPROVEMENT: Added a "hover" over the tray number on the dashboard to show the temperature.
     BUG: Fixed a glitch in the cronjob script. (version b)
    1 point
  28. That was it... I thought I looked in every directory, but I guess I missed that one... thank you for the help!!
    1 point
  29. It was fixed for a time, then it came back in 6.12, reported here: Not sure how to elevate it to the devs; it's a simple fix. In the meantime, I went back to my workaround/hack.
    1 point
  30. New release of UD. Notable changes:
     The method for determining whether a server is online is no longer a ping. The SMB or NFS port is checked to be open on the remote server, and this determines whether the server is online for either SMB or NFS shares (see the sketch below this item).
     Added a per-SMB-remote-share device setting to enable encryption on the CIFS mount. This is intended for servers outside the LAN; using it on LAN servers will introduce a performance hit.
     Reworked the Unraid startup script that waits for the network to be active. It no longer strictly requires the gateway to respond to a ping: the startup waits up to 2 minutes for the gateway to be available and respond to a ping, and if the gateway is available but does not respond within 2 minutes, remote shares are mounted anyway.
    1 point
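     A rough equivalent of that reachability check, as a sketch (my illustration, not UD's actual code). 445 is the SMB port; use 2049 to test NFS instead. The IP is a placeholder:

       # Treat the server as online if its SMB port accepts a TCP
       # connection within 5 seconds.
       if nc -z -w 5 192.168.1.50 445; then
           echo "server online (SMB)"
       else
           echo "server offline"
       fi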
  31. Did the DayZ container get borked since the latest DayZ Experimental build? My server gets to this point, but I'm not able to connect to it anymore. //edit: weird - the mpmissions folder was missing. Grabbed it from somewhere else and it launches fine
    1 point
  32. That is the Dynamix file browser. Try removing the Dynamix File Manager plugin and see if it clears up. If not, post diagnostics.
    1 point
  33. Show a screenshot. What mount path are you using to access the files?
    1 point
  34. Good news everyone! I managed to get the C10 package C-state (previously I got no higher than C3) on an ASRock LGA1700 mobo, and you can too. Yay! My setup is:
     Motherboard: Asrock H610M-ITX/ac
     CPU: i5-12500
     NVME: Samsung 970 EVO 500Gb
     SSD: PLEXTOR PX-128M (only used on Windows)
     2x 2.5" HDD: 250GB Samsung HM250HI + 4TB Seagate ST4000LM016 (on Proxmox)
     RAM: 2x32Gb Samsung DDR4 3200
     PSU: Corsair RM650x 2021
     You have to enable/change hidden BIOS menus by using the AMISCE (AMI Setup Control Environment) utility, v5.03 or v5.05 for Windows (it is sometimes provided with MSI software products and can easily be found on the internet). You have to install Windows and enable an Administrator password in your BIOS.
     1. Run PowerShell as admin, cd to the folder where your AMISCE is extracted, then run this command:
     .\SCEWIN_64.exe /o /s '.\setup_script_file.txt' /a
     In setup_script_file.txt, current values are marked with an asterisk "*". Our goal is to change "Low Power S0 Idle Capability" from 0x0 (Disabled) to 0x1 (Enabled). From the command line you can check the value/status with this command:
     .\SCEWIN_64.exe /o /lang 'en-US' /ms "Low Power S0 Idle Capability" /hb
     A "*" next to "[00]Disabled" indicates it is currently disabled. Then change it:
     .\SCEWIN_64.exe /i /lang 'en-US' /ms "Low Power S0 Idle Capability" /qv 0x1 /cpwd YOUR-BIOS-ADMIN-PASSWORD /hb
     Check again:
     .\SCEWIN_64.exe /o /lang 'en-US' /ms "Low Power S0 Idle Capability" /hb
     I also changed these settings because I wanted to:
     .\SCEWIN_64.exe /i /lang 'en-US' /ms "LED MCU" /qv 0x0 /hb
     .\SCEWIN_64.exe /i /lang 'en-US' /ms "Native ASPM" /qv 0x0 /cpwd YOUR-BIOS-ADMIN-PASSWORD /hb
     .\SCEWIN_64.exe /i /lang 'en-US' /ms "Discrete Bluetooth Interface" /qv 0x0 /cpwd YOUR-BIOS-ADMIN-PASSWORD /hb
     .\SCEWIN_64.exe /i /lang 'en-US' /ms "UnderVolt Protection" /qv 0x0 /hb
     .\SCEWIN_64.exe /i /lang 'en-US' /ms "Password protection of Runtime Variables" /qv 0x0 /cpwd YOUR-BIOS-ADMIN-PASSWORD /hb
     2. Another approach is to edit setup_script_file.txt manually by changing the asterisk location, and then:
     .\SCEWIN_64.exe /i /s '.\setup_script_file_S0_enable.txt' /ds /r
     Finally, reboot your machine. In Windows I get the C8 package C-state (ThrottleStop utility) and 4.5 watts from the wall at idle (display asleep). In Proxmox (sorry, I don't use Unraid, but this forum is a godsend), as you see, I get C10 (couldn't believe my eyes at first) and 5.5-6 watts from the wall with the disks spun down (added two 2.5" HDDs, a 250GB Samsung HM250HI and a 4TB Seagate ST4000LM016, instead of the Plextor SSD).
     This guide was heavily inspired by another guide (I don't know if it's allowed to post links to other forums, but you can find it by searching "Enabling hidden BIOS settings on Gigabyte Z690 mainboards").
    1 point
  35. Feel free to make a feature request to add 5 minutes to the available spin-down times.
    1 point
  36. OK, when I have some time (probably in the next few weeks) I will update the BIOS and give it a try. I actually downgraded because I expected to save much more energy, but in the end the difference is negligible, and by the time I found that out it was too late. At the moment my system idles at 45W, so 10-15W less than before
    1 point
  37. In case you were wondering, I got this to work by going into the console, editing /usr/share/grafana/grafana.ini to port 3006, then using
     grafana-server --homepath='/usr/share/grafana' --config='/usr/share/grafana/conf/defaults.ini'
     to launch the Grafana server from the console. It worked, but the preconfigured dashboards weren't there, and I lost all motivation to re-establish them.
     *Edit: if you install the OG container, then change it to the kyle one, you can follow the steps I posted and get the GUS dashboard! This was a lot of work lol
    1 point
  38. Hi everyone, sorry for the "thread necromancy", but I hesitated for a long time between refreshing this topic a bit and creating a new one. For the mods: let me know if you'd prefer a new thread to be created; I'm fine with that. I have used the small "mini" USB drives as boot drives for many years, to avoid having something long and fragile sticking out of my Unraid server cases. But even though I stuck to reputable brands, they always seem to end up "read only" after a few years [at which point it becomes impossible to update plugins]. Fair enough; I have now moved to internal motherboard USB2-to-USB-A adapters and "longer" sticks, to avoid the prolonged overheating that seems to kill those mini boot drives (too) quickly. But then it struck me, and please correct me if I'm assuming wrongly, but... the official recommendation is still to use only "USB2" drives from reputable manufacturers, at a maximum of 32GB. So... I am not disputing that *some* among us run different boot drives, but there seems to be a largely mixed bag of bad experiences with anything non-USB2 and/or larger than 32GB. And nowadays it is becoming harder and harder to get such devices [still possible, but it requires a LOT of double- and triple-checking]. At some stage it will probably not be possible anymore. Shouldn't we be getting a bit more love regarding the choice of USB boot drives? (I'm not talking about wasting resources on SSD or anything else using precious SATA or PCIe ports, just USB.) Can we see official support for USB3 drives of larger capacity? Browsing online sellers, it seems to me that 128GB is kind of the "new" minimum.
    1 point
  39. Greetings. This issue was previously reported: And subsequently fixed in 6.11.1: However, it's back now in 6.12.4. Looking closely at /etc/rc.d/rc.docker, there are two issues:
     Issue #1 - line 296:
     if [[ -n $MY_NETWORK && $MY_NETWORK != $MY_NETWORK ]]; then
     The point of the if-statement is to skip processing MY_NETWORK in the inner loop if it is the same as MY_NETWORK from the outer loop. This situation represents the case where we are looking at the default network defined in the XML, which will be processed like normal in the outer loop, hence the attempt to skip it in the inner loop. However, someone has done some 'clean up' and renamed the variables in this inner loop the same as the ones in the outer loop, specifically MY_NETWORK, which overwrites our only handle to the one in the outer loop. As-is, it just skips everything, because MY_NETWORK is always equal to MY_NETWORK. In order to fix this, we need a new variable to hold the value of MY_NETWORK before the inner loop starts, and we need to reference that in the if-statement, like so:
     OUTER_LOOP_NETWORK=$MY_NETWORK  # save off the value to use in the if-statement below
     for ROW in $USER_NETWORKS; do
       ROW=(${ROW/;/ })
       MY_NETWORK=${ROW[0]}
       MY_IP=${ROW[1]/,/;}
       if [[ -n $MY_NETWORK && $MY_NETWORK != $OUTER_LOOP_NETWORK ]]; then
         LABEL=${MY_NETWORK//[0-9.]/}
         if [[ "br bond eth" =~ $LABEL && $LABEL != ${PORT:0:-1} ]]; then
           MY_NETWORK=${MY_NETWORK/$LABEL/${PORT:0:-1}}
         fi
         logger -t $(basename $0) "container $CONTAINER has an additional network that will be restored: $MY_NETWORK"
         NETRESTORE[$MY_NETWORK]="$CONTAINER,$MY_IP ${NETRESTORE[$MY_NETWORK]}"
       fi
     done
     Issue #2 - lines 297 - 300:
     LABEL=${MY_NETWORK//[0-9.]/}
     if [[ "br bond eth" =~ $LABEL && $LABEL != ${PORT:0:-1} ]]; then
       MY_NETWORK=${MY_NETWORK/$LABEL/${PORT:0:-1}}
     fi
     I am guessing this has something to do with reworking network names when a user has applied the macvlan fix introduced in 6.12.4; maybe it tries to do an automatic 'translation' of sorts? It doesn't seem to 'translate' things very well. For me, anyway, it always just wants to change 'bond' to 'eth', so I get a bunch of 'eth2.10', 'eth2.20', etc. instead of what they should be: 'bond2.10', 'bond2.20', etc. I have to comment this out completely in order for the script to correctly restore my networks. It was obviously put here for a particular situation, but it seems to break other situations. My initial thought would be to get rid of it entirely and let users know they will need to update their Post Arguments to correctly reference the proper network name(s), but I admit I am not aware of the exact, original reason this was put here, so I may be missing something. Any thoughts? Thanks!!
    1 point
  40. For anyone who comes here via Google, I've looked into this myself a little, and I thought I'd share my conclusions. IF you are going to do this, the way to go is definitely to virtualize Unraid and run both it and pfSense on Proxmox. Having said that, doing so may be pointless for those users who are trying to reduce the energy consumption of their homelab (read on).
     The biggest issue I've found with running pfSense on Unraid is that it's impossible to have any VMs running if the array is not started, even if those VMs and their data are hosted entirely on non-array disks. This has a few implications: if the array is down, pfSense is down, and as a result:
     - Your DHCP server is down. Fine if you have configured static IPs on important LAN devices (as I have), but if you've instead allocated static IPs in the pfSense DHCP service, everything on your network will be inaccessible.
     - Your VPN server and router are down. This is the big one for me. If the Unraid box loses the array for some reason (or loses power and fails to restart the array), both your Internet connection and your VPN server are gone. No remote troubleshooting for you.
     - You will be unable to set any of this up in an Unraid trial: Unraid trials will not start the array until Internet access is available, and conversely, Internet access will not be available until the array is started. This setup is therefore completely untestable without first buying a license.
     None of these issues exist if you virtualize Unraid on Proxmox and pass through your SATA controller(s) and/or HBA(s). Doing this actually works pretty well, but on my server it results in significantly higher idle CPU usage and idle power consumption (~+8-10 watts, on a server that otherwise draws only 15 watts at idle) vs. just running Unraid on the metal. A side benefit is that (at least on my system) Proxmox boots and starts pfSense very quickly, whereas the Unraid boot is glacially slow (and it would require still more time to start pfSense after that!).
     Since the power consumption hit is so huge for virtualizing Unraid, at this point I'm planning to run Unraid on the hardware and run pfSense on a separate system that takes only ~10 watts idle in total (the same as the hit to my Unraid server). Yes, this situation is extraordinarily frustrating, but I guess I just have to keep in mind that Unraid is primarily a storage server, and it isn't reasonable to expect its VM management capabilities to be comparable to a dedicated virtualization product. I understand that "most" applications of virtualization would need the array up anyway... but it's still disappointing that there's no way to run a router VM independent of the array status.
    1 point
  41. For anyone else having trouble with this: when it asks for a trusted domain, it's the domain you GO TO to access Nextcloud... not the one you're accessing it FROM. So just go to /config/www/nextcloud/config and add your domain to config.php as described here (a sketch of the setting follows this item). Much easier than I originally thought.
    1 point
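     For reference, a sketch of the setting in question; the hostname and LAN IP are placeholders, not from the original post. The trusted_domains array in Nextcloud's config.php typically looks like this:

       'trusted_domains' =>
         array (
           0 => '192.168.1.10',
           1 => 'cloud.example.com',
         ),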
  42. What is the difference between the Private and Secure setting in the security settings?
    1 point
  43. Secure allows you to specify which users have RW access, but all users have read access. Private allows you to specify which users have RW access, which users have read-only access, and which users have no access.
    1 point
  44. A Secure share can be read by everyone but only written by specified users. A Private share can only be read or written by specified users.
    1 point