Leaderboard

Popular Content

Showing content with the highest reputation on 04/24/23 in all areas

  1. Hi @stulpinger, have a read through this thread.
    2 points
  2. Just thought I would add some context here for those looking at this thread. I came here to see example configs, and the examples above seem to imply that the Zone ID is the URL, i.e. `google.com`, when in fact it is not, which is probably the reason @mjeshurun was getting the error they were getting. The Zone ID is simply the ID Cloudflare uses to identify a domain. This link will show you where to find the Zone ID if you do not already know: https://developers.cloudflare.com/fundamentals/get-started/basic-tasks/find-account-and-zone-ids/
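For anyone who prefers the command line over the dashboard, the Zone ID can also be looked up through the Cloudflare API. A minimal sketch, assuming you have an API token with Zone:Read permission exported as CF_API_TOKEN and that example.com stands in for your own domain:
# Ask the API for the zone record matching the domain, then pull out the
# 32-character hex "id" field -- that value is the Zone ID, not the domain itself.
curl -s -H "Authorization: Bearer ${CF_API_TOKEN}" \
     "https://api.cloudflare.com/client/v4/zones?name=example.com" \
  | grep -o '"id":"[a-f0-9]\{32\}"' | head -n1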
    2 points
  3. I tested this with one episode and, sure enough, it works ... cool idea. You could even automate it if you wanted: compare the file size against RAM, read the file in, and trigger the spin-down.
root@AlsServer:/mnt/user/Media/TVRIPS/The Following/Season 01# cat The\ Following\ -\ S01E01\ -\ Genie\ und\ Wahnsinn.mkv >/dev/null
root@AlsServer:/mnt/user/Media/TVRIPS/The Following/Season 01#
---
Apr 24 06:40:53 AlsServer emhttpd: read SMART /dev/sdc
Apr 24 06:41:25 AlsServer autofan: Highest disk temp is 25C, adjusting fan speed from: OFF (0% @ 0rpm) to: 80 (31% @ 2830rpm)
Apr 24 06:43:12 AlsServer emhttpd: spinning down /dev/sdc
Apr 24 06:46:30 AlsServer autofan: Highest disk temp is 0C, adjusting fan speed from: 80 (31% @ 2842rpm) to: OFF (0% @ 2824rpm)
It has now been playing from RAM for 8 minutes ... and after stopping and marking it as watched, everything stays spun down ... Thanks, I will think about scripting this ...
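For the record, a rough sketch of how that automation could look. This is only an illustration under assumptions: the file path and /dev/sdc are placeholders, the file must fit into free RAM, and hdparm -y is used here as a generic way to put the drive into standby rather than Unraid's own spin-down mechanism:
#!/bin/bash
# Hypothetical sketch: read a media file into the page cache, then spin down the source disk.
FILE="/mnt/user/Media/TVRIPS/example-episode.mkv"   # placeholder path
DISK="/dev/sdc"                                     # placeholder device backing that file

# Only attempt this if the file is smaller than the currently available RAM.
FILE_SIZE=$(stat -c %s "$FILE")
AVAIL_RAM=$(( $(awk '/MemAvailable/ {print $2}' /proc/meminfo) * 1024 ))

if (( FILE_SIZE < AVAIL_RAM )); then
    cat "$FILE" > /dev/null      # pull the file into the page cache
    hdparm -y "$DISK"            # put the drive into standby (spin down)
else
    echo "File larger than available RAM, skipping." >&2
fi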
    2 points
  4. No. At boot, UNRAID is loaded into RAM with the configs from the stick. However, all changes are (necessarily) written back to the stick. That is one reason it is recommended NOT to use USB 3 sticks, only USB 2, or at least to put the stick into a USB 2 port. That way the risk of the stick overheating is much lower.
    1 point
  5. Thanks, sounds interesting! I'm getting an error on Ubuntu even with sudo: "nvidia-persistenced failed to initialize. Check syslog for more details." And the syslog shows:
Apr 24 19:20:31 ubuntu nvidia-persistenced: Failed to lock PID file: Resource temporarily unavailable
Apr 24 19:20:31 ubuntu nvidia-persistenced: Shutdown (3457)
The manual says you don't normally need to run it directly and can use nvidia-smi instead: "Once the init script is installed so that the daemon is running, users should not normally need to manually interact with nvidia-persistenced: the NVIDIA management utilities, such as nvidia-smi, can communicate with it automatically as necessary to manage persistence mode." I was able to set it using: sudo nvidia-smi --persistence-mode=ENABLED. At first that increased power usage to 49W, and then it dropped back to 20W. So that seems to be the easier alternative. Thanks!
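As a small aside (not from the original post), you can check whether the setting took effect with a query like this; it should report Enabled (or Disabled) per GPU:
nvidia-smi --query-gpu=persistence_mode --format=csv,noheader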
    1 point
  6. I've always had some of those as well; it doesn't seem to cause a problem. It seems the Cloudflare API sometimes times out while the container tries to get the existing records, so it then tries to add some that are already there.
    1 point
  7. @ich777 I tried some more: I used a different (higher quality) HDMI cable to connect to the real monitor, so I used that cable to connect to the PiKVM; it didn't help. I also tried using a different EDID in PiKVM (I was using 1920x1080, tried 1280x1024 instead), but it still crashed with the same error. I've reverted to rc2 for now and will try again when rc4 is out (let me know if you want me to run some more tests on rc3).
    1 point
  8. Did you remake the flash drive? That happens if the template isn't within /config/plugins/dockerMan/templates-user on the flash drive (e.g. you inadvertently deleted it), is corrupt, etc.
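If you want to verify before remaking anything, you can list the templates folder from the Unraid terminal (assuming the usual /boot mount point for the flash drive):
ls -l /boot/config/plugins/dockerMan/templates-user/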
    1 point
  9. Shouldn't be an issue. I would suggest uninstalling and re-installing. Make one small change to the config and save. Then leave the schedule page and come back to make the rest of your changes.
    1 point
  10. Yes, search for what you're looking for in Apps, then click "Click here to get more results from dockerhub". Once you find what you're looking for, you can click Install. It'll ask you if you want Unraid to attempt to determine applicable paths, ports, etc. You'll probably need to manually enter variable names/values.
    1 point
  11. Don't take the blog too literally; it was just meant to be a basic guide. The blog was written for a Docker image that doesn't come with a text editor, so he is showing how to install nano. The linuxserver.io image includes nano and vi already. Just use whichever text editor you are more comfortable with. The main thing is that you need to edit your version.php file to look like this:
<?php
$OC_Version = array(23,0,12,2);
$OC_VersionString = '23.0.12';
$OC_Edition = '';
$OC_Channel = 'stable';
$OC_VersionCanBeUpgradedFrom = array (
  'nextcloud' => array (
    '22.0' => true,
    '23.0' => true,
  ),
  'owncloud' => array (
    '10.11' => true,
  ),
);
$OC_Build = '2023-03-21T09:23:03+00:00 62cfd3b4c9ff4d8cdbbe6dcc8b63a1085bb94e3d';
$vendor = 'nextcloud';
Now restart the Docker container and see if you can get back into the GUI. It should all be happy now and offer an upgrade to 24.x.
    1 point
  12. Manually extract the tar to a temporary location and then copy the folder for the app you need.
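A rough sketch of what that could look like, with placeholder archive and folder names (adjust to your own backup file and the app you need):
# Extract the whole backup somewhere temporary, then copy back only the one app folder.
mkdir -p /tmp/appdata-restore
tar -xf /mnt/user/backups/appdata-backup.tar.gz -C /tmp/appdata-restore   # placeholder archive path
cp -r /tmp/appdata-restore/someapp /mnt/user/appdata/                     # placeholder app folder
rm -rf /tmp/appdata-restore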
    1 point
  13. So I have not hit this myself, but it looks like Nextcloud is either trying to upgrade or downgrade in an unsupported way. It is normally only allowed to upgrade one major release at a time; e.g. if it is currently on 22.x.x then it can go to 23.x.x but not to 24.x.x or higher. This usually happens when it gets confused because of a failed upgrade attempt in the past, so it thinks it is on a higher version than it actually is; e.g. it thinks it's on 25.x and wants to upgrade to 26.x, but it is actually on 24.x. Check what version of Nextcloud the GUI thinks it is on and what it wants to upgrade to. This can be seen on the administration overview page (like I posted yesterday). Then go into the Docker shell and run the command "occ status". You should see something like:
root@1384826c6432:/# occ status
  - installed: true
  - version: 25.0.5.1
  - versionstring: 25.0.5
  - edition:
  - maintenance: false
  - needsDbUpgrade: false
  - productname: Nextcloud
  - extendedSupport: false
Then check what version.php thinks:
root@1384826c6432:/# cat /config/www/nextcloud/version.php
<?php
$OC_Version = array(25,0,5,1);
$OC_VersionString = '25.0.5';
$OC_Edition = '';
$OC_Channel = 'stable';
$OC_VersionCanBeUpgradedFrom = array (
  'nextcloud' => array (
    '24.0' => true,
    '25.0' => true,
  ),
  'owncloud' => array (
    '10.11' => true,
  ),
);
$OC_Build = '2023-03-23T12:04:47+00:00 28add7e896b24fee2714b21a12151e4042ab677c';
$vendor = 'nextcloud';
Also check what version the config file thinks:
root@1384826c6432:/# cat /config/www/nextcloud/config/config.php | grep version
  'version' => '25.0.5.1',
All of these should be in agreement. I suspect that they are not, because of a previous failed install. I would guess that you are actually on 23.x or 24.x, but the installer thinks you are on 25.x and is trying to take you straight to 26.0. You need to bring them all back so the system thinks it's on the correct version; then it should be happy to upgrade and, all going well, there will be no further problems. Here is a blog that walks through the issue. Just keep in mind that since you are using the LinuxServer.io Docker image, the path to the files above is slightly different from what is in the blog. Also, depending on what version you are coming from, you might need to downgrade the container by using a specific Docker tag, but I wouldn't do that unless you hit a problem. Nextcloud has some nice instructions on the normal upgrade process if you want to get your head around it.
    1 point
  14. Quite some time has passed now, but I still wanted to say thanks again for the quick help. Everything worked, the data is still there and the server is running again.
    1 point
  15. Yes, and that's why I also mentioned running memtest.
    1 point
  16. I really can't help with that since I've never seen it on my server; maybe try creating a post on the Sundtek forums, or maybe @Sundtek can help over here. Did you try this adapter on your Vu+ for a few days, or only for a few hours? Maybe you are correct and the DVB-C/T USB device is faulty, but it could also be the case that something doesn't play nicely with your hardware. Maybe check whether any USB-related setting looks suspicious, or try another USB port on your motherboard.
    1 point
  17. A little feedback: I rebuilt the parity disk in the original HP server with the RAID controller, then moved all disks back to the new server. Just make sure you use the exact same disk assignments. Then start the array. It won't be able to read the partitions, but that's OK. Then stop the array and remove one of the data disks (not the parity). Start again to register the missing disk. You'll now be able to browse the missing (emulated) disk, see all the files and confirm that it is correct. Stop the array. Then re-add the missing disk, start in maintenance mode and rebuild the re-added disk. Repeat for all data disks. It took 3 x 8 hours, but everything is working again!
    1 point
  18. Apr 23 23:03:03 ZEUS kernel: vfio-pci 0000:01:00.0: BAR 1: can't reserve [mem 0x80000000-0x8fffffff 64bit pref]
This is what's filling up the log. Google this error; I believe there are some solutions.
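Not from this thread, but one fix that is commonly suggested for this particular "BAR ... can't reserve" message is to stop the EFI framebuffer from claiming the GPU's memory region before vfio-pci binds to it; treat this as an assumption to verify for your own system before applying it:
# Example append line in /boot/syslinux/syslinux.cfg (hypothetical; back up the file first)
label Unraid OS
  kernel /bzimage
  append video=efifb:off initrd=/bzroot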
    1 point
  19. Oh I see.
#!/bin/bash
COMPOSE_VERSION=$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep 'tag_name' | cut -d\" -f4)
curl -L "https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/lib/docker/cli-plugins/docker-compose
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
Use this as an automatic weekly script until the Git repo is updated more often.
    1 point
  20. You really should try it. That only shows the exposed port from the Dockerfile and has nothing to do with which port the app is actually listening on now. If you use the link from the Docker icon click, that still points to 8080; just try http://192.168.1.5:8081 manually in your browser (in case you haven't tried this yet).
    1 point
  21. It will wait. I have 90% and 30 days set here and the mover is waiting ...
    1 point
  22. Maybe let's start from the beginning. As has been described several times here in the forum, it makes little sense to put Unraid behind a Pi-hole that itself runs on Unraid. Why? Unraid starts and looks for a DNS server, but Pi-hole only starts later as a Docker container ... In short: take Unraid out of the Pi-hole chain ... that is one of the reasons why various services will never work properly for you ... a chicken-and-egg problem. Which list do you mean? And in general, Plex also has a log where you can see error messages, and with those one could possibly help. You write a lot of text, but I at least can't really follow this one ... Since you have a tangle of bridge, br0 and host ... you should perhaps look into that. binhex-plex runs in host mode and probably has no exposed port defined, so nothing is shown there, which doesn't mean anything; otherwise it will also have logs ... judging by what you show here, that would be my guess ... I'll leave it at that. Fundamentally, Unraid (or any other OS) doesn't forget anything ... if something is off, then a configuration doesn't fit, or ... Look at the logs and then you will certainly get further. On the topic of reinstalling: sure, you can do that, but if you end up doing the same thing again ... you'll get the same result.
    1 point
  23. A backup is always advisable, but in theory it would not be strictly necessary. The data is preserved (if you don't make a mistake) when setting up UNRAID fresh. Take a screenshot of the listing of your array, and when you then start with the new stick, use that screenshot to put your HDDs back into the array exactly as before. And with this many "strange" errors I would indeed suspect a defective USB stick, so it would probably be advisable to try again with a new/different stick.
    1 point
  24. I've got version 2.10.5 ready to upload, which fixes the SSD benchmarking under UNRAID 6.12, but for some reason the app doesn't want to push partial content to the browser and instead waits until it's done before doing so. For example, if you benchmark a drive, you will see a white screen until it's done. I've had this happen before, but I don't recall what the cause was or how to resolve it. Time to get Googling! Not only did the new version of Docker change how it represents the internal working state of the virtualized hardware from inside the Docker container, it's not always the same from one run to the next. But I've identified what I believe to be all the locations needed to look at for the info, so we should be good to go once I solve the other issue above.
    1 point
  25. In the meantime maybe you could try from here -- https://slackware.pkgs.org/15.0/slackware-x86_64/expect-5.45.4-x86_64-4.txz.html / https://slackware.uk/slackware/slackware64-15.0/slackware64/tcl/expect-5.45.4-x86_64-4.txz
    1 point
  26. If you have all the data on the array anyway, not much can happen to you. In v6.12 you can then simply recreate the Unraid pool (as a zpool) from scratch with the built-in tools. After that you just carry on as normal. Under Unraid I also have shares that keep their data only on an Unraid pool (which is a zpool) and not on the array; you do that with the cache settings in the share.
    1 point
  27. That did it, thanks. I am most appreciative!
    1 point
  28. Should be pretty easy to solve... Start the container without any proxy and on the default bridge network, just as it is intended, and maybe change the port in the template to avoid port conflicts. Then connect to the Sabnzbd WebUI, go to settings, search for the port, change that value from 8080 to the port that you want to use, e.g. 8081, click save, and the container should restart. After that create a port entry with container port 8081 mapped to host port e.g. 8081 in the OpenVPN container as described, and route the Sabnzbd container through the OpenVPN container again. With that you should now be able to connect to Sabnzbd via the OpenVPN IP:8081. I hope that helps and makes sense to you. It should also be possible to change the port for qBittorrent the same way as for Sabnzbd, but I only know my own container, and you only have to change the port in one container anyway.
    1 point
  29. Plan B -> a sound-dampened case and the HDDs nicely decoupled in their cage. Edit: Plan C -> a renderer in the living room and Unraid with the media HDDs somewhere else ... after all, what do you have Ethernet for?
    1 point
  30. @JorgeB I will follow your instructions to Raid1 so I can remove that additional drive. Again, thanks so much for the help. You saved my day! 🙏
    1 point
  31. Not doable while you are watching, because Plex is then already accessing the file on the array. What might work is a script that monitors Plex activity and, when a file access happens, reads the file in completely, in the hope that it then sits in RAM, and then issues the spin-down command. You could even test that. Start a movie and then do this:
cat "/mnt/user/Filme/Filmname (2000)/Filmname (2000).mkv" >/dev/null
Then press the spin-down button of the corresponding disk. Of course this will only work if you have enough RAM.
    1 point
  32. Feature Request - Do not use file extensions to decide whether to open files in the editor. Instead open all non-binary files in the editor. Example:
if [[ $(find "$file" -type f -size -100M -exec grep --files-with-matches --binary-files=without-match '' {} \;) ]]; then
  # open file with editor
fi
What it does:
- "find -type f -size -100M" only allows opening files which are smaller than 100M
- "grep --files-with-matches --binary-files=without-match" skips binary files
    1 point
  33. That looks fine. It's just warning you that it might time out due to the size of the update. I have found that this can happen, but you just re-run the update and it continues where it left off. Just give it a try, or you can perform the update from the command line, which should work now that you have updated Docker.
    1 point
  34. This is what happens when you upgrade Nextcloud (so it needs a newer PHP 8.x) but don't upgrade the Docker image (which still has the old PHP 7.x). Which Docker image are you running, and what version? It sounds like you are stuck on an old version. Make sure your tag is set to "latest".
    1 point
  35. @Gragorg and @Dent_, the template was updated today, so that should no longer need to be added. I left it as an optional field. Should be good!
    1 point
  36. Official will do you no favours; it's exactly the same app and version. The difference is how I package it and the support (there is none on here for official Plex), but the choice is of course completely yours.
    1 point
  37. I am by no means a Docker expert, but I will try to answer your questions. I am not certain I know exactly how to answer your first question, as I think it could be referring to multiple things. If you are talking about what they call "rootless" containers, then no, I don't believe that feature is used. If you are talking about privileged containers, then yes, generally speaking containers are run with privileged mode disabled unless they really need it. Two questions in one here. If a container were to be hacked, then yes, something could be installed within the container. Escaping the container and running something on the host system would be much harder. I won't say impossible, and unRAID is not meant to be a high-security platform, but generally speaking the assumption is that programs cannot break out of a container. Exposing the Docker socket inside a container can be risky and is generally not something that should be done. If an application needs to use the Docker socket, I recommend pairing it with a Docker socket proxy that limits what Docker features it can access.
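As an illustration of that last point, a socket proxy can sit between the application container and the real Docker socket so only selected API endpoints are reachable. A minimal sketch using the tecnativa/docker-socket-proxy image; the environment flags shown are examples, so check the image's documentation for the exact set you need:
# Expose a restricted Docker API on port 2375 instead of mounting the raw socket
# into the application container. CONTAINERS=1 permits the container-listing
# endpoints; POST=0 blocks state-changing requests (flag names per the image docs).
docker run -d \
  --name docker-socket-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e CONTAINERS=1 \
  -e POST=0 \
  -p 2375:2375 \
  tecnativa/docker-socket-proxy

# The application container then talks to tcp://<host IP>:2375
# instead of /var/run/docker.sock.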
    1 point
  38. Hello All, I too wanted a piece of Pi on my Unraid. After having a good read of this and other threads I got confused, so I went for a play. In the end I ended up downloading the latest "Raspberry Pi Desktop for PC and Mac" iso from https://www.raspberrypi.com/software/raspberry-pi-desktop/. I created a new VM using the Debian template with the downloaded iso as the OS install iso drive, and created a 15GB drive. I started this VM with the console showing. A boot screen opened, where I selected Install. Don't just leave it at the menu: Pi will open and you will get excited, but then you will realise it's just in a boot loop. Work through the install; it's easy to do and you can see more information on it here: https://projects.raspberrypi.org/en/projects/install-raspberry-pi-desktop/4. Once installed, I stopped the VM and edited the configuration in Unraid to delete the install iso information so it boots to the 15GB drive. That was it! You start the VM, Debian pops up but you can just leave it and Pi will open. Set up a user etc., it will restart and you're at the Pi home page. You can enable SSH etc. as normal. In summary:
-Download "Raspberry Pi Desktop for PC and Mac" iso
-Create new Debian VM and set the OS install iso to the "Raspberry Pi Desktop for PC and Mac" iso
-Start the VM
-Select Install option (either one)
-Stop VM
-Edit VM and remove the OS install iso information
-Start VM and enjoy a piece of Pi
Hope this helps ...
    1 point
  39. Yes, that makes sense. If you have no backups of the data and want to be on the safe side, then install the UD plugin(s), look at / mount the data disks first and compare the IDs with your screenshot. Actually nothing can go wrong, as long as you don't turn the parity disk into a data disk, and above all not the other way around. Parity is/are the disk(s) that is/are not formatted. I would first put the array together without parity, start it and check that everything is there ... then add the parity back in.
    1 point
  40. OK, so this weekend I had some free time to spend and checked out support on Unraid 6.12 rc2. I started off fresh by reinstalling my flash drive and putting a copy of the config folder back onto it. First, some information for those reading my post for the first time:
General hardware: i5 12600K, 64GB 3600MHz RAM
Storage config: 5x 16TB array, 3x PCIe 4.0 NVMe 1TB cache
Power usage: with a GTX 1070 my power consumption while transcoding was 214W; with the ARC A380 and UHD 770 working it's 110W
Transcode software used: tdarr + node
At this moment I'm very limited in speed by the HDDs that hold all the media, and I'm not using any form of RAM transcoding at this time. But the power efficiency of the Arc card is starting to show. When tdarr has finished the current queue I will add some new media on the cache and see how far I can stretch the Arc card on performance.
Performance numbers with media on HDD: UHD 770 = around 300 FPS total, ARC A380 = around 200 - 220 FPS total
Performance numbers with media on SSD: UHD 770 = T.B.A., ARC A380 = T.B.A.
Now let's move on to what I did: I have a fresh Unraid 6.12 RC2 install (copied the config folder back onto the flash drive). While I was setting up the fresh flash drive I added the line "options i915 force_probe=56a5" to the i915.conf file inside the modprobe.d folder (be aware that the value 56a5 = the Intel Arc A380; I have the ASRock Challenger ITX model). After that I booted up the server and, under Tools --> System Devices, checked that the driver was loaded for the Arc card. Then I ran ls /dev/dri in the terminal and it shows card0 card1 renderD128 renderD129 (card0 + renderD128 = UHD 770 and card1 + renderD129 = Intel Arc in my case; I checked this by opening intel_gpu_top in the terminal). In the tdarr server + node Docker containers I changed the extra parameter to /dev/dri/cardX (where X is the number corresponding to the card you want to use), so in my case I added /dev/dri/card0 to tdarr and /dev/dri/card1 to tdarr_node. Inside tdarr I use the QSV plugin from Boosh, the Arc A380 options are set to QSV as well, and it just works. I will keep you guys posted on some performance numbers when I have something loaded onto the cache. This is all H265 by the way, no AV1, as I see no advantage in power consumption (most clients I have don't support AV1 at this time) and the gain in HDD space is not needed at the moment, so I hope to move to it in the future when AV1 is more mainstream.
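For reference, a sketch of how that modprobe override could be created from the Unraid shell (assuming the flash drive is mounted at /boot as usual; 56a5 is the device ID of this particular A380, so substitute your own card's ID):
# Create the override on the flash drive so it persists across reboots
mkdir -p /boot/config/modprobe.d
echo "options i915 force_probe=56a5" > /boot/config/modprobe.d/i915.conf

# Look up your own card's PCI device ID (the last four hex digits in [8086:xxxx])
lspci -nn | grep -iE "vga|display"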
    1 point
  41. There is also a simpler way ... After the installation:
1. Install virtio-win-gt.msi > restart the VM
2. Open Device Manager
3. Right-click "Red Hat VirtIO Ethernet Adapter" > Uninstall device
4. Click "Scan for hardware changes"
5. Right-click "Ethernet Controller" > Update driver
6. Select the virtio ISO and click Next
And it works. Basically, after installing virtio, just delete the driver in Device Manager and reinstall it from the virtio ISO. Works with all settings. Tested on: 7x Win10_22H2_x64_German, 3x Win10_21H2_x64_German
    1 point
  42. I apologise for the necro, but I came across this thread when I was trying to do the same. This command worked for me on 6.9.0:
ifconfig br0 down && ifconfig br0 up
I also noted there is /sbin/dhcpcd, so that could also be used, but I haven't tried it.
    1 point
  43. I used the User Scripts plugin to run this script on first start of the array:
#!/bin/bash
# umask setup
umask 077
# Variable Setup
CONFIG=/boot/config/ssh
HOME_SSH=/root/.ssh
ZSH="/root/.oh-my-zsh"
mkdir -p $HOME_SSH
cp $CONFIG/pre-set/* $HOME_SSH
chmod 700 $HOME_SSH
chmod 644 $HOME_SSH/id_rsa.pub
chmod 600 $HOME_SSH/id_rsa
chmod 600 $HOME_SSH/authorized_keys
### Install zsh shits
HOME="/root"
sh -c "$(wget https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh -O -)"
umask g-w,o-w
chsh -s $(which zsh)
env zsh -l
newplugins="git tmux zsh-autosuggestions"
sed -i "s/(git)/($newplugins)/" /root/.zshrc
sed -i "s#\(ZSH_THEME *= *\).*#\1agnoster#" /root/.zshrc
echo "cd /mnt/user/" >> /root/.zshrc
git clone https://github.com/zsh-users/zsh-autosuggestions $ZSH/custom/plugins/zsh-autosuggestions
chmod 700 /root/.oh-my-zsh/custom/plugins/zsh-autosuggestions
###
# Copy terminal (zsh) history
cp /boot/config/.zsh_history /root
Combined with this array stop script to copy the zsh history and SSH keys back:
#!/bin/bash
# Copy terminal (zsh) history
touch /boot/config/.zsh_history
echo "$(cat /root/.zsh_history)" >> /boot/config/.zsh_history
# Variable Setup
CONFIG=/boot/config/ssh
HOME_SSH=/root/.ssh
# Copy any new keys on exit
rsync -avhW $HOME_SSH/ $CONFIG/pre-set
As for .zshrc (or aliases), I use /boot/config/go to write to /etc/profile on startup (how I was taught here to do it):
# Re-make aliases on boot
echo "
#### Aliases copied from /boot/config/go ####
alias size='du -c -h -d 1 | sort -h'
export TERM=xterm-color
#### End Aliases ####">>/etc/profile
These are just example aliases and settings, use your own (though I do love that size alias)
    1 point