Leaderboard

Popular Content

Showing content with the highest reputation on 04/04/21 in all areas

  1. A simple plugin to allow you to create your own notes for every page in the GUI and bring them up on demand. After installation (navigate to a different page from apps, or reload the page), an "edit" icon will appear in the top right (black / white) or at the bottom (azure / grey) which will allow you to view / edit your notes for that particular GUI page. On the black & white themes, if a note exists you will see the icon flash red whenever you are on the applicable page. Buy me a beer
    2 points
  2. All Hail!!! We're not worthy!
    2 points
  3. Overview: Support for Docker image arch-syncthing in the binhex repo. Application: Syncthing - https://syncthing.net/ Docker Hub: https://hub.docker.com/r/binhex/arch-syncthing/ GitHub: https://github.com/binhex/arch-syncthing Documentation: https://github.com/binhex/documentation If you appreciate my work, then please consider buying me a beer 😁 For other Docker support threads and requests, news and Docker template support for the binhex repository please use the "General" thread here
    1 point
  4. @ich777 I think it's working. Thank you for the help!
    1 point
Hello, apparently there has been a version jump in which the performance of restoring data was improved. In v3 it worked like this: a full backup was created first, then the incremental backups were created (always with hardlinks to the files in the full backup that had not changed; the files that had changed were of course stored in the backup). If data then had to be restored (e.g. from the 6th incremental), the program collected all the data via the hardlinks down to the last full backup, which could take a while depending on the number of files. The new approach is, I believe, like this: a full backup is made first, then the first incremental follows; after the incremental completes, that last incremental becomes the full backup. So after each run the most recent backup becomes the full backup, and the ones before it are incrementals. The point: FillCycle is new, and I had it set to 1, which used the new storage scheme. After the 2nd backup I no longer saw a full backup, only 2 incrementals, but all the data was completely there. What confused me was that I no longer saw a full backup, and I also no longer saw the actual storage size. I have now solved it for myself like this: set FillCycle to 0 and it behaves as before. Thanks for your efforts. Regards, Peter
    1 point
The plugin replaces the standard script with a slightly enhanced version that handles the extra fields the plugin uses to display history records, which explains why you only see that error if the plugin is installed. It looks like something has crept into the multi-language support part of the script, although I am not sure yet what it is. Thinking about it, I have not checked the changes that produce the enhanced script against the standard 6.9.1 version. I have been able to reproduce this, so I will be able to fix it. Interestingly, it happens on one of my test 6.9.1 systems but not on another running that same release. I am not sure yet of the cause, but I am sure it will be relatively easy to track down and fix. Thanks for reporting this.
    1 point
  7. Ahh, things must have changed since I looked last. Like I said, I'm not an employee, just been using Unraid for many years.
    1 point
@giganode Vendor/Model: XFX AMD RADEON VII 16GB HBM2, 1750 MHz Boost, 1801 MHz Peak, 3x DP 1x HDMI, PCI-E 3.0. I have been trying the vendor-reset kernel patch for 3 days now and it works partially. I get the reset bug after restarting the VM 5 to 7 times. Ubuntu works great: I can restart or shut down the VM from Unraid or from the VM itself with no problems. macOS works only 5 to 7 times with force stop and then restart from Unraid; if I try to restart from within the VM itself I get the AMD reset bug. Windows 10: whatever I do it will crash, and I have to restart my Unraid server to use the GPU again. @ich777 thanks a lot, you superstar. ⭐🌟⭐ 🙃
    1 point
MyServers has been going through some growing pains (hence why it's still tagged as beta). Probably best to post in and wait for @ljm42 or @OmgImAlexis to help out. FWIW, I currently only have remote access to 1 of my servers.
    1 point
  10. Ignore it. That's a by-product of the fact that two templates exist which pull from the exact same dockerHub repository. While I usually don't allow that, in this particular case I did allow it for various reasons.
    1 point
  11. I don't know how your router interprets a metric value of zero, but from a routing perspective this should be at least 1. The WG configuration is okay.
    1 point
  12. 👍 I'll give it a try this evening and report.
    1 point
  13. Hey @Partition Pixel yup, not being able to use the MSR mod could be holding it back. My container supports it on hosts that have it enabled, but Unraid requires a bit of work to get it going. I’ll pm you when I get home and give you some steps to try, if it works for you I’ll merge it into the readme and OP.
    1 point
  14. And most data loss is due to user error.
    1 point
To prevent this in the future you can use a dynamic DNS service to work around changing IPs without paying for a static IP. Spaceinvader One has a good video on how to set one up using Duck DNS here ->
    1 point
So far as your data is concerned, it really all depends upon your own measure of redundancy. No RAID system is a backup. Unraid has an inherent advantage over a traditional RAID system: if you exceed your level of redundancy, you do not lose all of your data, only part of it. (NB: I have no clue whatsoever why anyone would run traditional RAID or ZFS on a home server unless they needed the access speed; the risks just aren't worth it.) I have zero problems running 12 drives per server on a single parity drive. Actual drive failures are exceedingly rare; most "failures" are because of poor cabling. If I went beyond 12 drives on each server, then I might consider double parity.
    1 point
People are winding up getting into trouble because they are either: exposing the ports used to control the webGUI (80 / 443), putting the entire server into a DMZ because they can't figure out how to forward a particular port, or not having a root password. It's hard to have a server and not have any ports forwarded to it (e.g. Plex), and that is completely fine. If you need access to docker applications, then either use a reverse proxy or WireGuard (or OpenVPN-AS). If you need remote access to the webGUI, then install the Unraid.net plugin (as a bonus, this also gives access to any VMs without passed-through video cards). I'm not particularly worried.
    1 point
Just to add to the discussion: I was tinkering with this yesterday as well, and ran into issues. After finding this thread, I tried the sequence of steps that jonathanm posted above and itimpi added to the wiki, and those worked fine. Two notes: I found that if I forget to unmount the USB, then just running the script a 2nd time also did the trick. It seems the 1st attempt unmounts the drive but then fails, and the 2nd attempt succeeds on the unmounted drive. I got the same results whether running the script as "sudo bash ./make_bootable_linux" or leaving bash out and running "sudo ./make_bootable_linux". That may not be the case for other distros, not sure. But before I found this thread, I stumbled upon a different solution: editing the script slightly. For me the key was in the error that I repeatedly got when running the script: sudo: /tmp/UNRAID/syslinux/make_bootable_linux.sh: command not found JHM got this error at least once, and there are other posts that mention it. The error seems to indicate that sudo cannot find or execute /tmp/UNRAID/syslinux/make_bootable_linux.sh as a command. Or at least I think that is what is happening. To fix it, I found the line below, towards the end of the script: sudo /tmp/UNRAID/syslinux/make_bootable_linux.sh $TARGET and added bash after sudo, to make it look like this: sudo bash /tmp/UNRAID/syslinux/make_bootable_linux.sh $TARGET The edited script will now complete without unmounting the USB. When running from the PC, either "sudo bash ./make_bootable_linux" or "sudo ./make_bootable_linux" can be used. The script will also now run when launched from the USB itself, but only when using "sudo bash ./make_bootable_linux". Of course, inserting bash into that line assumes that the shell being used is bash, but that is a pretty fair assumption these days.
Maybe there is a way to call the shell in the script without specifying bash, but I don't know how to do that. Anywho, hope that helps.
    1 point
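A plausible mechanism for the error above, shown with a throwaway script (demo.sh is a stand-in of my own; the real script lives at /tmp/UNRAID/syslinux/make_bootable_linux.sh): files that originate on a FAT32 stick can end up without the execute bit, in which case direct invocation fails but handing the file to bash explicitly still works.

```shell
# demo.sh stands in for make_bootable_linux.sh; we deliberately strip the
# execute bit, as can happen to files copied from a FAT32 stick.
tmpdir=$(mktemp -d)
printf '#!/bin/bash\necho inner script ran\n' > "$tmpdir/demo.sh"
chmod a-x "$tmpdir/demo.sh"

# Direct invocation needs the execute bit and fails without it...
direct=$("$tmpdir/demo.sh" 2>/dev/null || echo "direct invocation failed")

# ...but "bash <path>" only needs read access -- the same reason the
# "sudo bash /tmp/UNRAID/syslinux/make_bootable_linux.sh $TARGET" edit works.
viabash=$(bash "$tmpdir/demo.sh")

echo "$direct"
echo "$viabash"
rm -rf "$tmpdir"
```

This also matches why "sudo bash ./make_bootable_linux" succeeds where "sudo ./make_bootable_linux" sometimes does not.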
Cool, I will add what I posted here to that thread. They also posted a series of steps that work with the script right out of the box (or out of the zip, I guess?) in that thread, and updated the getting started section of the wiki with those steps too. For those that find this thread before the thread linked above or the wiki, the steps are: Format the entire USB as a single FAT32 partition; the label MUST be UNRAID (otherwise the script will not find the USB). Extract the archive to the mounted USB drive. Copy make_bootable_linux back to the PC. Unmount (not eject) the USB drive. Run the following command from wherever the script is located on the PC (it will not work if you try to run it from the USB): sudo bash ./make_bootable_linux I tried it myself and found that these steps work too. If I forget to unmount the USB per step 4, then just running the script a 2nd time also did the trick. It seems the 1st attempt unmounts the drive but then fails, and the 2nd attempt succeeds on the unmounted drive. Also, if you eject the USB instead of just manually unmounting it, the script will not be able to find the USB to work its magic.
    1 point
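The numbered steps above map to roughly these commands (a sketch only, not to be run verbatim: /dev/sdX1, the mount point, and the zip name are placeholders, and mkfs.fat wipes the partition, so verify the device with lsblk first):

```shell
# 1. Single FAT32 partition; the label MUST be UNRAID
sudo mkfs.fat -F 32 -n UNRAID /dev/sdX1

# 2. Mount the stick and extract the release archive onto it
sudo mkdir -p /mnt/usb
sudo mount /dev/sdX1 /mnt/usb
sudo unzip unRAIDServer.zip -d /mnt/usb

# 3. Copy the boot script back to the PC
cp /mnt/usb/make_bootable_linux ~/

# 4. Unmount (not eject) the stick
sudo umount /mnt/usb

# 5. Run the script from the PC, not from the USB
cd ~ && sudo bash ./make_bootable_linux
```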
@brain:proxy @crattray Ok, not sure why this is happening, but update to the latest version (0.4.0) anyway. I've rewritten the part of PA that handles episode and TV show announcements, with a little help from @Avalarion. Thanks again! Working on it, but it may take a while, since I'm currently quite busy.
    1 point
Do you perhaps have bonding enabled under network settings? I have the same board with an extra NIC as well, and am able to stub two of them on 6.8.3 and pass them through to two separate VMs. I tried 6.9 at one point too and didn't have any issues there either.
    1 point
Set IOMMU to Enabled, not Auto. Sent from my HD1913 using Tapatalk
    1 point
  23. Found this for an asrock board. Regarding your question about IOMMU, please find the settings below in the BIOS. -Advanced>AMD CBS>NBIO Common Options>IOMMU set to Enabled -Also make sure Enable AER Cap. is set to Enabled -Press F10 button to save the settings http://forum.asrock.com/forum_posts.asp?TID=14726&title=enabled-iommu-support
    1 point
In which group? Sent from my HD1913 using Tapatalk
    1 point
Is there an available driver? I think I read earlier that Intel did not yet provide one. I might have skimmed through the article too quickly and be wrong. (Plus it was an article in French, so not much use here.)
    1 point
Thanks a billion. My indexers in NZBHydra would constantly have DNS issues and get disabled. When trying to reinstall containers, or even install new ones, they would fail. My NZBGet download speed had also dropped by 80%; I suspected ISP throttling. Disabled these NIC settings and everything seems okay so far. The issues only occurred after the 6.9.x updates.
    1 point
  27. AMD Sensor support released for APU (Temp) and dGPU (Temp/Fan/Power).
    1 point
For all the noobs like me: when you have followed the Spaceinvader One tutorials to route all your other containers through your delugevpn container, there are the following things to do (thanks to jonathanm for providing the link with all the explanations). 1. In your delugevpn container -> Edit, add under the container variable ADDITIONAL_PORTS all the ports you have added for your applications, comma separated. For example: 6789,7878,8989,9117,8080,9090,8686. 2. Now you need to change settings in the containers that are routed through it (Radarr, Jackett and so on). Inside those containers you have to change the server address to "localhost". For example, if in Radarr under Download Clients -> Deluge -> Host you had 192.168.X.XX, change the host to localhost (without the quotes; just type localhost there). 3. Change it everywhere in the container where you have the host as an IP address (for example also in the indexers). This solved my problem. If you see that I have missed something, please feel free to note it.
    1 point
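For reference, the same two changes expressed as plain docker commands rather than the Unraid template editor (a sketch only: the image and container names follow binhex/linuxserver conventions and are assumptions; your volumes and other variables will differ and are left out here):

```shell
# VPN container: list every port the routed apps need in ADDITIONAL_PORTS
docker run -d --name=delugevpn \
  -e ADDITIONAL_PORTS=6789,7878,8989,9117,8080,9090,8686 \
  binhex/arch-delugevpn

# Routed app: share the VPN container's network stack; inside Radarr the
# Deluge host is then "localhost" instead of a 192.168.x.x address
docker run -d --name=radarr \
  --net=container:delugevpn \
  linuxserver/radarr
```

The `--net=container:delugevpn` line is why the host fields change: the routed app literally shares the VPN container's loopback interface.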
Has someone created a custom rm wrapper so you can remove a certain file? I sometimes upgrade my Plex media files and I don't want duplicates lying around, but I don't want to have to look up which drive a certain file is on and run "chattr -i" plus "rm" by hand. I'm sure I'm not the only one looking for a script like this.
    1 point
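In case it helps until someone writes a proper plugin, here is a minimal sketch of such a wrapper (my own, hypothetical, not an existing tool): it checks every data disk under a root directory for the given relative path, clears the immutable bit where it can, and removes each copy. The function name rmdup and the DISKROOT override are assumptions; on a stock Unraid box the disks sit under /mnt/disk*.

```shell
# rmdup: delete a file from whichever data disk(s) hold it, clearing the
# immutable attribute first so plain rm succeeds. Pass the path relative
# to the share, e.g.  rmdup "Movies/Old/film.mkv"
rmdup() {
    local rel="$1" root="${DISKROOT:-/mnt}" removed=0 disk f
    for disk in "$root"/disk*; do
        f="$disk/$rel"
        if [ -e "$f" ]; then
            chattr -i "$f" 2>/dev/null || true  # no-op if bit unset/unsupported
            rm -f "$f" && { echo "removed $f"; removed=$((removed + 1)); }
        fi
    done
    [ "$removed" -gt 0 ] || echo "no copy of $rel found under $root/disk*"
}
```

It prints one "removed" line per disk that held a copy, so duplicates spread across disks go in a single pass.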
  30. https://forums.unraid.net/topic/75539-support-binhex-qbittorrentvpn/?do=findComment&comment=951967 If you are using binhex's containers, or any derived directly from his work, you will need to follow this post.
    1 point
  31. I love this package, easily the most useful I've installed to my server. Would you consider adding bat (https://github.com/sharkdp/bat) to the list? You've got fd and ripgrep which are also written in rust, and bat would be a great addition as this trio makes up 90% of all of my terminal commands. Thanks for the effort you put into this.
    1 point
Unfortunately no difference. Update: I restored to 6.7.2 and ended up with the same problem. After clearing the browser cache, noVNC access to the VMs was normal in both 6.7.2 and 6.8 😁
    1 point
Hi guys, this is a tutorial on how to set up and use Duplicati for encrypted backups to common cloud storage providers. You will also see how to back up from your unRAID server to another NAS/unRAID server on your network, as well as some of the common settings in Duplicati and how to restore backups. Hope it's useful. How to easily make encrypted cloud or network backups using Duplicati
    1 point