Leaderboard

Popular Content

Showing content with the highest reputation on 10/04/21 in all areas

  1. Compose Manager Beta Release! This plugin installs docker compose and compose switch. Use "docker compose" or "docker-compose" from the command line. See https://docs.docker.com/compose/cli-command/ for additional details. Install via Community Applications. This plugin now adds a very basic control tab for compose on the Docker page. The controls allow the creation of compose yaml files and let you issue compose up and compose down commands from the web UI (see the sketch after this item). The web UI components are based heavily on the user.scripts plugin, so a huge shoutout and thanks go to @Squid. Future work: add scheduling for compose commands? Allow more options for configuring compose commands.
    3 points
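     A minimal sketch of that workflow, assuming the plugin has put the compose CLI on the PATH and that a docker-compose.yml like the commented example exists in the current directory (the "whoami" service is purely illustrative):
       # docker-compose.yml (example content)
       # services:
       #   whoami:
       #     image: traefik/whoami
       #     ports:
       #       - "8000:80"
       docker compose up -d      # create and start the stack in the background
       docker compose down       # stop and remove the stack's containers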
  2. Supervisor – Unraid Controller (iOS) Supervisor is a new and natively built Unraid controller for iOS. Here are some of the key features: - Monitor system information. - View CPU information at a thread level. - Start and stop the array. - Start, stop, and restart virtual machines. - Start, stop, and restart dockers. - Download, upload, view, and edit files from your SMB shares. - Start and stop a parity check, and view current and past parity checks. - Reboot and shut down the server. - View disk information including SMART data, temperature, and read, write, and error counts. AppStore: http://appstore.com/SupervisorUnraidController I am also looking for some people to become testers for new features and bug fixes; if interested, you would get the application for free. All that I ask is that you provide feedback about the bugs that arise. You can sign up through the TestFlight link below; thank you in advance. TestFlight: https://testflight.apple.com/join/s42xUXv7
    1 point
  3. Yeah, sorry I was slow on the response here. What you did is exactly what I was recommending; the official python base image (which you used) was what I was thinking of.
    1 point
  4. Welcome, dear distant cousin! It's always a pleasure to find a fellow French speaker at the other end of the earth!
    1 point
  5. Our nextcloud container uses version 1.20.1 of nginx, so update your container. If it's updated, then it's not our nextcloud container that is the issue (a quick way to check the version is shown after this item).
    1 point
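     A hedged way to verify which nginx version a running Nextcloud container ships; the container name "nextcloud" is an assumption, substitute your own:
       docker exec nextcloud nginx -v    # prints e.g. "nginx version: nginx/1.20.1"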
  6. You're misunderstanding. They should be sorted by Date Added when you click Show More under Recently Added. But if I'm misunderstanding, the sort orders are there to give you the option to re-sort the displayed apps however you choose. (Or put another way, "Show More" is nothing but a shortcut to the "All Apps" category with a sort order preselected.) EDIT: I've added hiding the sorting options when using Show More to the to-do list.
    1 point
  7. Hello ich777, thank you very much for your clarification. Currently I do not use it for anything yet, but I was thinking of running CCTV with this.
    1 point
  8. Yes, and thanks again, but I wanted to follow your advice and wait for some more confirmations. Since it doesn't look like anyone else will respond, I decided to do it anyway. I backed up the db data just to be sure and did the update; all looks well.
    1 point
  9. @Squid, would it be possible to get "Recently Added", "Top Trending Apps" and "Top New Installs" back, or rather to have them set as the starting page instead of showing all apps? For me it was a super cool way to select "Recently Added" and look in there from time to time to see whether any new/useful apps had been added.
    1 point
  10. No problem. No. This is about the global credentials for the External Storage - if you use it. nobody then accesses /Dokumente from the example above and makes it available to Nextcloud. The Nextcloud users who in turn are allowed to access it - let's call them User1 and User2 - are defined alongside the external storage. Just take a look at that page and it will become clear. I have already moved my data dozens of times: different operating systems, NAS devices, management programs. My data is very old and extremely well maintained. I don't hand it over to any software that stuffs everything into different structures every time. I simply want to control that myself.
    1 point
  11. I believe that Unraid currently uses the larger of the Minimum Free settings for the cache and the share when deciding if it can use the cache. I personally think these two settings should be applied independently to the cache and the array, but at the moment that does not appear to be the case. I am not sure whether this is intended behaviour or just how it currently works.
    1 point
  12. Without going too deep into the details: on your array (!!!) you have shares such as /mnt/user/Bilder or /mnt/user/Dokumente. appdata lives, as usual, on the cache or on a pool (if present) and should also be addressed as such for performance reasons (/mnt/cache|pool/appdata). The containers store their config folders below this share; as a rule you don't have to touch that. The mapping for the Plex container could then be /mnt/user/Bilder <--> /Bilder. Depending on the container you use, there are predefined path variables for these container-side folders; in that case you would use those when creating the mapping. Depending on whether Plex is allowed to delete your content (no), the mapping has to be given the appropriate permissions (r/w-slave or r-slave). Inside Plex you then access /Bilder. Nextcloud is built slightly differently. Nextcloud maintains a config directory for its own data and a data directory for your data. I would usually split these as config --> cache|pool and data --> array. In theory you can map data as /mnt/user/Nextcloud <--> /Nextcloud. As mappings for your own data you would use /mnt/user/Bilder <--> /Bilder and /mnt/user/Dokumente <--> /Dokumente. Inside Nextcloud you then access /Bilder or /Dokumente. In addition, Nextcloud supports External Storage; anyone who already has a large amount of data will appreciate it. You simply create an External Storage entry in the Nextcloud GUI for the example folders above: Dokumente --> /Dokumente, Bilder --> /Bilder. At the same time you select the Nextcloud users who are allowed to access them. As the global credentials you define nobody with an empty password. I work exclusively with External Storage. Nextcloud is welcome to play with its /config and /data directories, but the actual content belongs to me, and I maintain it via the External Storage. When I copy something from outside, it goes directly into my own folders. When accessed through Nextcloud, it scans the External Storage and automatically adds the content to its database. (A docker run sketch of the Plex mapping follows this item.)
    1 point
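     For readers who prefer the command-line form, a rough docker run equivalent of the Plex mapping described above; the container name, image, and paths are illustrative and would normally be configured through the Unraid template UI:
       # read-only mapping (the "r-slave" idea above) so Plex cannot delete the content;
       # the container config stays on the cache/pool as described
       docker run -d --name=plex \
         -v /mnt/user/Bilder:/Bilder:ro \
         -v /mnt/cache/appdata/plex:/config \
         plexinc/pms-docker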
  13. Thanks @Squid! Just started looking around ... when I switch browsing to Docker Hub, I can't install the container or even open the info:
    1 point
  14. Edit: Issue was confirmed as insufficient CUDA memory. https://imgur.com/a/tvN804n So it appears each custom model essentially runs as an independent process. I did not realize this, and am going to have to do some testing with Yolov5s models to see if I can get decent models that need less GPU headroom, consider changing the GPU in the server, or offload DeepStack to my main PC with a far better GPU. @ndetar, you are a rockstar for helping me figure this out.
    1 point
  15. Oh my god, you are my friggin hero!!!! I can't believe I didn't think of that, as that is one of the big things I had to make sure I had in my OpenCore boot args when I had this guy in the hackintosh. I should've just pulled the old OpenCore file and compared, lol. IT'S UP AND RUNNING!!! Sent from my iPad using Tapatalk
    1 point
  16. Looks like your 'C---l' share has Minimum Free of 1000000000K, in other words, 1TB. Go to the page for that user share and click on the label for Minimum Free to see what I mean.
    1 point
  17. Hi! I'm Alexandre, 34 years old. I'm from Ontario, from a French-speaking town in Canada. What a pleasure to finally see a French-language section; I hope I can find some help optimizing my UNRAID server, because I'm struggling a bit. Sent from my iPhone using Tapatalk
    1 point
  18. Well, I'm hoping I have finally sorted it! I've never had this long an uptime since it was built. I found out the 128GB of Corsair LPX 16GB DIMMs in the server have different version numbers, which correspond to different chipsets! Luckily I had more DIMMs in another machine, so I managed to put together a 128GB set with the same chips, and it looks like that has got me sorted at long last. Link to the quote below from Reddit about version numbers.
    1 point
  19. Adding or removing drives is the same whether you have single or dual parity. Adding a disk to a new data slot in any array with valid parity will clear the new disk (unless it is already clear) so parity remains valid. Removing data disk(s) from the array requires New Config with a parity rebuild, since parity isn't valid without the removed disk(s). It is possible to "clear" a disk while it is still in the array by writing zeros to the entire disk (an illustrative command follows this item). Then New Config without it can be done without a parity rebuild, since parity was updated with all those zeros and so the disk can be removed without affecting parity. Reordering disks will invalidate parity2, since its algorithm depends on disk order.
    1 point
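     Purely as an illustration of "writing zeros to the entire disk": on Unraid the write has to go through the array's md device so that parity is updated along the way; /dev/md1 (disk 1) is an assumption, the share data must have been moved off the disk first, and the command is destructive, so treat this as a sketch rather than a procedure:
       dd if=/dev/zero of=/dev/md1 bs=1M status=progress   # zero disk 1 through the parity-protected device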
  20. Use this: find . -name "cacert.pem" (it should be under /usr/local/bin/nzbget/).
    1 point
  21. @milfer322 @thebaconboss @pieman16 @dest0 I have created a new release on the development branch! The most important fixes are added 2FA support and a fix for the "new device spam". https://github.com/PlexRipper/PlexRipper/releases/tag/v0.8.7 The next high-priority fixes are these: - Downloaded media files are being wrongly named due to a parsing error and given incorrect file permissions. - Download speed limit configurable per server, so as not to fully take up all available bandwidth. - Slow download speeds with certain servers. - Disabling the Download page commands for TvShow and Season download tasks; these are currently not working and might confuse users. I've also added the feedback I received here to the GitHub issues; please subscribe to those for updates!
    1 point
  22. With only one, did the logs output the same errors? In Extra Parameters did you add: --runtime=nvidia? Try removing the container image completely and pulling it again, and try the template I made in the app store, following the instructions for GPU use (the relevant settings are sketched after this item). I'm curious what the logs show when you use the template.
    1 point
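     For reference, the flag mentioned above plus the container variables it is commonly paired with on Unraid; the GPU UUID is a placeholder you would replace with the value reported by nvidia-smi:
       # Extra Parameters:
       --runtime=nvidia
       # commonly paired with these container variables (placeholder values):
       #   NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
       #   NVIDIA_DRIVER_CAPABILITIES=all
       nvidia-smi -L    # lists the GPU UUIDs to copy into NVIDIA_VISIBLE_DEVICES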
  23. Wow, looking forward to getting my hands on this! Fantastic idea, and nice job!
    1 point
  24. This should be possible if you put this in as an extra argument: --bwlimit=KBPS. Just add it like this (in this example 10 Mbit/s, worked out after this item):
    1 point
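     The original example value is not shown above; assuming --bwlimit takes KB/s as stated, 10 Mbit/s works out roughly as follows:
       # 10 Mbit/s = 10,000 kbit/s / 8 = 1,250 KB/s
       --bwlimit=1250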
  25. This was the main intention from the start, but I can't find a properly good solution to this yet. "by-path" is fine for most cases (as far as I can see), but it seems to have problems addressing SSDs. I have one SATA SSD and two NVMe SSDs that it does not add into the path section, and I don't want to mix two different systems for organizing drives in Disk Location. I'm not sure if this is a kernel issue, something never meant to be, or if there's another solution for this altogether. The command below will show the devices with paths defined. Even if you did see your SSDs there, it would break many systems and can't be implemented now. ls -l /dev/disk/by-path/ | egrep -v "part|usb"
    1 point
  26. I wanted to thank the team for how seamless they've made the backup and restore process! My USB drive failed last night, but luckily, the latest configuration was stored safely in the cloud. Restoring the configuration was extremely easy and got the server up-and-running in just a few minutes. A+ work, everyone!
    1 point
  27. Still having this problem - is there a paid support service available? This is quite problematic.
    1 point
  28. Hey guys, selexin here, creator of Plex Library Cleaner. I'm happy to answer any questions you might have. Just a heads up, the project has (just) been renamed to Cleanarr (see https://github.com/se1exin/Cleanarr/issues/30#issuecomment-903114995).
    1 point
  29. I've successfully set up ONLYOFFICE to work with my self-hosted Nextcloud instance. Like many others here I use NginxProxyManager to set up the reverse proxy and get a certificate from Let's Encrypt for my subdomains. As already mentioned here, you don't need to create any config files or copy the certificate and private key to a newly created certs folder in appdata ...... I think anyone who has added at least one proxy host to NginxProxyManager with an SSL certificate from Let's Encrypt pointing to the newly created subdomain will be able to configure the OnlyOffice Document Server to work properly with Nextcloud. If not, I will gladly provide some assistance. My main intention here is to create a brief HOW TO for restricting access to the ONLYOFFICE Document Server (for security reasons and data integrity) with an encrypted signature, known as a Secret Key. Let me emphasize that I don't claim credit for this tutorial; I'm just posting something found among the user comments on @SpaceInvaderOne's YouTube video tutorial How to Install and Integrate Only Office with Nextcloud. Many, many thanks to @SpaceInvaderOne for providing great tutorials for us "nerds" and making our experience with unRAID easier.
     HOW TO add and configure a Secret Key in ONLYOFFICE:
     Stop the OnlyOffice Document Server container.
     In the "edit" option for the OnlyOffice Document Server docker, do "+ Add another Path, Port, Variable, Label or Device":
       Config Type: Variable
       Name: JWT_ENABLED (can be whatever you want to see in the UI, I used the key)
       Key: JWT_ENABLED
       Value: true
       Description: Enables use of the Secret Key
       [press ADD]
     Then add another variable:
       Config Type: Variable
       Name: JWT_SECRET (same thing, up to you)
       Key: JWT_SECRET
       Value: [WhateverSecretKeyYouWantToUse]. You can use the following command in Terminal (without quotes) to create a random Secret Key: "openssl rand -base64 48" and press Enter.
       Description: Defines the Secret Key value
       [press ADD]
     [press APPLY]
     Start the OnlyOffice Document Server container.
     Go to the Nextcloud > Settings > ONLYOFFICE page. Enter the Secret Key and click Save (you should get the message: Settings have been successfully updated) ..... No restart of the Nextcloud docker was needed. (A docker run equivalent of these variables follows this item.)
    1 point
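     The same two variables expressed as a docker run fragment, for anyone configuring the container outside the Unraid UI; the container name is illustrative, and the variable names come from the steps above:
       docker run -d --name=onlyoffice-documentserver \
         -e JWT_ENABLED=true \
         -e JWT_SECRET="$(openssl rand -base64 48)" \
         onlyoffice/documentserver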
  30. After 2 hours of searching on the internet I finally found 2 solutions. Both worked for me in resolving my issue with unRAID shared folders mapped in Windows 10. 1. If you cannot mount an SMB share in Windows 10, use the NFS client in Windows instead. It works great: enable NFS on your unRAID server, then follow this tutorial on your Windows 10 (Pro and Enterprise versions only) computer. 2. I tried and read every single tutorial here and couldn't mount an SMB share on my Windows 10 computer. I finally realized that I have a special character "€" in my password, and that is why SMB shares never worked for me. I changed the password on the unRAID server and the Windows 10 PC and BOOOOOOOOM, it worked straight away. Maybe this will help others. Thank you @trurl and @Frank1940 for trying to provide some help.
    1 point
  31. It's most likely the wsdd process. Check in "top" from the command line (a quick check is shown after this item). It's Windows discovery, and the only way to permanently get rid of that problem is to disable WSD in SMB settings. Personally I prefer WSD to enabling SMB 1.0, so I just use the "-i br0" parameter in the WSD options in SMB settings. That seems to minimize the 100% CPU problem for me. It will take a reboot (or a restart of SMB from the command line) to fix the pegged CPU issue. That problem has never locked up my server, so it may not be the cause of your server locking up, but it is still a problem. Sent from my iPhone using Tapatalk
    1 point
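     A quick way to confirm that wsdd is the process pegging a core, as suggested above:
       top -b -n 1 | grep -E "wsdd|%CPU"    # one batch-mode snapshot; look for wsdd near the top of the CPU column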
  32. I just had the same issue! Formatting the USB drive to FAT32 and assigning a drive letter (had to do this manually) fixed the issue for me.
    1 point
  33. To my best knowledge and from everything I have read, these messages are purely debug output; they are not errors. What settings are not saving? This could all be due to the fact that rutorrent does NOT modify rtorrent settings. I will be cranking down the logging, which will stop these messages filling the log. Sent from my LG-V500 using Tapatalk
     It reverts many settings from the GUI back to default every time I restart the docker, and also loses track of where the torrents were moved to. They are all paused until I "save as" to the right place without checking the move-files box; then I need to recheck the files and then I can start them again. Currently I have to uncheck DHT and Peer every time, change the port, and also adjust the default download directory after every restart. It seems like a few settings that were set on first start stay, but some don't. If there is any way I can log the stopping of the docker to see if there are any write errors of some sort, that would be great.
     OK, so some of your issues are due to a misunderstanding of what rutorrent can and cannot do (by design). rutorrent is purely a web frontend to rtorrent and as such does NOT modify any settings; the only settings you can save using rutorrent are settings for rutorrent itself, i.e. things like enabling/disabling plugins, settings for plugins, etc. If you want to modify things like the incoming port, enabling/disabling DHT, and folders for incomplete/complete downloads, then you will have to modify the rtorrent config file, located at /config/rtorrent/config/rtorrent.rc (example entries follow this item). Please make sure you use something like Notepad++ (not Notepad) to prevent Windows line endings getting added.
    1 point
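     A hedged sketch of the kind of rtorrent.rc entries meant above; directive names vary with the rtorrent version shipped in the container, so treat these modern-syntax lines and their values as examples rather than drop-in config:
       # /config/rtorrent/config/rtorrent.rc (excerpt, illustrative values)
       network.port_range.set = 49160-49160     # incoming port
       dht.mode.set = disable                   # disable DHT
       protocol.pex.set = no                    # disable peer exchange
       directory.default.set = /data/incoming   # default download directory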