Leaderboard

Popular Content

Showing content with the highest reputation on 01/13/21 in all areas

  1. This thread will serve as the support thread for the GPU statistics plugin (gpustat).
UPDATE 2022-11-29: Fixed an issue with the parent PID causing the plugin to fail.
Prerequisites: Unraid 6.7.1+ with the Unraid-Nvidia plugin and NVIDIA kernel drivers installed. 6.9.0 Beta35 and up no longer require a kernel build, but do require the Nvidia plugin by @ich777. Intel support requires installing the Intel GPU TOP plugin from @ich777. Both plugins can be found in Community Apps.
The plugin is now live on CA, but if you want to install manually, see below.
To review the source before installing (you should always do this): https://github.com/b3rs3rk/gpustat-unraid
Manual plugin installation URL: https://raw.githubusercontent.com/b3rs3rk/gpustat-unraid/master/gpustat.plg
Enjoy!
Information to include when asking for support (a combined collection sketch follows this item):
1a) the output of 'nvidia-smi -q -x -i 0' from the Unraid console (via SSH or the web terminal is fine) for NVIDIA support, AND/OR
1b) the output of 'timeout -k .500 .400 intel_gpu_top -J -s 250' for Intel support, AND/OR
1c) the output of 'radeontop -d - -l 1'
2) the output of 'cd /usr/local/emhttp/plugins/gpustat/ && php ./gpustatus.php'
3) a screenshot of the dashboard plugin (if the issue is only seen during transcoding, a snippet taken during a transcode is best)
4) the contents of '/boot/config/plugins/gpustat/gpustat.cfg'
    1 point
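A minimal sketch of collecting the support information above in one pass from the Unraid console; the output path /tmp/gpustat-support.txt is just an illustrative choice, and you only need the lines that match your GPU vendor:

        # Gather gpustat support info (run only the vendor lines that apply)
        nvidia-smi -q -x -i 0                          > /tmp/gpustat-support.txt   # NVIDIA
        timeout -k .500 .400 intel_gpu_top -J -s 250  >> /tmp/gpustat-support.txt   # Intel
        radeontop -d - -l 1                           >> /tmp/gpustat-support.txt   # AMD
        (cd /usr/local/emhttp/plugins/gpustat/ && php ./gpustatus.php) >> /tmp/gpustat-support.txt
        cat /boot/config/plugins/gpustat/gpustat.cfg  >> /tmp/gpustat-support.txt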
  2. Welcome to the "new" new method of working around HP's RMRR problem on unRaid. For the previous/deprecated method which was no longer working, see: https://forums.unraid.net/topic/72681-unraid-hp-proliant-edition-rmrr-error-patching/ Additionally, for 6.9 there was a way to compile the patch yourself using the now deprecated kernel helper. This method starting with 6.10 is more streamlined as the patch is pre-built into unRaid now. First the disclaimer: This patch been tested with no negative attributes observed by myself and many others. . Many have been running an RMRR patched version since mid 2018 when we first started producing them, and so far have had no reported issues. In 2021 I sold my last proliant, so as new releases of unRaid are made public, it will be on the users to report any issues as they may (or may not) arise.. As the patch only omits RMRR checks, it should not affect any other core function of the OS. But as a general notice, neither Limetech or myself, or any contributors are responsible/liable for any loss of data or damage to hardware in enabling this patch in unRaid, or on any other system you install it. The "New" New unRaid HP Proliant Edition - RMRR Error Patching Description/Problem It is well documented that many HP Proliant servers have RMRR issues using certain BIOS versions after about 2011 when trying to passthrough devices in a linux environment. Device passthrough fails and the onscreen error will show: vfio: failed to set iommu for container: Operation not permitted And a look further into the logs show: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor. HP is aware of this problem and is not updating older machines. There are some bios options to try to fix this on newer models with some success. On unRaid, the problem is addressed by patching out the RMRR check. As of 6.10 the patch is now built into unraid, but off by default. Information regarding the patch can be found here: https://github.com/kiler129/relax-intel-rmrr Note: the patch author states: ------------------ --->>>A big thanks to @ich777 for creating the ability for users to previously easily compile the patch themselves, and to @limetech for now incorporating it into unRaid itself. <<<--- ------------------ Previously, @AnnabellaRenee87 and I maintained and provided the patched file for users. But now you will be able to easily enable the patch as it is included in unRaid starting with 6.10. Installation/Enable Procedure valid starting with 6.10 For users already using a previously patched version of unRaid, follow these steps: 1. Make a backup of your flash device by clicking on the main tab, then scroll down to your boot device section, click flash, then click flash backup and wait for the download. 2. Modify your syslinux.cfg by going to the main tab>boot device>flash in the "Unraid OS" section. Modify to the following: append intel_iommu=relax_rmrr initrd=/bzroot The patch is off by default and requires this to enable it. 3. Update to at least 6.10 of unRaid, reboot. 4. After the server has booted up, open a terminal and enter dmesg | grep 'Intel-IOMMU' If the patch is active you will get the following response DMAR: Intel-IOMMU: assuming all RMRRs are relaxable. This can lead to instability or data loss For users already running 6.10 and above follow these steps: 1. 
Make a backup of your flash device by clicking on the main tab, then scroll down to your boot device section, click flash, then click flash backup and wait for the download. 2. Modify your syslinux.cfg by going to the main tab>boot device>flash in the "Unraid OS" section. Modify to the following: append intel_iommu=relax_rmrr initrd=/bzroot The patch is off by default and requires this to enable it. 3. Reboot. 4. After the server has booted up, open a terminal and enter dmesg | grep 'Intel-IOMMU' If the patch is active you will get the following response DMAR: Intel-IOMMU: assuming all RMRRs are relaxable. This can lead to instability or data loss Disable To disable the patch, remove the modification to syslinux.cfg and reboot. Other Proliant Problems Check this thread for fixes to other common Proliant problems: https://forums.unraid.net/topic/59375-hp-proliant-workstation-unraid-information-thread/ Happy unRaiding!
    1 point
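A minimal sketch of the "Unraid OS" entry in syslinux.cfg with the relax_rmrr parameter added, as described in item 2; the surrounding label/menu/kernel lines are shown for context and may differ slightly on your flash drive - only the append line changes:

        label Unraid OS
          menu default
          kernel /bzimage
          append intel_iommu=relax_rmrr initrd=/bzroot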
  3. Unfortunately, this doesn't work for me. In case anyone's interested, I found a different free invoice software that can be used in a Docker container. It's called Crater. There isn't a pre-made image for it on DockerHub, though, so, despite messing with it for an hour or two, I have no idea how to install it... They provide instructions but I'm nervous about following them instead of using the UnRaid UI.
    1 point
  4. The vendor-reset got an update for Navi users. I tested it on my system and I no longer have broken audio after resets. For me this is a real breakthrough! I don't need the old Navi patch anymore. I can now boot between Windows 10 20H2, macOS Big Sur 11.1 and Ubuntu 20.10. For all Navi users who want to try it out: force-update the docker, then edit the docker and add a variable like this: Try it and hopefully enjoy! Keep in mind, this only fixes the specific audio issue for Navi users. Please report your experiences here. Special thanks to @ich777 for that fast edit.
    1 point
  5. Why do you actually want to switch to BTRFS? If you are hoping for better data integrity, I unfortunately have to disappoint you. BTRFS does have checksums, but they won't help you here, because in the Unraid array every disk stands on its own. That means you will find out about bitrot, but you cannot repair it; that only works with BTRFS in RAID1. Since BTRFS, like every CoW filesystem, has considerable overhead, I recommend instead running the Dynamix File Integrity plugin every few weeks and, if needed, restoring files affected by bitrot from a backup.
    1 point
  6. Attach your supervisord log to your next post.
    1 point
  7. Glad to hear that everything works. Please feel free to contact me again if something isn't working, but it should at least...
    1 point
  8. Take a look at these two threads. They are admittedly a bit older, but they contain quite interesting content and considerations. But one question... why do you want to switch from XFS to BTRFS?
    1 point
  9. Exactly, it will all revert back to default if something fails or you uninstall the plugin. Not every system is the same, so please try it first and then we can troubleshoot. Keep in mind that these cards should work with LibreELEC and also with the TBS-OS drivers; please test both if possible if one doesn't work (you can select the drivers on the plugin page). My DigitalDevices cards run just fine with this plugin and the TVHeadend container from linuxserver. Alternatively, you can also try my Unraid-Kernel-Helper, where you can integrate everything into the images themselves if the plugin doesn't work for you. I appreciate the nice words. EDIT: A buddy runs the TBS-OS drivers just fine with his TBS-6902.
    1 point
  10. According to this, it should not be a problem: https://forums.unraid.net/topic/92315-mixing-array-drives-xfs-and-btrfs/ Did you carry out the steps as described there?
    1 point
  11. Hey, just installed this alongside Intel GPU TOP - for some reason nothing displays at all on the dashboard. I am running an Intel 6600K, by the way. EDIT: Forgot to select Intel in the plugin settings! That resolved my issue.
    1 point
  12. I haven't seen anyone using it during transcoding operations. From Intel:
    1 point
  13. @JorgeB Thanks for your help, it was both. I first changed the power supply setting. It stopped again at 20 percent. Then I changed my ram speed to auto, it's all fixed now.
    1 point
  14. iGPU support is now merged to master and plugin is updated. Thanks to everyone who tried it on the test branch. You can now revert to the CA version by removing my plugin altogether and reinstalling from CA for cleanliness.
    1 point
  15. Usually yes, as long as no RAID controllers are involved; any VM pass-through settings might also need tuning if applicable.
    1 point
  16. Hey, seems I missed a few posts on this thread - sorry about that - I will answer them here:
@Keith Ellis: yes, you can pause the dockers; /basetopic/servername/dockername/paused should do the trick (a hypothetical publish example follows this item).
@paulmorabi: I am working on a fix for the latest RC of Unraid.
@dbs179 & @neepninja: if you are getting 503 errors in the logs, the problem is between the Unraid API and the Unraid server; your MQTT setup within HA seems fine. The problem could be many things: if you're using the latest RC, that could cause the issues I'm working on; the server may use https while the API hasn't been configured to connect over https; if your UI uses non-standard ports, you need to list those in the URL field; or it could be a simple user/password mismatch. Finally, if your MQTTCacheTime or MQTTRefreshTime is too fast, Unraid can block the API for spamming it with requests. I hope this helps - let me know if there are further questions.
    1 point
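A hypothetical sketch of publishing to the pause topic mentioned in item 16, using mosquitto_pub from a shell; the broker address, the concrete basetopic/server/container names, and the "true" payload are all assumptions and not confirmed by the post:

        # Hypothetical example only: topic segments and payload value are assumptions
        mosquitto_pub -h 192.168.1.10 -t "unraid/tower/plex/paused" -m "true"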
  17. Install the Community Applications plugin, then use the Apps page in the webUI of your Unraid server to access and install the hundreds of apps available, including Unassigned Devices. https://unraid.net/community/apps
    1 point
  18. @b3rs3rk looking all good here. As a note, the IMC Bus is always active, with or without transcodes, but it shows the same way in intel_gpu_top, so the plugin works as expected and I guess the permanent activity is normal.
    1 point
  19. I'm not aware of one command for all disks. I only know it like this:
mdcmd spinup 0
mdcmd spinup 1
and so on, and:
mdcmd spindown 0
mdcmd spindown 1
and so on. Here is an example with a loop (see also the sketch after this item): https://forums.unraid.net/topic/12356-command-for-spinning-up-all-disks/?tab=comments#comment-121493
    1 point
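A minimal sketch of such a loop, assuming the array has disks 0 through 5; adjust the range to match your array, and swap spinup for spindown as needed:

        # Spin up array disks 0-5 one after another (the disk count is an assumption)
        for i in $(seq 0 5); do
          mdcmd spinup $i
        done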
  20. You could also use this to COMBINE stats from multiple servers as well. So if you split libraries or something like that, you are covered. Or if you want to see your total LAN/WAN bandwidth across all PLEX streams across multiple servers, etc... Just add 2 or more Server Tags like: server = "1" OR server = "2"
    1 point
  21. Honestly, I haven't got that far and it's not a use case I have. I'm just guessing about the secondary datasource, but it does sound right when I think about it. Tell you what, I'll go through the motions when I get some time and set up a second Plex/Tautulli/Varken stack on my backup server to test this. Once I have it all figured out, I'll include it in a future UUD update (if it is possible and not a limitation). Anyone else running 2 or more Plex servers? Is this a huge advantage for anyone else? @Boomháuer If you happen to figure it out on your own, let me know how you solved it. @GilbN I'm interested to get your thoughts/take on this. Feel free to chime in man. You're welcome.
    1 point
  22. I don't know what to make of it either, and agree some other issue (gremlin) lurks, but I am glad it's finally working.
    1 point
  23. When setting up an automated build on Dockerhub there is an option to have a build trigger based on the base image.
    1 point
  24. I've done some digging on this - here's what I found. I mounted the Unraid share "system" on my Mac and checked its Spotlight status:
[macbook-pro]:~ $ mdutil -s /Volumes/system
/System/Volumes/Data/Volumes/system: Server search enabled.
[macbook-pro]:~ $
But as best I can tell, "server search" is not in fact enabled. It turns out Samba 4.12.0 changed this search-related default: "Note that when upgrading existing installations that are using the previous default Spotlight backend Gnome Tracker must explicitly set 'spotlight backend = tracker' as the new default is 'noindex'."
If I add the following to SMB extras:
[global]
spotlight backend = tracker
in addition to this share-specific section:
[system]
path = /mnt/user/system
spotlight = yes
search works again! (A combined sketch follows this item.) When I check Spotlight status I now get the following:
[macbook-pro]:~ $ mdutil -s /Volumes/system
/System/Volumes/Data/Volumes/system: Indexing disabled.
[macbook-pro]:~ $
Hopefully this is useful toward a general fix. I'd rather avoid a custom entry for each share.
    1 point
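A combined sketch of the SMB extras entries described in item 24, assuming a single share named "system"; each additional share you want searchable would need its own block with spotlight = yes:

        [global]
           spotlight backend = tracker

        [system]
           path = /mnt/user/system
           spotlight = yes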
  25. I hope you are joking... not to be polemical, but Unraid is not RAID, and it's one of the most user-friendly OSes for managing virtual machines that I have tried: what are you comparing it to? Try to build and manage a QEMU virtual machine in any Linux distro of your choice, and I think you will change your opinion.
    1 point
  26. Hi NuWanDa, I had the same problem. To solve it: stop the docker, remove it, then remove the docker template by clicking Add Template, choosing the MacInABox template and hitting the minus button. After that, use a file manager like Krusader and delete the MacInABox folder in appdata (this is the important step so that the scripts get installed later). For good measure, also delete the ISO files of the macOS version in the iso folder. After that, reinstall the MacInABox docker. Hope that helps. Cheers, MH
    1 point
  27. Yes, when you browse through your picture folders, the photos first have to be fully loaded before they are displayed, and again in the next session. No thumbnail index is created.
    1 point
  28. No, this is not a feature of DockerMan or any other docker manager that I am aware of. Generally speaking, containers are meant to be static things with everything you need already bundled in. Now on to how you can achieve what you are trying to do. If you happen to be using a LinuxServer.io container, you should look into their Docker Mods feature. It is exactly the kind of thing you are looking for: adding an extra variable to the container which will install extra packages. I don't know for sure if they have an ffmpeg Docker Mod for Nextcloud, but last I looked it seemed pretty easy to create new mods. Alternatively, if you don't use an lsio container (or they don't have one), I would think the best option would be to look for a different Nextcloud image with ffmpeg built in. Finally, if the above options don't pan out, you can always roll your own image: just write a simple Dockerfile that uses your image of choice as a base and does a single RUN line to install your additional packages (a sketch follows this item). With a free Docker Hub account and GitHub account you could set up your image to autobuild whenever the base image is updated.
    1 point
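A minimal sketch of the "roll your own image" option from item 28, written as a shell snippet; the choice of the official nextcloud image as the base and the tag mynextcloud-ffmpeg are assumptions for illustration:

        # Hypothetical: build a Nextcloud image with ffmpeg layered on top of an assumed base image
        printf '%s\n' \
          'FROM nextcloud:latest' \
          'RUN apt-get update && apt-get install -y ffmpeg && rm -rf /var/lib/apt/lists/*' \
          > Dockerfile
        docker build -t mynextcloud-ffmpeg .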
  29. NICs are found but fail to load the driver because of an error:
Jan 10 14:02:06 dstore kernel: bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.712.30-0 (2014/02/10)
Jan 10 14:02:06 dstore kernel: bnx2x 0000:03:00.0: msix capability found
Jan 10 14:02:06 dstore kernel: bnx2x: PCI device error, probably due to fan failure, aborting
Jan 10 14:02:06 dstore kernel: bnx2x 0000:03:00.1: msix capability found
Jan 10 14:02:06 dstore kernel: bnx2x: PCI device error, probably due to fan failure, aborting
So this is a Linux or hardware issue; maybe look for a firmware update.
    1 point
  30. The current method of RMRR patching is now deprecated. For the new method of RMRR patching please see here:
    1 point
  31. I am trying to set this up, and I am stuck on setting the Backup Storage Path. I've been an Unraid user for some time, but I am well below noob user level. I would like to get this set up, but I will need exact instructions to make it happen. TIA for any help that you can provide.
    1 point
  32. Just an FYI to anyone passing an X520-DA2 10gb NIC through: you may only have one interface load in FreeBSD/pfSense. I had to follow the following post and edit the XML... multifunction not being enabled was the issue. This may save someone hours of time and suffering. Cheers, Burnstation19
    1 point
  33. Ok, so the new Macinabox is now on CA. Please watch this video for how to use the container; it is not obvious from just installing the container. It is really important to delete the old Macinabox, especially its template, else the old and new templates combine. Whilst this won't break Macinabox, you will have old variables in the template that are not used anymore. I recommend removing the old Macinabox appdata as well.
Basic usage instructions. Macinabox needs the following other apps to be installed:
CA User Scripts (Macinabox will inject a user script; this is what fixes the XML after edits made in the Unraid VM manager)
Custom VM icons (install this if you want the custom icons for macOS in your VM)
Install the new Macinabox.
1. In the template, select the OS which you want to install.
2. Choose auto (default) or manual install. (Manual install will just put the install media and OpenCore into your iso share.)
3. Choose a vdisk size for the VM.
4. In VM Images: here you must put the VM image location (the path the vdisk for the VM will be put in).
5. In VM Images again: re-enter the same location as above. Here it is stored as a variable, which will be used when Macinabox generates the XML template.
6. In Isos Share Location: here you must put the location of your iso share. Macinabox will put named install media and OpenCore here.
7. In Isos Share Location again: again, this must be the same as above. Here it is stored as a variable; Macinabox will use this when it generates the template.
8. Download method: leave as default unless for some reason method 1 doesn't work.
9. Run mode: choose between macinabox_with_virtmanager or virtmanager only. (When I started rewriting Macinabox I was going to only use virt-manager to make changes to the XML. However, I thought it much easier and better to be able to use the Unraid VM manager to add a GPU, cores, RAM, etc., then have Macinabox fix the XML afterwards. I decided to leave virt-manager in anyway, in case it's needed. For example, there is a bug in Unraid 6.9 beta (including beta 35): when you have a VM that uses VNC graphics and you change that to a passed-through GPU, it adds the GPU as a second graphics device, leaving the VNC in place. This was also a major reason I left virt-manager in Macinabox. For situations like this it's nice to have another tool. I show all of this in the video guide.)
After the container starts, it will download the install media and put it in the iso share. Big Sur seems to take a lot longer than the other macOS versions. To know when it's finished, go to User Scripts and run the macinabox notify script (in the background); a message will pop up on the Unraid webUI when it's finished.
At this point you can run the macinabox helper script. It will check whether there is a new autoinstall ready to install, then it will install the custom XML template into the VM tab. Go to the VM tab now and run the VM. This will boot into the OpenCore bootloader and then the install media. Install macOS as normal.
After install you can change the VM in the Unraid VM Manager: add cores, RAM, a GPU, etc. if you want. Then go back to the macinabox helper script, put the name of the VM at the top of the script, and run the script. It will add back all the custom XML to the VM and it's ready to run. Hope you guys like the new Macinabox.
    1 point
  34. Using Win 10 Pro 1909 and all is working fine. What I did was different: on Unraid under "Settings" > "SMB" > "SMB Extras", in "Samba extra configuration:" I added the following (see the sketch after this item):
min protocol = SMB2
I can access my Unraid shares from every Win10 computer on the network.
    1 point
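A minimal sketch of that Samba extra configuration entry; the explicit [global] header is an assumption added for clarity, as the post only shows the single min protocol line:

        [global]
           min protocol = SMB2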
  35. So I have a Raspberry Pi 4 with 4gb of ram and was wondering if I could possibly get Unraid booted on it. So using Qemu I got it running and created an array. Shame about the performance though !!
    1 point
  36. I figured it out! Under the iDRAC for the server, there was a power cap policy set to limit power usage to 210W. Any load that hit the server would make it throttle because it hit that cap. I changed it to 400W and the CPUs are boosting now.
    1 point