Leaderboard

Popular Content

Showing content with the highest reputation on 01/21/21 in all areas

  1. As @alturismo said, that's not possible; you can only view the GPUs that are not in use by a VM, since a GPU passed to a VM is not available to the host and the GPU Statistics plugin simply can't "see" it. You can't install a second instance of the plugin. The developer would need to add something like a drop-down to view and switch between multiple cards, but I think that would be a lot of work, since the plugin's code would have to change significantly, and I also think it wouldn't make much sense, because the normal use case is that you use one card or the iGPU for tr
    2 points
  2. This thread will serve as the support thread for the GPU Statistics plugin (gpustat). UPDATE 2021-02-21: Released - some fixes and rework of app displays. Prerequisites: on 6.7.1+, the Unraid-Nvidia plugin with NVIDIA kernel drivers installed. 6.9.0 Beta35 and up no longer require a kernel build, but now require the Nvidia plugin by @ich777. Intel support requires the Intel GPU TOP plugin from @ich777. Both plugins can be found in Community Apps. The plugin is now live on CA, but if you want to install manually, see below -- To review the so
    1 point
  3. ***Update***: Apologies, it seems an update to the Unraid forums removed the carriage returns in my code blocks, which was causing people to get errors when typing the commands verbatim. I've fixed the code blocks below and all should be Plexing perfectly now. =========== Granted, this has been covered in a few other posts, but I wanted to write it up with a bit of layout and structure. Special thanks to [mention=9167]Hoopster[/mention], whose post(s) I took this from. What is Plex Hardware Acceleration? When streaming media f
    1 point
  4. I previously tinkered with a DIY FreeNAS build, but abandoned it because of drives dropping out (possibly a hardware issue). One day on Zhihu I saw there was a paid NAS OS, and figured that if it could charge money, it must have something going for it. After a two-week trial, I went straight for the Pro version. Several features really fit my needs: 1. Mixed drive sizes: I have lots of disks of various sizes, and can string them all together. 2. Even if the array fails (say, 3 drives die), the data on the healthy drives is preserved, unlike RAID, where everything is lost. 3. VMs and Docker. 4. Disk encryption. 5. Disk spin-down; with RAID, you can't spin down individual disks. There are also some shortcomings: 1. Array write speed is slow; you can only compensate by adding a cache pool. 2. Mainland China network issues make downloading plugins a hassle. 3. The caching strategy doesn't support anything like Optane Memory. 4. It would be great to be able to build two arrays, one for data and one for cache; the current cache pool is just software RAID or a single disk, and mixing drive sizes there is inconvenient. Overall: it is fairly hard for newcomers to pick up, but once you are familiar with it, you find that Unraid is the best choice for an all-in-one server OS. Fantastic.
    1 point
  5. Connect to your database using this and run the variable command there: http://influxui.s3-website-us-east-1.amazonaws.com/ It works locally. Or exec into the container and run it that way; that will at least confirm that there is data in the DB. Have you tried removing and re-adding the data source in Grafana?
    1 point
  6. IMHO, this is a wonderful suggestion. I would say `Parity Update` is the best option: short and intuitively understandable. I would also propose that Parity Check and Parity Update be split into separate buttons in the UI. Furthermore, it would be nice if the Parity Update button were disabled by default. As far as I understand, it is recommended to always run a Parity Check and not an Update, at least according to the wiki. Notice how much easier it is to say Check and Update, instead of Check and Check-without-writing... yada... yada... yada... 😉 When
    1 point
  7. You are trying to run the commands before the emhttp daemon has started. Also, /mnt/disk8 will not be available until after the array has successfully started. Maybe you would be better off running commands that need the array started via the User Scripts plugin?
    1 point
  8. Based on the link you provided, you should have set up 3 volume mounts for your container: - /path/to/local/config.json:/app/config.json # For easy editing of the config - /path/to/local/plugins:/app/plugins # Only needed if using plugins - /path/to/local/libraries:/app/libraries # Only needed if using external libraries In the case of Unraid, "/path/to/local" would likely be a folder in your appdata folder, like "/mnt/user/appdata/amoungus". Within the amoungus folder you would have a subfolder "plugins" and a subfolder "libraries". The config.json is a bit of
    1 point
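A sketch of the folder layout described above, using a /tmp path in place of /mnt/user/appdata for illustration (the image name in the commented docker run is hypothetical):

```shell
# Create the appdata layout the container expects.
BASE=/tmp/demo-appdata/amoungus
mkdir -p "$BASE/plugins" "$BASE/libraries"
touch "$BASE/config.json"

# The container would then be started with mounts like
# (image name is a placeholder):
# docker run -d \
#   -v "$BASE/config.json":/app/config.json \
#   -v "$BASE/plugins":/app/plugins \
#   -v "$BASE/libraries":/app/libraries \
#   amoungus-server
ls "$BASE"
```

Creating the host-side folders and config file before the first start avoids Docker auto-creating config.json as a directory.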
  9. ...no, they are two classic SATA SSDs.
    1 point
  10. No. UUID and UUID_SUB have two different IDs. EDIT: Actually, yes. SDJ and SDF both have the same UUID.
    1 point
  11. That, and you should of course start thinking about backups. For example, I use this script (with the User Scripts plugin) to back up /mnt/user/appdata to /mnt/user/backups: https://forums.unraid.net/topic/97958-rsync-incremental-backup/ I have it run once a day.
    1 point
  12. It is a bad idea. I recommend installing something like Nginx Proxy Manager and adding at least some basic auth in front of it over SSL. The reason it can be a bad idea is that you can never be sure the whole UI is properly protected. By adding a proxy in front, or better yet a VPN like WireGuard, an attacker would have to bypass that authentication before getting to the Unraid server. Nginx basic auth and WireGuard auth are far more battle-tested (not to mention a much smaller attack surface) than the whole Unraid UI.
    1 point
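As a rough sketch of the basic-auth-in-front idea, an nginx server block could look like the following. The hostname, IP, and file paths are placeholders for illustration, not a tested Unraid configuration:

```nginx
server {
    listen 443 ssl;
    server_name unraid.example.com;            # placeholder hostname

    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd; # created with htpasswd

    location / {
        proxy_pass http://192.168.1.10;        # example Unraid LAN IP
    }
}
```

Nginx Proxy Manager exposes the same idea through its UI (an Access List plus an SSL certificate on the proxy host).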
  13. I am back. No further call traces, errors, or lockups. I have had the Power Supply Idle Control setting set to auto and Global C-States set to auto, and this has not caused any further issues. At this point I think the issue is somehow related to my 10-gigabit NIC. I will install some plugins, see if that causes any issues, and report back in another two weeks.
    1 point
  14. I think I figured out why: this morning I manually set a new icon (Macmini) in the smb.service file, and everything worked until now. I don't know how, but the Xserve icon returned.
    1 point
  15. This was it; I didn't realize that a BIOS setting even had that effect. It's working now. Thank you for your patience and help troubleshooting.
    1 point
  16. This sounds wrong. It should be the full path to the vdisk file on the NVMe drive, and I would expect it to be something along the lines of /mnt/disks/nvme_mount_point/vdisk_file_name. You may have ended up specifying a location in RAM.
    1 point
  17. You don't seem to fully understand the concept behind unRaid yet. Have a look at this post; it may make things clearer: https://www.kodinerds.net/index.php/Thread/60933-How-To-unRAID-und-die-Verwendung-von-Cache-Drives/ The HDDs (incl. parity) only spin up when they are written to. You can configure how long before they spin down.
    1 point
  18. I followed this guide, which is very complete. You can find the configuration files for Authelia as well as for Nginx Proxy Manager at the top of the page: https://github.com/ibracorp/authelia Just tell me if you need more info; I'll gladly share.
    1 point
  19. If the GPU is in use by a VM, you can't "use" it in the Unraid OS or fetch data for gpustat, so I guess that won't happen.
    1 point
  20. Firstly, great plugin and presentation of information! The drop-down box to select the GPU in a multi-GPU setup is a great function. Is it at all possible to have both (or more) GPUs visible on the dashboard? I use one GPU for transcoding and one for a VM. While the VM's GPU isn't visible when in use by the VM, it would be good to still be able to monitor it when it's not (even if it's not doing anything). Alternatively, how could I go about installing a second instance of gpustat?
    1 point
  21. Just an update. I'm sitting on over 4 days uptime, keeping my fingers crossed but I feel good about this. Thanks Hoopster for pointing out MacVlan. I thought it wasn't in effect since I wasn't using a custom ip or br0 on any docker, but just having the br0 option checked in docker was enough to add it.
    1 point
  22. If I had to take a wild guess, it would be that there is a typo at or near where you edited the ModIDs... maybe a space where there shouldn't be... Are you using the automod manager in ark? or adding the files manually? I have a new XML and container image that triggers the ARK automodmanager to download and update them, I will try to post it when I get home.
    1 point
  23. I'm not sure how advanced that feature is, e.g. whether it follows the HTML tag for the favicon or just assumes it lives at /favicon.ico. I have those errors too sometimes, and I haven't seen anything bad come from it.
    1 point
  24. Seems like the latest (v4.3.3) update is broken; it doesn't load the old torrents. Does anyone have the repository tag for v4.3.2? There are so many to choose from. EDIT: I just tried linuxserver/qbittorrent:14.3.2.99202101080148-7233-0cbd15890ubuntu18.04.1-ls110 and it worked, but if it's not the correct one, I'm happy to be corrected.
    1 point
  25. Have you updated the plugin, and do you have one of the plugins (Nvidia-Driver or Intel GPU TOP) installed? I now have a few reports that it is working, and I also haven't had a problem since the last update.
    1 point
  26. Looks like this isn't being maintained anymore. I am working on setting up my own Docker image to publish an updated version of pihole-dot-doh. Edit - I have submitted it to Squid to get it added to CA. Edit 2 - Pihole DoT-DoH is now on CA, with the latest FTL version 5.5.1 etc. It can be installed over testdasi's pihole without any issues.
    1 point
  27. Thank you so much. By the way, I just wanted to say it's an awesome plugin. Thanks for all your hard work!
    1 point
  28. That's good then; sounds like you're all set. I could try to describe the process, but it's easier to just reference another post. I am assuming that you have the NVMe mounted by Unassigned Devices. Basically, just copy the VM image file from the cache to the NVMe using your method of choice, although I recommend using the --sparse=always option as it keeps the image size smaller. Then edit the VM to point to the new disk location (the XML editor may be easier). If you have any questions, feel free to come back here.
    1 point
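The copy step can be sketched like this, using /tmp stand-ins for the real /mnt/cache and Unassigned Devices paths (a sparse test file substitutes for an actual vdisk):

```shell
# Stand-in paths for /mnt/cache/domains/... and /mnt/disks/<nvme>/...
mkdir -p /tmp/vm-demo/cache /tmp/vm-demo/nvme
truncate -s 10M /tmp/vm-demo/cache/vdisk1.img   # sparse 10 MiB "vdisk"

# --sparse=always keeps holes as holes instead of writing out zeros,
# so the copy occupies only as much space as the data actually written.
cp --sparse=always /tmp/vm-demo/cache/vdisk1.img /tmp/vm-demo/nvme/vdisk1.img
stat -c %s /tmp/vm-demo/nvme/vdisk1.img          # logical size preserved
```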
  29. That's because there is no 'EXPOSE 8081' in the Dockerfile (I'm only assuming port 8081 because I read it above). This is really a matter of how the Unraid WebUI works, but Unraid can't really be blamed for it either; the best solution would be for the maintainer to extend the Dockerfile with EXPOSE [THE_CORRESPONDING_PORT] so that it shows up in the WebUI and the button comes back. Here's an example: I run Home Assistant on br0 and Gitea on br0. Home Assistant also has no EXPOSE in its Dockerfile, so the WebUI option isn't shown for it either.
    1 point
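For illustration, the fix the post describes is a one-line addition to the image's Dockerfile; the base image and entrypoint below are placeholders, only the EXPOSE line matters:

```dockerfile
# Placeholder base image; substitute the maintainer's actual base.
FROM alpine:3.13
# Declaring the WebUI port is what lets Unraid show the "WebUI" button.
EXPOSE 8081
# Placeholder entrypoint.
CMD ["/app/your-app"]
```

EXPOSE is purely metadata (it does not publish the port by itself), which is why adding it is safe for the maintainer to do.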
  30. Amazing job on this! I saw the post on Reddit this morning about the Intel GPU, so I commented out the modprobe in my go file, installed the Intel GPU TOP plugin, rebooted, installed GPU Statistics, set up the plugin, and everything worked great! It's really nice to be able to see some visualizations and stats of life in the Intel GPU when it's doing some transcoding or something else. A quick question I was wondering about: is it possible to poll more than a single graphics card? I have an i9700k as the main CPU and use the Intel GPU for the Dockers that benefit from it, but I also h
    1 point
  31. We are actively working towards this!
    1 point
  32. FYI, I've updated the details about the RX550 compatibility as a bug report to Dortania. As for passing it through to the Mac VM: there is nothing special required. Just track it down in your PCIe IOMMU group. IOMMU group 33: [1002:67ff] 10:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Baffin [Radeon RX 550 640SP / RX 560/560X] (rev ff) [1002:aae0] 10:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Baffin HDMI/DP Audio [Radeon RX 550 640SP / RX 560/560X] In my case, device 10:00.0 is video and 10:00.1 is audio. Then map that bus 0x10 function
    1 point
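In libvirt XML terms, mapping the video function quoted above would look roughly like this hostdev entry (a sketch based on the 10:00.0 address in the post, not the poster's actual VM XML); the audio function at 10:00.1 would get a second hostdev with function='0x1':

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- host address of the RX 550 video function (10:00.0) -->
    <address domain='0x0000' bus='0x10' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```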
  33. Hi friends, there are some webgui translations that were added at the last minute. If you could take a look at what I have, it would be much appreciated. Please be kind; my Spanish is not very good 🙂 ; users.txt ;---------------- Click to select PNG file=Haga clic para seleccionar el archivo PNG ;helptext.txt ;----------------
    1 point
  34. So after my Unmanic vs Tdarr testing, I've decided to use Tdarr for now. I like the simplicity of Unmanic, but Tdarr supports NVENC, so my encodes there are much faster. Initially I was using Unmanic because the resulting files were significantly smaller than Tdarr's output, but that was until I realized Unmanic was clobbering my audio. Once I disabled the audio transcoding in Unmanic the file sizes between the two were about the same. Besides the faster GPU encoding, I like the control Tdarr gives me through the use of plugins. It was more difficult to get set up and figure out the workflow a
    1 point