Everything posted by Glasti

  1. Found the issue! I didn't have `device_tags = ["ID_SERIAL"]` set in `[[inputs.diskio]]`. All working now!
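For reference, a minimal sketch of the relevant `telegraf.conf` fragment (the surrounding options will vary with your setup):

```toml
# Collect per-disk I/O metrics and tag them with udev properties.
# Tagging with ID_SERIAL lets dashboard panels match disks by serial
# number instead of by kernel device name (sda, sdb, ...).
[[inputs.diskio]]
  device_tags = ["ID_SERIAL"]
```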
  2. Love the dashboard! I have been trying to set it up and 90% of it works. The only thing I am not able to figure out is how to get the SSD section to work. I think it is because, for some reason, I am not able to get disks to show here; they all show as in the screenshot. I went through my telegraf.conf again with Gilbn's guide, but everything is set up according to the guide. Any ideas?
  3. I see the same, pretty annoying.
  4. I will poke around and see if it is possible to either set up multiple cron jobs OR add `/usr/local/sbin/mover.old start` to the mover button. I will post here if I find a solution! Thank you for your work!
  5. Thank you for the info; this makes the Mover button in the `Main` tab obsolete. No big deal for me. Would it be possible to add multiple cron jobs, so you can force a move multiple times a day if necessary?
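As a sketch of what "multiple cron jobs" could look like here, assuming the `/usr/local/sbin/mover.old start` command quoted above is the one to schedule (whether the plugin honors externally scheduled runs is not confirmed by these posts):

```
# crontab entries: invoke the original mover three times a day,
# at 04:00, 12:00 and 20:00.
0 4,12,20 * * * /usr/local/sbin/mover.old start
```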
  6. Does this mean that when the plugin is enabled it will only move above the set threshold, and only until the threshold is met? That is what it looks to be doing here. At the moment I cannot invoke mover manually if below 75%, and it will not move anything below my threshold. Edit: I do have a cron schedule set.
  7. I had a similar issue and fixed it by doing the following: delete the nvidia plugin, grab a fresh Unraid kernel, install the plugin and the latest driver. Maybe it helps. I have not changed any hardware settings. Ryzen 3700X, Strix B450-F and a GTX 1660.
  8. Update: hopefully the last one. I haven't run Memtest yet, but I tried going back to RC2 and that didn't solve the issue. So I decided to uninstall the plugin and upgrade to Stable again. After updating the OS I grabbed the plugin and saw that newer drivers were added; I was using 455.** before. Now I am running driver 460.56 and the issue hasn't occurred for 28 hours, which is great. Of course I wasn't smart enough to grab the driver I was using before after reinstalling the plugin, to see if it was an issue with upgrading from RC2 to Stable without reinstalling the plugin and/or
  9. The issue/error has returned again. Testing with transcoding not to RAM but to one of my cache pools. If that doesn't help I am going to run a MemTest.
  10. Thank you for your reply. I should have given a bit more detail. It is a custom-built machine: ROG Strix B450-F Gaming board, Ryzen 3700X. I have not changed anything hardware-wise in the last few months, besides replacing some HDDs in December. There are no VMs running or any devices bound to VFIO. Also, Plex is the only container using the GPU. What I did realize, and forgot to mention here: I recently didn't unplug the HDMI cable from the GPU, but unplugged it from the monitor. I have since removed it. The GPU disappeared once after, but it has been available
  11. Hello, since a couple of days ago my GPU will stop showing in the plugin and with nvidia-smi, and I am seeing these errors in the logs. It will come back, and then disappear again. `NVRM: GPU 0000:08:00.0: Failed to copy vbios to system memory.` `NVRM: GPU 0000:08:00.0: RmInitAdapter failed! (0x30:0xffff:802)` `NVRM: GPU 0000:08:00.0: rm_init_adapter failed, device minor number 0` It had worked fine for about 8 months before this. I am not able to find much information about these messages, but it sounds like it could be a driver issue. Any hints on where to start troubl
  12. Glasti

    Happy Birthday!

    Happy birthday!
  13. DOH, changing to `/mnt/cache/plex_data/*` worked. I swear I have tried this before and claimed it not working... But I am wrong.
  14. DOH, changing to `/mnt/cache/plex_data/*` worked. I swear I have tried this before and claimed it not working... But I am wrong.
  15. Running the Hotio Plex container, Unraid 6.9 RC2. I have some custom mappings that move some folders outside the appdata folder for more efficient backups. I checked my backups from 6.8.3 before, and there the mappings worked. These are my mappings: /config <---> /mnt/user/appdata/plex /transcode <---> /dev/shm /data <---> /mnt/user/data /config/app/Plex Media Server/Media <---> /mnt/user/plex_data/Media /config/app/Plex Media Server/Cache/Transcode/Sync+ <---> /mnt/user/plex_data/Cache/Transcode/Sync+ /config/app/Ple
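The mappings listed above, expressed as a compose-style volumes sketch. The image name `hotio/plex` and the service name are assumptions (only "Hotio Plex Container" is stated), the truncated last mapping is omitted, and container paths containing spaces need quoting:

```yaml
services:
  plex:
    image: hotio/plex              # assumed image name for the Hotio Plex container
    volumes:                       # host path : container path
      - /mnt/user/appdata/plex:/config
      - /dev/shm:/transcode        # transcode in RAM
      - /mnt/user/data:/data
      - "/mnt/user/plex_data/Media:/config/app/Plex Media Server/Media"
      - "/mnt/user/plex_data/Cache/Transcode/Sync+:/config/app/Plex Media Server/Cache/Transcode/Sync+"
```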
  16. Shutting down or restarting from the dashboard works fine. Permissions are something I checked and confirmed after posting here. I changed my USB to a Cruzer recently, so I cannot disconnect it as easily by bumping against it when, for example, moving my case a little. That is something I have done accidentally in the past with other USB sticks. I deleted the plugin for now; I don't have the time for / don't feel like troubleshooting right now. When I have time it will be a fun little project to see if it's a Docker issue. It's a good point, could very well be.
  17. Rebooting with Dynamix System Buttons starts a parity check for me in 6.9 RC1. Is this normal/known?
  18. Hello! I have been trying to get WireGuard to work in my container; Halianelf gave me some tips on how to get it working, but now my container hangs on starting the WireGuard client. `2020-12-06 21:06:26,512 DEBG 'start-script' stderr output: [#] ip link add wg0 type wireguard 2020-12-06 21:06:26,513 DEBG 'start-script' stderr output: [#] wg setconf wg0 /dev/fd/63 2020-12-06 21:06:26,520 DEBG 'start-script' stderr output: [#] ip -4 address add dev wg0 2020-12-06 21:06:26,523 DEBG 'start-script' stderr output: [#] ip link set mtu 1420 up dev wg0 2020-12-06
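One detail worth noting in the log above: `ip -4 address add dev wg0` is run with no address argument, which may indicate the `Address` line in the WireGuard client config is empty or malformed. A minimal `wg0.conf` sketch for comparison; every value below is a placeholder, not taken from the posts:

```ini
[Interface]
# Address must carry the tunnel IP; if it is empty, the startup script
# ends up running `ip -4 address add dev wg0` with no address, as in the log.
Address = 10.0.0.2/32
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0
```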
  19. Could someone have a look? It would be appreciated. Here are some logs from this morning.
  20. Hello, I am seeing this interesting issue where I get the FCP message saying: "Your server has run out of memory." This started happening after configuring these awesome scripts. The scripts don't seem to be the problem; I have checked and confirmed. But the error comes after the 'backup_all_appdata' script ends. The script ends at 04:26 AM, the FCP error at 04:40 AM. I normally check the system around 9 AM and then there are no RAM issues. All my appdata runs off Unassigned drives: - the main appdata folder is
  21. OK. So when I added the new drives I removed a cache drive. For this I moved all shares off the cache drive with the mover function. I didn't rebuild the Docker image. This is what I did last night; I will monitor if it still happens.
  22. After adding some drives to my array I have been getting the following when my VM (W10) is running. From what I could find on Google it is not anything to worry about, but when the VM is running my unpinned CPUs are at max load.
  23. Got it fixed! It was a fairly simple fix: I found a working vbios and unplugged the screen from power. I'll definitely keep these tips for possible future troubleshooting if needed!