
AgentXXL's Achievements





  1. @thatja I've been using the mergerfs plugin for a few months now and have seen no issues similar to yours. Looking through the syslogs you've managed to capture, I can't see anything that indicates a mergerfs problem. I suspect a RAM issue, so I'd suggest shutting down and running Memtest86 for at least 24 - 36 hrs, since your crashes appear to happen in that time frame. Also, just to confirm: do you have the syslog server (Settings --> Syslog Server) set to archive the syslog to a share/folder? Your syslogs don't seem to retain anything prior to the reboots/crashes, so they're a little less useful.
  2. Release 2024.05.07 has been installed. Initially I thought it was working as I saw the UD drives and shares. I did some tests by changing the tab I was on in unRAID and then going back to the Main tab. Each time it took between 12 and 15 seconds for the UD drives and shares to show up. Great! For a refresh started by clicking the refresh symbol from the UD controls, it timed out after 30 seconds (which it should since that's the new timeout value). Alas that refresh attempt seems to have borked it again as now changing tabs or reloading the Main tab won't display the drives or shares. But interestingly, I left it on the Main tab for a few minutes and then came back and the drives and shares were now visible. I suspect some of this behavior (maybe all of it) is also being influenced by the constant spamming of the syslog. I have been noticing other things taking more time to complete since the BMC firmware died. Regardless, it's working enough for my needs right now. Thanks again!
  3. Thanks! Hopefully it will work. I'm still puzzled as to how the plugin updates itself automatically during a reboot. I may have missed something in the release notes for 6.13 beta 2 (or earlier). I do have Docker containers updating automatically via Appdata Backup, but that doesn't do plugins. I tried another reboot and the same thing happened - the 2024.05.01 version I put in /boot/config/plugins/unassigned.devices/ gets replaced by 2024.05.06 (which I had deleted). I mentioned this in another post, but I still haven't been able to fix the BMC firmware issue - the flash chip can't even be read by the Supermicro UEFI flash utility or by my external CH341a programmer. So I've ordered a replacement chip which should be here early next week. In the meantime I can't even grab diagnostics... the constant spamming of the system log by these phantom messages from the iKVM (part of the BMC/IPMI functionality) just holds things up. It's even taking my unRAID server almost double the time to boot (over 10 minutes). These messages start to appear the moment the system powers on and initializes the BMC, which makes troubleshooting any issue a nightmare. Regardless, thanks for increasing the timeout for this. Perhaps make it a value we can set in Settings --> Unassigned Devices? As always, I appreciate the effort you put in. I'll update you once I apply the new release to let you know if it's working.
  4. OK, that's odd. I saw the drives all show up again after reverting to the older version, but now it's back to the same timeout issue. And when I checked the USB key, the plugin had been re-updated to 2024.05.06. Is there some new process in unRAID 6.13 that updates plugins automatically on start of the array?
  5. Alas, I was unsuccessful in attempting to flash the BMC firmware - the chip won't even read, which is what the Supermicro UEFI programming utility reported. I'll be ordering a replacement chip later today; once it arrives I'll program it and then do the surgery to replace the dead one on my motherboard. In the meantime, is it as simple as replacing the plugin package on my USB key with the previous version and rebooting? NVM - I just restored the 2024.05.01 version of the plugin and it's working. I'll have to ignore updates until I get the motherboard issue fixed. Not having a usable syslog is very frustrating, to say the least.
  6. Something appears to have gone awry with this new release. I am no longer seeing any disks or remote shares under the UD section. I have 34 disks under UD, with 26 of them mounted by UD for use with the mergerfs plugin. They are still mounted and mergerfs is working, so it seems to be just a display issue. This is what I see: My syslog shows no messages from UD since I updated the plugin yesterday. Alas, messages get purged quickly, as I have a hardware fault with the BMC firmware on my Supermicro H11SSL-i motherboard and syslog is being spammed with messages about it 20 - 30 times a minute. Later today I hope to fix this issue by manually reprogramming the BMC flash chip. When I stopped the unRAID array, the UD drives all appeared to unmount properly (they're set to automount in UD settings, with the manual mount button disabled for each drive). I'm just rebooting now to see if the drives will show up. After the reboot it initially showed the same as the screenshot above, but once I started the array both the disks and my remote shares showed up - until I refreshed the page, at which point it reverted to the screenshot above. If I get my BMC issue fixed I'll try another reboot and grab some diagnostics.
  7. Thanks! I read that last night but haven't given it a try yet. Simple enough solution. While it would be nice to prevent those drives from being seen by UD, at least it should be a workable system.
  8. Hi Rysz. I'm giving your mergerfs plugin a try and have a couple of quick questions. For my test I'm using my test box. I've added 10 drives from my offline backups, ranging in size from 2TB to 8TB. All are XFS formatted with a single partition and all use the same root disk name of OfflineBU, e.g. OfflineBU00, OfflineBU01, etc. If I try to use mergerfs via the command line with the following command, it fails:

root@AnimTest:~# mergerfs -o cache.files=partial,dropcacheonclose=true,category.create=mfs /mnt/disks/OfflineBU* /mnt/addons/BUPool
fuse: invalid argument `/mnt/disks/OfflineBU02'

It appears to work if I don't use a wildcard but list each drive separately like this:

mergerfs -o cache.files=partial,dropcacheonclose=true,category.create=mfs /mnt/disks/OfflineBU00:/mnt/disks/OfflineBU01:/mnt/disks/OfflineBU02:/mnt/disks/OfflineBU03:/mnt/disks/OfflineBU04:/mnt/disks/OfflineBU05:/mnt/disks/OfflineBU06:/mnt/disks/OfflineBU07:/mnt/disks/OfflineBU08:/mnt/disks/OfflineBU09 /mnt/addons/BUPool

It appears all disks in the mergerfs pool will still show up under Unassigned Devices; I suspect there's no way to hide these drives from UD? Any thoughts on why I can't use a wildcard when all drives are mounted using the same root disk name? If I have to list every drive on the command line, it's going to get quite long - the pool I want to mount on my production server will have 30 disks. Not sure if unRAID has a limit on command length? Appearing under UD isn't a big issue, other than each visit to the Main tab taking longer to refresh while it waits for the list from UD to populate.
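The wildcard failure above is a shell quirk rather than a mergerfs limit: mergerfs expects all branches as a single colon-separated argument, while the shell expands /mnt/disks/OfflineBU* into ten separate arguments before mergerfs ever sees it. A minimal sketch of building that one argument from a glob (the /tmp/grdemo paths are stand-ins for the real /mnt/disks mounts):

```shell
#!/bin/sh
# Demo branch directories standing in for /mnt/disks/OfflineBU*
mkdir -p /tmp/grdemo/OfflineBU00 /tmp/grdemo/OfflineBU01 /tmp/grdemo/OfflineBU02

# Join the glob matches with colons into one argument
branches=$(printf '%s:' /tmp/grdemo/OfflineBU*)
branches=${branches%:}    # drop the trailing colon
echo "$branches"

# The real invocation would then be:
# mergerfs -o cache.files=partial,dropcacheonclose=true,category.create=mfs \
#   "$branches" /mnt/addons/BUPool
```

This keeps the command the same length no matter how many disks the glob matches, so a 30-disk pool needs no extra typing.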
  9. Agreed. Especially since I really only look at the Dashboard widget when troubleshooting a transcode issue, and that's pretty rare with QSV. I expect I won't even worry about the lack of power draw info as my efforts to reduce power consumption in the homelab are starting to show some fruit already. Thanks for looking into it!
  10. I expected Emby was also working but I don't use it myself. And yes, the test AV1 encodes I've done were all on Windows. Hadn't even looked at it for Linux. Yes, I fully realize that they aren't the same thing. I suspect the Windows focus (possibly better sensor support) is why I was able to pull power draw from GPU-Z. I made the mistake of going to pkgs.org and looking for the generic intel-gpu-tools package, which intel_gpu_top is apparently part of. The package returned has the 2.99.917 version number, but I haven't gone as far as extracting the package to see if intel_gpu_top has been updated. Plus the package appears to only be available as a .rpm so I'll have to run it through rpm2targz to convert it for Slackware. Need to get my test box back in service before I try that. Link here if you want to take a look: https://pkgs.org/download/intel-gpu-tools So ignore my fumbling... 🤔 🤣 And thanks again for your quick response. When do you get time to sleep? 😂
  11. I'm one of those who have access to the 6.13 beta. I installed an A380 a couple of days ago. Intel GPU Top is installed and the /dev/dri/ device has been added to each template and transcoding is working fine with Plex and Jellyfin. Intel GPU Top is installed as the GPU Statistics plugin requires it. Alas the version of Intel GPU Top in the Apps store doesn't appear to have all sensors added to it as power draw is not being displayed for the A380, but it is working on my N100 uSFF PC which uses the UHD770 iGPU. And I know I can get power draw from the A380 as I did test it with GPU-Z on Windows. There's an odd change in version numbering for intel_gpu_top - looks like yours is 1.28 and the latest is only one generation newer but it's listed as version 2.99.917. Kind of an odd bump in version so that needs further investigation.
  12. @ich777 Is there a way of setting the name of the browser tab/window that opens when you open the webgui? I have Krusader installed on all 3 unRAID systems and often will have all of them open at the same time. Being able to set the tab name would make it easier to identify which tab is for which server. I can use a browser extension to right-click and rename, but that's necessary each time the container is started. FYI - the binhex-krusader container can do this by adding an environment variable called WEBPAGE_TITLE and setting its value to the name you want for the tab/window. Any chance of similar functionality in your containers? Thanks!
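For reference, the binhex approach mentioned above is just an environment variable on the container. As a compose-style sketch (image tag, title, and omitted port/volume details are illustrative, not a complete working template):

```
services:
  krusader:
    image: binhex/arch-krusader      # the binhex container referenced above
    environment:
      WEBPAGE_TITLE: "Server1 - Krusader"   # shown as the browser tab title
```

In an unRAID template the equivalent would be adding a variable named WEBPAGE_TITLE to the container's settings.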
  13. As I'm dealing with very erratic power pricing and have limited disability income, I'm getting desperate to combine my two unRAID systems into one. I was hoping that 6.13/7.0 would have the ability to run multiple pools with the unRAID parity scheme, but the 6.13 beta test is just underway, so it'll be a while yet before we see an RC. It's now confirmed that 6.13 will not implement the change to an 'all pool' model, so the idea of using multiple pools with the unRAID parity scheme is moot until a future release implements it. As I need to tackle my power consumption ASAP, I'm looking at 2 options:

1. Create an unRAID VM on my main unRAID system. I have an HBA and USB ports to pass through, so that shouldn't be an issue. It will be for storage only - no other containers or VMs would be run on the unRAID VM. Only the absolutely necessary plugins like the UD series will be installed.

2. Use the mergerfs plugin to create a pool of different-sized devices. Alas, I don't see a way to do this and also implement something like SnapRAID so that there's some fault tolerance. I see multiple SnapRAID containers on Docker Hub, but none are available in the unRAID App store, so I'll have to try and build a container template for it. It looks like the official releases are from https://github.com/amadvance/snapraid

Suggestions or other ideas?
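For anyone weighing option 2: SnapRAID is driven by a plain-text config file, so pairing it with a mergerfs pool mostly means listing the same branch mounts. A minimal snapraid.conf sketch (all paths and disk names here are assumptions, not a tested unRAID layout - the parity disk must be at least as large as the largest data disk):

```
# snapraid.conf - minimal sketch; paths are illustrative
parity  /mnt/disks/BUParity/snapraid.parity
content /mnt/disks/BUParity/snapraid.content
content /mnt/disks/OfflineBU00/snapraid.content
data d0 /mnt/disks/OfflineBU00
data d1 /mnt/disks/OfflineBU01
```

With that in place, `snapraid sync` computes parity and `snapraid scrub` periodically verifies it; since SnapRAID parity is snapshot-based, a sync has to be scheduled (e.g. nightly) rather than happening in real time like the unRAID array.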
  14. I've asked Eluteng, since they are the ones who make the mSATA adapter that has a unique GUID. So far they haven't responded - I asked about the expected lifetime of the mSATA adapters and whether they could confirm that their M.2 to USB adapter also offers a unique GUID.