Fireball3

Everything posted by Fireball3

  1. Did you try other PCIe slots as well? Maybe another mainboard, just to rule out incompatibilities with this board. The wiki says the H240 is a plain host bus adapter, so unRAID should find the card and the attached drives. The card will most likely not show up in the mainboard BIOS at all. If it has a BIOS installed, it might announce itself during POST with its own screen (checking for drives or something like that). The tools for flashing the card should find it in any case, though.
  2. See both posts above yours. There is no need to disable auto update. Consider the arberg repository the "official" solution as long as Alex is maintaining the plugin and the script. I'm not sure whether @limetech considers this "hack" a feature that belongs in core unRAID. But I agree: without it, the benefits of spin-down come at the expense of usability.
  3. Quoting @Squid: This means the @Alex R. Berg "fork" of the plugin is not checking the @bonienl repository for updates, and vice versa. If my understanding is correct, it should be safe to update the plugin as long as it checks out from the arberg repository.
  4. Guys, why don't you give it a try? After replacing the files you need to reboot - that's right. The script version is 2.2.0d. I'm not sure what's displayed in the GUI - I need to check when I'm home. 2018.09.30 should be the right version of the dynamix plugin shown in the GUI. If the plugins page shows a newer version, ignore it! It might bring some GUI changes, but the script file 2.2.0d is not included as of today. Once you're on the arberg fork of the plugin, you can update from the GUI as usual. There is a check box to enable scanning of user shares, so there is no need for additional user options.
     # - Added disabling of multithreaded scan of disks, flag -T
     # - Added diagnostics generate zip, flag -L (usage: cache_dirs -L)
     # - Added -P print file count flag (usage: cache_dirs -P -i your_share_name)
     I'm not sure what your situation is. If you are using the posted files, you might be facing some other issue. In that case, please go back a few posts and try the troubleshooting instructions that @Alex R. Berg posted for me. If it's still buggy, post the diagnostics here so Alex can look into it.
  5. Yes, it was me. Thanks for your help - the issue is solved in my opinion. Here is the post containing the files. Copy the txz to \\tower\flash\config\plugins\dynamix.cache.dirs and replace the plugins\dynamix.cache.dirs.plg. Rename your existing files for backup purposes (a rough sketch follows at the end of this list). There is a lot of confusion about the dynamix plugin frontend and the underlying shell script. I wish the dynamix plugin would be updated asap; apparently more and more people are facing this issue.
  6. There are several conditions you can choose in the plugin settings. Drive activity is one of those.
  7. It is possible to work with public/private keys. Of course, setting that up is a bit of work. I'm sure there are also some how-tos on this forum; a rough sketch follows at the end of this list.
  8. Maybe something like plink.exe -ssh -pw yourpassword root@yourIP "echo -n mem > /sys/power/state" - you have to make sure the whole expression is passed and parsed correctly (mind the quoting). Edit: Another possibility is pasting the expression into a file, making it executable (+x) and calling that file with plink (see the sketch at the end of this list).
  9. Try this one instead of the "powerdown": echo -n mem > /sys/power/state If it doesn't work, use it as a search term on this forum and you'll find a couple of answers.
  10. Yes indeed, I see. Unfortunately it's not much better. I must have mixed up the forums; there are solutions on other forums where it does make a difference. Sorry for bothering you.
  11. @interwebtech Would you mind wrapping your posted code into "code" tags? It really improves readability! Thank you!
  12. That one is the known bad version. The latest work in progress is here; this one is working very well. I don't know when Alex will release it into the plugin, but you can exchange the files on your USB stick manually and give it a try (effective on the next reboot). Some more testers are welcome.
  13. Yes, this seems to work. The drives are not spinning up any more. I will monitor whether it's reproducible.
  14. I ran find against all disks and everything was cached as it should be. Then I accessed the share and the 3 known (XFS) drives spun up. (A rough sketch of such a check follows at the end of this list.)
  15. There is a version that has not yet been committed to the repository, as there are just a few testers. Please feel free to test and give feedback in this thread.
  16. Not sure if it matters, but the disks that spin up are XFS while the others are ReiserFS.
  17. I guess it won't hurt if he gives some feedback as well before committing. Would you please read the addition to my last post? I edited it while you were typing.
  18. OK, I can adjust that. Is the new version available via the plugin update? A friend of mine also updated the plugin and then it wouldn't start. I could tell him to update again since the bug seems to be fixed. Edit: I suspect some content is not being cached. Is there a way to check which drives are cached and which are not? When I'm accessing the share - just browsing - it pauses to spin up some disks. I checked the GUI and found only 3 disks spinning, but the share definitely spans more than those 3 disks.
  19. I tried to explain the solution in this post.
  20. OK, copied the files and rebooted the server. Checked the plugin status in the GUI and...tadaaaa...it's running. Attached a cache_dirs_diagnostics. find_defunct.sh is now also running, just in case. cache_dirs_diagnostics_30.09.2018.zip
  21. No, the dynamix plugin didn't start at start-up. The cache_dirs -L was the only instance running. Your modified script, invoked with ./cache_dirs -i Video -e Audio -e Backup -e Fotos -e Spiele -l on, didn't block the shutdown. I will confirm with further runs. How do I install the modded version instead of the one from the dynamix suite? And finally, what's wrong with the autostart of the plugin?
  22. Here I am. I updated the diagnostics in this morning's post - I can't see anything of use. Attached to this post is the first cache_dirs_diagnostics together with a screenshot of the cache_dirs GUI. I'm not sure whether the log generated with -L shows the running plugin or only the instance that is pulling the log data!? Therefore I attached the ps and pstree output (see the sketch at the end of this list). cache_dirs -q says it is not running, which means the GUI is not lying. I fired up the modified version as planned:
      ./cache_dirs -i Video -e Audio -e Backup -e Fotos -e Spiele -l on
      The second file shows the logs right after the script start. The GUI is also showing the plugin status as "running". The defunct.sh is also running. I think I managed to stop the mover & logger from spamming the log. The mover settings had logging enabled:
      # Generated mover schedule: 0 0 1 * * /usr/local/sbin/mover |& logger
      Once it is set to disabled, the file \flash\config\plugins\dynamix\mover.cron contains:
      # Generated mover schedule: 0 0 1 * * /usr/local/sbin/mover &> /dev/null
      It seems there is no switch to fully disable the mover. cache_dirs_diagnostics.zip cache_dirs_diagnostics_arberg_mod_28.09.2018_2252.zip
  23. OK, thanks for your effort! Here is the plan:
      1. Check if the GUI is lying: execute ps and pstree right after reboot to have evidence.
      2. Kill all running instances of cache_dirs: ./cache_dirs -q
      3. Run the one you posted above:
         - without -T (wait until it locks up again, dump diagnostics):
           ./cache_dirs -i Video -e Audio -e Backup -e Fotos -e Spiele -l on
         - with -T (see if it locks up again, dump diagnostics):
           ./cache_dirs -i Video -e Audio -e Backup -e Fotos -e Spiele -l on -T
         - without -T but with the memory limit -U 100000, if it still locks up:
           ./cache_dirs -i Video -e Audio -e Backup -e Fotos -e Spiele -l on -U 100000
      I have to check the log, but my mover has no timed job; I fire up the mover by hand if necessary.
      Edit: I picked the wrong diagnostics.zip in the hurry. The file is from yesterday evening, 20:14...
      Edit2: I was up late yesterday. By that time the S3sleep plugin should have shut down the server, but it hadn't. That was suspicious, though the CPU wasn't locked up at that point. I will post the correct diagnostics dumped this morning when I'm home later. Maybe there is some clue in them - the S3sleep plugin logs when a condition is preventing shutdown.
  24. Here is a set of new logs. This morning one CPU core was locked at 100%. I will comment on your post later. diagnostics_28.09.2018.zip tuerke-diagnostics-20180928-0734.zip
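
For the manual file swap described in post 5, here is a rough sketch from the unRAID console, assuming the flash share \\tower\flash corresponds to /boot on the server; the source paths and the name of the existing txz are placeholders for whatever Alex posted and whatever version is currently installed:

    # back up the existing plugin definition and package, then drop in the new ones
    cd /boot/config/plugins
    mv dynamix.cache.dirs.plg dynamix.cache.dirs.plg.bak
    cp /path/to/new/dynamix.cache.dirs.plg .
    cd dynamix.cache.dirs
    mv dynamix.cache.dirs.txz dynamix.cache.dirs.txz.bak
    cp /path/to/new/dynamix.cache.dirs.txz .
    # reboot afterwards so the replaced package actually gets installed

Renaming instead of deleting keeps an easy way back if the new files misbehave.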
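Regarding post 7, a minimal outline for getting rid of the plain-text password in the plink call: create a key pair with puttygen.exe on the Windows side, append the public key to /root/.ssh/authorized_keys on the server, and let plink authenticate with the private key instead of -pw. The key path below is just an example:

    plink.exe -ssh -i C:\keys\unraid.ppk root@yourIP "echo -n mem > /sys/power/state"

Keep in mind that on unRAID, changes under /root may not survive a reboot unless they are restored from the flash drive (e.g. via the go script or a plugin).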
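The file-based alternative from post 8 could look roughly like this on the server side; the file name and location are just examples (on unRAID, /boot is the flash drive, so the file survives reboots):

    # put the sleep command into a small script
    echo '#!/bin/bash'                    >  /boot/sleep.sh
    echo 'echo -n mem > /sys/power/state' >> /boot/sleep.sh
    chmod +x /boot/sleep.sh

The Windows side then calls the script instead of the quoted expression:

    plink.exe -ssh -pw yourpassword root@yourIP "bash /boot/sleep.sh"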
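A rough version of the check from posts 14 and 18 - walk one share on every data disk and watch the Main page to see which drives wake up. It assumes the standard /mnt/disk* mount points; "Video" is just the example share from the cache_dirs calls above:

    # if the directory tree is fully cached, this should finish without spinning up a drive
    for d in /mnt/disk*/Video; do
        [ -d "$d" ] && find "$d" > /dev/null
    done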
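And for the "is the GUI lying" check from posts 22 and 23, something along these lines shows whether a cache_dirs scan loop is actually alive, independent of what the plugin page reports:

    # list any running cache_dirs processes and where they sit in the process tree
    ps -ef | grep '[c]ache_dirs'
    pstree -ap | grep -i cache_dirs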