olympia

Everything posted by olympia

  1. Not sure how to answer. The script runs and stops cache_dirs immediately, and I also get the prompt back immediately. However, 'ps -ef | grep find' shows the actual child 'find' process is still running, and it finishes whenever it finishes, depending on how large the dir is. So killing cache_dirs works like a charm, but the sub-process stays there (PPID changes to 1 after the parent has gone) and does not get killed (see the kill sketch after this list).
  2. No, I didn't. I am aware the adjustments wouldn't have disappeared.
  3. WOW, that was quick, thank you for picking this up and working on it. Unfortunately exactly the same thing is happening with this script. Is there any detail I can provide to help? Seems like I have to take back the part about starting, though: cache_dirs is actually starting now on array start. Weird, I don't think you modified that part, so I am not sure why it was not working before. I have to admit I was not doing long testing and I might have fired too quickly on this.
  4. OK, sure, here we go: the situation seems to be definitely different between a reboot/shutdown and stopping the array. As per your request I initiated a reboot immediately after boot-up, and the shutdown went quickly and cleanly in a few seconds. However, if I just stop the array without shutting down or rebooting, the child 'find' process stays hanging there as described above. I know you don't like me guessing, but I presume that on reboot it was actually the powerdown plugin that killed the child 'find' process, rather than cache_dirs doing it on its own? Not sure how helpful the logs will be here, as there is nothing specific about the killings in them, but I attached logs for both situations. I hope they tell you more than they tell me... I think you will figure it out, but the earlier one is from the rebooting exercise, the latter from the stopping exercise. tower-syslog-20151210-1334.zip tower-syslog-20151210-1345.zip
  5. Uhm, this hurts! I think I described the situation well, and I hope this is just a bad mood today rather than you thinking I am doing stupid things here. Please take into consideration that I am not a newbie and not totally dumb either. I understand the frustration of supporting users who might not know what they are using/doing and whose setup is full of hackery, but I am not that (type of) guy... To clarify: I have NO K or S script at all and I am testing this with a fully clean setup. I was only referencing what happens IF there actually is a K script for force killing. To also answer the question: it takes minutes for find to process my music directory. It doesn't get killed. It stays there fully active, and un-mounting only occurs when the child find process has finished after those few minutes. I will post the log shortly.
  6. Yes, I just installed the plugin, so I couldn't even have an older version.
     I haven't tried shutdown or reboot; I actually only tried to manually stop the array from the UI. I understood it should also work in this case? Or only at shutdown and reboot? ...but if I have the K script with forced kill of finds, then it's OK even on a manual stop via the UI without shutdown or reboot.
     Knowing what is holding back the un-mounting is easy and fully clear: I perform a 'ps -ef' from the console and I still see the actual child 'find' process running. The situation is definitely better than before, because before the update even cache_dirs was still running, which is not the case now - maybe that's why you are experiencing better behavior now. If you stop/shutdown at a time when the child 'find' process is processing a smaller dir, then it finishes relatively quickly and un-mounting occurs. However, if you perform the stop/shutdown at a moment when find is processing a large dir (e.g. in my case "Music"), then you can clearly see the find process doesn't get killed.
     Strange, if I stop the array manually and cache_dirs gets killed, it doesn't get restarted for me when I start the array manually. I will raise this in the cache_dirs forum.
     Would that still be needed in the case described above? For me it is clear it is the child 'find' process: if I kill that process manually from the console, un-mounting occurs immediately. Thanks for the support again!
  7. Hi, I am using this plugin to auto-mount and share my permanent SSD apps drive. Thank you for the great tool! I have a question about the sharing: should share=on survive reboots? In other words, should it automatically re-share the drive after rebooting? I am asking because auto-mounting works like a charm, but I need to manually turn on sharing after each reboot. Is there a setting somewhere that I am missing for this? Thank you in advance for the support! Edit: actually I have the feeling it sometimes is auto-resharing the drive and sometimes not. Not sure about that though.
  8. OK, I take your advice to move on and use the plugin. I just tested it, but killing the child 'find' processes still doesn't seem to happen. Apparently the cache_dirs process gets killed, so unmounting occurs after the active find process finishes. However, if it is a long one, then it can still take a lot of time before unmounting is possible. I understood from you that the active child 'find' processes should be killed by now as well, and not only cache_dirs itself? What I did for testing is that I tried to stop the array immediately after booting up (so the finds were still active). Thank you for your help on this! ...shall we actually move/continue discussing this in the cache_dirs thread? Do you also have some hints on how cache_dirs could be started on array start? I mean not during boot, because that's working fine, but during a manual start after a manual stop.
  9. Good question. I think it is more for sentimental reasons; out of respect for Joe L. ...but you are right, probably it's time to move on. Great! Thank you for your support once again! Let's cross our fingers that Bonienl gets your proposals in. I guess it is the cache_dirs plugin that will need to be amended? I am now using the K script approach as a workaround and it works well (see the K script sketch after this list)! Thanks ICDeadPpl for suggesting this! Another slightly related question: how do you start it again on "Start"? I tried to use an S script for that (the same command I had in the go file), but the array was actually hanging on start and I had to remove the S script and reboot.
  10. Many thanks dlandon for looking at this! I actually don't use the cache_dirs plugin, but the original script started from the "go" file. If I understand your suggestions to Bonienl correctly, then if he implements them, one would need to use the plugin to get it right, is that correct? Also, my understanding is that you are proposing to wait until all child 'find' processes stop, regardless of how much time this takes. This would mean _minutes_ would be spent just waiting for these processes to stop. Wasn't there a solution to make them stop immediately? I tend to go with the solution ICDeadPpl suggested if there is no harm in it, and I believe there is none?
  11. Hi dlandon, would it be possible to implement some special handling for cache_dirs to ensure a clean shutdown when cache_dirs still has active child processes (finds)? It happens to me every now and then (...and I would think to others as well) that I need to stop the array shortly after booting up. When cache_dirs still has active 'find' processes, unmounting of the disks is not possible, and I either need to wait a couple of minutes until the active find processes finish or I need to telnet in and kill them manually. Would that be somehow possible via the powerdown plugin? Thank you for the help (and for the great plugin) in advance!
  12. I bought the x-case RM420 case from pras1011 and it was shipped super fast (the same day), and he even found a good courier at a reasonable price. The case is in good condition, as advertised.
  13. Yes, this is exactly what happened. Fortunately it was only 1 disk, so a rebuild from parity could have helped. The corrections were not at the very beginning, but somewhere in the first half of the parity check. I think if the HPA crap had been written to the very end of the disk, then the corrections should have been at the very end of the parity check, no? It is only 128k (258 sectors × 512 bytes ≈ 129 KiB), so the size is not much, but if it is all spread over the disk, then in the worst case it could impact 258 files (since it was 258 sectors), or am I mistaken here? ...anyhow, do you think undoing corrections would be a useless feature, or not feasible at all?
  14. I believe it has happened to many of us that we only realized after performing a correcting parity check that it should have been run as non-correcting. This just happened to me today for the following reason:
     - After a BIOS reset on my Giga-Byte MB I forgot to disable HPA
     - Sure enough, after the first boot one of my disks had HPA enabled, coming up as a wrong disk in unRAID
     - I disabled HPA in the BIOS and removed HPA from the drive (see the hdparm sketch after this list)
     - unRAID started up nicely on the next boot
     - Without thinking twice (i.e. it was a stupid move) I started a correcting parity check, which completed with 258 error corrections - destroying the correct parity (I immediately realized that I should have rebuilt the disk after removing HPA instead of running a parity check)
     Now since parity has got corrupted, there is no way that I am aware of to get this corrected. I actually don't even know if it would somehow be possible to identify which files got corrupted (maybe based on the corrected sector numbers?). I imagine the number of affected files is between 1 and 258. I don't know how feasible it would be, but it would be a great feature if one could undo parity corrections right after the parity check. I think this should be - at least logically - possible as long as no new write has occurred to the array?
  15. Confirmed, it is all working perfectly again! Thank you sparklyballs! Your contribution here is priceless!
  16. This is driving me crazy. I rolled back to RC3 and it is the same. Can someone please confirm that the latest version of the docker works with unRAID RC4?
  17. I haven't run this docker for a while, but I just updated to the latest, and now when I try to connect everything looks good: I can select Filezilla on the Guacamole interface or via RDP and the session seems to start, but then I only get a blank black screen. Then I go back to the Guacamole interface and I see Filezilla running in the thumbnail, but when I click on it, once again I get a black screen. I tried it on another unRAID and it is the same. Is this only me, or did something go wrong either with the latest update or maybe something RC4-related, as I updated to that at the same time just now? Thank you for any help in advance.
  18. Thank you for the hint! However, I am a bit lost with where/how this should be added. I edited "my-Filezilla.xml" in the /boot/config/plugins/dockerMan/templates-user/ directory and replaced <Environment/> with <Environment>WIDTH=1280 HEIGHT=720</Environment> ...but this didn't have any impact on the resolution, so I guess either I edited the wrong file or I edited it incorrectly? Could you please help me out here? Thank you in advance! I don't know off the top of my head what environment variables look like in the XML, but I know that wouldn't work. In any case, it is not necessary to directly edit the file. Just do it in the docker manager. You have to turn on Advanced View (a slider in the top right) to get the section for environment variables. Ahh, alright, got it! Thanks! It works now (see the docker run sketch after this list for the CLI equivalent); strangely, Filezilla fonts and windows still look big even at the higher resolution, so it's not a lot that fits onto one screen, but hey, it is still more than amazing.
  19. Thank you for the hint! However, I am a bit lost with where/how this should be added. I edited "my-Filezilla.xml" in the /boot/config/plugins/dockerMan/templates-user/ directory and replaced <Environment/> with <Environment>WIDTH=1280 HEIGHT=720</Environment> ...but this didn't have any impact on the resolution, so I guess either I edited the wrong file or I edited it incorrectly? Could you please help me out here? Thank you in advance! Edit: in the meantime I found sparklyballs' updated filezilla.xml, so I updated my-Filezilla.xml with the below, but this doesn't make any difference either. What am I missing here?
      <Environment>
        <Variable>
          <Name>WIDTH</Name>
          <Value>1280</Value>
        </Variable>
        <Variable>
          <Name>HEIGHT</Name>
          <Value>720</Value>
        </Variable>
      </Environment>
  20. Right click the docker icon, select edit, and in the port section add 3389 on the host and container side. Save out the changes. If you already have port 3389 assigned to another docker, change the HOST SIDE ONLY to something else and use that port in whatever you connect with, keeping the container side as 3389. Great, it works! Thank you for the quick pointer! One more question though: is there any way to increase the resolution? The base image developer has another branch that has better resolution, but I'm holding off on it till he gives the OK on stability. OK, fingers crossed then, thank you once again!
  21. Right click the docker icon, select edit, and in the port section add 3389 on the host and container side. Save out the changes. If you already have port 3389 assigned to another docker, change the HOST SIDE ONLY to something else and use that port in whatever you connect with, keeping the container side as 3389. Great, it works! Thank you for the quick pointer! One more question though: is there any way to increase the resolution?
  22. This is indeed fantastic, sparklyballs! Thank you very much for this! One question: is it also possible to connect to the Filezilla docker with a non-browser-based RDP tool? ...mainly for better resolution handling.
  23. Hmmm. I am using unRAID b614b with the latest apcupsd plugin. Maybe it depends on versions then.
  24. Apctest is actually part of the apcupsd plugin, so you don't need to download and install a separate package (see the apctest sketch after this list) ;-)
  25. Hmmm, does this mean that the 10% load I have when all HDDs are spun down in my unRAID server is an issue for me, in the sense that it increases the consumption of the UPS itself?
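
The kill sketch (posts 1, 5, 6 and 11): a minimal sketch, assuming the running cache_dirs parent can be located by name with pgrep, of killing the child 'find' scans before the parent so they don't get orphaned (PPID 1). The variable name is illustrative and this is not the plugin's actual code:

    #!/bin/bash
    # pgrep -f matches the full command line, -o picks the oldest match,
    # which is assumed here to be the parent cache_dirs script.
    CD_PID=$(pgrep -o -f cache_dirs)

    if [ -n "$CD_PID" ]; then
        pkill -TERM -P "$CD_PID" find   # kill the child 'find' scans first...
        kill -TERM "$CD_PID"            # ...then cache_dirs itself
    fi

    # Verify nothing is left holding the disks (what post 6 checks by hand):
    ps -ef | grep '[f]ind'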
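
The K script sketch (posts 5 and 9): a hedged example of what the force-kill K script workaround could look like. The /boot/config/plugins/powerdown/ location and the Kxx.sh naming are assumptions about how the powerdown plugin picks up user stop scripts, so check the plugin's documentation before copying this:

    #!/bin/bash
    # Hypothetical stop script, e.g. /boot/config/plugins/powerdown/K05.sh,
    # run at array stop: force-kill cache_dirs and any leftover 'find' scans
    # so the disks can be unmounted without waiting.
    pkill -f cache_dirs 2>/dev/null
    sleep 1
    pkill -9 find 2>/dev/null   # blunt: kills every 'find' on the box, acceptable at stop time
    exit 0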
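
The hdparm sketch (post 14): HPA can be inspected and removed with hdparm's -N option. A minimal sketch with /dev/sdX as a placeholder for the affected drive; double-check the reported native max before writing anything, and note this only restores capacity, it does nothing about the already-corrected parity:

    # Show current max sectors vs. native max sectors and whether HPA is enabled
    hdparm -N /dev/sdX

    # Restore the full native capacity; the 'p' prefix makes the change permanent.
    # Replace NATIVE_MAX with the native max sector count reported by the command above.
    hdparm -N pNATIVE_MAX /dev/sdX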
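
The docker run sketch (posts 18 and 20): the docker manager's Advanced View, the template XML and the plain docker CLI are just different ways of passing the same port mapping and environment variables to the container. A rough CLI equivalent is below; the image name is a placeholder rather than the actual repository, and WIDTH/HEIGHT are the variables the base image is said to honor in the posts:

    # -p maps the RDP port (change the host side if 3389 is already taken by another docker)
    # -e WIDTH / -e HEIGHT set the desktop resolution
    docker run -d --name Filezilla \
      -p 3389:3389 \
      -e WIDTH=1280 \
      -e HEIGHT=720 \
      sparklyballs/filezilla   # placeholder image name, check the actual repository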
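
The apctest sketch (post 24): apctest ships with apcupsd and is run interactively from the console, with the daemon stopped first so the tool gets exclusive access to the UPS. The rc script path below is an assumption based on the usual Slackware layout; the unRAID plugin may provide a different way to stop the daemon:

    /etc/rc.d/rc.apcupsd stop    # stop the daemon so apctest can talk to the UPS
    apctest                      # interactive menu: battery calibration, EEPROM settings, etc.
    /etc/rc.d/rc.apcupsd start   # restart monitoring afterwards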