
olympia

Members
  • Posts

    458
  • Joined

  • Last visited

Everything posted by olympia

  1. Hi Joe L., thank you for your response. I am not doing any extraordinary hacking when I invoke the command either; in fact, I am also only doing a couple of excludes and nothing else. However, I have 4GB of RAM and a diverse dir/file structure. cache_dirs is working perfectly: the initial find stops after a while, and then there is no issue with shutting down the array. But that "after a while" takes about 5-15 minutes for the initial scan (I never measured it), and for my friend, who has a really insane structure, it is more like 15-25 minutes. This means that if I want to stop the array for whatever reason during this initial scan, I have to telnet in and kill the find command(s) first. So, given that this period can be as long as 25 minutes, I thought it would be handy if cache_dirs -q did that kill on its own.
  2. Hi Joe L., was it such a dumb question that you don't have any feedback on it, or did you miss this one? Cheers!
  3. Hi Joe L., not sure whether this has already been brought up by someone, but I think it hasn't. When you stop cache_dirs with 'cache_dirs -q', it only terminates the cache_dirs instance running in the background; it does not kill any still-running 'find' command. So if you try to stop the array soon after starting cache_dirs, the running 'find' command(s) can keep drive(s) busy for quite some time, resulting in a long 'unmounting' phase on the unRAID interface unless you kill the 'find'(s) manually. Could killing the 'find' command(s) be added to 'cache_dirs -q' so that it can shut down gracefully? Thank you for your feedback in advance.
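For anyone hitting the same problem before it is addressed in cache_dirs itself, the manual telnet-and-kill step can be scripted. This is only a sketch, not part of cache_dirs: it assumes the stray scans show up in ps simply as 'find' processes, and it kills every one of them, so don't run it while an unrelated find of yours is in flight.

```shell
# Hypothetical "stop everything" helper, not part of cache_dirs itself:
# stop the background daemon, then kill any 'find' scans still running
# so the array can unmount without a manual telnet session.
cache_dirs -q 2>/dev/null || true   # stop the daemon if it is running

# Kill leftover 'find' processes found via ps (pid + command name only):
for pid in $(ps -eo pid,comm | awk '$2 == "find" {print $1}'); do
    kill "$pid" 2>/dev/null
done
```

The `ps -eo pid,comm` form matches on the bare command name, so it won't accidentally match its own awk pipeline the way a `ps | grep find` would.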
  4. Hey ClunkClunk, do you have a recent HandBrakeCLI SVN revision packaged for unRAID by any chance?
  5. First post updated; I will finish the rest once I get home. Yes, if you check the link above for the drive enclosure, you will see how it slides out.
  6. Here is mine. If you are interested in the parts, let me know.
     Chassis: Lian-Li Armorsuit PC-P50 http://www.lian-li.com/v2/en/product/product06.php?pr_index=321&cl_index=1&sc_index=25&ss_index=62
     Drive enclosure 4in3 Nr.1 (top): Lian-Li EX-34N http://www.lian-li.com/v2/en/product/product06.php?pr_index=321&cl_index=1&sc_index=25&ss_index=62
     Drive enclosure 4in3 Nr.2 (middle): Scythe Hard Drive Stabiliser X4 http://www.scythe-usa.com/product/acc/042/scyhdsx4-detail.html
     Drive enclosure 4in3 Nr.3 (bottom): Lian-Li EX-H34 http://www.lian-li.com/v2/en/product/product06.php?pr_index=285&cl_index=2&sc_index=5&ss_index=71
     Motherboard: to be added
     CPU: Intel® Celeron® CPU E3300 @ 2.50GHz
     RAM: 3GB
     CPU Heatsink: Thermalright HR-01 Plus http://www.thermalright.com/new_a_page/product_page/cpu/hr01plus/product_cpu_cooler_hr01plus.htm
     Chipset Heatsink: Thermalright HR-05 http://www.thermalright.com/new_a_page/product_page/chipset/hr05/product_chitset_cooler_hr05.htm
     Drives:
       Parity: WD Caviar Green WD20EARS 2TB
       Data: 7 x WD Caviar Green WD15EADS 1.5TB, 1 x WD Caviar Green WD10EADS 1.0TB, 2 x WD Caviar Green WD10EACS 1.0TB, 1 x SAMSUNG_HD103UJ 1.0TB
       Cache: WD Caviar Blue WD3200AAKS 320GB
     SATA Controllers: Adaptec 1430SA (4x SATA), generic Sil 3124-based SATA II controller (2x SATA)
     PSU: Seasonic S12-430
  7. That was it, Joe L.! Thank you very much! It works perfectly this way. Now, just out of curiosity: if this is a memory issue, shouldn't it also happen with your original script when all the ISOs are in the same dir?
  8. Hi Joe L., still abroad, but I had some time and a connection to try it out, and now I am even more confused. It seems that this way the handbrake command gets written to /tmp/iso_hb not for the first ISO (as I would expect from the above), but for the last one. It doesn't make any sense to me and I feel really dumb, but please see the attached screenshot. The script lists all 3 ISO files I have in that test dir, but it only writes the handbrake command for the last one to /tmp/hb_iso. I hope everything will be visible on the screenshot.
  9. Thanks Joe L.! Sorry for the late response, but I am abroad for a week with very limited Internet access. I will try out your suggestion when I am back home. (Concerning the memory issue: I think 3GB should be enough for my test dir with only 3 ISOs, so I guess it has to be something else.) Thank you again!
  10. I am really not able to figure this out. I have tried many combinations without success. If I copy a few ISOs over to a single dir and use your very original script from the first page, it works perfectly and converts all the ISOs in the dir. But if I put the same ISOs into a subdir and run the script above, it always converts only the ISO in the first dir. More weirdly, if I put an echo command before the HandBrake command line and run the script, it correctly lists the HandBrake command for all the ISOs in all the subdirs. I've also noticed that in the first case (all ISOs in the same dir, original script), if I hit ctrl-c to cancel the script, it only cancels the current conversion and the script starts the next one in the queue, so I have to hit ctrl-c as many times as there are ISOs in the queue. If I use the "new" script on a subdir structure and hit ctrl-c, the script exits immediately (it does not start the next ISO in the queue). Is this something coming from the different behaviour of for and while?
  11. Sorry Joe L., obviously I didn't expect you to see exactly what's wrong here; I just used the wrong expression. ...and I wasn't so clear either. I meant that when I run the "find /mnt/user -name "*.ISO" -print" command separately, all the ISOs in the target folder are found recursively. Now I've tried the "echo" debug step to check what the script would invoke, and it also looks perfect: it lists the command for all the ISOs. But when I actually run the script, it still exits after the first ISO. So the script is perfect, thank you for it again; the problem is on my end, and now it's my turn to do my homework and figure out what's going on here. Thank you!
  12. Hi Joe L., I have tried it, and it seems that on my end it only converts the first result of "find /mnt/user -name "*.ISO" -print | while read filename". If I run the find command on its own, it finds all the ISOs recursively in the target dir, but the encode gets done only on the first item, after which the prompt returns:
      Rip done! HandBrake has exited.
      /mnt/user/TargetDir/FirstItem.iso completed
      root@Tower:/boot/custom/HandBrakeCLI#
      I haven't modified anything in the script you created. Am I doing something wrong?
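For what it's worth, one classic cause of exactly this symptom (only the first item of a `find | while read` loop is processed, and ctrl-c kills the whole loop) is a command inside the loop reading from the same stdin that `read` uses, swallowing the rest of find's output. Whether that is what happened in this thread is a guess; the demo below uses `cat` as a stand-in for HandBrakeCLI to show the mechanism:

```shell
# A stdin-reading command inside the loop eats the remaining filenames:
printf 'one.ISO\ntwo.ISO\nthree.ISO\n' | while read filename; do
    cat > /dev/null               # stands in for HandBrakeCLI; drains the pipe
    echo "converted $filename"
done
# prints only: converted one.ISO

# Redirecting the inner command's stdin from /dev/null fixes the loop:
printf 'one.ISO\ntwo.ISO\nthree.ISO\n' | while read filename; do
    cat < /dev/null > /dev/null   # can no longer steal find's output
    echo "converted $filename"
done
# prints all three lines
```

In a real script the fix is the same shape: `HandBrakeCLI ... < /dev/null` inside the loop.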
  13. Thank you very much Joe L.! As always, you are more than helpful! Cheers, Bence
  14. Hi Joe L., would it be an easy and quick exercise for you to help me out with modifying your script to be recursive on, say, a hard-coded path, and to always save the converted file in the same folder as the original ISO? Or would that be a more difficult task than just quickly drafting it down? Thank you!
  15. Thank you for the feedback. Yes, this could potentially work (I think), but the bigger problem is with the number of audio tracks. I want to preserve all of them, i.e. if there are 2 or 3 tracks, then all 2 or 3, but of course a general command would be needed in a batch process. Unfortunately, if I set -a 1,2,3, HandBrakeCLI exits when the input file doesn't have exactly 3 tracks. This means you always have to give the correct number of tracks for this parameter, which of course you don't know in a batch process. The scan-first-then-parse-the-results approach was suggested on HandBrake's IRC channel, but my scripting knowledge is close to 0, so I am not able to accomplish this...
  16. I've just started looking into re-encoding a large batch of documentary DVDs I have. @ClunkClunk, thank you for the precompiled packages. My original intent was to re-encode all the DVDs to mkv with x264 at whatever quality, but I would like to preserve all the audio tracks in their original format (i.e. keep them as they are). It seems that since there is no such parameter, this is not a simple task. I figured the only way to do this is to do a scan first, then save and parse the output to set the correct number of tracks and formats during the encode. Has anybody experimented with something like this yet? Thank you for the feedback.
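A minimal sketch of that scan-then-parse idea. The commented-out scan invocation and its grep pattern are assumptions about HandBrakeCLI's scan output (check them against what your build actually prints); only the small track-list helper is exercised here:

```shell
# Scan step (assumed output format -- adjust the pattern for your build):
#   ntracks=$(HandBrakeCLI -i "$iso" --scan 2>&1 | grep -c '+ audio track')

# Turn a track count into the comma-separated list that -a expects,
# so a 3-track disc gets "-a 1,2,3" and a 2-track disc gets "-a 1,2":
track_list() {
    seq -s, 1 "$1"
}

track_list 3   # prints 1,2,3
```

With those two pieces, each disc in the batch gets `-a "$(track_list "$ntracks")"` instead of a hard-coded track list, which avoids the exit-on-missing-track problem.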
  17. Ooops, yes, sure, it belongs here. Sorry for messing this up. Can clearing with the -n option be considered safe against undelete/unerase/unformat tools?
  18. Hi Joe L., I am wondering whether it would be easy for you to include an option for completely wiping the disk instead of preclearing it. Or can the preclearing method in its current form be used for this purpose? (At the very least, the read-back seems unnecessary in this case.) That would be extremely useful on a disk replacement, when the old disk is going to be sold. Thank you in advance for your feedback.
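Outside of preclear, the usual generic approach to "completely wiping" a disk before selling it is a one-pass zero fill with dd. This is only a sketch: /dev/sdX is a placeholder for the disk to destroy, and the runnable part below demonstrates the same operation on a small scratch file instead of a real device.

```shell
# On a real disk this would be (irreversible -- triple-check the device!):
#   dd if=/dev/zero of=/dev/sdX bs=1M

# Safe demonstration on a 1 MB scratch file instead of a device:
truncate -s 1M /tmp/fakedisk
dd if=/dev/zero of=/tmp/fakedisk bs=64K count=16 conv=notrunc 2>/dev/null
# every byte of /tmp/fakedisk is now zero
```

A single zero pass is generally enough to defeat software undelete/unformat tools, since those only read what the drive returns.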
  19. Quoting Joe L.:
      "if you are running cache_dirs, then that is it. The preclear script reads/writes the entire disk being pre-cleared in a way that is pretty much guaranteed to make the directory entries cached by cache_dirs end up as the least recently accessed blocks, which are then returned to the pool of blocks available for use as disk cache. The cache_dirs script can only work if it can access the directory entry "blocks" more frequently than other disk blocks in your array are used. The preclear script accesses the disk being cleared far faster than normal use such as playing a movie or scanning directories, so I can easily see its blocks ending up cached and more recently accessed than any from the directory scans. (Remember, the oldest/least-recently-used blocks in the buffer cache are the ones re-used for current access needs.) The solution... cancel cache_dirs, or live with the fact that it is doing its job, trying to keep the directory listings of the shares on your server as responsive as possible. If you kill cache_dirs, the other disks will spin down in an hour or so, and you will have to wait for directory listings until they spin up. Joe L."
      Thank you for the comprehensive explanation Joe L. That's fully logical, except I thought cache_dirs was only doing its job on the array and the cache drive. That's why I thought a drive that sits outside the array could not affect it. But now I know I was wrong. Thank you again for both great scripts!
      Quoting Joe L.:
      "cache_dirs is only scanning the disks in the array and the cache drive... your thinking was correct. However, the disk buffer cache is shared by all disks, regardless of whether they are assigned to the array or not. That same buffer cache is used by anything accessing the disks. If the preclear script uses disk buffer blocks faster than the cache_dirs script can re-scan a given directory, the cache_dirs script must re-read the directory from disk. There is only one set of buffer memory for disk I/O; it is shared by everything. Joe L."
      Now I fully understand. Thank you very much Joe L.!
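The shared pool Joe L. describes is visible directly on the server: free reports a single system-wide buffers/cache figure, not a per-disk one, which is why heavy I/O on a disk outside the array can still evict the directory entries cache_dirs keeps warm. A quick way to watch it (assuming a standard Linux free from procps):

```shell
# One shared buffer cache for all disks -- the buffers/cache numbers
# here are system-wide, so a preclear on a non-array disk competes for
# the same memory that holds cache_dirs' directory entries:
free -m
```

Running this before and during a preclear shows the cached figure climbing as the clear's blocks displace everything else.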
  20. Quoting Joe L.:
      "if you are running cache_dirs, then that is it. The preclear script reads/writes the entire disk being pre-cleared in a way that is pretty much guaranteed to make the directory entries cached by cache_dirs end up as the least recently accessed blocks, which are then returned to the pool of blocks available for use as disk cache. The cache_dirs script can only work if it can access the directory entry "blocks" more frequently than other disk blocks in your array are used. The preclear script accesses the disk being cleared far faster than normal use such as playing a movie or scanning directories, so I can easily see its blocks ending up cached and more recently accessed than any from the directory scans. (Remember, the oldest/least-recently-used blocks in the buffer cache are the ones re-used for current access needs.) The solution... cancel cache_dirs, or live with the fact that it is doing its job, trying to keep the directory listings of the shares on your server as responsive as possible. If you kill cache_dirs, the other disks will spin down in an hour or so, and you will have to wait for directory listings until they spin up. Joe L."
      Thank you for the comprehensive explanation Joe L. That's fully logical, except I thought cache_dirs was only doing its job on the array and the cache drive. That's why I thought a drive that sits outside the array could not affect it. But now I know I was wrong. Thank you again for both great scripts!