yodine

Members

  • Posts: 75
  • Joined
  • Last visited

About yodine

  • Birthday 03/21/1970

Converted

  • Gender
    Male
  • URL
    http://www.maddvdhd.com/
  • Location
    France
  • Personal Text
    Gigabyte GA-EP35-DS3R (B-)

yodine's Achievements

Rookie (2/14)

0 Reputation

  1. (Quoting a previous poster: "This seems about right. How much RAM? My system pulls 160-200W when 9 drives are spinning, but I also have 8GB of RAM and an i-RAM ramdisk.") I have 2x1GB of RAM, but I am thinking of switching to 2x2GB so the file structure stays in cache more.
  2. I have 12 drives and a Core 2 Duo 3GHz, and the maximum wattage it drew was 325W, of course at startup. With all drives awake it eats less than 200W, and it goes down to 79W when all the drives spin down.
  3. So after running the ls script every 10s for a week, it doesn't work 100%, even when just watching a simple DVD. Once I was relinking the files for my media center while my kid was watching a movie (around 8GB). During that time it had to spin up 3 of the 12 drives. The nice thing is that not all 12 spun up, and now I know why, after reading the latest answers. Even with just streaming videos, the ls every 10s doesn't guarantee the file structure will always stay in RAM, but it helps a lot, which is great. I am thinking of going from 2GB to 4GB to help it even more, as I have 11TB full, and that makes a lot of files (I have HD titles as ISOs, but DVDs ripped as folders/files). I use it because when it takes too long to get a file, the software times out, and waking 11 drives takes a lot of time, more than 2 minutes.
  4. Thanks a lot, I added the "sleep 30" just before the call to ls-R. I'll try to remember to check the crontab the next time I reboot the server. In the end, it wasn't an easy task just to do a simple ls.
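
The fix described above could look like this in the go script (a sketch only: the paths are the ones quoted later in this thread, and 30 seconds is the delay the poster chose, not a verified minimum):

```shell
#!/bin/bash
# Start the Management Utility (as in the go script quoted in this thread)
/usr/local/sbin/emhttp &
# Wait, so that unRAID has finished building its own crontab at boot;
# otherwise the entry appended by /boot/ls-R can be overwritten
sleep 30
# Now install the "frequent ls" cron entry
/boot/ls-R
```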
  5. :'( :'( :'( Ok, so I took the step and installed the script. Well, first it didn't work, because the ls script had to be converted to Unix line endings. After that I got no error at boot, but doing a "crontab -l" didn't show the frequent-ls lines. I then tried to run ls-R manually from telnet; it went well and I could see the entry in the crontab. So the script seems to work when run manually, but not when run from the go script. Does someone know why it didn't show at boot? Is the crontab built by unRAID some time after boot, erasing the crontab created by the script? Here is my complete go script:

       #!/bin/bash
       # Start the Management Utility
       /usr/local/sbin/emhttp &
       # Set cron job for ls -R task, keep directories cached
       /boot/ls-R
       # Increase the directory caching
       echo vm.vfs_cache_pressure = 0 >>/etc/sysctl.conf
       sysctl -p /etc/sysctl.conf

     Any idea would be much appreciated.
  6. It is not that it doesn't work; they do spin down, it just takes more than 1 hour for the Samsung drives. I didn't time it to see exactly how long; all I know is that they spin down every time. Did you give it some time to spin down, like a night without activity for example?
  7. Argh! Just when I finally decided to test this (I am far from being a hacker, which is why I am a little scared about this), someone pops in with terrible news, even if I didn't understand all of point 1 of ReneV. Now, during the night I thought about something else with a run every 10 seconds. When the server starts, all the disks are spun up, but the first ls will last more than 10 seconds, as it has to go through all 11 physical drives to fill the cache the first time. This means the second ls will run before all the data is in memory, so I guess it will also have to parse all the drives. Maybe the third ls will do the same. Will that harm unRAID? If this is bad, is it possible in the go script to run a first ls statement alone (not in the crontab), so it takes the time it takes to fill the cache, and only then run the script that sets up the ls every 10 seconds? Is that a good idea? With Joe L.'s every-10-seconds script, should I also add the following lines from RobJ to the go script? "You should also add the following lines to your go script, to 'encourage' the kernel to keep directories cached."

       # Increase the directory caching
       echo vm.vfs_cache_pressure = 0 >>/etc/sysctl.conf
       sysctl -p /etc/sysctl.conf

     And does it have to come before or after the call to the ls-R script? Oh yes, I would feel much more confident if it was out of the box.
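
The ordering asked about above could be sketched like this in the go script (untested on unRAID; /mnt/user and /boot/ls-R are the paths used elsewhere in the thread):

```shell
#!/bin/bash
/usr/local/sbin/emhttp &

# 1. One blocking ls first: it takes as long as it takes to warm the
#    directory cache, before any 10-second cron entry exists, so slow
#    first runs cannot pile up on each other.
ls -R /mnt/user 1>/dev/null 2>&1

# 2. Only now install the frequent-ls cron job.
/boot/ls-R

# 3. RobJ's lines: the sysctl only changes a kernel knob, so whether it
#    comes before or after the ls-R call should not matter much.
echo vm.vfs_cache_pressure = 0 >>/etc/sysctl.conf
sysctl -p /etc/sysctl.conf
```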
  8. I have 11 of these Samsung drives. True, they don't spin down after 1 hour. BUT, just as true, they do end up sleeping. Give it a night and in the morning it will have spun down on its own. Each time I check after a night without activity (like copying new files), the 11 drives are spun down and the computer eats just 79W.
  9. Joe L., oh, every 10 seconds sounds scary to me, but if you say it works fine, I'll finally make the big jump and add my first go script tonight (well, in fact yours). Thanks a lot for all your detailed answers. Reading all this, I will try to go from 2GB to 4GB of RAM when possible, to increase the chance that the file structure stays in cache.
  10. Thanks, so you're telling me the script is fine running ls twice a minute. But what are the consequences when it runs during a parity check, where the cache will be emptied in less than 30 seconds? The ls script will then have to go through all the physical drives, twice every minute, for 18 hours. Isn't this going to harm unRAID?
  11. Thanks for the information. Argh! So let's say it fails: it will then wake up all the drives and do the ls. This is going to take nearly 3 minutes, which means that during that time the ls will be launched 5 more times, with the drives trying to wake up and all the other ls scripts still running. Will this be a problem? And during a parity check/build, what will be the impact of the ls script running twice a minute? And if I have to launch it twice a minute, is this the correct syntax for the script?

       echo "* * * * * for i in 1 2 ;do sleep 28; ls -R $shared_drive 1>/dev/null 2>&1; done" >>/tmp/crontab

     I took as a model a script that was doing it 3 times with 20 seconds. The reason I put 28 seconds and not 30 is that with 30s, the second listing would start past the minute mark, when the next ls entry is due to run. Is it a problem to have 2 or more ls scripts running at the same time on the unRAID server?
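
The 28-second choice can be checked with a little arithmetic: each loop iteration sleeps before listing, so the two runs land at seconds 28 and 56 of the minute, leaving a gap before the next minute's cron entry (with sleep 30 the second run would land exactly on second 60). A quick simulation of the offsets, not touching cron at all:

```shell
# Simulate the in-minute offsets produced by:
#   for i in 1 2; do sleep 28; ls -R ... ; done
offset=0
for i in 1 2; do
  offset=$((offset + 28))   # each 'sleep 28' happens before its ls
  echo "ls run starts at second $offset of the minute"
done
# prints seconds 28 and 56: both inside the minute, so consecutive
# cron entries do not collide
```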
  12. Ok, I decided to try to bother the specialists less with my low-level questions and to answer most of them myself. So here are the results of my research. When all the drives are sleeping, accessing one file through a user share only wakes up the drive containing the file; all the others keep sleeping. Reading a file fills up the cache and evicts the folder structure from it: the next listing took 2m58s to wake up and list all the drives. The flash share is in fact the boot device, so I can put the files on it using the share, then reboot the server.

     Now, I searched how the crontab works, and I know now that all stars mean a launch every minute. If I want every 2 minutes, I can write "*/2 * * * *". Now some computing: an HD movie eats around 2-3% of my gigabit card. Let's imagine 10% max: that makes 10MB/s, which would take 3 minutes to fill up my 2GB. So I could use */3 for the ls script.

     And I finish with what I think are 2 more advanced questions: 1- Is there a downside to launching the ls script every minute, and an advantage in finding our cache-fill limit so as not to launch it too often? In fact, how will that behavior impact unRAID? 2- In any case, a parity check runs at around 30MB/s per drive; with 11 drives that makes more than 300MB/s, so in a few seconds the memory will be full, meaning the ls script will have to go through all the drives to do its work. So how will the ls script going through all the physical drives impact unRAID during a parity check or build? Thanks in advance.

     Now some thinking: if unRAID keeps an internal filesystem for shares to make the link between a file and its physical location, couldn't it read that when browsing the share? I imagine that would make the ls script and the memory caching useless.
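
The */3 schedule worked out above means "every minute divisible by 3". A small shell check of which minutes of an hour would match (my own illustration, not from the thread):

```shell
# List the minutes of one hour at which a "*/3 * * * *" cron entry fires
matches=0
m=0
while [ "$m" -le 59 ]; do
  if [ $((m % 3)) -eq 0 ]; then
    printf '%s ' "$m"          # 0 3 6 ... 57
    matches=$((matches + 1))
  fi
  m=$((m + 1))
done
echo
echo "$matches runs per hour"  # 20 runs per hour, one every 3 minutes
```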
  13. Ok, so I made a few tests as you described. The first `ls -R /mnt/user` took 1m47s, without counting the time it took to wake up the drives (quite long, as I have 11 data drives); I started the stopwatch when I saw the first listed files on the console. The second identical call took just 14s to run. Then I used `ls -R /mnt/user 1>/dev/null 2>&1` and it took just 2s. Knowing that, what timing do you think I should use for the ls script so that I can read files and still keep the directory structure in cache? If I take RobJ's script:

       #!/bin/sh
       # Edit as needed, if user shares not used, then list /mnt/disk1, /mnt/disk2, etc.
       shared_drive="/mnt/user"
       crontab -l >/tmp/crontab
       grep -q "frequent ls" /tmp/crontab 1>/dev/null 2>&1
       if [ "$?" = "1" ]
       then
         echo "# frequent ls to keep directory blocks in memory:" >>/tmp/crontab
         echo "* * * * * ls -R $shared_drive 1>/dev/null 2>&1" >>/tmp/crontab
         crontab /tmp/crontab
       fi

     Where in this script do I see how often it is launched? From the answers of others, it seems the first * controls it, but when it is a *, what timing does that mean? I saw that it can be replaced with '0-59/2' for every 2 minutes. But do you think I need to change RobJ's script in my case? Now, I guess that to implement this I have to stop the array, do a clean powerdown, then pull out the flash and put this ls-R file in the boot directory of the flash. Of course I also have to add to the go script the 2 pieces of code RobJ gave. Is that the right way to do it? As I will create this file under Windows, is it ok if it is saved with Notepad with Windows CR+LF, or do I have to convert it to Unix for the CR to be handled correctly? I guess that when reading a file, unRAID only spins up the right drive and doesn't have to spin up all of them to check where the file is, so it maintains the link between the share's file structure and the real structure on each drive. Because if I am wrong, that means I would also have to list all the disks to keep their folder structure in cache. Thanks in advance, and sorry for all these maybe stupid questions, but the Windows guy I am is having a lot of trouble understanding all those cron lines. And as I don't want to break anything, I'd rather ask first to see if what I am trying to copy is done the right way.
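
On the Notepad question: the script does need Unix line endings, otherwise the shell sees a stray carriage return at the end of every line (as discovered elsewhere in this thread). One way to convert, sketched with a throwaway file in /tmp (dos2unix would do the same, if installed):

```shell
# Simulate a file saved by Windows Notepad: every line ends in CR+LF
printf '#!/bin/sh\r\nls -R /mnt/user\r\n' > /tmp/ls-R.windows

# Strip the carriage returns so the shell can run the script cleanly
tr -d '\r' < /tmp/ls-R.windows > /tmp/ls-R
```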
  14. I don't remember exactly where in the threads, but I remember reading there could be downsides in some cases with the constant-ls technique (trouble during parity? spinning up all the drives that were sleeping?). Anyway, reading the posts, it had never been clear to me that it could be used without any trouble. I just didn't want to take the risk, because if I ran into trouble I would have no way to solve it alone (I don't understand a word of what you want me to type; well, I understood the part before "1>/dev/null 2>&1"). Now if there is a safe workaround that can't trouble unRAID, I am ready to try it. Now for the software part: when the "user share" feature arrived, a lot had already changed to support it. When you access a file from the share, unRAID now has to find which particular drive the file is on to get it. This is why I was thinking it would keep an internal listing of all the files on the share drives, so it wouldn't have to search for the file on all drives. That would have solved the problem without needing any workaround. Now, not using "user share" isn't that bad, and unRAID is still a great software to me. It is just that, as it stands, user shares take away some very nice unRAID features, which I prefer over user shares. I will still try tonight (it is 11:30 AM in France) at home to list as ReneV advised, to see the result. For information, I only put movies on my unRAID server (12 drives, so 11TB of data). The HD movies are ripped as ISOs, so 2 files per movie (.dvd and .iso), and the DVDs are plain copies of the DVD file structure, so many more files. This is why I am sure a movie stream empties the cache during playback. This is also why I don't want all the drives to wake up when watching just 1 movie, as it will take too long to start (less of a drawback when copying), besides the fact that it eats a lot of power. So if the workaround is not 100% sure (well, barring a bug), I'd rather point each movie to its specific disk than to the global share. A little more pain to do (a one-time action for each movie), but that guarantees I will benefit from all the other unRAID advantages.
  15. Thanks for the feedback. I saw these threads doing ls all the time to keep the directory structure in cache. But it seems it can lead to trouble in some cases, so this doesn't sound like a nice and clean solution, especially at my Linux level (I can't live without a mouse). As unRAID handles all the disk activity, I thought it would keep for itself the list of the whole directory structure of the shares somewhere. That way it wouldn't rely on just the Linux cache for this, as each file access fills up the cache and wipes out the cached directory structure. It should be handled by unRAID, internally. I think not doing this gives the "user share" functionality a very big downside, as it eliminates some very important advantages of unRAID in general (single-drive access and sleeping drives). Spinning up 11 drives instead of 1 makes a lot of difference in time taken and in power. I guess I will then stick to just the drives and disable shares. A pity, because this was a wonderful functionality.