graywolf

Everything posted by graywolf

  1. You might want to look into setting up a SAMBA Recycle Bin. It would help with your accidental deletions via Windows. It would NOT help if you telnetted into the unRAID console and did a unix delete (rm filename). It has been quite a while for me, but I think this is the thread you want to check: http://lime-technology.com/forum/index.php?topic=5446.msg52258#msg52258 Hope it helps in the future. Also, you might want to set up a cron job to periodically clear out the RecycleBin. I have the following run daily; it permanently deletes anything more than 7 days old in the RecycleBin location (a sample schedule is sketched below):
        find /mnt/user/RecycleBIN -type f -atime +7 -print -exec rm -fr {} ';'
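     The following is not from the original post; it is just a hedged sketch of one way such a daily cleanup could be scheduled. The 03:00 time, the temp file path, and appending to root's crontab are all assumptions, and since unRAID runs from RAM a change like this would have to be reapplied at boot (e.g. from the go script):
        # Hypothetical example: append a daily 03:00 RecycleBin cleanup to root's crontab.
        crontab -l > /tmp/crontab.root
        echo "0 3 * * * find /mnt/user/RecycleBIN -type f -atime +7 -print -exec rm -fr {} ';' >> /var/log/syslog 2>&1" >> /tmp/crontab.root
        crontab /tmp/crontab.root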
  2. I have noticed on my system that cache_dirs seems to die about every 4-6 hours. How do I know this? I have a script that checks whether cache_dirs is running and, if not, starts it again:
        #!/bin/bash
        while [ true ]; do
          RUNNING=`ps -ef | grep cache_dirs | grep -v grep | grep -v check_cache_dirs.sh | wc -l`
          if [ ${RUNNING} -eq 0 ] ; then
            free -l >> /var/log/syslog
            /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
          fi
          sleep 600
        done
     Snippet from syslog:
        Wed Aug 7 16:45:01 EDT 2013
        total used free shared buffers cached
        Mem: 4145916 4016680 129236 0 141856 3277628
        Low: 865076 744028 121048
        High: 3280840 3272652 8188
        -/+ buffers/cache: 597196 3548720
        Swap: 0 0 0
        Aug 7 16:57:40 Tower cache_dirs: ==============================================
        Aug 7 16:57:40 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
        Aug 7 16:57:40 Tower cache_dirs: vfs_cache_pressure=200
        Aug 7 16:57:40 Tower cache_dirs: max_seconds=10, min_seconds=1
        Aug 7 16:57:40 Tower cache_dirs: max_depth=9999
        Aug 7 16:57:40 Tower cache_dirs: command=find -noleaf
        Aug 7 16:57:40 Tower cache_dirs: version=1.6.5
        Aug 7 16:57:40 Tower cache_dirs: ---------- caching directories ---------------
        Aug 7 16:57:40 Tower cache_dirs: FanArt
        Aug 7 16:57:40 Tower cache_dirs: Specials
        Aug 7 16:57:40 Tower cache_dirs: UnArchived
        Aug 7 16:57:40 Tower cache_dirs: ----------------------------------------------
        Aug 7 16:57:40 Tower cache_dirs: cache_dirs process ID 4071 started, To terminate it, type: cache_dirs -q
        Wed Aug 7 17:45:01 EDT 2013
        Wed Aug 7 18:45:01 EDT 2013
        Wed Aug 7 19:45:01 EDT 2013
        Wed Aug 7 20:45:01 EDT 2013
        Wed Aug 7 21:45:01 EDT 2013
        total used free shared buffers cached
        Mem: 4145916 4024564 121352 0 161140 3287004
        Low: 865076 751516 113560
        High: 3280840 3273048 7792
        -/+ buffers/cache: 576420 3569496
        Swap: 0 0 0
        Aug 7 21:45:13 Tower cache_dirs: ==============================================
        Aug 7 21:45:13 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
        Aug 7 21:45:13 Tower cache_dirs: vfs_cache_pressure=200
        Aug 7 21:45:13 Tower cache_dirs: max_seconds=10, min_seconds=1
        Aug 7 21:45:13 Tower cache_dirs: max_depth=9999
        Aug 7 21:45:13 Tower cache_dirs: command=find -noleaf
        Aug 7 21:45:13 Tower cache_dirs: version=1.6.5
        Aug 7 21:45:13 Tower cache_dirs: ---------- caching directories ---------------
        Aug 7 21:45:13 Tower cache_dirs: FanArt
        Aug 7 21:45:13 Tower cache_dirs: Specials
        Aug 7 21:45:13 Tower cache_dirs: UnArchived
        Aug 7 21:45:13 Tower cache_dirs: ----------------------------------------------
        Aug 7 21:45:14 Tower cache_dirs: cache_dirs process ID 24388 started, To terminate it, type: cache_dirs -q
        Wed Aug 7 22:45:01 EDT 2013
        Wed Aug 7 23:45:01 EDT 2013
        Thu Aug 8 00:45:01 EDT 2013
        Thu Aug 8 01:45:01 EDT 2013
        total used free shared buffers cached
        Mem: 4145916 3085692 1060224 0 152480 2356952
        Low: 865076 676180 188896
        High: 3280840 2409512 871328
        -/+ buffers/cache: 576260 3569656
        Swap: 0 0 0
        Aug 8 02:20:20 Tower cache_dirs: ==============================================
        Aug 8 02:20:20 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
        Aug 8 02:20:20 Tower cache_dirs: vfs_cache_pressure=200
        Aug 8 02:20:20 Tower cache_dirs: max_seconds=10, min_seconds=1
        Aug 8 02:20:20 Tower cache_dirs: max_depth=9999
        Aug 8 02:20:20 Tower cache_dirs: command=find -noleaf
        Aug 8 02:20:20 Tower cache_dirs: version=1.6.5
        Aug 8 02:20:20 Tower cache_dirs: ---------- caching directories ---------------
        Aug 8 02:20:20 Tower cache_dirs: FanArt
        Aug 8 02:20:20 Tower cache_dirs: Specials
        Aug 8 02:20:20 Tower cache_dirs: UnArchived
        Aug 8 02:20:20 Tower cache_dirs: ----------------------------------------------
        Aug 8 02:20:20 Tower cache_dirs: cache_dirs process ID 24880 started, To terminate it, type: cache_dirs -q
        Thu Aug 8 02:45:01 EDT 2013
        Thu Aug 8 03:45:01 EDT 2013
        Thu Aug 8 04:45:01 EDT 2013
        Thu Aug 8 05:45:01 EDT 2013
        Thu Aug 8 06:45:01 EDT 2013
        total used free shared buffers cached
        Mem: 4145916 4011360 134556 0 106820 3332164
        Low: 865076 749720 115356
        High: 3280840 3261640 19200
        -/+ buffers/cache: 572376 3573540
        Swap: 0 0 0
        Aug 8 06:57:54 Tower cache_dirs: ==============================================
        Aug 8 06:57:54 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
        Aug 8 06:57:54 Tower cache_dirs: vfs_cache_pressure=200
        Aug 8 06:57:54 Tower cache_dirs: max_seconds=10, min_seconds=1
        Aug 8 06:57:54 Tower cache_dirs: max_depth=9999
        Aug 8 06:57:54 Tower cache_dirs: command=find -noleaf
        Aug 8 06:57:54 Tower cache_dirs: version=1.6.5
        Aug 8 06:57:54 Tower cache_dirs: ---------- caching directories ---------------
        Aug 8 06:57:54 Tower cache_dirs: FanArt
        Aug 8 06:57:54 Tower cache_dirs: Specials
        Aug 8 06:57:54 Tower cache_dirs: UnArchived
        Aug 8 06:57:54 Tower cache_dirs: ----------------------------------------------
        Aug 8 06:57:55 Tower cache_dirs: cache_dirs process ID 6440 started, To terminate it, type: cache_dirs -q
        Thu Aug 8 07:45:01 EDT 2013
        Thu Aug 8 08:45:01 EDT 2013
        Thu Aug 8 09:45:01 EDT 2013
        Thu Aug 8 10:45:01 EDT 2013
        Thu Aug 8 11:45:01 EDT 2013
        total used free shared buffers cached
        Mem: 4145916 3030892 1115024 0 128020 2297612
        Low: 865076 701360 163716
        High: 3280840 2329532 951308
        -/+ buffers/cache: 605260 3540656
        Swap: 0 0 0
        Aug 8 11:55:29 Tower cache_dirs: ==============================================
        Aug 8 11:55:29 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
        Aug 8 11:55:29 Tower cache_dirs: vfs_cache_pressure=200
        Aug 8 11:55:29 Tower cache_dirs: max_seconds=10, min_seconds=1
        Aug 8 11:55:29 Tower cache_dirs: max_depth=9999
        Aug 8 11:55:29 Tower cache_dirs: command=find -noleaf
        Aug 8 11:55:29 Tower cache_dirs: version=1.6.5
        Aug 8 11:55:29 Tower cache_dirs: ---------- caching directories ---------------
        Aug 8 11:55:29 Tower cache_dirs: FanArt
        Aug 8 11:55:29 Tower cache_dirs: Specials
        Aug 8 11:55:29 Tower cache_dirs: UnArchived
        Aug 8 11:55:29 Tower cache_dirs: ----------------------------------------------
        Aug 8 11:55:29 Tower cache_dirs: cache_dirs process ID 14707 started, To terminate it, type: cache_dirs -q
        Thu Aug 8 12:45:01 EDT 2013
     Any thoughts on what might be killing cache_dirs and how I should proceed?
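     This next bit is not from the original post; it is just a hedged sketch of a small variation on the watchdog above. Tagging each restart with logger adds a timestamped syslog line alongside the free -l output, which makes it easier to see roughly when cache_dirs died. logger is a standard util-linux tool, but treat the rest as an assumption about how you might wire it in:
        #!/bin/bash
        # Hypothetical variation of the watchdog loop above: record each restart in syslog.
        while true; do
          RUNNING=`ps -ef | grep cache_dirs | grep -v grep | grep -v check_cache_dirs.sh | wc -l`
          if [ ${RUNNING} -eq 0 ]; then
            logger -t check_cache_dirs "cache_dirs not running; capturing memory state and restarting"
            free -l >> /var/log/syslog
            /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
          fi
          sleep 600
        done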
  3. Tom - Will there eventually be a "Check for Update" (or something like that) button for the WebGUI?
  4. You need to check your file system for issues: http://lime-technology.com/wiki/index.php?title=Check_Disk_Filesystems Read and follow the directions to run reiserfsck --check, then post the results here for further assistance (a rough sketch of the command is below). Also post your syslog file (instructions here): http://lime-technology.com/forum/index.php?topic=9880.0
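     As a hedged sketch only (the wiki page above is the authoritative procedure): the check is normally run read-only against the md device for the disk in question. Disk 1 (/dev/md1) here is just an assumed example, so substitute your actual disk number and do not run any repair options unless the results call for them:
        # Hypothetical example for disk 1; adjust the device to match the disk being checked.
        reiserfsck --check /dev/md1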
  5. It's quite easy to do yourself:
     - Stop unRAID (power down).
     - With another (not currently running) Windows VM on the machine:
       -- edit the Windows machine settings and add a hard disk
       -- select "existing vmdk" and select the unRAID vmdk
     - Boot this Windows VM; you'll see it has an extra drive/HD.
     - Copy all files from the unRAID vmdk to a backup place.
     - Copy bzimage, bzroot, make_bootable.bat, memtest, menu.c32, syslinux.cfg & syslinux.exe to this unRAID vmdk (16c files, of course).
     - Stop (shut down) the Windows VM.
     - Edit the settings of the Windows VM to remove the unRAID vmdk.
     - Pull the unRAID flash drive and put the 16c Zeron VMtools .tgz file on the flash drive (/extra dir); remove any other VMtools package from the flash drive (.plg and/or .tgz file).
     - Copy bzimage, bzroot, make_bootable.bat, memtest, menu.c32, syslinux.cfg & syslinux.exe to the flash drive (16c files, of course); back up the old files on the flash drive first.
     - Start the unRAID VM (in viclient so you can see what is happening).
     - If unRAID does not start (does not display the unRAID boot menu), then you need to run make_bootable.bat:
       -- do the above again until the vmdk is visible in the Windows VM
       -- run make_bootable.bat (as admin)
       -- if it complains (no removable drive, use -f option), change "%~d0\syslinux -ma %~d0" into "%~d0\syslinux -fma %~d0" and run make_bootable.bat again (as admin)
       -- shut down the Windows VM and remove the unRAID vmdk
     - Now unRAID should boot...
     Warning: This is all from memory; I did it a week ago.
     I did similar recently, but I did have to remove the vmdk hard disk from the unRAID VM before bringing up the Windows VM and then later re-add the vmdk hard disk back to the unRAID VM.
  6. If drive 8 is red-balled, then unRAID is not using it. What you are actually seeing for drive 8 is the "reconstructed" data. You would see that all your drives are spun up and that the read counts on all the drives (including parity) are increasing as you access drive 8 files.
  7. Correct. I couldn't remember whether unRAID uses odd or even parity, which is why I specified odd parity in my examples. The point was to show why adding a drive that didn't have all zeros, but that unRAID thought did, would be a bad thing.
  8. Drives (faked precleared drive 4 added)
     1 2 3 4   Parity
     0 0 0 1     1
     0 0 1 0     0
     0 1 0 1     0
     0 1 1 0     1
     1 0 0 1     0
     1 0 1 1     1
     1 1 0 0     1
     1 1 1 0     0
  9. Let's do some examples based on jonathanm's info. Odd parity: all drives plus Parity must = 1 (i.e. you need an odd number of 1s). Since a precleared drive has all 0s, parity does not change and is not recalculated.
     Drives (original)
     1 2 3   Parity
     0 0 0     1
     0 0 1     0
     0 1 0     0
     0 1 1     1
     1 0 0     0
     1 0 1     1
     1 1 0     1
     1 1 1     0
     Drives (precleared drive added)
     1 2 3 4   Parity
     0 0 0 0     1
     0 0 1 0     0
     0 1 0 0     0
     0 1 1 0     1
     1 0 0 0     0
     1 0 1 0     1
     1 1 0 0     1
     1 1 1 0     0
     Drives (drive 3 fails)
     1 2 4 Parity   Sim 3
     0 0 0 1        = (0+0+0+1) = 0
     0 0 0 0        = (0+0+0+0) = 1
     0 1 0 0        = (0+1+0+0) = 0
     0 1 0 1        = (0+1+0+1) = 1
     1 0 0 0        = (1+0+0+0) = 0
     1 0 0 1        = (1+0+0+1) = 1
     1 1 0 1        = (1+1+0+1) = 0
     1 1 0 0        = (1+1+0+0) = 1
     Drives (faked precleared drive 4 added)
     1 2 3 4   Parity
     0 0 0 1     1
     0 0 1 0     0
     0 1 0 1     0
     0 1 1 0     1
     1 0 0 1     0
     1 0 1 1     1
     1 1 0 0     1
     1 1 1 0     0
     Drives (drive 3 fails with "fake cleared drive")
     1 2 4 Parity   Sim 3
     0 0 1 1        = (0+0+1+1) = 1 (should have been 0)
     0 0 0 0        = (0+0+0+0) = 1
     0 1 1 0        = (0+1+1+0) = 1 (should have been 0)
     0 1 0 1        = (0+1+0+1) = 1
     1 0 1 0        = (1+0+1+0) = 1 (should have been 0)
     1 0 1 1        = (1+0+1+1) = 0 (should have been 1)
     1 1 0 1        = (1+1+0+1) = 0
     1 1 1 0        = (1+1+1+0) = 0 (should have been 1)
     So if you had "faked" a pre-clear signature and the drive was not all 0s, then you can see that the simulated failed drive would not match what was originally on the failed drive (drive 3 in my examples). That would be your file/data corruption. (A small script that recomputes the "Sim 3" column is sketched below.)
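     This next part is not from the original post; it is just a hedged sketch showing the same arithmetic in script form. The loop below recomputes the "Sim 3" column from drives 1, 2, the faked drive 4, and parity, using the odd-parity convention of these examples (unRAID itself actually computes parity with XOR, i.e. even parity):
        #!/bin/bash
        # Recompute the simulated drive-3 bits from the "faked precleared drive 4 added" table above.
        drive1=(0 0 0 0 1 1 1 1)
        drive2=(0 0 1 1 0 0 1 1)
        drive4=(1 0 1 0 1 1 0 0)   # the "faked" precleared drive: not actually all zeros
        parity=(1 0 0 1 0 1 1 0)
        for i in {0..7}; do
          sum=$(( drive1[i] + drive2[i] + drive4[i] + parity[i] ))
          sim3=$(( (sum + 1) % 2 ))   # pick the bit that makes the total number of 1s odd
          echo "row $((i+1)): simulated drive 3 = ${sim3}"
        done
     Comparing that output with the original drive 3 column (0 1 0 1 0 1 0 1) shows the mismatched rows, i.e. the corruption described above.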
  10. I understand your scenario; that's cool, we could go with that as well, and we'll call it scenario #2. I don't get why it would use anything (0/1) from this new drive to "simulate" the other failed drive. I will explain my thoughts: garycase stated no computations are done (not disputing, I don't know), and nothing has been written to the newly added faked drive, so how, or rather why, would this faked drive be utilized to help simulate any drive that were to fail?
      When you add a precleared drive (or unRAID clears it), unRAID knows that the new drive has all 0s, and therefore no parity computations are required since X+0=X... parity does not change. But when a drive fails, it is simulated by unRAID by taking the 1s and 0s of all the data drives plus the parity drive to figure out what the bit should/would be on the failed drive.
  11. He is running:
      Jun 30 20:51:04 Tower emhttp: unRAID System Management Utility version 5.0-rc12a
      I saw several BADCRC errors and "ckmbr: read: Input/output error". You'll need to wait on one of the experts, but I definitely think it is having problems with the MBR (master boot record). Or at least that is my assumption.
  12. Let's say you did all that. unRAID thinks the new drive has all 0s (zeros). Let's say that before you even add anything else to your array, a different drive dies. unRAID now tries simulating the dead drive via the parity and all the other drives. Wherever your faked drive has a 1, it will corrupt the "simulated" drive data with the reverse of what it should be. Then if you tried to rebuild the failed disk, you would have just permanently corrupted data on the rebuilt drive, but you wouldn't know it.
  13. Depends upon whether you are looking at this for the masses or the techies. You would probably want a checkbox or some other hoops to jump through for any beta/RC updates. But for FINAL releases, it would be a good idea for the "masses".
  14. You have the right steps:
      1. Preclear the 3TB.
      2a. Stop the array.
      2b. Remove the 2TB parity.
      3a. Add the 3TB as the parity drive.
      3b. Start the array. Allow the array to rebuild parity.
      4. Preclear the 2TB (old parity).
      5a. Stop the array.
      5b. Add the 2TB (old parity) to the array.
      5c. Start the array.
  15. Thanks. Tried it before but other VM didn't come up. Was missing that I had to remove the VMDK from unRaid before bringing up the other VM. I'm good to go now.
  16. Messed up my boot VMDK. I copied the syslinux.exe by mistake instead of the syslinux.cfg, and now it hangs at syslinux. I did copy the original files to a subfolder. Just not sure how to connect to it since unRAID is not coming up.
  17. Go to Settings ==> Disk Settings and check what you have set for Enable Auto-Start
  18. I've never had to worry about it. The max temp I've seen has been 37C, and currently my highest is 31C. Most of my drives are in the 25-29C range.
  19. Thanks JoeL. Works like a charm now. Greatly appreciate your assistance
  20. No one? Am I doing something wrong or can you no longer modify emhttp so that it is better protected from OOM situations?
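      For context, here is a hedged sketch of the kind of tweak this question is about; whether it still applies to emhttp on a given unRAID release is exactly the open question, and treating it as relevant here is an assumption on my part. The /proc interface itself is standard Linux (older kernels expose oom_adj, newer ones oom_score_adj):
         #!/bin/bash
         # Hypothetical sketch: make the kernel OOM killer much less likely to pick emhttp.
         PID=$(pidof emhttp)
         if [ -n "$PID" ]; then
           if [ -e /proc/$PID/oom_score_adj ]; then
             echo -1000 > /proc/$PID/oom_score_adj   # newer kernels: -1000 effectively exempts the process
           else
             echo -17 > /proc/$PID/oom_adj           # older kernels: -17 disables OOM killing for the process
           fi
         fi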
  21. Just remember, your parity drive needs to be the largest drive. So using the 4TB drive as your parity drive does not buy you any additional storage but does allow you to add other 4TB drives as data drives later. (if you are running unRaid 5.x)
  22. Running the following script in the background so that I can get a better feel for what is happening and how frequently:
      #!/bin/bash
      while [ true ]; do
        RUNNING=`ps -ef | grep cache_dirs | grep -v grep | grep -v check_cache_dirs.sh | wc -l`
        if [ ${RUNNING} -eq 0 ] ; then
          free -l >> /var/log/syslog
          /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
        fi
        sleep 600
      done
      At least this way, when cache_dirs dies, I start up another one (within 10 minutes) and write the output of free -l to the syslog.
  23. http://lime-technology.com/forum/index.php?topic=28482.0
  24. A little more info. It did not show up in the syslog but did on the screen; cache_dirs had been running approx. 3.5 hrs at this failure:
      /boot/scripts/cache_dirs: xmalloc: execute_cmd.c:3599: cannot allocate 72 bytes (901120 bytes allocated)
      /boot/scripts/cache_dirs: line 449: [: : integer expression expected
      /boot/scripts/cache_dirs: xmalloc: execute_cmd.c:578: cannot allocate 305 bytes (901120 bytes allocated)
      Line 449 is the IF statement:
      num_dirs=`find /mnt/disk[1-9]* /mnt/cache -type d -maxdepth 0 -print 2>/dev/null|wc -l`
      if [ "$num_dirs" -eq 0 ]
      then
          # array is not started, sleep and look again in 10 seconds.
          sleep 10
          continue
      fi
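      One observation, offered only as a hedged guess rather than a fix from Joe L.: the "[: : integer expression expected" message suggests $num_dirs came back empty because the backtick subshell itself hit the memory allocation failure. A defensive default keeps the test from choking on the empty string, although it does nothing about the underlying memory exhaustion:
         # Hypothetical tweak to the excerpt above: fall back to 0 if the subshell produced nothing.
         num_dirs=`find /mnt/disk[1-9]* /mnt/cache -type d -maxdepth 0 -print 2>/dev/null|wc -l`
         if [ "${num_dirs:-0}" -eq 0 ]
         then
             # array is not started (or the find never ran); sleep and look again in 10 seconds.
             sleep 10
             continue
         fi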