
graywolf

Members · Posts: 675

Everything posted by graywolf

  1. *nix OSes try to use all available memory. Cache/Buffers is memory that isn't being used by other programs; if a program needs memory that isn't otherwise available, it will pull from the Cache/Buffer area. cache_dirs uses the Cache/Buffer area, but so does anything else you read or use. Stream a movie: it goes into Cache/Buffer. Look at text files: they go into Cache/Buffer. Stuff gets thrown out of Cache/Buffer basically on an oldest/least-used algorithm.
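A minimal sketch of that behavior (my illustration, not from the post): on Linux the current page-cache size is visible in /proc/meminfo, and reading any file pulls it into the cache. Reading /etc/hosts here is just an example file.

```shell
cached_kib() {
  # current page-cache size in KiB, from the kernel's own accounting
  awk '/^Cached:/ {print $2}' /proc/meminfo
}

before=$(cached_kib)
cat /etc/hosts > /dev/null    # read something; it lands in the page cache
after=$(cached_kib)
echo "cached: ${before} KiB -> ${after} KiB"
```

The "after" number is usually equal or larger, but the kernel may evict other cached pages at any time, which is exactly the oldest/least-used behavior the post describes.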
  2. Not 100% sure, but I'm pretty sure it does. If you had stopped the array and then copied the super.dat file, I doubt it would have gone into a parity check.
  3. super.dat contains your disk-assignment information. If you have a listing of your disk assignments, you just need to know which disk is parity, which is cache, and which are unassigned. Then, on the 64-bit version, you do your new config, assign the disks accordingly, and that will create the super.dat file.
  4. Actually, I was getting too fancy there; my mind was wrapped up in the middle of another script I was working on for work. Simpler version: output the following line to a file: ls -lR /mnt/disk* D'oh!
  5. Outputting the following to a file should do the trick: ls -lR $(ls -l /mnt/ | grep disk | awk '{print "/mnt/" $NF}')
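The intent of the two posts above can be wrapped up as a small helper (a sketch of mine, not from the thread; the function name and arguments are made up — on an actual unRAID box you would call it with /mnt and a file on the flash drive):

```shell
list_disks() {
  # Recursively list every "disk*" directory under root $1 into file $2.
  # 2>/dev/null swallows the error if no disk* directories exist yet.
  ls -lR "$1"/disk* > "$2" 2>/dev/null || true
}

# example on a real server: list_disks /mnt /boot/disk_listing.txt
```

Keeping the listing on the flash drive means it survives a crash of the array, which is the point of capturing disk assignments and contents in the first place.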
  6. Not possible that I'm aware of, but you could set up a Recycle Bin for when you delete (except if you deleted from within unRAID via a telnet connection): http://lime-technology.com/forum/index.php?topic=5446.0
  7. Nice deal. Picked up 2: 1 with my account, 1 with the wife's account, since the "discount" had a $20 max.
  8. Agree entirely. I do like 'unRAID Classic' myself though! The "Classic" only seems meaningful if you know the history of unRAID. Would you buy a software product called Classic? Linux Classic? Windows Classic? Mac OS Classic? Photoshop Classic? ... Sounds kind of cheesy, old, left behind. It really depends on where the core product/bread and butter is going to be for limetech. Coke Classic
  9. I hope they put the thread back (minus all the attacks, which should go to the bilge); I think there was some valid/useful discussion in there before the attacks.
  10. You could add it to your go script:

      crontab -l > /tmp/file
      echo '#' >> /tmp/file
      echo '# Start of Custom crontab entries' >> /tmp/file
      echo '10 05 * * * /boot/scripts/yourscript 1>/dev/null 2>&1' >> /tmp/file
      echo '# End of Custom crontab entries' >> /tmp/file
      crontab /tmp/file
      rm -f /tmp/file
  11. To be pedantic, the limitation is how it's currently coded. It would take a bit of refactoring in the md kernel driver and inside emhttp to allow for more drives. Also, Linux does drive lettering as "[a -> z] -> [aa -> az] -> [ba -> bz] ... [za -> zz] -> [aaa -> aaz] -> [aba -> abz] ... [zza -> zzz] -> [aaaa -> aaaz] ...". It does not do something such as case-sensitive device names. That naming convention makes sense... Do we know for a fact that the emhttp module is limited based strictly on the [a -> z] naming convention, or is that speculation? Ogi It's been quite a while, but somewhere in another thread Tom has said that it is based on the [a -> z] naming limitation and that it would involve an extensive rewrite, with subsequent extensive testing, to change it to allow double-character drive letters.
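The lettering sequence described above can be sketched in a few lines of bash (my illustration, not unRAID code; it only enumerates the one- and two-letter names, the scheme continues to three and four letters the same way):

```shell
sd_names() {
  # emit sd-device names in kernel order: sda..sdz, then sdaa..sdzz
  for a in "" {a..z}; do
    for b in {a..z}; do
      echo "sd${a}${b}"
    done
  done
}

sd_names | sed -n '26p;27p'   # prints sdz then sdaa: the rollover point
```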
  12. Here is a current link to TAMS's eBay listings. They currently have 3 different 24-drive 4U servers to choose from. http://www.ebay.com/sch/tamsolutions/m.html?item=170839830638&rt=nc&_trksid=p2047675.l2562
  13. If you are OK with used and have time to possibly wait, you might want to check this: http://lime-technology.com/forum/index.php?topic=21957.msg194646#msg194646 You get a complete working 4U server for about the price of a new 4224. Of course, you would want to make some mods if you can't place it somewhere the noise doesn't matter.
  14. Hope to get a chance to play with it this weekend, but it's not looking good. I'm on call, and a major application implementation is having tons of issues. Wish they would do better QA before rolling into production.
  15. Do you also have SimpleFeatures? I think I saw that a while ago when I was kicking the tires on the SimpleFeatures add-on. It was nice and everything, but then issues arose that didn't get fixed, so I just removed it and am using the stock GUI.
  16. How "full" are your disks? I've noticed slower writes when my disks start getting full (reads have always been fine). Writing to a disk share is the same as writing to the user share, except you have disk#/ in front of the share; i.e., for the user share Movies/Thor, the disk share could be disk1/Movies/Thor. The user share will automatically recognize it also.
  17. I have noticed on my system that cache_dirs seems to die about every 4-6 hours. How do I know this? I have a script that checks whether cache_dirs is running and, if not, starts it again.

      #!/bin/bash
      while [ true ]; do
        RUNNING=`ps -ef | grep cache_dirs | grep -v grep | grep -v check_cache_dirs.sh | wc -l`
        if [ ${RUNNING} -eq 0 ] ; then
          free -l >> /var/log/syslog
          /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
        fi
        sleep 600
      done

      Snippet from syslog:

      Wed Aug 7 16:45:01 EDT 2013
                   total       used       free     shared    buffers     cached
      Mem:       4145916    4016680     129236          0     141856    3277628
      Low:        865076     744028     121048
      High:      3280840    3272652       8188
      -/+ buffers/cache:     597196    3548720
      Swap:            0          0          0
      Aug 7 16:57:40 Tower cache_dirs: ==============================================
      Aug 7 16:57:40 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
      Aug 7 16:57:40 Tower cache_dirs: vfs_cache_pressure=200
      Aug 7 16:57:40 Tower cache_dirs: max_seconds=10, min_seconds=1
      Aug 7 16:57:40 Tower cache_dirs: max_depth=9999
      Aug 7 16:57:40 Tower cache_dirs: command=find -noleaf
      Aug 7 16:57:40 Tower cache_dirs: version=1.6.5
      Aug 7 16:57:40 Tower cache_dirs: ---------- caching directories ---------------
      Aug 7 16:57:40 Tower cache_dirs: FanArt
      Aug 7 16:57:40 Tower cache_dirs: Specials
      Aug 7 16:57:40 Tower cache_dirs: UnArchived
      Aug 7 16:57:40 Tower cache_dirs: ----------------------------------------------
      Aug 7 16:57:40 Tower cache_dirs: cache_dirs process ID 4071 started, To terminate it, type: cache_dirs -q
      Wed Aug 7 17:45:01 EDT 2013
      Wed Aug 7 18:45:01 EDT 2013
      Wed Aug 7 19:45:01 EDT 2013
      Wed Aug 7 20:45:01 EDT 2013
      Wed Aug 7 21:45:01 EDT 2013
                   total       used       free     shared    buffers     cached
      Mem:       4145916    4024564     121352          0     161140    3287004
      Low:        865076     751516     113560
      High:      3280840    3273048       7792
      -/+ buffers/cache:     576420    3569496
      Swap:            0          0          0
      Aug 7 21:45:13 Tower cache_dirs: ==============================================
      Aug 7 21:45:13 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
      Aug 7 21:45:13 Tower cache_dirs: vfs_cache_pressure=200
      Aug 7 21:45:13 Tower cache_dirs: max_seconds=10, min_seconds=1
      Aug 7 21:45:13 Tower cache_dirs: max_depth=9999
      Aug 7 21:45:13 Tower cache_dirs: command=find -noleaf
      Aug 7 21:45:13 Tower cache_dirs: version=1.6.5
      Aug 7 21:45:13 Tower cache_dirs: ---------- caching directories ---------------
      Aug 7 21:45:13 Tower cache_dirs: FanArt
      Aug 7 21:45:13 Tower cache_dirs: Specials
      Aug 7 21:45:13 Tower cache_dirs: UnArchived
      Aug 7 21:45:13 Tower cache_dirs: ----------------------------------------------
      Aug 7 21:45:14 Tower cache_dirs: cache_dirs process ID 24388 started, To terminate it, type: cache_dirs -q
      Wed Aug 7 22:45:01 EDT 2013
      Wed Aug 7 23:45:01 EDT 2013
      Thu Aug 8 00:45:01 EDT 2013
      Thu Aug 8 01:45:01 EDT 2013
                   total       used       free     shared    buffers     cached
      Mem:       4145916    3085692    1060224          0     152480    2356952
      Low:        865076     676180     188896
      High:      3280840    2409512     871328
      -/+ buffers/cache:     576260    3569656
      Swap:            0          0          0
      Aug 8 02:20:20 Tower cache_dirs: ==============================================
      Aug 8 02:20:20 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
      Aug 8 02:20:20 Tower cache_dirs: vfs_cache_pressure=200
      Aug 8 02:20:20 Tower cache_dirs: max_seconds=10, min_seconds=1
      Aug 8 02:20:20 Tower cache_dirs: max_depth=9999
      Aug 8 02:20:20 Tower cache_dirs: command=find -noleaf
      Aug 8 02:20:20 Tower cache_dirs: version=1.6.5
      Aug 8 02:20:20 Tower cache_dirs: ---------- caching directories ---------------
      Aug 8 02:20:20 Tower cache_dirs: FanArt
      Aug 8 02:20:20 Tower cache_dirs: Specials
      Aug 8 02:20:20 Tower cache_dirs: UnArchived
      Aug 8 02:20:20 Tower cache_dirs: ----------------------------------------------
      Aug 8 02:20:20 Tower cache_dirs: cache_dirs process ID 24880 started, To terminate it, type: cache_dirs -q
      Thu Aug 8 02:45:01 EDT 2013
      Thu Aug 8 03:45:01 EDT 2013
      Thu Aug 8 04:45:01 EDT 2013
      Thu Aug 8 05:45:01 EDT 2013
      Thu Aug 8 06:45:01 EDT 2013
                   total       used       free     shared    buffers     cached
      Mem:       4145916    4011360     134556          0     106820    3332164
      Low:        865076     749720     115356
      High:      3280840    3261640      19200
      -/+ buffers/cache:     572376    3573540
      Swap:            0          0          0
      Aug 8 06:57:54 Tower cache_dirs: ==============================================
      Aug 8 06:57:54 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
      Aug 8 06:57:54 Tower cache_dirs: vfs_cache_pressure=200
      Aug 8 06:57:54 Tower cache_dirs: max_seconds=10, min_seconds=1
      Aug 8 06:57:54 Tower cache_dirs: max_depth=9999
      Aug 8 06:57:54 Tower cache_dirs: command=find -noleaf
      Aug 8 06:57:54 Tower cache_dirs: version=1.6.5
      Aug 8 06:57:54 Tower cache_dirs: ---------- caching directories ---------------
      Aug 8 06:57:54 Tower cache_dirs: FanArt
      Aug 8 06:57:54 Tower cache_dirs: Specials
      Aug 8 06:57:54 Tower cache_dirs: UnArchived
      Aug 8 06:57:54 Tower cache_dirs: ----------------------------------------------
      Aug 8 06:57:55 Tower cache_dirs: cache_dirs process ID 6440 started, To terminate it, type: cache_dirs -q
      Thu Aug 8 07:45:01 EDT 2013
      Thu Aug 8 08:45:01 EDT 2013
      Thu Aug 8 09:45:01 EDT 2013
      Thu Aug 8 10:45:01 EDT 2013
      Thu Aug 8 11:45:01 EDT 2013
                   total       used       free     shared    buffers     cached
      Mem:       4145916    3030892    1115024          0     128020    2297612
      Low:        865076     701360     163716
      High:      3280840    2329532     951308
      -/+ buffers/cache:     605260    3540656
      Swap:            0          0          0
      Aug 8 11:55:29 Tower cache_dirs: ==============================================
      Aug 8 11:55:29 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
      Aug 8 11:55:29 Tower cache_dirs: vfs_cache_pressure=200
      Aug 8 11:55:29 Tower cache_dirs: max_seconds=10, min_seconds=1
      Aug 8 11:55:29 Tower cache_dirs: max_depth=9999
      Aug 8 11:55:29 Tower cache_dirs: command=find -noleaf
      Aug 8 11:55:29 Tower cache_dirs: version=1.6.5
      Aug 8 11:55:29 Tower cache_dirs: ---------- caching directories ---------------
      Aug 8 11:55:29 Tower cache_dirs: FanArt
      Aug 8 11:55:29 Tower cache_dirs: Specials
      Aug 8 11:55:29 Tower cache_dirs: UnArchived
      Aug 8 11:55:29 Tower cache_dirs: ----------------------------------------------
      Aug 8 11:55:29 Tower cache_dirs: cache_dirs process ID 14707 started, To terminate it, type: cache_dirs -q
      Thu Aug 8 12:45:01 EDT 2013

      Any thoughts on what might be killing cache_dirs and how I should proceed?
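When a long-running script dies silently like this, one useful first step (my suggestion, not from the thread) is to scan the logs for out-of-memory evidence, since the kernel's OOM killer and failed allocations both leave traces. The function name and "cannot allocate" pattern are illustrative:

```shell
oom_evidence() {
  # Scan a log file ($1) for OOM-killer or allocation-failure messages.
  # Prints matching lines, or a note when nothing is found.
  grep -iE 'oom|out of memory|cannot allocate' "$1" || echo "no OOM evidence"
}

# on the server: oom_evidence /var/log/syslog
```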
  18. Tom - Will there eventually be a "Check for Update" (or something like that) button for the WebGUI?
  19. It's quite easy to do yourself:
      - Stop unRAID (power down).
      - With another (not currently running) Windows VM on the machine:
        -- edit the Windows machine settings and add a hard disk
        -- select "existing vmdk", and select the unRAID vmdk
      - Boot this Windows VM; you'll see it has an extra drive/HD.
      - Copy all files from the unRAID vmdk to a backup place.
      - Copy bzimage, bzroot, make_bootable.bat, memtest, menu.c32, syslinux.cfg & syslinux.exe to this unRAID vmdk (the 16c files, of course).
      - Stop (shut down) the Windows VM.
      - Edit the settings of the Windows VM to remove the unRAID vmdk.
      - Pull the unRAID flash drive and put the 16c Zeron VMTools .tgz file on the flash drive (/extra dir); remove any other VMTools package from the flash drive (.plg and/or .tgz file).
      - Copy bzimage, bzroot, make_bootable.bat, memtest, menu.c32, syslinux.cfg & syslinux.exe to the flash drive (the 16c files, of course); back up the old files on the flash drive first.
      - Start the unRAID VM (in the VI client so you can see what is happening).
      - If unRAID does not start (does not display the unRAID boot menu), then you need to run make_bootable.bat:
        -- do the above again until the vmdk is visible in the Windows VM
        -- run make_bootable.bat (as admin)
        -- if it complains (no removable drive, use -f option), change "%~d0\syslinux -ma %~d0" into "%~d0\syslinux -fma %~d0" and run make_bootable.bat again (as admin)
        -- shut down the Windows VM and remove the unRAID vmdk
      - Now unRAID should boot...
      Warning: This is all from memory; I did it a week ago.

      I did similar recently, but I did have to remove the vmdk hard disk from the unRaid VM before bringing up the Windows VM, and then later re-add the vmdk hard disk back to the unRaid VM.
  20. Running the following script in the background so that I can get a better feel of what is happening and how frequently:

      #!/bin/bash
      while [ true ]; do
        RUNNING=`ps -ef | grep cache_dirs | grep -v grep | grep -v check_cache_dirs.sh | wc -l`
        if [ ${RUNNING} -eq 0 ] ; then
          free -l >> /var/log/syslog
          /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
        fi
        sleep 600
      done

      At least this way, when cache_dirs dies, I start up another one (within 10 mins) and put the output from free -l into syslog.
  21. A little more info. It did not show up in syslog but did on the screen. cache_dirs had been running approx 3.5 hrs at this failure:

      /boot/scripts/cache_dirs: xmalloc: execute_cmd.c:3599: cannot allocate 72 bytes (901120 bytes allocated)
      /boot/scripts/cache_dirs: line 449: [: : integer expression expected
      /boot/scripts/cache_dirs: xmalloc: execute_cmd.c:578: cannot allocate 305 bytes (901120 bytes allocated)

      Line 449 is the if statement:

      num_dirs=`find /mnt/disk[1-9]* /mnt/cache -type d -maxdepth 0 -print 2>/dev/null|wc -l`
      if [ "$num_dirs" -eq 0 ]
      then
        # array is not started, sleep and look again in 10 seconds.
        sleep 10
        continue
      fi
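A hedged fix sketch (mine, not from cache_dirs): the "[: : integer expression expected" error means $num_dirs was empty — the backtick substitution itself failed under memory pressure, so the -eq test saw a blank string. Defaulting the value with ${n:-0} keeps the test well-formed even when the substitution fails. The function name and the root-directory argument are illustrative; cache_dirs itself hard-codes /mnt.

```shell
count_array_dirs() {
  # Count top-level disk/cache directories under root $1
  # (on a real unRAID server, $1 would be /mnt).
  local n
  n=$(find "$1"/disk[1-9]* "$1"/cache -maxdepth 0 -type d -print 2>/dev/null | wc -l)
  echo "${n:-0}"   # never emit an empty string, even if find/wc failed
}
```

With this guard, `[ "$(count_array_dirs /mnt)" -eq 0 ]` can no longer throw the integer-expression error, though it does not fix the underlying allocation failure.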
  22. Playing with cache_dirs and finding something strange. I start it with:

      cache_dirs -p 200 -i Specials -i UnArchived -i FanArt

      It runs fine for a while and I see the processes:

      root@Tower:/root -> ps -ef | grep cache
      root 14369 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14370 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14372 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14373 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14375 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14376 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14377 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14378 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14379 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14380 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14381 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14382 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14383 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14384 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14385 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14386 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14387 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14388 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14389 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14390 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
      root 14472 1 0 10:49 pts/1 00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt

      After a while though, if I check again, it appears that cache_dirs has died. Nothing in syslog to let me know what happened, though. If I do a cache_dirs -q, it tells me that #### is not running. unRaid version: 5.0-beta11 VM, 4GB RAM. Any thoughts?
  23. I agree. RMA it. You DO win the "drive with the most reallocated sectors I've ever seen reported" contest (if it is any consolation).

      Reallocated_Sector_Ct  =  72  100  10  ok  35928
      Current_Pending_Sector =  91  100   0  ok  1600

      Definitely will RMA the drive later this week. 2nd cycle was bad also:

      ** Changed attributes in files: /tmp/smart_start_sdc /tmp/smart_finish_sdc
      ATTRIBUTE                 NEW_VAL  OLD_VAL  FAILURE_THRESHOLD  STATUS       RAW_VALUE
      Raw_Read_Error_Rate     =   117      118           6           ok           120952088
      Reallocated_Sector_Ct   =    71       72          10           ok           38328
      Seek_Error_Rate         =    60      100          30           ok           1254718
      Spin_Retry_Count        =   100      100          97           near_thresh  0
      End-to-End_Error        =   100      100          99           near_thresh  0
      Reported_Uncorrect      =     1        1           0           near_thresh  14635
      Airflow_Temperature_Cel =    73       74          45           ok           27
      Temperature_Celsius     =    27       26           0           ok           27
      Current_Pending_Sector  =    94       91           0           ok           1112
      Offline_Uncorrectable   =    94       91           0           ok           1112
      No SMART attributes are FAILING_NOW

      1632 sectors were pending re-allocation before the start of the preclear.
      1624 sectors were pending re-allocation after pre-read in cycle 1 of 1.
      0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
      1112 sectors are pending re-allocation at the end of the preclear, a change of -520 in the number of sectors pending re-allocation.
      36072 sectors had been re-allocated before the start of the preclear.
      38328 sectors are re-allocated at the end of the preclear, a change of 2256 in the number of sectors re-allocated.
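For tracking a failing drive between preclear cycles, a one-line awk helper can pull a single attribute's raw value out of a report like the one above (a sketch of mine; it targets the preclear-style "NAME = ... RAW_VALUE" table shown in the post, not raw smartctl output, whose columns differ):

```shell
raw_attr() {
  # $1 = attribute name; table on stdin; prints the last column (raw value)
  awk -v name="$1" '$1 == name { print $NF }'
}

# example: raw_attr Reallocated_Sector_Ct < preclear_report.txt
```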