nsta

Everything posted by nsta

  1. Makes perfect sense, and it helps to know it has to be lower case as well... THANKS
  2. FIXED :) Thanks to all for their HELP... I don't know which part fixed it, but here were my steps:
     * Removed the movies folder off disk3 (which shouldn't have been there)
     * Used all-lowercase format for all my exclusions in the webGUI
     * Disabled user shares and restarted
     * Re-enabled shares after the reboot and didn't touch the share settings (left them as they were)
     Tested, and now everything I copy to my "Movies" share goes to the appropriate disk :) THANKS TO ALL
  3. So deleting the user shares from within the Unraid console is OK as well?
  4. Just to make sure... if I disable user shares, will that remove the contents of that share as well?
  5. PROBLEM - it didn't work!!! I deleted the movies folder off disk3 and rebooted my Unraid box... transferred a file over to my movies share... and it re-created a movies folder on disk3 (the data share's disk) and started copying to it accordingly!!! What should I try next?
  6. Cheers, I'll give that a shot tonight, I hope it works... And I think I do remember creating a TEMPORARY "movies" folder on the root of disk3, which explains all of this!... Just to make sure, it's only if the folder is on the ROOT of the disk that it will disregard the exclusions? i.e. in my case: I had a movies folder on the root of disk1 and disk2 (which is how it should be), and I had a temporary movies folder on the root of disk3, which is why it started copying files across to it and disregarded my exclusions? ... If, say, I had this structure: disk3/data/movies - would it still have adverse effects (even though it's not in the root of the disk)? Thanks... hope this was all it is
  7. My Unraid is 4.4.2. Hey all - Unraid has been running perfectly recently... until now. Here is my current Unraid setup:
     1 x 500GB disk for parity
     2 x 500GB disks for my "movies" share - disk1 and disk2 set as included disks and disk3 set as excluded
     1 x 500GB disk for my "data" share - I have set disk3 as included, and disk1 and disk2 as excluded disks
     Here's the problem: the high-water mark for my movies share (disk1 and disk2) reached 50%. I transferred a file onto my user share, and to my surprise the file moved across to disk3, which is set as an excluded disk on that share!!! What am I doing wrong? I have attached my user share setup if you can see anything wrong!... I need to sort this ASAP. Also, since the file has been moved to my excluded disk, is there any reason why I cannot transfer it over to the movies share manually?... It won't harm anything, will it? Thanks http://f.imagehost.org/view/0033/untitled_16
  8. Awesome - thanks! Will give it a go today... Is there any harm in using both /user and disk settings, if the disks are on a user share anyway? NAS advised that for him it was slightly faster having it set to disk, so I thought why not... Regards
  9. Does this script look right if I were to add disks 1, 2 and 3 to the script and remove /mnt/user? (A loop-style variant is sketched just below.)

     #!/bin/bash

     if [ ${DEBUG:=0} -gt 0 ]
     then set -x -v
     fi

     P=${0##*/}   # basename of program
     R=${0%%$P}   # dirname of program
     P=${P%.*}    # strip off after last . character

     cache_loop()
     {
       echo "$$" > /var/run/${P}.pid
       trap "rm -f /var/run/${P}.pid" EXIT HUP INT QUIT TERM

       logger -is -t${P} "Starting"

       while [ -f /var/run/${P}.pid ]
       do
           ls -R /mnt/disk1 >/dev/null 2>&1
           ls -R /mnt/disk2 >/dev/null 2>&1
           ls -R /mnt/disk3 >/dev/null 2>&1
           sleep 10
       done

       logger -is -t${P} "Terminating"
     }

     if [ -f /var/run/${P}.pid ]
     then
         echo "$0: already running? pidfile: /var/run/${P}.pid"
         ps -fp $(</var/run/${P}.pid)
         exit
     fi

     cache_loop > /var/log/${P}.log 2>&1 &
     JPID=$!
     logger -is -t${P} "Spawned (Pid=$JPID)"
     # ps -fp "$JPID"
     disown "$JPID"
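     If it reads more cleanly, the body of the while loop could also be written as a small loop over the mount points - a sketch only, with the same behaviour assumed:

     # Sketch: same effect as the three separate ls lines above,
     # just iterating over the disk mount points.
     for d in /mnt/disk1 /mnt/disk2 /mnt/disk3
     do
         ls -R "$d" >/dev/null 2>&1
     done
     sleep 10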
  10. Thanks - so much help here in these forums, it's awesome :) I'm going to give it a try tonight... A few quick questions:
      1) Before I have the script running, I'd like to do some quick tests, before and after... is there an easy way to do this (via the command line)? (A rough before/after timing test is sketched just below.)
      2) I have disks 1 and 2 that span one user share (movies), and I have disk3 that's solely used for data, which I transfer to directly (I don't use user shares for disk3)... do I need to add another command to enable the ls -R hack for disk3?
      3) Would there ever be a reason to kill the PID? Thanks :)
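      For question 1, one rough before/after test is simply timing a full directory listing from the console - a sketch only, assuming the shares live under /mnt/user; results will vary with how much is already cached:

      # Time a recursive listing before the cache script is running (drives may spin up):
      time ls -R /mnt/user >/dev/null 2>&1

      # Start the cache script, wait a little, then run the same command again;
      # the second run should come back much faster if the directory entries are cached in RAM.
      time ls -R /mnt/user >/dev/null 2>&1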
  11. Hey Weebotech, your information is very useful... I'd like to try this myself and see what sort of real-world improvements I get... Could you confirm I'm doing everything correctly, step by step... I'm still really Linux/command-line illiterate!
      1) First create a "cache.sh" script with the following contained within it:

      #!/bin/bash

      if [ ${DEBUG:=0} -gt 0 ]
      then set -x -v
      fi

      P=${0##*/}   # basename of program
      R=${0%%$P}   # dirname of program
      P=${P%.*}    # strip off after last . character

      cache_loop()
      {
        echo "$$" > /var/run/${P}.pid
        trap "rm -f /var/run/${P}.pid" EXIT HUP INT QUIT TERM

        logger -is -t${P} "Starting"

        while [ -f /var/run/${P}.pid ]
        do
            ls -R /mnt/user >/dev/null 2>&1
            sleep 10
        done

        logger -is -t${P} "Terminating"
      }

      if [ -f /var/run/${P}.pid ]
      then
          echo "$0: already running? pidfile: /var/run/${P}.pid"
          ps -fp $(</var/run/${P}.pid)
          exit
      fi

      cache_loop > /var/log/${P}.log 2>&1 &
      JPID=$!
      logger -is -t${P} "Spawned (Pid=$JPID)"
      # ps -fp "$JPID"
      disown "$JPID"

      2) I then place this script in /boot; I'm assuming the .pid file is created automatically?... And do I have to rename "user" in the line below to the name of my user share (i.e. Movies) or not?
      ls -R /mnt/user >/dev/null 2>&1
      3) I then add this to my go script: /boot/cache.sh
      (A possible way of wiring this in is sketched just below.) Thanks in advance!
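      For steps 2 and 3, assuming the script is saved as /boot/cache.sh and the go script lives at /boot/config/go (the usual locations on unRAID of this era, but worth double-checking), wiring it in might look like:

      # One-off test from the console; the script backgrounds itself and, with this
      # filename, writes /var/run/cache.pid and logs to /var/log/cache.log.
      bash /boot/cache.sh

      # To start it automatically at boot, append a line to the go script:
      echo "bash /boot/cache.sh" >> /boot/config/go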
  12. Cheers - does it all look good?... Is there any way to know if it really is working or not, given that currently I'm not really having an issue? I've got 2GB of RAM, so I don't know if it'll suffice...
  13. Sorry, I'm really new to Linux, but I hope I got this right... Do I copy the contents below to a .txt file, save it as cachedaemon.sh in /boot/scripts/, then call the script with:
      nohup nice /boot/scripts/cachedaemon.sh &
      I noticed yours is for user shares... if I wanted it for a specific disk, would I use:

      #!/bin/sh
      i=1
      while [ 1 ]
      do
         ls -R /mnt/hda >/dev/null 2>&1
         # Modify sleep time (in seconds) as needed below
         sleep 10
         #let i=i+1
         # echo $i;
      done

      OR

      #!/bin/sh
      i=1
      while [ 1 ]
      do
         ls -R /mnt/disk3 >/dev/null 2>&1
         # Modify sleep time (in seconds) as needed below
         sleep 10
         #let i=i+1
         # echo $i;
      done

      Regards,
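      If the nohup/nice route is used, launching and checking one of those loops might look like this sketch (script name and path as in the post):

      # Launch the loop in the background at low priority, detached from the terminal:
      nohup nice /boot/scripts/cachedaemon.sh &

      # Confirm it is running and note its PID (kill <PID> stops it later):
      ps aux | grep cachedaemon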
  14. Just going by what JimWhite said... does this look about right? OK, so obviously I must have to create some sort of custom script to let Unraid know to run my ls-r.sh script?... Is this correct? How do I create this?... And from what he's said, I guess I should add this line to that custom script:
      echo "*/1 * * * * /boot/bin/ls-r.sh >/dev/null 2>&1" >> /var/spool/cron/crontab.5000
      I then need to create an ls-r.sh script with contents as follows:
      ls -R /mnt/user >/dev/null 2>&1
      sleep 30
      ls -R /mnt/user >/dev/null 2>&1
      Does this mean it will not spin up the drives so often when browsing the contents of my user shares?... I have one drive I don't use as a user share and browse directly off the disk... would this mean I'd also use this:
      ls -R /mnt/hda >/dev/null 2>&1
      sleep 30
      ls -R /mnt/hda >/dev/null 2>&1
      "hda" is the disk I'm referring to, which is disk3 in my Unraid system... Help much appreciated. (A combined version is sketched just below.)
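      A combined ls-r.sh covering both the user shares and the directly-accessed disk might look like this sketch (paths taken from the post above; the crontab line would then call it every minute):

      #!/bin/sh
      # Sketch of /boot/bin/ls-r.sh: walk the user shares and the direct disk
      # so their directory entries stay cached between cron runs.
      ls -R /mnt/user >/dev/null 2>&1
      ls -R /mnt/hda >/dev/null 2>&1
      sleep 30
      ls -R /mnt/user >/dev/null 2>&1
      ls -R /mnt/hda >/dev/null 2>&1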
  15. I say 1-3 titles because I may be watching one, my son another, and my partner another... all over the network... I wish I had time to watch 3 titles in a day on my own!
  16. I have four HDDs total, which includes one parity... I have one data drive that is solely used for filing and data work, which I would be constantly using during the day, whilst the others not so much, as they are media disks and only get used when watching films or shows (I may watch 1-3 titles daily). How about the parity drive?... Should this be fine spinning down whilst the others are spun up?... No problem when writing to a disk whilst the parity takes time to spin up? ... I'm with CHRIZ, I would like to try the ls -R trick (I also checked the forums and found variations of the script)... Any help would be much appreciated :) ... I have 2GB of DDR667; I don't know if it would suffice, but I can easily upgrade to 4GB for FREE :) Thanks!
  17. Yeah, I was wanting a Conroe, though my motherboard doesn't support it :(
  18. Hey all - I just added another parity drive to my system... I'm really curious to know what the read speeds are for the parity drive; how could I test? (One possible check is sketched just below.) Thanks :)
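      One common way to get a rough read-speed figure is hdparm, assuming it is included in the unRAID build; the device name below is only a placeholder for whatever the parity drive actually is:

      # Cached (-T) and buffered disk (-t) read timings; substitute the real device:
      hdparm -tT /dev/hdX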
  19. I'm still transferring files off it... should be tomorrow before the replacement.
  20. Thanks - I've set mine individually and all seems to be fine so far :) ... Now to my next question... is spinning drives up and down OK for the HDDs?... I mean, doing it too often won't cause them to start deteriorating, will it?
  21. Recently I've noticed that even though the disk is set to spin down after 15 minutes, it doesn't (and there are definitely no reads/writes to it either)... Is this a sure sign that the disk has started to fail, since it doesn't want to spin down any more? (A quick way to check the actual power state is sketched below.) Thanks
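      To double-check what the drive is actually doing, hdparm can report the power state without changing it (device name is a placeholder):

      # Reports "active/idle" when spinning or "standby" when spun down:
      hdparm -C /dev/hdX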
  22. Hey guys, just wanted to know if this CPU will be fine for Unraid... Celeron D 2.66GHz/256/533. It's going for a really good price down here in NZ, and I was thinking of buying it... Is the 533MHz FSB and 256KB cache going to be OK?... And do these CPUs run at a little less voltage and heat than the others?... Thanks in advance :)
  23. Thanks - yeah, I really like my cooling now... my disks now run no hotter than 32°C :) ... very happy
  24. Just did another SMART report today... here are the results. It looks like it may be failing?

      Statistics for /dev/hda ST3500630A_6QG1P3L9

      smartctl version 5.36 [i486-slackware-linux-gnu] Copyright © 2002-6 Bruce Allen
      Home page is http://smartmontools.sourceforge.net/

      === START OF INFORMATION SECTION ===
      Device Model:     ST3500630A
      Serial Number:    6QG1P3L9
      Firmware Version: 3.AAF
      User Capacity:    500,107,862,016 bytes
      Device is:        Not in smartctl database [for details use: -P showall]
      ATA Version is:   7
      ATA Standard is:  Exact ATA specification draft version not indicated
      Local Time is:    Sun Jan 18 07:12:48 2009 GMT-12
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled

      === START OF READ SMART DATA SECTION ===
      SMART overall-health self-assessment test result: FAILED!
      Drive failure expected in less than 24 hours. SAVE ALL DATA.
      See vendor-specific Attribute list for failed Attributes.

      General SMART Values:
      Offline data collection status:  (0x82) Offline data collection activity was completed without error.
                                              Auto Offline Data Collection: Enabled.
      Self-test execution status:      (  73) The previous self-test completed having a test element that failed
                                              and the test element that failed is not known.
      Total time to complete Offline data collection:       ( 430) seconds.
      Offline data collection capabilities:          (0x5b) SMART execute Offline immediate.
                                              Auto Offline data collection on/off support.
                                              Suspend Offline collection upon new command.
                                              Offline surface scan supported.
                                              Self-test supported.
                                              No Conveyance Self-test supported.
                                              Selective Self-test supported.
      SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                              Supports SMART auto save timer.
      Error logging capability:        (0x01) Error logging supported.
                                              General Purpose Logging supported.
      Short self-test routine recommended polling time:     (   1) minutes.
      Extended self-test routine recommended polling time:  ( 163) minutes.
      SMART Attributes Data Structure revision number: 10
      Vendor Specific SMART Attributes with Thresholds:
      ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
        1 Raw_Read_Error_Rate     0x000f   108   092   006    Pre-fail  Always       -       101125128
        3 Spin_Up_Time            0x0003   093   092   000    Pre-fail  Always       -       0
        4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       144
        5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
        7 Seek_Error_Rate         0x000f   059   059   030    Pre-fail  Always       -       1864400238952
        9 Power_On_Hours          0x0032   092   092   000    Old_age   Always       -       7104
       10 Spin_Retry_Count        0x0013   052   052   097    Pre-fail  Always   FAILING_NOW 0
       12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       137
      187 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
      189 Unknown_Attribute       0x003a   100   100   000    Old_age   Always       -       0
      190 Unknown_Attribute       0x0022   067   057   045    Old_age   Always       -       572522529
      194 Temperature_Celsius     0x0022   033   043   000    Old_age   Always       -       33 (Lifetime Min/Max 0/17)
      195 Hardware_ECC_Recovered  0x001a   064   055   000    Old_age   Always       -       204214132
      197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
      198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
      199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
      200 Multi_Zone_Error_Rate   0x0000   100   253   000    Old_age   Offline      -       0
      202 TA_Increase_Count       0x0032   100   253   000    Old_age   Always       -       0

      SMART Error Log Version: 1
      No Errors Logged

      SMART Self-test log structure revision number 1
      Num  Test_Description  Status                      Remaining  LifeTime(hours)  LBA_of_first_error
      # 1  Short offline     Completed: unknown failure  90%        7104             935713446
      # 2  Short offline     Completed without error     00%        7074             -
      # 3  Short offline     Aborted by host             40%        7073             -
      # 4  Short offline     Completed without error     00%        7073             -
      # 5  Short offline     Completed without error     00%        7073             -

      SMART Selective self-test log data structure revision number 1
       SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
          1        0        0  Not_testing
          2        0        0  Not_testing
          3        0        0  Not_testing
          4        0        0  Not_testing
          5        0        0  Not_testing
      Selective self-test flags (0x0):
        After scanning selected spans, do NOT read-scan remainder of disk.
      If Selective self-test is pending on power-up, resume after 0 minute delay.
  25. I'm having a weird problem with user shares. Basically, yesterday I added a new disk (primarily for data files, not media)... I only had one share in the system prior, and I excluded my new disk from being part of it. Yesterday I copied my contents across to this disk directly (i.e. \\tower\disk3)... the folder name was "data", with all my subfolders below it, i.e. \\tower\disk3\data\... Today I wake up to find, in the web management under user shares, that there's a new user share created called "data"... I definitely did not create this, as I was going to copy direct to disk for my data instead (for the speed increase, and because I have no need for a split level to span across disks). Any help? I'm running 4.3.3