FrozenGamer

Members
  • Posts

    368
  • Joined

  • Last visited

Everything posted by FrozenGamer

  1. It is normal that new drives are seen as unformatted. Parity has no idea what is on the disks and as such is unaware of the data on them at the file system level; it just sees each disk as a bunch of sectors that need protecting against the failure of any disk, so yes, on any new system the whole of each disk will be read as part of creating parity. Would this email/popup be normal?
Event: unRAID Parity disk error
Subject: Warning [PIPE] - Parity disk, parity-sync in progress
Description: ST8000AS0002-1NA17Z_Z8408NKD (sdb)
Importance: warning
  2. I am creating a 2nd server - testing it for now, then putting data on it once I trust it. I have a few questions if someone can help. 1 - The unRAID IRC channel says it is broken - should I just wait for beta 22? At the moment all I need to do is load it up with data and share it over Samba; no VMs, though at some point I would use Docker containers. 2 - I precleared 4 Seagate 8TB drives harvested from USB enclosures, 2 cycles each. After that I made one of them parity and assigned 3 as data, then I just tried to mount them. It appears there are some errors/warnings - are these normal, in that the server sees the drives aren't formatted and formats them? I see a number of other errors in my syslog as well; is there something wrong? 3 - It is doing a parity sync/data rebuild, at 2.2%. I know a parity sync is normal when creating an array for the first time, but is it normal for it to take so long when there is no actual data yet? If someone has time to look at my log and answer these questions I would appreciate it, before I move forward with transferring data over and enabling my 2nd unRAID Pro key on this trial. pipe-diagnostics-20160411-1152.zip
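In case it helps anyone else watching a first parity sync crawl along, it looks like the progress can also be polled from the console - just a sketch, and it assumes mdcmd on this beta still reports the mdResync fields in its status output:
    watch -n 60 "mdcmd status | grep -i resync"   # re-runs every 60s and prints the resync position/size counters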
  3. Thanks Brit for the quick reply; I am going to keep those 2 cycles going and then create the array. I don't even want to put data I can't afford to lose on this yet, since I am using drives harvested from Seagate 8TB external enclosures.
  4. I am building a new test unRAID server with 6.2 beta 21 - fresh install. I have 4 harvested Seagate 8TB drives. I found the beta preclear plugin and I am running it on the drives, 2 cycles per drive, and it appears to be working. But I am reading a few comments saying preclear isn't necessary anymore. I am assuming the 6.2 beta can clear a new drive without bringing the array down, but doesn't actually stress-test the drive and predict reliability the way preclear does? By the way, thanks for providing the beta plugin and all of the time that went into it.
  5. OK, it looks like I am back online, and as far as I can tell the new config worked; however, things were very weird for a bit and it took a few reboots to work. I had to unplug the machine because the array wouldn't stop. When I got to the machine's monitor it was showing some errors, which I didn't take the time to write down (oops). I restarted, got access to the webgui, and started the array. It was on "starting dockers" when the entire network in the house went down - all computers were inaccessible, games got disconnected, etc. I unplugged the network cable from my server and the problem went away; at that time it showed the following error: "Tower login: XFS (md1): metadata I/O error: block 0x74702fd8 ("xfs_trans_read_buf_map") error 5 numblks 16". I unplugged it again, rebooted, and started the array via the GUI, and it all appears to be working fine - parity is rebuilding and the dockers are up and running. I am seeing the following new SMART error on drive 7, though:
188 Command_Timeout 0x0032 100 099 000 Old_age Always Never 4295032833
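Side note for future me: if that XFS error comes back, it sounds like the filesystem can be checked read-only from the console - just a sketch, not something I have run yet, and it assumes the array is started in maintenance mode so /dev/md1 exists but is not mounted:
    xfs_repair -n /dev/md1   # -n = no-modify mode, only reports problems, changes nothing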
  6. Thanks! I appreciate the help and patience from all. Just as a matter of note, if another Linux newb reads this in the future: I am successfully copying the files with the following command after typing mc in telnet:
cp -r -v /mnt/disk15/* /mnt/disk18
Off for a much-procrastinated jog on a balmy 48 degrees Fahrenheit day in Alaska. Question for Trurl: if the same files exist on both disk 15 and disk 18, is that transparent to the shares/system? i.e., during the 24 hours or so it will take to copy there will be duplicate files, but they won't be visible as duplicates in the share?
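Also, one more note for other Linux newbs reading this later: rsync would apparently do the same job and can be re-run if the copy gets interrupted, skipping what already made it over - a sketch using my disk numbers:
    rsync -av /mnt/disk15/ /mnt/disk18/   # -a preserves permissions/timestamps, -v lists each file as it copies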
  7. Thanks Trurl: so I can move the data from the 6TB to the 5TB with both in the array using mc, stop the array, then do a new config, making sure the parity disk stays assigned as the parity disk? Should all the shares etc. remain intact after the new config? I apologize for repeating questions, but it's 80 TB of data and it's a bit scary.
  8. I also found this, which seems to apply and says it is still accurate: https://lime-technology.com/forum/index.php?topic=37210.msg343941#msg343941
  9. OK, I have found the mc app via telnet to the server (the GUI version). I haven't used it before, but is this the correct idea/plan?
1 - I have the mover for my 5TB cache set for 3:40 am; disable the mover schedule if it appears this will not copy in time.
2 - Add the precleared, empty 5TB drive to the array, assuming it will stay empty as long as my cache is holding any new data.
3 - Copy the entire contents of the 6TB drive to the empty 5TB drive with mc, stop the array, and pull out the 6TB. (Would I not move rather than copy?)
4 - Figure out how to use New Config. (I have found the following reference to unRAID 5 stuff, which also refers me to this as a more up-to-date version: http://lime-technology.com/wiki/index.php/Shrink_array ) - "On the 5.X series of unRAID: Stop the array by pressing "Stop" on the management interface. Un-assign the drive you wish to remove. On the Utils tab you'll see a New Config option. Invoking it will set a new disk configuration. A side effect will be the immediate invalidation of any prior parity calculation. When the Apply button is clicked, old parity data will be immediately discarded. When you next start the array, the process of parity calculation on the remaining assigned and working drives will begin. At this point, your array will not be again protected from a disk failure until the system can complete the process of generating new parity information."
Is it possible to forgo all this and just pull the 6TB (with 4.5TB of space used) out, replace it with the precleared 5TB, and rebuild onto the 5TB as if it were the 6TB? Or, essentially, with the suggested plan, would I be moving (versus copying) all the data off the 6TB I would like to remove and make back into a hot spare, then doing a New Config to restart the array without the 6TB in it? My other option is to introduce Seagate 8TB drives harvested from USB enclosures into the array, but that scares me a bit, as the only reports I have read on the forum involve internal 8TB Seagates, so until I hear more I am waiting.
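One extra step I am planning before the New Config, to double-check the copy actually matches before pulling the 6TB - just a sketch, where diskX stands in for the 6TB source slot and diskY for the 5TB destination slot (whatever numbers those end up being):
    du -s /mnt/diskX /mnt/diskY      # totals should be close; small differences from filesystem overhead are normal
    diff -rq /mnt/diskX /mnt/diskY   # lists any file that is missing or differs between the two disks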
  10. What is the easiest way to copy the data from the 6TB to the 5TB? I am assuming I would not want the 5TB to be part of the array yet, or it might get other data written onto it?
  11. Can I replace a 6TB drive with a 5TB drive? That way I can put my 6TB back in as a hot spare in case of a failure. It seems 6TB drives are pretty expensive at the moment.
  12. Thanks for the replies. I have started my first parity rebuild - I can't believe I'm replacing a Toshiba with a Seagate; I never thought that would happen. It appears that the old data is "emulated" while it rebuilds, so I won't even lose access to the data. I am a newbie in the unRAID world and am pretty impressed with the simplicity. I assume that any data still on my cache will be fine? (I think the mover is set to start in about 20 hours, possibly before the rebuild is done.) Toshiba is calling me back on the RMA as to which division of their company will handle it. They said they have a cross-shipment program, so I will not have to ship the drive first. Will post when they call me back and the RMA is finalized. No problems at all on the RMA via phone: they didn't ask what was wrong with the drive, they are cross-shipping another drive, and I will pay for return shipping. All they needed was the serial number, a shipping address, and a credit card as collateral until they receive the drive.
  13. I just noticed a yellow triangle on this drive - I don't think it has been in service too long. The only other unusual factor since I noticed it is that I switched which box was connected to 2 of my 16-bay drive enclosures attached via SAS cable. I rebooted the box and the cable was a little loose, so half the drives were not showing; I just pushed the SAS cable in so that it was firmly plugged in and noticed the error after that. Initially the drives weren't showing up, and a moment later they were there when I refreshed. Also, the enclosure is supposed to be hot-swappable, but I'm not sure pushing cables into the card is OK. It is entirely possible this is just a coincidence, but I thought I would mention it. Here is the SMART report; Current_Pending_Sector and Reallocated_Sector_Ct are the 2 flagged in yellow on the box.
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.13-unRAID] (local build)
Copyright © 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: TOSHIBA MD04ACA500
Serial Number: 55E4K5GTFS9A
LU WWN Device Id: 5 000039 64b88173e
Firmware Version: FP2A
User Capacity: 5,000,981,078,016 bytes [5.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ATA8-ACS (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Wed Jan 13 12:22:48 2016 PST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run.
Total time to complete Offline data collection: ( 120) seconds.
Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
Short self-test routine recommended polling time: ( 2) minutes.
Extended self-test routine recommended polling time: ( 584) minutes.
SCT capabilities: (0x003d) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0
2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0
3 Spin_Up_Time 0x0027 100 100 001 Pre-fail Always - 8914
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 384
5 Reallocated_Sector_Ct 0x0033 100 100 050 Pre-fail Always - 32
7 Seek_Error_Rate 0x000b 100 099 050 Pre-fail Always - 0
8 Seek_Time_Performance 0x0005 100 100 050 Pre-fail Offline - 0
9 Power_On_Hours 0x0032 095 095 000 Old_age Always - 2028
10 Spin_Retry_Count 0x0033 107 100 030 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 1
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 0
193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 2136
194 Temperature_Celsius 0x0022 100 100 000 Old_age Always - 29 (Min/Max 19/34)
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 3
197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 816
198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 253 000 Old_age Always - 0
220 Disk_Shift 0x0002 100 100 000 Old_age Always - 0
222 Loaded_Hours 0x0032 098 098 000 Old_age Always - 1167
223 Load_Retry_Count 0x0032 100 100 000 Old_age Always - 0
224 Load_Friction 0x0022 100 100 000 Old_age Always - 0
226 Load-in_Time 0x0026 100 100 000 Old_age Always - 300
240 Head_Flying_Hours 0x0001 100 100 001 Pre-fail Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
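Follow-up note to self, since the log shows no self-tests have ever been run on this drive: it looks like an extended test can be kicked off from the console - just a sketch, with sdX standing in for whatever device letter the Toshiba shows up as, and the ~584 minute runtime estimate coming from the report above:
    smartctl -t long /dev/sdX       # start the extended (long) self-test; it runs in the background on the drive
    smartctl -l selftest /dev/sdX   # check the self-test log once it finishes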
  14. dannioj, I didn't see anything in your post about whether your drives were harvested or internal? Thanks for posting.
  15. To all that have replied, thanks for the informative answers! Ohlwiler, are you using 8TB drives you harvested, or ones purchased as internals?
  16. Hi Brit, thanks for your help. This caused me to learn a bit more just figuring out how to do the full SMART report (which is good)... Here it is.
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.7-unRAID] (local build)
Copyright © 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: ST8000AS0002-1NA17Z
Serial Number: Z8408NKD
LU WWN Device Id: 5 000c50 086e8af53
Firmware Version: AR15
User Capacity: 8,001,563,222,016 bytes [8.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5980 rpm
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Sat Dec 5 09:09:55 2015 PST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run.
Total time to complete Offline data collection: ( 0) seconds.
Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
Short self-test routine recommended polling time: ( 1) minutes.
Extended self-test routine recommended polling time: ( 923) minutes.
Conveyance self-test routine recommended polling time: ( 2) minutes.
SCT capabilities: (0x30a5) SCT Status supported. SCT Data Table supported.
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 111 099 006 Pre-fail Always - 37767840
3 Spin_Up_Time 0x0003 094 094 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 6
5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x000f 072 060 030 Pre-fail Always - 16773827
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 63
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 5
183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0
184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0
189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0022 076 072 045 Old_age Always - 24 (Min/Max 20/28)
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 3
193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 8
194 Temperature_Celsius 0x0022 024 040 000 Old_age Always - 24 (0 20 0 0 0)
195 Hardware_ECC_Recovered 0x001a 111 099 000 Old_age Always - 37767840
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 67473936220223
241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 15628972484
242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 32155612333
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
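For anyone else trying to figure out how to pull the full report like I was: it comes from running smartctl against the drive from telnet, something like the following (the device letter will vary by system, so sdX is just a stand-in):
    smartctl -a /dev/sdX   # full report: info section, health, attributes, error log, and self-test log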
  17. I didn't see anyone answering this question yet. I just ordered 5 from Amazon and have precleared one successfully, but I am not sure whether these SMART values are OK. Can anyone comment? Is anyone using the harvested drives yet? I am hesitant to use them, but would love to have the increased capacity.
================================================================== 1.15b
= unRAID server Pre-Clear disk /dev/sdv
= cycle 1 of 1, partition start on sector 1
= Disk Pre-Clear-Read completed DONE
= Step 1 of 10 - Copying zeros to first 2048k bytes DONE
= Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
= Step 3 of 10 - Disk is now cleared from MBR onward. DONE
= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4 DONE
= Step 5 of 10 - Clearing MBR code area DONE
= Step 6 of 10 - Setting MBR signature bytes DONE
= Step 7 of 10 - Setting partition 1 to precleared state DONE
= Step 8 of 10 - Notifying kernel we changed the partitioning DONE
= Step 9 of 10 - Creating the /dev/disk/by* entries DONE
= Step 10 of 10 - Verifying if the MBR is cleared. DONE
= Disk Post-Clear-Read completed DONE
Disk Temperature: 26C, Elapsed Time: 59:01:20
======================================================================== 1.15b
== ST8000AS0002-1NA17Z Z8408NKD
== Disk /dev/sdv has been successfully precleared
== with a starting sector of 1
============================================================================
** Changed attributes in files: /tmp/smart_start_sdv /tmp/smart_finish_sdv
ATTRIBUTE NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS RAW_VALUE
Raw_Read_Error_Rate = 117 103 6 ok 149191024
Seek_Error_Rate = 72 100 30 ok 16611905
Spin_Retry_Count = 100 100 97 near_thresh 0
End-to-End_Error = 100 100 99 near_thresh 0
Airflow_Temperature_Cel = 74 77 45 ok 26
Temperature_Celsius = 26 23 0 ok 26
Hardware_ECC_Recovered = 117 103 0 ok 149191024
No SMART attributes are FAILING_NOW
0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
0 sectors are pending re-allocation at the end of the preclear, the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear, the number of sectors re-allocated did not change.
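Partially answering my own question after some reading: the scary-looking Seagate raw numbers for Raw_Read_Error_Rate and Seek_Error_Rate are apparently packed counters rather than plain error counts (the common interpretation is an error count in the upper bits and a total operation count in the low 32 bits), and the normalized VALUE/WORST columns are what matter. As a sanity check, and only if that interpretation is right, this is a quick way to split the raw value:
    raw=16611905                                                   # Seek_Error_Rate raw value from the report above
    echo "errors=$(( raw >> 32 )) operations=$(( raw & 0xFFFFFFFF ))"   # prints errors=0 operations=16611905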
  18. It turns out these are quite easy to harvest compared to the older version of this drive - this one is blue, and I was able to open it with a flat-head screwdriver and a few quarters. There is no apparent damage to the enclosure and no serial number on the enclosure, so it would be pretty easy to keep it and return the enclosure in the event the drive failed. I did 1 preclear successfully; here are the results. Some of the numbers seem high, but I don't really know that much about SMART values, so I will preclear another drive as well as this one again. Also, I had problems with clearing the 5TB drives I harvested from Seagate Backup Plus enclosures a year or so ago, so I never used them outside of an enclosure. I bought 5 of these 8TB units during Amazon's sale and would love to hear others' experiences with the harvested 8TB's. What I have read here is that the drives purchased as 8TB internals are working fine. Update - I have asked these questions and gotten some good answers, if anyone else stumbles on this: http://lime-technology.com/forum/index.php?topic=39526.0
================================================================== 1.15b
= unRAID server Pre-Clear disk /dev/sdv
= cycle 1 of 1, partition start on sector 1
= Disk Pre-Clear-Read completed DONE
= Step 1 of 10 - Copying zeros to first 2048k bytes DONE
= Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
= Step 3 of 10 - Disk is now cleared from MBR onward. DONE
= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4 DONE
= Step 5 of 10 - Clearing MBR code area DONE
= Step 6 of 10 - Setting MBR signature bytes DONE
= Step 7 of 10 - Setting partition 1 to precleared state DONE
= Step 8 of 10 - Notifying kernel we changed the partitioning DONE
= Step 9 of 10 - Creating the /dev/disk/by* entries DONE
= Step 10 of 10 - Verifying if the MBR is cleared. DONE
= Disk Post-Clear-Read completed DONE
Disk Temperature: 26C, Elapsed Time: 59:01:20
======================================================================== 1.15b
== ST8000AS0002-1NA17Z Z8408NKD
== Disk /dev/sdv has been successfully precleared
== with a starting sector of 1
============================================================================
** Changed attributes in files: /tmp/smart_start_sdv /tmp/smart_finish_sdv
ATTRIBUTE NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS RAW_VALUE
Raw_Read_Error_Rate = 117 103 6 ok 149191024
Seek_Error_Rate = 72 100 30 ok 16611905
Spin_Retry_Count = 100 100 97 near_thresh 0
End-to-End_Error = 100 100 99 near_thresh 0
Airflow_Temperature_Cel = 74 77 45 ok 26
Temperature_Celsius = 26 23 0 ok 26
Hardware_ECC_Recovered = 117 103 0 ok 149191024
No SMART attributes are FAILING_NOW
0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
0 sectors are pending re-allocation at the end of the preclear, the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear, the number of sectors re-allocated did not change.
  19. http://www.amazon.com/Seagate-Desktop-External-Storage-STDT8000100/dp/B00R45V3SW/ref=sr_1_1?s=pc&ie=UTF8&qid=1448903620&sr=1-1&keywords=seagate+8tb
Has anyone harvested and used these yet? I know there has been success with the internal 8TB. Also, if you have a Chase Freedom card with the Amazon 10% back enabled, it's going to be $162.00. I have 4 coming from various Amazon orders, so I can answer the harvest question sometime soon. So I added my wife to Amazon family Prime benefits to get a 2nd drive, and the deal may have expired while it was in the cart. Update - got it again on her account 30 minutes later, using the Amazon store card, so $171.00 with 5% back. I sure hope these work as internal drives.
  20. Here is a camelcamelcamel link showing the current price is a good one: http://camelcamelcamel.com/Seagate-Desktop-External-Storage-STDT8000100/product/B00R45V3SW?active=watch
I have activated 10% back at Amazon for the Chase Freedom card, so this would be $180.00 shipped to Alaska. There is also a $15 off $60 Amex offer out there for Amazon purchases. So I am thinking of buying a few of these. I am reading that the internal version of the drive is OK for unRAID, but has anyone been pulling these out of the USB enclosures and using them as desktop drives with success? The 5TB Seagate Backup drives I pulled failed preclear, and I had read that they can't be used as desktop drives. Please chime in if you have experience with the 8TB USB version, have removed the drive, and put it in a PC or unRAID server. Thanks and Happy Holidays. Update - I decided to buy 3. Searching for definitive answers hasn't been going so well; hopefully it is the same drive inside as the internal version, which sells for more.
  21. I have a few questions and a suggestion. 1. Super newb question, but should I just use the defaults and press Preclear? Do I need to do the 63 or 64 sector thing? My suggestion is to spell out exactly which settings should be used (perhaps in the first post; I scanned the entire thread and didn't see it). My assumption is to only change "skip pre-read" to yes and cycles to 3, leave the rest of the settings the same, and then run it? 2. Can I just delete the old preclear script and rename preclear_bjp.sh to preclear_disk.sh in the install folder using //tower/flash/ from a Windows machine? Do I need to reboot again to get it to recognize the change? It recognizes the original script now. Thanks.
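In case it is easier from telnet than over //tower/flash/, something like this should do the swap - just a sketch: I am not sure exactly which folder on the flash drive the plugin installs to (hence the find), PLUGDIR is a hypothetical helper variable, and keeping the old script as .orig instead of deleting it seems safer:
    find /boot -name 'preclear*.sh'                                          # locate both scripts on the flash drive
    PLUGDIR=$(dirname "$(find /boot -name preclear_disk.sh | head -n 1)")    # folder of the current script
    mv "$PLUGDIR/preclear_disk.sh" "$PLUGDIR/preclear_disk.sh.orig"          # keep the original as a backup
    cp "$PLUGDIR/preclear_bjp.sh" "$PLUGDIR/preclear_disk.sh"                # put the bjp version in its place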
  22. The mover didn't make a difference, but I noticed that SickRage didn't see the drives anymore (mapped network drives) - so I assume SickRage was started in a state where the mapped drives were not connected. A 2nd reboot of the Windows machine seems to have solved it where the 1st one didn't.
  23. There are 14 hard drives in the unRAID box; I am not sure the files are all on the same drive, if that makes a difference. Yes on cutting - I will edit the post to say cut and paste. Also, I can double-click the files and open them (TV shows), even though I don't think they are actually moving. When I try to add them to SickRage via the GUI, it doesn't see the directory. Yesterday it did see the directory after a lengthy copy. I have also taken the array down on the unRAID box and restarted it; that didn't help. My unRAID knowledge is not that extensive - perhaps this is normal behaviour? The next step is invoking the mover to see if that helps.
  24. I am pretty sure this is happening on the unRAID box, not on the Windows machines, because it is happening on both a Windows 7 x64 and a Windows Vista x64 machine. I cut (Ctrl-X) a few folders from a share on unRAID, then I paste (Ctrl-V). It instantly shows as copied, but I don't really think it is. Then, when I undo the copy, it takes 5 minutes to copy the folder back. I rebooted the Windows machine and that didn't solve it. Also, it was behaving normally for several hours and then started doing this. Any ideas? unRAID version 6.1.3.
  25. http://slickdeals.net/f/8004703-amazon-124-99-crucial-16gb-2-x-8gb-240-pin-ddr3-sdram-ecc-unbuffered-ddr3-server-memory?p=77303829&utm_source=dealalerts&utm_medium=em-i&utm_term=26994&utm_content=u226989&utm_campaign=tu-9999#post77303829
I think this is probably a fairly good deal. I'm not sure how important ECC is to unRAID, but it can't hurt to have some of this around before the price goes back up.