Posts posted by Hoopster

  1. I recently went through the reiserfs ---> xfs transition on my array as well.  I got the same results you did, just a list of the contents of the first disk in verify.txt.  To "verify" the contents, I ended up doing a directory compare on each disk from Windows to confirm that the number of files and the size totals were exactly equal.  I also spot-checked several files in each directory by comparing and opening them (pictures, videos, movies and TV shows).  The array has been in use for several days now since the transition and I have not yet encountered a problem file.
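
     For anyone doing the same kind of check from the unRAID console instead of from Windows, a rough sketch of the idea (the disk paths are just examples; adjust them to the pair of disks being compared, and note that the checksum pass is slow):

        find /mnt/disk1 -type f | wc -l                         # file count on the source disk
        find /mnt/disk5 -type f | wc -l                         # file count on the copy; should match
        du -sb /mnt/disk1 /mnt/disk5                            # total bytes on each disk; should be very close
        rsync -rcn --out-format="%n" /mnt/disk1/ /mnt/disk5/    # dry run: lists any file whose checksum differs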

  2. If you have allowed the User Shares to use all disks, then you do not have to do anything else. 

     

     If you have restricted any shares to only use specific disks, then afterwards you should change the share settings to match your new layout.

     

     OK, thanks.  I have several shares restricted to certain disks and one that uses all current data disks.  For example, I have a Pictures share that uses all the current data disks (disk1 - disk4).  After the changes, would I need to set that to disk1, disk2, disk3, disk5?

     

    I have a Videos share that uses disk2, disk3, disk4 so that would require a change to disk1, disk2, disk3, correct?  And so on with all shares where the contents have moved to different physical disks?

     

     

    EDIT:  Perhaps the easiest thing to do would be to change all shares to use all disks (at least the larger ones that already span multiple disks).

  3. OK, I have read through the instructions several times and it all makes perfect sense.  I have five data disks in my array and a parity disk.  Disk 5 was recently added (formatted as RFS) and is empty. All five data disks are 3TB WD Reds.  I have formatted Disk 5 (it was empty as I had not added it to a user share yet) with XFS and am copying the contents of Disk 1 to Disk 5 as per instructions.  My planned migration is like this since all disks are identical in size:

     

    Disk 1 --> Disk 5
    Format Disk 1 as XFS
    Disk 2 --> Disk 1
    Format Disk 2 as XFS
    Disk 3 --> Disk 2
    Format Disk 3 as XFS
    Disk 4 --> Disk 3
    Format Disk 4 as XFS

     

    Disk 4 then becomes the extra disk I can add to user shares as needed.
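
     If it helps anyone following the same plan, each step boils down to a disk-to-disk copy from the console; a minimal sketch of the first step (the rsync options and the follow-up procedure are my assumptions, not necessarily the exact commands from the instructions):

        rsync -avPX /mnt/disk1/ /mnt/disk5/    # copy everything from disk1 onto the freshly XFS-formatted disk5
        # verify the copy, then stop the array, switch disk1's file system to XFS, format it, and repeat with disk2 -> disk1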

     

     My question is what does this do to user shares where I have the current disk 1 ... disk 4 assigned to the shares?  Do I need to unassign the disks and reassign the "new Disk 1 (Disk 5)" as Disk 1, etc. after the changes?  As you can see, I am not 100% certain about how physical disks, array disk assignments and the user shares that use these disks are all related after making these changes.  Of course, the initial array configuration and share creation all makes perfect sense, but I am unclear on what happens when contents are moved from one disk to another after changing the file system.

  4. OK, I think I have this problem figured out and resolved.  It is related to this thread: http://lime-technology.com/forum/index.php?topic=39237.0

     

     Somehow, in the creation and configuration of the Crashplan container, I ended up with a user share called ":" that contained an empty /mnt/user directory structure.  That and the creation of appdata/Crashplan on disk1 kept disk1 and parity spinning as long as Crashplan was running.  Just removing the Crashplan docker and docker.img was not enough.  Once I cleaned up the mystery ":" share and recreated docker.img and the Crashplan container, things appear to be working normally.  Crashplan is currently synchronizing block information, but Parity and Disk1 are not spinning.
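
     Since unRAID builds user shares out of the top-level directories on the data disks and the cache drive, a stray top-level folder (like that ":") shows up as a share.  A quick way to hunt one down from the console (a sketch, assuming the standard /mnt/diskN and /mnt/cache mounts):

        ls -la /mnt/user/                                                          # the mystery ":" share appeared here
        for d in /mnt/disk[0-9]* /mnt/cache; do echo "== $d"; ls -la "$d"; done    # find which disk actually holds it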

  5. Unless I stop Crashplan, Parity and Disk1 are constantly spinning.  In checking out my docker config and Crashplan config, I realized the appdata share had not been set to cache only and that, in fact, Crashplan program files were installed on Disk1/appdata with some files on Cache/appdata.  I removed the Crashplan container and files, ran rm -r appdata on Disk1 to get rid of it completely on disk1, verified the appdata share was set to cache only and reinstalled the Crashplan container.  With Crashplan running, Parity and Disk1 continue to spin.  They will not spin down and stay spun down unless I stop Crashplan.

     

     CrashPlan is set up per the default config with /config set to /mnt/user/appdata/crashplan and /data set to /mnt/user.  What have I done wrong?

     

     EDIT:  Since Crashplan was my only installed docker so far, I deleted it and the docker.img file. At least with Crashplan and docker completely gone, I can manually spin down disks and they stay spun down (Parity and Disk1 would spin back up with Crashplan installed).  I have no idea why Parity is spinning up in the first place since I am not writing to the array.  All disks eventually spun down per disk settings.  I'll check all settings before reinstalling the Crashplan docker.  I am first trying to eliminate everything else as the cause.
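
     In case it is useful to anyone chasing the same symptom: parity spins up whenever anything writes to a data disk, so the trick is finding what is still touching Disk1.  A couple of console ideas (a sketch only; inotifywait is only there if the inotify-tools package is installed):

        lsof +D /mnt/disk1                                      # processes holding files open under disk1 (slow on large trees)
        inotifywait -m -r -e create,modify,delete /mnt/disk1    # watch for writes in real time, if inotify-tools is available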

  6. I set the container volume to /mnt and selected backup locations based on share names rather than disks (the disks didn't show up anyway).  This caused Crashplan to warn me that I was removing files from the backup archive and that they would be permanently deleted.  I went ahead and confirmed, thinking I was about to delete my entire 2.8TB worth of backup and start over.  However, Crashplan recognized the files as already backed up without a change in folder structure and simply rescanned the existing files (it still took many hours) and backed up a few new files that had been added.  After about 18 hours it was done.

  7. The only other thing I'm finding a little funny is that I put the data path as /user/mnt/, but in the folder-to-back-up selection it looks like it mounted at /user/, and it won't expand the /mnt/ directory, but I can expand shares in the /data/ directory.

     

     

     What exactly is /data/?  Some kind of symlink?  Because I know that /user0/ is the cache disk and /user/ is parity.

     

     

    Just wondering?

     

    edit:

     

     Thanks for the lightning-fast reply, generalz

     

     I upgraded yesterday from v5.06 to v6b14b.  The upgrade went very smoothly with no issues.  I am now trying to set up the Crashplan docker.  I have ~2.8TB of data already backed up to Crashplan Central.  When I set up the Crashplan container with the /data path set to /mnt/user/ and adopted the previous backup set, the Crashplan GUI shows all backed-up files as "missing."  I have pictures, videos, movies, etc. to back up and each of these shares spans 2 or more physical disks.  In my prior configuration in the Crashplan GUI I had to expand DISK1, DISK2, DISK3, etc. and select the corresponding directories that contained the data to be backed up for each backup set.  With /mnt/user/ specified as the data path, the disks and directories do not expand in the same fashion (in fact, I cannot drill down to directories that are part of the share this way) and everything is reported as missing.  The /data node expands to the share names, but if I include these, Crashplan wants to remove everything I have already backed up and start over, as it is a path change and it thinks everything is "new".

     

     Generalz responded that he had set /data to /mnt/user/:/mnt/user/:rw to have Crashplan continue backing up existing data sets.  When I do this, Crashplan will not install successfully and the container is removed, saying this is an invalid path.  I tried adding /data paths for each of the physical disks.  The result was the same as setting /data to /mnt/user/.

     

     Any ideas for a proper /data configuration that will allow Crashplan to continue on with the prior configuration?

    I don't use this docker, but it sounds like you and the people you are quoting are confused about volume mappings.

     

    Maybe post what you have for volume mappings and we can help figure it out.

     

    It also sounds like you must have had Crashplan set up before to use disk shares instead of user shares. You can't get to the disk shares from /mnt/user. Maybe try /mnt instead.
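
     To make the mapping concrete, here is roughly what that suggestion amounts to on the docker command line (a sketch only; the image name, container name and /config path are assumptions -- use whatever your template actually specifies):

        # /data -> /mnt exposes both /mnt/user (user shares) and /mnt/disk1, /mnt/disk2, ... (disk shares) inside the container
        # keeping /config on the cache drive keeps CrashPlan's own files off the array so the disks can spin down
        docker run -d --name CrashPlan -v /mnt:/data -v /mnt/cache/appdata/crashplan:/config gfjardim/crashplan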

     

     Yes, I am sure I am confused; it happens quite often!

     

    Here is the Crashplan Docker configuration with /data set to /mnt. It shows the physical disks in the array and looks like it should work:

     [screenshot: dme3cg.jpg – the Crashplan Docker volume mappings]

     

    Below is how Crashplan sees things.  Note that I had to drill down under physical disk names to select backup folders.  Perhaps this was simply an error on my part in setting the original backup set folders and I could have/should have done it under user shares (don't recall if that was something I could have done a couple of years ago when I defined these).

     

     Under Mnt it only shows disk1 and disk2, and they are not expandable.  I see the disk and share names under the data node, but selecting this causes Crashplan to discard the prior backup files and start over.  Under User I see the unRAID share names, and this is probably the preferred way of selecting files to back up as it is not physical-disk dependent, but this also results in Crashplan wanting to discard the prior backup.

     [screenshot: bbgx2.jpg – how Crashplan sees the mounted paths]

     

     As seen in the Crashplan GUI, the Pictures share is currently storing files on disk1 and disk2 (although in the unRAID config it can span disk1...disk4); Videos is on disk2 and disk3 (also set to disk2...disk4), and Movies and TV are both on disk4.  Disk 5 was recently added and currently has nothing on it.
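
     A quick way to confirm from the console which physical disks a share is actually spread across (since a user share is just the union of the same-named top-level folder on each disk) is something like:

        du -sh /mnt/disk*/Pictures 2>/dev/null    # how much of the Pictures share sits on each data disk
        du -sh /mnt/disk*/Videos 2>/dev/null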

  8. I've received PMs and seen multiple new threads with questions about S3 sleep with version 5, so I'm bringing this thread back up to post a revised version of Bagpuss's comprehensive S3 script with the changes for unRAID v5 mentioned in my post above: the sleep command changed to echo -n mem > /sys/power/state, and the logs moved to the v5 directory structure.

     

     Thanks for this.  I too need a sleep script for version 5.0 final since it disappeared from the main page in the new interface.  I was using SF prior to upgrading to v5.0 final, but SF has display problems with this release.  I know some just leave their servers running 24x7; I do not want to do that as mine is lightly used right now.

     

    I have modified the unmenu sleep script on the user scripts page such that it now works properly with v5.0 final and I now have a manual sleep button that works great.

     

    With your modified Bagpuss auto sleep script, do you put it in your go file so it is active on boot up or do you run it elsewhere?  Just curious as to how you are invoking it.

     

     I am a total Linux script noob, so I am trying to wrap my head around what the script is actually doing as far as checking NIC activity and the amount of inactive time before sleep is invoked, but I very much appreciate the work you and Bagpuss have done on this.
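
     For what it is worth, the core idea of the auto-sleep script is just a loop that watches for network (and disk) inactivity and then writes to /sys/power/state.  A stripped-down illustration of that idea only -- not the actual Bagpuss/unmenu script -- where eth0, the 60-second poll, the byte threshold and the idle limit are all assumptions:

        #!/bin/bash
        IDLE_LIMIT=30        # consecutive quiet minutes required before sleeping
        idle=0
        while true; do
            rx1=$(cat /sys/class/net/eth0/statistics/rx_bytes)
            sleep 60
            rx2=$(cat /sys/class/net/eth0/statistics/rx_bytes)
            if [ $((rx2 - rx1)) -lt 10000 ]; then idle=$((idle + 1)); else idle=0; fi
            if [ "$idle" -ge "$IDLE_LIMIT" ]; then
                echo -n mem > /sys/power/state    # the v5-compatible S3 sleep command mentioned above
                idle=0
            fi
        done

     Scripts like this are typically started in the background from /boot/config/go (the go file mentioned above) so they are active after every boot.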

  9. If you are looking for software to wake the server, you can find some here -> http://www.depicus.com/wake-on-lan/wake-on-lan-cmd.aspx

     

    //Peter

    Thanks, I have downloaded wolcmd and modified it with the MAC address of my NIC and the appropriate IP and subnet mask addresses.  Even though the NIC supports WOL, it appears my BIOS may not; however, my concern at the moment has more to do with automating the server sleep function before I move on to waking it up.
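
     For reference, a WOL "magic packet" is nothing more than 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent as a UDP broadcast (usually to port 7 or 9).  From memory, the depicus wolcmd tool is invoked roughly like this -- treat the exact argument order as an assumption and check the tool's own usage output:

        wolcmd 001122334455 192.168.1.255 255.255.255.0 9    # MAC, broadcast address, subnet mask, port (all placeholders)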

     

     Speeding_Ant's Simple Features Sleep button works great, but I don't know the code behind it to try it in a script.

  10. The nature of my unRAID server is that it is (for the moment) strictly a media server.  I store photos, videos, movies and music on it that do not always need to be accessed.  I want to put my unRAID box to sleep automatically after the disks spin down and wake it again when it is needed.  I am running v5 beta 14.  The sleep script on the wiki page uses "echo 3 > /proc/acpi/sleep", which does not put the server in an S3 sleep state with the latest kernels.  I have Simple Features installed and the "Sleep" button works perfectly.

     

     Is there any way to modify the sleep script to use whatever method Simple Features is using to sleep the server 5 minutes after disk spin-down?

     

     This question seems to have been asked before, but I do not see a definitive answer in this thread.  It is likely this has been answered and I just missed it.  If so, I apologize in advance for the reading comprehension fail.

     

     After I get the sleep portion reliably working, I'll tackle the WOL.  Right now, I can successfully wake the server by pressing the blinking power button, by a specific keyboard command, or at a specific time of day.  Eventually, the preferred method is WOL through a magic packet.

     

     ethtool eth0 shows that "pumbg" is supported and wake is set to "g", so this indicates the NIC supports WOL, correct?
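
     For reference, "g" in the Wake-on line does mean the adapter has magic-packet wake enabled; whether the board/BIOS honors it is a separate question.  The NIC-side setting can be checked or re-applied from the console (eth0 assumed):

        ethtool eth0 | grep -i wake    # expect "Supports Wake-on: pumbg" and "Wake-on: g"
        ethtool -s eth0 wol g          # re-enable magic-packet wake in case a driver reset cleared it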

     

     

  11. In your case, the Seek_Error_Rate has dropped too low.  The line could be interpreted to read as: Seek_Error_Rate has dropped to the 28 percentile (from VALUE column), and previously had even dropped to the 26 percentile (from WORST column), which is lower than the 30 percentile rating (from THRESHold column) that the engineers at the drive manufacturer have deemed the minimum reliability percentile, below which this drive should be considered FAILED.

     

    Just to add a clarification, because it may be confusing to some that a drive seems to be working fine, yet the SMART report says it has FAILED.  Part of the idea behind the development of the SMART system is to try to alert users to imminent failure BEFORE it is too late to save data.  When a drive indicates a SMART failure, it is trying to warn you that there is a very high probability of complete drive failure in the very near future.  The drive may or may not be fully operational at this moment, but even more catastrophic failure is very possible very soon.  If there is any important data on the drive, you should attempt to relocate it as soon as possible.

    OK, thanks for the detailed response, I really appreciate it.  Since this is a brand new drive and is already in pre-fail, I will return it for a new one.
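
     For anyone reading along, the VALUE/WORST/THRESH table being interpreted here comes straight from smartctl, and the verdict can be re-checked at any time (the device path is a placeholder):

        smartctl -H /dev/sdX    # overall health self-assessment: PASSED or FAILED
        smartctl -A /dev/sdX    # the vendor-specific attribute table with the VALUE, WORST and THRESH columns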

  12. I just received two Seagate ST2000DL003 2TB drives from Amazon.  I have run two preclear cycles on each.  One has passed both preclear cycles with no SMART failures; the other has failed on both preclear cycles with a SMART Seek_Error_Rate failure.  Raw_Read_Error_Rate looks very high as well.  The drive that "passed" also had high values for these parameters, although not nearly as high as these.

     

     Of the five drives I have precleared, this is the only one to show a failure, and I have never seen a failure in any desktop drive, so I am not sure how reliable the SMART reports are.  What say ye; should I return this to Amazon?  I assume this is a legitimate indication of a bad drive as the Seek_Error_Rate value seems incredibly high.

     

    Here is the SMART report generated at the end of the preclear with some information redacted:

     

    SMART status Info for /dev/sdc

     

    smartctl 5.40 2010-10-16 r3189 [i486-slackware-linux-gnu] (local build)

    Copyright © 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

     

    === START OF INFORMATION SECTION ===

    Device Model:    ST2000DL003-9VT166

    Serial Number:    5YD6HZP5

    Firmware Version: CC3C

    User Capacity:    2,000,398,934,016 bytes

    Device is:        Not in smartctl database [for details use: -P showall]

    ATA Version is:  8

    ATA Standard is:  ATA-8-ACS revision 4

    Local Time is:    Sun Jan  1 11:39:35 2012 MST

    SMART support is: Available - device has SMART capability.

    SMART support is: Enabled

     

    === START OF READ SMART DATA SECTION ===

    SMART overall-health self-assessment test result: FAILED!

    Drive failure expected in less than 24 hours. SAVE ALL DATA.

    See vendor-specific Attribute list for failed Attributes.

     

    General SMART Values:

    Offline data collection status:  (0x82)  Offline data collection activity

                  was completed without error.

                  Auto Offline Data Collection: Enabled.

    Self-test execution status:      (  0)  The previous self-test routine completed

                  without error or no self-test has ever

                  been run.

    Total time to complete Offline

    data collection:        ( 623) seconds.

    Offline data collection

    capabilities:          (0x7b) SMART execute Offline immediate.

                  Auto Offline data collection on/off support.

                  Suspend Offline collection upon new

                  command.

                  Offline surface scan supported.

                  Self-test supported.

                  Conveyance Self-test supported.

                  Selective Self-test supported.

    SMART capabilities:            (0x0003)  Saves SMART data before entering

                  power-saving mode.

                  Supports SMART auto save timer.

    Error logging capability:        (0x01)  Error logging supported.

                  General Purpose Logging supported.

    Short self-test routine

    recommended polling time:    (  1) minutes.

    Extended self-test routine

    recommended polling time:    ( 255) minutes.

    Conveyance self-test routine

    recommended polling time:    (  2) minutes.

    SCT capabilities:          (0x30b7)  SCT Status supported.

                  SCT Feature Control supported.

                  SCT Data Table supported.

     

    SMART Attributes Data Structure revision number: 10

    Vendor Specific SMART Attributes with Thresholds:

    ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

      1 Raw_Read_Error_Rate    0x000f  116  100  006    Pre-fail  Always      -      105386328

      3 Spin_Up_Time            0x0003  092  092  000    Pre-fail  Always      -      0

      4 Start_Stop_Count        0x0032  100  100  020    Old_age  Always      -      9

      5 Reallocated_Sector_Ct  0x0033  100  100  036    Pre-fail  Always      -      0

      7 Seek_Error_Rate        0x000f  028  026  030    Pre-fail  Always  FAILING_NOW 13464724458629

      9 Power_On_Hours          0x0032  100  100  000    Old_age  Always      -      38

    10 Spin_Retry_Count        0x0013  100  100  097    Pre-fail  Always      -      0

    12 Power_Cycle_Count      0x0032  100  100  020    Old_age  Always      -      9

    183 Runtime_Bad_Block      0x0032  100  100  000    Old_age  Always      -      0

    184 End-to-End_Error        0x0032  100  100  099    Old_age  Always      -      0

    187 Reported_Uncorrect      0x0032  100  100  000    Old_age  Always      -      0

    188 Command_Timeout        0x0032  100  100  000    Old_age  Always      -      0

    189 High_Fly_Writes        0x003a  100  100  000    Old_age  Always      -      0

    190 Airflow_Temperature_Cel 0x0022  068  065  045    Old_age  Always      -      32 (Min/Max 28/35)

    191 G-Sense_Error_Rate      0x0032  100  100  000    Old_age  Always      -      0

    192 Power-Off_Retract_Count 0x0032  100  100  000    Old_age  Always      -      7

    193 Load_Cycle_Count        0x0032  100  100  000    Old_age  Always      -      9

    194 Temperature_Celsius    0x0022  032  040  000    Old_age  Always      -      32 (0 22 0 0)

    195 Hardware_ECC_Recovered  0x001a  037  024  000    Old_age  Always      -      105386328

    197 Current_Pending_Sector  0x0012  100  100  000    Old_age  Always      -      0

    198 Offline_Uncorrectable  0x0010  100  100  000    Old_age  Offline      -      0

    199 UDMA_CRC_Error_Count    0x003e  200  200  000    Old_age  Always      -      1

    240 Head_Flying_Hours      0x0000  100  253  000    Old_age  Offline      -      66468913872934

    241 Total_LBAs_Written      0x0000  100  253  000    Old_age  Offline      -      2469991065

    242 Total_LBAs_Read        0x0000  100  253  000    Old_age  Offline      -      4264727547
