squarefrog

Posts posted by squarefrog

  1. I'd like to remove my cache drive, and I don't intend to replace it. I currently have some Docker containers using it, but they can go too.

     

    Is it just a case of deleting the Docker containers, stopping the array, and unassigning the cache drive, then starting the array back up?

  2. That drive looks fine. The issue is, if it starts to go south, it will happen fast.

    If that is your parity drive, then I would not worry as much unless you do not trust your data drives.

     

    Yeah a couple of the drives are getting a bit old. I had planned on replacing a drive every 3 months until they have all been replaced, so I'll move the brand new drive to the parity. While I wouldn't be devastated if I lost the data on my array (the important stuff is backed up online), it would be very inconvenient.

     

    I would suggest a SMART long test (disable the spin-down timer) on your data drives for peace of mind.

     

    Also, make sure you are doing at least monthly parity checks.

     

    Already do the monthly checks, but I'll run some long tests tonight before bed.

  3. OK, looking at the long SMART report, the drive doesn't look too bad at all. I think I'm still going to replace it, as I'll always regret it if I don't and it fails! Plus it is almost 4 years old now, so it has served me well.

     

    smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.13-unRAID] (local build)
    Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
    
    === START OF INFORMATION SECTION ===
    Model Family:     Seagate Barracuda 7200.14 (AF)
    Device Model:     ST3000DM001-9YN166
    Serial Number:    W1F0MS4V
    LU WWN Device Id: 5 000c50 05119bc8f
    Firmware Version: CC4B
    User Capacity:    3,000,592,982,016 bytes [3.00 TB]
    Sector Sizes:     512 bytes logical, 4096 bytes physical
    Rotation Rate:    7200 rpm
    Device is:        In smartctl database [for details use: -P show]
    ATA Version is:   ATA8-ACS T13/1699-D revision 4
    SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
    Local Time is:    Wed Feb 10 08:29:19 2016 GMT
    
    ==> WARNING: A firmware update for this drive may be available,
    see the following Seagate web pages:
    http://knowledge.seagate.com/articles/en_US/FAQ/207931en
    http://knowledge.seagate.com/articles/en_US/FAQ/223651en
    
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    
    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    
    General SMART Values:
    Offline data collection status:  (0x00)	Offline data collection activity
    				was never started.
    				Auto Offline Data Collection: Disabled.
    Self-test execution status:      (   0)	The previous self-test routine completed
    				without error or no self-test has ever 
    				been run.
    Total time to complete Offline 
    data collection: 		(  584) seconds.
    Offline data collection
    capabilities: 			 (0x73) SMART execute Offline immediate.
    				Auto Offline data collection on/off support.
    				Suspend Offline collection upon new
    				command.
    				No Offline surface scan supported.
    				Self-test supported.
    				Conveyance Self-test supported.
    				Selective Self-test supported.
    SMART capabilities:            (0x0003)	Saves SMART data before entering
    				power-saving mode.
    				Supports SMART auto save timer.
    Error logging capability:        (0x01)	Error logging supported.
    				General Purpose Logging supported.
    Short self-test routine 
    recommended polling time: 	 (   1) minutes.
    Extended self-test routine
    recommended polling time: 	 ( 340) minutes.
    Conveyance self-test routine
    recommended polling time: 	 (   2) minutes.
    SCT capabilities: 	       (0x3085)	SCT Status supported.
    
    SMART Attributes Data Structure revision number: 10
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x000f   117   099   006    Pre-fail  Always       -       164811800
      3 Spin_Up_Time            0x0003   092   092   000    Pre-fail  Always       -       0
      4 Start_Stop_Count        0x0032   099   099   020    Old_age   Always       -       1084
      5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
      7 Seek_Error_Rate         0x000f   070   060   030    Pre-fail  Always       -       11153886
      9 Power_On_Hours          0x0032   080   080   000    Old_age   Always       -       17574
    10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
    12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       64
    183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
    184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
    187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
    188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0 0 0
    189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
    190 Airflow_Temperature_Cel 0x0022   071   061   045    Old_age   Always       -       29 (Min/Max 21/36)
    191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
    192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       14
    193 Load_Cycle_Count        0x0032   094   094   000    Old_age   Always       -       12160
    194 Temperature_Celsius     0x0022   029   040   000    Old_age   Always       -       29 (0 15 0 0 0)
    197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
    198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
    199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
    240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       634h+47m+36.067s
    241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       63125348141158
    242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       2738158456079
    
    SMART Error Log Version: 1
    No Errors Logged
    
    SMART Self-test log structure revision number 1
    Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Extended offline    Completed without error       00%     17565         -
    # 2  Short offline       Completed without error       00%     17560         -
    
    SMART Selective self-test log data structure revision number 1
    SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
        1        0        0  Not_testing
        2        0        0  Not_testing
        3        0        0  Not_testing
        4        0        0  Not_testing
        5        0        0  Not_testing
    Selective self-test flags (0x0):
      After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.
    

     

    This also gives me the opportunity to take my time, read through some SMART report topics, and see if I can figure out what all the values mean. The main thing that confuses me is that I'm not sure whether I should be looking at VALUE, WORST, THRESH or RAW_VALUE. It seems this will be cleared up by reading the Understanding SMART Reports wiki.
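
    From what I've read so far (happy to be corrected): VALUE and WORST are normalised health scores, and an attribute only formally "fails" when VALUE drops to THRESH or below; for the attributes that usually matter most — 5 (reallocated), 187 (reported uncorrectable), 197 (pending) and 198 (offline uncorrectable) — the RAW_VALUE column is the real tell. A little filter for a `smartctl -A` report (the function name is my own invention):

```shell
# Pick out SMART attributes 5, 187, 197 and 198 from a `smartctl -A` report
# on stdin and warn if any of them has a non-zero RAW_VALUE ($10).
check_smart() {
  awk '$1 ~ /^(5|187|197|198)$/ {
         if ($10 + 0 > 0) { print "WARNING: " $2 " raw=" $10; bad = 1 }
       }
       END { if (!bad) print "no reallocated, uncorrectable or pending sectors" }'
}
```

    Usage would be something like `smartctl -A /dev/sdX | check_smart` (device name is a placeholder). On the report above it would come back clean.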

     

    Thanks for all the help everyone.

  4. If the 3TB has been working fine all this time, check the SMART values to determine the urgency of its replacement.

     

    Yes, I can probably do this. I'm not entirely sure what I should be looking for when reading the SMART report. Is there a sticky post or wiki somewhere that explains what to look for?

     

     

    For peace of mind I would start shopping for a replacement after determining the urgency of replacing the ST3000DM001.

     

    I had planned on replacing the ST3000DM001 with a larger, better-quality drive further down the line anyway, so this is no real hardship.

     

    If the data on the 250GB drive is not important or has been moved off to the other drives, I might elect to replace parity earlier, but probably would not.

    Depends on the age of the 3TB drive and whether reallocated, uncorrectable or pending sectors are showing up (but that is me).

     

    I had already moved all the data off the 250GB, so it's been sat unused for a few months, until a reasonable deal came along.

     

    I would probably order and preclear a drive for parity that I can live with for a while.

    For me, these days I go for the 6TB HGST drives. They get 225MB/s on the outer tracks. YMMV.

     

    If the goal is cheapest price per GB available and you don't need the expanded space right now, then put the new drive in service as parity while shopping for an additional drive.

     

    As I'm still only about 60% utilised, I can't quite justify the leap to 6TB, as much as I'd like to! I think it makes sense to just repurpose the new drive as parity and wait for prices to drop further, or for me to outgrow my current storage.

     

    As far as removing the missing 250GB drive and replacing the old parity with the new drive: I would move data off the 250GB drive with rsync to any spare space on the other drives. Capture a printout or image of the current drive layout, then rebuild the array via a new config from scratch, utilising the new drive as parity.

     

    Am I right in thinking this is achieved by stopping the array and going to Tools > New Config? The alert was a bit alarming, so just to confirm: it won't format my drives, but simply trigger a rebuild of parity? Any harm in removing the missing 250GB drive and swapping the parity in one step?

  5. OK, I'm about 99% of the way through zeroing the new 3TB drive. However, while idly reading through posts here trying to learn more about SMART errors, I saw this post about replacing Seagate ST3000DM001 drives immediately as a precaution, as they are very error-prone.

     

    I checked my current array to see if I have any of these drives, and of course I do... my parity drive  :-X

     

    So now I'm worried. I figure I have two options now:

     

    - Replace the failed 250GB drive with the soon-to-be-precleared 3TB Red, order a new, larger 4TB drive as a new parity drive, then hope nothing bad happens while I wait

    - Remove the failed 250GB drive from the array, and replace the suspect parity with the 3TB Red. Order a larger 4TB drive, preclear it in the now-empty slot, then swap the 3TB Red parity with the 4TB. Format the Red and add it to the array

     

    I currently have about 3TB free on my array, so I'm not desperate for the storage right now, and I can wait for 4TB prices to come down. Because of that, I think it'd definitely be wise to do the latter of the two options.

     

    Is there anything in particular I need to do to remove the missing 250GB drive and replace the old parity with the new Red drive?

  6. The N40L with the 'thebay' BIOS mod allows access to the eSATA port on the back.

    An external bay on top of the N40L, like the StarTech one or something similar with a fan and eSATA, would suffice.

    It's also reported that the eSATA port supports port multipliers.

     

    I use the rear eSATA port to run an internal SSD :)

     

    Got 6 drives successfully stuffed in there.

  7. Expensive? They're virtually giving them away here. The Celeron CPU and 4 GB RAM are perfectly adequate for running unRAID.

     

    I swear 3 days ago they were out of stock everywhere! I agree the N40L is fast enough for unRAID, but I wanted to run Plex Media Server on it, which requires extra grunt. The only downside to switching is that I couldn't cram all 6 of my drives in, which is a little sad.

  8. Oh that's an interesting thought! I had planned on replacing the N40L with a Gen8, but they've suddenly become very expensive (new ones coming perhaps?)

     

    I'm going to stick with the N40L for another couple of years until I outgrow my current storage, then I'd like to DIY, providing I can find a nice case that's not too much larger than the N40L and as quiet.

  9. Another option is to stop the array; unassign the 250GB drive; and start the array with that drive missing.  unRAID will now 'emulate' that drive using the remaining drives plus parity.  That will free up a slot to plug in the new 3TB drive for pre-clearing purposes.  Once the preclear has finished and the drive checks out OK you can assign it in place of the missing drive and unRAID will rebuild the 250GB drive contents onto the new drive.

     

    I probably should have mentioned that the 250GB drive doesn't contain any files, as I moved them once I started getting SMART errors. I presume in this case, this second option will allow me to pre-clear the drive inside my box, and not really suffer any downtime?

     

    I do want to preclear, as some of the UK couriers aren't particularly careful when handling hard drives and I want to check its reliability before committing data to it. Yep, my parity is 3TB.

  10. I have an HP N40L, with 6 drives in (the maximum I can fit in). One of the drives is 250GB and failing. I've ordered a 3TB replacement, which I'd like to preclear before swapping out the 250GB.

     

    How can I achieve this without significant downtime on my unRAID setup? Having read the Configuration Tutorial, it seems the recommended way is to use the Preclear Plugin, but I can't attach the drive to my NAS.

  11. I think CouchPotato updated yesterday, and now I can't get it to start up. The log contains the following error, looped:

     

        db.open()
      File "/app/couchpotato/libs/CodernityDB/database_super_thread_safe.py", line 43, in _inner
        res = f(*args, **kwargs)
      File "/app/couchpotato/libs/CodernityDB/database_super_thread_safe.py", line 93, in open
        res = super(SuperThreadSafeDatabase, self).open(*args, **kwargs)
      File "/app/couchpotato/libs/CodernityDB/database.py", line 571, in open
        index.open_index()
      File "/app/couchpotato/libs/CodernityDB/tree_index.py", line 160, in open_index
        self.root_flag = struct.unpack('<c', self.buckets.read(1))[0]
    error: unpack requires a string argument of length 1
    

     

    I don't know exactly when it updated, I used it yesterday, and today I can't access it.

  12. Thanks trurl, I finally got round to doing this. The file I was looking for was in:

     

    ~/.config/mc/hotlist
    

     

    So I simply ran the following:

     

    cp ~/.config/mc/hotlist /mnt/cache/appdata/mc/mc_hotlist
    

     

    Then added the following to the bottom of the go file (/boot/config/go):

     

    # Copy midnight commander hotlist
    cp /mnt/cache/appdata/mc/mc_hotlist /root/.config/mc/hotlist
    

     

    I think in future I'll probably create a symbolic link to this file, so I don't have to manually copy it every time I add a directory path.
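
    A sketch of that symlink approach for the go file (the helper name is made up; paths as above):

```shell
# Link the persistent hotlist copy into place instead of copying it, so new
# entries survive reboots without a manual step. Call from /boot/config/go.
link_hotlist() {
  mkdir -p "$(dirname "$2")"   # ~/.config/mc does not exist right after boot
  ln -sfn "$1" "$2"            # -n replaces any stale link in place
}
# link_hotlist /mnt/cache/appdata/mc/mc_hotlist /root/.config/mc/hotlist
```

    The trade-off is that mc then writes straight to the cache copy, so there's nothing left to forget to copy back.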

  13. Seems to me that it's not a drive spin-up issue, since you're using mariadb to share the database.

     

    Do you have it set on boot-up to wait until the network is available (I can't remember the exact entry)? Without it, Kodi can boot up, see that mariadb is not available, and then show you a blank database.

     

    You know, that actually makes much more sense. A quick Google search later and I found this:

     

    OpenELEC addon -> wait for network

     

    Looks like that will do the trick nicely!
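
    In case that add-on ever goes away, a rough shell equivalent (e.g. for OpenELEC's autostart.sh; the helper name, hostname and retry count are all made up) would just poll the server before Kodi loads the library:

```shell
# Retry a probe command once a second until it succeeds, giving up after
# max_tries attempts (default 30). Returns non-zero on timeout.
wait_for() {                      # wait_for "<probe command>" [max_tries]
  tries=0
  until eval "$1" >/dev/null 2>&1; do
    tries=$((tries + 1))
    [ "$tries" -ge "${2:-30}" ] && return 1
    sleep 1
  done
}
# wait_for "ping -c1 tower" 30    # 'tower' is a placeholder hostname
```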

  14. I realise this is more a Kodi question than unRAID, but I wonder if anyone else has experienced this.

     

    I have a couple of shares mounted in Kodi using NFS (with a mariadb Docker container to share the database). Generally this works great: fast, responsive, no dropouts. However, when I first switch on my Kodi box (Raspberry Pi 2), it boots so quickly that the drives haven't spun up, so my library appears empty.

     

    I don't suppose anyone knows of an equivalent to the samba client timeout option?

     

    <samba>
      <clienttimeout>10</clienttimeout>  <!-- timeout (in seconds) -->
    </samba>
    

     

    Or is there anything else I can look at other than leave my drives running 24/7?

  15. I'm trying to install vim, but something isn't right. Here's what I do:

     

    $ cd /mnt/cache/Downloads
    $ wget http://mirrors.slackware.com/slackware/slackware-current/slackware/ap/vim-7.4.692-i486-1.txz
    $ installpkg vim-7.4.692-i486-1.txz
    $ vim 
    -bash: /usr/bin/vim: cannot execute binary file
    

     

    I tried to make the binary executable with:

     

    $ chmod +x /usr/bin/vim
    

     

    But that didn't help. Any suggestions? Vi is weird and I can't use nano :)
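
    One thing worth checking (just a guess from the package name, not confirmed): that download is the 32-bit i486 build, while unRAID 6 runs a 64-bit-only x86_64 kernel, and a 32-bit binary on such a kernel gives exactly this "cannot execute binary file" error. A quick way to compare (the helper name is made up):

```shell
# Report the ELF class of a binary so it can be compared against `uname -m`;
# a 32-bit executable on a 64-bit-only kernel refuses to run.
binary_arch() { file -bL "$1" | cut -d, -f1; }
# binary_arch /usr/bin/vim   # e.g. "ELF 32-bit LSB executable"
# uname -m                   # e.g. "x86_64"
```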

  16. Edit the docker. Turn on Advanced View in upper right. USERNAME and PASSWORD are in the Environment Variables section.

     

    I use Transmission Remote GUI to interact with Transmission. It has more features than the web UI, including individual seeding ratios.

     

    Ah-ha! I wondered where you could set the mythical environment variables! I actually just stumbled across TransmissionGUI from a Google search too, will try at home. Shame as I generally use an iPhone or iPad to control Transmission. Maybe I'll look at the JSON RPC docs and write my own!

     

    Thanks trurl.
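
    If I do have a go at the RPC docs, the first hurdle is apparently the handshake: Transmission answers the first request with HTTP 409 plus an X-Transmission-Session-Id header, which has to be echoed back on every later call. Something like this would pull the id out of a captured response (hostname 'tower' and the default port 9091 are assumptions):

```shell
# Extract the session id from Transmission's 409 response headers on stdin.
session_id() {
  sed -n 's/^X-Transmission-Session-Id: *//p' | tr -d '\r'
}
# SID=$(curl -si http://tower:9091/transmission/rpc | session_id)
# curl -s -H "X-Transmission-Session-Id: $SID" \
#      -d '{"method":"session-get"}' http://tower:9091/transmission/rpc
```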

  17. How do you change or remove the username and password in Transmission? Is it a case of editing the config file or can this be set somewhere?

     

    The web UI from the Mac version seemed to allow you to set individual seeding ratios per torrent. How can you access more options than the web UI offers?

  18. Do you mean the application data that a specific docker container uses? Many dockers will need to constantly read and write this data, so if the User Share is on the array then not only will that disk spin from the read/writes, but parity will also spin when it writes.

     

    This is why many people put this in a cache-only User Share.

     

    This answers my question. I wondered if it loaded and then ran from memory, but what you say does make sense. I just didn't really want to put a cache drive in purely for Docker use. I'm due to move soon, so I'll have to use powerline networking; there's not a lot of point in speedy writes, and I'd lose a drive bay.

     

    Another question I have: does the virtualisation require a lot of CPU grunt? I have an HP N40L, which is fairly common but low-end. I see myself using sparklyballs' headless Kodi, and possibly a SQL docker.
