
PeteAron


Posts posted by PeteAron

  1. I echo everything Energen said.  APC or CyberPower are both good, and have USB connections - just double-check the specific model you're considering to make sure it has one.  Get the largest wattage you can afford for the longest uptime, although all you really need is 10 min or so - long enough that the array won't shut down during a short outage, and long enough to safely power down the array.

     

    IMO
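
    If you end up with the built-in UPS support (it's apcupsd-based, as far as I know), a quick way to sanity-check state, charge, and estimated runtime from the console is:

    apcaccess status | grep -E 'STATUS|BCHARGE|TIMELEFT'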

  2. OK, I think I know what to do, but I am concerned about exactly how to do this.  Please look this over and let me know if the following steps will not overwrite any of my data disks.

     

    I have 11 disks plus a single parity disk.  One of those disks was a 4 TB drive (disk 7), and my last parity check was fine.  I shut down my array and changed the disk in the slot containing the 4 TB disk to a "new" 8 TB disk (new disk 7).  I then rebooted, and Unraid began to rebuild onto the new disk.  This disk (new disk 7) was red-balled due to CRC errors.

     

    The plan:  I would like to return the 4 TB disk to slot 7 and have Unraid build parity anew.  I have a new parity drive precleared and ready to use, so I will make that disk parity at the same time, since parity is no longer valid unless I replace the failed 8 TB disk with another 8 TB disk.

     

    Task list:

    1. Power on the server (currently off).

    2. Change disk 7 from the 8 TB back to the existing 4 TB disk.

    3. Change my parity disk assignment to my new 10 TB parity disk.

    4. Go to the Tools menu and select "New Config".

    5. Start the array - I think I have to click a button on the main page saying something like "trust my array".

    6. Wait for the new parity to build using the existing data.

     

    Is this all correct?  My data drives, after swapping the new disk 7 back to the old disk 7, are all intact, and I just want to rebuild my parity drive.

     

    Thanks for the help.

  3. I am not the most experienced user, although I also have a 2011-vintage server.  Looking at your disk's SMART report, the only potential issue I see is the raw read error rate.  Your reallocated sector count and pending sector count are fine.  It may well be OK.
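
    (For anyone following along, those attributes can be read straight from the console with smartctl - a rough sketch, with /dev/sdX standing in for the actual device:)

    smartctl -A /dev/sdX | grep -Ei 'raw_read_error|reallocated_sector|current_pending'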

     

    But your problem is that Unraid has red-balled it.  Your easiest solution is to replace that drive and allow the array to rebuild onto the replacement.  After that you can preclear this drive and see if it will still work in your array.  Personally, I would preclear it to make sure it is working and then retire it to a Windows box where I am not concerned about a drive failure.  It isn't that old, so it's pretty likely got a good life ahead of it.

     

    Maybe others would suggest another path.  

  4. Thank you for the comments, guys.  I plan to investigate the drive itself later.  Right now, my problem is that I have a red-balled disk in my array.

     

    I am thinking I will put the 4 TB drive back in place along with the new 10 TB parity drive, use the "trust my array" button, and build my new parity drive.  Then I will replace the 4 TB with the new 8 TB that's on its way, using the normal process.

     

    Any thoughts on this strategy?  What would you do?

     

    P.S.  I do not have diagnostics from when this occurred - I wasn't thinking.  The array was up only long enough to start the parity rebuild.  Here are the diagnostics from the shutdown immediately before this event, just before shutting down to add the new 10 TB drive.  After adding that drive, I restarted my array and precleared it.  A few days after that, my monthly parity check completed normally.  After that, I did the disk swap mentioned above.

    repository-syslog-20210205-2207.zip
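
    (Note for next time, mostly to myself: besides the Diagnostics option under Tools in the GUI, the diagnostics can be captured from the console - if I remember right it is simply the following, which drops the zip onto the flash drive under /boot/logs:)

    diagnostics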

  5. Edit:

    My issue has been resolved.  Read below to understand the problem.  I had never used the New Config option before.  It is a simple thing to do, but note that my data was not parity-protected going in (read below), and once you make this choice the array remains unprotected until parity has been rebuilt.  Here's exactly what I did, in case anyone else has a similar situation.

     

    When using "New Config" you are assuming that the data on all of your disks is intact and that you have no disk errors.  You will not have parity protection after selecting this until parity has been rebuilt.  Your array needs to be stopped (off-line) to do this.

     

    (1)  The server was off.  Power up the server.  If you don't have a list or screenshot of your current array configuration/disk assignments, GET ONE NOW (see the command sketch after this list).  I had 3 disks in my case which were not part of the array.

    (2)  Note the configuration of the array.  In my case, disk 7 was red-balled and the old disk 7 was present.

    (3)  Go to the Tools menu, locate "New Config", and click it.

    (4)  Preserve or don't preserve the current assignments, as appropriate for your needs.

    (5)  Return to the Main page.  Your array will have no disks associated with it.

    (6)  Assign your parity drive - safety first.  This drive will be completely overwritten.  Don't make a mistake here.

    (7)  One by one, assign each drive to a position in your new array.  As long as you have each data drive assigned when you are finished, it makes no difference which data drive goes where.  You can add or remove drives from your new config as you please - that's the point of this.

    (8)  When you have all of your devices assigned, double-check everything.  Make sure every device you wanted from your screenshot is in the new array.

    (9)  Start the array.  A parity build will begin immediately.  I allowed this to complete before doing anything else.
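
    (A small aside on step 1: besides the screenshot, a plain-text record of which serial number sits on which device can be a useful cross-check.  Something like the following - the output file name is just an example - saves one to the flash drive:)

    ls -l /dev/disk/by-id/ > /boot/disk-serials.txt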

     

    I posted my current diagnostics below if anyone is interested.

     

    Thank you, JorgeB, for helping me out and for your patience.

     

     

    =====

    The plan:

    OK, I think I know what to do, but I am concerned about exactly how to do this.  Please look this over and let me know if the following steps will not overwrite any of my data disks.

     

    I have 11 disks plus a single parity disk.  One of those disks was a 4 TB drive (disk 7), and my last parity check was fine.  I shut down my array and changed the disk in the slot containing the 4 TB disk to a "new" 8 TB disk (new disk 7).  I then rebooted, and Unraid began to rebuild onto the new disk.  This disk (new disk 7) was red-balled due to CRC errors.

     

    The plan:  I would like to return the 4 TB disk to slot 7 and have Unraid build parity anew.  I have a new parity drive precleared and ready to use, so I will make that disk parity at the same time, since parity is no longer valid unless I replace the failed 8 TB disk with another 8 TB disk.

     

    Task list:

    1. Power on the server (currently off).

    2. Change disk 7 from the 8 TB back to the existing 4 TB disk.

    3. Change my parity disk assignment to my new 10 TB parity disk.

    4. Go to the Tools menu and select "New Config".

    5. Start the array - I think I have to click a button on the main page saying something like "trust my array".

    6. Wait for the new parity to build using the existing data.

     

    Is this all correct?  My data drives, after swapping the new disk 7 back to the old disk 7, are all intact, and I just want to rebuild my parity drive.

     

    Thanks for the help.

     

    ==========

    Original post below:

    ==========

     

    Hi all, I believe I know what to do next, but I would like some critical feedback.

     

    I am in the process of upgrading my 13-disk array from 8 TB parity to 10 TB parity.  I also have a 4 TB drive I want to replace with an 8 TB drive.  I have an 8 TB and a 6 TB drive on hand outside of my array, awaiting service; both were previously in the array.  I purchased my 10 TB drive to be used as parity, shut down my array after a parity check, rebooted, and precleared the 10 TB drive.  Then I waited for my monthly parity check to complete as a conservative measure.  There were no issues with my array; all was running smoothly.

     

    I decided to swap the 4 TB drive for the 8 TB as a first step, planning to then swap parity.  I stopped the array, went to the slot containing the 4 TB drive, and changed that position to the 8 TB drive.  I then started the array again and allowed it to begin rebuilding the "new" 8 TB drive.

     

    Within a minute or two, I noticed that the drive was throwing CRC errors.  There were 60-some, then 150, 220, 290, 350, etc.  I paused the rebuild, checked the cables, and un-paused.  More CRC errors.  After about 600 CRC errors the array disabled the drive.  Slightly panicked, I shut down the array to think.

     

    I believe my best course of action at the moment is to put the 4 TB drive back into my array, parking the 8 TB.  At the same time, I can swap my parity drive and just allow the array to rebuild parity, since I am confident the data is good.

     

    After this has completed, I can preclear the drive that was throwing CRC errors and see if I can determine whether it's good.  I can also swap my existing parity drive into the data position I want it in, replacing my 4 TB drive.  I'd do this after the new 10 TB parity has been built.

     

    I ordered a fresh 8 TB drive this morning too, so I am ready with that as well.

     

    Any thoughts on this situation?  Comments on my strategy?  

     

    Thank you,

     

    kf

     

    P.S.  The array is about 10 years old; I can post specs later tonight.  No known issues.

     

    Edit: I found a recent post about my array, if interested:

     

     

  6. Hi all,

     

    I am beginning to wonder about the useful lifetime of my server.  I have enjoyed nearly 10 years of service with Unraid and about 8 years from my current build.  I want to get ahead of its failure, and I am beginning to think about a replacement build.  I just don't know what to expect - how long will this server continue to run trouble-free?  I think the only thing to worry about is the motherboard - CPUs don't go bad, do they?  RAM too, right?  Hard drive failures are easy to spot and to fix by replacement.  Same with power supplies.

     

    So, here is my server.  I am about to replace that G620 with an i3-3220 that I have; otherwise I am going to keep it this way for now.  How long should I expect this motherboard to last?

     

    Model: N/A

    M/B: Supermicro C7P67 Version V1.02 - s/n: xxx

    BIOS: American Megatrends Inc. Version 4.6.4. Dated: 02/15/2011

    CPU: Intel® Pentium® CPU G620 @ 2.60GHz

    HVM: Enabled

    IOMMU: Disabled

    Cache: 128 KiB, 512 KiB, 3072 KiB

    Memory: 8 GiB DDR3 (max. installable capacity 32 GiB)

    Network: eth0: 1000 Mbps, full duplex, mtu 1500
     eth1: interface down

    Kernel: Linux 4.19.88-Unraid x86_64

    OpenSSL: 1.1.1d

  7. I don't have a disk on hand to replace the failing disk - I bought a new disk and it's on its way.  I can wait until I get it and replace the failing disk with the new disk, using parity to rebuild it, or I could remove the failing disk, finish the rsync copy using parity to emulate the data, then complete the XFS conversion process for the disk.

     

    Obviously the safest way is to rebuild the failing disk, but since I'm only 300 GB away, would it make sense to remove the disk, finish the copy, and then reset the configuration and parity using the XFS replacement?
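
    (If I understand the parity emulation correctly, once the failing disk is removed its contents stay visible at the same /mnt/disk10 mount point, served from parity, so the same rsync command should be able to pick up where it left off - just a sketch of my thinking, not something I have tested:)

    rsync -avPX /mnt/disk10/ /mnt/disk11/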

  8. So, if you were in my position, what would you do?  I have about 300 GB left to copy, but it is moving at less than 50 KB/sec, with errors growing.  Would you stop the copy where it is, rebuild the failing disk onto a new disk, then do the copy again?  I think if I continue as-is it will take another day or so at this speed.  I'm not sure what the best course of action is.

  9. trurl,

     

    I used

    rsync -avPX /mnt/disk10/ /mnt/disk11/

     

    It looks like the array is copying to the XFS swap disk under parity protection.  It is nearly complete - 2.54 of 2.73 TB have been copied, and I am up to 43,000 errors on the failing disk.  I have a new disk on the way - after this copy is complete, I will probably use the procedure to remove the failing disk.
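
    (Once the copy finishes, a read-only comparison pass can confirm nothing was missed - a hedged sketch, assuming the failing source disk is still readable; -n makes it a dry run and -c forces checksum comparison:)

    rsync -navcX /mnt/disk10/ /mnt/disk11/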

  10. I have a failing disk - it shows over 1000 reallocated sectors.  So instead of doing the usual rebuild of the disk by replacing it, I was trying this method to copy the files to the new XFS disk.  I had just completed a parity check with no errors, so I thought I would be OK.  I began the disk copy last night following the published procedure.  About 2/3 of the way through, the copy rate dropped to about 5-30 KB/sec, and I am seeing 36,000 or so errors on the Unraid display for this copy.

     

    Should I abort the copy, remove the failing disk, rebuild it onto a new replacement, and after that work on the XFS conversion?  Or should I just wait for the copy to complete?

     

    If I need to abort, how do I do that?  I am running the copy as a direct console command.
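
    (Side note, in case anyone else wonders: the copy is just an rsync process at the console, so Ctrl-C in that session aborts it, and - as I understand it - re-running the same command afterwards skips anything already copied:)

    rsync -avPX /mnt/disk10/ /mnt/disk11/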

  11. Joe, thanks for your help.  I am not complaining, and I am sensitive to the amount of flak you get from folks (some of us need to read Dale Carnegie's book, IMO).  The problem is that my ignorance gets in the way of implementing things outside of unmenu =).  I was confused, as you say, by the plugin thread, and I have learned to stick with the basic unmenu plugins.  Some of the behavior I've seen makes sense now that you've pointed out the single-threaded nature of the process.  I'll continue to be conservative with it.

     

    I should now be able to get cache_dirs running on my server.  Thanks again.

     

     

  12. Joe,

     

    Actually I was not referring to Unmenu - I have used that for a long time.  I was referring to cache_dirs. 

     

    This thread talks about an unmenu plugin:

     

    http://lime-technology.com/forum/index.php?topic=19790.msg195098#msg195098

     

    This is where I was trying to make a new directory.

     

    WRT the multiple posts on installation, the cache_dirs wiki link in your OP in this thread:

     

    http://lime-technology.com/forum/index.php?topic=4500.msg40570#msg40570

     

    refers to this wiki,

     

    http://lime-technology.com/wiki/index.php?title=Improving_unRAID_Performance#Keep_directory_entries_cached

     

    I will give mkdir a shot and follow the instructions in the first thread I linked.

     

    Edit:  That wiki seems to have been modified today - there were a few threads on customizing cache_dirs referenced after the two bullet points, and now they are not there.

     

    The OP in this thread discusses a lot about how cache_dirs works but not how to actually install it.  I can't find cache_dirs in the unmenu package manager; thus my confusion.

     

    Thank you.

     

    I apologize for asking a stupid question, but I cannot figure out how to install cache_dirs.  I have been searching for hours.  I was interested in the unmenu plugin, but I see I don't know how to create a directory - I thought "md" was the command.

    It is the command for MS-DOS, but not for Linux/Unix.

     

    The command for unMENU would be

    mkdir /boot/unmenu

     

    Apparently, you did not look here:

    http://code.google.com/p/unraid-unmenu/

    or here

    http://lime-technology.com/forum/index.php?topic=5568.0

    Both give the "mkdir" command.

     

    I take this as a sign that I am an idiot and I need to ask for help to do this.

     

    Could someone point me directly to install instructions?  The wiki page refers to 5 separate threads.  It is very confusing.

     

    thanks!

    Which "wiki" page are you referring to.  Since the wiki can be edited by anyone, it is  impossible for me to know where you looked.  It probably was not this one

    http://lime-technology.com/wiki/index.php/UnRAID_Add_Ons#UnMENU

     

    Joe L.

  13. I apologize for asking a stupid question, but I cannot figure out how to install cache_dirs.  I have been searching for hours.  I was interested in the unmenu plugin, but I see I don't know how to create a directory - I thought "md" was the command.  I take this as a sign that I am an idiot and I need to ask for help to do this.

     

    Could someone point me directly to install instructions?  The wiki page refers to 5 separate threads.  It is very confusing.

     

    thanks!
