PeteAron

Everything posted by PeteAron

  1. Thank you for the comments, guys. I plan to investigate the drive itself later. Right now, my problem is that I have a red-balled disk in my array. I am thinking I will put the 4 TB drive back in place, along with the new 10 TB parity drive, use the "trust my array" button, and build my new parity drive. Then I will replace the 4 TB with the new 8 TB that's on its way, using the normal process. Any thoughts on this strategy? What would you do? P.S. I do not have diagnostics from the array when this occurred - I wasn't thinking. The array was up only long enough to start the parity rebuild. Here are the diagnostics from the shutdown immediately before this event, just before shutting down in order to add the new 10 TB drive. After adding that drive, I restarted my array and precleared it. A few days after that, my monthly parity check completed normally. After that, I did the disk swap mentioned above. repository-syslog-20210205-2207.zip
  2. I realize I am being impatient, but is there someone knowledgeable who can comment? Thank you.
  3. Edit: My issue has been resolved. Read below to understand the problem. I had never used the "new config" option before. It is a simple thing to do, but my data was not protected (read below), and once you make this choice the array is unprotected until parity can be rebuilt. Here's exactly what I did, in case anyone else has a similar situation. When using "new config" you are assuming that the data on all of your disks is intact and that you have no disk errors. You will not have parity protection after selecting this, and your array needs to be offline to do it.
(1) The server was off; power up the server. If you don't have a list or screenshot of your current array configuration/disk assignments, GET ONE NOW. In my case I had 3 disks that were not part of the array.
(2) Note the configuration of the array. In this case, disk 7 was red-balled and the old disk 7 was present.
(3) Go to the Tools menu, locate "New Config", and click it.
(4) Preserve or don't preserve assignments as appropriate for your needs.
(5) Return to the Main menu. Your array will have no disks associated with it.
(6) Assign your parity drive - safety first. This drive will be completely overwritten, so don't make a mistake here.
(7) One by one, assign each drive to a position in your new array. As long as every data drive is assigned when you are finished, it makes no difference which data drive goes where. You can add or remove drives from your new config as you please - that's the point of this.
(8) When you have all of your devices assigned, double-check everything. Make sure the new array contains every device you wanted from your screenshot.
(9) Start your array. A parity build will begin immediately; I allowed this to complete before doing anything else.
I posted my current diagnostics below if anyone is interested. Thank you JorgeB for helping me out and for your patience. ===== The plan: OK, I think I know what to do, but I am concerned about exactly how to do it.
Please look this over and let me know if any of the following steps will overwrite my data disks. I have 11 disks plus a single parity disk. One of those disks was a 4 TB drive (disk 7), and my last parity check was fine. I shut down my array and changed the disk in the slot containing the 4 TB disk to a "new" 8 TB disk (new disk 7). I then rebooted, and Unraid began to rebuild to the new disk. This disk (new disk 7) was red-balled due to CRC errors. The plan: I would like to return the 4 TB disk to slot 7 and make Unraid build parity anew. I have a new parity drive precleared and ready to use, so I will make that disk parity at the same time, since parity is no longer valid unless I replace the 8 TB disk with another 8 TB disk. Task list:
-1 Power on the server (currently off).
-2 Change disk 7 from the 8 TB back to the existing 4 TB disk.
-3 Change my parity disk assignment to my new 10 TB parity disk.
-4 Go to the Tools menu and select "new config".
-5 Start the array; I think I have to click a button on the main page saying something like "trust my array".
-6 Wait for the new parity to build using the existing data.
Is this all correct? My data drives, after replacing the new disk 7 with the old disk 7, are all intact, and I just want to rebuild my parity drive. Thanks for the help. ========== Original post below: ========== Hi all, I believe I know what to do next, but I would like some critical feedback. I am in the process of upgrading my 13-disk array from 8 TB parity to 10 TB parity. I also have a 4 TB drive I want to replace with an 8 TB drive. I have a hot 8 TB and a 6 TB drive outside of my array awaiting service; both were previously in the array. I purchased my 10 TB drive to be used as parity, shut down my array after a parity check, rebooted, and precleared the 10 TB drive. Then I waited for my monthly parity check to complete as a conservative measure. There were no issues with my array; all was running smoothly.
I decided to swap the 4 TB drive for the 8 TB as a first step, planning to then swap parity. I turned off the array, went to the slot containing the 4 TB drive, and changed that position to the 8 TB drive. I then started the array again and allowed it to begin rebuilding the "new" 8 TB drive. Within a minute or two, I noticed that the drive was throwing CRC errors: there were 60-some, then 150, 220, 290, 350, etc. I paused the rebuild, checked the cables, and un-paused. More CRC errors. After about 600 CRC errors the array disabled the drive. Slightly panicked, I shut down the array to think. I believe my best course of action at the moment is to put the 4 TB drive back into my array, parking the 8 TB. At the same time, I can swap my parity drive and just allow the array to rebuild parity, since I am confident the data is good. After this has completed, I can preclear the drive that was throwing CRC errors and see if I can determine whether it's good. I can also swap my existing parity drive into the position I want it, replacing my 4 TB drive; I'd do this after the new 10 TB parity has been built. I ordered a fresh 8 TB drive this morning too, so I am ready with that as well. Any thoughts on this situation? Comments on my strategy? Thank you, kf P.S. The array is about 10 years old; I can post specs later tonight. No known issues. Edit: found a recent post about my array if interested:
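A steadily climbing UDMA CRC error count like the one described above almost always points at the SATA cable, backplane, or port rather than the disk itself. As a hedged sketch (assuming `smartctl` from smartmontools, which Unraid ships with; the `crc_count` helper name and `/dev/sdX` are illustrative placeholders), the counter can be read from the console like this:

```shell
# crc_count DEVICE prints the raw value of SMART attribute 199
# (UDMA_CRC_Error_Count). A value that keeps climbing between checks
# usually implicates the SATA cable or port, not the platters.
crc_count() {
    smartctl -A "$1" | awk '$1 == 199 { print $NF }'
}

# Example (run from the Unraid console):
#   crc_count /dev/sdX
```

Because attribute 199 never resets, the absolute number matters less than whether it is still increasing after reseating or replacing the cable.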
  4. Hi all, I am beginning to wonder about the useful lifetime of my server. I have enjoyed nearly 10 years of service with Unraid and about 8 years from my current build. I want to get ahead of its failure, and I am beginning to think about a replacement build. I just don't know what to expect - how long will this server continue to run trouble-free? I think the only thing to worry about is the motherboard - CPUs don't go bad, do they? RAM too, right? Hard drive failures are easy to spot and to fix by replacement, same with power supplies. So, here is my server. I am about to replace that G620 with an i3-3220 that I have; otherwise I am going to keep it this way for now. How long should I expect this motherboard to last?
Model: N/A
M/B: Supermicro C7P67 Version V1.02 - s/n: xxx
BIOS: American Megatrends Inc. Version 4.6.4. Dated: 02/15/2011
CPU: Intel® Pentium® CPU G620 @ 2.60GHz
HVM: Enabled
IOMMU: Disabled
Cache: 128 KiB, 512 KiB, 3072 KiB
Memory: 8 GiB DDR3 (max. installable capacity 32 GiB)
Network: eth0: 1000 Mbps, full duplex, mtu 1500; eth1: interface down
Kernel: Linux 4.19.88-Unraid x86_64
OpenSSL: 1.1.1d
  5. FWIW, I had two of these drives (X300) fail in warranty. I won't buy another.
  6. Thanks for your replies; I understand both now. My array seems to be running as well as ever. Thanks.
  7. I finished preclearing a new disk and I see a lot of errors: "wrong csrf_token", over and over. What does this mean? The plugin reports that preclear has completed successfully. Log and diagnostics posted. Thanks for your help. repository-diagnostics-20180412-0728.zip repository-syslog-20180412-0724.zip
  8. Update: after having pulled the Seagate, I restarted the array and rebuilt the drive onto a new disk. That completed without errors. I am now going to add and preclear a new drive, and work on completing the process of switching my array to XFS. While I am doing that, I will be replacing my remaining 3 ST3000DMs. I have attached the syslog and diagnostics. Thank you again for your help. repository-syslog-20180410-2132.zip repository-diagnostics-20180410-2131.zip
  9. Aw man, I would love to have gotten my money back on these damn things.
  10. Thanks, tru. I did that and the rebuild is in progress. We are well past the point where it failed before removing that bad Seagate drive, so I believe it will be fine from here. I have owned 6 or so of these ST3000DMs and have not had great success with them; this is the second one that has failed in service, while I still have ST2000DMs humming along nicely in this server. I have 3 more 3000s to replace, and I am going to do them ASAP now. I will post a final update and a log after the rebuild is complete tomorrow. I really appreciate the help, guys.
  11. The Seagate was disk 3. I replaced it and started a parity rebuild before disconnecting the Seagate. The new disk 3 is a new precleared disk. The rebuild failed, and disk 3 is now emulated. After pulling the Seagate out of the machine, it appears to be working properly, but I need to rebuild disk 3 onto the new (current) disk 3.
  12. So how do I restart the rebuild process? Bringing the array online doesn't trigger it.
  13. Looks OK to me - comments? Thank you!
Will read-only check consistency of the filesystem on /dev/md3
Will put log info to 'stdout'
########### reiserfsck --check started at Sun Apr 8 08:58:47 2018 ###########
Replaying journal: Replaying journal: Done.
Reiserfs journal '/dev/md3' in blocks [18..8211]: 0 transactions replayed
Checking internal tree.. finished
Comparing bitmaps..finished
Checking Semantic tree: finished
No corruptions found
There are on the filesystem:
    Leaves 668460
    Internal nodes 4094
    Directories 4072
    Other files 36534
    Data block pointers 671967325 (0 of them are zero)
    Safe links 0
########### reiserfsck finished at Sun Apr 8 09:38:26 2018 ###########
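For anyone following along, output like the above comes from ReiserFS's read-only check. A minimal sketch of running it and confirming the verdict (assuming the array is started in maintenance mode so `/dev/md3` exists; the `check_fs` wrapper and log path are illustrative, not an Unraid convention):

```shell
# check_fs DEVICE runs a read-only reiserfsck (no changes are written)
# against the md device -- using /dev/mdX rather than the raw disk keeps
# parity valid -- and prints PASS if no corruption was reported.
# Note: reiserfsck asks for a literal "Yes" confirmation before running.
check_fs() {
    reiserfsck --check "$1" 2>&1 | tee /tmp/reiserfsck.log
    grep -q 'No corruptions found' /tmp/reiserfsck.log && echo PASS || echo FAIL
}

# Example (array in maintenance mode):
#   check_fs /dev/md3
```

If the check reports corruption, the usual next step is `--fix-fixable`, but only after reading what the check actually found.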
  14. Thanks, johnnie. The unassigned Seagate is the drive that failed. I had disk 3 precleared and ready as a hot spare. I reassigned disk 3 to the new disk (now disk 3) and unassigned that Seagate. I have now pulled the Seagate, restarted, and am repairing the file system on the (partially rebuilt) new disk 3. Thanks for your help; I'll post when it's complete and when the rebuild is complete.
  15. I restarted the system and collected this. Do I need to bring the array back online and post diagnostics after that too? repository-diagnostics-20180408-0628.zip
  16. Thank you for the reply. I restarted the machine and tried to allow it to rebuild again; now I have two unmountable disks, one of them being the disk I am trying to replace - which is not in the same location as the disk I removed. Is it possible this is a power supply problem? Attached is a log. Thanks for any help. repository-syslog-20180408-0537.zip
  17. Basically, the question is: what happens when a rebuild fails? Do I need to re-clear the drive and start over? The array knows the new drive now, but its data isn't rebuilt. Can anyone tell me basically what is going on?
  18. Basic info on my server:
Intel G620 processor
Supermicro C7P67 m/b, with two PCIe x16 slots, 4 SATA 3.0 and 4 SATA 2.0 ports, dual Realtek NICs
Supermicro AOC-SASLP-MV8 8-port SAS/SATA card
I built it in 2011.
  19. I had a disk fail, replaced it with a precleared drive, and started a parity rebuild. This failed within 15 minutes. Here is the log from prior to replacing the failed drive. Can I just restart and try to repeat the rebuild? The replacement disk was partially rebuilt with a small amount of data. repository-syslog-20180407-1527.zip
  20. Here is the SMART report on the disk that is apparently failing with reallocated sectors. repository-smart-20171013-2245.zip
  21. I booted the system and downloaded the diagnostics. Here it is. I have not started the array. repository-diagnostics-20171013-2241.zip
  22. Any thoughts from anyone? Can I give more information?
  23. I was travelling the last couple of days and returned home to a confusing problem. My array showed no disks available, and no shares were available on the LAN. The web interface was relatively unresponsive - I was able to go to Tools and display a log, but not download it. Going to the dashboard, no disks were displayed; same in Main. The only buttons I could see were reboot and shutdown, and neither worked. I did a hard shutdown and restarted. Upon restart, I noticed a disk (a 5000-hour X300) was missing. I went to the dashboard, then back to Main, and the disk was now showing as available, but with reallocated sectors. I rebooted, and the raw value went from 8 to 24. What worries me is that I just lost an older disk last week to a reallocated-sectors failure. I have attached my log, and I would appreciate any advice. I have powered down, and I have a new disk to replace the failing disk, but it is not precleared. Thanks. log 10102017.txt
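The attribute being watched here is SMART attribute 5 (`Reallocated_Sector_Ct`): a raw value that climbs between reboots, as it did here from 8 to 24, means the disk is remapping failing sectors. A hedged sketch for checking it from the console (assumes smartmontools; the `realloc_count` helper name and `/dev/sdX` are illustrative):

```shell
# realloc_count DEVICE prints the raw value of SMART attribute 5
# (Reallocated_Sector_Ct). Zero is normal; a nonzero value that keeps
# growing means the drive is consuming its spare sectors and should be
# replaced before it runs out of them.
realloc_count() {
    smartctl -A "$1" | awk '$2 == "Reallocated_Sector_Ct" { print $NF }'
}

# Example: realloc_count /dev/sdX
```

Checking the value twice, some hours apart under normal load, shows whether the reallocation is still ongoing or was a one-time event.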
  24. OK. I have shut down the array while waiting for the new drive. I will pull the failing drive and let the system rebuild it onto the new drive. It will take about a day and a half, I think, since it has to preclear the new drive first.
  25. I don't have a disk on hand to replace the failing disk - I bought a new disk and it's on its way. I can wait until I get it and replace the failed disk with the new disk, using parity to rebuild it, or I could remove the failing disk, finish the rsync copy using parity for the data, then complete the XFS conversion process for the disk. Obviously the safest way is to rebuild the failed disk, but since I'm only 300 GB away, would it make sense to remove the disk, finish the copy, and then reset the configuration and parity using the XFS replacement?