Everything posted by rharvey

  1. OK, so saving the data has become priority number one; what's the best way to do that? If I need to boot a lower version of UNRAID to successfully fix the corruption, I would rather use a new USB drive and not mess with this one, as it has a couple of VMs and Dockers too.
  2. So, said another way: if I rebuilt the data from parity, the result would be the exact same corruption on the new disk...?
  3. I still have good parity, so wouldn't the safest thing be to install a new drive, let it rebuild the data, and then reformat this troubled drive...?
  4. Here is the diagnostic file: tower-diagnostics-20170330-1609.zip
  5. Oh boy, I'm in trouble now. I put the system in maintenance mode and ran the disk check from the GUI. It finished with a ton of errors, all suggesting that I needed to do a --rebuild-tree, which I did. It ran for several hours and ended before completing, with these errors:

     0%....20%....40%..
     block 368996707: The number of items (4096) is incorrect, should be (1) - corrected
     block 368996707: The free space (49152) is incorrect, should be (4048) - corrected
     pass0: vpf-10110: block 368996707, item (0): Unknown item type found [514 570430216 0x6f1 (15)] - deleted
     Segmentation fault

     If I run the check again, it quickly comes back with:

     reiserfsck --check started at Thu Mar 30 15:49:46 2017
     ###########
     Replaying journal: Done.
     Reiserfs journal '/dev/md4' in blocks [18..8211]: 0 transactions replayed
     Checking internal tree.. Bad root block 0. (--rebuild-tree did not complete)

     I just attempted to start the array normally and it is seeing the drive as needing to be formatted. Of course I did not back it up; it's really hard to back up 3TB of data. Have I lost everything...? It says parity is still good.
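For reference, the command sequence involved here can be sketched as below. The /dev/md4 device comes from the log in this post, so substitute your own disk's mdX; the array must be in maintenance mode so the device exists but is not mounted. Because --rebuild-tree rewrites the filesystem and a failed run can make things worse, the sketch defaults to printing the commands instead of executing them (set RUN= to run for real, ideally only after imaging or rescue-copying the disk).

```shell
DEV=${DEV:-/dev/md4}   # from the log above; substitute your disk's mdX device
RUN=${RUN:-echo}       # dry-run by default; set RUN= (empty) to actually execute

# 1. Read-only check -- reports whether --rebuild-tree is needed:
$RUN reiserfsck --check "$DEV"

# 2. Destructive repair; --scan-whole-partition makes reiserfsck search the
#    entire device for leaf nodes, the usual fallback after "Bad root block 0":
$RUN reiserfsck --rebuild-tree --scan-whole-partition "$DEV"
```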
  6. I recently upgraded from 4.5 to the latest version 6. In doing so, it seems many directories and files have been set to read-only. I'm in the process of moving lots of files from old drives onto new ones and would then like to delete the files off the original drive. I first tried the mv command, but that was failing. I then used the "New Permissions" tool, and it also failed because these files are set as read-only. How do I remove this read-only flag...?
  7. I think I just figured out my issue: the script in the Lime GUI only sets "rw" on files and does not set the "x" that would allow me to delete the files. The error I'm getting is that the files can't be deleted. DUH - is there any way to adjust the script to include "rwx" on files...?
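If the permissions script really is leaving "x" off, the underlying fix is a recursive chmod with u+rwX: the capital X grants execute only on directories (and on files that are already executable), and the execute bit on a directory is what lets you enter it and delete its contents. A sketch against a scratch directory; on the server the target would be a disk or share path such as /mnt/disk1 (hypothetical).

```shell
TARGET=${TARGET:-./perm-demo}   # placeholder; e.g. /mnt/disk1 on the server

# Recreate the symptom: a nested tree with write permission stripped.
mkdir -p "$TARGET/shows"
touch "$TARGET/shows/episode.txt"
chmod -R a-w "$TARGET"

# The fix: write back for the owner everywhere; the capital X adds execute
# only where it belongs (directories), not on plain data files.
chmod -R u+rwX "$TARGET"

# Deletion now works because the directories are writable and traversable.
rm -r "$TARGET/shows"
```

Note that deleting a file actually requires write and execute permission on the directory containing it, not "x" on the file itself, which is why u+rwX (rather than a blanket rwx on every file) is the idiomatic form.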
  8. Hey guys - the migration continues, but I hit a roadblock....! I have all the 4TB drives installed and they are all working fine. I'm now starting to MOVE data from the smaller (old) drives over to these new 4TB drives, and I'm having an issue with it. I used the "New Permissions" script, which should have made quick and simple work of copying about 10GB and thousands of files nested in folders. That script does not seem to work on all files, and my "mv" commands are failing. I'm not good with command-line stuff, so I'm a little lost as to why the script does not always work. Any thoughts...?
  9. Sorry for the delayed response; yes, that parity check did finally complete. From that point I started moving files off drives that I was not sure about and had issues with previously. Got the array down from 16 data drives to 12 without issue; it just took some time, but copying right on the server is way faster.

     So now I'm in the process of replacing old 1TB and 1.5TB drives with new Seagate IronWolf NAS 4TB drives. I'm on data drive 3 of 5 right now; as expected, this is taking time to complete, but it's working so far. One very odd thing is that when the data rebuilds are done, the GUI becomes unreachable; this has happened on both of the first two drives I swapped out. The array is alive, the shares are reachable, even Plex Server is working (a nice new benefit of version 6), but the GUI is not. A reboot brings everything, including the GUI, back. I just don't like powering the box down without being able to look into its real state at the time.

     This is my first upgrade in MANY years and my expectations were very high for a far more capable and stable UNRAID. What I think I have found, with just a few days' experience, is a far more capable product (like Docker) but not one that is more stable; it still seems very finicky. Once I have ALL of the old disks out and ONLY new NAS-quality drives in, maybe it will be more stable.

     So once I have all 6 new 4TB drives (1 parity, 5 data) swapped in, I will move the rest of the files on the old smaller drives over to the new ones and then, lastly, pull all the old drives out. At that point I will have an array about the same size as before, but with far fewer drives, and higher-quality ones at that. Hopefully things will be more stable by then and I can enjoy a trouble-free UNRAID for years to come.

     One thing I have not been doing is preclear. I know it would have been smart to do, but without a spare box to preclear with, it would have added several days to the entire process.
  10. UPDATE - After a cold restart of the box, the original parity drive is shown again, so I re-assigned it. Started a new parity check and it's running, BUT crazy slow this time. I normally average about 60 MB/sec, but it's now running at just above 4 MB/sec and says it will take 5 days at this speed. 2nd UPDATE - I let the parity check run for a while and it has picked up some speed; not normal speed yet, but close. I'm going to let it run till it finishes and see what I have then.
  11. Hey folks, thanks for the guidance...! It seemed to me that there was little risk in just creating a new flash drive and giving it a go; that way, if all did not go well, I could still put in the old version 4 flash and boot from it. Well, it looked like it was actually going to work OK. Everything came up, I was able to re-assign all the drives, and then I started a new parity check. I looked last night before bed and I had 6 hours left on the check. This morning the array is alive, but the parity drive was marked with an X and I could see that drive 15 had a shitload of errors. Down in the control area it said the parity check was stopped by user; I did not stop it. Drive 15 was one that I was having issues with running V4, so I decided to remove it from the array, but once I did that, my parity drive was gone as well as drive 15. And I can't re-assign the parity drive, as it's no longer even in the drop-down. HELP....!
  12. I have been a VERY long-time user of UNRAID. I migrated from a very early version to my current system about 6 years ago, and it has served me very well. Unfortunately, this system was built when hard drives were smaller than they are now, so I have a system with way too many drives in it. I'm starting to have drive issues, and I would like to come up with a plan to upgrade from my current 4.5.6 version of UNRAID, get away from all these smaller disks, and get to an array with maybe only 5 total disks, but as large as I can afford.

      Where I'm struggling is how to get the upgrade and the data migration done smoothly and without loss. I would be willing to purchase new computer hardware and start with a totally new system, but I'm not sure I need to do that, as the current SuperMicro system was top of the line many years ago and the hardware is running fine.

      Can I do an in-place upgrade to 6.X on the existing box, put in a larger parity drive, and start moving files around...? I'm thinking this may be risky; what would the steps be? I have 16 total drives today and not a single spare drive bay, so that needs to be considered as well. I would like to go with at least 4TB drives, and I have about 20TB of data to move onto the new drives I buy so that I can remove the old ones. That too is a challenge, as removing a drive from the array is never simple.

      Any thoughts on the best way to accomplish this large task...? And what would the steps be for this OS and data migration...?