
barrygordon

Members
  • Content Count

    501
  • Joined

  • Last visited

Community Reputation

3 Neutral

About barrygordon

  • Rank
    Advanced Member
  • Birthday 10/01/1939

Converted

  • Gender
    Male
  • URL
    http://www.the-gordons.net
  • Location
    Merritt Island FL


  1. That explains it. I will be upgrading to the Pro license after the rebuild is finished but before I start the RFS to XFS conversion, just to give myself breathing room. Thanks.
  2. On May 10 I had to buy a new license key and paid Lime-Tech $89. That would indicate I have a Plus license, which allows up to 12 disks. The question is which disks Lime-Tech counts towards the limit: the flash drive, the disk inside the chassis that is not used for unRAID and shows as unassigned, or the 12 slots used for unRAID disks (normally 11 of them are populated). If the internal disk, which would let me boot an OS other than unRAID, is counted, then I have too many disks whenever all 12 hot-swappable slots are loaded with a drive. I thought that only the disks in the array counted towards the limit. I have no issue with upgrading to the Pro license.
  3. Interesting. I shut down the array (the OS is 6.6.7), removed the current disk 5 (the failing 2TB disk) from the chassis, and installed a new precleared 4TB disk in the slot disk 5 was in. I also placed a brand-new 4TB disk in the one unused chassis slot in preparation for running a preclear. I assigned disk 5 to the new precleared 4TB disk and tried to start the array, but the Start command was greyed out with a message saying that my registration key did not allow for the number of disks installed. I believe my registration key allows for unlimited disks! I shut down the system, removed the spare 4TB disk that had not yet been precleared, and started the system back up. I could now start the array; I am not sure whether pulling the spare drive allowed that or it just needed the reboot. It made me a little nervous that it stayed on "Mounting disks", holding at disk 5, for a while. After a few minutes it got past that and is now rebuilding disk 5. Very good information on rebuild status is presented both on the web page and via notifications. unRAID has come a long way since 2010.
     My only question is why the problem with my registration key? The chassis has 12 hot-swappable slots; I use 11 for the array (1 parity and 10 data) and the 12th slot is where I preclear a disk. Is this a possible bug? I will be using this slot when I do the conversion to XFS and upgrade my disks from 2TB to 4TB.
     My next major endeavor is to start replacing the 2TB disks with 4TB disks and changing over to the XFS file system at the same time. I have read the document on doing this several times and do not believe I will have a problem. Unfortunately, I do not see a one-step process for doing the disk capacity upgrade and changing to XFS at the same time, or am I missing something? (A rough sketch of the copy-based approach is below.) By the way, thanks to all who have assisted me on this thread and others in the past.
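Since a rebuild reproduces the existing filesystem rather than converting it, the capacity upgrade and the RFS-to-XFS change cannot be combined into a single rebuild; the usual route is to format the new disk as XFS and copy the data over. A minimal sketch of that copy step, assuming the old ReiserFS disk is mounted at /mnt/disk5 and the new XFS disk has been added to the array as /mnt/disk11 (disk numbers and paths are illustrative only, not a statement of the official procedure):

```bash
# Copy everything from the ReiserFS disk (disk 5 here) to the freshly
# formatted XFS disk (disk 11 here), preserving permissions, times,
# and extended attributes: -a archive, -v verbose, -P progress/partial,
# -X extended attributes.
rsync -avPX /mnt/disk5/ /mnt/disk11/

# Dry-run comparison afterwards; a clean result lists no files to
# transfer or delete.
rsync -avn --delete /mnt/disk5/ /mnt/disk11/
```

Once the copy is verified, the old disk can be reformatted as XFS and reused for the next disk in the chain.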
  4. So to replace a failing drive (it still works but is reporting errors) I only need to:
     • Stop the array (DO NOT POWER DOWN)
     • Remove the bad drive and replace it with the new larger drive (which was precleared to validate the drive)
     • Start the array
     Any special messages I need to respond to, or is it just that simple?
  5. My system chassis has 12 bays that are hot-swappable. Most instructions have the system shut down when drives have to change positions or new drives are installed, updated, etc. I was wondering: with hot-swappable drives, does the system have to be powered down, or is stopping the array, moving the drives or installing new ones, and re-starting the array okay? That is, can I avoid a shutdown, or are the drive details (e.g. serial number) only read at boot time?
  6. It has not yet "truly failed"; however, I am getting notifications of problems with the drive. I can actually read the drive with no problems at this time, and no writing is being done to it. It is the only drive reporting problems. (A sketch of how to inspect its SMART report from the command line is below.)
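For a drive that is throwing warnings but still readable, the SMART report usually shows whether sectors are actually being reallocated. A minimal sketch using smartctl, assuming the suspect drive happens to be /dev/sdf (the device name is illustrative; confirm it on the Main page before running anything):

```bash
# Full SMART report: identity, overall health assessment, and attributes.
smartctl -a /dev/sdf

# The attributes most worth watching on a drive that is reporting errors:
#   5   Reallocated_Sector_Ct   - sectors already remapped
# 197   Current_Pending_Sector  - sectors waiting to be remapped
# 198   Offline_Uncorrectable   - sectors the drive could not read back
smartctl -A /dev/sdf | grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
```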
  7. I have a 12-slot hot-swappable drive chassis. Slot 1 is the parity drive, slots 2-11 are data drives 1-10, and slot 12 is normally empty, but right now it is running a preclear so I can replace drive 5, which is reporting errors. The OS version is 6.6.7. Last night I upgraded the parity drive to 6TB; that ran fine.
     I would like to begin the process of converting the array to XFS. Currently, 2 of the 10 data drives are XFS, the other 8 being RFS. I have a new 4TB drive which is being precleared. Should I first copy the bad drive to the new drive and then ???? (I am not sure what to do to get the new 4TB drive, with the data on it, into the array replacing the failing 2TB drive), or just replace the failing 2TB drive with the precleared 4TB drive, let the rebuild occur, and worry about the RFS to XFS conversion later? Any and all advice appreciated.
  8. I have done two non-correcting parity checks: one before I deleted a bunch of old data and one after. They both showed zero errors. I have also scheduled a non-correcting parity check to run monthly. I believe I am now covered.
  9. I believe I am almost done dealing with my unRAID Tower and can once again start treating it as an appliance. First I want to thank all in this community who contributed advice, but most of all for lowering my anxiety level. I have run two successful parity checks (non-correcting) with zero errors, removed all sorts of older data which I no longer need, and spot-checked each of the drives; I can read from them fine. The array has 6.09 TB of free space. I have ordered the following equipment:
     1- A 3TB hard drive to replace drive 5, which was causing sector-reallocation notifications
     2- A 6TB hard drive to become my new parity drive
     3- An LSI 6Gbps SAS HBA (LSI 9201-8i)
     It is my intent to replace drive 5 first, wait a while, and then replace the parity drive. I will keep the LSI controller, but I am not sure when I will install it; obviously, if the current Marvell controller fails I will replace it. My only concern is that the last time I ran a correcting parity check there were over 4000 sync errors, but at the end of the check it stated that parity was valid. A little confusing. Once again thanks to all, and if there are additional concerns/comments/advice just post them. Barry Gordon
  10. trurl: That was my impression; I just wasn't sure if the mere booting of Windows would try to write to all the drives it could see.
     itimpi: That is what I planned to do. As I mentioned, there is an internal drive (sdb) in the case that has the GRUB loader and two OSs, Win XP and Ubuntu; I am not sure what versions. If I were to pull all of the array drives and boot from that internal drive, selecting Ubuntu, I should be able to verify that it is intact and working. I could then do it again with all of the array drives plugged in, and Ubuntu should see them all. The above is an activity for another day, another time.
  11. I am having no other problems. I can access data on each of the data drives. I have a lot of data that I don't need any longer, and I plan to delete those files after the parity check has completed; I will delete them from my Win 10 system using network access. Is there a way to delete files from the unRAID GUI? (A command-line alternative is sketched below.)
     With regard to booting another OS, I was just curious; I have a very inquisitive mind. Prior to all of this I had both shares and disks visible when I looked at the Tower from my Win system, back when the Tower was running unRAID 6.1.3. Now, to get that view on my Win system, I need to enable both disk shares and user shares. I don't remember what was enabled when I was running 6.1.3; it might have been both. Other than the known bug regarding copying files onto themselves, is there any issue in enabling both? I do not use the command line of the unRAID (Tower) system; in fact, I don't even log in, I just use the graphical interface from my browser. I am going to bed. Tomorrow is another day.
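For completeness, deletion can also be done from the unRAID console over the user-share path, which removes the files from whichever data disks they actually live on. A tiny sketch, assuming a hypothetical share folder named OldStuff (the name and path are illustrative only):

```bash
# The /mnt/user tree is the merged view of all data disks; deleting a
# folder here removes the files from whichever disks they live on.
ls /mnt/user/OldStuff        # look before you leap
rm -r /mnt/user/OldStuff     # delete the whole (hypothetical) share folder
```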
  12. I have just started a memory test. When it is done I will report any errors, reboot, restart the array, and run a parity check WITHOUT writing corrections to the array. Could a loose cable be causing the types of problems being seen? Right now it is a bitch to take the case out of the rack, as it is quite heavy and I am restricted in what I can lift; however, I can solve that problem with a little carpentry.
     I was wondering: the system has an internal hard drive which it does not use but can be booted from. I believe it has Windows XP on it, as it was built in 2009-2010. The drive is seen by unRAID as not part of the array; it is a Samsung 120GB drive which the system recognizes as an unassigned drive (sdb), so it is obviously connected. There is also a DVD drive which I do not believe is currently connected. If I boot Win XP, will it muck up the hard drives in the hot-swap slots? I can always pull them out. If I build Win 10 onto the Samsung drive and boot from it, will it muck up the hard drives? By muck up, I mean write to them. Can I make a self-contained version of Linux on a flash drive, boot it, and have it come up without affecting any of the array drives?
     Is there a chkdsk program or its equivalent that I could then use to check each of the array disks in read-only mode? (A sketch of a read-only filesystem check is below.) If that is all possible, since I have a spare spindle, can I make a copy of an array drive to a fresh drive using that system? Is there a plugin that does this sort of thing? If the SAS/SATA controller is bad I can get a new one for a not too outrageous price, but how do I determine whether it is bad? Advice? Comments?
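One way to answer the read-only check question is with the filesystem check tools themselves, assuming the array is started in Maintenance mode so the /dev/mdX devices exist but nothing is mounted (the disk numbers below are illustrative; pick the one matching the disk being checked so parity stays consistent):

```bash
# Read-only check of a ReiserFS data disk (disk 5 here): --check only
# reports problems, it does not modify anything.
reiserfsck --check /dev/md5

# Read-only check of an XFS data disk (disk 2 here): -n is "no modify" mode.
xfs_repair -n /dev/md2
```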
  13. bonienl:
     1- I would suggest that the instructions for replacing a failed drive or installing a larger drive should explicitly state the need to re-assign the slot with the new drive.
     2- I have attached a set of screen snapshots as requested.
     Frank1940 (1):
     1- Yes, "write corrections to parity" is checked.
     2- I have attached a listing of the screen from Tools > System Devices. The actual board that controls the drives is a SuperMicro AOC-SASLP-MV8 SAS/SATA HBA, with 3ware SFF-8087 to SFF-8087 SAS-to-SAS cables.
     3- I have attached a copy of the "History" as requested.
     trurl:
     1- I will run a non-correcting parity check and a RAM test later today. I had run a memory test earlier and it showed no memory errors.
     Frank1940 (2): I am not sure what the significance of the error spacing you have pointed out is. Is there any way of ascertaining which drive it pertains to?
     Tools_System Devices.txt Parity History.txt tower-syslog-20190512-1631.zip unRaid screen snapshots.zip
  14. First of all, let me reiterate my thanks. I am not a beginner in this field (computer science), having been in it since 1960. I have been a programmer, software designer, hardware designer, and have worked in almost all other subfields of CS. I was an adjunct lecturer in the computer science graduate school of a major university, I was part of the original GPS team in the mid-1960s, and I retired in 2005 as Director of IT Operations for a Fortune 100 company. I wrote my own HA systems in node.js and built all of the control systems for my home theater. I have always treated unRAID as an appliance. I am not fluent in Unix/Linux but can get by. I believe I do understand how things work at the lowest levels.
     I have run a parity check 3 times over the past week, always with "write corrections to parity" checked. It always comes back with parity valid, finding between 4000 and 4200 errors. I understand why it believes parity is valid if the corrections were written to the parity drive. But if parity is valid, why does a parity check run again a short time later report the same scale of errors (4000+)? It is as if parity is not being corrected on the parity drive, or some other disk(s) is failing constantly.
     I want to start replacing the older 2TB drives with larger 3 or 4TB drives and the parity drive with a 6TB drive. I understand why one should not replace a failed drive if parity is not valid. My plan was to replace disk 5 first, as that seems to be reporting the most warning notifications, then replace the parity drive with a 6TB unit, and finally replace the older 2TB drives with newer 3 or 4TB drives. I understand the advice "do not replace a failed drive unless parity is valid"; I do know how the parity system works. Because of the errors being reported on a parity check, I am not sure what the best approach would be. What information can I provide for someone to assist me? I don't want to waste everybody's time, but I do want to get this resolved.
     The instructions for replacing a failed drive appear straightforward, e.g. "Replace a Single Disk with a Bigger One". This is the case where you are replacing a single small disk with a bigger one:
     • Stop the array.
     • Power down the unit.
     • Replace smaller disk with new bigger disk.
     • Power up the unit.
     • Start the array.
     I am surprised that there is no need to assign the new drive into the array. Thanks.
  15. Things are getting clearer. My only concern at this time is that the last parity check (writing corrections to the parity drive) returned over 4000 errors. I assume that a new parity check should return 0 errors unless something strange is really going on. I am going to start one later this evening (writing corrections to the parity drive); hopefully it will come back with 0 errors. I will then order two new drives, a 6TB for parity and a 3TB for data. I then plan to replace drive 5 (2TB) with the 3TB drive and, when that all settles down, replace the 3TB parity drive with the 6TB drive. That will give me a 3TB unused drive and a 2TB unused drive. Question: what happened to the concept of pre-clearing a disk? (A rough stress-test sketch along those lines is below.) This community is FANTASTIC. It reminds me of the old Pronto PRO Professional community on Remote Central, where I was very active (my younger days).
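As far as I know, pre-clearing has always been a community script (later a plugin) rather than a stock unRAID feature; its two jobs are to stress-test a new disk and to write the "precleared" signature so the disk can be added to the array without a lengthy clear step. A rough stand-in for the stress-test part, using plain badblocks on a disk that is NOT yet in the array (the device name is illustrative, and this wipes the disk completely):

```bash
# Destructive four-pass read/write surface test of a brand-new disk.
# -w write-mode test (wipes the disk), -s show progress, -v verbose,
# -b 4096 uses 4 KiB blocks so large drives stay within badblocks'
# block-count limit.
badblocks -wsv -b 4096 /dev/sdX

# Check the SMART counters afterwards; rising reallocated or pending
# sector counts mean the drive should go back for replacement.
smartctl -A /dev/sdX
```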