
jphipps

Everything posted by jphipps

  1. It is Saturday, I would watch a movie instead... There is always tomorrow...
  2. I agree, I think the new config is the way to go... I would probably hold off on the preclear until the new parity is built. I found if you overload the IO, it causes issues. If you have another PC you could boot up with unRaid, you could preclear on another machine, or wait until the parity is built, and then preclear the other drives if you only have the one machine...
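      For reference, the preclear itself is typically kicked off from a telnet/ssh (or screen) session with Joe L.'s preclear script; a minimal sketch, assuming the script is already sitting on your flash drive and /dev/sdX is the disk to clear -- double check the device name first, since preclear wipes the disk:

          cd /boot
          # /dev/sdX is only a placeholder; confirm the device before running.
          ./preclear_disk.sh /dev/sdX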
  3. There went that good idea... Sounding like the preclear might be the only option..
  4. I do have one, but not sure how good it is.. I think when you write to the location of the pending sector it corrects it. So in theory if you did a full parity build, it should correct the parity drive. From what I read, it means the block has unstable contents and a write would correct it. I have only had one and the preclear fixes it, but I would assume the build would write to every block as well and correct it, but you would be without the parity protection during the rebuild.
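      If you want to keep an eye on that count while you decide, it shows up in the SMART output; a quick check, assuming smartctl is available and sdX is the disk in question:

          # Attribute 197 (Current_Pending_Sector) should drop back to 0 once the
          # unstable blocks get rewritten (e.g. by a preclear or rebuild).
          smartctl -a /dev/sdX | grep -i -e pending -e reallocated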
  5. From my experience, the only way I have fixed them is a preclear. It would be nice to know if someone has a better idea; that is a pretty painful fix..
  6. My main server is in a Norco case; I believe it only uses 5 molex connectors to power the backplane. On my other server, I just used a combination of SATA power connectors from the PSU and molex-to-SATA power adapter cables for the rest.
  7. If you search Amazon, I have: Corsair RM Series 1000 Watt ATX/EPS 80PLUS Gold-Certified Power Supply - CP-9020062-NA RM1000. I am running 20 hard drives and haven't had any issues, it is only $149, and it is modular... 1000W should easily handle the 12 drives.. I would try the Xen boot before anything else... Some of those counts may not mean anything if they are old. My parity drive has a few errors on it, but they occurred years ago, haven't increased, and I haven't seen any issues from them..
  8. Not sure if this would help you or not, but a few of us found if you boot up under xen, some drive and parity issues seem to go away.
  9. If you have a spare drive, it would probably be good to rule out a disk issue. It is strange, I don't see any errors in your logs. Were you only seeing the slowness from that one drive, or all data access?
  10. You probably have two routes: 1. Rebuild disk12, either onto the same disk or another drive, from the emulated disk12. 2. Start with a new config and use the data from the physical disk12. If you are confident in the data on the physical disk12, I would verify the contents with the array offline, and use option 2. If you put all disks back into the original slots and bring the array up with parity marked valid, the parity check should clean up any invalid parity bits.
  11. From your syslog, it looks like ata5 is having some IO issues, which I think is sdc (disk2). I wonder if that is causing your slowness.
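      If you want to confirm which drive ata5 actually maps to before pulling cables, the sysfs link for each sd device runs through the ataN port it hangs off (a rough check; the exact path layout can vary by kernel):

          # The symlink target for the affected disk should contain "ata5".
          ls -l /sys/block/sd* | grep ata5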
  12. The only thing you could probably try is the Xen boot, but anything remote is always risky. If you modify the syslinux.cfg you can switch the default boot from regular to Xen and reboot. Most other things would be best done locally..
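      For what it's worth, switching the default is just a matter of moving the "menu default" line in syslinux.cfg (on v6 it lives at /boot/syslinux/syslinux.cfg) from the regular entry to the Xen entry that should already be there. Roughly like this -- the label names and kernel/append lines below are only illustrative, keep whatever is already in your file:

          label unRAID OS
            kernel /bzimage
            append initrd=/bzroot

          label Xen/unRAID OS
            menu default
            kernel /syslinux/mboot.c32
            append /xen --- /bzimage --- /bzroot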
  13. Looks like you have a lot of IO errors occurring on ata7 and ata8 when you get the parity errors. You may want to shut it down, double check all the cabling, and check the SMART report on those 2 disks to verify there are no disk errors going on.
  14. A few of us have had some parity issues and crashes running under the normal unRaid boot. For some reason, if we boot under Xen, the parity check seems to work fine. Without seeing your logs it is hard to tell if there is another cause, but you may want to give that a try and see what happens.
  15. What do the permissions look like on the files from Terminal? You could probably just copy the zip file under a share, and then from a shell session on the unRaid server copy the files to the flash drive.
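      A rough sketch of that, assuming the zip landed in a share called "downloads" (the share and file names are just examples; the flash drive is mounted at /boot on the server):

          # Check ownership/permissions first, then copy to the flash drive.
          ls -l /mnt/user/downloads/myfiles.zip
          cp /mnt/user/downloads/myfiles.zip /boot/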
  16. I believe /mnt/user0 is only really used by the mover script to migrate data off the cache; you don't ever reference it directly. My guess is that if you look through all your share settings, one has the cache drive turned on. I had that on my server after upgrading; I didn't have a cache drive, but it was turned on for a couple of shares for some reason...
  17. You can find them using find, similar to: find /mnt/user/ -name ".*" -print In theory, you can use: find /mnt/user/ -name ".*" -exec rm {} \; I am always a bit worried about mass delete scripts, so I might just build a file of rm commands so I can verify what is happening. From a shell: find /mnt/user/ -name ".*" -print | while read x; do echo "rm ${x}" >> mydeletefiles; done Then mydeletefiles will just be a lot of rm (delete) commands you can verify and then run if you are happy with the list...
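      Spread across lines, and with the filename quoted so paths containing spaces don't break the generated rm commands, the same idea looks roughly like this:

          # Build a reviewable list of delete commands for hidden files under /mnt/user/.
          find /mnt/user/ -name ".*" -print | while read -r x
          do
              echo "rm \"${x}\"" >> mydeletefiles
          done

          # Inspect mydeletefiles, then run it once you are happy with the list:
          sh mydeletefiles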
  18. I do use screen sometimes. I am on my workstation so much of the time, I just use an ssh session normally. The problem with screen is you can't easily scroll back, so unless you are logging to a file, it is hard to see the history. Plus, since you can restart with rsync so easily, it is not a big deal if it drops...
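      If you do want the history, screen can log the whole session to a file; a minimal sketch of running the copy inside a named, logged session:

          # -S names the session, -L logs output to screenlog.0 in the current directory.
          screen -S migrate -L
          # ...run the rsync inside, then detach with Ctrl-A d...
          # Reattach later with:
          screen -r migrate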
  19. I have migrated all the disks in my 2 servers to XFS using the following command from a putty/shell: rsync -av --progress --remove-source-files /mnt/diskX/ /mnt/diskY/ This will migrate the data and remove each source file once it is successfully copied. I like this method because if it gets killed, you can easily restart and it picks up where it left off. After the first run completes, I run it a few more times to catch any files that were created during the copy. The target file is a hidden ".file" during the copy, so from the share's point of view there is only a single copy of the file during the transfer. After the copy, the source file is deleted and the target is renamed, so the switch is barely noticeable to the user. You can skip the remove option if you don't want the overhead of the delete, but then you also have dups on the share until you are done.
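      As a concrete example (the disk numbers are just placeholders), moving disk1's data onto disk5 and then checking that nothing was left behind:

          # First pass; repeat a couple of times to pick up files created mid-copy.
          rsync -av --progress --remove-source-files /mnt/disk1/ /mnt/disk5/

          # -n is a dry run: anything it lists would still need to be copied.
          rsync -avn /mnt/disk1/ /mnt/disk5/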
  20. I am seeing the same issue. I ran 2 parity checks in a row, and it repaired the same block both runs with no errors at all in the logs.
  21. Yeah, I agree with itimpi. I only use that if I know the parity is pretty close and I know there won't be many updates. You should be fine; a new config usually kicks off a correcting parity check and it should true up all the incorrect blocks. As itimpi said, I would kick off another check right after, and you should get 0 corrections or there is something else going on...
  22. The new config is not really as bad as it sounds; it doesn't affect the shares, users, or your data. The main thing is to know what disk was in what slot so you don't clobber a data drive. I usually take a screenshot of the Main tab so I know which disk serial number was in which slot.
      1. Stop the array.
      2. Under Tools, click on New Config, check the "are you sure" box, and click Apply.
      3. Add all the disks back to the configuration with the same serial number in the same slot. They should all show auto as the filesystem type, or at least mine did.
      4. Click on disk9 and switch the filesystem type to XFS.
      5. Start the array.
      It should bring the array up and start a parity build, but the array should be available immediately with all your data, and disk9 should show as unformatted. Just click the Format button, and it should finish creating the new filesystem on that disk in about 5 minutes or so. Then in about 12 hours you should be protected..
  23. The only options you have to get that disk back in are either to replace the disk (either the same disk or a new one) and let it rebuild, or to start with a new config. If you start with a new config, you can specify another filesystem type for disk9 and format it once the array starts.
  24. Cool, glad that worked... From my experience, I am not really a fan of BTRFS...