
ClunkClunk

Members
  • Posts: 277
  • Joined
  • Last visited

Everything posted by ClunkClunk

  1. I know you're frustrated; so am I. It'd be nice to have more communication and firmer (and met) timeframes on 5.0 final, but saying "don't be offended" and then insulting someone ("just man up") is an unfortunate choice on your part. We're all adults here; let's keep it professional.
  2. Just encountered this today, and your results are consistent with mine. Set to autostart and don't touch the web UI at all: preclear lists all disks, but doesn't list my disk that is available for preclearing. Load the web UI, run another preclear list, and I get the expected behavior.
  3. I'll say that rc16c seemed to resolve my Transport Endpoint Not Connected issues with Plex. I was able to recently add my entire music library (approx 9,000 tracks) and scan it on Plex with nary an issue, when previously even scanning a few TV shows at a time (maybe 200 episodes?) would cause the issue to happen.
  4. Is this to address the Transport Endpoint not connected errors that Plex users were seeing?
  5. I didn't really do anything special. I just ran Slackware 13 in a virtual machine and compiled it following some of the instructions in the first post. I think there was an additional note from me on how to keep gtk (the GUI) from compiling, but that was about it. I tried it last week as I had a little time, but I had to rebuild my Slackware 13.37 image from scratch, and then HandBrake 0.9.9 wouldn't compile cleanly, as I needed fribidi and libass if I recall. I'll try to revisit it if I can, but it's doubtful I'll put much time into it, so if someone else wants to work on it, go for it!
  6. I think I'll do something like that in the future. I was checking out what you've posted about regularly scheduling md5deep cataloging on a per-disk basis, and it's quite interesting.
  7. So a final update on my situation: I ended up successfully building parity on the 2TB, just trusting the data disks. Over the past few days I've not noticed any issues on my array and haven't found any missing or corrupt files, though given the steps I took, it's possible I'll find some in the future. I'm going to re-test the 3TB drive very carefully and assign it as parity at a later date, once I'm back in my comfort zone with it. For now, marking this as solved. I hope the ddrescue image mounting notes help someone else in the future :-)
  8. A few more things: if you prefer, you can pass the -r flag to losetup when you set up the image on the loopback device, making it read-only. When you're all done, unmount the image:

     umount /mnt/500gb

     and then clear the loopback device:

     losetup -d /dev/loop0

     The final thing I'm not sure of is how this works for 4K "Advanced Format" disks. I believe the partitions start on different sectors, and I know those drives emulate 512-byte sectors even though they physically use 4096 bytes per sector. I'm not sure how that plays out when mounting images created from a 4K disk.
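To keep the whole sequence from these posts in one place, here's a dry-run sketch of the full loopback lifecycle (read-only setup, mount, unmount, detach). It only *prints* the commands so you can review them before pasting them on the server; the names (500gb.img, /mnt/500gb, /dev/loop0) and the 32256 offset are from my 500GB example, so adjust them for your image.

```shell
# Print (not run) the read-only loopback lifecycle so the commands can
# be reviewed first. Names and offset are from the 500GB example above
# and are assumptions -- substitute your own.
mount_image_dryrun() {
    img=500gb.img
    mnt=/mnt/500gb
    loop=/dev/loop0
    offset=32256                  # 63 sectors * 512 bytes/sector
    echo "mkdir -p $mnt"
    echo "losetup -r --offset $offset $loop $img"   # -r = read-only
    echo "mount $loop $mnt"
    echo "umount $mnt"
    echo "losetup -d $loop"
}
mount_image_dryrun
```

Running the printed commands in that order sets the image up read-only, mounts it, and tears everything back down when you're done exploring.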
  9. I think I figured out how to calculate the offset using fdisk on the image file:

     fdisk 500gb.img

     The response will be:

     You must set cylinders.
     You can do this from the extra functions menu.

     WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
     switch off the mode (command 'c') and change display units to
     sectors (command 'u').

     Command (m for help):

     Type "x" then Enter for expert mode, followed by "p" then Enter to print the partition table. In the case of my 500GB drive this is the response:

     Disk 500gb.img: 1 heads, 63 sectors, 0 cylinders

     Nr AF  Hd Sec  Cyl  Hd Sec  Cyl     Start      Size ID
      1 00   0   0    0   0   0    0        63 976773105 83
     Partition 1 has different physical/logical beginnings (non-Linux?):
          phys=(0, 0, 0) logical=(1, 0, 1)
     Partition 1 has different physical/logical endings:
          phys=(0, 0, 0) logical=(15504335, 0, 63)
     Partition 1 does not end on cylinder boundary.
      2 00   0   0    0   0   0    0         0         0 00
      3 00   0   0    0   0   0    0         0         0 00
      4 00   0   0    0   0   0    0         0         0 00

     Type "q" then Enter to quit without saving any changes. The key is the "Nr" column to find partition 1 (the only partition on this image), then the "Start" column to find the sector it starts on (63 here). Multiply 63 (sectors) by 512 (bytes per sector) to get your byte offset.
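The arithmetic in the post above can be wrapped in a tiny helper. This is a hypothetical convenience function (not from the original post); it assumes 512 bytes per sector, which matches the fdisk output shown.

```shell
# sector_to_offset: convert a partition's start sector (fdisk's "Start"
# column) into the byte offset that losetup --offset expects. Defaults
# to 512-byte sectors, as fdisk reported for this image.
sector_to_offset() {
    start_sector="$1"
    bytes_per_sector="${2:-512}"
    echo $(( start_sector * bytes_per_sector ))
}

sector_to_offset 63      # classic DOS alignment -> 32256
sector_to_offset 2048    # modern 1MiB alignment -> 1048576
```

The optional second argument lets you plug in a different sector size if you're experimenting with how a 4K-native image might differ.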
  10. So here's where I'm at today:
      • I copied my questionable 2TB to an image file using ddrescue, onto that 3TB that's out of the array
      • Did the same with one of my 500GBs that also redballed when using a different super.dat
      • Neither reported any errors in copying with ddrescue
      • Explored both the physical drives' contents and the images. No missing files that I could tell, and it seemed my best choice was just to accept the disks as OK.
      • Did an initconfig to invalidate parity (backed up my various super.dats just to be careful), and carefully set back up all my data disks and the parity disk.
      • Started the array, and started building parity from scratch on all disks.
      • Explored the two disks that unRAID marked as redballed at various points. Nothing major missing that I can tell, but I don't have any logs of precisely what was on them, nor any checksums. I guess I'll just have to stumble across any problems and replace the files from their originals as necessary.

      I determined how to mount those .img files from ddrescue in case it helps anyone in the future.

      Make a mount point for your image:

      mkdir /mnt/500gb

      Use fdisk to determine the offset (I had to do this on the physical disk I was working on; unRAID doesn't have the "file" command to check the offset in the image file itself):

      fdisk -l /dev/sdj

      Use losetup to set up the image on the loopback device using the offset you determined:

      losetup --offset 32256 /dev/loop0 500gb.img

      Mount the loopback device on the mount point you created:

      mount /dev/loop0 /mnt/500gb
  11. I figured I could probably just dd or rsync the files to another location, which I may end up doing. I went with ddrescue just to save a bit of time, as it seems to copy nearly as fast as dd, but in case it eventually encounters an unreadable block, I want to have that time and wear and tear on a possibly failing drive spent wisely. If it finishes ddrescue with 0 reported errors, at least I'll know that the drive itself is probably OK from a physical standpoint, and I'll just need to explore and see if there's any data missing from it. Frankly I was kind of surprised that reiserfsck didn't report any issues, because unRAID reported the drive as unformatted when I attempted to include the drive in the array using my super.dat that had the 3TB as parity.
  12. Also, I'm at 1137GB, and no errors, so I'm hoping that it's a good sign.
  13. I didn't have another 2TB available to copy to – a 3TB and some 1.5TBs were all that I had spare. I believe I ran reiserfsck on the physical drive that redballed once before; since I switched back to my old super.dat, it's been reporting as green (for some reason Disk 12 is now reporting as redballed, so I'm assuming that's the parity-emulated one).
  14. Just an update:
      • SMART short and long tests showed no issues
      • I managed to swap in the super.dat file I had from about a week ago and got the array started in maintenance mode
      • Ran reiserfsck --check on the 2TB drive that redballed. No issues reported.
      • Formatted the 3TB drive I intended to be parity with unMENU's reiser formatting, and mounted it read/write. I decided that since the parity rebuild onto this 3TB was never completed it was essentially useless as parity, and I was comfortable with the stability of that 3TB since it passed 3 preclear sessions with no issues.
      • Installed WeeboTech's ddrescue package, and started ddrescue (invoking just the -n flag) outputting to the 3TB about 15 minutes ago. It's completed about 54GB so far with no errors.

      My plan is to let this complete, and if it reports no issues, I'll also copy the contents of Disk 12 using ddrescue, as with the older super.dat it's redballed (though I'm guessing it has no true issues). Then I need to determine how to get the data *out* of the .img files I saved from ddrescue, and eventually just invalidate parity and force it to rebuild onto my 2TB, and if that's successful, do the same on the 3TB.
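For the copy step above, it's worth double-checking source and destination before anything runs, since a swapped pair of arguments would overwrite the drive you're trying to rescue. A hypothetical helper (not from the post) that just assembles and prints the command; the device name and paths here are examples, not my exact ones. ddrescue's argument order is infile, outfile, logfile, and -n skips the slow retry/scraping phase, as in the post.

```shell
# Hypothetical helper: assemble the ddrescue command so source and
# destination can be eyeballed before running it for real.
# Argument order matters: infile (the failing disk), outfile (the
# image), logfile (lets an interrupted copy resume).
ddrescue_cmd() {
    src="$1"; img="$2"; log="$3"
    echo "ddrescue -n $src $img $log"
}

ddrescue_cmd /dev/sdX /mnt/3tb/disk.img /mnt/3tb/disk.log
```

If the printed command looks right, run it verbatim; keeping the logfile alongside the image means a second ddrescue pass (without -n) can later retry only the bad areas.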
  15. Looks like you ran a SMART short test. You should run another one, saving to a separate file, and then compare the two. Here's how I did it. Save the initial SMART report, then start the short test:

      smartctl -a /dev/sdb > /boot/smart_sdb_1.txt
      smartctl -t short /dev/sdb

      The test will indicate that it's been started and give a time estimate. It took about a minute on mine. After the time has passed, gather the results into a second text file and then compare:

      smartctl -a /dev/sdb > /boot/smart_sdb_2.txt
      diff /boot/smart_sdb_1.txt /boot/smart_sdb_2.txt

      The "diff" command shows you the differences between the files. See if anything appears abnormal. Next up is starting the long test:

      smartctl -t long /dev/sdb

      On my system it estimated 4.25 hours, but it ended up taking about 4.5 hours. You can check if it's done by running:

      smartctl -a /dev/sdb

      Look for the "Self-test execution status" line. If it says the test is still in progress, wait longer and run the command again to check. Once it's complete, gather the results into a third file and diff it against the second one:

      smartctl -a /dev/sdb > /boot/smart_sdb_3.txt
      diff /boot/smart_sdb_2.txt /boot/smart_sdb_3.txt

      In my situation, both short and long tests didn't show anything out of the ordinary, so I did a reiserfs check, and that went well too, so I'm going to copy the drive using ddrescue to another drive and work from there today.
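The compare step above can be narrowed down a bit: a raw diff of two smartctl -a reports always shows the timestamp header changing, which is noise. Here's a small helper (my own sketch, not from the post) that keeps only the changed lines and drops the "Local Time" header line.

```shell
# smart_diff (hypothetical helper): show only the lines that changed
# between two saved smartctl -a reports, dropping the "Local Time"
# header, which differs on every run.
smart_diff() {
    diff "$1" "$2" | grep '^[<>]' | grep -v 'Local Time'
}
```

Usage would be e.g. `smart_diff /boot/smart_sdb_1.txt /boot/smart_sdb_2.txt`; empty output means nothing but the timestamp moved, which is what you want to see.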
  16. WeeboTech, thanks for the advice. Did a short test, no major differences. Doing a long test now.
  17. Are you me? Because it sounds like we both have the exact same issue: http://lime-technology.com/forum/index.php?topic=26265.0 Coincidentally enough, I was also going from 2TB parity to 3TB.
  18. One more thing that might be helpful: I backed up my super.dat about a week ago. I tried swapping that in, and it found my original 2TB as parity and gave it a green ball, though it gave a red ball on a completely different 500GB disk of mine. I'm hesitant to start the array like this, as I have no idea if it's safe: the 2TB parity was only valid up to the point the 3TB parity rebuild was started, so any changes to the array after that wouldn't be reflected on the 2TB (though to be honest, I don't understand all the finer points of the parity rebuild process).
  19. I was upgrading my parity drive today from 2TB to 3TB and encountered a red ball on a completely different drive while it was going. Here's what I did:
      • Shut down the server
      • Removed a 640GB that was in the unRAID case, and put it in an eSATA enclosure
      • Installed the 3TB (leaving my original 2TB parity still installed)
      • Also added a 60GB SSD that I'll later use as cache
      • Booted back up
      • Saw unRAID found all the drives I was expecting and started normally (remember, I left the original 2TB parity installed)
      • I think I restarted, but set the array to not auto-start
      • Assigned the new 3TB as parity
      • Started the parity rebuild onto the new parity drive

      About an hour and a half later, my wife mentioned that some of the video shares weren't working, and I saw that Disk 5 was reporting a red ball. I checked the syslog. It was huge; nearly all of it was filled with entries similar to this:

      Feb 28 17:32:54 Tower kernel: md: disk5 read error
      Feb 28 17:32:54 Tower kernel: handle_stripe read error: 121995568/5, count: 1
      Feb 28 17:32:54 Tower kernel: md: disk6 read error
      Feb 28 17:32:54 Tower kernel: handle_stripe read error: 121995568/6, count: 1

      I attempted a clean powerdown, then resorted to some kill commands, and eventually a hard shut-off. I double-checked and reseated all cabling/connections to Disk 5 and the motherboard SATA port it's on. I'm a bit at a loss as to what to do at this point. I still have the 2TB parity, since the 3TB parity rebuild didn't complete, but with Disk 5 redballing, I'm not sure how to proceed. Thanks for any advice and help!

      I've linked to the syslogs, since one is decently large. The smaller one is the syslog from right before the issue, in case that's helpful.
      https://www.dropbox.com/s/0c50a0jhs7f1dag/syslog-20130228-163140.txt.zip
      https://www.dropbox.com/s/sae98vllcwvr684/syslog-20130228-175311.txt.zip
  20. I haven't gotten around to doing anything with HandBrake and unRAID 5.0 (my unRAID server went from having one of the fastest CPUs in the house, to one of the slowest over the last few years, so it's not been a priority for me), but I think the install instructions will work to at least install 0.9.6, though it won't be a plugin.
  21. I took a quick glance at it, but the Slackware 12 virtual machine I used to build 0.9.5 wouldn't successfully build 0.9.6 or the snapshots. I have a pretty new baby in the house, so I haven't had much time to dig at it. What did you end up doing, aiden? Did you get it compiled on unRAID 5.0? A Slackware install?
  22. Attached are two unMENU .conf files for HandBrake. handbrake-unmenu-package.conf is for installing 0.9.5, the most recent release version of HandBrake (same as what was posted a few months back). handbrake-svn-unmenu-package.conf is for installing the most recent SVN build of HandBrake, 3853 (I haven't tested this one, so please give me feedback). If you're not using unMENU, and still want to use HandBrake, here are the URLs for HandBrake for unRAID: http://www.clunkclunk.com/HandBrakeCLI-0.9.5-unraid.tgz http://www.clunkclunk.com/HandBrakeCLI-svn3853-unraid.tgz handbrake-unmenu-package.conf handbrake-svn-unmenu-package.conf
  23. kapperz, I haven't updated it beyond 0.9.5, and that's what the current unmenu .conf is referencing. Since the handbrake unmenu .conf isn't distributed as part of Joe L.'s unmenu, there's no current way to autoupdate. However, if there's a desire for me to compile a newer snapshot, or a newer final version comes out, post in this thread, and I can compile and update the .conf file.