
SilntBob

Members
  • Posts

    29
  • Joined

  • Last visited


  1. I'm running unRAID version 4.7. I've attached the syslog files. I copied them before I restarted the box, but unfortunately they don't contain much that is useful. When I first ran into problems I stopped the array, but some of the drives that had been flagged with errors were completely non-responsive. I then started validating the filesystems on the drives that appeared to be okay; that took a couple of days, and they all checked out. In the meantime the syslog files rolled over, so I no longer have the syslog data from the actual time of the incident, but I've attached what I have.

     EDIT: okay, I *couldn't* attach the syslog files, because they are each about 15MB, and even after compression they are still about 900K each. I've got 3 of them, and these messages are repeated throughout:

     Jul 18 04:40:01 UnRaidServer emhttp: shcmd (4418): umount /mnt/disk3 >/dev/null 2>&1
     Jul 18 04:40:01 UnRaidServer emhttp: _shcmd: shcmd (4418): exit status: 1
     Jul 18 04:40:01 UnRaidServer emhttp: shcmd (4419): rmdir /mnt/disk3 >/dev/null 2>&1
     Jul 18 04:40:01 UnRaidServer emhttp: _shcmd: shcmd (4419): exit status: 1
     Jul 18 04:40:01 UnRaidServer emhttp: shcmd (4420): umount /mnt/cache >/dev/null 2>&1
     Jul 18 04:40:01 UnRaidServer emhttp: _shcmd: shcmd (4420): exit status: 1
     Jul 18 04:40:01 UnRaidServer emhttp: shcmd (4421): rmdir /mnt/cache >/dev/null 2>&1
     Jul 18 04:40:01 UnRaidServer emhttp: _shcmd: shcmd (4421): exit status: 1
     Jul 18 04:40:01 UnRaidServer emhttp: Retry unmounting disk share(s)...
     Jul 18 04:40:01 UnRaidServer emhttp: mdcmd: write: Invalid argument
     Jul 18 04:40:01 UnRaidServer emhttp: disk_spinning: open: No such file or directory
     Jul 18 04:40:01 UnRaidServer kernel: mdcmd (5803): spindown 0
     Jul 18 04:40:01 UnRaidServer kernel: md: disk0: ATA_OP_STANDBYNOW1 ioctl error: -22
     Jul 18 04:40:02 UnRaidServer emhttp: disk_spinning: open: No such file or directory
     Jul 18 04:40:02 UnRaidServer emhttp: disk_spinning: open: No such file or directory
     Jul 18 04:40:02 UnRaidServer emhttp: shcmd (4422): /usr/sbin/hdparm -y /dev/sdj >/dev/null
     Jul 18 04:40:03 UnRaidServer emhttp: _shcmd: shcmd (4422): exit status: 52
     Jul 18 04:40:06 UnRaidServer emhttp: shcmd (4423): umount /mnt/disk3 >/dev/null 2>&1
     Jul 18 04:40:06 UnRaidServer emhttp: _shcmd: shcmd (4423): exit status: 1
     Jul 18 04:40:06 UnRaidServer emhttp: shcmd (4424): rmdir /mnt/disk3 >/dev/null 2>&1
     Jul 18 04:40:06 UnRaidServer emhttp: _shcmd: shcmd (4424): exit status: 1
     Jul 18 04:40:06 UnRaidServer emhttp: shcmd (4425): umount /mnt/cache >/dev/null 2>&1
     Jul 18 04:40:06 UnRaidServer emhttp: _shcmd: shcmd (4425): exit status: 1
     Jul 18 04:40:06 UnRaidServer emhttp: shcmd (4426): rmdir /mnt/cache >/dev/null 2>&1
     Jul 18 04:40:06 UnRaidServer emhttp: _shcmd: shcmd (4426): exit status: 1
     Jul 18 04:40:06 UnRaidServer emhttp: Retry unmounting disk share(s)...
     Jul 18 04:40:11 UnRaidServer emhttp: shcmd (4427): umount /mnt/disk3 >/dev/null 2>&1
     Jul 18 04:40:11 UnRaidServer emhttp: _shcmd: shcmd (4427): exit status: 1
     Jul 18 04:40:11 UnRaidServer emhttp: shcmd (4428): rmdir /mnt/disk3 >/dev/null 2>&1
     Jul 18 04:40:11 UnRaidServer emhttp: _shcmd: shcmd (4428): exit status: 1
     Jul 18 04:40:11 UnRaidServer emhttp: shcmd (4429): umount /mnt/cache >/dev/null 2>&1
     Jul 18 04:40:11 UnRaidServer emhttp: _shcmd: shcmd (4429): exit status: 1
     Jul 18 04:40:11 UnRaidServer emhttp: shcmd (4430): rmdir /mnt/cache >/dev/null 2>&1
     Jul 18 04:40:11 UnRaidServer emhttp: _shcmd: shcmd (4430): exit status: 1
     Jul 18 04:40:11 UnRaidServer emhttp: Retry unmounting disk share(s)...
     Jul 18 04:40:13 UnRaidServer emhttp: mdcmd: write: Invalid argument
     Jul 18 04:40:13 UnRaidServer emhttp: disk_spinning: open: No such file or directory

     I've also attached SMART reports for all of the drives that reported read/write errors; they are fine.

     As far as "mounting" the drives in another system: I never did that. I PUT the drives in another system (my forensics box), but I didn't make any changes to any of the drives until the very end. I started with the 1TB drive, figuring it would be quicker to experiment with since it was smaller. I first used ddrescue to pull a full image of the drive and started working with that. I was expecting the worst, and I wanted to eliminate other potential system hardware problems, hence the known-good machine and the forensics/data-recovery tools I'm familiar and comfortable with. ddrescue imaged the drive with no hiccups. After that I ran reiserfsck against the image, and it was fine. From there I ran reiserfsck in read-only check mode against the original 1TB drive; it came up clean. I then did the same thing for the 2TB drive that reported the write error; it was clean too. Finally I ran it against the other 2TB drive that had logged read errors; fsck reported 6 corruptions and suggested re-running with --fix-fixable, which I did (a rough sketch of this workflow appears below).

     I understand that by doing that outside of the unRAID system I invalidated parity; I'm okay with that. With 3 drives showing 137M writes while the parity drive only had 703K writes, I don't really trust the parity disk much anyway. It will be interesting to see how many parity errors are reported when I do the parity check. I could make an image of the parity drive before I start, but I really don't know what good a parity-drive image would do for me when multiple drives have had questionable activity on them.

     At this point I think everything is as good as it can be, although I don't know if any of the files themselves were corrupted by the 137M write operations that occurred on those 3 drives. I really wish I understood what could have caused them. There were only 528K reads from the cache drive, so the writes couldn't have come from there, and only 703K writes to the parity drive, so it really doesn't make sense to me how all those write operations landed on the 3 problem drives without a corresponding update to the parity disk.

     I was considering building up a new system, adding some blank drives, and migrating all of the data back into the array, but after looking at the status of things it may not be as bad as it initially seemed (depending on whether the actual files got corrupted by the mystery writes). Before I reinstall the drives, power back up, and try to get the array to start, I thought I'd check whether there is anything else I should do first. From previous experience I assume that when I start things up the one drive will still be marked RED, so the only way to get unRAID happy again will be to follow the "Trust My Array" procedure.

     smart.zip
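     [Editor's note] A minimal sketch of that image-then-check workflow, assuming the suspect drive's data partition shows up as /dev/sdb1 on the forensics box. Device names, image paths, and the mapfile name are illustrative only; imaging the partition (rather than the whole disk) keeps the filesystem at offset 0 so reiserfsck can be pointed straight at the image file.

         # Image the suspect drive's data partition with GNU ddrescue, keeping a
         # mapfile so the copy can resume if it is interrupted.
         ddrescue /dev/sdb1 /mnt/scratch/disk1tb.img /mnt/scratch/disk1tb.map

         # Read-only check of the filesystem inside the image (no changes made).
         reiserfsck --check /mnt/scratch/disk1tb.img

         # If the image checks out, run the same read-only check on the original.
         reiserfsck --check /dev/sdb1

         # Only if --check reports fixable corruption, apply the repairs.
         reiserfsck --fix-fixable /dev/sdb1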
  2. Grab this tool: http://www.ufsexplorer.com/rdr_reiser.php Take a look at your drive; if you are able to see data and want to recover it, pay the $25 for the software. It's solid software and should be easy enough for you to use.
  3. I'm working on recovering from a pretty bad situation and I need some advice. A few days ago I went to access some of my data and noticed that entire user shares were empty, which didn't make much sense. Checking the unRAID web interface, I saw that one of my drives was marked RED and showed 7 errors. Two other drives in the array also showed errors (one with 93 errors, the other with 69) but were still green. Additionally, looking at the read/write stats on the drives, I noticed some strange things:

             Model / Serial No.   Temp.  Size           Free  Reads      Writes       Errors
     parity  _                    *      1,953,514,552  -     433,384    703,653      0
     disk1   WDC_WD20EARS-00M     0°C    1,953,514,552  -     160        134,874,888  93
     disk2   SAMSUNG_HD103UJ_     25°C   976,762,552    -     128,332    60           0
     *disk3  WDC_WD20EADS-00R     0°C    1,953,514,552  -     90         134,548,877  7
     disk4   WDC_WD20EARS-00M     *      1,953,514,552  -     781,639    214,171      0
     disk5   WDC_WD20EARS-00M     *      1,953,514,552  -     445,872    72,117       0
     disk6   WDC_WD10EADS-00L     0°C    976,762,552    -     445,872    134,871,616  69
     disk7   WDC_WD20EARS-00M     *      1,953,514,552  -     1,660,823  62,410       0
     cache   WDC_WD7501AALS-_     *      732,574,552    -     528,746    1,341,675    0

     The capture was taken after I unmounted the drives, so free space isn't reported. I'll admit that most of those drives have very little free space; that is my fault, I've been meaning to address it, and I'm not sure whether it contributed to my problems. What is really strange is that the 3 drives with issues all had 134M+ writes to them and relatively few reads. I don't know why; I don't think I'd been writing that much data, especially since the parity drive only has 700K writes over the same period of time.

     After stopping the array, and before shutting down, I attempted to run a read-only reiserfsck check on each of the drives (see the sketch below). The drives without errors all completed the check fine, but it complained loudly about the other 3 drives. I then powered down the server to locate the problematic drives. Interestingly, all 3 of them were connected to the same SFF-8087 SAS break-out cable. Did something go crazy with the cable or controller? I dunno. So I powered down, pulled the drives, and I'm working on them in a separate system. Running reiserfsck on them one at a time, they all checked out; no major complaints or corruption indicated by fsck.

     I'm still a bit nervous about all of those writes: why did they happen, and why roughly the same number on all 3 drives? What should my next step be? Throw them back in the array, boot it up, "trust the array", cross my fingers, and do a parity check? Any other ideas or suggestions? Once I get it back up and stable, I'm going to add another drive and make sure those drives have a bit more free space on them.

     Thanks.
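     [Editor's note] A minimal sketch of a read-only check like the one described above. The /dev/md1-style device name is an assumption based on how unRAID exposes its data disks; double-check the device mapping on 4.7 before running anything in write mode.

         # Read-only check: --check makes no changes, so parity is unaffected
         # whichever device node you point it at.
         reiserfsck --check /dev/md1

         # Write-mode repairs (e.g. --fix-fixable) are normally run against the
         # /dev/mdN device rather than /dev/sdX1 so that parity stays in sync.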
  4. I can also now confirm success in this area. I think I posted a previous message saying that I had issues, but I determined that those issues stemmed from using the prebuilt unRAID VM ISO that was posted much earlier in this thread. It worked pretty well at the time for raw device passthrough, but I am now back to using the standard unRAID image with the plop ISO to boot from USB. So far things are working pretty well. I've got 7 total drives attached to the AOC-SASLP-MV8 right now (including the parity and cache drives, which I plan on relocating to see if I can get better speed from raw device mapping), but I'm still working on getting data migrated and completing the upgrade to the 5 beta. After making the change, though, the VM has definitely been stable. THANK YOU for the discovery.
  5. Well, I successfully made AOC-SASLP-MV8 work with ESXi 4.1 and VMDirectPath. With "Remote Tech Support" enabled, use WinSCP to connect to ESXi, and add two lines to the /etc/vmware/passthru.map file: Now open your VM's .vmx file and change this: to this: The catch is forcing the use of IOAPIC mode with the "pciPassthru0.msiEnabled = 'FALSE'" statement. Reboot the hypervisor and start your unRAID VM! Good luck.

     I'm attempting to replicate what you have done here. After making all of the configuration changes, rebooting, etc., the unRAID VM is running fine with the card attached, but I'm not able to find any of the drives. Any tricks, things I might be missing, or something I should try? Thanks.
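     [Editor's note] The snippets in the quoted post did not survive; for orientation, a rough sketch of what those edits generally look like. The msiEnabled line is the one named in the quote, while the passthru.map entry format and the Marvell vendor/device IDs shown here are assumptions (confirm your card's actual IDs, e.g. with lspci on the ESXi host, before using them).

         # /etc/vmware/passthru.map -- one entry per PCI device that needs a
         # non-default passthrough behaviour. The format, as I understand it, is:
         #   <vendor-id> <device-id> <reset-method> <fptShareable>
         # Hypothetical entry for the card's Marvell controller:
         11ab  6480  d3d0  false

         # In the unRAID VM's .vmx file -- force IOAPIC-style interrupts for the
         # passed-through device; this is the setting the quoted post calls out:
         pciPassthru0.msiEnabled = "FALSE"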
  6. Using plpcfgbt and plpbt-createiso you can create a .iso file that is configured to boot directly to the USB stick; that is how mine works (a rough example is below). These are both good suggestions. Of course, I only power off my unRAID VM once in a blue moon, so I just press the spacebar to get it to start up when I'm ready, but these other options are good as well.
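     [Editor's note] A rough sketch of building such an ISO with the Plop Boot Manager tools; the exact option names and invocation below are assumptions and should be checked against the plpcfgbt/plpbt-createiso documentation before use.

         # Configure a copy of plpbt.bin so it skips the menu and boots USB:
         #   stm=hidden        - hide the startup menu
         #   cnt=on, cntval=1  - short countdown, then boot the default entry
         #   dbt=usb           - make USB the default boot target
         plpcfgbt stm=hidden cnt=on cntval=1 dbt=usb plpbt.bin

         # Wrap the configured plpbt.bin into a bootable ISO for the VM's CD drive.
         plpbt-createiso plpbt.bin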
  7. No. You get the plop ISO and use that as the boot source for the VM. When the VM boots to the plop CD ISO image, you tell plop to boot from the USB stick. You have to have already passed the USB stick through to the VM in ESXi. At least, that's what I do; it's how I make an ESXi VM boot and run from a USB stick.
  8. What you are going to want to do is get an ISO of plop and use that to boot to the USB. The VM won't boot DIRECTLY from the USB, but it will via redirection.
  9. but if you are using the cache drive, you don't get any parity protection, right? At that point, what is the point?
  10. I'll definitely second that idea.....once I get my cache drive in place, I'll try mounting a VM on UnRaid via an NFS mount, just for giggles to see what the performance is like, but I'm not planning on that for "production" purposes.
  11. (Replying to: "i thought the perc cards are like $600+ havent' looked into them in a while. lotsa things to think about here.")

      New price, probably. Off-warranty eBay prices are much less, which is nice, because they are solid cards.
  12. I think I paid about $100 or $125 for mine off eBay, with cables. Either PERC cards are cheaper than you think, or you got a hell of a deal on your other components; those cards are so common that at used prices they should certainly be within budget. I'm still working on my performance and don't yet have a cache drive configured (it is temporarily being used for something else). Read performance has been just fine; write performance tends to be a little slow, but the rest of the system is cooking along just fine. I've also got another VM acting as my SageTV/Playon/Twonky transcoder box. For now it has a dedicated drive to record to, but once I get the cache drive implemented I'm going to consider recording TV directly to the array.
  13. Can you? I know you can use iSCSI as a datastore destination, but I didn't know you could do it with NFS....
  14. I have no idea what performance would be like running the VMDKs from the unRAID array, but with unRAID's write speed and I/O overhead, I wouldn't recommend running the primary virtual disk from the array. You are going to need SOME sort of drive to boot ESXi from and to hold an ESXi datastore.

      Here is the option if you go ESXi with one drive: that drive holds ESXi plus the datastore for the VMDKs. If you want to use the "extra" space on this drive, you *could* create a virtual disk on a portion of that space and use it as a disk in your unRAID array; you could also use it as the unRAID cache drive (a rough sketch of creating such a virtual disk is below). Presently I'm running a one-drive datastore setup, but I'm going to be changing that in the near future: I'll use a PERC 5/i RAID controller for the ESXi datastore and set up a hardware RAID array using some older disks. The hardware RAID will hold the ESXi installation as well as the datastores for my VMs, which keeps my on-board SATA ports free for pass-through to the unRAID VM. Additionally, you aren't going to be able to run the VMs off of the unRAID array, because unRAID doesn't provide iSCSI pass-through to present the storage array to ESXi; you are going to need at least one drive to host the ESXi installation plus a datastore.

      As far as Carbonite goes, it is a decent program and I'm sure you got a good deal on it, but it has too many limitations for this use compared to CrashPlan. I used to use Carbonite and I've been 100x happier since I switched. The last time I checked, Carbonite doesn't have a Linux client and it doesn't let you back up a network location to the cloud, so it isn't going to work for cloud-syncing an unRAID server; CrashPlan does.

      If you are interested in moving forward and have a server you can play around with, first try installing ESXi on it, just to get a feel for it; it is different, with a slight learning curve. Then install unRAID on a USB stick and play around with *that* on the server. THEN try to build the unRAID VM to run inside of ESXi. Now that I'm getting the kinks worked out of the system I'm really happy with it, and I'm pleased to have everything consolidated onto a single server for the house.
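      [Editor's note] A minimal sketch of carving a virtual disk out of spare datastore space for that purpose, assuming a datastore named "datastore1" and an existing unRAID VM folder; the names and the 100G size are illustrative only.

          # From the ESXi console/SSH: create a thin-provisioned 100 GB VMDK in
          # the unRAID VM's folder on the local datastore.
          vmkfstools -c 100G -d thin /vmfs/volumes/datastore1/unRAID/cache.vmdk

          # Then attach this VMDK to the unRAID VM as an additional virtual disk
          # (via the vSphere client) and assign it as the cache drive inside unRAID.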
  15. You are on the right track, but a little confused; I'm not sure what you are reading about "marrying" hard drive chunks, though. You wouldn't want to run the VHDs for the other VMs off of the unRAID array, since performance would suck. You will have one or more drives for the ESXi datastore; that is where all of the virtual disks will be stored. You will then pass multiple physical drives, or an entire controller, through to the unRAID VM, and the unRAID VM will manage those drives.

      For backups, I'd recommend throwing away Carbonite (I started with that one too) and moving over to CrashPlan. Run CrashPlan on your unRAID VM and select specific shares for online/cloud backup. All of the data on my individual PCs/VMs backs up to the unRAID server that is running CrashPlan, and the data stored in cloud-protected locations on the unRAID server is pushed to the cloud.

      So, in my individual VMs, I configure the persistent storage locations to be shared space on the array, and the local VMDK just runs on the ESXi datastore. In my case I'm working towards building an ESXi datastore that is RAID protected, but it really isn't *that* necessary, other than for avoiding downtime.