kevinsyn

Members
  • Posts: 27
  • Joined
  • Last visited

Converted

  • Gender
    Undisclosed


kevinsyn's Achievements

Noob (1/14)

Reputation: 0

  1. Adding diagnostics blackhole-diagnostics-20230225-1251.zip
  2. Hi there, I accidentally mounted the wrong drive from my unassigned drives while replacing one of the drives in my main shared array. I immediately stopped the array and swapped in the right drive. My main array is all good, but my unassigned drive's XFS filesystem is understandably corrupt. I had some files on it; is there any way to recover them? Output from running the filesystem check script on the unassigned drive:

     FS: crypto_LUKS
     Opening crypto_LUKS device '/dev/sdf1'...
     Executing file system check: /sbin/xfs_repair -n /dev/mapper/WorkingDrive 2>&1
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
             - scan filesystem freespace and inode maps...
     Metadata CRC error detected at 0x46b78d, xfs_inobt block 0xaea80678/0x1000
     btree block 3/3 is suspect, error -74
     bad magic # 0xbe99d14 in inobt block 3/3
     sb_fdblocks 2424336792, counted 2441366498
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0 through agno = 19
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0 through agno = 19 (out of order)
     No modify flag set, skipping phase 5
     Inode allocation btrees are too corrupted, skipping phases 6 and 7
     No modify flag set, skipping filesystem flush and exiting.
     Closing crypto_LUKS device '/dev/sdf1'...
     File system corruption detected!
     RUN WITH CORRECT FLAG
     DONE

     When trying to repair it with "Run with correct flag":

     FS: crypto_LUKS
     Opening crypto_LUKS device '/dev/sdf1'...
     Executing file system check: /sbin/xfs_repair -e /dev/mapper/WorkingDrive 2>&1
     xfs_repair: cannot open /dev/mapper/WorkingDrive: Device or resource busy
     Closing crypto_LUKS device '/dev/sdf1'...
     File system corruption detected!

     Any way to recover this or repair the drive?
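The "Device or resource busy" error usually means something still holds the mapped device open (the array, a share, or the mounting plugin itself). A minimal recovery sketch follows; the mount point is my assumption, and the -L step destroys the journal, so treat it strictly as a last resort:

```shell
# Hypothetical recovery sequence -- device path and mount point are examples.
# 1) Make sure nothing is holding the mapped device open.
umount /mnt/disks/WorkingDrive 2>/dev/null
lsof /dev/mapper/WorkingDrive          # should print nothing before repairing

# 2) Mount once so XFS can replay its journal, as the -n run suggested.
mount /dev/mapper/WorkingDrive /mnt/tmp && umount /mnt/tmp

# 3) Run the real repair (no -n) against the now-unmounted device.
xfs_repair /dev/mapper/WorkingDrive

# If the log cannot be replayed at all, -L zeroes it; data loss is possible.
# xfs_repair -L /dev/mapper/WorkingDrive
```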
  3. Just wanted to highlight this as I ran into this problem recently and THIS is what fixed it. Cleaned up user shares and problem went away.
  4. Hey guys, I'm getting these errors on my cache drive (sdc). Is this drive gone? The icon shows it's still a normal drive. It's an older drive and there's obviously nothing really important on it. Any recommended test to run on the hard drive for a diagnosis? Also, I can't take the array offline at the moment through the webUI, which I'm assuming is the cache drive not letting the array shut down cleanly. Thanks in advance!

     Apr 16 18:43:12 blackhole kernel: end_request: I/O error, dev sdc, sector 10688976 (Errors)
     Apr 16 18:43:12 blackhole kernel: REISERFS error (device sdc1): zam-7001 reiserfs_find_entry: io error (Errors)
     Apr 16 18:43:22 blackhole emhttp: get_filesystem_status: statfs: /mnt/user/backup Input/output error (Errors)
     Apr 16 18:43:22 blackhole kernel: sd 0:0:1:0: [sdc] Unhandled error code (Errors)
     Apr 16 18:43:22 blackhole kernel: sd 0:0:1:0: [sdc] Result: hostbyte=0x04 driverbyte=0x00 (System)
     Apr 16 18:43:22 blackhole kernel: sd 0:0:1:0: [sdc] CDB: cdb[0]=0x28: 28 00 00 01 00 d8 00 00 08 00 (Drive related)
     Apr 16 18:43:22 blackhole kernel: end_request: I/O error, dev sdc, sector 65752 (Errors)
     Apr 16 18:43:22 blackhole kernel: REISERFS error (device sdc1): zam-7001 reiserfs_find_entry: io error (Errors)
     Apr 16 18:43:22 blackhole kernel: sd 0:0:1:0: [sdc] Unhandled error code (Errors)
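For a drive throwing raw I/O errors like the above, a SMART check is the usual first diagnostic. A sketch using smartmontools (assuming it is installed and your controller passes SMART through):

```shell
# Read SMART attributes: watch Reallocated_Sector_Ct, Current_Pending_Sector,
# and Offline_Uncorrectable -- rising values point to a failing drive.
smartctl -a /dev/sdc

# Kick off a short (~2 minute) self-test, then read back the result.
smartctl -t short /dev/sdc
smartctl -l selftest /dev/sdc
```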
  5. Running just a regular instance of Ubuntu Server for CouchPotato/SAB/Sick Beard. In terms of settings, I think I have it set with 2GB RAM and 1 core. I have the hard drives mounted and I'm using vmxnet3 for both guests. What problems are you having? Is it with the CouchPotato setup, with setting up ESXi, or a particular problem with CouchPotato on ESXi? Setup for me was pretty standard once the Ubuntu Server guest was up: just install git, download it to your /home directory (or whatever you want to use), and that's about it.
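For reference, the "download it to your /home directory" step might look roughly like the following; the repository URL and entry-point script are my assumptions about the CouchPotato layout of that era, so verify them before running:

```shell
# Install git and Python, then fetch and start CouchPotato from /home.
sudo apt-get install -y git python
cd ~
git clone https://github.com/CouchPotato/CouchPotatoServer.git
python CouchPotatoServer/CouchPotato.py --daemon   # then configure via the web UI
```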
  6. As Helmonder pointed out, it does not support VT-d. I had to sell mine and buy the Xeon processor. Surprisingly, the 2120 held its resale value pretty well: I bought it originally for $129 retail and sold it for $90. Similar to Helmonder, I moved all my mods to a separate VM and unRAID runs noticeably faster/snappier. Now the next milestone for me is to see how stable it runs long term. With my old setup, I had a problem at around 70-80 days of uptime where I had to restart. So far, so good.
  7. Unfortunately this would not work for me, as I would have to pass through a whole MV8 to the new VM. I'm already using 18 slots, and logistically speaking it would be a pain in the ass to RDM the drives just for preclears and then connect them to the MV8.

     This is pretty much what I did. I have the unRAID VM on its own with just unMENU installed, and SAB/CouchPotato/Sick Beard etc. on separate VMs with the unRAID drives mounted. Make sure you use vmxnet3! 10GbE locally makes a huge difference!!

     I had a peculiar problem with my unRAID. I originally had 8GB before upgrading to ESXi. When upgrading, I moved all the plugins such as Sick Beard/CouchPotato/SAB out to their own respective VMs, so I lowered the unRAID VM's RAM to 4GB. However, I started noticing kernel panics whenever I did preclears. I've since changed it back to 8GB, since I have room to spare at the moment. It kind of seems like a waste at 8GB, though, since most of it is being used as swap. Anyone else have this problem?

     You aren't using a SAS expander by any chance, are you? The reason I ask is that I had problems preclearing drives on my VMs, and someone else pointed out to me that I needed to upgrade the firmware on my SAS expander (see my sig for model). I haven't had a chance to update the firmware and test again. I should also upgrade to newer firmware on my M1015's, since I'm still using P11. So you might try those options if they apply. I don't remember getting any panics, but the preclears were failing, saying they could NOT preclear the drives; taking them to my preclear station, which isn't on a SAS expander or virtualized, would clear the drives just fine.

     I'm not 100% sure it's a problem with preclears + RAM, to be honest. However, since booting the unRAID VM with 8GB RAM, I've successfully precleared 4 drives in the VM and the server has been up for 2 weeks with no panics. Anyways, I'm not using any type of SAS expander, just the MV8's.
  8. I am passing through the entire controller cards. I just tried disabling the VT-d option in the BIOS; upon rebooting, the unRAID VM did not load because passthrough was not supported. I enabled it again and passthrough is working again, but parity checks and rebuilds are unbearably slow. I have pulled ESXi out of the boot sequence and am running unRAID on the machine as if it were the only thing installed. Rebuilding a failed drive now at 77MB/s; it will be done in just over 300 minutes. Then I will be back to the drawing board on how to get unRAID to work this fast in ESXi.

     To be honest, I'm a little stumped on this one. I don't think there is a way to easily diagnose the problem unless you were to take all the parts out, insert them one at a time, and find the root cause. If you take the ESXi stick out and boot unRAID directly, do the parity checks run as normal?

     I had a peculiar problem with my unRAID. I originally had 8GB before upgrading to ESXi. When upgrading, I moved all the plugins such as Sick Beard/CouchPotato/SAB out to their own respective VMs, so I lowered the unRAID VM's RAM to 4GB. However, I started noticing kernel panics whenever I did preclears. I've since changed it back to 8GB, since I have room to spare at the moment. It kind of seems like a waste at 8GB, though, since most of it is being used as swap. Anyone else have this problem?
  9. Yes, unfortunately I don't think that has passthrough as well. I ran into the same problem and decided to upgrade to the Ivy Bridge series (v2), mostly because of availability at the time of purchase. There are reports of a number of user issues with Ivy Bridge, but thus far everything has gone flawlessly and I haven't had any issues. If you are upgrading to an Ivy Bridge CPU, remember you have to flash your BIOS to 2.00b in order for the board to support the CPU. The motherboard has a built-in video card, so don't worry about that. And use IPMI if you have the X9SCM-F.
  10. What BIOS version are you using for Ivy Bridge? I've upgraded to Ivy Bridge and have no problems with parity checks or preclears through the VM. Maybe upgrade your BIOS before throwing away (not literally, of course) the CPU? Just a thought. The only problem I've had thus far has been a kernel panic when preclearing, but I believe I traced that to allocating too little RAM to the VM.
  11. I actually experienced this exact problem as well. I was preclearing 2x 3TB drives when it occurred. I'm suspecting it has something to do with the preclearing? The machine was running fine before. Will do some tests and report back on this. Edit: Also running no mods other than VMware Tools.
  12. Speakers:
      KEF Q700 floorstanding speakers
      KEF Q200c center speaker
      KEF Q300 rears
      KEF Q400 subs
  13. I believe I have found the root cause. The RAM was not being used by the system cache; it was just being eaten by the server. Not sure how I didn't notice, but the host date was wrong: it was set to March 2013 instead of February (today's date). This was screwing up the crontabs (the logs were filled with crontab complaints about the time discrepancy) and some other stuff. After fixing that, the server is back to its usual resource usage of about 1.5GB. I moved the Python stuff off the server and it's now sitting at around 800MB usage. Not bad.
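If anyone hits the same symptom, the check-and-fix is quick; the timestamp and NTP server below are just examples:

```shell
# Confirm what the host actually thinks the date is.
date

# One-off manual correction (run as root; timestamp is an example).
date -s "2013-02-25 12:51"

# Better: sync against NTP so the clock stays right and cron behaves.
ntpdate pool.ntp.org
```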
  14. No problems thus far... maybe if they were closer to the sub but the sub is on the other side of the room...