kevinsyn

Members • Posts: 27
Everything posted by kevinsyn

  1. Adding diagnostics blackhole-diagnostics-20230225-1251.zip
  2. Hi there. I accidentally assigned the wrong drive: one of my unassigned drives got used as a replacement for one of the drives in my main shared array. I immediately stopped the array and swapped in the correct drive. My main array is all good, but the XFS filesystem on the unassigned drive is understandably corrupt. I had some files on it; is there any way to recover them? Running the filesystem check script on the unassigned drive gives:

        FS: crypto_LUKS
        Opening crypto_LUKS device '/dev/sdf1'...
        Executing file system check: /sbin/xfs_repair -n /dev/mapper/WorkingDrive 2>&1
        Phase 1 - find and verify superblock...
        Phase 2 - using internal log
                - zero log...
        ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
                - scan filesystem freespace and inode maps...
        Metadata CRC error detected at 0x46b78d, xfs_inobt block 0xaea80678/0x1000
        btree block 3/3 is suspect, error -74
        bad magic # 0xbe99d14 in inobt block 3/3
        sb_fdblocks 2424336792, counted 2441366498
                - found root inode chunk
        Phase 3 - for each AG...
                - scan (but don't clear) agi unlinked lists...
                - process known inodes and perform inode discovery...
                - agno = 0 ... agno = 19
                - process newly discovered inodes...
        Phase 4 - check for duplicate blocks...
                - setting up duplicate extent list...
                - check for inodes claiming duplicate blocks...
                - agno = 0 ... agno = 19
        No modify flag set, skipping phase 5
        Inode allocation btrees are too corrupted, skipping phases 6 and 7
        No modify flag set, skipping filesystem flush and exiting.
        Closing crypto_LUKS device '/dev/sdf1'...
        File system corruption detected!
        RUN WITH CORRECT FLAG DONE

     When I try to repair it with "Run with correct flag":

        FS: crypto_LUKS
        Opening crypto_LUKS device '/dev/sdf1'...
        Executing file system check: /sbin/xfs_repair -e /dev/mapper/WorkingDrive 2>&1
        xfs_repair: cannot open /dev/mapper/WorkingDrive: Device or resource busy
        Closing crypto_LUKS device '/dev/sdf1'...
        File system corruption detected!

     Is there any way to recover this or to repair the drive?
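A minimal sketch of the manual sequence that usually resolves this ("Device or resource busy" typically means the mapper device is still held open, e.g. mounted or in use by another process). The device and mapper names are taken from the log above; the mount point is made up, and none of this is confirmed advice from the thread:

```bash
# Sketch (assumptions: /dev/sdf1 and mapper name WorkingDrive from the log
# above; /mnt/recover is a hypothetical mount point).
cryptsetup luksOpen /dev/sdf1 WorkingDrive    # open the LUKS container
mkdir -p /mnt/recover
mount /dev/mapper/WorkingDrive /mnt/recover   # mounting replays the XFS journal
umount /mnt/recover                           # xfs_repair needs it unmounted
xfs_repair /dev/mapper/WorkingDrive           # use -L only as a last resort
cryptsetup luksClose WorkingDrive             # close the container when done
```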
  3. Just wanted to highlight this, as I ran into this problem recently and THIS is what fixed it: cleaning up the user shares made the problem go away.
  4. Hey guys, I'm getting these errors on my cache drive (sdc). Is this drive gone? The icon shows it's still a normal drive. It's an older drive and there's nothing really important on it, obviously. Any recommended tests to run on the drive for a diagnosis? Also, I can't take the array offline at the moment through the web UI, which I'm assuming is the cache drive not letting the array shut down cleanly. Thanks in advance!

        Apr 16 18:43:12 blackhole kernel: end_request: I/O error, dev sdc, sector 10688976 (Errors)
        Apr 16 18:43:12 blackhole kernel: REISERFS error (device sdc1): zam-7001 reiserfs_find_entry: io error (Errors)
        Apr 16 18:43:22 blackhole emhttp: get_filesystem_status: statfs: /mnt/user/backup Input/output error (Errors)
        Apr 16 18:43:22 blackhole kernel: sd 0:0:1:0: [sdc] Unhandled error code (Errors)
        Apr 16 18:43:22 blackhole kernel: sd 0:0:1:0: [sdc] Result: hostbyte=0x04 driverbyte=0x00 (System)
        Apr 16 18:43:22 blackhole kernel: sd 0:0:1:0: [sdc] CDB: cdb[0]=0x28: 28 00 00 01 00 d8 00 00 08 00 (Drive related)
        Apr 16 18:43:22 blackhole kernel: end_request: I/O error, dev sdc, sector 65752 (Errors)
        Apr 16 18:43:22 blackhole kernel: REISERFS error (device sdc1): zam-7001 reiserfs_find_entry: io error (Errors)
        Apr 16 18:43:22 blackhole kernel: sd 0:0:1:0: [sdc] Unhandled error code (Errors)
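On the "any recommended tests" question, a minimal sketch using smartctl from smartmontools (a standard tool, not something named in the original post); /dev/sdc is taken from the log above:

```bash
# Sketch: basic SMART triage on the suspect cache drive.
smartctl -H /dev/sdc           # quick overall health verdict
smartctl -a /dev/sdc           # full attributes: watch Reallocated_Sector_Ct,
                               # Current_Pending_Sector, UDMA_CRC_Error_Count
smartctl -t long /dev/sdc      # start an extended self-test, then later:
smartctl -l selftest /dev/sdc  # read the self-test results
```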
  5. Running just a regular instance of Ubuntu Server for CouchPotato/SAB/Sickbeard. In terms of settings, I think I have it set with 2GB RAM and 1 core? I have the hard drives mounted and am using vmxnet3 for both guests. What problems are you having? Is it with the CouchPotato setup, with setting up ESXi, or a particular problem with CouchPotato under ESXi? Setup for me was pretty standard once the Ubuntu Server guest was up: just install git, clone it into your /home directory (or wherever you want), and that's about it.
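A minimal sketch of that "install git and clone it" step, assuming the CouchPotatoServer repository of the era (URL and default port from memory; verify before relying on them):

```bash
# Sketch: install CouchPotato on the Ubuntu Server guest as described above.
sudo apt-get install git python        # CouchPotato of that era ran on Python 2
cd ~                                   # "download it to your /home directory"
git clone https://github.com/RuudBurger/CouchPotatoServer.git
python CouchPotatoServer/CouchPotato.py   # web UI defaults to port 5050
```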
  6. As Helmonder pointed out, it does not support VT-d. I had to sell mine and buy the Xeon processor. Surprisingly, the 2120 held its resale value pretty well: I bought it originally for $129 retail and sold it for $90. Similar to Helmonder, I moved all my mods to a separate VM, and unRAID runs noticeably faster/snappier. Now the next milestone for me is to see how stable it runs long-term. With my old setup, I had a problem at around 70-80 days of uptime where I had to restart. So far, so good.
  7. Unfortunately this would not work for me, as I would have to pass through a whole MV8 to the new VM. I'm already using 18 slots, and logistically speaking it would be a pain in the ass to RDM the drives just for preclears and then connect them to the MV8.

     This is pretty much what I did. I have the unRAID VM on its own with just unMENU installed, and SAB/CouchPotato/Sickbeard etc. on separate VMs with the unRAID drives mounted. Make sure you use vmxnet3! 10GbE locally makes a huge difference!!

     Quoting my earlier post: "I had a peculiar problem with my unRAID. I originally had 8GB before upgrading to ESXi. When upgrading, I moved all the plugins such as Sickbeard/CouchPotato/SAB out to their own respective VMs, so I lowered the unRAID VM's RAM to 4GB. However, I started noticing kernel panics whenever I did preclears, so I've since changed it back to 8GB, since I have room to spare at the moment. Anyway, it kind of seems like a waste at 8GB, since most of it is just used as swap. Anyone else have this problem?"

     And the reply I got: "You aren't using a SAS expander by any chance, are you? The reason I ask is that I have problems preclearing drives on my VMs, and someone pointed out that I needed to upgrade the firmware on my SAS expander (see my sig for the model). I haven't had a chance to update the firmware and test again. I should also upgrade to newer firmware on my M1015's, since I'm still using P11. So you might try those options if they apply. I don't remember getting any panics, but the preclears were failing, saying they could NOT preclear the drives, and taking them to my preclear station, which isn't on a SAS expander or virtualized, would clear the drives just fine."

     I'm not 100% sure it is a problem with preclears + RAM, to be honest. However, since booting the unRAID VM with 8GB of RAM, I've successfully precleared 4 drives in the VM and the server has been up for 2 weeks with no panics. Anyway, I'm not using any type of SAS expander, just the MV8's.
  8. Quoting the post I was replying to: "I am passing through the entire controller cards. I just tried disabling the VT-d option in the BIOS; upon rebooting, the unRAID VM did not load because passthrough was not supported. I enabled it again and passthrough is working again, but parity checks and rebuilds are unbearably slow. I have pulled ESXi out of the boot sequence and am running unRAID on the machine as if it were the only thing installed. Rebuilding a failed drive now at 77MB/sec; it will be done in just over 300 minutes. Then I will be back to the drawing board on how to get unRAID to work this fast in ESXi."

     To be honest, I'm a little stumped on this one. I don't think there is an easy way to diagnose the problem short of taking all the parts out, inserting them one at a time, and finding the root cause. If you take the ESXi stick out and boot unRAID directly, do the parity checks run at normal speed?

     I had a peculiar problem with my unRAID. I originally had 8GB before upgrading to ESXi. When upgrading, I moved all the plugins such as Sickbeard/CouchPotato/SAB out to their own respective VMs, so I lowered the unRAID VM's RAM to 4GB. However, I started noticing kernel panics whenever I did preclears, so I've since changed it back to 8GB, since I have room to spare at the moment. Anyway, it kind of seems like a waste at 8GB, since most of it is just used as swap. Anyone else have this problem?
  9. Yes, unfortunately I don't think that one has passthrough as well. I ran into the same problem and decided to upgrade to the Ivy Bridge series (v2), mostly because of availability at the time of purchase. There are reports of a number of user issues with Ivy Bridge, but thus far everything has gone flawlessly and I haven't had any issues. If you are upgrading to an Ivy Bridge CPU, remember you have to flash your BIOS to 2.00b in order to support the CPU. The motherboard has a built-in video card, so don't worry about that. And use IPMI if you have the X9SCM-F.
  10. What BIOS version are you using for Ivy Bridge? I've upgraded to Ivy Bridge and have no problems with parity checks or preclears through the VM. Maybe upgrade your BIOS before throwing away the CPU (not literally, of course)? Just a thought. The only problem I've had thus far has been a kernel panic when preclearing, but I believe I traced that to allocating too little RAM to the VM.
  11. I actually experienced this exact problem as well. I was preclearing 2x 3TB drives when it occurred. I'm suspecting it has something to do with the preclearing? The machine was running fine before. Will do some tests and report back on this. Edit: Also running no mods other than VMware Tools.
  12. Speakers:
      KEF Q700 Floorstanding Speakers
      KEF Q200c Center Speakers
      KEF Q300 Rears
      KEF Q400 Subs
  13. I believe I have found the root cause. The RAM was not being used by the system cache; it was just being eaten by the server. Not sure how I didn't notice, but the host date was wrong: it was set to March 2013 instead of February (today's date). This was screwing up the crontabs (the logs were filled with crontab complaining about the time discrepancy) and some other stuff. Anyway, after fixing that, the server is back to its usual resource usage of about 1.5GB. I moved the Python stuff off the server and it's now sitting at around 800MB usage. Not bad.
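A minimal sketch of the clock fix described above, shown for a Linux guest (the poster fixed the date on the ESXi host; the date value here is illustrative):

```bash
# Sketch: correct a system date that drifted a month ahead, so cron stops
# logging time-discrepancy warnings.
date                               # confirm the clock really is wrong
date -s "2013-02-25 12:51:00"      # set the correct date/time (example value)
hwclock --systohc                  # persist it to the hardware clock
ntpdate pool.ntp.org               # or keep it synced via NTP going forward
```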
  14. No problems thus far... maybe if they were closer to the sub but the sub is on the other side of the room...
  15. Sorry for the late reply; I haven't checked back in a while! I used double-sided tape to bind the bottom and the PSU strap. I didn't want to drill through it, as I might want to reposition it later. The cage is fairly stable when you put it against the PCI clamp slots, so it doesn't move around much. HOWEVER, the trick is to space the drive cage slightly away from the motherboard. If the cage touches the motherboard pins, the machine will not boot, so I put a spacer between the cage and the motherboard. Blackhole has been updated with new parts and ESXi!
  16. John, I've had 2 of those Corsair Performance Pros fail on me in the past year and a half. Not very happy with their performance. Switched over to the Samsung 840 Pro and will hopefully have better results.

      Just recently converted my server to ESXi successfully. Noticing something really strange, though: the server is EATING through RAM. RAM is stuck at 95-100% usage all the time after 2 days of uptime. I checked processes and can't find what is accounting for the 5GB of RAM (allocated 8GB; the server usually sits around 2.5GB). I had SAB, CouchPotato, and Sickbeard running on it at the time, but those three processes were taking up less than 200MB of RAM. I have disabled them and will be transferring them into a separate VM guest soon. Any thoughts on what could be causing this? I don't think an unRAID guest needs more than 8GB of RAM, and I was thinking of putting it down to 4GB after I move the Python add-ons off, but maxing out 8GB is kind of ridiculous, no?

      A quick follow-up: I had a power failure today and I was actually home, so it was a good time to pull both SSDs and test them. The good SSD reported zero errors; back into the server it goes. The bad SSD reported so many reallocated sectors I can't count, and every time I tried to do anything to it that involved a write, it would disconnect and go offline until I unplugged its power. Back to Corsair it goes; it didn't even last a year. I also noticed they have discontinued the Performance Pro series of SSDs, so I'm wondering what I'll get back. I also noticed none of their current drive lines have enhanced garbage collection. If they send me anything less than another Performance Pro or a Neutron, I'll be insulted.
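For the "checked processes and can't find what is accounting for the RAM" step, a minimal sketch of the usual guest-side checks (standard tools, not commands from the original post):

```bash
# Sketch: find out where guest memory is actually going.
free -m                          # used vs. buffers/cache split
ps aux --sort=-rss | head -15    # processes ranked by resident memory
head -20 /proc/meminfo           # Cached/Slab lines reveal kernel-side usage
```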
  17. I read somewhere that passing through the motherboard controllers can yield unexpected results? But where would your datastore drives go if you're passing the entire motherboard controller to the VM?
  18. Quoting John's reply: "Yes, there are people reporting a few issues passing through some cards in ESXi with the Ivy Bridge CPUs; it's unknown if it is a chip or software glitch. Best to stick to what the motherboard was made for. Also, save the money on a 12x5 chip: that adds the HD3000 graphics to the CPU, which you can't use with that board. Stick with a 12x0. If you're on unRAID 5.x, moving the drives to the card will require no user interaction; if you are on 4.x you have to do some extra work. Since you mention 3TB drives, I would do that upgrade first; you can even do that and test before going to ESXi. Yes, at any time you can pull the ESXi thumb drive out and it should boot right into unRAID (unless it finds a bootable HDD in the server). Read the first page of my Atlas thread; it should cover all of your questions."

      Thanks for the tips and tutorials, John! The conversion went 100% smoothly with no problems. I have not run into any issues so far with the X9SCM-F after updating to 2.00b for Ivy Bridge; I will report back if any arise. I forgot that ESXi does not support 3TB drives (I wanted to map cache/parity directly), so I had to put those on the MV8 controllers. I'm running out of room, so I'm going to have to pick up another one soon. But the MV8's are working flawlessly so far.
  19. Quoting the reply I got: "You are correct on the CPU upgrade; that is the minimum I would use. (You really don't need anything larger for most uses, only if you are doing heavy CPU lifting like a Blu-ray ripping VM or an Exchange server VM.) The RAM upgrade is recommended but optional; 8GB is the minimum and limits how much you can do with the system. The AOC-SASLP-MV8 passes through in ESXi with a minor hack, and 3TB (and larger) drives work just fine. No data loss if done correctly. Keep in mind that installing ESXi will format ALL drives it can find, including your unRAID drives if they are plugged in; make sure you have all hard drives except your datastore unplugged during the install."

      Thanks for the reply! Any compatibility issues with the new Ivy Bridge processors, i.e. the E3-1245v2? Worth the upgrade? I'm also thinking of purchasing SSDs for the ESXi VMs as well. The only thing scaring me, to be honest, is data loss, as I have about 15 drives and 30+ TB of stuff... Currently all my data drives are connected to the MV8's. Worst case scenario, I should be able to just plug the unRAID USB back in and boot back into my original configuration, assuming something terrible happened and I couldn't get ESXi to work?
  20. I was looking into this as well and was going to make a new post on it! I was contemplating fiddling around with this too and converting my unRAID box to ESXi with unRAID as a guest. Have you been able to convert successfully without any data loss?

      CPU: Intel i3-2120 3.3GHz
      Motherboard: Supermicro MBD-X9SCM-F-O
      RAM: Kingston KVR1333D3E9SK2/8G 8GB DDR3 ECC Unbuffered
      Drive cages: 4x Supermicro CSE-M35T-1B hot-swap bays
      Power supply: Corsair TX750M
      SATA controllers: 2x Supermicro AOC-SASLP-MV8

      Upgrading to 32GB RAM and a Xeon E3-1230. Have you had any problems with the MV8 and ESXi, and have you tried this with 3TB hard drives as well? Any problems passing through the MV8? Any data loss when moving drives from the motherboard to the MV8 and plugging the MV8 into ESXi?
  21. Would love to know if anyone has been able to set this up as well. I haven't been able to get it to work.
  22. I must admit I'm impressed with the SSD mount; that is an awesome idea. I don't have an SSD in there, but that would definitely work. Is it stable, since it's only mounted on one side?

      Sure, I'll post what I can.

      Speakers:
      KEF Q700 Floorstanding Speakers
      KEF Q200c Center Speakers
      KEF Q300 Rears
      KEF Q400 Subs
      Onkyo 809 Receiver

      The unRAID server serves as storage and runs the download software (SAB, Sickbeard, CouchPotato, etc.). The media server bitstreams full HD audio (DTS-HD MA / Dolby TrueHD) to the receiver. Not sure what else to post, lol. Let me know if you want any more details.

      I've built dozens of computers in my day, and to be honest the A77 was one of the nicest to work with. I highly recommend the case in every way and would love to see more people use it!
  23. Glad to hear it. Yes, the top 3 drive bays need to be flattened, but since it is an aluminum case it was very easy to mold and work with, so I didn't have much difficulty. Man, it is a very, very tight fit for the CSE. I spent a good 3 hours getting all 4 of them inside; I don't even know how I did it, really, I just pushed really hard. To be honest, I don't think they are coming out; it seems to be a one-way trip. Yes, I usually use IPMI to power-cycle. I've had no problems with PSU compatibility with the motherboard.
  24. I have the TX-750 (almost the same model)... you can flip the power supply so it draws air from the bottom (since there is spacing at the bottom) and you're good to go.