dogbowl

Members · 27 posts

Everything posted by dogbowl

  1. I've got the same issue as m4f1050 a few posts above with my couchpotato docker. My log file is nearly an exact duplicate of the one he posted. Anyone have any ideas?
  2. If I'm following, does that mean the best command for creating the docker would instead be this: docker run -it --name pocketmine-classic -p 19132:19132/udp -v /mnt/user/appdata/pocketmineGenisys/:/srv/genisys/ itxtech/docker-env-genisys and I just need to be sure to place the latest genisys.phar file in that folder? I haven't had any success yet in importing existing worlds, and I've got another odd problem -- none of the mobs move. I thought that was one of the PE limitations the Genisys build fixed, though...
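A quick sanity check before creating the container -- a sketch assuming the appdata path from the command above (adjust to your own share layout):

```shell
# Confirm genisys.phar is in place before creating the container.
# The path below is the one from the docker command above; adjust as needed.
check_phar() {
  if [ -f "$1/genisys.phar" ]; then
    echo "found $1/genisys.phar"
  else
    echo "missing $1/genisys.phar"
  fi
}
check_phar /mnt/user/appdata/pocketmineGenisys
```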
  3. spazmc - were you able to get this working? I had a few struggles but was finally able to get it up and running. I'm able to connect with the latest version of Minecraft PE now (or rather, my 5-year-old is!). Working on getting mods loaded into the server now...
  4. 6.1.4 did not seem to fix my issue and I was not able to complete a parity check. With 6.1.2 and manually setting nr_requests, I was finally able to complete a parity sync though. For what it's worth, the red balls were only ever thrown on my reiser drives and never on drives attached to my SASLP controller. Other than that, I could not find any pattern; I would receive the disconnects/read errors at seemingly random times during parity syncs/checks. I've since swapped out my SAS2LP controller for an M1015 and everything is now working as expected.
  5. That could be the case. I was previously running 6.1.2, and by manually setting nr_requests to 8, I was able to complete a parity sync. I'm running 6.1.4 as "stock" right now and have not changed any of the new tweakable settings.
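For anyone landing here later, the manual nr_requests change can be sketched like this. It prints the command rather than executing it, so it's safe to try anywhere; the device names are placeholders for the disks behind your controller:

```shell
# Sketch of the nr_requests workaround on 6.1.2. Prints the sysfs write
# instead of performing it; sdb/sdc are placeholder device names.
nr_requests_cmd() {
  echo "echo $2 > /sys/block/$1/queue/nr_requests"
}
for dev in sdb sdc; do
  nr_requests_cmd "$dev" 8
done
```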
  6. I'm running a SAS2LP card and have been experiencing the red ball(s) during the parity sync/check issues described elsewhere. After installing 6.1.4 and subsequently performing a parity check, it again resulted in a "failed" disk. Unraid alerted that a disk was in an error state; however, 20 minutes later it reported "array turned good and Array has 0 disks with read errors". The drive is still 'red balled' though. Is this expected?
  7. Following up on this. I have recovered everything and my array is back, parity verified, with no data loss (that I'm aware of). It does seem as if this was due to a bad SATA cable, as I was only able to fully rebuild and verify parity after replacing that particular breakout cable from the SAS2LP. Not something I would have initially expected, as all my drives are in hot swap caddies, but I have no other explanation... Thanks for the help above.
  8. I have had my system running and stable for so long that I had gotten relaxed about running parity checks. So, yes, when I went to upgrade my parity drive, I didn't run a parity check (and it had been about 30 days since one had been run). When I first restarted after installing the 4TB drive, my dockers all came back to life, and other automated backups also ran before the parity drive upgrade was complete. If I don't have a hardware issue, something with all of that activity could have caused this. Regardless, I'm going to take this opportunity to replace a number of drives and then possibly switch over to the new filesystem. I'll also be setting up a scheduled parity check.
  9. I really appreciate any advice on this -- things are looking better, and I can only assume this has been due to a hardware issue somewhere. I plan to swap out the SATA breakout cable as soon as I can get a replacement. All other hardware has been up and running for some time, although I did recently change the PCI-E slot that this particular SAS2LP is plugged into. (Both drives that I've had these issues with are on the same SAS2LP controller card and on the same breakout cable.) For what it's worth, the drives are in different hot swap caddies. So, I ran a reiserfsck --fix-fixable /dev/md2 and that resulted in some recommendations, so I then executed the reiserfsck --rebuild-tree and that completed. I then stopped the array, removed drive 2, started, stopped, added drive 2 back, and started again, and it began to rebuild. That completed successfully and my array seems to be back to normal. I'm currently running a parity check.
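To summarize the repair sequence above as a checklist (a sketch, printed rather than executed -- /dev/md2 is disk2 on my system, and reiserfsck should only ever be run against the md device with the array in maintenance mode, never against the raw sdX device):

```shell
# The reiserfsck repair sequence from the post, printed as a checklist.
# /dev/md2 corresponds to disk2 on my system; substitute your own.
repair_steps() {
  echo "reiserfsck --fix-fixable /dev/md2"
  echo "reiserfsck --rebuild-tree /dev/md2"
}
repair_steps
```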
  10. Have had some interesting results. The SATA cables are all breakout cables from my SAS2LP card, and I don't have a spare of one of those handy. So I've primarily relied on the New Config and hoped for the best with the parity rebuilds. With the new (4TB) parity drive in, the parity check results in numerous errors on that same disk (disk 3), to the point that it all fails and I'm left with both a bad parity disk and an 'emulated' disk 3. Syslog for that here: http://vinelodge.com/wp/wp-content/uploads/2015/10/tower-diagnostics-20151031-0049.zip So I then switched back to the original 3TB parity drive and did a new config (I was unable to select a non-correcting parity check, as my UI goes unresponsive for about 120 seconds after reboot). That parity check completed successfully -- however, *another* drive threw errors and is now redballed. Here's the diagnostics from that event: http://vinelodge.com/wp/wp-content/uploads/2015/10/tower-diagnostics-20151031-0935.zip The SMART report on that disk 2 looks good (I think?), so I'm not sure what to think now. Throughout all of this I have not intentionally written to any disks; however, even though they are set not to start, my Docker apps start after every reboot and I suspect they are writing new files. I don't know if this would cause these errors though.
  11. Current status: I have the new parity disk (the 4TB I was originally upgrading to) installed in the server. I removed my bad disk 3 from the array, started it, stopped it, and added it back. I now have an invalid parity disk as well as a "Device Contents Emulated" for disk 3. So I believe my only options are either a Tools -> New Config or a rebuild to start a data rebuild. I'm sure those two options are actually opposite actions, but I'm not familiar enough with the details to know which would be the best choice...
  12. I think at this point my best option would be to force unRAID to trust the (old) parity drive, following the instructions here: http://lime-technology.com/wiki/index.php/Make_unRAID_Trust_the_Parity_Drive,_Avoid_Rebuilding_Parity_Unnecessarily However, I'm not sure if those instructions are for v5 or v6. The hope would be to bring the array back online, though still in an unprotected state, as that 2TB drive is bad. With the original parity drive active again, I could then rebuild the 2TB to get everything back. EDIT: Here's a link to the SMART report from my drive 3. Could this drive be okay? http://vinelodge.com/wp/wp-content/uploads/2015/10/WDC_WD20EFRX-68AX9N0_WD-WMC1T0987467-20151030-1548.txt
  13. That's a possibility, but I do have these drives in a hot swap case. This one in particular: http://www.supermicro.com/products/accessories/mobilerack/CSE-M35T-1.cfm I went forward with replacing the original parity drive and that resulted in a "too many wrong and/or missing disks" error, and I don't have the option to start the array. Nothing has changed across the drives (that I'm aware of), so the old parity drive should still be 'valid'. Not sure what the best next step would be here... (other than purchasing a good 2TB drive)
  14. I replaced my parity drive tonight (upgrading from 3TB to 4TB) and at about the 4% rebuilt mark, I had a drive red ball due to SMART errors. What's odd is that the array is accessible and the failed disk shows "contents emulated" -- however, without a proper parity disk, I wouldn't expect this to be possible. My syslog (http://vinelodge.com/wp/wp-content/uploads/2015/10/tower-syslog-20151030-0008.zip) shows multiple "disk3 read error" entries, and the SMART report on that drive reads: What's the best approach here? I do still have my old parity drive available. This is a 6.1.2 system on a SUPERMICRO MBD-X9SCM-F motherboard with AOC-SAS2LP-MV8 controller cards.
  15. Thanks for your instructions, net2wire. My docker ended up stuck in Maintenance Mode after the upgrade and your steps helped me out.
  16. I'm having issues with Binhex's sonarr docker. Specifically, whenever I restart the service/container, all of my settings are lost and revert to the defaults of an initial installation. My /config points to /mnt/cache/appdata/sonarr/, which is where I believe it needs to be. In my sonarr logs after rebooting, I do see this entry: ProfileService Setting up default quality profiles 3:28pm Could that be wiping out my previous data? nzbdrone.zip
  17. I'm having a strange issue. Upon a reboot of my Unraid server, all settings for my Sonarr docker are reset. Not only that, my web UI will not start. As soon as I chmod 777 my config folder, the UI comes up. This is my first experience working with dockers -- I've done everything through the unraid UI. My binhex-sonarr docker was installed through the 'community applications' plugin. Anyone have any ideas what could be wrong here? My /config points to /mnt/cache/appdata/sonar My /media points to /mnt/user/videos
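A follow-up thought: instead of chmod 777, a narrower fix is to hand ownership of the config folder to the container user. This is a sketch assuming the usual unRAID nobody:users (99:100) defaults -- check the PUID/PGID in your container settings first. It prints the command rather than running it:

```shell
# Prints the ownership fix instead of executing it. 99:100 (nobody:users)
# is the common unRAID default; verify against your container's PUID/PGID.
fix_perms_cmd() {
  echo "chown -R $2 $1"
}
fix_perms_cmd /mnt/cache/appdata/sonarr 99:100
```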
  18. Plop wouldn't find my USB either. I switched to Plopkexec and was able to find it. Following up from my earlier post -- upgrading to ESXi 5.5 and I was now able to set the passthrough of my USB device. Got that configured (and removed the old device specific passthrough) and everything seems to be running well now and no errors are being thrown in my log (yet). System is much more responsive.
  19. Got it through the autoupdate plugin: http://lime-technology.com/forum/index.php?topic=41617.0
  20. I'm experiencing this problem as well. I'm running Unraid 6 as a guest on ESXi 5.1 (on a Supermicro X9SCM motherboard with SASLP-MV8 controllers). However, I'm unable to pass through the USB controller -- when I set the value to pass it through, it doesn't 'stick' through the reboot. Anyone know if an upgrade to the latest ESXi would correct this? Edit: I've upgraded to 6.1rc and am still seeing this same issue: Jul 18 23:42:22 Tower kernel: usb 1-1: reset high-speed USB device number 2 using ehci-pci Jul 18 23:42:22 Tower kernel: sd 3:0:0:0: [sda] 121307136 512-byte logical blocks: (62.1 GB/57.8 GiB)
  21. Problems -- I replaced the drive that showed crc_errors (with a larger drive). That data rebuild ran overnight and completed this afternoon. The array is up and running as expected; however, I've now got a notification that "Parity updated 192062696 times to address sync errors". Disk 2 (the one I ran hdparm against) shows 192062778 "errors". At this point, my best guess is to run a parity check and see how it turns out. My syslog is 130 MB uncompressed (compressed as 7zip but attached with a .zip extension). Update - when I attempted to run the parity check, unRAID immediately took that drive offline. It says "Disabled, old disk present". So it looks like a 3rd drive swap is in store for me... syslog-2015-06-25_small.zip
  22. Updating my posts here. After 6 days of rebuilding (with 4 more to go), I couldn't take it any longer. I stopped the rebuild and rebooted into safe mode, and my speed came back to normal (80 MB/sec). My GO file has some suspicious entries, so I've now commented those out. My data restore finally finished, and I first corrected the HD size with the hdparm -N command. I was able to do that through the console. I'm currently rebuilding the data again, and once that is done I will address the SMART errors that have come up through all this disk thrashing. I'm guessing my slowness had something to do with an old version of Samba being loaded; I can't recall why that was in my GO file.
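The hdparm -N step can be sketched as follows (printed, not executed; the drive letter and sector count are placeholders -- read the "native max sectors" value with the first command and feed that number to the second):

```shell
# Prints the HPA-removal commands rather than running them. $1 is a
# placeholder drive name, $2 the native max sector count reported by
# the first command. The leading "p" makes the new limit permanent.
hpa_cmds() {
  echo "hdparm -N /dev/$1"
  echo "hdparm -N p$2 /dev/$1"
}
hpa_cmds sdX NATIVE_MAX_SECTORS
```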
  23. Latest syslog attached. Not much added since the initial boot-up. My plan for now is to wait out this slow rebuild and then replace the drive with multi_zone_errors. Once I've rebuilt from that, I'll address the HPA size mismatch. Thanks for any help. syslog-2015-06-18.txt
  24. Is it okay to power off the server while a data rebuild is in progress? I don't recall the firmware versions I've got offhand; checking would be the only way to know. This isn't something I would have suspected, as the only thing I've changed is replacing a 1TB drive with a 2TB drive... After 24 hours, I'm only at 10% complete. Is this stressing my other drives?
  25. I'm not sure I follow. I've listed my version and attached my full syslog. Are you suggesting I need to upgrade to 5.0.6 before posting?