dogbowl

Members · 27 posts

  1. I've got the same issue as m4f1050 a few posts above with my CouchPotato docker. My log file is nearly an exact duplicate of the one he posted. Anyone have any ideas?
  2. If I'm following correctly, does that mean the best command for creating the docker would instead be this: docker run -it --name pocketmine-classic -p 19132:19132/udp -v /mnt/user/appdata/pocketmineGenisys/:/srv/genisys/ itxtech/docker-env-genisys and that I just need to make sure the latest genisys.phar file is placed in that folder? (The command is broken out in a sketch after this list.) I haven't had any success yet importing existing worlds, and I've got another odd problem -- none of the mobs move. I thought that was one of the limitations the Genisys build fixed for the PE edition, though...
  3. spazmc - were you able to get this working? I had a few struggles but was finally able to get it up and running. I'm able to connect with the latest version of Minecraft PE now (or rather, my 5-year-old is!). Working on getting mods loaded onto the server now...
  4. 6.1.4 did not seem to fix my issue and I was not able to complete a parity check. With 6.1.2 and manually setting nr_requests, I was finally able to complete a parity sync, though. For what it's worth, the red balls were only ever thrown on my Reiser drives and never on drives attached to my SASLP controller. Other than that, I could not find any pattern; I would receive the disconnects/read errors at seemingly random times during parity syncs/checks. I've since swapped out my SAS2LP controller for an M1015 and everything is now working as expected.
  5. That could be the case. I was previously running 6.1.2, and by manually setting nr_requests to 8 (sketched after this list), I was able to complete a parity sync. I'm running 6.1.4 as "stock" right now and have not changed any of the new tweakable settings.
  6. I'm running a SAS2LP card and have been experiencing the red ball(s) during parity sync/check issues described elsewhere. After installing 6.1.4 and subsequently performing a parity check, it again resulted in a "failed" disk. unRAID alerted that a disk was in an error state; however, 20 minutes later it reported "array turned good" and "Array has 0 disks with read errors". The drive is still red-balled, though. Is this expected?
  7. Following up on this. I have recovered everything and my array is back, parity is verified, and there was no data loss (that I'm aware of). It does seem as if this was due to a bad SATA cable, as I was only able to fully rebuild and verify parity after replacing that particular breakout cable from the SAS2LP. Not something I would have initially expected, as all my drives are in hot-swap caddies, but I have no other explanation... Thanks for the help above.
  8. I have had my system running and stable for so long that I had gotten relaxed about running parity checks. So, yes, when I went to upgrade my parity drive, I didn't run a parity check (and it had been about 30 days since one had been run). When I first restarted after installing the 4TB drive, my dockers all came back to life, and other automated backups also ran before the parity drive upgrade was complete. If I don't have a hardware issue, something in all of that activity could have caused this. Regardless, I'm going to take this opportunity to replace a number of drives and then possibly switch over to the new filesystem. I'll also be setting up a scheduled parity check.
  9. I really appreciate any advice on this -- things are looking better, and I can only assume this has been due to a hardware issue somewhere. I plan to swap out the SATA breakout cable as soon as I can get a replacement. All other hardware has been up and running for some time, although I did recently change the PCI-E slot that this particular SAS2LP is plugged into. (Both drives that I've had these issues with are on the same SAS2LP controller card and on the same breakout cable.) For what it's worth, the drives are in different hot-swap caddies. So, I ran reiserfsck --fix-fixable /dev/md2, which resulted in some recommendations, so I then executed reiserfsck --rebuild-tree and that completed (both commands are sketched after this list). I then stopped the array, removed drive 2, started, stopped, added drive 2 back, and started again; it began to rebuild. That completed successfully and my array seems to be back to normal. I'm currently running a parity check.
  10. I've had some interesting results. The SATA cables are all breakout cables from my SAS2LP card and I don't have a spare one of those handy, so I've primarily relied on the New Config and hoped for the best with the parity rebuilds. With the new (4TB) parity drive in, the parity check results in numerous errors on that same disk (disk 3), to the point that it all fails and I'm left with both a bad parity disk and an 'emulated' disk 3. Syslog for that is here: http://vinelodge.com/wp/wp-content/uploads/2015/10/tower-diagnostics-20151031-0049.zip So I then switched back to the original 3TB parity drive and did a New Config (I was unable to select a non-correcting parity check, as my UI goes unresponsive for about 120 seconds after a reboot). That parity check completed successfully -- however, *another* drive threw errors and is now red-balled. Here's the diagnostics from that event: http://vinelodge.com/wp/wp-content/uploads/2015/10/tower-diagnostics-20151031-0935.zip The SMART report on that disk 2 looks good (I think?), so I'm not sure what to think now. Throughout all of this, I have not intentionally written to any disks; however, even though they are set not to start, my Docker apps start after every reboot and I suspect they are writing new files. I don't know if this would cause these errors, though.
  11. Current status: I have the new parity disk (the 4TB I was originally upgrading to) installed in the server. I have removed my bad disk 3 from the array, started it, stopped it, and added it back. I now have an invalid parity disk as well as "Device Contents Emulated" for disk 3. So I believe my only options are either a Tools -> New Config or a rebuild to start a data rebuild. I'm sure those two options are actually opposite actions, but I'm not familiar enough with the details to know which would be the best choice...
  12. I think at this point my best option would be to force unRAID to trust the (old) parity drive, following the instructions here: http://lime-technology.com/wiki/index.php/Make_unRAID_Trust_the_Parity_Drive,_Avoid_Rebuilding_Parity_Unnecessarily However, I'm not sure if those instructions are for v5 or v6. The hope would be to bring the array back online, although still in an unprotected state, as that 2TB drive is bad. With the original parity drive active again, I could then rebuild the 2TB drive to get everything back. EDIT: Here's a link to the SMART report from my drive 3. Could this drive be okay? http://vinelodge.com/wp/wp-content/uploads/2015/10/WDC_WD20EFRX-68AX9N0_WD-WMC1T0987467-20151030-1548.txt
  13. That's a possibility, but I do have these drives in hot-swap cages. This one in particular: http://www.supermicro.com/products/accessories/mobilerack/CSE-M35T-1.cfm I went forward with replacing the original parity drive, and that resulted in a "too many wrong and/or missing disks" error, and I don't have the option to start the array. Nothing has changed across the drives (that I'm aware of), so the old parity drive should still be 'valid'. Not sure what the best next step would be here... (other than purchasing a good 2TB drive).
  14. I replaced my parity drive tonight (upgrading from 3TB to 4TB) and at about the 4% rebuild mark, I had a drive red-ball due to SMART errors. What's odd is that the array is accessible and the failed disk shows "contents emulated" -- however, without a proper parity disk, I wouldn't expect this to be possible. My syslog (http://vinelodge.com/wp/wp-content/uploads/2015/10/tower-syslog-20151030-0008.zip) shows multiple "disk3 read error" entries, and the SMART report on that drive (see the smartctl sketch after this list) reads: What's the best approach here? I do still have my old parity drive available. This is a 6.1.2 system on a SUPERMICRO MBD-X9SCM-F motherboard with AOC-SAS2LP-MV8 controller cards.
  15. Thanks for your instructions, net2wire. My docker ended up stuck in Maintenance Mode after the upgrade, and your steps helped me out.
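
Command sketch for post 2: the docker run being discussed, broken out one flag per line for readability. The image name, port, and container path come from the post itself; the host path /mnt/user/appdata/pocketmineGenisys/ is that poster's own appdata location, so adjust it to wherever genisys.phar actually lives.

  docker run -it \
    --name pocketmine-classic \
    -p 19132:19132/udp \
    -v /mnt/user/appdata/pocketmineGenisys/:/srv/genisys/ \
    itxtech/docker-env-genisys

  # -p 19132:19132/udp        -> Minecraft PE clients connect over UDP port 19132
  # -v <host>:/srv/genisys/   -> host folder that should contain the latest genisys.phar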
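
The nr_requests tweak mentioned in posts 4 and 5 is normally applied per block device through sysfs. A minimal sketch, assuming one of the affected drives shows up as /dev/sdb (the device name is illustrative, not from the posts); the value resets on reboot, so it has to be reapplied, for example from the go file.

  # show the current request queue depth for the drive
  cat /sys/block/sdb/queue/nr_requests
  # drop it to 8, the value mentioned in the posts above
  echo 8 > /sys/block/sdb/queue/nr_requests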
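
The repair sequence from post 9, as a sketch. The array needs to be started in maintenance mode so /dev/md2 (disk 2 in that array) exists but is not mounted, and --rebuild-tree should only be run when the first pass recommends it, since it rewrites the filesystem tree.

  # first pass: fix only what is safely fixable and report anything worse
  reiserfsck --fix-fixable /dev/md2
  # only if the first pass recommends it: rebuild the filesystem tree
  reiserfsck --rebuild-tree /dev/md2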
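
The SMART report referenced in post 14 is the kind of output smartctl produces. A sketch, assuming the flagged disk 3 appears as /dev/sdd; the device name, and the use of smartctl itself, are my assumptions rather than something the post states.

  # full report: health status, attribute table, and error log for the drive
  smartctl -a /dev/sdd
  # optionally start an extended self-test; results appear in the -a output later
  smartctl -t long /dev/sdd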