About MisterLas


  1. @CHBMB I was wondering when the new.... jk. Keep up the good work, this plugin is awesome. I work for a rather large software support company, so I feel your pain on the releases sometimes. Most of the community should realize you all are doing this great work, mostly on free time, and have lives as well. Take your time, imho. Kudos! @limetech, thanks for the awesome releases as well.
  2. I'm running my new P2000 (got it this weekend) in conjunction with dual L5640s. They're not the slowest CPUs, but they are rather old, and I've seen major improvements since bringing the Quadro into the mix.
  3. Hey all, got my P2000 in over the weekend and I just wanted to say thanks for this awesome plugin! It's working VERY well, and I'm impressed with how much of a difference it's made not killing my CPUs with transcodes.
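For anyone wanting to confirm the offload is actually happening, a minimal check sketch, assuming the Nvidia driver (which bundles the `nvidia-smi` tool) is installed via the plugin:

```shell
# While a Plex stream is transcoding, sample GPU utilization from the console.
# Nonzero GPU utilization here, plus near-idle CPUs in `top`, confirms the
# P2000 is doing the work; Plex's transcoder process also shows up in the
# plain `nvidia-smi` process list while a hardware transcode is running.
nvidia-smi --query-gpu=utilization.gpu,utilization.memory --format=csv
```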
  4. I'll have to give this a shot sometime. I was on 6.2.4 for 30 days with no issues, upgraded to 6.3.2, and then experienced a lockup during the first night. So I rolled back to 6.2.4 and have been stable for 5 days again. I have CA Backup disabled, but I never verified whether it got turned on in 6.3.2 (I know I didn't enable it; just wondering if it did on its own).
  5. Sorry, I've been slammed at work lately, so I haven't followed up much. I rolled back to 6.2.4 about 12 days ago. No changes other than the rollback, and I have not had an issue since. Sent from my Pixel XL using Tapatalk
  6. I do not have CA Backup and Restore configured to run, but I do have it installed. I was still seeing the same issue on 6.3.0, but I downgraded to 6.2.4 and have been running for 3.5 days with no issues thus far... I'll see if I can make it through the weekend.
  7. I've been experiencing this since 6.3.2 as well, all XFS disks. I'm seeing more and more of these threads pop up. I tried downgrading to 6.3.0 and made it past the 2-day mark at which 6.3.2 would die, but at 4 days it died during the night as well. When I return from work, I'll be downgrading to 6.2.4. While I agree that XFS > reiser, I feel there is something more at play here.
  8. Fair point; I wasn't meaning to thread-jack, and I'll definitely open my own thread if I go to share logs. I was simply posting my general experience in what I felt was a related situation.
  9. While I won't argue against converting to XFS, I sincerely don't think this is the issue. I have 0 reiser filesystems and have repeatable failures. Like clockwork, my 6.3.2 box died again during the night. I'm back from travel today and have 2 good sets of logs from the FCP plugin in Troubleshooting mode that I can hopefully analyze today to see if anything is reported. I could possibly share them out too, after I review what was actually captured.
  10. Not mgladwin, but I know in my case I have 0 reiserfs; all XFS on my end. And my mover script runs once an hour. My time frame is oddly consistent: it is happening every other night (well, in the wee hours of the morning), which seems too predictable to be a coincidence. I am still traveling, so I have not had a chance to review the logs saved off from the FCP plugin in Troubleshooting mode. Hopefully I'll get to that tomorrow, and tomorrow morning is a predicted failure time based on previous observations.
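For anyone wanting to double-check their own array the same way, a one-liner sketch: field 3 of `/proc/mounts` is the filesystem type of every mount, so it shows at a glance whether any reiserfs disks remain.

```shell
# Print the device and mount point of any reiserfs filesystem still mounted;
# no output means nothing reiserfs is left.
awk '$3 == "reiserfs" { print $1, $2 }' /proc/mounts
# Swap "reiserfs" for "xfs" to list the converted disks instead.
```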
  11. My mover is set to run every hour, I believe. I'll double-check tonight.
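A quick way to do that check from the console instead of the webUI; the path below is an assumption based on where the Dynamix plugins typically drop their cron files on the flash drive, so adjust it for your install:

```shell
# Show any mover entries in the Dynamix cron files (path is an assumption).
# An hourly schedule would look like: 0 * * * * /usr/local/sbin/mover
grep -h mover /boot/config/plugins/dynamix/*.cron 2>/dev/null
```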
  12. Not wanting to thread-jack, but I too have been having issues since upgrading to 6.3.2. I found the Fix Common Problems plugin and have it running in Troubleshooting mode. unRAID died again during the night (it's happening every other night between 3 and 5 am): shares are dead, the webUI is unresponsive (sometimes it is responsive, but the shares are gone and only come back after a full reboot), and ssh is down. I'll be combing through the logs sometime this evening or tomorrow, but I wanted to let you know that you are not alone, rippernz. I had no issues on the 6.2.x builds, and I don't recall any on 6.3.0 (possibly none on 6.3.1), but every second night (early morning) since 6.3.2, my server goes tits up. I wanted to share my experience in case our problems are related. I have NOT opened my own thread yet, as I'm still gathering information, but I can if needed.
  13. True, true. That would work... I'm just waiting on the last run of these preclears to ease my mind that the disks are okay. Then I'll add them back in, move stuff off, and keep rolling these RFS filesystems out.
  14. Thanks, that's what I was referring to when I mentioned I cannot possibly remove any more disks... I have 4 disks preclearing on a different box (multiple times, for verification), and I can't shift any more data around until I know they are good. I have had to create a new config each time and wait a day or so for the parity check to finish... only to have another random red ball. I believe every disk that has crapped out has been reiserfs, but if I run an fsck on them, everything comes back clean... So I'm just trying to hold steady, not add anything more to my unstable array, and keep rolling down the disks, removing RFS and adding XFS back in.
  15. Yeah, I'm in the process of converting all my drives from RFS to XFS as well... I have tried the Xen boot and disabling spindown on the 6TBs, to no avail. I have pulled a total of 4 drives that were all constantly throwing a "red alert" for a failed SMART health check and then immediately a "green alert" for a passed SMART health check. I cannot shift any more data around to pull a drive... I'm running out of ideas. All of these "failed" drives preclear just fine. I can do a new config (since unRAID won't clear a freaking red ball), wait 2 days for the parity check to run, and boom, another random red ball. All different ports and different cables in my setup; it's just random. No memory issues, and my PSU is sufficiently sized. I really don't think this is a hardware issue. I can't even revert back to unRAID 5 because of XFS...