Report Comments posted by JonathanM

  1. Respectfully, while I agree that it's urgent in the sense that something is wrong and needs to be addressed, there is a valid workaround in place to run unraid without triggering the issue, and it only affects a small subset of hardware. GUI mode just doesn't work properly on some systems, and it's been that way since it was introduced.

     

    I don't think this deserves the urgent tag, which implies a showstopper for general usage on the majority of hardware, with no workaround.

  2. 20 hours ago, johnnie.black said:

    I can't reproduce this. If I unassign all cache devices, leaving the slots as they were, I get this in the log:

    
    root: mover: cache not present, or only cache present

     

    mover is not executed

    Try this.

    After you unassign the physical cache devices, try creating a /mnt/cache folder, like what would happen if a container were misconfigured to use the disk path instead of /mnt/user.

     

    I suspect the OP was filling up RAM with some misconfiguration, causing the crash.
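
     A rough way to see the effect from a terminal (a sketch only, the folder and file names are made up for illustration): with no cache device assigned, /mnt/cache is just a directory on the RAM-backed root filesystem, so anything written there eats memory instead of disk.

     mkdir -p /mnt/cache/appdata                                       # simulate a container's stale disk path
     dd if=/dev/zero of=/mnt/cache/appdata/test.bin bs=1M count=1024   # writes land in RAM, not on any disk
     free -h                                                           # watch available memory shrink
     rm -rf /mnt/cache                                                 # clean up before reassigning the cache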

  3. On 12/15/2019 at 4:10 PM, ds679 said:

    Ahhh....good idea...and...drumroll...IT WORKS!  I'm in the terminal window and no blanking/whiting out!

    Were you able to go 'one by one' and see which one (or was it the whole 'shields' part) was causing the problem?

     

    Thanks for the idea!

    =dave

     

    On 1/2/2020 at 8:24 PM, ds679 said:

    I appreciate all of the help - but this issue still is persistent & repeatable....and has not occurred with other releases.  There is still an issue with the current codebase.

     

    =-dave

    Earlier you said you figured out the issue.

  4. 15 minutes ago, Helmonder said:

    I am perfectly aware, and it was attached.

    I'm not seeing it in any of the posts in this thread. It's supposed to be in the report itself, or failing that, attached to your first reply.

     

    Did you read the guidelines for posting a bug in this section?

     

     

  5. 2 hours ago, marcusone1 said:

    any solutions to this. i'm seeing it and backups using rdiff-backup are now failing due to it :(

    Since this report references rc5, I'd advise updating to 6.8.0 to see if the issue still exists. If it does, a new report needs to be filed, with all the diagnostics and steps needed to recreate it so the devs can fix it.

  6. 10 minutes ago, dalgibbard said:

    i've installed the unraid nvidia plugin

    For future reference, the nvidia and dvb modifications are not supported by limetech. Before filing a bug report, please revert to the limetech release and duplicate the issue there. If the issue only occurs with 3rd party modifications, you need to bring that up with the folks doing the modifications.

  7. 3 hours ago, Carlos Talbot said:

    What's the easiest way to reformat the drive to XFS?

    Make sure that when the array is stopped, only 1 cache slot is shown. Then you can select XFS as the desired format type on the cache drive properties page; when you start the array, the cache drive should show up as unmountable, along with the option to format. Be sure the ONLY drive listed as unmountable is the cache drive, as the format option operates on all unmountable drives simultaneously.
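
     If you want to confirm the result from a terminal once the format is done, something like this should do it (assuming the cache is mounted at /mnt/cache):

     df -hT /mnt/cache    # the Type column should now read xfs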

  8. 40 minutes ago, eagle470 said:

    Simple request, I'd like a check box on the NEXT branch where I can ask for the system to not notify me until there is a stable 6.8 release.

    If you are on the NEXT branch, you are expected to install updates as you can, and to participate with diagnostics if you find an issue. If you don't want to be bugged until a stable release, you need to be running the stable branch.

     

    I know there are valid reasons not to stay on 6.7.2, but it's not reasonable to expect the NEXT branch to be treated as stable.

  9. 3 hours ago, jbartlett said:

    Anybody try creating a new Windows 10 VM under RC4? I had a DEVIL of a time trying to get it to work. The install would copy the files, go all the way to the "Finishing up" and then display any one of several errors - corrupted media, cannot set local, could not load a driver, could not continue, or just jump right back to the setup button at the start.

     

    Rolled back to 6.7.2 and poof - installation went like a champ though I did have to edit/save because using the RC4 built XML gave an invalid machine type error.

    What vdisk type did you choose? RAW or qcow?
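
     If you're not sure which format an existing vdisk ended up with, something like this from a terminal will report it (the path is just an example, adjust it to your own domains share):

     qemu-img info /mnt/user/domains/Windows10/vdisk1.img    # the "file format" line shows raw or qcow2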

  10. 35 minutes ago, sittingmongoose said:

    Also, I didn't upload diagnostics yet because I haven't upgraded to rc3 yet from rc1.  I am waiting on nvidias plugin to come back from the digital ocean outage.

    That's an unofficial build, and while they try to keep it as close to the limetech build as possible apart from the nvidia support, it's not completely the same code. You need to duplicate the issue and post diagnostics with the official rc3.

  11. 5 hours ago, Marshalleq said:

    I’m pretty sure that actually is the mover that does that. I’m not aware of anything else doing it...

    You can configure things either way. If the final destination is set to be array-only, bypassing the cache, then when the download to the cache is done the media manager immediately moves the file from cache to array without the mover being involved. Or, you can have the final destination use the cache drive, in which case the finished download is renamed into the final destination share and sits on the cache drive waiting for the mover to put it on the array.

     

    Normally both methods work well, but with a small cache it's better to write to the array immediately rather than risk filling up the download temp space, and having the mover scheduled every couple of hours is not a good solution.
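
     If the cache does fill up between scheduled runs, the mover can also be kicked off by hand from a terminal (this assumes the standard Unraid mover script, which logs to the syslog):

     mover &                                   # run a one-off mover pass in the background
     tail -f /var/log/syslog | grep -i mover   # watch its progress in the log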

  12. Here's my take on the situation. The sql thing has been an issue for a LONG time, but only under some very hard to pin down circumstances. The typical fix was just to be sure the sql database file was on a direct disk mapping instead of the user share fuse system. It seems to me like the sql software is too sensitive to timing; it gives up and corrupts the database when a transaction takes too long.

     

    Fast forward to the 6.7.x release, and it's not just the fuse system, it's the entire array that is having performance issues. Suddenly, what was a manageable issue with sql corruption becomes an issue for anything but a direct cache mapping.

     

    So, I suspect fixing this concurrent access issue will help with the sql issue for many people as well, but I think the sql thing will ultimately require changes that are out of unraid's direct control, possibly some major changes to the database engine. It has been an issue in the background for years.
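
     For reference, the direct-mapping workaround usually comes down to how the container's appdata path is mapped. A sketch (the container name, image, and paths are just examples):

     # Through the user share fuse layer - the mapping that tends to trigger the corruption:
     #   -v /mnt/user/appdata/plex:/config
     # Direct disk mapping that bypasses fuse - the usual workaround:
     docker run -d --name plex \
       -v /mnt/cache/appdata/plex:/config \
       linuxserver/plex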

  13. This issue is affecting a wide swath of people. Personally I have 27 containers on one of my servers; all 10 of the LSIO containers show an update ready, and none of the other containers do. They were all up to date yesterday.

     

    No connection issues at my end of the pipe; one of my other servers behind the same router just updated a binhex container just fine.

     

    A different server at another location has 10 containers, and all 3 of the LSIO containers there show an update ready. If it's unraid's fault, then unraid is preferentially picking on LSIO.