Posts posted by grtgbln

  1. On 4/27/2024 at 1:36 AM, Ikeasofa said:

It's still not possible to access the container through a custom port. I thought the latest update would fix it. Any ideas?

Good news: I figured out the problem and have fixed it. The template was set to "host" mode rather than "bridge" mode for networking, which prevents mapping a host port to a different container port. All my templates have been updated accordingly, since they all suffered from this issue. My apologies. The new templates should be available within 24 hours, or you can apply the fix manually by editing your container and changing "Network" from "host" to "bridge".
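For anyone curious why this matters, here's a compose-style sketch of the difference (the service name, image, and ports are illustrative, not the exact template values):

```yaml
# In "host" mode the container shares the host's network stack, so port
# mappings are ignored and the container always answers on its own port.
# In "bridge" mode, Docker can map a different host port onto it:
services:
  localai:                  # placeholder service name
    image: localai/localai  # placeholder image
    network_mode: bridge
    ports:
      - "8081:8080"         # host port 8081 -> container port 8080
```

In the Unraid UI, this corresponds to setting "Network Type" to "bridge" and editing the host-side port in the port mapping.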

  2. 4 hours ago, bbrodka said:

    mdblistarr

     

    unraid 6.12.10 install from CA default install
    web ui won't open
    am I missing an install step or config file I need to edit?
    log says
    cp: cannot stat '/usr/src/app/mdblist/urls.py.1': No such file or directory
    python: can't open file '/usr/src/app/manage.py': [Errno 2] No such file or directory
    python: can't open file '/usr/src/app/manage.py': [Errno 2] No such file or directory
    python: can't open file '/usr/src/app/manage.py': [Errno 2] No such file or directory
    cp: cannot stat '/usr/src/app/mdblist/urls.py.2': No such file or directory
    python: can't open file '/usr/src/app/manage.py': [Errno 2] No such file or directory
    python: can't open file '/usr/src/app/manage.py': [Errno 2] No such file or directory

Oops, looks like I got tripped up by how the volume mappings were set up. I'm updating the template now; the fix should be available within 24 hours.

  3. 4 hours ago, drmetro said:

    LocalAI

It's not pulling the layers; in the end it always fails. I wasted 80 GB of data downloading, but it never managed to install the Docker container. I selected the CUDA 12 option.

Any help please?

    I'm not sure I understand. The Docker container failed to install/setup, or LocalAI was up and running, but downloading models failed?

  4. 18 minutes ago, scottygood said:

    current log states:


    Error: SQLite database error
    attempt to write a readonly database
       0: sql_schema_connector::sql_migration_persistence::initialize
               with namespaces=None
                 at schema-engine/connectors/sql-schema-connector/src/sql_migration_persistence.rs:14
       1: schema_core::state::ApplyMigrations
                 at schema-engine/core/src/state.rs:201

    Collector hot directory and tmp storage wiped!
    Document processor app listening on port 8888
    Environment variables loaded from .env
    Prisma schema loaded from prisma/schema.prisma

    ✔ Generated Prisma Client (v5.3.1) to ./node_modules/@prisma/client in 224ms

    Start using Prisma Client in Node.js (See: https://pris.ly/d/client)
    ```
    import { PrismaClient } from '@prisma/client'
    const prisma = new PrismaClient()
    ```
    or start using Prisma Client at the edge (See: https://pris.ly/d/accelerate)
    ```
    import { PrismaClient } from '@prisma/client/edge'
    const prisma = new PrismaClient()
    ```

    See other ways of importing Prisma Client: http://pris.ly/d/importing-client

    Environment variables loaded from .env
    Prisma schema loaded from prisma/schema.prisma
    Datasource "db": SQLite database "anythingllm.db" at "file:../storage/anythingllm.db"

    17 migrations found in prisma/migrations

     

    Does this look right? Is there supposed to be a WebUI? I don't have one of those. I ask because "The all-in-one AI app for any LLM with full RAG and AI Agent capabilities." sounds like a WebUI is included. I'm very new to all of this, I come from a Windows background. Been dabbling in a bit of this and that, enough to not Royally screw it all up. 

    > attempt to write a readonly database

     

    Sounds like a permissions issue between the container and the database file you created.
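If that's the cause, making sure the database file exists on the host and is writable by the container's user usually clears it. A minimal sketch, assuming the usual appdata path (here it falls back to a temporary directory so the commands run anywhere; substitute your real share):

```shell
# Assumed host path for the AnythingLLM appdata share, e.g.
# /mnt/user/appdata/anythingllm; falls back to a temp dir for illustration.
APPDATA="${APPDATA:-$(mktemp -d)}"
touch "$APPDATA/anythingllm.db"      # create the SQLite file if it's missing
chmod 666 "$APPDATA/anythingllm.db"  # let the container's user write to it
```

Then restart the container so it reopens the file with the new permissions.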

  5. 9 minutes ago, Ikeasofa said:

    LocalAI:
Why is it not possible to change the port? 8080 is already in use, and I have to change it to something else.
The template does start, but it's not accessible through the new port.
How can I enable GPU utilization when I have a CPU with an iGPU? I tried adding the /dev/dri flag inside the template, but no luck.

RE: Port, an update to the template is coming in the next few hours.

     

    RE: iGPU, as far as I know, LocalAI only supports Nvidia CUDA, not iGPU.

    https://localai.io/features/gpu-acceleration/index.html

  6. 44 minutes ago, scottygood said:

    I'm guessing the env.example file from here: https://github.com/Mintplex-Labs/anything-llm/blob/master/docker/.env.example is correct. 

    Also, I navigated to  "/mnt/user/appdata/anythingllm" in the unraid terminal and ran the command. 

     

    root@GoodRaid:/mnt/user/appdata/anythingllm# touch anythingllm.db
    root@GoodRaid:/mnt/user/appdata/anythingllm#

     

    Is the result.

    Am I missing something else?

Should I get more of an output?

    Do I need to do "docker pull mintplexlabs/anythingllm" to force an update or just delete the template and reinstall via CA?

    You should just need to restart the app now that the file it expects is there.

  7. 14 minutes ago, scottygood said:

That whole .env file issue — what are the contents of it? Where can I find it? Also, I renamed an env.example file I found that seems to be what I need, but then I run into this:

    Error: Schema engine error:
    SQLite database error
    unable to open database file: ../storage/anythingllm.db

    Error: Schema engine error:
    SQLite database error
    unable to open database file: ../storage/anythingllm.db

    Error: Schema engine error:
    SQLite database error
    unable to open database file: ../storage/anythingllm.db

    Collector hot directory and tmp storage wiped!
    Document processor app listening on port 8888
    Environment variables loaded from .env
    Prisma schema loaded from prisma/schema.prisma

    ✔ Generated Prisma Client (v5.3.1) to ./node_modules/@prisma/client in 198ms

    Start using Prisma Client in Node.js (See: https://pris.ly/d/client)
    ```
    import { PrismaClient } from '@prisma/client'
    const prisma = new PrismaClient()
    ```
    or start using Prisma Client at the edge (See: https://pris.ly/d/accelerate)
    ```
    import { PrismaClient } from '@prisma/client/edge'
    const prisma = new PrismaClient()
    ```

    See other ways of importing Prisma Client: http://pris.ly/d/importing-client

    Environment variables loaded from .env
    Prisma schema loaded from prisma/schema.prisma
    Datasource "db": SQLite database "anythingllm.db" at "file:../storage/anythingllm.db"

[the same startup sequence repeats twice more]

     

    and no files in any of my appdata folders

    Run

    touch anythingllm.db

    from inside your "/mnt/user/appdata/anythingllm" folder. I have just updated the template to mention this.

  8. 5 hours ago, NotHere said:

    Thanks so much for the information. Sucks that I just couldn't figure it out :(. 

     

Is there a way to change the port from 8080 to anything else? No matter what I do, I just can't change the port. Could be a me problem, of course; I've changed ports on containers before without issues, but this one won't do it.


    I just pushed out an update to the template that should fix this, check back tomorrow once it's synced.

  9. 5 hours ago, NotHere said:

Hello there. I just installed your "LocalAI". It seemed quick and easy. My issue is that I can't change the port from 8080 to anything else. I tried changing it via the "WebUI HTTP Port:" variable you added to the template, and that did not do it. I know it works, because I was able to get into the WebUI from the Docker container and it shows the installed models. However, I have no clue how to use them. Any more information on how to use the models? I looked at the documentation, https://localai.io/, but that was not much help for someone like me who doesn't understand anything there.

     

Any help on how I can use the models would be greatly appreciated.

    You'll need some kind of integration that can use the models. I'd recommend a GUI like bigAGI: https://localai.io/docs/integrations/ 

  10. 7 hours ago, JustAnotherMe said:

    When i try to install anythingllm I get:

     

    docker: 
    Error response from daemon: 
    failed to create task for container: 
    failed to create shim task: 
    OCI runtime create failed: 
    runc create failed: 
    unable to start container process: 
    error during container init: 
    error mounting "/mnt/user/appdata/anythingllm/.env" to rootfs at "/app/server/.env": 
    mount /mnt/user/appdata/anythingllm/.env:/app/server/.env (via /proc/self/fd/6), flags: 0x5000: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.



Looks like the .env is created as a directory but is supposed to be a file?

Thanks for letting me know. The `.env` file must exist before the container runs; otherwise Docker will create it for you, as a directory rather than a file, and the bind mount fails. I have updated the template to note this.
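A minimal pre-flight sketch, assuming the usual appdata path (here it falls back to a temporary directory so the commands run anywhere; substitute your real share):

```shell
# Docker creates a missing bind-mount source as a directory, which then
# fails to mount onto the container's .env *file*. Pre-create it as a file.
APPDATA="${APPDATA:-$(mktemp -d)}"  # stand-in for /mnt/user/appdata/anythingllm
rm -rf "$APPDATA/.env"              # clear a wrongly auto-created directory, if any
touch "$APPDATA/.env"               # now an (empty) regular file
```

After that, starting the container should mount `.env` cleanly.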

  11. Is there a way to mitigate breaking changes and/or major versions in a CA template?

One app I manage a template for has a new major version out (the x in x.y.z), which requires changes to both the Docker/CA settings (some new config fields, some removed, some changed type from a string to a dropdown option) and to non-Docker settings (permission changes to the remote API the app interacts with, via the provider's portal).

    1. Is there a process that I can take to properly migrate user settings from the old "string" type to the new "dropdown" type? If not, is there a way to call this out to the user or force them to view the app's configuration page during an update?
2. In the same vein, is there a way to stop users from auto-updating to a new major version, or to bring attention to it?

    It doesn't seem like there's necessarily versioning for CA templates. The container currently uses `Branch` for which Docker image to use, but with `latest` being the only option. Perhaps changing the only available branch to a specific older image?
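As a sketch of that last idea, assuming the `Branch`/`Tag` elements commonly seen in CA template XML (the tag value and descriptions here are placeholders, not real releases):

```xml
<!-- Offering a pinned tag alongside latest, so users can opt out of the
     breaking major version. Element names follow the usual CA template
     schema; values are placeholders. -->
<Branch>
  <Tag>latest</Tag>
  <TagDescription>Latest release (may include breaking changes)</TagDescription>
</Branch>
<Branch>
  <Tag>1.9.2</Tag>
  <TagDescription>Last release before the breaking changes</TagDescription>
</Branch>
```

This doesn't migrate settings, but it at least gives users a deliberate choice of image version from the template's Branch dropdown.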

  12. 16 minutes ago, animeking1987 said:

    I cant get mine to work I get this error when I try to run it manually. My config file is there. 

[screenshots of the command error and config attached]

As the error suggests, your command is incorrect. You need to pass the "-c" flag when providing a config file path. Use the following instead:
     

    python run.py -c /mnt/user/appdata/prerolls/config.yaml



    Also, please reserve this thread for discussion specifically about running this Community App as a Docker container on Unraid. For general questions about running the Python script directly, please open an issue on the project's GitHub page: https://github.com/nwithan8/plex-prerolls

  13. 7 hours ago, SnugglyDino said:

    Ok so I figured out an issue. The "monthly" section is not limiting itself to the prescribed month. When I went through re-enabling things section by section everything was working as I expected until I got to the "monthly" section.

    https://imgur.com/a/JgzPOJt

     

When I re-enabled the month of October and checked the logs, it showed that the "Always" section got disabled even though it's not October. I did not have this problem when re-enabling any other section (but they are set to false, so that might be why). Why is October's "disable_always" setting triggering even though it is not October?

     
Thank you for bringing the bug to my attention; it has been patched and will be available in the latest version shortly. The masking of the cron template has also been fixed.

In the future, please open an issue on the project's GitHub page: https://github.com/nwithan8/plex-prerolls. I check that more often than I check these forums.

  14. 10 hours ago, SnugglyDino said:

    Ok so I tried running the program but it does not seem to have changed the current Plex Prerolls and I'm not quite sure what I am doing wrong. I attached an imgur link with my current config.yaml and what the logs are saying. Hopefully you can help me figure out the error in my ways.

     

    https://imgur.com/a/yspLthP

    The path "/mnt/user/data/media/..." doesn't exist inside Plex. You need to use the path to each file as Plex would see it, perhaps something like "/media/...". Refer to the mapped paths on your Plex Docker container.

     

Also, those Fall, Winter, et al. schedules won't work with just paths to the preroll folders; you need to include full paths to individual media files (not sure if perhaps you redacted those for this help session).
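To illustrate the container-path point (the keys below are placeholders, not the exact Plex Prerolls schema; check the project's example config for the real structure):

```yaml
# Host path (what Unraid sees):    /mnt/user/data/media/prerolls/fall/intro.mp4
# Container path (what Plex sees): /media/prerolls/fall/intro.mp4
# The config must use the latter, down to the individual file:
monthly:
  october:
    paths:
      - /media/prerolls/halloween/spooky_intro.mp4
```

You can confirm the container-side path by checking the volume mappings on your Plex Docker container.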

  15. On 6/28/2023 at 12:26 PM, gellux said:

Uhhh, not sure if this is the correct link, but it's where it's linked to from the Docker's install page...

     

Trying to install Tauticord by nwithan8, but I keep getting this at the end of the install:

     

    "Error: failed to register layer: stat /var/lib/docker/btrfs/subvolumes/d55c1127cda32d7088da9a6435ff7f908cbbff49d5ca47fa96187831562d1540: no such file or directory"

     

    never had an error message like this on an install before

     

    Apologies, I haven't been on the forums in a while. Did this resolve itself?

  16. On 7/31/2021 at 3:26 PM, jbrodriguez said:

    Hi no dependencies are needed it's a single binary.

     

    Try to run it from the command line

    /usr/local/emhttp/plugins/controlr/controlr

     

    and report any error it throws

    Running into the same issue, Unraid 6.9.2, latest ControlR plugin version.

     

Ran `/usr/local/emhttp/plugins/controlr/controlr` and received:
     

    I: 2022/03/13 14:29:20 app.go:57: controlr v2021.11.25|3.0.0 starting ...
    I: 2022/03/13 14:29:20 app.go:65: No config file specified. Using app defaults ...
    I: 2022/03/13 14:29:20 app.go:185: cert: found MYSERVERNAME_unraid_bundle.pem
    I: 2022/03/13 14:29:20 app.go:75: state(&{Name:MYSERVERNAME Timezone:America/Denver Version:6.9.2 CsrfToken:9B4C04CC22086972 Host:http://MYSERVERNAME Origin:{Name:MYSERVERNAME Protocol:http Host:MYSERVERNAME Port:80 Address:XXX.XXX.X.X} Secure:false Cert:MYSERVERNAME_unraid_bundle.pem UseSelfCerts:false})
    I: 2022/03/13 14:29:20 core.go:83: starting service Core ...
    I: 2022/03/13 14:29:20 core.go:355: Created ipmi sensor ...
    I: 2022/03/13 14:29:20 core.go:382: No ups detected ...
    2022/03/13 14:29:20 Unable to wait for process to finish: exit status 127

     

Settings: [screenshot attached]

     

    Diagnostics via https://github.com/jbrodriguez/controlr-support:
    controlr.zip
