
SimplifyAndAddCoffee


Posts posted by SimplifyAndAddCoffee

  1. Hi, I had to reboot my Unraid server the other day due to the webUI bug, and now Jellyfin will not start. It is stuck in a restart loop.

     

    ...
    [11:42:07] [INF] [1] Jellyfin.Networking.Manager.NetworkManager: Remote IP filter is Allowlist
    [11:42:07] [INF] [1] Jellyfin.Networking.Manager.NetworkManager: Filtered subnets: []
    [11:42:21] [FTL] [1] Main: Error while starting server
    Microsoft.Data.Sqlite.SqliteException (0x80004005): SQLite Error 11: 'database disk image is malformed'.
       at Microsoft.Data.Sqlite.SqliteException.ThrowExceptionForRC(Int32 rc, sqlite3 db)
       at Microsoft.Data.Sqlite.SqliteDataReader.NextResult()
       at Microsoft.Data.Sqlite.SqliteCommand.ExecuteReader(CommandBehavior behavior)
       at Microsoft.Data.Sqlite.SqliteCommand.ExecuteReader()
       at Microsoft.Data.Sqlite.SqliteCommand.ExecuteNonQuery()
       at Emby.Server.Implementations.Data.SqliteExtensions.Execute(SqliteConnection sqliteConnection, String commandText)
       at Emby.Server.Implementations.Data.ManagedConnection.Execute(String commandText)
       at Emby.Server.Implementations.Data.BaseSqliteRepository.Initialize()
       at Emby.Server.Implementations.Data.SqliteItemRepository.Initialize()
       at Emby.Server.Implementations.ApplicationHost.InitializeServices()
       at Jellyfin.Server.Program.StartServer(IServerApplicationPaths appPaths, StartupOptions options, IConfiguration startupConfig)
    [11:42:21] [INF] [1] Main: Running query planner optimizations in the database... This might take a while
    [11:42:21] [INF] [1] Emby.Server.Implementations.ApplicationHost: Disposing CoreAppHost
    [11:42:21] [INF] [1] Emby.Server.Implementations.ApplicationHost: Disposing PluginManager
    
    {restarts}

     

    Is there a way to repair/recover the database?

     

     

    EDIT: I believe I got it working again, mostly, although I lost my watch history.... Here is the process I followed:

     

    1. Shut down Jellyfin.
    2. Open the Unraid console, navigate to the Jellyfin data directory, and run:
    root@LaForge:~# cd /mnt/user/appdata/docker/jellyfin/data/data
    root@LaForge:/mnt/user/appdata/docker/jellyfin/data/data# sqlite3 library.db .dump > recovered_data.sql
    root@LaForge:/mnt/user/appdata/docker/jellyfin/data/data# mv library.db library.db.corrupt
    root@LaForge:/mnt/user/appdata/docker/jellyfin/data/data# sqlite3 recovered.db < recovered_data.sql
    root@LaForge:/mnt/user/appdata/docker/jellyfin/data/data# sqlite3 recovered.db "PRAGMA integrity_check;"
    ok
    root@LaForge:/mnt/user/appdata/docker/jellyfin/data/data# mv recovered.db library.db
    root@LaForge:/mnt/user/appdata/docker/jellyfin/data/data# chown 99:100 library.db
    root@LaForge:/mnt/user/appdata/docker/jellyfin/data/data# chmod 644 library.db
    
    

        3. Relaunch Jellyfin, wait forever for it to load, and then revalidate the library.
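
    (Side note: newer sqlite3 builds also have a .recover command that is supposedly more thorough than .dump on a corrupt database. I haven't tested it here, so treat this as a sketch:)

    sqlite3 library.db ".recover" > recovered_data.sql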

     

    Hope this helps someone else encountering the same issue...

     

    I don't suppose there's any way to get my watch history back?

  2. Last I heard this was caused by a bug where logs fill up and nginx runs out of memory when a webUI tab is left open... The problem is that I can't find a way to fix it. The suggested fixes on the forums involve restarting nginx, but how do I do that if I can't get the webUI to respond, and can't open a terminal or SSH? Is there any option open to me other than a hard shutdown every time I leave a tab open?
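
    For reference, the suggested fix boils down to restarting nginx from a shell, which is exactly what I can't get to. If I could, my understanding is it would be something like this (path assumed from Unraid's Slackware-style rc scripts):

    /etc/rc.d/rc.nginx restart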

     

    EDIT: Some parts of the webUI are working, but not the Plugins tab, so I can't troubleshoot the SSH problem... Tools/System Log opens and is full of this:

     ....

     Jun 16 07:28:55 LaForge nginx: 2024/06/16 07:28:55 [crit] 32453#32453: ngx_slab_alloc() failed: no memory
     Jun 16 07:28:55 LaForge nginx: 2024/06/16 07:28:55 [error] 32453#32453: shpool alloc failed
     Jun 16 07:28:55 LaForge nginx: 2024/06/16 07:28:55 [error] 32453#32453: nchan: Out of shared memory while allocating channel /disks. Increase nchan_max_reserved_memory.
     Jun 16 07:28:55 LaForge nginx: 2024/06/16 07:28:55 [error] 32453#32453: *15543604 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"

    ....

     

    It seems to start and end on 06/16.

  3. Need help with PhotoPrism. I was trying to update the container and added:

     

    Container Variable: PHOTOPRISM_SITE_URL=mydomain.com

    Container Variable: PHOTOPRISM_DISABLE_CHOWN=false

     

    I also deleted the default admin password variable, since I had already configured the admin account on the server and didn't want the variable overwriting it...

     

    However, now when I start the container, the site no longer loads and I get a splash screen that just says "Not Found" over the PhotoPrism logo. The console logs a warning about the admin password not being set and the account failing to initialize, so I tried adding back in:

     

    PHOTOPRISM_ADMIN_PASSWORD=MyAdminPassword

     

    However, the site still fails to load. What did I break and how can I fix it?

     

    EDIT: After changing the SITE_URL to https://mydomain.com the site once again loads correctly; however, the external URL used in links, while different, is still not correct...

     

    Before adding the SITE_URL variable, share links were generated with photoprism.me:port as the host; now, after setting it to https://mydomain.com, they are generated as localhost:port... How can I update this variable so that links work correctly?
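
    In case it matters, I'm wondering whether the variable needs the external port and a trailing slash like the examples in the docs, i.e. something like this (2342 is just the PhotoPrism default port; my mapping may differ):

    Container Variable: PHOTOPRISM_SITE_URL=https://mydomain.com:2342/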

  4. Hi, I have a DokuWiki install on an older server and I am trying to migrate the data to a new server. I can copy the appdata folder, but I get access denied on /config/keys/cert.key, and nginx hangs on launch due to the key mismatch. What is the process for migrating this to a new server successfully?
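
    Is it just a matter of copying with ownership preserved and letting the container regenerate the keys? I was picturing something like the following, but the paths are assumptions from my setup and I haven't verified that the container will recreate missing keys:

    rsync -avh --numeric-ids /mnt/user/appdata/dokuwiki/ root@newserver:/mnt/user/appdata/dokuwiki/
    # then, on the new server, delete the old keys so they (hopefully?) get regenerated
    rm -f /mnt/user/appdata/dokuwiki/keys/cert.key /mnt/user/appdata/dokuwiki/keys/cert.crt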

  5. Hi, after installing this plugin, I can no longer rearrange the order of any docker containers outside of the folder(s) or of the folders themselves. 

     

    The problem also seems to persist after uninstalling the plugin.

  6. I am currently running a Quadro M2000 and am finding it barely adequate for a single transcoding stream; it also doesn't have enough memory to even attempt running things like Stable Diffusion. I need something I can replace it with that has at least 8GB of VRAM and can handle two or more simultaneous 4K video transcodes between HEVC and H.264/MP4.

     

    The catch is that it needs to operate solely on the power it gets from the PCIe slot.

     

    Are there options out there or am I SOL?

  7. I'm going to try to condense this down as much as possible without leaving out anything important...

     

    I have a media share at 

    /mnt/user/media

    It is set as follows:

    Use cache pool: yes
    cache pool: cache
    CoW: auto
    Export: yes
    Security: Private
    User access: R/W

    I recently moved all my data from an old Unraid server by mounting an SMB share from the old server and pulling the data with:

    rsync -avh "/mnt/remotes/$OLDSERVER/media/" "/mnt/user/media" --progress --stats --iconv=.

    I then did some other stuff, setting up and configuring my media managers etc. Everything mostly behaved as expected, although the *arr apps did seem to struggle with some files that did not have valid filenames... I remediated those manually with mv $BADNAME $GOODNAME in the console.
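
    (For anyone else hunting these down, something like this should list names containing non-printable or non-ASCII characters; untested on my end:)

    LC_ALL=C find /mnt/user/media -name '*[! -~]*'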

     

    Now the trouble begins...

     

    When I browse to the media share from Unraid (from the share browser, the root console, or a Docker console with a mount point there), I can see 800+ files in /mnt/user/media/video/movies.

     

    Expected behavior: see the same files when mounting the SMB share from Windows.

     

    Observed behavior: only see the ~100 files that were created on the share *after* the data migration.

     

    Troubleshooting steps already taken:

    • Set Windows to display hidden and system files -- no change
    • Ran Docker Safe New Perms -- no change
    • Disabled Docker, disabled VMs, rebooted the server, disabled SMB, started and mounted the array, ran New Permissions on all shares except appdata, dismounted and stopped the array, enabled SMB, started and mounted the array again, removed and recreated the drive mapping in Windows -- no change
    • Tried on another computer -- no change

     

    laforge-diagnostics-20230113-2019.zip

  8. I have both Docker and VMs disabled, and rebooted the server without restarting Docker, yet there are Docker files in /mnt/disk3/system that will not move to the cache drive when the mover is invoked.

     

    One of my containers is taking an unusually long time to start up, and I'd like to rule this out as the cause by getting everything onto the cache drive. Is there any way to force the files to move?
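
    Would it be safe to just move them by hand while Docker is still disabled? Something like this is what I have in mind, but I haven't run it and I'm not sure if there are gotchas:

    rsync -avh /mnt/disk3/system/ /mnt/cache/system/
    rm -rf /mnt/disk3/system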

    Untitled.png

    Untitled2.png

  9. I'm having an issue migrating Radarr to a new server, whether I copy the config files manually or use the built-in backup/restore feature.

     

    I am trying to change the relative location of the root folder, e.g.:

    OLD: path /mnt/user/media/=/media && root folder =/media/Movies

    NEW: path /mnt/user/media/Movies/=/media && root folder =/media

     

    The problem I am having is that I cannot add or change the root folders. 

     

    DEBG 'radarr' stdout output:
    [Warn] RadarrErrorPipeline: Invalid request Validation failed: 
     -- Path: Folder is not writable by user nobody 
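
    Is the fix just to force ownership on the host side, something like the following (paths from my mapping above), or is the problem in the container path mapping itself?

    chown -R nobody:users /mnt/user/media/Movies
    chmod -R u+rwX,g+rwX /mnt/user/media/Movies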

     

  10. On 6.11.5, I have mounted a remote share using:

    mount -t cifs //RemoteMachine/Share /mnt/remote_shares/RemoteMachine

    After running an rsync script, I am trying to unmount the share to re-mount on a different path. 

    I get this:

    root@UNRAID:~# unmount /mnt/remote_share/RemoteMachine
    bash: unmount: command not found
  11. Hi,

     

    I am trying to decide how to manage the cache drives on my new server.  I have 4x 1TB SSDs which I intend to use for protected cache. I have one or more shares which will need a very large cache pool for storing downloads on their way to the array, and also docker images and appdata which will be used regularly.

     

    I had planned initially to use 2 pools of 2x 1TB SSDs each in BTRFS RAID1, and use one for caching the downloads and other shares, and one for docker, system, and appdata... however I am now wondering if I might be better served using a single BTRFS RAID10 pool for all of them, in order to fully utilize all of the available drive space. 

     

    Apart from the obvious benefit of being able to use more of the 2TB of space for caching downloads etc. when the docker/appdata folders are small, I have some concerns about the performance and fault tolerance of the pools for caching in BTRFS RAID1 vs RAID10 mode.

     

    Am I asking for trouble using BTRFS RAID10? Is there a significantly greater chance of data loss or downtime from drive failure?

     

    Does RAID10 offer a clear performance benefit over RAID1 (outside of that which can be gained by splitting competing services to different pools)?

     

    With BTRFS RAID1, will the pool remain software and hardware agnostic? E.g., can you read from a single disk without the rest of the pool, like you can with the Unraid array?
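
    (For that last question, what I'm imagining is pulling one member and mounting it degraded and read-only on another machine, something like the following, if that's even supposed to work:)

    mount -o degraded,ro /dev/sdX1 /mnt/test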

     

    Conversely, with BTRFS RAID10, I would assume you cannot... but can you at least still rebuild the array without taking it offline?

     

    Thanks in advance.

  12. I did some searching on this issue and really didn't want to necro this old thread again for a "me too": 

     

    This seems to be a widespread problem with these servers, and I am running into it as well. I really do not want to fall back to legacy boot just to get Unraid running on this server, although I may go the route of an ESXi host with Unraid as a guest OS if another option doesn't present itself.

     

    Does anyone know why syslinux is crapping its pants when trying to boot Unraid on this board?

     

    I've tried different memory configurations, including ECC RAM.

  13. Including the cache drive?

     

    Is there a way to lock the state of things so the original configuration can't be altered or broken in the process of migrating data to new drives? What about exporting/importing the Docker containers and configurations? Plugins? Etc. If I clone the original boot drive, can it be adapted to associate new drives and pools on a new controller without breaking anything?

  14. I have an Unraid server I am replacing, looking to upgrade everything. I'm moving from a J5005 with 4x 2.5" drives to a proper rack server with an 8x 3.5" backplane, and I'm upgrading my cache drive and my appdata drive pool to their own 1TB SSDs.

     

    What is the fastest/easiest way to accomplish this? I'm willing to spend on another Unraid license to attach all the drives for the build if needed. The new drives are SAS, so I can't plug them into the existing server, and I can't plug all of the existing drives into the new server at the same time either. Is there a way to just clone all the data over onto the new drives and then export/import my config to a new USB drive?

     

    Oh yeah, the boot USB drive will also need to be replaced, since the old one is a custom-made Disk-on-Module and I don't have a USB header on the new mobo to plug it into.

     

    Also, I'm going from a bare-metal install of Unraid to a guest VM on ESXi.

     

    What's the order of operations here in order to do this and not risk data loss from screwing it up?

  15. I am looking for the best way to implement a full bidirectional sync between one or more Windows PCs and the server, using something like osync/bsync or other rsync-based scripts. The goal is for any change to files on the server or client side to be reflected immediately on the other, but with the server side never deleting files and keeping soft backups of changed files. Ideally I'd also like to be able to sync just select directories with PCs that may not have enough storage for all of it.
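
    Conceptually, what I'm after is the equivalent of running something like these two one-way jobs continuously in both directions. This is a rough sketch with made-up paths and hostname, not what I'd actually deploy:

    # client -> server: add/update only, never delete on the server,
    # and keep dated copies of anything that gets overwritten
    rsync -avh --backup --backup-dir=/mnt/user/backups/sync-$(date +%F) \
        /cygdrive/c/Users/me/Sync/ tower:/mnt/user/sync/

    # server -> client: full mirror, deletions allowed on the client
    rsync -avh --delete tower:/mnt/user/sync/ /cygdrive/c/Users/me/Sync/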

     

    Is there any way to set up something like this?
