[Support] Djoss - FileBot



2 minutes ago, IT_Feldman said:

Hi everyone,

I'm having an issue in the GUI for FileBot. There's an error message that never goes away, and I'm not sure how to resolve it. The message is: "Unexpected error encountered: JSON.parse: unexpected character at line 1 column 1 of the JSON data". I have not seen anything in the logs that points to an issue, and I cannot find a JSON file in appdata that would be the source of the problem.

Any thoughts on how to proceed?

[Attached screenshot: FilebotError.png]

 

This is usually caused by a browser plugin blocking something needed by the page.

On 12/18/2020 at 11:31 PM, Pixel5 said:

Also, I had a strange thing happen when I used the GUI to change some names.

I set it up to sort the files into folders, and when it ran on a different share which does not have this folder structure yet, it did create the structure, but at the share level.

 

So I have a share for TV series with the cache disabled, and a downloads share that is cache-only. When I ran my renaming, I suddenly had the TV series structure on my cache drive and had to set the share to Cache: Yes and run the mover to get the files moved off the drive.

 

 

I'm having the same issue with FileBot not moving files (from the GUI) when renaming. Did you ever resolve it?

I have my shares set up like this

 

/mnt/user/downloads/movies       (Cache: Yes)

/mnt/user/movies                 (Cache: No)

 

When renaming files from the downloads share, I tell FileBot to rename and move them to the movies share.

 

This works fine only if the /mnt/user/downloads/movies files are already off the cache.

If the files are on the cache, it creates:

1) /mnt/user/movies/"Movie_name (date)": an empty folder on the array

2) /mnt/user/movies/"Movie_name (date)": a populated folder containing the movie file, on the cache

 

This orphans the files on the cache, since the mover will not touch them without changing the cache settings for the movies share.

 

I guess this is because FileBot doesn't do a copy+delete operation; for the move operation, FileBot essentially creates its own share folder structure on the cache, which then gets stuck there. FileBot and unRAID think the files are where they should be, since they are in the share now.

How do we get around this?
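The behaviour described here is consistent with how a move works inside a single mount: when source and destination are reachable through the same filesystem (here, the /mnt/user FUSE mount), a move is a plain rename and no data is copied, so the file stays on whatever disk it already occupies. A minimal, hypothetical illustration using throwaway directories (not the actual shares):

```shell
# Within one filesystem, mv is a rename(2): the inode, and therefore the
# physical location of the data, is unchanged. This is why a "move" from
# the downloads path to the movies path through the same mount leaves
# the file on the cache disk it started on.
demo=$(mktemp -d)
mkdir -p "$demo/downloads/movies" "$demo/movies"
echo test > "$demo/downloads/movies/film.mkv"
before=$(stat -c %i "$demo/downloads/movies/film.mkv")   # inode number before
mv "$demo/downloads/movies/film.mkv" "$demo/movies/film.mkv"
after=$(stat -c %i "$demo/movies/film.mkv")              # inode number after
echo "inode before=$before, after=$after"
```

If the two inode numbers match, no data was copied; the file was only re-linked under a new path.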

 

5 hours ago, Optimbic said:

I guess this is because FileBot doesn't do a copy+delete operation; for the move operation, FileBot essentially creates its own share folder structure on the cache, which then gets stuck there. FileBot and unRAID think the files are where they should be, since they are in the share now.

How do we get around this?

Well, I would say that's most likely the expected and intended behaviour (at least as far as I know ;))

 

Maybe also show how you mounted the shares; you may be able to change this behaviour if you really split them.

 

This part here (not split in my setup; I really do want a move instead of a copy ... ;))

[screenshot]

 

If I remember correctly, when I had it set up wrong, I made two mounts for the Docker container or something like that ...

 

Even easier ... just use Cache: Yes for your movie share too and run the mover there as well ... but I guess that's not an option, because otherwise you would already be running it like that.

25 minutes ago, alturismo said:

Well, I would say that's most likely the expected and intended behaviour [...]

Yeah, I see what you mean about it being expected behavior; it's just not what I want it to do :)

 

I guess FileBot creates the folders/files on the cache, and then when it goes to move them it doesn't need to, because they are already in the share. As far as FileBot and unRAID are concerned, the files are where they should be.

 

I'm not using the AMC script so I left those paths empty.

 

I did turn on the cache for the movie share and I'm using the mover until I figure a way to get around this.

 

I wonder if I could somehow tell FileBot to move the files to /mnt/user0/ paths; that should technically bypass the cache, but the files are already on the cache, so I guess it's not going to work.

19 hours ago, Optimbic said:

Yeah, I see what you mean about it being expected behavior; it's just not what I want it to do :) [...]

So I've been thinking about this some more.

 

If FileBot does a rename operation that changes the path of the file in situ (which is effectively a move operation), it will always bypass the destination share's cache and allocation settings.

 

For example, let's imagine somebody's /mnt/user/downloads/movies/ share is set to include only disk 5, and the /mnt/user/movies/ share's allocation is set to disks 1-4 with high-water. Disk 5 should then never have a /mnt/user/movies/ folder structure.

FileBot is instructed to rename a file from

/mnt/user/downloads/movies/movie_folder/movie_file

to

/mnt/user/movies/movie_folder/movie_file

 

FileBot will create the movies share on disk 5, and the renamed files will not be moved to the proper disk under the high-water calculations. They will stay on disk 5, since FileBot has created the movies share folder structure there.

 

However, I can't find anybody else discussing this issue in the threads, so I suspect I'm somehow misunderstanding it. Can someone let me know if I'm wrong? I can't be the only one who has encountered this.

 

It's not a showstopper, for me personally at least, since my downloads share is set to all disks with high-water allocation, and the cache issue can be bypassed by enabling the cache on the destination share and then running the mover.

 

However, it's an interesting puzzle how to make the operation more efficient. Obviously we can just rename the files without changing the paths and then run an mv command from the terminal, but that's extra steps.
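The rename-then-mv workaround can be sketched as follows; the directory names below are hypothetical stand-ins for /mnt/user/downloads/movies and /mnt/user0/movies:

```shell
# Step 1: rename in place (what FileBot can do without side effects),
# then Step 2: move the result to the destination yourself. Against a
# real /mnt/user0 path this second step is a cross-mount copy+delete,
# so the array's allocation rules decide where the data lands.
demo=$(mktemp -d)
mkdir -p "$demo/downloads" "$demo/array_movies"
echo x > "$demo/downloads/some.release.2020.mkv"

# Step 1: rename in place inside the downloads share
mv "$demo/downloads/some.release.2020.mkv" "$demo/downloads/Some Release (2020).mkv"

# Step 2: move to the destination share
mv "$demo/downloads/Some Release (2020).mkv" "$demo/array_movies/"
ls "$demo/array_movies"
```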


All you are discussing should not be a concern for FileBot or any applications working with /mnt/user/.  The issue/behaviour you are raising seems to be related to unRAID internals.

 

For example, you mentioned that moving a file from a cache-enabled share to a cache-disabled share does not produce the expected result.  From FileBot's point of view, the file has been moved from one folder to the other.  However, under the hood, unRAID did not place the file on the expected disks.

1 hour ago, Djoss said:

All you are discussing should not be a concern for FileBot or any applications working with /mnt/user/. [...]

 

Yes, exactly; it seems everything is "working as intended", except that the files physically end up in places that violate the share's rules.

 

Although the fact that FileBot creates empty folders in the correct physical location tells me that maybe something does try to move them, but unRAID tells it not to, since the files are already in the share. At that point, as far as unRAID or FileBot is concerned, everything is fine.

 

So essentially the question is: how do we get FileBot and unRAID to work together so that the share rules are not violated?

 

 

 

 

7 minutes ago, Optimbic said:

So essentially the question is: how do we get FileBot and unRAID to work together so that the share rules are not violated?

 

Maybe try this, since you use the GUI function (I never used it, only AMC here ... ;))

 

Add another path to the Docker container, like:

 

/storage2 <> /mnt/user0/movies/

 

Now, in the web GUI, set /storage2 as the target?
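As a sketch only: on a plain docker command line, that extra mapping might look like the following (the jlesage/filebot image name and the /storage container path are assumptions based on this container; on unRAID you would add it as an extra Path in the container template instead):

```shell
docker run -d --name=filebot \
  -v /mnt/user/downloads:/storage \
  -v /mnt/user0/movies:/storage2 \
  jlesage/filebot

# /mnt/user0 exposes the user shares without the cache pool, so anything
# written to /storage2 should land directly on the array disks.
```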

 

Or maybe play with these settings here; since you use manual web UI processing anyway, test these functions ...

 

[screenshot]

22 hours ago, alturismo said:

Maybe try this, since you use the GUI function ... add another path to the Docker container: /storage2 <> /mnt/user0/movies/ [...]

 

Thanks, I'll try that. The copy action should work, although then I have to go in and delete things manually again, so it's the same amount of work as a rename plus an mv from the terminal.

 

When you use AMC, you don't have these kinds of problems? Or are your shares set up in such a way that it doesn't matter?

28 minutes ago, Optimbic said:

When you use the AMC you don't have these kinds of problems?

I'm definitely not using a copy/delete (disk > disk) operation; I want the straight move, as expected. As mentioned, it would be an issue for me if the files were copied to another disk instead of moved instantly ;)

 

But in AMC you set an input and an output, so I assume yes, it would work the way you want when you add two different root mount points, from FileBot's point of view as well; that's the usual mistake when people complain that files are copied/deleted instead of moved instantly.

21 hours ago, alturismo said:

I'm definitely not using a copy/delete (disk > disk) operation; I want the straight move, as expected. [...]

 

Thanks, I'll play around with AMC and see.

 

Yeah, I also don't want the copy+delete; I want it to move stuff.

44 minutes ago, master shayne said:

Trying to run AMC; any guidance would be appreciated.

AMC settings are done in the Docker environment variables.

 

Edit the FileBot Docker container and scroll down; the advanced AMC settings are under "Show more settings".

 

Here is just a snippet of the regular settings:

[screenshot]

 

And when you expand "Show more ...", another snippet:

[screenshot]

3 weeks later...

Not sure if this has been answered before, but I noticed the watch folder and output folder. Do these need to be different? I have a ton of media and would like to keep it in the same location. For example, my media resides in /media/Movies and /media/Tv Shows. If I point both the watch folder and the output folder at /media, will it put the content back into its corresponding folders, or will it just dump everything at the top level in /media?

 

I did not see any info on this on GitHub, unless I overlooked it.

4 hours ago, SaltyCapn said:

I did not see any info on this on GitHub, unless I overlooked it.

This is more FileBot-related; it depends on how you set up your output format ... so rather look at the FileBot readmes.

 

As a starter, just copy some "test" files to a test location and run from there to see the results, before you run it over everything ...
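For testing from the command line instead, the AMC script also has a test action that only prints what it would do without touching any files; a minimal invocation might look like this (the /storage paths are placeholders, not your actual mounts):

```shell
# Dry run: --action test matches the files and prints the planned
# renames without moving or copying anything.
filebot -script fu:amc \
  --output "/storage/test-output" \
  --action test \
  -non-strict \
  "/storage/test-input"
```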

2 weeks later...

I'm fighting a 'bug' with the most recent update (v23.03.3). When I updated my container from the Docker tab, it seems to start fine with no errors in the logs, but the moment I try to rename something I get a "Filebot is a directory" message and no rename occurs. I tried removing the app and then deleting the Filebot folder under /appdata.

 

Even a clean re-install with the stock presets gives me the same message. I then tried rolling back and used tag v23.03.2 and then restored the backup of the /appdata/Filebot folder from last night, i.e. before I tried the update. Alas I still get this same message and no rename occurs.

 

Any thoughts on what to try next?

 

UPDATE: I was able to get it working. I had to roll back to v23.03.1 and restore my backup from before the update to v23.03.2 (which was a few days ago). I'll make a 'permanent' copy of the backup I restored just in case, but for now I'm leaving it pointed at the v23.03.1 version.
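For anyone wanting to do the same rollback, pinning the container to a specific tag is just a matter of changing the image reference; the jlesage/filebot image name here is an assumption based on this thread's container:

```shell
# On unRAID: edit the container and point the Repository field at the
# pinned tag. The manual command-line equivalent is:
docker pull jlesage/filebot:v23.03.1
```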


Hello! Since I upgraded the Docker container, FileBot crashes when I rename files or folders. I can restart it and maybe get it to work once, and then it crashes again.

 

[filebot-info] Activate License [PX31516439] on [Sat Apr 01 10:57:17 CDT 2023]
[filebot-info] License: FileBot License PX31516439 (Valid-Until: 2071-12-09)
[filebot-info] Apr 01, 2023 10:57:17 AM net.sf.ehcache.store.disk.DiskStorageFactory <init>
[filebot-info] WARNING: The index for data file /config/cache/0/url_1c.data is out of date, probably due to an unclean shutdown. Deleting index file /config/cache/0/url_1c.index
[filebot-info] Done ヾ(@⌒ー⌒@)ノ
[supervisor  ] starting service 'app'...
[xvnc        ] Sat Apr  1 10:57:18 2023
[xvnc        ]  Connections: accepted: /tmp/vnc.sock
[xvnc        ]  SConnection: Client needs protocol version 3.8
[xvnc        ]  SConnection: Client requests security type None(1)
[xvnc        ]  VNCSConnST:  Server default pixel format depth 24 (32bpp) little-endian rgb888
[xvnc        ] Sat Apr  1 10:57:19 2023
[xvnc        ]  VNCSConnST:  Client pixel format depth 24 (32bpp) little-endian bgr888
[xvnc        ]  ComparingUpdateTracker: 0 pixels in / 0 pixels out
[xvnc        ]  ComparingUpdateTracker: (1:-nan ratio)
[supervisor  ] starting service 'amc'...
[app         ] Apr 01, 2023 10:57:19 AM net.sf.ehcache.store.disk.DiskStorageFactory <init>
[app         ] WARNING: The index for data file /config/cache/0/expression_classes_0.data is out of date, probably due to an unclean shutdown. Deleting index file /config/cache/0/expression_classes_0.index
[supervisor  ] all services started.
[app         ] Apr 01, 2023 10:58:16 AM net.sf.ehcache.store.disk.DiskStorageFactory <init>
[app         ] WARNING: The index for data file /config/cache/0/thetvdb_search_eng_2.data is out of date, probably due to an unclean shutdown. Deleting index file /config/cache/0/thetvdb_search_eng_2.index
[app         ] Apr 01, 2023 10:58:16 AM net.sf.ehcache.store.disk.DiskStorageFactory <init>
[app         ] WARNING: The index for data file /config/cache/0/thetvdb_1c.data is out of date, probably due to an unclean shutdown. Deleting index file /config/cache/0/thetvdb_1c.index
[app         ] Apr 01, 2023 10:58:17 AM net.sf.ehcache.store.disk.DiskStorageFactory <init>
[app         ] WARNING: The index for data file /config/cache/0/thetvdb_data_0_eng_2.data is out of date, probably due to an unclean shutdown. Deleting index file /config/cache/0/thetvdb_data_0_eng_2.index
[app         ] 19 files renamed.
[app         ] Apr 01, 2023 10:58:21 AM net.sf.ehcache.store.disk.DiskStorageFactory <init>
[app         ] WARNING: The index for data file /config/cache/0/fanarttv_1c.data is out of date, probably due to an unclean shutdown. Deleting index file /config/cache/0/fanarttv_1c.index
[app         ] Apr 01, 2023 10:58:21 AM net.sf.ehcache.store.disk.DiskStorageFactory <init>
[app         ] WARNING: The index for data file /config/cache/0/tmdb_en-us_1c.data is out of date, probably due to an unclean shutdown. Deleting index file /config/cache/0/tmdb_en-us_1c.index
[app         ] Apr 01, 2023 10:58:22 AM net.sf.ehcache.store.disk.DiskStorageFactory <init>
[app         ] WARNING: The index for data file /config/cache/0/tmdb_en-us_1c_etag_1.data is out of date, probably due to an unclean shutdown. Deleting index file /config/cache/0/tmdb_en-us_1c_etag_1.index
[app         ] Apr 01, 2023 10:58:22 AM net.sf.ehcache.store.disk.DiskStorageFactory <init>
[app         ] WARNING: The index for data file /config/cache/0/tmdb_1c.data is out of date, probably due to an unclean shutdown. Deleting index file /config/cache/0/tmdb_1c.index
[app         ] Apr 01, 2023 10:58:22 AM net.sf.ehcache.store.disk.DiskStorageFactory <init>
[app         ] WARNING: The index for data file /config/cache/0/tmdb_1c_etag_1.data is out of date, probably due to an unclean shutdown. Deleting index file /config/cache/0/tmdb_1c_etag_1.index
[app         ] #
[app         ] # A fatal error has been detected by the Java Runtime Environment:
[app         ] #
[app         ] #  SIGSEGV (0xb) at pc=0x00001539b3f31030, pid=863, tid=1123
[app         ] #
[app         ] # JRE version: OpenJDK Runtime Environment (17.0.6+10) (build 17.0.6+10-alpine-r0)
[app         ] # Java VM: OpenJDK 64-Bit Server VM (17.0.6+10-alpine-r0, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
[app         ] # Problematic frame:
[app         ] # C  [libmediainfo.so+0x131030]  ZenLib::BitStream_LE::Get(unsigned long)+0x50
[app         ] #
[app         ] # No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
[app         ] #
[app         ] # An error report file with more information is saved as:
[app         ] # /storage/hs_err_pid863.log
[app         ] #
[app         ] # If you would like to submit a bug report, please visit:
[app         ] #   https://gitlab.alpinelinux.org/alpine/aports/issues
[app         ] # The crash happened outside the Java Virtual Machine in native code.
[app         ] # See problematic frame for where to report the bug.
[app         ] #
[supervisor  ] service 'app' exited (got signal SIGABRT).
[supervisor  ] service 'app' exited, shutting down...
[supervisor  ] stopping service 'amc'...
[supervisor  ] stopping service 'openbox'...
[supervisor  ] service 'openbox' exited (with status 0).
[supervisor  ] stopping service 'nginx'...
[xvnc        ] Sat Apr  1 10:58:34 2023
[xvnc        ]  VNCSConnST:  closing /tmp/vnc.sock: Clean disconnection
[xvnc        ]  EncodeManager: Framebuffer updates: 1330
[xvnc        ]  EncodeManager:   Tight:
[xvnc        ]  EncodeManager:     Solid: 2.289 krects, 22.3138 Mpixels
[xvnc        ]  EncodeManager:            35.7656 KiB (1:2437.82 ratio)
[xvnc        ]  EncodeManager:     Bitmap RLE: 258 rects, 73.407 kpixels
[xvnc        ]  EncodeManager:                 7.66309 KiB (1:37.8137 ratio)
[xvnc        ]  EncodeManager:     Indexed RLE: 3.507 krects, 3.08514 Mpixels
[xvnc        ]  EncodeManager:                  563.757 KiB (1:21.4497 ratio)
[xvnc        ]  EncodeManager:   Tight (JPEG):
[xvnc        ]  EncodeManager:     Full Colour: 3.56 krects, 13.3688 Mpixels
[xvnc        ]  EncodeManager:                  8.97584 MiB (1:5.68624 ratio)
[xvnc        ]  EncodeManager:   Total: 9.614 krects, 38.8412 Mpixels
[xvnc        ]  EncodeManager:          9.5688 MiB (1:15.4959 ratio)
[xvnc        ]  Connections: closed: /tmp/vnc.sock
[xvnc        ]  ComparingUpdateTracker: 96.9176 Mpixels in / 29.7823 Mpixels out
[xvnc        ]  ComparingUpdateTracker: (1:3.2542 ratio)
[supervisor  ] service 'nginx' exited (with status 0).
[supervisor  ] stopping service 'xvnc'...
[xvnc        ]  ComparingUpdateTracker: 0 pixels in / 0 pixels out
[xvnc        ]  ComparingUpdateTracker: (1:-nan ratio)
[supervisor  ] service 'xvnc' exited (with status 0).
[supervisor  ] sending SIGTERM to all processes...
[amc         ] Terminated
[supervisor  ] service 'amc' exited (with status 143).
[finish      ] executing container finish scripts...
[finish      ] all container finish scripts executed.

** Press ANY KEY to close this window ** 

Any thoughts?

9 hours ago, Keek Uras said:

Hello! Since I upgraded the Docker container, FileBot crashes when I rename files or folders. [...]

I am having the same issue


Can confirm: the last update broke everything.

[amc         ] #
[amc         ] # A fatal error has been detected by the Java Runtime Environment:
[amc         ] #
[amc         ] #  SIGSEGV (0xb) at pc=0x000014a1e2331030, pid=842, tid=844
[amc         ] #
[amc         ] # JRE version: OpenJDK Runtime Environment (17.0.6+10) (build 17.0.6+10-alpine-r0)
[amc         ] # Java VM: OpenJDK 64-Bit Server VM (17.0.6+10-alpine-r0, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
[amc         ] # Problematic frame:
[amc         ] # C  [libmediainfo.so+0x131030]  ZenLib::BitStream_LE::Get(unsigned long)+0x50
[amc         ] #
[amc         ] # No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
[amc         ] #
[amc         ] # An error report file with more information is saved as:
[amc         ] # /tmp/hs_err_pid842.log
[amc         ] #
[amc         ] # If you would like to submit a bug report, please visit:
[amc         ] #   https://gitlab.alpinelinux.org/alpine/aports/issues
[amc         ] # The crash happened outside the Java Virtual Machine in native code.
[amc         ] # See problematic frame for where to report the bug.
[amc         ] #


 
