tazire
Members · 377 posts

Posts posted by tazire

  1. Got a weird issue with Sonarr at the moment. I have the connection set up correctly, apparently; however, whenever I try to add a TV series I get the following error.

     

    fail: Ombi.Api.Api[1000]
    StatusCode: BadRequest, Reason: Bad Request, RequestUri: http://192.168.1.18:8989/api/series/0
    fail: Ombi.Api.Api[1000]
    StatusCode: BadRequest, Reason: Bad Request, RequestUri: http://192.168.1.18:8989/api/series/0
    fail: Ombi.Api.Api[1000]
    StatusCode: BadRequest, Reason: Bad Request, RequestUri: http://192.168.1.18:8989/api/series/0

     

    The Ombi GUI acts like everything went fine and the addition is being processed, but the series never actually gets added to Sonarr. I can't try to add the series again through Ombi as it says it has already been requested. I can manually add series myself through Sonarr and everything works fine on that end, it appears.

     

    EDIT***

     

    Just to add to this, I have found more info. It's some issue with the links: when I click on links to specific TV shows it tries to send me to an invalid URL.

    For example, when I click the Game of Thrones link it tries to redirect me to

    https://www.imdb.com/title/http://www.imdb.com/title/tt0944947//

    when it should be

    https://www.imdb.com/title/tt0944947/

     

    I have no idea why it would do this or what I can do to fix it, but this seems to be why the series addition is failing.

  2. Having a little trouble with permissions at the moment. I know it's something I've done, but I'm looking for a bit of a hand to get it back working. I recently moved all my Docker appdata and docker.img to an unassigned-device NVMe. I had a few permissions issues with a couple of dockers not being able to edit or access their config files, and I think that's what's happening here. I had to redo my SABnzbd and ruTorrent dockers; both are back up and running as they should, but as a result I've had to go and change the app API keys in my Sonarr, Radarr, etc. Radarr and Lidarr worked as they should. However, in Sonarr, when I change the API key the test works fine and connects, but the save button does nothing, so when I leave that part of the settings and go back in, the API key has not been updated. Anything I can do to fix this? I'm reluctant to redo Sonarr and have to re-add all my media to it again. Cheers.

     

    EDIT

    Just to add to this: I'm getting the following errors in the logs (a rough sketch of the permissions fix I'm planning to try is below the trace).

     

    2020-07-17 14:37:04,424 DEBG 'sonarr' stdout output:
    [Error] TaskExtensions: Task Error

    [v2.0.0.5344] System.Data.SQLite.SQLiteException (0x80004005): attempt to write a readonly database
    attempt to write a readonly database
    at System.Data.SQLite.SQLite3.Reset (System.Data.SQLite.SQLiteStatement stmt) [0x00083] in <61a20cde294d4a3eb43b9d9f6284613b>:0
    at System.Data.SQLite.SQLite3.Step (System.Data.SQLite.SQLiteStatement stmt) [0x0003c] in <61a20cde294d4a3eb43b9d9f6284613b>:0
    at System.Data.SQLite.SQLiteDataReader.NextResult () [0x0016b] in <61a20cde294d4a3eb43b9d9f6284613b>:0
    at System.Data.SQLite.SQLiteDataReader..ctor (System.Data.SQLite.SQLiteCommand cmd, System.Data.CommandBehavior behave) [0x00090] in <61a20cde294d4a3eb43b9d9f6284613b>:0
    at (wrapper remoting-invoke-with-check) System.Data.SQLite.SQLiteDataReader..ctor(System.Data.SQLite.SQLiteCommand,System.Data.CommandBehavior)
    at System.Data.SQLite.SQLiteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x0000c] in <61a20cde294d4a3eb43b9d9f6284613b>:0
    at System.Data.SQLite.SQLiteCommand.ExecuteScalar (System.Data.CommandBehavior behavior) [0x00006] in <61a20cde294d4a3eb43b9d9f6284613b>:0
    at System.Data.SQLite.SQLiteCommand.ExecuteScalar () [0x00006] in <61a20cde294d4a3eb43b9d9f6284613b>:0
    at Marr.Data.QGen.InsertQueryBuilder`1[T].Execute () [0x00046] in C:\BuildAgent\work\5d7581516c0ee5b3\src\Marr.Data\QGen\InsertQueryBuilder.cs:140
    at Marr.Data.DataMapper.Insert[T] (T entity) [0x0005d] in C:\BuildAgent\work\5d7581516c0ee5b3\src\Marr.Data\DataMapper.cs:728
    at NzbDrone.Core.Datastore.BasicRepository`1[TModel].Insert (TModel model) [0x0002d] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Datastore\BasicRepository.cs:111
    at NzbDrone.Core.Messaging.Commands.CommandQueueManager.Push[TCommand] (TCommand command, NzbDrone.Core.Messaging.Commands.CommandPriority priority, NzbDrone.Core.Messaging.Commands.CommandTrigger trigger) [0x0013d] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Messaging\Commands\CommandQueueManager.cs:82
    at NzbDrone.Core.Messaging.Commands.CommandQueueManager.Push (System.String commandName, System.Nullable`1[T] lastExecutionTime, NzbDrone.Core.Messaging.Commands.CommandPriority priority, NzbDrone.Core.Messaging.Commands.CommandTrigger trigger) [0x000b7] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Messaging\Commands\CommandQueueManager.cs:95
    at NzbDrone.Core.Jobs.Scheduler.ExecuteCommands () [0x00043] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Jobs\Scheduler.cs:42
    at System.Threading.Tasks.Task.InnerInvoke () [0x0000f] in /build/mono/src/mono/external/corert/src/System.Private.CoreLib/src/System/Threading/Tasks/Task.cs:2476
    at System.Threading.Tasks.Task.Execute () [0x00000] in /build/mono/src/mono/external/corert/src/System.Private.CoreLib/src/System/Threading/Tasks/Task.cs:2319
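
    For reference, this is roughly what I'm planning to try to reset ownership on the Sonarr appdata. It's only a minimal sketch assuming the usual Unraid nobody:users (PUID 99 / PGID 100) setup for linuxserver containers, and the /mnt/disks/nvme path is just an example of where my appdata now lives:

    # Stop the container, reset ownership/permissions on its appdata, restart.
    # nobody:users is the default user/group the linuxserver containers run as
    # on Unraid; the path below is only an example for my unassigned NVMe.
    docker stop sonarr
    chown -R nobody:users /mnt/disks/nvme/appdata/sonarr
    chmod -R u+rwX,g+rwX /mnt/disks/nvme/appdata/sonarr
    docker start sonarr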

     

     

  3. Just a question in relation to Docker specifically. I have offloaded my appdata and docker.img to an unassigned device. I keep getting the error about the config path not being set as R/W Slave. When I try to edit the docker container to fix this, the config path is the one path that I don't have the edit option next to, so I can't set it as required.

     

    Is this something I should redefine myself by adding a new path and leaving the existing config path empty? Obviously, as the edit and delete buttons aren't available on the config path as standard, editing it directly isn't an option. (A sketch of the mount option I think the warning is about is below.)
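
    For context, my understanding is that the R/W Slave access mode just corresponds to the bind-mount propagation option on the docker run line. A minimal sketch (the container image and NVMe path are only examples from my setup):

    # 'rw,slave' = read-write with slave mount propagation, so a remount of
    # the unassigned device on the host propagates into the container.
    docker run --rm \
      -v /mnt/disks/nvme/appdata/sonarr:/config:rw,slave \
      alpine ls /config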

  4. OK, I tried to Google this and just can't find anything to help; essentially the title says it all, really. I am making a few changes. I am moving my appdata off the cache drives onto an NVMe drive I'm expecting. I then want to change my cache drives to RAID 0 and run Nextcloud off this only. I currently have Duplicati backing up my Nextcloud to Backblaze, but I'd like to have another share on my array as a direct copy of the Nextcloud (cache-only) share, for immediate availability should I have cache drive issues.

     

    What is my best option to achieve this? rsync the whole share to the other share? And what's the best way to have this occur automatically every day, or once a week, etc.? (Roughly what I have in mind is sketched below.)
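
    Roughly what I have in mind, as a minimal sketch: it assumes the Nextcloud data lives on a cache-only share at /mnt/cache/nextcloud and the array copy is a share called nextcloud_backup (both names are just examples). It could then be scheduled with the User Scripts plugin or a plain cron entry:

    #!/bin/bash
    # Mirror the cache-only Nextcloud share to a share on the array.
    # -a preserves permissions/timestamps, --delete keeps the destination
    # an exact copy of the source.
    rsync -a --delete /mnt/cache/nextcloud/ /mnt/user/nextcloud_backup/

    # Example cron entry to run the script daily at 04:00:
    # 0 4 * * * /boot/custom/nextcloud_mirror.sh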

  5. 4 hours ago, tazire said:

    Yeah, those are the steps I followed, but when I get to the update command I get that error in the terminal.

     

    I then tried getting into my Nextcloud instance and, obviously, maintenance mode was on, causing issues. I turned that off and it gave me the update button, which I tried, and it again gave me a similar error to the above, only in the Nextcloud UI.

     

    EDIT

     

    OK, I tried a new install and just copied the old config from the Nextcloud backup, and it just goes back to the same issue, unfortunately.

    Also tried to roll back to linuxserver/nextcloud:18.0.4-ls81, but it just gives me the same result.

    OK, just in case anyone else finds similar issues: I checked my config and for some reason the version was down as 17.x.x, so I changed that to 19.0.0 and all is good now. (A quick way to check this is sketched below.)
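
    A quick way to check what I mean, as a rough sketch assuming the linuxserver.io container layout where config.php lives under /config/www/nextcloud/config and the container is named nextcloud:

    # Show what release config.php thinks is installed; in my case it still
    # said 17.x.x even though a newer image was running.
    docker exec -it nextcloud grep "'version'" /config/www/nextcloud/config/config.php
    # Editing that line to match the installed 19.0.0 release is what got
    # the updater working again for me.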

  6. 3 hours ago, saarg said:

    Have you checked the manual method linked in the first post?

    Yeah, those are the steps I followed, but when I get to the update command I get that error in the terminal.

     

    I then tried getting into my Nextcloud instance and, obviously, maintenance mode was on, causing issues. I turned that off and it gave me the update button, which I tried, and it again gave me a similar error to the above, only in the Nextcloud UI.

     

    EDIT

     

    OK, I tried a new install and just copied the old config from the Nextcloud backup, and it just goes back to the same issue, unfortunately.

    Also tried to roll back to linuxserver/nextcloud:18.0.4-ls81, but it just gives me the same result.

  7. Hmmm, just trying to go from version 18.0.6 to 19, I got the following error:

     

    Nextcloud or one of the apps require upgrade - only a limited number of commands are available
    You may use your browser or the occ upgrade command to do the upgrade
    Set log level to debug
    Exception: Updates between multiple major versions and downgrades are unsupported.
    Update failed
    Maintenance mode is kept active
    Reset log level

     

    I'm aware this is an issue when you jump a major version, but I'm fairly sure 18.0.6 was the most recent release prior to 19? Not really sure where to go with this; I can't update and am kind of stuck now. I read this thread:

    https://help.nextcloud.com/t/updates-between-multiple-major-versions-are-unsupported/7094

    but that seems to apply if you skip a major release. At this point will I have to reinstall? (What I'm running is sketched below for clarity.)
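
    For clarity, this is roughly what I'm running when it fails. A minimal sketch assuming the linuxserver.io image with the container named nextcloud, where occ can be called directly inside the container:

    # Manual upgrade attempt from the Unraid host:
    docker exec -it nextcloud occ upgrade
    # If it ever completes, turn maintenance mode back off:
    docker exec -it nextcloud occ maintenance:mode --off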

     

  8. I keep getting the following errors and then Ombi becomes very unresponsive:

     

    fail: Ombi.Api.Api[1000]
    StatusCode: InternalServerError, Reason: Internal Server Error, RequestUri: http://x.x.x.x:8989/sonarr/api/Episode?seriesId=46
    fail: Ombi.Schedule.Jobs.Sonarr.SonarrSync[2006]
    Exception when trying to cache Sonarr
    Newtonsoft.Json.JsonSerializationException: Cannot deserialize the current JSON object (e.g. {"name":"value"}) into type 'System.Collections.Generic.List`1[Ombi.Api.Sonarr.Models.Episode]' because the type requires a JSON array (e.g. [1,2,3]) to deserialize correctly.
    To fix this error either change the JSON to a JSON array (e.g. [1,2,3]) or change the deserialized type so that it is a normal .NET type (e.g. not a primitive type like integer, not a collection type like an array or List<T>) that can be deserialized from a JSON object. JsonObjectAttribute can also be added to the type to force it to deserialize from a JSON object.

     

    A restart of the docker brings it back for a short period until it happens again. Any ideas? I might just do a full fresh install soon.

  9. Exact error is as follows:

     

    Apr 27 22:32:53 SERVER kernel: BTRFS critical (device sdaa1): corrupt node: root=5 block=1759166464 slot=120, bad key order, current (81626601 84 706685372) next (81592320 12 81626604)

     

    I Googled it a bit and was wondering: if I simply reformat my cache drives and start from an appdata backup, will this resolve the issue? The server is running and I can use everything, but the error is repeating and filling up the logs. (The checks I was planning to run before wiping are sketched below.)

     

    Any help greatly appreciated.
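
    Before wiping, this is the kind of sanity check I was planning to run first. Only a rough sketch, assuming the pool is mounted at the standard Unraid /mnt/cache location:

    # Per-device error counters for the cache pool:
    btrfs device stats /mnt/cache
    # Foreground scrub with per-device stats; it verifies checksums and
    # should flag the corrupted metadata the syslog is complaining about:
    btrfs scrub start -Bd /mnt/cache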

  10. Hmmm, got this installed there but having issues. Unraid sees the card, but when I plug the Ethernet cable in I get nothing at all (a link light doesn't even appear on the switch to indicate a connection). Plugged back into the motherboard 1GbE NIC and all is fine.

     

    EDIT

     

    Never mind. This solved my issue.

     

     

     

  11. I only have the very odd 4K file that's maybe more than that. But the other thing is this was never an issue for me prior to the lockdown and the increase in people using my server. It's just that the usage on the server seems to be slowing down my experience, so I was just trying to figure out if there is anything I can do to get everything back working seamlessly.

     

    Without going too much into the problem, my main question is: would a change of expander card and a change to dual-link to the HBA improve the throughput and possibly my experience? This is very much a hobby for me, so I enjoy all the upgrading and learning about this stuff too. I mean, I really have no real reason to upgrade to 10Gb networking, but again I just enjoy all that kind of stuff.

     

    Anyway thanks for the input either way. 

  12. It's not something I had noticed until this lockdown, though. With more and more people accessing my Plex server, it seems the throughput of the HBA or expander is possibly slowing down my experience. Just that with 2 or 3 people reading data from 2 or 3 different drives at once, it leaves less throughput for me to watch what I want.

     

    I'm just wondering if the upgrade of the expander card and going to a dual link to the HBA will improve my experience?

  13. It's not so much about the speed as improving my personal experience within my home. At the moment I am seeing slowdown viewing 4K content from my Plex server while 2 to 3 others are watching from my server. This is direct-play 4K content, so I know it's not a transcoding issue. I'm essentially just trying to eliminate the bottlenecks which may be causing this. It may simply be that the read speed of the drives is causing the issue, but I just want to eliminate the throughput of the drive access as a possible cause.

  14. Title says it all really. I currently have a 24-bay setup with 19 bays populated. My current setup is:

     

    Ryzen 7 1700

    64GB RAM at default speed

    Nvidia Quadro P4000

    HBA: LSI 9240-8i

    Expander: Intel RES2SV240

    1GbE NIC, soon to be 10GbE

     

    Basically I'm looking to maximise my drive speeds as best I can. From reading other threads I was leaning towards an expander upgrade to the RES2CV360 and using a dual link to the existing HBA (currently a single link to the RES2SV240). I'm just curious whether the HBA will be the bottleneck in this instance and whether I should upgrade it for best performance? Cheers in advance.

     

     

  15. 15 hours ago, Aceriz said:

    Hey, did you manage to set this up? What you describe is what I am hoping to set up.

     

    No, I never did in the end. Just couldn't get it working right. I didn't give it an awful lot of thought after the initial effort, though.

  16. On 1/27/2020 at 4:10 PM, SavellM said:

    Aye, I'm aware. But with Nextcloud you can add an AV plugin and it scans files as they are uploaded.

    I wanted to use this feature but it requires ClamAV, which I am now running in a docker. I've seen some people use the docker from NC to run these scans but I can't figure it out.

    In relation to getting this to work with Nextcloud:

     

    If I remove the scan path setting, will that stop the automatic scanning?

    Then I plan on pointing Nextcloud at the ClamAV container in the Nextcloud settings.

     

    The hope is that the ClamAV instance sits idle until Nextcloud calls on it to scan uploads? Just setting this up now, so I've no idea if it will work. (Roughly what I'm planning to set is sketched below.)
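
    Roughly what I'm planning to set, as a rough sketch: it assumes the files_antivirus app in daemon mode, a container named nextcloud, and the ClamAV docker listening on the default clamd port 3310 (the IP is just a placeholder for my setup):

    # Enable the Antivirus app and point it at the clamd daemon in the
    # ClamAV docker instead of a local clamscan binary.
    docker exec -it nextcloud occ app:enable files_antivirus
    docker exec -it nextcloud occ config:app:set files_antivirus av_mode --value="daemon"
    docker exec -it nextcloud occ config:app:set files_antivirus av_host --value="192.168.1.18"
    docker exec -it nextcloud occ config:app:set files_antivirus av_port --value="3310"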

     

  17. 2 minutes ago, Taddeusz said:

    I still haven’t been able to get the Community Document Server app to work. The error in my log is that x2t cannot be found.  I’m fully updated.

    Same issue here. I just went with the docker and it's all working as expected; I've just left the app alone while it's early days.
