Posts posted by CyrixDX4

  1. I'll wait for 6.9.1.


    There are always unforeseen bugs that crop up on hardware, and I've learned there's no reason to rush these things.



    Thanks for all the hard work in finally bringing this release out to the masses.

  2. I'll echo others thanking you for making this beautiful set of metrics and panels. My inner monitoring child is going bonkers.


    Now, forgive me for asking these questions:


    1. Where is the install guide? I've combed through several pages and I only see the Dockers that I need to install and some config files. Is there a step-by-step guide/wiki on how to implement this wonderment?


    2. What is the JSON file for? What does it do? Where does it go?


    If I missed a page that has the install instructions and full configs, that would be wonderful. I'm a bit lost and don't want to go willy-nilly installing all the Dockers and come back with "Well, now what?"


  3. 10 minutes ago, Transient said:

    As the previous user said, just add :119 to the end of the repository name. This means to pull the one tagged 119. When left off, it will pull the one tagged latest.


    To find the tags, you can check Docker Hub. The easiest way IMO is to turn on Advanced View in Unraid (top right) then scroll down to your Unmanic container and click the little link that says By: josh5/unmanic. That'll take you over to Docker Hub and you can go to the Tags tab and see all the previous versions.


    ...only I just tried it and it looks like josh5 has since removed all tags other than latest so you may be unable to pull down 119. I'm not sure why he would do that. Maybe he didn't and there's an issue with Docker Hub at the moment?


    That's what I tried, using :VERSION, and it looks like everything is gone. Very odd.


    The only versions up are the 0.0.1-beta7 ones, and I can't pull those down for some reason.
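For anyone following along: the Tags tab mentioned above is backed by Docker Hub's public v2 tags endpoint, so you can also check what tags still exist without leaving the terminal. A minimal sketch (the endpoint URL and response shape are the Docker Hub v2 API as I understand it; `list_tags` needs network access, while `tag_names` just parses the response):

```python
import json
from urllib.request import urlopen

TAGS_URL = "https://hub.docker.com/v2/repositories/{image}/tags?page_size=100"

def tag_names(payload: dict) -> list:
    """Extract tag names from a Docker Hub v2 tags response."""
    return [r["name"] for r in payload.get("results", [])]

def list_tags(image: str) -> list:
    """Fetch the current tag list for an image, e.g. list_tags('josh5/unmanic')."""
    with urlopen(TAGS_URL.format(image=image)) as resp:
        return tag_names(json.load(resp))

# Offline example using the shape the API returns:
sample = {"results": [{"name": "latest"}, {"name": "0.0.1-beta7"}]}
print(tag_names(sample))  # ['latest', '0.0.1-beta7']
```

If a tag shows up there, appending it to the Repository field (e.g. `josh5/unmanic:119`) is all the pinning Unraid needs.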

  4. 10 minutes ago, Transient said:

    It did indeed fix the error, however it appears to have been unrelated. Now I have no errors in the log, but it still doesn't process anything. All the workers are idle even though there are several pending. If I roll back to 119 everything works again.


    Is there any information I can provide that would be useful in identifying the problem?


    I'd love to know how you specified the container version on install so that I and others can do the same.

  5. Getting the same error as others:

    [E 201021 13:43:22 web:1788] Uncaught exception GET /dashboard/?ajax=pendingTasks&format=html (
    HTTPServerRequest(protocol='http', host='xx', method='GET', uri='/dashboard/?ajax=pendingTasks&format=html', version='HTTP/1.1', remote_ip='xx')
    Traceback (most recent call last):
    File "/usr/local/lib/python3.6/dist-packages/", line 3099, in execute_sql
    cursor.execute(sql, params or ())
    sqlite3.OperationalError: database is locked
    During handling of the above exception, another exception occurred:
    Traceback (most recent call last):
    File "/usr/local/lib/python3.6/dist-packages/tornado/", line 1697, in _execute
    result = method(*self.path_args, **self.path_kwargs)
    File "/usr/local/lib/python3.6/dist-packages/unmanic/webserver/", line 59, in get
    File "/usr/local/lib/python3.6/dist-packages/unmanic/webserver/", line 77, in handle_ajax_call
    self.render("main/main-pending-tasks.html", time_now=time.time())
    File "/usr/local/lib/python3.6/dist-packages/tornado/", line 856, in render
    html = self.render_string(template_name, **kwargs)
    File "/usr/local/lib/python3.6/dist-packages/tornado/", line 1005, in render_string
    return t.generate(**namespace)
    File "/usr/local/lib/python3.6/dist-packages/tornado/", line 361, in generate
    return execute()
    File "main/", line 5, in _tt_execute
    for pending_task in handler.get_pending_tasks(): # main/main-pending-tasks.html:4
    File "/usr/local/lib/python3.6/dist-packages/unmanic/webserver/", line 103, in get_pending_tasks
    return self.foreman.task_queue.list_pending_tasks(limit)
    File "/usr/local/lib/python3.6/dist-packages/unmanic/libs/", line 171, in list_pending_tasks
    if results:
    File "/usr/local/lib/python3.6/dist-packages/", line 1987, in __len__
    File "/usr/local/lib/python3.6/dist-packages/", line 1969, in _ensure_execution
    File "/usr/local/lib/python3.6/dist-packages/", line 1886, in inner
    return method(self, database, *args, **kwargs)
    File "/usr/local/lib/python3.6/dist-packages/", line 1957, in execute
    return self._execute(database)
    File "/usr/local/lib/python3.6/dist-packages/", line 2129, in _execute
    cursor = database.execute(self)
    File "/usr/local/lib/python3.6/dist-packages/", line 3112, in execute
    return self.execute_sql(sql, params, commit=commit)
    File "/usr/local/lib/python3.6/dist-packages/", line 3106, in execute_sql
    File "/usr/local/lib/python3.6/dist-packages/", line 2873, in __exit__
    reraise(new_type, new_type(exc_value, *exc_args), traceback)
    File "/usr/local/lib/python3.6/dist-packages/", line 183, in reraise
    raise value.with_traceback(tb)
    File "/usr/local/lib/python3.6/dist-packages/", line 3099, in execute_sql
    cursor.execute(sql, params or ())
    peewee.OperationalError: database is locked

    It was working flawlessly for months.
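For context on the trace above: SQLite raises "database is locked" when one connection wants a write lock while another is holding it, and peewee surfaces that as `peewee.OperationalError`. A minimal sketch of the collision using plain `sqlite3` (the `tasks` table is made up for the demo, not Unmanic's actual schema):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(db, isolation_level=None)  # manage transactions manually
writer.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, state TEXT)")
writer.execute("BEGIN IMMEDIATE")  # take the write lock, like a busy worker
writer.execute("INSERT INTO tasks (state) VALUES ('processing')")

reader = sqlite3.connect(db, timeout=0.1)  # short busy timeout for the demo
err = None
try:
    reader.execute("INSERT INTO tasks (state) VALUES ('pending')")
except sqlite3.OperationalError as exc:
    err = exc
print(err)  # database is locked

writer.execute("COMMIT")  # release the lock...
reader.execute("INSERT INTO tasks (state) VALUES ('pending')")
reader.commit()           # ...and the same statement now succeeds
```

A longer `timeout` on the connection (sqlite3's default is 5 seconds) makes the second writer wait instead of failing, which is usually enough for a dashboard polling a busy queue.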


    I went and wiped my install because it didn't matter too much what I had already encoded.  



    Also getting this error still:

    [2020-10-21 16:42:12,454 pyinotify WARNING] Event queue overflowed.
    [W 201021 16:42:12 pyinotify:929] Event queue overflowed.
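The pyinotify warning above means the kernel dropped filesystem events because the inotify event queue limit (`fs.inotify.max_queued_events`, 16384 by default) was exceeded, which can easily happen while scanning a multi-TB library. A small sketch for checking the current limits (reads Linux procfs; falls back to the supplied default elsewhere):

```python
def inotify_limit(name: str, default: int) -> int:
    """Read an inotify limit from procfs, e.g. 'max_queued_events'
    or 'max_user_watches'; returns `default` when the file is absent."""
    try:
        with open(f"/proc/sys/fs/inotify/{name}") as fh:
            return int(fh.read())
    except OSError:
        return default

print(inotify_limit("max_queued_events", 16384))
print(inotify_limit("max_user_watches", 8192))
```

Raising the limit would be e.g. `sysctl fs.inotify.max_queued_events=65536` (plus the matching line in /etc/sysctl.conf to persist it), though whether that also cures the idle-worker symptom is a separate question.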


    Version - 0.0.1-beta7+752a414



    The workers are hit or miss on when they decide to pick up and work on a file. Unmanic is not grabbing/converting my MP4s into MKVs like it once did. Not sure what broke or why it's not force-converting them, as I have all the flags set.



    Here is the error log/output of one of the files I'm trying to convert:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x55c3374e3880] stream 0, timescale not set
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/library/xxx/Project A (1983)/Project A [1080p].mp4':
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42isomavc1
        creation_time   : 2019-12-25T02:24:21.000000Z
        title           : Project.A.1983.1080p.BluRay.H264.AC3,DD5.1
        artist          : 
        album           : 
        comment         : 
        encoder         : DVDFab
      Duration: 01:45:38.02, start: 0.000000, bitrate: 5539 kb/s
        Stream #0:0(und): Video: h264 (avc1 / 0x31637661), yuv420p(bt709), 1920x808 [SAR 1:1 DAR 240:101], 4890 kb/s, 23.98 fps, 23.98 tbr, 24k tbn, 47.95 tbc (default)
          creation_time   : 2019-12-25T02:24:21.000000Z
          encoder         : JVT/AVC Coding
        Stream #0:1(zho): Audio: ac3 (ac-3 / 0x332D6361), 48000 Hz, 5.1(side), fltp, 640 kb/s (default)
          creation_time   : 2019-12-25T02:24:21.000000Z
        Side data:
          audio service type: main
        Stream #0:2(eng): Subtitle: dvd_subtitle (mp4s / 0x7334706D), 7 kb/s (default)
          creation_time   : 2019-12-25T02:24:21.000000Z
        Stream #0:3: Video: png, rgba(pc), 640x269, 90k tbr, 90k tbn, 90k tbc (attached pic)
    Multiple -c, -codec, -acodec, -vcodec, -scodec or -dcodec options specified for stream 0, only the last option '-c:v libx265' will be used.
    Multiple -c, -codec, -acodec, -vcodec, -scodec or -dcodec options specified for stream 1, only the last option '-c:v libx265' will be used.
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> hevc (libx265))
      Stream #0:3 -> #0:1 (png (native) -> hevc (libx265))
      Stream #0:1 -> #0:2 (copy)
      Stream #0:2 -> #0:3 (dvd_subtitle (dvdsub) -> subrip (srt))
    Subtitle encoding currently only possible from text to text or bitmap to bitmap
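Two things in that log are worth flagging: the attached-picture PNG stream (#0:3) is being sent to libx265, and the bitmap `dvd_subtitle` is being mapped to SRT, which ffmpeg refuses ("only possible from text to text or bitmap to bitmap") and that aborts the job. A sketch of an argument list that sidesteps both, using the stream indices from the log (a hand-written command for illustration, not what Unmanic actually generates):

```python
src = "/library/xxx/Project A (1983)/Project A [1080p].mp4"

cmd = [
    "ffmpeg", "-i", src,
    "-map", "0:0",   # h264 video -> re-encode
    "-map", "0:1",   # ac3 audio  -> copy
    "-map", "0:2",   # dvd_subtitle -> copy (bitmap; can't become SRT)
    # stream 0:3 (attached PNG) deliberately not mapped
    "-c:v", "libx265",
    "-c:a", "copy",
    "-c:s", "copy",
    "out.mkv",
]
print(" ".join(cmd))
```

The per-stream specifiers (`-c:v`, `-c:a`, `-c:s`) also avoid the "Multiple -c ... options specified" warning at the top of the log, where a bare `-c:v libx265` was being applied to every stream.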


    I turned on debug and reduced my scanning to genres instead of my entire multi-TB movie directory. Is there an upper limit on files where Python says "Too many files, can't process"?





  6. Any way to 'force' a build through? There's a nasty bug affecting many private trackers that aren't reading cookie data; it was patched in the last 24 hours.


    I could wait till tomorrow or hope you can release a manual build today.  


    Do appreciate all your work.

  7. 1 hour ago, bonienl said:

    There is no GUI support for connecting a container to multiple networks.

    It is possible using CLI and the "docker network connect" command.




    What's the point, then, of the Devices field in Additional Settings? That doesn't make any sense.


    I didn't want to have to go the route of manual entry, if that's the only way....
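For the record, the CLI route bonienl mentions can at least be scripted rather than typed each time. A minimal wrapper (dry-run by default so it's safe to show; the container and network names are from this thread's Syncthing example):

```python
import subprocess

def connect_network(container: str, network: str, dry_run: bool = True):
    """Attach a running container to an additional Docker network,
    equivalent to `docker network connect <network> <container>`."""
    cmd = ["docker", "network", "connect", network, container]
    if dry_run:
        return cmd          # inspect the command without touching Docker
    subprocess.run(cmd, check=True)
    return cmd

print(" ".join(connect_network("syncthing", "br4")))  # docker network connect br4 syncthing
```

Note that re-creating the container from the GUI produces a new container, so the extra network has to be reconnected afterwards, e.g. from a user script.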

  8. How do I add additional network interfaces to my docker containers? 


    Use Case: Syncthing can be managed over one port while streaming replication over another. I want all replication over my 40Gb network instead of my 1Gb network.


    I've searched all over this forum but I'm still unsure where to put this. Adding a device doesn't work, as the container fails to recognize my bridge when I tell it to use "br4".



  9. On 3/4/2019 at 6:15 PM, Siren said:



    Out of curiosity: have there been any posts/reports of 40GbE connections working with Unraid? If not, I guess I might be the 1st 😁



    Keep me posted; I just dropped 40GbE into all my servers but haven't had a chance to mess with them yet. Upgraded from 10GbE.

  10. 4 minutes ago, charlescc1000 said:

    This is a great write-up and should work well for many people with less than 5TB of data.


    I previously used CloudBerry but quickly discovered that the backup software has a maximum of 5TB of data that it can manage unless you purchase the enterprise edition for $300.


    So this is an important consideration, as many Unraid users probably have far more than 5TB of data they would like to back up.

    Just checked:


    5 TB Storage Limits [PRO]

    The data volume you can manage with CloudBerry Backup is 5 TB in the PRO version and 200 GB in the freeware version. CloudBerry Lab doesn't offer storage; you need to buy it from storage providers separately.



  11. Long-time user, returning poster to the new boards; my old account was tied to an email I don't have/use anymore.


    The question at hand: how can I transfer all my license keys to my new email address, given that I no longer have access to my old email? I do have all my Unraid systems running and can validate them.


    Yes, I should've changed my email on file before I hit the DELETE button, but this was one place I must have missed while purging an account I should've gotten rid of years ago.
