dertbv


Posts posted by dertbv

  1. On 9/1/2021 at 10:56 PM, Josh.5 said:

    There is also a plugin specifically for remuxing video files. Search for "remux" or "container" in the plugin installer.

    I have the remux plugin installed, however it is not changing the file name.

     

    2021-09-02T16:19:40:INFO:Unmanic.TaskHandler - [FORMATTED] - Adding inotify job to queue - /library/xxx/test.mp4

    2021-09-02T16:19:41:INFO:Unmanic.Foreman - [FORMATTED] - Processing item - /library/xxx/test.mp4

    2021-09-02T16:19:41:INFO:Unmanic.Worker-W1 - [FORMATTED] - Picked up job - /library/xxx/test.mp4

    2021-09-02T16:19:45:INFO:Unmanic.Worker-W1 - [FORMATTED] - Successfully ran worker process 'video_remuxer' on file '/library/xxx/test.mp4'

    2021-09-02T16:19:45:INFO:Unmanic.Worker-W1 - [FORMATTED] - Successfully converted file '/library/xxx/test.mp4'

    2021-09-02T16:19:45:INFO:Unmanic.Worker-W1 - [FORMATTED] - Moving final cache file from '/library/xxx/test.mp4' to '/library/xxx/test-1630613985.7216587.mp4'

    2021-09-02T16:19:45:INFO:Unmanic.EventProcessor - [FORMATTED] - MOVED_TO event detected: - /library/xxx/test.mp4

    2021-09-02T16:19:45:INFO:Unmanic.Worker-W1 - [FORMATTED] - Finished job - /library/xxx/test.mp4

    2021-09-02T16:19:45:ERROR:Unmanic.FileTest - [FORMATTED] - Exception while carrying out plugin runner on library management file test 'ignore_under_size' - [Errno 2] No such file or directory: '/library/xxx/test.mp4'

    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/unmanic/libs/filetest.py", line 135, in should_file_be_added_to_task_list
        plugin_runner(data)
      File "/config/.unmanic/plugins/ignore_under_size/plugin.py", line 67, in on_library_management_file_test
        if check_file_size_under_max_file_size(data.get('path'), minimum_file_size):
      File "/config/.unmanic/plugins/ignore_under_size/plugin.py", line 45, in check_file_size_under_max_file_size
        file_stats = os.stat(os.path.join(path))
    FileNotFoundError: [Errno 2] No such file or directory: '/library/xxx/test.mp4'

    2021-09-02T16:19:46:INFO:Unmanic.PostProcessor - [FORMATTED] - Post-processing task - /library/xxx/test.mp4

    2021-09-02T16:19:46:INFO:Unmanic.PostProcessor - [FORMATTED] - Copying file /library/xxx/test-1630613985.7216587.mp4 --> /library/xxx/test.mp4

    2021-09-02T16:19:54:INFO:Unmanic.EventProcessor - [FORMATTED] - CLOSE_WRITE event detected: - /library/xxx/test.mp4

    2021-09-02T16:19:54:INFO:Unmanic.TaskHandler - [FORMATTED] - Skipping inotify job already in the queue - /library/xxx/test.mp4

     

     

    Side note: since this is copying the file with a new name rather than overwriting it, it runs over and over, creating copy after copy.

     

    Screen Shot 2021-09-02 at 16.32.41.png
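    From the traceback above, the ignore_under_size plugin calls os.stat() on a path the worker has already renamed, which raises FileNotFoundError. A minimal sketch of a defensive version of that size check (the function name is taken from the traceback, but the early-return behaviour is my assumption, not the plugin's actual code):

    ```python
    import os

    def check_file_size_under_max_file_size(path, minimum_file_size):
        """Return True if the file is smaller than the configured minimum.

        Hypothetical defensive rewrite of the check in the traceback: a
        worker may rename the file between the inotify event and this
        test, so a missing file is treated as "nothing to ignore" rather
        than raising FileNotFoundError.
        """
        if not os.path.exists(path):
            return False
        return os.stat(path).st_size < minimum_file_size
    ```

    This only avoids the crash; whether the file test should instead re-run against the renamed file is a design question for the plugin author.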

  2. I love the new look and the way the plugins are working.  I have run into a problem where I chose the output format on the h264 NVIDIA plugin.  I am not seeing a way to make it default to MKV files.  I have a few MP4 files that it appears to work on, but they are still MP4 when completed. 

     

    A nice-to-have would be the ability to unpause all workers at the same time rather than doing them one at a time. 

  3. 2 hours ago, Squid said:

    If you were running the appdata backup plugin with the libvirt backup option enabled, there's a copy in your backup share

    I thought I had it checked, but apparently it was not.  It looks like it is the only file that I do not have a backup of. :(  I back up the USB once a day, but not that file. DOH!

     

  4. So I made a stupid mistake deleting the file and lost all of my VMs.  I still have the img files and am looking for a way to import them back in.  Is this possible?  I have tried to set up a few by choosing the appropriate OS and making the changes in the GUI, linking the img file as the hard drive.  However, I get a start shell screen when I launch them.  Any ideas?

  5. Has anyone had an issue where Unmanic is running correctly for a short period of time and then it stops processing files?  It shows them in the queue to be processed, but it is just not grabbing them.  If I restart the container it will process them again for a little while and then stop again at some point.

    Screen Shot 2021-04-17 at 7.40.14 AM.jpg

  6. I have been using the staging branch for a while.  Just did a pull and I am not getting the dashboard.  Getting:

    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/tornado/web.py", line 1697, in _execute
        result = method(*self.path_args, **self.path_kwargs)
      File "/usr/local/lib/python3.6/dist-packages/unmanic/webserver/main.py", line 64, in get
        self.render("main/main.html", historic_task_list=self.historic_task_list, time_now=time.time(), session=self.session)
      File "/usr/local/lib/python3.6/dist-packages/tornado/web.py", line 856, in render
        html = self.render_string(template_name, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/tornado/web.py", line 1005, in render_string
        return t.generate(**namespace)
      File "/usr/local/lib/python3.6/dist-packages/tornado/template.py", line 361, in generate
        return execute()
      File "main/main_html.generated.py", line 34, in _tt_execute
        elif session.level > 0:  # page_layout.html:119
    TypeError: '>' not supported between instances of 'NoneType' and 'int'

  7. My current setup is:

    Supermicro X10SRH-CF Version 1.00A - s/n: NM163S019478

    CPU: Intel® Xeon® E5-2673 v3 @ 2.40GHz with 32 GB RAM

    GTX 1660 Ti for video conversion

     

    I am thinking of upgrading to either 

    Intel Core i7-10700K

    ASUS ROG STRIX Z490-E

    or

    Intel Core i9-10850K

    ASUS ROG STRIX Z490-E

     

    Which would be better, and do you see any known issues?

  8. I am showing

    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/unmanic/libs/ffmpeg.py", line 191, in set_file_in
        self.file_in['file_probe'] = self.file_probe(vid_file_path)
      File "/usr/local/lib/python3.6/dist-packages/unmanic/libs/ffmpeg.py", line 156, in file_probe
        probe_info = unffmpeg.Info().file_probe(vid_file_path)
      File "/usr/local/lib/python3.6/dist-packages/unmanic/libs/unffmpeg/info.py", line 55, in file_probe
        return cli.ffprobe_file(vid_file_path)
      File "/usr/local/lib/python3.6/dist-packages/unmanic/libs/unffmpeg/lib/cli.py", line 67, in ffprobe_file
        raise FFProbeError(vid_file_path, info)

     

    in my log file.  What can I do to resolve this?

  9. Any idea how I can resolve this error?

     

    "2020-08-01T22:00:59:ERROR:Unmanic.Worker-2 - [FORMATTED] - Exception in processing job with Worker-2: - 'utf-8' codec can't decode byte 0xb2 in position 24: invalid start byte"
     

    This was caused by the audio.  I had to change the settings just to pass through.
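    For the record, that error means some process output (here, apparently the audio stream's metadata) contained a byte that is not valid UTF-8, so a strict decode blew up. A minimal sketch of the failure mode and the usual Python-side mitigation, a lossy decode (the byte string is made up for illustration):

    ```python
    raw = b'Stereo \xb2 channel'  # hypothetical bytes; 0xb2 is not valid UTF-8

    # raw.decode('utf-8') would raise UnicodeDecodeError, like the log shows.
    # errors='replace' substitutes U+FFFD for each undecodable byte instead.
    text = raw.decode('utf-8', errors='replace')
    print(text)
    ```

    Whether Unmanic's worker can be patched to decode this way is up to the developer; changing the audio settings to passthrough simply avoided the code path that hit the bad bytes.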

  10. 2 minutes ago, Josh.5 said:
    4 hours ago, dertbv said:
    When opening the webpage, it takes a while to populate the see all records view.  Then it takes more time to actually view details on a completed item.  Choosing failures means a massive wait time. 

    The issue is the page being loaded does not paginate the query. And the query is not optimised as an SQL dataset. Switching to mysql will not improve page load times. The orm calls for those pages need to be optimised. This is something already on my to-do list and will probably be a few hours of work to do properly when I get the time. No ETA ATM.

    Not complaining, thank you for all that you do! I just turned on the Nvidia piece and have completed more in 10 days than I have in the last year.  

     

  11. 2 hours ago, chiefo said:

    I've seen something similar. Mine has gotten to the point twice where the page will never load when i try to view all records. I've had to blow away the database to restore functionality to that page. I can almost never get it to open an individual record even after starting with a fresh database. The entire tab locks up

    If you blow away the entire database, won't it go through every file that you have ever done?

     

  12. On 7/24/2020 at 3:05 PM, Josh.5 said:

    I'd be interested to know how it is struggling. Sqlite tables can handle millions of table entries without issue. If something is struggling, it's how the query is written and needs to be optimised.
    It is my opinion that using mysql for this app is a waste of time, but I'm happy to be wrong.

    When opening the webpage, it takes a while to populate the see all records view.  Then it takes more time to actually view details on a completed item.  Choosing failures means a massive wait time. 

  13. Currently running on ESXi and wanting to move over to bare metal.  Running 5 VMs that would need to be converted over.  Have about a dozen Dockers including Plex with about 6 concurrent streams.  Can one of the Unraid masters give me feedback on this list?

     

    Intel Core i9-9900K 3.6 GHz 8-Core Processor

    Noctua NH-U12S 55 CFM CPU Cooler

    Asus TUF Z390M-PRO GAMING (WI-FI) Micro ATX LGA1151 Motherboard

    Crucial Ballistix Sport LT 64 GB (4 x 16 GB) DDR4-3200 Memory

    1 Samsung 860 Evo 1 TB 2.5" Solid State Drive

    5 Toshiba N300 10 TB 3.5" 7200RPM Internal Hard Drive

    MSI GeForce GTX 1660 Ti 6 GB GAMING Video Card

    Fractal Design Node 804 MicroATX Mid Tower Case

    SeaSonic FOCUS Plus Platinum 550 W 80+ Platinum Certified Fully Modular Power Supply

     

    thanks

     

  14. Has anyone been able to get this to work running Unraid as a VM in ESXi?  I have a 1660 Ti card that I installed, put in passthrough mode, and added to Unraid.  It boots fine, but I am getting "Unable to determine the device handle for GPU 0000:0B:00.0: Unknown Error".  Researching this suggests editing the VM and adding hypervisor.cpuid.v0 = "FALSE", but then Unraid boots extremely slowly and then fails to fully boot.  Anyone have any pointers?

  15. It is using the same controller the Samsung drive was using when it was working.

     

    I assume that these two directories are on the cache drive?

    sudo fstrim -a -v
    /var/lib/docker: 11.9 GiB (12789510144 bytes) trimmed
    /etc/libvirt: 921.6 MiB (966324224 bytes) trimmed
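    One way to confirm that assumption is to ask df which mounted filesystem each directory actually resolves to; if both show the cache device, fstrim covered them. A small sketch (the paths come from the fstrim output above; whether they exist depends on the host):

    ```shell
    # Print the backing device and mount point for each path, or note
    # that the path does not exist on this host.
    for d in /var/lib/docker /etc/libvirt; do
      if out=$(df -P "$d" 2>/dev/null); then
        # Second line of df -P output: device in column 1, mount point in column 6.
        echo "$out" | awk -v p="$d" 'NR==2 {print p, "->", $1, "mounted on", $6}'
      else
        echo "$d: not present on this host"
      fi
    done
    ```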
     

  16. Finally swapped out my cache drive with a Samsung 1 TB 860.  Trim was working, however I started to have issues with the drive.  It would grind to a halt once it got to 180 GB.  I decided to exchange it for a Crucial MX500 1 TB.  It seems to be doing much better, however when I run the trim command I get "fstrim: /mnt/cache: the discard operation is not supported".  Any ideas what's going on?

  17. On 7/18/2018 at 8:30 AM, JustinAiken said:

    After updating to the new version yesterday, I get this:

     

    
    ::: Testing DNSmasq config: ::: Testing lighttpd config: Syntax OK
    ::: All config checks passed, starting ...
    ::: Docker start setup complete
    [✗] DNS resolution is currently unavailable
    [cont-init.d] 20-start.sh: exited 1.
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] syncing disks.
    [cont-init.d] 20-start.sh: exited 1.
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] syncing disks.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.
    

    And it doesn't stay up.  DNS is set to use 1.1.1.1

    Any headway on this?  I am showing the same thing.