Jonwork88

Members · 12 posts

Posts posted by Jonwork88

  1. Thanks for making the "Slicer" dockers - I'm using the OrcaSlicer one. I'm putting it on my server so we can access it remotely.

     

    We just got a new 3D printer (our first one, so I'm pretty new at this), and when we do a bed mesh it won't display the mesh; I only get the message "Sorry, your browser does not support WebGL". I'm running Chrome, and it does work when I run OrcaSlicer natively on my computer.

     

    I don't see anything referencing WebGL in either the docker log or the OrcaSlicer logfile in AppData.

     

    Any idea?

    Thanks! Jon
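
    For anyone searching later, a quick way to check the container log for WebGL mentions from the Unraid terminal (the container name below is an assumption - use whatever yours is called):

        docker logs orcaslicer 2>&1 | grep -i webgl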

  2. This looks like just what I need to clean up old versions of family videos. Trying it out, I get the error "cannot find shared FFmpeg libraries". I want to make sure it's not a problem with my install - is ffmpeg/ffprobe bundled into the container, or do I need to install it separately? If I need to install it separately, should I be putting it in the "/mnt/user/appdata/VideoDuplicateFinder" directory?
    Thanks!
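
    In case it helps: a quick way to check whether ffmpeg is already inside the container is to run it from the Unraid terminal (the container name below is a guess based on my appdata path - use whatever yours is called):

        docker exec -it VideoDuplicateFinder ffmpeg -version
        docker exec -it VideoDuplicateFinder ffprobe -version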

  3. On 4/13/2020 at 1:59 PM, Squid said:

    You probably don't have a mapping added for the unassigned device in the template.  You have to map something like /UNASSIGNED to /mnt/disks

     

    On 4/13/2020 at 2:39 PM, trurl said:

    And make sure that mapping is read/write slave.

     

     

    This worked like a charm for connecting to remote shares: I edited the Binhex-Krusader container to add a new "Path" called /REMOTES on the container, located at /mnt/remotes/ on Unraid (Host Path), and now I can see and interact with the remote shares. (The equivalent volume flag is sketched at the end of this post.)

     

    Thanks!  Jon

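    For reference, the template "Path" above corresponds to a docker volume flag like the one below; "slave" is the read/write slave propagation trurl mentioned:

        # volume flag equivalent of the template "Path" entry above
        -v /mnt/remotes/:/REMOTES:rw,slave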
  4. On 1/21/2023 at 5:23 PM, BeardElk said:

    I've got the same, and after "migration" my hardware transcode doesn't work anymore. 

    I tried to downgrade to a previous version but hw transcode for me is totally broken right now.
    This could be something else entirely, i.e. user error or something; just wondering if anyone else has experienced this.

    I let the migration run its course and Plex starts up just fine, btw. Just give it time, depending on your library.

     

    I am also having a problem with Hardware Transcoding no longer working. I am on Unraid 6.11.5 and my Plex docker (Binhex-Plex Pass) is on the latest tag. I have an Nvidia P2000 and I am running the most recent driver set (525.85.05). I don't see any errors in the docker logs.

     

    I'm not sure how long HW transcoding on my P2000 has been broken...

     

    Any suggestions on how to move forward?  Let me know if there is any additional information you need.

     

    Appreciate any help provided!
    Thanks!
    Jon
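
    P.S. For anyone triaging the same thing, a basic sanity check is whether the GPU is visible on the host and inside the container (the container name below is an assumption - use your own):

        nvidia-smi                                  # host: driver loaded, P2000 listed?
        docker exec -it binhex-plexpass nvidia-smi  # container: GPU visible inside?
        # then start a transcode and look for a Plex process in the nvidia-smi output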

     

  5. On 10/9/2022 at 8:53 AM, chris_netsmart said:

     

     

    The reason I was on 6.9 was that I was not able to pass my ZigBee device through to my Home Assistant VM, and now that issue has returned. 😞

     

    Zigbee error within HA

     

    Error setting up entry ConBee II - /dev/serial/by-id/usb-dresden_elektronik_ingenieurtechnik_GmbH_ConBee_II_DE2496880-if00, s/n: DE2496880 - dresden elektronik ingenieurtechnik GmbH - 1CF1:0030 for zha

    Traceback (most recent call last):
      File "/usr/local/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
        return fut.result()
    asyncio.exceptions.CancelledError

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/usr/src/homeassistant/homeassistant/config_entries.py", line 357, in async_setup
        result = await component.async_setup_entry(hass, self)
      File "/usr/src/homeassistant/homeassistant/components/zha/__init__.py", line 112, in async_setup_entry
        await zha_gateway.async_initialize()
      File "/usr/src/homeassistant/homeassistant/components/zha/core/gateway.py", line 185, in async_initialize
        raise exc
      File "/usr/src/homeassistant/homeassistant/components/zha/core/gateway.py", line 172, in async_initialize
        self.application_controller = await app_controller_cls.new(
      File "/usr/local/lib/python3.10/site-packages/zigpy/application.py", line 138, in new
        await app.startup(auto_form=auto_form)
      File "/usr/local/lib/python3.10/site-packages/zigpy/application.py", line 118, in startup
        await self.connect()
      File "/usr/local/lib/python3.10/site-packages/zigpy_deconz/zigbee/application.py", line 92, in connect
        self.version = await api.version()
      File "/usr/local/lib/python3.10/site-packages/zigpy_deconz/api.py", line 451, in version
        (self._proto_ver,) = await self.read_parameter(
      File "/usr/local/lib/python3.10/site-packages/zigpy_deconz/api.py", line 416, in read_parameter
        r = await self._command(Command.read_parameter, 1 + len(data), param, data)
      File "/usr/local/lib/python3.10/site-packages/zigpy_deconz/api.py", line 301, in _command
        return await asyncio.wait_for(fut, timeout=COMMAND_TIMEOUT)
      File "/usr/local/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
        raise exceptions.TimeoutError() from exc
    asyncio.exceptions.TimeoutError

     

    time to google the issue

    Hey Chris_Netsmart - were you ever able to fix this? I seem to have run into the same issue: I updated from 6.9.3 to 6.11.5 and the ConBee II in my HA VM appears to be broken.

     

    Been googling and trying things for a few hours with no joy.

     

    Thanks! Jon
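
    Edit: for anyone else hitting this, the first things I'm checking are whether the stick still enumerates on the Unraid host and whether the VM definition still references it (the VM name is from my setup):

        ls -l /dev/serial/by-id/                       # does the ConBee II still show up?
        virsh dumpxml "Home Assistant" | grep -i usb   # is it still passed through to the VM?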

  6. On 10/16/2021 at 7:37 AM, Greygoose said:

    Just set up both Redis and Paperless-ng. Seems to be working fine, except Redis shows this error?

     

    Is this normal?

     

     


     

    I am having the same issue - the Redis container keeps restarting as part of this. When I try to follow the directions and run the command 'sysctl vm.overcommit_memory=1' in the container terminal, I get the response 'sysctl: setting key "vm.overcommit_memory": Read-only file system'.

     

    I also can't find the files for Redis within my "AppData" share, so I can't modify /etc/sysctl.conf either.

     

    Any suggestions?

     

    Thanks!
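
    Edit: from what I've read, vm.overcommit_memory is a kernel setting, so it has to be set on the Unraid host rather than inside the container (which would explain the read-only error):

        sysctl -w vm.overcommit_memory=1   # run on the Unraid host, not in the container terminal
        # to persist across reboots, add the same line to /boot/config/go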

  7. For those who updated to V0.9.2 from 0.9.1 and found their config.yml file was no longer valid....

     

    In V0.9.1, I didn't have RTMP in my config.yml and I would get the error "Camera (camera name) has rtmp enabled, but rtmp is not assigned to an input.", but it would still work fine.

     

    With the change to V0.9.2, it no longer works - Frigate lists it as a configuration error and shuts down the docker.

     

    Under each camera section, I added the lines below, which seemed to fix it. I no longer get the error in my logs and Frigate is working fine again. "rtmp:" should line up with your "ffmpeg", "detect", "record", etc. You'll need these lines for every camera (see the placement sketch at the end of this post).

        rtmp:
          enabled: false 

     

    Here is the error message I was getting in my logs - again, solved with the YAML above.

     

    Camera deck has rtmp enabled, but rtmp is not assigned to an input. (type=value_error)
    *************************************************************
    *** End Config Validation Errors ***
    *************************************************************
    [cmd] python3 exited 1
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] waiting for services.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.

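    For placement context, this is roughly where the block sits in a camera section ("deck" is the camera name from my log; the other keys just stand in for whatever you already have):

        cameras:
          deck:
            ffmpeg:
              # ...your existing ffmpeg config...
            detect:
              # ...
            record:
              # ...
            rtmp:            # same indent level as ffmpeg/detect/record
              enabled: false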
  8. 4 hours ago, JonathanM said:

    A container is more than just the application; think of it as a miniature virtual machine. It has an entire operating system, albeit only the pieces essential to supporting the specific application. The tag means the application itself will not be updated, only the supporting OS files inside the container.

     

    As an aside, one of the reasons docker containers can be so efficient is that they share common pieces between them. So if you have a bunch of LSIO containers using the same internal OS, it's not duplicated, no matter how many different containers use those same basic pieces. Running multiples of the same container with different appdata uses almost zero additional resources in the docker image.

     

    @JonathanM - thanks for the explanation. I recognize that you can't guarantee anything, but should this be a safe update to make? This is all I could find about it: https://github.com/linuxserver/docker-unifi-controller/releases

     

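    (For anyone curious about the layer sharing JonathanM describes, you can list an image's layers on the host; the tag below is the pinned one from my setup:)

        docker history linuxserver/unifi-controller:version-6.4.54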

  9. I have also been on 6.4.54 for a few weeks and everything is fine. 

     

    However, for the first time I can remember, Unraid is telling me there is a docker update ready - I thought I wouldn't receive these if I set a tag?  Or is it possible this is an update to 6.5.54?

     

    Here is the tag I am using on my docker:  linuxserver/unifi-controller:version-6.4.54

     

    Thanks in advance for any feedback.

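    Edit: per the layer explanation above, a pinned version tag can still receive rebuilt images with OS updates, which would explain the notice. One way to check is to compare digests (tag from my template):

        docker inspect --format '{{.RepoDigests}}' linuxserver/unifi-controller:version-6.4.54
        docker pull linuxserver/unifi-controller:version-6.4.54   # fetches the rebuild if the digest differs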

  10. On 6/15/2021 at 10:42 PM, salotz said:

    Nothing wrong with your container, but if you use the default path in Syncthing itself when adding folders, it will be at '~' (AKA '/home/nobody') and will save the data into the container image, filling up docker.img really quickly. I suggest changing the defaults so that the paths line up with the mount to the host FS (whether that is making the mount default to `/home/nobody` instead of `/media`, or changing the Syncthing config). In the end it's not a big deal for someone who knows what they are doing, but it did take me some time to figure out, as I was hoping the defaults would have sane behavior and I wouldn't have to fiddle.

    Thanks for making this :)

    For newbs trying to get this to work: run the docker with default settings, then change this in Syncthing, and make sure that if you manually specify paths, you use something that starts with `/media`.

     


     

    @Salotz - Thanks for the tip - I'm a newbie and this specific issue is killing me: I have been unable to get a synced folder on my desktop to save to shares on my array (and vice versa - share to desktop). So just setting the default path to /media would let files from a synced folder on my desktop sync to a share on my array?

     

    For example, a folder at C:\Users\Name\Documents\sync on my desktop that I sync with my Unraid server should save to /mnt/user/sync? (See the sketch at the end of this post.)

     

    Thanks Salotz... and Binhex - love your containers, thanks for everything you do.

     

    Thanks!
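
    Edit: writing out the mapping helped me make sense of it, so here it is for the next newbie (this assumes the template default of /media in the container pointing at /mnt/user on the host, and that your container name matches mine - check your own settings):

        docker exec -it binhex-syncthing ls /media/sync   # the folder as the container sees it
        ls /mnt/user/sync                                 # the same files on the array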

  11. I also struggle with how to properly back up a VM. The CA Backup/Restore AppData plugin is great for Dockers; I would love to see something similar for VMs or, as Newbie asked, a clear guide on which files/folders to back up for a proper config/settings backup, including instructions on how to restore if needed.

     

    I recently had some errors on my cache drive that I am troubleshooting. I have backed up my appdata, but I would like to make sure my Home Assistant VM is recoverable as well (my current manual approach is sketched below).

     

    Really appreciate everyone's contributions by the way - love Unraid!!
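
    In the meantime, this is the manual approach I've pieced together - only a sketch, and the VM name and paths are from my setup (stop the VM before copying its vdisk):

        virsh dumpxml "Home Assistant" > /mnt/user/backups/HomeAssistant.xml   # save the VM definition
        cp -a "/mnt/user/domains/Home Assistant" /mnt/user/backups/            # copy the vdisk folder
        # to restore: copy the vdisk back, then run `virsh define HomeAssistant.xml`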