Posts posted by tjb_altf4

  1. 55 minutes ago, PC Services said:

    Does anyone have the URL for the plugin?

     

    I can't find it on the app store anymore.

    The WireGuard plugin has been merged into the Unraid OS as of 6.10.0; it can be accessed at Settings > VPN Manager.

  2. 1 hour ago, andber said:

    I am also looking for this feature.
    I want ONE of my 12 Docker containers to never update; for the others, auto-update is fine (with or without pinning a tag, latest, a specific version, etc.).
    Would it be possible to build in an auto-update setting somewhere, analogous to autostart, or perhaps in the advanced view, with auto-update on or off by default depending on the installation?

    The CA Auto Update Applications plugin can set selected (or all) plugins and Docker containers to auto-update.

  3. 1 hour ago, Victor90 said:

    for some reason each has ~50GB worth of data on it, for no reason known to me?

    I'm assuming it's been formatted with XFS; that space is overhead from the filesystem itself.

     

    1 hour ago, Victor90 said:

    Currently my first-time parity sync on empty drives has been running for the past 2 hours, and Unraid estimates it will keep running for the next 10 hours. Why is it this long, and is it always this long? Write speeds are ~170 MB per second.

    That's a fairly normal speed for parity; it will go up and down at various stages of the check.

    For comparison, my array made up of 10TB drives takes about 20 hours to check parity.

     

    1 hour ago, Victor90 said:

    Does this mean my next parity sync, once I add all my existing data (~9TB worth), will take something like 7 days?

    The time a parity check takes is primarily determined by the capacity of the largest disk in the array; whether the drives are full or empty makes no difference, as the system reads every disk in full to verify parity is correct.


    @SpaceInvaderOne has a great video explaining how parity works in Unraid if you'd like to learn more.
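    As a rough sanity check on those estimates: the expected duration is approximately the capacity of the largest disk divided by the average sustained read speed. Here is a minimal back-of-the-envelope sketch in Python; the capacity and speed figures are illustrative assumptions, and real throughput varies across the disk:

    # Rough parity check duration: capacity of the largest disk / average read speed.
    # The numbers below are illustrative assumptions, not measurements.
    largest_disk_tb = 10       # capacity of the largest array disk, in TB
    avg_speed_mb_s = 140       # assumed average sustained speed over the whole disk, in MB/s

    seconds = (largest_disk_tb * 1_000_000) / avg_speed_mb_s   # TB -> MB, then divide by MB/s
    print(f"~{seconds / 3600:.1f} hours")                      # ~19.8 hours for these numbers

    Which lines up with the roughly 20 hours mentioned above for a 10TB array.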

     

  4. 1 hour ago, André Groß said:

    Hi, 

     

    It would be nice if we could get segmentation in the VM panel.

    That way it would be easier to sort VMs into different folders.

     

    Thank you.

    The Docker Folder plugin also works with VMs, which can achieve this effect.

     

  5. 4 hours ago, BigDanT said:

    I'm having loads of malformed DB issues across my containers, across different drives, and I never had this issue for years on 6.9.2.

    If that happens to be Sonarr/Radarr/Lidarr, that is an issue relating to failed database schema updates that many have faced regardless of OS, or whether running in Docker or on bare metal.

     

    The only instability I've seen in 6.10 was that it seemed to change/reset the default Docker network to macvlan, which I promptly changed back to ipvlan.

    DBs can also corrupt if you temporarily run out of space on the drive they're stored on.
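    If you want to confirm whether a particular application database really is corrupted, SQLite's built-in integrity check is a quick test. A minimal sketch in Python, assuming an example Sonarr-style path under appdata (stop the container first, and adjust the path to your own setup):

    import sqlite3

    # Assumed example path; point this at the app's .db file in your appdata share.
    db_path = "/mnt/user/appdata/sonarr/sonarr.db"

    conn = sqlite3.connect(db_path)
    result = conn.execute("PRAGMA integrity_check").fetchall()
    conn.close()

    print(result)   # [('ok',)] means the database passed; anything else lists the problems found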

  6. 7 hours ago, Squid said:

    As of today, since 6.11.0 stable is imminent, and I don't believe that this plugin will get updated, I have marked this plugin as being incompatible with > 6.10.3

     

    If you require any packages which you may have been installing via this, you will need to do your own package management.

     

    Note that Perl is now included in the base OS once 6.11.0 is released.

    Will plugins that need manual package resolution be marked as incompatible?

  7. 7 hours ago, Caldorian said:

    Just throwing my experience out there for people: I found that after I upgraded from 6.9.2 to 6.10.3, most of my binhex container /config folders were set to root:root and refused to run. The only ones set to nobody:users were the ones I'd spun up in the last few months (the others are in the range of a couple of years old now). Things seem to be running normally now on 6.10.3 after I chowned them all to nobody:users.

     

    So this was probably related to something that's changed in these containers over the couple years, and some new docker permission/settings with 6.10 that then caused things to flip out.

    For binhex's dockers, you can delete the perms.txt file in the config folder; on the next start, the container will reapply the correct permissions.
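    If you would rather fix ownership directly, or script it across several containers, the chown from the quoted post can be automated. A minimal sketch in Python, assuming the usual Unraid IDs of nobody = UID 99 and users = GID 100, and an example appdata path; verify both against your own system before running:

    import os

    # Assumed example path and the usual Unraid IDs (nobody = 99, users = 100).
    config_root = "/mnt/user/appdata/binhex-sonarr"
    uid, gid = 99, 100

    os.chown(config_root, uid, gid)
    for dirpath, dirnames, filenames in os.walk(config_root):
        for name in dirnames + filenames:
            os.chown(os.path.join(dirpath, name), uid, gid)   # recursively apply nobody:users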

  8. 13 hours ago, takkischitt said:

     

    I'm not having much luck here...

     


     

    I also tried 'http://localhost:8989' but when pressing the test button, it just ran and never completed and I had to cancel out of the settings.

    localhost won't work for Docker; use IP:PORT instead.

     

    EDIT: sorry, I just scrolled up and saw it's running through deluge-vpn; you'll need additional configuration, which binhex will help with.
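    A quick way to confirm the IP:PORT you entered is reachable at all is a plain TCP connection test. A minimal sketch in Python; the address and port are illustrative assumptions, so substitute your Unraid server's LAN IP and the container's mapped port:

    import socket

    # Illustrative values only: use your server's LAN IP and the container's port.
    host, port = "192.168.1.10", 8989

    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{host}:{port} is reachable")
    except OSError as exc:
        print(f"{host}:{port} is NOT reachable: {exc}")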

  9.  

    On 5/12/2022 at 11:48 PM, calvolson said:

    The shim interface misconfiguration comment looks to be in the right direction. When I disable "Host access to custom networks" in the Docker settings, this issue stops for me. Running 6.10.0 rc8.

     

    I wonder if there are issues due to the shim network itself being macvlan, as noted in the help section, which has already been known to cause crashes for some (it certainly has for me since moving to 6.10).



     

  10. 56 minutes ago, MegaBlindy said:

    I'm experiencing this issue where Lidarr is pinning my CPU to 100%, but it's not showing as doing anything in the web app. This is what it looks like in htop:

    [htop screenshot: cpumax.png]

    fpcalc is a fingerprinting app, so I think Lidarr is scanning your library trying to identify music.

     

    I would use the CPU pinning options in the Docker template (toggle the advanced view) and give it 2-4 vCPUs; that way it won't cripple your server.
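    For illustration, the template's pinning comes down to restricting which cores the container's processes may run on. The same idea can be seen at the OS level with Python's standard library on Linux; the core numbers here are assumptions, and on Unraid you would normally just set this in the Docker template rather than in code:

    import os

    # Illustrative only: restrict the current process to two cores (core numbers are assumptions).
    # Unraid's Docker template pinning applies an equivalent restriction to the whole container.
    os.sched_setaffinity(0, {2, 3})     # 0 means "this process"
    print(os.sched_getaffinity(0))      # -> {2, 3}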

  11. Just now, Ruato said:

    From the 6.10.0 summary of changes: "Added ability to schedule pool 'balance' and 'scrub' operations and calculate whether a full balance is recommended."

     

    How can I set up the above operations? I can't find the related settings under Settings -> Scheduler.

     

    Thanks!

     

    It's done on a per-disk/pool basis from each one's individual settings page, which you can reach from the Main tab.

  12. I see now the available legacy driver version was updated from v470.94 to v470.129.06... is there a way we can lock in that legacy driver branch?

    The double update reboot is a pain, although I'm still thankful the functionality is there at all! :)

     

    Note: now up and running again with driver support after a reboot.
