Posts posted by tjb_altf4
-
55 minutes ago, PC Services said:
Does anyone have the URL for the plugin?
I can't find it on the app store anymore.
The WireGuard plugin has been merged into the Unraid OS as of 6.10.0; it can be accessed at Settings > VPN Manager.
-
I noticed the QR code generator is no longer available; will this be coming back?
-
1 hour ago, andber said:
I am also looking for this feature
I want ONE of my 12 dockers to never update; for the others auto-update is OK (with or without a registered tag: latest, a pinned version, etc.).
Would it be possible to build in an auto-update toggle analogous to autostart, perhaps in advanced view, with auto-update on or off by default per installation?
The CA Auto Update Applications plugin has the ability to set selected (or all) plugins and dockers to auto-update.
-
-
39 minutes ago, FrequencyLost said:
Hi, has jq/jq-onig been removed from Nerd Pack?
Can see it listed in
https://raw.githubusercontent.com/dmacias72/unRAID-NerdPack/master/packages/packages-desc
But not as an available option within the Nerd Pack plugin itself
Thanks
jq has been part of the base Unraid distro since 6.10.
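If you want to confirm it's available, here's a quick sanity check from the Unraid terminal (the JSON is just example data):

```shell
# Verify jq is on the PATH and parses JSON correctly
echo '{"name":"unraid","version":"6.10"}' | jq -r .version
# prints: 6.10
```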
-
-
1 hour ago, Victor90 said:
for some reason each have ~50gb worth of data on them for no known reason to me?
I'm assuming it's been formatted with XFS; that is overhead for the filesystem.
1 hour ago, Victor90 said:
Currently my first-time parity sync on empty drives has been running for the past 2 hours, and Unraid estimates it'll keep running for the next 10 hours. Why is it this long, and is it always this long? Write speeds are ~170 MB/s.
That's a fairly normal speed for a parity sync; it will go up and down at various stages of the check.
For comparison, my array made up of 10TB drives takes about 20 hours to check parity.
1 hour ago, Victor90 said:
Does this mean my next parity sync once I add all my existing data (~9TB worth) will take like 7 days?
The time taken for a parity check is primarily determined by the capacity of the largest disk in the array; whether the drives are full or empty makes no difference, as the system reads every sector to verify parity is correct.
@SpaceInvaderOne has a great video explaining how parity works in Unraid if you'd like to learn more.
-
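For a rough sense of where those numbers come from, a back-of-the-envelope sketch (assuming a sustained ~170 MB/s over a 10TB disk; real checks slow down on the inner tracks, hence the ~20 hour figure above):

```shell
# 10 TB ≈ 10,000,000 MB; one full pass at ~170 MB/s:
seconds=$(( 10000000 / 170 ))
echo "$(( seconds / 3600 )) hours"   # ≈ 16 hours (best case)
```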
-
1 hour ago, André Groß said:
Hi,
it would be nice if we could get segmentation in the VM panel,
so it would be easier to sort VMs into different folders.
Thank you.
The Docker Folder plugin also works with VMs, which can achieve this effect.
-
4 hours ago, BigDanT said:
I'm having loads of malformed db issues across my containers across different drives, and I never had this issue for years on 6.9.2.
If that happens to be Sonarr/Radarr/Lidarr, that is a known issue relating to failed database schema updates that many have faced regardless of OS, whether running in Docker or on bare metal.
The only instability I've seen in 6.10 was that it seemed to change/reset the default Docker network type to macvlan, which I promptly changed back to ipvlan.
DBs can also corrupt if you temporarily run out of space on the drive they are stored on.
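If you want to confirm whether a particular database really is corrupt, sqlite3 can check it directly. A minimal sketch, assuming a hypothetical Sonarr appdata path (stop the container first, and adjust the path to your setup):

```shell
# Prints "ok" for a healthy database, or a list of errors if corrupt
sqlite3 /mnt/user/appdata/sonarr/sonarr.db "PRAGMA integrity_check;"
```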
-
7 hours ago, Squid said:
As of today, since 6.11.0 stable is imminent, and I don't believe that this plugin will get updated, I have marked this plugin as being incompatible with > 6.10.3
If you require any packages which you may have been installing via this, you will need to do your own package management.
Note that Perl will be included in the base OS once 6.11.0 is released.
Will plugins that need manual package resolution be marked as incompatible?
-
7 hours ago, Caldorian said:
Just throwing my experience out there for people: I found that after I upgraded from 6.9.2 to 6.10.3, most of my binhex container /config folders were set to root:root, and refused to run. The only ones that were set to nobody:users were the ones that I've spun up in the last few months (the others are on the range of a couple years old now). Things seem to be running normally now on 6.10.3 after I chown'ed them to all be nobody:users.
So this was probably related to something that's changed in these containers over the couple years, and some new docker permission/settings with 6.10 that then caused things to flip out.
For binhex's dockers, you can delete the perms.txt file in the config folder, and on next start the container will reapply the correct permissions.
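A minimal sketch of that fix, assuming the default appdata location and a hypothetical binhex-sonarr container folder (adjust both to your setup, and stop the container first):

```shell
# Remove the marker file; binhex containers recreate perms.txt
# and reapply ownership on next start
APPDATA=/mnt/user/appdata/binhex-sonarr   # hypothetical path
rm -f "$APPDATA/perms.txt"
```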
-
I was thinking it would be great for visibility if somewhere in the top banner we could see the current stable release, and any current public testing releases (RC etc).
These could also be hyperlinks to their respective announcement threads
-
-
1 hour ago, Vr2Io said:
May be time to say goodbye to shuck disk.
Here in Australia an 18TB retail internal HDD is about AUD$1K, while buying an external HDD off Amazon AU costs around AUD$500.
I won't be saying goodbye to shucks anytime soon.
-
3 minutes ago, Vr2Io said:
This mod aims to disable the external firmware in the flash chip.
Ah I misread
-
I've got 2 MyBook and 4 Elements boards from recent shucks with manufacture dates from this year, and none have the damage shown in your pictures.
All have JMS579 chips too.
-
13 hours ago, takkischitt said:
I'm not having much luck here...
I also tried 'http://localhost:8989' but when pressing the test button, it just ran and never completed and I had to cancel out of the settings.
localhost won't work between Docker containers; use IP:PORT instead.
EDIT: sorry, just scrolled up and saw it's running through DelugeVPN; that needs additional config that binhex can help with.
-
Pretty sure vfio binding happens too late to work for the primary display device.
Passing a correct vBIOS should allow the card to be passed through to the VM successfully in this scenario.
-
On 5/12/2022 at 11:48 PM, calvolson said:
The shim interface misconfiguration comment looks to be in the right direction. When I disable "Host access to custom networks" in the docker settings, this issue stops for me. Running 6.10.0-rc8.
I wonder if there are issues due to the shim network itself being macvlan, as noted in the help section, which has already been known to cause crashes for some (it certainly has for me since moving to 6.10).
-
56 minutes ago, MegaBlindy said:
fpcalc is a fingerprinting app, so I think Lidarr is scanning your library trying to identify music.
I would use the CPU pinning options in the docker template (toggle advanced view) and give it 2-4 vcores; that way it won't cripple your server.
-
-
Just now, Ruato said:
From the 6.10.0 summary of changes: "Added ability to schedule pool 'balance' and 'scrub' operations and calculate whether a full balance is recommended."
How can I set up the above operations? That is, I can't find the related settings under Settings -> Scheduler.
Thanks!
It's done on a per-disk/pool basis on their individual settings pages, which you can get to from the Main tab.
-
-
I see now the available legacy driver version was updated from v470.94 to v470.129.06... is there a way we can lock in that legacy driver branch?
The double-update reboot is a pain, although I'm still thankful the functionality is there at all!
Note: now up and running again with driver support after a reboot.
-
I don't want to be that guy, but the return of syslog colouring has bled over into docker logs, and is now overriding the colouring where/if defined by those applications.
[screenshot: previous docker container log]
[screenshot: new docker container log]
-
Linked post is how I did it (for Win10), but as always advised, make a backup of your vdisk in case things go sideways.
-
Sonarr/Radarr can use a recycle bin, but it's not enabled by default, and the path is customisable and not necessarily the same folder used by the SMB recycle bin.
Radarr deleted ALL my movies
in General Support
Posted · Edited by tjb_altf4
FYI there is a recycle bin option (not configured by default) for Sonarr/Radarr/Lidarr, which adds a little bit of safety to the media itself.