Posts posted by TRaSH
-
In my current setup I use bonding (mode 4, 802.3ad); the main reason was that when I was doing heavy up/download traffic, I got issues during Plex playback because my NIC was fully saturated.
With these changes I have the feeling I will run into the same issues I had before I decided to bond them.
-
On 2/15/2023 at 8:53 PM, questionbot said:
mc not running in tmux was not an issue until I updated to the new version of unraid and the new nerdTools.
Before then I just ran MC with no issues. I would open a new session with `tmux new -s mc` and just have it always on, and jump into the tmux session, so however I left, MC would stick around.
I recently upgraded and switched, and had exactly the same issue.
It's kind of annoying that the supposedly unsupported Nerd Tools has a working version while the replacement is broken.
If you still have the old NerdTools installed, you can install a version that still works.
-
Thanks for this one,
I wish there was something like this for WD NVMe drives.
-
28 minutes ago, Hoopster said:
.....
I was aware the Nerd Pack was deprecated for 6.11.0 and placed the necessary packages in /boot/extra before upgrading and rebooting. I am not sure that everyone is aware the Nerd Pack has been deprecated and how to load the necessary packages. It might be wise to include this in the release notes (as if everyone reads those 😀).
This should be added in bold text to the changelog, including instructions on how to load the necessary packages!
-
I wonder why there is no response to this?
It seems like it's being ignored.
-
Started fresh after the security issue with rtorrentvpn, and also switched to WireGuard.
When I use my old rtorrent.rc and start rtorrentvpn, it connects and I can access the web GUI.
But when I move my sessions over to the new install and start rtorrentvpn, I can't access the web GUI and get an error. I then tried renaming my rtorrent.rc so it creates a new one, and I still get the same error.
-
-
7 minutes ago, ich777 said:
From what I know, no.
Ahh, I think I now understand why yours are called "unraid optimized": because you add the actual `/mnt/user/{tv|movies|music}` paths, whereas LSIO and binhex have so-called consistent paths inside the container but let the user choose the path on the host (unraid).
11 minutes ago, ich777 said:
If a user has the wrong file structure, I don't know if they are willing to change that...
Well, after they come into the Radarr Discord and we explain that they're using the wrong suggested paths, they do end up actually changing it.
13 minutes ago, ich777 said:
I don't think so; from what I remember the paths aren't the same, or you have to customize them in SABnzbd or NZBGet to match the others so they can see the folders and the files are properly moved.
binhex uses `data` inside his containers for the download location and `media` for the media location, so when you use all his images they are consistent with each other (the same goes for LSIO with `tv`, `movies` and `downloads`), but still no instant moves, only copy + delete.
18 minutes ago, ich777 said:
What would your recommendation be? Should I add a note about the paths? The thing I don't want to do is change anything by default...
EDIT: Eventually write me a short PM and we can chat about this, but please after the weekend; today is my birthday...
Happy birthday!
I will try to figure out a way it can be added.
Perhaps a warning that if they use this path structure they will get copy + delete, higher I/O, no instant moves and no hardlink support.
And perhaps a link to a guide where it's explained how to set up an optimized path structure with support for hardlinks and instant moves.
-
19 minutes ago, ich777 said:
In terms of Sonarr/Radarr/Lidarr/SABnzbd/NZBGet the mounts are all the same, so no configuration by the user is needed: /mnt/movies, /mnt/tv, /mnt/downloads
Don't the other Community Developers use the same path structure across their container images?
I know binhex and LSIO do.
21 minutes ago, ich777 said:
What would be your preferred way? These are not volumes, these are bind mounts in the Docker way of things.
It isn't only my preferred way, it's the overall preferred way; why would you want a slow copy + delete, and double file usage with torrents, if you could make use of hardlinks?
The overall recommended way is to use one main share with the subfolders under it; this way you get instant moves and hardlinks working. And you can still lock certain clients to only have access to certain folders (like locking your download client to just your download location, and Plex etc. to just your media location). SOURCE
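The hardlink advantage can be sketched on any single filesystem; the paths below are temporary stand-ins, not the actual share names:

```shell
# Inside a single mount, "moving" a finished download into the media
# folder can be a hardlink: instant, and no extra disk space is used.
rm -rf /tmp/data
mkdir -p /tmp/data/torrents /tmp/data/media/movies
echo "movie" > /tmp/data/torrents/movie.mkv

ln /tmp/data/torrents/movie.mkv /tmp/data/media/movies/movie.mkv
stat -c %h /tmp/data/torrents/movie.mkv   # prints 2: one inode, two names
```

Deleting the copy in `torrents` later frees nothing until the last name is gone, which is exactly why seeding torrents costs no double space with this layout.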
30 minutes ago, ich777 said:
I also don't recommend bind mounting the '/mnt/user' directory to one directory inside the container, since new users can have a hard time and may even copy to the wrong directory.
That would for sure be the wrong location.
It would be better to use something like the following:
Create a share called, for example, `data`, and in that share create a subfolder named `downloads` (or `usenet` and `torrents` if you use both), and a `media` folder in which you create `tv`, `movies` and `music`.
Then use the following bind mount, depending on which application you're using:
- for the arr(s) (Sonarr, Radarr, Lidarr, etc.) => `/mnt/user/data/`
- for your usenet client => `/mnt/user/data/usenet/` or `/mnt/user/data/downloads/`
- for your torrent client => `/mnt/user/data/torrents/` or `/mnt/user/data/downloads/`
- for Plex, Emby, JellyFin and Bazarr => `/mnt/user/data/media/`
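A minimal sketch of creating this layout (the `data` share name and the `/tmp` base are illustrative; on an actual unraid server the base would be `/mnt/user`):

```shell
# BASE=/tmp so the sketch runs anywhere; use /mnt/user on a real server.
BASE=/tmp
mkdir -p "$BASE/data/usenet" "$BASE/data/torrents"
mkdir -p "$BASE/data/media/tv" "$BASE/data/media/movies" "$BASE/data/media/music"

# Each container then gets ONE bind mount into this tree, e.g.
#   Sonarr/Radarr/Lidarr : $BASE/data        -> /data
#   usenet client        : $BASE/data/usenet -> /data/usenet
#   Plex/Emby/Jellyfin   : $BASE/data/media  -> /media
```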
40 minutes ago, ich777 said:
I completely understand what you mean... But for new users this can be horrible...
Yeah, for new users it can sometimes be hard to understand.
41 minutes ago, ich777 said:
You can use whatever container you want; @binhex and other Community Developers also have excellent containers in the CA App.
binhex has the same not-recommended path structure as LSIO and yours.
One of the main reasons I brought up the path structure is that I'm a member of the Radarr support team, and we get a lot of unraid users in the Discord channel asking why importing takes so long, especially with 4K, and why they have double file usage when using torrents.
Or, even worse, they download directly into their media library and then wonder why Radarr (and the other arr(s)) aren't able to import it. And then we have to explain that most of the unraid Community Developers recommend/suggest the wrong paths.
-
I see you mentioning "unraid optimized"... in what way would these be optimized compared to the images of other Docker maintainers?
Another thing I noticed in the template used (and it's a shame others use/recommend this as well):
it's the NOT recommended (by the Radarr/Sonarr support team + devs) way of passing in two volumes. The commonly suggested /movies and /downloads look like two file systems (because of how Docker's volumes work), even if they aren't. This means hardlinks won't work, and instead of an instant move, a slower and more I/O-intensive copy + delete is used.
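One way to check from inside a container whether two mounted paths can share hardlinks is to compare their device IDs; the paths in the comment are placeholders for whatever mounts the container actually uses:

```shell
# Hardlinks only work within one filesystem/device. If the two paths
# report different device IDs, the arr apps fall back to copy + delete.
same_fs() {
    [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}

# Inside a container this would be e.g. /downloads vs /movies (two
# mounts) or /data/torrents vs /data/media (one mount); /tmp used here
# only so the sketch is runnable.
same_fs /tmp /tmp && echo "hardlinks possible" || echo "copy + delete"
```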
-
On 10/15/2020 at 10:15 AM, binhex said:
i think this maybe PIA's DNS playing up, can you change the NAME_SERVERS to:-
'NAME_SERVERS=84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1'
I changed it,
and I also redownloaded a new .conf file just to be sure,
and now it works.
Time for me to test it with rtorrentvpn as well, and then see how the speeds go.
-
I'm trying to get QbtVPN running with WireGuard. My VPN service (TorGuard) supports WireGuard and port forwarding.
I read the two links, VPN Docker FAQ and Further Help. I also tried rTorrentVPN, and I'm getting the same error.
```
2020-10-14 20:21:44,495 DEBG 'watchdog-script' stdout output:
[debug] Having issues resolving name 'www.google.com'
[debug] Retrying in 5 secs...
[debug] 11 retries left
```
I've attached the supervisord.log that I ran with debug enabled: supervisord.log
I also added the Docker compose from unraid.
Yes, I know I don't use the default ports; that's because those ports are already in use.
```yaml
version: '3.3'
services:
  nginx:
    ports:
      - '80:80'
      - '6881:6881'
      - '6881:6881/udp'
      - '8085:8080'
      - '8119:8118'
    volumes:
      - '/var/run/docker.sock:/tmdocker'
      - '/mnt/disks/VM/appdata/binhex-qbittorrentvpn:/config:rw,slave'
      - '/mnt/user/data/.torrents/:/data/.torrents/:rw'
      - /config
      - /data
    container_name: binhex-qbittorrentvpn
    environment:
      - VPN_ENABLED=yes
      - VPN_OPTIONS=
      - 'NAME_SERVERS=209.222.18.222,84.200.69.80,37.235.1.174,1.1.1.1,209.222.18.218,37.235.1.177,84.200.70.40,1.0.0.1'
      - ADDITIONAL_PORTS=
      - PUID=99
      - DEBUG=true
      - PGID=100
      - VPN_USER=VPN_USER
      - VPN_PROV=custom
      - STRICT_PORT_FORWARD=yes
      - WEBUI_PORT=8085
      - LAN_NETWORK=192.168.2.0/24
      - UMASK=000
      - TZ=Europe/Berlin
      - HOST_OS=Unraid
      - VPN_PASS=VPN_PASS
      - VPN_CLIENT=wireguard
      - ENABLE_PRIVOXY=no
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
      - HOME=/home/nobody
      - TERM=xterm
      - LANG=en_GB.UTF-8
    network_mode: bridge
    privileged: true
    restart: 'no,always'
    logging:
      options: 'max-file=1,max-size=50m,max-size=1g'
    image: nginx
```
-
Using the IP address it gave the error;
using the host name it worked.
-
I love the ability in unraid to add one drive of any size at a time to grow my array.
What I really miss is a nice overview of the health of the drives.
-
-
-
I'm using the "linuxserver/nzbget" version with the currently installed version 21.0, and I don't have that issue anymore.
I used to have the stuck issue a few times, even when I was testing it on my Windows machine.
-
Actually, you shouldn't edit anything inside a container, because those changes are lost after an update of the container;
all changes should be made in your appdata config files for that app.
It would help if you told us which container it is and what you want to do.
-
For automatic backup or manual backup?
-
Curious, why?
Wouldn't it be good to have a backup of your files on the USB?
-
You need to install the following from the Nerd Pack:
-
Would it be possible to have the same kind of backup you use for the appdata for the USB/flash drive?
Meaning that the USB drive also gets archived?
Zero compression wouldn't be a real issue;
this way we would have multiple backups of it.
-
-
Thanks! Hopefully there are no updates so it keeps running.
I won't be home before tomorrow evening.
-
After the update this morning, when I wanted to log in from my mobile, I couldn't access the web GUI.
I'm getting a plug-in error:
[09.07.2019 05:50:08] WebUI started.
[09.07.2019 05:50:08] Bad response from server: (0 [error,getplugins])
In the logs the last part hangs on
2019-07-09 05:23:38,622 DEBG 'watchdog-script' stdout output:
[info] rTorrent running
[info] Initialising ruTorrent plugins (checking nginx is running)...
Not able to test with my laptop yet; not before this evening when I get home.
I've already asked a friend of mine to check on his system with a mobile.
[Plugin] Mover Tuning
in Plugin Support
Posted · Edited by TRaSH
wrong screenshot
I'm testing something new with this plugin and trying to run a script after the mover tuning starts, but somehow I get an error that it can't find the script.
(fastdrive = another NVMe drive I'm using)
```
Sep 1 19:46:58 root: Starting Mover
Sep 1 19:46:58 root: Forcing turbo write on
Sep 1 19:46:58 kernel: mdcmd (92): set md_write_method 1
Sep 1 19:46:58 kernel:
Sep 1 19:46:58 root: ionice -c 2 -n 7 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start 10 0 0 '' '' '' "/mnt/fastdrive/userScripts/userScripts/mover-after.sh" yes 90 '' '' 30
Sep 1 19:46:58 root: Log Level: 1
Sep 1 19:46:58 root: mover: started
Sep 1 19:46:58 root: mover: finished
Sep 1 19:46:58 root: /usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 849: /mnt/fastdrive/userScripts/userScripts/mover-after.sh: cannot execute: required file not found
Sep 1 19:46:58 root: Restoring original turbo write mode
```
Solved: the script had CRLF line endings instead of LF.
Leaving it here for others who run into the same issue.
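For reference, a minimal sketch of the failure and fix (the script path here is illustrative): a shebang line ending in CRLF makes the kernel look for an interpreter literally named `/bin/sh` followed by a carriage return, which produces exactly the "cannot execute: required file not found" message in the log above.

```shell
# Create a script with Windows (CRLF) line endings -> execution fails,
# because the interpreter "/bin/sh\r" does not exist.
printf '#!/bin/sh\r\necho ok\r\n' > /tmp/mover-after.sh
chmod +x /tmp/mover-after.sh

# Fix: strip the trailing carriage returns (convert CRLF to LF).
sed -i 's/\r$//' /tmp/mover-after.sh
/tmp/mover-after.sh   # prints: ok
```

Editors that save with LF (or `dos2unix`, where installed) avoid the problem in the first place.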