Posts posted by sauso
-
Hey Guys,
Apologies if this has been asked before. I've tried to search but can't seem to find any hits.
I currently have a 2TB cache pool and use it to download all my media. Writes go to the cache, and then mover moves the data to my array once a month. This is great, however I average 1.5TB of data a month, and when mover starts it moves everything.
Is it possible to have mover only move items older than a certain age (say 30 days)? That way new files stay on the cache pool to allow super fast playback, and after a month they go to the array. (I should add that I then use rclone to move them to the cloud after 6 months.)
Would be truly awesome to have three fully automated tiers of storage.
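Until mover supports an age cut-off natively, something similar can be approximated with a user script. This is a minimal sketch; the function name and the example paths are illustrative, not part of any stock Unraid script:

```shell
#!/bin/bash
# move_old_files SRC DEST DAYS: move files last modified more than DAYS
# days ago from SRC to DEST, preserving the relative directory structure.
move_old_files() {
  local src="$1" dest="$2" days="$3" f rel
  # -mtime +N matches files modified more than N days ago
  find "$src" -type f -mtime +"$days" -print0 |
  while IFS= read -r -d '' f; do
    rel="${f#"$src"/}"
    mkdir -p "$dest/$(dirname "$rel")"
    mv "$f" "$dest/$rel"
  done
}

# Example call (share paths assumed - adjust to your setup):
# move_old_files /mnt/cache/Media /mnt/disk1/Media 30
```

Scheduled daily via the User Scripts plugin, this would drain only the older files while recent downloads stay on the cache.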
Thanks,
sauso
-
On 8/21/2019 at 8:01 AM, ken-ji said:
Also, if you have VLAN support, your docker network on the vlan is able to talk to unraid.
AFAIK, openVPN works very well with its own dedicated IP ( as long as the docker network is either on a different VLAN, or interface from the Unraid )
FYI @ken-ji I found a post where you said to remove the IP addresses from the Interfaces. Did this and it worked straight away.
Thanks!
-
6 hours ago, Riotz said:
I did this and I can connect to it internally but not from any outside network. It was working perfectly while proxied (orange cloud) through cloudflare. I am not sure why it stopped working all of a sudden. I guess I will look elsewhere for an explanation. I just dont get why it broke all of a sudden.
Stupid question, but did your external IP change? I only get the Cloudflare message if my internet is down or my IP has changed.
-
3 hours ago, z0ki said:
I got an issue after updating to the latest binhex-plexpass. NO metadata is no longer downloading and on top of this if I find a title to FIX the listing, no Agents are showing so something is broken here. It was working before the latest update.
Update was released 4 hours ago. I just updated and it fixed the issue.
-
Hi Guys,
I've got all my docker containers in a VLAN (VLAN 10) and it is all working great.
The only problem is that I'm running OpenVPN, which has to be in host mode to operate correctly. The issue is that my LetsEncrypt web server is in the docker VLAN, and of course it can't talk to the OpenVPN container, which is attached to the Unraid IP.
I've read about adding a second NIC and letting routing handle traffic to the Unraid host, so I added a second network on a USB NIC and put one of my containers on it. The container can ping other containers in the docker VLAN, but I still can't get it to ping the host.
Unraid Server is 192.168.1.100
Docker VLAN is tagged VLAN 10 on BR0
New VLAN is untagged on BR1 (tagged as VLAN 20 at the switch)
I'm sure i'm missing something simple.
-
Sounds like you aren't mounting it as RW. I don't have any issues.
Post a screenshot of the folder mapping.
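For reference, a read-write folder mapping in a docker run looks like this; the container name and paths are just examples:

```shell
# Map a host share into the container read-write. ':rw' is the default,
# but spelling it out rules out an accidental ':ro' (read-only) mount.
docker run -d --name='example' \
  -v '/mnt/user/Media':'/media':'rw' \
  linuxserver/plex
```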
-
6 hours ago, DZMM said:
fixed on github. Sorry about - not sure how that went missing
Haha thanks for confirming. I figured it was missing but just wanted to confirm! 😁
-
I feel like I'm going loopy. I can't see anywhere in the unmount script where /mnt/user/appdata/other/rclone/rclone_mount_running is removed.
Am I going mad, or is this removed somewhere else?
-
9 minutes ago, DZMM said:
Run the unmount script at array start rather than shutdown - that's what I do
Bingo, that makes total sense!
-
Might touch a new file called unclean_shutdown and work that into the script to delete the running file if it was an unclean shutdown.
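A simpler variant of that idea is a guard at the top of the mount script: if the checkfile exists but nothing is actually mounted, the last shutdown was unclean, so the checkfile is stale and can be removed. A sketch, assuming the checkfile path discussed above and an illustrative mountpoint:

```shell
#!/bin/bash
# Remove the "mount running" checkfile if it survived an unclean
# shutdown, i.e. the file exists but nothing is mounted at the target.
cleanup_stale_checkfile() {
  local checkfile="$1" mnt="$2"
  if [ -f "$checkfile" ] && ! mountpoint -q "$mnt"; then
    echo "Stale checkfile found - removing before mounting."
    rm -f "$checkfile"
  fi
}

# Example call (paths assumed - adjust to your script):
# cleanup_stale_checkfile \
#   /mnt/user/appdata/other/rclone/rclone_mount_running \
#   /mnt/user/mount_rclone/google_vfs
```

This avoids needing a separate unclean_shutdown marker, since `mountpoint` tells you directly whether the previous mount is still live.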
-
So I had a power outage last night and my server restarted (will be investing in a UPS soon!).
When the server came back up, the mount script failed because the rclone_mount_running file was still present. If the server is gracefully shut down, the unmount script removes this file.
Has anyone else found a way to combat this?
-
18 hours ago, DZMM said:
I started this way and eventually as I got more comfortable I went all in for plex media, and I've also lots of other files to the point where my server is just photos and personal docs. I'm gone from around 44TB in my array to only 16TB, where only around 4TB is permanent.
Re seeking, to be my mind it's got better since I started using rclone but it might be because I've got used to it now that I've gone all in. Have you tried experimenting with --buffer-size ?? I think a bigger buffer might help with forward seeking (not backward of course)
Will give it a shot. I haven't really experimented with it yet; got sidetracked moving all my dockers to their own VLAN.
-
In mine I set the upload folder to be my current Media share. I then changed Plex, Sonarr, Radarr, etc. to point at the unionfs mount.
Currently I'm just manually shunting videos up to the cloud, but once I'm happy with my process I'm going to set my script to copy anything older than one year to the cloud. Then, as I start to run out of storage, I will increase the time.
The way I see it, I will eventually replace my current HDDs with more flash for local playback of new, in-demand shows, and then copy up to the cloud afterwards.
The process is working flawlessly for me. I can't even tell the difference between spinning up a drive and playing from the cloud. The only thing that is a bit slower on the cloud is seeking.
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/Media=RW:/mnt/user/mount_rclone/google_vfs/Media=RO /mnt/user/mount_unionfs/google_vfs/Media
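The "copy anything older than one year" step above maps directly onto rclone's `--min-age` filter. A sketch of the upload command; the remote name `gdrive_media_vfs:` and the share path are assumptions, not taken from the script:

```shell
# Copy only files last modified more than one year ago to the remote.
# --min-age skips anything newer than the given duration (1y = one year).
rclone copy /mnt/user/Media gdrive_media_vfs:Media --min-age 1y -v
```

Shortening the `--min-age` value later is then all it takes to push more of the library to the cloud as local storage fills up.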
-
Love your work @DZMM!
Got this working with relative ease. Have you had any luck with Rclone Union yet?
-
12 hours ago, Jorgen said:
What’s your run command Sauso?
If you run in bridge mode with privileged=off, you also need --cap-add=NET_ADMIN
This was missing for me, probably because I’ve just been updating without problems for a long time and am using a very old template. It was causing all sorts of problems until I fixed it.
Just a thought, might not be your problem.
Sent from my iPhone using Tapatalk
Hey Jorgen,
Below is my docker run. It already has NET_ADMIN in it, so I'm at a loss.
docker run -d --name='openvpn-as' --net='bridge' -e TZ="Australia/Sydney" -e HOST_OS="Unraid" -e 'PGID'='100' -e 'PUID'='99' -p '943:943/tcp' -p '9443:9443/tcp' -p '1194:1194/udp' -v '/mnt/user/appdata/openvpn-as':'/config':'rw' --cap-add=NET_ADMIN 'linuxserver/openvpn-as'
This is where it gets really bizarre. If I terminal into my Unraid box and run an nmap against the container, the port is open. I can even connect to localhost, the IP of my Unraid box, and the IP of the container, all successfully. But as soon as I try from another device it shows as blocked.
Anyone else have any ideas?
***EDIT***
Still scratching my head, so I decided to set up the OpenVPN appliance. Changed my port forward to the new appliance and it worked first time.
Could it be my Unraid server blocking the connection??
**FINAL EDIT**
So I'm a muppet. Something funky must have been going on in my Unraid box. Restarted it and it came good straight away... I forgot rule 1 of tech support: have you tried turning it off and on again?
-
Hi Guys,
So I had OpenVPN running smoothly until last night's update (I have auto-updating turned on). When I tried to connect this morning it wouldn't connect. I checked my openvpn-as appdata share and most of the contents were missing. Not a big deal; I just recreated it from scratch and had the config back working about 10 minutes later.
My problem is that now I can't connect over UDP (port 1194). The port is still forwarded to the container; it just won't connect. I tried changing to TCP (port 9443, opened it on my router) and it worked first time. I just can't for the life of me figure out why this thing won't connect over UDP any more.
Anyone else had a similar experience?
Below are my port mappings; I'm using Bridged as my networking option.
172.17.0.10:1194/UDP-192.168.0.18:1194
172.17.0.10:943/TCP-192.168.0.18:943
172.17.0.10:9443/TCP-192.168.0.18:9443
Mover Question
in General Support
Posted
Thanks @itimpi. Will take a look at it.
Anyone else think this could be a useful feature?