tomwhi

Members
Posts: 14


  1. OH MY GOSH - I've just worked out why my Home Assistant couldn't talk to my Phoscon/Deconz app... they weren't on the same Docker network. DOH!! My HA was on "host" because there were too many ports to map across, and my Phoscon was on br0 (as shown in SIO's instructions, which also made the most sense because it needs its own IP for things to find it on tcp/80 and tcp/443). I found out by going into the console of the HA container and trying to ping my Phoscon IP - it failed! So the "API key couldn't be whatever" error message is utter bullsh!t.

I also realised it couldn't talk to Pi-hole (which I was less worried about, but equally annoyed at not working) - and for anyone else who lives and breathes the SIO videos, it's also on br0!! Once my HA was moved onto the br0 network everything worked - well, not everything: now it can't talk to the host, so I can't get Sonarr or Radarr data into my HA setup. But that really is the lowest of the low priorities, and now I've worked this out I'll try to work out how to get it talking to both the br0 and host networks.
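For anyone else debugging the same thing, here's roughly what the checks look like (the container names and the Phoscon IP here are examples, not exactly my setup):

```shell
# See which Docker networks each container is attached to
docker inspect -f '{{json .NetworkSettings.Networks}}' homeassistant
docker inspect -f '{{json .NetworkSettings.Networks}}' phoscon

# Test reachability from inside the HA container
# (assuming ping is available in the image)
docker exec homeassistant ping -c 3 192.168.1.60

# A bridged container can be attached to a second network afterwards;
# this does NOT work for --net=host containers, which is why I moved
# HA onto br0 rather than the other way round
docker network connect br0 homeassistant
```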
  2. I am pretty desperate for this video SIO! :-) Please can you let us know when you think it will be due out?
  3. I've just had the same thing - I've been ripping my hair out all day. I was in the same situation as you: I couldn't get the SIO one working, so I used marthoc/deconz, then on Saturday when my docker updated automatically I started getting a USB error. I've been able to get the docker up and running again by using SIO's image with my original appdata, but I am still getting the API error that we've both had with Home Assistant. I've read that the API issue is normal in HA, and I've tried updating the configuration.yaml for HA with my host IP and port (various attempts), with nothing working. I'd be interested to know if anyone gets this working in any way, shape or form.
  4. I had exactly the same issue too - my post is in this thread, so you might have tried this already. I ended up using a different container off the Docker Hub. I can't see which container you're using (I'd need to see the edit page & repo - could you send another screenshot with advanced options enabled?). I used this guide and it sorted me out:
  5. I managed to get this "working", and I have to say without you, @Hoopster and @ken-ji's posts in the original thread I would have been lost. So thank you! I say "working" because my keys don't survive a reboot, but I've been through all 6 pages of the original thread and worked out what I did wrong, so that'll be a doddle to fix. Even so, all I have to do is put in a username and password until I can automate it, so the solution is ready to rock. I am going to try and write my solution up at some point (mainly because a mate at work wants to do the same thing) and hopefully it helps others.

I am using your original rsync commands from the original post for the data I want synced:

# sample rsync commands
rsync -avu --numeric-ids --progress --stats -e "ssh -i /root/.ssh/Tower-rsync-key -T -o Compression=no -x" "/mnt/user/Audio Books/" root@192.168.x.110:"/mnt/user/media/Audio\ Books/"
rsync -avu --numeric-ids --progress --stats -e "ssh -i /root/.ssh/Tower-rsync-key -T -o Compression=no -x" "/mnt/user/Ebooks/" root@192.168.x.110:"/mnt/user/media/Ebooks/"

Then I got Duplicati working to encrypt personal files, photos and videos (that I'd rather not have in plain text on my dad's server!!), which are sent using the same SSH/SFTP idea so they're encrypted in transit too, and I tested a restore both across the wire and locally on my dad's server in case my server totally died.

I installed ZeroTier this morning, and this is just the icing on the already glamorous cake. I tested it with my mate and he was able to access my server once I authorised him (access was quickly revoked after testing). I selected the IP range 192.168.192.* so everything I join to that network is on the same range and can access the other nodes on that virtual network - epic, it worked so quickly and easily. Tom Lawrence on YouTube does a good starter video on it. It works amazingly in UnRaid too and passes the virtual network down to the host.

So I can set up my rsync and Duplicati to use the 192.168.192.* range, which will never change even if public IPs change. And just for peace of mind the data is still encrypted because of your original SSH idea - which I keep, even though it'll be double encrypted; I'm not fussy about transfer speeds as long as the data gets there.
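The end state is the same rsync, just pointed at the ZeroTier address instead of the LAN or public one (the key path, share names and the 192.168.192.x address are only examples):

```shell
# Same rsync-over-SSH command as before, but targeting the node's
# ZeroTier IP, which survives public IP changes
rsync -avu --numeric-ids --progress --stats \
  -e "ssh -i /root/.ssh/Tower-rsync-key -T -o Compression=no -x" \
  "/mnt/user/Ebooks/" root@192.168.192.20:"/mnt/user/media/Ebooks/"
```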
  6. cheers for the info! I'll give the thread a good read now and get cracking :-)
  7. Thank you! I think I saw another one of your posts during my Googlin' referencing this post. I'll deffo give it a read though - because it was quite an old post, I didn't want to miss a trick if there was something newer around. In the post I did see, it was set up on the same LAN - did you move the server off the WAN, and does it just operate over SSH (port 22)?
  8. Hi, I wondered what the advice would be for syncing files from my UnRaid server to another UnRaid server over the internet (at my Dad's house). I want our project photos in plain view, without modification, on both sides (not encrypted or chunked into zips). I would also like to back up my private data, encrypted, onto his server in case mine had a disaster (zip-and-encrypt would be OK for this). I've seen Duplicati can be used to back up, which can encrypt but also zips the files into chunks (which would be OK for the backup part of my project). I've seen rclone can be used to sync, but it looks complicated. I've seen SpaceInvader One has done a video on both, but it mainly talks about putting data into the cloud. Are there any good ideas/advice on how to do this, and any good guides you'd point me at before I start down the wrong road?
  9. Yes - sorry, I have read your post again and see what you mean. I have been through SABnzbd, Deluge and others, and they look OK to me. The cases match what they should be (I use the auto-complete where possible, so I don't think there is a problem with case). I can see why this would normally be the cause, but this seems to have been creeping up slowly for me, so I'm not 100% sure it's the case here.

When I total my docker sizes vs what Unraid is reporting, the two totals don't add up, so I think there's more in the docker image than just the live dockers I have installed. To dig around some more, I looked inside /var/lib/docker, and with the command "du -ah /var/lib/docker/containers/ | sort -rh | head -60" I could see containers in there which I no longer have. Is there a way to safely remove these and just leave the dockers I am actually running? I have a bad habit of adding and removing dockers quite a lot, so I wonder if the removal process isn't removing the old images?
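In case it helps anyone searching later, the standard way to clear out leftovers like this is Docker's own prune commands (be careful - they delete anything not currently in use):

```shell
# Show everything Docker is still holding on to
docker ps -a        # containers, including stopped ones
docker images -a    # images, including dangling layers

# Remove stopped containers, then unused images
docker container prune
docker image prune -a

# Summary of space used by images / containers / volumes
docker system df
```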
  10. Well - my entire Unraid journey thus far has been to wait for a SpaceInvaderOne video, copy what he did, and be happy with the results, because it always worked first time. I wish I'd known this video series was on the way, otherwise I wouldn't have spent 3 hours last night trying to get deconz working!! I'm so excited to see this series! During the lockdown, home automation has been my number 1 thing to keep me sane: I've tried a CC2531 Zigbee stick, bought loads of Xiaomi sensors, and just put a new Sonoff BasicR3 into the network.

@SpaceInvaderOne, I saw your docker and tried it - it worked well until I tried to get it linked with HA, but then it wasn't able to get an API key. I ended up using the one off the Docker Hub, with this post to fix my integration issues with the Home Assistant core docker. I needed to set loads of environment variables and also the "dialout" usermod setting to get it working properly. I don't know if it's worth you adding those into your docker image too?

I used the command ls -l /dev/serial/by-id to find out what my ConBee II was using:

lrwxrwxrwx 1 root root 13 May 30 02:40 usb-dresden_elektronik_ingenieurtechnik_GmbH_ConBee_II_DE2195326-if00 -> ../../ttyACM0

This is what I ended up with in my docker settings (obviously I had to remap the port from the usual port 80 to some other random port).
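For reference, this is roughly the equivalent plain docker run command (the image is the marthoc/deconz one mentioned earlier in the thread, but the host port, device path and variable values are examples - check your own device with ls -l /dev/serial/by-id):

```shell
# deCONZ container with the ConBee II passed through; the container's
# web UI on port 80 is remapped to 8080 on the host
docker run -d --name=deconz \
  --restart=unless-stopped \
  -p 8080:80 \
  -e DECONZ_DEVICE=/dev/ttyACM0 \
  --device=/dev/ttyACM0 \
  marthoc/deconz
```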
  11. I'm not using DVR in Plex - I have probably set it up at some point, but I never really got any use out of it. My transcode directory doesn't look like it's configured correctly; I thought I had pointed it at a share on my cache drive. I have had a quick look through all my container settings and they seem to be set up correctly, using the case I set my shares to, i.e. "Download" and not "download" (this is massively helped by the unraid GUI doing the path creation for me). I've mainly used Binhex's applications, so I can't imagine the paths in the applications are incorrect, but maybe I'm misunderstanding something.

Trying to find the main culprit, I used cAdvisor (based on another post) to see the virtual size of my dockers, and it seems that the ones that reference my "Download" share are the ones with higher usage. I don't know if this is the right way to go. I drilled into Sonarr (just to pick one) and the directories look correct; the settings in docker seem to match my shares...

Are there any ways to look inside the docker.img file so I can see what kind of files are in there, so I can reverse engineer the problem and work out what in the applications is misconfigured?
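For anyone wondering the same thing: while the Docker service is running, docker.img is mounted at /var/lib/docker, so it can be inspected with ordinary commands:

```shell
# Which top-level docker dirs are biggest (image layers vs containers)
du -h --max-depth=1 /var/lib/docker | sort -rh

# Per-container size: SIZE is the writable layer, "virtual" includes
# the underlying image
docker ps -a -s

# Detailed breakdown of images, containers and volumes
docker system df -v
```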
  12. Dude - this is amazing. I've been pulling my hair out for hours trying to fix this, and your post sorted me right out!!! I was struggling with an API issue in Home Assistant trying to link it up to the SpaceInvaderOne docker image - I didn't want to create my own docker because that's a ballache, but obviously it's the only way to get it really working. Thank you so much for putting all the technical detail into the post.
  13. Thanks for the reply, trurl. I think I jumped to some false conclusions about the appdata and docker image - thanks for correcting me. I've attached the diag file from my server. This is the run command from the container (ignore the test environment variable):

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-plexpass' --net='host'
  -e TZ="Europe/London" -e HOST_OS="Unraid"
  -e 'TRANS_DIR'='/config/transcode' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -e 'Test'='Test'
  -v '/mnt/user/Movies/':'/movies':'rw' -v '/mnt/user/TV/':'/tv':'rw' -v '/mnt/user/Music/':'/music':'rw'
  -v '/mnt/user/Music-Unsorted/':'/MusicUnsorted':'rw' -v '/mnt/user/Fitness/':'/fitness':'rw'
  -v '/mnt/user/Home Videos/':'/HomeVideos':'rw' -v '/mnt/user/Training/':'/training':'rw'
  -v '/mnt/user/Audio Books/':'/audio-books':'rw' -v '/mnt/user/appdata/binhex-plexpass':'/config':'rw'
  'binhex/arch-plexpass'

925f49e97196c5af9b5d66a0f19f21bd19cb36e01f43c6c56b92b713c544cfad
The command finished successfully!

tomnas-diagnostics-20200529-1845.zip
  14. Hi, I'm seeing high docker memory usage on my unraid server, and I have two questions.

I think I've terribly misconfigured my dockers as I've been setting them up, because stuff like Plex is at 9.3GB and others are higher than I imagined. I assume it's because I've been careless when choosing where to put the /config directory, allowing it to be put into the appdata folder. For Plex I can see my metadata folder is massive - I can confirm I don't have any media hosted in there; it's just the /config directory that's massive. This looks to be the case for all my other dockers where I've not put any thought into where the config dir should live. Should I move my /config directory out of appdata, or is it OK in there?

Is there any issue with increasing the size of the docker.img to accommodate these large folders?

Cheers
Tom
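A quick way to see where the space actually is, since anything mapped to appdata lives on the array/cache and not inside docker.img (paths are the Unraid defaults - adjust if yours differ):

```shell
# Size of each app's config folder on the array/cache (NOT in docker.img)
du -sh /mnt/user/appdata/* | sort -rh | head

# What docker.img itself is holding (image layers + writable layers)
docker system df
```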