Father_Redbeard

  1. My brother and I have both been making a real effort not only to keep our data safe, but also to take back some control from the big guys, which is what landed me in Unraid land in the first place. He lives in another state. I have plenty of room on my server and would like to allocate 20 GB or so, with room to grow, that he can use as his offsite option.

     My original idea was to use something like ZeroTier to create the network connection between his network and mine, and then have a share waiting for him to use. What I don't know is whether that is the best strategy, and how I can *only* allow him access to his dedicated share and no Docker containers, etc. (see the firewall sketch after this list for the kind of restriction I have in mind). I poked around the web and see that some folks will do a VPN of some sort, or Syncthing doing a one-way transfer with a VPS in the middle as an "untrusted" host so any data at rest is encrypted. I'm not at all opposed to that; in fact I keep trying to figure out how I can fit it into my flow and actually put it to work. My only hold-up with Syncthing is that it's a bit of a bear to set up for me, and I'm more technically inclined than he is!

     So who here has a similar arrangement with family or friends? How are you doing it? Or, for folks doing it the other way, how are you using family or friends' houses for your own offsite purposes? He trusts me and I trust him, but that doesn't mean I don't want his data to be encrypted before it lands on my disks, and as stated earlier, I don't want him to even accidentally land in some other Docker container while he's got a live VPN connection. I'd love to hear any ideas. I know for sure I won't be opening it to the internet at all; it would either be VPN or the Syncthing+VPS route.
  2. Popping back in to at least divulge what my hang-up on this was. NPM official wants to install on the br0 network so it can use ports 80/443 (I guess). My issue was that NPM could not communicate with any other Docker container on bridge, host, or a custom Docker network I created. I could not figure out how to get the communication to work until I stumbled on this option: Settings -> Docker -> Host access to custom networks: Enabled.

     Now NPM can communicate with each of the other containers, and I'm able to set up local DNS names for each app I want as well as give them a wildcard cert via Let's Encrypt. In my research I only saw references in the NPM Official thread that "if it can't talk to the other containers, it won't work", but not how to go about fixing it. The above setting works.
  3. Figured it out. NPM couldn't talk to the other containers. Settings -> Docker -> Host access to custom networks: Enabled. This is probably not the recommended way to do it, but it was the only way I could find to get NPM on the br0 network to see Nextcloud on the bridge/nextcloud-aio network.

     EDIT: I should mention I went back to NPM since I couldn't wrap my head around Caddy. But the above setting still needed to change in order for it to communicate with the other containers.
  4. I've read and re-read the following: https://github.com/nextcloud/all-in-one/blob/main/local-instance.md

     It seems clear, but I'm missing something, because I cannot get past the domain check. I've tried with NGINX Proxy Manager (official) as the reverse proxy and now the recommended Caddy, using the example compose, Caddyfile, and docker start string. The UI gives the error: "The server is not reachable on Port 443. You can verify this e.g. with 'https://portchecker.co/' by entering your domain there as ip-address and port 443 as port."

     I have port 80 open, since my reading of the documentation linked above is that for an internal instance you *only* need to open 80. I have AdGuard Home running as DNS for the whole network already, and it has an entry for my.example.tld that points to the private IP of my Unraid server. There's mention of modifying the daemon.json file to include the DNS server, but I'm unsure how to do that (see the daemon.json sketch after this list for what I think is meant). I did, however, confirm that the Unraid GUI was pointed to 192.168.1.1, my router IP, and my router has the AdGuard Home instance as its only DNS server. I also tried with Unraid pointed at the IP of the Pi that is running the DNS (192.168.1.24). I saw that there is an option to bypass the domain check, but it almost seems like that is required for a local-only instance? It didn't work that way either (though that was when I was trying to get NPM running).

     I do have domains sitting with Cloudflare, but I have no interest in running through a tunnel, so I have the A and CNAME records pointing to the private IP of the server. I'm set up with WireGuard between the server and all devices I care about, so I can reach any of my Docker containers remotely without exposing them to the internet, which is what I'd like to do with Nextcloud.

     Here is the docker run I kicked things off with:

       sudo docker run \
         --sig-proxy=false \
         --name nextcloud-aio-mastercontainer \
         --restart always \
         --publish 8090:8080 \
         --env APACHE_PORT=11000 \
         --env APACHE_IP_BINDING=127.0.0.1 \
         --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
         --volume /var/run/docker.sock:/var/run/docker.sock:ro \
         nextcloud/all-in-one:latest

     Here are the contents of my Caddyfile:

       {
           http_port 8080
           https_port 8443
       }
       :8080 {
           log {
               output file /config/logs/access.log
           }
           root * /app/www
           file_server
       }
       https://my.example.tld:443 {
           header Strict-Transport-Security max-age=31536000;
           reverse_proxy localhost:11000
       }

     I'm completely lost. I've been scouring the internet for days trying to get this working, and it seems that most folks are doing the standard internet-accessible method and not a local-only one. I see @szaimen all over the internet talking about this, so I understand he's the master and commander of AIO. Please help me understand. I'm still learning all of this and want to connect the dots, but I'm floundering on my own.

     NOTE: One of the domains I have is a .pro; would that negatively affect the ability to get this working? I do have a .com as well, and I also tried with a duckdns.org domain. Nothing I'm trying is working.
  5. Yeah, I generally understand that to be the case, but I'm running into issues getting it to work. I know I'm likely missing something very obvious, but again, the tutorials I find online are about exposing apps to the internet, which I have no desire to do.

     The primary DNS server is the AGH instance at 192.168.1.2, but NPM is running on the Unraid box at 192.168.1.200. I already have the AGH server IP in my router's DNS server setting, and it is working for ad blocking at the DNS level. So what am I putting where? NPM has several tabs and types of hosts I can set, but I'm not 100% sure where to go to get it working. Same thing with AdGuard Home's DNS rewrites: I was able to add a rewrite that answers <tower>.local with the IP 192.168.1.200, and that lands me at the server login page, but AGH doesn't allow for port numbers in that setting. Do I need the same server to be the primary DNS, since it has both NPM and the various apps I want to access by domain?
  6. Unraid 6.12.3. I have several Docker containers running apps that I want to be able to access by domain name instead of IP:port. I do not need to access these services outside my home network, as I have a WireGuard tunnel between my phone, laptop, tablet, and the server itself. I can't seem to find a consensus regarding the best way to accomplish this.

     I have a Raspberry Pi running AdGuard Home as DNS server #1 and a fallback instance as #2 in a Docker container on the Unraid server, with AGH sync propagating any settings I change on 1 to 2. I know that AGH has DNS rewrite functionality, but it only accepts an IP address, not a port. I've read a bit about writing out custom DNS entries in it, but it is not making sense to me. Same issue in Pi-hole, which I used to run: you can specify an IP, but not a port.

     Next I looked into reverse proxies like Nginx Proxy Manager, Caddy, and SWAG. Again, if this *is* the correct way to do it, I'm not understanding how to make it work; most of the tutorials I have found are about services you'd access from outside your network. I do own some domains through Porkbun with Cloudflare name servers, but again I can't get it to work. I'm trying to find a setup where I can type "overseer.mydomain.com" instead of IP:port, without it needing to route out of my network to Cloudflare and back (if that is even possible); see the DNS-plus-reverse-proxy sketch after this list for the pattern I mean. Having SSL certs for the services would be cool, but it's not a deal-breaker if I can just get the DNS -> IP:port part to work. Any suggestions?
  7. @JorgeB All good now! Updated and rebooted after that extended SMART test and everything mounted and apps are loaded up. Given this scenario, is there anything else I should look into? I've already scanned with the "fix common problems" plugin and it's good there too. I also did a quick flash drive backup and I'll continue ensuring my data backups are good to go as well. Thanks for the help!
  8. I did reboot once after receiving the original unmountable messages. No change there. I've downloaded the OS update and I'm just waiting on the SMART test to finish and then I'll give it a reboot. I'm assuming Chromium browsers fare better based on the advice to not use FF?
  9. Version 6.12.0. When I went into my office on Thursday (6/29) morning to start work, I saw that my HP MicroServer G8 had a red flashing status light. After connecting to iLO to figure out what was going on, I found that it was a "power fault". I searched for quite some time on the code thrown by this power fault and found nothing that specifically matched; in fact, most folks who had this same symptom were then unable to power on their server. Mine powered right up. Unraid stated there was an unclean shutdown and needed to do a parity check. Approximately 15 hours later, it completed with no errors. I then started the array and it seemed fine. Apps I have installed opened, I was able to interact like normal with the webUI, etc.

     One weird thing happened when removing an app (I was trying to free up some system resources by deleting unused or otherwise redundant apps, like a backup instance of Adguard Home, for example): I received an error stating it couldn't be removed because it didn't exist. I thought that was a little strange, so I looked and saw there were some orphaned images (I forget the exact verbiage). None were AGH, though. I removed all of those.

     This is the part that's fuzzy to me. I should've written it down as it was happening, but I panicked a bit. Something happened with Docker: either the daemon stopped running/crashed or I did something. Part of it may be that I had way too many apps set to auto-start for the hardware I'm using. Either way, when I tried to start the array again, it wouldn't budge. I'd get the Unraid logo squiggle, but it would go back to "array stopped". So I started in maintenance mode and that worked, but I wasn't exactly sure what to do with anything, so I left it for a bit while I tried to research. It was at this point I noticed that disks 1 and 2 of my data array and both SSDs in my cache pool are showing the subject error message. The file system on HDDs 1 & 2 shows xfs; the pool of two SSDs only shows one file system, and that's btrfs. lsblk results here: Diagnostics attached.

     I should mention that it wanted to do another parity check after I did a "clean shutdown", but I didn't realize at the time that the shutdown time-out is so short by default, so it did yet another unclean shutdown. I ran a short SMART test on disk 1 and it showed no errors. Disk 2's test is still running because I mis-clicked and selected the extended test by accident; the webUI apparently didn't load correctly in Firefox, so it looked lined up with the start button for the short test. My mistake.

     I don't know what else to do. I did see a few other posts with the unmountable symptoms and saw some moderators and other folks link stuff from the wiki, but the couple I tried were literally dead ends: one landed at a 404 page and the other went to the main wiki page. I did see the banner that it's under construction, so that is likely why. If there are any suggestions, I'm open to them. If it's a done deal and I need to start over, that's also fine. Annoying, but fine. I had a decent chunk of stuff backed up, but it was still trying to do the initial backup when it had the power fault. There is some critical data, but thankfully I had it backed up in my iDrive and Backblaze B2 bucket (I'm trying to get rid of iDrive and move to B2 entirely, but hadn't cut that cord yet, thank goodness).

     If anyone has any leads on the power fault that started this mess, please let me know. I had the server connected to an APC 1500VA UPS on the surge/battery backup side before this happened, and I even tested it by unceremoniously yoinking the UPS power cord from the wall, and it did exactly what it's supposed to do (this was weeks ago, btw). Probably a bit cavalier, but hey. Still learning the ropes on this product.

     borg-diagnostics-20230701-1946.zip
  10. So I have Veloren running on my Unraid server, and that is working just fine. I had to install it manually, as there currently isn't a community app template for it, but I was able to get it working on my local network without issue. My kids want to be able to play with their friends, though, so I set up Cloudflared DDNS to point to a domain I have, let's call it play.example.com. This works as well: friends outside our network can join using that domain name, and the Cloudflared app keeps my public IP updated so it properly points to my server. Great, no worries... except that if you ping play.example.com, my public IP comes up. That makes me a bit uncomfortable.

     So what I've been trying to figure out is how to hide my public IP but still have the game reachable by that domain name. So far I've tried ZeroTier set up on Unraid as well as on a VPS I have (very low end, couldn't host the game itself). The ZeroTier control panel shows both talking on the network created for this purpose. I then tried manipulating the firewall on the VPS to pass traffic from it to the Unraid server, or even just the domain name, and neither works. I can't ping the public IP from the VPS; I can ping the VPS from Unraid, though. Then I looked at iptables and tried routing things that way (roughly the NAT sketch after this list), but nothing is working. Shoot, I've even taken to asking OpenAI for tips on how to set this up and tried several of its suggestions, and I still can't get the game to connect through the VPS.

     So my questions: 1) should I even worry about my public IP showing? 2) is there a better approach to shielding my IP in this context? Note: the only port forwarded on my router is the one for this game.
  11. @vstylez_ thank you so much for this. I was pulling my hair out trying to find the solution!
  12. If anyone else ends up with this issue where Duplicacy doesn't see the folder structure you want to back up: I had to add the other path(s) as extra variables in the docker template (roughly the path-mapping sketch after this list). I'm still running into what I believe are permissions issues, but Duplicacy now sees any folder I add as a variable. For whatever reason that wasn't clear to me before.
  13. v6.11.5 - Posting here as it pertains to two separate apps and may very well just be a permissions issue I can't figure out. I have UrBackup running on one Win10 client that has successfully finished its first backup (sort of). I believe I mistakenly added it twice, and one had a different path, so now I have two folders named after that PC with different amounts of data in each:

     > /mnt/user/DESKTOP-{PCname} = 526.37GB
     > /mnt/user/backups/DESKTOP-{PCname} = 288.04GB

     The second of these I created with the intention of having all clients on my network dump their respective backups into it. The larger one contains image backups of a couple of systems before I realized it and stopped it, since I don't care for that feature. But here's weirdness #1: Duplicacy only sees /mnt/user/backup/DESKTOP-{PCname} (note: no "S" in backup). Then there is a "clients" folder under that as well, which also contains a file named after this PC that is 34 bytes. See attachment "Duplicacy_for_PC_backup.png" for the WebUI view of that app, and "pc_location_under_backupS.png" for the SSH view of the file structure on the server end. I can't account for the discrepancy of backup vs. backupS, or where to even begin fixing it. I do have this data backed up to another cloud service currently, so if it's advisable that I just start over, I have no problem doing that. I just want to make sure I'm doing it right.

     Weirdness #2: I installed the Time Machine community app, and it worked right away; my two macOS devices saw it and backed up without issue. Data on the server is housed in /mnt/user/timemachine, see "Server_locations_for_TM.png". However, I can NOT get Duplicacy to see this location in order to send it to the B2 bucket, see "Duplicacy_missing_TM.png". I installed the community plugin for viewing/setting permissions on shares and messed around with that, trying different settings and restarting the Duplicacy container each time, but no change in what it sees under /mnt/user/: that timemachine folder is missing entirely from its perspective. Are there steps I'm missing to grant permissions to shares between containers? I should note that the Time Machine app auto-created that folder when I installed it.

     EDIT: Forgot the photos....
  14. Thanks for that. Was not a solution I was aware of and seems a bit better for my use case than Seafile. I did try Nextcloud and on my old hardware in particular it doesn't run very well. And also has a lot more features than I need so I uninstalled it.
  15. Version 6.11.5 - I have two MacBook Pros, 3 Win10 desktops, and a handful of Android devices (phones and a tablet). I am trying to devise a backup strategy that will keep my most important data safe. Currently on my Windows desktop I am using iDrive to sync important files to their cloud. I don't see that as a viable solution once I add more data from a server like my Unraid setup, so I'm looking at Backblaze B2. Both MacBook Pros are set to use Time Machine to the server via the excellent community plugin of the same name, so local copies are taken care of.

     I looked at Seafile for the Windows seats, since they have a client that can watch directories and sync with the server, so I was inclined to go that way, but I'm willing to hear better options. That would give me Android and Windows file sync to the server in an automated, set-and-forget fashion. I have both Seafile and Duplicacy spun up and working now, but I can't get Duplicacy to play nice with B2 yet. Before I dive into troubleshooting that, I wanted to check with the community for any flaws in my logic, or for tips, tricks, and other suggestions. I do have Plex and all that on the server, but at this point I'm only concerned with backing up the irreplaceable stuff. I did search both here and over on Reddit and couldn't find this same scenario, only guides and options for backing up Unraid to the cloud; I'm missing the best practice for backing up *to* the server in the first place.
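Sketch for #1, the family offsite share: this isn't from any guide, just the kind of server-side restriction I had in mind. Assuming the brother connects over WireGuard on interface wg0 with the tunnel address 10.253.0.2 (both made up for the example) and his share is exposed over SMB, a few iptables rules on the Unraid box could pin that peer to the share and nothing else:

    # Allow the remote peer to reach SMB on this host only
    iptables -A INPUT -i wg0 -s 10.253.0.2 -p tcp --dport 445 -j ACCEPT
    # Drop anything else that peer sends at the host itself (webUI, SSH, etc.)
    iptables -A INPUT -i wg0 -s 10.253.0.2 -j DROP
    # Don't forward that peer anywhere else on the LAN
    iptables -A FORWARD -i wg0 -s 10.253.0.2 -j DROP

Docker's own forwarding rules may mean the drop for container traffic belongs in the DOCKER-USER chain instead, so treat this as the shape of the idea rather than a finished ruleset. Encryption at rest would still have to come from the backup tool on his end (a client-side-encrypted Duplicacy or borg repository, for example), since SMB alone doesn't give you that.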
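Sketch for #4, the daemon.json bit: as far as I can tell, "modifying the daemon.json" just means giving the Docker daemon an explicit DNS server so containers resolve my.example.tld through AdGuard Home instead of whatever they'd use by default. A minimal /etc/docker/daemon.json using the Pi's address from that post would be:

    {
      "dns": ["192.168.1.24"]
    }

Whether that file survives a reboot on Unraid, or whether the equivalent has to be set through the Docker settings page instead, is something I haven't confirmed, so treat the location as an assumption.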
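Sketch for #6, the DNS-plus-reverse-proxy pattern: the two pieces are a DNS rewrite in AdGuard Home that answers overseer.mydomain.com (or *.mydomain.com) with the server's LAN IP, and a reverse proxy on that IP that maps the hostname to the right container port. Since a Caddyfile already appears in #4, here's what that half might look like; the hostname and the 5055 port are placeholders (5055 is what I believe Overseerr defaults to, so substitute your own mapping):

    overseer.mydomain.com {
        # Self-signed certificate for LAN-only use; a publicly trusted cert
        # would need a DNS-01 challenge since nothing here is internet-facing.
        tls internal
        # Overseerr container on the Unraid box
        reverse_proxy 192.168.1.200:5055
    }

The same idea works with an NPM proxy host instead of Caddy; the DNS rewrite is what removes the need to type the port.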
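Sketch for #10, the VPS relay: the rough idea I was chasing is to point play.example.com at the VPS's public IP and have the VPS forward the game port down the ZeroTier tunnel to the Unraid box, so only the VPS address is ever visible. Assuming the game listens on 14004/TCP (I believe that's Veloren's default, but check), the Unraid server's ZeroTier address is 10.147.17.5, and the VPS's public interface is eth0 (all made-up placeholders), the iptables version on the VPS would be something like:

    # Let the VPS forward packets at all
    sysctl -w net.ipv4.ip_forward=1
    # Rewrite inbound game traffic so it heads down the ZeroTier tunnel
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 14004 \
      -j DNAT --to-destination 10.147.17.5:14004
    # Make replies return through the VPS instead of exposing my home IP
    iptables -t nat -A POSTROUTING -d 10.147.17.5 -p tcp --dport 14004 -j MASQUERADE
    # And actually permit the forwarded traffic
    iptables -A FORWARD -p tcp -d 10.147.17.5 --dport 14004 -j ACCEPT

If the game also uses UDP, the same three iptables lines would be repeated with -p udp. Whether the low-end VPS adds enough latency to matter is a separate question.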
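Sketch for #12, what "extra variables in the docker template" boils down to once Unraid turns the template into a container: each host path you want Duplicacy to see has to be mapped in as its own volume. Expressed as plain docker run flags; the /backuproot/... container paths, the 3875 web-UI port, and the saspus/duplicacy-web image are only examples, so keep whatever image and config paths your own template already uses:

    # Each -v maps a host path into the container; only mapped paths are visible inside.
    docker run -d --name duplicacy \
      -p 3875:3875 \
      -v /mnt/user/appdata/duplicacy:/config \
      -v /mnt/user/backups:/backuproot/backups \
      -v /mnt/user/timemachine:/backuproot/timemachine \
      saspus/duplicacy-web:latest

Anything not mapped this way simply doesn't exist from the container's point of view, which is presumably why the timemachine share was invisible in #13 until it got its own mapping.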