Everything posted by adminmat

  1. I've been using this successfully with an X10 Supermicro for years. Just today I noticed all the sensors are missing except for the HDD, and the fan controls have no effect. It all used to work great. Did you find a solution?
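
     In case it helps narrow things down, one sanity check is to ask the BMC for its sensors directly with ipmitool, bypassing the plugin entirely. A sketch, assuming ipmitool is available; BMC_IP and the ADMIN credentials are placeholders for your own:

     ```
     # Query the BMC in-band from the unRAID console:
     ipmitool sensor list

     # Or over the network (BMC_IP / ADMIN / ADMIN are placeholders):
     ipmitool -I lanplus -H BMC_IP -U ADMIN -P ADMIN sensor list
     ```

     If the BMC still reports fan and temperature sensors here, the problem is likely on the plugin side rather than the board.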
  2. Thanks for this reply. I haven't had a chance to dig into this again. So basically I have a Raspberry Pi server running Raspbian Lite at my parents' house. Its main purpose is remote backups. It's rack mounted with a 4TB HDD, on its own VLAN on that network. I want to connect to it via WireGuard periodically and rsync to it. Currently I'm using ZeroTier for this but want to switch to WG. So my unRAID server runs the WG server, and the RasPi a few states away will be a WG peer. I created the WG config file on unRAID and copied it to the Pi over SSH (roughly the shape sketched below). Opened a port on the router. Opened a port on the Pi's firewall. But I can't get it to connect. There was no straightforward way to install WG on that Pi since it's running Buster. I'll dig into it more this weekend and follow up. I have a fancy little OLED screen for this little server running a Python script, and I know I'm going to break it if I install a new OS 😂 Attaching a couple images of my little 3D printed mount...
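
     For reference, the config I'm pushing to the Pi is shaped roughly like this (a sketch: the keys, addresses, and endpoint are placeholders, and 10.253.0.0/24 is unRAID's default WireGuard subnet):

     ```
     # /etc/wireguard/wg0.conf on the Pi (placeholder values throughout)
     [Interface]
     PrivateKey = <pi-private-key>
     Address = 10.253.0.2/32

     [Peer]
     PublicKey = <unraid-server-public-key>
     Endpoint = my-ddns-hostname.example.com:51820
     AllowedIPs = 10.253.0.0/24, 192.168.10.0/24
     PersistentKeepalive = 25

     # bring it up with: sudo wg-quick up wg0
     ```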
  3. Alright, got it working now. The disconnect was that I had Host Access to Custom Networks DISABLED. Maybe that happened after I upgraded the OS? Because I know I had it working at one point... I have a custom network for the PiHole docker container; this was set up to remedy the kernel panics. Host Access to Custom Networks = Enabled. Static routes are set in my router from the WG network 10.253.0.0/24 to the unRAID server at 192.168.10.69 (see the sketch below). All working as intended. Interestingly, I can't ping the PiHole server from my device through the WireGuard tunnel, although it will resolve / block DNS properly for that device/client.
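
     For anyone following along, the static route just tells the LAN router to hand WireGuard-subnet traffic to the unRAID box. On a Linux-based router it would look something like this (a sketch; on a consumer router you'd enter the same two values in its static-route page):

     ```
     # Send WireGuard tunnel traffic (10.253.0.0/24) via the unRAID server
     ip route add 10.253.0.0/24 via 192.168.10.69
     ```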
  4. In the original WireGuard thread from 3 years ago, the first page was all questions about how to get this working, and it still hasn't been resolved. I'm going to pull PiHole off unRAID and just use a dedicated RasPi. Not worth the headache.
  5. I see that the issue of getting PiHole to work through WireGuard was on the first page of comments. I'm having this issue now, but somehow had it working before. I point my router at PiHole to get whole-network ad blocking, which works well. But when I configure a peer to use PiHole's address (see the snippet below), it doesn't resolve and I lose internet on that remote peer. I have PiHole in a container on a custom network on a VLAN (to prevent the kernel panics). Using Remote Tunneled Access I cannot get this working. It's been 3 years since this was raised in this thread. Any resolutions?
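
     For clarity, the setting in question is the DNS line in the peer's [Interface] section; the address below is a placeholder for the PiHole container's IP on my custom network:

     ```
     # Peer config excerpt (placeholder addresses)
     [Interface]
     PrivateKey = <peer-private-key>
     Address = 10.253.0.3/32
     DNS = 192.168.20.2   # PiHole container on the VLAN'd custom network
     ```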
  6. I'm having this same issue. Did you find a fix? I have a similar setup except I run PiHole on unraid in a container. It used to work for me but not anymore.
  7. I'm running Nextcloud 25 with SWAG and MariaDB and hoping to get rid of some of my warnings. I know this has been asked 1,000 times... To get rid of: Your web server is not properly set up to resolve "/.well-known/webfinger". Further information can be found in the documentation ↗. Your web server is not properly set up to resolve "/.well-known/nodeinfo". Further information can be found in the documentation ↗. I read you have to delete the default file at /mnt/user/appdata/nextcloud/nginx/site-confs/ and then restart the container per this, then replace lines in the default file with what's described here. Is this still correct (something like the snippet below)? I'm scared I'm going to break something.
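
     From what I can tell, the lines in question boil down to two nginx location blocks in that default site conf, per the Nextcloud documentation (a sketch; the linuxserver image's conf may wrap these inside an existing .well-known block):

     ```
     # Inside the server { } block of Nextcloud's nginx site conf
     location = /.well-known/webfinger { return 301 /index.php$request_uri; }
     location = /.well-known/nodeinfo  { return 301 /index.php$request_uri; }
     ```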
  8. Anyone successfully set up a Raspberry Pi WireGuard peer? I spent about 10 hours on this with no joy. All the tutorials are outdated or require you to wipe the SD card and install a new OS.
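
     The least destructive route the tutorials suggest for Buster is installing WireGuard from buster-backports instead of reflashing. A sketch, with the usual caveats (the keyserver line and key IDs are taken from those guides, so treat them as assumptions to verify):

     ```
     # Add buster-backports and install WireGuard on Raspbian Buster (sketch)
     echo "deb http://deb.debian.org/debian buster-backports main" | sudo tee /etc/apt/sources.list.d/backports.list
     sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 04EE7237B7D453EC 648ACFD622F3D138
     sudo apt update
     sudo apt install -y raspberrypi-kernel-headers
     sudo apt install -y -t buster-backports wireguard wireguard-tools
     ```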
  9. Ok, solved with the help of Discord. I added a Read-Only path to /etc/localtime in the template and time is showing correctly in Firefox.
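
     In docker run terms, the fix amounts to one extra read-only bind mount, something like this (a sketch; the image tag and any other flags are whatever your template already uses):

     ```
     # Map the host's timezone file into the container read-only
     docker run -d --name=firefox \
       -e TZ=America/New_York \
       -v /etc/localtime:/etc/localtime:ro \
       lscr.io/linuxserver/firefox:latest
     ```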
  10. Greetings. I set up a Firefox Docker container using linuxserver.io's template. The timezone is off and I can't find a solution. Running env in the docker console shows TZ=America/New_York, which is correct, but if I run new Date().toString() within Firefox's browser console it outputs GMT+0000, which is incorrect. So websites I use, and Browserspy.dk, show the incorrect time. I've tried confirming privacy.resistFingerprinting is set to false, and adding an Extra Parameter of -e TZ=US/Eastern in the docker template. Note that my other Docker containers, such as PiHole, are showing the correct time. How do I set the correct timezone?
  11. This helped all these years later even in Windows 10. 🍻
  12. I'm not sure this is related to this containerized instance, but I'm not getting any log entries at all in my debug.log, even within the container. I haven't had an entry since 2021-05-12. Could it be that I'm simply throwing no errors, or, more likely, is something not right? I've tried setting log_level: to INFO, ERROR and DEBUG. Nothing. I have log_filename: set to log/debug.log. Are others still getting debug.log entries?
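
      For comparison, this is the shape of the logging block I'm talking about in chia's config.yaml (a sketch; inside the container it normally lives under /root/.chia/mainnet/config/):

      ```yaml
      # logging section of chia's config.yaml
      logging:
        log_filename: log/debug.log
        log_level: INFO      # also tried ERROR and DEBUG
        log_stdout: false    # when true, logs go to stdout instead of debug.log
      ```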
  13. Make sure in your config file you have STDOUT set to true, and check /mnt/user/appdata/chia/logs/plotting. I don't use chiaharvestgraph. My disks don't spin down; balls are always green.
  14. Something doesn't sound right. Can you post a screenshot of where it says connection closed?
  15. Anyone found a way to map a new destination drive while the container / plotter is running? My USB drive is full and I have 6 plots going... I hope I don't have to stop progress 🤓
  16. I'm thinking USB issue. I want to move the drive getting the I/O errors to a USB 2.0 port, but then I have to reboot unRAID.
  17. Been getting these spammed to a disk log for a new external USB Easystore. I think it may be a connection issue. The drive seems to be working fine. No SMART errors.
  18. This is becoming quite an issue. When adding another external USB drive, this time to the USB ports in the front, the disk shows up in the GUI as Dev 4 and is given (sdf) with a mount option. It then disappears from the GUI and reappears at random, over and over, spamming the log with: 2021-05-23T07:49:14-04:00 Tower unassigned.devices: Disk with serial 'My_Book_25EE_35504A42374A3542', mountpoint 'My_Book' is not set to auto mount. If I click unmount and then re-mount, it no longer shows up as Dev 4 (as previously experienced) and cannot be used in unRAID as intended. Now I have to reboot every time I add an Unassigned drive, which will not work for my use case. Is there a way to disable UD so it can't see new disks added, but keep the disks I have in UD as is? Then I could just add disks via the CLI (along the lines of the sketch below). Will this conflict with the fstab settings? See disk log:
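
      To spell out what I mean by adding disks via the CLI, something along these lines (a sketch; the device node and mountpoint are placeholders taken from the log above):

      ```
      # Mount the drive manually, outside of Unassigned Devices
      mkdir -p /mnt/disks/My_Book
      mount /dev/sdf1 /mnt/disks/My_Book

      # and to detach it later:
      umount /mnt/disks/My_Book
      ```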
  19. After updating to 1.1.6, it now throws some errors when starting Swar. Update: pip install -r requirements.txt solved it.
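
      For anyone else hitting this, the full sequence was along these lines (a sketch; the path is wherever you keep Swar's plot manager checked out):

      ```
      # Update Swar's plot manager and refresh its Python dependencies
      cd /path/to/chia-plot-manager
      git pull
      pip install -r requirements.txt
      python manager.py restart
      ```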
  20. Interestingly, when I installed 1.1.5 it showed up as 1.1.6dev0. There was an issue on GitHub about this as well, with direction to delete a .json file. The devs said it wasn't an issue. My 1.1.6 shows up as 1.1.7dev0 as well. Note that I did not use the container template by Partition Pixel; I manually entered the official docker URL location.
  21. Thanks for the explanation about not being recognized as removed. In the testing above I did have one disk in USB 2.0 and one in 3.0. I'm using the ports in the back of the motherboard (I/O shield). I'm noticing some weird transfer speeds back there with the 3.0s; they're running at ~90MB/s. I'll try the front header ports when I get a chance. Waiting on an adapter to re-route them to the back. Will report back.
  22. UD is up to date. Hit refresh in the UD section of the Main page (got a Success notification). Tried un-mounting and remounting, then un-mounting, disconnecting, reconnecting, and mounting. Drives are definitely spun up, as I moved a test file on both to check (ball is green). After all that, it still says it cannot read attributes / capabilities for both disks, and still shows sdi, sdj. If I click Download SMART report it just turns that page black. I am getting drive temp, read/write speeds, capacity, and usage. What should I try next? Edit: Another thing to note: although I have these drives mapped to a Docker container, and the mapping has been working fine for a week, the container can no longer see these two drives. Edit: Reboot fixed it.