screwbox

Members
  • Posts: 15
Everything posted by screwbox

  1. Nevermind, I was just too scared and couldn't think clearly. After a bit of reading in this wonderful forum I found out to simply start the array in maintenance mode and run a filesystem check, first with -n, and after finding that the XFS filesystem was recoverable, again without the -n, and tada, everything is back up and running like before. I'm offloading the data from the cache drive right now so that afterwards I can make a whole new pool out of those two SSDs and then move the contents of appdata back to this new pool, all of it using the usual mover approach, as one would when changing SSDs etc. Thanks anyway.
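The check described above can be sketched like this. The device name /dev/sdX1 is a placeholder, not taken from the post, and this should only be attempted while the array is started in maintenance mode:

```shell
# Placeholder device name - substitute your actual cache device.
DEV=/dev/sdX1

if [ -b "$DEV" ]; then
    # Dry run: -n reports problems without modifying anything.
    xfs_repair -n "$DEV"
    # If the dry-run report looks recoverable, repair for real:
    xfs_repair "$DEV"
else
    echo "device $DEV not found - adjust DEV first"
fi
```

The -b guard keeps the commands from running against a device that does not exist, which is also why the dry run with -n comes first: it lets you inspect the damage before anything is written.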
  2. Bought a new 1TB SATA SSD and thought I could just expand my cache pool by setting pool devices from 1 to 2 and adding the second SSD to my old one holding my Docker container data (the appdata folder). I didn't even notice whether it was BTRFS or XFS, so afterwards I can't tell which it was. Started the array and both drives were unmountable. Wanting to undo my mistake, I somehow managed to delete all pools, then started a new one and added my original single SSD in the hope that the data on it would still be there when starting the array. Still an unmountable volume... So did I break all the data on the disk? Backups exist, but they are a day old, and I simply don't want to accept having to restore them if I could simply regain my data and just don't know how. I hooked the drive up to my Arch machine with a USB adapter and GParted reports "unknown" for this drive. Any chance to get the data back?
  3. I'm trying to get my Valheim server problems solved and it seems I'm out of luck, so maybe you guys can help me out. I get random disconnects from my server. I checked the autoupdate setting, but as I understand it that is fixed anyway because it is always "false". The last thing I've read somewhere is that I have to make sure my instance isn't using crossplay and remove the -crossplay option in my start_server.sh. But all of those articles deal with normal dedicated server installations, not this Docker version. So how do I make sure that my container isn't using the -crossplay option? Or do you have any other ideas? I'm hosting some other games and services and there are no problems with disconnects like we have here with this Valheim Docker container. There's nothing left I can try, and those constant, randomly occurring disconnects make playing a real pain.
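For reference, removing the flag from a start script could look like the sketch below. The script contents and its location inside the container are assumptions (they vary per image), so this demonstrates the edit on a stand-in copy:

```shell
# Create a stand-in start script; in the Docker case you would edit the
# real start_server.sh inside the container (exact path is an assumption
# and differs per image).
cat > start_server.sh <<'EOF'
./valheim_server.x86_64 -name "My Server" -port 2456 -world "Dedicated" -crossplay
EOF

# Remove the flag in place, keeping a .bak copy of the original.
sed -i.bak 's/ -crossplay//' start_server.sh

# Verify the flag is gone.
grep -- '-crossplay' start_server.sh || echo "crossplay flag removed"
```

Note that container-side edits like this are lost when the container is recreated, so with a Docker image the durable fix is usually a template variable or a mapped file, if the image offers one.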
  4. I'll just jump in here: I have the same symptoms, also with Docker containers on br0. I'll test that right away as well.
  5. So I'm really lost on this one. Googling and searching this topic isn't helping, because I can't find anything regarding the only output in the log: "ifelse: fatal: unable to spawn importas: No error information", and nothing more happens. MariaDB seems to run fine, and I already tried reinstalling the Docker container with the same options I had before, just with a freshly downloaded image. From running fine to this in just one night. I'm really questioning whether this whole Nextcloud thing is the right thing for me. Every time an automatic update or the like happens, I'm stuck with problems again. But this time I can't help myself, because I can't even google the error message. I really need help here, I think. EDIT: Okay, just for the rest of the world: I'm dumb! But at least I found out that A: it is not a good idea to change the docker.img to a docker directory (it works, but it's somewhat problematic), and B: the docker directory was the problem. I switched back to a docker.img, and after adding all containers again everything is running as normal. I was so convinced that the docker directory had been working fine for weeks now. But I kind of get now what all the hassle around this is about. So if this is a productive system with important data, just use the normal docker.img; it will save you from big headaches.
  6. So okay, there is something I'm obviously missing. The only thing I saw was "Posted October 21, 2020" in the top bar of your post. I literally searched the first page of this thread and can't even find "Edited October 14 by mgutt", so I think I'm too dumb to see this information. But anyway, now that I at least have your attention: the question was more about whether that information is still compatible and valid for a 2017 CPU combined with the newest stable release of Unraid, since there have been so many changes in the background, like losing the original Nerdpack, and NerdTools missing some functionality, that I'm not sure whether everything is still compatible or relevant anymore. Many how-tos and tutorials have become irrelevant these days because of this. A simple "yes, should be fine" or "no, not working anymore because of this and that" would be sufficient. I never intended to cast a bad light on your efforts/tutorial and call them "outdated" etc.; if it came across that way, I'm sorry. It was not intentional!
  7. Are those instructions from the first page still relevant and up to date since 2020? I'm going to tune some power savings on my Core i3 8100 / Asus P11C-I Unraid server and just want to make sure I'm not applying old settings that are no good these days...
  8. I would also appreciate an icon for this case!
  9. Ah, okay! I didn't know that. Of course I don't have a UniFi router, so I can't expect any traffic data. Makes sense. So, one thing less to care about and nothing I need to fix. This is good news, because I was already making more work out of it than I should. I should have asked here sooner, I think. Thanks anyway!
  10. I received my very first UniFi AP AC Lite yesterday and spun up this Docker container pretty quickly. Adopted the AP and everything is running fine. While exploring the functionality of this glorious piece of hardware and software (sorry, I've only had pretty shitty routers and APs in the past) I stumbled upon the "Traffic Stats", and at first I thought "no statistics data" was okay, because some data has to be generated first. But today, some hours later, there are still no statistics to display. So what is going on? Did I miss something? I tried searching a bit outside this community and found that there can be some problem with a messed-up MongoDB or something, but everything I found is misleading because it isn't meant for the Docker implementation. I didn't find anything regarding my problem here either. I did nothing special to get the container running: touched nothing, only changed the network from "Bridge" to "Custom: br0" and gave the container an IP address, because I like my services on dedicated IPs. Is there anything I forgot?
  11. I have set up Nextcloud with MariaDB behind SWAG, using the very good YT tutorials from Spaceinvader One. Everything is fine: I can log in, I have activated 2FA with TOTP, and it is working. Now I'm curious and wanted to set up the 2FA notification plugin. At first everything seemed to work: log in with user and password, get a notification in my iOS app, and I can "approve". But then the login window that triggered the push notification does nothing, and the login goes nowhere. At second glance I noticed that the notification in the app seems to come from my local gateway address and not from my external IP. So I think there is some problem with my SWAG setup, but I really don't know how to fix it. I thought I was clever enough just applying Spaceinvader's instructions for the old Letsencrypt Docker to my SWAG container, but now it seems that was not the whole story.
  12. Just a quick response: I ended up using a Pi running my DNS, aka Pi-hole. Everything is running now.
  13. Clearly not the best way. I should have noted that it is not 100% a DNS server; it is just the dnsmasq from my Pi-hole Docker. But it is really annoying to lose any local "DNS-like" resolver when my homelab is built around this blabla.local domain. Besides that, if I lose my Pi-hole Docker I not only lose my internal resolver, I also lose my resolver for the whole internet at home. So my choice was either to let Pi-hole run on a Pi without any UPS and other things like automated backups etc., or to let it run as a Docker container on my Unraid, which IMHO is the better of the two. But again, the internal dnsmasq resolving of my "whatever".local domain is not working when connected to WireGuard. So what to do? Go the only other way, set up Pi-hole on a Pi again, and lose all the benefits I get when hosting Pi-hole as a Docker container? Is this really the only way? Maybe I should learn to set up a Pi-hole cluster made of two Pis or something like that...
  14. Excuse me, I don't want to sound ignorant, as I didn't read the whole thread, I just searched through it a bit. But I can't find any hint about what to do with the routing when my router isn't able to set custom routes. So I can't set up the static route that is needed for WireGuard to be fully functional and for even the Docker containers to be reachable through WireGuard. The biggest bummer is that my DNS is a Docker container, so when I'm connected to WireGuard I have no DNS etc., which is a big problem at the moment. Any suggestions?
  15. So, I have studied this topic many times, and though I'm not a native English speaker, I still think I understand everything correctly: if my router is some kind of cheap end-user ISP device which lacks the possibility of custom routing, or routes in general, and all I can set up is simple NAT, do I have no chance to access all my Docker containers through WireGuard? My setup is quite simple. I have two NICs, one (br0) for the Unraid web frontend and WireGuard etc., and the other (br1) used for all Docker containers with custom IP addresses in my LAN. I use one subnet for everything in my LAN: 192.168.0.0/24. I'm not talking about DNS; I can't even ping my Docker containers on their own IPs. So I initially thought it would be enough to separate the WireGuard interface from the Docker interfaces, but that didn't do the trick. So I read this thread, and the only thing I'm missing is the custom route in my router, which I cannot set. What are my options now? It would be so convenient to connect to WireGuard, open my Heimdall Docker container and get everywhere I want (HomeKit, diyHue, Plex, Nextcloud, etc.). But I can't get it to work. Or am I missing something?
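One workaround that is sometimes suggested when the router cannot hold a static route is to masquerade the WireGuard traffic on the Unraid box itself, so replies from the LAN devices go back via Unraid instead of the router. This is only a sketch under assumptions not taken from the posts above: tunnel subnet 10.253.0.0/24 and br1 as the container interface are placeholders, and containers on a macvlan custom network may additionally need "Host access to custom networks" enabled in Unraid's Docker settings:

```shell
# Assumed values - check "wg show" and your network settings first.
WG_SUBNET=10.253.0.0/24   # WireGuard tunnel subnet (assumption)
DOCKER_IF=br1             # NIC carrying the Docker containers (assumption)

# NAT all tunnel traffic leaving via the container NIC, so replies
# return to Unraid itself and no static route on the router is needed.
if [ "$(id -u)" -eq 0 ] && command -v iptables >/dev/null 2>&1; then
    iptables -t nat -A POSTROUTING -s "$WG_SUBNET" -o "$DOCKER_IF" -j MASQUERADE
else
    echo "would run: iptables -t nat -A POSTROUTING -s $WG_SUBNET -o $DOCKER_IF -j MASQUERADE"
fi
```

The trade-off of masquerading is that the containers see all VPN clients as Unraid's own address rather than their tunnel IPs, which matters if any service does per-client access control.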