Everything posted by CorneliousJD

  1. Sadly, here we are again with a 3rd kernel panic... I'll have to try grabbing my syslog later. 6.9.1 was released but doesn't mention any fix for kernel panics. I never had an issue before 6.9.0. Is the best option here a downgrade? Or is there a way to track down what's causing this with the syslog? The "RIP" code listed here is different from the last screenshot, not sure if that's anything to go off of. The first panic's RIP code referenced nf_nat, the second mentioned btrfs, and now we're back to nf_nat again. Help is greatly appreciated here.
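     For reference, once a syslog has been captured somewhere persistent, something like this should pull out the panic-related lines (the /var/log/syslog path is just the default location and may differ if it's going to a remote syslog server):

        # search a saved syslog for the panic trace and the RIP line
        grep -iE "kernel panic|RIP:|Call Trace" /var/log/syslog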
  2. Ok, I've rebuilt the docker.img file and stuck with btrfs for now. https://wiki.unraid.net/Unraid_OS_6.9.0#Docker The new release talks about different options, but I'm not sure I understand what the benefit of XFS (or a directory) would be for Docker. I left it as btrfs in case I need to roll back (if that's even an option still). I also set up a syslog server on Unraid itself, logging to itself, so hopefully I can catch more of what happens the next time it kernel panics (if it does). Fingers crossed!
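     To double-check what the rebuilt image is actually using, something like this should report the storage driver Docker picked up (a generic Docker check, nothing Unraid-specific):

        # show which storage driver backs the rebuilt docker.img
        docker info | grep -i "storage driver"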
  3. I can certainly do that without too much trouble. Is btrfs the best format for the docker.img file? I recall seeing another option; when would that other option ever be used?
  4. So I didn't format the cache; I did the unassign/reassign, so all the original data on the drives remained intact through 4 different balance operations.
  5. I know the kernel panics are a different issue; if anyone has the ability to help with that, see the thread below. But after 2 different kernel panics on 6.9.0, the server comes back up and host access for Docker is still marked as enabled but doesn't work until I stop Docker and start it again, and then the communication works. Not sure if this was present in 6.8.x, as I never had kernel panics back then; I'm only getting them now on 6.9.0. Again, if anyone can help with the kernel panic problem:
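     For what it's worth, stopping and starting Docker from the command line should do the same thing as the GUI toggle; a rough sketch, assuming the stock Unraid rc script is still at this path:

        # restart the Docker service so host access works again
        /etc/rc.d/rc.docker stop
        /etc/rc.d/rc.docker start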
  6. My recent changes for when this started:
     • Upgraded to 6.9.0
     • Re-aligned my SSD partitions to 1MiB using the unassign/re-assign method below (in the link)
     • Added the "Soon" plugin and set it up
  7. This is now the 2nd time I've gotten a kernel panic since upgrading to 6.9.0 (and changing the SSDs to the 1MiB partition layout). Attached are diags and an image of what was on the screen. I do see it says something about "btrfs release delayed node" - is that related to the panic? I'm really hoping to get some help on this, if at all possible. The first time it happened it was doing a btrfs balance (for the 1MiB partitions), but this time (a few days later) I wasn't doing anything at all outside of running my normal containers. Is there any way to figure out what is going on with this? If not, will downgrading back to 6.8.x affect the SSDs now that I've changed them to 1MiB? server-diagnostics-20210308-0837.zip
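     Since btrfs keeps showing up in the traces, checking the pool for recorded errors seems worth doing; something like this, assuming the cache is mounted at the usual /mnt/cache:

        # show per-device error counters for the cache pool
        btrfs device stats /mnt/cache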
  8. Hmm, I re-ran another manual backup and it took almost 4 hours; I understand the verify will take just as long to check it as well? What's odd is that the entire process (backup and verify) has until now taken about 2-3 hours total. I have not added a significant amount of data since last week (backups run on Sundays at 3AM). Any advice on what I can look into here to make this go faster? 8 hours of downtime is a lot. EDIT: actually, the one thing I did do is change my cache to the 1MiB partition, since they're Samsung EVO drives and were affected by the excessively high write amounts.
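     To see whether anything in appdata actually grew, a quick size check like this should show where the bulk of the backup time is going (the path assumes the default appdata location on the cache):

        # list appdata folder sizes, sorted smallest to largest
        du -sh /mnt/cache/appdata/* | sort -h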
  9. Hey Squid, I had an issue this week during my scheduled backups: it got stuck in "verifying" for 3 hours before I hit abort, then it continued on to check for updates and start containers. Are there any logs I can pull to find out what happened?
  10. I was able to narrow down my HomeAssistant docker. Unrelated to everything else that happened, a USB device wasn't passing through properly. I found that out by just starting the container from the command line: I got the error below, fixed it, started it again, and it worked.

        root@Server:~# docker start HomeAssistant
        Error response from daemon: error gathering device information while adding custom device "/dev/wyzesense": no such file or directory
        Error: failed to start containers: HomeAssistant
        root@Server:~# docker start HomeAssistant
        HomeAssistant
        root@Server:~#

     So now I just need to wait for the parity check to complete, stop the array, remove the 2nd cache drive, start and rebalance, stop, re-add the disk, start and rebalance again, and then I should be good on the 1MiB partition on the SSDs and back to normal. Fingers crossed!
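     For anyone hitting the same thing, a quick check that the device node actually exists on the host would have caught this right away (the device path below is from my error; yours will differ):

        # confirm the passed-through USB device node is present on the host
        ls -l /dev/wyzesense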
  11. I called it a night and went to bed, but yes, it's finished now: "no balance found on /mnt/cache/". I still need to remove the other drive, balance, add it back, and balance again, but the parity check is running so I'll wait for that. Is there any way to glean more info on what caused the kernel panic? And also what the execution error of that container is? Thanks in advance, much appreciated!
  12. HomeAssistant is actually the only container that won't start; all the others (50-some of them) have started fine. When trying to run HomeAssistant I get this "Execution Error" / "Server error" with no other information. This container is in br0 mode running on a different IP address, but so is PiHole and that started without issues. I pulled up the server log and I see a few references to eth0 and IPv6 addresses, but I'm not sure what to make of it; everything else in the server log is related to the BTRFS balance that's going on.

        Mar 5 01:45:03 Server kernel: docker0: port 48(veth013765a) entered blocking state
        Mar 5 01:45:03 Server kernel: docker0: port 48(veth013765a) entered disabled state
        Mar 5 01:45:03 Server kernel: device veth013765a entered promiscuous mode
        Mar 5 01:45:08 Server kernel: eth0: renamed from vethd164395
        Mar 5 01:45:09 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth013765a: link becomes ready
        Mar 5 01:45:09 Server kernel: docker0: port 48(veth013765a) entered blocking state
        Mar 5 01:45:09 Server kernel: docker0: port 48(veth013765a) entered forwarding state
        Mar 5 01:45:10 Server avahi-daemon[10338]: Joining mDNS multicast group on interface veth013765a.IPv6 with address fe80::e894:abff:fe00:467f.
        Mar 5 01:45:10 Server avahi-daemon[10338]: New relevant interface veth013765a.IPv6 for mDNS.
        Mar 5 01:45:10 Server avahi-daemon[10338]: Registering new address record for fe80::e894:abff:fe00:467f on veth013765a.*.
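     The GUI's "Execution Error" hides the actual daemon message, so starting the container from the command line might surface something more useful (container name as it appears in the Docker tab):

        # start the container manually so the daemon's real error is printed
        docker start HomeAssistant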
  13. So I was doing the unassign/reassign method of fixing the high cache writes on 6.9.0 stable. I had removed the first of two disks, the rebalance was fine, then I added it back, and during the rebalance I got a kernel panic. IPMI shows this. Anonymized diags attached. Does anyone know what actually happened here? Also, miraculously, I'm able to boot back up, and a parity check AND another balance are both running. Docker containers all seem to be starting up, and appdata is all still accessible. I thought for sure the cache pool would have suffered badly from this, but it seems like it may be correcting itself without too much fuss. Fingers crossed... Thanks in advance! server-diagnostics-20210305-0126.zip
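     For anyone watching a rebalance like this, progress can be checked from the console with something along these lines (standard cache mount point assumed):

        # check progress of the running balance on the cache pool
        btrfs balance status /mnt/cache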
  14. I've set up this container but it just doesn't work; loading IP:7777 fails to load the page. The log shows:

        > [email protected] start /usr/src/app
        > node server.js
        info: compressed application.js into application.min.js
        info: loading static document name=about, path=./about.md
        info: listening on 0.0.0.0:7777
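     Since the app claims it's listening on 0.0.0.0:7777, the next thing worth checking is whether the port actually made it to the host; a rough sketch (the container name and host port are assumptions based on the template defaults):

        # show the container's published ports, then try hitting it from the Unraid host itself
        docker port Hastebin
        curl -I http://localhost:7777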
  15. FWIW they also have a very active Discord server, and there's a #techsupport channel there. I have gotten lots of help there in the past with AMP.
  16. So I honestly have no idea. I don't personally use sticky backups in my instances, but this is definitely more about how AMP works than about the Docker container. You might have better luck asking about AMP itself over at their support forums: https://support.cubecoders.com/ If I knew the answer, of course I'd tell you here, no problem! This thread is more focused on Docker- and Unraid-specific setup issues and troubles, though.
  17. Makes sense, I figured it would be something like that, but it asked that I post the message here, so I looked first and nobody else had; I didn't want to ignore the message, haha. Thanks Squid!
  18. When trying to install Hastebin I get the following:

        Something really wrong went on during createXML
        Post the ENTIRE contents of this message in the Community Applications Support Thread
        Warning: explode() expects parameter 2 to be string, array given in /usr/local/emhttp/plugins/community.applications/include/exec.php on line 1582
        Warning: array_filter() expects parameter 1 to be array, null given in /usr/local/emhttp/plugins/community.applications/include/exec.php on line 1582
        Warning: array_values() expects parameter 1 to be array, null given in /usr/local/emhttp/plugins/community.applications/include/exec.php on line 1582
        {"status":"ok","cache":null}
  19. I'm on mobile, but *instance* start and *application* start are different. Inside the instance, be sure it's set to start the application under its configuration and then, I think, AMP Core? Otherwise it will start the instance but not the actual Minecraft (or game) server.
  20. I'd also suggest just picking a different user and pass to see if that works, if Mitch's suggestion above doesn't.
  21. Thanks for posting this. I had always had mine set up on a public URL that I had Photoview on for testing purposes, and it was loading fine!
  22. Interesting, I had that issue for a while, but not for too long. If your library is huge, it might take hours to fill out, but if you have a small collection it shouldn't take more than a few minutes to start showing up. This is the official Docker container, so it may be worth reaching out to them on their GitHub page if it doesn't show up within a few hours? Sorry I can't be of more help directly; I didn't make the program or the container, just the template.
  23. Check your appdata photoview folder. Is it actually rendering thumbnails there for you?
  24. I successfully tested with MariaDB and used the example string in the template:

        user:pass@tcp(IP:PORT)/database

     For example, if your unRAID IP address is 192.168.1.100, your MariaDB or MySQL container running on unRAID is on port 3306, and you have a user of "photouser" with a pass of "photopass" and a database name of "photoview", then you would enter the following:

        photouser:photopass@tcp(192.168.1.100:3306)/photoview
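     If the database and user don't exist yet, they can be created ahead of time; a rough sketch run against the MariaDB container (the container name, the root password prompt, and the example credentials above are all assumptions):

        # create the database and a dedicated user for Photoview (MariaDB 10.1.3+ syntax)
        docker exec -it MariaDB mysql -uroot -p -e "
          CREATE DATABASE IF NOT EXISTS photoview;
          CREATE USER IF NOT EXISTS 'photouser'@'%' IDENTIFIED BY 'photopass';
          GRANT ALL PRIVILEGES ON photoview.* TO 'photouser'@'%';
          FLUSH PRIVILEGES;"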