hive_minded

Members
  • Posts: 20
Everything posted by hive_minded

  1. LSIO Jellyfin docker stops working after ~20-25 seconds. Logs attached; it worked fine for 2+ years before this. Not sure what I broke. JellfinLogs.txt
  2. [ 22.933226] i915 0000:00:02.0: [drm] VT-d active for gfx access
     [ 22.933482] i915 0000:00:02.0: vgaarb: deactivate vga console
     [ 22.933521] i915 0000:00:02.0: [drm] Using Transparent Hugepages
     [ 22.934726] i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
     [ 22.935295] mei_hdcp 0000:00:16.0-b638ab7e-94e2-4ea2-a552-d1c54b627f04: bound 0000:00:02.0 (ops i915_hdcp_component_ops [i915])
     [ 22.937682] i915 0000:00:02.0: [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
     [ 22.938811] [drm] Initialized i915 1.6.0 20201103 for 0000:00:02.0 on minor 0
     [ 23.002401] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
     [ 23.060813] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
     [ 23.122299] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
     Thoughts? I'm having trouble with Jellyfin and believe this could be why.
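Not part of the original post, but since the worry here is Jellyfin transcoding: a quick check (the device path is the conventional one, not something these logs confirm) for whether i915 actually exposed a render node. As far as I understand it, "Cannot find any crtc or sizes" on its own usually just means no display is attached, which is harmless for transcoding - transcoding needs the render node, not a connected output.

```shell
# Check whether the i915 driver exposed a render node - /dev/dri/renderD128
# is what Jellyfin normally maps in for Quick Sync / VAAPI transcoding.
check_render_node() {
  if ls /dev/dri/renderD* >/dev/null 2>&1; then
    echo "render node present"
  else
    echo "no render node found - i915 may not have initialized"
  fi
}
check_render_node
```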
  3. Mine looks similar to this; however, I am still seeing the issue.
  4. I get hang-ups requiring a forced power down every week or two (8600 on 6.9.2). I was hoping 6.10.0 would fix it, but I'm waiting until a stable release comes out. I suppose I could try disabling the iGPU, but that would break Jellyfin transcoding, which isn't ideal.
  5. Did you find a solution to this? I get these same random freezes every few weeks too. My server was stable for the first 4-5 months I had it, but over the last 2-3 months I keep getting these random freezes and I can't figure out what is causing it. I've only ever been on 6.9.2. Considering upgrading to 6.10.0 RC2, but I'm trying to wait it out until a stable version comes out.
  6. Fair enough - and the scrubbing unfortunately did not work; I've had 2 freezes since trying that. I guess I'll have to reseat the M.2 drives and PCIe card, as well as run a memory test. And then try upgrading to 6.10.0-rc1? It's just odd that it worked flawlessly for the first 4-5 months and the issues started without any hardware change/disruption. All of the hardware was purchased new, so everything is pretty fresh.
  7. 36 hours later, running "btrfs dev stats /mnt/cache" shows 0 errors. I am going to tentatively consider this one solved, though I will be keeping a close eye on errors in the future, and I'm willing to try a single-drive XFS cache if btrfs continues to have issues. Right now everything appears stable though. Thanks again for your help @JorgeB
  8. Ok gotcha, thanks for the responses. I'll keep a close eye on it now to make sure new errors don't pop up.
  9. Running a scrub on my cache drives spits out this:

     Error summary: verify=54 csum=76277
     Corrected: 0 Uncorrectable: 0 Unverified: 0

     **edit** Running it again with the 'Repair corrupted blocks' box checked gives a slightly different result, and it looks like some were corrected:

     Error summary: verify=53 csum=76256
     Corrected: 76309 Uncorrectable: 0 Unverified: 0

     I tried to run an extended SMART self-test on both of them, but when I click 'START' nothing appears to happen. However, both of them say 'PASSED' in the 'SMART overall-health' field at the bottom of the drive information.

     **edit #2** After forcing a reset of the stats by running "btrfs dev stats -z /mnt/cache" and scrubbing, this is my new output:

     [/dev/nvme0n1p1].write_io_errs 0
     [/dev/nvme0n1p1].read_io_errs 0
     [/dev/nvme0n1p1].flush_io_errs 0
     [/dev/nvme0n1p1].corruption_errs 0
     [/dev/nvme0n1p1].generation_errs 0
     [/dev/nvme1n1p1].write_io_errs 0
     [/dev/nvme1n1p1].read_io_errs 0
     [/dev/nvme1n1p1].flush_io_errs 0
     [/dev/nvme1n1p1].corruption_errs 152533
     [/dev/nvme1n1p1].generation_errs 107

     It appears that the scrub added 107 generation errors. The link you posted mentioned that it can often be related to cables, but I'm not actually running any cables - both SSDs are in a QNAP 2x M.2 PCIe card. So it looks like either the drive itself may be going bad (which is odd, because it's a WD Blue, which I think is usually pretty reliable, and it's only 6 months old), or maybe the QNAP PCIe card has something wrong with it?
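A small helper (mine, not from the thread) to automate the all-counters-zero check: it parses `btrfs dev stats` output and exits non-zero if any counter isn't 0, so new errors show up early instead of after the next freeze.

```shell
# Flag any non-zero counter in `btrfs dev stats` output.
# Usage: btrfs dev stats /mnt/cache | flag_btrfs_errors
flag_btrfs_errors() {
  awk '$2 != 0 { print "non-zero:", $1, $2; bad = 1 }
       END { exit bad ? 1 : 0 }'
}
```

Dropped into something like the User Scripts plugin on a schedule, the exit code can drive a notification.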
  10. I don't know that the first thread 100% applies to me - searching through my syslog I don't see any 'kernel panic' errors. Looking through the last few hours, the few that caught my (very untrained) eye are:

      Sep 30 04:08:21 alexandria kernel: WARNING: CPU: 7 PID: 0 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
      Sep 30 04:08:21 alexandria kernel: cpuidle_enter_state+0x101/0x1c4
      Sep 30 04:08:21 alexandria kernel: cpuidle_enter+0x25/0x31
      Sep 30 04:08:21 alexandria kernel: do_idle+0x1a6/0x214
      Sep 30 04:08:21 alexandria kernel: cpu_startup_entry+0x18/0x1a
      Sep 30 04:33:53 alexandria kernel: BTRFS error (device nvme0n1p1): bad tree block start, want 1823784960 have 0

      I also see a lot of 'netfilter', which the OP in the first thread you linked mentioned. Two of my dockers - Netdata and Tailscale - are set to use 'host' networking (the rest of my dockers are using 'bridge'). The link you posted also mentioned that host access could be causing the issues, so I will switch both of those to bridge.

      Running the command in the second link to check btrfs pools for errors spits out this:

      [/dev/nvme0n1p1].write_io_errs 0
      [/dev/nvme0n1p1].read_io_errs 0
      [/dev/nvme0n1p1].flush_io_errs 0
      [/dev/nvme0n1p1].corruption_errs 0
      [/dev/nvme0n1p1].generation_errs 0
      [/dev/nvme1n1p1].write_io_errs 161864479
      [/dev/nvme1n1p1].read_io_errs 129699470
      [/dev/nvme1n1p1].flush_io_errs 2518939
      [/dev/nvme1n1p1].corruption_errs 164095
      [/dev/nvme1n1p1].generation_errs 0

      So it definitely looks like there are a lot of errors; the link says all values should be 0. I'll run a scrub and an extended SMART test on my cache SSDs.
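For what it's worth, a grep I'd use to pull the relevant lines out of a mirrored syslog in one pass. The path in the usage comment is a placeholder for wherever the syslog server actually writes.

```shell
# Scan a syslog for the error signatures discussed here: btrfs problems,
# kernel call traces, and nf_conntrack warnings from host-networked dockers.
scan_syslog() {
  grep -E 'BTRFS (error|warning)|Call Trace|nf_conntrack' "$1"
}
# Example: scan_syslog /path/to/syslog-192.168.0.000.log
```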
  11. So I have been running into some unfortunate issues lately. I built my server back in April or so, and up until around a month ago it ran flawlessly - no issues at all. Around a month ago I got my first 'freeze', where I am unable to access the server at all - WebGUI, running docker services, ping, etc. Totally unreachable, but the server is still powered on. It requires a forced power off and on to 'fix'. Around 2 weeks later it happened again. A week after that it happened again. And it's happened twice in the last 5 days now, so it's definitely becoming more regular. After the last freeze I set up syslog, so I will attach that. I'm really at a loss here (and the most recent freeze happened mid-preclear of 2 new HDDs, so that is also a bummer). Any advice would be much appreciated. syslog-192.168.0.000.log
  12. I need some help - I completely broke WireGuard. I was having some issues getting it working, especially after switching from Google DDNS to Cloudflare + NPM. I couldn't get anything to work, so I decided to nuke everything and start fresh. I rm -rf'd /etc/wireguard/, deleted the plugin, redownloaded the plugin, and then tried to start over. However, when I went into VPN settings afterwards, nothing would save - hitting Apply would reset everything. After looking around, it seems redownloading the plugin did not create '/etc/wireguard'. So I mkdir'd '/etc/wireguard' and created '/etc/wireguard/wg0.conf'. Now I can get a couple of things to save, but once I try to 'add peer' it just raises and lowers the 'tunnel wg0' section, as if I'm clicking the down/expand arrow. At this point I decided to delete everything again and restart the server. When I did, /etc/wireguard/ showed up again (it was not there when I restarted) with the old files from the very beginning. Any pointers on how to 100% reset the WireGuard state? Deleting and restarting is clearly not working, and nothing new I am doing is saving.
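Unraid runs its root filesystem from RAM, which would explain /etc/wireguard reappearing after a reboot: it gets restored at boot from a persistent copy on the flash drive. A sketch of a full reset, assuming the persistent copy lives at /boot/config/wireguard - verify that path on your own system before deleting anything.

```shell
# Remove both copies of the WireGuard state: the live one in RAM and the
# persistent one on the flash drive. The default paths are assumptions -
# pass explicit paths and double-check them first.
reset_wg_state() {
  rm -rf "${1:-/etc/wireguard}" "${2:-/boot/config/wireguard}"
}
# Example: reset_wg_state            # then reboot and reinstall the plugin
```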
  13. I used to have a WireGuard tunnel set up and running, but I was unable to get remote access working via my domain name (only my server IP would work remotely). I was able to use this to access the WebGUI remotely. I recently switched from Google Domains to Cloudflare for DNS management, and I have been able to get everything set up to where I can now access docker containers like Jellyfin remotely (using Nginx Proxy Manager). I have read that you should not use NPM to access your WebGUI remotely, so I am trying to set up a WireGuard tunnel again, but I cannot seem to get it to work properly. Right now I am able to connect to the tunnel/peer I set up on my phone. If I go to mydomain.com, it directs me to the NPM 'Congratulations' landing page. I want it to work so that going to mydomain.com sends me to my Unraid WebGUI. If I try to access my WebGUI by going to 192.xxx.x.xxx, it just times out and doesn't take me anywhere. Where am I messing up? I'm not sure if I'm missing something on my router (Unifi), Cloudflare, NPM, or the Unraid GUI. Any help would be much appreciated. **edit** All of the above was on my phone, not connected to my LAN. When I connect to my LAN, I am able to access my WebGUI by going to 192.xxx.x.xxx, and if I go to mydomain.com I still get the NPM 'Congratulations' landing page. **edit #2** I was not able to get anything to work at all when selecting 'Remote tunnel access'. When I switched to 'Remote access to LAN', that is when I started being able to access the internet and the NPM 'Congratulations' landing page.
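For reference, the difference between those two peer types comes down to what the phone's AllowedIPs cover. A sketch of the peer-side config - the endpoint, subnets, and port here are made-up examples matched to this thread's placeholder domain, not values from the actual setup:

```ini
[Peer]
PublicKey = <server public key>
Endpoint = yum.cookies.us:51820
# 'Remote access to LAN': route the tunnel subnet plus the LAN subnet
AllowedIPs = 10.253.0.0/24, 192.168.1.0/24
# 'Remote tunnel access' would instead route only the tunnel subnet:
# AllowedIPs = 10.253.0.0/24
```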
  14. @codefaux Yes, sorry - I should have clarified: I am using Unifi for my router/WebGUI, via a UDM. A question about something you said: "Also, it is a wildly horrible idea to expose the Unraid web interface to the internet, it is absolutely not designed for that sort of exposure. Very, very risky." If I am setting everything up to be accessed through the WireGuard VPN tunnel (either with the IP address or the domain - yum.cookies.us), that effectively shields Unraid and my local network from the internet, right? Seeing as the only way in is through the tunnel I've created? That is my understanding at least, but I want to make sure it's accurate, because keeping my network secure from the internet is obviously priority number 1. Also - when you mention needing to set up NAT reflection, what do you mean by that? In a few threads I've read from people with similar issues, they recommended changing the setting 'Local server uses NAT' to 'No'. I have not tried that yet. Outside of the instructions from the official guide (which seems to have been pulled from the Unraid website, though the longer version is still available here), I have not changed anything. Do you think changing that setting would help, or accomplish setting up NAT reflection? Or is that something completely different?
      And I should clarify: being able to connect to my LAN via the domain name from inside my network would be neat, but not really a game changer. It would be cool to use my domain name because I like it and think it's neat, but I'm the only person connecting to my server at home, so it's not the end of the world. Being able to connect to my LAN via the domain name from outside my LAN is much more important, though. It makes things much simpler to just give friends and family a domain name instead of an IP address. Also, I'm not 100% sure, but a Jellyfin guide I glanced over mentioned that it was a requirement for remote viewing. However, there may be workarounds; I haven't dug into that yet, as I'm trying to solve this problem first. If I screenshot my Unraid VPN tunnel and peer settings, along with my Google Domains DDNS settings, and finally my Unifi DDNS and port forwarding rule - would that help you identify where any kinks in the line are? I am more than happy to do so, but I'm not sure what information I need to block out for privacy's sake. I am guessing both 'yum' and 'cookies' in my domain 'yum.cookies.us' in the Google Domains and Unraid settings? As well as my Google Domains username and my Unraid static IP? If I left anything out, or some of those don't actually matter, do let me know and I can get my screenshots posted ASAP. Thanks a lot for your time and your help - no rush either way, I know life can be a bit bumpy at times. Thanks again.
  15. I just wanted to piggyback off of what some of the other recent comments said. For some reason, whenever Unraid.net updates, it breaks the current theme, and it's impossible to set it back without uninstalling/reinstalling.
  16. As a quick update - a few things I've checked. If I go into Google Domains -> DNS -> Synthetic Records -> Dynamic DNS settings, I can see the subdomain that I picked, with a valid IP. When I go to DNSChecker.org and type in my full domain (yum.cookies.us - the example from the original post), it comes up with the same IP that is listed on Google Domains, so that part of the process seems to be working. However, when I try to ping yum.cookies.us from my local PC, the request times out. When I use nslookup from my local PC, it comes up with the proper domain name (yum.cookies.us) and the IP that shows up in Google Domains. Not sure where that leaves me - it's just odd that I'm still able to access the server remotely with the IP, but for some reason my domain name won't work. **edit #3** So it looks like some of the errors I was running into had to do with my local PC always being connected to a VPN. With the VPN closed, I am now able to access 'yum.cookies.us' - except it points to my Unifi WebGUI, not my Unraid WebGUI. And that only works from within the LAN; outside the LAN it still does not work (but the IP address + WireGuard tunnel still does). I must have gotten some wires crossed somewhere, but I'm not sure where.
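To take some of the guesswork out of it, I'd compare what the machine actually resolves against the IP Google Domains shows. A small helper along those lines - the hostname and expected IP are whatever you plug in, and getting the router's LAN address back from inside the LAN is the typical NAT reflection symptom:

```shell
# Compare the locally-resolved address for a hostname with an expected IP.
# Usage: check_dns yum.cookies.us <WAN IP shown in Google Domains>
check_dns() {
  got=$(getent hosts "$1" | awk '{print $1; exit}')
  if [ "$got" = "$2" ]; then
    echo "match: $1 -> $got"
  else
    echo "mismatch: $1 -> '$got', expected $2"
  fi
}
```

Running it once from inside the LAN and once from outside (e.g. a phone on mobile data) narrows down whether the problem is DNS or routing.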
  17. Hey - sorry for the late reply. I am not able to access my server via the domain name from inside or outside of the LAN. My experience with DNS troubleshooting is pretty limited, but I can read instructions and follow guides well enough. I mostly followed this guide. My domain is from Google Domains, and I use dyndns for the service. Google doesn't show up as an option in the Unifi settings, but the video I linked mentioned that they should be interchangeable here because they use the same protocol. And no, I don't use my VPN tunnel from within my own network, just when I need to access the server remotely. But it would still be nice to be able to access my server (both inside and outside of my LAN) using the domain name.
  18. So let me start by saying that in the past month I have bought a UDM, set up an Unraid server, and started a Jellyfin media server. I say all of that to point out that I am new at this, so there is very likely a lot of stuff I am overlooking - I really appreciate any help. I followed the WireGuard setup located here, and mostly everything works. I was able to log in and access my server remotely when I visited my family last weekend, so I imagine that means I have set everything up relatively right with regards to adding a peer, port forwarding, and DDNS. However, for a reason I can't quite figure out, I've never been able to access it via the actual domain name. Say my domain is 'cookies.us' and I set my subdomain as 'yum' - I should then be able to access my server by going to yum.cookies.us, as far as I understand it? Since everything else is set up right (I assume, since remote access is working), does anyone have any ideas why the domain name wouldn't work? I am trying to set up remote Jellyfin access for friends and family, and that would be much easier if I could just give them my domain name instead of an IP. If this is in the wrong part of the forum, let me know and I am happy to move it. I am also happy to post any screenshots to help figure it out (I just haven't so far because, honestly, I'm not sure what I need to hide/block out with regards to privacy and security). Any help is appreciated, thanks!
  19. I had been trying to figure out a way to download specific files from my seedbox (as opposed to a fully automated system like some of the video guides cover). I landed on using Firefox to log in to my seedbox and download the files from there. The files end up in /mnt/cache/appdata/firefox/Downloads, and from there I can unzip them and move them to the proper share. I've been running into an odd problem though. Once every 3-4 downloads, one of them will download 99.9% of the file but never actually finish. When you initiate the download you get 2 files - a .zip and a .zip.part - and when the download is completely finished they merge into one .zip file. But the files that freeze up will essentially finish downloading yet never merge, so I am unable to unzip them. It's not a specific-file thing, because sometimes a download won't work, but after removing it and starting the download again it will work the second time. It seems to fail less often when there is less total downloading going on; outside of that I can't really place a pattern. Any idea why this might be happening? Or some settings I can tweak to increase the likelihood of the downloads finishing?
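Since a leftover .part file is the tell that Firefox never finalized a download, a quick way to spot the stuck ones (the default path is the one from this post; adjust to taste):

```shell
# List downloads that never finalized: Firefox merges foo.zip.part into
# foo.zip on completion, so a lingering .part file marks a stuck download.
find_stalled_downloads() {
  find "${1:-/mnt/cache/appdata/firefox/Downloads}" -name '*.part'
}
```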