Bobat

Members

  • Posts: 50
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

Bobat's Achievements

Rookie (2/14)

Reputation: 2

  1. Sort of. My OPNsense router was also plugged into the same monitor, so I think after 15 minutes of inactivity the monitor was automatically switching over to the other input and the messages would start up again. I unplugged the other cable and it seems to be stable now. I rarely need a monitor plugged into either machine, so maybe I'll try one of those HDMI dummy plugs rather than keeping the monitor on all of the time.
  2. I did have a monitor plugged in via an HDMI KVM. I plugged the monitor directly into the server and it seems to have stopped. Thanks! EDIT: I spoke too soon, it stopped for about 15 minutes and then started doing it again.
  3. I'm getting the "EDID block 0 is all zeroes" error filling up my log on 6.11.0 final. I saw this on 6.11rc4 as well (never tried rc5). Rolling back to rc3 makes it go away. I saw this thread suggesting that blacklisting the GPU driver may fix it, but I use the integrated GPU for Plex transcoding so I don't think that's a viable solution for me. Any other ideas? burns-diagnostics-20220923-1425.zip
  4. It goes in your nginx.conf. In mine, it's right after the events{} block and before the http{} block (there's a rough placement sketch at the bottom of this list).
  5. The US servers don't look like they support port forwarding. For the servers that do, it looks like you're limited to a single port. I don't use the function personally, so I'm not 100% sure on that.
  6. If you're using PIA, make sure you're using the most up-to-date .ovpn config files from the PIA site. They recently retired a bunch of legacy servers. My config file was pointing to one of those old servers and I couldn't figure out why it wouldn't connect anymore. Download the default Nextgen config files and make sure you're using the current server names.
  7. This is definitely possible. I do something very similar with my custom domain and Cloudflare. You need to set up stream proxying in your nginx.conf file.

     stream {
         # Defining upstream servers for proxied traffic
         upstream tcp_backend {
             server 123.456.7.8:9443;
         }
         upstream udp_backend {
             server 123.456.7.8:1194;
         }

         # Defining protocols and ports for data to be proxied.
         server {
             proxy_connect_timeout 300s;
             proxy_timeout 300s;
             listen 9443;
             proxy_pass tcp_backend;
         }

         server {
             proxy_connect_timeout 300s;
             proxy_timeout 300s;
             listen 1193 udp;
             proxy_pass udp_backend;
         }
     }

     Where 123.456.7.8 is the internal IP address of your OpenVPN server.
  8. The client was a Roku box hooked up to my TV, so no browser involved.
  9. Happened to me this morning with the Plex docker. One local stream, so not particularly heavy traffic. No other downloads/uploads from my other services. burns-diagnostics-20180322-0744.zip
  10. From your syslog: Several people (including myself) are having the same issue. See here, here, and here. I've downgraded back to 6.4.1 and everything's good for me now. I'll try again after the next release to see if a new kernel patch solves it.
  11. I’ve had a few kernel oops and call traces lately. Any ideas? burns-diagnostics-20180317-1736.zip
  12. Got 2 of these from Amazon on Black Friday. 8TB Seagate Barracuda Compute inside (ST8000DM004). Great deal!
  13. Thanks. I think I initially misunderstood how dual parity works and thought that both parity drives had to be available to have any protection in a dual-parity setup. After reading this I get why having either of the parity drives available still gives me one-drive failure tolerance while the other parity is rebuilding (there's a quick XOR sketch at the bottom of this list). So is Parity 1 always the XOR and Parity 2 the Reed-Solomon? In that case, I want to replace Parity 1 with the new larger drive, let it rebuild, and then unassign Parity 2?
  14. Here is my current config:
      2 parity drives (3TB each)
      1 cache drive (1TB SSD)
      12 data drives (mix of sizes from 1 - 3TB)
      I just bought 2 8TB drives in a Black Friday deal. The goal is to both increase the capacity of the array and to consolidate and eliminate one of the smaller drives. I think the dual parity was overkill, so the final config will be a single 8TB parity drive, with the other 8TB drive put into the array as a data drive. I think the steps for this would be:
      Stop the array
      Unassign both of the parity disks
      Assign one of the 8TB drives as parity, leave the old parity disks unassigned
      Start the array
      Wait for parity to rebuild
      Power down, pull one of the old 1TB data disks and replace it with the other 8TB disk (precleared previously)
      Assign the new 8TB drive into the old slot, let it rebuild from parity
      Preclear the old parity disks, add them to the array as data disks
      Sound right or am I missing something?
  15. http://lime-technology.com/forum/index.php?topic=12767.msg259006#msg259006 Step-by-step instructions are in the zip file referenced in the post. I just did this myself for the first time a few days ago. I had the "failed to initialize PAL" error at step 5, but luckily I had another PC on hand that I could use to finish the procedure.
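
On the nginx.conf placement question in post 4 above: here's a rough structural sketch of how the blocks are ordered in my file. It's not a complete config, just the skeleton; the worker_connections value and the comments are placeholders, and the actual stream{} contents are the example shown in post 7.

    # nginx.conf - structural sketch only
    events {
        worker_connections 1024;
    }

    # the stream{} block sits at the top level, after events{} and before http{}
    stream {
        # upstream and server definitions go here (see post 7 for a full example)
    }

    http {
        # the normal web/reverse-proxy config continues here
    }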
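
And on the dual-parity question in post 13 above, a quick sketch of why one surviving parity drive still covers a single data-drive failure. This is the generic single-parity XOR idea rather than Unraid's exact internals, so treat it as an illustration:

    Parity 1:  P1 = D1 xor D2 xor ... xor Dn
    If a data drive (say D2) dies while Parity 2 is missing or rebuilding:
               D2 = P1 xor D1 xor D3 xor ... xor Dn

In other words, XORing the surviving data drives with P1 reconstructs the lost drive. The second parity (the Reed-Solomon term) is only needed when two devices are unavailable at the same time.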