About Bobat

  1. This is definitely possible. I do something very similar with my custom domain and Cloudflare. You need to set up stream proxying in your nginx.conf file:

```nginx
stream {
    # Define upstream servers for proxied traffic.
    upstream tcp_backend {
        server 123.456.7.8:9443;
    }
    upstream udp_backend {
        server 123.456.7.8:1194;
    }

    # Define the protocols and ports for the data to be proxied.
    server {
        proxy_connect_timeout 300s;
        proxy_timeout 300s;
        listen 9443;
        proxy_pass tcp_backend;
    }
    server {
        proxy_connect_timeout 300s;
        proxy_timeout 300s;
        listen 1194 udp;
        proxy_pass udp_backend;
    }
}
```

     Where 123.456.7.8 is the internal IP address of your OpenVPN server.
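What the stream block does can be illustrated with a minimal TCP relay. This is a toy sketch in Python, not nginx itself: the local ports 19443/19444 and the echo backend are made up, standing in for the real OpenVPN backend and the proxy's listen port.

```python
import socket
import threading
import time

# Hypothetical local ports for the demo (not the real 9443/1194).
BACKEND_PORT = 19443   # stands in for the OpenVPN server
PROXY_PORT = 19444     # stands in for nginx's listen port

def echo_backend():
    """Stand-in for the upstream server: echoes one message back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", BACKEND_PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))
    conn.close()
    srv.close()

def tcp_proxy():
    """Accept one client and relay bytes to/from the backend,
    mirroring what the nginx 'stream' server does for TCP."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PROXY_PORT))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection(("127.0.0.1", BACKEND_PORT))
    upstream.sendall(client.recv(1024))   # client -> backend
    client.sendall(upstream.recv(1024))   # backend -> client
    upstream.close()
    client.close()
    srv.close()

threading.Thread(target=echo_backend, daemon=True).start()
threading.Thread(target=tcp_proxy, daemon=True).start()
time.sleep(0.2)  # give both listeners a moment to bind

# The client talks only to the proxy, just as a VPN client would
# talk only to the Cloudflare/nginx front end.
with socket.create_connection(("127.0.0.1", PROXY_PORT)) as c:
    c.sendall(b"hello")
    reply = c.recv(1024)
print(reply)  # b'hello'
```

The real nginx stream module also handles UDP (`listen ... udp`), connection timeouts, and many concurrent clients, which this one-shot sketch deliberately omits.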
  2. The client was a Roku box hooked up to my TV, so no browser involved.
  3. Happened to me this morning with the Plex docker. One local stream, so not particularly heavy traffic. No other downloads/uploads from my other services. burns-diagnostics-20180322-0744.zip
  4. Lol. Less than an hour after posting that, I got one. Downgraded again.
  5. 24+ hours since upgrading to 6.5.1-rc1 and no call traces. Looks good.
  6. From your syslog: Several people (including myself) are having the same issue. See here, here, and here. I've downgraded back to 6.4.1 and everything's good for me now. I'll try again after the next release to see if a new kernel patch solves it.
  7. I've also rolled back to 6.4.1 and have been OK so far.
  8. Have you had any call traces or kernel oops since moving to 6.5.0? I’m having a similar issue with Plex since 6.5. Considering a rollback to 6.4 to see if it goes away. Are you using linuxserver’s docker by chance?
  9. I’ve had a few kernel oops and call traces lately. Any ideas? burns-diagnostics-20180317-1736.zip
  10. Got 2 of these from Amazon on Black Friday. 8TB Seagate Barracuda Compute inside (ST8000DM004). Great deal!
  11. Thanks. I initially misunderstood how dual parity works and thought that both parity drives had to be available to have any protection in a dual-parity setup. After reading this, I get why having either parity drive available gives me one-drive failure tolerance while the other parity is rebuilding. So is Parity 1 always the XOR and Parity 2 the Reed-Solomon? And so I want to replace Parity 1 with the new larger drive, let it rebuild, and then unassign Parity 2?
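The single-drive tolerance discussed above comes from the XOR parity: any one missing disk is just the XOR of the parity and the surviving disks. A toy sketch with made-up byte values (plain Python, not unRAID's actual implementation):

```python
from functools import reduce

# Toy model: each "disk" is a list of byte values.
data_disks = [
    [0x12, 0x34],
    [0xAB, 0xCD],
    [0x0F, 0xF0],
]

def xor_parity(disks):
    """Column-wise XOR across disks - the Parity 1 scheme."""
    return [reduce(lambda a, b: a ^ b, col) for col in zip(*disks)]

parity1 = xor_parity(data_disks)

# Simulate losing disk 1 and rebuilding it from parity + survivors:
# XOR-ing the parity with all surviving disks recovers the lost one.
survivors = [data_disks[0], data_disks[2]]
rebuilt = xor_parity(survivors + [parity1])
print(rebuilt == data_disks[1])  # True
```

Recovering from *two* simultaneous failures needs the second, Reed-Solomon parity, which uses Galois-field arithmetic rather than plain XOR and is omitted here.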
  12. Here is my current config: 2 parity drives (3 TB each), 1 cache drive (1 TB SSD), and 12 data drives (a mix of sizes from 1-3 TB). I just bought two 8 TB drives in a Black Friday deal. The goal is to increase the capacity of the array and to consolidate and eliminate one of the smaller drives. I think the dual parity was overkill, so the final config will be a single 8 TB parity drive, with the other 8 TB drive put into the array as a data drive. I think the steps for this would be:
     1. Stop the array.
     2. Unassign both of the parity disks.
     3. Assign one of the 8 TB drives as parity; leave the old parity disks unassigned.
     4. Start the array.
     5. Wait for parity to rebuild.
     6. Power down, pull one of the old 1 TB data disks, and replace it with the other 8 TB disk (precleared previously).
     7. Assign the new 8 TB drive into the old slot and let it rebuild from parity.
     8. Preclear the old parity disks and add them to the array as data disks.
     Sound right, or am I missing something?
  13. http://lime-technology.com/forum/index.php?topic=12767.msg259006#msg259006 Step-by-step instructions are in the zip file referenced in the post. I just did this myself for the first time a few days ago. I had the "failed to initialize PAL" error at step 5, but luckily I had another PC on hand that I could use to finish the procedure.
  14. I was having the same issue post-upgrade. Here's what I did to restore connectivity to Crashplan Central:
     1. Stop the Crashplan docker.
     2. Back up the my.service.xml file in your Crashplan docker config folder - mine was in /mnt/cache/appdata/crashplan/conf/.
     3. Edit my.service.xml and remove the hash string between the <autoLoginPasswordHash> tags.
     4. Change the text between the <autoLogin> tags from "true" to "false".
     5. Start the Crashplan docker.
     6. Launch the GUI client (either through Windows or MATE, whichever you use), sign in to your Crashplan account when prompted, then close the client.
     7. Stop the Crashplan docker.
     8. Edit my.service.xml and change the text between the <autoLogin> tags back to "true".
     9. Start the Crashplan docker again.
     Confirm through the GUI client that everything connected and your backups are working again. I don't know if toggling the autoLogin part is strictly necessary - you may only need to remove your old password hash - but this worked for me.
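The two XML edits in the steps above can be sketched with Python's ElementTree. The sample document here is a made-up minimal stand-in for my.service.xml - the real file contains many more elements - but the two tags being edited are the ones from the steps.

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for my.service.xml; only the two tags touched by
# the steps above are included. "OLDHASHVALUE" is a placeholder.
SAMPLE = """<serviceModel>
  <autoLogin>true</autoLogin>
  <autoLoginPasswordHash>OLDHASHVALUE</autoLoginPasswordHash>
</serviceModel>"""

root = ET.fromstring(SAMPLE)

# Remove the stale hash between the <autoLoginPasswordHash> tags.
root.find("autoLoginPasswordHash").text = None

# Flip <autoLogin> to false so the client prompts for a fresh login.
root.find("autoLogin").text = "false"

out = ET.tostring(root, encoding="unicode")
print(out)
```

In practice, editing the file by hand in a text editor (with the docker stopped) is just as easy; the sketch only shows that the change is two small text edits, not a structural one.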
  15. The binhex delugevpn container is back. Going to play around with it tonight.