Everything posted by Seige

  1. Would it be possible to run multiple instances of the letsencrypt container? All instances would have to have the same port mapping (I presume). Would this be possible by defining custom docker networks for each instance of letsencrypt, and would http validation still work? Thank you for the help!
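     A rough sketch of the custom-network part I have in mind (container and network names are only placeholders, and the port mappings and environment variables are left out since that is exactly the open question):
       docker network create le_net1
       docker network create le_net2
       docker create --name=letsencrypt1 --network=le_net1 linuxserver/letsencrypt
       docker create --name=letsencrypt2 --network=le_net2 linuxserver/letsencrypt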
  2. Thank you for the input. This seems to have fixed it for now. I presume this change to the php file is not persistent across reboots?
  3. I am from Europe. It seems to be more reliable today; the timeout now happens once I reach AWS in Seattle, which I presume is their firewall.
  4. I have the same issues with all of my dockers now, LSIO and others. I also have issues tracing (tracert) hub.docker.com, which is hosted by amazonaws. Their twitter support says it is all fine on their end, so not sure what to make of it. I also cannot update Nextcloud through the web interface, as the package is not downloading (again from amazonaws as far as I can tell). Using a VPN in different regions does not help either.
  5. I have the same issue, it always comes back with "0 B pulled" on all of my containers. I think amazonaws is having some issues; tracert times out in my case as well.
  6. These ports are required for running TS: 9987 (UDP), 10011 (TCP), and 30033 (TCP, for file transfers). Did you specify the internal ports and protocols correctly? In what network type mode are you running the container? You might want to try it in "host". The error message "Unable to open /config/licensekey.dat, falling back to limited functionality" is shown because you have not purchased a full server license and are operating a server which can only host up to 32 clients (i.e. limited functionality). Also you might want to consider removi…
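     If you are running it in bridge mode, the mappings would look roughly like this (only a sketch; the image name and appdata path are examples to be replaced with the ones from your template):
       docker run -d --name=teamspeak \
         -p 9987:9987/udp \
         -p 10011:10011/tcp \
         -p 30033:30033/tcp \
         -v /mnt/user/appdata/teamspeak:/config \
         your-teamspeak-image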
  7. This is a very brief description of your problem. What is the exact error in the log? If it used to work and is now suddenly broken, it might be because of an issue with your port 80 routing (at least in my experience this is very often the culprit). Do you know how to access the docker command line and run a cert renewal test? This usually gives you a more detailed error message.
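     As a sketch, assuming the container is named letsencrypt and uses certbot internally (adjust the name to yours):
       docker exec -it letsencrypt certbot renew --dry-run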
  8. As @Squid mentioned, if you search for teamspeak, you can click a link to extend the search to dockerhub. I added a screenshot to my post.
  9. Preface: Unfortunately, the LSIO container based on LGSM has been deprecated (blog post). Since my friends and I use TS when gaming, I moved my old server to the official docker linked in the post. It is fairly straightforward, but I wanted to write a brief guide in case others would find it useful. Please note that I am not associated with Teamspeak in any capacity and would consider myself a linux noob. Guide: Stop the LSIO Teamspeak container. Use the Community Applications Plugin to install the official Teamspeak docker…
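     The core of it is accepting the license and pointing the container at a fresh appdata folder; as a rough sketch (the path is only an example, and the exact variable name and data directory should be checked against the Docker Hub page of the official image):
       docker run -d --name=teamspeak-official \
         -p 9987:9987/udp -p 10011:10011/tcp -p 30033:30033/tcp \
         -v /mnt/user/appdata/teamspeak-official:/var/ts3server \
         -e TS3SERVER_LICENSE=accept \
         teamspeak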
  10. For a while now, I also get the array index out of range error when trying to run a benchmark. It does not spin up any drives. Scanning works fine. Here is a screenshot of the error: I tried to reboot the server and pulled a fresh image with a new appdata folder. No changes. Do you have any idea what might cause this? Thank you!
  11. Google says about 100 MB/s, so you should be about right, according to this: https://www.tomshardware.com/reviews/wd-red-10tb-8tb-nas-hdd,5277-2.html
  12. The 3/4 TB WD Reds are rather slow in comparison. Their 10 TB model is a bit faster, and since the read/write speed is not linear it is hard to predict an exact value. When going from a 4 TB WD Red parity to a 10 TB Ironwolf, my parity check times went from 10 hrs to 19, with another 10 TB data drive in the array. Once it is done with the 4 TB section involving the WD Reds it picks up speed, since the Seagates are notably faster. With only Ironwolfs or similar drives I guess it would be around 12-ish hours.
  13. If you have a setup with a single parity and a somewhat decent CPU, it is pretty much limited by the slowest drive in your array. I would estimate somewhere between 12 and 20 hours.
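     As a rough back-of-the-envelope example (assuming a 10 TB array limited by a drive that averages about 150 MB/s over the whole surface): 10 TB is roughly 10,000,000 MB, so a full pass takes about 10,000,000 / 150 ≈ 66,700 s, i.e. around 18.5 hours; a drive averaging ~200 MB/s would bring that down to roughly 14 hours.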
  14. +1 for that, not sure if we should put that in the feature request section? Other than that, no issues with the update so far!
  15. I am pretty sure this was because of your split level. You only allow splitting at the top level and your disks are very full. Why not set the split level to auto and see if that helps?
  16. Quick questions, and maybe not related, but are your disks still in a hardware RAID configuration? What does the server do after this period? Is the webui becoming unresponsive? Is it locking up? I am not a syslog guru, but these lines look weird:
     Jun 6 01:58:41 DaneviServer kernel: DMAR-IR: This system BIOS has enabled interrupt remapping
     Jun 6 01:58:41 DaneviServer kernel: on a chipset that contains an erratum making that
     Jun 6 01:58:41 DaneviServer kernel: feature unstable. To maintain system stability
     Jun 6 01:58:41 DaneviServer kernel: interrupt…
  17. How is your M1015 installed? These controllers get very warm, even at idle. With rising ambient temperatures this could also be the culprit. I would highly recommend installing a Noctua NF-A4x10 FLX on top of the heat sink. If you are up for it, replace the thermal compound with something better; mine was dried up and flaky. I used a small cable tie to hold the fan in place, but others are using screws (the image is not from an M1015, but the results are pretty much the same): I leave it running at full speed; it is rather quiet and the heat sink is cool to the touch, even u…
  18. I did some more tests today and I can confirm that it is caused by the smb folder that is mounted at the startup of the array. This can be replicated if the folder is not mounted immediately because the remote drive has to wake up first. If the drive is already spinning, no error is thrown. Nonetheless, the folder is mounted correctly in both cases, so it is not a big deal. Cheers, Seige
  19. Thank you @pwm for your additional explanation! I was not familiar with this, but it looks like this is exactly what happened. I was able to replicate the issue depending on the way I copy/paste from inside of Windows.
  20. After some additional testing it seems that this error is in fact caused by the network mount, if the remote drive is sleeping and has to wake up. The mount still completes, but the error is still logged. Will do some additional testing and report back.
  21. Maybe PuTTY does not like copy pasting from the web browser. I carefully ran it through Notepad++ and double checked all characters, and it seems to work for balance and stats. But the command btrfs fi df /mnt/cache still produces the error. I also tried the built-in web terminal, same results. I feel rather inept. EDIT: It seems to work when I type it manually. Not sure where this is coming from, I never had any issues with copy paste before.
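     One generic way to check for stray characters after pasting (nothing unRAID-specific): dump the pasted line into a temporary file (paste it right after the first command, then press Ctrl-D) and let cat reveal any non-printing characters:
       cat > /tmp/pasted.txt
       cat -A /tmp/pasted.txt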
  22. Thanks for the quick response! I have copy pasted both commands from some of your posts. At the moment both commands work, but e.g. btrfs fi df /mnt/cache does lead to the same "ERROR: cannot access '/mnt/cache': No such file or directory". Could it be that these commands cannot be executed one after another?
  23. Hi, I recently installed a second SSD and created a btrfs cache pool. After a while I noticed performance degradation and came across several suggestions by @johnnie.black. After running btrfs balance start --full-balance /mnt/cache and fstrim -v /mnt/cache, performance was back to normal. Hence, I wanted to add a weekly script with: btrfs balance start -dusage=75 /mnt/cache This leads to the following error message: "ERROR: cannot access '/mnt/cache': No such file or directory" while the original command still…
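     For reference, the weekly script I am trying to set up is essentially just this (a sketch, assuming the User Scripts plugin runs it as root):
       #!/bin/bash
       # weekly btrfs cache pool maintenance
       btrfs balance start -dusage=75 /mnt/cache
       fstrim -v /mnt/cache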
  24. Hi, I realised that I receive the following error message when the server is starting up (it is not always logged in the syslog file, but it always appears in the log window of the webui): Jun 7 18:01:21 Tower emhttpd: error: send_file, 139: Broken pipe (32): sendfile: /usr/local/emhttp/logging.htm Not sure what is causing this, I cannot detect any unusual behaviour. Here is my diagnostics file: EDIT: link removed Thank you for the help! Cheers, Seige EDIT: Maybe this is caused by the mounted smb fo…