scud133b

Members
  • Content Count: 65
  • Joined
  • Last visited

Community Reputation: 5 Neutral

About scud133b

  • Rank: Advanced Member

Converted

  • Gender: Undisclosed


  1. So I gathered that the drives might be accessed during the test, so I shut down all VMs and all other Docker containers (this was the only thing running), and that produced the test results you saw in my post. I haven't had a chance to try swapping disks between the onboard controller and the PCIe controller, but I can do that next if you think it might illuminate something. Thanks!
  2. Hi @jbartlett this is a really cool tool, thank you. The tests are finding a few problems. Note that Disk 4 is on the onboard controller and the others are on a SAS HBA card. First, the app reports a bandwidth cap error on all my SATA drives. Second, I'm getting frequent Speed Gap notices when I run the all-disk benchmark. Third, the graph appears to be pretty shaky/wobbly for several of my disks. I attached a shot of the graph. Any suggestions for what is causing this behavior?
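A quick way to sanity-check the numbers outside the benchmarking app (not from the original thread; a sketch that assumes you can read the raw devices, with placeholder device names) is to time a direct sequential read from each disk and convert it to MB/s:

```shell
#!/bin/sh
# Hypothetical helper: convert a byte count and elapsed seconds into MB/s.
throughput_mbs() {
  awk -v b="$1" -v s="$2" 'BEGIN { printf "%.1f\n", b / s / 1000000 }'
}

# Example cross-check (commented out; /dev/sdX is a placeholder -- this is
# read-only but slow, so run it with the array otherwise idle):
#   start=$(date +%s)
#   dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct
#   end=$(date +%s)
#   throughput_mbs $((1024 * 1024 * 1024)) $((end - start))

throughput_mbs 1073741824 8   # 1 GiB read in 8 s -> prints 134.2
```

Running the same read against a disk on the onboard controller and one on the HBA should show whether the apparent cap follows the controller or the individual disk.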
  3. And now another fun finding: I got the entire process to work by doing it in Chrome. So apparently the errors happening in the Brave browser prevented the proxy host from being fully created -- but doing it in Chrome worked fine. So I'm all green again and everything's working now. *shrug*
  4. Another data point: it appears the app never correctly saved the certificate for my new proxy host. The error refers to the folder /etc/letsencrypt/live/npm-14/..., which isn't in the file system. I see the folders for other proxy hosts (npm-3, npm-5, etc.) but not npm-14. So for some reason the new SSL cert and its associated files are not being saved in the appdata folder for any new hosts I try to create.
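The observation above can be confirmed directly from a shell. This is a hypothetical check, not from the original thread; the npm-14 path comes from the error message, so adjust it for your own proxy host ID:

```shell
#!/bin/sh
# Check whether the cert directory NPM's error message points at actually exists.
# "npm-14" is the host ID from the error above; substitute your own.
cert_dir="/etc/letsencrypt/live/npm-14"

if [ -d "$cert_dir" ]; then
  echo "present: $cert_dir"
  ls -l "$cert_dir"   # fullchain.pem and privkey.pem should be listed here
else
  echo "missing: $cert_dir"
fi
```

If the directory is missing while siblings like npm-3 exist, the database row was created but certbot never wrote the files, which matches the behavior described above.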
  5. Also using Brave, and you're right: works fine in Chrome. I only changed the http/https ports (e.g., the container gets ports 1880 and 1443) and my router is properly forwarding to them. That config had been totally fine until I tried to create a new proxy host this week.
  6. In case it helps, here's an example log I just grabbed:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-app-niceness.sh: executing...
[cont-init.d] 00-app-niceness.sh: exited 0.
[cont-init.d] 00-app-script.sh: executing...
[cont-init.d] 00-app-script.sh: exited 0.
[cont-init.d] 00-app-user-map.sh: executing...
[cont-init.d] 00-app-user-map.sh: exited 0.
[cont-init.d] 00-clean-logmonitor-states.sh: executing...
[cont-init.d] 00-clean-logmonitor-states.sh: exited 0.
[cont-init.d] 00-clean-tmp-dir.sh: executing...
[cont-init.d] 00-clean-tmp-dir.sh: exited 0.
[cont-init.d] 00-set-app-deps.sh: executing...
[cont-init.d] 00-set-app-deps.sh: exited 0.
[cont-init.d] 00-set-home.sh: executing...
[cont-init.d] 00-set-home.sh: exited 0.
[cont-init.d] 00-take-config-ownership.sh: executing...
[cont-init.d] 00-take-config-ownership.sh: exited 0.
[cont-init.d] 00-xdg-runtime-dir.sh: executing...
[cont-init.d] 00-xdg-runtime-dir.sh: exited 0.
[cont-init.d] nginx-proxy-manager.sh: executing...
[cont-init.d] nginx-proxy-manager.sh: Starting database...
[mysqld] starting...
2020-04-14 19:27:17 0 [Note] /usr/bin/mysqld (mysqld 10.3.22-MariaDB) starting as process 359 ...
2020-04-14 19:27:17 0 [Note] InnoDB: Using Linux native AIO
2020-04-14 19:27:17 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-04-14 19:27:17 0 [Note] InnoDB: Uses event mutexes
2020-04-14 19:27:17 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-04-14 19:27:17 0 [Note] InnoDB: Number of pools: 1
2020-04-14 19:27:17 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-04-14 19:27:17 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-04-14 19:27:17 0 [Note] InnoDB: Completed initialization of buffer pool
2020-04-14 19:27:17 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2020-04-14 19:27:17 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2020-04-14 19:27:17 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2020-04-14 19:27:17 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2020-04-14 19:27:17 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2020-04-14 19:27:17 0 [Note] InnoDB: 10.3.22 started; log sequence number 2894528; transaction id 29983
2020-04-14 19:27:17 0 [Note] InnoDB: Loading buffer pool(s) from /config/mysql/ib_buffer_pool
2020-04-14 19:27:17 0 [Note] InnoDB: Buffer pool(s) load completed at 200414 19:27:17
2020-04-14 19:27:17 0 [Note] Plugin 'FEEDBACK' is disabled.
2020-04-14 19:27:17 0 [Note] Server socket created on IP: '::'.
2020-04-14 19:27:17 0 [Note] Reading of all Master_info entries succeeded
2020-04-14 19:27:17 0 [Note] Added new Master_info '' to hash table
2020-04-14 19:27:17 0 [Note] /usr/bin/mysqld: ready for connections. Version: '10.3.22-MariaDB' socket: '/run/mysqld/mysqld.sock' port: 3306 MariaDB Server
[cont-init.d] nginx-proxy-manager.sh: Upgrading database if required...
[cont-init.d] nginx-proxy-manager.sh: Shutting down database...
2020-04-14 19:27:18 0 [Note] /usr/bin/mysqld (initiated by: unknown): Normal shutdown
2020-04-14 19:27:18 0 [Note] Event Scheduler: Purging the queue. 0 events
2020-04-14 19:27:18 0 [Note] InnoDB: FTS optimize thread exiting.
2020-04-14 19:27:18 0 [Note] InnoDB: Starting shutdown...
2020-04-14 19:27:18 0 [Note] InnoDB: Dumping buffer pool(s) to /config/mysql/ib_buffer_pool
2020-04-14 19:27:19 0 [Note] InnoDB: Buffer pool(s) dump completed at 200414 19:27:19
2020-04-14 19:27:20 0 [Note] InnoDB: Shutdown completed; log sequence number 2894537; transaction id 30003
2020-04-14 19:27:20 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
2020-04-14 19:27:20 0 [Note] /usr/bin/mysqld: Shutdown complete
chown: /config/log/nginx/nginx: No such file or directory
[cont-init.d] nginx-proxy-manager.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] starting s6-fdholderd...
[services.d] starting statusmonitor...
[services.d] starting logrotate...
[statusmonitor] no file to monitor: disabling service...
[services.d] starting logmonitor...
[services.d] starting mysqld...
[logmonitor] no file to monitor: disabling service...
[logrotate] starting...
[mysqld] starting...
2020-04-14 19:27:47 0 [Note] /usr/bin/mysqld (mysqld 10.3.22-MariaDB) starting as process 31753 ...
2020-04-14 19:27:47 0 [Note] InnoDB: Using Linux native AIO
2020-04-14 19:27:47 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-04-14 19:27:47 0 [Note] InnoDB: Uses event mutexes
2020-04-14 19:27:47 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-04-14 19:27:47 0 [Note] InnoDB: Number of pools: 1
2020-04-14 19:27:47 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-04-14 19:27:47 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-04-14 19:27:47 0 [Note] InnoDB: Completed initialization of buffer pool
2020-04-14 19:27:47 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2020-04-14 19:27:47 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2020-04-14 19:27:47 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2020-04-14 19:27:47 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2020-04-14 19:27:47 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2020-04-14 19:27:47 0 [Note] InnoDB: Waiting for purge to start
2020-04-14 19:27:47 0 [Note] InnoDB: 10.3.22 started; log sequence number 2894537; transaction id 29983
2020-04-14 19:27:47 0 [Note] InnoDB: Loading buffer pool(s) from /config/mysql/ib_buffer_pool
2020-04-14 19:27:47 0 [Note] InnoDB: Buffer pool(s) load completed at 200414 19:27:47
2020-04-14 19:27:47 0 [Note] Plugin 'FEEDBACK' is disabled.
2020-04-14 19:27:47 0 [Note] Server socket created on IP: '::'.
2020-04-14 19:27:47 0 [Note] Reading of all Master_info entries succeeded
2020-04-14 19:27:47 0 [Note] Added new Master_info '' to hash table
2020-04-14 19:27:47 0 [Note] /usr/bin/mysqld: ready for connections. Version: '10.3.22-MariaDB' socket: '/run/mysqld/mysqld.sock' port: 3306 MariaDB Server
[services.d] starting nginx...
[services.d] starting app...
[nginx] starting...
[app] starting Nginx Proxy Manager...
[services.d] done.
[4/14/2020] [7:27:49 PM] [Migrate ] › ℹ info Current database version: 20190227065017
[4/14/2020] [7:27:49 PM] [IP Ranges] › ℹ info Fetching IP Ranges from online services...
[4/14/2020] [7:27:49 PM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
[4/14/2020] [7:27:49 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4
[4/14/2020] [7:27:49 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6
[4/14/2020] [7:27:49 PM] [SSL ] › ℹ info Let's Encrypt Renewal Timer initialized
[4/14/2020] [7:27:49 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[4/14/2020] [7:27:49 PM] [IP Ranges] › ℹ info IP Ranges Renewal Timer initialized
[4/14/2020] [7:27:49 PM] [Global ] › ℹ info Backend PID 31823 listening on port 3000 ...
`QueryBuilder#allowEager` method is deprecated. You should use `allowGraph` instead. `allowEager` method will be removed in 3.0
`QueryBuilder#eager` method is deprecated. You should use the `withGraphFetched` method instead. `eager` method will be removed in 3.0
QueryBuilder#omit is deprecated. This method will be removed in version 3.0
  7. In my case -- no changes to the container config beyond changing ports to avoid conflicts. I tried downgrading to v1.7.0 and now I have another issue -- I can't actually load the Proxy Hosts page in the UI anymore. Click the link and nothing happens... the other pages work fine (Redirection Hosts, Streams, 404 Hosts). The browser address bar even updates to the correct address (http://server:port/nginx/proxy) but nothing on the page changes at all. So that's fun, doesn't seem related but maybe it is (?). Edit: I should add that when I actually view the files in /nginx/proxy_host/ this new config file is NOT there. So it looks like maybe the database thinks it created the proxy config, but it never actually generated and saved the file correctly.
  8. This is the same behavior I'm getting in v2.2.2. Here's a screenshot of my error:
  9. I can't seem to generate any new SSL certs. All my old ones are still working fine... but when I tried to proxy a new service today (and after doing several tests) I'm getting this error every time: Command failed: /usr/sbin/nginx -t nginx: [emerg] BIO_new_file(\"/etc/letsencrypt/live/npm-14/fullchain.pem\") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/npm-14/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file) nginx: configuration file /etc/nginx/nginx.conf test failed It appears to be an access issue, like somehow the app lost the ability to save the cert files... I did recently run Docker Safe New Perms but that shouldn't have affected this container (right?). Any ideas?
  10. Check my follow-up post right after the one you quoted
  11. Here is the error: **CrashPlan for Small Business's memory consumption comes close to its limit.** More than 75% of allocated memory is used. Consider increasing memory allocation to avoid unexpected crashes. This can be done via the CRASHPLAN_SRV_MAX_MEM environment variable. (And it produces successively worse errors, e.g. 90% of memory is used, until eventually all memory is used and it crashes.) Space used for the backup is 6TB, and I've allocated much more than the GB-per-TB equivalent of RAM (12GB, then 16GB, then 20GB), yet the errors and crashes are still appearing. I just rebooted the server for the first time in a while, just as a gut check.
  12. Yes, the main advice in that article is to increase the memory allocation. It says a 4TB backup should need 4GB of memory on a Linux system. I've actually raised this in the Docker settings to *20GB* and it still keeps crashing. Note: I'm assuming the article's CLI command does the same thing as the memory allocation option in the Docker container settings: java mx 1536 If that's actually something different, then maybe I haven't tried it yet.
  13. tl;dr: My Docker container keeps crashing, roughly every 30 seconds. Sometimes it's with the "CRASHPLAN_SRV_MAX_MEM" out-of-memory error, but sometimes there is no error at all; it's as if the container just reboots. I currently have 16GB allocated in the container settings and it still keeps crashing. I have bumped this as high as 20GB and the issue persists, as if changing that number does nothing. The full size of the backup is about 4TB, with only 44GB remaining, so it's basically 99% done but can't seem to finish that last 1%. Does it make sense that I would have to allocate all of my server's memory to this container to complete a 4TB backup? Any ideas why it crashes so much or how to fix it? The Unraid container logs show nothing unless the MAX_MEM error pops up, and the Tools > History window inside the app just shows it stopping and starting over and over.
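For reference, the memory bump discussed above is normally applied through the environment variable that the error message itself names, set when the container is created. This is only a sketch: the image name and paths are illustrative, and the value is an example, not a recommendation:

```shell
# Sketch only: set CRASHPLAN_SRV_MAX_MEM (the variable from the error message)
# at container creation. 16G is an example value; it raises the engine's max
# Java heap. Image name and appdata path are placeholders -- adjust to taste.
docker run -d \
  --name=crashplan-pro \
  -e CRASHPLAN_SRV_MAX_MEM=16G \
  -v /mnt/user/appdata/crashplan-pro:/config:rw \
  jlesage/crashplan-pro
```

In the Unraid UI the same variable is exposed as a container setting, which should be equivalent to passing `-e` on the command line.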
  14. Resolved: For now, you have to choose "VA API" as the transcoder and NOT Intel Quick Sync Video. Totally confusing and frustrating but VA API definitely works. I confirmed it is hitting QSV with the Intel GPU Tools container.
  15. I can't get intel-gpu-tools to work; it won't even finish starting properly. It sends a KILL/exit signal and stops itself. Log attached. Any ideas? Edit: I had the wrong value for the user ID. I mistakenly wrote "root" instead of "0"; changing it to 0 resolved it.

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-app-niceness.sh: executing...
[cont-init.d] 00-app-niceness.sh: exited 0.
[cont-init.d] 00-app-script.sh: executing...
[cont-init.d] 00-app-script.sh: exited 0.
[cont-init.d] 00-app-user-map.sh: executing...
[cont-init.d] 00-app-user-map.sh: exited 0.
[cont-init.d] 00-clean-logmonitor-states.sh: executing...
[cont-init.d] 00-clean-logmonitor-states.sh: exited 0.
[cont-init.d] 00-clean-tmp-dir.sh: executing...
[cont-init.d] 00-clean-tmp-dir.sh: exited 0.
[cont-init.d] 00-set-app-deps.sh: executing...
[cont-init.d] 00-set-app-deps.sh: exited 0.
[cont-init.d] 00-set-home.sh: executing...
[cont-init.d] 00-set-home.sh: exited 0.
[cont-init.d] 00-take-config-ownership.sh: executing...
[cont-init.d] 00-take-config-ownership.sh: exited 0.
[cont-init.d] 00-xdg-runtime-dir.sh: executing...
[cont-init.d] 00-xdg-runtime-dir.sh: exited 0.
[cont-init.d] 10-certs.sh: executing...
[cont-init.d] 10-certs.sh: exited 0.
[cont-init.d] 10-cjk-font.sh: executing...
[cont-init.d] 10-cjk-font.sh: exited 0.
[cont-init.d] 10-nginx.sh: executing...
s6-applyuidgid: usage: s6-applyuidgid [ -z ] [ -u uid ] [ -g gid ] [ -G gidlist ] [ -U ] prog...
[cont-init.d] 10-nginx.sh: exited 100.
[services.d] stopping services
[services.d] stopping s6-fdholderd...
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
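The two fixes from the last posts can be sketched together. This is a hypothetical example, not from the original thread: the env-var names follow the jlesage-style container convention mentioned above, and the device path varies by system. The key points are the numeric user ID ("0", not "root") and the iGPU passed through:

```shell
# Sketch: recreate the intel-gpu-tools container with numeric IDs (the fix
# from the post above) and the Intel iGPU device mapped in. Image name and
# device path are placeholders; adjust for your system.
docker run -d \
  --name=intel-gpu-tools \
  -e USER_ID=0 \
  -e GROUP_ID=0 \
  --device /dev/dri:/dev/dri \
  jlesage/intel-gpu-tools

# Then open the container's console and run intel_gpu_top: activity on the
# Video engine row while a transcode plays is what confirms Quick Sync is
# actually being used (the verification described in post 14).
```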