ljm42

Community Developer
  1. Oh sorry, I got Minio and Arq confused. I've updated my note to make it clearer for others. It sounds like the upcoming changes to Arq will probably solve your issues, which is great! Hopefully there aren't many other software packages that need to put 200k files in a single directory.

     My system is six years old but still going strong: a Xeon E3-1240 v3 processor on an ASRock E3C226D2I mobo with 16 GB RAM. I was testing on a 4TB Seagate NAS drive plugged into an onboard SATA 3 port. My dockers and a VM were running at the time, but the system was not under heavy load. Interestingly, I also tried it with a 12 TB Seagate Ironwolf drive and performance was slightly worse; nothing really significant, just a little surprising.

     Nice job on the script BTW, it helped me understand that while there is overhead to the user share system, it takes some pretty extreme values to make it an issue.
  2. I'm guessing users wouldn't purposefully put 200k files in a single directory, but for the OP the issue is the Arq backup software. Using Unraid as a backup destination seems like a great idea, and a user share would be ideal since it can grow larger than one disk. If storing this many files in a single directory is common behavior for backup software, it will probably affect quite a few people.

     Not sure what to suggest. Maybe the OP can find a way to set a maximum number of files Arq will put in a given directory? Or maybe split the backups up so that they fit on individual disks without needing to use a user share? Or maybe there is a comparable backup package that organizes its files in a more compatible way?
  3. Here are the stats for my system:

                          100K files       200K files
                          Disk | SHFS      Disk | SHFS
        6.7.2             0.22 | 3.54      0.48 | 5.59
        6.8.1, HL Off     0.23 | 4.86      0.46 | 13.11
        6.8.1, HL On      0.23 | 15.64     0.51 | 31.47

     Unraid is running on bare metal. The share it is writing to is restricted to a single drive, so SHFS didn't have to merge content from multiple places, if that makes a difference.

     There is a significant slowdown going from 6.7.2 to 6.8.1 as the number of files increases. When Hard Link support is enabled, the slowdown becomes extreme.
  4. Changed Status to Closed
     Changed Priority to Other
  5. I see. User error! You have to press the "Provision" button; I was expecting that to happen automatically when changing "Use SSL/TLS" to "Auto".
  6. In the meantime, you can press the Tab key once to put the focus on the username field.
  7. OK, running this command manually generates the certificate:

        root@TowerVM:/tmp# php /usr/local/emhttp/webGui/include/ProvisionCert.php
        nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
        nginx: configuration file /etc/nginx/nginx.conf test is successful
        LE Cert Provisioned successfully

     The question is: why isn't this happening automatically?
  8. I first noticed this problem in 6.8.1-rc1; I updated to 6.8.1 and it still persists. To reproduce:

     - Start with an empty /boot/config/ssl/certs directory and USE_SSL set to off
     - Go to Settings -> Management Access and change "Use SSL/TLS" from "No" to "Auto"
     - The system will generate a self-signed certificate instead of using LetsEncrypt

     I'm marking this Minor because I haven't seen other reports, so it is possible it is unique to my system. But if new LE setups really are broken then it should probably be Urgent.
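     If it helps anyone trying to reproduce this, one rough way to tell which kind of certificate was generated is to inspect it with openssl; this is a sketch, and the exact filename under /boot/config/ssl/certs may differ on your system:

        # Hypothetical check: a self-signed cert has matching issuer and subject,
        # while a LetsEncrypt cert shows "Let's Encrypt" as the issuer.
        # Adjust the filename to whatever was actually generated.
        openssl x509 -in /boot/config/ssl/certs/*.pem -noout -issuer -subject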
  9. One option: if you can get to the command line, you could type something like this:

        /usr/local/emhttp/webGui/scripts/notify -e "Your Unraid server is not secured" -s "I found your Unraid on the Internet without a password" -d "You need to secure this before someone hacks you" -i "alert"

     That will give them a notification on the webgui and send them an email (if they have that configured).
  10. Here is a tool for checking the status of SSL installations on your internal network: https://github.com/drwetter/testssl.sh

      It is a command line tool, no GUI. You can run it from the Unraid command line like this:

         docker run -ti drwetter/testssl.sh <unraid host>:<unraid port>

      (Note that after running this command, the "testssl" docker will show up in the Unraid webgui. You can't really run it from there, although you can use the webgui to delete it if you want.)

      When run against stock Unraid 6.8.1-rc1, testssl reports:

         SSLv2       not offered (OK)
         SSLv3       not offered (OK)
         TLS 1       offered (deprecated)
         TLS 1.1     offered (deprecated)
         TLS 1.2     offered (OK)
         TLS 1.3     not offered and downgraded to a weaker protocol
         NPN/SPDY    h2, http/1.1 (advertised)
         ALPN/HTTP2  h2, http/1.1 (offered)

      After making the changes above, it confirms that only 1.2 and 1.3 are offered (good!):

         SSLv2       not offered (OK)
         SSLv3       not offered (OK)
         TLS 1       not offered
         TLS 1.1     not offered
         TLS 1.2     offered (OK)
         TLS 1.3     offered (OK): final
         NPN/SPDY    h2, http/1.1 (advertised)
         ALPN/HTTP2  h2, http/1.1 (offered)
  11. Well, you've really sent me down the rabbit hole. Tagging @limetech for visibility.

      Not only should we disable TLSv1 and 1.1, we should enable 1.3. Lots of good info here: https://en.wikipedia.org/wiki/Transport_Layer_Security

      TLSv1.0 and 1.1 have multiple vulns:
         https://www.globalsign.com/en/blog/disable-tls-10-and-all-ssl-versions/
         https://tools.ietf.org/id/draft-moriarty-tls-oldversions-diediedie-00.html

      TLSv1.2 is good, with significant availability: https://qsportal.atlassian.net/wiki/spaces/DOC/pages/3571715/TLSv1.2+Browser+Compatibility

      TLSv1.3 is best, with security and performance improvements over 1.2 (this should make the webgui a little faster in modern browsers that support it): https://casecurity.org/2018/04/10/tls-1-3-includes-improvements-to-security-and-performance/

      According to that browser compatibility page, TLSv1.2 has been enabled by default in most browsers for quite a while now:

         - Chrome since version 38 (2014)
         - Firefox since version 27 (2014)
         - Safari since version 7 on OSX 10.9 (2013)
         - IE since version 11 (2013)
         - Edge (all versions)
         - Android since Lollipop (2014)
         - iOS since iOS 5 (2011?)

      So the risk of dropping TLSv1 and 1.1 seems very small. If people really want to keep using their obsolete clients and don't care about the security issues, I see two options: add an option to the webgui to enable v1 and/or v1.1 (if we think a lot of people will need this), or provide a sed command that people could manually add to their go script that adds TLSv1 and/or TLSv1.1 back to nginx.conf (see the sketch at the end of this post).

      Speaking of which, users who want to secure their systems today can use a good text editor (such as Notepad++, not Notepad) to edit the /config/go script in their "flash" share. Add these lines to the top of the file (above the reference to emhttp):

         # disable TLSv1 and TLSv1.1, enable TLSv1.2 and TLSv1.3
         # see https://forums.unraid.net/topic/86949-tlsv1-is-being-obsoleted-this-spring/
         sed -i 's/TLSv1 TLSv1.1 TLSv1.2/TLSv1.2 TLSv1.3/' /etc/nginx/nginx.conf

      Before rebooting, if you type this command:

         grep ssl_protocols /etc/nginx/nginx.conf

      you should see:

         ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

      After rebooting with the updated go script, that same command should return:

         ssl_protocols TLSv1.2 TLSv1.3;

      To undo this change, delete those lines from the go script and reboot.
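      And for anyone genuinely stuck with an obsolete client, here is a rough sketch of the opposite tweak: the same go-script approach, but keeping TLSv1.1 alongside 1.2 and 1.3. This is my untested guess at what such a sed line would look like against the stock nginx.conf, and it does weaken security:

         # keep TLSv1.1 for obsolete clients while still adding TLSv1.3
         # (NOT recommended; assumes the stock "TLSv1 TLSv1.1 TLSv1.2" line)
         sed -i 's/TLSv1 TLSv1.1 TLSv1.2/TLSv1.1 TLSv1.2 TLSv1.3/' /etc/nginx/nginx.conf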
  12. Wow, this sounds great! Does "Local server uses NAT" have any effect on whether WG can access these Docker networks, or does it work regardless? When "Local server uses NAT" is set to "No", the gui tells you what static route you need to add to your router. I'm wondering if we should show a similar message when it is set to "Yes"? It isn't always required, but it would be helpful in this case where there are custom docker networks.
  13. I don't use wildcard certs so I can't fully test this myself, but give this a shot:

      SSH to the server (don't use the web console for this; use either SSH or an actual console). Type this to temporarily modify rc.nginx:

         sed -i '/HOSTSSL=$(openssl/a HOSTSSL=${HOSTSSL/\\*/$HOSTNAME}' /etc/rc.d/rc.nginx

      This adds a new line to the script that says: if the HOSTSSL variable contains a '*', replace the '*' with the server's HOSTNAME; if it doesn't contain a '*', do nothing. That should change '*.home.insanegenius.net' to 'server-2.home.insanegenius.net'.

      Type this to restart nginx with the new config file (again, don't issue this command if you are using the web console):

         /etc/rc.d/rc.nginx restart

      Once that is done, please let me know the output of these commands, just to verify it did what I expect:

         hostname -s
         grep HOSTSSL /etc/rc.d/rc.nginx
         grep 302 /etc/nginx/conf.d/emhttp-servers.conf

      If everything worked, when you visit either of these:

         http://server-2.home.insanegenius.net
         http://<IP Address>

      it should redirect you to:

         https://server-2.home.insanegenius.net

      If you want to undo the change at this point, just reboot; up until now, the change we made will not survive a reboot. To make the change permanent, edit the /config/go file in the "flash" share (use a good editor that understands Unix line endings, like Notepad++ on Windows) and add these lines to the top of the file:

         # fix wildcard certificates
         sed -i '/HOSTSSL=$(openssl/a HOSTSSL=${HOSTSSL/\\*/$HOSTNAME}' /etc/rc.d/rc.nginx

      Reboot and confirm that the redirects still work properly. Note that if this functionality is ever added to stock Unraid, you'd want to remove those two lines from your go script.
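      If it helps to see what that substitution does in isolation, here is a minimal sketch you can run anywhere (the hostname and cert name are made up; note the backslash is doubled in the sed line above only to survive sed's escaping, a plain script needs just one):

         #!/usr/bin/env bash
         # Demonstrates the ${var/pattern/replacement} expansion used above.
         HOSTNAME="server-2"                   # hypothetical short hostname
         HOSTSSL="*.home.insanegenius.net"     # wildcard CN from the cert
         # Replace the first '*' with the hostname; if no '*' is present,
         # the value is left unchanged.
         HOSTSSL=${HOSTSSL/\*/$HOSTNAME}
         echo "$HOSTSSL"                       # -> server-2.home.insanegenius.net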
  14. Here are a couple of scripts that give a little insight into the paperless consumption process. I called them "pre" and "post", and put them in the "data" directory:

      /mnt/user/appdata/paperless/data/pre

         #!/usr/bin/env bash
         # https://paperless.readthedocs.io/en/latest/consumption.html#hooking-into-the-consumption-process
         echo Begin pre-processing script
         echo - Original filename: [${1}]
         echo End pre-processing script

      /mnt/user/appdata/paperless/data/post

         #!/usr/bin/env bash
         # https://paperless.readthedocs.io/en/latest/consumption.html#hooking-into-the-consumption-process
         echo Begin post-processing script
         echo - Document id: [${1}]
         echo - Generated filename: [${2}]
         echo - Source path: [${3}]
         echo - Thumbnail path: [${4}]
         echo - Download URL: [${5}]
         echo - Thumbnail URL: [${6}]
         echo - Correspondent: [${7}]
         echo - Tags: [${8}]
         echo End post-processing script

      Then add these two variables to the paperless-consumer docker:

         PAPERLESS_PRE_CONSUME_SCRIPT   /usr/src/paperless/data/pre
         PAPERLESS_POST_CONSUME_SCRIPT  /usr/src/paperless/data/post

      You can see the output by watching the paperless-consumer docker logs.
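      One assumption worth calling out: since the consumer invokes these hooks as executables, they will most likely need the executable bit set before anything shows up in the logs:

         # mark both hook scripts executable (paths as above)
         chmod +x /mnt/user/appdata/paperless/data/pre /mnt/user/appdata/paperless/data/post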
  15. Thanks to everyone who worked on bringing paperless to Unraid! All of the work that has been put into it so far has been super helpful.

      This is the biggest annoyance so far: https://github.com/the-paperless-project/paperless/issues/546 which basically means that if you dump a bunch of files in the consume folder, you can't make any changes to the database until they have finished being processed.

      Currently the docker writes files with GID 1000. I've submitted a PR with a change that will allow us to use GID 100 (users) like we do with everything else on Unraid: https://github.com/the-paperless-project/paperless/pull/599 Once that is accepted, the template will need a variable for:

         USERMAP_GID: 100

      I haven't fully decided if I will use paperless, but I have some suggestions for the template to help people get started faster. I could submit a PR, but I figured the people who have been using this more might be in a better position to decide if this would be helpful (see the sketch after this list for how the pieces fit together):

      - In the 'Overview', include a link to the documentation: https://paperless.readthedocs.io/en/latest/
      - For the 'Data' path, set a default of /mnt/user/appdata/paperless/data with the following description: Container Path: /usr/src/paperless/data . This contains the paperless database. Should be in appdata.
      - For the 'Media' path, set a default of /mnt/user/appdata/paperless/media with the following description: Container Path: /usr/src/paperless/media . Once consumed, files will be stored here. You may wish to place this on the array instead of in appdata.
      - For the 'Consumption' path, set a default of /mnt/user/appdata/paperless/consume with the following description: Container Path: /consume . Files placed here will be consumed by paperless.
      - For the 'Export' path, set a default of /mnt/user/appdata/paperless/export with the following description: Container Path: /export . Location for files used by the exporter utility. See https://paperless.readthedocs.io/en/latest/utilities.html#the-exporter
      - For PAPERLESS_OCR_LANGUAGES, set a default value of "eng" and include the following description: Container Variable: PAPERLESS_OCR_LANGUAGES. Space-separated list of 3-letter language codes used for OCR. List of valid codes available here: https://www.loc.gov/standards/iso639-2/php/code_list.php
      - How about adding the PAPERLESS_TIME_ZONE variable, defaulted to "UTC", with the following description: Container Variable: PAPERLESS_TIME_ZONE. Override the default UTC time zone. For details see: https://docs.djangoproject.com/en/1.10/ref/settings/#std:setting-TIME_ZONE
      - How about adding the PAPERLESS_INLINE_DOC variable, defaulted to "false", with the following description: Container Variable: PAPERLESS_INLINE_DOC. When true, PDF files will be viewed in the browser. When false, PDF files will be downloaded.
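      To make those suggestions concrete, here is a rough docker run sketch tying the proposed defaults together. The image name and the exact container paths are my assumptions from this discussion, not verified against the template:

         # illustrative only -- image name and paths are assumptions
         docker run -d --name paperless-consumer \
           -v /mnt/user/appdata/paperless/data:/usr/src/paperless/data \
           -v /mnt/user/appdata/paperless/media:/usr/src/paperless/media \
           -v /mnt/user/appdata/paperless/consume:/consume \
           -v /mnt/user/appdata/paperless/export:/export \
           -e USERMAP_GID=100 \
           -e PAPERLESS_OCR_LANGUAGES="eng" \
           -e PAPERLESS_TIME_ZONE="UTC" \
           -e PAPERLESS_INLINE_DOC="false" \
           thepaperlessproject/paperless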