Sn3akyP3t3

Members
  • Content Count

    61
  • Joined

  • Last visited

Community Reputation

2 Neutral

About Sn3akyP3t3

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed
  • Location
    Earth
  • Personal Text
    Intentionally left blank

  1. So you want to access a docker container from within another docker container? This can be done by linking containers: https://docs.docker.com/network/links/#communication-across-links with something similar to --link <name or id>:alias. I believe recent versions of Docker also require a user-defined custom bridge network to do this. Documentation on that is here: https://docs.docker.com/network/bridge/#differences-between-user-defined-bridges-and-the-default-bridge Unfortunately, creating one is a command-line-only process as far as I know. The reason, I believe, is that --link is legacy and may be removed in future versions, although I have no idea how to link without it when it goes away... I'm willing to bet they really want people to use Docker Compose instead, but I'm not really sure... just the speculation train rolling through. A rough sketch of both approaches follows.
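     As a sketch, with mynet, mydb, myapp, and myimage as placeholder names: once both containers sit on a user-defined bridge they can resolve each other by container name, so --link isn't strictly needed there.

        # Create a user-defined bridge network (the name "mynet" is arbitrary)
        docker network create mynet

        # Attach both containers to it; on a user-defined bridge they can
        # reach each other by container name (e.g. "mydb") with no --link
        docker run -d --name mydb --network mynet postgres
        docker run -d --name myapp --network mynet myimage

        # Legacy alternative on the default bridge:
        # docker run -d --name myapp --link mydb:db myimage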
  2. I didn't spot a problem quite like the one I'm seeing, so here I go. I believe I have all the minimum requirements listed in the readme, and I'm getting data on the landing page, but when I try to interact with anything in analytics or duplicate files I get either this error, "Unable to open customtags.txt! Check if exists and permissions.", or a blank page showing the query that was used in the search bar. I don't see any errors in the logs, but I don't have any additional logging enabled either. Is there any additional logging I can enable to expose the cause?
  3. Never mind, I found diskover for this sort of thing and then some! https://shirosaidev.github.io/diskover/
  4. I'm looking to make use of the output that comes from the Shares -> Compute All button, but programmatically. Rather than cobble my own script together, I'd rather just use what is already there. Where might I find that script? I've probably triggered some interest in the why: I'm just trying to project usage growth so I can predict when I'll hit a limit, for budgeting purposes. Long story short, when Black Friday rolled around I made a mistake and didn't check out quickly enough, so I missed the 8TB HDD sales and ended up with decently priced 4TB drives, and I'm now hovering around 70% utilization. I've done a light duplicate file search and dumped a lot of old Acronis True Image files, but I would like to know if that effort needs to be turned up to 11. If I end up scripting it after all, something like the sketch below is what I have in mind.
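     A minimal sketch, assuming shares live under /mnt/user as on a stock Unraid box; this only approximates what Compute All reports:

        # Tally per-share disk usage across the array
        for share in /mnt/user/*/; do
            du -sh "$share"
        done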
  5. Something changed in a recent update, possibly months ago, regarding CA's suggested repo source installation. I don't remember the exact behavior it used to have. I believe the Apps section used to offer an icon to change the source of the repo to the official Docker Registry as an alternative to Unraid's internal providers. It's been a long time since I've been able to install new containers, and now that I'm able to, I'm not finding an easy way to install from the official Docker Registry. Is there a setting to re-enable this, or should I drop to the command line with additional params as a substitute (along the lines of the sketch below)?
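     The command-line fallback I have in mind, as a sketch; the image name, paths, and container name are all placeholders:

        # Pull an image straight from Docker Hub (the official registry)
        docker pull someuser/someimage:latest

        # Run it with an appdata mapping so its config persists like the
        # template-installed containers do
        docker run -d --name someimage \
          -v /mnt/user/appdata/someimage:/config \
          someuser/someimage:latest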
  6. I'm interested in applying DenyHosts to my Unraid install, but the plugins by overbyrn appear to have gone poof. The link to the support forum post from the wiki is dead. Whereabouts may I acquire the DenyHosts plugin, and possibly something that creates SSH keys? (Failing a plugin, I assume the console route sketched below would do for the keys.)
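     For the key-creation half, standard OpenSSH tooling from the console should work as a stopgap; file locations are just the defaults:

        # Generate an ed25519 keypair (written to ~/.ssh/id_ed25519[.pub])
        ssh-keygen -t ed25519 -C "unraid"

        # Install the public key on a remote host for key-based login
        ssh-copy-id user@remote-host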
  7. This thread is for the entire repo, which contains multiple applications; I'm assuming you must mean either MySQL or Postgres. It sounds like customizations and/or data are not being preserved when the docker container restarts. Persistence can be achieved by mapping a host directory to the container's directory through Unraid's settings for that docker container. Look for a setting where the container asks for "Container Path: /config", and maybe even "Container Path: /data", and map it back to the appdata directory you have set aside on your Unraid server. I use the official MariaDB and Postgres repos for my work, not the Docker containers from this repo, so I don't have exact settings to share, but the sketch below shows the equivalent plain-docker form.
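     In plain docker terms, Unraid's path mappings are just bind mounts; a sketch, with the container name, image, and appdata paths as placeholders:

        # Map host appdata into /config and /data so customizations and
        # data survive the container being restarted or rebuilt
        docker run -d --name mydb \
          -v /mnt/user/appdata/mydb/config:/config \
          -v /mnt/user/appdata/mydb/data:/data \
          someimage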
  8. I haven't made a dockerized application before, but I have been using a dedicated, customized dockerized Python container, with all the necessary libraries loaded, to conduct batch-like processing on a schedule. The data it collects is mapped back to the Unraid file shares; you can link that through settings in Unraid as well. I suggest you do the same for the config files and application data so they are not lost when the docker container is rebuilt; data inside the container doesn't survive a rebuild without that. Dig into the docker build documentation and check out examples (a minimal one is sketched below). I'm not yet sure what you need to do to keep your docker image saved after that so you can safely upgrade or migrate from server to server. I think it may require a docker container running the docker registry, but maybe someone has more experience with that.
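     A minimal build-and-run sketch of that pattern; the image name and paths are placeholders, and the Dockerfile is assumed to install your libraries and copy your batch script in:

        # Build a custom Python image from the Dockerfile in this directory
        docker build -t my-batch-runner .

        # Run it with host mappings so config and collected data outlive
        # the container itself
        docker run -d --name batch-runner \
          -v /mnt/user/appdata/batch-runner:/config \
          -v /mnt/user/data:/data \
          my-batch-runner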
  9. Docker does have its own IP address, but it's not exposed. You're basically left with two feasible options. The first: set your network to bridge mode and expose your port, as you seem to have already done; then you access your MySQL instance at the IP address of your Unraid server with the port you specified, e.g. 192.168.100.11:3306. This will be useful for validating your migration before you pull in your applications, if that is your intent. The second: if you proceed to dockerize your applications, you can use Docker's built-in container linking. You enable linking by putting something similar to this in the Extra Parameters section of Unraid: --link postgres:postgresql. The syntax is the --link flag, followed by the name of your running instance, a colon, and then the alias the instance will be referenced by internally within the dockerized container, which doesn't really matter. Once you've done that, you can log into the dockerized container and confirm the settings by viewing the linked environment variables with "printenv" from the console (sketched below). Attached is a photo where you can check that the link settings have applied. I use MariaDB instead of MySQL, but that doesn't matter.
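     A sketch of that check, reusing the alias from the example above; the container name myapp is a placeholder:

        # Open a shell inside the linked container
        docker exec -it myapp /bin/sh

        # The link is exposed as environment variables derived from the
        # alias, e.g. POSTGRESQL_PORT_5432_TCP_ADDR
        printenv | grep -i postgresql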
  10. I worked on the server this weekend and got a reverse proxy set up following this tutorial. I then tried to apply the letsencrypt-generated key pair to the mosquitto MQTT container through the /config directory, and modified the mosquitto.conf file to make use of the keys, but the service isn't able to launch; something is not agreeable with the key pair. I commented out the config lines referencing the keys and found the files are accessible from within the container. The logs don't say a lot about what the problem could be, so I'm wondering if there is a way to enable debug logging for eclipse-mosquitto, but that may take a little experimentation to build that container with logging enabled. Is there a technique for applying keys created by the letsencrypt container to other containers? The eclipse-mosquitto container expects the keys to be with the application on startup, using these lines to enable them:

         listener 8883
         protocol mqtt
         certfile /config/ssh/cert.pem
         cafile /config/ssh/chain.pem
         keyfile /config/ssh/privkey.pem
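      For what it's worth, mosquitto.conf also accepts a log_type directive (e.g. log_type debug), so verbose logging shouldn't require rebuilding the container. As for sharing the keys, the crude sketch below just copies them between appdata directories on the host; the paths assume the usual linuxserver.io layout, and example.com stands in for the real domain:

        # Copy the current certs out of the letsencrypt appdata into the
        # directory the mosquitto container sees as /config/ssh
        cp /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/example.com/cert.pem \
           /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/example.com/chain.pem \
           /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/example.com/privkey.pem \
           /mnt/user/appdata/mosquitto/ssh/

        # mosquitto drops root privileges at startup, so check that the key
        # file is readable by the container's runtime user
        ls -l /mnt/user/appdata/mosquitto/ssh/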
  11. Huh.. looks like these are symbolic links:

         lrwxrwxrwx 1 root root 19 Jul 7 00:31 EVP_get_digestbynid.3.gz -> EVP_DigestInit.3.gz
         lrwxrwxrwx 1 root root 19 Jul 7 00:31 EVP_get_digestbyobj.3.gz -> EVP_DigestInit.3.gz
         lrwxrwxrwx 1 root root 17 Jul 7 00:31 EVP_idea_cfb.3.gz -> EVP_idea_cbc.3.gz
         lrwxrwxrwx 1 root root 17 Jul 7 00:31 EVP_idea_cfb64.3.gz -> EVP_idea_cbc.3.gz
         lrwxrwxrwx 1 root root 17 Jul 7 00:31 EVP_idea_ecb.3.gz -> EVP_idea_cbc.3.gz
         lrwxrwxrwx 1 root root 17 Jul 7 00:31 EVP_idea_ofb.3.gz -> EVP_idea_cbc.3.gz
         lrwxrwxrwx 1 root root 12 Jul 7 00:31 EVP_md5_sha1.3.gz -> EVP_md5.3.gz
  12. I don't often terminal into the Unraid server unless I'm changing something or troubleshooting. I stumbled upon this and was immediately curious. Running the command below in the root directory reveals there are 3380 gzip archive files stored right at the root directory level, which is highly unusual for any Linux machine I've ever managed. Is this something I should clean up, or is it a bug? I don't work with .gz files in anything I script, so this wasn't caused by me directly. I'll crack open a handful of the files to see if there's anything fishy going on. Meanwhile I'm posting this to see if there is anything known about it.

         root@TrumpIsNotSmart:/# ls -1q *.gz | wc -l
         3380

      This is one of many pages captured to show the file names found in this pile of .gz stuff:

         ACCESS_DESCRIPTION_free.3.gz@  OpenSSL_add_ssl_algorithms.3.gz@
         ACCESS_DESCRIPTION_new.3.gz@  OpenSSL_version.3.gz@
         ADMISSIONS_free.3.gz@  OpenSSL_version_num.3.gz@
         ADMISSIONS_get0_admissionAuthority.3.gz@  PBE2PARAM_free.3.gz@
         ADMISSIONS_get0_namingAuthority.3.gz@  PBE2PARAM_new.3.gz@
         ADMISSIONS_get0_professionInfos.3.gz@  PBEPARAM_free.3.gz@
         ADMISSIONS_new.3.gz@  PBEPARAM_new.3.gz@
         ADMISSIONS_set0_admissionAuthority.3.gz@  PBKDF2PARAM_free.3.gz@
         ADMISSIONS_set0_namingAuthority.3.gz@  PBKDF2PARAM_new.3.gz@
         ADMISSIONS_set0_professionInfos.3.gz@  PEM_FLAG_EAY_COMPATIBLE.3.gz@
         ADMISSION_SYNTAX.3.gz@  PEM_FLAG_ONLY_B64.3.gz@
         ADMISSION_SYNTAX_free.3.gz@  PEM_FLAG_SECURE.3.gz@
         ADMISSION_SYNTAX_get0_admissionAuthority.3.gz@  PEM_bytes_read_bio_secmem.3.gz@
         ADMISSION_SYNTAX_get0_contentsOfAdmissions.3.gz@  PEM_do_header.3.gz@
         ADMISSION_SYNTAX_new.3.gz@  PEM_get_EVP_CIPHER_INFO.3.gz@
         ADMISSION_SYNTAX_set0_admissionAuthority.3.gz@  PEM_read_DHparams.3.gz@
         ADMISSION_SYNTAX_set0_contentsOfAdmissions.3.gz@  PEM_read_DSAPrivateKey.3.gz@
         ASIdOrRange_free.3.gz@  PEM_read_DSA_PUBKEY.3.gz@
         ASIdOrRange_new.3.gz@  PEM_read_DSAparams.3.gz@
         ASIdentifierChoice_free.3.gz@  PEM_read_ECPKParameters.3.gz@
         ASIdentifierChoice_new.3.gz@  PEM_read_ECPrivateKey.3.gz@
         ASIdentifiers_free.3.gz@  PEM_read_EC_PUBKEY.3.gz@
         ASIdentifiers_new.3.gz@  PEM_read_NETSCAPE_CERT_SEQUENCE.3.gz@
         ASN1_ENUMERATED_get.3.gz@  PEM_read_PKCS7.3.gz@
         ASN1_ENUMERATED_get_int64.3.gz@  PEM_read_PKCS8.3.gz@
         ASN1_ENUMERATED_set.3.gz@  PEM_read_PKCS8_PRIV_KEY_INFO.3.gz@
         ASN1_ENUMERATED_set_int64.3.gz@  PEM_read_PUBKEY.3.gz@
         ASN1_ENUMERATED_to_BN.3.gz@  PEM_read_PrivateKey.3.gz@
         ASN1_GENERALIZEDTIME_adj.3.gz@  PEM_read_RSAPrivateKey.3.gz@
         ASN1_GENERALIZEDTIME_check.3.gz@  PEM_read_RSAPublicKey.3.gz@
         ASN1_GENERALIZEDTIME_print.3.gz@  PEM_read_RSA_PUBKEY.3.gz@
         ASN1_GENERALIZEDTIME_set.3.gz@  PEM_read_SSL_SESSION.3.gz@
         ASN1_GENERALIZEDTIME_set_string.3.gz@  PEM_read_X509.3.gz@
         ASN1_INTEGER_get.3.gz@  PEM_read_X509_AUX.3.gz@
         ASN1_INTEGER_get_uint64.3.gz@  PEM_read_X509_CRL.3.gz@
         ASN1_INTEGER_set.3.gz@  PEM_read_X509_REQ.3.gz@
         ASN1_INTEGER_set_int64.3.gz@  PEM_read_bio.3.gz@
         ASN1_INTEGER_set_uint64.3.gz@  PEM_read_bio_CMS.3.gz@
         ASN1_INTEGER_to_BN.3.gz@  PEM_read_bio_DHparams.3.gz@
         ASN1_ITEM.3.gz@  PEM_read_bio_DSAPrivateKey.3.gz@
         ASN1_ITEM_get.3.gz@  PEM_read_bio_DSA_PUBKEY.3.gz@
         ASN1_OBJECT_free.3.gz@  PEM_read_bio_DSAparams.3.gz@
         ASN1_STRING_TABLE.3.gz@  PEM_read_bio_ECPKParameters.3.gz@
         ASN1_STRING_TABLE_cleanup.3.gz@  PEM_read_bio_EC_PUBKEY.3.gz@
         ASN1_STRING_TABLE_get.3.gz@  PEM_read_bio_NETSCAPE_CERT_SEQUENCE.3.gz@
         ASN1_STRING_cmp.3.gz@  PEM_read_bio_PKCS7.3.gz@
         ASN1_STRING_data.3.gz@  PEM_read_bio_PKCS8.3.gz@
         ASN1_STRING_dup.3.gz@  PEM_read_bio_PKCS8_PRIV_KEY_INFO.3.gz@
         ASN1_STRING_free.3.gz@  PEM_read_bio_PUBKEY.3.gz@
         ASN1_STRING_get0_data.3.gz@  PEM_read_bio_RSAPrivateKey.3.gz@
         ASN1_STRING_print.3.gz@  PEM_read_bio_RSAPublicKey.3.gz@
         ASN1_STRING_print_ex_fp.3.gz@  PEM_read_bio_RSA_PUBKEY.3.gz@
         ASN1_STRING_set.3.gz@  PEM_read_bio_SSL_SESSION.3.gz@
         ASN1_STRING_to_UTF8.3.gz@  PEM_read_bio_X509.3.gz@
         ASN1_STRING_type.3.gz@  PEM_read_bio_X509_AUX.3.gz@
         ASN1_STRING_type_new.3.gz@  PEM_read_bio_X509_CRL.3.gz@
         ASN1_TIME_adj.3.gz@  PEM_read_bio_X509_REQ.3.gz@
         ASN1_TIME_check.3.gz@  PEM_write.3.gz@
         ASN1_TIME_cmp_time_t.3.gz@  PEM_write_CMS.3.gz@
         ASN1_TIME_compare.3.gz@  PEM_write_DHparams.3.gz@
         ASN1_TIME_diff.3.gz@  PEM_write_DHxparams.3.gz@
         ASN1_TIME_normalize.3.gz@  PEM_write_DSAPrivateKey.3.gz@
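      If it helps anyone dig, a quick way to summarize what the symlinks resolve to; plain shell, nothing Unraid-specific (man page names like these suggest OpenSSL documentation landed in / by mistake):

        # Group the root-level .gz symlinks by their resolved target
        cd /
        for f in *.gz; do readlink -f "$f"; done | sort | uniq -c | sort -rn | head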
  13. @pappaq, are you by chance using 16 GB of DDR4-2666 SODIMM RAM? The specs on the ASRock J4105 say one thing, but users say the capacity is actually 16 GB, and Crucial.com says it's OK with DDR4-2666 SODIMM. So weird!
  14. Probably won't matter to many people, but if you do cross-platform Python scripting on Unraid and the script inspects what environment it is running in, you may need to fix your script. The recent Unraid 6.7 release has changed the case, for good reasons I'm sure. Here's the change:

         >>> import platform
         >>> print(platform.release())
         4.19.41-Unraid

      It used to be the case "unRAID". The solution is to make the check case-insensitive, like so:

         if "unraid" in platform.release().lower():
             ...

      meh... details! Anyway, hope this helps someone out there.
  15. Excellent! That's exactly the news I wanted to hear! I'm not looking for anything with a large power draw. I ended up getting an ASRock J3455-ITX; it's easy to miss that the model omits the "B". This unit features 4x SATA instead of 2x, at the cost of 0.2 GHz, which doesn't bother me.