Sn3akyP3t3

Everything posted by Sn3akyP3t3

  1. So you want to access a docker container from within another docker container? This can be done by linking containers: https://docs.docker.com/network/links/#communication-across-links with an argument similar to --link <name or id>:alias. I believe some recent versions of Docker also require you to use a user-defined custom bridge network to do this. Documentation on that is here: https://docs.docker.com/network/bridge/#differences-between-user-defined-bridges-and-the-default-bridge Unfortunately, the only way I know of to create one is from the command line; a rough sketch follows. I believe the reason is that --link is legacy and may be removed in future versions, although I have no idea how to link without it when it goes away... I'm willing to bet they really want people to use Docker Compose instead, but I'm not really sure... just the speculation train rolling through.
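     A minimal sketch of the user-defined bridge approach, run from the Unraid console (the network name "my-bridge" and the container names "db" and "app" are placeholders of mine):

        # Create a user-defined bridge network (one-time)
        docker network create my-bridge

        # Attach both containers to it
        docker run -d --name db --network my-bridge postgres
        docker run -d --name app --network my-bridge my-app-image

        # Inside "app", the other container is now reachable at the hostname "db"

     On a user-defined bridge, Docker's embedded DNS resolves container names directly, which is what makes --link unnecessary there.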
  2. I didn't spot a problem quite like the one I'm seeing, so here I go. I believe I have all the minimum requirements listed in the readme, and I'm getting data on the landing page, but when I try to interact with anything in analytics or duplicate files I get either this error, "Unable to open customtags.txt! Check if exists and permissions.", or a blank page showing the query that was used in the search bar. I don't see any errors in the logs, but I don't have any additional logging enabled either. Is there any additional logging I can enable to expose the cause?
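     Since the error message points at existence and permissions, this is the sort of check I'm poking at (the container name "diskover" and the /config location are placeholders; substitute your own):

        # Does the file exist and is it readable from inside the container?
        docker exec diskover ls -l /config/customtags.txt

        # If it's missing or mis-owned, create/fix it on the host side of the mapping
        touch /mnt/user/appdata/diskover/customtags.txt
        chown nobody:users /mnt/user/appdata/diskover/customtags.txt

     nobody:users is Unraid's usual share ownership; the exact user the web app runs as may differ per container.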
  3. Never mind, I found diskover for this sort of thing and then some! https://shirosaidev.github.io/diskover/
  4. I'm looking to make use of the output that comes from the Shares -> Compute All button, but programmatically. Rather than cobble my own script together I'd rather just use what is already there. Where may I find this script? I probably triggered some interest in the why. For that, I'm just trying to project usage growth so I can predict when I'll be hitting a limit, for budgeting purposes. Long story short, I made a mistake when Black Friday rolled around: I didn't check out quickly enough, so I missed the 8TB HDD sales and ended up with decently priced 4TB drives, but I'm hovering around 70% utilization. I've done a light duplicate-file search and dumped a lot of old Acronis True Image files, but I would like to know if that effort needs to be turned up to 11. A rough approximation of what I'm after is below.
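     In the meantime, my rough approximation from the console (this is not the actual script behind Compute All, which is what I'm hoping someone can point me to):

        # Human-readable per-share usage across the array
        du -sh /mnt/user/*/

        # Machine-readable bytes, sorted largest first, for trending over time
        du -sb /mnt/user/*/ | sort -rn

     Capturing the second form on a schedule gives a usage-over-time series to project growth from.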
  5. Something changed in a recent update, possibly months ago, regarding CA's suggested repo source installation. I don't remember the exact behavior it used to have. I believe the Apps section used to offer an icon to change the source of the repo to the official Docker Registry as an alternative to UnRaid's internal providers. It's been a long time since I've been able to install new containers, and now that I'm able to, I'm not finding an easy way to install from the official Docker Registry. Is there a setting to re-enable this, or should I drop to the command line with additional parameters as a substitute? A sketch of the command-line route is below.
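     The command-line route I may fall back on looks roughly like this (the image name and mapping are placeholders, not a specific recommendation):

        # Pull and run an image from the official registry
        docker pull someuser/someimage:latest
        docker run -d --name someimage \
          -v /mnt/user/appdata/someimage:/config \
          someuser/someimage:latest

     As far as I know, a container created this way won't have a CA template behind it, so Unraid's web UI won't offer its usual edit form for it.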
  6. I'm interested in applying DenyHosts to my UnRAID install, but the plugins by overbyrn appear to have gone poof. The link to the support forum post from the wiki is dead. Whereabouts may I acquire the plugin for DenyHosts, and possibly something that creates SSH keys? (For the key half, a console sketch is below.)
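     The key-generation half I can likely handle from the console without a plugin (the key type and paths are just my usual choices):

        # Generate a keypair and authorize it for the root account
        mkdir -p ~/.ssh
        ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
        cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys

     One Unraid caveat: the root home directory may not persist across reboots, so the keys likely need to be copied to the flash drive and restored at boot (e.g. from the go script).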
  7. This thread is for the entire repo, which contains multiple applications. I'm assuming you must mean either MySQL or Postgres. It sounds like customizations and/or data are not being preserved across a restart of the docker container. Persistence can be achieved by mapping a host directory to the container's directory through UnRaid's settings for that docker container. Look for a setting where the container asks for "Container Path: /config", and maybe even "Container Path: /data", and map that back to the appdata directory you have set aside on your UnRaid server (the command-line equivalent is sketched below). I use the MariaDB and Postgres official repos for my work, not the containers from this repo, so I don't have exact settings to share.
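     The command-line equivalent of those Unraid path mappings, for illustration (the host and container paths here are placeholders; check the image's documentation for the directories it actually expects):

        docker run -d --name mydb \
          -v /mnt/user/appdata/mydb/config:/config \
          -v /mnt/user/appdata/mydb/data:/data \
          some/db-image

     Anything the container writes to /config or /data then lands on the array and survives the container being recreated.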
  8. I haven't made a dockerized application before, but I have been using a dedicated, customized python container with all the necessary libraries loaded to conduct batch-like processing on a schedule. The data that it collects is mapped back to the UnRaid file shares; you can set that up through settings in UnRaid. I suggest you also do that for the config files and application data so that they are not lost when the docker container is recreated; data inside a container doesn't survive that without such mappings. Dig into the docker build documentation and check out examples (a minimal sketch is below). I'm not yet sure what you need to do to keep your docker image saved after that so you can safely upgrade or migrate from server to server. I think it may require a container running the docker registry, but maybe someone has more experience with that.
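     A minimal build-and-run sketch of that kind of batch container (the image name, script, and library are placeholders of mine):

        # Dockerfile
        FROM python:3-slim
        RUN pip install --no-cache-dir requests
        COPY batch_job.py /app/batch_job.py
        CMD ["python", "/app/batch_job.py"]

        # Build locally, then run with output mapped back to an Unraid share,
        # so anything batch_job.py writes to /output lands on the array
        docker build -t my-batch-job .
        docker run --rm -v /mnt/user/data:/output my-batch-job

     For moving the image between servers without a registry, docker save and docker load to/from a tarball is one option.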
  9. Each docker container does have its own IP address, but it's not exposed. You're basically left with two feasible options. First, set your network to bridge mode; then you can expose your port, as you seem to have already done, and access your MySQL instance at the IP address of your UnRaid server with the port you specified, e.g. 192.168.100.11:3306. This will be useful for validating your migration before you pull in your applications, if that is your intent. Second, if you proceed to dockerize your applications, you can use Docker's built-in container-linking capability: enable it by putting something similar to this in the Extra Parameters section of Unraid: --link postgres:postgresql. The syntax is the --link flag, followed by the name of your running instance, a colon, and then the alias the instance will be known by inside the linking container (the alias itself doesn't really matter). Once you've done that, you can log into the dockerized container and view the linked environment variables to confirm the settings by running "printenv" from the console, as sketched below. Attached is a photo where you can check that the link settings have applied. I use MariaDB instead of MySQL, but that doesn't matter.
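     To illustrate the confirmation step (the alias "postgresql" comes from the example above; the exact variable names depend on the ports the linked image exposes):

        # From the linking container's console
        printenv | grep -i postgresql

        # Legacy links inject variables along these lines:
        #   POSTGRESQL_PORT_5432_TCP_ADDR=172.17.0.2
        #   POSTGRESQL_PORT_5432_TCP_PORT=5432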
  10. I worked on the server this weekend and got a reverse proxy set up following this tutorial. I then tried to apply the letsencrypt-generated key pair to the mosquitto mqtt container through the /config directory and modified the mosquitto.conf file to make use of the keys, but the service isn't able to launch: something about the key pair isn't agreeable. I commented out the config lines referencing the keys and found the files are accessible from within the container. The logs don't say a lot about what the problem could be, so I'm wondering if there is a way to enable debug logging for eclipse-mosquitto, though that may take a little experimentation to build the container with logging enabled. Is there a technique to apply keys created by the letsencrypt container to other containers? The eclipse-mosquitto container expects the keys to be with the application on startup, enabled with these lines:

        listener 8883
        protocol mqtt
        certfile /config/ssh/cert.pem
        cafile /config/ssh/chain.pem
        keyfile /config/ssh/privkey.pem
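     For what it's worth, here is what I plan to experiment with next. On logging, mosquitto can apparently raise verbosity from mosquitto.conf itself with "log_type all", no container rebuild needed. On the keys, since letsencrypt's live files are symlinks, copying the real files into the mapped directory may behave better; a sketch, assuming the linuxserver letsencrypt appdata layout and a placeholder domain of example.com (both assumptions of mine):

        # -L dereferences the symlinks so real PEM files land in mosquitto's /config
        cp -L /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/example.com/cert.pem /mnt/user/appdata/mosquitto/ssh/cert.pem
        cp -L /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/example.com/chain.pem /mnt/user/appdata/mosquitto/ssh/chain.pem
        cp -L /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/example.com/privkey.pem /mnt/user/appdata/mosquitto/ssh/privkey.pem

     A common failure mode is the mosquitto process lacking read permission on privkey.pem, so ownership is worth checking as well.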
  11. Huh.. looks like these are symbolic links.

        lrwxrwxrwx 1 root root 19 Jul 7 00:31 EVP_get_digestbynid.3.gz -> EVP_DigestInit.3.gz
        lrwxrwxrwx 1 root root 19 Jul 7 00:31 EVP_get_digestbyobj.3.gz -> EVP_DigestInit.3.gz
        lrwxrwxrwx 1 root root 17 Jul 7 00:31 EVP_idea_cfb.3.gz -> EVP_idea_cbc.3.gz
        lrwxrwxrwx 1 root root 17 Jul 7 00:31 EVP_idea_cfb64.3.gz -> EVP_idea_cbc.3.gz
        lrwxrwxrwx 1 root root 17 Jul 7 00:31 EVP_idea_ecb.3.gz -> EVP_idea_cbc.3.gz
        lrwxrwxrwx 1 root root 17 Jul 7 00:31 EVP_idea_ofb.3.gz -> EVP_idea_cbc.3.gz
        lrwxrwxrwx 1 root root 12 Jul 7 00:31 EVP_md5_sha1.3.gz -> EVP_md5.3.gz
  12. I don't often terminal into the UnRaid server unless I'm changing something or troubleshooting. I stumbled upon this and was immediately curious. Running the command below on the root directory reveals there are 3380 gzip archive files stored right at the root directory level, which is highly unusual for any Linux machine I've ever managed. Is this something I should clean up, or is it a bug? I don't work with .gz files in anything I script, so this wasn't caused by me directly. I'll crack open a handful of the files to see if there's anything fishy going on. Meanwhile I'm posting this to see if there is anything known about it.

        root@TrumpIsNotSmart:/# ls -1q *.gz | wc -l
        3380

     This is one of many pages captured to reveal the file names found in this pile of .gz stuff:

        ACCESS_DESCRIPTION_free.3.gz@  OpenSSL_add_ssl_algorithms.3.gz@
        ACCESS_DESCRIPTION_new.3.gz@  OpenSSL_version.3.gz@
        ADMISSIONS_free.3.gz@  OpenSSL_version_num.3.gz@
        ADMISSIONS_get0_admissionAuthority.3.gz@  PBE2PARAM_free.3.gz@
        ADMISSIONS_get0_namingAuthority.3.gz@  PBE2PARAM_new.3.gz@
        ADMISSIONS_get0_professionInfos.3.gz@  PBEPARAM_free.3.gz@
        ADMISSIONS_new.3.gz@  PBEPARAM_new.3.gz@
        ADMISSIONS_set0_admissionAuthority.3.gz@  PBKDF2PARAM_free.3.gz@
        ADMISSIONS_set0_namingAuthority.3.gz@  PBKDF2PARAM_new.3.gz@
        ADMISSIONS_set0_professionInfos.3.gz@  PEM_FLAG_EAY_COMPATIBLE.3.gz@
        ADMISSION_SYNTAX.3.gz@  PEM_FLAG_ONLY_B64.3.gz@
        ADMISSION_SYNTAX_free.3.gz@  PEM_FLAG_SECURE.3.gz@
        ADMISSION_SYNTAX_get0_admissionAuthority.3.gz@  PEM_bytes_read_bio_secmem.3.gz@
        ADMISSION_SYNTAX_get0_contentsOfAdmissions.3.gz@  PEM_do_header.3.gz@
        ADMISSION_SYNTAX_new.3.gz@  PEM_get_EVP_CIPHER_INFO.3.gz@
        ADMISSION_SYNTAX_set0_admissionAuthority.3.gz@  PEM_read_DHparams.3.gz@
        ADMISSION_SYNTAX_set0_contentsOfAdmissions.3.gz@  PEM_read_DSAPrivateKey.3.gz@
        ASIdOrRange_free.3.gz@  PEM_read_DSA_PUBKEY.3.gz@
        ASIdOrRange_new.3.gz@  PEM_read_DSAparams.3.gz@
        ASIdentifierChoice_free.3.gz@  PEM_read_ECPKParameters.3.gz@
        ASIdentifierChoice_new.3.gz@  PEM_read_ECPrivateKey.3.gz@
        ASIdentifiers_free.3.gz@  PEM_read_EC_PUBKEY.3.gz@
        ASIdentifiers_new.3.gz@  PEM_read_NETSCAPE_CERT_SEQUENCE.3.gz@
        ASN1_ENUMERATED_get.3.gz@  PEM_read_PKCS7.3.gz@
        ASN1_ENUMERATED_get_int64.3.gz@  PEM_read_PKCS8.3.gz@
        ASN1_ENUMERATED_set.3.gz@  PEM_read_PKCS8_PRIV_KEY_INFO.3.gz@
        ASN1_ENUMERATED_set_int64.3.gz@  PEM_read_PUBKEY.3.gz@
        ASN1_ENUMERATED_to_BN.3.gz@  PEM_read_PrivateKey.3.gz@
        ASN1_GENERALIZEDTIME_adj.3.gz@  PEM_read_RSAPrivateKey.3.gz@
        ASN1_GENERALIZEDTIME_check.3.gz@  PEM_read_RSAPublicKey.3.gz@
        ASN1_GENERALIZEDTIME_print.3.gz@  PEM_read_RSA_PUBKEY.3.gz@
        ASN1_GENERALIZEDTIME_set.3.gz@  PEM_read_SSL_SESSION.3.gz@
        ASN1_GENERALIZEDTIME_set_string.3.gz@  PEM_read_X509.3.gz@
        ASN1_INTEGER_get.3.gz@  PEM_read_X509_AUX.3.gz@
        ASN1_INTEGER_get_uint64.3.gz@  PEM_read_X509_CRL.3.gz@
        ASN1_INTEGER_set.3.gz@  PEM_read_X509_REQ.3.gz@
        ASN1_INTEGER_set_int64.3.gz@  PEM_read_bio.3.gz@
        ASN1_INTEGER_set_uint64.3.gz@  PEM_read_bio_CMS.3.gz@
        ASN1_INTEGER_to_BN.3.gz@  PEM_read_bio_DHparams.3.gz@
        ASN1_ITEM.3.gz@  PEM_read_bio_DSAPrivateKey.3.gz@
        ASN1_ITEM_get.3.gz@  PEM_read_bio_DSA_PUBKEY.3.gz@
        ASN1_OBJECT_free.3.gz@  PEM_read_bio_DSAparams.3.gz@
        ASN1_STRING_TABLE.3.gz@  PEM_read_bio_ECPKParameters.3.gz@
        ASN1_STRING_TABLE_cleanup.3.gz@  PEM_read_bio_EC_PUBKEY.3.gz@
        ASN1_STRING_TABLE_get.3.gz@  PEM_read_bio_NETSCAPE_CERT_SEQUENCE.3.gz@
        ASN1_STRING_cmp.3.gz@  PEM_read_bio_PKCS7.3.gz@
        ASN1_STRING_data.3.gz@  PEM_read_bio_PKCS8.3.gz@
        ASN1_STRING_dup.3.gz@  PEM_read_bio_PKCS8_PRIV_KEY_INFO.3.gz@
        ASN1_STRING_free.3.gz@  PEM_read_bio_PUBKEY.3.gz@
        ASN1_STRING_get0_data.3.gz@  PEM_read_bio_RSAPrivateKey.3.gz@
        ASN1_STRING_print.3.gz@  PEM_read_bio_RSAPublicKey.3.gz@
        ASN1_STRING_print_ex_fp.3.gz@  PEM_read_bio_RSA_PUBKEY.3.gz@
        ASN1_STRING_set.3.gz@  PEM_read_bio_SSL_SESSION.3.gz@
        ASN1_STRING_to_UTF8.3.gz@  PEM_read_bio_X509.3.gz@
        ASN1_STRING_type.3.gz@  PEM_read_bio_X509_AUX.3.gz@
        ASN1_STRING_type_new.3.gz@  PEM_read_bio_X509_CRL.3.gz@
        ASN1_TIME_adj.3.gz@  PEM_read_bio_X509_REQ.3.gz@
        ASN1_TIME_check.3.gz@  PEM_write.3.gz@
        ASN1_TIME_cmp_time_t.3.gz@  PEM_write_CMS.3.gz@
        ASN1_TIME_compare.3.gz@  PEM_write_DHparams.3.gz@
        ASN1_TIME_diff.3.gz@  PEM_write_DHxparams.3.gz@
        ASN1_TIME_normalize.3.gz@  PEM_write_DSAPrivateKey.3.gz@
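     To crack a few open, something along these lines (the filename is just one pulled from the listing; the trailing @ in the listing is ls flagging symlinks):

        # .3.gz names like these look like gzipped section-3 man pages; peek inside one
        gzip -dc /ASN1_ITEM.3.gz | head

        # Confirm the symlinks and where they point
        ls -l /*.gz | head

     The names themselves (ASN1_*, PEM_read_*, EVP_*) match OpenSSL's section-3 manual pages, for whatever that clue is worth.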
  13. @pappaq are you by chance using 16 GB of DDR4-2666 SODIMM RAM? The specs on the ASRock J4105 say one thing, but users say the actual capacity is 16 GB of RAM, and Crucial.com says it's OK with DDR4-2666 SODIMM. So weird!
  14. Probably won't matter to many people, but if you do cross-platform Python scripting on UnRaid and the script inspects what environment it is running on, you may need to fix your script. The recent UnRaid 6.7 release changed the case, for good reasons I'm sure. Here's the change:

        >>> import platform
        >>> print(platform.release())
        4.19.41-Unraid

     It used to be cased "unRAID". The solution is to make the check case-insensitive, like so:

        if "unraid" in platform.release().lower():
            ...

     meh... details! Anyway, hope this helps someone out there.
  15. Excellent! That's exactly the news I wanted to hear! I'm not looking for anything with large power draws. I ended up getting an ASRock J3455-ITX; it's easy to miss the omitted "B" in the model number. This unit features 4x SATA instead of 2x, at the cost of 0.2 GHz, which doesn't bother me.
  16. Ah, x86. I was trying to stay away from the somewhat costly Udoo x86 units. I'll take a look at the ASRock boards like the J3455B-ITX and check for compatibility. That should be power-efficient enough.
  17. I used to have a backup solution with CrashPlan. I haven't restored my remote backup setup since they gave up on regular people and went business-folk only. I've been shopping around for some years, but the cost per month for my needs is a bit too high. (Amazon banning rsync didn't help...) I'm looking now to host my own remote backup solution, but I don't want my remote host to incur extreme maintenance costs in power. I've been looking for a SATA-compatible, extremely low-cost board and found the Helios4, which costs about as much to purchase as I would pay per year to store the data. Sadly, I can't find any information on whether the Helios4 board is compatible with UnRAID, or anyone with experience with it. I figure the best option for backing up UnRAID is another UnRAID storing a subset of the data. I'm pretty sure I would have encryption enabled for security. I would likely run a small number of features on this server, but not too many, to avoid exceeding the small amount of RAM available. I plan to give 50% of the available storage space to the host for kindly allowing me remote backup, so they can stop using flash drives to back up their photos, which is a bad option. I may in turn also set up remote backup for this person at my location so they also have a proper backup solution in place. Ideas or alternatives to the Helios4 option? Remember, I'm looking for an extremely low-power option with little to no noise. Passive cooling would be a plus, but it's not essential.
  18. I love that UnRaid 6.7 just rolled out with Telegram. I think the number of agents will increase over time; currently I count 8 different agents available. Many of us use more than one agent outside of UnRaid for various reasons. Some agents are more flexible than others and get used with enhanced tools such as Tasker or home automation. This allows tight control over notification elevation, which may, for example, interrupt a movie to alert of a problem, or break through the "do not disturb" setting for an important announcement. Also, some agents have limits, such as Pushbullet's, which can easily be exceeded in a short time with lots of chatter. I propose that UnRaid support an additional mechanism to control which agent is subscribed to whichever topic the user selects. The provided annotated screenshots should explain this well.
  19. Voilà! UnRaid 6.7 is released and this feature is now available!
  20. @BRiT I'm interested in native Telegram notification agent support instead of an SMTP workaround. There is only one SMTP setting for email in UnRaid, and it's already involved in a notification workflow; I'd rather not disturb that for the sake of enabling another agent that is potentially already inbound for support within UnRaid, if what @Kewjoe said above is true.
  21. Additionally, if you are using Two-Factor Authentication you will need to create an app password and use that instead of your Google Account credentials.
  22. I must have pulled a version of GitLab that had this short-lived defect. An update to the image corrected the behavior... yuck!
  23. Does anyone have a long-term solution to this problem where the container refuses to stay running? I've found a temporary fix, but an update wipes it out: https://oplatform.club/t/topic/50 The attached image shows logging details of the startup failure.
  24. All I know is that, in this case, the scripts failed to run after the update to 6.4. I'm not keen on the underlying changes made and what impacts they may or may not have. Xvfb worked before, but not after. Docker permissions were likely the same as before, but because Xvfb suddenly failed, I was forced to seek out an alternative, which uncovered the permissions issue.
  25. Just realized this probably belongs under Prerelease 6.4 Support.