LTM

Members
  • Posts: 34
  • Joined
  • Last visited

  1. I am not able to start the container. I get the following: "Unable to find image 'vault:latest' locally docker: Error response from daemon: manifest for vault:latest not found: manifest unknown: manifest unknown. See 'docker run --help'." I have tried :latest, :1.14, and :1.13, and they all end in the same error. I was able to successfully install another container after trying this one, so I don't believe it is something unraid- or system-related.
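     One possibility worth checking (an assumption, not something confirmed here): HashiCorp has been moving its Docker Hub images into the hashicorp/ namespace, so the tag may only resolve there. A quick test from the console:
       docker pull hashicorp/vault:1.14
     If that pulls, pointing the template's repository field at hashicorp/vault instead of vault should get past the manifest error.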
  2. Hey all, I was just wondering if anyone knew how to tell what is eating up the space on my USB drive. I have an 8GB USB drive that I am using, and it is 94% full. I browsed through it via the Main tab, but I don't see why it should be so full. When I downloaded a backup, it was only about 750MB in size. Worst-case scenario: if I were to take the USB drive, wipe it, and restore the backup, would everything run like nothing had happened?
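     A quick way to see where the space is actually going, assuming the flash drive is mounted at /boot as usual on Unraid, is to run this from the console:
       du -h --max-depth=1 /boot | sort -h
     The largest directories sort to the bottom, which makes the culprit easy to spot before deciding whether a wipe-and-restore is even needed.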
  3. I am having the same issue. I primarily want to use it as a TFTP server, but I would also like to play around with PXE boot. PXE seems to work to the point that it shows up as an option, but when I select it, it comes up with the error "EFI PXE Network (mac) boot failed." Overall, I am not sure if the container is working at all or if it is just the TFTP portion. I know OPNsense is configured correctly, because if I change the IP to point at my desktop instead, TFTP works as it should.
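     One way to check whether the container's TFTP side is responding at all, assuming a curl build with TFTP support (most have it); the IP and filename below are placeholders for the container's address and a file it serves:
       curl -o test.bin tftp://192.168.1.10/test.bin
     If that transfer also fails, the problem is the container or its port mapping rather than the PXE stage.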
  4. So I finally figured it out on my end. When 4.0 came out, they apparently switched from tinyproxy to privoxy. While tinyproxy used port 8888 (which is what my environment variable was set to), privoxy uses port 8118 inside the container. If you did set up an environment variable, make sure it is pointing at port 8118 in the container and not 8888.
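     As a rough sketch of the idea in plain docker terms, assuming the proxy is exposed through a port mapping (the Unraid template's port field works the same way): the host side can stay whatever you were already using; only the container side changes.
       # before:  -p 8888:8888   (tinyproxy)
       # after:   -p 8888:8118   (privoxy)
     If the port is instead set through an environment variable, the value to use is 8118.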
  5. I am also having problems with privoxy... I had everything set up correctly and working, but now it is not. Everything else is working correctly.
  6. For those of you running PiHole, there is a way to use it alongside LANCACHE without messing up your setup. In my case, I have certain devices allowed to bypass PiHole (it was driving my GF nuts). Changing my router's DNS settings to LANCACHE and then sending the upstream DNS to PiHole would not have worked for me, because then everything would have shown up as the same IP/MAC address, which would break my lists. What I ended up having to do was compile the lists into a dnsmasq format. Luckily, there was already a solution to this located here. Even better, it's an official solution! I tried it on Windows but ran into errors while compiling the list, so I ended up running it on the Raspberry Pi that is running PiHole.
       1. SSH in to your RPi.
       2. Clone the repository: "git clone https://github.com/uklans/cache-domains.git"
       3. Change to the folder that the config file is in: "cd ./cache-domains/scripts/"
       4. Change the configuration to your standards: "cp config.example.json config.json && sudo nano config.json". For me, I kept it simple and made everything the same IP (I am not sure what the benefit of splitting them up is...):
          {
            "ips": {
              "generic": "LANCACHE IP HERE (keep the quotes)"
            },
            "cache_domains": {
              "default": "generic"
            }
          }
          If you want to set up different IPs for each service, or multiple IPs for a single service, have a look at the example file for how to do that.
       5. Save the config file by hitting "ctrl+x", then "y", then "enter".
       6.0. If you do not have jq installed, you need to install it: "sudo apt-get install jq -y"
       6.1. Run the dnsmasq script to generate the appropriate files: "bash create-dnsmasq.sh"
       7. Copy the generated files into the PiHole dnsmasq directory: "sudo cp ./output/dnsmasq/*.conf /etc/dnsmasq.d/"
       8. Restart the pihole-FTL service: "sudo service pihole-FTL restart"
     And that should be it! One note on doing this: you need to change your upstream DNS to something other than PiHole, otherwise you will be creating an infinite loop (aka things will not download). I changed mine to 1.1.1.1.
  7. Worth a shot, I guess. You should see something like this in the logs when you first start the container: "2021/08/26 17:22:34 Using config file: /etc/vikunja/config.yml". And the docs list the possible file locations: https://vikunja.io/docs/config-options/#config-file-locations
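     A quick way to check for that line without scrolling the whole log (the container name here is a placeholder for whatever yours is called):
       docker logs vikunja-api 2>&1 | grep -i "config file"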
  8. I still haven't figured out how to do this. The documentation is crappy; the only admin settings I can find are for teams, where you can make a team member an admin of that team. I can't find anything about a global admin, if that makes sense. As for the config file, you have to create your own and mount it. What I ended up doing is creating two folders on the host:
       .../vikunja/data
       .../vikunja/config
     I mounted the config like so:
       Host: .../vikunja/config/
       Container: /etc/vikunja
     A sample config can be found here: https://kolaente.dev/vikunja/api/src/commit/dcddaab7b58ab2e03e0d0f3f0b771a1e6ec0dbce/config.yml.sample
     Name it "config.yml" and place it in .../vikunja/config on the host.
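     For what it's worth, a minimal sketch of that same mount in plain docker run terms (the host path and image name are assumptions; on Unraid this is just the host/container path pair in the template):
       docker run -d -v /mnt/user/appdata/vikunja/config:/etc/vikunja vikunja/api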
  9. For anyone having trouble with the new Vikunja containers, here are a couple of things I have found out:
       • You need both containers. Start the API container first; it is the one with the meat and potatoes.
       • You can use three databases: MySQL (mysql), SQLite (sqlite), and PostgreSQL (postgres).
       • Make sure you can see "⇨ http server started on [::]:3456" in the logs of the API container.
       • Now start the frontend container. When you get to the login page, you need to change the URL at the top of the main login form. Even though it looks correct, it is set to "/api/v1"; I had to enter the full URL, "http://xxx.xxx.xxx.xxx:3456/api/v1", before the register button would appear. I used the IP of the host, not the container IP.
       • Registering the first user seems to just create a regular user. I am not sure if there is an administrator role; I have not had time to really play with it yet.
       • I have not yet gotten a config file working while the application runs, but I did have to mount it using the container folder "/etc/vikunja" (this is in the API container).
       • And finally, the official documentation for this application is poor. I have only gotten to play with it for about 30 minutes after spending about an hour and a half just trying to get it to work...
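     A rough start-to-finish sketch of the above, assuming the images are vikunja/api and vikunja/frontend and that the frontend serves on port 80 inside its container (adjust names, ports, and paths to your setup):
       # start the API first; it needs to be reachable on 3456 before the frontend is useful
       docker run -d --name vikunja-api -p 3456:3456 vikunja/api
       # then the frontend on whatever host port you like
       docker run -d --name vikunja-frontend -p 8080:80 vikunja/frontend
     Then point the login form at http://<host-ip>:3456/api/v1 as described above.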
  10. Does anyone have a script that will check the docker logs for a specific error and restart a container if the error is present? In my case, Plex corrupts EasyMediaEncoder every now and then, which will not allow any media to play. The only way I have managed to fix it is to go in, manually delete the folder, and restart the container. I did create a script that does this for me, but I have to fire it manually. I am hoping there is a way I can monitor the logs every 5 minutes or so and search for (any portion of):
       Jobs: Exec of /config/Library/Application Support/Plex Media Server/Codecs/EasyAudioEncoder-1452-linux-x86_64/EasyAudioEncoder/EasyAudioEncoder failed. (13)
     and then fire the script to automatically delete the folder and restart Plex. If anyone has a similar script or any info to point me in the right direction, it would be much appreciated.
     --------------------------
     EDIT: Got it figured out:
       if docker logs --since 1m PlexMediaServer 2>&1 | grep -i "EasyAudioEncoder failed"; then
         echo "EMC Failed"
         echo "Stopping Plex container..."
         docker stop PlexMediaServer
         echo "Deleting Codecs folder..."
         rm -r "/mnt/user/docker/Plex-Media-Server/config/Library/Application Support/Plex Media Server/Codecs"
         echo "Starting Plex again..."
         docker start PlexMediaServer
         echo "Sending notification to UNRAID"
         /usr/local/emhttp/webGui/scripts/notify -e "EasyMediaEncoder failed!" -s "Plex Media Server" -d "EasyAudioEncoder failed. Check to see if it has been fixed." -i "alert"
         echo "Fixed!"
       fi
     It basically checks the last minute of the Plex docker logs for "EasyAudioEncoder failed". If it finds it, it stops Plex, deletes the EMC folder, and then restarts Plex.
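     To run a check like this every 5 minutes on Unraid, the User Scripts plugin can be given a custom cron schedule; the expression for every five minutes is:
       */5 * * * *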
  11. I meant to say "going to be discarded" in my last post... I was wording it differently and forgot to delete the "not". I guess just rally the troops? The only reason I can think of that they chose XML is that it would be easier to import data from an XML file than from a compose file. But a compose file would still be easy enough to import, since it has structured formatting. Plus, it would make it a lot easier for people who are new to unraid to get going faster. The other reason I think they went for XML is the extra data that is unraid-specific, like the description, icon URL, categories, etc. I can see the current way working for those who aren't savvy enough to learn how to create a compose file, or who just like doing things through a UI. But then again, I feel like it would be easy enough to let everyone choose between a UI that converts the fields into a compose file and a text field that lets users paste a compose file directly. But again, these are all moot points since the template section is going away.
  12. I think it's heading toward a point where we will be required to log in to this new control panel to be able to use our license. Even just once, that is a concern for me. Even in the initial post, it's stated that it's "necessary", but optional. What is wrong with allowing your customers to do it the current way? But in the bigger picture, there is a lack of communication about what data is sent and/or received while the accounts are connected. Is it just the key? Server status? How many files you have stored? What kind of files you have stored? Computer specs? Your network map? More? All of these things are easily doable. It's a privacy concern for people who don't want a constant connection to Limetech. It can allow them to collect data without you knowing, and who knows what data they would collect, because none of it is explained. Also, what if Limetech gets hacked? The attackers would then have access to everyone's server. If SSH is enabled, that by itself could cause absolute destruction of your server. That, in itself, is a VERY good reason why I don't want any kind of connection to my server. Also, have a read of the links in the quote below. The QNAP one happened only a month or two ago. Convenience is one thing, but security is another.
  13. I also do not like where this is heading. One of the reasons I chose unraid is that I can keep it local. The less I put out to the open world, the better. If this is the way forward, I will be staying on 6.9 until I can find and implement a solution that keeps me off the cloud as much as possible.
  14. So what happens if the flash drive fails? Are the templates backed up with the rest of the flash drive when you create a backup? Are they uploaded to the new UPC? Because if that is the case, then I don't want them uploaded. I would much rather keep doing what I am doing: making periodic backups and storing them on other devices on my network, plus a backup abroad. I'm sure I can automate this somehow, but I have not had the time to look into it. I honestly don't even like how CA templates work. I would much rather just make a compose file and do it that way instead of having to use the GUI or code it in XML. I would be able to get things working much faster by adding a line of code instead of going through the steps of editing an environment variable through the GUI. But I digress. Again, some of my containers are modified to my needs: settings, paths, and variables, which include passwords and sensitive info that I don't want getting out. Every time I edit a container, I have to start it, make sure things are working, go back into the edit screen, save the XML file, copy and paste it into VS Code, and git push it to my Gitea server. I do this in case my flash drive fails. It would be really nice to just be able to enter the repo URL and have all of my modified templates right there, ready to hit start and be back up and running when the time comes. Since the devs have decided to remove this feature already, I guess anything I say is just going to be discarded.
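     A rough sketch of automating that backup as a script, assuming the user templates live in /boot/config/plugins/dockerMan/templates-user (their usual location on the flash drive) and that a clone of the Gitea repo sits at a placeholder path:
       # copy the user templates into the local clone and push to Gitea
       cp /boot/config/plugins/dockerMan/templates-user/*.xml /mnt/user/backups/templates/
       cd /mnt/user/backups/templates
       git add -A
       git commit -m "template backup $(date +%F)"
       git push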
  15. I would like to propose a feature that modifies the "Template Repositories" section on the Docker page. I have containers that are modified to my needs that I would like to host on a private git server (using Gitea). I tried adding a template repository to this section, only to find that it does not work. I searched the forums and found a few (rather old) posts saying that you can only use GitHub to host template repositories, and that the repository has to be public. The problem I am seeing, and the reason I want to use my own server, is that if you post all of your templates on GitHub without going through and scrubbing your passwords, application keys, hashes, etc., you will be posting all of that publicly. In my case, I am hosting a Gitea instance on a separate device that is only accessible on my local network. I currently have all my XMLs saved to a repository on that server. It would be really nice to be able to just add the repo URL and restore all of my containers in case of a catastrophic failure. GitLab support would be another integration that I can see a lot of other people wanting.