yoleska

Members
  • Posts: 32
  • Joined
  • Last visited

  1. Just a quick reply that setting Settings -> Docker -> "Host access to custom networks" to "Enabled" worked for me too. Now I just have to create entries for the other 20+ docker containers, ugh. Note, I did NOT have to change custom bridge or IP settings. NPM is running with the 'custom: br0' network type, and it can see all the other docker containers on the 'bridge' or 'medianet' networks just fine.
  2. Well, crud. Looking at Status > Command-Line Flags, I see this: --storage.tsdb.path /prometheus. That should be /prometheus/data. I'm not sure of the best way to change this (a rough sketch of one way to override it is at the end of this list).
  3. I'm not really sure if this is the right place, but the last update to Prometheus wiped out all my data history. Come to find out, looking at the container config and inside the container, it's storing all the data files in the root of the container, not in /data. Per the container settings in Unraid, they are supposed to be stored in /prometheus/data, but they are stored in /prometheus. What this means is that today/yesterday's docker update wiped out all my data for the last couple of weeks. I didn't notice until today, when debugging, wondering where all my Grafana history had disappeared to. Unraid has the paths set up, but the container/image isn't listening for some reason. Is there any way I can fix this, to tell Prometheus to store the data in the actual /prometheus/data folder?
  4. Oh, I suppose that's possible. I was changing a bunch of things trying to get it to work right. Very well.
  5. Ok, I see one problem. When the container is installed, it adds a trailing forward slash to the config path in the container settings. After removing that trailing slash and placing the config file in the path as mentioned by @Kilrah, now it works... yay!
  6. mkdir /mnt/user/appdata/cloudflare-ddns-config
     wget -qO /mnt/user/appdata/cloudflare-ddns-config/config.json https://github.com/timothymiller/cloudflare-ddns/raw/master/config-example.json
     /mnt/user/appdata/cloudflare-ddns-config/config.json: Is a directory
     I'm assuming that means the config-example.json file ends up nested, like /mnt/user/appdata/cloudflare-ddns-config/config.json/config.json. Or is config.json supposed to be a file inside the same-named directory? I've tried both ways and keep getting the error "Error reading config.json" in the docker log (a sketch of a possible fix is at the end of this list). My config.json file looks like this (linter checked):
     {
       "cloudflare": [
         {
           "authentication": { "api_token": "redacted" },
           "zone_id": "redacted, yes the long-ass zone file descriptor",
           "subdomains": [
             { "name": "test1", "proxied": false },
             { "name": "test2", "proxied": false }
           ]
         }
       ],
       "a": true,
       "aaaa": true,
       "purgeUnknownRecords": false,
       "ttl": 300
     }
  7. The other application was running on a Linux VM that is also hosted by Unraid, but I think the app set its own permissions. It was Cyber-DL, and I'm thinking now the permission issue wasn't directly related to that app, but something funky I did. In either case, it's all resolved now after changing permissions on the video directories.
  8. I think I corrected the issue. For some reason, this one particular download folder was owned by the wrong user. After changing it to match the others, the files were deleted fine with no error (a sketch of the ownership fix is at the end of this list). Ignore me. Leaving this post here in case others run into this, as other Unraid apps don't have issues, so maybe this app is running with different perms?
  9. Oh there was a log message, but I'm not sure what it means. Other docker apps have no problems accessing this folder/files...
     22:39:29 => Failed to delete file '/storage/X/Cyber-DL/Sorted Downloads/Videos/good_movie.mp4', reason: Access to the path '/storage/X/Cyber-DL/Sorted Downloads/Videos/good_movie.mp4' is denied.
     Stacktrace:
       at System.IO.FileSystem.DeleteFile(String fullPath)
       at VDF.GUI.ViewModels.MainWindowVM.DeleteInternal(Boolean fromDisk, Boolean blackList, Boolean createSymbolLinksInstead, Boolean permanently) in /tmp/vdf/VDF.GUI/ViewModels/MainWindowVM.cs:line 911
  10. Seeing a problem where the files don't appear to be deleting. I choose "Select Lowest Duration/Quality", and after all the boxes are selected for the files I want to delete, I choose "Delete From Disk", but the selected items still show up in the list. In fact, the window at the bottom of the page shows me how many duplicates there are and "Total Size Removed: 0".
  11. Ok, I think I found the problem and a working solution. Replying in case others have this use case. The problem was that the JDownloader plugin I had downloaded was the incorrect one. Once I downloaded the correct one from the myjdownloader site https://my.jdownloader.org/apps/?ref=myjd_web , NOW it asks me for the user/pass for the cloud service. Now it works as expected: I right-click a link, choose "Download with JDownloader", and it sends it to the cloud, which then relays it to my connected server and adds it to the queue. Almost perfect. I wish I didn't have to rely on the cloud service to make this happen. It would be better if this docker container had the "get" ports enabled, either 3129 or 9666 (I think); then I could just point the browser extension directly at the local LAN IP and bypass the cloud MITM transaction. Oh well, it's working, so that's the important part.
  12. Yup, I have an account there, but it's still a manual effort to get links into it before they get sent to the docker instance. That's a nice feature, but not what I'm looking for. What I'm looking for is for the right-click context menu in the JDownloader extension for Chrome to send links directly to the docker instance. The Chrome extension does have a "Remote Server" option, so I assume it *can* talk to the docker instance, but I'm just missing a piece of the puzzle, maybe some parameters on the client or server side. Is that possible, or is it just wishful thinking?
  13. New topic, maybe discussed before, but I'm not reading through 16 pages. I have the Unraid docker image set up, no big settings changed. I installed the Chrome JDownloader plugin. It asks for a remote server, and I put in the server URL (the one I can reach from my web browser). After that I would assume I could just right-click a link, choose "Download Link", and have it passed to the JDownloader docker instance, but it does not. Instead it asks me to install this 5-year-old helper application (which Win11 thinks is malicious). Is that still the recommended way, or can I tweak some other setting to send the link to the download client from the browser? Edit: I installed the helper app (found one not as old), but I'm still unsure how to have the Chrome plugin talk to the docker container. It errors saying it's looking for an executable, but the executable isn't local to Chrome; it's the docker container.
  14. I think I just found my own answer. Lots of recommendations comparing Drive to Nextcloud, so I'll look into that.
  15. After moving from Synology to Unraid, I think one of the apps I used the most was their Cloud Station, now called "Drive", where there was a shared folder between any PC (local or on the WAN) and the NAS. Is there any application or solution similar for Unraid? Transferring small files between work and home was just easier when there was a common folder among them all. Or a Dropbox-like client would also work.
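
For the Prometheus storage path issue in posts 2 and 3, a rough sketch of one way to override the flag. It assumes the Unraid template maps a host appdata path to /prometheus/data inside the container and lets you append arguments to the container command (the "Post Arguments" field in the template's advanced view); the host paths and container name below are placeholders. The equivalent docker run fragment:

    # Sketch only: adjust paths and names to your own template.
    # Passing arguments after the image name replaces the image's default command,
    # so --config.file is restated along with the corrected storage path.
    docker run -d --name=prometheus \
      -p 9090:9090 \
      -v /mnt/user/appdata/prometheus/data:/prometheus/data \
      -v /mnt/user/appdata/prometheus/etc:/etc/prometheus \
      prom/prometheus \
      --config.file=/etc/prometheus/prometheus.yml \
      --storage.tsdb.path=/prometheus/data

The alternative with the same effect is to leave the flag alone and change the container-side path of the data mapping from /prometheus/data to /prometheus, so the host folder lines up with where the image actually writes.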
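
For the "Is a directory" error in post 6, a hedged guess at the cause and fix: if the container is started while config.json does not yet exist on the host, Docker creates a directory with that name at the mount point, and wget then cannot write the file. A cleanup sketch, assuming the container is stopped first:

    # Remove the directory Docker created in place of the file, then fetch the
    # example config as a real file and edit it.
    rm -r /mnt/user/appdata/cloudflare-ddns-config/config.json
    wget -qO /mnt/user/appdata/cloudflare-ddns-config/config.json \
      https://github.com/timothymiller/cloudflare-ddns/raw/master/config-example.json

Per post 5, also make sure the container's config path has no trailing slash before restarting it.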
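
For the failed deletions in posts 7-10, a sketch of the ownership change described there, run from the Unraid console. The host-side path is a placeholder (the error message only shows the container-side path under /storage), and nobody:users (99:100) is the usual owner for Unraid shares:

    # Placeholder path: substitute the share that is mapped into the container as /storage.
    chown -R nobody:users "/mnt/user/X/Cyber-DL/Sorted Downloads/Videos"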