karldonteljames

Everything posted by karldonteljames

  1. Oh poo. I opened the page earlier and came back to it. Clearly I closed the wrong one. Sorry all.
  2. Hello. I’m using Machinaris and I won my first two Chia coins two days ago. I can see that they are in my wallet, but they are not in my “Cold Wallet”. How do I get them into my Nucle wallet so that I have them available for exchange if I want to? Thanks in advance.
  3. Good evening. I won my first two coins the day before yesterday, and I can see that the coins are in my wallet, but they are not in my cold wallet. I’m using Nucle, but my wallet there is still empty, and I’m not sure if it is the best option or not. Any advice on exchanging one of these coins would be great. Thank you.
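For what it's worth, moving farmed coins into an external wallet like Nucle is just an on-chain send to that wallet's receive address. A minimal sketch with the standard chia CLI (the container name, amount, fee, and the xch1... address below are all placeholders, not real values):
```
# inside the Machinaris container (name is an example)
docker exec -it machinaris bash

# confirm the farmed coins are visible in the hot wallet
chia wallet show

# send 1 XCH to the receive address shown in Nucle (placeholder address)
chia wallet send -a 1 -t xch1yourNucleReceiveAddress -m 0.00001
```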
  4. I tried the first fix and it didn't make a difference. Just trying the update now. Not sure if it makes a difference, but the summary page says: "No blockchains found from any farmers. Just starting up? Please allow at least 15 minutes for blockchains to get started."
  5. Thanks. The output from chia farm summary is showing 25, and I have no other Chia devices on my network.
```
Farming status: Farming
Local Harvester
   25 plots of size: 2.474 TiB
Plot count for all harvesters: 25
Total size of plots: 2.474 TiB
Estimated network space: 22.529 EiB
Expected time to win: 5 years and 4 months
For details on farmed rewards and fees you should run 'chia start wallet' and 'chia wallet show'
```
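For context, that "Expected time to win" follows directly from the farm's share of total netspace. Rough back-of-the-envelope arithmetic (ignoring netspace growth, so treat it as a ballpark only):
```
share of netspace ≈ 2.474 TiB / 22.529 EiB
                  = 2.474 / (22.529 × 2^20) ≈ 1.05e-7
blocks per day    ≈ 4,608 (Chia mainnet target)
expected days     ≈ 1 / (1.05e-7 × 4,608) ≈ 2,070 days
```
which is roughly the five-and-a-bit years the CLI reports.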
  6. Morning all. Is there a problem with the dashboard not updating the number of plots and the disk size? I can see in the graph that it is detecting new plots, but the dashboard is not updating. I've been using the same plots for the last year, but I'm just trying to add a few more now that I have better hardware to plot on.
  7. Are there any plans for TubeSync to support Emby as a notification target? I also wondered if it is possible to use a subfolder under the channel with the year (basically so it can be seen as a season in Emby)? I tried the following, but even if I create the folder (2022 in this case), TubeSync throws an error saying the output file doesn't exist: /{yyyy}/{yyyy_mm_dd} - {title_full}.{ext} Edit: Removing the preceding '/' allows the video to go into the correct folder (see the template below), however the supporting metadata (thumbnail and NFO) is stored in the folder above. I've tried adding the variable to the directory, but that just creates a new folder called {yyyy}, so I assume it isn't parsed in the location field. Is there a way around this at all? Also, is it possible to run a scheduled task manually?
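For reference, the media-file template that ended up working is the same one minus the leading slash (these are the variables from my setup above, so treat this as an illustration rather than guaranteed TubeSync syntax):
```
{yyyy}/{yyyy_mm_dd} - {title_full}.{ext}
```
The remaining problem is that the thumbnail and NFO still land one folder up, next to the {yyyy} folder rather than inside it.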
  8. Odd. I changed the download path to a folder without uppercase letters and started the container, which started OK. I then changed it back to the original path, and it's running OK.
  9. Not sure if there was a solution, but I too am getting the following error, no matter how many times I start the container:
```
Starting loop....
Checking for new Videos
Traceback (most recent call last):
  File "main.py", line 8, in <module>
    pytubDef.loop()
  File "/app/pytubDef/__init__.py", line 202, in loop
    channelArray = returnMonitoredChannels()
  File "/app/pytubDef/__init__.py", line 256, in returnMonitoredChannels
    channelURLs = monitoredChannelsFile.readlines()
io.UnsupportedOperation: not readable
```
When the docker image is set up I can see the following:
```
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='auto-yt-dl' --net='br0' --ip='192.168.10.105' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'TCP_PORT_5000'='5000' -v '/mnt/user/MediaYouTube/':'/app/Downloads':'rw,slave' -v '/mnt/user/appdata/auto-yt-dl':'/app/data':'rw' 'guhu007/auto-yt-dl'
```
  10. Have I misconfigured something? The "mainnet" folder within the Machinaris appdata folder has grown to a huge 27 GB. Is this expected behaviour? I wasn't sure if I could move this onto one of the disks containing the plots?
  11. Morning, I'm trying to migrate my InfluxDB from 1.8 to v2. I've followed the information here, but I'm not sure that the data has converted; I don't see any major disk activity, and don't see anything in the logs. I want to try and retain the few years of data I have if possible. I've added the following to my docker image: I've created a copy of the influx folder and called it influxdb2, and set up another instance of the influx docker, giving it a different name and mapping the above folder, so I can keep 1.8 connected to Home Assistant until I've made sure that I'm doing the process correctly. Any advice on how I can migrate the data please? EDIT: There was a .bolt file, so I removed that, and now the logs are showing that it is doing something.
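For anyone attempting the same migration: my understanding of the official influxdb:2.x image is that it can run the 1.x-to-2.x upgrade itself on first start when pointed at both data directories, and that it skips the upgrade if a bolt database already exists at the target (which would explain the EDIT above). A sketch, with paths, port, tag, and credentials as placeholders from my setup rather than recommendations:
```
docker run -d --name influxdb2 \
  -p 8087:8086 \
  -v /mnt/user/appdata/influxdb:/var/lib/influxdb \
  -v /mnt/user/appdata/influxdb2:/var/lib/influxdb2 \
  -e DOCKER_INFLUXDB_INIT_MODE=upgrade \
  -e DOCKER_INFLUXDB_INIT_USERNAME=admin \
  -e DOCKER_INFLUXDB_INIT_PASSWORD=change-me-please \
  -e DOCKER_INFLUXDB_INIT_ORG=home \
  -e DOCKER_INFLUXDB_INIT_BUCKET=homeassistant \
  influxdb:2.7
```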
  12. No, I didn't put all four in. As I understand it, won't this allow all unauthenticated access?
  13. I've managed to get the graph showing in Home Assistant, but Grafana requires an additional login, which is one of the steps I'm trying to avoid.
  14. Afternoon all, I'm trying to embed some graphs into Home Assistant, and it looks like this docker version doesn't use grafana.ini but environment variables (unless I've misunderstood). According to this page I need to add some settings; do I add each one of these lines as a variable in the docker image? From what I understand these changes allow full unauthenticated access to the Grafana pages, which I obviously do not want. I do access and view the graphs using an external URL on occasion, and find it really helpful to be able to do this. Can I allow one IP address (my Home Assistant IP) to pull the data from Grafana, but still keep other addresses secure? Thanks in advance.
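For anyone else doing this: each grafana.ini setting maps onto an environment variable named GF_<SECTION>_<KEY>, so the usual embedding settings become variables like the ones below (the anonymous-auth pair is exactly the unauthenticated access in question):
```
# [security] allow_embedding = true
GF_SECURITY_ALLOW_EMBEDDING=true
# [auth.anonymous] enabled = true  <- this is the part that opens anonymous access
GF_AUTH_ANONYMOUS_ENABLED=true
GF_AUTH_ANONYMOUS_ORG_ROLE=Viewer
```
As far as I can tell Grafana has no built-in per-IP allow list, so restricting anonymous access to just the Home Assistant IP would have to happen in front of Grafana, in a reverse proxy or firewall rule.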
  15. After moving from a pfSense device to a UDMP, I’m happy with most of the offering, with the exception of no OpenVPN. This solution would have meant not having to implement more hardware. Part of the reason I went for a UDMP was to slim down the hardware I’m using, not remove one device and have to add two. Unraid runs all the time anyway, so having this in a docker image seems like an ideal solution, especially as I can put the OpenVPN docker in my DMZ and tunnel traffic without too many concerns.
  16. I'm running a UDM Pro, but until I get the container to actually run as expected on its own I'm not able to help. Are you using the UDMP with classic settings or modern settings? It might be worth double-checking that the rule is enabled. It might also be worth trying to run in host mode rather than bridge mode; I'm not 100% sure, but I think using it in bridge mode affects the way routing works.
  17. Same issue here.
```
Sorry, a session error has occurred
It is possible that your session has expired or your login credentials do not allow access to this resource.
See error text below for further details:
SESSION ERROR: SESSION: Your session has expired, please reauthenticate (9007)
```
  18. Thanks, I’ll take another look at it. Is it possible to have it back up to Google Drive or OneDrive, or would I need to use rclone for that? Does it back up in an open file format, so I could do an individual file / docker restore?
  19. It’s been a while since I used CA Backup, and the last time I did it stopped all dockers and VMs; is that still the case? I’m looking to back up my appdata and Unraid data to my Google Drive or OneDrive, and trying to think of the best way of doing that without stopping all of my containers. Ideas and suggestions are appreciated.
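For the off-site half, rclone can push a local backup share to Google Drive or OneDrive once a remote is configured. A minimal sketch (the remote name gdrive: and the paths are placeholders from my setup):
```
# one-off: interactive wizard to authorise a Google Drive (or OneDrive) remote
rclone config

# preview first, then sync the local backup share to the cloud remote
rclone sync /mnt/user/backups gdrive:unraid-backups --dry-run
rclone sync /mnt/user/backups gdrive:unraid-backups
```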
  20. I'm seeing an issue where I cannot connect to Deluge; I'm getting connection refused. When I take a look at the logs I can see the following every couple of minutes:
```
DEPRECATED OPTION: --cipher set to 'aes-256-gcm' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'aes-256-gcm' to --data-ciphers or change --cipher 'aes-256-gcm' to --data-ciphers-fallback 'aes-256-gcm' to silence this warning.
```
The top of my config is set up as below:
```
client
dev tun
proto udp
remote sweden.privacy.network 1198
remote denmark.privacy.network 1198
remote man.privacy.network 1198
remote nl-amsterdam.privacy.network 1198
remote no.privacy.network 1198
remote brussels.privacy.network 1198
remote lu.privacy.network 1198
remote malta.privacy.network 1198
remote monaco.privacy.network 1198
resolv-retry infinite
nobind
persist-key
cipher aes-256-gcm
ncp-disable
auth sha1
tls-client
remote-cert-tls server
auth-user-pass credentials.conf
compress
verb 1
<crl-verify>
```
Any advice is appreciated.
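Going by the warning text itself, the deprecated cipher/ncp-disable pair can be swapped for the newer directives. A sketch, assuming the container ships OpenVPN 2.5+ (this silences the deprecation; the connection refused error may well be a separate problem with the tunnel not coming up):
```
# replace 'cipher aes-256-gcm' and remove 'ncp-disable', then add:
data-ciphers AES-256-GCM:AES-128-GCM
data-ciphers-fallback aes-256-gcm
```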
  21. All went without a hitch. All drives replaced and upgraded. I did one drive at a time (I know I probably could have done two at once, but wanted to play it safe). Each new 10 TB drive took about 22 hours to rebuild, so all in it took a little over a week to replace all eight drives. The only other question I had was about automatically removing duplicates.
  22. That's fine, I'll stop my backup from running for a couple of days; that isn't a huge issue, as I'm not taking many pictures at the moment anyway. I don't think the cache is protected by the parity (I have two cache drives, so I think they protect each other?), and all of my docker appdata is running from there, so I can't see that it will make a huge difference. One other question: I ran some advanced tests on my Unraid server using Fix Common Problems, and it has detected multiple duplicate files across a few of the drives. Is it possible to automatically remove the duplicates? I've tried to run dupeGuru, but it seems to lock up and not report anything.
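Since dupeGuru keeps locking up, one command-line alternative is fdupes, if it's available on the server (e.g. via a plugin such as NerdTools; the share path below is a placeholder):
```
# list duplicate files recursively without deleting anything
fdupes -r /mnt/user/Pictures

# interactive delete: prompts for which copy of each set to keep
fdupes -rd /mnt/user/Pictures
```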
  23. It looks pretty straightforward:
Shutdown > Replace one parity drive > Restart > Assign new drive > Let Unraid rebuild.
Shutdown > Replace second parity drive > Restart > Assign new drive > Let Unraid rebuild.
Shutdown > Replace one non-parity drive > Restart > Assign new drive > Let Unraid rebuild > Repeat until all drives are upgraded.
During this process I'm assuming that I cannot use any of my dockers; is that correct? All of my dockers are running on my cache drives, which will not be replaced. Is there any way I can continue to use my dockers during the rebuild process?
  24. Thank you. I'll read through this. My plan is always to keep the disks as they are until the replacements are confirmed to be working ok.