[Support] - Storj v3 docker


Recommended Posts

17 hours ago, Jack8COke said:

Luckily I hadn't deleted the old cache drive, so I could restore my identity folder. Somehow the key files had not been copied; I don't know why, but copying them with cp -r * fixed it. Now it works 🙂

Great!

Link to comment
  • 2 weeks later...
On 1/28/2023 at 6:09 AM, nerbonne said:

 

Did anyone ever solve enabling zksync?

I was just messing around with this and figured it out so I thought I'd post it here in case others are still struggling.

 

You need to put "--operator.wallet-features=zksync" in the "Post Arguments:" box of the container config. Make sure you are in the advanced view, or the box won't be visible. Press Apply and you'll see "zkSync is opted-in" under the payout section of the Storj WebUI.
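For reference, Unraid appends the "Post Arguments" text after the image name in the docker run command it generates, so the equivalent CLI call looks roughly like this. This is only a sketch: the real template adds many more flags, and `storjlabs/storagenode:latest` is the assumed image name.

```shell
# Unraid appends "Post Arguments" after the image name; sketch of the result.
POST_ARGS="--operator.wallet-features=zksync"

# Echo rather than run, so the full command can be reviewed first.
echo docker run -d --name storagenode storjlabs/storagenode:latest "$POST_ARGS"
```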

Link to comment
12 hours ago, Sokar said:

I was just messing around with this and figured it out so I thought I'd post it here in case others are still struggling.

 

You need to put "--operator.wallet-features=zksync" in the "Post Arguments:" box of the container config. Make sure you are in the advanced view, or the box won't be visible. Press Apply and you'll see "zkSync is opted-in" under the payout section of the Storj WebUI.

That's weird that it worked for you. When I tried it, my template wouldn't save, but I did end up getting it to work by editing the config.yaml inside the container (per Storj documentation, either way is fine). This is the line needed in config.yaml:

 

operator.wallet-features: ["zksync"]

 

Link to comment

Hoping someone may be able to help point me in the right direction.  Thanks!

 

I've been running my node for over a year and noticed today that it was offline and an update was available. I applied the update and am now unable to start the node. I've verified all my associated drives are mounted and was able to find the info.db file at /mnt/disk3/Storj_a/data/storage/info.db.

 

Error: Error starting master database on storagenode: database: info opening file "config/storage/info.db" failed: unable to open database file: no such file or directory
        storj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:331
        storj.io/storj/storagenode/storagenodedb.(*DB).openExistingDatabase:308
        storj.io/storj/storagenode/storagenodedb.(*DB).openDatabases:283
        storj.io/storj/storagenode/storagenodedb.OpenExisting:250
        main.cmdRun:193
        storj.io/private/process.cleanup.func1.4:377
        storj.io/private/process.cleanup.func1:395
        github.com/spf13/cobra.(*Command).execute:852
        github.com/spf13/cobra.(*Command).ExecuteC:960
        github.com/spf13/cobra.(*Command).Execute:897
        storj.io/private/process.ExecWithCustomConfigAndLogger:92
        main.main:478
        runtime.main:250

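The path in the error, `config/storage/info.db`, is the path *inside* the container, so the likely culprit is the container's config-path mapping rather than the database file itself. A quick way to compare the two sides, assuming the container is named `storagenode` and using a hypothetical `check_db` helper:

```shell
# check_db: hypothetical helper that verifies a host path holds the node DB.
check_db() {
  if [ -f "$1/storage/info.db" ]; then echo ok; else echo missing; fi
}

HOST_CONFIG="/mnt/disk3/Storj_a/data"   # host path believed to be mapped in
check_db "$HOST_CONFIG"

# Compare against what the container actually mounts:
if command -v docker >/dev/null 2>&1; then
  docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' storagenode
fi
```

If the mount listing doesn't show that host path mapped to the container's config directory, fixing the path in the template should resolve the error.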
Link to comment
On 3/15/2023 at 6:00 PM, nerbonne said:

That's weird that it worked for you. When I tried it, my template wouldn't save, but I did end up getting it to work by editing the config.yaml inside the container (per Storj documentation, either way is fine).  This is the line needed for the config.yaml:

 

operator.wallet-features: ["zksync"]

 

I will add it to the docu

Link to comment
  • 5 weeks later...

I posted to official Storj and I didn't get any answers there ... posting here too...

 

https://forum.storj.io/t/misconfigured-but-everything-is-set-right/22308/9

 

I also tried adding the flag -p 28967:28967/udp to the Extra Parameters for launch, but that still doesn't enable UDP into the container...

 

Any ideas? TCP works, UDP does not.

 

And even weirder: ping works.

 

[Attached screenshot: Pingdom ping results]

Edited by Mind Dragon
Link to comment

I'm having a QUIC issue where it constantly goes to "misconfigured". I've checked the network gear and port forwarding is correct. Sometimes when I restart the Storj docker it works for minutes or hours, then it goes out again. I'm not sure exactly what's happening, but in the Docker tab I notice the port mapping (app to host) shows "unknown IP:28967 <-> local IP:28967". I assume the unknown IP should be my WAN IP?
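One way to see what the container actually got is `docker port`, which prints the host side of each published port. This is normally `0.0.0.0` (all interfaces) rather than your WAN IP, since the WAN address only enters the picture at the router. A sketch, assuming the container is named `storagenode`:

```shell
CONTAINER=storagenode   # assumed container name; adjust to match yours

if command -v docker >/dev/null 2>&1; then
  # Host address/port each container port is published on; expect 0.0.0.0:28967.
  docker port "$CONTAINER" 28967/tcp
  docker port "$CONTAINER" 28967/udp
fi
```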

 

 

Link to comment
  • 2 weeks later...

Hi all, I've been running the Storj docker app for almost a year and it has been largely great, but recently the container has been failing and restarting a lot, 4+ times a day. It usually restarts in twos as well, such as at 2:00am and again at 2:10am, then it's fine for a few hours. I haven't really done any troubleshooting with crashing Unraid docker containers before, and I'm having trouble finding where the logs for the failed container(s) would be stored; the log UI only shows the active container. Can someone point me to where useful logs would be?

 

If it helps debug the issue, it also seems to happen every 2 to 4 weeks. I haven't had it restart in 2 weeks and now it's restarted 4 times. 
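Docker keeps a container's log stream across restarts (until the container itself is recreated), so the lines leading up to each crash are usually still retrievable with `docker logs`. A sketch, assuming the container is named `storagenode`:

```shell
CONTAINER=storagenode   # assumed container name

if command -v docker >/dev/null 2>&1; then
  # How often it has restarted and when it last came up:
  docker inspect -f '{{.RestartCount}} restarts, last started {{.State.StartedAt}}' "$CONTAINER"

  # Timestamped tail of the log; the lines just before a gap in the
  # timestamps are the ones that describe the crash.
  docker logs --timestamps --tail 500 "$CONTAINER" 2>&1 | tail -n 50
fi
```

The Unraid syslog also records the container's network interface going down at each restart, which helps pin down the times to look at.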

Edited by Hkup859
  • Upvote 1
Link to comment
  • 2 months later...

I am getting a port-forwarding error. I'm using DuckDNS, installed via the DuckDNS app from Community Applications within Unraid.

 

Any ideas on what to do? The ports on the router have already been forwarded.

 

Edit: ignore the question, the port was wrong on DuckDNS.

 

Edited by articulateape
Link to comment
3 hours ago, articulateape said:

I am getting a port-forwarding error. I'm using DuckDNS, installed via the DuckDNS app from Community Applications within Unraid.

 

Any ideas on what to do? The ports on the router have already been forwarded.

 

Edit: ignore the question, the port was wrong on DuckDNS.

 

Why are you using DuckDNS anyway? Its uptime is really bad, and what purpose does it serve? Storj doesn't need dynamic DNS.

Link to comment
7 hours ago, articulateape said:

 

My IP is dynamic, so I thought I needed something like DuckDNS; if not, I can remove it. I've seen the docs say to use No-IP.

Storj works fine on a dynamic IP as long as you have port forwarding set up properly. I think the only thing you'd need dynamic DNS for is accessing the admin page, but personally I'd use WireGuard so it isn't exposed to the internet. But if you are set on dynamic DNS, No-IP is a good option. Anything is better than DuckDNS.

Link to comment
4 hours ago, nerbonne said:

But if you are set on dynamic DNS, No-IP is a good option. Anything is better than DuckDNS.

 

Appreciate the advice here. What makes No-IP better than DuckDNS? Aren't they basically the same?

 

Quote

Storj works fine on a dynamic IP as long as you have port forwarding set up properly.

 

I thought port forwarding was just opening up ports on the router, and that I'd also need a dynamic DNS service like No-IP if my ISP gives me a dynamic IP. Maybe I got this all wrong?

Link to comment
2 hours ago, articulateape said:

I thought port forwarding was just opening up ports on the router, and that I'd also need a dynamic DNS service like No-IP if my ISP gives me a dynamic IP. Maybe I got this all wrong?

That's correct for websites etc. to be reachable by a human-friendly name instead of an IP, but Storj doesn't care about that.

Link to comment
7 hours ago, articulateape said:

 

Appreciate the advice here. What makes No-IP better than DuckDNS? Aren't they basically the same?

 

 

I thought port forwarding was just opening up ports on the router, and that I'd also need a dynamic DNS service like No-IP if my ISP gives me a dynamic IP. Maybe I got this all wrong?

From past experience, DuckDNS goes down ALL the time.

Link to comment
1 hour ago, articulateape said:

 

That works for me, but then what do I put here?

[Attached screenshot: container update settings]

Just put your public IP, which, even when dynamic, shouldn't change that often. You'll have to edit the container every time it changes. If it changes frequently, use a dynamic DNS service, just not DuckDNS.
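If you want to script the "edit on change" step, you can look your public IP up from inside the server. `ifconfig.me` is one well-known lookup service, and `is_ipv4` is a hypothetical helper for sanity-checking the answer:

```shell
# is_ipv4: crude IPv4 shape check (hypothetical helper, not part of Storj).
is_ipv4() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

# Ask a public lookup service; tolerate failure (no network, service down).
WAN_IP=$(curl -s --max-time 5 https://ifconfig.me || true)
if is_ipv4 "$WAN_IP"; then
  echo "Current WAN IP: $WAN_IP"
else
  echo "Could not determine WAN IP"
fi
```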

Link to comment
  • 4 weeks later...

I have a bunch of old, slow HDDs in a pool that supply my Storj node with space. It's my way of keeping the landfills less crowded! This has been working fine for a couple of years now. I have been having stability issues with my server, so I'm paying attention to the logs a lot now.
I have noticed the Storj docker seems to keep resetting itself (but not fully... just the networking side?)!
 

Quote

Aug 23 08:10:28 (MY SERVER NAME) kernel: 
Aug 23 08:22:02 (MY SERVER NAME) kernel: docker0: port 3(veth47b1fa7) entered disabled state
Aug 23 08:22:02 (MY SERVER NAME) kernel: vethcf14aef: renamed from eth0
Aug 23 08:22:02 (MY SERVER NAME) kernel: docker0: port 3(veth47b1fa7) entered disabled state
Aug 23 08:22:02 (MY SERVER NAME) kernel: device veth47b1fa7 left promiscuous mode
Aug 23 08:22:02 (MY SERVER NAME) kernel: docker0: port 3(veth47b1fa7) entered disabled state
Aug 23 08:22:03 (MY SERVER NAME) kernel: docker0: port 3(veth4824638) entered blocking state
Aug 23 08:22:03 (MY SERVER NAME) kernel: docker0: port 3(veth4824638) entered disabled state
Aug 23 08:22:03 (MY SERVER NAME) kernel: device veth4824638 entered promiscuous mode
Aug 23 08:22:03 (MY SERVER NAME) kernel: docker0: port 3(veth4824638) entered blocking state
Aug 23 08:22:03 (MY SERVER NAME) kernel: docker0: port 3(veth4824638) entered forwarding state
Aug 23 08:22:03 (MY SERVER NAME) kernel: docker0: port 3(veth4824638) entered disabled state
Aug 23 08:22:04 (MY SERVER NAME) kernel: eth0: renamed from veth8461dd3
Aug 23 08:22:04 (MY SERVER NAME) kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth4824638: link becomes ready
Aug 23 08:22:04 (MY SERVER NAME) kernel: docker0: port 3(veth4824638) entered blocking state
Aug 23 08:22:04 (MY SERVER NAME) kernel: docker0: port 3(veth4824638) entered forwarding state


No other containers do this. They are all set up in bridge mode on a 4-port br0 bond in 802.3ad (mode 4), and yes, I have set up the LAG on the managed switch; it's all working as far as I can tell.

The docker's log seems normal, with the occasional yellow "upload/download failed" warning that we have learned to ignore, since it just means my node lost the file race.

Why is this happening? Anyone have any light to shed?


Additionally, is there any way to tell which container corresponds to "docker0: portX" in the system logs? It seems to be unrelated to anything in the Docker tab.
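There is a way to match them up: inside each container, `/sys/class/net/eth0/iflink` holds the interface index of the host-side veth peer, so comparing it against the host's `/sys/class/net/veth*/ifindex` files identifies the container behind each `docker0: portX(vethXXXX)` line. A sketch (the `host_veth_for` helper name is mine):

```shell
# host_veth_for: given a peer ifindex, print the matching host veth path.
host_veth_for() {
  grep -l "^$1\$" /sys/class/net/veth*/ifindex 2>/dev/null
}

if command -v docker >/dev/null 2>&1; then
  for c in $(docker ps --format '{{.Names}}'); do
    # Each container's eth0 records the ifindex of its host-side veth peer.
    idx=$(docker exec "$c" cat /sys/class/net/eth0/iflink 2>/dev/null) || continue
    echo "$c -> $(host_veth_for "$idx")"
  done
fi
```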

Thanks for y'all's help

Link to comment
On 5/2/2023 at 6:58 AM, Hkup859 said:

Hi all, I've been running the Storj docker app for almost a year and it has been largely great, but recently the container has been failing and restarting a lot, 4+ times a day. It usually restarts in twos as well, such as at 2:00am and again at 2:10am, then it's fine for a few hours. I haven't really done any troubleshooting with crashing Unraid docker containers before, and I'm having trouble finding where the logs for the failed container(s) would be stored; the log UI only shows the active container. Can someone point me to where useful logs would be?

 

If it helps debug the issue, it also seems to happen every 2 to 4 weeks. I haven't had it restart in 2 weeks and now it's restarted 4 times. 

Me too. I just noticed it happening. I rebuilt the docker image and installed everything again... it's still doing it, every few minutes sometimes! Quite scary, as this is a 2-year-old node and my suspension/online score is dropping fast! No one seems to have answers to it, though...

 

Link to comment
4 hours ago, miicar said:

Me too. I just noticed it happening. I rebuilt the docker image and installed everything again... it's still doing it, every few minutes sometimes! Quite scary, as this is a 2-year-old node and my suspension/online score is dropping fast! No one seems to have answers to it, though...

 

What do the logs say when it crashes? When mine was crashing all the time it was because of a corrupt file. Once I fixed that it was fine.

Link to comment
13 hours ago, Kilrah said:

Note that it'll occasionally restart on its own as it self-updates the app; it shouldn't be that frequent, but AFAIK it's happened twice in the past couple of days. When you first posted I had several days of uptime, though.

No, this is several times an hour... the GUI sometimes says "uptime 10 mins"... it seems like a soft crash too.
The logs from Unraid show:
"Aug 27 19:24:35 (my server name) kernel: docker0: port 6(veth61056a3) entered disabled state
Aug 27 19:24:35 (my server name) kernel: vethba5c05c: renamed from eth0
Aug 27 19:24:35 (my server name) kernel: docker0: port 6(veth61056a3) entered disabled state
Aug 27 19:24:35 (my server name) kernel: device veth61056a3 left promiscuous mode
Aug 27 19:24:35 (my server name) kernel: docker0: port 6(veth61056a3) entered disabled state
Aug 27 19:24:38 (my server name) kernel: docker0: port 6(veth3f3da73) entered blocking state
Aug 27 19:24:38 (my server name) kernel: docker0: port 6(veth3f3da73) entered disabled state
Aug 27 19:24:38 (my server name) kernel: device veth3f3da73 entered promiscuous mode
Aug 27 19:24:39 (my server name) kernel: eth0: renamed from veth4ade7af
Aug 27 19:24:39 (my server name) kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth3f3da73: link becomes ready
Aug 27 19:24:39 (my server name) kernel: docker0: port 6(veth3f3da73) entered blocking state
Aug 27 19:24:39 (my server name) kernel: docker0: port 6(veth3f3da73) entered forwarding state"

 

It's hard to capture the Storj logs unless I sit here watching the Unraid logs to see it happen in real time and quickly copy the Storj logs; otherwise, normal operation quickly fills the log with routine download/upload entries... although my score in the GUI is definitely suffering!
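Rather than racing the restart, you can persist the container's log stream to a file on the array. In this sketch, `storagenode` is the assumed container name and the log path is just an example:

```shell
CONTAINER=storagenode                                    # assumed container name
LOGFILE="/mnt/user/appdata/storj-$(date +%Y%m%d).log"    # example destination

if command -v docker >/dev/null 2>&1; then
  # Snapshot everything docker has kept since the container was last
  # recreated; run "docker logs -f" instead to keep streaming until the
  # next crash.
  docker logs --timestamps "$CONTAINER" >>"$LOGFILE" 2>&1
  echo "Captured to $LOGFILE"
fi
```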

20 hours ago, nerbonne said:

What do the logs say when it crashes? When mine was crashing all the time it was because of a corrupt file. Once I fixed that it was fine.

What was that corrupt file? As I said, I re-installed the node and it's still doing this.

I was thinking it's because my slow array takes too long to feed it info (I had DB corruption issues when the DB was on the same array as the data, but I have since moved the databases to another pool of NVMe drives, and things have been great ever since). What else, other than data, gets read? Maybe I should move everything but the data off that slow array. Weird that this started happening AFTER I removed the slowest disk from the mix (and that disk is now in another PC running a Windows-based node... running fine).

Probably unrelated, but I don't get why I have random IPv6 stuff in my logs when I have IPv6 disabled on the server, the router, and the switch... I've just ignored it, since it doesn't seem to do anything bad; it's just annoying to look at.

Edited by miicar
Link to comment
