[Support] - Storj v3 docker


MrChunky

Recommended Posts

2 hours ago, KrisMin said:

From what I have read, there is no hard cap on the storage size per node; the number is just a strong suggestion from the dev team.

Someone on the forum did some math with his years of statistics data and came to the conclusion that the theoretical limit would be around 40TB per IP subnet, if the IP filters work the way we think they do. That number is based on his statistics showing that ~5% of all stored data gets trashed, and therefore a pool of >40TB would never fill up, because 40TB is the point at which ingress = trashed. Of course, if the average ingress rate increases in the future, then the max pool size would also be larger.
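The steady-state reasoning above can be sketched in a few lines. Both figures are assumptions for illustration, not Storj-published numbers: ~2 TB/month of ingress per /24 subnet and ~5% of stored data trashed per month. The pool stops growing when trash outflow equals ingress:

```shell
# Sketch of the steady-state argument. Assumed figures (not official Storj
# statistics): ~2 TB/month ingress per subnet, ~5% of stored data trashed
# per month. Growth stops when S * trash_rate = ingress, so S = ingress / trash_rate.
INGRESS_TB=2
TRASH_RATE=0.05
awk -v i="$INGRESS_TB" -v t="$TRASH_RATE" \
    'BEGIN { printf "steady-state pool size: %.0f TB\n", i / t }'
```

With those assumed numbers the sketch prints 40 TB, matching the figure quoted above; a higher average ingress rate shifts the ceiling up proportionally.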
In our case (running nodes on docker), I would keep a single node at 24TB or less, because it's less risky. If one node gets a bad reputation for some reason, you still have other good ones to compensate, and you could easily increase their size if needed.

I started two identical nodes running on the same machine and the same network. Somehow one of them got an uptime warning (from one US satellite), which lowered its uptime score, even though they are identical in terms of availability and uptime. How that happened, I have no idea. Must be some kind of bug. If I see this happening again, I'll investigate it some more. Hopefully it was a one-off.

Awesome info, thanks for sharing.

 

Do you have a link to that storj forum? 

 

I think you're still suggesting that I run 2 nodes at once, but wait until the first one is fully vetted and operational before spinning up the second, and still not exceed 40TB total since there's likely no point, right?

Link to comment

Thanks!

 

I can confirm: when I started my second node, ingress on my 1st node slowed. I don't know if it slowed my vetting time. Do you know if I can check the number of passed audits? I just see 100%, not the actual count.

 

Why do you think five 8TB nodes would be more efficient than two 20TB nodes?

 

[screenshot attached]

Link to comment

Hi, thank you for this docker.

Since Storj says the node needs to update itself automatically and run the latest version:

Does this docker always download the latest version directly from Storj,

or do you need to update it on your side?

I want to be sure that if you're not around in 6-9 months, my node won't stop working or stop being updated.

 

Thank you

Link to comment
4 hours ago, cybex said:

Hi, thank you for this docker.

Since Storj says the node needs to update itself automatically and run the latest version:

Does this docker always download the latest version directly from Storj,

or do you need to update it on your side?

I want to be sure that if you're not around in 6-9 months, my node won't stop working or stop being updated.

 

Thank you

The docker will update automatically, assuming you have this function turned on in Unraid for all your docker containers. This is usually achieved through the Auto Update plugin.

 

Edited by MrChunky
Link to comment
7 hours ago, srfnmnk said:

Thanks!

 

I can confirm: when I started my second node, ingress on my 1st node slowed. I don't know if it slowed my vetting time. Do you know if I can check the number of passed audits? I just see 100%, not the actual count.

 

Why do you think five 8TB nodes would be more efficient than two 20TB nodes?

 

[screenshot attached]

I really do appreciate the passion for getting an optimal setup. However, this discussion should really be had on the Storj forums directly. The people there will be much more knowledgeable about these topics, and you can probably find the answers you're seeking there already.

Link to comment
5 hours ago, MrChunky said:

I really do appreciate the passion for getting an optimal setup. However, this discussion should really be had on the Storj forums directly. The people there will be much more knowledgeable about these topics, and you can probably find the answers you're seeking there already.

Agreed, thank you. Will move the deep Storj conversation over there.

Link to comment
1 hour ago, srfnmnk said:

Hi @Squid, is there any way yet to specify only certain docker containers for auto-update?

 

Thanks

Sure, you can turn auto-update on or off for every container individually via the Unraid Auto Update Applications plugin. Just go into the app (under Settings) and you'll find the Docker tab with auto-update toggle buttons there.

Edited by KrisMin
  • Like 1
Link to comment
On 1/14/2021 at 5:26 PM, KrisMin said:

I had the same issue. Apparently, when running for the first time,


-e SETUP=true --mount type=bind,source="/mnt/user/storj/<identityfolder>/",destination=/app/identity  --mount type=bind,source="/mnt/user/storj/<datafolder>/",destination=/app/config

is needed.

 

@KrisMin Thank you for pointing this out. I tested it, and it seems that something has indeed changed in the Unraid docker implementation since I last created a node. This argument is now necessary on the first run. I've added this info to the template and the topic.
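For anyone landing here later, the first-run vs. normal-run difference in the extra parameters looks roughly like this (the paths are examples from this thread; adjust them to your own shares):

```shell
# First run only: -e SETUP=true tells the storagenode to create its
# config layout under the mounted paths (example paths shown).
-e SETUP=true \
--mount type=bind,source="/mnt/user/storj/identity/storagenode/",destination=/app/identity \
--mount type=bind,source="/mnt/user/storj/data/",destination=/app/config

# Every run after setup: drop '-e SETUP=true' and keep only the two
# --mount arguments. Leaving SETUP=true in place makes the node exit
# with "storagenode configuration already exists".
```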

  • Like 1
Link to comment

Update about running this thing in "cache preferred" mode. It looks like Storj has a rule built in which returns an error and switches the node offline when the free space drops below ~450GB.
This rule makes no sense to me. Why...?

 

2021-01-27T19:29:00.600Z ERROR piecestore:cache error getting current used space: {"error": "context canceled; context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled"}
Error: piecestore monitor: disk space requirement not met

 

So I had to switch the cache off for my storj share. As soon as I did that, the node started working again.
Maybe there's some workaround for this? Would more experienced Unraid users even prefer to run such a thing on cache preferred? I suspect it would put less stress on the HDDs when done so.

Edited by KrisMin
Link to comment
17 hours ago, KrisMin said:

cache preferred

I think you shouldn't use cache preferred -- you should use cache "yes". Then mover will move the data over, and the df -h command will see the free space on your underlying drives, not the space on the cache. It's working for me with cache "yes". When I log into the docker image, I also see the proper space for the mount.

  • Like 1
Link to comment

Solved

 

 

Hi guys, I'm new to this.

 

I'm at the point of starting my docker, and I always get this error:

 

2021-01-28T16:04:52.630Z INFO Configuration loaded {"Location": "/app/config/config.yaml"}
Error: storagenode configuration already exists (/app/config)

 

 

My config in extra parameters:

-e SETUP=true --mount type=bind,source="/mnt/user/storj/identity/storagenode/",destination=/app/identity  --mount type=bind,source="/mnt/user/storj/config/",destination=/app/config

 

My bad, I forgot to remove the -e SETUP=true.

 

 

In the identity/storagenode folder I have all my .cert and .key files

 

and

 

and the config folder looks like this:

 

[screenshot attached]

 

Thanks all

Edited by francrouge
forgot to remove the -e SETUP=true
Link to comment
8 hours ago, srfnmnk said:

I think you shouldn't use cache preferred -- you should use cache "yes". Then mover will move the data over, and the df -h command will see the free space on your underlying drives, not the space on the cache. It's working for me with cache "yes". When I log into the docker image, I also see the proper space for the mount.

Thanks! I think that should be written in the OP -- to use Cache = yes.

Link to comment

I am curious how the mover doesn't corrupt or break the WAL or other database files. Maybe mover doesn't move locked files? I would prefer to keep the db files on the cache in /appconfig, but figuring out how to get the nested mounts to work got a bit iffy.

Link to comment
20 minutes ago, srfnmnk said:

I am curious how the mover doesn't corrupt or break the WAL or other database files. Maybe mover doesn't move locked files? I would prefer to keep the db files on the cache in /appconfig, but figuring out how to get the nested mounts to work got a bit iffy.

Yes, mover doesn't move locked files. Files currently in use by a process are locked; this is true for any process on the Unraid system as far as I know.

 

In theory this means that if Storj starts using files differently, it could break the mover functionality. It seems to be working fine at the moment... But I think in the long run it is quite risky to have mover running on the storj folders. One thing that could be done is to use mover for the data chunks but not for the folders where the databases are... Just speculating, though.
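One way to see which files mover would skip at any given moment (a sketch; it assumes the storj share lives at /mnt/user/storj and that lsof is installed):

```shell
# List files under the storj share currently held open by any process.
# Mover skips open files, so anything listed here (typically the .db / WAL
# files the node keeps open) stays put until the node releases it.
lsof +D /mnt/user/storj 2>/dev/null | awk 'NR > 1 { print $NF }' | sort -u
```

If only the database files show up here while the node is running, that lines up with the "mover handles the chunks, the databases stay on cache" behavior described above.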

Edited by MrChunky
Link to comment
7 hours ago, MrChunky said:

One thing that could be done is to use mover for the data chunks but not for the folders where the databases are

Right, that's what I wanted to do, but the organization of the files and data is nested and challenging to mount into the docker properly. As you said, it seems to be working; if that changes, I will let you know. I have periodic backups of the databases, so I could recover in the event of an issue.

Link to comment
  • 4 weeks later...
  • 1 month later...

Hello,

I am trying to install the docker, but I am receiving this error:

 

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='storagenode-v3' --net='br0' --ip='192.168.1.230' -e TZ="Europe/Athens" -e HOST_OS="Unraid" -e 'TCP_PORT_28967'='28967' -e 'WALLET'='0xa605c08349a0d6ee8972ec004c1a2da525c55a2d' -e 'EMAIL'='********' -e 'ADDRESS'='******:28967' -e 'STORAGE'='3TB' -e 'TCP_PORT_14002'='14002' -e 'BANDWIDTH'='' -e SETUP=true --mount type=bind,source="/mnt/user/storj/identity/storagenode/",destination=/app/identity  --mount type=bind,source="/mnt/user/storj3/data/“,destination=/app/config 'storjlabs/storagenode:latest'


The command failed.

Link to comment
2 hours ago, atithasos said:

Hello,

I am trying to install the docker, but I am receiving this error:

 

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='storagenode-v3' --net='br0' --ip='192.168.1.230' -e TZ="Europe/Athens" -e HOST_OS="Unraid" -e 'TCP_PORT_28967'='28967' -e 'WALLET'='0xa605c08349a0d6ee8972ec004c1a2da525c55a2d' -e 'EMAIL'='********' -e 'ADDRESS'='******:28967' -e 'STORAGE'='3TB' -e 'TCP_PORT_14002'='14002' -e 'BANDWIDTH'='' -e SETUP=true --mount type=bind,source="/mnt/user/storj/identity/storagenode/",destination=/app/identity  --mount type=bind,source="/mnt/user/storj3/data/“,destination=/app/config 'storjlabs/storagenode:latest'


The command failed.

 

Hard to say from just the information you have given. Maybe the TCP port is already in use, or maybe the mount paths are not working properly.

Link to comment
