Jcloud

[Support] QDirStat, Jcloud - cryptoCoin templates



Posted (edited)

It seems that Unraid does not play well with some aspects of the Storj v3 container. In particular, the volume mapping seems to be broken; I am not sure why or how. My node failed twice in the last month, and currently I have no reliable solution for running a v3 node on Unraid's Docker. The config below is just the latest iteration, and it does not seem to work reliably.

 

On 8/7/2019 at 12:09 PM, nuhll said:

 

Are you someone from storj?


WHY you dont provide templates?

I am not; the second paragraph is copy-pasted, which is why it sounds like I am talking in the first person.

 

Since the Unraid template editor does not support all the needed config options, I would imagine they have no way of providing their own plug-and-play template. It is, however, trivial to configure it yourself.

 

For anyone who wants to do it themselves:

1. Go to Community Apps, type in "storj", then click "Get More Results From DockerHub" and click on the official storagenode template by storjlabs.


The repository should be storjlabs/storagenode:alpha (or storjlabs/storagenode:latest; make sure to include a tag).

 

2. Add the following values manually by clicking "Add another Path, Port, Variable, Label or Device":

Port: Host Port: 28967, Connection Type: TCP

Port: Host Port: 28967, Connection Type: UDP

Variable: Key: WALLET, Value: YOUR_WALLET_ADDRESS

Variable: Key: EMAIL, Value: YOUR_EMAIL_ADDRESS

Variable: Key: ADDRESS, Value: YOUR_EXTERNAL_IP_ADDRESS:28967

Variable: Key: BANDWIDTH, Value: YOUR_BANDWIDTH (per month)

Variable: Key: STORAGE, Value: YOUR_STORAGE_VOLUME

 

3. Map the storage locations by enabling Advanced View.


In the Extra Parameters field, enter:


--mount type=bind,source="/mnt/user/appdata/storj_cert/storj/identity/storagenode/",destination=/app/identity --mount type=bind,source="/mnt/user/storjv3/",destination=/app/config

Replace the locations in quotation marks with the locations for your configuration: the first is your identity token location and the second is your storage location.
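For reference, the whole template boils down to a single docker run command that Unraid assembles from the fields above. A sketch of the equivalent command, using the placeholder values and example paths from this post (the BANDWIDTH/STORAGE values are just examples; adjust everything to your setup):

```shell
docker run -d --name storagenode \
  -p 28967:28967/tcp \
  -p 28967:28967/udp \
  -e WALLET="YOUR_WALLET_ADDRESS" \
  -e EMAIL="YOUR_EMAIL_ADDRESS" \
  -e ADDRESS="YOUR_EXTERNAL_IP_ADDRESS:28967" \
  -e BANDWIDTH="20TB" \
  -e STORAGE="2TB" \
  --mount type=bind,source="/mnt/user/appdata/storj_cert/storj/identity/storagenode/",destination=/app/identity \
  --mount type=bind,source="/mnt/user/storjv3/",destination=/app/config \
  storjlabs/storagenode:alpha
```

This is also handy for sanity-checking what the template generated: Unraid prints the full run command when you click Apply.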

 

4. Launch the container by clicking Apply.

 

 

 

 

Edited by MrChunky

Posted (edited)

Thank you very much.

 

I'll try it now.

 

I've added --restart=unless-stopped -c=512 to the extra parameters, though.

 

-c sets the CPU shares (512 is half the default weight of 1024), and --restart keeps restarting it until I click stop... ^^

 

Seems to be working just fine. I just wonder about "Storage Node Dashboard ( Node Version: v0.0.0 )"

 

Btw, I don't think the difference between -v and --mount is that big of a problem. I can't really see how the container would keep running, or even start, when your appdata is not there. Only if you mess it up manually, though.
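For what it's worth, there is one practical difference: with -v, Docker silently creates a missing host directory and starts the container against an empty folder, while --mount refuses to start if the source path does not exist. A quick way to see it, using throwaway paths:

```shell
# -v: Docker creates the missing host directory and the container runs
docker run --rm -v /tmp/missing-dir-v:/data alpine ls /data

# --mount: Docker errors out instead of creating the source path
docker run --rm --mount type=bind,source=/tmp/missing-dir-m,destination=/data alpine ls /data
```

So if the appdata share is not mounted yet when Docker starts, -v would happily run the node against an empty directory, whereas --mount fails loudly — which is arguably what you want here.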

 

Anyway, being on the official template is probably the best way to go.

Edited by nuhll

Posted (edited)

Hmm, now I'm getting:

rpc error: code = PermissionDenied desc = info requested from untrusted peer

 

and the status shows offline? xD

 

"No bootsrap address specified {"STORAGE": " is normal, right?

Edited by nuhll

Posted (edited)

Lol, just tested it again. The old (jclouds) version works just fine (online, and it shows node dashboard 0.0.0.17 or something); the official version shows 0.0.0.0.0.0 and offline... xD

 

edit:

okay, after I started the old node and then the new node, it works now (same dirs), but it still shows 0.0.0...

I also changed storjlabs/storagenode to storjlabs/storagenode:latest; one of those things did it.

 

Let's see how it works out.

Edited by nuhll

On 8/15/2019 at 11:35 AM, nuhll said:

Lol, just tested it again. The old (jclouds) version works just fine (online, and it shows node dashboard 0.0.0.17 or something); the official version shows 0.0.0.0.0.0 and offline... xD

 

edit:

okay, after I started the old node and then the new node, it works now (same dirs), but it still shows 0.0.0...

I also changed storjlabs/storagenode to storjlabs/storagenode:latest; one of those things did it.

 

Let's see how it works out.

I think I forgot to mention in the description above that storjlabs/storagenode should be storjlabs/storagenode:latest or storjlabs/storagenode:alpha. Will add it above. Thanks for testing :)

26 minutes ago, MrChunky said:

I think I forgot to mention in the description above that storjlabs/storagenode should be storjlabs/storagenode:latest or storjlabs/storagenode:alpha. Will add it above. Thanks for testing :)

I'm not sure if it was that; as far as I understand Docker, storjlabs/storagenode should always mean storjlabs/storagenode:latest anyway... (also, it didn't download anything new...)

 

Seems to work flawlessly now. Thanks.

 

Do you also get version 0 from docker exec -it storagenode /app/dashboard.sh?

3 minutes ago, nuhll said:

I'm not sure if it was that; as far as I understand Docker, storjlabs/storagenode should always mean storjlabs/storagenode:latest anyway... (also, it didn't download anything new...)

 

Seems to work flawlessly now. Thanks.

 

Do you also get version 0 from docker exec -it storagenode /app/dashboard.sh?

This is my output:

Storage Node Dashboard ( Node Version: v0.17.0 )

======================

ID           xxxxx
Last Contact 1s ago
Uptime       33m43s

                   Available         Used      Egress      Ingress
     Bandwidth       xxx PB     xxx GB     xxx GB     xxx GB (since Aug 1)
          Disk       xxx TB     xxx GB

Bootstrap bootstrap.storj.io:8888
Internal  127.0.0.1:7778
External  xxx:28967

Neighborhood Size 253

 


Just to give some feedback: it works for some time (maybe a day?) and then the node goes offline.

 

I hate this whole crap. Why can't something just work? Or just not work? How can it be both? :D

 

 

Storage Node Dashboard ( Node Version: v0.0.0 )

======================

ID           
Last Contact OFFLINE
Uptime       17h57m7s

                   Available     Used     Egress     Ingress
     Bandwidth       20.0 TB      0 B        0 B         0 B (since Aug 1)
          Disk        2.0 TB      0 B

Bootstrap 
Internal  127.0.0.1:7778
External  :28967

Neighborhood Size 174


Just to update everyone on the state of Storj v3 nodes: it seems that Unraid does not play well with some aspects of the Storj v3 container. In particular, the volume mapping seems to be broken; I am not sure why or how. My node failed twice in the last month. Currently I have no reliable solution for running a v3 node on Unraid's Docker.

2 minutes ago, MrChunky said:

Just to update everyone on the state of Storj v3 nodes: it seems that Unraid does not play well with some aspects of the Storj v3 container. In particular, the volume mapping seems to be broken; I am not sure why or how. My node failed twice in the last month. Currently I have no reliable solution for running a v3 node on Unraid's Docker.

I have the same problem; the database became corrupted 3 times.

 

The problem might be that the Storj database needs more time to close correctly every time the container is shut down or restarted.

Storj recommends 300 seconds; the Unraid default is much lower, though I don't remember the value.

 

This might not be a problem when you don't have a lot of data, but when you have many GB the database probably needs more time, and then the container is killed by Unraid while the database is still doing things.

 

 

One solution would be to configure this container with a longer stop timeout:

https://docs.docker.com/engine/reference/commandline/stop/

 

But I don't know how to do it.
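One way to do it (untested on Unraid, so take it as a sketch): Docker has a per-container stop timeout that can be set at creation time, which should map to the template's Extra Parameters field; stopping manually also accepts a timeout:

```shell
# In the Extra Parameters field: wait up to 300s on stop before SIGKILL
--stop-timeout 300

# Or when stopping by hand:
docker stop -t 300 storagenode
```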

 

2 minutes ago, L0rdRaiden said:

I have the same problem; the database became corrupted 3 times.

 

The problem might be that the Storj database needs more time to close correctly every time the container is shut down or restarted.

Storj recommends 300 seconds; the Unraid default is much lower, though I don't remember the value.

 

This might not be a problem when you don't have a lot of data, but when you have many GB the database probably needs more time, and then the container is killed by Unraid while the database is still doing things.

 

One solution would be to configure this container with a longer stop timeout:

https://docs.docker.com/engine/reference/commandline/stop/

 

But I don't know how to do it.

 

I got this error with mine:

2019-08-26T11:35:44.215Z	[34mINFO[0m	process/exec_conf.go:229	Configuration loaded from: /app/config/config.yaml
2019-08-26T11:35:44.237Z	[34mINFO[0m	kademlia/config.go:78	Operator email: xxx
2019-08-26T11:35:44.237Z	[34mINFO[0m	kademlia/config.go:92	operator wallet: xxx
2019-08-26T11:35:44.762Z	[31mERROR[0m	storagenode/main.go:164	Failed to initialize telemetry batcher: process error: telemetry disabled

main.cmdRun
/go/src/storj.io/storj/cmd/storagenode/main.go:164
storj.io/storj/pkg/process.cleanup.func1.2
/go/src/storj.io/storj/pkg/process/exec_conf.go:262
storj.io/storj/pkg/process.cleanup.func1
/go/src/storj.io/storj/pkg/process/exec_conf.go:280
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:762
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:852
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:800
storj.io/storj/pkg/process.Exec
/go/src/storj.io/storj/pkg/process/exec_conf.go:72

main.main
/go/src/storj.io/storj/cmd/storagenode/main.go:296
runtime.main
/usr/local/go/src/runtime/proc.go:200
2019-08-26T11:35:44.781Z	[31mFATAL[0m	process/exec_conf.go:286	Unrecoverable error	{"error": "Error creating tables for master database on storagenode: migrate: creating version table failed: migrate: file is not a database\n\tstorj.io/storj/internal/migrate.(*Migration).Run:106\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).CreateTables:230\n\tmain.cmdRun:167\n\tstorj.io/storj/pkg/process.cleanup.func1.2:262\n\tstorj.io/storj/pkg/process.cleanup.func1:280\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:72\n\tmain.main:296\n\truntime.main:200", "errorVerbose": "Error creating tables for master database on storagenode: migrate: creating version table failed: migrate: file is not a database\n\tstorj.io/storj/internal/migrate.(*Migration).Run:106\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).CreateTables:230\n\tmain.cmdRun:167\n\tstorj.io/storj/pkg/process.cleanup.func1.2:262\n\tstorj.io/storj/pkg/process.cleanup.func1:280\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:72\n\tmain.main:296\n\truntime.main:200\n\tmain.cmdRun:169\n\tstorj.io/storj/pkg/process.cleanup.func1.2:262\n\tstorj.io/storj/pkg/process.cleanup.func1:280\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:72\n\tmain.main:296\n\truntime.main:200"}

storj.io/storj/pkg/process.cleanup.func1
/go/src/storj.io/storj/pkg/process/exec_conf.go:286
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:762
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:852
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:800
storj.io/storj/pkg/process.Exec
/go/src/storj.io/storj/pkg/process/exec_conf.go:72

main.main
/go/src/storj.io/storj/cmd/storagenode/main.go:296
runtime.main
/usr/local/go/src/runtime/proc.go:200

Looks like the database is unhappy in some way as well. I got the following response from support:

Quote

Perhaps it's related to how this platform is mounting the disks. Looks like the container started earlier than the filesystem is ready and the customers' data stored inside the mountpoint and storagenode starts to fail audits for data on yet not mounted disk, and then has been hided after mount is happened. And now it starts to fail audits for hided data.

I am not convinced this is true... the error has nothing to do with failed audits, but with database access. I am in a situation where I cannot even start the container due to this error.

 

As for the stop time, this can be added to the extra parameters, afaik. I will experiment with it, since I don't have much to lose.

4 minutes ago, MrChunky said:

I got this error with mine:


(quoted log trimmed)

Looks like the database is unhappy in some way as well. I got the following response from support:

I am not convinced this is true... the error has nothing to do with failed audits, but with database access. I am in a situation where I cannot even start the container due to this error.

 

As for the stop time, this can be added to the extra parameters, afaik. I will experiment with it, since I don't have much to lose.

Let us know if you start a new node and whether my solution works for you; I might try to start a new node again.

Posted (edited)

You can set the kill time in CA Auto Update/Backup.

 

I don't see how Storj is the only app on this planet for which "Perhaps it's related to how this platform is mounting the disks. Looks like the container started earlier than the filesystem is ready and the"

 

wtf?

 

@limetech

Isn't Docker standardized? We're not using a special Unraid Docker, am I right!?

 

 

Edited by nuhll

Posted (edited)

This whole crap annoys the hell out of me. The old node worked for years without problems, and now this. Same with not being able to put DL/UL limits on it. Like, wtf, it's 2019.

 

If they don't fix their node, they don't deserve us. Removing it for the time being. If anyone has news, I might try again.

 

I don't think it's Unraid related, but who knows. @limetech sadly didn't respond.


Thanks for all your help.

Edited by nuhll


Maybe it's related to the database corruption problems that Unraid 6.7 has.

Is anyone running Storj on an older Unraid build?

 

 

Posted (edited)

I have the problems with Storj, but not with Plex... ^^ (I think most reported it with the Plex database?) But yeah, it might be a Docker + Unraid error...

 

And shouldn't it be fixed already!? (Or not?)

 

I'll never understand why companies don't build from the bottom up. Why add new features, new clients, new everything, when the foundation is not working properly / lacks features? There are no real statistics, no control over UL/DL, no planned downtime.

 

The whole process for this node is needlessly complicated... and it should be suited to home users, not tech people (that's what they advertise with).

Edited by nuhll

Posted (edited)

It dawned on me a few days ago that the stress of trying to run this thing is not worth the 5 dollars I am going to make on it per month... I would still have a go at it to support the idea, but as @nuhll said, I will chill for now until the issues are sorted out a bit more.

Edited by MrChunky


Hahaha, I've run 11 nodes (v2) for around a year and only made under 3 USD.

 

So it's not about the money, but they could at least fix their stuff when we run their business.


Okay, I have news: 3 days ago I started the container again and let it run.

 

So far it seems to work.

Storage Node Dashboard ( Node Version: v0.20.1 )

======================

ID           1XXXX
Last Contact 2s ago
Uptime       1h47m12s

                   Available     Used     Egress     Ingress
     Bandwidth       20.0 TB      0 B        0 B         0 B (since Sep 1)
          Disk        2.0 TB      0 B

Bootstrap bootstrap.storj.io:8888
Internal  127.0.0.1:7778
External  XXX:28967

Neighborhood Size 178

 

Seems to work again? Even with version... but no data so far.

Edited by nuhll

