[Support] QDirStat, Jcloud - cryptoCoin templates


Recommended Posts

4 hours ago, L0rdRaiden said:

Thanks @Jcloud, does SIA support multiple nodes?

Nope, not that I can tell. Also, given the market value and the fact they have no corporate customers, I wouldn't worry about it.

4 hours ago, L0rdRaiden said:

BTW, it says that I need 2000 SC ($52) to start hosting... I guess I need to put some money in, or is there a way to avoid this?

It should be possible to use the Docker container as a blockchain miner using CPU resources. The 2000 SC minimum is a limit set up by Sia; I'm sure it's part of their ICO to create value in the token: it's pay up or mine for the quantity. Also, I think the tokens are used as collateral if you fail as a host in your contract.

 

4 hours ago, L0rdRaiden said:

I have fixed it by adding the port mapping manually. But I think the problem is that it only maps the ports the first time you create the container; if you change the number after that, the port mapping doesn't update.

Yeah, the template doesn't change auto-magically; you do have to make the changes manually. Correct, it will only list the four ports, because those are the only ones "exposed" by the Dockerfile. But since you've added the port field on the container and listed 4000-4010 (or whatever range covers your number of nodes), it will work. It should list (TCP) for all of them; if you see (UPNP), then in the config file for the affected node change the following (from false):

 "doNotTraverseNat": true,

 

4 hours ago, L0rdRaiden said:

BTW how can I restart a node without restarting all of them?

docker exec yourStorjContainerName storjshare stop dcea81e7741aeb8db49174ac5821d9035xxxxxxx
docker exec yourStorjContainerName storjshare start dcea81e7741aeb8db49174ac5821d9035xxxxxxx

 

Link to comment
1 hour ago, fortegs said:

Hmm, all I'm getting is that the bridge is connected, and in the logs: "{"level":"error","message":"Unable to connect to bridge: https://api.storj.io, reason: Bridge request failed (500)"}"

I've opened ports 4000-4003 as well.

For each affected node, go into its config.json file and check whether "doNotTraverseNat" is set to true or false; if it's false, set it to true. Repeat this for all affected nodes, then stop the container, restart it, and check the status again.

 

If, after making the changes, the nodes go from tunneled to broken, then your firewall needs to be opened for the entire port range used by your nodes, 4000-40xx, where xx covers the number of nodes you're running.
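That per-node config edit can also be scripted rather than done by hand. A minimal sketch, assuming one config.json per Node_* directory; the demo below runs against a throwaway temp directory, so point STORJ_DIR at your real appdata path (e.g. /mnt/user/appdata/storj, an assumed location) before using it for real:

```shell
# Demo stand-in for the real share (e.g. /mnt/user/appdata/storj).
STORJ_DIR=$(mktemp -d)
mkdir -p "$STORJ_DIR/Node_1"
printf '{ "doNotTraverseNat": false }\n' > "$STORJ_DIR/Node_1/config.json"

# Flip doNotTraverseNat from false to true in every node's config.
for cfg in "$STORJ_DIR"/Node_*/config.json; do
  sed -i 's/"doNotTraverseNat": *false/"doNotTraverseNat": true/' "$cfg"
done

result=$(cat "$STORJ_DIR/Node_1/config.json")
echo "$result"
```

Stop and restart the container afterward so the nodes re-read their configs.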

Edited by Jcloud
Link to comment

One thing I found out is:

Storj containers sometimes don't restart correctly, e.g. if they get stopped because of an update. I guess it's only happening to me because I have 14 containers running.

 

(11 Storj instances, and one of them also has 3 nodes.)

 

The error is something like "daemon is not running, try: storjshare daemon". If you then click it and select start, it starts correctly; just not automatically after a server restart or Docker update, which is a big problem because you lose a lot of reputation if any nodes are offline...

 

Im still running zugz/r8mystorj:latest


Also, I would suggest you make a thread per plugin/Docker you do. :)

 

Edited by nuhll
Link to comment
8 hours ago, nuhll said:

Storj containers sometimes don't restart correctly, e.g. if they get stopped because of an update. I guess it's only happening to me because I have 14 containers running.

Do you still have "--restart=always" (no quotes) in the "Extra Parameters" field on the webui template? If not, go ahead and add it back and see if that helps you out.

Link to comment
15 hours ago, Jcloud said:

Do you still have "--restart=always" (no quotes) in the "Extra Parameters" field on the webui template? If not, go ahead and add it back and see if that helps you out.

No, I don't have it; I will try. But I guess it's some "if" in the entrypoint which isn't correctly detecting whether it's running or not (or something like that).

Link to comment
On 5/6/2018 at 2:24 PM, Jcloud said:

Nope, not that I can tell. Also, given the market value and the fact they have no corporate customers, I wouldn't worry about it.

It should be possible to use the Docker container as a blockchain miner using CPU resources. The 2000 SC minimum is a limit set up by Sia; I'm sure it's part of their ICO to create value in the token: it's pay up or mine for the quantity. Also, I think the tokens are used as collateral if you fail as a host in your contract.

 

Yeah, the template doesn't change auto-magically; you do have to make the changes manually. Correct, it will only list the four ports, because those are the only ones "exposed" by the Dockerfile. But since you've added the port field on the container and listed 4000-4010 (or whatever range covers your number of nodes), it will work. It should list (TCP) for all of them; if you see (UPNP), then in the config file for the affected node change the following (from false):


 "doNotTraverseNat": true,

 


docker exec yourStorjContainerName storjshare stop dcea81e7741aeb8db49174ac5821d9035xxxxxxx
docker exec yourStorjContainerName storjshare start dcea81e7741aeb8db49174ac5821d9035xxxxxxx

 

 

I get this output:

 

root@MediaCenter:~# docker exec StorjMonitor storjshare stop 8fx70bbceb6xefd9d1325x6b5eeeaf44c0fxxxxx

  missing node id, try --help
root@MediaCenter:~# docker exec StorjMonitor storjshare start 8fx70bbceb6xefd9d1325x6b5eeeaf44c0fxxxxx

  no config file was given, try --help

 

What am I missing?

 

After a few days some nodes lose almost all their peers, which is why I want to restart them. Is this normal?
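For what it's worth, those two error messages hint at the fix: this storjshare build appears to want named flags rather than bare arguments (a node id flag for stop, a config file path for start). A hedged sketch that only prints the commands instead of running them, since the flag names and the config path here are assumptions; check storjshare --help inside the container to confirm:

```shell
CONTAINER=StorjMonitor
NODEID=8fx70bbceb6xefd9d1325x6b5eeeaf44c0fxxxxx   # id from the output above
CONFIG=/storj/Node_1/config.json                  # illustrative path inside the container

# Print rather than exec, since the flag names are an assumption here.
stop_cmd="docker exec $CONTAINER storjshare stop --nodeid $NODEID"
start_cmd="docker exec $CONTAINER storjshare start --config $CONFIG"
echo "$stop_cmd"
echo "$start_cmd"
```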

Edited by L0rdRaiden
Link to comment

Storj is working as expected with multiple nodes and reporting to StorjStat for me now :) So good job all around, @Jcloud.

 

I am looking into SIA now; I got the Docker installed and ports forwarded. When I open a terminal within the Unraid GUI for that Docker (a new feature in Unraid), I tried a few commands and I just cannot figure out how to interact with it. I tried host commands and siad commands; nothing found. How do you do it? The GUI shows syncing activity, but the daemon says unreachable from the terminal...

 

I am still syncing, so that might be the issue?

 

Figured it out:

 

You have to use ./siac to access terminal commands from within the Unraid Docker terminal, e.g.:

./siac -h
./siac host folder add /sia-data/xxx 8TB

Or from SIA UI terminal:

./siac --addr xxx.xxx.xxx.xxx:9980
./siac --addr xxx.xxx.xxx.xxx:9980 host folder add /sia-data/xxx 8TB

Also don't forget to port forward both TCP and UDP... Took me a while to figure that out.

 

There is a small mistake in your instructions: False and True should be the other way around ;)

On 5/6/2018 at 6:28 AM, Jcloud said:

After closing Sia client, and the file manager opened to Sia's configuration files, edit the config.json file

Change "detachted": true, to    "detached": false,

In the "address" line change "127.0.0.1" to your IP address of your Sia container.

Save config.json file.

Edited by MrChunky
Link to comment
13 hours ago, L0rdRaiden said:

Do you lose lots of peers on Storj when the server is up for many hours/days?

I start the nodes with around 150 peers; after a day or so I have 30-50 fewer, and the number gets smaller every day, so I usually restart the Docker.

Do you experience the same problem? Is this normal?

 

Why do you think "low" peers would be a problem?

 

As far as I know, the server contacts the nodes when it chooses to give them something to store, and then they need to react (response time and timeout rate).

 

I might be wrong, though.

 

I never cared about peers. And just BTW, it's beta at the moment, no real users, so things will change, like how Storj reputation works (a big node should make the same as 1000 small ones, which is currently not implemented...).

 

https://blog.storj.io/post/173461024823/march-farmer-payouts

https://blog.storj.io/post/173301213503/march-farmer-payouts-on-track

 

THE ONLY thing I noticed was that before I switched to this Docker, I had an update every day, and so the containers automatically restarted and updated. But anyway, I don't notice any difference.

Edited by nuhll
Link to comment

SIA is not playing nice with the Unraid system for me.

 

I have my Sia share set to use the cache drives. Firstly, the mover is not able to move anything because all the share files are constantly held open by Sia. Secondly, when you shut Sia down and attempt to move, the sparse preallocated file gets written out as a full file: e.g. with an 8TB share defined, instead of moving only the data actually stored, Unraid writes an 8TB file, which is obviously impossible if you do not have 8TB free on a single disk (correct me if I am wrong here).
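The sparse-file expansion is easy to demonstrate outside of Sia. A small sketch (throwaway file under /tmp, sizes illustrative) showing that a preallocated file's apparent size and on-disk usage can differ wildly, and that only a sparse-aware copy preserves the holes:

```shell
# Create a 1 GiB sparse file: huge apparent size, almost nothing on disk.
truncate -s 1G /tmp/sia-demo.img
apparent=$(stat -c %s /tmp/sia-demo.img)             # bytes, as ls -l reports
ondisk=$(( $(stat -c %b /tmp/sia-demo.img) * 512 ))  # 512-byte blocks actually allocated
echo "apparent=$apparent on-disk=$ondisk"

# A sparse-aware copy keeps the holes; naive copies may write them out as zeros.
cp --sparse=always /tmp/sia-demo.img /tmp/sia-demo-copy.img
rm -f /tmp/sia-demo.img /tmp/sia-demo-copy.img
```

So an "8TB" host folder may occupy almost nothing until something copies it naively and materializes the holes as zeros.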

 

I would recommend either using a share that does not use the cache and splitting your share folders into many small chunks by creating multiple share folders, or using Unassigned Devices and external drives.

Link to comment
  • 3 weeks later...
On 5/24/2018 at 1:08 AM, MrChunky said:

I would recommend either using a share that does not use the cache and splitting your share folders into many small chunks by creating multiple share folders, or using Unassigned Devices and external drives.

Good line-item to note there. I wasn't able to play around much with that feature yet, as I have no Siacoin for the collateral portion of the contract.

 

For those interested in the bleeding edge, I've finally gotten around to making an automated build repo for my Storj. The bad news is that I can't make the repo zugz/r8mystorj automatic without first deleting it, which might be confusing. So if you want the automated repo, in your Docker template change "zugz/r8mystorj:latest" to "zugz/r8mystorj-auto".

 

Second, I'm a bit stunned: there have been 1.7K pulls of my repo! I know this count includes repeated pulls (i.e. not unique users, just total pulls), but dang, I'm impressed by the number.

Link to comment
On 6/12/2018 at 9:06 PM, Jcloud said:

Good line-item to note there. I wasn't able to play around much with that feature yet, as I have no Siacoin for the collateral portion of the contract.

 

For those interested in the bleeding edge, I've finally gotten around to making an automated build repo for my Storj. The bad news is that I can't make the repo zugz/r8mystorj automatic without first deleting it, which might be confusing. So if you want the automated repo, in your Docker template change "zugz/r8mystorj:latest" to "zugz/r8mystorj-auto".

 

Second, I'm a bit stunned: there have been 1.7K pulls of my repo! I know this count includes repeated pulls (i.e. not unique users, just total pulls), but dang, I'm impressed by the number.

 

What do you mean by an automated repo? What does it do differently?

 

How is SIA working for those playing with it? I'm not running it, since you have to pay $50 or so to become a miner, but I'm thinking about it again.

 

BTW, SIA requires a good graphics card but Storj does not, right? Maybe this is why SIA isn't working properly for some people while using it in Docker.

Edited by L0rdRaiden
Link to comment
9 hours ago, L0rdRaiden said:

What do you mean by an automated repo? What does it do differently?

Remember how, with the original Storj template, the author would recompile it every eight hours? I've basically taken it one step further: if changes are made to the repository code, Docker Hub will see this and recompile the container. It auto-updates. Not a huge deal, as neither I nor the upstream authors have made any changes for about a month.

 

9 hours ago, L0rdRaiden said:

BTW, SIA requires a good graphics card but Storj does not, right?

Storj is CPU- and hard-drive-based; no GPU needed.

Link to comment
On 5/11/2018 at 1:27 AM, nuhll said:

I have 14 containers running (11 Storj instances), and one of them also has 3 nodes.

 

Can you share a bit more about how you run multiple nodes from one instance? I have been following this thread and have five instances and five nodes running successfully, but I would love to deal with only one container.

 

I have one share for Storj, and in it five main directories (Node_1, Node_2, ...). Then in Docker I have my containers (Storj1, Storj2, ...), each with successive ports (4 per container, so ports 4000-4019 are all in use).

 

Obviously each directory has its own config and node information. How are you getting around the seemingly 1:1 mapping between a Storj node and a Docker container?

Link to comment
3 hours ago, rm414 said:

Obviously each directory has its own config and node information. How are you getting around the seemingly 1:1 mapping between a Storj node and a Docker container?

Go to the Community Apps plugin and do a search for "StorjMonitor"; grab that. The template is going to look similar but have more options.

Before this will really work well, you'll want to open each ../storj/Node_*/config.json file with nano, vi, or Notepad++ (something which handles UNIX line endings):

  1.    Change: "doNotTraverseNat": false,    ---->  "doNotTraverseNat": true,
  2.    For each Node_#/config.json file I recommend changing "rpcPort": 4005, to match the node number:
    • rpcPort for Node_1/config.json    becomes    4001
    • rpcPort for Node_2/config.json    becomes    4002
    • rpcPort for Node_3/config.json    becomes    4003
    • . . .
    • Far fewer network ports are needed that way.
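Step 2 can be scripted as well. A sketch of the renumbering that runs against throwaway demo configs; swap the temp directory for your real storj share to use it:

```shell
# Set up three demo node configs, all with the stock rpcPort.
BASE=$(mktemp -d)
for n in 1 2 3; do
  mkdir -p "$BASE/Node_$n"
  printf '{ "rpcPort": 4005 }\n' > "$BASE/Node_$n/config.json"
done

# Give Node_N an rpcPort of 400N so the forwarded range stays small.
for cfg in "$BASE"/Node_*/config.json; do
  n=${cfg#*Node_}; n=${n%%/*}               # pull the node number out of the path
  sed -i "s/\"rpcPort\": *[0-9]*/\"rpcPort\": $((4000 + n))/" "$cfg"
done

ports=$(grep -ho '"rpcPort": [0-9]*' "$BASE"/Node_*/config.json)
echo "$ports"
```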

 

Storjmonitor_settings.JPG

 

Hope that clarifies things. If something is still unclear just ask. :) 

 

Edited by Jcloud
Link to comment
On 6/16/2018 at 1:05 AM, Jcloud said:

Go to the Community Apps plugin and do a search for "StorjMonitor"; grab that. The template is going to look similar but have more options.

Before this will really work well, you'll want to open each ../storj/Node_*/config.json file with nano, vi, or Notepad++ (something which handles UNIX line endings):

  1.    Change: "doNotTraverseNat": false,    ---->  "doNotTraverseNat": true,
  2.    For each Node_#/config.json file I recommend changing "rpcPort": 4005, to match the node number:
    • rpcPort for Node_1/config.json    becomes    4001
    • rpcPort for Node_2/config.json    becomes    4002
    • rpcPort for Node_3/config.json    becomes    4003
    • . . .
    • Far fewer network ports are needed that way.

 

Storjmonitor_settings.JPG

 

Hope that clarifies things. If something is still unclear just ask. :) 

 

 

 

Thanks, this seems to be working well. I didn't realize there was another image to try!

Link to comment
2 hours ago, rm414 said:

Thanks, this seems to be working well. I didn't realize there was another image to try!

It's possible to do the same thing with the other image; it's just a lot clunkier. The StorjMonitor container was a response to the community wanting something less clunky plus a few features: the Storjstat.com monitor script (which the authors of the previous repo added shortly after I did) and log deletion, to be specific.

 

I'm glad my instructions were comprehensible! :D

 

Link to comment
  • 4 weeks later...
  • 3 months later...
17 minutes ago, vanes said:

@Jcloud, please tell us about the support of the v3 protocol with this container.

https://storj.io/blog/2018/10/introducing-the-storj-v3-white-paper/

Well, the upstream code hasn't changed in six months; my additions are simply shell-script code for the entrypoint. From what I can tell, the v3 protocol is still being developed; otherwise, why blog about a whitepaper? If/when v3 goes live, I'll see whether they change/update this container or just build a new one. If they update the container, things should update easily enough; if Storj makes a new v3 container, I'd look into making a template and/or repo for Unraid support.

 

Fundamentally, my container is a fork of https://github.com/zannen/docker-storjshare-cli.

I don't think the container presently supports v3, but when the Storj team (or a third party) releases a v3 Storj client or container, I'd be happy to look into it.

If people find it before I do, you're welcome to track me down.

Link to comment
