[Support] QDirStat, Jcloud - cryptoCoin templates


With the latest update it works!

 

There were, however, some breaking changes: the ENV variable for storjMonitor was changed to STORJ_MONITOR_API_KEY on the template side.

 

Thanks for the work put in to this!


I have 5 nodes, all of them "identical," but the first one, "Storj1," has a high delta; all the others run OK.

 

Any idea why? The fw/NAT rules are the same for all the nodes; using pfSense here.

3 hours ago, nuhll said:

How's your delta?

Storj1 is usually above 150; right now it is 265.

Storj2-5 are between -20 and 20.

All on the same unRAID box, same conditions.

 

I swapped pfSense for Sophos XG and the delta was normal (maybe a coincidence), so I think it might be related to pfSense, but I have reinstalled pfSense from scratch and I still have the same issue.

 

On the other hand, are you noticing that during the last week or so the nodes have been quieter? Fewer allocs.


@Jcloud so is your repo going to have an update soon that fixes a lot of this and works w/ the main repo? Or are we supposed to remove yours and start over with the main repo again?

 

I'm asking because something is definitely not right. My peer count is still really low and I'm not getting additional data as frequently as I used to on the other repo.

 

Thanks!

1 hour ago, physikal said:

@Jcloud so is your repo going to have an update soon that fixes a lot of this and works w/ the main repo? Or are we supposed to remove yours and start over with the main repo again?

 

I'm asking because something is definitely not right. My peer count is still really low and I'm not getting additional data as frequently as I used to on the other repo.

 

Thanks!

I haven't updated since Thursday, but I think something is up with the network. I've been running the official repo and haven't been getting updates either; peers, yes, and a few allocs, but no "received." I've also noticed that my rating on storjstat.com has flat-lined, whereas before it was more like a 45-degree slope on the graph. Locally I just remade the image, but moved back the install location of the storjMonitor script (testing). I haven't seen any new upstream changes. Also, double-checking: you caught the change in the webui template? Upstream changed the environment variable MONITORKEY to STORJ_MONITOR_API_KEY.

 

An upstream fix was also made for tunneling; maybe it's on and that's messing you up? You can try adding this to your template and see if it helps:

[Screenshot: storj04292018.jpg]
Edited by Jcloud

2 hours ago, physikal said:

@Jcloud so is your repo going to have an update soon that fixes a lot of this and works w/ the main repo? Or are we supposed to remove yours and start over with the main repo again?

 

I'm asking because something is definitely not right. My peer count is still really low and I'm not getting additional data as frequently as I used to on the other repo.

 

Thanks!

I am running both repos, and it's the same situation. I presume the network was being tested, and now it's not. That is why the allocs are basically 0; it's not because of the repo.

 

If you read the subreddit, everyone is in the same boat.

Edited by MrChunky


Guys,

over the last few weeks there was a lot of traffic because it was throwaway test data. Now it's back to normal. The test ran from (I don't know when) until 27.4...

 

I got like 5 MB over the last couple of days; that is normal (11 nodes).

 

I posted a log checker; use it if you're unsure whether your nodes are running okay.

 

This Docker works perfectly; I only needed to adjust the ports of the extra nodes created, if you use that feature.

Edited by nuhll

14 hours ago, L0rdRaiden said:

Storj1 is usually above 150; right now it is 265.

Storj2-5 are between -20 and 20.

All on the same unRAID box, same conditions.

 

I swapped pfSense for Sophos XG and the delta was normal (maybe a coincidence), so I think it might be related to pfSense, but I have reinstalled pfSense from scratch and I still have the same issue.

 

On the other hand, are you noticing that during the last week or so the nodes have been quieter? Fewer allocs.

 

Hm, that's mysterious, but I can also confirm that the dates in my Dockers are sometimes not that accurate (like hours off) - but no problems - and usually at some point the nodes go back to normal...


If I'm right, it can go from -300 to +300 without problems; above or below that range is a problem.


Yes, for me it's working; I just needed to change that key in the template.


You can see my nodes when you enter "unraid" in the ranking :) (while trying to get storjMonitor running, I accidentally deleted my first Storj node, which had over 100GB :()

 

I'm just waiting for the point when they start selling their service again so new customers can put their files there. Until then, it's just "farming" for reputation and getting response times down... and waiting for it to start...

Edited by nuhll

On 4/28/2018 at 2:30 AM, nuhll said:

Now we only need to fix the logs directory and then its perfect.

Made an update to the custom StorjMonitor repository, for those who want to try it - added nuhll's request for log purging. By default log purging is off. The webui template has also been updated for NAT tunneling and for this log purge: TRUE/FALSE (enable/disable) and the number of days to keep. The file purge only fires once, at the start of the container.

 

 

Edited by Jcloud


@jcloud

THANK YOU!

But the change hasn't been pushed yet?!

 

Seems like "much traffic" from yesterday to today. I'm already at ~11 MB xD

Edited by nuhll

7 hours ago, nuhll said:

THANK YOU!

But the change hasn't been pushed yet?!

Everything was pushed at the time of the post. Perhaps I need to do something for versioning and/or flagging as "latest"? However, that won't change the webui. You can either delete the StorjMonitor container and image, then pull the template and the Docker image again; another way would be to force-update the image in the webui, stop the container, edit the StorjMonitor template, and add the two following variables to your template:

 

Variable 1 key:   DEL_LOGS
Variable 1 value: TRUE

Variable 2 key:   DEL_LOGS_DAYS
Variable 2 value: 1

Based on your forum posts you'll want to set the value of DEL_LOGS_DAYS to 1; this specifies how many days of logs to keep (don't use 0 - the minimum good value is 1, the default is 7, i.e. a week, and valid values are positive integers).
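A minimal sketch of that rule, assuming the container validates the variable roughly like this (the variable name comes from the template; the fallback logic is my guess, not the container's actual code):

```shell
# Hypothetical validation of DEL_LOGS_DAYS, mirroring the stated rule:
# positive integers only, minimum 1, default 7 (a week).
DEL_LOGS_DAYS="${DEL_LOGS_DAYS:-7}"
case "$DEL_LOGS_DAYS" in
  ''|*[!0-9]*|0)
    # empty, non-numeric, or zero: fall back to the default
    echo "invalid DEL_LOGS_DAYS='$DEL_LOGS_DAYS', falling back to 7"
    DEL_LOGS_DAYS=7
    ;;
esac
echo "keeping $DEL_LOGS_DAYS day(s) of logs"
```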

Edited by Jcloud
changed wording to fit webui terms.


Thanks, I'll first try just adding the variables :)

 

I also changed your repo to zugz/r8mystorj:latest

 

Okay, it works, at least for the main node - does it also work for storj10\Node_1\log? - I don't have old enough files to test :D

 

I just wonder: while I had the other Storj Docker, I got an update every day; now not anymore. Did I forget anything!?

Edited by nuhll

5 hours ago, nuhll said:

Okay, it works, at least for the main node - does it also work for storj10\Node_1\log? - I don't have old enough files to test

Not sure; you have a different setup than the one I was trying to write general code for. If you used my repo but still ran a Docker container per daemon, like you were doing before, then yes, I think it could be made to work.

 

The command in my code, very close to the suggestion you gave in the forums (btw, thank you):

For single nodes:
     find "${DATADIR}/log" -type f -mtime +"${DEL_LOGS_DAYS}" -iname "*.log" -delete &
For multiple nodes:
     find "${DATADIR}/${NODE_DIR}$i/log" -type f -mtime +"${DEL_LOGS_DAYS}" -iname "*.log" -delete &

So the default ${DATADIR} is /storj/ 

The more I stare at the command and at how your directory is laid out, the less sure I am it will work.

 

To manually fire it off I would change it to:

Substitute /WhatEverIsThePath for whatever it needs to be; sorry if I've dumbed this down too much.

find /WhatEverIsThePath/storj10/Node_1/log -type f -mtime +1 -iname "*.log" -delete

Toss that in CA User Scripts, correctly modified, and you should be good to go - you could even continue to use the Storj container if you wished. :)
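To sanity-check the find invocation before pointing it at real logs, you could dry-run it against a throwaway directory (the file names here are made up; `-mtime +1` matches files older than one day):

```shell
# Demo of the purge command against a temporary directory: one fresh log
# and one three-day-old log; only the old one should be deleted.
LOGDIR=$(mktemp -d)
touch "$LOGDIR/fresh.log"
touch -d "3 days ago" "$LOGDIR/stale.log"   # GNU touch; backdates the mtime
find "$LOGDIR" -type f -mtime +1 -iname "*.log" -delete
ls "$LOGDIR"   # only fresh.log should remain
```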

 

EDIT: forgot to address your other question:

Quote

I just wonder: while I had the other Storj Docker, I got an update every day; now not anymore. Did I forget anything!?

CA updates might be looking for an old image or container name, perhaps? You might want to check those settings to see if they are OK. The repo you just changed to, zugz/StorjMonitor, does not yet auto-build, and that's why you haven't seen updates. I DO NEED to figure out how to do auto-builds, but I think I might have to delete zugz/StorjMonitor again and re-set it up to do so. Presently I've been visiting the official GitHub page to see whether I need to update my repo - there have been no code changes for 11 days now.

Edited by Jcloud

On 5/1/2018 at 9:58 AM, Jcloud said:

Made an update to the custom StorjMonitor repository, for those who want to try it - added nuhll's request for log purging. By default log purging is off. The webui template has also been updated for NAT tunneling and for this log purge: TRUE/FALSE (enable/disable) and the number of days to keep. The file purge only fires once, at the start of the container.

 

 

Okay, the log purge does seem to work for the extra nodes too, thanks! For clarification, I have only been using this node mode for a few days, on one node, to test it.

 

"1" seems to be more than 1 day, but that doesn't matter as long as logs get automatically deleted from time to time... :)

 

Good work.

 

For automated building: 

https://docs.docker.com/docker-hub/builds/#create-an-automated-build

Edited by nuhll


Pushed an update, a minor bug fix. Multiple-node config files would be made with NAT traversal techniques enabled (they show up as UPNP). It only affects newly created nodes, so if your existing node reports UPNP and you want to stop this: make the change in each affected /foo/storj/Node_#/config.json file, then restart the container.
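A sketch of applying that change across every node directory at once. The `doNotTraverseNat` field name is my assumption about the storjshare config, so verify it against your own config.json before running; the demo works on a throwaway tree, not real data.

```shell
# Hypothetical batch edit: flip the NAT-traversal flag in every Node_*/config.json.
# "doNotTraverseNat" is assumed -- check the real key in your files first.
STORJ=$(mktemp -d)
mkdir -p "$STORJ/Node_1" "$STORJ/Node_2"
for d in "$STORJ"/Node_*; do
  echo '{ "doNotTraverseNat": false }' > "$d/config.json"   # stand-in config
done
for f in "$STORJ"/Node_*/config.json; do
  sed -i 's/"doNotTraverseNat": false/"doNotTraverseNat": true/' "$f"
done
cat "$STORJ/Node_1/config.json"
```

Remember to restart the container afterwards so the daemons re-read their configs.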


I have seen that the old Storj image is no longer being updated and is no longer part of the apps.

How do I migrate my 5 containers to the new image?

 

Thanks

8 hours ago, L0rdRaiden said:

I have seen that the old Storj image is no longer being updated and not part of the apps.

I just did a search for "Storj" and found my template in the CA store. I would try deleting the container (your contracts should be on the storage array and untouched) and the image in the webui, then clicking Add Container, scrolling through to find the saved entry for Storj, and rebuilding. That's what I would try first; it's much easier and should be just as effective. Then go back to the auto-update settings for plugins & containers; disable and re-enable auto-update for Storj.

 

I've had terrible luck trying to take multiple containers and run them under a single daemon.

FYI: the code tweak was for a bug affecting new users and the multiple-node creation routine.

4 hours ago, Jcloud said:

I just did a search for "Storj" and found my template in the CA store. I would try deleting the container (your contracts should be on the storage array and untouched) and the image in the webui, then clicking Add Container, scrolling through to find the saved entry for Storj, and rebuilding. That's what I would try first; it's much easier and should be just as effective. Then go back to the auto-update settings for plugins & containers; disable and re-enable auto-update for Storj.

 

I've had terrible luck trying to take multiple containers and run them under a single daemon.

FYI: the code tweak was for a bug affecting new users and the multiple-node creation routine.

 

I'm using this template; I guess it's still valid, right?

https://hub.docker.com/r/oreandawe/storjshare-cli/

 


Sia-Coin

Sia-coin website:                                     https://sia.tech/
Sia-coin client utilities:                           https://sia.tech/get-started
Docker hub site:                                       https://hub.docker.com/r/mtlynch/sia/
Original Docker install instructions:     https://blog.spaceduck.io/sia-docker/
Repository site:                                        https://github.com/mtlynch/docker-sia
Template site:                                           https://github.com/Jcloud67/Docker-Templates

   Sia-coin, from their website, “Sia is a decentralized storage platform secured by blockchain technology. The Sia Storage Platform leverages underutilized hard drive capacity around the world to create a data storage marketplace that is more reliable and lower cost than traditional cloud storage providers.” 
   
   *** DISCLAIMER *** I have no affiliation with Sia-coin or Mr. Lynch. I was asked whether Sia was possible; I found someone had already invented the wheel, I just made the hub-cap. I do not guarantee, nor will I hold any responsibility for, loss of or corrupted data, or that Sia-coin (Sia), Bitcoin (BTC), or any other "crypto-currency" will gain or maintain its current fiat value.

 *** Please, with any and all "ICOs," "exchanges," "tokens," and cryptos, do exhaustive research before any transaction. ***

SETUP:
 1.   Create a user share for Docker data; I just called it "Sia":

Example:
[Screenshot: share configuration]

 

2.  Punch a hole in your firewall from outside to your Docker container's IP address, TCP ports 9981 and 9982.

   DANGER! : Sia also uses TCP port 9980 for command & control. Do NOT expose 9980 outside of your network! Failure to follow this rule can result in wallet and Sia host hijacking.

 

3.  Download the Sia client from the download URL above, if you want a GUI experience.

4.  Setup Sia template and docker container on unRAID using CA store.

[Screenshot: Sia4.jpg]

[Screenshot: Sia5.JPG]

5.  For GUI clients, running on a VM or on the LAN:

Start the client and click on "About".

[Screenshot: Sia0.JPG]

[Screenshot: Sia1.jpg]

 

After closing the Sia client, with the file manager opened to Sia's configuration files, edit the config.json file:

Change "detached": false, to "detached": true,

In the "address" line, change "127.0.0.1" to the IP address of your Sia container.

Save config.json file.
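The two edits above can also be scripted; here is a sketch using sed on a throwaway copy of the file (the container IP 192.168.1.50 is a placeholder, not a real address - substitute your Sia container's IP):

```shell
# Demo of the config.json edits on a temporary file: enable detached mode
# and point "address" at the container instead of localhost.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
  "detached": false,
  "address": "127.0.0.1:9980"
}
EOF
sed -i 's/"detached": false/"detached": true/' "$CONF"
sed -i 's/127\.0\.0\.1/192.168.1.50/' "$CONF"   # placeholder IP: use your own
cat "$CONF"
```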

 

[Screenshot: Sia2.jpg]

 

When you restart Sia, it should connect to your Docker container:

[Screenshot: Sia3.JPG]

 

Repost - in case someone doesn't start from the front.

Edited by Jcloud


Thanks @Jcloud, does Sia support multiple nodes?

 

How do I add storage folders in the share? Could you please explain the setup a little more, through to the end? I have followed your steps, but right now I don't know whether it's working or not.

BTW, it says that I need 2000 SC ($52) to start hosting... I guess I need to put some money in, or is there a way to avoid this?

 

BTW, all of you, please register in the Filecoin early miner program:

https://filecoin.io/

https://docs.google.com/forms/d/e/1FAIpQLSfdFpWhJj8OIGA2iXrT3bnLgVK9bgR_1iLMPdAcXLxr_1d-pw/viewform?c=0&w=1

Edited by L0rdRaiden

12 hours ago, Jcloud said:

Yeah that's the official docker container.

I am using the storjMonitor container now; I have the main node plus 4 more, but the container is only mapping 4 ports:

172.17.0.2:4000/TCP → 172.17.0.2:4000
172.17.0.2:4001/TCP → 172.17.0.2:4001
172.17.0.2:4002/TCP → 172.17.0.2:4002
172.17.0.2:4003/TCP → 172.17.0.2:4003

 

I have fixed it by adding the port mapping manually. But I think the problem is that it only maps the ports the first time you create the container; if you change the node count after that, the port mapping doesn't update.
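Since the template only records the original mappings, one way to avoid typing each -p flag by hand is to generate them; a sketch below (the node count, port scheme of 4000 + node index, and image name are assumptions based on this thread - it only prints the command, it does not run it):

```shell
# Build one -p flag per node port and print the resulting docker run command.
NODES=5
PORTS=""
i=0
while [ "$i" -lt "$NODES" ]; do
  PORTS="$PORTS -p $((4000 + i)):$((4000 + i))"   # host:container, same port
  i=$((i + 1))
done
echo "docker run -d$PORTS zugz/r8mystorj:latest"
```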

[Screenshot: Captura.PNG]

Now it is working, but for some reason, I don't know why, my first node always has a high delta:

┌─────────────────────────────────────────────┬─────────┬──────────┬──────────┬─────────┬───────────────┬─────────┬──────────┬───────────┬──────────────┐
│ Node                                        │ Status  │ Uptime   │ Restarts │ Peers   │ Allocs        │ Delta   │ Port     │ Shared    │ Bridges      │
├─────────────────────────────────────────────┼─────────┼──────────┼──────────┼─────────┼───────────────┼─────────┼──────────┼───────────┼──────────────┤
│ 6ec5490584a014188a8e3ae366d5e17ed7xxxxxx    │ running │ 7m 51s   │ 0        │ 138     │ 0             │ 103ms   │ 4000     │ 11.99GB   │ connected    │
│   → /storj/share                            │         │          │          │         │ 0 received    │         │ (TCP)    │ (12%)     │              │
├─────────────────────────────────────────────┼─────────┼──────────┼──────────┼─────────┼───────────────┼─────────┼──────────┼───────────┼──────────────┤
│ 4dd54ed8e3b68fe752618e17055e4eec52xxxxxx    │ running │ 7m 49s   │ 0        │ 139     │ 0             │ 7ms     │ 4001     │ 9.92GB    │ connected    │
│   → /storj/Node_1/share                     │         │          │          │         │ 0 received    │         │ (TCP)    │ (10%)     │              │
├─────────────────────────────────────────────┼─────────┼──────────┼──────────┼─────────┼───────────────┼─────────┼──────────┼───────────┼──────────────┤
│ dcea81e7741aeb8db49174ac5821d9035xxxxxxx    │ running │ 7m 47s   │ 0        │ 93      │ 0             │ 7ms     │ 4002     │ 12.02GB   │ connected    │
│   → /storj/Node_2/share                     │         │          │          │         │ 0 received    │         │ (TCP)    │ (12%)     │              │
├─────────────────────────────────────────────┼─────────┼──────────┼──────────┼─────────┼───────────────┼─────────┼──────────┼───────────┼──────────────┤
│ 4428e5e0f2e84b4e9e9e86df57f381dcdxxxxxxx    │ running │ 7m 46s   │ 0        │ 120     │ 0             │ 2ms     │ 4003     │ 8.56GB    │ connected    │
│   → /storj/Node_3/share                     │         │          │          │         │ 0 received    │         │ (TCP)    │ (9%)      │              │
├─────────────────────────────────────────────┼─────────┼──────────┼──────────┼─────────┼───────────────┼─────────┼──────────┼───────────┼──────────────┤
│ c3c26bcc0e3af7d2d8add82d602a1xxxxxxxxxxx    │ running │ 7m 44s   │ 0        │ 104     │ 0             │ 8ms     │ 4004     │ 1.15GB    │ connected    │
│   → /storj/Node_4/share                     │         │          │          │         │ 0 received    │         │ (TCP)    │ (1%)      │              │
└─────────────────────────────────────────────┴─────────┴──────────┴──────────┴─────────┴───────────────┴─────────┴──────────┴───────────┴──────────────┘

 

BTW how can I restart a node without restarting all of them?

 

Edited by L0rdRaiden

