KrisMin

Members
  • Content Count

    23
  • Joined

  • Last visited

Community Reputation

3 Neutral

About KrisMin

  • Rank
    Newbie


  1. Hello and thanks for posting the guide. I managed to get it working with a default OVMF BIOS just fine. I am running a pair of GTX 1060s on economical settings and it runs clean. hiveOS is a nice, convenient tool for managing miners. I got around 1-2% more hash on Windows, but managing a mining Windows VM on Unraid is crappy.
  2. OK, that was an issue with subnet availability. For some reason the NC could not connect to the database when I pointed the database IP and port at it. When I joined the same bridge as the MariaDB container, the issue disappeared. As far as I know this should not happen, but it somehow did.
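In case someone hits the same thing, here's a minimal sketch of the fix (the container names "nextcloud" and "mariadb" and the network name "nc-net" are just examples from my setup; adjust to yours):

```shell
# Create a user-defined bridge so both containers share a subnet
# and can resolve each other by container name
docker network create nc-net

# Attach both containers to the same bridge
docker network connect nc-net mariadb
docker network connect nc-net nextcloud

# Then in Nextcloud's config.php, point dbhost at the container name
# instead of a LAN IP, e.g.:
#   'dbhost' => 'mariadb:3306',
```

On a user-defined bridge, Docker's embedded DNS resolves container names, which avoids the IP/subnet reachability problem entirely.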
  3. I had a server crash and now my NC can't connect to the database. The log shows: ","app":"remote","method":"GET","url":"/status.php","message":{"Exception":"Doctrine\\DBAL\\DBALException","Message":"Failed to connect to the database: An exception occurred in driver: SQLSTATE[HY000] [2002] Host is unreachable","Code":0, Does anyone know how to fix this? The config and data volumes look to be fine. I have no idea why it can't reach the mariadb...
  4. FYI for anyone running a node here. The official Storj node GUI is crappy, but luckily for us, there's an awesome Grafana dashboard available for node statistics. https://forum.storj.io/t/prometheus-storj-exporter/1939 Give it a go.
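If it helps anyone set it up, the Prometheus side is just a scrape job pointing at the exporter. A rough sketch (the job name, IP, and port below are placeholders; check the forum thread above for the exporter's actual default port):

```yaml
scrape_configs:
  - job_name: storj-exporter          # arbitrary name
    scrape_interval: 60s
    static_configs:
      - targets: ['192.168.1.50:9651']  # exporter host:port - adjust to your setup
```

Then import the dashboard from the thread into Grafana and select Prometheus as the data source.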
  5. Thanks! I think that should be written in the OP - to set Use cache = Yes.
  6. Update about running this thing in "cache preferred" mode. It looks like Storj has some silly rule built in which returns an error and switches the node offline when free space runs below ~450GB. This rule makes no sense to me - why...? 2021-01-27T19:29:00.600Z ERROR piecestore:cache error getting current used space: {"error": "context canceled; context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canc
  7. The issue was with Windows Explorer, which does not accept a self-signed certificate. I got it working with FileZilla. However, it looks like WebDAV is considerably slower at moving files than a mapped external drive copy inside Nextcloud. The latter has other issues, like not being able to copy several files or directories at once, and it gives an error if multiple directories are queued to copy/move. So I stick with the WebDAV and FileZilla combo, which seems to be working without any issues. It just takes a bit more time, which is fine by me. Besides, FileZilla has useful built-in rules in case a ver
  8. Hey everyone! I'm asking for some help with getting this thing working via HTTP. I added a -p 8080:80 extra parameter, but it always redirects to HTTPS. When I want to map the Nextcloud data dir with WebDAV, I get asked for certs, and I want to avoid that.
  9. OK, that was my issue too! Thanks! What an annoying bug - I wasted a couple of hours digging through Google and the docs. Don't know why I didn't double-check whether the ports were mapped right in Docker.
  10. Sorry if I didn't explain it well. The error appears in the web browser when trying to reach the app by its domain name. Both the NPM and the APP were reachable by their LAN IPs.
  11. Can anyone explain to me why NPM does not work if both the NPM and the APP sit on the LAN ("br0" in my case) rather than on the Unraid default bridge? I gave a local static IP to the NPM container and also to the APP container I wanted to proxy to. When done so, the APP can not be reached by its domain name (both can still be reached by their IPs). In detail: my router redirects all 80 traffic to NPM IP:PORT and all 443 traffic to NPM IP:PORT, and my NPM has a proxy host mapping app.mydomain.com to APP IP:PORT. I get a connection refused error. May I ask why, please?
  12. Sure, you can turn auto-update on or off for every container individually via the Unraid Auto-update Applications plugin. Just go into this app (under Settings) and you'll find a Docker tab with auto-update toggle buttons there.
  13. Here: https://forum.storj.io/t/storj-node-maximum-size/7922/13?u=cryptopumpkin Yeah, probably the best option for us would be to run 8TB or 10TB nodes and only start the next one after the first one gets vetted. I don't know if the second one will still get vetted twice as slowly as the first one, or as fast as the first one. Need to test, or ask on the official forum about that.
  14. From what I have read, there is no hard cap on the storage size per node, and the number is just a strong suggestion from the dev team. Someone on the forum did some math with his years of statistics data and came to the conclusion that a theoretical limit would be around 40TB per IP subnet, if IP filters work like we think they do. That number is based on his statistics that ~5% of all data gets trashed, and therefore a pool of >40TB would never fill up, because 40TB would be the point at which ingress = trashed. Of course, if in the future the average ingress rate increases, then
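The steady-state logic above can be sketched with toy numbers (the 2 TB/month ingress and 5% monthly trash rate here are assumptions for illustration, not measured values):

```python
def equilibrium_size_tb(ingress_tb_per_month: float,
                        trash_fraction_per_month: float) -> float:
    """Steady-state node size where monthly trash outflow equals monthly ingress.

    At equilibrium: size * trash_fraction == ingress,
    so: size = ingress / trash_fraction.
    """
    return ingress_tb_per_month / trash_fraction_per_month

# Example: 2 TB/month ingress, ~5% of stored data trashed per month
print(equilibrium_size_tb(2.0, 0.05))  # -> ~40 TB, matching the forum estimate
```

So under these assumptions a node stops net-growing around 40TB; a higher average ingress rate would push that equilibrium point up proportionally.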
  15. Cool vid. I see you had some struggles, but you got them sorted. I posted below your vid as well that you can do all CLI commands in the local terminal. Just point the container to the identity certs and data directories and it's good to go. Also, there is no need to turn off the cache disk. The space warning is misleading because the node only sees your cache drive and not beyond it. Your storage space should fill up just fine (eventually - the start is really slow). And one more thing, in case some of you don't know yet: running several Storj nodes from the same IP unfortunat
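For the local-terminal bit, a quick sketch of what I mean (the container name "storagenode" is an example; the dashboard path is the one from the official Storj docker docs at the time of writing, so verify it against your image):

```shell
# Run the node's CLI dashboard inside the already-running container
docker exec -it storagenode /app/dashboard.sh

# Or tail the node's logs directly
docker logs --tail 50 -f storagenode
```

No GUI needed - everything the web dashboard shows is available this way from the Unraid terminal.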