KrisMin

Everything posted by KrisMin

  1. alturismo, I have some kind of anomaly then. Likely hardware related. I guess I'll make a workaround to keep a couple of fans on 24/7.
  2. This method is not working for me. When the disks spin down, their temperature is not reported and the fans stop no matter what the minimum PWM value is set to. This is why I asked for an additional "check box" feature.
  3. Since pool disk temperatures are not reported while the disks are spun down, and I have System AutoFan tied to disk temperature, the fans stop completely. I need them to stay at minimum RPM while the disks are down. Would you consider adding this as a feature (a checkbox or something)?
  4. Help please? I added metadata special devices to my ZFS pool and it seemed to work until the next reboot. After a reboot the array does not start and the zfs pool is reported as "Unsupported or no file system". Is there a way around this? I really, really want metadata special devices on my zpool :).
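     For anyone trying to reproduce this: a metadata special vdev is typically attached roughly like below. The pool and device names here are placeholders for illustration, not my exact commands, so adapt them to your own setup:

         # add a mirrored metadata special vdev to an existing pool
         zpool add mypool special mirror /dev/sdX /dev/sdY
         # check the vdev layout and whether the pool still imports cleanly
         zpool status mypool
         zpool import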
  5. Posting to say thank you! I have been running the Radicale server container on my Unraid home server for a month now and it's been fantastic for organizing personal and family activities. Our family uses the "OneCalendar" Android app to connect to it on our phones, and personally I also use the Thunderbird calendar on my PC.
  6. Fresh install. Somehow I can't access the web UI on the HTTPS port. The default HTTP port works fine. A bug, or something with my setup? I can't figure it out.
  7. Hello and thanks for posting the guide. I managed to get it working with the default OVMF BIOS just fine. I am running a pair of GTX 1060s on economical settings and it runs clean. hiveOS is a nice, convenient tool for managing miners. I got around 1-2% more hash on Windows, but managing a Windows mining VM on Unraid is crappy.
  8. OK, that was an issue with subnet availability. For some reason NC could not connect to the database when I pointed it at the database's IP and port. When I joined it to the same bridge as the mariadb container, the issue disappeared. As far as I know this should not happen, but it somehow did.
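     Roughly what the working setup looks like, with a user-defined bridge shared by both containers. Container and network names below are placeholders for illustration, not my exact commands:

         # create a user-defined bridge and attach both containers to it
         docker network create nc-net
         docker network connect nc-net mariadb
         docker network connect nc-net nextcloud
         # inside nc-net, Nextcloud can reach the DB by container name, e.g. mariadb:3306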
  9. I had a server crash and now my NC does not seem to connect to the database. The log shows: ","app":"remote","method":"GET","url":"/status.php","message":{"Exception":"Doctrine\\DBAL\\DBALException","Message":"Failed to connect to the database: An exception occurred in driver: SQLSTATE[HY000] [2002] Host is unreachable","Code":0, Does anyone know how to fix this? The config and data volumes look to be fine. I have no idea why it can't reach the mariadb...
  10. FYI for anyone running a node here. The official Storj node GUI is crappy, but luckily for us there's an awesome Grafana dashboard available for node statistics: https://forum.storj.io/t/prometheus-storj-exporter/1939 Give it a go.
  11. Thanks! I think that should be written in the OP - to set Use cache = Yes.
  12. Update about running this thing in "cache preferred" mode. It looks like Storj has some silly rule built in which returns an error and switches the node offline when free space runs below ~450GB. This rule makes no sense to me, why...? 2021-01-27T19:29:00.600Z ERROR piecestore:cache error getting current used space: {"error": "context canceled; context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled"} Error: piecestore monitor: disk space requirement not met So I had to switch the cache off for my storj share. As soon as I did that, the node started working again. Maybe there's some workaround for this? Would more experienced Unraid users even prefer to run such a thing as cache preferred? I suspect it would put less stress on the HDDs when done so.
  13. The issue was with Windows Explorer, which does not accept a self-signed certificate. I got it working with FileZilla. However, it looks like WebDAV is considerably slower at moving files than copying via an external drive mapped inside Nextcloud. The latter has other issues, like not being able to copy several files or directories at once, and it gives an error if multiple directories are queued to copy/move. So I stick to the WebDAV and FileZilla combo, which seems to be working without any issues. It just takes a bit more time, which is fine by me. Besides, FileZilla has useful built-in rules for when a version of the same file already exists in the destination, like "overwrite if the version is newer".
  14. Hey everyone! I need some help getting this thing working via HTTP. I added a -p 8080:80 extra parameter, but it always redirects to HTTPS. When I want to map the Nextcloud data dir with WebDAV, I get asked about certs, and I want to avoid that.
  15. OK, that was my issue too! Thanks! What an annoying bug, I wasted a couple of hours digging through Google and the docs. No idea why I didn't double-check whether the ports were mapped right in Docker.
  16. Sorry if I didn't explain it too well. The error is in the web browser when trying to reach the app by its domain name. Both the NPM and the APP were reachable by their LAN IPs.
  17. Can anyone explain to me why NPM does not work if both the NPM and the APP sit on the LAN ("br0" in my case) and not in the Unraid default bridge? I gave a local static IP to the NPM container and also to the APP container I wanted to proxy to. When done so, the APP cannot be reached via its domain name (both can be reached via their IPs). In detail: my router redirects all 80 to NPM IP:PORT and all 443 to NPM IP:PORT, and my NPM has a proxy host: app.mydomain.com to APP IP:PORT. I get a connection refused error. May I ask why, please?
  18. Sure, you can turn auto-update on or off for every container individually via the Unraid Auto-Update Applications plugin. Just go into the plugin (under Settings) and you'll find the Docker tab and the auto-update toggle buttons there.
  19. Here: https://forum.storj.io/t/storj-node-maximum-size/7922/13?u=cryptopumpkin Yeah, probably the best option for us would be to run 8TB or 10TB nodes and only start the next one after the first one gets vetted. I don't know whether the second one will still get vetted twice as slowly as the first one, or as fast as the first one did. Need to test, or ask on the official forum about that.
  20. From what I have read, there is no hard cap on the storage size per node, and the number is just a strong suggestion from the dev team. Someone on the forum did some math with his years of statistics data and came to the conclusion that the theoretical limit would be around 40TB per IP subnet, if the IP filters work the way we think they do. That number is based on his statistics that ~5% of all stored data gets trashed, and therefore a pool of >40TB would never fill up, because 40TB is roughly the point at which that trash rate catches up with the incoming data (ingress = trashed). Of course, if the average ingress rate increases in the future, then the max pool size would also be larger. In our case (running nodes in Docker), I would keep a single node's size at or below 24TB, because it's less risky. If one node gets a bad reputation for some reason, you still have the other good ones to compensate, and you can easily increase their size if needed. I started two identical nodes running on the same machine and the same network. Somehow one of them got an uptime warning (from one US satellite) and lowered its uptime score, even though they are identical in terms of availability and uptime. How did that happen? I have no idea. Must be some kind of bug. If I see this happening again, I'll investigate it some more. Hopefully it was just a one-off.
  21. Cool vid. I see you had some struggles, but got them sorted. I posted this below your vid as well: you can do all the CLI commands in the local terminal. Just point the container to the identity certs and data directories and it's good to go. Also, there is no need to turn off the cache disk. The space warning is misleading, because the node only sees your cache drive and not beyond it. Your storage space should fill up just fine (eventually, because the start is really slow). And one more thing, in case some of you don't know yet: running several Storj nodes from the same IP unfortunately does not get you multiplied ingress. Storj throttles big storage providers that way. They filter by IP subnet, and ingress data is divided between all the nodes in the subnet. The same goes for the vetting time. If you start two new nodes on the same day, they get vetted pretty much 2 times slower than a single node would. This is why I am currently holding back the start of my third node.
  22. Just to clarify what I wrote earlier: when you run with -e SETUP=true for the first time, you will see from the log that the setup was done and the container exits. After that you need to remove this parameter and start it again. If you mounted your identity and data folders correctly, then you should be good to go, and your dashboard should now be available at "your-unraid-pc-ip:14002". Happy hddmining! On a side note: a fresh node's data accumulation rate is extremely slow right now. Hopefully it accelerates at least 10 times once the node gets vetted.
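     For anyone following along, the two-step run looks roughly like this. It only shows the parameters discussed in this thread (the folder paths are placeholders); the wallet/address/storage settings from the official docs still need to be added to the second command:

         # first run only: SETUP=true generates the config, then the container exits
         docker run --rm -e SETUP=true \
             --mount type=bind,source="/mnt/user/storj/<identityfolder>/",destination=/app/identity \
             --mount type=bind,source="/mnt/user/storj/<datafolder>/",destination=/app/config \
             storjlabs/storagenode:latest
         # normal run: same mounts, SETUP removed, dashboard published on port 14002
         docker run -d --name storagenode -p 14002:14002 \
             --mount type=bind,source="/mnt/user/storj/<identityfolder>/",destination=/app/identity \
             --mount type=bind,source="/mnt/user/storj/<datafolder>/",destination=/app/config \
             storjlabs/storagenode:latest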
  23. Use the v3 template instead; this one has not been updated for a while. https://hub.docker.com/r/storjlabs/storagenode However, use the :latest tag, not the :beta tag.
  24. I had the same issue. Apparently, when running for the first time, -e SETUP=true --mount type=bind,source="/mnt/user/storj/<identityfolder>/",destination=/app/identity --mount type=bind,source="/mnt/user/storj/<datafolder>/",destination=/app/config is needed.