NewDisplayName

Members
  • Posts: 2288
  • Joined
  • Last visited
  • Days Won: 1

Everything posted by NewDisplayName

  1. Docker shell button? Clever idea! Keep it going. Casual thanks!
  2. Thanks CHBMB for this app! Nextcloud makes me lose my last hair... After many grey hairs I found the following problem:

     [03-May-2018 19:58:47] WARNING: [pool www] server reached pm.max_children setting (5), consider raising it

     Then I tried to raise it, but it won't accept the local php.ini, as it seems. I read around page 30 of this thread that you can change it inside the docker, but after an update the change is gone...

     Soooo, is there a way to make this thing handle one person at a time (phone + PC) without changing the config after every update? Like a variable in the template I could use? I don't use SSL or a reverse proxy, but I do use mariadb.

     Is there anything else I can do to increase speed besides the FPM thing? Login takes forever, and uploads and downloads are sometimes slow too. What could this be?
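For reference, the warning points at the php-fpm pool settings, not php.ini. A pool override fragment would need to look something like this (the values are just a sketch for a one-or-two-user setup; the exact file the linuxserver.io container reads overrides from is an assumption, so check the container docs before relying on a path):

```
; php-fpm pool override - raise the worker limit so more than
; one client (phone + PC) can be served at the same time
pm = dynamic
pm.max_children = 10      ; was 5, per the warning in the log
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
```

Whatever file this lands in has to live under a persistent volume (e.g. /config), otherwise it is lost on the next container update, which is exactly the problem described above.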
  3. Okay, I found out how it works:

     docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ files:scan --all

     I created a user script that runs every week; I guess that's okay.
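As a User Scripts plugin entry, the weekly job could be a sketch like this (based on the command above; the container name `nextcloud` and the `abc` user are from this particular setup, so adjust to yours — `-it` is dropped because a scheduled run has no interactive terminal):

```
#!/bin/bash
# Rescan the Nextcloud data directory so files added outside the
# web UI become visible; scheduled weekly via the User Scripts plugin.
docker exec nextcloud sudo -u abc php /config/www/nextcloud/occ files:scan --all
```

Scanning everything with --all can take a while on large libraries; occ also accepts a single user or path if the weekly full scan turns out to be too heavy.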
  4. Now the only question is: what's the correct file structure for the nextcloud container? /var/www/nextcloud does not exist. I bashed into the docker and ran find for occ.php:

     find / occ.php
     find: occ.php: No such file or directory
  5. Okay, log purge does seem to work for nodes too, thanks! For clarification: I've only been using this node mode for a few days on one node, to test it. "1" seems to be more than 1 day, but that doesn't matter as long as it gets deleted automatically from time to time... Good work. For automated building: https://docs.docker.com/docker-hub/builds/#create-an-automated-build
  6. Thanks, I'll first try just adding the variable. I also changed your repo to zugz/r8mystorj:latest. Okay, it works, at least for the main node – does it also work for storj10\Node_1\log? I don't have files old enough to test. I just wonder because while I had the other storj docker I got an update every day, and now not anymore – did I forget anything!?
  7. Correct. Casual user – I need to not forget that. That's what I mean.
  8. Yeah, like I said, I had this happen, but I had no problem with it – no crash, no slowdown. I guess it just ignores the script when it's already running.
  9. I didn't notice anything so far, so I guess it's working. The only problem is when the mover can't get below e.g. 80%; then it gets triggered every minute – do you mean this? I had this, it didn't have much impact on the system, so no real problem.
  10. As far as I know this doesn't change anything else. So if you let your mover run at midnight (like I do), it will do that anyway, regardless of the script above. The move to HDD only happens when the SSD is full – so exactly what you want (or if the files are not moved away from the temp download until midnight). And even if it gets moved to HDD, it's no problem: you move the file to your archive and then it gets deleted from HDD or SSD, wherever it is.
  11. Hey, make your download path an extra share. Set it to cache: yes (so it uses the cache, if possible). Then put this in the User Scripts plugin and set it to run every minute (* * * * *):

      ```php
      #!/usr/bin/php
      <?php
      $moveAt = 80; # Adjust this threshold (percent used) to suit.

      # Trigger the mover as soon as the cache pool passes the threshold.
      $diskTotal = disk_total_space("/mnt/cache");
      $diskFree  = disk_free_space("/mnt/cache");
      $percent   = ($diskTotal - $diskFree) / $diskTotal * 100;

      if ($percent > $moveAt) {
          exec("/usr/local/sbin/mover");
      }
      ?>
      ```

      This way you use your cache, but when it gets full, files get transferred from SSD to HDD. So in theory it shouldn't fill up.
  12. Yeah, you are right, these "standard" options should be included via the OS, not via plugins. But that is how it is.
  13. Thanks, I will try that out (but I will just let it run once a day). It's not that big of a problem, I think, if I can reach files after one day. (I don't add new files outside the cloud app every day.)
  14. Hi, that's what I face every time I come up with something. So not much has changed here. (Maybe not quite so directly.)
  15. Hi, you asked for the diagnostics log, here it is. I think it started on 1.5. at 4:40, where all dockers just went into a disabled state. Was it because of the parity check? (But I never noticed this before.) unraid-server-diagnostics-20180502-1257.zip

      I currently don't have this plugin installed, because of two occasions where some or all dockers were stopped that had been running fine before I installed this plugin.

      Edit: I just let it autostart every docker and left the IP at 192.168.0.2 (if I remember correctly, that's what it defaults to?) with no port given. At the time I entered 1 for the first and then something like 10 for the next; the latest has something like a 30 sec cooldown.
  16. Also, I might have a better idea: relocating the files based on how they are accessed – but I guess that's too hard to implement? That's why I came up with the light version, which is probably easier to implement and (I think) doesn't have negative effects, e.g. performance-wise. Like: log file accesses, and if files are accessed together within a specific time window (or from the same process, if that is possible to tell), they should be moved onto the same HDD. I guess that would be the 100% solution.
  17. I just read about that command, but if I recall correctly I need to be inside the docker to run it – can this also be automated? The thing is, you can put files into Nextcloud, but it doesn't see them until you run a command.
  18. Is it possible to run that command line every day via user scripts or something? How do you do it? Manually every time?
  19. @jcloud THANK YOU! But the change is not pushed yet?! Seems like there was "much traffic" from yesterday to today. I'm already at ~11 MB.
  20. I saw your question about plugins after rereading, sorry. I don't know explicit links, but you can find a lot of information if you just look at plugins from others and/or google something like "unraid create plugin". Maybe there is even an official guide, but I guess it would be outdated. Best, I think, is to try to understand how plugins from others work, and if you have specific questions, ask them...
  21. So many answers... if only I would get them when I suggest something. But sadly it's mostly like everyone tries to destroy my idea instead of trying together to make it better.

      Anyway, clarification: I don't run a business; by reputation I mean storj reputation (like uptime, bandwidth, timeouts). I run 13 Storj nodes. You can see them in the ranking when you search for unraid. Like I said, I have been running unraid for a very long time (at least some months) and dockers never stopped until I installed this plugin. That's all I can say. I rely on the storj nodes (and the rest of the dockers) being started, or e.g. I can't access my home network at all when I'm not home (which I would like to). I don't know what I could have done wrong (and since there is no to-do list or something...).

      Squid, please don't take it personally. I appreciate every plugin, docker, unraid and everyone's work here. But if something is not working, then it's just not working. It's hard for me to write with you guys, because in Germany we say things like "shit" (scheiße) all the time; it's normal, no one would be offended – that's why I said crap. I farmed the storj reputation for months, and every downtime reduces my reputation, which is very sad...

      OVERALL: I WOULD LOVE TO DO MY OWN PLUGINS AND SCRIPTS (or help you with yours), YOU CAN BE SURE ABOUT THIS. But I'm unable to. I just have good suggestions. (Some of them even got implemented into factorio.) This one I fixed together with the author. He helped me get into the docker container and look at entrypoint scripts and such things, which was very interesting. So I do learn new things.
  22. I don't get this. You mean I should create every folder as a user share? I separate my user shares like this:

      • appdata – programs
      • Archiv – data dump
      • domains – VM instances
      • downloads – temporary downloads
      • http_cache – web download cache
      • isos – ISO images
      • Kamera – camera videos
      • nextcloud – private
      • Privat – private
      • storj – Storj
      • system – system data
      • Windows 10 – VM

      And it feels good this way. The problem I have with the current split level: most shares would need different split levels set – like Archiv, downloads and nextcloud, Privat too, and Windows 10. So out of 12 shares I would use "smart split" in 5. I could make 100 more shares, but I don't think that would make it easier...
  23. Yes, in that case it's not THAT smart, but with the wrong split level and/or "random" mode it could be even worse – like maybe 3 or 4 disks spinning up. I would like to beta test it!
  24. Yeah, but seriously, I don't want to run a command every time I add a file... I don't understand why people don't build it so that when you refresh, it just actually refreshes...!?
  25. Heimdall would be really nice if they implemented the proxy thing. Like, you only need to auth to Heimdall and then it redirects the local links to remote.