NewDisplayName
Members · Posts: 2,288 · Days Won: 1
Everything posted by NewDisplayName

  1. Hi, I noticed it with the newest RC, 6.5.2-rc1. After the restart it started all Dockers except 3 (or 4?) in the middle. I could start them manually, and I didn't find an issue in the syslog (it says it started the Dockers correctly). The ones that didn't start were: Storj, Storj1, Storj2, Storj3 (or also 4, I don't remember). All other Dockers / 1 VM started just fine. Log is attached. Edit: changed it to the correct diagnostic. unraid-server-diagnostics-20180504-2033.zip
  2. I would really like to automate this. It's because I download a Top 100 every week, and many Top 100 entries are the same as the week before... ^^ Is there a way to automatically scan for dupes (manually via the GUI it works fine)? Or does someone know a bash script or something that could do it?
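A minimal bash sketch of such a dupe scan could look like this; the checksum approach, the tag, and the share path are my assumptions, not anything from the plugin:

```shell
#!/bin/bash
# Sketch of a duplicate finder: hash every file under a directory and
# report paths that share a checksum. Tag and paths are assumptions.
find_dupes() {
    local dir="$1"
    local -A seen
    local f sum
    while IFS= read -r -d '' f; do
        sum=$(md5sum "$f" | cut -d' ' -f1)
        if [ -n "${seen[$sum]}" ]; then
            echo "DUPE: $f == ${seen[$sum]}"
            # mirror to syslog when logger is available
            { command -v logger >/dev/null && logger -t dupecheck "duplicate: $f"; } || true
        else
            seen[$sum]="$f"
        fi
    done < <(find "$dir" -type f -print0)
}

# Example: scan a weekly Top-100 download share (path is hypothetical)
# find_dupes /mnt/user/Music/Top100
```

Hashing every file is slow on big shares; comparing file sizes first would cut down the number of checksums needed.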
  3. Cleaning script for your archives, logs to syslog. Just change the directories to fit yours. If you need more extensions, just add them; if you don't want to remove some extensions, remove the corresponding lines. The first block is for things you want to delete on all shares; the second block is for files which shouldn't be inside movies/series/docus, but e.g. in games. You could remove the logger entries, but I like to see what's happening... Does someone know a clever way to remove files WITHOUT an EXTENSION? e.g. "SOMEFILE". The script is very fast, example output:
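The script itself did not survive the copy-paste into this page, but the structure described could look roughly like this sketch; the paths, extensions, and the answer to the extensionless-file question are all my own illustration:

```shell
#!/bin/bash
# Sketch of the cleanup described above: delete unwanted extensions from
# a directory and log what happened to syslog. Paths/extensions are examples.
clean_dir() {
    local dir="$1"; shift
    local ext n
    for ext in "$@"; do
        # -print before -delete so we can count what was removed
        n=$(find "$dir" -type f -iname "*.${ext}" -print -delete | wc -l)
        if [ "$n" -gt 0 ] && command -v logger >/dev/null; then
            logger -t cleanup "removed $n .${ext} file(s) from $dir"
        fi
    done
    # Files WITHOUT an extension (the open question in the post): match
    # names that contain no dot at all, e.g. "SOMEFILE".
    find "$dir" -type f ! -name "*.*" -print -delete
}

# Example blocks, mirroring the post (share paths are hypothetical):
# clean_dir /mnt/user/Movies nfo txt url
# clean_dir /mnt/user/Series nfo txt url
```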
  4. For all of you who want an automatic file scan to happen (with log output, for all users), get the User Scripts plugin and enter this: a script which starts the Nextcloud file sync for all users, with logger output to syslog on Unraid.
  5. Yep, I found out, it was too easy, thanks! Finally: a script which starts the Nextcloud file sync for all users, with logger output to syslog on Unraid.
  6. I googled it, but didn't find anything on how to use it. From the help it seems it can only write to the log from a file?

         Enter messages into the system log.

         Options:
          -i                       log the logger command's PID
             --id[=<id>]           log the given <id>, or otherwise the PID
          -f, --file <file>        log the contents of this file
          -e, --skip-empty         do not log empty lines when processing files
             --no-act              do everything except the write the log
          -p, --priority <prio>    mark given message with this priority
             --octet-count         use rfc6587 octet counting
             --prio-prefix         look for a prefix on every line read from stdin
          -s, --stderr             output message to standard error as well
          -S, --size <size>        maximum size for a single message
          -t, --tag <tag>          mark every line with this tag
          -n, --server <name>      write to this remote syslog server
          -P, --port <port>        use this port for UDP or TCP connection
          -T, --tcp                use TCP only
          -d, --udp                use UDP only
             --rfc3164             use the obsolete BSD syslog protocol
             --rfc5424[=<snip>]    use the syslog protocol (the default for remote);
                                   <snip> can be notime, or notq, and/or nohost
             --sd-id <id>          rfc5424 structured data ID
             --sd-param <data>     rfc5424 structured data name=value
             --msgid <msgid>       set rfc5424 message id field
          -u, --socket <socket>    write to this Unix socket
             --socket-errors[=<on|off|auto>]
                                   print connection errors when using Unix sockets
          -h, --help               display this help
          -V, --version            display version

     Is that true? Edit: okay, it was too easy. logger test displays:

         May  4 12:54:47 Unraid-Server root: test

     Thanks for the fast help!
  7. I created the following script (it makes Nextcloud scan all files, to add files which were not uploaded through the app, e.g. via SMB). Is it possible to display something in the Unraid log about this script run?
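A sketch of what such a scan with syslog output could look like; the container name "nextcloud" and the occ invocation are assumptions about this particular setup, so adjust them to yours:

```shell
#!/bin/bash
# Sketch: run a full Nextcloud file scan and mirror every output line to
# the Unraid syslog. Container name and occ call are assumptions.
TAG="nextcloud-scan"

log_line() {
    # Write one line to stdout and, if available, to syslog.
    echo "$1"
    { command -v logger >/dev/null && logger -t "$TAG" "$1"; } || true
}

# Only attempt the scan when a reachable docker daemon is present.
if command -v docker >/dev/null && docker ps >/dev/null 2>&1; then
    log_line "starting full Nextcloud file scan"
    docker exec nextcloud occ files:scan --all 2>&1 | while IFS= read -r line; do
        log_line "$line"
    done
    log_line "Nextcloud file scan finished"
fi
```

The same pattern (pipe a command's output line by line into logger) works for any long-running maintenance job you want visible in the Unraid syslog.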
  8. Thanks CHBMB for this app! Nextcloud is making me lose my last hair... After many grey hairs I found the following problem: [03-May-2018 19:58:47] WARNING: [pool www] server reached pm.max_children setting (5), consider raising it. Then I tried to raise it, but it won't accept the local php.ini, as it seems. Now I read in this thread, around page 30, that you can change it inside the Docker, but after an update it's gone again... ... soooo, is there a way to make this thing handle 1 person at a time (phone + PC) without changing the config after every update? Like a variable in the template I could use? I don't use SSL or a reverse proxy, but I do use MariaDB. Is there anything I can do to increase speed besides the FPM thing? Login takes forever, uploads sometimes too... downloads too... What could this be?
  9. Okay, the log purge does seem to work for the nodes too, thanks! For clarification: I have only been using this node mode for a few days on one node, to test it. "1" seems to be more than 1 day, but that doesn't matter as long as it gets deleted automatically from time to time... Good work. For automated building: https://docs.docker.com/docker-hub/builds/#create-an-automated-build
  10. Thanks, I'll first try just adding the variable. I also changed your repo to zugz/r8mystorj:latest. Okay, it works, at least for the main node - does it also work for storj10\Node_1\log? - I don't have files old enough to test. I just wonder: while I had the other Storj Docker I got an update every day, now not anymore - did I forget anything!?
  11. Yeah, you are right, such "standard" options should be included via the OS, not via plugins. But that is how it is.
  12. Hi, you asked for the diagnostics log, here it is. I think it started on May 1st at 4:40, when all Dockers just went into the disabled state. Was it because of the parity check? (but I have never noticed this before) unraid-server-diagnostics-20180502-1257.zip I currently don't have this plugin installed, because of 2 occasions where some or all Dockers were stopped which had been running fine before I installed the plugin. Edit: I just let it autostart every Docker, left the IP at 192.168.0.2 (if I remember correctly, that's what it defaults to?) with no port given; at the time I entered 1 for the first and then something like 10 for the next, and the latest has something like a 30 sec cooldown.
  13. @jcloud THANK YOU! But the change is not pushed yet?! Seems like "much traffic" from yesterday to today. I'm already at ~11 MB.
  14. Yes, it's working for me, I just needed to change that key in the template. You can see my nodes when you enter "unraid" in the ranking. (While trying to get StorjMonitor running, I accidentally deleted my first Storj node, which had over 100 GB :() I'm just waiting for the point where they start selling their service again, so new customers can put their files there; until then it's just "farming" for reputation and getting the response time low... and waiting for it to start...
  15. Hm, that's mysterious, but I can also confirm that the dates in my Dockers are sometimes not that accurate (like hours off) - but no problems - and usually at some point the nodes go back to normal... If I'm right, it can go from -300 to +300 without problems; above or below that is a problem.
  16. Guys, the last weeks there was a lot of traffic because it was trash test data. Now it's back to normal. The test ran from (I don't know when) until 27.4... I got like 5 MB over the last couple of days, that is normal (11 nodes). I posted a log checker; use it if you are unsure whether your nodes are running okay. This Docker works perfectly, I only needed to adjust the ports of the extra nodes created if you use that feature.
  17. Yeah, I know that. That's why I could easily check how big my log dir is. But if you want to make it perfect, you could make the +1 a variable, with a default of 30. This way you have one variable to enable deletion and one variable where every user can say whether it's deleted after 1... 2... 3... or 60 days... Also, I don't care about 100 GB more or less, like you, but just think about it: at some point you forget the Storj containers and it keeps growing... like 1000 GB in a year... ^^
  18. Hey, in my mind it's not deleting from the Unraid host. Isn't the entrypoint only run inside the Docker, and isn't /storj/log also only inside the Docker? My log dirs differ in size, but I think about 10 GB in less than a month, not counting my big node which I deleted and which was 3 months old. You can easily change +1 to +30 to delete data older than 30 days. I don't really think the log files are important, because you could also set it to no logging at all. Log files are so easy to modify that I really hope Storj is not relying on them for anything.
  19. Okay, now I really know it all works. My first stored MB since I tried to install StorjMonitor... Now we only need to fix the logs directory and then it's perfect. My suggestion for deleting the log files would be:

          if [ -n "${DELETE_LOG}" ]; then sleep 10; find /storj/log -iname "*" -mtime +1 -delete; fi

      And we just add a key DELETE_LOG to the template; when it's there, it deletes logs older than 1 day. I can't think of a reason why you wouldn't want this, but if someone doesn't want it, they just remove DELETE_LOG from the template and that's it. I tested find /storj/log -iname "*" -mtime +1 -delete and it works inside the Storj container. Removed over 300 MB in just one Storj instance... We could also do an "if the dir is there" check instead, but I guess it doesn't matter in the end.
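Combining this with the earlier suggestion to make the +1 a variable, the retention could be parameterized like the sketch below; the directory, the variable names, and the 30-day default are assumptions, not the container's actual entrypoint:

```shell
#!/bin/bash
# Sketch: DELETE_LOG enables purging, DELETE_LOG_DAYS sets retention.
purge_logs() {
    # purge_logs DIR DAYS -- delete files under DIR older than DAYS days.
    local dir="$1" days="${2:-30}"
    [ -d "$dir" ] && find "$dir" -type f -mtime "+${days}" -delete
}

# Hypothetical entrypoint usage, mirroring the snippet from the post:
# if [ -n "${DELETE_LOG}" ]; then sleep 10; purge_logs /storj/log "${DELETE_LOG_DAYS:-30}"; fi
```

Note that -mtime +N matches files whose age is strictly greater than N whole days, which is why "+1" in the post behaved like "more than 1 day".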
  20. No problem! I'm glad you know what I mean now... I guess your log files are also getting big?
  21. Did you change this? And yes, I'm referring to the storj/log folders.
  22. I found out why it's not working. In the template you name your StorjMonitor key "MONITORKEY", but in the entrypoint it's:

          if [ -n "${STORJ_MONITOR_API_KEY}" ]; then sleep 10; sed -i "s/YOUR-TOKEN-HERE/${STORJ_MONITOR_API_KEY}/" /opt/StorjMonitor/storjMonitor.js; cd /opt/StorjMonitor; (/opt/StorjMonitor/storjMonitor.sh &) &

      I guess you need to change MONITORKEY to STORJ_MONITOR_API_KEY, or remove the "if". Edit: Yes, it works - you need to change MONITORKEY in the template to STORJ_MONITOR_API_KEY, then it fills in the correct API key. It's because that "if" checks whether the key is there and doesn't start StorjMonitor when the key is missing... While you're at it, you could add this (or working code) to the entrypoint script:

          logs=$(find /storj/log/ -name '*.log'); for log in $logs; do cat /dev/null > $log; done

      (it doesn't work for me, I don't know why :()
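One possible reason the quoted truncation loop fails is word splitting: $(find ...) breaks on spaces in filenames, and $log is unquoted in the redirection. A more robust variant of the same idea (a sketch, same effect of emptying the logs without deleting them):

```shell
#!/bin/bash
# Sketch: empty every *.log file under a directory without deleting it.
# Runs the truncation inside find itself, so names with spaces are safe.
truncate_logs() {
    find "$1" -type f -name '*.log' -exec sh -c ': > "$1"' _ {} \;
}

# Example (container-internal path from the post):
# truncate_logs /storj/log
```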
  23. By the way, I found https://ssdynamite.com/ - there you can check your Storj log files for any problems... it seems all my instances are working.