LSL1337

Everything posted by LSL1337

  1. wow, great, thanks! I want to copy the file daily. Is there a command I can run to reset this log every day after I've copied it? So I copy it to a file.activity_DATE.log file, and after that reset the file with a command.
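     A minimal sketch of what that daily job could look like (the paths below are assumptions; adjust them to wherever the File Activity plugin actually writes its log and where the dated copies should land):

        #!/bin/bash
        # Assumed log location -- replace with the real path of the activity log.
        SRC=/var/log/file.activity.log
        # Dated copy, e.g. file.activity_2020-05-27.log
        DEST=/mnt/user/backups/file.activity_$(date +%F).log

        cp "$SRC" "$DEST"   # keep the dated copy
        : > "$SRC"          # truncate (reset) the live log for the next day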
  2. it worked with the ., but it was all on one line in the web terminal. I removed the dot and the extra line break, and now it seems to be working. thanks! I didn't know a line break even makes a difference, it was just more readable for me.
  3. I have a VERY basic find+cp script which works in the web command line, but doesn't when I run it via custom scripts (I just tried it manually). Any ideas? #!/bin/bash find /mnt/user/mini/myfolder/ . -iname '*.mkv' -exec cp -n {} /mnt/user/mini/myfolder \; When I click Run Script, I get the find results plus "find: './sys/kernel/slab': Input/output error", and cp can't start with this error in the find 'results'. What am I doing wrong? Btw, I'm just trying to copy files from the subfolders to the main folder; the subfolders will be deleted by the DL client when seeding is done.
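     For what it's worth, the stray "." tells find to also walk the script's current working directory, which is why /sys shows up in the output. A corrected sketch (the -mindepth 2 is an addition of mine so that files already sitting in the top folder are skipped; cp -n still refuses to overwrite existing files):

        #!/bin/bash
        # Search only the intended share, not the working directory as well.
        # -mindepth 2 limits matches to files inside subfolders.
        find /mnt/user/mini/myfolder/ -mindepth 2 -iname '*.mkv' \
            -exec cp -n {} /mnt/user/mini/myfolder/ \;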
  4. I'm still speechless. Thanks very much. Is there a way to get the advanced view without this extra side effect? Whatever, I can always toggle it off.
  5. are you f'in kidding me? @ich777 it works. why does it even have any impact, and why is it ok for a few days after boot? is this a bug or a feature? and it has an impact even when the UI is not open. I can't even find the words... how many people does this affect, btw? @INTEL check the previous post
  6. still nothing, I've given up. What dockers do you run, what motherboard, CPU etc. do you have? Maybe we can pinpoint something.
  7. I'm running 2 official Tautulli dockers on my unraid server (for 2 Plex servers); the first is on 8181, the second on 8182. I'm getting tired of something automatically misconfiguring it periodically (I guess after an update). First of all, I can't configure the second instance to use 8181, and I'm not sure how I changed it to 8182 in the first place; I don't remember that field being blank months/years ago. So I guess when there is an auto docker update for Tautulli, my 8182:8182 port mapping gets overwritten to 8181:8181, which of course means the second instance can't start. Any idea how to solve this issue? Can I even configure the second container to run on the same port, and map the second instance as 8181:8182? thanks
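     A sketch of one way to run both instances on the same internal port (assuming the official image listens on 8181 inside the container; the names and appdata paths are placeholders): keep the container-side port at 8181 for both and only vary the host side of the mapping.

        # First instance: host port 8181 -> container port 8181
        docker run -d --name=tautulli-1 \
          -p 8181:8181 \
          -v /mnt/user/appdata/tautulli-1:/config \
          tautulli/tautulli

        # Second instance: host port 8182 -> the same container port 8181
        docker run -d --name=tautulli-2 \
          -p 8182:8181 \
          -v /mnt/user/appdata/tautulli-2:/config \
          tautulli/tautulli

     In the unraid template, that should correspond to editing only the host-side port field and leaving the container port at 8181 for both.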
  8. I changed my cache drive back from encrypted xfs to plain xfs, but the issue still persists. I can't be the only one with this issue...
  9. first of all, thanks for the reply. second: if the baseline were 15%, I wouldn't care about it, but it is less than 5% (and has been for years, without Plex transcoding), so the system is doing something it shouldn't / didn't do before. Based on htop, it might have something to do with containerd.toml, which could indicate drive usage; that is a one-line txt file, so if it's tied to 10% CPU usage, something is very wrong... I stopped my containers one by one and the CPU usage decreased proportionally, which means it's not a single docker causing the issue; it could be the docker engine itself. Diagnostics attached: lsl-nas-diagnostics-20200527-1300.zip
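     A quick way to narrow this down from the command line (docker stats and ps are stock tools, nothing unraid-specific); if every container shows low CPU but the host baseline stays high, the overhead is in the engine/shim processes rather than in any one container:

        # One-shot snapshot of per-container CPU / memory / block I/O
        docker stats --no-stream

        # Host processes sorted by CPU, to see whether dockerd / containerd
        # themselves are the ones burning the extra cycles
        ps -eo pid,comm,%cpu --sort=-%cpu | head -20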
  10. So am I the only one with this issue, and no one has any idea? This is not GitHub on some open source free-love project. This is paid software, right?
  11. After a few days of running, I can see in the stats window that my CPU usage is a constant 15%, compared to the normal 4-5% (if nothing else is running, on the daily/weekly stats chart). If I restart docker, it doesn't solve it (in Settings, Docker enable No, then back to Yes). If I restart the server, I think it sometimes solves it for a few days, then it starts again. I think it started around the time I changed some docker log settings, or after I switched my cache to encrypted xfs. It's starting to bother me now, because if this constantly writes so much log to the SSD that it takes up 10% CPU, it will just kill my SSD in a few months (or is /var/run all in RAM?). My docker.img is less than 50% full, log is 1% in the dashboard. If I run htop, I get the results shown in the attached screenshot. My cache drive has been XFS encrypted for a few months now; I thought it would solve the occasional btrfs csum error. I was wrong, as the docker img is still btrfs... Any ideas? Thank you!
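     On the parenthetical question: whether /var/run (and /var/log) is RAM-backed can be checked directly; tmpfs in the Filesystem column means those writes stay in memory rather than hitting the flash drive or the cache SSD:

        # tmpfs here means the path lives in RAM
        df -h /var/run /var/log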
  12. is the log that is shown in the UI available as a text file in some folder, maybe? Can it be written to a file periodically, daily for example? I might try to load it into a DB so I can analyse it better and optimize some jobs, so I can maybe group the disk spin up/down activities better.
  13. after 4-5 days it went away without me doing anything, then it came back by itself again, very weird. before that it was always like this. I was messing around in Plex (transcoding) before, and that started it again. If I stop dockers one by one, it gets smaller and smaller linearly, so it doesn't make much sense; I don't think it's connected to one docker. I've been running the same ones for a year now. anyone else? where/how can I even see this file?
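     Assuming "this file" is the containerd.toml from the htop output, its full path appears on containerd's own command line, so a sketch for locating and viewing it:

        # Show the running containerd process and the --config path it was given
        ps -ef | grep '[c]ontainerd'

        # Then view the file at whatever path --config points to, for example:
        cat /var/run/docker/containerd/containerd.toml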
  14. After a few days of running, I can see in the stats window that my CPU usage is constantly 10% higher than the normal baseline. If I run htop, I get the results shown in the attached screenshot. If I restart docker, it doesn't solve it (in Settings, Docker enable No, then back to Yes). If I restart the server, I think it sometimes solves it for a few days, then it starts again. I think it started around the time I changed some docker log settings a few weeks ago; now I have a single log, without rotation. It's starting to bother me now, because if this constantly writes so much log to the SSD that it takes up 10% CPU, it will just kill my SSD in a few months (or is /var/run all in RAM?). My docker img is less than 50% full, log is 1% in the dashboard. Btw, this morning I also had a btrfs csum error which crashed all my dockers; I think I recreated the docker img about 6 weeks ago. My cache drive has been XFS encrypted for a few months now; I thought it would solve the occasional btrfs csum error. I was wrong, as the docker img is still btrfs... Any ideas? Thank you!
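     On the "single log, without rotation" part, a sketch of how rotation can be re-enabled per container with Docker's json-file log driver (the size and file counts are just example values and some/image is a placeholder; on unraid these --log-opt flags can go in a container's Extra Parameters field):

        # Cap the container's json-file log at 3 rotated files of 10 MB each
        docker run -d --name=example \
          --log-driver json-file \
          --log-opt max-size=10m \
          --log-opt max-file=3 \
          some/image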
  15. The day has finally come for the nvme temp error: I updated the auto fan speed plug-in, tested it, and it works. Thank you! 2020.04.11 fixed NVME exclusion (courtesy of Leoyzen) ps: "added multi fan selection" - I'm not sure what this is and how it works. Multiple fans can be set to the HDD temps, or even to other temps, I guess? The UI is the same, so how can I configure this?
  16. anyone else have a problem with cpu usage? I have a 7700T, and after every weekly scheduled job (sunday morning), when my docker appdata is backed up, the deluge docker always restarts with "high" cpu usage. Idle, my server is around 2% cpu when nothing is happening; after the restart it is a constant 8%. After I manually restart deluge, it goes back to 2%. If I don't touch my server, it will stay at the 8% baseline for the whole week. I THINK it was the deluge-web process, last time I checked with 'top'. Does anyone else have this weird issue, where deluge-web cpu usage spikes on one thread after the auto backup? (I'm on the last 1.3 build, before the 2.0 beta.) I'm starting to get tired of this, any idea? Or an idea where I should start? thanks
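     A possible stopgap sketch, assuming the container is literally named "deluge" (check with docker ps): schedule this to run shortly after the weekly appdata backup, so the manual restart that fixes it happens automatically.

        #!/bin/bash
        # Restart the Deluge container once the weekly backup has finished,
        # since a manual restart is what brings the CPU usage back down.
        docker restart deluge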
  17. you are right, but I don't think anyone manages the plug-in anymore, and the problem with the plug-in is that it can't ignore nvme drives, so the 2nd part of what you wrote is impossible / doesn't work. I downgraded to the previous version (without nvme support), but it didn't work on 6.7.0, and the github repo for it is abandoned. If I could fix it, I would.
  18. It should be, but it doesn't work. It was reported months ago by several users, and the developer did **** all about it. It seems like such an easy thing to do...
  19. same boat. older versions didn't have nvme support, so I guess downgrading would solve the problem. any way to downgrade to the older version of the plug-in?