Januszmirek

Members
  • Posts: 61
  • Joined
  • Last visited

Everything posted by Januszmirek

  1. Got it, thank you for pointing me in the right direction. My Plex mapping for tmp is pasted below. I followed this post when setting up the transcoding location, so maybe I've messed something up ;)
  2. I don't understand what to make of it or where to start. Forget about the VM for a minute; I barely use VMs, and the issue described in the original post happens regardless of VM use. Yes, if I stop Plex, no memory issues occur.
  3. Diagnostics attached. It is HW transcoding through the internal iGPU. It never maxes out the CPU without maxing out RAM first though, so I thought it was a RAM issue, not a CPU issue. tower-diagnostics-20211231-1536.zip
  4. Not sure I picked the right section of the forum for this; if not, please move it to the correct one. I run a rather standard set of Docker containers and 2 VMs that I very rarely turn on. The issue I have is that even with 16 GB of RAM, which I thought would be plenty for my use, I experience very high memory and CPU core usage, and subsequently Unraid and all containers stop responding. This usually happens with Plex when multiple streams (3-4) are being played with transcoding. I do realize that Linux uses all the available RAM and that this is fine; I don't need to see free RAM ;)

The issue is that with all these containers running and Plex serving 3-4 transcoded streams, my RAM is touching 70-80%. If I turn on any VM, the system practically freezes, with RAM utilization around 100% and the same for all CPU cores. Now, I don't have any issue with upgrading my 16 GB to 32 GB, but I wonder if that would actually make any difference, since Linux would use all available RAM anyway and even with the upgrade I could end up back at square one ;(

As for the Plex transcoding setup, I guess it's also pretty standard: transcode to RAM with a 60-second buffer. I would like to keep transcoding to RAM to save on disk wear. Turning off transcoding is not an option, as many clients do not support all formats natively, hence the need to transcode.

I observed that when I stop the array and start it again, the 'idle' RAM usage drops to 30% and stays around 50% for a couple of days, then gradually builds up to around 70-80% after a week or so, when I start to experience the issues described above. I also don't get why RAM filling up immediately spikes CPU utilization, which in turn practically freezes the whole Unraid server ;( Well, not freezes, but makes it barely responsive.

Ultimately my questions are: 1. Would an upgrade to 32 GB from 16 GB make a meaningful difference in my case, or is it a waste of money because this is a configuration issue rather than a hardware one? 2. If not, what easy solutions can I use to manage RAM more efficiently? Maybe a cron job to release RAM every now and then? Or maybe there's a plugin that can handle this job? Thanks for the help.
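A cron job that blindly frees RAM is rarely the right fix; capping the transcode scratch space usually is. A minimal sketch, assuming a host-side path of /tmp/plex_transcode and a 4g cap (both made-up values, to be adapted to your Plex mapping):

```shell
# Cap Plex's RAM transcode space with a dedicated, size-limited tmpfs so a
# few simultaneous transcodes cannot consume all system RAM. Run as root on
# the host, then point the container's /transcode mapping at this path:
#   mkdir -p /tmp/plex_transcode
#   mount -t tmpfs -o size=4g tmpfs /tmp/plex_transcode

# For monitoring, MemAvailable (not "free") is the number that matters,
# because page cache is reclaimed on demand:
mem_pct_available() {
  awk '/^MemTotal/ {t=$2} /^MemAvailable/ {a=$2} END {printf "%d\n", a*100/t}' /proc/meminfo
}
mem_pct_available
```

With the tmpfs capped, a transcode that fills the scratch space fails on its own instead of pushing the whole box into swap-thrash.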
  5. Thanks! All updated now and it works as it should ;)
  6. Wow ;) that did the trick with the permissions, thanks! I don't get what you mean by an old template, though. I use the ich777 repository and it is on its latest update. What else can I update to a new 'template'?
  7. OK, so a bit more context then; I believe this all comes down to permissions. The folders with subfolders and files that I create in Krusader can be removed easily. The folders I have issues with are the ones created by other Docker containers (for example, the CA Backup plugin) or created on a Windows VM and copied across a network share. Some folders' permissions tab is greyed out, like in the attached screenshot, but other folders show User: krusader and are still not removable with the Delete button. I think a setting that would give Krusader root access might be the solution, but I don't know how to set it up.
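One way to clear this from the Unraid host terminal is to reset the permissions the way the built-in New Permissions tool does. A sketch, with a placeholder share path (chown needs root on the host):

```shell
fix_perms() {
  # Re-grant owner/group read-write; capital X adds execute only to
  # directories so they can be traversed. On the Unraid host you would
  # typically also run, as root:
  #   chown -R nobody:users "$1"
  chmod -R u+rwX,g+rwX "$1"
}

# Usage (placeholder path):
# fix_perms /mnt/user/yourshare
```

After ownership is back to nobody:users with group write, a non-root container user like krusader can delete the folders again.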
  8. Potentially a noob query about Krusader, so bear with me ;) Some folders (with files and subfolders inside) do not respond to the Delete/F8 command when I try to delete them; literally nothing happens when I press the Delete button. If, however, I invoke a terminal window in Krusader and rm -r the same folder, it is deleted immediately, every single time. I remember the Delete button used to work with no problems: I would get the warning message that the folder contains files/subfolders, but confirming it got the folders deleted. Does anyone know what settings should be changed to bring back the Delete button's functionality? Thanks.
  9. Good to hear that the backup process works for you. Just out of curiosity, did you actually try to restore that backup into a fresh new Bitwarden install, to make sure the backup can be restored including all user accounts and their data?
  10. Hello, I am interested in Bitwarden backup as well. Does this script actually back up everything, including attachments? Did you try to recover your backup? Did it work?
  11. Stupid macOS! That worked like a charm, thanks a bunch ;)
  12. Tried to update to the latest version and the error below appeared. Can anyone help me fix this? Thanks.
  13. I think I have the exact same issue with the combination of Nextcloud v21 and Nginx Proxy Manager. Everything works like a breeze, except that every file larger than 1 GB downloaded remotely stops after 1 GB. Can someone please explain to me, like I'm 4 years old, which files in Nextcloud and/or in NPM I should edit to get rid of this limitation once and for all? Many thanks in advance.

EDIT: I found the solution myself. No need to edit any files in Nextcloud. To get rid of the 1 GB download limitation: 1. Go to the NPM Proxy Hosts tab. 2. Click the '3 dots' and click 'Edit' on your Nextcloud proxy host. 3. Go to the Advanced tab and put in the following: proxy_request_buffering off; proxy_buffering off; 4. Click Save. That's it; enjoy no more limitations on downloaded file size. Tested with a 13 GB file from a remote location and it worked like a charm. Hopefully someone finds this useful for their setup.
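For reference, the Advanced-tab snippet from the steps above, one directive per line. With buffering off, nginx streams the response straight through instead of spooling it to a 1 GB temp file:

```nginx
# Advanced tab of the Nextcloud proxy host in NPM:
proxy_request_buffering off;
proxy_buffering off;
```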
  14. Hello, is it possible to downgrade Deluge 2.x.x to 1.3.x and maintain the present torrent state? I tried it before, using tags only, and ended up with a successful downgrade, but the torrents disappeared from the client. If you could tell me how to transfer the current state of the torrents to the downgraded version, it would be appreciated, thanks ;)
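One cautious first step, assuming the linuxserver.io container layout where /config/state holds the .torrent and fastresume files (the path and the backup_state name are illustrative, and 2.x state is not guaranteed to load in 1.3.x - keeping a copy lets you re-add the .torrent files by hand if it doesn't):

```shell
backup_state() {
  # Copy a directory tree to a timestamped sibling before changing the
  # image tag, so the 2.x state survives even if 1.3.x refuses to read it.
  cp -a "$1" "$1-backup-$(date +%Y%m%d)"
}

# Usage on the Unraid host (placeholder path):
# backup_state /mnt/user/appdata/deluge/state
```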
  15. My issue is not the typo. The issue is that when I input the above command into the Unraid terminal or the netdata console, I always get the "-bash: -v: command not found" error ;(
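That particular error usually means bash executed the `-v ...` line as its own command, which happens when a multi-line command is pasted without a trailing backslash on each line. A generic illustration (not the exact command from this thread):

```shell
# Every line except the last must end with "\" and nothing after it;
# otherwise bash runs the next line (starting with "-v") on its own,
# producing "-bash: -v: command not found".
docker run -d --name=netdata \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  netdata/netdata
```

Alternatively, paste the whole command on a single line with no backslashes at all.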
  16. A newbie query here: where do I put these commands? When I put the above in the CLI, I get:
  17. Hello, I am using NPM with linuxio/nextcloud. Everything works perfectly except for one issue: when someone tries to download a file more than 1 GB in size, it either stops downloading the file or breaks the download entirely. In other topics I found various solutions for this issue, but they all point towards the letsencrypt config. Can anyone point me towards a solution with NPM, and how to enable downloads of files >1 GB? I much appreciate your input, thanks. Below are the suggested solutions I have found so far, but I had no luck finding the files they mention:
  18. Tried to reinstall Catalina today and this message showed up. I didn't change anything in the VM GUI config, please help ;) EDIT: sorry about that, obviously I missed the part about the network interface from the SI tutorial ;)
  19. Perhaps I did not word it correctly. I don't know what the reason for this issue is, and I have not reinstalled it yet. What I am concerned about is whether reinstalling would mean losing all configuration. Do I need to start the config from scratch? I spent a lot of time configuring it and wouldn't want to lose it. Can you confirm I can reinstall with no risk of losing the configuration? Thanks
  20. Something weird happened with Nextcloud today. I was trying to add another external mount; I was running version 19.0.4 of Nextcloud on Unraid 6.8.3. After editing all the details under 'Add new Path' in the Nextcloud container I hit Apply and was greeted with an error. I thought it was a standard mistake with paths, etc., so I wanted to try again. To my surprise, the Nextcloud container is gone: it doesn't show up among the other containers, the service is not running, and I cannot connect to it either locally or externally. I don't even know where to start, or what info I can provide to help you troubleshoot this. There is still a nextcloud folder in appdata. Please help ;)
  21. Great work, I'd like to replicate the same for my EdgeRouter 4. @FlorinB If you still follow this thread, please help me with the queries below.

1. Where exactly on Unraid did you save the main configuration file, and under what name?

2. The following section makes me think ports 80 and 443 need to be forwarded for this to work - is this correct? I can't use them, as I have already forwarded them for Nextcloud. Is there any way to configure other ports for EdgeMAX? If so, what modifications do I need to make, and where?

server {
    listen 80;
    server_name edgex.*;
    return 301 https://$host$request_uri;
}

upstream erl {
    server 192.168.22.11:443;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name edgex.*;
    include /config/nginx/filterhosts.conf;
    include /config/nginx/ssl.conf;
    client_max_body_size 512m;

3. Is the below actually needed? I want to be able to access the EdgeMAX GUI from anywhere, not limited to a certain IP range. I don't have a filterhosts.conf file in /config/nginx - do I need to create one?

include /config/nginx/filterhosts.conf;
#allow from this ip
allow 212.122.123.124;
#temporary internet ip on my router
allow 178.112.221.111;
#deny all others
deny all;

4. Are there any modifications needed on the EdgeMAX side?

5. Are the above modifications enough? Is there anything else you didn't mention in your original post?

Thanks for the help.
  22. Hello, did you have any success with notifications for rclone? Notifications are pretty much the only feature I'm missing from rclone. I don't need anything fancy; a Discord notification or an email sent on sync errors only would be enough ;)
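A minimal sketch of one way to get this, wrapping the rclone job in a script run from cron or the User Scripts plugin. The WEBHOOK_URL variable, the notify_on_failure name, and the paths in the usage line are all made up; adapt them to your setup:

```shell
notify_on_failure() {
  # Run the given command; on a non-zero exit, post a short message to a
  # Discord webhook. Requires curl and a WEBHOOK_URL environment variable.
  "$@"
  local status=$?
  if [ "$status" -ne 0 ]; then
    curl -s -H "Content-Type: application/json" \
      -d "{\"content\": \"rclone job failed (exit $status)\"}" \
      "$WEBHOOK_URL"
  fi
  return "$status"
}

# Usage, e.g. from a User Scripts cron entry (placeholder paths):
# WEBHOOK_URL="https://discord.com/api/webhooks/..." \
#   notify_on_failure rclone sync /mnt/user/data remote:backup
```

Because the wrapper only fires on a non-zero exit code, a clean sync stays silent, which matches the "errors only" requirement.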
  23. Hello, I'm using the latest Deluge Docker from ls.io - Deluge 2.0.3-2-201906121747-ubuntu18.04.1. One of the trackers I use has the following rule: 'Due to release stream issues, versions 2.x sourced from their PPA (or Docker images based on the PPA) will not work here.' Apparently the lsio Docker image is based on the personal package archive. Is there anything I can do to override this? I really like Deluge for seeding and I've had no other issues with this version so far. Thanks for the help ;)
  24. If you can set up the new container before deleting the old one, the easiest way would be to use the export/import location option.
  25. This might sound like a noob/dumb question to you, but I've got to ask one thing about my Nextcloud setup. I was mostly following SpaceInvaderOne's guide on the Nextcloud reverse proxy setup - mostly, because my setup is missing one step: the reverse proxy server. In my arrangement I have Nextcloud linked with a DDNS service through letsencrypt, but without the additional step of a reverse proxy pointing back to the DDNS domain. Everything works smooth as butter, both locally and from a mobile device. I have 'All checks passed' on the internal Nextcloud page and an 'A+' security rating on scan.nextcloud.com. That being said, I was wondering how much 'less secure' my setup is compared to the full-blown setup shown in the SIO video? Thanks, and sorry if this was asked before - I couldn't find the answer in this thread.