LSL1337

Members
  • Posts: 147
  • Joined

Everything posted by LSL1337

  1. Maybe try the 'nct6775' sensor. It works for my B250 Intel board. Auto-detect didn't work for me either.
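If the plug-in's auto-detect does nothing, a manual check along these lines can confirm whether the driver exposes any fan inputs at all. This is a sketch: 'nct6775' is just the driver for the Super I/O chip on common B250 boards; substitute the one for your chip.

```shell
# Manually load the sensor driver and look for fan inputs under hwmon.
# 'nct6775' is an assumption -- use the driver matching your board's chip.
list_fan_inputs() {
  modprobe nct6775 2>/dev/null || true   # harmless if built-in or unavailable
  find /sys/class/hwmon -name 'fan*_input' 2>/dev/null || true
}
list_fan_inputs
```

If this prints any `fan*_input` paths, the kernel sees the fans and the problem is in the plug-in; if it prints nothing, the driver itself isn't binding.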
  2. Multiple people reported this NVMe temp issue, no response. Seems like it would be an easy fix, but I guess no one cares. The plug-in could even ignore NVMe drives by default, since they aren't really cooled by fans anyway.
  3. Is it possible to install a previous version of AutoFan, from before it had 'NVMe support'? It basically broke the functionality for every NVMe user.
  4. Marbles: On Linux you have access via the WebUI only. The WebUI doesn't have RSS features implemented yet (I wouldn't hold my breath; it was requested years ago and the devs gave a 'maybe later' answer, but of course it's not up to the LS.io guys). If you want RSS on Unraid, you should use Deluge (it's awesome), or give FlexGet a try, but that's a whole different world.
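For anyone curious what the FlexGet route looks like, a minimal task that watches an RSS feed and hands accepted entries to Deluge is roughly this (a sketch using FlexGet's `rss`, `regexp`, and `deluge` plugins; the feed URL, pattern, and connection details are placeholders):

```yaml
# ~/.flexget/config.yml -- minimal sketch, all values are placeholders
tasks:
  my-feed:
    rss: https://example.com/torrent-feed.xml
    regexp:
      accept:
        - some show name
    deluge:
      host: localhost
      port: 58846
```

Run it with `flexget execute` (or on a schedule) and it feeds matches into the Deluge daemon.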
  5. Hi. I'm on 6.6.7, latest plug-in. Auto Fan doesn't exclude my cache drive. It used to work; anyone else have this issue? Now my fan speed is always based on my "hot" NVMe SSD. Tried restart, disable/re-enable, tick/untick, etc. Still, my highest temp (in the log) is always my NVMe SSD temp. Any ideas? Thanks! edit: I see other people have the same exact problem. On my previous server build I did not have an NVMe cache, and it did work as advertised: it excluded my SATA SSD cache drive. I guess the NVMe support is not great atm. (I also have an unassigned drive, but it's 2.5", always spun down, and it's cooler than my array HDDs, so I don't think that's the issue.) Hopefully it will be fixed later.
  6. Yesterday I was messing around with this Docker, and I could only get to the WebUI in host mode. I mapped the ports and the -e WebUI parameter as well, but no luck... is there a downside to running this Docker in host (network) mode?
  7. Thanks, but my config folder is mounted. The WebUI password only resets after I (manually) update it, not after a restart.
  8. I've been using this Deluge Docker for years now with the same mappings. If I downloaded something into the docker.img, it would show up in 2 seconds. I usually download 5-10-100 GB torrents. I checked the log size with a 'user script' I found on this forum in a similar topic, written by Squid I guess. It showed a 500 MB log after 3 days of uptime. I force-updated the image (auto-update is off for Deluge for me), and 1 minute later the script said my log was only a few KB. Since I updated Deluge (a few weeks ago), I regularly see higher than normal CPU usage on deluge and deluge-web as well. (And I guess it's normal that if I update the Deluge Docker, my WebUI password resets, right?) I don't think I changed any settings about logs, but the 6-month-old build didn't have this issue. When I get home I will send you the mappings, but they are fine. How can I even see the actual log files? Maybe that would make it easier to debug the problem. Thanks.
  9. That section is quite useless imho. I'm not downloading into the Docker; something is filling it up fast. I can't even imagine how Deluge can produce 500 MB of logs in a few days. My question is about Deluge now, so anyone?
  10. Hi, my docker.img is filling up. Deluge Size: 280M, Logs: 547.0MB in the last weeks, and deluge and deluge-web have high CPU usage. Any ideas? Where can I even find these files? I guess they're inside the docker.img, since they're not in the 'mapped' folders. I'm not a Docker/Linux expert (as you can see). Thanks! OK: I restarted the Docker, and now the log size is 0, but I guess it will grow back to this size in 1-2 days. Anyone else with similar issues?
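The logs in question are Docker's per-container JSON logs, which live under the Docker data root (inside docker.img on Unraid), not under any mapped path. A sketch for locating the big ones, assuming the standard Docker layout:

```shell
# List per-container JSON log files under a Docker data root, largest last.
# Default root is /var/lib/docker/containers; pass another path to inspect
# somewhere else (e.g. a mounted docker.img).
docker_log_sizes() {
  find "${1:-/var/lib/docker/containers}" -name '*-json.log' \
    -exec du -h {} + 2>/dev/null | sort -h
}
docker_log_sizes
```

Docker can also cap these per container via the json-file driver's options, e.g. adding `--log-opt max-size=10m --log-opt max-file=1` to the container's extra parameters.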
  11. Yeah, google showed me the same. I used TS3 before, and I only forwarded the default port and it worked. It hasn't for a while now. Any Docker-specific thing I should know? Does anyone have this working with WAN:port?
  12. I started with that (default settings) and couldn't forward 9987 either. Couldn't join from WAN:9987, but LAN:9987 worked. Same with the bridge solution I mentioned above. btw: any reason this Docker uses host mode and not bridge?
  13. I don't auto-update Deluge. The auto-update and backup plugin runs, so it stops Deluge every week, I guess, to make a backup. Last time it reset the WebUI password back to the default, and Radarr/Sonarr couldn't connect to Deluge anymore. Has anyone else had this issue, where your WebUI password was changed back to 'deluge'? Is there a way I can hardcode this somehow, maybe via a parameter or something? Thanks!
  14. Anyone else have problems with connections from outside? I can connect via LAN on the default port, but port forwarding still doesn't work no matter what I do. I forward the default 9987 port. I even changed the Docker-to-host mapping to something random, like 8088, and I can connect to LANIP:8088, but when I forward the port, I can't connect to WAN:8088. Any ideas? Is it my router? I have plenty of other Dockers which work fine with port forwarding (web apps like Radarr and Sonarr). Anything special about TS3? Do I need other ports? cheers
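One TS3-specific detail worth checking in the posts above: the default voice port 9987 is UDP, while web-app Dockers like Radarr and Sonarr only need TCP, so a TCP-only forward (or testing the port in a browser) fails even when the container is fine. A bridge-mode run would need the protocols spelled out, roughly like this (a sketch, not meant to be run verbatim; the image name is illustrative):

```shell
# TeamSpeak 3 default ports: 9987/udp voice (the one the router must forward
# as UDP), 10011/tcp ServerQuery, 30033/tcp file transfer.
docker run -d --name teamspeak \
  -p 9987:9987/udp \
  -p 10011:10011/tcp \
  -p 30033:30033/tcp \
  teamspeak
```

If the router's forwarding rule was created as TCP (a common default), changing it to UDP, or UDP+TCP, is the usual fix.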
  15. I have two Plex servers running on Unraid. One is in normal default mode (on the default Unraid host IP, .100), and one is on a different IP address (br0, with .128). Both work great out of the box, and both are accessible from outside. 2 Plex servers need 2 Plex monitoring Dockers (Tautulli, aka PlexPy). As expected, I can connect to the .100 default Plex server just fine, but I cannot connect to the br0 .128 server, only via WAN. But my WAN IP keeps changing, so it loses the connection after a few days (home internet, periodically changing WAN IP, of course). QUESTION: How can I connect my 2nd Tautulli (on the .100 server) to my br0 .128 Plex server? Is it possible? What do I need to change in the Dockers' network configuration? If I point the 2nd Tautulli to 192.168.1.128:port, it will never connect. If I connect it to WAN:port, it works fine for a few days, as expected. (I can't enter a DNS address.) Thank you! edit: I posted this on the Tautulli Docker support thread, but no answer for 2 weeks. This seems like a general network configuration issue. edit2: solution from reddit:
  16. Hi guys, I used the Tautulli Docker for a long time, and it could always connect fine to my PMS. I managed to get 2 Plex servers running on my machine, but now Tautulli is messed up. I have my plexinc Docker for my main Plex; it's running in host mode, and everything is fine. I have my second LS.io Docker for my shared Plex server on br0, with a fixed 192.168.1.128 IP address. I need to run 2 Tautullis, I know, but I can only connect to my br0 PMS via the WAN IP, which changes all the time, of course. How can I connect my LS.io Tautulli to a br0 LS.io PMS? It doesn't work if I put Tautulli in host mode either. Any way I can connect the 2 containers? (btw, I had the same issue when I used the official Tautulli Docker.) Thank you!
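The failure mode in the posts above matches a known macvlan property: the host (and therefore host-networked containers) cannot reach containers on a macvlan network such as Unraid's br0, while containers on the same macvlan can reach each other. One workaround sketch is to put the second Tautulli on br0 with its own static IP, so the Tautulli-to-Plex traffic stays inside the macvlan (the network name, addresses, and paths below are examples taken from the posts, not a prescription):

```shell
# Sketch: run the second Tautulli on the same macvlan network (br0) with
# its own address, so it can reach the br0 Plex at 192.168.1.128 directly.
docker run -d --name tautulli-2 \
  --network br0 --ip 192.168.1.129 \
  -v /mnt/user/appdata/tautulli-2:/config \
  linuxserver/tautulli
```

The trade-off is that this second Tautulli then inherits the same limitation: the host itself can't reach it, only other LAN/br0 clients can.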
  17. Any tips for what software to use? Anyone have experience with XFS restore? Or any other ideas for btrfs restore? (I still have the old SSD cache, from which I moved the data to the array. Can that be recovered?) The filesystem is OK, but the files were moved (copy and delete, I guess); no write activity on the drive since. I tried XFS restore apps on Windows, so far no luck... same for btrfs. 1-2 small files were recovered, but nothing substantial.
  18. I ran both commands on the first disk (cache), then only the second one on the others. Looks like it was the other way around... thank you for your help.
  19. Reading your post again: wouldn't the above command do the reading? It would read 20 GB of sdc and write it to /dev/null, so I wouldn't lose my data, right? If my partitions are unreadable, it means I must have done it the other way around, right? Written 20 GB of /dev/null to sdc.
  20. Hi, I THINK I ran the following command: dd if=/dev/sdc of=/dev/null bs=2M count=10000. I did a count=1000, and then 10000, so that means I've overwritten the first 20-22 GB of each disk. How can I go about fixing these drives? Is there a way I can mount them on this Unraid machine? I don't have any other Linux machine which can read XFS, only my main Win10 PC. Can you point me in the right direction, maybe? Thank you!
  21. After reading up on the dd command: did this command just overwrite my partition, without any questions, and now all of it is gone? Can some of it be repaired/salvaged? I did 20 GB writes; does it use empty sectors, or just the first 20 GB? Can xfs_repair or something like that help me?
  22. So after a quick search for 'Unmountable: Unsupported partition layout', I get results from people who couldn't mount after an Unraid version upgrade. That's not the case for me. I just bought an NVMe SSD and installed it: switched all my shares to Cache: Yes (moved everything to the array), removed the old SATA cache, put the new NVMe one in, set my shares back (where needed) to Cache: Prefer, and ran the mover to move data back to the cache in some cases.

     So, during the move I wanted to test the speed. I'm a Linux noob, did a quick google search, and ended up on this page: http://www.fpgadeveloper.com/2016/07/measuring-the-speed-of-an-nvme-pcie-ssd-in-petalinux.html. I ran the time dd if=/dev... of=/dev... command, and it showed some false numbers (3 GB, then 600 MB, whatever). Tried it with my other drives as well (/dev/sdc, sdb); it still showed 300-400 MB for SATA HDDs, so it was obviously not valid. Whatever. I clicked the Shares tab, and some of them were missing already, hmm. After the mover finished, I restarted, and boom, multiple drives unmountable.

     I have 2 array drives (8TB+3TB) (no parity yet, I know...), 1 cache (used to be SATA, now NVMe), and 2 unassigned (2TB+4TB). I THINK I ran this speed-test command on the 8TB (now unmountable), on the NVMe drive (now unmountable), and the 2 unassigned drives (I think unmountable, but there is no text on the main screen). So I can only mount the 3TB drive, on which (I think) I didn't run the above-mentioned 'speed test' command. I don't know anything about filesystems etc.; it's just the only thing I can think of that could have messed up my partitions. They were always OK before. I've only seen this error when installing brand-new drives, but then I just formatted them and was good to go.

     SO, DID I LOSE MOST OF MY DATA? Is there a way to fix these partitions/drives? I can't mount ANY drives via Unassigned Devices, only the 3TB, which can be mounted via the array as well. Any idea how I can fix the partitions? Could the 'dd' command have messed up all my partitions? Any input would be appreciated, thank you!
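For what it's worth, the direction of dd is what decides whether data is harmed: `if=` is what gets read, `of=` is what gets written. The command as quoted (`of=/dev/null`) only reads the disk and is harmless; damage like the above implies the arguments ended up reversed. A safe way to see the difference, using a throwaway file instead of a real /dev/sdX:

```shell
# Demonstrate dd's direction on a scratch file -- never on a real disk.
demo=$(mktemp)
printf 'important data' > "$demo"

# Read direction: if=<file> of=/dev/null copies data OUT of the file; harmless.
dd if="$demo" of=/dev/null bs=1M 2>/dev/null
after_read=$(cat "$demo")

# Write direction: of=<file> overwrites from byte 0 -- the destructive one.
dd if=/dev/zero of="$demo" bs=1 count=14 2>/dev/null
first_bytes=$(od -An -tx1 -N4 "$demo" | tr -d ' ')

echo "after read : $after_read"    # file unchanged
echo "after write: $first_bytes"   # first bytes are now zeros
rm -f "$demo"
```

On a disk, those first megabytes include the partition table and filesystem superblocks, which is why a reversed dd makes the drive show as unmountable even though most data further in may survive.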
  23. We can't be the only two users with the same fan-detect problem. This has to be a version-upgrade or plug-in error, right? When I press Detect on 6.5.2, nothing happens, unlike in the old versions. Any other way to figure out my fan sensor name and input it manually? I have a 7700T and an MSI B250 Bazooka btw. Thanks.
  24. I just wanna say thanks. These last 2 posts really helped me out. I was under the impression that without the intel_pstate governor I couldn't get turbo on my 7700T. With your script I could test it and confirm it was working with the 'conservative' governor: it goes back down to 900 MHz at idle and turbos up to 3800 MHz when it needs to. Thank you!
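In case it helps anyone else testing this, the governor and current frequency can be read straight from sysfs. A minimal sketch; the cpufreq paths only exist on systems with frequency scaling, so it falls back gracefully:

```shell
# Print the active cpufreq governor for cpu0, or 'unknown' where the
# cpufreq sysfs interface isn't exposed.
current_governor() {
  f=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  if [ -r "$f" ]; then cat "$f"; else echo "unknown"; fi
}

# Current frequency in kHz, falling back to /proc/cpuinfo's MHz field.
current_freq() {
  f=/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
  if [ -r "$f" ]; then cat "$f"
  else grep -m1 'cpu MHz' /proc/cpuinfo || echo "unavailable"; fi
}

echo "governor: $(current_governor)"
echo "freq    : $(current_freq)"
```

Watching `current_freq` in a loop while loading a single core (e.g. with a busy shell loop) shows whether turbo actually kicks in under the chosen governor.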
  25. I'm in the same boat, on a B250 motherboard and a 7700T. The funny thing is, it was working on 6.5.1, and one day it didn't. Any ideas? Same with the motherboard temp; now I can only find the CPU temp. I've now upgraded to 6.5.2, and still, when I press Detect, NOTHING happens. Thanks.