Everything posted by IamSpartacus

  1. So you're saying get a separate controller and only use that one to pass through to the VM? I'd really love to be able to use my front USB ports and thus basically treat the system like a baremetal PC even though it's running my "PC" as a VM.
  2. I'm looking into (and will soon be testing) converting my main everyday PC (Threadripper 1950X + ASRock Taichi X399) into an Unraid server + Windows VM, with just about all devices passed through so it can remain my daily driver. I want to keep all my current USB ports for the Windows VM (I need almost all of them for things like VR, etc.), which means booting my Unraid USB stick off a non-passed-through USB controller. Can I simply add a PCIe USB controller and connect my Unraid USB that way for booting? Any issues with this?
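A quick way to check whether this is feasible on a given board is to see which USB controllers sit in which IOMMU groups; a controller can only be passed through independently if it isn't grouped with the one hosting the boot stick. A minimal sketch (assumes a Linux host with IOMMU enabled; prints nothing otherwise):

```shell
# List USB controllers and their IOMMU groups. Controllers in the same
# group generally must be passed through together, so the Unraid boot
# stick needs a controller in a different group from the one given to
# the VM.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue          # no-op on hosts without IOMMU
    group=${dev#/sys/kernel/iommu_groups/}
    group=${group%%/*}
    # 0x0c03 is the PCI base class/subclass for USB controllers
    if grep -q 0x0c03 "$dev/class" 2>/dev/null; then
        echo "IOMMU group $group: $(basename "$dev")"
    fi
done
```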
  3. Anyone tested/gotten this plugin working with a Quadro RTX 4000? Thinking about picking one up.
  4. How does one install the guest agent? EDIT: Nvmd, found it on the virtio CDrom.
  5. I have. I just prefer to use LSIO images if possible because they are always very well maintained and constantly updated.
  6. Looks like V2 is out of beta and working well. Wonder if LSIO has plans to create a V2 container. I imagine updating the current container wouldn't work, since it would break everyone running V1 — there is no upgrade path from V1 to V2.
  7. I can confirm that using the JVM_MX variable is working for me and I'm now on the newest version.
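For reference, a hedged sketch of how a variable like this is typically wired up. The variable name JVM_MX is taken from the post above; the image name and memory values below are illustrative assumptions, not a verified configuration:

```shell
# Illustrative only: cap the container's Java heap (JVM_MX, named in
# the post above) below the Docker cgroup memory limit, so the JVM
# manages its own memory pressure before the kernel OOM-killer fires.
# Image name and values are assumptions for the sketch.
docker run -d --name=unifi-video \
  -e JVM_MX=3072M \
  --memory=4g \
  some/unifi-video-image
```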
  8. Has anyone been able to get the WebAPI plugin installed in this container to enable Organizr V2 to talk to Deluge? When I go to install the plugin it never shows an option to check off and thus isn't working.
  9. "No, it's a Supermicro thing. Only 2 zones controllable." Got it, that makes sense then as to why it's different. I appreciate the quick responses.
  10. Is that a recent change? I just upgraded my board from an ASRock Rack board and with that IPMI I had that ability to configure all 5 of my fans separately.
  11. Two (main/backup) 80TB servers that backup daily offsite.
  12. Is there a reason why my FANS 1-4 are all grouped together and there is no FAN 5 in my Fan Settings? I've got the latest BIOS and IPMI BMC for my board (https://www.supermicro.com/products/motherboard/Xeon/C600/X10SRM-F.cfm).
  13. I just wound up connecting via USB since my UPS is right next to my server anyway. Wasn't worth the hassle.
  14. I'm interested in both. I realize Emby will perform better since Plex can't do both encode and decode yet in Linux. Can you elaborate a little more? What exactly does 500fps equal time wise and what's the original bitrate of the file you're giving as an example here?
  15. I appreciate that sentiment, but that doesn't really give me any hard numbers or frame of reference to go off of. What kind of CPU are you comparing it to, and how much faster is it in comparison?
  16. Has anyone tested background transcoding (mobile sync or optimized versions) performance on a P2000? Trying to determine if I'd see a significant performance boost over my E5-2680v3.
  17. I don't spin my drives down, yet this plugin detects that all 6 of my data drives are spun down. But I guess for me it's not a big deal, because since I don't spin my drives down I can just set turbo write as the default.
  18. I'm just showing the RAM constantly creeping up towards 4GB in Advanced Docker settings. Then I keep seeing this in my syslog:

      Feb 25 09:38:31 SPE-UNRAID01 kernel: Task in /docker/55b2e60f9f27f0028b26ff6f2afd34d31c045f1660d64d530ff2622a795f96cc killed as a result of limit of /docker/55b2e60f9f27f0028b26ff6f2afd34d31c045f1660d64d530ff2622a795f96cc
      Feb 25 09:38:31 SPE-UNRAID01 kernel: memory: usage 4194304kB, limit 4194304kB, failcnt 234151298
      Feb 25 09:38:31 SPE-UNRAID01 kernel: memory+swap: usage 4194304kB, limit 8388608kB, failcnt 0
      Feb 25 09:38:31 SPE-UNRAID01 kernel: kmem: usage 47156kB, limit 9007199254740988kB, failcnt 0
      Feb 25 09:38:31 SPE-UNRAID01 kernel: Memory cgroup stats for /docker/55b2e60f9f27f0028b26ff6f2afd34d31c045f1660d64d530ff2622a795f96cc: cache:12772KB rss:4133200KB rss_huge:1951744KB shmem:3480KB mapped_file:396KB dirty:396KB writeback:132KB swap:0KB inactive_anon:3928KB active_anon:4134452KB inactive_file:4896KB active_file:1180KB unevictable:0KB
      Feb 25 09:38:31 SPE-UNRAID01 kernel: Tasks state (memory values in pages):
      Feb 25 09:38:31 SPE-UNRAID01 kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
      Feb 25 09:38:31 SPE-UNRAID01 kernel: [ 32436] 0 32436 5840 81 81920 0 0 run.sh
      Feb 25 09:38:31 SPE-UNRAID01 kernel: [ 32578] 0 32578 4314 42 73728 0 0 jsvc
      Feb 25 09:38:31 SPE-UNRAID01 kernel: [ 8499] 99 8499 7912541 1030559 10633216 0 0 jsvc
      Feb 25 09:38:31 SPE-UNRAID01 kernel: Memory cgroup out of memory: Kill process 8499 (jsvc) score 985 or sacrifice child
      Feb 25 09:38:31 SPE-UNRAID01 kernel: Killed process 8499 (jsvc) total-vm:31650164kB, anon-rss:4123796kB, file-rss:0kB, shmem-rss:0kB

      It's been doing this for a few days and only reboots fix it (until it reaches the RAM limit again). For now I've rolled back to 3.9. Lots of people are complaining about memory leaks in 3.10 on the Ubiquiti forums.
  19. Does anyone know if it's possible to downgrade back to 3.9? My 3.10.x container is unstable because of the memory leak; it keeps hitting my defined 4GB RAM limit and then crashing. It's unusable at this point.
  20. Ok, I think I found the culprit. My Unifi Video docker appears to have a memory leak and it keeps hitting its 4GB limit:

      Feb 25 09:38:31 SPE-UNRAID01 kernel: Task in /docker/55b2e60f9f27f0028b26ff6f2afd34d31c045f1660d64d530ff2622a795f96cc killed as a result of limit of /docker/55b2e60f9f27f0028b26ff6f2afd34d31c045f1660d64d530ff2622a795f96cc
      Feb 25 09:38:31 SPE-UNRAID01 kernel: memory: usage 4194304kB, limit 4194304kB, failcnt 234151298
      Feb 25 09:38:31 SPE-UNRAID01 kernel: memory+swap: usage 4194304kB, limit 8388608kB, failcnt 0
      Feb 25 09:38:31 SPE-UNRAID01 kernel: kmem: usage 47156kB, limit 9007199254740988kB, failcnt 0
      Feb 25 09:38:31 SPE-UNRAID01 kernel: Memory cgroup stats for /docker/55b2e60f9f27f0028b26ff6f2afd34d31c045f1660d64d530ff2622a795f96cc: cache:12772KB rss:4133200KB rss_huge:1951744KB shmem:3480KB mapped_file:396KB dirty:396KB writeback:132KB swap:0KB inactive_anon:3928KB active_anon:4134452KB inactive_file:4896KB active_file:1180KB unevictable:0KB
      Feb 25 09:38:31 SPE-UNRAID01 kernel: Tasks state (memory values in pages):
      Feb 25 09:38:31 SPE-UNRAID01 kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
      Feb 25 09:38:31 SPE-UNRAID01 kernel: [ 32436] 0 32436 5840 81 81920 0 0 run.sh
      Feb 25 09:38:31 SPE-UNRAID01 kernel: [ 32578] 0 32578 4314 42 73728 0 0 jsvc
      Feb 25 09:38:31 SPE-UNRAID01 kernel: [ 8499] 99 8499 7912541 1030559 10633216 0 0 jsvc
      Feb 25 09:38:31 SPE-UNRAID01 kernel: Memory cgroup out of memory: Kill process 8499 (jsvc) score 985 or sacrifice child
      Feb 25 09:38:31 SPE-UNRAID01 kernel: Killed process 8499 (jsvc) total-vm:31650164kB, anon-rss:4123796kB, file-rss:0kB, shmem-rss:0kB
  21. I just don't get what that could be, since I have 64GB of memory in my server and it's only using about a third. Unless it will log this error if one of my dockers is hitting its configured limit?
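That reading of the log checks out: a cgroup OOM kill is triggered by the container's own limit, regardless of free host memory. The `usage 4194304kB, limit 4194304kB` line in the syslog above is exactly a 4 GiB ceiling being hit, and the kernel then killed the largest process (jsvc) inside that cgroup. A quick sanity check on the numbers (the container name in the comment is illustrative):

```shell
# usage 4194304kB == limit 4194304kB: the cgroup hit its own ceiling,
# not the host's. 4194304 kB / 1024 / 1024 = 4 GiB.
usage_kb=4194304
echo "$((usage_kb / 1024 / 1024)) GiB"   # prints "4 GiB"

# To read a running container's configured limit in bytes (container
# name is illustrative; requires docker):
#   docker inspect -f '{{.HostConfig.Memory}}' UnifiVideo
```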
  22. Fix Common Problems is detecting that my server is out of memory, while the GUI and CLI both show only 34% usage. Subsequent scans keep finding the same error. spe-unraid01-diagnostics-20190225-1341.zip