Jahf

Everything posted by Jahf

  1. Disabling C-states may have addressed the problem (possibly in addition to disabling ASPM before this). At some point I'll be getting a single-slot GPU, moving the HBA up to the x8 CPU slot, and seeing if I can safely re-enable C-states. But that will wait until this proves stable, and until my wallet won't scream at me for buying even more parts.

     At this point, after a fair amount of stress over 3 days (mixed use: a parity check while running benchmarks, Docker torrents, and game streaming from the Windows VM), the system has been more solid than at any point since building it, with no more lockups/reboots. I've of course just jinxed it *knock on wood*. I'll be leaving the system in a low-use state for a couple of days to see if I run across the problem again.

     As for RAM, I know I'm out of spec. Just going over 3200 MHz on these sticks is out of spec. I spent over a week trying to overload it with things like OCCT and memtest and haven't had any issues during actual use (no WHEA errors in Windows with ECC on, no errors with ECC off in extended memtests, never a crash/freeze/lockup while the system was in use). Since this isn't a mission-critical machine, I'm willing to accept being out of spec. I did find a couple of tables confirming the 2667 MHz specification on the 3950X, and I know the 5950X shares the same IMC, so yeah, agreed: it's out of spec. But specs tend to be conservative, and I'm not currently seeing an issue that makes me think it's the RAM.
  2. Apologies if this is already well documented; I've been searching, but most of the terms are generic enough that I'm not finding an answer to what I'd like to do, and I'm still very new to QEMU/KVM. I have 2 GPUs in my system and working passthrough VMs for my big GPU. I'm hoping there's a way to use the smaller GPU (GTX 1060 6GB), attached to the Unraid host in GUI mode, as the display for other VMs I start up. Reasons:

     1. Use the 1060 for occasional transcoding (just to 1 TV in my house, it isn't shared outside).
     2. Use my keyboard/mouse on the host to control these VMs while using evdev to send input to my passthrough VM.
     3. Ability to run multiple non-passthrough graphical VMs at the same time.

     I'm fairly sure from reading various bits that QEMU can work like this (similar to running a VMware host on Windows, which I'm passingly more familiar with). What I'm not sure of is whether the minimal desktop provided by Unraid GUI mode would allow this setup. If so, I'm assuming I'd need to make manual edits to the XML, since the Unraid VM config doesn't offer an option like this (see the sketch after this post).

     If I'm thinking down the wrong lines, I might look into a nested VM setup for my less-used machines (I like to dabble; I also want some special-purpose VMs with locked-down networking for financial stuff, etc.) and attach that to the 1060, but that would:

     * kill the usefulness of evdev, from what I can see, as it looks like evdev only handles going from 1 host to 1 VM at a time rather than switching between multiple VMs (or am I wrong? I've only started reading about evdev)
     * remove the ability to use the NVIDIA driver for Unraid Dockers

     My other possibility is to use a Proxmox host (with an added desktop environment) and virtualize Unraid, which might actually be the better answer. I'd still lose the GPU transcoding, but I could make an input/control scheme that works well for my plans. Just trying to see if I can make the current setup work without adding that step.
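     For reference, a minimal and untested sketch of the kind of XML edit I'm imagining, assuming the GUI-mode desktop exposes an X session at :0; the VM name is a placeholder:

        # Hypothetical: open the domain XML of a non-passthrough VM for editing
        # ("SomeUtilityVM" is a placeholder name).
        virsh edit SomeUtilityVM

        # In the editor, the idea would be to replace the default
        #   <graphics type='vnc' .../>
        # element with a local SDL display on the host's X session, e.g.:
        #   <graphics type='sdl' display=':0.0' xauth='/root/.Xauthority'/>
        #   <video><model type='qxl'/></video>
        # Whether QEMU can actually open that display under the stripped-down
        # GUI-mode desktop is exactly the open question above.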
  3. It's DDR4 3600, i.e. 1800 MHz. I don't see where I've put 3600 MHz (I have my system specs in my profile but don't call out the RAM speed).
  4. @Constuctor ... thanks, going through it now, will reply back. EDIT: I'll have to wait for my disk activity to finish in an hour or so before rebooting to verify the BIOS settings. C-states were enabled. I'm going to have to run through the night with them disabled and see. At this point, long-term, I'd actually rather it turn out to be the PCH slot issue, given I was hoping to keep the power savings.
  5. Note: I added a system component summary to my profile; obviously it's also in my diag file.
  6. I recently (a couple of weeks ago) started migrating from my old system to a new case and more hardware so I could start using VFIO VMs. The new system seems stable for hours but will often spontaneously reboot in the middle of the night, causing an unclean shutdown. Some searching suggested this can be an issue with PCIe ASPM, so I disabled that in the BIOS. It's also possible the issue is due to my PCIe 2.0 LSI HBA moving from the x8 middle PCIe slot to the bottom x4 slot (which may be controlled by the PCH). I put a GTX 1060 in the middle slot for use as graphics for non-gaming VMs (the top slot is a 3080 Ti for games). Unfortunately it's a 2-slot card and I can't put it in the bottom x4 slot because it crushes the USB headers and front-panel connectors.

     Questions:
     * Is there any chance ASPM is still an issue if it's off in the BIOS (I haven't disabled it in any config files)? See the quick check sketched after this post.
     * How likely is it that this would be caused by being on the probably-PCH x4 slot?
     * Is there anything I can proactively do to force the various possible causes while actively using the system? (see next)

     Note: I thought I had enabled syslog to catch previous errors, but simply enabling syslog on the server didn't save any files in the syslog share I created. So all I have are the (anonymized) diagnostics attached. I've also saved non-anonymized diags, matching the anonymized ones here, if Limetech requests them.

     The biggest problem I'm having tracking this down is that these reboots never happen while I'm active on the system. In fact most don't even occur while I'm awake. I just wake up hoping the system survived after changing something and usually find it has restarted overnight. Sometimes I'm doing literally nothing on the system when it happens; other times I have active torrents to keep it busy through the night (trying to force an active condition). I thought I had it stable, so last night I let a couple of long disk activities run; they had not completed before the system restarted. What was running last night:

     * Copying 1.75TB from a 4TB unassigned drive. It was going great guns when I went to sleep, with about 1TB completed. It made it to about 1.45TB when the system rebooted about 5 hours later.
     * A preclear of a different 4TB unassigned drive via binhex's Docker. I was doing this both to verify the disk won't throw errors on me (it's old but lightly used) and to stress the system. I'm unsure whether that process succeeded; today I decided to just nuke the old empty partitions and have Unraid format it (in process at the moment).
     * An extended SMART test on the 4TB drive that was being precleared. Unsure if this was an issue, but I note the UI reports the test was aborted by host. I'd be surprised if that SMART test was still running 4-5 hours later, but I'm reporting it for completeness. I checked after downloading diagnostics, which IIRC fires off a SMART query on all drives, so this may be a non-issue, as later on the page I see "Self-test execution status: The previous self-test routine completed without error or no self-test has ever been run."

     I'm actually a bit surprised the system rebooted under those conditions, as most times it seemed to happen at idle or right after starting the array after the server had been left online for hours with the array stopped. (Squid or whoever, I "hid" the issue I opened when this started that showed that happening, but I'm assuming LT can still see the hidden issue if it helps with diagnosis. That issue may or may not be related, as I opened it due to an MCE error I was told was safe to ignore, but it did show a reboot as described above.)

     Prior to this I felt disabling ASPM had been successful for a couple of days. I'm now sort of thinking I might have had both an issue with ASPM as well as an issue running the card in the bottom slot, but I'm asking for a second opinion along with ideas on what I can do to test. For now I'm planning to temporarily remove the 1060, put the HBA in the second slot (when my 4TB format completes, as I already fired it up), and run for a few days before getting hopeful I've fixed the issue. But that's a lot of blind-faith trial and error, so help figuring out testing procedures would be very appreciated. If that works I'll get to go on a hunt for a decent 1-slot GPU for the bottom slot, something I'd hoped to avoid.

     (Updated: deleted the anonymized diags from the post out of internet paranoia, as I'm hopeful the issue is handled. If LT wants the non-anonymized ones to look into it for me, just buzz me.)
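     For reference, a minimal sketch of double-checking ASPM from the OS side rather than trusting the BIOS toggle alone, assuming a stock Unraid flash layout (the pcie_aspm=off parameter is standard Linux, not Unraid-specific):

        # From the console: see whether the kernel still reports ASPM enabled
        # on any device even with it turned off in the BIOS.
        lspci -vvv | grep -i aspm

        # Belt and suspenders: force ASPM off at the kernel level by adding
        # pcie_aspm=off to the append line in /boot/syslinux/syslinux.cfg,
        # e.g.  append pcie_aspm=off initrd=/bzroot
        # then reboot for it to take effect.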
  7. I've started using Discord to push notifications to mobile and am really appreciating it, especially as a way to see past notifications. One thing that would be very handy: if the notification is clickable in Unraid (i.e., clicking the notification takes you to a page like Fix Common Problems), add that destination as a link in the Discord notification. If the notification bot doesn't allow surrounding the existing text with a link, just append the link to the message. I'm not worried about it being pretty, though being compact is nice. If the notification is just informational and doesn't go to a specific page (not sure if that ever happens), a generic UI link might also still be handy.
  8. Perfect, glad to know what I did is the expected method. Yeah, I'm on the default Docker from the Unraid interface (i.e., the one that comes here when clicking "support" for the Docker), so I'd assume I'm on stable and it has the tone mapping. Regardless, it definitely worked. The video I test with is horribly washed out without tone mapping when viewed on my phone, and it has a very high bitrate so I can stress-test transcoding. PS. While I have your ear, I'm trying to configure mylar3 using your image. The "support" link for that goes to the generic mylar3 page rather than to a thread here, if you maintain one. And it doesn't enable bridged networking by default like most of your images do. Easy fixes, obviously; just something I thought you might want to tweak next time you're looking at the mylar3 Docker.
  9. Workaround: "Enable Tone mapping" with Unraid 6.9.1 and an Nvidia decoder. I figured I'd write this up for posterity in case anyone else wants to set it up.

     My system:
     * Unraid 6.9.1 (latest stable)
     * i7-3770K, 32GB RAM
     * GTX 1060 6GB
     * Latest Jellyfin / linuxserver Docker

     Everything worked as expected, but when I "Enable Tone mapping" and then try to watch a file with HDR data I get a player error of "This client isn't compatible with the media and the server isn't sending a compatible media format." (No error when "Enable Tone mapping" is not checked, and no error when watching a file without HDR data.)

     I did some searching prior to posting and found posts like this one on Reddit. Following that post, I found the expected directory doesn't exist in the Docker I used (latest as of 3/26/2021 for Unraid from linuxserver). The following fixed it:

        mkdir -p /etc/OpenCL/vendors
        echo 'libnvidia-opencl.so.1' > /etc/OpenCL/vendors/nvidia.icd

     As soon as that was done, the files that failed to play were able to play, and the tone mapping was evident via transcoding compared to when the option was turned off. I don't know if it makes sense to add that directory and entry by default, but if not, it might be a good thing to append to the Nvidia install hints.
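     One caveat: those two commands live inside the container's filesystem, so they're lost whenever the container is recreated (e.g. after an image update). A minimal sketch of re-applying them from the host, assuming the container is named "jellyfin" (adjust to match yours), for example via the User Scripts plugin:

        # Re-create the OpenCL vendor entry inside the running container.
        # "jellyfin" is an assumed container name.
        docker exec jellyfin mkdir -p /etc/OpenCL/vendors
        docker exec jellyfin sh -c "echo 'libnvidia-opencl.so.1' > /etc/OpenCL/vendors/nvidia.icd"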
  10. Heya, yeah, but I got it going already by generating via Nitrogen. Thanks for your help on Discord the other night (Hyde here).
  11. Thanks for the reply (and the Docker). No, it's not necessary to run the map generator to play, but it is necessary if you want to play a fully random map. That was my goal: a large random map that my friends could explore, with a server that keeps running even between logins. It's 7 hours later and it's still running, so I'm going to reinstall, get it working with defaults, then generate the map on my gaming PC and see if I can find a way to move it to the Docker.
  12. Newb preface: this is my first game server Docker of any form, and my Docker knowledge is limited to running a Jellyfin / Radarr / Sonarr / etc. server for a few months.

     Issue: the 7DtD server is taking a VERY long time to do map generation (hours). Reading up, it seems the map generator utilizes GPUs these days. My server does have a 1060 6GB for transcoding, but I'm assuming the Steam Docker doesn't have GPU access set up, and the CPU in my Unraid box is a bit underwhelming (3770K).

     What I'm wondering:
     * Can I create the map on my gaming PC (3800X/3070) and then move it to the container afterwards? (See the sketch after this post.)
     * Where is the map generated (as in, what path)? I'm guessing it's all in memory right now, as I can't find any significantly large files in the container's appdata.
     * Is it viable to map the container's save-game folder outside of the Docker so that I can wipe the Docker as needed without losing save games? (This might already be done; I just don't have a way to check yet since it's still running setup.)
     * I'm not even sure the Docker is working yet. Is there a setup guide that goes into the Steam Docker basics step by step? I'm piecing things together from this thread as best I can. Links/guides appreciated. If none, just ELI50 please (explain it like I'm 50 ... since ... I'm 50). Thanks!
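     A rough, hypothetical sketch of the "generate elsewhere, then move it in" idea; the container name and the save path inside the image are assumptions, and finding the real path is exactly one of the questions above:

        # Look for where the image keeps generated worlds (path unknown to me,
        # so search for the game's usual "GeneratedWorlds" folder name first;
        # "7dtd" is a placeholder container name):
        docker exec 7dtd find / -maxdepth 6 -type d -name 'GeneratedWorlds' 2>/dev/null

        # Then copy a world generated on another PC from the Unraid host into
        # the container (both paths are placeholders):
        docker cp /mnt/user/transfer/MyRandomWorld 7dtd:/path/to/GeneratedWorlds/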
  13. Thanks. That's exactly the info I needed to put "soon" in perspective.
  14. Generally speaking, is there an estimate of how long until 6.9.0 hits stable? I'm relatively new around here (6.8.3 was my first version). I just happened to decide to install my unused 1060 into my Unraid server a couple of days after the major change to getting it working on Unraid. I'm hesitant to go to the beta release, but if a stable 6.9.0 is likely a ways off I may go ahead and do it. I just haven't gone through the upgrade process on Unraid ever, and was planning to hold off on beta testing anything until I had gone through a stable upgrade at least once. PS. I'm familiar enough with software dev release processes that I'm not asking for a firm date, just a "probably a couple weeks" vs. "a couple months" vs. "no way to know, could be a year" type of estimate.
  15. Seeing this here as well today. It seems to be something I can live with, so I'm just documenting what is going on.

     * It started with my binhex-deluge Docker reporting "not available"; all others showed as updated (I did an update-all a few days ago).
     * I updated linuxserver/letsencrypt to linuxserver/swag.
     * I restarted my server to fix a web UI glitch where one of my CPU cores was showing 100% even though htop showed very low usage.
     * After the restart I started up my Dockers, and now ALL except linuxserver/swag are showing "not available" (swag was the only one I had manually updated today).
     * I clicked "check for updates" and "not available" is gone (as in, I see either "up-to-date" or "update ready" for all Dockers).

     I use Pi-hole and it's the next-to-last Docker to start in a "start all" sequence. Sure enough, as soon as it was active, the last one in my sequence (swag) showed a status. So I have to wonder if folks are seeing this issue because one of their Dockers needs to be running for the update check to work.
  16. Problem: Deluge plugin selection preferences not saving state between startup/shutdown.

     Background: Setting up Deluge + Sonarr + Radarr for the first time (new to Unraid and Dockers, but not completely new to Linux; the console is familiar to me). I've got things running: Sonarr's happily plugging away, Deluge is downloading on command, and Radarr will be next.

     Detail: The problem I'm seeing is that the setup I'm following needs the "Label" plugin. That works fine when enabled, and Sonarr is sending the labels. But whenever I restart the Deluge Docker, it comes up with the Label (and Extractor) plugins disabled again, and I have to manually turn them back on. SOME preference states are saved fine between sessions; my Queue settings are definitely at the values I set rather than the defaults. It seems to just be the plugins that aren't staying enabled. Any hints as to what I need to do?

     PS. I found a post on a non-Unraid forum where the user was recommended to use the Connection Manager's "Stop Daemon" to stop the daemon (in the case of Unraid this would be instead of using the "Stop" command on the Docker screen) so that preferences get saved. However, this gives an error, and since other preferences are saving, it doesn't seem to apply here.
  17. Necro, sorry. I almost posted on the original thread ... https://forums.unraid.net/topic/80192-better-defaults/ ... but this seems the better place to take the idea forward.

     I come from a long history of working on similar projects (not recently, but as far back as working at Cobalt before they were bought by Sun). I completely agree: running Dockers/VMs as unprivileged users, and running commands through sudo with an unprivileged admin account, were things I just sort of expected to find when I started with Unraid recently. There have been a couple of comments from Lime that make me worry:

     * It's "an appliance", so lower security than a custom-built server is expected. Yes, but Internet appliances have been around a long time, and there are appliances going back to the late 90s that did more in regards to user privileges (the Cobalt Qube and RaQ were actually built to fill the place a modern Unraid system fills for a user today). Dockers didn't exist back then, but the concepts aren't that much different from breaking out of a virtual web site and gaining access. VMs existed back then but weren't used in the same ways they are now.
     * "It shouldn't be possible to 'break out' of a container and as far as I know this isn't possible. Otherwise it will be a security vulnerability of Docker itself." Sure, and the same can be said of running a web service as root: if the service is vulnerable and broken, the vulnerability is the fault of the service. But it's the job of the host to use sane defaults that proactively limit the damage done if it happens. Not to the point of making the system arcane to work on, but sudo and wheel groups aren't arcane; they've been well documented for decades, and these days it's actually weird to not need to use them. And as pointed out in the original thread, there have been at least proof-of-concept attacks against Docker. At that point companies have 2 directions to go: "Docker's fault, sorry" or "no worries, even if it happens we've done the work to protect you".

     I DO know the pain that can be caused to your support group if you change some basic paradigms. Going back to Cobalt for a second: our first generation of appliances were not secured well and we had to make some core changes. That changeover was full of support calls. But this was back when things weren't well documented on the Internet and most users had never even heard of Linux. Once that process was done, support's life, as well as the consumers', was much, much easier, because most day-to-day problems simply didn't occur anymore.

     In reading a number of replies from Lime it really feels to me like there is an inherent inertia involved, because of the hesitance to change a long-established product. I get that. I get it well enough that I'm not expecting these changes to be made, but I can still hope for them. I can make some of the changes I feel are needed (disabling Telnet, disabling SMB v1, changing the root password) very easily. Other things, like making Unraid support a non-privileged user to run specific highly integrated tasks? No way.

     And I'm a brand-new user. I bought my Pro license a day ago, after configuring everything and knowing what I was getting into. So I'm 100% not saying Unraid isn't a good product; I'm just saying the security could improve. I would never expect these types of core changes in a point patch, and I don't expect them in the next major release either. I'm just lending a voice for considering them when the major release after that is defined.
  18. New Unraid system. I just converted my LSI 9211-8i to the IT firmware. While researching how to do that, I came across various threads from 2-5 years back stating that the newer LSI firmwares don't work with TRIM under the drivers in Unraid. I'm going with the assumption that I'm fine putting the SSD cache on the motherboard controller (Z78 chipset, SATA III 6Gb/s). But if that adds a noticeable load and/or lowers transfer rates, I might re-wire and check whether the LSI TRIM issue has been fixed.
  19. Ouch. That nixes this system for me then, until I figure out an alternative card. OK, thanks. Edit: found an M1015 card and a couple of fan-out cables; they're on the way now. The reason I originally asked was that I wondered if splitting parity across 2 controllers might help with writes. But looking at my motherboard, the onboard Z68A controller is SATA-2, so if I need to avoid the Marvell I'll be putting everything on the M1015.
  20. I'm gathering up the remaining bits to convert my previous desktop into a first Unraid server. The questions I've got before ordering my last stuff:

     * Dual NIC config: I'm hoping there's no big issue with utilizing the dual NICs on the Unraid box, 1 dedicated to my desktop PC and the other going to the home router for media? The dedicated link to the desktop would be on a different subnet.
     * Upgrade to 2.5Gb: My new desktop has a 2.5Gb/s Ethernet port. My main use for the connection to this system is going to be storing finished photos and ongoing 3D projects, and possibly a Git server. Is it worth getting a cheapish RTL8125-based 2.5Gb PCIe card for the Unraid server for the dedicated connection? I'm keeping costs low for this initial build, so I'm not going to be able to buy 2x 10Gb solutions; I'm mostly asking how Unraid performs with the 8125 driver, which seems to be pretty new. I'm guessing I'll be alright with the split 1Gb solution I already have, just curious.
     * Which SATA controller to put the parity on: The Unraid server (specs listed below) has 2 separate SATA controllers, the built-in chipset controller and a Marvell controller. Is there a recommendation to put both parity drives on the same controller, or split them between the 2? Similar question for connecting the cache SSD.

     Unraid server parts:
     * MSI Z68A-GD80 (B3)
     * i7-3770K, 32GB RAM
     * LSI SAS9211-8i controller (skipping motherboard controllers based on feedback)
     * 2x HGST He8 8TB 7200rpm for parity (overkill for 2x data drives, but open for later expansion)
     * 2x WD Red 8TB 5400rpm for data
     * 1x Samsung EVO 860 1TB for cache/dockers
     * SanDisk Cruzer Micro 8GB USB 2.0 drive
     * GTX 1060 6GB, no other PCIe currently
     (Other older hard drives not listed since I'm going to test them significantly before re-using them.)

     Obviously the MB and CPU are older. If things go smashingly I may upgrade the Unraid server hardware later on, but right now I want to get familiar before deciding on the cost/benefit.