iamatesla

Members
  • Posts: 13
  • Joined
  • Last visited
  1. Yes, I read that that can occur - but my only manual change to the default XML was to adjust the GPU config section so pass-through would work properly. That doesn't explain the other massive XML changes that happened between the "GUI change" version and the original. My 1st edit of the default XML:

     <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev0'/>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
       </source>
       <alias name='hostdev1'/>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
     </hostdev>

     The XML AFTER a single "web GUI" edit unrelated to the GPU at all:

     <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
     </hostdev>

     Note that the rebuilt version has dropped multifunction='on' and moved the GPU's audio function (0x1) onto its own virtual slot (bus 0x05).
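     A minimal sketch of putting the grouped layout back by hand, bypassing the GUI so it can't regenerate the XML ("Windows 10" is an assumed VM name - check `virsh list --all` for yours):

     # Sketch: re-apply the multifunction passthrough layout directly in libvirt.
     # "Windows 10" is an assumed VM name.
     virsh list --all          # find the exact VM name
     virsh edit "Windows 10"   # opens the VM's XML in $EDITOR
     # In the editor, put both <hostdev> entries back on the same guest bus/slot
     # (functions 0x0 and 0x1) and restore multifunction='on' on the function-0x0
     # <address> line, then save.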
  2. I've spent about two solid days trying to get a Win10 "daily driver" VM to work consistently with GPU and other device pass-through, and I've experienced just about every issue in the book. I finally got it sort of working - though I had to abandon passthrough of an AMD 5700 XT card in favor of a GTX 1080 - but now I've run into a repeatable, consistent problem which completely breaks my VM: if I make a single change to the VM via the unRAID web GUI (while the VM is off), unRAID significantly rewrites the XML version of the VM config and breaks the VM (it fails to boot).

     Info on my setup:
     - Ryzen 3700X CPU
     - MSI Tomahawk B550 mobo
     - Quadro P2000 in the main PCIe slot - for unRAID terminal output and Plex transcoding
     - GTX 1080 in the 2nd PCIe slot - used for Win10 VM passthrough
     - OS is a somewhat old Win10 LTSC ISO that will NOT boot with UEFI - for some reason it only boots with SeaBIOS - but it does work.

     So I make a VM by clicking the Win10 VM template and entering all my info - the VM works great - see image below. Now, if I stop the VM, edit one item via the GUI - let's say removing one of the cores as in the screenshot below - and try to restart, the VM breaks and will not boot at all. The vDisk remains OK, and if I make a brand-new Win10 VM with the template and the same settings as the original working VM, all is good and it boots again.

     I noticed some changes in the XML view between the single-change config and the "pure" config, so I saved the two XML files and diffed them in VS Code (see the sketch below for how). They're hugely different - I have no clue what actual line or lines are causing the VM to fail to boot, but why would the configs change so much from only a single change in the GUI? For instance, every reference to "alias" has been removed from the config where I changed one thing from the web GUI.

     Diff image 1:
     Diff image 2:

     The two full XML files should also be attached if anyone wants to diff them separately. Any reason this would be happening? I literally can't make changes to the VM after it's been created without it failing to boot on the next startup.

     workingXML.xml removedOneCPUWTF.xml
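     For reference, the two dumps can be captured and compared like this (a sketch; "Windows 10" is an assumed VM name):

     # Sketch: snapshot the VM XML before and after a single GUI edit, then diff.
     virsh dumpxml "Windows 10" > workingXML.xml          # while the known-good config exists
     # ...make one change in the unRAID web GUI, then:
     virsh dumpxml "Windows 10" > removedOneCPUWTF.xml
     diff -u workingXML.xml removedOneCPUWTF.xml          # or use VS Code's diff view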
  3. I enabled CSM/legacy in my BIOS - it still has the same problem. Maybe there's another hidden BIOS setting I'm missing, but who knows. I did run the memory check built into my MSI motherboard BIOS - it found no errors. I have a 3rd NVMe in the box now with the copied/recovered cache-drive data on it; it's mounted and working again. I'm NOT going to enable the Win10 VM this time, and I'll make sure the system is stable for a few hours before more testing.

     My current theory is that increasing the Win10 VM disk size from 30GB to 80GB somehow led to a memory/corruption issue. Is there any way just changing 30 to 80 in the VM GUI could have a corrupting effect on the cache file system where the VM disk lives? (A sketch of what that resize amounts to is below.) If the system stays stable, I think the next step is to burn my current Win10 VM image and config to the ground and start over.
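     As far as I can tell, the GUI resize button amounts to roughly this (a sketch; the vdisk path is an assumption based on the default unRAID layout):

     # Sketch: what the GUI "resize vdisk" roughly does under the hood.
     # The path is an assumed default - substitute your own.
     qemu-img info /mnt/user/domains/Windows10/vdisk1.img        # confirm format and size
     qemu-img resize /mnt/user/domains/Windows10/vdisk1.img 80G  # grow the image file
     # Growing the image only extends the file; the guest partition still has to be
     # expanded inside Windows. The resize itself shouldn't touch btrfs metadata.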
  4. This is neat - I tried to run Memtest86+ from the boot menu before unRAID loads... and I get a blank screen for ~10 seconds, then it loops back to the boot screen as if nothing was clicked. (Possibly because the bundled Memtest86+ only runs under legacy/CSM boot rather than UEFI - I'm not sure.) unRAID itself does boot. I'm going to rebuild the NVMe as I did at the start, but leave VMs disabled and see if the problem reappears. If it doesn't, I think that means my VM is somehow causing the corruption. And yes - I freshly formatted that 2nd NVMe just an hour ago, before I moved the corrupted drive's data over to it.
  5. Unfortunately, I spoke too soon. Things were mostly good; I was watching the dockers when all of a sudden I started getting "read-only file system" errors in the docker logs, and the containers wouldn't start back up. I tried to reboot, and now the completely different NVMe drive is acting exactly the same way as the first - it won't mount. Nothing working again... A new log should be attached. tower-diagnostics-20210701-1136.zip

     No clue what to do now. I could keep rebuilding the drive as before, but I'm afraid this problem is going to keep appearing. I was trying to get the Win10 VM to work... maybe something is badly wrong there that keeps causing this, and if I just nuke the VM and start from scratch it might be OK? (First triage steps are sketched below.)
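     A minimal triage sketch for a pool that has flipped read-only (the device node /dev/nvme0n1 is an assumption - check `lsblk`):

     # Sketch: first-pass checks when a btrfs cache remounts read-only.
     dmesg | grep -iE 'btrfs|nvme'     # kernel errors around the remount-ro event
     btrfs device stats /mnt/cache     # per-device read/write/corruption counters
     smartctl -a /dev/nvme0n1          # NVMe health: media errors, wear, temperature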
  6. Thanks! That was helpful. I *think* my initial drive-corruption issue is resolved. I was able to mount the problem NVMe drive with the command you recommended. I ended up installing an extra NVMe drive in my server, mounting it, and copying the data over from the old drive. I then installed the new drive as the only cache drive, and it's working as expected with no errors. My dockers are back up, luckily! When I have more time, I might go back, re-format the original 2TB NVMe, copy the data back over, re-install it, and try it as a cache drive again. I don't think the drive itself caused this problem.

     I'm still hitting just about every problem I can imagine trying to get my Win10 VM to actually work with a passed-through GPU, but that may warrant a separate post. The core issue there seems to be unRAID not wanting to completely release the GPU to the VM. I tried moving the card to the motherboard's 2nd GPU slot, etc., with no luck. Win10 worked with the passed-through GPU right up until I installed the actual AMD drivers... I've got an Nvidia Quadro in slot 1 that should be handling everything else for unRAID...
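     For later readers, the recovery amounted to something along these lines (a sketch only - the exact mount options came from the earlier reply, and the device names are assumptions):

     # Sketch: read-only rescue mount of the damaged pool, then copy to a fresh drive.
     # Device nodes are assumptions - verify with `lsblk` before running anything.
     mkdir -p /mnt/broken /mnt/fresh
     mount -o ro,usebackuproot /dev/nvme0n1p1 /mnt/broken   # damaged pool, read-only
     mount /dev/nvme1n1p1 /mnt/fresh                        # freshly formatted drive
     rsync -aHAX --progress /mnt/broken/ /mnt/fresh/        # copy data, preserving attrs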
  7. I've been trying to get a Win10 VM to work on unRAID all day and just hit a major issue out of nowhere with the cache drive. I had just gotten the VM working with a passed-through 5700 XT GPU. I was in the Win10 VM (it lives on a brand-new 2TB NVMe drive), and I had just resized the vdisk (using the unRAID GUI button on the VM tab) from 30GB to 80GB so I could install the AMD GPU drivers. I used the Win10 disk manager inside the VM to expand into the new storage, installed the GPU drivers - all good. Then I restarted the VM and boom - the entire box crashed and showed a segfault on the separate unRAID terminal monitor I had connected. I hard-rebooted and the system came up, but now the cache drive won't mount and is giving the following errors:

     FYI - the pic is somewhat confusing. I manually tried to play with the fs type, as I thought my cache might have been xfs and unRAID was trying to mount it as btrfs, but changing it and re-spinning up the array had no effect on the core problem (a quick way to check the actual fs type is sketched below). I'm attaching the system logs I have. Any help is greatly appreciated! tower-diagnostics-20210630-2101.zip
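     One way to settle the xfs-vs-btrfs question without trial-and-error in the GUI (a sketch; the device node is an assumption):

     # Sketch: identify the real filesystem on the cache device.
     blkid /dev/nvme0n1p1     # prints TYPE="btrfs" or TYPE="xfs" plus the UUID
     file -s /dev/nvme0n1p1   # second opinion from the superblock signature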
  8. Holy shit! Man, you totally just solved all my problems! I had no idea 4k was the devil! I'll just delete it immediately, return my 4k TV, and go pick up a 1080p model at Goodwill ASAP! /s

     I'm sorry to come off as mean in this response, I really am. This is honestly a pet peeve of mine, and you kinda triggered me hardcore - I'm just SO tired of posting to forums about technical topics only to be told "don't do what you're trying to do". It's not a productive or valid answer in a world of ever-advancing technology. It's an excuse. A cop-out. In fact, I have seen the post you reference - just not in detail, because it turned me off immediately. STOP saying 4k is evil and don't do it. It's like saying drugs are the devil and don't do any of them. How about explaining the REASON WHY (based on the data I posted) and the technical paths forward? 4k is the future whether a specific platform is ready for it or not.

     I do now see several relevant topics in the post you reference. This is a complex technical problem, and my entire post is centered around understanding the specific faults in my system for 4k streaming and transcoding - I need help with this because it's hard.

     1. Is the audio the source of my lag? The article references DTS and notes about manually removing certain audio formats. Is this applicable to the content I'm trying to play, as referenced in my original post?
     2. What's the best way to separate libraries for 4k content vs 1080p content in Plex? Any good guides/tutorials on doing so? I really hate to have to do this, but if it's the only technical solution, so be it.
     3. 4k is the future. What are the missing links for Plex/Emby/unRAID? What still needs to be done, or is in progress, to make 4k streaming and transcoding easy?

     Ranting aside - if you're going to reply, how about instead providing an explanation, relevant to the data I listed in the original post, of what the root of the problem actually is.
  9. Alright, I could use some help identifying a bottleneck in my system.

     Problem: I can't play back a 4k (HEVC Main 10 HDR w/ TrueHD 7.1) movie from my unRAID server to my new (Roku-enabled 4k TCL) TV, either directly OR via transcoding, without significant buffering/lag.

     Important info and specs:
     1. unRAID server and TV are connected over the local LAN, direct-wired at 1Gbps over CAT6.
     2. Using Plex with GPU transcoding via an Nvidia Quadro P2000 - it should be capable of transcoding multiple 4k streams.
     3. Server CPU is an E3-1246 v3 @ 3.5GHz with 32GB ECC RAM.

     I really have no idea where to begin debugging the direct-playback buffering issue. The TV plays fine for several minutes, then either buffers or says the connection to the server was not sufficient. During direct playback the server CPUs stay low and look just fine - but I still get buffering! Plex shows the stream using ~89Mbps, which my network should handle no problem. Possibly an issue with the TV not being able to handle the TrueHD 7.1 audio?

     Now for transcoding - the system appears to be using the GPU properly, but maybe it's trying to use the CPU for audio transcoding and failing there? If I try to transcode a 4k movie at all, the CPU spikes to basically 100% and I get buffering on the TV, even though I bought a $300 GPU to prevent exactly this. Running "watch nvidia-smi" shows the GPU being used for the transcode, as shown below. What gives? (A quick way to rule the network in or out is sketched below.)

     Here are screenshots of transcoding from the 4k source down to 720p (4Mbps):

     Thanks for the help!
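     To rule the network in or out, a raw throughput test between the server and something on the TV's side of the wire is a reasonable first step (a sketch, assuming iperf3 is available on both ends; the IP is a placeholder):

     # Sketch: verify the wire can actually sustain well above the ~89Mbps stream rate.
     iperf3 -s                      # on the unRAID server
     iperf3 -c 192.168.1.50 -t 30   # on a laptop plugged into the TV's switch port
     # Sustained results well above 100Mbps would point the finger away from the LAN
     # and toward the TV's decoder or the TrueHD audio track.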
  10. Alright, this has been ridiculous, but I finally got a successful boot. The problem was not the BIOS on my server, or any of the other random stuff people suggested on other forums related to the server itself.

      As suggested above, I tried a completely different PC and still got no boot - the same error, actually: "This is not a bootable disk", or occasionally "Missing operating system". Then I had my roommate use his own flash drive, download the tool, and image it, all completely separate from my setup (he runs Win10 same as me - both OSes updated). IT WORKED! He did the exact same thing as me, but his tool and flash drive booted in my NAS immediately... so WTF. Next, I took his working flash drive, re-downloaded the installer, and ran it exactly as he did, but on my computer - FAIL! Then I had someone else download the tool from scratch on their Win10 PC and load unRAID onto one of the drives that had previously failed for me - it worked fine!

      The trick: I used the unRAID flash-drive tool on a completely different Windows 10 PC.

      I actually tried a ton more combinations/permutations across probably 6+ flash drives on different friends' computers, with CRAZY mixed results from the install process. Sometimes drives would throw errors in the creation tool. Sometimes it would appear successful but fail to boot in the NAS. Sometimes it would work fine. I have NO idea why any of this happened and could not detect a consistent pattern. Some obscure bug in the drive-creation utility? Some problem with the way the tool and/or the manual process formats the drives? That still doesn't explain why the manual process failed for me as well (a tool-free manual method is sketched below).

      Anyway... the only reason I'm still giving unRAID a shot at this point is purely my preexisting hatred of FreeNAS...
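      For anyone stuck in the same loop, the manual method can also be driven from a Linux box, which takes the Windows creation tool out of the picture entirely (a sketch; /dev/sdX1 is an assumed partition - this is destructive, so triple-check with `lsblk` first; the script name reflects what shipped in the zip at the time, so check your download):

      # Sketch: manual unRAID stick prep from Linux (run as root).
      mkfs.vfat -F 32 -n UNRAID /dev/sdX1      # FAT32 with the required UNRAID label
      mount /dev/sdX1 /mnt/usb
      unzip unRAIDServer.zip -d /mnt/usb       # the zip from the unRAID download page
      cd /mnt/usb && bash make_bootable_linux  # installs the syslinux bootloader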
  11. Tried removing the '-' character (renaming the EFI- folder on the flash drive) already - it didn't do anything; same problem. Will try booting it on a different system in a bit.
  12. No, nothing "unRAID" shows up. The mobo boot screen just shows something along the lines of "No Bootable Device Detected" and sits there at a black screen, as if no boot media were present at all.
  13. OK, well, I apologize if this post sounds angry... it's because I am... especially about a product that expects me to pay a fee. So far I've wasted the last 2+ hours of my Sunday trying to get unRAID to boot in my NAS at all.

      My hardware:
      - MOBO: SuperMicro X10SLM+-LN4F
      - CPU: 4th-gen i3
      - RAM: 16GB ECC

      I've been using FreeNAS for the last year or so. It boots up and runs from USB drives just fine on this hardware. Unfortunately I've had terrible luck with the FreeNAS software itself - I can't even get SMB shares to work properly with FreeNAS - so I figured I'd try unRAID. Now, I HAVE LITERALLY TRIED ALL THE NORMAL STEPS to make this boot, so don't ask me to try a USB 2.0 port vs 3.0, because I did... yes, the system is 64-bit... etc. There is a more fundamental underlying problem here, especially with so many other posts about this problem I've been reading through, such as this one from yesterday!

      I know this is in vain, because everyone will invariably post non-useful suggestions in an infinite loop, but here it goes. What I've tried:
      - Multiple different USB drives: varying sizes, both branded high-quality and unbranded questionable-quality
      - The drive-creation utility AND manual mode (where you need to run the Make Bootable script)
      - Multiple USB drive formatting methods and settings: UEFI/no UEFI, FAT vs FAT32, etc.
      - Multiple USB ports on my mobo - both USB 2.0 AND USB 3.0
      - Adjusting mobo BIOS settings to boot the USB drive in various orders - ALL my USB drives show up just fine in the mobo BIOS boot order
      - Loading a Linux ISO onto a USB drive - the system booted to Linux as expected (and it boots FreeNAS immediately, just fine, from USB in a normal 3.0 port)
      - The troubleshooting steps here (under the "USB won't boot" section): https://wiki.unraid.net/USB_Flash_Drive_Preparation
      - Pausing the boot with the boot-select screen and trying all the options (USB Drive, USB FDD, USB ZIP)

      A quick way to check whether a stick actually got a boot sector is sketched below. But seriously... I don't understand how a PAID product can have so many users with this issue. I tried unRAID because normally you pay extra for products to JUST WORK. So, what's going on here? Can anyone shed some light?
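      Since "This is not a bootable disk" usually means the boot sector or active flag never made it onto the stick, one sanity check from any Linux environment (a sketch; /dev/sdX is an assumed device node):

      # Sketch: confirm the stick actually received a boot sector and active partition.
      file -s /dev/sdX    # a good stick reports a DOS/MBR boot sector
      fdisk -l /dev/sdX   # the FAT32 partition should carry the bootable (*) flag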