jlruss9777

Everything posted by jlruss9777

  1. POSSIBLE LOSS OF "EXEC" OPTIONS ON MOUNTED SSD

     Not much luck today; this is the second time typing this out, the first attempt didn't post.

     After my Unraid server froze and I had to do a hard reboot (and, probably coincidentally, after the last UD update on the 16th), I seem to be having an issue where my Plex Media Server "media scanner" crashes every night around 2 am. I am also now having playback errors, etc. After talking with the Plex guys on their forum and looking at the Plex logs and crash reports, they said it looks like the disk the data is on "is no longer mounted with exec options."

     So my question is: how can you tell whether a mounted disk still has exec options (which I assume means execute permission, so Plex can run whatever process or codec it needs), and if not, how do you re-establish them on the disk? This setup has run with no problems for years, so I don't think it is the setup itself; I figure it's probably corruption or an error from the hard reboot, though the disks and parity have all come back okay. Below is the relevant post from the Plex forum:

        "The bigger issue is that wherever the PMS AppData is located it is no longer mounted with exec options. As you are using the 3rd-party unassigned drive plugin, I wonder if something changed recently with it. You could try reaching out on the Unraid forums to see if the issue is known there and can be resolved with another mount option.

        Jan 17, 2021 02:01:30.588 [0x14c561ecd700] INFO - CodecManager: obtaining EAE
        Jan 17, 2021 02:01:31.392 [0x14c561ecd700] ERROR - Unzip: could not set executable bit on output file
        Jan 17, 2021 02:01:31.392 [0x14c561ecd700] ERROR - CodecManager: failed to extract zip

        The other thing you could do is upgrade to Unraid 6.9 RC2 and move your unassigned drive to be a second cache pool and then set Plex to be on that. It would function in the same way as the unassigned drives plugin, but would be mounted and handled by Unraid itself.

        The core issue here is that ffmpeg and Plex require the ability to "execute" codecs, specifically EAE (Easy Audio Encoder), as this handles Dolby TrueHD/EAC3 decoding and encoding."

     Any ideas?

     Edit: Did I post this in a manner that wasn't acceptable? Or is this something so obvious to everyone else that it doesn't warrant a response? I see nothing that indicates where I could have removed, or where to reapply, execute privileges for the Plex SSD mounted in UD. Is this something that can be found and fixed, to your knowledge, or do I just have to rebuild from scratch?
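     For reference, in case it helps anyone answering: here is roughly how I understand the current mount flags can be checked and exec temporarily re-enabled. The mount point /mnt/disks/PlexSSD is only an example; substitute whatever path UD actually mounts the SSD at.

         # Show the mount options currently in effect for the UD mount point
         findmnt -no OPTIONS /mnt/disks/PlexSSD

         # Or list everything mounted under /mnt/disks and look for "noexec"
         grep /mnt/disks /proc/mounts

         # If "noexec" is present, re-enable exec in place (this only lasts until
         # the next unmount or reboot, so the permanent fix would be in UD's settings)
         mount -o remount,exec /mnt/disks/PlexSSD

     If the "could not set executable bit" EAE errors stop after the remount, the missing exec option on the UD mount would seem to be the culprit.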
  2. Thank you both for your insight. It turns out it was in fact the mouse! (My wife had taken it to use on her laptop.) Crazy that something like that would stop the VM from starting altogether. I reinstalled 6.6.3 with all peripherals attached this time and everything, VM included, works so far with no discernible hiccups. The rig is used as a data server and Plex Media Server, with the VM acting as a Steam game "server" for my other desktops. I guess I will see whether removing the USB mouse and keyboard affects any of the remote play abilities, because I don't want to leave them hooked up all the time or have to hunt them down on every restart. Again, thank you both very much.

     System:
     M/B: ASUSTeK COMPUTER INC. - Z9PA-D8 Series
     CPU: Intel® Xeon® CPU E5-2650 0 @ 2.00GHz (dual)
     Memory: 64 GB Multi-bit ECC (max. installable capacity 256 GB)
     Network: bond0: transmit load balancing, mtu 1500
              eth0: 1000 Mb/s, full duplex, mtu 1500
              eth1: 1000 Mb/s, full duplex, mtu 1500
     Cache Pool: 2x Samsung_SSD_860_EVO_500GB_S3Z1NB0K195539Z - 500 GB
     Storage: 12 TB WD Red drives, 6 TB parity
  3. Upgraded from 6.6.1 to 6.6.3. Everything but my VM was fine after the update. The single VM wouldn't start and gave the following execution error:

     "internal error: Did not find USB device 093a:2510"

     At the same time the VM log would read:

     "2018-10-30 17:51:06.614+0000: shutting down, reason=failed"

     And the system log would show:

     "Oct 30 13:51:06 LargeServer kernel: vfio-pci 0000:03:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
     Oct 30 13:51:06 LargeServer kernel: vfio-pci 0000:03:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none"

     I have reverted to 6.6.1 and all is working again. Any thoughts or ideas on why the problem with 6.6.3?
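     In case anyone else hits the same message: as I understand it, "Did not find USB device 093a:2510" means the VM definition still includes a USB passthrough entry for a device (vendor 093a, product 2510; in my case it turned out to be the mouse) that libvirt could not find on the host at startup. A quick sanity check, using the IDs straight from the error:

         # See whether the device the VM expects is actually attached to the host
         lsusb | grep -i "093a:2510"

     If nothing comes back, either plugging the device back in or unticking that USB device in the VM's settings (i.e. removing the corresponding USB hostdev entry from the XML) before starting should get past the error.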
  4. Having this same issue with a VM not working with GPU passthrough. My Windows 10 VM works perfectly until I try to pass through the single GPU I want to use. I notice that when starting the VM with a GPU assigned for passthrough and a vbios path listed, the first of the VM's assigned CPU cores shoots up to 100% and freezes there, while the rest of the VM's cores stay at 0%. If I try to pass through the GPU without the vbios path listed, I can get into the VM with Remote Desktop Connection but not Splashtop, which just goes to a black screen after I log in. But even when seeing the VM through RDC, if I try to launch a game or anything else that uses the GPU, it fails to launch; the screen just flicks to black for a second and then comes back to the desktop. Device Manager shows no issues with the GTX 1080 or its driver.

     This happens regardless of how many cores are assigned; I have left core 0 alone in all tests for Unraid to use, and the motherboard is set to use the onboard VGA first. Things I have tried so far:
     - Changing physical slots and removing all other PCI equipment.
     - Running memtest for 24 hours (no errors).
     - Using vbios files for my cards, sourced both online and dumped locally with GPU-Z, and modified to work per the guide linked below and its related videos.
     - Renaming the files so they are recognized by the GUI in the VM editor, and also assigning them in the XML editor; I tried all combinations (other than plain VNC).
     - Putting in multiple cards to see if one of them would pass through; nope.

     I am seriously scratching my head here. System as is: https://pcpartpicker.com/list/vW6h7W

     I haven't rebuilt the VM or created a new one. The system and VM were originally created on a Supermicro board, but it didn't have the PCI slot for a full-sized graphics card, so I swapped to the Asus Z9PA-D8 motherboard and then tried the passthrough. Could there be an issue with the VM having been created on the old board and now trying GPU passthrough on the new motherboard? If so, why does VNC work with no problem? This is past my pay grade, I think, so I'm hoping someone has an idea or maybe a theory... Everything else works perfectly, but if it doesn't pass through the GPU I've wasted money and hardware.

     I've attached pictures of the system's hang-up and the VM log at the time, the XML and GUI setup of the VM and the vbios files, along with the last two hours of system logs during this.

     largeserver-diagnostics-20180603-0208.zip
     syslog.txt
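     For reference, a couple of host-side checks that can help narrow this kind of hang down. The rom path below is only an example, not necessarily what this system uses:

         # Check which IOMMU group the GPU (and its HDMI audio function) landed in;
         # other devices sharing the same group can interfere with passthrough
         for d in /sys/kernel/iommu_groups/*/devices/*; do
             g=${d%/devices/*}; g=${g##*/}
             echo "IOMMU group $g: $(lspci -nns ${d##*/})"
         done | grep -iE 'nvidia|vga|audio'

         # Confirm the vbios file the VM XML points at actually exists and is readable
         ls -l /mnt/user/isos/vbios/gtx1080.rom

     If the GPU shares its IOMMU group with devices other than its own audio function, that is a common source of passthrough trouble, although I can't say whether it explains the 100% core hang specifically.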
  5. As an add-on to this: while fooling around more, I got an alert from Fix Common Problems saying, "Your server has issued one or more call traces. This could be caused by a Kernel Issue, Bad Memory, etc. You should post your diagnostics and ask for assistance on the unRaid forums." I'm fuzzy-tired at this point, so I'm going to attach my diagnostic logs and go to sleep for real. Hopefully someone can help with this. Thanks in advance!

     largeserver-diagnostics-20180603-0208.zip
  6. First off, thanks for these videos and walkthroughs; they have been a godsend, and up until the last step everything went without a hitch. I have one Zotac GTX 1080 Mini that I want to pass through to the VM; there are no other graphics in the system apart from the onboard VGA. But here is the issue now: when I add the vbios file to either the XML or the GUI in the VM editor and start the VM, one of the cores goes to 100% and it just hangs there. I can pause but not stop the VM without forcing it to stop, and it is never accessible via either Splashtop or Remote Desktop Connection. It essentially hangs at startup where I can't see it.

     I've changed the physical location of the card from the top to the bottom PCI Express slot and renamed the hex-edited vbios file from .dump to .rom. I'm scratching my head here. When I remove the vbios, the VM starts up with no problem; however, it is then available via Remote Desktop but has an issue in Splashtop where it lets me log in but the screen goes black after I put in my password. I use this Unraid machine remotely; it is not usually hooked up to a monitor, but it has been connected via the onboard VGA to a screen since I began troubleshooting. If I go back to VNC with no GPU, then of course everything works, but that defeats the point, as this is meant to be a "gaming" VM.

     I've attached pictures of the system's hang-up and the VM log at the time, the XML and GUI setup of the VM and the vbios files, along with the last two hours of system logs during this. Please help... it may be the lack of sleep, but I'm obviously missing something!

     Edit: with the benefit of sleep and hindsight... I had just swapped out motherboards in this server/workstation combo because the previous board didn't have the PCI Express slot to take a full-sized graphics card. So my added question is: would having created the VM on another motherboard affect its usage on this one once the graphics card is applied? Should I rebuild the VM?

     syslog.txt
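     One extra sanity check on the edited vbios that may be worth adding here (the rom path is only an example; use whatever the file is actually called): as I understand it from the guide, once the NVIDIA header is trimmed off, the file should begin with the standard PCI option-ROM signature bytes 55 aa.

         # Dump the first bytes of the trimmed vbios; a correctly edited rom should
         # begin with the 55 aa option-ROM signature rather than the NVIDIA header
         xxd /mnt/user/isos/vbios/gtx1080mini.rom | head -n 2

     If the first bytes are not 55 aa, the header trim probably went wrong, which could be part of why the VM hangs when the vbios is specified.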