Iciclebar

Everything posted by Iciclebar

  1. I have 4 Samsung 960 Evos (2x 500 GB, 2x 250 GB) in an Asus Hyper M.2 4x NVMe adapter. I would like to pass the 250 GB drives through to VMs, but I cannot figure out a good way to tell the drives apart. Each drive has an identical description along the lines of Samsung/controller/revision. I tried taking some of them out to identify which ones were missing, but was not able to pick out the 250 GB models successfully. Using two 2x M.2 Supermicro cards in different slots didn't change the device IDs either. Running a VM on the cache array really isn't an option for me, as I get quite high CPU loads during disk access that are not a problem when passing through the NVMe controller. My last-ditch option is basically to buy different NVMe drives so I can tell which ones are the cache drives and which ones I want to pass through, because they would be different models: 970 Evos for VMs and 960 Evos for cache, or something like that. But I would really like to use the drives I already own. Any ideas on how to tell which device ID each drive corresponds to reliably?
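     Edit: one thing I've been poking at from the console is matching serial numbers to PCI addresses. A rough sketch, assuming the standard Linux sysfs layout (device numbers will differ per system):

        # List each NVMe device with its model and serial number
        for dev in /sys/class/nvme/nvme*; do
            echo "$(basename $dev): $(cat $dev/model) $(cat $dev/serial)"
        done

        # Resolve the PCI address behind one device (e.g. 0000:03:00.0);
        # that address is what gets bound to vfio-pci for passthrough
        readlink -f /sys/class/nvme/nvme0/device

     The serial printed there should match the sticker on the drive, which would be how to tell the 250s from the 500s.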
  2. Thanks @Squid! Mover used to slow down all running Docker apps until it was done. I can now run it without it negatively impacting my system after lowering the CPU and I/O priority. This update is great!
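     For anyone curious what those settings amount to under the hood, it's conceptually the same as launching the mover script under nice and ionice. A sketch, assuming the script lives at /usr/local/sbin/mover (the path may differ between Unraid versions):

        # Lowest CPU priority (nice 19) and idle I/O class (ionice -c 3),
        # so mover only uses disk bandwidth nothing else wants
        nice -n 19 ionice -c 3 /usr/local/sbin/mover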
  3. Hello Unraid Community, thanks for taking the time to read my support thread. I ran the 6.7.1 update when it showed up last night, and upon reboot the two optical drives in my server no longer showed up under system devices. I had just been using them prior to running the update and had made no other changes to the system. They are connected to a Marvell SE9172 onboard SATA 3 controller. Since I was in the middle of using them, I moved the optical drives to two ports on the Intel controller and they showed up as normal. The Marvell controller itself still shows up in system devices as per normal. Was anything changed in 6.7.1 that would have affected this SATA controller? I know in the past some Marvell controllers had been blacklisted for causing problems with the array and having issues with virtualization. For that reason I've never run any hard drives on that controller and have always used it for optical drives. I'm not at my server at the moment, so I am unable to attach diagnostic files at the time of this post. I will be back later tonight if you want me to reconnect the drives to the Marvell controller and collect any logs. Thanks again!
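     Edit: for anyone hitting the same thing, a sketch of what I plan to collect when I'm back at the box (standard Linux tools, nothing Unraid-specific):

        # Confirm the Marvell controller is still enumerated on the PCI bus
        lspci | grep -i marvell

        # See whether the kernel bound a driver to it and what the ATA
        # layer reported for its ports during boot
        dmesg | grep -iE 'ahci|ata[0-9]+:'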
  4. Hey Djoss, thanks for your work on this container; I've gotten a lot of mileage out of your containers on my server. I've got a quick question for you regarding the Source File Stable Time field in the settings. By default it's set to 5 (seconds, I'm assuming). How big a value can it take, and is it always in seconds or can it take other units? Thanks!
  5. Hey Skippy, do you know what process is causing the GPU load? I ran a GTX 1050 Ti in my VM for quite a while before selling it when I parted out my gaming rig and replaced it with my 1080, and I didn't have problems like that. I had a 1030 for a while that did have issues like that, but just due to how weak a card it was. If you turn on the GPU and GPU Engine columns in your process list, you should be able to see what's eating your GPU. I really haven't seen much in the way of "poor GPU performance" in the time that I've been working with my Unraid VM. The GPU tends to work once you get it passed through, unless it's overheating and throttling, although your GPU-Z screen shows it's running pretty normally. Also, you said you changed the power settings for the GPU in the Nvidia control panel. Did you change the power plan to High Performance in Windows too? Anyway, I hope you get some more suggestions from other users and can figure out this issue.
  6. Hello, I've been working on optimizing my Unraid server as I am now using a VM inside of it as my primary machine. I've observed some behavior while testing and setting up that confuses me. I've run VMs on here previously and gamed on them, but more as a hobby rather than as my daily driver.

     If I set the emulatorpin to cores 1,13 (12-core Xeon, so this is the 2nd core) and simulate a load (the one that causes problems reliably is the 2nd test in CrystalDiskMark, the 8 queue depth / 8 thread test), the emulatorpin core maxes out at 100% and the VM stutters heavily. Mouse movement is interrupted, videos stop, audio stutters. Removing the emulatorpin setting and running nothing else on the server shows Unraid using 2-4 threads on unassigned cores near 100% utilization to provide the disk access to the VM, which is running on isolated cores. The VM cores also nearly max out during this time. The VM still has audio clipping issues, but it is noticeably less of an issue. With the Q35 machine type only one core maxed out on the VM, but multiple unassigned cores were still heavily used, and the stuttering audio and mouse were still observed.

     Is there a CPU cost to running vdisks/emulated block devices that I'm unaware of? I've tried this with the disks set to VirtIO and SATA, machine types set to i440fx and Q35, and with caching on, off, and set to directsync. It didn't matter if the vdisk was hosted on my cache pool (960 Evo pool) or on a directly passed 860 Evo SSD that I bought to try and alleviate this issue. The only thing that resolved the machine response issues was passing the NVMe OS disk through to the VM directly and using it, but that isn't ideal at this point in time.

     If this is expected behavior, that's fine. It's just that I've seen people suggest the emulatorpin setting, and I've seen replies that it helped or did not help, but I have not seen anyone report that setting the emulatorpin was so actively detrimental to the performance of their VM. With the emulatorpin set to 1 or 2 threads, even something like loading a game spikes the emulatorpin cores and the VM lags heavily. I've experienced this trying to remove the occasional audio stutter in Discord from my system. I'd appreciate any suggestions or ideas. Thanks.
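     Edit: in case it helps anyone reproduce this, you don't need to edit the template to test different pins; virsh can move the emulator threads on a live guest. A sketch (the VM name "Windows10" is just a placeholder, and the thread numbers are from my 12-core/24-thread layout):

        # Show where the emulator threads are currently pinned
        virsh emulatorpin Windows10

        # Re-pin them to two otherwise-idle threads without
        # restarting the guest
        virsh emulatorpin Windows10 5,17 --live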
  7. Hey btrcp, this post is probably in the wrong section (unless you intended to host all of your streaming servers), but the things that come closest to what you are trying to do would be something like Xbox Game Pass (lets you play games on Xbox and Windows, Netflix-style) or something like Nvidia's GeForce Now when it comes out of beta. They are basically hosting a VM that can play games; you install your games on it and then can play on any system that can run the GeForce Now client: Shield, macOS, Windows. It's currently in beta and it's free if you can get in, but eventually it's going to move to an hourly pricing structure. If you are thinking about hosting your own servers via Unraid, you're going to need one VM per active user and a client for each of them to play on. Each VM will need a copy of the software they are trying to play if they all want to play it at the same time. Steam does have some sort of game-loaning program though, so you might be able to mitigate this. As for SteamOS, it's currently just a client, but they are working on making it able to host a stream as well. What that would do for you is that once it can host a Steam stream, you would be able to use the Steam Link app on the Nvidia Shield to stream the game and would not need a Windows license per VM.
  8. Hello, is there any requirement or best practice to keep cache drives all the same model/speed, or can you just mix and match, and what are the benefits/downsides? I just picked up a 4x M.2 x16 card for my ASRock Rack board. I'm currently using 2x 960 Evo 250 GB drives as a cache pool for Docker storage, and I've got a 500 GB 960 Evo passed through for a VM. I was thinking of adding a 2nd 500, giving me a total of 750 GB in the cache pool. Should I find another 960 Evo, or can I just throw a 970 in there and call it a day? Will that cause any problems? Also, I noticed that my disk performance for vdisks hosted on my cache pool improved dramatically when I switched it to a btrfs cache pool rather than a single XFS drive, and I'm thinking about setting my VMs up with virtual disks hosted on the NVMe pool rather than directly passing them through. Any thoughts? What happens if I added SATA SSDs to the cache pool alongside the NVMe drives? Would the cache pool be as slow as the slowest member, or will it use all available space without penalizing the data on the NVMe drives? Thanks.
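     Edit: for anyone else wondering how a mixed pool actually lays data out, btrfs will report it per device. A sketch, assuming the pool is mounted at the usual /mnt/cache:

        # Overall data/metadata allocation, and how much of each
        # device is in use
        btrfs filesystem usage /mnt/cache

        # Per-device membership and allocated bytes for the pool
        btrfs filesystem show /mnt/cache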
  9. Hey Kurkoko, just a heads up: TDP is not power draw. TDP is a measure of how much cooling a processor needs to run at its rated speed. So if the processor is 3.5 GHz without turbo boost and is listed at 65 W TDP, the cooler/enclosure needs to be able to dissipate 65 W of thermal energy to keep the processor cool enough not to throttle at its rated speed. This is a slight overestimation because some thermal energy is lost to the board and socket as well. AnandTech has an article explaining TDP if you are further interested: TDP Article

     The 8700K and 8700T will probably not idle at the same wattage because they do not run at the same speeds, base or boost. Depending on what the motherboard manufacturer specifies for power values (more information in that article on TDP), the power draws can vary wildly. As a rule though, the 8700K should have more performance while turboing more aggressively, while the 8700T should draw less power and turbo less aggressively. It will turbo up when the CPU has load. However, several factors determine how "green" it is: the processor TDP rating (PL1), the processor power limit (PL2), and the length of time the processor is allowed to stay at boost clocks (Tau). Board manufacturers can mess with these limits, as they are stored in the firmware, and the processors then basically turbo the entire time. For instance, if they set PL2 and Tau to something ridiculous, the processor will never throttle based on power draw, and it will boost practically forever as long as the temperature doesn't reach the limits on the Tcase sensor and force it to slow down based on thermals. This is why the 9900K can draw 180+ watts even though it's a "95 W" processor. Someone recently mentioned a review of a Supermicro board that adhered strictly to the Intel recommendations, and inside that power envelope the 9900K is a lot closer to the 8700K performance-wise. GamersNexus and Hardware Unboxed both did videos on power/TDP violations on Intel processors pretty recently.

     The GPU still uses power even when the VM is turned off, but depending on the card the idle power draw can be pretty low. Tom's Hardware reported the idle power draw of the 1080 Ti to be 13.2 watts.
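     If you want to see what limits your board actually programmed, Linux exposes the RAPL limits through sysfs. A sketch, assuming the intel_rapl driver is loaded (these are the standard powercap paths; values are in microwatts and microseconds):

        # PL1: the long-term (sustained) package power limit
        cat /sys/class/powercap/intel-rapl/intel-rapl:0/constraint_0_power_limit_uw

        # PL2: the short-term (turbo) package power limit
        cat /sys/class/powercap/intel-rapl/intel-rapl:0/constraint_1_power_limit_uw

        # Tau: the time window the PL1 average is taken over
        cat /sys/class/powercap/intel-rapl/intel-rapl:0/constraint_0_time_window_us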
  10. Hello, I'm currently using a Syba SD-ADA31040 to convert the VGA output from my ASRock Rack EPC602D8A (ASPEED 2300) to HDMI so I can have console access to the unit when needed. Currently, the system posts and shows me the POST screen, and I can get into the BIOS and all that. Once Unraid begins to start, the screen usually goes blank. Occasionally this results in a corrupted screen, and less occasionally I end up with the Unraid login prompt. (Non-graphical boot; I haven't tried using graphics on the console since I've been using this setup.) The onboard video is helpful as I am not required to run a separate video card in passthrough scenarios, but the poor performance of the adapter is unfortunate. I'm currently using a 27" 1440p Dell monitor. The higher resolution of the monitor could be causing problems, as the adapter only works up to 1920x1200 at 60 Hz. I've considered some options:

       • Run my 1050 Ti in the slots above my 1080 and don't use the onboard video. Possibly look for a smaller video card, slot-wise, in the future.
       • Run through the process of getting my Unraid host to use just the 1080 for both console and passthrough.
       • Get IPMI working on my network (it uses Java and it's been a miserable bastard to get approved on Win10's security list).
       • Find the magical unicorn G-Sync monitor that still has a VGA port.
       • Switch to AMD/FreeSync next time for gaming monitors with VGA ports and no G-Sync tax (but more vBIOS/black screen issues in Unraid).

     Any other suggestions, up to and including "get a new, non-cheap VGA to HDMI converter"? Thanks
  11. Hey Muzzy, I've seen that too in the past. There's a thread on these forums that talks about this issue too: Ability to Install GPU drivers for Hardware Acceleration

     I'll just reiterate: if you want GPU encoding in Plex on Unraid, your best bet right now is to either run something with a newer Intel iGPU (which is supported in Docker) or spin up a VM and pass the card through. That route is fully supported and shouldn't break during OS updates. The big issues that hold back using Plex in a Docker on Unraid with Nvidia GPU acceleration are as follows (as best as I understand them):

       • https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/
         Unraid doesn't currently ship a video driver that would support this. Now, Unraid is basically Slackware, so you could install one, but you are then running Unraid in an unsupported configuration. OS updates could break your server, your Docker, both, or you could end up with bugs that might not be addressed due to your custom config.
       • https://forums.plex.tv/t/how-to-setup-nvidia-hw-acceleration-in-ubuntu-docker/288625/5
         Plex itself doesn't use the GPU when it's passed through to the container. Plex only recently added support for Intel iGPUs to do this, because people are running it on things like little NAS boxes with quad-core Atom processors.

     The P2000 is just a normal Quadro card. There might be some driver-level issues due to having the driver installed on the host (GTX cards aren't supposed to be able to pass through; it gets blocked by the driver), so a Quadro may be required in the end, but there are a bunch of other things that need to happen before we get to that point.
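     For the supported Intel iGPU route, the whole trick is handing the container the DRI render node (note that hardware transcoding is a Plex Pass feature). A sketch using the official plexinc/pms-docker image; the host paths here are placeholders for your own appdata and media shares:

        # Pass /dev/dri into the container so Plex can use Quick Sync
        docker run -d --name plex \
          --device /dev/dri:/dev/dri \
          -v /mnt/user/appdata/plex:/config \
          -v /mnt/user/Media:/media \
          plexinc/pms-docker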
  12. GPU Passthru

     I might revisit it in the future. I thought it was working pretty great, but I kept getting really long, weird FPS drops that seemed to be related to the CPUs going idle on me. If I was using 6 cores it happened almost constantly, but with 4 it wasn't so bad. It might just need additional work that I haven't had the time for yet. I'm not trying to steer you away from it, just manage your expectations and realize it does make things more complicated. Other than that, happy building.
  13. GPU Passthru

     Hey Mat, there is a performance hit, although if your system is optimized properly it should be small (3-5%). The caveats for me were making sure to dedicate storage, pass it through to the VM, and let the VM manage it; setting CPU isolation properly (the new 6.6 makes it pretty easy); and setting the boot order in the VM config when you have multiple disks. When I was running a passed-through NVMe drive and an additional passed-through SATA disk, it kept popping the UEFI BIOS up and wouldn't save my changes. Setting <boot order="1"/> on the NVMe device solved that issue, although it occasionally liked to pop off if I was making changes to the template and needed to be reapplied.

     I had some other issues, but they were mostly related to the fact that I was using an E5-2697 v2 (big slow Xeon). If I were building another Unraid box with the idea of having it do double gaming duty along with file storage/Plex/HandBrake like I have now, I would have used 2 higher-clocked CPUs or something newer. I could also have lowered my expectations of the Plex server; that 12-core spoiled me. Maybe I should have run a separate VM to host the Plex and HandBrake dockers, just to truly isolate them? In the end I had a fully functioning gaming rig literally sitting next to my Unraid server, and I decided to shelve the project. If my gaming rig blew up, I would have no problem using my Unraid box as a stand-in until I figured out what to do, though. Next time the hobby bug bites me, maybe I'll do something crazy like run Unraid on both systems and virtualize my gaming rig too, so I can run Plex on the iGPU and back up the primary.
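     One quick way to sanity-check the isolation piece: the 6.6 UI just writes isolcpus onto the kernel command line, so you can confirm it took effect from the shell. A sketch (the core list will be whatever you picked in the UI):

        # The isolated cores should show up on the running kernel's
        # command line after a reboot
        grep -o 'isolcpus=[^ ]*' /proc/cmdline

        # And in the append line the UI edited
        grep append /boot/syslinux/syslinux.cfg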
  14. Hey Muzzy, welcome to the forum. You're going to be better off using a VM and passing the GPU through to that VM at this point; that process is well supported. Docker versions of Plex can use some Intel iGPUs, but you are going to run into pain trying to get Plex to use the Nvidia GPU in Docker. -Good Luck
  15. Happy owner of 2x Define R5s. Whenever I wish I had more space (more drives or an E-ATX board), I sometimes wish I'd gotten a Define XL R2. It's basically an upscaled Define R4 with E-ATX support and 4x 5.25" drive bays. Stick a 5x 3.5" drive backplane in it and have an extra 5.25" bay left over for an ODD or 2.5" backplanes. I hope they update it; one based on the R6 would be nice.
  16. That seems pretty normal; the GTX 970 doesn't have the greatest idle power consumption, ~13 W at idle. If you are running dual monitors it's going to pull more, and some driver revisions have power draw issues when more than one monitor is connected. The new 2080 was pulling like 55 W at idle in multi-monitor due to a driver issue, according to some early reviews. According to the UPS, my rig draws ~113 W at idle and about 160 W with a Win10 VM spun up:

       • E5-2697 v2
       • ASRock EPC602D8A
       • 64 GB 1866 ECC reg
       • Array: 4x 4 TB HGST NAS drives
       • Cache: 2x 250 GB 960 Evo
       • UD: 1x 500 GB 960 Evo, 1x 4 TB WD SSHD
       • GTX 1050 Ti