testdasi

Everything posted by testdasi

  1. There aren't that many AMD single-slot GPUs outside of the Radeon Pro line. Compatibility with Mac is basically a coin toss as to when Apple decides something should stop working, so I would recommend you spend the money on upgrading your Mac instead. If someone uses a Mac, there ought to be a reason to insist on using a Mac, which generally makes running a Mac VM not a sensible idea:
     - Those without tech skill: trying to make a macOS VM work will just be a waste of time.
     - Those who like shiny things: you already pay a premium for it, so just accept it and fork out more money to your beloved Apple.
     - Sound engineers: ok, I feel for ya.
  2. You can either use the docker mentioned on github or use one of the various other dockers with similar functionality in the CA app store. I would avoid installing anything directly to Unraid, especially since the OS runs entirely in RAM, so anything you install will be erased on the next boot unless you use some complicated workarounds.
  3. 6.8.0-rc7 was the last version on the 5.x kernel, so you might try that. However, I doubt it will help because it's fundamentally an AMD problem. What you probably need is the Navi reset patch, not a newer kernel. Someone compiled a custom kernel for 6.8.2 with the Navi patch here. Caveat: use it at your own risk (in fact, use all custom kernels at your own risk).
  4. Pass-through means exclusive use, so you need 1 GPU dedicated to your VM and 1 GPU dedicated to your Plex docker. No mix-and-match.
     For transcoding, if you need more than 2 streams, the Quadro P2000 is highly regarded as a good option (NOT the cheapest, but good value for money). If you don't need more than 2 streams, then anything GTX 1050 and up would work, but I would not go lower than the 1050 Ti. There's a website with estimated max numbers of streams: https://www.elpamsoft.com/?p=Plex-Hardware-Transcoding - but note that their calculation is based on a rather low bit rate and a 1080p-to-720p transcode (i.e. very optimistic figures). Real-life performance is about 1/2 to 2/3 of their estimates.
     For the VM, I would not consider AMD due to the reset issue that AMD is not fixing. There are user-built custom kernels on here with the Vega / Navi reset patch which may or may not work, so YMMV. I think Nvidia is the better option, especially since you plan to have a GPU dedicated to Unraid. That makes pass-through a lot less problematic.
  5. Use your existing 2x250GB to separate write-heavy and read-heavy data. Something like this:
     - Set all shares with Cache = Yes to Cache = No. You don't have parity so there's no need for a write cache (even if you did, I would turn on Turbo Write rather than waste SSD write cycles on a write cache).
     - Set the 2x250GB up as a RAID1 cache pool.
     - Mount the 500GB as UD and use the console to move the data over (except any temp data) - see the sketch below. Then use the 500GB as a temp drive for write-heavy activities.
     I doubt you will see a difference in performance, but if you have a VM that needs a really fast vdisk, just put the vdisk on the 500GB. Alternatively, if you want the fastest possible storage performance for your VM, you can set 1x250GB as cache and 1x250GB as UD temp drive for write-heavy activities, then pass the 500GB through to your VM as a PCIe device i.e. stub it and select it in the Other PCIe Devices section of the VM GUI.
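     A minimal sketch of the console move, assuming the 500GB is mounted by UD at /mnt/disks/old_500gb (hypothetical mount name) and the new cache pool sits at /mnt/cache:

        # copy everything except the temp data over to the cache pool (paths are examples)
        rsync -avh --progress --exclude 'temp/' /mnt/disks/old_500gb/ /mnt/cache/
        # verify the copy first, then clear the 500GB before reusing it as a temp drive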
  6. There is a plugin for that now. Search vfio in the CA app store.
  7. There are certainly more VM users than hardware-transcoding users, so I would strongly urge LT to make the loading of the Nvidia driver an optional setting (especially if it requires existing VM users to make config changes for a feature they didn't ask for, or even worse, a feature that may break things). Needless to say, expect a lot of upset users to inundate the forum with "6.x.0 broke my VM" posts. 😅 Last but not least, I can only wish you good luck with all the future "the new Nvidia driver has already been out for one whole freaking day, why doesn't Unraid have it yet?" demands. 🤣
  8. Sorry to be harsh, but if you need that kind of hand-holding just to get the hardware, you will be heavily troubled by the tech hoops you have to pass through (pun intended) to get your project done. You will also need to be a lot more specific about your use cases than "gaming/etc" - the main reason being that with a 3-gamer-1-PC build, one of the 3 gamers will have to compromise. Last but not least, unless someone on here happens to have built a 3-gamer-1-PC, you will never get "exactly" in any answer. PCIe pass-through is notoriously difficult to predict and requires the on-site person (i.e. you) to have some tech skill to troubleshoot. And nobody wants to be blamed for recommending exact hardware that should work but doesn't because, for example, the user uses the wrong vbios.
  9. Is your Synology on the same network? If so, just use Unassigned Devices to mount the SMB share and rsync to the UD mount instead - something like the sketch below.
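     A minimal sketch, assuming UD mounts the Synology share under /mnt/remotes (older Unraid builds mount under /mnt/disks instead); the share and folder names here are hypothetical:

        # pull data from the mounted Synology SMB share into an array share (example paths)
        rsync -avh --progress /mnt/remotes/SYNOLOGY_backup/ /mnt/user/backup/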
  10. What controller type did you pick for the vdisk? virtio / scsi / ide / sata? Try picking SATA. A BSOD at boot is usually due to missing drivers e.g. virtio / scsi; SATA tends to be safer. It would be even better if you pre-install all the drivers from the virtio iso before converting the physical disk to a vdisk - see the sketch below for the conversion step.
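     For the conversion itself, a minimal sketch from the Unraid console, assuming the physical disk is /dev/sdX and you keep vdisks in a domains share (both names are examples, not your actual setup):

        # image the physical disk into a raw vdisk; run this while the source disk is not in use
        qemu-img convert -p -O raw /dev/sdX /mnt/user/domains/Windows10/vdisk1.img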
  11. Long reason: current QLC tech uses an adaptive SLC cache (essentially using QLC cells like SLC cells). That means:
      - The amount of cache you have depends on how much free space you have.
      - When you run out of the SLC cache, the drive reverts back to using QLC i.e. slow.
      - The wear on the SSD cells being used as SLC cache is quite a bit higher than normal.
      What this translates to in real life (e.g. as Unraid cache) is that it performs about the same as a SATA SSD on average (I'm talking about the Intel 660p specifically, as that's what I'm familiar with). Adding to that is the fact that the 660p cannot be passed through to a VM as a PCIe device due to Linux kernel conflicts. Of course, if you run out of SATA ports and must add an SSD at the lowest possible cost, then it's not at all a bad idea; I would just prefer actual NVMe performance and not a glorified SATA drive. DRAM cache is another thing to look out for, as the lack of it is even (way) worse than QLC - I would pick the 660p over a DRAM-less SSD any day. For your use case (i.e. sharing an NVMe among several things including vdisks), anything 3D TLC (aka Samsung V-NAND) with DRAM would be good. If you want to pass it through as a PCIe device to a single VM, then you need to research the SSD's controller, as some don't like to be passed through or require special workarounds with limitations.
  12. Does Handbrake NVENC support HDR? I thought it only does 8-bit.
  13. Tools -> Diagnostics -> attach zip file on your next post.
  14. You can work around this by creating a new VM with (blank) xml, copy-pasting the code from the old VM's xml, then changing the UUID + name - it will work as a separate profile of sorts. You can google "uuid generator" and there are a few websites that will generate one for you (or just use the console - see the sketch below). That's how I have 3 different "profiles" for my Windows VM and 2 "profiles" for my Linux VM. The feature you are asking for would be cool but rather complex though. Sort of like an xml snapshot functionality.
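     A minimal sketch, assuming you have console access:

        # generate a fresh UUID to paste into the <uuid> tag of the copied xml
        uuidgen
        # prints something like: 1b4e28ba-2fa1-11d2-883f-b9a761bde3fb
        # remember to also change the <name> tag so the two profiles don't clash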
  15. Please post it in the specific docker support topic. You are more likely to receive an answer there.
  16. Have you run memtest? Have you done anything to rule out the possibility of hardware failure (e.g. your motherboard / CPU) being the problem? Have you tried the custom kernel with the Vega / Navi patch included? Things usually don't suddenly stop working, and especially in your case, things-stopped-working was preceded by a corrupted docker image. Data corruption (without an accompanying drive failure / error) usually points at a RAM issue (or a CPU issue, e.g. a problem with a pin connecting to the memory controller). I think we have ruled out all of the usual suspects, so we kinda have to look at more extreme causes. Perhaps use a trial license of Unraid on another computer to see if you can pass through your Vega or not. Or something like that.
  17. Yes. Yes. Nothing special - I put Plex appdata in a share with cache = only. Thanks for the clarification; I thought everything was controlled by the mover.
  18. You are really overthinking it. But if you need a task to access only a group of disks in your array, you can use the Include functionality of the share to include only those disks in the share. An example of things that need to be on a single drive is your download temp i.e. things that need to be processed further: no parity protection is required, and the write-heavy nature means low write speed will cripple it (and in fact it will cripple other activities as well, because high IO wait causes lag). You said you have 2 SSDs, which we would assume to be running in RAID-1 i.e. mirror. All other things being equal, mirror beats parity any day. Moreover, critical data such as your VM vdisk should be backed up; parity protection is not a replacement for a backup.
      - Cache = only: data is always on the cache.
      - Cache = prefer: data is on the cache; if the cache is full, the mover moves data to the array, and once cache space is freed up, the mover moves it back.
      I subscribe to Occam's razor i.e. the simpler solution is usually the right one. I have found cache = prefer to cause trouble (e.g. the Plex appdata db ending up half-array-half-cache), so I would rather set things up such that I won't unexpectedly run out of space than rely on the mover to "solve" it.
  19. You can add an SSD to the array, with the caveat that it's not officially supported (even though it would work just like a normal HDD). However, note that some SSDs have been reported on here to cause parity errors due to how their firmware does garbage collection / wear leveling. So if you ever do that, I recommend watching your parity checks for errors, even just 1 or 2 (because any error means corrupt data when rebuilding). Your "with its own dedicated parity" part isn't possible, however. That's multi-array functionality, which is presumably still in development.
  20. Officially the Ryzen 3900X supports:
      - Single rank, 2 DIMMs: DDR4-3200
      - Single rank, 4 DIMMs: DDR4-2933
      - Double rank, 2 DIMMs: DDR4-3200
      - Double rank, 4 DIMMs: DDR4-2667
      So your 2-DIMM 3200MHz kit theoretically should be ok. However, anything above 2133MHz is an overclock, so if you run into instability running 24/7, drop it to 2667 first, then if it's still unstable, do a memtest at 2133MHz. If it can't run stable at 2667MHz, you theoretically have plenty of grounds for a return; just note that your replacement will not fare much better.
      If you put the NVMe in the cache pool, for example, then your VM and containers can share the space. In that case, your VM will run on a vdisk, so you have to specify its size; same with the docker image. Note though that if you are after the highest and most consistent storage performance for your VM with the NVMe, you have to pass it through (i.e. in full) as a PCIe device. That means no sharing. And do not get a QLC NVMe (e.g. Intel 660p).
      You can do Nvidia GPU hardware transcoding in the Plex docker. However, it means your VM cannot use that same GPU, and you have to run Unraid Nvidia (which is a community-built branch of the official Unraid). If you only have 1 powerful GPU, then running Plex in your main VM is probably not a bad idea. I personally have Plex as a docker with CPU software transcoding since I don't need that many concurrent streams. When I need many concurrent transcodes or when it's time-critical, I always do it on my workstation VM with a GPU.
  21. Your number 2 and 6 show you don't quite understand the Unraid cache settings. Cache = Yes speeds up writes but it won't speed up reads (except for data still waiting to be moved to the array). Unassigned Devices by default does not use the cache pool - "unassigned" = not in the cache pool or the array. Having a separate disk for a separate task kinda defeats the purpose of having an array. Unless a task NEEDS to be on a single drive / the SSD cache, you should spread tasks out across multiple drives on the array. That will maximise the utilisation of your drives. Then you use the Share functionality to split them into tasks. What NEEDS to be on a single drive? Basically anything that would be crippled by low write speed and/or receives frequent but low-priority IO. What NEEDS to be on the SSD cache? Basically anything for which responsiveness (i.e. random IO) is critical. So it should be something like this:
      - VM: cache = only
      - bittorrent: unassigned (or cache = only if the data is not too large)
      - Docker: cache = only
      - appdata (I guess this is the Docker appdata): cache = only
      - Personal Windows 10 backup: cache = no
      - Unraid USB backup: cache = no
      - Time Machine for Mac: cache = no
      - Movies: cache = no
      - Photos: cache = no
      - Documents: cache = no (or cache = prefer if the data is not too large)
      - Minecraft Server (or any other server e.g. Garry's Mod, L4D2...): don't know
      You will notice that I don't have anything at Cache = Yes. For an array with reconstruct write (aka turbo write) on and modern HDDs, the write speed can quite often exceed gigabit (125MB/s). In addition, Unraid (and Linux in general) will automatically use free RAM to cache writes first, i.e. short bursts of writes will be super fast regardless of other settings. That means for most home users the benefit of using the cache pool as a write cache is rather limited, which usually doesn't justify reducing the lifespan of the more expensive SSDs. Cache = Prefer serves as a fail-safe to move stuff off the cache pool onto the array to reduce the chance of it filling up. However, as I mentioned above, you usually don't need shares with Cache = Yes, so the chance of filling up the cache pool (which usually happens due to Cache = Yes shares) is not that high. In fact, things like the docker image, VM vdisks and appdata should not / cannot be moved while in use. And besides those 3, there isn't much else that NEEDS to be on the cache pool anyway.
  22. Whether you should go for 7200rpm (or even an SSD array) depends on the number of streamers and how much you dislike buffering. Just based on my own anecdotal quick tests:
      - 1-2 concurrent streamers accessing the same drive: 5400rpm is more than good enough.
      - About 4 concurrent streamers accessing the same HDD: you may start to get some buffering with 5400rpm. But it has to be the SAME disk for it to be a problem, so I reckon 5400rpm is still ok enough.
      - About 6+ streams: 7200rpm starts to make some sense.
      - 10+ streams: an SSD array starts to become justifiable.
      So if cost is important and you don't have that many streamers, I would say stick with 5400rpm. In terms of 7200rpm drives, I just buy whatever is cheapest from a reputable dealer whenever I need it; recently they happen to be IronWolf. The reason I use mainly 7200rpm is that I use the Unraid array mostly as my backup server, so there are a lot of small files, which benefit from 7200rpm. The QVO would be even better than 7200rpm for my backup jobs, but the cost is still too high to justify the benefit.
  23. No, not those settings. Power & Sleep -> Additional power settings -> Choose what the power buttons do
  24. That's why I have been telling people that with modern HDDs and turbo write on, there's no need for a cache drive for a pure NAS use case. The point of having a cache drive is no longer to speed up writes - that really hasn't been the case since dockers and VMs were introduced.
  25. If you installed Ubuntu correctly then no, removing the ISO will not stop it from booting. In fact, you are supposed to remove the installation iso after installation.