testdasi

Members
  • Content Count

    1822
  • Joined

  • Last visited

  • Days Won

    5

testdasi last won the day on January 30

testdasi had the most liked content!

Community Reputation

231 Very Good

2 Followers

About testdasi

  • Rank
    Advanced Member


  1. What controller type did you pick for the vdisk? virtio / scsi / ide / sata? Try picking SATA. A BSOD at boot is usually due to missing drivers, e.g. for virtio / scsi, so SATA tends to be safer. It would be even better if you pre-install all the drivers from the virtio ISO before converting the physical disk to a vdisk.
  2. Long reason: current QLC tech uses an adaptive SLC cache (essentially using QLC cells like SLC cells). That means the amount of cache you have depends on how much free space you have. When you run out of SLC cache, the drive reverts to writing QLC directly, i.e. slow. The wear on the cells being used as SLC cache is also quite a bit higher than normal. What it translates to in real life (e.g. as an Unraid cache) is that it (I'm talking about the Intel 660p specifically as that's what I'm familiar with) performs about the same as a SATA SSD on average. (There's a toy sketch of this SLC-cache behaviour at the end of this list.) Adding to that is the fact that the 660p cannot be passed through to a VM as a PCIe device due to Linux kernel conflicts. Of course, if you have run out of SATA ports and must add an SSD at the lowest possible cost, then it's not at all a bad idea. I would just prefer actual NVMe performance and not a glorified SATA drive. DRAM cache is another thing to look out for, as the lack of it is even (way) worse than QLC. I would pick the 660p over a DRAM-less SSD any day. For your use case (i.e. sharing an NVMe among several things including vdisks), anything 3D TLC (e.g. Samsung V-NAND) with DRAM would be good. If you want to pass it through as a PCIe device to a single VM, then you need to research the controller of the SSD, as some don't like to be passed through or require special workarounds with limitations.
  3. Does Handbrake NVENC support HDR? I thought it only does 8-bit.
  4. Tools -> Diagnostics -> attach zip file on your next post.
  5. You can work around this by creating a new VM in (blank) xml, copy-pasting the code from the old VM xml, and changing the UUID + name; it will then work as a separate profile of sorts (a small sketch of this is at the end of this list). You can google "uuid generator" and there are a few websites that will generate one for you. That's how I have 3 different "profiles" for my Windows VM and 2 "profiles" for my Linux VM. The feature you are asking for would be cool but rather complex though. Sort of like an xml snapshot functionality.
  6. Please post it in the specific docker support topic. You are more likely to receive an answer there.
  7. Have you run memtest? Have you done anything to rule out the possibility of hardware failure (e.g. your motherboard / CPU) being the problem? Have you tried the custom kernel with the Vega / Navi patch included? Things usually don't suddenly stop working. Especially in your case, the things-stopped-working was preceded by a corrupted docker image. Data corruption (without accompanying drive failure / error) usually points at a RAM issue (or a CPU issue, e.g. a problem with a pin connecting to the memory controller). I think we have ruled out all of the usual suspects so we kinda have to look at more extreme causes. Perhaps use a trial license of Unraid on another computer to see if you can pass through your Vega or not. Or something like that.
  8. Yes. Yes. Nothing special: I put Plex appdata in a share with cache = only. Thanks for the clarification. I thought everything was controlled by the mover.
  9. You are really overthinking it. But if you need a task to access only a group of disks in your array, you can use the Include functionality of the Share to include only those disks for the share. An example of things that need to be on a single drive is your download temp, i.e. things that need to be processed further. No parity protection is required, and the write-heavy nature means low write speed will cripple it (and in fact it will cripple other activities as well, because of high IO wait causing lag). You said you have 2 SSDs, which we would assume to be running in RAID-1, i.e. a mirror. All other things being equal, a mirror beats parity any day. Moreover, critical data such as your VM vdisk should be backed up. Parity protection is not a replacement for a backup. Cache = only: data is always on the cache. Cache = prefer: data is on the cache; if the cache is full then the mover moves data to the array, and once the cache is freed up, the mover moves it back. I subscribe to Occam's razor, i.e. the simpler solution is usually the right one. I have found cache = prefer to cause trouble, e.g. with the Plex appdata db being half-array-half-cache, so I would rather set things up such that I won't unexpectedly run out of space rather than relying on the mover to "solve" it.
  10. You can add an SSD to the array, with the caveat that it's not officially supported (even though it would work just like a normal HDD). However, note that some SSDs have been reported on here to cause parity errors due to how their firmware does garbage collection / wear leveling. So if you ever do that, I recommend watching your parity checks for errors, even 1 or 2 (because any error means corrupt data if rebuilding). Your "with its own dedicated parity" part isn't possible however. That's multi-array functionality, which is presumably still in development.
  11. Officially the Ryzen 3900X supports: single rank, 2 DIMMs: DDR4-3200; single rank, 4 DIMMs: DDR4-2933; double rank, 2 DIMMs: DDR4-3200; double rank, 4 DIMMs: DDR4-2667. So your 2-DIMM 3200MHz kit theoretically should be ok. However, anything above 2133MHz is an overclock, so if you run into instability running 24/7, drop it to 2667 first and then, if still unstable, do a memtest at 2133MHz. If it can't run stable at 2667MHz, you theoretically have plenty of grounds to return it, but just note that your replacement will not fare much better. If you put the NVMe in the cache pool, for example, then your VM and container can share the space. In that case, your VM will run on a vdisk so you have to specify the size. Same with the docker image. Note though that if you are after the highest and most consistent storage performance for your VM with the NVMe, you have to pass it through (i.e. in full) as a PCIe device. That means no sharing. And do not get a QLC NVMe (e.g. Intel 660p). You can do Nvidia GPU hardware transcoding in the Plex docker. However, it means your VM cannot use that same GPU and you have to run Unraid Nvidia (which is a community-built branch of the official Unraid). If you only have 1 powerful GPU then running Plex in your main VM is probably not a bad idea. I personally have Plex as a docker with CPU software transcoding since I don't need that many concurrent streams. When I need to do many concurrent transcodes, or when it's time critical, I always do it on my workstation VM with a GPU.
  12. Your number 2 and 6 show you don't quite understand the Unraid cache settings. Cache = Yes speeds up writes but it won't speed up reads (except for data that is still to be moved to the array). An unassigned device by default does not use the cache pool - "unassigned" = not in the cache pool or array. Having a separate disk for a separate task kinda defeats the purpose of having an array. Unless a task NEEDS to be on a single drive / the SSD cache, you should spread them out across multiple drives on the array. That will maximize the utilisation of your drives. Then you use the Share functionality to split them into tasks. What NEEDS to be on a single drive? Basically anything that would be crippled by low write speed and/or receives frequent but low-priority IO. What NEEDS to be on the SSD cache? Basically anything for which responsiveness (i.e. random IO) is critical. So it should be something like this:
  VM: cache = only
  bittorrent: unassigned (or cache = only if data is not too large)
  Docker: cache = only
  appdata (I guess this is the Docker appdata): cache = only
  Personal Windows 10 backup: cache = no
  Unraid USB backup: cache = no
  Time Machine for Mac: cache = no
  Movies: cache = no
  Photos: cache = no
  Documents: cache = no (or cache = prefer if data is not too large)
  Minecraft Server (or any other server e.g. Garry's Mod, L4D2...): don't know
  You will notice that I don't have anything at Cache = Yes. For an array with reconstruct write (aka turbo write) = on and modern HDDs, the write speed can quite often exceed gigabit (125MB/s) - the quick arithmetic is sketched at the end of this list. In addition, Unraid (and Linux in general) will automatically use free RAM to cache writes first, i.e. short bursts of writes will be super fast regardless of other settings. That means for most home users, the benefit of using the cache pool as a write cache is rather limited, which usually doesn't justify reducing the lifespan of the more expensive SSDs. Cache = Prefer serves as a fail-safe to move stuff off the cache pool into the array to reduce the chance of it filling up. However, as I mentioned above, usually you don't need to have shares with Cache = Yes, so the chance of filling up the cache pool (which usually happens due to Cache = Yes shares) would not be that high. In fact, things like the docker image, VM vdisks and appdata should not / cannot be moved while being used. But then, besides those 3, there isn't that much more stuff that NEEDS to be on the cache pool anyway.
  13. Whether you should go for 7200rpm (or even an SSD array, actually) depends on the number of streamers and how much you dislike buffering. Just based on my own anecdotal quick tests: with just 1-2 concurrent streamers accessing the same drive, 5400rpm is more than good enough. Once you get to about 4 concurrent streamers accessing the same HDD, you may start to have some buffering with 5400rpm. But it has to be the SAME disk for it to be a problem, so I reckon 5400rpm is still ok enough. I think about 6+ streams is where 7200rpm starts to make some sense. 10+ streams is where an SSD array starts to become justifiable. So if cost is important and you don't have that many streamers, I would say stick with 5400rpm. In terms of 7200rpm drives, I just buy whatever I need that is cheapest from a reputable dealer whenever I need it. Recently they just happen to be IronWolf. The reason I use mainly 7200rpm is that I use the Unraid array mostly as my backup server, so there are a lot of small files, which benefit from 7200rpm. The QVO would be even better than 7200rpm for my backup jobs, but the cost is still too high to justify the benefit.
  14. No, not those settings. Power & Sleep -> Additional power settings -> Choose what the power buttons do
  15. That's why I have been telling people that with modern HDDs and turbo write on, there's no need for a cache drive for a pure NAS use case. The point of having a cache drive is no longer to speed up writes; that really hasn't been the case since docker and VM support were introduced.
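
A toy model of the adaptive SLC cache behaviour mentioned in post 2, in Python. The cache-to-free-space ratio and the throughput figures are made up for illustration; they are not the Intel 660p's actual specs.

# Toy model of an adaptive SLC cache on a QLC SSD (illustrative numbers only).
def slc_cache_size_gb(free_space_gb):
    # Hypothetical: pseudo-SLC cache is a quarter of the free space.
    return free_space_gb / 4

def write_time_s(write_gb, free_space_gb, slc_speed_mbs=1800, qlc_speed_mbs=120):
    # Fast while the write fits in the SLC cache, QLC-slow for the overflow.
    cache_gb = slc_cache_size_gb(free_space_gb)
    in_cache = min(write_gb, cache_gb)
    overflow = max(write_gb - cache_gb, 0)
    return (in_cache * 1024) / slc_speed_mbs + (overflow * 1024) / qlc_speed_mbs

print(round(write_time_s(100, free_space_gb=800)), "s")  # mostly-empty drive: ~57s for 100GB
print(round(write_time_s(100, free_space_gb=100)), "s")  # nearly-full drive: ~654s, i.e. SATA speed or worse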
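And the copy-the-xml-and-change-the-uuid trick from post 5, as a minimal Python sketch using only the standard library (so no uuid-generator website needed). The file names are just examples; the name and uuid elements are the standard libvirt domain XML ones.

import uuid
import xml.etree.ElementTree as ET

# Load the XML exported from the existing VM.
tree = ET.parse("win10-gaming.xml")
root = tree.getroot()

# A second "profile" only needs a unique name and a unique UUID.
root.find("name").text = "win10-work"
root.find("uuid").text = str(uuid.uuid4())

# Paste the result into a new (blank xml) VM in the Unraid GUI.
tree.write("win10-work.xml")
print("new uuid:", root.find("uuid").text)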
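Finally, the quick arithmetic behind the "turbo write can exceed gigabit" point in post 12. The HDD sequential write figures are assumed ballpark numbers, not measurements.

gigabit_mbs = 1_000_000_000 / 8 / 1_000_000  # 1 Gbit/s network = 125 MB/s

# Assumed ballpark sequential write speeds for modern drives (illustrative only).
drives = {"5400rpm HDD": 150, "7200rpm HDD": 200}

for name, speed in drives.items():
    verdict = "faster" if speed > gigabit_mbs else "slower"
    print(f"{name}: {speed} MB/s is {verdict} than gigabit ({gigabit_mbs:.0f} MB/s)")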