Everything posted by testdasi

  1. Looks to be an issue with your USB stick, but not because /usr/local is on the stick. Do a chkdsk of your USB stick on a Windows machine to make sure the file system is ok. Then boot your server using a different USB port, preferably a USB 2.0 port.
  2. Things you might want to try: update your BIOS if not already on the current version; change Power Supply Idle Control to Typical Current Idle in the BIOS; disable C-States in the BIOS.
  3. You need to change the boot order in the xml.
     Step 1: remove the default boot order. Change this:
         <os>
           <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
           <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
           <nvram>/etc/libvirt/qemu/nvram/0bfa2761-e693-d1d1-be74-671b7df7e20d_VARS-pure-efi.fd</nvram>
           <boot dev='hd'/>
         </os>
     ... to this:
         <os>
           <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
           <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
           <nvram>/etc/libvirt/qemu/nvram/0bfa2761-e693-d1d1-be74-671b7df7e20d_VARS-pure-efi.fd</nvram>
         </os>
     Step 2: add a boot order tag to the PCIe device corresponding to your NVMe (which looks to be 04:00.0 based on the xml and PCI device listing). Change this:
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </hostdev>
     ... to this:
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
           </source>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </hostdev>
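If you'd rather script the Step 2 edit than hand-edit the xml, the same change can be sketched with GNU sed. This is only a demo on a scratch copy of the snippet (the /tmp path and standalone file are made up for illustration; on a live system you would edit the VM via the GUI xml editor or virsh edit):

```shell
# Write a scratch copy of the hostdev block (demo only; not a real VM xml).
cat > /tmp/hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
EOF
# Insert <boot order='1'/> right after the closing </source> tag (GNU sed
# interprets \n in the replacement as a newline).
sed -i "s|</source>|</source>\n  <boot order='1'/>|" /tmp/hostdev.xml
grep "boot order" /tmp/hostdev.xml
```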
  4. So many things have changed between 6.5.3 and 6.8.3 that it's impossible to pinpoint the issue. I would suggest you back up your flash and then download the 6.6.0, 6.7.0 and 6.8.3 zip files (use the same link on the Unraid website, just change the version number in the link). Then manually install each version sequentially to see when things stop working, and then go check the patch notes. Also, since it's a Ryzen system, have you made all the necessary stability tweaks in the BIOS?
  5. If you changed the isolation in flash settings, it will also show that banner if you go to the CPU Pinning page. I think there's some config on the CPU Pinning page that saves the boot isolcpus and then compares it to the current settings; if they differ, it shows the banner.
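You can cross-check what the kernel actually isolated at boot against what the GUI claims. A minimal sketch (the /tmp output path is just for the demo; it falls back gracefully on systems that don't expose these files):

```shell
# The kernel's own view of isolated CPUs lives in sysfs, and the isolcpus=
# argument that was passed at boot is visible in /proc/cmdline.
for f in /sys/devices/system/cpu/isolated /proc/cmdline; do
    [ -r "$f" ] && printf '%s: %s\n' "$f" "$(cat "$f")"
done > /tmp/isol_out.txt
cat /tmp/isol_out.txt
```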
  6. I know my Zotac GTX 1070 Mini (and Zotac GT 710 but that's a low-end model not suitable for gaming) can be passed through. I used to boot Unraid with the GT710 and pass through the 1070 to the main workstation VM and the GT 710 to a MacOS VM. Currently, the 1070 is the only GPU in the system (i.e. Unraid boots with it) and it is also passed through to the same workstation VM after boot. I do dump and use my own vbios (watch SpaceInvader One tutorial on how to dump your own vbios). Note: the fact that both are Zotac is completely coincidental. I used to have a small case and the cards that fit happened to be Zotac. Caveat: pass-through is very hardware dependent so without having the exact same hardware, it is impossible to be 100% certain. I know factually Intel is better in terms of latency; however, I can't speak for YOU whether the latency would matter. As mentioned, I can't tell the diff but I have a friend who can. This is totally guessing but I think if you are really good at shooting / twitchy games (i.e. like my friend), you are probably more likely to notice the stuttering.
  7. The process that was killed is mediainfo, which I believe is a docker. It looks kind of like a memory leak to me. Linux kills the process that uses the most memory at the time it runs out of RAM. That is usually the culprit, but not necessarily always. Maybe watch your docker memory usage (in the Docker tab, advanced view) and see if there's anywhere to optimise.
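The kernel logs OOM kills, so you can confirm which process was sacrificed by grepping the dmesg output. The log line below is a fabricated sample just to show the shape of the message; on a live box you would pipe in `dmesg` instead:

```shell
# Fabricated sample of a kernel OOM-killer log line (real ones come from dmesg).
printf '%s\n' \
  "Out of memory: Kill process 12345 (mediainfo) score 901 or sacrifice child" \
  > /tmp/dmesg_sample.txt
# Extract the name of the killed process (field 7, stripped of parentheses).
grep "Out of memory" /tmp/dmesg_sample.txt \
  | awk '{gsub(/[()]/, "", $7); print $7}' > /tmp/oom_proc.txt
cat /tmp/oom_proc.txt   # -> mediainfo
```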
  8. Maybe I flip the question back: why are you insisting on ECC? ECC is not a system requirement for Unraid. Is it good to have? It is! However, it only helps in a very specific scenario (single-bit corruption) which is extremely rare in consumer use, and even when it happens it is not as costly as in an enterprise context. So all this halo effect I'm seeing lately, with people insisting on ECC-or-bust, is rather peculiar. I would say most of the extra stability that people attribute to ECC RAM in consumer use may have nothing to do with ECC at all, but rather with the fact that ECC RAM can't be overclocked. Most consumers read "DDR4 3600MHz" on a non-ECC RAM spec sheet and immediately go to the BIOS and enable XMP and whatnot. A manufacturer-certified overclock is still an overclock, and an overclock is ALWAYS less stable than stock speed. I can always strongly recommend Gigabyte because of the number of times I have seen "don't boot Unraid with it" fix pass-through issues. You have to decide whether that POTENTIAL benefit is worth "giving up" on ECC RAM.
  9. You will not see any real-life impact from PCIe 4.0, at least for now (and for consumer uses). Even an RTX 2080 Ti overclocked to instability won't saturate PCIe 3.0 x8, so 4.0 x8 is more than enough. Latency is an inevitable result of the Ryzen CCX/CCD design. It's not specific to any platform or OS - as long as it's a like-for-like comparison. E.g. you can't compare bare-metal latency vs VM latency (which naturally is always higher). But between 2 VMs or between 2 bare-metal configs, Ryzen (and Threadripper) will always have higher latency. On the subject of Threadripper, latency is worse: having 1 other die to comm with is naturally faster than having 3 other dies to comm with. Not sure about the 6.9 update's impact on latency but, as I said, it's an intentional design compromise so it's impossible to eliminate. You will have problems with the RX 580. It is a known problem child if Unraid boots with it. So if you use a mobo that is not Gigabyte, your only choice is to have it in the 2nd PCIe x16 slot, the HBA in the 3rd slot and another GPU in the 1st slot. Not a big deal if your RX 580 is double-slot width. A big deal if your RX 580 is the 2.5-slot-width variety. Unraid does not need a graphics card at all. That's a misconception when people refer to a card that is "used" by Unraid. It's a function of the motherboard BIOS. If the BIOS allows it, Unraid can boot completely headless i.e. no graphics card at all. All BIOSes that I know of will force graphical boot if there is ANY graphics card plugged in at all. So if you don't want a certain card to be initiated at boot (e.g. the RX 580, so you can pass it through to a VM), the only choice is to have another card that the BIOS would boot with (and thus is "used" by Unraid). You could run KVM / QEMU in any Linux distro, or even Windows Server Hyper-V or VMware ESXi, and it would be exactly the same situation.
  10. It does, but I'm not sure how you can get it out. If you run a db refresh without emptying the trash (which is usually configured to happen automatically after a scan / during housekeeping), it will mark deleted contents as deleted. In the past, there was a plugin that could export the Plex db into csv files, so it was very easy to check. However, Plex completely killed off all plugins quite a while ago, so you will need to know how to work with the Plex db to get that info out.
  11. I haven't seen "PCIe passthrough to a VM" in your list so you are still in the honeymoon period. 🧐 It's like having the first fight with your significant other LOL.
  12. Unfortunately it doesn't have that. Each drive has its own file system so if it's completely gone then there's nowhere else to obtain it.
  13. I don't think you can do 2xGPU + 2xNVMe + 1xHBA with X470. Check the mobo owner manual carefully because in most cases, some slots are deactivated if other slots are occupied. With RAM stick support, again check the manual / spec sheet. If it doesn't say it's supported, assume it isn't. There may be support added in a BIOS update, but ultimately if it's not on the spec sheet, you have no ground to argue for a replacement / return / refund. This particularly applies to 32GB single-DIMM support. It only starts to appear in X570 spec sheets, and while I have heard that some made it work with X470, do you want to take the risk? Also I would recommend getting a Gigabyte motherboard so you have the flexibility to pick any x16 slot for the GPU that Unraid boots with. This provides flexibility with GPU pass-through-ability in case a card would not work with a VM if Unraid boots with it. In terms of latency, it's there and is higher than on an Intel CPU (even when all workarounds have been applied). It's unlikely to translate to benchmark figures because it manifests as inconsistent fps (aka stuttering). Some can't tell the diff (e.g. me) and some can (e.g. my friend, whom I have done a blind test on and I'm 99% sure he can). Side note: with Ryzen, always remember that the best-performing config is not necessarily the most consistent config. The fastest F1 car at Monza isn't necessarily the fastest around Monte Carlo.
  14. Copy-paste your VM xml here and the PCI Device section of Tools -> System Devices. When copy-pasting from Unraid, please use the forum code functionality (the </> button next to the smiley button) so the text is formatted and sectioned correctly.
  15. Yes, it would be fine for Unraid to run on non-enterprise-grade hardware. That's one of the main selling points of Unraid, i.e. no need for ECC, no need for Xeon/Epyc, no need for IPMI. Before spending on the upgrade though, have you done any optimisation? What is your H264 bit rate? Any subtitles? From my experience, maxing out a single core with Plex is usually due to subtitle format (e.g. PGS / VobSub) and/or incorrect core pinning and/or a bad drive (high IO wait -> hang -> 100% load). An alternative you might consider (instead of essentially buying a new server) is hardware transcoding, e.g. get an Nvidia Quadro P2000 - you will have to run the unofficial Unraid Nvidia build, but many run it without any problem at all (+ you will need Plex premium membership). Upgrading to the Ryzen 5 2600 should offer an improvement, but there are some alternatives to consider before you go drastic. Of course, if you are already planning to upgrade (considering your CPU came out in 2011) then just go for it I guess.
  16. If you run RAID-1 then if 1 drive has a checksum error then it can be restored from the mirror. Scrub doesn't mark bad sectors on its own and it definitely is not a sub for pre-clear (or to be exact, it's not a sub for stress testing drives before adding to the array / cache pool). (Also, scrub only tests used space). Note that if it's an SSD then you shouldn't preclear at all. You would be wasting a write cycle at best and depending on the controller, you might actually end up nuking it at worst as the preclear activity confuses the controller wear leveling algo (happened to me so be forewarned). A long SMART test could theoretically mark bad sectors (not applicable to NVMe) but from my experience, they don't tend to work without actual data written. You could use whatever tools that the manufacturer has on their website to test things out (if available) but from my experience, they are way less convenient to use than preclear.
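The scrub workflow mentioned above boils down to two btrfs commands. Sketched here as a dry run that only prints the commands (so the snippet is safe anywhere); /mnt/cache is assumed to be the usual Unraid cache pool mount point:

```shell
# Dry-run sketch: print the scrub commands you would run against the cache
# pool. On a real system, drop the echo and run them as root; "status"
# reports checksum errors found (and, with RAID-1, fixed from the mirror).
POOL=/mnt/cache
for cmd in "btrfs scrub start $POOL" "btrfs scrub status $POOL"; do
    echo "$cmd"
done > /tmp/scrub_cmds.txt
cat /tmp/scrub_cmds.txt
```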
  17. Try turning multifunction on as well (in the same VM Settings page). If they are still not split after that then they can't be split.
  18. What you ask for is essentially to have Mover act on Cache = No and Cache = Only. They would have to be separate options (i.e. two additional options: Cache = No with mover, and Cache = Only with mover). There are currently valid use cases for Cache = No and Cache = Only in which the data exists in both cache and array. You don't want an update to suddenly cause existing users issues (i.e. regression) because things suddenly start to work differently. To be honest though, it's kinda obvious in the Help tips and there aren't that many new users who have the misunderstanding. It's kinda like the red warning that formatting will destroy your data, which Unraid prompts quite visibly, and yet every now and then we still have people doing that when trying to restore a failed drive.
  19. Setting up a Linux VM with remote access isn't too hard so you can build Unraid in the VM instead. Building in Unraid is unlikely to be implemented any time soon officially due to the need to keep Unraid as light as possible, which runs contrary to the number of additional packages required to support building within Unraid.
  20. Get a Gigabyte motherboard unless you are 100% sure you can live with having the P2000 on the 1st PCIe slot and RTX 2080 on the 2nd PCIe slot. Gigabyte BIOS would allow you to pick any PCIe x16 length slot to boot Unraid with so you can put any GPU anywhere. With a non-Gigabyte mobo, if you put the 2080 on the 1st slot (why? a lot of the 2080 are 2.5 slots width) then the mobo BIOS will boot Unraid with it. That will cause you issues passing the 2080 through to the VM. Getting a Gigabyte mobo doesn't guarantee passthrough but it will help a lot. If the reason you picked AsRock was for the ECC RAM support then note that you don't need ECC RAM for Unraid (unlike for example FreeNAS which uses ZFS which basically requires ECC). I would pick the P2000 because it's unlocked with number of streams. With the 1660 you will need to use the (frowned upon) Nvidia patch. The P2000 will work out of the box. And it's single-slot so you can put it at the bottom most PCIe slot (which comes back to the point about Gigabyte mobo allowing you to boot Unraid with a GPU on any of the 3 long slots). Note: will need to install Unraid Nvidia build which is a community build i.e. you need to use the plugin to update Unraid instead of using the official method. 960GB is enough for most users. Note though that 650W PSU may not be enough with all (future) the HDD's in place. Make sure you check the power requirement (e.g. use pcpartpicker). What is "SLI Controller"? You mean HBA controller i.e. for SATA / SAS?
  21. Don't forget to change the format in the VM xml / GUI to qcow2 as well. It's very easy to forget after you finish the qemu-img convert. You don't need SCSI to use qcow2, but you should use it so the drive shows up correctly to Windows as thin-provisioned. Note that you want to install the driver first before changing your drive bus, or you may end up with endless bluescreens at VM boot (if that happens, switch back to SATA and it should boot normally again so you can install the driver). And note that a raw img is also thin-provisioned, just that the size you see is the max size and not the used size.
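For reference, the conversion step itself looks like this. A minimal sketch on a throwaway sparse raw image (the /tmp filenames are made up; on a real system you would point it at your vdisk and it's safest to do so with the VM shut down). It skips gracefully if qemu-img isn't installed:

```shell
# Create a small sparse raw image standing in for a vdisk (demo only).
truncate -s 64M /tmp/demo.img
if command -v qemu-img >/dev/null 2>&1; then
    # -p shows progress, -O qcow2 sets the output format.
    qemu-img convert -p -O qcow2 /tmp/demo.img /tmp/demo.qcow2
    qemu-img info /tmp/demo.qcow2
else
    echo "qemu-img not installed; would run: qemu-img convert -O qcow2 /tmp/demo.img /tmp/demo.qcow2"
fi
```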
  22. Install the Tips and Tweaks plugin which should allow you to change the scaling profile. Pick On Demand. Also check your BIOS and enable things like Intel Speedstep / AMD Cool & Quiet or stuff like that.
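Roughly speaking, what that plugin does is flip the kernel's scaling governor, which is exposed per-CPU in sysfs. A sketch that only reads the current value (the /tmp output path is for the demo, and it falls back gracefully on VMs/containers with no cpufreq support):

```shell
# Read cpu0's current scaling governor if the cpufreq interface exists.
GOV=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$GOV" ]; then
    echo "cpu0 governor: $(cat "$GOV")"
    # To switch (as root): echo ondemand > "$GOV"
else
    echo "no cpufreq interface exposed on this system"
fi > /tmp/gov_out.txt
cat /tmp/gov_out.txt
```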
  23. The Download share should have Cache = Only or be on an unassigned device. Before making drastic config changes, just try having your torrent stuff on cache and see if it helps. From my experience, torrents just plain kill it when their IO is on the array, no matter how much IO.
  24. I meant the AMD Ryzen 5 1600, i.e. your CPU. I could have been clearer, but reread my comment.