Everything posted by testdasi

  1. If you look at the board physically (e.g. a picture on the website), you will see that slot B is too short for a typical NVMe SSD. So even if you get an M->E adapter, there's zero chance of fitting one in there because it would be even longer. Those Key E slots are a cost-saving measure by manufacturers: if they want to produce a cheaper version of the board, all it takes is to not plug in a Wifi module (unlike in the past, when that required physically soldering the Wifi module).
  2. Do I need an SSD?

    You will not notice any difference between an NVMe and a SATA SSD for typical Unraid uses, except for 3 things:
    - Under heavy IO load, you should notice less lag with NVMe (e.g. lower CPU load). That's because NVMe is built with parallelism in mind, so IOWAIT tends to be lower compared to SATA (and M.2 AHCI, to a lesser extent). Obviously part of it is also because NVMe is inherently faster.
    - If you regularly copy very large files, then 1GB/s is perceivably faster than 500MB/s.
    - If you have a gaming VM, it is highly beneficial to pass the NVMe through to your VM as a PCIe device (i.e. the vfio binding method and NOT the ata-id method; see the sketch below). That effectively isolates the NVMe IO from the host (Unraid) IO, which manifests as visibly less lag when the host is under heavy IO load. If you have a spare PCIe SATA controller, you can do something similar, i.e. pass the controller through to the VM for a similar effect, but most people don't have a spare PCIe SATA card lying around plus a free PCIe slot to do that.
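    A minimal sketch of the vfio binding method mentioned above: on recent Unraid versions, ticking the device under Tools -> System Devices writes a config file on the flash drive, roughly like this (the PCI address is an example; check yours first):

        # find the NVMe's PCI address
        lspci -nn | grep -i 'non-volatile'
        # /boot/config/vfio-pci.cfg - binds the device to vfio-pci at boot
        # so Unraid never touches it and it can be passed to the VM
        BIND=0000:01:00.0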
  3. Unraid has zero role in that. Deciding which device is used during boot is strictly controlled by the BIOS. You can try booting Unraid in legacy mode (i.e. non-UEFI); then hopefully the BIOS won't initialise the dedicated card.
  4. No. That would be mathematically impossible with just P+Q parity alone. Parity correction only applies to one (or two) self-identifying failure(s), i.e. you have to know in advance which block (in Unraid's case, disk) failed. So a parity error can only tell you there is a failure; you have to go identify it yourself. You might be confusing Unraid parity with RAID + checksum filesystems (e.g. ZFS / BTRFS). For ZFS / BTRFS, a scrub not only verifies parity but also the checksum of the blocks. So if there's a parity error, it relies on the checksum to identify the wrong block (actually I think it works the other way round, i.e. it identifies a checksum error on a block and rebuilds the parity that depends on that block). A way to accomplish what ZFS / BTRFS RAID do is to have the array disks use the BTRFS file system (instead of XFS). That way, when you have a parity error, you can just run a scrub to find out which disk has the error, and if it's serious you can then rebuild the disk (see the sketch below).
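    A quick sketch of that scrub workflow, assuming the BTRFS array disk is mounted at /mnt/disk1:

        # start a scrub on the BTRFS-formatted array disk
        btrfs scrub start /mnt/disk1
        # check progress and the error summary
        btrfs scrub status /mnt/disk1
        # per-device read/write/corruption counters
        btrfs dev stats /mnt/disk1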
  5. IHM (IronWolf Health Management) is a marketing gimmick that doesn't really add any value over just monitoring your HDD's SMART attributes. The fact that it isn't available on Seagate's enterprise-grade line-up is a rather major clue that it isn't as good as the name suggests.
  6. That should NOT be attempted. Exposing the Unraid GUI to the Internet is asking for trouble since Unraid was never intended for such use. You should set up a VPN (e.g. WireGuard, included via a plugin, or OpenVPN) and use that to access the GUI.
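    For reference, the client side of a WireGuard tunnel is only a handful of lines; Unraid's plugin generates this for you, but it looks roughly like this (keys, addresses and the endpoint are placeholders):

        [Interface]
        PrivateKey = <client-private-key>
        Address = 10.253.0.2/32

        [Peer]
        PublicKey = <server-public-key>
        Endpoint = your-ddns-name.example.com:51820
        AllowedIPs = 10.253.0.0/24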
  7. Either is fine. It really depends on what you have at hand and how big your main cache pool is. The Plex db generally doesn't require dedicated storage since there isn't enough load on it for that to be an issue. So it comes down to free space.
  8. There was a lot of complaining back when whatever organisation decides USB naming picked that 3.1 Gen 1 / Gen 2 thing and confused the crap out of everyone. Basically:
    - USB 3.2 Gen 1x1 = USB 3.1 Gen 1 = USB 3.0 = "real" USB 3.0. If it doesn't say USB 3.1 Gen 2 explicitly then it's Gen 1; most Type A devices are Gen 1.
    - USB 3.2 Gen 2x1 = USB 3.1 Gen 2 = "real" USB 3.1
    - USB 3.2 Gen 1x2 / 2x2 = "real" USB 3.2
    Also, running a USB 3.0 stick in a USB 2.0 port is sufficient to reduce the chance of overheating. Albeit only anecdotal, I have been using the same USB 3.0 micro stick (Sandisk Ultra Fit) ever since my very first Unraid server years ago.
  9. With a gaming VM, it would be a good idea to pass through an NVMe as a PCIe device. It's not about additional performance but rather reducing performance variability: with gaming it's not just about max performance but about the most consistent performance (i.e. less lag). Having 2 SSDs in the cache pool running as RAID-1 for the test / dev VM vdisks would be sufficient; there's no need to dedicate an SSD to a single VM unless you really need it. Plex isn't snappier with its db on an NVMe. It depends on how much space you need for the VMs; usually people share the same cache pool between the docker image, appdata and VM vdisks. Don't think of the Unraid "cache" as being a "cache"; it's just a fast general storage pool. If you are happy enough with your array write speed (optionally with turbo write turned on), which most users with modern HDDs would be, then there's no need to have a cache pool as a write cache.
  10. @Pinch: please attach the diagnostics zip (Tools -> Diagnostics -> attach the whole zip file) but AFTER doing the common steps below. @berta123: please start a new topic with your own details and diagnostics zip, again AFTER doing the common steps below. Common steps:
    1. Boot Unraid in Legacy mode (go to Main -> Flash and scroll to the bottom to see what mode Unraid booted in. If it doesn't say Legacy, make sure "Permit UEFI boot mode" is unticked, save, reboot and recheck).
    2. If you have another graphics card (either an iGPU or a dedicated card that is not the one to be passed through), make sure Unraid boots with this and NOT the card to be passed through. Connect a monitor either to the onboard port (if iGPU) or the other card and be 100% sure Unraid boots with that.
    3. Start a brand new template (don't be lazy and edit an existing one; start a new one). Pick Q35 machine type (latest available version) + OVMF + Hyper-V On + VNC graphics + everything else the same. If Windows doesn't boot (e.g. because it was installed with SeaBIOS), then reinstall Windows.
    4. Once Windows boots successfully, turn on RDP if possible. If not possible, install a VNC server solution. Note: VNC graphics and a passed-through GPU don't work with each other, so you can only pick one; hence this step, to enable RDP / VNC from within the VM to check stuff.
    5. Report in your post exactly how you obtained the vbios file + whether you have watched SpaceInvader One's tutorials on Youtube on that topic (and if you haven't watched them, watch them).
    6. Connect a monitor to the graphics card to be passed through. Some cards won't initialise without a monitor attached.
    7. Edit the template, remove VNC graphics and add the graphics card + HDMI audio, but without the vbios. If it doesn't work, RDP / VNC in, check the error code, install drivers etc.
    8. If (7) doesn't work, edit the template with the vbios and report back.
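    For step (1), besides the Main -> Flash page, you can also confirm the current boot mode from the Unraid terminal (a generic Linux check, not Unraid-specific):

        # the efi directory only exists when the system booted via UEFI
        [ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "Legacy boot"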
  11. That's normal. The CPU thread is used to load data to and from the GPU, and there's a substantial amount of data to load. That's why it's important to pin the right cores for the F@H (Folding@Home) docker, to prevent it lagging the important stuff.
  12. Neither. You will get 500 x 3 / 2 = 750GB of space with the default BTRFS RAID1 profile.
    - If you rebalance to RAID0, you will get 1.5TB.
    - If you rebalance to RAID5, you will get 1TB.
    - If you rebalance to RAID1C3, you will get 500GB.
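    The rebalances above are done with balance convert filters; a hedged sketch, assuming the pool is mounted at /mnt/cache:

        # convert data to RAID5 (metadata kept at RAID1 here)
        btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/cache
        # or three-copy mirroring (RAID1C3, needs kernel 5.5+)
        btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/cache
        # verify the resulting profile and usable space
        btrfs filesystem usage /mnt/cache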
  13. No. Boot Unraid with the iGPU and pass through the dedicated GPU to the gaming VM. Then configure the docker to use the iGPU for hardware transcoding (see the sketch below). The other VMs do not use any GPU and can only be accessed remotely (e.g. via VNC or RDP). You shouldn't pass through the iGPU to any VM: (a) it's unlikely to be possible and (b) once passed through, you will not be able to use it for the docker (e.g. transcoding). 4 cores is enough for a normal gaming VM.
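    For the iGPU transcoding part, the usual recipe on the forum is to load the Intel driver at boot and hand /dev/dri to the Plex container; a sketch (the go file path is standard, the extra parameter goes in the docker template):

        # append to /boot/config/go so the iGPU driver loads at boot
        modprobe i915
        chmod -R 777 /dev/dri
        # then add this as an Extra Parameter on the Plex docker:
        #   --device=/dev/dri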
  14. If you use the lsio docker, then the nginx config folder should have a lot of ".sample" files. Look for nextcloud.subdomain.conf.sample, rename it to nextcloud.subdomain.conf (e.g. along the lines sketched below), then open it in an editor and follow the instructions. If you aren't sure how to do the config, ask in the letsencrypt support topic.
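    A hedged sketch of that rename, assuming the default lsio appdata path (adjust to wherever your letsencrypt config lives):

        cd /mnt/user/appdata/letsencrypt/nginx/proxy-confs
        cp nextcloud.subdomain.conf.sample nextcloud.subdomain.conf
        # follow the comments inside the file, then restart the container
        docker restart letsencrypt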
  15. One thing at a time. Yes, you must have port-forwarding for it to work. Once you set up port-forwarding, start the Letsencrypt docker and test protech.my to make sure you arrive at the default NGINX page (instead of e.g. a Cloudflare error). Then go to the nginx config folder and do the .conf file.
  16. How do you plan to access the VM? Through another physical computer over remote desktop? If you want to access the VM using a display attached to the server itself, then you need a dedicated graphics card passed through to a VM, and you use that VM to access the other VMs. Reasons to recommend Intel: the iGPU helps a lot with passing through another GPU to a gaming VM + Intel generally has better single-core performance + Intel doesn't have the inherent latency from the CCX/CCD design of Ryzen. Reasons to recommend AMD (Ryzen): better bang for your buck + Intel just annoys people with their pattern of anti-consumer behaviour (e.g. most recently, artificially locking memory speed on budget chipsets) + if you have a graphics card for Unraid (e.g. a P2000 for Plex docker hardware transcoding) then the iGPU benefit isn't important.
  17. LT has only officially said they are considering ZFS. If it's just integration with the ZFS plugin (i.e. everything else manual), then it's the same as just using the ZFS plugin, so there isn't anything to consider. That means don't expect it in the GUI anytime soon. Specifically for 6.9.0, I don't think it will have ZFS, for the reasons I already posted here. I think you also misunderstood a few things:
    - If you don't want to use the array at all, plug in a USB stick, assign it as disk1 and Bob's your uncle. No need to waste a HDD slot. You can file a feature request to expand the one-device-in-array requirement to one-device-in-array-or-cache, but if you run pure ZFS then that wouldn't make any difference.
    - When ZFS is integrated, don't expect it to replace the array either. The array is a primary feature of Unraid; it's why it's called "Un"raid, why it's such a good NAS OS for media storage, etc. The Mover is something users will have to "learn" (more like familiarise themselves with) if they want to use the array. Having ZFS isn't gonna change that.
    - Multiple arrays is not the same as multiple pools and wouldn't be an ancillary benefit of ZFS integration. Even considering multiple pools, I don't see how that is an ancillary benefit of ZFS integration; 6.9.0 has multiple pools without ZFS.
    - BTRFS has quotas, they're not ZFS-exclusive. BTRFS has snapshots, they're not ZFS-exclusive.
    AFAIK, there are really just 2 key ZFS features that fundamentally cannot be replicated elsewhere. Zvols: KVM/QEMU can use a vdisk as an alternative to a zvol, so they're not essential. Write atomicity: important to those who run RAID5/6 pools, but as someone who has experience recovering from both ZFS and BTRFS RAID5 failures, I can tell you it's not essential either. And implementing ZFS in Unraid isn't as detriment-free as people seem to assume. For example, there's an officially reported bug in ZFS which makes it not respect isolcpus. Sure, lots of things don't respect isolcpus, but specifically with ZFS it causes severe lag under heavy IO if the cores are shared with a VM. That makes a ZFS pool a big no-no for those who want the most consistent performance, e.g. a gaming VM, which is a major use case for Unraid. FreeNAS is based on FreeBSD, which officially says "Note: VGA / GPU pass-through devices are not currently supported." So I'm guessing that's why nobody paid attention to the ZFS bug: FreeNAS users don't even have the gaming VM use case. I'm not saying there's no reason to support ZFS, e.g. to attract FreeNAS users. But IMO it's just another feature on the long wish list and, given the pros/cons, there are other features that should have higher priority and/or can be accomplished with less effort.
  18. Wait for johnnie.black to reply. He's the expert on this sort of stuff. Where did you read that the "order of drives in the array/cache/parity is exactly the same"? From my own experience, as long as I assign the parity disk to a parity slot, cache disks to cache slots and data disks to data slots, all was well. The order didn't matter for me - in fact I have rearranged disk order many times without any issue at all. I thought it would only cause data loss if, for example, you put a data disk in a parity slot.
  19. You can, but you don't have to. There are 6.9.0 beta builds already released for Unraid Nvidia (and ICH777-compiled builds over in the ICH777 plugin support topic). I vaguely remember beta22 or beta24 had an old Nvidia driver that didn't work, but I'm pretty sure beta25 has the latest driver, which should work.
  20. Try turning on "Bypass single root folder" and "Shared books are in a Calibre library" in the Advanced settings. PS: your questions are probably better asked in the Ubooquity support topic, as they are not specific to the docker.
  21. Yes. Voila. That's assuming you are not using some super exotic H265 profile, though; the vast majority of things will work seamlessly. The LSIO plugin automates the installation of the Unraid Nvidia build that was built by the LSIO team. If your only intention is to use Unraid with Nvidia support, then that probably is the easiest point to start. Note that after you install the plugin, you still have to go to Settings -> Unraid Nvidia to initiate the download and install the custom build. The ICH777 plugin is intended for anyone wanting to compile their own kernel (i.e. in conjunction with the ICH777 custom kernel compile docker) with things beyond just Nvidia drivers. It doesn't automate the installation of the kernel, so you will have to do it manually. With both, it's critical to note that you should NOT update Unraid using the official GUI; you basically have to wait till the appropriate custom build has been released. It matters in a sense: your docker should pin the cores corresponding to the CPU to which the card is connected (see the sketch below).
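    To check which cores correspond to the card (for the pinning point above), something like this from the Unraid terminal (the PCI address is an example):

        # find the GPU's PCI address
        lspci | grep -i vga
        # which NUMA node the card hangs off (-1 means single-node system)
        cat /sys/bus/pci/devices/0000:01:00.0/numa_node
        # which cores belong to each node
        lscpu | grep -i numa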
  22. Disabling the proxy means the normal DNS process is used, which has a lag between your update of the A record and when it becomes effective, so perhaps that is why it didn't work for you. Enabling it means traffic always routes through Cloudflare first (run a DNS check and you will see a Cloudflare IP instead of your actual IP). That means any update to the A record (you might even say it's a "virtual" A record) on Cloudflare is effective practically immediately. The whole point of using Cloudflare DNS is its proxy capability, so your actual IP isn't revealed (e.g. to avoid DDoS); there is really no reason to disable it.
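    You can see the proxying from any terminal (the domain is a placeholder; dig ships with bind-utils / dnsutils):

        # with the proxy (orange cloud) ON, this returns Cloudflare edge
        # IPs, not your actual home IP
        dig +short yourdomain.example.com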
  23. Your post suggests you were reckless with the server, e.g. doing things without understanding the implications / reasons behind them. So your complaining doesn't change the fact that you (and your users) really only have yourselves to blame for the blunder.
    - You seem to refer to "huge" and "small" as the physical size of the stick. It has been widely recommended on the forum to avoid small sticks (e.g. the micro kind) because they have a tendency to overheat. So changing to a small stick (while having a good working large stick) was already not something you should have done. USB stick replacement is intended as a last resort, i.e. for when the stick is broken. And in most production environments, the best practice is "if it ain't broke, don't fix it".
    - Unraid boots and runs fine with USB 3.0 sticks and USB 3.0 ports. It has nothing to do with Unraid "support". The recommendation to use a 2.0 stick and/or 2.0 port is to slow things down to reduce the probability of overheating. Some old motherboards also don't boot from a USB 3.0 port, so obviously use the USB 2.0 port in that case.
    - Your "other modern day devices" backhanded comment is typical blame-shifting nonsense. USB sticks are modern day devices. You can complain about why Unraid boots off a stick because USB sticks aren't reliable and so on, and that would be fine. But please don't blame-shift your own blunder. Having said that, the LT folks are understanding, so just be patient and wait. You probably haven't had to deal with the true bureaucracy of big corporations to see the "it's your problem, not our problem" response.
    With regards to how I manage it:
    - I don't change the stick willy-nilly and reserve the replacement for when it's an emergency. I have gone through several complete hardware changes without even having to resort to the replacement process, i.e. it's still the same stick I have been using since my first Unraid server a very long time ago.
    - I have a backup stick ready and periodically synced to my main stick so I can promptly replace it in an emergency (see the sketch below).
    - I use 3.0 micro sticks because of aesthetics, but I understand the tendency to overheat, so I use a 2.0 port to slow things down (which reduces heat output).
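    As for the periodic sync mentioned above, a minimal sketch assuming the spare stick is mounted at /mnt/disks/flashbackup (e.g. via Unassigned Devices):

        # mirror the live flash drive to the spare stick
        rsync -a --delete /boot/ /mnt/disks/flashbackup/
        # if the spare ever has to boot, run the make_bootable script
        # that ships on the flash drive (per the Unraid docs)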
  24. The safe answer is you shouldn't be deploying beta on any production server. Beta is meant for testing only. The anecdotal answer is I'm running 6.9.0-beta25 on my production server with no issue at all.
  25. I updated from 6.8.3 and there was no change to the cache pool. You might want to provide details as to what you have done + attach diagnostics (Tools -> Diagnostics -> attach zip file).