Everything posted by testdasi

  1. +1 Good idea. Sort of an online backup functionality. "Cloned" is probably a better name, since "both" can be interpreted differently. This would be a great use case for a high-speed NAS (e.g. for video editing). We could have a RAID0 cache pool for max speed for WIP files. These are automatically backed up to the slow array nightly using the mover, giving (some) protection against RAID0 failure. Once done, WIP files can be moved to the archive (again on the slow array). There is a potential complication with shfs, as the move operation would be more involved. Given the arrangement can easily be achieved with the User Scripts plugin cron'ing a nightly bash script (a sketch follows below), it probably isn't a high priority, but it would be useful nonetheless.
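     A minimal sketch of the kind of nightly script I mean, assuming the WIP share lives on a pool named cache and gets mirrored to disk1 (share, disk and log names are placeholders only):

        #!/bin/bash
        # Nightly clone of the WIP share from the RAID0 cache pool to the array.
        # Paths are examples - adjust to your own shares/disks.
        SRC=/mnt/cache/WIP/
        DST=/mnt/disk1/WIP_backup/

        # -a preserves permissions/timestamps, --delete keeps the copy a true mirror
        rsync -a --delete "$SRC" "$DST" >> /var/log/wip_clone.log 2>&1

     Schedule it in the User Scripts plugin with a custom cron entry (e.g. 0 3 * * * for 3 AM).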
  2. An eSATA to SATA adapter / cable can easily be found on Amazon / ebay / alibaba etc. You will have a bit of work to run the cable, but other than that it should be easy to convert (you will need to run SATA power from the PSU separately, just like any other SATA drive). Do you have a spare PCIe slot? You could get an HBA / SATA card. There's a topic on here in which johnnie.black did speed tests on many adapters (so the natural implication is that they work well, to various extents, with Unraid). Using USB isn't even a last resort; it should be avoided at all costs.
  3. If you set up syslog to mirror to flash, then the file on the flash drive should contain both pre- and post-crash entries. Just attach the file. Crashes like that are hard to diagnose though. We'll see if there's anything useful in the log. While you are at it, attach the diagnostics zip (Tools -> Diagnostics -> attach the whole zip file). My first hunch is the PSU. Second hunch is RAM.
  4. There are 2 lsio hydra2 dockers on the app store. The older one points to linuxserver/hydra2, the new one points to linuxserver/nzbhydra2. What's the difference between them? I'm using the old one - do I need to change to the new one?
  5. The issue is with your testing method. dsync will produce artificially and unrealistically slow results. Try testing with something like rsync --progress or rsync --info=progress2 (see the example below). And you need to use a test file that is large enough, minimum 8GB (preferably larger than your RAM). You have to remember that dd is 45 years old. If you introduce any modern intermediary layer (e.g. Unraid /mnt/user, mergerfs etc.), it tends to crap itself.
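     A minimal sketch of that kind of test, assuming a 16GB test file and that both source and destination directories already exist (sizes and paths are examples only):

        # Create a test file larger than your RAM so the page cache can't flatter the numbers.
        dd if=/dev/urandom of=/mnt/cache/test/bigfile.bin bs=1M count=16384

        # Copy it through the path you actually care about and watch the throughput.
        rsync --progress /mnt/cache/test/bigfile.bin /mnt/disk1/test/
        # or, for a single overall progress figure:
        rsync --info=progress2 /mnt/cache/test/bigfile.bin /mnt/disk1/test/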
  6. 1. It depends on the games. Most games won't need more than a quad core at 3GHz. So assuming each VM has 4 cores, you are left with 2 free cores on the 7900X/9900X. Then you need to reserve 1 core for Unraid, so you are left with 1 core for the game server. That means either you run your game server on 1 core (2 threads) or one of your 2 VMs will need to drop down to 3 cores. As long as you don't overclock (and you shouldn't when hosting VMs and/or running a NAS), modern CPU efficiency (together with modern electricity costs) is basically not something you need to worry about. The TL;DR is: if you have to worry about electricity costs, you probably need to worry more about essentials like food and water.
     2. Yes, Unraid needs at least 1 core for itself. You have to remember NAS functionality and virtualisation need CPU power too! For 1 VM you might be able to live with just half a core reserved for Unraid (i.e. 1 thread), but with 2+ VMs - forget it. For a gaming VM, assign both threads of the same physical cores; that will give you the best performance consistency (see the sketch after this post for finding the thread pairs).
     3. No. The Steam library should be on the array, not on a vdisk, and a vdisk can't be shared among concurrently running VMs. There are plenty of guides on how to integrate Steam with Unraid, so have a quick look around Youtube (hint: start with the SpaceInvader One channel).
     4. The cache that you are thinking of, no.
     Having said all of the above, if you insist on the X299 Taichi + 2x Radeon VII graphics cards then I would say life will be hard. You will likely need (and even with that, no guarantee): the Vega patch - you will need to compile the Unraid kernel with it. There's a forum user who compiled 6.8.3 with the vega patch (among other patches), so you might be able to use that, but there's no guarantee the user will do the same for future versions. You will also need to dump your vbios (rom file) for the graphics card, and to boot up with a graphics card that is not the Radeon VII (preferably one that won't be passed through to another VM). This may or may not be easy since the Taichi will always pick the 1st PCIe x16 slot to boot (Gigabyte motherboards allow you to pick any x16-length slot to boot with).
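     A quick way to see which logical CPUs are threads of the same physical core, so you can pin matched pairs to the gaming VM (the exact numbering will differ on your CPU):

        # Show the logical-CPU-to-core mapping
        lscpu -e

        # Or list the hyperthread sibling pairs directly; a line like "4,14" means
        # CPUs 4 and 14 are the two threads of one physical core.
        cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -u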
  7. I'm a simpleton, so rsync --progress or rsync --info=progress2 it is. And I use real data. It's not too hard to find a 40-50GB Linux ISO nowadays (or just use dd to create a really big test file and rsync that).
  8. The general recommendation is "it depends". Your company-specific needs carry heavy implications as to whether it is a suitable product or not. I think it would be better if you contacted LimeTech directly.
  9. And what is your current directory when executing the dd command? I have found dd + dsync gives unrealistically low results. I think it's because dsync forces each write to be confirmed before the next one starts, so your 32068 blocks are done one by one - the high IOWAIT is the 32k times dd had to wait for the data to be confirmed as written. In reality, and particularly with NVMe, things are done in parallel and/or in aggregation, so it's a lot faster (see the comparison below). Have you also done a test on your other server to see if it is capable of more than 250MB/s read?
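     For illustration, the difference between syncing every block and syncing once at the end (block size, count and paths here are just examples):

        # Every block waits to be confirmed on stable storage before the next is issued -
        # this serialises the workload and shows up as IOWAIT.
        dd if=/dev/zero of=/mnt/cache/testfile bs=512K count=32068 oflag=dsync

        # Same amount of data, but only one flush at the very end -
        # much closer to how real workloads hit an NVMe drive.
        dd if=/dev/zero of=/mnt/cache/testfile bs=512K count=32068 conv=fdatasync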
  10. Turn on syslog mirroring to flash (Settings -> Syslog Server), then try to crash your system and attach that syslog instead. It should span the crash event, so it should provide a few more clues. It's also worth waiting at least 5 minutes before hard powering down, just to be sure everything that can be logged has been logged. If you think DelugeVPN is the cause of the freeze, then in your test please start it first and wait 5 minutes to see if it freezes.
  11. The reason to recommend using the Mover is that it is fail-safe. Users may not have a sufficient understanding of how Unraid shares, the array and the cache work, and thus make mistakes that can potentially cause data loss. If you know what you are doing, you can copy / move data manually. Personally I have always done it manually, but then I am pretty OCD when it comes to backups, so I can afford to make mistakes.
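     If you do go the manual route, the general idea looks something like this (the share name is a placeholder; verify the copy before deleting anything, and don't mix /mnt/user with /mnt/cache or /mnt/diskX paths in the same command):

        # Copy a share's data from the cache pool to a specific array disk,
        # bypassing shfs by using the disk paths directly.
        rsync -av /mnt/cache/Media/ /mnt/disk1/Media/

        # Only after checking the copy is complete, remove the source from the cache.
        rm -r /mnt/cache/Media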
  12. "Considering ZFS" is not the same as "replacing SHFS with ZFS". And I have not expressed any doubt about the fact that SHFS has performance limitations - because I know of, and have implemented workarounds for, those limitations. My understanding is that this consideration is for the cache pool; SHFS is still the engine behind the array (and shares). Whatever "imagining" you want to do, you can't change the fact that shfs = Unraid, so the chance that Limetech would abandon it completely is rather low. You have also ignored how integrated ZFS pooling is with its underlying file system, which is RAID-based. Selectively implementing the pooling on a different (non-RAID) file system, and then adding parity calculation on top, is not going to be simple (assuming no performance penalty). I'm not saying whether it is possible or not possible to do, nor whether it should or should not be done. I'm saying that the SHFS-less ZFS-based product you are imagining is not going to be called "Unraid". And I'm completely ignoring the fact that switching to XFS seems to resolve the issue the OP is seeing - suggesting the bug is with integrating the BTRFS RAID pool into Unraid and not with SHFS (and directly accessing /mnt/cache to bypass SHFS has been a known workaround for quite some time).
  13. That's like asking macOS to throw away its kernel and use the Linux kernel instead. SHFS = Unraid. You probably meant replacing BTRFS with ZFS.
  14. Ignore the triangles. They are harmless. They stop being annoying after a while.
  15. The xml config looks correct. Try dumping a vbios and using it. Also try an older version of the AMD driver (check for old versions on the AMD website). Newer AMD drivers have been known to break passthrough.
  16. Don't bother with SATA M.2. It's just a waste of a SATA port, with the M.2 form factor's heat issues on top. You are asking a short generic question without specific details, which I suspect won't yield any useful answer for you. Also "measurable" is entirely different from "perceivable" and "applicable". It can be measurably faster but not perceivably so in your case, because your use cases are not applicable.
  17. Your effort to upgrade the USB stick was probably misguided. Unraid OS is loaded into RAM at boot, so with a USB 3.0 stick all you will save is about 10 seconds of boot time, and that's about it. I think you should just continue with the 2.0 stick.
  18. Try removing your USB cards and testing your VM without a card physically plugged in (with peripherals plugged into the motherboard instead).
  19. Nothing to do with the trial. You have a network issue. Install the Tips and Tweaks plugin and see if tweaking network settings helps (e.g. offloading and stuff like that). And obviously check your router settings. Any QoS restrictions?
  20. It is too much, because /mnt/user will duplicate /mnt/cache and /mnt/disk1. For the fastest restore from backup, rsync /mnt/cache + /mnt/disk# (where # is a number), with each location having its own directory on the backup drive. I know rsync has ways to exclude stuff etc., but it's better in your case to have 2 separate lines in the script, 1 for /mnt/cache and 1 for /mnt/disk1 (see the sketch below). That way, if there's an interruption / crash / error etc., you know for sure what needs to be rerun.
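     A minimal sketch of what I mean, assuming the backup drive is mounted at /mnt/disks/backup (adjust the mount point and disk names to your setup):

        #!/bin/bash
        # Back up the cache and disk1 as two separate runs, so a failure in one
        # doesn't leave you guessing what still needs to be redone.
        rsync -av --delete /mnt/cache/ /mnt/disks/backup/cache/
        rsync -av --delete /mnt/disk1/ /mnt/disks/backup/disk1/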
  21. You need to attach diagnostics (Tools -> Diagnostics -> attach the full zip file).
  22. Some pointers:
     You shouldn't run your memory at 3200MHz (even if the RAM is rated for such speed). A manufacturer-rated overclock is still an overclock, and overclocking is always less stable, which isn't ideal for a hypervisor host.
     There is no graphics emulation in Unraid. If you need graphics-card-like capabilities then you need to pass through a graphics card to the VM.
     Threadripper should have enough lanes for 4 GPUs. The issue may come with the 2080, because a lot of the RTX cards out there are 2.5-slot-width varieties, which means you can't fit 4 cards in the typical slot arrangement. Make sure to get 2-slot-width cards.
     Get a Gigabyte motherboard, because you are embarking on a rather "interesting" adventure, so any quality-of-life improvement will be useful. You almost certainly will need to dump your vbios rom file (one way to do that is sketched below) and/or change which graphics card the BIOS boots with. The Gigabyte BIOS has the Initial Display Output setting that allows you to pick any x16 slot to boot with (instead of having to physically swap the card to the first PCIe x16 slot) - check the owner's manual.
     Expect compromises. A VM is never the same as bare metal.
     Watch the SpaceInvader One youtube channel for tutorials and guides.
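     One common way to dump the rom via sysfs, just as an illustration - the PCI address 0000:0b:00.0 is a placeholder for your own card's address, and SpaceInvader One has a guide/script that handles the fiddly cases:

        # Find your GPU's PCI address first
        lspci | grep -i vga

        # Enable reading of the ROM, dump it to the isos share, then disable it again
        echo 1 > /sys/bus/pci/devices/0000:0b:00.0/rom
        cat /sys/bus/pci/devices/0000:0b:00.0/rom > /mnt/user/isos/vbios.rom
        echo 0 > /sys/bus/pci/devices/0000:0b:00.0/rom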
  23. For massive data migrations, I remove the parity drive and copy files disk by disk (so no single disk receives multiple IO streams simultaneously). Obviously there is no parity protection, but I accepted the risks.