Everything posted by testdasi

  1. +1. I think this bug has always been there, as far back as I can remember.
  2. If I understand what you are doing: your 8 even cores = 0 2 4 etc. That covers core 0, which Unraid tends to use for its own things, so it will naturally have a bit lower performance than 1 3 5 etc. Either set of 8 even / odd cores should by itself be more powerful than your previous 10 cores, because your 10 cores = 5 physical cores with hyperthreading while your 8 even / odd cores = 8 physical cores without hyperthreading. The latter should always be faster than the former, especially since you turned off all other activities i.e. dockers and VMs. Based on what you reported, it looks like your pairing is 0 + 1 = 1 pair. You can double-check that on the Unraid dashboard, or from the console (see the sketch below). Btw, what you did is one way to maximize performance. I did a similar thing (pinning 3 out of every 4 odd cores to my workstation VM) + use Process Lasso (an app recommended by Wendell the Level1Tech guy) to make sure my games, Plex and work stuff generally don't overlap.
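     A quick way to double-check the pairing from the console, in case you prefer that over the dashboard - this is just generic Linux sysfs, nothing Unraid-specific:

        # print each logical CPU and its hyperthread sibling(s)
        for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
          echo "$(basename $cpu): $(cat $cpu/topology/thread_siblings_list)"
        done

     If cpu0 reports "0,1" then 0 + 1 is indeed one pair; if it reports "0,8" you have the other numbering scheme.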
  3. What do you mean by "ipv4 address is bogus"? From a fresh install, that IP should be assigned by your router, so if it's bogus then it's a problem with your router's DHCP? Also, you can boot into GUI mode (it's one of the options at boot), which will let you access the GUI without needing a 2nd computer.
  4. There isn't. RAID1 only allows you to lose up to half of your disks. Losing 2 in a 3-disk RAID 1 = more than half = lost data.
  5. Have you double checked that it actually gives you a new external IP i.e. you are routing through the VPN?
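     A quick way to check, assuming curl is available inside the container (swap in wget if not) and using "binhex-delugevpn" purely as an example container name:

        # external IP as seen from inside the VPN docker
        docker exec binhex-delugevpn curl -s ifconfig.me
        # external IP as seen from Unraid itself
        curl -s ifconfig.me

     If the two match, you are not routing through the VPN.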
  6. Maybe try the easy way out first: unplug the cable from the current port and plug it into the other port. Any particular reason why you don't have bridging enabled? Does your router / switch support 10GbE? Is your cable Cat 6? You can also check what the link actually negotiated (see the sketch below).
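     To see the negotiated link speed rather than guessing, ethtool from the Unraid console will tell you - eth0 is just an assumption, use your actual interface:

        # look at the "Speed:" and "Link detected:" lines
        ethtool eth0

     If it reports 1000Mb/s instead of 10000Mb/s, the bottleneck is the cable / port / switch, not Unraid.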
  7. Next time the VM shuts itself down:
     - Tools -> Diagnostics -> Download -> attach the zip file here.
     - From your VM log, look for the most recent line that looks like this: "terminating on signal 15 from pid [a pid number]", then go to Tools -> Processes, look for that pid number and copy-paste that line here. If you can't find it, report that you can't find it + the pid number. (A console shortcut is sketched below.)
     - On the Main tab, take a screenshot of your array and cache (and any Unassigned Devices) showing how much free space is available on all disks.
     Also, for text that you copy-paste from Unraid to the forum, please use the Code button (the one that looks like </>). It makes things easier to read. Expectation management: there are many things that can cause a VM to shut itself down, so keep your fingers crossed that there's an obvious cause.
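     The console shortcut, if you prefer it over scrolling the log in the GUI - the log path is the standard libvirt location and "Windows 10" is just a placeholder for your VM name:

        # find the most recent "terminating on signal 15" line and the pid that sent it
        grep "terminating on signal 15" "/var/log/libvirt/qemu/Windows 10.log" | tail -n 1
        # then look that pid up (or use Tools -> Processes as above)
        ps -p [the pid number]

     This is just a sketch; if your log lives somewhere else, grab it from the diagnostics zip instead.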
  8. If I were you, I would install the Dynamix File Integrity plugin to watch for corrupted files. That's an easier way to do it. If there's a parity error but all the files seem OK, then it might very well be the parity disk. Maybe it wasn't built correctly?
  9. NIC link is down, which suggests a NIC / cable / router problem. Try a different port / cable / router.
  10. Let's set the expectation straight first. The 6700K base clock is 4GHz and the 1700X base clock is 3.4GHz. That is a significant difference, and GPU-intensive benchmarks (and games) benefit more from a higher base clock. Barebone can also turbo boost higher since there's a lower load across all cores. You then have to add virtualisation overhead to the equation, which again is not insignificant. So basically, your barebone 6700K vs VM 1700X comparison may not be entirely apples vs oranges, but it's pretty close. Now in terms of optimisation, you need to appreciate that a Ryzen CPU contains 2 CCX "glued" together:
      - Don't use an odd number of physical cores (e.g. your 10 logical = 5 physical) on Ryzen. An odd number of physical cores ensures that 1 CCX is always overloaded, reducing overall performance. Based on my testing, the lost performance can be as much as 1 core (e.g. 3+3 is just as fast as 3+4).
      - Spreading the even number of physical cores evenly across both CCX will also help (so don't do 2+4, do 3+3).
      - Check your CPU core numbering scheme so you don't accidentally pin the wrong hyperthreaded pair. BIOS changes have been known to change the numbering scheme (e.g. 0+1 = 1 pair becomes 0+8).
      - When you did your Q35 machine type template, did you add the qemu tag (see below) at the end so your emulated PCIe slots run at full PCIe x16 speed? You need to add this above </domain> for the Q35 machine type.
      <qemu:commandline>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.speed=8'/>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.width=16'/>
      </qemu:commandline>
  11. You can use one or the other, but you need 2 separate VM templates. Switching back and forth with the same template is doable but troublesome due to potential errors in the template. That is assuming your GPU resets itself correctly upon VM shutdown.
  12. This is an extremely difficult issue to diagnose due to the multitude of things affecting download speed. I can get close to 90% of my gigabit connection in my Win10 VM (and a similar level from Unraid itself), so it's certainly not a VM problem. However, that speed is only against my ISP's own speed test server. Otherwise, even with known fast servers (e.g. Ubuntu iso and Google), I would be happy to get 50% of gigabit. Your best course of action is probably to migrate the tasks that need to max out your connection over to dockers (see the sketch below for a quick way to measure raw speed from the Unraid console).
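     If you want to take the VM out of the equation entirely, measure raw download speed from the Unraid console first - the URL below is only a placeholder, point it at any large file on a server you trust:

        # downloads to /dev/null so nothing is written to disk; wget prints the average speed
        wget -O /dev/null "https://some-fast-mirror.example.com/path/to/large-file.iso"

     If the console number is also low, the VM isn't your problem.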
  13. You misunderstood isolation. Isolation = telling Unraid (including dockers) NOT to use a core (usually because you want your VM to use it). Isolating CPU0 is almost never recommended. If you want a docker to stay off a certain core, use the docker CPU pinning feature to select which cores the docker can use (so if you don't select CPU0, the docker won't use that core) - a manual equivalent is sketched below.
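     For what it's worth, if you ever set up a container outside the GUI, the manual equivalent of that pinning is the standard docker cpuset flag - the container and image names here are made up:

        # restrict the container to logical cores 2-7 so it never touches CPU0
        docker run -d --name my-docker --cpuset-cpus=2-7 my-image

     The GUI pinning feature gets you the same end result without typing any of this.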
  14. I don't use it but there's a nzbgetvpn docker in the "App store"
  15. +1, this is incredibly useful for mixed-size arrays.
  16. rclone sync + bash script + CA User Script plugin. Something like the sketch below.
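     Roughly what I mean, as a User Scripts entry on a cron schedule - the remote name "gdrive:" and the paths are assumptions, replace with your own:

        #!/bin/bash
        # sync a local share to a cloud remote; add --dry-run first until you trust it
        rclone sync /mnt/user/backups gdrive:unraid-backups \
          --transfers 4 \
          --log-file /mnt/user/appdata/rclone/sync.log \
          --log-level INFO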
  17. I can confirm this bug - but with a different conclusion.
      - cp from cache to disk2 (using console) reaches about 200MB/s; read from disk3 (via SMB) drops to 5MB/s. Once the disk2 write is done, read from disk3 immediately goes back up to 197MB/s.
      - cp from cache to an unassigned device (using console) reaches 500MB/s; read from disk3 (via SMB) is still high, around 172MB/s.
      - To remove SMB as a variable, I repeated the test using console only (2 simultaneous connections) with similar results. (A sketch of the console-only test is below.)
      - To remove console as a variable, I repeated the test using SMB only; write speed is about 2x-3x read speed, but the frequent fluctuation makes it hard to judge. It's clear though that read speed stays in the double digits (i.e. faster than case (1) above).
      - To remove write as a variable, I tested read (via SMB) from 3 disks, 2 disks and 1 disk and got 96-95-97, 141-143 and 210.
      - To remove read as a variable, I tested write (via SMB) to 3 disks, 2 disks and 1 disk and got similarly even splits.
      - No parity. All mitigations disabled via Squid's plugin.
      So it sounds to me like it's not necessarily an issue with concurrent performance, but rather that there's a speed limit to array IO with incorrect prioritisation of write vs read:
      - For read/write to a single disk, throughput is limited by the maximum speed of the device (usually an HDD), which is usually lower than this overall limit.
      - When reading / writing to multiple disks, the total speed of the devices exceeds the limit, so the overall limit becomes apparent.
      - If only reading or only writing, the limit is divided evenly across the disks.
      - If reading + writing, there appears to be significantly higher priority (and/or resources) given to write, crippling read speed.
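     For anyone who wants to reproduce the console-only version of the test, it's nothing fancier than a big copy onto one array disk while timing a read off another - file names and disk numbers are placeholders:

        # terminal 1: write from cache to disk2
        cp /mnt/cache/testfile.bin /mnt/disk2/
        # terminal 2: read a different file from disk3; dd prints the achieved read speed
        dd if=/mnt/disk3/otherfile.bin of=/dev/null bs=1M status=progress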
  18. First and foremost, wouldn't it be easier to resolve your write issue with SMB instead, since you have already got rclone set up and working for read? I would imagine that beats trying to find an alternative solution. Now, having read through your post a few times, your workflow doesn't make sense to me. Why would your team be working on files remotely (i.e. on Dropbox) if you guys are in an office (presumably the same office)? Since you have Unraid, I would think it's faster for your team to access the data locally (even over Wifi it should still be faster than Dropbox) and only use the cloud for external approval and/or archival and/or backup and/or offsite work (for which data should still be downloaded locally before being worked on). Just off the top of my head, I would suggest a workflow like this (a rough copy-script sketch follows the list):
      - Client uploads content to a Dropbox "Inbound" folder.
      - Rclone mounts the Dropbox Inbound folder.
      - A regular script copies data from Inbound to the array.
      - Team members work on data on the array.
      - Finished exports are copied to a "Finished" share on the array; used projects and content are moved to an "Archive" share on the array.
      - A regular upload script pushes the "Finished" share to a "Finished" folder in Dropbox + the "Archive" share to an "Archive" folder in Dropbox.
      - Client review / approval.
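     As a very rough sketch, the "regular script" steps could be a single User Scripts entry along these lines - the remote name "dropbox:" and all folder / share names are assumptions to adapt:

        #!/bin/bash
        # pull new client uploads from Dropbox into the working share
        rclone copy dropbox:Inbound /mnt/user/projects/inbound --log-level INFO
        # push finished exports and archived projects back up
        rclone copy /mnt/user/finished dropbox:Finished --log-level INFO
        rclone move /mnt/user/archive dropbox:Archive --log-level INFO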
  19. Jet engine is no biggie. When I was looking at some options for my server, I remember hearing 1 fan that sounds like supersonic fart. That was when I gave up on 1U servers.
  20. What do you mean by "loose their mapping to the Media share"? Showing the config of 1 docker won't help much because the issue might very well be in a different docker. Also, you are downloading straight to the array. That's not recommended at all.
  21. Start a new template and redo it. That will save you more time than stumbling your way through the xml without knowing exactly what to look for.
  22. Crucial MX500 is the best value I can find at the moment. Otherwise, Samsung 860 EVO. Whatever you get:
      - Make sure it's 3D TLC, not non-3D TLC (if it isn't advertised as 3D then it's not 3D - it's something manufacturers are very proud of).
      - Definitely not QLC.
      - Ideally MLC or SLC, but those typically are not affordable.
      - And make sure it has DRAM cache.
  23. I understand your point but I can reliably reproduce this with my VPN (PIA) by simply switching to a non-port-forwarding server. I would lose access to the interface within 10 secs or so after docker start but I can tell the docker is still running based on network stats. Maybe something peculiar about my network / ISP.
  24. Taking your build list point by point:
      - 2x16 DDR4 3600mhz will upgrade more ram as needed - Not recommended to use 2 sticks with Threadripper. You are better off with 4x8GB if price is an issue. The motherboard has 8 RAM slots so it does not limit your ability to upgrade (to a certain extent). Also, there's no point getting 3600MHz: for most workloads, especially in a virtualised environment, you will be hard-pressed to find any perceptible difference with highly overclocked RAM, but you are more likely to see instability.
      - 2X8 Tb 7200rpm parity Drives (Rest of my drives will be 8TB after testing) - Frank's post above is a more sensible use for the 8TB.
      - 2xSSD 256gb cache drives (will upgrade size after testing if needed) - Unless you are storing critical data that needs RAID1 protection, it's better to get 1x512GB.
      - 3xNvme drives onboard will be used for VM's if possible again more testing - If the Xtreme is anything like my Designare, the 3rd M.2 slot (the 2280 slot) is in the same IOMMU group as many other things, most notably the NIC. So you won't be able to pass it through to the VM via the PCIe route without ACS Override, which may cause severe performance issues. You can of course pass it through as a disk or use it for a vdisk image, with the usual performance penalties.
      - 2xAMD x5100 8gb (these are extras that i have to use for passthrough to vm's) - You might want to google whether these cards are happy with PCIe pass-through on Unraid. Finding someone with a successful build would be best, but even someone reporting issues would be good to know about, e.g. for solutions. You have 2x cards, so in case you wonder: last I checked, Crossfire doesn't work in a VM.
      - 3xDVD BluRay to rip hopefully 3 movies at a time. Have over 600 titles (I know makemkv supports multiple dvd's but not sure how this will passthrough to W10) - Why W10? There is a makemkv docker, as well as Ripper. Ripper is better in my opinion - just because it ejects the disk automatically for you once done. You can run multiple dockers easily, 1 for each optical drive; just change the name (rough sketch below).
      Finally, if the Gigabyte Xtreme is anything like my Designare (which it should be), you should get a cheapo GPU to use in the PCIe x1 slot for Unraid to boot with (the BIOS should allow you to pick which slot is the primary GPU). That saves a lot of trouble with GPU pass-through. The Zotac (in my sig) is pretty much the only option I'm aware of that is easily obtainable, but you don't exactly need much more than that.
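     The multiple-Ripper idea as a rough sketch - the image name is a placeholder (check the actual Ripper template in the Apps tab); the point is simply that each instance gets its own name, its own /dev/srX drive and its own output folder:

        # one container per optical drive
        docker run -d --name ripper-1 --device=/dev/sr0 -v /mnt/user/rips/drive1:/out some-ripper-image
        docker run -d --name ripper-2 --device=/dev/sr1 -v /mnt/user/rips/drive2:/out some-ripper-image
        docker run -d --name ripper-3 --device=/dev/sr2 -v /mnt/user/rips/drive3:/out some-ripper-image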