Everything posted by 1812

  1. Currently I set no topology and use the second xml, because in the past I've used topology settings, including host-passthrough, with little to no effect. I don't have time right now, but in the next few days I'll run more benchmarks using topology and host-passthrough for comparison.
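For reference, the kind of topology/host-passthrough combination being compared here would look something like this in the vm's xml (a sketch only; the sockets/cores/threads values are illustrative, not a recommendation from these tests):

```xml
<!-- illustrative libvirt cpu block: host-passthrough plus an explicit topology -->
<!-- the counts below are example values, not tested recommendations -->
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='4' threads='2'/>
</cpu>
```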
  2. It's not that OSX isn't supported on unraid, but the only legal way to run it is on Apple hardware. So, as long as you are running unraid on a Mac, you are fine. Limetech has decided not to put effort into supporting the minuscule market share of people who are interested in running unraid on a Mac, but if you are, there is nothing in unraid preventing you from setting up an OSX VM on said unraid Mac. Running OSX on any OTHER hardware besides Apple branded equipment is clearly against the OSX license agreement. good thing I have a bunch of apple branded hardware to run it on!
  3. BUT YOU COULD BE! lol..... not a problem, and don't think I was attacking everything you were saying by taking it point by point. You covered many areas and I wanted to be sure to address them all properly.
  4. I know, it is completely backwards to everything in the cpu pinning thread. That's why it's interesting. And as I noted, in certain cases it will create/contribute to audio distortion. As I stated before, I do not know if it is down to the differences between OS X and Win 10, or some other factor; I have done no testing in Win 10. I am concentrating on raw power, but in each case where video/audio was tested, I noted the results. It does apply to the real-world use of my vm's, which I use daily for audio/video editing, web content creation, and general web usage, alongside the other vm's running in my household concurrently. This is in addition to my batch network transcoding (not Plex), which is also handled by vm's on a small cluster of machines. So cpu power is important to me, but as noted, that also usually correlates to perfectly fine working audio/video with no lag or latency.
You can't use the vm editor and run OS X. Many people have begged LT to incorporate this, but they won't due to perceived legal issues that may or may not actually be there. There are too many custom modifications needed in the xml: if you tried to change any parameter with the vm editor, your custom and needed edits would be gone and the vm would not work. So you have to learn to edit xml manually for everything (short of nvram creation). I know you can mess things up royally, and believe me I have, as part of the learning process. And I still make mistakes every now and then.
Additionally, in terms of topology assignment, I've found almost no difference in scores whether you present the vm a single proc with 8 single cores, or 4 HT cores, or 2 procs with 4 single cores. Basically the same. Again, this is OS X. It's not optimized like Windows for virtual environments, which may play a part in all of this. Part of the reason I posted results in here is because everything I did was OS X specific, and might confuse everyone else in the pinning thread.
I think I've also asked for others to try and confirm my findings, to determine if the results are consistent in OS X or a function of my equipment. I'm not ready to proclaim my way as the way for OS X, but I know my way works best on my setup, discovered and verified through trial and error. All true. And I realize that the hardware I use was intended for virtualization on a much larger scale than I actually use it for, so I have quite a bit of overkill compared to most users on the forum. Which is part of the reason why my results need more testing/verification by others before they should be universally accepted. I have no idea why my results contradict everything in the cpu pinning thread; they should be discussed and subjected to scrutiny. As I stated, I have no idea the reason for the contrasting results, short of a few guesses. I have zero degrees in computer science or network administration. I'm on here to learn and to share what I've learned and discovered, just like everyone else, and to try to find the best way to get the most out of my hardware.
  5. If you find you need to, then add the following to the append line of your syslinux.cfg file:
append vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
  6. Also, there's a 50/50 chance you'll have issues with MSI interrupts and need to enable allowing unsafe interrupts in the Syslinux Configuration.
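For anyone unsure where that flag goes: it's added to the append line of the boot entry in syslinux.cfg (on recent unRaid versions you can edit it by clicking the flash device on the Main page). A sketch of what the stanza might look like; the label text varies by unRaid version:

```
label unRAID OS
  menu default
  kernel /bzimage
  append vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
```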
  7. The first thing to check is your bios settings on the server, and the power settings in iLO (if you have it); make sure they are set to max performance. If those are fine, then probably look at the cpu scaling governor next, to see what it is set to.
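Checking the governor can be done from the console. A minimal sketch, assuming the standard cpufreq sysfs layout (not every kernel or VM exposes it):

```shell
# Print the active scaling governor for cpu0, assuming the standard
# cpufreq sysfs path; falls back to a notice when it isn't exposed.
show_governor() {
  gov_file=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  if [ -r "$gov_file" ]; then
    cat "$gov_file"           # e.g. "performance", "powersave", "ondemand"
  else
    echo "cpufreq-unavailable"
  fi
}
show_governor
```

If it reports something like "powersave", that would line up with the slow clocks being described here.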
  8. shut down the vm
change the share where the vm is hosted to cache-only
invoke mover
go get a coffee
  9. There may be an easier way than this, but I don't know it: make sure your array disks are spun down, either by timer or by clicking the spin-down button on the Main page, then launch the vm. If your vm img file is located only on the cache, then it should not spin up the array. Where is your appdata folder located?
  10. your options:
On cache: if you have 2 or more disks making a cache pool, then your vm is safer because it is backed up on another part of the pool (unless you've changed the raid settings). The vm also has to compete for disk i/o with other services/dockers using the cache disk. Faster write speeds than the array, even with only a 1-disk cache.
Via unassigned device: the vm is not competing for disk i/o with dockers/etc, but has no backup due to the lack of a drive pool. Faster write speeds than the array.
On array: much slooooower write speeds to the disk image. Generally not advised.
  11. perhaps if it were pinned/stickied at the top it might get more traction; otherwise it might appear to be just another thread of someone asking for assistance, vs. offering it.
  12. I went with using my dual 6 core server because the dual 4 was busy with one of my kids using a vm on it, and I didn't have the other 2 powered up. It's essentially the same test, just a few more cores involved.
Thread pairings:
Proc 1
cpu 0 <===> cpu 12
cpu 1 <===> cpu 13
cpu 2 <===> cpu 14
cpu 3 <===> cpu 15
cpu 4 <===> cpu 16
cpu 5 <===> cpu 17
Proc 2
cpu 6 <===> cpu 18
cpu 7 <===> cpu 19
cpu 8 <===> cpu 20
cpu 9 <===> cpu 21
cpu 10 <===> cpu 22
cpu 11 <===> cpu 23
vm1 on 6-11, vm2 on 18-23, emulator pin 1-2: 326 325 329 326 329 325
vm1 & vm2 both on 6-11, 18-23, emulator pin 1-2: 336 334 335 334 334 338
If you do the math, it's a 2-4% improvement when you run both vm's on the same HT paired cores. Which is interesting, because in previous tests when they were loaded on the same non-paired cores, it delivered the worst results of all combinations. So it's very slightly better (but not at all advisable) to run 2 vm's on 12 shared cores vs running them at 6 cores each on the other's HT pair... if you have to... but really shouldn't.... ha!
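The pinning in the first result row, expressed as xml (a sketch assuming a 6-vcpu vm along the lines of vm1; the cpuset numbers match the pairing table above):

```xml
<!-- vm1: 6 vcpus pinned to cores 6-11, emulator pinned to cores 1-2 -->
<vcpu placement='static'>6</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='6'/>
  <vcpupin vcpu='1' cpuset='7'/>
  <vcpupin vcpu='2' cpuset='8'/>
  <vcpupin vcpu='3' cpuset='9'/>
  <vcpupin vcpu='4' cpuset='10'/>
  <vcpupin vcpu='5' cpuset='11'/>
  <emulatorpin cpuset='1-2'/>
</cputune>
```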
  13. Thanks 1812 for taking the time to do all those tests. Interesting results. Would be interesting to see the difference between:
Proc 1
cpu 0 <===> cpu 8
cpu 1 <===> cpu 9
cpu 2 <===> cpu 10
cpu 3 <===> cpu 11
Proc 2
cpu 4 <===> cpu 12
cpu 5 <===> cpu 13
cpu 6 <===> cpu 14
cpu 7 <===> cpu 15
A: vm1 assigned cores 4-7, vm2 assigned cores 12-15
B: vm1 assigned cores 4-7,12-15, vm2 assigned cores 4-7,12-15
and see whether A or B gets higher scores. place your bets... results in a couple hours!
  14. 1U would be tricky. Additionally, to get my GTX 760 to work I had to employ an external PSU and run cables to the 6 & 8 pin inputs. It's not the best way... but it is the only way to make it work. I've been looking for pcie expanders that would be able to move a card fully external, but can't find anything reasonably priced once you move beyond 1x.... But that is an option, albeit expensive.
  15. Both that I bought off eBay, from 2 different sellers, were dropped in with no problems. You may have to disable the onboard raid controller in the bios. One of my servers had an onboard raid controller in the process of dying, and even though I wasn't using it to access disks, it was causing SERR issues in unRaid. But on the other 3 they are still enabled, just with no cabling attached.
Make sure you know the speed limits of the H220 in terms of bandwidth/etc and how that affects sata/sas disks and the backplane on the server. I believe the most you will get is SATA II speed with sata disks; SAS might give you SATA III. I don't recall, but the product manual for the controller is available online.
I also use an HP sas expander; both of mine came from eBay, from different sources, and dropped in without issue. That gives me a way to access a larger array of 3.5" disks in addition to the 8 2.5" in the onboard cage. There are probably faster choices that will work, considering all the pcie ports available and hardware compatibility. But when the lot of servers fell into my lap I wasn't 100% sure I'd stick with unRaid on them, so that's why I stayed in the HP family for controllers. I'm certain now.
If you're planning on adding a gpu, also be aware that space is tight for bigger cards. I put a gtx 760 in one of my servers but had to use a Dremel to cut off some of the plastic shroud, which gave me 1/8th in. of clearance between the cpu exhaust housing and the gpu.... fun times....
There are also reasonable cpu upgrades depending on what yours comes with.
  16. I could be very wrong, but my understanding is that if you use isolcpus in the syslinux.cfg, it isolates the listed cores from unRaid AND the dockers, meaning that your dockers/plugins can now only use what you have listed as 1, 16, 17. I've never pinned cores to dockers, just isolated cores for vm's, and let unRaid and my dockers figure out which of the remaining cores they want to use and when, and have had no issues with that.
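As a sketch of the syslinux.cfg usage being described (the core list here is an example; adjust it to whatever cores your vm's are pinned to):

```
append isolcpus=2-7,18-23 initrd=/bzroot
```

Cores in the isolcpus list are excluded from the general Linux scheduler, so unRaid's own processes and unpinned dockers stay off them; only things explicitly pinned there, like your vm's, will run on those cores.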
  17. it's still quite important to know how the appdata backup works, and to be using it. I asked my question from an honest point of not knowing if there was an issue I didn't know about with using mover. I was also thinking this morning that previously, I've just copied the entire contents of the cache to the array temporarily using Krusader, then moved it back, and hadn't had the issues with docker images that I guess some others have had. I think there are multiple ways to do it, and you've outlined several reasons for that. Some are safer for data storage than others.
  18. stupid question time: when swapping out a cache drive, could you not have just changed the share setting from prefer to yes, and then invoked the mover to send them back to the array, then after swapping out the drive, reversed the process? Or does that eat the files?
  19. or just copy the code that was sent to you via email and put it in the new install.
  20. -cough-cough-everything in my sig works as it is described-cough-... but you don't have to stick with an hp hba.... just consult the hardware wiki/list.
  21. Would you care to name them so that others are forewarned? I don't know which SSDs are good and which are not so good. Cheap SSDs I've used personally in unRaid and other platforms that had horrible sustained write speeds (meaning initial writes at sub-200MB/s and dropping to 80MB/s or less):
Kingston SSDNow series
PNY CS1311
Patriot Blast/Blaze
Read speeds are usually "ok", but none perform like my Samsung 840 or 850 drives. Then again, they are also 1/2 the price or less...
  22. is it a cheap ssd? Those often start off fast on transfers and then die off down to spinning-disk speed or worse. (first-hand experience with a couple)
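A quick way to see that drop-off for yourself, sketched with dd (the target path and size are examples; conv=fdatasync forces a flush to disk so the reported speed reflects the drive rather than RAM caching, and a file larger than the drive's own cache shows the sustained figure best):

```shell
# Rough sustained-write check. Writes N x 1 MiB blocks of zeros to the given
# path, prints dd's throughput summary line, then removes the test file.
write_test() {
  target="$1"
  count="${2:-1024}"          # number of 1 MiB blocks; default ~1 GiB
  dd if=/dev/zero of="$target" bs=1M count="$count" conv=fdatasync 2>&1 | tail -n 1
  rm -f "$target"             # clean up the test file
}
# example usage (path is yours to pick):
# write_test /mnt/cache/dd-test.bin
```

On the drives listed above you'd expect the first run to look fine and larger runs to sink well below the advertised speed.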