glennv

Everything posted by glennv

  1. I have the same experience and was never able to solve it. I followed this kind gentleman's advice and even copied his XML, but it did not work. It's some magical combination of factors that makes it work or not work. Same for Sierra/High Sierra/Mojave. Tried different Clovers etc., topologies, no topologies, tried for weeks. The furthest I got was that it booted, but then almost every program you run crashes. Gave up, so if you find it, let me know. I run all my OSX VMs with a max of 32 vcores (16 hyperthreaded) and called it a day.
  2. Unraid rocks hard. So happy bday and yes, feel free to send me that coveted badge.
  3. Unraid is even worse than the standard effect on any filesystem, as in addition to the normal filesystem overhead it has to do parity calculations/writes for each small file. I gave up struggling with regular incremental backups of huge audio sample libraries (millions of files) because it would take ages. Different backup solutions react differently, but all suffer, apparently regardless of whether you have a fast SSD cache. I now run them only once a month to Unraid, where the backup tool keeps a local database so it does not have to recheck every file on every run. But still insanely slow, always. Daily backups I do outside Unraid. Normal large files, which make up most of my data, rocket over my 10G net at 500-700MB/s without issues. It's the only downside of an otherwise stellar Unraid experience.
  4. Thanks. Did not know that about the array staying online during the clear. Is there a specific order to do things to make sure this happens when you add a new drive? So: stop the array, add and assign the new drive, and starting the array will fully start it and then do the clear separately in the background?
  5. Great stuff man. Thanks for your effort in figuring this out for us.
  6. +1. Absolutely vital, I would say, to be able to disable all these mitigations for non-exposed servers and the like. Couldn't care less for my server; I just need as much perf as the CPU can deliver. But hats off to LT for being on top of this. Respect.
  7. I think it's not the starting of the VM; as I reported in some other thread, going to the VM tab spins up (in my case) all drives. I work around it by never visiting that tab to start a VM, but doing it from the dashboard instead.
  8. +1 for integrated in Unraid. Currently happy using Cloud Commander, but nothing beats integrated.
  9. Keep a “tail -f /var/log/syslog” running on the console, so when it hangs you can still see the last messages and maybe get a hint.
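     If you also want those last lines to survive a hard reset, you can tee the tail to the flash drive; a minimal sketch, assuming the flash is mounted at /boot as on a stock Unraid box (the file name is just an example):

        # Follow the syslog on the attached console; the last messages stay
        # visible on screen even after the system hangs.
        tail -f /var/log/syslog

        # Optionally mirror the tail to the flash drive so it survives a reset.
        tail -f /var/log/syslog | tee /boot/syslog-tail.txt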
  10. Run lspci -v. This shows you all hardware recognised on the PCI bus. You should see your GPU (look for the brand name) in the output list, along with its address, which you will eventually need if you want to do anything with it.
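      For example, something like this (the brand is just illustrative):

        # List all PCI devices with details; note the address at the start
        # of each line (e.g. 02:00.0), which is what you need later for
        # things like passthrough.
        lspci -v

        # Narrow the list down by brand, e.g. for an NVIDIA card:
        lspci -nn | grep -i nvidia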
  11. The old card that did not work was a server pull. This one, no idea, but it works like a charm: better, faster (8 SSDs), and way, way cooler under full load than the old server-pull card. Tested the crap out of it and it's a beauty. Your mileage may vary with stuff from there.
  12. Based on another post on this forum discussing the exact requirements and issues with TRIM on HBAs and the latest firmwares / Unraid releases (can't find it, but do a search on TRIM), I moved from a flashed H200 up to an LSI 9300-8i (thanks eBay for the cheap Chinese card, if you are not in a hurry) and also replaced all my EVO 950s with 960s, and finally got btrfs TRIM fully working. You need the proper card and the proper drive now to get it working on btrfs, otherwise you are out of luck. Nice little speed boost as well with the faster HBA. Before that I had to temporarily connect drives to motherboard SATA, do the TRIM, and connect them back to the HBA.
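      A quick way to check whether TRIM actually works end to end is a manual run against the pool; a sketch, assuming the cache pool is mounted at /mnt/cache as on a stock Unraid setup:

        # Manually trim the btrfs cache pool; prints the trimmed bytes on
        # success, or errors out if the HBA/drive combo doesn't pass TRIM.
        fstrim -v /mnt/cache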
  13. Fixed the problem. It was related to my own modified BIOS I had flashed to these cards. It was apparently a bit too aggressive on the minimum voltage of the different power states (to reduce fluctuation and increase stability in high overclocks), which was great for a real hackintosh, but somehow made the card behave differently in a VM. I noticed it when I saw the same or worse behaviour while trying to get it to work properly in a W10 VM. It would not even work at all (the dreaded error 43), until I reflashed it slightly more conservatively and all problems disappeared.
  14. Tnx. That's what I suspected. Will keep an eye on the logging going forward to see if this behaviour stays as it is.
  15. Hi, I have been running a Supermicro dual-Xeon-based Unraid server for about a year with zero issues, but since recently improving my syslog parsing (using Splunk), these messages have started to worry me, although they have never led to errors at all (zero issues also on weekly parity checks). I have identical 6Gb SATA drives spread out over the onboard SATA connectors and the SCU connectors, as I ran out of SATA ports. Never a single error on the drive array over all this time, but all drives on the SCU spit out these errors every 15 minutes or so:
      kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
      kernel: sas: ata7: end_device-2:0: cmd error handler
      kernel: sas: ata7: end_device-2:0: dev error handler
      kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
      The busy, failed and/or tries numbers never change; it is always as above. It is not drive specific, just any drive on any of the SCU ports. Should I be worried, or can these be safely ignored as some sort of failed pings?
  16. Yeah, I guess we have all been there haha. I so love BTRFS reflinks and snapshots for this sort of thing. Anyway, upgraded to a higher Clover using the kit from gridrunner's High Sierra video. Now it worked ok, but still the same problem. Even moved to a new (newest) iMac model in Clover, but still no go on 40 cores. Going to try an upgrade to High Sierra just for fun and see if that does anything (it will likely kill my patched 10G drivers, but just for the sake of it; will just revert after).
  17. Well, that was a bad idea, duhhh. Was on Clover 4449 and tried to update to the latest, 4871. Dead and not booting anymore. Thank god for BTRFS reflinks ;-). Had the smart idea to create a reflink just before I did the Clover thing. I do vaguely remember that the 4449 version was specially adapted to run on qemu, but it is too long ago to remember. I think for now I will leave it like this, unless someone has a bright idea of what causes this, rather than blindly experimenting.
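      For anyone who wants the same safety net: on BTRFS a reflink copy is instant and takes no extra space until the copies diverge. A minimal sketch, with example paths (both files must live on the same BTRFS filesystem):

        # Instant copy-on-write snapshot of the vdisk before a risky change.
        cp --reflink=always /mnt/cache/domains/osx/vdisk1.img \
                            /mnt/cache/domains/osx/vdisk1.img.pre-clover

        # Roll back by copying the snapshot over the broken image.
        cp --reflink=always /mnt/cache/domains/osx/vdisk1.img.pre-clover \
                            /mnt/cache/domains/osx/vdisk1.img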
  18. Tnx (you probably meant to say Mojave, as there are no Nvidia drivers yet). Mine is exactly like yours, excluding of course the devices. I use it for Resolve / Nuke / Houdini renders. Crazy. No explanation. So it must be somewhere in Clover. It can not be OS specific, as I did a clean install of Mojave (without passing through my Nvidia) and it behaves the same. Nothing indicative in the boot logs, and everything indicates 40 cores are active. But programs crash with (as usual) unhelpful errors. And it is a system-wide thing that only appears with 32+ cores. Anything lower than or equal to 32 and all is smooth. Insane. Will try to upgrade Clover to the latest version and see if that helps, but I doubt it, as I think for the Mojave install I used the latest.
  19. When I do that, it does boot with 40 vcpus, and at first glance it all looks perfect. But most programs I start crash immediately. Even Terminal. Geekbench loses the 64-bit option and only shows the 32-bit option, which is interesting. It dies when I try to run it. I have heard of this behaviour, but only above 64 cores. I am running 14.2 as well btw (it was a typo before). Can you show me an XML of your working setup with 32+ cores, please? Maybe I am missing something obvious.
  20. PS: also tried it on a Mojave VM to exclude the OS release; same behaviour.
  21. How did you manage to get so many cores going in OSX? Whatever I do, I am stuck at 32 vcores. Have a dual 2697v2 Supermicro board. Whatever combination I try of sockets, cores, hyperthreading: anything that ends up above 32, and it crashes during boot. 32 or lower and all is great. Running a Clover-based Sierra VM and would like to assign 40 vcores and leave the rest for Unraid. Did you do anything specific in Clover? Any specific machine type (tried to change it, but it crashed like a madman; now on model 14.1) or other setting that makes it work? I was almost convinced it was a hard limit, until I found you and a few others running way above 32 vcores.
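      For reference, this is the sort of thing I have been experimenting with; a sketch only, of a 32-vcore layout that does boot for me (the exact counts and the VM name are examples, not a confirmed 40-core recipe):

        # Unraid: VM tab > edit the VM in XML view (or: virsh edit <vm-name>).
        # Set the vcpu count and a matching topology inside the existing
        # <cpu> element, e.g. for 32 vCPUs on a dual-socket host:
        #
        #   <vcpu placement='static'>32</vcpu>
        #   <cpu ...>
        #     <topology sockets='2' cores='8' threads='2'/>
        #   </cpu>
        virsh edit <vm-name>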
  22. Interesting, as then it should work. Maybe that is only true from a specific OSX version onwards, or with a specific driver for a specific OS version. Better re-check with Sonnet.
  23. If you install the CA auto update containers plugin, it will. If you update (a new build) on Docker Hub, it will be auto-updated on Unraid. Also try the integration between GitHub and Docker Hub. Super cool, and you don't need to build locally. You just push your changed Dockerfile to GitHub, which is then detected by Docker Hub, and the connected Docker image is then automatically built (typically slow, but it works). Then Unraid (depending on the auto-update schedule) will pick up the changed version. Smoooooooothhhhh
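      The whole loop is then just a normal git push; a sketch, with the file and commit message as examples:

        # Push the changed Dockerfile to GitHub; a Docker Hub repo linked as
        # an automated build picks up the commit and rebuilds the image.
        git add Dockerfile
        git commit -m "tweak splunk config"
        git push origin master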
  24. I had a similar struggle with the existing Splunk dockers, which I needed to modify but lost my config every time, mostly due to my lack of understanding of dockers. In the end I built a new Dockerfile/image on my local MacBook, based on an existing one like you suggested, and pushed it to Docker Hub. Then you can add it to Unraid like any normal docker.
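      The local route is only a few commands; a sketch, with the image name as a placeholder for your own Docker Hub account/repo:

        # Build the modified image from the Dockerfile in the current dir,
        # tag it under your Docker Hub account, and push it up.
        docker login
        docker build -t yourhubuser/splunk-custom:latest .
        docker push yourhubuser/splunk-custom:latest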
  25. Don't you have proper Sonnet drivers for your card? Sonnet makes a lot of stuff for Mac, so check it out, as you do need a driver to get it working. I have Intel cards and flashed the BIOS so they work with SmallTree drivers, plus a small patch using Clover (see tonymac for threads on that). So depending on the original brand of your card (if it is just a rebranded Intel card), that may also be a way to go.