mdrodge


Everything posted by mdrodge

  1. Thanks for the quick response. It's not making a difference for me. Windows is reporting mostly idle, but in the Unraid tab I can see the first VM core sitting at 70-85%. I have two VMs; I just tried it on the second because I'm using the first one. (Up until I found this thread I just assumed this was an inevitable part of VM life, like emulator overhead.) Would be lovely to sort this though, as it messes with the boost (I've had to disable boost in the BIOS so I could actually get a real performance "boost" out of this). I'll show you what I'm dealing with: cores 2-5 (and their hyperthreads) are VM1, and cores 6-7 (plus HT 38-39) are VM2. I've already changed VM2's hpet to yes. Are there any other suggestions? (Is it possible it only works if they are both set that way? I could try that, but I'd have to find a different terminal first and that involves getting up, lol.)
  2. Is anyone able to explain this? I don't understand where the clock section is. I'm guessing you're saying to edit the VM's XML with this? Can someone give an example please? Edit/update: oh wait, I see. The VM's XML already has that string, currently set to no, and you just change it to yes. Very simple.
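     For anyone else hunting for it: the string sits in the <clock> block of the VM's XML (edit the VM with the XML view, or virsh edit <vm name> from a terminal). It looks roughly like the sketch below; the exact offset and the other timer lines vary by template.

         <clock offset='localtime'>
           <timer name='hpet' present='no'/>   <!-- change 'no' to 'yes' -->
         </clock>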
  3. Maybe SuperMicro's customer support could tell you? Just say you need the release notes for BIOS 3.1 and 3.2 or whatever. It's not an unreasonable request, and they might also have some suggestions.
  4. I was looking too; I saw that. Well, I guess it's up to you. I've updated the BIOS on my Ryzen gaming rig 20+ times now; they had one BIOS a month for two years and it's still going. I can't tell you how to live your life, but I know what I'd do.
  5. Well, that's an early BIOS in the life of that CPU. It's possible they've fixed this problem with an update. Do the change notes for later versions give any clue? I understand not wanting to update unless forced to, but I'd try it personally if something wasn't working after upgrading hardware. At least read the change notes, I guess.
  6. Take it/them out, wipe the pads with a drop of alcohol, dry it and reseat it/them. Are you sure all the mobo's pins (I'm guessing Intel) are OK? PCIe lane issues are often a bad CPU mount. A GPU might run fine with as little as 4 lanes, so just because it boots...
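     If you want to check how many lanes the card actually negotiated, a quick sanity check from a terminal is below (the 01:00.0 address is just an example; use whatever address lspci shows for your GPU).

         lspci | grep -i vga                    # find the GPU's PCI address
         lspci -vv -s 01:00.0 | grep LnkSta     # shows the negotiated link, e.g. "Width x16" or "Width x4"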
  7. Seems good after 8 hours of use. The dashboard has one minor bug; it's been reported and it's usable. Setting up the nVidia drivers was nice and simple (I was nervous but it worked out fine). You just update, then after the reboot go to Community Apps and download nVidia Driver (by ich777). The package won't show up until you've updated to beta 35 (or higher). Then you can remove the old Unraid-nVidia package and raise a drink to @linuxserver.io for everything they did for all that time. It feels good to be all official and supported. (Stable with my TR4 2990WX with a GT 710 + Quadro P2000 for Docker, one VM with a 1050 Ti, an M.2 and three USB 3.0 controllers, and one VM with no hardware.) (Official Plex, Emby, jlesage's Handbrake, Resilio {both versions} all working well.) Apart from the dashboard issue (the VM and Docker sections are broken, but if you go to their individual tabs they are fine, so it's not a problem for me)... Anyway, apart from that I'd recommend it if you're already running a beta, but if you're on stable and you have no issues then, as ever, beta is beta and stable is always what's advised.
  8. @Frank1940 That's it, thank you for the quick response.
  9. Hi, I could use a little hand-holding please; I've been left out in the cold. How do I get the nVidia drivers installed? I've seen this link mentioned: https://raw.githubusercontent.com/ich777/unraid-nvidia-driver/master/nvidia-driver.plg I've been searching but can't find any explanation (just a bunch of people arguing). I have been using the LinuxServer.io nVidia builds for a while (now it's deprecated, no explanation, just an "I quit" statement). I think a guide would be helpful somewhere, either on the beta 35 build page or maybe a final post on the LinuxServer.io page, for plebs like me who are confused and left without a path. I'm a point-and-click pleb, so if someone could tell me what to type and where, that would be great. (I'm sure there are a few of us in this position, so I'm guessing these instructions should be somewhere easy to find.) Thank you in advance.
  10. So it's working very well now. Nine days without issue. Occasionally it will stall a little, but only when a parity check is running. It's usable now.
  11. 4194304 is the golden number for me at the moment
  12. Agent Run Out Of System Notify Watchers - fixed (Unraid terminal). Open the terminal (the icon that looks like (>_) at the top right of the main Unraid GUI) and type cat /proc/sys/fs/inotify/max_user_watches - this tells you how many watchers you have (8192 is stock). To change the amount, type sudo sysctl fs.inotify.max_user_watches= followed by the number of watchers needed. Their website says some OSes only support 524288 and setting it higher does nothing, BUT at 1048576 I still get the error and at 4194304 I don't, so Unraid ISN'T affected by that limit. I use sudo sysctl fs.inotify.max_user_watches=4194304 - DO NOT SET IT ABOVE 2147483647. You can set this once and forget it (it doesn't need resetting on boot). Just wanted to share this with anyone who might have the same issue.
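      For anyone copy-pasting, the commands are below (sudo is optional, since the Unraid terminal is already root). If you do find the value reverting after a reboot on your setup, one option, assuming the standard go file on the flash drive, is to append the sysctl line to /boot/config/go so it's reapplied at every boot.

          cat /proc/sys/fs/inotify/max_user_watches            # show the current limit (8192 stock)
          sysctl fs.inotify.max_user_watches=4194304           # raise the limit now
          echo 'sysctl fs.inotify.max_user_watches=4194304' >> /boot/config/go   # optional: apply at every boot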
  13. The cron schedule needs spaces: (* * * * *), not (*****). That explains my problem.
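      For reference, the five fields are minute, hour, day of month, month and day of week, for example:

          * * * * *      every minute
          */1 * * * *    also every minute
          0 1 * * *      1am every day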
  14. Update on new steps I've taken. (The C-states etc. seem to have fixed the crashes, and the stalls seem array related, so:)
      1* I have an import folder on the cache, and anything brought in from Windows is passed into it using Resilio, and every minute I run a user script that fixes permissions on just that folder (newperms /mnt/user/folder with a custom schedule of */1 * * * *, see the sketch after this post). This lets me avoid checking the whole array every hour like I was doing before, which was dumb and was indirectly causing all types of issues. I now check the whole array at 1am with the same sort of script as above.
      2* CA Auto Turbo Write (hopefully it will give a better balance than having it always on turbo).
      3* Changed my mover schedule so it's nightly instead of hourly. (I couldn't do this before because a full cache would cause Docker and the VMs to crash; a month ago I did step 4 and now I can do step 3 as a result.)
      4* I set up a second pool with just one SSD and moved appdata, system and isos to it, so that even if the cache fills, the rest of the system isn't affected.
      All this puts much less work on the array, so it should help with the stalling issue, and because there's less stress on the array it can spin down, which will let step 2 have an effect (it'll switch to read/modify/write unless I'm waking the whole array; I keep one disk spun down, so I'd need to have woken most of the array, and I believe I'd need to be actively using it to do that normally). I'm hoping this is the end of this/these issues. (It'll probably still have the odd little pause from time to time, but I might be able to live with that.) My future option is to build a second pool and add drives to that as a ZFS array. This would sidestep all the array issues I have and give me the speed I miss, but I'd need to replace 2x 10TB Red drives with 2x 12TB Gold drives so they all match, and that's expensive even at half what they cost in 2017.
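      The user script from step 1 is just a one-liner; the share name here is an example, adjust it to whatever your import folder is actually called, and set the User Scripts custom schedule to */1 * * * *.

          #!/bin/bash
          # fix permissions on the import share only, not the whole array
          newperms /mnt/user/import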
  15. It didn't quite crash that time, but it got choppy for a bit. I was moving files with unBALANCE (a fix from the last round of crashes) and a parity check started, so I stopped both and it's fine now. I don't see any reason for a busy array to cause the VM and the Unraid GUI to become unresponsive, but that seems to be what just happened.
  16. It still crashes. Two days in and the VM locked up. I rebooted the VM, and an hour later the whole system stalled and had to be reset. I was importing a 50GB file into Windows when it crashed. Turbo write is on, in case you're wondering. It booted first time though. I didn't think to try and pull a log file before I reset. I'll try and force a crash and pull a log tonight. (Now I'm close to pinning down a cause.)
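      Note for next time: the log lives in RAM and is lost when the box is reset, so grabbing a copy onto the flash drive before rebooting (or collecting the built-in diagnostics) keeps the evidence. Something like:

          cp /var/log/syslog /boot/syslog-$(date +%Y%m%d-%H%M).txt   # copy the syslog to the flash drive
          diagnostics                                                # or build the full diagnostics zip (saved to the logs folder on the flash)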
  17. AMD CBS - ACS Enable. I never noticed an ACS Enable setting in the BIOS before; maybe that'll help, so it's getting enabled. Though now I'm wondering what PCIe ARI Support is??? (Update: ARI made the VM crash on boot, but ACS Enable seems good.) Update: this setting gives me the same level of group separation as enabling the "Both" option in the VM settings, and I now have that set to Downstream and it works fine.
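      If anyone wants to compare the separation before and after flipping these BIOS options, Tools -> System Devices shows the IOMMU groups, or you can list them from a terminal with plain sysfs + lspci, for example:

          for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}"
            for d in "$g"/devices/*; do
              lspci -nns "${d##*/}"
            done
          done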
  18. Mostly good, but now the VM (with an nVidia 1050 Ti, USB and an NVMe passed through) crashes. The rest of the system seems unaffected, and I can stop the VM and start it again, but whatever I was doing at the time is lost. I think there were several issues originally. The system seems better, but I still have one bug left to fix. I'll look for a log now. Update: nothing in the VM's log, and I can't find any reference to this issue elsewhere. It's crashed a few times today and this is new behaviour. (Perhaps I'll look and see if I still need both options for PCIe ACS override.) I was hoping to GPU render on Windows, so I guess that'll have to wait. The other VM (VNC and no passthrough at all) isn't affected at all, so it seems like a passthrough issue, but I don't know.
  19. What about if it's just one file and you're happy to delete that file but can't, and the error is 117, structure needs cleaning?
  20. So I still don't know how to use it, guys. I've got mc open in a terminal. I've got a file that I need to remove. None of the keys do anything apart from F8, and that tells me: Cannot stat "file.xml": Structure needs cleaning. Even just an explanation of how to launch the help would be awesome (if it has a list of commands).
  21. Why would disabling C-states and power supply idle control make one of my GPUs disappear? VFIO and Devices see it, and it's in its own group, not passed through or anything (it's a GPU for Plex), but in the Settings / Unraid Nvidia section under Info it's missing. (VFIO-PCI Config beta bug, I guess.) Update: reboot, same. Remove the GPU, reboot, replace the GPU, reboot, same. Move the GPU to a different slot (from x16 to x8)... it's back. Move it all back to the original layout, still broken. So this configuration leaves me with only one PCIe x16 slot, unless I swap that card for the one in the other x16 slot, which means moving every card around, changing the BIOS to display on a different slot, then removing the VFIO config file from Unraid, setting it all up again and then testing the new configuration. What a bitch. Update 2: that is possible. I have 2x x16 slots again and everything is passed through; I just need to create a new template for the affected VM and wait and see if it's fixed. Pray for me...
  22. Global C-state Control and Power Supply Idle Control changed. 2.9GHz core (below stock), 2133MHz RAM. Let's see how we go.
  23. Are you having issues? I would assume boosting issues are still a thing, but only on certain hardware (first-gen Zen). It seems like there wouldn't be much point patching it now unless Zen 3 is also affected, as most people affected will have either worked around it by now or upgraded (not many new builds with first-gen Ryzen now, I wouldn't have thought). I could be wrong though. I have some sort of stability issue myself (on a 2990WX), so I'm looking at this now.
  24. I CAN'T START MY VM - VFIO GROUP COULDN'T BE CLAIMED. The VFIO group it was trying to claim was not one I wanted to pass through, but a multifunction counterpart. I use PCIe ACS override set to Both, as that was necessary when I set it up. Today I noticed that the USB controller I pass through to one of my VMs seems to be part of the chipset, and it has a multifunction part that is the SATA controller, so I'm going to try and use a different USB controller for passthrough. This will involve some rewiring of the internal USB connectors, but it allows the VM to start without requiring a full restart and still get some USB. Edit: this is a different problem to the main one, it turns out. I'll leave it here though; it might help someone.