Marshalleq

Members
  • Content Count

    305
  • Joined

  • Last visited

Community Reputation

20 Good

About Marshalleq

  • Rank
    Advanced Member
  • Birthday October 17

Converted

  • Gender
    Male
  • URL
    https://www.tech-knowhow.com
  • Location
    New Zealand
  • Personal Text
    TT

Recent Profile Visitors

364 profile views
  1. Well, it could be my imagination, but the GUI seems a lot, lot faster.
  2. rc3 just came out. Changelog below. It does mention a new version of md/unraid and a kernel as per the notes below. I've installed it - no idea if this is addressing this issue though.

     Version 6.7.3-rc3 2019-09-17 (Changes vs. 6.7.2)
     Linux kernel:
       version: 4.19.73
     md/unraid: version 2.9.8

     Version 6.7.2 2019-06-25
     Base distro:
       kernel-firmware: version 20190620_7ae3a09
       php version: 7.2.19 (restore sqlite support)
       sqlite version: 3.28.0
     Linux kernel:
       version: 4.19.56
  3. Thanks. Seems like it was only added three weeks ago though - that's disappointing.
  4. If you can find evidence of that, I'll believe you, and it would make the rest of the people on this list happy. But to my eyes, no, it hasn't - only through a third party. Quite poor form in my opinion.
  5. Multiple hard drives across both onboard and PCIe-based controllers, two completely dead SSDs, and an additional brand-new SSD on top of that with read errors, mostly occurring post-reboot. It may or may not be related to this bug, but it seems to me it is related to this version. No idea why. Yeah, it could be hardware, but again it 'coincidentally' arrived with this version.
  6. The scariest thing to me is that I keep getting read errors on this version. I've had several on a brand-new SSD, which are permanently etched into its record and those sectors disabled, and I've had them on hard drives as well - which it seems to select at random. Every now and then it kicks them out of the array, and so far this always coincides with a reboot. I've also had two SSDs completely die on this version, which my gut tells me is caused by this, but I don't know how I could ever prove that. I just don't want to downgrade, as GPU passthrough is so much better in this version, but I'm thinking again I might do that today. It's just too much pain.
  7. Well said. Though you get what you pay for. Unraid is pretty cheap really. I run servers and things on mine, but really Unraid is a long way off from being a mission critical type of setup. It's most definitely aimed at home installations.
  8. Fantastic. However you squeezed that information out of him, I'm grateful. Personally, I didn't think it would be that hard to just post that here. Either way, it's hard to be annoyed at a guy whose profile only brings memories of island holidays and Hawaiian shirts lol.
  9. That sounds good - have you been told that officially by anyone? They're still remarkably quiet. I don't remember the database corruption issue taking this long or being so 'still', and I'd say this is at least as bad....
  10. Yes, and it's awesome. Worked quite well for me.
  11. Seems like you've not mapped the transcode folder in your Docker container to /tmp on the host? I'm assuming that because you're on an Unraid forum you're using Docker.... However, I agree with the above, definitely do a ramdisk. It does work better. (There's a rough sketch of the mapping after this post list.)
  12. @Danuel I've actually done this configuration myself now for a bit of fun - specifically Cloudflare, the same letsencrypt container and Nextcloud. I got the same errors you got at some point, and a bunch of others as well. Have you forwarded your firewall ports from 443 to 1443 and 80 to 180? The way you have configured it above will need that. Alternatively, you can change the ports that Unraid is on (make sure you write them down, otherwise you may lock yourself out). I chose to change the ports Unraid is on, as it meant I could access the letsencrypt-hosted platforms inside and outside my network in the same way. One of the issues I had was that I could not get NAT reflection working (I have OPNsense, which is very similar to the pfSense in the video). I think we can get this working together if we start working through your firewall etc. (There's a rough sketch of the port mappings after this post list.)
  13. Yes, of course, however I'm just particularly referring to the difference between host passthrough and emulation of the CPU. I thought host passthrough locked those cores for exclusive use, but I'm now thinking it doesn't.
  14. So I accidentally had CPU host passthrough AND emulated QEMU64 running on the same cores. I had believed that passthrough was like locking the guest to those cores physically. However, after thinking about it for a bit, I thought maybe emulation was more about getting a reduced but more consistent set of CPU extensions so that virtual machines can be live-migrated. So if host passthrough allows more CPU extensions to be passed through (and it does), is there anything stopping me from running all my VMs in host passthrough mode even if they touch the same cores simultaneously? Does anyone have a definitive answer? I'm still googling, but haven't come up with anything yet. (The two CPU modes are sketched after this post list.) Many thanks, Marshalleq
  15. So I'm on the latest RC. The governor is definitely scaling etc., but I'm running something that needs a lot of clock speed. You could say I'm splitting hairs, but I never see the system (Ryzen Threadripper 1950X) going higher than 3.7GHz, and it is rated for a boost clock of 4GHz in its default state. I've tried shifting it into performance mode, but I suspect that reduces its chances of boosting, to be honest, since you generally only want a few cores boosted that high. Any thoughts? (Some commands for checking the governor and boost state are sketched after this post list.)
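
For post 11: a minimal sketch of the transcode-to-ramdisk mapping, assuming a linuxserver/plex container - the image name, appdata/media paths and /transcode container path are illustrative assumptions, not settings taken from that thread.

    # On Unraid, /tmp normally lives in RAM, so mapping it into the container
    # effectively gives the transcoder a ramdisk. Adjust paths to suit.
    docker run -d \
      --name=plex \
      --net=host \
      -v /mnt/user/appdata/plex:/config \
      -v /mnt/user/media:/media \
      -v /tmp:/transcode \
      linuxserver/plex

Inside Plex you would then point the transcoder temporary directory at /transcode.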
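
For post 12: a rough sketch of how the letsencrypt container might be published on the alternate ports, assuming the linuxserver/letsencrypt image - the domain, validation method and paths are placeholders, and the router rules depend on your firewall.

    # Router/firewall forwards WAN 80 -> host 180 and WAN 443 -> host 1443
    # (the alternative is moving the Unraid web UI off 80/443 instead).
    docker run -d \
      --name=letsencrypt \
      --cap-add=NET_ADMIN \
      -p 180:80 \
      -p 1443:443 \
      -e URL=example.com \
      -e VALIDATION=http \
      -v /mnt/user/appdata/letsencrypt:/config \
      linuxserver/letsencrypt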
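
For post 14: a sketch of the two <cpu> stanzas as they can appear in a VM's libvirt XML (Unraid exposes this via the VM's XML view). Neither mode reserves physical cores by itself; core assignment is done separately with <cputune>/<vcpupin>, so the cpuset values below are only examples.

    <!-- Host passthrough: the guest sees the host CPU's feature flags -->
    <cpu mode='host-passthrough' check='none'/>

    <!-- Named/emulated model: a reduced but migration-friendly feature set -->
    <cpu mode='custom' match='exact'>
      <model fallback='allow'>qemu64</model>
    </cpu>

    <!-- Pinning is independent of the CPU model -->
    <cputune>
      <vcpupin vcpu='0' cpuset='8'/>
      <vcpupin vcpu='1' cpuset='24'/>
    </cputune>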
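
For post 15: a few commands for checking what the governor and boost are actually doing; the sysfs paths assume the acpi-cpufreq driver and can differ by kernel, so treat them as a starting point.

    # Current governor and whether boost is enabled
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    cat /sys/devices/system/cpu/cpufreq/boost

    # Watch per-core clocks while the workload runs
    watch -n1 "grep MHz /proc/cpuinfo"

    # Switch every core to the performance governor
    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
      echo performance > "$g"
    done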