FlynDice

Members · Posts: 13

  1. Works for me too, back to normal now! 👍
  2. Try clearing your browser cache in Chrome. That fixed the noVNC problem for me.
  3. Had a similar problem & clearing the browser cache resolved it for me.
  4. noVNC errors

    I had a similar problem & clearing the browser cache solved it for me...
  5. Unfortunately no difference. Update: I restored to 6.7.2 and ended up with the same problem. After clearing the browser cache, noVNC access to the VMs was normal in both 6.7.2 and 6.8 😁
  6. After upgrading to 6.8 I'm now getting this noVNC error while trying to access both VMs I run:

    noVNC encountered an error:
    Uncaught SyntaxError: The requested module './util/browser.js' does not provide an export named 'dragThreshold'
    http://192.168.1.4/plugins/dynamix.vm.manager/vnc.html?autoconnect=true&host=192.168.1.4&port=&path=/wsproxy/5700/:0:0
    SyntaxError: The requested module './util/browser.js' does not provide an export named 'dragThreshold'

    I can access the VMs just fine with NoMachine and Remmina, but not the VNC Remote menu option in Unraid. (A quick way to check whether the stale file is server-side or in the browser cache is sketched after this list.)
  7. Just successfully upgraded my single 128 GB SSD cache drive to two 250 GB SSDs using Gridrunner's YouTube video. Thanks Gridrunner. After finishing, a thought crossed my mind that perhaps would have made the process a bit simpler... Backup & restore is simple enough, but copying the libvirt file & reconstructing the docker file had a lot of steps to follow & double-check. I'm wondering why simply adding a new drive to the cache pool in raid1 config (the default mirror) wouldn't mirror the data onto the new drive; then, after removing the original drive from the pool, shouldn't I be left with my cache data on the new, larger drive without worrying that I missed that one critical step... or doesn't it quite work that way? (See the btrfs sketch after this list.)
  8. Don't have time at the moment to investigate more besides reverting to 6.5.2, running Fix Common Problems, & disabling & re-enabling Docker & VMs, which did not work... Ran Fix Common Problems and the Update Assistant & then upgraded to 6.5.3. After rebooting, no VMs or dockers, and I can't find the quick fix to get them back... Anyone else run into this or have some recovery advice? Thanks

    Turns out for some reason my cache drive was removed from the array. Stopping the array and placing the drive back in the cache slot returned everything to normal. Not sure how the drive ended up removed from the cache slot though...
  9. Just an update on what worked & didn't work. For CPU volts, manual 1.3 & 1.4 each locked up within 6 hours; zenstates --c6-disable locked up at 18 hours again. Back to C-states disabled, which seems to be the safe haven for now... (How that zenstates fix is typically wired in at boot is sketched after this list.)
  10. Thanks for the heads-up on this. After reading up a bit, I have to agree with everything you've said. I'm trying manual 1.3 V for now on the CPU voltage to see how it works, although the hardware readings in the BIOS are telling me the CPU voltage is still hitting 1.37 V on this setting. It seems the ASRock BIOS defaults to 1.45 V for several of their boards when you enable manual vs. auto. Bad on them, I would say... To their credit, voltages over 1.44 do show as red, but there's no info as to what this means. I thought it was just alerting me that I was setting things manually. 15 minutes with fingers crossed so far...
  11. What's it supposed to be at, then? It was 1.45 before I changed anything. I guess some research is in order...
  12. Working on about 30 hours so far with no problems, vs. 2 lockups at around the 6-hour point with C-states enabled before. Enabled C-states in the BIOS, commented out the zenstates --c6-disable fix for now, & upped the CPU volts from 1.45 to 1.55.
  13. From the Info tab:

    M/B: ASRock - X370 Pro4
    CPU: AMD Ryzen 5 2400G with Radeon Vega Graphics @ 3600
    HVM: Enabled
    IOMMU: Enabled
    Cache: 384 kB, 2048 kB, 4096 kB
    Memory: 16 GB (max. installable capacity 256 GB)
    Network: eth0: 1000 Mb/s, full duplex, MTU 1500
    Kernel: Linux 4.14.31-unRAID x86_64
    Unraid: 6.5.1-rc3

    It crashed twice within about 5-6 hours with C-states enabled and the zenstates --c6-disable fix. Runs rock solid with C-states disabled in the BIOS so far, maybe 3 weeks or so. The BIOS does not have the option to disable only C6.
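A note on the noVNC error in post 6: an ES-module "does not provide an export named" error typically means the browser paired a freshly served module with a stale cached copy of './util/browser.js', which is why clearing the browser cache (posts 1-5) resolves it. A minimal diagnostic sketch in shell, assuming the noVNC bundle is served under the dynamix.vm.manager plugin path seen in the error message; the exact path to browser.js is a guess, not verified against a real Unraid install:

    # Fetch the module the same way the browser would and look for the
    # export. Host taken from the error message; the path to browser.js
    # is an assumption based on noVNC's usual layout.
    curl -s 'http://192.168.1.4/plugins/dynamix.vm.manager/novnc/core/util/browser.js' \
      | grep -n 'dragThreshold'

    # If grep finds the export, the file on the server is current and the
    # stale copy lives in the browser cache; a hard reload (Ctrl+Shift+R)
    # or clearing cached files for the Unraid host should fix it.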
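On the cache-pool question in post 7: the add-then-remove idea is roughly how btrfs handles it at the filesystem level, with one wrinkle. A sketch of those steps, assuming the pool is mounted at /mnt/cache and using hypothetical device names (/dev/sdX1 for the old 128 GB drive, /dev/sdY1 for a new 250 GB one); on Unraid the GUI normally drives this, so treat it as illustration rather than a recommended procedure:

    # Add the new SSD to the single-device pool, then rebalance with the
    # raid1 profile so existing data is mirrored onto it.
    btrfs device add /dev/sdY1 /mnt/cache
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

    # The wrinkle: raid1 requires two devices, so the old drive can't be
    # removed while the pool is still raid1. Convert back to a
    # single-device profile first (-f is needed because this reduces
    # metadata redundancy), then drop the old drive.
    btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache
    btrfs device remove /dev/sdX1 /mnt/cache

    # Alternative: migrate old -> new in one step instead.
    btrfs replace start /dev/sdX1 /dev/sdY1 /mnt/cache
    # After replacing onto a larger drive, grow the filesystem to use
    # the extra space (devid 1 assumed here).
    btrfs filesystem resize 1:max /mnt/cache

So the mirror-then-remove idea does work in principle; it just needs the extra convert-back step (or btrfs replace) because a raid1 pool refuses to go below two devices.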
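For the Ryzen C-state posts (9, 12, 13): the zenstates --c6-disable workaround has to be reapplied on each boot. A minimal sketch of how it is commonly added to Unraid's /boot/config/go file, assuming zenstates.py from the ZenStates-Linux project has been copied to the flash drive at /boot/zenstates.py (that path is an assumption):

    #!/bin/bash
    # /boot/config/go -- runs once at boot on Unraid.

    # zenstates.py pokes CPU MSRs, so the msr module must be loaded.
    modprobe msr

    # Disable the C6 package state to work around the Ryzen idle
    # lockups described above. Commenting this line out (as in post 12)
    # re-enables C6 on the next boot.
    python /boot/zenstates.py --c6-disable

    # Stock line: start the Unraid web UI.
    /usr/local/sbin/emhttp &

When the BIOS offers no way to disable C6 alone, as in post 13, this script-level toggle is the usual alternative to disabling all C-states globally.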