Can0nfan

Members
  • Content Count

    469
  • Joined

  • Last visited

Community Reputation

23 Good

About Can0nfan

  • Rank
    Advanced Member
  • Birthday 06/04/1977

Converted

  • Gender
    Male
  • Location
    Calgary
  • ICQ
    38688944
  • MSN Messenger
    cyphus4free@hotmail.com
  • Personal Text
    my ICQ and MSN are dead but i do remember those!

Recent Profile Visitors

1135 profile views
  1. It does seem very laggy over the unRAID VNC client as well, so I'll try a new template and see how that works. Thank you for the suggestion.
  2. Nope, same system as pre-6.7: dual Xeon E5-2680 v1s. The VMs are on a cache array of dual 1TB SSDs, and I have plenty of RAM for unRAID and the VMs — 94GB total, with 16GB allocated to Windows, 16GB to one Fedora 30 server, and 1GB to another Fedora server. It does seem most likely to be an issue introduced in the unRAID 6.7 RCs and the Microsoft beta RDP client for my 27" 5K iMac; while still sluggish via a Windows 10 laptop I have, it's not nearly as bad as when I use the RDP client on my iMac.
  3. My three are; however, my Windows 10 VM is awfully sluggish and slow. Still trying to see if it's the Mac RDP client or the VM itself.
  4. Hi @limetech, great work team! I didn't see anything about the missing VMs issue from RC6 — I'm still running RC5 because of that. Any word on whether this is resolved in RC7? edit: if I'm right in my thinking, this is the fix? "Revert libvirt from 5.9.0 to 5.8.0 because 5.9.0 has bug where 'libvirt_list_domains()' returns an empty string when all domains are started instead of a list of domains."
  5. I'm getting the same issue on one of my HGST_HUH728080ALE 8TB SATA drives, which JUST showed up. I have 3 others of the same model without issue; this one just won't enable write caching. Another similar model has no issue: HGST_HUH728080ALE604 is fine, but HGST_HUH728080ALE600 is the one that won't enable with the hdparm or sdparm commands:

     root@Thor:~# hdparm -W 1 /dev/sdm
     /dev/sdm:
      setting drive write-caching to 1 (on)
      write-caching =  0 (off)

     Also tried:

     root@Thor:~# sdparm --set=WCE /dev/sdm
         /dev/sdm: ATA  HGST HUH728080AL  T7JF
     root@Thor:~# sdparm -g WCE /dev/sdm
         /dev/sdm: ATA  HGST HUH728080AL  T7JF
     WCE         0  [cha: y]

     Diagnostics attached: thor-diagnostics-20191114-1147.zip
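     The telltale sign in the output above is that hdparm reports the setting it just tried to apply, then reads the cache state back — and on the bad drive the read-back disagrees with the request. A minimal sketch of checking for that mismatch by parsing the captured output (the parsing function and sample string are my own illustration, not part of any unRAID tooling):

     ```python
     import re

     def write_cache_enabled(hdparm_output: str) -> bool:
         """Parse `hdparm -W` output and return whether the drive
         reported write-caching as actually on (1) or off (0)."""
         m = re.search(r"write-caching\s*=\s*(\d)", hdparm_output)
         if not m:
             raise ValueError("no 'write-caching =' line found")
         return m.group(1) == "1"

     # Output captured from the failing drive above: the command asked
     # for "on", but the drive reported the cache back as off.
     sample = """/dev/sdm:
      setting drive write-caching to 1 (on)
      write-caching =  0 (off)"""

     print(write_cache_enabled(sample))  # False -> the setting did not stick
     ```

     The regex deliberately keys on the `write-caching =` read-back line rather than the "setting drive write-caching to" echo, since only the former reflects the drive's actual state.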
  6. 6.8 RC4 and later corrected the database corruption that some got (I never saw it over the last few years). Try my method — since my last post I have removed all the Manjaro VMs.
  7. The last 3 years, including 6.8, have all been stable for me with Plex running on my cache drives, on two servers.
  8. Under the wg0 tunnel, "Local endpoint" is your public-facing IPv4 address. Under the peer (your phone), make sure the peer endpoint is the static internal IP of your unRAID server.
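     For anyone mapping those GUI fields onto a raw wg-quick config, a minimal sketch of the server side (all keys, addresses, and the port here are made-up placeholders — the unRAID WireGuard settings page generates the real file for you):

     ```ini
     [Interface]
     # the server's address inside the tunnel, and the port the
     # "Local endpoint" forwards to
     Address = 10.253.0.1/32
     ListenPort = 51820
     PrivateKey = <server-private-key>

     [Peer]
     # the phone peer; AllowedIPs is its address inside the tunnel
     PublicKey = <phone-public-key>
     AllowedIPs = 10.253.0.2/32
     ```

     The phone's own config is the one that carries an `Endpoint =` line pointing back at the server.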
  9. Did you set up your port forwarding? I'm on Telus (Canada) and had no issues once I set up port forwarding.
  10. I found that if you do something strange in the setup and hit Apply, you will lose access to the server: you will not be able to ping it or load the interface. To fix it without rebooting, after deleting autostart from /etc/wireguard, just get to the command line locally and type:

      ifconfig wg0 down

      The server immediately becomes available, and then you can go back to WireGuard, turn it off, correct the setting, and enable it again.
  11. THANK THANK THANK you for this. I was trying to set up a backup connection on my second unRAID server — WireGuard connects, but I cannot surf my internal LAN. I borked my primary wg config, so unRAID is unpingable, and I had to find a way to stop wg while my system was running a parity check. I was racking my brain trying to configure static routes on my USG4P.
  12. You mean 6.8? It's pretty stable; you should try it. You can always roll back if you have issues.
  13. Weird... I have three servers, all running with the array starting fine on RC5. You should update, run diagnostics, and post them here.
  14. Mine is constantly showing in Cockpit under logs, but once I stop the containers, unmount and remount the NFS share, then start the containers, they are fine for a couple of hours. Rebooting Fedora itself will correct it for maybe half a day or so.
  15. I have been seeing a lot of NFS issues with my Fedora Server 30 VM on one unRAID server hosting Sonarr and Radarr, with missing root folders located on two other unRAID servers. The logs show this:

      NFS: (SERVER IP for NFS mount) server error: fileid changed fsid 0:51: expected fileid 0x2800000c6b19ee, got 0x2800000c6e58b1

      I switched the tunable to off, as mentioned in the RC release notes, on one server to see if this alleviates the issue, as I have been seeing it pop up every few hours — but only since 6.8 RC1. And before anyone asks: I can't use SMB, as the script a buddy created explicitly looks for the NFS mounts to map to the dockers running in Fedora.
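      For context, the mounts in question are ordinary NFS mounts inside the Fedora VM. A hypothetical /etc/fstab entry of the kind involved (server IP, export path, and mountpoint are all made-up placeholders, not my actual config):

      ```
      # unRAID export mounted into the Fedora VM for Sonarr/Radarr
      192.168.1.50:/mnt/user/tv   /mnt/tv   nfs   defaults,_netdev   0 0
      ```

      The `fileid changed` error above is the client complaining that the same file on such a mount came back with a different NFS file handle between lookups.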