Can0n

Everything posted by Can0n

  1. Sadly, putting the old server name in the keywords only pulls up old syslog files. I might have to stop and start the array again to get it to show up in /etc/cron.d/root again. I have a sneaking suspicion it's an old User Scripts plugin file lingering and staying active, but it could be something else.
  2. I checked my go file... the issue with mine is the old job keeps coming back. It's not in User Scripts and not in my go file. If I stop the array and start it, it's back.
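     If it helps anyone else chasing this: my understanding is that Unraid rebuilds /etc/cron.d/root at array start from the .cron fragments persisted on the flash drive, so searching /boot/config is one way to find whatever keeps re-adding the job. The server name below is just a placeholder, not my actual hostname:

     ```shell
     # Search the persisted config on the flash drive for the stale command
     # (replace old-server-name with the hostname or path from the job).
     grep -r "old-server-name" /boot/config/ 2>/dev/null

     # Dump the plugin cron fragments that get merged back into /etc/cron.d/root
     find /boot/config/plugins -name '*.cron' -exec cat {} +
     ```

     Whatever file the stale line turns up in is the one getting re-applied every time the array starts.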
  3. Still getting this too on unRAID 6.12.2. I just stopped the VM service to move domains from a ZFS pool/dataset to the array (mover isn't working to move data from ZFS datasets to the XFS array, by the way), used rsync -avx to copy the domains, then tried to restart libvirt and it fails with the same umount error. It's long-standing, but the only Docker container that was running was Plex; I tied the main PID to that but couldn't stop it as it was in use.
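     For reference, the copy itself is just plain rsync; the paths here are examples, not my exact shares:

     ```shell
     # Copy VM images from a ZFS dataset to an array share.
     # -a = archive mode (preserve permissions, ownership, timestamps),
     # -v = verbose, -x = don't cross filesystem boundaries.
     rsync -avx /mnt/zfspool/domains/ /mnt/user/domains/
     ```

     The trailing slashes matter: with them, rsync copies the contents of the source directory rather than creating a nested domains/domains directory at the destination.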
  4. It saves for me, so not sure what voodoo magic my server is doing, but I had some old cron jobs in there from before User Scripts came out, and I even commented one out because it was no longer valid and occasionally showed an invalid path in my logs when it ran. After a massive issue last night with BTRFS errors flooding my logs and causing my VM and Docker containers to all crash, I had to reboot forcibly because stopping the array failed, and the comment # is still there, so not sure what is keeping mine in place. User Scripts even puts its schedules in there.
  5. I even had a very old entry to restart a Docker container every second day at 6am.
  6. It does work. Any changes made to the operating system get written back to the USB immediately. Case in point: I came across this post due to an rsync error I was seeing in my logs. I checked User Scripts and it wasn't there, so I went looking for the rsync command I used to run against a server that doesn't exist anymore. I found it in /etc/cron.d, in a file called root; that destination server has been decommissioned for over a year. My newest user scripts from less than a month ago were there too, and my server was restarted just 8 days ago.
  7. This is kind of old, but you can edit cron in /etc/cron.d/root using nano or vi and enter what you want. That said, I second User Scripts: you set the schedule and it enters it directly into the file I just mentioned.
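     For anyone editing that file by hand, the entries use the standard cron format: minute, hour, day of month, month, day of week, then the command. This line is just an illustration, not one of my actual jobs:

     ```shell
     # /etc/cron.d/root - run a script every day at 03:40
     # min hour dom mon dow  command
     40 3 * * * /boot/config/scripts/example.sh &> /dev/null
     ```

     Note that, as far as I can tell, Unraid's root file omits the per-line user field you'd see in a typical /etc/cron.d entry; everything in it runs as root.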
  8. I have an Intel i7 10700K; there are no packages in my NerdTools for edac_mce_amd or rasdaemon.
  9. Hello, I just saw a message from Fix Common Problems that there were hardware events on my server. It recommends installing mcelog via NerdTools and posting diagnostics. I cannot find mcelog in NerdTools, which has been installed for a long time. I have attached my diagnostics; perhaps one of you fine people might be able to tell me what the issue is? thor-diagnostics-20230429-0907.zip
  10. I have an ASUS board, a Prime 590-V, with an updated BIOS, and VMs get the same error; there is no SVM option I could find. ***Edit: downgraded the BIOS and had to recreate the VM, but it's working.
  11. Me too, I was about to post a link. Four-plus years on the same SanDisk Cruzer Fit without needing a replacement, knock on wood.
  12. I had the official version of the container do a similar thing last night at 10:30 p.m. Everything just crashed, and restarting the container never resolved it. I had to stop my entire array and restart, and then it started working again.
  13. Space Invader One has a comprehensive video on how to do this; just search YouTube for his videos. Unfortunately I don't have it handy right now, but if I find it I'll post it below. It is very easy to migrate over; I did it myself to the official container. Sadly it does not update on its own: it posts in the web GUI that an update is available, and all you have to do to update the official container is restart it. It would be nice if it sent a notification through Docker that there was an update so it could update through normal channels. Other than that it's been rock solid for me.
  14. I don't think 128 GB is overkill if you end up running a whole bunch of VMs, or even a few VMs. Lots of RAM is never a problem, haha.
  15. OK, thank you, that makes more sense. I was mostly not concerned with the trivial two-month gap between six months and four months, but more the fact that RCs are typically no more than a month apart, so even four months is out of the ordinary. I hope it's a big update. I really am looking for multiple arrays so I can split up my 28-disk array into two arrays with 2 parity drives per array for better protection. Something I have been asking for for a long time is letting us users decide how many parity drives we want; as my drives age, I am worried about losing more than 2 at once, to the point that I now keep a couple of spare 10TB drives on my shelf as cold spares ready to go.
  16. Not in my experience; I have always been able to see the updated RC the same day it was posted. The first post did not mention it was now available; it was about PuTTY and RSA keys possibly needing to be updated.
  17. Actually, it was posted here in September, not November, hence the six months since release.
  18. I just read that last night. I hope so too; the current 6.9.2 kernel is impacted by it.
  19. Hi all, I normally jump on the RCs when they come out, but with 6.10 I have not, mainly because I'm down to one server now. I am curious when rc3 is coming; rc2 came out back in September, and usually the updates are much faster than this. I hope it's not a sign that Unraid is losing focus or having other business-related issues.
  20. I just installed a Server 2022 trial, same issue. I can't get an IP as the NIC drivers are not set up yet, so I can't use RDP, and there are no accessibility options on the screen; it's just asking for Ctrl+Alt+Del, which I can't send. ***Edit: I found the little side menu in noVNC and was able to send the three-finger salute!
  21. I am using OVMF with i440 5.1. I have to use OVMF as I am running bare-metal Windows on an NVMe, meaning no support for legacy BIOS/SeaBIOS. 1909 ran great; all games ran perfectly, as did the OS. Then I got a notification that 1909 was no longer supported, so no more security updates would be coming through. Weirdly, Cyberpunk 2077 ran great until Jan 2021, then it dropped to a max of 13fps at all resolutions and settings; as soon as I shut down and boot directly into Windows off the NVMe, CP2077 runs fine, so I chalk that up to an update from them that breaks virtualization of the game. So I did try to update to 21H1 over 1909, but it kept failing. After a few days I decided to wipe and load Windows from scratch. I installed it from a USB stick I made, directly to the NVMe, with the other drives pulled and the Unraid USB removed, so like a normal Windows install. I got all my programs and drivers, then grabbed the UUID so I could update my VM config with it. I shut down, connected the drives, booted back into Unraid, updated the UUID, and started the VM. The CPUs are pinned for a long time after startup with nothing going on; Chrome was bad, all 8 threads pinned to 100% as long as it's running. Testing CP2077 I get 1-2fps; Destiny 2 gets 13fps (previously 50-60 on max settings at 2560x1440@60Hz). I went back to bare-metal Windows and tried both games again and they run perfectly. So I suspect Windows 21H1 breaks something in virtualization, and QEMU and Hyper-V will need an update to run smoothly with it. In the meantime I am in the process of moving all my data to my main Unraid server, then shutting this one down and booting straight to Windows. The mouse lag is horrible right now, and the mouse is USB on a passed-through USB card, so I can't really use it to look for work or edit my resume, which is what I was using it for...
  22. I should add that Windows performance itself is sluggish and slow too; there are significant mouse delays on OS menus when I drag the mouse back and forth. Again, none of that happens when I shut down Unraid and boot directly to the NVMe.
  23. Hi, I recently had to reinstall Windows on my NVMe drive to get the latest update, which is 21H1. Since then, all the games I run on the passed-through RTX 2060 only get around 18 frames a second, when I was previously getting over 50 on Windows 10 v1909. I have Windows installed directly on the NVMe and point the VM configuration at that drive. I have not changed the allocated resources, other than throwing a couple of extra cores at it after I discovered massive input lag and high CPU usage by pretty much every piece of software installed. If I shut down and boot directly to Windows, everything's fast and snappy, so it leads me to believe there's something wrong with QEMU or the hypervisor for the latest M$ has to offer. My VM configuration is pretty straightforward: core 0/HT reserved for Unraid (i7 8700), cores 1-5 and their HT for the VM, 16GB RAM of 48GB, passthrough of a PCIe USB 3 card and the RTX 2060, and the VM boots directly to the NVMe. Running Unraid 6.9.2. Attached are the diagnostics: freya-diagnostics-20210525-1120.zip