landS
Community Answers

  1. @methanoid Check out (my) first 2 responses to this post. I spent a lot of time and went through a lot of controllers to find a solution that worked (full-speed read and write, full direct functions in the VM, firmware updates, etc.) and would not cause boot loops in my rig. In short, I'm using an out-of-the-box ODD controller that I've blacklisted; a lot of controllers caused issues, but a specific ASM1062 one did not. The one thing that will not work doing this in my rig is rebooting Unraid: I need to power off, then power on. @testdasi has a post in this chain with a dedicated link for folks that got full read/write without a controller; that method did not provide full access to the drive in my rig. (A sketch of the blacklisting step is below.)
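For the mechanics of the blacklisting step, here is a minimal sketch of hiding a controller from the host so the VM can claim it. The 1b21:0612 ID is the common ASM1062 ID but is an assumption here; confirm yours first:

# Find the SATA controller's [vendor:device] ID -- verify before using it below
lspci -nn | grep -i sata

# Bind that ID to vfio-pci at boot by adding it to the append line in /boot/syslinux/syslinux.cfg:
#   append vfio-pci.ids=1b21:0612 initrd=/bzroot
# Caveat: binding by ID claims every device with that ID, so this only suits a controller
# that appears once in the system.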
  2. Howdy folks. I've been running 2 Windows 10 VMs with GPU passthrough since... 2016ish. It is time to upgrade the GPUs:
1060 3GB -> 3050 6GB fanless
1080 8GB -> 4060 Ti 8GB semi-fanless
Will the following process work?
Stop each VM
Edit each VM: change the GPU from the Nvidia part to VNC, change the sound card to None, save
Reboot Unraid
Power down Unraid
Swap out the old GPUs for the new models
Power on Unraid
Under Tools / System Devices, check the boxes next to the 2 new GPUs (they are NOT checked for the 1060 or 1080)
Reboot Unraid
Edit each VM: add the 3050/4060 respectively as the 2nd GPU, add the sound card to the Nvidia part, save
Turn on the VM
Using VNC, download and install the latest Nvidia drivers
Reboot each VM
Thank you!
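One extra step that may be worth folding in after the physical swap: confirming from the Unraid terminal that the new cards enumerated, and at which bus addresses, before re-editing the VMs. A general sketch, nothing Unraid-specific:

# List Nvidia devices with their bus addresses and [vendor:device] IDs
lspci -nn | grep -i nvidia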
  3. The following procedure consistently allows VM config updates for Windows images on my configuration. (A quick verification snippet follows at the end of this post.)

1 - Back up the VMs first, in case you need to reset
Stop all VMs and Dockers
Copy the VM files. The bottom of this post contains what I use in a User Script

2 - Download the latest VirtIO ISO from Settings / VM Manager
Edit VM, then Save
___Change the VirtIO ISO to the latest ISO

3 - GPU access prep (using an Nvidia 1060 as an example)
Edit VM, then Save
___Add VNC as the 1st GPU
___Move the Nvidia 1060 to the 2nd GPU. Ensure the sound card from the video card is also passed in
Start with the VNC console
Ensure VNC access is OK
Update the Nvidia Game Ready Driver (GRD)
Set monitors to mirror (VNC will be a little fuzzy at 1920x1080)
Ensure GPU passthrough is OK
Restart VM, ensure OK, shut down VM
___Note that the GPU may not initialize until Windows is at the login screen; VNC, however, will work immediately

4 - Bump up the Red Hat VirtIO drivers (assuming the VirtIO driver ISO is mounted at D:/)
For each of the following, go to Device Manager, right-click, browse to the D:/ drive (virtio), scan:
___A - Disk drivers: D:/ virtio-win / viostor / w10 / amd64
___B - Network drivers: D:/ virtio-win / NetKVM / w10 / amd64
___C - Storage controllers: D:/ virtio
Install the latest QEMU guest agent: D:/ virtio-win / guest-agent / qemu-ga-x86_64.msi
Restart VM, ensure OK, shut down VM

5 - Now let's update the machine type, here assuming i440fx-2.11 to i440fx-7.2
You can update i440fx to a newer i440fx, and Q35 to the latest Q35, but you cannot switch between i440fx and Q35
Edit VM, then Save
___Change Machine from i440fx-2.11 to i440fx-7.2
Start with the VNC console
Restart VM, ensure OK, shut down VM

6 - Now let's update the BIOS. You can update OVMF to OVMF-TPM
Edit VM, then Save
___Change BIOS from OVMF to OVMF-TPM
Start with the VNC console
Restart VM, ensure OK, shut down VM

7 - Now let's update the VirtIO CD-ROM bus. You can update from IDE to SATA
Edit VM, then Save
___Change the VirtIO Drivers CD-ROM bus from IDE to SATA
Start with the VNC console
Restart VM, ensure OK, shut down VM

8 - Now let's update the network model. You can update virtio to virtio-net
Edit VM, then Save
___Change Network Model from virtio to virtio-net
Start with the VNC console
Restart VM, ensure OK, shut down VM

#!/bin/bash
# Dated backup of VM definitions: XML, NVRAM, and the ISO/docker share
backuplocation="/mnt/user/VM_BackupFolder/VM/"
datestamp="_$(date '+%d_%b_%Y')"
dir="$backuplocation/vmsettings/$datestamp"

if [ ! -d "$dir" ]; then
  echo "making dated folder $datestamp"
  mkdir -vp "$dir"
else
  echo "$dir already exists"
fi

echo "vm xml files"
rsync -a --no-o /etc/libvirt/qemu/*xml "$dir/xml/"
echo "nvram"
rsync -a --no-o /etc/libvirt/qemu/nvram/* "$dir/nvram/"
echo "iso"
rsync -a --no-o /mnt/user/docker/* "$dir/docker/"

chmod -R 777 "$dir"
sleep 5
exit
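To confirm that the machine-type and BIOS changes in steps 5 and 6 actually took, here is a minimal check from the Unraid terminal. "Windows 10" is a hypothetical VM name; substitute your own (virsh list --all shows the names):

# Dump the VM's definition and inspect the machine type and loader (BIOS) lines
virsh dumpxml "Windows 10" | grep -E 'machine=|loader'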
  4. Howdy @Djoss, I just wanted to let you know that the CrashPlan web GUI is indicating "CrashPlan wasn't able to upgrade but will reattempt in one hour." Checking for updates on Unraid's Docker Containers tab indicates this is up to date. As always, thank you greatly for maintaining this wonderful container.
  5. Did this work with your hardware mix?
  6. I had a wicked time finding both a USB 3.1 controller and a SATA controller (for a BD ODD) to pass through trouble-free to a VM (on a Supermicro X10SRA-F). After about a dozen different controllers I landed on the following, which have been rock stable for the last 7 or so years:
SATA for the BD ODD: PCIe x1 ASM1061/ASM1062 - StarTech PEXESAT322I
USB 3.1, original: PCIe x1 ASM1142 - QNINE USB 3.1 PCIe Card Gen 2 (10Gbps), 2 Port PCI Express to USB 3.1 Type A Expansion Card (needs SATA power)
USB 3.1, replaced by: PCIe x4 ASM1142 - Totovin 8541587506 (no external power needed)
  7. My rig is located on a high shelf in our bedroom closet, which is limited on depth. Swapping out internal disks requires getting it down with a ladder with little maneuvering space, and this is too brutal on my permanently injured back and knee. I am swapping out my 40 lb Fractal Design R4 Silent (18.27" x 9.13" x 20.59") for a unit with front hot-swap bays. I'm also going from 8 4TB drives to 2 20TB drives. This rig has 2 GPUs in it for 2 separate VMs, 1 full-size ODD, and an ATX mobo (X10SRA-F). While not 5.25" bays top to bottom, this may help someone out:
18 lb InWin IW-PE689 (18.1" x 7.9" x 16.9") with 4 external 5.25" bays and 1 external 3.5" bay
Icy Dock MB155SP-B 5x3.5" (in the 3x5.25" bays) with a Noctua NF-B9
Icy Dock MB741SP-B 1x2.5" (in the 1x3.5" bay)
SilverStone Technology EPDM sound dampening foam
  8. Thanks itimpi! I rather like the option for disk fault tolerance, and also think the backup is a good idea. I'll stick with 1 parity and 1 pre-cleared spare. I have a second Unraid server with no exported network shares that I turn on quarterly to run 2 backup scripts:
For write-once, never-change data:
rsync -r -v --progress --ignore-existing -s /mnt/disks/TOWER_WriteOnce/WriteOnce/ /mnt/user/Media/WriteOnce
For write-many data:
rsync -av --progress --delete-before /mnt/disks/TOWER_WriteMany/WriteMany /mnt/user/WriteMany
In addition, I run the CrashPlan Docker on folders containing important data.
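Since --delete-before removes destination files that no longer exist at the source, a dry run first is cheap insurance. This is the same write-many command as above with rsync's -n flag added:

# Preview what the sync would delete and transfer, without changing anything
rsync -avn --progress --delete-before /mnt/disks/TOWER_WriteMany/WriteMany /mnt/user/WriteMany
# Drop the n from -avn to perform the real run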
  9. Strange question for you goodly folks: is there any benefit to using 2 parity disks with 1 data disk? My array consists of 5 data disks and 2 parity drives, all 4TB with an average power-on time of around 7 years. I am replacing these with 3 20TB disks: 2 Seagate IronWolf Pro CMR and 1 WD Red Pro CMR. If there is no benefit to 2 parity disks with 1 data disk, I will keep 1 of the disks as a pre-cleared hot spare. Thanks!
  10. Alas, this appears to be a troublesome beast. I updated from v6.12.1 to v6.12.2. The left-hand popout in the CrashPlan Docker's web UI shows CrashPlan v11.1.1 and Docker image v23.06.2.
Image 1 is what the CrashPlan web UI looks like when I start the Docker.
Image 2 is what it looks like after I enter the password and click Continue; this is where the message pops up. If I close the web UI and reopen it, the problem persists.
Image 3 is what CrashPlan's website looks like. The only item of note there is that the computer shows as online.
  11. Thanks for the quick turnaround, Djoss. Alas, no: the Docker's web GUI still shows the "upgrading to new version" message after entering login credentials.
Edit after 3 hours:
The Docker's web GUI still throws the "upgrading to new version" issue
The Docker's web GUI shows version 11.1.1.2
CP's website portal indicates the computer is online and the backup is at 100%
CP's website portal shows version 11.1.1.2
  12. Howdy folks. Since updating from 6.11 to 6.12 I've been having the same issue as @Gico: I can't log in to the web UI; it says CrashPlan is "upgrading to a new version."
  13. Howdy folks! I'm in need of some help.

Part 1 - Typical behavior & problem
Typically, when storms are coming through, I go into the web GUI, stop the Docker, stop the virtual machines, stop the array, and only then power down. The web GUI always comes back up after the server is powered back on. Last night a storm came in while I was away and the server shut down via UPS due to an extended power outage. After reboot I can access Dockers, VMs, and network shares, but not the web GUI.

Part 2 - Troubleshooting
Via an IPMI KVM redirect I can access the terminal. Running the following immediately allows access to the web GUI:
/etc/rc.d/rc.docker stop
/etc/rc.d/rc.php-fpm restart
From this point:
Diagnostics obtained (tower-diagnostics-20230507-0856)
Stopping the VM worked fine
Stopping the array did not; it got stuck on "/mnt/cache: target is busy - retry unmounting disk share(s)" (log snippet at the bottom of this post)
After about 15 minutes of this I pressed power off in the web GUI

Part 3 - Power back on after Part 2
The web GUI is accessible from a Windows OS only after boot, and an automatic parity check was initiated, which is atypical (tower-diagnostics-20230507-0916).
I only use http://###.###.#.###/Main to access the GUI, in Chrome or Firefox.
I can now fully access it via a Windows PC (had to dust this off).
I can no longer access it via an Android or Linux device on the same network (which is all we really use here); I typically only access via Android.

Part 4 - Questions
1 - What can I do so that this behavior doesn't happen again?
2 - How can I make the web GUI accessible via a non-Windows device again?
Thanks folks!

Log snippet from Part 2 above:
May 7 08:58:33 Tower emhttpd: Stopping services...
May 7 08:58:33 Tower emhttpd: shcmd (116): /etc/rc.d/rc.libvirt stop
May 7 08:58:33 Tower root: Stopping libvirtd...
May 7 08:58:33 Tower dnsmasq[7063]: exiting on receipt of SIGTERM
May 7 08:58:33 Tower avahi-daemon[5931]: Interface virbr0.IPv4 no longer relevant for mDNS.
May 7 08:58:33 Tower avahi-daemon[5931]: Leaving mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
May 7 08:58:33 Tower avahi-daemon[5931]: Withdrawing address record for 192.168.122.1 on virbr0.
May 7 08:58:33 Tower root: Network a4007147-6d28-4b27-8a73-0b1a1672c02b destroyed
May 7 08:58:33 Tower root:
May 7 08:58:37 Tower root: Stopping virtlogd...
May 7 08:58:38 Tower root: Stopping virtlockd...
May 7 08:58:39 Tower emhttpd: shcmd (117): umount /etc/libvirt
May 7 08:58:39 Tower cache_dirs: Stopping cache_dirs process 5345
May 7 08:58:40 Tower cache_dirs: cache_dirs service rc.cachedirs: Stopped
May 7 08:58:40 Tower Recycle Bin: Stopping Recycle Bin
May 7 08:58:40 Tower emhttpd: Stopping Recycle Bin...
May 7 08:58:40 Tower emhttpd: shcmd (119): /etc/rc.d/rc.samba stop
May 7 08:58:40 Tower wsdd2[7683]: 'Terminated' signal received.
May 7 08:58:40 Tower wsdd2[7683]: terminating.
May 7 08:58:40 Tower emhttpd: shcmd (120): rm -f /etc/avahi/services/smb.service
May 7 08:58:40 Tower avahi-daemon[5931]: Files changed, reloading.
May 7 08:58:40 Tower avahi-daemon[5931]: Service group file /services/smb.service vanished, removing services.
May 7 08:58:40 Tower emhttpd: shcmd (122): /etc/rc.d/rc.nfsd stop
May 7 08:58:40 Tower rpc.mountd[4443]: Caught signal 15, un-registering and exiting.
May 7 08:58:41 Tower emhttpd: Stopping mover...
May 7 08:58:41 Tower emhttpd: shcmd (123): /usr/local/sbin/mover stop
May 7 08:58:41 Tower kernel: nfsd: last server has exited, flushing export cache
May 7 08:58:41 Tower root: mover: not running
May 7 08:58:41 Tower emhttpd: Sync filesystems...
May 7 08:58:41 Tower emhttpd: shcmd (124): sync
May 7 08:58:41 Tower emhttpd: shcmd (125): umount /mnt/user0
May 7 08:58:41 Tower emhttpd: shcmd (126): rmdir /mnt/user0
May 7 08:58:41 Tower emhttpd: shcmd (127): umount /mnt/user
May 7 08:58:43 Tower emhttpd: shcmd (128): rmdir /mnt/user
May 7 08:58:43 Tower emhttpd: shcmd (130): /usr/local/sbin/update_cron
May 7 08:58:43 Tower emhttpd: Unmounting disks...
May 7 08:58:43 Tower emhttpd: shcmd (131): umount /mnt/disk1
…
May 7 08:58:45 Tower emhttpd: shcmd (141): umount /mnt/cache
May 7 08:58:45 Tower root: umount: /mnt/cache: target is busy.
May 7 08:58:45 Tower emhttpd: shcmd (141): exit status: 32
May 7 08:58:45 Tower emhttpd: Retry unmounting disk share(s)...

tower-diagnostics-20230507-0916.zip
tower-diagnostics-20230507-0856.zip
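For the "target is busy" hang above, a couple of standard Linux commands can help identify what is holding /mnt/cache before resorting to a forced power-off (a general sketch; the mount path is the one from the log):

# List open files on the cache mount to identify the blocking process
lsof /mnt/cache
# Or show the PIDs (with process names) keeping the mount busy
fuser -vm /mnt/cache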
  14. Good point, JonathanM. Our AC runs 9 months a year... with a setting of 78°F. I believe it highly likely I'll need to replace 2 of the drives in the next 2 years: $140. Energy savings over those 2 years: $210 (lower HDD power draw + 9 months of AC). Likely near-term cost savings: $140 + $210 = $350. That makes the decision not a $700 one but a $350 one. Hmm... is $350 worth:
Reset of the bathtub curve / peace of mind
Reduced noise
It very well might be.
  15. Thanks, CharNoir. I mainly wanted a sanity check on the electrical cost savings: another set of eyes to see if what I've stated appears correct. Saving $60/year at a cost of $700 certainly doesn't make economic sense. Paying $700 now when I could replace the occasional drive for $70 also doesn't make economic sense. However, a lot of the drives in my main machine are reaching the 10+ year powered-on mark, and the back side of the bathtub curve appears to be coming into play. As such I need to weigh:
Cost savings
Reset of the bathtub curve / peace of mind
Reduced noise (this is in my master bedroom's closet)
Reduced heat