trevisthomas

Members
  • Content Count: 81
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About trevisthomas
  • Rank: Advanced Member


  1. Been playing with this epic container for a few days. I've had some good success in my test setup, including passing through a GTX 970 to High Sierra, though I didn't get the HDMI audio to work. New components are on order; I'm planning to build a new Unraid system with ATI cards for Mac and Windows compatibility. The question I have is: is there a preferred method for installing multiple macOS VMs? The Macinabox Docker puts the OS installs in the same spot. I have a Catalina and a Sierra, though through my misunderstanding I ended up creating kind of a mess, because I didn't understand what the "VM Images Location" field was doing. I assumed I would want to change it for each install, but clearly that was not the intent. Anyway, I'm assuming I can just move and rename the macos_disk.img file to a different location and run the Docker again? If I move the .img, do I take the Clover.qcow2 with it? Is there a best-practices way to do this? A sketch of what I have in mind follows.
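     Here's roughly what I'm picturing, as a sketch only. The paths and the VM name (MacinaboxCatalina) are just examples of what Macinabox might have generated, not something I've verified:

         # Move both images into a per-VM folder (example paths).
         mkdir -p /mnt/user/domains/Catalina
         mv /mnt/user/domains/macos_disk.img /mnt/user/domains/Catalina/
         mv /mnt/user/domains/Clover.qcow2 /mnt/user/domains/Catalina/

         # Then point the VM at the new locations by editing its XML:
         virsh edit MacinaboxCatalina   # update both <source file='...'/> entries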
  2. I don't have any advice for the issue you're experiencing at the moment, but I did want to make a suggestion. I've taken to backing up my VM images so that I can restore them in case things go bad. They're pretty easy to restore, even onto different hardware.
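     For what it's worth, this is the shape of the backup I mean, a rough sketch assuming the vdisks live under /mnt/user/domains/ (adjust paths and names to your setup):

         # Copy the vdisk sparsely so the backup doesn't balloon to full size,
         # and save the domain XML so the VM can be re-created elsewhere.
         rsync -avS /mnt/user/domains/Win10/vdisk1.img /mnt/user/backups/vms/Win10/
         virsh dumpxml "Windows 10" > /mnt/user/backups/vms/Win10/Windows10.xml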
  3. So I'm experimenting with restoring VMs from one unRaid install to a different one on different hardware. My primary unRaid system is running 6.1.7; the experimental box is running 6.2.4. I've never seen the issue described in this thread with 6.1.7, which I've used to host various VMs for the past year, but as soon as I touched 6.2.4, I hit it. Someone earlier in this thread suggested he only saw the issue when using the Windows 10 template; I saw it with Windows 10 and with Windows 2012 Server. I just installed Windows 2012 Server using the Windows 7 template and the problem went away. Typing exit, and then continue, did not work for me; it just goes right back to that Shell prompt. If you're hitting this issue, try installing the VM using the Windows 7 template, or try an older version of unRaid.
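     If you do get dropped at that shell, you can sometimes launch the bootloader by hand. This is just the generic EFI shell procedure, nothing unRaid-specific, and the filesystem number varies:

         Shell> fs0:                     # switch to the first detected filesystem
         fs0:\> cd EFI\BOOT
         fs0:\EFI\BOOT\> bootx64.efi     # launch the default bootloader manually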
  4. I moved one of the VMs to the cache drive and deleted the old image. Let's see how it goes. I naively thought that creating two 40GB VM images on an 80GB drive would be fine. They ran perfectly for almost a year.
  5. So do those images need room to grow? I guess it was naive of me not to know that. Within the VMs there is free space inside the virtual disk as seen by the OS; I guess that's not good enough? So can I just move the vdisk.img file to a different drive? (Thanks a ton for your assistance, by the way.)
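     In case it helps anyone else: the vdisk files are sparse, so the "size" you see depends on how you look. A quick way to compare the apparent size against what's actually allocated on the host (example path):

         ls -lh /mnt/cache/domains/Win10/vdisk1.img   # apparent (maximum) size, e.g. 40G
         du -h  /mnt/cache/domains/Win10/vdisk1.img   # blocks actually allocated so far

     Free space inside the guest doesn't give blocks back to the host; once written, they generally stay allocated unless the image is trimmed or compacted.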
  6. I don't know what's wrong, but my VMs have all of a sudden started pausing on their own. Can anyone decipher anything from the logs? I've seen posts suggesting that running out of resources can cause this, but that doesn't look like my issue, unless I'm looking in the wrong place. My diagnostics are attached: thecouncil-diagnostics-20161031-2040.zip
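     For anyone who finds this later: the classic cause of QEMU pausing guests by itself is the filesystem holding the vdisks filling up. A few checks worth trying (the VM name here is an example):

         df -h /mnt/cache        # is the drive holding the vdisks out of space?
         virsh list --all        # affected guests show their state as "paused"
         virsh resume Win10      # after freeing space, resume a paused guest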
  7. Interesting, OK. I could just return it and find a different brand, but before I give up I'll try it on a fresh copy of unRaid just to be sure I see the same symptom. I suppose I can also try a later version of unRaid on a test system to see if that matters.
  8. Yeah, I tried each port individually and both together. It didn't seem to matter.
  9. My unRAID system has been using an integrated dual NIC, but after a couple of odd network occurrences that required rebooting the system, I decided to purchase and try a PCIe NIC. This is the one I bought: Rosewill PCI-Express Dual Port (RNG-407-Dualv2) https://www.amazon.com/gp/product/B00DODX5MA/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1 Things looked good initially. I backed up my USB drive, powered down the server, installed the new card, and disabled the integrated NICs, but when I started unRaid I couldn't get to the webGUI. After some fumbling around, I went back to the integrated NIC, disabled my static IP, and returned to DHCP. Everything was working fine then, but when I switched back to the new NIC I saw that it fails to obtain an IP address during the boot sequence (it works fine with the integrated one). Anyone have any ideas? Is the new NIC not unRaid compatible? When I run "lspci" I see the two network interfaces from the dual card, but for some reason it can't get an IP address; static doesn't work and DHCP doesn't work. Is there something else I should try, or something else I should be doing to switch network cards with unRaid? PS: I'm running unRaid 6.1.7.
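     In case anyone wants to suggest diagnostics, here's what I can run from the console; the interface names are examples, since I'm not sure what the new ports get called:

         lspci -k | grep -iA3 ethernet   # does a "Kernel driver in use:" line appear?
         ip link                         # do interfaces for the new card exist at all?
         dmesg | grep -i eth             # driver probe errors during boot
         ip link set eth0 up             # try bringing a port up manually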
  10. This is a great thread. I'm especially intrigued by the comments about drives having an annual workload limit! Fascinating. I usually preclear only once, but I'm now trying to keep hot spares in the array so that a failed disk can be replaced immediately. I can't think of ever having a disk die before it was a few years old; more often than not the drives are still fine, but their capacities have been so vastly surpassed that they're just wasting energy.
  11. Thanks for the replies. I've probably made things more difficult for myself than necessary. My intent was, and is, to keep the data that's on the two 1TB data drives from the test period. What started as a pure exploratory test ballooned into the underpinnings of my new system because of how well the VM stuff went. I'd rather not reinstall the two Windows VMs and the apps within them, so instead of restarting the build from scratch with the big drives, I'm going to migrate, and maybe replace the two 1TB drives last (or just leave them, I haven't decided yet; the case will have plenty of hot-swap slots). What I did was: I turned the system off and put in the three precleared drives (5TB and 2x4TB). I removed the assignment of the 1TB parity drive and started the array. Then I stopped the array and added the two 4TB drives as data drives (alongside the two 1TB data drives already in there). In a few minutes they were formatted, so I stopped the array again, and now I'm building parity. I may stop it and remove the parity disk this weekend when I'm ready to do the big data move, but I'd rather let it be active for now. Thanks for the tips! And sorry for totally hijacking this thread.
  12. Kizer, I have a question. Here's my situation: I have a 1TB parity drive and two 1TB data drives that I was using for a test bed. I'm ready to grow that system and transfer data from my old unRaid hardware onto it. I have a 5TB drive and two 4TB drives that I intend to use. If I want to do my copies without parity, can I just remove the 1TB parity drive, add the two 4TB data drives, and start my copy without parity protection? That thought had not occurred to me until reading this thread. I had figured that I would have to add the 5TB parity drive first and then grow the array. I never considered copying to it without parity.
  13. Personally, I preclear everything before unRaid is allowed to use it. Over the years it has helped catch problems before I put my precious bits at risk. Kizer, I've never heard of disabling parity... how do you do that? Do you literally stop the array, unassign it, and then re-add parity later? I'm in a rebuild state and will be transferring 13TB from an old unRaid system to a new one.
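     For anyone new to it: by preclear I mean Joe L.'s preclear_disk.sh script (also packaged as a plugin). Flags vary by version, so treat this as a sketch, and triple-check the device name, since it wipes the disk:

         preclear_disk.sh -l          # list disks not assigned to the array
         preclear_disk.sh /dev/sdX    # run one full preclear cycle on the chosen disk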