hardcore_2031

Members
  • Posts: 10

  1. A couple of years ago I set up a NAS with an Unraid trial key and moved a bunch of backup data onto it. The OS version is 6.9.2. Today I needed that data again, so I plugged in and spun up this device. The trial had of course expired by now, so I purchased an Unraid OS Unleashed license, as I have 8 drives in this NAS. When I attempt to apply the activation code I just purchased, I get the error "Could not find associated product Unleashed Redemption" (shown below). I see that it says "To redeem or purchase Starter, Unleashed or Lifetime licenses, please upgrade to at least Unraid 6.12.8 or later via Tools > Update OS in the Unraid webgui." However, without being able to apply my code, I am not able to upgrade from the GUI. What am I supposed to do? https://drive.google.com/file/d/1PRY-VkWrEdi7mJGZ3FivpYYcny61hyDT/view
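The webgui is not the only upgrade path: Unraid can also be upgraded manually by replacing the OS files on the USB flash drive. A rough sketch under stated assumptions (the flash is mounted at /boot, the release zip has already been downloaded from unraid.net, and the zip filename below is illustrative); check the official manual-upgrade documentation before attempting this:

```shell
# Manual Unraid OS upgrade sketch. ASSUMPTIONS: flash mounted at /boot,
# release zip already downloaded to /tmp; the filename is illustrative.

# Back up the current flash contents first.
cp -r /boot /tmp/boot-backup

# Unpack the release zip.
cd /tmp
unzip unRAIDServer-6.12.8-x86_64.zip -d unraid-new

# Copy the new OS files (the bz* files) onto the flash, then reboot.
cp /tmp/unraid-new/bz* /boot/
reboot
```

After the reboot, the newer webgui should accept the redemption code.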
  2. I did do that BIOS update, because I think it was the Intel CPU microcode update for Spectre/Meltdown. As expected it didn't help, nor did a dozen other things I tried, but I did find a workaround. I tried creating and booting a new Win10 VM and it worked, which told me the issue was exclusive to my VM and not the KVM subsystem overall. When I booted the new VM (which was created with an identical hardware config to the old VM) and saw it get to the "press a button to install Windows..." screen, I immediately shut it down. I then went to the domains share and deleted the vdisk of the new VM. Then I copied the old vdisk1.img file from the old VM to the new VM's folder. I again tried to boot the new VM (which was now using the old VM's vdisk) and this time it booted all the way into Windows. Hooray! Now I just have 6 months of Windows updates to do. Because this wasn't really a fix for the old VM I'm not going to mark anything solved, but I did want to record what worked for me in case anyone stumbles across this in the future, and to thank tjb_altf4 for their attempts at helping.
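For anyone wanting to reproduce the workaround above from the Unraid console, the vdisk swap amounts to something like this ("oldvm" and "newvm" are placeholder folder names; use the actual VM folder names under the domains share):

```shell
# Workaround sketch: reuse the old VM's disk image under the new VM.
# "oldvm" and "newvm" are placeholders for the real folder names.
cd /mnt/user/domains

# Remove the freshly created (empty) vdisk of the new VM...
rm newvm/vdisk1.img

# ...and copy the old VM's disk image into its place.
cp oldvm/vdisk1.img newvm/vdisk1.img
```

On the next boot, the new VM definition runs against the old Windows installation.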
  3. Appreciate the response. I changed the machine type to v3.1, but I'm still seeing the same thing: when I attempt to start this VM, the CPU 1 core goes to 100% and stays there until I force-kill the VM instance.
  4. I have a Win10 VM that I have been using for about 18 months. When it was originally set up, it was done following the methods set forth by SpaceInvader One at the time. The VM worked like a dream for a long time, but then there was a period of a few months where I stopped using it. Now that I've come back to it, it no longer boots, or if it does I get no indication of it. When I try to RDP to the instance as I used to, I get a connection timeout, and if I try the built-in VNC, I get the message "Guest has not initialized the display (yet)". I've tried removing some of the extra vdisks that used to be attached to this instance to simplify the boot, but with no luck. Any help is greatly appreciated! prism-diagnostics-20190601-1547.zip
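When a VM stalls before the display initializes, the libvirt/QEMU logs usually say more than the webgui does. A few hedged diagnostic commands from the Unraid console ("MyVM" is a placeholder for the actual VM name):

```shell
# Hedged diagnostics for a VM that appears not to boot.
# Replace "MyVM" with the VM's name as shown in the webgui.

# List all defined VMs and their current state (running, paused, shut off).
virsh list --all

# Dump the VM's XML definition to inspect disks, machine type, etc.
virsh dumpxml "MyVM"

# Tail the QEMU log for this VM to look for boot-time errors.
tail -n 50 "/var/log/libvirt/qemu/MyVM.log"
```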
  5. My question is essentially: is anyone doing this, and how? Burstcoin is a cryptocurrency which uses HDD space rather than CPU/GPU/ASIC horsepower to "mine" coins; if you've not heard of it, you can read more here. In my array I have 2 or 3 drives with 0% usage at present, which I'd like to use to plot for Burstcoin mining. At first I tried to mine the way I had been (several 8TB drives connected via USB to a laptop): I created a Windows VM and then tried to create vdisks residing on each physical drive, which I'd map to a drive letter in my VM and plot. I had a few issues doing it this way, however: a) Unraid didn't seem to like single files which grew larger than a single 8TB disk, and b) the plot file size seemed to "grow" as I plotted, eventually exceeding the size of the vdisk and causing errors. I then went searching for folks who'd mined Burstcoin on spare space on their Unraid servers and found a couple of references to Docker containers built specifically for Unraid: a plotter container and a miner container. My issue here is that the plotter container uses mdcct, which is an unoptimized plotter, versus something newer like cg_obup. I spun up a Debian VM and installed cg_obup, but now I'm back to trying to figure out how best to present a raw disk to my VM. What I'm looking for is anyone who's attempting or has accomplished the same thing. Whether you've figured out plotting or mining from a VM, any help, or even another head to bounce ideas off of, would be appreciated. TY!
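One common way to hand a whole physical disk to a KVM guest, rather than a vdisk file, is a block-device `<disk>` entry in the VM's XML (editable via the webgui's XML view or `virsh edit`). A hedged sketch; the by-id path is a placeholder for the real drive, and the target `vdb` assumes the guest already has a `vda`:

```xml
<!-- Hedged sketch: pass a raw physical disk through to the guest.
     The by-id path below is a placeholder; use the real entry from
     /dev/disk/by-id so the mapping survives reboots. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Note that a disk handed to a VM this way should be kept out of the Unraid array, since the guest writes to it directly.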
  6. I wanted to follow up on this in case anyone comes upon it in the future. My problem *was* a smoked USB stick. I tried several scanning methods to repair errors, none of which worked. I RMA-ed the old one, bought a new stick, did my once-a-year license transfer, and now I'm able to set the "Use cache" settings correctly for all shares. Mover is now working perfectly.
  7. Good call; the "No" may very well be why the Mover isn't taking anything off that disk. However, when I try to switch one of my shares to "Yes" and apply, it switches right back to "No". The output to the log when doing this is:

     May 30 19:20:09 PRISM emhttpd: req (21): shareNameOrig=Multimedia&shareName=Multimedia&shareComment=&shareAllocator=highwater&shareFloor=0&shareSplitLevel=&shareInclude=&shareExclude=&shareUseCache=yes&cmdEditShare=Apply&csrf_token=****************
     May 30 19:20:09 PRISM emhttpd: error: put_config_idx, 595: Read-only file system (30): fopen: /boot/config/shares/Multimedia.cfg
     May 30 19:20:09 PRISM emhttpd: Starting services...
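That "Read-only file system" error on /boot points at the USB flash drive itself rather than the share settings: Unraid stores share configs on the flash, and if the stick has dropped to read-only, no setting can be saved. A hedged way to confirm from the console:

```shell
# Check how the flash drive is currently mounted; "ro" in the mount
# options means the stick has gone read-only (often a sign of
# filesystem errors or a failing USB stick).
mount | grep /boot

# Try a read-write remount; if it immediately reverts to ro, the
# flash likely needs a filesystem check or outright replacement.
mount -o remount,rw /boot
```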
  8. My problem is that the Mover is not moving data off of the cache drives. Because of issues I've had with cache in Unraid, I'd *ideally* like only the domains share to use the cache drive, so my VMs would run from the SSD; if that's not possible, I'd like to migrate all data off the cache disk and remove the final cache drive from my system. Either would be an acceptable solution. I've included my diagnostics below, but here is some additional info in case it helps solve the mover issue. Initially, when I set up my Unraid server, I was using 2x 120GB SSDs in a redundant cache. Recently one of those drives died, so I'm left with a single cache drive with no redundancy. Because of this I don't want any actual data stored on the non-redundant cache, and am really only comfortable with my VM vdisks living on the cache SSD. I created an SMB share whose path points to /mnt/user; this lets me see all of my shares from a single drive mapping in Windows (in cases where that's convenient). From everything I read on these forums, shares created like this default to not using the cache drive, but since a share mapped at this level doesn't show up in Unraid's user shares, I can't tell for certain what that share's cache situation is. Things I've tried that I found while searching the forums: 1) the domains share has its "Use cache drive" setting set to "Prefer", and all other shares are set to "No"; 2) enabled logging for Mover, but all I see is Mover starting and then immediately stopping, with no error codes. Any help would be appreciated, as I am at wit's end. prism-diagnostics-20180530-1317.zip
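Share settings live as plain .cfg files on the flash drive, so one hedged way to see exactly what a share's cache setting is, including shares that don't show up in the webgui, is to read the files directly (the Multimedia share name is from the post above):

```shell
# Share configs are stored on the flash; the "shareUseCache" line
# shows the effective cache setting for that share.
cat /boot/config/shares/Multimedia.cfg

# List every share config to spot shares not visible in the webgui.
ls /boot/config/shares/
```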
  9. I feel super stupid now, but the "Prefer" setting, a reboot, and rerunning the mover seem to have worked. Thank you!!
  10. I'm running Unraid 6.4 Plus. I have a 10-disk array with single parity, and 2x 120GB SSDs set up as a redundant cache. I have a single Win10 VM named "flood". I can see the vdisk for this VM located in the flood folder of the domains directory on disk 1 of the array. From everything I read, once I added the cache drives my VMs would automatically migrate when the mover command was run. This is not happening; in fact, I'm not certain the cache is working at all. I've set "Use cache disk" for the domains share to "Only". I also tried "Yes" and "Prefer". However, when I run mover, the log only says that it started and then finished. I checked around the forums but didn't see a topic related to my specific issue. I'm a new Unraid user (2nd month), so any help would be appreciated. Thank you!

      Feb 9 15:01:49 PRISM root: mover: started
      Feb 9 15:01:49 PRISM root: mover: finished

      prism-diagnostics-20180209-1506.zip
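An instant "started ... finished" pair usually means mover found nothing eligible to move. A quick hedged check of what is actually sitting on the cache versus the array (paths taken from the post: the vdisk lives under domains on disk 1, and the VM folder is "flood"):

```shell
# If this prints files, mover has something to work on for cache-to-array
# moves; if it's empty, an immediate "started/finished" pair is expected.
find /mnt/cache -maxdepth 2 -type f 2>/dev/null

# The post notes the vdisk currently lives on disk 1 of the array;
# with "Prefer", mover should pull it onto the cache once the VM is off.
ls -lh /mnt/disk1/domains/flood/
```

Note that mover skips files that are open, so a running VM's vdisk will not move until the VM is shut down.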