About Jcloud

Community Reputation: 69 (Good) · 1 Follower · 2167 profile views
  1. I'm betting you've already seen this, but I figured, "just in case." https://thehackernews.com/2019/10/linux-sudo-run-as-root-flaw.html https://www.sudo.ws/repos/sudo/rev/83db8dba09e7 I just saw it on Slashdot today. Have a good one, everyone.
  2. Just saw this (haven't been paying attention to the forums). I'll take a look at it in a bit, do some reading, and probably make the tweaks. Thanks for putting it on my radar.
  3. Thank you, I missed that. (Obviously.)
  4. Proposed where: in the web UI, on the TOWER/Main/Device?name=XXXXXX (parity, disk#) page.
     Proposed feature: a switch/button which, when pressed, makes the selected disk's LED (in the context of the page above) blink, pulse, stay constantly on, or otherwise be distinctively attention-grabbing, so that a user could look down and go, "Ah, drive X is in this bay slot."
     Reasoning for the feature: when replacing drives (both failure and upgrade cases), the correct bay can be opened on the first try, leaving the rest of the disk array undisturbed.
     Possible arguments against: it does not work on systems that have a single drive LED, or none at all; and it is a decent amount of work for "a blinking light show because Jcloud didn't document his drive layout." The "comments" field on the disk(n) pages could already be used to note physical location, so why reinvent the wheel? Which, if that is the majority view, could the "comments" field at least be added to the Device?parity page?
     "So that's what things would be like if I'd invented the fing-longer. [Sigh]. A man can dream, though. A man can dream..."
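In the meantime, a rough do-it-yourself version of the feature above can be had from the command line: pulse a drive's activity LED by reading from it in short bursts. Everything here (function name, device path, timings) is a made-up sketch, not anything built into Unraid:

```shell
#!/bin/bash
# Pulse a drive's activity LED by generating read bursts against it,
# so you can spot which bay it lives in. Hypothetical helper; substitute
# the /dev/sdX shown for the disk on the Unraid Main page.
blink_drive() {
  local dev=${1:?usage: blink_drive /dev/sdX [seconds]}
  local secs=${2:-30}
  local end=$(( SECONDS + secs ))
  while (( SECONDS < end )); do
    # iflag=direct bypasses the page cache so every burst actually
    # hits the disk (otherwise only the first read lights the LED).
    dd if="$dev" of=/dev/null bs=1M count=64 iflag=direct 2>/dev/null
    sleep 1
  done
}

# Example (don't run blindly; reads are harmless but compete with
# other I/O on that drive for the duration):
# blink_drive /dev/sdX 30
```

Watch for the LED pulsing roughly once a second while it runs.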
  5. Assuming you pass through a USB controller, the USB ports wired to that controller should get plug-and-play detection in the guest VM, like you want. If a user plugs into a USB port that is not behind the passed-through controller(s), the device will show up under Unraid instead: in the syslog, and/or in the Unassigned Devices plug-in (in the case of a USB storage device, and if you've installed said plug-in).
  6. My condolences. Pretty sure the procedure would be: shut down the system; remove the two failed disks; put in two new ones; double-check for loose connections; power the system on and assign the new drives, one to parity and the other to the replaced data slot. As long as you don't go into Settings and click New Config, the rest of your data should remain intact. EDIT: And remember to make the new parity disk the same capacity as before, or larger, because parity size dictates the largest data drive your array can hold.
  7. You may find this thread useful for passing through a GPU - just putting it on your radar.
  8. The Docker container, or something else? Assuming you mean the Docker container: open the Unraid web GUI, click the Docker menu, and on the container's line click "force update."
  9. To me that appears to be the 1050, and the second device is the audio controller on the same card (for the HDMI/DisplayPort output). The PCI root address for both is 01:00. That's where I'd place my bets, if it were me.
  10. "I might get this wrong..." I KNEW I was going to mess up somewhere, as I hardly ever do that.
  11. Taking some of the easy questions...
     Yes. I'm not sure I understand the question; however, is it possible to grow the Unraid cache pool with additional disks? Yes. Is it possible to grow a VM disk image file and then have said VM see the extra space? Yes.
     Do you want to run a VM on the disk array? Can you? Yes. Do you want to? No, because of the parity overhead. Some people use image files on the cache pool (SSD); others like to attach an SSD, HD, or NVMe to a VM for it to use directly.
     Case 0: If a data disk fails, you'll want to replace that disk first, then replace your parity disk after the data has been rebuilt. I might get this wrong: if your parity is 4TB, a data disk fails (say the array is all 4TB drives), and you replace the failed drive with an 8TB one, the system will rebuild the data onto it, but you'll still only get 4TB of usable space.
     Case 1: If your 4TB parity drive fails and you replace it with an 8TB drive, then once parity has been rebuilt you can start upgrading the data drives in the array to larger drives. As in case 0, you could upgrade with a >8TB drive, but only 8TB of it will be formatted/protected.
     Case 2: If you're running an array without parity, lose a 4TB drive, and replace it with an 8TB one, you'd see the full 8TB of space -- but in this case you've lost the data on the failed drive, because there was no parity.
     Can you take a disk in the array and later assign it to parity? Yes, but copy any data off it to another drive beforehand, because it will be erased. This also gets a little messy, since the array will be looking for a disk in the slot your now-parity disk used to occupy -- your choices are to fill the slot with another disk, leave it empty and explicitly assign disks to shares (skipping that slot), or back up all your data and rebuild the array from scratch.
     The process seems right. To speed up the copying, you may want to add the parity disk, on Unraid, last. I hope that helps.
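The parity-capacity rule in cases 0 and 1 boils down to "each data disk is capped at the parity size." A little bash arithmetic sketches it (the function and sizes are made up for illustration; Unraid itself simply refuses a data disk larger than parity, the cap here just shows the effect):

```shell
#!/bin/bash
# Hypothetical sketch: usable protected capacity of an Unraid-style
# array, where no data disk can contribute more than the parity size.
protected_capacity() {
  local parity=$1; shift   # parity disk size in TB
  local total=0 disk
  for disk in "$@"; do     # remaining args: data disk sizes in TB
    if (( disk > parity )); then
      total=$(( total + parity ))   # capped by parity
    else
      total=$(( total + disk ))
    fi
  done
  echo "$total"
}

# Case 0: 4TB parity, one data disk swapped 4TB -> 8TB.
protected_capacity 4 8 4 4   # prints 12, not 16

# Case 1: upgrade parity to 8TB first; the 8TB data disk is fully usable.
protected_capacity 8 8 4 4   # prints 16
```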
  12. Assuming the hardware breaks out into clean IOMMU groups, it sounds like a "go" to me. Before I upgraded my system (in my footer), I ran two gaming VMs on a 3930K. EDIT: just looked up the CPU -- 6 cores, 12 threads, and you were planning on splitting it 3/3, which leaves nothing for Unraid. You'll probably need to break that up as 2 cores (with their 2 hyper-threads, so four threads) per VM, and leave the other 2 cores for Unraid. Yes, you can run both VMs off the same SSD by making a disk image file for each VM. I haven't looked into it, but I think if you dig through the forums you can find info on using the Btrfs snapshot feature to run two different VMs from a single set of file(s). No, you don't have to have a third GPU, like Linus, if you're willing to run/access Unraid headless, or simply from the web interface.
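For the 2-cores-per-VM split above, pinning is done with a `<cputune>` block in the VM's libvirt XML (Unraid's CPU-pinning UI generates the same thing). A hypothetical fragment for one VM, assuming the common Linux layout where thread N and N+6 share a core -- check `lscpu -e` for your actual numbering:

```xml
<!-- Sketch for VM 1: cores 2-3 of a 6c/12t CPU, both hyper-threads
     of each core; cores 0-1 (threads 0,6,1,7) stay with Unraid. -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='8'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='9'/>
</cputune>
```

The second VM would get cores 4-5 (threads 4,10,5,11) the same way.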
  13. Sent in private message. The count is still at 22.
  14. That suggests a hosed file system, corrupted at some point along the way -- which explains why it didn't want to boot off it before. I've never tried this in a VM, but I use this program all the time at work: https://www.r-studio.com/Data_Recovery_Download.shtml You could install the demo in your running VM to see if it's even able to read the corrupt image file; that would at least tell you whether you have a shot at getting your files back.