About jang430

  • Rank: Advanced Member
  • Birthday: 04/30/1973



  1. My VM gets stuck on "Updating" when I apply the Phison passthrough.
  2. My goal is to pass through my NVMe SSD. I believe I should be binding the Phison controller (it says NVMe). After binding it and restarting, when I go to the VM I can see the controller under Other PCI Devices with an unchecked box (what should I do here?). I don't know how to point my primary vDisk to that SSD when it no longer appears in the choices.
  3. Before binding the Phison controller (NVMe SSD) to VFIO, I first unmounted my NVMe SSD from Unassigned Devices. After binding and a reboot, Unassigned Devices doesn't "see" the NVMe SSD. Setting my primary vDisk location to /dev/nvme0n1 doesn't work either, as it doesn't "see" /dev/... Any advice?
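A note on the vDisk question above, as a sketch under assumptions rather than a definitive answer: once the controller is bound to vfio-pci, the host's nvme driver never attaches, so no /dev/nvme0n1 node exists to point a vDisk at. The whole controller is handed to the VM instead; ticking the checkbox under Other PCI Devices adds a libvirt `<hostdev>` entry to the VM definition roughly like the following, where the bus/slot values are placeholders for whatever the System Devices page shows for the Phison controller:

```xml
<!-- Hypothetical sketch: replace bus='0x04' slot='0x00' with the
     controller's actual PCI address from the System Devices page. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

With the controller passed through this way, the guest sees the NVMe drive as its own physical disk, so the primary vDisk field is left empty rather than pointed at a /dev path.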
  4. @domrockt, sorry, I missed your reply earlier. I'm not yet familiar with how stubbing works. I see the following on my System Devices page. Do I just tick IOMMU group 43 and click "BIND SELECTED TO VFIO AT BOOT"? Will this achieve my goal of passing the NVMe through to the VM directly? Is there anything I should be checking in syslinux.cfg? What exactly does stubbing do? EDIT: After doing the above, I can now see this on my VM page: shall I tick this? Furthermore, changing the primary vDisk location to /dev/nvme0n1/vdisk1.img a
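On what stubbing does, sketched under assumptions: binding a device to VFIO at boot attaches the vfio-pci stub driver to it instead of its normal kernel driver, so the host leaves the device untouched and a VM can claim it exclusively. The "BIND SELECTED TO VFIO AT BOOT" button handles this; the manual equivalent in syslinux.cfg would look roughly like the fragment below, where `1987:5012` is a placeholder for the Phison controller's vendor:device ID as shown on the System Devices page:

```
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1987:5012 initrd=/bzroot
```

If the button is used, no manual syslinux.cfg edit should be needed; mixing both methods for the same device is best avoided.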
  5. I'm running a Windows 10 gaming VM. I removed my Nvidia GTX 1080 to replace it with an AMD RX 560. Upon starting Unraid, I saw the VM running and could ping it. I selected Edit and saw the AMD card already passed through. As I was working remotely via laptop, I came to the Unraid box physically to turn on the monitor and check whether the passed-through VM was working properly; the screen was black. I removed the passed-through GPU from the config and turned on the VM once again, and I could VNC into it. I removed my Nvidia graphics drivers and passed through the AMD card once again, with the same results. I turned off Unraid,
  6. After seeing @SpaceInvaderOne's April Fools' video, I saw he was able to assign independent drives, each to its own pool. It made me wonder: do I leave VMs on unassigned disks, or do I create a pool with a single disk? Are pools Unraid's answer to "built-in" unassigned disks, with no plugin needed this time? I haven't tried USB on my Unraid to see whether it performs the same as Unassigned Devices, though.
  7. Thanks @UhClem. I didn't know there was a bottleneck with the existing onboard controller. The whole time it was preclearing, CPU was at 100%. Extra question: I can buy another unit at a super cheap price, the N36L model. I've read in other threads/forums that all you need to do is power the drives (which can be done from the existing microserver), connect them to a controller (I plan to get a 4i4e controller and plug the drives into the controller instead of directly into the motherboard), and use the external port at the back of the 4i4e controller, running a cable from the 2nd unit to the back of the main N40L microserver. Th
  8. I have the same idea, @UhClem. But while preclearing two 8 TB drives, I noticed my CPU reached 100% and stayed there the whole time. Will an LSI card offload CPU utilization when all drives are connected to it?
  9. Thanks also, Hoopster! I had a backup solution if what I tried really didn't pan out: go to the physical server and do what the post suggested.
  10. You're right! I was challenged by Hoopster's instructions, as I don't have physical access to the unit. This morning I downloaded the official app once again, and this time the field there that talks about claiming something made more sense. True enough, after putting in the token, it worked! Thank you, saarg.
  11. Thank you! That gives me a new problem, though. The server is remote, and I don't have access to it now. But thanks for the instructions; I will find a way to fix this.
  12. I am connecting from outside the network. I manage this Unraid server. I connected via OpenVPN. I also tried accessing the server via its public IP:32400, with the same result. I don't know where to click to do the claim; I can't find it. The server doesn't show any claim notifications. I just run the Docker app via the GUI; I'm not familiar with the docker run command.
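For reference, a hedged sketch of the docker run route mentioned above. The official plexinc/pms-docker image reads the claim token from the PLEX_CLAIM environment variable; the container name, volume paths, and token below are placeholders, not a definitive Unraid recipe:

```shell
# Token comes from https://plex.tv/claim and expires after a few minutes.
# Paths are assumed Unraid shares; adjust to the actual appdata/media locations.
docker run -d \
  --name=plex \
  --net=host \
  -e PLEX_CLAIM="claim-XXXXXXXXXX" \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/data \
  plexinc/pms-docker
```

In the Unraid GUI, the equivalent is adding PLEX_CLAIM as an extra environment variable on the container's edit page, which avoids the command line entirely.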