Everything posted by testdasi

  1. Silly question please: why would you want to run an OpenELEC VM on unRAID and then run Plex / Emby on top of it? Simplistically, in my mind, wouldn't it be easier to just use Dockers?
  2. Does anyone know if the official fix is included in 6.2.0 beta?
  3. Try this and see if it helps. Both IPv4 and IPv6.
  4. I subscribe to Linus' law: if thou can use something in a certain way, just do it and disregard whatever it was intended to be used for.
  5. Put this in the xml right before the </devices> tag. Note the index='1' and controller='1'.

     <controller type='scsi' index='1' model='virtio-scsi'/>
     <hostdev mode='subsystem' type='scsi'>
       <source>
         <adapter name='scsi_host12'/>
         <address type='scsi' bus='0' target='0' unit='0'/>
       </source>
       <readonly/>
       <address type='drive' controller='1' bus='0' target='0' unit='0'/>
     </hostdev>

     Then start the VM and go to Device Manager. You should see 2 SCSI devices without drivers (an exclamation mark next to them). Install the virtio SCSI driver for the 2nd item (or whichever one does NOT give you the warning that the driver is not intended for the device). Note that I'm using the latest virtio version (0.1.117), which has a Windows 10 driver. I think stable (102) only has an 8.1 driver, which I found does not work. Once the driver is installed, the device should show up as blablabla SCSI Passthrough. The optical drive should appear now, but I recommend restarting the VM just to be safe. I extended the above method to pass through the SSD and it appears to work too - better performance than Unassigned Devices.
  6. I think LT says XFS for array and BTRFS for cache.
  7. I think it's asking for credentials for the users on the VM, not the host. Plus, I always thought RDP is a Microsoft thing, not Linux.
  8. I think I read somewhere about unRAID not liking passing through an Nvidia GPU in the primary slot. Other than that, I bet someone will ask you to attach the diagnostic log.
  9. I think you missed the part where he said you need to run this command in the console to see the exact device ID:

     ls -l /dev/disk/by-id

     It should list all your storage devices' IDs and their corresponding sd? names. Then you look for the ID that corresponds to sdk and replace "ata-ST3500312CS_9VVERKB1" with it. The general consensus, I think, is not to use the sd? designation as it can change; the "by-id" path is fixed. I got more luck with a slightly different method (passing through the SCSI bus), but I think it's more important to get something that works for you, not me.
  10. testdasi

    Dual Boot?

    You can turn unRAID off. It's not a requirement to keep it on 24/7, unless you want to. Yes, it would add to your boot-up time, but you are the best judge of whether you want to spend £2/day keeping it running 24/7 or save that money but spend maybe a few minutes more waiting (and having to power down properly). Personally, I think I'm going to buy a 2.5" USB 3.0 external enclosure for my good old Kingston 128GB SSD (SATA2!). Then I can label it "Emergency" and have Win 10 with all the critical software on it. So in the worst-case scenario, I still have something as a plan B.
  11. Something to take note of: even though it might seem like a good idea to use Most Free, it might not be with spindle drives, i.e. normal HDDs. Assuming that you will utilise the reconstruct-write (aka Turbo Write) method to speed up write speed, Most Free would theoretically lead to slower speeds on average. The reason is that the drives have to switch between reading and writing relatively more frequently, which adds latency to the write process. There's a reason High Water is the default mode. Personally I think Most Free is more relevant if your array is made up of SSDs, which would benefit from having evenly distributed free space (a form of over-provisioning that helps improve performance).
  12. Attached is a quick proof that VM-tower link is definitely more than 1Gb/s.
  13. Not sure if this is a 6.2.0 beta 21 bug or if I'm missing something. Please can someone help. When I edit a VM xml, some tags appear to be automatically deleted by unRAID. I'm guessing there might be automatic syntax checking or something, but it is missing (at least) 2 tags:

      <alias name='something'/> - the alias tag doesn't even show up on the "approved" list of tags when you type < in the xml editor. It gets deleted if I click "Update".

      <backingStore/> - the backingStore tag shows up as a valid tag when you type < in the xml editor. However, once you click "Update", it gets deleted.

      I tried it and it affects at least the Windows 10 and Windows 8.1 templates. In fact, when I create a new VM, there are no alias or backingStore tags at all in the xml. These appear in a lot of the xmls attached to the forum, so I think they should be valid.
  14. OMG I got it to work!!!!!! And it looks like Windows actually detected it as a Blu-ray writer too (I put a blank Blu-ray in and it asks if I want to use it like a USB drive)!!!!! I put this in the xml right before the </devices> tag. Note the index='1' and controller='1' amendment.

      <controller type='scsi' index='1' model='virtio-scsi'/>
      <hostdev mode='subsystem' type='scsi'>
        <source>
          <adapter name='scsi_host9'/>
          <address type='scsi' bus='0' target='0' unit='0'/>
        </source>
        <readonly/>
        <address type='drive' controller='1' bus='0' target='0' unit='0'/>
      </hostdev>

      Then I start the VM (which crashes, but it crashes randomly due to lack of memory - oh well, running it in VMWare Workstation) and install the virtio SCSI driver (which shows up as passthrough) for the 2nd no-driver SCSI item - Windows just installs the driver from the virtio disk, no need to force it. Then voila! When I checked the xml again, unRAID had automatically rearranged the code and changed it a little:

      <controller type='scsi' index='1' model='virtio-scsi'>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
      </controller>
      ...
      <hostdev mode='subsystem' type='scsi' managed='no'>
        <source>
          <adapter name='scsi_host9'/>
          <address bus='0' target='0' unit='0'/>
        </source>
        <readonly/>
        <address type='drive' controller='1' bus='0' target='0' unit='0'/>
      </hostdev>

      *there should be an emoji for celebration*
  15. Windows says the scsi driver on the virtio disk is not designed for my drive. I tried to install it regardless and the errored scsi device went away but no drive.
  16. What if I have 2 test VMs sharing the same GPU passthrough? Or perhaps, if I'm testing a crashy VM, is there any quick way to start it (i.e. fewer clicks) as I put in the fixes?
  17. Is there any fast way to start a VM / Docker without having to do a lot of clicks? For example, the normal way to start a VM: open browser => type tower => click Dashboard => click VMs => click Start => VM starts. Is there any way to do: open browser => click a bookmark => VM starts? I think I can ssh in and type some commands (not sure what) to start a VM, but that's still a lot slower than just creating a bookmark on my phone and opening it when needed. Edit: The VM can certainly be set to auto-start, but if I have multiple VMs which use the same resource - e.g. a Linux and a Windows test VM both sharing the same GPU - I can't autostart both without crashing. Wondering if there's any way to pick 1 VM to start without a lot of clicking.
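For reference, unRAID's VM manager is built on libvirt, so the ssh route mentioned above boils down to one virsh command per VM. A minimal sketch, assuming you ssh in as root; "Windows 10" is a placeholder for whatever name `virsh list --all` reports:

```shell
#!/bin/sh
# Start one VM by name through libvirt's CLI.
# "Windows 10" is a placeholder -- substitute the name shown by `virsh list --all`.
VM="Windows 10"
if command -v virsh >/dev/null 2>&1; then
    virsh start "$VM"
else
    # Not on the unRAID host (or libvirt is absent) -- nothing to start.
    echo "virsh not found: run this on the unRAID host"
fi
```

Saved as a script, one ssh alias or phone shortcut per VM gets close to the "one click" goal without touching the WebGUI.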
  18. I think the issue is prioritisation. Between something that is a (probably minor) "quality of life" inconvenience and major features (e.g. 2nd parity / nvme support etc.), I think the majority of users would prefer the latter (so the devs also pick the latter, even though the former would be quite easy to implement). I personally would like to have the labelling feature but would still use unRAID without it. I won't use unRAID without nvme support.
  19. The connection between your VM and your share is 10Gb, so you need SSDs to fully utilise that. I have tried RAID0 in the past and will never do that again. It is like having a gun next to your head cuz it can fail any time (and in my case, it did, within a week, because Windows crashed - not a disk issue). The main benefit of SSDs is not pure sequential speed but random access / seek speed.
  20. Don't know if this has been reported or not. It seems whenever I edit a VM in the WebGUI, the information regarding the Primary vdisk location is reset to "None". When I choose "Manual", it automatically gets all the original values back. This happens every time I click "Edit".
  21. Has anyone attempted this? My "sSATA" controller (I think that's the 4-port one, since the other controller explicitly says 6 ports) is in the same IOMMU group as something I don't recognise.

      IOMMU group 19:
      0000:00:11.0  Unassigned class [ff00]: Intel Corporation C610/X99 series chipset SPSR (rev 05)
      0000:00:11.4  SATA controller: Intel Corporation C610/X99 series chipset sSATA Controller [AHCI mode] (rev 05)
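For anyone wanting to check their own grouping, the layout above can be dumped with a short loop over sysfs. A generic sketch, not unRAID-specific tooling; on a box without IOMMU enabled it simply prints nothing:

```shell
#!/bin/sh
# Walk /sys/kernel/iommu_groups (standard Linux sysfs layout) and print
# each group number alongside the PCI addresses of the devices in it.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue          # glob stays literal if IOMMU is off
    group=${dev#/sys/kernel/iommu_groups/}
    group=${group%%/*}
    echo "IOMMU group $group: ${dev##*/}"
done
```

Feeding each printed PCI address to `lspci -nns` gives the human-readable device names shown in the table above.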
  22. I tried. It doesn't work. :'( The VM starts fine (and I double-checked the hostdev section is in the xml) but nothing shows up in Windows. :'(
  23. It comes down to whether you value capacity or data protection. Option 1 => you don't get any protection for your VM at all, so in the (unlikely) scenario that your SSD fails, you lose all your data. The benefit is you get more capacity. Option 2 => the reverse => less capacity but you are more resistant to hardware failure. I can see why people would pick option 1, since SSD failure rates are relatively low and cost is high. Now let me throw a spanner into Option 1. I recently learned (from reading the forum) that certain cache settings can make the cache work better than being a direct pass-through (at the cost of some risk in case of power failure) - I still have to actually test it out, but there's nothing on paper that says it wouldn't work.
  24. Have you tried just typing "powerdown" as root over ssh? I just messed up my testing unRAID pretty badly, but that command still worked.
  25. That is actually what I was about to say. I used to think I needed 2 VMs and 2 GPUs. Now I think I only need 1 GPU and maybe, just maybe, 2 VMs, since it seems all the key stuff I need has Dockers.