M0CRT

Everything posted by M0CRT

  1. Perfect Jorge. Worked like a charm! :-). Thanks
  2. I have checked the 'cctv big disk' and it looks like it may be fully allocated:

         image: vdisk4.img
         file format: raw
         virtual size: 500 GiB (536870912000 bytes)
         disk size: 500 GiB

     Will the resparse still help here? i.e. am I not already allocating all of the disk? XML of the VM, noting the VM is a 'Synology':

         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/cctv-vm/Synology/vdisk4.img'/>
           <target dev='hde' bus='sata'/>
           <address type='drive' controller='1' bus='0' target='0' unit='4'/>
         </disk>
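     (For anyone checking the same thing, a quick way to confirm whether a raw image really is fully allocated is to compare its apparent size with the blocks actually used on disk. A minimal sketch, reusing the path from the XML above:

         # Apparent (virtual) size vs. space actually allocated on disk
         du -h --apparent-size /mnt/user/cctv-vm/Synology/vdisk4.img
         du -h /mnt/user/cctv-vm/Synology/vdisk4.img

         # qemu-img reports the same information: 'disk size' is the real allocation
         qemu-img info /mnt/user/cctv-vm/Synology/vdisk4.img

     If the two du figures match, the image is fully allocated, and re-sparsifying will only reclaim whatever space the guest has actually zeroed or left unused.)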
  3. That's a great idea Jorge. Thanks. I think that's what the issue may be... I've run the balance on the BTRFS and I'm in a good place there. You are right, it's likely to be the 'allocation' of the images associated with my VM. I'll cp them back with 'resparsify'.
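     (A minimal sketch of that sparse copy, assuming the VM is shut down first and using the example image path from the post above; cp and qemu-img are the two usual options:

         # Option 1: copy, punching holes wherever the source is all zeroes
         cp --sparse=always vdisk4.img vdisk4-sparse.img

         # Option 2: let qemu-img rewrite the image, skipping zero blocks
         qemu-img convert -O raw vdisk4.img vdisk4-sparse.img

     Only blocks that are actually zero in the guest can be reclaimed, so zeroing or trimming free space inside the guest first makes a big difference.)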
  4. I'll take a look :-). Thanks! UPDATE: Great article...thanks Kilrah! Yes...it's not as 'simple' as it may initially seem! :-)
  5. As a follow-up: the calc in Unraid shows 708GB 'occupied' but, on the summary page, only 70-80GB free... Is the BTRFS volume potentially reserving some space? Surely this should be reported as approx. 290GB free?
  6. Thanks for your feedback Jorge. Any reason for the difference between the reported free space and the actual usage?
  7. Hi All. I've pondered on this for the last 24 hours and am still scratching my head. Cache pool with 2x identical SSDs, reported available disk space of 1TB. I've several shares set to cache only:

         System (87.0GB)
         Appdata (35.3GB)
         cctv-vm (running my CCTV VM) (554GB)
         nzbdownload (Deluge downloads et al) (31.8GB)

     The rest of my shares do not use the cache at all. When I run a calc on the space used for these shares, and a du, the space used and remaining doesn't add up. For example, if I calc my cache-only shares (using the Unraid GUI on the Shares page), the utilised space comes to a total of 708.1GB. However, the 'free space' on the cache shows 73.0GB... By my rough maths, I feel like I'm missing around 240GB... Could someone explain what I may be missing here? I've run a balance and scrub on the cache pool (not expecting that this would 'fix' anything). Is this a block size issue? Any reason why I may be 'losing' disk space? I've triple-checked and don't have anything else on the cache beyond the shares/files above. Maybe file permissions are stopping the 'calc' from seeing some of the space? Yours, awaiting enlightenment and putting straight! :-) Thanks
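     (For anyone hitting the same discrepancy, btrfs itself will show how much of the pool is allocated to data/metadata chunks versus actually used; a minimal sketch, assuming the pool is mounted at /mnt/cache:

         # Size vs. used for data, metadata and system chunks
         btrfs filesystem df /mnt/cache

         # Fuller picture: device size, allocated, unallocated and estimated free
         btrfs filesystem usage /mnt/cache

     A large gap between 'size' and 'used' for the data chunks, or a lot of allocated-but-unused space, usually explains free-space figures that don't match a simple du of the shares.)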
  8. Thanks for the response Jorge! Didn't know I could mix the disks and copy manually... Great idea. I have actually done something a little more crazy to get this to work: ignored ZFS and created my new XFS array... then created an Unraid VM and stubbed the LSI card to the VM... created the array with the SAS disks and then mounted a remote share from the VM to the host... lol. It's copying... albeit a little slower than natively! :-)
  9. Hi Folks. I've recently prepped a new i7 12700-based device with 8x SATA ports. My original server is an HP with 'enterprise' SAS. I've moved an LSI SAS card from the HP to my new server to allow me to manually copy the files from each of the five drives (the original array had 2x parity too). All was going very nicely until I found I couldn't mount 4 out of the 5 SAS drives via 'unassigned devices' on the new server. I believe it's a known issue...?

     Well, I'm feeling very distressed at the moment as I have both servers in bits and the disks out! :-) It looks like there is no way of mounting the original SAS disks outside of an array without reformatting? QUESTION: Is this correct? Could I potentially look to pass them through to an OS on the new server that could read them, and then copy my files off?

     I thought about the following solution. Can someone sanity-check it? Whilst I assume I could put the disks back in the original server and mount them, this isn't going to get the files onto my new server without some further mess of temp keys and network transfers. Could I create a new ZFS pool as my new, permanent array on my new server and mount FOUR out of the FIVE member disks as a 'new' array to provide an opportunity to copy off the content? I would only be able to use four initially as the SAS card/cable can only support 4 at a time. I know I won't have parity protection, but would this get me to my data and provide the ability to copy? I could then create a new array, once this copy had finished, with the remaining single disk. Would this work, or is there 'another way'? I'm deeply concerned. Any help greatly appreciated. Thanks Mo
  10. Hi Folks. Can I have a sanity check on my migration please? Migrating from an ML350P Gen 8 HP server to a new 12700 Intel ASRock Z690 PG Riptide solution. The original array was operating as XFS using SAS disks. My new array will be ZFS and SATA based. The cache drives and unassigned devices, graphics cards and PCIe USB3 controller will be physically moving. The original SAS disks (XFS array) won't. :-) Whilst I assume I could just network-transfer the files from old to new using a 'temp' licence on the new server until completed, I also assume I could drop a PCIe SAS card into the new server in order to read the individual original array member disks and transfer the files directly off, one at a time, onto the newly mounted ZFS array on the new server? This will save time compared to transferring at 1G (although I could have bought an add-on 2.5G card for the HP). To be sure... I don't need to mount the original array on the new server to effect a solid transfer? All the data is stored on the member disks and thus I just need to copy from them... no need to bring over the parity drives? Thanks! Mo
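      (If it helps anyone doing the same, a minimal sketch of copying one original member disk onto the new array, assuming the old XFS disk has been mounted read-only via Unassigned Devices at a hypothetical /mnt/disks/olddisk1:

          # Preserve permissions, ownership and timestamps; safe to re-run if interrupted
          rsync -avh --progress /mnt/disks/olddisk1/ /mnt/user/

      Repeat per old data disk. As the post says, each data disk carries a complete filesystem of its own, so the parity drives hold no files and don't need copying.)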
  11. Hi Folks. I've been passing through an on-board (HP ML350 Gen8) USB2 controller to an Ubuntu VM for some time with no issues; perfect. However, it seems that something has changed. The device I'm passing through, a FlightStick Pro SDR device, is low power and, along with another couple of devices, works perfectly well (using a fibre-to-USB converter to cover some distance and isolate the power for radio work)... the issue is one of 'timing' and potentially dropped data from the USB device. https://discussions.flightaware.com/t/re-mlat-timing-issue-error-configuration-check/80767 https://github.com/wiedehopf/adsb-wiki/wiki/mlat-in-VMs There are some 'Proxmox' settings... anything to suggest for QEMU? I've moved the hardware over to a Raspberry Pi 4 and there are no issues whatsoever. Testing the VM with passthrough of a USB3 Renesas card also highlights 'clock timing issues'. Is there any best practice for how to check / tune USB pass-through performance? As mentioned, the devices pass through fine; performance and 'timing' seem to be the issue. Will the controller type on the VM make any difference to the configuration of the pass-through? Are there any settings / manual changes I should look to make? Thanks Mo
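      (On the controller-type question: the guest's own virtual USB controller model can be set explicitly in the libvirt XML. A minimal sketch of the syntax, offered as an illustration rather than a known fix for the mlat timing problem; the PCI address shown is a placeholder:

          <!-- Give the guest an xHCI (USB3) virtual controller -->
          <controller type='usb' index='0' model='qemu-xhci' ports='8'/>

          <!-- The passed-through physical controller remains a plain PCI hostdev -->
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <source>
              <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
            </source>
          </hostdev>

      Whether the virtual controller model matters here is an open question, since the SDR hangs off the passed-through physical controller rather than the emulated one.)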
  12. Hi All. Strange issue with passthrough of my 1660S. Due to USB breakout requirements, I've forced the IOMMU groups to split out:

          pcie_acs_override=downstream,multifunction vfio_iommu_type1.allow_unsafe_interrupts=1

      I now have the GPU, Audio and 'USB Controller' all in their own IOMMU group with no sharing:

          IOMMU group 32: [10de:21c4] 0a:00.0 VGA compatible controller: NVIDIA Corporation TU116 [GeForce GTX 1660 SUPER] (rev a1)
          IOMMU group 33: [10de:1aeb] 0a:00.1 Audio device: NVIDIA Corporation TU116 High Definition Audio Controller (rev a1)
          IOMMU group 34: [10de:1aec] 0a:00.2 USB controller: NVIDIA Corporation TU116 USB 3.1 Host Controller (rev a1)

      In the Windows 10 VM config, I assign the GPU and sound card as expected but receive the following error:

          internal error: qemu unexpectedly closed the monitor: 2021-04-18T08:32:22.076398Z qemu-system-x86_64: -device vfio-pci,host=0000:0a:00.1,id=hostdev1,bus=pci.9,addr=0x0: vfio 0000:0a:00.1: failed to setup container for group 33: Failed to set iommu for container: Operation not permitted

      Seeing that the audio controller is in a separate, isolated IOMMU group, I'm struggling to understand why I'm facing this issue. Any ideas? HP ML350P Gen 8 with latest BIOS. Thanks Mo tower-diagnostics-20210418-0934.zip
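      (One thing worth checking when this particular error appears is whether the unsafe-interrupts override actually took effect, and what the kernel says about interrupt remapping; a minimal diagnostic sketch:

          # Should print Y if the boot parameter was applied
          cat /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts

          # Look for messages about IOMMU interrupt remapping being enabled or unsupported
          dmesg | grep -i -e dmar -e 'remap'

      That error is commonly associated with the interrupt-remapping check failing, so confirming the override is live helps narrow things down.)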
  13. Hi Folks. For those having issues with the Big Sur install and 'Method 2': try editing line 273 to include an 'rm' before the dmg line. This removes the 'permission denied' error and hopefully lets the script continue. Worked for me. Remember, if you reconfigure the container you will lose the edit, as it's within the container and not part of the appdata. (Line 273 of the unraid.sh within the /macinabox folder of the container.)
  14. This works a treat. Ensure you disable Fast Boot in Windows 10 power settings for the ACPI tables to recache. Perfect. 🙂
  15. Nice one Jummama. I'll take a look. I did attempt to patch utilising the aforementioned sed method but on checking my ACPI tables it still wasn't working. Do I need to refresh the tables somehow on an existing VM? Any reason why I cannot patch within an Unraid Terminal session? Thanks
  16. Hi Folks. As documented elsewhere, it seems that certain game developers are now taking it upon themselves to block the running of their content on virtual platforms. Given the potential workaround documented in the linked Reddit post below, how easy would it be to either provide an option for a 'hardened' QEMU version OR facilitate building a custom version (as documented in the post)? Whilst I have dev tools installed, I cannot complete the build due to missing deps... namely pixman in my case. Keen to get RDR2 up and running once more. Thoughts would be most welcome.
  17. Thanks Mysticle31. As I thought. I've copied them to the array and will attempt to write from there. 🙂
  18. Hi folks. I'm currently storing two raw (.img) disk image files on an unassigned NVMe SSD. If I wished to write these images out to 'build' two partitions on the SSD so I can boot from it (via host disk pass-through to a VM), is there a way I can accomplish this? Would I need to create some partitions first and then use dd? Thanks Mo
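      (For reference, that is essentially the approach: partition first, then dd each image into its partition. A minimal sketch with a hypothetical /dev/nvme1n1 target, example sizes and example image paths; this is destructive, so triple-check the device name:

          # Fresh GPT with two partitions sized to hold each image
          sgdisk --zap-all /dev/nvme1n1
          sgdisk -n 1:0:+60G -n 2:0:0 /dev/nvme1n1

          # Write each raw image into its partition
          dd if=/mnt/disks/images/vdisk1.img of=/dev/nvme1n1p1 bs=4M status=progress conv=fsync
          dd if=/mnt/disks/images/vdisk2.img of=/dev/nvme1n1p2 bs=4M status=progress conv=fsync

      Whether the result actually boots also depends on the images containing, or being paired with, a boot loader and partition layout the guest firmware expects.)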
  19. Hi Folks. It's very clear that if I wish to pass through a GPU, I need to ensure that it is either in its own IOMMU group OR I have to pass through everything in the IOMMU group the GPU is in. The issue I'm facing is that all 'sub-devices' of the GPU (and the GPU itself) are already split into separate groups due to:

          PCIe ACS override = Both
          Unsafe Interrupts = Yes

          Group 33  0a:00.0  [10de:21c4]  VGA compatible controller: NVIDIA Corporation TU116 [GeForce GTX 1660 SUPER] (rev a1)
          Group 34  0a:00.1  [10de:1aeb]  Audio device: NVIDIA Corporation TU116 High Definition Audio Controller (rev a1)
          Group 35  0a:00.2  [10de:1aec]  USB controller: NVIDIA Corporation TU116 USB 3.1 Host Controller (rev a1)
                    *USB devices attached to controllers bound to vfio are not visible to unRAID*
          Group 36  0a:00.3  [10de:1aed]  Serial bus controller [0c80]: NVIDIA Corporation TU116 USB Type-C UCSI Controller (rev a1)
                    *USB devices attached to controllers bound to vfio are not visible to unRAID*

          append vfio-pci.ids=1106:3483,10de:21c4,10de:1aeb,10de:1aec,10de:1aed modprobe.blacklist=dvb_usb_rtl28xxu isolcpus=0-7,16-23 pcie_acs_override=downstream,multifunction vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot mitigations=off

      I'm then at a loss to understand why, when I select my GPU, sound card and both the USB and Serial Bus Controllers (which I've stubbed... see above), all of which are in their OWN IOMMU group with no other devices, I receive the dreaded:

          2020-09-19 06:42:05.714+0000: 25586: error : qemuProcessReportLogError:2103 : internal error: qemu unexpectedly closed the monitor: 2020-09-19T06:42:05.316795Z qemu-system-x86_64: -device vfio-pci,host=0000:0a:00.1,id=hostdev1,bus=pci.11,addr=0x0: vfio 0000:0a:00.1: failed to setup container for group 34: Failed to set iommu for container: Operation not permitted

      Group 34 only includes the audio device and nothing more, and thus I have no idea why the VM won't start. Any ideas gratefully received, as I'm sure this may be quite common. HP ML350P Gen 8 with latest BIOS et al. Unraid 6.9.0 Beta 25. Thanks Mo
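      (A quick sanity check in this situation is to confirm which driver each function of the card is actually bound to; a minimal sketch:

          # -k shows the kernel driver in use for every function at 0a:00
          lspci -nnk -s 0a:00

      Each function destined for the VM should report 'Kernel driver in use: vfio-pci'; anything still bound to a host driver (snd_hda_intel, xhci_hcd, etc.) is worth ruling out first.)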
  20. Hello Johnnie.Black Thanks for the reply. The default write mode was set to auto so I've set it to the reconstruct write option (turbo). I'll run the performance tests again. Mo
  21. Hi Folks. Just a general question regarding the performance I should be expecting from my array, from both a read and a write perspective. I've a number of controllers in my ML350p Gen8 server:

          P420i (HBA mode)
          LSI 2008 (HBA mode)
          H240 (HBA mode)
          Onboard SATA

      I've run some benchmarking tests via the very nice 'DiskSpeed' docker and, for all my HDDs, I see around 190-206 MB/s (see attachment). What is puzzling is that 'real-world' performance doesn't see transfer rates pushing above 50/60 MB/s, even when, say, moving data between my cache drive (not shown) and the array drives (the cache is an SSD with over a 250 MB/s rate). Any ideas as to further tests or settings I should be looking at? Thanks all. Mo
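      (One simple way to separate raw disk speed from share and parity overhead is to write a test file directly to a single array disk with the page cache bypassed; a minimal sketch using a throwaway file name:

          # Sequential write straight to one array disk, bypassing the page cache
          dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=4096 oflag=direct

          # Sequential read back, then tidy up
          dd if=/mnt/disk1/ddtest.bin of=/dev/null bs=1M iflag=direct
          rm /mnt/disk1/ddtest.bin

      Writes to a parity-protected disk include the read-modify-write parity penalty, which is often why real-world figures sit well below the raw benchmark numbers unless reconstruct (turbo) write is enabled.)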
  22. Hi. I did. I found it to be a power problem and I needed to upgrade my PSU modules from 450W to 750W. Ensure you are using a ROM file. Are you getting anything at all, i.e. just a black screen, or, like me, are you getting a Code 43 error?
  23. I've managed to get a boot to Windows with the TechPowerUp Gigabyte Gaming OC ROM. It shouldn't boot with this ROM but it does... unfortunately I've got a Code 43 error now. Going to attempt an extract via a live CD. As of now, if I attempt a cat ROM extract... I get an Input/Output error. Sigh.
  24. Hi all. I've been operating an HP ML350P Gen 8 server successfully for six months with a 1060 GTX GPU passing through to a Windows 10 VM. Whilst I had to use the ACS override and allow unsafe interrupts (even though the GPU and the HDMI audio were each in their own IOMMU group with nothing else), it did work fine... albeit, even with the override and unsafe interrupts, the VM wouldn't boot with the HDMI audio enabled (when each device was in its own IOMMU group). Anyway, I swapped the 1060 for a 1660 Super today and am struggling to boot. With the existing ACS override and unsafe interrupts, the GPU and associated HDMI audio are in SEPARATE IOMMU groups... along with the additional USB and Serial Bus Controller:

          IOMMU group 33: [10de:21c4] 0a:00.0 VGA compatible controller: NVIDIA Corporation Device 21c4 (rev a1)
          IOMMU group 34: [10de:1aeb] 0a:00.1 Audio device: NVIDIA Corporation TU116 High Definition Audio Controller (rev a1)
          IOMMU group 35: [10de:1aec] 0a:00.2 USB controller: NVIDIA Corporation Device 1aec (rev a1)
          IOMMU group 36: [10de:1aed] 0a:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device 1aed (rev a1)

      The VM is hanging. I've also extracted a ROM and attempted to apply it as per the Space Invader ROM extract method; no change. I've also attempted to add all four of these devices to the syslinux config per:

          VFIO-PCI.IDS=10de:21c4,10de:1aeb,10de:1aec,10de:1aed

      No change. Any further suggestions? It seems the additional USB and Serial Bus Controller devices on the PCI card may be causing me some pain. Thanks in advance. Mo
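      (For anyone needing to dump a vBIOS themselves, the usual sysfs route looks something like the sketch below, assuming the card sits at 0a:00.0 and is not currently driving a display or assigned to a running VM; the output path is just an example. If this still returns an I/O error, extracting from a live CD boot, as mentioned in the post above, is the fallback:

          cd /sys/bus/pci/devices/0000:0a:00.0

          # Enable the ROM read window, dump it, then close the window again
          echo 1 > rom
          cat rom > /mnt/user/isos/vbios/1660super.rom
          echo 0 > rom

      Depending on how it was dumped, the ROM may still need its header checked/trimmed before use, per the Space Invader guide referenced above.)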
  25. Good day. I've been working on this issue for over three days now and believe I've explored all I can with regard to it. Any advice would be gratefully received. HP ML350p Gen 8, latest BIOS.

      I'm not receiving any of the RR issues and can successfully pass through a GPU (GTX 1060) without any problems. What is at issue is the performance, FPS-wise etc. Even running a benchmark at 720p (low) in the Superposition test only pulls around 40 FPS. Utilising the concBandWidthTest tool, I'm seeing around 7,900-9,000 MB/s average bi-directional bandwidth. Whilst I've upgraded to the latest 6.8.0 rc1 and tried both the Q35 and i440fx VM types, it's still the same.

      So I read up on the root PCIe bandwidth fix and note that the issue could have been my GPU not connecting to the VM at PCIe 3.0 x16. Utilising the latest Q35 4.1, I note that the speed in my VM, BOTH in the Nvidia Control Panel AND in GPU-Z, shows x16 PCIe 3.0 (Gen 3) under load. I was thus wondering if this was a hardware issue with the server and not a VM/Unraid issue, so I ran

          lspci -vvvn -s 0a:00.0 | grep LnkSta

      to confirm the current link state of the GPU. Whilst NOT under load, the link speed is 2.5 GT/s @ x16 width. When under load (albeit at 40 FPS), it displays 8 GT/s @ x16 width, which I believe is full PCIe 3.0 speed.

      To note, and it may or may not be an issue: I've allowed unsafe interrupts:

          vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=downstream,multifunction initrd=/bzroot

      Both my GPU AND HDMI audio are in separate IOMMU groups on their own, however I cannot boot the VM with the audio device attached... 'Failed to set iommu for container: operation not permitted' error. Oddly, I cannot reset the GPU WITHOUT attempting to boot with the audio device (which then releases the card).

      I'm thinking of a BIOS downgrade on the server... it certainly seems more hardware-related than config. I am thus at a loss to understand why I'm seeing such poor performance. I've checked that the GPU is in an x16 slot and, whilst I initially had some concerns regarding the C600 chipset used in the ML350p Gen 8 server, HP documentation confirms it is PCIe 3.0. I've also looked into CPU bottlenecks and confirmed that both the GPU and the CPU cores assigned were on the same NUMA node.

      Any ideas? Happy to post logs if useful, but I didn't want to throw them out if not required. Any tests I can undertake on the HOST to check that the GPU is performing correctly, prior to attempting to debug on the VM? Thanks Mo