M0CRT

Members
  • Posts: 25

M0CRT's Achievements

Noob (1/14)

Reputation: 1

Community Answers

  1. Perfect Jorge. Worked like a charm! :-) Thanks.
  2. I have checked the 'cctv big disk' and it looks like it may be fully allocated:

         image: vdisk4.img
         file format: raw
         virtual size: 500 GiB (536870912000 bytes)
         disk size: 500 GiB

     Will the re-sparse still help here? i.e. am I not already allocating all of the disk? XML of the VM, noting the VM is a 'Synology' (see the re-sparsify sketch after this list):

         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/cctv-vm/Synology/vdisk4.img'/>
           <target dev='hde' bus='sata'/>
           <address type='drive' controller='1' bus='0' target='0' unit='4'/>
         </disk>
  3. That's a great idea Jorge, thanks. I think that's what the issue may be. I've run the balance on the BTRFS and I'm in a good place there. You are right, it's likely to be the 'allocation' of the images associated with my VM. I'll cp them back with 'resparsify'.
  4. I'll take a look :-). Thanks! UPDATE: Great article...thanks Kilrah! Yes...it's not as 'simple' as it may initially seem! :-)
  5. As a follow-up: the calc in Unraid shows 708GB 'occupied', but the summary page shows only 70-80GB free. Is the BTRFS volume potentially reserving some space? Surely this should be reported as approx. 290GB free?
  6. Thanks for your feedback Jorge. Any reason for the difference between reported free space and usage?
  7. Hi All. I've pondered on this for the last 24 hours and am still scratching my head. Cache pool with two identical SSDs and a reported available disk space of 1TB. I've several shares set to cache only:

         System (87.0GB)
         Appdata (35.3GB)
         cctv-vm (running my CCTV VM) (554GB)
         nzbdownload (Deluge downloads et al) (31.8GB)

     The rest of my shares do not use the cache at all. When I calculate the space used for these shares (using the Unraid GUI on the shares page) and run a du, the used and remaining space does not add up: the cache-only shares come to a total of 708.1GB, yet the 'free space' on the cache shows 73.0GB. By my rough maths I'm missing around 240GB. Could someone explain what I may be missing here? I've run a balance and scrub on the cache pool (not expecting that this would 'fix' anything). Is this a block size issue? Any reason why I may be 'losing' disk space? I've triple-checked and don't have anything else on the cache beyond the shares/files above. Maybe file permissions are preventing the calc from seeing everything? (See the btrfs usage sketch after this list.) Yours, awaiting enlightenment and putting straight! :-) Thanks
  8. Thanks for the response Jorge! I didn't know I could mix the disks and copy manually... great idea. I have actually done something a little more crazy to get this to work: ignored ZFS and created my new XFS array, then created an Unraid VM and stubbed the LSI card to the VM, created the array with the SAS disks and then mounted a remote share from the VM to the host... lol. It's copying, albeit a little slower than natively! :-)
  9. Hi Folks. I've recently prepped a new i7 12700 based device with 8x SATA ports. My original server is an HP with 'enterprise' SAS. I've moved an LSI SAS card from the HP to my new server to allow me to manually copy the files from each of the five drives (the original array had x2 parity too). All was going very nicely until I found I couldn't mount 4 out of the 5 SAS drives via 'unassigned devices' on the new server. I believe it's a known issue...?

     Well, I'm feeling very distressed at the moment as I have both servers in bits and disks out! :-) It looks like there is no way of mounting the original SAS disks outside of an array without reformatting? QUESTION: Is this correct? Could I potentially look to pass them through to an OS that could read them on the new server and then copy my files off?

     I thought about the following solution; can someone sanity check it? Whilst I assume I could put the disks back in the original server and mount them, this isn't going to get the files onto my new server without some further mess of temp keys and network transfers. Could I create a new ZFS pool as my new, permanent array on my new server and mount FOUR out of the FIVE member disks as a 'new' array to provide an opportunity to copy off the content? I would only be able to use four initially as the SAS card/cable can only support 4 at a time. I know I won't have parity protection, but would this get me to my data and the ability to copy? I could then create a new array when this copy had finished with the remaining single disk. Would this work, or is there 'another way'? I'm deeply concerned. Any help greatly appreciated. Thanks Mo
  10. Hi Folks. Can I have a sanity check on my migration please? I'm migrating from an ML350p Gen 8 HP server to a new 12700 Intel ASRock Z690 PG Riptide solution. The original array was operating as XFS using SAS disks; my new array will be ZFS and SATA based. The cache drives, unassigned devices, graphics cards and PCIe USB3 controller will be physically moving. The original SAS disks (XFS array) won't. :-)

      Whilst I assume I could just network-transfer the files from old to new using a 'temp' licence on the new server until completed, I also assume I could drop a PCIe SAS card into the new server in order to read the individual original array member disks and transfer the files directly off, one at a time, onto the new ZFS array on the new server? This will save time compared to transferring at 1G (although I could have bought an add-on 2.5G card for the HP).

      To be sure: I don't need to mount the original array on the new server to effect a solid transfer? All the data is stored on the member disks and thus I just need to copy from them; no need to bring over the parity drives? (See the mount-and-copy sketch after this list.) Thanks! Mo
  11. Hi Folks. I've been passing through an on-board (HP ML350 Gen8) USB2 controller to an Ubuntu VM for some time with no issues; perfect. However, it seems that something has changed. The device I'm passing through, a FlightStick Pro SDR device, is low power and, along with another couple of devices, works perfectly well (using a fibre-to-USB convertor to cover some distance and isolate the power for radio work). The issue is one of 'timing' and potentially dropped data from the USB device.

      https://discussions.flightaware.com/t/re-mlat-timing-issue-error-configuration-check/80767
      https://github.com/wiedehopf/adsb-wiki/wiki/mlat-in-VMs

      There are some 'Proxmox' settings... anything to suggest for QEMU? I've moved the hardware over to a Raspberry Pi 4 and there are no issues whatsoever. Testing the VM with passthrough of a USB3 Renesas card also highlights 'clock timing issues'. Is there any best practice for how to check / tune USB passthrough performance? As mentioned, the devices pass through fine; performance and 'timing' seem to be the issue. Will the controller type on the VM make any difference to the configuration of the passthrough? Are there any settings / manual changes I should look to make? (See the USB check sketch after this list.) Thanks Mo
  12. Hi All. Strange issue with passthrough of my 1660S. Due to USB breakout requirements, I've forced the IOMMU groups to split out:

          pcie_acs_override=downstream,multifunction vfio_iommu_type1.allow_unsafe_interrupts=1

      I now have the GPU, audio and 'USB controller' each in their own IOMMU group with no sharing:

          IOMMU group 32: [10de:21c4] 0a:00.0 VGA compatible controller: NVIDIA Corporation TU116 [GeForce GTX 1660 SUPER] (rev a1)
          IOMMU group 33: [10de:1aeb] 0a:00.1 Audio device: NVIDIA Corporation TU116 High Definition Audio Controller (rev a1)
          IOMMU group 34: [10de:1aec] 0a:00.2 USB controller: NVIDIA Corporation TU116 USB 3.1 Host Controller (rev a1)

      On the Windows 10 VM config I assign the GPU and sound card as expected but receive the following error:

          internal error: qemu unexpectedly closed the monitor: 2021-04-18T08:32:22.076398Z qemu-system-x86_64: -device vfio-pci,host=0000:0a:00.1,id=hostdev1,bus=pci.9,addr=0x0: vfio 0000:0a:00.1: failed to setup container for group 33: Failed to set iommu for container: Operation not permitted

      Seeing that the audio controller is in a separate, isolated IOMMU group, I'm struggling to understand why I'm facing this issue. Any ideas? HP ML350p Gen 8 with latest BIOS. (See the IOMMU listing sketch after this list.) Thanks Mo

      tower-diagnostics-20210418-0934.zip
  13. Hi Folks. For those having issues with the Big Sur install and 'Method 2': try editing line 273 to include an 'rm' before the dmg line. This removes the 'permission denied' error and hopefully lets the script continue. Worked for me. Remember, if you reconfigure the container you will lose the edit, as it lives within the container and is not part of the appdata. The file is line 273 of unraid.sh within the /macinabox folder of the container. (See the container-edit sketch after this list.)
  14. This works a treat. Ensure you disable Fast Boot in Windows 10 power settings for the ACPI tables to recache. Perfect. 🙂
  15. Nice one Jummama. I'll take a look. I did attempt to patch utilising the aforementioned sed method but on checking my ACPI tables it still wasn't working. Do I need to refresh the tables somehow on an existing VM? Any reason why I cannot patch within an Unraid Terminal session? Thanks
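
Re-sparsify sketch (answers 2 and 3). A minimal sketch of how a raw image can be re-sparsified, assuming the VM is shut down and there is enough free space for a temporary copy; the source path comes from the XML above, the destination filename is illustrative.

    # Shut the VM down before touching its disk image.
    cd /mnt/user/cctv-vm/Synology

    # Option A: copy, punching holes for zeroed regions.
    cp --sparse=always vdisk4.img vdisk4-sparse.img

    # Option B: let qemu-img rewrite the image, skipping zero blocks.
    qemu-img convert -p -O raw vdisk4.img vdisk4-sparse.img

    # Compare apparent size vs. blocks actually allocated, then swap the files.
    du -h --apparent-size vdisk4-sparse.img
    du -h vdisk4-sparse.img
    mv vdisk4-sparse.img vdisk4.img

Note that a raw image only shrinks this way if the guest has actually zeroed or discarded the freed blocks; if the guest filesystem never trimmed them, the copy will stay close to the full 500 GiB.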
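
Btrfs usage sketch (answers 5-7). A hedged way to see where the 'missing' space sits is to ask btrfs itself rather than the share calc; the pool mount point /mnt/cache and the image path are assumptions based on the posts above.

    # Overall picture: raw device size, allocated chunks, and what is actually used.
    btrfs filesystem usage /mnt/cache

    # Data/metadata/system breakdown (RAID1 across two SSDs halves the usable space).
    btrfs filesystem df /mnt/cache

    # Per-directory usage as btrfs sees it, including shared and sparse extents.
    btrfs filesystem du -s /mnt/cache/*

    # Compare a VM image's apparent size with the blocks it really occupies.
    du -h --apparent-size /mnt/cache/cctv-vm/Synology/vdisk4.img
    du -h /mnt/cache/cctv-vm/Synology/vdisk4.img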
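
Mount-and-copy sketch (answers 9 and 10). If the new server can see an old XFS array member at all, one hedged manual approach is to mount its data partition read-only and copy files off one disk at a time; /dev/sdX1 and the destination share name are placeholders.

    # Identify the old array member (device name is a placeholder).
    lsblk -o NAME,SIZE,FSTYPE,MODEL

    # Mount the data partition read-only so the source cannot be altered.
    mkdir -p /mnt/olddisk
    mount -t xfs -o ro /dev/sdX1 /mnt/olddisk

    # Copy onto the new array, preserving attributes; re-run to resume after an interruption.
    rsync -avh --progress /mnt/olddisk/ /mnt/user/someshare/

    umount /mnt/olddisk

Parity drives hold no files, so only the data members need copying, which matches the assumption in answer 10.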
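
USB check sketch (answer 11). A quick, hedged sanity check is to confirm the whole controller (not just the device) is what the guest sees, and at what speed the SDR enumerates; the PCI address 03:00.0 is a placeholder.

    # On the host: confirm the USB controller being passed through is bound to vfio-pci.
    lspci -nnk -s 03:00.0

    # Inside the Linux guest: show the USB topology, driver and negotiated speed per device.
    lsusb -t

    # Inside the guest: watch for resets or errors on the SDR while it is streaming.
    dmesg -w | grep -i usb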
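
IOMMU listing sketch (answer 12). This widely used loop lists every device with its IOMMU group, so the three 1660S functions can be checked for unexpected group-mates; nothing Unraid-specific is assumed.

    #!/bin/bash
    # Print each IOMMU group and the devices it contains.
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo "    $(lspci -nns "${d##*/}")"
        done
    done

If 0a:00.0, 0a:00.1 and 0a:00.2 each appear alone in their groups here, the grouping itself is not the cause of the 'failed to set iommu for container' error.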
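
Container-edit sketch (answer 13). A hedged way to inspect and make the edit without leaving the host; the container name 'macinabox' and the /macinabox/unraid.sh path are assumptions taken from the post, and the exact content of line 273 is not reproduced here.

    # View the lines around 273 inside the running container.
    docker exec macinabox sed -n '270,276p' /macinabox/unraid.sh

    # Open a shell in the container and add the 'rm' by hand with whatever editor is available.
    docker exec -it macinabox sh

As the post notes, the change is lost if the container is recreated, since it is made inside the container rather than in appdata.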