MatzeHali

Everything posted by MatzeHali

  1. I'm not sure that what you see has anything to do with SMB. My ZFS pool has been running for years now, and when used internally on the server there are no speed problems at all. My problem is still that the SMB protocol completely bottlenecks any really fast network activity in combination with MacOSX clients. What I did to mitigate this is that the software I'm mainly working with now caches files on internal NVMes under MacOSX, so the data needed while working on something is quickly available, while ZFS in the background holds all the important stuff to keep it safe.
  2. Hi there, my UnRAID install was running strong for about 1.5 years without a restart. The machine runs a few dockers for local network use, an externally available Nextcloud docker and a big ZFS pool as an SMB share for a MacPro and my mobile MacBookPro. It has a normal Gigabit onboard network interface at 192.168.0.111, which is connected to the internet with my router as the gateway. I have also configured a dual 10Gigabit network interface running on 192.168.1.111 with a fixed IP and no gateway, connected to a separate 10G switch, so this is for internal network traffic only. The Macs also connect to this switch via their 10G ports. Historically, I could reach both the SMB shares and the GUI via either interface's IP address. Suddenly, the 10Gigabit one doesn't work anymore, so I can neither connect to SMB shares nor the Web GUI via the 192.168.1.111 address. My main problem is: I'm remotely connected through my MacPro, so not on site, but I need to render a lot of stuff remotely and this is very slow over the Gigabit connection only. I can't ping the 10G address from the MacPro, and I can't test whether any other device on the 10G network is pingable, because the third device is with me on the road. How can I check if the 10G interface is correctly connected to the network switch, or if the network switch is still responding? Is there any way to know? Internally, the interface seems to respond to the normal commands; I can tell it to ping outbound, but of course nothing answers that way either. Thanks for any ideas, Cheers, M
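     A minimal sketch of what could be run on the server to check whether the 10G ports still see a link to the switch; the interface names eth2/eth3 are an assumption, the real names are listed under Settings > Network Settings:
       # negotiated speed and whether a carrier/link is detected on the 10G port
       ethtool eth2 | grep -E 'Speed|Link detected'
       # kernel view of the carrier state plus error counters
       ip -s link show eth2
       cat /sys/class/net/eth2/carrier   # 1 = link up, 0 = no link to the switch
     If the carrier is up but 192.168.1.111 is still unreachable, the problem is more likely on the switch or client side than on the interface itself.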
  3. Hi, yes of course. Totally wanted to attach it before, but forgot. :D mhions-storage-diagnostics-20220102-1656.zip
  4. Hi guys and gals, I have the situation that I have a 5 data, 2 parity disk UNRAID configuration, and one data disk had stopped working, so I took a fresh disk, precleared it, stopped the array, replaced the faulty disk with the newly precleared one, started the array again and it started to rebuild. Now the rebuild has finished and it shows valid parity, but the disk in question is shown as unmountable. As far as I can tell, the array is not missing any data, and I can access everything. So, is that disk not used in the array now, and being emulated? How is that possible after a rebuild? What would be my next step to make sure that everything is actually in order? Thanks, M
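     As a hedged first step, a read-only filesystem check against the rebuilt disk would show whether the unmountable state is just filesystem corruption; this assumes the disk is XFS (the Unraid default) and is disk 1 in the array (adjust the md device number), and the array has to be started in maintenance mode:
       # -n = no modify, only report what it would fix
       xfs_repair -n /dev/md1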
  5. I probably would not hot plug for changes to the main array, but there's other stuff I do on that box, so I hot plug drives in for other stuff all the time. Can't just shut down a server which is in use because I need to do something else. Just that one time I somehow must have jiggled a drive of the main array. Rebuild worked fine, also just added another drive to have double parity. Better safe than sorry. So all went well.
  6. OK, so a new config would just assume all disks are OK, but I'd have to resync/check parity to make sure everything is in order? But this would mean the array is down for that time, right? Whereas during a rebuild of the disk I can actually keep using the array? Thx, M
  7. Hi there, I added a drive to my Unraid machine while it was running, and apparently I somehow jiggled one of the main array drives (a 16TB drive), which made Unraid detect some read errors for a few seconds and drop the drive. The array has an 18TB parity drive and some more 14TB drives in it. I'm quite certain that the drive in question is totally fine. Is there a chance of "resilvering" it without having to stop the array, remove the device, preclear it and put it back in to rebuild? Thanks, M
  8. Hi there, I've started to use Unraid extensively as a hypervisor, running multiple VMs (Ubuntu and MacOSX - thanks @SpaceInvaderOne) to do various tasks and computations. Now I'm running into the problem that some of those VMs are calculating stuff (ML mainly), and I want to add some storage to them without switching them off. Growing the Unraid pool is only possible when the array is stopped, which means the VMs would be shut down. So the question is: could I pass through physical disks to the VMs while they are running? I know I did things like that in Parallels under MacOSX, and I know how I can mark disks for passthrough in the UnRAID GUI, so the question is how can I then hand them to a running VM? Thanks for any ideas or pointers to the right places. Cheers, M
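     A minimal sketch of one way this can work with libvirt's hot-plug support, assuming a Linux guest with virtio drivers; the VM name "Ubuntu-ML" and the by-id path are placeholders:
       # hot-attach a whole physical disk to a running VM as /dev/vdb inside the guest
       virsh attach-disk "Ubuntu-ML" /dev/disk/by-id/ata-EXAMPLE_SERIAL vdb --live --targetbus virtio
       # detach it again later without shutting the VM down
       virsh detach-disk "Ubuntu-ML" vdb --live
     Adding --persistent as well would keep the disk in the VM definition across reboots.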
  9. Hey hey, I have acquired two P3600 1.6TB NVME PCI-E cards, marked them as passthrough in the shiny new 6.9.2 of Unraid and passed them on to my MacOSX BigSur VM. There they show up as expected and I started using them, but at a certain point one was just gone. Not only had the drive disconnected so it couldn't be mounted again from Disk Utility, but in the System Report of MacOS the device was only half there anymore: just the generic NVME-controller part, not the actual NVME branch of the card. When rebooting the VM, everything seemed fine. Then it happened again. So, I thought, let's test this. I switched the slots of the two cards and voila, next time, after about 30min of usage, the second card went missing. On a reboot of the VM (not UNRAID, mind you) everything is fine again. So I used a third slot and avoided the slot where both cards had first gone missing. After a while of usage (probably again about 30min), both went missing in the VM. UNRAID shows the devices fair and square, no problem. I need direct access from MacOSX to very fast storage, so it would be great to have them in passthrough, but then again, what are my options for something decently fast but stable? Thanks, M
  10. Yes, I did run the helper script every time I changed something. When I run it, I always get the following output:
     Script location: /tmp/user.scripts/tmpScripts/1_macinabox_helper/script
     Note that closing this window will abort the execution of this script
     Starting to Fix XML
     error: failed to get domain 'put the name of the vm here'
     No network adapters in xml to change.
     Network adapter is already vmxnet3
     Added custom qemu:args for macOS
     topolgy line left as is
     custom ovmf added
     error: Failed to define domain from /tmp/put the name of the vm herefixed.xml
     error: (domain_definition):3: Extra content at the end of the document
     ^
     This is what has been done to the xml
     Your network type was already correct. Network has not been changed.
     The custom qemu:args have been added to you xml.
     VM is set to use custom ovmf files.
     xml is now fixed. Now goto your vm tab and run the VM
     Rerun this script if you make any other changes to the macOS VM using the Unraid VM manger
     Any ideas on how to get past the Apple logo? Thanks.
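     Judging from the "failed to get domain 'put the name of the vm here'" and "Extra content at the end of the document" lines, the helper script may still be running with its placeholder VM name and producing XML that libvirt refuses to define. A couple of commands that could help cross-check this from the Unraid console; the file path is simply the one printed by the script:
       # list the domain names libvirt actually knows about (the name used by the script must match one of these)
       virsh list --all
       # validate the XML file the script tried to define
       virsh define --validate "/tmp/put the name of the vm herefixed.xml"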
  11. I did that, but it didn't help with the startup of the VM. It still is stuck at the Apple logo.
  12. Hi there, I used MacInABox a few weeks ago and created a Catalina machine (I checked BigSur but it got Catalina, I didn't care) with 2 CPU cores and 4GB of memory to do some data reconstruction of an old HFS+ harddrive. During this, the user share was somehow residing on cache and the cache drive got filled up, so the machine halted and I needed to stop everything, clean up the shares and correct which ones should actually use the cache and which shouldn't. After that, because the data reconstruction was slow in the first place, I thought, well, since the VM with Catalina works so far, I'll just give it another two CPU cores and some more memory, so I changed that and executed the user script again. Upon starting the machine now and connecting to it, it was kind of stuck in the boot media selection menu, and when I tried to select the vdisk Catalina was on, some graphic glitches happened and I couldn't boot into anything. So, to battle that, I took the MacOSX installer disk out of the VM, since it wasn't needed anymore, and ran the helper script again. Now, when I start the machine, I see only that one boot drive vdisk Catalina is installed on. When I hit enter, it shows the Apple logo on a black screen as expected, but never goes beyond that. Here's the XML code copied from the UNRAID web interface:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='27'>
  <name>Macinabox BigSur</name>
  <uuid>76af2ce9-41c3-46f4-9057-2d3ac0e85e9f</uuid>
  <description>MacOS Big Sur</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="default.png" os="osx"/>
  </metadata>
  <memory unit='KiB'>17301504</memory>
  <currentMemory unit='KiB'>17301504</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='18'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='19'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/76af2ce9-41c3-46f4-9057-2d3ac0e85e9f_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='2' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/isos/BigSur-opencore.img' index='2'/>
      <backingStore/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <alias name='sata0-0-2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Macinabox BigSur/macos_disk.img' index='1'/>
      <backingStore/>
      <target dev='hdd' bus='sata'/>
      <alias name='sata0-0-3'/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='58:54:23:c2:cf:78'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-27-Macinabox BigSur/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

How can I troubleshoot this? I tried to go back to the original CPU and RAM configuration, but it still always gets stuck at bootup. Any help appreciated. Thanks, MH
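     A hedged starting point for narrowing this down from the Unraid console, assuming the domain is still called "Macinabox BigSur" (the log path follows libvirt's standard naming):
       # tail the qemu log for the VM to see whether it errors out or just stalls
       tail -n 50 "/var/log/libvirt/qemu/Macinabox BigSur.log"
       # confirm which OpenCore image and vdisk the domain actually boots from
       virsh domblklist "Macinabox BigSur"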
  13. Hi NuWanDa, I had the same problem. To solve it, stop the docker, remove it, then remove the docker template by clicking add template, choosing the MacInABox template and hitting the minus button. After that, use a file manager like Krusader and delete the MacInABox folder in appdata. (This is the important step so that the scripts get installed again later.) For good measure, also delete the ISO files of the MacOSX version in the iso folder. After that, reinstall the MacInABox docker. Hope that helps. Cheers, MH
  14. Hi SpaceInvaderOne, firstly, thanks for this, a wonderful tool. My question would be whether it's possible to add Sierra as an OS option, for a legacy FCP Studio 3, DVD Studio 3 and an old Adobe Encore installation. That software is unsupported under High Sierra, but is still the best way of building the occasional DVD or BluRay when one is asked for. So far I've had a dedicated machine standing around for this, but it's only in use occasionally and I'd love to do this virtually. That would be really, really nice. Thanks and keep up the great work. M
  15. Hi there, today I had to shut down all my devices (including the UNRAID server and also an older Fibrenetix fibrechannel RAID), and the latter didn't switch on again after being away from power for 3 hours. I have just bought the same enclosure used on eBay, which will probably take two to three weeks to get here, but since I have a Storinator with a lot of empty slots, I was thinking I could put the 6 3TB drives into the Storinator (mounted strictly read-only), first back up all six drives to vdisks and then attempt a rebuild of the RAID as a virtual RAID set. So, while I'm aware of that concept, is that even possible with UNRAID? Any pointers to which tools' manpages I'd need to study would be highly appreciated. Thanks, M
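     A minimal sketch of the imaging step, assuming the drives show up as unassigned /dev/sdX devices in the Storinator and there is a share with enough free space; the device name and target path are placeholders:
       # image one member drive, read-only, into a raw vdisk file; repeat per drive
       dd if=/dev/sdX of=/mnt/user/raidrescue/disk1.img bs=1M status=progress
     If a drive throws read errors, ddrescue (if installed) is the safer tool for the same job. The raw images can then be loop-mounted or attached to a VM to attempt the software rebuild of the RAID set.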
  16. Hi Pourko, thanks for the suggestion. I did just that and sadly it changed nothing; I'm still hitting the same speed limitations via SMB. Not only that: if I, for example, render a JPEG2000 sequence from the Mac to a share on UnRAID via SMB at a constant render speed (meaning there is basically always the same calculation involved), it starts out at the expected render speed, in my case roughly 5fps, and then gradually degrades to under 1fps. Finder takes ages to display the files in the render folders and it's all painfully slow. If I then break the render, rename the render folder in Krusader on UnRAID and resume the render, which recreates the original render folder, speed is up again until half an hour later or so, when it's down to about an eighth again. So I'm still trying to wrap my head around how I can access UnRAID at 10Gb speeds with lots of files from MacOSX. So far, I'm really not sure what to do. Thx, M
  17. Hi, it’s a native UnRAID installation on a 16core Xeon with 144GB RAM! I think I installed FIO via Nerdtools, too! Cheers, M
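     In case it's useful for comparison, a minimal fio run that could measure pool throughput locally on the server, outside of SMB; the target directory and sizes are just examples:
       fio --name=seqwrite --directory=/mnt/user/someshare --rw=write --bs=1M --size=4G \
           --numjobs=1 --ioengine=libaio --end_fsync=1 --group_reporting
     If the local numbers look fine but SMB from the Mac degrades over time, that points at the protocol or client side rather than the pool itself.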
  18. Solved this by setting up bridging for the second bond, too. That slipped my attention before. With that bridging enabled, I was able to set br1 within the VM settings.
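     For anyone finding this later: with bridging enabled on the second bond, the relevant part of the VM's XML ends up looking roughly like this (the MAC address is just a placeholder):
       <interface type='bridge'>
         <mac address='52:54:00:xx:xx:xx'/>
         <source bridge='br1'/>
         <model type='virtio'/>
       </interface>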
  19. If I share my pool via NFS with: zfs set sharenfs=on poolname, I can't access it from MacOSX via nfs://servername/poolname, which is the path I would expect it to be at. Where is it shared to? Thanks, M
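     A hedged way to see what the server actually exports (servername and poolname as above); the export path is normally the dataset's mountpoint, which may differ from the bare pool name:
       # from the Mac or any NFS client: list the paths the server exports
       showmount -e servername
       # on the server: active NFS exports and the ZFS share settings
       exportfs -v
       zfs get sharenfs,mountpoint poolname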
  20. In the end, the easiest solution to have the Nextcloud docker behave normally on both subnets was to install a second docker for the second interface. Easy as pie and so far no hiccups. The reason was that I wanted Nextcloud to behave as a "real" server with its own IP on both subnets, not just ports on the main IP, so that's the route I took. Thanks for all the help and suggestions. M
  21. Hi. The share is on the UnRAID server, but I have now solved it by connecting via WebDAV in the Finder, which works well. Cheers, M
  22. Hi there, I'm set up with a primary network bond configured from the onboard eth0 and eth1 as bond1, and a secondary bond2 with an additional 10GbE card's eth2 & eth3. How do I configure a VM to bridge to the second bond? With the normal br0 it bridges to bond1 and I can't get it off of that. If I put br1 in the XML, it throws an error that it is not available. Same with bond2. Thanks, M
  23. Hi there, I've been running Nextcloud with your provided docker for a few weeks now and everything is smooth, there's just one question: when connecting to appdata via SMB, to directly upload or download files to the server without using the Nextcloud web interface, I can't access the data folder. It has a big one-way-street sign on it in Finder under MacOSX Catalina. It makes no difference which user I log into the SMB share with. Is there a way to change that, or is that by design, so I can only put files on the Nextcloud shares using the web interface? Thx, M
  24. OK. But my server addresses are, for example, 192.168.0.100 and 192.168.1.100, so I can reach the web interface and everything at the .100 addresses. The Nextcloud docker gets assigned a new IP (in my case 192.168.0.101), so what you are describing doesn't work, because it's only reachable at that specific IP, and I would love it to also be reachable at 192.168.1.101 on the additional NIC. I'll try out some things over the next days and report back on what seems to be the best solution. Thanks for your suggestions so far. M
  25. Since the slow and the fast network have different IP addresses, I guess that doesn't work? Because I explicitly want the different IPs on different NICs, which are also different from the main server IP. Cheers, M