Posts posted by Jorgen

  1. Hi JTok,

    Thanks for your work on this. I'm setting this up for the first time and was wondering about this warning for backups to keep:

    # WARNING: If VM has multiple vdisks, then they must end in sequential numbers in order to be correctly backed up (i.e. vdisk1.img, vdisk2.img, etc.).

    I've ended up with two vdisks with the same number somehow, but in two different locations:

    • /mnt/user/domains/JorgenOSX/vdisk2.img
    • /mnt/disk3/J-VM-vdisk2/vdisk2.img

    In my case, will the script not back up both disks in the first place, or will the identical numbering just affect the retention/deletion?

    Do I need to rename one of the vdisks to resolve this? That makes me nervous as I don't have a backup yet, a catch-22 moment here... :)

  2. On 5/2/2018 at 3:18 AM, Struck said:

    Another thing i am wondering about is how to use symlinks in the watch folder.

    Instead of me moving the data from the array to the watch share i would like to use symlinks (if possible)

    I tried creating a symlink with Krusader, but this didn't trigger handbrake to do anything.

    I tried to use symlinks for both folders and files directly in the watch folder.


    Any ideas?


    Here's one way of doing it from the command line:

    I'm sure you can achieve the same from Krusader, but I don't know how. I use Midnight Commander, where it's a two-step process: 1. create the symlink, 2. edit the symlink to replace the first part of the path (/mnt/user/Media) with something the docker can understand (/storage).


    I know this works for files, but haven't tried folders. Please report back if you test that.
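    Here's a rough sketch of the same idea from the unRAID command line. The share names and the /storage mapping are assumptions based on my own setup, so adjust the paths to yours. The key point is that the symlink's *target* must be a path the container understands, not the host path:

```shell
# Hypothetical paths -- substitute your own shares. Assumed setup:
#   host media:        /mnt/user/Media  (mapped to /storage in the docker)
#   HandBrake watch:   /mnt/user/handbrake/watch
mkdir -p /mnt/user/handbrake/watch

# Create the symlink with the container path as the target, so HandBrake
# can resolve it from inside the docker:
ln -sf /storage/Movies/movie.mkv /mnt/user/handbrake/watch/movie.mkv

# Check where the link points:
readlink /mnt/user/handbrake/watch/movie.mkv
```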


    Update: on second start of the VM, with no changes to XML or hardware, it all just works. I still get the libusb errors in the VM log, but they don't seem to affect anything.

    The missing GPU output was a loose cable, and the missing network is something that happens occasionally to this VM so it was probably a coincidence.


    So, I think this would work for you.


    Alternatively, you could always use the excellent Libvirt Hotplug USB plugin instead. That will let you configure the VM without the USB device attached, and then simply attach it while the VM is running if/when required.


  4. Ok, decided to try this myself.


    I have a VM with a bluetooth dongle passed through as a USB device.

    The VM starts fine with the dongle plugged in to the unRAID box and this XML:

        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x0a12'/>
            <product id='0x0001'/>
          </source>
          <address type='usb' bus='0' port='2'/>
        </hostdev>

    If I unplug the dongle from the host and try to start the VM, I get the expected error:



    I then edited the XML to this:

        <hostdev mode='subsystem' type='usb' managed='no'>
          <source startupPolicy='optional'>
            <vendor id='0x0a12'/>
            <product id='0x0001'/>
          </source>
          <address type='usb' bus='0' port='2'/>
        </hostdev>

    Now the VM starts, but it's not working properly. It has no network and the passed through GPU doesn't output a signal, so I can't see what's going on.

    There are also three new errors in the VM log that don't normally show up, no doubt related to the issues:

    libusb: error [_get_usbfs_fd] libusb couldn't open USB device /dev/bus/usb/002/004: Operation not permitted
    libusb: error [_get_usbfs_fd] libusb couldn't open USB device /dev/bus/usb/002/002: Operation not permitted
    libusb: error [_get_usbfs_fd] libusb couldn't open USB device /dev/bus/usb/002/004: Operation not permitted

    I'll continue to experiment with this, but at this point it doesn't look like startupPolicy='optional' is working, at least not for a Mac OS VM.

  5. From what I've read you should be able to specify a USB device as optional on start of a VM.

    Fair warning, I haven't tried this myself, so I don't know if it works or breaks something. Try it on a test VM first. And please report back if it works or not.


    After adding your USB device as per normal in the VM edit screen, you need to manually edit the XML and add startupPolicy='optional' to the source tag of the USB device.

    Here's the example from the link above:

    Your vendor and product IDs will be different, and you will also have an address tag added.

      <hostdev mode='subsystem' type='usb'>
        <source startupPolicy='optional'>
          <vendor id='0x1234'/>
          <product id='0xbeef'/>
        </source>
      </hostdev>
  6. Also got a nice speed bump from the 2-minute performance video, thanks @gridrunner!!


    Now, are there any other CPU features we could/should add?

    I've compared the flags of my actual CPU (i7-4770):

    root@Tower:~# lscpu | grep Flags
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti retpoline tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts

    to the CPU that MacOS thinks it is using (via terminal inside the MacOS VM):

    sysctl -n machdep.cpu.features

    Comparing the two gives me this long list of "missing" CPU features (present on the host but not reported inside the VM):

    abm	acpi	aperfmperf	arat	arch_perfmon	avx	avx2	bmi1	bmi2	bts	clflush	constant_tsc	cpuid	cpuid_fault	ds_cpl	dtes64	dtherm	dts	epb	ept	erms	est	f16c	flexpriority	fma	fsgsbase	ht	ida	invpcid	invpcid_single	lahf_lm	lm	monitor	movbe	nonstop_tsc	nopl	nx	pbe	pcid	pclmulqdq	pdcm	pdpe1gb	pebs	pln	pni	popcnt	pti	pts	rdrand	rdtscp	rep_good	retpoline	sdbg	smep	smx	ss	sse4_1	sse4_2	syscall	tm	tm2	tpr_shadow	tsc_adjust	tsc_deadline_timer	vmx	vnmi	vpid	xsaveopt	xtopology	xtpr

    Where is the best place to start learning about what these features do, and how they behave in KVM?


    Although, I note that AVX, AVX2 and XSAVEOPT are all missing from the sysctl results, so maybe that command is not correct? I did add them to the XML as per the video.
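    For what it's worth, here's a rough shell sketch of how the two lists could be compared. Note the naming differences between the two tools (e.g. sysctl reports SSE4.1 in upper case while lscpu says sse4_1), which might explain why AVX and friends look "missing". The flag lists below are short placeholders, paste in the real output:

```shell
# Placeholder data -- replace with the real output of:
#   host:  lscpu | grep Flags              (on unRAID)
#   guest: sysctl -n machdep.cpu.features  (inside the Mac OS VM)
host='fpu vme avx avx2 sse4_1 xsaveopt'
guest='FPU VME SSE4.1'

# Normalise both lists to lower case, one flag per line, then list the
# flags present on the host but absent from the guest:
tr ' ' '\n' <<<"$host"    | tr '[:upper:]' '[:lower:]' | sort -u > host.flags
tr ' \t' '\n\n' <<<"$guest" | tr '[:upper:]' '[:lower:]' | sort -u > guest.flags
comm -23 host.flags guest.flags
```

    Because the two tools spell some flags differently, anything reported by comm should be sanity-checked by hand before concluding it's really missing.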




    I love my Node 304 and would pick similar components to yours if I had to upgrade today.

    For PSU I’d consider the Corsair SF450 450W SFX instead.

    Its 0 RPM mode makes it silent under low load, and it comes with flat cables. The SilverStone PSU has fat, braided cables. And trust me, cable management is not fun in this case!
    I don’t have personal experience with either of these PSUs, so do your own research.

    Speaking of cable management: because of the way the drives are mounted, the power and SATA connectors end up facing alternating directions between disks. Ideally you want two SATA power cables with at least three connectors each, or you'll have problems connecting all six disks without twisting and contorting the cables.
    Not sure if the Corsair or SilverStone comes with that.

    Has anyone managed to pass through an Intel IGD to OSX?

    I've got my OSX VM running fine using VNC on unRAID 6.4.1, but I can't get any output signal when I swap to IGD graphics with the XML below. I've tried plugging a screen into both the HDMI and DVI ports; neither outputs a signal. I haven't tried the VGA port as I don't have a cable handy.


    The same hostdev tags work fine for a LibreELEC VM, so I'm assuming it's a problem on the OSX side?

    The OSX VM boots fine and I can access it via Apple's Screen Sharing or NoMachine, but the connected display remains black.


    Do I need to change anything in Clover to get this to work? Or is it a lost cause and Intel IGD will never work?

    Here's what the OSX System Report thinks is happening:



    The resolution is different from what I've specified in the Clover BIOS and config.plist, so at least something is changing compared to booting with VNC graphics in the XML.

    The display preferences pane in System Settings reports the screen as built-in, with 1280x1024 as the only resolution option. As far as I can tell this is not the native resolution of the screen I've got connected, nor of the 27-inch iMac, so I'm not sure where it's coming from.



    Any advice will be greatly appreciated




    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <vmtemplate xmlns="unraid" name="Linux" icon="Apple_vintage_white.png" os="linux"/>
      <memory unit='KiB'>4194304</memory>
      <currentMemory unit='KiB'>4194304</currentMemory>
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='2'/>
        <vcpupin vcpu='1' cpuset='6'/>
        <vcpupin vcpu='2' cpuset='3'/>
        <vcpupin vcpu='3' cpuset='7'/>
        <emulatorpin cpuset='0,4'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-2.10'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <boot dev='hd'/>
      </os>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='2' threads='2'/>
      </cpu>
      <clock offset='utc'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/JorgenOSX/vdisk2.img'/>
          <target dev='hdc' bus='sata'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/disk3/J-VM-vdisk2/vdisk2.img'/>
          <target dev='hdd' bus='sata'/>
          <address type='drive' controller='0' bus='0' target='0' unit='3'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x10'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x11'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0x12'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0x13'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='dmi-to-pci-bridge'>
          <model name='i82801b11-bridge'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
        </controller>
        <controller type='pci' index='6' model='pci-bridge'>
          <model name='pci-bridge'/>
          <target chassisNr='6'/>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:54:45:cf'/>
          <source bridge='br0'/>
          <model type='e1000-82545em'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x06' slot='0x01' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </hostdev>
        <memballoon model='virtio'>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </memballoon>
      </devices>
      <qemu:commandline>
        <qemu:arg value='-usb'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='usb-mouse,bus=usb-bus.0'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='usb-kbd,bus=usb-bus.0'/>
        <qemu:arg value='-smbios'/>
        <qemu:arg value='type=2'/>
        <qemu:arg value='-cpu'/>
        <qemu:arg value='Penryn,vendor=GenuineIntel,kvm=on,+invtsc,vmware-cpuid-freq=on,'/>
      </qemu:commandline>
    </domain>


  9. Ensure you have the correct credentials that you set during the setup phase, check that Caps Lock / Num Lock are in the right state for your chosen password, and/or clear your browser cache and cookies.
    I spun up a new instance of this image, set a password and logged in with no problems whatsoever.

    Also, be aware that the username is case sensitive. I went through several re-installs with the same login problem before realizing what was happening.

    Sent from my iPhone using Tapatalk
  10. I use the Virtual Machine Wake On Lan plugin with the iOS app Mocha WOL to wake two VMs without problems. One is LibreELEC and the other Mac OS. Never tried it with Windows.
    Apart from installing the plugin and enabling it under Settings/VM Manager, I have also assigned static IPs to the VMs in my router.
    In the Mocha app I've configured each VM's MAC address and static IP.
    Can't remember if I had to change any settings within each VM, but I don't think so.

    Don't think it would work via a reverse proxy though, as you're not on the same LAN as the host.


  11. @jademonkee,

    Timely coincidence. I just had this happen on my system for the first time that I can remember (been using unRAID as a Time Machine target for at least 2 years) and remember seeing this post.

    Decided to do some troubleshooting.

    From the unRAID console, I ran this command to find all files opened on my Time Machine share (named "TimeMachine"):

    lsof | grep TimeMachine

    Which gave me this (abbreviated, there were about 50 very similar lines owned by these three processes):

    shfs       7747  7749       root    7u      REG                9,4    8388608 5255019797 /mnt/disk4/TimeMachine/Jorgen’s MacBook Pro.sparsebundle/bands/169
    afpd      24546           jorgen   11ur     REG               0,30    8388608     110480 /mnt/user/TimeMachine/Jorgen’s MacBook Pro.sparsebundle/bands/169
    cnid_dbd  25478           nobody  cwd       DIR               8,17        169  470203911 /mnt/cache/system/TimeMachine/.AppleDB

    Which all looks normal for an AFP share being in use by Time Machine. Three processes have files open: shfs (the unRAID user-share filesystem), afpd (AFP, Apple's file-sharing protocol) and cnid_dbd (also part of AFP). You can see that the afpd process is started by the username that my MacBook is using to connect to the TimeMachine share. The other two are unRAID system accounts.

    I suspect the files in use by either afpd or cnid_dbd are the ones that the error messages refer to.


    Now the problem is of course that the share ISN'T actually in use by the MacBook or any other Mac on my network.

    My best guess is that something interrupted the previous Time Machine backup in a way that prevented the process from exiting cleanly on the unRAID side. In my case, I suspect the MacBook was put to sleep mid-backup. Easy to do if you don't pay attention to the Time Machine status when you shut the laptop lid. Naturally I'd blame other family members for this... :)


    Anyhow, with a bit of research I figured out that I could kill the afpd and cnid_dbd processes by restarting AFP on unRAID. So I made sure I ejected all unRAID shares from the Macs on my network, then in the unRAID web UI went to Settings / AFP, changed Enable AFP to No and clicked Apply.

    I then ran the lsof command again and it came back with no results, as expected. Good, whatever had the files in use was gone.

    Went back to Settings and enabled AFP again.

    Then I kicked off another Time Machine backup from my MacBook and it completed successfully. No error message, happy days.


    So I don't know for sure what caused the problem, but at least I found a workaround that helped in my case.
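    If anyone wants to script the check, here's a rough sketch (the share name is mine, substitute your own; and note that a plain grep will also count unrelated lines that happen to contain the same string):

```shell
SHARE=TimeMachine   # your Time Machine share name here

# Count lines in the lsof output that mention the share. 2>/dev/null hides
# the warnings lsof tends to print about inaccessible filesystems.
open_count=$(lsof 2>/dev/null | grep -c "$SHARE")

if [ "$open_count" -gt 0 ]; then
  echo "$SHARE still has $open_count open file(s) -- try restarting AFP"
else
  echo "$SHARE has no open files"
fi
```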


    Hope this helps


  12. Ah, thanks Garycase, I've clearly over-complicated this!


    I have plenty of free space, so here's the plan:

    1. Move all current data (800GB) off disk1.

    2. Fill disk1 with copies of data from other disks. These files will go into a "temp test" folder at the root of disk1.

    3. Move all the test data to another disk "temp test 2" folder

    4. Delete both "temp test" folders


    Or maybe I could simplify it further

    If deleting or writing data also results in reads, I could remove step 3.

    This seemed to happen when I moved the 800GB onto disk1.

  13. Extended smart test came back clean as a whistle.


    Moving on to the next step in my testing plan, and I'm struggling.

    I'd like to run only the pre-read part of the preclear script, but it doesn't look like the preclear plugin or Joe L's or bjp999's scripts can do this. Or at least I can't work out how to.

    The next problem is that disk1 has data on it and is assigned to the array already. And both the plugin and the scripts prevent you from running on a disk assigned to the array.


    I'm open to moving the data off the disk and removing it from the array, then running a full pre-clear cycle before adding it back.

    But I need help with the steps on how to do this without stuffing up my array or losing data. I've never taken a disk out of the array before.


    If someone could point me in the right direction I'd be very grateful.
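    In the meantime, a possible substitute for the pre-read that I might try: a plain sequential read of the whole device with dd. It only reads, so it should be safe, but triple-check the device name first! Sketched here against a small test file standing in for the disk:

```shell
# Stand-in for the disk; on the real system you'd point SRC at the raw
# device, e.g. SRC=/dev/sdb (verify the letter before running!).
SRC=/tmp/fake-disk.img
dd if=/dev/zero of="$SRC" bs=1M count=8 2>/dev/null

# The "pre-read": sequentially read every block and discard the data.
# Read errors will show up in dd's exit status and in the syslog.
if dd if="$SRC" of=/dev/null bs=1M 2>/dev/null; then
  echo "pre-read completed without errors"
fi
```

    It won't give you the preclear script's reporting, but it generates the same kind of sustained sequential read load.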

  14. I'm far from an expert, but here's my understanding:

    AFP is Apple's own (proprietary) network protocol. With older versions of Mac OS X it was definitely recommended to use AFP, because it was faster and supported Apple-specific extended attributes. Some things, like Time Machine, only worked over AFP.

    All non-Apple servers use the open-source netatalk to serve up AFP shares. I assume your Synology uses it too. I don't know if you can get any details of the version or the way Synology has implemented it, to compare with the unRAID implementation?


    Sometime recently, maybe when Sierra was released, Apple announced better support for SMB and an intention to deprecate AFP.


    You could try enabling AFP for the share and re-connecting the client via AFP. Might make a difference. I think it's recommended not to connect to the same share via both AFP and SMB at the same time though.


    Finally, since you're using SMB, there seem to have been some performance problems with the Mac implementation in Sierra. Here's one thread about it, might be worth digging into further:

    I don't know if this has been fixed in later versions.

  15. Did you ever change the cable?   The simplest way to confirm if this is a cable issue is to simply replace the SATA cable with a high-quality cable and be sure it's firmly seated on both ends.

    Yes, cable is replaced. Just need to trigger lots of reads. Copying 1TB of files over to the disk now, will run an extended SMART test after that, followed by the pre-read portion of the pre-clear script.
    If I get a read error I will swap to another motherboard port and repeat the tests.
    If I still get errors it has to be the disk itself.
    I'll report back...

  16. I've had this problem in the past on my Mac. I remember reading posts from other Mac users having the same problem, but never found a definitive solution. And others didn't seem to be affected at all.

    My theory at the time was that it's something to do with the way Finder reads and stores folder metadata, the things that are stored in those pesky .DS_Store or other Mac-specific hidden files. I eventually gave up on it and just accepted fate.


    But there are two settings that you could try:


    1. For each share, under the AFP Security Settings area, set "Volume dbpath" to a cache-only folder. This will store some (all?) Mac-specific files on your cache drive instead of the array disk.


    2. For each share, under the SMB Security Settings, set "Enhanced OS X interoperability" to Yes. According to the built-in help this should speed up Finder browsing if you're using SMB instead of AFP.


    Let us know if either of those works.




    Screen Shot 2017-07-19 at 11.14.51 pm.png

    Screen Shot 2017-07-19 at 11.24.18 pm.png

  17. On 11/07/2017 at 0:23 AM, said:


    Very doubtful.


    You are of course right. The correlation was only that they both started happening on the same day after 2 years of flawless operation. But I have now seen both types of errors occur independently of each other.


    Since my last post I've added a new 8TB drive and decided to move all content off disk1 (the one with read errors) onto the new disk4. Mainly because I wanted to convert disk1 to XFS.

    During this move, I kept seeing lots of read errors, both in the syslog and the raw smart value (that I'm now trying my hardest to ignore). The move of 3TB+ resulted in about 1000 read errors according to syslog. Diagnostics attached. 

    I didn't realise this before the data move, but I must have somehow moved disk1 back to the dodgy motherboard socket when I added the new disk4. Sigh. Although to be honest I don't have much choice: I only have 6 SATA ports on the motherboard, and they are all occupied now.


    So in an effort to pinpoint where the read errors are coming from, I'd like to stress test the motherboard port, disk1 and the cable between them, while changing only one thing at a time.

    First test is the cable. I've replaced the old cable with a brand new one, but left disk1 connected to the suspect mb port.


    So here's my question (finally): since there's no data on disk1 now, how do I trigger lots of reads? I need big volumes, because the errors are intermittent. Is there a way to run only the pre-read part of the pre-clear scripts? Or should I just copy big chunks of data to/from the disk?


    This attribute is okay:
    ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       66

    Raw value is not important for this one; the current value is 200 and the threshold is 51, which means you only need to worry if the value drops below 51.

    Yes, I agree that by itself it's not important, but the errors seem to correlate with the read errors in the syslog that trigger the scary warning emails.