Posts posted by CS01-HS

  1. 2 hours ago, itimpi said:

    Does the plugin only back up the vdisk files for a VM or does it back up the XML (and potentially BIOS) files that are stored in the libvirt image?

     

    Looks like it backs up all of them. See the log from a recent run (core-backup/domains is the destination directory):

     

    2021-04-27 10:41:02 information: Debian is shut off. vm desired state is shut off. can_backup_vm set to y.
    2021-04-27 10:41:02 information: actually_copy_files is 1.
    2021-04-27 10:41:02 information: can_backup_vm flag is y. starting backup of Debian configuration, nvram, and vdisk(s).
    sending incremental file list
    Debian.xml
    
    sent 7,343 bytes received 35 bytes 14,756.00 bytes/sec
    total size is 7,237 speedup is 0.98
    2021-04-27 10:41:02 information: copy of Debian.xml to /mnt/user/core-backup/domains/Debian/20210427_1040_Debian.xml complete.
    sending incremental file list
    55c212-015b-6fc0-dada-cba0018034_VARS-pure-efi.fd
    
    sent 131,246 bytes received 35 bytes 262,562.00 bytes/sec
    total size is 131,072 speedup is 1.00
    2021-04-27 10:41:02 information: copy of /etc/libvirt/qemu/nvram/55c212-015b-6fc0-dada-cb6ca0018034_VARS-pure-efi.fd to /mnt/user/core-backup/domains/Debian/20210427_1040_55c21fa2-015b-6fc0-dada-cba0018034_VARS-pure-efi.fd complete.
    '/mnt/cache/domains/Debian/vdisk1.img' -> '/mnt/user/core-backup/domains/Debian/20210427_1040_vdisk1.img'
    2021-04-27 10:42:19 information: copy of /mnt/cache/domains/Debian/vdisk1.img to /mnt/user/core-backup/domains/Debian/20210427_1040_vdisk1.img complete.
    2021-04-27 10:42:19 information: backup of /mnt/cache/domains/Debian/vdisk1.img vdisk to /mnt/user/core-backup/domains/Debian/20210427_1040_vdisk1.img complete.
    2021-04-27 10:42:19 information: extension for /mnt/user/isos/debian-10.3.0-amd64-netinst.iso on Debian was found in vdisks_extensions_to_skip. skipping disk.
    2021-04-27 10:42:19 information: the extensions of the vdisks that were backed up are img.
    2021-04-27 10:42:19 information: vm_state is shut off. vm_original_state is running. starting Debian.
    Domain Debian started
    
    2021-04-27 10:42:20 information: backup of Debian to /mnt/user/core-backup/domains/Debian completed.

     

  2. I had a strange problem where maybe every fifth boot my Mojave VM wouldn't have a network connection (using e1000-82545em). The adapter (Realtek RTL8111H) was detected but there was no connection.

     

    I fixed it by manually configuring the adapter in the VM. Dozens of boots so far and it hasn't happened again. Hope that's helpful.

    [screenshot: the manually configured network settings in the VM]
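
    For anyone who'd rather script it, the same thing can be done from Terminal inside the VM. This is only a sketch: the service name "Ethernet" and the addresses are assumptions, so substitute whatever networksetup -listallnetworkservices reports and your actual LAN settings.

    # list the network services, then give the adapter a static IP, netmask, and router
    networksetup -listallnetworkservices
    sudo networksetup -setmanual "Ethernet" 192.168.1.50 255.255.255.0 192.168.1.1
    # DNS isn't inherited when configuring manually, so set it explicitly
    sudo networksetup -setdnsservers "Ethernet" 192.168.1.1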

     

     

  3. Anyone else getting very high latency and low speeds with PIA's WireGuard?

     

    Here's a speedtest from within an unRAID VM:

    $ curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python -
    Retrieving speedtest.net configuration...
    Testing from Optimum Online (XX.XX.XXX.XX)...
    Retrieving speedtest.net server list...
    Selecting best server based on ping...
    Hosted by Syndeo Solutions (Springfield, MO) [1748.93 km]: 48.05 ms
    Testing download speed................................................................................
    Download: 103.52 Mbit/s
    Testing upload speed................................................................................................
    Upload: 33.52 Mbit/s

     

    And here it is on the same machine from within the container (configured like so):

    # grep Endpoint /mnt/cache/appdata/binhex-qbittorrentvpn/wireguard/wg0.conf
    Endpoint = bahamas.privacy.network:1337

     

    # docker exec -it binhex-qbittorrentvpn sh
    sh-5.1# curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python -
    Retrieving speedtest.net configuration...
    Testing from Redcom-Internet (XX.XXX.XXX.XX)...
    Retrieving speedtest.net server list...
    Selecting best server based on ping...
    Hosted by Gold Data (Miami, FL) [295.19 km]: 1927.194 ms
    Testing download speed................................................................................
    Download: 8.65 Mbit/s
    Testing upload speed......................................................................................................
    Upload: 7.85 Mbit/s

     

    I tried adding the following to Extra Parameters per the guide but it doesn't seem to make a difference.

    --sysctl="net.ipv4.conf.all.src_valid_mark=1"
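
    To rule out the Extra Parameter silently not applying, you can check the value inside the running container; the path below is the standard procfs location for that sysctl, and it should print 1 once the flag has taken effect:

    # confirm the src_valid_mark sysctl actually took inside the container
    docker exec binhex-qbittorrentvpn cat /proc/sys/net/ipv4/conf/all/src_valid_mark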

     

  4. 3 hours ago, frodr said:

    Emby Server does not render HDR, just faded colors, with both Apple TV and the LG app on the TV. "Enable HDR tone mapping" is suggested by some on the Emby forum, but there's no such setting inside Emby Server.

     

    Any ideas?

     

    It's in the beta. Switch your repository to emby/embyserver:beta or wait for the next release.

  5. Maybe it's backing up the full Recovery partition every time?

     

    Stab in the dark, but I believe Spotlight indexing of the Mac and of the backup plays a role.

    You can rebuild the Mac's index easily enough by turning it off:

    sudo mdutil -a -i off

    then on:

    sudo mdutil -a -i on

    Rebuilding the index on the backups is more complicated; it may be easier to start fresh.

    You can set multiple Time Machine destinations, so you wouldn't have to wipe the old one.
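
    If it helps, here's a rough sketch of adding a second destination from Terminal; the /Volumes path is an assumption (point it at wherever the new backup share is mounted), and destinationinfo just confirms both destinations are registered:

    # add a second Time Machine destination and list what's configured
    sudo tmutil setdestination -a /Volumes/NewBackup
    tmutil destinationinfo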
     

  6. 19 hours ago, Joseph said:

    8.21 MB/s is terrible!! I have no idea what's going on.

     

    That's what I'm getting at. We're blaming unRAID, but I think Time Machine itself is the limiting factor.

     

    You can disable Time Machine throttling by running this on your Mac, which might speed it up (it gets reset on reboot):

    sudo sysctl debug.lowpri_throttle_enabled=0
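
    To turn throttling back on without rebooting, set the same sysctl back to its default of 1:

    sudo sysctl debug.lowpri_throttle_enabled=1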

     

    Still, that log showing a 40 GB backup in about an hour and a half isn't bad, better than what I get, so it might be a combination of factors. Is there a good reason the backup was so large? Mine are usually around 1 GB.

     

    Not sure about your "Recovery" error.

  7. On 3/10/2021 at 6:36 PM, Joseph said:

    However, someone suggested unchecking "Put hard disks to sleep when possible" in the Mac's Energy Saver settings and then rebooting. It seems to have 'fixed' the issue for me. Haven't had R/W problems since, even after upgrading to 6.9.0.

     

    Has this fix stuck? Approximately how long does it take to complete a backup and of what size?

    Mine is about 20 minutes for 700 MB, but I wiped and restarted my backup set a few days ago, so it might slow down over time.

     

    You can run this command in an OS X terminal to see details as Time Machine runs:

    log stream --style syslog  --predicate 'senderImagePath contains[cd] "TimeMachine"' --debug
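
    Alongside the log stream, a quick way to see where a running backup stands (bytes copied, time remaining) is tmutil; this is a supplementary check rather than part of my original workflow:

    # summarize the current backup's phase and progress
    tmutil status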

     

    I think it would help if we had a baseline: what's the fastest we can expect with networked backups, whether that's to unRAID, Synology, Time Capsule, a shared Mac, etc.?

     

    So far my unRAID pool beats Time Capsule.

  8. 49 minutes ago, voodood said:

    Hi there, is this broken in Unraid 6.9.1? I've been using it for years but after the last Unraid update the docker just will not stay up. Looking at the logs, it all works fine until...

     

    Info App: All entry points have started
    libgcc_s.so.1 must be installed for pthread_cancel to work
    Aborted
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] waiting for services.
    libgcc_s.so.1 must be installed for pthread_cancel to work
    Aborted
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] waiting for services.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.

     

    Are you running embyserver:beta?

    I got the same error, but only with the latest beta (4.6.0.34).

     

    You can wait for a fix or specify the previous beta in Repository: emby/embyserver:4.6.0.33

  9. 2 hours ago, dlandon said:

    If the file in Share B is being replaced, the Share B file will go to the recycle bin.

     

    Right. In my example the file only existed on Share A prior to the move.

     

    I ran a test:

    1. Uploaded test.mp4 to share Public
    2. Moved test.mp4 from share Public to share system

    Result: test.mp4 exists in share system and in share Public's recycle bin

     

    Note:

    • This is over SMB from a Mac client
    • Both shares have cache enabled
    • test.mp4 existed on neither share prior to the test

    Here's the File Activity log beginning just before the move (note the last few lines).

     

    EDIT:

     

    I suspect there's no specific SMB command to "move" files between shares; what's happening is that the client first copies (to the destination) and then deletes (from the source), so there's no way for your plugin, or any other, to differentiate. I tried to confirm, but this low-level Samba stuff is beyond me.
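
    In other words (purely an illustration of the hypothesis, not literal SMB traffic), the server-side effect of the client's "move" is roughly:

    # copy to the destination share, then delete from the source share;
    # it's the delete, arriving over SMB, that the recycle-bin VFS redirects into Public/.Recycle.Bin
    cp /mnt/user/Public/test.mp4 /mnt/user/system/test.mp4
    rm /mnt/user/Public/test.mp4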

     

    Long-winded way of saying "you should probably ignore my original post, but users beware of large moves."

     

    ** Cache
    Mar 21 12:00:29 OPEN => /mnt/cache/Public/.DS_Store
    Mar 21 12:00:29 ATTRIB => /mnt/cache/Public/.DS_Store
    Mar 21 12:00:29 ATTRIB => /mnt/cache/Public/.DS_Store
    Mar 21 12:00:29 CREATE => /mnt/cache/system/test.mp4
    Mar 21 12:00:29 OPEN => /mnt/cache/system/test.mp4
    Mar 21 12:00:29 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:29 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:29 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:29 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:29 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:29 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:29 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:29 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:29 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:29 OPEN => /mnt/cache/system/test.mp4
    Mar 21 12:00:29 OPEN => /mnt/cache/system/test.mp4
    Mar 21 12:00:29 OPEN => /mnt/cache/system/test.mp4
    Mar 21 12:00:30 OPEN => /mnt/cache/Public/test.mp4
    Mar 21 12:00:30 OPEN => /mnt/cache/system/test.mp4
    Mar 21 12:00:30 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:30 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:30 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:30 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:30 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/Public/test.mp4
    Mar 21 12:00:31 OPEN => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/system/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/Public/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/Public/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/Public/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/Public/test.mp4
    Mar 21 12:00:31 ATTRIB,ISDIR => /mnt/cache/Public
    Mar 21 12:00:31 ATTRIB,ISDIR => /mnt/cache/Public/
    Mar 21 12:00:31 CREATE,ISDIR => /mnt/cache/Public/.Recycle.Bin
    Mar 21 12:00:31 ATTRIB,ISDIR => /mnt/cache/Public/.Recycle.Bin
    Mar 21 12:00:31 ATTRIB,ISDIR => /mnt/cache/Public
    Mar 21 12:00:31 ATTRIB,ISDIR => /mnt/cache/Public/
    Mar 21 12:00:31 MOVED_FROM => /mnt/cache/Public/test.mp4
    Mar 21 12:00:31 ATTRIB => /mnt/cache/Public/.Recycle.Bin/test.mp4
    Mar 21 12:00:32 OPEN => /mnt/cache/system/test.mp4
    Mar 21 12:00:32 OPEN => /mnt/cache/system/test.mp4

     

  10. If I move a file (over SMB) from Share A to Share B, and Share A has Recycle Bin enabled, the file will exist on Share B and in Share A's Recycle Bin. Essentially duplicated.

     

    Not the biggest problem, but if, say, someone did a bunch of moves between cache-enabled shares, it could unexpectedly fill the cache pool.

     

    Has anyone else noticed this?

    I'm on a Mac client and I've tweaked some SMB parameters, so it's possible this is a "me" problem.

  11. I noticed none of the containers I have routed through this one could communicate with services on LAN IPs.

     

    I found the solution in A27 of the FAQ:

    Adding a variable VPN_OUTPUT_PORTS with a list of port exceptions (mine are Emby and SMTP).

    [screenshot: VPN_OUTPUT_PORTS variable added to the container template]
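
    For anyone who prefers Extra Parameters over a template variable, the equivalent docker flag looks like the line below; the port numbers are only examples (Emby's default 8096 and SMTP on 25), so use whatever your LAN services actually listen on:

    -e VPN_OUTPUT_PORTS=8096,25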

     

    I didn't see this mentioned here (maybe it's new), so I thought I'd point it out.

  12. Actually, the latest versions of UD relinquished spin control to unRAID, so disks in UD spin down according to the default unRAID spindown timer.

     

    Maybe you just have to wait longer, or maybe you're encountering the same problem I did: the USB drive returns an error when spindown is called, so even though it's spun down, unRAID thinks it's active.

     

    I have a fix but I'm not confident enough in it to really recommend it:

     

     

  13. 11 hours ago, dlandon said:

    Switch to number of reads and writes and you'll not see it change.

     

    Yup, both zeros. Now I see what you meant.

     

    I had to wait for the temperature to disappear again before I cat'd devs.ini, but it did, and they match up:

    ["dev2"]
    name="dev2"
    id="ST4000VN008-2DR166_ZGY68"
    device="sdc"
    sectors="7814037168"
    sector_size="512"
    rotational="1"
    spundown="1"
    temp="*"
    numReads="0"
    numWrites="0"

     

  14. 40 minutes ago, dlandon said:

    I see the issue here. Unraid spun down the disk because there was no read or write activity.

     

    Maybe I'm misunderstanding, or I wasn't clear, but the relevant disk is Dev 2 (sdc), which shows ~150 MB/s reads in the screenshot.

     

    I'm not planning to run hdparm -y but I took the last line of that log:

    spinning down /dev/sdc

    to mean that unRAID had.

  15. 3 hours ago, dlandon said:

    I'd say the issue is that Unraid does not recognize preclear disk activity as disk file read and write activity.

     

    Ah, I bet that's because the fix for spindown in 6.9.1 was to monitor partition stats rather than disk stats, and preclear reads and writes the disks directly (not through a partition), so its activity doesn't register. I wonder if there's a risk in running hdparm -y during the write phase; I guess I'll find out soon.
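
    A rough way to see the distinction (my own check, nothing official): watch /proc/diskstats while the preclear runs. The whole-disk row (sdc) should show its read/write counters climbing while the partition row (sdc1), if the disk even has one mid-preclear, stays flat, which would explain why the new spindown logic thinks the disk is idle.

    # compare whole-disk vs partition I/O counters during the preclear
    grep -E ' (sdc|sdc1) ' /proc/diskstats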

  16. 21 hours ago, doron said:

    Yup, that is, in fact, intentional 🙂

     

    I don't want to derail further, but for anyone following: I ended up customizing sdspin to ignore "bad/missing sense" returns from USB drives, so I can uninstall the plugin (I have no SAS drives). Thanks for the plugin and your help; otherwise I never would have found the fix.

     

     

  17. On 3/14/2021 at 4:41 PM, dlandon said:

    That's what I am looking for. I don't see anything in your log though. Several versions ago there was a bug that messed up some config file settings. Feel free to edit the /flash/config/plugins/unassigned.devices/unassigned.devices.cfg file and see if there is something out of order. It might just be quite obvious: a stray automount setting. Make your edit, save the file, then click on the double arrow icon on the UD webpage to refresh the configuration in RAM.

     

    Okay, I figured out what's causing this. I have this block in my SMB Extras:

    #unassigned_devices_start
    #Unassigned devices share includes
       include = /tmp/unassigned.devices/smb-settings.conf
    #unassigned_devices_end

     

    And on boot the file it references is populated:

    root@NAS:~# more /tmp/unassigned.devices/smb-settings.conf
    
    
    include = /etc/samba/unassigned-shares/usb-hdd.conf
    
    root@NAS:~# more /etc/samba/unassigned-shares/usb-hdd.conf
    [usb-hdd]
    path = /mnt/disks/usb-hdd
    browseable = yes
    force User = nobody
    ...

    Which explains why it's shared.

     

    If I share and then un-share the partition in the WebGUI, it blanks the file and all's well:

    root@NAS:~# more /tmp/unassigned.devices/smb-settings.conf
    
    ******** /tmp/unassigned.devices/smb-settings.conf: Not a text file ********
    
    root@NAS:~#
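
    As a sanity check after blanking the file, testparm can dump the effective Samba config (it follows the includes), so you can confirm the [usb-hdd] share is no longer defined; the grep is just to narrow the output:

    testparm -s 2>/dev/null | grep -A3 '\[usb-hdd\]'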

     

  18. I have a USB drive (sdb) mounted with UD:

    [screenshot: sdb mounted in Unassigned Devices]

     

    v6.9.1 spins it down correctly, but the (substandard) USB interface returns bad/missing sense data:

    root@NAS:~# hdparm -y /dev/sdb
    
    /dev/sdb:
    issuing standby command
    SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 00 0a 80 00 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

     

    ...which this line in /usr/local/sbin/sdspin catches and records as an error:

      [[ $RET == 0 && ${OUTPUT,,} =~ "bad/missing sense" ]] && RET=1

     

    So although the drive's spun down, unRAID thinks it isn't, causing (a) the activity LED to stay green and (b) unRAID to try to spin it down again every spindown interval.


    I've worked around it by modifying the code (below) to exclude USB drives from the test.

    Is that risky? Should I not have?

    I'm not sure what cases this test is meant to handle.

     

    I've also updated the status check for USB drives to use smartctl since (at least with mine) hdparm can't detect standby.

     

    #!/bin/bash
    
    # spin device up or down or get spinning status
    # $1 device name
    # $2 up or down or status
    # ATA only
    
    # hattip to Community Dev @doron
    
    RDEVNAME=/dev/${1#'/dev/'}      # So that we can be called with either "sdX" or "/dev/sdX"
    
    get_device_id () {
      LABEL="${RDEVNAME:5}"
      DEVICE_ID=`ls -l /dev/disk/by-id/ | grep -v " wwn-" | grep "${LABEL}$" | rev | cut -d ' ' -f3 | rev`
      echo "$DEVICE_ID"
    }
    
    smartctl_status () {
      OUTPUT=$(/usr/sbin/smartctl --nocheck standby -i $RDEVNAME 2>&1)
      RET=$?
      # Ignore Bit 1 error (Device open failed) which usually indicates standby
      [[ $RET == 2 && $(($RET & 2)) == 2 ]] && RET=0 
    }
    
    hdparm () {
      OUTPUT=$(/usr/sbin/hdparm $1 $RDEVNAME 2>&1)
      RET=$?
      # ignore missing sense warning which might be caused by a substandard USB interface
      if [[ ! "$(get_device_id)" =~ ^usb-.* ]]; then
        [[ $RET == 0 && ${OUTPUT,,} =~ "bad/missing sense" ]] && RET=1
      fi
    }    
    
    if [[ "$2" == "up" ]]; then
      hdparm "-S0"
    elif [[ "$2" == "down" ]]; then
      hdparm "-y"
    else
      # use smartctl (instead of hdparm) for USB drives
      if [[ "$(get_device_id)" =~ ^usb-.* ]]; then
        smartctl_status
      else
        hdparm "-C"
      fi
      [[ $RET == 0 && ${OUTPUT,,} =~ "standby" ]] && RET=2
    fi
    exit $RET
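
    With the modified script saved over /usr/local/sbin/sdspin, a quick sanity check is asking for status on the USB drive; per the script above, an exit code of 2 means it reports standby via smartctl:

    sdspin /dev/sdb status; echo $?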

     

  19. Okay, mystery solved. Compare the return code from your version to unRAID's:

     

    root@NAS:~# sdspin /dev/sdb down; echo $?
    SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 00 0a 80 00 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    0
    root@NAS:~# cp /usr/local/sbin/sdspin.unraid /usr/local/sbin/sdspin
    root@NAS:~# sdspin /dev/sdb down; echo $?
    1
    root@NAS:~#

     

    Now the two scripts:

     

    Unraid's version returns 1 (RET=1):

    hdparm () {
      OUTPUT=$(/usr/sbin/hdparm $1 $RDEVNAME 2>&1)
      RET=$?
      [[ $RET == 0 && ${OUTPUT,,} =~ "bad/missing sense" ]] && RET=1
    }
    
    ...
    
    exit $RET

     

    Yours executes this block

        else  # Not SAS
    
          $DEBUG && echo $HDPARM -y $RDEVNAME
          $HDPARM -y $RDEVNAME > /dev/null
    
        fi

     

    without capturing a return code, then hits the last line, returning 0:

    exit 0 # Just in case :-)

     

     

  20. 20 minutes ago, doron said:

    It does.

    Check out /usr/local/sbin/sdspin - this is what Unraid calls (as of 6.9). 

    That code in turn calls hdparm.

     

    sdspin! That's what I was looking for, thank you. I mistakenly assumed that script was specific to your plugin. And I see the plugin helpfully saves the default version, so I'll dig in and figure out the difference.
