Posts posted by 54tgedrg45

  1. I want to remove disk1 from the array and rebuild parity without it.

    I've moved all data from disk1 to disk2; however, there is still 8.8 GB of data on disk1 (hidden?).

    When I remove disk1 and start the array, three shares show "SOME FILES ARE UNPROTECTED". This is probably because the files were initially scattered terribly across all the disks; each share is now assigned to its own disk and the data has been moved accordingly (I moved the data so that no share shows the "data outside assigned disk" message).

     

    All cache-only shares don't show the unprotected message.

    All shares are assigned to disk2, disk3, or disk4, or to the cache, with disk1 excluded.

     

    parity disk

    disk1: no files indexed, 8.8 GB hidden?

    disk2: in use by one share showing unprotected (holding the data moved from disk1)

    disk3: in use by one share showing unprotected

    disk4: in use by one share showing unprotected

     

    cache: (includes shares like Docker and ISOs)

    nvme1n1

    nvme1n0

     

    What hidden data lives on disk1, and if it's user data, how do I get it off?
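
    One quick way to see what is actually on the disk, a sketch assuming disk1 is mounted at /mnt/disk1 as usual on Unraid:

    # show total usage and any hidden (dot) entries at the top level
    du -sh /mnt/disk1
    ls -la /mnt/disk1
    # list the largest directories two levels deep, hidden ones included
    du -h --max-depth=2 /mnt/disk1 2>/dev/null | sort -rh | head -n 20

    If nothing meaningful shows up, the 8.8 GB may simply be filesystem metadata/reserved space rather than user files.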

  2. Um, where are the script output logs stored? After a script finishes I get the trashcan icon/prompt to delete the log for that task, but it never tells me the path of the log.

    How do I view them?

    No idea where they would be; /logs on the flash drive shows none.

     

    Edit: OK, they can be found at /tmp/user.scripts/tmpScripts/

     

    # each script gets its own folder there; the log file is named log.txt
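
    For example, to find and read them from a terminal (the per-script folder name below is a placeholder for whatever the script is called):

    # list the per-script log folders
    ls /tmp/user.scripts/tmpScripts/
    # locate every log file the plugin has written
    find /tmp/user.scripts/tmpScripts/ -name log.txt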

  3.  

    On 5/17/2016 at 5:05 PM, Naldinho said:

    I created a VM -- installed Windows 7 + drivers etc. Then I just copied the vdisk and created a new VM except that I selected the copy of the vdisk that I made. Boot up the VM and it worked.

    So as far as I can tell this is all that is required. I'm pretty new to VMs so I could be wrong but it worked fine for me.

     

    With that approach I end up in the UEFI shell at power-on for a copied Debian vdisk1.img.

    Removing the Unraid share makes no difference. (v6.8.3)

    Edit:

    It seems the cause is the following.

     

    I found that when I run this in the UEFI shell presented for my copied Debian image:

    fs0:
    
    cd efi/debian
    
    grubx64.efi

    then it boots, but that isn't persistent...
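
    As a stopgap before the proper fix below, the same commands can go into a startup.nsh file at the root of the EFI system partition; the UEFI shell runs that script automatically at boot (a sketch, assuming the firmware drops into a shell that honors startup.nsh):

    # startup.nsh at the root of fs0: (the EFI system partition)
    fs0:
    cd efi/debian
    grubx64.efi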

     

    The fix: https://wiki.debian.org/GrubEFIReinstall

    # Reinstalling grub-efi on your hard drive

    # Check that the computer booted in EFI mode:
    [ -d /sys/firmware/efi ] && echo "EFI boot on HDD" || echo "Legacy boot on HDD"
    # should return "EFI boot on HDD".

    # After starting a root shell (if you booted from live media, start a chroot shell
    # instead, as explained in https://help.ubuntu.com/community/Grub2/Installing#via_ChRoot),
    # check that your EFI system partition (most probably /dev/sda1) is mounted on
    # /boot/efi. If the /boot/efi directory does not exist, create it first.

    # find the EFI system partition (sda1 on bare metal => vda1 inside the VM)
    lsblk
    mkdir -p /boot/efi
    mount /dev/vda1 /boot/efi

    # Reinstall the grub-efi package
    apt-get install --reinstall grub-efi

    # Put the Debian bootloader in /boot/efi and create an appropriate entry in the computer's NVRAM
    grub-install

    # Re-create a grub config file based on your disk partitioning scheme
    update-grub

    # Afterwards, check that:
    # 1. the bootloader exists at /boot/efi/EFI/debian/grubx64.efi
    file /boot/efi/EFI/debian/grubx64.efi
    # /boot/efi/EFI/debian/grubx64.efi: PE32+ executable (EFI application) x86-64 (stripped to external PDB), for MS Windows

    # 2. the NVRAM entry was properly created
    efibootmgr --verbose | grep debian

    # You can now reboot, and GRUB should greet you.
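
    An alternative worth noting, since a freshly copied VM has no NVRAM entries at all: GRUB can also be copied to the removable-media fallback path, which firmware tries when no boot entry matches, so it doesn't depend on efibootmgr (a sketch, run after the grub-install above):

    # install GRUB at the fallback path the firmware probes by default
    mkdir -p /boot/efi/EFI/boot
    cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/boot/bootx64.efi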

     

  4. I wonder how Synology is able to wake up when receiving things like SMB requests; I can only think of basic packet detection done by some BMC-like interface.

    For my situation I set up an old Raspberry Pi to send magic packets to the Unraid box whenever known clients answer ping (exit status 0), since operating times vary.

    At the moment I have this cron job for testing:

    #!/bin/bash

    # 20200607
    # Requires: sudo apt-get install etherwake fping
    # Schedule with: sudo crontab -e

    # Config
    MACADDR[0]="AA:BB:CC:DD:EE:FF"
    #MACADDR[1]="AA:BB:CC:DD:EE:FF"
    IPHOSTS[0]="x.x.x.x"
    #IPHOSTS[1]="x.x.x.x"
    IPCLIENTS[0]="x.x.x.x"
    IPCLIENTS[1]="x.x.x.x"
    NICID="ethx"

    # Send a magic WOL packet to MAC address $1 via interface $2
    wakeupDevice(){
        printf "\nSending magic WOL packet with: \n\tsudo etherwake %s -i %s\n" "$1" "$2"
        sudo etherwake "$1" -i "$2"
    }

    triglog="/var/log/cron_triglog.log"

    lenh=${#MACADDR[@]}
    lenw=${#IPHOSTS[@]}
    if [ "$lenh" -ne "$lenw" ]; then
        echo "[$(date +%s)] Error: MACADDR and IPHOSTS arrays do not match in length, check config..." >> "$triglog"
        exit 1
    fi

    # Monitor every ~10 seconds for status
    i=0
    while [ "$i" -lt 6 ]; do
        # NOTE: use absolute paths when run from cron!
        timestamp=$(date +%s)
        echo "$timestamp"

        hostsonline=true   # any host that fails to reply flips this to false
        requestwol=false

        # Ping hosts
        for hip in "${IPHOSTS[@]}"; do
            if fping -c1 -t300 "${hip}" >/dev/null 2>&1; then
                printf "Host %s found\n" "${hip}"
            else
                printf "Host %s not found\n" "${hip}"
                hostsonline=false
            fi
        done

        # Ping clients only when a host is down
        if [ "${hostsonline}" = true ]; then
            printf "All hosts replied\n"
        else
            for cip in "${IPCLIENTS[@]}"; do
                if fping -c1 -t300 "${cip}" >/dev/null 2>&1; then
                    printf "Client %s found\n" "${cip}"
                    requestwol=true
                else
                    printf "Client %s not found\n" "${cip}"
                fi
            done
        fi

        # Send WOL; use a separate index j so the outer loop counter i survives
        if [[ ${requestwol} = true && ${hostsonline} = false ]]; then
            for (( j=0; j<lenw; j++ )); do
                echo "[$timestamp] WOL: sending magic WOL packet using ${MACADDR[$j]} - ${NICID}" >> "$triglog"
                wakeupDevice "${MACADDR[$j]}" "${NICID}"
            done
        fi

        sleep 9 # ~6 s of tolerance; consider https://mmonit.com to manage jobs below 1 minute
        i=$(( i + 1 ))
    done
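
    And the matching crontab entry on the Pi, a sketch (the script path /home/pi/wol_monitor.sh is a placeholder):

    # sudo crontab -e
    # run every minute; the script itself loops ~6 times at ~10 s intervals
    * * * * * /home/pi/wol_monitor.sh >/dev/null 2>&1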

     

  5. I've been running this plugin since 28 Feb 2020 on an Intel server board. It only slept well for a few days straight after client activity; since then, sleep has been quite random/very rare. The SSD cache mover runs daily but takes only a few minutes.

    A parity check consumes the whole of every Monday.

     

    Is there something about btrfs-formatted drives in the pool? I have one drive (a WD2000F9YZ (SE) HDD) with btrfs (only 17.3 MB of 2 TB in use); the other data/parity drives are HGST Ultrastar He10/WD2002FAEX with xfs.

    It keeps Unraid 6.8.3 awake according to the log, while all disks are spun down according to the Unraid UI (SSD cache drives are excluded from monitoring):

    Quote

    ...

    a whole idle night shows sdh disk activity in the log

    ...

    Wed Mar 18 15:51:08 CET 2020: Disk activity on going: sdh
    Wed Mar 18 15:51:08 CET 2020: Disk activity detected. Reset timers.
    Wed Mar 18 15:52:08 CET 2020: Disk activity on going: sdh
    Wed Mar 18 15:52:08 CET 2020: Disk activity detected. Reset timers.
    Wed Mar 18 15:53:08 CET 2020: Disk activity on going: sdh
    Wed Mar 18 15:53:09 CET 2020: Disk activity detected. Reset timers.
    Wed Mar 18 15:54:09 CET 2020: Disk activity on going: sdh
    Wed Mar 18 15:54:09 CET 2020: Disk activity detected. Reset timers.
    Wed Mar 18 15:55:09 CET 2020: Disk activity on going: sdh
    Wed Mar 18 15:55:09 CET 2020: Disk activity detected. Reset timers.
    Wed Mar 18 15:56:09 CET 2020: Disk activity on going: sdh
    Wed Mar 18 15:56:09 CET 2020: Disk activity detected. Reset timers.
    Wed Mar 18 15:57:09 CET 2020: Disk activity on going: sdh
    Wed Mar 18 15:57:09 CET 2020: Disk activity detected. Reset timers.
    Wed Mar 18 15:58:10 CET 2020: Disk activity on going: sdh
    Wed Mar 18 15:58:10 CET 2020: Disk activity detected. Reset timers.
    Wed Mar 18 15:59:10 CET 2020: Disk activity on going: sdh
    Wed Mar 18 15:59:10 CET 2020: Disk activity detected. Reset timers.
    Wed Mar 18 16:00:10 CET 2020: Excluded day [3] or hour [16].
    Wed Mar 18 16:01:10 CET 2020: Excluded day [3] or hour [16].
    Wed Mar 18 16:02:10 CET 2020: Excluded day [3] or hour [16].
    Wed Mar 18 16:03:10 CET 2020: Excluded day [3] or hour [16].
    Wed Mar 18 16:04:10 CET 2020: Excluded day [3] or hour [16].
    Wed Mar 18 16:05:11 CET 2020: Excluded day [3] or hour [16].
    Wed Mar 18 16:06:11 CET 2020: Excluded day [3] or hour [16].
    Wed Mar 18 16:07:11 CET 2020: Excluded day [3] or hour [16].
    Wed Mar 18 16:08:11 CET 2020: Excluded day [3] or hour [16].

    ....
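
    One way to find out what is actually touching sdh, a sketch assuming the old block_dump kernel switch is available (it still is on the 4.19 kernel that Unraid 6.8.3 ships; it was removed from much newer kernels):

    # log every block I/O to the kernel log for five minutes
    echo 1 > /proc/sys/vm/block_dump
    sleep 300
    echo 0 > /proc/sys/vm/block_dump
    # then list which processes hit sdh
    dmesg | grep sdh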

  6. 22 minutes ago, Squid said:

    If the log is insanely long, any browser will choke on it.

    If so, would it be possible for the plugin to capture just the tail of the log so it displays within browser limits?
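
    In the meantime, a workaround from a terminal, assuming the log location found earlier (the script-name folder is a placeholder):

    # show only the last 200 lines instead of loading the whole log
    tail -n 200 /tmp/user.scripts/tmpScripts/<script name>/log.txt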

  7. Is anyone else seeing an instant Firefox tab hang/crash after clicking the script log of a script running in the background?

    This happens when running rsync with a lot of stdout output.

    Currently running Unraid 6.8.3 / User Scripts 2020.02.27 / Firefox 73.0.1
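
    A possible workaround until then, a sketch assuming the per-file output isn't needed in the script log (/src/ and /dst/ are placeholders):

    # -q suppresses all non-error output, keeping the plugin log tiny
    rsync -aq /src/ /dst/
    # or keep only the end-of-run transfer summary
    rsync -a --stats /src/ /dst/ | tail -n 20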
