54tgedrg45

Members
  • Posts: 15
  • Personal Text
    Missing ZFS, iSCSI, NFS4, Podman and enjoying these;
    rsync: symlink "/.../lib/python3.6/os.py" -> "/usr/lib/python3.6/os.py" failed: Operation not supported (95)


54tgedrg45's Achievements

Noob (1/14) · 0 Reputation

  1. Okay, thank you for that. I now feel comfortable rebuilding the parity without disk1. * https://wiki.unraid.net/index.php/FAQ_remove_drive : the two v6 links at the top seem dead here; did one of them refer to https://wiki.unraid.net/Shrink_array ?
  2. I want to remove disk1 from the array and rebuild the parity without it. I've moved all data from disk1 to disk2, however there is still 8.8GB of data on disk1 (hidden?). When I remove disk1 and start the array, three shares show "SOME FILES ARE UNPROTECTED". This is probably because the files were at first terribly scattered across all disks; each share is now assigned to its own disk and the data has been moved/arranged accordingly (I moved the data so that no share shows the "data outside assigned disk" message). Shares that live on the cache only don't show the unprotected message. All shares are assigned to disk2/3/4 or the cache, with disk1 excluded:
     • parity disk
     • disk1: no file index, 8.8GB hidden?
     • disk2: in use by one share showing unprotected (holding the data of disk1)
     • disk3: in use by one share showing unprotected
     • disk4: in use by one share showing unprotected
     • cache (includes shares like Docker and ISOs): nvme1n1, nvme1n0
     What hidden data lives on disk1, and if it's user data, how do I get it out? (See the sketch after this list.)
  3. I was searching for the path of the logs, since there is no download button that grabs them. I'm now thinking of putting all output together with the rsync log file at the backup destination (see the sketch after this list).
  4. Um, where are the script output logs stored? After execution (of all scripts) I get the trashcan icon/prompt to delete the log of the task name, but it doesn't tell me the path of the log. How do I view them? No idea where they would be; /logs on the flash drive shows none. Edit: OK, they can be found under /tmp/user.scripts/tmpScripts/ ; the logfile is log.txt (see also the sketch after this list).
  5. This happened to me after stopping the array, changing the DNS, and restarting the array. ty sturmstar
  6. Just started using VMs with Unraid. I made a template Debian VM and copied it to a new folder to set up a new one, but I get confronted with the UEFI shell on boot. The solution for me:
  7. With that approach I end up in the UEFI shell at power-on for a copy of a Debian vdisk1.img. Removing the Unraid share makes no difference (v6.8.3). Edit: it does seem to be that, because when I run the following in the presented UEFI shell for my copied Debian image, it will boot, but it's not persistent...
     fs0:
     cd efi/debian
     grubx64.efi
     The fix: https://wiki.debian.org/GrubEFIReinstall
     # Reinstalling grub-efi on your hard drive
     # Check that the computer booted in EFI mode:
     [ -d /sys/firmware/efi ] && echo "EFI boot on HDD" || echo "Legacy boot on HDD"
     # Should return "EFI boot on HDD".
     # After starting a root shell (if you booted from a live medium, start a chroot shell instead, as explained in https://help.ubuntu.com/community/Grub2/Installing#via_ChRoot ), check that your EFI system partition (most probably /dev/sda1) is mounted on /boot/efi. If the /boot/efi directory does not exist, you will need to create it.
     # Find the partition:
     lsblk   # sda1 => vda1 here
     mount /dev/vda1 /boot/efi
     # Reinstall the grub-efi package:
     apt-get install --reinstall grub-efi
     # Put the Debian bootloader in /boot/efi and create an appropriate entry in the computer's NVRAM:
     grub-install
     # Recreate a grub config file based on your disk partitioning schema:
     update-grub
     # Afterwards, check 1: the bootloader exists at /boot/efi/EFI/debian/grubx64.efi
     file /boot/efi/EFI/debian/grubx64.efi
     # /boot/efi/EFI/debian/grubx64.efi: PE32+ executable (EFI application) x86-64 (stripped to external PDB), for MS Windows
     # Check 2: the NVRAM entry was properly created.
     efibootmgr --verbose | grep debian
     # You can now reboot, and Grub should greet you. (A related sketch follows this list.)
  8. I wonder how Synology is able to wake up when receiving things like an SMB request; I can only think of basic packet detection done by some BMC interface. For my situation I assigned an old RPi to send magic packets to the Unraid box whenever known clients have ping status 0, as the operating times vary. At the moment I have this cron job for testing (a cron usage sketch follows this list):
     #!/bin/bash
     # 20200607
     # sudo crontab -e
     # sudo apt-get install etherwake
     # sudo apt-get install fping

     # Config
     MACADDR[0]="AA:BB:CC:DD:EE:FF"
     #MACADDR[1]="AA:BB:CC:DD:EE:FF"
     IPHOSTS[0]="x.x.x.x"
     #IPHOSTS[1]="x.x.x.x"
     IPCLIENTS[0]="x.x.x.x"
     IPCLIENTS[1]="x.x.x.x"
     NICID="ethx"

     # Send a magic WOL packet for the given MAC via the given NIC
     wakeupDevice(){
         printf "\nSending Magic WOL packet with: \n\tsudo etherwake %s -i %s\n" "$1" "$2"
         sudo etherwake "$1" -i "$2"
     }

     triglog="/var/log/cron_triglog.log"
     lenh=${#MACADDR[@]}
     lenw=${#IPHOSTS[@]}
     if [ "$lenh" != "$lenw" ]; then
         echo "[$(date +%s)] Error: MACADDR and IPHOSTS arrays do not match in length, check config..." > "$triglog"
         exit 1
     fi

     # Monitor roughly every 10 seconds, six passes per run
     i=0
     while [ $i -lt 6 ]; do
         # NOTE: use absolute paths!
         timestamp=$(date +%s)
         echo "$timestamp"
         hostsonline=true
         requestwol=false

         # Ping hosts; any miss marks the hosts as offline
         for hip in ${IPHOSTS[*]}; do
             if fping -c1 -t300 "$hip" 2>/dev/null 1>/dev/null; then
                 printf "Host %s found\n" "$hip"
             else
                 printf "Host %s not found\n" "$hip"
                 hostsonline=false
             fi
         done

         # Ping clients only if a host is down
         if [ $hostsonline = true ]; then
             printf "All hosts replied\n"
         else
             for cip in ${IPCLIENTS[*]}; do
                 if fping -c1 -t300 "$cip" 2>/dev/null 1>/dev/null; then
                     printf "Client %s found\n" "$cip"
                     requestwol=true
                 else
                     printf "Client %s not found\n" "$cip"
                 fi
             done
         fi

         # Send WOL when a client is up but a host is down
         if [[ $requestwol = true && $hostsonline = false ]]; then
             echo "[$timestamp] WOL: sending Magic WOL packets via ${NICID}" >> "$triglog"
             for (( j=0; j<lenw; j++ )); do   # j, so the outer loop counter is not clobbered
                 wakeupDevice "${MACADDR[$j]}" "$NICID"
             done
         fi

         sleep 9   # ~6 sec of tolerance; consider https://mmonit.com to manage jobs below 1 minute
         i=$(( i + 1 ))
     done
  9. I've been running this plugin since 28 Feb 2020 on an Intel server board, and it only slept well for a few days straight after client activity; beyond that it was quite random/very rare. The SSD cache move runs daily but only takes a few minutes. The parity check consumes the whole of every Monday. Is there something about btrfs-partitioned drives in the pool? I have one drive (a WD2000F9YZ (SE) HDD) with btrfs (only 17.3MB of 2TB in use); the other drives/parity are HGST Ultrastar He10/WD2002FAEX with xfs. It keeps Unraid 6.8.3 awake according to the log, while all disks are spun down according to the Unraid UI (the SSD cache drives are excluded from monitoring; a diskstats sketch for tracking this down follows this list):
  10. If so, is it somehow possible to just capture the tail of the log for the plugin to display, so it stays within browser limits? (Sketch after this list.)
  11. Anyone else experiencing an instant Firefox tab hang/crash after clicking the script log of a script running in the background? This happens when running rsync with quite a lot of stdout. Currently running Unraid 6.8.3 / User Scripts 2020.02.27 / FF 73.0.1
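
For the hidden 8.8GB in item 2, a minimal sketch for locating it from a shell, assuming Unraid's usual per-disk mount at /mnt/disk1 (the commands themselves are generic GNU coreutils):

    # Summarize usage per top-level directory on disk1, hidden ones included,
    # then list everything that is still there (dotfiles often hide leftovers).
    du -h --max-depth=1 /mnt/disk1
    ls -lA /mnt/disk1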
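
For item 3, a sketch of keeping the script output together with the rsync log at the backup destination, using rsync's --log-file option; the paths are placeholders, not the original ones:

    # Write rsync's own log next to the backup and append the remaining
    # script output to a second file at the same destination.
    DEST="/mnt/user/backup"   # placeholder destination
    rsync -a --log-file="$DEST/rsync_$(date +%Y%m%d).log" /mnt/user/data/ "$DEST/data/" \
        >> "$DEST/script_output.log" 2>&1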
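
For item 4, a quick way to find and read those User Scripts logs from a shell; the per-script directory name is a placeholder:

    # List every log, newest first (GNU find), then open one.
    find /tmp/user.scripts/tmpScripts/ -name log.txt -printf '%T@ %p\n' | sort -rn
    less "/tmp/user.scripts/tmpScripts/<script-name>/log.txt"   # <script-name> is a placeholder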
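
Related to item 7: a copied vdisk typically lands in the UEFI shell because the boot entry lives in the VM's NVRAM file, not on the disk, so the fresh copy has no entry pointing at grubx64.efi. Besides the grub-install route above, a hedged alternative is GRUB's removable-media fallback path, which firmware probes even without an NVRAM entry:

    # Install GRUB to the fallback path EFI/BOOT/BOOTX64.EFI on the ESP,
    # assuming the ESP is mounted at /boot/efi as in the post above.
    grub-install --removable --efi-directory=/boot/efi
    update-grub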
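
For the watcher in item 8, a usage sketch: the loop makes six roughly 10 second passes, so one cron entry per minute keeps it effectively continuous. The script path is a placeholder:

    # sudo crontab -e   (hypothetical install path for the script)
    * * * * * /usr/local/bin/wol-watch.sh >/dev/null 2>&1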
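
For item 9, one way to see which device is really getting I/O while the UI shows everything spun down is to diff the kernel's per-disk counters; a minimal sketch, with generic sd* device names assumed:

    # Field 3 of /proc/diskstats is the device, field 6 sectors read,
    # field 10 sectors written; any counter that moves between the two
    # samples identifies the disk keeping the box awake.
    awk '$3 ~ /^sd[a-z]+$/ {print $3, $6, $10}' /proc/diskstats
    sleep 60
    awk '$3 ~ /^sd[a-z]+$/ {print $3, $6, $10}' /proc/diskstats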
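
For item 10, a sketch of the capped-tail idea, reusing the log location from item 4; the line count and script name are placeholders:

    # Show only the last 500 lines so the browser never has to render
    # a multi-megabyte log in one go.
    tail -n 500 "/tmp/user.scripts/tmpScripts/<script-name>/log.txt"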