Posts posted by cholzer

  1. 1 hour ago, JorgeB said:

     

    okay, I eventually found the option. 😅

     

    (screenshot: the convert option)

    So I:

    1. convert the cache pool to RAID1
    2. shut down Unraid
    3. replace ONE of the 1TB cache pool drives with a new 4TB drive
    4. start Unraid and add the newly installed 4TB drive to the pool, replacing the now-missing 1TB drive
    5. let it resync the RAID1 cache pool
    6. shut down Unraid
    7. replace the SECOND 1TB cache pool drive with the second new 4TB drive
    8. start Unraid and add the newly installed 4TB drive to the pool, replacing the now-missing second 1TB drive
    9. let it resync the RAID1 cache pool

    Question:
    Will I then end up with a 1TB cache pool, or does it scale the size to the full 4TB automatically?

  2. Hi!

     

    Currently I run 6.11.5 with a cache pool that consists of 2x Samsung_SSD_970_EVO_1TB, so I have a total of 2TB in the cache pool.
    I am running Docker, VMs, and one SMB share on the cache pool.

    Now I want to replace these 2x 1TB SSDs with 2x 4TB SSDs (and this time configure the pool to be redundant), but I could not find a step-by-step guide that explains the process.

    (Note: I can physically add the two new 4TB drives to the system, so I could spin up a second pool if that is required for the process.)

    Ideally I want this to be a hardware swap only, where I just copy the data from the old to the new cache pool, and Docker, VMs, and shares just work afterwards. 😅

    Any help would be highly appreciated!

  3. On 4/7/2018 at 11:43 AM, feraay said:

    the script is only needed if you don't want the server to wake up every day.

    I do want the server to wake up every day; sadly, the wake-up timer in my BIOS does not seem to work, as the system stays asleep.

    Gonna try this script now. Thanks!
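
    If the script doesn't pan out either, a fallback I might try is setting the wake timer from the OS instead of the BIOS - a rough sketch using util-linux's rtcwake (assuming the board supports S3 sleep and the RTC alarm):

    # suspend to RAM now and program the RTC to wake tomorrow at 07:00
    rtcwake -m mem -t "$(date -d 'tomorrow 07:00' +%s)"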

  4. 6 hours ago, ljm42 said:

     

    Please read the first two posts in this thread very carefully, particularly the part titled "Complex networks"

     


    Thank you for your reply; my error was that I misread this section.
     

    Quote

     

    With "Use NAT" = Yes and "Host access to custom networks" = enabled (static route optional)

    server and dockers on bridge/host - accessible!

    VMs and other systems on LAN - NOT accessible

    dockers with custom IP - NOT accessible

    (avoid this config)

     


    After I added a static route on my router it worked.

    I guess the aspect that confused me was that wg-easy on the RPi did not require this, but the networking on Unraid is certainly different.

    QUESTION:
    Why can't this route be added directly inside Unraid? :)
    Like in the "Routing Table" section.
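
    In case it helps anyone else: the static route just points the WireGuard tunnel subnet back at the Unraid box. On a Linux-based router it would look roughly like this (10.253.0.0/24 was the default tunnel network on my setup - adjust to yours):

    # on the LAN router: send return traffic for the WG tunnel via Unraid
    ip route add 10.253.0.0/24 via 192.168.1.5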

    I followed this guide to achieve "Remote access to LAN" on 6.11.5.

    My problem is that:

    - I can access the Unraid GUI on 192.168.1.5

    - I can access Plex on 192.168.1.5:32400
    - I can NOT access my Windows VM (192.168.1.10) running on Unraid via RDP
    - I can NOT access any other device on my LAN (e.g. 192.168.1.1)

    It looks like my WG connection terminates at 192.168.1.5 (Unraid) and can't reach any other IP on the network - feels like a routing issue.

    Ideas? :)

     

    (screenshot: routes.jpg)

    Thanks for your reply! :)

    Unraid:

    • 192.168.1.5 (my main network - gateway 192.168.1.1)
    • 192.168.2.5 (this interface is used so that my brother also has access to the SMB shares from his network, that is the only thing this interface is used for)

    Ubuntu VM running on Unraid: (br0 - static IP) 192.168.1.26

    Windows VM running on Unraid: (br0 - static IP) 192.168.1.10

    docker image plex: (host) 192.168.1.5

    docker image n8n: (br0 - static IP) 192.168.1.21

    docker image code-server: (br0 - static IP) 192.168.1.9

    on my Router: port 51820 (UDP) forwarded to 192.168.1.5

    Network I am connecting from: 192.168.123.0/24

     

    -------------------

    The following happens with "remote tunneled access" as well as "remote access to LAN"

    -------------------


    Through the Unraid Wireguard tunnel I can:

    • access the WebGUI of Unraid 192.168.1.5
    • access the WebGUI of Plex on 192.168.1.5:32400
    • access the WebGUI of docker image code-server on 192.168.1.9:8443
    • access the SMB shares on unraid 192.168.1.5

    Through the Wireguard tunnel I can NOT:

    • ping or access the WebGUI of any other device on my network (e.g. 192.168.1.1, my router)
    • access the WebGUI of docker image n8n on IP 192.168.1.21:5678
    • ping or SSH (PuTTY) into the Ubuntu VM
    • ping or Remote Desktop into the Windows VM

    I have the exact same issues when I use the "wireguard-easy" docker on Unraid - however, using wireguard-easy on my RPi works just great.

    (screenshot: urvpn.jpg)

     

    Below are my routing tables.
    I have no idea where these interfaces came from or what they are used for:

    • br-b14fa2d6b9b6
    • shim-br0
    • virbr0

    (screenshot: routing tables)

  7. Goals:

    1. connect to my home network via WireGuard
    2. have access to all shares / web GUIs of devices on my home network
    3. do NOT use the internet connection of my home network (my remote device uses its own local internet connection)
    4. be able to SSH into my VMs

     

    Currently I run wireguard-easy on an RPi, and it achieves all of the above goals. But I thought I could use the WireGuard implementation in Unraid instead and replace the RPi that way.

     

    So to achieve the above goals I selected "Peer type of access: remote access to LAN" in the Unraid WireGuard config.
    That way I achieved goals 1, 2, and 3.

    HOWEVER, I cannot SSH into my VMs through that tunnel; the connection can't be established.

    I also run into the same issue when I use the "Wireguard-easy" docker in Unraid. So I guess there is some networking issue inside Unraid preventing me from using SSH through the tunnel?
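
    For anyone willing to help debug: I can capture on both sides of Unraid to see where the SSH packets die - something like this from the Unraid console (wg0 being the WireGuard tunnel interface, if I read the docs right):

    # does the SSH traffic arrive from the tunnel?
    tcpdump -ni wg0 port 22
    # and does it make it onto the bridge the VMs sit on?
    tcpdump -ni br0 port 22 and host 192.168.1.26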

    I have wireguard-easy on my RPi and have been very happy with it, so I thought I should try the docker version, because then I could get rid of the RPi.

    Using this docker I can connect to network shares and web GUIs on my home network.
    I have set it to bridge mode and assigned a static IP.

    However, using this docker I cannot SSH into a virtual machine through the WireGuard tunnel - PuTTY just throws a "Network error: Software caused connection abort".

    Using wireguard-easy on my RPi I do not have that problem; I can SSH into my VMs just fine.

  9. I just ran into this issue as well.

    I recently switched from running Plex in a VM to the Plex Official Docker on Unraid.

    I use Plex DVR, which records into a share on Unraid.


    From my Windows PC I can add/rename/delete files in that share just fine (drwxrwxrwx).


     

    The problem is that I cannot delete files/folders created by Plex DVR (drwxr-xr-x).


     

    The folders/files newly created by the Plex Docker are owned by "nobody" with drwxr-xr-x (the older folders, created by my old VM running Plex, are drwxrwxrwx).
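
    As a workaround I can reset the permissions the way Unraid's "New Permissions" tool does - a sketch, assuming the DVR recordings live under /mnt/user/dvr (hypothetical path, use your share's):

    # give the users group (and everyone else) write access to what Plex DVR created
    chown -R nobody:users /mnt/user/dvr
    chmod -R u=rwX,g=rwX,o=rwX /mnt/user/dvr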

  10. 4 minutes ago, Kilrah said:

    You might already have a folder called Ubuntu with an existing vdisk in it. If that belongs to another VM you might want to change the name of this new one. 

    You are right!
    Deleted the folder, and now the vDisk size setting shows up again.
    There should be a notification on the VM creation page explaining what is going on.

  11. Hey guys!

    I am tearing my hair out over this.

    I have created an Unraid Mount Tag for a share in an Alpine Linux VM.

     

    This WORKS when I mount it via the terminal using

    mount -t 9p -o trans=virtio,version=9p2000.L,posixacl,cache=loose onedrive /mnt/unraid/onedrive

     

    however, this line in /etc/fstab does not mount it:

     

    onedrive /mnt/unraid/onedrive  9p  trans=virtio,version=9p2000.L,_netdev,rw 0 0

     

    Does anyone have an idea what I am doing wrong? :)
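
    One difference I notice: the fstab entry drops the posixacl and cache=loose options that the working manual mount uses. So this variant might be worth a try (just my guess, mirroring the manual mount's options):

    onedrive /mnt/unraid/onedrive  9p  trans=virtio,version=9p2000.L,posixacl,cache=loose,_netdev,rw 0 0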

    I have stitched together this script because I could not find any solution for my use case.

    Thanks to @mgutt, whose backup script I took some parts from.

     

    Goal:

    1. connect a hot-swappable disk to Unraid
    2. automatically mount the disk (can be disabled)
    3. automatically perform (multiple) mirror backup jobs, which simply copy all changes from the SourceDir to the DestinationDir (including deletions)
    4. automatically unmount the disk once the backup(s) are done (can be disabled)
    5. safely remove the disk once everything is done, thanks to notifications that are sent for each step of the process

     

    Required:

    1. some kind of hot-swappable storage media
    2. Unassigned Devices Plugin

     

    Setup:

    1. connect your backup storage media
    2. make sure it has a mount point
    3. click on the little "gear" icon
    4. paste the script below into the script window
    5. adjust the Backup Jobs configuration section
    6. save
    7. done
    8. click on mount to test if the script is working correctly
    9. if everything is working click on unmount

    if you want to automate the entire process:

    1. click on the little "gear" icon again
    2. enable "automount"
    3. set "unmount_after_backup" to 1

     

    Hope someone finds this helpful! :)

     

    #!/bin/bash
    PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
    
    ########
    # SIMPLE MIRROR BACKUP / V01 -  2022-07-14 / by cholzer 
    # This script is meant to be used in the script area of a disk inside the Unassigned Devices plugin
    ########
    # I stitched together this backup script as I could not find a solution for my offline backups, which are done onto hot-swappable disks.
    # With this script you can define multiple backup jobs - you get a notification for each important action.
    #
    # The whole backup process gets super easy if you enable "automount" for the disk in Unassigned Devices and set "unmount_after_backup" in the config section below.
    # Just plug in the disk; the script starts the backup and unmounts the disk once it is done, so that you can safely unplug it.
    # 
    ########
    # Do your modifications below
    ########
    
    # Here you can set up all your backup jobs.
    # The mountpoint of the disk you add this script to inside Unassigned Devices is automatically used as the destination target.
    # You can set a subfolder per backup if you'd like to use one.
    # Use Unraid's new file manager to find the exact source path (available in Unraid 6.10 and later)
    backup_jobs=(
    # sourcePath                     # destinationPath-Subfolder (/ for none)               #jobName
    "/mnt/user/documents"            "/Backup"                                              "Documents"
    "/mnt/user/photos"               "/Backup"                                              "Photos"
    )
    
    # Unmount the backup disk - (Will only be done if the backup was successful)
    unmount_after_backup=0
    
    # Notifications:
    # You can disable notifications if you really want to.
    notifications=1
    
    # rsync options which are used while creating the full and incremental backup
    rsync_options=(
      --human-readable # output numbers in a human-readable format
      --delete # when a file was deleted in the source directory, it will be deleted in the destination directory too
      --exclude="[Tt][Ee][Mm][Pp]/" # exclude dirs with the name "temp" or "Temp" or "TEMP"
      --exclude="[Tt][Mm][Pp]/" # exclude dirs with the name "tmp" or "Tmp" or "TMP"
      --exclude="Cache/" # exclude dirs with the name "Cache"
      --exclude=".Recycle.Bin/" # exclude dirs with the name ".Recycle.Bin"
    )
    
    ####
    # WARNING! DRAGONS BELOW! Do not change anything that comes next unless you know what you are doing!
    ####
    
    ## Available variables:
    # ACTION     : if mounting, ADD; if unmounting, UNMOUNT; if unmounted, REMOVE; if error, ERROR_MOUNT, ERROR_UNMOUNT
    # DEVICE     : partition device, e.g. /dev/sda1
    # SERIAL     : disk serial number
    # LABEL      : partition label
    # LUKS       : if the device is encrypted, this is the partition device, e.g. /dev/sda1
    # FSTYPE     : partition filesystem
    # MOUNTPOINT : where the partition is mounted
    # OWNER      : "udev" if executed by UDEV, otherwise "user"
    # PROG_NAME  : program name of this script
    # LOGFILE    : log file for this script
    
    case $ACTION in
      'ADD' )
        # track failures so the disk is only unmounted after a fully successful run
        backup_failed=0
        if [ "$OWNER" = "udev" ]; then
            # do your hotplug stuff here
            sleep 1
        else
            # do your user initiated stuff here

            # sync the file system to commit all writes to disk
            sync -f "$MOUNTPOINT"
            # notification
            if [ "$notifications" == 1 ]; then
                /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "Unassigned Devices" -d "Device mounted" -i "normal"
            fi

            ######## Let's run the backup job(s)
            # remove the trailing slash from the source and destination path, should there be one
            remove_trailing_slash() { [[ "${1%?}" ]] && [[ "${1: -1}" == "/" ]] && echo "${1%?}" || echo "$1"; }
            # now loop through each individual backup job (the array entries come in groups of three)
            for i in "${!backup_jobs[@]}"; do
                case $(($i % 3)) in
                    0) src_path="${backup_jobs[i]}"; continue ;;
                    1) dst_path="$MOUNTPOINT${backup_jobs[i]}"; continue ;;
                    2) job_name="${backup_jobs[i]}" ;;
                esac

                # normalize the user settings
                src_path=$(remove_trailing_slash "$src_path")
                dst_path=$(remove_trailing_slash "$dst_path")
                echo "Source Path is $src_path" "Destination Path is $dst_path"

                # notification: backup started
                if [ "$notifications" == 1 ]; then
                    /usr/local/emhttp/webGui/scripts/notify -s "$(hostname) Backup Job: $job_name started" -d "Sync started. $(date)"
                fi

                # make sure the log directory exists; if it doesn't, create it
                if [ ! -d "$MOUNTPOINT/rsync-logs/$job_name/" ]; then
                    mkdir -p "$MOUNTPOINT/rsync-logs/$job_name/"
                fi

                # Now run the actual backup job.
                # It creates a mirror: all changes, including deletions, are pushed from the source directory to the destination directory.
                rsync -av --log-file="$MOUNTPOINT/rsync-logs/$job_name/log.$(date '+%Y_%m_%d__%H_%M_%S').log" --progress "${rsync_options[@]}" "$src_path" "$dst_path"
                # capture rsync's exit status right away, before any other command overwrites $?
                rsync_status=$?
                [ $rsync_status -ne 0 ] && backup_failed=1

                # notification: sync complete or error
                if [ "$notifications" == 1 ]; then
                    latestRsyncLog=$(ls -tr "$MOUNTPOINT/rsync-logs/$job_name/" | tail -1)
                    if [ $rsync_status -eq 0 ]; then
                        /usr/local/emhttp/webGui/scripts/notify -s "$(hostname) Backup Job: $job_name completed" -d "Sync okay! $(date)"
                    else
                        /usr/local/emhttp/webGui/scripts/notify -s "$(hostname) Backup Job: $job_name FAILED" -i "alert" -d "Sync ERROR! $(date)" -m "$(tail -5 "$MOUNTPOINT/rsync-logs/$job_name/$latestRsyncLog")"
                    fi
                fi
            done
            sleep 1

        fi

        # unmount the backup disk once the rsync backup finished successfully
        if [ "$unmount_after_backup" == 1 ] && [ "$backup_failed" == 0 ]; then
            /usr/local/sbin/rc.unassigned umount "$DEVICE"
        fi
      ;;
    
      'UNMOUNT' )
        # do your stuff here
    	# since we also get notified once the disk was unmounted, I commented this notification out
        # /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "Unassigned Devices" -d "Device unmounting" -i "normal"
      ;;
    
      'REMOVE' )
        # do your stuff here
    	# notification
    	if [ "$notifications" == 1 ]; then
        /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "Unassigned Devices" -d "Device unmounted" -i "normal"
    	fi
      ;;
    
      'ERROR_MOUNT' )
        # do your stuff here
    	# notification
    	if [ "$notifications" == 1 ]; then
        /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "Unassigned Devices" -d "Error mounting device" -i "alert"
    	fi
      ;;
    
      'ERROR_UNMOUNT' )
        # do your stuff here
    	# notification
    	if [ "$notifications" == 1 ]; then
        /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "Unassigned Devices" -d "Error unmounting device" -i "alert"
    	fi
      ;;
    esac

     

     

  13. A forced umount can lead to Unraid seeing the disk as coming from a "broken array", and then you can only mount it again after rebooting Unraid (don't ask me how I know that 😅).

    You should therefore only let Unassigned Devices do the unmount, not do it manually via 'umount'.
    From my script:

    /usr/local/sbin/rc.unassigned umount '/dev/disk/by-uuid/3C9751517A72CC80'

    Of course you have to put in the UUID of your own disk.

  14. I was looking for the same info, thank you! I just migrated a Debian VM from Proxmox to Unraid and I am amazed at how easy it was!

    One thing I'd like to add is that Proxmox itself is capable of extracting a vma file. :)

    So if you still have access to your Proxmox system, you just have to open the shell and then run

    cd /path/where/you/store/your/vma
    vma extract [name].vma -v /target/folder/to/extract/to


    to extract the raw image. :)

    https://pve.proxmox.com/wiki/VMA
     

  15. On 4/1/2022 at 8:52 AM, cholzer said:

    Is it possible to ssh into unraid and then remotely call UD to unmount or mount a disk?
     

    I did take a look at the scripts in the first post, but these don't seem to hold the answer. 😅

     

    The reason for my question is that, as it turns out, I run into several issues with SMB share creation/destruction when I remotely SSH into Unraid and just mount/unmount the disk.

    These problems do not exist when I use the "mount" button in the UD GUI, so I guess the solution would be to call UD remotely via SSH and let it do the mount/umount. But how?

    Thanks in advance!


    I think I figured it out. :)

    Goal:
    Short version:
    Use SSH to have Unassigned Devices mount/unmount a specific disk and create an SMB/CIFS share for that disk.

    Long version - my use case:

    1. (pre-backup script) a server on my network connects to Unraid via SSH and instructs UD to mount a specific disk, which also creates the share
    2. this server then runs a backup job which has this UD share as its target
    3. (post-backup script) once the backup is done, the server connects to Unraid via SSH and instructs UD to unmount this specific disk, which also removes the share

     

    Step-by-step guide:

    I assume that your disk already has a single partition

    1. Connect the disk to your Unraid system
    2. go to the settings of the disk, enable the 'share' setting
    3. mount the disk, make sure that the share is created and can be accessed
    4. note the disk ID (sd*, e.g. sdg) next to the drive serial in the UD GUI
    5. ssh into the UNRAID server or open the terminal in the webgui
    6. run "ls -ahlp /dev/disk/by-uuid" to list all drives by their UUID (look for the sd* to find your disk's UUID)
    7. now you can use these two commands to mount/unmount this specific disk via SSH or a script on your Unraid machine:
       
    /usr/local/sbin/rc.unassigned mount '/dev/disk/by-uuid/THEUUIDOFYOURDISK'
    /usr/local/sbin/rc.unassigned umount '/dev/disk/by-uuid/THEUUIDOFYOURDISK'


    Hope this helps someone who finds themselves in the same situation I was in. :)
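
    And for completeness, this is roughly how the remote server's pre-backup script can trigger the mount over SSH (hostname and UUID are placeholders, of course):

    # on the backup server: mount the UD disk on Unraid before the backup job runs
    ssh root@unraid-server "/usr/local/sbin/rc.unassigned mount '/dev/disk/by-uuid/THEUUIDOFYOURDISK'"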