aim60

Members
  • Posts: 139
  • Days Won: 1

Posts posted by aim60

  1. 20 hours ago, SimonF said:

    is this the type of thing you are looking for?

    virsh change-media seems to work for changing the ISO in a CD-ROM drive that has been pre-defined in the VM.  I was thinking of USB Manager creating virtual USB ports with associated ISO or vdisk files.  They could be hot-plugged into the VM as USB devices with VM Attach/Detach.  The advantage of implementing this in USB Manager is that commonly used ISO or vdisk "ports" could be pre-defined, and switched between VMs as desired.  The advantage of implementing hot-plug in VM Manager is that not everyone has discovered this plugin.
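
    For reference, an invocation of this kind (the domain name "Windows10", target "sda", and ISO path are just examples):

    # list the VM's block devices to find the cdrom target
    virsh domblklist Windows10
    # swap the image in the pre-defined cdrom drive while the VM runs
    virsh change-media Windows10 sda /mnt/user/isos/other.iso --update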

  2. Knowing full well that you may not want to pollute the code of USB Manager ...

     

    I was looking for a way to hot plug a vdisk or iso file into a running VM.  And I thought that USB Manager may already have most of the plumbing to attach them as USB devices.
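
    Roughly what I have in mind, done by hand with virsh (the domain name and file path are only examples): save a device definition like this as usbdisk.xml, then hot-plug it into the running VM:

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/tools.img'/>
      <target dev='sdb' bus='usb'/>
    </disk>

    virsh attach-device Windows10 usbdisk.xml --live
    virsh detach-device Windows10 usbdisk.xml --live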

  3. 1 hour ago, dlandon said:

    What was inconsistent?  The disk label?

     

    Go back to my original post, and carefully compare the (incorrect) screenshot with the (correct) ls -l outputs.

       The SanDisk Cruzer Fit shows 1 partition on /dev/sdp.
          It actually has 4 partitions and is on /dev/sdo.

       The Samsung Flash Drive shows 4 partitions on /dev/sdo.
          It actually has 1 partition and is on /dev/sdp.

     

    I have to admit, what I did is far from typical.  The Samsung drive originally had 4 partitions, with 4 unique labels.  The Cruzer had 1 partition.  I moved both drives to a workstation.  I repartitioned the Samsung drive with 1 partition, and the Cruzer with 4 partitions, using the same 4 labels that had existed on the Samsung drive.


  4. Changing the mount points on UD volumes is a convenient way to change volume labels, and the mount points are stored in unassigned.devices.cfg.  However, subsequent changes to the drive, made outside of UD, can leave the plugin very confused.

     

    [Attachment: UD screenshot]

     

    The SanDisk Cruzer Fit is currently /dev/sdo with 4 partitions, and the Samsung Flash Drive is /dev/sdp with 1 partition.
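
    A quick way to confirm what the kernel currently sees, independent of UD's cached configuration:

    lsblk -o NAME,SIZE,LABEL /dev/sdo /dev/sdp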

     

    From:
    ls -l /dev/disk/by-id
    
    lrwxrwxrwx 1 root root  9 Aug  8 09:09 usb-Samsung_Flash_Drive_0374421030005963-0:0 -> ../../sdp
    lrwxrwxrwx 1 root root 10 Aug  8 09:09 usb-Samsung_Flash_Drive_0374421030005963-0:0-part1 -> ../../sdp1
    
    lrwxrwxrwx 1 root root  9 Aug  8 06:48 usb-SanDisk_Cruzer_Fit_4C530007541116121314-0:0 -> ../../sdo
    lrwxrwxrwx 1 root root 10 Aug  8 06:48 usb-SanDisk_Cruzer_Fit_4C530007541116121314-0:0-part1 -> ../../sdo1
    lrwxrwxrwx 1 root root 10 Aug  8 06:48 usb-SanDisk_Cruzer_Fit_4C530007541116121314-0:0-part2 -> ../../sdo2
    lrwxrwxrwx 1 root root 10 Aug  8 06:48 usb-SanDisk_Cruzer_Fit_4C530007541116121314-0:0-part3 -> ../../sdo3
    lrwxrwxrwx 1 root root 10 Aug  8 06:48 usb-SanDisk_Cruzer_Fit_4C530007541116121314-0:0-part4 -> ../../sdo4
    
    From:
    ls -l /dev/disk/by-label
    
    lrwxrwxrwx 1 root root 10 Aug  8 09:09 SAMSUNG -> ../../sdp1
    
    lrwxrwxrwx 1 root root 10 Aug  8 06:48 UNBOOT -> ../../sdo1
    lrwxrwxrwx 1 root root 10 Aug  8 06:48 UN610 -> ../../sdo3
    lrwxrwxrwx 1 root root 10 Aug  8 06:48 UN611 -> ../../sdo4
    lrwxrwxrwx 1 root root 10 Aug  8 06:48 UN692 -> ../../sdo2
    

     

    Unraid 6.9.2, UD 2022.08.07.

    unassigned.devices.cfg

  5. I am using a VM as my daily workstation.  It has a passed-through GTX 1050 Ti, and uses OVMF with the Q35-5.1 machine type.  In the XML I have bootmenu enable='yes'.

     

    Sometimes I find it useful to enter the TianoCore setup menu.  Since changing to a 4K monitor, I get no video until Windows launches.  I have tried setting the Windows resolution down to 1920x1080 before rebooting.  No luck.

     

    Any ideas?

  6. I tried to create a schedule for an incremental snapshot that doesn't run automatically.  I set the schedule mode to daily or hourly, and unchecked all the days to run.  The GUI did not cooperate.

     

    With the server down, I edited subvolsch.cfg on the flash drive.  I changed the rund line to

        "rund": "",

    It had the desired effect, until I re-edited the schedule in the GUI.

     

    Might this be a supportable option?

     

  7. Restoring a Subvol

     

    Simon’s Snapshots plugin is an awesome solution for managing btrfs snapshots.  I had problems with the plugin after replacing a drive and restoring a subvol, so I set out to find a working scenario.

     

    The goal was to create a subvol, set up incremental snapshots to a backup drive, simulate a failed drive, restore from snapshot, and be able to continue taking incremental snapshots.

     

    The plugin has some limitations in its current form, but I found a working scenario.  I recommend that anyone depending on snapshots for recovery upgrade to at least Unraid 6.10.

     

    Assumed starting conditions - You are currently taking snapshots.

     

    Restore Scenario - You have replaced a failed drive

     

      Send the latest snapshot from the backup drive to the new drive, directly to the same position as the original subvol.  It will not look like a subvol to the plugin until the terminal session below is completed.

     

      Open a terminal session

          Using the mv command, rename the snapshot to the original subvol name.

          btrfs property set -f <path to subvol> ro false

          This will make the subvol read/write.  This command must be executed from Unraid 6.10 or above.

              Earlier versions will make the subvol r/w, but will leave it unusable to the plugin.

     

      Before you can continue taking incremental snapshots, you must manually create a new snapshot and send it (non-incrementally) to the backup drive.
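
      Putting the first scenario together as concrete commands (the subvol name, snapshot name, and mount points are examples from my setup; yours will differ):

      # send the latest snapshot from the backup drive to the new drive
      btrfs send /mnt/disks/backup/.snapshots/appdata_20220801 | btrfs receive /mnt/cache/

      # rename it to the original subvol name
      mv /mnt/cache/appdata_20220801 /mnt/cache/appdata

      # make it read/write (run this on Unraid 6.10 or above)
      btrfs property set -f /mnt/cache/appdata ro false

      # take a fresh read-only snapshot and send it, non-incrementally,
      # to the backup drive before resuming incremental snapshots
      btrfs subvolume snapshot -r /mnt/cache/appdata /mnt/cache/.snapshots/appdata_new
      btrfs send /mnt/cache/.snapshots/appdata_new | btrfs receive /mnt/disks/backup/.snapshots/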

     

     

    Restore Scenario - you are restoring a subvol from a snapshot on the same drive

     

      Delete the corrupted subvol

      Send the snapshot to the original subvol’s position

      Proceed as above starting with the terminal session.

      Note: this is a full data send, and will use twice the amount of disk space.

        Delete the source snapshot and take a new snapshot of the subvol to reclaim the disk space.

      Methods using “btrfs sub snap” are not currently successful.
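
      As concrete commands, the same-drive restore might look like this (the subvol and snapshot names are examples):

      btrfs subvolume delete /mnt/cache/appdata
      # full data copy, even though source and destination are on the same drive
      btrfs send /mnt/cache/.snapshots/appdata_20220801 | btrfs receive /mnt/cache/
      mv /mnt/cache/appdata_20220801 /mnt/cache/appdata
      btrfs property set -f /mnt/cache/appdata ro false

      # reclaim the space: delete the source snapshot and take a fresh one
      btrfs subvolume delete /mnt/cache/.snapshots/appdata_20220801
      btrfs subvolume snapshot -r /mnt/cache/appdata /mnt/cache/.snapshots/appdata_new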


  8. 6 hours ago, SimonF said:

    Please can you send me the output of the following also

     

    btrfs sub show /mnt/cache

    root@Tower7:~# btrfs sub show /mnt/cache
    
            Name:                   <FS_TREE>
            UUID:                   ba110717-bbbb-4c18-ae17-abf768fad644
            Parent UUID:            -
            Received UUID:          -
            Creation time:          2021-03-06 11:44:53 -0500
            Subvolume ID:           5
            Generation:             16952037
            Gen at creation:        0
            Parent ID:              0
            Top level ID:           0
            Flags:                  -
            Snapshot(s):

     

    As the problem became clearer in my mind, I realized that the solution was not a trivial bug fix, but significant additional development.  Your efforts will be greatly appreciated by the community.

  9. With the increasing popularity of btrfs and zfs snapshots and snapshot replication, it is becoming difficult to maintain a directory structure that doesn't run afoul of FCP's extended test for duplicate file names.  This is especially true since FCP ignores the disk setting "Enable user share assignment".

     

    I am proposing an enhancement to FCP: the ability to enter a list of paths that would be excluded from the duplicate file name test.

     

    Just brainstorming (see the sketch below):
      A path beginning with /mnt would be drive-specific.
      A path beginning with /, but not /mnt, would apply to all drives, e.g. "/.snapshots".
      An entry not beginning with / would ignore that folder wherever it appears, e.g. ".snapshots".
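
    A minimal sketch of how the three rules might match, in bash (the function name and exclusion entries are hypothetical; FCP's real configuration format would differ):

    # is_excluded FILE ENTRY... -> success if FILE falls under any entry
    is_excluded() {
        local file="$1"; shift
        local pat
        for pat in "$@"; do
            case "$pat" in
                /mnt/*) [[ "$file" == "$pat"/* ]] && return 0 ;;        # drive-specific
                /*)     [[ "$file" == /mnt/*"$pat"/* ]] && return 0 ;;  # all drives
                *)      [[ "$file" == */"$pat"/* ]] && return 0 ;;      # folder name anywhere
            esac
        done
        return 1
    }

    # skips /mnt/disk1/.snapshots/appdata/f but not /mnt/disk1/appdata/f
    is_excluded /mnt/disk1/.snapshots/appdata/f .snapshots && echo skip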
     

  10. My appdata folder on my original cache drive corresponds to test1.  It was snapshotted and send-received to a backup drive.  The cache drive was replaced, and the snapshot was send-received to a new drive.  It looks to the plugin like test2.

     

    There needs to be a mechanism in the plugin for the new appdata, a writeable snapshot, to be treated exactly like an original (btrfs sub create) subvolume,  i.e. Settings, Schedule, & Create Snapshot, so subsequent snapshots can be created.

     

    My situation is not unique.  Anyone who has to restore a subvolume from a snapshot will be in the same situation.

     

     

  11. @SimonF, not sure if this simplifies your troubleshooting:

     

    Test done on 6.9.2

     

    btrfs sub create /mnt/cache/test1

      The subvol shows up correctly in the plugin.  You can go into settings for the subvol, and create a snapshot of it.

    btrfs sub snap /mnt/cache/test1 /mnt/cache/test2

      The only option in the plugin is to send the snapshot.
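
    Presumably the plugin can tell the two apart from the subvolume metadata; a snapshot carries a Parent UUID while a created subvol does not:

    btrfs subvolume show /mnt/cache/test2
    # "Parent UUID" is set for test2 (a snapshot); test1 shows "-"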

  12. 28 minutes ago, SimonF said:

    Can you provide me the output of

     

    btrfs subvolume list  -puqcgR /mnt/cache

    btrfs subvolume list  -spuqcgR /mnt/cache

    btrfs subvolume list  -s /mnt/cache

    btrfs subvolume list -opuqcgR /mnt/cache

     

    also, do they show if you toggle Show Docker, as I see you are using a docker folder.

    Toggling Show Docker doesn't change anything about the subvols in question.

    btrfsSubListOutput.txt

  13. Thanks for this great plugin.  I have always found managing snapshots manually to be challenging.

     

    I am in the process of reorganizing my manual snapshots to be usable with the plugin, so what is in the attachments is somewhat of a mess.

     

    On my cache drive, the important folders that needed to be backed up were subvolumes.  Some time ago, when I needed to replace the drive, I created snapshots and send-received them to a backup drive.  I then replaced the drive and send-received them back, and made them read/write.

     

    The folders that were restored with send-receive (appdata, domains, isos, nospin) do not show up in the plugin as subvolumes.  In fact they don't show up at all, so I have no way to use the plugin to continue to snapshot them.

     

    Please help.

     

    btrfs sub list /mnt/cache
    
    ID 258 gen 16897694 top level 5 path nospin
    ID 260 gen 16897704 top level 5 path appdata
    ID 270 gen 16897705 top level 5 path domains
    ID 314 gen 16838685 top level 5 path isos
    ID 961 gen 16897705 top level 5 path system
    ID 3568 gen 16750283 top level 5 path .snapshots/nospin/nospin-old
    ID 4657 gen 16750119 top level 5 path .snapshots/isos_20220711142307
    ID 4658 gen 16750119 top level 5 path .snapshots/isos_20220711143028
    ID 4659 gen 16750283 top level 5 path .snapshots/appdata/appdata-old
    ID 4660 gen 16750283 top level 5 path .snapshots/domains/domains-old
    ID 4661 gen 16750283 top level 5 path .snapshots/nospin/nospin-new
    ID 4662 gen 16750283 top level 5 path .snapshots/appdata/appdata-new
    ID 4663 gen 16750283 top level 5 path .snapshots/domains/domains-new

    [Attachment: Snapshot Cache screenshot]

  14. I am in the process of consolidating my array onto larger disks.  The new disks are currently mounted in UD, and I am copying the data over with rsync.  When the copies are complete, I will make them the array disks and build parity.

     

    On one of the shares, I would like to preserve the recycle bin, with its contents intact.  If I rsync the .Recycle.Bin folder within the share, will I retain full recycle bin functionality after the new array is brought online?
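
    For reference, the kind of copy I am running (the source and destination paths are examples); hidden folders such as .Recycle.Bin are included by rsync unless explicitly excluded:

    # -a preserves permissions and times, -H hard links, -X extended attributes
    rsync -avHX /mnt/disk1/Media/ /mnt/disks/newdisk/Media/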
     

  15. I have a drive in UD formatted NTFS and set to automount and share.  During the last boot the drive was disconnected.  After hot plugging the drive, it mounts successfully, and I can browse the files from the GUI.  However, I can't get it to share.

     

    I have tried combinations of replugging the drive, unmounting/remounting, share yes/no, and refreshing the UD disks and configuration.

     

    Any suggestions other than rebooting?

     

    UD 2022.06.19 and Unraid 6.9.2

     

    Thanks

  16. I ran across an article whose author claimed that on recent versions of QEMU, discard=unmap is functional on virtio disks.  So I did some testing.

     

    I have not run all permutations, but have tested:

      Unraid 6.9.2

      UEFI and Seabios VMs

      Q35-5.1 and i440fx-5.1 VMs

      Windows 10 and Windows 11 with the latest virtio drivers, virtio-win-0.1.196.iso

      Ubuntu 21.04

      raw and qcow2 vdisks

      vdisks on XFS and BTRFS disks

      vdisks with and without the copy-on-write attribute set

     

    In all cases, the virtio disk was functionally equivalent to a virtio-scsi disk.  On file deletion, Windows unmapped blocks immediately.  Linux unmapped blocks after running fstrim.
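
    For anyone who wants to try it, the relevant piece of the VM's XML looks like this (the vdisk path and target are examples; the key attribute is discard='unmap'):

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' discard='unmap'/>
      <source file='/mnt/user/domains/Windows10/vdisk1.img'/>
      <target dev='vdc' bus='virtio'/>
    </disk>

    In a Linux guest, run fstrim -av (or enable fstrim.timer, where available) after deleting files to release the blocks.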

     

    https://chrisirwin.ca/posts/discard-with-kvm-2020
