Everything posted by aim60

  1. I wouldn't expect NerdTools to include every package that the community wants to run. What is the effect of using NerdTools for some packages and manually adding others to /boot/extra?
  2. Even after upgrading to 2022.10.25a and rebooting, smbd_audit messages are appearing in the syslog. Running Unraid 6.9.2.
  3. Can we assume that all of the packages that were in NerdPack will be migrated to NerdTools?
  4. I've experimented with echo '1' > /sys/block/sdX/device/delete and it gave me a warm and fuzzy before powering off a device. But sometimes I wanted to remount the device without unplugging it first. However, it had disappeared completely. Even doing a UD Refresh Disks wouldn't bring it back. Research led me to echo 1 > /sys/class/scsi_device/<scsi bus>/device/rescan, which was successful. But there was no way to tell which bus the disk was on unless you noted it before the device delete. If you implement the above, please include a way to bring back the disk.
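The delete/rescan pair above can be sketched with a small helper that records the SCSI address before the delete, so the device can be brought back later. The helper name and dry-run usage are mine, and the sysfs layout is assumed to be the standard one:

```shell
#!/bin/sh
# Sketch only: scsi_addr_of resolves a block device's SCSI H:C:T:L address
# from sysfs, so it can be noted before "device/delete" and reused for the
# rescan. The optional second argument overrides the sysfs root (handy for
# testing); the function name is hypothetical, not part of any plugin.
scsi_addr_of() {
  sysroot=${2:-/sys}
  # /sys/block/sdX/device is a symlink whose target ends in the address,
  # e.g. .../target6:0:0/6:0:0:0
  basename "$(readlink -f "$sysroot/block/$1/device")"
}

# Intended use (as root, against a real device):
#   addr=$(scsi_addr_of sdo)                               # note the address first
#   echo 1 > /sys/block/sdo/device/delete                  # detach the disk
#   echo 1 > "/sys/class/scsi_device/$addr/device/rescan"  # bring it back
```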
  5. In the image below, both the successful pihole docker and the binhex-plex docker that opens about:blank#blocked are running on the same custom network, and have fixed IP addresses.
  6. I ran into the same problem. On 6.11-rc4, binhex-plex opens about:blank#blocked in Chrome and Edge. Firefox does nothing; it doesn't even bring up a new tab.
  7. <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
       <source file='/mnt/cache/domains/Win10_Ent_Left_Q35/vdisk1.img'/>
       <target dev='hdc' bus='virtio'/>
       <boot order='1'/>
     </disk>
  8. Thanks for the screenshot. I never noticed the USB Manager Hotplug section on the VMs page. Since discovering that with USB Manager I can use both of my licensed flash drives (with the same vendor id) at the same time, I haven't gone back to Libvirt hotplug. I found this: it looks like hotplugging a cdrom on a USB bus was never implemented, though there was a commit 3 weeks ago. Since cdroms are read-only, there might not be a conflict with UD. And UD doesn't currently mount vdisks, although it would be a useful feature. Think of a USB-mounted vdisk as a virtual external hard drive: it could be plugged into any VM, and with a UD enhancement, could be mounted on the host. Not every VM has network access to the host.
  9. virt-manager is really convenient for this. And for maintaining serial numbers on vdisks when running Unraid in a VM.
  10. virsh change-media seems to work for changing the iso in a cdrom drive that has been pre-defined in the VM. I was thinking of USB Manager creating virtual usb ports with associated iso or vdisk files. They could be hot-plugged into the VM as usb devices with VM Attach/Detach. The advantage of implementing this in USB Manager is that commonly used iso or vdisk "ports" could be pre-defined, and switched between VMs as desired. The advantage of implementing hot-plug in VM Manager is that not everyone has discovered this plugin.
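For the change-media path mentioned above, a minimal sketch (the VM name "Win10", target dev "hdc", and ISO path are placeholders; the wrapper only prints the command so it can be inspected before running it for real):

```shell
#!/bin/sh
# Dry-run sketch of swapping the ISO in a cdrom drive that was predefined
# in the VM. "Win10", "hdc", and the ISO path are placeholder names, not
# real defaults; drop the echo in change_iso to actually invoke virsh.
change_iso() {
  vm=$1; target=$2; iso=$3
  echo virsh change-media "$vm" "$target" "$iso" --update --live
}

# Example: point the VM's cdrom at a new ISO without rebooting it.
change_iso Win10 hdc /mnt/user/isos/virtio-win.iso
```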
  11. Knowing full well that you may not want to pollute the code of USB Manager ... I was looking for a way to hot plug a vdisk or iso file into a running VM. And I thought that USB Manager may already have most of the plumbing to attach them as USB devices.
  12. Go back to my original post and carefully compare the (incorrect) screenshot with the (correct) ls -l outputs. The SanDisk Cruzer Fit shows 1 partition on /dev/sdp; it actually has 4 partitions and is on /dev/sdo. The Samsung flash shows 4 partitions on /dev/sdo; it actually has 1 partition and is on /dev/sdp. I have to admit, what I did is far from typical. The Samsung drive originally had 4 partitions, with 4 unique labels, and the Cruzer had 1 partition. I moved both drives to a workstation, repartitioned the Samsung drive with 1 partition, and the Cruzer with 4 partitions - with the same 4 labels that existed on the other drive.
  13. I have a very large number of archived notifications on my flash drive. I don't want to lose them, but they greatly slow down manual flash backups. I'd appreciate a mod to the viewer that allows you to select an alternate directory. Thanks
  14. The partition structure of both flash drives was changed while they were in a workstation. After bringing them back to the server, the GUI information was inconsistent with the current state of the drives. Rebooting the server resolved the issue. Thanks.
  15. Changing the mount points on UD volumes is a convenient way to change volume labels. As a result, the mount points are stored in unassigned.devices.cfg. However, subsequent changes to the drive, not done in UD, can leave the plugin very confused. The SanDisk Cruzer Fit is currently /dev/sdo with 4 partitions, and the Samsung Flash Drive is /dev/sdp with 1 partition.
      From: ls -l /dev/disk/by-id
      lrwxrwxrwx 1 root root  9 Aug 8 09:09 usb-Samsung_Flash_Drive_0374421030005963-0:0 -> ../../sdp
      lrwxrwxrwx 1 root root 10 Aug 8 09:09 usb-Samsung_Flash_Drive_0374421030005963-0:0-part1 -> ../../sdp1
      lrwxrwxrwx 1 root root  9 Aug 8 06:48 usb-SanDisk_Cruzer_Fit_4C530007541116121314-0:0 -> ../../sdo
      lrwxrwxrwx 1 root root 10 Aug 8 06:48 usb-SanDisk_Cruzer_Fit_4C530007541116121314-0:0-part1 -> ../../sdo1
      lrwxrwxrwx 1 root root 10 Aug 8 06:48 usb-SanDisk_Cruzer_Fit_4C530007541116121314-0:0-part2 -> ../../sdo2
      lrwxrwxrwx 1 root root 10 Aug 8 06:48 usb-SanDisk_Cruzer_Fit_4C530007541116121314-0:0-part3 -> ../../sdo3
      lrwxrwxrwx 1 root root 10 Aug 8 06:48 usb-SanDisk_Cruzer_Fit_4C530007541116121314-0:0-part4 -> ../../sdo4
      From: ls -l /dev/disk/by-label
      lrwxrwxrwx 1 root root 10 Aug 8 09:09 SAMSUNG -> ../../sdp1
      lrwxrwxrwx 1 root root 10 Aug 8 06:48 UNBOOT -> ../../sdo1
      lrwxrwxrwx 1 root root 10 Aug 8 06:48 UN610 -> ../../sdo3
      lrwxrwxrwx 1 root root 10 Aug 8 06:48 UN611 -> ../../sdo4
      lrwxrwxrwx 1 root root 10 Aug 8 06:48 UN692 -> ../../sdo2
      Unraid 6.9.2, UD 2022.08.07. unassigned.devices.cfg
  16. I am using a VM as my daily workstation. It has a passed-through GTX 1050 Ti, and is OVMF and Q35-5.1. In the XML I have bootmenu enable='yes'. Sometimes I find it useful to enter the TianoCore setup menu. Since changing to a 4K monitor, I get no video until Windows launches. I have tried setting the Windows resolution down to 1920x1080 before rebooting. No luck. Any ideas?
  17. That would be great. Thanks
  18. I tried to create a schedule for an incremental snapshot that doesn't run automatically. I set the schedule mode to daily or hourly and unchecked all the days to run, but the GUI did not cooperate. With the server down, I edited subvolsch.cfg on the flash drive, changing the rund line to "rund": "", which had the desired effect until I re-edited the schedule in the GUI. Might this be a supportable option?
  19. Restoring a Subvol
      Simon's Snapshots plugin is an awesome solution for managing btrfs snapshots. I had problems with the plugin after replacing a drive and restoring a subvol, so I set out to find a working scenario. The goal was to create a subvol, set up incremental snapshots to a backup drive, simulate a failed drive, restore from snapshot, and be able to continue the process of taking incremental snapshots. The plugin has some limitations in its current form, but I found a working scenario. I recommend that anyone depending on snapshots for recovery upgrade to at least Unraid 6.10.
      Assumed starting conditions - you are currently taking snapshots.
      Restore scenario - you have replaced a failed drive:
      - Send the latest snapshot from the backup drive to the new drive, directly to the same position as the original subvol. It will not look like a subvol to the plugin until the terminal session below is completed.
      - Open a terminal session.
      - Using the mv command, rename the snapshot to the original subvol name.
      - Run: btrfs property set -f <path to subvol> ro false
        This will make the subvol read/write. This command must be executed from Unraid 6.10 or above. Earlier versions will make the subvol r/w, but will leave it unusable to the plugin.
      - Before you can continue taking incremental snapshots, you must manually create a new snapshot and send it (non-incrementally) to the backup drive.
      Restore scenario - you are restoring a subvol from a snapshot on the same drive:
      - Delete the corrupted subvol.
      - Send the snapshot to the original subvol's position.
      - Proceed as above, starting with the terminal session.
      - Note - this is a full data send, and will use twice the amount of disk space. Delete the source snapshot and take a new snapshot of the subvol to reclaim the disk space.
      Methods using "btrfs sub snap" are not currently successful.
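The replaced-drive restore boils down to three commands. Here is a dry-run sketch: BACKUP_SNAP, POOL, and SUBVOL are placeholder paths I made up for illustration, and run() only prints each step rather than executing it:

```shell
#!/bin/sh
# Dry-run sketch of the restore steps. The three paths below are
# placeholders for your own layout, not real defaults. run() only echoes,
# so nothing destructive happens; change echo to eval to run for real.
# The ro=false flip needs Unraid 6.10+ to stay usable by the plugin.
BACKUP_SNAP=/mnt/disks/backup/snaps/appdata_latest
POOL=/mnt/cache
SUBVOL=$POOL/appdata

run() { echo "$*"; }

# 1. Send the latest snapshot to the same position as the original subvol.
run "btrfs send $BACKUP_SNAP | btrfs receive $POOL"
# 2. Rename the received snapshot to the original subvol name.
run "mv $POOL/$(basename "$BACKUP_SNAP") $SUBVOL"
# 3. Make the restored subvol read/write.
run "btrfs property set -f $SUBVOL ro false"
```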
  20. Simon, I sent you a PM.
  21. root@Tower7:~# btrfs sub show /mnt/cache
          Name:            <FS_TREE>
          UUID:            ba110717-bbbb-4c18-ae17-abf768fad644
          Parent UUID:     -
          Received UUID:   -
          Creation time:   2021-03-06 11:44:53 -0500
          Subvolume ID:    5
          Generation:      16952037
          Gen at creation: 0
          Parent ID:       0
          Top level ID:    0
          Flags:           -
          Snapshot(s):
      As the problem clarified in my mind, I realized that the solution was not a trivial bug fix, but significant additional development. Your efforts will be greatly appreciated by the community.
  22. With the increasing popularity of btrfs and zfs snapshots and snapshot replication, it is becoming difficult to maintain a directory structure that doesn't run afoul of FCP's extended test for duplicate file names. This is especially true since FCP ignores the disk setting "Enable user share assignment". I am proposing an enhancement to FCP: the capability to enter a list of paths to exclude from the duplicate file name test. Just brainstorming:
      - A path beginning with /mnt would be drive specific.
      - A path beginning with /, but not /mnt, would apply to all drives, e.g. "/.snapshots".
      - An entry not beginning with / would ignore the folder contents anywhere, e.g. ".snapshots".
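The three proposed rules could be sketched as a small matcher. This is my reading of the brainstorm, not anything FCP implements; the function name and semantics are assumptions:

```shell
#!/bin/sh
# Sketch of the three proposed exclusion rules. "excluded ENTRY PATH"
# succeeds if PATH falls under exclusion ENTRY. This is an illustration
# of the proposal, not FCP code; edge cases are handled loosely.
excluded() {
  entry=$1 path=$2
  case $entry in
    /mnt/*)  # starts with /mnt: drive-specific, exact path prefix
      case $path in "$entry"|"$entry"/*) return 0;; esac ;;
    /*)      # starts with / but not /mnt: same path on every drive
      case $path in /mnt/*"$entry"|/mnt/*"$entry"/*) return 0;; esac ;;
    *)       # bare name: ignore that folder's contents anywhere
      case /$path/ in */"$entry"/*) return 0;; esac ;;
  esac
  return 1
}
```

The bare-name case wraps the path in slashes so the entry only matches a whole path component, not a substring of one.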
  23. My appdata folder on my original cache drive corresponds to test1. It was snapshotted and send-received to a backup drive. The cache drive was replaced, and the snapshot was send-received to a new drive. It looks to the plugin like test2. There needs to be a mechanism in the plugin for the new appdata, a writeable snapshot, to be treated exactly like an original (btrfs sub create) subvolume, i.e. Settings, Schedule, & Create Snapshot, so subsequent snapshots can be created. My situation is not unique. Anyone who has to restore a subvolume from a snapshot will be in the same situation.
  24. @SimonF, not sure if this simplifies your troubleshooting. Test done on 6.9.2:
      btrfs sub create /mnt/cache/test1
      The subvol shows up correctly in the plugin. You can go into settings for the subvol, and create a snapshot of it.
      btrfs sub snap /mnt/cache/test1 /mnt/cache/test2
      The only option in the plugin is to send the snapshot.