Everything posted by dlandon
-
Nothing has changed in UD over the last few releases that should affect the display of the size, free, and used space on remote shares. That being said, there may have been changes in CIFS that affect the connection to the remote share. The problem is that the lost connection, shown by the missing size, free, and used display, is an indication of an underlying problem that will probably affect the operation of the remote share.
-
Having a remote server go to sleep while UD has a share mounted and then expecting things to just pick back up is expecting a lot. CIFS will do the best it can to re-connect, but UD has no control over what CIFS does. Is there a way to mount/unmount the remote shares on a schedule so you can prevent this issue?
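One way to do the scheduled mount/unmount suggested above is a small script driven by cron (e.g. via the User Scripts plugin). This is a minimal sketch, not UD's own code: the share name is hypothetical, and the `rc.unassigned mount`/`umount` command syntax from the Unassigned Devices plugin should be verified against your install.

```shell
#!/bin/bash
# ud-share.sh -- mount/umount a UD remote share on a schedule so the
# connection is cleanly dropped before the remote server sleeps.
# ASSUMPTIONS: the share name is hypothetical, and the rc.unassigned
# mount/umount syntax should be checked against your UD install.

SHARE="//BACKUPSERVER/backups"          # hypothetical remote share
UD_CMD="/usr/local/sbin/rc.unassigned"

ud_share() {                            # $1 = mount | umount
    if [ -n "$DRY_RUN" ]; then
        echo "$UD_CMD $1 $SHARE"        # print instead of executing
    else
        "$UD_CMD" "$1" "$SHARE"
    fi
}

# Example cron entries (times are examples only):
#   55 1 * * *  /boot/config/ud-share.sh umount   # before the 02:00 sleep
#   05 7 * * *  /boot/config/ud-share.sh mount    # after the 07:00 wake
if [ $# -gt 0 ]; then
    ud_share "$1"
fi
```

Unmounting before the remote server sleeps and remounting after it wakes sidesteps CIFS reconnection entirely, since UD never holds a mount across the sleep window.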
-
If you have the zfs plugin installed, I think UD will now let you create single disk zfs file systems on an earlier Unraid version. It will sense zfs being installed and should let you format a zfs disk. Once you create zfs disks, it is my understanding you can join them to make a zpool. UD does not create zpools with multiple disks.
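Since UD does not create multi-disk zpools, combining single-disk zfs file systems would be a manual command-line step. A hedged sketch follows: the pool and device names are hypothetical, and while `zpool add`/`zpool attach` are standard zfs commands, the exact invocation should be checked against your zfs version. The `DRY_RUN` wrapper prints each command instead of running it, so nothing here touches a real disk.

```shell
# DRY_RUN=1 prints each command instead of executing it, so the sketch
# can be read (and tested) without touching real disks.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "$1" || eval "$1"; }

# Stripe a second disk into an existing single-disk pool (no redundancy).
# "Testing" and /dev/sdd1 are hypothetical names:
run "zpool add Testing /dev/sdd1"

# ...or mirror the new disk onto the existing one instead:
run "zpool attach Testing /dev/sdc1 /dev/sdd1"
```

`zpool add` grows capacity by striping; `zpool attach` trades capacity for redundancy by mirroring. Either way the pool was not created by UD, so manage it with the zfs tools from then on.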
-
This is not how UD will create zfs disks. They will be created after the disk is partitioned. This is what lsblk shows:

root@BackupServer:/mnt/user/unraid/unassigned.devices/unassigned.devices.emhttp# lsblk -f | grep sdc
sdc
└─sdc1  zfs_member  5000  Testing_fmt  5758782561912062602

This is what fdisk shows:

fdisk -l | grep sdc
Disk /dev/sdc: 111.79 GiB, 120034123776 bytes, 234441648 sectors
/dev/sdc1  2048  234441647  234439600  111.8G  83  Linux
-
I've made some changes. This is really a UD issue, not a preclear issue. UD considers any disk without a file system to be a candidate for a preclear. You have found an edge case where your zfs disk(s) are not being recognized by Linux and therefore show a blank 'FS'. This is how it will look in the next release when a disk is passed through and the file system is not recognized: I'm working on UD changes for Unraid 6.12, which includes zfs. Notice that the "Dev 1" disk is a zfs file system and is recognized. Also note that "Dev 3" is passed through and there is no recognized file system; UD Preclear is installed, but the icon to preclear is not shown. With Unraid 6.12, zfs file systems are created after the disk is partitioned, so there won't be a partition 9 created. This is how UD will create zfs disks, so the partitioning UD uses will be compatible with array disks and the disks can be introduced into the array without reformatting.
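The partition-then-format order described above can be sketched from the command line. This is illustrative only, not UD's actual code: the device name is hypothetical, and the `DRY_RUN` wrapper prints each command rather than executing it.

```shell
DRY_RUN=${DRY_RUN:-1}                  # print commands rather than run them
run() { [ "$DRY_RUN" = 1 ] && echo "$1" || eval "$1"; }

DISK=/dev/sdc                          # hypothetical target disk

# 1. Partition the disk the way array-style disks are partitioned: one
#    Linux (type 83) partition starting at sector 2048, matching the
#    fdisk output shown earlier.
run "echo '2048,,83' | sfdisk $DISK"

# 2. Create the pool on the partition, not on the whole disk, so zfs
#    keeps that partitioning instead of writing its own layout with the
#    extra partition 9.
run "zpool create Testing_fmt ${DISK}1"
```

Pointing `zpool create` at the partition rather than the raw disk is what avoids the partition 9 that whole-disk zfs creation would add.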
-
Is there any way I can mark certain disks as not eligible for the preclear option? Attached screenshot to show the issue.

What shows as the file system (FS) on that disk? Once a file system shows on the disk, the preclear option goes away. Preclear does not respect the destructive mode; all preclear looks for is disks that do not have a file system.
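The eligibility rule described above can be sketched as a check on `lsblk` output. The function name is mine for illustration, not UD's actual code:

```shell
# A device is a preclear candidate only when lsblk reports no file
# system type on the disk or any of its partitions.
is_preclear_candidate() {
    local fstypes="$1"    # e.g. output of: lsblk -no FSTYPE /dev/sdX
    # Any non-blank FSTYPE (xfs, btrfs, zfs_member, crypto_LUKS, ...)
    # counts as "has a file system", which hides the preclear icon.
    [ -z "$(printf '%s' "$fstypes" | tr -d '[:space:]')" ]
}
```

For example, `is_preclear_candidate "$(lsblk -no FSTYPE /dev/sdc)"` succeeds only when the result is blank; there is no per-disk opt-out flag, matching the behavior described above.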
-
Define an "isos" share: UD does not create user shares, and the browser built into Unraid limits the scope of browsing for security reasons. In later versions of Unraid, an 'isos' share is automatically created, so it makes sense to use that. Why create a UD location when there is already an "isos" share? Why not upgrade Unraid to the latest version?
-
I am able to reproduce your issue. I've been working on implementing zfs disk handling in UD on Unraid 6.12, along with many updates to be compatible with php 8.1. I released a version that was a partial zfs implementation, and because of the way zfs disks are mounted, UD was not detecting a passed-through disk properly because UD thought it had mounted the disk. A fix will be in the next release of UD. I'm not ready to release it yet, though, because I've made a lot of changes for zfs and I don't want to release it until I've done more testing.

The early release of UD was to prepare for a 6.12 beta release. There were several php 8.1 changes that kept UD from running at all on 6.12, and the syslog ended up getting clogged with php warnings from UD. I wanted an updated UD that would at least run on 6.12 and not clog the syslog with php warnings. It's really not causing a problem with the zfs disks; it's just a display issue.
-
Inside a UD device script, either $DEVICE or $LUKS (depending on whether the disk is encrypted) works fine because UD is maintaining the correct values. If you do UD script work outside UD (for example, I have a User Script that runs at night doing some backup work), spin down the disk by using the alias, because the alias is fixed and never changes. If you get into the habit of using an alias, you won't have to be concerned about which device variable to use to spin down the disk.
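A nightly User Script along those lines might look like the sketch below. The alias and paths are hypothetical, and the `rc.unassigned spindown` syntax is an assumption on my part; verify it against your Unassigned Devices install before relying on it. The `DRY_RUN` wrapper prints each command instead of executing it.

```shell
#!/bin/bash
# Nightly User Script sketch: back up to a UD disk addressed by its
# alias, then spin it down.
# ASSUMPTIONS: the alias and paths are hypothetical, and the
# "rc.unassigned spindown" syntax should be verified against your
# Unassigned Devices install.

DRY_RUN=${DRY_RUN:-1}                 # print commands instead of executing
run() { [ "$DRY_RUN" = 1 ] && echo "$1" || eval "$1"; }

ALIAS="backup_disk"                   # set once in UD; never changes,
                                      # unlike sdX device names

run "rsync -a /mnt/user/important/ /mnt/disks/$ALIAS/"
run "/usr/local/sbin/rc.unassigned spindown $ALIAS"
```

Because the alias is stable across reboots, the script never needs to know which sdX name, $DEVICE, or $LUKS value the disk currently has.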