
dlandon

Community Developer
  • Posts

    10,395
  • Joined

  • Last visited

  • Days Won

    20

Everything posted by dlandon

  1. You can pause the preclears, stop and restart the array, and then restart the preclears.
  2. Sorry, I understood the drive was in an enclosure. Without knowing your arrangement, I can't say if the enclosure might be an issue.
  3. Your disk has a problem:
     Jan 1 17:08:02 Apollo unassigned.devices: Mounting 'Auto Mount' Devices...
     Jan 1 17:08:02 Apollo unassigned.devices: Partition 'sda1' does not have a file system and cannot be mounted.
  4. Nothing has changed in UD over the last few releases that should affect the display of the size, free, and used space on remote shares. That said, there may have been changes in CIFS that affect the connection to the remote share. The problem is that the lost connection, shown by the missing size, free, and used display, is an indication of an underlying problem that will probably affect the operation of the remote share.
  5. Having a remote server go to sleep while UD has a share mounted, and then expecting things to just pick back up, is expecting a lot. CIFS will do the best it can to reconnect, but UD has no control over what CIFS does. Is there a way to mount/unmount the remote shares on a schedule so you can prevent this issue?
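     One way to do such scheduling outside UD is a plain root crontab that brackets the backup window. A minimal sketch, assuming a hypothetical //KILRAH/D share (the server named elsewhere in the thread), a credentials file at /root/.smbcred, and a 2-3 AM window; adjust all of these to your setup:

     ```shell
     # Hypothetical root crontab entries (edit with: crontab -e as root).
     # Mount the remote share just before the backup window...
     55 1 * * * mount -t cifs //KILRAH/D "/mnt/remotes/KILRAH_D" -o credentials=/root/.smbcred
     # ...and unmount it afterwards, so a sleeping server can't strand a stale mount.
     30 3 * * * umount "/mnt/remotes/KILRAH_D"
     ```

     This keeps the share mounted only while it is actually needed, so the remote server sleeping the rest of the day never leaves CIFS holding a dead connection.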
  6. Go to a command line and run the following, then post the results (quote the path if your mount point name contains spaces): df "/mnt/remotes/your mountpoint"
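     Since a UD mount point name can contain spaces, the quoting matters. A quick demonstration, using a throwaway directory under /tmp as a stand-in for a real UD mount point under /mnt/remotes:

     ```shell
     # Stand-in for a UD remote mount point whose name contains a space.
     mkdir -p "/tmp/remotes/My Share"
     # Quote the path so the shell passes it to df as a single argument;
     # -h prints the sizes human-readable.
     df -h "/tmp/remotes/My Share"
     ```

     Without the quotes, df would be handed two arguments ("/tmp/remotes/My" and "Share") and complain that neither exists.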
  7. Your remote server is going off-line:
     Dec 31 09:48:05 Unraid kernel: CIFS: VFS: \\KILRAH\D error -11 on ioctl to get interface list
     Dec 31 09:48:51 Unraid kernel: CIFS: VFS: \\KILRAH has not responded in 180 seconds. Reconnecting...
  8. It will fix the device designations when a drive is first detected. Click the double arrows in the upper right-hand corner of the UD page; that will initiate a hot plug event and fix the designations.
  9. Native zfs is coming in Unraid 6.12. I've been working on UD to get it ready to support zfs.
  10. If you have the zfs plugin installed, I think UD will now let you create single disk zfs file systems on an earlier Unraid version. It will sense zfs being installed and should let you format a zfs disk. Once you create zfs disks, it is my understanding you can join them to make a zpool. UD does not create zpools with multiple disks.
  11. The mount point/name stays with the disk and is only changed with UD. If you've never set a mount point manually, it will default to the disk label/zpool name if there is a label. You can restore the default by editing the mount point and clearing out the old value.
  12. This is not how UD will create zfs disks. They will be created after the disk is partitioned. This is what lsblk shows:
      root@BackupServer:/mnt/user/unraid/unassigned.devices/unassigned.devices.emhttp# lsblk -f | grep sdc
      sdc
      └─sdc1 zfs_member 5000 Testing_fmt 5758782561912062602
      This is what fdisk shows:
      fdisk -l | grep sdc
      Disk /dev/sdc: 111.79 GiB, 120034123776 bytes, 234441648 sectors
      /dev/sdc1 2048 234441647 234439600 111.8G 83 Linux
  13. I've made some changes. This is really a UD issue, and not a preclear issue. UD considers any disk without a file system to be a candidate for a preclear. You have found an edge case where your zfs disk(s) are not being recognized by Linux and therefore show a blank 'FS'. This is how it will look in the next release when a disk is passed through and the file system is not recognized: I'm working on UD changes for Unraid 6.12, which includes zfs. Notice that the "Dev 1" disk is a zfs file system and is recognized. Also note that "Dev 3" is passed through and there is no recognized file system; UD Preclear is installed, but the icon to preclear is not shown. With Unraid 6.12, zfs file systems are created after the disk is partitioned, so there won't be a partition 9 created. This is how UD will create zfs disks, so the partitioning UD uses will be compatible with array disks and they can be introduced into the array without reformatting.
  14. I asked you the question to determine the best way to handle this situation. I'm currently working on zfs integration in UD for Unraid 6.12, and I wanted to know what "FS" shows for the zfs_member, as UD should now handle zfs file systems.
  15. Quoted question: "Is there any way I can mark certain disks as not eligible for the preclear option? Attached screenshot to show the issue." What shows as the file system (FS) on that disk? Once a file system shows on the disk, the preclear option goes away. Preclear does not respect destructive mode; all preclear looks for is disks that do not have a file system.
  16. Define an "isos" share. UD does not create user shares, and the browser built into Unraid limits the scope of browsing for security reasons. In later versions of Unraid, an "isos" share is created automatically, so it makes sense to use it. Why create a UD location when there is already an "isos" share? Why not upgrade Unraid to the latest version?
  17. It shows files in the “isos” share. Be sure to define an “isos” share and put your files there. It doesn’t have to be defined in the vm config.
  18. I am able to reproduce your issue. I've been working on implementing zfs disk handling into UD on Unraid 6.12, along with many updates to be compatible with php8.1. I released a version that was a partial zfs implementation and because of the way zfs disks are mounted, UD was not detecting a passed through disk properly because UD thought it had mounted the disk. A fix will be in the next release of UD. I'm not ready to release it though because I've made a lot of changes for zfs and I don't want to release it until I've done more testing. The early release of UD was to prepare for a 6.12 beta release. There were several php8.1 changes that kept UD from running at all on 6.12 and the syslog ended up getting clogged with php warnings from UD. I wanted an updated UD that would at least run on 6.12 and would not clog the syslog with php warnings. It's really not causing a problem with the zfs disks, it's just a display issue.
  19. Inside a UD device script, either $DEVICE or $LUKS (depending on whether or not the disk is encrypted) works fine because UD is maintaining the correct values. If you do UD script work outside UD (for example, I have a User Script that runs at night to do some backup work and then spins down the disk), use the alias, because the alias is fixed and never changes. If you get into the habit of using an alias, you won't have to be concerned about which device variable to use to spin down the disk.
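     As a concrete illustration of the alias habit, here is a minimal User Script sketch. The /dev/disk/by-id path is a hypothetical example (substitute your own drive's stable alias), and hdparm is used as the spin-down mechanism here; the call is guarded so the sketch only prints what it would do on a machine where that disk or hdparm is absent:

     ```shell
     #!/bin/bash
     # Nightly User Script sketch: do backup work, then spin the disk down
     # by its fixed by-id alias rather than a $DEVICE that can change.
     DISK="/dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL"   # hypothetical alias

     # ...backup work would go here, e.g. rsync from the UD mount point...

     # hdparm -y issues an ATA STANDBY IMMEDIATE, spinning the drive down.
     if [ -e "$DISK" ] && command -v hdparm >/dev/null 2>&1; then
       hdparm -y "$DISK"
     else
       echo "would run: hdparm -y $DISK"
     fi
     ```

     Because the by-id alias encodes the model and serial number, the script keeps working even when the kernel re-letters the drive (sdc today, sde tomorrow) across hot plugs and reboots.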