
dlandon

Community Developer
  • Posts: 10,289
  • Days Won: 20
Everything posted by dlandon

  1. The plugin was not cleaning up properly and left some things behind. I've posted a new recycle.bin.plg. Install the plugin and then remove it and it will clean things up.
  2. Do you have Pool devices in UD? Try removing the UD plugin and see if it stops the messages.
  3. Update to the latest release of UD and let me know if it solved the problem.
  4. That's not an actual mount of a device. UD is checking to see that a device is actually mounted. Go to your command line, type '/sbin/mount', and see how long it takes. One second might not be enough. (A rough timing sketch follows this list.)
  5. I'm working on cleaning up the way this is handled in UD. The problem is that I do the btrfs check on every refresh of the UD screen. That's every 3 seconds! I've changed it to only look for the pool status when the pool is mounted. This was some poor coding on my part. Had a senior moment I suppose. Once I have it tested and complete some other issues I'm working on, I'll issue an update. Probably today.
  6. I use btrfs fi show on only a single mountpoint, so this should not be an issue.
  7. This is a recent addition to UD to support pool devices better. I think I see an issue with the way I am doing this and I'll make a change.
  8. @JorgeB UD is using the following command to find which UD disks are in a pool; it also tells me which one is the primary member: '/sbin/btrfs fi show mountpoint'. Based on what I see in this post, is that command causing these btrfs warnings? I guess I'm very confused. Btrfs pool members have the same UUID, so what's this all about? Is there another way to determine the btrfs devices in a pool? (A parsing sketch follows this list.)
  9. You can change the name of a VM without having to rebuild it.
  10. Those files are not used unless the disks are installed in UD. Rebooting won't fix anything, as you have seen. I still have no idea how UD trying to auto mount disks can cause those log messages. For the moment, as @JorgeB suggested, they are warnings and can be ignored.
  11. If you are talking about the disks in this screen shot, they are different disks with the same mount point (you need to change one so it's not a duplicate). This is correct, they are different disks. Due to some recent changes to UD, any disk that is installed in UD will now create a Historical entry for that disk. This was done to track the 'devX' assignment and to allow a duplicate mount point (share name) check so two devices are never mounted with the same mount point.
  12. It shouldn't if both aren't mounted at the same time. In fact, UD now checks that a mount point is not duplicated when a disk is mounted and will refuse to mount it if the mount point is already assigned to another disk. I just found and fixed a bug where the mount point (share name) was not being checked against the historical devices when you change the mount point. (A sketch of the duplicate check follows this list.) I don't think we have solved your script failure yet, so keep posting what you find so we can get you fixed.
  13. You can't have both of those disks labeled the same as 'transport data'. They each need a unique name. Unmount the disk currently mounted in UD, then click on the mount point and change it to a different name. Maybe 'transport data 1' and 'transport data 2'.
  14. Just wondering. What does the S3 sleep expect of the drives? No file activity, or all disks in standby? I don't use the S3 sleep, but from what I remember, it wants all the disks to be spun down. Adding a second drive to UD shouldn't cause this kind of issue. If it is added and not mounted, UD does nothing but show the status it gets from udev. Not sure how that would cause this issue. Unraid does do some monitoring of the disk in the background, like reading SMART info. Because the SSD is never spun down, it will continue to check SMART info. Is there an S3 sleep log or some diagnostics on why it won't sleep?
  15. You should turn on CA docker update checks and keep them updated. I switched to 8.3 some time ago.
  16. PHP has a lot of trouble with an apostrophe in a string. It requires a lot of special handling. This is an edge case, and your best solution is to not use an apostrophe. The programming needed to handle this edge case is not worth the investment in time since you have a simple solution. (An escaping sketch follows this list.)
  17. Is the UD disk mounted when a sleep is initiated? Can you post a screen shot of the UD webpage and your diagnostics zip file?
  18. Can you post a screen shot of the UD page? It appears you only have one UD device and it's an NVMe disk. All the BTRFS errors you are seeing are on array disks. These errors are generated by udev. The messages are generated when UD does the auto mount of disks. After you post a screen shot of UD, uninstall the UD plugin and see if that makes a difference.
  19. There have been no changes to UD recently that could cause this issue. This has been the case since 6.9. Unraid does not support spinning down SSD devices. Could there really be disk activity on that disk?
  20. The container was already on 8.3. I just updated it to the latest nightly build that appears to be the 8.3 final release.
  21. The docker container will have to be updated.
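
For post 4, here is a minimal PHP sketch of timing the mount listing. The function name and the one-second threshold are my own assumptions for illustration; this is not the actual UD code.

<?php
// Hypothetical timing of the mount listing; not taken from the UD plugin source.
function mount_listing_time(): float {
    $start = microtime(true);
    // '/sbin/mount' with no arguments simply lists the mounted file systems.
    exec("/sbin/mount 2>/dev/null", $output, $rc);
    return microtime(true) - $start;
}

$elapsed = mount_listing_time();
printf("/sbin/mount took %.2f seconds\n", $elapsed);
if ($elapsed > 1.0) {
    echo "A one second timeout would not be enough on this system.\n";
}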
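
For post 8, a rough PHP sketch of pulling pool members out of 'btrfs fi show' output. The function name, the parsing, and the assumption that the lowest devid is the primary member are mine, not the actual UD implementation; per post 5, the caller would only run this on a mounted pool.

<?php
// Illustrative only; UD's real parsing may differ.
function btrfs_pool_members(string $mountpoint): array {
    // The caller is expected to run this only on a mounted pool (the change described in post 5).
    $out = shell_exec("/sbin/btrfs fi show " . escapeshellarg($mountpoint) . " 2>/dev/null");
    $members = [];
    if ($out && preg_match_all('/devid\s+(\d+).*?path\s+(\S+)/', $out, $matches, PREG_SET_ORDER)) {
        foreach ($matches as $m) {
            $members[] = ['devid' => (int) $m[1], 'device' => $m[2]];
        }
    }
    // Assumption: treat the device with the lowest devid as the primary member.
    return $members;
}

print_r(btrfs_pool_members("/mnt/disks/pool"));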
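
For post 12, a small sketch of a duplicate mount point (share name) check. The data shape and function name are assumptions for illustration and don't mirror UD's internal structures.

<?php
// Illustrative duplicate share name check, including historical (unplugged) devices.
function mount_point_in_use(string $mount_point, array $devices): bool {
    foreach ($devices as $dev) {
        if (strcasecmp($dev['mount_point'], $mount_point) === 0) {
            return true;
        }
    }
    return false;
}

$known = [
    ['device' => 'sdb', 'mount_point' => 'transport data', 'historical' => false],
    ['device' => 'sdc', 'mount_point' => 'backups',        'historical' => true],
];

// Refuse the mount (or the rename) when the share name is already taken.
var_dump(mount_point_in_use('Transport Data', $known));   // bool(true)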
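
For post 16, one reason an apostrophe is painful: if the share name is dropped into a single-quoted shell command, the apostrophe ends the quoting early. escapeshellarg() handles it, but every place the name is used would need the same care. The share name and paths below are made up.

<?php
$share = "Bob's Movies";   // example only

// Naive quoting breaks: the apostrophe terminates the single-quoted argument early.
$bad  = "/bin/mkdir '/mnt/disks/" . $share . "'";

// escapeshellarg() quotes and escapes the value so the shell sees a single argument.
$good = "/bin/mkdir " . escapeshellarg("/mnt/disks/" . $share);

echo $bad . "\n";    // /bin/mkdir '/mnt/disks/Bob's Movies'    <- malformed
echo $good . "\n";   // /bin/mkdir '/mnt/disks/Bob'\''s Movies' <- safe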