Everything posted by dlandon

  1. You have disk sharing turned on. You cannot have a root share while disk sharing is on.
  2. Correct. You'll find that NFSv4 is much more reliable and robust. It's possible the issues are on the old NAS end.
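     If you want to confirm which NFS version a mount actually negotiated, a quick sketch from the console (the host and export path below are examples only):

       # Look for vers=3 or vers=4 in the mount options
       mount | grep nfs
       # Mount with NFSv4 explicitly (needs 6.10-RC3+ with NFSv4 set in UD Settings)
       mount -t nfs -o vers=4 oldnas:/export/media /mnt/remotes/media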
  3. This is the error I see that relates to your issue:

     Mar 23 07:52:03 Tower unassigned.devices: Error: shell_exec(/bin/df '/mnt/remotes/VEDA_Media' --output=size,used,avail | /bin/grep -v '1K-blocks' 2>/dev/null) took longer than 5s!
     ### [PREVIOUS LINE REPEATED 2 TIMES] ###

     This is from the server dropping off-line, or more likely a network issue. You are also running 6.9 and using NFSv3, which has a lot of issues with remote shares being dropped. In order to use NFSv4, which is a lot more robust, you need to be running 6.10-RC3 or above and set NFSv4 in the UD Settings. You have some other issues going on here also:

     Mar 23 07:48:17 Tower kernel: traps: lsof[19515] general protection fault ip:14672eb76a9e sp:dbff8b74eba941f error:0
     Mar 23 07:48:17 Tower kernel: traps: lsof[19616] general protection fault ip:14d601371a9e sp:50744c1256c0c217 error:0
     Mar 23 07:48:17 Tower kernel: traps: lsof[19172] general protection fault ip:14aa3dc2ba9e sp:91585a64f396a3e8 error:0
     Mar 23 07:48:17 Tower kernel: traps: lsof[20207] general protection fault ip:14df2b7c7a9e sp:5b116e228ef7db7f error:0
     Mar 23 07:48:17 Tower kernel: in libc-2.30.so[14d601352000+16b000]
     Mar 23 07:48:17 Tower kernel: in libc-2.30.so[14aa3dc0c000+16b000]

     I have no idea what these are, but they are probably related, because the remote share dropped off-line right after these log entries.
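     If you want to see this for yourself, you can run the same size query UD issues (the exact command from your log) and time it; if the NAS is dropping off-line it will hang:

       # Time the same df call UD runs; timeout caps it at 10s so it can't hang the shell
       time timeout 10 /bin/df '/mnt/remotes/VEDA_Media' --output=size,used,avail | /bin/grep -v '1K-blocks'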
  4. The button is greyed out when you have disk shares enabled. When disk shares are enabled, you can't create a root share. They conflict and certain operations will crash shfs. You can have disk shares or a root share, but not both.
  5. The "Array" indicator on the mount button means that, in this case, the drive was removed (or disconnected, if it's acting up) while it was mounted and was assigned a new devX designation by Linux when it reconnected. I'd recommend you preclear the disk using the "Erase Disk" option to remove the data. Do the following (a rough CLI sketch of the clear step follows this list):
     - Reboot your system to clear the "Array" indication. Don't mount the disk.
     - Enable "Destructive Mode" in UD Settings.
     - Clear the disk by clicking the red X next to the drive serial number.
     - Install UD Preclear and preclear the disk using the "Erase Disk" option.
     If the disk is failing, it may also fail a preclear, but you can at least try.
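     For reference, the clear step is roughly equivalent to wiping the start of the disk (not necessarily exactly what UD does under the hood); a manual sketch, where sdX is a placeholder and this is destructive, so triple-check the device:

       # DESTRUCTIVE: zero the partition table and any preclear signature
       dd if=/dev/zero of=/dev/sdX bs=1M count=10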
  6. You are behind on your ownCloud versions. The latest version (10.9.1) won't run on php 7.2. I'd suggest you do the following (a quick way to check the db credentials is sketched after this list):
     - Edit /Tower/appdata/ownCloud/www/owncloud/config/config.php and confirm your db credentials.
     - If that doesn't solve it: restore your appdata/ownCloud backup.
     - If that doesn't solve it: back up your ownCloud appdata, restart ownCloud with php 7.3 (you'll need that for the latest version), and go through the manual ownCloud upgrade in the second post on this forum.
     - If that doesn't solve it, start over with a fresh ownCloud install.
     After you upgrade to the latest ownCloud, set the php version to 7.4. You need to keep ownCloud updated. Each new release generally allows only one previous php version to work, so if ownCloud jumps a few php versions, you will probably have to start over if you can't upgrade.
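     A quick way to check those db credentials from the server console (the path assumes the default appdata location; adjust to yours):

       # Show the database settings ownCloud is configured with
       grep -E "'db(type|host|name|user|password)'" /mnt/user/appdata/ownCloud/www/owncloud/config/config.php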
  7. There's an issue with the share size calculations that causes empty shares to show sizes and shares to show sizes duplicated from other shares. I'll have a fix in the next few hours.
  8. Are you running the latest ownCloud docker container?
  9. You don't need to mount a disk to preclear it. Just enable destructive mode in UD Settings and then clear the disk. You can then preclear it.
  10. Working on a fix. Made some changes for UD Root Share and I seem to have broken something.
  11. It's always been that way. In what folder do you see .recyclebin?
  12. The log is stored on the flash and will not clear the 'Preclear' status on the 'Mount' button until it is deleted. Hover your mouse over the red X and the tool tip will tell you what clicking the X will do.
  13. If you mean the preclear log after it is precleared, yes, it keeps the preclear signature. If you mean the clear disk, then yes, you'll lose the preclear signature. If you get a prompt that the disk data will be lost, you will lose the preclear signature.
  14. For the record, UD does not limit writes to devices properly mounted at /mnt/disks/. The protection is for incorrect writes directly to /mnt/disks/ that end up in the tmpfs. Those writes would not be written to a device, but instead to the RAM file system.
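     If you're ever unsure whether a path is backed by a real device or the tmpfs, a quick check (the mount point below is an example):

       # 'tmpfs' in the fstype column means writes go to RAM, not a disk
       df -h --output=source,fstype,target /mnt/disks/mydisk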
  15. I've seen several reports of shares disappearing that could be attributed to UD. UD was doing an incorrect unmount of a rootshare when the array was stopped that crashed shfs and all shares would be gone when the array was restarted. This was only when the array was stopped. Normal unmounts worked fine and did not cause this issue. I just released an update to UD to fix this issue.
  16. All of this through your network? Are you using a 10GB network? If not, you should probably re-think your approach, because there is no need for this feature for any users except possibly yourself. Be sure you have a flash backup so you don't have to do this again.
  17. It's too early to have this discussion about the mount points. When ZFS is implemented in Unraid, UD will probably be able to mount ZFS disks that are not in the array. I expect that UD would mount legacy ZFS disks so the data could be copied into the array or a new Unraid ZFS pool. The mount point for UD mounted ZFS disks would be /mnt/disks/. That was recommended for now to avoid the FCP /mnt/ warning. This does not mean the initial installation of UD. When your server is booted, plugins are installed in alphabetic order. UD has to complete its installation and set up the protection on the /mnt/disks/ folder before anything is mounted there; otherwise UD detects that mount and insists on a reboot to clear the mount on /mnt/disks/ so it can install the protection. In your situation the ZFS mounts are auto-mounting to /mnt/disks/ before UD can apply its protection mechanism, and that's why you see the reboot message. For now, mount your ZFS disks at /mnt/zfs/ and ignore the FCP warning (example below).
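     Moving an auto-mounting dataset off /mnt/disks/ is a one-liner (the pool and dataset names are examples):

       # Remount the dataset under /mnt/zfs/ instead of /mnt/disks/
       zfs set mountpoint=/mnt/zfs/tank tank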
  18. Put it back the way you had it that worked. The update I'm issuing won't move the vfs_recycle entry.
  19. The issue is with the recycle bin plugin, not UD. Regardless, I'm issuing an update to the recycle bin plugin to fix this.
  20. The vfs_recycle entry was changed by the recycle bin plugin. You'll see the entry moved to the end of the file. It contains a [global] tag that probably changed your settings. Did you remove and re-install the recycle bin plugin? Let's move this discussion to the recycle bin plugin forum.
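     To see exactly which recycle bin settings Samba actually loaded (the vfs objects = recycle line plus the recycle: parameters), this works from the console:

       # Dump the effective Samba config and show the recycle bin entries
       testparm -s 2>/dev/null | grep -i recycle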
  21. The answer to both questions is no. No one else is reporting issues and I've not seen any issues. Are you pinning CPUs to ownCloud?
  22. I just updated the ownCloud container. The update moves to the latest phusion Focal build and sets the initial ownCloud install version to 10.9.1. Of course it didn't initially go very well; I had to change some things about the redis server. The downside to the latest container is that I can't prebuild the redis server for the default php version of 7.4. When the container is first started, it builds the redis server based on the current php version. It's not a problem, but updating will take a while to complete. Subsequent restarts will not rebuild the redis server and will be much faster. Just be patient and give it time to complete. You can follow the progress in the log.
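     To watch that progress (the container name is whatever you called yours):

       # Follow the container log while the redis server rebuilds for the current php version
       docker logs -f ownCloud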