Everything posted by dlandon
-
This is the error I see that relates to your issue:

Mar 23 07:52:03 Tower unassigned.devices: Error: shell_exec(/bin/df '/mnt/remotes/VEDA_Media' --output=size,used,avail | /bin/grep -v '1K-blocks' 2>/dev/null) took longer than 5s!
### [PREVIOUS LINE REPEATED 2 TIMES] ###

This is from the server dropping off-line, or more likely a network issue. You are also running 6.9 and using NFSv3, which has a lot of issues with remote shares being dropped. In order to use NFSv4, which is a lot more robust, you need to be running 6.10-RC3 or above and set NFSv4 in the UD Settings.

You have some other issues going on here also:

Mar 23 07:48:17 Tower kernel: traps: lsof[19515] general protection fault ip:14672eb76a9e sp:dbff8b74eba941f error:0
Mar 23 07:48:17 Tower kernel: traps: lsof[19616] general protection fault ip:14d601371a9e sp:50744c1256c0c217 error:0
Mar 23 07:48:17 Tower kernel: traps: lsof[19172] general protection fault ip:14aa3dc2ba9e sp:91585a64f396a3e8 error:0
Mar 23 07:48:17 Tower kernel: traps: lsof[20207] general protection fault ip:14df2b7c7a9e sp:5b116e228ef7db7f error:0
Mar 23 07:48:17 Tower kernel: in libc-2.30.so[14d601352000+16b000]
Mar 23 07:48:17 Tower kernel: in libc-2.30.so[14aa3dc0c000+16b000]

I have no idea what these are, but they are probably related, because the remote share dropped off-line right after these log entries.
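If you want to check whether the remote share is hanging before UD logs the error, you can run the same df command UD runs, bounded by a timeout. This is a minimal sketch; the share path is the one from your log, so substitute your own mount point:

```shell
# Hypothetical share path taken from the log above; replace with your remote mount.
SHARE=/mnt/remotes/VEDA_Media

# Run the same command UD runs, but give up after 5 seconds.
# Exit code 124 from timeout means the share is hung and df never returned,
# which matches the "took longer than 5s" error in the log.
timeout 5 df "$SHARE" --output=size,used,avail | grep -v '1K-blocks'
echo "exit code: $?"
```

If this hangs or returns 124, the problem is the network or the remote server, not UD itself.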
-
The "Array" indicator on the mount button indicates that, in this case, the drive was removed (or disconnected, if it's acting up) while it was mounted, and Linux assigned it a new devX designation when it reconnected. I'd recommend you preclear the disk using the "Erase Disk" option to remove the data. Do the following:
1. Reboot your system to clear the "Array" indication.
2. Don't mount the disk.
3. Enable "Destructive Mode" in UD Settings.
4. Clear the disk by clicking the red X next to the drive serial number.
5. Install UD Preclear and preclear the disk using the "Erase Disk" option.
If the disk is failing, it may also fail a preclear, but you can at least try.
-
You are behind on your ownCloud versions. The latest version (10.9.1) won't run on php 7.2. I'd suggest you do the following:
1. Edit /Tower/appdata/ownCloud/www/owncloud/config/config.php and confirm your db credentials.
2. If that doesn't solve it: restore your appdata/ownCloud backup.
3. If that doesn't solve it: back up your ownCloud appdata, restart ownCloud with php 7.3 (you'll need that for the latest version), and go through the manual ownCloud upgrade in the second post on this forum.
4. If that doesn't solve it, start over with a fresh ownCloud install.
After you upgrade to the latest ownCloud, set the php version to 7.4. You need to keep ownCloud updated. As it is upgraded to a new version, they generally let one previous version of php work. If ownCloud jumps a few php versions, you will probably have to start over if you can't upgrade.
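A quick way to confirm the db credentials in step 1 without opening the whole file is to grep config.php for just the database keys. A minimal sketch, assuming the config path from the post above (adjust it to where your appdata actually lives):

```shell
# Hypothetical path from the post above; adjust to your appdata location.
CONFIG=/Tower/appdata/ownCloud/www/owncloud/config/config.php

# Print only the database-related settings so you can verify them
# against what your MariaDB/MySQL container actually uses.
grep -E "'db(type|name|host|user|password)'" "$CONFIG"
```

Compare the dbuser/dbpassword/dbname values against your database container's settings; a mismatch here is the most common cause of ownCloud failing to come up after an update.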
-
There's an issue with the share size calculations causing empty shares to show sizes, and sizes to be duplicated from other shares. I'll have a fix in the next few hours.
-
Are you running the latest ownCloud docker container?
-
Working on a fix. Made some changes for UD Root Share and I seem to have broken something.
-
It's always been that way. In what folder do you see .recyclebin?
-
The log is stored on the flash, and the 'Preclear' status on the 'Mount' button will not clear until the log is deleted. Hover your mouse over the red X and the tool tip will tell you what clicking the X will do.
-
If you mean deleting the preclear log after the disk is precleared, then yes, it keeps the preclear signature. If you mean clearing the disk, then yes, you'll lose the preclear signature. If you get a prompt that the disk data will be lost, you will lose the preclear signature.
-
Reboot required to apply Unassigned Devices update + ZFS
dlandon replied to Tommy's topic in General Support
For the record, UD does not limit writes to devices properly mounted at /mnt/disks/. The protection is for incorrect writes directly to /mnt/disks/ that end up in the tmpfs. Those writes would not be written to a device, but instead to the RAM file system. -
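One way to see whether a path under /mnt/disks/ is backed by a real device or by the tmpfs described above is to ask df which file system type backs it. A minimal sketch; the mount path here is a hypothetical example:

```shell
# Hypothetical UD mount path; replace with the path you are about to write to.
TARGET=/mnt/disks/mydisk

# Print the file system type backing the path.
# A properly mounted disk shows its real fs type (xfs, btrfs, ntfs, ...);
# "tmpfs" means nothing is mounted there and writes would land in RAM.
df --output=fstype "$TARGET" | tail -1
```

Running this before a large copy is a cheap sanity check that the destination is actually a disk.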
I've seen several reports of shares disappearing that could be attributed to UD. UD was doing an incorrect unmount of a rootshare when the array was stopped, which crashed shfs, and all shares would be gone when the array was restarted. This happened only when the array was stopped; normal unmounts worked fine and did not cause this issue. I just released an update to UD to fix this issue.
-
Reboot required to apply Unassigned Devices update + ZFS
dlandon replied to Tommy's topic in General Support
It's too early to have this discussion about the mount points. When ZFS is implemented in Unraid, UD will probably be able to mount ZFS disks that are not in the array. I expect that UD would mount legacy ZFS disks so the data could be copied into the array or a new Unraid ZFS pool. The mount point for UD-mounted ZFS disks would be /mnt/disks/. That was recommended for now to avoid the FCP /mnt/ warning. This does not mean the initial installation of UD. When your server is booted, plugins are installed in alphabetic order. UD has to complete its installation and set up the protection on the /mnt/disks/ folder before anything is mounted there; otherwise UD detects that mount and insists on a reboot to clear the mount on /mnt/disks/ so it can install the protection. In your situation the ZFS mounts are auto mounting to /mnt/disks/ before UD can apply its protection mechanism, and that's why you see the reboot message. For now, mount your ZFS disks at /mnt/zfs/ and ignore the FCP warning. -
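Moving a ZFS pool's auto-mount out of /mnt/disks/ can be sketched with the standard zfs mountpoint property. This is a hedged example, not UD's own mechanism; "mypool" is a hypothetical pool name, so substitute yours:

```shell
# Create the alternate mount point outside /mnt/disks/ so UD's
# protection mechanism is not triggered at boot.
mkdir -p /mnt/zfs

# Hypothetical pool name "mypool": point its auto-mount at /mnt/zfs/ instead.
zfs set mountpoint=/mnt/zfs/mypool mypool

# Verify where the pool will mount from now on.
zfs get -H -o value mountpoint mypool
```

The mountpoint property is persistent, so the pool will keep mounting under /mnt/zfs/ across reboots until you change it back.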
The vfs_recycle entry was changed by the recycle bin plugin. You'll see the entry moved to the end of the file. It contains a [global] tag that probably changed your settings. Did you remove and re-install the recycle bin plugin? Let's move this discussion to the recycle bin plugin forum.
-
The answer to both questions is no. No one else is reporting issues and I've not seen any issues. Are you pinning CPUs to ownCloud?
-
I just updated the ownCloud container. The update moves to the latest phusion Focal build and sets the initial ownCloud install version to 10.9.1. Of course it didn't initially go very well; I had to change some things about the redis server. The downside to the latest container is that I can't prebuild the redis server for the default php version of 7.4. What happens is that when the container is first started, it builds the redis server based on the current php version. It's not a problem, but updating will take a while to complete. Subsequent restarts will not rebuild the redis server and will be much faster. Just be patient and give it time to complete. You can follow the progress in the log.