Everything posted by dlandon

  1. > Waited a few days; this repository appears to still be broken.
     Works for me. I click on the link in your post and it downloads.
  2. Release 2016.03.16 available. You can now format an NTFS drive in UD and Windows will recognize and mount the drive. Thanks to gfjardim for this fix.
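     For reference, a minimal sketch of what formatting a drive for Windows typically involves at the command line (illustrative only, not UD's actual code; /dev/sdX and the label are placeholders -- double-check the device before running anything like this):

         # Create an MBR partition table with a single NTFS partition
         parted -s /dev/sdX mklabel msdos
         parted -s /dev/sdX mkpart primary ntfs 1MiB 100%
         # Build the NTFS filesystem; -f does a quick format without zeroing
         mkfs.ntfs -f -L UDDisk /dev/sdX1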
  3. Just a heads up about the 6.2 beta. 6.2 forces updates of all Dockers whether or not an update is available. When you do this, it appears to be the same as deleting the ownCloud Docker and starting over: any customization in the ownCloud Docker will be lost and has to be re-done, i.e., apps like calendar and contacts are lost. EDIT: I stand corrected. The calendar and contacts are in Productivity Apps, but I can't get them working. EDIT: I enabled them one at a time, logged off, and ownCloud went through an update and they were enabled and working. Very strange.
  4. I haven't seen any other issues with the 6.2 beta yet, except that NFS is not starting, so remote NFS mounts will not work.
  5. That was quick!! Very easy fix. I don't have a system where I can set up two parity disks, so let me know if it worked.
  6. New release 2016.03.12 to deal with the second parity drive showing up as unassigned in 6.2.
  7. New release 2016.03.11 available that fixes the NFS share mounting problem. In my testing I had some very strange things happening, which I attribute to the way I was testing the NFS share mounts: I was mounting NFS shares on the same server I was testing from, and that caused some strange problems. Because of that, I added a check that doesn't allow mounting an NFS share on the same server. I don't know why anyone would do that, but just to be sure, I block it. Because of the issues I was having, I had to make quite a few adjustments to the code to prevent problems. While none of the adjustments related to functionality, I made a lot of changes. I've tested pretty well and don't expect any problems, but watch UD carefully for any strange behavior. The areas I was not able to test thoroughly are preclear status and formatting a drive.
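     A minimal sketch of that kind of same-server guard (the variable names and hostname comparison are assumptions for illustration, not UD's actual code):

         # Refuse to mount an NFS share exported by this very server.
         # SHARE is assumed to look like "host:/path".
         SHARE="Tower:/Public"
         REMOTE_HOST="${SHARE%%:*}"    # everything before the first ':'
         if [ "$(echo "$REMOTE_HOST" | tr 'A-Z' 'a-z')" = "$(hostname | tr 'A-Z' 'a-z')" ]; then
             echo "Refusing to mount an NFS share from this server onto itself." >&2
             exit 1
         fi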
  8. > What device is it? UD recognizes /dev/hd* and /dev/sd*.
     > Thanks for the response. It is of type sd (/dev/sdh; /dev/sdh1 is the partition). I am able to mount it and use it, but this plugin does not see it.
     > What format is the partition?
     > I replied earlier that it was a BTRFS partition; I assume that is supported? While doing some further testing I noticed this error in the log:
     >     Tower udevd[23965]: '/usr/local/emhttp/plugins/unassigned.devices/scripts/rc.unassigned reload >/dev/null 2>&1 & disown' [24095] terminated by signal 7 (Bus error)
     > This appeared in the log when I pressed the rescan button in the plugin. Does this indicate anything? Thanks
     I didn't catch that the format was btrfs from the start; I thought you had re-formatted the drive with btrfs. Currently btrfs is not supported in UD. The log entry comes from udev when UD refreshes a device's udev information. I'm not sure what it means.
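     For anyone checking what filesystem a partition actually carries, standard tools outside of UD will report it (device names below are the ones from the post above):

         blkid /dev/sdh1      # prints the detected filesystem type and UUID
         lsblk -f /dev/sdh    # lists the device tree with filesystem info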
  9. > Go to the dashboard and click on the hand icon below the device indicator and you can run extended SMART self-tests.
     > Oh cool, thanks! But now I'm curious: why doesn't UD link to the same place as the dashboard? http://192.168.10.51/Dashboard/New?name=sdk Also, the tooltip on the device link in UD says "Run Smart Report on sdk", which isn't exactly correct if it is only linking to the attributes.
     I didn't see the need for anything other than the Attributes. I'll change the tooltip text.
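     From the command line, the equivalent SMART operations can be run with smartctl (using /dev/sdk from the post above):

         smartctl -t long /dev/sdk      # start an extended (long) self-test
         smartctl -l selftest /dev/sdk  # check self-test progress and results
         smartctl -A /dev/sdk           # show just the SMART attributes table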
  10. > Where do you see the format error? I can't reproduce it. You can't change the device serial number. To change the name, click on the serial number, then on the mount point, change the name, then press 'Enter'.
      > I knew it was something simple and easy! Thank you again for that. I have attached two screenshots of me formatting the drive. The first shows the format failing, and the second is of the drive mounted. As I said, everything looks like it works: I can create a docker.img on the drive, make folders, and even set up a docker template and have it running. So just an oddity.
      > Can you post the UD log so I can see the format log entries?
      > The log under UD, just right of the share On/Off option, is blank. So is the partition one as well.
      Click on the 'Help' button and you'll see a 'Download Log' button at the bottom. Click on that button to download the UD log.
  11. > Where do you see the format error? I can't reproduce it. You can't change the device serial number. To change the name, click on the serial number, then on the mount point, change the name, then press 'Enter'.
      > I knew it was something simple and easy! Thank you again for that. I have attached two screenshots of me formatting the drive. The first shows the format failing, and the second is of the drive mounted. As I said, everything looks like it works: I can create a docker.img on the drive, make folders, and even set up a docker template and have it running. So just an oddity.
      Can you post the UD log so I can see the format log entries?
  12. Go to the dashboard and click on the hand icon below the device indicator and you can run extended SMART self-tests.
  13. Where do you see the format error? I can't reproduce it. You can't change the device serial number. To change the name, click on the serial number, then on the mount point, change the name, then press 'Enter'.
  14. > What device is it? UD recognizes /dev/hd* and /dev/sd*.
      > Thanks for the response. It is of type sd (/dev/sdh; /dev/sdh1 is the partition). I am able to mount it and use it, but this plugin does not see it.
      What format is the partition?
  15. It seems that UD is mounting NFS shares using CIFS instead of NFS. I missed this because NFS was added after SMB was implemented, and in my testing all the shares I tested were shared with both SMB and NFS, so a CIFS mount was successful. Of course this fails when a remote server only shares over NFS. I am now working on fixing this issue, but I need some help with mounting an NFS file system. I am using the following command to mount an NFS share on Tower:
          mount -t nfs -o defaults 'Tower:/Public' '/mnt/disks/Tower_Public'
      But I am getting the following error:
          Mount of 'Tower:/Public' failed. Error message: mount.nfs: access denied by server while mounting Tower:/Public
      How do I solve the access denied error? I'm not sure how to set NFS permissions, and this might be related to a permissions problem.
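      "access denied by server" usually means the client host isn't matched by the server's export rules. A minimal sketch of the server-side configuration that typically fixes it (the path, subnet, and options below are placeholder assumptions, not Tower's actual settings):

          # /etc/exports on the NFS server: allow a client subnet to mount the share
          /mnt/user/Public 192.168.1.0/24(rw,sync,no_subtree_check)

          exportfs -ra    # re-export the shares after editing /etc/exports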
  16. > I believe error code 111 is a connection refused. Authentication problem? Look at the UD log and you should see more information on why the remote NFS share is not mounting.
      > If you take another look at those log snippets, it's trying to use CIFS to connect, not NFS. I do *not* have samba running on that remote server. That's the problem: whenever I create an NFS mount, UD uses SMB instead.
      That's the way it works. It mounts the remote NFS share locally, then shares it as an SMB and/or NFS share based on the UD settings. Please post a UD log or I can't help you. EDIT: OK, I think I get it. Your server with the NFS share is not sharing it as SMB.
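      Two standard checks for a refused NFS connection (generic diagnostics, not UD features; 'Tower' is the server from this thread):

          showmount -e Tower    # list the shares the server exports over NFS
          rpcinfo -p Tower      # confirm the NFS-related RPC services are registered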
  17. Currently the green ball is on when the ISO file is found, and off (blinking) when the file is missing. The SMB mount shows the same status, depending on whether or not the remote share is available.
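      A minimal sketch of that kind of status check (the paths are placeholders; this is not UD's actual code):

          ISO="/mnt/user/isos/example.iso"    # placeholder ISO path
          MNT="/mnt/disks/Tower_Public"       # placeholder remote-share mount point
          [ -e "$ISO" ] && echo "iso: green" || echo "iso: blinking"
          mountpoint -q "$MNT" && echo "share: green" || echo "share: blinking"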
  18. I believe error code 111 is a connection refused. Authentication problem? Look at the UD log and you should see more information on why the remote NFS share is not mounting.
  19. I don't think it's a good idea for a plugin to access the Internet to connect to an SFTP server. I'm also not comfortable that I know enough to do it securely. For the time being, I feel that UD is accomplishing its intended goals and won't be adding any new features.
  20. > Help me understand what it is you are expecting this feature to do.
      > So at the moment, I mount the virtual disk images that are created by KVM into a user share (well, actually a subfolder of a user share) like so:
      >     mount -o offset=16384 /mnt/virtualisation/vm/unraid-vm/unraid-vm.img /mnt/cache/.unraid-vm/
      > and then I can copy files to and from it. When I'm finished, I unmount it:
      >     umount /mnt/virtualisation/vm/unraid-vm/unraid-vm.img
      > One problem I can foresee is that you need to get the offset, which is found by running fdisk -l on the img file and multiplying the start value by 512. Also, fdisk doesn't work on GPT disks; that requires parted. In fact, the more I look at this, I'm not sure how feasible it is, to be honest; it gets pretty complicated.
      That's why I asked. I have the same concern because of the offset and trying to make it foolproof. At this time, I think this is probably not a good idea. The offset issue makes it too complex to make mounting a virtual disk straightforward, and I am concerned some will mount their active VM disks, which would pose a corruption problem.
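      For anyone doing this by hand, a sketch of the offset calculation described above (the image path is from the post; mounting read-only is an added precaution, and any VM using the image must be shut down first):

          IMG=/mnt/virtualisation/vm/unraid-vm/unraid-vm.img
          # Find the partition's start sector (MBR images; GPT images need parted):
          fdisk -l "$IMG"
          # If the first partition starts at sector 32, the offset is 32 * 512 = 16384,
          # matching the mount command in the post. Mount read-only as a precaution:
          mount -o ro,offset=16384 "$IMG" /mnt/cache/.unraid-vm/
          # A loop-device alternative that avoids the offset math entirely:
          # losetup --find --show -P "$IMG"   # creates /dev/loopXp1, /dev/loopXp2, ...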