unRAID Server Release 6.2.0-rc1 Available



Yeah, you're indeed right. If you set your IP address to static, there is some trouble with DNS. If I set the DNS to 192.168.10.1, which is the address of my router, Docker does not work. If I add the Google DNS server to the network settings, it works but is slow. If everything is set to automatic (DHCP), it works okay.



So keep it at DHCP and reserve the static address for the server in the router instead.
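If you want to narrow down whether DNS is really the culprit before changing anything, a quick check from the unRAID console (assuming nslookup is available; substitute your own router address) is to compare a lookup through the router against a public resolver:

# Show which resolvers unRAID is currently using
cat /etc/resolv.conf
# Compare resolution through the router (192.168.10.1 in the post above) with a public resolver;
# if the first one is slow or times out, Docker pulls will misbehave the same way
nslookup registry-1.docker.io 192.168.10.1
nslookup registry-1.docker.io 8.8.8.8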

I have done the update, but now I can't access my shares from Windows anymore. I can ping the server and connect to the web interface, but no longer from Explorer... What is wrong? All my shares still look fine from the web interface...

Give Windows a little time to come to its senses. You should be able to type "\\tower" right in the Explorer address bar to see your shares. (Of course, use your own server name if it has been changed from 'tower'.)


Nope, I've rebooted both systems a few times...

I tried \\192.168.1.3 and then unRAID asks for a login...

When I enter the root credentials, unRAID asks for them again...

Sounds as if Windows may have some cached credentials causing a problem. Try using Windows Credential Manager to remove any existing credentials for the unRAID server.
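If the Credential Manager GUI is hard to find, the same cleanup can be done from a Windows command prompt with the standard cmdkey and net use tools; a rough sketch, assuming the server is still called 'tower':

REM List stored credentials and remove any entry for the server
cmdkey /list
cmdkey /delete:tower
REM Drop all existing SMB connections so Windows renegotiates from scratch
net use * /delete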


Nope, when I tried to connect with a new Windows machine, the same problem occurs...

What happened?

 


It would be nice to be able to delete parity check history entries. In 6.1, I just went into my flash drive and edited the file in Notepad. However, with 6.2 it seems to regenerate the last entry no matter what. Even if I delete the entire parity history file, it recreates a new one within seconds with the same information: "2016 Jul 10 19:20:03|22|Unavailable|-4"

 

So my dashboard shows a 22-second incomplete parity check, even though I just did a full parity check yesterday.
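For what it's worth, the 6.1-era manual fix looked roughly like the following; this assumes the history is still the pipe-delimited file /boot/config/parity-checks.log (date|duration in seconds|speed|exit status), and since under 6.2 the last entry apparently gets re-created from elsewhere, it may not stick:

# Back up the parity check history, then drop the last (bogus) entry
cp /boot/config/parity-checks.log /boot/config/parity-checks.log.bak
sed -i '$d' /boot/config/parity-checks.log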


 

- docker: fix update to always request manifest.v2 information

 

Updates always showing as available (and subsequent pulls of 0 B) is still happening on Dolphin (aptalca/docker-dolphin:latest). It happens even on a virgin docker.img file.

 

Hmmm, not able to reproduce here. I added this container on a test machine (with rc1), after which it showed 'up-to-date'. I did a Check for Updates, but it still remained 'up-to-date'.

 

...

Actually, this is what I was seeing:
    "aptalca/docker-dolphin:latest": {
        "local": null,
        "remote": "sha256:9cc5f3d41b09b915a2024eef870c23f219d6036d9bc01aa03cab6ce3cbf6a08a",
        "status": "undef"
    }

 

But I looked very closely at the my* template after you couldn't reproduce it, and the problem was that there were some trailing spaces in the repository entry. It would pull correctly but messed up the update checks.

 

It might not be a bad idea to trim all the entries in dockerMan.

 

Ah, that makes sense now. The next release will trim leading/trailing spaces from a few of the Docker input boxes.
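For anyone hitting the same thing before that release, the failure mode is just an exact-string mismatch: 'repo ' with a trailing space is not the same lookup key as 'repo', so the local digest is never found. A quick way to spot and strip stray whitespace from the console (the value shown is just an example):

repo='aptalca/docker-dolphin:latest '            # note the trailing space
printf '%s\n' "$repo" | cat -A                   # trailing whitespace shows up just before the $ end-of-line marker
trimmed=$(printf '%s' "$repo" | sed 's/[[:space:]]*$//')
printf '[%s]\n' "$trimmed"                       # the brackets confirm the space is gone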


fuse/shfs does support symlinks but not hard links.

 

Are you sure? I don't know a lot about FUSE, and I've only just heard about SHFS. I have read that SHFS stopped development in 2004 and has been superseded by SSHFS. In searching for a solution or workaround, I've seen numerous pages mentioning hard link support for FUSE and SSHFS. If that's the case, replacing SHFS with SSHFS (if possible) should allow hard linking.

 

'shfs' is Limetech's proprietary FUSE-based user share file system; it has nothing to do with other projects out there that might also be named 'shfs'.

 

Ah, that makes sense then. Well, in that case, what is the limiting factor in adding hard link support? I know FUSE itself supports it. Since hard linking works on the disk shares and the user share honors those hard links, is it possible to have shfs intercept hard-link system calls on a user share and redirect them to the appropriate disk share? Is it unreasonable to add a feature request for this ('cause I already did)?
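For anyone who wants to confirm the current behaviour, a quick console check (hypothetical share name 'test'; the exact error text may vary):

# Hard links work on a disk share...
touch /mnt/disk1/test/original
ln /mnt/disk1/test/original /mnt/disk1/test/hardlink
# ...but the same call through the fuse-based user share is refused
ln /mnt/user/test/original /mnt/user/test/hardlink    # fails with an 'operation not permitted/supported' error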


I'm afraid I may have hosed Docker. After the upgrade, emhttp wasn't responding for 15-20 minutes, even though I could see the process running. I issued a "reboot" command over SSH, and the server gracefully shut down.

 

Once it came back up, emhttp was working okay, but the Docker tab is missing. The syslog shows my old docker image being mounted from /mnt/vms/docker.img, but a subsequent mount command fails:

 

Jul 11 20:01:31 storage emhttp: shcmd (76): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/vms/docker.img' /var/lib/docker 20 |& logger
Jul 11 20:01:31 storage kernel: BTRFS: device fsid 52c0aba5-9923-418c-9465-89971a7367f3 devid 1 transid 1124822 /dev/loop0
Jul 11 20:01:31 storage kernel: BTRFS info (device loop0): disk space caching is enabled
Jul 11 20:01:31 storage kernel: BTRFS: has skinny extents
Jul 11 20:01:31 storage root: Resize '/var/lib/docker' of 'max'
Jul 11 20:01:31 storage kernel: BTRFS info (device loop0): new size for /dev/loop0 is 21474836480
Jul 11 20:01:31 storage emhttp: shcmd (78): /etc/rc.d/rc.docker start |& logger
Jul 11 20:01:31 storage root: starting docker ...

<snip>

Jul 11 20:01:43 storage emhttp: shcmd (102): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/vms/docker.img' /var/lib/docker 20 |& logger
Jul 11 20:01:43 storage root: /mnt/vms/docker.img is in-use, cannot mount
Jul 11 20:01:43 storage emhttp: shcmd: shcmd (102): exit status: 1

 

I can see that the image is mounted at /var/lib/docker, but lsof says that nothing is using that folder and docker is not running.

 

I'm worried that I interrupted the container update process, or somehow confused unRAID with the reboot.

 

If I change the image location to /mnt/user/system/docker/docker.img and let unRAID create a new image file, Docker starts okay. But if I move my /mnt/vms/docker.img into that location, I get the same behavior. I guess I broke my docker.img file?



 

Sounds like it. It's no massive issue to recreate it, though, unless you had a lot of dev work going on in there...  :-\
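Before recreating it, it may be worth checking whether the image is actually damaged or just still attached to a stale loop device from the interrupted start. A rough sketch (loop device names are examples; check the losetup output for the real ones):

# See whether the old image is still attached to a loop device
losetup -a | grep docker.img
# If it is, and Docker is stopped, unmount and detach it
umount /var/lib/docker
losetup -d /dev/loop0
# Read-only filesystem check of the image itself
losetup -r -f --show /mnt/vms/docker.img    # prints the loop device it picked, e.g. /dev/loop2
btrfs check /dev/loop2
losetup -d /dev/loop2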


I'll mention this here. I don't think it's a bug per se, but rather an unexpected behavior of the new shares. You might want to warn people.

 

As part of this upgrade, I'm doing a P2V transition for my desktop PC. The OS disk is 239 GB, and I made a backup. Now that unRAID has the new shares, I'm trying to use them. So I put my disk image and backup in /mnt/disk1/domains/myvm. My cache disk is a 128 GB SSD, so I didn't bother trying to put them into /mnt/cache/domains/myvm.

 

I didn't realize that the "prefer" setting for the share would make the mover attempt to move the VM images to the cache drive, filling it up and hanging the mover. It would be nice if the mover were smart enough not to shoot itself in the foot with the "prefer" option in the scenario where the file exists in the array but not in the cache.

 

You might also want to warn people about this. Previously, I was putting my VMs and docker data on a separately mounted SSD. Since we don't have a supported RAID 0 way to combine SSDs to increase the cache size, if I want to use the new shares with SSD performance, I'll have no choice but to buy an SSD that's twice the size, replacing my previous two SSDs.

 

For now I've done the obvious thing and modified the share settings to prevent the domains share from using the cache drive.
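For reference, that per-share setting ends up in the share's config file on the flash drive; a hedged sketch of what it looks like, assuming 6.2 keeps the shareUseCache key (normally you would just change it from the share's settings page):

# /boot/config/shares/domains.cfg (excerpt)
shareUseCache="no"    # "yes" writes to cache and the mover migrates to the array;
                      # "prefer" keeps files on the cache; "only" uses the cache exclusively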

 

P.S. Is the mover safe to use on vdisk1.img files while the VM is running?



 

Use

-dconvert=raid0 -mconvert=raid1

to make your cache pool a RAID 0 device. I did this and got two 120 GB SSDs to show up as a single 240 GB pool. I know it's not protected, but I keep it backed up.
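For clarity, those options go to a btrfs balance run against the mounted cache pool; a sketch, assuming the pool is mounted at /mnt/cache:

# Convert data to RAID 0 (striped, no redundancy) while keeping metadata mirrored
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
# Confirm the new profiles and combined capacity
btrfs filesystem df /mnt/cache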

 

Myk

 


In the VM settings page, the handy little directory chooser for the ISOs isn't working. I can still paste the full path though. Is it just me?

 

Works for me. Maybe clear your browser cache or try another browser?

 

BTW, the work you guys have done on VM creation and management is pure awesomeness! Everything I have tried and played with has worked perfectly. I must have created a dozen VMs yesterday trying different options with and without hardware passthrough, and it works flawlessly. Thanks!

 

Gary

This topic is now closed to further replies.