perhansen Posted March 12, 2016

> Seems nice! Any update on the Plus license storage device limits in 6.2?
>> There is no change to the number of storage devices you can use with a Plus license in 6.2.
> This would mean that the limits are the same for the Free and Basic licenses if Free is increased to 6, which seems to remove much of the incentive to ever move from Free to Basic. I would have thought you should reduce the limit for Free back to 4 or 5 (I agree 3 was too low) or increase the limits for Basic. A suggestion might be something like 8 for Basic and 15 for Plus, which seems more in line with the increase in the Pro limits.

I vote for that.
jonp Posted March 12, 2016 Share Posted March 12, 2016 Took the plunge.. Updated the system disabling all VM and Dockers on forehand. System did not autostart. I enabled my second parity disk (has been precleared and waiting for this very moment). Started the array I had no VM tab In VM settings VM's were still enabled I disabled it and enabled it again, this made the tab appear (guessing this is due to the Dynamix webgui). I did the pre-startup actions for the VM's (edit, change video to QXL). I have 3 VM's, neither started. All primary disks were no longer allocated, I set to manual, browsed to the primary disk and save/updated. This made it work again. One of my VM's had two disks attached, this one also did not start but when setting to manual the primary disk was found again by itself, did not have to browse to it. Now on to the dockers.. All dockers appear to need an update.. Kind of weird.. needo/Couchpotato, upgrade/start: worked (took a long time to start again) needo/Deluge, upgrade/start: worked aptalca/dolphin, upgrade/start: worked needo/sabnzbd, upgrade/start: worked needo/sickrage, upgrade/start: worked (took a long time to start again) gfjardim/transmission, upgrade/start: worked gfjardim/crashplan, upgrade/start: worked Yes, I thought I had written that in the guide but apparently not! The reason we have not upgraded Docker in unRAID 6.1 has been because of a significant change to the Docker Hub API. Legacy versions of Docker can still talk to the legacy API, but the newer versions require you to talk through the newer API. This newer API broke a number of functions in the Docker Manager that is in unRAID 6.1, but since the API for Docker still functions against that release version, it has continued to serve its purpose for the community. In unRAID 6.2, we are using the latest release of Docker (1.10.2) and we've resolved all the API related issues so that Docker manager works ok. 
However, there is a one-time update procedure that each container will need to go through in order to point it towards that new API going forward, even if the container itself truly isn't in need of an update. Link to comment
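The post does not say how the one-time update is triggered outside the webGUI, but re-pulling each image from the command line is one way to sketch it (container names taken from the report above; `DOCKER` defaults to `echo` here so this is a dry run, set `DOCKER=docker` on the actual server to perform the pulls):

```shell
# Dry-run sketch: re-pull each installed image so its metadata is refreshed
# against the new registry API. DOCKER=echo only previews the commands.
DOCKER="${DOCKER:-echo}"
for img in needo/couchpotato needo/deluge aptalca/dolphin needo/sabnzbd \
           needo/sickrage gfjardim/transmission gfjardim/crashplan; do
  "$DOCKER" pull "$img"
done
```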
Helmonder Posted March 12, 2016 Share Posted March 12, 2016 Seems nice! Any update on the Plus license storage device limits on 6.2? There is no change to the number of storage devices you can use with a Plus license in 6.2. This would mean that the limits are the same for the free and Basic license if free is increased to 6. This seems to remove much of the incentive to ever move from Free to Basic licenses. I would have thought you should reduce the limit for Free back to 4 or 5 (I agree 3 was too low) or increase the limits for Basic. A suggestion might be something like 8 for Basic and 15 for Plus. This seems more in line with the increase in the Pro limits. There is no free any more I thought ? Its trial.. Link to comment
jonp Posted March 12, 2016 Share Posted March 12, 2016 Seems nice! Any update on the Plus license storage device limits on 6.2? There is no change to the number of storage devices you can use with a Plus license in 6.2. This would mean that the limits are the same for the free and Basic license if free is increased to 6. This seems to remove much of the incentive to ever move from Free to Basic licenses. I would have thought you should reduce the limit for Free back to 4 or 5 (I agree 3 was too low) or increase the limits for Basic. A suggestion might be something like 8 for Basic and 15 for Plus. This seems more in line with the increase in the Pro limits. There is no "free" license in unRAID 6. There is a free 30 day trial. Link to comment
itimpi Posted March 12, 2016

> There is no Free license any more, I thought? It's a trial now.

It is actually the free Trial license, so which term is used to refer to it seems irrelevant at the moment. It might become more meaningful if LimeTech starts limiting the ability to renew trial licenses, so that at some point you HAVE to buy a license to continue using unRAID.
jonp Posted March 12, 2016 Share Posted March 12, 2016 Dockers also needed updated, and they took forever. From the Docker 1.10 release notes: https://github.com/docker/docker/releases/tag/v1.10.0 IMPORTANT: Docker 1.10 uses a new content-addressable storage for images and layers. A migration is performed the first time docker is run, and can take a significant amount of time depending on the number of images present. Refer to this page on the wiki for more information: https://github.com/docker/docker/wiki/Engine-v1.10.0-content-addressability-migration Maybe this should be added to the OP? Done! Link to comment
jonp Posted March 12, 2016 Share Posted March 12, 2016 Can i download the libvirt.img somewhere? It will be created for you. Not in my case.. :'( changed it to: /mnt/user/VMS/Libvirt/ Log: Mar 12 19:46:35 serverramon emhttp: shcmd (316): /etc/rc.d/rc.libvirt start |& logger Mar 12 19:46:35 serverramon root: no image mounted at /etc/libvirt The path needs to end with libvirt.img. /mnt/user/VMS/Libvirt/libvirt.img is a valid path. Link to comment
jonp Posted March 12, 2016 Share Posted March 12, 2016 Seems nice! Any update on the Plus license storage device limits on 6.2? There is no change to the number of storage devices you can use with a Plus license in 6.2. This would mean that the limits are the same for the free and Basic license if free is increased to 6. This seems to remove much of the incentive to ever move from Free to Basic licenses. I would have thought you should reduce the limit for Free back to 4 or 5 (I agree 3 was too low) or increase the limits for Basic. A suggestion might be something like 8 for Basic and 15 for Plus. This seems more in line with the increase in the Pro limits. There is no free any more I thought ? Its trial.. it is actually the free Trial license, so which is used to reference it seems irrelevant at the moment. It might become more meaningful if LimeTech start limiting the ability to renew trial licenses so that at some point you HAVE to buy a license to continue using unRAID. The renewals are not indefinite. At some point, you do have to either purchase or stop using the software. Link to comment
BrianAz Posted March 12, 2016

> I'd like to know if anyone (with dual parity) is experiencing a dip in performance over v6.1.x. According to the Windows 10 file-copy GUI stats on my test server, which I admit has some "older" disks, I don't seem to be able to get more than 26 MB/s sustained write for a 4 GB file (after an initial burst of about 90 MB/s for about 5 seconds). If I enable Turbo Write I get about 60 MB/s (after a similar initial burst) for the same file. To LT staff, if you are reading: is there expected to be a performance penalty as a result of implementing dual parity?
> [screenshots: file copy without and with Turbo Write enabled]
>> Can you really claim "slower"? Show us the same tests with the same hardware on 6.1.9 with single parity, otherwise we have no baseline comparison.
> I haven't said it's slower, for purely that reason (no benchmark testing was done on this hardware beforehand), so to everyone reading this, I guess it is purely anecdotal. What I asked was: is there expected to be a performance penalty as a result of implementing dual parity?
>> Only in cases of extremely weak hardware (CPU).

Any data on what would be considered an extremely weak CPU? I'm running an Intel® Celeron® CPU G1610 @ 2.60GHz. Is anyone else testing with this CPU? Thanks
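For reference, the post does not say how Turbo Write was enabled; the commonly cited console toggle on unRAID 6 is included below as an assumption (the webGUI equivalent lives under Settings > Disk Settings).

```shell
# Assumed invocation: switch unRAID's array write method from the console.
/root/mdcmd set md_write_method 1   # 1 = reconstruct write ("Turbo Write")
/root/mdcmd set md_write_method 0   # 0 = default read/modify/write
```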
jonp Posted March 12, 2016 Share Posted March 12, 2016 I'd like to know if anyone (with Dual Parity) is experiencing a dip in performance over v6.1x According to Windows 10 File Copy GUI Stats - On my test server, which I admit has some "older" disks, I don't seem to be able to get more than 26MB/s (After an initial burst of about 90MB/s for about 5 seconds) write for a 4GB file. If I enable Turbo Write I get about 60MB/s (after a similar initial burst as above) retained for the same file. To LT Staff - if you are reading - is there is there expected to be performance penalty as a result of implementing Dual Parity? Without Turbo Write Enabled: With Turbo Write Enabled: Can you really claim "slower"? Show us the same tests with the same hardware on 6.1.9 with single parity otherwise we have no baseline comparison. I haven't said it's slower for purely that reason (no benchmark testing was done on this hardware beforehand), therefore to everyone reading this I guess it is purely anecdotal. What I asked was: To LT Staff - if you are reading - is there is there expected to be performance penalty as a result of implementing Dual Parity? Only in cases of extremely weak hardware (CPU). Any data on what would be considered an extremely weak CPU? I'm running an Intel® Celeron® CPU G1610 @ 2.60GHz. Anyone else testing with this CPU? Thanks I think an Atom 1.0 GHz processor might be a little lightweight, but honestly, not sure. Haven't had that kind of hardware in a while. Link to comment
Helmonder Posted March 12, 2016 Share Posted March 12, 2016 Auch... I tried to pin my crashplan docker to a specific core... The docker update failed and my docker now appears to be gone, I have an "orphaned image" in the GUI that I seem to not be able to do anything with but for removing it.. :-( help ? Link to comment
jphipps Posted March 12, 2016 Share Posted March 12, 2016 Any ideas on getting NFS to work? Link to comment
jonp Posted March 12, 2016 Share Posted March 12, 2016 Auch... I tried to pin my crashplan docker to a specific core... The docker update failed and my docker now appears to be gone, I have an "orphaned image" in the GUI that I seem to not be able to do anything with but for removing it.. :-( help ? hmm, can you remove the orphaned container, then add crashplan back using your user template? Just navigate to the Docker Tab, remove the orphan, then add container, then select the my-Crashplan container from the top of the drop down list, then click create (no need to fill out any fields because the template should take care of that. Link to comment
jonp Posted March 12, 2016 Share Posted March 12, 2016 Any ideas on getting NFS to work? Thought you found the temp workaround a few pages back? We're still investigating the issue. Link to comment
JorgeB Posted March 12, 2016

> Any data on what would be considered an extremely weak CPU? I'm running an Intel® Celeron® CPU G1610 @ 2.60GHz.

You're fine. These runs were done on a G2030 @ 3.0 GHz, not much faster, 3 runs each with a 10 GB test file:

Single parity:
10240000000 bytes (10 GB, 9.5 GiB) copied, 262.776 s, 39.0 MB/s
10240000000 bytes (10 GB, 9.5 GiB) copied, 266.573 s, 38.4 MB/s
10240000000 bytes (10 GB, 9.5 GiB) copied, 261.387 s, 39.2 MB/s

Dual parity:
10240000000 bytes (10 GB, 9.5 GiB) copied, 265.284 s, 38.6 MB/s
10240000000 bytes (10 GB, 9.5 GiB) copied, 262.617 s, 39.0 MB/s
10240000000 bytes (10 GB, 9.5 GiB) copied, 259.61 s, 39.4 MB/s

Ignore the low write speed; I was using old 250 GB 5900 rpm Seagates.
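The figures above have the shape of dd output; a sketch of the kind of test that would produce them (the exact invocation is an assumption, the helper name is hypothetical):

```shell
# Time a large sequential write with dd and keep only the summary line.
# 10240 blocks of 1,000,000 bytes matches the 10240000000-byte runs above.
bench_write() {  # usage: bench_write <target-file> [block-count]
  dd if=/dev/zero of="$1" bs=1000000 count="${2:-10240}" conv=fdatasync 2>&1 | tail -n 1
  rm -f "$1"
}
```

For example, `bench_write /mnt/disk1/dd_test` against an array disk; pass a smaller block count first for a quick sanity run.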
BRiT Posted March 12, 2016

> For -beta and -rc releases, of all key types, the server must validate at boot time. The reason is that this lets us "invalidate" a beta release. That is, if a beta gets out there with a major snafu, we can prevent it being run by new users who stumble upon the release zip file; for example, if there is a bug in P+Q handling. Remember the reiserfs snafu last year? We want to minimize that. For stable releases, Basic/Plus/Pro keys do not validate at boot time; that is, it works the same as it always has. Starting with 6.2, Trials will require validation with the key server. This is in preparation for making the Trial experience easier.

That's commendable. Are there any forced deprecation checks in place too, for the situation where a person is running the troublesome version and hasn't rebooted in months or years? It's not unheard of to have long uptimes on servers. I suspect the boot-time check won't have as much of an impact in preventing trouble, which is the intent.
Helmonder Posted March 12, 2016 Share Posted March 12, 2016 root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="CrashPlan" --net="host" --privileged="true" -e TZ="Europe/Berlin" -e HOST_OS="unRAID" -v "/boot/custom/crashplan_notify.sh":"/etc/service/notify/run":rw -v "/mnt/cache/.config/crashplan":"/config":rw -v "/mnt/user/":"/data":rw --cpuset=3 gfjardim/crashplan flag provided but not defined: --cpuset See '/usr/bin/docker run --help'. The command failed. I removed the --cpuset command and am trying again.. I missed that it saved that in the advanced view.. Its back now... I was attempting to try and give the crashplan docker more, or more dedicated resources, I'll hold off on that till the beta has quieted down. Thanks for the quick help ! Link to comment
MyKroFt Posted March 12, 2016

Before I attempt to convert my two unassigned SSD/ZFS pool drives to a cache pool: are we allowed to set and stick the RAID mode? I want to combine the two 240 GB drives into a single unprotected 480 GB pool.

Thanks, Myk
jphipps Posted March 12, 2016 Share Posted March 12, 2016 Any ideas on getting NFS to work? Thought you found the temp workaround a few pages back? We're still investigating the issue. nfsd didn't give any errors starting, but found it didn't really startup, seems like statd is throwing an error on startup. It seems to be something with IPV6 disabled in the kernel, but some config files still have entries. Link to comment
EMKO Posted March 12, 2016 Share Posted March 12, 2016 okay its checking in so you can block bad beta releases etc, but can't you add a Warning that you have not connected and tell the user there is a risk ? and still let them use it if they accept? i want to test out the new beta but i am not sure what will happen since i use Pfsense Vm to run my internet and i need Unraid to be running Link to comment
eschultz Posted March 12, 2016 Share Posted March 12, 2016 root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="CrashPlan" --net="host" --privileged="true" -e TZ="Europe/Berlin" -e HOST_OS="unRAID" -v "/boot/custom/crashplan_notify.sh":"/etc/service/notify/run":rw -v "/mnt/cache/.config/crashplan":"/config":rw -v "/mnt/user/":"/data":rw --cpuset=3 gfjardim/crashplan flag provided but not defined: --cpuset See '/usr/bin/docker run --help'. The command failed. I removed the --cpuset command and am trying again.. I missed that it saved that in the advanced view.. Its back now... I was attempting to try and give the crashplan docker more, or more dedicated resources, I'll hold off on that till the beta has quieted down. Thanks for the quick help ! Instead of --cpuset it is now --cpuset-cpus Link to comment
eschultz Posted March 12, 2016 Share Posted March 12, 2016 Before I attempt to convert my two unassinged SSD/zfs pool drives to a cache pool - are we allowed to set and stick the raid mode? I want to combine the 2 240G drives into a single unprotected 480G pool Thanks Myk Yes, you can post-configure your cache pool as raid0. After assigning both SSDs to your cache pool and starting the array, you can click on the first Cache disk and Balance with the following options for raid0: -dconvert=raid0 -mconvert=raid0 Link to comment
trurl Posted March 12, 2016

Sat on the fence on this for a bit, but it looked like none of the issues would affect me. The upgrade was smooth, except for having to manually copy bzroot-gui from the zip. I probably won't use it much anyway, since I normally run headless. No problems with docker updates. Just now started the parity2 sync.

One slight anomaly on the Dashboard. I don't remember seeing this before and don't know what it is supposed to mean. *edit* Now that the parity2 sync has completed, the anomaly is gone.
EMKO Posted March 12, 2016 Share Posted March 12, 2016 Start up a new Openelec 6.0 VM i get this in the logs never seen this before is there a problem? Openelec does run fine warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 0] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 1] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 2] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 3] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 4] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 5] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 6] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 7] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 8] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 9] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 12] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 13] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 14] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 15] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 16] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 17] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 23] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 24] 2016-03-12T21:42:40.054309Z qemu-system-x86_64: AMD CPU doesn't support hyperthreading. Please configure -smp options properly. 
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 0] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 1] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 2] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 3] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 4] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 5] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 6] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 7] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 8] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 9] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 12] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 13] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 14] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 15] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 16] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 17] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 23] warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 24] 2016-03-12T21:42:42.885398Z qemu-system-x86_64: vfio-pci: Cannot read device rom at 0000:08:00.0 Device option ROM contents are probably invalid (check dmesg). Skip option ROM probe with rombar=0, or load from file with romfile= Also now i can stop/start up Openelec VM as much as i want the GPU pass thought works every time. Has the no on board single Nvidia GPU card pass through been fixed? Link to comment
jphipps Posted March 12, 2016 Share Posted March 12, 2016 Finally got NFS to work... A bit of a long process. I basically had to convert from portmap over to rpcbind and upgrade/install a few packages. I installed the following packages: libtirpc-1.0.1-x86_64-2.txz nfs-utils-1.3.3-x86_64-1.txz rpcbind-0.2.3-x86_64-1.txz and switched the rc.rpc to start rpcbind instead of rpc.portmap and also comment out the 2 ipv6 lines from the /etc/netconfig Now I can mount over nfs... Link to comment