unRAID Server Release 6.0-rc5-x86_64 Available



He's not going to let Jon steal all the thunder  8)

 

Well, get him on here a bit more then! jonp has been another very valuable bridge between limetech and the great unwashed... ;D

 

eric has helped me out with a container before, and the stuff he taught me is being put into other containers of mine.


 

 

Please try these commands and post output:

 

cat /etc/resolv.conf
cat /etc/hosts
hostname

 

Then pick one of your containers with no DNS and type:

 

docker exec <container-name> cat /etc/resolv.conf
docker exec <container-name> cat /etc/hosts
docker exec <container-name> cat /etc/hostname

replacing <container-name> with the actual container name.

 

I think my CouchPotato is having this issue... I had no idea; it just wouldn't work for a while.

A quick bridge/host switch fixed it; now I think it's broken again.

 

Results

cat /etc/resolv.conf
# Generated by dhcpcd from br0
# /etc/resolv.conf.head can replace this line
nameserver 192.168.1.1
# /etc/resolv.conf.tail can replace this line

 cat /etc/hosts
# Generated
127.0.0.1       Server localhost

docker exec CouchPotato cat /etc/resolv.conf
# Generated by dhcpcd from br0
# /etc/resolv.conf.head can replace this line
nameserver 192.168.1.1

docker exec CouchPotato cat /etc/hosts
172.17.0.9      f7a6e8729f82
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

docker exec CouchPotato cat /etc/hostname
f7a6e8729f82

 

 

I am attempting to refresh the charts; it fails and will not grab info...

I could just be being impatient; however, I had no idea last time (it wasn't downloading anything for weeks) and I had the same symptom of the charts being empty and not refreshing.

 

This does not look like that bug - the bug shows itself when /etc/resolv.conf on the host (unRAID) does not match /etc/resolv.conf in the container.  They are supposed to match, and the fact they don't is the "bug".
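If you want to check every running container in one go, something like this should work from the unRAID console (just a sketch, assuming the stock docker CLI; it flags any container whose copy of /etc/resolv.conf differs from the host's):

for name in $(docker ps | awk 'NR>1 {print $NF}'); do
    if docker exec "$name" cat /etc/resolv.conf | diff -q - /etc/resolv.conf >/dev/null 2>&1; then
        echo "ok:       $name"
    else
        echo "MISMATCH: $name"
    fi
done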

 

Alright, well, that's good to know, and I suppose it's unrelated then, but I will mention it here and take it to its respective thread if it turns out to be completely unrelated.

 

For this specific container (CouchPotato) the charts over time fail to load (which also means movies don't download, as it can't get a connection out to check), and neither stopping nor restarting the container fixes this... The charts error out on load with something snazzy like "damn, maybe you should enable some charts?" or something of that nature. You hit refresh and it seems to fail instantly.

Stop it, flip host/bridge, all is fixed, working like a champ....


Just wondering if anyone has seen this before... I am on 6.0-rc4 and hit the update button to go to rc5, and after reboot I lost my appdata, images, and docker shares on my cache drive. So I rebooted again and still no shares. I can open the cache drive and everything is there, so I rolled back to rc4 and they are back just fine. Any suggestions?

 

Thanks


Can it access the internet?

 

It should be able to...  I've used the auto-updater before, and I've installed plugins.  Nothing in the log indicates it can't connect out.  All I see is a line saying:

Jun 11 22:09:47 Tower emhttp: /usr/local/sbin/plugin checkall 2>&1

 

I should note I tried rebooting the box, but that didn't help.
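For what it's worth, a couple of quick checks from the console would rule out basic connectivity and DNS (just a sketch; substitute whatever host you like):

ping -c 3 8.8.8.8                  # raw connectivity, no DNS involved
ping -c 3 lime-technology.com      # connectivity plus DNS resolution
wget -q -O /dev/null http://lime-technology.com/ && echo "HTTP OK" || echo "HTTP failed"

If all three pass, the box can reach out fine and the problem is more likely in the emhttp plugin check itself.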

 


I haven't added new drives for a while… I've just been upgrading the size of the drives in my array. So this week I tried to add a new drive… precleared it for 36 hours (1 pass, Seagate 4TB drive) and it just finished… and when I was trying to add it to the array, I was prompted to clear the drive (again?):

 

[Screenshot AxeMsYd.png: the unRAID UI prompting to clear the newly added, already-precleared drive]

 

Is it just the UI (that doesn't recognize the pre-cleared disk), or was there something wrong with the preclear?


I haven't added new drives for a while… I've just been upgrading the size of the drives in my array. So this week I tried to add a new drive… precleared it for 36 hours (1 pass, Seagate 4TB drive) and it just finished… and when I was trying to add it to the array, I was prompted to clear the drive (again?):

 

[Screenshot AxeMsYd.png: the unRAID UI prompting to clear the newly added, already-precleared drive]

 

Is it just the UI (that doesn't recognize the pre-cleared disk), or was there something wrong with the preclear?

 

We won't know until you post all the preclear logs.


We won't know until you post all the preclear logs.

 

========================================================================1.13
== invoked as: /boot/preclear_disk.sh /dev/sdx
==  ST4000DM000-1F2168    ZXXXXXXX
== Disk /dev/sdx has been successfully precleared
== with a starting sector of 1 
== Ran 1 cycle
==
== Using :Read block size = 1000448 Bytes
== Last Cycle's Pre Read Time  : 10:31:18 (105 MB/s)
== Last Cycle's Zeroing time   : 8:21:42 (132 MB/s)
== Last Cycle's Post Read Time : 17:55:57 (61 MB/s)
== Last Cycle's Total Time     : 36:49:58
==
== Total Elapsed Time 36:49:58
==
== Disk Start Temperature: 27C
==
== Current Disk Temperature: 28C, 
==
============================================================================
** Changed attributes in files: /tmp/smart_start_sdx  /tmp/smart_finish_sdx
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate =   116     100            6        ok          110899040
         Spin_Retry_Count =   100     100           97        near_thresh 0
         End-to-End_Error =   100     100           99        near_thresh 0
  Airflow_Temperature_Cel =    72      73           45        ok          28
      Temperature_Celsius =    28      27            0        ok          28
No SMART attributes are FAILING_NOW

0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
0 sectors are pending re-allocation at the end of the preclear,
    the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear,
    the number of sectors re-allocated did not change. 
============================================================================

 

Probably just a UI problem, right? I'll try adding it anyway…

 

EDIT: WTF it's clearing it again and locking up everything in my unraid rig now; all the dockers are down waiting for this single mistake.


 

Probably just a UI problem, right? I'll try adding it anyway…

 

EDIT: WTF it's clearing it again and locking up everything in my unraid rig now; all the dockers are down waiting for this single mistake

 

Yeah that's a rough edge we need to smooth out.  There is an option to write the 'factory cleared' signature to the disk if you know it's already cleared.
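(If I'm remembering the tool right, the preclear script itself can verify an existing signature without touching the data; treat the exact flag as an assumption and check the script's usage text first:)

/boot/preclear_disk.sh -t /dev/sdX    # assumed flag: -t only tests for a valid preclear signature, writes nothing

That would at least confirm whether the signature is there before letting unRAID clear the disk again.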


 

Probably just a UI problem, right? I'll try adding it anyway…

 

EDIT: WTF it's clearing it again and locking up everything in my unraid rig now; all the dockers are down waiting for this single mistake

 

Yeah that's a rough edge we need to smooth out.  There is an option to write the 'factory cleared' signature to the disk if you know it's already cleared.

 

OK, it was my mistake: I hadn't updated my preclear script, and it turns out I need v1.15 for 64-bit unRAID 6.

 

But I think having all the dockers and KVM depend on the (main) array being online should be addressed too… it's especially bad when you're doing multiple array start/stops while changing drive configuration, migrating from reiser to xfs, etc. (all of which I've gone through over the past months).


But I think having all the dockers and KVM depend on the (main) array being online should be addressed too… it's especially bad when you're doing multiple array start/stops while changing drive configuration, migrating from reiser to xfs, etc. (all of which I've gone through over the past months).

Just set them all to not autostart until you are finished making changes to the array.

 

Or do you mean don't stop docker and VMs when you are making changes to the array? I don't think that will work since many of these will require access to the array.


But I think having all the dockers and KVM depend on the (main) array being online should be addressed too… it's especially bad when you're doing multiple array start/stops while changing drive configuration, migrating from reiser to xfs, etc. (all of which I've gone through over the past months).

Just set them all to not autostart until you are finished making changes to the array.

 

Or do you mean don't stop docker and VMs when you are making changes to the array? I don't think that will work since many of these will require access to the array.

 

For the drive-clearing issue… (I'm sure it's been discussed before, but…) isn't it possible for unRAID to run the preclear script while keeping the array working, even if it has to lock everything else (if necessary)? That's still better than having all the drives just waiting/spun down while this single drive is clearing… Oh, I guess it would then have to stop the array to add the new drive, and that's a UI/UX issue. I think ideally the preclearing step should just be incorporated into the default unRAID GUI, so new users are prompted to preclear the drive and it happens without shutting down the main array… and the user can then add the new disk in another step when it's done.

 

As for dockers… I agree that many dockers rely on user shares, especially the media-acquisition/management ones (cp, sb, sabnzbd, etc.)… but we are seeing (and should be fostering) a growing number of dockers that just take advantage of unRAID as a VM vessel, with no interest in the media/content in the user shares at all. I think having the cache drives start/stop independently of the main array would help… then you can draw the line on which dockers can run continuously, uninterrupted.


For the drive-clearing issue… (I'm sure it's been discussed before, but…) isn't it possible for unRAID to run the preclear script while keeping the array working, even if it has to lock everything else (if necessary)? That's still better than having all the drives just waiting/spun down while this single drive is clearing… Oh, I guess it would then have to stop the array to add the new drive, and that's a UI/UX issue. I think ideally the preclearing step should just be incorporated into the default unRAID GUI, so new users are prompted to preclear the drive and it happens without shutting down the main array… and the user can then add the new disk in another step when it's done.

 

unRAID doesn't use the preclear script; it's a completely separate 3rd-party script. If it were integrated into unRAID, then unRAID would probably have to provide proper support for it, which I imagine they would rather leave to the community, as SMART outputs can be pretty confusing and not really standardised. There is already a 3rd-party preclear GUI plugin which works well for this purpose, though: http://lime-technology.com/forum/index.php?topic=39985.msg375195#msg375195


 

unRAID doesn't use the preclear script; it's a completely separate 3rd-party script. If it were integrated into unRAID, then unRAID would probably have to provide proper support for it, which I imagine they would rather leave to the community, as SMART outputs can be pretty confusing and not really standardised. There is already a 3rd-party preclear GUI plugin which works well for this purpose, though: http://lime-technology.com/forum/index.php?topic=39985.msg375195#msg375195

 

Thanks... I've seen that plugin before and I should've used it (or checked the preclear script thread for updates)...


Can KVM run independent of array status? This is a powerful feature of version 6. The KVM mount does not run from a user share anyway. I can see a situation where there are users who want to use unRAID for the virtualization feature by itself. Couple this with dockers and unassigned drives and that's a great solution for many situations. Now if we can decouple Docker from array status... :)

 
