unRAID OS version 6.4.0 Stable Release Available


limetech

Popular Posts

Download: If you are running a previous stable release, clicking 'Check for Updates' on the Plugins page is the preferred way to upgrade. If you are running a 6.4.0-rc, click 'Check for Updates'...

Not criticizing. It's just that simple answers to simple questions often lead to more questions, some of which may have already been answered. By the way, we are all mostly just volunteers...

For pfSense, if you're using the unbound DNS resolver, just put this into the custom options and it will work correctly:

server:
  private-domain: "unraid.net"


Hi,
I just updated to 6.4 today and I'm having problems with my Windows VM.

When I start it, it keeps spamming this error to the console until it fills the log folder:
Jan 14 17:35:54 Server kernel: vfio-pci 0000:00:02.0: BAR 2: can't reserve [mem 0xc0000000-0xcfffffff 64bit pref]

It's a Windows 10 VM with iGPU passthrough, and the device mentioned in the log seems to be the Intel GPU. Does anyone know how that can be solved?

Thank you
 
 
server-diagnostics-20180114-1740.zip
Your post has the same error as this one:

https://lime-technology.com/applications/tapatalk/index.php?/topic/65494-unRAID-OS-version-6.4.0-Stable-Release-Available#entry619723

Sent from my SM-G955U using Tapatalk

36 minutes ago, dlandon said:

I suspect that the preclear plugin is not respecting the safe mode.  Remove it for now.

Nope.  The plugins are not installed.  update_cron is still parsing everything on the flash drive.

 

See here: 

 

16 minutes ago, francesco995 said:

Hi,
I just updated to 6.4 today and I'm having problems with my Windows VM.

When I start it, it keeps spamming this error to the console until it fills the log folder:
Jan 14 17:35:54 Server kernel: vfio-pci 0000:00:02.0: BAR 2: can't reserve [mem 0xc0000000-0xcfffffff 64bit pref]

It's a Windows 10 VM with iGPU passthrough, and the device mentioned in the log seems to be the Intel GPU. Does anyone know how that can be solved?

Thank you

server-diagnostics-20180114-1740.zip

https://lime-technology.com/forums/topic/65494-unraid-os-version-640-stable-release-available/?do=findComment&comment=619723

27 minutes ago, Squid said:

It's a problem with 6.4. Looks like the fix LT put in place...

 

That fix is still in place and correct (only active plugins are read).

 

The cause comes from client 192.168.1.220, which tries to execute a non-existent preclear.

 

 

1 minute ago, bonienl said:

 

That fix is still in place and correct (only active plugins are read).

 

The cause comes from client 192.168.1.220, which tries to execute a non-existent preclear.

 

 

Sure, with regards to preclear. But the dynamix system stats .cron is still being added. Follow my link. Cron entries are still being processed in safe mode.

1 minute ago, Living Legend said:

So how should I go about diagnosing my hard crash issue?

Disable all VMs, VT-d in the BIOS, and all Docker apps, and see what happens in safe mode.

8 minutes ago, Squid said:

Sure, with regards to preclear. But the dynamix system stats .cron is still being added. Follow my link. Cron entries are still being processed in safe mode.

 

Yeah, something funky is going on...
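For context on why leftovers on the flash drive matter here: an update_cron-style helper typically concatenates every plugin's .cron fragment found on the flash into one crontab, so a stale fragment keeps firing even after its plugin is gone. A minimal sketch of that behavior, with illustrative directory names rather than unRAID's exact paths:

```shell
#!/bin/sh
# Sketch of an update_cron-style helper: gather every plugin's .cron
# fragment into a single crontab file. Paths are illustrative only.
PLUGINS=$(mktemp -d)
mkdir -p "$PLUGINS/dynamix.system.stats"
echo '* * * * * /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1' \
  > "$PLUGINS/dynamix.system.stats/stats.cron"
# Plain concatenation does not care whether the owning plugin is still
# installed or whether we booted in safe mode; a fix must filter here.
cat "$PLUGINS"/*/*.cron > /tmp/root.cron
grep -c sa1 /tmp/root.cron   # prints 1
```

A safe-mode-aware version would skip fragments whose plugin is not active before installing the result with crontab.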


I'm experiencing hard crashes with 6.4.0 on my Ryzen build.  For those who don't know, I'm the original discoverer of the Global C-state Control fix for Ryzen, and my unRAID server is extremely susceptible to this issue for whatever reason.  I typically get crashes within a few hours if the problem exists.

 

Until today, I was running 6.4-rc7a for several months, with Global C-state control ENABLED, and no other manual fixes.  I had uptime exceeding 50 days, and never experienced a crash under rc7a.

 

This morning I updated to 6.4.0 stable, and applied the ZenStates fix to the config/go file (my file shown here):

#!/bin/bash
# Disable the C6 package state on Ryzen (the ZenStates fix)
zenstates --c6-disable
# Start the Management Utility
/usr/local/sbin/emhttp

 

Within 3 hours, I had already experienced my first hard crash.  The console was not responsive, with no output anywhere.

 

I have disabled Global C-state Control again, so hopefully I can be stable on 6.4.0 while this is being addressed.

 

Here's what I don't understand:  If this is an AMD bug, and is AMD's responsibility to fix, why did the fixes that were put into rc7a work so well?  And why can't those same fixes be brought forward to 6.4.0?

 

Thanks,

Paul


Hello, just upgraded to 6.4; it rebooted just fine, but now it's telling me ext4 for my SSD cache is unsupported and is prompting me to format.

I've never had an issue with this previously, so I guess my first question is: is there any way to get it back to allowing the cache SSD as ext4?

If not, what would be the recommended process so I don't lose everything currently stored on the cache? If it were mountable, I could move everything off, format, then move it back on, but since I can't get to that point, I'm looking for some tips.

Appreciate the assistance in advance!

Screenshot 2018-01-14 12.17.19.png

1 minute ago, Sean M. said:

Hello, just upgraded to 6.4; it rebooted just fine, but now it's telling me ext4 for my SSD cache is unsupported and is prompting me to format.

I've never had an issue with this previously, so I guess my first question is: is there any way to get it back to allowing the cache SSD as ext4?

If not, what would be the recommended process so I don't lose everything currently stored on the cache? If it were mountable, I could move everything off, format, then move it back on, but since I can't get to that point, I'm looking for some tips.

Appreciate the assistance in advance!

Screenshot 2018-01-14 12.17.19.png

 

ext4 has never been a supported filesystem for cache or array disks. Possibly your previous version mounted it anyway. You should be able to mount ext4 with Unassigned Devices.
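If Unassigned Devices doesn't work out, the copy-off can also be done by hand at the console. A sketch, assuming the old cache shows up as /dev/sdX1 (a placeholder; check lsblk -f for the real name) and that an array disk has room:

```shell
# Mount the old ext4 cache read-only and copy its contents to an array disk.
# /dev/sdX1 and /mnt/disk1 are placeholders; substitute your real devices.
mkdir -p /mnt/oldcache
mount -o ro -t ext4 /dev/sdX1 /mnt/oldcache
rsync -a /mnt/oldcache/ /mnt/disk1/cache-backup/
umount /mnt/oldcache
```

After the cache is reformatted to a supported filesystem, the data can be rsynced back the same way.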

8 hours ago, remati said:

The upgrade process from 6.3.5 to 6.4 went smoothly, but I'm having a weird logging issue when starting one of my VMs with Intel HD Graphics passthrough. It loads on the monitor OK, but I'm getting a flood of these errors in the unRAID log:

 

Jan 14 01:18:50 Tower kernel: vfio-pci 0000:00:02.0: BAR 2: can't reserve [mem 0xd0000000-0xdfffffff 64bit pref]

 

It seems to be filling up my /var/log folder completely. I've attached my diagnostics zip. What could be causing this issue now?

 

 

tower-diagnostics-20180114-0125.zip

 

Try adding this to your append line in syslinux:

video=efifb:off

From: https://www.redhat.com/archives/vfio-users/2016-April/msg00236.html

 

Please report if this solves the problem.
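For anyone unsure where that goes: the append line lives in syslinux.cfg on the flash drive. A sketch of the stanza, assuming the stock layout (the label text varies by install; only video=efifb:off is added):

```
label unRAID OS
  menu default
  kernel /bzimage
  append video=efifb:off initrd=/bzroot
```

A reboot is needed for the change to take effect.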

Just now, zin105 said:

 

You can generate your own cert with OpenSSL.

I didn't think generating my own would affect the auto-renewal that's built in. Essentially, I just want to alter what unRAID is doing when it reaches out to Let's Encrypt to generate a new cert. Adding my own FQDN and nixing the unRAID DNS would be ideal. I don't really like being able to resolve an FQDN outside of my network to an IP that is internal. I know it's technically safe, but I just think it's weird and don't really care for it. I can see it being a nice feature for most, though.
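For reference, the self-signed route zin105 mentions can be sketched with OpenSSL like this (the hostname and filenames are placeholders, not unRAID defaults, and this is entirely separate from the provisioned Let's Encrypt cert and its auto-renewal):

```shell
# Generate a self-signed cert for a custom internal FQDN.
# -addext needs OpenSSL 1.1.1 or newer; CN/SAN and paths are placeholders.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tower.key -out tower.crt \
  -subj "/CN=tower.home.example" \
  -addext "subjectAltName=DNS:tower.home.example"
# Show the subject of the cert we just created
openssl x509 -in tower.crt -noout -subject
```

The resulting key/cert pair would still have to be installed wherever the webGUI expects them, which is the part that interacts with the built-in renewal.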

34 minutes ago, manofcolombia said:

Is there a way to change the FQDN that was generated for us? I have my own internal domain and DNS, and "local" isn't really preferred for that reason.

No but that's a good idea to make it configurable.

21 minutes ago, limetech said:

No but that's a good idea to make it configurable.

I realize I'm being a little picky at this point, since I'm sure I'm in a small percentage of users who have their own internal domain along with their own internal home lab, but it would be nice to have for those who want it.

EDIT: Better yet, it would be amazing to be able to configure the details of the cert registered with Let's Encrypt, but be able to toggle the limetech DNS feature and cert auto-renewal independently. That way, less picky users can just leave it on auto, while the rest of us can enter our own details and choose whether to use limetech DNS or auto-renewal.


Just lost user shares again. Here's what's in the syslog. All drives are approximately half full.

 

Jan 14 11:19:55 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdc 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:55 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdd 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:55 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sde 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:55 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdf 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:55 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdg 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:55 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdh 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:55 Tower kernel: notify[6545]: segfault at 0 ip 0000000000603e0d sp 00007fffb4f509c0 error 4 in php[400000+74a000]
Jan 14 11:19:56 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdf 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:56 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdg 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:56 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdh 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:56 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdi 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:57 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdc 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:57 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdd 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:57 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sde 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:57 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdf 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:57 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdg 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:57 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdh 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:57 Tower rc.diskinfo[8573]: PHP Warning: shell_exec(): Unable to execute 'timeout -s 9 60 udevadm info --query=property --name /dev/sdi 2>/dev/null' in /etc/rc.d/rc.diskinfo on line 369
Jan 14 11:19:58 Tower rc.diskinfo[8573]: PHP Warning: exec(): Unable to fork [timeout -s 9 60 /bin/lsblk -nbP -o name,type,size,mountpoint,fstype,label 2>/dev/null] in /etc/rc.d/rc.diskinfo on line 361
Jan 14 11:19:59 Tower rc.diskinfo[8573]: PHP Warning: exec(): Unable to fork [timeout -s 9 60 /bin/lsblk -nbP -o name,type,size,mountpoint,fstype,label 2>/dev/null] in /etc/rc.d/rc.diskinfo on line 361
Jan 14 11:19:59 Tower rc.diskinfo[8573]: PHP Warning: file_put_contents(): Only 0 of 2 bytes written, possibly out of free disk space in /etc/rc.d/rc.diskinfo on line 266
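Those "Unable to fork" and "Only 0 of 2 bytes written, possibly out of free disk space" warnings point at an exhausted log filesystem (or memory exhaustion preventing forks). A quick sketch to check which, run from the console:

```shell
# How full is the log filesystem, and what are the biggest items in it?
df -h /var/log
du -sh /var/log/* 2>/dev/null | sort -h | tail -5
# Fork failures can also come from memory pressure, so check that too
free -m
```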


Hi,

I started using unRAID for the first time with the 6.4.0 stable release (on a Ryzen 1700X / Asus Prime X370).

USB booting is OK. The system had one, and only one, new hard disk (a WD Datacenter drive). I started the array, and unRAID reported that the HDD was unpartitioned/unformatted. After that I initiated a quick SMART test and was clicking around the GUI. Suddenly the web interface got stuck. I went to the PuTTY/SSH session to check, ran top, and saw a process at 100%, and then the SSH session got stuck too. But ping to the Tower was still responding. Initiating a second SSH session did not help either; I had to hard shutdown and restart the machine. Wondering what is going on...

Then, after a fresh reboot, the HDD was formatted. I tried to create an Ubuntu VM and so on, and then the server was left on its own for an hour. Ping was running on the client machine and the Tower was responding. Then I tried to refresh the unRAID page, and that was it: unRAID got stuck, and then ping timed out too. Screenshot attached of 100% CPU use.

The client machine runs XP and uses Firefox ESR and PuTTY to connect to the unRAID Tower.

(I also had another issue where I started the VirtIO download and then could not stop the array; I had to kill the wget so that it did not use the share.)

emhhtpd_100_percent.jpg

  • limetech unpinned and locked this topic
This topic is now closed to further replies.