Unraid OS version 6.7 available


40 minutes ago, eschultz said:

Looks like in 6.7 the default route is being set incorrectly to br1 instead of br0.  Are you using br1 for anything?  It looks like eth1, the link for br1, is disconnected anyway...

Thank you for this lead!

 

I do have 4 NICs, and I use only one of them - eth0, with br0 attached to everything - Dockers and VMs. I tested eth1 a long time ago to see whether it worked, but never used br1.

 

What is the resolution to this? Removing everything related to eth1 and br1 from network.cfg, perhaps?

3 minutes ago, eybox said:

Thank you for this lead!

 

I do have 4 NICs, and I use only one of them - eth0, with br0 attached to everything - Dockers and VMs. I tested eth1 a long time ago to see whether it worked, but never used br1.

 

What is the resolution to this? Removing everything related to eth1 and br1 from network.cfg, perhaps?

Yeah, get rid of the following from network.cfg:

IFNAME[1]="br1"
BRNAME[1]="br1"
BRNICS[1]="eth1"
BRSTP[1]="no"
BRFD[1]="0"
DESCRIPTION[1]="VMs connections"
PROTOCOL[1]="ipv4"
USE_DHCP[1]="no"
IPADDR[1]="192.168.86.125"
NETMASK[1]="255.255.255.0"
GATEWAY[1]="192.168.85.1"
METRIC[1]="2"

Also change SYSNICS="2" to SYSNICS="1" at the end of the file, then reboot the server.
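For reference, the same cleanup can be scripted rather than hand-edited. This is a sketch only: it assumes every stale entry carries the `[1]` index exactly as in the excerpt above, and it runs here against an inline sample; on the server you would run the same grep/sed against a backed-up copy of /boot/config/network.cfg.

```shell
# Demo on an inline sample of the relevant lines; on the real server,
# back up /boot/config/network.cfg first and filter that instead.
cat > /tmp/network.cfg <<'EOF'
IFNAME[0]="br0"
IFNAME[1]="br1"
BRNICS[1]="eth1"
GATEWAY[1]="192.168.85.1"
SYSNICS="2"
EOF
# Drop every [1]-indexed line, then knock SYSNICS down to 1.
grep -v '\[1\]' /tmp/network.cfg \
  | sed 's/^SYSNICS="2"/SYSNICS="1"/' > /tmp/network.cfg.new
cat /tmp/network.cfg.new
# prints only: IFNAME[0]="br0" and SYSNICS="1"
```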

Posted (edited)

Actually, the configuration for eth1 (br1) is wrong:

IPADDR[1]="192.168.86.125"
NETMASK[1]="255.255.255.0"
GATEWAY[1]="192.168.85.1"

The gateway address "192.168.85.1" falls outside the interface's network, 192.168.86.0/24 (note 86 vs 85).

This will cause routing issues.

Do as @eschultz proposes and it will resolve the network issue.
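The mismatch is easy to check mechanically. A minimal sketch, assuming the NETMASK 255.255.255.0 from the config above (so gateway and interface address must share their first three octets):

```shell
ip=192.168.86.125
gw=192.168.85.1
# With a /24 mask the network portion is the first three octets;
# ${var%.*} strips the final octet so the prefixes can be compared.
if [ "${ip%.*}" = "${gw%.*}" ]; then
  echo "gateway is on the interface subnet"
else
  echo "gateway is OUTSIDE the interface subnet"   # this branch fires here
fi
```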

Edited by bonienl

2 hours ago, rpj2 said:

My point is that a software update broke a previously working system. I rolled back and will try a future update. Throwing out working hardware is wasteful. A software update could break another card too. Blame the user?

Not blaming the user, simply offering an alternative to get the server back working again.  Sometimes the easiest solution is a h/w one; sometimes it's the only one.

 

But to your point, you're right: something broke in the kernel.  When this was first reported we spent quite a bit of time, short of bisecting the kernel, trying to find out what might have caused it.  We are willing to try any patch you, or someone else, might run across.  For example, take a look at this post in the -rc8 release topic regarding the 'hpsa' storage driver.  I delayed this 6.7.0 release a day to put that patch in, and the user has reported that it works and fixes the issue.

 

Note that we never see these kinds of issues on the h/w we have - we would not publish a release where we see problems like this.  When I see an issue like this, the first thing I do is google the error message and search for similar reports.  But I can't spend more than a few hours on any one issue.  If a solution is not apparent, I let it sit for a while, because eventually it will happen in one of the bigger distros such as Ubuntu or Fedora, and those guys have the resources to investigate the problem further.  This is kinda how it works with open source, unfortunately.

3 hours ago, MrSage said:

 

So I've moved all of the appdata and system files to the cache via MC and I'm still not getting my apps or VMs running.

Looks like the Docker and VM images are corrupted.

You need to stop the Docker and VM services, then delete and recreate the respective images.

This implies re-installing all Docker containers; you can use CA to do this. Containers will come back with all their existing settings and data.

To re-install a VM you need to create a new XML definition and point it at the existing vdisk image. If any manual modifications were made to the XML, they need to be redone.
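As an illustration, the part of the new VM XML that points back at the existing vdisk is just the disk element. A minimal sketch, assuming the default Unraid domains share path and a raw-format vdisk (the VM name, path, target dev and bus here are hypothetical - adjust them to your setup; the Unraid VM Manager form generates this for you when you pick the existing vdisk):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/user/domains/MyVM/vdisk1.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```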

 

5 hours ago, m8ty said:

 I've been meaning to buy a cheap LCD screen to attach to it for diagnosing issues like these, any recommendations?

Anything that emits light.  🤣  Seriously, if you log in via the command line, you are looking at a straight text screen.  If you are using the GUI web browser interface, it will have to use one of the default resolutions, like SVGA.


Just a quick update on my VMs issue with the webUI... no luck. I even tried backing up and then wiping out my libvirt.img. I still get the same issue where the VMs page just looks like it loads forever. I've dug around in the forums but couldn't find any info about this particular problem.

 

Is there a process outlined to back up my VM configurations and start completely fresh? I thought wiping out the libvirt.img would basically do that.

 

I also noticed that occasionally the libvirt log shows errors referencing ISOs in directories I deleted well over a year ago. It would be nice to clean up all this old stuff.


Is it just me? Since the release of 6.7 I've been trying to download it from three servers, at about a minute per percentage point. I'm on a fast connection here, and everything else is really fast.

 

What's the matter with that ZIP?

 


Hi All.

 

I also upgraded without incident.  Looks superb...thank you to all the team!

 

I do want to pass along a GUI issue: I no longer see at a glance which Dockers are due for an update on the dashboard. I seem to recall they had an indicator for it. A right-click on one that is due for an update does show the option, so this is just a cosmetic fix.

 

TY.

 

Kev.

10 hours ago, rpj2 said:

 

My point is that a software update broke a previously working system. I rolled back and will try a future update. Throwing out working hardware is wasteful. A software update could break another card too. Blame the user? 🤷‍♂️

 

I'm not blaming the user. 

 

This really isn't that different from the ReiserFS (RFS) issues. It worked perfectly fine earlier, but then issues started popping up as the software (the Linux kernel) evolved. Proactive users took the initiative and migrated from RFS to XFS before it became a larger problem. Others waited until they had bigger issues, which always happen at inconvenient times, and were forced to migrate anyway.

 

Switching from hardware with questionable software support to newer hardware with better support, before larger issues develop, is a wise investment in something you're already invested in to keep your data safe. Running a server for data safety is never a one-and-done event; it's an ongoing one that requires continued investment and maintenance.

 

Would you rather take your car into the shop for replacement tires when you notice signs of wear, or during the first leg of a road-trip vacation, when you're forced to put on the anemic spare at the side of the road in the middle of nowhere?

 

It's up to you whether you take preventative measures or not.

54 minutes ago, TDD said:

Hi All.

 

I also upgraded without incident.  Looks superb...thank you to all the team!

 

I do want to pass along a GUI issue: I no longer see at a glance which Dockers are due for an update on the dashboard. I seem to recall they had an indicator for it. A right-click on one that is due for an update does show the option, so this is just a cosmetic fix.

 

TY.

 

Kev.

Thanks for your feedback. I can't speak on behalf of the developers, but this is certainly something that should be fixed.

9 hours ago, bonienl said:

Actually, the configuration for eth1 (br1) is wrong:


IPADDR[1]="192.168.86.125"
NETMASK[1]="255.255.255.0"
GATEWAY[1]="192.168.85.1"

The gateway address "192.168.85.1" falls outside the interface's network, 192.168.86.0/24 (note 86 vs 85).

This will cause routing issues.

Do as @eschultz proposes and it will resolve the network issue.

Hi, guys.

 

Thanks to you @bonienl, @eschultz and @limetech for helping out. It finally seems to be working now. I upgraded, and so far the initial couple of hours of testing are OK.

 

15 hours ago, Glassed Silver said:

Also, not a big fan of removing colors from icons. Do you know why we originally used colors when going from black and white monitors to color screens?

 

To make things discernible at a glance. Colors are processed MUCH faster than shapes.

 

I agree and I mentioned the fact during the rc phase but it seems our preference for function over style puts us in a minority.

On 5/11/2019 at 1:20 PM, bonienl said:

Yeah, this is correct. At the end, make sure the folder "appdata" no longer exists on disk1 (or on any other disks where it is present).

Repeat the same for the system folder.

 

Both these folders (shares) need to have the setting "Use cache disk = ONLY"

 

Okay... so I have my Dockers back up and running. The only thing now is to get the VMs back up. I've attached my diagnostics. Any help would be awesome. Thanks.

a.png

tower-diagnostics-20190512-1726.zip

23 minutes ago, John_M said:

 

I agree and I mentioned the fact during the rc phase but it seems our preference for function over style puts us in a minority.

 

I guess you guys weren't around when all the colored indicators were changed to shapes to accommodate color blind users 😮

 

An idea which may or may not gain traction is to keep icons for built-in functionality in the new style, and icons for add-ons colored.

23 minutes ago, limetech said:

 

I guess you guys weren't around when all the colored indicators were changed to shapes to accommodate color blind users 😮

 

An idea which may or may not gain traction is to keep icons for built-in functionality in the new style, and icons for add-ons colored.

 

The thing about the indicators is that they were originally coloured balls of identical shape. Changing the shape of the red ball to a red cross was a positive move in that it made the shape distinctive while keeping the colour. That added information and helped everyone by increasing the number of visual cues.

 

Changing application and configuration icons from coloured ones to monochrome ones has taken away information because they are now distinguishable only by their shape. Also, the new icons are less obvious in their meaning, being more simplistic and somewhat cryptic. I rather liked the skeuomorphic can of sardines VM Manager icon and it was very easy to spot on the Settings page but I'm sure I'll get used to the split-in-two monitor icon.

On 5/11/2019 at 5:52 PM, dAigo said:

Summary: amd_iommu conflict with Marvell 88SE9230 SATA Controller

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1810239

 [1b4b:9230] 02:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 11)

Yes, I knew that before I tried it. I had no issues until 6.7.0.

I don't mind, just feedback for those that might also have that exact model.

 

That's interesting. One of my servers has a SATA controller based on the similar Marvell 88SE9235 and it has never been any trouble. I had put that down to the fact that it uses the standard AHCI module rather than a driver specific to the chip, but presumably the 9230 also uses the standard AHCI driver. The only difference I can see between the two chips is that the 9230 has hardware RAID capability while my 9235 doesn't.

I think I would struggle to find a suitable replacement if I suddenly became unable to use my controller. The 923x has an x2 PCIe interface and provides four 6 Gb/s SATA ports; my particular card has two internal SATA ports and two eSATA ports, and I don't know of any alternative that provides the same functionality. That said, I don't have IOMMU enabled on that server, though I think I'll leave it on version 6.6.7 for a while longer until I've got time to experiment with it properly.
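Incidentally, which driver actually binds the controller is easy to confirm with `lspci -k`, which appends a "Kernel driver in use" line to each device. A sketch on a captured sample - the sample text here is hypothetical, written to match the controller quoted above; on the server itself you would run `lspci -ks 02:00.0`, substituting your own bus address:

```shell
# What `lspci -ks 02:00.0` would print for an AHCI-driven 88SE9230
# (sample text, not live output); the last line answers the driver question.
sample='02:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 11)
	Kernel driver in use: ahci'
printf '%s\n' "$sample" | grep 'Kernel driver in use'
# prints: 	Kernel driver in use: ahci
```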

Posted (edited)

I notice that with the "white" theme the numbers on the disk usage bars are now quite difficult to read, especially on a red bar.

 

[Screenshot: Screen Shot 2019-05-12 at 20_09_33.png]

 

EDIT: For comparison, here's how it looks under 6.6.7:

 

[Screenshot: Screen Shot 2019-05-12 at 21_28_14.png]

 

Edited by John_M
Added comparison

Posted (edited)

I have been using 6.7.0-rc7 since its release and only had a couple of issues, which I put down to (1) being a newbie to Unraid and (2) running rc7, or possibly my system; (3) in any case, the issues are minor.

 

The issues I had with rc7 were...

1. The Unraid GUI would hang for reasons unknown to me. My fix: log out and log in, which would often resolve it; worst case, I would reboot. By "hang" I mean the GUI seemed to respond when changing menus etc., but changing settings and selecting Apply would not do anything. Once again, not sure if this was because of rc7 or my system.

2. Plex not wanting to start, or not displaying the Plex GUI correctly. My fix: stop the Docker, log out, log in, and re-enable the Docker; then Plex would display correctly.

3. Plex does not seem to work correctly when the majority of the drives in the array have spun down; it is sluggish, if not stopped entirely, until the array spins up, after which it runs like a charm.

 

I have now, within 10 minutes of this post, updated to 6.7.0 stable, and so far (with a quick check) all my plugins and Dockers are working fine. I will be looking out for the minor issues mentioned above over the next week or so.

 

Still loving unRaid!!!

Edited by KillerNAS

Posted (edited)

The upgrade from 6.6.7 to 6.7.0 went okay. Initially, macOS Finder tags applied to files and folders on shares (connected via SMB) vanished - no major consequence. Changing the SMB setting "Enhanced macOS interoperability" to Yes restored the macOS Finder tags on all files and folders.
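For the curious: as far as I know that toggle corresponds to loading Samba's `vfs_fruit` module, which stores the macOS metadata (Finder tags, resource forks) that plain SMB drops. This is an assumption about what Unraid generates, not its exact config, but in a stock smb.conf the equivalent would look roughly like:

```
[global]
  vfs objects = catia fruit streams_xattr
  fruit:metadata = stream
```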

 

unRAID Plus | Array 40TB (4x8TB array, 1x8TB Parity) | 0x SSD Cache | Ryzen 7 1700X | ASRock X370 Taichi | EVGA GTX 1050ti | Thermaltake Suppressor F51 | 16GB RAM | Docker containers: binhex-plexpass, tautulli, HandBrake, transmission.

Edited by Maxrad
Added build components.


Another uneventful update... THANKS! 😁 Those are the best kinds. Dashboard is looking great.

29 minutes ago, KillerNAS said:

The issues I had with rc7 were...

If any of these problems happen with this version, grab the diagnostics file (Tools >>> Diagnostics) and post it in a new topic along with the symptoms at the time.

