eybox Posted May 12, 2019
40 minutes ago, eschultz said: Looks like in 6.7 the default route is being set incorrectly to br1 instead of br0. Are you using br1 for anything? It looks like eth1, the link for br1, is disconnected anyways...
Thank you for this lead! I do have 4 NICs, and I use only one of them - eth0, with br0 attached to everything - Dockers and VMs. I tested eth1 a long time ago to see if it worked, but never used br1. What is the resolution to this? Removing everything eth1 and br1 related from network.cfg, perhaps?
eschultz Posted May 12, 2019
3 minutes ago, eybox said: Thank you for this lead! I do have 4 NICs. And I use only one of them - eth0, with br0 attached to everything - Dockers and VMs. I tested eth1 a lot of time ago if it worked or not, but never used br1. What is the resolution to this? Removing everything eth1 and br1 related from network.cfg perhaps?
Yeah, get rid of the following from network.cfg:
IFNAME[1]="br1"
BRNAME[1]="br1"
BRNICS[1]="eth1"
BRSTP[1]="no"
BRFD[1]="0"
DESCRIPTION[1]="VMs connections"
PROTOCOL[1]="ipv4"
USE_DHCP[1]="no"
IPADDR[1]="192.168.86.125"
NETMASK[1]="255.255.255.0"
GATEWAY[1]="192.168.85.1"
METRIC[1]="2"
Also change SYSNICS="2" to SYSNICS="1" at the end. Then reboot the server.
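When hand-editing network.cfg like this, it is easy to delete the interface block but forget to update the SYSNICS count at the end. A rough sanity check is sketched below; this is a hypothetical helper, not anything Unraid ships - the regexes just mirror the key=value style shown above:

```python
import re

def check_sysnics(cfg_text):
    """Compare SYSNICS against the number of distinct IFNAME[n] blocks.

    Returns (ok, declared, found): ok is True when SYSNICS matches the
    number of indexed interface entries actually present in the file.
    """
    # Each interface block starts with an IFNAME[n]=... line
    indices = set(re.findall(r'^IFNAME\[(\d+)\]', cfg_text, re.MULTILINE))
    m = re.search(r'^SYSNICS="(\d+)"', cfg_text, re.MULTILINE)
    declared = int(m.group(1)) if m else 0
    return declared == len(indices), declared, len(indices)

# After deleting the br1 block but forgetting to change SYSNICS:
cfg = 'IFNAME[0]="br0"\nSYSNICS="2"\n'
ok, declared, found = check_sysnics(cfg)
print(ok)  # False - SYSNICS says 2 but only one IFNAME[] block remains
```

Once SYSNICS is corrected to match the remaining blocks, the check passes.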
bonienl Posted May 12, 2019
Actually the configuration for eth1 (br1) is wrong:
IPADDR[1]="192.168.86.125"
NETMASK[1]="255.255.255.0"
GATEWAY[1]="192.168.85.1"
The gateway address "192.168.85.1" falls outside the interface network "192.168.86.0/24" (note 86 vs 85). This will cause routing issues. Do as @eschultz proposes and it will resolve the network issue.
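The mismatch is easy to verify with Python's standard ipaddress module; a quick sketch using the addresses from the post above:

```python
import ipaddress

def gateway_on_subnet(ip, netmask, gateway):
    """Return True if the gateway lies inside the interface's subnet."""
    network = ipaddress.ip_interface(f"{ip}/{netmask}").network
    return ipaddress.ip_address(gateway) in network

# The broken config from the post: an 86.x address with an 85.x gateway
print(gateway_on_subnet("192.168.86.125", "255.255.255.0", "192.168.85.1"))  # False
# A gateway on the same /24 would be fine
print(gateway_on_subnet("192.168.86.125", "255.255.255.0", "192.168.86.1"))  # True
```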
whitedwarf Posted May 12, 2019 Updated with no issue. I really like the new look and feel. Great job!
limetech Posted May 12, 2019
2 hours ago, rpj2 said: A software update broke a previously working system is my point. I rolled back and will try a future update. Throwing out working hardware is wasteful. A software update could break another card also. Blame the user?
Not blaming the user, simply offering an alternative to get the server back working again. Sometimes the easiest solution is a h/w one. Sometimes the only solution is a h/w one. But to your point: you're right, something broke in the kernel. When this was first reported we spent quite a bit of time, short of bisecting the kernel, trying to find out what might have caused this. We are willing to try any patch you, or someone else, might run across. For example, take a look at this post in the -rc8 release topic regarding the 'hpsa' storage driver. I delayed this 6.7.0 release a day to put that patch in, and a user has reported that it works and fixes the issue. Note that we never see these kinds of issues on the h/w we have - we would not publish a release where we see problems like this. When I see an issue like this, the first thing I do is google the error message and start searching for similar reports. But I can't spend more than a few hours doing this on any one issue. If a solution is not apparent, then I let it sit for a while, because eventually it will happen in one of the bigger distros such as Ubuntu or Fedora, and those guys have the resources to investigate the problem further. This is kinda how it works with open source, unfortunately.
bonienl Posted May 12, 2019
3 hours ago, MrSage said: So I've moved all of the appdata and system files to the cache via MC and im still not getting my apps nor vms running.
Looks like the docker and VM images are corrupted. You need to stop the docker and VM services, then delete and recreate the respective images. This implies re-installing all docker containers; you can use CA to do this. Containers will come up with all their existing settings and data. To re-install VMs you need to create a new XML and point it to the existing vdisk image. If any manual modifications were made to the XML file, they need to be redone.
Frank1940 Posted May 12, 2019
5 hours ago, m8ty said: I've been meaning to buy a cheap LCD screen to attach to it for diagnosing issues like these, any recommendations?
Anything that emits light. 🤣 Seriously, if you log in via the command line, you are looking at a plain text screen. If you are using the GUI web browser interface, it will have to use one of the default resolutions, like SVGA.
johnsanc Posted May 12, 2019 Just a quick update on my VMs issue with the webui... no luck. I even tried backing up and then wiping out my libvirt.img. I still got the same issue where the VMs page just looks like it loads forever. I've dug around in the forums but couldn't find any info about this particular problem. Is there a process outlined to back up my VM configurations and start completely fresh? I thought wiping out the libvirt.img would basically do that. I also noticed that occasionally the libvirt log would show errors from trying to find references to ISOs in directories I deleted well over a year ago. It would be nice to clean up all this old stuff.
hawihoney Posted May 12, 2019 Is it just me? Since the release of 6.7 I've been trying to download it from 3 servers. At least 1 min per percentage point. I'm on a fast line here; everything else is really fast. What's the matter with that ZIP?
TDD Posted May 12, 2019 Hi all. I also upgraded without incident. Looks superb... thank you to all the team! I do want to pass along a GUI issue: I no longer see at a glance which dockers are due for an update in the dashboard. I seem to recall they had an indicator stating as much. A right click on one due for an update does show the option, so this is just a cosmetic fix. TY. Kev.
BRiT Posted May 12, 2019
10 hours ago, rpj2 said: A software update broke a previously working system is my point. I rolled back and will try a future update. Throwing out working hardware is wasteful. A software update could break another card also. Blame the user? 🤷‍♂️
I'm not blaming the user. This really isn't that different from the ReiserFS (RFS) issues. It worked perfectly fine earlier, but then issues started popping up as the software (the Linux kernel) evolved. The preventative users took the initiative and migrated away from RFS to XFS before it became a larger issue. Others waited until they had larger issues, which always happens at inconvenient times, and were forced to migrate anyway. Switching from hardware with questionable software dynamics to newer hardware with better software dynamics, before larger issues appear, is a wise investment for something you're already invested in to keep your data safe. Running a server for data safety is never a one-and-done event; it's an ongoing effort that requires maintenance and investment. Would you rather take your car into the shop for replacement tires when you notice signs of wear, or during the first leg of a road-trip vacation when you're forced to put on the anemic spare tire at the side of the road in the middle of nowhere? It's up to you whether you take preventative measures or not.
Can0n Posted May 12, 2019
On 5/11/2019 at 12:18 AM, Helmonder said: Did you see my post? I got mine working: https://forums.unraid.net/topic/79988-mellanox-interface-not-showing/?tab=comments#comment-742772
Thanks, I'll take a look.
PSYCHOPATHiO Posted May 12, 2019 Thanks for the continuous effort, UNRAID team & devs. Both servers upgraded & running smoothly.
Mex Posted May 12, 2019
54 minutes ago, TDD said: I do want to pass along a GUI issue. I don't obviously see now what dockers are due for an update in the dashboard? I seem to recall they had an indicator stating as much.
Thanks for your feedback. I can't speak on behalf of the developers, but this is certainly something that should be fixed.
eybox Posted May 12, 2019
9 hours ago, bonienl said: The gateway address "192.168.85.1" falls outside the interface network (note 86 vs 85). This will cause routing issues. Do as @eschultz proposes and it will resolve the network issue.
Hi, guys. Thanks to you @bonienl, @eschultz and @limetech for helping out. It finally seems to be working now. Upgraded, and so far the initial couple of hours of testing are OK.
John_M Posted May 12, 2019
15 hours ago, Glassed Silver said: Also, not a big fan of removing colors from icons. Do you know why we originally used colors when going from black and white monitors to color screens? To make things discernible at a glance. Colors are processed MUCH faster than shapes.
I agree, and I mentioned the fact during the rc phase, but it seems our preference for function over style puts us in a minority.
MrSage Posted May 12, 2019
On 5/11/2019 at 1:20 PM, bonienl said: Yeah this is correct. At the end make sure the folder "appdata" does not exist anymore on disk1 (and the other disks where it is present). Repeat the same for the system folder. Both these folders (shares) need to have the setting "Use cache disk = ONLY"
Okay... so I have my dockers back up and running. The only thing now is to get the VMs back up. I've attached my diagnostics. Any help would be awesome. Thanks. tower-diagnostics-20190512-1726.zip
limetech Posted May 12, 2019
23 minutes ago, John_M said: I agree and I mentioned the fact during the rc phase but it seems our preference for function over style puts us in a minority.
I guess you guys weren't around when all the colored indicators were changed to shapes to accommodate color blind users 😮 An idea which may or may not gain traction is to keep icons for built-in functionality in the new style, and icons for add-ons colored.
John_M Posted May 12, 2019
23 minutes ago, limetech said: I guess you guys weren't around when all the colored indicators were changed to shapes to accommodate color blind users 😮 An idea which may or may not gain traction is to keep icons for built-in functionality in the new style, and icons for add-ons colored.
The thing about the indicators is that they were originally coloured balls of identical shape. Changing the shape of the red ball to a red cross was a positive move in that it made the shape distinctive while keeping the colour. That added information and helped everyone by increasing the number of visual cues. Changing application and configuration icons from coloured ones to monochrome ones has taken away information because they are now distinguishable only by their shape. Also, the new icons are less obvious in their meaning, being more simplistic and somewhat cryptic. I rather liked the skeuomorphic can of sardines VM Manager icon and it was very easy to spot on the Settings page but I'm sure I'll get used to the split-in-two monitor icon.
John_M Posted May 12, 2019
On 5/11/2019 at 5:52 PM, dAigo said: summary: amd_iommu conflict with Marvell 88SE9230 SATA Controller (https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1810239) [1b4b:9230] 02:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 11) Yes, I knew that before I tried it. I had no issues until 6.7.0. I don't mind, just feedback for those that might also have that exact model.
That's interesting. One of my servers has a SATA controller based on the similar Marvell 88SE9235 and it has never been any trouble. I had put that down to the fact that it uses the standard AHCI module rather than a driver specific to the chip, but presumably the 9230 also uses the standard AHCI driver. The only difference I can see between the two chips is that the 9230 has hardware RAID capability while my 9235 doesn't. I think I would struggle to find a suitable replacement if I suddenly became unable to use my controller. The 923x has an x2 PCIe interface and provides four 6 Gb/s SATA ports. My particular card has two internal SATA ports and two eSATA ports, and I don't know of any alternative that provides the same functionality. That said, I don't have IOMMU enabled on that server, though I think I'll leave it on version 6.6.7 for a while longer until I've got time to experiment with it properly.
John_M Posted May 12, 2019 I notice that with the "white" theme the numbers on the disk usage bars are now quite difficult to read, especially on a red bar. EDIT: For comparison, here's how it looks under 6.6.7:
KillerNAS Posted May 12, 2019
I have been using 6.7.0-rc7 since release and I only had a couple of issues, which I put down to 1. being new to unRAID, 2. rc7 or my system, and 3. the issues being minor. The issues I had with rc7 were:
1. The unRAID GUI would hang for reasons unknown to me. My fix: log out and log in, which would often resolve it; worst case I would reboot. What I mean by "hang" is that the GUI seemed to respond when changing menus, but changing settings and selecting Apply would not do anything. Once again, not sure if this was because of rc7 or my system.
2. Plex not wanting to start, or not displaying the Plex GUI correctly. My fix: stop the docker, log out, log in, and re-enable the docker; then Plex would display correctly.
3. Plex does not seem to work correctly if the majority of the drive array has spun down; it is sluggish, if not stopped. Once the array is spun up, Plex runs like a charm.
I have now, within 10 min of this post, updated to 6.7.0 stable and so far (with a quick check) all my plugins and dockers are working fine, but I will be looking out for the minor issues mentioned above over the next week or so. Still loving unRAID!!!
Maxrad Posted May 12, 2019
Upgrade from 6.6.7 to 6.7.0 okay. Initially the macOS Finder tags applied to files and folders on shares (connected via SMB) vanished - no major consequence. Changing the SMB setting "Enhanced macOS interoperability" to Yes restored the macOS Finder tags on all files and folders.
unRAID Plus | Array 40TB (4x8TB array, 1x8TB Parity) | 0x SSD Cache | Ryzen 7 1700X | ASRock X370 Taichi | EVGA GTX 1050ti | Thermaltake Suppressor F51 | 16GB RAM | Docker containers: binhex-plexpass, tautulli, HandBrake, transmission.
archedraft Posted May 12, 2019 Another uneventful update... THANKS! 😁 Those are the best kinds. Dashboard is looking great.
Frank1940 Posted May 12, 2019
29 minutes ago, KillerNAS said: The issues I had with rc7 were...
If any of these problems happen with this version, try to get the diagnostics file (Tools >>> Diagnostics) and post it in a new post along with the symptoms at that time.