Rudder2

Everything posted by Rudder2

  1. I'm confused about what this has to do with anything. Are you saying that once the microcode is loaded, the kernel will never stop loading it even after a BIOS update, so I need to reinstall the OS to fix it? Or are you saying that there is a kernel error and I need to wait for the next kernel update to fix this problem?
  2. This is the log after the BIOS update on 6.6.5. It still loads the microcode. Look at the Motherboard.txt and the first entry in the log. 6.6.5 is loading the microcode in error, then... Why would it do that? rudder2-server-diagnostics-20181114-2357.zip Sorry if posting the before and after diagnostics in the same post confused the issue. I uploaded both thinking a comparison might help...
  3. The problem looks like 6.6.5 loads the microcode even though my BIOS is up to date, per the second log file (2357). Someone already said this above, and I am starting to think this is the smoking gun after reading your comment. Why is my system loading the microcode even though I have the latest BIOS, which has the microcode in it? (A sketch for checking the loaded microcode revision follows this list.)
  4. Which log? I ran 6.5.0 for a long time on BIOS 2.70 without knowing there was an update. The 6.6.5 timestamp 1923 log file was from before the BIOS update and the 2357 log was from after the BIOS update. I uploaded a log from before and after the BIOS update, post-crash. I just checked the 1923 log and it says I had the old BIOS, and the 2357 log has the new BIOS.
  5. Any idea what the difference between 6.5.0 and 6.6.x is that could cause the hardware error? How about giving us an option to easily store logs on the cache instead of the RAM disk? Anything I should try? I really want to upgrade.
  6. The latest BIOS is installed. I ran a memory check back when I updated to 6.6.0 after the first lockup and it passed fine. The weird thing is I'm 100% stable on 6.5.0. I kept saying 6.5.3 but it's actually 6.5.0 because I missed the .3 update. I hate being locked to this version. I was going to wait for the 6.6.6 update and try again. That machine check message is what made me look at the logs and see the message about the microcode, and I updated my BIOS after searching the logs and reading that this was recommended if 6.6.x locked up. It only shows up when running 6.6.x, never on 6.5.0.
  7. Hardware from the Info button:
     M/B: ASRock - Z97 Extreme6
     CPU: Intel® Core™ i7-4790K CPU @ 4.00GHz
     HVM: Enabled
     IOMMU: Enabled
     Cache: 256 kB, 1024 kB, 8192 kB
     Memory: 32 GB (max. installable capacity 32 GB)
     Network: eth0: 1000 Mb/s, full duplex, mtu 1500 (Intel onboard NIC); eth1: not connected (Realtek onboard NIC, usually used for NIC bonding, disconnected for diagnostic reasons)
     Kernel: Linux 4.14.26-unRAID x86_64
     OpenSSL: 1.0.2n
  8. I've been wanting my logs to write to my cache drive. I'm not sure about having continuous writes to the USB thumb drive. Is there a way to change unRAID to write logs to the cache drive and have it rotate them, say, every week? (A sketch of one way to do this follows this list.)
  9. Ahhh, OK. Sorry, I sometimes forget that unRAID isn't used only by computer geniuses. I will try to keep that in mind. I also sometimes think you're so much smaller than Micro$haft that y'all will know me. I know it's not practical, but your response time to a problem and your repair time make me feel that way.
  10. Total system lockup: unresponsive from console, SSH, and network; parity check stops; no disk activity at all; no VMs working; nothing functioning whatsoever. I suffer this lockup whenever I install 6.6.0+. I thought it was because of the Realtek NIC problem, so I downgraded to 6.5.3, then updated to 6.6.3 and had worse network issues, so I downgraded to 6.5.3 again. Then I upgraded to 6.6.5; the Realtek NIC error has been rectified, but the lockup problem remains. The system will run for about 5 or 6 days and lock up, then run 6 hours and lock up, then run 3 hours and lock up, then 30 minutes and lock up. I can't get system logs because they are not stored on flash. I already downgraded to the stable 6.5.3 image. My system logs from the first boots after the 4 lockups in one day are attached; the first was taken after the first lockup and the last after the 4th. I don't know if they will help because they were taken after a reset-button press to get the system back up. The BIOS was updated to the latest for my motherboard after the second lockup because I saw a hardware error about microcode in log file 1923. That same error is in log file 2357. rudder2-server-diagnostics-20181114-1923.zip rudder2-server-diagnostics-20181114-2357.zip
  11. Yes, I have a monitor hooked up and a keyboard for fixing things from the console. The screen was locked up at the GUI login with no messages on the screen.
  12. I read the forum all the time. I get headaches from reading it, I do it so much... It is impossible to read the entire forum. I have read about the lockups on 6.6.0+, and nothing they suggest fixes it. Why do you think I updated my BIOS? I never update a BIOS unless I need to. The problem is that something changed in 6.6.0+ that is affecting hardware used by many people.
  13. There is only one definition of "lockup" or "locked up" in my mind: total system lockup, unresponsive from console, SSH, and network; parity check stops; no disk activity at all; no VMs working; absolutely nothing works during a lockup. How could this definition change over the years? It's baffling. I'm opening a bug report like you requested...
  14. My server just locked up for the 2nd time. This is how all the 6.6.x lockups go... It works for about 6 days, then locks up, and then it locks up in 6 hours, then 3 hours, then less. I guess I have to downgrade to 6.5.3 again. What a pain. I upgraded my BIOS today, and it included the new microcode, but this didn't fix the locking-up problem. Not sure what happened from 6.5.3 to 6.6.0 through 6.6.5, but my server cannot run on it. I cannot afford for my server to keep locking up, so back to 6.5.3 I go again. This is getting annoying... My server is critical and I can't afford for it to be unreliable. This is only happening with 6.6.0+. rudder2-server-diagnostics-20181114-2357.zip
  15. I just suffered a lockup in 6.6.5. The reason I was staying on 6.5.3 was because of these lockups and the Realtek NIC problems. The Realtek NIC problem seems to be fixed, but not the lockup issue. Because of the way unRAID does its logs, I have none to offer you. How do you want me to proceed? Here is my current diagnostics file. I'm getting hardware errors on my CPU, something about the microcode. Could this have caused it? rudder2-server-diagnostics-20181114-1923.zip
  16. I will do this. Thank you for the information!
  17. What the heck, I will give it a shot now. I don't like rebooting my server outside of the 0400 auto reboot every Wednesday because it takes too long for Plex to become stable. I started that automated reboot a couple of years ago because after my server was up for 3 weeks to a month it became sluggish to unresponsive. I downgraded to 6.5.3 and rebooted, to retain my easy downgrade path to the stable release I know works, and then upgraded to 6.6.5. The first thing I noticed is that Plex came right up fast... It didn't take its normal 20 to 40 minutes to be usable inside my network and 40 to 60 minutes outside my network... Both came up immediately...
  18. That was faster than I expected, seeing that there have been discussions in this post this AM. Why doesn't unRAID notify me that updates are available anymore? It hasn't done this since the software upgrade was taken out of the Plugins tab.
  19. I can confirm, parity check and writing to the array after updating to 6.6.3 are really slow. Here is my diagnostics file. I tried to update to 6.6.0, 6.6.1, and 6.6.2 and had to revert back to 6.5.3 every time because of the Realtek NIC problem. I like some of the new features but think I might have to revert back again. My array doesn't spin down anymore after the update either. Here are my diagnostics to assist with problem tracking. I will most likely be downgrading during the automated system reboot Wednesday, so if you need anything else please ask before then. This is the first time that unRAID updates have been downgrades since I built my server in 2014. I really like you guys' work and understand you can't test all equipment because your dev team is small... It is just rough when hardware that has worked for 4 years without a hitch stops working after an update. Thank you for your time.
     Model: N/A
     M/B: ASRock - Z97 Extreme6
     CPU: Intel® Core™ i7-4790K CPU @ 4.00GHz
     HVM: Enabled
     IOMMU: Enabled
     Cache: 256 kB, 1024 kB, 8192 kB
     Memory: 32 GB (max. installable capacity 32 GB)
     Network: eth0: 1000 Mb/s, full duplex, mtu 1500 - Intel onboard NIC; eth1: not connected - Realtek onboard NIC, still caused problems with updates even though not in use
     Kernel: Linux 4.18.15-unRAID x86_64
     OpenSSL: 1.1.0i
     Uptime: 2 days, 11:44:01
     rudder2-server-diagnostics-20181109-0802.zip
  20. I like it! This looks like it will work beautifully! I had to manually create all the .Recycle.Bin folders in all my shares to begin with, but this was no biggie! I discovered 600 GB in my darr apps' Recycling Bin share from the years since I upgraded to using all darr apps. Thank you for your help! Rudder2
  21. I use both the Recycling Bin Plugin and a Recycling Bin share that Sonarr, Lidarr, and Radarr move files into instead of deleting them. I would like this share to delete the files every 14 or 30 days. (A purge sketch follows this list.) It might be redundant since I have the Recycling Bin Plugin installed. I will have to look into how that works... Does it capture all files deleted from unRAID no matter what deleted them? If so, then I probably don't need the Recycling Bin share and can have my darr apps delete files instead of moving them. I will look at those links also. Thank you, Rudder2
  22. You're right. I think I know what killed it. I had a problem where I lost 2 data disks (make sure that the new controller card you buy is compatible with unRAID...). When I recovered the disks I had to restore my apps folder from a backup to recover my databases, because they had scanned and seen the missing data, and I didn't feel like rebuilding them myself since I would have had to correct a lot of incorrect matches. Not thinking about it, I recovered all the app data instead of just what I needed. I wonder if this is what broke LetsEncrypt. I nuked the apps folder and Docker image, let the linuxserver/letsencrypt repository recreate the app data, then copied the CloudFlare.ini and the site-confs back in, and it's back up. I should have done this to begin with... This is the beauty of the way the Docker images from LinuxServer.io are written: easy recovery. One good thing from all this is I discovered I was still on the preview channel when I should have been back on the main update channel. This happens often (usually with the beta channel) and I never figure it out until the Docker breaks. Thank you for all your help! You're AWESOME!
  23. I changed the channel back to the main channel and now I get this error:
     -------------------------------------
     [linuxserver.io ASCII banner]
     Brought to you by linuxserver.io
     We gratefully accept donations at:
     https://www.linuxserver.io/donations/
     -------------------------------------
     GID/UID
     -------------------------------------
     User uid: 99
     User gid: 100
     -------------------------------------
     [cont-init.d] 10-adduser: exited 0.
     [cont-init.d] 20-config: executing...
     [cont-init.d] 20-config: exited 0.
     [cont-init.d] 30-keygen: executing...
     using keys found in /config/keys
     [cont-init.d] 30-keygen: exited 0.
     [cont-init.d] 50-config: executing...
     4096 bit DH parameters present
     SUBDOMAINS entered, processing
     Sub-domains processed are: -d www.MYDOMAIN.com -d nextcloud.MYDOMAIN.com -d vpn.MYDOMAIN.com -d onlyoffice.MYDOMAIN.com -d collabora.MYDOMAIN.com
     E-mail address entered: [email protected]
     Different sub/domains entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
     Saving debug log to /var/log/letsencrypt/letsencrypt.log
     Unable to load: [('PEM routines', 'CRYPTO_internal', 'no start line')],[('asn1 encoding routines', 'CRYPTO_internal', 'header too long')]
     Generating new certificate
     Saving debug log to /var/log/letsencrypt/letsencrypt.log
     Plugins selected: Authenticator standalone, Installer None
     Obtaining a new certificate
     Performing the following challenges:
     Client with the currently selected authenticator does not support any combination of challenges that will satisfy the CA.
     Client with the currently selected authenticator does not support any combination of challenges that will satisfy the CA.
     IMPORTANT NOTES:
      - Your account credentials have been saved in your Certbot configuration directory at /etc/letsencrypt. You should make a secure backup of this folder now. This configuration directory will also contain certificates and private keys obtained by Certbot so making regular backups of this folder is ideal.
     ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container
  24. Here it is. I purposely changed my domain to MYDOMAIN.com and my account number to ACCT#, trying to prevent private data from being posted in a forum. AWESOME! I just changed it back to the linuxserver/letsencrypt channel. MYDOMAIN.com.conf
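
On the microcode question in posts 1-3: a quick way to see what the running kernel actually did is to check the reported microcode revision directly. This is a minimal sketch using standard Linux tools from the unRAID console or SSH; the exact dmesg wording varies by kernel version.

    # Microcode revision the CPU is currently running with
    grep -m1 microcode /proc/cpuinfo

    # Whether the kernel's early loader applied an update at boot
    dmesg | grep -i microcode

If the revision the kernel reports matches what the new BIOS ships, the "microcode updated early" line is informational rather than an error; the kernel normally only applies its bundled microcode when it is newer than what the BIOS already loaded.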
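
On writing logs to the cache drive (posts 5 and 8): the releases discussed here keep the syslog in RAM, so it is lost on a hard lockup. Until there is a built-in option, a cron job can copy the live syslog to a cache share and prune old copies. A minimal sketch, assuming a hypothetical /mnt/cache/syslog share and a hypothetical script path called from cron:

    #!/bin/bash
    # Copy the in-RAM syslog to a dated file on the cache drive,
    # then drop copies older than 7 days.
    DEST=/mnt/cache/syslog            # hypothetical cache share for log copies
    mkdir -p "$DEST"
    cp /var/log/syslog "$DEST/syslog-$(date +%Y%m%d-%H%M)"
    find "$DEST" -name 'syslog-*' -mtime +7 -delete

A crontab entry such as 0 * * * * /boot/scripts/copy-syslog.sh (hypothetical path) would run it hourly; the copy only captures everything up to the last run, but even an hourly snapshot usually shows what led up to a lockup.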
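
On purging the Recycling Bin share that the darr apps move files into (post 21): a scheduled find can delete anything older than 14 (or 30) days and clean up the directories it leaves empty. A minimal sketch, assuming a hypothetical share path /mnt/user/Recycle_Bin; substitute the real share name and run it daily from cron:

    #!/bin/bash
    # Remove files parked in the Recycle Bin share more than 14 days ago,
    # then remove any directories left empty.
    SHARE=/mnt/user/Recycle_Bin       # hypothetical path - use the real share name
    find "$SHARE" -type f -mtime +14 -delete
    find "$SHARE" -mindepth 1 -type d -empty -delete

As for the plugin question in that post: the Recycle Bin plugin works through Samba, so it typically only captures deletions made over SMB shares, not files the darr apps move or delete directly, which is why a separate purge of this share can still be useful.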