Hoopster

Members
  • Posts: 3491
  • Days Won: 19
Everything posted by Hoopster

  1. This happens to me weekly but only with the appdata backup which I have configured to run weekly. All docker containers are stopped for this backup. I have never seen a docker container stopped for a parity check. Do you perhaps have appdata backup configured to run at the same time as a parity check?
  2. Since the author of this docker container used to be quite active in the forums but has not shown up in over 13 months, I would say updates are unlikely.
  3. By default, Docker does not allow communication between containers on a custom network (such as br0) and the host; containers using the standard bridge or host network types do not have that restriction. Have you tried enabling Host access to custom networks in Docker settings?
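     A quick way to see the behavior from the unRAID console (the 192.168.1.50 address below is just a placeholder for one of your containers with its own IP on br0):
       docker network ls
       ping -c 3 192.168.1.50   # typically fails from the host until "Host access to custom networks" is enabled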
  4. Version 6.10 allows the docker custom network type to be set to ipvlan instead of macvlan. This setting was introduced to try to work around the macvlan call trace issue. For some it has helped. Since I implemented a docker VLAN on my router and switch, the problem has disappeared for me.
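     For anyone curious, this is roughly the difference in plain Docker terms; unRAID creates the custom network for you when you change the setting, and the subnet, gateway and network name here are just placeholders for my LAN:
       # macvlan driver (the old behavior):
       docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=br0 homenet
       # ipvlan driver (the new 6.10 option):
       docker network create -d ipvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=br0 homenet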
  5. Your first call trace is the macvlan broadcast call trace that occurs when docker containers are assigned custom IP addresses on br0. This is well documented in this thread. That particular call trace will not always cause an immediate lockup, but eventually you will get a server lockup from these call traces.
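     If you want to confirm you are seeing the same thing, a couple of greps against the syslog should turn it up (the exact trace text varies a bit between kernel versions):
       grep -i "macvlan_broadcast" /var/log/syslog
       grep -ic "call trace" /var/log/syslog    # rough count of traces logged since boot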
  6. The *.key file is stored in the config folder on the flash drive. If there is no *.key file you should be able to get it from a backup of the config folder or through the original registration email.
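     If the server still boots, you can check for it from the console; the flash drive is mounted at /boot, so something like:
       ls -l /boot/config/*.key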
  7. The usual method is to enable one at a time and reboot until you find the offender.
  8. Is this a flash backup from the appdata backup/restore plugin or one you manually made by clicking on Flash Backup from the Flash Device Setting page in the GUI? If you were using the MyServers plugin from Limetech, it has a flash backup to the cloud option which you may have had enabled. The important things are stored in the config folder if you want to return to a prior configuration. If you want to return to the same unRAID version you can recover everything from the backup. If it is a backup from the plugin, go to the location where you have it configured to store the flash backup and copy over what you want to the USB flash drive. If you made the backup manually through the GUI, go to the download location and unzip that backup somewhere and then copy what you want to the USB flash drive. I can't tell you exactly how recovery from MyServers works as I have yet to use it.
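     As a rough example of the manual GUI-backup case (the zip name and paths below are just placeholders for wherever you saved things):
       unzip flash-backup.zip -d /tmp/flash-restore
       # copy just the configuration back to the freshly prepared flash drive
       cp -r /tmp/flash-restore/config/* /path/to/flashdrive/config/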
  9. I am running 6.10.0 RC1 on my main system and have not seen any "anomalies" with this version. My NVMe temps are normal, in the mid to high 30s (at idle) and mid to high 50s (under heavy activity load). I have been trying to diagnose a random lockup problem and have changed PSUs, run RAM tests, redone cabling, etc., and none of that has caused any problems. Perhaps installing 6.10.0 RC1 or 6.9.2 could help you eliminate the unRAID version as a potential cause of your issues.
  10. This is a problem with 6.10.0 RC2. Other users have reported that with this release, NVMe temps show 84C (the same temperature for everyone). Rolling back to a prior version makes that go away. It is not clear whether that is an erroneously reported temperature or if something in RC2 is actually causing the NVMe drives to get that hot.
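      One way to see what the drive itself is reporting, independent of the GUI (adjust the device path for your drive):
        smartctl -a /dev/nvme0 | grep -i temperature
        # or, if nvme-cli is available:
        nvme smart-log /dev/nvme0 | grep -i temperature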
  11. Depends on who you ask. I use server motherboards and CPUs that support ECC RAM. Since it is supported and my servers run 24x7, I choose to use it. UnRAID does not require the use of ECC RAM for anything the server does. I would say most use non-ECC RAM and probably have very, very few problems that ECC would have corrected.
  12. unRAID is very hardware agnostic. It works on almost anything and certainly that MB/CPU combination is supported. Hardware choice for many comes down to what you want the server to be able to do; NAS only? NAS + Docker + VMs + several plugins? Something in between? Does it need to be able to do hardware transcoding for Plex, Emby, Jellyfin, etc? Do you need ECC RAM support (not required by unRAID)? Decide what you want the system to do and then choose hardware that supports that.
  13. This also works in 2.34 and 2.32. I first tried it in 2.34 and the setting remained in place when I rolled back to 2.32 yesterday.
  14. Lockups started with 6.9.2. I switched to 6.10.0 rc1 to see if the ipvlan setting (rather than macvlan) for docker made any difference. It did not and lockups continue from time to time.
  15. I have both turbo boost disabled and docker network set to ipvlan. Makes no difference in my case. Lockups still occur. I still get the high MB temps within hours after every reboot. It starts out normal (26C to 32C usually) but several hours later it is reporting in the 80s again.
  16. Nothing meaningful in the event log or the syslog. Event log just reports OS Stop/Shutdown or Microcontroller/Coprocessor transition to power off and there is nothing in the syslog before any of the shutdowns that looks suspicious (or even informative).
  17. I am also running BIOS L2.34 and have been getting system lockups after between 2 and 15 days of runtime. I have tested the RAM and swapped out the PSU. RAM is good and a PSU swap did not solve the problem. I do not know if it has anything to do with BIOS 2.34 but the lockups started in July around the time I think I upgraded to this BIOS. I have rolled back to 2.32 just to see if that makes any difference. I am not convinced the BIOS version has anything to do with it as others have 2.34 installed and are not seeing system lockups. Just something else to try as the lockups are annoying and a lack of clarity regarding the cause is even more annoying.
  18. Try changing the boot method. When I saw this, it was because the motherboard wanted to boot UEFI after a firmware change. Others cannot boot in UEFI and must legacy boot the BIOS. There is a folder on the flash drive called EFI-. Rename it, removing the "-" character, to just EFI; this will cause the system to try to boot UEFI. If the "-" character is missing, add it back (EFI-); this will force a legacy BIOS boot. You may also need to make a change in the UEFI/BIOS to indicate whether the flash drive should boot one way or the other. In my system, I set the first boot device to UEFI: [name of flash drive] to boot UEFI and to just [name of flash drive] to boot into the legacy BIOS. It is probably a good idea to make sure the latest BIOS for the board has been installed as well.
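      If it is easier, the rename can also be done from the unRAID console, since the flash drive is mounted at /boot:
        mv /boot/EFI- /boot/EFI    # try UEFI boot
        mv /boot/EFI /boot/EFI-    # force legacy (BIOS/CSM) boot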
  19. Often, power failures can mess up configuration files or they can be corrupted by other issues. Try deleting network.cfg and network-rules.cfg from the config folder on the flash drive. They will get recreated with defaults on the next boot. The default is DHCP but you can edit the network.cfg file with a text editor and add a static IP address if you wish. Of course, you can also make network configuration changes in the GUI if you get successfully booted. Here is the relevant section from my network.cfg with a static IP address assigned:
        USE_DHCP[0]="no"
        IPADDR[0]="192.168.1.10"
        NETMASK[0]="255.255.255.0"
        GATEWAY[0]="192.168.1.1"
      If you can get booted into the GUI, it might help to attach your server diagnostics in a new post (Tools --> diagnostics from the GUI) if you continue to have specific issues.
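      For reference, deleting the two files from the console looks like this (the flash drive is mounted at /boot on a running server):
        rm /boot/config/network.cfg /boot/config/network-rules.cfg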
  20. I used to do this as well. However, Microsoft is attempting to get rid of SMB v1 support for a good reason and I got tired of fighting Microsoft on SMB v1. When unRAID introduced WSD support, I disabled SMB v1 on all my client computers as well as in unRAID and have been happily working with SMB v2/3 for quite a while now. I have found no scenario in which SMB v1 is necessary for client access to unRAID.
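      If you want to double-check that shares still negotiate fine without SMB v1, a quick test from any Linux client with smbclient installed (server name and user below are placeholders):
        smbclient -L //tower -U youruser -m SMB2
        smbclient -L //tower -U youruser -m SMB3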
  21. Have you checked in with the Unifi community forums? I have never seen this error. Of course, I am still on the controller version that actually works (5.14.23). I have also been slow to update firmware but recently updated to the latest available for my devices (after watching the forums for weeks/months to see what others experienced). My USG, switches, UAP-AC-LRs, AC-IW and U6-Lite have all been recently updated and seem to be working great. Fortunately, I have no devices that require a 6.x.x controller version. Even my U6-lite only needs 5.14 so I am good.
  22. You may find this spaceinvader video useful when configuring cache pools. You currently have it configured as RAID 1 (redundancy) in which the smaller drive limits the size of the pool.
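      As a rough example of the arithmetic: a 1TB and a 500GB device in a RAID 1 btrfs pool give you roughly 500GB of usable space, since every block is mirrored and the smaller device fills first. You can confirm what your pool is doing from the console (assuming the pool is mounted at /mnt/cache):
        btrfs filesystem usage /mnt/cache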
  23. Bad RAM does not appear to be the cause of the lockups. I downloaded the free MemTest86 from the PassMark site as it supports ECC RAM. I ran the default test, which runs four passes of 13 tests. This took 10+ hours and resulted in 0 errors. There is not really a good way to test the motherboard other than replacing it. I had to use extender cables on the CPU power and motherboard power connections, as the cables included with the SFX PSU were too short. Perhaps one of them is bad.
  24. Well, so much for the idea that it was a bad PSU. The server just locked up again after a little over 1 day on the "old" PSU that previously ran in the server for 23 days without issue. That is the quickest it has ever locked up after a reboot since the freezing started. Motherboard bad? RAM? I'll run a memtest on the ECC RAM. Yes, I know the included memtest does not support ECC.
  25. The server ran for 23 days without problems on the replacement PSU. I put the original PSU back in and 4 days later the server locked up again. Pretty sure a failing PSU was the issue. I have had it only 18 months and it has a 7-year warranty. I made a warranty claim today. The "old" PSU is back in the server for now.