neumann13

Members · 10 posts · Reputation: 2 · Community Answers: 1
  1. I have an ongoing issue where my Docker containers are unable to access the Unraid host network. I have enabled the "Host access to custom networks" setting, but it seems to work only intermittently and rarely survives a reboot. Is anyone aware of a permanent fix or workaround? (A quick verification sketch for this post appears after the list.)
  2. I would like to see a function where we can mark a drive for removal and have Unraid evacuate all the data off it in the background. That way we don't have to risk the long parity rebuild and the uncertainty of the New Config function. Alternatively, I'd be happy with some other method of achieving this goal. Basically, I'm just looking for an easy, built-in way to gracefully remove a drive without risking the data. (A manual-evacuation sketch for this post appears after the list.)

     -----

     As I understand it, the current process for removing a drive is to use the New Config tool and hope for the best. However, when I have tried this in the past, it tells me that my parity drives will be destroyed. If the parity drives are destroyed as part of this process, then it seems what I actually need to do is evacuate the drive first, before using New Config. If I don't, then the data on the drive being removed is also lost, yes?

     The documentation makes it seem like we can just pull a drive out using New Config, no data will be lost, and everything will be fine. But when I go to actually do it, Unraid tells me that my parity drives will be wiped. If they're wiped, then the data on the removed drive cannot be rebuilt. So either the docs should reflect that risk, or my understanding of the whole process is incorrect (and the docs and the New Config flow should spell it out more clearly). In any case, it would be nice to have a more graceful method of removing drives that still have data on them.
  3. Downgrading the BIOS on my board to 1.6 solved the issue for some reason.
  4. Upgraded from 6.9.2 to 6.11.4 and the system no longer boots. I get to the boot selection menu and, regardless of what I select, the system just hangs at this screen. This also happened back with 6.10; I decided not to upgrade at that time, as there was talk of a change in how drivers were being handled, and I wanted to wait and see if that would address my issues.

     There is no further info on the console, nor is there anything in the logs that I can see. I rolled back to 6.9.2 and have nothing really to go on for further troubleshooting. I tried enabling the syslog server and mirroring logs to flash, but I don't think the system is even getting to the stage where syslog is available to write the logs. Server diagnostics are attached but only show the logs from the last successful boot.

     How can I go about capturing logs when attempting to boot 6.11.4? Alternatively, how can I enable more output on the console during boot? (A verbose-boot sketch for this post appears after the list.)

     Supermicro A2SDi-8C-HLN4F, Version 1.01
     American Megatrends Inc., Version 1.7
     BIOS dated: Mon 09 May 2022 12:00:00 AM CDT
     Intel® Atom™ CPU C3758 @ 2.20GHz

     server-diagnostics-20221119-1551.zip
  5. When attempting to boot 6.11.3, I tried the normal boot, GUI mode, and GUI safe mode, all with the same results as the screenshot above. After rolling back to 6.9, the system was able to boot, but I had to go in and manually repair my network config. That is when I realized that 6.11 wasn't booting at all; I had assumed it was an issue rendering the console in my ILO.

     I can try setting up a syslog server somewhere (Unraid is usually my host, so this will probably take a while), but I don't think that will help in this case, since the system doesn't appear to be booting at all on 6.11.3. Is there a way to configure persistent debug logging so I can capture more information about why 6.11 isn't even booting? (A netconsole sketch for this post appears after the list.)
  6. Attaching the diags. The system now boots, but there is still no network available. Logs seem to be available only for the last successful boot, so I'm not sure how best to go about diagnosing the upgrade failure. tower-diagnostics-20221117-1102.zip
  7. So I should have a console after booting up normally? My console has always looked like this after booting. Maybe it's my ILO or something, I don't know. I don't have any issues accessing the BIOS or the boot menu, but I've never been able to see a CLI console, even on 6.9.

     Edit: I rolled back to 6.9 and I am actually able to see a console now. I guess I only ever log on to this thing when the system fails to boot entirely, hence my confusion around the console situation. So, I logged in via the console and I still don't have any network. In the rollback, I replaced the bzroot, bzroot-gui, and bzimage files; I'm guessing the drivers are still on the newer version. (A rollback sketch for this post appears after the list.) I'll see if I can figure out how to get a diag off the system and attach it here.
  8. Updated from 6.9 to 6.11.3 and now have zero networking. I have access to a console via a lights-out card, but regardless of the boot options I select, I can't seem to get a command line. Is there a way I can get to a command line and maybe generate some sort of diagnostics bundle? (A sketch using the built-in diagnostics command appears after the list.) I knew it was a risk to update, given the Intel X553 and ixgbe driver issues that cropped up in 6.10; I just assumed it had all been fixed by now...
  9. I had issues back with the 6.10 RCs where the Intel NICs were not supported. I remember people saying that the way drivers were implemented changed in 6.10. I have been running 6.9 in the meantime, and I am wondering whether the issues with those drivers have been resolved in 6.11; I don't want to upgrade only to lose network connectivity. I read through the 6.11 release notes and didn't see any specific mention, but I recall that the entire driver system was supposed to change in this version or something like that. (A driver-check sketch for this post appears after the list.) Example of the issue being discussed:
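A minimal sketch for post 1, run from the Unraid console after a reboot, to verify whether "Host access to custom networks" actually took effect. It assumes the custom Docker network sits on br0 and that the setting creates a shim-br0 interface (how recent Unraid releases implement host access); the interface name may differ on other setups.

```
# If the shim interface is missing, the setting was not applied on this boot.
ip -br link show shim-br0 2>/dev/null \
  || echo "shim-br0 missing - host access not applied"

# The setting also installs a route to the container subnet via the shim;
# no output here means host-to-container traffic will fail.
ip route | grep shim

# Toggling Docker off and on in Settings > Docker usually re-creates the
# shim; this is the manual equivalent (assumes the stock Slackware-style
# rc script that Unraid ships).
/etc/rc.d/rc.docker stop && /etc/rc.d/rc.docker start
```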
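For post 2, a sketch of the manual evacuation described there, assuming the outgoing drive is disk3 and the target is disk4 (hypothetical slot numbers). Moving the data disk-to-disk and confirming the source is empty before running New Config is exactly the part that would be nice to have built in.

```
# Copy everything off the outgoing disk, preserving permissions
# and extended attributes.
rsync -avX --progress /mnt/disk3/ /mnt/disk4/

# Dry-run comparison with checksums to verify the copy before
# deleting anything.
rsync -avXn --checksum /mnt/disk3/ /mnt/disk4/

# Confirm what is left on the source disk.
du -sh /mnt/disk3
find /mnt/disk3 -type f | head

# Only once the disk is empty: Tools > New Config, re-assign every
# drive except disk3, and let parity rebuild.
```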
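For post 4, a sketch of one way to get more console output during boot: edit the append line in syslinux.cfg on the flash drive. The ttyS0 settings assume the board's serial-over-LAN console is mapped to the first serial port; check the BIOS for the actual COM port.

```
# /boot/syslinux/syslinux.cfg (the stanza you boot from)
label Unraid OS
  menu default
  kernel /bzimage
  # loglevel=7 plus ignore_loglevel prints all kernel messages;
  # the two console= entries mirror output to VGA and to serial,
  # so an IPMI serial-over-LAN session can capture an early hang.
  append initrd=/bzroot loglevel=7 ignore_loglevel console=tty0 console=ttyS0,115200n8
```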
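For post 5, netconsole is one way to capture kernel messages before userspace (and thus syslog) is up: the kernel streams them over UDP as soon as the NIC driver initializes. This is a sketch, assuming 192.168.1.5 is the server and 192.168.1.10 the receiver (both hypothetical); if the NIC driver itself is what's failing, nothing will be sent, and the serial console above is the fallback.

```
# On the Unraid flash, append a netconsole parameter to the boot line in
# /boot/syslinux/syslinux.cfg. Format:
#   netconsole=<local-port>@<local-ip>/<interface>,<remote-port>@<remote-ip>/
# Omitting the trailing MAC address broadcasts to the target subnet.
#   append initrd=/bzroot netconsole=6665@192.168.1.5/eth0,6666@192.168.1.10/

# On the receiving machine, listen for the messages:
socat udp-recv:6666 -
# or, with a netcat that supports it:
nc -lu 6666
```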
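For post 7, a sketch of a more complete rollback. In recent releases the drivers live in bzmodules (with firmware in bzfirmware), so replacing only bzimage/bzroot/bzroot-gui leaves a mismatched set. This assumes the backup of the old release is in /boot/previous, which is where the web updater normally keeps it.

```
# See what is on the flash and what was backed up by the upgrade.
ls -l /boot/bz* /boot/previous/

# Roll back the whole set, not just three files.
cp /boot/previous/bzimage /boot/previous/bzroot /boot/previous/bzroot-gui \
   /boot/previous/bzmodules /boot/previous/bzfirmware /boot/

# Compare against the published checksums for the release (manual check).
sha256sum /boot/bz*
```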
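For post 8, if a login prompt is reachable on the lights-out console, a diagnostics bundle can be generated with no networking at all: diagnostics is the stock Unraid CLI tool, and it writes its zip to the flash drive.

```
# Log in as root on the console, then:
diagnostics

# The bundle lands on the flash drive:
ls /boot/logs/
# e.g. tower-diagnostics-YYYYMMDD-HHMM.zip

# With no network, pull the flash drive on the next shutdown and copy
# the zip from another machine to attach it to a forum post.
```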
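For post 9, a quick check (usable before and after an upgrade) of whether the X553 ports are bound to the ixgbe driver. PCI IDs and interface names vary per board; the grep patterns are only examples.

```
# Show Ethernet controllers and the kernel driver actually in use.
lspci -nnk | grep -iA3 ethernet

# Confirm the ixgbe module exists in the running release...
modinfo ixgbe | head -n 5

# ...and is loaded with interfaces attached.
lsmod | grep ixgbe
ip -br link
```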