Krythel

Members
  • Posts

    6
  • Joined

  • Last visited

Krythel's Achievements

Noob

Noob (1/14)

0

Reputation

  1. OS Version: 6.10.3, upgraded from 6.9.2 recently. Hello there, I recently encountered a very odd situation with Unraid. I did upgrade recently, but server worked fine after upgrade until I decided to update a few docker containers causing the server to freeze. I essentially was left with no access to the Web GUI or SSH. I left alone for about 10 to 15 minutes and then decided to force shutdown. When the system came back up there were some errors displayed on the Web GUI: I tried to log in and it threw these errors at me: I was able to access the server via SSH at this point, shares seems to be available. I decided to reboot to see if this would fix the issue. After reboot. I was no longer able to access Web GUI (would give me a 404, more on this later) or SSH, I also noticed the name of my server reverted back to Tower. I powered cycled the machine to gain local UI access. I was able to get into the UI via the server and collected logs. I decided to search for the Web GUI 404 error, and found others had this issue: After this I went back to my server settings and found out a few things: The 2 cache pools I had were misconfigured, no drives were assigned. SSH and SSL were both turned off. Server name had been changed to Tower. When I looked at the config in the USB I noticed it looked like it did a few months back, when my cache pool used other drives. All my information in my drives and cache pools remained intact however. At this point I reconfigured everything back how it's supposed to be and I'm back online. So from all this experience it just seemed to somehow had rolled back in time and was using an old USB config. Not sure how it happened at all. Any explanation would be most welcomed at this point. From my end, USB backups were no longer supported nad I didn't know (had the server down for a few months), so I'm working on my own auto backups at the moment, but have done so manually for the time being. Here's the logs: 6.9.2 (Last log in file before upgrade) arkhiver-diagnostics-20220825-2134.zip 6.10.3 (First logs after upgrade) arkhiver-diagnostics-20220517-1036.zip This one is after the issue happened (on 6.10.3): tower-diagnostics-20220902-0358.zip I also have manual logs I got while the problem was happening, but I believe they are not anonymous, so I can DM them to an official Unraid member if needed. Let me know if you need anything else. Hopefully an explanation can be found! Thank you for your time!
  2. I also request for WiFi to be an available option for us Unraid-ers. I have multiple setups, one of them is a mobile development box with mainly containers. In this case I'm not looking for a solid 100% uptime but a quick, easily managed dev box, which Unraid is perfect for, but because of the nature of the "mobile" it's rather cumbersome to rely on an ethernet port at all times. And to be honest those suggesting to buy an adapter, this is just a patch to an actual problem. Usually included antennas pretty beefy, most my mobos have Wifi 6E in them, not to mention it already comes with it for free! Plus I don't want to carry more devices and do extra setups. There's definitely a need for wifi to be supported, and trust me I hardwire everything possible that I know won't move, but in other cases it's really needed. I can also be used as an additional channel of communication for certain non-crucial services or to just not hog a single network. So I dare ask for this feature to become reality! (Please!)
  3. Hello there, I recently noticed excessive logging/spam on my server, which lead me to this topic: In my case I would like to keep bluetooth enabled, but will make another post for that. The main concern is about the Web UI getting frozen while these logs are happening. After the log window is opened and left there for a few mins, the web UI starts to slow down and eventually completely freezes. Not too sure if the log window uses the same process as Web UI, but seems like it does and Web UI in no longer accessible while the window is open. As soon as it's closed the Web UI works again. I actually had to force shutdown the server once thinking there was something wrong. The tricky thing is that the log window can be easily opened and completely forget to be closed, this can even be done prior to excessive logging. This will cause this behavior and never realize what's wrong, not to mention consoles and other features such as Wireguard and Dockers work flawlessly while the Web Ui is frozen. OS Version: 6.9.2 Steps to reproduce: 1. Have something that constantly logs to system. Guess you could force this with a script if necessary? 2. Open log window from top right toolbar of the Web UI. 3. Leave the window open for a few minutes, 5 mins should suffice. You will notice the effects are progressive, the longer you wait the worst it gets, until it freezes. 4. Notice how Web UI is no longer responsive/accessible but all other features remain intact. Workaround: Make sure to look for opened log window sessions and close them. Perhaps there's a way to force close/kill processes via terminal to fix it. Expected behavior: Web UI should work normally even with log window open. In my opinion this is a bit more common than it seems, since I would guess people monitor the logs quite often and forgetting to close the window is rather easy to do.
  4. Hello there, mover is running and I started getting messages about the cache being full. I checked few topics and checked the cache floor and shares minimum requirements. They all seem ok. The disk is also mostly empty so not sure why the message. I noticed this since the logs memory was almost full. Any idea of why this may be happening? Thanks in advance! I've attached the diagnostics. diagnostics-20220211-1047.zip ~~~~~~~~~~~~~~~~~~~~~~ Edit: After experimenting around, this may actually be a bug. Is important to note Docker was disabled during the whole process to avoid files moving around/being used in appdata. TL;DR: When having 2 pools, one cache's floor setting causes mover get stuck while moving the other pool. Steps to repro: Create two cache pools. Let's call them Pool A and Pool B. Create a two sets shares, one to be used in Pool A the other for Pool B. In my case: Pool A: Share A. Floor set to 5GB, Set to No Share A2. Floor set to 5GB, Set to No Share A3. Floor set to 0, Set to No Share A4. Floor set to 5GB, Set to No Pool B: Share B. Floor set to 100GB. Set to Prefer. Make it so that one pool reaches the floor limit. Chose Share B for Pool B here. In my case I set Share B to Prefer and apply changes. Transfered files until pool B reached the limit (100GB of free space). Put some items in [Share] A, A2, A3 and A4. I had a total of 16.4GB. Set [Share] A, A2, A3 and A4 to Prefer, Apply changes. Kick off the mover. Monitor system logs, it may move some items, but eventually it will print that Pool A is full, when in fact is not. Mover will be in a loop and will never stop. I don't think floor size or the number of shares used per pool matters too much. The setup: Server is running version 6.9.2. 2 Pools used as cache: Pool 1: Composed of one NVMe SSD RAID1 Used for system, domains, appdata and isos. Set to Prefer and floor of 5GB, except for isos which was set to 0. Pool 2: Composed of two NVMe RAID1 Used as file transfer cache for a specific share. Set to Prefer and floor set to 100GB I did notice the setting for Pool 2 was set to Prefer, which as not intended. I intended for Yes to in place. Prior to running mover where the problem originated: I had a replaced the drive in Pool 1 for another, same size however. for this I followed these steps: Set all Pool 1 shares to Yes. Ran mover. Checked Pool 1 to be empty. Set Pool 1 shares to No. Powered off machine. Replaced NVMe drive. Formatted and reconfigured pool, shares and options as before. Changed [New] Pool 1 to Prefer. Ran the move. <-- This is where the problem started. Didn't noticed until next morning since I left it running overnight. But was transfering fine before I went to sleep. Problem: Mover was running and got stuck printing cache from Pool 1 was full when in fact it was almost empty. Status of the system at the time: Pool 1 was almost empty. Pool 2 was at 99GB, so full based on floor settings. Docker was disabled throughout the whole process, prior, during and after moving. Changes I made: Stopped mover with mover stop command. Set all Pool 1 shares to Yes. Ran mover. Wait for it to complete. Double check all cache was moved to disk and nothing was left in Pool 1. Set all Pool 1 shares to No. Set Pool 2 share, just have one, to Yes. Ran mover again, waited for completion. Changed all Pool 1 shares to Prefer. Ran mover again and waited for completion. The whole thing just worked as expected after. But somehow the previous settings had made the mover get stuck. Almost seemed like it used Pool 2 check but blocked Pool 1 items from being moved. Let me know if you need more info. Hopefully this is not too confusing to understand.
  5. Yikes... Well, at least I'm glad it's a simple fix! (damn that line) Thank you SOOO much @doron!
  6. Hello, I am actually facing the opposite. I configured encryption with passphrase, however, I am now fetching the keyfile from an external source. When the keyfile is finally available, the system rejects it saying it's "Missing key" and fails to decrypt disks: I checked the contents of the file and its as expected, no added spaces, no tabs or anything of the sort. If I delete the keyfile and then use passphrase it does work however. Even if I create the keyfile as it still gets rejected. I also performed a diff between the downloaded keyfile and the file generated by the echo method and it comes out clean. (passphrase is not actually thepassphrase but you get the idea...) Am I missing a step? Thanks in advance!