Ometoch

Members • 9 posts
Everything posted by Ometoch

  1. Like a lot of people have said already, my favorite thing about Unraid is the ability to mix and match disk sizes in the array. I always kind of dreamed about having a RAID array to store my data on, but the cost of getting a bunch of matched hard drives all at once was prohibitive. Unraid allows me to get the benefits of a parity-protected array without the inconvenience involved with establishing and expanding proper RAID arrays. The biggest things I'd like to see are just a focus on greater performance where possible (like the change in 6.8 that made the performance penalty of using the array while the Mover is running less noticeable), and some means to verify data integrity against bitrot (whether that's through ZFS or something else I'm not particularly fussy about).
  2. Small bug report: in the Active Streams plugin, when hosts are connected with IPv6 addresses, the User Names tab only displays the first four characters of the IP address, so you can't assign names to each address individually. I've attached example images, though only one user was connected when I took these screenshots. I have, however, seen multiple addresses (all starting with 2600) connected at one time while still only one "[2600" appeared under User Names.
  3. I forgot about the FCPsyslog_tail.txt that Fix Common Problems tells you to upload to the forums along with the diagnostics. Looking at it now, though, I notice that the copy on the flash drive stopped being updated on November 2 for some reason, even though the system kept running and FCP's troubleshooting mode stayed active. I have no idea what that means. FCPsyslog_tail.txt
  4. Crashed again about thirty minutes ago (after running quietly since, looking back at my last post, October 13, which goes to show what a pain this is to troubleshoot), so I guess reseating the memory didn't help after all. I've now updated Unraid to 6.6.4 and tried running that virsh command; absent other ideas, I'll hope that does something and I don't have to look at replacing hardware (probably the memory?). Between my last post and now I was running the Fix Common Problems troubleshooting mode, which writes the diagnostics bundle to the flash drive every half hour, and I was also looking through the syslog every day or so, and I don't see any errors between then and the crash. I'll attach the last diagnostics bundle saved before the crash anyway, in case I'm missing something, but as far as I can tell the main thing I have to go on is again some cryptic error screen. imogen-diagnostics-20181108-0437.zip
  5. I found that as well, and I also found an account of someone reseating the memory in their computer to fix errors about tainted kernels from bad page state. I figured I would try that before running that command (since neither of us really knows what it does; as near as I can tell it's related to NUMA, which should be irrelevant to my machine because it's just a four-core i5, not a multi-CPU or Threadripper/Epyc system). I've been watching the syslog like a hawk lately and so far it hasn't thrown new errors, but it hasn't been running long enough for me to feel confident that that was the fix (I'm not entirely sure how long will be long enough, since the crashing was so sporadic). If it does crash again, though, I'm going to try that node-memory-tune thing and see what happens. As for 6.6.1, I do use NFS to access a share from one of my VMs. I looked through the 6.6.1 release thread and saw people having trouble with NFS, but I don't know quite how risky updating would be in my circumstance. I'll probably hold off until 6.6.2. My problem has been happening since before the 6.6.x series anyway.
  6. I went into the room the server is in and saw something I didn't expect: a bunch of "BUG: Bad page state in process php" errors. I've only ever seen the previous error screens (like what I attached in the first post) after the system had already crashed and I had gone to check on it, but the server is still running here, with the web UI still working and SMB shares still accessible; nothing obviously wrong right now except what's on this screen. The server is currently running a parity check after the crash that prompted the first post in this thread, if that's relevant. I'm attaching a photo of this new screen plus a fresh copy of the diagnostics. What I posted in my first post was the screen after the system had become unresponsive, along with a diagnostics file from immediately after a reboot; what's attached to this post is from after the system has been running for about 12 hours and remains responsive. imogen-diagnostics-20181012-1544.zip
  7. I ran memtest from the boot menu for close to 24 hours a couple months ago and it passed. Is there a more thorough memory test I can do, or is that enough to probably rule it out?
  8. A somewhat non-descriptive post title for a problem that's been baffling me. For the past several months, my Unraid server will occasionally become unresponsive, and when I go to check on it there's what looks like output from a kernel panic on the monitor. The server will run just fine for several weeks (I've seen it happen after two weeks of uptime and also after over a month) and then this happens with no obvious trigger beforehand. I've seen it on every version of Unraid between 6.4.1 (if I recall correctly) and 6.6.0 (what I was running when it happened again about half an hour ago). I ran a memory check for a solid day or so with no problems. I recently installed an HP H220 HBA (flashed with the LSI firmware), but it crashed both with that and with the combination of onboard SATA and the cheap SYBA SATA card I was using before. I'm attaching a photo of the monitor, just in case someone can make sense of the junk onscreen (although I doubt it's useful), as well as the server diagnostics bundle. Hopefully someone will be able to find something I'm not seeing. imogen-diagnostics-20181012-0344.zip
  9. I bought one of these cards from this vendor. It arrived fine and works fine out of the box in Unraid (although I had to update the firmware on an old Crucial m4 SSD I had before the card would see it). As for this, if you look at HP's spec sheets, they list multiple host bus adapters built on this same SAS2308 chipset, and as far as I can tell the main difference is how many TB of attached disks they support (the H220 supports up to 42 TB). I believe this is a firmware limitation that flashing to the LSI firmware removes (but I don't have enough disks to verify that at the moment).
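
The bitrot-verification wish in the first post above can be illustrated with a minimal sketch: record a manifest of SHA-256 checksums for a directory, then re-hash the files later to spot silent corruption. This is a generic, hypothetical approach (the function names and manifest format here are my own invention), not anything Unraid or a particular plugin actually provides.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so large media files don't need to fit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its current checksum."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def find_corrupted(root: Path, manifest: dict[str, str]) -> list[str]:
    """Return paths whose on-disk contents no longer match the recorded checksum."""
    return [rel for rel, digest in manifest.items()
            if sha256_of(root / rel) != digest]
```

In practice you'd store the manifest somewhere parity-protected and re-run the comparison periodically; a mismatch on a file you never modified is a candidate for bitrot.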
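
The truncation described in the second post ("[2600" under User Names) is consistent with splitting an address string on its first colon, which is harmless for IPv4 "host:port" strings but cuts a bracketed IPv6 literal off after its first hextet. I don't know the plugin's actual code, so this is a hypothetical reconstruction of the bug and one possible fix:

```python
def display_host_naive(addr: str) -> str:
    # Treat everything before the first ":" as the host.
    # Fine for "192.168.1.10:445", but an IPv6 literal such as
    # "[2600:1700:abcd::1]:445" gets cut at its first colon, leaving "[2600".
    return addr.split(":", 1)[0]

def display_host_fixed(addr: str) -> str:
    # Keep the whole bracketed IPv6 literal; otherwise strip a trailing :port.
    if addr.startswith("["):
        return addr[: addr.index("]") + 1]
    return addr.rsplit(":", 1)[0]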