turnma

Members
  • Posts: 20
  • Joined
  • Last visited
  • Reputation: 2

  1. One week stable, so hopefully looking positive for xfs.🤞
  2. Hi. Minor thing, but the first post refers to the container by the name "unraid-controller-reborn" rather than "unifi-controller-reborn"!
  3. I did move for that reason, but also btrfs was problem-free for two years before the upgrade, so if there's a hardware issue then it's only become apparent since the upgrade. Thanks, I'll try xfs.
  4. No, just the one. I only created this when I moved from btrfs last week, so if zfs is only going to last a week at a time then am I better recreating the pool as xfs?
  5. Just to add, the server has been back online for under an hour and the symptoms (/mnt/user access hanging, the admin UI being unavailable, etc.) have returned. I can't run diagnostics because that also hangs, but there's nothing new in syslog since the server/disk became unresponsive. In other scenarios I'd think this was likely to be a disk issue, but again it seems like a massive coincidence that I had no issues for the 6 months before the upgrade and the zfs change.
  6. The server was stable after making the ipvlan switch until today, when I spotted that my containers were effectively unreachable (web servers responsive but not returning content post-logon, or timing out). I found that trying to ls /mnt would hang the terminal, but the same on /mnt/disk1 was fine. There was nothing in syslog at the time the issues were seen, but there was an error much earlier in the day (when things had still seemed okay):

     PANIC: zfs: removing nonexistent segment from range tree

     I couldn't reboot the server because issuing a reboot would also hang, so eventually I had to do a hard reset. After the reset the array got stuck starting, with the cache pool the apparent culprit. I rebooted (which was now possible) with a plan to mount the cache read-only, but after the reboot the array started fine.

     This all feels like it's related to the cache pool (a single SSD), but again I'd had zero problems before the 6.12.x upgrade when on btrfs, and I only moved to zfs to get around the apparent issues with btrfs on 6.12.x. So my question at this point, assuming that zfs doesn't like something about my hardware that btrfs on 6.11.x was fine with, is whether I should consider reformatting the cache pool to xfs instead (some checks worth running first are sketched below this list). thanks
  7. Thanks, I'll make that switch. I knew about that from the last upgrade, but the advice back then was also steering towards a second NIC (which I added at the time) and so this time I wasn't changing anything until I was more sure it was necessary.
  8. Last year I upgraded my server to 6.12.x and immediately suffered from stability issues due to btrfs. I downgraded back to 6.11.x and the server has been online without interruption for another 8 months. On Monday I upgraded again, this time recreating the cache pool as zfs. The server was stable for about 24 hours but it died in the early hours of this morning, although clearly not with a btrfs issue this time. I was unable to contact the server over the network and had to force a reboot. I've grabbed and attached diagnostics, although the syslog data is from post-reboot. I have a copy of syslog data that I had sent to an external server, so I've pulled about 10K lines from that and also attached it here. Hopefully there's something in here to give pointers. Again, I'd stress that the server has been running without the slightest hiccup on UPS for well over a year, interrupted only by the issues that I had during the last aborted attempt to move to 6.12. It would be really nice if I didn't have to downgrade again! thanks

     tower-diagnostics-20240410-1022.zip syslogresults_20240410_102435.zip
  9. Sigh, hopes raised too soon. Not a btrfs error this time, but the server died in the early hours of the morning. I'll raise a separate post, but it looks like I'm cursed getting onto 6.12!
  10. Hi. Sorry, I think you may have misunderstood. I have an external syslog server; let's assume its DNS name is mylongsyslogserver.somewhere.com. That's longer than 23 characters, so Unraid won't allow it. But FQDNs are valid up to around 255 characters, so there's no obvious reason for Unraid to have this restriction. (The article you linked relates to NetBIOS names, which have a 15-character limit, and doesn't apply here.) thanks
  11. (and yes, I could probably create a CNAME in a domain that I own to get around this, roughly as sketched below this list, but I'm trying to keep things simple!)
  12. Just reporting back, 16 hours since upgrade, with cache pool now running on zfs. No issues yet.
  13. I have my Unraid box logging to an external syslog server. Until this point I've used an IP address, but the target IP occasionally changes, so I want to switch across to DNS instead. The external syslog provider uses fairly long DNS names, and it seems that Unraid limits the length of this field to 23 characters. What's the best way to request a feature change in this area? I can't think why it would be deliberately limited to such a short field length. thanks
  14. Thanks, I'm currently emptying my cache back onto the array so that I can upgrade and then go straight to a zfs cache. Fingers crossed.
  15. Thanks. I've set up syslog to log remotely, so hopefully that will capture any warning signs when I next try to upgrade. (A quick way to test the remote target is sketched below.)
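
A minimal sketch of the zfs pool health checks referenced in post 6, run from the Unraid terminal before deciding on a reformat. The pool name "cache" is an assumption; substitute whatever the pool is called on the Main page.

    # Assumes the zfs cache pool is named "cache"; adjust to match your setup.
    zpool status -v cache   # show pool state and any per-device errors
    zpool scrub cache       # start a scrub to verify every checksum in the pool
    zpool status cache      # re-run later to see scrub progress and results

If the scrub reports checksum errors on the single SSD, that points towards the drive or its cabling rather than zfs itself, in which case a move to xfs would be less likely to help.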
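
For the CNAME workaround mentioned in post 11, a sketch of the idea under stated assumptions: example.com stands in for a domain you control, and the long target name is the placeholder from post 10.

    # Hypothetical record in a zone you control (shown here as a comment):
    #   syslog.example.com.  IN  CNAME  mylongsyslogserver.somewhere.com.
    # Verify the short alias resolves before pointing Unraid's syslog field at it:
    dig +short syslog.example.com CNAME

Only the alias needs to fit within Unraid's 23-character field; the CNAME target can be the full-length FQDN.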
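
And for the remote syslog setup in posts 13 and 15, a quick reachability test, assuming the server listens on the standard syslog port (the hostname and port here are placeholders).

    # Send a one-off test message to the remote syslog server (UDP/514 assumed):
    logger -n syslog.example.com -P 514 -t unraid-test "remote syslog reachability test"
    # Then search the remote server's logs for the "unraid-test" tag.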