nomisco

Members
Posts: 38
Joined
Last visited


  1. As a further bit of information, I updated to the latest build about two days after this event, and upon reboot a different disk was disabled, so I repeated the process. The server had been stable for in excess of a year, so it is troubling that I'm suddenly seeing this. I purchased another LSI 9211 and a breakout cable (as a spare) as I don't want this suddenly failing over the holiday period!
  2. Thank you. It is rebuilding at the moment.
  3. I've been playing around with it - perhaps foolishly. I can now see the contents of the drive and it appears in the array, without a specific error message, but it still has a red X next to it. Could someone advise what I need to do next? unraid-diagnostics-20231202-1018.zip
  4. Please can I have some assistance with a disk becoming disabled? Unfortunately I have rebooted since it happened, so I am probably missing log files. Nothing has changed; the server was sitting idle and it suddenly happened. It had been up for nearly a month. Obviously I want to avoid any data loss. I have no idea what I'm doing beyond basic setup, so I'd appreciate some explicit guidance. Thanks! Edit: I've tried the repair in maintenance mode and it says:

     Phase 1 - find and verify superblock...
     - block cache size set to 1117984 entries
     Phase 2 - using internal log
     - zero log...
     zero_log: head block 118210 tail block 118206
     ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.

     I have no idea what this means. unraid-diagnostics-20231202-0827.zip
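The xfs_repair message in post 4 describes a fixed sequence: mount so the kernel replays the XFS journal, unmount, then repair. A sketch of that sequence, assuming the affected disk appears as array device /dev/md1 and /mnt/test is a scratch mount point (both are placeholders; substitute the actual device from your setup, and only use -L as a last resort if the mount fails):

```shell
# Assumption: the disabled disk is /dev/md1 -- verify yours before running.
mkdir -p /mnt/test

# 1. Mounting lets the kernel replay the pending XFS log entries.
mount -t xfs /dev/md1 /mnt/test

# 2. Unmount again before running the repair tool.
umount /mnt/test

# 3. Re-run the repair; it should no longer complain about the dirty log.
xfs_repair /dev/md1

# Last resort only, if the mount itself fails (discards in-flight metadata):
# xfs_repair -L /dev/md1
```

This is the standard XFS recovery procedure the error text itself recommends, not anything unRAID-specific.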
  5. Mine is showing a blank page this morning (not 500 error), though SMB and docker services such as Plex are still available.
  6. ljm42: Changed the port to something else. trurl: Yes, I do recognise the IP address; that's OK. And I'm aware of the corruption in the cache pool; I don't know if it's a failing SSD or not. No further attempts to access the server. The only things forwarded on my router are the Plex server (running on unRAID) and the aforementioned WebUI remote-access port, which I've now changed.
  7. Haven't done anything as far as I know. The password is long and not the kind you could guess. unraid-diagnostics-20230126-1719.zip
  8. I've just seen this in my log. Never seen anything like this before. Is this someone trying to gain access to my server? Anything I should worry about? Jan 26 15:43:38 unRAID nginx: 2023/01/26 15:43:38 [crit] 24956#24956: *1977125 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 152.32.141.142, server: 0.0.0.0:443 That IP address is from Nigeria. Thanks
  9. Start an iperf server with iperf -s, then run iperf -c <serverIP> on the client.
  10. There may be a perfect storm of something in my case with the many recent changes to the SMB implementation. It most certainly used to saturate the Gb network during SMB transfers. I shall do a fresh install in the next day or two and report back. Thanks for your help.
  11. The disk settings are set to reconstruct write. It is writing to the largest available space, which is about 2TB free on a 4TB disk. The write speed is still ~50MB/s from the client to unraid, but it appears to buffer in the unraid memory, then dump large chunks to disk, then wait for the buffer to fill again before repeating. Hopefully the images below give you some idea of the behaviour. The disk writes in the top image are to the parity and array disks. The cache disk (SSD) is not used during the test.
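The burst-then-stall pattern described in post 11 is consistent with Linux page-cache writeback: writes accumulate in RAM until the dirty-page thresholds are reached, then get flushed to disk in large chunks. A quick way to see those thresholds on the server (these are standard Linux kernel tunables, not an unRAID-specific setting):

```shell
# Percentage of RAM that may hold dirty pages before background flushing
# starts, and before writing processes are forced to block, respectively.
cat /proc/sys/vm/dirty_background_ratio /proc/sys/vm/dirty_ratio
```

Large values relative to installed RAM produce exactly the fill-then-dump cadence seen in the graphs; this only explains the shape of the transfer, not the overall ~50% throughput drop.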
  12. Same problem. It used to max the link speed. About 50% of that now both ways. Only unraid and SMB are the common factors.
  13. Bypassed the entire segment of the network, so Win10 > switch > unraid. No change. SMB is still about half the data rate it used to be. I can transfer from the Win10 machine to another on the network at ~110MB/s. The iperf tests show good performance. SMB is the problem.
  14. I don't believe the number of retries is a concern when the network is being saturated. 1Gb/s can max out at about 80,000+ packets per second, and the network was only minimally in use elsewhere; Skype, online gaming etc. (which would have prioritised packets). I'll substitute a segment of the network with a long cable which will bypass a couple of switches, but because there's a 50% drop in throughput I don't have much hope for that. Just to add, iperf shows that the network performance is largely as expected, and I see better throughput when using a different protocol through lancache, so it points to an SMB problem to me.
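The packet-rate figure in post 14 can be sanity-checked with a one-liner: a full-size Ethernet frame occupies 1538 bytes on the wire (1500-byte MTU payload plus headers, preamble, and inter-frame gap), so 1 Gb/s tops out at roughly 81,000 frames per second, matching the "80,000+" estimate:

```shell
# Max full-size frames per second on 1 Gb/s Ethernet:
# 10^9 bits/s divided by 1538 bytes * 8 bits per on-wire frame.
awk 'BEGIN { printf "%d\n", 1e9 / (1538 * 8) }'
# prints 81274
```

Smaller frames raise the packet rate (64-byte minimum frames reach ~1.49M pps), which is why saturation figures are always quoted for a given frame size.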