JPilla415

Members
  • Posts: 61
  • Gender: Undisclosed

JPilla415's Achievements

Rookie (2/14)

Reputation: 0

Community Answers

  1. Shoot! Didn't think about that. Thanks very much!
  2. Forgive me if I am missing something painfully obvious, but the new Unbalanced UI is indicating the following: However, when looking through /var/log via Krusader, I am not seeing any log file (a small script for listing the newest files under /var/log is sketched after this post list). The last modified date for /var/log is back in 2023 for me. Appreciate the help!
  3. 28 hour MemTest with clean results! Booting back up the server again after several days of downtime. Fingers crossed! Thanks again for all the help.
  4. Thanks very much for the time and help on this! I'll go ahead and mark the MemTest post from you as the solution. If other issues crop back up, I'll come back to the thread and add more details. Fingers crossed this is the end of it. Is there a current recommendation for my RealTek adapter? It's unclear to me whether I should be using the RealTek plug-in or not:
  5. I had similar BTRFS errors in my logs last week, and the first step recommended to me by the team here was to run a MemTest.
  6. 12 hours and error-free with the new RAM. I'm going to let this run the rest of the day to play it safe. Assuming I encounter no errors in the next 12-24 hours, what is the recommended next step to troubleshoot the slow cache write performance outlined earlier in the thread? Just go back to normal day-to-day use of the server and see if the issue pops up again? Thank you again for all of the help!
  7. Much appreciated. I will start to track down some new RAM and report back. Sorry to repeat this, but would failing memory explain the drop in cache write speeds I have been observing? Or is that likely a separate issue which I will need to troubleshoot after getting new RAM? Thanks again.
  8. I believe the recommendation is to run MemTest for a full 24 hours, but given that I am 12 hours in and have already logged 462 errors, I suspect this is enough data to call my DIMMs bad. Is that the general consensus? I assume even one error is above a comfortable threshold. If so, I need to work on tracking down RAM compatible with my motherboard and CPU. I am using older hardware and I am not clear on whether compatible RAM is still manufactured or readily available. Are there current recommendations around RAM? Is it better or worse to utilize all four slots on the motherboard? If I have been comfortably running 16GB for years, is there any reason to bump to 32GB? If I move from 4 DIMMs to 2 DIMMs, is that going to present any sort of issue? I assume the answer to all of these is no, but better to check than be surprised. Last question: would failing memory explain the drop in cache write speeds I have been observing? Appreciate everyone's time and thoughts.
  9. Understood, thank you very much for the suggestion. Running a memtest now. I will report back tomorrow.
  10. I've made no changes to my network, Unraid or PC. Tried another copy to the Unraid cache and this is what I am seeing now. These are the speeds I am used to seeing.
  11. Hello, sincere appreciation in advance for any guidance and thoughts from the community on this topic. Over the last several weeks, I have noticed that in some instances the speed at which I can write TO my cache drive drops significantly. I use Teracopy to write files to my server, and in these instances of poor cache write performance, I will frequently get an error that the hashes don't match for the files written to the server. In each case, rebooting the server has resolved the issue for some amount of time.

      An example of the change in write speed is as follows:
      Average normal write speed: 85-90 MB/s
      Average degraded write speed: 25-40 MB/s

      Here are the current write speeds to my Unraid server, writing to the SSD cache drive: As a point of comparison, here are the write speeds to my Synology server, which has no SSD cache. The same source computer, network, router, switch, and file were used for both servers:

      It is worth noting that in this window I updated to Unraid 6.12.6, and I understand that there are potential networking issues with RealTek adapters. Last week I downloaded the suggested RealTek driver from the Fix Common Problems plug-in, but the same issue popped up. I removed the plug-in yesterday. Here is the information being shown in my system devices for the network adapter:

      I have also noticed the following errors in my system log from this morning: I also had BTRFS errors in my logs over the past several days. I suspect these are related, but I can't be certain. Apologies if I am conflating two separate issues, but I wanted to share as much as possible.

      I have attached Diagnostics, as well as my system logs from these periods. My apologies for the noise in the logs: when rebooting my server, a new IP address was assigned and the server lost access to the UPS. I didn't notice the resulting noise in the logs until today. I copied and pasted the errors above. Please let me know if I can provide any other useful information (a rough write-test sketch is also included after this post list). Thanks so much!

      tower-syslog-20240120-1710.zip
      tower-diagnostics-20240120-0909.zip
      tower-syslog-192.168.1.2-20240120-1711.zip
  12. Ah, fair point. In this case, I want to replace the drive with a larger capacity drive.
  13. OK, thanks very much for the clarification. I know it was empty because I had cleared it, intending to shrink the array and remove the disk. However, I read that the script method of zeroing the drive is no longer the preferred approach on recent versions of Unraid, so I decided to scale up my drive sizes instead. I have one more drive with ReiserFS still kicking around. Is the best method to fully empty it, format it, and THEN remove/replace the drive?
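
Regarding post 2 above: below is a minimal sketch of one way to list the most recently modified files under /var/log straight from the Unraid terminal, independent of what Krusader displays. Only the /var/log path comes from the thread; the number of entries printed is an arbitrary choice for illustration, and this is not a feature of the unbalanced plug-in itself.

    # List the ten most recently modified files under /var/log (path taken from the thread).
    import os
    import time

    LOG_DIR = "/var/log"
    entries = []

    for root, _dirs, files in os.walk(LOG_DIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                mtime = os.path.getmtime(path)
            except OSError:
                continue  # skip files that vanish or can't be read
            entries.append((mtime, path))

    # Newest first; print the ten most recently touched files with their timestamps.
    for mtime, path in sorted(entries, reverse=True)[:10]:
        print(time.strftime("%Y-%m-%d %H:%M", time.localtime(mtime)), path)

Running this right after kicking off an unbalanced operation should make it obvious whether anything under /var/log is actually being updated, or whether the 2023 timestamps really are the newest files there.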
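
Regarding post 11: below is a rough sketch of the kind of write test discussed there, run directly on the server so that local SSD write speed can be separated from the network path. It writes a test file to the cache, reports MB/s, then re-reads the file and compares SHA-256 hashes, mirroring the post-copy verification Teracopy performs. The /mnt/cache destination path and the 1 GiB size are placeholder assumptions, not values taken from the attached diagnostics.

    # Rough local write-speed test with a hash check (assumed path and size, see note above).
    import hashlib
    import os
    import time

    DEST = "/mnt/cache/speedtest.bin"   # hypothetical test file on the cache pool
    SIZE_MB = 1024                      # ~1 GiB test file
    CHUNK = 1024 * 1024                 # write in 1 MiB blocks

    src_hash = hashlib.sha256()
    start = time.time()
    with open(DEST, "wb") as f:
        for _ in range(SIZE_MB):
            block = os.urandom(CHUNK)   # incompressible data, closer to real media files
            src_hash.update(block)
            f.write(block)
        f.flush()
        os.fsync(f.fileno())            # make sure the data actually reaches the drive
    elapsed = time.time() - start
    print(f"wrote {SIZE_MB} MB in {elapsed:.1f} s -> {SIZE_MB / elapsed:.1f} MB/s")

    # Re-read and compare hashes, the same kind of check Teracopy reports as a mismatch.
    dst_hash = hashlib.sha256()
    with open(DEST, "rb") as f:
        for block in iter(lambda: f.read(CHUNK), b""):
            dst_hash.update(block)
    print("hash match" if dst_hash.digest() == src_hash.digest() else "HASH MISMATCH")

    os.remove(DEST)                     # clean up the test file

If the local number stays close to the drive's normal speed while network copies are still slow, that points back at the adapter or driver rather than the cache pool; if the local number also drops, the pool itself (or the memory feeding it) is the more likely suspect.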