About DesertCookie

  1. I've retested today - after having made no changes, not even shutting down the computers since the last test. Now I get the following results: 1GbE: 412 Mb/s (spread of 277 to 859 Mb/s); 2.5GbE: 508 Mb/s (spread of 432 to 567 Mb/s). While this is better, it still doesn't represent what I would expect from this setup. Edit: A little while later I found that 1GbE is more reliable. Whether it's a problem with the Ethernet cards or one of Unraid with the Ethernet cards, I don't know. I'll be adding the names of the cards to the original post as soon as I stop my server for the next t…
  2. The 2.5GbE link is direct, as I wrote. I only have a phone via WiFi 4 available as a second client (or a VM on the server, but that's obviously useless here). I didn't change the MTU as I didn't expect that to be relevant for only 2.5GbE. Could that cause an improvement?
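For what it's worth, checking and changing the MTU is quick to try. A minimal sketch, assuming the interface is called eth0 (adjust to your NIC) - jumbo frames only help if both ends of the direct link support them:

```shell
# Show the current MTU of the interface (default is usually 1500)
ip link show eth0

# Temporarily raise it to jumbo frames on BOTH machines, then rerun the test;
# the setting does not survive a reboot
ip link set eth0 mtu 9000
```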
  3. Using SpaceInvader One's video as a guide I got the following results: 1GbE: 74 Mb/s (spread of 38 to 120 Mb/s) 2.5GbE: 126 Mb/s (spread of 70 to 191 Mb/s) iperf3_results.txt
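For anyone wanting to repeat this test, a minimal iperf3 sketch - the server IP below is an example, substitute your own:

```shell
# On the Unraid server: run iperf3 in server mode
iperf3 -s

# On the client: test toward the server, 4 parallel streams, 30 seconds
iperf3 -c 192.168.1.10 -P 4 -t 30

# Same, but reversed (-R): the server sends to the client,
# so both directions of the link get exercised
iperf3 -c 192.168.1.10 -P 4 -t 30 -R
```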
  4. Edit: What ultimately solved my problem was reinstalling Windows - now I'm back on 1GbE but at least get around 100-110 MB/s. The Windows installation was about a year old and had only seen new RAM and some new HDD and SSD storage. Let's hope the upgrade to 10GbE later this week goes well. I recently installed a 1TB WD Blue M.2 NVMe SSD as cache, as well as a DeLock 2.5GbE network card, and directly connected my PC (via its onboard 2.5GbE port) to my server. Read and write speeds are down compared to before the update. Here are some metrics: 20-40 MB/s write and 20…
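A quick sanity check on the units, since Mb/s (iperf3's default) and MB/s (file transfer speeds) are easy to mix up - the line-rate ceilings work out like this:

```shell
# Theoretical ceiling of a 1 Gb/s link in MB/s: 1000 Mb/s / 8 bits per byte
echo $(( 1000 / 8 ))   # 125 MB/s line rate

# Same for 2.5GbE
echo $(( 2500 / 8 ))   # 312 MB/s line rate
```

So 100-110 MB/s over 1GbE is roughly what a healthy link delivers after protocol overhead, while 20-40 MB/s is far below either ceiling.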
  5. Just for future passers-by: I had the drive I had issues with mounted via the Unassigned Devices plugin and had forgotten to unmount it.
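In case someone hits the same thing: drives mounted through Unassigned Devices can be unmounted from the plugin's UI, or from the shell. The mount point below is an example label - adjust to your drive:

```shell
# List what is currently mounted under the Unassigned Devices mount root
ls /mnt/disks/

# Unmount the drive before doing anything else with it (example label)
umount /mnt/disks/MYDRIVE
```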
  6. Alright, the repair runs correctly. I don't know if it actually did anything to alleviate the drive errors that seem to move with the data from disk to disk.
  7. I used the name displayed in the web UI, "sdh" - that is, I used "/dev/sdh". I see how I might have messed up...
  8. I've pulled both out of the array for the moment and ran the extended diagnostics - the original drive with known bad sectors wouldn't even run it, and I got errors on a completely healthy drive that went into error mode too. I'll try swapping the HBA card ASAP! It's an Adaptec ASR-71605 and it's not actively cooled; I suspect that might have caused issues, as those are known to run hot. I'm running xfs_repair -vn on these 4TB drives. Is it normal for it to take multiple hours and only display dots for an extended period of time? There have been 2M reads on the drive so far.
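A hedged sketch of the xfs_repair invocation: the check is run against the partition (e.g. /dev/sdh1), not the bare disk device, and the device names here are examples only:

```shell
# Dry run: -n reports problems without modifying anything, -v is verbose.
# Long runs of dots are normal progress output; hours on a 4TB drive
# is not unusual.
xfs_repair -vn /dev/sdh1

# Drop -n to actually apply repairs (only on an unmounted filesystem)
xfs_repair -v /dev/sdh1
```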
  9. I will try a reseat when I have physical access to the hardware again. I still have two free x16 slots I can try. For now, after a restart, it picks them all up again. I have found that the drive that threw errors this time also has some SMART alerts. I have appended the SMART reports (the first is for said drive, the second for the other drive I already knew to have some issues).
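For reference, the SMART report and the extended self-test can also be pulled from the shell - the device node below is an example:

```shell
# Full SMART report for the drive (attributes, error log, self-test log)
smartctl -a /dev/sdh

# Start the extended (long) self-test; it runs on the drive in the background
smartctl -t long /dev/sdh

# Check self-test progress and results later
smartctl -l selftest /dev/sdh
```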
  10. Looking at how I had a brand-new flash drive fail after just a month, I'm worried I'll be in a similar situation at some point. With all the issues I'm having, I'm wondering why I didn't stay with Windows Storage Spaces.
  11. After a recent monthly parity check, one of my drives went into an error state. I pulled the data off the drive and removed it. Now, running the parity check, I get a lot farther, but a different disk is throwing 79k errors and has gone into an error state too. The parity check paused. The drive originally throwing errors wasn't the most healthy, with about 70 bad sectors. It had amassed these bad sectors one and three years ago and hasn't gotten any new ones since; they were all corrected. The second drive's diagnostics are appended. I've had i…
  12. I'm getting an error when accessing the web UI: Error 500 Internal Server Error | nginx. I can still SSH into it. My VMs are still running and accessible. Most Docker containers are running, but my reverse proxy container is unresponsive and cannot be rebooted. I've tried shutting down the VMs with sudo virsh shutdown <name> - nothing happens and I abort the command with CTRL+C after a long time; I've tried shutting down all Docker containers with sudo docker stop $(docker ps -q) - my Nginx Proxy Manager container won't shut down (the rest do); then I tried stopp…
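When virsh shutdown and docker stop hang like this, the usual escalation is the forced variants - note these give the guest no chance to flush, so they're a last resort; the names below are placeholders:

```shell
# Graceful shutdown timed out: force the VM off immediately
virsh destroy <vm-name>

# Force-kill a container that ignores the stop signal (example name)
docker kill nginx-proxy-manager
```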
  13. Thank you. I'll order a new flash drive then. Strange, since it was brand new (Samsung BAR Plus 32GB).
  14. My system loses network access but seems to stay operational. Only a hard reset will get it back to being visible in my LAN - at least for a couple of hours. Strangely, the VMs still appear as online to my router, though it doesn't know what IP address to give them or how they are connected (it doesn't show LAN_1GB like for my other devices). Of course, a hard reset is somewhat undesirable. I'm not at home to troubleshoot myself, so I asked my housemate to log in, export the log, and send it to me. This is what he sent me instead: He said stuff like this was scrolling by. This means the VM…