Majyk Oyster

  1. You can use PSU Calculator to make sure, but both those PSUs should be fine. Hard drives don't need that much power.
  2. Both seem to be just as fast while the data is written. I guess the extra work needed for a move is done once the transfer is completed, and the source file deleted. It's data to data (/mnt/diskA > /mnt/diskB/). I just witnessed such a transfer at 80MB/s, so the aforementioned fluctuation is indeed confirmed.
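That matches how cross-filesystem moves generally work: a rename can't span mount points, so the tool copies the data first and unlinks the source only once the copy completes. Python's shutil.move behaves the same way, which makes for a small demonstration (the paths here are temp directories, not array disks):

```python
import os, shutil, tempfile

src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "file.bin")
with open(src, "wb") as f:
    f.write(b"\0" * 1024)

# Within one filesystem this would be a near-instant rename; across
# filesystems (e.g. /mnt/diskA -> /mnt/diskB) it degrades to copy + delete,
# so the source only disappears after the full transfer.
moved = shutil.move(src, os.path.join(dst_dir, "file.bin"))
print(os.path.exists(src), os.path.exists(moved))  # False True
```

That's why a move shows the same sustained speed as a copy: the deletion at the end is cheap compared to shuttling the data.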
  3. As the parity is done rebuilding and the array is protected once again, I checked various things:
     - Reads from the array over ethernet are stable at 120MB/s (which is pretty nice).
     - Writes over ethernet to cached shares on the array are stable at 80MB/s.
     - Writes over ethernet to non-cached shares on the array are stable at 80MB/s.
     - Disk-to-disk transfers within the array start around 120MB/s and slowly stabilize around 50MB/s.
     I've double-checked: the files indeed land on /mnt/cache/ or /mnt/diskX/ as they're supposed to. I'm not sure how to simulate th
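For anyone wanting to reproduce the disk-to-disk numbers above, a simple timed copy is enough. Here's a minimal sketch (not Unraid-specific; file size and paths are placeholders you'd swap for real array paths like /mnt/diskA and /mnt/diskB):

```python
import os, shutil, tempfile, time

def copy_speed_mb_s(src: str, dst: str) -> float:
    """Time a single-file copy and return throughput in MB/s."""
    size = os.path.getsize(src)
    start = time.monotonic()
    shutil.copyfile(src, dst)
    elapsed = max(time.monotonic() - start, 1e-9)  # guard against tiny copies
    return size / elapsed / 1_000_000

# Write a small test file, then copy it and report the speed.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "testfile")
    with open(src, "wb") as f:
        f.write(os.urandom(20 * 1_000_000))
    print(f"{copy_speed_mb_s(src, os.path.join(tmp, 'copy')):.0f} MB/s")
```

Caveat: a file that's still in the page cache will read unrealistically fast, so for meaningful numbers the test file should be much larger than RAM, or caches dropped first.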
  4. Absolutely, every word of it. I only used the Read/Modify/Write mode for a few tests; my array is configured with turbo mode. Disk rotation wait shouldn't be an issue (?). As I understand things, considering the Diskspeed results and the way turbo mode works, the slowest writing speed would be around 60MB/s when reading from the end of my 3TB WD Green disk to reconstruct parity, but it could also be much better when writing past the "half" of my parity drive, which means reading only from faster disks. So I don't get why I never experienced better speeds since I activated parity. I cou
  5. Thanks for the details, much appreciated. Seems like I wasted a bunch of time and a perfectly good parity then. 😁 The WD Greens can definitely max out that 60MB/s "hard limit", so I guess it's pointless replacing them while using parity. I'd still love to know exactly what the limiting factor is in all this, but that may be Unraid secret sauce. On a side note, I shrunk my VM and docker.img, so now I have under 3.5GB used space on the cache, and it's also freed of all automated writes. That should let it breathe a little.
  6. Exactly, disk to disk within the array using MC or another CLI command. I can't find any reference of what to expect performance-wise on average/enthusiast/non-pro hardware, so I have a hard time knowing what's normal and what could be improved. Troubleshooting something that's not broken can be hard sometimes. 🤓
  7. Thanks for the input. That's closer to what I first imagined. I just installed Dynamix stats, so there's not much data to inspect right now. But I took a closer look at the parity sync (still running). It just went over 50% (4TB) while I was watching, which is the point where the WD Greens are no longer read and only the Reds and the Ironwolf are left. The speed instantly went from 70MB/s to 160MB/s. During the first hours, the speed was around 110-120MB/s. The last time I ran a parity sync, it ended with a 120MB/s average speed. So everything looks normal and coherent with the Diskspeed res
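The speed jump described above is exactly what a position-dependent model predicts: at a given byte offset, only the disks large enough to still cover that offset are read, and the pass runs at the slowest of those. A rough sketch (sizes and speeds invented for illustration):

```python
# (size_TB, sequential_read_MB_s) per disk -- illustrative numbers only
disks = [
    (3, 60),    # WD Green
    (3, 60),    # WD Green
    (8, 160),   # WD Red
    (8, 170),   # Ironwolf
]

def sync_speed_at(offset_tb: float) -> int:
    """Parity sync speed at an offset: slowest disk still covering it."""
    active = [speed for size, speed in disks if size > offset_tb]
    return min(active) if active else 0

print(sync_speed_at(2))  # Greens still in play -> 60
print(sync_speed_at(5))  # past the 3TB Greens -> 160
```

With these made-up figures the sync crawls at the Greens' pace for the first 3TB, then leaps once only the larger, faster disks remain, mirroring the 70MB/s-to-160MB/s jump observed at the 50% mark.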
  8. Thanks for your replies. I expected some overhead, but I had no idea how much, since I have zero experience with software RAID-like solutions. I couldn't find any info to roughly quantify real-life performance. I'm still moving/sorting lots of data (disk to disk) to make my server nice and tidy, which bypasses the cache. That kind of transfer should diminish drastically soon. I'm still somewhat confused: since the CPU, RAM and bandwidth are all far from saturation, what would the typical limiting factor be? Could I do anything to slightly improve things?
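On the limiting-factor question, the usual back-of-the-envelope for Unraid's default read/modify/write mode is that each write costs a read pass plus a write pass over the same region on both the data disk and the parity disk, so sustained throughput lands well below the raw disk speed even with idle CPU, RAM and network. A hedged arithmetic sketch (the 150MB/s figure is illustrative):

```python
def rmw_write_estimate(disk_speed_mb_s: float) -> float:
    """Read/modify/write: read old data + old parity, then write new data
    + new parity. Both disks do a read pass and a write pass over the same
    sectors, so throughput is at best roughly half the raw speed -- the
    rotational wait between the read and the write makes it worse in
    practice."""
    return disk_speed_mb_s / 2

print(rmw_write_estimate(150))  # at best ~75 MB/s from a 150 MB/s disk
```

This is only an upper-bound estimate, not Unraid's documented behavior; turbo/reconstruct write trades this penalty for spinning up every disk, which is why its bottleneck is the slowest member instead.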
  9. Hi folks, I've been trying Unraid for the past month (1 hour remaining on my trial, actually), and I'm quite pleased with it so far. Since I'm impatient, have proper backups and had my future parity drive stuck somewhere due to COVID-19, I first set everything up without parity. All was fine; transfer speed within the server was around 150-180MB/s on average. Last Monday, the parity drive finally arrived. I precleared it (I get that it's not necessary anymore; the point was avoiding premature failure issues), set the disk as the new parity, and let Unraid do its job for the nex