
About Hikakiller


  1. Too bad. Any other way to make sure used SSDs are good?
  2. Well, I also use it as a stress test of sorts. I've never detected a bad drive that way, but I imagine if, say, the wear leveling went down 1% from a single drive pass, I'd return it. I buy a lot of used SSDs.
  3. Why not? I understand it technically reduces their life, but I'm using mostly Samsung drives with 40-400 TBW warranties. If I write 500 GB to it once, am I really reducing its life? My 850 Pro, for example, I've had since 2011, and I've only written 68 TB of the 400 TB warranty.
  4. Damn, I thought my drives were dead. Yeah, same issue here. SSD, HDD, doesn't matter what port either.
  5. EXACT same issue, and it's annoying. Did you find a fix? Edit: Never mind, it wasn't working for like 3 months, but it magically fixed itself?
  6. How can removing drives that aren't on the array cause instability?
  7. Why not? Many of us have hotswap bays and restarting things like 12th gen poweredge servers can take a LONG time.
  8. I'm not up on all your posts, but have you tried just making the Unraid server also the Plex server via Docker?
  9. Anyone else getting constant emails with 'cron for user root /sbin/fstrim -a -v | logger &> /dev/null' Here's a relevant (I think) screenshot of the log.
  10. Did you find anything? I absolutely hate cron scheduling.
  11. I'm not too smart but I think the link is dead. @gfjardim
  12. @limetech any insight? @Benson @johnnie.black Changing my vm dirty cache numbers didn't change my network transfer speed, but it made my average parity check go from 180 Mb/s to 970 Mb/s.
  13. Right, but I'm seeing a slowdown before the 5% cap. So even if it were forcing a hard stop, I'm not seeing the slowdown at that hard cache flush.
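A likely cause of the cron mail in post 9: `&>` is a bashism, and cron runs jobs under `/bin/sh`, which parses `cmd &> /dev/null` as `cmd &` (background the pipeline) followed by a bare `> /dev/null`. The verbose fstrim output therefore still reaches cron and gets mailed to root. A POSIX-safe redirection avoids that; the schedule below is illustrative only.

```shell
# Example crontab entry - schedule is illustrative, not the original one.
# "&>" only works in bash; under cron's /bin/sh use explicit redirection
# of both stdout and stderr so no output is left for cron to mail:
0 3 * * 0 /sbin/fstrim -a -v > /dev/null 2>&1
```

Dropping `-v` would also quiet fstrim's normal output, but the explicit redirection silences errors as well.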
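The "vm dirty cache numbers" in post 12 presumably refer to the kernel's dirty-page writeback sysctls. A minimal sketch of the knobs involved, with illustrative values that are assumptions, not the poster's actual settings:

```shell
# /etc/sysctl.conf fragment - values are examples only; tune for your RAM.
# dirty_background_ratio: % of RAM dirtied before background writeback starts.
# dirty_ratio: % of RAM dirtied before writing processes are throttled.
vm.dirty_background_ratio = 5
vm.dirty_ratio = 20
```

Applied with `sysctl -p`, or per key with `sysctl -w vm.dirty_ratio=20`; post 13's point is that the slowdown it describes appears before the `vm.dirty_ratio` hard limit would force synchronous writeback.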
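The used-SSD check described in posts 2 and 3 (a full write pass, then comparing SMART wear-leveling before and after) could be sketched like this. The function names are hypothetical helpers, the device path is an assumption, and `Wear_Leveling_Count` is the Samsung attribute name; other vendors report wear under different attribute names.

```shell
#!/bin/sh
# Sketch of the "write pass + wear check" test from the posts above.
# Requires smartmontools; must run as root against a real device.

# Read the normalized value (column 4 of `smartctl -A` output) of the
# Wear_Leveling_Count attribute. Attribute name varies by vendor.
wear_value() {
    smartctl -A "$1" | awk '/Wear_Leveling_Count/ {print $4}'
}

# Return success only if wear did not drop at all; per the post's rule of
# thumb, any drop from a single drive pass means the drive goes back.
wear_ok() {
    before=$1
    after=$2
    [ $((before - after)) -lt 1 ]
}

# Usage sketch (paths and sizes are illustrative):
#   before=$(wear_value /dev/sda)
#   dd if=/dev/zero of=/mnt/testdrive/fill bs=1M count=500000  # ~500 GB pass
#   sync && rm /mnt/testdrive/fill
#   after=$(wear_value /dev/sda)
#   wear_ok "$before" "$after" || echo "wear dropped - consider returning it"
```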