guru69

Members
  • Content Count

    28
  • Joined

  • Last visited

Community Reputation

1 Neutral

About guru69

  • Rank
    Member

  1. I had the same message from Fix Common Problems on a Dell Precision T5500 with dual Xeons. The issue was that "Intel Speedstep Technologies" was disabled in the BIOS. All good now!
  2. It doesn't seem to be causing any issues. I think Unraid is trying to spin down the SSDs; it might need a change in Unraid so it excludes SSD drives from spin-down.
  3. I've recently updated to the Nvidia version of Unraid 6.7.0 and am also noticing the errors. I have 4 Samsung 960 Pro NVMe SSD cache drives (3 on the motherboard, one on a PCI-E adapter). They are BTRFS/RAID10 and I have TRIM enabled.

     Jun 25 10:04:32 Tank root: HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
     Jun 25 10:04:32 Tank root: /dev/nvme1n1:
     Jun 25 10:04:32 Tank root: setting standby to 0 (off)
     Jun 25 10:04:32 Tank emhttpd: shcmd (31778): exit status: 25
     Jun 25 10:04:32 Tank emhttpd: shcmd (31779): /usr/sbin/hdparm -S0 /dev/nvme2n1
     Jun 25 10:04:32 Tank root: HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
     Jun 25 10:04:32 Tank root: /dev/nvme2n1:
     Jun 25 10:04:32 Tank root: setting standby to 0 (off)
     Jun 25 10:04:32 Tank emhttpd: shcmd (31779): exit status: 25
     Jun 25 10:04:32 Tank emhttpd: shcmd (31780): /usr/sbin/hdparm -S0 /dev/nvme0n1
     Jun 25 10:04:32 Tank root: HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
     Jun 25 10:04:32 Tank root: /dev/nvme0n1:
     Jun 25 10:04:32 Tank root: setting standby to 0 (off)
     Jun 25 10:04:32 Tank emhttpd: shcmd (31780): exit status: 25
     Jun 25 10:04:32 Tank emhttpd: shcmd (31781): /usr/sbin/hdparm -S0 /dev/nvme3n1
     Jun 25 10:04:32 Tank root: HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
     Jun 25 10:04:32 Tank root: /dev/nvme3n1:
     Jun 25 10:04:32 Tank root: setting standby to 0 (off)
     Jun 25 10:04:32 Tank emhttpd: shcmd (31781): exit status: 25
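     For reference, the errors in that log happen because hdparm's `-S0` issues `HDIO_DRIVE_CMD`, an ATA-only ioctl, which NVMe devices reject with "Inappropriate ioctl for device". A minimal sketch of guarding the spin-down call (a hypothetical wrapper, not Unraid's actual emhttpd code; the device names are examples):

     ```shell
     #!/bin/sh
     # Sketch: skip the ATA standby command for NVMe devices, since NVMe
     # has no ATA standby timer and hdparm's ioctl fails on it.
     for dev in /dev/nvme1n1 /dev/sda; do
       case "$dev" in
         /dev/nvme*) echo "skipping $dev: NVMe has no ATA standby timer" ;;
         *)          echo "would run: hdparm -S0 $dev" ;;
       esac
     done
     ```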
  4. Excellent, Quiks! This one has been so annoying. Thanks for sharing the fix!
  5. I'd like to request that this it87 driver be added to unRAID to support X399 motherboards' temperature sensors: https://github.com/groeck/it87 I am trying to get thermal readings on a Gigabyte Designare EX motherboard (a Ryzen Threadripper board). It seems to be working, judging from this: https://github.com/groeck/it87/issues/65 Thanks!
  6. Has anyone been able to get the temperature sensors working on the Gigabyte Designare EX? I found this; it looks like an updated driver is required: https://github.com/groeck/it87/issues/65
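     While waiting on the driver, a quick way to see which sensors the kernel currently exposes is to walk sysfs; if a working it87 module loads, the board's Super I/O chip should show up in this list (a generic sketch, not specific to this board):

     ```shell
     #!/bin/sh
     # Sketch: list hwmon devices and their first temperature reading.
     # With no working driver loaded, the board's sensor chip will simply
     # be absent from this list.
     for hw in /sys/class/hwmon/hwmon*; do
       [ -e "$hw/name" ] || continue
       name=$(cat "$hw/name")
       temp=$(cat "$hw"/temp*_input 2>/dev/null | head -n 1)
       printf '%s: %s\n' "$name" "${temp:-no temp inputs}"
     done
     ```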
  7. Disregard my comment about RAID10; I'm now having the same issue with RAID1. It looks like I got a bad Samsung 960 SSD: it works for about 3 days, then goes missing. What's odd is that it passes every test I can throw at it. I will replace this SSD and give RAID10 another go.
  8. I just completed my Ryzen build a few days ago and I agree with ars92: I did not change the C-States, nor did I do the zenstates fix. My system has now been up for a few days and I have been waiting to see if it hung, but so far it has been stable. I did upgrade the BIOS to the latest right away and also upgraded Unraid to 6.5.3 at the same time. My new build:
     Norco 4224 case
     Gigabyte X399 Designare EX
     Ryzen Threadripper 1950X
     64 GB Kingston ECC RAM
     4 x 512GB Samsung 960 NVMe SSDs, RAID1 cache (RAID10 was not stable for me)
     18 x HGST 4TB NAS drives, dual parity
     Dual 10GbE bond (LACP)
     I still haven't been able to get the temperature sensors working on this board.
  9. If it were a connection issue, I'd assume that once it landed in Unassigned Devices it would produce errors there as well. It completed Preclear twice and passed SMART tests. It also produced the same odd behavior in a different socket once back in the cache pool. I haven't seen any issues on RAID1 yet (up for an hour now), but I'll keep you posted. If it's a connection issue, then I'm sure the errors will return (fingers crossed).
  10. Maybe I spoke too soon. After 24 hours with 4 SSDs on RAID10, I got a message about a cache drive missing. The logs showed lots of "read errors corrected.." and 2 of my Dockers disappeared. I ran Preclear/Erase All Data on the newly unassigned SSD: 100% success. I switched the SSD drives around to different sockets (new build, to rule out the socket) and attempted RAID10 once more, with the same result (new socket, same SSD). I switched the array to RAID1 and all errors in the log have ceased, so I will restore appdata and see how it goes. Interestingly, dd reports the exact same speeds of 1.5GB/s to 1.6GB/s using RAID1; hopefully it's a bit more reliable.
  11. Here are my results from dd...
      (Before) 2 x Samsung 850 Pro 256GB SATA3 in RAID1 -> 425MB/s to 465MB/s
      (After) 4 x Samsung 960 Pro NVMe in RAID10 -> 1.5GB/s to 1.6GB/s
      I'm a happy camper!
  12. Yes, I've used dd in the past; I'm mostly interested in raw speeds. I wasn't sure if there was another tool or plugin in Unraid that was better. I will post my results to the forum. Thanks for your advice!
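      A minimal dd sequential test looks roughly like this, assuming the cache pool is mounted at /mnt/cache (the sketch defaults to /tmp as a stand-in path so it runs anywhere; point TESTFILE at the pool to measure the SSDs rather than RAM):

      ```shell
      #!/bin/sh
      # Sketch: simple sequential write/read test with dd. TESTFILE should
      # live on the pool being measured (e.g. /mnt/cache/ddtest on Unraid).
      TESTFILE=${TESTFILE:-/tmp/ddtest.bin}
      # Write 64 MiB; conv=fsync flushes to disk before dd reports a speed.
      dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
      # Read it back. On a real run, drop caches first (or use iflag=direct)
      # so the read hits the device instead of the page cache.
      dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1
      rm -f "$TESTFILE"
      ```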
  13. What do you recommend to test read/write speed on the cache RAID? I just ordered a 4th SSD, but am now interested in testing both ways.
  14. OK, I will order one more SSD. Thanks!
  15. Ahh yes, the write hole problem. I will forget RAID5 then. You're right, 3 is not a good number. Would it make more sense to pick up one more SSD and use RAID10? Am I right in assuming RAID10 will provide the best combination of write speed and reliability for the cache pool?
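      For the record, once the fourth SSD is in, the profile change on a btrfs pool is a balance with convert filters. A dry-run sketch (Unraid normally drives this itself when the pool slot count changes; the /mnt/cache mount point and the DRYRUN wrapper are assumptions here, not part of Unraid):

      ```shell
      #!/bin/sh
      # Sketch (dry run by default): convert a btrfs pool's data and
      # metadata profiles to RAID10. Set DRYRUN=0 to actually execute
      # against a mounted pool.
      DRYRUN=${DRYRUN:-1}
      run() {
        if [ "$DRYRUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
      }
      run btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache
      run btrfs filesystem df /mnt/cache
      ```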