
Shunz

Members
  • Content Count

    36
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Shunz

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed
  1. The 2 HPE Samsung SM863 SSDs are running as a RAID 1 cache pool on my Unraid. They work perfectly so far, though temps are wrongly reported, around 12 to 15 degrees too low - which, according to some reddit threads, can be a common issue for certain enterprise drives used outside the environments they were customized for. Still, cheaper than a QVO, but faster, more reliable, and with what, 15x the endurance? (Though I'll probably never even reach 10% of that endurance before it's time to change them again.) Anyway, sharing the good deal!
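Since the under-reporting seems to be a roughly constant amount, one crude workaround (my own sketch, not an official fix - the offset value is a guess and would need calibrating against a trusted sensor in the same chassis) is to correct the reported value before acting on it:

```python
# Hedged sketch: correct an under-reported SMART temperature by a fixed offset.
# REPORTED_OFFSET_C is an assumption - calibrate it yourself; the estimate
# above is that these drives read about 12-15 degrees C too low.
REPORTED_OFFSET_C = 13

def corrected_temp(reported_c: int, offset_c: int = REPORTED_OFFSET_C) -> int:
    """Add the calibration offset to the SMART-reported temperature."""
    return reported_c + offset_c

# A drive reporting 25 C would actually be sitting around 38 C.
print(corrected_temp(25))
```

This only makes sense if the error really is a constant offset; if it drifts with load, a fixed correction would over- or under-shoot.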
  2. I bought mine here: https://www.ebay.com/itm/113767323096 . Reviews of this seller look good (at least not many issues). The seller also sells the 5100 Max (among other server SSDs like the Intel DC S3520); I wonder if they are the same merchant as GoHardDrive. I bought 4 units, including 2 for my friends. They arrived in a proper 5-unit carton, and the serial numbers are very close to each other. The anti-static wrap looks great and the SSDs look brand new as far as I can recall; maybe I should check for traces of usage on the connector pins of that last unit when my friend opens his.

Basically, at this moment everything looks legit. Both my drives have been working well and appear to perform better than the advertised speeds, at least based on CrystalDiskMark and some Unraid situations. Again, the only problems are that I can't upgrade the firmware (these being HPE drives), and the reported temperatures are a good 13-15 degrees lower than they should be. I even ran a preclear on them (I know I should NOT do that to SSDs, heh) to make sure everything reads okay before using them as my cache drives. They also do not support the low-power sleep states that consumer drives have.

At these prices, these feel like wonderful drives for cache pools, and they can support write-intensive usage like dockers or VMs. My hypothesis is that such enterprise drive names and odd capacities (e.g. 1.92TB) are not what most people search for, and these being HPE rebrands, merchants find it worthwhile to sell them at a low price when they have ample surplus stock to clear. (Heh, I shouldn't talk about this so much if I want things to stay this way.)

CrystalDiskMark shots below. They probably can't tell the whole story (e.g. no latency values, etc.), but I ran these tests anyway for the sake of making sure they aren't lemons.
[Attachments: CrystalDiskMark results for both SM863 1.92TB drives; CrystalDiskMark results for an 850 Pro 512GB, 860 EVO 4TB, and an Intel DC 3.84TB; photo of the carton the SM863 drives arrived in]
  3. Don't mean to necro this thread, but there are some really good deals on HPE (HP Enterprise) drives - similar to your Intel D3-S4510 SSDs - available at a steal (actually better performance, in the case of the Samsungs). These enterprise drives have crazy endurance (e.g. the Micron 5100 Max is even more over-provisioned than the 5100 Pro you were looking at). Posted these on the Good Deals forum.
Micron 5100 Max 1.92TB - around $200 to $220 - https://www.amazon.com/HP-Micron-2-5-inch-Internal-MTFDDAK1T9TCC-1AR1ZABHA/dp/B07R3BYPM6/ - 17.6PB (17,600 TBW)
Samsung SM863 1.92TB - around $215-229 - https://www.amazon.com/HP-Samsung-MZ-7KM1T90-2-5-inch-Internal/dp/B07SNH1THV - 12.32PB (12,320 TBW)
  4. 2 really great enterprise-grade SSDs going at what I'd feel is a steal. Both appear to be HPE (HP Enterprise) branded SSDs, and they each have a ridonculous 2-digit petabyte endurance! For comparison, at time of writing, a 2TB Samsung 860 Pro and 860 EVO go for $477 and $297 respectively (endurance 2400TBW and 1200TBW). Unfortunately, it is nearly impossible to find side-by-side reviews and benchmark comparisons of these types of drives against consumer SATA drives, but they are certainly more than capable (especially the Samsung) for read-intensive server/enterprise loads. I'm personally really curious how these would fare against consumer drives in a desktop PC. But being so heavily over-provisioned and having insane endurance, these should be perfect for heavy downloading/par/unrar, and for content creators (render videos without worrying about NAND wear).

Micron 5100 Max 1.92TB - around $200 to $220 - https://www.amazon.com/HP-Micron-2-5-inch-Internal-MTFDDAK1T9TCC-1AR1ZABHA/dp/B07R3BYPM6/
17.6PB (17,600 TBW) endurance. The Amazon page says it's MLC, though according to Micron brochures it is eTLC NAND. Reviews are decent, but the Sammys seem to perform better.

Samsung SM863 1.92TB - around $215-229 - https://www.amazon.com/HP-Samsung-MZ-7KM1T90-2-5-inch-Internal/dp/B07SNH1THV
12.32PB (12,320 TBW) endurance. Probably a bona fide MLC NAND drive.

I splurged on 2 of these SM863s a week ago for my cache pool (RAID 1), from eBay. They seem to work really well so far, just that these being HPE drives, the model displayed on Unraid isn't Samsung SM863 but the HPE rebrand. Temperatures appear to be wrongly reported by the SSDs as 10+ degrees lower than ambient temp. Will post some pictures and CrystalDiskMark benchmarks if anyone is interested (summary - they perform roughly similar to my 850 Pro 512GB, 860 EVO 4TB, and 850 EVO 512GB). Am I missing something - are there problems with these HPE SSDs? (e.g. dated firmware that's difficult to upgrade)
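For a rough sense of scale, the endurance figures quoted above work out like this (my own arithmetic, using only the TBW numbers listed in the post):

```python
# Rated endurance in TBW (terabytes written), from the figures quoted above.
drives = {
    "Micron 5100 Max 1.92TB": 17600,
    "Samsung SM863 1.92TB": 12320,
    "Samsung 860 Pro 2TB": 2400,
    "Samsung 860 EVO 2TB": 1200,
}

# Compare everything against the consumer flagship (860 Pro 2TB).
baseline = drives["Samsung 860 Pro 2TB"]
for name, tbw in drives.items():
    print(f"{name}: {tbw} TBW ({tbw / baseline:.1f}x the 860 Pro)")
```

So the Micron carries roughly 7x and the SM863 roughly 5x the rated writes of the 860 Pro, at well under half its price.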
  5. Oh crap - I've been planning to purchase extra RAM since last year so that I could comfortably transcode to RAM. So I finally just purchased an extra 16GB of RAM - before reading this. Bummer
  6. I've been getting one of these roughly every 2 months... and now it's happening even on Unraid v6.0.1. It happened live while I was looking at the main screen - the error count jumped from 1 to 4 during a refresh, and there were some noises from the disk. I didn't save the SMART short report that ran after the error, but it looked okay; doing a preclear cycle now. All other disks look good - no increase in CRC error counts, etc. Anything I might be missing?

Aug 29 23:03:50 unraid kernel: mpt2sas0: log_info(0x31110d00): originator(PL), code(0x11), sub_code(0x0d00)
Aug 29 23:03:50 unraid kernel: sd 1:0:0:0: [sdg] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
Aug 29 23:03:50 unraid kernel: sd 1:0:0:0: [sdg] Sense Key : 0x2 [current]
Aug 29 23:03:50 unraid kernel: sd 1:0:0:0: [sdg] ASC=0x4 ASCQ=0x0
Aug 29 23:03:50 unraid kernel: sd 1:0:0:0: [sdg] CDB: opcode=0x88 88 00 00 00 00 02 81 56 1c f0 00 00 00 08 00 00
Aug 29 23:03:50 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10759838960
Aug 29 23:03:50 unraid kernel: md: disk5 read error, sector=10759838896
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] Sense Key : 0x2 [current]
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] ASC=0x4 ASCQ=0x0
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] CDB: opcode=0x8a 8a 08 00 00 00 02 81 56 1c f0 00 00 00 08 00 00
Aug 29 23:04:01 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10759838960
Aug 29 23:04:01 unraid kernel: md: disk5 write error, sector=10759838896
Aug 29 23:04:01 unraid kernel: md: recovery thread woken up ...
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] Sense Key : 0x2 [current]
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] ASC=0x4 ASCQ=0x0
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] CDB: opcode=0x88 88 00 00 00 00 02 86 74 6c 28 00 00 00 08 00 00
Aug 29 23:04:01 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10845711400
Aug 29 23:04:01 unraid kernel: md: disk5 read error, sector=10845711336
Aug 29 23:04:01 unraid kernel: md: recovery thread has nothing to resync
Aug 29 23:04:02 unraid kernel: sd 1:0:0:0: [sdg] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
Aug 29 23:04:02 unraid kernel: sd 1:0:0:0: [sdg] Sense Key : 0x2 [current]
Aug 29 23:04:02 unraid kernel: sd 1:0:0:0: [sdg] ASC=0x4 ASCQ=0x0
Aug 29 23:04:02 unraid kernel: sd 1:0:0:0: [sdg] CDB: opcode=0x8a 8a 08 00 00 00 02 86 74 6c 28 00 00 00 08 00 00
Aug 29 23:04:02 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10845711400
Aug 29 23:04:02 unraid kernel: md: disk5 write error, sector=10845711336
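When these errors recur over months, it helps to know whether the same sectors keep failing or new ones appear each time. A small sketch of my own (not part of the original post) that pulls the device and failing sector out of `blk_update_request` lines like the ones above:

```python
import re

# Match kernel lines of the form seen in the syslog excerpt above:
#   blk_update_request: I/O error, dev sdg, sector 10759838960
LOG_RE = re.compile(r"blk_update_request: I/O error, dev (\w+), sector (\d+)")

def failing_sectors(log_lines):
    """Return {device: sorted list of unique failing sectors} from syslog lines."""
    errors = {}
    for line in log_lines:
        m = LOG_RE.search(line)
        if m:
            dev, sector = m.group(1), int(m.group(2))
            errors.setdefault(dev, set()).add(sector)
    return {dev: sorted(secs) for dev, secs in errors.items()}

sample = [
    "Aug 29 23:03:50 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10759838960",
    "Aug 29 23:04:01 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10845711400",
]
print(failing_sectors(sample))
```

Feeding it a whole syslog (e.g. the lines from `/var/log/syslog`) would show at a glance whether the errors cluster on a few sectors (media damage) or scatter widely (more consistent with a cable/controller issue).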
  7. Finally moved to Unraid v6!! Setting up Unraid and the plugins was much easier than I expected!! Did a clean install (with several safe .cfg files from v5) on an 850 EVO. Not sure whether I've made the right choice in using LimeTech's Plex docker install rather than Needo's. It took a while to figure out the mapping of the Plex Library directories: I needed to install and run Plex once to see where the Library directory goes, then used the command line to copy the Plex libraries from the old cache drive over to the new SSD. Plex posters now load so much faster!
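The copy step above can be sketched like this. The paths here are hypothetical stand-ins (your old cache mount and the docker's appdata share will differ), and the snippet builds a throwaway source tree so it runs anywhere:

```python
import shutil
from pathlib import Path

# Hypothetical stand-in paths - substitute your actual old cache mount and
# the new SSD's appdata share.
old_lib = Path("/tmp/old_cache/appdata/Plex Media Server")
new_lib = Path("/tmp/new_cache/appdata/Plex Media Server")

# Build a tiny demo source tree so the sketch is runnable end to end.
(old_lib / "Metadata").mkdir(parents=True, exist_ok=True)
(old_lib / "Preferences.xml").write_text("<Preferences/>")

# copytree preserves the directory layout Plex expects; dirs_exist_ok
# (Python 3.8+) lets the copy be re-run safely.
new_lib.parent.mkdir(parents=True, exist_ok=True)
shutil.copytree(old_lib, new_lib, dirs_exist_ok=True)
print((new_lib / "Preferences.xml").exists())
```

Stopping the Plex container before copying, and starting it only after the copy completes, avoids copying a database that is mid-write.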
  8. Does migrating Plex from 32-bit to 64-bit work okay on the existing 32-bit library files? I don't want Plex to sift through the entire library again - and more importantly, I don't want to re-do all the manual edits to the Plex libraries all over again!!! https://forums.plex.tv/discussion/88712/migrating-plex-from-32-to-64-bit-on-ubuntu-should-i-expect-issues
  9. Thanks guys! Interesting stuff! I wonder if 16GB of RAM is enough - my Plex transcodes to the Roku are always done at 20Mbps. I would love to upgrade to 32GB, though at the moment purchasing another 16GB of ECC memory would hurt a little, haha.
  10. Any comments on which would be better? I'm about to upgrade from a v5 installation to v6, and this seems like a great occasion to replace my existing cache drive (and especially the Plex Library and transcode folders) with an SSD.

Usage patterns:
Array: WD Red x6 + 1 HGST parity, Core i3-4150, 16GB, on an X10SL7-F
Currently mostly used as a media server running Plex, maximum 3 streams at a time. My transfers to the cache drive vary from about 50GB to 200+ GB at a time. Ideally one would now use a 2x SSD cache pool and run Plex off the pool...

Existing setup, on Unraid v5:
Cache: WD Black 750GB (2.5-inch, 1.5 years old)
> Includes Plex Library and transcode folders (all in a share-only App folder). Covers and menus currently take some time to load!!

Upgrade option 1, on Unraid v6
Benefit: no more spinners for the cache drive; straightforward clean upgrade from v5 to v6
Upgrade cache drive to a Samsung 850 EVO 500GB SSD
> Includes Plex Library and transcode folders (all in a share-only App folder)

Upgrade option 2, on Unraid v6
Benefit: I get to upgrade my PC gaming drive from 256GB to 500GB
No change to cache drive: WD Black 750GB (2.5-inch)
Add the 2-year-old SSD from my PC (about 6.5TBW): Samsung 840 Pro 256GB, for the Plex Library and transcode folders

Upgrade option 3, on Unraid v6
Benefit: same as 1 and 2, but at 1/2 the cost
Swap cache drive: add the 2-year-old Samsung 840 Pro 256GB (about 6.5TBW)
> Includes Plex Library and transcode folders (all in a share-only App folder)
  11. Did a quick read-through of the massive v6 upgrading guide and the v6 manual. Fwaaah! Some really amazing and long-awaited improvements - better file systems, streamlining (v5 feels generally patchwork by comparison), and dockers are probably my favorite ones right now.
  12. Thanks Rob, bjp999!! I guess I really need to take a good look at v6 soon...!
  13. Thanks Rob! I'm using a Supermicro X10SL7-F with an LSI 2308 SAS controller. There don't seem to be any firmware updates for the controller, but the board BIOS is up to date. Oh wow - I'm only reading it now - Unraid v6 has been officially stable since last month! Woohoo!!! I've been waiting for that for a while! I'm going to take some time to read up on v6 and the upgrade methods soon. The disk rebuild should take another half a day. Should I RMA this other WD Red? It is the drive that failed previously, and it is now unused. After noticing the UDMA_CRC_Error_Count, I've already replaced ALL SATA cables with short ones.

=== START OF INFORMATION SECTION ===
Device Model: WDC WD60EFRX-68MYMN1
LU WWN Device Id: 5 0014ee 20b26f3dc
Firmware Version: 82.00A82
User Capacity: 6,001,175,126,016 bytes [6.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5700 rpm
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Wed Jul 15 10:56:30 2015 SGT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x00) Offline data collection activity was never started. Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run.
Total time to complete Offline data collection: ( 4424) seconds.
Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
Short self-test routine recommended polling time: ( 2) minutes.
Extended self-test routine recommended polling time: ( 698) minutes.
Conveyance self-test routine recommended polling time: ( 5) minutes.
SCT capabilities: (0x303d) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           0
  3 Spin_Up_Time            0x0027 217   187   021    Pre-fail Always  -           8108
  4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           498
  5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
  7 Seek_Error_Rate         0x002e 200   200   000    Old_age  Always  -           0
  9 Power_On_Hours          0x0032 094   094   000    Old_age  Always  -           4965
 10 Spin_Retry_Count        0x0032 100   100   000    Old_age  Always  -           0
 11 Calibration_Retry_Count 0x0032 100   253   000    Old_age  Always  -           0
 12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           11
192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           0
193 Load_Cycle_Count        0x0032 200   200   000    Old_age  Always  -           1045
194 Temperature_Celsius     0x0022 116   110   000    Old_age  Always  -           36
196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
198 Offline_Uncorrectable   0x0030 100   253   000    Old_age  Offline -           0
199 UDMA_CRC_Error_Count    0x0032 200   170   000    Old_age  Always  -           3261
200 Multi_Zone_Error_Rate   0x0008 200   200   000    Old_age  Offline -           0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num Test_Description  Status                   Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline  Completed without error  00%       4295            -
# 2 Short offline     Completed without error  00%       4283            -
# 3 Short offline     Completed without error  00%       4283            -
# 4 Extended offline  Aborted by host          90%       4283            -

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
   1       0       0 Not_testing
   2       0       0 Not_testing
   3       0       0 Not_testing
   4       0       0 Not_testing
   5       0       0 Not_testing
Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
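For the RMA question, the attributes that usually matter are reallocated/pending/uncorrectable sectors and the CRC count. A small sketch of my own (not from the thread) that parses rows of a `smartctl -A` attribute table like the dump above and pulls out the raw values worth watching:

```python
import re

# Attributes most relevant to a failing-drive / cabling diagnosis.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable", "UDMA_CRC_Error_Count"}

# One row of a smartctl attribute table:
#   ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
ROW_RE = re.compile(
    r"^\s*(\d+)\s+(\S+)\s+0x[0-9a-f]{4}\s+\d+\s+\d+\s+\d+\s+\S+\s+\S+\s+\S+\s+(\d+)")

def watch_values(table_text):
    """Return {attribute_name: raw_value} for the watched attributes."""
    out = {}
    for line in table_text.splitlines():
        m = ROW_RE.match(line)
        if m and m.group(2) in WATCH:
            out[m.group(2)] = int(m.group(3))
    return out

sample = """\
  5 Reallocated_Sector_Ct   0x0033 200 200 140 Pre-fail Always - 0
197 Current_Pending_Sector  0x0032 200 200 000 Old_age  Always - 0
199 UDMA_CRC_Error_Count    0x0032 200 170 000 Old_age  Always - 3261
"""
print(watch_values(sample))
```

On the dump above this pattern would surface exactly what the thread is discussing: zero reallocated/pending/uncorrectable sectors but a large UDMA_CRC_Error_Count, which points at cabling rather than the platters.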
  14. My 2nd redball in 3 months...! It's a Western Digital Red 6TB. Does the Power-Off_Retract_Count look normal? The value has been unchanged since May 18.
  15. Heh, in contrast, today a Raspberry Pi costs just a few cups of Starbucks!