MortenSchmidt

Members
  • Content Count: 299
  • Joined
  • Last visited
  • Days Won: 1

MortenSchmidt last won the day on February 26 2017

MortenSchmidt had the most liked content!

Community Reputation: 1 Neutral

About MortenSchmidt
  • Rank: Advanced Member
  • Gender: Undisclosed
  1. The sale was April 20-21 only, as noted in the headline. It was in their email promo (which I normally never read).
  2. Green (model WDC_WD50EZRX). But I fail to see how that matters. Most of those colors are the same product with different settings, label, warranty terms and price. The beautiful art of product segmentation.
  3. So, how have these been working for you so far?
  4. http://www.bestbuy.ca/en-CA/product/wd-wd-my-book-5tb-3-5-external-hard-drive-wdbfjk0050hbk-nesn-wdbfjk0050hbk-nesn/10384896.aspx I shucked a 5TB Elements last summer; it works fine so far, and the internal drive's serial still had warranty coverage when I looked it up. This link is the My Book, but I assume it is the same drive inside. Cheap hard drives have been hard to come by lately up here. If you have any better deals, please share.
  5. I too get this; it has happened probably 3 or 4 times in total. Today it happened on 6.1.9 while building a docker image I'm (trying to) develop. In my case, syslogd is the sender, and I noticed my tmpfs (/var/log) is full. Next time you guys get this, check df -h and look for:

     Filesystem      Size  Used Avail Use% Mounted on
     tmpfs           128M  128M     0 100% /var/log

     In my case /var/log/docker.log.1 was about 127MB in size (mostly gibberish). Last time this happened, docker didn't like it a lot either: already-running containers worked fine, but I was unable to start/stop new ones (the docker daemon seems to crash, and it's impossible to check syslog since that stops working too). Any good ideas how to prevent docker logs from ballooning like they seem to do for me?
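     I haven't verified this on Unraid, so treat it as a sketch: one option is a size-based logrotate rule for the daemon log (the path and limits below are assumptions), and Docker's json-file options can cap the per-container logs, which live under /var/lib/docker rather than /var/log:

     # /etc/logrotate.d/docker (assumed path) - rotate the daemon log once it
     # reaches 10 MB and keep only one old copy, since /var/log is a small tmpfs
     /var/log/docker.log {
         size 10M
         rotate 1
         copytruncate
         missingok
         notifempty
     }

     # Cap an individual container's json-file logs (my-image is a placeholder)
     docker run -d --log-opt max-size=10m --log-opt max-file=3 my-image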
  6. I kept looking at this a bit longer - the other way to get rid of the false warnings is to change from monitoring raw to normalized values. This way one could still monitor field 187's normalized value. I also looked closer at the smartctl -x report, which breaks out which fields are relevant to pre-failure prediction and which are not:

     SMART Attributes Data Structure revision number: 10
     Vendor Specific SMART Attributes with Thresholds:
     ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
       5 Reallocated_Sector_Ct   -O--CK  100   100   000    -    0
       9 Power_On_Hours          -O--CK  000   000   000    -    909937 (110 2 0)
      12 Power_Cycle_Count       -O--CK  100   100   000    -    116
     170 Unknown_Attribute       PO--CK  100   100   010    -    0
     171 Unknown_Attribute       -O--CK  100   100   000    -    0
     172 Unknown_Attribute       -O--CK  100   100   000    -    0
     174 Unknown_Attribute       -O--CK  100   100   000    -    27
     184 End-to-End_Error        PO--CK  100   100   090    -    0
     187 Reported_Uncorrect      POSR--  118   118   050    -    199827011
     192 Power-Off_Retract_Count -O--CK  100   100   000    -    27
     225 Unknown_SSD_Attribute   -O--CK  100   100   000    -    327259
     226 Unknown_SSD_Attribute   -O--CK  100   100   000    -    65535
     227 Unknown_SSD_Attribute   -O--CK  100   100   000    -    30
     228 Power-off_Retract_Count -O--CK  100   100   000    -    65535
     232 Available_Reservd_Space PO--CK  100   100   010    -    0
     233 Media_Wearout_Indicator -O--CK  100   100   000    -    0
     241 Total_LBAs_Written      -O--CK  100   100   000    -    327259
     242 Total_LBAs_Read         -O--CK  100   100   000    -    146395
     249 Unknown_Attribute       PO--C-  100   100   000    -    11051
                                 ||||||_ K auto-keep
                                 |||||__ C event count
                                 ||||___ R error rate
                                 |||____ S speed/performance
                                 ||_____ O updated online
                                 |______ P prefailure warning

     I decided to add fields 170, 184 and 232, since these are classified as "prefailure warning" fields, and also field 233 because of its title. (But not 249, since that already has a high raw value which, like 187, is not 'auto-kept', so it presumably resets with power-on like 187 does.) Interestingly, the only field that Unraid monitors by default that is classified as a prefailure warning field for this SSD is 187 (which it monitors incorrectly in raw mode). I'm not sure whether monitoring all those fields including 187 in normalized mode, or all of them excluding 187 in raw mode, is better. I'm leaning toward the latter, thinking that will give the earliest warning possible; but on the other hand, since it is just a cache device, maybe we don't need to be concerned with the raw values. In case anyone at Limetech sees this, the obvious improvement suggestion is to offer the possibility to monitor some fields in raw mode and others in normalized mode. Or implement the extended attributes so field 187 can be monitored in raw mode correctly.
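     For reference, a quick way to pull out just the prefailure-flagged attributes from the brief flag format shown above (a sketch; /dev/sdX stands in for your cache device):

     # List vendor attributes in the same FLAGS layout as smartctl -x,
     # then keep only rows whose flag string starts with 'P' (prefailure warning)
     smartctl -A -f brief /dev/sdX | awk '$1 ~ /^[0-9]+$/ && $3 ~ /^P/'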
  7. Back again. It took all of 10 minutes for the uncorrectable error count notifications to start popping up. With smartctl -a and the Unraid WebGui:

     ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
     187 Uncorrectable_Error_Cnt 0x000f  114   114   050    Pre-fail Always  -           76820312

     (BTW, please note that WORST and VALUE are way above the threshold.) With smartctl -x the same raw value is reported, but this is added below:

     Device Statistics (GP Log 0x04)
     Page  Offset Size  Value  Description
     4     =====  =     =      ==  General Errors Statistics (rev 1) ==
     4     0x008  4     0      Number of Reported Uncorrectable Errors
  8. I too have this. Same drive, Intel 520 120GB. I'm flooded with "uncorrectable error count" emails and notifications in the WebGui. I had the drive hooked up to a Windows PC running the Intel SSD Toolbox today to check for new firmware, but the drive already has the latest firmware version "400i". Did you find a solution? What did you find that suggested this is caused by a bug in the SSD? I'm thinking that since the power-on hours reporting is also messed up in Linux but reports correctly in Windows with the SSD Toolbox, maybe the problem is with Linux or its Unraid implementation. This topic seems to suggest the drive works with "extended logs" and requires running "smartctl -x" instead of "smartctl -a": https://communities.intel.com/thread/53491?start=0&tstart=0 For what it's worth, with smartctl -a (and in the Unraid WebGui) the power-on hours are reported like this:

     ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE    UPDATED WHEN_FAILED RAW_VALUE
       9 Power_On_Hours_and_Msec 0x0032  000   000   000    Old_age Always  -           909799h+57m+24.280s

     But with smartctl -x it adds this interpretation:

     Device Statistics (GP Log 0x04)
     Page  Offset Size  Value  Description
     1     =====  =     =      ==  General Statistics (rev 2) ==
     1     0x010  4     15004  Power-on Hours

     So the -x output makes a lot more sense, and it is the same value I see in Windows with the SSD Toolbox. I have just rebooted the server, so I don't have the uncorrectable notifications popping up yet; I can update this later when they return (from what I have read, the uncorrectable error count raw value resets at power-on with these SSDs).
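     If you only want the GP Log 0x04 values (the ones that match what the Intel toolbox shows) without the full -x dump, smartmontools can print just the device statistics pages; a sketch, with /dev/sdX standing in for the SSD:

     # Print only the Device Statistics log (GP Log 0x04), which holds the
     # sane "Power-on Hours" and "Number of Reported Uncorrectable Errors" values
     smartctl -l devstat /dev/sdX

     # Full extended report: attributes plus device statistics and SATA phy logs
     smartctl -x /dev/sdX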
  9. I ran 4TB Seagate drives on mine, so I don't think this is true. Ditto. If you have plenty of PCIe slots, some cheap 2-port PCIe 1x cards can be a good way to go - what specifically was your concern about performance with 1x cards?
  10. A forum user PM'ed me that he was having trouble getting this to compile with Unraid 6.1.9. If this is a general problem I'd like to know - I never updated beyond 6.0.1, but I suppose I'm going to eventually. In case anyone else needs the compiled file, here is a copy of mine: https://drive.google.com/file/d/0B_rN33HaibHiMW43a2N3NDBqZjQ/view?usp=sharing
  11. http://www.amazon.com/Seagate-Desktop-External-Storage-STDT5000100/dp/B00J0O5R2I/ref=sr_1_1?s=pc&ie=UTF8&qid=1448735873&sr=1-1&keywords=Seagate+Backup+Plus+5TB This deal is back, this time at $110.99. Looks like they were on for $114 last week. I know the 8TB version is SMR, but does anyone know if this 5TB version is PMR or SMR?
  12. I know nothing of these drives, but if you need WDIDLE you could also use this and do it on the Unraid box without having to boot from USB: http://lime-technology.com/forum/index.php?topic=40062.msg376418#msg376418
  13. If you wanted to make sure you encountered this problem, what you would do is free up 10-20% on a drive, fill it up again, and repeat several times. I never had a problem simply filling a reiserfs drive; it was only once a disk had been partially emptied and refilled. It could be a month or a year until I would see the problems, and on many disks I never did. I have even seen the problems on disks that were never filled beyond 98% (40GB free) - just a matter of time and use. I too migrated to XFS and haven't had problems since.
  14. Thanks for responding. I still think it would be worthwhile to list it as a known defect in the first post. EDIT: Or call it a known issue. Whatever. Just please, let's not fill up this thread with pointless chatter. Whether this is an issue with the NZBget code or a docker issue, listing that sucker right up front would keep the thread more reader-friendly IMHO - just a friendly suggestion.