SSD

Everything posted by SSD

  1. Seagate drives in the 1T-3T range were pretty awful IMO, but newer drives (8T+) have fared much better. Seagate was sued over their drive failure rates, and I believe they cleaned up their act as a result. I still prefer HGST drives when they are price competitive, but cheap shucked 8T Seagates and WDs (<$200 USD) have held up well for me.
  2. Love it! Is there a way to take a backup of an existing VM that is not running on a passthrough physical disk, restore it to the physical disk, and make this magic happen without losing all the installed apps and configurations from the current VM? Since a lot of people already have a Win10 VM configured, this would be very helpful and would avoid relicensing Windows and other components. Maybe a future video??
  3. Yes - typically read speed is gated by the speed of the single HDD you are accessing. That speed is a function of rotational speed plus data density. A 7200 RPM 12TB drive will be considerably faster than a 5400 RPM 2T drive (and even faster than a 7200 RPM 2T drive). By faster I mean sequential read and write speeds (you can measure this directly, as shown in the sketch below). Although HBAs are normally best, there are times a RAID card, in addition to the HBA, can be helpful. My parity disk is actually a RAID0 volume from my Areca HW RAID card. This has two advantages: 1 - the disk is faster due to the striped RAID0 architecture, and 2 - I am able to economically repurpose a pair of smaller drives for parity. So I am able to buy a single new larger disk, repurpose 2 smaller drives as parity, and add the new large disk to the array, taking advantage of its full capacity. Without a HW RAID card, you'd have to buy 2 larger disks to see value. I actually consider both of these to be very worthwhile. Note that unRAID does not work with all RAID cards. The Arecas are the only ones I know are compatible. Some features don't work (unRAID can't spin down RAID volumes), but the RAID card includes configurable spindown features that work quite well, so this is not really a serious limitation. Do some research if you buy an Areca controller - there are some settings you need to use. I find SSDs to be quite reliable, and I back up the files I want extra protection for to the array. I have not experimented with cache pools. With your configuration you would likely get the read and write performance of the slower SSD, but I'm not 100% sure - it depends on how smart the BTRFS pool is.
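     A quick way to see the sequential-speed difference for yourself is hdparm's buffered read test. A minimal sketch - the device names are examples, so substitute your own and run it while the drives are otherwise idle:

       # rough sequential read benchmark (run while the drives are otherwise idle)
       hdparm -t /dev/sdb     # e.g. a 7200 RPM 12TB drive
       hdparm -t /dev/sdc     # e.g. a 5400 RPM 2TB drive

     The bigger, denser drive should post a noticeably higher MB/sec figure.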
  4. The instructions do not include shrinking an array to remove a failed disk. They are focused on removing a functional drive from an array. The failed drive is simulated by parity plus all of the other disks in the array working together. Parity itself is not a backup. You may have already gone too far with the steps to be able to recover the data on the failed disk. But I'm not sure exactly how far you went, so there may yet be a way to recover. And even if you went farther, there still may be a way to put the array back as it was and have it simulate the failed disk. What you should have done (and what anyone finding this thread for instructions should do) is copy the data from the failed disk (which unRAID would be simulating, so it would have appeared as if it were present) to other non-failed disks in your array that had available space (a sketch of this step is below). Once the data was copied, you could have done a new config, redefined the array omitting the failed disk, and rebuilt parity. The net effect is that you would have kept all of your data while using fewer physical drives; the amount of available space would have dropped by the size of the failed disk. Give some more details on the current state and someone may be able to assist.
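     For anyone following along, a minimal sketch of the "copy off the emulated disk first" step - the disk numbers are made up, so adjust them to your own layout:

       # disk3 is the failed (emulated) disk; disk5 has enough free space
       rsync -avX /mnt/disk3/ /mnt/disk5/
       # re-run with --checksum --dry-run; if no files are listed, everything copied intact
       rsync -avX --checksum --dry-run /mnt/disk3/ /mnt/disk5/

     After that you would do the New Config, leave the failed disk out, and let parity rebuild.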
  5. @wisem2540 - Your CPU has a Passmark of about 4800. That would make transcoding HEVC streams in real time very unlikely. But you should be seeing the CPU getting heavily utilized; if you are not, you likely have other issues. Just for reference, I have an i9 7920X (12 core / 24 thread), each core running at 4.5GHz. I am able to transcode one 4K HEVC stream using between 1/3 and 1/2 of the CPU. I have never tried 2 at a time, but it might be possible. The Passmark is estimated at about 29,000. (Note my CPU was delidded to improve cooling and is able to run at 4.5GHz with all cores active, unlike the stock chip that is limited to 2.9GHz when all cores are active.) If the CPU is not hitting high utilization, the first thing you might want to check is where the transcoded output is stored. The best place is RAM. (See HERE, and the sketch below.) If you are transcoding to an SSD, it might be slowing the transcode mildly to moderately - worse if the SSD routinely gets a lot of writes and has not been TRIMmed in a while. And if you are transcoding to an array disk, you could definitely expect a very significant performance hit that would show up as lower CPU utilization due to waiting on disk I/O. If you are transcoding to RAM and still not seeing high CPU usage, it could be that the number of cores available to the Plex docker container is limited. I tend to give Plex access to all the cores (except core 0), and also give my Windows VM access to all cores (except core 0). Since it is rare that my VM is going to be doing something processing intensive while I am watching a movie, and vice versa, this basically gives both Plex and my VM access to 11/12ths of the processing horsepower of my CPU. A third thing to check - sometimes a player may "advertise" to Plex that it has the capability to directly play back an HEVC stream when it is really not able to do so in real time without glitching. You can look at the Plex Web GUI while an HEVC video is playing and see if it is transcoding or doing direct play. I was seeing HEVC streams lagging and couldn't understand why, and found that Plex was NOT transcoding, and was instead depending on my old Braswell NUC to play it back - which it is not able to do with 10-bit HEVC. Disabling direct play in the player made Plex do the transcode; then my full server horsepower was available, and the NUC could play it back perfectly smoothly. I can't really think of other reasons your CPU would not be pegged trying to transcode an HEVC stream.
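     A minimal sketch of the transcode-to-RAM idea, written as a plain docker run for clarity - in practice you would add the equivalent path mapping in the unRAID docker template, and the paths and core range here are examples:

       # Map the container's transcode directory to /tmp on the host (RAM-backed on unRAID),
       # and optionally keep Plex off core 0 with --cpuset-cpus.
       docker run -d --name=plex \
         --cpuset-cpus=1-11 \
         -v /tmp:/transcode \
         -v /mnt/user/appdata/plex:/config \
         -v /mnt/user/Media:/media \
         plexinc/pms-docker
       # Then in Plex: Settings -> Transcoder -> "Transcoder temporary directory" = /transcode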
  6. @Laup - Not running TRIM is going to seriously affect performance over time. I had an SSD in my Windows box holding VMs and never knew to run TRIM. It seemed to be getting slow, so I ran some performance benchmarks and sure enough it was very slow - slower than a spinner. Running TRIM had a dramatic impact on its performance (a quick way to run it by hand is sketched below). I'm not a fan of an SSD array. Why not use an SSD as a UD and back it up to the array? You could even set up several SSDs as a cache pool that provides redundancy, and use that instead of the unRAID array.
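     If you want to TRIM manually (a scheduled plugin can do the same thing), a minimal sketch - the unassigned-device mount point is an example:

       # confirm the device supports TRIM (non-zero DISC-GRAN / DISC-MAX)
       lsblk --discard
       # trim the cache pool and an SSD mounted as an unassigned device
       fstrim -v /mnt/cache
       fstrim -v /mnt/disks/my_ssd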
  7. @Laup - I would be cautious using SSDs for array drives. Here is some background. The basis of unRAID protection is parity. Parity creates redundancy at a very low level - down to the disk sector - and reconstruction is performed at that level as well. Whether a disk sector has useful data or not is irrelevant to unRAID. Every sector contains 512 bytes. They could be all zeros, remnants of a deleted file, part of a real file, or part of the disk's housekeeping. But every sector has 512 bytes, and those 512 bytes don't change unless and until that sector is updated. When a disk is rebuilt, each sector on the disk is restored by looking at the corresponding sector on every other disk. So if the disk is heavily fragmented, when it is rebuilt it is fragmented in the exact same way. It's not like restoring from a backup, where each file is individually written fresh to a new disk. Instead it is a mirror image. (A tiny sketch of the idea is below.) SSDs are not the same as hard disks. There are no physical sectors; sectors are simulated by SSD cells. But each cell on an SSD has a limited number of writes. If one spot is written too many times, that cell will fail. To protect against individual cells getting worn out while the vast majority of the SSD is fine, the SSD has the ability to remap the cells. So while sector 1234 might have been simulated by cell 1000, it is possible that sector 1234 could be changed to be simulated by cell 5555. So long as the SSD preserves the data, the disk will still behave perfectly fine. In essence the SSD has the smarts to avoid cells failing due to a lot of writes by invisibly shuffling the cells so that the SSD wears in a consistent way. The process of shuffling the cells CAN result in unused sectors on the SSD ("unused" meaning they contain no data of value) having their values change. Since the data is unused, who cares that it is not maintained? But unlike a hard disk, where the only way a sector can change is by the disk writing to that sector, SSD sector values can change invisibly if they are unused. And if this happens, parity will be broken and the ability to rebuild any failed disk reliably will be impacted. Use of the TRIM command may be the highest risk, but the TRIM command is important to maintaining the SSD and helping it last as long as possible. It's important to understand this risk. And if you are using SSDs in the array, do some research on the particular model to confirm that even unused sector values will be preserved, making it suitable for use in a RAID or unRAID array.
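     To make the sector-level idea concrete, here is a toy sketch of single-parity XOR over one byte position on three hypothetical data disks - not unRAID's actual code, just the arithmetic it relies on:

       # parity byte = XOR of the same byte position on every data disk
       d1=0xA5; d2=0x3C; d3=0x0F
       parity=$(( d1 ^ d2 ^ d3 ))
       # if disk 2 fails, its byte is recovered from parity plus the surviving disks
       rebuilt=$(( parity ^ d1 ^ d3 ))
       printf 'parity=0x%02X  rebuilt disk2 byte=0x%02X (was 0x%02X)\n' $parity $rebuilt $d2
       # if an "unused" byte on any disk silently changes, the XOR no longer balances,
       # and a rebuild would reproduce the wrong value - this is the SSD concern above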
  8. I have successfully updated my Windows VM to 1803. Afterwards I had demonic sound, which was only fixed by updating the GPU drivers (Nvidia 1050 Ti) and rerunning the MSI utility. Now the audio seems perfect. Previously I had noticed an occasional drifting of audio sync with YouTube, which I have not seen since the update. But I also haven't done a lot of testing, so that may still happen. I would advise reloading the GPU drivers, if possible, to see if that helps.
  9. The SMART system tends to put failures in the context of media issues on the drive: reallocated sectors, pending sectors, uncorrectable errors (sectors). But it is interesting that once a drive develops even a single one of these types of conditions, the number continues to grow and does not stabilize. I used to advise those with these types of issues to run 3 parity checks, and if the counts stayed consistent, that the drive was probably OK and that SMART had correctly detected a real weak spot on the media. But no one ever found the numbers to stay consistent for more than 1 parity check. And this meshes closely with my own experience. So long as the SMART attributes are zero, all is good. But the first increment is reason to worry and to expect the attributes to continue to increment. The only drives I've seen where the attributes stabilize above zero are new drives where the issue is reported very, very early. This is rare, but I have had a couple of drives over the years with 1-2 reallocated sectors, or even one with 7 pending sectors, that never got worse after multiple parity checks and even preclear cycles. So while I might agree with your definition, I am skeptical that this drive is actually reporting errors that are due to media issues. I believe it is more likely the failure of some hardware component in the drive, and that the uncorrectables are symptoms of a more systemic hardware issue preventing the drive from functioning normally after the condition is detected and dealt with (which I believe was the true intent of SMART).
  10. I would not trust using this drive for anything important:

     ID# ATTRIBUTE_NAME          FLAGS  VALUE WORST THRESH FAIL RAW_VALUE
       5 Reallocated_Sector_Ct   PO--CK 187   187   140    -    252
     197 Current_Pending_Sector  -O--CK 192   192   000    -    1465
     198 Offline_Uncorrectable   ----CK 198   194   000    -    342
     200 Multi_Zone_Error_Rate   ---R-- 200   001   000    -    26

     These attributes look particularly concerning. The drive has reallocated 252 sectors. Normally this should be zero. I have seen some drives with very small numbers (1-2) of reallocated sectors stabilize, but 252 is a big number, and I expect this number will get worse over time. Worse, the drive has detected 1465 sectors that appear to be failing but have not yet had a write to them; only a write will trigger the actual reallocation. You might say that pending sectors are possible read errors waiting to happen. This should also be zero. Offline uncorrectable and multi-zone errors are, in my experience, indications of hardware issues in the drive. When I start to see them increment beyond low single digits, even in the absence of pending and reallocated sectors, I start to get nervous. Combined with the sector issues, it is a very bad sign. And, as @johnnie.black pointed out, the extended test hit a read error. This is a clear sign of a bad drive.

     ID# ATTRIBUTE_NAME          FLAGS  VALUE WORST THRESH FAIL RAW_VALUE
       9 Power_On_Hours          -O--CK 037   037   000    -    46093

     One more thing to point out: this disk has been powered on for 46,093 hours - more than 5 years of spin time. It is old, and assuming these issues are rather recent, it would represent a long-lived drive that is finally ready to retire. It is a WD Green 1T. These, in my experience, were awesome drives. I have a number of them that are still functional (although some have failed). Due to their small size they are not installed even in my backup server, but I would give them a gold star for reliability, and I can't fault the drive for failing at this age. I often recommend a drive with some SMART issues as a backup drive, but I'd say this drive's issues would not make it the best candidate - although it would be better than no backup.
  11. Obviously no solution is going to meet everyone's needs. But I offer the following points for your consideration:
     1 - At its heart, unRAID is a NAS server. It will protect an extremely large array (over 300T with the unlimited license) from most single (or dual, if dual parity is installed) drive failures. This is real-time protection, continuously updated as files are added/changed/deleted. Disks can be of arbitrary / mixed sizes. Many here on the forum found the core NAS features alone (before the docker, VM, or dual parity features existed) to be well worth the cost of admission for unRAID.
     2 - Opening your server to remote access represents a security exposure. There are VPN solutions available for unRAID to enable this, but they do require some effort and understanding to get fully enabled. There are other options that work for most users and do not require VPN. Consider TeamViewer (free for personal use), which provides secure remote access to a Windows box (VM or physical). Once connected, it is like you are sitting at home at that computer and on your network; it is very easy to access the array and transfer files. Plex (free with a Plex license) has a remote access feature that allows media to be accessed remotely. These two remote access options are very easy to configure even for the least technical user, and they support the lion's share of use cases for those interested in gaining secure remote access to their server without setting up a VPN.
     3 - If you look at the cost of a QNAP or Synology solution to support 30 disks, you will be spending a lot of money, if it is even possible. I just looked at a server that supports only 8 3.5" disks at a cost of $1200. It includes a low power CPU inadequate for anything processing intensive. These are all-in-one hardware/software solutions, so the license is embedded in the cost - and you pay for it again as part of the purchase price of any server upgrade. An unRAID server can be built very economically. All it takes is a basic computer, a $50 controller card, a couple of drive cages, and an unRAID license to have a robust setup for 10 drives. An unRAID server of the size and power of the $1200 unit mentioned could be set up for 1/3 of that cost, or even less. A $1200 unRAID build would be a very powerful server, or could even come populated with drives.
     4 - The unRAID license is a one-time cost, easily moved to an upgraded server as storage needs and/or more horsepower are needed. The licenses are also upgradable to higher capacity without repurchase. No unRAID user has ever had to pay for an upgrade to gain access to new features - even with the addition of Docker, VMs, dual parity, and increasing disk counts (for the full license, the max disk count has increased from 12 to 30 since I have owned it). And when you consider the cost of the license against the hardware and drive costs of setting up a server, it is a very, very small percentage. The ability to protect against drive failures at the cost of a single drive makes this the most economical redundancy available.
     5 - With the inclusion of VMs, many users are able to virtualize their workstation(s), gaming rig, media player and other physical machines to all run as VMs on their unRAID server. This is a very attractive feature for many, myself included. I am typing this into my Windows VM running on my 12 core i9 unRAID server with KVM hardware passthrough. And this server is also supporting Plex, which can transcode (even processing-intensive 10-bit/4K HEVC video in software) with only about 30% load.
Oh - and this is all running on the unRAID full license that I bought in 2007 and have never needed to upgrade, through 3 major server rebuilds and countless hardware upgrades. Certainly there are other NAS options in the marketplace, but I consider unRAID to be the most compelling platform available today. It provides parity protection, low entry cost, wide hardware compatibility, mixed drive sizes, a perpetual license, and access to an expanding set of Docker apps for most every purpose under the sun. The VM feature is robust and makes it easy to fold multiple physical machines into VMs running on the server. You may find lower learning curves on the all-in-one solutions, but the training wheels come at a high price and are very limiting. One final comment on forum support: a conscientious set of very knowledgeable users - virtually all volunteers, aside from the 3 LimeTech employees, who rarely engage in day-to-day issues unless an email is sent to LimeTech directly - provide a high level of support to users with questions or problems. Complaints like yours are exceedingly rare and quickly rectified if someone is getting frustrated and posts in the forums. You are certainly welcome to go elsewhere - but I hope this helps explain the unRAID value prop as compared to other offerings. Best of luck! (@ssdindex - UnRaid vs QNAP / Synology?)
  12. Not sure why you are opting for this controller card. It is pretty pricey, and unless you are running SSDs off the card it is not going to equate to a performance advantage. For about $150, you can buy an LSI SAS9201-16i card which will work well for 16 spinners. For example: https://www.ebay.com/itm/LSI00244-9201-16i-OEM-PCI-Express-2-0-x-8-SATA-SAS-Host-Bus-Adapter-Card/113060096975?hash=item1a52e82bcf:g:QVwAAOSwvmNbIUaj If you don't need 16 ports, the -8i version is about $50. Hook SSDs to motherboard ports. You might also look at Monoprice for the SFF-8087 breakout cables: https://www.monoprice.com/product?p_id=8186 $5.39 is a lot less than $21 (and there is a 15% off code for Father's Day). They do charge shipping but it is nominal. This is the 1/2 meter length, but you can buy longer lengths if needed. Shorter is better though, so just measure the approximate distance. I have mostly 1/2m, but one 3/4m. Good luck!
  13. The memory type will be determined by what your motherboard supports. Transcoding to RAM can offer some performance advantage, which might increase your memory needs, but I am really unable to give you guidance on how much is enough. Generally I'd say 32G is OK for a typical server, 64G for a higher power server, and 128G is the limit for most non-Xeon setups. Not enough RAM can certainly bottleneck a system, but extra RAM (or tweaking the RAM specs) does not correlate with significant performance improvement. Only some experimentation will tell for sure. I am still a bit unclear how you are planning to use the GP100 for transcoding. You say you plan to run it in a VM - I assume that means Windows, and that Windows Plex will support HW decoding with that graphics card? Is that correct? I use the Plex Linux docker, and AFAIK only iGPU decoding is supported there for 4K transcodes. That is what I mentioned much earlier in this thread. But I am very interested in your results with the Windows setup. I could reconsider if decoding with an add-on video card is supported.
  14. Is it possible you were trying to move or copy files from a disk share to a user share, or vice versa? This is the behavior I'd expect if you were. There are numerous posts on the user share copy bug (the pattern to avoid is sketched below). Whatever the emulated disk shows is what will be rebuilt, so doing an actual rebuild is not necessary. If you are on RFS, there is a chance you might salvage some of your data, but I have not heard of similar successes with xfs or btrfs. Note that all file systems are susceptible to the bug mentioned above. IMO, xfs is the safest filesystem choice based on its large commercial base and active support resources. Btrfs is certainly coming along and has some nice features not available in xfs (read about the scrub command). I hope I am wrong or you have backups to restore your files. Best of luck.
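     For anyone unfamiliar with the user share copy bug, a sketch of the pattern to avoid - the share and file names are made up. The danger is that /mnt/user/Movies can resolve to the very same file as /mnt/disk1/Movies, so the copy can clobber its own source:

       # risky on unRAID: mixing a disk share and a user share in one copy
       cp /mnt/disk1/Movies/film.mkv /mnt/user/Movies/
       # safer: stay within one view of the data
       cp /mnt/user/Movies/film.mkv /mnt/user/Archive/      # user share to user share
       cp /mnt/disk1/Movies/film.mkv /mnt/disk2/Movies/     # disk to disk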
  15. Some apps are coded to use only 1 thread. For such apps, it is the speed of the single core that matters for the performance of that app. Some apps are written to use a fixed (or maximum) number of threads (e.g., 4). For such apps, the number of useful cores is limited. Some apps are written to use a lot of threads, and for those, the entire set of cores is useful; games tend to be in this category. Some people run a lot of apps at the same time, and having lots of cores lets them run in parallel without having to share cores much, if at all. Of course if the apps you run are not CPU intensive, none of this may make much difference - your threads will be mostly idle anyway. If you are running demanding gaming apps, your high core count might be valuable, but games tend to be gated more by GPU performance than CPU performance. Would 10 4GHz cores be better than 20 2GHz cores? That depends on the app. The 10 faster cores might be better for heavy spreadsheets, while the 20 slower cores might be better for simultaneous transcodes. So see how your usage model stacks up against these use cases. If you have enough threads of execution running in parallel, your 44 cores may be giving you outstanding performance.
  16. You can definitely upgrade your flash drive. Look in the Wiki for more precise instructions, but generally you prepare a new flash as if you were a new user, copy the "config" folder from the existing flash to the new flash, and then boot with the new flash. unRAID will detect that you have an invalid keyfile and offer the option to transfer your license to the new USB. Not sure if you have any files on the existing USB that you could delete - I sometimes store syslog copies, preclear reports, and an occasional download file. If there are files you can delete, you might be able to continue to use the existing flash for a while longer. But I'd suggest an upgrade to at least 2G; 4G, 8G and 16G drives are more commonly available and still quite inexpensive. Use a decent brand USB2 drive. (A rough sketch of the copy step is below.)
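     Roughly what the copy step looks like with both sticks mounted on a Linux box - the mount points are examples, and on Windows it is just a drag-and-drop of the config folder:

       # copy the config folder (license key, array assignments, settings) to the new stick
       cp -r /mnt/old_flash/config /mnt/new_flash/
       # then boot the server from the new flash and follow the license transfer prompt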
  17. @AJ Ouellet I would test out the new drives and then add them as UDs. Then you can copy the data from the drives you want to replace to the new drives in parallel. Optionally you can compare checksums to ensure everything copied correctly (a sketch is below). Then you can do a new config, excluding the failing disks and including the new ones, and parity will then build. You can hang on to the three old drives as backups. This is a lot faster and handles failure scenarios better, IMO.
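     A sketch of the copy-and-verify step for one old/new pair, assuming the new drive is mounted through Unassigned Devices - all paths are examples:

       # copy one old array disk to a new drive mounted as an unassigned device
       rsync -avX /mnt/disk2/ /mnt/disks/new_8tb/
       # optional checksum comparison; an empty diff means the copies match
       (cd /mnt/disk2 && find . -type f -exec md5sum {} + | sort -k2) > /tmp/old.md5
       (cd /mnt/disks/new_8tb && find . -type f -exec md5sum {} + | sort -k2) > /tmp/new.md5
       diff /tmp/old.md5 /tmp/new.md5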
  18. If you are on the fence, I agree - go with the Pro license. It is exceedingly rare for a user to end up needing fewer drives than planned. But upgrading from Plus to Pro is not a huge premium, so it's also not a problem to go that route and upgrade when/if you need more drives.
  19. May 1 BackBlaze report. Drives are getting more reliable in general. Seagate continues to be their most heavily used brand. https://www.backblaze.com/blog/hard-drive-stats-for-q1-2018/
  20. In theory, a sector goes bad, the drive reallocates it using one of the spare sectors it holds in reserve, the bad sector is never used again, and all is good. But in practice this is rarely how it plays out. Typically a single reallocation is only a symptom of a larger problem that will keep getting worse. I've had a few drives with a small number of reallocated sectors that never got worse, but that was in my very early years with unRAID - drives were IDE and probably ~250G - 300G. Since then I personally have not had any reallocated sectors until drives were 6+ years old, and those rapidly got worse. You can post diagnostics and Johnnie may be able to see something. An extended SMART test is probably in order (a command-line sketch is below).
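     If you want to run the extended test from the command line (the GUI can also start it from the drive's attributes page), a sketch - the device name is an example:

       # start the extended (long) self-test; it runs in the background inside the drive
       smartctl -t long /dev/sdd
       # check the result later (it can take several hours on a large drive)
       smartctl -l selftest /dev/sdd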
  21. Are you able to share any information based on your current inventory?
  22. I can show you from my system. Here are two RAID0 disks in my array. This always says the same thing. Temps are not right.

     root@tower:/var/local/emhttp/smart# cat parity
     smartctl 6.5 2016-05-07 r4318 [x86_64-linux-4.14.16-unRAID] (local build)
     Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

     === START OF READ SMART DATA SECTION ===
     Current Drive Temperature:     30 C
     Drive Trip Temperature:        25 C
     Manufactured in week 30 of year 2002
     Specified cycle count over device lifetime:  4278190080
     Accumulated start-stop cycles:  256
     Elements in grown defect list: 0

     root@tower:/var/local/emhttp/smart# cat disk7
     smartctl 6.5 2016-05-07 r4318 [x86_64-linux-4.14.16-unRAID] (local build)
     Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

     === START OF READ SMART DATA SECTION ===
     Current Drive Temperature:     30 C
     Drive Trip Temperature:        25 C
     Manufactured in week 30 of year 2002
     Specified cycle count over device lifetime:  4278190080
     Accumulated start-stop cycles:  256
     Elements in grown defect list: 0
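     For what it's worth, smartmontools can usually reach the individual drives behind an Areca card if you address the controller directly rather than the RAID volume - a sketch, where the /dev/sg device and slot numbers are examples you would need to adjust:

       # per-drive SMART data through an Areca controller
       smartctl -a -d areca,1 /dev/sg2    # drive in slot 1
       smartctl -a -d areca,2 /dev/sg2    # drive in slot 2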
  23. I do agree with the video above in the sense that it is best to have high-volume, long-term data on drives used in an environment most like the one you will run them in. But unfortunately we can't arrange such a study. Even if BackBlaze is not exactly your use case, it does provide a laboratory that exposes all drives to a pretty consistent "average" usage pattern over time. Would you expect enterprise drives to do better? Yes, you might. Desktop drives worse? Yes, you might. But what if you find some desktop drives that perform as well as or better than enterprise drives? Would the BackBlaze study help you find such gems in the market? YES. So while you might say Seagate should not be penalized for having desktop drives that are pretty crappy for enterprise use, other manufacturers might be complimented for selling a product that is over-engineered and works well for both. And in thinking about the use case of BackBlaze - are we really that different? Our media drives tend to get filled up rather quickly. Once full, deletes are rare but do occur, and disks are occasionally repurposed and refilled. BackBlaze fills drives rather quickly with lower-volume updates afterward. They have client turnover, and deleting data only to replace it with new customer data happens at some level. Our disks are often spun down when not being accessed; BackBlaze data is mostly backups that may sit unaccessed for long periods or forever. We run parity checks that BackBlaze probably doesn't do. But overall I really don't think we are so different. Maybe for the video above, someone who plans to install Windows on a 3T spinner in a gaming case would have a very different use case. But for us unRAIDers, I think it is pretty similar. I have had the best luck with Hitachi and HGST drives (and maybe I'll throw in the Toshibas that were acquired from Hitachi). The Seagates during the 2T-4T years were the worst of the worst for me. I lost several and swore off of them. Recent 8T WD RED and Seagate SMR purchases are not old enough to comment on, but so far so good. I still think BackBlaze data is valuable if used properly. And they would have you buying HGST and steering clear of Seagates - very consistent with my personal experience. If an idiot savant comes up with the right answer, you have to give him credit, even if you don't agree with or understand his method!
  24. Sounds like you may not have enough slots for your array. I think the biggest mistake people make is planning for too few disks. I'd suggest setting up a new server, this one with enough slots to grow, and adding at least a couple of disks. Get your data moved over server-to-server. Once the new server is working well, you can physically move the larger disks into the new array and keep the old server for backups and emergency use. The new server can have a low-powered CPU and minimal memory that you'd plan to exchange with the existing server once all is set up. A motherboard transplant is not so hard. If you do look at eSATA, be careful. I've seen some of the eSATA units come with longer cables and not work reliably. Shorter cables are not as convenient but they work better. I can't recommend USB for an array disk. In a jam I am not against plugging in a bare drive with SATA and power and sitting it outside the case on the floor, turned over like a turtle with the electronics side up. Not a long-term solution, but if your server is out of the way and will not be disturbed while you complete whatever you are doing, I don't think that is an awful option for a short-term need. I've precleared disks that way before, but now I always keep at least one free slot for preclear or emergency use.
  25. I understand the exhaust part. It's your air intake I don't understand. There are no fans in the upper back of the case that I can see. Cool air will NOT just waft into the case from above without a fan driving it down, and you are working against the natural tendency of warm air to rise. Your layout is much more likely to result in hot air accumulating at the top of the case with nothing much to force it out, and that hot air will recirculate and get hotter, reducing your cooling. How is the exhaust from the CPU on the right getting out of the case? It looks like it is blowing into the back of your drives?? It will tend to rise to the top of the case and, as mentioned, get recirculated back into the CPU coolers. Take a look at this: https://www.howtogeek.com/303078/how-to-manage-your-pcs-fans-for-optimal-airflow-and-cooling/ - it has a lot of info on case cooling, positive vs negative pressure, etc. Here is what you are trying to achieve: cool air coming in low in the front (it could also come in from the bottom, depending on the case), and hot air going out the back and top. Perfection is impossible, but if you turned the fan on the right-hand CPU to point up, added an exhaust fan up there, and had a couple of intake fans on the front to get cool air into the case, I think your cooling would be much more effective. #ssdindex - case cooling