bman

Members
  • Posts

    182

  1. It depends how much you depend on your cache data being safe. I don't like surprises, so I wouldn't trade an EVO drive for a cheap SSD of any kind. I have thrown away many lesser SSDs but have used some of my forty or so Samsung SSDs for 8 years already. They make the NAND flash memory, they make the controller, they make the firmware. Their reputation is all over their drives inside and out. I discovered that Intel's 5-year warranty means precious little if you don't have your data when you come home from a long day of video production. In that particular hard-learned lesson, each of the eight SSDs failed more than once within the first year.
  2. Not that I am an expert on this but as far as I know, the special flag that tells Unraid the drive is "cleared" is only set at the end of a successful preclear script run. If a new disk is not properly cleared, Unraid must clear it before use as part of the array. This would be (among other reasons) to prevent any possibility of old data on a drive becoming part of a parity calculation and wreaking havoc on your once-good data elsewhere in the array. Clearing is just writing zeroes to every sector of a disk, whereas preclear reads, then writes, then verifies on each of its cycles, unless you change its settings to skip some of those steps. That's why you saw the large time difference.
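To make that concrete, here's a toy sketch of what "clearing" amounts to: write zeroes to every byte, then check that they stuck. It runs against a scratch file rather than a real /dev/sdX device, and the file name and size are invented for the demo:

```shell
#!/bin/sh
# Demo: clearing a disk is just zero-filling it. We use a scratch
# image file here instead of a real device like /dev/sdX.
IMG=$(mktemp)
SIZE_MB=4

# Write zeroes across the whole pretend disk
dd if=/dev/zero of="$IMG" bs=1M count=$SIZE_MB 2>/dev/null

# Verify the image is byte-for-byte zero by comparing against /dev/zero
if cmp -s -n $((SIZE_MB * 1024 * 1024)) "$IMG" /dev/zero; then
    RESULT=cleared
else
    RESULT=dirty
fi
echo "$RESULT"
rm -f "$IMG"
```

Unraid's built-in clearing does only the write pass; preclear's extra read and verify passes are what make it so much slower.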
  3. I don't have the answer for your issue, but I experienced very much the same kinds of trouble when I wanted to run 4 RAM sticks on my Gigabyte EP45-UD3P based computer. Two sticks were fine but I had to tweak many BIOS settings to get four RAM sticks to be stable. I was able to find the correct tweaks via internet search. Maybe someone has ideas for your motherboard?
  4. I'm sure there are better ways, especially if your Synology disks happen to be singles with valid file systems on them, but since we don't know how you've got things configured, here's how I would go about it: create the desired shares on UNRAID and use rsync to transfer files categorically over the network. For example, if you have a large Apple Music library and create a "music" share on UNRAID, you would copy all the music files from your Synology device to that share as one session, then repeat for TV shows, photos, etc. as required. That way, if something breaks along the way, you can easily find where things left off, double-check the most recent file for corruption, and continue from there. For mounting shares on UNRAID see https://docs.unraid.net/legacy/FAQ/transferring-files-from-a-network-share-to-unraid/ I usually mount network shares in the /mnt directory on UNRAID, but you may choose anywhere you like. rsync can show progress, verify, and then safely remove source files with a single command, such as rsync -av --progress --remove-source-files /mnt/synmusic/* . This copies and verifies each file, then erases the source copies. If you skip the --remove-source-files option you'll retain a backup of each file until you know everything is safe. That command assumes you're already in the UNRAID destination share before executing, e.g. cd /mnt/user/music. "synmusic" is the name I chose for the mounted music share from the Synology device. If you have just one master share from your Synology device with a bunch of subdirectories, perhaps "synology" is a more appropriate label, and your rsync command might reference /mnt/synology/music/*
  5. Okay thanks for the explanation. This is a useful plugin, thanks very much for your efforts!
  6. I'd like to request this goes one further in having the ability to hide individual slots, or mark them in some way as N/A (not available) with their own colour. The reasoning is that sometimes, through poor quality control or over-use/abuse, individual slots on a backplane may no longer be functional. I have two Supermicro 24-bay chassis where this is the case and I need to remember not to try to use those slots, so I physically mark them with tape. It would be nice to have the plugin able to mark individual slots as defunct as well, so when I am several miles away from the physical server, I can easily know which slot(s) are faulty when making upgrade or drive swap plans before visiting the site.
  7. tbh I haven't given Intel SSDs a fair shake since premature failure on 6 of 8 purchased 520 series products left students without a day's worth of video footage when they returned. Swore off them because even though the warranty was good, the reliability wasn't. Seeing the price of the D3-S4510 960GB versus the competition has me rethinking things, albeit for different use cases. I think you're on the right track with that one.
  8. $0.02 on write endurance - Samsung 850 or 860 Pro series wins. I've only seen one dud out of a couple dozen I've used over the past several years. Write endurance seems to be a Sammy strongpoint: https://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead
  9. Agreed: return to sender. I've had lots of drives dead on arrival in my time. I also had a drive that hadn't quite failed but looked prone to future problems, based on SMART information and inconsistent data transfer rates after the first cycle was complete, so I cycled it through preclear for several days until it barfed, leaving me no hassle at the returns desk.
  10. preclear is a script you can run on your unRAID server to read from and write to every sector on a hard drive, then read again to verify the writes happened properly. It is an optional step in preparing a drive for use with a server (I preclear all new drives whether for unRAID or not). The script's original purpose is now largely irrelevant, but it remains very useful for weeding out weak hard drives before you entrust them with your data, which presumably you do not wish to lose. By forcing a drive to read and write every sector multiple times (I use a cycle count of 3; for 12TB drives that would likely take a week) you are forcing the drive to find any sectors that are perhaps bad from the factory. It's a burn-in test. For me it has found a few drives that I was able to return within the 15 days my local computer shop allows me to walk in and replace a failed drive. After that time period I'd have to deal with the manufacturer's RMA process, which takes much longer. It's also useful for assessing the relative health of a drive that has been used before. It erases everything on the drive, so it's something you only do before or in between "uses". preclear is unrelated to SMART data and testing, except that SMART counters will definitely change after preclear has run. SMART test results after the completion of preclear can be decent indicators of whether you can rely on your drive to keep your data safe, but nothing is foolproof. SMART data will never reveal the full story: almost regardless of the SMART data, your drive may live a very long life, or it may fail a week later without any hint of trouble beforehand. The majority of the time (75%, give or take, if you believe Backblaze's data on the subject) SMART results are good enough to determine the relative health of a hard drive.
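As a rough illustration of that cycled read/write/verify pattern, here's a toy loop against a scratch file. This is emphatically not the real preclear script, which targets a raw device like /dev/sdX and does far more bookkeeping; the sizes and cycle count are just demo values:

```shell
#!/bin/sh
# Toy burn-in loop in the spirit of preclear: for each cycle, write
# zeroes to every "sector" of a scratch file, then read it back and
# verify. A real burn-in would target the raw drive itself.
IMG=$(mktemp)
SIZE_MB=2
CYCLES=3   # matching the 3-cycle count mentioned above

STATUS=ok
i=1
while [ "$i" -le "$CYCLES" ]; do
    # Write phase: zero-fill (preclear leaves the drive zeroed)
    dd if=/dev/zero of="$IMG" bs=1M count=$SIZE_MB 2>/dev/null
    # Verify phase: read back and compare against a zero stream
    cmp -s -n $((SIZE_MB * 1024 * 1024)) "$IMG" /dev/zero || STATUS=bad
    i=$((i + 1))
done
echo "burn-in result: $STATUS"
rm -f "$IMG"
```

Any sector the drive cannot write and read back cleanly fails the compare, which is exactly the kind of weakness you want surfaced inside the return window rather than after the drive holds your data.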
  11. The total capacity of your data array is irrelevant to parity, so long as you follow the rule of parity being at least as large as your largest data drive. You can have twenty-eight 8TB drives (for a capacity of 224TB) all protected by your single 8TB parity drive, if you so choose. 28 data disks is the current unRAID limit.
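The arithmetic from that post, spelled out (28 is the unRAID data-disk limit mentioned above; the 8TB sizes are the example values):

```shell
#!/bin/sh
# unRAID capacity math: data capacity is the sum of the data drives.
# The parity drive adds no capacity; it only needs to be at least as
# large as the largest data drive.
DATA_DISKS=28   # unRAID's data-disk limit at the time of the post
DISK_TB=8
PARITY_TB=8     # >= largest data drive, so 8TB suffices here

CAPACITY=$((DATA_DISKS * DISK_TB))
echo "${CAPACITY}TB of data protected by one ${PARITY_TB}TB parity drive"
```

This is why upgrading the parity drive first is the usual move: once parity is 8TB, any mix of data drives up to 8TB each is fair game.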
  12. There are exceptions to every rule. I have a 320GB cache drive that's running great and is 11 years old. Most of my IDE drives from 2004 still worked fine when I tossed them out in 2015. I have SCSI drives from the 1980s that are running fine today. Today I want drives that last 7 years or more, and I am hoping the better 'ratings' of the higher-priced drives will prove to be of value in terms of longevity and total cost of ownership. I get the insurance argument, the law of averages, the conspiracy theories, and I share some of the same thoughts. Taking WD's lineup for example, purple for... surveillance? Give me a break. That's different than NAS how? I used to have dozens of WD Green drives spinning, but I can count on one hand how many lasted longer than 5 years, and over 70% of them failed before 4 years. I have (so far) had much better luck with WD Red, Black and Gold drives, though few of them are older than 4 years at this point.
  13. Put better: a 5-year warranty means you won't have to shell out more money to replace the drive any time soon. Drives fail whenever they want to based upon lots of factors, but in general those warranted for five years have passed more stringent tests at the factory and can be expected to last longer; otherwise the manufacturer wouldn't put a longer warranty on them in the first place.
  14. In general, what you're after is a different server OS than unRAID. You can use unRAID as a spot to host virtual machines, and in that way get the "different" OS that is required for your performance needs. In short, though, if you're not using unRAID for its storage abilities, you may be better served installing another OS on your bare metal build. The author of the video is already running unRAID for other reasons, but unRAID is not what is enabling the speed between his demo systems. Rather, the speeds demonstrated come from the Windows, Linux or OS X machines (whichever he has booted into at the time) in concert with RAM used as disks and 10GbE network interface cards. Proper network setup is necessary too, naturally.
  15. While I do not recommend you run without parity, technically this is also possible. If it helps you decide, I can recall at least 6 times when, without parity, I would have lost multiple TB of data. You may be surprised how quickly you outgrow your initial array size (given your intended use), so I'd say save some cash for a couple of 5-year-warranted 8TB drives as your next upgrade; then you can just add such "large" drives to your array as needed. Going from 2TB to 4, then 6, then 8 for your parity drive, as an example, means you've purchased at least two hard drives too many in a short (maybe 2-year) period. By buying 5-year-warranted drives (WD Gold, Black or Red Pro as examples) you can rest assured the drives won't fail anytime soon, and by buying large it takes much longer to fill your chassis with hard drives. (I experienced some regret as I had to remove perfectly functional, not-very-old 2TB drives in favour of larger ones just because I had no more physical space for additional drives.)