bman

Everything posted by bman

  1. It depends on how much you depend on your cache data being safe. I don't like surprises, so I wouldn't trade an EVO drive for a cheap SSD of any kind. I have thrown away many lesser SSDs but have used some of my forty or so Samsung SSDs for 8 years already. They make the NAND flash memory, they make the controller, they make the firmware. Their reputation is all over their drives, inside and out. I discovered that Intel's 5-year warranty means precious little if you don't have your data when you come home from a long day of video production. In the case of that particular hard-learned lesson, I had eight SSDs and every one of them failed more than once within the first year.
  2. Not that I am an expert on this but as far as I know, the special flag to tell Unraid the drive is "cleared" is only set at the end of a successful preclear script execution. If a new disk is not properly cleared, Unraid must clear it before use as part of the array. This would be (among other reasons) to prevent any possibility of old data on a drive becoming part of a parity calculation and wreaking havoc on your once-good data elsewhere in the array. Clearing is just writing zeroes to all spots on a disk, where preclear reads, then writes, then verifies for each of its cycles, unless you change its settings to skip some of those steps. That's why you saw the large time difference.
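     As a rough sketch of the difference (the device name /dev/sdX is a placeholder; don't run this against a disk you care about): Unraid's built-in clear is essentially a single pass of zeroes, whereas each preclear cycle wraps that write in a full pre-read and a post-read verify, which is why one preclear cycle takes roughly three times as long.
        dd if=/dev/zero of=/dev/sdX bs=1M status=progress   # a single pass of zeroes, roughly what the built-in clear does
        # a preclear cycle adds a complete read of the disk before and after that write, every cycle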
  3. I don't have the answer for your issue, but I experienced very much the same kinds of trouble when I wanted to run 4 RAM sticks on my Gigabyte EP45-UD3P based computer. Two sticks were fine but I had to tweak many BIOS settings to get four RAM sticks to be stable. I was able to find the correct tweaks via internet search. Maybe someone has ideas for your motherboard?
  4. I'm sure there are better ways, especially if by chance your Synology disks are each singles with valid file systems on them, but since we don't know how you've got things configured, here's how I would go about it: Create the desired shares on UNRAID and use rsync to transfer files categorically over the network. For example, if you have a large Apple Music library and create a "music" share on UNRAID, you would copy all the music files from your Synology device to that share as one session. Repeat for TV shows, photos, etc. as required. That way if something breaks along the way you can easily find where things left off, double-check the most recently copied file for corruption, and continue from there. For mounting shares on UNRAID check https://docs.unraid.net/legacy/FAQ/transferring-files-from-a-network-share-to-unraid/ I usually mount network shares in the /mnt directory on UNRAID but you may choose anywhere you like. rsync can show progress, verify, and then safely remove source files with a single command, such as rsync -av --progress --remove-source-files /mnt/synmusic/* . (the trailing dot is the destination, i.e. the current directory). This copies and verifies, then erases the source files. If you skip the --remove-source-files option you'll retain a backup of each file until you know everything is safe. That command assumes you're already in the UNRAID destination share before executing - e.g. cd /mnt/user/music. "synmusic" is the name I chose for the mounted music share from the Synology device. If you have just one master share from your Synology device with a bunch of subdirectories, perhaps "synology" is a more appropriate label, and then your rsync command might reference /mnt/synology/music/* instead. A sketch of the whole workflow follows below.
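     The sketch below assumes an NFS export; the IP address, export path, and share names are made up for illustration, so substitute your own, and consider a dry run with -n before adding --remove-source-files:
        mkdir -p /mnt/synmusic                                    # mount point on the Unraid box
        mount -t nfs 192.168.1.50:/volume1/music /mnt/synmusic    # hypothetical Synology address and NFS export
        cd /mnt/user/music                                        # the Unraid destination share
        rsync -av --progress /mnt/synmusic/ .                     # add --remove-source-files once you trust the result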
  5. Okay thanks for the explanation. This is a useful plugin, thanks very much for your efforts!
  6. I'd like to request this goes one step further and adds the ability to hide individual slots, or mark them in some way as N/A (not available) with their own colour. The reasoning is that sometimes, through poor quality control or overuse/abuse, individual slots on a backplane are no longer functional. I have two Supermicro 24-bay chassis where this is the case and I need to remember not to use those slots, so I physically mark them with tape, but it would be nice if the plugin could also mark individual slots as defunct, so that when I am several miles from the physical server I can easily see which slot(s) are faulty when making upgrade or drive-swap plans before visiting the site.
  7. tbh I haven't given Intel SSDs a fair shake since premature failure on 6 of 8 purchased 520 series products left students without their days' worth of video footage upon returning. Swore off them because even though the warranty was good, the reliability wasn't. Seeing the price of the D3-4510 960G versus the competition has me rethinking things, albeit for different use cases. I think you're on the right track with that one.
  8. $0.02 on write endurance - Samsung 850 or 860 Pro series wins. I've only seen one dud out of a couple dozen I've used over the past several years. Write endurance seems to be a Sammy strongpoint: https://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead
  9. Agreed: return to sender. I've had lots of drives dead on arrival in my time. I also had a drive that hadn't quite failed but looked prone to future problems, based on its SMART information and inconsistent data transfer rates after the first cycle completed, so I cycled it through preclear for several days until it finally barfed and I had no hassle at the returns desk.
  10. preclear is a script you can run on your unRAID server to read from and write to every sector on a hard drive, then read again to verify the writes happened properly. It is an optional step in preparing a drive for use with a server (I preclear all new drives whether they're destined for unRAID or not). The script's original purpose - clearing a disk in advance so unRAID doesn't have to do it when the disk is added to the array - matters less now, but it remains very useful for weeding out weak hard drives before you entrust them with data you do not wish to lose. By forcing a drive to read and write every sector multiple times (I use a cycle count of 3 - for 12TB drives that would likely take a week) you give the drive every chance to reveal sectors that are bad from the factory. It's a burn-in test. For me it has found a few drives that I was able to return within the 15-day window my local computer shop gives me to walk in and exchange a failed drive. After that period I'd have to deal with the manufacturer's RMA process, which takes much longer. It's also useful for assessing the relative health of a drive that has been used before. It erases everything on the drive, so it's something you only do before or in between "uses". preclear is unrelated to SMART data and testing, except that SMART counters will definitely be altered after preclear has been run. SMART test results after the completion of preclear can be decent indicators of whether you can rely on your drive to keep your data safe, but nothing is foolproof, and SMART data will never reveal the full story: almost regardless of the SMART data, your drive may live a very long life, or it may fail a week later without any hint of trouble beforehand. The majority of the time (75%, give or take, if you believe Backblaze's data on the subject) SMART results are good enough to judge the relative health of a hard drive. A sketch of a typical run follows below.
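     As a rough sketch only (the device name is a placeholder, and the exact flag names vary between versions of the preclear script/plugin, so check its own help output first), a three-cycle burn-in followed by a SMART review might look like:
        preclear_disk.sh -c 3 /dev/sdX    # three full read/zero/verify cycles (cycle flag assumed; verify against your version)
        smartctl -a /dev/sdX              # review the SMART attributes afterwards
        smartctl -t long /dev/sdX         # optionally kick off an extended self-test as well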
  11. The total capacity of your data array is irrelevant to parity, so long as you follow the rule of parity being at least as large as your largest data drive. You can have twenty-eight 8TB drives (for a capacity of 224TB) all protected by your single 8TB parity drive, if you so choose. 28 data disks is the current unRAID limit.
  12. There are exceptions to every rule. I have a 320GB cache drive that's running great at 11 years old. Most of my IDE drives from 2004 still worked fine when I tossed them out in 2015. I have SCSI drives from the 1980s that are running fine today. Today I want drives that last 7 years or more, and I am hoping the better 'ratings' of the higher-priced drives will prove their value in terms of longevity and total cost of ownership. I get the insurance argument, the law of averages, the conspiracy theories, and I share some of the same thoughts. Taking WD's lineup for example: Purple for... surveillance? Give me a break. That's different from NAS how? I used to have dozens of WD Green drives spinning, but I can count on one hand how many lasted longer than 5 years, and over 70% of them failed before 4 years. I have (so far) had much better luck with WD Red, Black and Gold drives, though few of them are older than 4 years at this point.
  13. Put better, a 5-year warranty means you won't have to shell out more money to replace the drive any time soon. Drives fail whenever they want to based on lots of factors, but in general drives warranted for five years have passed more stringent tests at the factory and can be expected to last longer, else the manufacturer wouldn't put a longer warranty on them in the first place.
  14. In general, what you're after is a different server OS than unRAID. You can use unRAID as a spot to host virtual machines, and in that way get the "different" OS that is required for your performance needs. In short, though, if you're not using unRAID for its storage abilities, you may be better served installing another OS on your bare metal build. The author of the video is already running unRAID for other reasons, but unRAID is not what is enabling the speed between his demo systems. Rather, the speeds demonstrated come from the Windows, Linux or OS X machines (whichever he has booted into at the time) in concert with RAM used as disks and 10GbE network interface cards. Proper network setup is necessary too, naturally.
  15. While I do not recommend you run without parity, technically this is also possible. If I hadn't used parity, I can recall at least 6 times in the past when I would have lost multiple TB of data, if that helps you make the decision on parity use. You may be surprised how short a time your initial array size lasts (given your intended use), so I'd say save some cash for a couple of 5-year-warranted 8TB drives as your next upgrade; then you can just add such "large" drives to your array as needed. Going from 2TB to 4, then 6, then 8 for your parity drive, as an example, means you've purchased at least two more hard drives than you needed in a short (maybe 2-year) period. By buying 5-year-warranted drives (WD Gold or Black or Red Pro, as examples) you can rest assured the drives won't fail any time soon, and by buying large it takes much longer to fill your chassis with hard drives. (I experienced some regret as I had to remove perfectly functional, not-very-old 2TB drives in favour of larger ones just because I didn't have any more physical space for drives.)
  16. Quite right; it was not obvious where to go from my point about not assigning drives to the array, but the idea is that once you've got them outside the control of unRAID you can build a virtual machine to handle them in any speedy fashion you choose.
  17. Hello Ettiene, there are upgrade procedures for going to v6. Did you try the method here? https://lime-technology.com/wiki/UnRAID_6/Upgrade_Instructions For your case you'd need to perform a "clean install" as outlined in Tom's post here: https://lime-technology.com/forums/topic/39381-upgrading-unraid-5-to-unraid-6/ You have not included any details of your server. Can you indicate which components you have working with 4.7? Version 6 will not work with all the same hardware as version 4.7 does. (I, for example, had to buy a new motherboard, CPU and RAM since my 2005-era server was too old for v6.)
  18. As nuhll said, you can run VMs of any Linux flavour you like (or other operating systems as desired) and get all kinds of performance. But the storage subsystem unRAID uses is not suited to the speeds you're looking for. Using drives outside the array (by simply not assigning physical drives to unRAID's storage pool) with any VMs you might like to run will get you the speeds you're looking for. You can have a hybrid system in this way, using unRAID for archive type storage (write-once, read many concept) and also using it as a spot to host virtual machines from. Or you can ignore the core functionality of unRAID and just use it to host VMs and fast drives + network access, similar to what Linus did in his 7-gamers-on-one-PC video as mentioned above.
  19. Dual parity may not be 100% successful in cases where you lose a drive or two simultaneously, because failing hard drives can cause controller lock-ups and other inconsistencies which may in turn negatively affect your otherwise healthy drives. Once any healthy drive gets bad bits written to it in the throes of disk failure, parity is no longer of any use to you in rebuilding when you replace your failed drive(s). I'd suggest you get a functional backup running for the data you consider most important (the Kodi/database stuff, at least) then just follow the upgrade procedures to get to v6. My own transition was painless. MySQL runs fine on 6, but I do not know how to transfer an existing database to a new server so cannot comment on what troubles may show up there. Your cache drive ostensibly could (should?) be empty so whether its format gets changed or not during the upgrade is irrelevant with respect to being able to reformat it to work again after a downgrade. If important data reside on your cache drive, back them up too before you start.
  20. Still a little confused. Does this mean you'll be running RAID60 with 10 or more drives in the end? Problem with RAID60 is it's pretty ridiculous in terms of protection versus cost. Not just cost of hardware, but rebuild time if there's a failure and the speed impact the rebuild function has on the overall performance of the server. RAID 10 is very nearly as robust in terms of data protection as RAID 60, it's cheaper and many hours (sometimes days) quicker to rebuild in the event of drive failure. I wouldn't touch RAID 50 or 60 for video editing. Time is money. Hard drives are cheaper than downtime. $0.02
  21. No question this is a task unRAID is not well suited for. 8TB of new information per hour (over 2GB/s sustained) is impossible for spinning disks to keep up with in the world of unRAID. Furthermore, I question the need to create two RAID6 arrays out of 8 drives and then stripe them together. You'd end up with 4 data drives and 4 parity drives (capacity-wise, not minding the actual RAID implementation). If you're chopping your capacity in half anyway, why not do a simple RAID 10 to get the speed and redundancy you're after? It saves a lot of XOR time and you don't need a fancy hardware RAID controller to get the job done.
  22. Are you planning to use unRAID for this purpose? Is the purpose of this caching server to hold your backups while they get written to tape or some other long-term storage solution? Assuming you have lots of servers each creating backups on many different networks you can multihome your NICs to allow specific servers to connect to your caching server on different NICs (presumably at staggered times during the day) and get lots of throughput directly to the hard drives. If you're using unRAID for this purpose, it gets a little complex in the setup though, since by default unRAID writes to just one HDD at a time. You could set things up so each server sends its files to specific shares which are confined to specific HDDs so that you would be forcing unRAID to write to multiple HDDs at the same time to get the throughput you want. If however you are using parity with unRAID, that will be a bottleneck due to the single-drive nature of parity in unRAID. I'd suggest that a slightly easier way on the Ethernet side might be to have a top-of-rack switch that has a free 10Gb port connecting to a 10GbE NIC on your caching server. You can configure all the VLAN stuff on the switch and just tell each of your servers to send their backups to the caching server. Theoretically this means ten Gb-connected servers could run simultaneously into the caching server at full speed. You'd still need to ensure your hard drive subsystem could sustain 1GB/s writes, though.
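     A quick-and-dirty way to sanity-check whether a disk or pool can sustain that kind of write rate is a large sequential write with dd; the destination path below is just an example, and zeroes are an optimistic stand-in for real backup data:
        dd if=/dev/zero of=/mnt/disk1/ddtest bs=1M count=10000 oflag=direct status=progress   # ~10GB direct write, bypassing the page cache
        rm /mnt/disk1/ddtest                                                                  # clean up the test file afterwards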
  23. Don't have access to a 2,1 model myself but have found it trivial to install various flavours of Linux on the 1,1 and it's worked great. Not sure if there would be drivers for the network and storage controllers of the MP machines in unRAID. If you boot from the unRAID USB drive what happens?
  24. Not really solved as in I found the problem, just that it started working today. There were a few more Windows updates completed on the physical server as well as the VMs that run the Active Directory domain so I am suspecting some pending update somewhere fixed a glitch. Syslog shows same as yesterday, so no clues there.
  25. The rest of the drives coming to the rescue in your "evil" scenario means that all other drives (which hold none of your share data) must be fully functional until a rebuild is complete. If you were to lose any other drive at the same time, all your share data are gone. It is always true that two simultaneous drive failures in unRAID leave parity unable to rebuild anything when using a single parity drive (three simultaneous failures mean the same with dual parity), so this is not to say confining a share to a single drive is bad or good, only a reminder that any important data need to be backed up to be truly safe. Side note: dual parity does not necessarily mean you can lose 2 drives and still successfully recover all your data. See the following for the reasoning: https://lime-technology.com/forums/topic/58741-how-does-dual-parity-work-actually/?do=findComment&comment=576325