
  1. bman

    question about error checking

    preclear is a script you can run on your unRAID server to read from and write to every sector on a hard drive, then read again to verify the writes happened properly. It is an optional step in preparing a drive for use with a server (I preclear all new drives, whether for unRAID or not). The original purpose of the script is now irrelevant, but it was also very useful for weeding out weak hard drives before you entrusted them with your data, which presumably you do not wish to lose.

    By forcing a drive to read and write every sector multiple times (I use a cycle count of 3, which for 12TB drives would likely take a week) you are forcing the drive to find any sectors that are perhaps bad from the factory. It's a burn-in test. For me it has found a few drives that I was able to return within the 15 days my local computer shop allows me to walk in and replace a failed drive with them. After that time period I'd have to deal with the manufacturer's RMA process, which takes much longer. It's also useful for assessing the relative health of a drive that has been used before. It erases everything on the drive, so it's something you only do before or in between "uses".

    preclear is unrelated to SMART data and testing, except to say that SMART counters will definitely be altered after preclear has been run. SMART test results after the completion of preclear can be decent indicators of whether you can rely on your drive to keep your data safe, but nothing is foolproof. SMART data will never reveal the full story: almost regardless of the SMART data, your drive may live a very long life, or it may fail a week later without any hint of trouble beforehand. The majority of the time (75%, give or take, if you believe Backblaze data on the subject) SMART results are good enough to determine the relative health of a hard drive.
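As a hedged illustration of the kind of SMART counters worth eyeballing after a preclear: the attribute names below are real `smartctl -A` output fields, but the `check_smart` helper and the sample lines are my own sketch, not part of the preclear script. On a real server you would pipe in `smartctl -A /dev/sdX` instead of the sample text.

```shell
# Minimal sketch: flag a drive whose SMART counters suggest weak sectors.
check_smart() {
  # Reads smartctl -A style output on stdin; prints WARN if the raw value
  # (column 10) of Reallocated_Sector_Ct or Current_Pending_Sector is non-zero.
  awk '/Reallocated_Sector_Ct|Current_Pending_Sector/ { if ($10+0 > 0) bad=1 }
       END { print (bad ? "WARN" : "OK") }'
}

# Sample attribute lines (the kind a preclear cycle tends to change):
sample="  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8"

printf '%s\n' "$sample" | check_smart   # prints WARN (8 pending sectors)
```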
  2. bman

    New User looking for advice on setup

    The total capacity of your data array is irrelevant to parity, so long as you follow the rule of parity being at least as large as your largest data drive. You can have twenty-eight 8TB drives (for a capacity of 224TB) all protected by your single 8TB parity drive, if you so choose. 28 data disks is the current unRAID limit.
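The sizing rule is simple arithmetic; a quick shell sketch using the numbers from this example (28 data disks of 8TB each, one 8TB parity drive):

```shell
# Parity must be at least as large as the largest single data disk;
# total array capacity is otherwise unconstrained.
largest_data_tb=8
parity_tb=8
data_disks=28
total_tb=$(( data_disks * largest_data_tb ))
if [ "$parity_tb" -ge "$largest_data_tb" ]; then
  echo "parity OK: ${total_tb}TB of data protected"   # prints 224TB
fi
```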
  3. bman

    New User looking for advice on setup

    There are exceptions to every rule. I have a 320GB cache drive that's running great and is 11 years old. Most of my IDE drives from 2004 still worked fine when I tossed them out in 2015. I have SCSI drives from the 1980s that are running fine today. Today I want drives that last 7 years or more and I am hoping the better 'ratings' of the higher-priced drives will prove to be of value in terms of longevity and total cost of ownership. I get the insurance thing, law of averages, conspiracy theory and share some of the same thoughts. Taking WD's lineup for example, purple for... surveillance? Give me a break. That's different from NAS how? I used to have dozens of WD Green drives spinning, but I can count on one hand how many lasted longer than 5 years, and over 70% of them failed before 4 years. I have (so far) had much better luck with WD Red, Black and Gold drives, though few of them are older than 4 years at this point.
  4. bman

    New User looking for advice on setup

    Put better, a 5-year warranty means you won't have to shell out more money to replace the drive any time soon. Drives fail whenever they want to based upon lots of factors, but in general those warranted for five years have passed more stringent tests at the factory and can be expected to last longer; otherwise the manufacturer wouldn't put a longer warranty on them in the first place.
  5. In general, what you're after is a different server OS than unRAID. You can use unRAID as a spot to host virtual machines, and in that way get the "different" OS that is required for your performance needs. In short, though, if you're not using unRAID for its storage abilities, you may be better served installing another OS on your bare metal build. The author of the video is already running unRAID for other reasons, but unRAID is not what is enabling the speed between his demo systems. Rather, the speeds demonstrated come from the Windows, Linux or OS X machines (whichever he has booted into at the time) in concert with RAM used as disks and 10GbE network interface cards. Proper network setup is necessary too, naturally.
  6. bman

    New User looking for advice on setup

    While I do not recommend you run without parity, technically this is also possible. If I hadn't used parity, I can recall at least 6 times I would have lost multiple TB of data in the past, if that helps you make the decision on parity use. You may be surprised how short a time your initial array size will last (given your intended use), so I'd say save some cash for a couple of 5-year-warranted 8TB drives as your next upgrade; then you can just add such "large" drives to your array as needed. Going from 2TB to 4, then 6, then 8 for your parity drive, as an example, means you've purchased at least two too many hard drives in a short (maybe 2-year) time period. By buying 5-year-warranted drives (WD Gold, Black or Red Pro as examples) you can rest assured the drives won't fail anytime soon, and by buying large it takes much longer to fill your chassis with hard drives. (I experienced some regret as I had to remove perfectly functional, not-very-old 2TB drives in favour of larger ones just because I hadn't any more physical space for more drives.)
  7. Quite right; it was not obvious where to go from my point of not assigning drives to the array, but the idea is that once you've got them outside the control of unRAID you can build a virtual machine to handle them in any speedy fashion you choose.
  8. bman

    Running Unraid 6.5.3 on microserver N36

    Hello Ettiene,

    There are upgrade procedures to go to v6. Did you try the method here? https://lime-technology.com/wiki/UnRAID_6/Upgrade_Instructions

    For your case you'd need to perform a "clean install" as outlined in Tom's post here: https://lime-technology.com/forums/topic/39381-upgrading-unraid-5-to-unraid-6/

    You have not included any details of your server. Can you indicate which components you have working with 4.7? Version 6 will not work with all the same hardware as version 4.7 does. (I, for example, had to buy a new motherboard, CPU and RAM since my 2005-era server was too old for v6.)
  9. As nuhll said, you can run VMs of any Linux flavour you like (or other operating systems as desired) and get all kinds of performance. But the storage subsystem unRAID uses is not suited to the speeds you're looking for. Using drives outside the array (by simply not assigning physical drives to unRAID's storage pool) with any VMs you might like to run will get you the speeds you're looking for. You can have a hybrid system in this way, using unRAID for archive type storage (write-once, read many concept) and also using it as a spot to host virtual machines from. Or you can ignore the core functionality of unRAID and just use it to host VMs and fast drives + network access, similar to what Linus did in his 7-gamers-on-one-PC video as mentioned above.
  10. bman

    Upgrade from v5 to v6

    Dual parity may not be 100% successful in cases where you lose a drive or two simultaneously, because failing hard drives can cause controller lock-ups and other inconsistencies which may in turn negatively affect your otherwise healthy drives. Once any healthy drive gets bad bits written to it in the throes of disk failure, parity is no longer of any use to you in rebuilding when you replace your failed drive(s). I'd suggest you get a functional backup running for the data you consider most important (the Kodi/database stuff, at least) then just follow the upgrade procedures to get to v6. My own transition was painless. MySQL runs fine on 6, but I do not know how to transfer an existing database to a new server so cannot comment on what troubles may show up there. Your cache drive ostensibly could (should?) be empty so whether its format gets changed or not during the upgrade is irrelevant with respect to being able to reformat it to work again after a downgrade. If important data reside on your cache drive, back them up too before you start.
  11. bman

    Hardware RAID 6 Software Raid 0

    Still a little confused. Does this mean you'll be running RAID60 with 10 or more drives in the end? Problem with RAID60 is it's pretty ridiculous in terms of protection versus cost. Not just cost of hardware, but rebuild time if there's a failure and the speed impact the rebuild function has on the overall performance of the server. RAID 10 is very nearly as robust in terms of data protection as RAID 60, it's cheaper and many hours (sometimes days) quicker to rebuild in the event of drive failure. I wouldn't touch RAID 50 or 60 for video editing. Time is money. Hard drives are cheaper than downtime. $0.02
  12. bman

    Hardware RAID 6 Software Raid 0

    No question this is a task unRAID is not well suited for. 8TB of new information per hour is impossible for spinning disks to keep up with in the world of unRAID. Furthermore I question the need to create two RAID6 arrays out of 8 drives then stripe them together. You'd end up with 4 data drives and 4 parity drives (capacity-wise, not minding the actual RAID implementation). If you're chopping your capacity in half anyway why not do a simple RAID 10 to get the speed and redundancy you're after? Saves a lot of XOR time and you don't need a fancy hardware RAID controller to get the job done.
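To make the capacity point concrete, here's a back-of-envelope sketch assuming 8 identical 8TB drives (my assumed sizes, not the poster's exact hardware):

```shell
drive_tb=8
# RAID 60: two RAID6 groups of 4 drives (2 parity each), striped together,
# so only 2 data drives per group carry unique data.
raid60_usable=$(( 2 * (4 - 2) * drive_tb ))
# RAID 10: four mirrored pairs, striped; half the raw capacity is usable.
raid10_usable=$(( 8 / 2 * drive_tb ))
echo "RAID60: ${raid60_usable}TB usable, RAID10: ${raid10_usable}TB usable"
# Both come out to 32TB: same capacity, but RAID10 skips the parity math
# and rebuilds far faster.
```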
  13. bman

    Multi Homing / Multi Nics

    Are you planning to use unRAID for this purpose? Is the purpose of this caching server to hold your backups while they get written to tape or some other long-term storage solution?

    Assuming you have lots of servers each creating backups on many different networks, you can multihome your NICs to allow specific servers to connect to your caching server on different NICs (presumably at staggered times during the day) and get lots of throughput directly to the hard drives. If you're using unRAID for this purpose, the setup gets a little complex, since by default unRAID writes to just one HDD at a time. You could set things up so each server sends its files to specific shares which are confined to specific HDDs, forcing unRAID to write to multiple HDDs at the same time to get the throughput you want. If however you are using parity with unRAID, that will be a bottleneck due to the single-drive nature of parity in unRAID.

    I'd suggest that a slightly easier way on the Ethernet side might be a top-of-rack switch with a free 10Gb port connecting to a 10GbE NIC on your caching server. You can configure all the VLAN stuff on the switch and just tell each of your servers to send their backups to the caching server. Theoretically this means ten Gb-connected servers could run simultaneously into the caching server at full speed. You'd still need to ensure your hard drive subsystem could sustain 1GB/s writes, though.
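The 1GB/s figure is just unit conversion; a sketch assuming ten backup servers each saturating a 1GbE link into the one 10GbE uplink:

```shell
servers=10
per_server_mbps=1000                          # 1GbE per backup server
total_mbps=$(( servers * per_server_mbps ))   # 10000 Mb/s fills the 10GbE NIC
total_MBps=$(( total_mbps / 8 ))              # bits -> bytes
echo "disks must sustain roughly ${total_MBps} MB/s of writes"   # ~1250 MB/s
```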
  14. bman

    Mac Pro (2.1) and UnRaid (latest version)

    Don't have access to a 2,1 model myself but have found it trivial to install various flavours of Linux on the 1,1 and it's worked great. Not sure if there would be drivers for the network and storage controllers of the MP machines in unRAID. If you boot from the unRAID USB drive what happens?
  15. Not really solved as in I found the problem, just that it started working today. There were a few more Windows updates completed on the physical server as well as the VMs that run the Active Directory domain so I am suspecting some pending update somewhere fixed a glitch. Syslog shows same as yesterday, so no clues there.