m-a-x

Members
  • Posts: 14
  • Joined
  • Last visited

m-a-x's Achievements

Noob (1/14)

Reputation: 0

  1. Dev 1 through 9 are the source of my suffering. The sdX names correlate directly with the physical SAS connectors on my 24-bay SAS expander, so I know which drive is which sdX. But the Dev [#], which I don't even know the origin of, gets assigned seemingly at random. If I remove, say, half the drives, the Dev [#] assignments get rearranged (there would still be a Dev 1-4), but as soon as I reintroduce all the drives, the Dev [#] assignment goes back to this specific arrangement. So it obviously uses some HDD info to map it in this particular way.
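    A quick way to see what drive-specific info each sdX actually exposes (serial, WWN, physical port path) is to walk the /dev/disk/by-id and /dev/disk/by-path symlinks. Below is a minimal Python sketch, assuming a Linux host where udev has populated those directories; it is only an inspection aid, not how Unraid or the Unassigned Devices plug-in derives its Dev #.

    import os
    from collections import defaultdict

    def collect(link_dir):
        """Map each whole-disk device (e.g. 'sde') to the symlink names pointing at it."""
        mapping = defaultdict(list)
        if not os.path.isdir(link_dir):
            return mapping
        for name in sorted(os.listdir(link_dir)):
            target = os.path.basename(os.path.realpath(os.path.join(link_dir, name)))
            if target.startswith("sd") and not target[-1].isdigit():  # skip partitions like sde1
                mapping[target].append(name)
        return mapping

    by_id = collect("/dev/disk/by-id")      # serial- and WWN-based names
    by_path = collect("/dev/disk/by-path")  # physical controller/port path names

    for dev in sorted(set(by_id) | set(by_path)):
        print(dev)
        for name in by_id.get(dev, []):
            print("   by-id:  ", name)
        for name in by_path.get(dev, []):
            print("   by-path:", name)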
  2. Not sure if you mean the plug-in thread, but the Dev # assignment is random even without the plug-in installed. The "native" unassigned devices list does the same thing.
  3. Yes, I know it doesn't affect functionality, but it just drives my OCD insane!! I am trying to understand how those Dev # are assigned. I have 9 drives, sde/f/g/h/i/j/k/l/m, and their Dev # are completely out of order in the GUI and scattered all over the place in the "unassigned devices" list. I plan to put the first six, sde/f/g/h/i/j (they're sitting next to each other in the server too), into zpool-1 (raidz2), then the next two, sdk/sdl, into zpool-2 (mirror), and then use the last drive as a hot spare for zpool-1. My eyes are bleeding when I see the two pools' members all mixed up in the list. I tried powering the server on and adding the drives one by one - sde, sdf, sdg, etc. At first they show up as Dev 1, Dev 2, but as I add more drives to the server they start exchanging Dev numbers with each other, ending up completely shuffled. I'm in pain. Can I control this behavior?
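    For what it's worth, the layout described above can be pinned to the physical drives (rather than to whatever order sdX or Dev # happen to show up in) by referring to /dev/disk/by-id names when the pools are created. A rough Python sketch that only prints the zpool commands; the by-id names are placeholders, not real devices.

    raidz2_drives = [f"/dev/disk/by-id/ata-EXAMPLE-SERIAL-{i}" for i in range(1, 7)]  # sde..sdj
    mirror_drives = [f"/dev/disk/by-id/ata-EXAMPLE-SERIAL-{i}" for i in range(7, 9)]  # sdk, sdl
    hot_spare = "/dev/disk/by-id/ata-EXAMPLE-SERIAL-9"                                # sdm

    commands = [
        # zpool-1: six-drive raidz2 with the ninth drive as a hot spare
        ["zpool", "create", "zpool-1", "raidz2", *raidz2_drives, "spare", hot_spare],
        # zpool-2: two-drive mirror
        ["zpool", "create", "zpool-2", "mirror", *mirror_drives],
    ]

    for cmd in commands:
        print(" ".join(cmd))  # print only; run by hand once the names are verified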
  4. I want to deploy a Win10 VM with BlueIris. As far as I remember, the software writes to its destination in a continuous/looped fashion. Each camera generates its own file stream, cut into whatever time segments you choose. You also define a max cap for all the recordings; once it's reached, the oldest files get deleted as new data flows in. I have 2x6TB data drives + 1x6TB parity drive, so 12TB of available space. In BlueIris I would configure a max recording cap of 10TB. Time to configure the destination. What are the best practices for configuring destinations for continuous recordings, which are always at capacity and whose files just get overwritten in a loop? Should I configure a standard UNRAID share? What do I do with the allocation method if the space is technically always full? Do I even bother configuring UNRAID shares? Will ZFS work better? A cache pool?
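    To make the write pattern concrete, here is a toy Python model of the looped recording described above: clips accumulate until a size cap is hit, then the oldest files are deleted to make room. It is only an illustration of the workload the share has to absorb (the folder path and cap are made up), not BlueIris code or an Unraid recommendation.

    import os

    CAP_BYTES = 10 * 10**12  # the 10TB recording cap mentioned above

    def prune_oldest(folder, incoming_bytes, cap=CAP_BYTES):
        """Delete the oldest files in `folder` until `incoming_bytes` more will fit under `cap`."""
        files = [os.path.join(folder, f) for f in os.listdir(folder)]
        files = [f for f in files if os.path.isfile(f)]
        files.sort(key=os.path.getmtime)  # oldest first
        used = sum(os.path.getsize(f) for f in files)
        while files and used + incoming_bytes > cap:
            victim = files.pop(0)
            used -= os.path.getsize(victim)
            os.remove(victim)

    # e.g. before writing a ~2GB clip: prune_oldest("/mnt/user/cameras", 2 * 10**9)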
  5. Would it make sense to do a quick check for the disk signature before showing the message? Something like: "This disk seems to have already been precleared by a 3rd-party tool. If you trust that tool and wish to add the drive, skipping the clear, check [x] I trust this tool and understand the consequences. Otherwise leave it unchecked and UNRAID will clear the disk again." Thanks everyone for the input! I totally understand the technical side, I get it. It would just be nice if the message were less confusing for users who don't know. At least a simple note: [*if the disk has been precleared, Unraid will skip the clear].
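    The suggestion above boils down to a small decision rule. A hypothetical Python sketch of that flow (not Unraid's actual GUI code, just the logic being proposed):

    def add_disk_prompt(has_preclear_signature, user_trusts_tool):
        """Return the message the GUI would show under the proposed flow."""
        if not has_preclear_signature:
            return "Disk has no clear signature: Unraid will clear it before adding it to the array."
        if user_trusts_tool:
            return "Adding disk without clearing (preclear signature accepted)."
        return ("This disk seems to have been precleared by a 3rd-party tool. "
                "Check the box to trust it and skip the clear; otherwise Unraid will clear the disk again.")

    print(add_disk_prompt(has_preclear_signature=True, user_trusts_tool=False))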
  6. The emulated disk was brought up in this conversation in a different context, specifically a scenario of data corruption on the emulated disk in case of a bit flip on a precleared disk that had been added to the array at some point, followed by a subsequent failure of another disk. Anyway, that's beyond this topic. The real question is: if UNRAID trusts that the new disk is clean based on the signature, why would the GUI still throw a message saying the drive will be cleared again? One can't argue that this is a result of UNRAID not trusting Preclear/3rd-party plugins, because the drive signature is provided by Preclear too, and UNRAID happily trusts that signature, i.e. Preclear's clearing job.
  7. This is sort of what I meant, I just worded it poorly. By saying "emulated disk is in reference to the disk that has been added to the array", I meant the disk that was going to be added to the array as a replacement for the failed one, which is a 1:1 copy of the emulated disk. But yes, I do realize that the emulated disk is not even the replacement disk; it's not physical, it's the result of XOR in RAM (I assume). Thanks for the amazing explanation!
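    The "XOR in RAM" idea can be shown with a few toy bytes: with single parity, the emulated disk is just the parity XORed with every surviving data disk. A minimal Python sketch (toy byte strings only; the real thing works sector by sector across whole devices):

    from functools import reduce

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # Three data "disks" and the parity computed across them.
    disk1 = b"\x10\x20\x30\x40"
    disk2 = b"\x0f\x0e\x0d\x0c"
    disk3 = b"\xaa\xbb\xcc\xdd"
    parity = reduce(xor_bytes, [disk1, disk2, disk3])

    # Pretend disk2 failed: the "emulated" disk2 is parity XOR the surviving disks.
    emulated_disk2 = reduce(xor_bytes, [parity, disk1, disk3])
    assert emulated_disk2 == disk2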
  8. This makes total sense, thanks for the detailed explanation. I assume the term "emulated disk" is in reference to the disk that has been added to the array? Also, I assume "flipping bits" refers to a procedure that bypasses traditional write access, otherwise the clear signature would have been destroyed, correct? I also understand that the result of mismatching parity is correction of the parity data, not of the flipped bits? About the disk failure remark - this is exactly the scenario I had in mind when I said "risking the integrity of the entire array" in my original comment. I found it confusing that, I quote, "preclear is not a standard part of Unraid but a 3rd party utility so the GUI does not take account of the fact that a Clear had been done outside the control of Unraid". Here's why: the OP tried to add a precleared drive to the array and Unraid stated that it wanted to clear the drive again. So on one hand UNRAID uses the signature as proof that the drive is zeroed, and on the other hand the GUI still throws a misleading message that the disk clearing is going to happen again. It's worth noting that the drive was certainly clean and untampered with: as per the OP's follow-up, he OKed the prompt for one more clearing, only to realize that nothing really happened and the drive was added to the array instantly. The quoted statement seems contradictory to me. Not trying to be annoying, just trying to understand how it all works. UNRAID is awesome.
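    On the "correction of the parity data, not the flipped bits" point, here is a toy Python illustration of a correcting parity check: it recomputes parity from the data disks and, on a mismatch, rewrites the parity, because it has no way to know which data bit actually flipped. Toy bytes only, not Unraid's parity-check code.

    from functools import reduce

    def xor_all(blocks):
        """Byte-wise XOR across a list of equally sized byte strings."""
        return bytes(reduce(lambda x, y: x ^ y, column) for column in zip(*blocks))

    data_disks = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
    parity = xor_all(data_disks)

    # A bit flips on one data disk after parity was written.
    data_disks[1] = b"\x10\x21"

    recomputed = xor_all(data_disks)
    if recomputed != parity:
        # The check only sees "parity does not match the data"; in correcting
        # mode it updates parity to match whatever the data disks now contain.
        parity = recomputed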
  9. I'll try to explain, but correct me if I am wrong at any point. The statement above implies that UNRAID has no knowledge of whether disk clearing has been performed outside its own native functionality, hence the message that the disk will be cleared. At the same time, as we see later, UNRAID trusts the signature issued by that same plug-in from "outside", assumes that the new disk has been zeroed, and adds it to the array without its own native clearing process. My confusion is why UNRAID has no knowledge of outside clearing if in fact it can validate it via the Preclear signature? Am I missing something? Thanks!
  10. I am new and just learning UNRAID, but can you please elaborate on the following: If UNRAID "does not take account of the fact that a Clear has been done outside", then how does it make a decision to add the drive to the array without the subsequent clear procedure? Doesn't it still rely on the signature issued by Preclear, technically risking the integrity of the entire array?
  11. My perception might be completely off due to something I could be missing entirely. But if we forget about SAS3 and DataBolt for a moment (I found that info too while waiting for comments here) and consider an older SAS2 end-to-end connection with SATA2/SAS1 drives - what actually happens there? I just can't understand how the speed of the drive can reduce the bandwidth of the link between the expander and the HBA. Which piece of hardware decides to drop the speed to 3Gbps per lane when it detects SATA2 drives? And what happens if I have a mix of SATA2 HDDs and SATA3 SSDs capable of 6Gbps?
  12. I've read some older posts here and came across a statement that left me confused, and I need your help confirming or disproving it. Assume we have a 24-slot SAS expander backplane supporting 12G and an HBA card supporting 12G, and we connect them using one 4-lane Mini-SAS cable, effectively giving us a bandwidth of 48Gbit/s. The statement I came across says that if we plug in, say, 12 x SATA300 drives, the speed will be limited to 4 (Mini-SAS lanes) x 300MB/s (drive speed) = 1200MB/s. TRUE or FALSE? If TRUE, then please explain what is preventing the SAS expander from pushing 12 x 3Gbit = 36Gbit/s over the 48G link?
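    For reference, the raw numbers behind the two readings of that statement, as plain Python arithmetic; this only lays out the figures, it does not settle what a given expander/HBA combination actually does.

    LANES = 4
    LANE_GBPS = 12          # 12G link between HBA and expander, per lane
    DRIVES = 12
    DRIVE_GBPS = 3          # SATA300 / 3Gbit/s interface rate, roughly 300MB/s

    wide_link_gbps = LANES * LANE_GBPS          # 4 x 12G = 48 Gbit/s raw
    claimed_limit_mbps = LANES * 300            # the quoted claim: 4 lanes x 300MB/s = 1200MB/s
    sum_of_drives_gbps = DRIVES * DRIVE_GBPS    # 12 x 3Gbit = 36 Gbit/s

    print(f"wide link:          {wide_link_gbps} Gbit/s")
    print(f"claimed limit:      {claimed_limit_mbps} MB/s")
    print(f"sum of drive links: {sum_of_drives_gbps} Gbit/s (fits under {wide_link_gbps} Gbit/s)")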
  13. Raid 5 Doomed Article

    12 years later I came to tell you that it doesn't matter how many drives you have and whether they are part of one storage array or each drive is plugged into its own laptop. The official Bit Error Rate per read (BER) for consumer HDDs (PC/laptops) is 1/100,000,000,000,000 (1/10^14). The proper interpretation of BER is this: "the chance that a bit is unreadable".

    * Every time you read a bit with a BER of 1/10^14, it's like rolling a die with a 1/6 chance of getting a "six" (let's imagine for a moment that the "six" is that "read error" situation). And when you go on to read the next bit, it's like rolling the die again.

    * If you want the probability of not getting a "six" in 30 straight rolls, you start with the probability of no-six on a single roll: 1 - 1/6 = 5/6. This is the probability of getting anything but a "six". Next you raise 5/6 to the power of 30 (the number of rolls): (5/6)^30 = 0.0042, or a 0.42% chance of not getting a single "six" in 30 rolls. To get the probability of getting a "six" at least once in 30 rolls, you just "invert" that: 1 - 0.42% = 99.58%.

    By the same logic, to calculate the probability of not getting a single bit error in a 12TB read, we first identify the number of independent bit reads (aka dice rolls): 12TB = 96Tbit = 96,000,000,000,000 bits. Each bit has a BER of 1/10^14 (read error), meaning read success = 1 - 1/10^14 = 0.99999999999999. This is the probability of a successful read of a SINGLE bit. Now we raise this number to the power of 96,000,000,000,000 (the number of independent dice rolls): 0.99999999999999 ^ 96,000,000,000,000 ≈ 0.383, or a 38.3% chance of having NO bit errors across the 12TB read. Which leaves a 1 - 38.3% = 61.7% chance of at least one bit error.

    The important part here is to realize how independent and decoupled the bit reads are from each other throughout the 12TB read. It doesn't matter whether you (re)read all 12TB from a single 1TB disk, or across a 100TB array made of 20 disks, or as a cumulative read performed by 12 separate laptops with their own drives inside, 1TB read by each laptop. Each bit read is as independent an event as each roll of a die, no matter how many different and unique dice you use throughout the experiment.

    I just thought I'd add my two cents while waiting for the Parity-Sync to complete on my Unraid. Plus this topic is still indexed very well by Google after all these years. Okbye!

    P.S. Some might say that a 61.7% chance of bit error is way too high based on anecdotal evidence and personal experience. And they will be right. Firstly, because UREs don't happen due to a single bit flip, thanks to the ECC built into HDD firmware, which can tolerate a flip per sector. Secondly, because I find the BER of 1/10^14 way too conservative. The example above calculates the probability of a bit flip that most likely goes unnoticed thanks to ECC, not an actual URE. It also shows that it makes no difference how many drives participate in the 12TB read: every bit read is an independent event from the standpoint of probability.
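    For anyone who wants to re-run the arithmetic, here is a short Python check using the same assumptions as above (BER = 1/10^14 per bit read, independent reads, 12TB = 96,000,000,000,000 bits):

    import math

    # Dice warm-up: chance of at least one "six" in 30 rolls.
    p_no_six = (5 / 6) ** 30
    print(f"no six in 30 rolls:   {p_no_six:.4f} ({p_no_six:.2%})")
    print(f"at least one six:     {1 - p_no_six:.2%}")

    # Bit-error version: 12TB read at BER = 1/10^14.
    ber = 1e-14
    bits = 12e12 * 8                         # 96,000,000,000,000 bit reads
    # (1 - ber) ** bits loses precision in plain floats; exp(bits * log1p(-ber)) is safer.
    p_clean = math.exp(bits * math.log1p(-ber))
    print(f"no bit error in 12TB: {p_clean:.3f} ({p_clean:.1%})")
    print(f"at least one error:   {1 - p_clean:.1%}")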