eweitzman

Members
  • Content Count: 59

Everything posted by eweitzman

  1. Yeah, I got what you were saying about how tuning will change performance one way or the other for different scenarios. I will try some of these once the array is loaded. My (waning?) interest is still about the 2xR+2xW thing (there's a small XOR sketch of that read-modify-write arithmetic after this list). unraid.c's header talks about two state transitions specifically for computing parity, but I haven't read into the code to see if they perform first-time parity calcs differently. I will find out in a few days how it performs when I add a parity drive to my parity-less array with data disks that are being loaded now. The disks will run at near full bandwidth…
  2. That really changes things... My ignorance of the big picture shows. One could hope that if md batches small, apparently sequential requests and asks for one large transfer down the chain, it would be more efficient (a toy coalescing sketch follows the list). That would need to be tested before spending time on serious coding. - Eric
  3. With modern drives, you can't GET physical drive geometry. It is translated by the drive. Okay, that was sloppy. Let me rephrase it: ...if the requested stripes have some sort of addressing that can be used to order and group them so they can be retrieved sequentially in batches... A cursory glance shows a large buffer (unsigned int memory) allocated for an array of MD_NUM_STRIPE stripe_head structs. stripe_head has sector and state members. state could be used to prepare a list of stripes waiting to be read and written (i.e., that are in the same state), while sector could… (a small sorting sketch along these lines follows the list)
  4. bubbaQ, Browsing through the driver code, I see it can hold on to ~1200 "stripes" of data. This term must be a legacy from the days when this code was really a RAID driver, right? Anyway, if the driver is aware of 1200 simultaneous IO requests, perhaps some of them can be grouped, reordered, and processed so that a large series of data reads on adjacent tracks is done in parallel with a similar series of parity drive reads. That is, if the requested stripes have some sort of addressing that can be mapped to drive geometry, there is the possibility of disk-sequential, deterministic re…
  5. I see. I've been looking at this as if the unRAID code were higher up, i.e., not a driver, and had knowledge of what needed to be written beyond a single block or atomic disk operation. If each call to the driver by the OS has no knowledge of previous and forthcoming calls (that is, data to read/write), then it would be very gnarly to have the driver coordinate with other invocations of it. From what I've gleaned since last night, there are three main parts to unRAID:
       md driver - unRAID-modified kernel disk driver
       shfs - shared file system (user shares?) built on FUSE
       emhttp - man…
  6. First, an introduction. I've just started using unRAID Plus (not Pro) and I like it a lot. It has the right trade-offs for me, and is replacing a slow, dedicated RAID NAS box and some unprotected drives in a PC. I'm a developer. I worked on various unixes (and other OSes) from the mid-80s to mid-90s. Getting ps -aux, ls -lRt, top, and even vi back into L2 has been a trip. (vi twice so for an emacs guy.) I dug up a 1989 spiral-bound O'Reilly Nutshell book on BSD 4.3 that I bought back in the day. I'm not a kernel programmer or driver programmer or hardware guy, so the following thoughts m…
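
A note on the 2xR+2xW question in post 1: the arithmetic behind a single-block parity update is plain XOR, so changing one data block needs only the old data and old parity (two reads) plus the new data and recomputed parity (two writes). The sketch below is not the driver's code; the 16-byte "block" size, the buffer names, and the two-data-disk setup are invented purely to demonstrate the identity new_parity = old_parity XOR old_data XOR new_data.

    /* Toy demo of the XOR arithmetic behind a "2 reads + 2 writes"
     * parity update. All names and sizes here are made up for the demo. */
    #include <stdio.h>
    #include <stddef.h>

    #define BLK 16

    /* new_parity = old_parity XOR old_data XOR new_data */
    static void update_parity(unsigned char *parity,
                              const unsigned char *old_data,
                              const unsigned char *new_data)
    {
        for (size_t i = 0; i < BLK; i++)
            parity[i] ^= old_data[i] ^ new_data[i];
    }

    int main(void)
    {
        unsigned char d0[BLK], d1[BLK], parity[BLK];
        size_t i;

        /* initial full parity calculation over both data "disks" */
        for (i = 0; i < BLK; i++) {
            d0[i] = (unsigned char)i;
            d1[i] = (unsigned char)(i * 3);
            parity[i] = d0[i] ^ d1[i];
        }

        /* read-modify-write of one block:
         *   reads : old data (d0) and old parity      -> 2 reads
         *   writes: new data and recomputed parity    -> 2 writes */
        unsigned char new_d0[BLK];
        for (i = 0; i < BLK; i++)
            new_d0[i] = (unsigned char)(0xFF - i);
        update_parity(parity, d0, new_d0);  /* patch parity in place */
        for (i = 0; i < BLK; i++)
            d0[i] = new_d0[i];              /* commit the new data block */

        /* verify: parity still equals d0 XOR d1 everywhere */
        int ok = 1;
        for (i = 0; i < BLK; i++)
            if (parity[i] != (unsigned char)(d0[i] ^ d1[i]))
                ok = 0;
        printf("parity consistent after RMW update: %s\n", ok ? "yes" : "no");
        return 0;
    }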
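
On the batching idea in post 2, here is a toy illustration of coalescing small, back-to-back sector requests into one larger transfer before sending it down the chain. The req struct, the fixed array, and the coalesce() helper are all made up for the demo; a real driver would be working with its own request and queue structures.

    /* Merge runs of contiguous sector requests into single larger transfers.
     * Assumes the request list is already sorted by starting sector. */
    #include <stdio.h>

    struct req { unsigned long sector; unsigned int count; };

    /* Merge where one request ends exactly where the next begins; returns new length. */
    static int coalesce(struct req *reqs, int n)
    {
        int out = 0;
        for (int i = 0; i < n; i++) {
            if (out > 0 &&
                reqs[out - 1].sector + reqs[out - 1].count == reqs[i].sector)
                reqs[out - 1].count += reqs[i].count;   /* extend previous run */
            else
                reqs[out++] = reqs[i];                  /* start a new run */
        }
        return out;
    }

    int main(void)
    {
        struct req q[] = { {100, 8}, {108, 8}, {116, 8}, {200, 8}, {208, 8} };
        int n = coalesce(q, 5);

        /* expect two transfers: sectors 100..123 and 200..215 */
        for (int i = 0; i < n; i++)
            printf("transfer at sector %lu, %u sectors\n", q[i].sector, q[i].count);
        return 0;
    }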
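
And for the grouping/ordering idea in posts 3 and 4: if each pending stripe carries a state and a sector, the set can be bucketed by state and walked in ascending sector order, so each batch of reads or writes goes out roughly disk-sequentially. The stripe_head_lite struct and the two states below are stripped-down stand-ins for illustration, not the real stripe_head from the md driver.

    /* Sort pending stripes by state, then by sector within each state,
     * so stripes waiting on the same kind of I/O form sequential batches. */
    #include <stdio.h>
    #include <stdlib.h>

    enum sh_state { SH_READ_PENDING, SH_WRITE_PENDING };

    struct stripe_head_lite {
        unsigned long sector;   /* starting sector of the stripe */
        enum sh_state state;    /* what this stripe is waiting for */
    };

    static int cmp_stripe(const void *a, const void *b)
    {
        const struct stripe_head_lite *x = a, *y = b;
        if (x->state != y->state)
            return (int)x->state - (int)y->state;
        if (x->sector < y->sector) return -1;
        if (x->sector > y->sector) return 1;
        return 0;
    }

    int main(void)
    {
        struct stripe_head_lite pending[] = {
            { 9000, SH_WRITE_PENDING },
            {  128, SH_READ_PENDING  },
            { 8872, SH_WRITE_PENDING },
            {  256, SH_READ_PENDING  },
            {  192, SH_READ_PENDING  },
        };
        size_t n = sizeof pending / sizeof pending[0];

        qsort(pending, n, sizeof pending[0], cmp_stripe);

        /* each contiguous run of equal state is one batch of near-sequential I/O */
        for (size_t i = 0; i < n; i++)
            printf("%-6s sector %lu\n",
                   pending[i].state == SH_READ_PENDING ? "READ" : "WRITE",
                   pending[i].sector);
        return 0;
    }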