Everything posted by BRiT

  1. Yes, there is. How to configure them was written up in the forums when the feature was first introduced: http://lime-technology.com/forum/index.php?topic=4782.0 However, are you running Cache_Dirs? That may be all you need to prevent the spun-down disks from spinning up when all you're doing is getting file listings.
  2. I don't have the time to find and repost the series of articles and links right now, but if you perform a search on these forums, you will find several earlier posts related to the drives and the cause of the issues.
  3. No problems. However, your performance is not what it should be. Some extreme cases saw performance improvements of 285%. The lowest performance increases seemed to be on the order of 30 - 40%.
  4. Depending on what you downloaded, you could try installing it using: installpkg packagefilename.tgz
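    For example, if the download were a standard Slackware package saved to the flash drive, the invocation might look like this (the path and package name here are hypothetical):

        installpkg /boot/packages/screen-4.0.3-i486-1.tgz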
  5. Response copied from this other thread [ http://lime-technology.com/forum/index.php?topic=2216.msg16744#msg16744 ] ~~~ I think the opcode 0x85 relates to a request to pass the command through to the actual drive, likely to check on the hard drive temps or to deal with drive spinup/spindown. [ http://www.t10.org/lists/op-num.htm ]
  6. Whenever a ReiserFS disk is mounted, there will be a certain minor amount of writes done to it. A small amount is normal. "Mounting" means the system is making the drive's filesystem available for use. For more information see: http://whatis.techtarget.com/definition/0,,sid9_gci214638,00.html
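    As a rough sketch of where those writes come from (the device and mount point here are hypothetical):

        mount -t reiserfs /dev/sdb1 /mnt/disk1
        # replays any pending journal transactions and updates the superblock state,
        # both of which are writes, before you ever touch a file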
  7. Here's what a typical Slackware Package layout consists of [ http://slackwiki.org/Packages#Slackware_Package_Layout ]:

        ./
        ./install/doinst.sh
        ./install/slack-desc
        [./usr/bin]
        [./usr/lib]
        [...]

    The install directory, as well as the files within it, is deleted after the installation of the package to the root install directory (usually '/'). The install directory is not strictly required for the most basic packages, but it allows additional functionality: slack-desc contains the package description, and doinst.sh contains post-installation instructions such as creating symbolic links. Every other file within the package is simply untarred (extracted) to the root directory. Most of these files are in standard locations. There are other special files within packages: usr/doc/<appname>-<version> holds all documentation included with the package (such as README, INSTALL, ChangeLog, docs/, etcetera). Some of the files in this directory have to be manually copied from the source archive, since Makefiles usually don't copy these documentation files to the package build tree.
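    To make doinst.sh concrete, a minimal one might look like this (a sketch; the package name and symlink are hypothetical):

        #!/bin/sh
        # doinst.sh runs from the root of the install target ('/' normally)
        # after the package files have been extracted
        ( cd usr/bin ; rm -rf myapp ; ln -sf myapp-1.0 myapp )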
  8. Now to answer your question about the filename.tgz files... I imagine a typical untorrent.tgz.auto_install file would contain the following:

        /bin/installpkg untorrent.tgz

    This will invoke the command 'installpkg' on untorrent.tgz. 'installpkg', 'upgradepkg', 'removepkg', and 'slackpkg' are the standard Slackware package management tools to install, upgrade, remove, and manage packages. From Joe's earlier message, his unmenu.auto_install runs the following commands:

        cd /boot/unmenu_dist13
        ./uu
  9. I updated the previous post to include a bit more information, as I found links to the SlackBuild Wiki. The -c option to sh (the Unix shell) is described in the man page (assuming the default shell is Bash) [man bash]. The string passed to the shell is one line [the -n1 option specifies at most 1 argument per command line] based on the input to 'xargs', which is the output of the 'sort', which works on the output of the 'find', which lists all *.auto_install file names.
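    Putting that together, the boot-time scan probably looks something like this (a sketch reconstructed from the description above; the exact path in the init scripts may differ):

        find /boot -name '*.auto_install' | sort | xargs -n1 sh -c

    Each sorted filename becomes the command string handed to 'sh -c', so the files are executed one at a time, in name order.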
  10. Forgot to mention the .tgz / .txz files. Those files are compressed using 'gz' or 'xz'; the contents are in TAR format. It's convention to use .tgz or .txz to indicate gzipped-tar or xz'd-tar contents. Of the two, 'xz' is the newer and improved format. The contents of the TAR files are Slackware Packages [SlackBuild]. They contain the information needed to install the contents as well as the actual contents to install. They are installed using "installpkg filename.tgz" or "installpkg filename.txz". For way too much information on how to generate and create Slackware Packages using the typical tool, SlackBuild, here is their wiki page: http://slackwiki.org/Writing_A_SlackBuild_Script
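    If you want to peek inside one before installing it, both are just compressed tars (package names hypothetical; the J flag assumes a tar new enough to understand xz):

        tar tzvf somepackage.tgz     # list contents of a gzip-compressed package
        tar tJvf somepackage.txz     # same for an xz-compressed package
        installpkg somepackage.txz   # then install it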
  11. I think you got it all. I am guessing that the sort is there so packages are installed in a certain order; thus the auto_install files are prefaced with a priority, much like the Linux init subsystem.
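    If that guess is right, the file naming would work much like init scripts (these names are hypothetical):

        10-libraries.auto_install
        20-unmenu.auto_install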
  12. I think most use the 4-in-3 or 5-in-3 hard drive cages, which take 1-2 power plugs to power all the drives, or use their NORCO case, which has a powered backplane for the drives.
  13. Those last two segfaults are from the Folding At Home [ http://folding.stanford.edu/ ] distributed computing program. They actually name the cores they use with an 'exe' filename extension. nighttraitor, try not starting F@H until after the formats are done. The additional stress F@H puts on the system (memory usage and CPU usage) might be triggering corner cases not fully exercised by the kernel's normal test suite.
  14. I agree with all parts, save one. I'll stop short of stating the degree of implementation difficulty. To make the tracking aspects possibly easier and to avoid having to write new parts of the system, I'd only allow for the addition of a single drive at a time. To allow adding multiple drives with existing data at once, you'd have to read the existing parity data, read all the new drives' data, calculate the new parity data, write the new parity data, and write the in-progress sector. This might be able to leverage some of the existing code shared between the parity-check and parity-correct, but I'm in the dark and guessing at this point.
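    In equation form, the per-sector update for folding in k new data drives would be (a sketch, assuming the single parity is a plain XOR across the data drives):

        P_new(s) = P_old(s) XOR D_1(s) XOR ... XOR D_k(s)

    with the in-progress sector number persisted as each stripe completes.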
  15. I disagree. It's because you're writing to 2 different sectors, both of which are vitally important to keeping data integrity. One of them contains the sector number that was just updated (the resume point). The other contains the recalculated parity data. If at any point those two get out of sync with each other, you need to resort to a full parity recalculation, reading all of the data. When you crash while writing to a data drive, the system typically recovers by ReiserFS replaying the journaled transaction or reverting to an earlier transaction point. When this occurs, unRAID typically forces a full parity rebuild to update the parity data. If you, the user, cancel the full parity rebuild, you're at risk of data corruption, as your parity data is out of sync.
  16. It was for an Atari 8-bit. My friend did the upgrade for his Apple II. Both systems used the 6502 CPU. It's how I got started in programming. My, those days were so long ago.
  17. I remember paying that much for a 64K memory upgrade, to 128K total!
  18. queeg, I suggest you reread my post, as you are ignoring several cases that make continuing from where you left off impossible without ending up with corrupted parity data. I'll repeat the part you're missing. The last case forces a parity recalc from scratch. The real issue at hand is that you do not know which case you hit during a crash.

    I had thought about recovering from the last point changed (the In-Process Sector Indicator), but there are several trouble spots that can easily occur and cause data corruption. In one case, if you update the In-Process Sector Indicator BEFORE you write the new Parity Sector (Sector N), the update of Sector N might have been lost (power glitch, drive crashed, system never sent it, controller never wrote it, drive never flushed it, etc). In the other case, if you update the In-Process Sector Indicator AFTER you write to Sector N, the update of the In-Process Sector Indicator might have been lost. In both cases, you need to repeat the process on Sector N. In yet another case, the drive might have cached the write to the In-Process Sector Indicator but NOT the updates to Sector N.
  19. Just to make sure everyone understands correctly, let me state it again. As it stands now, ALL drives, not just the new drive, are unprotected until the process finishes. Your proposed situation does have merit. If the operation crashes before it's finished, the recovery step is to discard existing parity and recalculate from scratch.

    I had thought about recovering from the last point changed (the In-Process Sector Indicator), but there are several trouble spots that can easily occur and cause data corruption. In one case, if you update the In-Process Sector Indicator BEFORE you write the new Parity Sector (Sector N), the update of Sector N might have been lost (power glitch, drive crashed, system never sent it, controller never wrote it, drive never flushed it, etc). In the other case, if you update the In-Process Sector Indicator AFTER you write to Sector N, the update of the In-Process Sector Indicator might have been lost. In both cases, you need to repeat the process on Sector N. In yet another case, the drive might have cached the write to the In-Process Sector Indicator but NOT the updates to Sector N. Also, in typical fashion upon a crash forcing a reboot or restart, the ReiserFS might replay journalled transactions, which might change the data drives involved.

    In your hypothetical situation, I can see it becoming an overly complex issue to ensure cache (buffer) coherency. It becomes beyond complex when dealing with an in-flight parity update for adding a drive with existing data while allowing updates of the existing drives, because those secondary updates will force parity updates too. What happens when the in-flight add of the new data drive and an existing data drive both want to update the same sector of the parity drive? This forces all READS/WRITES of the parity drive to be serialized in order to properly account for the simultaneous operations while maintaining data integrity. You need to either share the same buffer/cache of the parity drive data or not buffer/cache it at all. Now maybe this situation is already accounted for in the code, but I have my doubts, since currently adding a new drive is treated as an atomic operation that prevents the array from doing anything else.
  20. The New Drobo FS

    With one of these [ 4 x 2.5" HDD in 1 x 5.25" bay ], the Lian Li PC-Q08 could even house 10 disks. Granted, 4 of them would have to be laptop-style drives. Could be very useful, if overkill, for using SSDs in a RAID/safe config as the cache and main OS drives.
  21. The moment you start changing even 1 bit on the Parity drive, NONE of the disks are protected until the operation finishes. If the process crashes in the middle, your parity data is worthless. The only possible way it would be "safer" is by being faster, since it only has to read from 2 of your N drives (parity and the new data drive) instead of parity plus all data drives.
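    The two-drive read works because single parity is a per-sector XOR, so folding in one new drive is (a sketch, assuming plain XOR parity):

        P_new(s) = P_old(s) XOR D_new(s)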
  22. Two of the items concern me: Raw_Read_Error_Rate and Seek_Error_Rate. Your post-clear values look obscenely high. Perhaps Joe L knows better what to expect from Seagate drives.

    Before:

        1 Raw_Read_Error_Rate  0x000f  100  100  006  Pre-fail  Always  -  30610
        7 Seek_Error_Rate      0x000f  100  253  030  Pre-fail  Always  -  18

    After:

        1 Raw_Read_Error_Rate  0x000f  119  100  006  Pre-fail  Always  -  207738410
        7 Seek_Error_Rate      0x000f  100  253  030  Pre-fail  Always  -  469544

    According to Wikipedia [ http://en.wikipedia.org/wiki/S.M.A.R.T. ]:

    Read Error Rate: Indicates the rate of hardware read errors that occurred when reading data from a disk surface. The raw value has a different structure for different vendors and is often not meaningful as a decimal number.

    Seek Error Rate: Rate of seek errors of the magnetic heads. If there is a partial failure in the mechanical positioning system, then seek errors will arise. Such a failure may be due to numerous factors, such as damage to a servo, or thermal widening of the hard disk. The raw value has a different structure for different vendors and is often not meaningful as a decimal number.
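    For reference, those attribute lines come from smartmontools; you can pull just the two of them with something like (device name hypothetical):

        smartctl -a /dev/sdb | grep -E 'Raw_Read_Error_Rate|Seek_Error_Rate'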
  23. If you have a physical hard drive you can dedicate, you could try installing unRAID onto a full Slackware distro. With that, you won't need to reinstall packages upon reboots, as everything is persisted.
  24. unraid effect???

    Nope. Those 'effects' occurred long before my interaction with unRAID.
  25. Here's some more stats: I am tired. I am out of OJ. I like poptarts.