nick5429

Everything posted by nick5429

  1. Yep, these will work great. I bought a similar one earlier this week after extensive research.

     The PSUs that come with yours are insanely loud, and they're not power-efficiency rated; you'll be better off with a Gold or Platinum rated PSU. And for home use you really only need 1 quieter PSU (instead of 2 insanely loud ones), since I doubt many of us care about redundant PSUs.

     The one you linked (with the TQ backplane) may require up to 3 controller cards (or some other way to get 24 individual SATA ports), but it will have the highest overall performance, since each drive gets a fully dedicated SATA link. Those controller cards are pretty power hungry too.

     IMO, the better option for bulk storage has the SAS2 backplane (server model SC846E16, or a listing that specifies a SAS2 backplane) with a port expander. The SAS2 option only requires 1 controller card; if it doesn't come with one, the best option for a controller is an M1015 (or similar) flashed to IT mode (<$100). With the SAS2 expander you'll get an aggregate 2400MB/sec throughput to your drives -- that's 100MB/sec simultaneous reads on all 24 drives for parity checks, or faster per drive on fewer drives (rough arithmetic below). I think most people's parity checks are limited to around that speed anyway, and the limit won't apply to e.g. standard single-file reads, since bandwidth is dynamically shared. Another advantage: one single cable going from the controller to the backplane, where the TQ model has 24 individual cables.

     IMO, this with a quieter Gold-rated PSU, or this + this MUCH quieter Platinum-rated PSU, are better than the linked option. They'll require sourcing a few additional components (CPU + RAM) yourself, though. Xeon L5630 (2 for $40) or L5640 (2 for <$100) are good CPU options with slightly lower power consumption, and 6x4GB = 24GB of ECC DDR3 registered RAM runs $50 on eBay. The options I linked also have integrated IPMI on the motherboards.
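     The throughput numbers work out like this (a back-of-the-envelope sketch; it assumes SAS2's 6Gb/s per lane with 8b/10b encoding, i.e. roughly 600MB/sec usable per lane on the x4 uplink):

         # SAS2 x4 wide link: 4 lanes x 6Gb/s, 8b/10b encoding -> ~600MB/s per lane
         echo $(( 4 * 600 ))       # 2400 MB/s aggregate to the backplane
         echo $(( 4 * 600 / 24 ))  # 100 MB/s per drive with all 24 reading at once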
  2. I have an IBM BR10i RAID card/controller for sale. I've already done the work of flashing it to LSI IT mode (the 'good firmware' mod for unRAID). Includes 1 SFF-8087 to 4x SATA breakout cable to connect 4 SATA drives, and a full-height mounting bracket (which is what 99% of users will need). Just grab another of these cables (~$8 on Amazon) to connect the other 4 ports.

     Works great with unRAID; I've been using it flawlessly for the past 3+ years. I'm only selling it because I'm upgrading to a fully hot-swappable 24-bay case. This is functionally equivalent to a flashed IBM M1015, except that it only provides full support for drives up to 2TB; on a 3TB+ drive, it will only access 2.2TB. I just used this card for my older, smaller drives and used my motherboard SATA ports for the bigger ones. Worked great for me, as I figure I'll always have a smattering of old drives around!

     Asking $50 + shipping for the card + breakout cable. PM me if interested.
  3. Hi! I wrote that plugin. I stumbled across this thread while trying to see if anyone had made a v6 encryption plugin. I'm still actively using the v5 plugin linked above, and it Works For Me. I'd be very interested to hear from anyone else who uses/used/tried to use it; the plugin thread is just an echo chamber of me talking to myself!
  4. Along with the 'safe mode' checkbox, I'd very strongly advocate for a mode where any automatically-initiated parity check is NONCORRECTING. Additionally, a full 100% read-only mode which does not allow writes from any source -- parity check, reiserfs log/transaction updates/replays, etc. There are cases where this combination could be used along with a 'force trust' procedure to salvage data that would otherwise be lost/destroyed (e.g., drive failure during upgrade of another drive).
  5. I think I figured out a somewhat roundabout way to accomplish this using the FIBMAP ioctl; example code is given here: http://lists.debian.org/debian-mips/2002/04/msg00059.html

     This will generate a list of FS blocks for a given filename. Do this for every file on the drive and save the results to a file, then search that file for the block you're looking for (rough sketch below). Hopefully the 'blocks' returned by that ioctl correspond directly to the number reported when there's a parity mismatch. Anyone know if parity mismatches are reported in units of 4kB or 512B 'blocks'? IIRC, it's 512B blocks.

     Generating my blocklist now; I'll post more detailed/streamlined instructions if this ends up working for me.
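     Here's roughly what I mean, without having to compile the example code (a sketch; hdparm's --fibmap option wraps the same kernel interface and prints each file's on-disk extents as LBA ranges -- /mnt/disk4 is just a placeholder path):

         # Record the on-disk extents of every file on the disk (needs hdparm 9.x+, run as root)
         find /mnt/disk4 -type f -print0 | while IFS= read -r -d '' f; do
             echo "== $f"
             hdparm --fibmap "$f"
         done > /boot/blockmap_disk4.txt

         # Then search the saved map for the sector a parity mismatch reported;
         # the sector has to fall inside one of a file's begin/end LBA ranges
         less /boot/blockmap_disk4.txt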
  6. I found instructions on 'trust the array' for v5, which boil down to:
     * Use the "new config" utility
     * Set up the drives
     * Run: mdcmd set invalidslot 99
     * Click "Start" on the array management page without loading/reloading the page
     (aside: there was also a 'parity is correct' checkbox. I checked that too)

     I was on unRAID v5-RC8a, so I had to upgrade to v5-RC11 before this would work. So my array is back up/"trusted", and I'm running the stability parity checks mentioned in step #1. I'd still be very interested in mapping block numbers to files if anyone has ideas how that can be accomplished.
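     Condensed to the console side, the whole thing looks like this (a sketch; the New Config and drive-assignment steps still happen in the web GUI first):

         # after running "new config" and reassigning all drives in the GUI:
         mdcmd set invalidslot 99   # point the md driver at a nonexistent slot,
                                    # so no disk is marked invalid and parity is trusted
         # then tick 'parity is correct' and click Start WITHOUT reloading the page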
  7. I got a notification last night that two of my array drives were missing. I looked and noticed they're both on the same 2-port PCIe controller, so I figured maybe the controller had wiggled loose or something. I shut down, re-seated the controller, and booted back up. All the drives showed up again, and of course unRAID launched into a correcting parity check (argh! If only there were something like my feature request for a safe-read-only-across-reboots option... unRAID v5's "maintenance mode" does not appear to persist across reboots in a manner that would have satisfied this requirement). I stopped the correcting check as soon as I could, though it had already "fixed" 30+ "incorrect" parity locations. I started a new non-correcting check, but both drives went missing about an hour into it.

     Diagnosis at this point: likely dead controller, and parity is probably corrupted. I shut down, pulled the bad controller, and replaced it with another from the closet. On reboot, one of the drives on the controller was red-balled (sdb/disk5, a data drive); the other (the parity drive, FWIW) was green.

     Status at this point: a red-balled but likely-functioning drive with mostly-good data (I can mount it manually); "green" parity which is probably wrong due to the automatic correcting parity check; and a new drive controller that still isn't 100% trusted. I have backups of everything "critical", but I'd rather not lose the rest of the data on the drive.

     What I (think I) want to do:
     1) Run a couple of non-correcting parity checks to make sure the new controller is solid
     1a) This will generate a list of mismatching blocks
     2) Force disk5 to be trusted
     3) Rebuild parity
     4) If possible, use the blocks from (1a) to map block->filename on disk5. I know this is possible on ext*; last time I researched, I couldn't find a similar tool for reiser. Manually check those files for integrity and/or restore them from backups.

     Thoughts? Advice? How (in particular) can I accomplish step 2?
  8. You might have better luck starting a separate thread, since this one is marked as solved. Post the results of these two commands in your other thread:

         ps -e -orss=,args= | sort -b -k1,1n | pr -TW$COLUMNS
         free -m

     Also, a list of what plugins you're running.
  9. The 32-bit Linux one:

         wget ftp://mersenne.org/gimps/p95v279.linux32.tar.gz
         tar -xzvf p95v279.linux32.tar.gz
         ./mprime
  10. Can you describe what sort of files you were transferring? Small files (e.g., images, text files), or big files of hundreds of MB or larger (movies, media, etc.)? Were you doing anything else on the server at the same time as the test transfers?
  11. It "should" still work, just a bit slower when stored on an unRAID share instead of a standalone disk. However, I tried moving the .vdi to a standalone disk and it works fine, whereas the .vdi on the unRAID disk share gave I/O errors. Very odd. I don't really need fantastic disk I/O performance on my virtual guests, and I'd much rather have the guests' backing store on my unRAID drives than on a non-array drive.
  12. So I was able to get VirtualBox and phpVirtualBox running on my unRAID 5.0-rc8a setup. I've been trying to install Ubuntu in a virtual machine, with the virtual disk file stored on my unRAID array under the disk4 mount. Whenever the installation gets to the point where it starts creating partitions: a) my unRAID server load average spikes way up -- 8 to 10 or higher -- and b) the guest OS complains about various disk errors ("illegal qc_active transition" was one of them). Then I can't stop the virtual machine from within the phpVirtualBox GUI, and "kill -9 <pid>" as root doesn't even work to kill the process.

     I tried both fixed and dynamic disk modes. The .vdi disk file never actually grows beyond 8-10MB. Other than for running VirtualBox, reading and writing to my unRAID server disks seems perfectly healthy. My hardware should be plenty to run VirtualBox (AMD Phenom II, all modern 1.5-2TB SATA HDDs). Has anyone else encountered anything similar?
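     (For what it's worth, a process that ignores kill -9 is usually stuck in uninterruptible I/O sleep -- state "D" -- which would fit the disk errors. A quick way to check, sketched with the same <pid> placeholder:)

         # "D" in the STAT column = uninterruptible sleep (blocked in the kernel on
         # I/O), which is why even SIGKILL has no effect until the I/O completes
         ps -o pid,stat,wchan:30,args -p <pid>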
  13. And it doesn't show up when I view the config file either. How does that config file get created? I could manually hack it to work for me locally, but I don't see any references to it in the .plg
  14. I'm pretty sure I'm just using the .plg extracted from the 1.0.5 .zip file.

         # ls /boot/config/plugins/simple*
         /boot/config/plugins/simpleFeatures.core.webGUI-1.0.5-noarch-1.plg*
         /boot/config/plugins/simpleFeatures.web.server-1.0.5-noarch-1.plg*

         /boot/config/plugins/simpleFeatures:
         lighttpd.cfg*
         php.ini*
         simpleFeatures.cfg*
         simpleFeatures.core.webGUI-1.0.5-i486-1.tgz*
         simpleFeatures.core.webGUI.png*
         simpleFeatures.web.server-1.0.5-i486-1.tgz*
         simpleFeatures.web.server.png*

     Nothing simpleFeatures-related in my /boot/extra directory:

         # ls /boot/extra/
         PlexMediaServer-0.9.6.9.241-da3068c-unRAID.txz*
  15. The webserver wouldn't start for me from the GUI, so I went hunting. Something's wonky with the lighttpd config file:

         # /usr/sbin/lighttpd -f /etc/lighttpd.conf
         2012-10-05 12:46:57: (configfile.c.901) opening configfile /etc//boot/config/plugins/simpleFeatures/lighttpd.cfg failed: No such file or directory
         2012-10-05 12:46:57: (configfile.c.855) source: /etc/lighttpd.conf line: 338 pos: 1 parser failed somehow near here: (EOL)

     I'm running rc8a and just freshly installed SimpleFeatures. Tried a reboot to no avail. Changing the line in lighttpd.conf to:

         include "../boot/config/plugins/simpleFeatures/lighttpd.cfg"

     makes it happy. Not sure why it's trying to prepend "/etc" onto an absolute path...
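     (If anyone wants to script the workaround, something like this should do it -- a sketch; it assumes the include line is exactly the absolute-path form implied by the error, and you should back up /etc/lighttpd.conf first:)

         # rewrite the absolute include path to the relative form that works
         sed -i 's|include "/boot/config/plugins/simpleFeatures/lighttpd.cfg"|include "../boot/config/plugins/simpleFeatures/lighttpd.cfg"|' /etc/lighttpd.conf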
  16. For the benefit of anyone subscribed to this thread, I've written an EncFs plugin for unRAID 5.0. See here: http://lime-technology.com/forum/index.php?topic=22908.0
  17. > I don't think so. It's still a good test. But now we have data to suggest that it's not 100% valid in all cases. So we'll need to remember that a flaky system that has passed memtest should be checked with Prime95.

     Agreed. I think, as a grand overgeneralization:
     * Memtest is best at finding actual bad locations in the RAM, and it does a much more thorough job of testing all memory locations with a variety of access patterns.
     * Prime95 is better at testing stability under load. It can't know which physical memory locations it's testing; it just does a lot of memory transactions under heavy CPU and memory load. This is more likely to find problems with things like overclocked or overheating RAM, or RAM that isn't getting enough voltage.
  18. Bumped up the RAM voltage. After many hours of testing/confirming, the system is totally prime95 stable and passed 3 rounds of parity checks with no mismatches. I think I can finally call this closed, thanks for everyone's help / tips / suggestions!
  19. I had contacted the European seller of these and got referred to ipcdirect as well. In an email, they stated: But their shipping calculator is a bit overzealous and wants another $15 to ship it. Definitely works out cheaper if you want more than one, though.
  20. ::jawdrop:: I almost wouldn't have believed you if I hadn't just observed the same behavior: memtest+ stable for 36 hours, prime95 blend consistently fails within 15 minutes. prime95 'small' passes overnight (likely indicating, per the description in prime95, a 'problem with the memory or memory controller'). I'd always considered memtest the gold standard in memory subsystem stability testing. I guess not. My RAM was even running underclocked -- autodetected at 1333 vs the rated 1600 @ 1.5V. CPU and everything else running at stock/autodetected speeds. I increased the RAM voltage a bit (1.6ish) and left it at 1333; so far prime95 blend has been running for ~3 hours without errors.
  21. Putting something like that in the 'go' script would satisfy the 'read only' functionality some people have requested, whereby drives are set read-only after filling them with media, but it wouldn't truly be sufficient for the use case I outlined. I poked in the emhttpd binary, and it looks like (in 4.7) the mount command string is "mount -t reiserfs -o noacl,nouser_xattr,noatime,nodiratime %s %s" -- it doesn't look like it's parametrized to be able to specify ',ro' when mounting the device to prevent all writes. Perhaps I should post this (or have a mod move it?) into Feature Requests.
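     (For reference, the go-script approach people have suggested looks roughly like this -- a sketch; it remounts each data disk read-only after the array starts, which blocks writes through the filesystem but can't stop the md driver itself from updating parity:)

         # remount every mounted data disk read-only (run after array start)
         for d in /mnt/disk*; do
             mount -o remount,ro "$d"
         done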
  22. Is there a way to force the array into a read-only mode (persistent across reboots)? I've noticed, for instance, that each disk in my array tends to get ~5 writes after the array starts on boot, even if I don't intentionally do any reads/writes (per the stats on the main admin page). I've searched a bit, but everything that comes up is related to arrays/disks being put into read-only mode because of a disk error.

     As just one of many potential use cases, this would be useful during a parity drive upgrade: what if another disk fails when the parity upgrade is only half-done on the new drive, and there have been writes to the array in the meantime? If there were a way to force read-only mode before starting the upgrade, you could use the 'trust my parity drive' procedure to swap the (old, unmodified) parity drive back into the array and recover the failed drive.
  23. While these are valid points which may potentially explain a problem like what I'm seeing...
     * If the RAM were bad, it would show up in a memtest. Any memory error, ever, is indicative of a module that should be thrown out / RMA'd.
     * This would show up in the syslog / SMART stats and, again, indicate a part that should be replaced.
     * I highly doubt it, and the test I describe is quite similar to the stress from a standard parity check. If this were true, all users of unRAID would be seeing random parity errors when doing subsequent parity checks. This issue can easily cause silent data corruption, which -- even on consumer hardware -- is unacceptable and unexpected.
     * One of the Samsung drives is the drive I suspect of failures, but as far as I can tell, the firmware issue affects only the HD155UI and HD204UI drives; mine are HD154UI.

     Anyway, an update:
     * A 12-hour memtest shows no errors (still running).
     * The only other SATA controller I have lying around is sil3132-based, and those are apparently known to be flaky. I've got two of the Syba cards from Monoprice (which seem to be one of the few 'recommended' brands, and I think that's why I bought them), but switching the suspect drive onto either of them results in a drastically increased number of parity mismatches. I'm curious to do a little more testing and see whether switching a suspected-good drive onto the Syba cards has a similar effect (which will lead to me promptly discarding the cards...)
  24. Both important points: a) I was trying to compare areas of the disk that aren't actually protected for my testing, and b) using the partition device files fixes the sector 63/64 alignment issue. My little program works nicely now! Thanks Joe. PM me if you'd like a copy; I'm disinclined to post it here, as the potential for misuse is high, especially if/once it's tweaked to update parity.

     I do have one real parity error on my drive, from running a correcting parity update after an unclean shutdown (knowing that I'd likely get 1-2 real parity errors introduced by the flaky drive/controller/whatever I have going on, but that was better than the ~10-15 from the unclean shutdown). syslog reports the parity mismatch at 2740517552; its offset per my program is 2740517555 from the beginning of the sdX1 partitions (using "1 block" = "512 bytes"). One single bit is flipped. It's pretty cool to be able to see/verify that (sketch below) ...and I might tweak the program to fix the error just for that block on my parity disk if my 'switch the controllers' test doesn't work this evening. Otherwise, there'll be no way to correctly rebuild the array when I swap the disk out (thus my feature request, as most users shouldn't have to / wouldn't be able to do something like this if presented with a problem like mine).
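     (For anyone curious, the single-block inspection can be approximated straight from the shell -- a sketch; it assumes the 512-byte offsets above count from the start of each sdX1 partition, and /dev/sdb1 and /dev/sdc1 are stand-in device names. A real check would XOR this block across ALL data disks and compare the result to parity:)

         # dump the 512-byte block at the mismatching offset from two drives'
         # first partitions as hex, then compare the dumps byte by byte
         dd if=/dev/sdb1 bs=512 skip=2740517555 count=1 2>/dev/null | od -Ad -tx1 > /tmp/blk_b
         dd if=/dev/sdc1 bs=512 skip=2740517555 count=1 2>/dev/null | od -Ad -tx1 > /tmp/blk_c
         diff /tmp/blk_b /tmp/blk_c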
  25. Wow, they seem to be the only retailer who sells this product? I contacted them for a shipping quote to the US, though I doubt that's a viable option for me. Thanks for the info though!