Zithras

Members
  • Posts

    58
  • Joined

  • Last visited

Everything posted by Zithras

  1. Hmm, I'm guessing from the lack of replies that the ability to graphically view SMART attribute history was removed, or never added, in 6.3.5. This is sad. On the plus side, everything else seems to be working fine. Honestly, the only new feature of 6.0 I'll probably use is the larger disk size support. May your disks never fail, Zithras
  2. I *finally* got around to upgrading my old 5.0 unRAID servers (with unMENU and I think BubbaRAID?) to 6.3.5. I think I managed to replicate all of the old functionality I wanted (we'll see if I configured rsync right tomorrow...expect another post if I didn't), except for one thing... The old UI I had (probably unMENU) showed a graph of drive SMART attribute changes over time. This was very useful, especially for monitoring the rate of uncorrectable sector increases, or anything that experienced a sudden spike requiring drive troubleshooting/replacement. I cannot find any way to view SMART scan history (i.e. attribute changes over time) in the new version, especially not the previous nicely-readable graphs. Is this feature available within unRAID 6.3.5, or in any easily installable plugin? (I noticed unMENU is long deprecated, and isn't even a plugin anymore, so I didn't install it.) Thank you for your help! Zithras Edit: I saw 'show SMART self-test history:' under the disk SMART status menu, but it says that no history is available; if this information is kept on the disk itself, as the feature implies, it should still show my previous SMART history? So I don't think this is where my missing feature's hidden...
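[Editor's note: in the absence of a built-in history view, attribute logging can be improvised with a cron job. This is only a sketch: it assumes smartctl (from smartmontools) is available, and the /boot/smarthistory location and the two attributes tracked are invented for illustration.]

```shell
#!/bin/bash
# Hypothetical SMART attribute logger (not a built-in unRAID feature).
# Appends a dated snapshot of selected raw attribute values to one CSV
# per disk, so that changes over time can be graphed later.
LOGDIR=/boot/smarthistory          # assumed location on the flash drive
mkdir -p "$LOGDIR"
for dev in /dev/sd?; do
  # Use the drive serial as a stable per-disk filename
  serial=$(smartctl -i "$dev" | awk -F: '/Serial Number/ {gsub(/ /,"",$2); print $2}')
  [ -z "$serial" ] && continue
  # In 'smartctl -A' output, $2 is the attribute name and the last
  # field is the raw value (e.g. the reallocated sector count).
  smartctl -A "$dev" | awk -v ts="$(date +%F)" -v out="$LOGDIR/$serial.csv" '
    $2 == "Reallocated_Sector_Ct" || $2 == "Current_Pending_Sector" {
      print ts "," $2 "," $NF >> out
    }'
done
```

Pointing a spreadsheet or gnuplot at the resulting CSVs would recover the attribute-over-time graphs the post describes.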
  3. There are several free Windows programs that should read your ReiserFS partitions just fine, so you can get data off 'em. I've never tried this, but they claim to work. Just google 'reiserfs windows reader' and pick your favorite.
  4. Good to know! I think I'll stick with 5gb free per drive, rather than 5%, and try not to copy over any large single files for the last bit.
  5. Awhile ago, when I first set up my unRAID server, the recommendation was to keep 5% of each disk as free space, with some users experiencing severe performance degradation if disk free space dropped too low. For the old 250GB disks, and the like, this made some sense. With new hard drive capacities as high as they are, is this still a valid rule? For a 4TB hard drive, that would be leaving 200GB on the disk sitting there doing nothing - surely you don't need this much to prevent file fragmentation? How much space do you leave on a disk before it's time to get a new disk? 5GB? 20? I just realized I filled up my drives to have about 2.5GB free per drive...I haven't noticed any slowdowns, but this is probably too little. Now that I have a couple upgraded disks added to the array, I'm thinking of increasing it to 5GB per disk free, or maybe 10. 5% seems like overkill though... Thoughts?
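[Editor's note: a quick way to eyeball per-disk free space from the console, assuming the standard unRAID /mnt/diskN mount points, is a one-liner like this.]

```shell
# Print each data disk's mount point and free space in whole gigabytes.
# /mnt/disk* is the standard unRAID layout; adjust if yours differs.
df -BG /mnt/disk* | awk 'NR>1 {print $6 ": " $4 " free"}'
```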
  6. Yup all fixed now. I hate it when I spend hours trying to figure out a problem, eventually resort to asking for help, then realize minutes after I posted that it was something obvious >.< I think the script that failed dated back to 2009 or so
  7. And, of course, it's the obvious solution. I downloaded the new preclear before installing 5.0, but checking the last edit date on my flash drives, I apparently saw it already there and forgot to copy over the new version while upgrading. Let's see if this works...
  8. I'm trying to preclear 4 3TB drives, 2 on each of 2 near-identical unRAID setups. All 4 pass SMART tests, but give the same errors. There are no errors in the syslog from after the time the preclear script is run. The closest thing to an error in the log is an 'unknown partition table' message (after the
Nov 6 22:15:45 Server2 in.telnetd[8356]: connect from 192.168.0.3 (192.168.0.3) (Routine)
Nov 6 22:15:46 Server2 in.telnetd[8396]: connect from 192.168.0.3 (192.168.0.3) (Routine)
lines). The PuTTY screen gives:
===========================================================================
= unRAID server Pre-Clear disk /dev/sdl
= cycle 1 of 1
= Disk Pre-Clear-Read completed                                  DONE
= Step 1 of 10 - Copying zeros to first 2048k bytes              DONE
= Step 2 of 10 - Copying zeros to remainder of disk to clear it  DONE
= Step 3 of 10 - Disk is now cleared from MBR onward.            DONE
= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4        DONE
= Step 5 of 10 - Clearing MBR code area                          DONE
= Step 6 of 10 - Setting MBR signature bytes                     DONE
= Step 7 of 10 - Setting partition 1 to precleared state         DONE
= Step 8 of 10 - Notifying kernel we changed the partitioning    DONE
= Step 9 of 10 - Creating the /dev/disk/by* entries              DONE
= Step 10 of 10 - Testing if the clear has been successful.      DONE
= Elapsed Time:  10:33:44
============================================================================
==
== SORRY: Disk /dev/sdl MBR could NOT be precleared
==
============================================================================
0000000 0000 0000 0000 0000 0000 0000 0000 0000
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.000356474 s, 1.4 MB/s
0000700 0000 0000 0000 003f 0000 0a37 15d5 0000
0000720 0000 0000 0000 0000 0000 0000 0000 0000
*
0000760 0000 0000 0000 0000 0000 0000 0000 5c5c
0001000
root@Server2:/boot/custom/etc/init.d#
All 4 errors are variations on that. What's going wrong? The drives are new, and according to the syslog seem to be passing SMART tests.
I thought it might be a memory error, since cache_dirs is running, but I have plenty of RAM, and the syslog shows no memory errors. Perhaps because I didn't include the -A flag when running preclear? I thought I didn't need to do this anymore? I can just try and use the drives without preclearing, but would really like to preclear first and figure out what's going on if possible... Syslog from one of the computers attached. Other syslog is more of the same. Thanks for the help, Zithras syslog-2013-11-07.txt
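[Editor's note: when troubleshooting a failure like this, the MBR can also be inspected by hand. This is a read-only sketch; /dev/sdX is a placeholder for the drive in question.]

```shell
# Read the first 512-byte sector (the MBR) and show it as hex words.
# This only reads from the device; it writes nothing.
# On a freshly precleared disk the dump should be nearly all zeros,
# with the preclear signature bytes near the end of the sector.
dd if=/dev/sdX bs=512 count=1 2>/dev/null | od -x
```

Comparing the output against the 0000700/0000760 lines preclear printed above would show exactly which bytes failed the check.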
  9. For #2, that is EXACTLY what I am looking for. Downloaded, set cron to run daily, fixed. For #1, my flash drive was actually working fine, and nothing was set to read-only that shouldn't have been. I eventually figured out the problem though - I have the old 'startup/custom script setup' structure on my flash drive, with custom/etc/init.d and rc.d files on the drive. UnMenu apparently looks for these and adds the 'reinstall on reboot' line to a custom script in rc.d if the directory exists, and my ServerStartup.sh was only configured to run those files which I want, not everything I put into rc.d. After I copy-pasted the autoinstall line from /custom/etc/rc.d/S10-install_custom_packages into my ServerStartup.sh, unMenu packages started reinstalling just fine. An obvious solution in retrospect, but I had a bit of a hard time figuring out how unMenu handles autoinstalls - perhaps something to be added to the install documentation (or is it there and I just missed it)?
  10. Trying to upgrade to unRAID 5.0, I've hit 3 major issues so far. The first, flash drive failure, I'm currently dealing with (ordered a new one off Newegg, and will email the new ID for a key request when it comes in. Luckily, since I was trying to upgrade, I have a backup from about 10 minutes before failure!). Issues 2 and 3, however, have me needing some help please. 1) unMenu is installed, and boots just fine. I can download and install packages through the unMenu package manager, but when I tell them to reinstall on reboot, they don't. I don't have to redownload them, but I'm having to reinstall all plugins each reboot! Is there a manual fix for this? What am I doing wrong? Any ideas? I ran the permissions utility after installing 5.0, and didn't run into any troubles... The unMenu directory is in /unmenu (which becomes /root/unmenu when the server is running), and seems to start up just fine. My go script, however, doesn't seem to be getting edited (/config/go), which is (I think?) what unMenu is supposed to do? 2) BubbaRaid had a graph-based SMART history feature, which let you easily see failing disks by a sharply increasing bad sector count over time. Is there any plugin in 5.0 that will let you see this? unMenu seems to let you see the current SMART test, but I don't see any way to easily view changes in disk performance over time. There's a Smart History tab, but it gives a 'Sorry: cannot run smarthistory. The /boot/smarthistory directory does not exist.' error when I try and use it? How do I set up this feature? Do I simply need to create the directory on the flash drive? Is there a plugin I need? Thanks for the help! Zithras
  11. Well, I'm guessing the lack of a response (specifically, the lack of a response along the lines of 'oh no! That hardware will blow up your house, cause worldwide devastation, and begin the End Times!') means I shouldn't run into any obvious hardware issues. I'll try and upgrade the servers over the next few weeks and see how it goes. Expect more threads as new, unforeseen difficulties emerge (which they will, I'm sure).
  12. Way back when I first bought unRAID, I installed 4.4.2 on two servers. I haven't posted much at all since, but still check the forums regularly. So far, happily, unRAID has worked just fine for the past few years, and I've been perfectly happy to let it carry on working just fine. However, for some time now, I've rather desperately wanted to start upgrading to >2TB drives, especially as the price per GB for these drives has become more and more competitive. I really didn't, however, want to install a release the community seemed divided on whether to support/write addons for, or regard as a beta, so have been checking the forum daily for months now. But, the wait is over! 5.0 is OUT! FINALLY! Oh happy day. I hope. I suspect there's a lot of us who have been waiting silently in the wings, really wanting to install a product with large drive support that wasn't a beta. So, looking over the release notes and suchlike, it seems I should do an in-place upgrade to 4.7, then back up my configuration folder and do a fresh install of 5.0, copying over my config folder. Installing 5.0 seems to be easy enough; upgrading to 4.7 may be trickier, since it's been so long since most of the people on the forums have had to worry about that... Before doing this, I wanted to ask on the forums if there's anything I should be aware of/watch out for before trying to figure out how to do this. In particular, I remember some discussion about certain hardware being incompatible with later versions of unRAID, although I don't remember the details. I'll post my server hardware down at the bottom of the thread, in case anyone recognizes anything horrible. As far as addons go, there are only a few that I really want to replicate the functionality of: 1) I use crontab and rsync to back up server 1 to server 2 every month, and schedule a bimonthly parity check. I see no reason why cron in 5.0 won't work just as well. 2) apcupsd shuts down the system after 5 minutes on APC power.
Again, I think this addon is mirrored in 5.0. 3) I use unMENU as an easy interface that lets me see how much used and free space I have on each drive, and total array free space. It's a much easier to read menu than the default unRAID screen, which doesn't show the total array free space. Is unMENU available in 5.0, or has the main UI been updated at all? 4) I use BubbaRAID to do a quick check for errors every few days (giant red messages on the bottom), and, most importantly, to spin up and check the SMART status of all the drives once or twice a month, with nice easy-to-read graphs and all, most importantly showing me any new bad sectors and the increase of bad sectors on the drives through time. I'd really, really love a way to do this without installing an entire custom unRAID package. Is there anything else out there that gives both numerical and graphical SMART test results/history for all drives with one click? 5) I don't have it now, but I'd love a way to access the entire array online, through an FTP server, web interface, whatever, that (given a sufficiently complex password or keyfile) is secure enough that I can keep it running without worrying about it much. What's recommended for this? Anything? When I installed 4.4.2 there really wasn't a solution for this, unless you wanted to try and run unRAID on top of a full Linux distribution.
Server hardware (both are the same except for the CPU):
CPU: Intel Core 2 Quad Q6600 G0 / Intel Pentium E5200
Power: PC Power & Cooling S75CF 750W
RAM: G.SKILL 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 1066 (PC2 8500)
SATA cards: Adaptec 2240900-R PCI Express 4-lane 2.5 Gb/s SATA 1430SA x2, SYBA SD-SA2PEX-2IR PCIe SATA II Controller Card (Sil3132 chipset) x3
Motherboard: GIGABYTE GA-EP45-UD3P LGA 775 Intel P45 ATX Intel Motherboard
Video card: CHAINTECH GSP5200T2 GeForce FX 5200 128MB DDR PCI Video Card (servers usually run headless, unless I'm troubleshooting something)
Other: Supermicro 5-in-3 SE-M35T-1B drive cages x4, fan controller, cables, case, APC, etc.
Thanks for the help! Zithras
TLDR version: Upgrading from 4.4.2, anything to watch out for? Any hardware upgrades required? (I hope not) Need a replacement for BubbaRAID too, or at least a good way to view SMART status.
  13. If you do decide on unRAID, just as a comparison, not including hard drive costs, I set up a 20-drive non-rackmount unRAID server (x2) for about $1000 each. Figure in about 5x 1.5 TB drives for $500-600, and you're still well underbudget. The most expensive part of the build were the 5-in-3 SATA hard drive cages. Zithras (edit: I've heard the Norco cases have major airflow issues - make sure and research the fixes for that if you go that route)
  14. Also, make SURE you run preclear on the replacement drive Seagate sends you... I've found anywhere from 1/4 to 1/2 of Seagate drives are bad on receipt - especially the 1.5 TB ones. Just keep on sending them back until you get a good one. Zithras
  15. If you copy files directly to the drive share itself (i.e. drive1 folder, drive2 folder, etc) you can fill the drives up as much as you want. (until they're 100% full, of course) Zithras
  16. Wait till a PC Power and Cooling 500 W is on sale - they are single-rail, good quality, run quite quietly, and occasionally are substantially discounted on Newegg. Zithras update: here's one for $70 + free shipping - a bit expensive, but it's a good quality supply http://www.newegg.com/Product/Product.aspx?Item=N82E16817703011
  17. I just took mine out and threw them in a desktop to run the Feature Tool... Like Drealit said, they are already set to 3.0 Gbps... (I wish I'd read his post BEFORE I started the process.) I haven't noticed that they run any louder or less efficiently than the 1.5 TB Seagates... I don't really see any need to enable the APM features... Zithras
  18. I have two of them running as parity drives, but didn't know about the SATA2 disable...I'll have to try the fix. Zithras
  19. I just wanted to warn everyone not to try and run unRAID off a Supertalent Pico series flash drive. When you use them, they get extremely warm, and eventually fail. I installed unRAID Pro on two PicoC drives in January. One failed for me after about 4 months, the other just failed (after about 11 months). They are now replaced by Lexar Firefly drives, which seem to be working just fine. Hope this helps, Zithras
  20. Makes sense - I've taken everything except the folder pointer (and the tasks BubbaRAID put there) out of the go script. You're right, it looks much neater that way. Plus, if I update unRAID and it overwrites the go script, it's much easier to just put the one line back in than rewrite all the other commands.
  21. I needed to verify that my files were copying over to my unRAID server okay, and that everything was set up. So, I used hkSFV in Windows to calculate the MD5 checksums of the first batch of a few hundred GB or so of files, and transferred them and the hash file (with names, file locations, and MD5 hashes) over to the server. (By the way, hkSFV is a wonderful and easy-to-use Windows program with a good GUI for calculating file hashes - just make sure and read all the options before you run it.) At this point, I needed a command-line tool that would run in unRAID and process the hash file to make sure my files had been transferred over correctly. Surprisingly, finding one was rather difficult... Finally, I managed to find this wonderful program called md5deep, here: http://md5deep.sourceforge.net/ Even better, it comes as a precompiled Slackware package that will run in unRAID (running BubbaRAID and unMENU, if it makes any difference) here: http://www.linuxpackages.net/pkg_details.php?id=9470 (There is a newer version available, but this is the one I used, so I know it works.) It's able to process a list of MD5 hashes and file locations, and either tell you what files are present that are not in the list, or the files that matched the list - quite useful. Anyway, I'm sure most of the regulars on the forums know all about this, but just in case someone else needs something similar, I thought I'd share and hopefully save someone an hour or three of searching (feel free to message me if you need help setting it up). Hope this helps someone, Zithras (Edit: I think someone else was looking for something similar earlier, but I don't think they posted if they actually found a solution - off to dinner now)
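[Editor's note: if md5deep isn't handy, coreutils' md5sum can do the same batch check, assuming the hash file is in the standard "hash  filename" format. The destination directory below is made up for illustration.]

```shell
# Verify every file listed in hashes.md5 against its recorded checksum.
# md5sum -c prints "name: OK" per match and "name: FAILED" per mismatch,
# and exits non-zero if anything failed.
cd /mnt/disk1/incoming   # assumed destination of the transferred batch
md5sum -c hashes.md5
```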
  22. Actually, I ended up basically doing this - but instead of putting in separate scripts, I just put all my personal startup items into a single rc.local script and called that from go (I found it easier that way). If I ever need to add more custom user scripts, I'll modify the go script to call whatever is in the rc.local folder instead of just the one script - thanks for replying though
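[Editor's note: the "call whatever is in the folder" variant mentioned here can be sketched as below. The directory name is an assumption; sh is invoked explicitly because files on the FAT-formatted flash drive cannot reliably carry an executable bit.]

```shell
# Run every custom startup script found in the rc.d directory, in name
# order, from the go script. Invoking sh directly sidesteps the missing
# executable bit on the FAT flash drive.
for f in /boot/custom/etc/rc.d/*.sh; do
  [ -r "$f" ] && sh "$f"
done
```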
  23. I figured it'd be something like that - I was planning to use Notepad++ and save the file without carriage returns anyway.
  24. Actually, looking over it, I don't think that I really need to run SMART tests on the drives as well as parity tests. I'd still like the parity tests scheduled for twice a month (especially for the first few months), but I'll pass on the SMART tests unless someone suggests that they're actually necessary. The original plan was to just run parity checks once or twice a month (and daily rsync, of course, so I won't have to manually copy data twice), but I was quite unclear from the forums whether long SMART tests offered benefits that parity checks didn't (since parity checks essentially test every sector anyway), so I threw them in just in case. I'm glad you all stopped me from keeping them in. As far as storing disk SMART logs goes, I'm not planning to use any automated script, but instead have already made up an Excel spreadsheet posted next to the computer, and plan on checking disk status through BubbaRAID's menu every month or two and writing down the date and any new differences/errors. Slightly more work on my part, but it encourages me to keep more of an eye on the servers than letting the logs accumulate until something goes wrong. (I saw BubbaQ's SMART reports package, but I think I'll pass on it, especially if the long SMART test doesn't offer much benefit over a standard parity check.) Once the systems get 10+ drives each, and have been running stably for a year or two, I might reconsider.
I went through the automation links given, and while I understand the theory behind them, I'm not sure how to implement them in practice (hence the original plan of putting everything into the go script). Still, I'll give it a shot, since that seems to be the preferred method now (please correct as necessary - in particular, I'm not sure I started the rc.d script correctly or put the correct setup in the go script): First, I want to create a directory in /boot/custom/etc/ called 'rc.d'. In this directory, I want a file called 'ServerStartup.sh'. Within this file, I want to type:

crontab -l >/tmp/crontab
# parity checks on the 1st and 14th at 3am
# user notice of check
echo "# check parity on the 1st and 14th of every month at 3am:" >>/tmp/crontab
# check command
echo "0 3 1,14 * * /root/mdcmd check 1>/dev/null 2>&1" >>/tmp/crontab
# set up rsync between the two servers every other day at 3 am - will be commented out for Server2 go script
echo "0 3 2-6,8-13,15-20,21-31 * * /usr/bin/rsync rsync://Server2/disk1/*" >>/tmp/crontab
echo "0 3 2-6,8-13,15-20,21-31 * * /usr/bin/rsync rsync://Server2/disk2/*" >>/tmp/crontab
echo "0 3 2-6,8-13,15-20,21-31 * * /usr/bin/rsync rsync://Server2/disk3/*" >>/tmp/crontab
crontab /tmp/crontab

While I'm at it, I'll probably want to move the unMENU boot call over to the script too, but that will just be a matter of cutting/pasting from the current go script. Next, I want the rsync.conf file (both of them) as described in post #1 (unchanged?). Finally, in my go script I'll want to add to the end:

fromdos < /boot/custom/etc/rc.d/ServerStartup.sh | sh

(What's the fromdos syntax for? Why can't I just use /boot/custom/etc/rc.d/ServerStartup.sh?) Also, I'm still confused as to how to set up rsync - will what I have above work? (see also questions from post #1) How will it know where to copy the data to? (The way I have it set up, for example, it seems to be taking data from server1 disk1 and copying it to server2, but not necessarily server2 disk1)
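[Editor's note on the fromdos question: fromdos (from the tofrodos package) strips the carriage returns that DOS/Windows editors append to each line. A script saved with CRLF endings and piped straight to sh tends to fail with errors like ": not found" because of the trailing \r on each command. A rough equivalent using plain tr, in case fromdos isn't installed, is sketched below.]

```shell
# Strip DOS carriage returns before handing the script to sh.
# Same effect as "fromdos < script | sh" when fromdos isn't available.
tr -d '\r' < /boot/custom/etc/rc.d/ServerStartup.sh | sh
```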