crazytony

Everything posted by crazytony

  1. I had a strange problem with Plex that I couldn't really get past, so I stopped running it on Unraid. I have 4GB of RAM, and after a while Plex used up all the RAM on the system (the whole Unraid OS is using only ~160MB for me right now). Even killing the Plex process didn't free the memory (that was the part I couldn't figure out), and after about three Plex restarts the whole machine would hang (I think the OOM killer found a not-so-willing victim like the md process). This happened over the space of a month or so. After the third time having to do a hard reboot and a parity check, I moved Plex to my desktop machine, and my Unraid box has been stable for 6 months. No unMENU, no SimpleFeatures.
  2. Check the access logs for sejeal.jpg, .htaccess and php.class.php. I've found that in most cases they point me right at the offending page (via the URL parameters); a rough sketch of how I scan the logs is below.
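     A minimal sketch of that log scan, assuming an Apache-style access log at /var/log/apache2/access.log (the path and the suspect filenames are just examples; adjust for your own setup):

        #!/usr/bin/env python
        # Scan an access log for requests touching known-bad filenames and print
        # the matching lines so you can see which page/URL parameters were hit.
        import sys

        SUSPECT = ("sejeal.jpg", ".htaccess", "php.class.php")
        LOGFILE = sys.argv[1] if len(sys.argv) > 1 else "/var/log/apache2/access.log"

        with open(LOGFILE, errors="replace") as log:
            for line in log:
                if any(name in line for name in SUSPECT):
                    print(line.rstrip())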
  3. I think this is possible but there may be angles I'm not aware of so please let me know if I'm crazy(er): I'm copying from an existing, working, parity-protected array to a new, completely empty array. If I run with the parity drive I'm getting write speeds of 30-40MB/s. Can I remove the parity drive and restart the array in an unprotected state? This will mean unraid won't have to sync two disks when I copy. I am aware this new system will not have parity protection until I rebuild the parity but I'm not really terribly concerned about that because the data origin has protection. After I've done the copy (~30TB), can I re-assign the parity drive then rebuild? Would I miss anything other than parity protection on the new system by doing this?
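     Back-of-envelope numbers for the 30TB copy, using the ~35MB/s I'm seeing with parity; the ~90MB/s parity-less figure is an assumption for illustration, not a measurement:

        # Rough copy time for ~30TB with and without the parity drive assigned.
        # The 90 MB/s "no parity" rate is an assumed ballpark, not a measurement.
        DATA_TB = 30
        DATA_MB = DATA_TB * 1000 * 1000          # decimal TB -> MB

        for label, rate_mb_s in [("with parity", 35), ("no parity (assumed)", 90)]:
            hours = DATA_MB / float(rate_mb_s) / 3600
            print("%-22s: %6.1f hours (%.1f days)" % (label, hours, hours / 24))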
  4. Phew. I was in a bit of a panic last night wondering WTF was happening. I got up this morning to see it's not just me. I tried my Sandisk Cruzer, but since we already got that working I didn't try my HP device. I will test that device (without running make_bootable / copying ldlinux.sys) to see if it also has issues.
  5. Not if your PCI-E slots are filled with HBA cards. I have one spare PCI-E x1 slot, but the SASLP blocks it.
  6. That made it work. That's really strange. Maybe the boot parameters/size changed and that caused the boot loader to go wonky? Tomorrow I'm going to try 8a to see if it is the boot config. That's my normal procedure but when it didn't work I tried the more manual procedure.
  7. ZIP utilizes a CRC to validate the files contained in the archive. The CRC should prevent incomplete archives from extracting properly. That being said, I re-downloaded the archive and went through the process again without any luck.
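     If you want to confirm the CRCs yourself before blaming the download, Python's standard zipfile module can test them; a quick sketch (the archive filename is just an example, use whatever you downloaded):

        # Verify every member of a ZIP archive against its stored CRC.
        # testzip() returns the first corrupt member name, or None if all CRCs match.
        import zipfile

        ARCHIVE = "unRAIDServer.zip"   # example filename; point it at your download

        with zipfile.ZipFile(ARCHIVE) as zf:
            bad = zf.testzip()
            if bad:
                print("CRC check failed on:", bad)
            else:
                print("all CRCs verified OK")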
  8. Well, despite earlier enthusiasm, rc10 will not boot. I can't provide a log because it never gets past the bootloader (the SYSLINUX banner). Steps: (1) working rc8a stick; (2) power down; (3) transfer stick to PC; (4) copy bzroot/bzimage for rc10; (5) transfer stick back; (6) boot; (7) hang at the bootloader ("SYSLINUX ... Copyright Peter Anvin" etc.); (8) power down; (9) transfer stick to PC; (10) load bzroot/bzimage for rc8a; (11) transfer stick back; (12) boot; (13) working rc8a Unraid. Same machine, same stick, same process (copy bzimage/bzroot): rc8a works for me but rc10 just won't boot. ASUS F1A75-V Pro / 4GB RAM / 2x AOC-SAS2LP-MV8 / 10x 3TB Seagates.
  9. Capital-A Awesome! I can't wait for the final release. It's going to be hard to find cost-effective non-Realtek solutions in the general market. I tried when building my latest system (in December) because of the Realtek issues, but all the boards I could find had a Realtek chipset.
  10. I would think that attaching heatsinks would void your warranty. That being said, if you are not experiencing issues I would not do anything. Just from reading the posts in the last couple of days, it looks like the problem may be more complex than a single drive (some of the backplanes had 2 dead MOSFETs on 2 different ports). I seriously considered getting a Norco before I decided to go another way to get to 24 drives. I'm glad I did, although Norco's support sounds competent (they're shipping out new backplanes like there's no tomorrow).
  11. Apparently the hotswap electronics on the 4224 cannot handle more than a 0.5A draw, so some 3TB disks are causing the hotswap circuitry to fail, which in turn causes the disk to fail. http://wsyntax.com/cs/killer-norco-case/
  12. NFS stale file handle issues can be caused by a whole bunch of things including the client software not adhering to the protocol for some reason (bug, API incompatibility, etc) as well as issues on the server side (which is why the syslog is important). The original NFS error case that I reported to limetech has been resolved since rc4 and I have confirmed that it is still fixed in rc8a. Do you use a cache drive? The only issue that may or may not be fixed is the NFS issue when you have a cache drive installed. If you can walk through the steps to recreate, Tom can fix the issue.
  13. Can you tell us a little about the scenario in which the stale NFS handle appears and include a syslog? What is the client NFS version?
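     For the client NFS version, a quick way on a Linux client is to look at the vers= mount option; a rough sketch (Linux-only, assumes the share is already mounted):

        # Print the NFS version negotiated for each mounted NFS share (Linux client).
        # Each /proc/mounts line is: device mountpoint fstype options dump pass
        with open("/proc/mounts") as mounts:
            for line in mounts:
                device, mountpoint, fstype, options = line.split()[:4]
                if fstype.startswith("nfs"):
                    vers = [o for o in options.split(",") if o.startswith("vers=")]
                    print(mountpoint, fstype, vers[0] if vers else "vers not listed")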
  14. Hmmm, Unraid doesn't split files across disks/devices, so I'm not sure what's going on. How are you accessing the file? My OpenELEC boxes (NFS mounts) are pretty good about touching only the files they need. If the disk is spun down it can take a couple of seconds to spin up. On Windows, you might want to check your antivirus settings, as I've seen laptops start a recursive folder/file scan when you access a mount (even through XBMC on Windows).
  15. The short answer is yes, Unraid knows where the file is stored. The longer answer is that under Unraid the data physically resides on one disk, and Unraid caches the directory tree (inodes) in memory. If the size of the inode table exceeds available memory, Unraid must spin up each disk to find the file. Once the file is found, that information is cached in the inode table, pushing out less frequently accessed entries. I believe the optimal solution is to make sure you have enough RAM to hold the whole inode table. If you have a relatively small number of large files (~15,000, like I do) this isn't a problem. If you have trillions of 1KB files then it's a massive problem for Unraid (and really for any parity/pooling system). Some rough numbers are below.
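     To put very rough numbers on it; the per-entry cost here is an assumption (a ballpark for a cached dentry plus inode), so treat the output as illustrative only:

        # Rough estimate of RAM needed to keep the whole directory tree cached.
        # BYTES_PER_ENTRY is an assumed ballpark for one cached dentry + inode pair.
        BYTES_PER_ENTRY = 1024

        for n_files in (15000, 1000000, 50000000):
            mb = n_files * BYTES_PER_ENTRY / (1024.0 * 1024.0)
            print("%12d files -> ~%8.0f MB of cache" % (n_files, mb))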
  16. I can't get the Asus (it has been EOL'd in favour of the A85X). The A85X isn't even shipping here at the moment (one motherboard from one supplier), and support for the chipset doesn't look awesome at this point, so I've had to substitute an ASRock A75 Extreme6. It might be a lucky substitute anyway: the SAS2LP and the M1015 both use PCIe x8. On the Asus the first x16 slot runs at x16 and the second at x4; on the ASRock the first and second slots balance to x8/x8. The other nicety is that they have added two more SATA3 ports, bringing it up to 8 on the motherboard. I need to check support for the ASMedia ASM1061 chip that powers the additional two ports; it does look like it will work with the AHCI driver: https://ata.wiki.kernel.org/index.php/SATA_hardware_features The case will be arriving in November, and I have a temporary case that I can use to test the loadout until the new case arrives. I should have everything in hand on Monday. The first thing I'm going to do is configure the array, move about 10-15TB worth of data onto it to see how well it performs, and run a two-week burn-in/soak test. If it's all good then I'll continue the build-out.
  17. They are both modules that sit on top of an existing OS, whereas the Unraid distribution is the OS plus the storage modules on top. So yes, you could install anything you want alongside them. For example, instructions on putting SnapRAID on an Ubuntu 12.04 installation: http://www.havetheknowhow.com/Configure-the-server/Install-SnapRAID.html
  18. Cheers! Snapraid is a new one on me. Have you used it? What's the product like? What is the community like?
  19. I am concerned that the relatively slow progress is going to cause issues with growing the community, and we really want the community to grow. I don't think Tom has a lot of options at this juncture: our diverse install base (there are 40+ motherboards on the tested list) means it takes time to work through compatibility issues. If Tom decided to support only one motherboard and only one RAID card, progress would move quite quickly, but at this point it's whatever old hardware you have lying around. I've looked at the other types of systems (from Drobos through ZFS) and the thing that keeps bringing me back to Unraid is that if it all goes pear-shaped, I can pull the disks out of the array and still have my data in a format that can be read in another machine. Lose your parity drive and another data drive at the same time? You've only lost what was on that data drive; replace them and you can rebuild your array. Motherboard failure? You can port to another motherboard and retain your array. Controller failure? Go buy a new one and you're good to go. These options don't exist with most other RAID solutions, so I come back to Unraid time and time again.
  20. Not in Australia. StaticICE has the Reds a little cheaper than my usual local supplier: $202 + shipping ($10-$30). Even then, I'm not about to stump up $80 to ship defective disks to Singapore (or Malaysia, I can't remember which). Seagate I can post to locally for $15.
  21. I've decided I can't take much more of my Adaptec 2805 card, so I'm working on a new build that should last 3-5 years. I have a couple of unique constraints, the biggest being no air conditioning, so I need more fans and lower disk density. The storage is also in a bedroom, so I need relative quiet. I can use 4-in-3s but not 5-in-3s, as there is almost no room to ventilate the 5-in-3s. During the peak of summer the disks in my current case reach 49-52°C if I run a full parity check. I have three planned construction stages.
      Stage 1: Case: Lian Li PC-D8000; Motherboard: Asus F1A75-V Pro (substitute: ASRock A75 Extreme6); Power supply: Thermaltake TR2 850W; Disks: 6x Seagate ST3000DM001 3TB
      Stage 2: HBA (still very undecided): IBM M1015 flashed to IT mode; Disks: 8x Seagate ST3000DM001 3TB (14 total)
      Stage 3: 2nd HBA (still very undecided): IBM M1015 flashed to IT mode; Disks: 8x Seagate ST3000DM001 3TB (22 total)
      As far as the disks go: I'm done with WD. Their Reds are far too expensive (Green 3TB: $130, Red 3TB: $240), Intellipark sucks, and their RMA address is in Singapore so disk RMA is not an option (international postage is $80 for one disk!). Can't wait to get started on the first stage.
  22. Given that you aren't doing realtime re-encoding, this sounds to me like a bandwidth issue. Looking at your disks, I see they're recent (Seagate SATA3 + WD EARS), so I don't expect problems there. Can you tell us how your XBMC frontend talks to the Unraid backend?
  23. Lots and lots of things could be going wrong, but can we start with the network? Is it wired? 10/100? GigE? Wireless? The bitrate for un-re-encoded Blu-rays is up to 54 Mbps; I don't know what rate your videos have been encoded at, but it's worth a peek (a quick comparison against common link speeds is below). Less likely would be an issue with the disks, but just to be sure: what disks are you running as your Unraid data drives? SATA? IDE? The other thing to do is to run a tool like nmon (or nettop) on Linux, or look at the network monitor in Windows; those tools will show you how quickly data is arriving at your XBMC node. *Edited because I got the Blu-ray bitrate wrong.
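     Putting the numbers side by side; the 54 Mbps figure is the Blu-ray worst case from above, and the link speeds are nominal (real-world throughput will be lower, especially over wireless):

        # Compare nominal link speeds against a worst-case ~54 Mbps Blu-ray stream.
        # Real-world throughput is lower than nominal, especially on wireless.
        STREAM_MBPS = 54

        links = {"10 Mbit": 10, "100 Mbit": 100, "Wireless g (54 Mbit nominal)": 54, "GigE": 1000}

        for name, nominal in links.items():
            headroom = nominal / float(STREAM_MBPS)
            verdict = "OK" if headroom >= 1.5 else "marginal" if headroom >= 1.0 else "too slow"
            print("%-30s %5d Mbit -> %s" % (name, nominal, verdict))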
  24. The latest beta seems to be a little better but it's still far from perfect. What disks are you using?