wsume99

Everything posted by wsume99

  1. Try your board out first and see how it goes. I run a Gigabyte EP43-UD3L with the Realtek 8111C NIC and I have not had a problem with mine. That agrees with what I've found in my research of the issue - the problem is not consistent across all 8111C/D/E chips. Some people just seem to be the unlucky ones who get a NIC that is a problem. Perhaps my onboard NIC is bad, but I think it is a driver/hardware interaction because my NIC worked just fine before I switched to 4.7. To be clear, I think the issue is not really an unRAID problem but rather the hardware drivers included in the kernel that unRAID is built upon. So it's not that 4.7 is broken and 4.5.6 is good; they just include different drivers, so YMMV depending on the characteristics of your specific chip. Of course I could try to RMA the board, but then I'd have to pay for shipping and my server would be down. So I opted to just get the Intel NIC - which some people do even when they don't have any problems, because those cards are just so rock solid.
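     If you want to confirm which driver the kernel bound to your onboard NIC before spending any money, something like this works from a telnet session (a rough sketch - it assumes the onboard NIC is eth0, so adjust for your box):

         # Identify the onboard NIC and the kernel driver that claimed it
         lspci | grep -i ethernet
         # Show the driver name/version bound to the interface
         ethtool -i eth0
         # Look for NIC errors or resets in the kernel log
         dmesg | grep -i -e eth0 -e r8169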
  2. I recently had pretty much the same issue with my MB. I have a C2SEE and an 8111C. Here is my support thread if you're interested. All the research I've done indicates that this is a driver issue. I think I started having problems after installing 4.7, but I am not absolutely certain. However, since I now have all advanced format drives in my server I don't want to go back to the previous version. I didn't have the time to try building my own kernel or doing a Slackware build, so I bought an Intel Pro PCI NIC and it has been working just fine since I installed it. Why is it so important to save one of the two PCI slots on your MB? From an unRAID perspective PCI slots are certainly not as important as PCIe slots. So sacrificing a PCI slot for the Intel Pro PCI NIC would be worth it to me.
  3. You must have overlooked the fact that his build sheet included an Intel PCIe NIC. It looks like @delicatepc actually did some reading about what works and what doesn't before picking out his h/w.
  4. Well, since my last post I've installed a 120mm Delta fan and it stops when commanded to zero. I have a C2SEE MB so my BIOS settings might be a little different from yours. Here is what my manual says about the Fan Speed Control Modes... I have found that in order to gain control of the fan I have to set this BIOS option to Disabled. I never had to turn off the AutoFanControl included in the kernel - but then I never enabled it either. IIRC, whenever I ran pwmconfig it asked me if I wanted to enable AutoFanControl and I always said no.
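     For anyone who wants to poke at the fan by hand before trusting any automation, this is the kind of thing I mean (the hwmon path and pwm number are assumptions - they vary by board, so check what pwmconfig reports on yours):

         # 1 = manual PWM mode, so the kernel's auto control lets go of the fan
         echo 1 > /sys/class/hwmon/hwmon0/device/pwm1_enable
         # 0 stops the fan entirely, 255 is full speed
         echo 0 > /sys/class/hwmon/hwmon0/device/pwm1
         echo 255 > /sys/class/hwmon/hwmon0/device/pwm1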
  5. ^^That is exactly what I meant, thanks @chuck23322 I was just too lazy to type it all out. I think it makes sense to organize your files like this because it might be a lot of work later to re-do your directory structure after you have a bunch more files. Again, you don't HAVE to do this but I would recommend that you do.
  6. If you have all mkvs then split levels won't affect video playback, but you may still find it useful to place them in individual folders. My unRAID server streams to XBMC on my HTPC, and sometimes I find it necessary to place a movie.nfo file in the same folder as the mkv file because the scraper picks up the wrong info.
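     For example, a bare-bones movie.nfo dropped next to the mkv is enough to steer the scraper - the title/year here are just placeholders, and your scraper may care about additional fields:

         <movie>
           <title>Movie A</title>
           <year>2010</year>
         </movie>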
  7. Bottom line ... you want a split level of 1. There is only one split level setting for each user share. The split level determines how the directory structure within a user share can be spread across the disks in the array. In your scenario, the Movies directory is a user share. If you assign a split level of 1, the contents of any folder placed within that share will all be stored on the same disk. So if you rip a DVD and place all the VOBs in the Movie A folder in your Movies share, all the VOBs (and any other files) will be placed on the same disk. This is what you want - all files for a single movie on the same disk - to ensure seamless playback.
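     To illustrate (the layout is hypothetical, but this is how split level 1 behaves):

         # Movies/            <- the user share itself (level 0)
         #   Movie A/         <- level 1: unRAID may split here...
         #     VTS_01_1.VOB   <- ...but never below, so every file for
         #     VTS_01_2.VOB      Movie A lands on one disk
         #   Movie B/         <- may end up on a different disk than Movie A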
  8. You probably need to take a serious look at your cooling if you're seeing temperature variations that high, 29C --> 53C. If one of my drives were that hot I think I'd be getting pretty worried. That may not be what caused this specific failure, but temps that high - even if only during parity calculations - certainly are not helping you get the most life out of your drives.
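     An easy way to spot-check a drive's current temperature is to read it straight from SMART (/dev/sdb is just an example device):

         # Attribute 194 is the drive's current temperature in C
         smartctl -A /dev/sdb | grep -i -e 194 -e temperature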
  9. 1. I've only used SNAP for a short time, but it basically auto-mounts non-array drives and enables hot-plugging as well. You can also assign a name to each device by s/n so that it's mounted with a name you can recognize. This is really useful for external HDDs, USB flash drives, and memory cards. There might be more it can do, but that's how I use it. As for JBOD without parity - that's easy - just don't assign a parity drive to the array. In fact that's how my array started, because you get much better throughput without parity. So I copied all my data to my unprotected array and then added parity later. If parity protection is not important to you, simply don't assign a parity drive.
     2. Not sure, but I'm interested to hear the response to this one.
     3. All of the packages that I've installed via unMenu only required manual setup the first time. Most of the time installing a package is actually a three step/click process. First you download the package files. Then you install the files and start the service, so the package is now running on the machine. The last step is to make the installation persistent; this usually involves a modification to the go file so that the package is started again as part of the boot process (see the sketch below). Once those steps are done, no further interaction is required from me.
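     The go file additions look something like this (the package and service names here are hypothetical - unMenu usually tells you the exact lines to add):

         # /boot/config/go runs at every boot, so re-install and
         # re-start the package there to make it persistent
         installpkg /boot/packages/some_package.tgz
         /etc/rc.d/rc.some_service start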
  10. I agree. On top of that, the article was written as if this were some new discovery, when in fact it is a well documented characteristic of WD Green drives that dates back to mid 2009. Thanks for the update guys - you're only ~2 years late. I don't think I would continue to use that site as a source for information.
  11. Nothing new here. Just increase or disable the IDLE3 timer if you are worried about it affecting your drive.
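     If you do want to change it, idle3-tools can do it from Linux (a sketch - /dev/sdb is an example device, and WD's own DOS-based wdidle3 utility is the official route):

         idle3ctl -g /dev/sdb    # show the current idle3 timer
         idle3ctl -d /dev/sdb    # disable the timer entirely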
  12. The unRAID forums are about the best I've seen. Check out the User Customizations and Applications sections in the forum and you'll see that there are a lot of user generated packages and add-ons. It seems like I just keep finding things that I never knew were available in there. Honestly, the forums played a big part in my decision to start using unRAID; the support I've gotten has just been tremendous. I would recommend unRAID to anyone who is looking to build a NAS. Welcome aboard and I hope you stay around.
  13. Three primary benefits to Seagate:
      - Increased HDD volume should help them in terms of production efficiency
      - Technology from Samsung HDDs should improve their products
      - Special supply agreements with Samsung for NAND - aka Seagate is going after the SSD business
      Downside to consumers:
      - We're @#$%ed
  14. The biggest issue is that you are using the fill-up allocation method. Here is what the manual says on fill-up ... Because you have not set a min free space value, unRAID will never write to your new disk (disk3). Also, because your split level is set to 2, once you write a file into a season folder all subsequent episode files will be forced onto the same disk the first file was placed on. So even if you did not have the fill-up allocation problem, new episodes written to the TV share would still go to disk2 unless you are writing shows from a new season (i.e. one that does not exist in your share yet). My advice would be to change your allocation method to most-free and then set a min free space. The rule of thumb is to set it to twice the size of the largest file you would ever write to your TV share; the value is in KB.
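     Worked example of that rule of thumb (the 8GB figure is just an assumption for a large rip):

         # min free space = 2 x largest file, entered in KB
         echo $(( 2 * 8 * 1024 * 1024 ))    # -> 16777216 KB for 8GB files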
  15. In my mind all of the options include the inherent assumption that 5.0 - whatever options it includes - MUST be a stable release. I also have confidence that Tom will do the right thing. The intent here should be to voice the community's priorities when it comes to new features and let Tom figure out if/how he wants to incorporate that into his roadmap. I do find it a little ironic that you stated that you want 3TB support but voted for the option that says 3TB support is not a priority to you.
  16. I'm sure there are lots of ways to do this but I just use the nzbdstatus add-on for Firefox. This thing is super easy to get running. There is an option you can enable that will automatically direct nzbs to sabnzbd. Then whenever you download a nzb it just automatically shows up in your queue.
  17. Most users install SAB/SB onto either a cache drive or a non-array drive. There is an unMenu package now available for installing SAB/SB/CP. It really could not be much easier.
  18. I wouldn't just assume that existing users won't leave because 3TB support is not available. Lack of 3TB support could also force existing users of the basic version who are looking to expand their array with 3TB drives to seek an alternative solution. And it's not that a customer or user cannot use a 3TB drive - they can only use two-thirds of it right now. What is most disappointing to me is that the roadmap seems to be so out of date that it is basically useless at this point. I have no idea when 3TB support will be added. 3TB drives have fallen in price enough that people are seriously considering them even though they can't use all the space yet. To me that is very telling, and it indicates that Limetech needs to get out in front of this issue. Just imagine what it'll be like in 2-3 months - you'll probably have 3TB drives in the $100-110 range.
  19. If working on 5.X development means that 3TB support is delayed, then I'd prefer a 4.8 with 3TB support. Perhaps a poll should be created so we can put it to a vote. What is more important to the community - 3TB support or 5.X features?
  20. There was no learning curve for my family because I took care of all that myself. There is really no need for anyone in my family to touch unraid and I don't want them to either. Really that applies to all the computers in my house - they are users, not administrators. All they need to do is know how to use XBMC and they're good to go. My kids & wife have no idea what is happening in the background when they play a movie. The only time they care is if the server goes down and XBMC says that the files are unavailable. So my point is that someone in the house needs to be able to take care of the server. If your friend is not capable of doing that then are you willing to shoulder the burden for him?
  21. According to the development page >2TB support was originally supposed to be a feature of 5.0-beta4. But if you click the link you'll see Tom's comments.
  22. Your write speeds with and without parity are slower than I would expect. I have all WD Green drives and I get 25-30 MB/sec writes with parity on my gigabit network. Many other users report similar speeds. The fact that you were able to complete three preclear cycles on two drives simultaneously is an indication that your server itself is healthy. Obviously the more drives you do at the same time the better - it loads the server heavier and gives you more confidence that everything is ok. What happens if you unassign the parity drive? Does the server still crash when writing files to it with no parity drive? Also, you need to be running tail via telnet so you can capture the syslog activity just prior to any crash. That might help to better describe exactly what is going on. Read about how to do that here.
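     The command itself is simple - from a telnet session just leave this running so the last lines before a crash stay on your screen:

         # Follow the syslog live; whatever prints last is your clue
         tail -f /var/log/syslog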
  23. I'm trying to better understand these two statements from your original post. If I understand correctly, you built the server and copied all your data onto it using TeraCopy before calculating parity. Then, after you had copied all your data, you assigned a parity drive and established parity. Now you are trying to write files to your parity-protected array and you are getting crashes. Is that correct?
      You mentioned very slow write speeds. What were the write speeds you saw without parity? They should probably be ~60MB/sec (or more) if you have a gigabit network. It sounds to me like you've had slow transfer speeds from the beginning, which means that your server (or something else) may not have been running properly all along. How about copying data from the server? Does it crash when you do that?
      Did you preclear all of your drives before adding them to the array? If so, how many preclear cycles did you perform? Did you preclear multiple drives at the same time? Have you made any changes to your network other than adding your server? You might consider connecting your server to another port on your switch/router.
      If I were you, my strategy would be to verify the server is stable on its own first - i.e. disconnected from the network - so you can narrow down the number of variables you're working with. I feel like you've established that your RAM is not faulty, so my next suggestion would be to run simultaneous preclears on all the disks you have, or as many as you can without losing data (see the sketch below). FYI - I'm assuming that the data you wrote to your server is still intact somewhere else. Running multiple preclears is a good way to stress some of the key components in your server. I have no experience with StressLinux, but it sounds like it's trying to do the same thing multiple preclears would do - stress the system to find weak components.
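     Running the preclears looks something like this (a sketch of Joe L.'s script, one telnet/screen session per disk; the device names and cycle count are just examples):

         # Three passes per disk, run simultaneously in separate sessions
         preclear_disk.sh -c 3 /dev/sdb
         preclear_disk.sh -c 3 /dev/sdc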
  24. Yes, I disagree - strongly. Just because a memory stick works well in one motherboard doesn't mean it will be compatible with another motherboard. Why do you think the mobo manufacturers publish lists of compatible memory? My experience has always been that any incompatibility between RAM and the MB results in a failed boot, but I'm sure others may have had different experiences.