Vibe

Everything posted by Vibe

  1. Home Run! After erasing the bios on both cards, the system booted without any issues and unRAID immediately recognized all drives. The only thing I needed to do was re-assign my cache drive. This is absolutely perfect and will serve my needs very well. I can't thank you enough, johnnie.black, for your suggestions and guidance, and SSD for the education on PCIe speeds. It's members like you both that help make unRAID a wonderful operating system! Thanks again - I hope I can return the favor!
  2. I've made some progress - both LSI cards have been flashed to firmware version 20.00.07 (and bios 7.39.2, which I disabled). I'm getting a similar bios error, so I will go ahead and erase the bios to see if this fixes the issue (thanks for the reference to your post johnnie.black). When I erase the bios, is it possible to re-flash the card at a later date to utilize the bios again? (e.g. if I ever decide to sell the cards due to an upgrade). Or is this a "one-time" shot? Thanks again!
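     For reference, a minimal sketch of the sas2flash commands commonly used for this kind of change (run from Linux, DOS, or the UEFI shell depending on the version of the utility), and not necessarily the exact steps used here; the firmware image and boot-ROM file names are examples from a typical 9201-8i firmware package and may differ:

        # List all LSI SAS2 controllers with their current firmware and BIOS versions
        sas2flash -listall

        # Flash the IT firmware only (no -b option), which leaves the card without a boot ROM
        sas2flash -o -f 9201-8i_it.bin

        # Re-flash later with the boot ROM included to restore the BIOS
        sas2flash -o -f 9201-8i_it.bin -b mptsas2.rom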
  3. Thank you johnnie.black - I greatly appreciate the information. I've seen numerous comments about firmware version 20 - but thought that they only pertained to cards that were not natively in IT mode. This is great to know - my knowledge is growing in baby-steps. I will definitely try this over the weekend - thank you again!
  4. Well...the plot thickens. I tried installing the second LSI card in the x4 PCIe slot only to receive the following error message: "MPT BIOS Fault 10h encountered at adapter PCI (07h,00h,00h) Press any key to continue..." What is strange, however, is that either card works fine on its own in the main x8 slot - no issues at all. It is only when I add both cards to my system (regardless of the order) that I receive the same error message. I have a feeling my motherboard will not accept two cards. During the process of troubleshooting I did disable the Boot Support for both cards.

     I did some further research and found many references to resolving the matter by flashing the bios/rom for these cards (for the non-IT versions). However, since they are both SAS9201-8i cards they are already in IT mode. I reviewed the unRAID support page for Crossflashing Controllers but none of this appeared to be relevant in my case. I was also not able to find bios information specific to the 9201-8i (only the 9201-16i).

     Would anyone mind looking at my attached LSI Card Screenshot and letting me know if this bios is considered outdated? I don't want to risk damaging the card if the bios is fine and my motherboard is the culprit. If this is an old bios that should be updated - can someone recommend the bios version that I should be using? I greatly appreciate any suggestions!

     As an aside, I have found a few used/refurbished AMD 990FX motherboards that presumably have dual x16 and/or x8 support. If it does turn out that my motherboard does not like having both cards installed - I may be better served going this route. Thank you again for any suggestions!
  5. Perfect - I was hoping this would be the case. Thank you both again for your help and suggestions - I greatly appreciate it!
  6. Hi johnnie.black - thank you for your reply. I didn't realize that the south bridge shared the connection with the onboard SATA ports - this is very good to know. I read your very helpful (and appreciated) post HERE for information about SATA/SAS controllers. Given what you have mentioned, is there any standard practice for unRAID (e.g. regarding hardware) to utilize 15 drives (1x parity + 14x data) and then add a two- or three-drive cache pool?
  7. Hi SSD - thanks so much for your reply, I greatly appreciate it. I didn't realize that the second card would work in the second PCIe x16 slot even though that slot only runs at x4 - this is great news. I always thought the number of lanes a card utilized would determine the slot requirement. I'm currently only running one SSD drive on one of the motherboard SATA ports, so considering the speeds you mentioned I should be set with my standard SATA drives. I will definitely try installing the second card tomorrow to see how this works out. Thanks again for your insight and information!
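     For reference, a quick way to confirm the link width each card actually negotiated, assuming the standard Linux lspci utility is available; the PCI address below is only an example and should be taken from the first command's output:

        # Locate the LSI controllers on the PCI bus
        lspci | grep -i LSI

        # Show the negotiated link speed and width for one of them (see the LnkSta line)
        lspci -vv -s 07:00.0 | grep -i "LnkSta:"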
  8. Hi everyone, Recently I picked up a pair of LSI 9201-8i HBA cards from a friend after he upgraded his system. I was hoping to use both in my unRAID server that uses an Asus M5A97 R2.0 motherboard. Soon after, I realized that my motherboard's PCIe configuration won't work with two x8 cards since one PCIe slot runs in x4 mode. This being the case, can anyone recommend an AMD AM3+ motherboard that runs both PCIe slots in x8 mode - which would allow me to use both cards? The other option is to use a spare IO Crest 2-port card + one LSI card - but I'd rather use both LSI cards and save the motherboard SATA ports for an SSD cache pool. I greatly appreciate any suggestions - thanks!
  9. Thank you BobPhoenix - I greatly appreciate it. I see what I did wrong - I mistakenly thought that bjp999's version (preclear_bjp.sh from the mediafire.com link) already had the modification for v6.2. However, after doing a file comparison with preclear_disk_15c.sh that you pointed out, I see where the required change needs to be made (line #968). I am also seeing the changes added for notifications. In the meantime I am making one more attempt using the default gfjardim-0.8.9-beta script to test a new hard drive I received yesterday (currently pre-reading at 50% @ 194MB/sec). If this fails I will try the versions you mentioned (with the modification to bjp999's script) to see if I am successful. If I can get this to work as intended, the convenience will be a tremendous addition to a fantastic script. Thanks again for steering me in the right direction!
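     For reference, a simple way to spot the differences between the two script versions mentioned above (file names as referenced in the post):

        # Show the changes between the two preclear script versions, one screen at a time
        diff -u preclear_bjp.sh preclear_disk_15c.sh | less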
  10. Hi everyone, I am in the process of trying to preclear two new HGST 4TB drives and can confirm the same issues. Since I am also in the process of upgrading my main system from v5.06 to v6.3.5, I decided to try out the new plugin system for unRAID 6 using a "trial" version that is running on a different server. The system boots just fine, recognizes the hard drives, and the plugin appears to have installed correctly. I have not added the drives to the array and was attempting to preclear both at the same time and later individually.

     When I attempt a preclear with the default gfjardim-0.8.9-beta script, one drive made it through the initial pre-read but stopped at 12% during zeroing. The second drive only made it through 86% of the pre-read (it's running slower and may have some issues). The preclear came to a halt and the CPU was running at 100%.

     As per BobPhoenix's recommendation above, I then tried using bjp999's script (bjp999-1.15b) by renaming/moving it to /flash/config/plugins/preclear.disk/preclear_disk.sh, and also copied the included readvz file to the root of the flash drive (/flash/readvz). I actually tried with both the original readvz and the one included with bjp999's script. After a reboot the option to use bjp999's script was available within the preclear dialog. However, after starting a preclear (individual drives or both together), the status was not updated and only the message "starting" was displayed. I will try to run bjp999's script manually to see if I have better success.
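     For reference, a minimal sketch of running the preclear script manually from the console, assuming the script path described in the post above; /dev/sdb is purely an example device and must be confirmed against the unRAID web GUI before starting:

        # Optional: run inside screen (if installed) so the preclear survives a dropped session
        screen -S preclear

        # The stock script can list candidate disks that are not assigned to the array
        bash /flash/config/plugins/preclear.disk/preclear_disk.sh -l

        # Start a preclear on the chosen disk (double-check the device name first!)
        bash /flash/config/plugins/preclear.disk/preclear_disk.sh /dev/sdb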
  11. Thank you Frank1940 for the information. I just did some further reading after your post and learned that my battery backup doesn't provide the sine waveform that you mentioned. Very interesting stuff. I don't have many power outages anymore, but may plan on upgrading to something that does provide this feature. A friend of mine also mentioned issues he has experienced due to power supplies dying on servers he manages (Linux but not unRAID). For this reason he has implemented dual power supplies.

     Hi bjp999 - I certainly appreciate your perspective. The same friend I mentioned above jokes about the batteries on his UPS having an "invisible timer" that determines when they're ready to be replaced. He's suggested that he can add the day to his calendar for when he needs to order a new one - and they are not cheap. I began using unRAID years ago when storage on the Linux home server I built (software RAID1) grew beyond 250GB. I never had any issues with data corruption when the power went out - very reliable storage indeed (despite the operator). Once I moved to unRAID and experienced the same reliability plus the ability to easily increase capacity, I was hooked.
  12. This is very good to know - I can just imagine the issues with a power failure on a cache pool. After a power failure several years ago I did buy an APC Back-UPS 1300 (battery backup) which has been working with the APCUPSD pkg. However, since then I have read that there is a difference between a UPS and a battery backup. Would this be sufficient with BTRFS?
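     For reference, a minimal sketch of the relevant settings in /etc/apcupsd/apcupsd.conf for a USB-connected APC unit; the shutdown thresholds are example values only:

        UPSCABLE usb
        UPSTYPE usb
        DEVICE
        # Shut the server down at 20% battery charge remaining...
        BATTERYLEVEL 20
        # ...or when an estimated 10 minutes of runtime remain, whichever comes first
        MINUTES 10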
  13. Perfect - Thanks so much Squid. I have a couple SSD drives that I haven't been using so now they will have a new home. I greatly appreciate your taking the time to answer my questions!
  14. Thank you Squid for your reply - I greatly appreciate it. If you, of all unRAID users, are fully confident with XFS, my concerns are no longer an issue! I do have multiple back-ups of critical data, which is definitely good advice to follow. Interestingly enough, I have never had any issues after power failures with unRAID 5. However, after moving to a new home and experiencing the first one, I quickly recognized the necessity of having a UPS. I did have a power supply fail on me - but after a typical parity check all was just fine. As for my last question regarding using a Cache disk/pool for Docker storage - this would not negate the ability to continue writing directly to mounted disks, would it? Thanks again for everyone's help!
  15. Hi Frank1940 - I greatly appreciate the information. I have read about the slowness with writes on full disks (ReiserFS) and have noticed this myself to a degree. After reading about the developer's murder conviction I can see why there would be limited interest in continuing development. The bug that cropped up in the past was also very concerning - I would have been extremely frustrated had I made the upgrade at that time. I too will probably make the full jump to XFS as I have the time and free drives. I will definitely follow your example and keep track during each stage. I am currently pre-clearing two 4TB hard drives on a "trial" system with that goal in mind. Sadly, one drive may have some "issues" as it is making a repetitive noise every 5 seconds or so, and is pre-reading about 40MB/sec slower than the other (both are the same HGST 4TB NAS model). I may be using Amazon's return policy.

     I completely agree with your sentiment about a UPS. After losing power to a storm a few years ago, the next 8 hours were a "nail-biter" when power was restored and my server was doing a parity check. Luckily I was not writing to the disks when this happened - I'm not certain if that would have had a different outcome. However, for the price and security - it is definitely a smart thing to do. When I make any changes I will definitely follow your recommendations regarding testing the procedure. Thanks again for the guidance!
  16. Hi everyone! I'm a long-time unRAID user with a very stable v5.06 system who would like to upgrade to v6.3.5. I am slow to adopt new technology, especially when my current system performs flawlessly for my needs (a testament to unRAID + Dynamix webGui). I only use my system for streaming media files, plus a single disk for storage of work-related files. I have read a lot from the forum about the upgrade process (a huge "thank you" to all who share their wealth of knowledge) as well as through the Lime Tech website and Google searches. However, I have a few questions/concerns that I am hoping to learn more about.

     1. I am using an APC battery backup for automated power down during electrical outages - it works great. When I wasn't using this, my system would begin a parity check after an outage and the wait time was the only burden I experienced. I have never had any issues with data corruption that I am aware of. I presume that the ReiserFS was the main factor here. I understand that the XFS file system is the default format for unRAID 6 and it is designed for large files and scalability. After a bit of online searching I have read about some users (outside of unRAID) experiencing data corruption using XFS after power outages. The unRAID Wiki also mentions that the recovery tools for XFS are not as mature as those for ReiserFS. On one site I read about a user who experienced issues where files appeared to be available, yet file sizes were "0". However, a lot of this information appeared to be several years old. Has anyone experienced a power failure without any major issues and/or losses using XFS? Are there any concerns or precautions regarding data corruption to be aware of? Have there been any issues with reliability? I hope I don't appear to be paranoid - I've been blessed with an exceptionally reliable unRAID system for 8 years, and change can be "spooky".

     2. I use one of my disks to store work files (1,000's of small documents, HTML, PHP, etc.). I have read that XFS doesn't perform as well with small files. This may be outdated information as well, so I am uncertain if this is still valid. Would it be better (performance-wise) to continue using ReiserFS for this particular disk? Is ReiserFS safe to use with unRAID 6? I read about the ReiserFS regression with earlier versions of Linux, and presume that this is no longer an issue - is this correct?

     3. I have not had a need to use a Cache disk/pool as the slower write speeds have not been a problem. I also write files to individual disks rather than shares. However, I am very interested in trying out Dockers as it clearly makes application installation a breeze. I have read where users suggest keeping Docker files on the Cache disk or Cache pool, i.e. outside the array. Now for my "dumb" question: is it possible to utilize a Cache disk/pool only for Docker storage - and still maintain the ability to write directly to individual disks (e.g. /server/diskX) and bypass writing to disk shares?

     My apologies for the long post - and thank you to anyone who can shed some light on my questions. I hope to return the favor!
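     Regarding the XFS recovery tools mentioned in question 1, a minimal sketch of how an XFS data disk is typically checked on unRAID 6, assuming the array has been started in maintenance mode and that disk1 (/dev/md1) is the disk in question:

        # Check the filesystem in no-modify mode first; nothing is written to the disk
        xfs_repair -n /dev/md1

        # Only if problems are reported, run the actual repair (still in maintenance mode,
        # so parity stays in sync)
        xfs_repair /dev/md1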
  17. A Mix - this is using the 7K3000 for parity and 2x Hitachi 5K3000 (green) + 3x WD Caviar Black 1.5TB + 1x WD Caviar Black 1TB for data. I was getting SLOWER speeds using ALL WD Caviar Blacks before the move to the three Hitachi drives. As mentioned, during parity checks I see fluctuations from 65-85MB/s - sometimes momentarily as high as 90MB/s+. With my previous setup the speeds were more constant, around 58-60MB/s. I was actually going to post this in the forum as I was unsure of the reason for the increase in speed (it didn't seem correct, as the WD 1.5TB parity seemed pretty fast).
  18. I recently added this drive to my server as the parity drive - it is working very well. From what I am seeing my parity checks have gone from 58-60MB/s to anywhere from 65MB/s - 85MB/s+ (previously using a WD Caviar Black 1.5TB). So far it appears to be a great drive.
  19. Thank you Joe L. and prostuff1 for your replies and assistance - greatly appreciated. I will definitely keep an eye on this drive - and will compare the results to the other two drives that I have (same WD Caviar Black 1.5TB). Before the most recent version(s) of the Preclear script I used to use my cell phone to capture the results. I will take a look at these to see if there has been any change with the Spin_Up_Time attribute since I first cleared the drive. Good to know this one isn't on the fritz - at least I hope so. Thank you both again for your time and assistance!
  20. Thank you Joe L. for an outstanding script - using a second computer as a test box/clearing server makes adding new drives to unRAID an absolute breeze. Yesterday I added a new Hitachi 5K3000 drive to my system (upgraded a WD Caviar Black 1.5TB) - everything went smoothly as usual. I then used the Preclear script to clear the WD Caviar Black, and ended up with a report that highlighted the Spin_Up_Time attribute (something I've never seen highlighted in a preclear report before):

        =========================================================
        ** Changed attributes in files: /tmp/smart_start_sda /tmp/smart_finish_sda
        ATTRIBUTE              NEW_VAL  OLD_VAL  FAILURE_THRESHOLD  STATUS       RAW_VALUE
        Spin_Up_Time =         40       40       21                 near_thresh  15000
        Seek_Error_Rate =      100      200      0                  ok           0
        Temperature_Celsius =  117      132      0                  ok           35
        No SMART attributes are FAILING_NOW
        0 sectors were pending re-allocation before the start of the preclear.
        0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
        0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
        0 sectors are pending re-allocation at the end of the preclear, the number of sectors pending re-allocation did not change.
        0 sectors had been re-allocated before the start of the preclear.
        0 sectors are re-allocated at the end of the preclear, the number of sectors re-allocated did not change.
        =========================================================

     The report shows no issues with sector re-allocation - but appears to show the Spin_Up_Time attribute to be near the threshold. I searched in the forum and on Google but didn't find much information on whether this result should be of concern. I read somewhere that this could be the result of the power supply - but the server I cleared the drive on is healthy. Should I be concerned about the longevity of the drive itself?
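     For reference, a minimal sketch of pulling the same attribute directly with smartctl to watch it over time; the device name is an example (matching the /dev/sda shown in the report above):

        # Full SMART report for the drive
        smartctl -a /dev/sda

        # Just the Spin_Up_Time line, to see whether the normalized value (40) keeps
        # drifting toward its failure threshold (21)
        smartctl -A /dev/sda | grep -i Spin_Up_Time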
  21. Thanks vexhold! The 4400 sounds like it will fit the bill.
  22. Fantastic - thanks for the post! I've been wanting one of these to test my unRAID server and other household appliances. Are there any major differences/benefits between this (model P4400) and the P4460?
  23. Wonderful! Just ordered one - I have been waiting for a good deal on this one - thanks Rajahal!
  24. I have been using WD Black drives and would like to make the jump to 2TB - but OUCH! For the price of a WD 2TB Black one could purchase a 7K3000 (Currently on sale at Newegg for $109.00 + Free shipping) and a WD Ears or Hitachi 5K3000 (also on sale). Has anyone had experience with how the 7K3000 performs as a parity drive?