Everything posted by Fireball3

  1. Well, the name S3 is probably confusing, because in that script you can select the option to shut down instead of S3 sleep! Take your time - I don't know exactly when I'll finish setting up my backup rig.
  2. Or use the auto_s3_sleep script on your backup machine to power it down when it is idling. This will work even if the GUI has become unresponsive. Would you mind sharing your rsync config/setup files?
  3. Thanks for improving it! Please document/comment the changes inside the script, raise the version number and then post it here. I'm far from experienced enough to tell whether there are better ways of scripting - if it works, it's OK. Maybe others with the same use case will want to test and comment on it?
  4. Hi tdallen, for the bandwidth issue, the per-lane throughput is (nominal values incl. overhead):
     PCIe 1.0/1.1 (2003): 250 MB/s
     PCIe 2.0/2.1 (2007): 500 MB/s
     PCIe 3.0 (2012): 985 MB/s
     PCIe 4.0 (~2015?): 1969 MB/s
     So going with a PCIe 2.0/2.1 x1 card and 4 drives is "on the edge" but still OK IMHO. If you study the wiki you'll find the confirmed working cards. The list is of course not exhaustive, but you can at least check some cards. Sure, some of them are outdated, but you may find them on eBay? The next hurdle you will find is that some models can only be found in some regions for reasonable prices. In Europe you will rarely find cheap Supermicros, but you can often find DELL and IBM cards on eBay that people pull from their servers. Just make sure to get the pulled cards and don't buy them from China. They sell for about 50-70€ over here. If you plan to expand further you're probably best off with an 8-port card - check the price per port.
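A back-of-the-envelope sketch of the "on the edge" claim above, using the nominal per-lane numbers from the table (illustrative only; variable names are my own):

```shell
#!/bin/sh
# A PCIe 2.0/2.1 x1 link gives ~500 MB/s nominal, shared by every drive
# on the controller during a parity check.
link_mb=500
drives=4
per_drive=$((link_mb / drives))
echo "about ${per_drive} MB/s per drive"   # tight, but workable for spinners
```

With four drives that works out to about 125 MB/s per drive, which is roughly the sustained read speed of a modern HDD - hence "on the edge".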
  5. I also flashed one more Dell Perc H310 to IT (P19) last night without any problems! Updated the toolset.
  6. Quoting myself: Alternatively: Search the forum for "9211-8i" to find users that own the card and then PM them.
  7. Once the Google robot has been here, this "news" will lead many people to the unRAID forums! Nice marketing trick, Seagate!
  8. If it is a genuine LSI card and it is obviously defective (port A not working), why not RMA it? You did nothing illegal when trying to flash LSI-supported firmware. At least they (LSI) should be able to send you a stock ROM to solve your problem - although I doubt you will be able to solve potential hardware issues that way.
  9. Thanks for sharing this! Seagate is once more ruining their reputation:
     - shortening warranty periods for consumer drives --> negative
     - delivering drives with useless APM settings (ST1000DM003) --> negative
     - poor drive quality (personal opinion) --> negative
     - now this bull*$&% --> negative
     - more to come...
     Unfortunately they're one of the very few players left in this business.
  10. I remember having read similar issues somewhere in this forum - keep searching. It is not caused by the script - it's a general issue with S3 on your configuration.
  11. @Mr_Gamecase: I noticed you're running the Highpoint 1740 cards. They are 32-bit PCI cards with 4 SATA II ports. How are your parity check speeds? That must be a bottleneck, no? You even have 4 of them running on your PCI bus... I would like to add the information to the wiki, so please be accurate.
  12. Having a short look at that board I see:
      2 x Mini-SAS (for 8 x SAS 6Gb/s ports)
      1 x Mini-SAS (for 4 x SATA II 3Gb/s ports)
      2 x SATA III 6Gb/s ports
      If you're lucky enough to be able to use that onboard controller as an HBA and unRAID has an appropriate driver, you already have 14 drives covered. You have to examine that! ("...onboard LSI SAS") Install unRAID, get some forward breakout cables, plug in the drives and see if they're available in unRAID. If you wanna save money, then you're probably good with 2x M1015 (or similar builds). At my place the M1015 is about 50-70€ on eBay. Expanders aren't that popular and you barely find them on eBay. Nice board btw.!
  13. Expander, OK, but:
      1. While I'm not sure how the expander works - what do the experts here say about the bandwidth provided by a single PCIe x8 link connecting 24 drives? Isn't it a bit of a bottleneck?
      2. I suppose you will use a server-grade motherboard. If you manage to get one with 8 SATA ports and add 2 M1015 (or other 8x SAS adapters) you also have 24 slots. Depends on your enclosure of course.
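To put a rough number on the expander question (a sketch with nominal PCIe 2.0 figures; the variable names are illustrative):

```shell
#!/bin/sh
# 8 lanes at ~500 MB/s each, shared by 24 drives behind the expander
# while all drives are read at once (e.g. during a parity check).
lanes=8
lane_mb=500
drives=24
per_drive=$(( lanes * lane_mb / drives ))
echo "about ${per_drive} MB/s per drive with all 24 drives active"
```

That is roughly 166 MB/s per drive, so in practice the link is only a mild bottleneck for spinning disks.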
  14. Here is version 1.1 of the auto_s3_sleep script. Thanks to Superorb who helped to get rid of some bugs. In the process I also made the logging more detailed. If the script runs to your satisfaction you may want to set the debug variable to 0 (no logging) to keep writes to the log (flash drive) as low as possible.
      Some answers to questions that arose: this script only checks certain conditions and then either shuts down the server or sends it to S3 sleep. Powerdown is done by invoking /sbin/powerdown - you should ensure that the clean powerdown script is installed; it is not part of this script! S3 sleep is done by echo -n mem > /sys/power/state. Sending the server to S3 sleep is not trivial though: unRAID will return many errors when waking up again. The whole extent is not clear since I don't use it. There should probably be some kind of routine to at least unmount the array before sleeping. More work and testing is needed, so watch out when using S3!
      Possible settings/checks:
      # check for TCP activity
      # check for SSH connections
      # check for TELNET connections
      # check for locally logged in sessions
      # check for tmp/nos3sleep.lock (if this file exists, no sleep will be performed)
      # do not sleep if dedicated clients are pingable
      # only count down outside specified hours
      # call smarthistory script to log SMART values before powerdown (not tested & not confirmed working)
      # choose between S3 sleep or powerdown (make sure you have installed the clean powerdown script)
      Rename the file to .sh, edit it with a unix-style editor (e.g. Notepad++) and configure it to your needs.
      Update: Version 1.2 of this script is here. It adds the possibility to exclude drives. Thanks go to maspiter.
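A minimal sketch of the decision logic the checks above feed into - NOT the actual auto_s3_sleep script. The function name may_sleep and its arguments are illustrative assumptions; only the two powerdown/S3 commands are taken from the post:

```shell
#!/bin/sh
# $1 = number of active sessions (SSH/TELNET/local logins)
# $2 = path of the no-sleep lock file
may_sleep() {
    [ "$1" -gt 0 ] && return 1   # someone is logged in -> stay awake
    [ -e "$2" ]    && return 1   # lock file present    -> stay awake
    return 0
}

if may_sleep 0 tmp/nos3sleep.lock; then
    # real script would run: /sbin/powerdown
    # or, for S3:            echo -n mem > /sys/power/state
    echo "conditions met - would power down or enter S3 now"
else
    echo "activity detected - staying awake"
fi
```

The real script also checks TCP activity, pingable clients and allowed hours before it counts down; the sketch only shows the shape of the gatekeeping.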
  15. OK, with extensive help from WeeboTech I finally managed to get the auto_s3_sleep script working.
      Possible settings/checks:
      # check for TCP activity
      # check for any SSH connections
      # check for any TELNET connections
      # check for any locally logged in sessions (answering "no" allows console debugging)
      # check for tmp/nos3sleep.lock (if this file exists, no sleep will be performed)
      # do not sleep if dedicated clients are pingable
      # only count down outside specified hours
      # call smarthistory script to log SMART values before powerdown (not tested & not confirmed working)
      # choose between S3 sleep or powerdown (make sure you have installed the clean powerdown script)
      Rename the file to .sh.
      Edit: attachment temporarily removed due to a reported minor bug (the bug has been found and removed, but the fix remains to be confirmed).
  16. Could this be a problem of readvz64v3? I haven't checked all the results that were posted. Any other results from v3?
  17. Here are my results with readvz64v3.zip & LC_CTYPE=C /boot/preclear_disk15b.sh -f /dev/sdX
      As far as I can say, it seems there is no improvement compared to the initial script from JoeL? Why are the prereads so slow?
      ========================================================================1.15b
      == invoked as: /boot/preclear_disk15b.sh -A -f /dev/sdb
      == HitachiHDS5C4040ALE630 PL1321LAG325VH
      == Disk /dev/sdb has been successfully precleared
      == with a starting sector of 1
      == Ran 1 cycle
      ==
      == Using :Read block size = 1000448 Bytes
      == Last Cycle's Pre Read Time  : 16:32:21 (67 MB/s)
      == Last Cycle's Zeroing time   : 10:53:55 (101 MB/s)
      == Last Cycle's Post Read Time : 16:51:45 (65 MB/s)
      == Last Cycle's Total Time     : 44:19:12
      ==
      == Total Elapsed Time 44:19:12
      ==
      == Disk Start Temperature: 21C
      ==
      == Current Disk Temperature: 28C,
      ==
      ============================================================================
      ========================================================================1.15b
      == invoked as: /boot/preclear_disk15b.sh -A -f /dev/sdc
      == HitachiHDS5C4040ALE630 PL1311LAG38ZJH
      == Disk /dev/sdc has been successfully precleared
      == with a starting sector of 1
      == Ran 1 cycle
      ==
      == Using :Read block size = 1000448 Bytes
      == Last Cycle's Pre Read Time  : 16:24:22 (67 MB/s)
      == Last Cycle's Zeroing time   : 10:47:02 (103 MB/s)
      == Last Cycle's Post Read Time : 16:44:01 (66 MB/s)
      == Last Cycle's Total Time     : 43:56:37
      ==
      == Total Elapsed Time 43:56:37
      ==
      == Disk Start Temperature: 21C
      ==
      == Current Disk Temperature: 27C,
      ==
      ============================================================================
      ========================================================================1.15b
      == invoked as: /boot/preclear_disk15b.sh -A -f /dev/sdd
      == HitachiHDS5C4040ALE630 PL2331LAG8W1YJ
      == Disk /dev/sdd has been successfully precleared
      == with a starting sector of 1
      == Ran 1 cycle
      ==
      == Using :Read block size = 1000448 Bytes
      == Last Cycle's Pre Read Time  : 16:31:59 (67 MB/s)
      == Last Cycle's Zeroing time   : 10:57:37 (101 MB/s)
      == Last Cycle's Post Read Time : 16:51:39 (65 MB/s)
      == Last Cycle's Total Time     : 44:22:27
      ==
      == Total Elapsed Time 44:22:27
      ==
      == Disk Start Temperature: 21C
      ==
      == Current Disk Temperature: 27C,
      ==
      ============================================================================
  18. Just started the 2nd run with 5.0.5 and the x64 preclear script: readvz64v3.zip & LC_CTYPE=C /boot/preclear_disk15b.sh -f /dev/sdX
      Edit: Now I found out that the x64 binary won't work on the x86 unRAID... :( (guess I was too eager on testing, so I turned my brain off... /facepalm/) Should there be a check to avoid such mistakes?
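A guard like this at the top of the script could catch the x64-binary-on-x86 mistake described above (a sketch only; that the script would dispatch on kernel architecture is my assumption):

```shell
#!/bin/sh
# uname -m reports the machine/kernel architecture, e.g. x86_64 or i686.
arch=$(uname -m)
if [ "$arch" = "x86_64" ]; then
    echo "64-bit kernel detected - the x64 read binary can be used"
else
    echo "32-bit kernel ($arch) - refusing to run the x64 binary"
fi
```

On a 32-bit unRAID 5.x install this would bail out before the wrong binary is ever invoked.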
  19. Here we go, first run completed:
      ========================================================================1.15b
      == invoked as: ./preclear_disk15b.sh -A -f /dev/sdc
      == HitachiHDS5C4040ALE630 PL1311LAG38ZJH
      == Disk /dev/sdc has been successfully precleared
      == with a starting sector of 1
      == Ran 1 cycle
      ==
      == Using :Read block size = 8388608 Bytes
      == Last Cycle's Pre Read Time  : 12:18:50 (90 MB/s)
      == Last Cycle's Zeroing time   : 10:55:48 (101 MB/s)
      == Last Cycle's Post Read Time : 14:21:13 (77 MB/s)
      == Last Cycle's Total Time     : 37:37:02
      ==
      == Total Elapsed Time 37:37:02
      ==
      == Disk Start Temperature: 24C
      ==
      == Current Disk Temperature: 26C,
      ==
      ============================================================================
      ========================================================================1.15b
      == invoked as: ./preclear_disk15b.sh -A -f /dev/sdb
      == HitachiHDS5C4040ALE630 PL1321LAG325VH
      == Disk /dev/sdb has been successfully precleared
      == with a starting sector of 1
      == Ran 1 cycle
      ==
      == Using :Read block size = 8388608 Bytes
      == Last Cycle's Pre Read Time  : 12:27:01 (89 MB/s)
      == Last Cycle's Zeroing time   : 11:06:19 (100 MB/s)
      == Last Cycle's Post Read Time : 14:32:25 (76 MB/s)
      == Last Cycle's Total Time     : 38:06:57
      ==
      == Total Elapsed Time 38:06:57
      ==
      == Disk Start Temperature: 24C
      ==
      == Current Disk Temperature: 27C,
      ==
      ============================================================================
      ========================================================================1.15b
      == invoked as: ./preclear_disk15b.sh -A -f /dev/sdd
      == HitachiHDS5C4040ALE630 PL2331LAG8W1YJ
      == Disk /dev/sdd has been successfully precleared
      == with a starting sector of 1
      == Ran 1 cycle
      ==
      == Using :Read block size = 8388608 Bytes
      == Last Cycle's Pre Read Time  : 12:25:54 (89 MB/s)
      == Last Cycle's Zeroing time   : 11:05:24 (100 MB/s)
      == Last Cycle's Post Read Time : 14:27:29 (76 MB/s)
      == Last Cycle's Total Time     : 37:59:59
      ==
      == Total Elapsed Time 37:59:59
      ==
      == Disk Start Temperature: 24C
      ==
      == Current Disk Temperature: 26C,
      ==
      ============================================================================
      The next run will be with the x64 edition. Is the updated version ready yet?
  20. OK, switching to pc15b. Is this also valid for x64, or for x86 only? Edit: Also noticed that without -A it reports that the drive won't be aligned, although it is a 4TB drive. Is this correct?
  21. I'm going to start the preclear of 3x 4TB Hitachis now. I will start with 5.0.5 and pc15b2.zip. Stop me if I'm wrong.
  22. I have almost everything together to build my backup server. I'm planning to preclear some drives, but I can only start next week. I will have 3x 4TB drives (to be harvested) - I will post the specs when I have them ready. Also 2x 500GB (identical), but I expect one to have issues, plus 1x 3TB drive and 1x 2TB drive. I intend to run 3 cycles - so I could run it with 5.x and 6.x. Prepare the test plan and provide the script and I will let it run for you. Would you mind sharing the differences (in layman's terms) between Joe's script and yours?
  23. And here is the issue. UNRAID AUTOMATICALLY STARTS A CORRECTING PARITY CHECK AFTER A CRASH. It doesn't give you the option to evaluate the situation before it starts writing data to the parity disk. I agree! If unRAID detects an error, it should not start the array and it should not automatically conduct a parity check!
  24. Look for settings like "passthrough" or "disable RAID", if available.