greybeard

Everything posted by greybeard

  1. Nice! Looking forward to when it will be useful, maybe even the default at some future time. Have you thought about incorporating the change to skip the preread for passes 2-n in a multipass preclear? Would be nice not to have to add that modification to the script when new versions are released. It's in there too. The pre-read is only done on the very first cycle; the post-read of every other cycle is treated as the pre-read for the subsequent cycle:

         if [ "$pre_read_flag" = "y" -a $cc = 1 ]
         then
           pretmr=$(timer)    # get preread start time
           read_entire_disk $theDisk preread
           display_progress
         fi

     Thanks!
  2. Nice! Looking forward to when it will be useful, maybe even the default at some future time. Have you thought about incorporating the change to skip the preread for passes 2-n in a multipass preclear? Would be nice not to have to add that modification to the script when new versions are released.
  3. I used F9 to load optimum settings and changed ACPI to OS support YES, S3 and V3. I did nothing special to install a "powerdown" script. The flash drive is a fresh build using the unRAID 4.6-rc5 distribution and unMenu is a clean install of Version 1.3, after which I ran the update script. Revision now shows 178. To my surprise, no. The LEDs are dark when the system is either powered off or sleeping.
  4. I think the questions are: 1. Should this be a 4.6.1, 5.0 or 5.1 feature? 2. Should this be a 5.0, 5.1 or 5.x (x>1) feature? The answers need to be coordinated and GPT support needs a plan too.
  5. This is how WOL is behaving with my C2SEE: E3400 CPU, Kingston KVR1333D3N9K2/4G memory, Corsair TX750 and one WD20EARS. unRAID 4.6. NOTE: There are no HBAs in this system. I will try adding an MV8 in the next week or two.
     From a cold power off (powered off using the switch on the power supply):
     - Cannot wake up the system via magic packet.
     From a soft power off (with unRAID up and running, push the power button on the case or shut down from the web):
     - Magic packet reliably powers the system on and unRAID boots up.
     - Magic packet continues to work even if I power off during the boot process.
     - Seems I have to do a cold power off to get back to where the magic packet will not start the system.
     - Guessing unRAID configures the LAN adapter in a way that makes WOL work, but that configuration is lost when standby power is removed.
     After putting the system to S3 sleep using the "Go To S3 Sleep" user scripts button:
     - Magic packet reliably wakes the system up but...
     - I always lose video to an attached monitor (it is a test system so it has a monitor attached).
     - If the system has been asleep for too long it wakes up, but I cannot connect to the web server or open a telnet session. I don't know if this is a wakeup issue with the software or a hardware problem. I did read another post where someone had a WOL problem that was fixed by replacing a Corsair PS with another PS. I suppose it could be memory related too. Apparently the system has not totally crashed, because pushing the power off button on the case appears to initiate an unRAID power off sequence. I need to do additional diagnosis of this problem.
     Here is the command I am using to send the magic packet:
     wolcmd.exe 00:30:48:B2:10:55 192.168.0.61 255.255.255.0
     wolcmd.exe file date is 1/4/2005
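     For reference, a magic packet is just 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent over UDP (typically to the broadcast address on port 7 or 9). A minimal bash sketch of what a tool like wolcmd.exe builds (the MAC below is the one from my command; the broadcast address is illustrative):

```shell
#!/bin/bash
# Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times.
mac="00:30:48:B2:10:55"
hex=${mac//:/}                                # strip colons -> 003048B21055
macbytes=$(echo "$hex" | sed 's/../\\x&/g')   # -> \x00\x30\x48\xB2\x10\x55
payload='\xff\xff\xff\xff\xff\xff'
for i in {1..16}; do payload="$payload$macbytes"; done
printf "$payload" | wc -c                     # a magic packet is always 102 bytes
# To actually send it using bash's /dev/udp redirection, uncomment:
# printf "$payload" > /dev/udp/192.168.0.255/9
```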
  6. I understand that, which is why I am wondering if using GPT only on drives > 2TB might solve the problem. Guess I don't know what the work effort is to:
     1. Align on 64
     2. Use MBR if drive is 2TB or less
     3. Use GPT if drive is > 2TB
     Is there any reason this would not work? Is it just a matter of work effort?
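     The selection logic itself would be trivial. A hypothetical bash sketch (the function name and the exact cutoff are my own illustration, not anything from the unRAID code):

```shell
#!/bin/bash
# Hypothetical sketch: pick MBR (msdos) or GPT by drive size in bytes.
choose_label() {  # $1 = drive size in bytes
  # 32-bit LBA * 512-byte sectors = 2,199,023,255,552 bytes, the MBR ceiling
  local limit=$(( 2 ** 32 * 512 ))
  if (( $1 > limit )); then echo gpt; else echo msdos; fi
}
choose_label 3000592982016   # a nominal 3TB drive -> gpt
choose_label 2000398934016   # a nominal 2TB drive -> msdos
```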
  7. Yes... rebuilding a failed drive with data at the end of the drive. If GPT was used only on drives that are > 2TB in size, wouldn't a replacement drive always have a partition that is >= that of the failed drive? Space would only be lost if the replacement drive is > 2TB. If replacing a 2TB or smaller drive with a 3TB drive, the partition would be much bigger. If replacing a 3TB drive, it would be the same size. If the replacement drive were the same size as the original and 2TB or less, no space would be lost, so the data partition would be the same size using either 63 or 64 alignment.
  8. Not sure I understand why losing space at the end would be a problem unless it was a 2TB or smaller parity drive. Can't imagine that the lost space on a data drive is significant. Maybe GPT could only be used on drives > 2TB. Obviously to use a 3TB drive you need to replace the parity drive first, so all existing data drive partitions would still be smaller than the parity drive partition. Adding a 3TB data drive would get the same backup partition table, so the data partition would be the same size as the parity partition. Of course it would take at least two 3TB drives to do proper testing. Am I missing something?
  9. Agree, but want to point out that LBA addresses have been 48 bits (roughly 144 PB of addressability with 512-byte sectors) since 2003. Unless unRAID was coded to use only 32 bits when doing LBA calculations (assuming it even gets to that level in the drive interface), the current 32-bit limit is an MBR partitioning limit. So we don't ever need true 4K sector support for any purpose other than partition alignment and maybe some performance improvement. I suspect the key to unlocking 3TB drives is support for GPT partitioning. Don't know what file system limits might exist. Things like BIOS support and bootability don't matter to unRAID. I really don't know if connecting a 3TB drive to an MV8, creating a GPT partition and addressing the full 3TB using LBA addresses will work or not. Such things need to be carefully tested, but 3TB drives are still a bit pricey to buy one just to experiment with. Who knows, maybe it works so long as you don't try to boot from it. Don't we all disable INT13 anyway? Would be nice if Supermicro would do the testing and issue a statement regarding compatibility, now that 3TB drives are available in the retail market. I would think that we will eventually see a list of HBAs that have been tested to be compatible with > 2TB drives.
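     The arithmetic behind those two ceilings, for anyone who wants to check it (bash just for illustration):

```shell
#!/bin/bash
# Capacity ceilings implied by LBA address width, assuming 512-byte sectors.
echo $(( 2 ** 32 * 512 ))   # 32-bit LBA (MBR): 2199023255552 bytes, ~2.2 TB
echo $(( 2 ** 48 * 512 ))   # 48-bit LBA (ATA-6): 144115188075855872 bytes, ~144 PB
```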
  10. For the first time since I updated to a version that no longer has the reset option on the main web page, I needed to reset a config. So I had to go look up how to do it. I believe that what I found is cause for concern. In looking at the wiki I read about a URL that can be typed into a browser address line in order to do the reset (deliberately not listing it here). The test system I wanted to reset was using the default Tower name, so I wondered what would happen if I just clicked on the link in the wiki. Sure enough it reset the config; no warning, no "are you sure you want to do this", nothing. It just gave me a blank browser page to look at. Granted the array needs to be stopped first, but even so, if someone using the default server name is having a problem starting their array, an accidental click on a link in the wiki would make things worse for them. Maybe that is a lot of ifs, but I have always been somewhat paranoid when it comes to data protection (my computer security background showing through). I just don't think it is a good thing that clicking a link in the wiki would be able to modify my array under any circumstances. Maybe this particular shortcut should not even exist in unRAID. Resetting an array should not be that easy. Not to mention any web page anywhere could hide the URL behind perfectly friendly text, including links in this forum.
  11. My preference would be to use a spare drive swap approach. Something like this:
      1. Shut down the server.
      2. Replace the drive to convert with a fully tested spare.
      3. Start up and do the rebuild.
      4. Run CRC checks and a verify.
      5. Assuming all is good, remove the jumper from the replaced drive.
      6. Install it in a test system and test it.
      7. If necessary, use any of the documented processes to fix problems.
      8. Now you have another tested spare to use for the next drive to be migrated.
      I am sure there are other variations on the process I outlined that would work just fine too. Obviously you would need to start with the largest drive and work back to the smallest. This process preserves your original files until you are sure they are properly rebuilt on the spare drive.
  12. Thanks for the communication and great work. unRAID is a great product with a fantastic community.
  13. Adding or removing the jumper after a WD20EARS has been used is risky business. In a recent experiment I took a new drive and ran a multipass disk sanitization (with read verify) against it. Then I installed the jumper and put it in a test unRAID system. The result was a large number of hardware IO errors. Something in unRAID (or unMenu) was trying to read the very last sector of the drive, which for some reason was unreadable. SMART even indicated a pending sector reallocation. Started a preclear and continued to get 1,000s of errors throughout the preread. It wasn't until preclear finished its write pass that all was good again. The post-read was normal. It took 33 hours for one pass. That is a long time even for a 2TB drive; again, the post-read ran at normal speed. In the end there was no pending reallocation and no relocated sectors. This could all be a coincidence caused by one marginal sector in exactly the right spot on the drive, but I doubt it. I have one more I think I will install and format before adding the jumper, just to see what happens.
  14. Interesting point. I was looking at it more from the perspective that logically removing the drive would allow me to take it out and set it on a shelf until I am sure I no longer need anything that is on it. If you look at it from the perspective that the drive is part of the array up until the zeroing is finished, then I agree you need to erase as you go. However, doing so completely destroys the integrity of the file system within the partition. Not sure what the effect of that would be, if anything.
  15. Yes, I think so. The XOR for parity works on block 1 of all partition 1s, then block 2, etc. So it makes no difference where the partitions actually start. You can rebuild to a relocated partition without "moving" anything. No need for a file level copy. Right? Cool!
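     A toy illustration of why the offsets don't matter: parity is the XOR of block N of each data partition, wherever those partitions physically start, so a lost block is rebuilt from parity plus the surviving disks. A minimal bash sketch over one block position (the byte values are made up):

```shell
#!/bin/bash
# XOR parity over one block position. The partitions' physical start
# sectors never enter the calculation, only the block index.
d1=0x5A; d2=0xC3; d3=0x0F
parity=$(( d1 ^ d2 ^ d3 ))
rebuilt_d2=$(( parity ^ d1 ^ d3 ))   # simulate losing disk 2
printf '0x%02X\n' "$rebuilt_d2"      # prints 0xC3, the original d2 value
```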
  16. I like the idea of all newly formatted drives using sector 64 alignment. I don't like the idea of anything moving during a rebuild, when an array is most vulnerable. I would rather see a misaligned partition than risk moving things during a rebuild. Simulating zeroing (not actually clearing) for removal would be a nice-to-have but is not strictly required for 4K support. It would make manual migration from 63 to 64 easier. I can make good use of sector 64 alignment right now even if there is no upgrade path from sector 63 alignment. I have no immediate need for support of >2TB drives. No desire to buy them until they are much cheaper. All my recent drive purchases have been WD and Samsung. (I missed those $60 F4s or I would have several more of them.) Just my opinions.
  17. I suppose this could come back to bite me somewhere down the road, but I use a fill-em-and-forget-em approach. These drives have been this way for over six months and have caused me no problems. The free space has not changed. The last number is the free space as displayed by the unRAID main page. Most of these drives have multiple 40+ GB ISOs on them.

      disk1  SAMSUNG_HD154UI_S1XWJDWZ203486  *  1,465,138,552   29,332
      disk2  SAMSUNG_HD154UI_S1XWJDWZ203489  *  1,465,138,552   72,324
      disk3  SAMSUNG_HD154UI_S1XWJ1KZ104867  *  1,465,138,552  106,672
      disk4  SAMSUNG_HD203WI_S1UYJ1KZ402327  *  1,953,514,552  359,452
      disk5  SAMSUNG_HD154UI_S1XWJ1KS912520  *  1,465,138,552  174,916
      disk6  SAMSUNG_HD154UI_S1XWJDWZ203484  *  1,465,138,552   36,108
      disk7  SAMSUNG_HD154UI_S1XWJ1KS912511  *  1,465,138,552   43,128
      disk8  SAMSUNG_HD203WI_S1UYJ1CZ405157  *  1,953,514,552   91,672
  18. What do you think taking one of these apart does to the warranty? This is definitely a good deal if you need an external drive. There have been recent examples of bare 2TB drives for $80 that are shipped direct to your home. After taking sales tax and gas into account, how much are you really saving?
  19. Here is a link to a PDF Hitachi published on the subject. It is 4 pages long. I can't judge which parts are a factor in an unRAID system. http://www.hitachigst.com/tech/techlib.nsf/techdocs/D213A024C090CE9F862577D5002600FC/$file/FinalHiCap_2.2TB_TechBrief.pdf The data sheet for these drives claims 512 byte sectors. It doesn't specify if that is at the interface or on the platters. Nothing in the spec sheet about advanced format. The ATA/ATAPI-6 specification "defines a method to provide a total capacity for a device of 144 petabytes". Seems to suggest there is less incentive to produce native 4K interface drives. It's only about interface performance, not capacity; 4K physical sectors are about the capacity of the drive platters.
  20. As always it is the chicken and egg problem. Do you:
      1. Wait until hardware and software support 3TB drives and then start manufacturing them, or
      2. Wait for 3TB drives to become available and then update hardware and software to be able to use them?
      All my post points out is that another drive manufacturer is introducing 3TB drives. In the case of unRAID, I am guessing that if there were no large capacity drives (> 2.1TB) then there would never be any support for them. On the other hand, the more widely available they become, the more hardware will support them and the more likely unRAID will add support for them. This is a cycle that repeats itself over and over again in the computer technology world. From my perspective, having more manufacturers producing large drives is a good thing. Each announcement is just one step closer to when we will be able to use them in unRAID, however far away that time actually is. I am fairly new to unRAID but wonder if it has been around long enough that early versions supported IDE drives only and SATA was a fancy new thing people couldn't wait to get support for. The road to where we are today is littered with hard drive barriers that have been overcome and left in the dust. Eventually the same will happen for drives > 2.1TB. While we are waiting, we can all have fun talking about it.
  21. "The Deskstar™ 7K3000 is Hitachi’s first hard drive to deliver an enormous three terabytes of storage capacity and 7200 RPM performance in a standard 3.5-inch form factor. The 7K3000 is also the first Hitachi hard drive with a 6Gb/s SATA interface, which along with its 64MB cache buffer..." http://www.hitachigst.com/internal-drives/desktop/deskstar/deskstar-7k3000 There is also a 5K3000 "CoolSpin" version described here: http://www.hitachigst.com/internal-drives/desktop/deskstar/deskstar-5k3000
  22. Just read through this and think there might be one option not explicitly mentioned. Maybe it could be as simple as all new arrays (not new drives in an existing array) aligning partitions at sector 64. Non-advanced-format drives would still work just fine; they don't care what the starting sector is, because every sector is essentially aligned to a physical sector. When adding a drive to an existing array, the partition should be aligned to whatever that array is using, 63 or 64. I should think that software-wise this would be a simple and reliable solution. The idea of converting anything on the fly during a rebuild scares me, as do the issues with an array of mixed partition alignments.
      The downsides I can think of are:
      - You could not move a drive between 63- and 64-aligned arrays without clearing it.
      - You would need to know if you have a 63- or 64-aligned array to set the WD jumper correctly when adding a new drive. Even if you don't get this right the penalty is not huge, as has previously been pointed out.
      On the plus side: If you are building a new array, leave the jumper off WD drives and use any drive you want, including Samsung F4s. Everything will perform optimally. Very simple for new users.
      >2TB drive support is a whole other subject and unrelated to alignment on existing drives.
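      For what it's worth, checking whether a start sector is 4K-aligned is just a modulus: eight 512-byte sectors per 4K physical sector, so the start sector must be divisible by 8. A quick bash sketch showing why 63 loses and 64 wins:

```shell
#!/bin/bash
# A 512-byte start sector is 4K-aligned when it is a multiple of 8
# (8 x 512 = 4096). Sector 63 fails, sector 64 passes.
for start in 63 64; do
  if (( start % 8 == 0 )); then
    echo "sector $start: aligned"
  else
    echo "sector $start: misaligned"
  fi
done
```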
  23. I believe that Norco has updated the 4020 and 4220 to be identical to the 4224 except for the number of trays. Of course the 4020 does not have the SAS backplane, but the non-SAS backplane does appear to be mounted horizontally with thumbscrews, just the same as the backplane in the 4220 and 4224. These "new" versions also take the same 120mm fan plate that goes in the 4224. The new version is also designed to hold two internally mounted 2.5 inch drives instead of the two 3.5 inch drives that I read could go in an older version. You need to be careful what you buy. There seems to be a lot of old stock versions out there, including what Newegg has. Because the product name did not change, it can be a challenge to know which version you are buying. Has anyone had hands-on experience with one of these new versions who can confirm that what I have written is accurate?
  24. http://www.buy.com/prod/kingston-hyperx-4gb-2-x-2gb-ddr3-1066mhz-sdram-non-ecc-240-pin-memory/q/loc/101/212695590.html Two rebates per household if submitted together. From the datasheet: "The SPDs are programmed to JEDEC standard latency DDR3-1333MHz timing of 9-9-9 at 1.5V." http://www.valueram.com/datasheets/KHX1600C9AD3K2_4G.pdf Anyone know if this will work with a C2SEE? I know it is not explicitly listed in the compatibility list.
  25. I have decided to wait for a sale on a Norco 4020. Even with having to buy a power supply, the cost per drive bay will be about the same for a 15 drive array, and lower if I decide to put more than 15 drives in the array. I also think there is a higher probability of being able to buy more of the same thing for about the same price in the future if I want to expand further. I do agree that even at $300 this is a good deal, but I no longer think it is the lowest cost route for a server with hot swap drive bays. If I already owned one and needed to expand, I would certainly grab one or two.