SSD

Everything posted by SSD

  1. You might post a diagnostics file that includes the time you did whatever formatting / partitioning you tried. Otherwise you might need to zero the first few megs of the disk (this will remove the existing partition information) and then try again. Hopefully zeroing it in this way would cause unRAID to re-partition the disk when it tries to format it. I'm assuming you have given up trying to recover any data from the drives at this point.
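     For reference, zeroing the start of a disk can be done from the console - a minimal sketch, assuming the disk is /dev/sdX (triple-check the device letter first; this is destructive):
        # Overwrite the first 10 MiB, wiping the partition table and early metadata
        dd if=/dev/zero of=/dev/sdX bs=1M count=10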
  2. So now we'll never know. Too bad. Good luck!
  3. Ron did say earlier that disk7 contained a lot of data. Maybe he was mistaken. What would unRAID do in the scenario you mention? Would it rebuild parity to include the two new disks? Would it assume the partition starting sector? (My understanding is parity only protects the partition, not the entire disk.) The partition structure of disk6 seems wrong for a 4T disk. Not sure Windows would have created such a partition. He did say it had come out of a USB enclosure, but it seems wrong even for that. I was thinking Ron might have put the disk back in a Windows computer and inadvertently done something in Windows to repartition the disks. Are you 100% confident that remounting disks 6 and 7 in the array would not result in any writes to the disks or parity? Assuming both theories are possible, which would be best to try first? The simulated-disk approach would not write if there are no writes to the array, and would positively not write to the physical disk6 or disk7.
  4. Yes, I agree. The most likely theory that fits all the facts ... 1 - the disks were in the array, 2 - the partitions are invalid for an array disk, 3 - parity was nearly perfect [meaning either the disks were always partitioned wrong (seems very unlikely because we've never seen unRAID mispartition a disk) or the partition table got corrupted and unRAID used its own partition info (from super.dat) to guide the parity check. The latter is far more likely IMO] ... is that the partition table got corrupted or accidentally overwritten due to user error. So removing the physical disk assignments and letting unRAID simulate the disks would allow the partitions to mount, because unRAID would place the partition in a simulated disk structure that would contain the original and valid partition table. If this works, the disks can be rebuilt. If not, the whole operation can be undone and no damage would have been done in the trying. But at this point I don't see another alternative that leads to a successful recovery. And this option seems to have a decent, if not high, likelihood of success.
  5. No, no disadvantage. Except maybe it's slightly easier to make a mistake with a new config. But it seems that is the only easy way to do it given Johnnie's update.
  6. Not easily. You'd have to manually reconstruct the partition table. There may be some software to do this, but it could still be tricky. And there could be more than just the partition table impacted. Other early sectors on the disk may also have been trashed, and we'd have no way to know which ones or how to repair or reset them. I still believe that removing the physical disks and letting unRAID simulate them is the best way to diagnose this. Parity will allow unRAID to simulate the partition, and unRAID also has the smarts to simulate the early disk sectors, including the boot sector and partition table. And the procedure is completely reversible if you back up the config folder first. That's what I would do. The more I think about this, the more optimistic I am that this will allow you to see what was on the disks prior to the partition tables getting messed up, and then allow you to repair the physical disks.
  7. @Ron - If it were me, I would unassign the drives from the array as I laid out in my last post. With the backup config directory you could undo the operation. I would at least follow those instructions through this step: - Check to see if the files are present on disk6 and disk7 (which are now being simulated using parities and other disks in the array). If the files are visible, you will be able to recover. And if they are not, you know you have a more serious issue. You could hold off doing the rebuild, as there may be a simpler way to accomplish the equivalent. I've been giving this more thought - I believe that the super.dat file likely contains the disk partitioning information. I expect that is what the parity check is using for the starting sector and size, and that it is ignoring the partition table. That would explain why the partitions were correctly aligned, and why so few sync errors were found.
  8. @Ron - Hopefully Tom will respond, but I think it is likely that if you remove disks 6 and 7 from the array - by unassigning the disks from those two slots and then starting the array - unRAID will simulate those two disks using parity, and you'll be able to see the data on those drives. Once confirmed, you'd be able to stop the array, reassign the two disks, and restart the array. unRAID would then rebuild those two disks based on parity. I posted information about how to do this, but will give more details:
     - Shut down and power down the unRAID server.
     - Create a copy of the entire "config" folder on the unRAID flash from a Windows computer. Call it "config_backup" or something similar. (This can also be done from the console - see the example after this post.)
     - Put the USB stick back into the unRAID server and power on.
     - Stop the array if it auto starts (I'd suggest disabling auto start if you haven't already).
     - While the array is stopped, unassign the disks assigned to slot6 and slot7. DO NOT DO A NEW CONFIG.
     - unRAID will warn you that two disks are missing, but will still offer the option to start the array.
     - Start the array.
     - Check to see if the files are present on disk6 and disk7 (which are now being simulated using parities and other disks in the array).
     - If you see the files you can stop the array, breathe easier, and do the following. (BTW, based on my understanding of unRAID, I think you have an excellent chance of this outcome.)
     - Stop the array.
     - Reassign your disks to slot6 and slot7.
     - Start the array. I've never tried to rebuild two disks - but expect either they will rebuild in parallel, or one will rebuild and then the other will rebuild.
     - When the rebuilds finish, confirm you still see the files (they will be on the physical disks - they are no longer being simulated).
     - You can reboot and make sure all the files are there after the reboot.
     If you do not see the files present on the simulated disks, then we'll need to keep noodling over the issue. I'll provide you with instructions to restore the config backup, but I'm not sure that will be necessary depending on what we learn if the files are still not visible.
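     For reference, the config backup can be made from the unRAID console instead of Windows - a minimal sketch, assuming the flash is mounted at /boot (the usual location):
        # Copy the entire config folder on the flash to a backup copy
        cp -r /boot/config /boot/config_backup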
  9. I would tend to shut down the server (or at least stop the array), and wait to hear something back from Tom. He may be the only one that can answer why there were so few parity errors when the partition tables on two of the drives were off.
  10. @Ron - Parity is computed on the PARTITION, not the ENTIRE DISK. So if one disk's partition starts at sector 2048 and another starts at sector 64, the first parity block of the first disk would begin at physical sector 2048 and the first block of the other would start at physical sector 64. So if a disk's partition were updated to start at a different sector, you'd think parity would be out of sync virtually everywhere. We are only seeing 9 parity errors, very consistent with what you'd see after a hard shutdown. This is good news IMO for your ability to recover your data. The most important question that comes out of this is how unRAID is treating the odd partitioning of disk6 and disk7. Is it assuming that the partition starts at sector 63 or 64 based on the disk size, and sort of ignoring the partition table values? Or is it assuming the partition table is accurate, and hence the first parity block for disk6 is at the 2048 point? (And it must have been that way from the time they were added to the array.) Hoping that Tom ( @limetech ) will weigh in here. My hypothesis is that if we can correct the partition tables for disks 6 and 7, your data will be valid. The question is, what is correct? See below for my interpretation of the parity check log entries.
     Jun 22 20:11:56 Tower kernel: mdcmd (45): check correct
     - This is a correcting check - so parity is updated for every parity mismatch found.
     Jun 22 20:11:56 Tower kernel: md: recovery thread: check P Q ...
     - Dual parity (P and Q).
     Jun 22 20:11:56 Tower kernel: md: using 1536k window, over a total of 5860522532 blocks.
     Jun 22 20:11:56 Tower kernel: md: recovery thread: PQ corrected, sector=128
     - First parity corruption. The sector listed is always the first of 8 sectors (4K bytes), so we don't know if they were off in only a single sector, or in all 8 of the sectors between 128-135. What is interesting is that sectors 0-127 had no corruption, and there are only 9 corrections reported in total. If the starting sector of disk6 should have been 64 (which is what unRAID would have done), and the parity check was instead starting at 2048 due to the partition table, you'd have to believe that this would impact virtually every sector on the disk. Does this mean that the parity check was assuming a starting sector of 64? Or does this mean that the disk has consistently been treated as having its partition start at sector 2048? (@limetech)
     Jun 22 20:11:57 Tower kernel: md: recovery thread: Q corrected, sector=65688
     - Over 64K of accurate parity blocks before this. Notice that only Q parity is corrected; P parity is accurate. I'd say this is the expected type of corruption for a hard shutdown.
     Jun 22 20:11:57 Tower kernel: md: recovery thread: Q corrected, sector=263384
     - Look at the timestamps. This one is very close to the prior one. Again, this is typical of corruption from a hard shutdown, again affecting only Q parity. It is logical that P parity is updated first and Q parity second, so it appears a hard shutdown happened between those two I/Os.
     Jun 22 20:12:38 Tower kernel: md: recovery thread: PQ corrected, sector=10987336
     - This one is 41 seconds later - a typical correction from a hard shutdown.
     Jun 23 00:24:20 Tower kernel: md: recovery thread: PQ corrected, sector=2389180464
     - This one is a little over 4 hours later. Still typical of a hard shutdown.
     Jun 23 04:18:49 Tower kernel: md: recovery thread: PQ corrected, sector=4811915336
     - This one is almost 4 hours later. Again, a lot of parity is accurate. Typical.
     Jun 23 04:18:49 Tower kernel: md: recovery thread: PQ corrected, sector=4811915352
     - Very close to the last one. Probably I/O on the same file. Typical.
     Jun 23 05:50:25 Tower kernel: md: recovery thread: PQ corrected, sector=5878055056
     - Over 1.5 hours later. Typical.
     Jun 23 06:45:16 Tower kernel: md: recovery thread: PQ corrected, sector=6460538896
     - 1 hour later. Typical.
     Jun 23 14:37:24 Tower kernel: md: sync done. time=66327sec
     - Parity check done. Just under 8 hours from the last parity correction. A lot of valid parity.
     Jun 23 14:37:24 Tower kernel: md: recovery thread: completion status: 0
     - Parity check finished successfully.
     Note there is a max number of parity errors that unRAID will log. It has varied by unRAID version, but I think it is in the hundreds. There were only 9 found, so I expect all were logged. These few parity errors are very consistent with corruption from a hard shutdown, and updating parity is almost always the right thing to do, meaning all of the parity was likely corrected appropriately. This is not what I'd expect with misaligned partitions. If we can correct the partition tables for disks 6 and 7, I believe the disks will mount and your data will be valid. The question is, what is correct? Tom, or one of our other very technical users, may be able to advise.
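     For anyone following the sector math: a logged sector number converts to a byte offset as sector × 512 bytes. So the first correction at sector 128 sits at 128 × 512 = 65,536 bytes (64 KiB) into the parity-protected area, and each logged correction covers 8 sectors × 512 = 4,096 bytes (one 4K block).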
  11. I've had this happen also. But I believe that KVM can flawlessly suspend any running VM. I haven't done it a ton of times, but I have done it at least 5 times, and each time it worked perfectly. I confirmed that downloading files resumed when the VM was resumed. Thanks for the pointer to this script. I searched Community Applications and didn't find anything. Here is the link for anyone searching. It seems it will stop, and even force stop, a running VM, but it has no logic to suspend a VM. Might be a simple change.
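     For anyone who wants to try the suspend manually before the script supports it - a minimal sketch using the underlying libvirt commands, assuming a VM named "Win10" (substitute your own VM name):
        virsh suspend Win10   # pause the VM in RAM; guest I/O and downloads freeze
        virsh resume Win10    # resume exactly where it left off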
  12. I believe you could do this without a new config.
     - Unassign parity2.
     - Start / stop the array.
     - Unassign the disk from the disk2 slot.
     - Assign that disk to slot3.
     - Start / stop the array.
     - Unassign the disk from the disk1 slot.
     - Assign that disk to slot2.
     - Start / stop the array.
     - Assign the old parity2 disk to slot1.
     - Start the array (the disk in slot1 will begin zeroing; when done the disk will be empty and usable).
     I'm 90% sure this will work. I know you used to be able to exchange 2 disks in the array, and when the array was restarted, it would record the slot change. But you couldn't exchange 4 disks; instead you'd have to do 2, start/stop the array, then do the other 2, and start the array. BTW, slot exchange would not work with a parity2 disk assigned.
  13. Certainly if the risk exceeds the benefit I would agree. Can you give a problematic use case so I can better understand the risks you see? I think with the hibernation, the risk is very low. Maybe if you are in the middle of doing a ROM update, or in the middle of downloading a file via HTTP or FTP (which would probably time out and not resume when the VM is resumed), you could have issues. But DLs from torrents and nzbget-type tools (I think these are the most common for large file downloads for most users) should resume just fine. And consider that this would enable much more frequent backups than most people do (my last backup was in February; I was taking a fresh backup today for a specific purpose), which would put the user in a much better position if a corruption occurred in the VM image. I'm not clear that a snapshot is what is really needed here. Not the same as a backup.
  14. @Ron (I had created most of this post before @pwm's post, but have included more detail) ** DO NOT TRY TO DO ANYTHING IN YOUR ARRAY RIGHT NOW. READ THIS POST TO THE VERY BOTTOM AND RESPOND WITH ANY QUESTIONS ** The output from the fdisk commands above is not as I would expect it to be. For comparison, here is the output I receive for an 8T data disk in my own array.
     fdisk -l /dev/sdp
     Disk /dev/sdp: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: 84077B29-0642-46AC-A625-5146AC747B9D
     Device Start End Sectors Size Type
     /dev/sdp1 64 15628053134 15628053071 7.3T Linux filesystem
     When unRAID prepares a disk it will always have a starting sector of 64. Earlier versions of unRAID might have used a starting sector of 63, and such disks are still properly recognized - but only for drives 2.2T or smaller. Any disk over that size must have a partition starting at sector 64 to work as an array disk. Here is an example of an older 2T disk formatted as XFS starting on sector 63.
     fdisk -l /dev/sdm
     Disk /dev/sdm: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disklabel type: dos
     Disk identifier: 0x9c4adde7
     Device Boot Start End Sectors Size Id Type
     /dev/sdm1 63 3907029167 3907029105 1.8T 83 Linux
     But if you look at your two outputs ...
     /dev/sdb1 2048 11721043967 11721041920 5.5T Microsoft basic data
     /dev/sdc1 63 976751999 976751937 465.8G 7 HPFS/NTFS/exFAT
     sdb1 is showing a starting sector of 2048. unRAID is not going to recognize that for an array disk. It is not partitioned properly. sdc1 is showing a starting sector of 63, which is possible for an array disk, but only for a disk under 2.2T - and this disk is 4T. If you look carefully you'll also see the partition size showing as 465.8G. unRAID would always use the full disk size. (If you look at the output from my two disks, you'll see that the disk size and partition size are the same.) This looks like a badly partitioned disk for unRAID. The partition types are also showing Microsoft basic data and HPFS/NTFS/exFAT; if you look at my disks they show Linux. This is another sign that the disks are badly partitioned / formatted for unRAID. Here is an excerpt from your syslog indicating that unRAID cannot make sense of these two disks, showing an unsupported partition layout.
     Jun 22 19:31:53 Tower emhttpd: shcmd (67): mount -t reiserfs -o remount,user_xattr,acl,noatime,nodiratime /dev/md5 /mnt/disk5
     Jun 22 19:31:53 Tower emhttpd: reiserfs: resizing /mnt/disk5
     Jun 22 19:31:54 Tower emhttpd: shcmd (68): mkdir -p /mnt/disk6
     Jun 22 19:31:54 Tower emhttpd: /mnt/disk6 mount error: Unsupported partition layout
     Jun 22 19:31:54 Tower emhttpd: shcmd (69): umount /mnt/disk6
     Jun 22 19:31:54 Tower root: umount: /mnt/disk6: not mounted.
     Jun 22 19:31:54 Tower emhttpd: shcmd (69): exit status: 32
     Jun 22 19:31:54 Tower emhttpd: shcmd (70): rmdir /mnt/disk6
     Jun 22 19:31:54 Tower emhttpd: shcmd (71): mkdir -p /mnt/disk7
     Jun 22 19:31:54 Tower emhttpd: /mnt/disk7 mount error: Unsupported partition layout
     Jun 22 19:31:54 Tower emhttpd: shcmd (72): umount /mnt/disk7
     Jun 22 19:31:54 Tower root: umount: /mnt/disk7: not mounted.
     Jun 22 19:31:54 Tower emhttpd: shcmd (72): exit status: 32
     Jun 22 19:31:54 Tower emhttpd: shcmd (73): rmdir /mnt/disk7
     I can't speak to what happened. It looks like the disks were never part of your array. But if they were, and you had data on them, it appears the partition table was corrupted. This is not a typical thing to have happen randomly - it normally implies that the user did something to alter the partition table through a partitioning tool, or by booting into a non-Linux OS and performing partitioning. I suggest waiting for @johnnie.black to give his suggestion, but if this was a corruption that occurred outside of normal array I/O, then the parities (it looks like you have dual parity) could simulate those two disks if you do things exactly correctly. And that simulation might be intact while the physical disks are corrupted. To confirm, you could back up your config folder on your flash disk (do this on a Windows machine while the server is powered down), disconnect (or just unassign) disk6 and disk7, and start the array. Those disks would then be simulated and you could look to see if your data is showing accurately. DO NOT DO ANY WRITES TO THE ARRAY. In fact you might want to do this in safe mode to prevent dockers/plugins/VMs from doing writes to your array. It will also prevent the array from autostarting (but missing disks will also stop the autostart, which is a good reason to physically disconnect them prior to booting). But it looks like the array has already started. In fact a parity check may have kicked off (although I see no sign of that in the syslog, but I do see Plex started, meaning the array started). Depending on the unRAID version, the parity check, if it kicked off, would have been in correcting mode or non-correcting mode. You can't go back, and the best you could do is try removing those 2 disks and see what you get. But I suggest waiting for johnnie and others who may have other ideas, then picking the one you want to do first. My approach (if you can stop writes to the array) is reversible with the backup of the config folder I suggested. You'd just have to rename the config folder to "config.bad" or something, and rename your config folder backup to "config" (do this on a Windows workstation while the server is powered down, or from the console - see the example below). I don't think I have to say this - but your array is in a very sensitive state. Your ability to recover may be impacted by doing the wrong thing. Even a very small mistake may make recovery impossible. Therefore, I urge you not to rush into any action without posting EXACTLY what you plan to do in as much detail as possible, and getting some feedback before doing it! Some things are "destructive", and once you do them you can't undo them. But a simple change can make them "non-destructive", enabling you to undo the changes and get things back into the original state to try something else if the action doesn't work. (For example, my step to back up the config folder before removing disk6/disk7 is what makes my suggestion non-destructive. Otherwise getting back to the starting state would be much more difficult or potentially impossible.) A little knowledge can be a dangerous thing, so you really want others looking over your shoulder before each action is attempted.
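     For reference, the undo step can be done from the unRAID console as well as from Windows - a minimal sketch, assuming the flash is mounted at /boot and the backup was named config_backup:
        # Set aside the suspect config and restore the backup
        mv /boot/config /boot/config.bad
        cp -r /boot/config_backup /boot/config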
  15. Even if there was a need for a second Windows 10 license - if one had a spare license, or was willing to purchase an OEM license (which can be had pretty inexpensively), instructions to go from VM to physical, maintaining all (most) installed apps, might be very helpful. Something like Acronis might help; it has a way to do a backup/restore to dissimilar hardware. And if you could get the VM install restored onto the physical disk, you could follow this video to get it dual booting as a VM also. I had another need that might make for a useful video, although it might need a new plugin (@Squid). What I would like is to be able to schedule a backup of my VM image, e.g., monthly. unRAID would shut down (or maybe just suspend) the VM, compress the image file, store it to a defined path on an array disk or user share, and when done start the VM back up. If the VM were suspended, it should come up exactly as it was when the VM was taken down, and not lose data in open apps. I'm not sure a backup of a suspended VM would be easy to restore - it might need a suspend file to be backed up as well as the .img file for it to restore cleanly in the future. It could be scheduled during overnight hours and run very stealthily - the user should not even realize it ran. (A rough sketch of the idea follows.)
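     A rough sketch of what such a scheduled backup might look like, assuming a VM named "Win10" with its vdisk at /mnt/user/domains/Win10/vdisk1.img and a backup share at /mnt/user/backups (all hypothetical names - adjust to your setup):
        # Suspend the VM to disk; libvirt saves the RAM/device state alongside the domain
        virsh managedsave Win10
        # Compress the image to the backup location, dated
        gzip -c /mnt/user/domains/Win10/vdisk1.img > /mnt/user/backups/Win10-$(date +%F).img.gz
        # Starting the VM restores the saved state automatically
        virsh start Win10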
  16. Actually the speed should not be much slower than a normal copy operation. If you copied to a separate network share, and then had to copy it from there back to the array, I'd expect that to be slower. This is a one-time thing - he can just start it and let it run overnight. He can copy a few files and then attempt to access/play them from their destination. He could also check MD5s (that's what I typically do). Not sure why you say that. You're not moving from a broken disk - you are moving from a simulated disk - which basically means some updates to parity. And very minimal ones, since deleting a file just marks it as deleted. But some file systems (RFS) are very slow to delete for some reason. The copy is preferred because it is faster and he won't be losing access to the files in case he wants to do the MD5 verification. Hope jonp got the notification and asks someone working on the wiki to make the update.
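     For the MD5 check, a minimal sketch from the console, assuming the source is /mnt/disk6/Movies and the copies landed under /mnt/user/Movies (hypothetical paths - adjust to the actual shares):
        # Hash everything at the source
        cd /mnt/disk6/Movies && find . -type f -exec md5sum {} + > /tmp/source.md5
        # Verify the copies against those hashes
        cd /mnt/user/Movies && md5sum -c /tmp/source.md5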
  17. Cache-only shares with SSD cache drives are quite fast, and cache pools add redundancy. But for a large library of media files, unRAID gives very good read performance (gated only by the speed of the disk and network). Redundancy is very economical. Write performance is impacted by the real-time redundancy, but is in the "fast enough" range for most users (normally 45MB/sec - 75MB/sec). And turbo write mode can be turned on to increase write performance when needed. SSD arrays would represent a sizable write speed advantage if that is the primary need. Although I would agree that unRAID is not the "fastest", I would say that there are a variety of features available in unRAID to allow good to excellent performance depending on the requirements and how you spec and configure the array.
  18. I see. Based on my experience with creating HPAs, I found that some controllers allowed HPAs to be created and others did not. Motherboard ports were the best. Once created, the disk could be moved to any controller and it would recognize/respect the HPA. And once an HPA operation is performed, you have to power down the drive/server in order to do another HPA operation on that disk. Below is a thread from 2011 that documents my experiences with the 3T to 2.2T HPA. It gives a lot of the commands and techniques for creating and removing HPAs. (A quick reference is shown after this post as well.)
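     For quick reference, HPAs are typically managed with hdparm - a minimal sketch, assuming the disk is /dev/sdX (be careful: setting an HPA hides capacity from the OS):
        # Show the current visible sector count vs. the native max
        hdparm -N /dev/sdX
        # Example: clip the disk to 4294967295 sectors; the "p" prefix makes it persist across power cycles
        hdparm -N p4294967295 /dev/sdX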
  19. As I recall, the real-time transcoding does not store the entire transcoded file, but only a buffer to support the playback. Might want to confirm, but that's what I remember, so the RAM needed is not outrageous.
  20. I have found that the config files inside the appdata folder for the docker to be invaluable when trying to create my new configurations. I did a fresh install with a new SSD, but found myself referring to the appdata folders on my older SSD to see how I had configured certain features. You might initially rename the existing folders, and only delete once your new configurations are working as expected.
  21. Yes and no. I do realize that unRAID would want to use the entire disk as the array disk, and you'd not be able to have it take up a subset without something like an HPA. And I know HPAs very well - I once purposefully created HPAs to reduce several 3T disks to under 2.2T so I could use them with unRAID until support for larger drives was added (5.0 beta 8, I think). It was a slight PITA, but allowed me to buy the cheaper-per-TB 3T disks and use them at reduced capacity initially, without continued investment in 2T drives. And once support for larger drives was added to unRAID, I removed the HPAs and was able to use the full capacity. Of course I had to copy the data off and then copy the data back on - a PITA, but not so bad. But what I was not aware of is whether SSDs have some sort of internal "underprovisioning" configuration that would make the SSD appear smaller than it truly is to the OS. I remember being asked about underprovisioning when setting up SSDs under Windows, but never saw anything like that in unRAID. Again, if the underprovisioning is an SSD configuration, it's possible unRAID would see the size as smaller, and happily add it to the array at that size. This was my question. Are you certain that SSDs don't have this feature?
  22. Yes - it seems the wiki should be updated to make this more clear. Early on I did a lot of work on the wiki, but most of it has been rewritten or new content added, and I'm not sure who has the responsibility now. @jonp might be able to funnel this request to the right person to do the update. Good you didn't do the new config yet.
     One thing you need to know and understand is the user share copy bug, which I discovered a long while back and which, due to technical reasons, cannot be fixed by LimeTech as I had hoped. Steps have been taken to avoid this issue, but your situation is particularly susceptible to encountering this bug. Here is what appears to be a perfectly valid thing for you to do, but this WILL result in losing a lot of data in a hurry. (Again, do not do this!) Remove the failed disk from the user share configuration. Then copy all the data from the disk share folder (e.g., /mnt/disk4/Movies) to the user share (/mnt/user/Movies).
     The reason this will not work is that excluding disks (meaning explicitly excluded or not included [you should only use included or excluded, not both, BTW]) does NOT truly exclude the disks from the user share except in one very specific use case. If a file copied to a user share overwrites a file, that file will be overwritten on whatever disk it is present, even if that disk is excluded in the user share. And even if the file really is a new file, the split level can force that new file onto a specific disk, even if that disk is excluded. Only if a new file is being copied that does not already exist, and split level is not impacting its placement - only in that situation do the excluded/included disk configurations come into play. Also, when you browse to a user share, it is going to show content from all disks in the array that contain the root-level folder for the user share, regardless of the include/exclude share configuration. The only way to stop these user share behaviors is to globally exclude a disk from the user share feature. Once you do that, the disk will be ignored for anything user share related; you would not be able to have any user shares on that disk.
     So if you copy (or move) a file from the user share directory on a disk share to the user share, the user share will think you are overwriting an existing file. You are basically trying to copy a file over top of itself. Normally the operating system prevents you from doing something like that - you'd get a "can't copy a file to itself" error and the OS would prevent it before it tried. But in this situation, the OS does not realize what the user share is doing under the covers, and it will not prevent the operation. So it will try to copy the file, and immediately clobber the source. With the source gone, the copy fails, and the contents of that file are lost. Say you are copying (or moving) 500 files that take up 1T - you might think you'd somehow realize what was happening and stop it. But in truth unRAID would wipe out those 500 files very, very quickly. Only the first block of each would be attempted, the copy would fail, and then on to the next file.
     A rule of thumb is to always copy disk share to disk share, or user share to user share. Do not mix. But I'll give a tip that would allow you to safely copy from a disk share to a user share (see the example after this post). Go to the disk share, and RENAME the root-level folder. Say it was called "Movies". Change it to "X" or "MovieTemp" or anything that is different from "Movies" and not the name of some other user share. This will instantly separate the files on that disk from the "Movies" user share, and temporarily create a new user share with the name you gave. You also need to make sure that the user share configuration excludes (or does not include) that disk. You can then copy from that disk share to the user share. Or, copy from the user share "X" or "MovieTemp" or whatever you call it, to the "Movies" user share. This would result in all of the movies being copied to one of the currently configured disks in that share. It is not necessary to move; copy is fine. The disk is being simulated, and when you reconfigure your array, any files on that disk will be poofed out of existence. Moving requires deleting the files on the simulated disk - which would unnecessarily waste time, potentially a lot of it. Post back with any questions. (#ssdindex - User share behavior, user share copy bug)
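     A minimal sketch of the rename-then-copy tip, assuming the failed disk is disk4 and the share is "Movies" (hypothetical names - adjust to your layout):
        # Separate the disk's files from the Movies user share
        mv /mnt/disk4/Movies /mnt/disk4/MovieTemp
        # After excluding disk4 from the Movies share config, copy via the temporary user share
        cp -r /mnt/user/MovieTemp/. /mnt/user/Movies/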
  23. Does unRAID give the option to underprovision array drives?
  24. I said that a 12T 7200RPM drive was considerably faster than a 5400RPM 2T drive, and that it is also faster than a 7200RPM 2T drive. I was trying to highlight that even if the rotational speed is the same, the speed of the drive (sequential read/write) still increases with a larger drive. I didn't mix up the order.