Everything posted by garycase

  1. True if you're using the traditional write mechanism; but many folks these days use the "reconstruct write" method (aka "Turbo Write") -- and I'd think those with use cases involving frequent simultaneous writes from multiple processes/users are even more likely to enable it. With turbo write enabled, the speed advantage c3 outlined would indeed result in quicker writes. With the traditional read/modify/write method that advantage is largely gone -- although the "penalty" isn't as bad as it might seem, since much of the delay in seeking to the persistent cache area overlaps with the time a PMR drive would spend waiting for a rotation anyway. But I agree the advantage c3 outlines only really applies when the write method is set to reconstruct write (see the sketch below). In general, I agree it's better to use a PMR unit for parity if your use case has frequent simultaneous writes.
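For anyone curious what the difference actually looks like, here's a minimal single-parity sketch in Python (purely illustrative -- the function names and toy block values are mine, not anything from unRAID's actual md driver):

```python
# Simplified single-parity (XOR) model of the two write methods.
# Illustrative only -- real arrays work on 4K blocks, not small ints.

def rmw_write(old_data, new_data, old_parity):
    """Read/modify/write: read the old data block and the old parity block,
    XOR the old data out of parity and the new data in.
    Only the target and parity drives are touched (2 reads + 2 writes)."""
    return old_parity ^ old_data ^ new_data

def reconstruct_write(new_data, other_disks_data):
    """Reconstruct write ("Turbo Write"): read the corresponding block from
    every OTHER data drive and recompute parity from scratch.
    Every drive must be spinning, but there are no read-before-write steps
    on the target or parity drives."""
    parity = new_data
    for block in other_disks_data:
        parity ^= block
    return parity

# Both methods produce the same parity block:
old, new = 0b1010, 0b0110
others = [0b1111, 0b0001]                    # blocks on the other data drives
parity_before = old ^ others[0] ^ others[1]
assert rmw_write(old, new, parity_before) == reconstruct_write(new, others)
```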
  2. ... and of course, after the writes were completed, the SMR drive would then be busy moving those writes to the shingled bands -- somewhat akin to the "mover" for data cached in UnRAID, but entirely transparent to the user.
  3. The persistent cache is an area of the disk that's not shingled -- so yes, it's spinning media. C3's point -- which I hadn't thought of but is very valid -- is that for random writes the shingled drives cache the data in the persistent cache, and only later do the block rewrites needed to move that data to a shingled band. So if you're doing a lot of random writes, they'll all land in the same area of the disk -- eliminating the additional seeks that would be needed if the random writes were done directly to their target areas. While a 7200rpm drive clearly writes a bit faster than a 5900rpm drive (e.g. the archive units), the added seeks would take FAR more time than the higher rpm saves. So indeed, a large number of simultaneous random writes would actually perform better on the shingled archive drives (rough numbers below).
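A rough back-of-envelope model makes the point (the seek and latency figures below are typical numbers I'm assuming, not measurements of these particular drives):

```python
# Rough per-write cost of small random writes: seek + average rotational latency.
# The figures are assumed "typical" values, not measured from any specific drive.

def time_per_write_ms(avg_seek_ms, rpm):
    rotational_latency_ms = (60_000 / rpm) / 2   # average wait = half a rotation
    return avg_seek_ms + rotational_latency_ms

# 7200rpm PMR drive writing each random block in place (full seek every time):
pmr_in_place = time_per_write_ms(avg_seek_ms=8.5, rpm=7200)     # ~12.7 ms/write

# 5900rpm SMR archive drive landing every write in its persistent cache
# (all writes go to the same area, so essentially no seek between them):
smr_cache = time_per_write_ms(avg_seek_ms=0.5, rpm=5900)        # ~5.6 ms/write

print(f"PMR in place: {pmr_in_place:.1f} ms  |  SMR cache: {smr_cache:.1f} ms")
```

With numbers like these, the slower spindle costs roughly a millisecond per write while skipping the seek saves several -- which is why the archive drives can come out ahead on bursts of random writes, at least until the persistent cache fills.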
  4. I think that'd be a great choice. In fact, you can even use an archive drive for parity if you want -- as long as you don't have a lot of simultaneous writes from different processes/users, you're very unlikely to hit the shingled performance "wall". As I noted several times above, my personal preference for the helium-sealed units does NOT mean I don't like the archives -- in fact I have several of them, and would still buy more, depending on the projected use of the drive. Several folks have all-archive servers and have been very pleased with the performance.
  5. FWIW the cost difference is more like $75. And while that is a premium over the Seagate Archive drive, it's also $90 LESS than the 8TB HGST UltraStar drive. So on an apples-to-apples comparison (8TB helium-sealed PMR drives), it's $90 CHEAPER than the faster version of essentially the same drive. Perhaps I should pay the extra $90 to get the HGST version for its "firm reliability data" (0% failures).
  6. Tough choice with those prices -- they're essentially all the same price. If the application is one that the archive drives are appropriate for, that's probably what I'd buy; otherwise I'd go with the IronWolf.
  7. The agreement WD has with China's Ministry of Commerce (MOFCOM) required separate assembly lines for 2 years (which expires this year), but did not preclude using the same technologies across the brands. As I noted above, if you put an 8TB WD Red next to an 8TB HGST UltraStar He8 they are absolutely identical in every way except for the labeling. They do, I'm sure, have different firmware and clearly run at different speeds; but I suspect there's little if any difference in reliability -- in fact, the Reds may be the more reliable of the two, simply due to the lower stresses of running at a lower rpm. Indeed, I suspect the new 8TB WD Gold drive (a 7200 rpm helium-sealed unit) is very likely identical to the HGST UltraStar -- although it has to be made on a different line since the 2-year hiatus hasn't yet expired. Reading some articles about the WD 8TB's on various IT sites (Storage Review, etc.), you can find the following tidbits re: the WD helium-sealed units:
     "... WD is using their HelioSeal helium-technology to get the higher capacity much like the HGST Ultrastar Helium Drives. ..."
     "... The limited information we have ... indicates that it is nearly a carbon copy of the HGST Ultrastar He8 Series."
     "... WD ... indicated that it is employing technologies across both brands, which includes, but is not limited to, mechanical components, electronics and firmware."
     I very seriously doubt it => I've certainly not read anything to suggest that. I've installed 7 of these drives -- most in May of last year -- and they're all performing perfectly in 24/7 operation. As I already noted, I'm not saying the archive units aren't excellent drives -- I have a few of them as well; but once the helium-sealed Reds came out, that's all I've been buying, and I'm VERY pleased with their performance. The cost difference simply doesn't matter to me => but if the extra $75 or so (figure ~$10/TB) is important to you, then by all means go for the savings.
  8. There's virtually no relationship between the 6TB Reds and the 8TB helium-sealed units, which are effectively just lower-speed (and lower-cost) versions of the 8TB HGSTs [which, as you noted, had a 0% failure rate]. In fact, if you put the 8TB Red next to an 8TB HGST they look absolutely identical except for the names.
  9. Welcome back Paul. Hope the health issue has been fully resolved. My wife has had 6 surgeries in the past 3 years, so I can definitely relate ... and I had a nearly fatal bout with pneumonia about 3 years ago that was pretty scary. But definitely good to see you're "back in the saddle" -- and as Rob noted, I'm sure once you start pedaling, the skills will come back quickly. Definitely agree, however, that it's good you documented things in writing => there have been MANY times I've wished I had done a better job of that.
  10. Agree the archive disks are an excellent choice -- although I've been tending toward 8TB Reds when I need large drives these days. The helium-sealed technology is probably more of a factor in that choice than the PMR recording, although it's nice to have both. However, if I were more price-sensitive I'd certainly stay with the archive units.
  11. While that may have a bit of impact, I assume it's been like that all along, so it still doesn't explain why the checks are taking longer with the newer versions. Until you posted, I had simply assumed it was the higher CPU demands of the v6 releases -- but clearly your system doesn't support that conclusion.
  12. Definitely surprised -- dual 2670's score 18353 on PassMark; my Pentium E6300 scores 1701. So clearly you have PLENTY of "horsepower" -- I wouldn't have anticipated ANY slowdown in parity checks with that setup. I presume you haven't made any changes to the disk controllers, memory, or anything else that might have impacted this.
  13. Interesting. I had assumed that my slower times were due to the relatively slow E6300 -- although candidly it should still be plenty fast enough to compute parity in real time (especially on my single-parity system -- I have two E6300-based setups, one with single and one with dual parity, and both have become notably slower with the newest versions). But I'm surprised a dual Xeon setup would have this issue. What model Xeons?
  14. Even slower, but not by a lot. Took about 12 minutes longer with 6.3.2.
  15. The 8TB Reds are certainly a great choice -- I've used several of them in the past year for various purposes and am VERY impressed with both their performance and their reliability.
  16. No, they are both exactly the same size. All you would need to do is:
      (a) Do a parity check to confirm all is well before you start.
      (b) Swap the parity drive and wait for the rebuild of the new drive (the WD Red).
      (c) Do a parity check to confirm that went well.
      (d) Now add the old 8TB shingled drive to your array.
      If you haven't seen any issues with the shingled drive, it's probably not really necessary to make this switch; but it IS true that if you are ever going to hit the performance "wall" with the shingled drive (i.e. a full persistent cache), it's most likely to happen on the parity drive. The conditions that would make this likely are a lot of simultaneous write activity by different users. If that never happens with your use case, you're probably okay to just leave well enough alone. I agree, however, that the WD Reds are clearly better drives -- both because they're not shingled, and also due to the helium-sealed enclosure, which results in lower power draw and lower temps.
  17. It absolutely should be plug 'n play. But I agree with ashman70 -- it's probably something pretty simple. A defective power supply is possible; a defective CPU is probably unlikely, since you've tried each of them individually and they worked; memory could still be an issue, but you've probably eliminated that by trying a different set of modules and confirming that you have the memory plugged into the correct slots for BOTH CPU's. It's also possible you have a defective motherboard ... but I'd try a different PSU first, preferably a single-rail supply [it doesn't actually have to be a server unit if you happen to have an ATX unit available that has two EPS-12V connectors ... you don't have to mount it in the case to try it].
  18. The keying is different on PCIe and EPS-12V connectors -- there's NO chance of using the wrong ones.
  19. "... It was half off on Newegg; I'm starting to see why." => Well for half price perhaps it only supports one CPU More seriously ... just in case, I'd try swapping the EPS-12V connectors (the 8-pin CPU aux power connectors) => not likely to matter; but if you happen to have a defective one that might at least isolate the cause. Clearly you know about (and I've seen a LOT of on-line comments about) the requirement for a BIOS upgrade to support v4 processors (which you have). You indicated you installed a v3 CPU to do the update [REALLY surprising Asus has the neat "update a BIOS without a CPU or even turning on the system" feature, but ships the board in a state where the required IPMI function for this feature is disabled !!] ==> so my question is do you by chance have TWO identical v3 processors, so you could confirm whether or not the board will work with dual v3 CPU's? Do you have a spare PSU with dual EPS-12v outputs that you could try instead of your Athena? Preferably a single-rail unit, although I really doubt that the load on either rail is an issue (but it IS possible that one of your rails is defective, which may explain this whole problem).
  20. "... I cleared the BIOS, reset to defaults, and then I had to re update BIOS to work with the v4 CPUs, which meant putting in v3 CPU. " ==> I assume this means you have a v3 CPU that you could do this with. Note that this motherboard supports Asus' "Easy BIOS Update" feature, which allows you to do a BIOS update WITHOUT a CPU installed . I've used this feature on a couple of desktop boards -- and it works VERY well.
  21. Interesting that it's working with one CPU. The error code isn't clear -- errors at the DXE phase are typically due to some hardware issue (e.g. memory), but it's surprising that you only get that with 2 CPU's installed. The CSM interrupt could mean some issue with your graphics card, but again it's surprising that you don't see it with only a single CPU. I'd guess that this may indeed be an incompatibility with the G-Skill modules, as there do seem to be quite a few references to DXE errors with these modules. Certainly wouldn't hurt to try a couple of modules of another brand -- I'd use Crucial or Kingston modules.
  22. As saarg just noted, you need to have memory installed in the correct sockets for BOTH CPU's => if you've installed all 4 modules in the slots for CPU #1, then it won't be able to use CPU #2. If that's the case I'm a bit surprised there's not a BIOS message telling you that -- but in any event you need to check that you've got your modules installed correctly so that both CPU's have memory.
  23. It is indeed a low quality PSU that I would generally not recommend, but if it's working well for you then there's no urgent need to replace it. But with a Xeon, an add-in graphics card, and a growing complement of disks, you may indeed want to consider moving to a higher quality unit the next time there's an attractive sale on a high-end Seasonic or Corsair unit. Both Newegg and Amazon have fairly frequent sales on some excellent units.
  24. While I'm continually surprised that folks consider their data important enough to build fault-tolerant servers but not important enough to back up, it's clear that many people are in this situation. Assuming your data is at least "somewhat" important to you, I wouldn't break parity when it's not really necessary. In your case, I'd first upgrade your parity drive to 6TB; then do one of the following, depending on whether or not you have a spare SATA port (a sketch for verifying the copies follows this post):
      I. If you have a spare SATA port ...
      (a) Add a 6TB drive, formatted as XFS.
      (b) Copy all of the data from two of your 3TB drives to the 6TB drive (verify those copies); then format the two 3TB drives as XFS.
      (c) Now, one at a time, replace the two empty 3TB drives (the ones you just formatted as XFS) with your other two 6TB drives -- the rebuilds will take several hours, but when this process is completed you'll have two empty XFS-formatted 6TB drives.
      (d) Copy all of the data from your other Reiser drives to the two 6TB drives you just added to the array; then format those drives as XFS.
      Done. At this point your entire array is XFS, and you have four 6TB drives and a couple of smaller drives in the array. You may want to use one of your 3TB drives to replace the 1TB drive.
      II. If you do NOT have a spare SATA port ...
      (a) Replace your 1TB drive with one of the 6TB drives. Note that this will still be a Reiser disk after the rebuild.
      (b) Copy all of the data from your 1st 3TB drive to the 6TB drive; then reformat that 3TB drive as XFS, and copy the data back to it.
      (c) Repeat step (b) for another 3TB drive. When this is completed you'll have two 3TB drives with their data intact and in XFS format.
      (d) Now, one at a time, replace the two 3TB drives that are in XFS format with your other two 6TB drives. This will take a long time; but when it's done you'll have 3TB of free space on each of those drives. Copy all of the data from your Reiser 6TB drive (the one that used to be a 1TB drive, so it will only have 1TB of data on it) to one of these new drives; then reformat the Reiser 6TB drive to XFS -- it will now be an empty 6TB XFS drive.
      (e) Now copy all of the data from your remaining Reiser disks to the free space on your 6TB XFS disks -- and when done, just reformat the remaining Reiser disks to XFS.
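For the "verify those copies" steps, a checksum comparison before you reformat the source disk is cheap insurance. Here's a minimal Python sketch of the idea -- the /mnt/diskN paths are hypothetical examples, and a tool like rsync with checksumming would do the same job:

```python
# Compare every file on the source disk against its copy on the destination
# by hashing both sides. The disk paths below are hypothetical examples.
import hashlib
import os

def file_hash(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_copy(src_root, dst_root):
    """Return the list of source files that are missing or differ on the destination."""
    problems = []
    for dirpath, _, files in os.walk(src_root):
        for name in files:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_root, os.path.relpath(src, src_root))
            if not os.path.exists(dst) or file_hash(src) != file_hash(dst):
                problems.append(src)
    return problems

mismatches = verify_copy("/mnt/disk2", "/mnt/disk5")   # hypothetical source/destination
print("All copies verified" if not mismatches else mismatches)
```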
  25. No New Config required -- but if you DO want to do it, you can still maintain parity, since this is a single-parity system. Since parity will be valid when all is done, you can do a New Config (which will let you re-assign all of the disk numbers to reflect the content they originally had) and check the "Parity is already valid" box, since the only thing you're doing is re-ordering the disks. Note that you can't do this with dual parity, since the order of the disks matters in that case (a toy illustration of why follows below).
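A quick toy illustration of why disk order matters for dual parity but not single parity -- note the "Q" formula here is a deliberately simplified stand-in, not unRAID's actual Reed-Solomon arithmetic:

```python
# Single parity (P) is a plain XOR across the data disks, so shuffling the
# disk assignments doesn't change it. A second parity term weights each block
# by its slot position, so reordering the disks DOES change it.
# The Q below is a simplified stand-in, NOT unRAID's real GF(2^8) math.

disks           = [0x1A, 0x2B, 0x3C, 0x4D]
disks_reordered = [0x4D, 0x1A, 0x3C, 0x2B]

def parity_p(blocks):
    p = 0
    for b in blocks:
        p ^= b
    return p

def parity_q(blocks):
    q = 0
    for slot, b in enumerate(blocks):
        q ^= (b << slot) & 0xFF      # weight each block by its slot number
    return q

assert parity_p(disks) == parity_p(disks_reordered)   # P survives reordering
assert parity_q(disks) != parity_q(disks_reordered)   # Q does not
```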