garycase

Moderators

Everything posted by garycase

  1. It would indeed save SOME of the reformatting, but assuming you're not replacing ALL of your drives with the two new 4TB drives, you'd still have to reformat any drives that you were planning to leave in the array after you moved the data off of them. But if you have the ports to do it, it would save the initial rebuild of one drive that would later have to be reformatted.
  2. Note these are two different processes. To replace a smaller drive with a 4TB drive is straightforward -- but the resulting 4TB drive will have the SAME format as the drive it replaced (e.g. RFS).

     I'd replace ONE of your smaller drives with a 4TB drive; then move as much data as possible to that drive from other smaller drives (emptying some of those drives). Then I'd do a parity check to be sure all is good, and then do a New Config removing all of the empty drives and adding the other 4TB drive. After the parity sync for the new config completes, do another parity check to confirm all went well.

     Then change the format of the new (and EMPTY) 4TB drive to XFS; copy all of the data from the 1st 4TB drive to the XFS drive (verifying that it is a good copy); and then you can format the 1st drive to XFS (this will delete all the data on it -- but you just copied it to your other drive, so it's okay). Now you'll have an empty 4TB drive in XFS format, so you can move all the data from a smaller RFS drive to it; then reformat the RFS drive to XFS; and repeat that process until you've got all XFS drives.

     VERY IMPORTANT NOTE: Be CERTAIN you understand the "user share copy bug" and do NOT copy in a way that will cause that issue -- if you do, you'll lose ALL of the data associated with that copy.
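     For the disk-to-disk copies, here's a minimal sketch of the kind of command I'd use (assuming the RFS source is disk1 and the empty XFS drive is disk2 -- adjust the disk numbers for your system). The key to avoiding the user share copy bug is to copy disk share to disk share and NEVER mix /mnt/user paths with /mnt/diskX paths for the same files:

          rsync -avX --progress /mnt/disk1/ /mnt/disk2/
          # verify before reformatting the source -- a dry run that reports any differences:
          rsync -avXcn /mnt/disk1/ /mnt/disk2/

     Only after the verification pass comes back clean would I reformat disk1.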
  3. FWIW, if you're looking for a good non-shingled parity drive, the 10TB HGST units are on sale at Newegg today for $299.95 (I just ordered one for my HTPC)
  4. Note also that even if you move the drive to the array, it's still possible to fill the cache if you're doing a lot of "moving around" that impacts the same disk. MUCH less likely you'd see a problem than with a parity drive, but still possible. However, as SSD noted above, it's very rare to see this issue with typical UnRAID usage (even as a parity drive). I think you'll be just fine if you use a non-shingled drive as parity and relegate your shingled units to data drives.
  5. As Johnnie noted, writing large files isn't an issue. The drive recognizes that a full shingled block is involved in the write, so it simply writes directly to the shingled area and the cache is never even involved. The cache is used when a write doesn't fill a full block and there will have to be reads/rewrites of the shingled area to actually write the data. If you're writing a LOT of random data, the cache can in fact fill up ... and that's when the performance comes to a screeching halt. You said it happens when you're "moving large files around" => Note that this results in both writes to the destination location and deletes from the source location. If you're doing several of these at once, this can cause small segments of each movie to be written, which "look" like small random writes, and thus the cache gets involved. It will also cause other writes on the parity drive as parity is updated for the file deletions -- and these are in fact small writes, since it's just directory info being updated. I suspect that's how you're managing to "hit the wall" of performance due to the cache being filled.
  6. As you likely know, the "really bad" slowdowns aren't because the drive has performance issues -- it's because the multiple random writes you're doing have hit the "wall" where the persistent cache on the shingled drive is full, and it has to come to a screeching halt while it moves data out of the cache. ANY non-shingled drive will eliminate this issue.
  7. As Johnnie noted, as long as you maintain all of the same slots for the other disks, it will still be valid. This does, however, add another element of risk to using this procedure instead of simply doing a New Config and letting parity rebuild -- which is still how I remove disks. [A very rare process in my case]
  8. Just out of curiosity, why did you sell them? Getting larger drives? Having issues? Moving to PMR tech drives? etc.
  9. Basically the only times my servers are ever shut down or rebooted are (a) extended power outages that last longer than the 10 minutes I have the UPS control software set for (rare); (b) software updates that require a reboot (the most common reason); or (c) hardware changes (adding/replacing a disk drive -- also fairly rare). It would be easy to get a year or longer of runtime (been there/done that) ... but it's not something I focus on. Basically I reboot whenever there's a new stable release (I have a test server for the pre-releases).
  10. I had a v5 server with over a year of uptime ... the only reason that streak ended was a power failure that exceeded the UPS timer I had set, so the system shut down. I agree, however, that with the much more frequent updates (and much simpler update process) with v6, you really don't want to go over a year without updating, as many of the updates are designed to improve security.
  11. Agree with johnnie => the disks are VERY different. The most perplexing thing about the performance of the DM004 is that it's so much slower than the previous version even though they've significantly improved the areal density of the platters. This almost certainly means they're simply running them at a much lower rpm ... which is also supported by the very low power consumption figures.
  12. Agree -- I was just noting that the performance is not consistent with the specs noted on Amazon & Newegg's listings for that model. I suspect the model # for the drives in the external cases is, for some reason, not correct (or perhaps there are both a "Pro" and a non-Pro version with the same #).
  13. Very interesting. The specs also show a max sustained transfer rate of 190MB/s -- the same as the archive drives. That doesn't make sense with the higher-density platters unless they are indeed spinning appreciably slower. But the drive is shown as a 7200rpm drive (at least in Amazon's listing for it): https://www.amazon.com/Seagate-BarraCuda-3-5-Inch-Internal-ST8000DM004/dp/B07211QYRC/ref=sr_1_2?s=electronics&ie=UTF8&qid=1505338436&sr=1-2&keywords=ST8000DM004
  14. It is disturbing that Seagate dropped the Workload Rate Limit so much on the latest drives. I'm not sure that's a "real" change as much as simply a lower spec for drives slated for the consumer market. Seagate rates their Enterprise class drives with a WRL of 550TB/yr, and their "near line" drives at 180TB/yr ... I suspect the 55TB figure is simply because this is a consumer-oriented drive -- perhaps more of a marketing difference than a real technical concern. Although they don't differentiate reads from writes in the computation of the WRL, I also suspect that writes are FAR more important in terms of actual degradation of the performance (that's certainly true for SSDs). And of course a parity check is virtually all reads. Bottom line: I doubt the WRL figure is really anything to be concerned about.

     It is, however, very interesting that the speeds aren't better on the v4 drives, especially since that model # is supposedly a 7200rpm drive. I can't find a link that confirms it's using 2TB platters, however ... Johnnie: Are you certain about the platter count? I wasn't aware anyone was shipping anything larger than 1.5TB platters on PMR drives. (WD ships 1.5TB platters on their 12TB helium units, which are PMR; and manages to stretch that to 1.75TB/platter on the 14TB shingled version of that drive)
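     Coming back to the 55TB/yr figure for a moment -- a rough back-of-the-envelope calculation (assuming monthly parity checks and an 8TB drive; adjust for your own schedule and sizes):

          8TB read per parity check x 12 checks/yr = 96TB/yr of reads

     ... which already exceeds the 55TB/yr rating before a single byte of "real" reads or writes is counted. Another reason I read that number as a marketing spec rather than a hard technical limit.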
  15. Glad you were able to recover most of your files. Now is a good time to set up a backup strategy.
  16. 20TB sounds like a lot to back up, but not when you consider the size of modern drives AND the impact of losing that much data. There are a variety of ways to do it -- a 2nd UnRAID server (a good idea); a bunch of offline drives that you update periodically; or even a cloud backup if you have high enough internet bandwidth. I have both a backup server (that my other 2 servers back up to) AND a set of offline disks -- most of them smaller 1-3TB disks that I've repurposed to backups as I upgraded to larger disks -- that I keep in a waterproof, fireproof, data-rated safe.
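     If you go the offline-disk route, the mechanics are simple. A minimal sketch of how I'd refresh one of those disks (assuming it's mounted at /mnt/disks/backup1 -- e.g. via the Unassigned Devices plugin -- and the share being backed up is "Movies"; adjust the names for your setup):

          rsync -avX --delete /mnt/user/Movies/ /mnt/disks/backup1/Movies/

     The --delete keeps the backup an exact mirror of the share; leave it off if you'd rather keep files on the backup disk that you've since removed from the array.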
  17. Reiserfsck can do some amazing recoveries -- hopefully it will help here. It should, as long as you didn't do ANYTHING on the disks except format them ... but as Johnnie noted it can take a LONG time. Just have patience and hopefully you'll recover a significant amount of your data. However, as CHBMB noted above ... once you recover your data, keep this in mind and be sure you have backup copies of anything you would be upset about losing. I always tell folks they should assume their system is going to suffer a catastrophic crash at midnight; and if there's anything they'd be upset about losing, be sure it's backed up before then.
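     For reference, the general shape of the commands (a sketch only -- run them from the console with the array started in Maintenance mode, substitute the md device for the disk you're checking, and ask here if anything it reports is unclear):

          reiserfsck --check /dev/md1
          # only if --check recommends it -- this is the long, deep scan:
          reiserfsck --rebuild-tree --scan-whole-partition /dev/md1

     The --rebuild-tree pass rewrites the filesystem tree, so be sure you understand what --check reported before letting it run.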
  18. Sounds like there's enough space on each of the machines for all of the data -- apparently her husband had a good backup strategy. So ... I'd suggest that after you create an UnRAID machine and copy all of the data from the other server to it (with verification to be sure they're all good copies), you then turn the 2nd machine into another UnRAID system for backups ... and copy all of the data back to it. You could set it up to run the backups automatically, or it could simply be kept OFF most of the time and just turned on periodically to sync the backups [it sounds like there aren't likely to be a lot of changes on these systems]. I'd use dual parity on the main machine, and probably just single parity on the backup.
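     If you do want the backups to run automatically, a minimal sketch of the kind of scheduled job I'd use on the backup server (assuming the main server is reachable as "tower", the share is "Media", and SSH keys are set up between the two boxes -- all of those names are just placeholders):

          # nightly pull at 3 AM, e.g. from a cron entry or the User Scripts plugin
          0 3 * * * rsync -avX --delete root@tower:/mnt/user/Media/ /mnt/user/Media/

     Or skip the scheduling entirely and just run the rsync by hand whenever you power the backup server on.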
  19. It would be easier to assess this situation with a detailed description of the specific hardware in the two systems -- motherboard, CPU, any add-in controller cards, and the exact complement of disk drives in the machines. Clearly this takes a bit of disassembly -- be VERY cautious that you don't change or disconnect anything, as you don't want to mess up the currently running server, and you don't want to make things worse in the 2nd (non-functioning) server. This info might suggest a method of recovery for the old server. One thing you might want to consider -- 24TB can be backed up on just 3 modern 8TB drives ... e.g. Best Buy currently has WD external 8TB units for $169. Before doing ANYTHING -- especially on the running server -- you might want to copy all of its data to backup drives. You could also, instead of using the backup drives in their cases, simply build a small UnRAID server with four 8TB drives with single parity, and then you'd have 24TB of protected space to back up all the data to.
  20. I presume you tried another power supply ... "just in case". The "no video" makes this unlikely -- but are you sure it's not just hanging due to a switch in the boot device? If it's attempting to boot from a hard drive instead of the USB flash drive, it will generally hang with a blank screen. This can happen after a power failure ... especially if the motherboard battery has failed. But you should still be able to boot to the BIOS, so assuming you have a keyboard and display connected, the "no video" comment likely excludes this. Just wanted to be sure you'd considered it.
  21. Actually, to add 4TB you don't need to buy 3 drives. If you buy 2 of the 8TB drives and replace your parity drives with them, you'll have two 4TB drives (the old parity drives) available for other uses. If you only have one free slot you can only add one of them to the array -- but that will give you 4TB of additional space. If you have 2 free slots you could add both of the old parity drives and gain 8TB. Another thing you could do with the old drives is replace any smaller drives you might have -- which would also gain a bit more space.
  22. Basically agree, but as I noted earlier it's more complex if you consider the non-NAS functions that are now common in UnRAID. Dockers and/or VM's that use the to-be-removed disk would have to be either shut down during the process or somehow notified to not use that disk. Excluding those, it is indeed a fairly simple function => step (1) is already available in UnBalance, which could perhaps be invoked to empty the disk (and STOP the process with a "NOT ENOUGH SPACE" error if there isn't sufficient space to do it); step (2) is indeed very simple (albeit LONG) -- but would also require the system be set to NOT do any further writes to the disk (this would perhaps mitigate the Dockers/VM issue as far as the removal goes ... but could generate questions as to "why my Docker isn't working anymore"); and step (3) should be virtually instantaneous (just modifying the config) ... although as bjp999 noted it may be tricky with the array online.
  23. I understand what you're saying ... I just don't think it's even close to a common function. WHY would you want to remove a disk that had data on it ?? Replace it? -- Yes. But remove it? Folks who manipulate their data -- moving it around to different disks, restructuring their shares, etc. -- aren't likely in the "naïve appliance user" category. And for them to simply move all the data off a disk they want to remove from the array isn't a daunting chore at all. Personally, if I DID want to remove a disk, I'd simply remove it and do a New Config. (Obviously I'd confirm all was well before doing that.) I simply don't think a "Remove a disk" function is needed -- but I agree it would be a nice function for those cases where somebody wants it. Zeroing a drive with DD and then doing a New Config with the "parity is already valid" box checked is definitely more complex than a simple "Remove an empty drive" function. I just don't think that "emptying the drive" needs to be a GUI function => in fact, there's already a plugin that will do this [ UnBalance ].
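     For anyone who does want to go the DD route, a minimal sketch of the zeroing step (assuming the drive to be removed is disk3 -- double-check the number, because this erases everything on it):

          dd if=/dev/zero of=/dev/md3 bs=1M status=progress

     Writing through the /dev/md3 device (rather than the raw /dev/sdX device) is what keeps parity updated as the zeros are written; once it finishes, a New Config without that drive, with the "parity is already valid" box checked, removes it without a parity rebuild.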
  24. A "Remove a Drive" feature is conceptually very simple for the NAS, but is complicated by active Dockers and/or VM's that might be using the drive. Basically, removing an empty drive without impacting parity is very easy -- just zero the drive, then remove that drive from the config ... this could be easily implemented; and would indeed be nice if it was a built-in function as it would eliminate the need to use DD and the need for a new config (since UnRAID could simply remove the drive from the current config). The next step -- removing a non-empty drive -- is more complex. It would require an initial step of moving all of the current data on the drive to another location in the array, while also flagging the drive so no further writes are allowed to it. This could also impact the functionality of any Dockers or VM's that require that drive. Personally I think the simple "Remove a Drive" that's empty is all that's needed -- it just needs to start with a BIG WARNING that the drive to be removed must be completely empty or you will LOSE ALL OF THE DATA on the drive -- and then the user can simply abort that and copy any data from the drive before using the function. The system could even offer a "Remove a drive and rebuild parity" feature, which would NOT zero the drive before removing it and would simply update the config and rebuild parity [Thus not requiring the user to do a "New Config"]