garycase last won the day on March 8 2017

garycase had the most liked content!

Community Reputation

45 Good

1 Follower

About garycase

  • Rank
    Advanced Member
  • Birthday 12/22/1947



  1. Glad the thread helped -- some of these topics can get "buried" and folks with the same issue may not ever see them. An occasional post that highlights that the info is still relevant isn't a bad thing at all.
  2. A note on forcing a UUID value on virtual machines. It's probably obvious, but in addition to allowing you to use the same license on a physical and virtual machine, this also allows you to use the VM on other hardware. This has the BIG advantage of protecting your system from hardware failure without the need to reload anything. If the physical machine you're running a VM on fails, just move the VM to a new system and it will run just fine -- no reactivation, no programs to reinstall, etc.
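    If you're running the VM under libvirt/QEMU, one common way to pin the UUID is the `<uuid>` element of the domain XML -- libvirt exposes it to the guest as the SMBIOS system UUID, which is what Windows activation keys off. This is just a minimal sketch; the name and UUID value below are placeholders, not anything from my setup:

    ```xml
    <!-- Hypothetical libvirt domain fragment: the <uuid> element fixes the
         SMBIOS UUID the Windows guest sees, so the same activation survives
         a move to different host hardware. The value is a placeholder. -->
    <domain type='kvm'>
      <name>win10</name>
      <uuid>4dea22b3-1d52-d8f3-2516-782e98ab3fa0</uuid>
      ...
    </domain>
    ```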
  3. A few thoughts r.e. backing up a Windows 10 system -- regardless of whether it's bare metal or a VM ...
     => There are really two different things to back up: (a) the actual OS, and (b) all of your data.
     => The OS doesn't really need backup very often … an image backup once a quarter (or even every 6 months) is probably fine. This is what you need to actually restore the OS should something catastrophic happen … but if it's a few months behind, all you need to do is run a couple of cycles of Windows Update to bring it back up-to-date. You DO want to update the image if you make significant changes to the installed programs, but other than that it only needs refreshing a few times a year. Given that, it's simple enough to just shut down the system and make an image (for bare metal) or copy the VHD (for a VM). I don't see a problem with an occasional shutdown to do this, although you can also do the image from a "live imager" … such as Acronis, Image for Windows, etc. I've been very happy with Image for Windows on my bare-metal systems, but haven't tried it on a VM (it's too simple to just copy the VHD, so no need). And I DO minimize activity on the system whenever I'm running the imaging utility (basically, don't use it for anything while it's imaging). Regardless of the reliability of modern "live imagers," I'm "old school" when it comes to creating an image -- I like the system to be completely dormant during that process.
     => Your data should be backed up VERY regularly (daily or even more frequently). But this can easily be done from within the running OS using any desired synchronization utility (e.g. SyncBack). It doesn't require shutting down the OS or even the running apps, although depending on the utility used it may fail to back up open files (i.e. files currently being modified/created) -- but this isn't a big deal, as they'll be backed up the first time the utility runs after the file activity has finished.
As long as you have an image and a current data backup, recovery is very simple: (a) restore the image; (b) restore all of your current data [if you're using SyncBack, this is simply a matter of running your profiles in "Restore" mode immediately after restoring the image]; and (c) run all Windows updates to bring the OS up-to-date.
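SyncBack itself is a Windows GUI tool and does far more than this, but the core of a daily data sync is just "copy anything new or changed into the mirror." A minimal one-way mirror sketch in Python, with hypothetical paths, to show the idea:

```python
import hashlib
import shutil
from pathlib import Path


def file_digest(path: Path) -> str:
    """SHA-256 of a file's contents, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def mirror(src: Path, dst: Path) -> list[Path]:
    """One-way mirror: copy files from src into dst when the destination
    copy is missing or its contents differ. Returns the files (re)copied."""
    copied = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or file_digest(target) != file_digest(f):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied.append(target)
    return copied
```

A real sync tool would compare size/mtime first rather than hashing everything, and would handle deletions and open files -- this is just the shape of the operation.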
  4. garycase

    Is preclear still important?

    If you have 24 drives to pre-clear, it sounds like you're building a new system from scratch. If you add all of these drives to the initial configuration, no clearing is required. If, however, you want to test the drives first, then you can do that on other systems; or you can, as Brit suggested, simply pre-clear 4-6 drives at a time until you've got them all done; THEN do the initial configuration of the system (which won't need to clear anything, since it's the initial config).
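    The core of what a preclear pass buys you as a drive test is "write zeros across the whole surface, then read it all back and confirm it's really zero" (the actual preclear script also writes a signature and compares SMART data before/after, which this doesn't attempt). A stripped-down sketch of that write/verify pass in Python -- the path here would be a device node like /dev/sdX in practice, and any file path works for trying it out:

```python
import os

BLOCK = 1 << 20  # 1 MiB per I/O


def zero_and_verify(path: str, size: int) -> bool:
    """Crude preclear-style pass: overwrite `size` bytes of `path` with
    zeros, flush to disk, then read everything back and confirm each
    byte is zero. Returns False on the first mismatch."""
    zeros = b"\x00" * BLOCK
    with open(path, "r+b") as f:
        written = 0
        while written < size:
            n = min(BLOCK, size - written)
            f.write(zeros[:n])
            written += n
        f.flush()
        os.fsync(f.fileno())  # make sure it actually hit the media
        f.seek(0)
        read = 0
        while read < size:
            chunk = f.read(min(BLOCK, size - read))
            if chunk.strip(b"\x00"):  # any non-zero byte survives strip()
                return False
            read += len(chunk)
    return True
```

Obviously destructive -- every byte in the target is gone afterwards -- which is exactly why you do it before the drive holds data.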
  5. There are some Linux synchronization utilities, but I still prefer SyncBack. I'd simply run that from a Windows VM.
  6. I'd definitely seal the hole … electrical tape or duct tape should work fine for that purpose.
  7. garycase

    The Power Supply Thread

    Not sure what you're referring to r.e. "... one 6-pin connector". IF the voltages are correct and you wire them correctly, then yes, you could power one of the cages from that feed. But if there's any doubt, I'd just use a splitter, which you KNOW is providing the right voltages to the right places.
  8. garycase

    Reallocated sector count

    With only a single reallocated sector I wouldn't be at all concerned -- especially if the count doesn't increase. And of course with dual parity you're well protected should the drive suddenly decide to fail. I never replace a drive just because of a small # of reallocated sectors, as long as the count stays static. If it gets higher than I like, but is still a stable number, I'll replace the drive and use the one with the reallocated sectors for storing off-line backups.
  9. Rats!! I just saw this and went to Newegg to see if by any chance it'd still work (18 minutes after midnight PST) ... but it does not. Oh well, I guess that saved me $222, as I was going to buy one "just because" to have it available for my next server upgrade -- not because I actually need it. That is (was) indeed a VERY good price for a great case.
  10. There's little notable difference in the write speeds with single vs. dual parity -- so I'd certainly use dual parity. But with either you'll notice a significant drop from what you're seeing without parity. Just how much depends on the speed of the drives. With older drives you might see 30-40MB/s. With very high density (1.5TB/platter) 7200rpm 8TB or larger drives you'll probably see closer to 60MB/s (even better while you're writing to the outer cylinders). To maximize the write speeds with parity you can enable "Turbo Write" => this is actually called "reconstruct write" in the settings, but is often referred to as "Turbo Write" in discussions about the feature. This results in faster writes because there are fewer sequential disk operations needed to write a block of data (in the normal method, the current contents of both the parity drive(s) and the disk being written to have to be read before it can do the write). The disadvantage of Turbo Write is that ALL disks need to be spun up to do a write. It's not a bad idea to turn this feature on while you're initially filling your array; but you may not want it on all the time, so an occasional write doesn't spin up all of your disks.
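  For single parity the math behind the two write methods is just XOR, so the trade-off is easy to see in a few lines. This is a simplified sketch (real dual parity uses a Reed-Solomon style second calculation, not shown here) contrasting the normal read-modify-write with reconstruct write, including the disk-operation counts each one needs:

```python
from functools import reduce


def parity(data_blocks: list[int]) -> int:
    """Single parity is the XOR of the same block across all data drives."""
    return reduce(lambda a, b: a ^ b, data_blocks, 0)


def rmw_write(old_data: int, new_data: int, old_parity: int) -> tuple[int, int]:
    """Normal read-modify-write: read old data + old parity, then write
    new data + new parity. Only those two disks spin; 4 disk operations."""
    new_parity = old_parity ^ old_data ^ new_data
    return new_parity, 4


def reconstruct_write(all_new_data: list[int]) -> tuple[int, int]:
    """Turbo/reconstruct write: read every OTHER data disk, then write the
    target disk + parity. Fewer steps on the busy disks, but every disk
    in the array must spin up: (n-1) reads + 2 writes."""
    n = len(all_new_data)
    return parity(all_new_data), (n - 1) + 2
```

Both methods land on the identical parity value; the difference is purely which disks get touched to compute it.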
  11. Very interesting. I don't know why Lian-Li doesn't make that case anymore -- it was a GREAT case, and the cooling with the door-mounted fans was superb. But the D800 is certainly a great alternative -- and can hold a lot more "stuff".
  12. You'll love that case -- I've worked with a LOT of cases, and for a large system build there's nothing that comes close except for another long-discontinued Lian-Li case (PC-80B), which was a superb case for up to 20 drives. But the D600 has even more capacity and clearly is very easy to work in due to the cavernous interior.
  13. Perhaps even a bit of overkill in that regard ... I'd also do a bit of measuring and buy some shorter cables.
  14. I tend to agree that drives can last a VERY long time if they don't have any infant mortality issues. The vast majority of drives I replace aren't due to drive failure -- it's to bump up the capacity or replace them with SSDs. I've got a boxful of spare drives (a few dozen) that all test perfectly, but are simply smaller than I'm ever likely to use.
  15. garycase

    User Share Copy Bug

    The "absolutely paranoid and don't want to take ANY chance on losing data" approach to this is to be CERTAIN that the data you plan to move around is BACKED UP on another system. But as already noted, as long as you understand how to avoid the issue there shouldn't be any problem with your copies. If you have ANY doubt that you're doing it right, copy ONE file first (being certain you have a backup of that file on another system) ... and confirm that all worked well.