Everything posted by garycase

  1. I don't recall the exact details -- it's been about 3 years since I first tried v6 on this system. I recall that when I did that the parity times jumped from under 8 hours to over 14 hours, and the transfer speeds were notably slower for both reads and writes. I reverted the system to v5 and left it like that until about a year ago, when I decided to try v6 again to see if anything had changed. It had not changed, but I simply decided to leave it on v6 anyway so all of my servers would be on the same version. For what I use this server for the slower transfer speeds aren't a big deal, and the longer parity check times don't really matter (except for the frustration of knowing they could be MUCH faster). I do recall outlining the issues in an early v6 release thread (probably ~ 6.1) and I also created an "UnRAID on Atom D525" thread … I'll see if I can find some of the posts with the details on the transfer speeds, but it's really not an issue at this point. But as v6 has evolved over the last year, the longer parity checks were at least fairly consistent, until 6.6.3, which added nearly 3 HOURS. I've simply reverted this system back to 6.5.3 and will just leave it there. FWIW, my last few parity checks took 15:14, 15:13, 15:13, 15:10, 17:52 (on v6.6.3), and 15:13 (last night, after reverting to v6.5.3). Until I moved this system to v6, parity checks consistently took ~ 7:55 with v5. landS has an identical server (except different disk drives -- his are 4TB drives, mine are 3TB WD Reds) … and he's also noted the v6 performance issues. He also runs some additional plugins, etc. -- and as he noted a few posts above has finally decided to give up the ghost and abandon his trusty Atom 😊
  2. Agree -- a simple note that makes it clear it's not supported is all that's needed. Lack of support for the latest HTML version is ample justification -- my point above was that a small market share isn't a good reason … IE has more market share than Firefox, Edge, Safari, and Brave.
  3. That explains why it doesn't work -- but is it really necessary to break compatibility with IE (since all previous versions worked fine with it)? I suppose there are other surveys that show this, but the netmarketshare data for the latest month shows IE at 9.94% … ahead of every other browser except Chrome.
  4. I'm tempted to do the same, but this trusty little server is just used for storage and I have 3 other servers with a lot more horsepower for other tasks … and I love the < 20w idle power consumption. (Although the newer generation motherboards/CPUs don't draw all that much more power and have a LOT more horsepower.) So … I simply reverted it back to 6.5.3 and will just leave it there. Running a parity check now to confirm it's back to "only" 15 hrs, and it shows 3 hrs to go after 11:50, so I'm sure it's back to the times I've been used to for v6. [Still frustrates me every time I run a parity check since it used to take 7:55 with v5, but at least it's less than the 18 hours it took with 6.6.3]
  5. Another annoyance … I just finished running a parity check with 6.6.3 and … The new version has added almost 3 HOURS to the parity check times !! This system used to do parity checks in just under 8 hours with v5. When I upgraded it to v6, the time for a parity check jumped to almost 14 hours !! Data transfers were also notably slower. I reverted it to v5 for a long time, but last year decided to just put up with the slower transfer speeds and longer parity checks and upgraded it to v6 so all of my servers were on the same version. This is purely a storage server -- no add-ons, Dockers, or VMs -- so the slower performance wasn't really a big deal. The times never got better as new v6 versions were released -- the original time of 13:50 jumped up to 16:08, then decreased to 15:18 with 6.5, and has been steady in that range throughout 2018 -- my last few checks were all in the 15:10 to 15:14 range. But the upgrade from 6.5.3 to 6.6.3 has now jumped these times to nearly 18 hours (17:52) !! That's 10 HOURS longer than they took with v5 !! I've never understood why v6 was so much slower for these checks … these should be the exact same set of calculations. This IS a low-end CPU … a SuperMicro Atom D525 board … but as I noted v5 did the exact same parity check in under 8 hours. The drives are all 3TB WD Reds, and it's a single parity system. [A rough estimate of what these times work out to in average MB/s is sketched after this list.]
  6. IE11 is indeed effectively dead, but it IS still included with Win10 and there are a lot of folks who still use it … and the simple fact is the GUI worked just fine with it until this release. Agree it's just an "Annoyance" that it doesn't work -- but clearly it could.
  7. Was reading through the thread in more detail, and noted others have had similar display issues … and they are also using IE-11, which is what I generally use. Just tried accessing the GUI with Firefox and all of the problems I noted above disappeared … i.e. everything seems to be working just fine. I also tried it with Edge, which also works fine. So the issue is simply that IE-11 does NOT display the GUI correctly with this new version. A bit frustrating, but not a major problem … I just need to change my shortcuts for my servers to use Firefox or Edge.
  8. Just upgraded one of my servers to 6.6.3 from 6.5.3 and found several issues ... (1) On the Main screen the STOP button doesn't work. Hovering the mouse there has no impact (the button doesn't highlight) -- it works fine on all other buttons ("Check", "History", "Spin Down", "Spin Up", "Clear Statistics", "Reboot", and "Power Down"). (2) Dynamix Cache Directories does not seem to be working -- and the Settings page for this is almost completely unresponsive. Nothing can be selected or changed except the drop-down for Logging and the text entry box for User defined options -- no other box can be selected to enter any values and the other drop-downs don't do anything when you click on the carets. (3) The display header at the top of the page isn't displayed correctly. Haven't done anything with the array -- e.g. parity check, xfer speed checks, etc. -- so I don't know if there are any apparent performance issues (only been running it for a couple hours). But until I resolve the above I won't be updating any of my other servers :-)
  9. Glad the thread helped -- some of these topics can get "buried" and folks with the same issue may not ever see them. An occasional post that highlights that the info is still relevant isn't a bad thing at all.
  10. A note on forcing a UUID value on virtual machines. It's probably obvious, but in addition to allowing you to use the same license on a physical and virtual machine, this also allows you to use the VM on other hardware. This has the BIG advantage of protecting your system from hardware failure without the need to reload anything. If the physical machine you're running a VM on fails, just move the VM to a new system and it will run just fine -- no activation; no programs to reinstall; etc. (See the sketch after this list for one way the UUID can be pinned in a VM definition.)
  11. A few thoughts r.e. backing up a Windows 10 system -- regardless of whether it's bare metal or a VM ...
      => There are really two different things to back up -- (a) the actual OS, and (b) all of your data.
      => The OS doesn't really need backup very often … an image backup once/quarter (or even every 6 months) is probably fine. This is what you need to actually restore the OS should something catastrophic happen … but if it's a few months behind, all you need to do is run a couple cycles of Windows Update to get it back up-to-date. You DO want to update the image if you make significant changes to the programs installed, but other than that it only needs to be updated a few times a year. Given that, it's simple enough to just shut down the system and make an image (for bare metal) or copy the VHD (for a VM). I don't see a problem with an occasional shutdown to do this, although you can also do the image from a "live imager" … such as Acronis, Image for Windows, etc. I've been very happy with Image for Windows on my bare metal systems, but haven't tried it on a VM (too simple to just copy the VHD, so no need). And I DO minimize activity on the system whenever I'm running the image utility (basically don't use it for anything while it's imaging). Regardless of the reliability of modern "live imagers", I'm "old school" when it comes to creating an image -- I like the system to be completely dormant during that process.
      => Your data should be backed up VERY regularly (daily or even more frequently). But this can easily be done from within the running OS using any desired synchronization utility (e.g. SyncBack). It doesn't require shutting down the OS or even the running apps, although depending on the utility used for the backups it may fail to back up open files (i.e. files currently being modified/created) -- but this isn't a big deal, as they'll be backed up the first time the utility runs after the file activity has finished. (A bare-bones sketch of this kind of one-way sync is included after this list.)
      As long as you have an image and a current data backup, recovery is very simple: (a) restore the image; and (b) restore all of your current data [if you're using SyncBack, (b) is simply a matter of running your restore profiles in "Restore" mode immediately after you've restored the image]. Then (c) do all Windows updates to get the OS up-to-date.
  12. If you have 24 drives to pre-clear, it sounds like you're building a new system from scratch. If you add all of these drives to the initial configuration, no clearing is required. If, however, you want to test the drives first, then you can do that on other systems; or you can, as Brit suggested, simply pre-clear 4-6 drives at once until you've got them all done; THEN do the initial configuration of the system (which won't need to clear anything since it's the initial config).
  13. There are some Linux synchronization utilities, but I still prefer SyncBack. I'd simply run that from a Windows VM.
  14. I'd definitely seal the hole … electrical tape or duct tape should work fine for that purpose.
  15. Not sure what you're referring to r.e. "... one 6-pin connector". IF the voltages are correct and you wire them correctly, then yes, you could power one of the cages from that feed. But if there's any doubt, I'd just use a splitter, which you KNOW is providing the right voltages to the right places.
  16. With only a single reallocated sector I wouldn't be at all concerned -- especially if the count doesn't increase. And of course with dual parity you're well protected should the drive suddenly decide to fail. I never replace a drive just because of a small # of reallocated sectors, as long as the count stays static. If it gets higher than I like, but is still a stable number, I'll replace the drive and use the one with the reallocated sectors for storing off-line backups.
  17. Rats!! I just saw this and went to Newegg to see if by any chance it'd still work (18 minutes after midnight PST) ... but it does not. Oh well, I guess that saved me $222, as I was going to buy one "just because" to have it available for my next server upgrade -- not because I actually need it. That is (was) indeed a VERY good price for a great case.
  18. There's little notable difference in the write speeds with single vs. dual parity -- so I'd certainly use dual parity. But with either you'll notice a significant drop from what you're seeing without parity. Just how much depends on the speed of the drives. With older drives you might see 30-40MB/s. With very high density (1.5TB/platter) 7200rpm 8TB or larger drives you'll probably see closer to 60MB/s (even better while you're writing to the outer cylinders). To maximize the write speeds with parity you can enable "Turbo Write" => this is actually called "reconstruct write" in the settings, but is often referred to as "Turbo Write" in discussions about the feature. This results in faster writes because fewer sequential disk operations are needed to write a block of data (in the normal method, the current contents of both the parity drive(s) and the disk being written to have to be read before the write can be done); a toy sketch of the two parity calculations is included after this list. The disadvantage of Turbo Write is that ALL disks need to be spun up to do a write. It's not a bad idea to turn this feature on while you're initially filling your array, but you may not want it on all the time, so that an occasional write doesn't spin up all of your disks.
  19. Very interesting. I don't know why Lian-Li doesn't make that case anymore -- it was a GREAT case, and the cooling with the door-mounted fans was superb. But the D800 is certainly a great alternative -- and can hold a lot more "stuff".
  20. You'll love that case -- I've worked with a LOT of cases, and for a large system build there's nothing that comes close except for another long-discontinued Lian-Li case (PC-80B), which was a superb case for up to 20 drives. But the D600 has even more capacity and clearly is very easy to work in due to the cavernous interior.
  21. Perhaps even a bit of overkill in that regard ... I'd also do a bit of measuring and buy some shorter cables.
  22. I tend to agree that drives can last a VERY long time if they don't have any infant mortality issues. The vast majority of drives I replace aren't due to drive failure -- it's to bump up the capacity or replace them with SSDs. I've got a boxful of spare drives (a few dozen) that all test perfectly, but are simply smaller than I'm ever likely to use.
  23. Looking good => amazing what nearly tripling your airflow (38CFM -> 110CFM = 2.89x) does for keeping things cool.
  24. I've looked at a few E-ATX boards over the years, but have always shied away when actually buying simply because of the limits it places on what cases I could use. But if you're building a dual-CPU system you really need to go with E-ATX simply because of the extra space you need to accommodate the CPUs and the other "stuff" you'll want on the board (plenty of memory slots, expansion slots, etc.). I'm not sure just what your issue is, but I don't think it's because the board is E-ATX.
  25. I would really think your current heatsinks should be okay -- but it IS true that the larger fans you referenced have a LOT more airflow than the ones on your current heatsink -- 110CFM vs 38CFM. Moving that much more air would certainly provide better cooling. But I really think re-mounting the heatsinks might be all you need to do [of course if you switch heatsinks you'll be doing that for sure]. Adding the middle row of fans may also provide a big improvement -- clearly it will add a lot more airflow. As I noted earlier, be sure to check the CFM rating for all of your fans and see just how much air they should be moving -- if you're using low-rpm fans that only move ~40CFM that may be the primary issue.
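
A rough sketch of what the parity-check times quoted in post 5 imply for average throughput, assuming the check scans the full 3 TB of the largest drive (the durations are the ones from the post; everything else is simple arithmetic):

```python
# Rough average-throughput estimate for a parity check over a 3 TB array,
# assuming the check scans the full capacity of the largest drive.
# The durations below are the ones quoted in post 5.

CAPACITY_BYTES = 3 * 10**12  # 3 TB drive (decimal TB, as marketed)

def avg_speed_mb_s(hours, minutes=0):
    """Average MB/s needed to scan 3 TB in the given wall-clock time."""
    seconds = hours * 3600 + minutes * 60
    return CAPACITY_BYTES / seconds / 10**6

print(f"v5     (7:55):  {avg_speed_mb_s(7, 55):.0f} MB/s")
print(f"v6.5   (15:13): {avg_speed_mb_s(15, 13):.0f} MB/s")
print(f"v6.6.3 (17:52): {avg_speed_mb_s(17, 52):.0f} MB/s")
```

That works out to roughly 105 MB/s under v5 versus roughly 55 MB/s on 6.5.x and about 47 MB/s on 6.6.3, which is why the jump is so noticeable on the same hardware.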
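For post 10, forcing a UUID usually means pinning the value in the VM's definition rather than letting the hypervisor generate a fresh one each time. A minimal sketch, assuming a libvirt-style domain XML with a top-level <uuid> element; the file path and UUID below are placeholders, not values from the post:

```python
# Hypothetical sketch: pin a fixed UUID in a libvirt-style domain XML so the
# guest keeps the same machine identity when moved to different hardware.
# The path and the UUID are placeholders.
import xml.etree.ElementTree as ET

DOMAIN_XML = "/path/to/my-vm.xml"                      # placeholder path
FIXED_UUID = "00000000-0000-0000-0000-000000000000"    # placeholder UUID

tree = ET.parse(DOMAIN_XML)
root = tree.getroot()                  # the <domain> element

uuid_elem = root.find("uuid")
if uuid_elem is None:                  # add the element if it's missing
    uuid_elem = ET.SubElement(root, "uuid")
uuid_elem.text = FIXED_UUID

tree.write(DOMAIN_XML)
```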
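The data-backup half of post 11 boils down to a one-way mirror that runs frequently. SyncBack itself is a Windows GUI tool, so purely as an illustration of the idea, here is a minimal Python sketch with placeholder source and destination paths:

```python
# Minimal one-way "mirror" sketch: copy new/changed files from SOURCE to DEST.
# This only illustrates the idea behind a sync utility like SyncBack; the
# paths are placeholders and there is no deletion, logging, or retry handling.
import shutil
from pathlib import Path

SOURCE = Path(r"C:\Users\me\Documents")      # placeholder source
DEST = Path(r"\\tower\backups\Documents")    # placeholder destination share

for src in SOURCE.rglob("*"):
    if not src.is_file():
        continue
    dst = DEST / src.relative_to(SOURCE)
    # Copy when the destination is missing or older than the source.
    if not dst.exists() or dst.stat().st_mtime < src.stat().st_mtime:
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
```

Open files that a run misses simply get picked up the next time the mirror runs, which matches the point made in the post.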
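The write-mode difference described in post 18 comes down to how single (XOR) parity is updated. A toy sketch, using byte-sized "blocks" purely for illustration:

```python
# Single-parity (XOR) illustration of the two write modes described in post 18.
# Values are toy byte-sized "blocks"; a real array works on full sectors/stripes.

def rmw_write(old_data, new_data, old_parity):
    """Normal (read/modify/write) mode: read the target disk and parity first,
    then fold the change into parity: P' = P xor old xor new."""
    return old_parity ^ old_data ^ new_data

def reconstruct_write(stripe):
    """'Turbo Write' (reconstruct write): read the corresponding block from
    every data disk (all spun up) and recompute parity from scratch."""
    parity = 0
    for block in stripe:
        parity ^= block
    return parity

# Toy example: 4 data disks, one parity disk.
stripe = [0x11, 0x22, 0x33, 0x44]
parity = reconstruct_write(stripe)

# Overwrite disk 2's block with 0x55 using the read/modify/write method...
new_parity = rmw_write(stripe[2], 0x55, parity)
stripe[2] = 0x55

# ...and confirm it matches a full recomputation.
assert new_parity == reconstruct_write(stripe)
```

Either way the resulting parity is identical; the trade-off is only in which disks have to be read (and therefore spun up) for each write.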