Dephcon

Members
  • Posts: 601
  • Days Won: 1

Everything posted by Dephcon

  1. Be careful with HP RAID cards/HBAs. I've used a number of older DL380s for Ceph development and they suck: you either have to use multiple single-disk RAID0 volumes to simulate an HBA, or rely on the official "HBA" mode on the newer models. Either way, the hpsa drivers/cards have been absolute shite on Ubuntu in my testing.
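     A quick way to confirm what you're actually dealing with (assuming the card shows up as a RAID/Smart Array device; the PCI slot 03:00.0 below is just an example, use whatever lspci reports on your box):
       lspci -nn | grep -i raid       # find the controller and its PCI slot
       lspci -k -s 03:00.0            # look for "Kernel driver in use: hpsa"
       lsmod | grep hpsa              # confirm the hpsa module is loaded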
  2. An issue with 6.2?
     root@vault15:~# /boot/config/plugins/preclear.disk/preclear_disk.sh /dev/sdb
     sfdisk: invalid option -- 'R'
     Usage:
      sfdisk [options] <dev> [[-N] <part>]
      sfdisk [options] <command>
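     My guess (haven't dug into the script itself) is that 6.2 ships the rewritten sfdisk from util-linux 2.26+, which dropped the old -R (re-read partition table) option, so a call like the first line below now fails; blockdev offers the equivalent:
       sfdisk -R /dev/sdb             # old-style re-read, rejected by the newer sfdisk
       blockdev --rereadpt /dev/sdb   # equivalent on current util-linux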
  3. I think many were suckered in because it was beta 18 and assumed it'd be pretty much production-ready. I fully expected to roll back after trying it out, but it's been fine for me.
  4. LT is aware of the issue. They're working on a setting to disable the second parity slot, which will fix the autostart and error-notification issues when only using one parity disk.
  5. That would address the concerns I brought up about the matter.
  6. This is also an issue on a brand-new system: it shows the red X and an alert notification is sent out.
  7. Regarding the dual-parity slot: you should be able to disable it if you don't intend to use it. I started my array with parity 2 unassigned, and it stays visible with a big red X and an alert notification that parity 2 is in an error state.
  8. If you don't mind all your disks spinning when writes are occurring, some users can forego cache-enabled shares, because turbo write can often saturate the network when using fast enough devices. However, the slowest-performing disk in your array becomes your performance bottleneck with this feature enabled. It is also possible that when the array is wide enough (number of disks in the array), performance with this feature may be even LESS than with it turned off. In short, turbo write is a great feature when you have to bulk-copy large amounts of data directly to the array: enable it, do your big bulk copies, and then turn it back off. That's its current recommended use. We have plans to incorporate turbo write in other ways in the future, but it'd be premature to discuss details just yet. Ah, thanks for the clarification.
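     For the "enable it, do your bulk copy, turn it back off" workflow, something like this from the console should do it (assuming the GUI setting maps to the md_write_method tunable and that 1 = reconstruct/turbo write, 0 = read/modify/write; worth double-checking on your release, and the paths are just placeholders):
       mdcmd set md_write_method 1                          # turbo write on
       rsync -a /mnt/somewhere/source/ /mnt/user/share/     # your big bulk copy
       mdcmd set md_write_method 0                          # back to normal writes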
  9. So with turbo write, do we even really need a cache disk for anything other than cache-only shares for Docker/VMs?
  10. I, for one, would not want my server to automatically update. Living on the bleeding edge of code is for wobbly Windows installations. IF it ever comes to that, all I ask is the ability to DISABLE it. I'll let braver souls test the waters first. He's talking about vetted, stable security patches, the same as mainline distros like Ubuntu do with their monthly security updates. ^This. We do automated security patching on our hypervisors at work from the official Ubuntu repo on a 10-business-day cycle (10% per day), pre-prod in the morning and prod in the afternoon. That works well when you have multiple environments, not so much for home. What LT could do is host their own repo that lags a couple of days behind the distro's main repo, so they can do their own automated testing and put the brakes on if there are any regressions. The testing/repo updating could be automated to an extent: pull from distro main, upgrade test benches, validate, and if that passes push to the LT repo mirror; otherwise notify someone to investigate (roughly sketched below).
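     As a sketch only (mirror paths, hostnames and the validate script are all made up here; the test bench is assumed to have its sources.list pointed at the staging mirror):
       #!/bin/bash
       set -e
       rsync -a rsync://archive.ubuntu.com/ubuntu/ /srv/mirror/staging/   # pull from distro main
       ssh testbench01 'apt-get update && apt-get -y upgrade'             # upgrade the test bench
       if ssh testbench01 '/usr/local/bin/validate-services.sh'; then     # validate (hypothetical check)
           rsync -a /srv/mirror/staging/ /srv/mirror/production/          # push to the LT repo mirror
       else
           echo "staging repo failed validation" | mail -s "repo validation failed" admin@example.com
       fi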
  11. http://lime-technology.com/forum/index.php?topic=40937.msg445818#msg445818
  12. Thanks for the update... Have you guys put any more thought into moving unRAID to another distro that would allow automated security patching? It's got to be difficult to stay on top of security patching and development simultaneously.
  13. Dunno tbh, I'm just testing out the reverse proxy... It's got an API, so I assume you can centralize... Looks like you can centralize your indexers; I'll have to figure out how to configure SickRage and CouchPotato. What's the benefit of this?
  14. 2015 Review: https://www.backblaze.com/blog/hard-drive-reliability-q4-2015/
  15. Damn, that's impressive. I wonder if an all-Hitachi NAS array would see similar speeds.
  16. That's a hot piece of kit: 6x SATA3, 2x 10GBase-T, 2x 1GbE, 4 DIMM slots, VGA and IPMI, all in an ITX package. Jesus, if it had two more SATA3 ports I'd buy one just for the hell of it. That box would make an awesome ESXi host too <3
  17. I haven't noticed the higher CPU usage with the new CrashPlan docker that I saw with the desktop one.
  18. If you set its network as bridged then yes, but IMHO CrashPlan works better with host networking. I'll add those port mappings either way. It's always been in bridge mode since I installed it, but I've switched it to host, thanks. This new single-VNC container is awesome.
  19. Also, you need to add a 4280 > 4280 port mapping. Are TCP 4242 and 4243 still required?
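     For reference, this is roughly how the two modes differ (the image name is a placeholder, and which of 4242/4243 are still needed is exactly the open question above):
       # bridge mode: every port has to be mapped explicitly
       docker run -d --name crashplan -p 4242:4242 -p 4243:4243 -p 4280:4280 some/crashplan-image
       # host mode: the container shares the host's network stack, no -p mappings needed
       docker run -d --name crashplan --net=host some/crashplan-image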
  20. You have a Chrome email account? Chrome was caching the password field on the settings page.