Chris Pollard

Everything posted by Chris Pollard

  1. Were you passing any PCI devices through to your Ubuntu VM, or just using it as a traditional headless VM? Check the domain cfg file for the Xen VM and see if builder = hvm. If not, then you have a paravirtualized Ubuntu VM, which will be far more difficult to convert. Here's a thread on the Ubuntu forums about this very topic: http://ubuntuforums.org/showthread.php?t=1668809 If you have HVM-based guests, converting should be a breeze. Just fire up beta 15, create a new VM, point the primary vdisk to your Xen VM vdisk, and done. Ubuntu already has all the virtio drivers and whatnot in their build, so it should just work. That said, make a backup of the vdisk first... just to be safe. OK, looks like mine aren't HVM. That thread just says to rebuild them under HVM... a solution I could have come up with myself.
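For anyone else wanting to do the builder = hvm check described above, here's a minimal sketch. The temp directory and sample cfg contents are made-up stand-ins so it runs anywhere; on a real system your Xen domain configs usually live under /etc/xen/, so you'd point the loop there instead.

```shell
# Classify Xen domain configs as HVM or PV by their "builder" line.
# Sample configs below are illustrative; replace "$dir" with /etc/xen on a real box.
dir=$(mktemp -d)
printf "builder = 'hvm'\n" > "$dir/win7.cfg"                 # HVM-style guest
printf "kernel = '/boot/vmlinuz-xen'\n" > "$dir/ubuntu.cfg"  # PV-style guest

result=""
for cfg in "$dir"/*.cfg; do
  if grep -q "builder.*hvm" "$cfg"; then
    result="$result$(basename "$cfg"): HVM guest (straightforward to convert)\n"
  else
    result="$result$(basename "$cfg"): no builder = hvm, likely paravirtualized\n"
  fi
done
printf "$result"
rm -rf "$dir"
```

PV guests lack the emulated hardware an HVM guest boots against, which is why they need rebuilding rather than a simple vdisk re-point.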
  2. Yeah, I might as well do it now if xen is being dropped.
  3. I bought cheap reverse breakout cables from eBay... they didn't work. (They didn't work forward or reverse, so god knows what they were.) Ended up buying them from Scan instead.
  4. So if I have a Xen Ubuntu VM is there some guide to change it to KVM? Do I need to do this to upgrade to b15?
  5. Yeah 11 MB/s is actually pretty good for powerline in my experience.
  6. This. If you are going to remove Xen, then it would be nice to have good documentation of the process to convert to KVM.
  7. My 4224 chassis are pretty quiet. Both have the 120mm fan wall, which I purchased Noctua fans for... and I found I absolutely needed the 80mm fans by the I/O shield... without those running, things get pretty warm during a parity check, so they had to be replaced... Funnily enough, I used Arctic F8s and didn't have any problems, but my chassis may be slightly different to the Norcos most people use here.
  8. LOL, mine has always been like that too. Just found the setting... Thanks!
  9. Also had this, I changed them and then changed back and everything went back to normal.
  10. Both of the machines need to be in the same subnet too; wireless shouldn't be a problem as long as everything is in the same broadcast domain.
  11. You should be using the standoffs that came with the case. If they don't line up, chances are the case is warped, had the wrong parts included with it... or is just badly made.
  12. DL380s are considerably quieter. I think the 385 was a bespoke version for OEM customers; there is no firmware to make it quieter, unfortunately. When you first turn it on it is VERY VERY loud; after the drivers kick in to control the fans it just becomes loud enough to be annoying (in the next room!). Like I say, I only use it for ESX occasionally. It has 128 GB of RAM, so if I need to run up a lot of machines to test something it is useful. eBay is good for this sort of thing, I find. Lots of different versions, all very cheap.
  13. I have a gen6 DL385 which I use for ESX sometimes.... Don't leave it turned on because it sounds like a jet engine.
  14. My understanding is that you have to pass through the whole controller, so wherever you put your datastore drives, you can't pass those SATA ports through to a VM. Certainly this was the case when I tried ESX... I didn't have fast drives for datastores, so I just used an ancient LSI card for them... gave up with ESX in the end, as I was getting purple screens of death all the time and didn't have the inclination to work out why.
  15. I wouldn't trust the drive with that many pending sectors personally.
  16. Would also just replace the oldest. Maybe the oldest Samsung / Seagate.
  17. Thanks all, sorry to derail the thread somewhat, had a chat about this on IRC too, think I will probably migrate... slowly.
  18. Happy new year! From not so sunny England.
  19. Is it worth the effort to migrate drives from RFS to XFS? How are the recovery options? I'm setting new disks to XFS, but migrating all of my old disks would be very time consuming. What are your reasons for migrating, if you don't mind me asking?
  20. Impressive. If I could get more upstream bandwidth I would have similar problems to you, I think. I wish they would allow limiting of remote streams; if I could cap people at 720p, all my problems would go away. As for your CPU choice... borderline running out of bandwidth before CPU... good rule of thumb here: https://support.plex.tv/hc/en-us/articles/201774043-What-kind-of-CPU-do-I-need-for-my-Server-computer- They reckon 2000 PassMark for each 1080p 10 Mbps stream...
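As a rough worked example of that rule of thumb (the CPU score below is a made-up placeholder; look up your actual chip on cpubenchmark.net):

```shell
# Plex rule of thumb from the article above:
# ~2000 PassMark per simultaneous 1080p/10 Mbps transcode.
cpu_passmark=8000   # hypothetical score, not a real measurement
per_stream=2000
streams=$((cpu_passmark / per_stream))
echo "Roughly $streams simultaneous 1080p transcodes"
```

So a mid-range ~8000-PassMark CPU would handle around four transcodes before the CPU, rather than upstream bandwidth, becomes the bottleneck.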
  21. I also had a similar issue to the one posted in the first log: a stall related to shfs. I didn't bother taking logs; assumed it was something disk related. I'm running beta 10a, however... had to power down as there were a ton of stuck processes I couldn't clear. If I get it again I'll grab some logs.