Garani

Everything posted by Garani

  1. The C2750 is outta my budget. I would have to get signed approval from my boss, and it would cost me dearly. My boss is my wife. Joke aside, I am very, very tempted by the G3258. I know that I would use that power one day.
  2. Indeed, the G3258 has almost double the CPU power of the Kabini, and costs just as much. Still, I have to work with trade-offs: the Kabini has 4 cores, meaning that I can have 4 real concurrent tasks running, while the Pentium has 2 cores and no Hyper-Threading. So the Pentium will take half as long for a single task, but you have to wait for cores to free up before processing a new thread. In the end it should balance out, but generally a load of 6 on a 4-core CPU is better than a load of 3 on a 2-core CPU, even though my system sits pretty much at idle most of the time. Power consumption wise the Kabini seems to win hands down. Guru3D (http://www.guru3d.com/articles-pages/pentium-20th-anniversary-series-g3258-processor-review,6.html) tested a basic system without a discrete GPU and measured about 42 watts at idle for the Pentium system, which is significantly higher than my target, while the 5350, tested by Guru3D as well, draws 22 watts at idle (http://www.guru3d.com/articles-pages/amd-athlon-5350-apu-and-am1-platform-review,6.html). So yeah, it is about trade-offs: the G3258 sounds really good from a horsepower point of view, but the 5350 is half of everything: half the horsepower, half the consumption. Big decisions to be made (and I still have a couple of weeks to decide), and I may go the Intel way anyhow.
  3. Well, with the move to unRAID 6.0 it appears that I have to review my home network. I am currently using a Dell T105 with an AMD Opteron 1212 as a firewall/mail gateway/VPN gateway and low-quality transcoding device (but lately almost never used anymore). Then I have an HP Microserver Gen7 N54L with a range of HDs acting as my unRAID server. Those 2 servers, plus a switch, ADSL router and WiMAX modem, are sucking around 160 watts. So I told myself that I really need to cut the power consumption. I estimate that my current annual bill includes 180 euros just for those servers. If I could bring things down to 30 watts I would be looking at 40 euros per year. That would be a major difference! So, after looking around, I have set my mind on this setup:
     CPU: AMD 5350 2.05GHz Quad-Core Processor (€56.02 @ Amazon Italia)
     Motherboard: MSI AM1M Micro ATX AM1 Motherboard (€37.91 @ Amazon Italia)
     Memory: Kingston Fury Series 8GB (2 x 4GB) DDR3-1333 Memory (Purchased)
     Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive (Purchased)
     Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive (Purchased)
     Storage: Western Digital Red 3TB 3.5" 5400RPM Internal Hard Drive (Purchased)
     Storage: Seagate Barracuda 160GB 3.5" 7200RPM Internal Hard Drive (Purchased)
     Case: Cooler Master N300 ATX Mid Tower Case (€39.99 @ Amazon Italia)
     Power Supply: XFX ProSeries 450W 80+ Bronze Certified ATX Power Supply (Purchased)
     Wired Network Adapter: Intel E1G42ET 10/100/1000 Mbps PCI-Express x4 Network Adapter (Purchased)
     Other: Syba SATA 3.0 x4 PCIe x1 controller (€43.00 @ Amazon Italia)
     The HDs are those of the old unRAID system, and I would add another one of the 2 x 1TB drives that I am using on the Dell, or maybe both. The motherboard has only 2 SATA ports, but with the Syba controller I would add another 4. That would take me to 6 SATA ports, comfortably supporting all the HDs that I currently have. The power supply is another thing that I have lying around, at 450 watts 80+, so it would be able to support all that I need. The memory as well would be recovered from the old N54L, so I'll still have 8GB of RAM available. I'll also recover an old Intel dual gigabit card. Now, to have everything working I would need at least 2 PCIe slots (an x1 and an x4). This means that I won't be able to use a Mini-ITX motherboard: so either an MSI or a Gigabyte board. They are both excellent makers, and I really don't know which one to choose; they cost pretty much the same, and the major difference is supported RAM: the Gigabyte can go up to 32GB. I'll need a new case too, and I read good things about the N300 by Cooler Master.
     So, what do I want to do with this new setup? I want to run unRAID 6 with a couple of dockerized apps (the usual stuff) and a VM for the firewall server with the mail server in there. Or I may dockerize the mail server myself, but I would have to look into it first. And then I may get back into transcoding, maybe with dockerized Plex, given that I would move from a CPU Mark of 983 (Opteron 1212) and 906 (Turion N40L) to 2602 (AMD Athlon 5350 quad core). So, any advice before I get to Amazon to order all the goods? Maybe I won't hit the 30 watts, but even 50 watts would be quite an accomplishment for me!
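     Just to sanity-check my own figures, here is the back-of-the-envelope math, assuming 24/7 uptime and the roughly 0.13 EUR/kWh tariff implied by my "160 watts = 180 euros/year" estimate (your rate will differ):

          # Yearly energy and cost at an assumed tariff of 0.13 EUR/kWh.
          TARIFF=0.13
          for WATTS in 160 50 30; do
              awk -v w="$WATTS" -v t="$TARIFF" 'BEGIN {
                  kwh = w / 1000 * 24 * 365        # energy used in one year
                  printf "%3d W -> %6.1f kWh/year -> %6.2f EUR/year\n", w, kwh, kwh * t
              }'
          done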
  4. Well, I got to the bottom of things. I tried to use the plugin this time to create a guest, and chose Q35. Now the guest is running smoothly. I don't know if it is the Q35 chipset or something else making the difference, but finally I have something usable and in line with what I used to have a few months ago. I would say that this issue is solved.
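     For anyone curious about checking this on their own guest, something along these lines should show which machine type the domain ended up with (just a rough sketch; the qemu binary name is whatever your build ships, so treat that line as an assumption):

          # A Q35 guest reports something like machine='pc-q35-...' in the <os> element.
          virsh dumpxml {vmname} | grep "machine="

          # List the machine types your qemu build knows about (binary name may differ).
          qemu-system-x86_64 -machine help | grep -i q35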
  5. I tried:
     - Fedora 21: doesn't even get to the graphical screen.
     - CentOS 7: it is so slow that you can't install the packages from minimal.
     - IPFire: does install, but it is so slow that it simply doesn't work.
     - Untangle 10: the same version that I installed last time. Well, after 1 hour I am still waiting for the installation to finish.
     It is a CPU issue, though: it gets stuck at 100% and with a load factor of 6/7, out of 2 logical CPUs. Something, this round, is seriously wrong... mmmm. The real difference from last time is that I am creating the VMs with virt-manager from another Linux server. I'll try to use virsh for the next machine, to see if something changes.
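     For the record, these are the kind of quick checks I mean to run straight from the unRAID console (just a sketch; the guest name and XML path are placeholders):

          # Is hardware virtualization actually in play? If the kvm/kvm_amd modules are
          # not loaded, qemu falls back to pure software emulation, which is painfully slow.
          lsmod | grep kvm

          # Define and start the guest straight from the unRAID console instead of
          # going through virt-manager on another box.
          virsh define /path/to/guest.xml
          virsh start {vmname}
          virsh list --all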
  6. Hi all. I have been on 10b for ages, and I really, really had to upgrade (given the reliability issues with 10b). So, now I have a sparkling new 14b running on my HP Microserver. Several months ago I used to run a couple of VMs for testing purposes. They were performing as expected: not too fast, given that I have an AMD Turion Neo N40L under the hood, but not too bad either. Since all I needed to do was supported by Docker, I just erased the VMs and let the thing go. Now that I need to run something that can't be dockerized, I went ahead and started working on installing new VMs. Horror! They are not slow, they are painfully slow! They just can't be used anymore. Still, I had a pleasant experience under 10b. Anyone with similar issues?
  7. unRAID 6b7 and KVM – Multi-NIC Sharing
     Goal: in a multi-NIC environment, use all NICs independently, in a predictable manner, with the best performance possible.
     There are 4 gotchas that we have to look out for when setting up the system:
     - the unRAID web GUI bridging and bonding option
     - PCI configuration
     - MACs
     - NIC drivers and emulation
     The main issue that you'll face as soon as you start configuring your system is the web GUI of unRAID: it is very simplistic and basically it bonds and bridges ALL available NICs together under the same bond and/or bridge. What this means is that if you activate it, you get all ETHx assigned to BR0. This is NOT what you want. What you want is a separate bridge created and linked to each physical NIC, so that you can independently assign what you need/want to different VMs.
     The second issue is that you ideally need to tell KVM where each NIC is located on your PCI bus. This makes the system more robust and faster, though in a "cloud" environment it binds the guest to a single host.
     The third issue to solve is making sure that you have a predictable MAC: if you want your VM's network configuration to work reliably between reboots, you need to make sure that you present it with the same MAC every time.
     The last problem that you want to solve is linked to the capability of the NIC. Left to itself, KVM will expose to the guest a simple 100 Mb/s virtual NIC. If you have that NIC connected to a 100 Mb/s network it won't be a problem, of course. But if you have a gigabit switch with a gigabit NIC, you want to make sure that you can take full advantage of the capabilities offered to you.
     So, let's say that you have a setup like this one: you want to expose a VM to the outside (some web server, mail server or, God forbid, a firewall!) and some other VMs to the inside. Maybe you want to dedicate a NIC to just unRAID, so that you can make the most of your switch's backplane capacity.
     Standard warning: this is NOT a secure setup, it is NOT a typical 3-bastion configuration, and if someone gains access to the VM that we are exposing to the Internet, then he does have access to the inside network. There is even a possibility that the actual host (unRAID) could be broken into via the exposed NIC, without the need to compromise the actual exposed guest. So be careful, take care, and don't be overconfident.
     Setting up Bridges
     How would you go about it? Well, let's start by listing the available NICs.
          root@Tower:~# ifconfig -a
          eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet X.X.X.X  netmask 255.255.255.0  broadcast X.X.X.X
                ether e8:39:35:xx:xx:xx  txqueuelen 1000  (Ethernet)
                RX packets 21176163  bytes 27861232863 (25.9 GiB)
                RX errors 0  dropped 0  overruns 0  frame 0
                TX packets 9132593  bytes 6238202394 (5.8 GiB)
                TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
                device interrupt 18

          eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                ether 00:15:17:xx:xx:xx  txqueuelen 1000  (Ethernet)
                RX packets 6467004  bytes 501074792 (477.8 MiB)
                RX errors 0  dropped 267  overruns 0  frame 0
                TX packets 14851045  bytes 18438882164 (17.1 GiB)
                TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
                device interrupt 18  memory 0xfe8e0000-fe900000

          eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                ether 00:15:17:xx:xx:xx  txqueuelen 1000  (Ethernet)
                RX packets 12946775  bytes 17396615344 (16.2 GiB)
                RX errors 0  dropped 0  overruns 0  frame 0
                TX packets 7387714  bytes 586589652 (559.4 MiB)
                TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
                device interrupt 19  memory 0xfe880000-fe8a0000

     So, I have a three-NIC setup. ETH0 is on the motherboard, the other two are on an Intel dual gigabit card. What do I want to do with this? Well, I want:
     - ETH0 – carry only unRAID traffic
     - ETH1 – go to the router and expose a firewall
     - ETH2 – go to the internal switch and be used by all internal-facing VMs
     So ETH0 and ETH2 will be physically linked to the internal switch. ETH1 will be physically linked to the external router. Now that we know what we have, let's create a few bridges.
     ETH0: no bridge needed, it will be used only by unRAID.
     ETH1: it is our external interface and we need to give a name to the bridge:

          root@Tower:~# brctl addbr out0
          root@Tower:~# brctl stp out0 on
          root@Tower:~# brctl addif out0 eth1

     ETH2: it is our internal interface and we need to give a name to the bridge:

          root@Tower:~# brctl addbr in0
          root@Tower:~# brctl stp in0 on
          root@Tower:~# brctl addif in0 eth2

     Now let's bring up the interfaces:

          root@Tower:~# ifconfig eth1 up
          root@Tower:~# ifconfig eth2 up
          root@Tower:~# ifconfig out0 up
          root@Tower:~# ifconfig in0 up

     Now we have the bridges set up and running. Next, let's set up our KVM XML configuration file.
     KVM Configuration
     We need to configure our VM. Starting from the default configuration found here http://lime-technology.com/forum/index.php?topic=33807.0 we need to modify our file. Erase the included "interface" config. Then open a web browser and go to a MAC generator: http://www.miniwebtool.com/mac-address-generator/ Use 54:52:00 as the MAC address prefix and ask it to generate an address in the format XX:XX:XX:XX:XX:XX. Get a new MAC address and use it as the KVM MAC address.
     ETH1 – we insert this config:

          <interface type='bridge'>
            <mac address='54:52:00:xx:xx:xx'/>
            <source bridge='out0'/>
            <model type='e1000'/>
          </interface>

     ETH2 – we insert this config:

          <interface type='bridge'>
            <mac address='54:52:00:xx:xx:xx'/>
            <source bridge='in0'/>
            <model type='e1000'/>
          </interface>

     Why are we specifying the MAC address? Because this is the best way to make sure that, once set up, the guest OS applies the same configuration between reboots. Another item of interest is the "model type". This determines how KVM presents the device to the guest OS. If you omit it, KVM plays it safe and presents an RTL card capable of 100 Mb/s. If you have a gigabit card, then you need to specify virtio or e1000 to make use of the full bandwidth available.
     PCI Configuration
     We are missing the PCI address in our configuration.
     I could walk you through how to recover that information, but the easiest way to do it is to use virsh. So let's start it up (in place of {vmname} you must put the name of the VM defined in the xmlfile.xml):

          root@Tower:~# virsh define /path/to/xmlfile.xml
          root@Tower:~# virsh
          Welcome to virsh, the virtualization interactive terminal.
          Type:  'help' for help with commands
                 'quit' to quit
          virsh # edit {vmname}

     You can now see that you have a configuration much more complete than the one included in the original XML file. Basically, libvirt and virsh are filling in all the missing information for you by inferring the best setup given your hardware. This is important because here you can see that there are PCI addresses for all devices. Scroll down to the interface tags. You should see something like this:

          <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>

     This is what you need. Copy all the interface tags (2 if you have assigned both of them to the VM), close your editor and quit virsh. Now change your XML file by substituting the old interface tags with the new ones. Or you could dump the XML from virsh; it is up to you. We are almost there! Go on and boot the VM:

          root@Tower:~# virsh undefine {vmname}
          root@Tower:~# virsh create /path/to/xmlfile.xml

     There you go: your VM now uses the multiple NICs you have set up. All this will probably be obsoleted once someone gets WebVirtMgr working in a docker container. The bridging part will still apply, though.
     Ah, I almost forgot. If you want the bridges to survive unRAID reboots you have to copy the configuration into the /boot/config/go file. Just add it all there:

          brctl addbr out0
          brctl stp out0 on
          brctl addif out0 eth1
          brctl addbr in0
          brctl stp in0 on
          brctl addif in0 eth2
          ifconfig eth1 up
          ifconfig eth2 up
          ifconfig out0 up
          ifconfig in0 up
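     For what it's worth, a quick way to double check the result (just a sketch; {vmname} is whatever you called your guest): each bridge should show its physical NIC plus the guest's vnetX tap, and virsh can report what the guest was actually given.

          # Bridges should each contain their physical NIC plus the guest's vnetX tap.
          brctl show

          # Show what libvirt actually attached to the guest: interface type,
          # source bridge, model (virtio/e1000) and MAC.
          virsh domiflist {vmname}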
  8. It works beautifully after getting a few quirks out of the way. I'll write a step-by-step mini how-to on making interfaces work with bridging, the PCI bus, driver emulation and so on.
  9. Well, since I got no reply, I went ahead and wrote my own bridging commands in the go script.
  10. Well, I know that the answer to my question may very well be "turn off bridging in the unRAID menu and then use the go script to set up bridging by yourself", but I want to give it a try to see if I can find a way to do it within the system.
      Setup: I have a 3-NIC box.
      Objective: I want 1 NIC to be exclusive to unRAID, 1 NIC exclusive to a VM, and 1 NIC to be promiscuous to several VMs.
      Current situation: if you turn on the bridging option, the rc.inet1 script actually binds br0 to all available ETH/NICs:

          root@Tower:~# brctl show
          bridge name     bridge id               STP enabled     interfaces
          br0             8000.001517a065b6       yes             eth0
                                                                  eth1
                                                                  eth2
                                                                  vnet0
          docker0         8000.56847afe9799       no

      I can live with bridging all NICs, but in this case I need br0->eth0, br1->eth1, br2->eth2. Any idea on how to achieve it, other than using the go script?
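      For reference, the go-script route I am trying to avoid would look roughly like this (only a sketch; the bridge names are my own choice, and eth0 is simply left alone for unRAID):

          # With the GUI bridging option turned off, leave eth0 untouched for unRAID
          # and give each remaining NIC its own bridge.
          brctl addbr br1        # dedicated to a single VM
          brctl stp br1 on
          brctl addif br1 eth1

          brctl addbr br2        # shared by several VMs
          brctl stp br2 on
          brctl addif br2 eth2

          ifconfig eth1 up
          ifconfig eth2 up
          ifconfig br1 up
          ifconfig br2 up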
  11. With KVM there are a few config issues at the moment. The virtual networks are not set up at boot time, and if you try to access it via virt-manager from another host, qemu+ssh isn't supported, which is a royal pain. Having said that, I am still in the process of trying to understand how the "darn" thing works, and I am having some real fun with the challenge. Sent from my iPhone using Tapatalk
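      If anyone wants to poke at the qemu+ssh side: as far as I understand it, that transport just logs in over ssh and runs netcat against the local libvirt socket, so a rough check would be something like this (default paths assumed, not verified on this beta):

          # Both of these need to exist on the unRAID host for qemu+ssh to work.
          which nc
          ls -l /var/run/libvirt/libvirt-sock

          # From another Linux box, the connection attempt would look like:
          virsh -c qemu+ssh://root@Tower/system list --all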
  12. Bkastner, I couldn't have said it any better. What you wrote reflects my (business) position 100%. But when you see one of the world's top 10 banks moving to private clouds that sit on a very well known public cloud infrastructure, you wonder where things started to go wrong. Sent from my iPhone using Tapatalk
  13. So you are comparing a command line with a config file? Ok. Just for the record (now that I am in front of a computer and can quote): ESX or death. This is what swings the boat in old Europe. Having said that, VMware is a real pain in the ass, but it has paid support, and that has got to be worth something, don't you think? (I am being sarcastic here.) You don't know, you just don't know. I have seen things (and am still seeing them happen today)... And regarding the "cloud", I have always called bullshit, but that's where the money is going, and that's where the shit is going to hit the fan, IMHO. Now, coming back to the subject at hand, I frankly don't care which is better, Xen or KVM, as long as it works over unRAID. I have no interest whatsoever in being dragged into an "I am right, you are wrong" competition; it was not the intention of this thread, and I'd rather see it locked than go down that avenue. Point is: beta6 works with KVM, beta5a is what needs to be used if I want to work with Xen. First I have to make sure that neither one has the old nasty NFS stale file handle issue, then I will make a choice between bleeding edge (beta6) and less bleeding edge (beta5a), and then I'll decide on the hypervisor. Thanks a lot to you all for the time and effort.
  14. Sigh, why should I start debating who is more right on this stuff? We both do what we do for a living, and I will not humor you on which is better: KVM or Xen. I have my ideas, I use unRAID in my home, and I think I am entitled to make my choice knowing my needs and capabilities, am I not? In my company's network (yeah, Fortune 100 here too) I would never suggest unRAID: it isn't the right tool for their job. But then again, Xen or KVM? It is like debating which computer is better. Well, guess what? I am pulling out of this kids' race: been there, done that. Sent from my iPhone using Tapatalk
  15. Well, I still have to really design the new network. The real challenge will be moving from my current physical firewall to a virtual one. We will see how it works out. Regarding Xen vs KVM, I am still convinced of the superiority of KVM: big companies don't care about Linux virtualisation: they work on VMware and that is all. The current push is into the cloud, trying to overcome the current limits of host hardware, but that is another matter; OpenStack is the current lead player there, and it is focusing on KVM. Given the current state of processors, KVM makes transitions between hosts a lot simpler than PV/Xen. But the road is long. Sent from my iPhone using Tapatalk
  16. I understand that things are a bit too raw at the moment, and I don't expect any clear indication before year end. I do understand the reasons for choosing KVM (currently the de facto Linux standard) over Xen. Not so sure about the contrary, given that without a Xen-aware kernel things get a bit bloody (firewalls are one of those special apps: you don't mess with their kernel. Ever.) In the end my main issue is with the firewall VM: it is the only one that would have issues with Xen, IMHO. Sent from my iPhone using Tapatalk
  17. Docker seems quite useful, I agree. It is just that certain things (firewall, web server/mail server) are not quite a fit for Docker, IMHO. Certainly running a firewall under Xen seems to be a real nightmare, and the router's firewall does not have sufficient features for me (VPNs, for starters). Seems that I'll have to play with it a bit more before starting to consolidate my home network.
  18. I still have to install unRAID 6 beta6, but I have started visualising what to virtualise on the unRAID box. I am a KVM believer, and I am in favour of KVM VMs. But I can see that unRAID pushes Xen a lot. What would be the best approach in your opinion? Try to get KVM working on the unRAID host, or move everything over to Xen? Sent from my iPhone using Tapatalk
  19. I don't know about Peter, but I did revert back to 4.7 and it seems that I have finally solved all my NFS issues. It is a shame that I can't run 5beta14, as it seemed pretty stable for me apart from the NFS issue, but I really do need NFS to be available over my network, and so be it.
  20. Tried b12a, still with the "Stale NFS file handle" issue. I guess I'll have to think about reverting down to 4.7. NFS is kind of critical to me, as I cannot use my decoder over SMB/Samba (too slow even over the gigabit network), and I am left with NFS.
  21. Unfortunately I can confirm that, after upgrading to the kernel 3.3.0 you posted, I still get the "Stale NFS file handle" error on clients.
  22. I am just the new kid here, having just started with unRAID. Still, I am experiencing the NFS stale file handle issue. Running b14 at the moment. Other than that I haven't seen any other issues.