Everything posted by 1812

  1. Not sure I would agree. Arguments can be made either way, and it depends on a specific user's needs. If the cache is big enough, it's one less thing to configure, one less drive to buy, and one less port to use. Using it on cache, the vm has to share drive bandwidth with any dockers that are running. I've noticed slight throughput gains using vm's on unassigned devices. Nothing to write home about, but I think others have seen more. Is it worth the extra setup (as in, adding another drive, and then clicking a new destination for the image file in vm setup)? That is up to each user. But if a problem arises on the vm's physical disk and it needs to be replaced, you don't have to stop the array to do so. And IF unRaid allows vm's to run without the array being spun up in the future, my suspicion is that they will only run on mounted unassigned devices. But you are 100% right, arguments could be made either way. If you're using a cache pool with a few drives, then the vm is saved across several disks, and if there is a drive failure, it is still operable. A benefit for some. Maybe we should do a poll, because my initial post was based on my impressions from the board. Could be interesting, I could be flat wrong! On the other hand, I don't actually run any VMs, and if I did, I don't have room for video cards or additional drives in my server. lol... they can be a pain. One of my graphics cards blocks 2 slots.
  2. A vm image on an unassigned ssd is considered best, but the cache drive is still better than the array. If you use "auto" for the location and have your /domain folder set on the cache drive, then the vm image will be put there.
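     To make the difference concrete, here is roughly where the image lands in each case. The share, vm, and disk names below are made up for illustration, and I'm assuming the usual Unassigned Devices mount point of /mnt/disks:

         # "auto" location with the /domain folder on cache:
         /mnt/cache/domain/Win10/vdisk1.img
         # image pointed manually at an unassigned-devices ssd:
         /mnt/disks/my_ssd/domain/Win10/vdisk1.img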
  3. I'd like to add one more thing to multi-version backup/replication: make it multi-threaded to increase performance. I absolutely hate being bound to one cpu's power for backups. It turns what could be a 5-6 hour initial backup into a day and a half... costing more overall energy in the long run.
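     Until something like that is built in, a rough way to approximate it with stock tools is to run one copy job per share in parallel, since a single rsync is effectively bound to one cpu. A sketch, with made-up share names and backup host, assuming rsync and ssh access on both ends:

         # one rsync per top-level share, all running concurrently
         for share in Movies Music Photos; do
             rsync -a "/mnt/user/$share/" "root@backupserver:/mnt/user/$share/" &
         done
         wait    # block until every background job finishes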
  4. I just finished setting up my backup server a few days ago and have been playing around with a few different setups.
     1. It has 1 share for the backup. If the main goes down, I don't care about having the backup run how the main does; I mainly care that the data is there and can be put back into use somewhere.
     2. Crashplan works but is slooooooow and yields encrypted data on the backup server. On a 10Gbe line directly connected between both my boxes, it could only muster 300ish mbps, the same as on a gigabit line. This was even after playing around with read/write buffers and giving it 100% cpu usage ability, which it never utilized. Directly copying the array yields 1.2-1.5gbps for me (running parity on both). I originally used Midnight Commander but switched to Krusader because it "seemed" slightly faster, though I doubt there was a difference. It really just keeps me from having to open terminal, or having terminal log out during a transfer, causing the transfer to stop.
     3. I first set up my backup server without parity. It would write at 150+MB/s since it is just directly writing files to the drives one at a time. I also set up some hard disks as a raid 0 cache and yielded about 300MB/s when transferring from an ssd (running only sata II). But 95% of my data is on hard disks that max out at about 150-180MB/s read. Some less. I eventually ended up just setting it up as a standard parity array for backup. I thought about what might happen if my main goes down and, while copying the data to a new main server, I lose a backup disk. Since I can write to the backup array at between 70MB/s and spikes/some smaller sustained sections of 140MB/s, I decided to just use the added protection. It's going to be slow, so I embraced the slowness.
     There are some folk that use rsync and some other things on here. I'm not much of a terminal person, so I don't. Recently there have been requests for unRaid to have built-in duplication/backup to another server. I think an unRaid time machine function or something with file versions would be excellent. But I'll take a built-in tool that scans and only copies over changed files just as happily (see the rsync sketch below for the manual equivalent).
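     For anyone who is a terminal person, the "scan and only copy changed files" behavior is what plain rsync already does; a minimal sketch, assuming ssh access to the backup box (hostname and share name are made up):

         # only files that changed since the last run get transferred;
         # --delete also mirrors deletions to the backup
         rsync -a --delete /mnt/user/Media/ root@backupserver:/mnt/user/Media/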
  5. I misread some info on your cpu when I looked it up. You have 6 cores, but you're still dropping from 6 cores to 4, losing 1/3 of your available cpu power. The best you can do is isolate cpus 1,2,3,4,5 in the syslinux.cfg, assign all 5 of those cores to the vm, and put the emulator pin on core 0. That would look like this (if you're on 6.2.4):

         default /syslinux/menu.c32
         menu title Lime Technology, Inc.
         prompt 0
         timeout 50
         label unRAID OS
           menu default
           kernel /bzimage
           append isolcpus=1,2,3,4,5 initrd=/bzroot
         label unRAID OS GUI Mode
           kernel /bzimage
           append initrd=/bzroot,/bzroot-gui
         label unRAID OS Safe Mode (no plugins, no GUI)
           kernel /bzimage
           append initrd=/bzroot unraidsafemode
         label Memtest86+
           kernel /memtest

     and then the xml:

         <vcpu placement='static'>5</vcpu>
         <cputune>
           <vcpupin vcpu='0' cpuset='1'/>
           <vcpupin vcpu='1' cpuset='2'/>
           <vcpupin vcpu='2' cpuset='3'/>
           <vcpupin vcpu='3' cpuset='4'/>
           <vcpupin vcpu='4' cpuset='5'/>
           <emulatorpin cpuset='0'/>
         </cputune>

     If you're trying to run other dockers/plugins, you may need to give unRaid another core, and only give the vm 4 then.
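     After rebooting with that isolcpus line, you can confirm the isolation actually took effect (this sysfs path is standard Linux, not unRaid-specific):

         cat /sys/devices/system/cpu/isolated
         # should print: 1-5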
  6. On the top right of the vm creation/edit window, there is a toggle next to the words "basic view." Click that to change to advanced view. It will then show you your network settings and more.
  7. I believe the newer hardware comes with updates for a set amount of time... "entitlement." Since owning servers was new to me, I used hp cards for my hba and sas expander, which I probably didn't have to do. I've also used 3 different nvidia graphics cards, an asrock usb 3.1 card, and mellanox 10gbe adapters with essentially no issues. I don't think there are any hardware restrictions, at least on my machine. I've been eyeing the G9's, but it'll be a few years before I wear my current servers out, or they become woefully outdated. Hopefully by then they'll be a bit more reasonable in price.
  8. I haven't wanted to install LibreElec until now. Is finding the game roms difficult?
  9. So, you're going from giving Windows 7 twelve cores (native) to just 4 (vm)... that might be the first part of the difference. Read the thread stickied at the top about cpu pinning and isolating cores. That's step 1. Step 2 is to just experiment with core assignments. Question: if you're running a 6 core chip with 12 logical cores, then why does it skip a core in your assignment sequence?
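     A quick way to see which logical cpus are hyperthreaded pairs on your particular chip before picking an assignment (lscpu is a standard util-linux tool and should be available from the unRaid terminal):

         lscpu -e=CPU,CORE,SOCKET
         # logical cpus that list the same CORE number are a hyperthreaded pair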
  10. I have a couple of MNPA19-XTR cards in a couple of machines. They are directly connected to each other via infiniband and worked without issue. I know it's not the exact card you asked about, but it might be useful if they share a chipset?
  11. As discussed in another thread, the P400 is not supported. Get an hba and you'll be all set.
  12. Yes, I saw that. Don't let 50 bucks rob you of the unRaid experience.
  13. Then buy an h220 hba and keep it in the hp family!
  14. Has worked on 3 nvidia cards for me, after I was pointed in the right direction. "This worked almost perfectly for me to get HDMI sound from my Radeon RX460! Thanks!! The only issue I have is that there seems to be a very minor crackle/pop every few seconds in the sound. It's easiest to hear if you listen to something without any variation. I find listening to a tone makes any crackles/pops immediately obvious, e.g.: https://www.youtube.com/embed/TxHctJZflh8 Any thoughts on a cure? Later in the thread there's talk of isolating cpus and such. Would this cure a minor crackle/pop in the sound from HDMI?" It wouldn't hurt, and it will make it perform a little bit better. How many cores does your system have, and how are you assigning them?
  15. If you don't need it to be automatic, just make a copy of the vdisk using a docker such as Krusader. Just copy and paste it to another location. Make sure the vm is off when you do the copy. Should something go wrong with the vm, you can just copy the vdisk back and it will work. This is the easiest way!
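     The same backup from the terminal, for anyone who prefers it; a minimal sketch with made-up paths (--sparse keeps a thin-provisioned vdisk from ballooning to its full allocated size during the copy):

         # shut the vm down first, then:
         cp --sparse=always /mnt/user/domain/Win10/vdisk1.img /mnt/user/backups/Win10/vdisk1.img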
  16. This is a +1, though they should run fine on the samsung cache you have... but try this and see what happens: isolate cpus 1,2,3,5,6,7; assign your power-hungry gaming vm 2,3,6,7; set your kodi and workstation both on cpus 1,4; and set the emulator pin for all vm's to 0,1. If the workstation is just writing emails and word documents, it should be able to share cores with kodi, which has fairly low cpu requirements (for me anyways). Also try changing your topology to not show/use virtual hyperthreads. For a 4 core vm that would look like:

         <cpu mode='host-passthrough'>
           <topology sockets='1' cores='4' threads='1'/>
         </cpu>

     If it still is wonky, then try the vm on cpus 1,2,3 (really), then kodi on 5,6 and workstation on 6,7. This goes against the prevalent consensus on how to pin, but as I stated earlier, I've had more success without using hyperthreaded pairs when assigning to vm's. I'm actually beginning to think pinning is much more influenced by processor/board than a one-size-fits-all solution.
  17. cpu pinning: https://lime-technology.com/forum/index.php?topic=49051.0
  18. cpu pinning does not need to correlate to actual cpu numbers on your host. If it did, you could never run more than 1 vm. In fact, it's better to not put a vm on core 0, as unRaid prefers it for host operations. The OP is attempting to put the vm on "core" 3 and its hyperthreaded pair. Some people swear by this method, but I've found for certain vm's that using cpu "sides" and not using their hyperthreaded pairs in groups actually works better, at least in my case (using a dual processor server). To take it a step further, and for better response in vm's, one should isolate the intended vm cores away from unRaid so no host functions run at the same time as the vm.
  19. It took me forever to get bridging to work, that is, where 2 added-in nic cards would communicate through br0 on my server, but always with weird inabilities to ping different local and web addresses (one would not ping local but would internet, the other was the opposite). After trying multiple configurations and bridging in pfsense, I have no clue what settings actually made it work. Additionally, after trying to pass pfsense my mellanox 10gbe card and having it completely ignored, I moved on. I can't check what/how I set it up because I ditched the idea of having a pfsense router on unRaid and now just use a standalone low-power computer I got off craigslist for 10 dollars. Setup was a breeze with zero issues. I would recommend considering doing that instead. UnRaid seems to complicate what is otherwise a fairly straightforward install.
  20. "It doesn't work to use VNC and GPU passthrough at the same time. It's one or the other. In most cases you should have an output from your card as soon as you start your VM if passthrough works. Where does your unRaid console show? On the onboard vga or your GPU?" VNC and gpu "works" with os x (I know, we're talking about win 10). It will show the bios boot and the loader, then transfer over to the gpu to load the operating system.
  21. I have 4 DL380 G6 machines. There are plusses and minuses. "Can you elaborate some of the pros and cons you faced?" I'll list both at the same time:
     • Can have lots of ram. I have 72gb currently, with max capacity a little more than double that. When doing file transfers with unRaid, I saturate gigabit networking as the file first goes to ram, then to cache (if I have it selected, which I'm not sure I really need for transfers), then to the array. It's nice to be able to run 2-3 vm's and still drop a 30GB file and push it at max speed.
     • I recently added a couple of 10gbe cards for backing up one server to another directly. Right now I'm limited to about 4-5gbps, partly because of how unRaid writes files (not striped) and partly because of the 3gbps max on the sata drives I'm running. I don't think I really needed 10gbe cards for backing up the array, because when using parity you only write at 50-80MB/s, but it's fun to play with. I've gotten higher transfer rates, but that was when assigning fast drives to the cache pool, btrfs, and striping them (which I think is what Linus did). Not really practical though.
     • I currently use a SAS expander to connect an external array to access 3.5" drives. Seems to work fine. It holds 15 disks. If I want/need more, I can just add another card.
     • 8 2.5" drive slots onboard, which is great for SSDs, but sata disks only connect at 3gbps on the onboard backplane. If 2.5" is your thing, you can get an additional carrier to make it 16 onboard 2.5" disks. That would be one hell of an ssd cache cluster.
     • 6 pcie slots when using risers (4 x4 and 2 x8(16)), only running at v.2 (booooooo!)
     • Can be relatively cheap to get into and upgrade. I bought mine for 50 each. They came with dual xeon 2.2ghz quad core processors. I upgraded to dual xeon 6 core 2.9ghz processors for 120 bucks. It increased my passmark scores from 7500 to about 13k on a single machine. Cinebench more than doubled, but I don't recall the exact numbers.
     • No onboard power for an aftermarket gpu, and a tight fit: I had to take a dremel to the back plastic edge of my gtx 760 to make it fit (it was hitting the processor heat sink shroud), and I also had to run an external power supply to power the card. I believe G7's have an onboard power source for a single graphics card up to 300 watts.
     • Enterprise equipment is built to be beat on. I read on the forums about people having issues because of hardware failures, and other things that come up from pushing consumer equipment too hard. Consumer computers aren't meant for 80-100% utilization all the time, or even for longer periods of high utilization. Servers (or at least mine) have massive (and often loud) cooling systems and are built a bit "tougher." I'm sure some will disagree, but the longest-living hardware I've owned is my 6 year old mac book pro and this set of servers, which are about the same age. Both have been beat on and continue (knock on wood) to chug along. Long story short: if you're going to haul tons and tons of dirt, better to get a dump truck vs a honda civic.
     • bios updates: you have to pay for them from hp. You could probably download them from less reputable sources, but I don't chance it.
     • Sloooooow boot up.
     • Eats more power, but has better power management. My main server idles at 100 watts doing nothing.
     • 4 gigabit ethernet ports onboard, so if you're running a few vm's that are using bandwidth on the network, they don't bog down sharing 1 gigabit port, and you don't lose a pci slot to add a card for it. (Side note: I lost access to one of my x4 slots because my gpu was so tall it blocks it.)
     • When I first started using unRaid about 10 months ago, the onboard raid controller wasn't recognized. Then for some reason about 4 months ago, it was. Maybe an upgrade, but I don't remember. Only 2 of my servers use the onboard controller; the other 2 are using an h220.
     • iLo server management is fun to play around with.
     • Very easy to "service." If a fan goes out, you get an alert. Problem with one of the redundant power supplies? Notification. Once the system is powered down, it takes 30 seconds to replace. Everything is a bit easier to service, short of replacing the board.
     I've spent hours and hours trying to figure out how to make things "work." Part of that is because I was learning about the hardware at the same time I was (and am still) learning about unRaid. And because of that, I both love and hate these servers. I have 4 because I use one as primary storage, a plex transcoder, and a host for a few vm's which connect to physical desktop locations in my house over cat 5e. The other 3 are for transcoding video projects, with one of those doing double duty making a duplicate of my primary server's array. I probably have about 800 dollars into the whole cluster (a thousand total if you include the half rack), which gives me access to 72 cores worth of transcoding power. For another 360 dollars, I'll have 96 cores, but all with higher clock speeds. I couldn't build anything this powerful for a grand. If my needs were less, then I'd probably buy a desktop server. It doesn't look as "cool," but sometimes function has to override form. I'm sure there's more to be said, but I think that's a good start.
  22. Post your vm xml along with your iommu groups/devices (see the sketch below for a quick way to dump the groups).
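     If you don't have the iommu groups handy, a standard snippet for dumping them from the terminal (generic Linux, not unRaid-specific; assumes lspci is available):

         #!/bin/bash
         # print every iommu group and the pci devices inside it
         for g in /sys/kernel/iommu_groups/*; do
             echo "IOMMU group ${g##*/}:"
             for d in "$g"/devices/*; do
                 echo -e "\t$(lspci -nns "${d##*/}")"
             done
         done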