Popular Content

Showing content with the highest reputation on 05/03/19 in all areas

  1. 2 points
    Summary: Support Thread for ich777 Gameserver Dockers (Counter-Strike: Source & Counter-Strike: GO, Team Fortress 2, ArmA III,... - complete list in the second post)
    Application: SteamCMD
    DockerHub: https://hub.docker.com/r/ich777/steamcmd
    DonationLink: https://www.paypal.me/chips777
    All dockers are easy to set up and highly customizable. All dockers are tested with the standard configuration (port forwarding,...) to confirm they are reachable and show up in the server list from the "outside". The default password for the gameservers, if enabled, is: Docker. If there is an admin password, the default password is: adminDocker. Please read the description of each docker and the variables that you set (some dockers need special variables to run). If you like my work, please consider donating to support further requests for game servers where I don't own the game.
    Created a Steam Group: https://steamcommunity.com/groups/dockersforunraid
  2. 2 points
    Any LSI with a SAS2008/2308/3008 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed.
  3. 2 points
    More and more games these days benefit from multicore CPUs, so you will see better performance with more cores on modern titles. Unraid itself always uses core 0 plus its HT sibling, so you're left with 6 threads you can use for your VMs. You will get the best performance by isolating these cores, meaning you prevent everything else from using them. You will end up with 2x 3-core VMs, or one 2-core and one 4-core. Is that enough for the game you want to play? Next question: how much RAM does your game need? 6GB for each VM and 4GB for Unraid itself? You can test Unraid for 30 days without limitations if you're interested in Unraid and want to see if the performance is OK for your needs.
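    The core isolation described above can also be done with the isolcpus kernel parameter in Unraid's syslinux.cfg. A minimal sketch, assuming a 4-core/8-thread CPU where core 0's two threads are numbered 0 and 4 (thread numbering varies by CPU, so check lscpu -e first; this is a config fragment, not the only way to do it):

```
# /boot/syslinux/syslinux.cfg - extend the existing "append" line.
# Threads 0 and 4 (core 0 + HT sibling) stay for Unraid;
# the remaining six threads are isolated for VM use:
append isolcpus=1-3,5-7 initrd=/bzroot
```

    After rebooting, pin each VM's vCPUs to the isolated threads in the VM settings so the two VMs don't share cores.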
  4. 1 point
    I never get lucky enough to win. Thanks Unraid.
  5. 1 point
    Sounds like a good plan to me.
  6. 1 point
    That is HP's "hpsa" driver: https://sourceforge.net/projects/cciss/files/hpsa-3.0-tarballs/ The version in Linux kernel 4.19 is "3.4.20-125". You can see from the link above that there are newer hpsa versions, but those drivers are designed to be built/integrated into the Red Hat variant of the kernel and do not build with our kernel. Checking kernels 5.0 and 5.1 reveals they also use hpsa driver "3.4.20-125". Why hardware vendors insist on maintaining their own driver instead of the stock kernel driver is a mystery. Eventually someone who HP cares about will complain and they'll update the stock kernel driver.
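    To confirm which hpsa version your running kernel actually ships, a quick sketch (command fragment; modinfo reads the metadata of the in-tree module, so this works whether or not the controller is present):

```
# Show the hpsa module's own version string:
modinfo hpsa | grep -i '^version'
# And the kernel it belongs to:
uname -r
```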
  7. 1 point
    I just (yesterday) built a new UNRAID server using an Athlon 200GE and a Gigabyte B450M DS3H motherboard. Everything runs super smooth, I even run a few dockers for torrent/ZeroTier, and power draw and heat dissipation are minimal. CPU rarely spikes above 50% use even during intense transfer sessions. I couldn't be happier. P.S. I use a separate machine to run Proxmox, and that one has many containers/VMs.
  8. 1 point
    I would also love a 7 Days to Die option for this. Looks like someone created one just for that, but I would love to have one plugin to rule all my game servers. Ark is on there, so that is great, and TF2 is a staple. I guess the other one that would be useful is Conan Exiles.
  9. 1 point
    Yes, it just means there's no BIOS installed.
  10. 1 point
    it should be: sas3flash -o -f SAS9305_24i_IT_P.bin -b mpt3x64.rom
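    For reference, what those sas3flash flags do, sketched as a session (run the list command first to confirm the controller is detected; the firmware filenames are from the post above):

```
# List detected SAS3 controllers before flashing:
sas3flash -listall

# -o : advanced operations mode (required for flashing)
# -f : firmware image to write (IT-mode firmware here)
# -b : BIOS/boot ROM image to write
sas3flash -o -f SAS9305_24i_IT_P.bin -b mpt3x64.rom
```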
  11. 1 point
    There is nothing you can enter into a docker run command that you cannot also enter into a template via Add Container. Nope. Nothing at all. Depending upon what you are doing, though, some management options of the container may not be available via the GUI if you create the container through the CLI. It should be noted somewhere that there is no such thing as "an unRaid container". A container is a container is a container, and will work from one docker implementation to another (assuming that the various paths, ports, and variables are appropriately defined, either in the GUI or in the CLI docker run command).
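    As a sketch of that equivalence, here is a hypothetical docker run for a game server container (the container name, ports, paths, and variable names are illustrative assumptions, not the image's documented settings; each flag maps one-to-one to a field in the Add Container template):

```
# Template mapping: -p -> a Port entry, -v -> a Path entry, -e -> a Variable entry
docker run -d --name=csgo \
  -p 27015:27015/udp \
  -p 27015:27015/tcp \
  -v /mnt/user/appdata/csgo:/serverdata \
  -e SRV_PASSWORD=Docker \
  ich777/steamcmd
```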
  12. 1 point
    There are a few views on this but no replies. I will say that I personally have seen disconnect issues, but I don't use individual device passthrough. I pass through the controller, and sometimes see a disconnect/reconnect when transferring large files (that breaks the transfer) but it always reattaches the device. Have you tried passing through the controller? It should at least easily facilitate the reconnect when it drops. No need to buy another controller, you can pass through one of the controllers on your mobo.
  13. 1 point
    I got the server rack cases for $25 a piece. The bottom one is my main server. The one above that is dedicated onsite backup. Both are in active use, or I'd take pics of the inside.
  14. 1 point
    It's a custom wood frame build roughly based on this guide: https://tombuildsstuff.blogspot.com/2014/02/diy-server-rack-plans.html My build is a 16U instead of 20, and I omitted a few things like wheels (furniture movers work just fine on wood floors). I don't have a real use for a rear door either.
  15. 1 point
  16. 1 point
    Norco 4224 case
    9x 6TB
    6x 4TB
    2x 500GB SSD
    1x 1TB SSD
    Intel® Core™ i7-5820K
    Asus X99-DELUXE
    32 GB DDR4
    Nvidia P2000
    3x LSI 9211-8i
    This has been updated a little since these pictures were taken. I'll update here when I take new pictures.
  17. 1 point
    Supermicro X8DTI-LN4F
    2x Xeon E5620, 4 cores / 8 threads each
    48 GB ECC memory
    2x 3TB Seagate - parity drives
    1x 1TB WD - data
    7x 3TB Seagate - data
    2x 240 GB SSDs - cache drive
  18. 1 point
    I'm not sure the host OS can determine, in the manner you are asking, the IP address of the virtual NIC that passes through the bridge to an external router and is acquired via DHCP, especially since the IP can change if you aren't using static assignments in the VM. Even if it could, it wouldn't be able to show it at boot and would have to update after the VM has checked in with the router. But I could also be way wrong.
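    One way the host can query a bridged VM's current address is via libvirt, if the QEMU guest agent is installed in the guest. A sketch (command fragment; "Windows10" is a placeholder VM name):

```
# Ask the guest agent inside the VM for its interface addresses:
virsh domifaddr Windows10 --source agent

# Without the agent, fall back to addresses learned from ARP traffic:
virsh domifaddr Windows10 --source arp
```

    Either way this only reflects the address at the moment you ask, so it would still have to be re-queried after a DHCP renewal.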
  19. 1 point
    @shEiD @johnnie.black @itimpi Oh how we love to be comforted! While it is true that the mathematics show you are protected from two failures, drives don't study mathematics. And they don't die like light bulbs. In the throes of death they can do nasty things, and those nasty things can pollute parity. And if a drive pollutes one parity, it pollutes both parities. So even saying single parity protects against one failure is not always so, but let's say it protects against 98% of them. Now the chances of a second failure are astronomically smaller than a single failure. And dual parity does not help in the 2% of cases where even a single failure isn't protected, and that 2% may dwarf the percentage of failures dual parity is going to rescue. I did an analysis a while back - the chances of dual parity being needed in a 20 disk array are about the same as the risk of a house fire. And that was with some very pessimistic failure rate estimates.
    Now RAID5 is different. First, RAID5 is much quicker to kick a drive that does not respond within a tight time tolerance than unRaid (which only kicks a disk on a write failure). And second, if RAID5 kicks a second drive, ALL THE DATA in the entire array is lost, with no recovery possible except backups. And it takes the array offline - a major issue for commercial enterprises that depend on these arrays to support their businesses. With unRaid the exposure is less, only affecting the two disks that "failed", and still leaving open other disk recovery methods that are very effective in practice. And typically our media servers going down is not a huge economic event.
    Bottom line - you need backups. Dual parity is not a substitute. Don't be sucked into the myth that you are fully protected from any two disk failures. Or that you can use the arguments for RAID6 over RAID5 to decide if dual parity is warranted in your array. A single disk backup of the size of a dual parity disk might provide far more value than using it for dual parity!
And dual parity only starts to make sense with arrays containing disk counts in the high teens or twenties. (@ssdindex)
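    The back-of-envelope risk argument above can be sketched numerically. All the rates here are illustrative assumptions (a hypothetical 5% annual per-drive failure rate and a 3-day rebuild window), not measured data or the original analysis:

```python
# Rough odds that dual parity is actually needed: a second drive must
# fail while the first failure is still being rebuilt.
# All rates below are illustrative assumptions, not measured data.

annual_failure_rate = 0.05      # hypothetical: 5% per drive per year
rebuild_days = 3                # exposure window after one failure
drives = 20

# Chance any one remaining drive fails during the rebuild window:
per_drive_window = annual_failure_rate * rebuild_days / 365
p_second_failure = 1 - (1 - per_drive_window) ** (drives - 1)

# Chance of at least one first failure per year across the array:
p_first_failure = 1 - (1 - annual_failure_rate) ** drives

# Annual chance dual parity saves you: a first failure AND an
# overlapping second failure during the rebuild.
p_dual_needed = p_first_failure * p_second_failure
print(f"first failure per year:   {p_first_failure:.3f}")
print(f"second during rebuild:    {p_second_failure:.4f}")
print(f"dual parity needed/year:  {p_dual_needed:.5f}")
```

    With these assumed numbers the result lands well under 1% per year, the same order of magnitude as common estimates of annual house-fire risk, which is the shape of the comparison the post is making.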