Leaderboard

Popular Content

Showing content with the highest reputation on 05/22/19 in all areas

  1. Might I suggest that those of you requesting games that ich777 doesn't currently own a license for should contribute by either obtaining a license yourself or working together with other requesters to obtain a license for ich777? You are asking for a significant amount of work to be done on your behalf; the least you can do is make that work easier to accomplish and support it.
    2 points
  2. Summary: Support thread for ich777's gameserver Dockers (Counter-Strike: Source & Counter-Strike: GO, Team Fortress 2, ArmA III, ... - complete list in the second post). Application: SteamCMD. DockerHub: https://hub.docker.com/r/ich777/steamcmd All dockers are easy to set up and highly customizable. All dockers are tested with the standard configuration (port forwarding, ...) to confirm they are reachable and show up in the server list from the "outside". The default password for the gameservers, if enabled, is "Docker". If there is an admin password, the default password is "adminDocker". Please read the description of each docker and the variables that you set (some dockers need special variables to run). The Steam username and password are only needed in templates where the two fields are marked as required with the red *. Created a Steam Group: https://steamcommunity.com/groups/dockersforunraid If you like my work, please consider making a donation. (A rough command-line sketch follows this entry.)
    1 point
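     As a companion to the entry above, here is a minimal, hedged sketch of how one of these SteamCMD-based game server containers might be started from the command line instead of the Unraid template. The image tag, port, volume path, and environment variable names below are illustrative assumptions only; check the description of the specific container on DockerHub for the variables it actually expects.

     #!/bin/bash
     # Hypothetical example: run a SteamCMD-based game server container.
     # GAME_ID, USERNAME and PASSWRD are placeholder variable names; the
     # actual names are listed in each container's description.
     docker run -d \
       --name=csgo-server \
       -p 27015:27015/udp -p 27015:27015/tcp \
       -v /mnt/user/appdata/csgo:/serverdata \
       -e GAME_ID=730 \
       -e USERNAME= \
       -e PASSWRD= \
       ich777/steamcmd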
  3. This plugin is designed to find, and offer suggestions for, a multitude of configuration errors, outright problems, etc. across numerous aspects of your unRaid server. To install this plugin, just head over to the Apps tab and search for Fix Common Problems. After installation, you will see a lifesaver icon within Settings / User Utilities which will launch a manual scan (and give you the option to set the background scan settings). For every error or warning that this plugin finds, a suggested course of action will also be displayed. Additionally, should you seek additional help for any error / warning, the errors are all logged into your syslog so that persons helping can easily find the issue when you post your diagnostics. Scans can be scheduled to run automatically in the background (you have the option of hourly, daily, weekly, and monthly). Additionally, if the background scans find an issue they will send out a notification (depending upon your notification settings in this plugin). The current list of tested items will be maintained in the second post of this thread. Any support for problems this plugin finds should be posted in the General v6 section of these forums. Problems relating to false positives, suggestions for more checks, why I made the decisions I did, wording mistakes in suggestions, etc. should be posted here. As usual for anything written by me, updates are frequent as new ideas pop into my head. Highly recommended to turn on auto-updates for this plugin. Most of the tests will include a link to a "More Information" post about the specific error. These are all contained within this thread. A video with a basic run-through of FCP can be found here: (at about 18:25)
    1 point
  4. Hi there, this is a very weird issue you're having, and we're not sure how we can go about recreating it, why this would be a problem in the first place, or how Unraid would be causing it. Just to confirm, this is the scenario:
     - You have Windows 10 installed on a MacBook Pro connected to your network via WiFi
     - Prior to 6.7, you were able to copy files over the network to this device at 300-500 Mbps
     - After 6.7, you are limited to 6 Mbps
     - This behavior does not exhibit itself on any other devices, wired or wireless, nor does it happen on the same device when booted into Mac OS
     First question would be: if you use another wireless device with Windows (on non-Mac hardware), what happens?
    1 point
  5. Updated three servers, no issues. New BIND method seems to be working perfectly vs stubbing.
    1 point
  6. I would also suggest you reboot after downloading it in case an old one is loaded into RAM or something.
    1 point
  7. I know this is overkill, but it only took about 15 minutes. Here are 3 different scripts for different situations.

     Remove directories within a source directory over a certain size (removeDirOverSize.sh):

     #!/bin/bash
     # Size threshold and source directory
     limit=20G
     source="/mnt/user/Movies"
     # Strip the trailing 'G' so the limit can be compared numerically
     limit=$(echo $limit | cut -d 'G' -f 1)
     for dir in "$source"/*/
     do
         # Directory size in whole gigabytes
         size=$(du -sBG "$dir" | cut -d 'G' -f 1)
         if (( $size > $limit ))
         then
             echo remove: $dir
             rm -rf "$dir"
         fi
     done

     Remove files within a source directory over a certain size (removeFilesOverSize.sh):

     #!/bin/bash
     limit=20G
     source="/mnt/user/Movies"
     find "$source" -type f -size +$limit -delete -print

     Remove files within a source directory over a certain size with specific extensions (removeFilesOverSizeByExtension.sh):

     #!/bin/bash
     limit=20G
     source="/mnt/user/Movies"
     find "$source" -regextype posix-extended -iregex '.*\.(mpg|mkv)' -size +$limit -delete -print

     To change extensions you could replace mpg|mkv with avi, or mpg|mkv|avi, etc.
    1 point
  8. The unbalance plugin will do what you want; it's a graphical front end that uses the rsync command line to do the work. There is also a procedure that takes some of the risk out of moving data off of one array member, emulated or not, where you exclude the drive from the global shares configuration. That will allow you to enable disk shares and "safely" copy from that disk to the user share system, which will allocate the data according to your split level and disk allocation strategy. If you don't GLOBALLY exclude the disk from user shares, not just the regular exclude, it's going to corrupt the data if you try to copy from disk to user share. Unbalance operates from disk to disk instead of user share, so it's not affected (a rough sketch of that kind of disk-to-disk copy follows this entry). Also, I'd recommend copying instead of moving. It will be faster, and have the same end result. You will have to rebuild parity without that disk after you get the data safe. In any case, I hope the rest of your drives are perfectly healthy, because you are relying on them to perform flawlessly for the entire duration of this procedure. You say you plan on upgrading; it would be safer to go ahead with the upgrade and rebuild onto a larger disk. You would be operating at risk for less time.
    1 point
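     For reference, a minimal sketch of the kind of disk-to-disk copy the unbalance plugin runs under the hood. The disk numbers and share folder are example values only; verify the source and destination against your own layout before running anything.

     #!/bin/bash
     # Copy (not move) a share folder from disk3 to disk5, preserving
     # permissions, timestamps and extended attributes.
     rsync -avhPX /mnt/disk3/Movies/ /mnt/disk5/Movies/
     # Only remove the originals after the copy has been verified.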
  9. You also need to know whether the VM can do a UEFI boot or not. If the vmdk is not set up for UEFI boot then you should be using the SeaBIOS option.
    1 point
  10. Do you mean vmdk (rather than vmdf)? If so, they are not supported by the Unraid GUI, although the underlying KVM system can normally use them. Since the GUI does not support them you have to manually enter the full path. Also, a vmdk file is not an ISO image but the main vdisk that is used to run the VM (a quick way to check one is sketched after this entry). You also need to ensure the controller type is set correctly to match what the original VM was set up to use. This is probably not the virtio type that Unraid defaults to but something like SATA. There have to be appropriate drivers already loaded into the VM or it will not be able to use the vdisk.
    1 point
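     A small sketch of how you might inspect such a vmdk from the Unraid command line with qemu-img (part of the QEMU/KVM stack). The path below is purely an example; qemu-img info simply reports the image format and size, and the optional convert step is one way to turn the vmdk into a raw vdisk if you prefer not to use it directly.

     #!/bin/bash
     # Inspect the vmdk (format, virtual size, etc.)
     qemu-img info /mnt/user/domains/MyVM/MyVM.vmdk
     # Optionally convert it to a raw image that the Unraid GUI handles natively
     qemu-img convert -p -f vmdk -O raw /mnt/user/domains/MyVM/MyVM.vmdk /mnt/user/domains/MyVM/vdisk1.img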
  11. Seems to work with min,% value set to 150. Thanks, @binhex
    1 point
  12. The upgrade went well on both of my servers. I love that all my Macs are running Time Machine now via SMB.
    1 point
  13. You misunderstand parity protection as it applies to single disks. The mdX devices are protected by parity, so there is no difference in protection whether you use /mnt/user/appdata or /mnt/diskX/appdata. In either case a single disk failure is still emulated by parity. The only thing you are losing is the ability to automatically spread the ../appdata folder across multiple disks and use a single point of access, /mnt/user/appdata. BTW, parity doesn't provide a resilient filesystem, only device failure protection. Each disk has a separate, independent filesystem; the /mnt/user FUSE filesystem is just the combination of the root folder paths on each separate disk.
    1 point
  14. I left my v2 docker alone, stopped it, and created a new docker with the v3 preview with its own appdata folder. I started fresh and everything is working fine so far. I imagine you can copy the DB folders into the new appdata folder, but I don't know that exact process (a rough sketch of such a copy follows this entry). I have only been running v3 for a couple of days, but I have not come across any issues yet.
    1 point
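     Since the poster above doesn't know the exact process, treat the following only as a rough sketch of the general idea: stop both containers, then copy the database folders across with ownership and permissions preserved. The container and folder names are hypothetical; the real ones depend on the application.

     #!/bin/bash
     # Stop both containers before touching their appdata (names are examples)
     docker stop myapp-v2 myapp-v3
     # Copy the database folder from the old appdata into the new one,
     # preserving ownership and permissions
     cp -a /mnt/user/appdata/myapp-v2/Database /mnt/user/appdata/myapp-v3/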
  15. @Squid I have updated the Dockerfile so that it uses the latest version of mono, and I have also updated the template to fix those issues.
    1 point
  16. Out of interest I ran a test between my Ubuntu 19.04 server VM (10.0.104.11) and my Ubuntu 19.04 desktop VM (10.0.104.12), using parallel TCP sessions to maximize the throughput. This is the command used on the desktop side (the matching server-side command is sketched after this entry):
     iperf3 -c 10.0.104.11 -i0 -t20 -P10 -w400k -R
     Since both VMs are on the same host, there is no true network limitation and performance is determined more by the VMs themselves. The Ubuntu server maxes out at 28.4 Gb/s on my server, which shows the driver is capable of handling very high speeds.
    1 point
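     To complete the picture from the entry above, the receiving side just needs a plain iperf3 listener; this assumes iperf3 is installed on the server VM and uses its default port.

     #!/bin/bash
     # Run on the server VM (10.0.104.11); listens on the default port 5201
     iperf3 -s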
  17. Thanks for that. Just in case someone else has this issue and to expand upon what @dmacias said. I had to go into Nerdtools and set it to download and install the pip package. Once that was done everything worked again. Thank you!
    1 point
  18. I also updated python and pip for 6.7 and added setuptools as a separate package
    1 point
  19. There is nothing wrong with having a reallocated sector as long as the value remains constant; modern drives are designed to reallocate sectors if one fails. If the count keeps increasing, then the drive needs replacing (a quick way to check the count with smartctl is sketched below).
    1 point
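     As a small companion to the entry above, one way to watch that value from the Unraid command line is with smartctl (the device name is an example; the same SMART attributes are also visible from the GUI):

     #!/bin/bash
     # Print SMART attributes for the drive and pick out the reallocated-sector counters
     smartctl -A /dev/sdb | grep -i -E 'Reallocated_Sector_Ct|Reallocated_Event_Count'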