
Leaderboard


Popular Content

Showing content with the highest reputation since 04/27/19 in all areas

  1. 10 points
    Summary: Support Thread for ich777 Gameserver Dockers (Counter-Strike: Source & Counter-Strike: GO, Team Fortress 2, ArmA III,... - complete list in the second post) Application: SteamCMD DockerHub: https://hub.docker.com/r/ich777/steamcmd All dockers are easy to set up and highly customizable. All dockers are tested with the standard configuration (port forwarding,...) to confirm they are reachable and show up in the server list from the "outside". The standard password for the gameservers, if enabled, is: Docker. Please read the description of each docker and its variables when you install it (some dockers need special variables to run).
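    As a rough illustration only (the image name comes from the DockerHub link above, but the volume path, password variable and ports below are placeholders I have assumed, not taken from this post - on Unraid you would normally fill these fields in via the docker template instead), running a Source-engine game server by hand would look roughly like this:

        docker run -d --name=css-server \
          -p 27015:27015 -p 27015:27015/udp \
          -v /mnt/user/appdata/css-server:/serverdata \
          -e SRV_PASSWORD=Docker \
          ich777/steamcmd

    Port 27015 is the usual Source-engine game port; whichever ports the template lists are the ones that need forwarding on your router.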
  2. 7 points
    To upgrade: If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page. If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page. If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg Refer also to @ljm42's excellent 6.4 Update Notes which are helpful especially if you are upgrading from a pre-6.4 release. Bugs: If you discover a bug or other issue in this release, please open a Stable Releases Bug Report. New in Unraid OS 6.7 release: New Dashboard layout, along with new Settings and Tools icons. Designed by user @Mex and implemented in collaboration with @bonienl. We think you will find this is a big step forward. Time Machine support via SMB. To enable this feature it is necessary to first turn on "Enhanced OS X interoperability" on the Settings/SMB page. Next, select a share to Export for Time Machine in the share SMB Security Settings section. Note: AFP is now deprecated and macOS users are encouraged to use SMB only. Enhanced syslog handling. On the Settings/Network Services page click on Syslog Server. Here you can designate this server to receive system logs from other Unraid OS servers, or forward this server's syslog to another local or remote server. Parity sync/Data rebuild/Check pause/resume capability. Main functionality is in place; pause/resume is not yet preserved across system restarts, however. Linux kernel 4.19. This is the latest Long Term Support kernel. Here are some other kernel-related updates: Added TCP "BBR Congestion control" and made it the default. This should improve network throughput, but probably not too many users will notice anything different. Added Bluetooth support in the Linux kernel. We did not add the user-space tools so this will be mostly useful to support Bluetooth in docker containers. AMD firmware update for Threadripper. Ignore case in validating user share names. If there are multiple top-level directories which differ only in case, then we use the first such share name encountered, checking in order: cache, disk1, disk2, ..., diskN. Additional top-level directories encountered will be ignored. For example, suppose we have: /mnt/cache/ashare /mnt/disk1/Ashare /mnt/disk2/ashare The name of the exported share will be 'ashare' and will consist of a union of /mnt/cache/ashare and /mnt/disk2/ashare. The contents of /mnt/disk1/Ashare will not appear in /mnt/user/ashare. If you then delete the contents of /mnt/user/ashare followed by deleting the 'ashare' share itself, this will result in share 'Ashare' becoming visible. Similarly, if you delete the contents of /mnt/cache/ashare (or they get moved), then you will now see share 'Ashare' appear, and it will look like the contents of 'ashare' are missing! Thankfully very few (if any) users should be affected by this, but it handles a corner case in both the presentation of shares in Windows networking and the storage of share config data on the USB flash boot device. New vfio-bind method. Since it appears that the xen-pciback/pciback kernel options no longer work, we introduced an alternate method of binding, by ID, selected PCI devices to the vfio-pci driver. This is accomplished by specifying the PCI ID(s) of devices to bind to vfio-pci in the file 'config/vfio-pci.cfg' on the USB flash boot device (a sample config is shown after this item's change log below). This file should contain a single line that defines the devices: BIND=<device> <device> ...
Where <device> is a Domain:Bus:Device.Function string, for example, BIND=02:00.0 Multiple device should be separated with spaces. The script /usr/local/sbin/vfio-pci is called very early in system start-up, right after the USB flash boot device is mounted but before any kernel modules (drivers) have been loaded. The function of the script is to bind each specified device to the vfio-pci driver, which makes them available for assignment to a virtual machine, and also prevents the Linux kernel from automatically binding them to any present host driver. In addition, and importantly, this script will bind not only the specified device(s), but all other devices in the same IOMMU group as well. For example, suppose there is an NVIDIA GPU which defines both a VGA device at 02:00.0 and an audio device at 02.00.1. Specifying a single device (either one) on the BIND line is sufficient to bind both device to vfio-pci. The implication is that either all devices of an IOMMU group are bound to vfio-pci or none of them are. Other highlights: Added the '--allow-discards' option to LUKS open. This should only have any effect when using encrypted Cache device/pool with SSD devices. It allows a file system to notice if underlying device supports TRIM and if so, passes TRIM commands down. Added 'telegram' notification agent support - thank you @realies Updated several base packages, including move to Samba 4.9 and docker 18.09. Fixed a number of minor bugs. Finally: as always, a big "Thank You!" to everyone who contributed and helped with testing. Version 6.7.0 2019-05-08 Base distro: aaa_elflibs: version 15.0 (rev 3) acpid: version 2.0.31 adwaita-icon-theme: version 3.32.0 at-spi2-atk: version 2.32.0 at-spi2-core: version 2.32.1 at: version 3.1.23 atk: version 2.32.0 bash: version 5.0.007 bin: version 11.1 (rev 3) bluez: version 4.101 bridge-utils: version 1.6 btrfs-progs: version v4.19.1 ca-certificates: version 20190308 cairo: version 1.16.0 cifs-utils: version 6.9 coreutils: version 8.31 curl: version 7.64.1 (CVE-2019-8907, CVE-2019-3822, CVE-2019-3823) cyrus-sasl: version 2.1.27 dbus: version 1.12.12 dhcpcd: version 7.2.0 diffutils: version 3.7 dmidecode: version 3.2 dnsmasq: version 2.80 docker: version 18.09.5 (CVE-2019-5736) e2fsprogs: version 1.45.0 etc: version 15.0 (rev 9) ethtool: version 5.0 file: version 5.36 (CVE-2019-8906, CVE-2019-8907) findutils: version 4.6.0 freetype: version 2.10.0 fribidi: version 1.0.5 gdbm: version 1.18.1 gdk-pixbuf2: version 2.38.0 git: version 2.21.0 glib2: version 2.60.1 glibc-solibs: version 2.29 glibc-zoneinfo: version 2019a glibc: version 2.29 gnutls: version 3.6.7 (CVE-2018-16868) gptfdisk: version 1.0.4 graphite2: version 1.3.13 grep: version 3.3 gtk+3: version 3.24.8 gzip: version 1.10 harfbuzz: version 2.4.0 haveged: version 1.9.4 hdparm: version 9.58 hostname: version 3.21 hwloc: version 1.11.11 icu4c: version 64.2 infozip: version 6.0 (CVE-2014-8139, CVE-2014-8140, CVE-2014-8141, CVE-2016-9844, CVE-2018-18384, CVE-2018-1000035) inotify-tools: version 3.20.1 intel-microcode: version 20180807a iproute2: version 5.0.0 iptables: version 1.8.2 iputils: version 20190324 irqbalance: version 1.5.0 jansson: version 2.12 jemalloc: version 4.5.0 jq: version 1.6 (rev2) kernel-firmware: version 20190424_4b6cf2b keyutils: version 1.6 kmod: version 26 libSM: version 1.2.3 libX11: version 1.6.7 libXcomposite: version 0.4.5 libXcursor: version 1.2.0 libXdamage: version 1.1.5 libXdmcp: version 1.1.3 libXext: version 1.3.4 libXft: version 2.3.3 libXmu: version 1.1.3 
libXrandr: version 1.5.2 libXxf86dga: version 1.1.5 libaio: version 0.3.112 libarchive: version 3.3.3 libcap-ng: version 0.7.9 libcap: version 2.27 libcroco: version 0.6.13 libdrm: version 2.4.98 libedit: version 20190324_3.1 libepoxy: version 1.5.3 libestr: version 0.1.11 libevdev: version 1.6.0 libgcrypt: version 1.8.4 libgpg-error: version 1.36 libjpeg-turbo: version 2.0.2 libnftnl: version 1.1.2 libpcap: version 1.9.0 libpng: version 1.6.37 (CVE-2018-14048 CVE-2018-14550 CVE-2019-7317) libpsl: version 0.21.0 libpthread-stubs: version 0.4 (rev 3) librsvg: version 2.44.11 libssh2: version 1.8.2 (CVE-2019-3855, CVE-2019-3856, CVE-2019-3857, CVE-2019-3858, CVE-2019-3859, CVE-2019-3860, CVE-2019-3861, CVE-2019-3862, CVE-2019-3863) libtirpc: version 1.1.4 libvirt: version 5.1.0 libwebp: version 1.0.2 libwebsockets: version 3.1.0 libxcb: version 1.13.1 libxkbfile: version 1.1.0 libxml2: version 2.9.9 libxslt: version 1.1.33 libzip: version 1.5.2 lm_sensors: version 3.5.0 logrotate: version 3.15.0 lsscsi: version 0.30 lvm2: version 2.03.02 lz4: version 1.8.3 lzip: version 1.21 mc: version 4.8.22 mcelog: version 162 mesa: version 18.3.0 miniupnpc version: 2.1 mkfontscale: version 1.2.1 mozilla-firefox: version 66.0 (CVE-2018-18500, CVE-2018-18504, CVE-2018-18505, CVE-2018-18503, CVE-2018-18506, CVE-2018-18502, CVE-2018-18501, CVE-2018-18356, CVE-2019-5785, CVE-2018-18511, CVE-2019-9790, CVE-2019-9791, CVE-2019-9792, CVE-2019-9793, CVE-2019-9794, CVE-2019-9795, CVE-2019-9796, CVE-2019-9797, CVE-2019-9798, CVE-2019-9799, CVE-2019-9801, CVE-2019-9802, CVE-2019-9803, CVE-2019-9804, CVE-2019-9805, CVE-2019-9806, CVE-2019-9807, CVE-2019-9809, CVE-2019-9808, CVE-2019-9789, CVE-2019-9788) mpfr: version 4.0.2 nano: version 4.2 ncompress: version 4.2.4.5 ncurses: version 6.1_20190420 netatalk: version 3.1.12 (CVE-2018-1160) nettle: version 3.4.1 (CVE-2018-16869) nghttp2: version 1.38.0 nginx: version 1.14.2 (+ nchan 1.2.3) (CVE-2018-16843, CVE-2018-16844, CVE-2018-16845) ntp: version 4.2.8p13 (CVE-2019-8936) oniguruma: version 6.9.1 (CVE-2017-9224, CVE-2017-9225, CVE-2017-9226, CVE-2017-9227, CVE-2017-9228, CVE-2017-9229) openldap-client: version 2.4.47 openssh: version 8.0p1 openssl-solibs: version 1.1.1b (CVE-2019-1559) openssl: version 1.1.1b (CVE-2019-1559) p11-kit: version 0.23.15 pciutils: version 3.6.2 pcre2: version 10.33 pcre: version 8.43 php: version 7.2.18 (CVE-2019-11034, CVE-2019-11035, CVE-2019-11036) pixman: version 0.38.4 pkgtools: version 15.0 (rev 23) pv: version 1.6.6 qemu: version 3.1.0 (rev 2) patched pcie link speed and width support rpcbind: version 1.2.5 rsyslog: version 8.40.0 samba: version 4.9.7 (CVE-2018-14629, CVE-2018-16841, CVE-2018-16851, CVE-2018-16852, CVE-2018-16853, CVE-2018-16857) sdparm: version 1.10 sed: version 4.7 sg3_utils: version 1.44 shadow: version 4.6 shared-mime-info: version 1.12 smartmontools: version 7.0 spice-protocol: version 0.12.14 spice: version 0.14.1 sqlite: version 3.28.0 sudo: version 1.8.27 sysvinit-scripts: version 2.1 (rev 26) sysvinit: version 2.94 talloc: version 2.2.0 tar: version 1.32 tdb: version 1.4.0 tevent: version 0.10.0 tree: version 1.8.0 ttyd: version 1.4.2 ttyd: version 20190223 util-linux: version 2.33.2 wget: version 1.20.3 (CVE-2019-5953) xauth: version 1.0.10 (rev 3) xfsprogs: version 4.20.0 xkeyboard-config: version 2.25 xprop: version 1.2.4 xterm: version 341 xtrans: version 1.4.0 zstd: version 1.4.0 Linux kernel: version: 4.19.41 added drivers: CONFIG_USB_SERIAL_CH341: USB Winchiphead CH341 Single Port Serial Driver 
CONFIG_X86_MCELOG_LEGACY: Support for deprecated /dev/mcelog character device added TCP BBR congestion control kernel support and set as default: CONFIG_NET_KEY: PF_KEY sockets CONFIG_TCP_CONG_BBR: BBR TCP CONFIG_NET_SCH_FQ: Fair Queue CONFIG_NET_SCH_FQ_CODEL: Fair Queue Controlled Delay AQM (FQ_CODEL) added Bluetooth kernel support: CONFIG_BT: Bluetooth subsystem support CONFIG_BT_BREDR: Bluetooth Classic (BR/EDR) features CONFIG_BT_RFCOMM: RFCOMM protocol support CONFIG_BT_RFCOMM_TTY: RFCOMM TTY support CONFIG_BT_BNEP: BNEP protocol support CONFIG_BT_BNEP_MC_FILTER: Multicast filter support CONFIG_BT_BNEP_PROTO_FILTER: Protocol filter support CONFIG_BT_HIDP: HIDP protocol support CONFIG_BT_HS: Bluetooth High Speed (HS) features CONFIG_BT_LE: Bluetooth Low Energy (LE) features CONFIG_BT_HCIBTUSB: HCI USB driver CONFIG_BT_HCIBTUSB_AUTOSUSPEND: Enable USB autosuspend for Bluetooth USB devices by default CONFIG_BT_HCIBTUSB_BCM: Broadcom protocol support CONFIG_BT_HCIBTUSB_RTL: Realtek protocol support CONFIG_BT_HCIUART: HCI UART driver CONFIG_BT_HCIUART_H4: UART (H4) protocol support CONFIG_BT_HCIUART_BCSP: BCSP protocol support CONFIG_BT_HCIUART_ATH3K: Atheros AR300x serial support CONFIG_BT_HCIUART_AG6XX: Intel AG6XX protocol support CONFIG_BT_HCIUART_MRVL: Marvell protocol support CONFIG_BT_HCIBCM203X: HCI BCM203x USB driver CONFIG_BT_HCIBPA10X: HCI BPA10x USB driver CONFIG_BT_HCIVHCI: HCI VHCI (Virtual HCI device) driver CONFIG_BT_MRVL: Marvell Bluetooth driver support CONFIG_BT_ATH3K: Atheros firmware download driver firmware: added BCM20702A0-0a5c-21e8.hcd added BCM20702A1-0a5c-21e8.hcd md/unraid: version 2.9.7: setup queue properties correctly support sync pause/resume fix: kernel BUG if read phase of read/modify/write with FUA flag set fails on stripe with multiple read failures OOT Intel 10Gbps network driver: ixgbe: version 5.5.5 OOT Tehuti 10Gbps network driver: tn40xx: version 0.3.6.17 patch: support Mozart 395S chip patch: hpsa: change scsi_host_template.max_sectors from 2048 to 1024 per request Management: add early vfio-bind utility restore PHP E_WARNING in /etc/php/php.ini support Apple Time Machine via SMB acpi: silence undefined ACPI event logging docker: preserve container fixed IPv4 and IPv6 addresses across reboot/docker restart emhttp: bug fix: cache-only/cache-prefer share not initially created on cache emhttp: ignore *.key files that begin with "._" emhttp: properly dismiss "Restarting services" message emhttp: use mkfs.btrfs defaults for metadata and SSD support emhttpd: Add --allow-discard luksOpen option emhttpd: Increase number of queued inotify IN_MOVED_TO events from 16 to 1024 for /var/local/emhttp directory. 
fix: docker log rotation fix: inconsistent share name case fix: terminal instances limited to 8 (now lifted) fstab: mount USB flash boot device with 'flush' keyword networking: pass user-specified MAC address through to bridge rc.nginx: eliminate unnecessary 10 sec delays rc.nginx: implement better status wait loop - thanks ljm42 rc.sshd: only copy new key files to USB flash boot device smartmontools: update drivedb and hwdata/{pci.ids,usb.ids,oui.txt,manuf.txt} smb: when Enhanced OS X interoperability set, include "fruit:nfs_aces = no" to be compatible with Unraid security model smb: disable samba auto-register with avahi for now webgui: Add GameServers to category for docker containers webgui: Add log-size and log-file options to docker run command webgui: Added new font icons webgui: Added parity pause/resume button webgui: Added syslog server functionality webgui: Allow optional notifications on background docker update checks webgui: Allow plugins to use font awesome for icon webgui: Dashboard: add settings shortcuts webgui: Dashboard: added control buttons webgui: Dashboard: create more space for Docker/VM names (3 columns) webgui: Dashboard: cut off long container and VM names webgui: Dashboard: fix color consistency webgui: Dashboard: fix incorrect memory type webgui: Dashboard: fixed display of Wattage in UPS load webgui: Dashboard: fixed hanging when no share exports are defined webgui: Dashboard: fixed wrapping of long lines webgui: Dashboard: fixed wrong display of memory size webgui: Dashboard: include links to settings webgui: Dashboard: replace inline style statements for style section webgui: Dashboard: table adjustment in three columns view webgui: Dashboard: table right adjustment in two columns view webgui: Dashboard: use disk thresholds for utilization bars webgui: Dashboard: wrap long descriptions webgui: Diagnostics: dynamic file name creation webgui: Do not capitalize path names in title of themes Azure and Gray webgui: Docker: single column for CPU/Memory load webgui: Docker: Add More Info link (docker registry) to context menus webgui: Docker: textual update webgui: Docker: memory usage in advanced view webgui: Escape quotes on a container's template webgui: File browser: force download of files webgui: Fix Background color when installing container webgui: Fixed share/disk size calculation when names include space webgui: Fixed version display in system information webgui: Fixed: slots selection always disabled after "New Config" webgui: Keep status visible for paused array operations webgui: Main: make disk identification mono-spaced font webgui: Minor textual changes webgui: Move "Management Access" directly under Settings webgui: New icon reference webgui: OS update: style correction webgui: Open link under Unraid logo in new window webgui: Per Device Font Size Setting webgui: Permit configuration of parity device(s) spinup group.
webgui: Plugin manager: add .png option to Icon tag webgui: Plugin manager: align icon size with rest of the GUI webgui: Plugin manager: enlarge readmore height webgui: Plugin manager: table style update webgui: Position context menu always left + below icon webgui: Prevent update notification if plugin is not compatible webgui: Replace string "OS X" with "macOS" webgui: Replaced orb png icons by font-awesome webgui: Revamped dashboard page webgui: Share settings: fixed exclude "All" from write function webgui: Suppress PHP warnings from corrupted XML files webgui: Switch button: use blue color in ON state webgui: Switch plugins to a compressed download webgui: Syslinux config: replace checkbox with radio button webgui: Syslog: add '' entry in local folder selection webgui: Syslog: added log rotation settings webgui: Syslog: added viewer webgui: Syslog: included rsyslog.d conf files and chmod 0666 webgui: Syslog: sort logs webgui: Updated Unraid icons webgui: Updated icons and cases webgui: Updated jquery cookie script from 1.3.1 to 1.4.1 webgui: Use cookie for display setting font size webgui: VM manager: remove and rebuild USB controllers webgui: VM page: allow long VM names webgui: added new case icons webgui: other GUI enhancements webgui: prevent dashboard bar animations from queuing up on inactive browser tab webgui: sort notification agents alphabetically, add telegram notifications webgui: syslog icon update webgui: telegram notification agent bug fixes
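    To illustrate the vfio-bind item above, the whole of config/vfio-pci.cfg is just that one BIND line; the PCI addresses here are placeholders, so use the ones from your own System Devices page or lspci output:

        BIND=02:00.0 03:00.0

    After a reboot, running lspci -nnk from the console should show "Kernel driver in use: vfio-pci" for those devices, and for everything else in the same IOMMU groups.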
  3. 7 points
    As many are aware, Intel has had some serious security vulnerabilities released over the past year: "Spectre", "Meltdown", and now one of the strongest, dubbed "Zombieload" aka MDS. Intel seems to have some skeletons coming out of the closet, which has seen a CEO resign and market share lost to AMD. The mitigations to these vulnerabilities have all individually come with a performance cost, Spectre/Meltdown in the range of ~15%, and now MDS is rumored to need Hyperthreading disabled altogether to mitigate, costing upwards of 30-40% (sources are based on the internet, so take with a grain of salt). So add them all together, and that's a pretty hefty penalty for users who may not even be a target for this kind of attack. Personally, I have nothing that sensitive at my home running in individual dockers or VMs that I would worry enough about if someone from one area could read data from the other. As well, my local users are myself and my wife 🙂 , so she could just TAKE the money from the bank in person 🙂 Not a threat to me. I don't care if someone is watching me play games on a VM, or is watching that I am encoding or decrypting a movie, big deal, not much going on at my house anyone would work hard enough to watch... and if someone did make it that far to target me, I've got bigger problems than speculative execution, like checking my firewall rules!! So, with that said, this is ALL AT YOUR OWN RISK; neither I nor the community assume any responsibility for damage due to disabling these mitigations. As of 6.7.0, we have kernel level 4.19.41 which marks the last kernel to NOT mitigate against MDS. To disable Spectre/Meltdown for release 6.7.0, adjust your syslinux.cfg file as follows (and reboot): pti=off spectre_v2=off l1tf=off nospec_store_bypass_disable no_stf_barrier As of 6.7.1 RC1, we have kernel level 4.19.43 which marks the first kernel TO mitigate against Spectre/Meltdown AND MDS. To disable Spectre/Meltdown/MDS for release 6.7.1 RC1+, adjust your syslinux.cfg as follows (and reboot): pti=off spectre_v2=off l1tf=off mds=off nospec_store_bypass_disable no_stf_barrier You can validate the mitigations on the OS before/after with: cat /sys/devices/system/cpu/vulnerabilities/* BEFORE: Should look similar to (notice the Mitigations): Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable Mitigation: Clear CPU buffers; SMT vulnerable Mitigation: PTI Mitigation: Speculative Store Bypass disabled via prctl and seccomp Mitigation: __user pointer sanitization Mitigation: Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling AFTER: Should look similar to (notice the Vulnerable): Mitigation: PTE Inversion; VMX: vulnerable Vulnerable; SMT vulnerable Vulnerable Vulnerable Mitigation: __user pointer sanitization Vulnerable, IBPB: disabled, STIBP: disabled
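    For context, a minimal sketch of where those flags go: on a stock install the file is /boot/syslinux/syslinux.cfg (also editable from the webgui under Main -> Flash -> Syslinux Configuration), and the parameters are added to the "append" line of the default boot entry. The 6.7.1-rc1+ variant with mds=off is shown; your existing entry may contain additional parameters that should be kept:

        label Unraid OS
          menu default
          kernel /bzimage
          append pti=off spectre_v2=off l1tf=off mds=off nospec_store_bypass_disable no_stf_barrier initrd=/bzroot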
  4. 6 points
    This video is the first part of a series taking an in-depth look at Unraid shares. This first one gives a brief introduction, then looks at SMB shares and how Windows PCs interact with them. It shows how to create and connect to both public and private shares, etc. It goes through various problems that people have and the solutions to overcome them. It also shows some useful tips, such as adding features in the SMB extras to create custom shares. Hope that you find this video interesting.
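    As an illustration of the kind of "SMB extras" addition mentioned above (the share name, path and user are made-up examples, not taken from the video), a custom share added to the SMB Extras box on the Settings/SMB page uses ordinary Samba syntax:

        [customshare]
          path = /mnt/user/isos
          browseable = yes
          read only = no
          valid users = youruser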
  5. 6 points
    OK, fixed. Here's the breakdown. Tried to compile on v6.7.0 and it failed complaining of missing libraries. Now one of the dependencies we use is Mesa, which includes these libraries. So I tried a known version of Unraid which I compiled 8 days ago, v6.7.0rc8, using the same packages that I used for v6.7.0 today, and it also failed to build with the same error. There have been no updates to the Slackbuild of Nvidia drivers since I compiled v6.7.0rc8, so that ruled out a problem with that. So I started to look at the packages I was installing to spot any issues. There has been a major version change between the GCC compiler version Limetech use and the version I was pulling from slackware-current (v8.3 to v9.1) which I thought may have been the issue, so my working theory was to downgrade GCC and the other GCC bits we use and see if that worked. I started to build GCC from source, which was taking a VERY long time, as there are no longer any Slackware packages available to download for GCC v8.3, but in the meantime I managed to find an out-of-date Slackware mirror which still had them, so I switched everything back to v8.3 expecting it to work. Unfortunately, although everything looked "cleaner" when compiling, it still failed with the same error. So that shot that theory down. I chose at this point to pull the already published DVB builds as I'd compiled them with v9.1 and it seemed like it could be asking for trouble. I then took a look at Mesa again, a bit more closely, and decided to examine the slackware-current changelog for any changes which may have caused the failure. It was then I noticed that just this morning there had been a change to slackware-current which meant mesa now had a new dependency, libglvnd, which was not required before as it was previously included in the mesa package. Essentially, if I'd compiled this at the time of release (I couldn't, I was asleep) then I wouldn't have hit this error until the next version of Unraid. Once I added that package it all worked and compiled fine, so I tried booting my VM with the Nvidia build and that was successful, but I don't have a card in it to test, so I then upgraded my bare metal Unraid with my 1050ti in it and it worked with no issues. I've typed this out so those of you that wonder what the delays are have your answers. If you follow all that then feel free to swing by Discord and offer to help out with stuff. Basically Slackware doesn't spoon feed you; if you installed Mesa on Ubuntu, for instance, it would pull all the dependencies in automatically with its package manager, whereas on Slackware you have to figure them out for yourself. There are currently 90 dependencies for the Nvidia build.
  6. 5 points
    There are several ways to disable the various mitigations, both via kernel parameters and at run-time (though this will require us to include 'debugfs', which we can do). Google 'linux disable meltdown spectre zombieload mitigations' for a number of how-to articles. If it gets serious enough we'll probably add debugfs support and perhaps a config page. Having all these mitigations in place is mainly a C.Y.A. move in my opinion.
  7. 4 points
    In this inaugural blog in the New Users Blog Series, we talk about:
    - Unraid and the USB flash drive
    - Using the USB Flash Creator tool
    - How drives are counted towards the license limit
    - How to reset your root password
    - How to rename your server (Tower)
    - How to change banner images and themes
    Check it out and let us know what you think! Have ideas/questions about Unraid that you'd like to see a blog written about? Post them here or send me a DM. Cheers! https://unraid.net/blog/unraid-new-users-blog-series
  8. 4 points
    I'm not blaming the user. This really isn't that different from the ReiserFS (RFS) issues. It worked perfectly fine earlier, but then issues started popping up as the software evolved (the Linux kernel). The proactive users took the initiative and migrated away from RFS to XFS before it became a larger issue. Others waited until they had larger issues, which always happens at inconvenient times, and were forced to migrate anyway. Switching from hardware with questionable software dynamics to newer hardware with better software dynamics to prevent larger issues is a wise investment for something you're already invested in to keep your data safe. Running a server for data safety is never a one-and-done event; it's an ongoing effort that requires investment and maintenance. Would you rather take your car into the shop for replacement tires when you notice signs of wear and tear, or during the first leg of a road-trip vacation where you're forced to put on the anemic spare tire at the side of the road in the middle of nowhere? It's up to you whether you take preventative measures or not.
  9. 4 points
    Doesn't mean we won't add it
  10. 4 points
    The new image has been built, so if you now pull down latest it should work as expected. Sent from my EML-L29 using Tapatalk
  11. 4 points
    ok guys, spotted the issue. it was a legacy bug that the additional logging picked up, causing the exit of sabnzbd (even though it was running). sadly i cannot currently build a new image with the fix as docker hub is currently in maintenance mode and it looks like it won't be back online for approx another 8 hours, so i will have to press the button in the morning.
  12. 4 points
    We'll include QEMU 4.x in Unraid 6.8
  13. 4 points
    We have had nothing but problems with ADATA branded drives. It is definitely not anything Unraid is doing. They are just a cheap brand with very poor reliability. Sent from my Pixel 3 XL using Tapatalk
  14. 3 points
    This is a special release in light of the recent so-called Zombieload vulnerability revealed by Intel earlier this week. Normally we don't generate -rc stable patch releases; however, in the interest of maintaining our, and the Community's, sanity, this release is exactly the same as 6.7.0 except for updated Intel CPU microcode and the corresponding Linux kernel (4.19.43). If 6.7.1-rc1 "fails" with something but 6.7.0 "works", then we can be fairly confident the microcode/kernel is to blame. We have released this on the next branch in order to get some testing before publishing for everyone on stable. Please post here in this topic any issues you run across which are not present in 6.7.0 - that is, issues that you think can be directly attributed to the microcode and/or kernel changes. Version 6.7.1-rc1 2019-05-17 Linux kernel: version: 4.19.43 intel-microcode: version 20190514a
  15. 3 points
    As an aside, the Fix Common Problems plugin has been issuing warnings about Marvell controllers for quite a while now. Sent via telekinesis
  16. 3 points
    I agree and I mentioned the fact during the rc phase but it seems our preference for function over style puts us in a minority.
  17. 3 points
    Ok, I used to be able to connect to the Host network with this before the update... that allowed me to be assigned an IP on my WiFi subnet, which then allowed me to access the UnRAID GUI. NOW, the instructions make us connect to the Bridge network... so how do we access the UnRAID GUI if we are on the bridge network? OpenVPN dished me out a 172.27.xxx.xxx address (docker subnet). Update: Figured out how to access the UnRAID GUI. Did NOT figure out how to be assigned a local address on my primary WiFi subnet though. In the Admin Page -> VPN Settings, go to the Routing section and add a line for the subnet you want your clients to have access to (for example, I added 192.168.1.0/24, which is my primary WiFi subnet and where I can access my UnRAID GUI locally)
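    For anyone running a plain OpenVPN server rather than the Access Server admin GUI, the rough equivalent of that Routing entry (my addition, not from the post above, and using your own LAN subnet) is a pushed route in the server config:

        push "route 192.168.1.0 255.255.255.0"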
  18. 3 points
    Not blaming the user, simply offering an alternative to get the server back working again. Sometimes the easiest solution is a h/w one. Sometimes the only solution is a h/w one. But to your point: you're right, something broke in the kernel. When this was first reported we spent quite a bit of time, short of bisecting the kernel, trying to find out what might have caused this. We are willing to try any patch you, or someone else, might run across. For example, take a look at this post in the -rc8 release topic regarding the 'hpsa' storage driver. I delayed this 6.7.0 release a day to put this patch in, and the user has reported that it works and fixes the issue. Note that we never see these kinds of issues in the h/w we have - we would not publish a release where we see problems like this. When I see an issue like this, the first thing I do is 'google' the error message and start searching for similar reports. But I can't spend more than a few hours doing this on any one issue. If a solution is not apparent, then I let it sit for a while because eventually it will happen in one of the bigger distros such as Ubuntu or Fedora, and those guys have the resources to further investigate the problem. This is kinda how it works with open source, unfortunately.
  19. 3 points
    Maybe I can offer a different perspective. First let's go to the "1,000-foot bird's-eye view" and ask the question: what are we trying to accomplish? Likely the answer is to protect data. The next question is: what are you willing to do to protect said data? Well, we built a dedicated computer to store our important files. So we are going above and beyond what most "common" folks do to store files. When you start adding up the money spent on the computer hardware plus the actual storage drives, there is some serious money invested. I think if you look at it in that regard and ask whether your data is worth an extra $100 for a piece of hardware that is known to work great and lets you stay on up-to-date software that comes with the latest security patches, or whether it is worth it to use a piece of hardware with known issues that will likely not get better... well, I know what my decision would be. Anyway, just something to think about.
  20. 3 points
    You can plug your USB flash device into a PC and then make a backup (just drag contents to a temp folder on your desktop). Next use the USB Creator Tool to re-install Unraid OS on your flash. You can put the last version you were running, or try 6.7 again. Then copy the contents of your 'config' folder backup to the USB flash 'config' folder. If you had any custom 'syslinux' settings, you could restore that too. Eject flash, reboot server and 'should work' 👍
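    If you prefer the command line for the backup/restore part, something like this is all that is involved (the mount point is an example - the flash is normally labeled UNRAID, so adjust the path to wherever your PC mounts it):

        # back up the flash's config folder before re-flashing
        cp -r /media/UNRAID/config ~/unraid-flash-backup-config
        # ...run the USB Creator Tool, then copy the backup onto the fresh flash
        cp -r ~/unraid-flash-backup-config/. /media/UNRAID/config/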
  21. 3 points
    Norco 4224 case
    9x 6TB
    6x 4TB
    2x 500GB SSD
    1x 1TB SSD
    Intel® Core™ i7-5820K
    Asus X99-DELUXE
    32 GB DDR4
    Nvidia P2000
    3x LSI 9211-8i
    This has been updated a little since these pictures were taken. I'll update here when I take new pictures.
  22. 2 points
    Plugin Name: Unraid Nvidia Github: https://github.com/linuxserver/Unraid-Nvidia-Plugin This plugin from LinuxServer.io allows you to easily install a modified Unraid version with Nvidia drivers compiled and the docker system modified to use an nvidia container runtime, meaning you can use your GPU in any container you wish. We will be asking the mods to remove any posts discussing circumvention of any Nvidia restrictions. We have worked hard to bring this work to you, and we don't want to upset Nvidia. If they were to threaten us with any legal action, all our source code and this plugin would be removed. Remember we are all volunteers, with regular jobs and families to support. Please, if you see anyone else mentioning anything that contravenes this rule, flag it up to the mods. People that discuss this here could potentially ruin it for all of you. EDIT: 25/5/19 OK everyone, the Plex script seems to be causing more issues than the Unraid Nvidia build as far as I can tell. From this point on, to reduce the unnecessary noise and confusion on this thread, I'm going to request that whoever is looking after, documenting or willing to support the Plex scripts spins off their own thread. We will only be answering support questions from people not using the script. If your post is regarding Plex and you do not EXPLICITLY state that you are not using the Plex script then it will be ignored. I know some of you may think this is unreasonable, but it's creating a lot of additional work/time commitments for something I never intended to support and something I don't use (not being a Plex user). May I suggest respectfully that one of you steps forward to create a thread, document it, and support it in its own support place. I think we need to decouple issues with the work we've done from issues with a currently unsupported script. Thanks.
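    For anyone wondering what "use your GPU in any container" looks like in practice, the usual nvidia-container-runtime pattern is an extra runtime flag plus a GPU environment variable; the container name, image and GPU UUID below are placeholders, and on Unraid these would normally go into the container template's Extra Parameters and variables rather than a raw docker run:

        docker run -d --name=some-app \
          --runtime=nvidia \
          -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
          -e NVIDIA_DRIVER_CAPABILITIES=all \
          some/image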
  23. 2 points
    I don't know about you, but my wife does this all the time (usually in anger) and expects me to remember everything...
  24. 2 points
    AIRVPN.ORG, best VPN I've had.
  25. 2 points
    The unbalance plugin will do what you want; it's a graphical front end that utilizes the rsync command line to do the work. There is also a procedure that takes some of the risk out of moving data off of one array member, emulated or not, where you exclude the drive from the global shares configuration. That will allow you to enable disk shares and "safely" copy from that disk to the user share system, which will allocate the data according to your split level and disk allocation strategy. If you don't GLOBALLY exclude the disk from user shares, not just the regular exclude, it's going to corrupt the data if you try to copy from disk to user share. Unbalance operates from disk to disk instead of user share, so it's not affected. Also, I'd recommend copying instead of moving. It will be faster, and have the same end result. You will have to rebuild parity without that disk after you get the data safe. In any case, I hope the rest of your drives are perfectly healthy, because you are relying on them to perform flawlessly for the entire duration of this procedure. You say you plan on upgrading; it would be safer to go ahead with the upgrade and rebuild onto a larger disk. You would be operating at risk for less time.
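    To illustrate the disk-to-disk approach the plugin automates (the disk numbers here are made up, and for the reason given above only copy between /mnt/diskX paths, never between a disk and /mnt/user), a manual rsync copy would look like:

        # copy the contents of disk3 onto disk5, preserving attributes, showing progress
        rsync -avPX /mnt/disk3/ /mnt/disk5/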
  26. 2 points
    This Docker is in BETA status. Some controllers may not be detected, and there may be errors thrown. Please help in resolving these issues if this occurs while you use this beta version. Installation: Via the Community Applications plugin, search for "DiskSpeed". Manual installation (the Community Applications plugin is having issues currently, here's a workaround for now): Save the attached "my-DiskSpeed.xml" file to your NAS under \\tower\flash\config\plugins\dockerMan\templates-user. View the Docker tab in your unRAID Administrator, click on "Add Container". Under "Select a template", pick "my-DiskSpeed". The defaults should work as-is unless you have port 18888 already in use. If so, change the Web Port & WebUI settings to a new port number. The Docker will create a directory called "DiskSpeed" in your appdata directory to hold persistent data. Note: Privileged mode is required so that the application can see the controllers & drives on the host OS. This docker will use up to 512MB of RAM. RAM optimization will happen in a later BETA. Running: View the Docker tab in your unRAID Administrator, click on the icon next to "DiskSpeed" and select WebUI. A new window will open. On first-time run (or after the Docker app is updated or you select to rescan hardware), the application will scan your system to locate drive controllers & the hard drives attached to them. Drive Images: As of this post, the Hard Drive Database (HDDB) has 825 drive models in 20 brands. If you have one or more drives that do not have a predefined image in the HDDB, you have a couple of options available - wait for me to add the image, which will be displayed after you click "Rescan Controllers", or add the drive yourself by editing it and uploading a drive image for it. You can view drive images in the HDDB to see if there's an image that'll fit your drive and optionally upload it so others can benefit. Controller & Drive Identification Issues: Some drives, notably SSDs, do not reveal the Vendor correctly or at all. If you view the Drive information and it has the same value for the vendor as the model, or an incorrect or missing Vendor, please inform me so that I can manually add the drive to the database or add code to handle it. If you have a controller that is not detected, please notify me. Benchmarking Drives: The current method of benchmarking the hard drives is to read the hard drive at certain percentages for 15 seconds and take the average speed over each of those seconds, except for the first 2 seconds, which tend to trend high. Hard drives report an optimal block size to use while reading, but if not, a block size of 128K is used. Since Docker under unRAID requires the array to be running, SpeedGap detection was added to detect disk drive activity during the test by comparing the smallest & largest amount of bytes read over the 15 seconds. If a gap is detected over a given size, which starts at 45MB, the gap allowed is increased by 5MB and the spot retested. If a drive keeps triggering the SpeedGap detection over each spot, you may need to disable the SpeedGap detection as the drive is very erratic in its read speeds. One drive per controller is tested at the same time. In the future, each drive will be tested to see its maximum transfer rate and the system will be tested to see how much data each controller can transfer and how much data the entire system bus can handle.
Then multiple drives over multiple controllers will be read simultaneously while keeping the overall bandwidth by controller & system under the maximum transfer rate. Contributing to the Hard Drive Database If you have a drive that doesn't have information in the Hard Drive Database other than the model or you've performed benchmark tests, a button will be displayed at the bottom of the page labeled "Upload Drive & Benchmark Data to the Hard Drive Database". The HDDB will display information given up by the OS for the drives and the average speed graphs for comparison. Application Errors If you get an error message, please post the error here and the steps you took to cause it to happen. There will be a long string of java diagnostics after the error message (java stack) that you do not need to include, just the error message details. If you can't get past the Scanning Hardware screen, change the URL from http://[ip]:[port]/ScanControllers.cfm to http://[ip]:[port]/isolated/CreateDebugInfo.cfm and hit enter. Note: The unRAID diagnostic file doesn't provide any help. If submitting a diagnostic file, please use the link at the bottom of the controllers in the Diskspeed GUI. Home Screen (click top label to return to this screen) Controller Information Drive Information Drive Editor my-DiskSpeed.xml
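    For the curious, the timed-read idea described under "Benchmarking Drives" can be roughly approximated by hand from the Unraid console; this is only a sketch of the concept with a placeholder device name, not the docker's actual code:

        # read 1 GiB sequentially starting at roughly the 50% point of the disk,
        # using the 128K block size mentioned above; dd reports the average speed when done
        DISK=/dev/sdX
        HALF_BYTES=$(( $(blockdev --getsize64 $DISK) / 2 ))
        dd if=$DISK of=/dev/null bs=128K count=8192 iflag=skip_bytes skip=$HALF_BYTES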
  27. 2 points
  28. 2 points
    Nice work! Right, I think a better place would be in the Security Board. If you want to add the post there, I'll make it 'sticky'. Also, thanks to all testing this out. We have no choice but to keep marching on with new kernel releases.
  29. 2 points
    You can use Tools->New Config to reset the array, make any drive assignments you want and then start the array. If you have a parity drive assigned then Unraid will build parity based on the current assignments. This will not actually erase any data on the drives, but it will allow you to specify the current drive set that Unraid is using. If you want to erase the data on a drive you need to do the following:
    - stop the array
    - click on any drive you want to erase the data on and change its file system type
    - start the array. The drive(s) will now show as unmountable and there is a check box to allow you to format unmountable drives (it also gives you their serial numbers so you can check they are the ones expected). Click the check box and tell the system to format the drive(s). This should only take a few minutes.
    - stop the array
    - click on a drive and change the file system to the one you want to end up with
    - start the array and repeat the format step
    At this point your disk(s) will show that they are basically empty. There will be a small amount of space showing as used, but that is the overhead of creating the empty file system on the drive.
  30. 2 points
    Thank you for the update and screen shots! And for your patience as we worked through this.
  31. 2 points
    Yes, you have a Marvell controller type (9230) which is affected:
    0a:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller [1b4b:9230] (rev 11)
    Subsystem: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller [1b4b:9230]
    Go to Main -> Flash -> Syslinux Configuration and change the "append" line of the default (green) section to:
    for Intel processors: append iommu=pt initrd=/bzroot
    for AMD processors: append amd_iommu=pt initrd=/bzroot
    Apply and reboot your system.
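    For reference, the whole default (green) section in the Syslinux Configuration box then looks roughly like this on a stock install (a sketch only - keep any other parameters your append line already has):

        label Unraid OS
          menu default
          kernel /bzimage
          append iommu=pt initrd=/bzroot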
  32. 2 points
    Looks like a kernel bug with some Ryzen and NVMe with IOMMU enabled: https://bugzilla.kernel.org/show_bug.cgi?id=202665
  33. 2 points
    Yes we're watching this and waiting for the microcode release. Interesting comment from Greg K-H: Doesn't exactly give you the "warm and fuzzies" right? Then again, could be worse: think of the headaches over at Intel 🤣
  34. 2 points
    Sorry, didn't realize that the savegames are stored in the home folder... :/ Please give me a few hours to update the docker, i will make a special serverconfig.xml where the savegames are located in the main folder. I will report back.
  35. 2 points
    Read through the last few posts prior to yours and you will be up and running again in no time.
  36. 2 points
    I like the new dashboard, but is it possible for users to reorder the boxes? I'd like to have Parity appear above Shares and Users, as the utilization counters are useful to have on the screen without scrolling (even with Shares and Users collapsed, it doesn't quite fit on a 27" display — Edit: this is with the window sized for two columns. Making the window a bit wider gives a better layout, but it takes up rather a lot of the screen!) Similarly, I'd want to put Motherboard below Processor and Memory in the server view on the left, since it doesn't tend to change much, and you know when you've changed it Not suggesting making these changes for everybody, as everybody has different needs — but I can't see any way of reordering them myself.
  37. 2 points
    It's a design disease right now. Everyone's doing it. Drives me crazy on my Mac in Finder and various programs. It may seem like nitpicking, but it really, REALLY makes me stop and look for the right element ALL the time, even for well-known sections like my Finder's side panel. Yes, I know my Dropbox logo is a little further down and my home folder is all the way up and stuff, but with everything being gray I have to stop and look before lifting my index finger on my mouse when dragging and dropping a file onto a target, to name one example. It's the same for unRAID now and I wish it weren't so. Can't wait for the no-colors, low-contrast flat fad to die. The only problem is... history repeats itself and eventually there will be a comeback of these UX-slowing design choices, and knowing that will taint my enjoyment of better design that's to come. Anticipation is the greatest joy, I guess...
  38. 2 points
    To expand on @saarg's explanation: the "vfio-pci.ids" kernel parameter specifies devices that the Linux kernel should not try to initialize or assign to a driver, because doing so sometimes makes the device behave improperly when assigned to a VM (or makes it impossible to assign). This parameter identifies devices using "Vendor:Model" strings, where Vendor and Model are numeric values assigned by the device manufacturer. This is easy because that string will never change. The disadvantage is that if you have two or more of the exact same device in your server, then all of them will be "invisible" to Linux. The other kernel parameter available was "xen-pciback.hide" (and its synonym "pciback"), which accomplishes the same thing but takes a string of the form "Domain:Bus:Device.Function". Each of those values is also a number that identifies the device according to where it exists in your server's PCI bus topology. The advantage with this method is that an exact device can be identified independent of whether another of the same device exists in the server. The disadvantage with this method is that if the physical h/w configuration changes, e.g., you move the device to a different PCI slot, then the PCI-ID of that device also changes. The problem we ran across was that xen-pciback.hide/pciback wasn't working any more, and rather than wait for a kernel dev to fix it, we decided to use the above alternate method. Note that in general, kernel evolution is moving away from kernel parameters and toward more flexible methods. For example, "isolcpus" is really deprecated and there is an alternate method of isolating CPUs using config files, which we will adopt in a future Unraid OS release. As you can see, there is currently no "perfect" way of maintaining permanent assignment of devices to VMs.
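    To make the two syntaxes concrete (the IDs and addresses below are examples only, normally taken from the output of lspci -nn on your own server), the boot-time parameters look like this:

        vfio-pci.ids=10de:1c82,10de:0fb9
        xen-pciback.hide=(0000:02:00.0)(0000:02:00.1)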
  39. 2 points
    It's because of the Marvell/VT-d issue, see here:
  40. 2 points
    Update: Was able to install DSM 6.2.2-24922 as a VM. This is what I did:
    - Used bootloader 1.03b.
    - Used DS3615.
    - Installed DSM version 6.2.2-24922 (also tried DSM 6.2.1-23824 Update 6 and that worked too!)
    - Created the VM as "CentOS". Used SeaBIOS and the highest machine version (Q35-3.0).
    - After I configured the bootloader, I loaded the img as the first bootable primary vdisk but set the type to "USB". (Doing it this way, as "usb", DSM won't see the bootloader image file as a hard drive and will show only your other vdisks attached.)
    - I created a vdisk and used qcow (another type could be used, but I didn't try). It must be at least 5GB, I found. (Anything less it won't see and/or will fail during setup.)
    - The main part!! -- manually go into the XML and change the NIC to "e1000e".
    This worked for me on my Supermicro X9SRH-7F / E5-2690 v2. YMMV. Good luck.
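    For the last step, the edit in the VM's XML (Edit VM, then switch to XML view) is just the model line of the network interface; a sketch of what the block typically looks like, with a placeholder MAC address and the usual Unraid bridge name assumed:

        <interface type='bridge'>
          <mac address='52:54:00:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='e1000e'/>
        </interface>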
  41. 2 points
    Edit the container and paste this under repository: binhex/arch-sabnzbdvpn:2.3.8-1-01 When he announces the fix, be sure to change it back to binhex/arch-sabnzbdvpn. This way you get future updates.
  42. 2 points
    I will take a look tonight guys, if you are desperate then roll back using specific tagged version. Sent from my EML-L29 using Tapatalk
  43. 2 points
    Any LSI with a SAS2008/2308/3008 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed.
  44. 2 points
    Woohoo, I won something. Never happened before... thank you guys, I'll put it to good use
  45. 2 points
    RC8 uploaded; it hasn't been tested by myself as I normally do, as I'm in the middle of a parity check. Uploaded it in case someone else wants to test it first. @AnnabellaRenee87 Glad you like it!
  46. 2 points
    I enjoyed the Podcast. It takes time and energy to do these types of things so I wanted to drop a note to say thank you. @jonp
  47. 2 points
    just curious if this RC is going to be final.. as 6.7.0rc started in jan
  48. 2 points
    Here is my banner. Think it fits unraid well!
  49. 2 points
    Thanks for that. Just in case someone else has this issue, and to expand upon what @dmacias said: I had to go into Nerdtools and set it to download and install the pip package. Once that was done everything worked again. Thank you!
  50. 2 points