Leaderboard

Popular Content

Showing content with the highest reputation on 09/24/22 in all areas

  1. Recompiled the drivers and they are now working fine. To get it working, please do the following (this is only necessary if you upgraded before I recompiled the driver):
     - Open up a Unraid terminal and execute: rm -f /boot/config/plugins/nvidia-driver/packages/5.19.9/*
     - Close the terminal
     - Go to the Nvidia-Driver plugin page
     - Click on the button "Update & Download" (please wait until the download has finished and the Done button is displayed)
     - Reboot
     Again, sorry for the inconvenience...
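     The cleanup step above can be sketched as a tiny helper. This is a dry run that only prints the command; 5.19.9 is the kernel version from the post, so check `uname -r` on your own server before using it:

     ```shell
     # Build the cleanup command for a given kernel version (dry run: prints it).
     nvidia_pkg_cleanup_cmd() {
       kernel_ver="${1:-5.19.9}"   # 5.19.9 is the version from the post
       printf 'rm -f /boot/config/plugins/nvidia-driver/packages/%s/*\n' "$kernel_ver"
     }
     nvidia_pkg_cleanup_cmd 5.19.9
     ```

     Remove the printf indirection (i.e. run the printed command) only once you have confirmed the path matches your installed driver packages.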
    7 points
  2. The 6.11 release includes bug fixes, updates of base packages, an update to the 5.19.x Linux kernel, and minor feature improvements. Sorry, no major new feature; instead we are paying down some "technical debt" and laying the groundwork necessary to add better third-party driver and ZFS support. That said, Samba is updated to version 4.17 and we're seeing some significant performance increases. Other improvements are still a work in progress and will be published in patch releases:
     - better support for third-party drivers
     - better macOS integration
     - better Active Directory integration
     - additional VM Manager improvements
     To upgrade:
     - First create a backup of your USB flash boot device: Main/Flash/Flash Backup
     - If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page.
     - If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page.
     - If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://unraid-dl.sfo2.cdn.digitaloceanspaces.com/stable/unRAIDServer.plg
     Bugs: If you discover a bug or other issue in this release, please open a Stable Releases Bug Report. ALL USERS are encouraged to upgrade. As always, prior to updating, create a backup of your USB flash device: "Main/Flash/Flash Device Settings" - click "Flash Backup".
     Special thanks to all our beta testers and especially:
     - @bonienl for his continued refinement and updating of the Dynamix webGUI, including new background downloading functionality.
     - @Squid for continued refinement of Community Apps and the associated feed.
     - @dlandon for continued refinement of the Unassigned Devices plugin and patience as we change things under the hood.
     - @ich777 for assistance and passing on knowledge of Linux kernel config changes to support third-party drivers and other kernel-related functionality via plugins, and for working with us on better third-party driver integration (still a work in progress).
     - @SimonF for several improvements, including better handling of USB assignments to virtual machines.
     - @JorgeB for rigorous testing of the storage subsystem.
     Version 6.11.0 2022-09-23
     Improvements:
     - With this release there have been many base package updates, including several CVE mitigations. The Linux kernel update includes mitigation for Processor MMIO stale-data vulnerabilities.
     - The plugin system has been refactored so that 'plugin install' can proceed in the background. This alleviates the issue where a user may think installation has crashed and closes the window, when actually it has not crashed.
     - Many other webGUI improvements.
     - Added support for specifying custom VNC ports in the VM manager form editor. A custom port number specified using the XML editor will be preserved when switching to the forms-based editor.
     - Spin down for non-rotational devices now places those devices in standby mode if supported by the device. Similarly, spin up, or any I/O to the device, will restore normal operation.
     - Display NVMe device capabilities obtained from SMART info.
     - Added necessary kernel CONFIG options to support SR-IOV with Mellanox ConnectX-4+ cards.
     - Merged the Dynamix SSD Trim plugin into the Unraid OS webGUI.
     - Preliminary support for cgroup2. Pass 'unraidcgroup2' on the syslinux append line to activate.
     - Included perl in the base distro.
     Bug fixes:
     - Fixed issue in VM manager where the VM log cannot open when the VM name has an embedded '#' character.
     - Fixed issue where Parity check pause/resume on schedule was broken.
     - Fixed issue installing registration keys.
     - Updated 'samba' to address security mitigations. This should also get rid of the kernel message complaining about "Attempt to set a LOCK_MAND lock via flock(2)."
     - Fixed issue switching from the 'test' branch to 'next'.
     - Quit trying to spin down devices which do not support standby mode.
     - Fixed AD join issue caused by an outdated cyrus-sasl library.
     - Do not start the mcelog daemon if the CPU is unsupported (most AMD processors).
     - Fix nginx not recognizing SSL certificate renewal.
     - wireguard: check the reachability of the gateway (next hop) before starting the WG tunnel.
     - Ignore "ERROR:" strings mixed into "btrfs filesystem show" command output. This solves a problem where libblkid could tag a parity disk as having a btrfs file system, because the place it looks for the "magic number" happens to match btrfs. Subsequent "btrfs fi" commands would then attempt to read btrfs metadata from this device, which fails because there really is no btrfs filesystem there.
     - Fixed bug in mover that prevented files from being moved from the unRAID array to a cache pool (mode Prefer) if the share name contains a space.
     Change Log vs. Unraid OS 6.10.3
     Management:
     - Add sha256 checks of un-zipped files in unRAIDServer.plg
     - bash: in /etc/profile omit "." (current directory) from PATH
     - docker: do not call 'docker stop' if there are no running containers
     - emhttpd: improve standby (spinning) support
     - mover: fixed issue preventing moving files from array to cache if share name contains a space
     - rc.nginx: enable OCSP stapling on certs which include an OCSP responder URL
     - rc.nginx: compress 'woff' font files and instruct browser to cache
     - rc.wireguard: add better troubleshooting for WireGuard autostart
     - rc.S: support early load of plugin driver modules
     - SMB: fixed 'fruit' settings for the USB Flash boot device
     - SMB: remove NTLMv1 support since removed from Linux kernel
     - SMB: (temporarily) move vfs_fruit settings into separate /etc/samba/smb-fruit.conf file
     - SMB: (temporarily) get rid of Samba 'idmap_hash is deprecated' nag lines
     - startup: prevent installing downgraded versions of packages which might exist in /boot/extra
     - upc: version v1.3.0
     - webgui: Plugin system update: detach frontend and backend operation; use nchan as communication channel; allow window to be closed while backend continues; use SWAL as window manager; added multi-remove ability on Plugins page; added update-all-plugins with details
     - webgui: docker: use docker label as primary source for WebUI. This makes the 'net.unraid.docker.webui' docker label the primary source when parsing the web UI address. If the docker label is missing, the template value will be used instead.
     - webgui: Update Credits.page
     - webgui: VM manager: fix VM log cannot open when VM name has an embedded '#'
     - webgui: Management Access page: add details for self-signed certs
     - webgui: Parity check: fix regression error
     - webgui: Remove session creation in scripts
     - webgui: Update ssh key regex: add support for ed25519/sk-ed25519; remove support for ecdsa (insecure); use proper regex to check for valid key types
     - webgui: misc. style updates
     - webgui: Management access: HTTP port setting should always be enabled
     - webgui: Fix: preserve VNC port settings
     - webgui: Fix regression error in plugin system
     - webgui: Fix issue installing registration keys
     - webgui: Highlight case selection when custom image is selected
     - webgui: fix(upc): v1.4.2 apiVersion check regression
     - webgui: Update Disk Capabilities pages for NVMe drives
     - webgui: chore(upc): v1.6.0
     - webgui: Plugin system and docker update
     - webgui: System info - style update
     - webgui: Plugins: keep header buttons in same position
     - webgui: Prevent overflow in container size for low resolutions
     - webgui: VM Manager: add boot order to GUI and CD hot-plug function
     - webgui: Docker Manager: add ability to specify shell with container label
     - webgui: fix: Discord notification agent URL
     - webgui: Suppress info icon in banner message when no info is available
     - webgui: Add spindown message and use -n for identity if SCSI drive
     - webgui: Fix SAS Selftest
     - webgui: Fix plugin multi updates
     - webgui: UPS display enhancements: add icon for each category; add translation in UPS section on dashboard; add output voltage / frequency value; add coloring depending on settings; normalize units; make updates near real-time; added UPS model field
     - webgui: JQuery: version 3.6.1
     - webgui: JQueryUI: version 1.13.2
     - webgui: improved 'cache busting' on font file URLs
     - webgui: Fixed: text color in docker popup window sometimes wrong
     - webgui: Fixed: show read errors during Read Check
     - webgui: VM Manager: add USB startup policy; add missing-USB support
     - webgui: Docker: fixed javascript error when no containers exist
     - webgui: added third-party system diagnostics: diagnostics for third-party plugin packages, /dev/dri devices, /dev/dvb devices, and nvidia devices
     Linux kernel:
     - version 5.19.9 (CVE-2022-21123 CVE-2022-21125 CVE-2022-21166)
     - md/unraid: version 2.9.24
     - CONFIG_IOMMU_DEFAULT_PASSTHROUGH: Passthrough
     - CONFIG_VIRTIO_IOMMU: Virtio IOMMU driver
     - CONFIG_X86_AMD_PSTATE: AMD Processor P-State driver
     - CONFIG_FIREWIRE: FireWire driver stack
     - CONFIG_FIREWIRE_OHCI: OHCI-1394 controllers
     - CONFIG_FIREWIRE_SBP2: Storage devices (SBP-2 protocol)
     - CONFIG_FIREWIRE_NET: IP networking over 1394
     - CONFIG_INPUT_UINPUT: User level driver support
     - CONFIG_INPUT_JOYDEV: Joystick interface
     - CONFIG_INPUT_JOYSTICK: Joysticks/Gamepads
     - CONFIG_JOYSTICK_XPAD: X-Box gamepad support
     - CONFIG_JOYSTICK_XPAD_FF: X-Box gamepad rumble support
     - CONFIG_JOYSTICK_XPAD_LEDS: LED support for Xbox 360 controller 'BigX' LED
     - CONFIG_MLX5_TLS: Mellanox Technologies TLS Connect-X support
     - CONFIG_MLX5_ESWITCH: Mellanox Technologies MLX5 SRIOV E-Switch support
     - CONFIG_MLX5_CLS_ACT: MLX5 TC classifier action support
     - CONFIG_MLX5_TC_SAMPLE: MLX5 TC sample offload support
     - CONFIG_MLXSW_SPECTRUM: Mellanox Technologies Spectrum family support
     - CONFIG_NET_SWITCHDEV: Switch (and switch-ish) device support
     - CONFIG_TLS: Transport Layer Security support
     - CONFIG_TLS_DEVICE: Transport Layer Security HW offload
     - CONFIG_TLS_TOE: Transport Layer Security TCP stack bypass
     - CONFIG_VMD: Intel Volume Management Device driver
     - added additional sensor drivers: CONFIG_AMD_SFH_HID (AMD Sensor Fusion Hub), CONFIG_SENSORS_AQUACOMPUTER_D5NEXT (Aquacomputer D5 Next watercooling pump), CONFIG_SENSORS_MAX6620 (Maxim MAX6620 fan controller), CONFIG_SENSORS_NZXT_SMART2 (NZXT RGB & Fan Controller/Smart Device v2), CONFIG_SENSORS_SBRMI (Emulated SB-RMI sensor), CONFIG_SENSORS_SHT4x (Sensirion humidity and temperature sensors, SHT4x and compatible), CONFIG_SENSORS_SY7636A (Silergy SY7636A), CONFIG_SENSORS_INA238 (Texas Instruments INA238), CONFIG_SENSORS_TMP464 (Texas Instruments TMP464 and compatible), CONFIG_SENSORS_ASUS_WMI (ASUS WMI X370/X470/B450/X399), CONFIG_SENSORS_ASUS_WMI_EC (ASUS WMI B550/X570), CONFIG_SENSORS_ASUS_EC (ASUS EC sensors)
     - patch: add reference to missing firmware in drivers/bluetooth/btrtl.c (rtl8723d_fw.bin, rtl8761b_fw.bin, rtl8761bu_fw.bin, rtl8821c_fw.bin, rtl8822cs_fw.bin, rtl8822cu_fw.bin)
     - CONFIG_BPF_UNPRIV_DEFAULT_OFF: disable unprivileged BPF by default
     - patch: quirk for Team Group MP33 M.2 2280 1TB NVMe (globally duplicate IDs for nsid)
     - turn on all IPv6 kernel options: CONFIG_INET6_* CONFIG_IPV6_*
     - CONFIG_RC_CORE: Remote Controller support
     - CONFIG_SFC_SIENA: Solarflare SFC9000 support
     - CONFIG_SFC_SIENA_MCDI_LOGGING: Solarflare SFC9000-family MCDI logging support
     - CONFIG_SFC_SIENA_MCDI_MON: Solarflare SFC9000-family hwmon support
     - CONFIG_SFC_SIENA_SRIOV: Solarflare SFC9000-family SR-IOV support
     - CONFIG_ZRAM: Compressed RAM block device support
     - CONFIG_ZRAM_DEF_COMP_LZ4: default RAM compressor (lz4)
     - turn on all EDAC kernel options: CONFIG_EDAC (Error Detection And Correction reporting), CONFIG_EDAC_*
     Base distro:
     - aaa_base: version 15.1
     - aaa_glibc-solibs: version 2.36
     - aaa_libraries: version 15.1
     - at: version 3.2.3
     - bind: version 9.18.6
     - btrfs-progs: version 5.19.1
     - ca-certificates: version 20220622
     - cifs-utils: version 7.0
     - coreutils: version 9.1
     - cracklib: version 2.9.8
     - cryptsetup: version 2.5.0
     - curl: version 7.85.0
     - cyrus-sasl: version 2.1.28
     - dbus: version 1.14.0
     - dhcpcd: version 9.4.1
     - dmidecode: version 3.4
     - docker: version 20.10.17 (CVE-2022-29526 CVE-2022-30634 CVE-2022-30629 CVE-2022-30580 CVE-2022-29804 CVE-2022-29162 CVE-2022-31030)
     - etc: version 15.1
     - ethtool: version 5.19
     - eudev: version 3.2.11
     - file: version 5.43
     - findutils: version 4.9.0
     - firefox: version 105.0.r20220922151854-x86_64 (AppImage)
     - fuse3: version 3.12.0
     - gawk: version 5.2.0
     - gdbm: version 1.23
     - git: version 2.37.3
     - glib2: version 2.72.3
     - glibc: version 2.36
     - glibc-zoneinfo: version 2022c
     - gnutls: version 3.7.7
     - gptfdisk: version 1.0.9
     - grep: version 3.8
     - gzip: version 1.12
     - hdparm: version 9.65
     - htop: version 3.2.1
     - icu4c: version 71.1
     - inotify-tools: version 3.22.6.0
     - iperf3: version 3.11
     - iproute2: version 5.19.0
     - iptables: version 1.8.8
     - jemalloc: version 5.3.0
     - json-c: version 0.16_20220414
     - json-glib: version 1.6.6
     - kmod: version 30
     - krb5: version 1.20
     - libaio: version 0.3.113
     - libarchive: version 3.6.1
     - libcap-ng: version 0.8.3
     - libcgroup: version 3.0.0
     - libdrm: version 2.4.113
     - libepoxy: version 1.5.10
     - libffi: version 3.4.2
     - libgcrypt: version 1.10.1
     - libgpg-error: version 1.45
     - libidn: version 1.41
     - libjpeg-turbo: version 2.1.4
     - libmnl: version 1.0.5
     - libnetfilter_conntrack: version 1.0.9
     - libnfnetlink: version 1.0.2
     - libnftnl: version 1.2.3
     - libnl3: version 3.7.0
     - libpng: version 1.6.38
     - libssh: version 0.10.4
     - libtasn1: version 4.19.0
     - libtirpc: version 1.3.3
     - liburcu: version 0.13.1
     - libusb: version 1.0.26
     - libwebp: version 1.2.4
     - libxml2: version 2.9.14
     - libxslt: version 1.1.36
     - libzip: version 1.9.2
     - logrotate: version 3.20.1
     - lsof: version 4.95.0
     - lzip: version 1.23
     - mc: version 4.8.28
     - mcelog: version 189
     - nano: version 6.4
     - nfs-utils: version 2.6.2
     - nghttp2: version 1.49.0
     - nginx: version 1.22.0
     - ntfs-3g: version 2022.5.17
     - ntp: version 4.2.8p15
     - oniguruma: version 6.9.8
     - openssh: version 9.0p1
     - openssl: version 1.1.1q (CVE-2022-1292 CVE-2022-2097 CVE-2022-2274)
     - openssl-solibs: version 1.1.1q (CVE-2022-1292)
     - p11-kit: version 0.24.1
     - pciutils: version 3.8.0
     - pcre2: version 10.40
     - perl: version 5.36.0
     - php: version 7.4.30 (CVE-2022-31625 CVE-2022-31626)
     - pkgtools: version 15.1
     - rpcbind: version 1.2.6
     - rsync: version 3.2.6
     - samba: version 4.17.0 (CVE-2022-2031 CVE-2022-32744 CVE-2022-32745 CVE-2022-32746 CVE-2022-32742)
     - sqlite: version 3.39.3
     - sudo: version 1.9.11p3
     - sysfsutils: version 2.1.1
     - sysstat: version 12.6.0
     - sysvinit-scripts: version 15.1
     - talloc: version 2.3.4
     - tar: version 1.34
     - tevent: version 0.13.0
     - tree: version 2.0.2
     - util-linux: version 2.38.1
     - wayland: version 1.21.0
     - wget: version 1.21.3
     - xfsprogs: version 5.18.0
     - xz: version 5.2.6
     - zlib: version 1.2.12
    5 points
  3. EDIT: NerdTools is now available as a replacement; you might want to check that first. Some tools like iperf3 and perl are now included in the base Unraid release, hence them not being present there. If it doesn't have what you need, request it in the thread; in the meantime the manual install below is still available in the original text:
     ----------------------
     Nerdpack is deprecated in 6.11. For the record, since it was unfortunately only posted in a thread in the German section instead of here where people would typically come for support (translated), to replicate the functionality (unsupported):
     - Go to https://slackware.pkgs.org/15.0/slackware-x86_64/ which lists packages for Slackware 15, which Unraid is based on
     - Search for the packages you want
     - Download the txz files for them, and put them on the flash drive in /extra (/boot/extra on a running system); that will cause them to auto-install on boot (create the folder if there isn't one)
     - To be able to use them without a reboot, use the Unraid CLI to navigate to where you put the packages and run installpkg <filename>
     - Packages might have dependencies; that would typically be pointed out by an error when trying to run the programs they contain. If so, download and install those as well. The site also has a section listing dependencies that might help, although I wouldn't just install them by default since some are already built into Unraid, so try to run first.
     EDIT: Other package sources:
     https://slackonly.com/pub/packages/15.0-x86_64/
     https://slackware.pkgs.org/current/slackers/
     Of course the Nerdpack repo, although packages may be outdated: https://github.com/dmacias72/unRAID-NerdPack
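     The install-without-reboot step above can be sketched as a small helper. This is a sketch under the post's assumptions (/boot/extra as the package directory); the echo makes it a dry run, so drop the echo to actually run installpkg on a live server:

     ```shell
     # Print the installpkg command for every Slackware .txz package in a
     # directory (dry run; remove the `echo` to actually install on Unraid).
     install_extra_pkgs() {
       dir="${1:-/boot/extra}"
       for pkg in "$dir"/*.txz; do
         [ -e "$pkg" ] || continue   # glob did not match: no packages found
         echo installpkg "$pkg"
       done
     }
     # install_extra_pkgs /boot/extra
     ```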
    5 points
  4. Everything is now working again, to get up and running please see this post: @Gorosch, @Dave31337, @jvlarc, @musicking, @Lintux
    3 points
  5. This should be added in bold text to the changelog, including instructions on how to load the necessary packages!!!
    3 points
  6. Python 2 and 3 are available as their own plugins in CA now.
    2 points
  7. Then you don't need this plugin, as described on page 1 ... and yes, it's normal then, as it's "invisible" to the host.
    2 points
  8. https://forums.unraid.net/topic/98978-plugin-nvidia-driver/?do=findComment&comment=1172045
    2 points
  9. Well done @ich777 for attending to the Nvidia-Driver problem so promptly, saving myself (and I'm sure countless others) the disruption/interruption this would have caused. I appreciate everyone's hard work in compiling this. A great Unraid community here. Cheers, gwl
    2 points
  10. Updated two servers to 6.11.0 without incident. One from 6.11.0-rc5 and another from 6.10.3. I was aware the Nerd Pack was deprecated for 6.11.0 and placed the necessary packages in /boot/extra before upgrading and rebooting. I am not sure that everyone is aware the Nerd Pack has been deprecated and how to load the necessary packages. It might be wise to include this in the release notes (as if everyone reads those 😀).
    2 points
  11. Nvidia-Driver (only Unraid 6.9.0-beta35 and up)
     This plugin is only necessary if you are planning to make use of your Nvidia graphics card inside Docker containers. If you only want to use your Nvidia graphics card for a VM, then don't install this plugin!
     Discussions about modifications and/or patches that violate the EULA of the driver are not supported by me or anyone here; this could also lead to a takedown of the plugin itself! Please remember that this also violates the forum rules and will be removed!
     Installation of the Nvidia drivers (this is only necessary for the first installation of the plugin):
     1. Go to the Community Applications app, search for 'Nvidia-Drivers' and click on the Download button (you have to be at least on Unraid 6.9.0-beta35 to see the plugin in the CA app), or download it directly from here: https://raw.githubusercontent.com/ich777/unraid-nvidia-driver/master/nvidia-driver.plg
     2. Wait for the plugin to successfully install (don't close the window; wait for the 'DONE' button to appear. The installation can take some time depending on your internet connection: the plugin downloads the Nvidia driver package, ~150MB, and installs it afterwards on your Unraid server).
     3. Click on 'DONE' and continue with Step 4 (don't close this window for now; if you closed this window, don't worry, continue to read).
     4. Check if everything is installed correctly and recognized: go to the plugin itself at PLUGINS -> Nvidia-Driver (if you don't see a driver version at 'Nvidia Driver Version', or see another error, please scroll down to the Troubleshooting section).
     5. If everything shows up correctly, click on the red alert notification from Step 3 (not on the 'X'); this will bring you to the Docker settings (if you already closed this window, go to Settings -> Docker).
     6. On the Docker page, change 'Enable Docker' from 'Yes' to 'No' and hit 'Apply' (you can now close the message from Step 2).
     7. Then change 'Enable Docker' from 'No' back to 'Yes' and hit 'Apply' again. This step is only necessary for the first plugin installation, and you can skip it if you are going to reboot the server. The background to this is that when the Nvidia driver package is installed, a file is also installed that interacts directly with the Docker daemon itself, and the Docker daemon needs to be reloaded in order to load that file.
     8. After that you should be able to utilize your Nvidia graphics card in your Docker containers; for how to do that, see Post 2 in this thread.
     IMPORTANT: If you don't plan or want to use acceleration within Docker containers through your Nvidia graphics card, then don't install this plugin!
     Please be sure to never use one card for a VM and also in Docker containers (your server will hard lock if it's used in a VM and then something wants to use it in a container). You can use one card for more than one container at the same time, depending on the capabilities of your card.
     Troubleshooting (this section will be updated as someone reports an issue and will grow over time):
     - "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.": This means that the installed driver can't find a supported Nvidia graphics card in your server (it may also be that there is a problem with your hardware - riser cables, ...). Check if you accidentally bound all your cards to VFIO; you need at least one card that is supported by the installed driver (you can find a list of all drivers here; click on the corresponding driver at 'Linux x86_64/AMD64/EM64T' and on the next page click 'Supported products', where you will find all cards that are supported by the driver). If you accidentally bound all cards to VFIO, unbind the card you want to use for the Docker container(s) and reboot the server (TOOLS -> System devices -> unselect the card -> BIND SELECTED TO VFIO AT BOOT -> restart your server).
     - "docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "process_linux.go:432: running prestart hook 0 caused \"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: device error: GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd: unknown device\\n\""": unknown.": Please check 'NVIDIA_VISIBLE_DEVICES' inside your Docker template; it may be that you accidentally have what looks like a space at the end or in front of your UUID, like ' GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd' (it's hard to see in this example, but it's there).
     - If you have problems with your card being recognized in 'nvidia-smi', please also check your 'Syslinux configuration' in case you have previously prevented Unraid from using the card during the boot process.
     Reporting problems: If you have a problem, please always include a screenshot of the plugin page, a screenshot of the output of the command 'nvidia-smi' (simply open up a Unraid terminal with the button on the top right of Unraid and type in 'nvidia-smi' without quotes), and the error from the startup of the container/app if there is any.
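     The "all cards bound to VFIO" check above can be sketched from the command line. This is a sketch assuming the usual `lspci -k` output layout (one header line per device, followed by an indented "Kernel driver in use:" line); a nonzero count means at least one NVIDIA device is held by vfio-pci and is invisible to the Nvidia driver:

     ```shell
     # Count NVIDIA devices currently bound to vfio-pci, reading `lspci -k`
     # output from stdin (sketch; assumes the usual lspci -k line layout).
     count_vfio_bound_nvidia() {
       awk '
         /^[0-9a-fA-F]/ { in_nvidia = ($0 ~ /NVIDIA/) }   # new device header
         in_nvidia && /Kernel driver in use: vfio-pci/ { count++ }
         END { print count + 0 }
       '
     }
     # Usage on a live server:  lspci -k | count_vfio_bound_nvidia
     ```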
    1 point
  12. Introduction
     If you want to access your Unraid server over Tailscale, at this point you are better off using the plugin: Unraid Tailscale Plugin. I will continue to update this for those using Tailscale to, for example, connect groups of Docker containers on private networks into Tailscale. Please use the plugin otherwise; use for direct access to Unraid is considered deprecated, and support will not be provided by me.
     This is the support thread for the deasmi/unraid-tailscale docker CA. If you have a feature request or bug report, please also try to add an issue on GitHub: https://github.com/deasmi/unraid-tailscale
     If you find this useful, please consider donating to my chosen charity, Cancer Research: https://www.justgiving.com/fundraising/unraid-tailscale Thank you to those that have already donated.
     Latest version of Tailscale included: please see the last page of posts for updates. This supports TLS certificates and downloads; see below for instructions. Please note I normally skip 1.xx.0 releases as there are often bug-fix releases shortly afterwards. In any event I will wait at least two weeks after a 1.xx.0 release before updating latest, or normally even pushing a build.
     What is this?
     This container sets up Tailscale for Unraid. Tailscale is a managed point-to-point VPN using WireGuard. It is intended to allow you to access services of your Unraid server over Tailscale; it does not, and is not intended to, provide a VPN gateway to your LAN. If you can contact Unraid services over Tailscale, this is working as intended.
     For clarity, I cannot provide support for use of --advertise-routes or other custom setups, and in all likelihood it will not behave as you expect. Due to the way Docker works, and ARP works, and switches work, you will potentially have a nasty time.
     **If you want a tailscale gateway to your LAN, use your firewall or a Raspberry Pi or anything else at all other than this container.
     We cannot support you at all if you are using --advertise-routes, please do not ask**
     Communications are limited to services that listen on all interfaces on the host itself via standard bridge or host networking.
     Installation and setup
     Before you start, it is a good idea to make sure you have already registered with Tailscale and installed Tailscale onto another computer: https://login.tailscale.com/start
     Then install this app on Unraid and start it up. There are no config changes needed for the default setup; however, it will register as hostname "unraid". If you want to change that, see 'Extra Parameters' in the container config and change it to the hostname you would like before you start up. This can be changed later.
     ** IMPORTANT: When you first start this container you must check the log file for the logon URL, then enter it into a browser and log on to Tailscale. I would then also advise setting the keys to not expire for your Unraid host. ** You need to look for the logon URL in the log.
     ** Note that this will expose your whole server into your Tailscale VPN network. ** The container runs with Docker host networking, and so it shares a network stack with the underlying host; any services you can see on the LAN you'll be able to see on Tailscale. Do not do this if you do not understand what that means.
     Downloads
     Starting with release 1.24.2-downloads you can now support automatic downloads with Taildrop. If you have already installed Tailscale you will need to add some extra parameters manually as shown below.
     TLS Certificates
     If you want to use TLS certificates as per https://tailscale.com/kb/1153/enabling-https/ you will need to connect to the console of the docker container and issue the tailscale cert command.
     External Links
     Ibracorp have a guide with video on how to set all this up, as well as some advanced topics like exit nodes: https://docs.ibracorp.io/tailscale/
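     The TLS certificate step above can be sketched from the Unraid host rather than the container console. This is a sketch: the container name `unraid-tailscale` and the MagicDNS hostname are assumptions, so substitute your own; the helper only builds the command string:

     ```shell
     # Build the docker exec command that requests a Tailscale TLS certificate.
     # Both arguments are hypothetical examples -- use your container name and
     # your machine's MagicDNS name from the Tailscale admin console.
     build_cert_cmd() {
       container="${1:?container name}"
       fqdn="${2:?MagicDNS hostname}"
       printf 'docker exec %s tailscale cert %s\n' "$container" "$fqdn"
     }
     build_cert_cmd unraid-tailscale unraid.example.ts.net
     ```

     Running the printed command requires HTTPS certificates to be enabled for your tailnet, per the Tailscale KB page linked above.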
    1 point
  13. I recently added a mirrored set of cache drives to my array. I ran through the process of moving my appdata to the cache. I made sure that Docker was disabled and nothing was running before changing the share and running the mover. When mover completed I checked to make sure everything was moved successfully. However, I noticed that there were two directories (with files) still left on disk1/appdata: binhex-plex and gitlab-ce. I did some investigation and it appears that everything still left on disk1 is symlinks with relative paths (i.e. ../../somefile). This breaks docker images that require these files. I checked again on user/appdata and it appears that all the symlinks work from there; however, the Docker image for gitlab-ce fails to start, and it looks like this is because all of the files are on the cache drive but the symlinks are on the array. Without going too far down the mover rabbit hole, the only solution that everyone seems to agree on is to copy the files manually, as this issue pops up frequently:
     cp -avr /mnt/disk1/appdata /mnt/cache/appdata
     or
     rsync -avrth --progress --dry-run /mnt/disk1/appdata/ /mnt/cache/appdata/
     I guess my next question is: why does mover have an issue with symlinks? Thank you.
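     Before copying, it can help to confirm that what remains on the array disk really is only symlinks. A minimal sketch (the /mnt/disk1/appdata path is the one from the post; adjust for your disk):

     ```shell
     # List symbolic links left under a directory (the leftovers mover skipped).
     list_symlinks() {
       find "${1:?directory}" -type l
     }
     # Only run against the post's path if it actually exists on this machine.
     if [ -d /mnt/disk1/appdata ]; then
       list_symlinks /mnt/disk1/appdata
     fi
     ```

     If that prints anything other than symlinks' paths (i.e. regular files show up under `find ... -type f` too), a plain copy as above is still the safe route.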
    1 point
  14. Hello,
     Like some Unraid users, I'm building a gaming VM. Given how few French-language resources there are on the subject, I'm starting this topic to share my experience. My server is in my garage; I plan to play on it via Parsec/Moonlight/etc., mainly from my local network but also, why not, over the Internet (I have fiber). I'll need to benchmark those programs. My clients will be:
     * My 13" MacBook Pro.
     * A small PC hooked up to a 4K display, an ASRock A300.
     * And an Nvidia Shield Pro connected to my TV.
     For my part, I've configured a Windows 11 VM as follows:
     * RAM: 16GB; my server has 64, so I have plenty of headroom.
     * CPU Mode: Direct Access (passthrough).
     * CPU: I isolated 4C/8T of my 5800X and assigned them to the VM.
     * Machine model: Q35-6.2. I've seen some reports that the i440fx model performs better, but I've also seen the opposite; in short, I decided to start this way.
     * BIOS: OVMF TPM, mandatory for Windows 11 support since it requires a TPM.
     * Hyper-V: Yes.
     * USB controller: 3.0, but I have no USB peripheral plugged in to pass through to the VM, so it doesn't matter much.
     * Disk: I configured an NVMe disk passed through directly to the VM, so no vdisk. It's quite simple to do; if people are interested I'll describe how. The advantage is that performance is excellent, not far from bare metal (= without virtualization). For the record, I previously had a vdisk on an SSD and migrated without reinstalling the VM (likewise, I can describe how it's done).
     To stream a session, a display must be connected to the GPU. Since I don't have one, I'm going to use this:
     * A fake HDMI dongle (https://www.amazon.fr/dp/B07YMTKJCR/ref=twister_B08GSQ8SSR?_encoding=UTF8&th=1). It plugs into an HDMI port on the GPU and simulates a display, making it possible to establish a session that can then be streamed.
     Otherwise, Parsec has a software solution with a fake display driver, but I couldn't get it to work. Now I'm just waiting for GPU prices to drop a bit so I can grab an RTX 3080. It will be passed through in the same way as the NVMe disk. It won't be usable for anything else (a Plex docker, for example) unless that runs inside the VM, but I already have another GPU for that. In the meantime I'm testing the software to get a feel for it. I also ran some disk benchmarks after moving from a vdisk on a SATA SSD to an NVMe disk passthrough. I'll run more once I have a GPU.
    1 point
  15. I have not upgraded yet... I'll take a look when I can
    1 point
  16. Thank you! That was it. What was the error, and in which log, so I can watch for it next time?
    1 point
  17. See this post for a link to ipmitools- https://forums.unraid.net/topic/35866-unraid-6-nerdpack-cli-tools-iftop-iotop-screen-kbd-etc/?do=findComment&comment=1162669
    1 point
  18. I was just linking the instructions for how to install the packages. I remember seeing links to some of those packages further down the page, as well as a discussion on whether or not to install python.
    1 point
  19. It probably would boot properly, but Unraid wouldn't work properly after you pulled it out until you rebooted.
    1 point
  20. No need to do that: this applies only if you updated Unraid before I recompiled the driver packages for 6.11.0 stable.
    1 point
  21. I'm just responding to report that I've been running 6.10.3 stable for some time now, and no issues have been noticed with the ConnectX-3 cards swapping around and acting strange. I'm very grateful to the Unraid developers for their attention to this issue, and to the forum mods for making this space available.
    1 point
  22. I figured it out. Stupid me had set "Minimum free space" to 6TB on all or most of the shares; I probably misunderstood its behaviour and consequences. Set the new limit to 50GB. Everything is peachy now.
    1 point
  23. Thank you so much! You just saved my hair, and my weekend! I'm not quite sure what happened here, as our backup server went through the same update just a week ago, and FileBrowser seemed to work just fine there. Whatever the case may be, chown seemed to be the answer here. Thank you for the quick answer!
    1 point
  24. You may try chmod or chown on the file from the terminal:
chown 99:100 /mnt/user/appdata/Filebrowser/database.db <- change this to the path on your machine
or
chmod 777 /mnt/user/appdata/Filebrowser/database.db <- change this to the path on your machine
As a note, working fine here...
    1 point
  25. So far so good. I've run the following command for each SAS disk: sg_format --format --fmtpinfo=0 /dev/sdb Then they needed formatting from the Main tab. Halfway through the parity check with 0 errors!
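If you have several SAS disks to wipe, a loop saves retyping. The device names below are placeholders, and since sg_format destroys all data, this sketch only echoes the commands so you can review them before running them for real:

```shell
# Dry run: print the sg_format command for each disk instead of executing it.
# /dev/sdb, /dev/sdc, /dev/sdd are placeholder device names - substitute your own.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
  echo sg_format --format --fmtpinfo=0 "$dev"
done
```

Remove the echo only once you're certain the list contains the right disks.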
    1 point
  26. Thanks, I noticed (had only checked a few and assumed the rest were also not there). Already added the rest from slackware.pkgs.org.
    1 point
  27. Did that and it's looking good now. Speed is at 120MB/sec (much better than the 1MB I had two days ago). After the parity sync is done I'll add the new disks and change the file system.
    1 point
  28. $%^&$%^&*%&$%^&!!! After all this time I finally figured it out!!!!!! For some reason, after 6.10 both acpid and logind were actively fighting to handle the button calls, so both responded: acpid was working correctly and started my VM, but then logind would chime in and shut down the system. I modified logind.conf to the below:
HandlePowerKey=ignore
HandleSuspendKey=ignore
HandleHibernateKey=ignore
Kept acpi_handler.sh the same, then modified my array start-up script to edit logind.conf on startup, just like acpi_handler.sh.
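A start-up script along these lines could reapply the logind settings on every boot. This is only a sketch under my own assumptions — the real logind.conf location on your Unraid box (e.g. /etc/elogind/logind.conf) and the exact key names should be checked against your system; a temp file stands in for it here:

```shell
# Sketch: force the power/suspend/hibernate keys to "ignore" so that only
# acpid handles them. CONF points at a temp file for illustration; replace
# it with the real logind.conf path on your server.
CONF=$(mktemp)
printf 'HandlePowerKey=poweroff\nHandleSuspendKey=suspend\n' > "$CONF"  # sample existing config
for key in HandlePowerKey HandleSuspendKey HandleHibernateKey; do
  if grep -q "^$key=" "$CONF"; then
    sed -i "s/^$key=.*/$key=ignore/" "$CONF"   # rewrite an existing entry
  else
    echo "$key=ignore" >> "$CONF"              # append a missing entry
  fi
done
cat "$CONF"
```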
    1 point
  29. Upgraded 4 servers. 4/4 upgraded successfully. 6.10.3 -> 6.11.0 One of the servers has Quadro P2000 with 515.76 Driver - SUCCESS! Thanks!
    1 point
  32. For myself and my friend, we generate thumbnails, which get rather large as your library grows. That's the main reason. Plex can have a 1TB pool for itself. If I exclude my Plex appdata from my CA Backup I drop from 510GB to about 27GB with ~30 dockers. I now exclude three sub-folders in my Plex appdata to get basically the same result, but it's still in its own pool now. I also moved my Nextcloud data folder to the new pool so it's served from a never-spun-down SSD and somewhat protected by the pool's RAID. I then Syncthing my Nextcloud folder to my friend's server and he does the same to mine for an offsite copy (send only/receive only setup).
    1 point
  33. https://forums.unraid.net/topic/128645-unraid-os-version-6110-available/ The 6.11 release includes bug fixes, update of base packages, update to 5.19.x Linux kernel, and minor feature improvements.
    1 point
  34. My driver installed again with no problems after I uninstalled/reinstalled the plugin, thanks for the fix! I don't see the video card section on the Unraid dashboard anymore, however; is this by design? Thanks.
    1 point
  35. Nothing to be sorry for, you're a rockstar for responding so fast.
    1 point
  36. Thanks for the info. It took me a few clicks to find the right place in the menus. The scan took quite a long time, but now it worked. The covers are displayed.
    1 point
  37. Updated from RC4 to stable... Now my Nvidia card is not detected. This pops up every second in the system log:
Sep 23 20:51:44 Unraid kernel: NVRM: 0000:41:00.0: DMA mapping request too large!
Sep 23 20:51:44 Unraid kernel: NVRM: 0000:41:00.0: DMA mapping request too large!
Sep 23 20:51:44 Unraid kernel: NVRM: GPU 0000:41:00.0: RmInitAdapter failed! (0x31:0x40:2459)
Sep 23 20:51:44 Unraid kernel: NVRM: GPU 0000:41:00.0: rm_init_adapter failed, device minor number 0
Sep 23 20:51:45 Unraid kernel: NVRM: 0000:41:00.0: DMA mapping request too large!
Sep 23 20:51:45 Unraid kernel: NVRM: 0000:41:00.0: DMA mapping request too large!
Sep 23 20:51:45 Unraid kernel: NVRM: GPU 0000:41:00.0: RmInitAdapter failed! (0x31:0x40:2459)
Sep 23 20:51:45 Unraid kernel: NVRM: GPU 0000:41:00.0: rm_init_adapter failed, device minor number 0
Sep 23 20:51:45 Unraid kernel: NVRM: 0000:41:00.0: DMA mapping request too large!
Sep 23 20:51:45 Unraid kernel: NVRM: 0000:41:00.0: DMA mapping request too large!
Sep 23 20:51:45 Unraid kernel: NVRM: GPU 0000:41:00.0: RmInitAdapter failed! (0x31:0x40:2459)
Sep 23 20:51:45 Unraid kernel: NVRM: GPU 0000:41:00.0: rm_init_adapter failed, device minor number 0
Sep 23 20:51:45 Unraid kernel: NVRM: 0000:41:00.0: DMA mapping request too large!
Sep 23 20:51:45 Unraid kernel: NVRM: 0000:41:00.0: DMA mapping request too large!
Sep 23 20:51:45 Unraid kernel: NVRM: GPU 0000:41:00.0: RmInitAdapter failed! (0x31:0x40:2459)
Sep 23 20:51:45 Unraid kernel: NVRM: GPU 0000:41:00.0: rm_init_adapter failed, device minor number 0
    1 point
  38. That will stop working eventually anyway because of SMB1, so at that point you'd presumably have to use a Docker container as a bridge or manually make your SMB config insecure. a) Why not? b) I don't map any drives at all. My wife just clicks the shortcut to the server's network location; then she sees all the folders she wants to get into. She doesn't even know what a mapped network drive is. I'd love to have it like that at work too. Support requests like "drive X named Archiv doesn't work", and you never know which SMB server and which subfolder they actually wanted to access ^^
    1 point
  39. 1.30 = 1.30.2 = latest new version pushed
    1 point
  40. Correct, it is not being supported or maintained. Understand that the 'fix' is not just to trick it into working with 6.11: all of the 6.10 packages need to be confirmed compatible with 6.11, or updated for 6.11. That's why the author has the packages on the flash drive organized by Unraid version. Part of the update to get it working on a new version of Unraid would be to confirm that all the packages put in the 6.11 folder are compatible with 6.11. Don't fork the Nerd Pack plugin unless you intend to invest the time to do this. If the plugin is distributed with a package that is not compatible with 6.11, you risk creating support issues for yourself and LT.
    1 point
  41. For anyone looking for the file, here it is. I think it's just named differently from https://github.com/ipmitool/ipmitool; this is the one from NerdPack, which is the same version number. ipmitool-1.8.18-x86_64-1.txz
    1 point
  42. Hello, there are two "tricks" on the macOS side to improve SMB: https://www.journaldulapin.com/2018/03/03/transferts-smb/ https://www.journaldulapin.com/2019/05/08/lister-partage-reseau-mac/ For my part, I only apply the first link, which really speeds up transfers and, I find, also speeds up directory listing, which can sometimes be truly awful.
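If the first link's tweak is the commonly cited one of disabling SMB packet signing via /etc/nsmb.conf (worth confirming against the article before applying it), it boils down to writing a two-line config file. A sketch, using a /tmp path for illustration since the real file needs sudo:

```shell
# Write an nsmb.conf that disables SMB packet signing. On a real Mac this
# would be /etc/nsmb.conf, written with sudo; /tmp is used here only so the
# sketch is harmless to run.
CONF=/tmp/nsmb.conf
printf '[default]\nsigning_required=no\n' > "$CONF"
cat "$CONF"
```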
    1 point
  43. I've been agonizing over this choice for weeks, unable to pick between the 2 file systems for my main array (I'm already decided on going dual RAID 1E for the SSD cache/light VMs). I have found general answers to many of my questions about the choice, but there are still some that just don't seem to be answered that I'd really like answered, hopefully definitively.

What questions do I have?

1. When a file is corrupted on a BTRFS file system, is it unable to be transferred? I feel like I have read something along these lines a few times, with there being an error, but also that there might be a way to get around it if, for instance, you don't have another good copy of the file, like if it was lost or between backups. Is anyone familiar with this, perhaps with a link to some sort of relevant resource on the topic specifically?

2. I've read multiple times that BTRFS doesn't have a working FSCK. Apparently, though, this isn't actually a problem because it has a replacement in btrfs-check. Is my understanding there more or less correct?

3. While it can detect errors due to checksumming, it does not have a method of fixing these errors in Unraid, specifically because Unraid uses it on single disks inside the Unraid array, so those repair features would not be feasible. Luckily, from what I can tell, BTRFS has a way of relatively easily figuring out which files had errors so that a user can restore them from backups. Is there any problem with the assertions I've just made?

4. The write-hole issues with BTRFS mostly come into play with sudden ungraceful shutdowns due to a lack of proper atomicity, and it's now really just a concern for the RAID 5/6 implementations that Unraid does not use in favour of its proprietary solution. Is this correct (this one seems really important and related to my first question)? I'm sure that having a UPS (which I do have) helps, but even so, ungraceful shutdowns can happen for many ungraceful reasons.

Why should/shouldn't I use BTRFS (something I'll answer a bit myself)? OK, so why do I care about BTRFS, and what benefits will it potentially bring me to warrant all the fuss and agonizing about the choice? The main things are:

Data integrity - I at least know when a file is bad, even though I may not be able to fix it and will have to get another copy. Snapshots - something to aid in protecting me from user error once set up, something I feel is a bit lacking in general.

Both of these points come with large asterisks, of course. On data integrity (or at least more data integrity rather than less): I'm not even running ECC, and quite frankly, if the data integrity of a regular PC has been good enough for me so far, I probably don't have data so susceptible to corruption that I would notice a flipped bit or two. The data I'm storing is mostly media and some backups of other systems anyway. On top of that, as I've read (I think on this forum), a likely place for data corruption on a modern system is RAM, so while this data integrity feature is a nice thing to have, it may not be all I hope it is, at least unless I decide to upgrade to ECC in the future. On snapshots: as far as I can tell, they will take a lot of time and effort to set up correctly (I'm no fan of running someone else's long script unless I've read through it and fully understand what it's doing, and even then, I'd prefer knowing how to make my own rather than simply trusting my assumptions). So if I'm honest with myself, this feature would sit unused until, hopefully, at some point in the future, Unraid natively supports snapshots.

OK, that was a lot, and I know posting too much leads to people being uninterested in answering, so I'll stop right there, but hopefully I've included enough detail, and evidence that I've put in the legwork, that my questions will get answered and I can stop worrying. Oh, and also, here is a previous post with my specs, roughly, if anyone feels they're relevant.

TL;DR: I basically have 4 questions related to BTRFS that have been holding me off from configuring my new server. Thanks in advance.
    1 point
  44. Yes, JRE is already part of the Docker image and is ready to be used by MakeMKV.
    1 point
  45. Bonjour, Thanks to @Squid, we now have an easy way to track and see any missing French translations by comparing the French github repo with the English Github repo. To see missing words/phrases, please see here: https://squidly271.github.io/languageErrors.html#fr_FR If you would like to make contributions to the French Github Repo, please do so there and be sure to follow the instructions outlined in the README.md file. All PR's will be reviewed by myself and the French Forum moderators prior to any merges. Merci beaucoup, Spencer
    1 point
  46. Thanks for the explanation. Indeed, making the zip from my github repo works, because it gives me back all the files with the correct line endings (LF). I can finally check everything and make sure all the terms used in the French translation are consistent.
    1 point
  47. + Link: Installing NEXTCLOUD + MARIADB
+ Link: Installing WORDPRESS + MARIADB
+ Link: Hiding several containers behind a single VPN connection
+ Link: Putting a RAM limit on a DOCKER container
+ Link: PLEX + hardware transcoding with an INTEL UHD630 processor
+ Link: Installing Rocket.chat + MongoDB
    1 point
  48. Yes, same as zfs: you get an i/o error during transfer/read. If you still want the corrupt file you can use btrfs restore. It is limited in what it can fix and needs to be used with care, though it's getting better all the time. It can only fix errors when using a redundant profile; on the array devices it can only detect errors, and you need to replace the file from a backup, again same as zfs. Yes, the write hole only affects raid5/6, and it's not much of an issue if you use raid1/c3 for metadata on any raid5/6 pool, as recommended.
    1 point