Leaderboard

Popular Content

Showing content with the highest reputation on 01/29/21 in all areas

  1. I just updated the BIOS to 2.34. Update via IPMI went fine and I only had to make a few minor settings changes. Previously, IPMI was reporting an 85C MB temp even though unRAID showed it at 27C. After the update, IPMI agrees with the 27C temp. Only time will tell if it gets out of whack again as it has on all other BIOS versions. On 2.21 BIOS it was even starting to report card side temp in the 80s but after the last reboot when I upgraded my cache drive that was reset.
    3 points
  2. @FlippinTurt Sorry to waste your time; the answer to my problem was right in front of me the whole time: "NOTE 3: UnRaid network settings DNS server cannot point to a docker IP." When I set the pihole address in the "LAN DHCP" menu of the router, it automatically updated Unraid's network DNS server to the pihole's IP. The solution is to strictly use the router's "WAN" DNS server setting and leave the DHCP's DNS menu blank (for Asus routers at least).
    2 points
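The pitfall described in the post above — Unraid's DNS server silently ending up pointed at the pihole docker IP — can be checked for with a quick grep. A minimal sketch; the resolver contents and the container IP are canned examples, substitute `cat /etc/resolv.conf` and your real pihole address:

```shell
# Canned resolver contents - substitute: resolv=$(cat /etc/resolv.conf)
resolv='nameserver 192.168.1.1'
pihole_ip='192.168.1.5'   # hypothetical pihole container IP

# If the container IP appears as a nameserver, NOTE 3 is being violated
if printf '%s\n' "$resolv" | grep -qF "$pihole_ip"; then
  echo "Unraid DNS points at the pihole container - fix per NOTE 3"
else
  echo "Unraid DNS does not point at a docker IP"
fi
```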
  3. No, he has it permanently enabled on all pages. Yes, tap the question mark in the menu at the top right; that is essentially the toggle for "always show help / never show help".
    2 points
  4. But you now have SFP+; my comment referred to 10GBase-T. With SFP+ there is also the 530FLR-SFP+, which goes for around 20 € shipped; at servershop24, for example, 15 € plus 5 € shipping. So if you need two, with the risers you end up at 35 € + 20 € for the adapters = 55 € for 2x dual SFP+ cards. And I haven't even factored in Chinese sellers; that's all from German sellers. From China the card costs 10 to 15 € including shipping.
    2 points
  5. This thread is meant to replace the now outdated one about recommended controllers. These are some controllers known to be generally reliable with Unraid.
     Note: RAID controllers are not recommended for Unraid; this includes all LSI MegaRAID models. That doesn't mean they cannot be used, but various issues can result: no SMART info and/or temps being displayed, disks not being recognized by Unraid if the controller is replaced with a different model, and in some cases the partitions can become invalid, requiring rebuilding all the disks.
     2 ports: Asmedia ASM1061/62 (PCIe 2.0 x1) or JMicron JMB582 (PCIe 3.0 x1)
     4 ports: Asmedia ASM1064 (PCIe 3.0 x1) or ASM1164 (PCIe 3.0 x4 physical, x2 electrical, though I've also seen some models using just x1)
     5 ports: JMicron JMB585 (PCIe 3.0 x4, x2 electrical). These JMB controllers are available in various different SATA/M.2 configurations.
     6 ports: Asmedia ASM1166 (PCIe 3.0 x4 physical, x2 electrical) *
     * There have been some reports that some of these need a firmware update for stability and/or PCIe ASPM support, see here for instructions.
     These exist with both x4 (x2 electrical) and x1 PCIe interfaces; for some use cases the PCIe x1 may be a good option, i.e., if you don't have larger slots available, though bandwidth will be limited.
     8 ports: any LSI with a SAS2008/2308/3008/3408/3808 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, 9500-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed. Most of these require a x8 or x16 slot; older models like the 9201-8i and 9211-8i are PCIe 2.0, newer models like the 9207-8i, 9300-8i and newer are PCIe 3.0.
     For these, when not using a backplane, you need SAS-to-SATA breakout cables: SFF-8087 to SATA for SAS2 models, SFF-8643 to SATA for SAS3 models. Keep in mind that they need to be forward breakout cables (reverse breakout cables look the same but won't work; as the name implies they work for the reverse direction, with SATA on the board/HBA and the miniSAS on a backplane). Sometimes they are also called Mini SAS (SFF-8xxx Host) to 4X SATA (Target); this is the same as forward breakout.
     If more ports are needed you can use multiple controllers, controllers with more ports (there are 16 and 24 port LSI HBAs, like the 9201-16i, 9305-16i, 9305-24i, etc.), or one LSI HBA connected to a SAS expander, like the Intel RES2SV240 or HP SAS expander.
     P.S. Avoid SATA port multipliers with Unraid, and also avoid any Marvell controller. For some performance numbers on most of these see below:
    1 point
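Before buying or troubleshooting, it helps to know which of the chipsets listed above a machine actually has; `lspci` filtered for storage classes is the usual check. A sketch run against a canned two-line sample (both example devices are assumptions), so the same grep pattern can be reused verbatim on real output:

```shell
# Canned stand-in for `lspci` output - on a real server run:
#   lspci | grep -Ei 'sata|sas|raid'
sample='01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2
02:00.0 SATA controller: ASMedia Technology Inc. Device 1166'

# The filter keeps only the storage controllers from the sample
printf '%s\n' "$sample" | grep -Ei 'sata|sas|raid'
```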
  6. USB_Manager is in CA as of 6th June 2021. Please continue to use USBIP-GUI for now; it will be replaced by USB_Manager. The plugin supports attaching multiple USB devices of the same Vendor/Model to a VM. It will also auto-hotplug devices as they are plugged in, if so defined and the VM is started. Dashboard view added 16.02.2021. To see USBIP functions you need to enable them in Settings; this function is only valid from 6.9.0-rc2 onwards. Once enabled, additional panels are available (USBIP status and connection host/IP from versions >14.02.21).
     USB_Manager Change Log
     2021.12.12a Revert 2021.12.12
     2021.12.12 - Add Hub processing. You can define a port mapping for a hub; if it is connected, or a VM starts, all devices on that hub will be connected to the VM. Will not process next-level-down hubs. - Chg Disable device mapping for Root Hubs and Hubs. Disable port mapping for Root Hub. - Chg Detach button shown next to connected port or device on the main line. - Fix Buttons if Hotplug mapping used. - Note: reboot or disconnect/reconnect of the hub may be required.
     2021.09.18 - Code review and update.
     2021.09.01 - Fix start of usbipd and load of modules on array start.
     2021.08.01 - Code clean up. - Change to udev rules for 6.10+ support. - Enable zebra stripes on tables.
     2021.07.27 - Fix Change Unraid Flash to Unraid inuse on the hub lines on dashboard page. - Chg Use Port as standard rather than Physical BUS/Port.
     2021.07.23 - Fix Disable root hub and hubs used for the Unraid flash device.
     2021.07.10 - Add volume to device list in USB Hotplug page.
     2021.07.09a - Add display of hotplug devices on main USB page and allow detach.
     2021.07.09 - Fix virsh error if both port and device mappings exist for a device at connection time. - Add USB Manager Hotplug page on VM page; to enable, change options in settings. Base code from DLandon's hotplug plugin. Additional support to show on USB page if mapping doesn't exist in next release.
     2021.06.02a - Fix table formatting if both port and device mappings exist for the new volume column. - Add Log virsh calls.
     2021.06.26 - Enhancement: Show volume for USB storage devices.
     2021.06.20 - Enhancement: enable port processing for mapping ports to a VM at start. - Update text on edit settings page to describe the entry being changed.
     2021.06.19 - Install QEMU hooks file code, thanks to ljm42 for the code.
     2021.06.08 - Fix USBIP command check.
     2021.06.06 - Initial beta release. If you are using USBIP-GUI continue to do so at this time. This plugin will supersede USBIP-GUI in the future and will migrate configurations. USBIP-GUI and USB_Manager cannot co-exist; if you want to replace USBIP-GUI then uninstall it first. Config files are renamed on the flash drive; you can copy them to the usb_manager directory. The USBIP and USBIP-HOST modules are not loaded by default; if you want to use them, enable USBIP in the Settings and click the install button to install the additional package. Add the following lines (see the support page for the complete code, as it cannot be inserted here) to /etc/libvirt/hooks/qemu after the PHP line; these will be added automatically in a future release.
     if ($argv[2] == 'prepare' || $argv[2] == 'stopped'){ shell_exec("/usr/local/emhttp/plugins/usb_manager/scripts/rc.usb_manager vm_action '{$argv[1]}' {$argv[2]} {$argv[3]} {$argv[4]} ................
     Includes all changes from USBIP-GUI + Topology Slider addition.
     USBIP-GUI Change Log
     2021.05.15 - Chg Fix Remove USB device from VM for devices not in a shutdown state; was previously only for running.
     30.04.2021 - Add Remove USB device from VM when disconnected.
     2021.04.22 - Add Root hub and hubs to view. - Add Switch to show empty ports. - No process added at this time for additional devices.
     10.03.2021 - Add VM disconnect option to be used in pre-sleep commands to remove USB mappings from a VM before sleep.
     09.03.2021 - Chg Fix issue introduced as part of port mapping for checking status.
     24.02.2021 - Add Support for port-based mappings, auto-connecting to a VM when a device is connected to a USB port. Only devices being plugged in are supported for ports at this time; support at the port level for VM starts will be added in the future. Precedence goes to device-level mappings over port; if a device is set to autoconnect no, then auto connect at the port level will be evaluated. 
     17.02.2021 - Add Dashboard update and refresh.
     16.02.2021 - Add USB Dashboard entry. Enable within settings.
     14.02.2021 - Add Display host name or IP address for remote USBIP clients.
     13.02.2021 - Add Show remote connection status. Host/IP to follow, WIP.
     12.02.2021 - Chg Fix for Bind/Unbind button.
     10a.02.2021 - Add Disconnect update function implemented. - Add Auto connect on VM start. - Chg Auto connect on device add checks VM status. - Add Update status when VM stops. Note: you need to manually add code to /etc/libvirt/hooks/qemu for the VM start/stop process to function; see the support page. Development yet to be completed: update of qemu hook file; add checks before historical info can be removed; rename plugin to USB Manager; change to include USBIP package install in the settings page.
     08.02.2021 - Add: Autoconnect function. If a VM is defined and Autoconnect is set to Yes, then when a USB device is connected it will be attached to the VM; if the VM is not started, an error is shown. - Chg: Main USB list is no longer dependent on USBIP. Version change to support 6.8. An error will be seen during install as it tries to install the USBIP package, which doesn't exist pre kernel 5.10.1, but it is ignored. Development yet to be completed: autoconnect function, check VM status before connecting; autodisconnect function (will provide a log entry but no action taken at present); add checks before historical info can be removed; rename plugin to USB Manager; VM start/stop process; change to include USBIP package install in the settings page.
     07.02.2021 - Add: VM mapping functions. - Add: Display USBIP function messages if not used. Enable/Disable for USBIP added to settings; defaults to disabled, change to enabled if you are upgrading. - Add: Historical devices added; the list includes current devices also, which can be removed while in use. - Add: Failure message for virsh errors. Note: the existing Libvirt plugin cannot be used to connect devices. Development yet to be completed: autoconnect function (udev rules exist and the process works, but there are timing issues to be resolved); add checks before historical info can be removed; rename plugin to USB Manager; VM start/stop process; change the USB device list not to be dependent on usbip (once changed, a pre-6.9.0-rc2 version will be available).
     31.01.2021 - Add: Revised load process and addition of loading the usbip package from ich777. The USBIP package includes all commands and modules required for USBIP.
     28.01.2021 - Initial beta release.
    1 point
  7. Good day! I would like to present a growing blog, created by a beginner for beginners: MyUnraid.ru. What prompted me to start a personal blog was the lack of Russian-language manuals on configuring Unraid OS and managing a NAS running it in general. The site will publish my own settings, guides, tweaks and other instructions for Docker containers, plugins and scripts. At the moment there are around 15-20 finished articles on these topics, aimed at making it easier to manage your own server. Twitter Telegram
    1 point
  8. This repo was created to update the original piHole DoT/DoH by testdasi: https://forums.unraid.net/topic/96233-support-testdasi-repo/ It is the official pihole docker with added DNS-over-TLS (DoT) and DNS-over-HTTPS (DoH). DoH uses Cloudflare (1.1.1.1/1.0.0.1) and DoT uses Google (8.8.8.8/8.8.4.4). Config files are exposed so you can modify them as you wish, e.g. to add more services. This docker supersedes testdasi's previous Pi-Hole with DoH and Pi-Hole with DoT dockers. For more detailed instructions, please refer to the Docker Hub / GitHub links below.
     Docker Hub: https://hub.docker.com/r/flippinturt/pihole-dot-doh
     GitHub: https://github.com/nzzane/pihole-dot-doh
     Please make sure you set a static IP for this docker, as DHCP will not work!
     FAQ:
     Q: Can this be installed on top of testdasi's current pihole DoT-DoH? A: Yes, this can be installed over it without any problems.
     Q: How do I change the hostname? A: Use the '--hostname namehere' parameter under 'Extra Parameters' in the container's settings.
     Q: Is there a list of good block lists? A: https://firebog.net/
     Initial upload: 20/1/21. Latest update: 27/04/22 (dd/mm/yy). Current FTL version: 5.15. Current WEB version: 5.12. Current PiHole version: 5.10.
    1 point
  9. A container for explainshell to run on your server (as a self-hosted alternative to using the explainshell website). What is it? Not everyone, especially people new to Linux and Unraid, knows what a command they type will actually do, especially when reading commands online to type into their servers. Just paste the command into the search box, click Explain, and explainshell will break down the command, explaining what each part does. Quite a useful tool.
    1 point
  10. I was doing crazy research ... then I figured that the P2000 looks way shorter than a standard ATX board. And since the case can fit an ATX board before the fan wall, it should be fine.
    1 point
  11. All my APs have the 4.3.24.11355 firmware and no update is showing for a later version.
    1 point
  12. Read the readme on github for info about what this is.
    1 point
  13. Is there a chance you can give me the full log output? Click on the container's icon on the Docker page and click on Log. Then click somewhere on the text, press CTRL+A and then CTRL+C. Create a text file on your desktop, open it, press CTRL+V, save and close it, then drag it into the text box here.
    1 point
  14. Just wanted to +1 this recent development and add a bit of insight that may help others. I have a Plex setup that does not have surround sound capability but can handle H.265 video. When I stream an H.265 video that does not have a stereo track to that device, Plex transcodes the video stream to H.264 while it downmixes the multi-channel audio to stereo. This transcoded stream then requires more bandwidth simply because it's H.264. This change to Unmanic is awesome in that it has automatically gone through my entire library adding a stereo audio track to videos that didn't already have one, and now the client I'm referring to will direct play the H.265 videos because of the presence of the 2-channel audio track. Thanks @Josh.5 !
    1 point
  15. There is still an option for Onboard (which I also enabled) but I think that now applies to the VGA output for the BMC only. IGFX is what needs to be enabled for the iGPU. This appears to be the replacement for the "multi-monitor" option in previous BIOS version.
    1 point
  16. They share all of the following: Drive cage (handles power & SATA connections to the drive) HBA SFF 8087 Cable I'm looking into a new HBA. Next after that is a PSU upgrade (650W right now) and instead of using the rosewill cage, do a direct connection to the drives.
    1 point
  17. 1 point
  18. Of course, I had to enable the new IGFX option but, even though "Preserve BIOS Configuration" was checked, I needed to disable the hardware inventory check again. For me, it is a long wait at boot for information that is not useful with my server. There was one other small thing, but I forgot what it was. Everything else seemed to stick around from the prior BIOS settings (Boot settings, BMC settings, etc.)
    1 point
  19. I haven't noticed anything. I have uninstalled and re-booted my system. I'll see if that helps and report back. The plugin was working perfectly and I was able to use it in the TVHeadEnd Docker Container - I had recorded TV and all. I'll keep you updated.
    1 point
  20. ...better safe than sorry... before I invest in more hardware, this is a good test, and if need be this hardware can then also be used directly in the server rack. The real lo-tek cable tester, so to speak.
    1 point
  21. You can run it from any folder except the flash drive. You can then run it from within a script in the UserScripts plugin.
    1 point
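The "copy it off the flash drive first, then run it" pattern described above can be sketched as follows; all paths are simulated with mktemp here — on a real server the source would live somewhere under /boot and the copy under /tmp or an array share:

```shell
# Simulated tool "on the flash drive" (stand-in for e.g. /boot/custom/bin/sometool)
src=$(mktemp)
printf '#!/bin/bash\necho "tool ran"\n' > "$src"

# Copy it to a normal folder, mark it executable there, then run it -
# the same three steps a UserScripts script would perform
dst=$(mktemp -d)/sometool
cp "$src" "$dst"
chmod +x "$dst"
"$dst"
```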
  22. 1 point
  23. One thing you can try is to boot the server in safe mode with all dockers/VMs disabled and let it run as a basic NAS for a few days. If it still crashes, it's likely a hardware problem; if it doesn't, start turning on the other services one by one.
    1 point
  24. Introduction
     bunker is a file integrity check utility based on the original bitrot utility by jbartlett, and the calculated SHA keys are compatible with bitrot. I want to thank jbartlett for bringing out his excellent idea; my version can be seen as an alternative, which I initially developed to fulfill my own requirements and which may now be useful to others. The purpose of bunker is to save the calculated hash value in the extended attributes of a given file, which allows for regular checking of the integrity of the file content. Note that the original file is never touched or altered by bunker, no matter what options are chosen. Different hashing methods are stored under different extended attributes; it is possible to store sha256, md5 and blake2 hashes together with a file, if that is desired.
     Versions
     Version 1.16 verify location of 'notify' script to support unRAID v6.x
     Version 1.15 Fixed execution of export (-e) command. Fixed -D option (modified time calculation). Swapped -r and -R commands. Added new extended attribute: file size. Added display of file name being processed. Code optimizations.
     Version 1.14 logger corrections and code optimization (thx itimpi)
     Version 1.13 logger improvements (thx itimpi)
     Version 1.12 added new option -L which allows logging only changes
     Version 1.11 bug fix, correction in report calculation (thx archedraft)
     Version 1.10 more comprehensive reporting, and minor bug fix
     Version 1.9 (minor) bug fixing release
     Version 1.8 introduced new option -n (notify) which lets bunker send alert notifications when file corruption is detected
     Version 1.7 introduced new filedate attribute. New -t (touch) command. Various improvements.
     Version 1.6 regression fix in -v command and added missing files display for -c command
     Version 1.5 introduced new command -U and new option -D
     Version 1.4 is a bug fixing release. Correct ETA calculation, fix scandate with -C option, sort output files.
     Version 1.3 has new options -c and -C which allow checking the hash values from a previously exported file. This can be used for example when transferring files from one filesystem to another, e.g. from reiserfs to xfs; during this process extended attributes are not copied over, but with the -c (-C) option these can be verified/restored afterwards. Make sure to do an export of the hash keys before the file transfer.
     Version 1.2 uses the already installed utilities sha256sum and md5sum, so no external package needs to be installed unless one wants to make use of the new option '-b2', which uses the blake2 algorithm. A download of the blake2 utility can be obtained from the Blake2 site: extract the file "b2sum-amd64-linux", rename it "b2sum" and copy it to your server, e.g. to /usr/bin/b2sum.
     Version 1.1 is the initial release.
     Usage
     The following is from the help of bunker.
     bunker v1.16 - Copyright (c) 2015 Bergware International
     Usage: bunker -a|A|v|V|u|U|e|t|i|c|C|r|R [-fdDsSlLnq] [-md5|-b2] path [!] [mask]
       -a         add hash key attribute for files, specified in path and optional mask
       -A         same as -a option with implicit export function (may use -f)
       -v         verify hash key attribute and report mismatches (may use -f)
       -V         same as -v option with updating of scandate of files (may use -f)
       -u         update mismatched hash keys with correct hash key attribute (may use -f)
       -U         same as -u option, but only update files which are newer than last scandate
       -e         export hash key attributes to the export file (may use -f)
       -t         touch file, i.e. copy file modified time to extended attribute
       -i         import hash key attributes from file and restore them (must use -f)
       -c         check hash key attributes from input file (must use -f)
       -C         same as -c option and add hash key attribute for files (must use -f)
       -r         remove hash key attribute from specified selection (may use -f)
       -R         same as -r option and remove filedate, filesize, scandate values too (may use -f)
       -f <file>  optional set file reference to <file>. Defaults to /tmp/bunker.store.log
       -d <days>  optional only verify/update/remove files which were scanned <days> or longer ago
       -D <time>  optional only add/verify/update/export/remove files newer than <time>, time = NNs,m,h,d,w
       -s <size>  optional only include files smaller than <size>
       -S <size>  optional only include files greater than <size>
       -l         optional create log entry in the syslog file
       -L         optional, same as -l but only create log entry when changes are present
       -n         optional send notifications when file corruption is detected
       -q         optional quiet mode, suppress all output. Use for background processing
       -md5       optional use md5 hashing algorithm instead of sha256
       -b2        optional use blake2 hashing algorithm instead of sha256
       path       path to starting directory, mandatory with some exceptions (see examples)
       mask       optional filter for file selection. Default is all files
     When path or mask names have spaces, place the names between quotes. Precede mask with ! to change its operation from include to exclude.
     Examples:
     bunker -a /mnt/user/tv                         add SHA key for files in share tv
     bunker -a -S 10M /mnt/user/tv                  add SHA key for files greater than 10 MB in share tv
     bunker -a /mnt/user/tv *.mov                   add SHA key for .mov files only in share tv
     bunker -a /mnt/user/tv ! *.mov                 add SHA key for all files in share tv except .mov files
     bunker -A -f /tmp/keys.txt /mnt/user/tv        add SHA key for files in share tv and export to file keys.txt
     bunker -v -n /mnt/user/files                   verify SHA key for previously scanned files and send notifications
     bunker -V /mnt/user/files                      verify SHA key for scanned files and update their scandate
     bunker -v -d 90 /mnt/user/movies               verify SHA key for files scanned 90 days or longer ago
     bunker -v -f /tmp/errors.txt /mnt/user/movies  verify SHA key and save mismatches in file errors.txt
     bunker -u /mnt/disk1                           update SHA key for mismatching files
     bunker -U /mnt/disk1                           update SHA key only for mismatching files newer than last scandate
     bunker -u -D 12h /mnt/disk1                    update SHA key for mismatching files created in the last 12 hours
     bunker -u -f /tmp/errors.txt                   update SHA key for files listed in user defined file - no path
     bunker -e -f /tmp/disk1_keys.txt /mnt/disk1    export SHA key to file disk1_keys.txt
     bunker -i -f /tmp/disk1_keys.txt               import and restore SHA key from user defined file - no path
     bunker -c -f /tmp/disk1_keys.txt               check SHA key from user defined input file - no path
     bunker -C -f /tmp/disk1_keys.txt               check SHA key and add SHA attribute (omit mismatches) - no path
     bunker -r /mnt/user/tv                         remove SHA key for files in share tv
     bunker -r -f /tmp/errors.txt                   remove SHA key for files listed in file errors.txt - no path
     Look at the examples to make use of the possibilities of bunker.
     Operation
     There are two main ways to use the utility: [1] interactive or [2] scheduled.
     Interactive
     When used in interactive mode the utility can be executed from a telnet session with the given options, and results are made visible on screen. Stopping the utility can be done at any time using CTRL-C. It is also possible to open several telnet sessions and run multiple bunker instances concurrently, e.g. for checking different disks.
     Most of the time is spent on I/O access, and calculating/checking a large disk can be a lengthy process; for example it takes almost 7.5 hours on my system to go through a nearly full 2TB disk.
     Scheduled
     Another way of operating is to create scheduled tasks that do regular file verifications and/or other activities. For example, I created the script 'bunker-daily' and copied it to the folder /etc/cron.daily. It checks for new files and file changes and updates the export file. It will go through all available disks (thx itimpi).
     #!/bin/bash
     bunker=/boot/custom/bin/bunker
     log=/boot/custom/hash/blake2
     var=/proc/mdcmd
     day=$(date +%Y%m%d)
     array=$(grep -Po '^mdState=\K\S+' $var)
     rsync=$(grep -Po '^mdResync=\K\S+' $var)
     mkdir -p $log
     # Daily check on new files, report errors and create export file
     if [[ $array == STARTED && $rsync -eq 0 ]]; then
       for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 ; do
         if [[ -e /mnt/disk$i ]]; then
           $bunker -A -D 1 -q -l -f $log/disk$i.$day.new.txt /mnt/disk$i
           $bunker -U -D 1 -q -l -n -f $log/disk$i.$day.bad.txt /mnt/disk$i
           if [[ -s $log/disk$i.export.txt ]]; then
             if [[ -s $log/disk$i.$day.new.txt || -s $log/disk$i.$day.bad.txt ]]; then
               mv $log/disk$i.export.txt $log/disk$i.$day.txt
               $bunker -e -q -l -f $log/disk$i.export.txt /mnt/disk$i
             fi
           else
             $bunker -e -q -l -f $log/disk$i.export.txt /mnt/disk$i
           fi
         fi
       done
     fi
     This can be combined with a verification script which checks for file corruption. I do this on a monthly basis, but it may be done weekly instead. Copy the file bunker-monthly to /etc/cron.monthly. Note that you need to adjust the script to the number of disks in your system.
     #!/bin/bash
     bunker=/boot/custom/bin/bunker
     log=/boot/custom/hash/blake2
     var=/proc/mdcmd
     day=$(date +%Y%m%d)
     array=$(grep -Po '^mdState=\K\S+' $var)
     rsync=$(grep -Po '^mdResync=\K\S+' $var)
     mkdir -p $log
     # Monthly verification of different groups of disks (quarterly rotation)
     if [[ $array == STARTED && $rsync -eq 0 ]]; then
       case $(($(date +%m)%4)) in
         0) for i in 1 2   ; do $bunker -v -n -q -l -f $log/disk$i.$day.bad.txt /mnt/disk$i & done ;;
         1) for i in 3 4 5 ; do $bunker -v -n -q -l -f $log/disk$i.$day.bad.txt /mnt/disk$i & done ;;
         2) for i in 6 7 8 ; do $bunker -v -n -q -l -f $log/disk$i.$day.bad.txt /mnt/disk$i & done ;;
         3) for i in 9 10  ; do $bunker -v -n -q -l -f $log/disk$i.$day.bad.txt /mnt/disk$i & done ;;
       esac
     fi
     Export / Import
     The purpose of export and import is to create and save a copy of the hash keys of the given files (export) which can be restored at a later time (import); e.g. after a disk crash, keys can be imported so a file verification can be run afterwards to see if content has been damaged. Export/import is not a file repair method but a mechanism to find corruption; other tools need to be used for any file repair.
     Download
     See the attachment to download the zip file. Copy the file 'bunker' to your flash drive or another convenient location, and execute it from there. Optionally use the files bunker-daily and bunker-monthly.
     Extra
     Included in the zip file are two additional scripts, bunker-update and bunker-verify, provided courtesy of itimpi. You can place these scripts in the cron.hourly and cron.daily folders respectively to automate the file checking.
     bunker.zip
    1 point
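For readers who just want the core export/verify idea from the bunker post above before downloading it, the same workflow can be mimicked with a plain sha256sum manifest. This is only a sketch of the concept: unlike bunker it does not store hashes in extended attributes, and the paths are throwaway temp files:

```shell
# "Export": hash a sample file into a manifest
dir=$(mktemp -d)
echo "some media data" > "$dir/file1"
( cd "$dir" && sha256sum file1 > manifest.txt )

# "Verify" later: prints one "file1: OK" line, or FAILED if the content changed
( cd "$dir" && sha256sum -c manifest.txt )
```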
  25. If you do not have "Enable Audio Stream Transcoding" checked, then whatever audio streams exist in the original file will be copied to the destination file with no changes made to their codec. I personally do not have this checked, as my playback devices can handle any codec fine (with the exception of the Chromecast). However, if you find that you have files in your library that are not playing well due to the audio, you may wish to enable this setting. Below it you will see a setting "Enable Audio Stream Cloning". This setting clones any surround sound audio streams (greater than 2 channels) and encodes the clone with only 2 channels, according to the selected audio codec and bitrate. As an example that affects me personally, I had some videos that contained audio streams encoded with some kind of DD 7.1. My Chromecast could not play these, and Plex would not transcode the audio when it tried (it would happily transcode the video, but not the audio). I didn't want to remove the surround sound streams from my files, and my Chromecast (in the kitchen) only plays stereo anyway. So my solution was to add this setting to Unmanic, which clones the audio stream (at the cost of an extra few MB per result file). In the Plex app I can then select the stereo channel when casting these videos to the kitchen Chromecast, and the video files that were unable to play before now work perfectly fine. These settings are really dependent on your setup. You may have no issues with audio playback on any of your players, and in that circumstance I would recommend NOT selecting the "Enable Audio Encoding" checkbox at the top of that page at all. When that is not selected, your audio streams are left completely untouched and are just "copied" into the new destination file.
    1 point
  26. Gotcha, thanks. I can't imagine how flaky releases must be for even Unifi to consider them beta
    1 point
  27. It's a beta release, only visible to logged-in users who have enabled beta use in their profile ;)
    1 point
  28. Not that one; avoid any Marvell controller and any controller with port multipliers, and that one has both.
      4 ports: look for Asmedia ASM1064 or ASM1164
      5 ports: look for JMicron JMB585
      6 ports: look for Asmedia ASM1166
      8 ports: look for any LSI with a SAS2008/2308/3008/3408 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed.
    1 point
  29. https://www.dropbox.com/s/zsari9h7v2lxtjo/E246D4U2.34?dl=0
    1 point
  30. Please note that with the last lot of changes, if you have "Enable Audio Stream Transcoding" enabled, Unmanic will transcode your files if the audio codec does not match what you have specified in the dropdown.
    1 point
  31. I want/need to use both. I need copper from the central switch to the wall outlets on the upper floor, at least 2x. Each outlet up there then needs a small (desktop) switch with at least 1x 10G copper. Since I want it passively cooled, I would use fiber or AOC for the ports to the desktops. Yes, a CRS309 is going into the rack, alongside the CRS326 and RB4011... then with two MikroTik S+RJ10 modules for the links to the upper floor. Small Asus/Netgear/QNAP switches with at least 1x 10GBase-T exist, but they are still at least 50 bucks more expensive than a CSS610 with an RJ45 (MikroTik S+RJ10) module. The desktops upstairs will then get Mellanox ConnectX-3 cards with DAC or SFP+ fiber.
    1 point
  32. Add disks as needed for capacity. Don't put them all in just because you have them. The SSDs may be a little small for much caching but will be enough for dockers and VMs. How do you plan to use Unraid?
    1 point
  33. I have such a card, specifically a Mellanox ConnectX-3 MCX311A-XCAT EN 10G, and I'm very happy with it; just bear in mind that you also need an SFP module for it, e.g.: Multimode, Singlemode, Ethernet 30m. Please be careful with those others: they are QSFP ports and possibly harder to get or more expensive, but I haven't really looked into them yet. It would be interesting to know how much power they draw, and apart from that you'd have to check whether they are 100% compatible with Unraid or whether there are problems with them... I can only say that the ConnectX-3 needs about 5-10W less than the ConnectX-2 (I can speak from experience, since I recently upgraded mine).
    1 point
  34. ...something "similar" already exists ready-made: https://www.ebay.de/itm/Mellanox-ConnectX-3-MCX341A-XCGN-CA341-CX341A-10GbE-PCI-E-Adapter-Network-Cards/143606068104?hash=item216f96b388:g:9EYAAOSw56pevrrn ...and in the Luxx forum that's the insider tip. ...a normal dual-port card already starts at under 55 EUR: https://www.ebay.de/itm/CX354A-Mellanox-MCX354A-QCBT-ConnectX-3-QDR-Infiniband-10GigE-Dual-ports/142919606396?hash=item2146ac207c:g:CBMAAOSwrR5bhVSJ This riser-card-like contraption and fiddling always makes me nervous; who knows what it will turn into in 3 years with dust on it.
    1 point
  35. What a coincidence; I too upgraded today. I wasn't running into any problems, just my standard need to run the latest and greatest. Only 5-ish hours of uptime, but all seems to be going well so far. Aside from having to re-enable the onboard graphics item, I don't think I had to make any other config changes. One BIOS setting I never noticed before, but which did exist in the 2.21 BIOS, is a setting for HDD or SSD for the onboard SATA disks. Anyone know what the purpose of that is?
    1 point
  36. Plex does all kinds of maintenance tasks during the night, some of which involve accessing the media files. https://support.plex.tv/articles/201553286-scheduled-tasks/
    1 point
  37. Has anyone had any success doing that? Several people have reported failure - I'm not aware of anyone reporting success in this thread. Even people attempting it on actual Apple hardware have reported problems.
    1 point
  38. I think he was commenting on the difference between your prior build with the D510 and this new one with an 8-core Xeon. Your new build looks nice. I am still very happy with my E3C246D4U/E-2288G in the Silverstone CS380. I don't need more than 8 HDDs. The Antec 1200 monster will certainly give you a lot of room to grow.
    1 point
  39. This is solved. I first tried making a different VLAN on my eth0 interface, but the problem remained the same. I had to use the eth1 interface (a different NIC on my MicroServer) and put the VLAN on that. Haven't had a crash since.
    1 point
  40. Jan 28 16:15:23 TheVault kernel: pm80xx 0000:27:00.0: pm80xx: driver version 0.1.40 Jan 28 16:15:23 TheVault kernel: pm80xx0:: pm8001_pci_probe 1111:chip_init failed [ret: -16] Controller problem: it's failing to initialize. The older release uses an older driver: Jan 28 16:30:14 TheVault kernel: pm80xx 0000:27:00.0: pm80xx: driver version 0.1.39 You'll need to wait for a new driver and hope it fixes it. There have been other issues with these controllers before; if it's an option, I would recommend using an LSI instead, though you'll need a new cable.
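When comparing releases, the loaded driver version can be pulled straight out of a syslog line like the ones quoted above. A minimal sketch using GNU grep's PCRE mode (the log line is copied from this post; outside of this example you would grep /var/log/syslog or dmesg output instead of a shell variable):

```shell
# Syslog line quoted above (pm80xx driver load message)
line='Jan 28 16:15:23 TheVault kernel: pm80xx 0000:27:00.0: pm80xx: driver version 0.1.40'

# Extract just the version number; \K drops the matched prefix (GNU grep -P)
echo "$line" | grep -oP 'driver version \K[\d.]+'
# prints: 0.1.40
```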
    1 point
  41. A workaround for this would be to do it on your own for now until a fix is released. Since Unraid is based on Slackware, this is pretty straightforward... Open up an Unraid terminal and enter the following:
cd /tmp
wget http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/patches/packages/sudo-1.9.5p2-x86_64-1_slack14.2.txz
installpkg sudo-1.9.5p2-x86_64-1_slack14.2.txz
rm -rf /tmp/sudo-1.9.5p2-x86_64-1_slack14.2.txz
You can also append this to your 'go' file to install it on every reboot. I know this is only a temporary solution, but it's a solution that works. After that you can issue 'sudo -V' in the terminal and you will see that you now have sudo 1.9.5p2 installed. (Btw, the package is from the official Slackware repo.) EDIT: Wrote a quick plugin if this is what you are after; it will do basically the same and you don't have to edit anything (works only from Unraid version 6.8.2 to 6.9.0-rc2): https://raw.githubusercontent.com/ich777/unraid-sudo-patch/master/CVE-2021-3156.plg
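For the 'go' file route mentioned above, the same commands can be appended to the Unraid go file so the patched package is reinstalled on every boot. A sketch of what those lines might look like (URL and package name are copied from this post; /boot/config/go is the standard Unraid go file, and this fragment is untested here):

```shell
# Appended to /boot/config/go: reinstall the patched sudo on every boot.
# Remove these lines again once an official Unraid fix ships.
wget -q -P /tmp http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/patches/packages/sudo-1.9.5p2-x86_64-1_slack14.2.txz
installpkg /tmp/sudo-1.9.5p2-x86_64-1_slack14.2.txz
rm -f /tmp/sudo-1.9.5p2-x86_64-1_slack14.2.txz
```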
    1 point
  42. A simple restart of the container should do the job (open up the log for the container and check whether SteamCMD updates the game or says it is up to date). If it says up to date, go into the template and set the variable 'Validate Installation' to 'true' (without quotes) and SteamCMD will verify the installation (don't forget to remove the value 'true' from the variable afterwards, because a startup with validation takes longer than normal). Hope this helps.
    1 point
  43. In Global Share Settings, is it set to include all disks or do you only have specific disks listed and disk 4 is not one of them? Note that you have to stop the array to make changes. Is disk 4 specifically excluded?
    1 point
  44. I have the same issue. I have set it up exactly as per the SpaceInvader One video. If I set it to use the custom network with Docker DNS resolution, it works via the subdomain. Setting the Guacamole docker with its own IP times out via the reverse proxy but is available internally via the static IP. EDIT: SOLVED. Go into Settings - Docker - click on Advanced View in the top right-hand corner - enable 'Host access to custom networks'. You'll have to turn off the Docker service to change this. Works perfectly as per the SpaceInvader One video now.
    1 point
  45. @limetech Would it be possible to include usbip_host as well as vhci_hcd and usbip_core? I have a compiled version of the usbip_host module to use for testing and it works fine. This is only required if you want to share USB devices from Unraid to another device (top part in the GUI). Is anyone using USBIP? I am starting to create a plugin to provide an interface, based on Unassigned Devices. Are there any specific requirements people are looking for? So far I have disabled the option to mount the Unraid flash drive from the GUI, and I plan to disable USB devices that are already in use, i.e. in UD etc. This is how it is looking so far.
    1 point
  46. Add the ability for Unraid to be a basic domain controller, allowing for network control of users and machines as well as for dockers. Something similar to Synology's version: simple, lightweight, and easy to manage.
    1 point
  47. I was having this problem earlier; it turns out there was a leading space at the start of the token from when I copied it from DuckDNS. When I removed the leading space, everything worked.
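Stray whitespace from copy/paste is easy to check for before blaming the container itself. A small shell sketch (the token value is a made-up example):

```shell
# A token pasted with an invisible leading space (made-up example value)
token=" a0b1c2d3-example-token"

# xargs with no arguments trims leading/trailing whitespace
clean=$(printf '%s' "$token" | xargs)

# Brackets make any remaining whitespace visible
printf '[%s]\n' "$clean"
# prints: [a0b1c2d3-example-token]
```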
    1 point
  48. Yes, I had this issue about a year ago where CPU usage would skyrocket - it was Cache Directories. Right now I only have two plugins installed: System Stats and Open Files.
    1 point
  49. Backup/restore dockers. The backup directory contains cache files and gzip archives. Running dockers are stopped and started during backup and restore. Tar with gzip compression is used, so owner and permissions are preserved in the backup files. If you want to run with hard-coded variables, set the variables under Defaults.
Usage: backupDockers.sh: [-a] [-d <backup directory>] [<dockers and/or archive files>...]
 -b : backup mode
 -r : restore mode
 -l : list dockers
 -a : all dockers
 -c : crc comparison during rsync, default is check time and size
 -d : set backup directory
 -s : save backup during restore
Examples:
Backup dockers into a specific backup directory:
backupDockers.sh -b -d /mnt/user/backup/dockers binhex-plexpass transmission
Restore the latest archive for one docker and a specific archive file for another from a specific backup directory:
backupDockers.sh -r -d /mnt/user/backup/dockers binhex-plexpass transmission.2019-07-02-12-30-38-EDT.tgz
Backup all dockers into a specific backup directory:
backupDockers.sh -bad /mnt/user/backup/dockers
Restore all dockers from a specific backup directory:
backupDockers.sh -rad /mnt/user/backup/dockers
Source: If copy/paste doesn't work then download the file.
backupDockers.sh
#!/bin/bash

# Defaults
backup="/mnt/user/Backup/Dockers"
restore=false
all=false
checksum=false
dockers=()
files=()

usage() {
    echo "Usage: backupDockers.sh: [-a] [-d <backup directory>] [<dockers and/or archive files>...]"
    echo
    echo " -b : backup mode"
    echo " -r : restore mode"
    echo " -l : list dockers"
    echo " -a : all dockers"
    echo " -c : crc comparison during rsync, default is check time and size"
    echo " -d : set backup directory"
    echo " -s : save backup during restore"
    echo
    exit 1
}

while getopts 'brlacd:?h' opt
do
    case $opt in
        b) restore=false ;;
        r) restore=true ;;
        l) docker ps -a --format "{{.Names}}" | sort -fV
           exit 0 ;;
        a) all=true ;;
        c) checksum=true ;;
        d) backup=${OPTARG%/} ;;
        h|?|*) usage ;;
    esac
done
shift $(($OPTIND - 1))

if [ "$all" == "true" ]
then
    readarray -t all < <(printf '%s\n' "$(docker ps -a --format "{{.Names}}" | sort -fV)")
else
    all=()
fi

readarray -t items < <(printf '%s\n' "$@" "${dockers[@]}" "${all[@]}" | awk '!x[$0]++')
[ "${items[0]}" == "" ] && usage

for item in "${items[@]}"
do
    if echo $item | grep -sqP ".+\.\d\d\d\d-\d\d-\d\d-\d\d-\d\d-\d\d-\w+\.tgz"
    then
        files+=("$item")
    else
        if [ ! -z $item ]
        then
            dockers+=("$item")
        fi
    fi
done

date=$(date +"%Y-%m-%d-%H-%M-%S-%Z")
echo "DATE: $date"
appdata="/mnt/user/appdata"
cache="$backup/appdata"

if [ "$restore" == "true" ]
then
    restores=()
    errors=()
    for docker in "${dockers[@]}"
    do
        file="$(ls -t $backup/$docker/*.tgz 2>/dev/null | head -1)"
        [ -e "$file" ] && restores+=("$file") || errors+=("$docker")
    done
    for file in "${files[@]}"
    do
        docker=$(echo $file | cut -d '.' -f 1)
        file="$backup/$docker/$file"
        [ -e "$file" ] && restores+=("$file") || errors+=("$file")
    done
    for error in "${errors[@]}"
    do
        archive=$(echo "$error" | rev | cut -d '/' -f 1 | rev)
        echo "ERROR: $archive: archive not found"
    done
    readarray -t restores < <(printf '%s\n' "${restores[@]}" | awk '!x[$0]++')
    for restore in "${restores[@]}"
    do
        archive=$(echo "$restore" | rev | cut -d '/' -f 1 | rev)
        docker=$(echo "$archive" | cut -d '.' -f 1)
        [ "$docker" == "" ] && continue
        echo "DOCKER: $docker"
        running=$(docker ps --format "{{.Names}}" -f name="^$docker$")
        if [ "$docker" == "$running" ]
        then
            echo "STOP: $docker"
            docker stop --time=30 "$docker" >/dev/null
        fi
        cd "$appdata"
        backup="$docker.$date"
        echo "MOVE: $docker -> $backup"
        mv -f "$docker" "$backup" 2> /dev/null
        if [ -d "$backup" ]
        then
            echo "RESTORE: $archive -> $appdata"
            #tar --same-owner --same-permissions -xzf "$restore"
            pv $restore | tar --same-owner --same-permissions -xzf -
            if [ ! -d "$appdata/$docker" ]
            then
                echo "ERROR: restore failed"
                mv -f "$backup" "$docker" 2> /dev/null
                if [ ! -d "$appdata/$docker" ]
                then
                    echo "ERROR: repair failed"
                fi
            fi
            if [ ! -d $docker ]
            then
                echo "ERROR: restore failed"
            fi
        else
            echo "ERROR: move failed"
        fi
        if [ "$docker" == "$running" ]
        then
            echo "START: $docker"
            docker start "$docker" >/dev/null
        fi
    done
else
    for docker in "${files[@]}" "${dockers[@]}"
    do
        if ! docker ps -a --format "{{.Names}}" | grep $docker -qs
        then
            echo "ERROR: $docker: docker not found"
        fi
    done
    mkdir -p "$backup" "$cache"
    chown nobody:users "$backup" "$cache"
    chmod ug+rw,ug+X,o-rwx "$backup" "$cache"
    for docker in "${dockers[@]}"
    do
        if [ -d "$appdata/$docker" ]
        then
            echo "DOCKER: $docker"
            running=$(docker ps --format "{{.Names}}" -f name="^$docker$")
            if [ "$docker" == "$running" ]
            then
                echo "STOP: $docker"
                docker stop --time=30 "$docker" >/dev/null
            fi
            echo "SYNC: $docker"
            [ "$checksum" == "true" ] && checksum=c || checksum=
            rsync -ha$checksum --delete "$appdata/$docker" "$cache"
            if [ "$docker" == "$running" ]
            then
                echo "START: $docker"
                docker start "$docker" >/dev/null
            fi
            mkdir -p "$backup/$docker"
            echo "GZIP: $docker.$date.tgz"
            tar cf - -C "$cache" "$docker" -P | pv -s $(du -sb "$cache/$docker" | cut -f 1) | gzip > "$backup/$docker/$docker.$date.tgz"
            chown -R nobody:users "$backup/$docker"
            chmod -R ug+rw,ug+X,o-rwx "$backup/$docker"
        fi
    done
fi

date=$(date +"%Y-%m-%d-%H-%M-%S-%Z")
echo "DATE: $date"
    1 point