Leaderboard

Popular Content

Showing content with the highest reputation on 07/09/20 in all areas

  1. I want to make a little change that will make files in /root/.ssh persistent, but it will affect your script. It is simply this: create /root/.ssh as a symlink to /boot/config/ssh/root. /boot/config/ssh/root is a directory where you put authorized_keys, known_hosts, etc. (sketch below). Sound good?
    4 points
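     A minimal sketch of the symlink approach from item 1, assuming the /boot/config/ssh/root path named in the post (run once, e.g. from the go file):

       # Persistent directory on the flash drive (survives reboots)
       mkdir -p /boot/config/ssh/root
       # Replace the volatile /root/.ssh with a symlink to it
       rm -rf /root/.ssh
       ln -s /boot/config/ssh/root /root/.ssh
       # authorized_keys, known_hosts, etc. now persist across reboots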
  2. Added several options for dealing with this issue in 6.9.0-beta24.
    3 points
  3. Similar to my specs and it just works great. I would go with a WD Red (M.2) SSD for cache, as it is made for 24/7 use and comes with a 5yr guarantee; same speed (SATA3, 6Gb/s) for 10-15 EUR more. I also have the 1TB cache, but I've never used more than 300GB so far ("daily use" shares are set to cache "prefer"; archive shares have cache set to "yes" with a daily scheduled mover at midnight). So I'd rather invest the bucks in an additional 8GB RAM stick and go with a 500GB cache.
     For cooling I use an ArcticFreeze 11 LP top-blower (15 EUR). It's small, runs at 1100-1300rpm, and my CPU has not yet seen a temperature above 50 degrees, even when using my Win10 VM and playing some games. The system overall is absolutely silent; you can only hear the case fans when you get within 30cm of the Node304 case. The ArcticFreeze is quite wide, but I'm using the same ASRock H370M-ITX/ac with two RAM sticks and there's still 1mm between the sticks and the cooler.
     @SirReal63 Running a video with Plex / Emby / etc. which requires transcoding always pegs a CPU at its max. Enabling hardware decoding using the iGPU or a dedicated GPU reduces CPU load to a minimum. My Emby server takes advantage of the integrated GPU of my i3-8100, and even transcoding two 4K movies in parallel pegs my CPU at no more than 30%. Btw, Plex / Emby also transcode when the movie has embedded subtitles and/or your client does not support the audio codec (e.g. DTS or EAC3 with an Amazon Fire TV Stick).
    2 points
  4. Nice. Will update the prebuilt images tomorrow... EDIT: Prebuilt images are now updated.
    2 points
  5. New stable:
     https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-450.57-Linux-Driver
     https://www.nvidia.com/Download/driverResults.aspx/162107/en-us
    2 points
  6. Just wanted to comment and report back. The card that @agarkauskas posted works very well. Here's a picture of the card. Here's the relevant lspci -nn output for the card:

       08:00.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1806] (rev 01)
       09:00.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1806] (rev 01)
       09:02.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1806] (rev 01)
       09:06.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1806] (rev 01)
       09:0e.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1806] (rev 01)
       0a:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller [1912:0015] (rev 02)
       0b:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller [1912:0015] (rev 02)
       0c:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller [1912:0015] (rev 02)
       0d:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller [1912:0015] (rev 02)

     The interesting thing about the card is that the USB headers on the front of the card are also mapped to the ports at the back. Each header offers an additional 2 USB3 ports. The header at the top maps to the top two ports at the back. Each header port is also mapped to its own respective controller, so a single controller has 2 USB3 ports as long as you're using the headers; otherwise you just have the 1 port at the back.
     I do have to mention that the way this card is designed, the ports have pretty much no clearance and can be hard to insert. I had a tough time with the port closest to the bottom of the card when inserted. I had to insert the USB plug through the PCI-e slot at the back first and plug it into the USB socket on the card, then keep it plugged in while inserting the card into the PCI-e slot on the motherboard. The rest of the ports seem to be fine.
     I have a StarTech PEXUSB3S44V PI40202-7X2B and that throws Error Code 10 as seen above. I have a StarTech PEXUSB3S44V PI40202-6X2B on the way, and I'm pretty interested to see if it actually works and to find out what PCI bridge is running on this revision of the card. I'll post back with an lspci -nn when the card arrives and test whether it actually works. Personally, even if the older-revision StarTech card works, I'm more inclined to keep the one purchased from AliExpress due to the expansion options it offers.
    2 points
  7. A "cron" job is basically just a scheduled task in Linux, like "perform X command at Y date and time" (example entry below). But yes, I personally use it to keep a backup of all my Dockers in the AppData directory, although if you specifically wanted just 7dtd you could exclude everything else. I also agree with @ich777: I use it to back the entire thing up once a week, and it deletes the old backup every week as well, though the way it's worded it sounds like it just overwrites the old one; the CA Backup thread would provide clarity on that.
    2 points
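     A minimal crontab sketch of the "X command at Y date and time" idea from item 7; the paths and schedule are hypothetical, not from the post:

       # minute hour day-of-month month day-of-week command
       # Every Sunday at 03:00, copy the 7dtd appdata to a backup share
       0 3 * * 0 rsync -a --delete /mnt/user/appdata/7dtd/ /mnt/user/backups/7dtd/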
  8. The root folder needs to be removed. As much as it pains me to say it, this docker is not very well put together. I made it with the intent of just spending an hour or two getting it working and then leaving it there, so it's very slapdash. Sorry. I committed this change to master this morning, so it should be available to you all now.
    2 points
  9. Hello, how nice it is to find a French section! My background is quite simple. I had a simple Plex and Homebridge server on Windows 10. That lasted 4 years; a few weeks ago I decided it was high time to take things to the next level! So I installed OpenMediaVault, but I'm struggling to get into it, and since then I've been reading up on Unraid and I have to admit I'm very enthusiastic! I'm waiting a few more days before migrating everything; I want to soak up as much knowledge as possible and avoid doing anything silly...
    2 points
  10. 3/1/20 UPDATE: TO MIGRATE FROM UNIONFS TO MERGERFS READ THIS POST. New users continue reading.
     13/3/20 Update: For a clean version of the 'How To' please use the github site https://github.com/BinsonBuzz/unraid_rclone_mount
     17/11/21 Update: Poll to see how much people are storing. I've added a Paypal.me upon request if anyone wants to buy me a beer.

     There have been a number of scattered discussions around the forum on how to use rclone to mount cloud media and play it locally via Plex, Emby etc. After discussions with @Kaizac, @slimshizn and a few others, we thought it'd be useful to start a thread where we can all share and improve our setups.
     Why do this? Well, if set up correctly, Plex can play cloud files regardless of size, e.g. I play 4K media with no issues, with start times of under 5 seconds, i.e. comparable to spinning up a local disk. With unlimited cloud space available for the cost of a domain name and around $5-10/pm, this becomes a very interesting proposition, as it reduces local storage requirements, noise, etc. At the moment I have about 80% of my library in the cloud and I struggle to tell whether a file is local or in the cloud when playback starts. To kick the thread off, I'll share my current setup using gdrive. I'll try and keep this initial post updated. Update: I've moved my scripts to github to make them easier to keep updated: https://github.com/BinsonBuzz/unraid_rclone_mount

     Changelog
     6/11/18 - initial setup (updated to include rclone rc refresh)
     7/11/18 - updated mount script to fix rc issues
     10/11/18 - added creation of extra user directories (/mnt/user/appdata/other/rclone & /mnt/user/rclone_upload/google_vfs) to the mount script. Also fixed a filepath typo
     11/11/18 - latest scripts added to https://github.com/BinsonBuzz/unraid_rclone_mount for easier editing
     3/1/20 - switched from unionfs to mergerfs
     4/2/20 - updated the scripts to make them easier to use and control. Thanks to @senpaibox for the inspiration

     My Setup
     Plugins needed:
     Rclone - installs rclone and allows the creation of remotes and mounts. New scripts require v1.5.1+
     User Scripts - controls how mounts get created

     How It Works
     Rclone is used to access files on your google drive and to mount them in a folder on your server, e.g. mount a gdrive remote called gdrive_vfs: at /mnt/user/mount_rclone/gdrive_vfs
     Mergerfs is used to merge files from your rclone mount (/mnt/user/mount_rclone/gdrive_vfs) with local files that exist on your server and haven't been uploaded yet (e.g. /mnt/user/local/gdrive_vfs) into a new mount /mnt/user/mount_unionfs/gdrive_vfs
     This mergerfs mount allows files to be played by dockers such as Plex, or added to by dockers like radarr etc, without the dockers even being aware that some files are local and some are remote. It just doesn't matter
     The use of an rclone vfs remote allows fast playback, with files streaming within seconds
     New files added to the mergerfs share are actually written to the local share, where they will stay until the upload script processes them
     An upload script is used to upload files in the background from the local folder to the remote. This activity is masked by mergerfs, i.e. to plex, radarr etc the files haven't 'moved'

     Getting Started
     Install the rclone plugin and via command line run rclone config and create 2 remotes:
     gdrive: - a drive remote that connects to your gdrive account. Recommend creating your own client_id
     gdrive_media_vfs: - a crypt remote that is mounted locally and decrypts the encrypted files uploaded to gdrive:
     It is advisable to create your own client_id to avoid API bans.

     Mount Script - see https://github.com/BinsonBuzz/unraid_rclone_mount for the latest script (a simplified sketch follows this item)
     Create a new script using the user scripts plugin and paste in the rclone_mount script
     Edit the config lines at the start of the script to choose your remote name, paths etc
     Choose a suitable cron job. I run this script on a 10 min (*/10 * * * *) schedule so that it automatically remounts if there's a problem.
     The script: checks if an instance is already running; remounts automatically (if a cron job is set) if the mount drops; mounts your rclone gdrive remote; installs mergerfs and creates a mergerfs mount; starts dockers that need the mergerfs mount, e.g. plex, radarr

     Upload Script - see https://github.com/BinsonBuzz/unraid_rclone_mount for the latest script
     Create a new script using the user scripts plugin and paste in the rclone_upload script
     Edit the config lines at the start of the script to choose your remote name, paths etc - USE THE SAME PATHS
     Choose a suitable cron job, e.g. hourly
     Features: checks if rclone is installed correctly; sets bwlimits. There is a cap on uploads by google of 750GB/day. I have added bandwidth scheduling to the script so you can e.g. set an overnight job to upload the daily quota at 30MB/s, have it trickle up over the day at a constant 10MB/s, or set variable speeds over the day. The script now stops once the 750GB/day limit is hit (rclone 1.5.1+ required) so there is more flexibility over upload strategies. I've also added --min-age 10mins to stop any premature uploads, and exclusions to stop partial files etc getting uploaded.

     Cleanup Script - see https://github.com/BinsonBuzz/unraid_rclone_mount for the latest script
     Create a new script using the user scripts plugin and set it to run at array start (recommended) or array stop

     In the next post I'll explain my rclone mount command in a bit more detail, to hopefully get the discussion going!
    1 point
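     A heavily simplified sketch of the mount step from item 10, using the remote and paths named there; the real scripts on the linked github add instance checks, docker starts and many more tuning flags:

       # Create the mount points (paths from the guide)
       mkdir -p /mnt/user/mount_rclone/gdrive_vfs /mnt/user/local/gdrive_vfs /mnt/user/mount_unionfs/gdrive_vfs
       # Mount the crypt remote with a vfs cache
       rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_vfs \
         --allow-other --dir-cache-time 720h --vfs-cache-mode writes &
       # Overlay local, not-yet-uploaded files on top of the cloud mount
       mergerfs /mnt/user/local/gdrive_vfs:/mnt/user/mount_rclone/gdrive_vfs \
         /mnt/user/mount_unionfs/gdrive_vfs -o rw,use_ino,category.create=ff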
  11. Note: this community guide is offered in the hope that it is helpful, but comes with no warranty/guarantee/etc. Follow at your own risk. This guide explains how to make an outgoing WireGuard VPN connection to a commercial VPN provider. If you are trying to access your Unraid network from a remote location, see the original WireGuard quickstart guide.

     Commercial VPN Providers
     Several commercial VPN providers support WireGuard; a few are listed below. No endorsement is implied, you need to research and determine which one meets your needs. Comment below if you are aware of others:
     VPN Jantit (free! Scroll down and pick a location. Note that the free options have to be recreated every few days.)
     AzireVPN
     Mullvad (download WireGuard config files - requires login. See this tip.)
     IVPN (download WireGuard config files - requires login)
     OVPN
     Windscribe (see this)
     Avoid these providers; they require a customized WireGuard client and will not work with Unraid:
     TunSafe (this seems to require a custom WireGuard client now)
     Nord (see this)
     PIA (see this, although with a lot of extra work it is possible. This definitely falls outside of what could be considered supported though. Also see this.)
     Note that with the current state of WireGuard, VPN providers cannot guarantee the same amount of privacy as they can with OpenVPN. See: https://restoreprivacy.com/wireguard/ Typically the objections are not around security, but around the fact that it is harder for them to guarantee that they cannot track you.

     Configuring "VPN tunneled access for docker" (new in 6.10.0-rc5! For older versions see the next post)
     Download a config file from your preferred commercial VPN provider.
     On the Settings -> VPN Manager page, click the "Import Config" button and select the file on your hard drive. This will create a new tunnel specific to this provider.
     The "Peer type of access" will default to "VPN tunneled access for docker". There are no settings to change, except perhaps to give it a local name. Click Apply.
     Note: you do not need to forward any ports through your router for this type of connection.
     Change the Inactive slider to Active.
     Take note of the name of this tunnel; it will be wg0 or wg1 or wg2, etc. You'll need this later when setting up your containers.
     Also note that any DNS setting the commercial VPN provides is not imported. Open their config file and see if there is a "DNS" entry; make note of the server they provided, you will use it below. If they didn't provide one, you may want to use Google's at 8.8.8.8.

     Testing the tunnel (a docker-run sketch follows this item)
     Note: the "VPN tunneled access for docker" tunnel includes a kill switch - if the tunnel drops, then any containers using that tunnel will lose access to the Internet. Important! Prior to Unraid 6.11.2, you must take care to start the WireGuard tunnel *before* the Docker container in order for the kill switch to work. If the docker container is started first, it will use the server's default Internet connection. That is no longer an issue for tunnels created/updated after installing Unraid 6.11.2.
     Using Community Applications, install a Firefox Docker container.
     When setting up the container, set the "Network Type" to "Custom: wg2" (or whatever the name of the tunnel was in the previous step).
     Switch to Advanced view and add your preferred DNS provider to the "Extra Parameters", i.e.: --dns=8.8.8.8 (if you don't set this, the container may leak your ISP's DNS server).
     The rest of the defaults should be fine; apply the changes and start the container.
     Launch Firefox and visit https://whatismyipaddress.com/ - you should see that your IP address is in the country you selected when you signed up with the provider.
     Also visit https://www.dnsleaktest.com/ and run a test; confirm that it only finds IPs related to the DNS provider you specified.
     Feel free to add more containers to this same tunnel, or create multiple tunnels if desired.
    1 point
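     A rough docker-run equivalent of the container settings in item 11, assuming a tunnel named wg2 and an example Firefox image (jlesage/firefox); on Unraid the template normally builds this command for you:

       # Attach the container to the WireGuard tunnel's network and pin its DNS
       docker run -d --name=firefox --network=wg2 --dns=8.8.8.8 jlesage/firefox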
  12. 6.9.0-beta24 vs. -beta22 summary: fixed several bugs; added some out-of-tree drivers; added the ability to use xfs-formatted loopbacks, or no loopback at all, for docker image layers (refer to the Docker section below for more details). (-beta23 was an internal release.)
     Important: beta code is not fully tested and not feature-complete. We recommend running on test servers only!

     Multiple Pools
     This feature permits you to define up to 35 named pools, of up to 30 storage devices/pool. The current "cache pool" is now simply a pool named "cache". Pools are created and managed via the Main page.
     Note: when you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg. If you later revert back to a pre-6.9 Unraid OS release you will lose your cache device assignments and will have to manually re-assign devices to cache. As long as you reassign the correct devices, data should remain intact.
     When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share. The assigned pool functions identically to current cache pool operation.
     Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order: the pool assigned to the share, then disk1 .. disk28, then all the other pools in strverscmp() order.
     As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs. A multiple-device pool may only be formatted with btrfs. A future release will include support for multiple "unRAID array" pools. We are also considering zfs support.
     Something else to be aware of: let's say you have a 2-device btrfs pool. This is what btrfs calls "raid1", and what most people would understand to be "mirrored disks". Well, this is mostly true, in that the same data exists on both disks, but not necessarily at the block level. Now let's say you create another pool, and what you do is unassign one of the devices from the existing 2-device btrfs pool and assign it to this pool. Now you have 2x 1-device btrfs pools. Upon array Start a user might understandably assume there are now 2x pools with exactly the same data. However, this is not the case. Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will do a 'wipefs' on that device so that upon mount it will not be included in the old pool. This of course effectively deletes all the data on the moved device.

     Language Translation
     A huge amount of work and effort has been put in by @bonienl to provide multiple-language support in the Unraid OS Management Utility, aka the webGUI. There are several language packs now available, and several more in the works. Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language. Note: Community Applications must be up to date to install languages. See also here. Each language pack exists in a public Unraid organization github repo. Interested users are encouraged to clone and issue Pull Requests to correct translation errors. Language translations and PR merging are managed by @SpencerJ.

     Linux Kernel
     Upgraded to 5.7. These out-of-tree drivers are currently included:
     QLogic QLGE 10Gb Ethernet Driver Support (from staging)
     RealTek r8125: version 9.003.05 (included for newer r8125)
     HighPoint rr272x_1x: version v1.10.6-19_12_05 (per user request)
     Note that as we update the Linux kernel, if an out-of-tree driver no longer builds, it will be omitted. These drivers are currently omitted:
     Highpoint RocketRaid r750 (does not build)
     Highpoint RocketRaid rr3740a (does not build)
     Tehuti Networks tn40xx (does not build)
     If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives. Better yet, pester the manufacturer of the controller and get them to update their drivers.

     Base Packages
     All updated to the latest versions. In addition, Linux PAM has been integrated. This will permit us to install 2-factor authentication packages in a future release.

     Docker
     Updated to version 19.03.11. We also made some changes to add flexibility in assigning storage for the Docker engine. First, 'rc.docker' will detect the filesystem type of /var/lib/docker. We now support either btrfs or xfs, and the docker storage driver is set appropriately.
     Next, 'mount_image' is modified to support a loopback formatted with either btrfs or xfs, depending on the suffix of the loopback file name. For example, if the file name ends with ".img", as in "docker.img", then we use mkfs.btrfs. If the file name ends with "-xfs.img", as in "docker-xfs.img", then we use mkfs.xfs.
     We also added the ability to bind-mount a directory instead of using a loopback. If the file name does not end with ".img", then the code assumes it is the name of a directory (presumably on a share) which is bind-mounted onto /var/lib/docker. For example, for "/mnt/user/system/docker/docker" we first create, if necessary, the directory "/mnt/user/system/docker/docker". If this path is on a user share we then "dereference" the path to get the disk path, which is then bind-mounted onto /var/lib/docker. For example, if "/mnt/user/system/docker/docker" is on "disk1", then we would bind-mount "/mnt/disk1/system/docker/docker". Caution: the share should be cache-only or cache-no so that 'mover' will not attempt to move the directory, but the script does not check this. In this release, however, you must edit the 'config/docker.cfg' file directly to specify a directory, for example: DOCKER_IMAGE_FILE="/mnt/user/system/docker/docker" (see the examples after this item).
     Finally, it's now possible to select different icons for multiple containers of the same type. This change necessitates a re-download of the icons for all your installed docker applications. A delay when initially loading either the dashboard or the docker tab while this happens is to be expected prior to the containers showing up.

     Virtualization
     libvirt updated to version 6.4.0; qemu updated to version 5.0.0.
     In addition, integrated changes to the System Devices page by user @Skitals, with modifications by user @ljm42. You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes. This makes it easier to reserve those devices for assignment to VMs. Note: if you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built in to Unraid OS 6.9. Refer also to @ljm42's excellent guide.
     In a future release we will include the NVIDIA and AMD GPU drivers natively in Unraid OS. The primary use case is to facilitate accelerated transcoding in docker containers. For this we require Linux to detect and auto-install the appropriate driver. However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can now be easily done through the System Devices page. Users passing GPUs to VMs are encouraged to set this up now.

     "unexpected GSO errors"
     If your system log is being flooded with errors such as:

       Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

     you need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net". In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page. For other network configs it may be necessary to directly edit the xml. Example:

       <interface type='bridge'>
         <mac address='xx:xx:xx:xx:xx:xx'/>
         <source bridge='br0'/>
         <model type='virtio-net'/>
         <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
       </interface>

     Other
     AFP support has been removed. Numerous other Unraid OS and webGUI bug fixes and improvements.

     Version 6.9.0-beta24 2020-07-08
     Bug fixes:
     fix emhttpd crash expanding number of slots for an existing pool
     fix share protected/not protected status
     fix btrfs free space reporting
     fix pool spinning state incorrect
     Base distro:
     curl: version 7.71.0
     fuse3: version 3.9.2
     file: version 5.39
     gnutls: version 3.6.14
     harfbuzz: version 2.6.8
     haveged: version 1.9.12
     kernel-firmware: version 20200619_3890db3
     libarchive: version 3.4.3
     libjpeg-turbo: version 2.0.5
     lcms2: version 2.11
     libzip: version 1.7.1
     nginx: version 1.19.0 (CVE-2019-9511, CVE-2019-9513, CVE-2019-9516)
     ntp: version 4.2.8p15
     openssh: version 8.3p1
     pam: version 1.4.0
     rsync: version 3.2.1
     samba: version 4.12.5 (CVE-2020-10730, CVE-2020-10745, CVE-2020-10760, CVE-2020-14303)
     shadow: version 4.8.1
     sqlite: version 3.32.3
     sudo: version 1.9.1
     sysvinit-scripts: version 2.1
     ttyd: version 20200624
     util-linux: version 2.35.2
     xinit: version 1.4.1
     zstd: version 1.4.5
     Linux kernel: version 5.7.7
     out-of-tree driver: QLogic QLGE 10Gb Ethernet Driver Support (from staging)
     out-of-tree driver: RealTek r8125: version 9.003.05
     out-of-tree driver: HighPoint rr272x_1x: version v1.10.6-19_12_05
     Management:
     cleanup passwd, shadow
     docker: support both btrfs and xfs backing filesystems
     loopbacks: permit xfs or btrfs based on filename
     mount_image: support bind-mount
     mount all btrfs volumes using 'space_cache=v2' option
     mount loopbacks with 'noatime' option; enable 'direct-io'
     non-rotational device partitions aligned on 1MiB boundary by default
     ssh: require passwords, disable non-root tunneling
     web terminal: inhibit warning pop-up when closing window
     webgui: Add log viewer for vfio-pci
     webgui: Allow different image types to upload with 512K max
     webgui: other misc. improvements
     webgui: vm manager: Preserve VNC port settings
    1 point
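     The three Docker storage choices described in item 12 come down to the DOCKER_IMAGE_FILE value in config/docker.cfg; the paths below are the examples from the release notes themselves:

       # btrfs-formatted loopback (name ends in ".img")
       DOCKER_IMAGE_FILE="/mnt/user/system/docker/docker.img"
       # xfs-formatted loopback (name ends in "-xfs.img")
       DOCKER_IMAGE_FILE="/mnt/user/system/docker/docker-xfs.img"
       # bind-mounted directory, no loopback (share should be cache-only or cache-no)
       DOCKER_IMAGE_FILE="/mnt/user/system/docker/docker"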
  13. Yes, even with the cheap ArcticFreeze, the CPU stays approx. 10 degrees cooler (according to tests I've read, never tested it myself) and it's quieter than the stock one. For RAM, it depends on the usage of your system. I use 8GB RAM for my VM(s) and the other 8GB for unRAID itself (including docker containers; unRAID + dockers utilize approx. 5GB RAM in my setup). When using unRAID only as a typical NAS, 8GB is fair, but once your system has the power to do more, I'm pretty sure you will do more. There's always room for more RAM. Btw, as you mentioned the 6 SATA ports of the H370M-ITX/ac board: when you use the m.2 slot, the first SATA port is disabled as it uses the same bus. So you are only able to use six internal drives, regardless of whether that's 1x m.2 and 5x SATA or no m.2 and 6x SATA. However, this board is the only mITX board which comes with 6 drive ports for a reasonable price - that's why I also chose this one.
    1 point
  14. Thank you. I can recreate. There is something different about how disk statistics are recorded....
    1 point
  15. It will use the existing pool as is, as long as all members are assigned after the new config.
    1 point
  16. Please let me know if this is what you are looking for. Thanks! tower-diagnostics-20200709-0942.zip
    1 point
  17. Yes, it does. It certainly simplifies things. It will be easy enough to modify any scripts/go file entries that reference /root/.ssh and the files in /boot/config/ssh/root.
    1 point
  18. If everything went well with the nVidia drivers, the filesize of bzroot should be ~240MB. EDIT: This is completely dependent on the variables you choose... and the driver version can also move it 20 to 30 MB up or down.
    1 point
  19. Hello everyone! I'm botteur and I'm from Belgium! I've been on OpenMediaVault for years, and for the last few days I've been looking into migrating to Unraid. I hope to find good solutions and tips here.
    1 point
  20. Have you also left the nVidia driver version at 'latest'? If so, that is pretty normal, since I've baked the latest beta drivers into the images and these are not available with 'latest' (they are beta drivers); you have to change it to '450.51' to get them. It is also possible that the compression is slightly different, for example on the bzimage. I compile and create the images on my Xeon 2670s; it's also possible that there is a slight difference there.
    1 point
  21. @ich777 Is it expected that the sizes of the re-compiled files vary slightly? What I mean is: I am using this to compile the latest 6.9.0-beta24 for Nvidia and DVB, using the defaults of 'latest' and 'latest' with beta enabled. The compilation seems to go smoothly and without issue. But if I download your pre-compiled beta24 with nvidia and dvb and compare the file sizes to mine, they are slightly different. Does the fact that my architecture is an AMD system make a difference?
    1 point
  22. You just need to add to <features>, and add to <cpu mode='host-passthrough' check='none'>.
    1 point
  23. I think this is the savegame that the container initially creates, since the standard name for the game is 'mygame'. But since I'm not familiar with the game (I created the container on a user request without actually owning the game) I really can't tell what this is... You can try to move the folder out to another location, start the server, and see if everything works.
    1 point
  24. I use a Hyper 212 but if the fan on it ever dies I will change to a Noctua. Make sure whatever cooler you use doesn't interfere with the ram slots. I know you only plan on using one stick, but if you want to add more in the future it becomes important. My Hyper 212 blocks a ram slot which prevents me from using all 4 slots.
    1 point
  25. No, just the entire saves folder and all subfolders if there are any.
    1 point
  26. You need a fibre-to-the-building provider, which isn't typically widely available. My building, for example, has a provider offering 1000/1000, 500/500, and a "basic" package of 250/250. Most other fibre services have an upload speed cap of 10 to 50 Mbps even with gigabit download; e.g. Virgin has an M500 pack that is 500/52.
    1 point
  27. Just power back on and it should start rebuilding the data disk (from the beginning); if it doesn't, or if there is any unmountable disk, please post diagnostics.
    1 point
  28. You are in the wrong support thread; this is for the NON-VPN version of sabnzbd. Please do the following as detailed here: https://github.com/binhex/documentation/blob/master/docker/faq/help.md and post it here: https://forums.unraid.net/topic/44119-support-binhex-sabnzbdvpn/
    1 point
  29. 35 hours and no lock up. Still none the wiser as to why a power cut changed anything.
    1 point
  30. To disable SIP for Big Sur, the value is FF0F0000. You can boot recovery if you enable JumpstartHotPlug under UEFI > APFS, but only use it to boot the Recovery, not for normal boots.
    1 point
  31. Great guide, thanks! Just got mine up and running using this.
    1 point
  32. The reason for "adding a tick" to a device is to ensure Unraid doesn't install any drivers for it, thus leaving the device available to be passed through to a VM. I'd guess that Unraid doesn't load audio drivers right now so it isn't strictly necessary to bind the audio device to vfio-pci. But I'd probably add the tick anyhow, in case something changes in the future.
    1 point
  33. Thank you for the update and letting us know the driver is now added! I installed and tested the beta24 version and now my Ethernet port works! Happy Happy Happy
    1 point
  34. Hey, looking at the source of the 'df' command, it looks like they use 'statvfs()' instead of 'statfs()' - I'll give that a go.
    1 point
  35. A case can be made that the way we output is more "correct" than the 'df' command, because it takes into account the space used by raid5 parity - though that's not the case with raid1 😆. My understanding was that 'df' just used the statfs() system call, but obviously it is reporting the wrong "Total" for raid5. The maddening thing about btrfs is their stubbornness in not accepting the reality that different subvolumes cannot at this time have different "raid" levels (the design theoretically permits this). Hence these free space reporting issues persist forever, which does nothing but diminish confidence in btrfs. /rant
    1 point
  36. Awesome, the most likely reason is that plex lost the metadata for the audio associated with each of your video files; this usually occurs because your plex container cannot access your shares during the plex maintenance window, and it then deletes any analysis files it has. Alternatively, your audio codecs got corrupted, and deleting the codecs folder forces plex to redownload fresh ones.
    1 point
  37. Granted, it does, though I only play on the local network and nothing higher than 720p so I don't think I do any transcoding. I would still budget for a better CPU cooler, the stock one is not very good.
    1 point
  38. Hello. I want to share, because I myself was looking for this for a long time. I have NextCloud in production, nextcloud repository: production-apache, version 17. (A live-update note follows this item.)
     1. I entered the script from the topic in the terminal:

       sysctl -w vm.max_map_count=262144

     2. I installed the 4 full-text search plug-ins (the ones with the magnifying-glass icon) via the NextCloud web UI.
     3. I installed the ingest-attachment plugin for Elasticsearch; entered in the terminal:

       docker exec -it elasticsearch bash
       /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-attachment

     4. Then I made the settings in NextCloud, as in the photo.
     5. Then I started indexing. In the terminal:

       docker exec --user www-data nextcloud php occ fulltextsearch:index

     I hope this helps someone.
    1 point
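     If the index from item 38 should stay current after the initial run, the fulltextsearch app also has a live-update mode; a hedged sketch reusing the container name from the post:

       # Keep the index updated as files change (runs in the foreground)
       docker exec --user www-data nextcloud php occ fulltextsearch:live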
  39. I need tests for my tests ... fixed in next release.
    1 point
  40. MacOS Catalina 16GB - general workstation (Photoshop & Illustrator)
     MacOS Mojave 16GB - general workstation (some FCPX, some coding, but usually just lots of apps)
     Windows 10 utility 12GB - home control incl. Blue Iris security
     Windows 10 entertainment 16GB - just games and home theatre
     With 64GB, I had to pare back RAM on those systems to allow for a stack of Dockers and, of course, space for unRaid to run. The extra 32GB just gives the system room to breathe and allows me to maximise it. For example, it also allows me to spin up additional VMs as testers or trials without disturbing anyone else. The above VMs are in pretty much constant use by someone in the family at any given time, and announcing I need to shut one or two down for a bit of experimentation would not go down well. There are really no other computer systems in the house - everything is consolidated into my unRaid server, so it's something of a workhorse.
     I've been using Macs since a venerable LC II on OS 6.x and have always found them to run better with more RAM, particularly for the visual applications, or when using lots of apps simultaneously. For the Windows VM use cases, I find the same. You can never have enough RAM. I'll have it etched on my headstone.
    1 point
  41. Mover completed and everything appears to be working better. Appreciate it!
    1 point
  42. The first thing to do would be to respect the max supported RAM speeds for your config; you're currently overclocking the RAM. Also check the "power supply idle control" setting.
    1 point
  43. You can rename a pool by stopping the array, then click on the pool name in the Main page, which brings up the pool settings. In here clicking on the name opens a window to rename the pool. Renaming a pool does not change any internal references. For example if the path of your docker image contains a direct reference to the pool name, e.g. /mnt/cache/system/docker.img, you will need to update this reference manually.
    1 point
  44. Everything should work as you have it. If you have been happy with the prior system, this will be much better. What you have selected isn't much different from what I started with, though I had 16 gigs of RAM. The i3-9100 is practically the same power as the R3-2200G I started with. What I noticed is that it would peg the CPU at 100% with a movie on Plex. I noticed no issues with the movie; it played just fine. By switching to a 6-core/12-thread processor, it rarely ever gets above 50% usage with everything else in the system being the same. A maxed-out CPU will run hotter and potentially have its life reduced compared to one running cooler at a lower max usage. If nothing else, add a better CPU cooler than the stock one; the stock cooler isn't very robust. If it were me, I would scale back to a 500 gig cache drive and put the extra money into a faster CPU. Having said that, what you have will work just fine.
    1 point
  45. Sometime very late last night/early this morning, the UniFi docker container hit the 2GB RAM limit I had set for the container. The result was that it kept running and did something to reduce RAM usage. When I went to bed last night, the container was at 1.97GB RAM usage. When I checked it this morning, all was well and RAM usage was at 1.02GB. It has since increased to 1.11GB. The container was not restarted and never crashed according to the logs.
     Having a RAM limit set on this container is very useful (see the example below); otherwise, it would probably continue to use RAM until system RAM is gone or the container is restarted through an update or appdata backup. In either instance, with weekly updates and/or backups, somewhere around 2GB utilization is what I saw before the container was restarted and RAM utilization reset. The RAM limit on the container seems to cause it to "clean out" RAM it is not really using when it hits the limit.
    1 point
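     A memory cap like the 2GB limit described in item 45 can be set with Docker's standard --memory flag, on Unraid typically via the template's Extra Parameters field:

       # Limit the container to 2 GB of RAM
       --memory=2g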
  46. Hello and welcome! I also run Jeedom (installed on a Debian setup under Unraid), mainly using Z-Wave around the house.
    1 point
  47. For Duplicati, you need to mount /mnt/user; then you can tick whatever you need to back up. Normally with CA Backup you do back up the USB key (i.e. your whole Unraid config), your AppData for Docker, and the img file for the VM config.
    1 point
  48. Ah-ha! I may have found the problem: If I change the Primary vDisk Location from Auto to Manual, I can then Update the VM. But when I Edit it again, the Primary vDisk Location reverts to Auto.
    1 point
  49. I don't believe there is log rotation enabled in unraid's implementation of docker, and there doesn't seem to be a limit on log size either. I kept running out of space in my docker image and finally realized that a few of the containers were filling up the image with their logs. One container had a log that was 2.8GB (couchpotato).
     The temporary solution was to reinstall the container; that reset the files in /var/lib/docker/containers and got rid of the large logs. It freed up a ton of space, too. (For any users interested, the easiest way to reinstall is to click on the container image in the unraid gui, select edit, don't change any settings and just hit save; it will reinstall the container with the same settings.)
     Is there any way a limit on size can be implemented, or log rotation? Or perhaps an option to move the logs somewhere else where there is more storage available? (See the sketch after this item.) Thanks
     PS. If anyone's interested in checking how big their logs are, type this in the unraid terminal and it will list the largest logs:

       du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60
    1 point
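     Stock Docker can do the rotation item 49 asks for via its json-file log options; a hedged per-container sketch (the couchpotato image name is only an example, and whether Unraid's GUI exposes these flags is a separate question):

       # Cap this container's log at three 10 MB files
       docker run -d --name=couchpotato \
         --log-opt max-size=10m --log-opt max-file=3 \
         linuxserver/couchpotato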