

Popular Content

Showing content with the highest reputation since 07/10/20 in all areas

  1. 8 points
    To expand on my quoted text in the OP, this beta brings further improvements to using a folder for the docker system instead of an image. The notable difference is that the GUI now supports setting a folder directly. The catch to using this, however, is that while you can choose the appropriate share via the GUI's dropdown browser, you must enter a unique (and non-existent) subfolder for the system to realize you want to create a folder (and include a trailing slash). If you simply pick an already existing folder, the system will automatically assume that you want to create an image. Hopefully this behaviour will be modified and/or made clearer within the docker GUI for the next release.
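A hypothetical sketch of the distinction described above - per the post, the trailing slash on a new subfolder is what tells the GUI you want a folder rather than an image (the path is illustrative, not from the GUI's actual code):

```shell
# Illustrative only: how the beta GUI interprets the entered path,
# per the post above. The example paths are hypothetical.
docker_mode() {
  case "$1" in
    */) echo "folder" ;;   # new, non-existent subfolder with trailing slash
    *)  echo "image" ;;    # otherwise the GUI assumes a loopback image
  esac
}

docker_mode "/mnt/user/system/docker/"   # folder
docker_mode "/mnt/user/system/docker"    # image
```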
  2. 7 points
    مرحبا (Hello!) The Arabic language pack is now available for Unraid in Community Applications. To install, please go to the Apps tab and search for Arabic, or click on the "Language" section in the left-hand navigation bar and you will find Arabic available! Big thanks to @albakhit, @Zyzto and the rest of the Arabic team for all of the hard work! 🙂
  3. 5 points
    6.9.0-beta25 vs. -beta24

Summary:
- fixed emhttpd crash resulting from having NFS exported disk shares
- fixed issue where specifying 1 MiB partition alignment was being ignored (see 1 MiB Partition Alignment below)
- fixed spin-up/down issues
- ssh improvements (see SSH Improvements below)
- kernel updated from 5.7.7 to 5.7.8
- added UI changes to support new docker image file handling - thank you @bonienl. Refer also to the additional information re: the docker image folder, provided by @Squid under Docker below.
- known issue: "Device/SMART Settings/SMART controller type" is ignored; will be fixed in the next release

Important: Beta code is not fully tested and not feature-complete. We recommend running on test servers only!

Multiple Pools

This feature permits you to define up to 35 named pools, of up to 30 storage devices per pool. The current "cache pool" is now simply a pool named "cache". Pools are created and managed via the Main page.

Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg is saved to config/disk.cfg.bak, and then the cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg. If you later revert to a pre-6.9 Unraid OS release you will lose your cache device assignments and will have to manually re-assign devices to cache. As long as you reassign the correct devices, data should remain intact.

When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share. The assigned pool functions identically to current cache pool operation.

Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:
- the pool assigned to the share
- disk1 through disk28
- all the other pools, in strverscmp() order

As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs.
A multiple-device pool may only be formatted with btrfs. A future release will include support for multiple "unRAID array" pools. We are also considering zfs support.

Something else to be aware of: suppose you have a 2-device btrfs pool. This will be what btrfs calls "raid1", and what most people would understand to be "mirrored disks". This is mostly true, in that the same data exists on both disks, but not necessarily at the block level. Now suppose you create another pool, and unassign one of the devices from the existing 2-device btrfs pool and assign it to the new pool - now you have two single-device btrfs pools. Upon array Start you might understandably assume there are now two pools with exactly the same data. However, this is not the case. Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will run 'wipefs' on that device so that upon mount it will not be included in the old pool. This of course effectively deletes all the data on the moved device.

1 MiB Partition Alignment

We have added another partition layout where the start of partition 1 is aligned on a 1 MiB boundary. That is, for devices which present 512-byte sectors, partition 1 will start at sector 2048; for devices with 4096-byte sectors, at sector 256. This partition type is now used for all non-rotational storage (only). It is not clear what benefit 1 MiB alignment offers: for some SSD devices you won't see any difference; for others, perhaps a big performance difference. LimeTech does not recommend re-partitioning an existing SSD device unless you have a compelling reason to do so (or your OCD just won't let it be). To re-partition an SSD it is necessary to first wipe out any existing partition structure on the device. Of course this will erase all data on the device.
Probably the easiest way to accomplish this is, with the array Stopped, to identify the device to be erased and use the 'blkdiscard' command:

blkdiscard /dev/xxx  # for example /dev/sdb or /dev/nvme0n1, etc.

WARNING: be sure you type the correct device identifier, because all data will be lost on that device! Upon the next array Start the device will appear Unformatted, and since there is now no partition structure, Unraid OS will create it.

Language Translation

A huge amount of work and effort has been put in by @bonienl to provide multiple-language support in the Unraid OS Management Utility, aka the webGUI. There are several language packs now available, and several more in the works. Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language. Note: Community Applications must be up to date to install languages. See also here. Each language pack exists in a public Unraid organization github repo. Interested users are encouraged to clone and issue Pull Requests to correct translation errors. Language translations and PR merging are managed by @SpencerJ.

Linux Kernel

Upgraded to 5.7. These out-of-tree drivers are currently included:
- QLogic QLGE 10Gb Ethernet Driver Support (from staging)
- RealTek r8125: version 9.003.05 (included for newer r8125)
- HighPoint rr272x_1x: version v1.10.6-19_12_05 (per user request)

Note that as we update the Linux kernel, if an out-of-tree driver no longer builds, it will be omitted. These drivers are currently omitted:
- Highpoint RocketRaid r750 (does not build)
- Highpoint RocketRaid rr3740a (does not build)
- Tehuti Networks tn40xx (does not build)

If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives. Better yet, pester the manufacturer of the controller and get them to update their drivers.

Base Packages

All updated to latest versions. In addition, Linux PAM has been integrated.
This will permit us to implement two-factor authentication in a future release.

Docker

Updated to version 19.03.11. It's now possible to select different icons for multiple containers of the same type. This change necessitates a re-download of the icons for all your installed docker applications; expect a delay when initially loading either the dashboard or the docker tab while this happens, before the containers show up.

We also made some changes to add flexibility in assigning storage for the Docker engine. First, 'rc.docker' will detect the filesystem type of /var/lib/docker. We now support either btrfs or xfs, and the docker storage driver is set appropriately. Next, 'mount_image' is modified to support a loopback formatted with either btrfs or xfs, depending on the suffix of the loopback file name. For example, if the file name ends with ".img", as in "docker.img", then we use mkfs.btrfs. If the file name ends with "-xfs.img", as in "docker-xfs.img", then we use mkfs.xfs.

We also added the ability to bind-mount a directory instead of using a loopback. If the file name does not end with ".img", then the code assumes it is the name of a directory (presumably on a share) which is bind-mounted onto /var/lib/docker. For example, given "/mnt/user/system/docker/docker", we first create, if necessary, the directory "/mnt/user/system/docker/docker". If this path is on a user share we then "dereference" the path to get the disk path, which is then bind-mounted onto /var/lib/docker. For example, if "/mnt/user/system/docker/docker" is on "disk1", then we would bind-mount "/mnt/disk1/system/docker/docker". Caution: the share should be cache-only or cache-no so that 'mover' will not attempt to move the directory, but the script does not check this. Additional information from user @Squid:

Virtualization

libvirt updated to version 6.4.0
qemu updated to version 5.0.0

In addition, integrated changes to the System Devices page by user @Skitals, with modifications by user @ljm42.
You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes. This makes it easier to reserve those devices for assignment to VMs. Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built into Unraid OS 6.9. Refer also to @ljm42's excellent guide.

In a future release we will include the NVIDIA and AMD GPU drivers natively in Unraid OS. The primary use case is to facilitate accelerated transcoding in docker containers. For this we require Linux to detect and auto-install the appropriate driver. However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can now easily be done through the System Devices page. Users passing GPUs to VMs are encouraged to set this up now.

"unexpected GSO errors"

If your system log is being flooded with errors such as:

Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

you need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net". In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page. For other network configs it may be necessary to directly edit the xml. Example:

<interface type='bridge'>
  <mac address='xx:xx:xx:xx:xx:xx'/>
  <source bridge='br0'/>
  <model type='virtio-net'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

SSH Improvements

There are changes in /etc/ssh/sshd_config to improve security (thanks to @Mihai and @ljm42 for suggestions):
- only the root user is permitted to login via ssh (remember: no traditional users in Unraid OS - just 'root')
- a non-null password is now required
- non-root tunneling is disabled

In addition, upon upgrade we ensure the 'config/ssh/root' directory exists on the USB flash boot device, and we have set up a symlink from /root/.ssh to this directory.
This means any files you might put into /root/.ssh will be persistent across reboots. Note: if you examine the sshd startup script (/etc/rc.d/rc.sshd), upon boot all files from the 'config/ssh' directory are copied to /etc/ssh (but not subdirectories). The purpose is to restore the host ssh keys; however, this mechanism can also be used to define custom ssh_config and sshd_config files (not recommended).

Other

AFP support has been removed. Numerous other Unraid OS and webGUI bug fixes and improvements.

Version 6.9.0-beta25 2020-07-12

Linux kernel: version 5.7.8

Management:
- fix emhttpd crash resulting from exporting NFS disk share(s)
- fix non-rotational device partitions not actually being 1 MiB aligned
- dhcpcd: ipv6: use slaac hwaddr instead of slaac private
- docker: correct storage-driver assignment logic
- ssh: allow only root user, require passwords, disable non-root tunneling
- ssh: add /root/.ssh symlink to /boot/config/ssh/root directory
- syslog: configure to also listen on localhost udp port 514
- webgui: added btrfs info for all pools in diagnostics
- webgui: Docker: allow BTRFS or XFS vdisk, or folder location
- webgui: Multi-language: fixed regression error: missing indicator for required fields
- webgui: Dashboard: fix stats of missing interface
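The Docker storage selection rules described above (the loopback file-name suffix picks the filesystem; anything else is treated as a bind-mounted directory) can be sketched as follows. This mirrors the documented behaviour of 'mount_image', not its actual code:

```shell
# Sketch of the selection rules from the Docker section above;
# behaviour as documented, not copied from Unraid's scripts.
docker_backing() {
  case "$1" in
    *-xfs.img) echo "loopback formatted with mkfs.xfs" ;;
    *.img)     echo "loopback formatted with mkfs.btrfs" ;;
    *)         echo "directory bind-mounted onto /var/lib/docker" ;;
  esac
}

docker_backing "docker.img"
docker_backing "docker-xfs.img"
docker_backing "/mnt/user/system/docker/docker"
```

Note that the more specific "-xfs.img" pattern must be tested before the generic ".img" pattern, since every "-xfs.img" name also ends in ".img".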
  4. 5 points
    I'm currently searching for some users to help test my custom build with iSCSI built into Unraid (v6.9.0 beta25). EDIT: Also made a build for Unraid v6.8.3. Currently the creation of the iSCSI target is command line only (I will write a plugin for that, but for now it should also work this way - only a few commands in targetcli). The configuration is stored on the boot drive and loaded/unloaded with the array start/stop. If somebody is willing to test the build, please contact me. As always I will release the complete source code and also implement it in my 'Unraid-Kernel-Helper Docker Container' so that everyone can build their own version with other features like nVidia, ZFS, and DVB also built in.
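For reference, creating a fileio-backed target in targetcli looks roughly like this; the IQN, names and paths below are made up for illustration, and the exact steps for this build may differ:

```shell
# Hypothetical targetcli session - names and paths are examples only
targetcli /backstores/fileio create name=vol1 file_or_dev=/mnt/user/iscsi/vol1.img size=50G
targetcli /iscsi create iqn.2020-07.local.tower:vol1
targetcli /iscsi/iqn.2020-07.local.tower:vol1/tpg1/luns create /backstores/fileio/vol1
targetcli saveconfig
```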
  5. 5 points
    The only logical course of action would be to change your name and move to another country, hoping to escape the hitman after you. 😱
  6. 5 points
    Did a test with a Windows VM to see if there was a difference with the new partition alignment. Total bytes written after 16 minutes (VM is idling, doing nothing, not even connected to the internet):

space_cache=v1, old alignment - 7.39GB
space_cache=v2, old alignment - 1.72GB
space_cache=v2, new alignment - 0.65GB

So that's encouraging, though I guess that, unlike the v2 space cache, the new alignment might work better for some NVMe devices and not make much difference for others. Still worth testing IMHO, since for some it should also give better performance. For this test I used an Intel 600p.
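For anyone who wants to repeat this kind of measurement, one way (an assumption on my part, not necessarily how the poster measured it) is to read the sectors-written counter from /proc/diskstats; field 10 is sectors written, and the unit there is always 512-byte sectors regardless of the device's real sector size:

```shell
# Bytes written so far for a given block device, from /proc/diskstats.
# Field 10 is "sectors written"; the unit is always 512-byte sectors.
bytes_written() {  # usage: bytes_written <device-name> < /proc/diskstats
  awk -v dev="$1" '$3 == dev { print $10 * 512 }'
}

bytes_written nvme0n1 < /proc/diskstats   # device name is an example
```

Sample the counter before and after the idle period and subtract to get the bytes written during the test.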
  7. 5 points
    New release of UD. Changes:
- When changing the mount point (which is also the share name), the mount point is checked for a duplicate of a user share or another UD device. Samba cannot handle two shares with the same name.
- When mounting a UD device, the mount point is checked for a duplicate, and if one is found the device will not be mounted. You will need to resolve the duplicate in order to mount the device.
- Add '--allow-discards' to the luks open command when an encrypted disk is an SSD, so discard and trim will work on the disk.
  8. 4 points
    Does it always work fine with another browser?
  9. 4 points
    Having had a few flash drives fail over the years, and since all the reviews I see online regarding flash drives seem to only care about speed, size and price, I thought I would test 27 flash drives to find the most reliable one that I could.
  10. 4 points
    Only add drives to the array as you need the capacity. Don't populate all 18 data drives; only put in what's needed to hold your current data load plus 1. So, if you have 50TB you are going to load, only put in 8 data drives (8TB each), for a usable total of 64TB, leaving 14TB free. When you get down to 8TB free, add another 8TB drive. Leave the rest on the shelf if you already bought them, or better yet, leave them on the store shelf.

ALWAYS keep one physical drive slot empty, even if that means sizing up replacement drives. The number of times I've seen on this forum where it's been useful to have an empty slot for troubleshooting or recovery purposes is countless.

One of Unraid's great strengths is the ability to add drives as needed instead of trying to plan far into the future. Fewer drive slots in use equals fewer failure points, less power and heat, and the ability to pivot to newer technologies as they emerge, both hardware and software. Whatever format and encryption decision you land on, you can change your mind as you add new drives, if the tech or your needs shift. Each new drive can use a different format and still participate in the array as a whole, either in the parity array(s) (7.X?) or cache pool(s) (6.9.X). You can use new drives as you add them to move data off older, obsolete formats and strategies, keeping the ability to refresh your array as things progress.

You asked about formats and such, but I'm giving you the answer to the long question, because information that is current will be old news soon enough. The good news is that Unraid has the long term solution, whatever that happens to be. That's how I personally would set up a new build.
  11. 4 points
    The next iteration of 'multiple pools' is to generalize the unRAID array so that you can have multiple "unRAID array pools". Along with this, introduce the concept of a primary pool and a cache pool for a share. Then you could make different combinations, e.g., a btrfs primary pool with a single-device xfs cache. To have 'mover' move stuff around, you would reconfigure the primary/cache settings for a share. This work will not get done for the 6.9 release, however.
  12. 4 points
    We have a solution for this in the works ...
  13. 4 points
    My take on this. Wifi support in Linux is limited; the main reason is that drivers are not free. It will be hit and miss with your hardware. Unraid is based on Slackware, which has virtually no wifi implementation. The network stack of Unraid is heavily modified to support a lot more networking than Slackware offers. Wifi support requires additional development, which isn't a light job (not to mention the hardware purchases needed to do so). Wifi is a support nightmare. Speaking from my own experience, I can tell you that most connection problems are caused by wifi. It is questionable whether the additional support burden is worth it. The easy solution, which I have done myself, is the installation of an AP set as a client. APs come in all sorts of shapes and sizes; it is a matter of finding the right one for the job. An advantage of a separate AP is its placement. Instead of a server somewhere tucked away in a corner, the AP can be placed anywhere in the room for best reception.
  14. 4 points
    While I appreciate your interest in esthetics, there is more to consider than just 'aligning' disk drive mounts:

This disk is mounted without a UD script; it is probably better to put it in a 6.9 cache pool. UD was intended to be a backup/temporary mount plugin. Now with the cache pool feature of 6.9, it is probably best to move the drive to a cache pool. You get the alignment you want, with the temperature and error monitoring that the cache pool provides.

The remote shares would also have to be considered in any page alignment, and might be more challenging. Yes, the flow doesn't work for me.

All that being said, I will continue to work on the layout and improve the 'look' of UD as I see ways to make it better. I am open to new ideas. UD was originally written for older versions of Unraid, and the look and feel of Unraid is evolving. I work on UD as I get time, because it is not a full time job for me.

Really! 'Stupid thing'? If it's so stupid, why do you want to use it? If you would read the first and second post, you'd see this: Your snarky attitude is not really appreciated. I find this community of volunteers providing support is the best I've ever seen for a product of this kind. We are all willing to do what we can to help you get through the challenges of Unraid, and those of us doing add-on development (plugins and dockers) are more than happy to entertain new and better features if they make sense for the wider community and are not a one-off request.
  15. 4 points
    Because that's what Microsoft chose. Good reference to other references: https://superuser.com/questions/1483928/why-do-windows-and-linux-leave-1mib-unused-before-first-partition

Theoretically, partitions should be aligned on the SSD "erase block size", e.g.: https://superuser.com/questions/1243559/is-partition-alignment-to-ssd-erase-block-size-pointless

However, the "erase block size" is an internal implementation detail of an SSD and the value is not commonly exported by any transfer protocol. You can write a program to maybe figure it out: https://superuser.com/questions/728858/how-to-determine-ssds-nand-erase-block-size

But, referring back to "that's what Microsoft chose" - SSD designers are going to make sure their products work well with Windows, and they know how Microsoft aligns partitions. Hence, pretty sure trying to figure out the exact alignment is pointless IMHO.
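As a sanity check on the 1 MiB convention discussed above: for any sector size, the first-partition start sector is just 1 MiB divided by the sector size, which is where the beta's figures of 2048 (512-byte sectors) and 256 (4096-byte sectors) come from:

```shell
# Start sector for a 1 MiB-aligned first partition, given the sector size.
# 1 MiB = 1048576 bytes.
align_start() { echo $(( 1048576 / $1 )); }

align_start 512    # 2048
align_start 4096   # 256
```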
  16. 4 points
    Long overdue updates: I am so happy with the Optane performance that I added another one. This time it's the same 905p but the 380GB 22110 M.2 form factor. I put it in the same Asus Hyper M.2 adapter / splitter, so it's now fully populated (and used exclusively for the workstation VM). My workstation VM now has a 380GB Optane boot drive + 2x 960GB Optane working drives + 2x 3.84TB PM983 storage + 2TB 970 Evo temp. Finally bought a Quadro P2000 for hardware transcoding. Had some driver issues which didn't agree with Plex, so I spent a few days migrating to Jellyfin, and then the Plex issue was fixed. 😅 I still decided to maintain both Plex and Jellyfin. The former is for the local network (mainly because I already paid for a Plex lifetime membership) and the latter for remote access (because Jellyfin users are managed on my server instead of through a 3rd party like Plex). And talking about remote access, I finally got around to setting up letsencrypt to allow some remote access while on holiday, e.g. watch media, read comics etc. Had to pay my ISP for this remote access privilege, but it's not too bad. Resisted checking out the 6.9.0 beta for quite some time, and then noticed beta22 enables multiple pools, so I made the jump, only to open the zfs can of worms. 😆 So it started with the unraid nvidia custom build having the aforementioned driver clash with Plex. That forced me to look around a bit, and I noticed ich777's custom version, which has a later driver. He also built zfs + nvidia versions, which I decided to pick just out of curiosity. My original idea was to set the 2x Intel 750 in a RAID-0 btrfs pool as my daily network-based drive. That wasn't ideal though, since I have some stuff that I want at fast NVMe speed but without the RAID-0 risks.
So after some reading, I found out that a zfs pool is created based on partitions instead of full disks (in fact, zpool create on a /dev/something will create 2 partitions: p1 is BF01 (ZFS Solaris/Apple) + p9 is 8MB BF07 (Solaris reserve), with only the BF01 used in the pool). So then came the grand plan:

Run zpool create on the Intel 750 NVMe just to set up p9 correctly, just to be safe.
Run gdisk to delete p1 and split it into 3 partitions: 512GB + 512GB + the rest (about 93GB).
Zpool p1 on each 750 in RAID 0 -> 1TB striped
Zpool p2 on each 750 in RAID 1 mirror -> 0.5TB mirror
Zpool p3 on each 750 in RAID 1 mirror -> 90+GB mirror
Leave p9 alone

So I now have a fast daily network drive (p1 striped), a safe daily network drive (p2 mirror, e.g. for vdisks, docker, appdata etc.) and a document mirror (p3). I then use znapzend to create snapshots automatically.

Some tips with zfs - cuz it wasn't that smooth sailing. It's quite appropriate that the zfs plugin is marked as for expert use only in the CA store.

I specifically use the by-id method to point to the partitions. I avoid using the /dev/sd method since the codes can change.

Sharing zfs mounts over SMB causes spamming of sys_get_quota warnings, because SMB tries to read quota information that is missing from /etc/mtab. This is because zfs import manages mounts outside of /etc/fstab (which creates entries in /etc/mtab). The solution is pretty simple: echo a mount line into /etc/mtab for each filesystem that is exposed to SMB, even through symlinks:

echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab

For whatever reason, qcow2 image on the zfs filesystem + my VM config = libvirt hanging + zfs unable to destroy the vdisk filesystem. After half a day of troubleshooting and trying out various things, my current solution is to create a volume instead of a filesystem (-V to create a volume, -s to make it thin provisioned).
That automatically creates a matching /dev/zd# (zd instead of sd, starting with zd0, then zd16, zd32, i.e. the minor number increases by 16 (0x10) for each new volume, don't ask me why) that you can mount in the VM as a block device through virtio (just like you would do to "pass through" storage by ata-id). You then use qemu-img convert to convert your vdisk file directly into /dev/zd# (target raw format) and voila, you have a clone of your vdisk in the zfs volume. Just make sure the volume size you create matches the vdisk size. Note that you might want to change cache = none and discard = unmap in the VM xml. The former is recommended, but I don't know why. The latter is to enable trim. Presumably destroying a volume will change subsequent zd# numbers, requiring changes to the xml. I don't have enough VMs for it to be a problem, and I also don't expect to destroy volumes often. This is a good way to add snapshot and compression capabilities for an OS / filesystem that doesn't support them natively. For compression, there should be somewhat better performance, as it's done on the host (with all cores exposed) instead of being limited to the cores assigned to the VM.

Copying a huge amount of data between filesystems in the same pool using console rsync seems to crash zfs - indefinitely hanging, requiring a reboot to get access back. Don't know why. Doing it through smb is fine so far, so something is kinda peculiar there. Doesn't affect me that much (I only discovered this when trying to clone appdata and vdisks between 2 filesystems using rsync).

You can use the new 6.9.0 beta feature of using a folder as the docker mount on the zfs filesystem. It works fine for me, with one major annoyance: it creates a massive number of child filesystems required for docker. It makes zfs list very annoying to read, so after using it for a day, I moved back to just having a docker image file.

I create a filesystem for the appdata of each group of similar dockers.
This is to simplify snapshots while still allowing me some degrees of freedom in defining snapshot schedules.

Turning on compression improves speed, but with caveats:
- It only improves speed with highly compressible data, e.g. reading a file created by dd from /dev/zero is 4.5TB/s (write speed was 1.2TB/s).
- For highly incompressible stuff (e.g. archives, videos, etc.), it actually has a speed penalty - very small with lz4, but there's a penalty. You definitely want to create more filesystems instead of just subfolders, to manage compression accordingly.
- gzip-9 is a fun experiment to hang your server during any IO. When people say lz4 is the best compromise, it's actually true, so just stick to that.

Future consideration: I'm considering getting another PM983 to create a raidz1 pool in the host + create a volume to mount as a virtio volume. That will give me snapshot + raid5 + compression to use in Windows. Not sure about performance, so I may want to test it out.
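The zvol-plus-qemu-img workflow described above, condensed; the pool, dataset and file names are hypothetical, so adapt them to your own layout:

```shell
# Hypothetical names throughout; this mirrors the steps described in the post.
zfs create -s -V 100G tank/vms/win10   # -V makes a volume (zvol), -s thin-provisions it
ls -l /dev/zvol/tank/vms/win10         # symlink to the matching /dev/zd# node
qemu-img convert -p -O raw win10.qcow2 /dev/zvol/tank/vms/win10
```

Remember that the volume size must match the vdisk size, and set cache='none' and discard='unmap' in the VM xml as noted above.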
  17. 4 points
    False. BTRFS is the default file system for the cache drive because it allows you to easily expand from a single cache drive to a multiple-device pool. If you're only running a single cache drive (and have no immediate plans to upgrade to a multi-device pool), XFS is the "recommended" filesystem by many users (including myself).

The docker image required CoW because docker required it. Think of the image as akin to mounting an ISO image on your Windows box. The image was always formatted as BTRFS, regardless of the underlying filesystem. IE: you can store that image file on XFS, BTRFS, ReiserFS, or via UD on ZFS, NTFS etc.

More or less true. As said, you've always been able to have an XFS cache drive with the image stored on it. The reason for the slightly different mounting options for an image is to reduce unnecessary writes to the docker.img file. There won't be a big difference (AFAIK) whether you choose a docker image formatted as BTRFS or XFS. But, as I understand it, any write to a loopback (ie: image file) is always going to incur extra IO on the underlying filesystem by its very nature. Using a folder instead of an image completely removes those excess writes. You can choose to store the folder on either a BTRFS device or an XFS device. The system will consume the same amount of space on either, because docker via overlay2 will properly handle duplicated layers etc. between containers when it's on an XFS device.

BTRFS as the docker.img file does have some problems. If it fills up to 100%, it doesn't recover very gracefully, and usually requires deleting the image, recreating it and reinstalling your containers (a quick and painless procedure).

IMO, choosing a folder for the storage lowers my aggravation level in the forum because, by its nature, there is no real limit to the size that it takes (up to the size of the cache drive), so the recurring "image filling up" issues for some users will disappear.
(And as a side note, this is how the system was originally designed in the very early 6.0 betas.)

There are just a couple of caveats with the folder method, which are detailed in the OP (my quoted text):
- Use a cache-only share. Simply referencing /mnt/cache/someShare/someFolder/ within the GUI isn't good enough. Ideally put it within its own separate share (not necessary, but it decreases the possibility of ever running new perms against the share).
- The limitations of this first revision of the GUI supporting folders mean that how you do it isn't exactly intuitive. It will get improved in the next rev though.
- Get over the fact that you can't view or modify any of the files (not that you ever need to) within the folder via SMB. Just don't export it, so that it doesn't drive your OCD nuts.

There are also still some glitches in the GUI when you use the folder method. Notably, while you can stop the docker service, you cannot re-enable it via the GUI (Settings - Docker). (You have to edit the docker.cfg file and re-enable the service there, and then stop / start the array.)
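A sketch of the docker.cfg workaround mentioned above. I believe the relevant key is DOCKER_ENABLED, but that is my assumption; check your own docker.cfg on the flash drive before editing anything:

```shell
# Assumption: the service flag in docker.cfg is DOCKER_ENABLED="yes"/"no".
# Verify the key name on your own flash drive before running anything like this.
enable_docker() {  # usage: enable_docker /boot/config/docker.cfg
  sed -i 's/^DOCKER_ENABLED="no"/DOCKER_ENABLED="yes"/' "$1"
}
```

After editing the file, stop and start the array for the change to take effect.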
  18. 4 points
    @Pducharme, @Allram & @david279 Prebuilt images for beta25 are now online
  19. 4 points
    I had the same issue... re-linking/aliasing to the existing libraries (libssl.so.1.1 and libcrypto.so.1.1) fixed the issue. A simple `ls -la` helped me figure out what libraries I had...

root@Tower:~# cd /usr/lib64/
root@Tower:/usr/lib64# ln -s libssl.so.1.1 libssl.so.1
root@Tower:/usr/lib64# ln -s libcrypto.so.1.1 libcrypto.so.1
root@Tower:/usr/lib64# ldd /usr/bin/iperf3
        linux-vdso.so.1 (0x00007ffe68998000)
        libiperf.so.0 => /usr/lib64/libiperf.so.0 (0x00001474185ba000)
        libssl.so.1 => /usr/lib64/libssl.so.1 (0x0000147418525000)
        libcrypto.so.1 => /usr/lib64/libcrypto.so.1 (0x000014741824b000)
        libm.so.6 => /lib64/libm.so.6 (0x00001474180fe000)
        libc.so.6 => /lib64/libc.so.6 (0x0000147417f19000)
        libz.so.1 => /lib64/libz.so.1 (0x0000147417d02000)
        libdl.so.2 => /lib64/libdl.so.2 (0x0000147417cfb000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x0000147417cd9000)
        /lib64/ld-linux-x86-64.so.2 (0x00001474187f2000)
root@Tower:/usr/lib64# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
  20. 3 points
    Unraid Kernel Helper/Builder

With this container you can build your own customized Unraid Kernel. Prebuilt images for direct download are at the bottom of this post. By default it will create the Kernel/Firmware/Modules/Root filesystem with nVidia & DVB drivers (currently DigitalDevices, LibreElec, XBOX One USB Adapter and TBS OpenSource drivers selectable); optionally you can also enable ZFS, iSCSI Target, Intel iGPU and Mellanox Firmware Tools support (Mellanox only for 6.9.0 and up).

nVidia driver installation: If you build the images with the nVidia drivers, please make sure that no other process is using the graphics card, otherwise the installation will fail and no nVidia drivers will be installed.

ZFS installation: Make sure that you uninstall every plugin that enables ZFS for you, otherwise it is possible that the built images will not work. You can also set the ZFS version from 'latest' to 'master' to build from the latest branch on Github if you are using the 6.9.0 repo of the container.

iSCSI Target: Please note that this feature is command line only at this time! ATTENTION: Always mount a block volume with the path '/dev/disk/by-id/...' (otherwise you risk data loss)! For instructions on how to create a target, read the manuals: Manual Block Volume.txt, Manual FileIO Volume.txt.

ATTENTION: Please read the description of the variables carefully! Once you have started the container, don't interrupt the build process; the container will automatically shut down when everything is finished. I recommend opening a console window and typing 'docker attach Unraid-Kernel-Helper' (without quotes, and replacing 'Unraid-Kernel-Helper' with your container name) to view the log output. (You can also open a log window from the Docker page, but this can be very laggy if you select many build options.) The build itself can take very long depending on your hardware, but should be done in ~30 minutes (some tasks can take a long time depending on your hardware, please be patient).
Plugin now available (it will show all the information about the images/drivers/modules that it can get): https://raw.githubusercontent.com/ich777/unraid-kernel-helper-plugin/master/plugins/Unraid-Kernel-Helper.plg Or simply download it through the CA App.

This is how the build of the images works (simplified): The build process begins as soon as the docker starts (the docker image is stopped when the process is finished). Please be sure to set the build options that you need. Use the logs, or better, open up a console window and type 'docker attach Unraid-Kernel-Helper' (without quotes) to follow the log (the browser log window can be very laggy depending on how many components you choose). The whole process status is outlined by watching the logs (the button on the right of the docker). The image is built into /mnt/cache/appdata/kernel/output-VERSION by default. You need to copy the output files to /boot on your USB key manually, and you also need to delete or move them before any subsequent builds. A backup is copied to /mnt/cache/appdata/kernel/backup-version. Copy that to another drive external to your Unraid server, so that you can easily copy it straight onto the Unraid USB if something goes wrong.

THIS CONTAINER WILL NOT CHANGE ANYTHING ON YOUR EXISTING INSTALLATION OR ON YOUR USB KEY/DRIVE; YOU HAVE TO MANUALLY PUT THE CREATED FILES FROM THE OUTPUT FOLDER ONTO YOUR USB KEY/DRIVE AND REBOOT YOUR SERVER. PLEASE BACK UP YOUR EXISTING USB DRIVE FILES TO YOUR LOCAL COMPUTER IN CASE SOMETHING GOES WRONG! I AM NOT RESPONSIBLE IF YOU BREAK YOUR SERVER OR ANYTHING ELSE WITH THIS CONTAINER; THIS CONTAINER IS THERE TO HELP YOU EASILY BUILD A NEW IMAGE AND UNDERSTAND HOW THIS ALL WORKS.

UPDATE NOTICE: If a new update of Unraid is released, you have to change the repository in the template to the corresponding build number (I will create the appropriate container as soon as possible), e.g. 'ich777/unraid-kernel-helper:6.8.3'.
Forum Notice: When something isn't working with or on your server and you make a forum post, always mention that you use a kernel built by this container! Note that LimeTech does not support custom kernels, and when something is not working you should ask in this thread if you are using this specific kernel.

CUSTOM_MODE: This is only for advanced users! In this mode the container stops right at the beginning and copies the build script and the dependencies for building the kernel modules for DVB and joydev into the main directory (I highly recommend using this mode for changing things in the build script, like adding patches or other modules to build; connect to the console of the container with 'docker exec -ti NAMEOFYOURCONTAINER /bin/bash' and then go to the /usr/src directory; the build script is executable).

Note: You can use the nVidia & DVB plugins from linuxserver.io to check if your driver is installed correctly (keep in mind that some things will display wrongly or not show up, like the driver version in the nVidia plugin - but you will see the installed graphics cards; likewise the DVB plugin will say that no kernel driver is installed but will still show your installed cards - this is simply because I don't know how their plugins work). Thanks to @Leoyzen, klueska from nVidia and linuxserver.io for the motivation to look into how this all works.

For safety reasons I recommend shutting down all other containers and VMs during the build process, especially when building with the nVidia drivers! After you have finished building the images I recommend deleting the container! If you want to build again, please redownload it from the CA App so that the template is always the newest version.

Beta build (the following is a tutorial for v6.9.0): Upgrade to your preferred stock beta version first, reboot, and then start building (to avoid problems)!
Download/redownload the template from the CA App and change the following things:
1. Change the repository from 'ich777/unraid-kernel-helper:6.8.3' to 'ich777/unraid-kernel-helper:6.9.0'
2. Select the build options that you prefer
3. Click on 'Show more settings...'
4. Set Beta Build to 'true' (you can also put in, for example, 'beta25' without quotes to automatically download Unraid v6.9.0-beta25, in which case the remaining steps are not required)
5. Start the container and it will create the folders '/stock/beta' inside the main folder
6. Place the files bzimage, bzroot, bzmodules and bzfirmware in the folder from step 5 (after the start of the container you have 2 minutes to copy over the files; if you don't copy them within these 2 minutes, simply restart the container and the build will start once it finds all the files)
(You can get the files bzimage, bzroot, bzmodules and bzfirmware from the beta zip file from Limetech, or better, first upgrade to that beta version and then copy the files from your /boot directory to the directory created in step 5, to avoid problems.)

!!! Please also note that if you build anything beta, keep an eye on the logs, especially when it comes to building the kernel (everything before the message '---Starting to build Kernel vYOURKERNELVERSION in 10 seconds, this can take some time, please wait!---' is very important) !!!
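The file copy can be done from a terminal. This sketch assumes the container's main folder is the default appdata location /mnt/cache/appdata/kernel (an assumption - adjust the destination if you changed it in the template):

```shell
# Copy the stock beta boot files from the flash drive into the
# '/stock/beta' folder created in step 5 (default appdata path assumed)
cp /boot/bzimage /boot/bzroot /boot/bzmodules /boot/bzfirmware \
   /mnt/cache/appdata/kernel/stock/beta/
```

Remember this has to happen within the 2-minute window after the container starts, or after a container restart.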
Here you can download the prebuilt images:

Unraid Custom nVidia builtin v6.8.3: Download (nVidia driver: 440.100)
Unraid Custom nVidia & DVB builtin v6.8.3: Download (nVidia driver: 440.100 | LE driver: 1.4.0)
Unraid Custom nVidia & ZFS builtin v6.8.3: Download (nVidia driver: 440.100 | ZFS version: 0.8.4)
Unraid Custom DVB builtin v6.8.3: Download (LE driver: 1.4.0)
Unraid Custom ZFS builtin v6.8.3: Download (ZFS version: 0.8.4)
Unraid Custom iSCSI builtin v6.8.3: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt

Unraid Custom nVidia builtin v6.9.0-beta25: Download (nVidia beta driver: 450.57)
Unraid Custom nVidia & DVB builtin v6.9.0-beta25: Download (nVidia beta driver: 450.57 | LE driver: 1.4.0)
Unraid Custom nVidia & ZFS builtin v6.9.0-beta25: Download (nVidia beta driver: 450.57 | ZFS built from the 'master' branch on GitHub on 2020.07.12)
Unraid Custom ZFS builtin v6.9.0-beta25: Download (ZFS built from the 'master' branch on GitHub on 2020.07.12)
Unraid Custom iSCSI builtin v6.9.0-beta25: Download (targetcli version: 2.1.53) Manual Block Volume.txt Manual FileIO Volume.txt
  21. 3 points
Good day! I would like to present a growing blog, created by a beginner for beginners - MyUnraid.ru. What prompted me to create a personal blog was the lack of Russian-language manuals on configuring Unraid OS and managing a NAS built on it in general. The site will publish my own settings, guides, tweaks and other instructions for: Docker containers, Plugins, Scripts. At the moment there are around 15-20 finished articles on these topics, aimed at making it easier to manage your own server. Twitter Telegram
  22. 3 points
Officially, LT has only said they are considering ZFS. If it's just integration with the ZFS plugin (i.e. everything else manual) then it's the same as just using the ZFS plugin, so there isn't anything to consider. That means don't expect it in the GUI anytime soon. Specifically for 6.9.0, I don't think it will have ZFS, for the reasons I already posted here.

I think you also misunderstood a few things. If you don't want to use the array at all, plug in a USB stick, assign it as disk1 and Bob's your uncle. No need to waste a HDD slot. You can file a feature request to have the one-device-in-array requirement expanded to one-device-in-array-or-cache, but if you run pure ZFS then that wouldn't make any difference. When ZFS is integrated, don't expect it to replace the array either. The array is a primary feature of Unraid, why it's called "Un"raid, why it's such a good NAS OS for media storage, etc. Mover is something users will have to "learn" (more like familiarize themselves with) if they want to use the array. Having ZFS isn't gonna change that.

Multiple arrays is not the same as multiple pools and wouldn't be an ancillary benefit of ZFS integration. Even considering multiple pools, I don't see how that is an ancillary benefit of ZFS integration - 6.9.0 has multiple pools without ZFS. BTRFS has quotas; they're not ZFS exclusive. BTRFS has snapshots; they're not ZFS exclusive. AFAIK, there are really just 2 key ZFS features that fundamentally cannot be replicated elsewhere: Zvol - KVM/QEMU can use a vdisk as an alternative to a zvol, so it's not essential. Write atomicity - important to those who run RAID5/6 pools, but as someone who has experience recovering from both ZFS and BTRFS RAID5 failures, I can tell you it's not essential.

And implementing ZFS in Unraid isn't as detriment-free as people seem to assume. For example, there's an officially reported bug in ZFS which makes it not respect isolcpus. Sure, lots of things don't respect isolcpus, but specifically with ZFS it causes severe lag under heavy IO if the cores are shared with a VM. That makes a ZFS pool a big no-no for those who want the most consistent performance, e.g. a gaming VM, which is a major use case for Unraid. FreeNAS is based on FreeBSD, which officially says "Note: VGA / GPU pass-through devices are not currently supported.", so I'm guessing that's why nobody paid attention to the ZFS bug - FreeNAS users don't even have the gaming VM use case.

I'm not saying there's no reason to support ZFS, e.g. to attract FreeNAS users. But IMO, it's just another feature in the long wish list, and given the pros/cons, there are other features that should have higher priority and/or can be accomplished with less effort.
  23. 3 points
    I have just pushed what I hope is the ‘fixed’ version of the plugin to GitHub. Let me know if you notice any further anomalies/bugs.
  24. 3 points
  25. 3 points
Everything is built for unRaid v6.8.3 and working as expected. Please send me a PM if you want the download link and instructions.
  26. 3 points
While things may change, I really don't expect LT to implement ZFS in 6.9.0 due to a few factors:

Has the question surrounding ZFS licensing been answered? It's less of a legal concern for an enthusiastic user to compile ZFS with the Unraid kernel and share it; most businesses need to get proper (and expensive) legal advice to assess this sort of stuff.

ZFS would count as a new filesystem, and I could be wrong, but I vaguely remember the last time a new filesystem was implemented was from 5.x to 6.x, with XFS replacing ReiserFS. So it wasn't just a major release but a new version number altogether.

At the very least, the 6.9.0 beta has gone far enough along that adding ZFS would risk destabilising and delaying the release (which is kinda already overdue anyway, as kernel 5.x was supposed to be out with Unraid 6.8 - so overdue that LT has made the unprecedented move of doing a public beta instead of only releasing RCs).

So TL;DR: you are better off with the ZFS plugin (or a custom-built Unraid kernel with ZFS baked in) if you need ZFS now. Other than the minor annoyance of needing to use the CLI to monitor my pool's free space and health, there isn't really any particular issue that I have seen so far, including when I attempted a mocked failure-and-recovery event (the "magic" of just unplugging the SSD 😅)
  27. 3 points
Many thanks to you and @albakhit & @Zyzto 🌹 Thank you all for the great effort
  28. 3 points
    Here's the problem. As soon as we publish a release with Nvidia/AMD GPU drivers installed, any existing VM which uses GPU pass through of an Nvidia or AMD GPU may stop working. Users must use the new functionality of the Tools/System Devices page to select GPU devices to "hide" from Linux kernel upon boot - this prevents the kernel from installing the driver(s) and initializing the card. Since there are far more people passing through GPU vs using GPU for transcoding in a Docker container, we thought it would be polite to give those people an opportunity to prepare first in 6.9 release, and then we would add the GPU drivers to the 6.10 release. We can make 6.10 a "mini release" which has just the GPU drivers. Anyway, this is our current plan. Look about 10 posts up.
  29. 3 points
I was messing with adding in a new drive I will be using for parity, and it had write cache disabled. Having finished the last drives the hard way, I decided to try a fix I found that uses a more advanced smartctl command, and what do you know, it worked! Much easier than using Windows - you don't even have to reboot.

smartctl -s wcache-sct,on,p /dev/sdX

I edited the OP with this command.
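If you want to confirm the change took effect, smartctl also has a matching 'get' option; sdX is of course a placeholder for your actual device node:

```shell
# Show the current write-cache state of the drive (placeholder device)
smartctl -g wcache /dev/sdX

# Enable write cache via the SCT feature set; the trailing ",p" makes
# the setting persistent across power cycles
smartctl -s wcache-sct,on,p /dev/sdX
```

Running the get command before and after is a quick way to verify the drive actually honored the setting.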
  30. 3 points
  31. 3 points
Not having NFS4 is a real nuisance. There are valid reasons which have been outlined in numerous threads. I installed a workaround which is ok for the time being, but it
a) is 3rd party
b) is still under development
c) requires resources which would be more useful in other places
d) requires extra maintenance
e) has its own (strange) behavior, which needs to be managed
f) requires stupid exceptions in the setup of the infrastructure the Unraid NAS is operated in

All of the above is superfluous and could be avoided. The posts quoted in the original post show that it had been "sort of" considered at some point in 2016, but not followed up any further. Even worse, despite the numerous requests from the community (going back to 2011), this topic seems to be completely ignored by limetech. No comments, no answers, no roadmap, no proposals for alternative setups/configurations, nothing. Complete ignorance for years, apart from the single comment in 2016. This is not the way to deal with customers and not the way to attract new customers. My apologies to the developers for my harsh comments, but you've been working yourselves towards this sort of reaction. Regards
  32. 3 points
<step on soap box> Keep in mind that the Unraid array disk configuration is static and doesn't change until the array is stopped. UD has to deal with hot-plugged disks and devices being dynamically mounted/unmounted, and has to keep the status of remote mounts current because they can come and go if there are network issues. UD was designed as a means to hot plug disks and make it easy to do backups and copy files to the array. Over time users wanted UD to mount devices for VMs, Dockers, and heavy downloading schemes - not really what it was designed for. If you have a lot of disks in UD, it's time to re-think your needs. The pool feature of Unraid 6.9 allows you to have many disks in separate or combined pools, with the additional support for disk spin down and temperature monitoring. UD does a refresh when events occur that affect the status shown in the UI, such as a disk being hot plugged. While it might be nice to have used and free space, disk temperatures, and open files updated in real time, UD was not intended to do that; if that is what you need, put the disks in the array or a pool. That being said, a lot has been done to make UD more responsive. Disk temperatures are refreshed every 2 minutes, not on every refresh of the UI. Getting disk parameters with commands like 'df' has been given a timeout, because when a remote share goes off-line the 'df' command hangs on all devices. </step off soap box>
  33. 3 points
This is Settings/Disk Settings/Tunable (poll_attributes). This determines how often to issue 'smartctl' commands to the storage devices in order to read the temperatures. The 'smartctl' program issues a couple of special commands to a storage device to read the SMART data. Typically with HDDs, not only will this spin them up (if spun down) but it will also flush and pause the I/O queue and send the r/w heads to the inner cylinder of the drive. This is a pretty large disruption in I/O flow, and if it happens too often it can result in 'glitching' when reading a video stream, for example. The default value of '1800' means 30 min between polls. Probably you want to ask, "If a fan fails, how long before a drive burns up?" and set the poll interval lower than that. You can set it really low, like 10 sec, and watch your HDD activity LEDs burst with activity every 10 sec as 'smartctl' does its thing.

Other notes:
If a drive is already spun down when it's time to read its temperature, we skip issuing 'smartctl' so as not to spin the drive up.
Whenever a HDD is spun up, either via command or new I/O to the device, after the spin-up has completed we'll use 'smartctl' to read the temperature.
Clicking on a device from the Main page will issue 'smartctl' in order to obtain and report the current SMART data for that device. This will result in spinning up a spun-down HDD.
Even though 'smartctl' itself generates I/O to a drive, that I/O is not counted toward "activity" for purposes of spinning down the drive as a result of inactivity.
  34. 3 points
Running the latest beta, 6.9.0-beta25. Formatted my SSDs to the new partition alignment. Massive boost in speed on my Samsung 860 Evo and Qvo drives! And they are not locking up when I do a lot of transfers, as they have done earlier.
  35. 3 points
    This is fixed in next beta... just hang on a bit..
  36. 3 points
    'next' is like hotel California - you can checkout anytime you like but you can never leave! We'll take a look.
  37. 3 points
Indeed, the inability to use NFS v4 is still an annoyance. I don't have any machines running Microsoft - all my desktop m/cs run Linux Mint, my KODI/LibreElec boxes run a Linux kernel, my Squeezeplayer boxes run a Linux kernel, my homebrew domestic lighting control system runs on Linux, even Android phones run a Linux kernel. Why would I want to run a Microsoft network filing technology? T0rqueWr3nch has highlighted some advantages of using the latest version of NFS in such an environment. Please, if it's simply a matter of turning on a kernel option, and it has no adverse effect on any other functionality, can this be implemented in the next release?
  38. 2 points
    Thanks for sharing this here. Rclone is working for me again. For anyone else that comes here looking for a solution, you just need to reboot unraid and the rclone plugin will be updated with the fix.
  39. 2 points
Go one step further. After following all the areas, go to Streams and create a new stream. Check off all the various options accordingly. Now you can automatically display everything by simply selecting that stream under "My Activity Streams". You can also replace the "UNREAD" button on the forum: after displaying your custom stream, there are a couple of icons after the stream's name, one of which sets it as your default stream.
  40. 2 points
    I just tested this to make sure. I changed one share which was prefer on one pool to prefer on another pool and invoked mover. Nothing was moved. So it appears that when moving files TO a pool, only array files are considered by mover and files in another pool will not be moved.
  41. 2 points
This thread can be used as support for the Trilium Docker.

About the template
This template refers to the Trilium Notes server running as a docker container. It uses the latest released version, as the "latest" tag isn't recommended for stable use. The "/trilium-data" directory can be configured and defaults to "/mnt/user/appdata/trilium". It contains the "backup" and "log" directories as well as the configuration and database files. The backup directory ("/trilium-data/backup") can be configured to a different location; the default is "/mnt/user/appdata/trilium/backup".

About Trilium
Trilium Notes is a hierarchical note-taking application with a focus on building large personal knowledge bases.

Links
Application Name: trilium
Wiki: https://github.com/zadam/trilium/wiki
Github Repo: https://github.com/zadam/trilium
Docker Hub: https://hub.docker.com/r/zadam/trilium
Template Repo: https://github.com/BGameiro2000/trilium-unraid
Last Updated: 2020/07/22
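For reference, a roughly equivalent manual docker run would look like the sketch below. The in-container data path and port follow the project's own Docker instructions; the tag shown is only an example (pin whatever the current release is rather than "latest"):

```shell
# Example: run Trilium pinned to a specific release tag (tag is an
# example), mapping the default Unraid appdata path to the container's
# /home/node/trilium-data directory
docker run -d --name trilium \
  -p 8080:8080 \
  -v /mnt/user/appdata/trilium:/home/node/trilium-data \
  zadam/trilium:0.43.3
```

The template does the same mapping for you; this is just to show what the paths in the template correspond to.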
  42. 2 points
    @johnnie.black Set my RAM to 2400 and I am at 25 days uptime with no issue. I think this is the longest uptime I've had since converting my box from Ubuntu 19.04 to Unraid. Thanks for the suggestion.
  43. 2 points
The license is lifetime and also includes future versions.
  44. 2 points
@i-chat, perhaps you should make this a new feature request and present your points in that forum. You can find it here: https://forums.unraid.net/forum/53-feature-requests/ WiFi may be maturing to the point where it might be usable. (I know that it will introduce another variable when we see threads complaining about transfer speeds. I know that I, for one, will avoid those involving WiFi like COVID-19...)
  45. 2 points
    What I don't get about this whole thread is that it's VERY easy and cheap to get wireless support for Unraid right now, no need to wait for OS support. Wireless game adapters are readily available, and easy to change and upgrade when wireless technology advances, instead of waiting for linux support to catch up.
  46. 2 points
Wa alaikum assalam. First of all, I would like to thank you for your initiative to contribute to translating Unraid. You can contact the forum administrator at the link below if you would like to take part in the translation: https://unraid.net/blog/professional-translators-wanted
  47. 2 points
I have a NiceHash OS VM that I like to run in the background when I'm not using my GPU or am on vacation. I often lose power for several seconds or minutes at a time where I live, and since the GPU draws so much power when mining, the UPS can only run for 1-2 minutes on battery alone, which is also about the amount of time Unraid needs to power down. Thus the UPS daemon will immediately shut down Unraid upon power loss, and if I'm away for several days I have no way to restart Unraid once power is restored (I know you can use ACPI, but it won't work after a clean shutdown). I don't know how many other miners are using Unraid, but I think it could be beneficial to be able to shut down VMs or Docker containers upon power loss in order to maximize time on UPS, and then start them back up once power is back. It could be defined as a low-power mode where you can define critical containers or VMs.
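Until something like this is built in, one rough workaround sketch is to hook the UPS daemon's on-battery event (this assumes apcupsd and its standard /etc/apcupsd/onbattery hook; the VM and container names are placeholders):

```shell
#!/bin/bash
# Hypothetical apcupsd onbattery hook: shed the power-hungry workloads
# as soon as we're on battery, so the UPS lasts longer. apcupsd's normal
# shutdown logic still applies if the battery eventually runs low.

virsh shutdown "NiceHashOS"   # placeholder VM name - gracefully stop the miner
docker stop heavy-container    # placeholder name for a non-critical container
```

A matching offbattery/startup hook could restart them once mains power returns; this only stretches the runtime, it doesn't solve restarting after a clean shutdown.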
  48. 2 points
    I wouldn't bother with the Ethernet protection. The idea behind that port is to protect you from any surges through the ethernet cables themselves. Also does some filtering.. "Maintains clean power for connected equipment by filtering out electromagnetic and radio frequency interference to improve picture and sound quality." I don't think the filtering part is really needed... or the surge protection. If you wanted to use it, then I think you would want to have your cable modem IN to the UPS and then OUT to your router. But the surge suppression is listed at only 405 joules, which is nothing. And the filtering could potentially make your signal even weaker. Most ethernet cables are shielded already anyways. And no those ethernet ports on the UPS have nothing to do with shutdown commands or any communication. They are only for the surge protection.
  49. 2 points
    I can see some key factors that would swing you one way or the other. Your skill level, ability to follow instructions / guidance / advice and general savviness with IT stuff. Setting up a 2-in-1 can vary from easy to impossible, even when things are generally easier with Unraid than other OS I have dealt with. The ability to accept that some things won't work or won't work perfectly in a VM. For example, passing through Vega / Navi / iGPU / AMD graphics / Nvidia graphics / USB controller / onboard audio may or may not work. Some USB devices don't work if connected through libvirt. And so on. If you desire for things to "just work" then you have a much better chance with 2 baremetal systems. Your desire for best possible performance. Having 2 baremetal systems will give you the best and most consistent performance. A 2-in-1 carries compromises (most notably inconsistent frame rate aka lag) that may not show up on a benchmark but may annoy you in day-to-day uses. Note: core isolation is not the cure-all of lags. It helps a lot but for example, under heavy IO, lag is a more-or-less and not a yes-or-no. With regards to PSU, I don't trust sharing PSU among multiple systems. That is just asking for trouble in my opinion.
  50. 2 points
Not OK for me:

<qemu:commandline>
  <qemu:arg value='-cpu'/>
  <qemu:arg value='-amd-stibp'/>
</qemu:commandline>

Execution error: internal error: qemu unexpectedly closed the monitor: 2020-06-19T12:55:42.118694Z qemu-system-x86_64: unable to find CPU model '-amd-stibp'

OK for me:

Before:

<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' dies='1' cores='4' threads='2'/>
  <cache mode='passthrough'/>
  <feature policy='require' name='topoext'/>
</cpu>

After:

<cpu mode='host-model' check='none'>
  <topology sockets='1' dies='1' cores='4' threads='2'/>
  <feature policy='require' name='topoext'/>
</cpu>

Asus Strix X570-E Gaming + AMD Ryzen 3600