Leaderboard
Popular Content
Showing content with the highest reputation on 01/27/23 in all areas
-
Uncast Episode XIV: Return of the Uncast with Bob from RetroRGB

Season 2 of the Uncast pod is back and better than ever, with new host Ed Rawlings, aka @SpaceInvaderOne 👾 On this episode, Bob from RetroRGB joins the Uncast to talk about all things retro gaming, his discovery and use cases for Unraid, a deep dive into RetroNAS, and much more! Check out the show links below to connect with Bob or learn more about specific projects discussed.

Show Topics with ~Timestamps:
- Intro from Ed, aka Spaceinvader One 👾
- ~1:20: Listener participation on the pod with speakpipe.com/uncast. Speakpipe will allow you to ask questions to Ed about Unraid, ask questions directly to guests, and more.
- ~2:50: Upcoming guests
- ~3:30: Bob from RetroRGB joins to talk about Unraid vs. prebuilt NAS solutions, use cases, and RetroNAS VMs.
- ~6:30: Unraid on a laptop?
- ~9:30: Array protection, data recovery, New Configs, new hardware, and client swapping.
- ~11:50: Discovering Unraid, VMs, capture cards, user error.
- ~17:30: VMs, Thunderbolt passthrough issues, Thunderbolt controllers, Intel vs. AMD, motherboard hardware, and BIOS issues/tips.
- ~21:30: All about Bob and RetroRGB.
- ~23:00: Retro games on modern TVs, hardware, and platforms.
- ~24:34: The MiSTer FPGA project
- ~27:15: RetroNAS
- ~30:30: RetroNAS security: creating VLANs, best practices, and networking tips.
- ~37:15: Using Virtiofs with RetroNAS on Unraid, VMs vs. Docker, and streamlining the RetroNAS install process.
- ~43:13: Everdrive console cartridges and optical drive emulators.
- ~46:50: Realistic expectations and advice for new retro gaming enthusiasts.
- ~51:05: MiSTer setup how-tos and retro gaming community demographics.
- ~55:45: Retro gaming, CRTs, emulation scaling, wheeled retro gaming setups, and how to test components and avoid hardware scams.
- ~1:05: Console switches, scalers, and other setup equipment. In the end, it all comes down to personal choice.

Show Links:
- Connect and support Bob: https://retrorgb.link/bob
- Send in your Uncast questions, comments, and good vibes: https://www.speakpipe.com/uncast
- Spaceinvader One interview on RetroRGB
- MiSTer FPGA Hardware: https://www.retrorgb.com/mister.html
- RetroNAS info: https://www.retrorgb.com/introducing-retronas.html

Other Ways to Support and Connect with the Uncast:
- Subscribe/Support Spaceinvader One on YouTube: https://www.youtube.com/@uncastpod

3 points
-
Not related to your original problems, but your appdata, domains, and system shares have files on the array. In fact, the domains and system shares are set to be moved to the array. Ideally, these shares would all be on a fast pool (cache) so Docker/VM performance isn't impacted by slower parity writes, and so array disks can spin down, since these files are always open. You also have some unassigned SSDs mounted. How are you using these? They might be better as additional pools instead of unassigned devices.

2 points
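A quick way to check which of these shares still have files sitting on array disks, as a hedged sketch (the share names are the ones from this post; /mnt/diskN are the standard Unraid per-disk mount points):

```
# list any appdata/domains/system directories that live on array disks rather than a pool
for share in appdata domains system; do
  ls -d /mnt/disk*/"$share" 2>/dev/null
done
```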
-
Regarding benchmarking solid state drives, I ditched the "dd" command in favor of the "fio" utility. I hope to have the current bugs fixed soon, along with the switch to fio. I will still use dd for spinners. The dd command simply had too much overhead pulling from /dev/random; while not as much, there was still some overhead using /dev/zero. Using the fio utility, I was able to utilize the full bandwidth.

2 points
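For reference, a minimal fio sequential-read run looks something like this (a sketch, not the plugin's actual invocation; the test file path and sizes are illustrative):

```
# sequential read benchmark against a test file on the SSD being measured;
# --direct=1 bypasses the page cache so the drive itself is what gets timed
fio --name=seqread --filename=/mnt/cache/fio.test \
    --rw=read --bs=1M --size=4G --ioengine=libaio --direct=1 --numjobs=1
```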
-
Hello, I came across a small issue regarding the version status of an image that apparently was in OCI format. Unraid wasn't able to get the manifest information because of wrong headers; as a result, checking for updates showed "Not available" instead. The Docker image is the LinuxGSM container, and the fix is really simple. This is for Unraid version 6.11.5, but it will work even for older versions if you find the corresponding line in that file. SSH into the Unraid server and, in the file /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php, change line 448 to this: $header = ['Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json,application/vnd.oci.image.index.v1+json']; The version check worked after that. I suppose this change will be removed upon server restart, but it would be nice if you could include it in the next Unraid update 😊 Thanks

1 point
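You can reproduce the underlying check from a shell; here is a hedged sketch (the repository library/alpine is just a stand-in, and jq is assumed to be available) showing how the widened Accept list lets the registry also answer with OCI-format manifests:

```
# request an anonymous pull token, then fetch the manifest headers with the widened Accept list
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/alpine:pull" | jq -r .token)
curl -sI -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json,application/vnd.oci.image.index.v1+json" \
  https://registry-1.docker.io/v2/library/alpine/manifests/latest
```

The Content-Type header in the response shows which manifest media type the registry actually returned.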
-
LXC (Unraid 6.10.0+)

LXC is a well-known Linux container runtime that consists of tools, templates, and library and language bindings. It's pretty low-level, very flexible, and covers just about every containment feature supported by the upstream kernel. This plugin doesn't include the LXD-provided CLI tool lxc!

This basically allows you to run an isolated system with shared resources at the CLI level (without a GUI) on Unraid, which can be deployed in a matter of seconds and destroyed just as quickly. Please keep in mind that you have to set everything up manually after deploying the container, e.g. SSH access or a dedicated user account other than root.

ATTENTION: This plugin is currently in development and features will be added over time.

cgroup v2: Distributions which use systemd (Ubuntu, Debian Bookworm+, ...) will not work unless you enable cgroup v2. To enable cgroup v2, append the following to your syslinux.conf and reboot afterwards: unraidcgroup2 (Unraid supports cgroup v2 since version v6.11.0-rc4; a sketch of the resulting syslinux.conf entry is shown at the end of this walkthrough).

Install LXC from the CA App. Go to the Settings tab in Unraid and click on "LXC". Enable the LXC service, select the default storage path for your images (this path will be created if it doesn't exist, and it always needs to have a trailing /), and click on "Update".

ATTENTION:
- It is strongly recommended to use a real path like "/mnt/cache/lxc/" or "/mnt/diskX/lxc/" instead of a FUSE path like "/mnt/user/lxc/", to avoid slowing down the entire system when performing heavy I/O operations in the container(s) and to avoid issues when the Mover wants to move data from a container which is currently running.
- It is also strongly recommended not to share this path over NFS or SMB, because if the permissions get messed up the container won't start anymore, and to avoid data loss in the container(s)!
- Never run New Permissions from the Unraid Tools menu on this directory, because you will basically destroy your container(s)!

Now you can see the newly created directory in your Shares tab in Unraid. If you are using a real path (which is strongly recommended), whether it's on the Cache or the Array, it should be fine to leave the Use Cache setting at No, because the Mover won't touch this directory if it's set to No.

Now you will see LXC appearing in Unraid; click on it to navigate to it. Click on "Add Container" to add a container. On the next page you can specify the Container Name, the Distribution, Release, MAC Address, and whether Autostart should be enabled for the container; then click on "Create". You can get a full list of Distributions and Releases to choose from here. The MAC Address is generated randomly every time; you can change it if you need a specific one. The Autostart checkbox lets you choose whether the container should start up when the Array or LXC service is started (this can be changed later).

In the next popup you will see information about the installation status of the container (don't close this window until you see the "Done" button). After clicking on "Done", and "Done" in the previous window, you will be greeted with this screen on the LXC page; to start the container, click on "Start". If you want to disable Autostart for the container, click on "Disable" and the button will change to "Enable"; click on "Enable" to enable it again.
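As referenced above for cgroup v2, here is a hedged sketch of what the syslinux.conf entry looks like with the flag appended (the label/kernel/initrd lines follow the stock Unraid layout; yours may differ, and the file can be edited from Main > Flash):

```
label Unraid OS
  kernel /bzimage
  append unraidcgroup2 initrd=/bzroot
```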
After starting the container you will see various information (assigned CPUs, memory usage, IP address) about the container itself. By clicking on the container name you will get the storage location of the configuration file for this container and the config file contents itself. For further information on the configuration file, see here.

Now you can attach to the started container by clicking the Terminal symbol in the top right corner of Unraid and typing lxc-attach CONTAINERNAME /bin/bash (in this case lxc-attach DebianLXC /bin/bash). You can of course also connect to the container without /bin/bash, but it is always recommended to connect with the shell that you prefer. You will now see that the terminal hostname has changed to the container's name; this means you are successfully attached to the container's shell and the container is ready to use.

I recommend always updating the packages first; for a Debian-based container, run apt-get update && apt-get upgrade. Please keep in mind that this container is pretty much empty and nothing but the basic tools are installed, so you have to install nano, vi, openssh-server, etc. yourself. To install the SSH server (for Debian-based containers), see the second post.

1 point
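Putting the attach-and-update steps together, a minimal first-login sketch for a Debian-based container (the container name is the one from this walkthrough; the package choices are just examples):

```
# attach to the running container with bash, then bring it up to date and add some basics
lxc-attach DebianLXC /bin/bash
apt-get update && apt-get upgrade -y
apt-get install -y nano openssh-server
```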
-
Unraid seems to use the r8169 driver to drive the r8125, which is quite old. The bundled r8152 driver is also an old 1.12 version and doesn't seem to drive the R8156. The following patch is based on 6.11.5 and adds the latest V9.010.01 r8125 2.5G PCI NIC driver and the V2.16.3 r8156 2.5G USB NIC driver. After extracting, copy all the files over the root directory of the USB flash drive and reboot. Remember to back up the original files before overwriting.

The r8169 driver is included by default, supporting the case where an onboard r8169 NIC and a USB r8156 NIC coexist. If you don't need r8169 and want the r8125 PCI NIC driver instead, you can blacklist it as follows. Run this once on the command line; it will be saved to the flash drive, then reboot and check:

echo "blacklist r8169" > /boot/config/modprobe.d/r8169.conf

If you need r8169 and r8125 to coexist, that's more troublesome; I'd suggest swapping the NIC instead. This change requires modifying the bzfirmware file, which contains too much to modify easily. If needed, you can delete the built-in r8125 firmware from inside bzfirmware yourself and repackage it (untested, just a suggestion to try).

Patch changes beyond the stock Unraid kernel:
1. Modified the ACS Override patch, supporting the IOMMU grouping patch for certain motherboards.
2. Added the R8125 driver, modified to enable multi-queue support and disable ASPM power saving.
3. Added the R8156 driver, modified to support the latest Unraid Linux 5.19 kernel.

This is for testing only. If it feels unstable, copy the original files back and reboot to restore the previous state.

Disclaimer: this patch is for testing only. Please do not use it on an important data server; I take no responsibility for any data loss. Thank you.

Download links:
Baidu: https://pan.baidu.com/s/1987agT_JHTv6QT3ds1olqg?pwd=zapu
Mirror (Google Drive): https://drive.google.com/drive/folders/1wA9UNejVllZfBTjDQ-pquJ-Bx4Tlqluk?usp=sharing

There are also builds for 6.11.3, 6.11.1, 6.11.0, 6.10.3, 6.9.2 and 6.9.1, all with the latest drivers, if you are interested.

Known issue 1: Unraid seems to have added some internal version-check restrictions; with the patched files a new array configuration cannot be created and a "Stale Configuration" message appears. A workaround is to start the array with the original files first, and once the array configuration file has been generated, swap in the patched files and reboot. Unless you change disks, the array configuration generally doesn't need to be redone.

Known issue 2: Some r8125 NICs can't reach the full 280MB/s when copying from Unraid shares, only around 140MB/s; this may be related to motherboard design, configuration, or bridging. Some users found that disabling the br0 bridge in Unraid's network settings and using macvtap instead restores full speed. I recommend this approach; it's a bit more work, but the speed is genuinely unaffected. For example, for a virtual Synology VM, it's recommended that the first macvtap NIC use the Synology MAC address and the second NIC use any MAC address; see @benwwchen's post on page 3 of this thread.

1 point
-
Btrfs was detecting data corruption before the problem. I suggest running memtest, then rebooting after that; if you see the same errors, see here for some recovery options.

1 point
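To see the per-device error counters btrfs keeps, a hedged one-liner (assuming /mnt/cache is where the btrfs pool is mounted, the usual location on Unraid):

```
# show btrfs error counters for the pool; non-zero read/write/corruption values confirm the problem
btrfs device stats /mnt/cache
```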
-
I've installed SAS spindown on my Unraid 6.11.5 and it works fine. I just have to say it initially wasn't working at all; then I reinstalled the plugin and rebooted the server, and since then it has worked perfectly... I'd suggest reinstalling and rebooting the server, then trying to spin down a SAS drive while the syslog is open; you should see something like the following. By the way, the log you have to check is this one. PS: I don't know if it's the same for other users, but some drive models require more time to spin down; it's not a problem for sure... Here I tried to spin up and then spin down all the drives; you can test that everything works by doing the same. I'm really thankful to the creator of this plugin. I have 8 SAS drives in my build and I'm constantly saving 80W!

1 point
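One way to exercise a single drive from the shell while watching the log, as a hedged sketch (sg_start comes from sg3_utils, and /dev/sdg is just an example SAS device):

```
# spin the SAS drive down, then watch syslog for the plugin's spindown messages
sg_start --stop /dev/sdg
tail -f /var/log/syslog
```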
-
Those are container variables. Every container has different variables, and many don't have them at all. As a rule you can't influence that. If a container runs as root, there is usually a reason for it. However, this root is not the root of Unraid. It lives in a kind of chroot environment, where all resources of the host system are isolated through cgroups. That is a kernel feature which forms the basis of security in Linux. If there were a hole in it, every Linux system would be insecure; roughly as fatal as if the user system didn't work and any user could become root through a gap. Because of the aforementioned cgroups concept, that is not possible. Every path has to be passed in explicitly.

Your actual problem, though, is that the Docker daemon itself runs as root. So if Docker has a security hole and someone manages to break out of a container that way, they would be root on the host system. So far that is not possible, because no such hole is known. But in theory it would be the best approach for taking over a Docker host. Incidentally, this is why Proxmox runs LXC containers as a user and not as root; there it's implemented a bit more securely. But because of that, some containers don't run without problems. So it doesn't only have advantages; security never does.

Conclusion: there is nothing for you to optimize here. You have to trust that Docker is secure. Of course nothing is truly secure, so you have to weigh how much convenience you want to sacrifice for security. There are users, for example, who make Nextcloud reachable only via VPN; then no container is reachable over the internet and there are correspondingly few attack surfaces. My opinion: keep backups and live with the risk. No system is secure today. Besides, as a private individual you are not a notable target anyway.

1 point
-
I wasn't able to save much of anything. Thanks for your help though. Reformat complete; Docker containers reinstalled using pre-existing templates. Now reconfiguring containers and rebuilding VMs.

1 point
-
I am wondering why you are concerned about keeping all the episodes of a single season on one disk. I just timed the delay in loading a Blu-ray ISO from a spun-down disk, and it was eight seconds!

First, a bit of history: the split-level parameter was added many, many years ago. Here is the issue that was being addressed. DVD video files (.VOB) are restricted to 1GB in size, and it takes 4 to 7 of these files to make up a full-length movie, only the last file not being exactly 1GB. If you are saving DVDs in a folder-and-file structure and these .VOB files get split up onto two disks, you could have an eight-second pause in the middle of a word! NOW, that is a real problem!!! Thus the split level came into Unraid as a kludge solution. (I do mean kludge, as it can cause some very serious side issues, as are being pointed out.) It also happened at a time when there were still Unraid servers with HDs of less than 500GB. (One of mine started out with a 250GB parity drive, and I think I had a 100GB data drive.) The probability of getting 1GB files split across two disks is much higher with those drive sizes than with an 8TB one.

If you really want a trouble-free Unraid life, go with the default Unraid settings, with the exception of the 'Minimum free space:' one (mine is set to 50GB). Don't worry about where files end up. Don't worry about one disk being fuller than another disk. These things are really quite trivial. Trying to 'fix' them can cause real problems if you don't do things properly. (Believe me when I say this. Every couple of months, someone will screw up a Linux command trying to correct an OCD problem, and that causes real problems.) If you really have an issue with a file being on the wrong disk, address that problem when you identify it. (As I said, you may not even realize that Unraid has split the files...) Check to see what is causing the issue and look for a workable solution. You can probably find the room to move files if you have the 50GB Minimum free space setting in place, as you have two disks to work with. (I would recommend using the Dynamix File Manager, as it is designed to work with the Unraid array.)

1 point
-
Not normal. I run pfSense as a VM and don't have any of the listed issues. Running your gateway as a VM can cause issues, but not if you have things configured to account for them. In my case, Unraid has a 10Gb interface with a fixed IP connected to the main switch, and the pfSense VM has two 1Gb connections passed through: the WAN-assigned interface connected to the modem, and the LAN-assigned interface connected to the same switch as the 10Gb. Other than not having DNS and WAN connectivity until the VM is running, Unraid handles it just fine.

1 point
-
Apologies. Ignore me, as I just realized that this is the non-Alder Lake thread.

1 point
-
Reading what you have been contributing today, I'd like to say that I am in the exact same boat as you, with all of the same baggage: no VMs, ipvlan, Plex transcoding with i915, 6.8.3 stable, etc. etc. @Tristankin

1 point
-
I'd like to just chime in here to add to the list and confirm that I have no hang or crashing issues on 6.11.1 with an i5-12600K running on an MSI PRO Z690-A board. I have Windows VMs, about 20 Docker containers, and both Plex AND Jellyfin transcoding confirmed to be working with no crashes or hangs. I have a monitor hooked up to the server directly at all times; I've never tried to run the transcoder with it unplugged. At the start of all of this, I was experiencing the hangs as described in the original report. It just took time for Unraid to release an update and for Plex to fix their transcoder. Before I enabled it, I was getting crashes that logged to syslog randomly, and ich777 was kind enough to help and suggested I switch from macvlan to ipvlan, which fixed those crashes. Then I re-enabled Plex and Jellyfin transcoding and it all works now. I've been crash- and hang-free for over 3 months.

1 point
-
That is decided by the creator/developer of the container. But you can also influence it yourself by creating the corresponding container variables. I believe USR_ID=99 and GRP_ID=100 if you want the container to run as the standard Unraid user, and "0" for each if the container should run as root. I'm not entirely sure; I'm on the road and would have to check when I'm home.

1 point
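As a hedged illustration of what such an override looks like on the Docker command line (the variable names USR_ID/GRP_ID come from this post and vary per image; many images use PUID/PGID instead, and the image name is just a placeholder):

```
# run a container as Unraid's default user (nobody:users = 99:100) via its env variables;
# check the container's own documentation for the variable names it actually honors
docker run -d --name=myapp \
  -e USR_ID=99 \
  -e GRP_ID=100 \
  some/image:latest
```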
-
On Unraid the default is root. This decision depends on the container, i.e. on its author, because it's really a Docker matter; Unraid is only the host. Why do you ask?

1 point
-
Ok, I see the potential problem; it happened to me in the first days of my transition to the VM, so I will follow this recommendation haha! 😄 Thank you for this prompt answer! Have a great day!

1 point
-
No performance issues, but if you have an active Docker container running that uses the driver and you turn on the VM, this can result in a crash overall, as it's an either/or usage only: either your VM uses the GPU in passthrough, OR the host uses it (for example for GUI usage, Docker usage, ...). Just be aware.

1 point
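A hedged way to check that nothing on the host is still holding the GPU before starting a passthrough VM (this assumes an NVIDIA card with the nvidia-smi tool available):

```
# list any processes currently using the GPU; an empty list means it is safe to start the VM
nvidia-smi --query-compute-apps=pid,process_name --format=csv
```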
-
I hope you've seen the variable WS_CONTENT in the template. I can tell for sure that it's working, because a buddy also has a server running with some mods on it. Please keep in mind that if the container starts to loop and doesn't start properly, one of the mods is not working properly.

1 point
-
It was a fan issue with ASUS X470 boards: the fan-controller driver that's now in the base Unraid/Linux image bugs out, causing all fans to stop after a random amount of time, within a week or so. I have a file in 'config/modprobe.d' named 'disable-asus-wmi.conf' with the text:

# Workaround broken firmware on ASUS motherboards
blacklist asus_wmi_sensors

So, an update while I was awaiting a reply: I downgraded back to Unraid 6.10.3, tried the usual stuff, and failed. I re-updated back to 6.11.5, did the usual thing, and now it works. I have no idea why, but it has persisted through several reboots and transcoding now works on the GPU. Best guess: something just stuck initially and fixed itself when I cycled through the down/upgrade cycle.

1 point
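A hedged one-liner to create that blacklist file so it persists on the flash drive (the flash's config directory is mounted at /boot/config on a running Unraid system):

```
# write the modprobe blacklist to the flash drive so it is applied on every boot
printf '# Workaround broken firmware on ASUS motherboards\nblacklist asus_wmi_sensors\n' \
  > /boot/config/modprobe.d/disable-asus-wmi.conf
```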
-
Hi everyone, thought I'd update the thread and close it out, since it seems to be working now. The cause was my network switch (an HP ProCurve 1800), which I reset to factory settings. Thank you everyone for the help; I didn't think this could happen, since I hadn't configured any of the features of the switch.

1 point
-
Okay, I feel like an idiot. Turns out someone - probably a cat - partially ejected the drive. Building my closet door just became this weekend's priority project. The parity is rebuilding right now; only 23 hours to go. Thanks for the help.

1 point
-
You will see the container memory only with cgroup v1. I've already looked into this; there is no easy fix for it on Unraid, but it's on my to-do list. Certain file paths are perhaps too long, but it will create the tar file anyway. Have you tried the integrated snapshot feature yet?

1 point
-
Dear community, some thoughts following a CNN article reporting that "hackers repeatedly took advantage of several known flaws and one newly discovered vulnerability in Pulse Secure VPN, a widely used remote connectivity tool, to gain access to dozens of organizations in the defense industrial sector." I am pretty sure other VPNs like WireGuard and OpenVPN may have similar flaws. But there is another point of failure in our networks: our ISP routers. Bypassing the VPN by direct access through them is possible, sometimes even easy, as they often still have the built-in admin/admin login. Yesterday, using Burp, Hydra, and Kali, I gained access to a test network through the Wi-Fi as a demonstration to one of my friends, trying to show him how to harden his ISP router. Once in, I hit his OpenMediaVault GUI and tried to log in. Using an ESET network scanner, I highlighted a login failure, as admin/openmediavault was still in use. The only thing stopping me, besides lack of time, was his 2FA protection. My point here is that unRAID might be in the same trouble, and it doesn't have 2FA login protection. What are your thoughts on this subject?

1 point
-
WebAuthn in general would be a great addition here (adopting the FIDO standard for passkeys; link for those who want to learn more: https://fidoalliance.org/passkeys/ ). Google, Apple, and Microsoft support the standard today. This would be great to see integrated as a sign-in option for UnRAID, even if it can only support single-device passkeys due to the likely lack of BLE availability on most servers, which is required for CTAP in cross-device authentication scenarios (e.g. browser to mobile).

1 point
-
Great, everything in those folders will survive container updates since they are persistent data.

1 point
-
For me there was a chart in the Swag log with all the files that were outdated (around 8 files for me). I replaced them all with the new sample files. I was using 3 .conf files that I had to edit to match my old server names from when I set it up. Do you have a list of outdated files in the log? You will likely have to restart Swag to see them, as my log was spamming an error.

1 point
-
There have been issues with onboard Ryzen SATA controllers in the past, where they dropped out under intensive use like parity checks, disk rebuilds, etc. It's much less frequent with the latest Unraid and newer kernels, but it still happens. Also look for a BIOS update; that can sometimes help.

1 point
-
This is what I did to fix it:

1. Stop the Swag docker
2. Go to the \\<server>\appdata\swag\nginx folder
3. Rename the original nginx.conf to nginx.conf.old
4. Copy nginx.conf.sample to nginx.conf
5. Rename ssl.conf to ssl.conf.old
6. Copy ssl.conf.sample to ssl.conf
7. Restart the Swag docker

This worked for me.

1 point
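The same steps from the Unraid shell, as a hedged sketch (this assumes appdata lives at the default /mnt/user/appdata path):

```
# swap in the fresh sample configs, keeping the old ones as .old backups
cd /mnt/user/appdata/swag/nginx
mv nginx.conf nginx.conf.old && cp nginx.conf.sample nginx.conf
mv ssl.conf ssl.conf.old && cp ssl.conf.sample ssl.conf
# then restart the Swag container from the Docker tab
```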
-
I've been running unRAID for at least 6 years. In that time I've had 2 different builds, but I've used the same USB key for both. I was skeptical about it at first, but now I think the OS being held on a USB stick is one of unRAID's best features. It frees up a SATA/NVMe port for more storage, plus makes it easier to move to new hardware.

1 point
-
Please try to use ipvlan instead of macvlan in your Docker settings. I would recommend creating a dedicated bug report, since this seems unrelated to this issue.

1 point
-
You can only specify a single sub-directory for a share. Look at the Help: "List of directories to exclude from the Recycle Bin separated by commas. To specify a particular share directory, use 'share/directory'. You can specify up to one sub-directory. Unassigned Devices are specified the same way using 'mountpoint/directory'. Wild cards '*' and '?' are allowed in the directory name." So you specify 'pool/Backups' for the complete 'Backups' directory. If you are looking to exclude only 'pool/Backups/Portables', specify it like 'pool/Portables'. I see an issue with specifying multiple share folders. I think 'pool/Backups,pool/Portables' should work, but it looks like only 'Portables' is excluded. I think the Recycle Bin should exclude both 'Backups' and 'Portables' when specified this way.

1 point
-
In the overview, click on the disk symbol. A pop-up for the respective disk will then open.

1 point
-
I think it would be great to do such things through the CA App rather than through the plugin itself; this will only be integrated for testing. I have already made two scripts. The first will install a full (minimal) desktop environment, and the second will install Home Assistant Core into a freshly installed Bullseye container. These two scripts are more of a PoC and for testing, but they work perfectly fine. 😁

1 point
-
That's because Unraid isn't meant to be enterprise software or externally accessible via the WebUI. Unraid is SMB software at best, and at worst more of a homelab product. As a security engineer, if I suggested Unraid in my work environment, which is a more-than-40k-user enterprise subject to FedRAMP and HIPAA, I would probably be fired just for making the suggestion, without being utterly facetious. I do think MFA on everything is a good standard to hold ourselves to; however, MFA isn't a replacement for good security practices. I agree it should be on the roadmap, but high priority...? Honestly, if you throw your Unraid server admin UI and SSH wide open to the internet, or allow your WAN-facing Docker containers to be privileged, you shouldn't be running Unraid in the first place; you should be learning basic network security. As people have said, use a VPN, or a remote connection to a different PC on your network to access your admin UI when not there. Or even use the Unraid My Servers plugin and have MFA on your Unraid community account. https://blog.creekorful.org/2020/08/docker-privilege-escalation/ is a good example of why you should not run your Docker containers as privileged, and here is what privileged actually does: https://www.educba.com/docker-privileged/

1 point
-
This is in fact exactly the problem. In my XML I also had type='raw', which, when changed back to 'qcow2', yielded a successful startup.

1 point
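For context, a hedged sketch of where that attribute lives in the libvirt domain XML (the file path and target device are illustrative, not from this post):

```
<disk type='file' device='disk'>
  <!-- the driver type must match the actual vdisk format: 'qcow2' here, not 'raw' -->
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/user/domains/MyVM/vdisk1.img'/>
  <target dev='hdc' bus='virtio'/>
</disk>
```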
-
I think the glaring issue is that this thread seems to imply that the Unraid user interface, or the server itself, should be hardened against external attacks. This would mean that Unraid itself is exposed to the external network/internet, which basically just shouldn't be the case. This is a big, clear, red "don't do that." Instead, use a reverse proxy to expose services running on the Unraid server to the outside world. As far as exposing access to Unraid itself, if you absolutely must, I would use something like Apache Guacamole with 2FA. This way the server itself is never exposed to the outside world, and your interface to it is protected with 2FA. I don't think developing a secure remote access implementation is within the scope of Unraid. I don't think the WebUI has been scrutinized with penetration testing, and I don't think a system with only a root account should ever be exposed to the internet directly.

1 point
-
Hi. (Un)fortunately, I deal with security every day at work. Your point is valid as long as you are referring to Unraid being used in a home setting. However, in an enterprise (or, maybe in Unraid's case, SMB) environment, perimeter-based security is (rightfully) considered an antiquated concept, and each server needs proper protection regardless of ingress sources. This means that MFA is, indeed, a must. My 2c.

Edit: Also, with the new "My servers" plugin, even home configurations can be exposed, so I hope MFA finds its way into that online design.

1 point
-
I would stop the Docker service (Settings - Docker), delete the image, then re-enable the service, followed by Apps, Previous Apps, and check off what you want reinstalled. If that doesn't work, post up your Tools - Diagnostics. Each app has its own permission requirements that may or may not be compatible with SMB; that's quite normal and to be expected. By running New Permissions and including the appdata share, you may (or may not) impact the ability of a container to run (although at first thought this doesn't appear to be why yours refuses to run, unless you were also running a Docker folder instead of an image, in which case yes, you've completely trashed the image, hence my comment above). This is the reason why (if you've got FCP installed) there also exists a Docker Safe New Permissions tool, which will not let you run against the appdata share.

1 point
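For reference, the same reset can be sketched from the shell; this is hedged (it assumes the default docker.img location under /mnt/user/system, and the GUI route described above is the supported one):

```
# stop the Docker service, remove the image file, then start the service again
/etc/rc.d/rc.docker stop
rm /mnt/user/system/docker/docker.img
/etc/rc.d/rc.docker start
```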
-
While I agree with you that the security in UnRAID seems pretty weak at default settings, your router admin page should not be accessible from the outside if you configure it correctly and keep it up to date. You highlight a big problem though: default settings in all these Docker containers we pull. I think that boils down to the individual user and the software being used. Your friend is tech-savvy enough to set up his own OMV on UnRAID, so he should definitely be techy enough to know to change the default admin password. And the software should be made in such a way that default passwords are a major error event that fires warnings every time you log in. 2FA is, in my opinion, a complementary security feature that should not keep software secure on its own. But I hope some big steps are taken with regards to security by the UnRAID team going forward. I'm still on my trial period with 12 days left, and I really love UnRAID, but I keep being scared by some security defaults (SSH enabled with password auth even though the keys are generated and stored on flash, and no simple switch in the UI to disable password logons, why???). Root as the default user, and major functionality put in the hands of the community (Fix Common Problems etc.), which is a huge attack surface, because I guess these plugins in UnRAID run as root? It only takes one big community addon to be hit and a lot of servers will be infected, and I guess UnRAID's stance on this issue will be something along the lines of "you used community addons at your own risk", which is true. Sorry if I'm ranting in a somewhat unrelated thread, as this post is more about general security on UnRAID.

1 point