Leaderboard

Popular Content

Showing content with the highest reputation on 05/24/22 in all areas

  1. LXC (Unraid 6.10.0+) LXC is a well-known Linux container runtime that consists of tools, templates, and library and language bindings. It's pretty low level, very flexible and covers just about every containment feature supported by the upstream kernel. This plugin does not include the LXD-provided CLI tool lxc! It basically allows you to run an isolated system with shared resources at CLI level (without a GUI) on Unraid, which can be deployed in a matter of seconds and also destroyed quickly. Please keep in mind that you have to set everything up manually after deploying the container, e.g. SSH access or a dedicated user account other than root. ATTENTION: This plugin is currently in development and features will be added over time.

cgroup v2 (ONLY NECESSARY if you are below Unraid version 6.12.0): Distributions which use systemd (Ubuntu, Debian Bookworm+, ...) will not work unless you enable cgroup v2. To enable cgroup v2, append the following to your syslinux.conf and reboot afterwards: unraidcgroup2 (Unraid supports cgroup v2 since version 6.11.0-rc4).

Install LXC from the CA App. Go to the Settings tab in Unraid and click on "LXC". Enable the LXC service, select the default storage path for your images (this path will be created if it doesn't exist and it always needs to have a trailing /) and click on "Update".

ATTENTION:
- It is strongly recommended to use a real path like "/mnt/cache/lxc/" or "/mnt/diskX/lxc/" instead of a FUSE path like "/mnt/user/lxc/", to avoid slowing down the entire system when performing heavy I/O operations in the container(s) and to avoid issues when the Mover wants to move data from a container which is currently running.
- It is also strongly recommended not to share this path over NFS or SMB, because if the permissions get messed up the container won't start anymore, and to avoid data loss in the container(s)!
- Never run New Permissions from the Unraid Tools menu on this directory, because you will basically destroy your container(s)!

Now you can see the newly created directory in your Shares tab in Unraid. If you are using a real path (which is strongly recommended), whether it's on the Cache or the Array, it should be fine to leave the Use Cache setting at No, because the Mover won't touch this directory if it's set to No.

Now you will see LXC appearing in Unraid; click on it to navigate to it. Click on "Add Container" to add a container. On the next page you can specify the Container Name, the Distribution, Release, MAC Address and whether Autostart should be enabled for the container, then click on "Create". You can get a full list of Distributions and Releases to choose from here. The MAC Address is generated randomly every time; you can change it if you need a specific one. The Autostart checkbox lets you choose whether the container should start up when the Array or LXC service is started (this can be changed later).

In the next popup you will see information about the installation status of the container (don't close this window until you see the "Done" button). After clicking on "Done" and "Done" in the previous window you will be greeted with this screen on the LXC page; to start the container click on "Start". If you want to disable Autostart for the container click on "Disable" and the button will change to "Enable"; click on "Enable" to enable it again.
After starting the container you will see some information about the container itself (assigned CPUs, memory usage, IP address). By clicking on the container name you will get the storage location of this container's configuration file and the config file contents itself. For further information on the configuration file see here.

Now you can attach to the started container by clicking the Terminal symbol in the top right corner of Unraid and typing lxc-attach CONTAINERNAME /bin/bash (in this case lxc-attach DebianLXC /bin/bash). You can of course also connect to the container without /bin/bash, but it is always recommended to connect with the shell that you prefer. You will now see that the terminal hostname changed to the container's name; this means you are successfully attached to the container's shell and the container is ready to use.

I recommend always updating the packages first; for a Debian-based container run apt-get update && apt-get upgrade. Please keep in mind that this container is pretty much empty and nothing other than the basic tools is installed, so you have to install nano, vi, openssh-server, etc. yourself. To install the SSH server (for Debian-based containers) see the second post. A short summary of these commands is sketched below.
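For reference, a minimal sketch of the attach-and-update steps from above, assuming a Debian-based container named DebianLXC (substitute your own container name):

lxc-attach DebianLXC /bin/bash            # attach to the container's shell from the Unraid terminal
apt-get update && apt-get upgrade         # refresh the package lists and upgrade the preinstalled packages
apt-get -y install nano openssh-server    # example: install the extra tools you need (the container ships nearly empty)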
    5 points
  2. Install SSH Server in Debian based containers:

Method 1 (recommended): Attach to the container with "lxc-attach DebianLXC /bin/bash" (replace DebianLXC with your container name). I would first recommend that you set a password for the user root; to do so enter "passwd" and type your preferred root password twice (nothing is displayed while typing). Now create a user with the command "useradd -m debian -s /bin/bash" (in this case the newly created username is "debian"). In the next step, create a password for the user "debian" with the command "passwd debian" (replace "debian" with your preferred username) and type in the password twice, like above for the root user. Now install the openssh-server with "apt-get -y install openssh-server". After it has installed successfully you can close the terminal window of the LXC container and connect to the container via Putty or your preferred SSH client, using the container's IP, the username "debian" and the password set for the user "debian" (in this example we connect from a Linux shell with the command "ssh debian@CONTAINER-IP"; you can see the IP address in the LXC tab in Unraid). You are now connected through SSH to your LXC container as the user "debian". A condensed command list for this method is sketched below.

Method 2 (not recommended - root connection): Attach to the container with "lxc-attach DebianLXC /bin/bash" (replace DebianLXC with your container name). I would first recommend that you set a password for the user root; to do so enter "passwd" and type your preferred root password twice (nothing is displayed while typing). Now install the openssh-server with "apt-get -y install openssh-server". Then issue the command "sed -i "/#PermitRootLogin prohibit-password/c\PermitRootLogin yes" /etc/ssh/sshd_config" (this changes your SSH configuration file so that you can log in with the root account through SSH). Restart the sshd service with the command "systemctl restart sshd" to apply the new settings. After that you can close the terminal window of the LXC container and connect to the container via Putty or your preferred SSH client, using the container's IP, the username "root" and the password set for the "root" user (in this example we connect from a Linux shell with the command "ssh root@CONTAINER-IP"; you can see the IP address in the LXC tab in Unraid). You are now connected through SSH to your LXC container as the user "root".
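For reference, Method 1 condensed into a single command list, assuming the container is named DebianLXC and the new user is called "debian" (substitute your own names and the IP shown in the LXC tab):

lxc-attach DebianLXC /bin/bash        # attach to the container
passwd                                # set a password for root
useradd -m debian -s /bin/bash        # create the user "debian" with a home directory and bash as shell
passwd debian                         # set a password for the new user
apt-get -y install openssh-server     # install the SSH server
exit                                  # leave the container shell
ssh debian@CONTAINER-IP               # then connect from another machine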
    3 points
  3. It's not yet clear which activities trigger it; usually issues start after a few hours of normal use. The docker image, or any other btrfs filesystem, is usually the first to go, since they are very susceptible to memory corruption issues. If the pools are btrfs, run a scrub; if the array is xfs, run a parity check, though if errors are detected you can basically only correct them unless you have pre-existing checksums. Also note that while some corruption is possible it's not certain: in at least one case btrfs detected some corruption, but after disabling VT-d and running a scrub no more corruption was found.
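For reference, a minimal sketch of the btrfs check mentioned above, assuming a btrfs pool mounted at /mnt/cache (adjust the mount point to your own pool):

btrfs scrub start /mnt/cache      # start a scrub of the pool in the background
btrfs scrub status /mnt/cache     # check progress and whether any checksum errors were found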
    2 points
  4. Updated my server from 6.8.3 to 6.10.1. No issues that weren't of my own making. I had an old version of PuTTY installed on the computer I was accessing the server from, so SSH would not connect because of the cipher selection; a quick update of PuTTY resolved that. I stopped Docker to switch it to IPVLAN. The dockers would not start with some old parameters still around from before; a quick adjustment to the extra parameters to remove the "--mac-address=" resolved that. I then updated all plugins to get versions tailored for the 6.10.x series and had NERD Utils update the packages. I can't think of any additional steps I need to do at this time. Will update further should anything change.
    2 points
  5. Does Unraid support Optane DCPMM (DC Persistent Memory Modules)?
    1 point
  6. The T400 for Docker, the 1050 Ti for use with VFIO. And, as far as I can tell, this did the trick. BIG THANKS TO YOU!
    1 point
  7. Right. It took a while but this was the culprit. I was on 6.9.2 and I saw that 6.10.1 was available. After updating (this took a long time) the game ran fine. Thanks for the troubleshooting @ich777!
    1 point
  8. Follow this post. It is also pinned at the top of the page.
    1 point
  9. It's definitely enough. I'm thrilled to have even just the Debian lxc working! Thank you so much.
    1 point
  10. I was able to pass it through and find it in /dev, but for some reason I can't utilize the device. Messed around with permissions; I probably have to alter udev rules or something. I'll have to play around with it later this evening. Also, when attempting to start Fedora in the foreground I get this:
lxc-start -F Fedora
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems, freezing.
Freezing execution.
Great job though. I can see this going far!
    1 point
  11. Hi, it really could have been Fastboot. I disabled it and rebooted 30 times; no problems. Before that, it was over after the third reboot at the latest.
    1 point
  12. Already changed the first post. I haven't been able to test everything so far because there are a lot of images... I can so far confirm that Home Assistant Core is working fine and Docker is working fine (with containers that don't need privileged rights). I've also made a container that uses noVNC in conjunction with TurboVNC to get a desktop environment through a browser (xrdp should also work with a few tweaks, from what I know).
    1 point
  13. Still on the 0th pass of the memory test, 134 errors 18 minutes into it. Well, crap. Ordered new memory, going to pick it up this afternoon. Thanks for the advice. I've tried replacing the data cable on #2, so I guess I'll try to see if I can find another power cable to use.
    1 point
  14. Thanks, I've updated the docs. Basically, if the certificate is provisioned in 6.10 it will use the myunraid.net domain; if it was provisioned in an earlier version of Unraid it will use the unraid.net domain.
    1 point
  15. We're working on an update to address this
    1 point
  16. Long story short, the parity appears to have gone back to normal. I tried so many different things, I have no idea what fixed it.
- Updated BIOS
- Updated to Unraid 6.10.1
- Tried one of the drives; it originally did not work. (Now that it's working, I did try the other drive instead.) So that might be it?
- Tried to boot UEFI instead of legacy (literally hours of trying but no luck)
- At some point, I wasn't even able to boot with legacy due to the AER issue.
I added the following to the syslinux: nvme_core.default_ps_max_latency_us=5500 pci=nommconf (I already had pcie_aspm=off). I am still seeing the "Hardware error from APEI Generic Hardware Error" but not as much. The only major issue I am seeing for now is that one of the 2.5 inch drives on one of the ZFS pools is failing. Waiting for the replacement. prdnas002-diagnostics-20220524-1235.zip
    1 point
  17. I installed a new PSU. It's been running without a problem for two weeks. Thanks for the help.
    1 point
  18. Why is backup the biggest shortcoming? There are countless ways to do backups; granted, you're left with the agony of choice. Depending on how much data has to be backed up and how often, a copy to USB disk(s) can be perfectly sufficient. With QNAP and others you also have to add another device for a proper backup.
    1 point
  19. Maybe I was still on a previous version. I only just updated to 6.10 because I happen to have a MicroServer Gen8, which originally wasn't recommended due to a bug. On 6.10, I went back to LibreELEC (I found some forum posts saying that LibreELEC supports this on kernel 5.15 and later), and it worked without problems.
    1 point
  20. So, for reference for anyone else who happens by: I disabled my 2.5Gbps onboard LAN and replaced it with an older Intel 4-port LAN card. Unraid still freezes intermittently, but it doesn't take my router down too. Small wins, I suppose. Next steps: I realized I was still running Intel GPU Top (though *not* passing through anything to Plex; Plex has no /dev/dri folder). Because the 12400's GPU is a known source of crashes, I removed that as well. This wasn't a problem pre-6.10, but daily crashes ever since the upgrade have been frustrating. If it still crashes, I'll shut down Docker as well and run it as a pure NAS for a while and see.
    1 point
  21. For anyone else wondering, it was the missing boot argument "agdpmod=pikera" that fixed my problem. And a HUGE thanks to ghost82 for getting me there!👍 Harald
    1 point
  22. Hmmmm, this is really strange, but I think an iPad from 2018 should be able to handle the format... This is really the last thing that I would do, but maybe try a factory reset of the iPad and see if that helps.
    1 point
  23. Hi, the same issue in Safari... it's not working... The server and the other clients work fine... maybe I have to order a new iPad.. 😕 Thanks for your great support 👍👍✌️
    1 point
  24. It allows us to see the hardware used, and sometimes there are some known issues; in your case start here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
    1 point
  25. I successfully updated from 6.9.2 to 6.10.1. I was initially concerned when looking at the Main tab, seeing all my drives mounting one by one with 0TB, but after the array finished starting up, everything is fine. So don't be worried if you notice this! I really like the new dashboard graphs and the schedulable btrfs features. Very exciting!
    1 point
  26. Just wanted to say thanks, because this solved an issue I had after I upgraded to Unraid 6.10.1. Not sure why the ownership changed to root for everything in /mnt/user/appdata/binhex-sonarr. I just chowned everything to nobody:users recursively and then went in and changed the supervisord.log files back to root:root. Sonarr started right up after that. Thanks again!
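For anyone who wants the exact commands, roughly what is described above, assuming the binhex-sonarr appdata path from the post and that the supervisord.log files sit directly in that folder:

chown -R nobody:users /mnt/user/appdata/binhex-sonarr                # recursively restore nobody:users ownership on the appdata folder
chown root:root /mnt/user/appdata/binhex-sonarr/supervisord.log*     # set the supervisord.log files back to root:root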
    1 point
  27. attach diagnostics to your NEXT post in this thread
    1 point
  28. I was able to delete one without any problem. Something about what you are doing must be pasting it twice. If you edit the post, you can see the attachments listed separately and delete them.
    1 point
  29. Unrelated, but your appdata, domains, system shares have files on the array. Docker/VM performance will be affected with these on the array, and array disks can't spindown since these files are always open.
    1 point
  30. The syslog you attached is the same as the one already included in the diagnostics. It contains no information about what happened before the reboot. Set up a syslog server so you can get syslogs after a crash to post. Your cache has corruption of the sort that might indicate memory issues, but you said memtest passed. I suspect some hardware problem is to blame though.
    1 point
  31. But keep in mind Slackware, and by extension Unraid, doesn't do automatic package version management, so it's up to you to keep on top of versioning and conflicts. You can easily cripple your server if you aren't careful. If you have issues with your server and want help troubleshooting, you will need to remove or rename all the modifications you have made before we can help.
    1 point
  32. Pre-fail is just the type of some SMART attributes; also, disk4 looks healthy, so it's most likely not a disk problem.
    1 point
  33. Hi all, any chance of a mobile-friendly UI being implemented? The current experience on mobile is terrible.
    1 point
  34. Don't forget to right click on the container -> console and run `yarn database:setup` after the rest of the setup. Thanks for the guide, looking forward to checking this out!
    1 point
  35. Today I planned to switch from the cracked version to a licensed user. I backed up my USB drive and then upgraded directly within the system to the latest 6.10.1. The upgrade went smoothly, and after upgrading I bought a key and activated it successfully. But after activation, when I started the array, Docker was stuck loading indefinitely; I waited 3 minutes and it never came up, and the VMs page also showed empty. Is it not possible to upgrade directly while keeping the original data? Because my internet goes through an OpenWrt VM, once the Unraid VMs were lost I had no network at all, and redeploying does take some time, so for now I rolled back to the cracked 6.8.2 to post here for help. Please help me out, everyone. Solved: the VMs were indeed lost, but the image files were all still there; I just added them again and it only took a few minutes. Docker wasn't lost either; it kept failing to load apparently because of the network, since my network goes through the OpenWrt VM. Before the VM was restored there was no network, and after restoring the VM and starting OpenWrt, Docker loaded normally! I've finally become a legitimate licensed user!
    1 point
  36. @OFark @bergi9 @spychodelics A new version is available that fixes the terminal style output for 6.10.0. A big thanks to @bergi9 for helping me track down the issue.
    1 point
  37. Most of the time users support other users on this forum. This is actually one of the greatest strengths of Unraid. Looking at your post history, it seems you have already gotten some help. Your license pays for lifetime upgrades to Unraid. Free support is on the forum from your fellow Unraid users. Paid support can be negotiated.
    1 point
  38. You need agdpmod=pikera in your boot-args, in the OpenCore config.plist.
    1 point
  39. The number of brave people without a password for the root account is scary.
    1 point
  40. Or you want to keep an absent-minded moment from removing the wrong drive, or have kids, etc. It's a very good idea to keep hot-swap drives locked; it's not like they are going to be moved that often, and if they are, because you are using one of the bays for backups that you pull, that's even more reason to lock the rest of the array drives in place. I'd also consider putting a piece of tape on each bay's drive slider with the last 4 characters of the current drive's serial on it. Makes life less stressful when a drive fails.
    1 point
  41. Very nice work guys. When are we going to be able to ditch using a USB flash drive for Unraid and move over to something more reliable as a boot device? It also makes it a royal pain to virtualise Unraid (I know this isn't technically supported).
    1 point
  42. It might sound a little weird, but why do you have to use Google everywhere? Google here, Google there; it doesn't have to be that way. Maybe in a new version you could simply replace the Google time servers with "neutral" time servers.
    1 point
  43. Two weeks and the system hasn't crashed. So I'm going to assume that removing all IPs from br0 and moving them to "bridge" has stabilized the server. Next I'll re-IP all my dockers back to br0 but adding a VLAN as described above. I hope Lime Tech addresses this issue in a future release so that this workaround isn't necessary.
    1 point
  44. See if this applies to you: https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/ See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
    1 point
  45. I had bitrot issues with Unraid using ECC (Supermicro server setup). I've since moved to ZFS and get bitrot/corruption issues reasonably regularly due to old disks (same hardware). I feel that you really need bitrot detection in the FS... I'd like to move back to Unraid if they can crack the online bitrot detection and recovery issue.
    1 point
  46. Edit the VM and change from form view to XML view, then remove the section that references the device from the XML.
    1 point
  47. Create a user, and use that user to connect to the share. The root account is not allowed to connect to shares over the network, unless the share is in Guest mode.
    1 point
  48. Well, this was easier than I thought. This is a pretty basic script that runs the 'tree' command against each of your array disks, creating a txt of their contents. My primary purpose for this is to serve as an archive of my array contents in case I have a catastrophic failure, then I can easily discern what I'm missing and start rebuilding from backups/etc. Default save directory goes to your flash drive in a subfolder called indexTree, with subfolders named for each disk containing a date stamped txt file.

#!/bin/sh
# Tree Index Array
#description=Creates an inventory tree of all mounted disks (not cache)
#arrayStarted=true

SAVEPATH=/boot/config/indexTree

for f in /mnt/disk*
do
    echo "Scanning tree for $f"
    mkdir -p "$SAVEPATH/$(basename "$f")"
    tree -o "$SAVEPATH/$(basename "$f")/$(date +%Y%m%d).txt" "$f"
done

echo "indexTree complete!"
exit 0

tree_index_array_drives.zip
    1 point