Leaderboard

Popular Content

Showing content with the highest reputation on 06/24/22 in all areas

  1. LXC (Unraid 6.10.0+)

LXC is a well-known Linux container runtime that consists of tools, templates, and library and language bindings. It's pretty low-level, very flexible, and covers just about every containment feature supported by the upstream kernel. This plugin does not include the LXD-provided CLI tool lxc! It basically lets you run an isolated system with shared resources at the CLI level (without a GUI) on Unraid, which can be deployed in a matter of seconds and destroyed just as quickly. Please keep in mind that you have to set everything up manually after deploying the container, e.g. SSH access or a dedicated user account other than root.

ATTENTION: This plugin is currently in development and features will be added over time.

cgroup v2 (ONLY NECESSARY if you are below Unraid version 6.12.0): Distributions which use systemd (Ubuntu, Debian Bookworm+, ...) will not work unless you enable cgroup v2. To enable cgroup v2, append the following to your syslinux.conf and reboot afterwards: unraidcgroup2 (Unraid supports cgroup v2 since version 6.11.0-rc4).

Install LXC from the CA App. Go to the Settings tab in Unraid and click on "LXC". Enable the LXC service, select the default storage path for your images (this path will be created if it doesn't exist and it always needs a trailing /) and click on "Update".

ATTENTION:
- It is strongly recommended to use a real path like "/mnt/cache/lxc/" or "/mnt/diskX/lxc/" instead of a FUSE path like "/mnt/user/lxc/", to avoid slowing down the entire system when performing heavy I/O operations in the container(s) and to avoid issues when the Mover tries to move data from a container which is currently running.
- It is also strongly recommended not to share this path over NFS or SMB, because if the permissions get messed up the container won't start anymore, and to avoid data loss in the container(s)!
- Never run New Permissions from the Unraid Tools menu on this directory, because you will basically destroy your container(s)!

Now you can see the newly created directory in your Shares tab in Unraid. If you are using a real path (which is strongly recommended), whether it's on the Cache or the Array, it is fine to leave the Use Cache setting at No, because the Mover won't touch this directory if it's set to No.

Now you will see LXC appearing in Unraid; click on it to navigate to it. Click on "Add Container" to add a container. On the next page you can specify the Container Name, the Distribution, Release, MAC Address and whether Autostart should be enabled for the container, then click on "Create". You can get a full list of Distributions and Releases to choose from here. The MAC Address will be generated randomly every time; you can change it if you need a specific one. The Autostart checkbox lets you choose whether the container should start up when the Array or LXC service is started (this can be changed later).

In the next popup you will see information about the installation status of the container (don't close this window until you see the "Done" button). After clicking on "Done" and "Done" in the previous window you will be greeted with this screen on the LXC page; to start the container click on "Start". If you want to disable Autostart for the container, click on "Disable" and the button will change to "Enable"; click on "Enable" to enable it again.
After starting the container you will see various information about the container itself (assigned CPUs, memory usage, IP address). By clicking on the container name you will get the storage location of the configuration file for this container and the config file contents themselves. For further information on the configuration file see here.

Now you can attach to the started container by clicking the Terminal symbol in the top right corner of Unraid and typing lxc-attach CONTAINERNAME /bin/bash (in this case lxc-attach DebianLXC /bin/bash). You can of course also connect to the container without /bin/bash, but it is always recommended to connect to the shell that you prefer. You will see that the terminal changes the hostname to the container's name; this means you are now successfully attached to the shell of the container and the container is ready to use.

I recommend always updating the packages first; for a Debian-based container run apt-get update && apt-get upgrade. Please keep in mind that this container is pretty much empty and nothing other than the basic tools is installed, so you have to install nano, vi, openssh-server, etc. yourself. To install the SSH server (for Debian-based containers) see the second post. A rough sketch of these first steps is shown below.
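As a rough sketch only (using the DebianLXC example name from above; openssh-server is just one example of a package you might add - adjust to your own container and needs):

# Attach to the container's shell from the Unraid terminal
lxc-attach DebianLXC /bin/bash

# Inside the container: refresh package lists and upgrade (Debian-based)
apt-get update && apt-get upgrade

# Optionally install a few basics the template does not ship with
apt-get install nano openssh-server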
    1 point
  2. I have cloned and updated the linuxserver.io beta ownCloud docker. I have been using it for over a year and it works very well. I know that NextCloud is the in thing, but I am committed to ownCloud in my business and have over 200GB of media, calendars, contacts, and tasks on ownCloud. I do not want to invest the time and take the risk of trying to move to NextCloud.

The thing I really like about this docker is that mariadb is installed in the docker, so a separate docker for the database is not required. I also found other implementations of ownCloud dockers lacking when it comes to updating ownCloud. ownCloud cannot be downgraded, so one has to be careful to always go forward and never backwards. In other implementations, if a person manually updated ownCloud and then had to reinstall with the original docker, the manual upgrade got written over. This docker prevents that situation because the ownCloud version is persistent in the appdata/owncloud folder.

Anyway, the docker is available to install from CA. If you have the linuxserver.io docker installed, this is a drop-in replacement for that docker. Be sure to back up your data before installing this docker. To replace the linuxserver.io ownCloud docker, remove that docker and then install this new docker from CA. Be sure to re-apply any custom settings you made to the original docker template.

Installing ownCloud from scratch will install the latest version (currently 10.5.0, called ownCloud X). If you've already installed the docker, your current ownCloud version will not be changed. To install the docker from scratch: install the docker and then go to the WebUI. Enter an administration user and password. Change the data folder to /data. Because the database is built into the container, the database host is localhost. The database user and the database itself are both 'owncloud'. If you do not change the default DB_PASS variable, the default database password is 'owncloud'. Once in the ownCloud WebUI, go to 'Settings->General' and click the 'Cron' method for the cron.php task; a cron to perform this is built into the docker. If you use your own certificate keys, name them cert.key and cert.crt and place them in the config/keys folder (see the sketch below).

ownCloud can be updated from the WebUI, but it requires a certificate that is not self-signed and some other requirements that will be difficult for a self-hosted server. I will post some manual update instructions so ownCloud can be updated and stay persistent. I will be working on updates that can be done by updating the docker, but I have to put some time into how to do that without breaking things.

I recommend you install some security apps for better security: OAuth2 - This app is for remote access to the ownCloud server and uses tokens rather than passwords to log into the server. Passwords are not stored locally by any clients or third-party apps. Brute-Force Protection - Offers brute force login protection. Password Policy - Allows you to set password complexity rules.
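For illustration, a minimal sketch of copying your own certificate into that keys folder; the appdata path and the source filenames below are hypothetical and only meant to show the expected cert.key/cert.crt naming - use wherever this docker's /config is actually mapped on your server:

# Hypothetical appdata path - adjust to your own /config mapping
cp /path/to/my-server.key /mnt/user/appdata/owncloud/keys/cert.key
cp /path/to/my-server.crt /mnt/user/appdata/owncloud/keys/cert.crt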
    1 point
  3. Hello everyone, for all our loyal users: get 30% off Unraid Pro upgrades until the end of July! ⤵️
    1 point
  4. OMG you legend, it worked straight away after I either added "video=efifb:off" or used a q35+OVMF virtual machine. What a relief, onto the next problem. For future reference:

BIOS: VT-d, CSM, legacy, GPU primary
ACS Override = both & allow unsafe interrupts = enabled, separated IOMMU groups & bound to VFIO
Added "video=efifb:off" to the syslinux config (see the example below).
Created a Win10 q35+OVMF virtual machine with VNC & Hyper-V set to No.
Installed the "viostor" drivers during OS setup, updated Windows, installed "virtio-win-gt" and confirmed the Remote Desktop connection.
Downloaded my own video ROM dumped from the GPU using a SpaceInvaderOne script.
Added the GPU.
Then applied multifunction='on' after the vbios line in the XML and changed all functions 0, 1, 2 & 3 to the same PCI slot:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/isos/vbios/gtx1060-6gb.rom'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
</hostdev>
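For reference, adding video=efifb:off means editing the append line of the boot entry in syslinux.cfg; the snippet below is only a typical example and the rest of the line may look different on your system:

label Unraid OS
  kernel /bzimage
  append video=efifb:off initrd=/bzroot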
    1 point
  5. Apologies, I didn't realize that.
    1 point
  6. The above still applies - within the same share, the file stays on the same disk. Across different shares, the destination will follow the Use Cache rules of the destination share. Using a rootshare, the file stays on the same disk.
    1 point
  7. So for the past 4 days I have been trying to figure out why rtorrent (running on my VM) would not start downloading to my Unraid share (Ubuntu ISOs, btw) and I found the solution. It turns out all you have to do is modify your fstab line to include cache=mmap and reboot. Here is an example fstab entry with the correct fix:

sharename /mnt/placetomntinguestvm 9p msize=262144,trans=virtio,version=9p2000.L,cache=mmap,rw 0 0
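If you want to try the option by hand before editing fstab, an equivalent manual mount (using the same example share and mount point names as above) should presumably look like this:

# Manually mount the 9p share with cache=mmap inside the guest VM
mount -t 9p -o trans=virtio,version=9p2000.L,msize=262144,cache=mmap,rw sharename /mnt/placetomntinguestvm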
    1 point
  8. Cache device dropped offline:

Jun 23 07:43:03 ForwardUntoDawn kernel: ata2.00: failed to IDENTIFY (I/O error, err_mask=0x100)
Jun 23 07:43:03 ForwardUntoDawn kernel: ata2.00: revalidation failed (errno=-5)
Jun 23 07:43:09 ForwardUntoDawn kernel: ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
Jun 23 07:43:09 ForwardUntoDawn kernel: ata2.00: failed to IDENTIFY (I/O error, err_mask=0x100)
Jun 23 07:43:09 ForwardUntoDawn kernel: ata2.00: revalidation failed (errno=-5)
Jun 23 07:43:09 ForwardUntoDawn kernel: ata2.00: disabled

Then it came back with another identifier. I suggest you shut down, check/replace the cables, and power back up; the device should then be correctly assigned. If it is, just start the array. If not, start it without that device assigned, then stop, re-assign it and start again.
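As a quick, generic sanity check (not specific to this case), you could list the block-device identifiers the system currently sees to confirm which ID the cache drive came back under:

# List block devices by their persistent (model/serial based) identifiers
ls -l /dev/disk/by-id/ | grep -v part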
    1 point
  9. Yes, via Dynamix File Manager (or alternatively Krusader). If you have a parity drive, then under Settings -> Disk Settings turn on Write Method: Reconstruct Write.
    1 point
  10. 1 point
  11. That's from mcelog, not an issue. The screenshots are showing some ATA issues: the first one shows issues with ATA1, and on the second one the link is down. Check all cabling, including power.
    1 point
  12. You need to disable spindown while running that type of test, as otherwise the spindown terminates the test - maybe that was the issue?
    1 point
  13. Thank you, for some reason I had missed that one! It works!
    1 point
  14. Hi, you need to:
1. set up the GPU as multifunction in the VM <-- to be done
2. it should be isolated (bound to vfio) <-- done
3. allow unsafe interrupts may be required in Unraid <-- done
4. newest drivers should be installed <-- cannot say anything on this
5. modification to the syslinux config may be required (e.g. video=efifb:off) <-- to be done
6. q35+OVMF should be preferred <-- give it a try
7. the video ROM should be dumped from your own GPU and not downloaded from somewhere <-- cannot say anything on this

Set up a q35+OVMF virtual machine with VNC, following all of the advice above, enable Remote Desktop inside the VM, shut down the VM, enable GPU passthrough, then boot and connect directly to the VM via Remote Desktop from a second external device to install the drivers; look at System Devices for errors if it doesn't work.
    1 point
  15. @Ich777 Yeah, that's my thought too. I have the unit recognised in a VM on Unraid running Win10, passed through the TV tuner, and it seems to pick it up; I will see if it picks up any channels... Thanks for your help. When I manage to save up some more coin I'll buy a PCIe card if I have a spare slot. Cheers, brodie
    1 point
  16. Not sure what you mean? The ‘root’ user has never been permitted to have private or secure access to network shares - and accessing a Public share makes the username irrelevant.
    1 point
  17. Thanks man! Awesome that you put a working solution here! I was looking for a solution on GitHub and other sites for hours, and had been suffering for months!
    1 point
  18. Awesome, thanks! Appreciate the quick reply.
    1 point
  19. @SpencerJ? It may be a few hours until you can get a response; meanwhile, I'd fill out and submit the form on this page: https://unraid.net/contact
    1 point
  20. Well, I had exactly the same problem as you: the latest versions of Plex are broken. They consume a bit more RAM every day while transcoding, and then after 4 or 5 days Plex "crashes" miserably and takes the whole server down with it - no more disk activity, no more network, no more SSH, nothing... forced to reboot.

There is a solution: downgrade Plex to the last version that works properly on Unraid, version 1.24.5.5173. I can walk you through it. Fair warning, the first time is fairly stressful, because when you downgrade, the library database gets restructured and for a good ten minutes (depending on the size of your library) your Plex server will not respond, will not show any message, and the Plex interface will display broken links... Above all, don't panic and do not stop or restart the container, at the risk of really breaking the Plex database; just wait, and everything will sort itself out within a few minutes.

Before the downgrade, a small tip: go into the Plex settings, "Troubleshooting", then run "CLEAN BUNDLES" and then "OPTIMIZE DATABASE". Once that's finished you can move on to downgrading Plex: go to the "Docker" section, click on the Plex container icon, then on "Edit"; you'll land on "Update Container" with all the container settings. To choose the right version, go to "Repository:" and enter:

linuxserver/plex:version-1.24.5.5173-8dcc73a59

Then click "APPLY" and "DONE", and be patient while the database is downgraded...

Once your Plex is on version 1.24.5.5173, avoid updates as long as the transcoding crash issue is not resolved, and above all don't rush: all the new Plex versions have been broken for more than 6 months. So it's better to stay on version 1.24.5.5173 until others have tested the newer ones first...
    1 point
  21. All of the "games" in Apps aren't games, but rather game servers (as the Category says). You're not downloading the game (which you have to buy on Steam), but rather a game server which you and your friends can connect to in order to play in your own world, separate from the rest of the players online (along with other things).
    1 point
  22. The log is being spammed with PCIe errors, so we can't see what happened; wait for the result of the long SMART test.
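If it helps, a long SMART self-test can also be started and reviewed from the terminal with smartctl; /dev/sdX below is a placeholder for the disk in question:

# Start an extended (long) SMART self-test on the disk
smartctl -t long /dev/sdX

# Later, review the self-test log and overall SMART data
smartctl -a /dev/sdX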
    1 point
  23. Handling of disabled disks is covered here in the online documentation, accessible via the ‘Manual’ link at the bottom of the GUI.
    1 point
  24. Solved - it appears there was an error when installing the My Servers plugin; deleting it and reinstalling fixed the problem.
    1 point
  25. Plex isn't running (nor is Tautulli). Until they are running, they effectively don't have an IP address.
    1 point
  26. WORKAROUND for Ubuntu, Debian Bookworm+, Fedora 36, ... containers that won't start. Add this line at the end of your container config:

lxc.init.cmd = /lib/systemd/systemd systemd.unified_cgroup_hierarchy=1

This will actually enable the containers to start. (See below for where the config file typically lives.)
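For orientation only: with the LXC plugin, each container's config normally sits under the storage path you chose in the plugin settings. The path and container name below are hypothetical examples - adjust both to your own setup:

# Hypothetical storage path /mnt/cache/lxc/ and container name DebianLXC
nano /mnt/cache/lxc/DebianLXC/config

# then append the workaround line at the very end of the file:
# lxc.init.cmd = /lib/systemd/systemd systemd.unified_cgroup_hierarchy=1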
    1 point
  27. Hello @aerodomigue and welcome to the forum. I'm far from being a Linux expert, but I skimmed through your logs and saw a few things that look odd to me:

Jun 22 00:14:45 Navet kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
Jun 22 00:14:45 Navet kernel: ACPI Error: Aborting method \_SB.PR01._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)

I'd say check whether a BIOS update is available for your board?

Jun 22 00:15:39 Navet avahi-daemon[7676]: Registering new address record for fe80::98c2:37ff:fed0:3a33 on veth615cd03.*.
Jun 22 00:15:39 Navet kernel: x86/PAT: freeipmi.plugin:14209 map pfn expected mapping type uncached-minus for [mem 0x9eb5b000-0x9eb5cfff], got write-back
Jun 22 00:15:39 Navet kernel: freeipmi.plugin[14209]: segfault at 0 ip 00007ff62ba8c698 sp 00007ffc2c9493a0 error 6 in libfreeipmi.so.17.2.3[7ff62ba39000+8d000]
Jun 22 00:15:39 Navet kernel: Code: 00 48 89 34 24 48 89 de 48 89 54 24 10 48 89 ef 48 89 4c 24 20 4c 89 44 24 28 64 48 8b 04 25 28 00 00 00 48 89 44 24 38 31 c0 <48> c7 01 00 00 00 00 e8 4c e8 fa ff 85 c0 0f 88 69 02 00 00 0f b6

I don't know what that could be, or even whether it's related to your issue, but it doesn't look great. Also, since the logs are stored in RAM like the rest of the OS, whatever happened before and during the crashes is lost on reboot. I'd advise you to enable a syslog server and include that file after the next crash/freeze.
    1 point
  28. Yes, pip install -r requirements.txt, maybe with or without a venv; if a venv, you may have to point the plugin's exec (in its yml) to the venv's python. I wrote the unraid section of this readme a while ago: https://github.com/com1234475/stash-plugin-performer-creator - you may need to follow that if python or something else is missing.

---------------

edit: Sorry, I didn't look at that repo before I wrote that; none of those use a venv. Just do this:

docker exec -it Stash sh
apk update
apk add git
cd ~/.stash/plugins
git clone https://github.com/niemands/StashPlugins.git
apk add python3-dev
python -m pip install requests
### add the other things to the above command if you want to use that yt-dl_downloader plugin
exit
    1 point
  29. Give 0 */6 * * * a go. It might not give you a definite start hour, but it might help identify the issue. I have custom schedules working, but I've never quite worked out which rules of cron apply.
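For reference, this is how that expression reads in standard five-field cron syntax (specific schedulers may interpret custom schedules slightly differently):

# minute  hour  day-of-month  month  day-of-week
  0       */6   *             *      *
# i.e. run at minute 0 of every 6th hour: 00:00, 06:00, 12:00 and 18:00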
    1 point
  30. The best way to handle this is with a User Script. Enter the following commands in a User Script set to run "At First Array Start Only":

# Wait for 10 minutes
sleep 600

# Auto mount all UD remote shares
/usr/local/emhttp/plugins/unassigned.devices/scripts/rc.unassigned mount autoshares

If the shares are already mounted, UD will not attempt to mount them again.
    1 point
  31. I was going to say check out my build in my signature, but I guess signatures are only visible when viewing the forum on desktop. Here's where I'm at currently:

Unraid Pro 6.10.0-rc1
Array: 10x 14TB Seagate Ironwolf Pro (2 parity)
Case: Fractal Define 7 XL
Motherboard: Asus Prime X570 Pro
CPU: Ryzen 9 5950x
Memory: Kingston 64GB 3200MHz DDR4 ECC
Cache: Samsung 980 Pro 2TB NVMe and 980 Pro 500GB NVMe
HBA Card: LSI 9211-8i (IT mode) (cooled with Noctua 40mm)
SAS Expander: Intel RES2CV240 (cooled with Noctua 40mm)
Network Card: Intel X550-T1 10Gbps
PSU: EVGA 850 GR
CPU Cooler: Corsair H115i RGB Pro XT
UPS: APC 650W
Hotswap Bay: Icy Dock flexiDOCK MB795SP-B
    1 point
  32. After tinkering with this for a while, it seems the solution is much simpler than I thought. For some reason, the only step required is to modify the config.php file. There is no need to install ffmpeg or even to enable the "Preview Generator" app inside Nextcloud. I'll leave what I did here in case it helps anyone looking to generate thumbnails for video files:

1. Go to your config.php file located at /mnt/user/appdata/nextcloud/www/nextcloud/config/config.php
2. Just before the end of your config file, add the following lines (after the 'installed' => true,):

  'installed' => true,
  'enable_previews' => true,
  'enabledPreviewProviders' => array (
    0 => 'OC\\Preview\\TXT',
    1 => 'OC\\Preview\\Image',
    2 => 'OC\\Preview\\MP3',
    3 => 'OC\\Preview\\Movie',
    4 => 'OC\\Preview\\MP4',
  ),
);

You can also add other file types that you require (mkv, avi, etc.)
    1 point
  33. If you have the same problem, then the solution is the same: uninstall atop.
    1 point
  34. Open Device Manager; you will see your unknown devices.
Right-click the unknown device and select "Update driver".
Select "Browse my computer for the driver software".
Click Browse and select the CD-ROM drive virtio-win-x.x, then click Next.
Windows will scan the entire device for the location of the best-suited driver. It should find a Red Hat network adapter driver; follow the prompts and you're in business.
** I never bothered to locate the actual subfolder of the driver on the virtio-win-1-1 image, I just let Windows do it for me. **
Hope this helps.
    1 point
  35. OK, I figured out a solution for me, just in case someone else needs a tip.

1st, make a backup of your vdisk file ... just in case.

In Windows 10 (1703+) there is a new tool, mbr2gpt. So I used PowerShell in admin mode:

mbr2gpt /validate /allowFullOS   <-- if OK, then
mbr2gpt /convert /disk:0 /allowFullOS

Now your Win10 VM disk is prepared for EFI boot; shut the VM down. Create a new Win10 VM with the same settings (except use OVMF instead of SeaBIOS), pointing to your existing (now converted) vdisk1.img (or whatever name it has). That was it ... enjoy. When all is good you can remove the backup of your vdisk.
    1 point