Everything posted by bucky2076

  1. good luck with it. Bear in mind, I got very uneven results until I put an HDMI dongle on the graphics card's HDMI port; after that it's been smooth sailing. It also does not hurt to dump the bios to ensure the card is working with its original bios. I could never get a consistent result trying to use one of the bios downloads from the tech sites. Good luck with it. /ben
  2. you can go back quite easily to the previous version, as I did. If you read above you will see there is a workaround due to a breaking change.
  3. ok, I got it working. Took weeks of effort and it has been a rough journey. If it helps someone else, here is what seemed to work for me.
     My environment:
     - Swapped over to the Unraid 6.9 rc2. Seems quite stable now, with nice handling of vfio.
     - Onboard video (Intel) used for Unraid (set as primary in the bios).
     - MSI GTX 1050 Ti OC. Nice card, but very finicky.
     - Tried Manjaro KDE/Gnome, Win10, PopOS, Ubuntu Gnome; either got a black screen or LLVMpipe for video.
     - Tried a bunch of vbios files available from techpowerup.
     - Tried dumping my own bios from Unraid as per spaceinvader.
     Solution:
     - I downloaded Hiren's BootCD, took out the Unraid usb key, and started the server with Hiren's. It comes up in Windows PE.
     - Tweak your bios to ensure the GTX 1050 is set as primary, and attach a monitor too.
     - Run GPU-Z (included in Hiren's) and dump the vbios.
     - Strip out the header of the vbios using a python program or hex editor, as per spaceinvader.
     - Attach a physical monitor to the GTX HDMI, and fire up your VM from another machine (desktop). I use NoMachine as remoting software, so I remote to the VM. Worked great on ALL of the above VMs.
     Todo: substitute an HDMI dummy plug for the physical monitor.
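     For anyone curious what the header-stripping step amounts to, here is a minimal sketch (my own illustration, not spaceinvader's actual script). It assumes the usual NVIDIA layout: the GPU-Z dump carries a vendor header before the option ROM proper, and the real ROM begins at the 0x55AA signature found just before the ASCII string "VIDEO".

     ```python
     import sys

     def strip_header(rom: bytes) -> bytes:
         """Return the ROM starting at the 0x55AA signature nearest before 'VIDEO'."""
         marker = rom.find(b"VIDEO")
         if marker == -1:
             raise ValueError("no 'VIDEO' string; probably not a typical NVIDIA dump")
         start = rom.rfind(b"\x55\xaa", 0, marker)
         if start == -1:
             raise ValueError("no 0x55AA ROM signature before 'VIDEO'")
         return rom[start:]

     # usage: python3 strip_vbios.py dumped.rom stripped.rom
     if __name__ == "__main__" and len(sys.argv) == 3:
         with open(sys.argv[1], "rb") as src, open(sys.argv[2], "wb") as dst:
             dst.write(strip_header(src.read()))
     ```

     Always sanity-check the output in a hex editor (it should now start with 55 AA) before handing it to the VM.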
  4. With respect to ports... it's good to be able to see in the template which ports are operational if you are a beginner; I found that useful. To be honest, I had a conflict with my unifi controller and had to massage one of the ports. For MQTT... I am actually running Home Assistant in a VM at the moment. The VM manages the HA software and an MQTT broker, which I find very convenient. That was the basis of my request, but another container seems appropriate. OpenHAB looked more mature to me than HA... I'm still at the beginning of the journey, so can afford to poke around the different offerings. Thanks for doing this, by the way.
  5. suggestions for openhab 3: if you are going to be making changes to this template, can I suggest a few enhancements?
     - take the "2" out of the mount file names
     - expose the used ports so they can be overridden (8080 conflicted with my unifi controller)
     - option to include an MQTT broker in the same container (stretch)
     just my 2 cents
  6. I just installed the most recent 6.9 RC. I am wrestling with Nvidia GPU passthrough. I tried to set my VM back to use VNC and then launched, but got the following error on the noVNC screen:
     noVNC encountered an error: SyntaxError: import not found: hasScrollbarGutter http://buckyu.local/plugins/dynamix.vm.manager/novnc/app/ui.js?ts=20200718:11:34
     This is a showstopper, and I'm not even sure which dynamix plugin this refers to. Rolling back to 6.8.3 in the meantime.
     update: this is a Firefox problem. Works fine in Chrome.
  7. yeah, me too... can't get PopOS to avoid the black screen when switching to the nvidia card.
  8. hi johnathan, your comment gave me reason to continue some investigation. After doing a bit more tinkering... The jitsi VM is an Ubuntu Server VM, located on the same Unraid server as the docker containers. Unraid networking was not able to route internally between the VM and the docker LE bridge network. That was a showstopper for me unfortunately, so I'm licking my wounds and retreating for the moment. As a temporary measure, I am using custom external ports for jitsi (81:444), and directing the router to forward directly to the VM without using LE as the gateway. many thanks, bucky2076
  9. I tinkered with this quite a bit, and have some observations worth mentioning. The use of Docker Compose/Portainer alongside the native Unraid docker handling (Dockerman) is certainly interesting, but fraught with problems in my opinion. These are competing technologies that do not work well together; it clutters things up and is responsible for ongoing errors and warnings. So you have a few options to consider:
     - Move all your dockers to compose/Portainer. Unraid support for compose is limited, though; if you want to maximise your use of Unraid, just live with it the way it is.
     - Move jitsi to a VM. Containers are for microservices, and are not meant for every use case out there. There are a couple of youtubes (Crosstalk Solutions) that take you through the steps. I would love to get this working while keeping the letsencrypt nginx proxy docker acting as a front end; would be cool.
     - Hybrid solution: create a VM, run docker in the VM, and then set up Portainer/lxc as an alternative to Dockerman. Run the jitsi dockers here, or migrate all your dockers. You are going through an extra layer of virtualisation, so I'm not sure how much this would drain performance.
     I ran up an instance of Ubuntu Server and am just going through the typical hardening process. Lots to learn, and fun doing it. Installing jitsi is a simple apt command once you get past the server hardening. I am playing with all these possibilities and honestly don't have a conclusion at this time; there are strengths and weaknesses to all options.
  10. fair enough. I've capitulated and bound my new license to a normal-quality usb stick. Giving up on the small UltraFit 3.1, as I don't want to invite trouble.
  11. i also bought the UltraFit 3.1, and was disappointed to see the automatic method would not work for this unit. I might try the manual install... is it documented?
  12. saarg, the basic Unraid template did not work for me, and I had to tear my hair out.
     - The DB_USER variable was incorrect; it needed to be DB_USERNAME.
     - Same issue for DB_PASS (needs to be DB_PASSWORD).
     - Passing IP/port together did not work for me in DB_HOST.
     /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d \
       --name='bookstack' --net='bridge' \
       -e TZ="Australia/Sydney" -e HOST_OS="Unraid" \
       -e 'DB_HOST'='' -e 'DB_USERNAME'='ben' -e 'DB_PASSWORD'='xyz' \
       -e 'DB_DATABASE'='bookstack' -e 'APP_URL'='' \
       -e 'PUID'='99' -e 'PGID'='100' -e 'DB_PORT'='3306' \
       -p '8083:80/tcp' \
       -v '/mnt/user/appdata/bookstack':'/config':'rw' \
       'linuxserver/bookstack'
     I've moved on now, experimenting with Traefik for reverse proxy.
  13. simply put, the template variables are misleading, causing some confusion. Also, the port assignment did not work when appending the port to the host name; I had to break it out into its own env var. The .env file provides the proper names for these variables.
  14. oh thanks, I did not realise that about Unraid docker... I have done some docker stuff on a normal linux host, and the docker networking there is quite sophisticated. You would simply use the name of the docker rather than its internal IP. You also have the opportunity to manage your own networking, so that you can split your backend from your foreground tasks across different network subnets (sql database versus web server hosts). I did obscure a few things in the above post for obvious reasons, name/pw and such. My server IP internally is actually..., so there you go. I am ok using the host IP since that doesn't change; the 172.xx.xx.xx IP can change each time the containers come up in a different order at startup. I did leave a message on the bookstack issues section (git) suggesting the parameters need some attention. Thanks again for the chat... hopefully this will help the next newbie... /bucky2076
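     To make the name-based networking concrete (a sketch for a generic docker host; the network names here are my own placeholders, not part of any template): user-defined bridge networks give containers DNS by name, which the default 'bridge' network does not, and separate networks let you split the tiers.

     ```shell
     # User-defined networks resolve container names; the default 'bridge' does not.
     docker network create backend
     docker network create frontend

     # Database sits only on the backend network.
     docker run -d --name mariadb --net backend \
       -e MYSQL_ROOT_PASSWORD=changeme linuxserver/mariadb

     # Web app joins both networks: it reaches the DB as DB_HOST=mariadb
     # (no 172.x IP needed) and serves HTTP on the frontend side.
     docker run -d --name bookstack --net backend \
       -e DB_HOST=mariadb -e DB_PORT=3306 -p 8083:80 linuxserver/bookstack
     docker network connect frontend bookstack
     ```

     How far the Unraid Dockerman UI exposes custom networks like this is a separate question; on a plain linux host this works as shown.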
  15. mariadb...
     /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d \
       --name='mariadb' --net='bridge' \
       -e TZ="Australia/Sydney" -e HOST_OS="Unraid" \
       -e 'PUID'='99' -e 'PGID'='100' \
       -e 'MYSQL_ROOT_PASSWORD'='XYZ' \
       -p '3306:3306/tcp' \
       -v '/mnt/user/appdata/mariadb':'/config':'rw' \
       'linuxserver/mariadb'
     bookstack...
     /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d \
       --name='bookstack' --net='bridge' \
       -e TZ="Australia/Sydney" -e HOST_OS="Unraid" \
       -e 'DB_HOST'='192.168.xx.254' -e 'DB_USERNAME'='abc' -e 'DB_PASSWORD'='XYZ' \
       -e 'DB_DATABASE'='bookstack' -e 'APP_URL'='' \
       -e 'PUID'='99' -e 'PGID'='100' -e 'DB_PORT'='3306' \
       -p '8083:80/tcp' \
       -v '/mnt/user/appdata/bookstack':'/config':'rw' \
       'linuxserver/bookstack'
     and finally .env...
     DB_HOST=
     DB_PORT=5506
     DB_DATABASE=stack
     DB_USERNAME=n
     DB_PASSWORD=me
     This config actually works, because of the following:
     - .env is ignored if overridden with params
     - DB_USER in the template replaced with DB_USERNAME
     - DB_PASS changed to DB_PASSWORD
     - DB_PORT was added
     Would be better if it accepted the host as the container name rather than the IP.
  16. sure... I'll do it tomorrow, as my system is rebuilding the array since I upgraded an hdd. More space to play with is always good.
  17. I tried this again... and it worked, after a fashion. I used my top-level IP instead of the container IP for the DB. Interestingly though, when I appended the port number to the IP, it barfed. I simply added a new variable for DB_PORT and it liked that better, despite what the documentation said. Not very container-friendly at this stage... but it worked!
  18. it's very finicky, and yes, I tried setting it to the host IP, but it did not work. A mystery! Another thought is that whatever env variables you set here should override the .env file. I will try again... and report back.
  19. when configuring this container, there seems to be a problem in how the db host parameter is managed. I have both mariadb and bookstack in separate containers, bridged. When I launch bookstack, I use either mariadb or linuxserver/mariadb as the database address; unfortunately that does not work. When I put in the actual IP address given by docker (172.xx.xx.xx), it works fine. Unfortunately this is not supportable, as these numbers will change when bringing the server up and down. How do you configure the dbhost parameter in the bookstack template? I'm curious.
  20. this is working again; the web gui is able to view the activity of a minimal VM. The solution was drastic, I'm afraid:
     - killed all VM images, and libvirt.img
     - downgraded to the latest stable (from beta)
     - used only a single cpu, as per the default selection in VM creation
     - killed the domains share, and created a new version of it called something else
     - updated the VM manager to use the new share
     - safe boot
     I am going to lay off VMs for a while and concentrate on docker, which looks cleaner imo. One of these things worked... a difficult problem, to be sure.
  21. still trying to nut this out. Could it be a permissions issue? Something else?
  22. could it be something to do with a tainted kernel?
     -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \
     -device virtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 \
     -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
     -msg timestamp=on
     2020-05-04 22:39:21.581+0000: Domain id=2 is tainted: high-privileges
     2020-05-04 22:39:21.581+0000: Domain id=2 is tainted: host-cpu
     char device redirected to /dev/pts/1 (label charserial0)
     How do I recover? noVNC goes blank shortly after starting the VM creation process. Maybe do a complete new install and copy over the key; that might clear this up, but it is a drastic measure.
  23. thanks for the suggestion, but it did not work. I disabled docker and booted in safe mode. Tried to do a minimal install of fedora, and alas, the screen goes black after responding to the boot menu; the kde logo comes on for about 10 seconds before it goes blank. I am running virt-manager on my local pc to get experience with the libvirt technology. I'm going to leave VMs alone on the server at this time... so I'm interested in returning to the default state, to at least be able to install a linux guest even if the display is stuck at low res. Many thanks...
  24. this plugin is wreaking havoc with my webui. When I try to press a start/stop menu item in the UI, nothing happens; but when I disabled the plugin, it worked fine. The log shows an exception while loading. I am on the beta version, as I'm testing things out before I buy. I am not really using the plugin at this time, but it looked interesting so I installed it. Thought you would want to know. Snip follows:
     May 3 16:45:06 buckyu root: Installing package docker.folder-2020.04.30-x86_64-1.txz:
     May 3 16:45:06 buckyu root: PACKAGE DESCRIPTION:
     May 3 16:45:06 buckyu root: Package docker.folder-2020.04.30-x86_64-1.txz installed.
     May 3 16:45:06 buckyu root: plugin: running: anonymous
     May 3 16:45:06 buckyu root: folders.json already exists
     May 3 16:45:06 buckyu root: Fatal error: Uncaught ArgumentCountError: Too few arguments to function finish(), 2 passed in /usr/local/emhttp/plugins/docker.folder/scripts/migration.php on line 22 and exactly 4 expected in /usr/local/emhttp/plugins/docker.folder/scripts/migration.php:42
     May 3 16:45:06 buckyu root: Stack trace:
     May 3 16:45:06 buckyu root: #0 /usr/local/emhttp/plugins/docker.folder/scripts/migration.php(22): finish('/boot/config/pl...', Array)
     May 3 16:45:06 buckyu root: #1 /usr/local/emhttp/plugins/docker.folder/scripts/migration.php(12): init('/boot/config/pl...', '{\n "foldersV...', '{\n "foldersV...', false)
     May 3 16:45:06 buckyu root: #2 {main}
     May 3 16:45:06 buckyu root: thrown in /usr/local/emhttp/plugins/docker.folder/scripts/migration.php on line 42
     May 3 16:45:06 buckyu root:
     May 3 16:45:06 buckyu root: ----------------------------------------------------
     May 3 16:45:06 buckyu root: docker.folder has been installed.
     May 3 16:45:06 buckyu root: Version: 2020.04.30
     May 3 16