rorton's Achievements


  1. I'm getting this error about not being able to run the script, even though it still exists. Running it manually works fine, but the scheduling part of the plugin can't find it for some reason:

     Event: Unraid Status
     Subject: VM Backup plugin
     Description: cannot run /boot/config/plugins/vmbackup/configs/Weekly_Backups/
     Importance: warning
     2022-05-22 04:40 User script file does not exist. Exiting.
  2. Thanks. Will try it. Do you have number of backups and number of days both set?
  3. Hi, having issues with the plugin removing old backups. I have "number of backups to keep" set to 2 at the moment. I have a Windows 10 VM, and this works perfectly: the script runs, backs up, and then deletes any versions beyond 2, no problem (the file type is .img). I have 2 other VMs: LibreNMS, which is also a .img file, and Home Assistant, which is a qcow2 file. When the backup runs, it deletes all the config-related files for any backup over 2, but doesn't actually remove the disk image, so I'm left with manually having to clear out just the disk images. I've pasted the log from the last backup job and highlighted this:

     2022-05-08 06:05:47 information: backup of /mnt/user/vms/librenms/old_librenms-ubuntu-20.04-amd64.img vdisk to /mnt/user/sysbackup/vms/autobackup/LibreNMS/20220508_0440_old_librenms-ubuntu-20.04-amd64.img complete.
     2022-05-08 06:05:47 information: the extensions of the vdisks that were backed up are img.
     2022-05-08 06:05:47 information: vm_state is shut off. vm_original_state is running. starting LibreNMS.
     2022-05-08 06:05:48 information: backup of LibreNMS to /mnt/user/sysbackup/vms/autobackup/LibreNMS completed.
     2022-05-08 06:05:48 information: number of days to keep backups set to indefinitely.
     2022-05-08 06:05:48 information: cleaning out backups over 2 in location /mnt/user/sysbackup/vms/autobackup/LibreNMS/
     2022-05-08 06:05:48 information: removed '/mnt/user/sysbackup/vms/autobackup/LibreNMS/20220424_0440_LibreNMS.xml' config file.
     2022-05-08 06:05:48 information: did not find any nvram files to remove.
     2022-05-08 06:05:48 information: did not find any vdisk image files to remove.

     So the backup says it can't find a disk image to remove, but looking in the directory, it's there (24th April 2022):

     -rw-r--r-- 1 root root  20G Apr 24 05:47 20220424_0440_old_librenms-ubuntu-20.04-amd64.img
     -rw-rw-rw- 1 root root 6.4K May  1 05:39 20220501_0440_LibreNMS.xml
     -rw-r--r-- 1 root root  20G May  1 05:44 20220501_0440_old_librenms-ubuntu-20.04-amd64.img
     -rw-rw-rw- 1 root root 2.6K May  1 05:44 20220501_0440_unraid-vmbackup.log
     -rw-rw-rw- 1 root root 6.4K May  8 05:59 20220508_0440_LibreNMS.xml
     -rw-r--r-- 1 root root  20G May  8 06:05 20220508_0440_old_librenms-ubuntu-20.04-amd64.img
     -rw-rw-rw- 1 root root 2.8K May  8 06:05 20220508_0440_unraid-vmbackup.log

     Any ideas why it's happy with the Windows 10 backup (sees the img file and deletes anything older than 2 files), but won't do this for LibreNMS (and the qcow2 for the other VM), even though it does delete all the other data older than 2 for those VMs? Thanks

     20220508_0440_unraid-vmbackup.log
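Until the cleanup behaviour is sorted, the images can be pruned by hand. A minimal sketch, assuming the directory from the listing above and the same keep-count of 2 as the plugin setting:

```shell
#!/bin/sh
# Keep only the newest $KEEP vdisk images in the backup directory and
# delete the rest. DIR and KEEP mirror the values in the post.
DIR="/mnt/user/sysbackup/vms/autobackup/LibreNMS"
KEEP=2
# List images newest-first, skip the first $KEEP, remove the remainder.
ls -1t "$DIR"/*.img 2>/dev/null | tail -n +$((KEEP + 1)) | while read -r f; do
    echo "removing $f"
    rm -f -- "$f"
done
```

Dry-run it first by commenting out the `rm` line and checking which files get echoed.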
  4. Does anyone know how to get the distribution to report back 'Unraid' instead of Linux? I have LibreNMS discovering my Unraid server, but it detects it as Linux 5.10.28-Unraid, and LibreNMS then gives it the basic Linux penguin icon; apparently I need it to return just 'Unraid'. This is a script they give you to put into /usr/bin, calling it distro. I've done this and made it executable, but it returns Slackware 14.2:

     #!/usr/bin/env bash
     # Detects which OS and if it is Linux then it will detect which Linux Distribution.

     OS=`uname -s`
     REV=`uname -r`
     MACH=`uname -m`

     if [ "${OS}" = "SunOS" ] ; then
         OS=Solaris
         ARCH=`uname -p`
         OSSTR="${OS} ${REV}(${ARCH} `uname -v`)"
     elif [ "${OS}" = "AIX" ] ; then
         OSSTR="${OS} `oslevel` (`oslevel -r`)"
     elif [ "${OS}" = "Linux" ] ; then
         KERNEL=`uname -r`
         if [ -f /etc/fedora-release ]; then
             DIST=$(cat /etc/fedora-release | awk '{print $1}')
             REV=`cat /etc/fedora-release | sed s/.*release\ // | sed s/\ .*//`
         elif [ -f /etc/redhat-release ] ; then
             DIST=$(cat /etc/redhat-release | awk '{print $1}')
             if [ "${DIST}" = "CentOS" ]; then
                 DIST="CentOS"
             elif [ "${DIST}" = "Mandriva" ]; then
                 DIST="Mandriva"
                 PSEUDONAME=`cat /etc/mandriva-release | sed s/.*\(// | sed s/\)//`
                 REV=`cat /etc/mandriva-release | sed s/.*release\ // | sed s/\ .*//`
             elif [ -f /etc/oracle-release ]; then
                 DIST="Oracle"
             else
                 DIST="RedHat"
             fi
             PSEUDONAME=`cat /etc/redhat-release | sed s/.*\(// | sed s/\)//`
             REV=`cat /etc/redhat-release | sed s/.*release\ // | sed s/\ .*//`
         elif [ -f /etc/mandrake-release ] ; then
             DIST='Mandrake'
             PSEUDONAME=`cat /etc/mandrake-release | sed s/.*\(// | sed s/\)//`
             REV=`cat /etc/mandrake-release | sed s/.*release\ // | sed s/\ .*//`
         elif [ -f /etc/devuan_version ] ; then
             DIST="Devuan `cat /etc/devuan_version`"
             REV=""
         elif [ -f /etc/debian_version ] ; then
             DIST="Debian `cat /etc/debian_version`"
             REV=""
             ID=`lsb_release -i | awk -F ':' '{print $2}' | sed 's/ //g'`
             if [ "${ID}" = "Raspbian" ] ; then
                 DIST="Raspbian `cat /etc/debian_version`"
             fi
         elif [ -f /etc/gentoo-release ] ; then
             DIST="Gentoo"
             REV=$(tr -d '[[:alpha:]]' </etc/gentoo-release | tr -d " ")
         elif [ -f /etc/arch-release ] ; then
             DIST="Arch Linux"
             REV="" # Omit version since Arch Linux uses rolling releases
             IGNORE_LSB=1 # /etc/lsb-release would overwrite $REV with "rolling"
         elif [ -f /etc/os-release ] ; then
             DIST=$(grep '^NAME=' /etc/os-release | cut -d= -f2- | tr -d '"')
             REV=$(grep '^VERSION_ID=' /etc/os-release | cut -d= -f2- | tr -d '"')
         elif [ -f /etc/openwrt_version ] ; then
             DIST="OpenWrt"
             REV=$(cat /etc/openwrt_version)
         elif [ -f /etc/pld-release ] ; then
             DIST=$(cat /etc/pld-release)
             REV=""
         elif [ -f /etc/SuSE-release ] ; then
             DIST=$(echo SLES $(grep VERSION /etc/SuSE-release | cut -d = -f 2 | tr -d " "))
             REV=$(echo SP$(grep PATCHLEVEL /etc/SuSE-release | cut -d = -f 2 | tr -d " "))
         fi

         if [ -f /etc/lsb-release -a "${IGNORE_LSB}" != 1 ] ; then
             LSB_DIST=$(lsb_release -si)
             LSB_REV=$(lsb_release -sr)
             if [ "$LSB_DIST" != "" ] ; then
                 DIST=$LSB_DIST
             fi
             if [ "$LSB_REV" != "" ] ; then
                 REV=$LSB_REV
             fi
         fi

         if [ "`uname -a | awk '{print $(NF)}'`" = "DD-WRT" ] ; then
             DIST="dd-wrt"
         fi

         if [ -n "${REV}" ]
         then
             OSSTR="${DIST} ${REV}"
         else
             OSSTR="${DIST}"
         fi
     elif [ "${OS}" = "Darwin" ] ; then
         if [ -f /usr/bin/sw_vers ] ; then
             OSSTR=`/usr/bin/sw_vers|grep -v Build|sed 's/^.*:.//'| tr "\n" ' '`
         fi
     elif [ "${OS}" = "FreeBSD" ] ; then
         OSSTR=`/usr/bin/uname -mior`
     fi

     echo ${OSSTR}
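Since Unraid is Slackware-based, the generic detection in the script above lands on Slackware. One hedged workaround, not from the LibreNMS script itself: a minimal /usr/bin/distro replacement that special-cases Unraid. The file /etc/unraid-version and its version="…" line format are assumptions to verify on your own box:

```shell
#!/usr/bin/env bash
# Hypothetical minimal distro override for Unraid (not the LibreNMS script).
# Assumes /etc/unraid-version contains a line like: version="6.9.2"
if [ -f /etc/unraid-version ]; then
    REV=$(sed -n 's/.*version="\([^"]*\)".*/\1/p' /etc/unraid-version)
    echo "Unraid ${REV}"
else
    # Fall back to plain kernel identification on other systems.
    echo "$(uname -s) $(uname -r)"
fi
```

Make it executable (chmod +x /usr/bin/distro) and re-run LibreNMS discovery; note Unraid's root filesystem is rebuilt from the boot USB on reboot, so the file would need to be restored by a go-script or similar.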
  5. Great plugin, but I have a few oddities. When the plugin runs at its scheduled time, I get an email that says "Description: cannot run /boot/config/plugins/vmbackup/configs/Weekly_Backups/", and yet the plugin does seem to run. When it runs, it seems to take a copy of an existing backup first:

     2021-12-05 15:31:47 information: copy of backup of /mnt/user/sysbackup/vms/autobackup/LibreNMS/20211205_0440_librenms-ubuntu-20.04-amd64.img vdisk to /mnt/user/sysbackup/vms/autobackup/LibreNMS/20211205_1527_librenms-ubuntu-20.04-amd64.img starting.
     '/mnt/user/sysbackup/vms/autobackup/LibreNMS/20211205_0440_librenms-ubuntu-20.04-amd64.img' -> '/mnt/user/sysbackup/vms/autobackup/LibreNMS/20211205_1527_librenms-ubuntu-20.04-amd64.img'
     2021-12-05 15:36:48 information: copy of /mnt/user/sysbackup/vms/autobackup/LibreNMS/20211205_0440_librenms-ubuntu-20.04-amd64.img to /mnt/user/sysbackup/vms/autobackup/LibreNMS/20211205_1527_librenms-ubuntu-20.04-amd64.img complete.

     Then the backup runs, but it can't see any other files to remove in the backup directory, so I'm gradually building up backup files even though I have the number of files to keep set to 2:

     2021-12-05 15:40:57 information: copy of /mnt/user/vms/librenms/librenms-ubuntu-20.04-amd64.img to /mnt/user/sysbackup/vms/autobackup/LibreNMS/20211205_1527_librenms-ubuntu-20.04-amd64.img complete.
     2021-12-05 15:40:57 information: backup of /mnt/user/vms/librenms/librenms-ubuntu-20.04-amd64.img vdisk to /mnt/user/sysbackup/vms/autobackup/LibreNMS/20211205_1527_librenms-ubuntu-20.04-amd64.img complete.
     2021-12-05 15:40:57 information: the extensions of the vdisks that were backed up are img.
     2021-12-05 15:40:57 information: vm_state is shut off. vm_original_state is running. starting LibreNMS.
     Domain LibreNMS started
     2021-12-05 15:40:59 information: backup of LibreNMS to /mnt/user/sysbackup/vms/autobackup/LibreNMS completed.
     2021-12-05 15:40:59 information: number of days to keep backups set to indefinitely.
     2021-12-05 15:40:59 information: cleaning out backups over 2 in location /mnt/user/sysbackup/vms/autobackup/LibreNMS/
     2021-12-05 15:40:59 information: removed '/mnt/user/sysbackup/vms/autobackup/LibreNMS/20211128_1302_LibreNMS.xml' config file.
     2021-12-05 15:40:59 information: did not find any nvram files to remove.
     2021-12-05 15:40:59 information: did not find any vdisk image files to remove.
     2021-12-05 15:40:59 information: removed '/mnt/user/sysbackup/vms/autobackup/LibreNMS/20211128_1302_unraid-vmbackup.log' vm log file.
     2021-12-05 15:40:59 information: removing local LibreNMS.xml. removed 'LibreNMS.xml'
  6. Hi, running an Ubuntu 20.04 VM (it's an appliance provided by LibreNMS) and am struggling to get it to resolve internal IP addresses on my LAN. I have a Pi-hole acting as my local DNS server; all my clients on the network point to the Pi-hole, and they can all resolve local addresses set up in there, e.g. if I ping the name UnRaid from a client, it resolves fine. Now, on the Ubuntu box, this doesn't work: I can't ping anything local. I can ping hosts on the internet and that's perfect; it's just the internal hosts. If I do an nslookup for the IP of the Unraid box I do get a response: name = UnRaid. But if I ping UnRaid from the Ubuntu box I get:

     ping: unraid: Temporary failure in name resolution

     Any ideas? I'm using Netplan for setup, and my netplan .yaml file is:

     network:
       version: 2
       renderer: networkd
       ethernets:
         enp2s0:
           dhcp4: no
           addresses: []
           gateway4:
           nameservers:
             addresses: []
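One thing worth checking: nslookup talks to the configured nameserver directly, while ping goes through the system resolver, which needs a search domain to expand a bare name like UnRaid into a fully qualified one. A hedged sketch of a netplan stanza, assuming the Pi-hole is the resolver; the interface name matches the post, but the `lan` search domain and all addresses are placeholders for your own values:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: no
      addresses: [<static-ip>/24]      # placeholder
      gateway4: <gateway-ip>           # placeholder
      nameservers:
        search: [lan]                  # placeholder: your local DNS suffix
        addresses: [<pihole-ip>]       # placeholder
```

Apply with `sudo netplan apply`, then check the active resolver with `systemd-resolve --status` (or `resolvectl status` on newer releases).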
  7. Forgot to mention: looking at my auto backups done within the app, they stopped some time in 2019, so I think the corruption has been there for a good while.
  8. Hi, yep, that works perfectly. At the moment I have managed to install the 'older' docker container (5.13.32) and point it at my data, and it's working OK. My only problem now is backups: every time I try to back up in the app, it fails. I can't seem to run 5.14 with my app data, so my logic was: work on 5.13, get it backing up, then build a new 5.14 and import the backup. I think I have a corrupt db.
  9. Still struggling with the above. I've tried removing the docker container (selected remove image as well), then re-added it while pointing at my existing app data, and still the same problem with Tomcat.
  10. Got an issue today with the container; can't get to the web page. I get "HTTP Status 404 – Not Found". Looking at the log:

     [linuxserver.io banner]
     Brought to you by
     -------------------------------------
     To support LSIO projects visit:
     -------------------------------------
     GID/UID
     -------------------------------------
     User uid: 99
     User gid: 100
     -------------------------------------
     [cont-init.d] 10-adduser: exited 0.
     [cont-init.d] 20-config: executing...
     [cont-init.d] 20-config: exited 0.
     [cont-init.d] 30-keygen: executing...
     [cont-init.d] 30-keygen: exited 0.
     [cont-init.d] 99-custom-scripts: executing...
     [custom-init] no custom files found exiting...
     [cont-init.d] 99-custom-scripts: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     [services.d] done.
     Exception in thread "launcher" java.lang.IllegalStateException: Tomcat failed to start up
             at ...(Unknown Source)
             at ...(Unknown Source)
             at com.ubnt.ace.Launcher.main(Unknown Source)
     Caused by: java.lang.RuntimeException: Web context failed to initialize

     It got updated recently; anyone else with the same?
  11. I have a new version of LibreNMS, which is an OVA file this time, and I'm trying to get it running as a VM. As I understand it, an OVA is like a tar file, so I untarred the ova file, which gave me an OVF and the vmdk image:

     -rw-rw-rw- 1    5 daemon 1.1G Apr 28 05:43 librenms-centos-7.7-x86_64-disk001.vmdk
     -rw-r----- 1    5 daemon  171 Apr 28 05:44
     -rw-r--r-- 1 root root   1.1G May 14 16:03 librenms-centos-7.7-x86_64.ova
     -rw-r----- 1    5 daemon 6.6K Apr 28 05:43 librenms-centos-7.7-x86_64.ovf

     I tried to create a CentOS VM pointing to the vmdk file, and while the VM starts, I keep getting this error in the log:

     2020-05-14T16:09:57.077401Z qemu-system-x86_64: Could not write to allocated cluster for streamOptimized
     2020-05-14T16:09:57.077564Z qemu-system-x86_64: Could not write to allocated cluster for streamOptimized
     2020-05-14T16:09:57.077664Z qemu-system-x86_64: Could not write to allocated cluster for streamOptimized
     2020-05-14T16:09:57.077781Z qemu-system-x86_64: Could not write to allocated cluster for streamOptimized

     My xml is this:

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm'>
       <name>LibreNMS2</name>
       <uuid>771ac9c9-88bf-e3f3-7a3f-af9bb2c85abb</uuid>
       <description>Second install of Libre with updated PHP</description>
       <metadata>
         <vmtemplate xmlns="unraid" name="CentOS" icon="centos.png" os="centos"/>
       </metadata>
       <memory unit='KiB'>1048576</memory>
       <currentMemory unit='KiB'>1048576</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>2</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='1'/>
         <vcpupin vcpu='1' cpuset='9'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='1' threads='2'/>
         <cache mode='passthrough'/>
         <feature policy='require' name='topoext'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='vmdk' cache='writeback'/>
           <source file='/mnt/cache/vms/librenms2/librenms-centos-7.7-x86_64-disk001.vmdk'/>
           <target dev='hdc' bus='sata'/>
           <boot order='1'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x10'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x11'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0x12'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0x13'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:2b:cc:88'/>
           <source bridge='br0.1'/>
           <model type='virtio'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='' keymap='en-us'>
           <listen type='address' address=''/>
         </graphics>
         <video>
           <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
         </video>
         <memballoon model='virtio'>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </memballoon>
       </devices>
     </domain>

     Any ideas what I'm doing wrong, please?
  12. I'd like to do something similar. I have it running in a VM at the moment; if you get it running, it'd be great to know how!
  13. Ahh ok thanks. I did wonder if that happened. No point me doing what I’m doing then :)
  14. I have LibreNMS monitoring my Unraid using SNMP. I was looking at the discovered ports, and I have a number of interfaces listed as vethxxxxxxx (7 characters). I assume these are for the docker containers, but wondered if there is a way to see which veth is assigned to which docker container, so I can label these up in the SNMP software. I show 7 veth interfaces but have 11 docker containers; 8 are set to bridge, and one of those is stopped at the moment, so I'm assuming it's the bridged containers. Just need to know which veth belongs to which docker container. Thanks
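On the veth question, a hedged sketch: inside each container, eth0's iflink holds the ifindex of its host-side veth peer, so it can be matched against /sys/class/net/veth*/ifindex on the host. Assumes the docker CLI is available and each container has a default eth0:

```shell
#!/bin/sh
# Map each running container to its host-side veth interface by matching
# the container's eth0 peer index against host veth ifindex values.
for c in $(docker ps --format '{{.Names}}'); do
    # iflink of eth0 inside the container = ifindex of its host-side peer.
    idx=$(docker exec "$c" cat /sys/class/net/eth0/iflink 2>/dev/null) || continue
    # Find the host veth whose ifindex matches; take the directory name.
    veth=$(grep -l "^${idx}$" /sys/class/net/veth*/ifindex 2>/dev/null |
           awk -F/ '{print $(NF-1)}')
    echo "$c -> ${veth:-unknown}"
done
```

Containers on macvlan/host networking have no host veth, which would explain seeing fewer veth interfaces than containers.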
  15. Try this: once the contents of the file have been validated, save it by naming it config.gateway.json and placing it under the <unifi_base>/data/sites/site_ID directory stored on the Controller.