jbreed's Posts
  1. Very odd. It appears the /tmp/recover directory was never cleaned up after the update, so the script tried to create the directory and failed because it already existed. Here is what the bottom of that log file should look like. Looks like I should check for this within the startup script.

```
config/opt/nessus/var/nessus/plugins-attributes.db.new
config/opt/nessus/var/nessus/tenable-plugins-20210201.pem
Changing owner and group of configuration files...
Creating symbolic links...
Cleaning up deb file used for install..
Cleaning up backup files extracted for an update..
Starting nessusd service...
nessusd (Nessus) 8.15.1 [build 20272] for Linux
Copyright (C) 1998 - 2021 Tenable, Inc.
Cached 0 plugin libs in 0msec
Processing the Nessus plugins...
All plugins loaded (0sec)
Setting user permissions...
Modifying ID for nobody...
Modifying ID for the users group...
Adding nameservers to /etc/resolv.conf...
Changing owner and group of configuration files...
Starting nessusd service...
nessusd (Nessus) 8.15.1 [build 20272] for Linux
Copyright (C) 1998 - 2021 Tenable, Inc.
```

Console into the container and get the results of:

```
# bash
# df -h
```

This will tell you what the disk usage looks like. The backup tar file for me is 1.6G (it could be gzipped to compress it further), with the whole volume mount sitting at 49G (/config). Not sure how you were seeing such small disk usage before, to be honest.

```
root@2917db0051dc:/config# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop2      120G  6.8G  112G   6% /
tmpfs            64M     0   64M   0% /dev
tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
shm              64M     0   64M   0% /dev/shm
shfs            466G   49G  417G  11% /config
/dev/loop2      120G  6.8G  112G   6% /etc/hosts
tmpfs           7.8G     0  7.8G   0% /proc/acpi
tmpfs           7.8G     0  7.8G   0% /sys/firmware
```

I'll look into adding a handler for recovering the backup file, as well as adding compression to shrink it further; however, I imagine it won't make a big difference. Have you logged in and updated all of your plugins?
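To see which paths are actually consuming the space (rather than the per-filesystem totals from `df`), a `du` one-liner works. The sketch below is self-contained: a temp directory and a placeholder file stand in for the real /config volume, so the paths and filenames here are illustrative only.

```shell
# A temp dir stands in for /config so this runs anywhere;
# inside the real container you'd point du at /config instead.
target=$(mktemp -d)
mkdir -p "$target/opt/nessus/var/nessus"
# 2 MiB placeholder standing in for plugin/database data.
dd if=/dev/zero of="$target/opt/nessus/var/nessus/plugins.db" bs=1024 count=2048 2>/dev/null
# Human-readable sizes, largest directories first.
du -h "$target" | sort -rh | head -5
```

On the real container this reduces to something like `du -h /config | sort -rh | head -20`.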
You could get a bash terminal and run 'du' to locate large files and see what is using the space. If you take a look at the startup script, this is the basic logic flow:

1. Back up the Nessus user data as a tar file to /config, which is a volume mount
2. Delete the /config/opt directory
3. Unpack the .deb Nessus installation back to the /config directory, which deploys /config/opt (Nessus lives under this)
4. Unpack the backup tar file to /tmp/recover
5. Move the main backup components (users, database, etc.) into the Nessus installation
6. Clean up /tmp/recover and remove the .deb file (this doesn't delete the actual backup .tar file)

UPDATE: Pushed an update correcting how the backup and recovery tar was handled. You can probably free up 1-2G by opening a console and deleting /config/nessusbackup.tar if you don't care to keep a backup residing on the volume.
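The flow above can be sketched as a small shell script. This is a simplified illustration, not the actual startup script: everything is simulated under a temp directory, and the .deb unpack step is replaced with a plain `mkdir` so the sketch runs anywhere.

```shell
# Simulated layout: $root stands in for /, so $root/config is the volume mount.
root=$(mktemp -d)
mkdir -p "$root/config/opt/nessus/var/nessus"
echo "user-data" > "$root/config/opt/nessus/var/nessus/users.db"

# 1. Back up the Nessus user data to the volume mount as a tar file.
tar -cf "$root/config/nessusbackup.tar" -C "$root/config/opt/nessus/var" nessus
# 2. Delete the old install tree.
rm -rf "$root/config/opt"
# 3. Deploy the fresh install (stand-in for unpacking the .deb).
mkdir -p "$root/config/opt/nessus/var/nessus"
# 4. Unpack the backup tar to a recovery area.
mkdir -p "$root/tmp/recover"
tar -xf "$root/config/nessusbackup.tar" -C "$root/tmp/recover"
# 5. Move the main backup components (users, database, etc.) back in.
cp -a "$root/tmp/recover/nessus/." "$root/config/opt/nessus/var/nessus/"
# 6. Clean up the recovery area (the backup .tar itself is kept).
rm -rf "$root/tmp/recover"
```

The bug described above corresponds to step 4 failing when a stale /tmp/recover from a previous run is still present.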
  2. Just to give you a heads up, I pushed an update bringing nessusd to the latest version, as well as updating the base image and other dependencies. Let me know if you have any issues.

UPDATE 1, 5:30PM MST: No issues with the package itself functioning; however, when deploying an update/upgrade it appears there is a bug that causes a 404 error. Working on a fix right now, which involves backing up /config/opt/nessus/var/nessus (the configuration files for a user), cleaning up the whole previous install, then re-installing the new package. I think since I just copied the deb into the path, there is something it doesn't like. I plan to have this fixed within the next hour or so. I don't recommend updating just yet!

UPDATE 2, 6:00PM MST: I believe I have a patch in place, which I've published. Doing a final test to make sure all is well. If you have anything important (e.g. a license for the non-free version), I recommend backing up the var/nessus folder and storing it elsewhere. This can be done by accessing the console. It wouldn't be a bad idea to do this anyway if this is used on a business/production system. I imagine most UnRaid instances are for small home use, and I personally wouldn't have an issue re-doing my configurations; however, that's probably not the case for a select few.

If anyone is still having issues, please feel free to post your logs. I should have time over this 3-day weekend to work with anyone who is running into issues and help debug. It seems each time I go for a simple patch it works fine, except for upgrading where a volume and data are present. If all is well, I may look into making a CI/CD pipeline to auto-update the core components on a regular schedule, but I have to figure out good tests it can run before merging into the production version.
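Backing up the var/nessus folder from the console comes down to a single tar command. The snippet below is an illustrative sketch: it simulates the container's /config layout in a temp directory so it runs anywhere, and the backup filename is my own choice, not something the container creates.

```shell
# Simulate the container layout; inside the real container, skip this
# setup and use the real /config paths instead.
cfg=$(mktemp -d)
mkdir -p "$cfg/opt/nessus/var/nessus"
echo "license-and-settings" > "$cfg/opt/nessus/var/nessus/nessus.db"

# Archive and gzip var/nessus; keep the result outside the install tree
# so a re-install doesn't wipe it.
tar -czf "$cfg/nessus-config-backup.tar.gz" -C "$cfg/opt/nessus/var" nessus
# List the archive contents to confirm what was captured.
tar -tzf "$cfg/nessus-config-backup.tar.gz"
```

On the real container the equivalent is roughly `tar -czf /config/nessus-config-backup.tar.gz -C /config/opt/nessus/var nessus`; copy the result off the volume afterwards.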
  3. Awesome! For me, I normally spin this up, run a scan on a known subnet of my internal network, and power the container off when it's not in use. I've used Nessus in an enterprise environment for benchmarks (compliance), credentialed and uncredentialed vulnerability scans, etc. I recommend going to a Nessus forum if you want to work with others on the specifics of using the scanner. Have a good weekend!
  4. Glad you pinged me, as I'm able to get this handled today while I have some time on my hands. I've been pretty busy and this keeps slipping my mind, given it continues to work with updated plugins; however, I may look into creating an automated pipeline for publishing updates soon if I can make time in my schedule to work on it. I've just re-packaged a new container update that I'm spinning up for a test (wouldn't want to push an update and break everyone's functional scanner) before publishing it. I should have it up shortly. 😁
  5. Strange. I'll see if I can replicate this. I did just update the core .deb components, which may have been an issue; however, I personally haven't seen this. If you're still having issues, consider re-deploying on a new volume mount, or deleting the container along with the volume data and re-deploying. What's strange about your error is that it appears to be related to the nessusd_www_server6 plugin calling a function that is/was unavailable. Have you tried going to the console and manually updating the plugins?
  6. Sure thing! Let me know how it looks.
  7. Deleted! I just now noticed this was an option; I'd assumed it was the same module I've always used. Thanks!
  8. Chezro, you can change the port mapping on either of the containers. To do it here, modify 'Host Port 1' on the Docker configuration page. This will map the port you set to the container's internal port 8843.
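For reference, the UnRaid template field corresponds to the standard Docker port-mapping flag. A minimal sketch (the image name and host port here are placeholders, not the exact repository tag):

```shell
# Map host port 9443 (any free port you choose) to the container's
# internal port 8843, where the Nessus web UI listens.
docker run -d --name nessus -p 9443:8843 <nessus-image>
# The UI is then reachable at https://<your-host>:9443
```

UnRaid's 'Host Port 1' field fills in the left-hand side of that `-p` mapping for you.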
  9. This is due to using a self-signed certificate, which is generally the same issue you see when accessing the UnRaid login page. To fix it properly, you must create a local certificate authority, create a CSR, sign the certificate with the root CA, and then upload it to the application. You would then need to install this root CA on the local machine you access these pages from. If I recall, Chrome uses the same CA store as Internet Explorer (the OS store), although with Firefox you have to import it directly into the browser. Most of the time you can simply bypass the warning by choosing 'I understand' or something of that nature to continue to the page; I believe this is listed under an 'Advanced' button. I'm unsure which Chrome settings would need to change if that option isn't available, but I imagine there is likely a browser setting that allows you to accept the certificate. Not many people go through the hassle of setting up a local CA to get the green lock.
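The CA workflow above can be sketched with openssl. The hostnames, filenames, and validity periods here are placeholders I chose for illustration; adjust the subjects to your environment.

```shell
# Work in a scratch directory.
cd "$(mktemp -d)"

# 1. Create a local root CA (self-signed certificate plus private key).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 825 -subj "/CN=Home Lab Root CA"

# 2. Create a key and CSR for the server (e.g. the Nessus or UnRaid host).
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=nessus.local"

# 3. Sign the CSR with the root CA.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 825

# 4. Verify the chain; this is what the browser checks once ca.crt
#    is imported into its certificate store.
openssl verify -CAfile ca.crt server.crt
```

server.key and server.crt are what you'd upload to the application; ca.crt is what you'd import into the OS/browser trust store.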
  10. I currently don't have an environment to test this and haven't used Nessus via a proxy before. Based on past posts here, it appears others have had it working. If there is something I can add to the Nessus container to make it easily configurable, I'll happily add it; I just need some guidance on what's needed to get it working. If I have the time, I'll see about setting up a use case in my lab to test with. I'll have to do some research on what changes Nessus needs for this.
  11. Looking back into this and going over the documentation, there is a way to fix the login from the console:

```
/opt/nessus/sbin/nessuscli chpasswd username
```

If you don't remember the username, you can also add a new user with:

```
/opt/nessus/sbin/nessuscli adduser
```
  12. For a temporary fix, I posted the commands to resolve this. I'll be pushing an update soon to resolve this without needing to console into the container and will also keep everything up to date without having to re-compile the image. Thanks!
  13. Update: The issue was that I needed to update the container core components as well, to avoid a mismatch caused by the automatic plugin updates. To fix this before I push an update, you can do the following. Get a shell/console to the container and type:

```
/opt/nessus/sbin/nessuscli update --all
service nessusd stop
service nessusd start
```

**Confirmed you can also use the GUI to do this:**

1. Click the top-right button for your account, then go to 'My Account'.
2. On the left-side menu, click 'About'.
3. Select the third tab, 'Software Update'.
4. Choose 'Manual Software Update' at the top right.

This will force an update and clear any errors. I just need to update the included deb file for the initial install, and have the startup script run an update prior to starting the service, to prevent a mismatch on the initial launch. By default it should be set to update on a daily basis, but of course there will be issues if it isn't updated when first running everything. Thanks for the screenshots. I was able to replicate the issue and should have a patch pushed shortly once I make sure everything is good to go.
  14. Looking into this right now, as I have the time to debug and re-package with the latest Nessus components. It may be resolved simply by me updating the image and pushing an update. I'll know something by today. Thanks!
  15. Sorry for the late response; I've been pulled between multiple projects and need to revisit and update this. The initial setup does take some time, as it has to pull all the latest plugins for scanning and such. I initially looked at whether I could bake all of these into the image to shrink that time so it only had to fetch new ones, but I ran into some issues, so I left it as is (it works, but takes some time on the first setup). After that initial setup it will be pretty quick to spin up compared to the first time. For login, if I recall, you create the credentials during the initial configuration. I recommend completely removing the container, including the volume, and re-installing it. The other option is to remove the container and leave the volume, but change the mapping so it saves to a new location; this way you don't re-use your old configuration, as the volume mapping is for persistence. Hope this helps. I'll be jumping into this soon to update everything and see if I can speed up that initial setup.