timethrow

Members · Posts: 35 · Reputation: 14

  1. Slackware is a "proper" Linux distro; it just depends on how it's maintained. I don't think it's unreasonable to ask for security patches and fixes in a timely manner, especially for something that has a very high score and is a core part of the product, even more so since Limetech/unRAID is supposed to be taking a more secure-by-default stance now.
  2. Details about the vulnerability are here: https://www.samba.org/samba/security/CVE-2021-44142.html

     As this gets a score of 9.9, can we expect an update to unRAID v6.9 to fix it (prior to v6.10's release)?

     Additionally, is there a way to bind Samba within unRAID to only one IP address? I have 3 networks defined: the main one (eth0) is the LAN, with 2 VLANs attached, and unRAID listens for Samba (and other services) on each of those networks, even though the 2 VLANs don't have an IP assigned (it seems to allocate itself an IP ending in .128, e.g. 192.168.10.128). From a security perspective, it would be good to be able to restrict which IPs/networks it listens on.

     Naturally, for anyone who has Samba exposed to the Internet (why?!?), I would seriously consider firewalling it off to a trusted network or range of IPs only, or better yet putting it behind a VPN, to minimise the potential attack surface.
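     For reference, Samba itself can be told which interfaces to listen on via smb.conf. A minimal sketch, assuming eth0 is the LAN interface (on unRAID I believe custom settings like this go under Settings > SMB > SMB Extras, which writes to /boot/config/smb-extra.conf):

         [global]
             # listen only on loopback and the LAN interface
             interfaces = lo eth0
             # refuse connections arriving on any other interface
             bind interfaces only = yes

     Whether the web UI preserves this across upgrades is worth checking, but it would at least stop Samba answering on the VLANs.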
  3. Hi, I have 2 disks showing as disabled and having read errors. This originally started partway through my scheduled monthly parity check. The disks don't show any SMART issues that I can see, and when I try to run a self-test (Short or Extended), they keep disappearing and then come back again after a few seconds (~30). It's disks 10 (sdz) and 15 (sdx). I have a few times managed to get the Short self-test to complete, but it's rather intermittent; when it does complete, it does not report any errors.

     These 2 disks are both connected to a SAS expander, but there are 13 other drives also connected to this expander and those all seem to work OK at the moment. I have tried rebooting etc., and I also tried disabling the VM PCIe ACS override, but no luck with that. No hardware changes have been made for a few months, and no major software changes either (including no BIOS or firmware updates).

     Diagnostics attached; does anyone have any suggestions please? Thanks

     unraid-diagnostics-20220118-1316.zip
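     For reference, this is roughly how I have been running the self-tests from the console (standard smartctl commands; drives behind some HBAs may need an explicit -d option, e.g. -d sat):

         # start a short SMART self-test on each affected disk
         smartctl -t short /dev/sdz
         smartctl -t short /dev/sdx

         # a few minutes later, check the self-test log and overall health
         smartctl -l selftest /dev/sdz
         smartctl -H /dev/sdz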
  4. I always seem to have problems with the "Apply Fix" option for when a template URL differs: it says the fix is applied, but looking at the file it mentions, I can see it's not using the new URL, and so FCP still says it's an issue.

     For example, at the moment I get this in FCP:

         Template URL for docker application pihole-template is not the same as what the template author specified. The template URL the author specified is https://raw.githubusercontent.com/spants/unraidtemplates/master/Spants/pihole.xml. The template can be updated automatically with the correct URL.

     Running the "Apply Fix" option, this is the output:

         Template to fix: /boot/config/plugins/dockerMan/templates-user/my-pihole-template.xml
         New template URL: https://raw.githubusercontent.com/spants/unraidtemplates/master/Spants/pihole.xml
         Loading template...
         Replacing template URL...
         Saving template...
         Fix applied successfully!

     Running a rescan in FCP, it shows the same issue, but with a slightly different template URL now (it has v2 in it):

         Template URL for docker application pihole-template is not the same as what the template author specified. The template URL the author specified is https://raw.githubusercontent.com/spants/unraidtemplates/master/Spants/pihole-v2.xml. The template can be updated automatically with the correct URL.

     Again, running the "Apply Fix" option shows:

         Template to fix: /boot/config/plugins/dockerMan/templates-user/my-pihole-template.xml
         New template URL: https://raw.githubusercontent.com/spants/unraidtemplates/master/Spants/pihole-v2.xml
         Loading template...
         Replacing template URL...
         Saving template...
         Fix applied successfully!

     Looking at the file "Apply Fix" mentions, it shows the correct URL for this second message:

         root@Tower [Fri Dec 24 09:32]: ~# grep TemplateURL /boot/config/plugins/dockerMan/templates-user/my-pihole-template.xml
         <TemplateURL>https://raw.githubusercontent.com/spants/unraidtemplates/master/Spants/pihole-v2.xml</TemplateURL>

     I feel like it's stuck in a bit of a loop. I'm not sure if it's FCP or the CA app/template, but I have had this issue a few times in the past and end up having to ignore it.
  5. Heya, thank you for looking at it and for the update. Most appreciated.
  6. Heya, thanks for providing this container. Are there any plans to update the Tomcat version, as Nessus is reporting that the installed version has a few vulnerabilities (CVE-2021-25122, CVE-2021-25329)? Thanks
  7. Hi, as part of the ongoing effort to improve the security of the appliance and its secure-by-default stance, could we please include a firewall such as UFW by default in the image? While unRAID comes with iptables, ufw is (IMO) much more user-friendly, and it lets users easily firewall off ports they don't need or restrict access. If ufw came with unRAID out of the box (currently it requires a manual install, until/unless it's added to NerdPack), it would allow you to apply rules on first boot as required, minimising the window where your host could potentially be exposed; a sketch of the kind of first-boot rules I mean is below. Thanks.
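     Something along these lines (standard ufw commands; the subnet and ports are just illustrative):

         # deny everything inbound by default, allow outbound
         ufw default deny incoming
         ufw default allow outgoing

         # web UI and SSH from the LAN only (example subnet)
         ufw allow from 192.168.1.0/24 to any port 80,443 proto tcp
         ufw allow from 192.168.1.0/24 to any port 22 proto tcp

         # --force skips the interactive confirmation prompt
         ufw --force enable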
  8. Could we please add UFW to this? Tested on v6.9.2 and it works as expected. https://slackware.pkgs.org/current/slackers/ufw-0.36.1-x86_64-3cf.txz.html Thanks
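     Until it's bundled, installing it by hand is a single command on the stock Slackware base (installpkg is the standard Slackware tool; the filename matches the package linked above):

         # after downloading the .txz linked above onto the server
         installpkg ufw-0.36.1-x86_64-3cf.txz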
  9. Heya, I think there are still some issues with this container. I rebooted my unRAID server (cleanly, as part of maintenance) and, when starting the container, it's given me a fresh install again. The first time, the docker container did not start properly; the service started then stopped:

         Setting user permissions...
         Modifying ID for nobody...
         Modifying ID for the users group...
         Setting user permissions...
         Modifying ID for nobody...
         Modifying ID for the users group...
         Adding nameservers to /etc/resolv.conf...
         Changing owner and group of configuration files...
         Starting the nessusd service...
         nessusd (Nessus) 8.15.1 [build 20272] for Linux
         Copyright (C) 1998 - 2021 Tenable, Inc.
         Setting user permissions...
         Modifying ID for nobody...
         Modifying ID for the users group...
         Adding nameservers to /etc/resolv.conf...
         Changing owner and group of configuration files...
         Starting the nessusd service...
         nessusd (Nessus) 8.15.1 [build 20272] for Linux
         Copyright (C) 1998 - 2021 Tenable, Inc.
         Setting user permissions...
         Modifying ID for nobody...
         Modifying ID for the users group...
         Setting user permissions...
         Modifying ID for nobody...
         Modifying ID for the users group...
         Adding nameservers to /etc/resolv.conf...
         Changing owner and group of configuration files...
         Starting the nessusd service...
         nessusd (Nessus) 8.15.1 [build 20272] for Linux
         Copyright (C) 1998 - 2021 Tenable, Inc.

     When manually restarting it a few times, eventually it did a backup and then started afresh:

         ....
         nessus/plugins-code.db.16321190631015150102
         nessus/plugins-desc.db.1632119107882788674
         nessus/global.db-wal
         nessus/global.db-shm
         Loading backup into new Nessus version path...
         Changing owner and group of configuration files...
         Creating symbolic links...
         Cleaning up deb file used for install..
         Cleaning up backup files extracted and no longer required..
         Starting the nessusd service...
         nessusd (Nessus) 8.15.1 [build 20272] for Linux
         Copyright (C) 1998 - 2021 Tenable, Inc.
         Cached 0 plugin libs in 1msec
         Processing the Nessus plugins...
         All plugins loaded (0sec)
         All plugins loaded (0sec)

     As a test, I stopped the Docker service and started it again, and it did the same thing. I have a backup I can restore from, but something seems amiss here. Thanks.

     EDIT: Looks like I can't even restore from backup, as it gets stuck in a loop: the container doesn't start (same as the logs above), and then, when you eventually get it started, it does its own backup and starts afresh again.
  10. Thanks, but this only works if you put that in for every eventuality in your script, whereas having the plugin send it after a script has completed (whether successful or not) ensures it's always sent. For example, if my script encounters an error that was not captured, the notify may not be sent if it's only included in the script manually, whereas this way it would always be sent, as long as the underlying plugin works. I do have the notify in some of my user scripts (roughly as sketched below), and I also redirect a lot of my output to log files I store on the array (in case I need it), but this is more about being notified when a script has run, along with its output, similar to the cron daemon on a vanilla Linux server. It lets you check quickly/easily whether a script ran and whether it produced the expected output.
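     For illustration, this is roughly the workaround I use inside a script today; a sketch using unRAID's stock notify script (the path is the one I've seen referenced for unRAID 6.x and may differ between versions; the subject/description text is just an example). A trap covers most failure paths, but it still relies on the script starting at all, which is why doing it in the plugin would be more robust:

         #!/bin/bash
         # send an unRAID notification whenever the script exits,
         # whether it succeeded or failed
         notify_done() {
             local status=$?   # exit status of the script
             /usr/local/emhttp/plugins/dynamix/scripts/notify \
                 -e "User Scripts" \
                 -s "my-script finished" \
                 -d "Exit status: ${status}" \
                 -i "normal"
         }
         trap notify_done EXIT

         # ... actual script work goes here ...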
  11. Heya, thanks for the update. Since updating, the docker image use has gone from around ~1.5GB to 9.8GB and has maxed out my docker image; I have given it another 5GB and it's still growing. Is this expected, i.e. are there a lot of changes or new files in this update, or are there any new paths we may need to map? This is the tail end of the Docker log:

         config/opt/nessus/var/nessus/tmp/fetch_feed_file_tmp_32_2140101596_1745818318
         config/opt/nessus/var/nessus/plugins-code.db.16305872481236034445
         tar: config/opt/nessus/var/nessus/plugins-code.db.16305872481236034445: Cannot write: No space left on device
         config/opt/nessus/var/nessus/plugins-desc.db.163058740051507450
         Setting user permissions...
         Modifying ID for nobody...
         Modifying ID for the users group...
         Adding nameservers to /etc/resolv.conf...
         Backing up Nessus configuration to /config/nessusbackup.tar
         tar: Removing leading `/' from member names
         /config/opt/nessus/var/nessus/
         /config/opt/nessus/var/nessus/tools/
         /config/opt/nessus/var/nessus/tools/bootstrap-from-media.nbin
         /config/opt/nessus/var/nessus/tools/nessusd_www_server6.nbin
         /config/opt/nessus/var/nessus/tools/tool_dispatch.ntool
         /config/opt/nessus/var/nessus/logs/
         /config/opt/nessus/var/nessus/nessus-services
         /config/opt/nessus/var/nessus/plugins-core.tar.gz
         /config/opt/nessus/var/nessus/tenable-plugins-a-20210201.pem
         /config/opt/nessus/var/nessus/users/
         /config/opt/nessus/var/nessus/nessus_org.pem
         /config/opt/nessus/var/nessus/tenable-plugins-b-20210201.pem
         /config/opt/nessus/var/nessus/tmp/
         /config/opt/nessus/var/nessus/tmp/nessusd
         /config/opt/nessus/var/nessus/tmp/nessusd.service
         Cleaning up old Nessus installation files
         Extracting packaged nessus debian package: Nessus 8.15.1...
         mkdir: cannot create directory '/tmp/recover': File exists

     Thanks

     EDIT: I couldn't get to the bottom of it, and no matter how much extra space I gave the Docker image, within a few moments it used it all. So I ended up deleting the image, moving the old appdata aside, and starting afresh. This solved the disk usage issue; however, on this clean install I had a couple of issues where Nessus would not start. Going to the console and starting the nessusd service manually worked. Will see how it goes over the next few days. Thanks again for updating it.
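     For anyone hitting something similar, these stock Docker commands are what I used to confirm where the space was going (standard Docker CLI, nothing unRAID-specific):

         # per-container size, including the writable layer (SIZE column)
         docker ps -s

         # break usage down by image, container, and volume
         docker system df -v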
  12. Is it possible to add an optional feature to send the output of the script (the same as what is shown if you run the task in the Web UI) as a notification using the unRAID notification system, so we can see when scripts have run and whether any issues etc. were reported? Similar to how with cron you can have it email you on completion (assuming you have the right stuff set up); see the snippet below for the behaviour I mean.
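     For comparison, a standard crontab does this with MAILTO (the address and script path are just placeholders):

         # any stdout/stderr the job produces is emailed when it completes
         MAILTO="admin@example.com"
         0 3 * * * /usr/local/bin/nightly-maintenance.sh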
  13. Thanks for having a look. It's currently in PCIEX1_1 and has also been tested in PCIEX1_2. I did have a USB expansion card in that slot before, and any devices connected always showed up as expected, so the slot should be working.