Everything posted by timethrow

  1. Thanks, but that only works if you put it in for every eventuality in your script, whereas having the plugin send it after a script has completed (whether successful or not) ensures it's always sent. For example, if my script encounters an error that wasn't caught, the notify may never be sent when it's included in the script manually, whereas this way it will always be sent, so long as the underlying plugin works. I do have the notify in some of my user scripts, and I also redirect a lot of my output to log files I store on the array (in case I need it), but this is more about being notified when a script has run, together with its output, similar to the cron service on a vanilla Linux server. It lets you check quickly/easily whether a script ran and whether it produced the expected output.
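Until the plugin supports this, the pattern described above can be approximated with a bash EXIT trap, so the notification fires even when the script dies on an uncaught error. This is a hedged sketch: the notify helper path and flags are unRAID's stock notification agent as I understand it (run it with no arguments on your box to confirm the exact usage), the script name is a placeholder, and the fallback to `echo` is only there so the sketch runs outside unRAID.

```shell
#!/bin/bash
# Wrap a User Script so a notification is ALWAYS sent on exit,
# even after an uncaught error part-way through.
# Assumption: unRAID's stock notify agent lives at this path.
NOTIFY=/usr/local/emhttp/plugins/dynamix/scripts/notify
command -v "$NOTIFY" >/dev/null 2>&1 || NOTIFY=echo   # fallback off-unRAID

LOG=$(mktemp)

send_result() {
  rc=$?                                   # exit code of the script body
  lvl=normal; [ "$rc" -ne 0 ] && lvl=warning
  "$NOTIFY" -e "User Scripts" -s "myscript finished (exit $rc)" \
            -d "$(tail -c 500 "$LOG")" -i "$lvl"
}
trap send_result EXIT                     # fires on success AND on failure

{
  # ... actual script body goes here ...
  echo "doing work"
} >"$LOG" 2>&1
```

The trap runs however the script terminates, which is what makes this closer to "the plugin sends it after completion" than a notify call at the end of the script body.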
  2. Heya, thanks for the update. Since updating, the Docker image usage has gone from around ~1.5GB to 9.8GB and has maxed out my Docker image; I have given it another 5GB and it's still growing. Is this expected, i.e. are there a lot of changes or new files in this update, or are there any new paths we may need to map? This is the tail end of the Docker log:

config/opt/nessus/var/nessus/tmp/fetch_feed_file_tmp_32_2140101596_1745818318
config/opt/nessus/var/nessus/plugins-code.db.16305872481236034445
tar: config/opt/nessus/var/nessus/plugins-code.db.16305872481236034445: Cannot write: No space left on device
config/opt/nessus/var/nessus/plugins-desc.db.163058740051507450
Setting user permissions...
Modifying ID for nobody...
Modifying ID for the users group...
Adding nameservers to /etc/resolv.conf...
Backing up Nessus configuration to /config/nessusbackup.tar
tar: Removing leading `/' from member names
/config/opt/nessus/var/nessus/
/config/opt/nessus/var/nessus/tools/
/config/opt/nessus/var/nessus/tools/bootstrap-from-media.nbin
/config/opt/nessus/var/nessus/tools/nessusd_www_server6.nbin
/config/opt/nessus/var/nessus/tools/tool_dispatch.ntool
/config/opt/nessus/var/nessus/logs/
/config/opt/nessus/var/nessus/nessus-services
/config/opt/nessus/var/nessus/plugins-core.tar.gz
/config/opt/nessus/var/nessus/tenable-plugins-a-20210201.pem
/config/opt/nessus/var/nessus/users/
/config/opt/nessus/var/nessus/nessus_org.pem
/config/opt/nessus/var/nessus/tenable-plugins-b-20210201.pem
/config/opt/nessus/var/nessus/tmp/
/config/opt/nessus/var/nessus/tmp/nessusd
/config/opt/nessus/var/nessus/tmp/nessusd.service
Cleaning up old Nessus installation files
Extracting packaged nessus debian package: Nessus 8.15.1...
mkdir: cannot create directory '/tmp/recover': File exists

Thanks

EDIT: I couldn't get to the bottom of it; no matter how much extra space I gave the Docker image, within a few moments it used it all. So I ended up deleting the image, moving the old appdata aside, and starting afresh. That solved the disk usage issue; however, on this clean install I had a couple of occasions where Nessus would not start. Going to the console and starting the nessusd service manually worked. Will see how it goes over the next few days. Thanks again for updating it.
  3. Is it possible to add an optional feature to send the output of a script (the same as what is shown if you Run the task in the Web UI) as a notification using the unRAID notification system, so we can see when scripts have run and whether any issues were reported? Similar to how in cron you can have it email you on completion (assuming you have the right stuff set up).
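For reference, the cron behaviour mentioned above is driven by the MAILTO variable in the crontab; a minimal fragment looks something like this (the address and script path are placeholders):

```shell
# In the crontab: mail each job's stdout/stderr to this address on completion.
MAILTO=admin@example.com
# Run the backup script nightly at 02:00; any output it produces is emailed.
0 2 * * * /boot/scripts/backup.sh
```

A notification-on-completion feature in the plugin would be the unRAID-native equivalent of this.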
  4. Thanks for having a look. It's currently in PCIEX1_1 and has also been tested in PCIEX1_2. I did have a USB expansion card in that slot before, and any devices connected always showed up as expected, so the slot should be working.
  5. I have purchased a dual NIC and installed it in my unRAID device; however, unRAID does not appear to be detecting it or letting me use it. I have tried reseating the NIC and moving it to another slot on the motherboard, but no luck. As a test I installed it in another PC, which runs Windows, and it was detected fine. This is the NIC I am using: https://www.amazon.co.uk/gp/product/B07Y1P4DGV I can't see anything obvious in the logs, and my onboard NIC shows up fine, but I could well be missing something. Any help/advice is greatly appreciated. Thanks! tower-diagnostics-20210529-2207.zip
  6. Thanks, that's a shame, as they are quite large disks (3x 14TB & 1x 8TB), so it's probably going to take quite a while to rebuild each of them individually. But I will give it a go, thanks. I was kind of hoping there was an easy way to get unRAID to mount them (as part of the array) and continue from there (even if it meant rebuilding parity), since all of the data is there and accessible. Errr, that's probably me just not using the right terminology: I have an LSI card flashed to IT mode and am using a couple of spare slots on that (I had been using it for over a year prior to this as well, without any issue). Sorry for the confusion.
  7. Hi, I had four data disks connected via USB, and I have just shucked them and attached them directly to my RAID controller. All of the disks show up, and in Unassigned Devices they all still have their data, but when adding them back to the array (as they now have different identifiers), they show as "Unmountable: Unsupported partition layout". Is there any way to correct the partitions without having to format the drives and shuffle all the data around between them? I don't have diagnostics right now, but will post them as soon as I can. Thanks.
  8. Thanks for your help, it is most appreciated. Here are a few screenshots. The first shows the query in use (the main part); if you need more, just let me know. The second shows the serial numbers in the Data view, but the preview showing incorrect values (it looks like it's trying to format them as numbers?). The final one is something I stumbled on earlier, going on the theory that it treats the serial numbers as numbers/integers (numeric only) and formats them as such: the first drive in the list is the NVMe one, which has a purely numeric serial number, while the second is a regular HDD with an alphanumeric one. So I tweaked the query (well, just part A) to exclude the NVMe drive, and after that it shows the serial number correctly for all the other drives.
  9. Hi, thanks for the continued development of this dashboard, it's great; it's so nice to have all the info in one place. I have a bit of an odd issue that I can't figure out. I'm using UUD 1.5 (though the issue still appears in 1.6 for me). The Disk Overview section and the SMART Stat table don't show the serial numbers of the drives correctly (with the exception of 2 SSDs); they seem to show as NaN or a single number. I thought it was something in my Telegraf config at first, but noticed that the Temperatures graph shows them correctly. I have checked the overrides for the table and can't see any obvious issues; I even tried setting an override to format this column as a string, but no luck. Oddly, when I go to Query Inspector and then Data, it shows the correct serial number there. I am probably missing something really obvious here, but if anyone has any suggestions, they would be most welcome. Thanks
  10. Just updated to the new container this morning; all looks good, no issues from me. I have run my regular network scan and it completed as expected. Thank you for doing that, most appreciated.
  11. Heya, is there any chance the container can be updated to use a more recent version of Nessus, please? The current version reports vulnerabilities against itself (1 High, 2 Low). I have been manually updating it myself by downloading the .deb file and running "dpkg -i <path_to_deb>", which seems to work, but it would be useful to have it done by default. Thanks.
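For anyone wanting to do the same until the image is refreshed, the manual route above looks roughly like this from the host. This is a sketch, not a tested recipe: the container name `nessus` and the .deb filename are assumptions, and you need to download the current package from Tenable's site first.

```shell
# Copy the downloaded package into the running container (name assumed).
docker cp Nessus-latest-debian10_amd64.deb nessus:/tmp/
# Install it in place, then restart the container so nessusd comes back up.
docker exec nessus dpkg -i /tmp/Nessus-latest-debian10_amd64.deb
docker restart nessus
```

Note this survives only until the container is recreated from the image, which is why having it updated in the image by default would be preferable.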
  12. A few suggestions if I may, from my experience in the Cloud Infrastructure world.

First, review your Docker folder mappings (and to some extent VM shares). Do all your Docker containers need read and write access to non-appdata folders? If one does, is the scope restricted to the directories it actually needs, or have you given it full read/write to /mnt/user or /mnt/user0? For example, I need Sonarr and Radarr to have write access to my TV and Movie shares, so they are restricted to just those; they don't need access to my personal photos or documents. Plex, on the other hand, since I don't use the media deletion feature, doesn't need to change those folders at all, just read their content, so it has read-only permissions in the Docker config. I only have a few containers that need read/write access to the whole server (/mnt/user), and since they are more "administration" containers, I keep them off until I need them; most start up in less than 30 seconds. That way, if for whatever reason a container is compromised, the risk is reduced in most cases. Shares on my VMs are kept to only the required directories and mounted as read-only in the VM.

For Docker containers that use VNC, or for VMs, set a secure password on the VNC component too, to prevent anything on the network from using it without authorization (great if you don't have VLANs).

This may be "overkill" for some users, but have a look at the Nessus or OpenVAS containers and run regular vulnerability scans against your devices / local network. I use the Nessus one and (IMO) it's the easier of the two to set up. The Essentials (free) version is limited to 15 IPs, so I scan my unRAID server, VMs, and a couple of other physical devices. It has SMTP configured, so once a week it sends me an email with a summary of any issues found, categorized by importance as well.
I don't think many people do this, but don't use the GUI mode of unRAID as a day-to-day browser; outside of setup and troubleshooting it (IMO) should not be used. Firefox releases updates quite frequently, and sometimes they are for CVEs that, depending on what sites you visit, *could* leave you unprotected.

On the "Keeping your Server Up-to-Date" part: while updating the unRAID OS is important, don't forget to update your Docker containers and plugins too. I use CA Auto Update for them, set to update daily, overnight. Some of the apps could be patched for security issues, so keeping them up to date is quite useful. One I often find myself forgetting is the NerdPack components; I have a few bits installed (Python3, iotop, etc.) and AFAIK these need to be updated manually. Keeping these up to date is important as well, as they are more likely to have security issues that could be exploited, depending on what you run.

Also on updates: if you have VMs running 24/7, keep them up to date too and get them as hardened as possible, as these can often be used as a way into your server/network. For Debian/Ubuntu servers you can look at Unattended Upgrades; similar alternatives are available for other distros. For Windows, you can configure updates to install automatically and reboot as needed. Hardening the OS is something I would also recommend; for most common Linux distros and Windows there are lots of useful guides online, and DigitalOcean is a great source for the Linux stuff, I have found.

If something is not available as a Docker container or plugin, don't try to run it directly on the unRAID server OS itself (unless it's for something physical, e.g. drivers or sensors); use a VM (with a hardened configuration). Keeping only the bare minimum running directly on unRAID helps to reduce your attack surface.
Also, while not strictly part of security, it goes hand in hand: make sure you have a good backup strategy and that all your important/essential data is backed up. Sometimes stuff happens; no matter how hard you try, new exploits come out or things get missed, and the worst can happen. A good backup strategy helps you recover from that, and the 3-2-1 backup method is the most common one I see used. If something does happen and you need to restore, then where possible, before you start, try to identify what happened. Once you have identified the issue, you can restore from backups to a point in time when there was no (known) issue and start from there, making sure you fix whatever the issue was in your restored server first. I have seen a few cases (at work) where people's servers were compromised (typically with ransomware); they restored from backups but didn't fix the issue (typically a weak password for an admin account and RDP exposed to the Internet), and within a few hours of restoring they were compromised again. The other ideas about using SSH keys, disabling Telnet/FTP, etc. are all good ones, definitely worth doing, and something I would love to see done by default in future releases.

EDIT: One other thing I forgot to mention: set up notifications for your unRAID server. Not all of them will be for security, but some apps, like Fix Common Problems, can alert you to security-related issues, so you get notified of potential problems quicker than it might take you to find them yourself.
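To make the read-only mapping suggestion above concrete, here is a minimal sketch of what it looks like on the docker command line; the image, container name, and paths are illustrative only, and in the unRAID template the same thing is the "Read Only" toggle on the path mapping.

```shell
# Media share mounted read-only (:ro) -- the container can serve the files
# but cannot delete or modify them. Only appdata gets read/write.
docker run -d --name plex \
  -v /mnt/user/Media:/data:ro \
  -v /mnt/user/appdata/plex:/config \
  plexinc/pms-docker
```

If that container were ever compromised, the blast radius is limited to what the `:ro` flag allows, which is the point of scoping mappings this way.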
  13. Same on 6.9.1 as well; it leaves VNC exposed for anyone on the network to access unauthenticated, which could be a security risk for some.
  14. Another +1 for this. If the VM/Docker Container has no paths mapped to the array, and runs on an unassigned device, allowing it to start with the array running and to continue when the array is stopped would be most useful.
  15. Would it be possible to open all external hyperlinks in a new tab? For example, I was installing a Docker container, and in the Overview section of the Add Container page there was a link to the docs on GitHub. I clicked the link and it took me from my unRAID page to the GitHub page; ideally it would open in a new tab so I can stay on my unRAID page and refer to the docs as well. I know I can right-click and open in a new tab, but I think for external links this should probably be the default behavior?
  16. Hi, I have pretty much reached the limits of my current build in terms of the number of drives the case can hold, and I now have several connected via USB (eek!), so I want to expand my setup further. I'm quite limited by my motherboard, but I have found the QNAP TL-D1600S expansion DAS online, and it looks like it would suit my needs perfectly and allow me to expand with room to spare. It looks like a great enclosure and connects simply to a RAID card, which they provide along with cables (SFF-8088). As far as I can tell, it supports JBOD out of the box as well. So I am wondering if this would work with unRAID: if I were to add the card it comes with (it looks like a regular 8-port SAS card) to my existing setup, then attach the DAS with the SFF-8088 cable and add a few drives, it should work? I don't have any other QNAP products, so I am hoping it does not have to be paired with one. Has anyone got any experience with this, or done something similar? Likewise, does anyone have any thoughts on a setup like this? Link to the product: https://www.amazon.co.uk/QNAP-TL-D1600S-Desktop-Storage-Enclosure/dp/B084Q3DGTD Thanks
  17. Thanks for that, it has worked. The Data Drive shows as being Emulated and the contents look to be what I would expect, and the emulated files are accessible. The Data Drive is being rebuilt now. Thank you for your help.
  18. Thanks for that. The original parity was 10TB and it has been replaced with a 14TB drive, which could be one reason for the 30+ hours. Surely, as everything has been copied over and the array has not been started since, there must be some way to "force" it back into that state, as all the data is still there and nothing has changed; it's just the unRAID GUI being restrictive. Are you able to provide some info on the Invalid Slot command, please? I might give that a go if I don't get any other results.
  19. Hi, I have been following the Parity Swap procedure here (https://wiki.unraid.net/The_parity_swap_procedure) and got to step 14: I let it copy the data from the old parity drive to the new one, and that completed fine. I came back to it after it completed and it was prompting for the password to start the array (and subsequently start the rebuild of data from parity). I was about to enter it, but accidentally refreshed the page, and now it's asking me to do the whole parity copy again (both drives show a blue icon and say New Device). How do I get it back to the previous state, where it knows that the new drive has had the parity copied to it (so I don't have to spend another 30+ hours letting it copy over) and that the drive it was swapped with needs rebuilding when the array starts? Thanks.
  20. Hi, I don't often have to shut down my server, but lately I have had to do so for a few reasons, including once to update to 6.8.3. Since then, every time I shut down my server (I do this via the GUI, after shutting down my VMs and Docker containers from the UI, and just let it do its thing), it always comes back up and says that an unclean shutdown was detected. The first few times I just let it run the parity check, thinking something may have happened, but no parity errors were found. However, it keeps happening; I did it again the other day and watched it shut down safely, and it still insists it was not shut down cleanly (this time I have cancelled the parity check). I need to replace one of the disks in my array, so I need to shut it down, and the idea is to let it rebuild once the new disk is in; but given that it thinks there are errors with the shutdown, and potentially that the parity is not correct, I don't want to do this just yet. Attached are my diagnostics (Disk 3 is the one to be replaced, so I am aware of the SMART issues on it). Also worth mentioning (though it could be unrelated): after the last 2 shutdowns, I have seen random Docker containers disappear and need to be manually installed again from the (CA) Apps tab. Any help would be greatly appreciated. Thanks! tower-diagnostics-20200522-1332.zip
  21. In Fix Common Problems, I keep getting the following warnings. I tried the fixes available for both, but they continue to show. Does anyone else get this?
  22. Upgraded from 6.8.0 to 6.8.2 and the upgrade largely went fine, although it reported an "Unclean Shutdown" even though I just selected the Reboot option, as I did with other releases? But besides that, so far, so good.
  23. I have been using this for a few weeks, but have a couple of issues with it. 1) I don't seem to be able to CTRL + SHIFT select items; for example, if I have 100 items, I have to CTRL and click each item. 2) The web UI is very small and does not resize (even with auto scale), meaning I only see a few rows of each section at a time. Is there any way to change either of these?