Leaderboard

Popular Content

Showing content with the highest reputation on 03/28/21 in Posts

  1. A simple plugin that indexes most of the pages within the GUI and allows you to search for them directly from the menu bar without having to guess whether they are within Settings - Utilities, Tools, System Profile, etc. Note that if you run Custom Tabs via the plugin, have a few of them, and your browser width is at the 1280px minimum, you may see some display aberration where the buttons (switch language, log out, etc.) and the search bar wind up below where they are supposed to be. (Don't we all have 1920px monitors minimum now?) It does not index any pages that are already available directly from the Task Bar. It will automatically adjust itself as plugins are installed or removed. After installing it via Apps, you will have to either reload the page or navigate to a different page in the GUI for it to appear on the title bar. It does have a dependency on Community Applications being installed, but I don't think that would be a problem for anyone. Buy me a beer :)
    2 points
  2. Nope. There is apparently the option to flash the drives with a different firmware so that they rip faster, but even without this modification there are no problems. For ripping I use MakeMKV and 5.25-inch Blu-ray drives exclusively.
    2 points
  3. You don't need to pass in the device to the container. It really is as simple as loading the plugin to get the drivers going and then telling Frigate to use it:
```
detectors:
  coral_pci:
    type: edgetpu
    device: pci
```
    2 points
  4. Hello Unraid Community! It has come to our attention that in recent days we've seen a significant uptick in the number of Unraid servers being compromised due to poor security practices. The purpose of this post is to help our community verify that their servers are secure and to provide helpful best-practice recommendations for ensuring your system doesn't become another statistic. Please review the recommendations below on your server(s) to ensure they are safe.

Set a strong root password

Similar to many routers, Unraid systems do not have a password set by default. This is to ensure you can quickly and easily access the management console immediately after initial installation. However, this doesn't mean you shouldn't set one. Doing so is simple: just navigate to the Users tab, click on root, and set a password. From then on, you will be required to authenticate any time you attempt to log in to the webGui. In addition, there is a plugin available in Community Apps called Dynamix Password Validator. This plugin will provide guidance on how strong a password you're creating based on complexity rules (how many capital vs. lowercase letters, numbers, symbols, and the overall password length are used to judge this). Consider installing this for extra guidance on password strength.

Review port mappings on your router

Forwarding ports to your server is required for specific services that you want to be Internet-accessible, such as Plex, FTP servers, game servers, VoIP servers, etc. But forwarding the wrong ports can expose your server to significant security risk. Here are just a few ports you should be extra careful with when forwarding (a quick self-audit sketch follows at the end of this post):

Port 80: Used to access the webGui without SSL (unless you've rebound access to another port on the Management Access settings page). DO NOT forward port 80. Forwarding this port will let you access the webGui remotely, but without SSL securing the connection, devices between your browser and the server could "sniff" the packets to see what you're doing. If you want to make the webGui remotely accessible, install the Unraid.net plugin to enable My Servers on your system, which provides a secure remote access solution that utilizes SSL to ensure your connection is fully encrypted.

Port 443: Used to access the webGui with SSL. This is only better than port 80 if you have a root password set. If no root password is set and you forward this port, unauthorized users can connect to your webGui and have full access to your server. In addition, if you forward this port without using the Unraid.net plugin and My Servers, attempts to connect to the webGui through a browser will present a security warning due to the lack of an SSL certificate. Consider making life easier for yourself and utilize Unraid.net with My Servers to enable simple, safe, and secure remote access to your Unraid systems. NOTE: When setting up Remote Access in My Servers, we highly recommend you choose a random port over 1000 rather than the default of 443.

Port 445: Used for SMB (shares). If you forward this port to your server, any public shares can be connected to by any user over the Internet. Generally speaking, it is never advisable to expose SMB shares directly over the Internet. If you need the ability to access your shares remotely, we suggest utilizing a WireGuard VPN to create a secure tunnel between your device and the server. In addition, if the flash device itself is exported using SMB and this port is forwarded, its contents can easily be deleted and your paid key could easily be stolen. Just don't do this.

Ports 111/2049: Used for NFS (shares). While NFS is disabled by default, if you are making use of this protocol, just make sure you aren't forwarding these ports through your router. As with SMB, utilize WireGuard to create a secure tunnel from any remote devices that need to connect to the server over NFS.

Ports 22/23: Used by SSH and Telnet for console access. Especially dangerous for users who don't have a root password set. As with SMB, we don't recommend forwarding these ports at all; rather, we suggest users leverage a WireGuard VPN connection for connecting with either of these protocols.

Ports in the 57xx range: These ports are generally used by VMs for VNC access. While you can forward these ports to enable remote VNC access to your VMs, the better and easier way is to install the Unraid.net plugin and enable My Servers. This ensures those connections are secured via SSL and does not require an individual port to be forwarded for each VM.

Generally speaking, you really shouldn't need to forward many ports to your server. If you see a forwarding rule you don't understand, consider removing it, see if anyone complains, and if so, you can always put it back.

Never ever ever put your server in the DMZ

No matter how locked down you think you have your server, it is never advisable to place it in the DMZ on your network. By doing so, you are essentially forwarding every port on your public IP address to your server directly, allowing all locally accessible services to be remotely accessible as well. Regardless of how "locked down" you think you actually have the server, placing it in the DMZ exposes it to unnecessary risks. Never ever do this.

Consider setting shares to private with users and passwords

The convenience of password-less share access is pretty great. We know that, and it's why we don't require you to set passwords for your shares. However, doing so poses a security risk to your data, even if you don't forward any ports to your server and have a strong root password. If another device on your network (a PC, Mac, phone, tablet, IoT device, etc.) were to have its security breached, it could be used to make a local connection to your server's shares. By default, shares are set to be publicly readable/writeable, which means those rogue devices can be used to steal, delete, or encrypt the data within them. Malicious users could also use this method to put data on your server that you don't want there. For these reasons, if you are going to create public shares, we highly recommend setting access to read-only; only authorized users with a strong password should be able to write data to your shares.

Don't expose the Flash share, and if you do, make it private

The flash device itself can be exposed over SMB. This is convenient if you need to make advanced changes to your system, such as modifying the go file in the config directory. However, the flash device contains the files needed to boot Unraid as well as your configuration data (disk assignments, shares, etc.). Exposing this share publicly can be extremely dangerous, so we advise against doing so unless you absolutely have to; and if you do, do it privately, requiring a username and password to see and modify the contents.

Keep your server up-to-date

Regardless of what other measures you take, keeping your server current with the latest release(s) is vital to ensuring security. There are constant security notices (CVEs) published for the various components used in Unraid OS. We at Lime Technology do our best to ensure all vulnerabilities are addressed in a timely manner with software updates. However, these updates are useless to you if you don't apply them in a timely manner as well. Keeping your OS up-to-date is easy: just navigate to Tools > Update OS to check for and apply any updates. You can configure notifications to prompt you when a new update is available from the Settings > Notifications page.

More best-practice recommendations

Set up and use WireGuard, OpenVPN, or Nginx Proxy Manager for secure remote access to your shares. For WireGuard setup, see this handy getting-started guide.
Set up 2FA on your Unraid forum account.
Set up a remote syslog server.
Install the Fix Common Problems plugin. Installing this plugin will alert you to multiple failed login attempts and much, much more.
Change your modem password to something other than the default.
Consider installing ClamAV.

In addition to all of the above recommendations, we've asked SpaceInvaderOne to work up a video with even more detailed best practices related to Unraid security. We'll post a link as soon as the video is up so you can check out what other things you can do to improve your system security. It is of vital importance that all users review these recommendations on their systems as soon as possible to ensure you are doing all that is necessary to protect your data. We at Lime Technology are committed to keeping Unraid a safe and secure platform for all of your personal digital content, but we can only go so far in this effort. It is ultimately up to you, the user, to ensure your network and the devices on it adhere to security best practices.
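A quick way to act on the port-forwarding advice above is to scan your own public IP from outside your LAN and confirm that none of the risky ports answer. A minimal sketch, assuming nmap is available and 203.0.113.10 stands in for your WAN address:
```
# Hypothetical self-audit: probe the ports called out above against your own
# WAN IP (replace 203.0.113.10), running from a host outside your LAN.
nmap -Pn -p 22,23,80,111,443,445,2049,5700-5799 203.0.113.10
```
Any port reported open that you can't explain maps back to a forwarding rule worth removing.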
    1 point
  5. Hi All, First time posting and getting ready to order components for my first UNRAID/Plex Media Server build. Below are the components I am leaning towards at the moment, along with the main use cases for this build to help with recommendations. I sincerely appreciate any recommendations or tips folks have.

Budget -

Uses:

4K Streaming - I share my Plex with friends and family, but I cap all external streams to the 1080p library and keep a separate 4K library available only on local devices, which are all NVIDIA Shield TV Pros I already own. I currently have a single 14TB external drive most media is hosted on, and I want the security of parity drives since I really don't want to download everything again (nothing irreplaceable). On the 14TB drive there are about 1,500 movies and 35 TV series in 720p/1080p/4K, plus another 4TB of 4K movies on other hard drives. I would like to start sharing the 4K files with those friends who have 4K TVs; I'd hope my gigabit fiber internet is enough, I'm just afraid to try from the SHIELD TV. I keep both 1080p and 4K copies of all the 4K content, so anything not direct-playing 4K starts from a 720p/1080p version that is easier to convert if needed.

Mobile Plex Downloads - I have Plex Pass, so I would like to be able to download movies from my library for on-the-go (iPhone/iPad) use, which the Shield TV Pro 16GB I host my PMS on today can't do.

Windows 10 VM - No immediate plans to game; just an extra computer, and something I could potentially set to download torrents for transfer to the main share.

NAS - Extra local storage for my GF and myself to save files. Important stuff we back up on other external hard drives.

Hardware:

Case: Fractal Design Meshify 2 XL

CPU: Intel 11600K - no OC, air cooling with a Noctua Chromax NH-D15. Want faster base clocks vs. the 11600 non-K.

MOBO: TBD - Leaning towards a Z590 board for no particular reason other than it's the latest and enables PCIe 4.0 with Intel. Priorities here are HDMI (may be using the iGPU to start), 2.5Gb/s LAN or more, 4x M.2 slots, 1+ USB 2.0 port in the back I/O for the UNRAID flash drive, and either USB-C or Thunderbolt just so I have newer-generation connectivity possible without an add-on card.

RAM: 32GB G.Skill DDR4 3600MHz (16GBx2) CAS 16

SSD: TBD - For cache drives, should it be one for read and a second for write in a perfect world? The 3rd could be for Plex metadata, and the 4th I could assign strictly to the VM? Can Plex metadata live on the same M.2 drive as the VM, or should they be separate? I figured since most Z590 boards have 2 Gen 4 M.2 slots, I could use the Gen 4 slots for Plex metadata and a VM, and the Gen 3 slots for cache.

HDD: Starting with 6x16TB Seagate EXOS X16, with 2 in parity, connected to an LSI 9201-16i HBA card I am ordering, to give me 64TB to start. I realize they will be loud when active, but I care more about them lasting in a 24/7 environment, and supposedly they are quiet when idle. I plan to add another 10 16TB drives over the next 1-2 years for a total of 192TB when all is said and done.

Optical Drive: LG WH16NS60 flashed to older firmware to be UHD-friendly, housed in an external enclosure connected via USB 3.0, since I don't want to give up potential HDD slots in the case for an optical drive.

PSU: Corsair HX850 850W, Platinum rated. I would like the 1,000W version so that even if I added a 3070/3080 later and got big into games I wouldn't have a problem running 16 7200rpm SATA drives and everything else, but good luck finding inventory here in the USA without some ridiculous markup from a reseller.

GPU: TBD - This is another area where I'm really stuck, which has me thinking I would try to start with just the iGPU on the new 11th-gen chips. Current GPU supplies are awful, and while I thought a P2200 would be perfect since the main goal today is native unlocked Plex streams (not gaming), I am not sure that is still where I should be looking in 2021, as they are still $400+. I've read about ways to unlock NVIDIA cards, so I would like something that can handle at least 2-3 simultaneous 4K streams and another 3-4 1080p streams, which may involve some transcoding due to format incompatibilities. The odds are slim I would have three 4K and four 1080p streams going at the same time, but it only takes one time being annoyed that, even after spending all this money on other components, I'm trying to watch a movie and it keeps buffering.

As you can probably tell, I'm definitely not going for a budget build. The last time I built a PC of any kind was 15 years ago, and since I don't plan to upgrade regularly outside of adding hard drives, I'm comfortable spending a bit more than is probably necessary to get the bare minimum requirements met. It's always fun to geek out and splurge a little on something I know will be heavily used and that I can have a lot of fun putting together and setting up.
    1 point
  6. You've got some big files that you're creating yourself:
```
-rw-rw-rw- 1 root root 379021 Mar 28 14:18 internet_uptime.log
-rw-rw-rw- 1 root root 180933 Mar 28 13:48 plex_empty_trash.log
```
and your script creating the latter is also logging what it's doing every hour. You've also got atop installed via NerdPack, which usually isn't recommended, as it can be a pig on steroids that just had a dinosaur for lunch. There are also some smbd errors which I'm not quite sure what to make of.
    1 point
  7. You have a docker container that is referencing /mnt/user/disks. You should be able to see that on the Docker page by hitting the caret to expand any of the path mappings.
    1 point
  8. Several of us are using this: and using the hpssa option to enable it. I use it on a P420i that does not have native HBA mode. If yours supports HBA mode, then it should be OK and you'll save yourself a PCI slot for other things.
    1 point
  9. Unsure - it's been a long time since before I bought my PlexPass. That's a good question to ask on the Plex support forum, as it is an issue with the Plex application itself.
    1 point
  10. Changed the cable for disk 7 and everything is back to normal. Thank you so much for the help.
    1 point
  11.
```
Mar 28 13:17:33 Monterrey root: cat: /boot/config/plugins/ca.mover.tuning/ca.mover.tuning.cfg: No such file or directory
```
Make a change to the settings for Mover Tuning (and apply), then try again. After that, either uninstall the Mover Tuning plugin or post in its support thread.
    1 point
  12. After some digging, I found that the problem is due to Deluge setting a cookie whose value is longer (1819 bytes) than noVNC can cope with, hence the 403 Request Entity Too Large. Quite why it needs that much to list which columns should be shown, and in what order, I don't know! I don't suppose this is that easy to fix — I assume the limit is in upstream code somewhere? As a workaround I've changed the Deluge Docker container to provide http://«servername»:8112 as its web interface, so that the browser sees it as being a separate webserver from http://«ip», which is what the VNC Remote button uses, but if any Docker container that's been accessed via the server IP and a port can break browser VNC, it's not good…
    1 point
  13. Hey there, The error shows you left the address as the default (unraid.local), you'll need to edit the container and give it the right address. Visit Unraid WebUI > click Docker > Click on the logo next to PhoenixStats > click Edit > Change Miner Host from unraid.local to your Unraid server's local IP > click Apply Then visit the PhoenixStats UI again and it should be working.
    1 point
  14. That is an idea a lot of people would probably like... but as of today it is not possible, or rather the possible implementations are simply too underpowered and too expensive. I think we have to wait until Intel brings its discrete GPUs to market and hope that, like the iGPUs, they support GVT-g. Both of the other vendors would then finally be pushed to make their solutions freely available for consumer hardware, as has long been demanded.
    1 point
  15. Disks are not mounting because of read errors:
```
Mar 27 15:05:58 Media kernel: md: disk13 read error, sector=8590653888
Mar 27 15:05:58 Media kernel: md: disk13 read error, sector=8590653896
Mar 27 15:05:58 Media kernel: md: disk13 read error, sector=8590653904
...
Mar 27 15:05:58 Media kernel: md: disk14 read error, sector=8590548952
Mar 27 15:05:58 Media kernel: md: disk14 read error, sector=8590548960
Mar 27 15:05:58 Media kernel: md: disk14 read error, sector=8590548968
...
Mar 27 15:05:59 Media kernel: md: disk16 read error, sector=8592953024
Mar 27 15:05:59 Media kernel: md: disk16 read error, sector=8592953032
Mar 27 15:05:59 Media kernel: md: disk16 read error, sector=8592953040
```
etc. Check what those disks have in common (controller, expander, etc.); there's likely a problem with one of those.
    1 point
  16. Be my guest. Use/modify it as you wish. It is nothing more than the standard UD Backup script, modified to back up some additional shares. This is fine. It will take a while, but just plug it in and let it run. I initially did about 8TB; I can't remember exactly how long it took, but it was not an unreasonable amount of time.
    1 point
  17. If you want to have 3 domains all point to your public WAN IP, it is very easy to do. The Cloudflare DDNS container would keep your main domain (yourdomain1.com) current by means of a subdomain. For example, you would set the container to update the subdomain dynamic.yourdomain1.com to be your public WAN IP, so dynamic.yourdomain1.com will always point to your public WAN IP. Then, for any other domains or subdomains that you want to point to your public WAN IP, you just make a CNAME in your Cloudflare account which points to dynamic.yourdomain1.com. So, for example, you could point a subdomain such as www.yourdomain2.com to dynamic.yourdomain1.com. However, many sites like to use a "naked" domain (a regular URL just without the preceding www), so you can, for example, type google.com and go to Google without having to type www.google.com. This "naked" domain is the root of the domain, and the DNS spec expects the root to point to an IP with an A record. However, Cloudflare allows the use of a CNAME at the root (without violating the DNS spec) through something called CNAME flattening, which lets us use a CNAME at the root but still follow the RFC and return an IP address for any query for the root record. Therefore you can point the "naked" root domain to a CNAME, e.g. yourdomain2.com pointing by CNAME to dynamic.yourdomain1.com. So with your 3 domains, you can point any subdomain or root domain to dynamic.yourdomain1.com (kept updated by the Cloudflare DDNS Docker container on Unraid). But you don't need to use Cloudflare DDNS if you don't want to. Instead of the Cloudflare DDNS container you could use the DuckDNS container; then, for the subdomain dynamic.yourdomain1.com, instead of having its A record updated by the Cloudflare container, you would use a CNAME for dynamic.yourdomain1.com pointing to yourdomain.duckdns.org (or whatever your DuckDNS name was). So basically it's just a chain of records that eventually resolve to an IP, which is your public WAN IP. Also, the SWAG reverse proxy allows you to make Let's Encrypt certs for not just one domain but multiple domains too. I hope that makes sense.
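To sanity-check the chain once the records are in place, you can query each name and confirm everything collapses to the same WAN IP. A minimal sketch, assuming the hypothetical names above and that dig is available:
```
# Each query should end at your public WAN IP.
dig +short dynamic.yourdomain1.com   # A record, kept current by the DDNS container
dig +short www.yourdomain2.com       # CNAME -> dynamic.yourdomain1.com
dig +short yourdomain2.com           # flattened root CNAME still answers with an A record
```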
    1 point
  18. Oh yeah, I see that now. OMG, I must need sleep or something. So I fixed that, deleted the cache directory, and rebooted the server. So far everything seems to be working. My guess is that one of the docker containers created that directory and a few started using it, causing the configs to be wiped on reboot. Thank you all for your help.
    1 point
  19. Settings -> Management Access -> Use SSL/TLS -> No
    1 point
  20. diagnostics-20210327-2247.zip Diagnostics attached. I really hope it's not a bad card. I've only had it for 2 months. It worked fine up until last weekend, but I guess it could happen. I'm 99.99% sure I updated both the BIOS and BMC/IPMI firmware when I built the server. I will have to check the BIOS on the next reboot though, if it locks up again. I have a feeling (mainly hope) pulling the GT710, and running the Unraid video out through the P2200 fixed it though. I'm running a Supermicro X9DRL-iF, which only has an open-ended 8x PCI-E 3.0 slot (top) that I have the P2200 in. The other two are closed 8x PCI-E 3.0 slots, and I was using the bottom for the GT710 just to display Unraid video out. Now video out for Unraid is going through the top slot. If this does solve it, hopefully I can put the GT710 back in, stub it at boot and use it to pass through to something else.
    1 point
  21. Awesome, haha, I got this card randomly as a second-hand gift. I actually have 2 in the system right now, but Unraid is using one card as its primary right now, instead of the onboard graphics. A separate issue I'll deal with later. One thing at a time. It's a BIOS thing I believe, but I'm not too familiar with iDRAC in a Dell T420. I'm not physically at the server right now, doing everything remotely, but I'll be at it next week. Seeing how the OP listed the W5100 as compatible, makes me think this isn't a real FirePro, yet the packaging it came in seemed pretty legitimate. I did see the discrepancies in the lspci, but I never have used a compute card before, so I believed this to be normal. A beta build with the enterprise drivers would be awesome, and once I get back to the server, I'll have a gander at the card and packaging, and send along a few pictures. Till then, I can't really do too much, except try another build. Thanks for all the help so far, makes learning the innards of Unraid a little easier Cheers
    1 point
  22. @drpeppershaker @ainuke Only devices with IPs in the defined LAN_NETWORK container variable are allowed access. You can add additional network ranges to that variable.
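For reference, a sketch of what the variable might look like with a second range added; the CIDR values are placeholders, and the comma-separated format is an assumption based on common container templates:
```
# Hypothetical LAN_NETWORK value allowing two local ranges:
LAN_NETWORK=192.168.1.0/24,10.10.0.0/24
```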
    1 point
  23.
```
# ls -lh /boot/config/modprobe.d/
total 0
-rw------- 1 root root 0 Mar 27 13:00 amdgpu.conf
```
The post-reboot diagnostics are attached: pronto-server-diagnostics-20210327-1944.zip. Cheers
    1 point
  24. I lowered the font size, and made the box smaller.
    1 point
  25. Is there any way to adjust the length of the search box? It totally blows out my laptop screen; meaning, if there were a way to limit the size of the box, it would fit my screen. I did remove a couple of custom tabs, which fixed my issue, but... it would be nice to be able to add or remove a few entries from the search box just in case we'd like to.
    1 point
  26. I would start with Ich777's ARK container; then, when you catch the addiction, look at my instructions for spinning up a cluster.
    1 point
  27. Even if you pull all the drives, the USB stick will still work afterwards. The only thing that can happen is that a container or a VM lives on a share on one of the HDDs, and that will then obviously no longer run. But the HDDs have absolutely nothing to do with hardware such as a card.
    1 point
  28. I assume a disk is to be removed from the array? You can find the official Limetech guide for that here: Shrink array > If need be, just translate it with Google Translate... or better, with DeepL.
    1 point
  29. I got a few reports about issues with P2000 cards; after some troubleshooting it turned out that they were defective. Seems like for some cards the lifespan is over... But that's only a guess on my part and doesn't mean it's the case for your card. Can you post your diagnostics (Tools -> Diagnostics -> Download -> drop the downloaded file here in the text box)? Please also make sure that you are on the latest BIOS version.
    1 point
  30. A better wildcard would be - Unraid?Downloads.
    1 point
  31. Another theory... When the announcement was made that LimeTech was providing tools to allow users to easily access their servers directly from the Internet, hackers suddenly decided that this might well be a low-hanging piece of fruit. These hackers (while they might be classed as 'loners') do 'network' with each other to exchange information. And I suspect that there will be a lot of Unraid users trying to see what can be done without having sufficient knowledge of the required security, monitoring techniques, and tools to keep the bad guys at bay. I have spent enough time attempting to help people on this forum to know folks don't like to read, understand, and follow instructions. When their Unraid server is securely behind a router and its firewall, that usually only results in something not working. When you move beyond that firewall and expose the server to the Internet, you can't afford to make a mistake! There will be somebody out there waiting to take advantage of you, and often they will find you within a couple of hours.
    1 point
  32. A few suggestions if I may, from my experience in the cloud infrastructure world.

First, review Docker folder mappings (and to some extent VM shares). Do all your Docker containers need read and write access to non-appdata folders? If one does, is the scope of the directories restricted to what is needed, or have you given it full read/write to /mnt/user or /mnt/user0? For example, I need Sonarr and Radarr to have write access to my TV and Movie shares, so they are restricted to just those; they don't need access to my personal photos, documents, etc. Whereas for Plex, since I don't use the media deletion feature, I don't need Plex to do anything to those folders except read the content, so it has read-only permissions in the Docker config (see the sketch after this post). Additionally, I only have a few containers that need read/write access to the whole server (/mnt/user); since they are more "administration" containers, I keep them off until I need them, and most start up in less than 30 seconds. That way, if for whatever reason a container is compromised, the risk is reduced in most cases. Shares on my VMs are kept to only the required directories and mounted as read-only in the VM.

For Docker containers that use VNC, or for VMs, set a secure password for the VNC component too, to prevent something on the network from using it without authorization (great if you don't have VLANs, etc.).

This may be "overkill" for some users, but have a look at the Nessus or OpenVAS containers and run regular vulnerability scans against your devices / local network. I use the Nessus one and (IMO) it's the easier of the two to set up. The Essentials (free) version is limited to 15 IPs, so I scan my unRAID server, VMs, and a couple of other physical devices, and with SMTP configured it sends me an email once a week with a summary of any issues found, categorized by importance.

I don't think many people do this, but don't use the GUI mode of unRAID as a day-to-day browser; outside of setup and troubleshooting it (IMO) should not be used. Firefox releases updates quite frequently, and sometimes they are for CVEs that, depending on what sites you visit, *could* leave you unprotected.

On the "keeping your server up-to-date" part: while updating the unRAID OS is important, don't forget to update your Docker containers and plugins. I use CA Auto Update for them, set to update daily, overnight. Some of the apps could be patched for security issues, so keeping them up-to-date is quite useful. Also, one that I often find myself forgetting is the NerdPack components; I have a few bits installed (Python3, iotop, etc.), and AFAIK these need to be updated manually. Keeping these up-to-date is important as well, as they are more likely to have security issues that could be exploited, depending on what you run. Also on the updates: if you have VMs running 24/7, keep them up-to-date too and try to get them as hardened as possible, as they can often be used as a way into your server/network. For Linux Debian/Ubuntu servers, you can look at Unattended Upgrades; similar alternatives are available for other distros. For Windows, you can configure updates to install automatically and reboot as needed. Hardening the OS is also something I would recommend; for most common Linux distros and Windows there are lots of useful guides online, and I have found DigitalOcean to be a great source for Linux material.

If something is not available as a Docker container or plugin, don't try to run it directly on the unRAID server OS itself (unless it's for something physical, e.g. drivers or sensors); use a VM (with a hardened configuration). Keeping only the bare minimum running directly on unRAID helps to reduce your attack surface.

Also, while strictly not part of security, it goes hand in hand: make sure you have a good backup strategy and that all your (important/essential) data is backed up. Sometimes stuff happens, and no matter how hard you try, new exploits come out or things get missed and the worst can happen. Having a good backup strategy can help you recover from that; the 3-2-1 backup method is the most common one I see used. If something does happen and you need to restore, where possible try to identify what happened before you start the restore. Once you have identified the issue, you can restore from backups to a point in time where there was no (known) issue and start from there, making sure you fix whatever the issue was first in your restored server. I have seen a few cases (at work) where people's servers were compromised (typically with ransomware); they restored from backups but didn't fix the issue (typically a weak password for an admin account and RDP exposed to the Internet), and within a few hours of restoring, they were compromised again.

Other ideas about using SSH keys, disabling Telnet/FTP, etc. are all good ones, definitely something to do, and something I would love to see done by default in future releases.

EDIT: One other thing I forgot to mention: set up notifications for your unRAID server. Not all of them will be for security, but some apps, like Fix Common Problems, can alert you to security-related issues, and you can get notified of potential issues quicker than it might take you to find/discover them yourself.
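A minimal sketch of the read-only mapping idea above, assuming Plex's plexinc/pms-docker image and placeholder share paths (on Unraid you would set the same thing per path with the container template's read-only toggle):
```
# Hypothetical mappings: config stays read/write, media libraries are :ro.
docker run -d --name=plex \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/Movies:/data/movies:ro \
  -v /mnt/user/TV:/data/tv:ro \
  plexinc/pms-docker
```
If a compromised container can only read the media shares, the worst case is data theft rather than deletion or encryption.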
    1 point
  33. OMG... it would seem I have been running for several years plugged into the surge-only socket. I only came to this post because I just had a power cut and got an immediate parity check, which didn't feel right.
    1 point
  34. It's found under Users, in each user's password section. Good question.
    1 point
  35. Unfortunately I tested that and it won't work -- Splunk is very particular about the permissions on a bunch of files, and I was unable to get them working in a volume. I documented some of this in the readme on GitHub. You'll get errors like KVStore failing to start, modular inputs failing to execute, some search commands not working -- a whole lot of pain. I think the solution is to prune your volumes after upgrades, as I previously described. Perhaps unRAID could add a feature to do this automatically, or add a GUI button for it. I will note this in the readme on the next release.
    1 point
  36. Attached is a new "age_mover" which has echo/log statements in it to help troubleshoot. Please replace age_mover with this file if you have trouble. First make a backup of the age_mover file:
```
cd /usr/local/emhttp/plugins/ca.mover.tuning/
cp age_mover age_mover_original
```
Then copy the new file attached to this post to the same location, /usr/local/emhttp/plugins/ca.mover.tuning/age_mover.
    1 point
  37. Nice tips. I just wish it were easier to set up key-file authentication and disable password authentication for SSH. Just placing your pubkey in the UI and setting a checkbox to disable password auth would be nice. I currently have it set up like ken-ji describes here, and then I edited PasswordAuthentication to "no". Also, think about a secure-by-default approach for future updates. Why not force the user to set a secure password on first load? Why even make shares public by default? Why allow "guest" to access SMB shares by default? Why create a share for the flash in the first place? I get that some of those things make it more convenient, but IMO convenience should not compromise security.
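For reference, a sketch of the sshd_config directives the post is describing; these are standard OpenSSH options (where the file lives and how it persists across reboots on Unraid is per the linked ken-ji post, not shown here):
```
# Hedged example: standard OpenSSH settings to allow key logins only.
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
```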
    1 point
  38. This is not the XML; it's the HW passthrough of the eth. I didn't do any manual editing, I just ticked [X] on the 4 LAN ports in the UI (nothing virtual, MAC etc.; a clean passthrough of the HW). SpaceInvaderOne makes a great video on how to get this working. (Sorry, the whole family has been down with Corona, so that's why I haven't replied before now 🙂)
    1 point
  39. This docker has a little bug: the /root/.ssh/ dir should be mounted and persistent, as after a container reinstall the known_hosts and private keys are lost. As a workaround I am using:
```
ssh -i /config/id_rsa -o UserKnownHostsFile=/config/known_hosts -p 222 [email protected]
```
and these settings in rsnapshot.conf:
```
ssh_args          -i /config/id_rsa -o UserKnownHostsFile=/config/known_hosts -p 222
rsync_short_args  -az

# My mysql dump on my debian machine:
backup_exec  ssh -i /config/id_rsa -o UserKnownHostsFile=/config/known_hosts -p 222 [email protected] "mysqldump --defaults-file=/etc/mysql/debian.cnf --all-databases | bzip2 > /root/mysqldump.sql.bz2"
# Then backup whole /root folder
backup  [email protected]:/root  your.server.ip/
# and etc, www, vmail folders - it is an ISPCONFIG machine
backup  [email protected]:/etc/  your.server.ip/
backup  [email protected]:/var/www/clients  your.server.ip/
backup  [email protected]:/var/vmail  your.server.ip/
```
Of course, to set up the key-based SSH login, do:
```
# Only once, to generate the key
ssh-keygen
# do this for each server where you want to install your key
ssh-copy-id -i /config/id_rsa -p 222 [email protected]
```
To make this post complete, it is also required to modify the crontabs:
```
root@nas:/mnt/user/appdata/rsnapshot-backup/crontabs# cat root
# do daily/weekly/monthly maintenance
# min hour day month weekday command
*/15 * * * * run-parts /etc/periodic/15min
0 * * * * run-parts /etc/periodic/hourly
0 2 * * * run-parts /etc/periodic/daily
0 3 * * 6 run-parts /etc/periodic/weekly
0 5 1 * * run-parts /etc/periodic/monthly
# rsnapshot examples
0 0 * * * rsnapshot daily
0 1 * * 1 rsnapshot weekly
0 2 1 * * rsnapshot monthly
```
and in rsnapshot.conf:
```
#########################################
# BACKUP LEVELS / INTERVALS             #
# Must be unique and in ascending order #
# e.g. alpha, beta, gamma, etc.         #
#########################################
retain  daily    7
retain  weekly   4
retain  monthly  3
```
    1 point
  40. Thanks for the considered response and for being open about it. Happy to take anything offline where helpful. Rate-limiting is a great tool to employ as part of a wider security-hardening toolkit. Unfortunately, with botnets the above will do little to prevent a brute-force of root on a specific server: all those hundreds of thousands, if not millions, of compromised IoT devices will do the attackers' business for them from individual IP addresses. Fail2Ban suffers similarly. This is why layers are so important. Let me be clear, I am a paying customer and enthusiast of unraid. I even quite like the features this offers in principle. However, my fear is that savvy users who require remote access have already arranged it with something like a VPN, WireGuard, or the like. We might be sweeping the rest along here into remote access with root and "rootpassword". Additionally, I would suggest some security auditing of the code/API if you haven't already done so; better to pay someone to find potential routes to compromise than to be held over a barrel. A bug bounty could be beneficial: a free license/$500/xyz for each vulnerability rated x or above. I will keep an eye on this to see how it develops. Thanks for engaging.
    1 point
  41. Things like movies, photos, videos, documents, etc. (the user shares) are what I care about having cached (via disk caching). It is browsing those folders that can cause disk spin up I wish to avoid. appdata can have hundreds of thousands of files (Plex database) and isos, domains, system and backup folders are likely to have very few so I don't really need to worry about caching them much. Besides, other than backup folders, they are all on the cache drive which is an SSD and I don't have to worry about spin up.
    1 point
  42. I also think we should unpin it too, so it's not drawing so much attention. At least until it's resolved to work again.
    1 point
  43. If I understand you correctly, cache-dirs should no longer be used? Is there any use case left? If not, I think @Squid should probably do his thing with CA and FCP. Are you officially declaring cache-dirs dead for now?
    1 point
  44. Hey yo all, I'm the last active maintainer of the cache-dirs plugin script (I guess since Joe wrote it). There have been 'complaints' for a long time here that cache-dirs spikes the CPU, and I finally had a look at it. As Hoopster (partially) writes above, a solution to the CPU spikes is to turn off the /mnt/user scanning. There were some hints in the cache-dirs log as to what part of cache-dirs caused the CPU issue, and as I'm the writer of those logs, it's easier for me to understand them. Here's a bit of info about the logs. This is a sample with user dirs being scanned (/mnt/user):
```
2021.01.31 03:01:24 Executed find in (76s) (disks 2s + user 74s) 76.41s, wavg=66.78s NonIdleTooSlow__ depth 9999
2021.01.31 03:22:42 Executed find in (75s) (disks 2s + user 72s) 75.25s, wavg=62.39s NonIdleTooSlow__ depth 9999 slept 10s Disks idle before/after scan 6s/26s Scan completed/timedOut counter cnt=4/0/2 mode=4 scan_tmo=150s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU=58%, filecount[9999]=1048758
2021.01.31 03:23:58 Executed find in (75s) (disks 2s + user 72s) 75.36s, wavg=63.44s Idle____________ depth 9999 slept 10s Disks idle before/after scan 26s/101s Scan completed/timedOut counter cnt=5/1/0 mode=4 scan_tmo=150s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU=58%, filecount[9999]=1048758
2021.01.31 03:25:23 Executed find in (74s) (disks 2s + user 72s) 74.99s, wavg=64.42s Idle____________ depth 9999 slept 10s Disks idle before/after scan 111s/186s Scan completed/timedOut counter cnt=6/2/0 mode=4 scan_tmo=149s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU=55%, filecount[9999]=1048758
2021.01.31 03:26:48 Executed find in (75s) (disks 2s + user 72s) 75.29s, wavg=65.39s Idle____________ depth 9999 slept 10s Disks idle before/after scan 196s/271s Scan completed/timedOut counter cnt=7/3/0 mode=4 scan_tmo=149s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU=55%, filecount[9999]=1048758
2021.01.31 03:28:13 Executed find in (79s) (disks 2s + user 77s) 79.42s, wavg=65.39s NonIdleTooSlow__ depth 9999 slept 10s Disks idle before/after scan 281s/31s Scan completed/timedOut counter cnt=8/0/1
```
The last line says it took 2s to scan /mnt/disk* and /mnt/cache*, and 'only' 77 seconds to scan /mnt/user. 77s is so long that it must be doing real CPU work or disk access; 2s to scan the disks indicates they are properly cached in memory. The second-to-last line reads `idle before/after scan 196s/271s`, which indicates the disks were idle before, during, and after the scan, so no file was actually read. Hence cache-dirs is burning crazy CPU just scanning /mnt/user. All cache-dirs does is scan /mnt/user with 'find' (see the sketch below), so there's no way to fix this except to turn that scan off. The reason we might want it on is that in a recent version of unRaid (some years back), disks started spinning up if /mnt/user wasn't included in the cache. So in other words, it's likely cache-dirs is broken now: we need to turn off the /mnt/user scan or it eats our CPU, but if we turn it off, it might not prevent disk spin-ups from just reading dirs, which was almost the whole point of it. If it still spins up disks, it might still be useful in some scenarios, e.g. if you sync the directory structure regularly, the sync program doesn't have to load all folders from disk because they actually are in memory; unRaid just spins up the disks anyway. Best Alex
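As a rough illustration of what the plugin's scan amounts to, and a way to reproduce the timing split reported in the log, a minimal sketch (the find invocation is a simplification of what cache-dirs actually runs):
```
# Time the two halves of the scan the way the log reports them.
time find /mnt/disk* /mnt/cache* >/dev/null   # "disks" pass: fast when dentries are cached
time find /mnt/user >/dev/null                # "user" pass: the CPU-hungry one
```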
    1 point
  45. It was an error in the config file. To check syntax I suggest running rsnapshot configtest inside the docker container, as stated on the GitHub page for rsnapshot https://github.com/rsnapshot/rsnapshot/blob/master/README.md#configuration
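If the container is managed from Unraid's Docker page, you can run that check in place with docker exec; the container name here is a placeholder, so adjust it to match your template:
```
# Hypothetical container name "rsnapshot"; run the syntax check inside it.
docker exec -it rsnapshot rsnapshot configtest
```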
    1 point
  46. So after a variety of attempts from a number of forum posts, the following approach (link1, link2) worked for me:
```
>> lsscsi
[2:0:0:0]   cd/dvd  HL-DT-ST  BD-RE  WH14NS40  1.03  /dev/sr0
```
and then hand-modifying the Ubuntu VM's XML with:
```
<controller type='scsi' index='0' model='virtio-scsi'/>
<hostdev mode='subsystem' type='scsi'>
  <source>
    <adapter name='scsi_host2'/>
    <address type='scsi' bus='0' target='0' unit='0'/>
  </source>
  <readonly/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</hostdev>
```
Other posts had suggested changing the controller value to "1", which did not work for me. I now have access to the Blu-ray drive from within the Ubuntu VM (it automatically detects an audio disc insert and mounts it). I am now able to rip audio CDs, which was my original objective.
    1 point
  47. Is Cache_Dir incompatible with OSX? It seems OSX will do a disk read no matter what, leading to it bypassing the directory cache. I wonder if OSX is trying to read the .DS_Store file containing the folder metadata. If so, it would be good if unRAID had some way of keeping the .DS_Store file on an SSD cache, thereby negating the need to spin up the whole array to read these small files. Does anyone know if there is a way to do that?
    1 point
  48. You could create a local backup and then rsync it over (see the sketch below). Otherwise there are a bunch of guides here for backing up to NFS, SSHFS, and SMBFS: http://wiki.rdiff-backup.org/wiki/index.php/TipsAndTricks
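A minimal sketch of that two-step approach, with placeholder paths and hostname:
```
# Hypothetical: make the local backup first, then mirror it to a remote box.
rdiff-backup /mnt/user/documents /mnt/backups/documents
rsync -az --delete /mnt/backups/ user@remotehost:/srv/offsite-backups/
```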
    1 point
  49. I find rdiff-backup best for my needs. From the website: "Compared to rdiff-backup, rsync is faster, so it is often the better choice when pure mirroring is required. Also, rdiff-backup does not have a separate server like rsyncd (instead it relies on ssh-based networking and authentication). However, rdiff-backup uses much less memory than rsync on large directories. Second, by itself rsync only mirrors and does not keep incremental information (but see below). Third, rsync may not preserve all the information you want to back up. For instance, if you don't run rsync as root, you will lose all ownership information. Fourth, rdiff-backup has a number of extra features, like extended attribute and ACL support, detailed file statistics, and SHA1 checksums." I like rdiff-backup because it keeps the backup as a current mirror and then stores all the earlier changes as diffs. A few years back I had a catastrophic failure and was able to just mount my backup drive, and it ran exactly as the failed drive did. But if I end up deleting something, or needing an earlier version of a file, I can just restore it from the diff.
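To make the mirror-plus-diffs behavior concrete, a short sketch of typical rdiff-backup usage (paths are placeholders):
```
# Mirror a share while keeping reverse increments of every change.
rdiff-backup /mnt/user/documents /mnt/backups/documents
# Restore a file as it existed three days ago.
rdiff-backup --restore-as-of 3D /mnt/backups/documents/report.txt /tmp/report.txt
# Prune increments older than one month.
rdiff-backup --remove-older-than 1M /mnt/backups/documents
```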
    1 point