limetech

Administrators
  • Content Count: 9268
  • Joined
  • Last visited
  • Days Won: 125

limetech last won the day on February 21

limetech had the most liked content!

Community Reputation: 1466 Hero

About limetech

  • Rank: Advanced Member

Converted

  • Gender: Undisclosed

  1. The main roadblock to adding Nvidia and AMD GPU drivers has been that Linux will grab those devices at boot - which is what you want if they're to be used by Docker containers, but it makes it a real PITA for those wanting to pass the cards through to VMs instead. Traditionally you had to find the vendor/device ID and stub the drivers via the syslinux kernel command line. To help with this we added the vfio-pci.cfg method to select devices by PCI ID, but there's still no slick user interface for easily selecting the devices to stub. Lately I've seen a plugin called "VFIO-PCI Config" - maybe the author would help us integrate this natively into Unraid OS 😎 This would open the door for us to add GPU drivers without adding a huge burden to VM users.....
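     As a rough illustration of the two stubbing methods (the device IDs and PCI addresses below are placeholders, not from any particular system):
     In syslinux.cfg, stubbing by vendor:device ID on the kernel command line, e.g. a GPU plus its audio function:
         append vfio-pci.ids=10de:1b81,10de:10f0 initrd=/bzroot
     In /boot/config/vfio-pci.cfg, stubbing by PCI address (format as I recall it from the current implementation):
         BIND=02:00.0 02:00.1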
  2. I don't think what you are asking for is possible. The Unraid boot menu is syslinux - you can research whether booting Windows via syslinux is possible, but I don't think so. Why do you need 'baremetal'?
  3. If you get this worked out, here's a thought. You could put the code and Makefile in a github repo and once other issues are sorted (such as udev rules) we can look at cloning the repo and adding it to Unraid OS. However, if a newer kernel comes along and the driver no longer builds, we'll have no choice but to file an Issue in the repo and omit the driver until the issue is resolved.
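     As a rough sketch of what such a repo's Makefile might contain (the module name 'mydriver' and the layout are hypothetical - this is just the standard out-of-tree kbuild pattern, and recipe lines must be indented with tabs):
         # build mydriver.ko against the running kernel's build tree
         obj-m := mydriver.o
         KDIR ?= /lib/modules/$(shell uname -r)/build
         all:
         	$(MAKE) -C $(KDIR) M=$(CURDIR) modules
         clean:
         	$(MAKE) -C $(KDIR) M=$(CURDIR) clean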
  4. Something else I wanted to add, as long as we're talking about security measures in the pipe: we are looking at integrating various 2-factor authentication solutions directly into Unraid OS, such as Google Authenticator.
  5. I haven't "danced" around anything, sorry if it appears that way. How does this apply in an Unraid server environment? Yes, this is something we're looking at. why? why? There is only one user: root. You can set file permissions however you want using standard Linux command-line tools. Again, what are you trying to accomplish? We do have plans to introduce the idea of multiple admin users with various roles they can take on within the Management Utility. For example, maybe you create a user named "Larry" who only has access to the Shares page, with the ability to browse only the shares they have access to. However, this functionality is not high on the list of features we want/need to implement. Earlier you were confused by my term "appliance". What this means is that the server has a single user that can manage the box. If you don't have the root user password, all you can do is access shares on the network that you have permission for, and access Docker webUIs - but most of these have their own login mechanisms. Things like the flash share being exported by default, new shares being public by default, telnet enabled by default, SMBv1 enabled by default, etc. are all simplifications to reduce frustration for new users. Nothing is more frustrating than creating a share and then getting "You do not have permission..." when trying to browse your new share. We are trying to reduce the swearing and kicking of dogs by new users just trying to use the server. Eventually everyone needs to be more security conscious - and in that spirit we are working on "wizards" that will guide a user to the correct settings for their needs. I hope this starts to answer some questions, and sorry if I came across flippant to your concerns, but trust me, security is a foremost concern, and to have someone imply otherwise ticks me off, to be honest.
  6. This is a load of B.S. While I appreciate the sentiment of your post (wanting to improve security), it is not helpful to simply complain. What is helpful is to point out specific attack vectors that we can address. Unraid is rapidly evolving from a simple NAS mainly used by tech-savvy home users to a more general platform with a wider range of users. It used to be the introduction of some bug that caused customer data loss that kept me up at night. These days, having a bug that presents a security risk is far more worrisome. So don't tell me we don't take security seriously. That said, there is a trade-off between making the server easily accessible for a first-time user vs. locking it down so tight no one can figure out how to even get in. I'll give you an example. By default we export the 'flash' share as a public share. Some people's hair catches on fire because of this. But the reason it's done this way is that after a user creates a bootable USB flash, a very simple test is to see if the 'flash' share shows up in the network explorer. There are other reasons it's handy to have this public for at least some amount of time. These days we have an icon next to the flash share if it's public, and rolling over it warns about this. Moving forward we are developing an initial configuration wizard that will guide a user in setting up the level of security appropriate for them.
  7. There is an SSH login attempt from an IP geo-located in China. Either your Win10 VM has malware, or maybe a Docker container has some kind of malware. Please provide a list of all your containers.
  8. Unraid is an appliance. There is only one user: root. We can rename it to "admin" but it's still root. There are no traditional user logins. Users are only used to validate SMB connections. Running as non-root would not have prevented this vulnerability, which, btw, was a couple of 1-line bugs. re: the request: we have a blog post that talks about this: https://unraid.net/blog/unraid-os-6-8-2-and-general-security-tips Sure, I can go reply in there...
  9. Unraid OS Forum Moderators' professional workspace - sure beats Google's!
  10. Something else to try: On Settings/Global Share Settings, set Tunable (enable Direct IO) to Yes.
  11. Thanks for the link. I don't have any Windows code development environment, and as soon as I find the half a day to learn this and get it running, I'll take a stab at it. In the meantime I'll do some testing with Linux CIFS and see if I can reproduce similar results.
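      As a rough sketch, the Linux-side test would amount to mounting the share with the kernel CIFS client and running the same transfer against it (hostname, share name, and credentials below are placeholders):
          mkdir -p /mnt/testshare
          mount -t cifs //tower/testshare /mnt/testshare -o username=someuser,vers=3.0
          dd if=/dev/zero of=/mnt/testshare/testfile bs=1M count=1024 conv=fsync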