bman212121

  1. Um, then don't boot in that boot mode? Better yet, what do you suggest? This isn't for me, this is for your basic user who doesn't know about security best practices or the pitfalls of browsing the internet as super-user on a server. I can already see the horror support threads about unknowing consumers' systems being pwned. So is the suggestion simply to block the browser's internet access apart from the connection to the local web interface? I'm not sure how some of the older users are using this product, but to me it seems like a moot point to worry about a browser hijack with the default configuration of unRAID. There is absolutely no security in the default configuration, so providing a web browser running as root isn't going to make or break anything. If you wanted to lock this down, there is a lot more that needs to be done than just changing how Firefox is launched.
  2. This confuses me a bit. I don't use a cache disk, and run docker from a mounted SSD. What do I need to do to add these shares? Are they SMB shares? User shares? Has anyone without a cache disk tried the upgrade? Any further details on the above? My docker lives outside the array on a separately-mounted disk. I don't really want or need a cache drive, but I don't want docker on the array, either. Has support for that configuration been eliminated? No, this should work just fine. I removed my cache drive from the array, which gave me a drive called sdc. I mounted sdc1 to /mnt/diskA via ssh. From there I just went into the docker settings and told it to create a docker image under /mnt/diskA, then enabled docker and set up a docker image. VMs were a bit odd at first: I think I had the libvirt storage location on the array but the default VM location on diskA, and the VM tab wouldn't show up until I moved the libvirt storage location to diskA as well. Once I did that I was able to click one of the new template buttons, select the ISO, and it automatically placed my vDisk on diskA for me. The VM started up and booted from the CD to install just fine. After a reboot I just had to recreate the mount point /mnt/diskA, run mount /dev/sdc1 /mnt/diskA, and the system had no issues detecting the data and running docker and VMs again. I'm guessing you would probably put those commands in a startup script so it works correctly each time.
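The remount routine in post 2 could be sketched as a small boot script. The device name /dev/sdc1 and mount point /mnt/diskA come from the post; the idempotency check and the suggestion of a by-label device path are my additions, and hooking it into unRAID's /boot/config/go file is an assumption about where such a script would live.

```shell
#!/bin/bash
# Sketch: remount a non-array SSD at boot so docker/libvirt find their data.
# Assumes the disk always enumerates as /dev/sdc1; a stable path such as
# /dev/disk/by-label/... would be more robust if the device order changes.

MOUNTPOINT=/mnt/diskA
DEVICE=/dev/sdc1

mkdir -p "$MOUNTPOINT"              # recreate the mount point after every reboot
if ! mountpoint -q "$MOUNTPOINT"; then
    mount "$DEVICE" "$MOUNTPOINT"   # safe to re-run; only mounts if not mounted
fi
```

On unRAID, appending these lines to the /boot/config/go script (which runs at boot) should make the disk available each time without the manual ssh step, though the exact ordering relative to docker startup is worth verifying on your own system.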
  3. I looked through all of the posts, and it doesn't appear that anyone has asked this yet. If you are using SeaBIOS + GPU passthrough, will starting up the VM still take down the WebGUI, or has that problem been fixed for both the console and the WebGUI? I'd test it myself but can't take down the system that supports IOMMU atm.
  4. Thanks again tdallen! I really appreciate the depth of the knowledge you provide. The documentation is great for the average end user, but it's great knowing there are community members who really know the ins and outs of the system.
  5. Thanks for the info tdallen. I've also been wondering the same questions about how the storage is supposed to work (and still have a few more). It makes more sense once you know the background unRAID comes from and why they chose the model they did. This setup has the advantage that the end user can swap out drives and use any size of drive they want, but it comes at a price. Basically, what I'm gathering is that unRAID uses something closer to RAID 4 (a dedicated parity disk, but without striping), which is certainly not designed with performance in mind. The parity drive will be the weak point of the system, especially since it isn't handled asynchronously, so if you're writing data to multiple drives at once it's only going to go as fast as the parity drive allows. It will also probably be the first drive to fail, since it handles far more I/O than any other drive (two of the reasons why RAID 5 was invented). For consumer usage, where the majority of data is written to disk once, these shortcomings are likely not a big deal. Obviously the cache drives can help alleviate the problem by holding as much data as possible so it can be written to the disks as time allows. The cache drives themselves seem straightforward: any writes destined for the array go to the cache drives first and are written to the array asynchronously at a later time. You can have multiple copies of the cache with no performance hit since it's in a software RAID 1. The part I can't figure out is whether there is actually any kind of read cache anywhere in the system. Best I can tell, it uses a decent chunk of RAM to store data it thinks it might need. I'd really like to know if there is any more info on read performance, because it seems like it could be really good if the data happened to be spread across different disks, or so-so if all of the data you wanted was on the same disk (since we're not striping, there is no guaranteed performance increase like all other RAID levels have).
Maybe 6.2 will bring improvements to read caching, so you can hold large chunks of data in cache and there is a greater chance the data you need is on the fast cache rather than the slow HDDs. The last thing I haven't figured out is how to get the VMs to cache at all. There is no per-VM setting for whether they cache, and I believe I put their disks onto a share that was supposed to be cached, but it didn't seem to make a difference. Can you just load and keep the VMs on SSD cache drives and then periodically write the data back to the disk array for redundancy? How would you go about getting the most I/O for the VMs without taxing the parity drive or the disk drives?
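The parity bottleneck described in post 5 comes from the read-modify-write that every array write implies: with a dedicated parity disk, parity can be updated without touching the other data disks by XOR-ing out the old block and XOR-ing in the new one, so the parity drive sees traffic for every write to any disk. A minimal sketch of that arithmetic on a single byte (the byte values are made up for illustration):

```shell
# With dedicated parity P = D1 ^ D2 ^ ... ^ Dn, overwriting old_data with
# new_data on one disk gives:
#   new_parity = old_parity ^ old_data ^ new_data
# That is one read+write on the data disk and one read+write on the parity
# disk, regardless of how many disks are in the array -- which is why the
# parity disk is involved in every write.
old_data=$(( 0x5A ))
new_data=$(( 0x3C ))
old_parity=$(( 0x77 ))

new_parity=$(( old_parity ^ old_data ^ new_data ))
printf 'new parity byte: 0x%02X\n' "$new_parity"   # 0x11
```

The same identity is why two simultaneous writes to different data disks still serialize on the parity drive: both updates must read and rewrite the same parity blocks.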
  6. Hi guys! I'm new over here, but just wanted to say that I'm impressed you were able to get SMB running at a full 10GbE! I don't follow Linus, but I happened to catch the first video of the dual gaming PC build and was interested in unRAID after seeing that work. I stopped by to check in again as I'm thinking about testing it out to see if I can add features to my all-in-one system. This video has me even more interested, as one of the reasons I keep Windows as my primary OS is that SMB performance is much worse on any *nix distro I've ever used. I definitely still need (and want) Windows in my environment, but it seems like unRAID might have all of the components needed to take over the virtualization role while still providing a solid NAS solution and full GPU acceleration.