TOoSmOotH

Everything posted by TOoSmOotH

  1. I have a couple of failed drives and I need to identify them without rebooting. I can do it from the card BIOS, but I don't want to restart my server. The utility from LSI's site comes as a .deb and an RPM. I guess I can unpack the RPM, but I was hoping for a more elegant solution (see the RPM extraction sketch after this list).
  2. OK, I can confirm that this is a GUI limitation. I followed the naming scheme and the drives all show up now.
  3. Is there a way to go beyond 24 vdisks? I have 44 drives in my system: 2 parity, 8 data, 2 RAID 1 cache, and the rest unassigned. I am passing the unassigned drives through to a VM, but I can't go past 24. Is this a limitation in the GUI? Can I just add entries to the XML and keep cruising (see the disk XML sketch after this list), or would that cause issues?
  4. I just nuked the image and installed everything again.
  5. Everything was fine in rc3, but since I upgraded, Docker fails to start. I'm seeing this error:
     time="2018-09-21T13:26:28-07:00" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.18.8-unRAID\n": exit status 1"
     Full log attached: docker.txt
  6. Just create a SMOS/NVOC/Ethos VM and pass the cards to it. Should work just the same.
  7. Yeah, your best bet would be to install something like ethOS and not use Unraid. That motherboard is not the ideal setup for what you are trying to do with Unraid.
  8. What motherboard do you have? Why are you using a riser?
  9. Has anyone had any luck with a Threadripper build yet? I am thinking about jumping over to team red and I'm curious what types of issues folks are having. I saw you need to run a release candidate right now. Just curious what other problems are out there.
  10. My kids' computer has 2 Windows 10 VMs on it and I am passing through a GTX 1070 to each one. Those cards mine when the VM is idle for more than 10 minutes. I used to mine Ethereum and was able to overclock and get good speeds. I am running EWBF on them now mining Zcash and getting 400-420 Sol/s on each VM. I am just using PCIe passthrough, and they game on it when it isn't idle.
  11. Everything is mounted rw:
      # cat /etc/fstab
      /dev/disk/by-label/UNRAID /boot vfat auto,rw,exec,noatime,nodiratime,umask=0,shortname=mixed 0 1
      # ls -la /boot/config
      total 0
      Looks like there is something going on with /boot/config that it thinks is missing.
  12. Well I never made any changes and it was working before. Looks like it happened from the 6.3.5 upgrade
  13. Seeing this lately:
      Warning: unlink(/boot/config/plugins/dockerMan/images/linuxserver-plex-latest-icon.png): Read-only file system in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 792
      Any docker container that I update loses its icon due to this error (see the flash remount sketch after this list).
  14. Warning: preg_grep() expects parameter 2 to be array, boolean given in /usr/local/emhttp/plugins/unassigned.devices/include/lib.php on line 110
      Just saw that when stopping my array. Not sure if it's a known issue, or an issue at all, but I thought I would share. This is on 6.3.3.
  15. I do security stuff for a living, and I would say there are no glaring vulns with Unraid, just some best practices that need to happen (that I listed previously) which would help with the optics of its level of security. No matter what you do, if someone wants to get to you, they will. Forcing people to change the root password would help save them from themselves in some cases; that is another option as well. If you want to go full tin foil, I would stick it on another VLAN and make that traffic pass through your firewall to reach the LAN where your Unraid box sits.
  16. Yes, sorry, those would be feature requests, not something you can do today, apart from the key-based auth.
  17. Glad to hear about HTTPS support coming. Some other suggestions:
      - Have a default root password rather than just a blank one.
      - Force the password to be changed on first login.
      - SSH key support in the web interface (more advanced users can do this today).
      - Run Docker as its own user.
      - Run KVM as its own user.
      - Create a separate user account for the web interface that isn't root, so you are not passing root creds over the network (although SSL sort of addresses this).
      Even with all that, you should never stick an Unraid box on a DMZ port; its use case is inherently less secure than, say, a web server's because of the attack surface it exposes (SMB, etc.). Always use docker containers and only forward specific ports to them, and try to use non-default ports. Plex example: use port 34121 on your router and point it at 32400 on your internal Plex docker (see the port-mapping sketch after this list). That way, if some Plex vuln comes out, you have a little time to address it; the baddies will be scanning 32400 looking for it. Yes, you can still be found, but that requires a full port scan on you, and that is a lot slower than just looking for Plex on 32400. Another pro tip: don't open SSH to your Unraid server from the internet. If you really need to be able to SSH into your house, create a small VM that you can connect to, or better yet install the OpenVPN docker and connect that way. Never have direct connectivity on any port to your Unraid box from the interwebs... always use a docker.
  18. I am able to confirm that this worked for me! Thanks for the help!!
  19. Ok I was able to confirm the problem is unrelated.
  20. So, a new wrinkle: since I made that change I can't start any CentOS-based VMs. I get an immediate kernel panic. I am going to try to reverse the change and see if they will boot again. I get a panic on a fresh install of CentOS as well.
  21. Looks like I had a '.' where a ':' should have been. Looks like they are hidden now.
  22. So the OS still sees the interfaces... should they be hidden from the OS? I was supposed to put that in the syslinux.cfg, correct? (See the syslinux.cfg sketch after this list.)
  23. So I am running into a snag when trying to pass some quad-NIC ports through. When I do the PCI stub thing, I actually disable all of my NICs and am unable to connect. For some reason both cards share the same ID, 8086:1521; c1:00.0 and .1 are my onboard ports that I still want Unraid to see. Thoughts? (See the by-address vfio binding sketch after this list.)
      Onboard NIC:
      IOMMU group 59 [8086:1521] c1:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      IOMMU group 60 [8086:1521] c1:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      Quad NIC:
      IOMMU group 11 [8086:1521] 01:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      IOMMU group 12 [8086:1521] 01:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      IOMMU group 13 [8086:1521] 01:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
      IOMMU group 14 [8086:1521] 01:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
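
For the LSI utility in post 1: a minimal sketch of unpacking the RPM in place instead of installing it, assuming rpm2cpio/cpio (or Slackware's rpm2tgz) are available on the box; the filename is a placeholder for whatever package LSI actually ships.

    # Option 1: extract the RPM's payload into the current directory without installing it
    rpm2cpio storcli_example.rpm | cpio -idmv
    # The tool usually ends up under ./opt/MegaRAID/...; run the binary from there.

    # Option 2: convert the RPM to a Slackware .tgz you can inspect (or installpkg)
    rpm2tgz storcli_example.rpm
    tar -tzf storcli_example.tgz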
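
For the vdisk question in post 3: a minimal sketch of what a hand-added entry in the VM's XML (via virsh edit) might look like, assuming standard libvirt raw block-device syntax; the by-id path is a placeholder, and the target name just continues the GUI's naming scheme past the 24th slot (which is what post 2 seems to confirm works).

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <!-- placeholder device path; use the real /dev/disk/by-id entry -->
      <source dev='/dev/disk/by-id/ata-EXAMPLE_DRIVE_SERIAL'/>
      <!-- next free name after the GUI's last assigned target -->
      <target dev='hdaa' bus='virtio'/>
    </disk>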
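
For the flash-drive symptoms in posts 11-13: a minimal check, assuming the "Read-only file system" warnings mean the vfat filesystem on the USB stick was flipped to read-only after an error. Remounting is only a stopgap; the stick likely needs a filesystem check (or replacement) if this keeps happening.

    # Is /boot currently mounted read-only? Look for 'ro' in the mount options.
    mount | grep /boot
    # Put it back read-write for now (assumes the device itself is still healthy).
    mount -o remount,rw /boot
    # Confirm the config tree is visible again.
    ls -la /boot/config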
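
For the non-default-port suggestion in post 17: a minimal sketch using Docker's own port mapping, with the numbers from the post; the image name (linuxserver/plex) is just an example, and the same remap can instead be done purely in the router's forwarding rule while the container keeps publishing 32400.

    # Publish the container's 32400 on host port 34121, then forward external
    # TCP 34121 on the router to this host. Volumes/env omitted for brevity.
    docker run -d --name plex -p 34121:32400 linuxserver/plex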
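
For the syslinux.cfg question in post 22: yes, the stub IDs go on the append line in /boot/syslinux/syslinux.cfg. A minimal sketch against the stock Unraid boot entry, using the 8086:1521 ID from post 23 (and, as post 23 shows, stubbing by ID alone also grabs the onboard ports):

    label Unraid OS
      menu default
      kernel /bzimage
      append pci-stub.ids=8086:1521 initrd=/bzroot

With the stub in place the network driver never binds to those ports, so no interfaces appear for them in the OS, although lspci will still list the devices.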
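
For the identical-ID problem in post 23: since the onboard and quad-port NICs share the same 8086:1521 ID, stubbing by ID takes all of them down. One workaround, sketched below, is to bind only the quad card's addresses (01:00.0-01:00.3) to vfio-pci by PCI address and leave c1:00.0/.1 alone; this assumes a kernel with vfio-pci and the sysfs driver_override interface, and it would typically run from the flash go script (/boot/config/go) before any VMs start.

    #!/bin/bash
    # Bind just the quad-port I350 (bus 01) to vfio-pci, leaving the onboard
    # ports on bus c1 for Unraid to use.
    modprobe vfio-pci
    for dev in 0000:01:00.0 0000:01:00.1 0000:01:00.2 0000:01:00.3; do
        # Detach the port from whatever network driver already claimed it.
        if [ -e "/sys/bus/pci/devices/$dev/driver" ]; then
            echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
        fi
        # Only vfio-pci may claim this specific device from now on...
        echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"
        # ...so re-probing it attaches vfio-pci instead of the NIC driver.
        echo "$dev" > /sys/bus/pci/drivers_probe
    done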