Everything posted by ken-ji

  1. Not going to try to convince you - but as a PSA on containers vs VMs: they have the same practical security unless your hardware and OS support CPU and memory isolation (IBM's Power series is the only thing I've seen that does this), in which case a VM wins hands down. The important part of security is the application, because even with netfilter protection, a VM's entire network stack is still technically exposed to any possible vulnerabilities. With containers and macvlan (almost forgot about that one), the attack surface shrinks to the application running in the container on its dedicated IP, and a standard container on the default bridge network is like having a simple Linux router in front providing port forwarding. The usual port forwarding through a router in front of either a VM or a container helps a lot with security in both cases.
     AFAIK, exploiting a vulnerable Nextcloud in a container would leave the attacker inside the Nextcloud container, which presumably has nothing else to leverage, and they would need to figure out a way to move to another target - the data in the MySQL container? or the host? In the VM scenario, the attacker would already be closer to the data and about as close to the host.
     The bit about HVM is that we may be in 2019, but not all countries and users have the money or access to 2015+ hardware; some are still rocking something from the early 2000s, and for them only containers work. I, for example, decided to use just a Pentium G4620, because an Emby container with hardware transcoding on the iGPU works even better than an i3/i5/i7 + Nvidia GPU in a VM, at a significant fraction of the cost. (The fact that I'm using an ITX board and need its only slot for my HBA is also a factor here.)
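     To make the bridge-networking point concrete, a minimal sketch of the default case (image name and ports are just illustrative):

        # default bridge network: the container sits behind NAT and only the
        # published port is reachable; Docker writes the DNAT rule for you
        docker run -d --name nextcloud -p 8080:80 nextcloud

        # the port-forward shows up in the nat table, much like on a small router
        iptables -t nat -vnL DOCKER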
  2. Shouldn't be a headache as the migration would of course be finished as soon as unraid can detect all your disks as properly connected. Then upgrading parity would be a simple remove and replace.
  3. One option that might work is to set up unRAID on the current PC without touching or overwriting the OS drive or the Storage Spaces drives. You need at least one new hard drive (get 8TB if you can, and remember the parity drive must be at least as large as the largest data drive) and another smaller drive (preferably an SSD) to act as cache and to host a Windows Server 2019 VM.
     * Unplug the Windows Server 2019 HDD so we don't accidentally touch it. Unplug the Storage Spaces drives as well, to be safe.
     * Boot up unRAID.
     * Assign the new drive as an array member.
     * Assign the other drive as a cache drive.
     * Create a Windows Server 2019 VM.
     * Shut down.
     * Attach the Storage Spaces drives (over USB if you absolutely have to - I don't know if Windows will have recognition problems).
     * Pass the disks through to the VM (by attaching the USB bay, or via controller passthrough - see the sketch after this list).
     * Windows Server 2019 should be able to mount the Storage Spaces drives.
     * Windows Server 2019 should now be able to move files off a single drive (or at least migrate enough to fill the unRAID drive).
     * Remove the freed drive from Storage Spaces. Shut down the VM and detach the drive from it.
     * Stop the unRAID array and add the freed drive.
     * Repeat and loop until all drives have been processed.
     * Delete the Windows Server 2019 VM.
     * Delete the Windows Server 2019 installation.
     * Reuse the last HDD.
     * Add the parity drive.
     There, you should be done.
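     For the disk passthrough step, Unraid's VM settings page can attach a whole physical disk, but roughly the same thing can be done from the command line with libvirt. A minimal sketch, with a made-up VM name and disk ID:

        # attach a whole physical disk to the VM by its stable by-id path,
        # on a SATA target so Windows sees it without extra drivers
        virsh attach-disk "Windows Server 2019" /dev/disk/by-id/ata-WDC_WD80EXAMPLE sdb --targetbus sata --persistent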
  4. Some points to weigh:
     * A VM with all the apps would only be faster than multiple containers if the data flows over unix sockets; otherwise it's the same thing.
     * A VM is about as secure as a container.
     * A VM is only an option if your server has hardware virtualization support, and it can still get really slow at certain cryptographic operations unless the CPU does them in hardware.
     * A VM can only access other server hardware if the server has hardware passthrough support.
     * A container can access server hardware as long as the server OS has drivers for it (see the sketch after this list).
     * A container works without needing hardware support and will always run at bare-metal speed, not counting possible networking overhead.
     * A container doesn't even need firewalling, as only the application running in it is exposed on the network interface.
     * A container doesn't need patching, only checking whether the application and related libraries have vulnerabilities.
     I'm also an old-school sysadmin, but I see that containers work better than VMs in many situations, unless you want multi-tenancy, hardware passthrough, and/or complex firewalling. I mean, do you really need to emulate an RTC or a floppy controller to run a web server?
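     On the hardware-access point, a minimal sketch of handing the host's iGPU to a container for transcoding (the image is just an example; the host kernel driver does the work, no passthrough needed):

        # expose the Intel render node to the Emby container
        docker run -d --name emby --device /dev/dri:/dev/dri emby/embyserver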
  5. rsync will always walk the full list of files every time, and the non-default option of using checksums will definitely cause all files on both sides to be read, to generate the checksums for comparison. rclone sending to cloud storage, I think, uses the size and timestamp check first before falling back to checksums (depends on the cloud provider).
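     For reference, a minimal sketch of the flags involved (paths and remote names are placeholders):

        # default rsync: compares size + mtime only, reads only changed files
        rsync -av /mnt/user/share/ backup:/backup/share/

        # -c forces full-file checksums, so every file on both sides gets read
        rsync -avc /mnt/user/share/ backup:/backup/share/

        # rclone normally checks size + modtime; --checksum switches to hashes
        # where the remote supports them
        rclone sync /mnt/user/share remote:share --checksum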
  6. You seem to have a 2nd NIC: eth1? Or did you remove it at some point? Run:
        docker network list
        docker network inspect [name of network]
        iptables -vnL
  7. You need to add IP forwarding and routing rules on the Unraid server and the other clients, as the clients on subnet 1/2 won't know to use the dual-port server as a router to reach the other subnet. The Unraid server also needs IP forwarding enabled (it is by default when Docker is enabled) so it will pass traffic for the other subnet through.
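     A rough sketch of what that looks like (interface addresses and subnets are made up for illustration):

        # on the Unraid box: make sure forwarding is on
        sysctl -w net.ipv4.ip_forward=1

        # on a client in subnet 1 (192.168.1.0/24): send traffic for subnet 2
        # via the dual-homed server's address on this subnet
        ip route add 192.168.2.0/24 via 192.168.1.10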
  8. It's been a while since I did this on plain Linux, but I think if you bridged all the interfaces on Server 1, Server 1 would then act like a switch. Of course, you can only connect Server 3 to Server 1 if you still have enough physical ports to use (and I'm not familiar with 40Gb interfaces). Edit: And if you do bridge all the interfaces together, you can't connect Server 1 to the switch with multiple cables; you'll be limited to one unless bonding is also in place.
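     A minimal sketch with iproute2, assuming the extra NICs are eth1 and eth2 (adjust to your hardware):

        # create a bridge and enslave the extra NICs so the box forwards
        # frames between them like a dumb switch
        ip link add br1 type bridge
        ip link set eth1 master br1
        ip link set eth2 master br1
        ip link set eth1 up
        ip link set eth2 up
        ip link set br1 up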
  9. I've never felt my server running slow, except maybe when my 5900rpm disks are really busy - like during a parity check, a drive rebuild, or while the mover is running - or when the disks spin up. But that's how all of my Linux servers feel: snappy enough. Windows file sharing does feel smoother and faster, but I normally use the terminal / scp for file management; Windows is only for browsing the shares, and I stress, it is fast enough for me that I don't feel it slowing me down.
  10. I haven't done much LAN file transfer in recent times, but I've never felt my server running slow unless I was transcoding 1080p Hi10p videos in software as that would usually consume all of my Pentium G4620 cores. But I can comfortably watch Quicksync transcoded 4K videos while doing SMB transfers, remotely and over a measly 25/25Mbps link (IPSEC VPN)
  11. Hmm. Last time I tried, the kernel modules for IPSEC were not compiled in. But that was back in 6.0.
  12. Well, posting your diagnostics would save everybody the guesswork - e.g. whether the CPU is too slow for dual parity, etc.
  13. Have you tried following this in the NFS rules?
  14. And Google won't let a device use a reserved IP unless the MAC address matches?
  15. And you can't just assign a static IP to Unraid and be done with it?
  16. Not a Google Wifi user, so I don't know how hard/annoying assigning a static IP can be.
  17. Is there any real reason why you cannot use the real MAC address of the board? I personally think using a made-up / copied MAC address is only for transitions or as a workaround for licensing.
  18. Yes. That's my setup. The secondary NIC is used to create a Docker network in the same IP space as the host, and all the containers that need local IPs live there.
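     A minimal sketch of that setup, assuming the secondary NIC is eth1 and the LAN is 192.168.1.0/24 (names and ranges are made up; adjust to your network):

        # macvlan network bound to the second NIC; containers on it get their
        # own addresses on the LAN instead of being NATed behind the host
        docker network create -d macvlan \
          --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
          --ip-range=192.168.1.192/27 \
          -o parent=eth1 localnet

        # example container with its own LAN address
        docker run -d --network localnet --ip 192.168.1.200 --name pihole pihole/pihole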
  19. Well, that depends on the security consciousness of the user, since if physical security were really important, he'd have the USB drive locked away and/or have CCTV monitoring the server. It's way less over the top than having to have USB boot disk encryption. Why? Any plain Joe with the right hardware - i.e. a lockable server chassis (or a small padlock + a regular chassis and a drill) and the right motherboard (or optional parts) - can easily install the boot flash drive inside the chassis, lock the chassis (and the rack while you're at it), and be better secured than if the system had an encrypted USB which, upon failure or error, might render your entire config non-recoverable (of course you have backups...). Any plain Joe can also unlock the same system, take the USB out to check the filesystem for corruption or to replace it, and lock it back in.
     Encrypting the USB would work (licensing is tied to the physical USB, not the partition/filesystem), but it would require the server to have a console attached or to support IPMI (mine is headless and has no IPMI). Current drive encryption does not require this, as the web UI can be used to mount the drives, or the keyfile can be fetched from a nearby local device (even over the network) before the Unraid core processes start - neither of which can be done if your USB is encrypted.
  20. A server reboot will be necessary after unplugging the USB stick from a live system, at which point the disks are encrypted again and awaiting proper key input. The attack vector is a bit long-winded and would probably be a very targeted attack; it also already means the knowledgeable attacker is on premises with hands on the server. I don't know if the typical user warrants such a level of security, but I guess it's possible to wrap the entire USB boot process with an optional mini encrypted disk image on the USB drive. I can definitely guess that Limetech cannot afford such a development direction unless they have a number of enterprise customers and support contracts lined up for the resulting security-featured product (as the resulting encryption makes it difficult for normal users to fix any issues). Personally, I'd rather store the USB boot drive inside the server chassis (physically locked, with the necessary alarms), which mitigates the perceived attack vector.
  21. I think something is misconfigured. Is there an IP address assigned to eth2 in the Unraid network settings? Post your diagnostics so the simple questions are already answered instead of us trying to extract the details from you.
  22. Post your diagnostics file too. Something is not quite right with your config if eth2 and eth0 are on the same physical LAN, yet the pfSense VM is on a different subnet (172.16.1.0/24?) and is still able to see the docker0 (172.17.0.0/24) traffic. Are you doing any form of bridging via the CLI?
  23. This doesn't provide enough info on what's connected to what and how, but the answer to the question is no.
  24. Haven't watched the video, but a VM for VPN access typically has 2 network interfaces - one on the regular network and another on the VLAN/subnet that will be given VPN "protection" on the way out.
  25. The VM has to be the gateway, as it's the one routing the containers out.
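     A minimal sketch of pointing a Docker network at the VPN VM as its gateway (subnet, parent interface, network name, and addresses are made up for illustration):

        # containers on this network use the VPN VM (192.168.50.2) as their
        # default gateway instead of the regular router
        docker network create -d macvlan \
          --subnet=192.168.50.0/24 --gateway=192.168.50.2 \
          -o parent=br0 vpn-net

        docker run -d --network vpn-net --name qbittorrent linuxserver/qbittorrent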