
fluisterben

Members
  • Content Count

    27
  • Joined

  • Last visited

Community Reputation

0 Neutral

About fluisterben

  • Rank
    Member

Converted

  • Gender
    Male


  1. As you seem to be a Windows data-hoarder, I recommend a few things first: get rid of the hoard and the duplicates.
     http://www.joerg-rosenthal.com/en/antitwin/
     Use a good file manager like TotalCommander to move your lower-importance directories into a \deep\deeper\deepest\ dir, then run Anti-Twin on your pooled drive letter and have it tag duplicates based on directory depth. Trust me, this is an amazingly well-written piece of freeware; it's going to save you terabytes of BS doubles and moot backups you still had lying around.
     While you're cleaning up, some handy tools I often use as well:
     http://www.jonasjohn.de/red.htm
     https://portableapps.com/apps/utilities/revo_uninstaller_portable
     https://portableapps.com/apps/utilities/wise-registry-cleaner-portable
     https://sdi-tool.org/download/
     And, since you're running Windows 10, I don't know, consider these:
     https://github.com/Sycnex/Windows10Debloater
     https://github.com/madbomb122/Win10Script
     https://github.com/madbomb122/BlackViperScript
     I realize you didn't ask for any of these, but still, what you wrote made me think of them. Some require a level of tech skill that you seem to have, so that's OK, and I stand behind all of the above. I'm at the point of buying the 12-drive Unraid license myself, and my main PC also runs Covecube's DrivePool, so there..
  2. Can we have wildcard options for whitelisted domains in there, please? Either by regex, or simply by adding domain.org and domain.com to some txt file (where *.domain.org and *.domain.net would then be whitelisted, i.e. removed from gravity.list after the blocklists have been imported, before dnsmasq is reloaded). This is, in my opinion, severely lacking from Pi-hole: whitelist.txt uses only exact name matches. See also: https://discourse.pi-hole.net/t/wildcard-and-regex-support-for-whitelisting/14538/3
     For example, I had to remove these from gravity.list:
     a1599.g.akamai.net a1767.g.akamai.net a19.g.akamai.net a1964.g.akamai.net
     aksb-a.akamaihd.net apiskywebbercom-a.akamaihd.net appnext-a.akamaihd.net
     asrv-a.akamaihd.net blacktri-a.akamaihd.net canvaspl-a.akamaihd.net
     cdn-guile.akamaized.net cdn2sitescout-a.akamaihd.net cdncache2-a.akamaihd.net
     cdnstats-a.akamaihd.net contentcache-a.akamaihd.net contentclick.akamaized.net
     couponcp-a.akamaihd.net downloadandsave-a.akamaihd.net ds-aksb-a.akamaihd.net
     dyn-beacon.akamaized.net e3694.a.akamaiedge.net e6913.dscx.akamaiedge.net
     fb_servpub-a.akamaihd.net giantsavings-a.akamaihd.net greatfind-a.akamaihd.net
     inmobisdk-a.akamaihd.net pxlgnpgecom-a.akamaihd.net redge-a.akamaihd.net
     rvzr-a.akamaihd.net rvzr2-a.akamaihd.net sonybivstatic-a.akamaihd.net
     speee-ad.akamaized.net spotxchange-a.akamaihd.net supersonicads-a.akamaihd.net
     zvsuhljiha-a.akamaihd.net
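     Until Pi-hole grows native wildcard whitelisting, the workaround I describe above can be sketched as a post-gravity shell step. Everything here is illustrative: whitelist_wildcards.txt is a made-up file name, and the real gravity.list lives in /etc/pihole/ with a format that varies between Pi-hole versions (this sketch assumes one plain domain per line):

```shell
#!/bin/sh
# Sketch: strip whitelisted wildcard domains from gravity.list after import.
# Hypothetical paths; adjust for the real /etc/pihole/gravity.list layout.
GRAVITY=./gravity.list
WILDCARDS=./whitelist_wildcards.txt   # one base domain per line, e.g. akamaihd.net

# demo data standing in for a freshly imported gravity.list
printf '%s\n' 'rvzr-a.akamaihd.net' 'ads.example.com' > "$GRAVITY"
printf '%s\n' 'akamaihd.net' > "$WILDCARDS"

while read -r dom; do
  esc=$(printf '%s' "$dom" | sed 's/\./\\./g')   # escape dots for the regex
  # drop the base domain itself and any subdomain of it
  grep -vE "(^|\.)${esc}\$" "$GRAVITY" > "$GRAVITY.tmp" && mv "$GRAVITY.tmp" "$GRAVITY"
done < "$WILDCARDS"

cat "$GRAVITY"   # only ads.example.com remains
```

     After pruning you would reload the resolver (pihole restartdns) so the cleaned list takes effect.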
  3. Yes, bump. The Array just won't stop here. How do I force it to stop moving files around? It keeps saying in the bottom-left corner: "Array Stopping•Retry unmounting disk share(s)..." I'm afraid to force-reboot the machine because of possible data loss of some sort. I've noticed things freeze when I force the Mover to clear the cache drives or when I've stopped the Array. I just can't do anything, and many Dockers fail to even work while I'm forced to wait it out, with no ETA in sight anywhere. I have no idea what it is doing; I constantly hear the drives busy writing/reading. OK, found out one disk was behaving faulty, which is weird, since I didn't get any warnings about it beforehand; I only found out by viewing the system logs.
  4. If it isn't already in Unraid's Slackware-based OS firmware, I would very much like to have options to:
     - Override config, for example for dnsmasq, ssh and possibly other services already running in the OS. Usually this is done with /local.d/ dirs or local.conf files that, if present, override other config files (but only for the settings they contain). That way we could, for example, change the sshd config to always and only use a specific port and the Ed25519 public-key signature system, or use dnsmasq as a LAN DNS resolver (with cache) plus host-based blocking, like here.
     - Have persistence for those config files and other additions/changes to the firmware USB-storage media content, so they remain after Unraid upgrades/updates.
     If options to do so are already there, please point me to a wiki or docs that explain how. Currently I'm using /boot/config/go for this:
     #!/bin/bash
     cp -af /boot/config/xroot/. /root/
     cp -af /boot/config/xssh/. /etc/ssh/
     chmod -R 0700 /root
     chmod 0600 /root/.ssh/*
     chmod 0644 /etc/ssh/*
     # Start the Management Utility
     /usr/local/sbin/emhttp &
     but I suspect there are better ways to do the same..
  5. If you want to really annoy the NSA, CIA, GCHQ, MI5, AIVD etc., you could change your rsync/ssh setup to use solely https://ed25519.cr.yp.to/ A fine tutorial for achieving that is here: https://stribika.github.io/2015/01/04/secure-secure-shell.html Currently I'm using the /boot/config/go script like this:
     #!/bin/bash
     cp -af /boot/config/xroot/. /root/
     cp -af /boot/config/xssh/. /etc/ssh/
     chmod -R 0700 /root
     chmod 0600 /root/.ssh/*
     chmod 0644 /etc/ssh/*
     # Start the Management Utility
     /usr/local/sbin/emhttp &
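     The stribika guide boils down to a handful of sshd_config lines. A minimal sketch of an Ed25519-leaning config, written to a demo path instead of /etc/ssh so it can be tried safely (the chosen algorithms follow the linked tutorial; on a real Unraid box this file would be staged under /boot/config/xssh/ and copied in by the go script above):

```shell
#!/bin/sh
# Sketch: Ed25519-only sshd hardening per the stribika guide.
# Writes to a demo path; the real target is /etc/ssh/sshd_config.
CONF=./sshd_config.demo
cat > "$CONF" <<'EOF'
HostKey /etc/ssh/ssh_host_ed25519_key
KexAlgorithms curve25519-sha256@libssh.org
Ciphers chacha20-poly1305@openssh.com
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
PasswordAuthentication no
PubkeyAuthentication yes
EOF
grep -c . "$CONF"   # 6 non-empty config lines
```

     Matching client keys are generated with ssh-keygen -t ed25519, and sshd -t validates the config before you restart the daemon.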
  6. Not sure if that would be reassuring for newcomers; it's the "easily updated" part that makes it less secure. Unraid doesn't use a key exchange or an encrypted file-hash check over TLS or anything of that nature. Again, I'd advise implementing an adaptation of the CSF/LFD scripts built into the OS. At the very least you need to inform the user when files on the USB stick have been altered by something other than Limetech themselves: directory and file watching, with a default set of dirs and files. Thus far I've not seen Unraid do anything of that nature. For now we should see Unraid as a DMZ and treat it as such, but still, the easily removable flash drive with its firmware, hmm. For our use here at home I'm not worried, because our Unraid hardware is very well hidden, but any USB stick that can be pulled out, which itself isn't encrypted in any way, and on which you can easily put something that boots your remote exploit, is not 'secure': you take it out, put it in a laptop or smartphone, change the config or plant the exploit, put it back in, power-cycle the server, and you have your remote root access, done in one minute flat. Just sayin'.
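     Lacking built-in file watching, a cron-driven checksum manifest is a crude stand-in for the alerting I'm describing. A sketch with illustrative paths (the real flash is mounted at /boot; flash-demo and flash.sha256 are made up for the example):

```shell
#!/bin/sh
# Sketch: detect out-of-band changes to the Unraid flash drive.
# Build the baseline once after a clean install/upgrade; run the check from cron.
BOOT=./flash-demo          # stand-in for /boot
MANIFEST=./flash.sha256

mkdir -p "$BOOT"
echo "default menu.c32" > "$BOOT/syslinux.cfg"   # demo file for the example

# baseline: checksum every file on the flash
find "$BOOT" -type f -exec sha256sum {} + | sort > "$MANIFEST"

# later, from cron: verify the manifest and alert on any drift
if sha256sum -c --quiet "$MANIFEST"; then
  echo "flash unchanged"
else
  echo "ALERT: flash modified"
fi
```

     In a real deployment the alert line would mail the admin or hit a notification hook; it still won't stop the pull-the-stick attack, but at least the tampering gets noticed.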
  7. Eeh, no, and no. Anything involving translation of networks, ports, bridging, or hops in between will be slower than, for example, using a localhost MariaDB server directly from Nextcloud. Likewise, you don't even need Unix sockets for nginx+php to perform faster when it serves Nextcloud locally instead of from an external container and/or through a proxy. No, it's not. Inside a VM everything can be isolated and tracked; a container is open to whatever exploit can be run directly against the apps it serves. https://security.stackexchange.com/questions/169642/what-makes-docker-more-secure-than-vms-or-bare-metal We're in 2019 now; even the cheapest boards and CPUs on the market today offer HVM and IOMMU by default.
  8. Even coming from Linux, it's rather peculiar to have such importance placed on a USB stick, and it seems insecure: you grab the stick, change the config, and you have full remote access to everything on the server. I don't know, I would not design it this way either. I'm experienced in locking down co-located hardware in ways that prevent datacenter personnel or thieves from easily getting access to the data and servers; a USB stick with the bootloader and full config is not exactly how I would do that..
  9. I didn't need an explanation of how to mount an external dir in a VM on Unraid (there's only one way to do that for Linux on Unraid); what I wanted to know is how the performance of an 'external' storage dir on Unraid compares between access from a VM and access from a Docker container. I did not make up my mind before asking, which is why I asked. I would tend to favor a container if it yielded A LOT faster disk-IO for Nextcloud, but thus far (I've been testing this on my Unraid side by side just now) the VM wins this battle. Probably because in the VM Nextcloud, MariaDB, nginx and php-fpm are all accessed directly, without network protocol conversions or port conversions: I can run nginx on the VM's IP on port 443 and nothing needs to be redirected or proxied.
  10. I still don't see much of a difference there. Like I wrote, I maintain a lot of Docker instances at work, as well as a lot of VMs, VPSs and bare-metal servers. For someone like me, making sure apps in a VM don't crash is no harder or easier than doing the same for a container. In fact, combining containers like nginx, letsencrypt, mariadb and nextcloud is much harder to maintain and more time-consuming than having those four in one VM. In VMs it's also easier to control resource use, as in, have them not accidentally take over all the resources of the hardware. I posted here because I wanted to know what others think about the differences while using them, and whether I was missing something, but I don't see valid reasons not to pick a VM. I do have some Docker containers on my Unraid machine, like syncthing, but mostly because they don't need isolation. Containers are interesting if you're a developer changing config often; if not, you should prefer isolation (more secure) and use a VM.
  11. Like I wrote earlier: someone needs to bench-test disk-IO/speed for one and the same service from within a Docker instance to/from the mounted data/content, versus the same service from within a VM to/from the mounted data/content. I currently don't have enough time to do benchmarks between the two on Unraid, but I'd be very interested to see the results for mounts. In the example mentioned above: the Nextcloud data storage folder, which in a VM is reached through "trans=virtio,version=9p2000.L,_netdev,rw 0 0" in fstab. I'm curious how that would fare against a Docker-mounted data folder. I've been working in IT for 3 decades now, I'm a CISSP and CHFI, and I do server admin work for a hosting/webdev company. I can message you my LinkedIn page if you're interested..
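     For what it's worth, the benchmark I keep asking for can be roughly approximated with plain dd, run identically at the 9p mount inside the VM and at the bind mount inside the container. A crude sketch (the TARGET path is illustrative, and a proper tool like fio would give far more trustworthy numbers than dd):

```shell
#!/bin/sh
# Sketch: crude sequential-IO probe. Run at the same mount point in the VM
# and in the container, then compare the MB/s figures dd reports.
TARGET=${1:-.}             # e.g. /mnt/nextcloud-data inside the guest
cd "$TARGET" || exit 1

# sequential write, flushed to disk so the page cache doesn't flatter the result
dd if=/dev/zero of=bench.tmp bs=1M count=16 conv=fdatasync 2>&1 | tail -n 1

# sequential read of the file we just wrote
dd if=bench.tmp of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f bench.tmp
```

     This only covers sequential throughput; the small-file, fsync-heavy pattern a Nextcloud database produces would need fio's random-IO job files to measure properly.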
  12. And I don't see the benefit of running this in separate containers; it just makes no sense, on the exact same hardware no less. Why would you want to create networks between Docker containers so they can communicate with the webserver (nginx) and the database, when you can have it all localhosted within one VM? Again, I mentioned all the arguments in favor, like the CSF/LFD firewall I can run it all behind. I have yet to see one argument in favor of using containers for a Nextcloud install. No, you also run a webserver for it, nginx preferably, and you run Let's Encrypt for it. Really, I've seen that video by SpaceInvader One and it's like opening a can of worms. I can have Let's Encrypt use the Cloudflare API for DNS verification, all within one VM; much easier and cleaner too.
  13. What do you mean by that? How does one 'pass through the shares'? Also, why are you assuming Docker speeds are faster? Do you have benchmarks that prove that?
  14. Yes, but it allows me to do entire-system snapshots twice a day or more. Please allow me to chime in with this: https://security.stackexchange.com/questions/169642/what-makes-docker-more-secure-than-vms-or-bare-metal If you require constant interaction between services, I'd say concentrating them within a VM is the better option, especially with the easy mount options used for Unraid's VMs. Someone needs to bench-test disk-IO/speed for one and the same service from within a Docker instance to/from the mounted data/content, versus the same service from within a VM to/from the mounted data/content. I'd be very interested to see the results for Unraid mounts in such a benchmark.
  15. I'm not sure what to do. I currently have a plain Debian server, no virtualization. On that server I run nginx with php-fpm serving Nextcloud for 7 people to both LAN and WAN, so it also runs MariaDB/MySQL. I run dnsmasq on it as a speedy LAN DNS resolver and use it to filter/block (similar to Pi-hole). It serves as a backup for remote servers in a datacenter and as a secondary MX, a failover for when the external mailserver is down. It also runs a few tiny opendir websites and syncthing, mostly auto-backing-up mobile devices and desktop stuff, all behind Let's Encrypt certs. It runs CSF/LFD for iptables firewall control, and I share its block/allow lists with other external servers. And last but not least it also runs several rsync tools and scripts. It has been doing most of this for years without much trouble.
      But now Unraid has arrived, which I very much prefer over that server's hardware control (its RAID is useless, slow, not reliable, and when a disk fails it takes ages to fix things). Plus, Nextcloud is getting used more and more recently, as my users slowly move away from other, less private, space-constrained cloud services. I needed more diskspace and more powerful hardware, so I built a new machine and plan on selling the old hardware. Let's assume there is plenty of RAM and a fast M.2 SSD cache, which makes me wonder:
      - Is it smarter to run a Debian VM with Nextcloud on it, and most of the stuff mentioned above on that same VM?
      - Or is it smarter to run it all in separate Docker instances? If so, why? I've been working a lot with Docker containers at work, and I have to say I really don't like them all that much. Maintenance for updating is never as easy as just running apt update && apt upgrade on a Debian instance, and I've seen Dockers fail far more often than plain server instance services, to be honest.
      - For Nextcloud and its database, why would running them in separate containers make them any more efficient at interacting while on the exact same hardware?
      - Is there an obvious advantage in disk-IO for Docker over a VM that I'm missing? Or anything else?
      And then there are the security implications of Docker containers: I'd need to firewall all of it, whereas when it's all on that one VM, I just run CSF/LFD on it and I'm done. I already installed syncthing as a Docker instance on the Unraid machine, since that seemed easier to maintain; syncthing is a bit of a GUI-based tool anyway and doesn't fit a 'server' OS. But I still have to migrate most of the other stuff, and I was tempted to just copy the way Nextcloud is running now: on one server with nginx, mariadb, Let's Encrypt through Cloudflare DNS, etc.