RT87

Members
  • Posts: 11

  1. Could this be used to implement a feature I requested here:
  2. Hi, I would love to see an additional "cache copier/syncer" alongside the existing mover. What exactly do I mean by that: say I only want to use one large SSD cache drive (instead of two or more for array-like redundancy, since large SSDs are still somewhat pricey and, of course, take up space and ports). I would like to have all my docker/VM/often-used data on the cache drive, but I would also like that cache data backed up to the standard array/data drives on a regular basis. Hence, I would like a nightly backup to the array to occur, but by copying/syncing instead of moving. Ideally, the share used for this (a "cache-sync share", if you will) would be read-only to all users, so only the syncer can write to it (root definitely shouldn't, but could). However, users/root CAN use it to restore the data (in case of SSD failure) or, of course, delete it entirely. This would be very helpful, because I could save on SSDs without sacrificing redundancy. I personally do not care about the performance of the syncer, because this job would run during low-to-no-load times. I also know that I can probably build this very feature on my own, using the "cache: only" setting + rsync/rclone or something similar (most basic: cp -uvr + crontab + removing data singles that lack a cache equivalent; a minimal sketch follows after this list), but I really feel like this is something many people would benefit from. Maybe this request will get a few "+1"s, which would be nice, but otherwise I would also be happy to just script-kiddie implement this on my own ;).
  3. +1 for docker compose. Not for the feature set, but for universality and ease of use/migration.
  4. I agree, so +1 from my side. Yes, I know I can pretty much do the same thing using Nextcloud or something similar, but it would still be a nice-to-have and would reduce boilerplate.
  5. Ah, too bad... hopefully it will soon! Since I am "forced" to use IPv6 for such matters (I despise paying my ISP a premium for an IPv4 ^^), I feel a bit discriminated against ;).
  6. Oh, I see... and I agree, handling the entire unraid server as an "exposed host" would be most unwise, especially if you are running privileged containers and such (which I am). But no, I have only opened port 443, nothing else. My router has a separate checkbox for allowing pings to a specific machine, which (to my knowledge) doesn't involve ports at all. So thanks for clearing that up! Back to my original question: does My Servers support the IPv6 protocol (or is that planned)?
  7. Well, given the obstacles, they should have a hard enough time. Besides, they would need to figure out my IP first, and given that the IPv6 address space is "reasonably hard to scan", I wish them good fortune ;). After all, we're not talking about the IPv4 pool of AWS, where bots hacking sandbox systems in that IP range have been common for years and years. But still, how exactly is this setup different from what My Servers is trying to do? What precisely makes the My Servers approach "secure"? Or am I completely mistaken?
  8. Nope, no VPN. However, I have obviously put a password in place, plus it's a lab with no important data whatsoever, if that's your concern. Nonetheless, I thought this was basically the idea behind this feature: access to my entire server from anywhere, was it not?
  9. There appears to be a bug in this package: if I use the "br0" interface for the connection, i.e. the container obtains its own IP address from the DHCP server/router on the network in which unraid lives, the http/https ports that I specify are ignored. The default values, e.g. 8080, are used instead, which is bad if you just want normal http/https behaviour (but with the container's own IP) and your router doesn't allow port forwarding to another port. (A sketch after this list shows the likely cause.)
  10. Hi, I'm unsure whether this issue has already been addressed (if so: sorry!), but online my access is marked as "local access", although I have configured the port forwarding etc. (clicking on this link then simply leads me to my local IP address). However, my ISP only gives me a DS-Lite connection, i.e. I do not have a true IPv4 address, only the one of my ISP's gateway (for which port forwarding obviously will not work). To work around this, I use a public IPv6 address for my unraid server, for which port forwarding is indeed configured (the server can be pinged and the UI can be accessed from the internet via this IPv6 address). Could it be that the "My Servers" service does not (yet) support IPv6?
  11. Yeah, well... that's what happened. Unfortunately, having a valid cert for the <myhash> subdomain apparently caused my unraid server to perform redirects. Which DOES make sense, but only if said record exists. In addition to fixing the vanishing-record problem, I would propose the following (an illustrative nginx fragment follows after this list):
     • Make the SSL connection optional, i.e. if a user wants to use http instead of https, let him. For local connections in particular, that is less of an issue.
     • Make the redirect optional, i.e. if the server is addressed via a domain that does not match its cert, do not "force" the redirect. The user will have to deal with the ensuing warning, but he can at least still access his server.
     • Maybe: allow specifying an explicit redirect address (again, never mind the cert).
     • Maybe: make the redirect module/nginx check whether there actually IS a DNS record that matches the redirect; if not... do NOT redirect ;). Checking "once in a while" would be a start.
  This would save your customers the hassle of having to enter the https://<my-local-ip> alternative, avoiding any DNS resolution and hence the redirect. Btw: great product, still on the trial version, but I really love it so far; TrueNAS and Proxmox can't compare! Keep it up =)!
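
Regarding post 2: a minimal sketch of the DIY copy-instead-of-move approach the post describes, assuming the cache-only share lives at /mnt/cache/appdata and the backup lands on an array-only path such as /mnt/user0/cache-backup (the paths, log location, and schedule below are illustrative assumptions, not anything Unraid ships):

    #!/bin/bash
    # Nightly cache -> array sync (illustrative paths; adjust to your shares).
    SRC="/mnt/cache/appdata/"        # cache-only share (assumption)
    DST="/mnt/user0/cache-backup/"   # array-only target that bypasses the cache (assumption)

    # -a preserves permissions/ownership/timestamps; --delete removes array-side
    # files whose cache equivalent is gone (the "remove data singles" step).
    rsync -a --delete --log-file=/var/log/cache-sync.log "$SRC" "$DST"

Scheduled via crontab, e.g. 0 3 * * * bash /boot/config/cache-sync.sh (path again illustrative), this runs the copy during the low-to-no-load hours the post mentions.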
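Regarding post 7: the "reasonably hard to scan" claim can be made concrete. The entire IPv4 space is 2^32 ≈ 4.3 × 10^9 addresses and can be swept in under an hour with tools like ZMap, whereas a single residential IPv6 /64 prefix alone contains 2^64 ≈ 1.8 × 10^19 addresses, so brute-force discovery of one host there is impractical. That said, this is obscurity rather than authentication, which is presumably where the My Servers layer is meant to differ.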
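Regarding post 9: the observed behaviour matches how Docker treats macvlan-style networks such as Unraid's br0. Port publishing with -p only applies to bridge networking; on a macvlan network the container gets its own LAN IP, -p mappings are silently ignored, and the service answers only on its internal port. A sketch with a placeholder image and IP:

    # bridge networking: -p remaps the port, host:1443 -> container:8080
    docker run -d --network bridge -p 1443:8080 some/image

    # macvlan (br0): the container gets its own LAN IP; the -p flag has
    # no effect, so the service is reachable only on its built-in port (8080)
    docker run -d --network br0 --ip 192.168.1.50 -p 1443:8080 some/image

If that is what the template does under the hood, the http/https port fields cannot take effect on br0; the port would have to be changed inside the container's own configuration instead.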
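Regarding post 11: an illustrative nginx fragment (not Unraid's actual config) of what an optional, hostname-gated redirect could look like; myhash.unraid.net and the document root are placeholders:

    server {
        listen 80 default_server;
        server_name _;

        # Redirect to HTTPS only when the request already uses the
        # certificate's hostname; plain http://<my-local-ip> keeps working.
        if ($host = "myhash.unraid.net") {
            return 302 https://$host$request_uri;
        }

        root /usr/local/emhttp;   # assumed webgui document root
        index index.php;
    }

This covers the second bullet (no forced redirect for non-matching names); the "check DNS once in a while" idea from the last bullet would need an external scheduled job that rewrites the config, since nginx does not re-resolve records on its own.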