
Glassed Silver

Members
  • Content Count: 47
  • Joined
  • Last visited

Community Reputation: 5 Neutral

About Glassed Silver

  • Rank: Advanced Member

Converted

  • Location: Germany


  1. Spotted the Valve employee.
  2. Expanding on that, I'd love an easy and obvious path to using SSD storage in a protected way beyond just mirroring two SSDs. Maybe filesystem snapshots backed up at intervals to the main array (a rough sketch of that idea follows after this list). I'm sure I have other questions, but that's all that comes to mind right away.
  3. The Docker file was fine; it was the cache drive's filesystem that needed a scrub.
  4. My first thought is that old snapshots might be keeping the drive full. Cache drives use btrfs, and while I'm new to btrfs myself, I believe I know enough to at least point you in the right direction. This page might help you: http://marc.merlins.org/perso/btrfs/post_2014-05-04_Fixing-Btrfs-Filesystem-Full-Problems.html (the relevant commands are sketched after this list). Let me know if it works.
  5. Maybe it's about time unRAID ushered in a new concept. In my humble opinion, a cache should be a cache, and fast permanent storage should be an entirely different drive. Ideally with an automated way to protect it with parity as well; maybe not on the fly, but at least by saving snapshots to the array. All drives fail eventually, and do I really want to go through the hassle of re-deploying my VM and Docker storage? No, not really...
  6. Were you able to get more than 42TB running OK with the H220, and if so, with which firmware? I'm nowhere near 42TB myself; I'll have about 20TB once the first stage of my deployment (transitioning from external HDDs to unRAID and shucking them in the process) is done, but I'd like to know what the outlook is. Add three 8TB drives, for example, and I'll be there myself. As for fan speed: ProLiants favor higher RPM to begin with, and adding anything the system doesn't recognize as HPE equipment can dramatically drive up the fans. Frankly, you did well with 55%; I suffered 100% for some time. All fixed now, though; my ProLiant loves its H220. I'd have to check my server's current fan speed, but it's alright, and I have no motivation to run the cable for iLO right now.
  7. Hmm, very interesting. Thank you for the explanation. I think it's reasonable to assume that a VPN'd Privoxy docker provides more value for a JDownloader instance than a VM setup. Since all it does is establish connections to one-click hosters, not through APIs but by acting as a browser (AFAIK), it'll be going through that proxy all the time anyway. Either way, I've already set up SABnzbdvpn successfully; the rest will follow. Seems to be working wonderfully!
  8. Interesting. Do you know why traditional kill switches are so unreliable? Mind you, I don't mean a kill switch that kills a specific app when the VPN app sees a connection go down. My experience on my Mac is that I'll often see a page fail to load even before the VPN app tells me the connection died and proceeds to re-establish it. I hear what you say about configuration being time-consuming, and I definitely don't want to sound ungrateful for your dedication and your help to the community; obviously you don't limit your app choice and do all that work for no reason. Still, I feel like I've been using VPN network kill switches all this time under the assumption that they're reliable, and now it's all just a lie? Would it be possible to set up something like a virtual router in a VM that's more reliable and then wire the dockers up to it? (A firewall-based kill switch is sketched after this list.) Again, it's not that I wouldn't like to use your docker apps, but in Germany, for example, one-click hosters are HUGE. I guess that's because they let you monetize downloads aggressively. Obviously I'd love to avoid them entirely, but German torrents often die a LOT faster than English ones, for example. I suspect those issues are similar everywhere outside the English-speaking hemisphere.
  9. Ooof... At what point would it be easier to just have a VPN client container with a built-in firewall and kill switch that I connect official (or any) downloading dockers to as their tunnel? That's all I want. I'd also only need to run one VPN connection at a time and could use it for, well, anything. As it is, I'm limited to HTTPS and to trusting each docker not to leak through background processes or otherwise. I just want to set up qBittorrent, SABnzbd, JDownloader and whatever else I might add in the future, and tell them all at the "system level" within the docker to use the VPN docker's tunnel as their network interface or bust. Then there's no reason to "trust" any given app I may want to tunnel. Or am I missing something here, given the glaring lack of a simple one-stop solution for this? Is there some hurdle? Or am I, at the end of the day, better off setting up a VM with the official client app? I'd love to keep it lightweight and manageable through the Docker section, though (see the network-sharing sketch after this list). Cheers! PS: Would it be possible to add NordVPN to the pre-configured providers?
  10. Does COPS support two-way sync of reading progress? Bonus question: which Android app would be best to keep my device and server in sync? I know about Calibre Companion, but as far as I can tell, a companion app that doesn't do the reading itself will always rely on the reader app relaying progress to it... I've been using computers long enough to know that mixing too many programs tends to create friction and cases where you need to troubleshoot or things just aren't optimized... yadda yadda... tl;dr: I think what I'm looking for is a CC-like app that is also a good reader. When reading with Moon+ I noticed it saved neither page progress nor bookmarks, which kind of defeats the purpose of integrated syncing if you ask me, and Moon+ is apparently what "everyone", including the CC makers, heavily focuses on. Is this as good as it gets? Manually marking "read progress: complete" on every book I finish, and, if my Android device gets lost, figuring out which books I was reading, where I stopped, what I bookmarked, etc.? That's before even considering multi-device usage. Trust me, I really searched, but I can't seem to find a satisfying option, and every app description I read talks about what's possible without mentioning the little details... And as we all know, the devil is in the details. Sorry if this is kind of hijacking the thread, but I'm fairly new to Calibre and not quite sure how specific this is to COPS, so I figured I'd ask in the most specific place that applies to me.
  11. Hmmm, on that note: if I connect to my VPN _per docker_, I'm multiplying my VPN overhead and depending on binhex's release schedule (not implying anything; I'm totally new to unRAID AND Docker, so I'm just thinking out loud), and then there's the issue that not every desirable application is available as a binhex VPN docker. I've seen that you can use such a docker as a proxy for other dockers, but then I'm relying on the application inside each docker to apply the proxy settings, leaving possible (unknown) background processes un-routed. The beauty (but also a pain point in other ways) of VPNs on a classic desktop is the one-time setup: connect once, route everything or nothing. The major application missing an obvious VPN path right now is JDownloader. Theoretically I could set up a VM, install my VPN provider's application in it, add the applications I want, and have them all download to a share. Waaaaaaaaay less elegant, but at least a catch-all approach; the VM would obviously be configured with a firewall. Is that a lot of overhead? Sure. Is that a great concern? Well... 16 physical cores and 48GB of RAM say: we can do it. Despite all of that, I'd still favor the leanest approach for obvious reasons (the network-sharing sketch after this list looks like it). Surely there's something I'm missing or misunderstood?
  12. Can't get the Minecraft server to run properly. The log is filled with this error or variations of it: [Server thread/ERROR]: java.lang.OutOfMemoryError: GC overhead limit exceeded The server did show up in my Minecraft client (so broadcast works), but connecting to it failed as well. Anything obvious I missed or should check? Settings I used: latest Mojang server build, with worlds of various sizes imported from my local client (the goal is to make this server a 1-2 user environment, so my Minecraft clients across different OSes and computers act as "thin clients" and I no longer have to keep my worlds in sync by hand). Edit: Okay, I did some further Googling, and as it turns out, the fix was raising the Xmx and Xms values well above what the (apparently outdated) tutorial I followed suggested. My values are now 4096MB for Xmx and 512MB for Xms (the launch flags are spelled out after this list). That fixed it for vanilla servers based on the latest Mojang build. Glad I can finally centralize my Minecraft experience and even accommodate folks coming over to my house to play together. Good times!
  13. 48GB of DDR3 ECC UDIMM. I'll expand it to 96GB later this year, I think. It'll be overkill, but I like having room to grow, and so far I've only populated 3 of my 12 slots. Enterprise gear is fun.
  14. I typo'd. What I meant to write in my second sentence was that creation dates get screwed up: basically, they get equalized with the modification dates (and times, obviously). Other than that, the lesson remains the same: don't touch AFP and you get a fairly okay cross-platform experience. I guess it pays off that I eventually started using the creation date as part of the file name for some files; that still leaves gaps, but alright. Going forward, with new files coming in from Windows or Linux, this shouldn't be an issue anymore. Don't forget the setting to hide . files and folders (the Samba fragment after this list); it works wonderfully! Windows doesn't display them, but you can still view them if needed via "Show hidden files and folders" in the Windows Explorer settings, and a Mac can still access them and store its auxiliary data there just fine. If you ask me, this should be the default setup for anyone working cross-platform. Very pleased so far!
  15. How is this not a thing? The tiniest Linux distro out there includes a file manager, because guess what, it's VERY handy... Right now I'm looking at my "root share", and although the user I'm browsing as has read and write access to the specific shares, when browsing through the root share I don't get that access consistently. This is especially cumbersome when trying to move stuff between shares. Don't MC me. Is it a PEBKAC error? Oh, I bet it is. Do I feel like troubleshooting silly permission problems when all I want to do is move stuff around so I can get on with my life and my newly set-up server, which still has a bucket list of things that need work? Well, no, of course not.
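
A few sketches for the technical posts above, collected here so the list reads straight through. First, the snapshot idea from posts 2 and 5: nothing like this ships with unRAID, but a minimal cron-able script shows what interval protection for a btrfs cache could look like. The paths, the lack of retention handling, and the assumption that appdata was created as a btrfs subvolume are all illustrative, not unRAID defaults.

```bash
#!/bin/bash
# Sketch only: snapshot a btrfs cache share and copy the frozen copy to the
# parity-protected array. Assumes /mnt/cache/appdata is a btrfs subvolume.
STAMP=$(date +%Y-%m-%d)
SNAP="/mnt/cache/.snapshots/appdata-$STAMP"

mkdir -p /mnt/cache/.snapshots

# Read-only snapshots are instantaneous and crash-consistent.
btrfs subvolume snapshot -r /mnt/cache/appdata "$SNAP"

# rsync from the snapshot, so open files on the live share don't matter;
# the array disks don't need to be btrfs for this to work.
rsync -a --delete "$SNAP/" "/mnt/user/backups/appdata-$STAMP/"

# Drop the local snapshot once it's safely on the array.
btrfs subvolume delete "$SNAP"
```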
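For the btrfs-full diagnosis in post 4, the linked page boils down to a few stock btrfs commands. A sketch, assuming the cache pool is mounted at /mnt/cache:

```bash
# 'df' lies for btrfs; ask the filesystem itself how space is allocated.
btrfs filesystem df /mnt/cache
btrfs filesystem show /mnt/cache

# Old snapshots pin extents: list subvolumes and delete stale ones
# (the snapshot path below is a hypothetical example).
btrfs subvolume list /mnt/cache
btrfs subvolume delete /mnt/cache/.snapshots/appdata-2017-01-01

# Compact half-empty allocation chunks, raising the threshold gradually.
btrfs balance start -dusage=25 /mnt/cache
btrfs balance start -dusage=50 /mnt/cache
```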
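On the kill-switch question in post 8: the reliable kind isn't an app watching the tunnel and reacting after the fact, it's a default-deny firewall that never allowed unencrypted traffic out in the first place (which is, as I understand it, the approach the *VPN containers take internally). A minimal iptables sketch; the endpoint address and port are placeholders:

```bash
# Default-deny: if the tunnel drops, nothing leaks. There is no race window
# between "connection died" and "app noticed", unlike app-level kill switches.
iptables -P OUTPUT DROP
iptables -A OUTPUT -o lo -j ACCEPT

# Permit only the encrypted handshake to the VPN endpoint (placeholder values).
iptables -A OUTPUT -p udp -d 203.0.113.10 --dport 1194 -j ACCEPT

# Everything else must leave through the tunnel interface.
iptables -A OUTPUT -o tun0 -j ACCEPT
```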
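Posts 9 and 11 essentially describe Docker's network-namespace sharing: run a single VPN gateway container and join other containers to its network stack, so every process in them, background or otherwise, either goes through the tunnel or has no network at all. A sketch with hypothetical image and container names:

```bash
# One VPN gateway container; its internal firewall acts as the kill switch.
# Web UI ports of joined containers must be published here, because joined
# containers no longer own a network interface of their own.
docker run -d --name=vpn --cap-add=NET_ADMIN --device=/dev/net/tun \
  -p 8080:8080 example/openvpn-client

# Joined containers share the vpn container's namespace: tunnel or nothing.
docker run -d --name=qbittorrent --network=container:vpn example/qbittorrent
docker run -d --name=jdownloader --network=container:vpn example/jdownloader
```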
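The fix from post 12, spelled out as the server's launch command (heap values straight from the post):

```bash
# -Xms sets the initial heap, -Xmx the maximum. "GC overhead limit exceeded"
# means the JVM spent nearly all its time collecting garbage in a heap too
# small for the loaded worlds.
java -Xms512M -Xmx4096M -jar minecraft_server.jar nogui
```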
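Finally, the hide-dot-files setting from post 14. The underlying parameter is stock Samba; unRAID exposes a toggle for it in its SMB settings, if memory serves. A minimal smb.conf fragment with a hypothetical share name:

```
[media]
    # Gives dotfiles the DOS hidden attribute: Windows Explorer hides them
    # by default, while Macs can still read and write them normally.
    hide dot files = yes
```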