Glassed Silver

  1. Hey, thank you for providing this! One question, though: how do I best tackle this error as displayed on the status page? This old thread still leaves me pondering a bit. Cheers, and thank you again so much!
  2. Can't get wallabag to mass-import my Pocket-exported JSON... I tried using your supplied redis as well, but they don't seem to work hand in hand... Has anyone accomplished a mass import into wallabag before who could give me something like a how-to? I'm a complete noob with redis as well, so ELI5 please. Honestly, I don't think I'll need redis anymore once I've used it to feed my wallabag instance, so we can keep it simple and single-purpose if that helps. Cheers, and thank you to anyone who is interested in helping me out here!
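For reference, this is the rough shape of an async Pocket import with redis as described in wallabag's import documentation (paths are relative to the wallabag install, and exact flags can differ by version, so treat this as a sketch rather than a recipe):

```shell
# 1. Enable the "Redis" import option in wallabag's internal settings
#    (admin UI), and point wallabag at your redis host/port.
# 2. Start a worker that consumes queued Pocket entries:
php bin/console wallabag:import:redis-worker pocket -e=prod -vv
# 3. Trigger the Pocket import from the web UI; the worker then
#    drains the redis queue in the background.
```

Without a running worker the entries just pile up in redis, which can look like the import "doesn't work".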
  3. Yes, there is... (edit: well... sure, they would run in setups other than unRAID as well, but on others they wouldn't exist, because you'd just use docker-compose to set up the stack...) There are unRAID-specific bundled images that are bundled as ONE image only because the "stack", as Portainer calls it, is not natively supported from within the UI. The UI that we praise - for good reasons - for having upsides over Portainer like nice metadata presentation, icons, easy updates, better handling of appdata management, etc... If you want the simplicity of un
  4. Rework is ongoing:
  5. Did you try the default user and no password?
  6. One BIG upside to supporting docker-compose would be a heavily decreased "reliance" on software stacks being bundled into unRAID-specific (and hence barely ever official or verified) images. Why might that be a concern?
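To illustrate the kind of "stack" that currently has to be bundled into a single unRAID image: a minimal docker-compose sketch with two cooperating services (images and service names here are just examples):

```yaml
# Illustrative only: a two-service stack that docker-compose expresses
# natively, but that unRAID's UI has no first-class concept for.
version: "3"
services:
  app:
    image: wallabag/wallabag   # any app that needs a backing service
    ports:
      - "8080:80"
    depends_on:
      - redis
  redis:
    image: redis:alpine
```

With compose support, each service would keep its official, separately updated image instead of being baked into one community bundle.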
  7. Will the CA template get upgraded to 2.x anytime soon?
  8. I'm having a bit of trouble setting up wallabag to use my redis container. Does anyone else have this configured successfully? All I get is this message: 500: Internal Server Error php_network_getaddresses: getaddrinfo failed: Name does not resolve [tcp://redis:6379]
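For anyone hitting the same wall: that getaddrinfo error usually means the wallabag container simply can't resolve the hostname `redis`. On unRAID's default bridge there is no DNS by container name, so either put both containers on a shared custom docker network or point wallabag at the redis container's IP. A sketch of the relevant settings (the SYMFONY__ENV__* variable names follow the official wallabag docker image's convention; verify against its README):

```yaml
# wallabag container environment (assumption: official wallabag image)
environment:
  - SYMFONY__ENV__REDIS_HOST=redis   # must resolve from inside the
                                     # wallabag container; use the redis
                                     # container's IP if there is no
                                     # shared user-defined network
  - SYMFONY__ENV__REDIS_PORT=6379
```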
  9. Yeah, I've looked at the files it puts in there before, and privacy-wise there is a lot that isn't anonymized. Which files from there are the ones you really need? Because that's a LOT of stuff for me to go through and double-check. Cheers. PS: That's actually the reason why I never asked for help here in the first place, once I saw that "anonymize" doesn't work to my expectations...)
  10. Hey guys, so apparently, even though I don't have a cache drive (yet) in my server, Fix Common Problems reports that I have appdata on the cache anyway? I do wonder what that's all about, because to my understanding, if I were to reboot now while I have some appdata in a cache that isn't cache DISKS but just... temporary storage, then it gets wiped after a reboot. (Something I suspect happened before on one of the 6.9 betas, where I lost a lot of app settings... I didn't have a backup for the appdata yet, yes. My fault, I am not complaining about that. But befo
  11. This should help you get there: and: Honestly speaking, I thought I had set up a password, but I had not. Oops. I'm in what constitutes basically a single-user network; nothing is exposed to the web, so this being the one application not guarded by a password didn't irk me all that much yet. I'd still love to get there, because I'm a believer in a "secure-by-default" philosophy. (And yet I run unRAID, hehehehe...) The reason I thought I had set up a
  12. Ah, the command line sounds like it'd be more of an edge case and hence less well supported... not really feeling like going down that route. So how about a scenario where unRAID notices a difference between the two drives? How are errors handled? Because if full physical drive failure is the only thing I'd be protected against, that'd be a bummer... I'm already a bit puzzled about what to do when unRAID sees sync errors on my array while all drives are in good health... I run another parity check with "fix errors" and keep going, hoping nothing blows up. (Not that I'm indifferent to my
  13. I know about these things, but I honestly don't really care about the performance of most of my dockers. Back to my question though: no matter what capacity I invest in, is a triple-drive pool ripe enough for production? Which of them is better, dual or triple? I'm talking strictly about reliability here. I don't care whether my YouTube downloader feeds into 100MB/s HDD space or SATA-saturating SSDs, or ones 2/3 saturated. My internet connection is 100MBit/s. Now I know there are still many other apps that could greatly improve, and I would rather have wiggle room anyhow, so th
  14. Hey guys! I'm considering going from no cache disks to triple cache disks. I'm playing with the thought of getting 3x 860 EVOs with 500GB each. That'd yield me 750GB of usable space. I'm not worried at all about losing some usable space, since it gives me parity on the cache drives through btrfs pooling. Now the question is: for more, but not that much more, I could get 2x 1TB drives, which would be mirrored and yield me 250GB more space, but I'd be in the conundrum of "which drive to trust if one of them throws errors" (basically the old RAID-1
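The capacity math above can be sketched like this (a simplification assuming equally sized drives; btrfs's chunk allocator handles mixed sizes differently):

```python
# Sketch: usable capacity of a btrfs raid1 pool, where every data and
# metadata chunk is stored on exactly two drives. Figures in GB.
def btrfs_raid1_usable(drive_sizes_gb):
    """With the raid1 profile, each chunk is mirrored once, so usable
    space is roughly half the pool total (exact for equal-sized drives)."""
    return sum(drive_sizes_gb) / 2

triple = btrfs_raid1_usable([500, 500, 500])  # 3x 860 EVO 500GB
dual = btrfs_raid1_usable([1000, 1000])       # 2x 1TB classic mirror

print(triple)         # 750.0
print(dual - triple)  # 250.0
```

This matches the figures in the post: the triple pool nets 750GB, and the 2x 1TB mirror nets 250GB more, at the cost of the "which mirror do I trust" question (which btrfs checksums are meant to answer).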