
Glassed Silver

Members
  • Content Count

    38
  • Joined

  • Last visited

Community Reputation

4 Neutral

About Glassed Silver

  • Rank
    Advanced Member

Converted

  • Location
    Germany
  1. Does COPS support two-way sync of reading progress? Bonus question: which app on Android would best keep my device and server in sync? I know about Calibre Companion, but as far as I can tell a companion app that isn't a reader itself will always rely on the reader app relaying reading progress to it... I've been using computers long enough to know that mixing too many programs tends to create a lot of friction and cases where you need to troubleshoot, or things just aren't optimized... yadda yadda...

     tl;dr: I think what I'm looking for is a CC-like app that is also a good reader app. When reading with Moon+ I noticed it saved neither page progress nor bookmarks... which kind of defeats the purpose of integrated syncing if you ask me, and Moon+ is apparently what "everyone", the CC makers themselves included, heavily focuses on. Is this as good as it gets? Triggering "read progress: complete" by hand on every book I finish, and, if my Android device gets lost, having to figure out which books I had been reading, where I stopped, what I bookmarked, etc.? That's before even considering multi-device usage.

     Trust me, I really searched, but I cannot seem to find satisfying options, and every app description I read talks all about what is possible but won't tell you the little details... And as we all know, the devil is in the details. Sorry if this is kind of hijacking the thread, but I'm fairly new to Calibre and not quite sure how specific this might be to COPS, so I figured I'd best ask in the most specific place that applies to me.
  2. Hmmm, on that note: if I connect to my VPN per Docker container, that means I'm multiplying my VPN overhead, depending on binhex's release schedule (not implying anything; I'm just totally new to unRAID AND Docker, so I'm throwing out what crosses my mind), and then there's the issue that not every desirable application is available as a binhex VPN docker.

     I've seen that you can use a docker like that as a proxy for other dockers, but my line of thought is that I'm then relying on the application within a container to apply the proxy connection, leaving possible (unknown) background processes un-routed through the proxy. (A container-level alternative is sketched below.) The beauty (but also a pain point in other ways) of VPNs on a classic desktop is, after all, the one-setup experience: connect once, route everything or nothing. The major application missing an obvious VPN path for me right now is jDownloader.

     Theoretically I could just set up a VM, install my VPN provider's application in there, add the applications I want to the mix, and have them all download to a share. Waaaaaaaaay less elegant, but at least a catch-all approach; the VM itself would obviously be configured with a firewall. Is that a lot of overhead? Sure is. Is that a great concern? Well... 16 physical cores and 48GB of RAM say: we can do it. Despite all of that, I'd still favor the leanest approach for obvious reasons. Surely there's something I'm missing or something I misunderstood?
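     One container-level alternative to the application-proxy approach, which would also catch background processes: attach the whole second container to the VPN container's network namespace. A minimal sketch, assuming a binhex-style VPN container is already running under the name vpn (the jDownloader image and all names here are placeholders, not a setup I have tested):

         # Attach a second container to the VPN container's network namespace.
         # Every process inside 'jdownloader' then uses the VPN's interfaces,
         # so there is no per-application proxy setting left to miss.
         docker run -d --name jdownloader \
           --network=container:vpn \
           -v /mnt/user/downloads:/output \
           jlesage/jdownloader-2

     One catch: with a shared network namespace you can't publish ports on the second container, so its WebUI port has to be opened on the VPN container instead.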
  3. Can't get the Minecraft server to run properly. The log is filled with this error or variations of it: [Server thread/ERROR]: java.lang.OutOfMemoryError: GC overhead limit exceeded. The server did show up in my Minecraft client (so broadcast works), but connecting to it failed as well. Anything obvious I missed or should check? Settings I used: latest Mojang server build, with worlds of various sizes imported from my local client. (The goal is to make this server basically a 1-2 user environment, so my Minecraft clients across different OS's and computers act as "thin clients" and I won't have to keep my worlds in sync by hand anymore.)

     Edit: Okay, so I did some further Googling, and as it turns out, the fix was raising the values for Xmx and Xms way above what the (apparently too old) tutorial I checked out suggested. My values are now 4096MB for Xmx and 512MB for Xms (see the launch line below). That fixed it for vanilla servers based on the latest Mojang build. Glad I can now finally centralize my Minecraft experience and even let any folks coming over to my house play together. Good times!
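     For reference, the launch line those values translate to (the jar name depends on your server build; server.jar here):

         # -Xms is the initial Java heap, -Xmx the ceiling; a ceiling that is
         # too low is exactly what produces "GC overhead limit exceeded".
         java -Xms512M -Xmx4096M -jar server.jar nogui

     If your Docker template exposes Xms/Xmx as template variables instead of a raw command line, the same values go there.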
  4. 48GB of DDR3 ECC UDIMM. Will expand it to 96GB later this year, I think. It'll be overkill, but I like having room to grow, and so far I've only populated 3 of my 12 slots. Enterprise gear is fun.
  5. I typo'd. What I meant to write in my second sentence was that creation dates get screwed: basically they get equalized with the modification dates (and times, obviously). Other than that, the lesson remains the same: don't touch AFP and you get a fairly okay experience cross-plat. I guess it pays off that at some point I started using the initial date as part of some file names; that still leaves some gaps, but alright. Going forward, with the new files coming in from Windows or Linux, I guess this shouldn't be an issue anymore.

     Don't forget the setting to hide dot files and folders (see the snippet below). Works wonderfully! Windows doesn't display them, but you can still view them if needed with "Show hidden files and folders" in the Windows Explorer settings. And a Mac can still access them and store its auxiliary data there just fine. If you ask me, this should be the default setup for anyone doing cross-plat. So far very pleased!
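     For anyone hand-rolling this outside unRAID's GUI, my understanding is that the setting maps to this Samba option (my assumption; unRAID may implement it differently under the hood):

         # smb.conf: flag Unix dotfiles with the DOS "hidden" attribute so
         # Windows Explorer skips them by default; Macs can still use them.
         [global]
             hide dot files = yes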
  6. How is this not a thing? The tiniest of tiny Linux distros out there will include a file manager, because guess what, it's VERY handy... Right now I'm looking at my "root share", and although I gave the user I'm browsing as read and write access to the specific shares, when browsing through the root share I don't have that access consistently. This is especially cumbersome when trying to move stuff between shares. Don't MC me. Is it a PEBKAC error? Oh, I bet it is. Do I feel like troubleshooting silly permission problems when all I want to do is move stuff around so I can get on with my life and my newly set up server, which still has a bucket list of things that need to be worked on? Well, no, of course not.
  7. Update: Enhanced OS X interoperability solved my issues; I get tags, Get Info file descriptions, etc. (A guess at what the toggle does under the hood is below.) Creation dates still get screwed, but I also discovered that if I put a file into a share over AFP and then access it over SMB, whether from a Mac or from Windows, that date gets screwed either way... So, since I want to rely less on my Mac anyway and use Windows primarily in the future, I guess there simply is no way whatsoever to keep creation dates intact, no matter the route you take. Guess I'll have to pull this tooth already and be done with it. Sucks, but so be it.

     At least SMB-transferred files retain their tags in macOS when touched by Windows. (BIG ASTERISK: e.g. an .rtf document created in TextEdit on the Mac and then opened and saved again in Word is saved as a new file, so to speak, which gets rid of any extended proprietary macOS metadata. Meanwhile a .txt stays unchanged as far as that metadata is concerned, at least if I remember correctly; I did some random testing like that the same night I last posted.) Bottom line: SMB with Enhanced interoperability activated, and then completely forgetting AFP exists, is the way to go if you want to work with the files on both macOS and Windows. Otherwise, use only AFP at all times; iirc that got a little speedier too after upgrading to unRAID 6.7.
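     As far as I can tell, the "Enhanced OS X interoperability" toggle corresponds to Samba's fruit VFS module; a hand-written smb.conf equivalent would look roughly like this (my assumption, not copied from unRAID's actual template):

         # vfs_fruit stores macOS metadata (tags, Finder info, resource forks)
         # in streams/xattrs, so Macs get it over SMB without needing AFP.
         [global]
             vfs objects = catia fruit streams_xattr
             fruit:metadata = stream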
  8. Haven't tested these so far, just wanted to say thank you for providing all of these! Let it be known that I very much appreciate the prospect of letting my unRAID take care of practically anything. Makes me wonder: do I even need a desktop computer anymore? *snickers* (Well, strong yes, but between this and being able to run games on the server itself with some catchy PCIe pass-through... the latter of which I don't intend to implement for a while, though.)
  9. Well, that's certainly an option, true, but it's still quite the inconvenience. As for the 6.7.0 update: that's what I'm running, and I'm experiencing this problem regardless. So unless we have different things going on triggering the same symptom, I think it's fair to say it's unfixed.
  10. I still have my old docker appdata folder for Plex, so I could compare the two folders. Any file in particular I should take a more intensive look at? Like a file that configures the internal webserver or something? Before the migration I could access the WebUI just fine...

     Edit: I think I meant too well for my Plex config: I carried over ALL entries of the plist to the XML. The conversion itself was alright, because I did it manually one by one, but I should have replaced only the values of the entries that the default Preferences.xml file already had. After that, fix permissions (unRAID > Tools > New Permissions; select all disks, and feel free to only fix the appdata share; a rough command-line equivalent is below). Fire up Plex, boom! Works! The rest of the guide I linked works perfectly fine. Importantly, take care of the location of your media content as outlined there: keep the old places in the settings, let it all scan, and then I think you're free to remove the old locations.

     Now my weekend project shall be to migrate my iTunes library to Plex. *gulp* Going cross-plat, cross-device, cross-application, cross-anything. That's gonna be a wild ride.
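     For anyone who prefers a shell over Tools > New Permissions, my understanding is that the tool boils down to resetting ownership and modes to unRAID's defaults; a rough equivalent for just the Plex appdata (path assumed, adjust to your share layout):

         # unRAID's default owner for user shares is nobody:users.
         chown -R nobody:users /mnt/user/appdata/plex
         # Open up directories and files in the same spirit as New Permissions.
         find /mnt/user/appdata/plex -type d -exec chmod 777 {} +
         find /mnt/user/appdata/plex -type f -exec chmod 666 {} +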
  11. I feel like a child relearning computing all over again... There are just 2 dockers that have worked frictionlessly so far... That being said, this might not be a Docker issue. Basically what happened is this: I have a PMS already set up on my Mac, and now I want to move its settings, play counts and media to my new unRAID server and use Plex in a docker, this docker. I followed this tutorial: https://forums.plex.tv/t/migrate-from-osx-to-ubuntu/168608/13

     After restoring my profile folder to the correct location and putting my media in the share that is mapped to the container path /media (mapping sketched below for reference), I cannot access my server anymore whatsoever. It's definitely started and stays started, but when I click on it and go to WebUI, all I get is a nasty "Cannot connect" page. Before restoring my old profile, the server WAS accessible using the same address, port and browser constellation...
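     For context, the mapping I mean, expressed as a plain docker run (the paths are from my setup; the image and its /config convention follow the official plexinc/pms-docker image, your docker may differ):

         # Host share with the media, visible inside the container as /media;
         # appdata holds the restored Plex profile folder.
         docker run -d --name plex \
           --net=host \
           -v /mnt/user/appdata/plex:/config \
           -v /mnt/user/media:/media \
           plexinc/pms-docker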
  12. How do you ignore something like this? Usually you need to do something rather important when you go into the terminal, don't you? Personally, I'm trying to migrate my Plex library to my new unRAID server, and getting 502'd on my way to the terminal is pretty bad. I usually try to avoid my root share as much as possible; guess I have no choice today... A terminal session should ALWAYS "just work".
  13. The m3u file (with pipes) expects the ffmpeg binary at /usr/bin/ffmpeg (example entry below). I guess that's not where it is, right? The other playlist is RTP-based. I did find this thread: https://tvheadend.org/boards/4/topics/37025?r=37053#message-37053 What I don't get, however, is that when I add a new port in the docker's settings, it does nothing.
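     For context, the pipe entries in the m3u look like this, as far as I understand tvheadend's IPTV pipe syntax (the stream URL is a placeholder):

         #EXTM3U
         #EXTINF:-1,Example Channel
         pipe:///usr/bin/ffmpeg -loglevel fatal -i http://example.com/stream.m3u8 -c copy -f mpegts pipe:1

     So if ffmpeg lives elsewhere in the container, either every entry's path has to change, or a symlink like ln -s "$(which ffmpeg)" /usr/bin/ffmpeg inside the container should paper over it (my workaround idea, not from the linked thread).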
  14. Nope, didn't help. Edit: attached a screenshot of VLC's info dialogue for a sample stream. Maybe it's a codec issue? The problem is going from mux to service; I don't even see a button to map the mux as a service. (But maybe that's just the old, outdated UI in the video tutorial I tried to partially adapt.)
  15. Anyone got this working with MagentaTV in Germany? When I feed it the m3u for MagentaTV from this list, it finds the muxes fine, but scanning FAILs and I cannot create services from the muxes. The same m3u works just fine in VLC, for example.