Glassed Silver

Members
  • Content Count: 50
  • Joined
  • Last visited

Community Reputation: 5 Neutral

About Glassed Silver
  • Rank: Advanced Member
  • Location: Germany


  1. I don't want to pass through my USB 3.0 card; unRAID-only is fine. Is disabling IOMMU system-wide or per device? Are there drawbacks, like hypervisor performance? I do have VMs, but they don't need USB 3.
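For what it's worth, IOMMU is toggled via a kernel boot parameter, so it applies system-wide rather than per device. A sketch of what the relevant line in unRAID's /boot/syslinux/syslinux.cfg might look like on an Intel platform — treat this as an assumption, not a drop-in, since the existing append line differs per install:

```shell
# /boot/syslinux/syslinux.cfg -- sketch only; your existing "append" line
# may already carry other parameters that must be preserved.
# intel_iommu=off disables the IOMMU system-wide (use amd_iommu=off on AMD).
label unRAID OS
  menu default
  kernel /bzimage
  append intel_iommu=off initrd=/bzroot
```

A reboot is required for the change to take effect, and PCIe passthrough to VMs stops working while the IOMMU is off.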
  2. Heya guys! I'm trying to add USB 3.0 to my DL380e right now. The card I bought is HP-branded to avoid the fan noise issue us HP users know and hate... Either way, the card I got is the "HP SuperSpeed USB 3.0 PCIe x1 Card" (part no. 663213-001). I installed it and it got recognized in unRAID:
     IOMMU group 23: [104c:8241] 0d:00.0 USB controller: Texas Instruments TUSB73x0 SuperSpeed USB 3.0 xHCI Host Controller (rev 02)
     All fine, all good... or is it? I attach my 3.5" WD Elements drive to it (with external power), but it doesn't work: no new unassigned drive appears. On the built-in 2.0 ports it gets recognized.
     The main reason I bought this card was that it's a nice way to pre-clear a drive for at least one pass at acceptable speed before I go ahead and shuck it. (I'd pre-clear with at least another pass after shucking, but before I risk breaking a tab, despite some training, I'd rather know it's worth pulling apart.)
     Syslog:
     Sep 16 20:19:15 Ahri kernel: ACPI: Early table checksum verification disabled
     Sep 16 20:19:15 Ahri kernel: ACPI BIOS Warning (bug): Invalid length for FADT/Pm1aControlBlock: 32, using default 16 (20180810/tbfadt-674)
     Sep 16 20:19:15 Ahri kernel: ACPI BIOS Warning (bug): Invalid length for FADT/Pm2ControlBlock: 32, using default 8 (20180810/tbfadt-674)
     Sep 16 20:19:15 Ahri kernel: acpi PNP0A08:00: _OSC failed (AE_SUPPORT); disabling ASPM
     Sep 16 20:19:15 Ahri kernel: acpi PNP0A08:01: _OSC failed (AE_SUPPORT); disabling ASPM
     Sep 16 20:19:15 Ahri kernel: pci_bus 0000:1f: busn_res: can not insert [bus 1f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f])
     Sep 16 20:19:15 Ahri kernel: pci_bus 0000:1f: busn_res: can not insert [bus 1f] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f])
     Sep 16 20:19:15 Ahri kernel: pci_bus 0000:3f: busn_res: can not insert [bus 3f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 20-3f])
     Sep 16 20:19:15 Ahri kernel: pci_bus 0000:3f: busn_res: can not insert [bus 3f] under domain [bus 00-ff] (conflicts with (null) [bus 20-3f])
     Sep 16 20:19:15 Ahri kernel: pci 0000:0a:00.0: BAR 6: failed to assign [mem size 0x00100000 pref]
     Sep 16 20:19:15 Ahri kernel: floppy0: no floppy controllers found
     Sep 16 20:19:15 Ahri kernel: random: 6 urandom warning(s) missed due to ratelimiting
     Sep 16 20:20:16 Ahri rpc.statd[2403]: Failed to read /var/lib/nfs/state: Success
     Sep 16 20:20:34 Ahri avahi-daemon[6741]: WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
     Sep 16 20:23:26 Ahri kernel: ACPI Error: SMBus/IPMI/GenericSerialBus write requires Buffer of length 66, found length 32 (20180810/exfield-393)
     Sep 16 20:23:26 Ahri kernel: ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20180810/psparse-516)
     Sep 16 20:23:26 Ahri kernel: ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)
     The last few lines repeat a lot.
     My assumption is the card doesn't like that I haven't attached the SATA power connector, but I read that it should be good to go without SATA power as long as the attached devices don't draw power from the USB port? If that's the case, I'll go ahead and buy the needed SATA slimline to SATA power adapter, but if I can do without, that'd clearly be preferable. Or maybe the card isn't fit for a DL380e G8 with unRAID to begin with, and I can discard the idea right away...? I know it's advertised for workstations and the like rather than DL380 servers, but adding USB 3 at low cost would REALLY be a massive quality-of-life upgrade for me. I could make good use of the second port for something like a backup drive I can disconnect at will, or for pulling data from drives.
     Any ideas? Thank you so much in advance.
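Not an answer, but a few diagnostic commands that might narrow it down, assuming the controller is still at PCI address 0d:00.0 as in the quoted IOMMU output:

```shell
# Which kernel driver (if any) claimed the TI controller?
# A working card should report "Kernel driver in use: xhci_hcd".
lspci -nnk -s 0d:00.0

# Recent kernel messages about the xHCI controller and USB enumeration;
# power or link errors from the card would show up here.
dmesg | grep -iE 'xhci|usb' | tail -n 30

# Tree view of USB buses and attached devices; the card's SuperSpeed
# root hub should be listed at 5000M.
lsusb -t
```

If `lspci -nnk` shows no driver in use, the problem is upstream of the drive entirely; if the drive never appears in `lsusb -t` when plugged into the card but does on the built-in ports, missing auxiliary power is a plausible suspect.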
  3. Hey guys, hopefully this is the correct place to post this... I want to set up a MariaDB database, probably with a Docker container, maybe with a VM; I haven't decided yet. My use case is this:
     1) Slow and steady wins the race. Reliability over milliseconds, for sure. This database will actually be written to very rarely. It's just going to catalogue and log a few personal matters, like keeping a DB of my game collection, play time (yeah, I log such things) and an inventory of a few other things I collect.
     2) I will mainly work on this from Windows (to submit and administer entries) and mobile (Android mostly, to update things like a game's play sessions within a table for that entry).
     3) I am not without DB experience, which is fortunate, I guess. So far I'm cataloguing my game collection with Tap Forms on iOS and SEPARATELY logging my gaming sessions with a..... *takes a deep breath* spreadsheet. This has to change.
     4) I'm looking at MariaDB because it's well supported and has widespread adoption. I need a relational database.
     5) My unRAID server is not always accessible, and to be frank, even if it were, I'd rest easier knowing that it CAN be down for a while and I can still record my sessions and then just push the queries to the DB from my mobile later. Which app would be good for this?
     6) Which desktop app is best for setting up and administering the DB from Windows? All my experience with DBs that aren't contained in the app (like Tap Forms and Bento) has been limited to MySQL and phpMyAdmin. I'd love a desktop app for this, especially one that can import the previous work I invested into Tap Forms, which is based on Apache CouchDB (document-oriented).
     I thought I'd be better off asking you guys, who may have transitioned databases before, instead of jumping in head-first and finding out I'm doing something substantially wrong later on. Thank you so much for reading to the end and helping out.
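A minimal sketch of the Docker route for the use case above. The container name, password, appdata path, and table layout are all placeholders, not a recommendation of specifics:

```shell
# Run the official MariaDB image, persisting data to an unRAID appdata
# share (path and password are assumptions -- adjust to your setup).
docker run -d \
  --name mariadb \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v /mnt/user/appdata/mariadb:/var/lib/mysql \
  -p 3306:3306 \
  --restart unless-stopped \
  mariadb:latest

# A relational sketch of the catalogue: one table for games, one for
# play sessions, linked by a foreign key.
docker exec -i mariadb mysql -uroot -pchangeme <<'SQL'
CREATE DATABASE IF NOT EXISTS collection;
USE collection;
CREATE TABLE games (
  id INT AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(255) NOT NULL,
  platform VARCHAR(64)
);
CREATE TABLE play_sessions (
  id INT AUTO_INCREMENT PRIMARY KEY,
  game_id INT NOT NULL,
  started_at DATETIME,
  minutes INT,
  FOREIGN KEY (game_id) REFERENCES games(id)
);
SQL
```

Splitting sessions out into their own table is exactly what replaces the separate spreadsheet: each play session becomes a row referencing its game.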
  4. Spotted the Valve employee.
  5. Extending on that, I'd love an easy and obvious path to using SSD storage in a protected way beyond just mirroring two SSDs. Maybe filesystem snapshots backed up at intervals to the main array. I'm sure I have other questions, but that's all that comes to mind right away.
  6. The docker file was fine, it was the cache drive’s filesystem requiring a scrub.
  7. My first thought is that old snapshots might be keeping the drive full. Cache drives use btrfs, and while I'm new to btrfs myself, I believe I know enough to at least point you in the right direction. This page might help: http://marc.merlins.org/perso/btrfs/post_2014-05-04_Fixing-Btrfs-Filesystem-Full-Problems.html Let me know if it works.
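A short sketch of the usual btrfs-full first aid from that page, assuming the cache pool is mounted at /mnt/cache as on a stock unRAID install:

```shell
# How much space is allocated vs. actually used? Fully "allocated" with
# low "used" is the classic btrfs chunk-exhaustion symptom.
btrfs filesystem usage /mnt/cache

# List subvolumes and snapshots -- old snapshots pin the data they
# reference, so deleted files may not actually free space.
btrfs subvolume list /mnt/cache

# Repack half-empty data chunks; raise -dusage in steps (50, 70, 85)
# if the first pass doesn't free anything.
btrfs balance start -dusage=50 /mnt/cache
```

The balance can take a while on a busy pool; `btrfs balance status /mnt/cache` reports progress.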
  8. Maybe it's about time unRAID ushers in a new concept. In my humble opinion, a cache should be a cache, and fast permanent storage should be an entirely different drive. Ideally with an automated way to protect it with parity as well; maybe not on the fly, but at least saving snapshots to the array. All drives fail eventually, and do I really want to go through the hassle of re-deploying my VM and Docker storage? No, not really...
  9. Were you able to get more than 42TB running OK with the H220, and if so, with which firmware? I'm nowhere near 42TB myself; I'll have around 20TB when my first stage of deployment (transitioning from external HDDs to unRAID and consequently shucking them) is done, but I'd like to know what the outlook is. Add three 8TB drives, for example, and I'll be there myself. As for fan speed: ProLiants just favor higher RPM to begin with, and adding something the system doesn't recognize as HPE equipment can dramatically screw up your fan speeds. To be frank, you did well with 55%; I suffered from 100% for some time. All fixed now, though; my ProLiant loves its H220 a lot. I'd need to check the current fan speed on my server, but it's alright, so no motivation to run the cable for iLO now.
  10. Hmm, very interesting. Thank you for the explanation. I think it's reasonable to assume that a VPN'd Privoxy docker provides more value than a VM setup for a JDownloader instance. Since all it does is establish connections to OCHs, not through APIs but by acting as a browser (afaik), it'll be going through that all the time anyway. Either way, I've successfully set up SABnzbdvpn already; the rest will follow. Seems to be working wonderfully!
  11. Interesting. Do you know why traditional kill switches are so unreliable? Mind you, I don't mean a kill switch that kills a specific app when the VPN app sees a connection go down. My experience on my Mac is that I'll often see a page fail to load even before the VPN app tells me the connection died and then proceeds to re-establish a new one.
      I hear what you say about configuration being time-consuming, and I definitely don't want to sound ungrateful for your dedication and for helping out the community; obviously you don't limit your app choice and do all that work without reason. However, I feel like this whole time I had been using VPN network kill switches under the assumption they are reliable, and now it's all just a lie? Would it be possible to set up something like a virtual router in a VM that's more reliable, and then wire the Docker containers up to it?
      Again, it's not that I wouldn't like to use your Docker apps, but in Germany, for example, one-click hosters are HUGE. I guess that's because they let you monetize downloads aggressively. Obviously I would love to avoid them entirely, but German torrents often die a LOT faster than English ones, for example. I guess those issues are very similar beyond the English-speaking hemisphere.
  12. Ooof... At what point would it be easier to just have a VPN client container with a built-in firewall that I connect official (or any) downloading containers to as a tunnel, with a kill switch? That's all I want. I'd also only need to run one VPN connection at a time and could use it for, well, anything. As it is, I'm limited to HTTPS and trusting the container not to leak through background processes or some other way.
      I just want to set up qBittorrent, SABnzbd, JDownloader and whatever else I might add in the future, and tell them all at "system level" within Docker to use the VPN container's tunnel as their network interface, or bust. Then there's no reason to "trust" any given app I may want to tunnel. Or am I missing something here, given the glaring lack of a simple one-stop solution for this? Is there some hurdle? Or am I, at the end of the day, better off setting up a VM for this with the official client app? I'd love to keep it lightweight, though, and manageable through the Docker section. Cheers!
      PS: Would it be possible to add NordVPN to the pre-configured providers?
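For what it's worth, plain Docker can already express most of this pattern: run one VPN container, then join every downloader to its network namespace. Container and image names below are placeholders, not specific recommendations:

```shell
# Join qBittorrent to the network stack of a container named "vpn".
# If "vpn" is stopped, or its firewall rules drop non-tunnel traffic,
# qBittorrent has no route out at all -- a structural kill switch
# rather than an app-level one.
docker run -d --name qbittorrent \
  --net=container:vpn \
  -v /mnt/user/downloads:/downloads \
  some/qbittorrent-image
```

One quirk of this mode: `-p` port mappings must be declared on the vpn container, since the joined containers share its network namespace and cannot publish ports of their own.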
  13. Does COPS support two-way sync of reading progress? Also, bonus question: which Android app would best keep my device and server in sync? I know about Calibre Companion, but as far as I can tell, a companion app that doesn't do the reading itself will always rely on the reader app relaying read progress to it. I've been using computers long enough to know that mixing too many programs tends to create a lot of friction and cases where you need to troubleshoot, or stuff just isn't optimized...
      tl;dr: I think what I'm looking for is a CC-like app that is also a good reader. When reading with Moon+ I noticed it saved neither page progress nor bookmarks, which kind of defeats the purpose of integrated syncing if you ask me, and Moon+ is apparently what "everyone", the CC makers included, heavily focuses on. Is this as good as it gets? Manually setting "read progress: complete" on every book I finish, and if my Android device gets lost, having to figure out which books I had been reading, where I stopped, what I bookmarked, etc.? That's before even considering multi-device usage. Trust me, I really searched, but I cannot find satisfying options, and every app description I read talks about what is possible but won't tell you the little details... And as we all know, the devil is in the details.
      Sorry if this is kind of hijacking the thread, but I'm fairly new to Calibre and not quite sure how specific this might be to COPS, so I figured I'd best ask in the most specific place that applies to me.
  14. Hmmm, on that note: if I connect to my VPN _per docker_, that means I'm multiplying my VPN overhead, depending on binhex's release schedule (not implying anything; I'm totally new to unRAID AND Docker, so I'm just throwing out what crosses my mind), and then there's the issue that not every desirable application is available as a binhex VPN docker. I've seen that you can use a docker like that as a proxy for other containers, but my line of thought is that I'd be relying on the application within a container to apply the proxy connection, leaving possible (unknown) background processes un-routed through the proxy. The beauty (but also a pain point in other ways) of VPNs on a classic desktop is, after all, the one-time setup: connect once, route everything or nothing.
      The major application missing an obvious VPN path for me right now is JDownloader. Theoretically, I could just set up a VM, install my VPN provider's application in there, add the applications I want to the mix, and have them all download to a share. Waaaaaaaaay less elegant, but at least a catch-all approach. The VM itself would obviously be configured with a firewall. Is that a lot of overhead? Sure is. Is that a great concern? Well... 16 physical cores and 48GB of RAM say: we can do it. Despite all of that, I'd still favor the leanest approach, for obvious reasons. Surely there's something I'm missing or misunderstood?
  15. Can't get the Minecraft server to run properly. The log is filled with this error or variations of it:
      [Server thread/ERROR]: java.lang.OutOfMemoryError: GC overhead limit exceeded
      Now, the server did show up in my Minecraft client (so broadcast works), but connecting to it failed as well. Anything obvious I missed or should check? Settings I used: Mojang latest server build, imported worlds of various sizes from my local client. (The goal is to make this server basically a 1-2 user environment, so my Minecraft clients across different OSes and computers act as "thin clients" and I won't have to keep my worlds in sync by hand anymore.)
      Edit: Okay, so I did some further Googling, and as it turns out, the fix was adjusting the values for Xmx and Xms way above what the (apparently too old) tutorial I checked out suggested. My values are now 4096MB for Xmx and 512MB for Xms. That fixed it for vanilla, Mojang-build (latest) based servers. Glad I can now finally centralize my Minecraft experience and even facilitate folks coming over to my house to play together. Good times!
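For reference, the fix described in that edit boils down to the JVM heap flags. A sketch of the resulting launch line, with the server jar name assumed (the Docker template exposes these as settings rather than a raw command):

```shell
# -Xms sets the initial heap (512 MB), -Xmx the maximum heap (4096 MB).
# "GC overhead limit exceeded" means the collector was thrashing inside
# a heap too small for the loaded worlds; raising -Xmx is the cure.
java -Xms512M -Xmx4096M -jar minecraft_server.jar nogui
```
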