drumstyx

Members
  • Content Count: 82
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About drumstyx
  • Rank: Advanced Member

Converted
  • Gender: Undisclosed

  1. Update: I uninstalled almost all plugins, and once things settled down, I slowly reinstalled some. I think it *might* have been a misconfigured Fix Common Problems plugin.
  2. I've got a number of unassigned drives I keep connected as warm spares, since I got a good deal on a bunch of 8TB drives a while back. They're precleared, and I intend to keep them dead idle until I need to either expand or replace a drive, at which point I can do it remotely (I keep my server at a different location than where I primarily live). I had some trouble with the Unassigned Devices plugin, in that it wouldn't recognize unassigned drives connected via my HBA (all drives reside in a NetApp DS4246 shelf), so I uninstalled it and am just using Unraid's native handling, which at least lets me spin them up/down. The trouble is, I spin them all down and, seemingly at random, they come back! None of my Docker containers has direct access to /dev, and I can't figure out what plugin could possibly be causing it. Does the mover maybe accidentally hit those drives when scanning for changes?
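     In case it helps anyone debug the same thing, here's a minimal sketch for catching the wake-ups in the act, assuming the spares are /dev/sdg through /dev/sdj (device names are placeholders for whatever your unassigned drives are). hdparm -C issues CHECK POWER MODE, which reports the spin state without waking a sleeping drive, so it's safe to poll:

         #!/bin/bash
         # Log the spin state of the warm spares once a minute; a transition
         # from "standby" to "active/idle" pins down exactly *when* they wake,
         # which can then be lined up against mover/plugin entries in syslog.
         while true; do
             for d in /dev/sd{g,h,i,j}; do   # placeholder device names
                 state=$(hdparm -C "$d" | awk '/drive state/ {print $NF}')
                 echo "$(date '+%F %T') $d $state"
             done >> /tmp/spare_spin.log
             sleep 60
         done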
  3. Quick question on this -- do I need to be careful with every Unraid upgrade now? Should I upgrade only through the plugin settings tool, rather than the built-in tool?
  4. Necro-ing this thread to put in a vote for this. It's not a *huge* deal, but it would be very nice. It basically completes the loop: all writes only ever happen when the mover runs, and parity drives stay spun down 23 hours out of the day.
  5. Very nice, super handy! A few feature requests:
     • Ability to reorder disk tray layouts, even if just with an up/down button, or even something as simple as an order field. I've got hotswap units in my main server (3x4 drives), one of which is sideways, plus a DS4243 shelf, so I have 3 configs, and I accidentally ended up with a couple out of order. Minor issue, but it just looks strange to see 2 before 3.
     • Line breaks in the dashboard view and tools area.
  6. I've shucked 3 Elements drives, and all 3 had EMAZ drives, so I'm fairly certain the new ones will be the same, but I've heard the MyBook enclosures have EZAZ, and I've heard some things about them. I thought EMAZ was air-filled, but looking at SMART, you're right, they're helium. So I guess I'll just have to see what's in the EZAZ drive when it arrives. Maybe I'll keep one for kicks to see how it compares to the EMAZ drives.
  7. I've done a lot of confusing reading, and since I've got a bunch of MyBooks and Elements drives coming in the mail (and MyBooks tend to have EZAZ, vs. the EMAZ in Elements/Easystore devices), I'm trying to figure out which drives I should actually want. Some info points to one being helium-filled while the other is air-filled, some points to a smaller cache on the EZAZ, or to the EMAZ being more likely to be a white-labelled Red, but I'm honestly confused -- which drives do I want? I plan to shuck only the ones with the drives I want.
  8. Running a parity check right now for the first time since loading 4 drives into my DS4243 drive shelf, and it's fairly slow -- around 45MB/s, compared with the previous average of 100-125MB/s. Is the shelf itself something of a bottleneck? I'm using an LSI HBA, as recommended, and each individual drive is fine in terms of speed, but I'm thinking maybe the total bandwidth available from the shelf is only a few hundred MB/s? Is this to be expected? On that note, what happens when I start running 20-odd disks in this thing? Ah, there we go -- rebooted and I'm seeing 108MB/s or so. Much better! I wonder why...
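     For anyone else doing the math, here's a back-of-envelope sketch, assuming stock IOM3 modules (a 4-lane 3Gb/s SAS wide port per controller, roughly 300MB/s usable per lane after encoding overhead -- my assumption, not a measured figure):

         # Rough ceiling on per-drive throughput during a parity check,
         # when every drive is read in parallel through one shelf uplink.
         lanes=4; per_lane_mbs=300; drives=24
         echo "shelf uplink ~ $((lanes * per_lane_mbs)) MB/s total"
         echo "per drive    ~ $((lanes * per_lane_mbs / drives)) MB/s with $drives drives"

     So a fully loaded shelf on one IOM3 uplink would cap out around 50MB/s per drive during a parity check; 4 drives shouldn't come anywhere near that limit, which fits with the 45MB/s having been something else.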
  9. I'm constantly floored by the rapidly dropping prices of SSDs, to the point where I've flippantly bought SSDs for various machines I've got lying around. $35 CAD for a branded 250GB NVMe? Good lord, it's cheap now.

     That got me thinking about the future of storage. SATA isn't being developed any faster, which means 2.5" SSDs will go by the wayside in favour of M.2, but that's a whole other issue of bus limitations; I'm here to talk about storage. Hard drives don't seem to be getting much cheaper. It's happening slowly, of course, but here in Canada a GREAT deal on an 8TB drive (the cheapest per TB right now here, as opposed to the 12TB deals y'all are getting in the USA these days) is $180 for a WD drive that still needs to be shucked. That's $22.50/TB. An extremely cursory search shows a Silicon Power 1TB SATA III SSD at $115 on Amazon, and a cursory Black Friday deal search turns up a Team Group 1TB SATA III SSD at a shocking $90 from Canada Computers! So we're at a factor of 4-5x difference, where just a year ago the factor was more like 8-9x. Point is, even assuming a non-linear change year over year, we can probably expect a crossover some time in 2021-2023 -- which is the timeframe we should all be planning replacements/growth for anyway.

     Now the tech issues: SSDs need to be TRIMmed periodically, which, as I understand it, deals with some of the drawbacks associated with wear-levelling. Because TRIM changes on-disk contents outside the array's control, parity calculation as we know it is fundamentally broken for SSDs as long as they require TRIMming. My main question is: can this be rectified, or does the very concept of parity need to be revised? If so, what options exist for this right now? What options WILL exist? Will a mix-and-match system like Unraid even be possible? What's on the roadmap here?
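     Out of curiosity, a quick extrapolation of that crossover, assuming the SSD-to-HDD price-per-TB ratio keeps shrinking by the same yearly multiplier it just did (a big assumption -- this is curve fitting, nothing more):

         # Ratio went from ~8.5x to ~4.5x in a year; assume that factor holds.
         awk 'BEGIN {
             ratio = 4.5           # SSD $/TB over HDD $/TB today (assumed)
             rate  = 4.5 / 8.5     # one-year shrink factor (assumed constant)
             year  = 2019
             while (ratio > 1.0) { ratio *= rate; year++ }
             printf "crossover around %d (ratio ~%.2fx)\n", year, ratio
         }'

     That lands right in the 2021-2023 window.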
  10. The HBA is a PMC-Sierra PM8005 rev 5 (which I believe is a PM8001, rev 5).

      I first noticed this issue when one of my drives, already in the array (precleared OUTSIDE the array, but while it was in the disk shelf), dropped during a mover task that was executing while Plex was also updating metadata. That required the disk shelf to be rebooted (a server reboot did not resolve the issue). It actually required a disk rebuild too, as Unraid somehow lost the configuration of which disk belonged in that slot, but that's another issue. Then yesterday, I was clearing three 8TB drives that I had put in there and added to the array, and they all dropped out at the same time -- but interestingly, the 3TB drive in there (the one that previously dropped) was fine!

      My first thought was that the HBA was to blame -- many people use LSI HBAs and an adapter cable, whereas I was using an HBA that works natively with QSFP but has significantly less community market share. Is this assessment correct? I bought a new HBA and an adapter cable, and they're on the way from eBay, but I'm wondering if this is maybe a known issue of sorts? Could it be the IOM3 controller? The fact that I only have 1 PSU plugged in on the shelf? Anyone else had this problem?
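      While I wait for the new card, one thing worth checking: whether the drops line up with link resets in the kernel log. A sketch, assuming the PM8001/PM8005 is handled by the pm80xx kernel driver (the driver name is my inference from the chip family):

          # Pull recent SAS/HBA events out of the kernel log to see whether
          # the drive drops coincide with link resets or controller errors.
          dmesg -T | grep -iE 'pm80xx|sas|link.*reset|offline' | tail -n 50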
  11. In that case, frankly, I might as well just set up a VM to remote into with any remote desktop protocol. Of course, the best part of Guacamole (aside from avocados) is being accessible from ANY machine with a web browser, so it's still something to think about, I suppose. All that said, I've managed to get port-sharing working with openvpn-as, so I'm only exposing 443 right now for both openvpn-as and my reverse proxy. I'd REALLY love a secure way to SSH in without VPN too, though that's less necessary. I guess SSH itself is ostensibly secure enough to simply be exposed, but with root being the main user for Unraid, that's pretty risky.
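      For reference, if I ever did expose SSH directly, key-only auth would be the bare minimum. A sketch of the relevant sshd settings (the config path is an assumption -- Unraid may manage this file elsewhere, e.g. under /boot/config/ssh):

          # Check what sshd currently allows; these are the two lines to care about:
          grep -E 'PermitRootLogin|PasswordAuthentication' /etc/ssh/sshd_config
          # Wanted values for an exposed port:
          #   PermitRootLogin prohibit-password   (root by key only)
          #   PasswordAuthentication no           (no password guessing)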
  12. Ah right, I forgot to mention that I've already done that, naturally. Is it really no good to open things up with a reverse proxy, like SpaceInvader One's tutorial?
  13. Ah yeah, makes sense -- I couldn't figure out how to specify an older version of the Docker image via the GUI, but I suppose I could do it in the console...
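      For anyone else looking, from the console it's just a matter of pinning a tag (the image and tag below are placeholders, not a known-good version):

          # Pull a specific older tag and recreate the container against it.
          # The Unraid GUI accepts the same "repository:tag" string in the
          # container's Repository field, which does the same thing.
          docker pull linuxserver/letsencrypt:some-older-tag
          docker stop letsencrypt && docker rm letsencrypt
          # then re-add the container with Repository pointing at that tag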
  14. Having issues with Python cryptography -- it looks like py3-openssl was updated just a few hours after the current latest version, and it's causing issues because py3-cryptography is now outdated? I'm no expert, just did a bit of digging. The error is:

      pkg_resources.ContextualVersionConflict: (cryptography 2.6.1 (/usr/lib/python3.7/site-packages), Requirement.parse('cryptography>=2.8'), {'PyOpenSSL'})

      EDIT: In the meantime, running this in the console and restarting works fine, though it has to be redone each time the container is recreated (edited, etc.):

      apk add gcc musl-dev libffi-dev openssl-dev python3-dev; pip install cryptography --upgrade
  15. I've been using openvpn-as (as a Docker container on both my Unraid servers) for a while now as my primary entry point for remote admin stuff, but as I sit here on a Zoom meeting call, I'd really like to SSH into my server. That got me thinking: with the pretty new login setup in the 6.8.0 RCs, what's safe to open up for outside access? I already have a few ports forwarded for openvpn-as access, and for some reason I assume OpenVPN to be secure enough for that, but I'm hesitant to open port 22, for example. Time was, it was inadvisable to open ANY ports to Unraid, so I'm curious what the status is these days. I'd love to be able to open the web interface to direct access, but if that's not a good idea, could I at least do SSH?