Everything posted by drumstyx

  1. I'm constantly floored by the rapidly dropping prices of SSDs, to the point where I've flippantly bought SSDs for various machines I've got lying around. $35 CAD for a branded 250GB NVMe? Good lord, it's cheap now. That got me thinking about the future of storage: SATA isn't being developed any further, which means 2.5" SSDs will go by the wayside in favour of M.2, but that's a whole other issue of bus limitations, and I'm here to talk about storage. Hard drives don't seem to be getting much cheaper. Of course, it's happening slowly, but here in Canada a GREAT deal on an 8TB drive (the cheapest per TB right now here, as opposed to the 12TB deals y'all are getting in the USA these days) is $180 for a WD drive that still needs to be shucked. That's $22.50/TB. An extremely cursory search shows a Silicon Power 1TB SATA III SSD at $115 on Amazon, and a cursory Black Friday deal search comes up with a Team Group 1TB SATA III SSD at a shocking $90 from Canada Computers! So we're at a factor of 4-5x difference, and just a year ago the factor was more like 8-9x. Point is, even assuming a non-linear change year over year, we can probably expect a crossover some time in 2021-2023, which is the timeframe we all should be planning replacements/growth for anyway. So now the tech issues: SSDs need to be TRIMmed periodically, which, as I understand it, deals with some of the drawbacks associated with wear-leveling. This means parity calculation as we know it is fundamentally broken for SSDs as long as they require TRIMming. My main question is: can this be rectified, or does the very concept of parity need to be revised? If so, what options exist for this right now? What options WILL exist? Will a mix-n-match system like Unraid even be possible? What's on the roadmap here?
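      (Side note for context: on a plain Linux box, TRIM is usually either enabled at mount time with the discard option or run on a schedule with fstrim from util-linux. A minimal sketch of the scheduled flavour -- the mount point here is just an example:)

        # Report how much space was trimmed on one mount point (example path)
        fstrim -v /mnt/cache

        # Or trim every mounted filesystem that supports discard, e.g. from a weekly cron job
        fstrim -a -v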
  2. HBA is a PMC-Sierra PM8005 rev 5 (which I believe is a PM8001, rev 5). I first noticed this issue when one of my drives, which was already in the array (precleared OUTSIDE the array, but while it was in the disk shelf), dropped during a mover task that was executing while Plex was also updating metadata; the disk shelf then had to be rebooted (a server reboot did not resolve the issue). It actually required a disk rebuild too, as Unraid somehow lost the configuration of which disk belonged in that slot, but that's another issue. Then yesterday, I was clearing three 8TB drives that I had put in there and added to the array, and they dropped out at the same time -- but interestingly, the 3TB drive in there (the one that previously dropped) was fine! My first thought was that the HBA was to blame -- many people use LSI HBAs and an adapter cable, whereas I was using an HBA that works natively with QSFP but has far less community mindshare. Is this assessment correct? I bought a new HBA and an adapter cable, and they're on the way from eBay, but I'm wondering if this is maybe a known issue of sorts? Could it be the IOM3 controller? The fact that I only have one PSU plugged in on the shelf? Anyone else had this problem?
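      (For my own notes while I wait for the new HBA -- not a fix, but the kernel log usually says whether it was the HBA link or the expander that reset when drives drop. These are stock Linux tools; the device name is a placeholder:)

        # Identify the HBA the kernel actually sees
        lspci | grep -i -E 'sas|scsi'

        # Look for link resets / devices being offlined around the time of the drop
        dmesg | grep -i -E 'sas|reset|offline' | tail -n 50

        # SMART can also hint at CRC/link errors on a specific drive (replace sdX)
        smartctl -a /dev/sdX | grep -i -E 'crc|error'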
  3. In that case, frankly, I might as well just set up a VM to remote into with any remote desktop protocol. Of course, the best part of Guacamole (aside from avocados) is that it's accessible from ANY machine with a web browser, so still something to think about, I suppose. All that said, I've managed to get port-sharing working with openvpn-as, so I'm only exposing 443 right now for both openvpn-as and my reverse proxy. I'd REALLY love a secure way to SSH in without VPN too, though that's less necessary. I guess SSH itself is ostensibly secure enough to simply be exposed, but with root being the main user for Unraid, that's pretty risky.
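      (If I do end up exposing SSH directly, the usual hardening would be key-only auth with passwords disabled. A sketch of the relevant sshd_config lines -- persisting them on Unraid is a separate exercise, e.g. via the go file:)

        # /etc/ssh/sshd_config
        PermitRootLogin prohibit-password   # root can still log in, but only with a key
        PasswordAuthentication no           # no password guessing
        PubkeyAuthentication yes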
  4. Ah right, I forgot to mention that I've already done that, naturally. Is it really no good to open things up with a reverse proxy, like in SpaceInvader One's tutorial?
  5. Ah yeah, makes sense -- I couldn't figure out how to specify an older version of the docker via GUI, but I suppose I could do it in console...
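      (For anyone else wondering, pinning an older version from the console is just a matter of using a versioned tag instead of latest; the repository and tag below are made up for illustration:)

        # Pull a specific (hypothetical) tag instead of :latest
        docker pull linuxserver/someapp:1.2.3
        # The Unraid GUI accepts the same thing if you put the tag in the container's Repository field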
  6. Having issues with Python cryptography -- it looks like py3-openssl was updated just a few hours after the current latest version was, and it's causing problems because py3-cryptography is now outdated? I'm no expert, just did a bit of digging. The error is:

        pkg_resources.ContextualVersionConflict: (cryptography 2.6.1 (/usr/lib/python3.7/site-packages), Requirement.parse('cryptography>=2.8'), {'PyOpenSSL'})

     EDIT: In the meantime, running this in the console and restarting works fine, though it has to be done each time the container is recreated (edited, etc.):

        apk add gcc musl-dev libffi-dev openssl-dev python3-dev; pip install cryptography --upgrade
  7. I've been using openvpn-as (as a docker container on both my unraid servers) for a while now as my primary entry point for when I'm doing remote admin stuff, but as I sit here on a zoom meeting call, I'd really like to ssh into my server. That got me thinking -- with the pretty new login setup in 6.8.0 rc's, what's safe to open up for outside access? I already have a few ports forwarded to openvpn-as access, and for some reason I assume openvpn to be secure enough to do so, but I'm hesitant to open port 22, for example. Time was, it was inadvisable to open ANY ports to unraid, so I'm curious what the status is these days. I'd love to be able to open the web interface to direct access, but if that's not a good idea, could I at least do ssh?
  8. Let's assume I'm willing to wait as long as it takes to guarantee stability when it comes back up -- are there any settings to tweak this? Ideally "wait for UPS power above x%, then start", but I'd settle for a "wait x seconds before array start on every boot"
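      (In the absence of a built-in setting, something like this could in principle run as a user script before anything heavy starts -- it just polls apcupsd until the battery is back above a threshold. A rough sketch; apcaccess ships with apcupsd, and the 90% figure is arbitrary:)

        #!/bin/bash
        # Wait until the UPS reports at least 90% charge before continuing
        THRESHOLD=90
        while true; do
            charge=$(apcaccess status 2>/dev/null | awk '/^BCHARGE/ {print int($3)}')
            [ -n "$charge" ] && [ "$charge" -ge "$THRESHOLD" ] && break
            echo "UPS at ${charge:-unknown}% -- waiting..."
            sleep 60
        done
        echo "UPS charge looks OK"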
  9. I've finally gotten my DS4243 working with unraid, and I'm about to add my first disk that lives in the shelf to the array. I had a thought though, as I was checking out the UPS load settings (I'll need a bigger UPS now, but that's another matter). Right now, when power fails, it'll initiate a shutdown as necessary, then shut off the UPS, and when power is restored, it'll come back up. But what if the disk shelf takes longer to boot than unraid? (unlikely, but a thought). Will the array fail to start completely because of missing disks? Or will it keep trying to detect and start the array?
  10. Just did a new build in my old (existing) server. I upgraded to rc5 from stable a couple of days ago, and today I put in an Intel CPU/mobo to run hardware Plex transcoding. Unfortunately, I don't have /dev/dri, and lsmod doesn't show any i915 driver. Do I have to add a kernel build flag, or rebuild since I went from AMD to Intel? EDIT: I'm an idiot, modprobe i915 oughtta take care of it...
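      (For future reference, the usual way to make this stick across reboots -- per the common Plex hardware-transcoding write-ups -- is to do it from the go file; a sketch of what gets appended, assuming a stock Unraid layout:)

        # /boot/config/go additions for Intel Quick Sync transcoding
        modprobe i915
        chmod -R 777 /dev/dri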
  11. Ah! Yes, I'll do exactly that. I was thinking that might be a problem, but I couldn't find where to change what folder Sonarr uses for /tv
  12. I'm trying my best, but I just can't figure out why the heck hard links aren't working as expected -- no matter what I try, Sonarr will make a .backup~ version of the file before moving it to its final resting place. My setup: downloads come via Usenet onto the cache SSD and are direct-unpacked to the non-cached array (essentially the final resting place, ideally). I would expect Sonarr to do a simple mv on the file, which would just move the pointer, or at the very least make a hard link and delete the old one. It does neither (despite the setting to use hardlinks instead of copying): it copies to the same folder with a different extension, then apparently moves THAT and deletes everything in the initial folder to clean up. NZBGet's output directory, Sonarr's /downloads directory, and Sonarr's /tv directory are all on the same share...
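      (One sanity check worth doing: hardlinks and instant moves only work when both paths sit on the same filesystem as seen from inside the container, and Unraid's /mnt/user vs /mnt/cache vs /mnt/diskX mounts can look like different filesystems to Docker. A generic check, with placeholder paths standing in for however /downloads and /tv are mapped; assumes GNU coreutils:)

        # If the device numbers differ, a hardlink between these paths is impossible
        stat -c '%d %m %n' /path/mapped/to/downloads /path/mapped/to/tv

        # Or just try it -- ln fails with "Invalid cross-device link" across filesystems
        ln /path/mapped/to/downloads/somefile /path/mapped/to/tv/somefile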
  13. I'm starting to really stress my server, it seems -- the main things I have running are a bunch of Docker containers: Plex, Sonarr, Radarr, NZBGet, plus a few less taxing ones like Pi-hole, OpenVPN (not currently active, but on and listening), and NoIP. Hardware is pretty weak, but modern -- a Ryzen 3 (I forget if it's a 1200 or 1300, but it was the cheapest thing I could get when my last board crapped out) with 8GB of DDR4. I've isolated CPU 1 so I can at least have constant access to the UI, because it would lock up even that. I'm running it pretty hard -- I'll search lots of big things in Sonarr and watch it load up NZBGet with a great big queue full of large (7-40GB each) files. These of course need to be downloaded, extracted, and eventually moved off the cache (250GB SSD); then Plex will index them when it scans. So the main question: Plex will frequently crap out completely when lots is going on, like if I'm downloading one file, unpacking another, and at the same time running the mover to clear up the cache (which actually runs slower than my downloads most of the time, so I have to babysit it to make sure it doesn't overflow, and sometimes pause downloads). Is this normal? The weird thing is, if I SSH in, htop doesn't even show things working all that hard -- nothing's truly pinned at 100%; it fluctuates and taps 100 here and there, but it hardly seems like it should lock up like this. I've been planning to work on fixing this, especially since this is obviously a bad environment for any server-side transcoding -- would I be better off using a second mediocre machine running standalone Plex with network access to the Unraid system, or going all out and putting a decent CPU in the current machine?
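      (Given that the CPU isn't pinned, the next suspect would be I/O wait -- downloads, unpacking, and the mover all hitting the same disks. A couple of stock commands that would show it; iostat assumes the sysstat tools are present, otherwise top's 'wa' figure tells a similar story:)

        # High %iowait with modest %user/%system suggests the disks, not the CPU, are the bottleneck
        iostat -x 5

        # Without sysstat: the 'wa' value in the %Cpu(s) summary line
        top -bn1 | head -n 5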
  14. Just experimenting, but I excluded appdata and CPU usage seemed to decline a lot, as appdata was the heaviest of the searches (due to plex folder structure, I assume) It's not perfect, but along with the latest version, it seems to keep CPU usage down, and disk reads low.
  15. Not that I don't appreciate this container, but I've put a script together to bypass this whole problem with rclone. If you have issues with speed, this will help. It requires that you've already set up your FTP/SFTP server as a remote in rclone; run it on a cron matching whatever your schedule in davos would have been, and tailor it to your own needs.

        remote_dir="remotename:/path/to/complete/items"  # wherever your files are coming from -- make sure only completed files are here
        local_dir="/local/working/directory"             # davos' initial download folder
        local_completed_dir="/local/complete/directory"  # equivalent of davos' move-after-complete folder
        log_dir="/log/dir"

        function scrape () {
            if rclone lsf $remote_dir | grep . -c
            then
                if rclone move $remote_dir $local_dir -v --delete-empty-src-dirs --buffer-size=2G --stats 5s --transfers 4 >>$log_dir 2>&1
                then
                    echo "scrape succeeded into $local_dir"
                    if mv $local_dir/* $local_completed_dir
                    then
                        echo "mv succeeded into $local_completed_dir"
                    else
                        echo "mv failed to move to $local_completed_dir"
                        return 1
                    fi
                else
                    echo "rclone failed on $local_dir"
                    return 1
                fi
            else
                echo "No new files"
            fi
        }

        scrape || exit 1

      EDIT: And here's a sloppy rsync version, which allows for as-you-go 'completion', where the script above doesn't have anything in 'complete' until the whole job is done. This one's probably more delicate as-is (like I said, sloppy), but it does what I want better. Depends on parallel, rclone, and rsync.

        remote_dir="remotename:/path/to/complete/items"  # wherever your files are coming from -- make sure only completed files are here
        local_dir="/local/completed/files"               # where things will end up -- rsync uses temp files, so no need for a working dir
        mount_dir="$local_dir/tempmountpoint"            # a temp folder where the remote server gets mounted while syncing

        function scrape () {
            echo -n "files found: "
            if rclone lsf -R --files-only $remote_dir | grep . -c
            then
                echo "attempting to mount"
                find $mount_dir/ -depth -type d -empty -delete &>/dev/null
                fusermount -u $mount_dir &>/dev/null
                mkdir $mount_dir &>/dev/null
                if rclone mount $remote_dir $mount_dir --daemon
                then
                    echo "mount to $mount_dir succeeded"
                    echo "sleeping a few seconds for mounting to be complete I guess"
                    sleep 5
                    echo "making local directory structure to fill"
                    find $mount_dir -type d | sed 's|'$mount_dir/'|'$local_dir/'|' | tr '\n' '\0' | xargs -0 mkdir -p
                    echo "trying to rsync with parallel"
                    if find $mount_dir -type f | sed 's|'$mount_dir/'||' | parallel -v -j8 --progress --line-buffer rsync -v --progress --stats --remove-source-files $mount_dir/{} $local_dir/{}
                    then
                        echo "rsync succeeded into $local_dir"
                        find $mount_dir/ -depth -type d -empty -delete &>/dev/null
                        fusermount -u $mount_dir &>/dev/null
                        echo "attempted to remove empty directories from $mount_dir"
                    else
                        echo "rsync failed to move to $local_dir"
                        return 1
                    fi
                else
                    echo "rclone failed to mount $remote_dir"
                    return 1
                fi
            else
                echo "exiting"
            fi
        }

        scrape
        fusermount -u $mount_dir &>/dev/null
        find $mount_dir/ -depth -type d -empty -delete &>/dev/null
        exit 0

      EDIT: Tailored the script above for better performance on large sets of files. I'm hitting speeds in excess of 800 Mbps if there's enough data to keep transferring -- it happens so fast that it sometimes can't even hit its peak. To @Stark: if I could figure out how to get a better handle on the parallel output, something like this could replace the internal FTP client (which, to be fair, seems very difficult to wrangle) while keeping the management and scheduling system. rclone's configs are machine readable/writable, and rsync *does* provide pretty progress output when it's not parallelized, but parallel butchers it. Worst case, that could probably be faked by reading the file sizes and measuring the total transferred, but that would just give you batch-level progress.
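      (Following up on the 'fake it from file sizes' idea -- rclone can report the total size of the remote batch up front, and then it's just a matter of comparing against what has landed locally. A rough sketch using the same placeholder variables as the scripts above; it assumes local_dir starts empty and just loops until you kill it:)

        # Total bytes waiting on the remote (rclone size --json prints {"count":N,"bytes":N})
        total=$(rclone size --json "$remote_dir" | grep -o '"bytes":[0-9]*' | cut -d: -f2)

        # Poll how much has arrived locally and print a batch-level percentage
        while sleep 5; do
            done_bytes=$(du -sb "$local_dir" | cut -f1)
            [ "$total" -gt 0 ] && echo "progress: $(( done_bytes * 100 / total ))%"
        done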
  16. Ok so uh... how do I do a "new config"? Thought it'd be as simple as just assigning what I want and going through extra confirmation steps, but I don't see anything for this... Found it, under the Tools tab.
  17. Found my way! This post is the fix: Seems to be a compounding issue that gets worse over time as it's zeroing.
  18. Well that's not good -- is there any way to install an old version? I'm sure I could let it run, but probably looking like 10+ days... Are we sure there are such severe issues? Any thread on this?
  19. First and foremost, I know SMR drives are known for poor write speeds, but my understanding was that they're smart enough to recognize when they're writing whole shingled zones (like, y'know, writing the entire disk) and just write straight through, rather than going via the cache, etc. I was getting what you'd expect for read speeds, something like 160MB/s, but now in the write phase it's 12MB/s. Is this because they're going through the USB controller, or are they really that dumb, or... what? Obviously I can't be shucking these for my array if they're *that* slow consistently. The idea was that they could handle loading large chunks of data without going through the SMR shuffle.
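      (To separate 'slow USB bridge' from 'slow SMR writes', a quick sequential test straight against the device is enough. Sketch only -- the write test DESTROYS data, so only on a drive that's about to be precleared anyway; replace sdX:)

        # Raw sequential read, bypassing the filesystem
        hdparm -t /dev/sdX

        # Raw sequential write of 4 GiB -- destroys data on sdX
        dd if=/dev/zero of=/dev/sdX bs=1M count=4096 oflag=direct status=progress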
  20. Ah! I didn't think of it like that! I do have quite a bit of free space, offloading that 1TB is no problem at all. Will the 4TB drives that used to be parity come up as empty? The risk of failure during parity build is surely no more than the risk of failure during parity CHECK, which I have no problem doing monthly, and often more frequently even than that.
  21. Sure!
      Current:
      P1 - 4TB
      P2 - 4TB
      D1 - 4TB
      D2 - 4TB
      D3 - 1TB
      D4 - 3TB
      D5 - 4TB
      D6 - 4TB
      Cache - 250GB
      Desired:
      P1 - 8TB
      P2 - 8TB
      D1 - 4TB
      D2 - 4TB
      D3 - 4TB (previously P1)
      D4 - 3TB
      D5 - 4TB
      D6 - 4TB
      D7 - 8TB
      D9 - 4TB (previously P2)
      Cache - 250GB
      Funny you mention my risk tolerance -- I actually have a pretty high risk tolerance for most of this data; the only reason I have dual parity is that I use the cheapest drives I can, and some are just drives I had lying around or shucked out of external drives I'd already used for a year. I just like the extra durability. I know I can do this slowly with high durability, but doing it slowly means many rebuilds, which also increases the likelihood of a failure overall, though I guess that likelihood is very low. The biggest thing is that I'm skeptical about the reliability of doing it while the server is in use -- I know using it while rebuilding slows operations down significantly, but I'd be concerned it actually increases the likelihood of failure, where a single rebuild operation overnight wouldn't have those issues. As for SMART reports and parity: two drives have UDMA CRC errors, but they haven't changed across many months and reboots/parity checks -- I guess those counters don't disappear when you resolve the cause (it's often just a bad SATA connection). Parity will be run tonight to ensure everything's up to snuff while the 8TBs go through a preclear cycle.
  22. I've got an array with various drives. The parities are currently 4TB, so the max drive size is 4TB, but I've just bought three 8TB drives, two of which will become parity after preclearing. The two 4TB parity drives will become data drives, and ideally one of them will actually replace a 1TB drive I have. So this is a whole lot to do! I'm trying to reduce the number of rebuild operations as much as possible. As I understand it, you can't add new drives at the same time as rebuilding another drive, which makes perfect sense, but that means the "standard" way to do this would be to swap out BOTH parities at once and rebuild parity, then add the 3rd 8TB and one old 4TB parity as data drives and rebuild parity, then rebuild the 1TB drive onto the 4TB drive. That gives me a total of 3 rebuild operations just to add 3 drives! Now, the 1TB is still fine; I'm just running out of license slots (not to mention physical slots), and it slows down parity checks. But is there any way I can maybe remove parity protection, copy the 1TB data onto a 4TB drive (or really just somewhere else in the array), add the drives for data, then finally throw the parities in, for a total of 1 rebuild operation?
  23. I've had mild issues with my Ryzen 3 "temporary" hardware (after the last motherboard failed) since I built it up. Of course, that was back with the Ryzen issues when it was first released, but since those were sorted, the system itself has been mostly stable -- except Docker would crash anywhere from once a day to once a week, completely silently. Hardware is pretty low-end: 8GB DDR4, Ryzen 3, and the motherboard doesn't even support AMD-V. I figure I'd need more RAM and a stronger CPU, but reading the guide, it says "Most users will find it difficult to utilize more than 8GB of RAM on Docker alone"... is that really still the case? Array info: 5 data drives plus dual parity, 250GB SSD cache. As you might expect, appdata and configs are kept on the cache and backed up daily. I run:
      Plex
      openvpn-as
      pihole
      NoIP
      Plus a full stack of apps to automate media:
      sonarr
      radarr
      jackett
      davos
      Memory use at idle (fresh boot) with all my apps running sits at ~2GB cached, ~1.66GB used, and ~4GB free. I don't think I've ever seen "free" actually drop to 0, but if my experience with Windows is at all relevant (it probably isn't), things start to drag at about 25% free memory remaining -- the OS starts worrying it'll run out and pages the crap out of it. So I guess it comes down to figuring out why the heck Docker is crashing on me, but at a glance, does this hardware need upgrading? EDIT: Alright, so Plex isn't crashing on me anymore after upgrading from 6.5.0 to 6.5.3. Could be much more stable now; we'll have to see... "used" memory never goes above ~2.5GB, and I've learned cached memory isn't actually in use.
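      (Before throwing hardware at it, per-container numbers would show whether anything is actually ballooning; both of these are stock commands:)

        # One-shot snapshot of CPU and memory per container
        docker stats --no-stream

        # Check whether the kernel has been OOM-killing anything
        dmesg | grep -i -E 'oom|out of memory'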
  24. Hey @Stark, really appreciate the response! I definitely feel ya that it's a personal project -- maybe I'll see what I can do to chip in with enhancements. Speeds are confirmed by the overall network speed in the stats plugin, the readout on davos is buggy, so I knew to look elsewhere, but ultimately not a concern. It's quite possible that it's an issue of lack of parallelism. IIRC, even using Filezilla, transfers from the seedbox don't reach full potential unless I'm downloading either multiple files at once, or large files in chunks. For my use-case in particular, I often download large sets of files, so dead-simple parallel downloads might be the way to go. I'll see if I can get some time for this. Heck, realistically, rclone could probably handle my case, it just didn't have a pretty UI at the time.
  25. Not sure if the title makes sense -- basically I have a share, we'll call it "xyzShare", which houses everything. The cache drive is a 250GB SSD. I have a workflow for media with Radarr and Sonarr which could easily tip over 250GB automatically if I weren't careful (and I often am not), particularly because I can't get Sonarr/Radarr to automatically clean up the downloads folder (from davos) once it has moved the media to its final resting place. In short, I don't have the cache drive enabled for the main share because media would overwhelm it. When I'm moving files around within my network, I'm not doing nearly as much traffic, and I write a lot of small files, which means the speed of the cache would really help. So I figure I should make a new, cache-enabled share for non-media stuff. Problem is, I've got lots of non-media data on the main share that would need to be migrated. How much effort/how long would it take to move everything that isn't in media folders into another share? If I just "mv /mnt/user/share1/datafolder /mnt/user/share2/", is Unraid smart enough to do mv's on individual disks to avoid rewriting everything?
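      (My understanding -- happy to be corrected -- is that a mv through /mnt/user isn't guaranteed to be a cheap rename, whereas doing it per-disk keeps the data in place because source and destination are on the same filesystem. A hedged sketch, with disk globs and folder names as placeholders:)

        # For each data disk that actually holds the folder, rename in place on that disk
        for d in /mnt/disk*; do
            if [ -d "$d/share1/datafolder" ]; then
                mkdir -p "$d/share2"
                mv "$d/share1/datafolder" "$d/share2/"   # same filesystem -> instant rename
            fi
        done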