Everything posted by gilahacker

  1. I formatted a single 4 TB drive in my array as BTRFS and enabled compression using `chattr +c`. I also installed the "compsize" tool to show me what kind of compression I was getting:

     Processed 213570 files, 1597070 regular extents (1597070 refs), 46817 inline, 223456 fragments.
     Type       Perc     Disk Usage   Uncompressed Referenced
     TOTAL       96%         1.8T         1.8T         1.8T
     none       100%         1.7T         1.7T         1.7T
     zlib        43%          50G         115G         115G

     Out of 1.8 TB of total data, only 115 GB was compressed, though at a good ratio (2.3:1). I wanted to increase the compression level, but couldn't find any way to do that since there's no entry for the drive in /etc/fstab like there would be on a non-unRAID setup. While reading up on that, I discovered zstd (even though it's been around for a while now), and that seemed like a better option than the default zlib, so I ran

     btrfs filesystem defragment -rf -czstd /mnt/disk18

     to switch to zstd, and it compressed a lot more of the files:

     Processed 149606 files, 12382461 regular extents (12382461 refs), 29274 inline, 781994 fragments.
     Type       Perc     Disk Usage   Uncompressed Referenced
     TOTAL       73%         1.2T         1.7T         1.7T
     none       100%         405G         405G         405G
     zstd        65%         915G         1.3T         1.3T

     (Note that I deleted some unnecessary files after the defragment, so my total size decreased.) The compression ratio dropped (1.45:1), but more than 10x as much data is being compressed now, so I'm "saving" ~416 GB of disk space whereas I was only saving 65 GB before. Based on what I've found, zstd compression should be as good as or better than zlib while being a whole lot faster. Increasing the compression level means it takes longer to initially compress, but the decompression rate stays about the same (a trait zlib shares). I'd love to be able to crank up the compression level, but the changes to allow that either aren't merged into the btrfs command-line tool yet or the version we have isn't up to date. I found lots of discussion about adding them on GitHub, but haven't compared version numbers or anything.
There's also supposedly a way to force compression on all files: by default, btrfs only tries to compress the beginning of a file, and if it doesn't appear to compress well it leaves the whole file uncompressed. This is why, even after running the defrag command above, 405 GB is still listed as having no compression. I haven't figured out whether there's a way to do that yet. In my particular case, many of the files are already RAR or ZIP or whatever, which probably wouldn't compress well anyway, but I've been slowly crawling through and unpacking those files so I can actually browse their contents in my file manager. This drive contains my collection of STL files for 3D printing, which take up the majority of the space, and a few old system backups that take up 158 GB.
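For anyone reading later: on reasonably recent kernels both of these knobs exist as btrfs mount options — `compress=zstd:N` picks a level (1-15, kernel 5.1 or newer), and `compress-force` skips the "does the start of the file compress?" heuristic entirely. unRAID doesn't use /etc/fstab for array disks, so this is only a sketch of the idea, assuming the disk can be remounted by hand:

```shell
# Sketch (assumes kernel >= 5.1 and that remounting the disk manually is acceptable):
# force zstd at level 9 on everything written from now on...
mount -o remount,compress-force=zstd:9 /mnt/disk18
# ...then re-compress the existing data under the new settings
btrfs filesystem defragment -rf -czstd /mnt/disk18
# and check the results
compsize /mnt/disk18
```

The remount option only lasts until the next reboot, so on unRAID it would have to be reapplied (e.g. from a startup script).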
  2. Then one of us misunderstood what @Zer0Nin3r was asking about regarding `cp --reflink`, which requires the `reflink=1` flag to be set when the disk is formatted.
  3. I just added a new disk to my array and found out about the reflink thing while trying to figure out exactly why it shows 66 GB used* on an empty, newly formatted 10 TB disk. All of my old disks have reflink=0 (per xfs_info command), and I don't believe it's possible to enable it without a reformat. *Seemed high, but I'm honestly not sure what it was on other disks when I added them. Something I stumbled upon in a Google search indicated that new XFS disks have significantly more "used" space to start with when that feature is enabled.
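As a side note on what reflink actually buys you: on a reflink=1 filesystem (check with `xfs_info <mountpoint> | grep reflink`), cp can clone a file's extents instead of duplicating the data. A minimal sketch — `--reflink=auto` falls back to an ordinary copy on filesystems without the feature, so it's safe to run anywhere:

```shell
# Create a demo file and clone it; on reflink-capable XFS/BTRFS this shares
# extents (near-instant, no extra data blocks), elsewhere it's a normal copy.
echo "demo data" > original.bin
cp --reflink=auto original.bin clone.bin
cmp original.bin clone.bin && echo "identical"
```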
  4. @Hoopster brings up an interesting point: unlike many (most?) Linux distributions, unRAID runs completely within RAM, and nothing else gets mounted on top of the default rootfs (a special instance of tmpfs). I believe this is similar to any "Live" distro, but I don't have experience with those. So even though my /tmp isn't a tmpfs mount itself, it is *on* a tmpfs mount and exists only within RAM. Thus, either /tmp (a path on a tmpfs mount) or /dev/shm (an explicit tmpfs mount) should work exactly the same, other than the fact that some of the space on / will already be in use.
  5. On *my* server, running 6.6.6, I have a /dev/shm tmpfs mount that is allocated 32 GB of RAM (per df). I have 64 GB total RAM, not sure why it's set to 32. I do not have a /tmp mount. I do have a /tmp directory, but it's not a tmpfs "RAM drive", just a plain directory. You can run cat /etc/mtab to see what your current mount situation is. I imagine it's similar. I have the Plex /transcode folder mapped to /dev/shm in its Docker settings. No issues so far.
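The 32 GB isn't unRAID being weird, by the way: tmpfs defaults to half of physical RAM when no size= option is given, so 64 GB of RAM gets you a 32 GB /dev/shm. A sketch for inspecting (and, with root, resizing) it — the 48G figure is just an example:

```shell
# List every tmpfs currently mounted, then show the size of /dev/shm
grep tmpfs /etc/mtab
df -h /dev/shm
# Resize it (root required; lasts only until reboot):
# mount -o remount,size=48G /dev/shm
```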
  6. I am not a developer, though I do have some experience in that regard, but I don't think an entirely separate branch would be necessary. Toggles to allow users to enable/disable features should be sufficient. A tool to build a custom USB image with/without features could also work*. Splitting different components into their own packages might be best so different parts can be updated independently and not even need to be downloaded by those not using them. For example, if the GUI were a separate package it could be updated without needing a full "OS update", and those not using it wouldn't need to have it installed or even download that new package when they update their system. *But may require significant dev work to make things more modular.
  7. https://lime-technology.com/forums/topic/61771-optional-gui-resolution/?do=findComment&comment=656621 TLDR: Switching from HDMI to DVI got me 1920x1080px resolution in the GUI. I have no idea why, but it worked.
  8. Bump. I have a GeForce 710 hooked up to a 4k TV and my GUI is running at 1024x768, which looks like crap. I have to zoom out in the browser to fit everything on the screen, and the text ends up too small and horribly pixelated. It's far more convenient for me to mess with the web GUIs for my various Dockers through the unRAID GUI than it is on my phone (in the market for a new laptop), so I'd like to make it usable.

     xrandr lists the available resolutions as 640x480, 800x600, and 1024x768. I was able to add a custom resolution following directions I found here, but can't switch to it. I just get a "Failed to change the screen configuration" error message. I've tried 1920x1080 at 60 Hz and at 30 Hz with the same results. I went so far as to reboot into non-GUI mode and try to install the Linux drivers for my card (after using the devpack plugin to install things like gcc), but it wants kernel source files that I don't have (I think I'd need a kernel-devel package?). I'm highly doubting that even if I got the drivers installed they'd persist across a reboot, but figured it was worth a try. If anyone has any suggestions, I'm all ears. :-/
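For reference, the usual xrandr recipe looks like this — the modeline numbers come straight out of cvt, and the output name (HDMI-0 here) is an example; check what `xrandr -q` reports for your connector:

```shell
# Generate a CVT modeline for 1920x1080 @ 60 Hz
cvt 1920 1080 60
# Register the mode and switch to it (output name "HDMI-0" is a placeholder)
xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
xrandr --addmode HDMI-0 1920x1080_60.00
xrandr --output HDMI-0 --mode 1920x1080_60.00
```

If the driver rejects the mode (as it did for me), the last step is where the "Failed to change the screen configuration" error shows up.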
  9. Cron format:

     # .---------------- minute (0 - 59)
     # |  .------------- hour (0 - 23)
     # |  |  .---------- day of month (1 - 31)
     # |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
     # |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
     # |  |  |  |  |
     # *  *  *  *  *  user-name command to be executed

     For 0 0 * * 1,3,5 it would run at midnight every Monday, Wednesday, and Friday. https://crontab.guru is great for figuring out cron schedules. But, as @ljm42 said, you might want to use the CA User Scripts plugin instead.
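If you want to sanity-check an entry without leaving the shell, the five fields split cleanly on whitespace; a tiny sketch:

```shell
# Decode "0 0 * * 1,3,5" field by field
set -f                      # turn off globbing so the '*' fields aren't expanded
entry='0 0 * * 1,3,5'
set -- $entry               # word-split into positional parameters
echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
# -> minute=0 hour=0 day-of-month=* month=* day-of-week=1,3,5
```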
  10. Supposedly, but it does require something extra. I don't remember what it was called or if it was DVD only or worked for Blu-ray as well. It was some kind of Linux library equivalent to AnyDVD on Windows. Support for it may have needed to be compiled into Handbrake. I wish I remembered more. I just recall someone posting here asking if it could be added or why it wasn't added to the Docker.
  11. The new version does have the updated x265, which is great, but the FDK-AAC audio encoder has been removed by the developer of Handbrake due to licensing issues. :-( https://handbrake.fr/news.php?article=36 My speaker system (Sonos) can't do DTS (which is super-annoying), so I'm using the lower quality avcodec AAC encoder for now. You can compile your own Handbrake installation from source to include the FDK-AAC audio encoder. If I ever get around to doing that myself I will try to make some easy-to-read directions for it and post them here. I would assume we can just swap out whatever files are different between the pre-compiled version and the compile-it-yourself version without having to do anything else to the Docker image.
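In case anyone wants to try before I get around to writing directions: HandBrake's own build system has a switch for it, so the build is roughly this (dependency packages vary by distro and are omitted here; flags per the HandBrake build docs):

```shell
# Fetch the source and configure with the FDK-AAC encoder enabled,
# then kick off the build with one job per CPU core
git clone https://github.com/HandBrake/HandBrake.git
cd HandBrake
./configure --enable-fdk-aac --launch-jobs=$(nproc) --launch
```

Treat this as a sketch — the full dependency list (autoconf, cmake, yasm, the dev headers, etc.) has to be installed first and differs between distros.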
  12. Installing libdvdcss didn't fix it, though it probably is still required for decrypting the DVDs once Handbrake can actually see them. I tried installing gvfs and restarting the docker, as someone at https://bbs.archlinux.org/viewtopic.php?id=155620 said that fixed this issue for them. It did nothing for me. I'm open to suggestions, but don't know of anything else to try at this point.
  13. I did not even notice it was there. Unfortunately, medibuntu has been shut down since some time in 2013, according to a Google search. The domain doesn't even resolve anymore. I did find this though: http://www.videolan.org/developers/libdvdcss.html Looks like the good folks that make VLC have taken ownership and are keeping it available for everyone. I'll give that a try some time tomorrow. Right now my Dockers are down for backup of the appdata folder from cache to array (hooray new feature in Community Applications!) and the Plex metadata is taking *forever* to copy over. A bajillion tiny files or something. It's been running for over an hour and it's just gotten to the d's and appears to be going alphabetically...
  14. I've attached a screenshot of the source selection screen for reference. As you can see, my drive is showing up as /dev/sr0, but nothing happens if I try to select that directly as the source (as I've seen recommended on some forums), and the drop-down next to "Detected DVD Devices" is empty. From what I've read, if an optical drive is detected it should also have a listing in the left column of this window, and it does not.

      I opened a bash shell in the Docker and tried to follow the directions here: http://askubuntu.com/questions/239748/unable-to-rip-dvd-using-handbrake-or-ogmrip Only the update/upgrade worked. It couldn't find the restricted packages. libdvdread4 was already installed, and there is no /usr/share/doc/libdvdread4/install-css.sh script on the system. One completely unintended side effect of this was that it upgraded my Handbrake version to 10.5 (had to restart the container). So, that's one previous question answered.

      Every attempt I've made to "mount" the DVD drive has given me an "access denied" error. I've tried every way I could find via a Google search; surprisingly, there are basically no results for how to do this within a Docker container specifically. Running blkid shows it does see the drive, and it's even able to identify the DVD I put in it:

      /dev/sr0: LABEL="DOGMA" TYPE="udf"

      But I can't get any further. In all likelihood, we'd need something specifically for breaking the encryption on the discs as well. I assume that is what the install-css.sh script I'm missing is for, as well as the restricted packages that I can't install. My MakeMKV Docker does a generally fantastic job of ripping discs, and I'm primarily using it for Blu-rays, not DVDs. This is just one of those things I want to figure out for the fun of it. :-)
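One likely explanation for the "access denied", for anyone else hitting this: mount(2) requires the CAP_SYS_ADMIN capability, which Docker containers don't get by default, so even root inside the container can't mount the drive. A sketch of what would have to change (the container and image names here are placeholders):

```shell
# Grant the mount capability when starting the container ("handbrake" is hypothetical)
docker run -d --name handbrake --device=/dev/sr0 --cap-add=SYS_ADMIN handbrake-image
# Then, inside the container, mounting the UDF disc should be possible:
docker exec handbrake sh -c 'mkdir -p /mnt/dvd && mount -t udf -o ro /dev/sr0 /mnt/dvd'
```

Whether that alone makes Handbrake's scanner see the disc is a separate question, but it should at least get past the mount error.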
  15. I just tried this on mine and Handbrake does not appear to recognize the drive. The same exact line works for my MakeMKV Docker, so I'm pretty confident /dev/sr0 is the right device without manually checking. I'm thinking there has to be something else required for the mounting of the drive. I'll poke around in my MakeMKV docker tonight and see if it's something simple like an /etc/fstab entry (assuming that these Dockers have that). All my Linux experience is on actual servers with no optical drives and no Docker containers, so I don't really know how it all works in that regard but I'm sure we can figure it out. :-)
  16. Now I really want to try this once I have stupid amounts of RAM, even if it's just to see if I can make it work.
  17. Finally got this container working just now. I had tried when I first started using unRAID about a month or two ago and did something wrong, who knows what. Blew away the existing config, reinstalled with proper path and port mappings, and it's doing its thing as I type this message. It hasn't gotten far enough along the current disk to say for sure, but it looks like it's going about as fast as it was when I had this drive in the Windows machine that unRAID is replacing. I understand that Docker isn't really supposed to add any overhead, but you never know what the differences are going to be between two completely different systems. Now to see how long it takes to go through the pile of DVDs and Blu-rays in my closet. :-P Thanks Saarg!
  18. @Sparklyballs Would it be possible for me to update the version of Handbrake inside this Docker? It currently has 10.2 and Handbrake is on version 10.5. Changes since 10.2 include: - Various bug fixes for all platforms and the core engine. - Updated x265 to 1.9 which brings bug fixes and performance improvements. - Improvements in large AVI file handling. I haven't run into any bugs so far, but I am using x265 so having the most up to date version of that encoder would be nice. I'm new to Docker and hope to eventually start making my own Dockers, but at this point I'm not even sure what is/isn't possible. If it's something you'd have to update then I guess we just wait until you have the time/desire to do so. :-)
  19. +1 Is there a way of making this an official feature request? Okay, forum n00b just realized this *is* the way you officially request features around here...
  20. 16 GB right now, which is significantly more than all but one of my current video files (because I haven't reencoded it to a reasonable size yet). I did just get a 4k TV and an Nvidia Shield Android TV, so I *might* try out some 4k videos. Planning on getting 128 GB when I upgrade later this year (yes, partially just because I can). I know I'm just being paranoid about the SSD. I went with an 850 Evo instead of the Pro and, IIRC, it's rated for half as many writes by Samsung (but will likely still outlive all my spindle drives). Still... there's really no reason *not* to use the RAM if it's available. If the server crashes/loses power/whatever then that data doesn't matter anyways.
  21. My SSD is plenty fast, but transcoding in RAM would be nice just to avoid unnecessary wear on the SSD for those of us who have enough RAM.
  22. *UPDATE* Nevermind, it just took *forever* to become accessible. GUI is working, streaming is working. Everything is good and right in the world. :-) Original post: Updated my Docker today (first time I've done that since setting up the server about a week ago) and now it doesn't work. unRAID shows it as running, but GUI is not accessible and the log shows it's just sitting at "Starting Avahi daemon". Using linuxserver/plex.latest. :-(