gilahacker

Posts posted by gilahacker

  1. On 10/10/2022 at 2:51 AM, NLS said:

    I still need actual users that have tried BTRFS on array and even better, compression, to chime in. I am 100% sure we can find at least one.

    I formatted a single 4 TB drive in my array as BTRFS and enabled compression using `chattr +c`.
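
    In case anyone wants to replicate this, the setup amounts to the following (a sketch; the `btrfs property` lines are an optional way to check or set the per-path algorithm, which otherwise defaults to zlib):

    chattr +c /mnt/disk18                        # new files under this path inherit the compress attribute
    lsattr -d /mnt/disk18                        # verify: the 'c' flag should be listed
    btrfs property get /mnt/disk18 compression   # empty output means the zlib default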

    I also installed the "compsize" tool, to show me what kind of compression I was getting:

    Processed 213570 files, 1597070 regular extents (1597070 refs), 46817 inline, 223456 fragments.
    Type       Perc     Disk Usage   Uncompressed Referenced
    TOTAL       96%      1.8T         1.8T         1.8T
    none       100%      1.7T         1.7T         1.7T
    zlib        43%       50G         115G         115G


    Out of 1.8 TB total data, only 115 GB was compressed, though the files that did compress got a good ratio (2.3:1).

    I wanted to increase the compression level, but couldn't find any way to do that, since there's no entry for the drive in /etc/fstab like there would be on a non-unRAID setup. While reading up on that, I discovered zstd (even though it's been around for a while now), which seemed like a better option than the default zlib, so I ran `btrfs filesystem defragment -rf -czstd /mnt/disk18` to switch to zstd, and it compressed a lot more of the files:

    Processed 149606 files, 12382461 regular extents (12382461 refs), 29274 inline, 781994 fragments.
    Type       Perc     Disk Usage   Uncompressed Referenced
    TOTAL       73%      1.2T         1.7T         1.7T
    none       100%      405G         405G         405G
    zstd        65%      915G         1.3T         1.3T

    (note that I deleted some unnecessary files after the defragment, so my total size decreased)

    The compression ratio dropped (1.45:1), but >10x more data is being compressed now, so I'm "saving" ~416 GB of disk space whereas I was only saving ~65 GB before.

    Based on what I've found, zstd compression should be as good as or better than zlib while being a whole lot faster. Increasing the compression level means it takes longer to compress initially, but the decompression rate stays about the same (a trait zlib shares). I'd love to be able to crank up the compression level, but the changes to allow that either aren't merged into the btrfs command-line tool yet or the version we have isn't up to date. I found lots of discussion about adding them on GitHub, but haven't compared version numbers or anything.

    There's also supposedly a way to force compression on all files; by default, btrfs only test-compresses the beginning of each file and, if it doesn't appear to compress well, leaves the whole file uncompressed. This is why, even after running the defrag command above, 405 GB is still listed as having no compression. I haven't figured out if there's a way to do that here yet.
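
    (From what I've read since, that sampling heuristic is what btrfs's compress-force mount option overrides; a sketch, assuming you can actually set mount options, which unRAID's array mounts don't obviously expose:)

    mount -o remount,compress-force=zstd /mnt/disk18   # compress every extent, skip the sampling heuristic
    grep disk18 /proc/mounts                           # verify the active mount options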

    In my particular case, many of the files are already RAR or ZIP or whatever, which probably wouldn't compress well anyway, but I've been slowly crawling through and unpacking those files so I can actually browse their contents in my file manager. This drive contains my collection of STL files for 3D printing, which take up the majority of the space, and a few old system backups that take up 158 GB.
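
    The crawl itself is scriptable if you trust the archives (a hypothetical sketch, assuming unrar and unzip are installed, e.g. via NerdPack; the path is illustrative):

    find /mnt/disk18 -iname '*.rar' -execdir unrar x -o- {} \;   # extract next to each archive, -o-: never overwrite
    find /mnt/disk18 -iname '*.zip' -execdir unzip -n {} \;      # -n: never overwrite existing files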

  2. On 4/12/2020 at 2:49 AM, Zer0Nin3r said:

    Ah very cool. I guess there's no chance in getting it to work unless I were to reformat everything which means, I probably won't be able to enjoy the new functionality.

    I just added a new disk to my array and found out about the reflink thing while trying to figure out exactly why it shows 66 GB used* on an empty, newly formatted 10 TB disk. All of my old disks have reflink=0 (per the `xfs_info` command), and I don't believe it's possible to enable it without a reformat.

    *Seemed high, but I'm honestly not sure what it was on other disks when I added them. Something I stumbled upon in a Google search indicated that new XFS disks have significantly more "used" space to start with when that feature is enabled.
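
    If you want to check your own disks, it's a one-liner per disk (disk number is illustrative):

    xfs_info /mnt/disk1 | grep -o 'reflink=.'   # reflink=0: formatted without it; reflink=1: enabled
    # The flag is set at mkfs time and can't be flipped afterwards, hence the reformat.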

  3. @Hoopster brings up an interesting point: unlike many (most?) Linux distributions, unRAID runs completely within RAM, and nothing else gets mounted over top of the default rootfs (a special instance of tmpfs). I believe this is similar to any "Live" distro, but I don't have experience with those.

     

    So even though my /tmp isn't a tmpfs mount itself, it is *on* a tmpfs mount and exists only within RAM. Thus, either /tmp (a path on a tmpfs mount) or /dev/shm (an explicit tmpfs mount) should work exactly the same, other than the fact that some of the space on / will already be in use.
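
    Easy enough to confirm on your own box (a sketch; `findmnt -T` reports whichever mount a path actually lives on):

    df -h / /dev/shm      # on unRAID, / itself is the in-RAM rootfs (a tmpfs instance)
    findmnt -T /tmp       # shows the mount backing /tmp, i.e. the rootfs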

  4. 3 minutes ago, drawmonster said:

    So is /tmp/ still the directory we should be using? Seeing posts where people are recommending /dev/shm/. Can a mod or someone comment on which directory is recommended by the Unraid team?

    On *my* server, running 6.6.6, I have a /dev/shm tmpfs mount that is allocated 32 GB of RAM (per df). I have 64 GB total RAM; half of physical RAM is the kernel's default size for tmpfs mounts, which would explain the 32.

     

    I do not have a /tmp mount. I do have a /tmp directory, but it's not a tmpfs "RAM drive", just a plain directory.

     

    You can run `cat /etc/mtab` to see what your current mount situation is. I imagine it's similar.

     

    I have the Plex /transcode folder mapped to /dev/shm in its Docker settings. No issues so far.
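
    For reference, outside the unRAID GUI that mapping is just a bind mount on the container (a sketch; the image name and flags other than -v are illustrative):

    docker run -d --name plex \
      -v /dev/shm:/transcode \
      plexinc/pms-docker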

  5. The article about TCP BBR just recently popped up in my news feed. Did a Google search for "unraid tcp bbr" and it led me here to find that it's already been added to the next version. 😁

    Dunno if I'll actually see any difference myself (slightly faster downloads? smoother remote streams?) but I'm glad to see that Limetech included it just the same.
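
    If you're curious whether it's actually active once you're on the new version, the standard sysctls will tell you (stock Linux names, nothing unRAID-specific):

    sysctl net.ipv4.tcp_congestion_control             # prints "bbr" if it's in use
    sysctl net.ipv4.tcp_available_congestion_control   # lists what the running kernel offers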

  6. 5 hours ago, jonathanm said:

    ...Lime would need to start maintaining 2 different branches, one with all the add ons that consume tonnes of RAM, and another lightweight version for people that don't want all the fluff.

    I'm not a developer, though I do have some experience in that regard, and I don't think an entirely separate branch would be necessary. Toggles to allow users to enable/disable features should be sufficient. A tool to build a custom USB image with/without features could also work*. Splitting different components into their own packages might be best, so different parts can be updated independently and don't even need to be downloaded by those not using them. For example, if the GUI were a separate package, it could be updated without needing a full "OS update", and those not using it wouldn't need to have it installed or even download the new package when they update their system.

     

    *But may require significant dev work to make things more modular.

  7. Okay, it makes absolutely no sense, but switching from HDMI to DVI worked for me.

     

    I'm using a Zotac GeForce GT 710 (only thing I could find that's cheap, fanless, and fits in the x1 slot) and I've been stuck with a stretched-out, blurry, wrong-aspect-ratio 1024x768px resolution when accessing the GUI locally (using a 4k Sony TV as my monitor).

     

    I tried everything I could find software-wise, even attempting to install drivers from Nvidia (never managed to get them installed), and couldn't get the resolution any better than 1024x768.

     

    Just now, I swapped out the HDMI cable for a DVI cable with a DVI>HDMI adapter on the TV side (since the TV doesn't have DVI) and rebooted. The GUI came up at 1920x1080 (the terminal even looked higher-res before the GUI loaded). Running at actual 4k would have made things too tiny; 1080p is perfect.

     

    If anyone understands *why* it works better with DVI than it did with HDMI, I'm all ears.

  8. 1 hour ago, methanoid said:

    Is anyone running TR without the Zenstates C6 thing? That nerfs the power saving so I'd prefer to not have to do that!

    I've never done the zenstates thing. No problems. Asus ROG Zenith Extreme mobo.

  9. Bump.

     

    I have a Geforce 710 hooked up to a 4k TV and my GUI is running at 1024x768, which looks like crap. I have to zoom out in the browser to fit everything on the screen and the text ends up too small and horribly pixelated.

     

    It's far more convenient for me to mess with the web GUIs for my various Dockers through the unRAID GUI than it is on my phone (in the market for a new laptop), so I'd like to make it usable.

     

    `xrandr` lists the available resolutions as 640x480, 800x600, and 1024x768.

     

    I was able to add a custom resolution following directions I found here, but I can't switch to it; I just get a "Failed to change the screen configuration" error message. I've tried 1920x1080 at 60 Hz and at 30 Hz with the same results.
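
    For reference, the usual recipe those directions boil down to (the modeline below is cvt's standard output for 1080p60; the output name HDMI-1 is illustrative, use whatever `xrandr -q` reports):

    cvt 1920 1080 60   # prints the modeline used on the next line
    xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
    xrandr --addmode HDMI-1 1920x1080_60.00
    xrandr --output HDMI-1 --mode 1920x1080_60.00   # switching (this step) is what fails for me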

     

    I went so far as to reboot into non-GUI mode and try to install the Linux drivers for my card (after using the devpack plugin to install things like gcc), but it wants kernel source files that I don't have (I think I'd need a kernel-devel package?). I highly doubt the drivers would persist across a reboot even if I got them installed, but figured it was worth a try.

     

    If anyone has any suggestions, I'm all ears. :-/

    Update: The issues with launching my Steam games appear to be purely due to software configuration, with my new Windows 10 install trying to use my already-installed Steam games. Nothing to do with Threadripper or the ugly patched kernel.

     

    I installed the old DirectX runtime from http://www.microsoft.com/en-gb/download/details.aspx?id=8109 because, when trying to manually launch Doom outside of Steam, I got an error saying that xinput1_3.dll was missing. Doom now works in both normal and Vulkan modes (and looks oh-so-pretty at 4k60). Bioshock Infinite and Descent: Underground were also working after the DirectX install.

     

    When trying to launch Sonic Adventure 2, though, I got the same behavior as the others. I launched it outside of Steam and got a generic error, but Windows helpfully popped up a message saying that I needed to install .NET Framework 3.5. I'm guessing that'll fix some of the other Sonic games I haven't tried yet as well.
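
    For anyone else hitting this, .NET 3.5 can also be enabled from an elevated command prompt inside the VM (standard Windows tooling, nothing unRAID-specific):

    DISM /Online /Enable-Feature /FeatureName:NetFx3 /All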

     

    FWIW, I did first try uninstalling and re-installing the games through Steam with no luck. I probably should have just started from scratch on the installs. :-/

  11. Can't get any games to launch. Steam says they're running, but nothing ever pops up. Within a second or two the status changes from "running" to "syncing".

     

    Installed the Passmark Performance Test and it locked up and rebooted the VM when trying to access the 3D test page. Not running the test, just accessing the page you run it from, so I'm guessing something is wonky with the video card detection? GeForce Experience sees it fine, Windows is running at 4k60, etc.

     

    Can't even get Steam to launch now. Going to install all the Windows 10 updates and try again.

     

    I'm not sure if it has anything to do with the issues I'm seeing, but Windows is listing several devices, including the GPU, as if they're removable devices (see attached screenshot).

    Capture.PNG

  12. I downloaded the kernel thing @Jcloud made and rebooted. My VM still produced no video output. I noticed a comment on the Reddit thread about the dirty patch saying that SeaBIOS doesn't work, but OVMF does. Sure enough, my VM was using SeaBIOS.

     

    Unfortunately, there doesn't seem to be any way to change it. I created a new OVMF VM and holy crap, we have video output! However, it won't boot off my existing virtual disk (despite many attempts to repair the startup with the Windows DVD image and some manual command-line-fu I found online), so I just created a new one and got Windows installed. I installed the GeForce Experience software and had it update the driver, and that's all working fine so far. Now to figure out how I had my games folder mapped from the array, re-install Steam, and try a game.

  13. 8 hours ago, Jcloud said:

    For anyone lurking here: if you're on 6.4.0 and want to give the "Ugly Patch" a shot, I've compiled it, see this post

    My system has only been running it for about eight hours now.

    Definitely going to try this. Maybe tonight.

  14. Had to dial back the RAM to the "default" speed (2000-something MHz) as I had two complete lock-ups and one unexpected reboot. It's been up for nearly 14 hours now, so I'm pretty sure it was the RAM speed setting.

     

    The first lock-up happened when I tried to re-encode a Blu-ray with Handbrake. The server completely froze up within seconds of starting the encode. The subsequent unexpected reboot and lock-up happened while I was not actively using the server, but someone could have been streaming from Plex.

     

    My only theory (guess) at this point is that, because part of the chip runs at the RAM speed and I still have that woefully insufficient all-in-one cooler on there (haven't had time to figure out and install all the water-cooling stuff), the chip is running hot and overheated easily at the higher RAM speed.

     

    However, I had managed to do some encodes with Handbrake before (with half as much RAM, running at a lower speed), which definitely strained the processor, and didn't run into any problems, so who knows. I still don't have any kind of CPU temperature monitoring and can only guess at how hot it's running by the sound of the fan on the all-in-one.

  15. I'm running 6.4.0 stable with a 1950X on an Asus ROG Zenith Extreme. I get an "internal error: Unknown PCI header type '127'" message when trying to boot my Windows 10 VM with a Titan X (Pascal) passed through. It booted the first time and was accessible via RDC, but there was no video output, and subsequent attempts to boot give that error, so I'm guessing the first boot borked it until I reboot again.

     

    I assume that the kernel with the dirty patch is now older than what comes with 6.4.0. Willing to be a guinea pig if someone has a current fix.

     

    FWIW, this video card and this VM did previously work when I had an Intel chip.

  16. Finally got to mess with it again today. Got it booting with all 4 sticks of RAM at 3333 MHz with the latest beta BIOS (0902) so I finally have 64 GB to play with. I also threw in another SSD so I can make my cache RAID 1, but haven't set that up yet.

     

    I'm on 6.4.0 Stable and it boots off my "basic" video card (GeForce 710, IIRC) just fine. It allowed me to assign the Titan X to the Windows 10 VM and it started up and was accessible via RDC, but there was no video output. I stopped the VM and tried to restart it, but got "internal error: Unknown PCI header type '127'". I tried to create a new VM and assign it and got the same error again (screenshot attached).

     

    A Google search for the error brings up some posts from this forum, but I haven't read through them yet.

    Screenshot_20180113-162846.png

  17. And... I guess I screwed up. I just updated to the latest RC and rebooted, then updated the kernel with the ugly patch and rebooted again, and now I can't start my VM because IOMMU isn't enabled in the BIOS (I'm guessing it got reset with the latest BIOS update?). About to leave the house. If I manage to get back to it tonight I'll drop an update. Otherwise it'll be a week or so.

  18. 17 hours ago, skunky99 said:

    Any news?

    Unfortunately, I haven't had any free time to work on the server and I'm flying out of state tomorrow and won't be back for a week. I went through some of my stuff and couldn't find my old spare video card so I may take you up on your offer, if needed.

     

    @david279 - Thanks for the link! If I get a chance tonight I'll try the updated kernel and see if the VM will boot with the single video card. I don't know if it's even supposed to work with only a single video card, as I've always had onboard video before and never tried passthrough until I got the Titan X.