jaylo123

Members
  • Posts

    85
About jaylo123

  • Birthday 08/05/1996

Converted

  • Gender
    Male
  • Location
    Houston, TX.

Community Answers

  1. Had the same issue. Figured maybe it was a widespread thing. Nope. I checked the Radarr logs going back over a month (the maximum retained) and every single entry has some message about a malformed DB. No idea when it started, but we're in the same boat: blow it away and start from scratch, assuming your backups are also invalid. I only wish I could get notified about this - I might need to set up a cron job to trawl the logs and shoot me an email alert if the word 'malformed' ever appears, so I can jump right on it (a rough sketch of that is at the end of this list). I did a cursory check of all my other apps and everything else seems fine. Just this one, on my local storage. /shrug
  2. I've read the above. I still don't understand why this is even required. Responses above have said that this isn't a bug in .8. Yet, this "patch" exists as a thing in our collective universe. Either this "patch" should be included in UnRAID's base code and a .9 release addresses this, or I should uninstall "Fix Common Problems" / tell it to ignore this issue. I completely understand and appreciate that none of the above is part of the UnRAID native operating environment, but I also shouldn't be getting nightly reminders that I require a "patch" for something that doesn't impact me at all. And if I install it to quiet my alerts, I then have to remember to uninstall it when it isn't an issue.
  3. Didn't even think to check there. And I clearly missed that the log file isn't automatically rotated. I'll add an entry to logrotate/tmpwatch/whatever Unraid uses (I float between different distros at work and can't remember which one Unraid ships lol) - a rough logrotate sketch is at the end of this list. Thanks again!
  4. This is filling up my /var/log dir. How do I stop it from creating a runaway .csv?
     root@mediasrv:/var/log# ls -l | grep cache_dirs
     -rw-r--r-- 1 root root 1.4M Feb 11 18:43 cache_dirs.log
     -rw-r--r-- 1 root root  42M Feb 11 18:43 cache_dirs.csv
     root@mediasrv:/var/log# df -h .
     Filesystem      Size  Used Avail Use% Mounted on
     tmpfs           128M   76M   53M  60% /var/log
  5. I know this is an older post now, but don't forget to go back and reset the vDisk bus from SATA to VirtIO! It significantly increases read/write speeds on disks in KVM-managed VMs (a libvirt XML sketch of the change is at the end of this list).
  6. For those reading in September 2023: upstream development on this app stopped in Dec 2022. See: GitHub - JasonHHouse/gaps: Find the missing movies in your Plex Server. Maybe someone will pick up the mantle in the future. Does anyone know of any alternatives? Or did I miss this in a comment above?
  7. Found it: /boot/config/plugins/dockerMan/templates-user/<container name>.xml
     Update it, then restart Docker via Settings -> Docker -> Enable Docker: No -> Save, then set Enable Docker: Yes.
     I guess this is the correct method: Apps -> Previous Apps -> Reinstall -> Remove offending entry -> Success.
     (A small shell sketch for backing up and inspecting the template is at the end of this list.)
  8. Hello - I updated an application in docker with an invalid configuration setting. I meant to add a port to a container's configuration but forgot to change the setting from 'volume' to 'port' in the web gui. Now the container is gone, with an orphan image. Reinstalling from 'previous apps' produces the same issue. I'd prefer to not wipe/reload the entire container and start over from scratch because the configuration is quite tweaked overall. I've tried to locate the file in the OS but I'm not having much luck. I was hoping to just remove the last update I did to the config from the configuration file / template but I cannot seem to locate the data. I've checked the docker.img mount point under /var, appdata and /boot. Where would one go to quickly edit the config so I can fix my 'uh-oh'? I'm fine once I know where the app configurations are stored and can figure it out from there, I just need some guidance on where to look. Thanks in advance!
  9. Very excellent point. The server is actually within reach, but of course it is also always on and I'm not always at home. I did slyly purchase this setup with the intent of turning it into a gaming rig, grabbing a Rosewill chassis or whatever, and moving my server duties to that. I built the system in 2020 and the last component I was missing was a GPU. Well, Ethereum put a stop to those plans. I did have an M60 available from work that I used up until 6 months ago for vGPU duties, but ... eww. Maxwell. It got the job done, though, for vGPU passthrough for my needs, which mostly consisted of either Plex transcoding (I've since flipped to Intel QuickSync) or a VM running a desktop so my daughter could play Roblox / Minecraft / whatever remotely using Parsec. And Maxwell was the last generation of GRID GPU that didn't require a license server - that all started with Pascal and continues with Turing, Ampere and now (I assume) Lovelace. Now, however, EVGA GeForce GPUs are about to go on a fire sale, so maybe I can continue with those plans - albeit 2.5 years later. I already have a half-height rack sitting in my Amazon cart, but I'm still struggling to find a case that meets my needs.
  10. 32GB dual channel, non-ECC (hope to switch to ECC soon as I'm about to flip to ZFS) Here is my full component list: Completed Builds - PCPartPicker And yes yes, I know, I don't need an AIO for this setup. But I hadn't installed one before and I had the scratch available.
  11. Interesting. Saw 40GB and my brain went "wait, what?" Dual channel I assume since you're using 2x pairs? Same model, just a different size?
  12. Nope. I did have the parity drive operational and the array online, other than the single disk that I removed from the array. That's why /mnt/user/ was my <dest> path, and I was getting those speeds. And the amount of data being transferred was 6x the size of my SSD cache disk. My signature has the drives I use, which are enterprise Seagate Exos disks. I guess their on-disk cache handles writes a bit more efficiently than more commodity drives? /shrug - but 240MB/s is the maximum for spinning disks without cache, and I assume the writes were sequential.
      Oh, that's interesting (Turbo Write Mode). That... probably would have been beneficial for me! But I got it done in the end after ~9 hours. Of course, as a creature of habit, I re-ran the rsync one more time before wiping the source disk to ensure everything was indeed transferred.
      I didn't measure IOPS, but I'm sure they would have been pretty high. I just finished benchmarking a 1.4PB all-flash array from Vast Storage at work and became pretty enamored with the tool elbencho (a pseudo mix of fio, dd and iozone, with sequential and random options - and graphs!), and after spending basically four weeks in spreadsheet hell I wasn't too interested in IOPS - I just needed the data off the drive as quickly as possible. That said, making 16 simultaneous TCP connections to an SMB share and seeing a fully saturated 100GbE line reading/writing to storage at 10.5GB/s felt pretty awesome! For anyone interested in disk benchmarking tools, I highly recommend elbencho as another tool in your toolkit (a rough example invocation is at the end of this list). The maintainer even compiles a native Windows binary with each release. Take a look! breuner/elbencho: A distributed storage benchmark for file systems, object stores & block devices with support for GPUs (github.com)
      Oh certainly, and yes, I knew there was a performance hit using the unraidfs (for lack of a better term) setup. And agreed, eschewing the UnRAID array entirely and hacking it at the CLI to set up ZFS is a route one could take. But at that point it would be better to just spin up a Linux host and do it yourself without UnRAID, or switch over to TrueNAS. The biggest draw for me to UnRAID was/is the ability to easily mix and match drive sizes and types into one cohesive environment. I guess I was really just documenting how I was able to achieve what OP was trying to do, for future people stumbling across the thread via Google (because that's how I found this post).
  13. No need to apologize! DEFINITELY appreciate the work you've done! I was probably a bit too harsh myself. I've just seen soooo many apps on CA sit there in various states of functionality (even 'Official' ones), so I kind of soapboxed a bit. This thread was probably the wrong place for it. Your work here is great and I actually use your contribution for Homarr in CA every day! Cheers
  14. Yep. I came across this thread (now years later via Google) because the speeds were horrible. I didn't even consider taking the disk I want to remove out of the array. I stopped the array, went to Tools -> New Config and set everything up the same way, except I didn't add the disk I wanted to repurpose outside of UnRAID. When the disk was in the array I was getting ~50-60MB/s using rsync on the command line while transferring large (5+GB) files. After I removed the disk from the array, restarted the array without the disk, SSHed into the UnRAID system, manually mounted the disk I wanted to remove and re-ran the same rsync command, I was getting ~240MB/s, which is the maximum my spinning disks can do for R/W ops. I would expect a destination array built on SSDs to also reach its theoretical maximum throughput, depending on your data of course (block size, small files vs. large files, etc). It meant the difference between a 32-hour transfer and just over a 9-hour transfer for 7TB of data.
      Steps I used, in case someone else who finds this thread via a Google search like I did finds them useful. Full warning: the below is only for people who understand that UnRAID support on these forums will only help you as a 'best effort' and who are comfortable with the command line. There is no GUI way of doing this. You've been warned (though, that said, this is fairly easy and safe; still, since we are coloring outside of the lines, BE CAREFUL).
      After removing the drive from the array via Tools -> New Config, starting the array without the drive, and manually updating all shares so the mover will not run, and assuming /dev/sdf1 is the drive you want to remove, install screen via the Nerd Pack plugin, open a console session (SSH or web console via the GUI - either works) and type:
      # Launch screen
      root@tower# screen
      # Create a mount point for the drive you want to remove data from
      root@tower# mkdir /source
      # Mount the drive you want to remove data from
      root@tower# mount /dev/sdf1 /source
      # Replicate the data from the drive you intend to remove TO the general UnRAID array
      # (--progress is optional; it just shows speed and what is happening)
      root@tower# rsync -av --progress /source/ /mnt/user/
      # Press CTRL+A, then the letter D, to DETACH from screen if this is a multi-hour process
      # or you need to start it remotely and want to check on it later easily.
      # Press CTRL+A, then the letter K, then Y, to kill the screen window. Note that this WILL
      # stop the transfer, whereas detaching with CTRL+A, then D will not.
      # To reconnect, SSH back into the system and type:
      root@tower# screen -r
      (wait for rsync to complete)
      root@tower# umount /source
      root@tower# rmdir /source
      # IMPORTANT: If either of the above two commands fails, you have ACTIVE processes using the
      # drive you want to remove. Unless you know what you're doing, do not proceed until both
      # commands complete without any warnings or errors.
      Shut down the server, remove the drive, turn the server back on, and change the shares that were modified at the start of this process back to their original state so the mover will run once again.
      Why use screen? You can certainly do this without it, but if you don't use screen and you get disconnected from your server during the transfer (WiFi goes out, you're in a coffee shop, etc.), your transfer will stop. Obviously this is not an issue if you're doing this on a system under your desk, but even then it is probably still a good idea - what if the X session crashes while you're booted into the GUI? Screen does not care; it will keep going, and you can reattach to it later to check on the progress.
      I did try to use the Unbalance plugin in conjunction with the Unassigned Devices plugin so that the drive I wanted to copy data FROM was not in the array, however Unbalance doesn't work that way - at least not that I could dig up.
  15. Well, I can see both sides. I've certainly also abandoned perfectly working apps because of similar issues. While they are maintained by volunteers, it seems that in at least some cases (especially with lesser-known apps) the volunteer in question just abandons the app and it languishes.
      Maybe it's a larger discussion for CA: require a twice-a-year check-in from the volunteer, or the app gets tagged with something like 'possibly no longer maintained' so people browsing the store know they may have issues. Folks like binhex are famously reliable, but "johnsmith1234" with one app may publish it once and never re-evaluate it for required updates again, even though the Docker version it was created against is now 8 major releases ahead, yet CA offers it as if it works just fine out of the box. If the volunteer doesn't check in during a 'renewal month' or whatever - maybe twice a year? - the app gets an 'unmaintained' flag so the community knows they may have issues and/or need deeper Linux and Docker knowledge to use it. If the volunteer wants to remove the flag, they just go to their app and check a box that says "This app has been validated to still be functional" or something.
      netbox by linuxserver is a perfect example of this. It gets you around 70% of the way there, but unless you're fine dabbling with command-line edits of an application you've never messed with, and you know where to look for the right log files to debug and troubleshoot, you're just going to give up after a few minutes.
      Just thinking out loud - certainly not knocking anyone in particular - but I do think some additional QC of CA wouldn't be too much to ask.
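
A rough version of the 'cron job to trawl the logs' idea from answer 1. The Radarr log directory, the state file, and the use of Unraid's bundled notify script are all my assumptions, not anything from the original posts - adjust the paths and the alert mechanism to whatever your setup actually uses.

    #!/bin/bash
    # check_malformed.sh - hypothetical sketch: look for SQLite corruption messages in the
    # Radarr logs and raise an Unraid notification once per incident.
    LOGDIR=/mnt/user/appdata/radarr/logs     # assumed appdata log path - verify on your system
    STATE=/tmp/malformed.alerted             # marker file so we only alert once

    if grep -rqi "malformed" "$LOGDIR" 2>/dev/null; then
        if [ ! -f "$STATE" ]; then
            # Unraid's bundled notifier (assumed location; flags may differ between versions)
            /usr/local/emhttp/webGui/scripts/notify \
                -e "DB check" -s "Radarr database may be malformed" \
                -d "The word 'malformed' appeared in $LOGDIR" -i "alert"
            touch "$STATE"
        fi
    else
        rm -f "$STATE"
    fi

Run it on a schedule via the User Scripts plugin or a root cron entry such as: 0 6 * * * /boot/config/check_malformed.sh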
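
For the cache_dirs log growth in answers 3 and 4: a minimal logrotate sketch, assuming Unraid ships logrotate and reads drop-ins from /etc/logrotate.d (the poster wasn't sure which rotation tool Unraid uses, and this is an assumption on my part too - verify before relying on it). The size threshold and rotation count are arbitrary.

    # Create a drop-in covering both cache_dirs files
    cat > /etc/logrotate.d/cache_dirs <<'EOF'
    /var/log/cache_dirs.log /var/log/cache_dirs.csv {
        size 10M
        rotate 2
        compress
        missingok
        notifempty
        copytruncate
    }
    EOF

Because Unraid rebuilds /etc in RAM at every boot, a drop-in created this way disappears after a reboot; re-create it at startup (e.g. from /boot/config/go) if you want it to stick.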
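
For the SATA-to-VirtIO tip in answer 5, this is roughly what the change looks like in the VM's libvirt XML (editable via the XML view on the VM's edit page in the Unraid GUI, or with virsh edit <vm name>). The dev names below are examples only, and the guest needs VirtIO drivers installed - especially a Windows guest - before you flip the bus.

    <target dev='hdc' bus='sata'/>      <!-- before: emulated SATA -->
    <target dev='vdc' bus='virtio'/>    <!-- after: paravirtual VirtIO -->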
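
A small follow-on to answer 7 - backing up and locating the bad mapping before hand-editing. The grep pattern assumes dockerMan templates mark each mapping with a Type attribute (Port, Path, Variable); double-check against your own XML before trusting it.

    cd /boot/config/plugins/dockerMan/templates-user
    cp '<container name>.xml' '<container name>.xml.bak'           # keep a copy before editing
    grep -nE 'Type="(Port|Path|Variable)"' '<container name>.xml'  # find the entry saved as the wrong type
    nano '<container name>.xml'                                    # fix the offending <Config> entry
    # then toggle Docker off/on under Settings -> Docker, or use Apps -> Previous Apps -> Reinstall,
    # so dockerMan re-reads the template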
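
And since answer 12 plugs elbencho, here is the sort of invocation meant there. Treat it as a sketch: the flags are quoted from memory of elbencho's help output, the path is made up, and you should confirm everything against elbencho --help before running it.

    # sequential write test: 16 threads, 4MiB blocks, one 20GiB file per thread, direct I/O
    elbencho -w -t 16 -b 4M -s 20G --direct /mnt/user/bench/file{1..16}
    # read the same files back
    elbencho -r -t 16 -b 4M --direct /mnt/user/bench/file{1..16}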