
jaylo123

Members
  • Posts
    87
  • Joined

About jaylo123

  • Birthday 08/05/1996

  • Gender
    Male
  • Location
    Houston, TX.

Recent Profile Visitors

1,448 profile views

jaylo123's Achievements

Rookie (2/14)

Reputation: 10

Community Answers (1)

  1. Yea, doesn't work. Fresh install. The logs just contain the following:

     started 240711 as root (amd64-prod)
     init: account with the user id 99 already exists
     init: updating filesystem permissions
     PHOTOPRISM_DISABLE_CHOWN="true" disables permission updates
     Problems? Our Troubleshooting Checklists help you quickly diagnose and solve them: https://docs.photoprism.app/getting-started/troubleshooting/
     file umask....: "0002" (u=rwx,g=rwx,o=rx)
     home directory: /photoprism
     assets path...: /opt/photoprism/assets
     storage path..: /photoprism/storage
     config path...: default
     cache path....: default
     backup path...: /photoprism/storage/backups
     import path...: /photoprism/import
     originals path: /photoprism/originals
     switching to uid 99:100
     /opt/photoprism/bin/photoprism start
     /opt/photoprism/bin/photoprism start
     /opt/photoprism/bin/photoprism start

     Here is what I see. The browser says the connection was "refused", which tells me the host is reachable but nothing is serving web traffic on that port:
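     A quick way to narrow down a "connection refused" like this is to confirm the container's port is actually published and something is listening on it. A minimal sketch, assuming the container is named photoprism and uses PhotoPrism's usual default port 2342 (SERVER_IP is a placeholder):

     # Check the published port mapping and the most recent container output
     docker ps --format '{{.Names}}  {{.Ports}}' | grep -i photoprism
     docker logs --tail 50 photoprism

     # Test the web port directly from another machine
     curl -v http://SERVER_IP:2342/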
  2. +1. MTR is invaluable. It traces each hop and helps track down which hop is introducing latency or dropping packets, which is essential information when troubleshooting networks with multiple switches or internet issues. I came across this thread after looking for MTR in the current release of UnRAID. It's a pretty standard package, and I'm surprised it isn't included in Nerd Tools as an installable package.
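     For reference, a typical report-mode run looks something like this (the target host is just an example):

     # -r report mode, -w wide hostnames, -z show AS numbers, -b show both names and IPs, -c 100 pings per hop
     mtr -rwzb -c 100 8.8.8.8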
  3. Had the same issue. Figured maybe it was a widespread thing. Nope. I checked the Radarr logs going back over a month (the maximum retained) and every single entry has some message about a malformed DB. No idea when it started, but we're in the same boat: blow it away and start from scratch, assuming your backups are also invalid. I only wish I could get notified about this; I might need to set up a cronjob to trawl the logs and shoot me an email alert if the word 'malformed' ever appears so I can jump right on it. I did a cursory check of all my other apps and everything else seems fine. Just this one, on my local storage. /shrug
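     A minimal sketch of that kind of cron alert, assuming the Radarr appdata path shown below, that the logs live in a logs/ folder there, and that a working mail command is available on the host (the schedule and address are placeholders):

     # Hourly: if 'malformed' appears anywhere in the Radarr logs, send an email alert
     0 * * * * grep -ril 'malformed' /mnt/user/appdata/radarr/logs/ >/dev/null 2>&1 && echo "Radarr reported a malformed database on $(hostname)" | mail -s "Radarr DB alert" you@example.com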
  4. I've read the above. I still don't understand why this is even required. Responses above have said that this isn't a bug in .8, yet this "patch" exists as a thing in our collective universe. Either this "patch" should be included in UnRAID's base code, with a .9 release that addresses it, or I should uninstall "Fix Common Problems" / tell it to ignore this issue. I completely understand and appreciate that none of the above is part of the UnRAID native operating environment, but I also shouldn't be getting nightly reminders that I require a "patch" for something that doesn't impact me at all. And if I install it to quiet my alerts, I then have to remember to uninstall it when it's no longer an issue.
  5. Didn't even think to check there. And I clearly missed that the log file isn't automatically rotated. I'll add it to logrotate/tmpwatch/whatever Unraid uses (I float between different distros at work and can't remember which one Unraid ships lol). Thanks again!
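     A minimal logrotate sketch for those two files, assuming logrotate is present and /etc/logrotate.d is consulted (on Unraid /etc lives in RAM, so this would need to be reapplied after each boot, e.g. from the go file):

     cat > /etc/logrotate.d/cache_dirs <<'EOF'
     /var/log/cache_dirs.log /var/log/cache_dirs.csv {
         size 10M
         rotate 2
         copytruncate
         compress
         missingok
     }
     EOF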
  6. This is filling up my /var/log dir. How do I stop it from creating a runaway .csv?

     root@mediasrv:/var/log# ls -l | grep cache_dirs
     -rw-r--r-- 1 root root 1.4M Feb 11 18:43 cache_dirs.log
     -rw-r--r-- 1 root root  42M Feb 11 18:43 cache_dirs.csv
     root@mediasrv:/var/log# df -h .
     Filesystem      Size  Used Avail Use% Mounted on
     tmpfs           128M   76M   53M  60% /var/log
  7. I know this is an older post now, but don't forget to go back and reset the vDisk bus from SATA to VirtIO! This significantly increases read/write speeds on disks in VMs managed by KVM.
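     For anyone who wants to make that change by hand, it's a one-line edit in the VM's XML. A minimal sketch, with a placeholder VM name (the guest needs VirtIO drivers installed before it will boot from the new bus):

     virsh edit MyVM    # or use the XML view in the Unraid VM editor
     # then change the vDisk's target bus, for example:
     #   <target dev='hdc' bus='sata'/>   ->   <target dev='vdb' bus='virtio'/>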
  8. For those reading in September of 2023: upstream development on this app stopped in Dec 2022. See: GitHub - JasonHHouse/gaps: Find the missing movies in your Plex Server. Maybe someone will pick up the mantle in the future. Does anyone know of any alternatives? Or did I miss this in a comment above?
  9. Found it: /boot/config/plugins/dockerMan/templates-user/<container name>.xml
     Update the file, then restart Docker via Settings -> Docker -> Enable Docker: No -> Save, then set Enable Docker: Yes.
     I guess this is the correct method: Apps -> Previous Apps -> Reinstall -> Remove offending entry -> Success
  10. Hello - I updated an application in Docker with an invalid configuration setting. I meant to add a port to a container's configuration but forgot to change the setting from 'volume' to 'port' in the web GUI. Now the container is gone, with an orphan image. Reinstalling from 'Previous Apps' produces the same issue. I'd prefer not to wipe/reload the entire container and start over from scratch because the configuration is quite heavily tweaked. I've tried to locate the file in the OS but I'm not having much luck. I was hoping to just remove the last update I made from the configuration file / template, but I cannot seem to locate the data. I've checked the docker.img mount point under /var, appdata and /boot. Where would one go to quickly edit the config so I can fix my 'uh-oh'? I'm fine once I know where the app configurations are stored and can figure it out from there, I just need some guidance on where to look. Thanks in advance!
  11. Very excellent point. The server is actually within reach, but of course it is also always on and I'm not always at home. I did slyly purchase this setup with the intent of turning it into a gaming rig, grabbing a Rosewill chassis or whatever, and moving my server duties to that. I built the system in 2020 and the last component I was missing was a GPU. Well, Ethereum put a stop to those plans. I did have an M60 available from work that I used up until 6 months ago for vGPU duties but ... eww. Maxwell. It got the job done though for vGPU passthrough for my needs, which mostly consisted of either Plex transcoding (I've since flipped to Intel QuickSync) or a VM running a desktop so my daughter could play Roblox / Minecraft / whatever remotely using Parsec. And Maxwell was the last generation of GRID GPU that didn't require a license server. That all started with Pascal and continues on with Turing, Ampere and now (I assume) Lovelace. Now, however, EVGA GeForce GPUs are about to go on a fire sale, so maybe I can continue with those plans - albeit 2.5 years later. I already have my half-height rack sitting in my Amazon cart, but I'm still struggling to find a case that meets my needs.
  12. 32GB dual channel, non-ECC (I hope to switch to ECC soon, as I'm about to flip to ZFS). Here is my full component list: Completed Builds - PCPartPicker. And yes, yes, I know, I don't need an AIO for this setup. But I hadn't installed one before and I had the scratch available.
  13. Interesting. Saw 40GB and my brain went "wait, what?" Dual channel I assume since you're using 2x pairs? Same model, just a different size?
  14. Nope. I did have the parity drive operational and the array online, other than the single disk that I removed from the array. That's why /mnt/user/ was my <dest> path, and I was getting those speeds. And the amount of data being transferred was 6x the size of my SSD cache disk. My signature has the drives I use, which are enterprise Seagate Exos disks. I guess their on-disk cache is able to handle writes a bit more efficiently than more commodity-grade drives? /shrug - but 240MB/s is about the maximum for spinning disks without cache, and I assume the writes were sequential.

      Oh, that's interesting (Turbo Write Mode). That probably would have been beneficial for me! But I got it done in the end after ~9 hours. Of course, as a creature of habit, I re-ran the rsync one more time before wiping the source disk to ensure everything was indeed transferred.

      I didn't measure IOPS, but I'm sure they would have been pretty high. I just finished benchmarking a 1.4PB all-flash array from Vast Storage at work and became pretty enamored with the tool elbencho (a pseudo mix of fio, dd and iozone, with sequential and random options - and graphs!), and after spending basically 4 weeks in spreadsheet hell I wasn't too interested in IOPS - I just needed the data off the drive as quickly as possible. That said, making 16 simultaneous TCP connections to an SMB share and seeing a fully saturated 100GbE line reading/writing to storage at 10.5GB/s felt pretty awesome! For anyone interested in disk benchmarking tools, I highly recommend elbencho as another tool in your toolkit. The maintainer even compiles a native Windows binary with each release. Take a look! breuner/elbencho: A distributed storage benchmark for file systems, object stores & block devices with support for GPUs (github.com)

      Oh certainly, and yes, I knew there was a performance hit using the unraidfs (for lack of a better term) configuration/setup. And agreed too, eschewing the UnRAID array entirely and hacking it at the CLI to set up ZFS is a route one could take. But at that point, it would be better to just spin up a Linux host and do it yourself without UnRAID, or switch over to TrueNAS. The biggest draw for me to UnRAID was/is the ability to easily mix/match drive sizes and types into one cohesive environment. I guess I was really just documenting how I was able to achieve what the OP was trying to do, for future people stumbling across the thread via Google (because that's how I found this post).
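      For anyone doing the same kind of drive evacuation, that "one last pass before wiping" step can be made explicit with an rsync dry run. A minimal sketch, with the source disk and destination paths as placeholders:

      # -a archive, -v verbose, -h human-readable, -n dry run: list anything not yet copied without transferring it
      rsync -avhn --stats /mnt/disk3/ /mnt/user/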
  15. No need to apologize! DEFINITELY appreciate the work you've done! I was probably a bit too discerning myself. I've just seen soooo many apps on CA sit there in various states of functionality (even 'Official' ones) so I kind of soapboxed a bit. This thread was probably the wrong place for it. Your work here is great and I actually use your contribution for Homarr in CA every day! Cheers