SimplifyAndAddCoffee

Everything posted by SimplifyAndAddCoffee

  1. Need help with PhotoPrism. I was trying to update the container and added: Container Variable: PHOTOPRISM_SITE_URL=mydomain.com and Container Variable: PHOTOPRISM_DISABLE_CHOWN=false. I also deleted the default admin password variable, since I had already configured the admin account on the server and didn't want it overwritten... however, now when I start the container the site no longer loads, and I get a splash screen that just says "Not Found" over the PhotoPrism logo. The console complained about the admin password not being set and the account failing to initialize, so I tried adding back PHOTOPRISM_ADMIN_PASSWORD=MyAdminPassword, but the site still won't load. What did I break and how can I fix it? EDIT: after changing the SITE_URL to https://mydomain.com the site loads correctly again, however the external URL used in links, while different, is still not correct... before adding the SITE_URL variable, generated share links used the TLD photoprism.me:port, but now, after setting it to https://mydomain.com, links are generated as localhost:port... how can I set this variable so that links work correctly?
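     For reference, this is how the template variables stand after the edit above (mydomain.com stands in for my actual domain):
        # Unraid docker template variables on the photoprism container, as currently set
        PHOTOPRISM_SITE_URL=https://mydomain.com      # bare "mydomain.com" without the scheme gave the "Not Found" splash; share links now come out as localhost:port
        PHOTOPRISM_DISABLE_CHOWN=false
        PHOTOPRISM_ADMIN_PASSWORD=MyAdminPassword     # re-added after the warning about the admin account not initializing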
  2. Hi, I have a dokuwiki install on an older server and I am trying to migrate the data to a new server. I can copy the appdata folder, but I get access denied on /config/keys/cert.key, and nginx then hangs on launch due to the key mismatch. What is the process for migrating this to a new server successfully?
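     For context, one way I've tried the copy, roughly (hostname and appdata path are just examples), and the file it chokes on:
        # pull the dokuwiki appdata from the old server, preserving permissions and ownership
        rsync -avh root@oldserver:/mnt/user/appdata/dokuwiki/ /mnt/user/appdata/dokuwiki/
        # the copy reports "Permission denied" on this one file, and nginx then hangs on the key mismatch:
        #   /config/keys/cert.key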
  3. Hi, after installing this plugin I can no longer rearrange the order of any docker containers outside of the folder(s), or reorder the folders themselves. The problem also persists after uninstalling the plugin.
  4. If possible, I would like to port my Windows Server VM from UNRAID to my new ESXi host (now that I am virtualizing UNRAID) without needing to re-license Server 2016. Is there a way to do this?
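     In case it matters: the disk conversion side seems straightforward, something like the below (paths are examples from my setup); my real question is whether the Server 2016 license/activation will survive the move to ESXi without a re-license.
        # convert the raw KVM vdisk from Unraid into a VMDK that ESXi can work with
        qemu-img convert -p -f raw -O vmdk /mnt/user/domains/WinServer/vdisk1.img /mnt/user/isos/WinServer.vmdk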
  5. I am currently running a Quadro M2000 and am finding it is barely adequate for a single transcoding stream, and it doesn't have enough memory to even attempt running things like Stable Diffusion. I need something I can replace it with that has at least 8GB of VRAM and can handle 2 or more simultaneous 4K transcodes between HEVC and H.264 (MP4). The catch is that it needs to operate solely on the power it gets from the PCIe slot. Are there options out there, or am I SOL?
  6. I'm going to try to condense this down as much as possible without leaving out anything important...
     I have a media share at /mnt/user/media, set as follows:
       Use cache pool: yes
       Cache pool: cache
       CoW: auto
       Export: yes
       Security: Private
       User access: R/W
     I recently moved all my data from an old UNRAID server, using an SMB share on the old server as a mount point and pulling the data with:
       rsync -avh "/mnt/remotes/$OLDSERVER/media/" "/mnt/user/media" --progress --stats --iconv=.
     I then did some other stuff, setting up and configuring my media managers etc... it all mostly behaved as expected, although the *arr apps did seem to struggle with some files that did not have valid filenames. I remediated those manually with mv $BADNAME $GOODNAME in the console.
     Now the trouble begins... when I browse the media share from UNRAID (the share browser, the root console, or a docker console with a mount point there), I can see 800+ files in /mnt/user/media/video/movies.
     Expected behavior: I see the same when mounting the SMB share from Windows.
     Observed behavior: I only see the ~100 files that were created on the share *after* the data migration.
     Troubleshooting steps already taken:
       - Set Windows to display hidden and system files -- no change
       - Ran Docker Safe New Perms -- no change
       - Disabled docker and VMs, rebooted the server, disabled SMB, started and mounted the array, ran New Permissions on all shares except appdata, dismounted and stopped the array, re-enabled SMB, started and mounted the array again, removed and recreated the drive mapping in Windows -- no change
       - Tried from another computer -- no change
     laforge-diagnostics-20230113-2019.zip
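     If it helps narrow things down, would it be worth comparing what the filesystem shows against what Samba itself serves for the same directory? Something like this (the username is a placeholder):
        # count entries as seen on disk (this is where I see the 800+ files)
        ls /mnt/user/media/video/movies | wc -l
        # count entries as served by Samba for the same path
        smbclient //localhost/media -U myuser -c 'cd video/movies; ls' | wc -l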
  7. I have both docker and VMs disabled, and rebooted the server without restarting docker, yet there are docker files on /mnt/disk3/system which will not move to the cache drive when the mover is invoked. One of my dockers is taking an unusually long time to start up, and I'd like to eliminate this as the cause by getting this all onto the cache drive. Is there any way to force the files to move?
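     For reference, this is the kind of manual move I'm considering if the mover just won't touch them (docker and VMs still stopped; assuming the system share is set to prefer the cache pool):
        # copy the leftover files from disk3 straight to the cache pool, removing the source copies as they transfer
        rsync -avh --remove-source-files /mnt/disk3/system/ /mnt/cache/system/
        # clean up any empty directories left behind on disk3
        find /mnt/disk3/system -type d -empty -delete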
  8. I got this working after running Docker Safe New Perms under Tools.
  9. I guess it was. Now I feel like an idiot. I should probably get some sleep.
  10. Any chance we could get nano added to this? I would really like to not have to use vi.
  11. Is there a way to export/back up and import/restore the Jellyfin database and config? I'm trying to migrate to a new server, but rsync is failing me, and I get a "database disk image is malformed" error if I try to copy stuff manually.
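     For what it's worth, this is roughly what I've been trying, plus the sanity check I was planning to run on the copy (container name and paths are from my setup; the library.db location is from memory and may be off):
        # stop the container first so the SQLite databases aren't being written to mid-copy
        docker stop jellyfin
        # copy the whole appdata/config directory to the new server
        rsync -avh /mnt/user/appdata/jellyfin/ root@newserver:/mnt/user/appdata/jellyfin/
        # sanity-check the main library database on the new server before starting the container
        sqlite3 /mnt/user/appdata/jellyfin/data/library.db "PRAGMA integrity_check;"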
  12. I'm having an issue migrating Radarr to a new server, whether I copy the config files manually or use the built-in backup/restore feature. I am trying to change the relative location of the root folder, e.g.:
      OLD: path /mnt/user/media/ = /media, root folder = /media/Movies
      NEW: path /mnt/user/media/Movies/ = /media, root folder = /media
      The problem I am having is that I cannot add or change the root folders. The log shows:
      DEBG 'radarr' stdout output: [Warn] RadarrErrorPipeline: Invalid request Validation failed: -- Path: Folder is not writable by user nobody
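     On the permissions side, is the expected fix just putting the tree back to the Unraid defaults so the container (running as nobody/users, 99:100) can write to it? Something like:
        # hand the movies tree back to nobody:users so Radarr can create/modify folders in it
        chown -R nobody:users /mnt/user/media/Movies
        chmod -R u=rwX,g=rwX,o=rX /mnt/user/media/Movies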
  13. Unraid 6.11.5. I have mounted a remote share using:
        mount -t cifs //RemoteMachine/Share /mnt/remote_shares/RemoteMachine
      After running an rsync script, I am trying to unmount the share so I can re-mount it on a different path. I get this:
        root@UNRAID:~# unmount /mnt/remote_share/RemoteMachine
        bash: unmount: command not found
  14. Gonna necro this to ask: did you ever find a solution? Does your ipmitool detect exhaust temperature, or just intake and CPU temps?
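     For context, the kind of readout I mean is just the BMC's temperature sensor list, e.g.:
        # list every temperature sensor the BMC exposes; hoping for an exhaust/outlet reading alongside inlet and CPU
        ipmitool sdr type Temperature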
  15. Hi, I am trying to decide how to manage the cache drives on my new server. I have 4x 1TB SSDs which I intend to use for protected cache. I have one or more shares which will need a very large cache pool for storing downloads on their way to the array, plus docker images and appdata which will be used regularly. I had initially planned to use 2 pools of 2x 1TB SSDs each in BTRFS RAID1, one for caching the downloads and other shares and one for docker, system, and appdata... however, I am now wondering if I might be better served by a single BTRFS RAID10 pool for all of them, in order to fully utilize all of the available drive space. Apart from the obvious benefit of being able to use more of the 2TB of space for caching downloads etc. when the docker/appdata folders are small, I want to address some concerns about the performance and fault tolerance of BTRFS RAID1 vs RAID10 pools used as cache:
      - Am I asking for trouble using BTRFS RAID10? Is there a significantly greater chance of data loss or downtime from drive failure?
      - Does RAID10 offer a clear performance benefit over RAID1 (outside of what can be gained by splitting competing services onto different pools)?
      - With BTRFS RAID1, will the RAID remain software and hardware agnostic? E.g., can you read from a single disk without the rest of the pool, like you can with the UNRAID array? Conversely, with BTRFS RAID10 I would assume you cannot... but can you at least still rebuild the pool without taking it offline?
      Thanks in advance.
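     For my own notes, the profile switch itself looks simple enough either way; something like this is what I'd expect to run if I go the RAID10 route (the pool mount point is a placeholder):
        # check the current data/metadata profiles on the pool
        btrfs filesystem df /mnt/cache
        # convert an existing RAID1 pool to RAID10 in place; the pool stays mounted during the balance
        btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache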
  16. I did some searching on this issue and really didn't want to necro this old thread again just for a "me too", but this seems to be a widespread problem with these servers and I am running into it as well. I really do not want to fall back on booting legacy mode just to boot Unraid on this server, although I may go the route of using an ESXi host with Unraid as a guest OS if another option doesn't present itself. Does anyone know why syslinux is crapping its pants when trying to boot Unraid on this board? I've tried different memory configs using ECC RAM.
  17. Including the cache drive? Is there a way to lock the state of things so the original configuration can't be altered or broken in the process of migrating data to new drives? What about exporting/importing the docker containers and configurations? Plugins? Etc. If I clone the original boot drive, can it be adapted to associate new drives and pools on a new controller without breaking anything?
  18. I have an unraid server I am replacing, looking to upgrade everything. I'm moving from a J5005 with 4x 2.5" drives to a proper rack server with an 8x 3.5" backplane, and I'm upgrading my cache drive and my appdata drive pool to their own 1TB SSDs. What is the fastest/easiest way to accomplish this? I'm willing to spend on another unraid license to attach all the drives for the build if needed. The new drives are SAS and I can't plug them into the existing server; I also can't plug all of the existing drives into the new server at one time. Is there a way to just clone all the data over onto the new drives and then export/import my config to a new USB drive? Oh yeah, the boot USB drive will also need to be replaced, since the old one is a custom-made Disk-on-Module and I don't have a USB header on the new mobo to plug it into. Also, I'm going from a bare metal install of unraid to a guest VM on ESXi. What's the order of operations here to do all this without risking data loss from screwing it up?
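     If it changes the answer: I was assuming the data copy itself would happen over the network with both boxes running, roughly like this per share (the hostname is a placeholder):
        # pull a share from the old server onto the new array, preserving attributes
        rsync -avh --progress root@oldserver:/mnt/user/media/ /mnt/user/media/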
  19. Hi, I am looking to build a new server box with a dedicated GPU for hardware accelerated video transcoding on Jellyfin. What GPUs should I be looking at for best performance, compatibility, and lowest energy requirements (at idle)?
  20. I am looking for the best way to implement a full bidirectional sync between one or more Windows PCs and the server, using something like osync/bsync or other rsync-based scripts. The goal is to have any changes to files on the server or client side be reflected immediately on the other, but with the server side never deleting files and keeping soft backups of changed files. Ideally I'd also like to be able to sync just select directories with PCs that may not have enough storage for all of it. Is there any way to set up something like this?
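     To illustrate the behavior I'm after, here it is approximated with plain rsync run from the server against a mounted share of the PC (not a real bidirectional sync, and the hostnames/paths are placeholders):
        # PC -> server: never delete on the server, keep a dated copy of anything that gets overwritten
        rsync -avh --backup --backup-dir=/mnt/user/backups/sync-$(date +%F) /mnt/remotes/MYPC/Documents/ /mnt/user/documents/
        # server -> PC: bring server-side changes back down (could limit this to selected subdirectories for PCs with less storage)
        rsync -avh /mnt/user/documents/ /mnt/remotes/MYPC/Documents/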
  21. Array is 12TB; the most-full disk has 2TB free. The backup size is less than 40GB, and it starts running but fails after less than a minute every time. I've tried both high-water and most-free allocation. Cache is set to yes, and the cache drive has > 200GB free. I get no meaningful event log errors and nothing shows up in the Unraid system log. I have a diagnostics file saved; just let me know where to upload it if you need it.
  22. OK, so I was sure there was a good tutorial somewhere on the correct way to export the SSL certificate for other services to use for end-to-end encryption, but I can't seem to find it anywhere. Does anyone know the one I mean, or can you maybe just point me in the right direction?