rutherford

Members
  • Posts

    664
  • Joined

  • Last visited

Everything posted by rutherford

  1. I'm really struggling with postgres15 vs what I'm used to, old mysql. I'm used to a database, and you grant all privileges to a user on that database. But now, there are RELATIONS and SCHEMA and it's very confusing. I was getting something about VECTOR errors (which I think the fix above from @Nirvash addresses), but I've managed to get more basic, not even connecting successfully to the database. I'm here as a circus act: I make all the mistakes, record them, and hopefully cut down on the other circus acts out there.
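On the Postgres 15 vs. MySQL point above: since version 15, ordinary users no longer get CREATE on the `public` schema, so a MySQL-style GRANT on the database alone isn't enough; you also need a grant on the schema itself. A minimal sketch (database/user names are borrowed from the Immich posts further down, not prescribed; run as the postgres superuser):

```shell
psql -U postgres <<'SQL'
CREATE DATABASE immichdb;
CREATE USER immichuser WITH ENCRYPTED PASSWORD 'immichpass';
GRANT ALL PRIVILEGES ON DATABASE immichdb TO immichuser;
-- connect to the new database, then grant on its schema
\c immichdb
-- new requirement as of Postgres 15: without this, the user can
-- connect but cannot create tables in public
GRANT ALL ON SCHEMA public TO immichuser;
SQL
```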
  2. Mind if I post this in English over at that other thread? Hmmm, this didn't work; I assumed the spaces were not supposed to be in there.
  3. I'm doing a large copy, 18TB. I was thinking it would be fastest to take the parity drive out of the array, do all the writes, then rebuild the parity. Hmm, looks like even at 110MB/s it will take 1 day 21 hours; that's not too bad. Any other advice there? thanks!!
     Derp, 110MB/s is the saturation point for gigabit ethernet. Looks like it's a hardware swap that needs to happen, 10G ethernet, to make it any faster. 1.8 days it is! Can do.
     Oops, I need to turn my cache drive off. This will change things again. "Turbo write mode": Settings > Disk Settings > Tunable (md_write_method); options are Auto, read/modify/write, or reconstruct write. Reconstruct write is the "turbo mode". I'll do a short test without the cache drive, see where we land.
     https://wintelguy.com/transfertimecalc.pl
     https://www.gbmb.org/mbps-to-mbs
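The back-of-envelope math above checks out. As a quick sketch (decimal units, assuming a sustained 110 MB/s, the practical ceiling of gigabit ethernet):

```shell
# 18 TB expressed in MB (decimal, as drive vendors count)
SIZE_MB=$((18 * 1000 * 1000))
# assumed sustained throughput in MB/s
RATE_MBS=110
SECONDS_TOTAL=$((SIZE_MB / RATE_MBS))
HOURS=$((SECONDS_TOTAL / 3600))
echo "$HOURS hours"   # prints: 45 hours
```

45 hours is 1 day 21 hours, i.e. about 1.9 days, matching the estimate in the post.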
  4. https://docs.unraid.net/unraid-os/manual/storage-management/#array-write-modes
  5. Under Settings > DateTime, I suggest you add a setting to put the date in a format we like: yyyy-mm-dd, mm/dd/yy, dd-mm-yyyy, whatever. I'm pretty sure these are both the 1st and the 4th of December 2023, but with the current format it's hard to tell.
  6. This has happened several times for this drive, 2-3 times now I think. I'll repair this drive and swap it out later. Thanks @itimpi !
  7. I have a new drive ready to go, but won't get around to it until Monday evening. I also want to be sure I replace the right drive! It's saying XFS (md5p1), but my drives aren't named quite like that; they're all sdd, sdh, sdi, etc.
     Dec 3 21:05:57 rubble kernel: XFS (md5p1): Unmount and run xfs_repair
     Dec 3 21:05:57 rubble kernel: XFS (md5p1): Unmount and run xfs_repair
     New (used, quality eBay) SATA cables: check. And a few other threads all relate to this, I think:
     https://forums.unraid.net/topic/144440-is-this-hard-drive-dying-thanks/#comment-1301047
     https://forums.unraid.net/topic/147430-somethings-dying-disk5-sdc/#comment-1323148
     https://forums.unraid.net/topic/146790-mini-sas-to-sata-cable/#comment-1318730
     thanks! rubble-diagnostics-20231203-2102.zip
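For anyone else matching the `XFS (md5p1)` name in the log to a physical drive before a swap: on Unraid, md5p1 is simply partition 1 of array Disk 5, and the Main page shows which sdX device and serial number Disk 5 maps to. A sketch for cross-checking from the console (device names here are examples, not from the diagnostics):

```shell
# List serial-number symlinks and the sdX devices they point at,
# then compare against the serial shown for Disk 5 on the Main page
# before pulling any drive.
ls -l /dev/disk/by-id/ata-*
```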
  8. I've been getting a Fix Common Problems note about the docker URL not matching what the author specified. Anyone else have any ideas about that? Docker > Roonserver > advanced: https://hub.docker.com/r/steefdebruijn/docker-roonserver/ is what I have.
  9. So, new complication: our Immich docker now says it requires PostgreSQL 15. Yay. Got postgresql15 installed with the following settings; you can see I changed the port so it's available in parallel with the other postgres14 docker. Dropped into the postgresql15 console:
     # pg_dump -h <myserverIP> -p 5432 -U postgres immichdb > immichdb5.sql
     (5432 is also where my postgresql14 port is.) I was prompted for the postgres userpass, entered it.
     postgres=# create database immichdb;
     CREATE DATABASE
     postgres=# create user immichuser with encrypted password 'immichpass';
     CREATE ROLE
     postgres=# grant all privileges on database immichdb to immichuser;
     GRANT
     Then ran this command: psql -U immichuser immichdb < immichdb5.tar. I've been getting fail after fail here.....
     Oooo, got it. I ended up installing pgAdmin4 (available in unRaid Apps, the dpage version) to manage these two database dockers. Set up connections to both, and used the Backup and Restore from inside those. The only thing I can think of that I did differently here was setting up an identical database, user, and userpass; I'm thinking those were hardcoded somewhere into the v14 database. ?? Changed the port on the Immich docker from 5432 to 5433, waited for it to come up (this always takes a couple minutes for me)... Seems to have worked. I'll take the old postgresql14 docker down, make sure it keeps working... yup. We're good to go on postgres15. Whew. I'm going to tack this problem's solution onto this reply of this thread as well, since it relates to postgres15. The post I made at github: I was getting an error in my immich log:
     2023-12-07 21:02:20.914 PST [29601] ERROR: permission denied to create extension "earthdistance"
     2023-12-07 21:02:20.914 PST [29601] HINT: Must be superuser to create this extension.
     Alright, so how to add SUPERUSER to my existing immich database user:
     postgres=# ALTER USER immichuser WITH SUPERUSER;
     ALTER ROLE
     Double-checked that it worked with \du<enter>. Sure enough, immichuser is now SUPERUSER. Errors gone.
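For anyone retracing the manual route that eventually worked via pgAdmin: a consolidated sketch of the dump-and-restore between the two containers. Ports 5432 (postgres14) and 5433 (postgres15) are the values from the post, and `<myserverIP>` is left as in the original; one likely cause of the "fail after fail" is feeding a plain-SQL dump to a restore step expecting tar format.

```shell
# Dump from the old postgres14 container (plain-SQL format by default):
pg_dump -h <myserverIP> -p 5432 -U postgres immichdb > immichdb5.sql
# A .sql dump is restored through psql; pg_restore is only for
# custom/tar-format dumps made with pg_dump -Fc or -Ft.
psql -h <myserverIP> -p 5433 -U postgres -c 'CREATE DATABASE immichdb;'
psql -h <myserverIP> -p 5433 -U postgres immichdb < immichdb5.sql
# Immich's extensions (earthdistance etc.) need superuser to install:
psql -h <myserverIP> -p 5433 -U postgres \
     -c 'ALTER USER immichuser WITH SUPERUSER;'
```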
  10. How about stability? I had... five players from various areas in the US. A few from the Geyser Switch connections, a few on Java connecting to my Purpur 1.20.2 server. Plugins are: DropHeads, floodgate, Geyser-Spigot, InfoHUD, spark and Updater. From time to time the game will glitch out, sometimes kicking players. Maybe it's my lacking server hardware? The docker status seemed alright for CPU and MEM (I forget that command in the CLI). Any suggestions on how I might make it more stable? All CPU pinning options are un-checked, in Docker settings > advanced. Defaults. thanks!
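The forgotten CLI command for per-container CPU/MEM is most likely `docker stats`; and since spark is already in the plugin list above, it can report tick health too (an assumption about what's being asked, not from the original post):

```shell
# One-shot snapshot of CPU / memory usage per running container:
docker stats --no-stream

# In the Minecraft server console (not the shell), spark can show
# ticks-per-second and help find lag sources:
#   /spark tps
#   /spark profiler
```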
  11. Oh nice! I think that did it. I saw that option in there you pointed at "destination is FAT/NTFS." Seems to be flying over there now. Worked!
  12. @alturismo thanks for getting back. Double-checking these settings here; I have made no progress.
      luckysync details:
      source: /mnt/user/nextcloud/username
      destination: /external/username_nextcloud
      Docker settings, Shares:
      /mnt/user, to /mnt/user (added Read/Write)
      /external, to /mnt/disks/TOSHIBA_EXT/
      I did add a Port/Path/Variable; I'll include a screenshot. This also depends on Unassigned Devices working: the drive I'm trying to go to is a USB external drive. On the server, /mnt/user/nextcloud is a user share on xfs drives. The attached USB drive is exFAT. Hmmm, this might be the issue; need to format it to something more usable. Got that done: we're on NTFS now on the external drive. Still weird stuff happening: it still creates all the folders, but copies no files. I'm not running luckybackup as root. I tried manually stopping the Nextcloud container, same behavior.
      before, with exFAT
      reformatted external drive to NTFS
      This does need to work. I could return the drive, get another internal drive, and use that as backup, just keeping it out of the array. I like the idea of a separate device, though. But it does need to work first!
  13. luckybackup write errors to Unassigned Devices USB drive
      I'm trying to use luckybackup to set up an rsync job from some recursive folders on my array to an external USB drive mounted via Unassigned Devices. I can send the files to a different spot on the array no problem, but when trying to write to the external USB, I get write errors. Not sure if I'm mounting it wrong, or set up the luckybackup template incorrectly? The luckybackup Docker > Console can read and write to the USB drive. You can see I've passed /mnt/ to the Docker. Paths are relative in the luckybackup Docker:
      source /mnt/user/user/nextcloud/username1
      dest /mnt/user/disks/TOSHIBA_EXT/username1_nextcloud
      rubble-diagnostics-20231119-2209.zip
  15. SOLVED: Formatted the external drive from exFAT to NTFS. Then, under the Advanced tab, I could specify that the target drive was a FAT/NTFS drive. Ticked that box; worked like a charm.
      ================================================================
      I'd like to get this working for a backup of my precious files (pictures and Word docs) to an external USB drive. I'm getting all sorts of errors: folders are being written, but no files. I have the USB drive mounted in the Docker template, added a new Container Path: /external to /mnt/disks/TOSHIBA_EXT, in "Read/Write - Slave" mode. I went to the luckybackup console, went to /external, and can create a file ("touch test.txt") and the file is there. I can remove the file; the file disappears.
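A hedged guess at what the "destination is FAT/NTFS" checkbox changes in the underlying rsync call: FAT/NTFS can't store Linux owners or permissions, and FAT timestamps have coarse (2-second) resolution, so without options like these, rsync errors on every file or re-copies everything each run. Flags and paths below are illustrative, not taken from luckybackup's actual template:

```shell
# -r recurse, -t keep modification times, -v verbose;
# --modify-window tolerates coarse FAT timestamps;
# --no-perms/--no-owner/--no-group skip metadata NTFS/FAT can't hold.
rsync -rtv --modify-window=2 \
      --no-perms --no-owner --no-group \
      /mnt/user/nextcloud/username1/ \
      /mnt/disks/TOSHIBA_EXT/username1_nextcloud/
```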
  16. Yeah, you're probably right. How about any OTHER additional backup solution? A Nextcloud mirror, maybe?
  17. Ya know how unRaid is NOT Backup!? Well, I've had corruption before, and loads of lost+found files. Total bummer. Anyhow, I'd like to back my stuff up to an external USB drive (Unassigned Devices). I use luckybackup, which is an rsync GUI, apparently. I got it set up, and it's mostly working, but I get loads of errors when running the backup on the nextcloud share. Anyone have luck with some sort of additional backup (docker, offsite, USB, etc.)? thanks https://unraid.net/blog/unraid-server-backups-with-luckybackup
  18. Does Fix Common Problems check that CA Backup is running somewhat regularly? It should. Patient was lobotomized, so I guess that's that. Thank you very much for your help @JorgeB
  19. Thanks for getting back @JorgeB. I ended up formatting, un-assigning, re-assigning, and reformatting my primary cache nvme01 drive. I restored from backup (make sure that CA Backup is working, y'all!). My SWAG custom network didn't stick through it, and I had to restore all Docker apps individually, but their templates with my old settings and their appdata were there. Whew! Thanks again.
      root@rubble:~# btrfs fi show
      Label: none  uuid: 95f77608-913b-413d-a71e-adcb24ec0978
          Total devices 1  FS bytes used 101.69GiB
          devid 1 size 931.51GiB used 117.02GiB path /dev/nvme0n1p1
      Label: none  uuid: a8a7a88b-ef16-4f67-9e1c-0a3e1f831bf2
          Total devices 1  FS bytes used 13.11GiB
          devid 1 size 40.00GiB used 14.02GiB path /dev/loop2
      warning, device 1 is missing
      Label: none  uuid: 5d4b5c1c-0209-4acf-9211-1c1d3242d14a
          Total devices 2  FS bytes used 145.23GiB
          devid 2 size 953.87GiB used 736.03GiB path /dev/sde1
          *** Some devices missing
      root@rubble:~#
      rubble-syslog-20231109-2126.zip
  20. I have btrfs drives. I was getting loads of errors on my second pool drive; it's probably smoked. Stopped the array, unassigned the second pool drive. Started the array. Whoops! I forgot to move the Cache Drives setting from 2 down to 1. Stopped the array, set Cache drives from 2 to 1. Now when I start the array, I'm getting "Unmountable: Unsupported or no file system". I went to re-add the old drive, and it said the drive would be formatted and all data on it lost. Dang it. Oh thank God, the backup system (Settings > Backup/Restore App Data): /mnt/user/backup/unraid/appdata-backup/ab_20231106_030004 <whew>!!! I need to get appdata back to run all the services etc. thanks! Tried these steps (2018 limetech post): stop array, unassign cache drive, start array, stop array, reassign cache drive, start array. Still unmountable. rubble-syslog-20231108-2036 BTRFS error.zip
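A sketch of the kind of checks worth running before pulling a btrfs pool member, assuming the pool is still mountable at /mnt/cache (the mount point and device name are examples, not from the diagnostics):

```shell
# Per-device error counters show which pool member is actually failing:
btrfs device stats /mnt/cache
# Pool membership and space usage, to confirm the data profile
# (single vs RAID1) before removing anything:
btrfs filesystem show /mnt/cache
btrfs filesystem df /mnt/cache
# With RAID1 redundancy, a member can be removed while mounted
# (example device; btrfs rebalances the data off it first):
# btrfs device remove /dev/sdX1 /mnt/cache
```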
  21. If there's nothing else going on, Mover should move files.
  22. Uninstalled Sonarr, deleted its appdata, reinstalled: fixed. Weird. <shrug>
  23. My Plex install is reporting it can't see episodes 6-9. Episodes 1-5 are spread over several drives, and eps 6-9 are all on the same Disk5. Sonarr reports the same thing: can't see those episodes and thinks they're missing. When going into Docker icon > Console, I can navigate to and see the files no problem. Krusader can see them. Main > Console can CLI over there; looks the same. Same owner, same permissions. I did recently have a full Cache drive that caused all sorts of issues. I set the Min Free Space, hopefully avoiding that in the future. Any ideas what might be causing this odd file behavior? I have restarted the server once or twice. Doesn't seem to fix anything. rubble-syslog-20231105-0512.zip
  24. Been getting some errors because my 1TB cache drive is filling up, and other services are crashing due to that. I removed several GB of data from /mnt/cache to clear up some space, and Mover seems to be doing its thing again. But I still have suspicions. I recently got two mini-SAS cables to an SSD controller replaced; I thought one of the drives might be failing. Any insights here as to what hardware might be giving me grief? I have another 8TB drive ready to deploy, and I'm thinking it would be a good idea to get a larger cache drive. Though, whatever the issue is, it probably would have filled up a 2TB cache drive anyhow. I did perform an xfs_repair -n, and then an actual repair with no modifiers on the end, via the main GUI. It created some lost+found stuff. This was Disk5. I had loads of stuff in /mnt/user/unpack; I suspect Sonarr had stacked that cache folder up with stuff and got stuck in a loop somehow. That share, /mnt/user/unpack, was set to Cache. Once I cleared that junk out, the cache usage went back to normal and the Mover was able to complete. Hmmmm, something is being naughty. thanks! rubble-syslog-20231104-0753.zip
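The repair sequence described above, sketched as the equivalent CLI steps. This assumes the array is started in maintenance mode so the filesystem is unmounted; md5p1 matches the Disk5 device naming from the earlier log excerpts:

```shell
# Dry run first: report what would be fixed, change nothing.
xfs_repair -n /dev/md5p1
# Actual repair ("no modifiers on the end"); files whose directory
# entries were lost get moved into lost+found at the top of the disk.
xfs_repair /dev/md5p1
```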