jademonkee's Community Answers

  1. I know I'm digging up an old thread here, but I thought I'd chime in with something I found out yesterday: for some reason, my LSIO Nextcloud instance had the log level in config.php set to 0 (debug) rather than the default 2 (warn). I never set this myself, so I'm assuming it's a silly default set by LSIO. Since I changed it, my instance seems snappier, and photo thumbnails in this third-party Android app (https://play.google.com/store/apps/details?id=com.nkming.nc_photos.paid) load heaps quicker as I jump through the timeline. So, if anyone on the LSIO Nextcloud container is experiencing problems, double-check the log level set in your config.php. HTH
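     For reference, the log level can be changed with Nextcloud's occ tool rather than editing config.php by hand. A sketch, assuming the LSIO container is named "nextcloud" and its occ wrapper is on the path inside the container:

     ```shell
     # Set Nextcloud's log level back to the default of 2 (warn).
     # 0 = debug, 1 = info, 2 = warn, 3 = error, 4 = fatal
     docker exec -it nextcloud occ config:system:set loglevel --value=2 --type=integer
     ```

     The change lands in the same 'loglevel' key of config.php, so either route gets you the same result.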
  2. I installed it anyway and am now running "2023.09.05.31.main". Is it suitable for public consumption?
  3. Sorry, I don't know what that error means. LS.io no longer offer support through this forum, instead offering it through their Discord, so maybe try there.
  4. Dunno quite what that means. Some use Bridge, some use Host, and some use a custom network for a reverse proxy (Swag + Nextcloud).
  5. Just adding to the voices on this issue: I use MACVLAN in Docker and have had no problems with it. I have Unifi gear (a USG and 2x APs, with the controller running in Docker) and have no problems (except for it complaining that my bonded eth on the server shares an IP address). If there's anything I can do to help troubleshoot this problem (contributing to a known-working hardware list, for instance), feel free to reach out.
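     For anyone wanting to compare setups, this is the general shape of creating a macvlan Docker network; the subnet, gateway, and parent interface here are examples, not my actual values — adjust for your LAN:

     ```shell
     # Create a macvlan network bridged directly onto the physical NIC,
     # so containers get their own LAN IP addresses.
     docker network create -d macvlan \
       --subnet=192.168.1.0/24 \
       --gateway=192.168.1.1 \
       -o parent=eth0 \
       macvlan_lan
     ```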
  6. FWIW: I copied all the .err files to a new directory (just in case), then deleted all of them except the one listed in the logs at start-up. I then renamed the active log file to <filename>.err.old.
     Next, I logged into the MySQL console by opening the mariadb console via the Unraid GUI and issuing:
         mysql -uroot -p
     As per https://mariadb.com/kb/en/flush/, I then issued the command to close and reopen the .err file (basically recreating it):
         FLUSH ERROR LOGS;
     Now the .err file is only KB in size, and I have recovered 3 GB of space on the cache drive by clearing the errors that had been accumulating for a couple of years. Fingers crossed I haven't messed anything up!
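     The steps above, roughly as shell commands run from the mariadb container's console (<filename> is whichever .err file the start-up log names — I've left it as a placeholder):

     ```shell
     # Keep a copy of the old error logs first, just in case
     mkdir -p /config/databases/err-backup
     cp /config/databases/*.err /config/databases/err-backup/

     # Rename the active log, then ask MariaDB to close and reopen
     # (i.e. recreate) its error log file
     mv /config/databases/<filename>.err /config/databases/<filename>.err.old
     mysql -uroot -p -e "FLUSH ERROR LOGS;"
     ```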
  7. I Googled the error this morning, and it seems to be a problem caused by a poor upgrade between MariaDB versions (i.e. the container updated the version, but there were manual steps needed inside the container that I was not aware of). This thread shed some light on it: https://github.com/photoprism/photoprism/issues/2382
     Specifically, I ran the following command, and now the error isn't constantly spamming the .err log files:
         mysql_upgrade --user=root --password=<root_pwd>
     Does anyone know if I can just delete all the .err files from the folder now?
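     One way to confirm the upgrade took (my own assumption, not something from the linked thread) is to check the table definition afterwards and make sure the column now matches what the error message expected:

     ```shell
     # After mysql_upgrade, the 'hist_type' enum should now include 'JSON_HB'
     mysql -uroot -p -e "SHOW CREATE TABLE mysql.column_stats\G" | grep hist_type
     ```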
  8. Hi there, I remember that after this image was rebased to Alpine, my start-up log (the one accessed by clicking the mariadb icon in the Unraid Docker UI) started producing the following each time it started:
         [custom-init] No custom files found, skipping...
         230720 11:04:18 mysqld_safe Logging to '/config/databases/98d77ae0f2c7.err'.
         230720 11:04:18 mysqld_safe Starting mariadbd daemon with databases from /config/databases
         [ls.io-init] done
     Everything seemed to work correctly, so I didn't really think much about that mention of a .err file. Fast forward a couple of years (maybe?), and I just realised that my mariadb appdata directory is about 4 GB in size, while my db backups are only about 80 MB. I was worried that the backups weren't running correctly, so I started digging around in the 'db-backup' Docker that I use to back it up. I couldn't find anything in the logs.
     Long story short: I have just under 4 GB of .err files in the mariadb appdata directory. Opening the .err file referenced in the above log, I see that it's constantly filling with:
         2023-07-20 13:43:56 362 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'hist_type' at position 9 to have type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB','JSON_HB'), found type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB').
         2023-07-20 13:43:56 362 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'histogram' at position 10 to have type longblob, found type varbinary(255).
     Over and over again. How do I fix this? The only app I use mariadb for is Nextcloud, for which I also use the LSIO Docker. Thanks for your help.
  9. Thanks, I shall then. Is the reason that they weren't automatically created as datasets that I used MC (disk share to disk share) to move the shares to the pool, instead of the Mover? (Just so that I don't make a similar mistake again.)
  10. Hi there. I've been excited to move to ZFS for the snapshotting capabilities, so I've been following Spaceinvader One's videos and have converted my cache pool (2x 250 GB SSD), my Nextcloud share (2x 2 TB SSD), and my "Backup" disk in the array (2 TB HDD) to ZFS. I've installed the ZFS Master plugin and was hoping to start creating snapshots as my main backup, then using 'zfs send' to back them up to the ZFS disk in the array (replacing my current rsync script).
      However, I note in the ZFS Master plugin that, while my cache drive and Backup drive both have all of their top-level shares as datasets, the 2 TB SSD pool only contains folders. Why is this? (See below for the methodology I used to convert the pool.) And what's the best way to convert the folders into datasets? Should I use the Mover to copy the contents of each share in the pool to the array, then back again? (Will this automatically make the shares in the pool datasets?) Diagnostics attached, if they're required.
      More detail on how the 2 TB pool was created: the "pool" was originally a single BTRFS SSD (called 'ssd2tb') that contained three shares: one for my Nextcloud data; one for my photos; and one specifically for the photos I want streamed to a digital photo frame I have. I bought a second 2 TB SSD and formatted it as ZFS, naming it 'pool2tb'. I then used Midnight Commander to copy the contents of ssd2tb to pool2tb. Once done, I erased ssd2tb, deleted the ssd2tb pool, formatted that SSD as ZFS, and added it as a mirrored disk in 'pool2tb'.
      Does any of that explain why the 3x shares on pool2tb are folders, not datasets (while the shares on my 'cache' ZFS pool are datasets)? And what do I have to do to convert them to datasets? Many thanks for your help. percy-diagnostics-20230718-1428.zip
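      In case it helps anyone following along, this is the general shape of a manual folder-to-dataset conversion I've seen suggested (the share name 'photos' is just an example, and anything writing to the share should be stopped first):

      ```shell
      # Datasets show up in 'zfs list'; plain folders inside the pool don't
      zfs list -r pool2tb

      # Move the folder aside, create a real dataset in its place,
      # copy the data back in, then remove the temporary copy
      mv /mnt/pool2tb/photos /mnt/pool2tb/photos_tmp
      zfs create pool2tb/photos
      rsync -a /mnt/pool2tb/photos_tmp/ /mnt/pool2tb/photos/
      rm -r /mnt/pool2tb/photos_tmp
      ```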
  11. Amazing, well done JorgeB! Thanks for all your help over the years!
  12. Just reporting in that this release seems to have fixed the "Retry unmounting shares" issue for me.
  13. Thanks, that command allowed me to stop the array. I've just updated to v6.12.3-rc3 and tried stopping the array, and it worked without my having to issue the command. I'll report back if it happens again.
  14. Hi there, I recently changed my cache drive from btrfs to ZFS, and now when I try to stop the array, it gets stuck trying to unmount the disk shares. If I try to shut down the server, it has to force a shutdown. I have this in the syslog:
          Jul 12 20:10:33 Percy emhttpd: Unmounting disks...
          Jul 12 20:10:33 Percy emhttpd: shcmd (267285): /usr/sbin/zpool export cache
          Jul 12 20:10:33 Percy root: cannot unmount '/mnt/cache/appdata': pool or dataset is busy
          Jul 12 20:10:33 Percy emhttpd: shcmd (267285): exit status: 1
          Jul 12 20:10:33 Percy emhttpd: Retry unmounting disk share(s)...
      Diagnostics from a previous forced shutdown are attached (I have two others, one earlier and one just now, that I can add too if need be). I have my Dockers mapped to disk shares (including /mnt/cache/appdata/) for performance purposes, so I'm unsure if it's related to using disk shares rather than the user share. Your help is appreciated. percy-diagnostics-20230712-2010.zip
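      For anyone else hitting the "pool or dataset is busy" error, a generic Linux check (not Unraid-specific) for what's holding the mount open:

      ```shell
      # List processes with open files under the mount point
      fuser -vm /mnt/cache/appdata

      # Alternatively, via lsof (can be slow on large trees)
      lsof +D /mnt/cache/appdata 2>/dev/null | head
      ```

      Whatever shows up there (often a container that hasn't fully stopped) is what's blocking the zpool export.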
  15. Just a guess: are there any port conflicts with other Dockers?
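      A quick way to check for a port clash on the host (port 8080 here is just an example):

      ```shell
      # Show which process, if any, is already listening on the port
      ss -tlnp | grep ':8080 '
      ```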