liujason

Members
  • Posts: 57
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


liujason's Achievements

Rookie (2/14)

Reputation: 0

Community Answers: 1

  1. I installed mariadb and photoprism, and I'm currently tuning performance because latency is really high after adding 200k+ photos and videos. Following https://docs.photoprism.app/getting-started/troubleshooting/performance/ I'd like to add the `--innodb-buffer-pool-size=1G` parameter somewhere to adjust the buffer pool size. I tried it in both Extra Parameters and Post Arguments, and neither worked. Is there a `docker-compose.yml` for mariadb to update? Or where else can I add the parameter? (A config sketch follows after this list.) Thanks! Jason
  2. Ah... bonkers. All docker apps/libraries are running on the cache drive. I may have lost all the appconfig. What caused the drive to be unmountable? Is this the data corruption that happened? I thought the update only blocked the NIC, and data corruption wouldn't have occurred.
  3. Now I'm seeing my cache unmountable. (attaching diags). I have not restarted the system. "Unmountable: Wrong or no file sysem" ("system" misspelled BTW). tower-diagnostics-20220611-2205.zip
  4. Cache was mounting before updating - correct. I was doing a big cache flush (600GB or so) before the reboot following the update. Diags are unfortunately not from the first boot. I rebooted it several times wondering why I couldn't connect to the webGUI.
  5. It was 6.9.x. Sorry, I don't remember the minor version. Cache was not unmountable. Restarted the server after the update as usual.
  6. Thanks for the prompt reply! Yes, it is an HP. Is the next step to create the empty file config/modprobe.d/tg3.conf? (A quick sketch of that follows after this list.) Is there any way to verify tg3 will work with the build? (I don't want to wait until data corruption happens.) Edit: Read through this post, and it seems like the recommendation for now is to disable VT-d.
  7. I have not experienced this in the past (10 years?) with Unraid. The console boots and it looks normal (see attached), but I can't get to the web GUI. The IP address (192.168.1.3) is unreachable via ping. The other IP address (iLO) is fine, indicating the NIC is OK. Checked clients via the router; there should not be an IP conflict. How should I diagnose further? (diagnostics attached) tower-diagnostics-20220608-1442.zip
  8. Can you please elaborate? Also, should I avoid assigning "/mnt/disk#" for Docker configs as well? (I've been doing this for many years, and never realized this was wrong.)
  9. The cache disk cannot cache the cache disk itself? 😉 Is there any way to create share groups with different policies? Since I want to keep disk1 & disk2 from being mixed with the other drives/shares, is it still possible to use the cache for those two disks? (Is the solution to create a share that only spans disk1 and disk2, with the rest of the shares spread across the other disks?)
  10. Hi, I'm using the cache for user shares, but is it possible to use the cache disk for disk shares (i.e. disk1, disk2, etc.)? Some of my disks are excluded from user shares, as I don't want to "cross-pollinate" the different files, for disk management purposes. I like the write performance brought by using an SSD cache disk, but I can't seem to find a cache disk setting for disk shares. Is it possible? Thanks, Jason
  11. I don't recall seeing this problem before when the drive was RFS. I formatted the drive as XFS, and I'm trying to rsync the files back using AFP from my Mac, and I'm seeing the following error. Is this related to XFS/AFP? (A retry sketch follows after this list.)
      rsync -azv /Volumes/Disk1RAID/ /Volumes/disk1/
      BACKUP/BACKUP-SSD120G/jason/.rvm/bin/
      rsync: write failed on "/Volumes/disk1/BACKUP/BACKUP-SSD120G/jason/.rvm/src/rvm/binscripts/rvm-installer": Operation not permitted (1)
      rsync: writefd_unbuffered failed to write 4 bytes [sender]: Broken pipe (32)
      io timeout after 30 seconds -- exiting
      rsync error: timeout in data send/receive (code 30) at /SourceCache/rsync/rsync-45/rsync/io.c(164) [sender=2.6.9]
      rsync: writefd_unbuffered failed to write 114 bytes [generator]: Broken pipe (32)
      When I run the command a second time, it seems fine. I think rsync resumes from the last stopping point.
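
For item 1, a minimal sketch of one way to set the buffer pool size, assuming the linuxserver/mariadb container with its `/config` directory mapped to `/mnt/user/appdata/mariadb` and the container named `mariadb` (both the path and the name are assumptions, not confirmed by the post); that image reads extra server settings from a `custom.cnf` file in its `/config` directory, so the flag does not need to go on the container command line at all:

```
# Assumed appdata path for the container's /config mapping.
APPDATA=/mnt/user/appdata/mariadb

# Append an InnoDB buffer pool setting that mysqld reads at startup.
cat >> "$APPDATA/custom.cnf" <<'EOF'
[mysqld]
innodb_buffer_pool_size=1G
EOF

# Restart the container (name assumed to be "mariadb") so the setting takes effect.
docker restart mariadb

# Confirm the running value from inside the container.
docker exec -it mariadb mysql -uroot -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
```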
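
For item 6, a small sketch of the empty-file workaround mentioned there, assuming you are at the Unraid console (or SSH) and the flash drive is mounted at /boot:

```
# Create the modprobe.d directory on the flash drive if it doesn't exist yet.
mkdir -p /boot/config/modprobe.d

# An empty tg3.conf here tells Unraid not to load the tg3 NIC driver on boot.
touch /boot/config/modprobe.d/tg3.conf

# After the next reboot, check that the module is no longer loaded.
lsmod | grep tg3
```

Note that blocking tg3 takes the onboard NIC out of service, which is why the thread's later recommendation was to disable VT-d instead when that NIC is the only network connection.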
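
For item 11, a sketch of a more resilient re-run of the same copy using standard rsync options: `--partial` keeps partially transferred files so each retry picks up where the last one stopped, and `--timeout` raises the 30-second I/O timeout seen in the error. The paths are the ones from the post; the 300-second value is an arbitrary example.

```
# Keep retrying until rsync exits cleanly; each pass resumes what already copied.
until rsync -azv --partial --timeout=300 /Volumes/Disk1RAID/ /Volumes/disk1/; do
    echo "rsync interrupted, retrying in 10s..." >&2
    sleep 10
done
```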