Have I messed up my cache data?


Solved by trurl.


Hello all,

Because of SMART errors I had to replace my cache SSD (part of a two-SSD pool). As I was not impressed by the SSD's lifetime, I decided to swap both SSDs of that brand for new ones.

 

I also made a post on the Unraid subreddit.

 

What I have done:

  • Stop all Docker containers (no VMs running)
  • Set all shares to "Use cache: Yes"
  • Start the mover with logging enabled
    • confirm that my files were moved to the array by looking at the logs
    • confirm by looking at the new storage utilization (just a couple of MBs left on the cache; see the sketch after this list)
  • Stop the array
  • Power off the machine
  • Swap both SSDs
  • Power on the machine
  • Start the array
  • Set all shares back to my previous cache settings
  • Discover that my Docker containers had started automatically, so power them off again
  • Start the mover to move files from the array back to the cache
    • confirm in the logs that the mover was working
  • Discover that my cache pool was not ready because of missing UUIDs and had to be formatted
  • Format my cache pool
    • cache pool is ready
    • and empty ...
  • Start the mover again
    • no files to be moved
  • no data on the cache
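
For reference, the cache utilization can also be checked from the shell. A minimal sketch, assuming the pool is mounted at the standard Unraid location /mnt/cache:

    # Show how much space is still in use on the cache pool
    df -h /mnt/cache

    # List any files the mover left behind
    find /mnt/cache -type f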

 

So, my question is: where is my cache data? And how could the mover move files off the array when there was no cache pool ready to receive them?

 

Thanks for any help

 

Here are the relevant logs:

 

First, you can see the last lines of the mover moving files from the array to, well, a non-existent cache pool. After that, you can see the logs of me formatting the new SSDs and creating the cache pool. Then I started the mover again, but it moved only a couple of files.
 

Mar 4 14:55:21 unraid move: move: file /mnt/disk1/appdata/prometheus/data/01FXA2GM7D7R1VJ451ZV3P8AQQ/index
Mar 4 14:55:21 unraid move: move: file /mnt/disk1/appdata/prometheus/data/01FXA2GM7D7R1VJ451ZV3P8AQQ/meta.json
Mar 4 14:55:21 unraid move: move: file /mnt/disk1/appdata/prometheus/data/01FXA2GM7D7R1VJ451ZV3P8AQQ/tombstones
Mar 4 14:55:21 unraid move: move: file /mnt/disk1/appdata/prometheus/data/01FXA9CAWBMT8H9SR9ZQ3MAHZA/chunks/000001
Mar 4 14:55:21 unraid move: move: file /mnt/disk1/appdata/prometheus/data/01FXA9CAWBMT8H9SR9ZQ3MAHZA/index
Mar 4 14:55:21 unraid move: move: file /mnt/disk1/appdata/prometheus/data/01FXA9CAWBMT8H9SR9ZQ3MAHZA/meta.json
Mar 4 14:55:21 unraid move: move: file /mnt/disk1/appdata/prometheus/data/01FXA9CAWBMT8H9SR9ZQ3MAHZA/tombstones
Mar 4 14:55:21 unraid move: move: file /mnt/disk1/appdata/prometheus/etc/prometheus.yml
Mar 4 14:55:21 unraid move: move: file /mnt/disk1/appdata/grafana/grafana.db
Mar 4 14:55:21 unraid move: move: skip /mnt/disk1/system/docker/docker.img
Mar 4 14:55:21 unraid move: move: skip /mnt/disk1/system/libvirt/libvirt.img
Mar 4 14:55:21 unraid root: mover: finished
Mar 4 15:58:06 unraid kernel: mdcmd (42): nocheck pause
Mar 4 15:58:08 unraid emhttpd: shcmd (915): /sbin/wipefs -a /dev/sdf
Mar 4 15:58:08 unraid emhttpd: writing MBR on disk (sdf) with partition 1 offset 2048, erased: 0
Mar 4 15:58:08 unraid emhttpd: re-reading (sdf) partition table
Mar 4 15:58:09 unraid emhttpd: shcmd (916): udevadm settle
Mar 4 15:58:09 unraid kernel: sdf: sdf1
Mar 4 15:58:09 unraid emhttpd: shcmd (917): /sbin/wipefs -a /dev/sdf1
Mar 4 15:58:09 unraid emhttpd: shcmd (918): mkfs.btrfs -f /dev/sdf1
Mar 4 15:58:09 unraid root: btrfs-progs v5.10
Mar 4 15:58:09 unraid root: See http://btrfs.wiki.kernel.org for more information.
Mar 4 15:58:09 unraid root:
Mar 4 15:58:09 unraid root: Detected a SSD, turning off metadata duplication. Mkfs with -m dup if you want to force metadata duplication.
Mar 4 15:58:09 unraid root: Label: (null)
Mar 4 15:58:09 unraid root: UUID: c7f97bd7-04a0-42e5-bf7e-90910244f953
Mar 4 15:58:09 unraid root: Node size: 16384
Mar 4 15:58:09 unraid root: Sector size: 4096
Mar 4 15:58:09 unraid root: Filesystem size: 465.76GiB
Mar 4 15:58:09 unraid root: Block group profiles:
Mar 4 15:58:09 unraid root: Data: single 8.00MiB
Mar 4 15:58:09 unraid root: Metadata: single 8.00MiB
Mar 4 15:58:09 unraid root: System: single 4.00MiB
Mar 4 15:58:09 unraid root: SSD detected: yes
Mar 4 15:58:09 unraid root: Incompat features: extref, skinny-metadata
Mar 4 15:58:09 unraid root: Runtime features:
Mar 4 15:58:09 unraid root: Checksum: crc32c
Mar 4 15:58:09 unraid root: Number of devices: 1
Mar 4 15:58:09 unraid root: Devices:
Mar 4 15:58:09 unraid root: ID SIZE PATH
Mar 4 15:58:09 unraid root: 1 465.76GiB /dev/sdf1
Mar 4 15:58:09 unraid root:
Mar 4 15:58:09 unraid kernel: BTRFS: device fsid c7f97bd7-04a0-42e5-bf7e-90910244f953 devid 1 transid 5 /dev/sdf1 scanned by mkfs.btrfs (2509)
Mar 4 15:58:09 unraid emhttpd: shcmd (919): mkdir -p /mnt/cache
Mar 4 15:58:09 unraid emhttpd: /mnt/cache uuid: c7f97bd7-04a0-42e5-bf7e-90910244f953
Mar 4 15:58:09 unraid emhttpd: /mnt/cache TotDevices: 1
Mar 4 15:58:09 unraid emhttpd: /mnt/cache NumDevices: 2
Mar 4 15:58:09 unraid emhttpd: /mnt/cache NumFound: 1
Mar 4 15:58:09 unraid emhttpd: /mnt/cache NumMissing: 0
Mar 4 15:58:09 unraid emhttpd: /mnt/cache NumMisplaced: 0
Mar 4 15:58:09 unraid emhttpd: /mnt/cache NumExtra: 1
Mar 4 15:58:09 unraid emhttpd: /mnt/cache LuksState: 0
Mar 4 15:58:09 unraid emhttpd: shcmd (920): mount -t btrfs -o noatime,space_cache=v2,discard=async -U c7f97bd7-04a0-42e5-bf7e-90910244f953 /mnt/cache
Mar 4 15:58:09 unraid kernel: BTRFS info (device sdf1): enabling free space tree
Mar 4 15:58:09 unraid kernel: BTRFS info (device sdf1): turning on async discard
Mar 4 15:58:09 unraid kernel: BTRFS info (device sdf1): using free space tree
Mar 4 15:58:09 unraid kernel: BTRFS info (device sdf1): has skinny extents
Mar 4 15:58:09 unraid kernel: BTRFS info (device sdf1): flagging fs with big metadata feature
Mar 4 15:58:09 unraid kernel: BTRFS info (device sdf1): enabling ssd optimizations
Mar 4 15:58:09 unraid kernel: BTRFS info (device sdf1): creating free space tree
Mar 4 15:58:09 unraid kernel: BTRFS info (device sdf1): setting compat-ro feature flag for FREE_SPACE_TREE (0x1)
Mar 4 15:58:09 unraid kernel: BTRFS info (device sdf1): setting compat-ro feature flag for FREE_SPACE_TREE_VALID (0x2)
Mar 4 15:58:09 unraid kernel: BTRFS info (device sdf1): checking UUID tree
Mar 4 15:58:09 unraid emhttpd: shcmd (921): /sbin/wipefs -a /dev/sdg
Mar 4 15:58:09 unraid emhttpd: writing MBR on disk (sdg) with partition 1 offset 2048, erased: 0
Mar 4 15:58:09 unraid emhttpd: re-reading (sdg) partition table
Mar 4 15:58:10 unraid emhttpd: shcmd (922): udevadm settle
Mar 4 15:58:10 unraid kernel: sdg: sdg1
Mar 4 15:58:10 unraid emhttpd: shcmd (923): /sbin/btrfs device add -f /dev/sdg1 /mnt/cache
Mar 4 15:58:10 unraid kernel: BTRFS info (device sdf1): disk added /dev/sdg1
Mar 4 15:58:10 unraid emhttpd: shcmd (924): /sbin/btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache && /sbin/btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt/cache &
Mar 4 15:58:10 unraid kernel: BTRFS info (device sdf1): balance: start -dconvert=raid1 -mconvert=raid1 -sconvert=raid1
Mar 4 15:58:10 unraid kernel: BTRFS info (device sdf1): relocating block group 13631488 flags data
Mar 4 15:58:10 unraid kernel: BTRFS info (device sdf1): relocating block group 5242880 flags metadata
Mar 4 15:58:10 unraid kernel: BTRFS info (device sdf1): found 3 extents, stage: move data extents
Mar 4 15:58:10 unraid kernel: BTRFS info (device sdf1): relocating block group 1048576 flags system
Mar 4 15:58:10 unraid kernel: BTRFS info (device sdf1): balance: ended with status: 0
Mar 4 15:58:10 unraid kernel: BTRFS info (device sdf1): balance: start -dconvert=raid1,soft -mconvert=raid1,soft -sconvert=raid1,soft
Mar 4 15:58:10 unraid kernel: BTRFS info (device sdf1): balance: ended with status: 0
Mar 4 15:58:12 unraid emhttpd: Starting services...
Mar 4 15:58:12 unraid emhttpd: shcmd (926): chmod 0777 '/mnt/cache'
Mar 4 15:58:12 unraid emhttpd: shcmd (927): chown 'nobody':'users' '/mnt/cache'
Mar 4 15:58:12 unraid emhttpd: shcmd (928): /etc/rc.d/rc.samba restart
Mar 4 15:58:12 unraid nmbd[25500]: [2022/03/04 15:58:12.642949, 0] ../../source3/nmbd/nmbd.c:59(terminate)
Mar 4 15:58:12 unraid winbindd[25510]: [2022/03/04 15:58:12.642967, 0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
Mar 4 15:58:12 unraid nmbd[25500]: Got SIGTERM: going down...
Mar 4 15:58:12 unraid winbindd[25510]: Got sig[15] terminate (is_parent=1)
Mar 4 15:58:12 unraid winbindd[25513]: [2022/03/04 15:58:12.642981, 0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
Mar 4 15:58:12 unraid winbindd[25513]: Got sig[15] terminate (is_parent=0)
Mar 4 15:58:12 unraid winbindd[27880]: [2022/03/04 15:58:12.643011, 0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
Mar 4 15:58:12 unraid winbindd[27880]: Got sig[15] terminate (is_parent=0)
Mar 4 15:58:14 unraid root: Starting Samba: /usr/sbin/smbd -D
Mar 4 15:58:14 unraid root: /usr/sbin/nmbd -D
Mar 4 15:58:14 unraid smbd[2583]: [2022/03/04 15:58:14.872234, 0] ../../lib/util/become_daemon.c:135(daemon_ready)
Mar 4 15:58:14 unraid smbd[2583]: daemon_ready: daemon 'smbd' finished starting up and ready to serve connections
Mar 4 15:58:14 unraid root: /usr/sbin/wsdd
Mar 4 15:58:14 unraid nmbd[2588]: [2022/03/04 15:58:14.881891, 0] ../../lib/util/become_daemon.c:135(daemon_ready)
Mar 4 15:58:14 unraid nmbd[2588]: daemon_ready: daemon 'nmbd' finished starting up and ready to serve connections
Mar 4 15:58:14 unraid root: /usr/sbin/winbindd -D
Mar 4 15:58:14 unraid winbindd[2598]: [2022/03/04 15:58:14.929178, 0] ../../source3/winbindd/winbindd_cache.c:3203(initialize_winbindd_cache)
Mar 4 15:58:14 unraid winbindd[2598]: initialize_winbindd_cache: clearing cache and re-creating with version number 2
Mar 4 15:58:14 unraid winbindd[2598]: [2022/03/04 15:58:14.930202, 0] ../../lib/util/become_daemon.c:135(daemon_ready)
Mar 4 15:58:14 unraid winbindd[2598]: daemon_ready: daemon 'winbindd' finished starting up and ready to serve connections
Mar 4 15:58:15 unraid kernel: mdcmd (43): check resume
Mar 4 15:58:26 unraid emhttpd: shcmd (936): /usr/local/sbin/mover |& logger &
Mar 4 15:58:26 unraid root: mover: started
Mar 4 15:58:26 unraid move: move: file /mnt/disk1/Fuji/._2021 Weihnachten bei Michael und Gana
Mar 4 15:58:26 unraid move: move: file /mnt/disk1/appdata/krusader/.vnc/3d0437b23916:99.pid
Mar 4 15:58:26 unraid move: move: file /mnt/disk1/appdata/krusader/.vnc/3d0437b23916:99.log
Mar 4 15:58:26 unraid move: move: file /mnt/disk1/appdata/krusader/.cache/mesa_shader_cache/index
Mar 4 15:58:26 unraid move: move: file /mnt/disk1/appdata/krusader/.cache/fontconfig/CACHEDIR.TAG
Mar 4 15:58:26 unraid move: move: file /mnt/disk1/appdata/krusader/.cache/fontconfig/b96a1bfe-1065-41b0-a55f-721c0249a52e-le64.cache-7
Mar 4 15:58:26 unraid move: move: file /mnt/disk1/appdata/krusader/.cache/fontconfig/061407fd-ac06-4877-9889-b7654b3af8de-le64.cache-7
Mar 4 15:58:26 unraid move: move: file /mnt/disk1/appdata/krusader/.cache/fontconfig/81c7465c-0010-4ceb-8997-a827f5c28fa6-le64.cache-7
Mar 4 15:58:26 unraid move: move: file /mnt/disk1/appdata/krusader/.cache/icon-cache.kcache
Mar 4 15:58:26 unraid move: move: file /mnt/disk1/appdata/krusader/.dbus/session-bus/ad432f2cb0c28bbcd1f87a20199237f9-99
Mar 4 15:58:26 unraid move: move: file /mnt/disk1/appdata/krusader/.Xauthority
Mar 4 15:58:26 unraid move: move: skip /mnt/disk1/system/docker/docker.img
Mar 4 15:58:26 unraid move: move: skip /mnt/disk1/system/libvirt/libvirt.img
Mar 4 15:58:26 unraid root: mover: finished
Mar 4 15:58:37 unraid nmbd[2588]: [2022/03/04 15:58:37.923921, 0] ../../source3/nmbd/nmbd_become_lmb.c:397(become_local_master_stage2)
Mar 4 15:58:37 unraid nmbd[2588]: *****
Mar 4 15:58:37 unraid nmbd[2588]:
Mar 4 15:58:37 unraid nmbd[2588]: Samba name server UNRAID is now a local master browser for workgroup WORKGROUP on subnet 172.17.0.1
Mar 4 15:58:37 unraid nmbd[2588]:
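
Stripped of timestamps, the pool creation that emhttpd performed (the shcmd lines above) boils down to this sequence:

    # Wipe and partition the first SSD, then create a single-device btrfs filesystem on it
    /sbin/wipefs -a /dev/sdf
    mkfs.btrfs -f /dev/sdf1
    mount -t btrfs -o noatime,space_cache=v2,discard=async -U c7f97bd7-04a0-42e5-bf7e-90910244f953 /mnt/cache

    # Add the second SSD and convert data and metadata to the raid1 profile
    /sbin/btrfs device add -f /dev/sdg1 /mnt/cache
    /sbin/btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache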

 

Solution
12 minutes ago, Allaman said:

Stop all Docker containers

You should have disabled Docker in Settings. Not only would that have prevented the containers from autostarting, it would also have kept the Docker service from starting and recreating the folders specified in your docker.cfg. This is probably how you wound up with a path to cache even though you didn't have a cache pool yet. And that path would have been created in rootfs (RAM), since there was no mounted storage at /mnt/cache.

 

And then the mover would have moved all that data to that path in rootfs, possibly filling it, and rootfs is where the OS lives. Maybe your system won't even work well enough now to gather those diagnostics I asked for.
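
A quick way to tell the two situations apart, assuming the standard mount point from the logs above: a mounted pool is its own mountpoint, while a leftover directory sits on rootfs.

    # Prints "is a mountpoint" only when a pool is actually mounted there
    mountpoint /mnt/cache

    # Shows rootfs as the backing filesystem when nothing is mounted
    df -h /mnt/cache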


Thanks @trurl for your explanation. This makes total sense and confirms my fears.

I think the data is gone, and we can consider my issue resolved. I checked my system over the last couple of hours and could not find any trace of the data. Furthermore, the RAM usage is at about 500 MB, which is far too little to still be holding my old cache data.
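
For anyone wanting to check the same thing, something along these lines shows it (rootfs on Unraid lives in RAM):

    free -h     # overall memory usage
    df -h /     # how full rootfs itself is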

Next time I will for sure disable the Docker engine first.

Thanks again

 

