I have a problem that I haven't seen in the forum so far.

On 6.8.3 I had an excessive-write problem. It's a known issue related to the btrfs partition.

I upgraded directly from 6.8.3 to beta30.

Yesterday I received new SSDs, so I set up a new pool (2 SSDs) and made it my cache pool. It stores appdata, domains, and system.

I continued to see high write rates on the loop2 device.

https://forums.unraid.net/topic/97902-getting-rid-of-high-ssd-write-rates/

So I stopped the Docker service and restarted it without the loop device, using the directory option. Now /var/lib/docker is mapped to a dedicated share.
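For anyone checking the same thing: after switching to the directory option, docker.img should no longer be attached as a loop device. A quick sanity check (the docker.img path in the comment is just the default; adjust to your setup):

```shell
# After switching Docker to the directory option, docker.img (e.g.
# /mnt/cache/docker.img) should no longer be attached as a loop device,
# and /var/lib/docker should sit directly on the share's filesystem.
losetup -a | grep docker.img    # should print nothing
findmnt -T /var/lib/docker      # shows which filesystem backs the Docker root
```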

But I continue to see excessive writes.

Here is the result of `iotop -aoP` on the Unraid server after 1 hour:

```
Total DISK READ : 0.00 B/s | Total DISK WRITE : 4.19 M/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 7.88 M/s
PID PRIO USER DISK READ DISK WRITE> SWAPIN IO COMMAND
28332 be/4 root 51.46 M 3.99 G 0.00 % 0.13 % qemu-system-x86_64 -name guest=wazo,debu~ny,resourcecontrol=deny -msg timestamp=on
27768 be/4 root 80.00 K 2.96 G 0.00 % 0.07 % qemu-system-x86_64 -name guest=Hermes,de~ny,resourcecontrol=deny -msg timestamp=on
24611 be/4 root 2.13 M 848.54 M 0.00 % 0.06 % shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=330
19819 be/4 root 100.00 K 507.45 M 0.00 % 0.03 % qemu-system-x86_64 -name guest=PiHole,de~ny,resourcecontrol=deny -msg timestamp=on
21224 be/4 root 0.00 B 218.03 M 0.00 % 0.01 % [kworker/u65:10-btrfs-endio-write]
28870 be/4 root 0.00 B 169.11 M 0.00 % 0.01 % qemu-system-x86_64 -name guest=Apollon,d~ny,resourcecontrol=deny -msg timestamp=on
21422 be/4 root 0.00 B 159.13 M 0.00 % 0.00 % [kworker/u65:1-btrfs-endio-write]
27287 be/4 root 0.00 B 139.56 M 0.00 % 1.22 % dockerd -p /var/run/dockerd.pid --log-op~ --log-level=error --storage-driver=btrfs
15717 be/4 root 0.00 B 132.48 M 0.00 % 0.00 % [kworker/u65:2-btrfs-endio-write]
25364 be/4 root 0.00 B 130.80 M 0.00 % 0.01 % [kworker/u65:7-events_unbound]
10515 be/4 root 0.00 B 126.08 M 0.00 % 0.00 % [kworker/u65:9-btrfs-worker]
10708 be/4 root 0.00 B 97.09 M 0.00 % 0.00 % [kworker/u65:4-btrfs-endio-write]
10514 be/4 root 0.00 B 94.36 M 0.00 % 0.00 % [kworker/u65:0-btrfs-endio-write]
26862 be/4 root 0.00 B 68.48 M 0.00 % 0.00 % [kworker/u65:3-btrfs-endio-write]
22073 be/4 root 0.00 B 55.11 M 0.00 % 0.00 % [kworker/u66:7-btrfs-endio-write]
13555 be/4 root 0.00 B 52.02 M 0.00 % 0.00 % [kworker/u66:0-btrfs-endio-write]
13144 be/4 root 8.00 K 51.37 M 0.00 % 0.00 % [kworker/u66:14-btrfs-endio-write]
10269 be/4 root 0.00 B 50.30 M 0.00 % 0.00 % [kworker/u66:2-btrfs-endio-write]
25365 be/4 root 0.00 B 49.25 M 0.00 % 0.00 % [kworker/u66:5-btrfs-endio-write]
16626 be/4 root 0.00 B 48.81 M 0.00 % 0.00 % [kworker/u66:4-btrfs-endio-write]
3032 be/4 root 0.00 B 41.62 M 0.00 % 0.00 % [kworker/u66:3-btrfs-endio-write]
10709 be/4 root 0.00 B 40.86 M 0.00 % 0.00 % [kworker/u65:11-btrfs-endio-write]
10710 be/4 root 0.00 B 37.89 M 0.00 % 0.00 % [kworker/u65:12-btrfs-endio-write]
8224 be/4 root 0.00 B 30.77 M 0.00 % 0.00 % [kworker/u66:6-btrfs-endio-write]
2808 be/4 root 0.00 B 27.78 M 0.00 % 0.00 % [kworker/u66:1-btrfs-endio-write]
8142 be/4 root 0.00 B 10.25 M 0.00 % 0.01 % [kworker/u64:1-bond0]
3432 be/4 103 0.00 B 7.12 M 0.00 % 0.00 % postgres: 10/main: stats collector process
8848 be/4 nobody 8.00 K 2.38 M 0.00 % 99.99 % mono --debug Sonarr.exe -nobrowser -data=/config
26116 be/4 nobody 17.23 M 2.20 M 0.00 % 0.01 % mono --debug Radarr.exe -nobrowser -data=/config
```

The first 2 lines are 2 VMs. I can't post the equivalent output from inside them, as it was done in an SSH session in mRemoteNG (no copy available).

But inside the host Hermes the amount of data written was around 10 times less, and inside Wazo around 20 times less.

It doesn't involve the loop3 device. I don't know where the writes are going.

Each guest has a single vdisk in raw format, stored in the domains share, using the virtio driver.

I don't really know where or how to investigate.
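One way to narrow it down might be to compare the guest-side write counter libvirt keeps per vdisk against what iotop attributes to the qemu process over the same window. A sketch, assuming the domain name wazo and device vda (the real device name comes from `virsh domblklist`):

```shell
# Compare the guest's vdisk writes (libvirt's wr_bytes counter) against the
# host-side writes iotop reports for the matching qemu process.
dom=wazo; dev=vda    # placeholders: list the real device with `virsh domblklist $dom`
before=$(virsh domblkstat "$dom" "$dev" | awk '/wr_bytes/ {print $3}')
sleep 3600           # same 1-hour window as the iotop run
after=$(virsh domblkstat "$dom" "$dev" | awk '/wr_bytes/ {print $3}')
echo "guest wrote $(( (after - before) / 1024 / 1024 )) MiB in 1 hour"
```

If the host-side number stays roughly 10-20x the guest-side counter, the amplification would seem to be happening below the guest (qemu cache mode, btrfs CoW on the raw image, etc.) rather than inside the VM itself.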
