caplam Posted May 31, 2020 Share Posted May 31, 2020 Hi, it's not a good period for my unraid server. Having big trouble with write amplification on my cache pool, I decided to move from a btrfs cache pool to a single xfs cache disk until there is a fix. I shut down the docker and VM services. I'm moving the cache shares to the array with the mover, but it only writes at 1.5 MB/s. All cache shares (except an empty one) have been set to cache: yes with disk2 as the included disk. For a few days I've had so many troubles with this server that I'm considering moving to another OS. How can I stop the mover and transfer shares from cache to array manually? I also have a big question. From what I understood, docker.img is a btrfs filesystem handling CoW. When loop2 is mounted we can see the /docker/btrfs/subvolumes tree. Is this handled correctly on an xfs cache? godzilla-diagnostics-20200531-1213.zip
JorgeB Posted May 31, 2020 Share Posted May 31, 2020 3 minutes ago, caplam said: I'm moving the cache shares to the array with the mover, but it only writes at 1.5 MB/s. Parity2 appears to be failing, run an extended SMART test to confirm. May 31 10:35:32 godzilla kernel: ata4.00: exception Emask 0x0 SAct 0x1ff80 SErr 0x0 action 0x0 May 31 10:35:32 godzilla kernel: ata4.00: irq_stat 0x40000008 May 31 10:35:32 godzilla kernel: ata4.00: failed command: READ FPDMA QUEUED May 31 10:35:32 godzilla kernel: ata4.00: cmd 60/00:38:e0:d3:67/04:00:08:00:00/40 tag 7 ncq dma 524288 in May 31 10:35:32 godzilla kernel: res 41/40:00:90:d6:67/00:00:08:00:00/00 Emask 0x409 (media error) <F> May 31 10:35:32 godzilla kernel: ata4.00: status: { DRDY ERR } May 31 10:35:32 godzilla kernel: ata4.00: error: { UNC } May 31 10:35:32 godzilla kernel: ata4.00: configured for UDMA/133 May 31 10:35:32 godzilla kernel: sd 5:0:0:0: [sdm] tag#7 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 May 31 10:35:32 godzilla kernel: sd 5:0:0:0: [sdm] tag#7 Sense Key : 0x3 [current] May 31 10:35:32 godzilla kernel: sd 5:0:0:0: [sdm] tag#7 ASC=0x11 ASCQ=0x4 May 31 10:35:32 godzilla kernel: sd 5:0:0:0: [sdm] tag#7 CDB: opcode=0x88 88 00 00 00 00 00 08 67 d3 e0 00 00 04 00 00 00 May 31 10:35:32 godzilla kernel: print_req_error: I/O error, dev sdm, sector 141022864 May 31 10:35:32 godzilla kernel: md: disk29 read error, sector=141022800 May 31 10:35:32 godzilla kernel: md: disk29 read error, sector=141022808 May 31 10:35:32 godzilla kernel: md: disk29 read error, sector=141022816 May 31 10:35:32 godzilla kernel: md: disk29 read error, sector=141022824 May 31 10:35:32 godzilla kernel: md: disk29 read error, sector=141022832 May 31 10:35:32 godzilla kernel: md: disk29 read error, sector=141022840 May 31 10:35:32 godzilla kernel: md: disk29 read error, sector=141022848 May 31 10:35:32 godzilla kernel: md: disk29 read error, sector=141022856
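[Editor's note: for readers wanting to run the suggested extended SMART test from the console, these are standard smartctl invocations; /dev/sdm is the device from the log above, substitute your own. Hardware-specific, so shown here as a reference fragment only.]

```shell
# Start an extended (long) offline self-test on the suspect drive.
# The drive reports an estimated completion time; the test runs in the background.
smartctl -t long /dev/sdm

# After the estimated time has passed, check the self-test log for the result
# ("Completed without error" vs. "Completed: read failure").
smartctl -l selftest /dev/sdm

# Full SMART attributes and error log, useful for pending/reallocated sector counts.
smartctl -a /dev/sdm
```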
caplam Posted May 31, 2020 Author Share Posted May 31, 2020 (edited) Yes, I have read errors on it. Parity 1 was failing too and I had it replaced. I have no other disk to replace parity 2. Can I unassign it and go on with only 1 parity drive? edit: I'm running a SMART test. Last week's came back OK. Edited May 31, 2020 by caplam
JorgeB Posted May 31, 2020 Share Posted May 31, 2020 3 minutes ago, caplam said: Can I unassign it and go on with only 1 parity drive? Yep.
caplam Posted May 31, 2020 Author Share Posted May 31, 2020 (edited) OK. I can't right now because the mover is running. Can we stop it? edit: as simple as: mover stop Edited May 31, 2020 by caplam
caplam Posted May 31, 2020 Author Share Posted May 31, 2020 I unassigned parity2, restarted the array and invoked the mover and... the speed is the same, between 2 and 5 MB/s. There are no other writes except to disk2 and parity1. It should be quicker.
JorgeB Posted June 1, 2020 Share Posted June 1, 2020 21 hours ago, caplam said: the speed is the same, between 2 and 5 MB/s. You're likely moving a lot of small files; the mover is not efficient for that, you're better off moving manually.
caplam Posted June 1, 2020 Author Share Posted June 1, 2020 That's what I did. The mover took 3 hours to move appdata/plex (13 GB).