grizzlemt Posted July 19

Hey folks, I am trying to make a ZFS pool that's just a stripe pool to handle downloads I am doing. I started formatting the drives for this 2 days ago, but it's still sitting at formatting. Is there something going on that's stopping this? What am I missing? Thanks!

Ansible Diagnostics 20240719.zip
JorgeB Posted July 19

emhttp segfaulted, you will need to reboot.

P.S. disk1 appears to be failing:

Jul 18 04:15:44 Ansible kernel: md: disk1 read error, sector=31305196440
Jul 18 04:15:44 Ansible kernel: md: disk1 read error, sector=31305196448
Jul 18 04:15:44 Ansible kernel: md: disk1 read error, sector=31305196456
Jul 18 04:15:44 Ansible kernel: md: disk1 read error, sector=31305196464
Jul 18 04:15:44 Ansible kernel: md: disk1 read error, sector=31305196472
Jul 18 04:15:44 Ansible kernel: md: disk1 read error, sector=31305196480
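A quick way to confirm whether the drive behind disk1 really is failing is to pull its SMART report. This is only a sketch: sdX is a placeholder for whatever device letter disk1 maps to, and the exact attribute names vary by drive.

smartctl -a /dev/sdX                                              # full SMART report for the drive
smartctl -a /dev/sdX | grep -iE 'reallocated|pending|uncorrect'   # the counters that usually matter

Non-zero Reallocated_Sector_Ct or Current_Pending_Sector values would back up the read errors in the syslog above.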
grizzlemt Posted July 19

Ansible Diagnostics 20240720.zip

I have restarted the server twice, but I keep getting the same problems. Is there anything else I can look at that could be causing this? Thanks!
JorgeB Posted July 20

emhttp crashed again. Reboot, and before attempting another format, post the output of:

fdisk -l /dev/nvme0n1p1

and

blkid /dev/nvme0n1p1
grizzlemt Posted July 20

I couldn't get anything from the second command.
grizzlemt Posted July 20 (edited)

I am wondering if I should just wipe and redo the whole server? It'd be a shame to do so, but I had changed some of the configs of the HDDs to try to reorient stuff, and it's been a bit sketchy ever since. I did the process of making a 'new config' and then starting into that new config, and it's been running in this loop ever since.

Edited July 20 by grizzlemt
JorgeB Posted July 20

Post new diags now, still before attempting to format.
grizzlemt Posted July 20

Here they are! Thank you so incredibly much for helping me.

Ansible Diagnostics 20240720 (2).zip
JorgeB Posted July 20

That device still has a partition being detected, despite fdisk showing nothing:

Jul 20 01:05:18 Ansible kernel: nvme0n1: p1

Post the output of:

lsblk
grizzlemt Posted July 20

NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0           7:0    0  65.8M  1 loop /lib
loop1           7:1    0 373.6M  1 loop /usr
loop2           7:2    0    20G  0 loop /var/lib/docker/btrfs
                                        /var/lib/docker
loop3           7:3    0     1G  0 loop /etc/libvirt
sda             8:0    1  14.3G  0 disk
└─sda1          8:1    1  14.3G  0 part /boot
sdb             8:16   0  16.4T  0 disk
└─sdb1          8:17   0  16.4T  0 part
sdc             8:32   0  14.6T  0 disk
└─sdc1          8:33   0  14.6T  0 part
sdd             8:48   0   7.3T  0 disk
└─sdd1          8:49   0   7.3T  0 part
sde             8:64   0  16.4T  0 disk
└─sde1          8:65   0  16.4T  0 part
sdf             8:80   0   7.3T  0 disk
└─sdf1          8:81   0   7.3T  0 part
sdg             8:96   0   7.3T  0 disk
└─sdg1          8:97   0   7.3T  0 part
sdh             8:112  0  16.4T  0 disk
└─sdh1          8:113  0  16.4T  0 part
sdi             8:128  0  16.4T  0 disk
└─sdi1          8:129  0  16.4T  0 part
sdj             8:144  0  16.4T  0 disk
└─sdj1          8:145  0  16.4T  0 part
sdk             8:160  0  18.2T  0 disk
└─sdk1          8:161  0  18.2T  0 part
sdl             8:176  0  10.9T  0 disk
└─sdl1          8:177  0  10.9T  0 part
sdm             8:192  0 232.9G  0 disk
└─sdm1          8:193  0 232.9G  0 part
sdn             8:208  0 111.8G  0 disk
└─sdn1          8:209  0 111.8G  0 part
sdo             8:224  0 223.6G  0 disk
└─sdo1          8:225  0 223.6G  0 part
md1p1           9:1    0  16.4T  0 md   /mnt/disk1
md2p1           9:2    0  16.4T  0 md   /mnt/disk2
md3p1           9:3    0  16.4T  0 md   /mnt/disk3
md5p1           9:5    0  16.4T  0 md   /mnt/disk5
md6p1           9:6    0  14.6T  0 md   /mnt/disk6
md9p1           9:9    0  10.9T  0 md   /mnt/disk9
nvme0n1       259:0    0 894.3G  0 disk
└─nvme0n1p1   259:1    0 894.3G  0 part
nvme1n1       259:2    0   1.9T  0 disk
└─nvme1n1p1   259:3    0   1.9T  0 part
nvme2n1       259:4    0   1.9T  0 disk
└─nvme2n1p1   259:5    0   1.9T  0 part /mnt/appcache
JorgeB Posted July 20

D'oh, just noticed I gave you the wrong command before, it should have been:

fdisk -l /dev/nvme0n1

I assume that shows the partition?
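The device path matters because fdisk -l lists the partition table of whatever block device it is handed: pointed at the partition device it finds no table inside the partition and comes back looking empty, while pointed at the whole disk it shows the table. A minimal illustration, assuming standard util-linux fdisk behaviour:

fdisk -l /dev/nvme0n1     # whole disk: shows the dos disklabel and the /dev/nvme0n1p1 entry
fdisk -l /dev/nvme0n1p1   # partition device: reports its size, but no partition table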
grizzlemt Posted July 20

Disk /dev/nvme0n1: 894.25 GiB, 960197124096 bytes, 1875385008 sectors
Disk model: Force MP510
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device         Boot Start        End    Sectors   Size Id Type
/dev/nvme0n1p1       2048 1875385007 1875382960 894.3G 83 Linux

Yes it does!
JorgeB Posted July 20 (Solution)

Now run:

blkdiscard -f /dev/nvme0n1

then again:

fdisk -l /dev/nvme0n1
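Put together, the wipe-and-verify sequence looks roughly like this. It is a sketch built from the commands used in this thread; the blockdev --rereadpt step is an extra assumption, added only so the kernel drops the stale nvme0n1p1 entry without a reboot:

blkdiscard -f /dev/nvme0n1        # discard every block, destroying the old partition and any leftover filesystem signatures
blockdev --rereadpt /dev/nvme0n1  # ask the kernel to re-read the (now empty) partition table
fdisk -l /dev/nvme0n1             # should report the disk with no Disklabel type and no nvme0n1p1 line
lsblk /dev/nvme0n1                # should list nvme0n1 with no child partition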
grizzlemt Posted July 20

root@Ansible:~# blkdiscard -f /dev/nvme0n1
blkdiscard: Operation forced, data will be lost!
root@Ansible:~#

I don't know how to force it to do it?

Disk /dev/nvme0n1: 894.25 GiB, 960197124096 bytes, 1875385008 sectors
Disk model: Force MP510
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Maybe it did? I can't tell! hah
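That output suggests the discard did work: the 'Disklabel type: dos' line and the /dev/nvme0n1p1 entry from the earlier fdisk run are gone. Two quick checks that should come back empty on a freshly wiped device (a sketch; blkid simply prints nothing when it finds no signatures):

blkid /dev/nvme0n1                  # no output means no filesystem or partition-table signatures remain
lsblk -o NAME,FSTYPE /dev/nvme0n1   # nvme0n1 should appear with an empty FSTYPE and no nvme0n1p1 child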
grizzlemt Posted July 20

NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0           7:0    0  65.8M  1 loop /lib
loop1           7:1    0 373.6M  1 loop /usr
loop2           7:2    0    20G  0 loop /var/lib/docker/btrfs
                                        /var/lib/docker
loop3           7:3    0     1G  0 loop /etc/libvirt
sda             8:0    1  14.3G  0 disk
└─sda1          8:1    1  14.3G  0 part /boot
sdb             8:16   0  16.4T  0 disk
└─sdb1          8:17   0  16.4T  0 part
sdc             8:32   0  14.6T  0 disk
└─sdc1          8:33   0  14.6T  0 part
sdd             8:48   0   7.3T  0 disk
└─sdd1          8:49   0   7.3T  0 part
sde             8:64   0  16.4T  0 disk
└─sde1          8:65   0  16.4T  0 part
sdf             8:80   0   7.3T  0 disk
└─sdf1          8:81   0   7.3T  0 part
sdg             8:96   0   7.3T  0 disk
└─sdg1          8:97   0   7.3T  0 part
sdh             8:112  0  16.4T  0 disk
└─sdh1          8:113  0  16.4T  0 part
sdi             8:128  0  16.4T  0 disk
└─sdi1          8:129  0  16.4T  0 part
sdj             8:144  0  16.4T  0 disk
└─sdj1          8:145  0  16.4T  0 part
sdk             8:160  0  18.2T  0 disk
└─sdk1          8:161  0  18.2T  0 part
sdl             8:176  0  10.9T  0 disk
└─sdl1          8:177  0  10.9T  0 part
sdm             8:192  0 232.9G  0 disk
└─sdm1          8:193  0 232.9G  0 part
sdn             8:208  0 111.8G  0 disk
└─sdn1          8:209  0 111.8G  0 part
sdo             8:224  0 223.6G  0 disk
└─sdo1          8:225  0 223.6G  0 part
md1p1           9:1    0  16.4T  0 md   /mnt/disk1
md2p1           9:2    0  16.4T  0 md   /mnt/disk2
md3p1           9:3    0  16.4T  0 md   /mnt/disk3
md5p1           9:5    0  16.4T  0 md   /mnt/disk5
md6p1           9:6    0  14.6T  0 md   /mnt/disk6
md9p1           9:9    0  10.9T  0 md   /mnt/disk9
nvme0n1       259:0    0 894.3G  0 disk
nvme1n1       259:2    0   1.9T  0 disk
└─nvme1n1p1   259:3    0   1.9T  0 part
nvme2n1       259:4    0   1.9T  0 disk
└─nvme2n1p1   259:5    0   1.9T  0 part /mnt/appcache
grizzlemt Posted July 20

Okay, I decided to try formatting! It worked on the ZFS cache, but not the cache itself.

Disk /dev/nvme0n1: 894.25 GiB, 960197124096 bytes, 1875385008 sectors
Disk model: Force MP510
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0           7:0    0  65.8M  1 loop /lib
loop1           7:1    0 373.6M  1 loop /usr
loop2           7:2    0    20G  0 loop /var/lib/docker/btrfs
                                        /var/lib/docker
loop3           7:3    0     1G  0 loop /etc/libvirt
sda             8:0    1  14.3G  0 disk
└─sda1          8:1    1  14.3G  0 part /boot
sdb             8:16   0  16.4T  0 disk
└─sdb1          8:17   0  16.4T  0 part
sdc             8:32   0  14.6T  0 disk
└─sdc1          8:33   0  14.6T  0 part
sdd             8:48   0   7.3T  0 disk
└─sdd1          8:49   0   7.3T  0 part
sde             8:64   0  16.4T  0 disk
└─sde1          8:65   0  16.4T  0 part
sdf             8:80   0   7.3T  0 disk
└─sdf1          8:81   0   7.3T  0 part
sdg             8:96   0   7.3T  0 disk
└─sdg1          8:97   0   7.3T  0 part
sdh             8:112  0  16.4T  0 disk
└─sdh1          8:113  0  16.4T  0 part
sdi             8:128  0  16.4T  0 disk
└─sdi1          8:129  0  16.4T  0 part
sdj             8:144  0  16.4T  0 disk
└─sdj1          8:145  0  16.4T  0 part
sdk             8:160  0  18.2T  0 disk
└─sdk1          8:161  0  18.2T  0 part
sdl             8:176  0  10.9T  0 disk
└─sdl1          8:177  0  10.9T  0 part
sdm             8:192  0 232.9G  0 disk
└─sdm1          8:193  0 232.9G  0 part
sdn             8:208  0 111.8G  0 disk
└─sdn1          8:209  0 111.8G  0 part
sdo             8:224  0 223.6G  0 disk
└─sdo1          8:225  0 223.6G  0 part
md1p1           9:1    0  16.4T  0 md   /mnt/disk1
md2p1           9:2    0  16.4T  0 md   /mnt/disk2
md3p1           9:3    0  16.4T  0 md   /mnt/disk3
md5p1           9:5    0  16.4T  0 md   /mnt/disk5
md6p1           9:6    0  14.6T  0 md   /mnt/disk6
md9p1           9:9    0  10.9T  0 md   /mnt/disk9
nvme0n1       259:0    0 894.3G  0 disk
nvme1n1       259:2    0   1.9T  0 disk
└─nvme1n1p1   259:3    0   1.9T  0 part
nvme2n1       259:4    0   1.9T  0 disk
└─nvme2n1p1   259:5    0   1.9T  0 part /mnt/appcache

I just re-ran the commands you asked me for earlier, in case that helps, and pulled a new diagnostic.

Ansible Diagnostics 20240720 (3).zip
JorgeB Posted July 20

At least no crash this time; try rebooting and then formatting again.
grizzlemt Posted July 20

Something borky is happening with that. Now it's unassigned the NVMe.

Ansible Diagnostics 20240720 (1).zip
grizzlemt Posted July 20

But honestly, I may just skip this cache idea and use my strange version of a ZFS cache; later I'll get bigger SSDs to use as a proper cache. I am a bit worried about that NVMe though: does that mean it's borked and no longer works?
grizzlemt Posted July 20

Thank you so much for your help! I can take it from here. I appreciate it!
JorgeB Posted July 20

Something weird is happening with that device, suggest trying with a different one.