
Everything posted by ajeffco

  1. Is there an "official" process to open a support ticket with LT, if that's even possible? Or is the forum post acceptable?
  2. They were part of an Ubuntu system running BTRFS. Before migrating to unRaid, I ran "wipefs -a /dev/..." on them, and when they got to unRaid, I ran preclear on every drive before running through the drive replacement procedure. I'm not sure if wipefs and preclear are enough to "clean" a disk. The odd thing is that the 1TB doesn't think it's part of a 4TB pool; it looks like it thinks it's a 4TB drive in the GUI. From the CLI, "btrfs fi" shows the device itself correctly. Al
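     For reference, this is roughly the wipe I ran on each drive (the device name below is just a placeholder, not the actual path; whether this plus preclear is enough to fully "clean" a disk is exactly what I'm unsure about):

         # Run on each drive before moving it to unRaid (sdX is a placeholder device name).
         # wipefs -a removes all known filesystem and partition-table signatures it finds.
         wipefs -a /dev/sdX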
  3. I'll wait for LT to chime in. The File System Conversion page on the wiki talks about parity rebuilds, which I'm assuming I don't want to do. Not sure parity is even valid at this point.
  4. Ok. I'll stay away from them and the original 1TB disk. Thanks again.
  5. johnnie.black, do you think there's any issue with starting to convert btrfs to xfs, beginning with disk 10 and moving the data? Disk 10 is empty at the moment. Or should I just wait on LT before doing anything further?
  6. Yeah, I'm running btrfs on another rig and have run into that very issue before. This is something much deeper and uglier.
  7. I put the 1TB drive back into the system. I didn't change the device configuration in unraid. I can mount the 1TB drive manually with "mount /dev/sdn1 /aljx". Interestingly, the 1TB drive has the same contents as Disk 2 and Disk 3.
  8. More oddity... lsblk shows the disk at 4TB, however btrfs thinks it's 931GB.

     lsblk output for disk 3:

         sdg      8:96   0  3.7T  0 disk
         └─sdg1   8:97   0  3.7T  0 part

     btrfs fi show for /dev/sdg1:

         btrfs fi show /dev/sdg1
         Label: none  uuid: 25d79d48-80f9-4b90-8091-515048193568
                 Total devices 1 FS bytes used 384.00KiB
                 devid    1 size 931.51GiB used 1.02GiB path /dev/sdg1

     And unassigned devices shows:

     It's really jacked up...
  9. After running the test you mentioned: the files I thought earlier I had accessed on Disk 3 are really on Disk 2. When I do a find in the CLI, it's the same files. An example, a file from my Synology backup share:

         root@Tower:/mnt/disk3/synback/filer_1.hbk# find /mnt -name synobkpinfo.db
         /mnt/user/synback/filer_1.hbk/synobkpinfo.db
         /mnt/disk3/synback/filer_1.hbk/synobkpinfo.db
         /mnt/disk2/synback/filer_1.hbk/synobkpinfo.db
         root@Tower:/mnt/disk3/synback/filer_1.hbk# md5sum /mnt/user/synback/filer_1.hbk/synobkpinfo.db
         103358f0b308ae36349bc17d1103e607  /mnt/user/synback/filer_1.hbk/synobkpinfo.db
         root@Tower:/mnt/disk3/synback/filer_1.hbk# md5sum /mnt/disk3/synback/filer_1.hbk/synobkpinfo.db
         103358f0b308ae36349bc17d1103e607  /mnt/disk3/synback/filer_1.hbk/synobkpinfo.db
         root@Tower:/mnt/disk3/synback/filer_1.hbk# md5sum /mnt/disk2/synback/filer_1.hbk/synobkpinfo.db
         103358f0b308ae36349bc17d1103e607  /mnt/disk2/synback/filer_1.hbk/synobkpinfo.db
  10. As you requested, I moved a 12G file from another disk to disk2. In the GUI, Drive 2 and Drive 3 are showing the same information, and free space has gone down by 10GB.
  11. It's sitting on my desk. In the GUI I see all my drives; in the CLI, df is missing disk 3. Screenshots and a new diagnostic attached. Would it be easier for me to start converting and migrating the data to xfs in a rolling fashion, starting with Disk 10?

      GUI:

      CLI (missing /dev/md2 aka /mnt/disk3):

          root@Tower:~# df -h
          Filesystem      Size  Used Avail Use% Mounted on
          rootfs           16G  404M   16G   3% /
          tmpfs            16G  252K   16G   1% /run
          devtmpfs         16G     0   16G   0% /dev
          cgroup_root      16G     0   16G   0% /sys/fs/cgroup
          tmpfs           128M  2.4M  126M   2% /var/log
          /dev/sda1       976M  152M  825M  16% /boot
          /dev/md1        1.9T  1.7T  133G  93% /mnt/disk1
          /dev/md3        3.7T  3.4T  344G  91% /mnt/disk2
          /dev/md4        3.7T  3.4T  257G  94% /mnt/disk4
          /dev/md5        1.9T  1.8T   63G  97% /mnt/disk5
          /dev/md6        3.7T  3.6T   76G  98% /mnt/disk6
          /dev/md7        3.7T  3.6T   75G  98% /mnt/disk7
          /dev/md8        3.7T  3.4T  249G  94% /mnt/disk8
          /dev/md9        3.7T  784G  2.9T  22% /mnt/disk9
          /dev/md10       3.7T   17M  3.7T   1% /mnt/disk10
          shfs             33T   25T  8.1T  76% /mnt/user

      btrfs fi output in case it helps:

          root@Tower:~# btrfs fi show /dev/md2
          Label: none  uuid: 25d79d48-80f9-4b90-8091-515048193568
                  Total devices 1 FS bytes used 3.30TiB
                  devid    1 size 3.64TiB used 3.54TiB path /dev/md3
          root@Tower:~# btrfs fi show /mnt/disk3
          Label: none  uuid: 25d79d48-80f9-4b90-8091-515048193568
                  Total devices 1 FS bytes used 3.30TiB
                  devid    1 size 3.64TiB used 3.54TiB path /dev/md3
          root@Tower:~# btrfs fi df /dev/md2
          ERROR: not a btrfs filesystem: /dev/md2
          root@Tower:~# btrfs fi df /mnt/disk3
          Data, single: total=3.53TiB, used=3.30TiB
          System, single: total=4.00MiB, used=416.00KiB
          Metadata, single: total=5.01GiB, used=3.40GiB
          GlobalReserve, single: total=512.00MiB, used=0.00B

      tower-diagnostics-20170314-2025.zip
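      To rule out df simply being stale, a couple of cross-checks I can run (findmnt and /proc/mounts are standard Linux tooling; this is just a sanity check on my part, not something anyone asked for):

          # Check whether /mnt/disk3 is actually in the mount table, independent of df.
          findmnt /mnt/disk3                 # prints source device, fstype, options if mounted
          grep -E 'md2|disk3' /proc/mounts   # raw kernel view of the mount table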
  12. Trurl, I didn't ask privately for support. He asked for a diagnostic; my mistake was providing it via PM instead of in the thread. My apologies.
  13. I'm getting ready to move these drives and the key back to the original system they came from, since it has more drive bays. I'll test that out once the move is complete.
  14. The server has been up for 25+ hours. I should have been clearer: the syslog files are the only ones that appear to go back to the reboot of the server; the others are timestamped with the restart of the array. I'll work on copying my syslogs to stable storage; I think I saw a post on that somewhere.
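      Until I find that post, something simple along these lines is probably what I'll do (the /boot/logs directory name is just my own choice, not from any guide):

          # Minimal sketch: copy the current syslog to the flash drive so it survives a reboot.
          # /boot is the unRaid USB key; /boot/logs is a directory name I picked for this.
          mkdir -p /boot/logs
          cp /var/log/syslog "/boot/logs/syslog-$(date +%Y%m%d-%H%M%S).txt"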
  15. root@Tower:~# btrfs fi show /mnt/disk3
      Label: none  uuid: 25d79d48-80f9-4b90-8091-515048193568
              Total devices 1 FS bytes used 3.30TiB
              devid    1 size 3.64TiB used 3.54TiB path /dev/md3
      root@Tower:~# btrfs fi df /mnt/disk3
      Data, single: total=3.53TiB, used=3.30TiB
      System, single: total=4.00MiB, used=416.00KiB
      Metadata, single: total=5.01GiB, used=3.40GiB
      GlobalReserve, single: total=512.00MiB, used=0.00B
  16. Unfortunately I stopped and restarted the array just over an hour ago. The diagnostic zip file looks to be mostly reset to that time. So there's nothing pointing at a problem with Disk 3 in that file.
  17. By diagnostics, are you talking about the "diagnostics" command output file? This is a new install; nothing was restored. I started with a few drives, migrated some data from my other machine, moved drives over, precleared them, added them to the array, etc. Two of the drives were 1TB drives and were replaced to gain more space. Right now, it looks normal in the GUI as shown below. I randomly picked 5 files on the drive and they are correct. Steps to replace the drive were: preclear a replacement 4TB; when complete, stop the array; in the Disk 3 selection box, choose the "new" 4TB drive; start the array; let parity rebuild. The results of that action are below; the data rebuild on Disk 3 just completed about an hour ago. I don't doubt that it was a serious problem, just not sure how to proceed to assist with any resolution if there is a problem. I'll read that other thread. Thanks, Al
  18. Took a shot and replaced it, and now the GUI is showing the correct total size; however, used and free haven't changed. Still an oddity. Not sure why it thought the 1TB had 3.92TB used. Al
  19. Hello, I have a disk in my unRaid array that is a WD 1TB Red. As shown below, unRaid is showing this correctly in the disk and device portion. However, on the right side the size, used and free information is showing a 4TB drive, which is clearly incorrect. I've pasted the output of fdisk for the drive, also showing it's 1TB. I was going to replace the drive with a 4TB drive until I saw this. Is this a cosmetic issue? Is there a way to "reset" the size, used and free information? Am I ok to replace it with a bigger drive? Thanks, Al

          root@Tower:~# fdisk -l /dev/sdc
          Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
          Units: sectors of 1 * 512 = 512 bytes
          Sector size (logical/physical): 512 bytes / 4096 bytes
          I/O size (minimum/optimal): 4096 bytes / 4096 bytes
          Disklabel type: dos
          Disk identifier: 0x00000000

          Device     Boot Start        End    Sectors   Size Id Type
          /dev/sdc1          64 1953525167 1953525104 931.5G 83 Linux

      EDIT: The stat plugin is showing the size correctly.
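      For what it's worth, a couple of other ways to cross-check the raw device size (standard util-linux commands; just a sanity check I could run, not something anyone asked for):

          # Cross-check the physical size the kernel reports for the drive.
          # /dev/sdc matches the fdisk output above; adjust if the device letter changes.
          lsblk -b -d -o NAME,SIZE,MODEL /dev/sdc   # whole-disk size in bytes
          blockdev --getsize64 /dev/sdc             # same number straight from the kernel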
  20. I think what I might try instead is this: I've got another 12-bay server that I can use for this process. If I pull the first two drives from the Ubuntu server and start the unRaid pool on this other server, then just pull and move drives as I empty the Ubuntu server, this might be "safer". At the end of the process, I can move the USB key and the drives from the 12-bay back to the original system with the data intact, correct? Time is not an issue; either way it will take quite a while.
  21. Well... I'll be trying it soon... I'll reply back in case anyone else in the future has this kind of harebrained scheme. Thanks again, Al
  22. Ok. I've got a Pro license; I just didn't know if you can assign 22 cache devices and 2 data devices and then start the rolling migration process...