SSD showing as NTFS and won't format


tyrindor

Jun  3 18:52:37 UNRAID1 avahi-daemon[7364]: Service "UNRAID1" (/services/smb.service) successfully established.
Jun  3 18:52:42 UNRAID1 emhttp: writing MBR on disk (sdb) with partition 1 offset 64, erased: 0
Jun  3 18:52:43 UNRAID1 emhttp: re-reading (sdb) partition table
Jun  3 18:52:44 UNRAID1 emhttp: shcmd (109): udevadm settle
Jun  3 18:52:44 UNRAID1 kernel: sdb: sdb1
Jun  3 18:52:44 UNRAID1 emhttp: shcmd (110): mkdir -p /mnt/cache
Jun  3 18:52:44 UNRAID1 emhttp: shcmd (111): set -o pipefail ; mount -t ntfs -o noatime,nodiratime /dev/sdb1 /mnt/cache |& logger
Jun  3 18:52:44 UNRAID1 kernel: ntfs: (device sdb1): read_ntfs_boot_sector(): Primary boot sector is invalid.
Jun  3 18:52:44 UNRAID1 kernel: ntfs: (device sdb1): read_ntfs_boot_sector(): Mount option errors=recover not used. Aborting without trying to recover.
Jun  3 18:52:44 UNRAID1 kernel: ntfs: (device sdb1): ntfs_fill_super(): Not an NTFS volume.
Jun  3 18:52:44 UNRAID1 logger: mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
Jun  3 18:52:44 UNRAID1 logger:        missing codepage or helper program, or other error
Jun  3 18:52:44 UNRAID1 logger:        In some cases useful info is found in syslog - try
Jun  3 18:52:44 UNRAID1 logger:        dmesg | tail  or so
Jun  3 18:52:44 UNRAID1 logger:
Jun  3 18:52:44 UNRAID1 emhttp: shcmd: shcmd (111): exit status: 32
Jun  3 18:52:44 UNRAID1 emhttp: mount error: No file system (32)
Jun  3 18:52:44 UNRAID1 emhttp: shcmd (112): rmdir /mnt/cache
Jun  3 18:52:44 UNRAID1 emhttp: shcmd (113): :>/etc/samba/smb-shares.conf
Jun  3 18:52:44 UNRAID1 avahi-daemon[7364]: Files changed, reloading.

 

This was previously an SSD used in Windows. I deleted the partition, moved it to unRAID, and precleared it. It still shows as NTFS, and I get the errors above when I click the format button. I'm trying to use it as the cache drive. No idea what the problem is...

 

EDIT: I sort of fixed it. I unassigned the drive and reassigned it, and then it showed "Auto" for the file system and formatted to btrfs. However, I have ReiserFS set as my default. What gives?

Try this command from a terminal session:

 

sgdisk -z /dev/sdX

 

Replace X with the drive letter designation for the device.
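For anyone unsure what that command does: `sgdisk -z` "zaps" (destroys) the GPT data structures and the protective MBR, wiping out the stale partition table that confuses the format step. A safe way to see the effect is against a throwaway image file rather than a real device; everything below is illustrative, and `demo.img` is just a scratch file:

```shell
# Illustrative only: exercise sgdisk against a scratch image file,
# not a real disk, so nothing of value can be destroyed.
truncate -s 16M demo.img      # sparse 16 MiB stand-in for a drive
sgdisk -n 1:0:0 demo.img      # create one partition spanning the image
sgdisk -p demo.img            # print the table: partition 1 is listed
sgdisk -z demo.img            # zap the GPT and MBR data structures
sgdisk -p demo.img            # print again: the table is now empty
```

On the real drive you would run it against the whole device (e.g. /dev/sdb), never a partition (/dev/sdb1), and only after triple-checking the device letter with something like lsblk.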

 

Thanks for the quick help; I fixed it shortly after posting. For whatever reason it formatted to the wrong file system, though, and ignored my "Auto" setting. I stopped the array, set the cache to ReiserFS, and then it formatted correctly.

 

Strange. Is there any reason why I would want to use btrfs over ReiserFS?

Btrfs is a more modern file system, and unRAID requires it to create a cache pool. For array devices, the #1 benefit it offers is native copy-on-write support, which translates to the ability to checksum data at the filesystem level.
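Roughly, "checksum data at the filesystem level" means btrfs records a checksum for every data block when it's written and verifies it on every read, so silent corruption is detected instead of being handed back to you. The idea can be sketched by hand with ordinary tools (this is only an analogy; btrfs does it automatically, per block, inside the filesystem):

```shell
# Analogy only: record a checksum at "write" time, verify at "read" time.
echo "important data" > file.bin
sha256sum file.bin > file.bin.sum   # write: store the checksum alongside the data
sha256sum -c file.bin.sum           # read: verify it; prints "file.bin: OK"
```

If the file were silently corrupted between those two steps, the verify would fail loudly, which is exactly the class of error a non-checksumming file system would never notice.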

Honestly, I don't really understand file systems. I have two unRAID servers, and they've been using ReiserFS for both the data and cache drives for over 5 years. Should I start using btrfs for new drives, and can I mix and match file systems between drives without parity problems? Would you recommend I reformat this back to btrfs? XFS? Etc.?

 

EDIT: It seems you need btrfs to have TRIM support, so it seems like a no-brainer. It looks like XFS would be best for data drives when dealing with large 30+ GB files.

 

 

What I can suggest is this:

 

For your existing ReiserFS disks, consider leaving them as ReiserFS unless you feel there is a benefit in converting. Converting will require you to add storage device(s) of equal or greater size than the amount of data on the largest ReiserFS device in your array. You can then use the new device(s), formatted with the new file system(s), to copy the data off your existing devices, and afterwards reformat the ReiserFS devices to the new file system as well.
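The copy step in that procedure is just a straight file copy from the old disk to the new one. A minimal sketch with rsync, using throwaway directories as stand-ins for the mount points (on a real server these would be the disks themselves, e.g. /mnt/disk1 and /mnt/disk2; the paths here are made up):

```shell
# Stand-ins for the old ReiserFS disk and the newly formatted disk.
old=$(mktemp -d)
new=$(mktemp -d)
echo "sample data" > "$old/movie.mkv"

# -a (archive) preserves permissions, timestamps, and ownership where possible.
rsync -a "$old"/ "$new"/

# diff prints nothing and exits 0 if the trees match;
# only then is it safe to reformat the old disk.
diff -rq "$old" "$new"
```

The trailing slashes on the rsync paths matter: they copy the *contents* of the old disk into the new one rather than nesting a directory inside it.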

 

And by the way, TRIM support only matters for SSDs.

I'm using an SSD. :)

 

I'll swap my cache over so I have proper TRIM on it and leave the rest on ReiserFS. Based on what I'm reading, I can mix and match file systems without problems, so I've set the default to btrfs for future drives, mainly because ReiserFS appears to be a dead file system and I'd rather start slowly moving away from it as I upgrade and add drives.

 

 

Archived

This topic is now archived and is closed to further replies.
