AxelS

Members
  • Posts: 4
  • Joined
  • Last visited
AxelS's Achievements: Noob (1/14)
Reputation: 0

  1. As far as I know there is a "feature" in Windows 10 that enables only specific versions of the SMB file-sharing protocol. Maybe the version you need is disabled? I don't know the details by heart, so you'll have to google those keywords.
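     One quick server-side check is sketched below; it is only a generic Samba check (testparm ships with Samba), not an Unraid-specific tool, and the Windows-side switch referred to above is presumably the optional "SMB 1.0/CIFS File Sharing Support" entry under "Turn Windows features on or off".

     # Run on the Unraid box: print the effective Samba configuration,
     # including defaults, and filter for the protocol limits that decide
     # which SMB dialects a Windows client can negotiate.
     testparm -sv 2>/dev/null | grep -i "protocol"

     If the dialects allowed on the server side and on the Windows side don't overlap, the share won't connect.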
  2. Hmmm - I just rebooted UnRaid and it seems this clean restart automatically took care of all the inconsistencies. lsblk now comes up identical for both NVMe drives:

     nvme0n1     259:0    0 465.8G  0 disk
     └─nvme0n1p1 259:1    0 465.8G  0 part /mnt/c-nvme0n1x
     nvme1n1     259:2    0 465.8G  0 disk
     └─nvme1n1p1 259:3    0 465.8G  0 part /mnt/c-nvme1n1x

     Formatting the disk worked without any further issues and it is now properly available with an xfs filesystem. Very nice - I wish I had had the "reboot" idea earlier... Problem solved. Thanks for your ideas and your support.
  3. Thanks a lot for your suggestion. Unfortunately the issue still exists. From my understanding the problem is not related to data on the drive but to the messed-up block device structure in the /dev folder.

     Output of "lsblk" for nvme0n1 (bad) and nvme1n1 (ok), where nvme0n1p1 is missing but the garbage nvme0n1p3 entries are still there:

     nvme0n1            259:0    0 465.8G  0 disk
     └─nvme0n1p3        259:3    0 464.8G  0 part
       ├─pve-swap       254:0    0     8G  0 lvm
       ├─pve-root       254:1    0    96G  0 lvm
       ├─pve-data_tmeta 254:2    0   3.4G  0 lvm
       │ └─pve-data     254:4    0 337.9G  0 lvm
       └─pve-data_tdata 254:3    0 337.9G  0 lvm
         └─pve-data     254:4    0 337.9G  0 lvm
     -----------------------------------
     nvme1n1            259:4    0 465.8G  0 disk
     └─nvme1n1p1        259:5    0 465.8G  0 part /mnt/c-nvme1n1x

     Output of "fdisk -l" for both drives, where nvme0n1 still has a reference to /dev/nvme0n1p1 (which doesn't exist), so it fails when trying to create a filesystem:

     Disk /dev/nvme0n1: 465.76 GiB, 500107862016 bytes, 976773168 sectors
     Disk model: KINGSTON SA2000M8500G
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disklabel type: dos
     Disk identifier: 0x00000000

     Device         Boot Start       End   Sectors   Size Id Type
     /dev/nvme0n1p1      2048  976773167 976771120 465.8G 83 Linux
     -----------------------------------
     Disk /dev/nvme1n1: 465.76 GiB, 500107862016 bytes, 976773168 sectors
     Disk model: KINGSTON SA2000M8500G
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disklabel type: dos
     Disk identifier: 0x00000000

     Device         Boot Start       End   Sectors   Size Id Type
     /dev/nvme1n1p1      2048  976773167 976771120 465.8G 83 Linux

     As there is no data on this device: maybe it's an option to let the system autodetect it again as a "new" drive? E.g. clean up by removing the /dev/nvme0n1 entry? I was even thinking about saving the trial key from the config folder, overwriting the flash drive with the installation setup, starting over and re-implanting the trial key... but I guess that's a bit of overkill (and I'm not sure it would work ;-). Or is there any other way to initialize the system from scratch, so it behaves like the very first startup? There's no harm, as there is still no data on the system. Thanks a lot for your help.
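     I suppose something like the following sketch would do that cleanup with plain Linux tools rather than a full reinstall (untested here; it assumes the pool is stopped, that the wipefs, partprobe and lvm2 utilities are present on the box, and that the disk really holds no data, as stated above):

     # Deactivate the leftover "pve" volume group that still claims the disk
     # (the pve-* names look like remnants of a previous Proxmox install).
     vgchange -an pve

     # Erase all filesystem / LVM / partition-table signatures on the disk.
     wipefs -a /dev/nvme0n1

     # Have the kernel re-read the now-empty partition table so the stale
     # /dev/nvme0n1p3 node disappears without a reboot.
     partprobe /dev/nvme0n1

     After that the device should look like a blank disk again, and formatting it as a pool device should recreate the partition and filesystem.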
  4. Hi all, I'm testing UnRaid and trying out several options and settings. Currently there is no important data on the system; I have just done a full memory check and a disk stress test. It seems I've messed something up with one of the NVMe drives that I want to use as a pool. Both should be xfs formatted as I won't use pool RAID, so that's fine for my usage. This one pool disk shows up as "Unmountable: Wrong or no file system".

     Done so far:
     - used the Unassigned Devices tool to remove all partitions
     - added the drive as a 2nd pool
     - set the file system to xfs in the pool
     - formatted the disk

     The result is still the same. Now I see an issue in the disk log:

     Apr 9 00:25:12 Tower root: mount: /mnt/c-nvme0n1x: special device /dev/nvme0n1p1 does not exist.
     Apr 9 00:25:12 Tower emhttpd: /mnt/c-nvme0n1x mount error: Wrong or no file system

     So I've checked with lsblk and see some weird entries for nvme0n1:

     root@Tower:~# lsblk
     NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
     ...
     nvme0n1            259:0    0 465.8G  0 disk
     └─nvme0n1p3        259:3    0 464.8G  0 part
       ├─pve-swap       254:0    0     8G  0 lvm
       ├─pve-root       254:1    0    96G  0 lvm
       ├─pve-data_tmeta 254:2    0   3.4G  0 lvm
       │ └─pve-data     254:4    0 337.9G  0 lvm
       └─pve-data_tdata 254:3    0 337.9G  0 lvm
         └─pve-data     254:4    0 337.9G  0 lvm
     nvme1n1            259:4    0 465.8G  0 disk
     └─nvme1n1p1        259:5    0 465.8G  0 part /mnt/c-nvme1n1x

     There is a "p3" partition with some strange additional entries. I guess these are leftovers from the previous use of this system that UnRaid picked up on first startup. The "p1" entry is missing, so creating the filesystem fails. Don't ask me how I managed to remove it (at least I didn't do it from the shell, I just used the standard UnRaid options).

     What's the best way to clean up this mess? Probably:
     - get rid of the "p3" entries, so they don't cause issues later
     - create a proper "p1" entry

     Is there a tool that re-scans this device and creates all the proper entries, or do I have to come up with some shell commands? Thanks for your help, and Happy Easter.
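     For what it's worth, the pve-swap / pve-root / pve-data names match the default volume layout of a Proxmox VE install, which fits the "leftovers from previous usage" guess. A read-only way to confirm that (a sketch, assuming the lvm2 tools and blkid are available on the box):

     # Nothing here modifies the disk -- it only lists what the kernel sees.
     pvs                    # physical volumes: a "pve" volume group on /dev/nvme0n1p3 would confirm the leftover
     lvs                    # logical volumes: the pve-swap / pve-root / pve-data entries
     blkid /dev/nvme0n1p3   # should report an LVM2_member signature on the stale partition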