Mount Unraid disk in another Linux OS


TheSkaz


I installed Ubuntu on a thumb drive and booted my machine that normally runs Unraid. I would like to mount the disks (one at a time) and write to them directly. When I try to mount them, I get an error:

 

mount: /mnt/data: wrong fs type, bad option, bad superblock on /dev/sdl1, missing codepage or helper program, or other error.

 

I believe the disk is XFS, and I have the xfs, zfs, and btrfs tools installed in this Ubuntu distro (20.04).
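
In case it helps, here is a minimal sketch of what I would try next to pin down the filesystem and get the real reason for the mount failure (the /mnt/data mount point is the one from my error above, and the explicit -t xfs is only an assumption that the disk really is XFS):

sudo blkid /dev/sdl1
sudo file -s /dev/sdl1
sudo mkdir -p /mnt/data
sudo mount -t xfs /dev/sdl1 /mnt/data
dmesg | tail                        # the kernel log usually states why the mount failed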

13 hours ago, JorgeB said:

Post output of:

fdisk -l /dev/sdl

and

blkid

 

 

Disk /dev/sdl: 14.57 TiB, 16000900661248 bytes, 31251759104 sectors
Disk model: ST16000NM001G-2K
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: DDB2E669-D1DA-4232-8C49-3E867F7B43CB

Device     Start         End     Sectors  Size Type
/dev/sdl1     64 31251759070 31251759007 14.6T Linux filesystem

 

/dev/sdz2: UUID="597002c4-2d94-4b44-aa05-e1e80e8e767a" TYPE="ext4" PARTUUID="db3a80ee-d9ea-411b-b720-854c0c4e942f"
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/nvme0n1p1: LABEL="fast" UUID="4179580394884417541" UUID_SUB="12847431488902245656" TYPE="zfs_member" PARTLABEL="zfs-0cfad0fea46b414a" PARTUUID="41a2681c-2bd6-3147-8631-a397c1e25f14"
/dev/nvme2n1p1: LABEL="fast" UUID="4179580394884417541" UUID_SUB="15543906779802372019" TYPE="zfs_member" PARTLABEL="zfs-3fbe5c2bc68ba8ed" PARTUUID="0610b1a9-f1d4-624c-a40e-04b3f75a89e4"
/dev/nvme1n1p1: LABEL="fast" UUID="4179580394884417541" UUID_SUB="6492103685729250771" TYPE="zfs_member" PARTLABEL="zfs-e81f23cae5a73bcf" PARTUUID="fb7e4f63-48d8-ab43-a87e-2cf7d7406c34"
/dev/nvme4n1p1: LABEL="fast" UUID="4179580394884417541" UUID_SUB="13649481275645943769" TYPE="zfs_member" PARTLABEL="zfs-ee79cd07d03ede87" PARTUUID="116d3468-57ea-e44d-89fb-b8d6ad04c3c0"
/dev/nvme3n1p1: UUID="59591a90-d1b2-4d27-acfa-30485563deda" UUID_SUB="09c71c32-8b4e-4b9d-b657-b3c5fc929a62" TYPE="btrfs" PARTUUID="1bc3545e-01"
/dev/sda1: LABEL="vmstorage" UUID="16302881269565465280" UUID_SUB="4922239508972696224" TYPE="zfs_member" PARTLABEL="zfs-ab30a848780c9330" PARTUUID="672f6248-cf61-f44d-bc16-68c0066c8805"
/dev/sde1: LABEL="vmstorage" UUID="16302881269565465280" UUID_SUB="13336565585547709039" TYPE="zfs_member" PARTLABEL="zfs-7b1ab658861ef082" PARTUUID="26af02ef-c7ef-2146-9381-a8bf095a9117"
/dev/sdh1: LABEL="vmstorage" UUID="16302881269565465280" UUID_SUB="2814332359404138817" TYPE="zfs_member" PARTLABEL="zfs-9e344a20b0ec5662" PARTUUID="17f6819d-5cd7-b146-8450-6c7031c143e0"
/dev/sdf1: LABEL="vmstorage" UUID="16302881269565465280" UUID_SUB="8862187382222672228" TYPE="zfs_member" PARTLABEL="zfs-0e2ab6be8ff4bd2d" PARTUUID="c9240531-9051-f24a-bf65-9d129a158185"
/dev/sdi1: LABEL="vmstorage" UUID="16302881269565465280" UUID_SUB="3631782970123668525" TYPE="zfs_member" PARTLABEL="zfs-567b78a5f1f11d82" PARTUUID="b765ac26-ddb6-654b-b748-2e937ba1c8c6"
/dev/sdw1: UUID="ca02e3c6-deb5-47b6-b349-1d9e8bb6156a" TYPE="xfs" PARTUUID="bdd06172-01"
/dev/sdb1: LABEL="vmstorage" UUID="16302881269565465280" UUID_SUB="3994191215598499814" TYPE="zfs_member" PARTLABEL="zfs-cf50b6f6af3297f4" PARTUUID="fb600641-01ad-df4e-9be1-c7023f6fa55f"
/dev/sdu1: UUID="f98cba39-6600-40ee-a141-ec0135a70bd8" TYPE="xfs" PARTUUID="adc2b10e-732a-45e7-9d1f-d99a1c6c004e"
/dev/sdc1: LABEL="datastore" UUID="7743322362316987465" UUID_SUB="18411294586795704829" TYPE="zfs_member" PARTLABEL="zfs-dbc1b87cebfad658" PARTUUID="232aa618-37c1-044f-825c-88f4f6c8abe8"
/dev/sdj1: LABEL="datastore" UUID="7743322362316987465" UUID_SUB="13718147040912579959" TYPE="zfs_member" PARTLABEL="zfs-29ac71d3f7a5db12" PARTUUID="0a8bca1d-c041-0b4a-afaf-6da2106b74f5"
/dev/sdt1: UUID="e5464de7-7848-401b-8327-82dcf7ef1c91" TYPE="xfs" PARTUUID="a39b64b2-1413-4b13-8140-9a0b329ae5ef"
/dev/sdg1: LABEL="datastore" UUID="7743322362316987465" UUID_SUB="6664917712746874192" TYPE="zfs_member" PARTLABEL="zfs-4ff1ee9f5510e801" PARTUUID="c6620fc1-b2cf-8745-a77d-3618f70e918d"
/dev/sdk1: UUID="32f525a5-3690-44ab-b2d1-86c8f68528b2" TYPE="xfs" PARTUUID="03c6803f-01"
/dev/sdd1: LABEL="datastore" UUID="7743322362316987465" UUID_SUB="5243161482392972071" TYPE="zfs_member" PARTLABEL="zfs-548f15a0a9ab2de8" PARTUUID="e38c6de6-fff6-b349-a14c-30f2b73f30d7"
/dev/sdr1: UUID="bce3380b-c6d8-48e1-9a73-067a9c2e4c81" TYPE="xfs" PARTUUID="e330e92a-01"
/dev/sdx1: UUID="889e69cf-e862-4afb-98ec-95d6ee824ff6" TYPE="xfs" PARTUUID="040f2350-535c-46a5-a0ce-43c73b4f9ebe"
/dev/sdo1: UUID="fcd84261-18a8-40de-841f-f009bb9c0da8" TYPE="xfs" PARTUUID="2f935342-ee9c-4951-8641-a9c437848b34"
/dev/sdp1: UUID="a2fc349a-b529-42ce-8e04-2185a27f3572" TYPE="xfs" PARTUUID="7751df81-8679-433e-9399-2f72ff6dcb81"
/dev/sdq1: UUID="308e88b0-286d-431d-99f9-9ba63ed42084" TYPE="xfs" PARTUUID="e3ef1233-a87e-45df-a421-ab86e86852c6"
/dev/sdy1: UUID="5195a2ae-457e-44f3-8f80-ddf71e93eacd" TYPE="xfs" PARTUUID="d7fe6931-5178-412d-a072-7d036b2403d3"
/dev/sdn1: UUID="9c1962c8-d2d2-421d-9ddd-0ac750eda130" TYPE="xfs" PARTUUID="29782d04-2ca7-4731-a7f6-2b8414353473"
/dev/sdm1: UUID="0d789e6d-bc1f-4dc3-8021-7b0fd1c46fab" TYPE="xfs" PARTUUID="aca01ccd-c765-4497-9ac5-4d6da9e32e1e"
/dev/sds1: UUID="5d91a4ea-ff7f-4ec0-8815-51ad2a1a7d8c" TYPE="xfs" PARTUUID="03c68026-01"
/dev/sdv1: UUID="04d5c909-3807-4ab3-9906-9ae4f3a2ca49" TYPE="xfs" PARTUUID="03c68025-01"
/dev/sdz1: UUID="2AF4-6E17" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="1a689b2a-f028-4555-bfd2-a05232b5a903"
/dev/loop4: TYPE="squashfs"
/dev/nvme0n1p9: PARTUUID="cd50e6df-fafb-5e48-8569-e25d65bf6e56"
/dev/nvme2n1p9: PARTUUID="d4f4fb59-4790-1946-b30d-9d83272263e4"
/dev/nvme1n1p9: PARTUUID="9ff64f1b-9faf-4641-a299-1db01b27395d"
/dev/nvme4n1p9: PARTUUID="902a0f4e-8d87-2649-be02-3dad34de51d6"
/dev/sda9: PARTUUID="62da27ba-5d1a-c045-a3f1-abed8632977c"
/dev/sde9: PARTUUID="9184685b-1d22-5e4f-bf56-57096128710b"
/dev/sdh9: PARTUUID="f98d5a2c-7761-a64f-8c17-e0bfa315a1f1"
/dev/sdf9: PARTUUID="6034ba94-6426-f740-bdec-b59f2548d532"
/dev/sdi9: PARTUUID="4d17a6f6-6bb5-8740-82bb-78e5c96e3de8"
/dev/sdb9: PARTUUID="791e00ea-469e-484a-8b66-7c73da5d42e6"
/dev/sdc9: PARTUUID="adb9fac4-83df-6f48-b3c4-b0a3e345969b"
/dev/sdj9: PARTUUID="ce68e481-bc97-4d4e-9003-866697c2e295"
/dev/sdg9: PARTUUID="151669c9-126c-b940-8ddf-88b80d5c2720"
/dev/sdd9: PARTUUID="abe4fb4e-180d-8e4f-9a01-3b6a08b81f3b"
/dev/sdl1: PARTUUID="718c3da6-e902-4253-a3a5-52b8f98bcb7c"
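
Worth noting in the output above: /dev/sdl1 is reported with only a PARTUUID and no TYPE, i.e. blkid does not see any filesystem signature on that partition, which would explain the "wrong fs type, bad superblock" error. A read-only way to double-check (assuming the partition is supposed to hold XFS):

sudo wipefs /dev/sdl1            # lists any filesystem/RAID signatures on the partition; erases nothing without -a
sudo xfs_repair -n /dev/sdl1     # no-modify check; fails straight away if it cannot find an XFS superblock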

 


Further explanation:

 

I am trying to use my extra space for Chia plots.

I have around 150 TB of spare capacity that can be used for something, so I decided to plot with it for the time being. The issue is that the fast plotter I am using (madMAx43v3r/chia-plotter on GitHub) hammers the RAM instead of the NVMe drives. This is the only system where I have enough RAM (256 GB) to do that, but it keeps crashing within Unraid: it will work two or three times and then crash. I figured that if I boot bare-metal Ubuntu, mount each of the 16 TB drives one at a time, and write the plots, it will work efficiently. Once they are all written, I can boot Unraid back up and mount everything as normal.
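
For reference, a rough sketch of the per-disk workflow I have in mind once the mount problem is sorted (the tmpfs size and the commented plotter invocation are assumptions based on the madMAx plotter's README, not something I have verified on this box):

sudo mkdir -p /mnt/data /mnt/ram
sudo mount -t xfs /dev/sdl1 /mnt/data               # one 16 TB drive at a time
sudo mount -t tmpfs -o size=110G tmpfs /mnt/ram     # RAM-backed temp space; size depends on how the plotter's -t/-2 dirs are laid out
# ./chia_plot -r 32 -t /mnt/ram/ -2 /mnt/ram/ -d /mnt/data/ -f <farmer_key> -p <pool_key>
sudo umount /mnt/ram /mnt/data                      # then repeat with the next drive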

