kaffesugen

Everything posted by kaffesugen

  1. I think I'll try a different route first (recover files). Is it enough to run: wipefs -a /dev/disk on both drives to undo what we did?
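     Roughly what I have in mind, as a sketch only (assuming the two pool members are still /dev/nvme0n1 and /dev/nvme1n1, and using wipefs's backup option so anything it erases is saved under $HOME first):

       # save a copy of every signature wipefs erases, then erase them all
       wipefs --all --backup /dev/nvme0n1
       wipefs --all --backup /dev/nvme1n1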
  2. Ok, so I created a new pool on a Fedora laptop with a spare 250GB SSD (Samsung 840 EVO).

     # sudo zpool create -m /mnt/NASferatu NASferatu /dev/disk/by-id/ata-Samsung_SSD_840_EVO_250GB_S1DBNSAF704461V
     Defaulting to 4K blocksize (ashift=12) for '/dev/disk/by-id/ata-Samsung_SSD_840_EVO_250GB_S1DBNSAF704461V'

     # sudo fdisk -l
     The primary GPT table is corrupt, but the backup appears OK, so that will be used.
     Disk /dev/sda: 232,89 GiB, 250059350016 bytes, 488397168 sectors
     Disk model: Samsung SSD 840
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disklabel type: gpt
     Disk identifier: DCE97A33-C48A-934E-B11B-93890B02CD50

     Device       Start       End   Sectors   Size Type
     /dev/sda1     4096    618495    614400   300M EFI System
     /dev/sda2   618496 488392064 487773569 232,6G Linux filesystem
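     The idea is to compare that layout with the broken drives, roughly like this (a sketch; the -part1 suffix assumes zpool create partitioned the reference disk the usual whole-disk way, which I still need to confirm):

       # print the partition table zpool create produced on the reference disk
       sgdisk -p /dev/disk/by-id/ata-Samsung_SSD_840_EVO_250GB_S1DBNSAF704461V
       # dump the ZFS labels so the offsets can be compared with the 970 EVO Plus drives
       zdb -l /dev/disk/by-id/ata-Samsung_SSD_840_EVO_250GB_S1DBNSAF704461V-part1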
  3. Oh, and I scrolled back as far as I could in the terminal. After the upgrade to 6.12.8, but before the reboot, I tried to export the pool (with the array down, and docker + VM disabled), and got this:

     zpool export NASferatu
     cannot unmount '/var/lib/docker/zfs/graph/de169b0bb2462a363fea5bfcfbbd2db3a57329a82685f2b763a93fa0aa90c1e7-init': unmount failed

     Then I rebooted from the Unraid GUI. Too bad I couldn't scroll back further, as I'm sure there was an fdisk -l in there.
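     In hindsight I guess the export should have looked something like this before the reboot (a sketch; -f assumes nothing was still writing to the docker datasets):

       # see which datasets (including docker's) are still mounted under the pool
       zfs list -r -o name,mounted,mountpoint NASferatu
       # force-unmount anything busy and export the pool
       zpool export -f NASferatu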
  4. Ok. Does it have to be a similar NVMe drive, or could it be spinning rust as well? Thanks for all of your help.
  5. # blkid
     /dev/sdb1: UUID="a3b33c96-32a0-4bae-be4e-25c37630f9b1" BLOCK_SIZE="512" TYPE="xfs"
     /dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="B8EE-D462" BLOCK_SIZE="512" TYPE="vfat"
     /dev/loop1: TYPE="squashfs"
     /dev/loop0: TYPE="squashfs"
     /dev/nvme0n1p1: TYPE="zfs_member" PARTUUID="f8e34752-387f-4c0d-a02d-be452cbbce8a"
     /dev/nvme1n1p1: TYPE="zfs_member" PARTUUID="6df2df1d-01"
  6. Ok, it looked like it wanted me to type something for partition 2 also. Anyway...

     zpool import
     no pools available to import
  7. >>> 64
     Created a new DOS disklabel with disk identifier 0x6df2df1d.
     The device contains 'zfs_member' signature and it may be removed by a write command.
     See sfdisk(8) man page and --wipe option for more details.
     Created a new partition 1 of type 'Linux' and of size 931.5 GiB.
     Partition #1 contains a zfs_member signature.
     Do you want to remove the signature? [Y]es/[N]o: N
     /dev/nvme1n1p1 : 64 1953525167 (931.5G) Linux
     /dev/nvme1n1p2:

     The cursor is located after "/dev/nvme1n1p2:" now... as if it's expecting me to type something?
  8. sfdisk /dev/nvme1n1

     Welcome to sfdisk (util-linux 2.38.1).
     Changes will remain in memory only, until you decide to write them.
     Be careful before using the write command.

     Checking that no-one is using this disk right now ... OK

     The device contains 'zfs_member' signature and it may be removed by a write command.
     See sfdisk(8) man page and --wipe option for more details.

     >>> 64
     Created a new DOS disklabel with disk identifier 0xa13d96f0.
     The device contains 'zfs_member' signature and it may be removed by a write command.
     See sfdisk(8) man page and --wipe option for more details.
     Created a new partition 1 of type 'Linux' and of size 931.5 GiB.
     Partition #1 contains a zfs_member signature.
     Do you want to remove the signature? [Y]es/[N]o: ^C
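     For the record, the same thing could be fed to sfdisk non-interactively instead of answering prompts (a sketch only; whether recreating a DOS label here is even the right move is exactly what I'm unsure about):

       # one partition starting at sector 64, rest of the disk, type Linux;
       # --wipe never / --wipe-partitions never leave the zfs_member signature alone
       echo '64,,L' | sfdisk --wipe never --wipe-partitions never /dev/nvme1n1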
  9. So, I was reading this thread with a similar issue (I guess). I have one other NVMe drive (in a laptop) of the same size (except it's a Samsung SSD 970 EVO 1TB, not a Samsung SSD 970 EVO Plus 1TB).

     Disk /dev/nvme0n1: 931,51 GiB, 1000204886016 bytes, 1953525168 sectors
     Disk model: Samsung SSD 970 EVO 1TB
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disklabel type: gpt

     Would that be worth a try, create a pool on that drive and do what he did?
  10. root@Nexus:~# sgdisk -o -a 8 -n 1:1M:0 /dev/nvme0n1
      Creating new GPT entries in memory.
      The operation has completed successfully.

      root@Nexus:~# zpool import
      no pools available to import

      fdisk -l
      Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
      Disk model: Samsung SSD 970 EVO Plus 1TB
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: gpt
      Disk identifier: 7A95D8A5-C639-4474-8617-E321943B417B

      Device          Start        End    Sectors   Size Type
      /dev/nvme0n1p1   2048 1953525134 1953523087 931.5G Linux filesystem
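      Before going further I want to check whether the recreated partition actually exposes any ZFS labels (a sketch; it assumes the labels survived at their usual offsets):

        # dump whatever ZFS labels are visible on the new partition
        zdb -l /dev/nvme0n1p1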
  11. At the time I compared two different pools, so I had them saved. Sorry, yes, a 2-device mirror.
  12. Not really. Looking in some old docs, I found this:

      zpool create -m /mnt/NASferatu NASferatu mirror disk-id disk-id
  13. Hi! I've been using the ZFS plugin for at least a year with no problems. Yesterday I upgraded to 6.12.6 and tried to import my pool (I exported it first) by creating a new pool in Unraid and adding the disks. It came up as "Unmountable: Unsupported or no file system". I then imported the pool manually in the terminal, and everything worked as before. Today I saw there was an update to 6.12.8 with a ZFS fix, adding the -f parameter... great, I thought, thinking that would solve my problems... and upgraded. Now there is no pool.

      zpool import
      no pools available to import

      fdisk -l
      Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
      Disk model: Samsung SSD 970 EVO Plus 1TB
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes

      Disk /dev/nvme1n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
      Disk model: Samsung SSD 970 EVO Plus 1TB
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes

      Here are the properties:

      NAME       PROPERTY              VALUE                  SOURCE
      NASferatu  type                  filesystem             -
      NASferatu  creation              Sat Mar 26 15:23 2022  -
      NASferatu  used                  784G                   -
      NASferatu  available             116G                   -
      NASferatu  referenced            226M                   -
      NASferatu  compressratio         1.27x                  -
      NASferatu  mounted               yes                    -
      NASferatu  quota                 none                   default
      NASferatu  reservation           none                   default
      NASferatu  recordsize            128K                   default
      NASferatu  mountpoint            /mnt/NASferatu         local
      NASferatu  sharenfs              off                    default
      NASferatu  checksum              on                     default
      NASferatu  compression           lz4                    local
      NASferatu  atime                 off                    local
      NASferatu  devices               on                     default
      NASferatu  exec                  on                     default
      NASferatu  setuid                on                     default
      NASferatu  readonly              off                    default
      NASferatu  zoned                 off                    default
      NASferatu  snapdir               hidden                 default
      NASferatu  aclmode               passthrough            local
      NASferatu  aclinherit            passthrough            local
      NASferatu  createtxg             1                      -
      NASferatu  canmount              on                     default
      NASferatu  xattr                 on                     default
      NASferatu  copies                1                      default
      NASferatu  version               5                      -
      NASferatu  utf8only              off                    -
      NASferatu  normalization         none                   -
      NASferatu  casesensitivity       sensitive              -
      NASferatu  vscan                 off                    default
      NASferatu  nbmand                off                    default
      NASferatu  sharesmb              off                    default
      NASferatu  refquota              none                   default
      NASferatu  refreservation        none                   default
      NASferatu  guid                  569506525292153468     -
      NASferatu  primarycache          all                    default
      NASferatu  secondarycache        all                    default
      NASferatu  usedbysnapshots       0B                     -
      NASferatu  usedbydataset         226M                   -
      NASferatu  usedbychildren        783G                   -
      NASferatu  usedbyrefreservation  0B                     -
      NASferatu  logbias               latency                default
      NASferatu  objsetid              54                     -
      NASferatu  dedup                 off                    default
      NASferatu  mlslabel              none                   default
      NASferatu  sync                  standard               default
      NASferatu  dnodesize             legacy                 default
      NASferatu  refcompressratio      1.00x                  -
      NASferatu  written               226M                   -
      NASferatu  logicalused           922G                   -
      NASferatu  logicalreferenced     226M                   -
      NASferatu  volmode               default                default
      NASferatu  filesystem_limit      none                   default
      NASferatu  snapshot_limit        none                   default
      NASferatu  filesystem_count      none                   default
      NASferatu  snapshot_count        none                   default
      NASferatu  snapdev               hidden                 default
      NASferatu  acltype               off                    default
      NASferatu  context               none                   default
      NASferatu  fscontext             none                   default
      NASferatu  defcontext            none                   default
      NASferatu  rootcontext           none                   default
      NASferatu  relatime              off                    default
      NASferatu  redundant_metadata    all                    default
      NASferatu  overlay               on                     default
      NASferatu  encryption            off                    default
      NASferatu  keylocation           none                   default
      NASferatu  keyformat             none                   default
      NASferatu  pbkdf2iters           0                      default
      NASferatu  special_small_blocks  0                      default

      It's a striped mirror. Help?
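      In case it matters, this is roughly how I plan to keep searching for the pool manually (a sketch; the read-only import is just a precaution in case it does show up):

        # scan by-id paths explicitly instead of the default device scan
        zpool import -d /dev/disk/by-id
        # if the pool is found, import it read-only first
        zpool import -o readonly=on NASferatu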
  14. So, how can I connect to my server from the PS4? I'm only setting this up for my kids locally on the LAN. We don't have PS Plus, only PS Now (which Sony will convert to PS Plus soon). I don't have a LAN section in the Friends tab on the PS4.
  15. So it's better to start a new trial and copy over my Docker image and appdata folder? If so, what should I copy over from the /boot/config folder? I mean Docker templates obviously, and perhaps some server settings? (VMs I can recreate.)
  16. What are the best options for me here? This is going to be my second Unraid server, so I'm currently testing my setup. I started with an old USB stick I had lying around, and now I've bought new disks and USB sticks. Can I transfer what I have to the new USB stick, blacklist the old one and continue my trial?
  17. Ok. Yes, I was planning on having that solution.
  18. Hi. I just bought 2 NVMe drives (Samsung 970 EVO Plus, 1TB each), and was wondering what's the best setup. On this server I plan on having a few containers (Plex, Emby, Jellyfin, xteve, Roon server, Home Assistant, ESPHome etc.), and a few VMs. All my media is located on a different server. I'm not planning on using the array, and will do backups myself. What is the best solution (performance-wise)?

      1. 2 disks in a btrfs cache pool (RAID1 or RAID0) for everything.
      2. 1 disk in a btrfs cache pool for appdata etc., 1 disk as an unassigned device for VMs.
      3. A ZFS mirror, roughly as sketched below.

      What do you think?
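      Roughly what I have in mind for option 3, as a sketch only (the pool name, mountpoint and by-id paths are placeholders):

        # two-device ZFS mirror with 4K sectors, mounted where appdata/VMs would live
        zpool create -o ashift=12 -m /mnt/nvmepool nvmepool mirror \
            /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2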
  19. I'm about to install my second Unraid server, but can't download the ISO since the site is down. Does anyone have a direct link I can use instead?
  20. Hi! During the build of one hard disk, the green checkmark next to "Build up-to-date" appeared, but the build process is still ongoing. Should I click the cancel button?