
orhaN_utanG

Members
  • Posts

    17
  • Joined

  • Last visited

Posts posted by orhaN_utanG

  1. Hello there,

     

    I have been thinking about this for some time, and I was hoping to get some ideas from you.

    I have 4 x 14 TB drives in a raidz1 pool.

     

    I would like to add 4 x 14 TB to my machine and switch to raidz2.

     

    As far as I understand, this is what I would have to do (a rough command sketch follows the list):

     

    1. Create a vdev with the new drives (raidz2)

    2. Move the data over to the new vdev

    3. Destroy the existing one and recreate it as raidz2

    4. Move the data back
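
    In commands, I imagine it would look roughly like this (pool and device names are just placeholders, on Unraid the pools themselves would probably be created through the GUI rather than with zpool create, and I would only destroy the old pool after verifying the copy):

    # 1. create the new raidz2 pool from the four new drives
    zpool create newpool raidz2 /dev/disk/by-id/ata-NEW1 /dev/disk/by-id/ata-NEW2 /dev/disk/by-id/ata-NEW3 /dev/disk/by-id/ata-NEW4

    # 2. copy everything over with a recursive snapshot and send/receive
    zfs snapshot -r oldpool@migrate
    zfs send -R oldpool@migrate | zfs receive -F newpool/migrated

    # 3. destroy the old pool and recreate it as raidz2 from the original four drives
    zpool destroy oldpool
    zpool create oldpool raidz2 /dev/disk/by-id/ata-OLD1 /dev/disk/by-id/ata-OLD2 /dev/disk/by-id/ata-OLD3 /dev/disk/by-id/ata-OLD4

    # 4. send the data back
    zfs snapshot -r newpool/migrated@back
    zfs send -R newpool/migrated@back | zfs receive -F oldpool/data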

     

    Is that the approach that you would choose, too?

     

    Am I missing or not getting something here?

     

     

  2. Hello there, I am receiving this error from Fix Common Problems:
     

    disk1 (Samsung_Flash_Drive_0309722050004991-0:0) has file system errors ()

     

    Quote

    If the disk is XFS / REISERFS, stop the array, restart the array in Maintenance mode, and run the file system checks. If the disk is BTRFS, then see this post. If the disk is listed as being unmountable, and it has data on it, whatever you do, do not hit the format button. Seek assistance HERE

     

    The disk the error is referencing is just a "dumb" array thumb drive, needed only so I can have one big ZFS pool.

     

    I'm just wondering if this is something I can just ignore, since I don't know all the possible implications.
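
    In case it matters, this is what I would try before simply ignoring it, assuming the stick is formatted XFS (I believe disk1 maps to /dev/md1p1 on current Unraid releases and /dev/md1 on older ones - please correct me if that is wrong):

    # with the array started in Maintenance mode, run a read-only check first
    xfs_repair -n /dev/md1p1    # -n = no modify, only report problems
    # only if it reports repairable errors, run it again without -n
    xfs_repair /dev/md1p1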

     

    Thanks for any insight in advance.

     

     

     

     


  3. Hello everyone,

     

    I had used Unraid before the official support for ZFS and then upgraded to 6.12 by following the guide:

     

    My Unraid setup is as follows:

     

    1x USB Stick = Operating System
    1x USB Stick = Array Device

    4x Hard drives as a single pool (ZFS) with the name 'Zfs'

     

    /mnt/zfs/ leads to the same path as /mnt/user/
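
    For reference, this is roughly how I compared the two paths (nothing fancy, just checking that both show the same top-level folders and sizes):

    ls /mnt/zfs/
    ls /mnt/user/
    df -h /mnt/zfs /mnt/user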

     

    It's no biggie, other than all the Docker containers using /mnt/user by default.

     

    Why is that the case? Am I doing something wrong?

     

    BR,

    orhaN

  4. 1 minute ago, hawihoney said:

     

    Re 1+2: Is the path really usr or user?

     

     

    Good catch, it is of course "user" 🙂

     

    2 minutes ago, hawihoney said:

    Re 3: There are default paths on a fresh installation, but they are not required. The main thing is that the paths you use are applied consistently throughout:

     

     

    https://docs.unraid.net/unraid-os/manual/shares/user-shares/#default-shares

     

    The Unraid defaults are:

    appdata

    system

    domains

    isos

     

    Defaults added by the Unassigned Devices plugin:

    addons

    disks

    remotes

     

     

    So that means: just carry on and ignore it? 🙂

  5. Hello everyone,

     

    I am a relatively new Unraid user and not entirely familiar with the best practices. It started out as just a test system for me and then grew organically, which is why setting it up again from scratch would be an enormous amount of work.

     

    I had been using Unraid with ZFS before the official support and would now like to move to the "standard" setup as far as possible, because I noticed that one or two things are different on my system compared to others.

     

    My Unraid is set up as follows:

     

    1x USB stick = operating system
    1x USB stick = array device

    4x hard drives as a single pool (ZFS) named "Zfs"

     

    Point 1:


    /mnt/zfs/ leads to the same path as /mnt/usr/

     

    Is that how it is supposed to be?
    Would it be better to use the /mnt/usr path for Docker, for example? Or is it generally intended that you use the user path?


    Point 2:


    "Fix Common Problems" meldet

     

    Quote

    Reserved user share user, You have a share named user. Since 6.9.0, this is now a reserved name and cannot be used as a share. You will need to rename this share at the command prompt for the system to work properly. See HERE  More Information

     

    The linked resources did not really help me, though, and it does not show up under "Shares" either. Also, I never created a "user" share myself.

     

    I am also getting the following error:

     

    Quote

    Share system references non existent pool cache - If you have renamed a pool this will have to be adjusted in the share's settings  More Information

     

    Here, too, I do not understand the problem. I never had a "cache" share, and the additional information does not really help me either.

     

    Point 3:

     

    I have read that the default Docker path has changed.

    My current paths:
     

    Docker directory:
    /mnt/zfs/system/docker/
    
    Default appdata storage location:
    /mnt/zfs/system/appdata/

     

    How does a "move" work? Do I change the path in the settings and then move the existing folders to the new location, or does that happen automatically?
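
    What I imagine the move would look like, in case I have to do it by hand (the target path here is purely hypothetical, since I do not know yet what the new default should be):

    # stop the Docker service first (Settings -> Docker -> Enable Docker: No)
    rsync -avh /mnt/zfs/system/docker/  /mnt/zfs/newshare/docker/    # target path is hypothetical
    rsync -avh /mnt/zfs/system/appdata/ /mnt/zfs/newshare/appdata/
    # then point "Docker directory" and "Default appdata storage location"
    # at the new paths and re-enable Docker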

    Sorry if the questions seem a bit jumbled; I am trying to get organized right now and do not quite know where to start.

    Kind regards

     

     

     

     

     

     

     

  6. Hello Guys and Gals,

     

    there was an exciting announcement from the OpenZFS team:

     

     

    They have announced that this pull request:

    https://github.com/openzfs/zfs/pull/12225

    will finally be implemented.

    The pull request has been around for over two years.

     

    iXsystems is sponsoring the implementation/further development.

     

    Why this is exciting:

     

    This feature will allow single disks to be added to an existing ZFS pool (more precisely, to an existing raidz vdev).

    With the current state of ZFS you would have to add a whole additional vdev, which typically means doubling the number of disks. The expansion feature will come in handy for smaller pools, and especially for many of us who like Unraid for its flexibility but still want the most reliable filesystem.
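
    If I read the pull request correctly, expansion would then be a matter of attaching one disk at a time to an existing raidz vdev, roughly like this (names are made up and the exact syntax may still change before release):

    # attach one additional disk to the existing raidz1 vdev of pool "tank"
    zpool attach tank raidz1-0 /dev/disk/by-id/ata-NEWDISK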

     

    What do you guys think?

    Anyone else as excited as me?

     

    You can follow the status here:

     

    https://github.com/openzfs/zfs/pull/15022

     

     

  7. Hello everyone, I have been trying for quite some time to pass through my AMD iGPU to an Ubuntu VM. Ubuntu recognizes it, and I managed to install NoMachine, disable VNC, and use only the iGPU at some point. Unfortunately, it seems to depend on luck whether it works or not, as the VM failed to boot after a restart.

     

    I wanted to start from scratch and check if the XML is correct. Do you see anything wrong with it?

    Best regards

     

    Spoiler
    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm'>
      <name>Ubuntu-Werkstatt</name>
      <uuid>7bc1fc1f-1af2-2113-33ea-793d48f6bd65</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
      </metadata>
      <memory unit='KiB'>4194304</memory>
      <currentMemory unit='KiB'>4194304</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>16</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='0'/>
        <vcpupin vcpu='1' cpuset='8'/>
        <vcpupin vcpu='2' cpuset='1'/>
        <vcpupin vcpu='3' cpuset='9'/>
        <vcpupin vcpu='4' cpuset='2'/>
        <vcpupin vcpu='5' cpuset='10'/>
        <vcpupin vcpu='6' cpuset='3'/>
        <vcpupin vcpu='7' cpuset='11'/>
        <vcpupin vcpu='8' cpuset='4'/>
        <vcpupin vcpu='9' cpuset='12'/>
        <vcpupin vcpu='10' cpuset='5'/>
        <vcpupin vcpu='11' cpuset='13'/>
        <vcpupin vcpu='12' cpuset='6'/>
        <vcpupin vcpu='13' cpuset='14'/>
        <vcpupin vcpu='14' cpuset='7'/>
        <vcpupin vcpu='15' cpuset='15'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/7bc1fc1f-1af2-2113-33ea-793d48f6bd65_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough' check='none' migratable='on'>
        <topology sockets='1' dies='1' cores='8' threads='2'/>
        <cache mode='passthrough'/>
        <feature policy='require' name='topoext'/>
      </cpu>
      <clock offset='utc'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/zfs/system/vms/Ubuntu-Werkstatt/vdisk1.img'/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/zfs/system/vms/isos/ubuntu-22.04.2-desktop-amd64.iso'/>
          <target dev='hda' bus='sata'/>
          <readonly/>
          <boot order='2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </controller>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x10'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x11'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0x12'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0x13'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0x14'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
        </controller>
        <controller type='pci' index='6' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='6' port='0x15'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
        </controller>
        <filesystem type='mount' accessmode='passthrough'>
          <source dir='/mnt/zfs/private/Niko/3D-Druck/'/>
          <target dir='3D-Druck'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </filesystem>
        <interface type='bridge'>
          <mac address='52:54:00:56:d6:71'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='de'>
          <listen type='address' address='0.0.0.0'/>
        </graphics>
        <audio id='1' type='none'/>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
        </video>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
    </domain>
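
    In case it is useful for troubleshooting: this is how I check that the iGPU (the 0000:08:00.0 hostdev in the XML above) is actually bound to vfio-pci before starting the VM. These are standard commands, nothing Unraid-specific:

    # show the device and the kernel driver currently bound to it
    lspci -nnk -s 08:00.0
    # show which IOMMU group it belongs to and what else shares that group
    readlink /sys/bus/pci/devices/0000:08:00.0/iommu_group
    ls /sys/bus/pci/devices/0000:08:00.0/iommu_group/devices/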

     

     

     

  8. I don't know if I'm understanding your problem correctly, but I do it like this:

     

    - Edit the docker container

    - Enable Advanced view

    - Replace the http://[IP]:[PORT:7878] with your IP: http://123.456.789:[PORT:7878]


     

    It then redirects you to your Tailscale IP.

     

    Or are you talking about replacing the [IP] template with an IP address?

     

    If that is the case: No idea, I do it manually.

     

     

  9. Hello there, 

     

    what is the common location to point the appdata and docker folders to?

     

    I'm using an array with a single USB stick. 

    I created two datasets 

     

    zfs/System/appdata

    zfs/System/docker
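
    For reference, I created them roughly like this ("zfs" being the pool name):

    zfs create -p zfs/System/appdata
    zfs create -p zfs/System/docker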

     

    EDIT: I tried it this way and it leads to a crash/unresponsiveness in the GUI. What am I missing here?

     

    Is this the normal way to go? 

     

    Thanks in advance! 

  10. Hello all,

     

    I am a very inexperienced user and have had Unraid for a few days (not set up yet). After much reading, and given the importance of my data, I want to use ZFS. Of course I realize that ZFS is not a backup; I always do offsite backups as well. But data integrity is crucial. For example, I also have ECC RAM for that reason.

     

    I could also wait for 6.12, but I would already like to start and hope to learn something in the process.

     

    The situation for me is that I have about 8 TB of important data and 10 TB with movies etc.

    How would I set up Unraid now so that, in both the short and long term, I can migrate to the "official" ZFS path once the upcoming changes (ZFS support) arrive, or otherwise use ZFS in the best possible way?

     

    In this YouTube video:

     

    I have the following disks: 4x 14TB HDD, 2x 1TB NVMe SSD
    If I go by the video, the SSDs would be useless, right? For this reason, I thought of the following:

     

    I set up Unraid normally with a parity disk and create a 10TB ZFS pool for the VMs and the important data. For the rest I just use BTRFS.

     

    I hope it becomes reasonably clear what my question is: I actually want to use ZFS as much as possible, but my understanding is that I can't (yet) use a ZFS array with all my disks, not even with 6.12, but only sometime in the future. The alternative would be to use a complete ZFS pool, but then I wouldn't be able to use my SSDs.

     

    I'm just looking for the best middle ground and am hoping for suggestions.

     

    Thanks for reading until here!

     

    Kind regards,

     

  11. Hello Guys, noob again.

     

    I'm currently using TrueNAS Scale with 4 HDDs. I guess there is no way to switch to Unraid without formatting the drives and starting over, correct? Just want to double-check before I begin to copy/save 20 TB of data.

     

     

  12. 2 hours ago, Kilrah said:

     

     

     

    Can we get some clarification there? @JorgeB's 2 posts suggest ZFS will be supported for array and pools, @limetech's one suggests pools only... unless that's talking about ZFS pools anywhere instead of Unraid pools.

     

    One way I'd think of ZFS being a worthy addition would be if ZFS was on top of unraid's parity system, and as such you could have one/multiple ZFS pools as part of the array, and do something like having 2,3,4-drive striped ZFS pools (no protection) made of array drives and rely on unraid's parity protection for the drives instead of ZFS's. Less capacity wasted to protection, but with read performance of a striped array. Of course writes would be limited by the parity drive(s), but since only 1/2 drives would be needed for protection of many ZFS drives SSDs could be used to alleviate that.

     

    Still hoping for multiple arrays, which would be even more useful in that case...

     

    Sorry, just so I understand (absolute newbie here):

    If ZFS is not supported for arrays, there are no advantages over the status quo, correct? Apart from the fact that it is "officially" supported. You wouldn't benefit from the higher speeds and would only have the ZFS advantages (parity, snapshots, data integrity etc.) within the created pool, am I understanding correctly?

     

    If that is the case, why are people excited about it? Isn't that what you could already do with the plugin? Again: not judging, genuinely trying to understand it. Would be happy about an ELI5 😄

  13. 7 hours ago, JorgeB said:

    I believe that was mentioned as a possibility for the future, but you can always assign an old flash drive as a single array data device, then have a pool as your main storage. I already do that with some of my servers, where there's no array, just multiple pools.

     

    As someone who has never used Unraid before: would that mean a single USB stick is sufficient for both the installation files and the array data device, or would I need two for that setup?

     

    Is it possible to do an offsite backup of the array data device from the Unraid GUI, so I can just restore it to a new USB stick in case of failure?

  14. On 1/3/2023 at 11:01 PM, gyto6 said:

    Euhhhhh...


     

    You can already use ZFS in Unraid, just without a GUI, instead of letting your server gather dust: Support Forum

     

    Can't say you're wrong, but to understand where I'm coming from: I used TrueNAS and hated it. It's a great storage system but not very consumer-friendly (regarding the whole app system etc.). You can tell who their target audience is. I figured it would be a nice compromise to use Unraid and bought the license in order to use it with the ZFS plugin. But it adds another layer of complexity which I'm not comfortable with. Then I saw the announcement and I was like... well yeah... let's wait.

  15. On 12/31/2022 at 1:50 AM, brandon3055 said:

    Hopefully not too much longer.

    I have been using the promise of ZFS support as an excuse to delay setting up my new server since before 6.11 was released.
    But I can only procrastinate for so much longer...

     

    Same here. Bought a license, server is lying around waiting for official ZFS support :D
