orhaN_utanG

Members
  • Posts: 17
  • Joined
  • Last visited
orhaN_utanG's Achievements

Noob (1/14)

13 Reputation

  1. Hello there, I have been thinking about this for some time, and I was hoping to get some ideas from you. I have 4 x 14 TB drives in a raidz1 pool. I would like to add another 4 x 14 TB drives to my machine and switch to raidz2. As I understand it, this is what I would have to do: 1. Create a vdev with the new drives (raidz2). 2. Move the data over to the new vdev. 3. Destroy the existing one and recreate it as raidz2. 4. Move the data back. Is that the approach you would choose, too? Am I missing or misunderstanding something here?
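For what it's worth, the four steps above can be sketched on the command line roughly like this. This is only an illustration: the pool names (`tank`, `tank2`) and device names (`sda`–`sdh`) are placeholders, and the destroy step is irreversible, so double-check everything against your own setup first.

```shell
# 1. Create a new raidz2 pool from the four new drives
zpool create tank2 raidz2 sde sdf sdg sdh

# 2. Copy the data over with a recursive snapshot + send/receive
#    (preserves datasets, snapshots and properties, unlike rsync/cp)
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F tank2

# 3. Destroy the old pool and recreate it as raidz2
zpool destroy tank
zpool create tank raidz2 sda sdb sdc sdd

# 4. Move the data back the same way
zfs snapshot -r tank2@return
zfs send -R tank2@return | zfs receive -F tank
```

Note that during step 3 the data exists only on the new pool, so having a verified offsite backup before starting is strongly advisable.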
  2. Hello there, I am receiving this error from Fix Common Problems: disk1 (Samsung_Flash_Drive_0309722050004991-0:0) has file system errors (). The disk the error is referencing is just a "dumb" array thumb drive, needed so I can have one big ZFS pool. I'm just wondering whether this is something I can simply ignore, since I don't know all the possible implications. Thanks for any insight in advance.
  3. Hello everyone, I had used Unraid before the official ZFS support and then upgraded to 6.12 by following the guide: My Unraid setup is as follows: 1x USB stick = operating system, 1x USB stick = array device, 4x hard drives as a single pool (ZFS) with the name "Zfs". /mnt/zfs/ leads to the same path as /mnt/user/. It's no biggie, other than all the Docker containers using /mnt/user as the default. Why is that the case? Am I doing something wrong? BR, orhaN
  4. Good catch, it is of course "user" 🙂 So in other words: just keep living with it and ignore it? 🙂
  5. Hello everyone, I'm a relatively new Unraid user and not yet fully familiar with the best practices. It started out as a test system for me and has grown organically since, which is why setting it up from scratch would be an enormous amount of work. I had used Unraid before the official ZFS support and now want to move to the "standard" setup as far as possible, because I noticed that one or two things are different on my system compared to others. My Unraid is set up as follows: 1x USB stick = operating system, 1x USB stick = array device, 4x hard drives as a single pool (ZFS) with the name "Zfs". Point 1: /mnt/zfs/ leads to the same path as /mnt/usr/. Is that intended? Would it be better to use the path /mnt/usr, e.g. for Docker? Or is the user path generally the one you are supposed to use? Point 2: "Fix Common Problems" reports: The linked resources didn't really help me, though, and it doesn't show up under "Shares" either. Besides, I never created a "user" share myself. I also get the following error: Here too I don't understand the problem; I never had a "Cache" share, and the additional information doesn't really help me either. Point 3: I read that Docker's default path has changed. My current paths: Docker directory: /mnt/zfs/system/docker/ Default appdata storage location: /mnt/zfs/system/appdata/ How does a "move" work: do I change the path in the settings and then move the existing folders to the new location, or does it do that automatically? Sorry if the questions seem a bit jumbled, I'm trying to sort things out and don't quite know where to start. Best regards
  6. Hello guys and gals, there was an exciting announcement from the OpenZFS team: they have announced that this pull request, https://github.com/openzfs/zfs/pull/12225, will finally be implemented. The pull request has been around for over two years, and iXsystems is sponsoring the implementation and further development. Why this is exciting: this feature will allow single disks to be added to an existing raidz vdev. With the current state of ZFS, you'd have to add a whole additional vdev instead, which typically means doubling the number of disks. The expansion feature will come in handy for smaller pools, and especially for many of us who like Unraid for its flexibility but still want the most reliable filesystem. What do you guys think? Anyone else as excited as I am? You can follow the status here: https://github.com/openzfs/zfs/pull/15022
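Based on the discussion in that pull request, the feature is expected to reuse the existing `zpool attach` command for raidz vdevs, roughly like this (pool, vdev, and disk names here are hypothetical, and the exact syntax may still change before release):

```shell
# Grow an existing 4-disk raidz1 vdev into a 5-disk raidz1
# by attaching one new disk to the vdev (not to the pool root)
zpool attach tank raidz1-0 sde
```

The redundancy level stays the same (raidz1 remains raidz1); only the vdev gets wider, with the extra disk's capacity becoming usable after the expansion completes.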
  7. Hello everyone, I have been trying for quite some time to pass through my AMD iGPU to an Ubuntu VM. Ubuntu recognizes it, and I managed to install Nomachine, disable VNC, and use only the iGPU at some point. Unfortunately, it seems to depend on luck whether it works or not, as the VM failed to boot after a restart. I wanted to start from scratch and check if the XML is correct. Do you see anything wrong with it? Best regards
  8. I don't know if I'm understanding your problem correctly, but I do it like this: - Edit the Docker container - Enable the Advanced view - Replace http://[IP]:[PORT:7878] with your IP, e.g. http://123.456.789:[PORT:7878] It then redirects you to your Tailscale IP. Or are you talking about replacing the [IP] template variable with an IP address? If that is the case: no idea, I do it manually.
  9. Hello there, what is the common location to point the appdata and docker folders to? I'm using an array with a single USB stick. I created two datasets: zfs/System/appdata and zfs/System/docker. EDIT: I tried it this way and it leads to a crash/unresponsiveness of the GUI. What am I missing here? Is this the normal way to go? Thanks in advance!
  10. Hello all, I am a very inexperienced user and have had Unraid for a few days (not set up yet). After much reading, and given the importance of my data, I want to use ZFS. Of course I realize that ZFS is not a backup; I always do offsite backups as well. But data integrity is crucial; I also have ECC RAM for that reason, for example. I could wait for 6.12, but I would like to start already and hope to learn something in the process. My situation is that I have about 8 TB of important data and 10 TB of movies etc. How would I set up Unraid to migrate to the "official" ZFS path in the short and long term with the upcoming changes (ZFS support), or to use ZFS in the best possible way? In this YouTube video: I have the following disks: 4x 14 TB HDD, 2x 1 TB NVMe SSD. If I go by the video, the SSDs would be useless, right? For this reason, I thought of the following: I create Unraid normally with a parity disk and create a 10 TB ZFS pool for the VMs and the important data. For the rest I just use BTRFS. I hope it becomes reasonably clear what my question is: I actually want to use ZFS as much as possible, but my understanding is that I can't (yet) use a ZFS array with all my disks, not even with 6.12, but only sometime in the future. The alternative would be a complete ZFS pool, but then I wouldn't be able to use my SSDs. I'm just looking for the best middle ground and am hoping for suggestions. Thanks for reading this far! Kind regards,
  11. Hello guys, noob again. I'm currently using TrueNAS SCALE with 4 HDDs. I guess there is no way to switch to Unraid without formatting the drives and starting over, correct? Just want to double-check before I begin to copy/save 20 TB of data.
  12. Sorry, just so I understand (absolute newbie here): if ZFS is not supported for arrays, there are no advantages over the status quo, correct? Apart from the fact that it is "officially" supported. You wouldn't benefit from the higher speeds and would only get the ZFS advantages (parity, snapshots, data integrity, etc.) within the created pool, am I understanding that correctly? If that is the case, why are people excited about it? Isn't that what you could already do with the plugin? Again: not judging, genuinely trying to understand it. Would be happy about an ELI5 😄
  13. As someone who has never used Unraid before: would that mean a single USB stick is sufficient for both the installation files and the array data device, or would I need two for that setup? Is it possible to do an offsite backup of the array data device from the Unraid GUI, so I can just restore it onto a new USB stick in case of failure?