Problems with ZFS and the Array


EricM

Hi folks,

 

I currently have a very, very strange problem and can't figure out what is going on here.

My array consists of 5 HDDs and one parity disk, which hold all of my media. (The drives are shucked 8TB WD Elements/MyBooks.)

In addition, I have 2 older, used 3TB WD Reds, each of which I put into one of the WD Elements enclosures (there is unfortunately no more room in the server) and connected to the server via USB 3.0. They run a ZFS file system with the two drives mirrored. Until yesterday this all worked great; the ZFS pool is mounted under /mnt/user/zfs in its own share.

 

Scenario 1: Let's first assume the server is shut down and I start it in this configuration. Then I get the following problem:

The server boots normally and everything looks fine on the Main tab, but when I switch to the Shares tab, all of my shares are gone except for the ZFS share. Only the individual drives are listed under Disk Shares. The Docker tab shows the error: Docker Service failed to start, and the VMS tab shows: Libvirt Service failed to start. That is only logical, since the domains share and the appdata share cannot be found.

I can still access the files in the ZFS share from Windows Explorer just fine, but obviously not the other shares. The zpool status command also shows everything as expected.

If I then try to stop the array after this start, nothing happens at all; I keep getting the message: Array Stopping•Retry unmounting user share(s)...

But as soon as I unplug the two 3TB drives in their enclosures from the USB ports, the array can be stopped.

 

 

Scenario 2: I unplug both 3TB drives before starting the server.

In this case the server again boots normally, and on the Main tab the two 3TB drives are shown as missing. All shares except the ZFS share are available and working, and the Docker service and the VMs also run without problems. However, when I run zpool status, I get the message: no pools available. The ZFS share is also shown as empty and suddenly no longer lives on the two 3TB drives but on the array.

If I then plug both 3TB drives back in, nothing changes. So I leave them unplugged, since that way I can stop the array. If the array is stopped and I plug both drives back in and then start the array, the configuration also stays the same, i.e. everything works except the ZFS pool.

 

 

I'm at my wits' end again and would really like to avoid endlessly shoveling data back and forth. The only other thing I can think of is to connect an additional external drive, start the server with ZFS, copy everything from the two 3TB drives to the external one, destroy the ZFS pool, and then restart the server without the two 3TB drives connected. Then set up the ZFS pool again and copy everything back from the external drive to the 3TB drives.

I hope you know what the problem is here and can help me with it; the worst thing that could happen is that all the data is gone.

 

Thank you!!!

              

 

3 minutes ago, glennv said:

Don't mount your ZFS under /mnt/user, as that is the array. Depending on which one mounts first, you will only see the array or the ZFS stuff, or get other weird effects from this conflicting setup.
Mount it for example under /mnt/disks/zfs instead.

I did it like this before, but then I had no idea how to use the ZFS pool, because I want my ZFS pool to show up in Windows Explorer like the other Unraid shares. So I came up with the idea of creating a share for ZFS and then mounting the pool in this share, so everything is visible in Windows.

19 minutes ago, EricM said:

I did it like this before, but then I had no idea how to use the ZFS pool, because I want my ZFS pool to show up in Windows Explorer like the other Unraid shares. So I came up with the idea of creating a share for ZFS and then mounting the pool in this share, so everything is visible in Windows.

Yeah, I get why you thought that would be smart, but unfortunately in this case that is not the way to do it, and it is pretty dangerous. Check the main ZFS thread, where it is explained how to share your ZFS datasets over SMB. Basically you have to use smb-extras.

 

P.S. Here is an example of how I shared 2 of my ZFS pools so they are available on my Mac and Windows clients:

[Screenshot: smb-extras entries exporting two ZFS pools over SMB]
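For reference, an smb-extras entry for a pool mounted at /mnt/disks/zfs might look roughly like the following. This is only a minimal sketch, not the exact settings from the screenshot; the share name, path, and user are placeholders to adapt:

[zfs]
path = /mnt/disks/zfs
browseable = yes
writeable = yes
valid users = youruser
create mask = 0775
directory mask = 0775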

1 minute ago, glennv said:

Yeah, I get why you thought that would be smart, but unfortunately in this case that is not the way to do it, and it is pretty dangerous. Check the main ZFS thread, where it is explained how to share your ZFS datasets over SMB. Basically you have to use smb-extras.

Do you have a link for this thread?

4 minutes ago, glennv said:

Saving the data is always good, but you don't have to delete it. Just change the mountpoint to anywhere outside of the array. Then restart the array (or reboot) and you will most likely be fine again, with all your data intact.

How do I change the mount point? Sorry, I don't know any commands; I just did it like spaceinvaderone, except for the mount point.

Just now, EricM said:

How do I change the mount point? Sorry, I don't know any commands; I just did it like spaceinvaderone, except for the mount point.

If your ZFS pool is named "zfs", then you can do that via an export and import of the pool, as the "altroot" parameter can only be set at creation or import time.

For example, to set the mountpoint of the pool named "zfs" to "/mnt/disks/zfs" (so the "altroot" is /mnt/disks), you would use:

zpool export zfs
zpool import -R /mnt/disks zfs

 

Any datasets created underneath will inherit this mountpoint.
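To double-check where the pool ended up after the import, something like this should work (pool name "zfs" assumed, as above):

zpool get altroot zfs         # altroot should now show /mnt/disks
zfs get mountpoint zfs        # effective mountpoint of the top-level dataset
df -h | grep zfs              # the pool should be listed under /mnt/disks/zfs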

2 minutes ago, glennv said:

If your ZFS pool is named "zfs", then you can do that via an export and import of the pool, as the "altroot" parameter can only be set at creation or import time.

For example, to set the mountpoint of the pool named "zfs" to "/mnt/disks/zfs" (so the "altroot" is /mnt/disks), you would use:

zpool export zfs
zpool import -R /mnt/disks zfs

 

Any datasets created underneath will inherit this mountpoint.

So just those 2 commands? And afterwards I can delete the ZFS share I made and look for a solution to export the new mountpoint via SMB.

 

So in my case I would write: zpool import -R /mnt/disks/zfs zfs (so the mount point is in an extra folder)?

 

 

Just now, EricM said:

So just those 2 commands? And afterwards I can delete the ZFS share I made and look for a solution to export the new mountpoint via SMB.

 

 

 

Yes, after these 2 commands you can type "df" and you should see your ZFS pool mounted under the new mountpoint. /mnt/user/zfs should then be empty (double-check!!) and you can then remove that directory/share.

Then I would restart the array and check whether all your other shares come back now that ZFS is not blocking/accessing it anymore.

 

So step by step.
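Put together, the sequence sketched above would look roughly like this (a sketch assuming the pool really is named "zfs" and nothing is still using the old path; double-check before removing anything):

zpool export zfs
zpool import -R /mnt/disks zfs
df -h | grep zfs              # the pool should now appear under /mnt/disks
ls -la /mnt/user/zfs          # must be empty before the next step
rmdir /mnt/user/zfs           # rmdir refuses to delete a non-empty directory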

1 minute ago, glennv said:

 

Yes, after these 2 commands you can type "df" and you should see your ZFS pool mounted under the new mountpoint. /mnt/user/zfs should then be empty (double-check!!) and you can then remove that directory/share.

Then I would restart the array and check whether all your other shares come back now that ZFS is not blocking/accessing it anymore.

 

So step by step.

OK, thank you very much, I will try this now :)

3 minutes ago, glennv said:

 

Yes, after these 2 commands you can type "df" and you should see your ZFS pool mounted under the new mountpoint. /mnt/user/zfs should then be empty (double-check!!) and you can then remove that directory/share.

Then I would restart the array and check whether all your other shares come back now that ZFS is not blocking/accessing it anymore.

 

So step by step.

I tried to export the zpool, but I get this error: cannot unmount '/mnt/user/ZFS': unmount failed

8 minutes ago, EricM said:

I tried to export the zpool, but I get this error: cannot unmount '/mnt/user/ZFS': unmount failed

 

That can have several causes, for example the pool is being used/busy. If you don't know how to fix that, you can do what you did before, i.e. shut down, disconnect the 2 ZFS drives, and start the system. Then, if you start the array, as you mentioned you will have all your normal shares back and the ZFS pool is not imported (as all its drives are missing).

 

Run the export command just to be sure the pool is gone.

Then you can plug in the drives, wait until they are detected, and just run the second command "zpool import -R /mnt/disks zfs".

 

You can also do this export/import before the array is started, as ZFS is already active immediately after boot. That way you can check and make sure the ZFS pool is mounted correctly before you start the array.
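If you would rather find out what is keeping the old mountpoint busy instead, these standard tools can help, if they are available on your system (path taken from the error message above):

fuser -vm /mnt/user/ZFS       # list processes using the mountpoint
lsof +D /mnt/user/ZFS         # alternative view; can be slow on large directory trees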

 

Just now, glennv said:

 

That can have several causes, for example the pool is being used/busy. If you don't know how to fix that, you can do what you did before, i.e. shut down, disconnect the 2 ZFS drives, and start the array. Then, as you mentioned, you will have all your normal shares back and the ZFS pool is not imported (as all its drives are missing).

Run the export command just to be sure the pool is gone.

Then you can plug in the drives, wait until they are detected, and just run the second command "zpool import -R /mnt/disks zfs".

 

How can I export a zpool if no zpool is recognized? When I unplug these two drives, zpool status doesn't show any pools available. But I will try that too. I just tried to restart the server, but that didn't work either, same error.

Just now, glennv said:

OK, so when you reboot without the drives connected, you check with zpool status; there should be no pool (if there is a pool, run the export command).

Then plug in the drives. Wait until they are recognised (it should not auto-import the pool). Then run the import command I gave you, and check again afterwards with zpool status.

OK, so a zpool exports automatically when the server is shut down and imports when it is started? So basically, if you shut down the server, you could just unplug those two drives, connect them for example to a PC, and bring the zpool up there with the import?

Just now, EricM said:

OK, so a zpool exports automatically when the server is shut down and imports when it is started? So basically, if you shut down the server, you could just unplug those two drives, connect them for example to a PC, and bring the zpool up there with the import?

Yes ;-) 

That is the nice thing about ZFS. It's very portable.
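For example, on another machine with ZFS installed, roughly the following would find and bring up the pool (the -f is only needed if the pool was not cleanly exported first):

zpool import                  # scan attached disks for importable pools
zpool import zfs              # import the pool by name
zpool import -f zfs           # force the import if it was not exported cleanly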

3 minutes ago, EricM said:

Ah OK, I thought it was a fixed mountpoint and that you have to export it manually. That sounds really cool, so I will try this now.

 

Think about it: everything in Unraid OS is already dynamic, as it is built up in memory. During boot, ZFS is installed from scratch every time. Then (if configured to do so) it will import all available pools. In the pool parameters the mountpoint (altroot) is set and used/created during import.

 

With "zpool get all <poolname>" you will see all current parameters for the pool.

