Gladio

Everything posted by Gladio

  1. I have a specific question about ZFS in combination with Unraid. I have several 10 TB disks in my Unraid array. Now I want to buy a 12 TB disk and use 10 TB of it in the original array and 2 TB as a ZFS pool. My idea:
     - zero out the disk with preclear
     - create a 10 TB partition /dev/sdX1 with the same sector count as the other disks and format it with XFS
     - create a 2 TB partition /dev/sdX2 and format it with ZFS
     - add the disk to the array
     - create a ZFS pool from the smaller partitions
     - add one SSD to the array as a cache disk
     - use a second SSD as a cache disk for the ZFS pool
     Is this a possible way to go ahead? And what happens when I add a (preformatted) disk to the array whose whole disk size is not the same as the others? Please give me some ideas.
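     The partitioning step can be rehearsed safely on a sparse image file before touching the real drive. A minimal sketch (sizes scaled down; the image path, partition names, and sizes are placeholders, and on the real disk the first slice would need exactly the sector count of the existing 10 TB data disks):

     ```shell
     truncate -s 120M /tmp/disk12.img    # sparse stand-in for the 12TB drive
     # first slice: would-be 10TB XFS partition for the unRAID array;
     # second slice: remainder of the disk, would-be 2TB slice for the ZFS pool
     sfdisk /tmp/disk12.img <<'EOF'
     label: gpt
     size=100MiB, name=array
     name=zfspool
     EOF
     sfdisk --dump /tmp/disk12.img       # verify both partitions before doing it for real
     ```

     Once the dump looks right, the same sfdisk script (with real sizes) could be pointed at the actual device.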
  2. Thanks - my use case is to export the *.img file via iSCSI to a Proxmox host VM. For emergencies I want to be able to mount the img file directly if my Proxmox VM is broken. I did it via the playground Docker image: installing kpartx, running it as a privileged Docker container, and mounting the /mnt/user/iSCSI directory containing the iSCSI image files.
     # kpartx -a -v volume1-iscsi.img
     # mount /dev/mapper/loop3p1 /mnt/disks/p1
     # mount /dev/mapper/loop3p2 /mnt/disks/p2
     This mounted the partitions on the local Unraid server. I know that I can only mount read-only, because of consistency. Thanks for your help. --- snip OK - it worked; it is very easy to mount the partitions locally.
  3. I have a less specific question about the file format of the img files created by the iSCSI GUI. It seems to be a raw format, so I should be able to mount them with kpartx. But I don't know which Linux distribution is underneath Unraid, so I am not able to port kpartx to Unraid. Can somebody give me a hint how to do this? Greetings and thanks
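     Whether such an img file really is a raw disk image can be checked with the `file` utility, which recognizes the partition table at the start of the image. Demonstrated here against a throwaway image (the path is an example):

     ```shell
     # build a small raw image with one partition, then identify it
     truncate -s 10M /tmp/raw-demo.img
     printf 'label: dos\n,\n' | sfdisk /tmp/raw-demo.img
     # typically reports something like: "DOS/MBR boot sector; partition 1 ..."
     file /tmp/raw-demo.img
     ```

     Running the same `file` check against one of the iscsi-gui img files would confirm whether kpartx can map its partitions.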
  4. I found a workaround to make this semi-permanent: install the plugin "CA User Scripts". There you can implement your own scripts, which will survive a reboot / a rebuild of the USB stick.
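     A minimal sketch of such a user script (the name, path, and output are hypothetical; the plugin stores scripts on the flash drive and re-runs them on whatever schedule you pick, e.g. at array start). Sketched here under /tmp so it can be run anywhere:

     ```shell
     # write a tiny script of the kind you would paste into the User Scripts plugin
     cat > /tmp/my_startup_script.sh <<'EOF'
     #!/bin/bash
     echo "custom startup tasks: begin"
     # ... your commands here, e.g. installing a tool or re-applying a tweak ...
     echo "custom startup tasks: done"
     EOF
     chmod +x /tmp/my_startup_script.sh
     /tmp/my_startup_script.sh
     ```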
  5. Hi all, I did some tests with the iSCSI plugin and the corresponding WebGUI. It is really amazing how easy it is to set up an iSCSI target. Now my question: say I have 3 disks of 10 GB each and want to export a FileIO target bigger than 10 GB within the array, so Unraid would have to split the file over at least two disks. Is that possible? Or should I do the following: export 2 x 7 GB FileIO targets and combine them in Windows into a striped volume (RAID 0)? Any thoughts on whether this makes sense, and on performance and stability? Greetings
  6. In principle the data can be accessed as follows (I assume /dev/sda is your USB stick):
     mkdir /mnt/disks/d1
     mount /dev/sdb1 /mnt/disks/d1
     Do this for every disk individually and get an overview of what is missing. (Note: you can recognize the parity disk by the fact that it has no partition on it, i.e. the mount attempt fails.)
     What matters is whether a data disk or a parity disk is missing, and which disks are the parity disks. Once you have an overview of which disks/data are missing, you can go different ways:
     1: Get the faulty disks running again by swapping cables. (Check in the BIOS what is not working there.)
     1a: In any case, make a backup copy of the /boot/config directory of your USB stick.
     2: Once you have determined which disks are defective and have decided which disks you are willing to give up, you can:
     2a: Delete the array configuration via Tools -> New Config and then reassemble the array. ATTENTION! You must know which disk(s) are the parity disk(s)! The parity disk will be flatly overwritten - if it was a data disk, you are throwing data away unnecessarily. Then simply redefine the array and start it.
     2b: If that does not work: the array configuration is stored in /boot/config/super.dat. Either regenerate the USB stick completely or manipulate the following files: delete /boot/config/super.dat (this is a dangerous and probably irreversible action - copying the file back may not help and may then make the problems worse). If you have tuned the individual disks, that information is stored in /boot/config/disk.cfg - if a disk is now missing, the assignment no longer matches! Best to re-apply the tuning at the very end.
     /boot/config/docker.cfg and /boot/config/domain.cfg describe the configuration of all Docker containers and all VMs. If these files are deleted, it can be very tedious to restore everything. You do not have to take the parity out - just put it back into the parity section during reconfiguration. It will of course then run a parity check - after that everything except the three disks should work again. At the very end I would test the presumably defective disks once more somewhere else. They carry perfectly normal XFS partitions - you should be able to rescue them with any live Ubuntu/Linux system. It would be a shame about the good data 🙂 Good luck
  7. Unfortunately - on my system - this did not work 😞 - maybe a wrong-finger-on-keyboard error. BUT - the following procedure worked. ATTENTION - this procedure can be dangerous, so I will describe it in detail to prevent problems.
     Problem situation: my USB stick was accidentally physically broken and I did not have a backup of it, but I do have the URL of my key file.
     Solution:
     - Delete the USB key and reformat it. If you can back up the /boot/config directory, please do.
     - Recreate the USB stick with the Unraid tools (see documentation / installation procedure).
     - Reboot. After rebooting with the newly created USB stick, all of your disks will be in the new/unassigned state.
     - If you don't know which disks are your parity disks and which are your data disks, do the following: start your array with all disks in maintenance mode, figure out which device is your USB stick - mostly /dev/sda - and try to mount all disks like this:
     mkdir /mnt/d1 /mnt/d2 /mnt/d3 /mnt/d4 /mnt/d5 /mnt/d6
     mount /dev/sdb1 /mnt/d1
     mount /dev/sdc1 /mnt/d2
     mount /dev/sdd1 /mnt/d3
     mount /dev/sde1 /mnt/d4
     mount /dev/sdf1 /mnt/d5
     mount /dev/sdg1 /mnt/d6
     On my system two disks were not mountable (/dev/sdb and /dev/sdg - i.e. device 1 and device 6); a mount error should occur for the parity disks. If any other error appears, you may have a disk failure or a power failure - please recheck.
     - Have a very, very close look whether you can see all of your data under /mnt/d1 ... /mnt/d6. If you found all of your data - very good. If you are still missing a data disk, it may be a power failure - check in the BIOS whether all disks appear.
     - Then stop maintenance mode and assign the disks that mounted as data disks and the disks that did not mount as parity disks. After that you must not have more parity disks than before. ATTENTION! Please recheck three times that you have assigned everything correctly.
     The disks which you assign to parity will be completely erased and are not recoverable afterwards - ATTENTION! If you are not 100.00% sure - back up the data from your data disks to a second location! If you are 100.00% sure that everything is correct, start the array in normal mode. Depending on what you tried before, a parity check will be made or not. While this parity check is running - in summer at least - the disks will get hotter than normal. Please be prepared. Docker containers and VMs live on the data disks and should not have been touched by a broken USB stick.
     What I will do better next time:
     - Make a screenshot of my disk configuration (click on Main), so next time I will know which disks are data and which are parity.
     - Nevertheless, I will check the mountable state of the disks, so I can rule out other errors.
     - I will systematically back up the directory /boot/config from my USB stick (or the whole USB stick).
     Conclusion: I am very experienced with RAID systems (RAID 1 / RAID 5 / RAID 6 / RAID 10 / RAID 16 ...), with which I have had many, many problems. Unraid seems to be a very, very robust system - if you do not destroy data, Unraid will not either, and you have a very high chance of recovering data in an emergency. Thanks to 911 for your help - this brought me down from panic mode to thinking mode.
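     The mount-based identification step above can be scripted as a loop: a read-only mount is attempted for each disk, and disks whose mount fails (parity disks carry no filesystem) are flagged. Device names and mountpoints are examples; check yours first with ls /dev/sd?:

     ```shell
     # probe each candidate array disk with a read-only mount attempt
     for dev in /dev/sdb1 /dev/sdc1 /dev/sdd1; do
       mnt=/tmp/probe-${dev##*/}          # on Unraid you might use /mnt/disks/... instead
       mkdir -p "$mnt"
       if mount -o ro "$dev" "$mnt" 2>/dev/null; then
         echo "$dev mounts -> data disk (inspect your files under $mnt)"
       else
         echo "$dev does not mount -> parity candidate (or a failed/unpowered disk)"
       fi
     done
     ```

     Mounting read-only keeps the probe from touching the data while you sort out which disk is which.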
  8. Thanks for this advice - I identified my parity devices. Now I am in the state that all disks are assigned. If I change my parity disks (in the sections below) to "no device" and put them into the parity section, the answer is: "Too many wrong and/or missing disks" (see error1.png). So I did a Tools -> New Config (new-config.png) - I think I should not preserve anything, so "None". It seems that the new config will not work, also after a reboot (see after-reboot). Thank you very much for your help.
  9. I have the same problem, but I do have the key file. After creating a new USB key I don't remember exactly which of my disks are parity disks and which are data disks. How can I figure out which disks are my data disks? Is it possible to rescue my data from them before creating a new disk array? Thank you in advance
  10. I would like to give someone else access to the Docker tab to let him manage the Docker containers - but root access for everybody - really?
  11. Why shouldn't it be possible to add a secondary IP to an interface? The command would be an ip addr add, and ip addr show will then show it. But I don't know how to set this up so that Unraid will remember it.
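     A sketch of how this could be made persistent, assuming the interface is br0 and using an example address: /boot/config/go on the flash drive runs at every boot, so appending the ip command there re-applies the address. Demonstrated against a stand-in file so nothing on a live system is changed:

     ```shell
     # apply immediately on the live box (interface and address are examples):
     #   ip addr add 192.168.2.10/24 dev br0
     # persist it: the go script runs at every boot, so append the same command
     GO=/tmp/go-demo                  # stand-in; on the live server: /boot/config/go
     echo 'ip addr add 192.168.2.10/24 dev br0' >> "$GO"
     tail -n 1 "$GO"                  # confirm the line landed in the go script
     ```

     The address is then added on every boot; a GUI-managed alternative would be a "CA User Scripts" entry scheduled at array start.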