Xili

Members
  • Posts

    25
  • Joined

  • Last visited

Posts posted by Xili

  1. 19 hours ago, Deen said:

    Just out of curiosity, how did you solve the problem?

    After several attempts to rebuild the failing disks, I had too many read problems on the third one, so it wasn't getting anywhere... I finally started over by creating a new config and putting back the disks I had replaced (the array having changed little or not at all in the meantime); several re-checks later and there it was.

     

  2. Hello
    Sorry for the slow reply; it's been a busy week...

    I did the checks described above and ended up restarting everything, plus moving the important files onto a single disk, so I know whether any given disk is critical or not. I will also have to set up an external backup quickly.
    For the RAM, here are my BIOS screenshots: it is impossible to set the frequency manually; the lowest I can go is 2667, which is what I get with XMP disabled.

     

    signal-2022-11-20-145424_002.jpeg

    signal-2022-11-20-145422_002.jpeg

    signal-2022-11-20-145429_002.jpeg

  3. hello

    The parity check finally finished earlier. It sped up a lot past the 6TB mark, to 180 MB/s, since apart from the parity drives none of the disks are larger than that for the moment.

    Last check completed on Wed 09 Nov 2022 02:43:02 PM CET (today)
    Duration: 1 day, 14 hours, 5 minutes, 54 seconds. Average speed: 102.1 MB/s
    Finding 1465130620 errors


    I have not restarted my containers yet, but a priori nothing prevents it; everything looks OK to me?

    I am posting my diagnostics here. Is there anything alarming in them?

     

    Small question: I have two 16TB disks to add and use for parity, since they are the biggest, so that the 14TB drives can then go into the array. What is the best way to do this without taking risks? I imagine it is better not to swap both at once, but one by one. And during each swap, can I immediately put the 14TB drive into the array in place of another disk, or is it better to do it in two steps?

    tower-diagnostics-20221109-1747.zip
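Before trusting the new 16TB drives with parity, a SMART pass on each one may be worth it. A hedged sketch: save a report per drive with `smartctl -a /dev/sdX > report.txt` (device names are assumptions), then scan it for the attributes that most commonly predict trouble:

```shell
# Scan a saved smartctl report for SMART attributes that commonly predict failure.
# Assumes each report was produced with: smartctl -a /dev/sdX > report.txt
check_smart_report() {
  grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable' "$1"
}

# Any non-zero RAW_VALUE on these lines is a reason to distrust the disk:
# check_smart_report sdb-report.txt
```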

  4. Hello

    The recovery of my important files completed without any particular trouble.
    I launched the parity check; unsurprisingly there are a lot of parity errors, I suppose because of the checks started while the read problems were occurring. I imagine the rewrites to the disk are what make the check slower than normal: 80 MB/s versus the usual 120.

  5. OK, so I started the array with that config. Before restarting anything, I am retrieving what I don't want to lose. It is quietly copying over the network to another machine at 90 MB/s; I'm hoping there are no errors or corrupted files.

    For the rest, do you think I can resume using it as before, or do I need to run some checks first?
    Running a parity check seems necessary, I think.

    Thank you for this first step
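To confirm a network copy like this arrived intact, checksums on both sides can be compared. A small sketch (directory paths are placeholders):

```shell
# Build a sorted checksum manifest for every file under a directory.
make_manifest() {
  ( cd "$1" && find . -type f -exec md5sum {} + | sort -k 2 )
}

# If the two manifests are identical, the copy matches bit for bit:
# diff <(make_manifest /mnt/user/important) <(make_manifest /backup/important)
```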

  6. OK, strange, because whether I tick the box or not, this appears on my parity disks: "All existing data on this device will be OVERWRITTEN when array is Started". Given what you tell me, it might be safer to start with the original parity disks, and then replace them one by one.

     

    I admit I am being a bit cautious; I would like to take as little risk as possible.

  7. 5 hours ago, JorgeB said:

    No, there's no need to start in maintenance mode, after doing a new config there will be a checkbox with "parity is already valid" next to the array start button, just check that before first array start.

    OK. Would it be a problem if I replace my two 14TB parity disks with 16TB ones, so that I can add the 14TB drives to the array very soon? Or is it better to start with the original parity disks first?

  8. 2 hours ago, JorgeB said:

    Correct, if you have all 3 replaced disks you can do a new config with them, since parity should be mostly valid you can check "parity is already valid" before array start but should then run a correcting check.

    When you say "before starting the array", does that mean I start in maintenance mode with "parity is already valid" checked, and then run a parity check?

  9. I initially swapped drives 2 (2TB), 5 (3TB) and 6 (3TB) for 6TB ones. The replaced drives were still functional, but had been in service for 7 years, so I wanted to retire them as a precaution... funny to write that now.

    Rebuilds of 2 and 5 OK; then, during the rebuild of 6, an error, and disk 2 failed.
    The three original disks, 2 (2TB), 5 (3TB) and 6 (3TB), have not been touched; they are set aside, and (normally) I know which is which.

    During (or after) the disk 6 rebuild error, I got a disk 2 read error message; that disk seems to be what is destabilizing the array and, I guess, creating latency with the controller.

    Since the disk swap, not much has been done on the array. Even though the containers kept running during the rebuilds, everything lives in /appdata on the NVMe cache. Only my Nextcloud data folder may have changed (I'm not sure).

    The only big operations launched since then were: a parity check, which I cancelled because it was very, very slow;
    attempts to rebuild the new disk, with many errors found;
    and, at the start, launching the array to try to recover some files, but that was read-only.

    I guess you're going to suggest I create a new configuration with the disks back in their old slots?

  10. No, I currently have no other card; I've had this one for several years without problems (or I never noticed any). It used to run my RAID 6, but since my switch to Unraid it just presents the disks individually. I had also put it in a particular mode; maybe that's the problem? I'm not sure I understood the issue: what's wrong with the card, and what are the errors?

  11. On 11/5/2022 at 10:56 AM, JorgeB said:

    Still constant problems with the controller, make sure it's well seated or try a different slot if available, at least for now also disable IOMMU since it appears to be causing issues, then post new diags after array start.

    OK,

    I disabled IOMMU and removed everything that was on the PCIe slots. Only the Adaptec card remains (moved to a different port, and it is firmly seated), plus the NVMe cache.

     

    I did not dare start the array (even in maintenance mode) for fear of making things worse, because I have:

    - disk 1, which appears disabled by Unraid

    - disk 5, which seems OK but is not formatted, and I assume empty

    - the new disk 6, tested OK, which also appears disabled by Unraid.

     

    I await your instructions.

    Thank you for your help

  12. Well

    Unraid attempted a rebuild with the two 8TB drives after the array started. Trouble: one of the two disks went into error; the other ran to the end (a strangely fast rebuild, with a lot of errors) but does not seem usable, showing "Unmountable", surely because it was not forced at the beginning of the process. And during all that, disk 1 was marked in error, even though it is a priori fine.

     

    It doesn't look great to me.

     

    2022-11-05 00_56_05-Window.png

    2022-11-05 00_55_36-Window.png

    tower-diagnostics-20221105-0039.zip

  13. 51 minutes ago, JorgeB said:

    Adaptec controller crashed:

     

    Nov  4 13:16:18 Tower kernel: aacraid: Host bus reset request. SCSI hang ?
    Nov  4 13:16:18 Tower kernel: aacraid 0000:09:00.0: Adapter health - -3
    Nov  4 13:16:18 Tower kernel: aacraid 0000:09:00.0: outstanding cmd: midlevel-8
    Nov  4 13:16:18 Tower kernel: aacraid 0000:09:00.0: outstanding cmd: lowlevel-0
    Nov  4 13:16:18 Tower kernel: aacraid 0000:09:00.0: outstanding cmd: error handler-0
    Nov  4 13:16:18 Tower kernel: aacraid 0000:09:00.0: outstanding cmd: firmware-1
    Nov  4 13:16:18 Tower kernel: aacraid 0000:09:00.0: outstanding cmd: kernel-0
    Nov  4 13:16:18 Tower kernel: aacraid 0000:09:00.0: Controller reset type is 3
    Nov  4 13:16:18 Tower kernel: aacraid 0000:09:00.0: Issuing IOP reset
    Nov  4 13:17:33 Tower kernel: aacraid 0000:09:00.0: IOP reset failed
    Nov  4 13:17:33 Tower kernel: aacraid 0000:09:00.0: ARC Reset attempt failed

     

    Start by rebooting and post new diags after array start.

    Thanks for the reply.

     

    With or without the new 8TB HDDs?

     

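One way to see whether the controller resets recur after the reboot is to grep the syslog in the new diagnostics for the same aacraid messages. A small sketch (the log path is an assumption):

```shell
# Pull controller-related lines out of a syslog to spot recurring aacraid resets.
scan_controller_errors() {
  grep -E 'aacraid|Host bus reset|IOP reset' "$1"
}

# Example with an assumed path:
# scan_controller_errors /var/log/syslog
```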
  14. I'm taking the liberty of double-posting this in the FR section as well... I don't know whether that's allowed or not; delete if needed.

    => the original topic:

     

    Hello

    I'm coming to ask for help because I'm in a delicate situation and I don't want to do anything more stupid than I already have...

    First, excuse the Google translation...

    To explain the situation: I have an array of 18 HDDs (3TB to 6TB) and two 14TB parity disks (following the start of a disk renewal), plus an NVMe cache and an SSD for tmp.

    I had the brilliant idea of wanting to eject the small, old HDDs from the array, namely the 2TB and two 3TB drives. So I replaced two disks with two 6TB ones I had set aside: rebuild OK, then 2 days with no trouble.
    I swapped the last one for another 6TB drive, and during its rebuild one of the two previous ones failed and stopped the rebuild.
    That's when I realized I had re-injected disks that Unraid had excluded a few months earlier...
    A big blunder on my part.

    I no longer have the exact details of what I tried with these 3 disks, but I now have a system with 2 failed HDDs (disks 5 and 6) and one with read errors (disk 2) that makes the system very slow, and which may well give out on me.

    So I urgently ordered two 8TB disks and shut the machine down while waiting.

    Today, after checking the integrity of the new disks, I put them in place of the two failed ones to attempt a rebuild despite the read problems on a third.

    I hoped it would go through, but the rebuild, while indeed running, is extremely slow (I won't let it run to the end as it is) and apparently producing many errors... On top of that, one disk (17) shows "Unmountable: wrong or no file system".

    Concretely, what do I have on my array?

    A lot (too much) of media, which is recoverable; it would be annoying to lose it all, but not dramatic.
    On the other hand I have 1.5TB of Nextcloud data (obviously without a backup; I never took the time to deal with my Google Drive ban), with personal/family documents and photos, which worries me much more.
    All my appdata is on the NVMe cache, so safe.

    For the moment the rebuild is running, but too slowly for me to leave it going. So unless it speeds up, that option is out (especially since it seems to be generating a lot of errors)... I don't know what to make of it, or whether I should let it run or not...

    I am considering stopping the machine, doing a new config without the 3 disks, telling Unraid the config is valid (without any certainty that it will work?), then taking the 3 disks one by one to mount on another machine and recover their contents (I don't know whether Unraid splits files across disks or not... if it does, that is not manageable).

    Last option: start the machine with the disks mounted and go retrieve my Nextcloud data; even if it is very slow and takes me a month, that would be manageable, unlike the current rebuild time.

    Thanks for your help...

    2022-11-04 17_49_08-Tower_Main - Vivaldi.png

    2022-11-04 17_49_17-Tower_Main - Vivaldi.png

    tower-diagnostics-20221104-1845.zip

  15. Hello

     

    I'm coming to ask for help because I'm in a delicate situation and I don't want to do anything more stupid than I already have...

    First, excuse the Google translation...

    To explain the situation: I have an array of 18 HDDs (3TB to 6TB) and two 14TB parity disks (following the start of a disk renewal), plus an NVMe cache and an SSD for tmp.

    I had the brilliant idea of wanting to eject the small, old HDDs from the array, namely the 2TB and two 3TB drives. So I replaced two disks with two 6TB ones I had set aside: rebuild OK, then 2 days with no trouble.
    I swapped the last one for another 6TB drive, and during its rebuild one of the two previous ones failed and stopped the rebuild.
    That's when I realized I had re-injected disks that Unraid had excluded a few months earlier...
    A big blunder on my part.

    I no longer have the exact details of what I tried with these 3 disks, but I now have a system with 2 failed HDDs (disks 5 and 6) and one with read errors (disk 2) that makes the system very slow, and which may well give out on me.

    So I urgently ordered two 8TB disks and shut the machine down while waiting.

    Today, after checking the integrity of the new disks, I put them in place of the two failed ones to attempt a rebuild despite the read problems on a third.

    I hoped it would go through, but the rebuild, while indeed running, is extremely slow (I won't let it run to the end as it is) and apparently producing many errors... On top of that, one disk (17) shows "Unmountable: wrong or no file system".

    Concretely, what do I have on my array?

    A lot (too much) of media, which is recoverable; it would be annoying to lose it all, but not dramatic.
    On the other hand I have 1.5TB of Nextcloud data (obviously without a backup; I never took the time to deal with my Google Drive ban), with personal/family documents and photos, which worries me much more.
    All my appdata is on the NVMe cache, so safe.

    For the moment the rebuild is running, but too slowly for me to leave it going. So unless it speeds up, that option is out (especially since it seems to be generating a lot of errors)... I don't know what to make of it, or whether I should let it run or not...

    I am considering stopping the machine, doing a new config without the 3 disks, telling Unraid the config is valid (without any certainty that it will work?), then taking the 3 disks one by one to mount on another machine and recover their contents (I don't know whether Unraid splits files across disks or not... if it does, that is not manageable).

    Last option: start the machine with the disks mounted and go retrieve my Nextcloud data; even if it is very slow and takes me a month, that would be manageable, unlike the current rebuild time.

     

    Thank you for your help ….

    2022-11-04 17_49_08-Tower_Main - Vivaldi.png

    2022-11-04 17_49_17-Tower_Main - Vivaldi.png

    tower-diagnostics-20221104-1845.zip

  16. 1127343505_2021-03-0614_22_23-Tower_UpdateContainer.thumb.png.a92c20fd3214368fa7ba7cf09f354673.png

     

    2021-03-06 14_22_37-Tower_UpdateContainer.png

     

     

    Unable to connect

    Firefox can't establish a connection to the server at 192.168.2.110:8080.

        The site could be temporarily unavailable or too busy. Try again in a few moments;
        If you are unable to load any pages, check your computer's network connection;
        If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the web.

  17. hello toto

    I tried to follow your tutorial, but I run into the same problem as ddespinoy... and his solution doesn't work in my case.

    The VPN container's logs are OK.

    Creating the openvpn network with the container name works too (the second command you gave doesn't work; I get an error, as mentioned previously).

    Chromium runs correctly if I put it in bridge mode, but if I put it on the VPN network, or set it to none with the Extra Parameters '--net=container:openvpn', it becomes unreachable.

    I don't really know what to check to fix all this ^^

    Hoping you can help me
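For reference, the `--net=container:` setup described above amounts to something like the sketch below (container and image names are assumptions; only the pattern matters). When a container joins another container's network namespace, its ports must be published on the VPN container, not on itself, which is the usual reason a joined container's web UI becomes unreachable:

```shell
# Illustrative only: share the openvpn container's network namespace.
# Image names and the 7800 UI port are hypothetical placeholders.

# Publish the joined container's UI port on the VPN container:
# docker run -d --name openvpn -p 7800:7800 --cap-add=NET_ADMIN <vpn-image>

# The second container gets no network of its own; it reuses openvpn's stack:
# docker run -d --name chromium --net=container:openvpn <chromium-image>
```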
