x3n0n

Members
  • Posts: 17
  • Joined
  • Last visited

x3n0n's Achievements

Noob (1/14) · Reputation: 3 · Community Answers: 1

  1. I meant: are we allowed to report SSDs in this thread? I didn't want to start a topic of my own for failed SSDs, and I only saw failed HDDs reported here, so... Also, I had them in a btrfs RAID 1 pool. As I understood it, btrfs takes care of ("soft"?) trimming, and I didn't change anything in the default pool settings.
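     A quick way to sanity-check the trim situation on such a pool (a minimal sketch; /mnt/cache is the assumed mount point and /dev/sdb is an example device):
         # Does the SSD advertise discard support? (non-zero DISC-GRAN / DISC-MAX)
         lsblk --discard /dev/sdb
         # Is the pool mounted with a "discard" option, or does it rely on periodic fstrim?
         findmnt -no OPTIONS /mnt/cache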
  2. Are SSDs allowed? I would advise against buying the Patriot P220 1024 GB. They are reported to have flaky controllers. I bought two of them in January; now (5 months later):
     - one of them dropped from the array a month ago and kept doing so until I wiped it with a "secure erase" tool (bad controller?)
     - the other started showing reallocated sectors just today
     Will change to Samsung 870 EVO.
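     For anyone wanting to check their own drives for the same symptom (a sketch; replace /dev/sdb with your device):
         # Attribute 5 (Reallocated_Sector_Ct) and 197 (Current_Pending_Sector) are the
         # ones to watch; a rising raw value means the drive is remapping sectors.
         smartctl -A /dev/sdb | grep -Ei 'reallocat|pending'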
  3. Is it still necessary to install the Intel iGPU driver? On 6.11.5, lsmod shows i915 already loaded, and I did not install anything. If it is necessary: which plugin specifically is meant? // Edit: It seems that it is no longer necessary to install the Intel iGPU driver manually.
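     To verify this on your own box (a sketch, assuming a stock Unraid console):
         # Is the kernel module already loaded?
         lsmod | grep i915
         # Did the driver actually bind and expose a render node?
         ls -l /dev/dri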
  4. Did that (and also waited for the btrfs operations to finish), and it seems that everything is in order now. Both btrfs balances exited with 0. See attached. hafen-diagnostics-20230529-1704.zip
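     For reference, the balance state can be watched from the console like this (a sketch; /mnt/cache is the assumed mount point):
         # Progress of a running balance; prints "No balance found" once it has finished
         btrfs balance status /mnt/cache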
  5. Booted Unraid, started the array in maintenance mode and ran the filesystem check, stopped, started normally, stopped again and took diagnostics: diagnostics-20230528-1958.zip
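     For anyone following along, this is the kind of check meant (a sketch; the device node is an example, and the filesystem must be unmounted, which is what maintenance mode provides):
         # Read-only check: reports problems without writing to the filesystem
         btrfs check --readonly /dev/sdc1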
  6. So for clarification: I have a pool with 2 SSDs in RAID 1, but the second SSD was erased. The Main window shows no config error, and the filesystem check in maintenance mode shows 1 device missing. How do I go on from here (see the sketch below)?
     - Tell Unraid the second drive was zeroed
     - Format drive 2 with btrfs
     - Tell the pool the second drive is not the same anymore and it has to redo the RAID 1 from drive 1
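     A hedged sketch of the rebuild from the console, assuming the pool is mounted at /mnt/cache, the surviving disk is devid 1, the wiped disk comes back as /dev/sdd1 (all example names), and the pool is mounted, possibly degraded. On Unraid the GUI normally handles this; these are the underlying btrfs steps:
         # Option A: rebuild onto the wiped disk by replacing the missing devid 2
         btrfs replace start 2 /dev/sdd1 /mnt/cache
         btrfs replace status /mnt/cache

         # Option B: add the disk, drop the missing member, then convert back to raid1
         btrfs device add /dev/sdd1 /mnt/cache
         btrfs device remove missing /mnt/cache
         btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache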
  7. In the meantime I got a new system. Turns out the drive acts up in the new system as well. I put the Unraid flash drive and the SSDs in the new system, booted Unraid and started the array, only to see the same drive dropped with the same errors in the log. I wanted to RMA the drive, so I wrote to support and they told me to send it in. I securely erased the SSD and wanted to send it in, but was curious whether it still had write errors. So I formatted it ext4 on another machine, and it started writing fine. So I wanted to put it back in my Unraid system and did so. The config was fine, no missing drives. But there are also no errors from the cache or btrfs, which is odd, because the secure erase zeroed the drive. What do I have to do to get the zeroed drive actually working back in the cache, so it has a btrfs filesystem and all the RAID 1 data on it?
     // Edit: Did a filesystem check on the cache in maintenance mode and now it shows errors:
     Opening filesystem to check...
     warning, device 2 is missing
     Checking filesystem on /dev/sdc1
     UUID: 0936923a-0844-4c80-9929-d48d3e40bfb0
     [1/7] checking root items
     [2/7] checking extents
     [3/7] checking free space tree
     [4/7] checking fs roots
     [5/7] checking only csums items (without verifying data)
     [6/7] checking root refs
     [7/7] checking quota groups skipped (not enabled on this FS)
     found 384472600576 bytes used, no error found
     total csum bytes: 371082512
     total tree bytes: 919502848
     total fs tree bytes: 457474048
     total extent tree bytes: 63045632
     btree space waste bytes: 108854723
     file data blocks allocated: 386939850752
      referenced 382193192960
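     Before re-adding anything, the pool membership can be confirmed from the console (a sketch):
         # Lists all member devices; a wiped mirror shows up as "*** Some devices missing"
         btrfs filesystem show /mnt/cache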
  8. I've pulled a drive and found out that that's the one with errors. Put it in the other slot and it works normally. So it seems that one of the bays is faulty, not the drive. So goodbye NAS enclosure, it seems. Wrote to TerraMaster support asking if they can provide another PCI-to-SATA card and am waiting for an answer.
  9. It is connected directly to the NAS enclosure. I've made sure it sits securely in the slot.
  10. Around line 1573 in the syslog it says:
      May 10 19:01:59 Hafen kernel: ata2.00: exception Emask 0x0 SAct 0x100000 SErr 0x0 action 0x6 frozen
      May 10 19:01:59 Hafen kernel: ata2.00: failed command: READ FPDMA QUEUED
      May 10 19:01:59 Hafen kernel: ata2: hard resetting link
      May 10 19:02:04 Hafen kernel: ata2: link is slow to respond, please be patient (ready=0)
      May 10 19:02:09 Hafen kernel: ata2: COMRESET failed (errno=-16)
      May 10 19:02:59 Hafen kernel: ata2: reset failed, giving up
      May 10 19:02:59 Hafen kernel: ata2.00: disable device
      Sounds like an SSD that's getting dropped, but I don't know why.
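      To find out which /dev/sdX device sits behind the kernel's ata2 port (a sketch; for libata devices the sysfs path of each block device contains its ataN port name):
          # The symlink target contains ".../ata2/..." for the affected drive
          ls -l /sys/block/sd*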
  11. Hi there folks, I'm running Unraid in a TerraMaster 2-bay NAS enclosure, and it's been fun! Sadly, my new config has only been running for some 2 months, and one SSD is already dropping from the redundant cache. I don't know if it is the TerraMaster enclosure or a faulty SSD. I don't want to switch the SSDs between the bays. If it were a connection issue, I would expect write errors on both SSDs. I'm running mostly from cache; for main storage I have a 32 GB USB flash drive. I attached the diagnostics. Can somebody point me in the right direction? wkr diagnostics-20230510-1915.zip
  12. Status as of today (in case anyone has similar problems):
      - Remounted the SSDs properly
      - Repaired BTRFS via the console: btrfs scrub start /mnt/cache (the command may differ for you). The array has to be started for this. This command is considered "safe" because it only corrects errors that can reliably be fetched from the other RAID disk. There are also other ways to do BTRFS repairs that overwrite data; I didn't have to use those.
      - At the end, the scrub reported 0 errors that could not be corrected. A few were corrected. So far the pool is running flawlessly again.
      For the future:
      - Set up a script for BTRFS monitoring (from mod JorgeB: here)
      - Set up the script from @mgutt (thanks!) (see above)
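      The matching status command, for anyone who wants to watch the scrub or read its summary afterwards (a sketch):
          # Live progress and, once finished, the corrected/uncorrectable error counts
          btrfs scrub status /mnt/cache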
  13. After I took the SSDs out of the cages and plugged them firmly onto the SATA connectors (I think they didn't sit well in the cages), I now get error-free SMART values after a reboot. Long term I'll have to see how to mount them. One of the SSDs reports over 200 fewer "Power On Hours", even though they were installed in the NAS within a day of each other. In other words, that one has apparently been offline for a while.
      1. How do I proceed now regarding the cache/BTRFS? Can I simply start the array and Unraid automatically checks whether BTRFS needs to repair something, or do I have to trigger that manually?
      2. Why didn't I see any warnings in the Unraid interface that something was wrong? Did I miss them? There must have been I/O errors in the log for quite a while.
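      On question 2: at the console, accumulated pool errors can at least be read out directly (a sketch; this is not the Unraid GUI notification path):
          # Per-device write/read/flush/corruption error counters (persisted by btrfs)
          btrfs device stats /mnt/cache
          # Reset the counters after fixing the cabling, so new errors stand out
          btrfs device stats -z /mnt/cache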
  14. Hi, and thanks for your post. Unfortunately (?) a Patriot P220 1024 GB.
  15. /mnt/cache could be unmounted after I manually unmounted /var/lib/docker/btrfs and /var/lib/docker (they pointed to docker.img). So the array is stopped. I've attached the diagnostics. How do I proceed with checking the btrfs filesystem and finding the error? SMART doesn't seem to be able to talk to the SSDs; what's the reason for that? // Edit: Currently still the following errors: diagnostics-20230227-1125.zip
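      The unmount sequence described above, as a sketch (docker.img lives on the pool and is loop-mounted, which is what keeps /mnt/cache busy):
          # See which loop device is backed by docker.img
          losetup -a
          # Unmount the nested docker mounts first, then the pool itself
          umount /var/lib/docker/btrfs
          umount /var/lib/docker
          umount /mnt/cache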