Unmountable device, wrong or no file system



Looks like you're having connection problems on parity2.

 

2 hours ago, Decay said:

I ran Memtest, also tried it with some different hardware, and got many failures. So that was probably one of the things that caused the problems.

You must never attempt to run any computer unless memory is working perfectly. Everything goes through RAM. The OS and other executable code, your data, EVERYTHING. The CPU can't do anything with anything until it is loaded into RAM.

 

Have you done memtest with your new hardware?


I'll start with your second question. No, I haven't done it yet, but I've put in the USB stick right now and started the test.

 

Connection problem on parity 2... I'll check whether the cables are connected the right way, or should I directly replace the cables of the parity drive?


I just checked the diagnostics file and the problem also shows up on parity drive 1. I haven't seen it on any other drive in the diagnostics file.

 

Edit: Disk 2 has the same problem. I'll check whether they are all connected to the same power cable.

Edited by Decay

Disk 2 is on a different power cable than the parity drives, and there is no power splitter involved with these drives. Could the power supply be the problem, or wouldn't every disk have to show the same problem then?

Could it be that the hard disks are defective?

 

Edit: If I'm right, drive 1 also has that problem now, or I just hadn't noticed it before.

I just checked old diagnostics files, and drive 1 already had that problem there as well.

 

unraid-diagnostics-20240120-1952.zip

Edited by Decay

Would it be a smart idea to replace the disks showing that failure?

If I'm not wrong, I have four drives with that problem:

- 2 parity drives

- drive 1 and drive 2 from the array (really old drives, which I wanted to replace anyway)

 

I bought three new drives and could replace both parity drives with new ones. Once the new parity drives have rebuilt parity, I could replace drive 1 from the array and let it rebuild. After that I could copy all data from array drive 2 to the new array drive 1.

When that is done, I would also replace the other drives with new ones, or try the old parity drives as array drives.

 

Why I got this idea: when I googled the problem, "occurred at disk power-on lifetime", I found some reports saying it could be a power problem, but mostly it ended up being drive problems. Perhaps it is a really bad idea to change them.

 

 

Edit: Memtest passed without any errors. Shall I post the logfile?

Edited by Decay

I did not see any errors directly. I did some searching on the internet to narrow down the following error, and I saw the same error messages as these:

Error 127 [6] occurred at disk power-on lifetime: 54389 hours (2266 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.
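As a quick sanity check, the "days + hours" figure in that error line is just the power-on hours split by 24:

```python
# Power-on lifetime from the SMART error line above
hours = 54389
days, rem = divmod(hours, 24)
print(f"{days} days + {rem} hours")  # -> 2266 days + 5 hours
```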

I therefore assumed that it might make sense to replace the disks. I replaced the cables, checked all the plugs again and couldn't find anything wrong.
Some time ago I had replaced the power supply unit with a larger one; could it be that the error occurred before that and is now negligible?
I have attached the log file from the memtest here.

MemTest86-20240121-013132.log

Edited by Decay

I just started the SMART test again for both parity drives. Would it make sense to replace the parity drives with new ones? I have three new drives here and I'll order some more for the array as well. I want to shrink the array a bit and also replace the old drives.


I accidentally stopped the check on parity 1 and started it again last Monday morning. Today it was still doing the SMART check; it had been stuck at 90% for days.

I'm going to stop the check on parity 2 and hope that parity 1 will run through this time.

 

Edit: I checked the three new drives I got. They are Seagate drives; they should be CMR. Just for my understanding, should I avoid SMR completely, or just as parity drives? Most or all of my used drives are from WD; they are probably all SMR drives.

Should I stop the test and replace the drives and build the parity from scratch? After that I would replace all other drives.

To my understanding, that would make sense.

Edited by Decay

You can use SMR, but when used as parity it will degrade write performance for the whole array, including writes to non-SMR drives.

 

40 minutes ago, Decay said:

Should I stop the test and replace the drives and build the parity from scratch?

If you plan to replace them, you might as well.

 


I'm the only one using the server, so the write performance was okay for me, but I think I'll replace the parity drives with non-SMR drives and rebuild parity. It will take some time; I'll report the progress once it is done. I'll swap the third new drive in for drive 1 of the array and then copy all data from drive 2 to drive 1.

I just ordered three more drives; these will replace the rest of the drives. It is hard to get new ones, as most shops in my area are sold out.

 

@trurl Have you seen the memtest log? To me it looks good, or did I miss something?

 

MemTest86-20240121-013132.log


Docker failed to start. Could it be that the docker.img, which was recreated a few days before, is corrupted? It was recreated on the old hardware, which had RAM problems and then suddenly stopped working completely.

To be honest, I don't understand the docker logfile from the diagnostics.

 

Edit: The Fix Common Problems plugin shows: Unable to write to cache - Drive mounted read-only or completely full.

The cache drive isn't full, and it doesn't show as read-only.

 

unraid-diagnostics-20240127-1229.zip

Edited by Decay

Cache pool is now showing filesystem issues:

 

Jan 27 12:19:01 unraid kernel: BTRFS critical (device sdb1): unable to find logical 9836321250464530432 length 4096
Jan 27 12:19:01 unraid kernel: BTRFS critical (device sdb1): unable to find logical 9836321250464530432 length 16384

 

And because of that it went read-only; since the docker image lives there, Docker won't work either. With btrfs I recommend backing up what you can and reformatting the pool. Btrfs was also detecting data corruption on the pool, so keep an eye on it to see whether more corruption is detected after reformatting; if yes, there could be an underlying hardware issue.
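If you want to watch the syslog for recurring corruption after the reformat, those kernel messages can be matched programmatically. A minimal sketch; the regex is my assumption, derived only from the two lines quoted above:

```python
import re

# Matches kernel lines like the BTRFS critical messages quoted above
BTRFS_CRITICAL = re.compile(
    r"BTRFS critical \(device (?P<dev>\w+)\): "
    r"unable to find logical (?P<logical>\d+) length (?P<length>\d+)"
)

line = ("Jan 27 12:19:01 unraid kernel: BTRFS critical (device sdb1): "
        "unable to find logical 9836321250464530432 length 4096")

m = BTRFS_CRITICAL.search(line)
if m:
    print(m.group("dev"), m.group("logical"), m.group("length"))
```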
