
Cache read-only with new USB



I recently had issues with my iGPU and ended up recreating my entire USB because of it. The only thing I kept is the data on my three data drives. Beforehand, I had copied all data off my cache SSD and reformatted it with the new install.

 

After copying all the Docker data back over, I now encounter issues reinstalling the old containers. Via SSH, I get the message that my cache is read-only. The SSD sometimes appears twice (see image): once in the cache pool and a second time as an unassigned device.
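
A minimal set of checks, assuming the pool is mounted at `/mnt/cache` (adjust to your pool name), that should show whether it really dropped to read-only and why:

```
# Show how the cache pool is currently mounted (look for "ro" in the options)
mount | grep -i cache

# Recent btrfs messages -- a pool usually goes read-only after the kernel logs an error
dmesg | grep -i btrfs | tail -n 50

# Per-device btrfs error counters for the pool (mount point is an assumption)
btrfs device stats /mnt/cache
```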

Screenshot 2024-04-26 151509.png

tower-diagnostics-20240426-1508.zip

Posted (edited)
On 4/26/2024 at 4:03 PM, JorgeB said:

The pool went read-only; with btrfs I suggest you back up the data and re-format.
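
(For reference, a minimal sketch of that backup step, assuming the pool is mounted at `/mnt/cache`; the destination share is just a placeholder.)

```
# Copy everything off the pool to an array share before reformatting
# (destination is a placeholder -- any share with enough free space works)
rsync -avh --progress /mnt/cache/ /mnt/user/cache_backup/
```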

 

Hoping I simply did something wrong earlier that you can point out, here's how I did that this morning (a rough command-line equivalent of the wipe step follows the list):

 

1. Stop the array.

2. Remove the SSD from the pool.

3. Delete the partition via Unassigned Devices.

4. Add the SSD to the pool.

5. Start the array and reformat the pool drive.

6. Restart the server for good measure.
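
For step 3, this is roughly the command-line equivalent of what I understand Unassigned Devices does; the device name below is a placeholder, so double-check it before wiping anything:

```
# Identify the SSD first -- /dev/sdX below is a placeholder
lsblk -o NAME,SIZE,MODEL,FSTYPE

# Remove the old partition table and filesystem signatures from the SSD
wipefs -a /dev/sdX
```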

 

Edit: After reformatting again, I still get this issue. Docker, for example, says: `docker: Error response from daemon: error creating temporary lease: file resize error: truncate /var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown.`
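
A quick way to confirm it's the pool itself and not just Docker, assuming the pool is mounted at `/mnt/cache` (a sketch; paths are placeholders):

```
# Try writing a test file directly on the pool -- fails immediately if it is read-only
touch /mnt/cache/.rw_test && rm /mnt/cache/.rw_test

# Show where Docker's data directory lives and list any mounts flagged read-only
df -h /var/lib/docker
awk '$4 ~ /(^|,)ro(,|$)/ {print $1, $2, $4}' /proc/mounts
```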

Edit 2: For now, I've removed the cache drive entirely and am running everything off the array HDDs. I'm still unsure how to proceed given the cache drive's behavior (especially the fact that it appears twice).

Edited by DesertCookie
  • 2 weeks later...
Posted (edited)
1 hour ago, JorgeB said:

The NVMe device is being passed through to the Spiele VM; correct that and it should resolve the issue.

That's... what? Thanks for pointing that out! That VM has been dormant for months, and I haven't started it since removing its GPU. I never had the NVMe drive passed through to it, but I did have an HDD passed through that I removed when upgrading to 6.12. Does it make sense that the NVMe drive bound itself to the VM, and that this causes issues even though the VM is never turned on? Perhaps the new USB I made reassigned the VFIO IDs, and the SSD thus moved up into the place of the now-removed HDD...? I assume this is not expected behavior. Do you think it's worth investigating and potentially opening a bug report?
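
If it helps anyone hitting the same thing, these are the checks that should show what is actually bound to vfio-pci; the config path reflects my understanding of how recent Unraid versions store the bindings made under Tools > System Devices:

```
# Devices stubbed to vfio-pci at boot (path per my understanding of Unraid 6.9+)
cat /boot/config/vfio-pci.cfg

# Confirm which kernel driver the NVMe controller is actually using
# ("Kernel driver in use: vfio-pci" means it is reserved for passthrough)
lspci -nnk | grep -iA3 'non-volatile'
```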

Edited by DesertCookie
