
One share gone after cache recreation, "Share does not exist"?!


Solved by alexanderSad


Hi,

 

I've been running an unraid server for a few years.

 

A day ago a cache drive died. I was running 2TB + 2TB + 128GB cache drives and the small one failed. No issue; I erased and removed it.

 

After that the system ran fine and I checked that my latest data was available. It was all good.


Today the array wouldn't start, complaining about the cache drives again. Confident because of the long load times when the disks spin up, and having seen all my data after the last clear, I cleared and recreated the cache pool.

Seemed to work until I realized that one of my shares was just gone. Not listed in shares and I can't see it if I traverse the file system online. Not mountable from my desktop.

I can see from the used space on my array that it still takes up space (probably around 7 TB), but I can't find any way to get to my files. And this is the last share I would have wanted to lose.

 

Reading through the diagnostics dump I can see the config for the share (bildtemp) but it has a comment added at the top "# Share does not exist"

How do I recover from this?

What went wrong?

 

All help is greatly appreciated. It would crush me to lose this data.

 

tower-diagnostics-20240429-2210.zip

16 minutes ago, alexanderSad said:

bildtemp is the name. Which, despite its name, did not hold only temporary data.

That share is configured to all be on the 'cache' pool (space permitting) so if you did not back it up before you cleared the cache pool its contents will have been lost.

1 hour ago, alexanderSad said:

It was several times larger than the cache pool. My understanding was that 'prefer' cache was a read cache, with writes going straight through to disk. That is how the write speeds behaved, at least.

No - exactly the opposite - it is a write cache. Each file goes to the cache unless it will not fit. Only overflow files get written to the array.


Well this was incredibly depressing.

 

I thought I had set everything up to write to disk always :(

What are my chances of finding the files with file recover programs on the caches?

I just can't believe this happened. I really really would have expected a lot more warning with clearing the cache if the cache was in fact not a cache at all.

  • 1 month later...
On 4/30/2024 at 11:10 AM, itimpi said:

No - exactly the opposite - it is a write cache. Each file goes to the cache unless it will not fit. Only overflow files get written to the array.


This is not a description of what a write cache is though. 

Wikipedia:
 

Quote

With write caches, a performance increase of writing a data item may be realized upon the first write of the data item by virtue of the data item immediately being stored in the cache's intermediate storage, deferring the transfer of the data item to its residing storage at a later stage or else occurring as a background process. 



In no sensible context would a write cache (or read cache) mean data is removed from backing storage. Except Unraid, I suppose.

Not that it brings back my data 💔

Proper read cache definition from wikipedia:
 

Quote

With read caches, a data item must have been fetched from its residing location at least once in order for subsequent reads of the data item to realize a performance increase by virtue of being able to be fetched from the cache's (faster) intermediate storage rather than the data's residing location


 

Edited by alexanderSad
49 minutes ago, alexanderSad said:

In no sensible context would a write cache (or read cache) mean data is removed from backing storage. Except Unraid, I suppose.

If the cache drive is not full, then it behaves exactly as the Wikipedia definition describes for a write cache. Removing the cache pool loses all data that has not been flushed from the cache. You unfortunately used an Unraid option to set things up so that the data was never flushed to the main array. That setting is intended to maximise performance for docker containers and VMs where you want to keep files on the fastest storage. I think it would be better if the default name for the first pool was not ‘cache’, as then users’ expectations of the pool might be slightly different. Having said that, the help built into the GUI for the primary and secondary storage and mover action settings on user shares does describe exactly how it operates.
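For reference, the setting being discussed lives in a per-share config file on the flash drive. A sketch of what that file typically looks like (the path, key name, and value list are from memory of Unraid diagnostics dumps, so treat the details as assumptions and check your own dump):

```ini
# /boot/config/shares/bildtemp.cfg (illustrative excerpt)
shareUseCache="prefer"  ; "prefer": mover MOVES files from the array TO the pool
; Other values and their mover behavior:
;   "yes"  - new writes land on the pool; mover later moves them to the array
;   "no"   - writes go straight to the array; the pool is not used
;   "only" - files live only on the pool and are never written to the array
```

With "prefer" set, clearing the pool destroys the only copy of every file that fit on it.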

 

If you still have the original drive intact, then file recovery tools such as UFS Explorer on Windows can normally retrieve all the data.

 

  • Solution

I have recovered some, but it's worse than you described.

I started with most of the data in the array before I updated and changed my cache setting. Unraid then copied my data from the array to the cache (good) and deleted it from the array (absolutely insane for something named a cache). That is not cache behavior.

The word "move" is also between drives as then there is only copy and delete. I read the descriptions and concluded one option was from cache to array. Don't like not safe. One option is from array to cache. They must mean caching it. But no :(

 

