Solved - Cannot write to Docker image error


gacpac


6 minutes ago, gacpac said:

The largest I would download is 3GB at a time, and that's what I've set up. The files I'm downloading are about 600MB each, which is what confused me.

 

My download location is in /mnt/user/share,

not /mnt/disk/cache.

OK - that is good. Can the 600MB files download in parallel? If so, you need to allow enough in the Min Free Space setting for 600MB x the number of parallel downloads. You probably want the value to be a bit larger than that to provide some headroom, as you never want the free space to actually reach zero.

 

If the files are a rar set, what location do you have set for the unrar location? That location also has to have a Min Free Space that is larger than any file in the rar set - probably about twice that value, as unrar sometimes needs working space as well.
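To put rough numbers on that advice, here is a minimal Python sketch of the arithmetic (the helper and figures are illustrative, not anything Unraid provides - you enter the resulting value by hand in the share's Min Free Space setting):

```python
# Rough sketch of the Min Free Space arithmetic described above.
# Illustrative only: Unraid just takes the final number in its settings.

def suggested_min_free(largest_file_mb: int, parallel_downloads: int,
                       headroom: float = 1.5) -> int:
    """Largest file x number of parallel downloads, plus headroom so
    free space never actually reaches zero."""
    return int(largest_file_mb * parallel_downloads * headroom)

# 600MB files, 4 parallel downloads -> 3600MB suggested Min Free Space
print(suggested_min_free(600, 4))

# For an unrar location: roughly twice the largest file in the rar set,
# since unrar sometimes needs working space as well.
print(suggested_min_free(600, 1, headroom=2.0))  # -> 1200MB
```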

Edited by itimpi

@Squid

They are going there too. It's like this:

 

/mnt/user/share/torrents/complete/

/mnt/user/share/torrents/incomplete/

 

They go to the cache and then Mover does its magic. It works perfectly unless I'm downloading, like now, 100GB of files.

@itimpi

I usually don't download rar files, and if I do, they are not that big. And I have that download location for everything. Also, I thought the minimum size is only set in the Global Share Settings.

Edited by gacpac
44 minutes ago, gacpac said:

@Squid

They are going there too. It's like this:

 

/mnt/user/share/torrents/complete/

/mnt/user/share/torrents/incomplete/

 

They go to the cache and then Mover does its magic. It works perfectly unless I'm downloading, like now, 100GB of files.

@itimpi

I usually don't download rar files, and if I do, they are not that big. And I have that download location for everything. Also, I thought the minimum size is only set in the Global Share Settings.

The Global Share setting applies to the cache disk.   The individual share settings apply to the array disks.

 

If you are downloading using a torrent, the Min Free Space on the cache disk will probably need to be as large as all the simultaneous torrents added together, as all the files for each torrent can be allocated at the start and then consume more space, up to their total size, as the torrent downloads. It sounds as if this might be what is causing your problem: if you are downloading 100GB of torrents and have a 30GB docker image, this totals more than the size of your cache disk. Does your torrent software have an option to allocate the space required for each torrent file before starting to download it? If so, using this might solve your problem.
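To see why 100GB of torrents plus a 30GB docker image can overrun a small cache, here is a back-of-the-envelope check in Python (all sizes are made up for illustration):

```python
# Illustrative capacity check: torrent clients that preallocate can
# claim the full size of every active torrent up front, so the total
# competes with the docker image for cache space.

cache_size_gb = 120            # hypothetical cache disk size
docker_image_gb = 30           # space reserved for the docker image
torrents_gb = [40, 35, 25]     # simultaneous torrents, 100GB in total

needed = docker_image_gb + sum(torrents_gb)
if needed > cache_size_gb:
    print(f"Need {needed}GB but cache is {cache_size_gb}GB - "
          "writes (including to the docker image) will start failing.")
else:
    print(f"{cache_size_gb - needed}GB of headroom left on cache.")
```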

11 minutes ago, itimpi said:

The Global Share setting applies to the cache disk.   The individual share settings apply to the array disks.

 

If you are downloading using a torrent, the Min Free Space on the cache disk will probably need to be as large as all the simultaneous torrents added together, as all the files for each torrent can be allocated at the start and then consume more space, up to their total size, as the torrent downloads. It sounds as if this might be what is causing your problem: if you are downloading 100GB of torrents and have a 30GB docker image, this totals more than the size of your cache disk. Does your torrent software have an option to allocate the space required for each torrent file before starting to download it? If so, using this might solve your problem.

I use the transmission_VPN docker. I believe it allocates the files by default as it starts downloading, so I don't think that would solve the problem. I could set the global setting to 600MB, or just straight up disable the cache on that share.

 

Question: what do you guys use for downloading?
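(For reference: Transmission controls this behaviour through the preallocation key in its settings.json - 0 = off, 1 = fast/sparse, 2 = full. A minimal Python sketch for checking and changing it follows; the /config path is an assumption based on how the transmission_VPN container typically maps its config, and Transmission must be stopped first or it will overwrite the file on exit.)

```python
import json

# Assumed path: the transmission_VPN container usually maps its config
# under /config. Stop Transmission before editing, because it rewrites
# settings.json on shutdown.
SETTINGS = "/config/settings.json"

with open(SETTINGS) as f:
    settings = json.load(f)

# Transmission's preallocation values: 0 = off, 1 = fast (sparse), 2 = full
print("preallocation =", settings.get("preallocation"))

settings["preallocation"] = 2  # reserve each file's full size up front
with open(SETTINGS, "w") as f:
    json.dump(settings, f, indent=4, sort_keys=True)
```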

2 hours ago, gacpac said:

Do I need to worry about Time Machine or the File Integrity plugin?

Yes to both.

 

For Time Machine, set up a user share dedicated to the task. Make it Private and create a special user (e.g. TM) that has read/write access. Exclude the cache (by setting Use cache disk to No) and preferably limit it to one array disk. In AFP Security Settings you can set a limit to the space available to Time Machine - otherwise it won't start deleting old backups until the whole of the share is full. For best performance and reliability consider moving the Volume dbpath off the array disk and onto your cache. I set mine to /mnt/user/system/AppleDB.

 

For the File Integrity plugin, exclude all temporary files and folders. Exclude Apple metadata files. Exclude your Time Machine share - Time Machine performs its own integrity checks. Exclude files whose content changes frequently.
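To make those exclusions concrete, here is a small Python sketch of pattern-based exclusion; the patterns and share path are hypothetical examples in the spirit of the advice above (the plugin itself takes its exclusions through its settings page):

```python
import fnmatch

# Hypothetical exclusion list: temporary files, Apple metadata,
# and a Time Machine share (Time Machine does its own checks).
EXCLUDE_PATTERNS = [
    "*.tmp", "*.part",           # temporary files
    ".DS_Store", "._*",          # Apple metadata
    "/mnt/user/TimeMachine/*",   # example Time Machine share path
]

def is_excluded(path: str) -> bool:
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch.fnmatch(path, pat) or fnmatch.fnmatch(name, pat)
               for pat in EXCLUDE_PATTERNS)

print(is_excluded("/mnt/user/Movies/.DS_Store"))                 # True
print(is_excluded("/mnt/user/TimeMachine/backup.sparsebundle"))  # True
print(is_excluded("/mnt/user/Movies/film.mkv"))                  # False
```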

3 hours ago, John_M said:

Yes to both.

 

For Time Machine, set up a user share dedicated to the task. Make it Private and create a special user (e.g. TM) that has read/write access. Exclude the cache (by setting Use cache disk to No) and preferably limit it to one array disk. In AFP Security Settings you can set a limit to the space available to Time Machine - otherwise it won't start deleting old backups until the whole of the share is full. For best performance and reliability consider moving the Volume dbpath off the array disk and onto your cache. I set mine to /mnt/user/system/AppleDB.

 

For the File Integrity plugin, exclude all temporary files and folders. Exclude Apple metadata files. Exclude your Time Machine share - Time Machine performs its own integrity checks. Exclude files whose content changes frequently.

I did what you requested. Currently I have the DB in the root of the TM share. I added the parameter for the DB and manually moved it to a folder. I hope it works after that. 

For the File Integrity plugin, I exclude Time Machine, appdata and Nextcloud.

1 hour ago, gacpac said:

I added the parameter for the DB and manually moved it to a folder. I hope it works after that. 

There's no need to move the folder, and it's best not to in case you mess up its permissions. The next time the share is AFP-mounted there will be a few seconds' delay as the system moves the database to the new location. The best thing would be to ensure the share is not mounted on any Macs, then delete the .AppleDB folder and just let it be rebuilt in the desired location when it is next mounted. I've written a more detailed guide to setting up a Time Machine share here:

 

 

17 minutes ago, John_M said:

There's no need to move the folder, and it's best not to in case you mess up its permissions. The next time the share is AFP-mounted there will be a few seconds' delay as the system moves the database to the new location. The best thing would be to ensure the share is not mounted on any Macs, then delete the .AppleDB folder and just let it be rebuilt in the desired location when it is next mounted. I've written a more detailed guide to setting up a Time Machine share here:

 

Interesting, so that folder can be deleted at any time and doesn't affect the backup. Cool.


A lot of AFP-related problems can be fixed by deleting that database (make sure the share is unmounted first). But by moving it to the cache a lot of those problems can be avoided in the first place. The only penalty for deleting the database is the need to rebuild it (and therefore a delay) when it's next mounted.

3 minutes ago, John_M said:

A lot of AFP-related problems can be fixed by deleting that database (make sure the share is unmounted first). But by moving it to the cache a lot of those problems can be avoided in the first place. The only penalty for deleting the database is the need to rebuild it (and therefore a delay) when it's next mounted.

But that's awesome. I have set up the share with a size limit and moved the DB. That part is wonderful.

 

@Squid I think I was able to put it under control with the CA Mover plugin. I set Mover to run on a schedule, but also when the disk is 95% full. That makes sense.

 

Also, I'm keeping btrfs to get the benefit of qcow for the virtual machines. Maybe in the future I'll add a 256GB SSD.

6 minutes ago, gacpac said:

But that's awesome. I have set up the share with a size limit and moved the DB. That part is wonderful.

It really makes a difference to user shares that span multiple disks and regularly have files added, because the .AppleDB folder can get split across multiple disks. I suppose you could manage it with split levels, but you shouldn't have to go to that much trouble for the sake of mere metadata. Another significant advantage of moving the database onto the cache is that you can leave AFP shares mounted without keeping the array disks from spinning down - the Finder can access the database as often as it wants, and it's really quite chatty.


Hi, 

 

I noticed the cache still fills up and the dockers stop working. This defeats the purpose of overflowing to the array to save files when the cache is getting full.

 

I had the setting to run at 95% full, but I had to take it out. I also changed the schedule for Mover to run every 8 hours. I'm sure there must be a better way. I had to install the unBALANCE plugin to manually rsync folders from the cache to the array.

 

Can somebody send me a screenshot of their tuning, or at least give me an idea?


Would a disk outside the array, mounted using Unassigned Devices, be a possible solution? I'm thinking of a hard disk of 1 to 2TB capacity, which should be enough to handle your downloads and post-processing but cheaper than an SSD. It would avoid spinning up the array disks unnecessarily and would leave your cache for the applications that would most benefit from its speed. I don't use downloaders myself so I'm not entirely sure of your workflow, but presumably you need space to assemble segments of files, and when they're complete you may decide to rewrap/transcode/rescale/subtitle them before finally moving them onto the array.

  • 3 weeks later...

That's what I'm planning right now. I'll get a USB HDD and set the downloads there, then let Sonarr move them to the array. I'll also create another location within the torrent client to download to the array when I need it.

 

Currently I'm using the cache for my apps and VMs only. Honestly, it's kind of a bummer that it doesn't route to the next empty location when the disk fills up. And I don't mind much that it fills. Now, what drives me crazy is that the dockers crash and I have to run Mover and then restart the server. If the developers can look into a possible fix for future builds, it would be amazing. But for now, I'll edit the post and mark it as solved.

 

 

57 minutes ago, gacpac said:

Honestly, it's kind of a bummer that it doesn't route to the next empty location when the disk fills up.

It WILL choose another disk when one fills up if you have the settings for the User Share set correctly.

 

Each User Share has a Minimum Free setting. You must set Minimum Free larger than the largest file you expect to write. Unraid has no way to know how large a file will become when it chooses a disk for it. If a disk has less than Minimum Free it will choose another.

 

Cache also has a Minimum Free setting in Global Share Settings that works in a similar manner. If cache has less than Minimum Free it will choose an array disk instead.
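trurl's rule can be sketched in a few lines of Python - a toy model of the decision he describes, not Unraid's actual allocator:

```python
# Toy model of the Minimum Free rule described above. Unraid only
# checks free space at the moment it picks a disk, because it cannot
# know how large the file will eventually become.

def choose_disk(disks: dict[str, float], min_free_gb: float) -> str | None:
    """Return the first disk whose free space is at least Minimum Free."""
    for name, free_gb in disks.items():
        if free_gb >= min_free_gb:
            return name
    return None  # every candidate is below Minimum Free

# Cache below Minimum Free -> the write overflows to an array disk.
disks = {"cache": 1.0, "disk1": 500.0, "disk2": 80.0}
print(choose_disk(disks, min_free_gb=4.0))  # -> disk1
```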

5 hours ago, trurl said:

It WILL choose another disk when one fills up if you have the settings for the User Share set correctly.

 

Each User Share has a Minimum Free setting. You must set Minimum Free larger than the largest file you expect to write. Unraid has no way to know how large a file will become when it chooses a disk for it. If a disk has less than Minimum Free it will choose another.

 

Cache also has a Minimum Free setting in Global Share Settings that works in a similar manner. If cache has less than Minimum Free it will choose an array disk instead.

Understood.

Now, depending on the use case, file sizes can change at any time. I have it set up for the user shares, but for my cache it varies a lot.

3 hours ago, trurl said:

What varies a lot? The largest file you will EVER write?

The largest file I will ever write. So look, these last few weeks I was using it with Time Machine to complete the first backup, and I had to set the biggest file to 8MB so that when the cache gets full it sends the rest to the array. Before that, the same thing happened with a TV series I was downloading: the biggest file was 600MB and some were 300MB, and again I had to set the cache to 300MB so it sends the files to the array. To me, doing this is a hassle. This is something that I want to set and forget.

 

Honestly, the best option that works for me is to set my downloads outside the array and then let Sonarr or Radarr move them to the array. For files that I want to download to the cache, like programs, I set a different download path that sends them to the cache, and at night Mover takes over.

 

Do you get what I mean? 

 

6 hours ago, gacpac said:

This is something that I want to set and forget. 

Some people download much bigger files than 600MB. They just set it to 2G or 4G and leave it there. If cache free space occasionally gets lower than that, the overflow goes to the array. Then later Mover comes along, frees up some space, and things start going back to cache. It works as intended.

 

Or whatever. My only point was that 

19 hours ago, gacpac said:

It's kind of a bummer that it doesn't route to the next empty location when the disk fills up.

This feature is already there. It was there before Unraid V6.


And if you set it to 4G, it's not like you are wasting 4G of space all the time. Let's say you have it set to 4G, and the cache fills so that only 5G remains. Then you start to download a 4G file. Since the cache has more than the minimum, Unraid will choose it and the file will go to cache. Afterwards, the cache only has 1G remaining, so if you then try to download a 2G file, Unraid will choose the array for that file. Later, Mover comes along and moves a bunch of stuff so that the cache has 50G free again - lots of room for more stuff to go to cache.
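Running that scenario through the same kind of toy model makes the sequence concrete (illustrative numbers in GB):

```python
# trurl's 4G example, step by step (toy numbers in GB).
min_free, cache_free = 4, 5

# A 4G file arrives: cache still has >= Minimum Free, so it goes there.
target = "cache" if cache_free >= min_free else "array"
cache_free -= 4
print(f"4G file -> {target}, cache now {cache_free}G free")   # cache, 1G

# A 2G file arrives: cache is now below Minimum Free -> array instead.
target = "cache" if cache_free >= min_free else "array"
print(f"2G file -> {target}")                                 # array

# Mover frees space; with 50G free, new writes go to cache again.
cache_free = 50
target = "cache" if cache_free >= min_free else "array"
print(f"after mover, next file -> {target}")                  # cache
```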

 

Do you get what I mean?😉

