[Support] Linuxserver.io - Sonarr


Recommended Posts

On 5/18/2019 at 11:43 AM, BLKMGK said:

Curious, is anyone using the V3 version of Sonarr with this container? It looks like we can request it by using the "preview" tag. I've been reading some good things about V3 and have become curious. Just wondering if anyone has tried this and how it went before attempting to take the plunge :)

I left my v2 docker alone: stopped it, then created a new docker with the v3 preview and its own appdata folder. I started fresh, and everything is working fine so far. I imagine you could copy the DB folders into the new appdata folder, but I don't know the exact process.
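If you did want to carry the existing database over, I imagine the rough shape is something like this (just a sketch, untested; "sonarr-v3" is whatever you named the new container, and the appdata paths depend on your template):

docker stop sonarr sonarr-v3
# copy the old DB and config.xml into the v3 appdata folder
cp -a /mnt/user/appdata/sonarr/. /mnt/user/appdata/sonarr-v3/
docker start sonarr-v3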

 

I have only been running v3 for a couple days but I have not come across any issues yet. 

  • Upvote 1
Link to comment

Hi, I am having DB corruption issues with Lidarr and Sonarr.

Unraid 6.7.0, install containers, add lots of media, run for a bit, and then errors.

E.g.

System.Data.SQLite.SQLiteException (0x80004005): database disk image is malformed
database disk image is malformed
  at System.Data.SQLite.SQLite3.Reset (System.Data.SQLite.SQLiteStatement stmt) [0x00083] in <61a20cde294d4a3eb43b9d9f6284613b>:0 
  at System.Data.SQLite.SQLite3.Step (System.Data.SQLite.SQLiteStatement stmt) [0x0003c] in <61a20cde294d4a3eb43b9d9f6284613b>:0 
  at System.Data.SQLite.SQLiteDataReader.NextResult () [0x0016b] in <61a20cde294d4a3eb43b9d9f6284613b>:0 
  at System.Data.SQLite.SQLiteDataReader..ctor (System.Data.SQLite.SQLiteCommand cmd, System.Data.CommandBehavior behave) [0x00090] in <61a20cde294d4a3eb43b9d9f6284613b>:0 
  at (wrapper remoting-invoke-with-check) System.Data.SQLite.SQLiteDataReader..ctor(System.Data.SQLite.SQLiteCommand,System.Data.CommandBehavior)
  at System.Data.SQLite.SQLiteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x0000c] in <61a20cde294d4a3eb43b9d9f6284613b>:0 
  at System.Data.SQLite.SQLiteCommand.ExecuteScalar (System.Data.CommandBehavior behavior) [0x00006] in <61a20cde294d4a3eb43b9d9f6284613b>:0 
  at System.Data.SQLite.SQLiteCommand.ExecuteScalar () [0x00006] in <61a20cde294d4a3eb43b9d9f6284613b>:0 
  at Marr.Data.QGen.InsertQueryBuilder`1[T].Execute () [0x00046] in C:\BuildAgent\work\5d7581516c0ee5b3\src\Marr.Data\QGen\InsertQueryBuilder.cs:140 
  at Marr.Data.DataMapper.Insert[T] (T entity) [0x0005d] in C:\BuildAgent\work\5d7581516c0ee5b3\src\Marr.Data\DataMapper.cs:728 
  at NzbDrone.Core.Datastore.BasicRepository`1[TModel].Insert (TModel model) [0x0002d] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Datastore\BasicRepository.cs:111 
  at NzbDrone.Core.Messaging.Commands.CommandQueueManager.Push[TCommand] (TCommand command, NzbDrone.Core.Messaging.Commands.CommandPriority priority, NzbDrone.Core.Messaging.Commands.CommandTrigger trigger) [0x0013d] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Messaging\Commands\CommandQueueManager.cs:82 
  at System.Dynamic.UpdateDelegates.UpdateAndExecute4[T0,T1,T2,T3,TRet] (System.Runtime.CompilerServices.CallSite site, T0 arg0, T1 arg1, T2 arg2, T3 arg3) [0x00136] in <35ad2ebb203f4577b22a9d30eca3ec1f>:0 
  at (wrapper delegate-invoke) System.Func`6[System.Runtime.CompilerServices.CallSite,NzbDrone.Core.Messaging.Commands.CommandQueueManager,System.Object,NzbDrone.Core.Messaging.Commands.CommandPriority,NzbDrone.Core.Messaging.Commands.CommandTrigger,System.Object].invoke_TResult_T1_T2_T3_T4_T5(System.Runtime.CompilerServices.CallSite,NzbDrone.Core.Messaging.Commands.CommandQueueManager,object,NzbDrone.Core.Messaging.Commands.CommandPriority,NzbDrone.Core.Messaging.Commands.CommandTrigger)
  at NzbDrone.Core.Messaging.Commands.CommandQueueManager.Push (System.String commandName, System.Nullable`1[T] lastExecutionTime, NzbDrone.Core.Messaging.Commands.CommandPriority priority, NzbDrone.Core.Messaging.Commands.CommandTrigger trigger) [0x000b7] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Messaging\Commands\CommandQueueManager.cs:95 
  at NzbDrone.Core.Jobs.Scheduler.ExecuteCommands () [0x00043] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Jobs\Scheduler.cs:42 
  at System.Threading.Tasks.Task.InnerInvoke () [0x0000f] in <6649516e5b3542319fb262b421af0adb>:0 
  at System.Threading.Tasks.Task.Execute () [0x00000] in <6649516e5b3542319fb262b421af0adb>:0 

 

Is there a systemic problem with LSIO and Unraid 6.7.0, or is there something wrong with Sonarr/Radarr/Lidarr?

Link to comment
2 hours ago, ptr727 said:

Hi, I am having DB corruption issues with Lidarr and Sonarr.

[...]

Something local to you is my guess.

Post your docker run command.

might be you have a bad drive.

Link to comment
7 hours ago, saarg said:

Something local to you is my guess.

Post your docker run command.

might be you have a bad drive.

It could be my system, but then I would expect to see other symptoms, and I am not.

It could just be Sonarr that messes up its own DB, and maybe it has nothing to do with Docker or the config path mapping.

Where can I get a log of my docker run command without recreating the container?

 

Link to comment
1 hour ago, trurl said:

Why do you not want to recreate the container?

Because it is currently running, and to get the logs as suggested, I need to stop and recreate the container.

I was hoping there would be a historic log file I could reference; it seems important enough to log?

Link to comment
2 hours ago, Squid said:

What's the big deal about that?

Big deal, I don't know, who said anything about a big deal?

I just asked if there is an alternative to recreating the container to get the log, and since getting the log is such a big deal (see what I did there ;) ), I suggested an enhancement to make the details end up in the logs.

 

Btw, I found several other reports of DB corruption involving Sonarr, Radarr, Docker, and Unraid; it could be coincidental.

E.g.

https://github.com/Sonarr/Sonarr/issues/1886

https://github.com/docker/for-win/issues/1385

https://forums.sonarr.tv/t/nzbdrone-db-constant-corruption-docker/17658

https://forums.sonarr.tv/t/database-file-config-nzbdrone-db-is-corrupt-restore-from-backup-if-available/21928

 

 

And this FAQ: 

https://github.com/Sonarr/Sonarr/wiki/FAQ#i-am-getting-an-error-database-disk-image-is-malformed
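For reference, the first step that FAQ suggests is an integrity check against the database file. Something like this should do it (assuming the sqlite3 CLI is available on the host, and the v2 database name nzbdrone.db from the threads above):

docker stop sonarr
sqlite3 /mnt/user/appdata/sonarr/nzbdrone.db "PRAGMA integrity_check;"
# "ok" means the file is structurally sound; anything else confirms corruption
docker start sonarr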

 

I'll report the docker log when I get home.

Edited by ptr727
More references
Link to comment

Here is my command:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='sonarr' --net='br0' --ip='192.168.1.16' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'TCP_PORT_8989'='8989' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/download/':'/downloads':'rw' -v '/mnt/user/media/':'/media':'rw' -v '/mnt/user/appdata/sonarr':'/config':'rw' 'linuxserver/sonarr' 
31eff3d78164112cccc1355564430e0f218094f8dc062d58488b6b3e56bbc260

I do not currently have any cache drives.

I use /mnt/user/appdata for /config.

 

Btw, I now notice that Plex is also misbehaving. Plex was unresponsive, so I restarted the container:

...
Sqlite3: Sleeping for 200ms to retry busy DB.
Sqlite3: Sleeping for 200ms to retry busy DB.
Sqlite3: Sleeping for 200ms to retry busy DB.
Sqlite3: Sleeping for 200ms to retry busy DB.
Sqlite3: Sleeping for 200ms to retry busy DB.
Sqlite3: SleepinCritical: libusb_init failed

[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.


[cont-init.d] 60-plex-update: exited 0.
[cont-init.d] 99-custom-scripts: executing...
[custom-init] no custom scripts found exiting...
[cont-init.d] 99-custom-scripts: exited 0.
[cont-init.d] done.
[services.d] starting services
Starting Plex Media Server.
[services.d] done.
Starting Plex Media Server.
Starting Plex Media Server.
Starting Plex Media Server.
Starting Plex Media Server.
Starting Plex Media Server.
Starting Plex Media Server.
...

 

Edited by ptr727
Plex DB failure.
Link to comment

Then your issues are related to using the array for the appdata. The problem is that databases do not play well with the fuse system on unraid (or mergerfs on other systems).

You should change the appdata to go directly to a disk instead of /mnt/user. So you use /mnt/diskX/share/for/appdata. Replace the X with the disk you want to use.

After this you should have no issues with containers using databases.
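In docker terms that is just changing the /config volume mapping from the user share to a specific disk, e.g. (disk1 is only an example):

-v '/mnt/user/appdata/sonarr':'/config':'rw'   # before: user share (fuse)
-v '/mnt/disk1/appdata/sonarr':'/config':'rw'  # after: direct disk path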

  • Like 1
Link to comment
1 hour ago, saarg said:

Then your issues are related to using the array for the appdata. [...]

Wow, really? That seems like a major problem for any code that relies on file system locking to work like a local fs.

If I am now forced to use a single disk for any app that relies on fs locking, it also breaks the intent of a resilient filesystem, i.e. my app dies when the disk dies, vs. my app dies when disk plus parity dies?

If I did add a cache, would BTRFS be impacted by fuse? I assume not.

Any pointers to docs about the issue with fuse and locking, or plans to address the issue?

Link to comment
Wow, really? That seems like a major problem for any code that relies on file system locking to work like a local fs. [...]
It's not our issue to fix, and by design appdata is meant to reside on cache. It makes far more sense to have appdata localised to one disk anyway, and no real sense to write it directly to the array, as that slows things down considerably and causes issues.

Also, a protected array is not the same as backups; appdata should be backed up, not left relying on parity protection.
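A minimal way to do that while the database is quiescent (a sketch; the container name and backup destination are just examples):

docker stop sonarr
# archive the whole appdata folder while nothing is writing to the DB
tar czf /mnt/user/backups/appdata-sonarr-$(date +%F).tar.gz -C /mnt/user/appdata sonarr
docker start sonarr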

Sent from my Mi A1 using Tapatalk

Link to comment
2 minutes ago, CHBMB said:

It's not our issue to fix, and by design appdata is meant to reside on cache. [...]
 

With "our issue", do you mean Unraid or LSIO, yes, not a LSIO issue, but absolutely an Unraid issue, breaking fs locking is a big no-no?

 

<rant-on> If appdata is "designed" to be on cache, then I never saw any pre-purchase docs telling me that, without a cache, Unraid will break any app that requires fs locking to, you know, work, before I spent good money buying two pro licenses and converting two working systems to Unraid. <rant-off>

 

As for it only breaking sqlite: it probably breaks any code that relies on filesystem locking semantics to work; sqlite in docker is just the most prevalent case.

Now that I know what to search for, Google and GitHub are full of reports of docker-based Sonarr, Radarr, Plex, etc. SQLite corruption on Unraid.

 

I don't like the idea of having to use a dedicated disk to bypass fuse breaking locking. Two alternatives come to mind: change the sqlite locking semantics used in docker apps running on unraid, or, obviously, have unraid fix locking.

Have there been any attempts at collaboration with Sonarr / Radarr / Lidarr / SQLite to try and find a sqlite configuration that works on unraid fuse?
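For what it's worth, SQLite's locking behaviour is tied in part to its journal mode, which is easy to inspect (a sketch; the nzbdrone.db name and path are assumed, and I'm not claiming WAL is a verified fix):

sqlite3 /mnt/user/appdata/sonarr/nzbdrone.db "PRAGMA journal_mode;"
# prints the current mode, e.g. 'delete' (rollback journal) or 'wal' (write-ahead log)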

Link to comment

It's not a ls.io issue. It doesn't break anything, it just requires addressing the disk directly.

As to whether the actual issue is the kernel, Unraid, Docker, Sqlite or Sonarr/${Application}, I have no idea, and to be honest, I don't see it as a significant issue.

Why you would want all your appdata for Sonarr spread across an array anyway is beyond me, and I would recommend keeping it all on one disk for that very reason.

I think you see this as a bigger deal than it actually is.

Also, for some reason it doesn't seem to affect everyone: some people get away with it with no problems at all, others don't, so you could probably throw hardware into the mix as a possible cause.

If you wish to go down the rabbit hole to try and get to the bottom of it, feel free.

Sent from my Mi A1 using Tapatalk

Link to comment
2 hours ago, ptr727 said:

If I am now forced to use a single disk for any app that relies on fs locking, it also breaks the intent of a resilient filesystem, i.e. my app dies when the disk dies, vs. my app dies when disk plus parity dies?

You misunderstand parity protection as it applies to single disks. The mdX devices are protected by parity, so there is no difference in protection whether you use /mnt/user/appdata or /mnt/diskX/appdata. In either case a single disk failure is still emulated by parity. The only thing you are losing is the ability to automatically spread the ../appdata folder across multiple disks and use a single point of access, /mnt/user/appdata.

 

BTW, parity doesn't provide a resilient filesystem, only device failure protection. Each disk has a separate independent file system. The /mnt/user fuse is just the combination of all the root folder paths on each separate disk.

  • Like 1
  • Upvote 1
Link to comment
10 minutes ago, jonathanm said:

You misunderstand parity protection as it applies to single disks. [...]

Ok, I didn't know that individual disks remain protected.

 

As for the rabbit hole: the fact that it works for some users and not for others makes it even scarier to me; in my mind a fs is foundational, and it needs to be rock solid, always. I now regret not giving the ZFS based solutions a second look.

Link to comment
8 minutes ago, ptr727 said:

in my mind a fs is foundational, and it needs to be rock solid, always. I now regret not giving the ZFS based solutions a second look.

Since each disk in unraid has an independent file system, you can choose BTRFS or XFS freely, even have different file systems on the same array. The fuse overlay isn't a file system, more like a hard link type setup. That's why it can be advantageous to address the disks individually for some situations. Many of us that have used unraid for many years choose to manage our files on a disk by disk basis, and use the /mnt/user tree as a convenient way to present the files to users and applications, all while maintaining control at the disk level.

 

It's a very powerful system that allows multiple different sized disks with seamless expansion, not something that ZFS or other pure RAID systems can offer. Different philosophies, different strengths.

 

Name another setup that, when faced with multiple member failures beyond the RAID recovery threshold, still allows easy data recovery from the intact remaining disks. For example, with Unraid, no matter how many disks die, you only lose data on the dead disks.

Link to comment
35 minutes ago, ptr727 said:

I didn't know that individual disks remain protected.

Just so there is no misunderstanding. With single parity, a single failure can be rebuilt from the parity calculation by reading parity PLUS ALL of the remaining disks. Parity by itself protects nothing.
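To make that concrete: single parity is (conceptually) a bytewise XOR across the data disks, so a rebuild needs parity plus every surviving disk. A toy illustration in shell arithmetic:

d1=$(( 0x5A )); d2=$(( 0x3C ))   # two data disks, one byte each
parity=$(( d1 ^ d2 ))            # parity byte is the XOR of all data bytes
rebuilt=$(( parity ^ d2 ))       # disk 1 dies: XOR parity with ALL remaining disks
echo $rebuilt                    # 90 = 0x5A, the lost byte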

 

And as mentioned, parity is not a substitute for backups, whether in Unraid or some RAID implementation.

Link to comment
22 minutes ago, jonathanm said:

Since each disk in unraid has an independent file system, you can choose BTRFS or XFS freely, even have different file systems on the same array. [...]

Flexible, yes, exactly the reason I chose to go from hardware RAID6 to Unraid.

Dead disk, single disk loss: don't care. A partial loss, when viewed as a whole, is still a loss when consistent-state recovery can only be done by restoring from backup.

But if fuse breaks file locking that normally works on the underlying XFS disks, then my view of the logical fs is broken.

 

I know I'm starting to sound like a broken record, and I don't want my feelings to reflect negatively on the great work done by lsio, but if it really is fuse that breaks file locking, then that is a big deal to me, and I'll dig down the rabbit hole ;)

Link to comment
7 hours ago, saarg said:

Then your issues are related to using the array for the appdata. [...]

I've actually got some questions around this and they're probably going to sound fairly stupid lol.

 

I wasn't aware that appdata shouldn't be stored on /mnt/user; the docker apps typically defaulted to that, and I assumed that was okay. I haven't encountered any corruption issues (yet), but of course I want to avoid that potential. I do have a cache drive, so should I be storing all my appdata on the cache, or simply move the appdata to a specific disk? (I've sketched what I guess the cache versions would look like after the mappings below.)

 

FWIW I have a 10TB parity drive; 10TB, 6TB, and 2x3TB drives for storage; and a 500GB SSD for the cache.

 

My Plex setup is:

/media -> /mnt/user/Storage

/transcode -> /mnt/cache/appdata/plex/transcode

/config -> /mnt/user/appdata/plex

 

Radarr

/downloads -> /mnt/user

/movies -> /mnt/user/Storage/Movies

/config -> /mnt/user/appdata/radarr

 

Sonarr

/config -> /mnt/user/appdata/sonarr

/tv -> /mnt/user/Storage/

/downloads -> /mnt/user/

 

Deluge

/downloads -> /mnt/user

/config -> /mnt/user/appdata/binhex-delugevpn
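If cache is the answer, I assume the /config mappings above would become something like this (just my guess at the intended layout, assuming the appdata share is set to stay on the cache):

/config -> /mnt/cache/appdata/plex
/config -> /mnt/cache/appdata/radarr
/config -> /mnt/cache/appdata/sonarr
/config -> /mnt/cache/appdata/binhex-delugevpn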

 

I realize this question is a bit more generic and not sonarr specific, so I hope this is okay.

 

Thanks in advance

 

 

Edit: If I was to move all my appdata from /user/ to /diskX (or cache?), would it move the existing data, or should I back up the appdata folders, repoint my docker config, and then copy the configs and such over?

Edited by NVS1
Link to comment
2 hours ago, NVS1 said:

If I was to move all my appdata from /user/ to /diskX (or cache?), would it move the existing data,

A very likely outcome to this would be that you would actually ERASE all your appdata.

 

DO NOT DO ANY OF WHAT YOU ARE PROPOSING.

 

/mnt/user/appdata is already on one of your disks, which one depends on the appdata share settings.

 

Learn about how unraid works behind the scenes first; afterwards you can mess around with this type of stuff.
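In the meantime, if you're curious which disk the files are actually on right now, you can look without changing anything:

ls -d /mnt/disk*/appdata /mnt/cache/appdata 2>/dev/null
# whichever of those paths exist are where the appdata share's files physically live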

Link to comment
2 hours ago, jonathanm said:

A very likely outcome to this would be that you would actually ERASE all your appdata. [...]

Thanks for the heads up. Which setting are you referring to when you say it depends on the appdata share settings?

 

Under Global Share Settings, I have Enable Disk Shares set to Auto, User shares set to Yes, and Included Disks set to All.

 

Is there another setting that you were referring to?

 

Edit: Under the Shares section, the appdata share is also configured to use All Disks.

 

[Screenshots: Global Share Settings and appdata share settings]

 

Should I be changing this to only a specific disk? Can I do this safely without losing data?

Edited by NVS1
Link to comment
