asbath

Recent posts
  1. I think it worked! After rebooting, I can see the "Airflow" widget on the dashboard, and the Dynamix Fan Auto Control app sees the qnap_ec pwm fan controllers right away; I didn't need to fiddle with the terminal to get them to show up. Thank you! If I may ask, what changes did you make to fix this? And is this a global fix for all TS-464s in the qnap-ec plugin?
  2. Ran the binary as requested; here's the output: Byte 1: FF, Byte 2: FF
  3. Here's a diagnostics file from this morning. Thank you! unraid-diagnostics-20240426-0934.zip
  4. Will do. To confirm, get the Diagnostics right after a reboot, or can I grab that anytime? I assume right after reboot, since that's when it fails to load.
  5. Thanks for posting this. I was having the same issues with my TS-464 and qnap-ec not surviving reboots. I've followed your steps, manually removing the qnap-ec.config file and creating the qnap-ec.conf file, and I'm hoping that this resolves the issue. I'm also moving a large amount of data right now (about 12TB), so we'll see if this all survives the next reboot.
  6. I did this (stopped both the Docker and VM services) and then ran the Mover. It seemed to move everything over; well, almost everything. There seems to be about 2GB remaining in the array (vs. 160GB moved to the cache), so I will leave those files alone; I assume they will just be taken care of automatically by the system. It's not broke, so I'm not going to try to fix it! Thank you very much for the pointers; they were helpful.
  7. So that is probably where I went wrong, and things went very wrong after it. When I ran the cp command, it did in fact copy over the 50GB of files. But I guess, like you said, and now that I've read up more on it, they are basically the same place, just controlled by the system. I'm also reading up on and understanding the cache system better. Currently I have Primary storage = cache, Secondary storage = array, Mover = Cache > Array. To me this means that the system will fill up the cache drive until it's full, then start putting things in the array; meanwhile, the Mover, when invoked, will move data from the cache to the array. What I think I will try next is to leave the Primary and Secondary storage locations as is, but set the Mover to Array > Cache, and manually invoke the Mover. This, I believe, will tell the system I want to move everything in appdata from the array to the cache. After that, I should be able to set Secondary storage = None.
  8. @JonathanM thank you for the link. I'd already watched that video and read through a few threads/Reddit topics to get an idea, hence why I was still using the old "Cache = Yes" terminology. I've installed the SSD already and gone through some trial and error, only to screw things up but resolve the issue with a nuke-and-pave approach. To install the SSD, I stopped the array, added the SSD to a new pool called "cache", and then started the array again. All was fine and dandy at this point. I then went to the "appdata" share and saw that Unraid had automatically set it to Primary = cache, Secondary = array. Perfect! However, I wanted to move everything from the array to the cache so that it would never use the array, and this is where things went awry. I set the appdata share to Mover = Array > Cache, then disabled the Docker service (Settings > Docker > Enable = No), then went to the Scheduler settings to manually invoke the Mover. I thought this would move the files to the cache. It didn't. At this point I wasn't sure if it had failed or if this is simply not what the Mover was supposed to do. So I figured, easy enough, I'll just use cp to manually bring the files over from /mnt/user/appdata/* to /mnt/cache/appdata/*. That went swimmingly. Then I started the Docker service again, and the crap hit the fan: I got a "Docker service failed to start" error on the Docker tab. To keep a long story short, almost nothing I did could remedy that error. What did fix it was reverting everything back to the pre-SSD-cache state, so I backpedaled until the SSD was reformatted and then re-added to a cache pool. Docker works, containers load, and everything is running from the array (no cache). Now I have appdata configured as Primary = cache, Secondary = array. I'm leaving it as such while I research more into how to move things to the cache.
Right now the Mover is doing nothing because the cache is barely 50GB full out of the 1TB limit. I'm thinking the Mover may never need to be invoked, because I doubt I'd ever go above, say, 500GB on the cache drive: the container that uses the most space is Plex, and even then I don't have a gigantic library, so its metadata and databases should be relatively small (i.e. less than 200GB). So am I correct in thinking that just leaving the appdata share at "Primary = cache, Secondary = array" is the equivalent of having all Docker containers running strictly off the SSD? I know that only new files and folders will run off the SSD, but I assume that in the background Unraid is silently moving files over to the cache drive?
  9. Hello! First-time poster, kind of long-time lurker. Please be gentle. My question is about setting up an SSD as a cache shared between my Docker containers and Plex. What I'd like to do is have the Docker containers running off the cache using the "Cache = Yes" option so that the Mover does its scheduled thing, but I'd also like my Plex container to use the SSD cache for transcoding as well. Is it just a matter of setting up the SSD cache in the array, then pointing the applicable shares to use the cache? If I only want the appdata share to use the cache (i.e. only the Docker containers), then that's the only share I should modify, correct? If for Plex I want everything stored on the SSD (not the media itself, just the Plex container and all of its associated files, such as metadata and transcoding), is that done automatically when I set the appdata share to use the cache? And if I don't specify a transcoding location in the Plex container config, I assume it will automatically use the Plex appdata folder on the cache? Or should I just point Plex transcoding at RAM instead? In case you need to know anything about the setup:
4 HDDs in the array (2x 14TB data, 1x 18TB parity, 1x 18TB hot spare)
32GB RAM
1TB SSD
Thanks!
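
The config-file fix mentioned in post 5 can be sketched as a small shell function. The post gives only the two file names (qnap-ec.config removed, qnap-ec.conf created); the directory below and the idea of creating the new file empty are assumptions for illustration, so check where the plugin actually keeps its config on your box.

```shell
#!/bin/sh
# Sketch of the fix from post 5: drop the stale qnap-ec.config and create a
# fresh qnap-ec.conf so the plugin can repopulate its settings on the next
# boot. The directory passed in is an assumption, not from the thread.

fix_qnap_ec_conf() {
    conf_dir="$1"
    # remove the old-style config file if it is present
    [ -f "$conf_dir/qnap-ec.config" ] && rm "$conf_dir/qnap-ec.config"
    # create an empty new-style conf for the plugin to fill in
    touch "$conf_dir/qnap-ec.conf"
}

# Example with a hypothetical Unraid plugin path:
# fix_qnap_ec_conf /boot/config/plugins/qnap-ec
```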
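
For anyone puzzling over the array-to-cache move discussed in posts 6-8: on a live server the built-in Mover is the right tool (stop the Docker and VM services, set the share's mover direction, invoke it from the scheduler). Its basic file-level behaviour, though, can be sketched in plain shell, which also illustrates why a couple of GB can be "left behind" when the destination already has a file of the same name. This is an illustration only, with made-up paths, not a replacement for the Mover.

```shell
#!/bin/sh
# Rough sketch of one mover pass: walk the source tree, recreate the
# directory structure on the destination, move each file, and skip any
# file that already exists at the destination (those are the stragglers
# that stay behind on the source side).
move_tree() {
    src="$1" dst="$2"
    (cd "$src" && find . -type f) | while IFS= read -r f; do
        rel="${f#./}"
        if [ -e "$dst/$rel" ]; then
            continue                      # destination wins: skip this file
        fi
        mkdir -p "$dst/$(dirname "$rel")"
        mv "$src/$rel" "$dst/$rel"
    done
}

# Example with hypothetical paths. Note: as post 8 found out the hard way,
# don't hand-copy between /mnt/user and /mnt/cache for the same share --
# they overlap. Use the built-in Mover on a real server.
# move_tree /mnt/disk1/appdata /mnt/cache/appdata
```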
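
On the last question in post 9 (pointing Plex transcoding at RAM): one common way to do that with Docker is a tmpfs mount on the container's transcode directory, so transcode scratch files never touch the SSD. This is a configuration sketch, not from the thread; the image name, mount paths, and 4g size are examples to adapt.

```shell
# Sketch: run Plex with a RAM-backed /transcode via a tmpfs mount.
# Paths, size, and image are illustrative assumptions, not from the post.
docker run -d --name plex \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/media:ro \
  --tmpfs /transcode:size=4g \
  plexinc/pms-docker
# Then, inside Plex, set Settings > Transcoder > "Transcoder temporary
# directory" to /transcode so it actually uses the tmpfs.
```

Keep the tmpfs size comfortably below available RAM (the setup in post 9 has 32GB), since tmpfs contents live in memory.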