Phastor

Members
  • Content Count: 62
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About Phastor
  • Rank: Advanced Member

  1. I cannot for the life of me get Calibre-Web to see ebook-convert. I pulled the binary from a Calibre install, tossed it into /config within the container, and pointed Calibre-Web at its path. It's still reporting as not installed. I consoled into the container and ran "ebook-convert --version" to see if it was actually functional at all, and it returned an error about missing modules. Does it require dependencies? Does Calibre as a whole have to be installed within the container?
  2. MXToolbox is reporting that my server does not support TLS. My knowledge in this sort of thing is limited, but I think I have pinpointed the problem. After issuing the EHLO command myself, I got the following response (a quick way to check this from a script is sketched after this list):

        250-PIPELINING
        250-8BITMIME
        250-SMTPUTF8
        250-SIZE 25214400
        250 STARTTLS

     That last line is what draws my attention. It has a space instead of a dash. MXToolbox is expecting "250-STARTTLS", and I'm guessing that's why it's marking TLS as not supported, since that exact string isn't in the response it's getting. I imagine this is something more for the original developer of the software to deal with--just hoping that it makes its way up the chain from here.
  3. I've had a pretty stable set of backups for about 8 months now. Aside from the incredibly slow restore process (browsing folders within the backup is painful!), I've been pretty happy with it. However, I really wanted some of the newer features, such as improved performance during restores, so I switched to the canary build. I was not expecting my backup config to get wiped when I did this. Going back to the stable container returned the configs after some tweaking. Is there a safe way I can move my configs over to the canary build?
  4. I'm confused about which version of Duplicati the container is running. From what I understand, they made a lot of performance improvements a few months ago as far as browsing your backups goes, but I am still having terrible results that seem to go back and forth with each update of the container. On average, after hitting "restore files," it takes about 8-10 minutes to finally be presented with a directory structure I can browse. It then takes 2-3 minutes of thinking for every folder I drill down into in that structure. On some updates it gets better, as in it only takes a couple of minutes to present the directories and then about thirty seconds to drill down into each folder. But then, like in the latest update, it's back to taking several minutes again. Throughout all of this, the actual version of Duplicati in the container has not changed and is reporting as 2.0.3.3_beta_2018-04-02.
  5. How far back is the version of Duplicati in the container? Looking under "About", I'm seeing a version dating back to last August. Is this accurate? If so, what kind of time window are we looking at for the docker container to be on the version that was released today? There have been some huge performance improvements in the latest version that I have been waiting on for months--namely the speed at which you can browse directories within your backups. Currently I am getting backups, but I wouldn't be able to use them if I needed them, since it literally takes like 15 minutes to drill down into each individual directory. Getting this update would give me some peace of mind. Using the "check for updates" function within Duplicati itself detects the newest version that was released today. Is it possible to use the in-app update function, or would that break the container? Edit: It's occurred to me that hitting "download now" may not be a self-updater, but may instead take you to a download page to get an installer, which I know would be useless in this case. I haven't tried it yet in case it actually is a self-updater and would break the container.
  6. Stumped by such a simple and obvious thing. Thanks for pointing it out! It didn't like that I left the host path for the watch directory blank. I just removed that path entry entirely and it went through. Strange, since this was the same template, blank watch path included, that I used to install it the first time, when it worked.
  7. This happened a while back; at the time I just installed a different version of it, which turned out to be more to my liking anyway. Now it has happened again: the container has become orphaned. This time it's the one by jlesage. After removing the orphan and re-installing, it is immediately orphaned again. What does it mean when this happens?
  8. In the Dashboard tab, we get a little summary list of the users and how many shares they have read and write access to. However, unless I am missing something, the only way to see which shares they have read/write access to is to look at each share individually. This is a bit tedious. I think it would be useful to have this information also present in the Users tab, with more details shown when you click on a user. I'd like to be able to click on one of my users and see a detailed list of the shares they have access to and what permissions they have on each.
  9. Upgraded to 6.4, and after rebooting I went to start up my VMs and docker containers, only to find they weren't there. Plugins were still intact. Going into my main tab, I'm greeted with "Unmountable:Unsupported partition layout" where my cache drive is supposed to be. The cache is formatted xfs just like the other drives and worked prior to the upgrade. Any ideas?
  10. I have split levels on that share set to only split the top-level directory, so it shouldn't be writing anything under a movie's subfolder outside of the disk it's already on, but reading this prompted me to take a deeper look into it and I think I see what's going on. Radarr is also changing the name of the parent directory of each movie file to match the filename.

      Tell me if this is what could be happening: I tell Radarr to rename a movie in /user/Movies/. The data for this movie is currently on Disk1. Radarr first creates a new folder corresponding to the new name instead of renaming the existing folder. Following allocation and split levels, unRAID sees this as a new second-level folder and writes it to Disk2. Radarr then renames the movie and moves it into the new folder under /user/Movies/. unRAID knows this file is already on Disk1, so instead of trying to move it over to Disk2, it creates the folder on Disk1 as well and moves the file there. After the movie is moved, Radarr deletes the old folder in /user/Movies/, causing the old folder to be removed from Disk1 (this folder never existed on Disk2). Both instances of the new folder are seen as valid by unRAID, so the one on Disk2 remains as well as the one on Disk1 where the file is. The one on Disk2 will always remain empty, though, since split levels within this share will keep everything within that folder on Disk1. Does this about sum it up?

      If this is truly what's happening, I can expect to see it happen again in the future. Given the nature of the problem (if this is it), I don't think I can do anything within unRAID to prevent it. However, it's still an undesirable effect that could make things confusing for me down the road, so I would like to address it. It's a known cardinal sin to modify data on the individual disks manually, but would it hurt to recursively delete all empty folders on each individual disk? Is there possibly a plugin or some other tool that cleans up redundant empty folders on individual disks like this? (A rough sketch of that kind of cleanup follows after this list.)
  11. I have a "Movies" share that is allocated to two disks using high-water. The structure for this share is as follows:

        Movies
        --Movie Title (Year)
        ----Movie Title (Year).mkv

      Disk1 has reached half capacity, and so now unRAID is putting new files onto Disk2. I just recently did a bulk renaming of a bunch of movies with Radarr, most of which were on Disk1. I have noticed that since doing that, Disk2 now has an empty folder for every movie that I renamed. The files aren't there--those are where they should be, within the folders on Disk1. However, for some bizarre reason, unRAID is creating folders for these files on Disk2 if they have been changed.

      I'm guessing this will have no impact on functionality, since it's not making duplicates of the actual files themselves? However, I really don't want this to happen, for a specific reason. If "/Movies/Bladerunner/" exists on both disks, but "/Movies/Bladerunner/Bladerunner.mkv" exists on Disk1 and Disk1 fails, then "/Movies/Bladerunner/" will remain on Disk2 and continue to show up in /user/Movies/ even though the file is now gone with the failure of Disk1. I will not be able to easily tell which movies I have lost. I do have parity, but I'm thinking of a worst-case scenario where parity fails and backup recovery fails.

      To clarify, all of the renaming and file editing is being done from the /user/Movies share. I'm just observing what happens to the individual disks when I do so. Is there a way I can prevent these redundant empty folders from being created like this? If not, is there a safe way I can bulk remove them from the individual disks?
  12. Gotcha. I thought it would recursively scatter all individual files. Thanks for the clarification!
  13. Thanks! Here's a scenario I was thinking of. I currently have a share called movies. Under that is a folder for each movie, where the movie itself, subs, posters, etc. reside. I have split levels set to only split the top-level directory. This way each movie resides on the same disk as its subs and whatnot. I wouldn't want a movie and its related files stretched across multiple disks. If I felt the need to scatter the movies folder, say after I've gathered it to one disk for whatever reason, I would want scatter to follow the split levels so that, again, the individual movies would stay together with their respective files. Or am I misunderstanding how this works?
  14. Any plans for it in the future, if it's even possible?
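
A note on the STARTTLS response in item 2: in a multi-line SMTP reply, every line except the last is prefixed "250-" and the final line is prefixed "250 " (with a space), so a server that ends its EHLO response with "250 STARTTLS" is still advertising the extension. The following is only a minimal sketch, not anything from MXToolbox or the mail server's own tooling; it assumes Python 3 and a hypothetical hostname mail.example.com reachable on port 25, parses the EHLO reply the standard way, and then attempts the TLS upgrade.

    import smtplib

    HOST = "mail.example.com"  # hypothetical hostname; substitute the real server
    PORT = 25

    with smtplib.SMTP(HOST, PORT, timeout=10) as smtp:
        code, reply = smtp.ehlo()
        print(code, reply.decode(errors="replace"))

        # smtplib folds the multi-line reply itself, so "250-STARTTLS" and
        # "250 STARTTLS" (space on the final line, per RFC 5321) are treated alike.
        if smtp.has_extn("starttls"):
            smtp.starttls()  # upgrade the connection to TLS
            smtp.ehlo()      # re-issue EHLO over the encrypted channel
            print("TLS negotiated:", smtp.sock.version())
        else:
            print("Server did not advertise STARTTLS")

If this script negotiates TLS successfully while MXToolbox still reports no TLS support, that would back up the guess that the checker is keying on the exact "250-STARTTLS" string rather than on the extension itself.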
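
On the empty-folder question in items 10 and 11: I'm not aware of this existing as a stock unRAID feature, so below is only a rough sketch of the kind of cleanup being asked about. It assumes the individual data disks expose the share at hypothetical paths like /mnt/disk1/Movies and /mnt/disk2/Movies, and it only reports by default; nothing is deleted unless DRY_RUN is changed to False, and the paths should be double-checked against the actual system first.

    import os

    # Hypothetical per-disk share paths; adjust to match the actual system.
    DISK_PATHS = ["/mnt/disk1/Movies", "/mnt/disk2/Movies"]
    DRY_RUN = True  # set to False only after reviewing the reported folders

    def remove_empty_dirs(root):
        """Walk bottom-up and collect (optionally remove) directories with nothing in them."""
        removed = []
        for dirpath, dirnames, filenames in os.walk(root, topdown=False):
            if dirpath == root:
                continue  # never touch the share folder itself
            if not os.listdir(dirpath):  # nothing left inside this folder
                removed.append(dirpath)
                if not DRY_RUN:
                    os.rmdir(dirpath)
        return removed

    for disk_share in DISK_PATHS:
        if not os.path.isdir(disk_share):
            continue
        for path in remove_empty_dirs(disk_share):
            print(("would remove: " if DRY_RUN else "removed: ") + path)

With DRY_RUN left on, folders that only contain other empty folders are not reported, since nothing has actually been removed yet; running it for real handles those because the walk is bottom-up.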