Everything posted by Phastor

  1. I cannot for the life of me get Calibre-Web to see ebook-convert. I pulled the binary from a Calibre install, dropped it into /config within the container, and pointed Calibre-Web at that path. It's still reporting as not installed. I consoled into the container and ran "ebook-convert --version" to see if it was functional at all, and it returned an error about missing modules. Does it require dependencies? Does Calibre as a whole have to be installed within the container?
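     For reference, here's roughly what I'm planning to try next, assuming the missing-modules error means the lone binary needs the rest of the Calibre install sitting alongside it (the tarball name and appdata path below are placeholders for my setup):
       # Extract a full Calibre Linux build (the binary plus its bundled libraries)
       # into the folder mapped to the container's /config, instead of copying
       # only the ebook-convert binary by itself.
       mkdir -p /mnt/user/appdata/calibre-web/calibre
       tar xf calibre-x86_64.txz -C /mnt/user/appdata/calibre-web/calibre
       # Then point Calibre-Web's converter setting at /config/calibre/ebook-convert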
  2. MXToolbox is reporting that my server does not support TLS. My knowledge in this sort of thing is limited, but I think I have pinpointed the problem. After issuing the EHLO command myself, it returned the following:
     250-PIPELINING
     250-8BITMIME
     250-SMTPUTF8
     250-SIZE 25214400
     250 STARTTLS
     That last line is what draws my attention. It's got a space instead of a dash. MXToolbox is expecting "250-STARTTLS", and I'm guessing that's why it's marking TLS as not supported, since that exact string isn't in the response it's getting. I imagine this is something more for the original developer of the software to deal with--just hoping that it makes its way up the chain from here.
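     A quick way to double-check from the command line is to let openssl attempt the STARTTLS negotiation itself (the hostname below is a placeholder for the actual server):
       # Ask openssl to issue EHLO + STARTTLS and attempt the TLS handshake on port 25.
       openssl s_client -connect mail.example.com:25 -starttls smtp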
  3. I've had a pretty stable set of backups for about 8 months now. Aside from the incredibly slow restore process (browsing folders within the backup is painful!), I've been pretty happy with it. However, I really wanted some of the newer features, such as improved performance during restores, so I switched to the canary build. I was not expecting my backup config to get wiped when I did this. Going back to the stable container returned the configs after some tweaking. Is there a safe way I can move my configs over to the canary build?
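     In case it helps, this is roughly how I picture carrying them over, assuming the backup definitions live in the server database under the container's /config (the file name and appdata paths below are my assumptions, not confirmed):
       # Fallback first: export each backup's configuration to a file from the
       # web UI's Export option, so the definitions exist outside the container.
       # Then, with both containers stopped, copy the server database across.
       cp /mnt/user/appdata/duplicati/Duplicati-server.sqlite \
          /mnt/user/appdata/duplicati-canary/Duplicati-server.sqlite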
  4. I'm confused about which version of Duplicati the container is running. From what I understand, they made a lot of performance improvements a few months ago in how you browse your backups, but I am still having terrible results that seem to go back and forth with each update of the container. On average, after hitting "restore files," it takes about 8-10 minutes to finally be presented with a directory structure I can browse, and then 2-3 minutes of thinking for every folder I drill down into. On some updates it gets better, as in it only takes a couple of minutes to present the directories and then about thirty seconds to drill into each folder. But then, like in the latest update, it's back to taking several minutes again. Throughout all of this, the actual version of Duplicati in the container has not changed and is reporting as 2.0.3.3_beta_2018-04-02.
  5. How far back is the version of Duplicati in the container? Looking under "About", I'm seeing a version dating back to last August. Is this accurate? If so, what kind of time window are we looking at for the docker container to be on the version that was released today? There have been some huge performance improvements in the latest version that I have been waiting months for--namely the speed at which you can browse directories within your backups. Currently I am getting backups, but I wouldn't be able to use them if I needed them, since it literally takes about 15 minutes to drill down into each individual directory. Getting this update would give me some peace of mind. Using the "check for updates" function within Duplicati itself detects the newest version that was released today. Is it possible to use the in-app update function, or would that break the container? Edit: It's occurred to me that hitting "download now" may not be a self-updater, but may instead take you to a download page to get an installer, which I know would be useless in this case. I haven't tried it yet in case it actually is a self-updater and would break the container.
  6. Stumped by such a simple and obvious thing. Thanks for pointing it out! It didn't like that I left the host path for the watch directory blank. I just removed that path entry entirely and it went through. Strange, since this is the same template, blank watch path included, that I used to install it the first time, when it worked.
  7. This happened a while back, and I ended up installing a different version of it, which turned out to be more to my liking anyway. But now it has become orphaned again. This time it's the one by jlesage. After removing the orphan and re-installing, it is immediately orphaned again. What does it mean when this happens?
  8. In the Dashboard tab, we get a little summary list of the users and how many shares they have read and write access to. However, unless I am missing something, the only way to see which shares they have read/write access to is to look at each share individually. This is a bit tedious. I think it would be useful to have this information also present in the Users tab, with more details available when you click on a user. I'd like to be able to click on one of my users and see a detailed list of the shares they have access to and what permissions they have for each.
  9. Upgraded to 6.4, and after rebooting I went to start up my VMs and docker containers, only to find they weren't there. Plugins were still intact. Going into my Main tab, I'm greeted with "Unmountable: Unsupported partition layout" where my cache drive is supposed to be. The cache is formatted xfs just like the other drives and worked prior to the upgrade. Any ideas?
  10. I have split levels on that share set to only split the top-level directory, so it shouldn't be writing anything under a movie's subfolder outside of the disk it's already on, but reading this prompted me to take a deeper look into it, and I think I see what's going on. Radarr is also renaming the parent directory of each movie file to match the filename. Tell me if this is what could be happening:

      I tell Radarr to rename a movie in /user/Movies/. The data for this movie is currently on Disk1. Radarr first creates a new folder matching the new name instead of renaming the existing folder. Following allocation and split levels, unRAID sees this as a new second-level folder and writes it to Disk2. Radarr then renames the movie and moves it into the new folder under /user/Movies/. unRAID knows this file is already on Disk1, so instead of moving it over to Disk2, it creates the folder on Disk1 as well and moves the file there. After the movie is moved, Radarr deletes the old folder in /user/Movies/, which removes the old folder from Disk1 (that folder never existed on Disk2). Both instances of the new folder are seen as valid by unRAID, so the one on Disk2 remains alongside the one on Disk1 where the file is. The one on Disk2 will always stay empty, though, since split levels within this share will keep everything in that folder on Disk1.

      Does that about sum it up? If this is truly what's happening, I can expect to see it again in the future. Given the nature of the behavior (if this is it), I don't think there's anything within unRAID I can do to prevent it. However, it's still an undesirable effect that could make things confusing for me down the road, so I would like to address it. I know it's a cardinal sin to modify data on the individual disks manually, but would it hurt to recursively delete all empty folders on each individual disk, something like the sketch below? Is there a plugin or some other tool that cleans up redundant empty folders on individual disks like this?
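      This is the sort of cleanup I had in mind, run against the disk shares directly (the disk and share names below are just my layout):
        # Dry run: list empty directories left behind under the Movies share on Disk2.
        find /mnt/disk2/Movies -type d -empty
        # If the list looks right, remove them (-delete works depth-first).
        find /mnt/disk2/Movies -type d -empty -delete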
  11. I have a "Movies" share that is allocated to two disks using high-water. The structure for this share is as follows:

      Movies
      --Movie Title (Year)
      ----Movie Title (Year).mkv

      Disk1 has reached half capacity, so unRAID is now putting new files onto Disk2. I just recently did a bulk renaming of a bunch of movies with Radarr, most of which were on Disk1. I have noticed that since doing that, Disk2 now has an empty folder for every movie that I renamed. The files aren't there--they are where they should be, within the folders on Disk1. For some bizarre reason, though, unRAID is creating folders on Disk2 for the files that were changed. I'm guessing this will have no impact on functionality, since it's not making duplicates of the actual files themselves? However, I really don't want this to happen, for a specific reason: if /Movies/Bladerunner/ exists on both disks but /Movies/Bladerunner/Bladerunner.mkv exists only on Disk1 and Disk1 fails, then /Movies/Bladerunner/ will remain on Disk2 and continue to show up in /user/Movies/ even though the file is gone with the failure of Disk1. I will not be able to easily tell which movies I have lost. I do have parity, but I'm thinking of a worst-case scenario where parity fails and backup recovery fails. To clarify, all of the renaming and file editing is being done from the /user/Movies share; I'm just observing what happens to the individual disks when I do so. Is there a way I can prevent these redundant empty folders from being created? If not, is there a safe way I can bulk remove them from the individual disks?
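      In the meantime, one way to check which disk actually holds a given movie, versus just an empty shell of its folder, is to list the folder on each disk share directly (the title is the example from above):
        # Show the folder as it appears on each data disk; whichever disk lists the
        # .mkv is the one that really has the movie.
        ls -l /mnt/disk1/Movies/Bladerunner/ /mnt/disk2/Movies/Bladerunner/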
  12. Gotcha. I thought it would recursively scatter all individual files. Thanks for the clarification!
  13. Thanks! Here's a scenario I was thinking of. I currently have a share called movies. Under that is a folder for each movie, where the movie itself, subs, posters, etc. reside. I have split levels set to only split the top-level directory, so that each movie resides on the same disk as its subs and whatnot. I wouldn't want a movie and its related files stretched across multiple disks. If I felt the need to scatter the movies folder, say after I've gathered it to one disk for whatever reason, I would want Scatter to follow the split levels so that, again, the individual movies would stay together with their respective files. Or am I misunderstanding how this works?
  14. Any plans for it in the future, if it's even possible?
  15. Does Scatter follow allocation method and directory split levels?
  16. Now that Veeam Endpoint (now known as Veeam Agent) has a Linux version, is anyone aware of any plans in motion to get it dockerized?
  17. Trying to browse through folders under the Restore tab is painful. Every time I try to drill down into a folder, it takes nearly two minutes to think before it actually does it. It doesn't matter how large the folder is or how many files are in it; it does this with every single folder--and it seems to take longer with every additional snapshot taken. Is this normal?
  18. Aha! So I'm already covering it. Got the flash included in Duplicati. I figured it was going to be on the flash somewhere. I just didn't think to look under plugins. Thanks!
  19. Where are those stored? I want to make sure I have them included in my backups so that, in the event of having to rebuild my docker apps, I don't have to worry about forgetting what I've mapped to what. I know that unRAID saves those templates for you, which makes re-installation of apps easy, but I'm thinking about an event where those templates are lost. Those are what I want to back up.
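      This is the sort of thing I want to capture, assuming the user-defined templates live under dockerMan on the flash drive (the paths below are my best guess, not confirmed):
        # Copy the user docker templates off the flash into a share that gets backed up.
        cp -r /boot/config/plugins/dockerMan/templates-user /mnt/user/backups/docker-templates/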
  20. I have one drive with an existing set of backups on it. I want to make a second so I can rotate them and keep an off-site cold backup. I really don't want to take another three days to generate another backup of my 1.7TB from scratch on another drive, so I had the idea of copying the backups from one drive to the other (a few hours of work rather than days) and exporting/importing its configuration, then changing the name and destination. I know you can move backups to another location by doing this, but what about making an exact copy? Could two exact instances of the same backups exist under different names? To test whether this would work, I made a small test backup set. I ran it once and copied the resulting backups to another location. I then exported the configuration for that backup and imported it into a new set under a different name. I changed the destination to the location I had copied the backup to and tried running it. It failed the first time, referencing files on the remote location that didn't match. After running a repair on it, it ran successfully on the second attempt. So now that I know this works, before I go and do this with my live backup, I just want to make sure I won't run into any wonky issues later on.
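      To spell out the plan for the real set (the drive names and paths below are made up for illustration):
        # 1. Copy the existing backup files to the second drive (hours, not days).
        rsync -a /mnt/disks/backup_a/duplicati/ /mnt/disks/backup_b/duplicati/
        # 2. In the web UI, export the original job's configuration, import it as a
        #    new job with a different name, and point its destination at the copy.
        # 3. Run Repair on the new job once, then run the backup.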
  21. Yeah, I never looked into whether Cloudberry supported it or not until now. I guess it just doesn't fit my use case. It's a shame, since I really like it! I guess I'm stuck with the slowness of Duplicati.
  22. Thanks for the quick timezone fix! Partially for the space, partially for the backup duration, and mostly because my backup drive can't hold more than one copy of my video. I suppose I could create a separate backup plan for video that immediately removes files that were locally deleted. However, if I were to move a video, the software would still want to create a backup of that video in its new location before deleting the old one, right? Just renaming the top-level directory of my video files would trigger a full copy of every video, unless I'm misunderstanding this. At over a TB of video, that would be a long backup for a small change.
  23. Eew. Just changing the filename (or that of its parent folder) or moving a file triggers a full duplicate instance of that file in the backup. Since most of my data is decently large video files, that's a deal breaker for me. I understand that's a limitation of the software itself and not the docker container. Thanks anyway for your help on this!
  24. I just did another backup attempt with "Always keep the last version" selected. That seemed to fix it; it retained everything this time. I figured it would have hung onto everything even without that selected, as long as the backups were not older than a month. Does it determine this by the file's last-modified date rather than by the time the backup was actually taken? I also have the same issue as 1812's comment above: the time is off.