Phastor

Everything posted by Phastor

  1. Upgraded to 6.4, and after rebooting I went to start up my VMs and Docker containers, only to find they weren't there. Plugins were still intact. Going to my Main tab, I'm greeted with "Unmountable: Unsupported partition layout" where my cache drive is supposed to be. The cache is formatted xfs just like the other drives and worked prior to the upgrade. Any ideas?
  2. I have split levels on that share set to split only the top-level directory, so it shouldn't be writing anything under a movie's subfolder outside of the disk it's already on, but reading this prompted me to take a deeper look, and I think I see what's going on. Radarr is also renaming the parent directory of each movie file to match the filename. Tell me if this is what could be happening: I tell Radarr to rename a movie in /user/Movies/. The data for this movie is currently on Disk1. Instead of renaming the existing folder, Radarr first creates a new folder that corresponds to the new name. Following allocation and split levels, unRAID sees this as a new second-level folder and writes it to Disk2. Radarr then renames the movie and moves it into the new folder under /user/Movies/. unRAID knows the file is already on Disk1, so instead of moving it over to Disk2, it creates the folder on Disk1 as well and moves the file there. After the movie is moved, Radarr deletes the old folder in /user/Movies/, which removes the old folder from Disk1 (it never existed on Disk2). Both instances of the new folder are seen as valid by unRAID, so the one on Disk2 remains alongside the one on Disk1 where the file is. The Disk2 copy will always stay empty, though, since split levels within this share keep everything in that folder on Disk1. Does this about sum it up?

     If this is truly what's happening, I can expect to see it again in the future. Given the nature of the problem (if this is it), I don't think I can do anything within unRAID to prevent it. It's still an undesirable effect that could make things confusing down the road, though, so I would like to address it. It's a known cardinal sin to modify data on the individual disks manually, but would it hurt to recursively delete all the empty folders on each disk? Is there a plugin or some other tool that cleans up redundant empty folders on individual disks like this?
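     What I'm picturing is something like the one-liner below--just a sketch, and the disk and share paths are from my own setup, so treat them as assumptions:

        # Remove empty directories under the Movies share on each data disk.
        # find's -delete implies depth-first, so nested empty folders are
        # removed bottom-up in a single pass.
        find /mnt/disk1/Movies /mnt/disk2/Movies -mindepth 1 -type d -empty -delete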
  3. I have a "Movies" share that is allocated to two disks using high-water. The structure for this share is as follows:

     Movies
     --Movie Title (Year)
     ----Movie Title (Year).mkv

     Disk1 has reached half capacity, so unRAID is now putting new files onto Disk2. I recently did a bulk renaming of a bunch of movies with Radarr, most of which were on Disk1. I have noticed that since doing that, Disk2 now has an empty folder for every movie that I renamed. The files aren't there--those are where they should be, within the folder on Disk1. For some bizarre reason, though, unRAID is creating folders for these files on Disk2 when they have been changed.

     I'm guessing this will have no impact on functionality, since it's not making duplicates of the actual files themselves? However, I really don't want this to happen, for a specific reason. If "/Movies/Bladerunner/" exists on both disks, but "/Movies/Bladerunner/Bladerunner.mkv" exists on Disk1 and Disk1 fails, then "/Movies/Bladerunner/" will remain on Disk2 and continue to show up in /user/Movies/ even though the file is gone with the failure of Disk1. I will not be able to easily tell which movies I have lost. I do have parity, but I'm thinking of a worst-case scenario where parity fails and backup recovery fails.

     To clarify, all of the renaming and file editing is being done from the /user/Movies share; I'm just observing what happens to the individual disks when I do so. Is there a way I can prevent these redundant empty folders from being created? If not, is there a safe way I can bulk remove them from the individual disks?
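     If removal is the answer, I'd want to survey the folders first. A find like the one below (disk and share paths are just my layout) should list every empty folder without touching anything:

        # Dry run: list empty directories on each data disk, deleting nothing
        find /mnt/disk1/Movies /mnt/disk2/Movies -mindepth 1 -type d -empty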
  4. Gotcha. I thought it would recursively scatter all individual files. Thanks for the clarification!
  5. Thanks! Here's a scenario I was thinking of. I currently have a share called movies. Under that is a folder for each movie, where the movie itself, subs, posters, etc. reside. I have split levels set to split only the top-level directory, so that each movie resides on the same disk as its subs and whatnot. I wouldn't want a movie and its related files stretched across multiple disks. If I felt the need to scatter the movies folder, say after I've gathered it to one disk for whatever reason, I would want Scatter to follow the split levels so that, again, the individual movies stay together with their respective files. Or am I misunderstanding how this works?
  6. Any plans for it in the future, if it's even possible?
  7. Does Scatter follow allocation method and directory split levels?
  8. Now that Veeam Endpoint (now known as Veeam Agent) has a Linux version, is anyone aware of any plans in motion to get it dockerized?
  9. Trying to browse through folders under the Restore tab is painful. Every time you try to drill down into a folder, it takes nearly two minutes to think before it actually does it, no matter how large the folder is or how many files it contains. It does this with every single folder, and it seems to get longer with each additional snapshot taken. Is this normal?
  10. Aha! So I'm already covering it. Got the flash included in Duplicati. I figured it was going to be on the flash somewhere. I just didn't think to look under plugins. Thanks!
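      For anyone who finds this later: on my box the user templates live on the flash at the path below (treat the exact path as an assumption for other unRAID versions), so a simple copy covers them:

        # Copy the Docker user templates off the flash for safekeeping.
        # Source path is from my unRAID install; the destination is just an example.
        rsync -a /boot/config/plugins/dockerMan/templates-user/ /mnt/user/backups/docker-templates/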
  11. Where are those stored? I want to make sure I have them included in my backups so that in the event of having to rebuild my docker apps, I don't have to worry about forgetting what I've mapped to what. I know that unRAID will save those templates for you which makes re-installation of apps easy, but I'm thinking about an event where those templates are lost. Those are what I want to back up.
  12. I have one drive with an existing set of backups on it. I want to make a second so I can rotate them and have an off-site cold backup. I really don't want to spend another three days generating a backup of my 1.7TB from scratch on another drive, so I had the idea of copying the backups from one drive to the other (a few hours of work rather than days), then exporting/importing the job's configuration and changing the name and destination. I know you can move backups to another location this way, but what about making an exact copy? Can two exact instances of the same backups exist under different names?

      To test whether this would work, I made a small test backup set. I ran it once and copied the resulting backups to another location. I then exported the configuration for that backup and imported it into a new set under a different name. I changed the destination to the location I had copied the backup to and tried running it. It failed the first time, referencing files on the remote location that didn't match. After running a repair on it, it ran successfully on the second attempt. So now that I know this works, before I go and do it with my live backup, I just want to make sure I won't run into any wonky issues later on.
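      In case anyone wants to repeat this, the whole test amounted to something like the following. The paths are invented for the example, and the repair step uses Duplicati's command-line client:

        # Copy the existing backup volumes to the second drive
        rsync -a /mnt/disks/backup1/duplicati/ /mnt/disks/backup2/duplicati/

        # Repair against the copy so the imported job's local database
        # matches the files at the new destination
        duplicati-cli repair file:///mnt/disks/backup2/duplicati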
  13. Yeah, I never looked into whether Cloudberry supported it until now. I guess it just doesn't fit my use case. It's a shame, since I really like it! I guess I'm stuck with the slowness of Duplicati.
  14. Thanks for the quick timezone fix! Partially for the space, partially for the backup duration, and mostly because my backup drive can't hold more than one copy of my video. I suppose I could create a separate backup plan for video that immediately removes files that were locally deleted. However, if I were to move a video, the software would still want to create a backup of that video in its new location before deleting the old one, right? Just renaming the top-level directory of my video files would trigger a full copy of every video, unless I'm misunderstanding this. At over a TB of video, that would be a long backup for a small change.
  15. Eew. Just changing the filename (or that of its parent folder) or moving a file triggers a full duplicate instance of that file in the backup. Since most of my data is decently large video files, that's a deal breaker for me. I understand that's a limitation of the software itself and not the docker container. Thanks anyway for your help on this!
  16. I just did another backup attempt with "Always keep the last version" selected. That seemed to fix it; it retained everything this time. I figured it would have hung onto everything even without that selected, as long as the backups were not older than a month. Does it determine this by the file's last-modified date rather than the time the backup was actually taken? And I have the same issue as 1812's comment above: the time is off.
  17. Filtered by Purge and found that they are indeed being purged after the backup. I have my retention set to keep backups going back a month. Shouldn't it only be doing this to file versions that are older than a month?
  18. While going in to restore within Cloudberry, the only files available to restore are those I see when manually looking through the target drive--just the ones that it didn't delete after completing the backup.
  19. Well, it just got weirder. During the second backup, I watched the target drive as the files were added. It actually does appear to back up everything selected, but it then deletes files from the target after the backup completes, leaving only the ones that were there after the first attempt. The files in this test backup range from a few KB (documents) to a couple GB (short video). The test backup as a whole is about 100GB.
  20. Started testing this out. Seems like my choice lies between this and Duplicati. I completed my first test backup and it seems to have skipped over a lot of files; many things are missing in the backup target. I'm running the backup again and it seems to be picking up what it missed, but now I'm wondering if it will still miss things on this pass. Has anyone else seen this? I'm hoping it's something that can be fixed, because I'm already impressed at the speed it runs at going to my USB3 drive compared to Duplicati.
  21. I wouldn't take much issue with full file versions being kept when files change, since I mostly have audio and video. The only changes made to those are renames and relocations, which I would imagine (and hope) wouldn't trigger full versions.
  22. I might have to take a look at Cloudberry then. I have another drive I can test it with without losing what I've done so far with Duplicati. It would be great if it can address the speed issue, but another thing I would love is a restore option that ignores files that already exist; Duplicati only offers the options to overwrite or create a duplicate. Does Cloudberry have this?
  23. I'm actually using USB3. The actual transfer of the remote volumes to the drive is pretty quick; each 250 MB volume only takes a couple of seconds to move to the drive. It's the creation of the volumes before they are flushed to the drive that takes forever. Each volume is generated at about 10 MB/s.
  24. I just completed my first backup of 1.7 TB to a USB3 drive. I'm running 500KB blocks, 250MB remote volumes, and no encryption; the rest of my settings are default. It took 2d18h to complete. I've used solutions that were slower, but this still concerns me unless someone here can verify that it's expected.

      I did a test restore after the backup completed. The first thing I noticed was how slow it was to browse through the backups--it took 30-45 seconds to expand each directory I drilled down through. I chose a 7GB file to test with, which took about 20 minutes to restore. From the information the UI provides, the actual restoration of the file took about three minutes, while the rest of the time was spent verifying before and after. Is this kind of behavior typical of Duplicati, or is there something I could be doing wrong?
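      For reference, the equivalent job on the command line would look roughly like this. The source and destination paths are placeholders; the option names are Duplicati's:

        # Same settings as my job: 500KB blocks, 250MB volumes, no encryption
        duplicati-cli backup file:///mnt/disks/usb/duplicati /mnt/user/ \
          --blocksize=500KB --dblock-size=250MB --no-encryption=true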
  25. I kept the volume size at 50MB and did another test. The backup seems to be going really slowly, as it did about 30GB in two hours. I set the volume size to 1GB just for testing and got the same results. I monitored the shares as the backup was happening, and it seems the slowness occurs during the generation of the zip files: it takes about 90 seconds to generate one of the 1GB volumes, while the transfer to the USB drive seems fine, only taking about 10 seconds to copy a file to the drive after it's generated. I disabled compression thinking that might speed things up (most of my files are audio and video, so I won't benefit much from compression anyway), but I'm not noticing any difference even after doing that. Is there any way I can speed this up?
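      For the curious, disabling compression corresponds to something like this on the command line (same placeholder paths as before):

        # Store files in the volumes without compressing them
        duplicati-cli backup file:///mnt/disks/usb/duplicati /mnt/user/ \
          --dblock-size=1GB --zip-compression-level=0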