Everything posted by Phastor

  1. If it's something as simple as both NICs having IPs on the same subnet, I'll be very happy, but confused. I understand how having the same IP would cause an issue, but what is it about having two NICs on the same subnet that doesn't sit well with unRAID? I'm not doubting that could be my issue, and I want it to be that simple. I'm just curious.
  2. I've been running on a server with dual NICs from the beginning, but only ever utilized one of them. I decided I wanted to look at pfSense last night and figured running it as a VM would be a good option just for testing it out, so I planned to bridge my second NIC to it to play around with. When I went to set up the second NIC in unRAID, I realized for the first time that unRAID had bonded my two interfaces by default. I unbonded the interfaces, kept eth0 as it was with the IP of x.x.x.13, what it has always been, and assigned x.x.x.14 to eth1.

     Things seemed normal at the start. I took a look at pfSense and realized I would be better off testing it on a physical machine, so at that point I physically disconnected eth1, and that is where the weirdness began. I immediately lost connectivity to my server. I ran a ping and sure enough, no response from .13. However, I did get a response from .14. Odd, since I had disconnected the interface that should have been .14. I reconnected eth1 and the server came back. I disconnected eth1 again and got the same result. At this point I thought maybe I was wrong about which physical port was which on the server, so to troubleshoot, I disconnected eth0 and left eth1 plugged in. The server dropped again. No response from .13, but again a response from .14. I still don't know what was going on there. It was already 1:30 at that point, so I just said screw it, left them both plugged in since it seemed to be happy with that, and went to bed.

     Fast forward to this morning, right before I was leaving for work: I realized that my Plex server was down (a docker container in unRAID). I tried to get into the unRAID UI and got no response. No docker containers responding. Can't connect with SSH. No pings from .13 or .14. The only thing that IS working is my Windows 10 VM that I VPN/RDP into from work, which is very odd since that VM is bridged to eth0, which is otherwise not responding. I couldn't do anything else with it as I had to leave for work.

     That's kind of where I'm at right now. The VM is still working, as that's what I'm remoted into and writing this from, but I am otherwise unable to access my server in any way. I'm going to plug a monitor into it once I get home and try to get diagnostics from it to post here, but for now I'm just sitting here at work for the next eight hours going absolutely crazy and needed to get this out there. Any thoughts?
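     For what it's worth, the wrong-interface ping behavior described above is consistent with Linux's default ARP handling (sometimes called "ARP flux"): when two NICs sit on the same subnet, either interface may answer ARP requests for either of the box's IPs, so a ping to .13 can be answered by the NIC holding .14. Below is a minimal probe sketch, assuming the scapy library, root privileges, and placeholder addresses standing in for x.x.x.13/.14, that reports which MAC answers for each IP:

        # Sketch: ARP-probe each server IP from another machine on the LAN
        # and print the MAC that answers. If both IPs resolve to the same
        # MAC, the interfaces are answering ARP for each other ("ARP flux").
        from scapy.all import ARP, Ether, srp

        def mac_for(ip, iface="eth0"):
            answered, _ = srp(
                Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip),
                iface=iface, timeout=2, verbose=False,
            )
            return answered[0][1].hwsrc if answered else None

        for ip in ("192.168.1.13", "192.168.1.14"):  # stand-ins for x.x.x.13/.14
            print(ip, "->", mac_for(ip))

     If both addresses report the same hardware address, the interfaces are answering for each other; the usual remedies are keeping the NICs on separate subnets or tightening the arp_ignore/arp_announce sysctls.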
  3. Nah. I've only got two USB devices aside from the flash. I've got my UPS plugged into USB2 on the board and the drive in the PCIe adapter.
  4. As much as that would hurt, I'll give USB2 a shot. The USB drive in question is my Duplicati backup target, which is already slow as it is. I may just get myself a small NAS for my backups if that turns out to be the issue. It's just weird, since this issue only surfaced a few months ago after about two years of unRAID/UD use. Thanks for the help! I'll keep you posted on what happens.
  5. This has happened with two different USB drives. I guess I'll need to scrounge up a third one and see. Or perhaps it could be the controller? I'm using a PCIe USB3 controller since my board does not have USB3 onboard. Is it UD's interaction with the failed device that's causing the server-wide issues?
  6. After about an hour of the diag page appearing locked up, it gave me the zip file. aivas-diagnostics-20200113-2202.zip
  7. Just tried running diagnostics and that seems to have hosed my UI. This thing will go at the drop of a hat as soon as UD starts to do this. Will have to wait till I'm home to do a hard reset before I can look into it further.
  8. I'm having an issue where Unassigned Devices randomly stops responding. The UD section in my Main tab just gives me an unending dancing unRAID logo, I can't browse my USB-connected drive, and any docker containers with folders mapped to that drive become unresponsive. It also causes all of my CPU cores to appear pegged in the Dashboard. When it does this, I am unable to stop any docker containers or my array. I am unable to shut down my server cleanly, and my only option is a hard reset. If I try to do anything with UD in this state, it hoses my web UI completely and I can't access anything until the server is hard reset.

     Any docker containers that I have not touched and that have no folders mapped to my UD drive continue to function. I can also still RDP into my Win10 VM. However, anything to do with browsing the server UI is not possible.

     I thought I read about this being a known issue that was addressed in a recent update, but it has just happened to me again. I am at work right now and am afraid to touch anything on it, in fear that the web UI will get hosed and I won't be able to do anything with it until I get home.
  9. I cannot for the life of me get Calibre-Web to see ebook-convert. I pulled the binary from a Calibre install, tossed it into /config within the container, and pointed Calibre-Web at that path. It's still reporting as not installed. I consoled into the container and ran "ebook-convert --version" to see if it was functional at all, and it returned an error about missing modules. Does it require dependencies? Does Calibre as a whole have to be installed within the container?
  10. MXToolbox is reporting that my server does not support TLS. My knowledge in this sort of thing is limited, but I think I have pinpointed the problem. After issuing the EHLO command myself, it returned the following:

      250-PIPELINING
      250-8BITMIME
      250-SMTPUTF8
      250-SIZE 25214400
      250 STARTTLS

      That last line is what draws my attention. It's got a space instead of a dash. MXToolbox is expecting "250-STARTTLS", and I'm guessing that's why it's marking TLS as not supported, since that string is not in the response it's getting. I imagine this is something more for the original developer of the software to deal with; just hoping that it makes its way up the chain from here.
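      For context, RFC 5321 defines multiline replies so that a hyphen after the code marks a continuation line and a space marks the final line; "250 STARTTLS" as the last line is therefore syntactically valid, which supports the guess above that the checker is matching the literal string "250-STARTTLS" rather than parsing the reply. A minimal sketch of a tolerant parser in Python (the hostname in the sample reply is made up):

         # Sketch: collect extension keywords from a multiline EHLO reply.
         # Per RFC 5321, "250-" prefixes continuation lines and "250 "
         # (with a space) prefixes the final line; both carry keywords.
         def parse_ehlo_reply(reply: str) -> list[str]:
             keywords = []
             for line in reply.splitlines():
                 if line.startswith("250-") or line.startswith("250 "):
                     keywords.append(line[4:].split()[0])
             return keywords[1:]  # skip the greeting line (server hostname)

         reply = (
             "250-mail.example.com\n"  # hypothetical hostname
             "250-PIPELINING\n"
             "250-8BITMIME\n"
             "250-SMTPUTF8\n"
             "250-SIZE 25214400\n"
             "250 STARTTLS\n"
         )
         assert "STARTTLS" in parse_ehlo_reply(reply)

      A parser like this sees STARTTLS regardless of whether it arrives on a continuation line or the final line.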
  11. I've had a pretty stable set of backups for about 8 months now. Aside from the incredibly slow restore process (browsing folders within the backup is painful!), I've been pretty happy with it. However, I really wanted some of the newer features, such as improved performance during restores, so I switched to the canary build. I was not expecting my backup config to get wiped when I did this. Going back to the stable container returned the configs after some tweaking. Is there a safe way I can move my configs over to the canary build?
  12. I'm confused about which version of Duplicati the container is running. From what I understand, they made a lot of performance improvements a few months ago in how you browse your backups, but I am still having terrible results that seem to go back and forth with each update of the container. On average, after hitting "restore files," it takes about 8-10 minutes to finally be presented with a directory structure I can browse. It then takes 2-3 minutes for it to think on every folder I drill down into in that structure. On some updates it gets better, as in it only takes a couple of minutes to present the directories and then about thirty seconds to drill down into each folder. But then, as in the latest update, it's back to taking several minutes again. Throughout all of this, the actual version of Duplicati in the container has not changed and is reporting as 2.0.3.3_beta_2018-04-02.
  13. How far back is the version of Duplicati in the container? Looking under "About", I'm seeing a version dating back to last August. Is this accurate? If so, what kind of time window are we looking at for the docker container to be on the version that was released today? There have been some huge performance improvements in the latest version that I have been waiting months for, namely the speed at which you can browse directories within your backups. Currently I am getting backups, but I wouldn't be able to use them if I needed them, since it literally takes like 15 minutes to drill down into each individual directory. Getting this update would give me some peace of mind. Using the "check for updates" function within Duplicati itself detects the newest version that was released today. Is it possible to use the in-app update function, or would that break the container? Edit: It's occurred to me that hitting "download now" may not be a self-updater, but may instead take me to a download page to get an installer, which I know would be useless in this case. I haven't tried it yet, in case it actually is a self-updater and would break the container.
  14. Stumped by such a simple and obvious thing. Thanks for pointing it out! It didn't like that I left the host path for the watch directory blank. I just removed that path entry entirely and it went through. Strange, since this was the same template, blank watch path included, that I used the first time I installed it, when it worked.
  15. This happened a while back, and at the time I just installed a different version of it, which turned out to be more to my liking anyway. But now it has happened again; the container has become orphaned, this time the one by jlesage. After removing the orphan and re-installing, it is immediately orphaned again. What does it mean when this happens?
  16. In the Dashboard tab, we get a little summary list of the users and how many shares they have read and write access to. However, unless I am missing something, the only way to see which shares they have read/write access to is to look at each share individually. This is a bit tedious. I think it would be useful to have this information also present in the Users tab, with more details shown when you click on a user. I'd like to be able to click on one of my users and see a detailed list of the shares they have access to and what permissions they have on each.
  17. Upgraded to 6.4, and after rebooting I went to start up my VMs and docker containers, only to find they weren't there. Plugins were still intact. Going into my Main tab, I'm greeted with "Unmountable: Unsupported partition layout" where my cache drive is supposed to be. The cache drive is formatted xfs just like the other drives and worked prior to the upgrade. Any ideas?
  18. I have split levels on that share set to only split the top-level directory, so it shouldn't be writing anything under a movie's subfolder outside of the disk it's already on, but reading this prompted me to take a deeper look, and I think I see what's going on. Radarr is also changing the name of the parent directory of each movie file to match the filename. Tell me if this is what could be happening:

      I tell Radarr to rename a movie in /user/Movies/. The data for this movie is currently on Disk1. Radarr first creates a new folder with the new name instead of renaming the existing folder. Following allocation and split levels, unRAID sees this as a new second-level folder and writes it to Disk2. Radarr then renames the movie and moves it into the new folder under /user/Movies/. unRAID knows this file is already on Disk1, so instead of trying to move it over to Disk2, it creates the folder on Disk1 as well and moves the file there. After the movie is moved, Radarr deletes the old folder in /user/Movies/, causing the old folder to be removed from Disk1 (this folder never existed on Disk2). Both instances of the new folder are seen as valid by unRAID, so the one on Disk2 remains alongside the one on Disk1 where the file is. The one on Disk2 will always remain empty, though, since split levels within this share keep everything within that folder on Disk1. Does that about sum it up?

      If this is truly what's happening, I can expect to see it again in the future. Given the nature of the problem (if this is it), I don't think I can do anything within unRAID to prevent it. However, it's still an undesirable effect that could make things confusing for me down the road, so I would like to address it. It's a known cardinal sin to modify data on the individual disks manually, but would it hurt to recursively delete all empty folders on each individual disk? Is there possibly a plugin or some other tool that cleans up redundant empty folders on individual disks like this?
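      A minimal sketch of the kind of cleanup asked about above, assuming unRAID's usual /mnt/diskN mount points and a Movies folder on each disk (treat the paths as placeholders), with a dry-run mode so nothing is deleted until the output has been reviewed:

         # Sketch: prune empty directories under each disk's Movies folder.
         # Walks bottom-up and re-checks emptiness at visit time, so a parent
         # emptied by a child's removal is pruned on the same pass.
         import glob
         import os

         def prune_empty_dirs(root: str, dry_run: bool = True) -> None:
             for dirpath, _, _ in os.walk(root, topdown=False):
                 if dirpath == root or os.listdir(dirpath):
                     continue  # keep the share root and anything non-empty
                 print(("would remove: " if dry_run else "removing: ") + dirpath)
                 if not dry_run:
                     os.rmdir(dirpath)

         for disk_root in glob.glob("/mnt/disk*/Movies"):
             prune_empty_dirs(disk_root, dry_run=True)

      Whether touching the disk shares directly like this is actually safe here is exactly the question above; at least os.rmdir only ever removes a directory it confirms is empty, which limits the blast radius.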
  19. I have a "Movies" share that is allocated to two disks using high-water. The structure for this share is as follows:

      Movies
        Movie Title (Year)
          Movie Title (Year).mkv

      Disk1 has reached half capacity, so unRAID is now putting new files onto Disk2. I recently did a bulk renaming of a bunch of movies with Radarr, most of which were on Disk1. I have noticed that since doing that, Disk2 now has an empty folder for every movie that I renamed. The files aren't there; those are where they should be, within the folders on Disk1. However, for some bizarre reason, unRAID is creating folders on Disk2 for these files when they are renamed. I'm guessing this will have no impact on functionality, since it's not making duplicates of the actual files themselves? However, I really don't want this to happen, for a specific reason. If "/Movies/Bladerunner/" exists on both disks, but "/Movies/Bladerunner/Bladerunner.mkv" exists on Disk1 and Disk1 fails, then "/Movies/Bladerunner/" will remain on Disk2 and continue to show up in "/user/Movies/" even though the file is gone with the failure of Disk1. I would not be able to easily tell which movies I have lost. I do have parity, but I'm thinking of a worst-case scenario where parity fails and backup recovery fails.

      To clarify, all of the renaming and file editing is being done from the /user/Movies share. I'm just observing what happens to the individual disks when I do so. Is there a way I can prevent these redundant empty folders from being created like this? If not, is there a safe way I can bulk-remove them from the individual disks?
  20. Gotcha. I thought it would recursively scatter all individual files. Thanks for the clarification!
  21. Thanks! Here's a scenario I was thinking of. I currently have a share called movies. Under that is a folder for each movie, where the movie itself, subs, posters, etc. reside. I have split levels set to only split the top-level directory. This way each movie resides on the same disk as its subs and whatnot; I wouldn't want a movie and its related files stretched across multiple disks. If I felt the need to scatter the movies folder, say after I've gathered it onto one disk for whatever reason, I would want Scatter to follow the split levels so that, again, the individual movies would stay together with their respective files. Or am I misunderstanding how this works?
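      To illustrate the behavior being asked about, here is a toy model in Python, not unRAID's actual allocator: with the top level split, a file destined for an existing movie folder goes to whichever disk already holds that folder, and only a brand-new movie folder consults the allocation method (high-water is simplified here to "disk with the most free space"):

         # Toy model of split level 1 on a two-disk "movies" share. Purely
         # illustrative; shows why a movie and its subs/posters stay together.
         def pick_disk(folders: dict[str, set[str]], free: dict[str, int],
                       path: str) -> str:
             movie_dir = path.split("/")[0]  # top-level folder of the new file
             for disk, held in folders.items():
                 if movie_dir in held:
                     return disk  # split level: stay with the existing folder
             disk = max(free, key=free.get)  # new folder: allocation method
             folders[disk].add(movie_dir)
             return disk

         folders = {"disk1": {"Blade Runner (1982)"}, "disk2": set()}
         free = {"disk1": 100, "disk2": 900}  # GB free, made-up numbers
         print(pick_disk(folders, free, "Blade Runner (1982)/subs/en.srt"))   # disk1
         print(pick_disk(folders, free, "Arrival (2016)/Arrival (2016).mkv"))  # disk2

      Under a model like this, a gather-then-scatter that moves whole top-level folders preserves the grouping, while one that moved individual files would not, which is essentially the question being asked.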
  22. Any plans for it in the future, if it's even possible?
  23. Does Scatter follow allocation method and directory split levels?
  24. Now that Veeam Endpoint, since renamed Veeam Agent, has a Linux version, is anyone aware of any plans in motion to get it dockerized?