shooga

Everything posted by shooga

  1. Ok, thanks. I guess that's good. I saw that you posted a write speed test script on another thread. Would running that on my drives help me understand the bottleneck?
  2. I have a server that includes an AOC-SASLP-MV8 and has been rock solid. I'm considering making some upgrades (CPU etc) and am wondering if it would be worth upgrading my PCI-E x4 card to an x8 card. Would I see much performance improvement? My current average parity check speed is 90MB/s.
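    A rough sanity check, in case it helps: during a parity check every drive on the card is being read at the reported ~90MB/s in parallel, so with several drives hanging off the SASLP-MV8 the aggregate adds up quickly, and a PCIe Gen1 x4 link only moves about 1GB/s raw (somewhat less usable). I'm not certain which generation the card actually negotiates, but lspci will show it (the 02:00.0 address below is only an example; use whatever lspci reports for the Marvell controller):

      lspci | grep -i marvell
      # note the bus address it prints, e.g. 02:00.0, then:
      lspci -vv -s 02:00.0 | grep -E 'LnkCap|LnkSta'

    LnkSta is the live speed and width; if it shows x4 at 2.5GT/s, an x8 or Gen2 card in a wider slot would have noticeably more headroom.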
  3. Might be a dumb question, but here goes: I've been running this a while and it works great. I originally imported from PlexWatch so I have that old config folder (and database) as a path. That was just needed for import right? I can remove that path and delete the files right? Thanks!
  4. I'm trying to get this up and running with just the wemo plugin for now, but am getting an error:

      [12/4/2016, 8:30:30 AM] ERROR LOADING PLUGIN homebridge-platform-wemo:
      [12/4/2016, 8:30:30 AM] Error: Plugin /usr/lib/node_modules/homebridge-platform-wemo requires a HomeBridge version of >=0.4.1 which does not satisfy the current HomeBridge version of 0.3.4. You may need to upgrade your installation of HomeBridge.

    I assume this means that the image needs an update.
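    If the image gives you a shell and has npm on the PATH, you may be able to bump HomeBridge itself without waiting for a new image. A minimal sketch, assuming the container is named "homebridge" (adjust to whatever yours is called, and note the change may not survive the next image update):

      docker exec -it homebridge npm install -g homebridge@latest
      docker restart homebridge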
  5. Thanks trurl. That seems to have solved it. I figured it was probably something basic that I was overlooking.
  6. Thanks for the response. I don't think it's writing to the docker image because of the folder that I see at /mnt/disks/, which isn't mapped to a drive but is just a regular folder with the Crashplan data in it. My docker settings are attached. I need to head to work and don't have Crashplan running at the moment (I removed it while troubleshooting), but I was backing up to a folder on the USB drive (supposedly) that only showed up after mounting with unassigned devices. I browsed to it via /unassigned. It seems like the Crashplan docker saw the mounted drive and then for some reason created a folder with the same name rather than writing to the drive itself. Not sure if it matters, but the drive is vfat.
  7. I'm having an issue that I see reported by other users on this thread, but I don't see any resolution. I'm going to have a friend host a drive for me as remote backup and want to seed the drive (external USB) with data before setting it up at his place. I connect it and mount with unassigned devices. I have the paths set correctly in the docker setup (/unassigned -> /mnt/disks/) and it seems like it is set up correctly, as it sees the drive. However, when I start the backup it runs out of space at about 4GB even though the drive has nearly 1TB free. I also can browse the drive in the terminal and don't see the folder that Crashplan has created. So it must be backing up somewhere else.

    Here's where it gets weird. I removed the docker container and unmounted the drive, then took a look at /mnt/disks/ again, and the folder that the drive is mounted to is still there. Then when I look in that folder, I see the folder that Crashplan created and was backing up to. If I mount the drive again then I see the first version of the folder (which I think is the physical drive). So it seems like Crashplan is behaving as if the drive isn't mounted and is just writing to a regular folder instead of the drive. Like there are duplicates or something. Any idea what's happening here and how I can fix it? Also, I assume this data is being written to the flash drive, right? The UNRAID dashboard doesn't show the usage on the flash drive growing.
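    A quick way to sanity-check whether Crashplan is really writing to the USB drive or just to a plain directory on unRAID's RAM-backed root filesystem (which would explain running out of space around 4GB). "MyBackupDrive" is only a placeholder for whatever name unassigned devices gives the mount:

      df -h /mnt/disks/MyBackupDrive    # should show the ~1TB vfat device, not rootfs/tmpfs
      mount | grep /mnt/disks           # shows what is actually mounted under /mnt/disks

    If df points at rootfs, it may help to stop the container, unmount the drive, delete the leftover folder under /mnt/disks, remount, and only then start Crashplan so it picks up the real mount point.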
  8. I'm curious how people are using this. I think the ideal scenario is that I would back up my computers to my file server and then use Crashplan on the file server to provide a remote backup of everything. Then you can use the single-PC Crashplan account. However, it doesn't seem like using Crashplan for the local backup is a great idea, because then you have nested Crashplan backups and would need to recover twice to get the files back, right? Just seems like that adds another layer of risk. However, it would be simpler if I was just using Crashplan everywhere. Perhaps it would be better to use something like a folder sync (maybe Carbon Copy Cloner on a Mac) to do the local backup? Then you just have the raw files to back up with Crashplan. I'm thinking of my Music and Photos folders in OS X, for example, and would just sync them with user shares. What are people's strategies here? Thanks!
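    For what it's worth, the folder-sync idea can be as simple as an rsync from the Mac to the mounted user share. Paths here are placeholders (~/Music on the Mac, the share mounted at /Volumes/music):

      # add -n first for a dry run before letting --delete loose
      rsync -avh --delete ~/Music/ /Volumes/music/

    That leaves plain files on the server for Crashplan to pick up, with no nested backup to unwind on restore.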
  9. Thanks for the reply. So are you saying that copying the docker image from an xfs drive could induce the same bug as moving from a previous version of UNRAID? Is there any benefit to recreating the image vs setting that bit?
  10. Following up here as I finally got around to setting up the cache pool. The unassigned devices plugin wasn't working to format the new SSD as btrfs, so I just copied the cache contents to a user share (one that doesn't use cache). Then I added the new drive as the only cache drive, formatted it as btrfs, and copied the files back. All was working, so then I added the old xfs SSD back to the cache as a drive in the pool. Everything is working, except I now get a warning message that says, "Your existing Docker image file needs to be recreated due to an issue from an earlier beta of unRAID 6. Failure to do so may result in your docker image suffering corruption at a later time. Please do this NOW!" I'm fairly certain that this warning doesn't apply (at least not as written), but it's a bit annoying. I haven't recreated the image file yet because I hate to mess with things that are working. I know it's easy (I've had to do it in the past) but I'm still hesitant... Anyway, except for this warning the conversion went very smoothly.
  11. I just moved from a single xfs cache drive to a cache pool by copying the contents to a user share and back again (including the docker image) and now I'm getting the following warning: "Your existing Docker image file needs to be recreated due to an issue from an earlier beta of unRAID 6. Failure to do so may result in your docker image suffering corruption at a later time. Please do this NOW!" I'm on 6.1.9 and was never on a beta of v6. I'm sure I could recreate my docker image to get rid of this message, but is there a simpler way? I don't believe it really applies in this case.
  12. Is there a documented process for converting a single ssd xfs cache drive to a pool? I'd like to add a second ssd for redundancy. I'm thinking that I could add the second ssd to my server, format as btrfs, and then copy all of the contents from the xfs cache to the unassigned btrfs drive. Then switch the cache from the current xfs drive to the btrfs drive, test to make sure it's all working, then reformat the xfs drive to btrfs and add it to the pool. Does that sound like a good plan? Btw, are there any downsides to moving from single xfs to a pool of btrfs? Thanks!
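    A rough sketch of the copy step with rsync, assuming Docker/VMs and the mover are stopped so nothing writes to the cache mid-copy, and with /mnt/disks/newcache as a placeholder for wherever unassigned devices mounts the new SSD:

      rsync -avh /mnt/cache/ /mnt/disks/newcache/     # copy everything off the xfs cache
      rsync -avhn /mnt/cache/ /mnt/disks/newcache/    # second pass as a dry run; it should list nothing left to transfer

    After that you'd swap the cache assignment to the btrfs drive in the GUI, confirm everything works, then wipe the old drive and add it to the pool as described.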
  13. Thanks! Cleaning that up was easy. Good as new now without too much time invested. I was worried when I saw that tab gone this morning...
  14. Thanks for the replies guys. I nuked my image file and rebuilt everything using my-templates. I didn't even know those were there. Very helpful. Is there a way to clean up that list? I have some cruft that I'd like to get rid of so it's easier to find the correct versions next time.
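    If I remember right, those saved templates are just XML files on the flash drive, so pruning the list is a matter of deleting the stale ones. The path is what it appears to be on unRAID 6 (worth backing up the folder first), and the filename below is only an example:

      ls /boot/config/plugins/dockerMan/templates-user/
      rm /boot/config/plugins/dockerMan/templates-user/my-SomeOldApp.xml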
  15. Ok. So it occurred to me that I should try increasing the size of the docker image and that partially resolved the issue. However, now I have two orphaned images. Is there a way to convert those back to working containers?
  16. I'm getting the following error:

      time="2015-11-24T07:42:48.632055466-08:00" level=fatal msg="Error starting daemon: Unable to open the database file: unable to open database file"

    So I don't have a docker tab and docker is unable to start up. What can I do to resolve this? Do I need to remove my image file and rebuild? Are all of the settings for my containers stored in the image file, meaning I am starting from scratch? Full logs attached. bunker-diagnostics-20151124-0752.zip
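    A hedged outline of the rebuild path, in case it comes to that: the per-container settings live in the templates saved on the flash drive, not inside docker.img, so recreating the image isn't quite starting from scratch. The image path below is only the common default; check Settings -> Docker for the real location before deleting anything.

      # 1. Disable Docker under Settings -> Docker, then:
      rm /mnt/cache/docker.img
      # 2. Re-enable Docker so a fresh image is created, then re-add each
      #    container from its saved template via Add Container.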
  17. Is the Sonarr container supposed to update itself on restart? I'm a few versions behind now and restarting doesn't cause an update.
  18. An update for anyone monitoring this thread or searching in the future: I used

      docker exec -it <container name> bash

    to drop into the container and then

      find / -xdev -type f -size +100M

    to find files over 100MB. This helped me find that I had a 1.34GB error log in three of my containers! (eroz's AirVideoServer, needo's PlexMediaServer, and needo's PlexWatch). Deleting that file is an easy temporary fix, but I still need to figure out a long term solution. Any idea what would be causing this error:

      Nov 1 08:10:18 [server name] sshd[435]: error: Bind to port 22 on 0.0.0.0 failed: Address already in use.
      Nov 1 08:10:18 [server name] sshd[435]: fatal: Cannot bind any address.

    I'm guessing it's because ssh is already open on the server. How could I fix this?
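    The bind error reads like something inside those containers is trying to start its own sshd on port 22 while the port is already taken (for example, a container running with host networking alongside the unRAID host's own sshd). A couple of hedged checks; the container name is only an example, use whatever docker ps lists:

      netstat -lntp | grep ':22 '       # what is already listening on port 22 on the host
      docker inspect --format '{{ .HostConfig.NetworkMode }}' PlexMediaServer

    If a container reports "host" there, its sshd is competing with the server's own, and either disabling ssh inside the container or switching it to bridged networking with a remapped port should quiet the log.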
  19. (In the cAdvisor thread) Thanks trurl! Using that command to drop into the container and then "find / -xdev -type f -size +100M" to find files over 100MB, I found a 1.34GB error log in three of my containers! (eroz's AirVideoServer, needo's PlexMediaServer, and needo's PlexWatch). Deleting that file is an easy temporary fix, but I still need to figure out a long term solution. Any idea what would be causing this error:

      Nov 1 08:10:18 [server name] sshd[435]: error: Bind to port 22 on 0.0.0.0 failed: Address already in use.
      Nov 1 08:10:18 [server name] sshd[435]: fatal: Cannot bind any address.

    I'm guessing it's because ssh is already open on the server. How could I fix this?
  20. ***Edit: I found that this is not limited to AirVideoServer and I also have the same problem in two other containers (needo's PlexWatch and PlexMediaServer)*** I'm trying to get to the bottom of growing containers and found that this AirVideoServer container has a 1.3GB error log. The error that gets logged over and over again is:

      Oct 31 17:07:53 [server name] sshd[17746]: error: Bind to port 22 on 0.0.0.0 failed: Address already in use.
      Oct 31 17:07:53 [server name] sshd[17746]: fatal: Cannot bind any address.

    Is there anything I can do to prevent this in the future? I can obviously delete the error log file, but that won't fix it in the long term. Thanks!
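    One hedged stopgap until the root cause is sorted: truncating the log in place (rather than deleting it) keeps the writing process's open file handle from pinning the space. The path below is only a placeholder, since I don't recall exactly where the log lands in these images, and the container name is whatever docker ps shows:

      docker exec AirVideoServer sh -c ': > /var/log/errors'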
  21. Can anyone help me understand how to read docker container size in cAdvisor? I've read here that it can be used to help understand which docker images are growing in size. What should I be looking at to see this? On the main docker containers page within cAdvisor I see the size of images listed, but this is the size on the repo. If I go into each container, the only place I see a size is the virtual size of each process, but that seems like a strange way to represent size and I'm not sure if it actually includes all of the files or not. Any advice here? I'm looking for ways to debug my growing docker containers because otherwise I will just have to keep increasing the size of my docker volume. I've double checked and don't see any obvious places where I could have mapped folders differently. Is there a way to browse the files inside a container to look for the offending files?
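    A couple of hedged ways to get at this outside cAdvisor ("<container name>" is a placeholder for whatever docker ps lists):

      docker ps -s                            # adds a SIZE column: how much each container's writable layer has grown
      docker exec -it <container name> bash   # drop into the container to browse its filesystem
      du -x -d 1 -h / | sort -h               # run inside the container to find the biggest top-level directories

    The SIZE column from docker ps -s is usually the number that actually eats into the docker volume, and du inside the container points at which directory is responsible.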
  22. I took a look and didn't have any dangling images, so that didn't help. Is there a way to view the image sizes? cAdvisor seems to only report the size of the default image, because if I add up the numbers it shows, the total is way short of what my docker settings page reports.
  23. I have this issue too and have all of my temp folders for sabnzbd and plex pointing outside docker to my cache. I've already expanded my docker image once and would like to avoid doing that repeatedly. Need to find a solution here.
  24. Ok. I'll take a look at the forums for those issues. The SMART reports look fine for the drive. I will also try disabling spin down for that drive. Thanks.