unTER


Posts posted by unTER

  1. On 11/21/2018 at 11:56 AM, Squid said:

    On the flash drive, in config/plugins/dockerMan/templates-user, are there two unifi xml files (one with a (1) at the end of it)?

     

    On 11/21/2018 at 11:02 PM, dolivas27 said:

    Yes, mine had the extra unifi xml file, so I compared the two. The one with the (1) was the correct xml, so I removed the other one, renamed the xml (1) to just xml, and restarted the docker, and now all is perfect. It also fixed the problem with the network changing from proxynet to bridge.

     

    Thank you for leading me in the right direction to fix the problem. I also found that my plex docker had the same problem. 👍

    This was exactly my issue and fix too. I renamed the version with "(1)" in the filename, updated the docker container, and now it is running the latest and working as expected.
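
    The fix above boils down to a couple of file operations. The sketch below demonstrates it in a scratch directory with a hypothetical template name; on a real server the directory is /boot/config/plugins/dockerMan/templates-user and the filenames will match your container.

```shell
# Demo of the duplicate-template fix in a scratch directory; on the server,
# TPL=/boot/config/plugins/dockerMan/templates-user and the names will differ.
TPL=$(mktemp -d)
touch "$TPL/my-unifi.xml" "$TPL/my-unifi (1).xml"   # simulate the duplicate pair

# The "(1)" copy held the corrected template, so back up the stale one
# and promote the "(1)" file to the plain .xml name.
mv "$TPL/my-unifi.xml" "$TPL/my-unifi.xml.bak"
mv "$TPL/my-unifi (1).xml" "$TPL/my-unifi.xml"
ls "$TPL"
```

    After the rename, update the container from the Docker tab so it picks up the corrected template.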

  2. On 9/22/2018 at 4:11 AM, page3 said:

    Have I done something stupid?

     

    I was running 5.8.30 fine. unRAID did its usual weekly update earlier today, and now I get the following error on running the controller:

     

    "We do not support upgrading from 5.8.30."

     

    Any ideas?

     

    UPDATE: RESOLVED. Repository had reverted to stable branch. Re-added "unstable" and all is well. 

    As of right now, I'm running LS.IO's 5.8.30 Unifi docker. It tells me there is an update ready. If I allow it to update the docker and then go to the WebUI, I get the same message you quoted about the database not supporting an upgrade from 5.8.30.

    To fix this, I edit the docker repository back to "linuxserver/unifi:unstable", because ":unstable" disappears on its own. After applying, the WebUI opens as desired and I'm still on 5.8.30. It also goes back to telling me an update is ready, and if I edit the docker, ":unstable" is again gone. It doesn't seem to stay on unstable whenever I add it and apply changes.

    So now I'm sitting here with 5.8.30 and unRAID telling me an update is ready... and if I allow it to update, I get your quoted message and the cycle continues. Does anyone know how I can get past this?
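
    One sanity check while fighting this loop is to confirm which image tag the container actually resolved to. The tag-splitting line is plain shell; the docker inspect command in the comment is the on-server check, and the container name "unifi" there is an assumption.

```shell
# What the repository field should read, per the post:
image="linuxserver/unifi:unstable"

# Extract the tag after the last ":" to confirm it survived the template edit.
tag="${image##*:}"
echo "$tag"    # -> unstable

# On the server itself (container name "unifi" is an assumption):
#   docker inspect -f '{{.Config.Image}}' unifi
```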

  3. On 12/2/2017 at 8:11 AM, orlando500 said:

    I didn't read the posts above, but doing so helped me too... 

    Setting the config to host mode fixed my problems. Thanks

     

    When I update the UniFi docker in Bridge mode, my adopted devices do not reconnect. If I power cycle them, they all connect in UniFi (still in Bridge mode). Next time the docker is updated, I have to again power cycle all of my UniFi devices to get them to reconnect in the UniFi Controller. This is annoying and a fairly new issue as of late 2017.

     

    Now that I set the UniFi docker in Host mode, I can update UniFi or restart the docker and all of my UniFi devices connect immediately as you would expect. Setting the docker back to Bridge mode brings back my issues.

     

    So leaving the docker in Host mode makes everything work as I'd expect and want. Is the solution to leave it in Host mode, or should I be coming up with a better solution that puts the docker back in Bridge mode and resolves these device connecting issues? Thanks!
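
    For reference, here is a minimal sketch of what running the controller in Host mode looks like from the command line. The image name, container name, and appdata path are assumptions based on common linuxserver.io setups; the block only builds and prints the command rather than executing it.

```shell
# Build the run command; with --net=host the controller shares the server's IP,
# so adoption traffic from UniFi devices reaches it without port mapping.
cmd="docker run -d --name=unifi --net=host \
  -v /mnt/user/appdata/unifi:/config linuxserver/unifi"
echo "$cmd"    # review, then run it yourself on the server
```

    The unRAID Docker UI achieves the same thing by setting the container's Network Type to Host instead of Bridge.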

  4. I've been using unRAID at home for several years with dual cache SSDs, dockers, and a few VMs. No problems there.

     

    I'm setting up a Dell T330 with hardware RAID10 for a small business that currently needs CentOS for their core applications, but we discussed virtualizing just in case they need another OS at some point. We also discussed installing UniFi Controller and UniFi NVR Video (and maybe Crashplan) on their server and how that is difficult directly on CentOS, which led to discussions of Docker containers.

     

    We looked at Proxmox in order to host the CentOS VM and then to host LXC Ubuntu containers for the UniFi apps. We also looked at just installing CentOS as the base OS and then running Docker on that and KVM if more VMs are ever needed.

     

    But I wanted to ask (finally to the question), what are your thoughts on running unRAID with no parity and no cache and with the RAID10 volume added via the Unassigned Devices plugin? The goal would be to use unRAID for its ease of managing VMs and Docker containers from a USB-bootable system. Is this doable, crazy, etc.? Thanks!

  5. I have an ASMedia ASM1061 (Syba SY-PEX40039) that isn't being recognized. Everything I'm reading says that I shouldn't need to do this patch. Any advice?

     

    I have the card plugged into a Supermicro X9SCM-F motherboard. I've tried it in three of the four PCIe slots. Nothing appears to work. I can't tell if the card isn't even being recognized by the computer or if it's an unRAID thing.
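
    A first step is to see whether the ASM1061 shows up on the PCI bus at all, independent of unRAID. The vendor ID 1b21 is ASMedia's; this guarded sketch just records whether anything from that vendor is visible.

```shell
# Look for any ASMedia device (PCI vendor ID 1b21) on the bus.
if command -v lspci >/dev/null 2>&1; then
    result=$(lspci -nn | grep -i '1b21' || echo "no ASMedia device visible on the PCI bus")
else
    result="lspci not available on this system"
fi
echo "$result"
```

    If nothing from vendor 1b21 appears, the card isn't being seen by the motherboard at all, which points at a slot or BIOS issue rather than an unRAID one.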

  6. Chip,

     

    If you figure out what the "grep: /etc/php-fpm.d/www.conf: No such file or directory" message is, let me know; I have been getting it off and on over the last month. It does not seem to affect anything, it just keeps scrolling up the terminal screen. Thanks

     

    At the monitor it says

     

    Tower login: grep: /etc/php-fpm.d/www.conf: No such file or directory

     

    What does that mean?

     

    Interesting after a restart and starting up the array I noticed that the above popped up on the screen. Why is that?

    I think I get Tower login: grep:/etc/php-fpm.d/www.conf: No such file or directory from having the Tips and Tricks plugin installed. I see that message immediately upon booting my server, but then if I go into Tips and Tricks, the message repeats on the screen eight more times.

    [attached screenshot: ConsoleTroubleshooting.png, showing the grep error at the console login prompt]
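
    To track down which script is issuing that grep, one approach is to search the installed plugin code for the path it complains about. The directory below is the standard unRAID webGui plugin location; treat it as a starting point.

```shell
# Search installed plugin code for references to the missing php-fpm config.
hits=$(grep -rl 'php-fpm.d/www.conf' /usr/local/emhttp/plugins/ 2>/dev/null \
    || echo "no plugin scripts reference it on this machine")
echo "$hits"
```

    Any file listed is a candidate source of the message; in this case the Tips and Tricks plugin would be the first place to look.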

  7. Thank you for the UniFi Video docker. I'm loving that I can do this via a docker.

     

    Quick question, when you go into Settings, does it show an IP address that makes no sense? My docker is http://unraidIP:7080/ and I successfully point my cameras to unraidIP. However, the IP showing up under Settings is 172.17.0.4. Is this basically an unused and non-configurable setting that simply doesn't matter because this is running as a docker? Nothing about my network uses the 172 subnet. I don't see where I can change this value either.

     

    P.S. I'm using this for my docker icon, which can be adjusted in the docker's advanced settings... http://i.imgur.com/CJjSBdJ.png
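
    That 172.17.0.4 address is consistent with Docker's default bridge network, which assigns containers addresses out of 172.17.0.0/16, so the Settings page is simply reporting the container-internal IP rather than anything on your LAN. A quick way to classify such an address (the docker inspect line is the on-server check, with an assumed container name):

```shell
# 172.17.0.0/16 is Docker's default bridge subnet; classify the reported IP.
ip="172.17.0.4"
case "$ip" in
  172.17.*) kind="container-internal (docker bridge) address" ;;
  *)        kind="host LAN address" ;;
esac
echo "$kind"

# On the server (container name "unifi-video" is an assumption):
#   docker inspect -f '{{.NetworkSettings.IPAddress}}' unifi-video
```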

    Can someone help me understand how to select the correct track when the disc shows multiple tracks that could be the main movie? For example, Café Society has several tracks that are 16-18 chapters, and I would need to know which is the correct one to select. When I Google for help, I get results like the following, but I don't see how to select playlists in this MakeMKV-RDP Docker like they do here.

     

    https://www.makemkv.com/forum2/viewtopic.php?f=8&t=15236

     

    Search Google for process monitor blu-ray rip and you should get a guide on how to do it.

    You need to download Process Monitor, make a filter for mpls files, and start the playback in PowerDVD. Then you'll see in Process Monitor which mpls file is playing.

    Okay, so I would need to be using a PC to do this? Today, I have an internal Blu-ray drive installed in my unRAID server, and I just use MakeMKV-RDP to do everything.

    Can someone help me understand how to select the correct track when the disc shows multiple tracks that could be the main movie? For example, Café Society has several tracks that are 16-18 chapters, and I would need to know which is the correct one to select. When I Google for help, I get results like the following, but I don't see how to select playlists in this MakeMKV-RDP Docker like they do here.

     

    https://www.makemkv.com/forum2/viewtopic.php?f=8&t=15236

  10. Does UniFi-Video Controller (NVR) Docker essentially replace the need to have a physical NVR in your home? I was looking into some of their cameras, but the NVR they sell is quite expensive. It would be nice to replicate that on my UnRaid Server.

     

    Yes, that's the point of running this docker. No need for their physical NVR at all; it achieves the same thing.

    Yeah, I'm going to have to do this. I imagine you all would limit NVR storage to a single disk to prevent multiple disks from having to constantly spin up?

  11. I was finally getting around to doing a sas2flash "uflash" (pull down complete contents of Flash to a file) on my LSI SAS 9211-8i for a fellow unRAIDer who needed it for troubleshooting, and I decided to flash from P19 to P20 while I was in there. While doing that, I decided to flash the UEFI BSD bios. It was originally N/A... blank. I flashed it with the signed P20 x64sas2.rom. That left me with a "sas2flash.efi -list" of:

    • Firmware Version 20.00.00.00
    • BIOS Version 07.39.00.00
    • UEFI BSD Version 07.27.01.00 <-- was previously "N/A"

    My question... should I have flashed the UEFI BSD? I noticed immediately after doing it that entering my Supermicro BIOS settings takes longer, which I'm guessing is caused by now having a menu item for my LSI SAS 9211-8i legacy boot ENABLE/DISABLE option. Is there a downside to now having the UEFI BSD BIOS flashed/loaded?

     

    For all those who want to use the P20 firmware.

    See this post

     

    Backup your data before updating!

    Report back with your experience.

     

    I chickened out after reading your post, the post you linked, and this guy's "LSI P20 firmware not a smooth ride" blog entry, so I didn't have time to test anything and provide feedback on successes/failures with P20. I'm back to:

    • Firmware Version 19.00.00.00
    • BIOS Version 07.37.00.00 <-- I think I could have stuck with the newer P20 bios as long as the firmware was P19, but I reverted this too
    • UEFI BSD Version N/A <-- decided I didn't need this and that it just added another variable I don't need.

  12. I was finally getting around to doing a sas2flash "uflash" (pull down complete contents of Flash to a file) on my LSI SAS 9211-8i for a fellow unRAIDer who needed it for troubleshooting, and I decided to flash from P19 to P20 while I was in there. While doing that, I decided to flash the UEFI BSD bios. It was originally N/A... blank. I flashed it with the signed P20 x64sas2.rom. That left me with a "sas2flash.efi -list" of:

    • Firmware Version 20.00.00.00
    • BIOS Version 07.39.00.00
    • UEFI BSD Version 07.27.01.00 <-- was previously "N/A"

    My question... should I have flashed the UEFI BSD? I noticed immediately after doing it that entering my Supermicro BIOS settings takes longer, which I'm guessing is caused by now having a menu item for my LSI SAS 9211-8i legacy boot ENABLE/DISABLE option. Is there a downside to now having the UEFI BSD BIOS flashed/loaded?

    I had originally converted one of my servers to almost all BTRFS. I was having some issues which caused several hard crashes. After a short time, I decided to try a scrub on my data drives to see how they were doing. Every disk had unrecoverable corrupted files on it except the XFS volumes; I lost 1-2 files on each drive. I converted to all XFS, and it has been running smooth. I would not use BTRFS again for a long time. I even converted my cache drive over to XFS, since it isn't required for dockers anymore.

    Interesting. I'll probably go with XFS on the data array for a new unRAID build.
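
    The scrub the quoted poster ran is btrfs's built-in integrity check: it re-reads every block and verifies checksums. A guarded sketch follows; the /mnt/cache mount point is an assumption, so adjust it to whichever btrfs volume you want checked.

```shell
# Run a blocking scrub on a btrfs volume and report the result; skip cleanly
# on systems without btrfs tools or without that mount point.
MNT=/mnt/cache   # assumed mount point of a btrfs volume
if command -v btrfs >/dev/null 2>&1 && [ -d "$MNT" ]; then
    btrfs scrub start -B "$MNT" && status="scrub completed on $MNT" \
        || status="scrub could not run on $MNT"
else
    status="btrfs tools or $MNT not present on this system"
fi
echo "$status"
```

    A clean scrub reports zero checksum errors; anything else means corrupted data that btrfs could not repair from redundancy.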

  14. ...I will say this much regarding the "preference" for SSD specifically with respect to Docker and Btrfs:  Btrfs is an "SSD Aware" file system, meaning that it will optimize how it handles IO for SSDs compared to HDDs...The actual read/write performance thanks to btrfs is really not a big deal with most applications because they are so small to begin with.

     

    Considering this, I have two SSDs that I can remove from machines I'm not using now, since I could combine them into the unRAID box  ;D ;D

     

    One is a 64GB SATA2 drive, and the other is a 28GB SATA3 drive. It sounds like the speed difference between 3 Gb/s (SATA2) and 6 Gb/s (SATA3) is unlikely to be realized in any meaningful way, so it probably comes down to size. It sounds like the base image sharing is going to keep my space needs down enough that 64GB should be plenty for a Windows VM and several containers for things like SickRage and SAB.

     

    Any thoughts on all this? The 64GB drive is easier for me to get to and leaves me the faster, larger drive for use in another machine, but I have no immediate plans/use for such a machine, so I can really use either drive for unRAID; I just don't want to 'waste' the good drive when the smaller/slower one will work just as well.

     

    You can use either.  I don't know how much of a difference you will notice day-to-day between the two.  You could also create a raid0 group in btrfs between the two SSDs and get the benefits of both devices.

    Just to be clear, you are referring to Step 6 of the Btrfs Quick-Start Guide?
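
    For reference, the raid0 group the reply mentions is created at mkfs time. The device names below are placeholders (check lsblk first; mkfs destroys existing data), so this sketch only builds and prints the command rather than running it.

```shell
# Placeholder device names -- verify with lsblk before ever running this.
dev1=/dev/sdX
dev2=/dev/sdY

# Stripe data across both SSDs (raid0) while mirroring metadata (raid1),
# a common btrfs layout for a two-device pool.
cmd="mkfs.btrfs -f -d raid0 -m raid1 $dev1 $dev2"
echo "$cmd"    # review, then run it yourself once the devices are confirmed
```

    Note that raid0 means the loss of either SSD takes out the whole pool, so it trades redundancy for capacity and speed.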

  15. Is it possible to use a disk outside of the array/cache for Docker? I installed a SSD that I was planning on moving my ArchVM image to, but never got around to.

     

    Could I format this as btrfs and use it for Docker? Am I likely to have a noticeable speed improvement using SSD over a 1TB WD black disk (my cache)?

     

    I am thinking this could be easier than trying to move everything off my cache drive, reformat and move back (without screwing anything up).

    I'm wondering the same thing. I have a 1TB WD Black that is currently my cache and planned to keep it that way. I too am adding an SSD to host my ArchVM and others eventually. It sounds like, from the Btrfs Quick-Start Guide, SSDs are preferred for Btrfs, so (1) can I keep the 1TB WD as my cache and add the SSD as my Btrfs drive for both Docker and my VMs? If yes to that question, (2) would it be wise to create two partitions on the SSD, one for Btrfs and one for Ext4, if that's what I should still use for my VMs?

     

    Okay, it sounds like we will be able to do this with a non-cache drive, so then I would just need to determine if it's fine hosting my VMs on Btrfs...

     

    Only if your appdata lives on a cache disk today, in which case, yes, you will need to move it off to format the disk with Btrfs. However, if you use a new device for Btrfs, you can keep your cache drive on ReiserFS for now to just test out the new capabilities. I'm working on posting a guide here in a sec on how to do this manually to a non-cache drive.