Posts posted by lionelhutz

  1. Fair enough, I've just found that entry-level boards can have poor IOMMU groupings, which can make assigning things like USB cards to a VM anywhere from difficult to impossible.

  2. I don't believe Radarr can do that; I don't see options for that level of sorting.

     

    What structure were you trying to store? With those settings, directories below the Media share won't split.

     

    Honestly, I wouldn't recommend the Media share in most cases. Make a share for each type of media instead.

     

     

  3. I'm trying to use a HUSBZB-1 stick and I can't get it to work. I've passed the /dev/ttyUSB0 and /dev/ttyUSB1 devices to containers, but no luck. It's well documented that USB0 is the Z-Wave radio and USB1 is the Zigbee radio.
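
    For reference, the device mapping in my container template looks roughly like the line below - I'm writing the attributes from memory, so treat it as a sketch rather than the exact template:

    <Config Name="Zigbee radio" Target="/dev/ttyUSB1" Default="/dev/ttyUSB1" Mode="" Description="HUSBZB-1 Zigbee side" Type="Device" Display="always" Required="false" Mask="false">/dev/ttyUSB1</Config>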

     

    In Home Assistant, the Zigbee radio is recognized and HA suggests installing the ZHA integration. Doing that shows the serial number of the stick, but it just won't open the radio and ultimately fails.

     

    In zwavejs2MQTT it's the same thing; I can't get the Z-Wave radio to work.

     

    I tried this container - https://github.com/walthowd/husbzb-firmware - by passing the /dev/ttyUSB1 device, and I was able to update the radio just fine. Yet, passing the same /dev/ttyUSB1 device to HA fails to work.

     

    I tried searching for the HA error but only found a couple of unresolved threads. Everything I've found about using this stick says passing the USB0 and USB1 devices works fine. I know those device names can change, but there's no point switching to a more permanent device mapping until it works at all.

     

    Right now I'm thinking the stick is simply bad, but I wanted to check here in case I'm missing something.

     

  4. Hmm, my install must have kept an earlier messed-up template from when I first installed it...

     

    I haven't had any other containers act like this one; I guess the original assignments in them must have been fine for how I wanted to use them.

     

    When did Media become a default user share that is created by a new unRAID install? 

     

     

  5. There are these two lines in the template:

     

    <Config Mask="false" Required="true" Display="always" Type="Path" Description="Container Path: /media/frigate" Mode="rw" Default="/mnt/user/Media/frigate" Target="/media/frigate/" Name="Clips path">/mnt/user/Frigate/</Config>

    ...

    <Config Mask="false" Required="true" Display="always" Type="Path" Description="Container Path: /media/frigate" Mode="rw" Default="/mnt/user/Media/frigate" Target="/media/frigate" Name="Media path">/mnt/user/Media/frigate</Config>

     

    Both are required (forced) and both map to the same container path, so it makes absolutely no sense to have the mapping twice. Every time I update the container this screws up my settings: I have to delete the second entry that gets re-added, and I also have to delete the Media user share that gets created by the mapping when the container restarts after the update.

     

    The paths in the first one also don't match each other - the Default says /mnt/user/Media/frigate while the actual value is /mnt/user/Frigate/. I don't use a Media share, so that isn't right for my system, and lots of people don't. I would suggest using the typical /mnt/user/Frigate style path, like all the other templates I've used do.

     

    I would remove the first one and also remove Media from the path in the second one.
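
    The surviving line would then read something like this (only the Media part of the paths changed):

    <Config Mask="false" Required="true" Display="always" Type="Path" Description="Container Path: /media/frigate" Mode="rw" Default="/mnt/user/Frigate" Target="/media/frigate" Name="Media path">/mnt/user/Frigate</Config>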

     

    Also, the GPU and CPU device mappings should not be required (forced). I was testing without a TPU and without GPU acceleration, so I deleted those device mappings, but updating the container messed that up for me too by adding them back in, so I had to delete them again.

     

  6. I come here to see what's new in development and find there's a big uproar. Hate to say it, but I've been here a long time, and community developers come and go; that's just the way it is. This unRAID product opens the door to personalization, both private and shared. Community developers do leave because they feel unRAID isn't going in the direction they want, or that the unRAID developers aren't listening to them, even though there's no obligation to do so. Some leave with a bigger fuss than others.

    The unRAID developers do the best they can at creating a product that will do what the users want, and they do their best to support both the product and community development. The product is strong, the community support is strong, and new people willing to put in time supporting it will continue to appear.

     

    Maybe some hint of what was coming might have eased tensions, but I just can't get behind users taking their ball and going home because unRAID development included something they used to personally support. That evolution has happened many times over the years, both incrementally and in large steps; it's the nature of an appliance-type OS like unRAID as it gets developed.

     

    There is no place for lingering bad feelings and continuing resentful posts. Hopefully, the people who are upset can realize that the unRAID developers are simply trying to create a better product - one they let you update to for free - without any intent to purposely stomp on community developers.

     

     

     

  7. Set the allocation method to Most Free; then each new series will start on the emptiest disk.

     

    I use a split level on my TV share and have never had a write failure. I watch the free space on my disks, and twice in the last 10+ years I moved some active TV series to an emptier disk, even though the fuller disk still had over 200GB free. Most likely, the active series still filling the disk would have all ended before the disk got full.

  8. Did you set up the media share? Link /tv in the container to the location of your TV series. Then, when you add a series, you can just use /tv/ as the folder. Once you do, it should keep re-using it. Same as the quality setting - it re-uses the last choice.

     

    I have a share called TV shows so I have /tv -> /mnt/user/TV_Shows/ 

     

    It has to be the top-level location of the TV series, because it creates a directory for each series and then subdirectories for each season. I believe you can change that so it dumps everything into a single series directory, but it doesn't make much sense not to use season directories.
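
    With /tv mapped as above, the layout ends up something like this (series names are just placeholders):

    /tv/                      <- /mnt/user/TV_Shows/ on the server
        Some Series/
            Season 01/
            Season 02/
        Another Series/
            Season 01/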

     

     

  9. No, complete and incomplete are sub-directories in the download share. You don't set them up as docker paths; you set them in each app's internal settings where it asks for those directories. When you set up the download share mapping, make sure the container path and host path are the same across every container. Then you use the container path in the internal app setup.

     

    An NZB indexer is for Usenet; a torrent indexer is for torrents. You have to match the indexer to the type of download. I use Usenetserver and it's been good, but there are others that will work just as well. I have no idea which indexer is best to sign up with today - I used to run my own.

     

     

  10. As far as setup goes:

     

    Sonarr and Radarr find the stuff to download. They send the info to download clients like Deluge or SABnzbd, which actually download it. They also monitor the download and sort it once it's finished. The indexer is used by Sonarr and Radarr to find what to download, so you need an indexer that matches the type of downloading you want to do.

     

    Some docker containers have the ability to map complete and incomplete folders to directories on the server, which is normally completely unnecessary. I don't know why the developers make it complicated like that. I recall one container wanted an incomplete path mapping; I just deleted it, and others probably had it too. Containers from different authors might have different path names too. Fix them so they are all the same.

     

    The easiest approach is to link every app to a single downloads directory. I have a Downloads share and set up every docker container with one path mapping pointing /downloads in the container to /mnt/user/Downloads/ on the server - only that one. Then, in each app, I set the downloads directory to /downloads. Some of the apps have a place to enter complete and incomplete directories; I entered /downloads/complete and /downloads/incomplete.
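
    Spelled out, every download-related container gets the identical mapping, and the apps themselves only ever see the container side:

    Docker path mapping (identical in every container):
        Container Path: /downloads
        Host Path:      /mnt/user/Downloads/

    App settings (inside each app, not in docker):
        Download folder:   /downloads
        Complete folder:   /downloads/complete
        Incomplete folder: /downloads/incomplete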

     

    The main thing with the paths is to use the SAME setup for every docker container - be very consistent with this. I suggest linking a single share for downloads. Call it downloads or Downloads or whatever, but use it for every container. Just make sure the unRAID share side and container side are each consistent. Notice how I used a capitalized share name but a lowercase name on the docker side? Then you refer to that single directory in each app's setup. If you don't keep it consistent, you can have issues with one app finding stuff another app has put in the directory.

     

     

     

  11. For initial setup, I'd use the Most Free allocation method and set the split level to 1 (only split the top level). That way, each new series starts on the emptiest disk, so it has the most room to grow, but it all stays together on that disk. I've used that setup for years and it's worked well; I've only had a disk run low on space a couple of times and need manual intervention.

  12. I'm running 9 Docker containers and a W10 VM on a 4-core i5-6400. I even transcode 1 stream for internet viewing with Emby using the Intel GPU and 1 core. No 4K media here yet. It's a rather pathetic CPU compared to what you are considering, but I can't convince myself to upgrade yet because it's still handling the current load just fine. Overall, your CPU choice looks great if you don't have any processor-intensive tasks planned besides Plex.

     

    I do want to get into TensorFlow camera image processing in Home Assistant, and once I start doing that I'll likely need a better processor. Something for the new house to provide some security, and also a more accurate way to activate outside lights than old-school motion sensors.

     

  13. On 4/30/2019 at 6:14 PM, lowbiker said:

    If Radarr was any kind of stable I'd love to move to it. Crashes all the time and can't figure out how to recover it without wiping it all and starting over. Hence are back on CouchPotato

     

    The Linuxserver Radarr docker container has been perfectly stable for me. You're likely doing something wrong, or your hardware has issues, if you can't keep it running stably.

     

    Why pull up a 5-year-old thread about a long-dead plugin anyway?

     

  14. What's the typical reported UPS load in watts?

     

    That UPS is good for around 20 minutes at a 100W load. Your server, router, and modem could easily draw well over that when the server is working with all disks spun up.

     

    There is no simple way to make existing hardware much more efficient. About all you can do is make sure the CPU is speed-stepping and the fans are speed-controlled based on the temperature of the devices they're cooling.

     

     

  15. To run real peripherals with a VM, they must be connected to the server. For example, you pass through a USB card and a video card to the VM, then connect your monitor, USB keyboard, and USB mouse to the server to operate the VM.

     

     

  16. Without parity, there is no way to swap a drive and let unRAID rebuild it, as you have already figured out.

     

    You could install the new drive as another disk and copy the data over. Copy the files over the network using a PC, or use Midnight Commander from the unRAID command line by typing mc. I've found it doesn't take that long to move 1TB blocks of data at a time using a PC. For what you are doing, though, I would copy rather than move. Make sure you copy disk to disk - DO NOT copy using the share, or share to disk. Once complete, do a New Config to get rid of the old drive.
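
    If you'd rather do the copy from the command line without mc, rsync works too. A sketch - disk1 and disk2 are placeholders for your actual source and destination disks, and note these are disk paths, never /mnt/user share paths:

    rsync -avh --progress /mnt/disk1/ /mnt/disk2/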

     

    You could also move the old data drive to the cache position and put the new drive in the old drive's data slot. A New Config would be required here. Once set up, you run the mover, and it would move the data off the old drive to the new data drive, or possibly other data drives, depending on the share settings. Then do another New Config to remove the old drive from the cache position.

     

     

  17. After reading about the level of work that must have gone into the Stuxnet attack, this type of attack sounds completely believable. China is well known to have a massive state-operated hacking organization and is also well known for counterfeiting complex integrated circuits, so it isn't far-fetched at all for them to be hardware-hacking new products being produced there.

     

    Another big question to ask is: how many compromised products don't we know about?