lionelhutz

Members
  • Posts: 3723
  • Joined
  • Last visited
  • Days Won: 1

lionelhutz last won the day on November 26 2020

lionelhutz had the most liked content!


lionelhutz's Achievements

Proficient (10/14)

Reputation: 38

  1. Ouch, but thanks for sharing as a reminder to others. I never thought about those cables much, but it's really a dick move on the power supply companies' part not to standardize the pinout on both ends of the modular cables.
  2. I come here to see what's new in development and find that there is a big uproar. Hate to say it, but I've been here a long time and community developers come and go and that's just the way it is. This unRAID product opens the door to personalizations, both private and shared. Community developers do leave because they feel that unRAID isn't going in the direction they want it to go or that the unRAID developers aren't listening to them even though there is no obligation to do so. Some leave in a bigger fuss than others. The unRAID developers do the best they can at trying to create a product that will do what the users want. They also do their best to support the product and the community development. The product is strong and the community support is strong and new people willing to put in time supporting it will continue to appear. Maybe some hint of what was coming might have eased tensions, but I just can't get behind users taking their ball and going home because unRAID development included something they used to personally support. That evolution has happened many times over the years, both incrementally and in large steps. That's the nature of this unRAID appliance type OS as it gets developed. There is no place for lingering bad feelings and continuing resentful posts. Hopefully, the people upset can realize that the unRAID developers are simply trying to create a better product, that they let you update for free, without any intent to purposely stomp on community developers.
  3. Set the allocation method to most-free; then each series will start on the emptiest disk. I use split level on my TV share and have never had a failure to write. I watch the free space on my disks, and twice in the last 10+ years I've moved some active TV series to an emptier disk even though the fuller disk still had over 200 gig free. Most likely, the active series still filling the disk would all have ended before the disk got full.
  4. Did you set up the media share? Link /tv in the container to the location of your TV series. Then, when you add a series, you can just use /tv/ as the folder. Once you do it, it should just keep re-using it, same as the quality - it re-uses the last choice. I have a share called TV Shows, so I have /tv -> /mnt/user/TV_Shows/. It has to be the top-level location of the TV series because it creates a directory for each series and then subdirectories for each season. Well, I believe you can change that so it dumps them all into a single series directory, but it doesn't make much sense not to use season directories.
  5. No, complete and incomplete are sub-directories in the download share. You don't set them up as docker paths. You set up the apps internally, where their settings ask for those directories. When you set up the download share, make sure the container path and host path are exactly the same for every container. Then, you use the container path in the internal app setup. An NZB indexer is for Usenet; a torrent indexer is for torrenting. You have to match the indexer to the type of download. I use Usenetserver and it's been good, but there are others that will work just as well. I have no idea which indexer is best to sign up with today - I used to run my own.
  6. As far as setup goes: Sonarr and Radarr are there to find the stuff to download. They send the info to download clients like Deluge or SABnzbd to actually download it. They also monitor the download and sort it once it's downloaded. The indexer is used by Sonarr and Radarr to find what to download. You need an indexer that matches the type of download you want to do. Some docker containers have the ability to link complete and incomplete folders to directories on the server, which is normally completely unnecessary. I don't know why the developers make it complicated like that. I recall one container wanted an incomplete path set up. I just deleted it. Others probably had it too. Containers from different authors might have different path names too. Fix them so they are all the same. The easiest is to link every app to a downloads directory. I have a Downloads share and set up every docker container with a path pointing /downloads in the container to /mnt/user/Downloads/ on the server. Only this one. Then, in each app I set the downloads directory to be /downloads. Some of the apps have a place to enter complete and incomplete directories. I entered /downloads/complete and /downloads/incomplete. The main thing with the paths is to make sure you put the SAME setup in every docker container. Be very consistent with this. I suggest linking a single share for downloads. Call it downloads or Downloads or whatever, but use it for every container. Just make sure the unRAID share side and container side are consistent. Notice how I used a capitalized share name but a lower-case name on the docker side? Then you refer to that single directory in each app setup. If you don't keep it consistent, you can have issues with one app finding stuff another app has put in the directory.
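The consistent-mapping advice above can be sketched as docker run commands. The image names are the LinuxServer.io builds and the share paths match the examples in the post; adjust both to your own setup - this is a template, not a definitive configuration:

```shell
# Hypothetical sketch: give every container the SAME downloads mapping.
# Host side uses the capitalized share name, container side the lower-case
# path, and the pair is identical across all containers.
docker run -d --name sabnzbd \
  -v /mnt/user/Downloads:/downloads \
  lscr.io/linuxserver/sabnzbd

docker run -d --name sonarr \
  -v /mnt/user/Downloads:/downloads \
  -v /mnt/user/TV_Shows:/tv \
  lscr.io/linuxserver/sonarr

# Inside each app's own settings, refer only to the container-side path:
#   SABnzbd -> /downloads/complete and /downloads/incomplete
#   Sonarr  -> root folder /tv, download folder /downloads
```

Because every container sees the identical /downloads path, a path SABnzbd reports for a finished download resolves correctly when Sonarr goes looking for it.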
  7. For initial setup I'd use the most free allocation method and set the split level to 1 or only split the top level. That way, each new series will start on the most empty disk so it has the most room to grow but it will all stay together on that disk. I've used that setup for years and it's worked well. Only had a disk run low on space a couple of times and require manual intervention.
  8. I'm running 9 Docker containers and a W10 VM on a 4-core i5-6400. I even transcode 1 stream for internet viewing with Emby using the Intel GPU and 1 core. No 4K media here yet. It's a rather pathetic CPU compared to what you are considering, but I can't convince myself to upgrade yet because it's still handling the current load just fine. Overall, your CPU choice looks great if you don't have any processor-intensive tasks planned besides Plex. I do want to get into TensorFlow camera image processing in Home Assistant, and once I start doing that I'll likely need a better processor. Something for the new house to provide some security and also a more accurate way to activate outside lights than old-school motion sensors.
  9. I can't see this being implemented. I bet the majority of users barely understand the basic allocation methods. I hope you get the script figured out. But then, do some searching, because I bet someone has a solution to sort things the way you want.
  10. The Linuxserver Radarr docker container has been perfectly stable. You are likely doing something wrong, or your hardware has issues, if you can't keep it operating stably. Why pull up a 5-year-old thread about a long-dead plug-in anyway?
  11. What's the typical reported UPS load in watts? That UPS is rated for around 20 minutes at a 100W load. Your server, router, and modem could easily draw well over that when the server is working with all disks spun up. There is no simple way to make existing hardware much more efficient. About all you can do is make sure the CPU is speed-stepping and the fans are speed-controlled based on the temperature of the devices they are cooling.
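A crude back-of-envelope check of those runtime numbers. The 33 Wh usable-energy figure is an assumption chosen to match the quoted ~20 minutes at 100 W, and real battery runtime falls off faster than linearly at higher loads, so treat this as an optimistic upper bound:

```shell
# Linear runtime estimate: minutes ~= usable_Wh / load_W * 60.
usable_wh=33   # assumed usable battery energy after inverter losses (a guess)
load_w=100     # reported UPS load in watts
runtime_min=$(awk -v wh="$usable_wh" -v w="$load_w" 'BEGIN { printf "%.0f", wh / w * 60 }')
echo "$runtime_min"   # ~20 minutes
```

Re-running with load_w=200 drops the estimate to ~10 minutes, which is why knowing the actual reported load in watts matters before counting on any particular hold-up time.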
  12. To run real peripherals with a VM, they must be connected to the server. For example, you pass through a USB card and a video card to the VM and then connect your monitor and USB keyboard and USB mouse to the server to operate the VM.
  13. Without parity, there is no ability to swap a device and let unRAID rebuild it, as you have already figured out. You could install the new drive as another disk and copy the data over. Copy the files over the network using a PC, or use Midnight Commander from the unRAID command line by typing mc. I've found it doesn't take that long to move 1TB blocks of data at a time using a PC. For what you are doing, I would copy, though. Make sure you copy disk to disk - DO NOT copy using the share or share to disk. Once complete, do a new config to get rid of the old drive. You could also move the old data drive to the cache position and replace the old data drive with the new drive. A new config would be required here. Once set up, you run the mover and it would move the data off the old drive to the new data drive, or possibly other data drives depending on the share settings. Then, do another new config to remove the old drive from the cache position.
  14. After reading about the level of work that must have gone into the Stuxnet attack, this type of attack sounds completely believable. China is well known to have a massive state-operated hacking organization and is also well known for counterfeiting complex integrated circuits, so it isn't far-fetched at all for them to be hardware-hacking new products being produced there. Another big question to ask is: how many compromised products don't we know about?
  15. Interesting stuff. From the detail, it's likely true.