Everything posted by lionelhutz

  1. The only way to meet $400 and run a few VM's would be by compromising. An option would be to use a quad-core AMD setup and over-commit the cores to the VM's. In other words, dynamically assign either 3 or 4 cores to all VM's and let each VM use the cores as it needs them. It would work if the VM's are general use or only 1 VM is doing heavy lifting at a time. https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/form-Virtualization-Overcommitting_with_KVM-Overcommitting_virtualized_CPUs.html
You could pull this off with a 4-core Intel setup too, but you'd be hard pressed to put together a 4-core Intel CPU, motherboard and memory combo for under $400. You could do it going a little over your budget.
I have an i5 so I've been playing with this to see if I can run multiple multi-core guests. The guests I would use would generally not be doing anything too CPU intensive, so it could work for me. Basically, turn this section of the XML
<vcpu placement='static'>2</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='2'/>
<vcpupin vcpu='1' cpuset='3'/>
</cputune>
into something like this.
<vcpu placement='static'>2</vcpu>
It still appears as 2 virtual cores in the VM, but 2 of my CPU cores are no longer pinned to the VM and unavailable for other uses. Doing this, I can run 3 VM's with 2 cores each on my 4-core i5. With pinned CPU's I could only run 3 VM's by pinning 1 core to each VM. Just throwing this out there since the unRAID VM setup page requires you to pin cores, so you may not know this is possible.
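For anyone wanting to try it, here is a minimal sketch of making that change from the command line with virsh (the VM name "Win10" is only a placeholder, and you can make the same edit through the VM's XML view in the unRAID GUI instead):
virsh dumpxml Win10 | grep -A4 '<cputune>'   # confirm the pinning block is present
virsh edit Win10                             # delete the whole <cputune>...</cputune> block and leave the <vcpu> line alone
virsh vcpucount Win10                        # after a restart the VM still reports 2 vCPUs
virsh vcpuinfo Win10                         # but they now float across the host's cores instead of being pinned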
  2. It depends on what you will be using it for. You would likely have to assign at least 3 Opteron cores for every FX-8350 core to get similar VM performance. If you want many lighter-duty VM's then the Opteron gives you that ability. Otherwise, it's a toss-up where you're basically trading extra power consumption for a cheaper price.
  3. I have 7 Dockers and a W10 VM assigned 2gig of memory running in 8gig total, and the server currently has 3.3gig of memory cached and 300meg free. That is about 2gig for unRAID and the Dockers to run and 2gig for the VM. The VM certainly doesn't require 2gig of overhead.
I don't see why you'd have any issue running 3 W10 VM's and your Dockers in 32gig even if you did assign 8gig of ram per VM, since that leaves 8gig for unRAID and the Dockers, which should easily be enough. The server would just make more use of the disks during operations like transcoding a large file. If you did assign 4gig to the general purpose W10 VM's then you'd have even more free memory for unRAID to play with. Something like 2 x W10 @ 4gig + 1 x W10 @ 8gig leaves unRAID plus the Dockers with 16gig, which should be more than enough. You could also do something like run all 3 at 5gig or 6gig if you wanted to free up a little extra memory for the Dockers.
I started with about 6 Dockers in 2gig of memory but stepped up the server processor and memory when I started running the Emby Docker. Then, I found it was so under-utilized I added the W10 VM for hooking to the TV and watching media instead of using another PC. I have 16gig but was having crashing problems, so I was trying 8gig to see if it was the memory (I think it was one of the dockers but I'm not sure yet) and haven't bothered to put the other 8gig back in yet. When I do, I'll put the VM back to 4gig and might add another one for testing or general use.
  4. Sure, but the presentation/layout of the data in that plugin needs work, and it's clunky to use because you have to go to the settings or plugins page and then click on it to get to it, instead of being able to add the extra fields right on the main page with the other disk info.
  5. You can setup a custom table (or tables) to display your array info and include stuff like various SMART statistics (say power-on hours) as well as user entered data like the purchase date of the drives. There is A LOT of data that can be user entered and customized. I personally found it very handy for tracking some of the basic information on the drives being used like the purchase date, cost, place, warranty start/end etc. At one time I threw that data into myMain so I could easily review the drive history to decide when upgrades were due.
  6. Set the advanced script variable to true and write a little script file that does it when the docker starts.
Any hints for me (unRAID noob here...)? I can't find any advanced script settings in the Docker settings, neither in general nor in the Apache docker settings. I only find extra parameters for startup... Thanks ahead.
Try this post for an example of using it. https://lime-technology.com/forum/index.php?topic=43858.msg506722;topicseen#msg506722
You create a "userscript.sh" script file in the root of the Apache docker config directory. Then, set the "ADVANCED_SCRIPT" variable to true.
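To make that concrete, a bare-bones userscript.sh could look something like this (the package is only an example, and ADVANCED_SCRIPT is typically added as an extra container variable if it isn't already listed in the template):
#!/bin/bash
# userscript.sh - placed in the root of the Apache docker's /config directory.
# With ADVANCED_SCRIPT set to true, the docker runs this on every start.
apt-get update
apt-get install -y php-pear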
  7. Set the advanced script variable to true and write a little script file that does it when the docker starts.
  8. It sounds like your setup is fine. The Docker image is a BTRFS filesystem image. Think .iso file, but for a BTRFS filesystem instead of a DVD. You don't need to backup the Docker image. When you click the add new Docker button there is a drop-down and you can pick the my-XXX for your previously installed Dockers (for example my-Couchpotato) and it will re-install the Docker with all your previous settings and start working again as if nothing has happened. So, if you can restore the appdata to a new SSD then you can re-install the Dockers.
  9. My bays have power buttons and I don't see any reason to be concerned about swapping disks if they are not mounted.
  10. No-one can suggest a split level setting unless you give BOTH the file structure you are using AND the sub-directory level where you want everything below it to remain on a single drive. For your music, you say Music\Artist\Album is the directory structure and you want each Album to remain on a single disk. Well, this means Music and Artist can both split to multiple disks, so the split level setting is 2.
The best allocation method can somewhat be dictated by the split level you use. For example, I keep each complete TV series on a single disk. So, I don't want a bunch of new series all being started on the same disk, where the disk then fills up as new episodes come out. So, I use most-free to spread the new series around and start them where there is the most room for future expansion.
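As a rough illustration of that music example (the paths are only for show):
/mnt/user/Music/                       level 1 - allowed to split across disks
/mnt/user/Music/Artist/                level 2 - allowed to split across disks
/mnt/user/Music/Artist/Album/01.mp3    with split level 2, everything below an Album directory stays together on one disk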
  11. I have added, pre-cleared and assigned drives without rebooting the server. I just had to stop the array when I assigned each drive then start it again. Preclear and the unassigned devices plugin will work without stopping the array. When upgrading a drive, I would first hot-plug the new drive into a spare slot to preclear it. Then, I would stop the array, pull the drives involved and put the new drive into the slot for the old drive before starting the array again. If I was selling the old drive, after a week or two I'd plug it into a spare slot again to clear it before selling it. I've also upgraded the cache with VM's and Dockers turned off and then plugged the old cache into a spare slot to copy the application data to the new cache before turning the VM's and Dockers back on. I then unmounted the old cache drive and pulled it without stopping the array. You can't actually hot-swap existing array drives with the array started, but having bays and hot-plugging drives is handy.
  12. http://www.heidisql.com/ is one of the simple tools that works on a PC. There must be a similar tool for a Mac.
  13. No, scrubbing won't fix data corruption errors on a single-drive filesystem. It can fix the filesystem metadata and tell you there is a data issue.
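For reference, a scrub on a single cache device looks something like this (assuming the cache is mounted at /mnt/cache):
btrfs scrub start /mnt/cache    # kick off the scrub
btrfs scrub status /mnt/cache   # check progress and the error counts once it finishes
# Metadata is typically duplicated (DUP) so scrub can repair it, but the data only has
# one copy, so data checksum errors get reported as uncorrectable rather than fixed.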
  14. BTRFS requires 2 drives for data fault tolerance. Since it's an image file on the cache drive, I don't believe having mirrored cache drives would help unless the problem was being caused by the underlying cache drive filesystem.
  15. I threw together a basic guide. https://lime-technology.com/forum/index.php?topic=54931.msg524405#msg524405 You might not be interested after seeing the steps needed....
  16. I'm going to provide more of an overview than a step-by-step guide. At this time, I'm not going to get into the exact step-by-step detail it would take to get someone with no prior knowledge of any of this up and running. I'm assuming anyone reading this is using Community Applications, can install Dockers and plugins, has some basic command line knowledge, can use the right text editors to edit Linux script and config files and (most obviously) knows enough or can read enough about Newznab to set it up and configure it.
1. Install the following.
- Nerd Tools plugin.
- Linuxserver.io Apache Docker.
- Any MariaDB or MySQL Docker or plugin.
The database Docker or plugin version really doesn't matter since you can connect Newznab to any of them. At this time I am using Needo's MariaDB docker because it was the first database docker published and I've been doing this since dockers were first around. I've never changed it because it still works. But, the Linuxserver.io docker would be one of the more obvious choices here.
2. In the Nerd Tools configuration install the following packages.
- apr
- apr-utils
- neon
- serf
- subversion
These are needed to run subversion to download Newznab. I have the Perl package installed but I don't believe it is required for subversion. If the subversion command line doesn't work then install it too.
3. Download Newznab.
Log onto the unRAID server and go to the Apache Docker installation location - the directory where you pointed /config to in the Apache docker setup. Then, go into the www directory that is there. Now, use Subversion to download Newznab. The command line should be,
- svn checkout svn://svn.newznab.com/nn/branches/nnplus
If this works how I remember, you should have the Newznab install directly in this www directory, so you have 5 directories and 6 files in the directory. If it dumped everything into yet another subdirectory then move everything from that directory to the www directory and delete that subdirectory. You should have paid for Newznab by now and received a username and password to use for this.
4. Set up the Apache docker.
By default Apache is looking for the website in the www directory where you just installed Newznab, but Newznab creates another www directory inside the Docker www directory. So, you have to set up Apache so it looks for it in the www/www directory. Go to the same Apache docker "/config" directory as in step 3, then go into the apache/site-confs subdirectory and edit the default.conf file. Change the top part to look like this.
<VirtualHost *:80>
DocumentRoot /config/www/www
<Directory "/config/www/www">
Options Indexes FollowSymLinks MultiViews
AllowOverride all
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
Basically, just change /www to be /www/www.
Now set up the Apache docker to use the advanced script so you can install php-pear and unrar (optionally, for use in Newznab for deeper password checking). In the Apache docker setup set the "ADVANCED_SCRIPT" variable to true. In the Apache install directory (where you pointed /config) put a file called "userscript.sh" containing the following.
apt-get install php-pear
apt-get install unrar-free
Now, restart the Apache docker and go to its WebUI and you should get the Newznab setup page. It'll take a bit the first time because the docker has to install those packages before being run. Go through the setup pages and point it to your usenet server and the MariaDB or MySQL database and it should come up ready to use.
As a minimum, you need to go into the site setup and put in your Newznab ID, and you should also activate a single group for testing to ensure it's working. Since I installed unrar, I put "/usr/bin/unrar" into the path-to-unrar box in the Newznab site setup. Then, I could use deep password checking.
5. The last part is to set up the indexing script. Yet again starting at the same Apache "/config" directory as in step 3, go to this directory.
- www/misc/update_scripts/nix_scripts
Edit the newznab_screen.sh file so the top part looks something like this.
#!/bin/sh
# call this script from within screen to get binaries, processes releases and
# every half day get tv/theatre info and optimise the database
set -e
export NEWZNAB_PATH="/config/www/misc/update_scripts"
export NEWZNAB_SLEEP_TIME="300" # in seconds
LASTOPTIMIZE=`date +%s`
while :
do
CURRTIME=`date +%s`
cd ${NEWZNAB_PATH}
#/usr/bin/php5 ${NEWZNAB_PATH}/update_binaries.php
/usr/bin/php5 ${NEWZNAB_PATH}/update_binaries_threaded.php
/usr/bin/php5 ${NEWZNAB_PATH}/update_releases.php
The important parts are the NEWZNAB_PATH and taking the # off the start of the update script lines for the ones you want to use. I use the threaded update binaries script but you could use the other script too. I use 300 seconds, or 5 minutes, as the update interval but you can change this to whatever time interval you want.
6. Now, test it to make sure it will index. Log into the Apache docker by using this line at the unRAID command prompt.
- docker exec -t -i Apache /bin/bash
Then run this.
- sh /config/www/misc/update_scripts/nix_scripts/newznab_screen.sh
And the update scripts should start to run.
7. To automate the updating when the Apache docker runs, add this line to the Apache userscript.sh file and restart the Apache docker.
sh /config/www/misc/update_scripts/nix_scripts/newznab_screen.sh &>/dev/null &
I would recommend you "Browse Groups" to check they are being updated at least once a day. If they quit being updated then just restart the Apache docker.
As you can see, this isn't simple. The last time I actually did this was at least a year ago, so I might have missed something. So, let me know if you get that far and it doesn't work for you because I missed something obvious.
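Pulling steps 4 and 7 together, the finished userscript.sh ends up looking something like this (a sketch based on the snippets above, so adjust the paths to your own setup):
#!/bin/bash
# userscript.sh in the root of the Apache docker's /config directory,
# run on every container start when ADVANCED_SCRIPT is set to true.
# Packages Newznab needs inside the container.
apt-get install php-pear
apt-get install unrar-free
# Start the indexing loop in the background.
sh /config/www/misc/update_scripts/nix_scripts/newznab_screen.sh &>/dev/null &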
  17. No, it's not been discussed lately. There never seemed to be much interest in running Newznab here. Basically I use:
- the Linuxserver.io Apache docker to run the web interface - this docker has everything needed to make it easy to get running,
- a MariaDB docker for the back-end database - any MariaDB or MySQL database docker would work, and
- NerdTools to install the packages needed to run subversion so I can download Newznab from the unRAID command line.
I've been running Newznab since before Dockers existed, then with Dockers before anyone had an "unRAID Apache" docker. So, I've gone through multiple ways to do it. Running it bare metal was actually fairly easy to do, but during one of the unRAID releases there was no support for another web server so I switched to dockers. The Linuxserver.io Apache docker was the first I found to be easily customized so it would just work.
I could start a thread with an overview of what to do to get it running if people are actually interested. The problem is that without actually going through the setup again I would probably miss a step or two. For example, I can't see the Newznab initial setup pages unless I start over with it.
  18. Is it allowed to discuss running your own indexer by using a couple of dockers and the nerd tools plugin?
  19. Mover doesn't work like that. (And if it did, it would be rather pointless.) It physically moves the file. If anything it's actually far more inefficient since hardlink support came into effect with 6.2.
I wasn't clear enough... my statement was about how the file is moved/renamed between SAB and Sickbeard, not how mover copies the file from the cache disk to the data disk.
Actually it doesn't even matter there either. Because /mnt/user/TV Shows and /mnt/user/Downloads are different mount points, the system will always do a copy/delete. No linking involved. But, that might be the key here. The vast majority of users add multiple paths to a docker app instead of just passing in /mnt/user. So now you've got a single mount point passed through to a docker app that ultimately contains multiple mount points. I would surmise that the docker system is puking at that and just doing the rename instead of actually following the rules for following mounts. The solution is to pass separate mounts of /downloads and /tv to all the docker apps involved, make all the internal references point to them, and ditch the mapping of /mnt or /mnt/user. If this works, then it's probably also @Nick5429's issue, as he also is apparently passing through /mnt/user, and then it's an actual issue with docker, not unRAID per se.
I have tried it with my TV_Shows share set to use cache and not use it and it works correctly both ways for me. I only pass through the lower level directory to my dockers. No /mnt or /mnt/user gets passed through. For example, Sickbeard uses:
/downloads -> /mnt/cache/appdata/Downloads/
/tv -> /mnt/user/TV_Shows/
/config -> /mnt/cache/appdata/Sickbeard/
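In docker terms, the difference boils down to the volume mappings - something like this (the image name is just a placeholder, and on unRAID you would set the equivalent paths in the docker template rather than running docker by hand):
# One big mapping - the container sees everything under a single mount, so a move
# between downloads and TV can be done as a plain rename (the behaviour surmised above).
docker run -d --name Sickbeard -v /mnt/user:/mnt/user some/sickbeard-image
# Separate mappings, as described above - /downloads and /tv are distinct mounts
# inside the container, so a move between them is a real copy/delete.
docker run -d --name Sickbeard \
  -v /mnt/cache/appdata/Downloads/:/downloads \
  -v /mnt/user/TV_Shows/:/tv \
  -v /mnt/cache/appdata/Sickbeard/:/config \
  some/sickbeard-image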
  20. I have turned the use cache setting on and off on my TV share and files are not written to the cache with it turned off. You have a setup problem. What downloader are you using, and how is the post-processing being done?
  21. "Root Directories" like you are describing isn't a SickBeard setting so it must be something to do with the plugin settings??? What does Sickbeard show here? Go to a show that is on the cache disk and see what is under the location here? This is a Docker so /tv is mapped to the user share by the Docker settings.
  22. You have Sickbeard pointed to /mnt/cache/TV Series, either in the Docker config or in the Sickbeard config. You need to change it to /mnt/user/TV Series.
  23. Yes, rebuilding the docker would require making the link again. Probably the easiest way is to use the user script option and make a script that creates the link. That way, you don't have to go into the container and it will get created.
  24. The server ran 24/7 from the end of September until Friday and then it locked-up again. I reset it and it did it again on Saturday. So, it appeared to be good but isn't and I still don't know why. There was a BIOS update for the motherboard so maybe it'll help.