Posts posted by lionelhutz

  1. Well, there is really no mention of ANYTHING to do with this from LT, and apparently an all-XFS system does this too, so I'm not convinced it's an RFS (ReiserFS) problem.

     

    It's always been best to install Windows on a clean partition, so that hardly qualifies as a comparison to having to wipe out many drives' worth of data...

  2. Interesting...  Can you tell that I'm a fan of lower-end hardware?  (My wife has never believed me when I say that I absolutely need the newest stuff or the world as we know it is going to come to a crashing halt.)

     

    The Skylake stuff has been a big disappointment. It has randomly hard-locked, and I don't even capture anything in the syslog. I made a few BIOS adjustments and it once went about 40 days, but it keeps happening. The last time was after 25 days, just as I had started thinking maybe it was OK this time...

     

    I've had unRAID running on 4 different AMD processor/motherboard combos without any stability issues. The last AMD setup would NOT let me install the video card drivers in a Windows VM, which is the reason for the Skylake build. I tried EVERYTHING that I could find to fix that issue (which no one else seemed to have), including both AMD and Nvidia cards.

  3. I really doubt you need more. It might make Plex transcoding or library updates go a little quicker. Monitor the memory usage when Plex is doing some transcoding and see what happens.

     

    If you have a crapload of torrents downloading or seeding in Deluge on a platter drive, then more memory would take some load off the drive. But if you're not doing heavy loads, or you're using an SSD, then I can't see it mattering.

     

    In case you don't know, Linux will basically use the excess memory to cache drive contents, so the less spare RAM there is for that cache, the more often the applications have to go to the drives for their data.

     

    I used to run a bunch of dockers in 2gig of memory and it was always fine.
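    If you're curious how much of your RAM is going to that cache, a quick way to check (assuming a normal Linux shell on the server) is:

```shell
# "buff/cache" is the spare RAM Linux is using to cache drive contents;
# "available" is roughly what applications can still claim before the
# cache has to shrink.
free -h
```

    A big cached number is normal and healthy; it's not a sign you're running out of memory.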

     

     

  4. I think the results are tricking you and it only has one USB controller: the 14 device. Possibly the 1a and 1d devices are part of the interface in the processor chipset, and the 14 device is the actual controller?

  5. Can you post the results of -> lspci | grep USB

     

    That command line does seem to work on my server, but have you tried what is here?

     

    https://lime-technology.com/forum/index.php?topic=36768.0

     

     

     

    Note that, AFAIK, a USB3 port is always going to be on a different controller than a USB2 port.

     

    Just FYI, I have a Skylake system and it has 1 USB controller with 6 x USB3 rear ports, 1 x USB3 header and 2 x USB2 headers. I think having an all-in-one controller is new to Skylake on Intel platforms.
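    To show what that grep is doing, here's the same filter run over made-up sample lspci lines (the device names below are illustrative, not from a real board):

```shell
# Filter USB controllers out of sample lspci output; on a live system
# you would simply run: lspci | grep USB
printf '%s\n' \
  '00:14.0 USB controller: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller' \
  '00:1f.2 SATA controller: Intel Corporation Sunrise Point-H SATA controller [AHCI mode]' |
  grep USB
```

    With an all-in-one Skylake controller you'd expect just a single xHCI line like that.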

     

     

  6. The only way to meet $400 and run a few VM's would be by compromising. One option would be a quad-core AMD setup with the cores over-committed to the VM's. In other words, dynamically assign either 3 or 4 cores to all the VM's and each VM uses the cores as it needs them. It would work if the VM's are general use or only 1 VM is doing heavy lifting at a time.

     

    https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/form-Virtualization-Overcommitting_with_KVM-Overcommitting_virtualized_CPUs.html

     

    You could pull this off with a 4-core Intel setup too, but you'd be hard-pressed to put together a 4-core Intel, motherboard and memory combo for under $400. You could do it by going a little over your budget.

     

    I have an i5, so I've been playing with this to see if I can run multiple multi-core hosts. The hosts I would use generally wouldn't be doing anything too CPU-intensive, so it could work for me. Basically, turn this section of the XML

     

    <vcpu placement='static'>2</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='2'/>
      <vcpupin vcpu='1' cpuset='3'/>
    </cputune>
    

     

    into something like this.

     

    <vcpu placement='static'>2</vcpu>
    

     

    It still appears as 2 virtual cores in the VM, but 2 of my CPU cores are no longer pinned to the VM and unavailable for other uses. Doing this, I can run 3 VM's with 2 cores each on my 4-core i5. With pinned CPU's, I could only run 3 VM's by pinning 1 core to each VM.

     

    Just throwing this out there since the unRAID VM setup page requires you to pin cores so you may not know this is possible.
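    As a middle ground, libvirt also lets the vcpus float while restricting which host cores they float over, using the cpuset attribute on the vcpu element instead of per-vcpu pinning (the core numbers here are just an example):

```xml
<!-- 2 virtual cores, free to float, but only across host cores 1-3 -->
<vcpu placement='static' cpuset='1-3'>2</vcpu>
```

    That way a busy VM can't steal time on a core you want to keep free for unRAID itself.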

     

  7. It depends on what you will be using it for.

     

    You would likely have to assign at least 3 Opteron cores for every FX-8350 core to get similar VM performance. If you want many lighter-duty VM's then the Opteron gives you that ability. Otherwise, it's a toss-up where you've basically traded extra power consumption for a cheaper price.

     

  8. I have 7 Dockers and a W10 VM assigned 2gig of memory running in 8gig total, and the server currently has 3.3gig of memory cached and 300meg free. This is about 2gig for unRAID and the Dockers to run and 2gig for the VM. The VM certainly doesn't require 2gig of overhead.

     

    I don't see why you'd have any issue running 3 W10 VM's and your Dockers in 32gig, even if you did assign 8gig of RAM per VM, since that leaves 8gig for unRAID and the Dockers, which should easily be enough. The server would just make more use of the disks during operations like transcoding a large file. If you assigned 4gig to the general-purpose W10 VM's, you'd have even more free memory for unRAID to play with. Something like 2 x W10 @ 4gig + 1 x W10 @ 8gig leaves unRAID plus the Dockers with 16gig, which should be more than enough. You could also run all 3 at 5gig or 6gig if you wanted to free up a little extra memory for the Dockers.

     

    I started with about 6 Dockers in 2gig of memory, but stepped up the server processor and memory when I started running the Emby Docker. Then I found it was so under-utilized that I added the W10 VM for hooking up to the TV and watching media instead of using another PC. I have 16gig, but I was having crashing problems, so I tried 8gig to see if the memory was the cause (I think it was one of the Dockers, but I'm not sure yet) and haven't bothered to put the other 8gig back in yet. When I do, I'll put the VM back to 4gig and might add another one for testing or general use.

  9. Apologies for my ignorance, I haven't used myMain, but what specific feature(s) of myMain do you miss in v6?

     

    You can set up a custom table (or tables) to display your array info, including things like various SMART statistics (say, power-on hours) as well as user-entered data like the purchase date of the drives. There is A LOT of data that can be user-entered and customized. I personally found it very handy for tracking basic information on the drives being used: purchase date, cost, place, warranty start/end, etc. At one time I threw that data into myMain so I could easily review the drive history to decide when upgrades were due.

  10. The question now is: whenever I modify something, this setting is gone. Any chance to add commands to a docker which stick?

     

    Set the advanced script variable to true and write a little script file that does it when the docker starts.

     

    Maybe some hints for me (unRAID noob here...)? I can't find any advanced script setting in the docker settings, neither in general nor in the Apache docker settings.

    I only find extra parameters at startup...

     

    Thanks in advance.

     

     

    Try this post for an example of using it.

     

    https://lime-technology.com/forum/index.php?topic=43858.msg506722;topicseen#msg506722

     

     

    You create a "userscript.sh" script file in the root of the Apache docker config directory. Then, set the "ADVANCED_SCRIPT" variable to true.
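    A minimal sketch of what that userscript.sh could look like; the path and the directive it applies are assumptions for illustration (in the real container you'd point it at the Apache config under /config):

```shell
#!/bin/bash
# Hypothetical userscript.sh - the Apache docker runs this at container
# start when ADVANCED_SCRIPT is set to true.
# /tmp is used so this sketch runs anywhere; in the container you'd
# target the real site config under /config instead.
CONF=/tmp/apache-default.conf
touch "$CONF"
# Re-apply the setting that gets lost on updates, and only add it once.
grep -q 'ServerTokens Prod' "$CONF" || echo 'ServerTokens Prod' >> "$CONF"
```

    Because it runs on every container start, the change survives docker updates and rebuilds instead of disappearing.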

     

     

  11. It sounds like your setup is fine. The Docker image is a BTRFS filesystem image. Think .iso file, but for a BTRFS filesystem instead of a DVD.

     

    You don't need to back up the Docker image. When you click the button to add a new Docker there is a drop-down where you can pick the my-XXX entry for a previously installed Docker (for example, my-Couchpotato), and it will re-install the Docker with all your previous settings and start working again as if nothing had happened. So, if you can restore the appdata to a new SSD then you can re-install the Dockers.

     

  12. No one can suggest a split level setting unless you give BOTH the file structure you are using AND the sub-directory level below which you want everything to remain on a single drive.

     

    For your music, you say Music\Artist\Album is the directory structure and you want each Album to remain on a single disk. Well, this means Music and Artist can both split across multiple disks, so the split level setting is 2.

     

    The best allocation method can somewhat be dictated by the split level you use. For example, I keep each complete TV series on a single disk, but I don't want a bunch of new series all being started on the same disk and filling it up as new episodes come out. So, I use most-free to spread the new series around and start them where there is the most room for future expansion.
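    To make the level counting concrete, here's the example share sketched as a directory tree (the names are just the Music example from above):

```shell
# Build the example tree and list it. With split level 2, Music and
# Artist may spread across disks, while everything inside an Album
# directory is kept together on one disk.
mkdir -p 'Music/Artist/Album'
find Music -type d | sort
# -> Music
# -> Music/Artist
# -> Music/Artist/Album
```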

  13. I have added, pre-cleared and assigned drives without rebooting the server. I just had to stop the array when I assigned each drive then start it again. Preclear and the unassigned devices plugin will work without stopping the array.

     

    When upgrading a drive, I would first hot-plug the new drive into a spare slot to preclear it. Then, I would stop the array, pull the drives involved and put the new drive into the slot for the old drive before starting the array again. If I was selling the old drive, after a week or two I'd plug it into a spare slot again to clear it before selling it.

     

    I've also upgraded the cache with VM's and Dockers turned off and then plugged the old cache into a spare slot to copy the application data to the new cache before turning the VM's and Dockers back on. I then unmounted the old cache drive and pulled it without stopping the array.

     

    You can't actually hot-swap existing array drives with the array started, but having bays and hot-plugging drives is handy.
