Edvard_Grieg


Posts posted by Edvard_Grieg

  1. Hi-

     

    I've got a failed data drive that is being emulated.  Normally it would be no big deal and I'd just replace it.  However, I'd like to increase my parity size as well.  Right now the parity drive is 14TB, and I'd like to be able to do 2x16TB.

     

    My plan at the moment was to buy a 16TB and a 14TB, replace the failing disk with the 14TB, and then replace the current parity with the 16TB, moving the existing parity drive into the array.

     

    Is there an effective way I can replace the failing disk in my array with a 16TB and replace the parity drive with a 16TB as well?

  2. On 1/12/2021 at 5:50 PM, bigzaj said:

    Mine is primarily for Plex, but have been expanding video editing / rendering stored on this machine.

     

    The goal was to reduce power and replace drives: cut out 8 HDDs and a processor and run 12 x 18TB.

     

    To be honest, I am now considering just running this setup for another year to see where AMD + Unraid end up.

    That one's tricky. From what I read, Intel with Quick Sync is the way to go there (there have been some really good prices on the i9-10850K), but then you're no longer server class: new motherboard, memory, yadda yadda yadda....

  3. Nice setup- aside from the itch to reduce power consumption, you're probably closer to good than not on performance.  I gave up on the power consumption piece....with your setup, are you thinking of replacing your drives too? 

     

    I haven't experienced any data integrity issues with my use.  Speed of opening files is generally 'fine'; my use case is primarily media server, and I don't typically see any notable loading times.  

     

    My intent is to run Nextcloud, WordPress and others- right now, adding those in alongside other media processes tends to add to the sluggishness. 

  4. @bigzaj I started with FreeNAS (or something similar) about 10 years ago, and after a physical raid card failed and couldn't recover the array I switched to using unraid nearly 9 years ago. 

     

    I started with a basic Dell Optiplex Celeron desktop and have iterated from there.  Prior to the 2xE5-2660 v0 I'm currently running, I had been running 2xX5482.  That move was pretty substantial, and I'm hoping the next one is as well.  Honestly, were it not for trying to run some other workloads (self-hosted cloud etc.), the current CPUs would probably be OK.  I'm at ~110TB with plenty of 4K UHD, Atmos, HEVC etc.

     

    Part of the way I've addressed the majority of the local use is by ensuring my endpoints don't require any transcoding, using Nvidia Shields that handle any format natively.  I limit remote streaming to lower resolutions from a bandwidth standpoint, as my upload is not extensive.  Typically I don't see more than 5-6 concurrent remote streams, and multiple simultaneous major transcodes are not common.

     

    This is all to say that even with my 'older' hardware, it may not be far off from meeting your needs.  I'm also pretty confident you'll be quite happy with Unraid as a platform.  There are other system architecture considerations to make (especially with a larger library) to ensure the right drives are performant when needed.  I think many are looking forward to 6.9, which will allow for multiple cache pools (without doing the unassigned-drive dance) and thus spread out some of those storage systems that require faster storage.

     

    For me, it's the added workloads, and ensuring any transcoding activity is non-impactful.

  5. On 11/28/2020 at 9:10 AM, PanteraGSTK said:

    A GTX 1050 and above will help, but getting a card that doesn't have a limit would be better.

     

    It looks like your board can use the e5 26xx v2 series so you have lots of options. Those CPUs are cheap on ebay depending on which one you want.

     

    https://ark.intel.com/content/www/us/en/ark/products/series/78582/intel-xeon-processor-e5-v2-family.html

    So I'm still back and forth on this, and I've been doing some tweaking with my current system: checking that all dockers point to /mnt/cache/appdata vs /mnt/user/appdata, moving the cache drive to RAID 1 plus a separate unassigned drive, etc.  All these things seem to help in some ways; however, I'm still finding some sluggishness when I have a lot of dockers running.  
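    For anyone wanting to double-check the same thing, something like this should list any container still bound through /mnt/user/appdata (a sketch- it assumes the standard docker CLI on the Unraid host; container names will be whatever yours are):

```shell
# Flag any container whose bind mounts go through /mnt/user/appdata (the FUSE
# path) instead of /mnt/cache/appdata (the direct disk path).
for c in $(docker ps -a --format '{{.Names}}'); do
  docker inspect --format '{{range .Mounts}}{{.Source}}{{"\n"}}{{end}}' "$c" \
    | grep '^/mnt/user/appdata' \
    | sed "s|^|$c: |"
done
```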

     

    I end up leaving the main media content dockers running, but I've got another dozen that I leave off until needed and would like to just be able to leave them running.

     

    At one end I've been looking at the E5-2697 v2 and getting a pair cheap on ebay + some faster memory.  

     

    Alternatively, I keep eyeing some more modern AMD options, but it looks like every single one is going to cost a lot more- not just the CPU; I'd obviously need a new motherboard, RAM and possibly a GPU.

     

    What are the thoughts here?  Is 2xE5-2697 v2 enough of an upgrade over 2xE5-2660 to be worth the money, or am I better off investing that towards something more modern?  I've been out of the CPU game so long I'm just now reading up on Ryzen vs TR vs Epyc- it appears TR and Epyc may be too much money (not sure), and I've been reading mixed things on the overall stability of Ryzen on Unraid....Thoughts?

  6. 6 hours ago, PanteraGSTK said:

    True, it may be the containers you are using.

     

    I have close to 30 dockers, but not all are in use. Portainer is one that is there, but I don't use it unless I need to.

     

    Pulling down containers is pretty fast for me, but I'm used to this hardware. Single core clock speed could be a factor, but I'm not sure how much.

     

    I'd be interested to see how your performance is after you change your cache pool. Perhaps the RAID aspect is causing it to slow down?

    So...I'm finding I really like BTRFS...I was able to convert from a 3-drive RAID 5 to a 3-drive RAID 1 and then remove one of the drives, all without losing any data.  

    Right now the separate drive is being used for downloads and post-processing activities which should hopefully isolate some of the IOPS work. 

    The 2-drive RAID 1 BTRFS array now has everything else...once 6.9 is stable, I plan to upgrade and create a separate cache pool instead of the unassigned route.  

    So far it's feeling a little speedier, but time will tell...definitely need to put it through its paces...I'll probably pick up a 4th NVMe to round out the set and run the downloads and post-processing as RAID 0.  
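    For reference, the RAID 5 -> RAID 1 conversion I did can be done live with a balance; a rough sketch (the mount point and device are placeholders for your pool):

```shell
# Convert data and metadata profiles to RAID1 while the pool stays mounted
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# Verify the new profiles took effect
btrfs filesystem df /mnt/cache

# Removing a device migrates its data off before detaching it from the pool
btrfs device remove /dev/sdX1 /mnt/cache
```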

     

    Still have some minor itches to upgrade....toning down the question a little, are there any direct CPU upgrades that would be compatible with my motherboard that I might see a decent speed improvement with?

     

    What would be a good value GPU recommendation to offload HW Transcodes?

  7. 5 hours ago, PanteraGSTK said:

    I have a similar setup running on two different servers. One is the main server running 2x 2670 from the same generation as your 2660s. I haven't noticed any such slowdowns and our usage is very similar with the exception that my cache drive is a single m.2 1TB drive.

     

    I do have 2x GPU that plex can use for decoding/encoding so that takes a lot off the CPU.

     

    That server also has 2x parity and 22 array drives.

    Thanks for the comments.  Since posting, I think I've found that my Portainer container is part of the source of the slowdown (no clue why, but speeds are better when it's stopped).  I'm guessing it's doing extra polling and creating extra cycles- I just haven't been able to dig into the why.

     

    I have thought about the GPU path....I feel like I started noticing it when helping a friend with his rig- he had a newer AMD CPU and GPU, and some of the basic tasks (pulling down dockers and extracting) just flew in comparison...not sure if it's a higher single-core clock speed or something else.  

     

    Similarly, I found that when trying to run dockers like Nextcloud it just 'chugged'- same with WordPress...

  8. 11 hours ago, ChatNoir said:

    Hey, unless I'm blind, you mention your cache pool but not your array drives or whether you are running a GPU.

     

    This would have an impact on any recommendation.

    3 or 20 HDDs would not require the same hardware, especially if you use a GPU for media transcoding.

     

    This could orient you towards consumer-grade or workstation/server-grade hardware. :) 

    112TB of usable space across 14 drives including Parity.  

     

    I'd say I'm already well into server-grade hardware at this point (2x Xeon, Supermicro 3U chassis with SAS backplane, ECC RAM, server-class mobo)- it just happens to be from a different generation.

  9. Hey all-

     

    So, I'm not new to Unraid, but my current CPU/mobo is getting a bit old.  I haven't kept up as much on current tech and could use some advice.

    Current setup:

    Supermicro 3U Chassis

    2x E5-2660 OG (v0/v1)

    Asrock EP2C602-4L/D16

    64GB DDR3 ECC

    3x HP EX920 1TB NVMe cache running in BTRFS RAID 5 (I know, I know- it's being converted back to RAID 1 as we speak)

     

    Current Workloads:

    Plethora of dockers (10-20 running at a time), standard usenet media toolchains 

    Plex streaming up to 4K HDR for local content, and typically 5-10 remote streamers max

    Ubiquiti Controller

     

    I've got intentions of going more the self-hosted route: running WordPress for some sites, Nextcloud, photo hosting, etc.

     

    My observation has been some recent slowdowns on the docker side.  I'm working on resolving the cache end of things, but I keep coming back to my CPU just being antiquated- even though I've got plenty of threads at my disposal, they aren't exactly fast.

     

     

    So all that being said, what can modern tech do for me?  I'd like to upgrade to something that covers my needs but gives a bit of extra headroom (either more dockers/VMs and/or more demanding workloads- Nextcloud, more remote Plex transcodes, etc.).

     

    Additionally, is there anything I would need to be concerned with not being 'plug and play' with the Supermicro Chassis/backplane/power supply etc?

     

    From a budget standpoint, I'm not opposed to used via eBay, but in general looking for good value, just don't know what to look for these days.  Probably looking in the $400-$800 range for CPU/Memory/Motherboard (assuming rest will come over).  Of course anything coming up with holiday sales is always a plus.

     

  10. On 4/3/2020 at 12:47 AM, Edvard_Grieg said:

    I've been running into an issue and just banging my head against a wall for the past few days.  I've been trying to get Nextcloud installed- I've followed the lsio blog instructions and SpaceInvader One's video to a T, but invariably I'm never able to complete the Nextcloud install setup.  Each time through I at a minimum drop the nextcloud database, but I have on multiple occasions nuked the mariadb and nextcloud containers and their respective appdata folders (and the lingering Nextcloud files from the /data directory).

     

    I am not even trying to get a reverse proxy working yet, even local access would be a good first step.

     

    So at this point: I'm able to install mariadb just fine, update the bind address, and create the nextcloud user and database in the db without any issue. 

     

    I then create the lsio Nextcloud container; however, when I run the actual Nextcloud setup (populating credentials etc.), I ultimately get a 504 Gateway Timeout from Nginx.  It appears to be a legit timeout, as there are tables being written into the database, so it doesn't appear to be a full communication/access issue, but there is definitely something going wrong.

     

    Docker configs are essentially the defaults with the lsio changes (setting password for mariadb, changing port to 444 for the nextcloud docker, and then pointing to appropriate shares)

     

    From a log perspective, /data/nextcloud.log only gives some information about not being able to process a file during the install.

     

    The appdata/nextcloud/logs/nginx/error.log has the following:

    
    2020/04/02 21:40:30 [error] 349#349: *7 upstream timed out (110: Operation timed out) while reading response header from upstream, client: <LAPTOP_IP>, server: _, request: "POST /index.php HTTP/2.0", upstream: "fastcgi://127.0.0.1:9000", host: "<UNRAIDSERVERIP>:444"

    The appdata/nextcloud/logs/nginx/access.log doesn't have much

    
    <DOCKER IP> - - [02/Apr/2020:21:34:37 -0600] "GET /data/htaccesstest.txt HTTP/1.1" 400 255 "-" "Nextcloud Server Crawler"
    <LAPTOP IP> - - [02/Apr/2020:21:34:37 -0600] "GET /favicon.ico HTTP/2.0" 200 2464 "https://<UNRAID IP>:444/index.php" "Mozilla/5.0 (X11; CrOS x86_64 12871.57.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.81 Safari/537.36"
    <DOCKER IP> - - [02/Apr/2020:21:39:13 -0600] "GET /data/htaccesstest.txt HTTP/1.1" 400 255 "-" "Nextcloud Server Crawler"
    <LAPTOP IP> - - [02/Apr/2020:21:39:13 -0600] "GET / HTTP/2.0" 200 2475 "-" "Mozilla/5.0 (X11; CrOS x86_64 12871.57.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.81 Safari/537.36"
    <DOCKER IP>- - [02/Apr/2020:21:39:30 -0600] "GET /data/htaccesstest.txt HTTP/1.1" 400 255 "-" "Nextcloud Server Crawler"
    <LAPTOP IP>- - [02/Apr/2020:21:40:30 -0600] "POST /index.php HTTP/2.0" 504 569 "-" "Mozilla/5.0 (X11; CrOS x86_64 12871.57.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.81 Safari/537.36"
    <DOCKER IP> - - [02/Apr/2020:21:40:30 -0600] "GET /data/htaccesstest.txt HTTP/1.1" 400 255 "-" "Nextcloud Server Crawler"
    <LAPTOP IP> - - [02/Apr/2020:21:40:30 -0600] "GET /favicon.ico HTTP/2.0" 200 2474 "https://<UNRAID IP>:444/index.php" "Mozilla/5.0 (X11; CrOS x86_64 12871.57.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.81 Safari/537.36"
    <DOCKER IP> - - [02/Apr/2020:21:40:31 -0600] "GET /data/htaccesstest.txt HTTP/1.1" 400 255 "-" "Nextcloud Server Crawler"
    <LAPTOP IP> - - [02/Apr/2020:21:40:31 -0600] "GET /favicon.ico HTTP/2.0" 200 2470 "https://<UNRAID IP>:444/index.php" "Mozilla/5.0 (X11; CrOS x86_64 12871.57.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.81 Safari/537.36"

     

    The appdata/nextcloud/logs/php/error.log doesn't have much

    
    [02-Apr-2020 21:28:29] NOTICE: fpm is running, pid 337
    [02-Apr-2020 21:28:29] NOTICE: ready to handle connections
    [02-Apr-2020 21:37:52] NOTICE: Terminating ...
    [02-Apr-2020 21:37:52] NOTICE: exiting, bye-bye!
    [02-Apr-2020 21:39:12] NOTICE: fpm is running, pid 330
    [02-Apr-2020 21:39:12] NOTICE: ready to handle connections

     

    I'm hopeful someone has an idea of what might be going wrong and how to fix it.

    I ended up getting this working with the other docker image, a few things I tweaked...

    I set binlog_format = ROW and transaction_isolation = READ-COMMITTED per the Nextcloud documentation.

     

    I ended up with the same error, but from some other googling it looks like it's a purely front-end error, so I ran "mysql -uroot -p -e 'show processlist'" from inside the mariadb container and watched until there were no more nextcloud processes running.  I then restarted the nextcloud container, and it appears to be working (albeit slower than expected).
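    In case it helps anyone, those two settings go in the MariaDB config- with the lsio container that's a .cnf file under its appdata (the exact filename below is my assumption; adjust to wherever your config lives), followed by a container restart:

```ini
# e.g. appdata/mariadb/custom.cnf (path assumed)
[mysqld]
# Per the Nextcloud docs: row-based binlog and READ-COMMITTED isolation
binlog_format = ROW
transaction_isolation = READ-COMMITTED
```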

  11. I've been running into an issue and just banging my head against a wall for the past few days.  I've been trying to get Nextcloud installed- I've followed the lsio blog instructions and SpaceInvader One's video to a T, but invariably I'm never able to complete the Nextcloud install setup.  Each time through I at a minimum drop the nextcloud database, but I have on multiple occasions nuked the mariadb and nextcloud containers and their respective appdata folders (and the lingering Nextcloud files from the /data directory).

     

    I am not even trying to get a reverse proxy working yet, even local access would be a good first step.

     

    So at this point: I'm able to install mariadb just fine, update the bind address, and create the nextcloud user and database in the db without any issue. 

     

    I then create the lsio Nextcloud container; however, when I run the actual Nextcloud setup (populating credentials etc.), I ultimately get a 504 Gateway Timeout from Nginx.  It appears to be a legit timeout, as there are tables being written into the database, so it doesn't appear to be a full communication/access issue, but there is definitely something going wrong.

     

    Docker configs are essentially the defaults with the lsio changes (setting password for mariadb, changing port to 444 for the nextcloud docker, and then pointing to appropriate shares)

     

    From a log perspective, /data/nextcloud.log only gives some information about not being able to process a file during the install.

     

    The appdata/nextcloud/logs/nginx/error.log has the following:

    2020/04/02 21:40:30 [error] 349#349: *7 upstream timed out (110: Operation timed out) while reading response header from upstream, client: <LAPTOP_IP>, server: _, request: "POST /index.php HTTP/2.0", upstream: "fastcgi://127.0.0.1:9000", host: "<UNRAIDSERVERIP>:444"

    The appdata/nextcloud/logs/nginx/access.log doesn't have much

    <DOCKER IP> - - [02/Apr/2020:21:34:37 -0600] "GET /data/htaccesstest.txt HTTP/1.1" 400 255 "-" "Nextcloud Server Crawler"
    <LAPTOP IP> - - [02/Apr/2020:21:34:37 -0600] "GET /favicon.ico HTTP/2.0" 200 2464 "https://<UNRAID IP>:444/index.php" "Mozilla/5.0 (X11; CrOS x86_64 12871.57.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.81 Safari/537.36"
    <DOCKER IP> - - [02/Apr/2020:21:39:13 -0600] "GET /data/htaccesstest.txt HTTP/1.1" 400 255 "-" "Nextcloud Server Crawler"
    <LAPTOP IP> - - [02/Apr/2020:21:39:13 -0600] "GET / HTTP/2.0" 200 2475 "-" "Mozilla/5.0 (X11; CrOS x86_64 12871.57.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.81 Safari/537.36"
    <DOCKER IP>- - [02/Apr/2020:21:39:30 -0600] "GET /data/htaccesstest.txt HTTP/1.1" 400 255 "-" "Nextcloud Server Crawler"
    <LAPTOP IP>- - [02/Apr/2020:21:40:30 -0600] "POST /index.php HTTP/2.0" 504 569 "-" "Mozilla/5.0 (X11; CrOS x86_64 12871.57.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.81 Safari/537.36"
    <DOCKER IP> - - [02/Apr/2020:21:40:30 -0600] "GET /data/htaccesstest.txt HTTP/1.1" 400 255 "-" "Nextcloud Server Crawler"
    <LAPTOP IP> - - [02/Apr/2020:21:40:30 -0600] "GET /favicon.ico HTTP/2.0" 200 2474 "https://<UNRAID IP>:444/index.php" "Mozilla/5.0 (X11; CrOS x86_64 12871.57.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.81 Safari/537.36"
    <DOCKER IP> - - [02/Apr/2020:21:40:31 -0600] "GET /data/htaccesstest.txt HTTP/1.1" 400 255 "-" "Nextcloud Server Crawler"
    <LAPTOP IP> - - [02/Apr/2020:21:40:31 -0600] "GET /favicon.ico HTTP/2.0" 200 2470 "https://<UNRAID IP>:444/index.php" "Mozilla/5.0 (X11; CrOS x86_64 12871.57.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.81 Safari/537.36"

     

    The appdata/nextcloud/logs/php/error.log doesn't have much

    [02-Apr-2020 21:28:29] NOTICE: fpm is running, pid 337
    [02-Apr-2020 21:28:29] NOTICE: ready to handle connections
    [02-Apr-2020 21:37:52] NOTICE: Terminating ...
    [02-Apr-2020 21:37:52] NOTICE: exiting, bye-bye!
    [02-Apr-2020 21:39:12] NOTICE: fpm is running, pid 330
    [02-Apr-2020 21:39:12] NOTICE: ready to handle connections

     

    I'm hopeful someone has an idea of what might be going wrong and how to fix it.

  12. Hi all,

     

    I've been using Unraid for a while, and my system hums along pretty well with common uses of Plex and 9 or so docker containers.  Lately I've been wondering if my cache drive configuration is optimal; I originally set it up when btrfs was first enabled, after running Plex off a plugin forever.  Originally I used a single cache drive, but that caused issues when mover, Plex and NZBGet were all running at the same time and seemed to be fighting for IOPS.  So I configured a single cache drive used for mover and NZBGet activities, with a symbolic link to a separate unassigned BTRFS RAID 1 array of two other SSDs housing the actual docker containers and the docker app configurations.  That array is mounted via the go script at bootup.  This is all with the intent of having some redundancy for the actual configs- it's not a huge deal if some bins are lost off the primary cache. 
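    For anyone curious, the moving parts described above boil down to something like this in the go script (a sketch- the label and paths are placeholders, not my actual ones):

```shell
# /boot/config/go additions (illustrative):
# mount the unassigned BTRFS RAID 1 pair that holds docker + appdata
mkdir -p /mnt/dockerpool
mount -t btrfs /dev/disk/by-label/dockerpool /mnt/dockerpool

# point the cache's appdata location at the separate pool via symlink
ln -sfn /mnt/dockerpool/appdata /mnt/cache/appdata
```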

     

    My goal is to simplify this and just use a standard Unraid configuration without the separate mounts...Is there a better way to do this now?  How would I do it without any IOPS contention?  BTRFS RAID 5?  

     

    Thanks!

  13. I don't see any performance issues, but that could be a difference between our situations - my final files are between 700MB and 3GB so I'm probably moving less data.  Still, a modern hard drive is capable of enough throughput to handle reads and writes so it might be worth trying.

     

    I think I may have found the issue- I thought it was the mover, but it appears to be fully isolated to CouchPotato....looks like it might be this issue...I modified the 'From' directory and I'll keep an eye on it.  https://couchpota.to/forum/viewtopic.php?t=4681

  14. I don't use CouchPotato so I'm not sure if my solution for Sickbeard/Sickrage/Sonarr applies.  I just have the final file written directly to an un-cached user share.  Caching/Mover really doesn't do much for me with unattended operations like this.

     

    Thank you, do you see any speed/performance issues if you're doing large volume?  Any impact to streaming when writing to the user share instead of cache?

  15. EDIT:  Just looked up the bug, I'm on version 6.1.6, is it still an issue?

     

    Thank you- what is the user share copy bug?  can you link me to details on it?

    https://lime-technology.com/forum/index.php?topic=34480.0

    Still an issue- it's not really fixable without totally rewriting FUSE user shares, which is a Linux thing, not Unraid.

     

    Hmmm...so would there be a way to verify this is in fact the issue?  The piece that seems odd to me is that it is not consistent at all...even during the same mover operation to the same parent share some directories move properly and some don't.  All directories and files were moved/set via CP so permissions are the same...

  16. EDIT:  Just looked up the bug, I'm on version 6.1.6, is it still an issue?

     

    Thank you- what is the user share copy bug?  can you link me to details on it?

     

    This got a bit more bizarre...I had 5 items (each at least 5gb), 3 moved via the Mover just fine and 2 went to '0' as described in my OP...so even consistency is lacking...

  17. Hi-

     

    Not sure if this is the right place, but since it seems to be an intertwined issue, hopefully someone can help.  I've been using Unraid for quite a while and have started running into an issue that appears to be isolated to CouchPotato, the cache drive and the mover script.

     

    At times I may have many items in the CouchPotato queue (used with NZBGet).  NZBGet downloads to the cache drive.  CouchPotato is configured to watch that folder on the Cache Drive and then the Renamer is configured to point to the user share.

     

    I initially had issues where, due to the size of many of these files, they would not fully move from the download directory to the user share (using the cache drive), and when the mover invoked it would grab partial files.

     

    I changed the CouchPotato configuration to "link" instead of copy or move, and often this works....and other times it doesn't- instead of partials, I end up with nothing being moved over at all.  For instance....

     

    Item downloaded 15GB - Placed automatically in /downloads

    CouchPotato Renamer Runs - transferred to /user/share (cache) shows 15GB (via du -sh *)

    Mover Runs - /user/share (cache) is empty, /downloads is effectively empty (just remnants of 'link'), and /user/share shows the Folder, but "du -sh *" shows 0 as the size and the folder is empty
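    One diagnostic that might narrow this down: hard links can't span filesystems, so comparing device and inode numbers on the "linked" copies shows whether CouchPotato actually hard-linked the file or fell back to something else (the paths below are placeholders for your download and share paths):

```shell
# Same dev + inode => a true hard link on one filesystem; differing dev IDs
# mean the 'link' crossed filesystems and must have been a copy or symlink.
stat -c 'dev=%d inode=%i size=%s %n' /mnt/cache/downloads/Item/file.mkv
stat -c 'dev=%d inode=%i size=%s %n' /mnt/cache/Movies/Item/file.mkv
```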

     

    Ultimately it seems like the files are just disappearing into the abyss when the mover runs...anyone else running into this issue?  I have looked at drive capacity and have reconfigured the user shares to only point to very free drives, and that has not helped either.

     

    I've also tested just a manual move via the command line and do not run into issues at all.

     

    Any ideas??

  18. Hi,

     

    I'm a fairly long-time user of Unraid, going from a Celeron D system with a couple gigs of RAM to now, and I'm looking to shift gears again.

     

    A couple of years ago I bought a used Dell T7400; it's outfitted with:

    2x Xeon X5482

    32GB RAM

     

    Array

    5x Toshiba 4TB NAS drives

    1x WD 4TB Red drive

    1x Seagate 3TB NAS drive

     

    Parity

    1x Seagate 4TB NAS

     

    Cache

    Samsung 850 PRO 256GB- there is a symbolic link on the cache drive to the separate BTRFS array

     

    Other

    2x Samsung 840 EVO 120GB in BTRFS RAID 1 Array - these have all docker installs and configs on them

     

    My current uses include Sonarr, NZBGet, CouchPotato and Plex.  The docker containers have specific core assignments and memory allocations.

    Future uses include the above plus VMs (not sure on hypervisor yet)

     

    One of the 'tweaks' I made is with the 2x EVO SSDs...I found that when pulling down large amounts of NZBs and trying to unpack them, it could tank the SSDs, resulting in choppy streaming or deadlocks with downloading.  To remedy this I effectively created a second cache array with the 2x EVOs, mounted with the SNAP plugin.  Since Unraid forces docker to be installed on the cache drive with btrfs, I created a symbolic link to the EVO array, which appears to 'work'.  The major caveat is that when the system is fully restarted there is a bit of manual tweaking to get the right shares visible, and almost invariably I lose my docker configurations (and have to rebuild/link the main download)...If there is a better way to approach this piece then please tell me!!

     

     

    For my future build my main objectives are:

    * Lower Noise

    * Lower Power Usage

    * Fewer Drives

     

    I'm looking at the Seagate 8TB Archive drives to replace my current array drives...looking at probably 4 in the main array + 1 for parity.

     

    I'm additionally looking at moving to new CPU/Motherboard etc so I can downsize to a smaller case (T7400 is full tower and stupidly heavy, hot, and noisy). 

     

    I'm currently looking at:

    Budget varies, I'll probably do this in steps of a few hundred at a time

    2xE5-2620

    64GB RAM

    Supermicro MB

    1xSeagate 8TB Archive for Parity

    4xSeagate 8TB Archive for main array

    Cache Drives??? Reuse the 850 Pro and 840 EVOs??

    PSU??

    One of the 5 in 3 adapters for the drives

    A mid-tower case to hold it all

     

    Thank you for reading through this- let me know your thoughts on:

    How much of an improvement will I see in the areas of Performance, Power Consumption and Noise/Heat?

    Is there something else I can/should do for the secondary cache setup?

    What else should I consider or be thinking about?

    How will stability compare to my current setup (uptime of 82 days, last downtime was due to upgrade to 6.0.1)

     

  19. I was running into the same issues described above with not being able to generate even a text email.  I ended up doing the following to make it work:

     

    I entered into the container

    docker exec -it plexReport bash

    I navigated to the plexReport main directory

    cd /opt/plexReport

    I re-executed the setup file

    ./initial_setup.sh

    I noticed the questions were different running it this way, after it ran I was then able to navigate into

    /opt/plexReport/bin

    and run

    plexreport -t

    I then made any additional tweaks to the config and cron files in

    /opt/plexReport/config/config
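    Condensed, the sequence above is:

```shell
# Enter the plexReport container
docker exec -it plexReport bash

# ...then, inside the container: re-run setup and send a test report
cd /opt/plexReport
./initial_setup.sh
cd /opt/plexReport/bin
plexreport -t
# config and cron files live in /opt/plexReport/config/config
```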

     

    Hope this helps!

     

  20. So, if the drive is being emulated, should I attempt just a straight swap with the new drive and let it get rebuilt?  Or am I better off copying the emulated data via the command line to the new drive? 

     

    Would the following work-

    Add the new drive to the array configuration and format it as XFS

    Copy the data via the command line from the emulated drive to the new drive

    Remove the failing drive from the array

    Reset the configuration without the failing drive?