Everything posted by danioj

  1. Hello Community, Happy New Year 2019!! I have started 2019 with a motherboard I need to replace. It is a Supermicro X10SL7-F. It's out of warranty, and I've tried to get a replacement, but it's looking more expensive here in AUS to buy that board now than it is to buy something new. So, I'm thinking upgrade. Happy to spend some money if the upgrade is worth it.

The board was excellent, as it had just the right number of SATA ports as well as IPMI. I also run 3 VMs (1 with a low-powered Nvidia graphics card) and a handful of dockers, which only test the CPU every now and then. Last year I considered 2 x gaming / 4K VMs (requiring 2 x16 graphics cards), but the board only had an x8 and an x4 PCIe slot, which was the only limitation (speed and number of slots) I ever found, so I just left things as they were. 2 x GbE - plus 1 for IPMI - has been fine (don't, and can't, see a need for 10GbE).

On the board there is an Intel® Xeon® Processor E3-1241 v3 (8M Cache, 3.50 GHz) and 2 x Crucial 16GB Kits (8GBx2) of DDR3/DDR3L-1600MT/s (PC3-12800) DR x8 ECC UDIMM server memory, and it is powered by a Corsair HX850i 80 Plus Platinum 850W power supply. I'd like to reuse parts if possible. Has anyone upgraded from the same board recently, and/or would anyone like to chime in with suggestions or thoughts!?! Thanks in advance.
  2. WARNING - PLEASE CONSIDER THIS A WORK IN PROGRESS. I HAVE TESTED IT ON MY SERVER AND THERE WAS NO ISSUE, BUT THAT DOESN'T MEAN IN ANY WAY IT IS FREE FROM BUGS/ISSUES, SO PLEASE USE AT YOUR OWN RISK UNTIL IT HAS BEEN TESTED FURTHER - WARNING

Well, this is what happens when I am in hospital with nothing to do but wait for tests. For those who don't like what I have done, I am sorry!

Anyway, I was discussing the need for backups of Virtual Machine vdisks. I certainly need a method that is not manual, and I thought I would write a "proof of concept" script to do this for us, to see if it might be something that would be needed by others too. Like I said above, this is a proof of concept initially, and it is missing most of the refinements that go into good code and a released product. For that reason I would suggest that if you don't know what you're doing, don't use this.

If there are people who want to test this in their test environment and/or give feedback, please do the following:
- download the script from GitHub (link at the bottom of the post).
- copy the script to your flash drive.
- read the notes in the file.
- fill in the mandatory inputs (e.g. virtual machine list and backup location).
- change the optional switches if you wish to.
- from the CLI, execute it like below (for now):
sh backup_unraid_vms.sh
- write a post and give feedback.

# this script has been created to automate the backup of unRAID VMs' vdisk(s) to a specified location.
# this script does not yet run using variables passed from the CLI, but there is intention to do so if there is interest.
# for now, the variables below are what need to be amended to have the script run.
#
# Change Log
# v0.1 20160330.
# initial release.
#
# notes: initial release is more a concept, to see if there is a need for it. it lacks basic error handling and recovery,
# which could be developed later.
#
# v0.2 20160330.
# bug fixes.
#
# changed how the bash script recognises the failed-shutdown flag, as after testing even a failed shutdown would start a backup.
# changed default minutes to wait for shutdown to 6.
#
# v0.3 20160415.
# bug fixes and enhancements.
#
# applied stricter naming conventions to variables.
# cleaned up the code, added comments, removed unnecessary 'do' loops.
# added input validation and verification.
# added additional status messages. applied a bit more consistency.
# added ability to deal with VMs with names that have spaces in them.
# VMs now separated by "new line", not space.
# added rsync copy over the standard *nix cp command.
# added backup of VM XML configuration.
# added option to add timestamps to backup files.
# added script name check for version control.
# added option to enable / disable the script.
# added option to start VM after failure.
# added option to "dry-run" the copy operation - note all other functionality remains enabled.
# added ability to ignore VMs which are not installed on the system.
# set defaults for all options to 0.
# added guidance for options and inputs.
# added constraint to only back up .img vdisks, so as to skip installation media etc.
# changed method of obtaining the vdisks list.
# fixed issue which had the script fail if no vdisks were present.
#
# v0.4 20160416.
# administration.
#
# script name changed to facilitate the code being added to GitHub.
#
# bug tracker (ordered by severity)
# <reference> <description> <link to post> <severity> <date added> <by whom> <accepted/rejected> <comment>
#
# to do list
# core
# error capturing and handling -- started (v0.3).
# apply stricter naming conventions -- done (v0.3).
# code clean up -- ongoing (v0.3).
# input validation and verification -- done (v0.3).
# logging.
# clean up status messages -- done (v0.3).
# deal with VMs named with spaces -- done (v0.3).
# possible improvements
# use rsync to copy the vdisks instead of cp -- done (v0.3).
# possible future features
# backup VM XML file -- done (v0.3).
# have the script run from variables passed in from the CLI.
# add timestamps to files -- done (v0.3).
# add iterations of backups and number of backups to maintain at any given time.
# plugin -- started (v0.3).
# plugin to-dos
# basic structure and requirements -- done (v0.3).
# git-hub account.
# UI validation via menus.
# .... much much more ....

The script is now available via GitHub: https://github.com/danioj/unraid-autovmbackup.git
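For anyone curious what the core of the script does before downloading it, here is a minimal sketch of the concept in bash. The variable names and paths here are illustrative only (they are NOT the actual variables used in backup_unraid_vms.sh), and it drops all of the script's validation and options:

    #!/bin/bash
    # Sketch only: shut each VM down cleanly, copy its .img vdisks and XML, restart it.
    BACKUP_DIR="/mnt/user/backups/vms"   # illustrative backup location
    VMS="Windows 10
    Ubuntu"                              # one VM name per line; spaces are allowed

    while read -r VM; do
      [ -z "$VM" ] && continue
      virsh shutdown "$VM"               # ask the guest OS to shut down
      for _ in $(seq 1 36); do           # wait up to ~6 minutes for "shut off"
        virsh domstate "$VM" | grep -q "shut off" && break
        sleep 10
      done
      virsh dumpxml "$VM" > "$BACKUP_DIR/$VM.xml"   # back up the VM's XML config
      # copy only .img vdisks, so installation media etc. are skipped
      virsh domblklist "$VM" | awk '/\.img/ {print $NF}' | while read -r DISK; do
        rsync -a "$DISK" "$BACKUP_DIR/"
      done
      virsh start "$VM"                  # start the guest again
    done <<< "$VMS"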
  3. Nah. I did this and got really sick. My priorities have changed and, tbh, I have abandoned this. It was a good idea when I had time, but even I back up manually now.
  4. For anyone having an issue setting up this container, here are some tips from me:
- don't worry that there appear to be no port or config mappings. Everything is fine; you don't need to add these.
- if you set your own IP (which you will probs have to do, given it requires port 80), then ALL other containers it interacts with need their own IPs too.
- tvhProxy may need a restart before it will be recognised by TVH.
- don't mess with ports - use the <ip>:<port> format of the tvhProxy address when setting up Plex to find the server. (A sketch of the fixed-IP setup follows below.)
Just my experience - HNY 2019. D
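For reference, this is roughly what the "give the container its own IP" setup looks like at the docker CLI level. unRAID's "Custom: br0" network option does the equivalent via the GUI, so treat this as illustration only - the subnet, IP and image name are placeholders:

    # create a macvlan network attached to the LAN bridge (unRAID manages
    # one of these for you when you pick "Custom: br0")
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
      -o parent=br0 br0
    # run tvhProxy on its own LAN IP, so it can own port 80 outright
    docker run -d --network br0 --ip 192.168.1.205 --name tvhproxy <tvhproxy-image>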
  5. [Support] Linuxserver.io - Ombi

For anyone who is installing Ombi for the first time and either cannot log in via OAuth OR Ombi doesn't recognise Plex Media Server (despite pulling the details from your Plex account), the following solutions worked for me:
- Don't try to log in using OAuth on initial setup. Skip the wizard and set yourself a very strong password. Then, once logged in, navigate to Settings>Configuration>Authentication and tick the checkbox that says "Enable Plex OAuth". Then disable any popup blocker you have for your Ombi URL. Log out, then log back in (this time using the newly appearing option to use OAuth) and you're good. For some reason this method requires a popup login flow, which the wizard doesn't seem to use (it opens a new page instead) - meaning you're taken to a new page (where you are successfully logged in and asked to close the window), but the wizard never continues.
- If you have grabbed the details of your Plex server (via the Media Server settings option) from your Plex account, clicked either "Load Libraries" OR "Test Connectivity", and received an error saying it can't connect to your server - just go to the Docker page, restart the container and try again. It should then work.
- Run Settings>Media Server>Plex>Manually Run Full Sync right at the end.
Just my experience! HNY 2019! D
  6. unRAID DVB Edition

    Hey Neil. Hope life is treating you well. I appreciate the time you have taken to update this. Thank you.
  7. unRAID DVB Edition

Thanks. I tried your 2 steps with @piotrasd's LibreELEC binaries. The system found the adapter but not the drivers. Like you, I rolled back.
  8. unRAID DVB Edition

Yeah, that's what @piotrasd did, and he posted them earlier. I quoted them in my post in the links above. I'm just not sure of the install process. I asked what the process was for installing these files (e.g. where does each of the files within the archive go, does upgraded stock need to be installed first, and can this method run alongside the plugin - e.g. can I keep the plugin installed awaiting @CHBMB's comeback, or do I have to uninstall it when using the manual method). Also, I'd like to know the process so I can compile them myself from now on without having to rely on others.
  9. unRAID DVB Edition

Would you mind documenting the steps you took to create these? I am not averse to doing some compiling and self-installation (as opposed to the plugin), but after reading the entire thread 4 times, and the wiki, I cannot get things to work. EDIT: In the meantime, I assume you just throw the output of each of the archives above onto the flash drive and reboot? Is that right? Also, what is the unraid-media file? I haven't seen that before.
  10. Logmein Hamachi Docker

Hi All, As I have mentioned in previous posts in this part of the forum, I have not yet installed v6, but I am looking at how I will set things up around it and through it. I use LogMeIn Hamachi to help me provide support to my friends and family etc. I thought about putting the plugin on v5, but couldn't get it to work and wasn't really into giving too much time to it, so I gave up. I just came across this: https://registry.hub.docker.com/u/gfjardim/hamachi/ Has anyone used it, or does anyone know if unRAID has everything this needs to run? Ta, Daniel
  11. EDIT: I didn't post this in the OpenVPN-AS container thread, as I am thinking this is an unRAID network issue. Mods, please move this if you / future replies indicate I am wrong.

Hi All, I have an interesting network issue. I establish a VPN connection to my unRAID machine via the linuxserver.io OpenVPN-AS docker. All has always worked well. Recently (as in a few days ago) I decided to change things and give each of my containers their own LAN IP on the same range as all other machines on my LAN (192.168.1.x). I went further and allocated (via the -h docker switch AND DNS in the router) each their own hostname.

Now, when I VPN in, I cannot access any docker container UI. I can access other machines on the network fine, and also the unRAID UI. I have tried the IP address as well as the local DNS name (I half expected the local DNS name not to work), but to no avail. When I revert back to using a bridge or host port, I can access the containers' UIs just fine via VPN. There is absolutely no change to local access on the LAN - where I can access each container perfectly fine using either the hostname or the local IP.

I imagine this must have something to do with a container accessing a container, but I am not savvy enough here to figure out what is going on to try and fix it. Any help would be appreciated. Ta, Daniel
  12. (SOLVED) Interesting Network Issue

Hi @bonienl, I followed your instructions to the letter, but I still hit issues. All my docker containers are working fine (as you would expect on br1), but my OpenVPN docker (which is configured on the host) will not communicate with the containers which have their own IP set on my network. It can (once again, as you would expect) communicate with the host. Do you have any suggestions?
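For anyone else hitting this: macvlan (which is what containers with their own IPs sit on) deliberately blocks traffic between the host and its macvlan containers, which is why a VPN server running on the host cannot reach them. This was not the fix adopted in this thread, but the usual general-purpose workaround is a macvlan "shim" interface on the host - a sketch, with illustrative interface names and IPs:

    # create a host-side macvlan interface on the same parent as the containers
    ip link add shim link br0 type macvlan mode bridge
    ip addr add 192.168.1.250/32 dev shim   # a spare LAN IP for the host shim
    ip link set shim up
    # route traffic to a container's IP via the shim instead of the parent
    ip route add 192.168.1.203/32 dev shim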
  13. (SOLVED) Interesting Network Issue

    Hmmm, I do have 2 ethernet interfaces on the server. They are currently bonded. I'm not sure I get much real life benefit from that bonding setup. I might remove the bond and try that solution.
  14. (SOLVED) Interesting Network Issue

Thanks for this. However, after reading through the posts I wasn't too taken with the solutions. So (for others' benefit), what I decided to do was the following (a sketch of the container deployment follows below):
- Use my existing Ubuntu VM, which is always running
- Install 18.04 LTS in a minimal config
- Install docker.io via apt-get
- Install the Portainer management UI docker container
- Give the VM a static IP address
- Deploy the linuxserver.io openvpn-as container into the Ubuntu VM's Docker instance
- Set up openvpn-as as normal
- Port forward 1194 to the Ubuntu VM
- Log in via phone and test.
Now all docker containers with their own IP address can be accessed when I VPN in. There are plenty of other solutions to this (e.g. deploy openvpn-as directly into the VM, use router VPN functionality), but for various reasons (ongoing admin, the power of the router hardware) I didn't want to do that. Happy now. EDIT: some people might want to know why I want each of my dockers to have their own LAN IP. It is so I can use my router to route certain dockers' internet connections (via their IP) through an external VPN service.
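The deployment step, roughly as the linuxserver.io docs of the time described it (a sketch only - check the container's current README, as the config path, PUID/PGID and TZ values here are just examples):

    docker run -d \
      --name=openvpn-as \
      --cap-add=NET_ADMIN \
      -e PUID=1000 -e PGID=1000 \
      -e TZ=Australia/Melbourne \
      -p 943:943 -p 9443:9443 -p 1194:1194/udp \
      -v /home/me/openvpn-as/config:/config \
      --restart unless-stopped \
      linuxserver/openvpn-as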
  15. (SOLVED) Interesting Network Issue

Thanks @jonathanm. My search kung-fu must be terrible; in a dozen searches I didn't see any discussion. Can you give me a link to the most authoritative thread so I can read up?
  16. (SOLVED) Strange Docker Port Mapping

    This makes perfect sense, thank you. I understand now. Probably unrealistically, I had expected unused settings to disappear when I selected an option which made them irrelevant and there was no guidance in the help text. Issue resolved. Thanks again.
  17. Hi All, During routine maintenance I have noticed (what I consider to be) something weird about container mappings on my main unRAID setup. Note: posting here, as I feel that this has nothing to do with any particular container or the docker engine itself.

I decided to map IPs to each of my containers. Played around with the router and added hostnames too. Easily done. e.g. Custom br0 => 192.168.1.203 => nginx.danioj.lan

As I had set the container to run (when it was originally set up in bridge / host mode) on port 81 (due to the conflict with the unRAID GUI on port 80), I had anticipated having to go to: http://nginx.danioj.lan:81

Due to my ever-growing laziness, I accidentally left the port assignment off the URL and (just as I hit enter and expected to find a URL-not-found error or similar) I was shocked to see that it resolved. What the? It resolved on port 80? I didn't think this was possible, given the unRAID GUI runs on port 80. I was also "sure" that even though I have allocated the container its own LAN IP address, it still couldn't run on port 80 - there is only one port 80 on the host, after all. EDIT: This is despite me setting the Host Port in the container settings to 81. The Docker summary page still shows it mapping to port 80:
192.168.1.203:443/TCP => 192.168.1.203:443
192.168.1.203:80/TCP => 192.168.1.203:80
Checked and double-checked the container settings page. Host port is definitely 81.

However, evidence is evidence, so I thought, ooooo - I'll change the application port within other containers I have (e.g. emby) to port 80 too, meaning I can access those applications using just a hostname and no port. It did not work. Despite the application allowing the port to be changed (which I did, and then restarted), it wouldn't bind to port 80. When it came back up, the port was 8096 (the default). What was also weird, though, was that I glanced at the port mappings on the Docker page for the emby entry and (despite there being only one port mapping, for 8096, in the settings of the container) they actually showed 4 mappings:
192.168.1.200:1900/UDP => 192.168.1.200:1900
192.168.1.200:7359/UDP => 192.168.1.200:7359
192.168.1.200:8096/TCP => 192.168.1.200:8096
192.168.1.200:8920/TCP => 192.168.1.200:8920
Again... what the??? Something screwy is going on here.

So, in summary, I have the following: 2 containers' settings indicating 1 port mapping each (for the default port of the application), but the Docker summary page shows that each container has multiple mappings. I can access nginx on port 80 when a separate IP is allocated to that container, but not when I try to do the same with another container. I am scratching my head here...
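My understanding of what is going on here (offered as a sketch of an explanation, not gospel): when a container is given its own IP on a custom br0 (macvlan) network, it gets its own network stack, so Docker's host-port publishing (-p 81:80) simply no longer applies - what the Docker page lists are the ports the image exposes, reachable directly at the container's own IP. That's why nginx answered on its native port 80 despite the "81" setting. Emby likely wouldn't take port 80 because the app inside the container runs as an unprivileged user, and non-root processes can't bind ports below 1024. The difference at the docker CLI level (illustrative values):

    # bridge mode: host port 81 forwards to the container's port 80
    docker run -d -p 81:80 --name nginx-bridged nginx
    # custom (macvlan) network: -p is ignored; the container answers
    # directly on its own LAN IP, at its internal port 80
    docker run -d --network br0 --ip 192.168.1.203 --name nginx-lan nginx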
  18. I started a short discussion in the 6.2.0-beta20 thread after asking: There were a few very quick responses, so I quickly decided to move it over here out of respect for the BETA release thread. In essence, I would like to ask about the possibility of including this configuration in a 6.2 BETA release of unRAID for evaluation / testing. I call it a configuration and not a feature, as I "think" we are just talking about the starting and stopping of an existing feature. I am sure this will generate some traffic, as I believe it is something many users want. In reply to my question, 2 suggestions as to how this could be implemented were: I thought both comments / suggestions were reasonable. I guess what I am looking for here is whether someone from LT has the time to respond as to the viability of what I am asking AND to generate further discussion from the wider community, of course, as we all know:
  19. This is not really a support request, more a support note for any of those people who run into the same issue.

I have 3 LAN ports on the motherboards of each of my Main and Backup Servers (so I don't have to repeat myself and flood the forum with repeated text - please see my signature for full details of the setup). I run them both on the same Gigabit switch. I plugged each of the ports into the switch last weekend in preparation for enabling bonding on both servers. Things got in the way and I never did enable it. I didn't think having all three of the LAN ports (on each server) plugged into the switch would be an issue. I was WRONG!

I started to see the logs of my router and both servers fill - AND I MEAN FILL - with entries similar to the following:
br0: received packet on eth0 with own address as source address
MORE importantly, it was crashing my wired network. Just down. Nothing would come back up or be accessible until I unplugged one / both of the unRAID servers (which, incidentally, was how I isolated the issue to begin with). I tried to debug it and was thrown all over the place by what I found on the web. Some said it had to do with similar ranges of MAC addresses of VMs, cheap hardware etc., BUT it turns out it was something much simpler, AND to top it off unRAID's HELP solved it for me... I SHOULDN'T have plugged the LAN ports into the switch without having first enabled bonding. Here is what unRAID's help says:

To be fair, it was RANDOM that I stumbled on it. I decided that rather than debug into the later hours of the night - I wanted a wine - I'd just enable bonding as I had been planning, as a try at a quick solution, and perhaps the issue would go away. Only through reading the HELP to get bonding going did I stumble on that nugget. Hope it helps someone in the future.
  20. DanioJ's Home Setup (**UPDATED 14/01/2016**)

Project Status - Complete
Main Server - Initial 23TB of Protected Space. Expandable (with currently available non-Archive disks of 8TB) to 104TB - Complete
Backup Server - Initial 24TB of Protected Space. Expandable (with currently available Archive disks of 8TB) to 56TB - Complete
Test Server - 400GB of Protected Space - Complete

Old Setup
Unraid 5.0.4. System: ASUSTeK COMPUTER INC. P8B75-M LX. CPU: Intel® Celeron® CPU G550 @ 2.60GHz. Cache: 32 kB, 256 kB, 2048 kB. Memory: G.Skill 1666 Ripjaws 4GB. Case: Antec P183 V3. Power Supply: Antec Neo Eco 620. Cache: WDC_WD20EARS 2TB. Parity: WDC_WD30EFRX 3TB. Array: 4 x WDC_WD30EFRX 3TB. Total Protected Space 12TB (11TB used).

Project Notes
I decided to go with the Backup Server first. I just NEED to back up my data NOW. My intention is to have them both run the latest BETA of Unraid now. My reasoning here is that I don't want to have to migrate my disks to XFS later; I want to do it now. This prevents me from utilising some components in the current server, but I can live with that (plus I have a use for them anyway). I built the Backup Server first and backed up all my data. Then I built my Main Server, converted all the current disks to XFS and copied all the data back from the Backup Server to the Main Server. All copies were done with Verify Copy enabled using Teracopy.

Main Server
Initial 23TB of Protected Space. Expandable (with currently available Archive disks of 8TB) to 104TB. This server is the primary source of my data. It is ALWAYS on and serves all my media, runs my home server applications and 2 VMs, and stores all my digital files and content.
Build Type: ATX
Notes: No specific requirements for this build other than looking good in the Study.
OS: Unraid (Latest - Pro)

Build Hardware
CPU: Intel® Xeon® Processor E3-1241 v3 (8M Cache, 3.50 GHz)
Manufacturer: http://ark.intel.com/products/80909/Intel-Xeon-Processor-E3-1241-v3-8M-Cache-3_50-GHz
Vendor: http://www.msy.com.au/1150-socket/13945-intel-bx80646e31241v3-quad-core-e3-1241v3-35ghz-8mb-lga1150-xeon-cpu.html
Status: Bought. AUD $369.
Notes: I have decided to go with a Xeon over another, cheaper CPU. I want to run VMs from the box now, and I think this is the best-value Xeon for what I want. UPDATE: I was set on the E3-1231 v3, BUT it was out of stock when I got there, and the 1241 was just AUD $15 more, so I went with that one.

Motherboard: Supermicro X10SL7-F
Manufacturer: http://www.supermicro.com/products/motherboard/Xeon/C220/X10SL7-F.cfm
Vendor: http://www.ebay.com.au/itm/Supermicro-X10SL7-F-mATX-Server-Board-/161674176680?
Status: Bought. AUD $376.38
Notes: I was going to go with the ASRock C226 Workstation, but then I found this bad boy. It has everything I want and nothing I don't. Plus it has so many SATA ports. No expansion cards for me. I have noticed that there is currently an issue with compatible memory for this board, so I am watching this space, so to speak. UPDATE: Seems the memory issue is a non-issue. Got this from Kogan on eBay. Best price I have seen in months in Australia.

Memory: 2 x Crucial 16GB Kit (8GBx2) DDR3/DDR3L-1600MT/s (PC3-12800) DR x8 ECC UDIMM Server Memory CT2KIT102472BD160B/CT2CP102472BD160B
Manufacturer: http://www.crucial.com/usa/en/ct2kit102472bd160b
Vendor: http://www.amazon.com/dp/B008EMA5VU/ref=pe_385040_127745480_TE_item
Status: Bought. AUD $413.91.
Notes: Given the discussion over compatible memory for the X10SL7-F, I am waiting to select the right 4 x 8GB sticks.
UPDATE: I have chosen Crucial for the memory. Extensive reviews on the FreeNAS website and recommendations all over the web NOT to use Kingston anymore drove my choice. Crucial is Micron's consumer brand, and Crucial memory is very easy to find (and for you guys in the US it can be bought directly from Crucial on their website, at reasonable prices and low or no shipping fees). In particular, Crucial has the following DIMM models: CT102472BD160B (single DIMM) and CT2KIT102472BD160B (two DIMMs), which turn out to be just a rebrand of Micron's MT18KSF1G72AZ-1G6E1. Bought from Amazon because even with the exchange rates and delivery the memory is STILL over AUD $100 cheaper than what I can get in AUS, so all good.

PSU: Corsair HX850i 80 Plus Platinum 850W Power Supply
Manufacturer: http://www.corsair.com/en-au/hxi-series-hx850i-high-performance-atx-power-supply-850-watt-80-plus-platinum-certified-psu-au
Vendor: http://www.msy.com.au/vic/northmelbourne/pc-components/14424-corsair-cmpsu-hx850i-850watt-digital-80plus-platinum-full-modular-atx-power-supply-unit.html
Status: Bought. AUD $215.
Notes: I was going to go with the Neo Eco, but since working with the SFX modular PSU on the Backup Server I don't want to use a non-modular PSU ever again! While this one is a "gaming" PSU, it is reasonably priced. I have always liked Antec PSUs. UPDATE: Changed my mind AGAIN! After a chat with Garycase, and some reviews I read, I felt the Corsair PSUs looked excellent - better than the Antec I had chosen. Also, I went for the 850W as I am thinking a high-end graphics card or 2 might be coming at some point. PLUS - 10% sale at my vendor! Happy days!

Case: Fractal Design Define R5 Mid Tower Black
Manufacturer: http://www.fractal-design.com/home/product/cases/define-series/define-r5-black
Vendor: http://www.pccasegear.com/index.php?main_page=product_info&products_id=29880
Status: Bought. AUD $159.
Notes: Excellent case. Fractal Design have great build quality. Space for heaps of drives, expansion for more (via caddy - see below) and an excellent cooling system. Love it.

Fans: 4 x Fractal Design Dynamic GP-14 & 2 x Fractal Design Dynamic GP-12
Manufacturer: http://www.fractal-design.com/home/product/case-fans/dynamic-series/dynamic-gp-14
Vendor: Came with the case. See above.
Status: Bought. See above.
Notes: I got these thrown in on top of the case, which was excellent. I am sure I should only have had 2 of the 140mm ones. But alas, no. I don't see a need to replace them with anything else.

Cables: 16 x SATA III 90-degree down-angle cables, 26AWG, 50cm - Black (2), Red (4), Light Blue (2), Blue (8)
Manufacturer: http://www.cpl.net.au
Vendor: http://cplonline.com.au/cables/sata-sas-cable/serial-ata-cable-sata-iii-90degree-down-angle-26awg-50cm-blue-9249.html
Status: Bought.
Notes: I was going to do away with my current cables and buy slim ones, BUT I caved and went with standard premium cables from my local supplier. The case is going to end up holding 14 x 3.5" drives and 2 x 2.5" drives. I need good cable management, not least to ensure good airflow, but because I want it to be neat too. To facilitate neatness I colour-coded: Black for Parity drives, Light Blue for Cache SSDs, Red for data (Controller 1), Blue for data / App SSD (Controller 2).

Caddy: Caselabs HDD Cage Assy - Standard
Manufacturer: http://www.caselabs-store.com/hdd-cage-assy-standard/
Vendor: Direct. See above link.
Status: Bought. USD $29.95 (+ USD $25 for shipping :-( )
Notes: Being able to mount this on the 120mm fan holes on the bottom of the case, next to the existing drive cages, and have space for 4 more 3.5" drives is awesome. I saw some caddies which were OK, but none of this quality. Shame about having to get it shipped from the US, but I feel it was worth it.

Unraid Specific Hardware
Cache Disk (Pool): 2 x Crucial MX100 256GB SATA 6Gb/s 2.5" Internal SSD CT256MX100SSD1
Manufacturer: http://www.crucial.com/usa/en/ssd/ct256mx100ssd1
Vendor: http://cplonline.com.au/crucial-256gb-sata3-mx100-series-ssd-hbc-mx100d1-256.html
Status: Bought.
Notes: All the reviews I see for this drive are excellent. I have decided to go with 2 x 256GB because I will never transfer more than this in a day (non-normal exceptions, where it can be disabled, excluded) and I wanted a btrfs pool for fault protection on the Cache volume.

Application Disk: Samsung 850 EVO SATA III 2.5 inch 250GB SSD MZ-75E250
Manufacturer: http://www.samsung.com/au/consumer/memory-storage/ssd/850-evo/MZ-75E250BW
Vendor: http://cplonline.com.au/samsung-840-evo-series-250gb-ssd-mz-7te250bw.html
Status: Bought.
Notes: I wanted a separate drive for Applications, Plugins, Dockers and VMs. Went with a Samsung, as I ended up buying this drive after the above ones and the latest Crucial model is poor in comparison to those bought above.

Parity: 1 x Seagate 8TB Archive HDD, SATA III, 5900RPM, 128MB. Model: ST8000AS0002 (WTY - 3 year)
Data Disks: (New) 1 x WD 6TB Green, SATA III, IntelliPower, 64MB, NCQ, WD60EZRX (WTY - 2yr); (Existing) 5 x WD 3TB Red WD30EFRX, SATA III, IntelliPower, 64MB, NAS HDD; (Initial) 1 x Seagate 8TB Archive HDD, SATA III, 5900RPM, 128MB. Model: ST8000AS0002 (WTY - 3 year)
Manufacturer: http://www.seagate.com/au/en/products/enterprise-servers-storage/nearline-storage/archive-hdd/?sku=ST8000AS0002
Vendor: http://www.scorptec.com.au/product/Hard_Drives_&_SSDs/HDD_-_3.5%22_Drives/58205-ST8000AS0002
Status: Bought. AUD $616
Notes: Far too much community discussion has gone into the selection of these drives. Thanks to pkn for his posting of test data, and all those (especially garycase) who contributed to the thread. Please see this thread: http://lime-technology.com/forum/index.php?topic=36749.0 AND this thread for a summary: https://lime-technology.com/forum/index.php?topic=39526.0
Notes: As already stated, I am going to use the existing 3TB Red drives from my current setup and expand the space available by adding an 8TB Seagate Archive as Parity and an 8TB Seagate Archive as a Data drive, while also dropping the existing parity into the Data Disk pool. I decided to go with the Seagate 8TBs over more WD Reds or WD Greens due to the positive testing of the 8TBs documented in the 2 threads above.

Config
User Shares: I matched what I have today. I sorted my Split Levels to Level 2, and distribution over the disks as evenly as possible.
Fan Orientation: I went with air in over the drives from 2 fans, pushed over the CPU and then out of the back. Worked excellently.
Mover execution: Nightly @ 11pm.
Default Status: Always ON.
Software
As I mentioned, I run 2 VMs (Windows 10 and Ubuntu 15.10), many home server Dockers and useful Plugins:
- Dockers: Apache:EmbyServer:Guacamole:Maraschino:mariaDB:OpenVPN-AS:Quassel-Core
- VMs: Windows 10, Ubuntu 15.10
- Plugins: Local Master:Schedules:SSD Trim:System Temperature:File Integrity:Preclear Disks:Unassigned Devices:Open Files:System Statistics:System Information:Cache Directories:Active Streams:Community Applications:Nerd Tools
OS: Existing Unraid Pro License
Vendor: http://lime-technology.com/forum/index.php?topic=31474.0

Backup Server - Initial 24TB of Protected Space. Expandable (with currently available Archive disks of 8TB) to 56TB. - Complete
This runs primarily as a COLD server, is NEVER the primary source of my data and, as such, is NOT accessible to clients on the network UNLESS such a time exists where the Main Server is down and a DNS name swap is made to allow data to be accessed from the Backup Server as if it were the Main, transparent to the users. It is ALWAYS on and backs up as documented here: http://lime-technology.com/forum/index.php?topic=45331.msg432998#msg432998
Build Type: Mini-ITX
Notes: The idea is that the Backup Server will have a sufficiently small footprint that it can be taken "off site" easily and quickly if the need arises, OR the disks can be removed quickly without the need to disassemble.
OS: Unraid (Latest - Pro)

Build Hardware
CPU: Intel® Atom™ Processor C2550 (2M Cache, 2.40 GHz)
Manufacturer: http://ark.intel.com/products/77982/Intel-Atom-Processor-C2550-2M-Cache-2_40-GHz
Vendor: Integrated with motherboard. See below.
Status: Bought. See below.
Notes: The low-power integrated quad-core Atom CPU, which can be passively cooled, I felt fit my needs better than a Celeron running on an E3C224D2I.

Motherboard: ASRock C2550D4I Mini ITX Motherboard
Manufacturer: http://www.asrockrack.com/general/productdetail.asp?Model=C2550D4I#Specifications
Vendor: http://cplonline.com.au/index.php/asrock-c2550d4i-mini-itx-motherboard.html
Status: Bought. AUD $399.
Notes: The integrated, passively cooled CPU was a big draw here. But the biggest was the 12 integrated SATA ports. No future expansion cards for me. No CPU cooler issues. With limited space, I thought this was the perfect fit for this Mini-ITX build.

Memory: Kingston 4GB (1x4GB), PC-12800 (1600MHz) ECC Unbuffered DDR3L, ValueRAM, CL11, 1.35V, single stick. KVR16LE11S8/4I.
Manufacturer: http://www.kingston.com/en/memory/search/?partid=kvr16le11s8/4i
Vendor: http://www.scorptec.com.au/product/Memory/ECC_&_Registered/55195-KVR16LE11S8_4I
Status: Bought. AUD $89
Notes: Had some issues trying to find out where to buy "compatible memory". I couldn't locally get hold of any memory on the manufacturer's QVL list. Then I searched Kingston's "Supported Board List" and found that this module was 100% compatible. The only difference was that this module had a "/4I" appended to the model number. It took me some time to figure out that this just meant it was certified and tested (reflected in the price) to work with Intel products, but is essentially the same stick. But this is all I could get.

PSU: Silverstone ST45SF-G 450W SFX Form Factor Power Supply - ST45SF-G
Manufacturer: http://www.silverstonetek.com/product.php?pid=342
Vendor: http://cplonline.com.au/silverstone-st45sf-g-450w-sfx-form-factor-power-supply.html?gclid=CLuu5oTk2MQCFYqCvQodvJ0APg
Status: Bought. AUD $118.
Notes: Recommended manufacturer and type of PSU by Garycase. Decided to buy the Gold version for the modular feature, and also 450W over the 300W so I have headroom for the future.

Case: SilverStone Black DS380 Hot Swap SFF Chassis
Manufacturer: http://www.silverstonetek.com/product.php?pid=452
Vendor: http://cplonline.com.au/index.php/silverstone-black-ds380-hot-swap-sff-chassis-sst-ds380b.html
Status: Bought. AUD $179.
Notes: I was ALL set for the Lian-Li PC-Q25B. However, availability of this case is poor. I could not source it from ANYWHERE - and I mean ANYWHERE, home or abroad. My vendor recommended the DS380, and I really liked the cooling design as well as the fact that it has hot-swap drives, which adds the extra feature of quick release of drives in an emergency. I HAD to add a small amendment to the chassis in the form of a skirt to control the airflow - see the discussion on case heat issues, and this small case mod, here: https://lime-technology.com/forum/index.php?topic=31967.0

Fans: Case stock. 3 x 120mm 1200rpm 22dBA (2 side, 1 rear).
Manufacturer: http://www.silverstonetek.com/product.php?pid=348
Vendor: Came with the case. See above.
Status: Bought. See above.
Notes: I was set to buy Noctua NF-F12 fans, but once I looked at the Silverstone spec of these fans I didn't see why, and they have been fine. Note that you cannot control the fan speed from the BIOS with these fans. You would need fans like the 4-pin Noctua ones (PWM control) to do that.

Cables: 8 x SATA cables provided by the motherboard manufacturer.
Manufacturer: Unknown.
Vendor: Came with the motherboard. See above.
Status: Bought. See above.
Notes: I was going to buy Silverstone slim cables, as I didn't expect the board to come with 8. However, when they arrived they were black and a reasonable size for the Mini-ITX case, so I thought I'd go with them.

Caddys: None.
Notes: No room for expansion in this case that I can see.

Unraid Specific Hardware
Cache Disk: None.
Notes: I have decided I don't need one for its backup application.
Application Disk: Samsung 850 EVO SATA III 2.5 inch 120GB SSD MZ-75E250
Manufacturer: http://www.samsung.com/au/consumer/memory-storage/ssd/850-evo/MZ-75E250BW
Vendor: http://cplonline.com.au/samsung-840-evo-series-250gb-ssd-mz-7te250bw.html
Status: Bought.
Notes: I wanted a separate drive for Applications, Plugins, Dockers and VMs. Went with a Samsung, as I ended up buying this drive after the ones in the Main Server build and the latest Crucial model is poor in comparison to those bought above.

Parity: 1 x Seagate 8TB Archive HDD, SATA III, 5900RPM, 128MB. Model: ST8000AS0002 (WTY - 3 year)
Data Disks: (Initial 3) 3 x Seagate 8TB Archive HDD, SATA III, 5900RPM, 128MB. Model: ST8000AS0002 (WTY - 3 year)
Manufacturer: http://www.seagate.com/au/en/products/enterprise-servers-storage/nearline-storage/archive-hdd/?sku=ST8000AS0002
Vendor: http://www.scorptec.com.au/product/Hard_Drives_&_SSDs/HDD_-_3.5%22_Drives/58205-ST8000AS0002
Status: Bought. AUD $1,197.
Notes: Far too much community discussion has gone into the selection of these drives. Thanks to pkn for his posting of test data, and all those who contributed to the thread. Please see this thread: http://lime-technology.com/forum/index.php?topic=36749.0

Config
User Shares: I matched the User Shares on the Backup Server with the Main Server. I run a weekly incremental backup from share to share over my Gigabit network, and a yearly mirror.
Fan Orientation: I really like the case I have, and I am going to run it (at least initially) as the manufacturer intended: 2 x side case fans on draw, 1 x rear fan on extract.
Default Status: Always ON.

Software
I run 1 VM (Windows 10) to run my backup software (SyncBack), 2 x Dockers and useful Plugins:
- Dockers: Apache:OpenVPN-AS
- VMs: Windows 10
- Plugins: File Integrity:Preclear Disks:Unassigned Devices:Open Files:System Statistics:System Information:Active Streams:Community Applications:Nerd Tools
OS: Additional Unraid Pro License
Vendor: http://lime-technology.com/forum/index.php?topic=31474.0
Status: Bought. USD $30
  21. To much of a good thing

I think, if we were to talk features, my unRAID life would be complete if we had: - the ability to run a VM independently of Array status (to facilitate pfSense use and/or a primary desktop) - formal support for virtualising unRAID as a guest
  22. There is no reason why you cannot have all the space available to you if you buy a 1TB SSD. If you format the SSDs as btrfs you can run them in RAID0, and unRAID will treat them as 1 big drive. I run 3 x 250GB SSDs, and unRAID sees them as 1 big 750GB cache disk (a sketch of the conversion follows below). I don't run anything outside the array (utilising the Unassigned Devices plugin) anymore, as I prefer to use unRAID as it was intended (VMs, Docker etc.) from the Cache device. As for the type of SSD, I don't think you can go past the Samsung EVO range. I find them to be excellent value for money.
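If your existing pool came up as the btrfs default (RAID1, which halves the usable space), the conversion to RAID0 can be done with a balance - a sketch, assuming the standard /mnt/cache mount point, and bearing in mind that RAID0 means one failed SSD loses the whole pool:

    # convert data to RAID0 (keeping metadata mirrored), then check the result
    btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
    btrfs filesystem df /mnt/cache   # shows the new data/metadata profiles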
  23. Please grab and post your diagnostics file. From the GUI: Tools>Diagnostics>Download. From the CLI: type diagnostics, then go and get the generated file from the flash drive.
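From the console, that is just the following (the output path is from memory - the zip normally lands in the logs folder on the flash drive):

    diagnostics
    ls /boot/logs/   # the generated diagnostics zip should be here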
  24. To much of a good thing

Interesting feedback for LimeTech. I am interested to know what the driver behind the post was. Has a recent feature (or the promise of a feature) given you cause for concern? I have always felt (like you, it seems) that unRAID should maintain its position as a storage-centric product first, with all the other things it is (and can be) second. So much so that I was concerned myself when they started integrating Docker and KVM. I remember feeling at the time that their efforts were best spent concentrating on more "storage related" features, such as Dual Parity. On reflection, I feel that Limetech made a great move. If they had listened to me, it would have had them losing ground (and custom) to other competing products. Dual Parity came eventually (and they did a great job ensuring that it was implemented correctly), but not before they made great strides to keep the product relevant and current in meeting what many new customers want from a NAS appliance (e.g. application hosting). It's worth noting that we now refer to Docker as a means of ensuring that the core product remains as it is, BUT in fact Docker itself was, just a short time ago, one of those very features that was integrated into unRAID and really had nothing to do with the original product. jm2c.
  25. Me too. However... these days a disk Clear (to get that flag on the disk) doesn't take the Array down - or, more accurately, make it or the GUI unresponsive. As the disk I was using was from another unRAID server (and had already been through many rigorous tests), I knew it was fine - so I had no need to clear it outside the array (especially as, per the above, this no longer results in downtime), hence why I just added it. What I was unclear on (no pun intended - happy accident) was what gets recorded in that history log, which made my post look nuts. I have cleared (again - no pun intended - another happy accident) up the original post. All makes sense now. Sigh. Sorry folks.