Leaderboard

Popular Content

Showing content with the highest reputation on 05/11/17 in all areas

  1. Yeah, I need more time to do a proper solution and have it fully tested. Meanwhile the old version is available again; press "check for updates" to revert back. Sorry for all the inconvenience.
    2 points
  2. Once upon a time, in a land far, far away, lived a prince who was strong, moderately nice looking and quite stupid… That's me. Well, not a prince, as far as I know. And frankly not that strong either. Nice looking? Hmmm. What made me think that this would be an easy project?!

I like to build computers. I've built two or three for family use and always dreamed of building a water-cooled rig. In April 2016 my son complained he needed a faster gaming machine. Right about the same time I thought I could use a desktop of my own. Up until then a MacBook Air was enough for my needs. I fell in love with MacOS in 2010; since then I will never go back to Windows as a primary computer. I also had a mini server below the TV in the living room with 2TB RAID 0 external storage for movies, pictures, etc., backed up to the CrashPlan cloud. CrashPlan actually works very nicely; I don't know why some people complain. Amazingly I even succeeded in restoring lots of photos from it after an accidental deletion. Support was great, very helpful, so I recommend it. I also thought about replacing my daughter's desktop. Anyway, why not build ONE central server for all my needs, make it a water-cooled wall-mounted board, have fun building it and live happily ever after?! I even thought the cost would be lower than separate systems, which I actually think it is. I didn't make the exact calculations though…

So I embarked on a long journey somewhere in August 2016. First things first: a lot of research on the web. How-tos, parts, tools, what other people made, etc. I stumbled on a nice build log of a wall-mounted system, thought it was amazing, and I was hooked: http://www.overclock.net/t/1424387/gallery-build-log-ultimate-wall-mount-rig-maxxplanck-v2-completed

In parallel, I looked at the best options for virtualization: ESXi, Proxmox, unRAID and some others. Very quickly I decided that unRAID was the best fit for me. It allowed me to have virtualized desktops, run applications as Dockers, and have an expandable NAS with redundancy that kept the data in its native format and didn't demand too much hands-on Linux knowledge. I also started designing the board in Google SketchUp. BTW, amazing program.

While roaming the WWW, I found out that there are lots of really cheap Xeon CPUs on sale, and that was my first purchase: 2x Xeon E5-2670 2.6GHz for about $150 on eBay. Everything else followed and complemented those two CPUs.

The motherboard: I was contemplating some Gigabyte and Asus dual-CPU boards, but after reading lots and lots of recommendations I simply got tired of all the different options out there and went for the Supermicro X9DAi. Why? I think I made a small mistake by choosing a board without a graphics chip; otherwise it had all the main features I needed and a good reputation for stability. Moreover, I've seen some successful Hackintosh builds with this board. For whatever reason I thought it was important to have a mobo I could use for OSX, although with unRAID that's not very important, as it masks most of the hardware from the OS. The board was not cheap: $450. Also, people, remember that I live in the promised land of Israel. We have a beautiful country, but we are still looking for the milk and honey, as all our neighbors have the oil. AND shipping anything to Israel adds approximately 10-15% to the total cost of anything you buy from abroad.

RAM: I was really lucky to find very reasonably priced 64GB of Samsung ECC Registered memory for $145. Why 64GB? You have to have enough!!! It's better to pay a little extra than to find yourself juggling the configuration, shuffling memory between the VMs all the time. And believe me, I realized that when I had to add 12GB more RAM to the gaming VM so that Battlefield 1 would play right. 1-2-3 and it's done. No hassle. I had enough spare and then some. And thank you, unRAID team, for making it so easy to do.

After making some budget calculations I decided to postpone the water-cooling part of my build to a later stage. According to my calculations the total cost would be $1000 for the water-cooling parts (CPU and GPU blocks, fittings, tubes, radiators, reservoirs, shipping, etc.). And also, let's face it, it's my first wall-mounted rig; adding my first water-cooling setup on top of it would be asking for trouble.

Slowly but steadily I continued to buy stuff on eBay, AliExpress, Amazon and other places. Cannibalizing my old systems, I had:
- GTX 560 Ti
- 2x WD 2TB Green
- 1x WD 1TB Green
And that's about it; all I could really use in my new system. So I bought:
- Gigabyte RX480 Gaming for my son, coupled with an LG 34UC98 34" Class 21:9 UltraWide WQHD IPS. That one was really extra, part of a deal with my son. Don't ask…
- GTX 1070 for me (a mistake, as it's not working yet with OSX)
- Two additional WD 4TB Red HDDs
- 1x Samsung 1TB SSD EVO 750 (cheaper than the EVO 850, but in retrospect I think I should have gone for the 850; I don't really know, time will tell. So far the slower speeds are OK for my needs)
- PSU: EVGA 1600W Titanium, the best there is (I think). Do not save on the power supply. Power-supply problems are a real headache: you never really know why things fail. A hanging system, data corruption, or any other random occurrence can all come from a faulty PSU, and good luck spending time looking for WHY. In my case I have 3 GPUs, 2 CPUs, 6 hard disks and future water-cooling pumps, and I want it to be silent. My EVGA DOES NOT TURN THE FAN ON. EVER. So far…

I put a power-measuring device on the main line (220V). I can tell you that the average consumption is about 200W. Since December 2016 I have used a total of 550kWh, which is about $80 of electricity as of today, 13th April 2017.

These are the main parts; there is lots of other complementary stuff (I'm planning to create a parts list on https://pcpartpicker.com/). The most important items were the PCIe extension cables for the video cards, as they are not positioned in the mobo's PCIe slots, and I needed 3 of them. You know the idioms "buy cheap, buy twice" or "cheap at twice the price"? Well… that's me. I thought I could save some money, so I bought some modDIY shielded riser cables, 30cm and 19cm, to daisy-chain together, at $75 a pair. To make the story short: if anyone needs to buy PCIe riser cables, go for the 3M brand. Nothing else works. $100 each cable, but it just works. I thought I would have problems with the long 10m and 15m HDMI cables and the USB extension cables, but the Cabernet Ultra CL2 Active High Speed HDMI cables work great: 3440x1440 resolution with all gaming settings set to high. The mouse and keyboard work without lag over simple made-in-China USB-over-RJ45 extenders on Cat6 cable.

My current unRAID configuration has:
- OSX Sierra VM for my daily use, utilizing the GTX 560 Ti for now. Thanks to gridrunner for his really excellent guides!!! And what a nice and wonderful thing it is when I can simply add 8GB more RAM to the OSX VM so that VMware Fusion running a nested Windows VM will feel nice and warm, plus some extra CPU cores for complete nirvana.
- Windows VM for my son's gaming, using the passed-through RX480 GPU. If it's down he can switch it on via the WOL plugin using his mobile app. Fantastic.
- Windows VM for desktop use for my daughter. She is supposed to use the GTX 1070, but at this time the board is on RMA in the USA. My fault: never play with Molex power cables while the system is still "hot": plugged, powered and working. Well, now it's not. Waiting for the card to be sent back.
- Windows ultra-light install VM (1.5GB disk space Windows 10 install) for the PRTG network monitoring setup.
- Ubuntu VM to run virt-manager, which makes configuring device pass-through to VMs really easy. Also, its VNC client works when the noVNC in unRAID doesn't. I would really love to see a Docker with virt-manager working. It does NOT work for me. Yet.
- Docker with CrashPlan
- Docker with Emby Media Server
- Docker with Krusader file manager, although I am using mc via terminal SSH access
- Docker with Nextcloud. Something new; I want to make backups of all the family mobile phones.
- Docker with CUPS print server. Great stuff: every computer in the house can use the one printer located in my small office room. I haven't found how to use the HP printer's SCANNER in the same networked setup yet.
- Docker with DelugeVPN torrent downloader

Plugins:
- Dynamix SSD TRIM
- Dynamix System Information
- Dynamix System Statistics
- Fix Common Problems
- Libvirt Hotplug USB
- Nerd Tools
- Preclear Disks
- Speedtest Command Line Tool
- Tips and Tweaks
- Unassigned Devices
- unRAID DVB Edition (needed it to play with passing through the on-board FireWire device)
- unRAID Server OS (duhhh…)
- User Scripts (AutoVMsBackup, icon download and sync, USB port scripts, VM settings backup)
- Virtual Machine Wake On Lan

Some pictures. They were watching too...
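For the curious, the power figures in the post can be sanity-checked with a couple of lines of shell. The numbers (200W average, 550kWh, $80) are taken from the post; the implied electricity tariff is an inference, not a quoted rate:

```shell
# Implied price per kWh from the reported totals ($80 for 550 kWh)
awk 'BEGIN { printf "%.3f USD/kWh\n", 80 / 550 }'

# Hours (and days) of runtime that 550 kWh represents at a 200 W average draw
awk 'BEGIN { h = 550 * 1000 / 200; printf "%.0f hours (~%.0f days)\n", h, h / 24 }'
```

That works out to roughly 2750 hours of continuous runtime at 200W, which lines up reasonably with the December-to-mid-April window described in the post.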
    1 point
  3. Hi folks, thank you for all the support so far. Finally got my build running today after parts were collected and a lot of beer was consumed. Want to share the details and photo story with you. Have a nice day, Chris

SPECS:
Case: Inter-Tech 4HU-4424 (new)
Mobo: EP2C602-4L/D16 (new)
CPU: 2x Xeon E5-2640 (used)
RAM: Hynix DDR3 ECC buffered 10600R, 8x8GB (used) and 8x4GB (used)
Controllers: 3x H310 (used)
Cables: 6x SFF-8087 (both ends) (new)
PSU: Seasonic Prime Titanium 1200W (new)
HDDs (own/used): 12x 3TB (some WD Reds, Blues and Greens), 4x 2TB (WD Greens), 8x 2TB (Hitachi 7200RPM)
    1 point
  4. Fixed. I added additional text to the help and tool tip. Help is your friend again!
    1 point
  5. It's going to be device-level encryption that would apply to all assigned devices. "Go big or go home"
    1 point
  6. Something weird happened last night and I had to hard-reset my server. Upon reboot, none of the Plex client devices can find the Plex server, or they think it's a new server. Accessing <ip>:32400 acts like it's a new server as well. I have quite a few user accounts and don't want to lose the "watched" status on all of them. I can restore a backup of /appdata/PMS, but I wanted to exhaust other options first. Someone please help!
    1 point
  7. - Remove both disks and then delete them from the historical devices.
- Verify the /mnt/disks/AMCC_9650SE_12M_DISK mount point is not there. If it is, unmount it and then rm -rf /mnt/disks/AMCC_9650SE_12M_DISK. If you can't clear it up, reboot the server.
- Install one disk and change the mount point. Remember to hit <Return> to save the new mount point name.
- Mount this disk and be sure the mount point is what you set. If not, the config file is probably corrupted.
- If this works, add the second disk and change its mount point. If this doesn't work, post your /flash/config/plugins/unassigned.devices/unassigned.devices.cfg file and let me see what it contains.
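The manual cleanup in the second step can be sketched as a small script (the mount point name is taken from the reply above; this is an illustration of the steps, not an official Unassigned Devices tool):

```shell
#!/bin/sh
# Stale mount point left behind by Unassigned Devices (name from the post).
MP=/mnt/disks/AMCC_9650SE_12M_DISK

# Unmount first if the kernel still has it mounted.
if mountpoint -q "$MP"; then
    umount "$MP"
fi

# Then remove the leftover directory so the plugin can recreate it cleanly.
if [ -d "$MP" ]; then
    rm -rf "$MP"
fi
```

If the directory refuses to go away (device busy), a reboot clears it, as the reply says.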
    1 point
  8. For specifics, both Silverstone and Corsair make excellent SFX units: http://silverstonetek.com/product_power.php?tno=7&area=en http://www.corsair.com/en-us/sf-series-sf600-600-watt-80-plus-gold-certified-high-performance-sfx-psu
    1 point
  9. For an ITX build I'd absolutely use an SFX power supply. While an ATX unit will fit in some of these cases, it's VERY crowded. An SFX unit is a FAR better choice.
    1 point
  10. Caching the shares (folders) usually takes a matter of minutes, not hours. You can try to limit the folders being cached by explicitly setting an exclude or include list (not both). This will affect both CPU and memory load. The caching script runs continuously to tell Linux to keep the information in memory.
    1 point
  11. 1400$ isn't too bad... it's the drives that add up quickly. A few of us in the forums recently had the 'pleasure' of flashing the H310. I thought I bricked mine as well... I'm now in need of another one, so we'll see how it goes. I noticed you have a fan on the H310s and there's one x16 slot still open. Do you think adding a GPU such as the GTX 980 Ti (for VM gaming, but only when the boss is away, lol) would add too much heat? That is a bit concerning. I've heard of others using the Norco 4224 case (backplanes look to be horizontal), but there are some owners on the interwebs who say they overheat. Configuring unRAID has its moments; it took me nearly a year to sort through everything. You may want to run preclear on the drives first, if you haven't done so already. Not real fun, but it gives you peace of mind. \m/
    1 point
  12. Then the recommended build above (or similar) sounds perfect for you. With the G4560 you get integrated graphics that you can pass through to your Linux VM, and you can use the PCIe slot for a SATA card if you need it. The performance will be more than enough for what you need.
    1 point
  13. I was having some minor issues with Sonarr and Radarr with the web server authentication. I changed it to cookie authentication and now they're working so far without issue.

# redirect traffic to https://[domain.com]
server {
    server_name [domain.com] www.[domain.com];
    listen 80 ipv6only=off;
    return 301 https://[domain.com]$request_uri;
}

# redirect traffic to https://[domain.com]
server {
    server_name www.[domain.com];
    listen 443 ipv6only=off;
    return 301 https://[domain.com]$request_uri;
}

# main server block
server {
    server_name [domain.com];
    listen 443 ssl ipv6only=off;

    # SSL certificates and keys
    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;

    # SSL settings
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
    ssl on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    proxy_buffering off;

    # Custom error pages
    error_page 400 401 402 403 404 502 /error.php?error=$status;

    # Organizr
    location / {
        proxy_pass http://192.168.1.3:82;
        include /config/nginx/proxy.conf;
    }

    # Handbrake
    location ^~ /handbrake/ {
        if ($cookie_cookiePassword != "[cookiePassword]") { return 403; }
        proxy_pass http://192.168.1.3:8083/guacamole/;
    }
    location ^~ /handbrake/websocket-tunnel {
        if ($cookie_cookiePassword != "[cookiePassword]") { return 403; }
        proxy_pass http://192.168.1.3:8083/guacamole/websocket-tunnel;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        add_header X-Frame-Options "SAMEORIGIN";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # MakeMKV
    location ^~ /makemkv/ {
        if ($cookie_cookiePassword != "[cookiePassword]") { return 403; }
        proxy_pass http://192.168.1.3:8081/guacamole/;
    }
    location ^~ /makemkv/websocket-tunnel {
        if ($cookie_cookiePassword != "[cookiePassword]") { return 403; }
        proxy_pass http://192.168.1.3:8081/guacamole/websocket-tunnel;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        add_header X-Frame-Options "SAMEORIGIN";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # MKVToolNix
    location ^~ /mkvtoolnix/ {
        if ($cookie_cookiePassword != "[cookiePassword]") { return 403; }
        proxy_pass http://192.168.1.3:8082/guacamole/;
    }
    location ^~ /mkvtoolnix/websocket-tunnel {
        if ($cookie_cookiePassword != "[cookiePassword]") { return 403; }
        proxy_pass http://192.168.1.3:8082/guacamole/websocket-tunnel;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        add_header X-Frame-Options "SAMEORIGIN";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Plex
    location ^~ /web {
        if ($cookie_cookiePassword != "[cookiePassword]") { return 403; }
        proxy_pass http://192.168.1.3:32400/web;
        add_header X-Frame-Options "SAMEORIGIN";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # PlexPy
    location ^~ /plexpy/ {
        if ($cookie_cookiePassword != "[cookiePassword]") { return 403; }
        proxy_pass http://192.168.1.3:8181;
        add_header X-Frame-Options "SAMEORIGIN";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Radarr
    location ^~ /radarr {
        if ($cookie_cookiePassword != "[cookiePassword]") { return 403; }
        proxy_pass http://192.168.1.3:7878;
        add_header X-Frame-Options "SAMEORIGIN";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Sonarr
    location ^~ /sonarr {
        if ($cookie_cookiePassword != "[cookiePassword]") { return 403; }
        proxy_pass http://192.168.1.3:8989;
        add_header X-Frame-Options "SAMEORIGIN";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Transmission
    location ^~ /transmission/ {
        if ($cookie_cookiePassword != "[cookiePassword]") { return 403; }
        proxy_pass http://192.168.1.3:9091/transmission/web/;
        add_header X-Frame-Options "SAMEORIGIN";
        proxy_pass_header X-Transmission-Session-Id;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location ^~ /rpc {
        if ($cookie_cookiePassword != "[cookiePassword]") { return 403; }
        proxy_pass http://192.168.1.3:9091/transmission/rpc;
    }
}
    1 point
  14. A tidy little config for unRAID using current model hardware is: https://uk.pcpartpicker.com/list/cyGzxY

PCPartPicker part list / Price breakdown by merchant
CPU: Intel - Pentium G4560 3.5GHz Dual-Core Processor (£51.99 @ Ebuyer)
Motherboard: Gigabyte - GA-Z170N-WIFI Mini ITX LGA1151 Motherboard (£119.95 @ Amazon UK)
Memory: Crucial - 16GB (2 x 8GB) DDR4-2133 Memory (£83.99 @ Amazon UK)
Case: Fractal Design - Node 304 Mini ITX Tower Case (£61.97 @ Ebuyer)
Power Supply: SeaSonic - ECO 430W 80+ Bronze Certified ATX Power Supply (£44.98 @ Ebuyer)
Total: £362.88
Prices include shipping, taxes, and discounts when available. Generated by PCPartPicker 2017-05-10 16:51 BST+0100

I'm recommending that board because I've found it to work great in unRAID: it has 6 SATA ports, an M.2 slot, dual Intel network, and good fan controls. It's also very power efficient. The G4560 is a great CPU: dual core with Hyperthreading, so it's about the same speed as the i3-6100. The boxed cooler is fine.
    1 point
  15. If you want to pass a GPU through to a VM you need either integrated Intel graphics or a discrete GPU card. Do you need a Xeon, i.e. server-grade hardware?
    1 point
  16. I should also point out that MITX will definitely limit you to one (count it: one) PCIe card, so take your pick: a graphics card for VMs; more ports for HDDs if your MITX board doesn't have enough SATA ports; or multiple network ports if you don't have enough on the motherboard.
    1 point
  17. Do you need to do multiple Plex transcodes or ECC? If not, you could go with cheap basic consumer hardware like a G4560, a B250 board, 8 GB RAM, and a PCIe SATA card (if needed). More than enough power to run your Dockers and a Linux VM. Do you have the disks already? 6-8 disks will not fit in your budget.
    1 point
  18. As you've seen, small does NOT mean inexpensive. For a system you want to use fairly extensively (not just as a NAS, but to run Dockers, VM's, etc.) I'd nevertheless bite the bullet and get a quality motherboard, Xeon, and ECC memory. Might blow a bit past your budget; but it'll be a rock solid system that will last you a long time and you won't be wishing you had a higher-performance CPU along the way.
    1 point
  19. I'd like to be able to help (M-ITX user), but as I've seen from other people's comments, some of my components are rather pricey. I went this route:
- cheap MINI-ITX board with dual NIC (Broadcom, so stable enough)
- cheap PSU (was to power only one "green" drive (cache), the drive controller, and fans, nothing else), but I replaced this with a leftover Silverstone 450W Bronze PSU
- regular RAM: 16GB DDR
- small case: Cooler Master Elite 110
- expensive external controller: LSI 9206-16e (future-proofing: a single PCIe x8 card for 16 external SAS devices)
- expensive external drive case: Areca 3036 (8-bay 6Gbps SAS/SATA + expander, allowing a number of enclosures to be daisy-chained later on)
Drives are a mix of Seagate 8TB Archives, WD 4TB Reds, and a WD 4TB Green for cache.
    1 point
  20. The Lian Li PC-Q25B is another popular miniITX case, if it is available where you are. ASRock is another well known vendor of server boards.
    1 point
  21. I followed the instructions in the GitHub thread and am back up and running now. Delete the following files:

/mnt/user/appdata/sabnzbdvpn/admin/server.cert
/mnt/user/appdata/sabnzbdvpn/admin/server.key

Then restart sabnzbdvpn. These files are regenerated automatically on restart, which fixes the issue.
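The same fix as a short script. The file paths are from the reply above; the docker container name sabnzbdvpn is an assumption inferred from those paths, so adjust it to match your setup:

```shell
#!/bin/sh
# Remove the stale certificate pair; they are regenerated on the next start.
rm -f /mnt/user/appdata/sabnzbdvpn/admin/server.cert \
      /mnt/user/appdata/sabnzbdvpn/admin/server.key

# Restart the container so fresh files are generated (container name assumed).
docker restart sabnzbdvpn
```

rm -f is deliberate here: it succeeds quietly even if one of the files has already been removed.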
    1 point
  22. These are superb units. The front-mounted 120mm fan provides EXCELLENT ventilation ( the drives run much cooler than they do in units that have offset rear-mounted 60 or 80mm fans). The lights can indeed be disabled -- there's a small slide switch to do just that.
    1 point
  23. CA just acts as a front end to the plugin system, i.e. it doesn't particularly matter how you've installed the plugin; CA will know that it's there.
    1 point
  24. First off, welcome! It's actually even easier than that... (give or take). No need to stop the array to change a disk share to cache-only, but you will need to move the files yourself. Also, no need to fix the mapping to the container: regardless of which disk the files are on, you can still point it to /mnt/user/appdata, and if the share is set to cache-only, it will only be on your cache drive! So:

(1) unRAID Main > Shares > appdata: select Use cache disk: only and remove any include/exclude; it's not needed. Cache-only is cache-only. Apply, done.

(2) Move the files from disk 1 to the cache drive. This can be done in multiple ways; however, you ABSOLUTELY don't want to copy between a user share and a disk share. The easiest way is to use another computer and copy the appdata folder from disk 1 to the cache drive (if in Windows, use the basic File Explorer copy/paste). If you do not currently export the disk shares, click on each disk (1, and cache), set export to yes for either SMB or NFS, then use SMB or NFS from another PC to copy it to cache, and when finished delete the copy from disk 1.

(If you don't feel comfortable using SSH, please don't; it shouldn't be needed.) I have noticed occasional issues copying in-use or permission-locked files, so if that's the case and you know how to use PuTTY to SSH in, just type this:

cp -avr /mnt/disk1/appdata /mnt/cache/appdata

Once it's successfully transferred (please verify the contents before continuing by checking your cache share for the appdata folder):

rm -rvf /mnt/disk1/appdata
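One cautious extra step before that final rm (my suggestion, not part of the original reply): compare the two trees first, so you only ever delete a verified copy.

```shell
# diff -r exits 0 only when every file in both trees is present and identical,
# so the echo fires only if the copy is complete.
diff -r /mnt/disk1/appdata /mnt/cache/appdata && echo "copies match, safe to delete"
```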
    1 point