bash

Everything posted by bash

  1. About to attempt this on the following unRAID server: AMD EPYC 7401P 24-core, GIGABYTE MZ31-AR0-00. I don't see anyone else in the thread who has used an EPYC; just curious about any potential pitfalls.
  2. In reply to your keyboard issues: I use <bootmenu enable='no'/> to skip the first boot menu, which keeps keyboard focus on the Enoch boot loader. To get past the 25% mark on boot you need to add some boot flags at the Enoch bootloader. I used nv_disable=1 to fully boot, then installed the NVIDIA web driver.

     The web driver won't work for 10.11.2, so you need to install and modify the driver with an application called NVIDIA® WebDriver Updater.app. On the NVDAStartupWeb.kext patch tab, enter 15D9c as the fake OS build; after installing the driver, it thinks you're still on 10.11.1. I didn't touch anything else in the app. After installing the drivers, reboot with nvda_drv=1 at the Enoch boot loader. I need to enter this at every boot, which is annoying; I hope you don't have to.

     Check out this post for the correct files in the /Extra folder: https://lime-technology.com/forum/index.php?topic=44908.msg429831#msg429831

     PS: I also use a GTX 960 for the OSX VM. It drives 4 monitors (1*4K, 1*HD, 2*1600x1200) perfectly fine.

     Working perfectly for my GTX 750 Ti! Download TextWrangler and check your boot plist; make sure TextEdit didn't add any garbage to it. When I used the one from the macfiles zip, it had extra formatting garbage added. Once I used TextWrangler and only had the plaintext version from the guide, it worked and no longer required manually adding nvda_drv=1.
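     For reference, with Enoch/Chameleon the place where boot flags like nvda_drv=1 normally persist is the Kernel Flags key in /Extra/org.chameleon.Boot.plist. A minimal sketch (the Timeout value is just an example, and the rest of the guide's plist is omitted here):

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <!-- Flags applied on every boot, so nvda_drv=1 no longer has to be typed manually -->
            <key>Kernel Flags</key>
            <string>nvda_drv=1</string>
            <!-- Seconds before the default entry boots (example value) -->
            <key>Timeout</key>
            <string>2</string>
        </dict>
        </plist>

     Editing it in TextWrangler as plain text keeps TextEdit from sneaking rich-text formatting back in.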
  3. I figured the reason might be that it was super simple.
  4. I have been searching the forum for a few days and I cannot find any real answer as to whether a whole drive can be passed through. We can pass through GPUs and USB controllers, but I can't find a thread that specifically discusses passing through a single SSD as a boot drive.
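     For reference, the usual way this looks in the libvirt XML (not unRAID-specific advice, and the device path below is just a placeholder) is a <disk type='block'> entry pointing at the drive's /dev/disk/by-id link:

        <disk type='block' device='disk'>
          <!-- Raw passthrough of the whole SSD; cache='none' is typical for a physical device -->
          <driver name='qemu' type='raw' cache='none'/>
          <!-- Placeholder path; substitute the by-id link of the actual SSD -->
          <source dev='/dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL'/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
        </disk>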
  5. I just bought 5 (400GB) enterprise SSDs that I will attempt to set up in RAID-Z2 this week. Will keep you all posted.
  6. After reading a bunch of these posts and threads about the disk I/O issues with Windows and KVM, I figured I would run some benchmarks. It's almost impossible to run an apples-to-apples comparison, especially when running a btrfs cache pool. What surprised me the most was how bad the AS SSD results were on all KVM VMs; they suffered far worse than in the CrystalDiskMark benchmarks.

     Windows Native
     OS: Windows 10 Pro x64

     version: unRAID 6.1.6
     vDisk: btrfs cache pool
     OS: Windows 10 Pro x64

        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/VM/win10Test/vdisk1.img'/>
          <backingStore/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
        </disk>

     version: unRAID 6.1.6
     vDisk: Toshiba SSD - XFS - unassigned devices
     OS: Windows 10 Pro x64

        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/disks/TOSHIBA_THNSNJ256GCST_935S101JTSXY-part1/win10XFSTest/vdisk1.img'/>
          <backingStore/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
        </disk>
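     For anyone repeating this, a commonly suggested variation (not part of the benchmarks above, so treat it as an untested sketch) is to bypass the host page cache on the image-backed vdisk:

        <disk type='file' device='disk'>
          <!-- cache='none' + io='native' skips the host page cache; often recommended when benchmarking -->
          <driver name='qemu' type='raw' cache='none' io='native'/>
          <source file='/mnt/user/VM/win10Test/vdisk1.img'/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
        </disk>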
  7. This is a really exciting development. I will be testing it tomorrow; hopefully this will solve the disk performance issues with KVM + btrfs.
  8. Norco 4224 Thread

     Has anyone with a Norco 4220 or 4224 modded the case for EATX support? From what I can tell, my current EEB board and the EATX board I'm looking to replace it with are exactly the same dimensions. The only difference I can see is in the standoff hole locations between the two form factors. My plan is to tap/drill the holes required for the EATX motherboard, and I wanted to see if anyone else had done the same.
  9. After some research I think I will return the Asus board and go with a Supermicro X8DTE. It might not have all the bells and whistles of the Asus board, but I can get it brand new for a decent price. The Asus (EEB) and the Supermicro (EATX) have exactly the same dimensions and the same IO shield placement as far as location is concerned. I will just tap/drill the holes required to support the EATX form factor in the Norco 4224.
  10. I am leaning towards your advice; however, from what I have read about the Norco 4224, it will not fit an EATX board. Finding a dual 1366 board that is either EEB, CEB, or standard ATX is turning into a challenge. I have found a few, but I would have to sacrifice half of the RAM I already purchased, as they only have six memory slots. Thank you for the advice; I will start searching for an alternative board.
  11. Well, I finally had all my parts in and started the build. As with most used-hardware builds, the wildcard is always the motherboard. As you can see in my initial post, I went with the ASUS Z8PE-D12, which had pretty good reviews and is feature rich. It even came with the iKVM board! I knew it was going to be a crapshoot when I pulled the motherboard from its shipping box and noticed the middle of the board had a very, very slight bow running from north to south. I installed the RAM and 2 x Xeon X5690s and it posted right away. I did notice a slight, and I mean very slight, whine coming from the motherboard. Temps in the BIOS looked decent, so I booted into unRAID, which was a success. I decided to get lm-sensors up and running, and on the surface it looks like I may have an overheating issue on my hands.

        w83667hg-isa-0290
        Adapter: ISA adapter
        Vcore:      +0.06 V   (min = +0.00 V, max = +1.74 V)
        in1:        +0.06 V   (min = +0.66 V, max = +1.62 V)   ALARM
        AVCC:       +2.99 V   (min = +1.38 V, max = +3.07 V)
        +3.3V:      +2.96 V   (min = +0.26 V, max = +2.14 V)   ALARM
        in4:        +0.06 V   (min = +1.65 V, max = +1.75 V)   ALARM
        in5:        +1.40 V   (min = +0.53 V, max = +0.95 V)   ALARM
        3VSB:       +2.98 V   (min = +0.98 V, max = +2.00 V)   ALARM
        Vbat:       +2.91 V   (min = +3.76 V, max = +3.57 V)   ALARM
        fan1:         0 RPM   (min = 2109 RPM, div = 128)      ALARM
        fan2:         0 RPM   (min = 2636 RPM, div = 128)      ALARM
        fan3:         0 RPM   (min = 703 RPM, div = 128)       ALARM
        fan4:         0 RPM   (min = 1171 RPM, div = 128)      ALARM
        fan5:         0 RPM   (min = 811 RPM, div = 128)       ALARM
        temp1:     +125.0°C   (high = +51.0°C, hyst = +105.0°C)  ALARM   sensor = thermistor
        temp2:     +123.0°C   (high = +80.0°C, hyst = +75.0°C)   ALARM   sensor = CPU diode
        temp3:      +58.0°C   (high = +80.0°C, hyst = +75.0°C)            sensor = thermistor
        cpu0_vid:   +0.000 V
        intrusion0: OK

     OK, so that does not look right... I walked over and the northbridge was extremely hot to the touch, so I immediately placed a 40mm fan on top of it, which dropped temp1 to 56°C. temp2 is still way too hot; I think I will need to find a motherboard replacement.
  12. Just picked up the replacement 120mm & 80mm fans for the case.

     120mm fans: 3 x Noctua 120mm PWM case fan (anti-stall knobs design, SSO2 bearing)
     80mm fans: 2 x Noctua NF-R8 PWM 4-pin cooling fan
  13. I have been planning to consolidate a couple of servers into one decent-sized unRAID build for some time. The missus and I just welcomed our first child, and square footage is becoming extremely valuable. The plan is to consolidate my NAS and VM labs into a single server. This is going to be another Norco build thread.

     Goal: Consolidate the 3+ servers/computers I currently use to handle openelec, (sickbeard/sabnzbd/couchpotato), a Proxmox VM server, and NAS. I believe the build I have started to assemble below will more than adequately handle what I have planned. The plan is to move away from Proxmox and use unRAID 6's native VM/Docker features to achieve the above.

     CASE (delivered): NORCO RPC-4224 4U rackmount server case with 24 hot-swappable SATA/SAS drive bays. Norco 120mm fan wall bracket, ordered before the Norco 4224 was delivered; it already had the 120mm fan wall.
     CPU (delivered): (2) x Intel Xeon X5690 6-core 3.46GHz LGA 1366
     CPU COOLER (en route), hopefully fits, wish me luck: (2) x Noctua NH-U9B SE2 ultra-silent CPU cooler
     MOTHERBOARD (delivered): ASUS Z8PE-D12 LGA 1366/Socket B Intel motherboard
     RAM (delivered): Samsung 96GB (12 x 8GB) PC3-10600R ECC Reg DDR3
     GPU (still pondering): My goal is to get GPU passthrough working and run openelec with unRAID 6's native VMs. I have a couple of cards handy that I can test with, and if I am successful with this motherboard I will probably shell out for a somewhat decent openelec GPU. $100-$150 budget.
     EXPANDER CARD (delivered): Intel RES2SV240 24-port 6Gb/s SATA/SAS expander
     RAID CARD (delivered): IBM ServeRAID M1015. I mistakenly ordered the M5015 without realizing it could not do passthrough/JBOD.
     POWER SUPPLY (delivered): EVGA SuperNOVA 650 G2, 80 Plus Gold rated, modular power supply
     HARD DRIVES: I will fully flesh this out as I finalize and start to build the server. I have a few 240GB SSDs that I am planning to use for an SSD cache pool; I will also store VM/Docker images on the pool. As for traditional HDDs, God only knows how many I have lying around. I have at least 10+ 3TB HDDs that I can utilize for this build, plus some 4TBs I will be pulling from the servers I am consolidating.

     Notes: I am most interested to see how my choice of GPU/CPUs + coolers will affect cooling/noise in the Norco 4224. I have probably read every Norco 24-bay thread across multiple forums and have a good idea of how to proceed. I have not mentioned any 120mm fans, as I have a few lying around from multiple vendors that I will be testing.