  1. Hi. All these questions have already been answered on this forum, but let me help you a little. Not every RAID card works with unRaid: you need a card that can pass the drives through to the system individually (not in RAID mode). Some cards do this out of the box, and some can be flashed to enable it. The Dell H310 is a good example of a cheap hardware RAID card that can be flashed to provide the required functionality (IT mode); you will find instructions for that on the forum. You cannot use any RAID virtual disk (VD) as part of the unRaid array. unRaid works with Ryzen. Yes, you can use 8 SATA ports. You can pass one physical device to one VM; if you want two gaming VMs running simultaneously, you need two graphics cards and must pass them through 1:1. You are missing a case and a PSU in your build. For serious work I would go for an ECC-capable system with ECC memory. One more piece of advice: the physical HDDs that host the VM disks should be kept outside of the unRaid array.
  2. @MarxisNewbie - Please do not get me wrong, but I have the feeling your question is really: "Will unRaid be OK for me?" And that is something you have to answer on your own. If you want to use the unRaid machine as a server, you will most probably need a "client" PC or laptop to make use of it. I am not sure how deeply you have dived into unRaid, but its main advantage is RAID 5/6-like protection without the files being scattered in stripes across all the drives. This unique feature means that even if more drives fail than the parity can cover, you can still rescue at least part of the data by copying directly from the remaining working drives. This is the opposite of normal hardware RAID, where, if more drives fail than the initial configuration allows for, you are left with no data at all. This raises a concern: if you want to have only one drive, you will get no benefit from the system. You should have at least one (ideally two) parity drives. Will you use Docker? Maybe you could try, for example, Proxmox as a bare-metal hypervisor instead of unRaid? It really depends what you want to do with the system in particular. unRaid is a great solution for multi-drive systems because of the described behaviour in the event of failure. If you will not use that, what is the point of going for unRaid? (I hope you add more drives to your setup; then the whole thing will make more sense, at least to me. :))
  3. I have completely given up on using unRaid array disks as Proxmox drives. I took 4 drives out of the unRaid array, put them in RAID 10 on the H310, and mounted the resulting VD in unRaid as an unassigned device. A Proxmox guest virtual machine was then placed on this unassigned device, and nested virtualization support was added to the unRaid boot options. The result: perfect speed for the virtual machines on Proxmox, with all the goodies of the Proxmox system hosted on the same server as unRaid. Thank you all for the hints.
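For reference, the "nested virtualization support added to the boot options" step usually comes down to a single kernel flag. A minimal sketch of the relevant syslinux entry, assuming an Intel host and stock unRaid paths (on AMD hosts the flag would be kvm-amd.nested=1):

```shell
# /boot/syslinux/syslinux.cfg on the unRaid flash drive -- boot entry
# with nested KVM enabled via the kvm-intel module parameter:
label unRaid OS
  kernel /bzimage
  append kvm-intel.nested=1 initrd=/bzroot
```

After rebooting, /sys/module/kvm_intel/parameters/nested should report Y (or 1) when the flag took effect.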
  4. Hi 1812, thank you for your engagement. I have 24 drives in my unRaid. What about taking e.g. 4 disks out of the array and putting them on an additional H310 controller (not flashed)? The idea is to have 4x2TB disks in RAID 5 (hardware RAID using the H310) and then use the resulting 6TB VD as an unassigned device. Will this be recognized under unRaid? Will it work? Of course I understand that this unassigned device will have no parity protection from unRaid (but it will from the controller, which uses the equivalent of one disk for parity in the VD definition). In this setup I would of course auto-back-up the Proxmox VMs to an external array regularly. Maybe this will add more I/O throughput? Am I thinking in the right direction? An additional question: would iSCSI over gigabit also not be effective enough?
  5. I'll try this, but the same thing happens when the image is on an external NFS share coming from NAS storage. I now have to free up one of the unRaid array disks, so I will be back on this tomorrow. Thanks.
  6. Hello experts. I just installed Proxmox as a guest on unRaid. The problem is that the performance of the Proxmox VMs is terrible. In comparison, when Proxmox was installed directly on the same server, everything worked 5-20 times faster (system boot time, response time, etc.). I know this is nested virtualization, but for certain reasons (existing licensed software) I have to run the already existing Proxmox VMs under unRaid, so installing Proxmox as a guest seemed a good starting point. Can anybody please help me understand what I am doing wrong, or what should be corrected to get reasonable I/O performance? The Proxmox VM disk is on an unRaid share, and here is the configuration:

    <domain type='kvm'>
      <name>Proxmox</name>
      <uuid>cf02a52b-8d17-11fb-0a7d-d5db142d75c4</uuid>
      <description>Proxmox VE</description>
      <metadata>
        <vmtemplate xmlns="unraid" name="Linux" icon="default.png" os="linux"/>
      </metadata>
      <memory unit='KiB'>67108864</memory>
      <currentMemory unit='KiB'>33554432</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>24</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='0'/>
        <vcpupin vcpu='1' cpuset='1'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='3'/>
        <vcpupin vcpu='4' cpuset='4'/>
        <vcpupin vcpu='5' cpuset='5'/>
        <vcpupin vcpu='6' cpuset='6'/>
        <vcpupin vcpu='7' cpuset='7'/>
        <vcpupin vcpu='8' cpuset='8'/>
        <vcpupin vcpu='9' cpuset='9'/>
        <vcpupin vcpu='10' cpuset='10'/>
        <vcpupin vcpu='11' cpuset='11'/>
        <vcpupin vcpu='12' cpuset='12'/>
        <vcpupin vcpu='13' cpuset='13'/>
        <vcpupin vcpu='14' cpuset='14'/>
        <vcpupin vcpu='15' cpuset='15'/>
        <vcpupin vcpu='16' cpuset='16'/>
        <vcpupin vcpu='17' cpuset='17'/>
        <vcpupin vcpu='18' cpuset='18'/>
        <vcpupin vcpu='19' cpuset='19'/>
        <vcpupin vcpu='20' cpuset='20'/>
        <vcpupin vcpu='21' cpuset='21'/>
        <vcpupin vcpu='22' cpuset='22'/>
        <vcpupin vcpu='23' cpuset='23'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-2.7'>hvm</type>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='12' threads='2'/>
      </cpu>
      <clock offset='utc'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/ai_proxmox/Proxmox/vdisk1.img'/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/Hypervisors/proxmox-ve_4.4-eb2d6f1e-2.iso'/>
          <target dev='hda' bus='sata'/>
          <readonly/>
          <boot order='2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <controller type='usb' index='0' model='nec-xhci'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='dmi-to-pci-bridge'>
          <model name='i82801b11-bridge'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
        </controller>
        <controller type='pci' index='2' model='pci-bridge'>
          <model name='pci-bridge'/>
          <target chassisNr='2'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:65:0f:57'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='' keymap='en-us'>
          <listen type='address' address=''/>
        </graphics>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
        </video>
        <memballoon model='virtio'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
        </memballoon>
      </devices>
    </domain>
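As an aside, the 24 repetitive <vcpupin> elements in a configuration like the one above are the kind of thing worth generating rather than typing by hand. A small sketch in plain Python, assuming the same 1:1 vcpu-to-host-cpu pinning the XML uses:

```python
def vcpupin_lines(vcpus: int) -> list[str]:
    """Generate libvirt <vcpupin> elements pinning vcpu N to host CPU N."""
    return [f"<vcpupin vcpu='{n}' cpuset='{n}'/>" for n in range(vcpus)]

# Print the 24 pinning lines used in the domain XML above.
for line in vcpupin_lines(24):
    print(line)
```

Note that pinning all 24 host threads to the guest leaves no CPUs reserved for the unRaid host itself, which can itself hurt I/O performance, since the host still has to service the guest's virtio disk and network traffic.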
  7. Hi, you are totally right. I did not think about streaming from multiple HDDs. Thank you for this.
  8. This cheapo one is pretty awesome and works well under unRaid. A real bargain on eBay, 4x1GBit for next to nothing: Dell Intel Pro/1000 VT 4-Port Network PCI-e Adapter NIC TY674. A simple question: what is the use of bonding on an unRaid server (apart from failover protection), when the maximum transfer speed is limited to the speed of a single drive (unless you mainly read from the SSD cache)?
  9. A small update on case modding for better cooling. The supplied fans are really poor in terms of airflow. In a non-air-conditioned room the drives in the case quickly get up to more than 40°C (even 44°C for the 2TB Hitachis). What I did was the following modification:
  - Replaced the 3 internal 120mm fans with used Delta AFB1212SH fans ($4 each, $12 in total)
  - Replaced the 2 rear 80mm fans with used Delta FFB0812EHE fans ($4 each, $8 in total)
  - There are holes on both sides where air gets into the case without cooling the disks; those are now covered with aluminum sticky tape.

  The result:
  Parity - WDC_WD30EFRX-68EUZN0_WD-WCC4N0TLA4LN (sds) - active 23 C [OK]
  Parity2 - WDC_WD30EFRX-68EUZN0_WD-WCC4N0TLATRF (sdt) - active 24 C [OK]
  Disk 1 - WDC_WD30EFRX-68EUZN0_WD-WCC4N6THYSUN (sdv) - active 22 C [OK]
  Disk 2 - WDC_WD30EFRX-68EUZN0_WD-WCC4N1FDKYXK (sdu) - active 24 C [OK]
  Disk 3 - WDC_WD30EZRZ-00WN9B0_WD-WCC4E4FZ5HEH (sdy) - active 26 C [OK]
  Disk 4 - WDC_WD30EZRZ-00WN9B0_WD-WCC4E5JV4DH0 (sdw) - active 26 C [OK]
  Disk 5 - WDC_WD30EZRZ-00WN9B0_WD-WCC4E1HSRU8K (sdx) - active 26 C [OK]
  Disk 6 - WDC_WD30EZRZ-00WN9B0_WD-WCC4E1HSRJ41 (sdz) - active 26 C [OK]
  Disk 7 - WDC_WD30EZRZ-00Z5HB0_WD-WCC4N0HZKXPS (sdk) - active 25 C [OK]
  Disk 8 - WDC_WD30EZRZ-00Z5HB0_WD-WCC4N7UL623H (sdl) - active 25 C [OK]
  Disk 9 - WDC_WD30EZRS-11J99B1_WD-WMAWZ0329521 (sdm) - active 26 C [OK]
  Disk 10 - WDC_WD30EZRS-11J99B1_WD-WMAWZ0275664 (sdn) - active 26 C [OK]
  Disk 11 - Hitachi_HUA723020ALA641_YFGJLM8A (sdo) - active 29 C [OK]
  Disk 12 - Hitachi_HUA723020ALA641_YGHW07MA (sdp) - active 29 C [OK]
  Disk 13 - Hitachi_HUA723020ALA641_YFGWAYXA (sdq) - active 28 C [OK]
  Disk 14 - Hitachi_HUA723020ALA641_YGG51DJA (sdr) - active 29 C [OK]
  Disk 15 - Hitachi_HUA723020ALA641_YFHYNSTA (sdc) - active 28 C [OK]
  Disk 16 - Hitachi_HUA723020ALA641_YGGSVSNA (sdd) - active 29 C [OK]
  Disk 17 - Hitachi_HUA723020ALA641_YGHX7YZA (sde) - active 29 C [OK]
  Disk 18 - Hitachi_HUA723020ALA641_YGJ02KUA (sdf) - active 29 C [OK]
  Disk 19 - Hitachi_HUA723020ALA641_YGGP93HA (sdg) - active 27 C [OK]
  Disk 20 - Hitachi_HUA723020ALA641_YGJ3TK9A (sdh) - active 27 C [OK]
  Disk 21 - Hitachi_HUA723020ALA641_YGG5UEWA (sdj) - active 27 C [OK]
  Disk 22 - Hitachi_HUA723020ALA641_YFHY90KA (sdi) - active 28 C [OK]
  Cache - Samsung_SSD_850_EVO_250GB_S2R6NX0J168639Y (sdb) - active 26 C [OK]

  So the final conclusion: $20 spent ($30 including shipping), which is about 5% of the price of the case, and the drive temperatures have been lowered by 15°C on average. Pretty awesome!!!
  10. Hi BRiT. As I understand it, turbo write is still limited by the single slowest drive in the system, because that is where the write speed limit kicks in. Let me make the question simpler: what can I do to get a parity check faster than 100 MB/s? How would I use a combined architecture, i.e. hardware RAID cards with VDs, and the VDs as drives for unRaid?
  11. Hi experts. Is there a way to get write speeds to unRaid greater than a single drive's write speed? I mean, even using a cache drive, the maximum write speed to unRaid is the cache drive's speed limit, and once the cache fills up it gets even worse. What I have in mind is a setup where RAID cards (not flashed) are used, for example in the following configuration:
  1st RAID card - 2 VDs (VD1 and VD2) out of 8 drives: 4x3TB (striped) + 4x3TB (striped)
  2nd RAID card - 2 VDs (VD3 and VD4) out of 8 drives: 4x3TB (striped) + 4x2TB (striped)
  3rd RAID card - 2 VDs (VD5 and VD6) out of 8 drives: 4x2TB (striped) + 4x2TB (striped)
  Afterwards, use VD1 as the cache drive, VD2 as the parity drive, and VD3-VD6 as data drives. Is it possible?
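Regardless of whether unRaid would accept such VDs, the sizes implied by that layout are worth tallying, because unRaid requires the parity device to be at least as large as the largest data device. A quick sketch of the proposed configuration (striped RAID 0 capacity = drive count x drive size, in whole TB):

```python
# Capacities of the six striped VDs described above, in TB.
vds = {
    "VD1": 4 * 3,  # cache
    "VD2": 4 * 3,  # parity
    "VD3": 4 * 3,  # data
    "VD4": 4 * 2,  # data
    "VD5": 4 * 2,  # data
    "VD6": 4 * 2,  # data
}
data = ["VD3", "VD4", "VD5", "VD6"]
print("data capacity:", sum(vds[d] for d in data), "TB")           # 36 TB
print("parity ok:", vds["VD2"] >= max(vds[d] for d in data))       # True (12 >= 12)
```

The parity constraint only just holds (the 12TB VD2 equals the largest data VD). Also note that a single drive failure inside any striped VD takes out that whole VD, so the failure behaviour is very different from plain unRaid drives.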
  12. Yes, there is still one x16 slot free; I added the fan "just in case". I really do not have issues with heat. Right now, while the drives are doing a parity check, the server is temporarily stored in a non-air-conditioned room with an internal temperature of 20°C, and the hottest drive is at 38°C while the inside of the case is at 34°C. In fact, using this mobo I could use the 8 onboard SATA ports and two H310s, so you could easily fit two graphics cards even! You could easily fit some nice GPU; there are buckets of space in a 4U case, so overheating should not be an issue. I added the fans over the H310s "just in case" (I had one lying around). The Norco was twice the Inter-Tech's price shipped to Poland.
  13. Hi Joseph, thank you for being interested in my build. I did not run into major issues, but let me share the story and some thoughts:
  First of all, on the RACK CASE INTER-TECH 4U-4424: the overall quality of the enclosure isn't that great. Cheapest possible no-name fans inside. The backplane PCBs cover almost all of the vent holes, so I am a little concerned about the airflow. One of the six backplanes has slightly different-colored LEDs, and one LED does not work at all... pretty sad for something which costs over $400. On the positive side, thick metal is used for the build, and the case is stiff and fairly rigid. Not too shabby at all.
  A bit of stress came with flashing the Dell H310 controllers. At one point I thought I had bricked all 3 of them ($150 in total for used parts). But fortunately, hell yeah, they worked.
  The CPUs (2x$50) and RAM ($120) were bought used off eBay. No issues there.
  The Seasonic PSU ($250 for the 1200W unit, for future scalability) was a minor disappointment. A good piece of power supply, but it comes with mostly SATA cables. Guess what: try to buy original Seasonic cabling with Molex connectors for this PSU somewhere. No way... (modding sets only). Lucky me, I had some cabling for an XFX power supply that was compatible.
  CPU coolers with fans: off the shelf, the cheapest ones I could find on Allegro (the Polish eBay), two at $25 each. SAS cables, 6 pcs: $80 total. The motherboard: $300.
  So in total it comes to around $1400 WITHOUT THE DRIVES (I already had them in stock). After saying goodbye to the company's money, I finally tried for half a day to start unRaid... Then I swapped the USB stick from a 3.0 to a 2.0 one, and boooooom... it works! Now configuring my precious, building parity, etc... having fun.
  14. Hi folks, thank you for all the support so far. I finally got my build running today, after the parts were collected and a lot of beer was consumed. I want to share the details and a photo story with you. Have a nice day, Chris.
  SPECS:
  Case: Inter-Tech 4U-4424 (new)
  Mobo: EP2C602-4L/D16 (new)
  CPU: 2x Xeon E5-2640 (used)
  RAM: Hynix DDR3 ECC buffered 10600R, 8x8GB (used) and 8x4GB (used)
  Controller: 3x Dell H310 (used)
  Cables: 6x SFF8077 (both ends) (new)
  PSU: Seasonic Prime Titanium 1200W (new)
  HDDs (own/used): 12x3TB (a mix of WD Reds, Blues, and Greens), 4x2TB (WD Greens), 8x2TB (Hitachi 7200RPM)