Golfonauta

Members · Converted
Posts: 66
Gender: Male


Golfonauta's Achievements: Rookie (2/14) · Reputation: 2

  1. For sale: 1U server based on an Intel S1200BTL motherboard. 4x 3.5" SATA drive caddies, plus 2 extra internal SATA ports (one connected to the DVD drive, the other free for an internal SSD if wanted). DVD drive included. Networking: 2 GbE interfaces on the motherboard plus a BMC (iLO/iDRAC equivalent) with full remote console capabilities, and an additional PCIe card with 2 more GbE ports and a serial port. Single power supply; rack rails included. Power consumption under Unraid: current 35 W, maximum 103 W, average 48 W. Perfect for 24/7 Unraid if you just need 4 big drives and 1 SATA SSD. Asking price: 250€. I would also trade it for 10 GbE or multigigabit network equipment, or for big hard drives. Location: Barcelona area, Spain.
  2. Hello Ozymandias, that really looks like a nice build. It seems you don't need anything else to build and start testing your server, so I would go that way. You won't be using many drives initially, and Unraid, at least the way I use it, tends to keep only one or two drives active most of the time (mostly the cache drive and the drive you use for the VMs). I would check the noise of the other fans, like the ones in the PSUs included with that case: they are smaller and look like they will rev up and make some fancy noise. It is worth checking all your noise sources and trying to lower the RPMs of all your fans; maybe even the included ones are not that noisy at really low speeds. If you use a big CPU cooling solution, you may be able to do away with most of the airflow you would need with the stock one. So, at least from my point of view: build and test first, then see what you need to upgrade. BTW, I also use Noctua in my builds, but I have a Lian Li D-8000, which is a completely different beast, and I used to run a desktop-class CPU in it. I have just bought an Epyc system for that case, and I will probably buy an Arctic Freezer cooler because Noctua doesn't build one with the correct orientation for my Supermicro board. I will share my findings. Cheers
  3. Hello, I'm interested in this board too, the X570D4U-2L2T. Have you been able to solve the network speed problems? Thanks
  4. My experience with my specific array is very good. Running 0.6 without errors on any drive, and when I click on the drives I notice the slow spin-up to read the directories, so it is really working well. I have been moving files and even deleting folders with no errors on the drives. My main issue is that even with that feature enabled my shelf still seems to be power hungry... but I will need to investigate more. I'm using this PCIe card:
     [1000:0087] 01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
     with an EMC shelf from a VNX 5200 and the original 15 Hitachi SAS drives that came with it:
     [7:0:0:0]  disk HITACHI HUS72303CLAR3000 C442 /dev/sdd 3.00TB
     [7:0:1:0]  disk HITACHI HUS72303CLAR3000 C442 /dev/sdf 3.00TB
     [7:0:2:0]  disk HITACHI HUS72303CLAR3000 C442 /dev/sdg 3.00TB
     [7:0:3:0]  disk HITACHI HUS72303CLAR3000 C442 /dev/sdh 3.00TB
     [7:0:4:0]  disk HITACHI HUS72303CLAR3000 C442 /dev/sdi 3.00TB
     [7:0:5:0]  disk HITACHI HUS72303CLAR3000 C442 /dev/sdk 3.00TB
     [7:0:6:0]  disk HITACHI HUS72303CLAR3000 C442 /dev/sdl 3.00TB
     [7:0:7:0]  disk HITACHI HUS72303CLAR3000 C442 /dev/sdm 3.00TB
     [7:0:8:0]  disk HITACHI HUS72303CLAR3000 C442 /dev/sdn 3.00TB
     [7:0:9:0]  disk HITACHI HUS72303CLAR3000 C442 /dev/sdo 3.00TB
     [7:0:10:0] disk HITACHI HUS72303CLAR3000 C442 /dev/sdp 3.00TB
     [7:0:11:0] disk HITACHI HUS72303CLAR3000 C442 /dev/sdq 3.00TB
     [7:0:12:0] disk HITACHI HUS72303CLAR3000 C442 /dev/sdr 3.00TB
     [7:0:13:0] disk HITACHI HUS72303CLAR3000 C442 /dev/sds 3.00TB
     [7:0:14:0] disk HITACHI HUS72303CLAR3000 C442 /dev/sdt 3.00TB
     aside from 3 other SATA drives connected to the board's integrated controller. The processor is a Xeon E3-1200 on an Intel motherboard. Let me know if I can help by posting any more details.
  5. When you have the plugin installed, the "support thread" link on the plugin points to the feature request thread and not to this one. That needs fixing. Also, I have just installed it without issues and am waiting to see if it works...
  6. When you have the plugin installed, the "support thread" link on the plugin points to this thread. That needs fixing too.
  7. Thanks a lot, you really helped me. I also had an issue with a dual-port I350 NIC. Here are the original and modified configurations.

     Original:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
     </hostdev>

     Modified:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
     </hostdev>

     I hope this helps others, and also that Unraid solves it (I have seen the new beta in a video and, given the newly included tools, maybe they have solved it). Thanks.
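The pattern in the edit above is mechanical: the guest-side address of every passed-through function is moved onto the bus/slot of the first one, which gets multifunction='on'. A minimal stdlib-only sketch of that rewrite (the function name and the wrapping <devices> root element are assumptions for illustration, not part of the post):

```python
# Sketch: give all passed-through functions of a multi-function PCI device
# the same guest bus/slot, with multifunction='on' on the first one.
# Assumed: the <hostdev> blocks are wrapped in a <devices> root for parsing.
import xml.etree.ElementTree as ET

def fix_multifunction(devices_xml: str) -> str:
    """Rewrite the guest-side <address> of each <hostdev> so every
    function lands on the bus/slot of the first hostdev."""
    root = ET.fromstring(devices_xml)
    hostdevs = root.findall("hostdev")
    # The guest-side address is the direct child of <hostdev>; the one
    # nested inside <source> is the host-side address and stays untouched.
    first = hostdevs[0].find("address")
    first.set("multifunction", "on")
    for dev in hostdevs[1:]:
        host_fn = dev.find("source/address").get("function")
        guest = dev.find("address")
        guest.set("bus", first.get("bus"))
        guest.set("slot", first.get("slot"))
        guest.set("function", host_fn)  # keep the device's own function number
    return ET.tostring(root, encoding="unicode")
```

Feeding it the two original <hostdev> blocks from the post (wrapped in a <devices> element) produces guest addresses matching the modified configuration shown above.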
  8. You can install the VFIO-PCI CFG plugin from APPS to help you see the IOMMU groups and edit the configuration automatically. I recommend trying to leave the adapter without network config first. I have done it twice; the interfaces disappear from Unraid, but if you end up with a weird network setup shown in Unraid you can delete the config and get back to defaults. To restore the default network configuration, delete these two files and reboot:
     /boot/config/network.cfg (i.e. the file network.cfg in the config folder of your flash device)
     /boot/config/network-rules.cfg (i.e. the file network-rules.cfg in the same folder)
     You can also edit them if you know what you are doing. Good luck
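The reset described above can be wrapped in a small shell helper. This is only a sketch: the function name and the directory parameter are assumptions for illustration; on a real Unraid box the directory is /boot/config and a reboot must follow so the defaults are regenerated.

```shell
# Sketch of the network-config reset described above. The function name and
# the directory parameter are illustrative; on Unraid the real directory is
# /boot/config, and you must reboot afterwards to regenerate defaults.
reset_network_config() {
    dir="$1"   # the flash config folder
    for f in network.cfg network-rules.cfg; do
        if [ -f "$dir/$f" ]; then
            rm "$dir/$f"
            echo "removed $f"
        fi
    done
}

# On a live server you would run:
#   reset_network_config /boot/config
#   reboot
```

Pointing the function at a scratch directory first is a safe way to rehearse before touching the flash device.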
  9. OK, I imagine you are thinking that developing that SAS support has a cost. Let's try to help: I'll buy 1 Pro license if you support it. Thanks, Ruben
  10. I also tried some commands to spin down one drive and ended up rebuilding a drive... I'm not sure whether the drive was damaged by the command itself or it was unrelated, but I will not do any other tests until this is under supported development.
  11. Please allow SAS drives to spin down before adding more array pools. I think spinning SAS drives up and down should not be that difficult, and it would allow us to have larger arrays. Then we will be very happy to get multiple pools. Thanks a lot.
  12. Hi Unraid techies, please solve this problem; I'm also on the SAS side.
  13. Upvote, I want this feature. I have a 15-drive shelf full of SAS drives that don't spin down. +15
  14. I have the same issue on 6.0.1; it keeps retrying to unmount...