Golfonauta

Everything posted by Golfonauta

  1. 1U server based on an Intel S1200BTL motherboard.
     - 4x 3.5" SATA drive bays with caddies, plus 2 internal SATA ports (one feeds the DVD drive, the other is free for an internal SSD if wanted)
     - DVD drive
     - 2 GbE interfaces on the motherboard, plus a BMC (like iLO/iDRAC) with full remote console capabilities
     - Additional PCIe card with 2 more GbE ports and a serial port
     - Single power supply; rack rails included
     Power consumption running Unraid: current 35 W, maximum 103 W, average 48 W.
     Perfect for a 24/7 Unraid box if you just need 4 big drives and 1 SATA SSD.
     Asking price: 250€. I would also trade it for something of interest to me, such as 10 GbE or multi-gigabit network equipment, or big hard drives.
     Location: Barcelona area.
  2. Hello Ozymandias, it really looks like a nice build. It seems you don't need anything else to just build the server and start testing it, so I would go that way. You will not be using a lot of drives initially, and Unraid, at least the way I use it, tends to have only one or two drives active most of the time (mostly the cache drive and the drive you use for the VMs). I would check the noise of the other fans, like the ones in the PSU included with that case, which are smaller and look like they will rev up and make some serious noise. It is worth checking all your noise sources and trying to lower the RPM of all your fans; maybe even the included ones are not that noisy at really low speeds. If you use a big CPU cooling solution you may be able to do without most of the airflow the stock one would need. So, at least from my POV, build and test first, then see what you need to upgrade. BTW, I also use Noctua in my builds, but I have a Lian-li D-8000, which is a completely different beast, and I used to have a desktop-class CPU. I have just bought an Epyc system for that case, and I will probably buy an Arctic Freezer cooler because Noctua doesn't build one with the correct orientation for my Supermicro board. I will share my findings. Cheers
  3. Hello, I'm interested in this board too, the X570D4U-2L2T. Have you been able to solve the network speed problems? Thanks
  4. My experience with my specific array is very good. Running 0.6 without errors on any drive, and when I click on a drive I notice the slow spin-up before it reads the directories, so it is really working well. I have been moving files and even deleting folders, with no errors on the drives. My main issue is that even with that feature enabled my shelf still seems to be power hungry... but I will need to investigate more. I'm using this PCIe card:
     [1000:0087] 01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
     with an EMC shelf from a VNX 5200 and the 15 original Hitachi SAS drives that came with it:
     [7:0:0:0]  disk  HITACHI HUS72303CLAR3000  C442  /dev/sdd  3.00TB
     [7:0:1:0]  disk  HITACHI HUS72303CLAR3000  C442  /dev/sdf  3.00TB
     [7:0:2:0]  disk  HITACHI HUS72303CLAR3000  C442  /dev/sdg  3.00TB
     [7:0:3:0]  disk  HITACHI HUS72303CLAR3000  C442  /dev/sdh  3.00TB
     [7:0:4:0]  disk  HITACHI HUS72303CLAR3000  C442  /dev/sdi  3.00TB
     [7:0:5:0]  disk  HITACHI HUS72303CLAR3000  C442  /dev/sdk  3.00TB
     [7:0:6:0]  disk  HITACHI HUS72303CLAR3000  C442  /dev/sdl  3.00TB
     [7:0:7:0]  disk  HITACHI HUS72303CLAR3000  C442  /dev/sdm  3.00TB
     [7:0:8:0]  disk  HITACHI HUS72303CLAR3000  C442  /dev/sdn  3.00TB
     [7:0:9:0]  disk  HITACHI HUS72303CLAR3000  C442  /dev/sdo  3.00TB
     [7:0:10:0] disk  HITACHI HUS72303CLAR3000  C442  /dev/sdp  3.00TB
     [7:0:11:0] disk  HITACHI HUS72303CLAR3000  C442  /dev/sdq  3.00TB
     [7:0:12:0] disk  HITACHI HUS72303CLAR3000  C442  /dev/sdr  3.00TB
     [7:0:13:0] disk  HITACHI HUS72303CLAR3000  C442  /dev/ss   3.00TB
     [7:0:14:0] disk  HITACHI HUS72303CLAR3000  C442  /dev/sdt  3.00TB
     aside from 3 other SATA drives connected to the board's integrated controller. The processor is a Xeon E3-1200 on an Intel motherboard. Let me know if I can help by posting any more details, e.g. via the commands sketched below.
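     For reference, listings like the ones above can be gathered from the Unraid shell roughly as follows (a sketch; it assumes lspci, lsscsi and sdparm are present on the system, and /dev/sdd is just the first drive from the list above):
       # Show the SAS HBA together with its PCI vendor:device IDs
       lspci -nn | grep -i sas
       # List every attached disk with its SCSI address and size
       lsscsi -s
       # Query one drive's sense data, e.g. to see whether it reports
       # a standby (spun-down) power condition
       sdparm --command=sense /dev/sdd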
  5. When you have the plugin installed, the "support thread" link on the plugin points to the feature request thread and not to this one. That needs fixing. Also, I have just installed it without issues; waiting to see if it works...
  6. When you have the plugin installed, the "support thread" link on the plugin points to this thread. That needs fixing too.
  7. Thanks a lot, you really helped me. I also had an issue with a dual-NIC I350. Here are the original and modified configurations.
     Original:
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
     </hostdev>
     Modified:
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
     </hostdev>
     The key change is that both functions now share the same guest-side bus and slot, with function 0 marked multifunction='on', so the guest sees one multifunction device instead of two unrelated ones. I hope this helps others, and also that Unraid fixes it (I have seen the new beta in a video, and given the newly included tools maybe they have solved it already). Thanks.
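     If you prefer editing the VM definition from the shell instead of the GUI XML view, libvirt's virsh applies the same change (a sketch; "MyVM" is a hypothetical domain name):
       # Opens the domain XML in $EDITOR and validates it when saved
       virsh edit MyVM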
  8. You can install the VFIO-PCI CFG plugin from APPS to help you see the IOMMU groups and edit the configuration automatically. I recommend trying to leave the adapter without network config first. I have done it twice; the interfaces disappear from Unraid, but if you end up with a weird network setup shown in Unraid you can delete the config and get back to defaults. To restore the default network configuration, delete these two files and reboot: /boot/config/network.cfg (i.e. the file network.cfg in the config folder of your flash device) and /boot/config/network-rules.cfg (i.e. the file network-rules.cfg in the same folder). You can also edit them if you know what you are doing. Good luck
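     From the Unraid shell, that reset amounts to the following (a sketch of the same two deletions described above; /boot is the standard mount point of the flash device):
       # Remove the saved network configuration so Unraid regenerates defaults
       rm /boot/config/network.cfg /boot/config/network-rules.cfg
       # Reboot so the defaults take effect
       reboot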
  9. ok, I imagine you are thinking that developing that SAS support has a cost. Let's try to help. I'll buy 1 PRO license if you support it. Thanks, Ruben
  10. I also tried some commands to spin down one drive and ended up rebuilding it... I'm not sure whether the drive was damaged by the command itself or it was unrelated, but I will not do any more tests until the feature is under supported development.
  11. Please allow SAS drives to spin down before even adding more array pools. I think spinning SAS drives up and down should not be that difficult, and it would allow us to have larger arrays. Then we will be so happy to get multiple pools. Thanks a lot.
  12. Hi Unraid techies, please solve this problem; I'm also on the SAS side.
  13. Upvote, I want this feature. I have a shelf full of 15 SAS drives that don't spin down. +15
  14. I have the same issue in 6.0.1; it keeps retrying to unmount...
  15. It's working, I've been able to install the Plex docker. Thank you both for your help. Sorry for my stupid issue. Thanks
  16. OMG, you are right, a bad network setting due to a network device currently out for repair... thanks for your help. I'm going to test it and reply as soon as I check it's working; at least the plugin seems to be installed.
  17. Diagnostics zip attached. I used the Plugins page to install it; I don't know the old way. tower-diagnostics-20150607-1638.zip
  18. I installed the new V6 by backing up the old install, formatting, copying all the new V6 files, running the boot script (I needed to modify it to make it work), and then copying back my settings. I've never used plugins in any of my older installs. I can SSH into the Unraid server and try any command you want to check whether the FS is fine... thanks
  19. When trying to install the plugin I get:
      plugin: installing: https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
      plugin: downloading https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
      plugin: downloading: https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg ... done
      Warning: simplexml_load_file(): /tmp/plugins/community.applications.plg:1: parser error : Document is empty in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
      Warning: simplexml_load_file(): in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
      Warning: simplexml_load_file(): ^ in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
      Warning: simplexml_load_file(): /tmp/plugins/community.applications.plg:1: parser error : Start tag expected, '<' not found in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
      Warning: simplexml_load_file(): in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
      Warning: simplexml_load_file(): ^ in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
      plugin: xml parse error
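      The "Document is empty" parser error means the downloaded .plg file arrived empty, which usually points at a connectivity or DNS problem rather than the plugin itself. One way to check from the Unraid shell (a sketch; wget ships with Unraid, and the URL is the one from the log above):
        # Fetch the plugin file manually and inspect what actually came back
        wget -O /tmp/ca-test.plg https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
        # A healthy download starts with XML; an empty file or an HTML error
        # page confirms the download is the problem
        head -n 3 /tmp/ca-test.plg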
  20. I created the img but didn't create any cache share. How could I create only this user share? I don't want to create multiple shared folders; I based my Unraid setup on sharing the drives only, as I want to have only the drive in use spun up, for a quieter/greener setup. I pressed save.
  21. Well, it is not explained very clearly, but if this issue is because I use a cache drive formatted as ReiserFS and not btrfs, please tell me how to change the cache drive from one format to the other (without losing data). Thanks
  22. Yes, I saw them in the Noobie Docker Setup Guide http://lime-technology.com/forum/index.php?topic=37732.0 and while the Unraid pages aren't exactly the same, I followed the step-by-step guide. Then on step 6, when I press "select a template", I don't see that menu, only the few options in the attached png. I know I don't have experience, but I follow step-by-step guides; I don't skip any step or think I'm cool and can do it without reading... All my drives are reiserfs, if that makes any difference. Thanks
  23. Hello, I've upgraded from V5 to V6rc4. I've created the Docker image and enabled it on the /docker page. I can insert a list of template repositories and save. I tried with this list:
      https://github.com/aptalca/docker-templates
      https://github.com/balloob/unraid-docker-templates
      https://github.com/binhex/docker-templates
      https://github.com/jshridha/templates
      https://github.com/CaptInsano/docker-containers/tree/templates
      https://github.com/coppit/docker-templates
      https://github.com/dmaxwell351/docker-containers/tree/templates
      https://github.com/gfjardim/docker-containers/tree/templates
      but then almost nothing changed: when I try to add a container, only [user defined templates] and [default templates] are shown in "select a template". After trying lots of times, deleting and recreating the image, etc., I can now also see [template-user] with my-plex inside, but I'm still unable to install it. Is there any video or troubleshooting guide? I have seen some info, but it is from older versions of Unraid and the Docker page has big differences now. Thanks
  24. Yes sir, just trying to follow it slowly and carefully. I'm in the "Adding Template Repositories" section, taking my time to decide how to proceed. Have you seen the V6 wiki?