
jammsen

Members
  • Content Count

    48
  • Joined

  • Last visited

Community Reputation

1 Neutral

About jammsen

  • Rank
    Advanced Member



  1. jammsen

    Anybody planning a Ryzen build?

I'm currently not running Unraid on the system with this motherboard, but I remember that every PCIe slot was in its own group. After flashing the BIOS to the latest level, the UEFI AGESA updates (or whatever it's called exactly) usually help a lot here; have a look at my old post on that. To be honest, without a detailed architecture diagram of your board and its PCIe lanes, I don't think I can clearly say which port maps to which address, so you basically have to try it out. PCIe devices in the IOMMU groups don't follow the network-card naming standard (p0s0 and the like), and even if they did, you would still need a basic understanding of your motherboard's architecture, and most vendors don't publish that for the desktop segment. Supermicro gives you that, but that's pretty much server territory. (See the sketch below for how to list the groups yourself.)
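    If you want to see how your slots actually ended up grouped, a quick generic way on any Linux box with the IOMMU enabled (nothing Unraid-specific, just a sketch) is to walk /sys/kernel/iommu_groups:

        # list every IOMMU group and the PCI devices it contains
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                lspci -nns "${d##*/}"
            done
        done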
  2. jammsen

    Anybody planning a Ryzen build?

My Asus Prime X370-Pro got this, but the B350 has it too. Just use the ACS override kernel patch with the multifunction flag; you can google it or search this forum, it's been explained a few times here. (An example boot entry follows below.)
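    For reference, on Unraid that flag goes on the kernel command line in syslinux.cfg; the value below is the commonly used one, adjust to your board:

        # /boot/syslinux/syslinux.cfg - append line of the default boot entry
        append pcie_acs_override=downstream,multifunction initrd=/bzroot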
  3. jammsen

    Server Idle Power Consumption

Now I have, thanks for the hint. I didn't even know this was possible; it's under account settings.
  4. jammsen

    Server Idle Power Consumption

There is no system in your signature, can you post it again please? @shooga Can you please post your system too?
  5. Oh my god, I couldn't accept RMAing all the parts, so I stuck with it and discovered that the mainboard is really picky about the PSU. The third one works. I'm installing the new rig right now and I feel like I'm in heaven.
  6. Thanks pwm and s.Oliver. Since both of you asked what my scenario and end goal are, I think I have to explain a little. (Just to be clear, I've been using unRAID for over a year and know a fair bit about its usage and features, but I wouldn't consider myself a pro.) (I did use unRAID on my gaming rig as a gaming VM in a NAS approach, again with the WD disks, a few months ago, but it really frustrated me having massive IO and audio issues all the time, even after paying the premium for an X370 board over a B350 for the better architecture and better IOMMU groups of the Asus boards.)

    For now it's just performance evaluation, but the long-term goal is to replace my rented, pretty powerful dedicated root server, which costs about 100€ a month and on which I do everything: web hosting, email, TeamSpeak 3, IPFire, Docker, game-server hosting for a multitude of games, evaluating bleeding-edge stuff, and a few things more, all done mainly with Proxmox because of the free choice between VMs, OpenVZ containers, or Docker containers if I want. Later that 100€ root server should go; the critical stuff would live on a 20-30€-per-month server (web + mail, maybe TeamSpeak), and everything else I want to self-host from my basement. My internet connection is fast and good enough for that (as I said, that stuff is not critical, so downtime is okay if it happens).

    As I said, the old hardware for my old unRAID NAS is a dual Xeon L5420 (4c/4t each) with 24 GB of ECC DDR1 RAM, plus a few disks bought used. That's why I want to go for a RAID6: the disks have a multitude of different lifetimes and could die any minute. They are 2x 1 TB WD Caviar Black (7200 rpm) with about 5 years of power-on time and 1x 2 TB WD Green (5400 rpm), 5 years too I think; both were bought new and are my own. Then I picked up 5x 1 TB Samsung Spinpoint SATA2 (7200 rpm) for about 100€ used on eBay, with no clue about the SMART history of any of them, and one of them hiccupped again just yesterday, which basically means it's dead for usage and no longer reliable. My old system can just about handle the speeds the disks require: the two PCIe slots are only version 1 standard, and I'm using the x4 connector because a chipset heatspreader blocks the x8 port. All three WDs support SATA3 on paper, but let's be real, the SATA3 WD Green is basically a 5400 rpm disk that is about as fast as a 7200 rpm SATA2 Samsung Spinpoint, averaging about 90 MB/s.

    When the new hardware comes back from checking/RMA, I'll have a low-power Xeon ( https://ark.intel.com/de/products/75270/Intel-Xeon-Processor-E5-2650L-v2-25M-Cache-1_70-GHz ) (10c/20t) with 96 GB of DDR3 ECC RDIMM R4 and a bigger mainboard that can handle the 8 disks by itself, so I won't have to use the hardware RAID controller, which only does JBOD. When that happens I plan to future-proof further, maybe buying new disks so they don't bottleneck the CPU and RAM.

    My end goal is to reduce the root server, BUT also to educate myself on nested virtualization. I plan to virtualize ESXi and unRAID and play around with infrastructure automation, IaaS deployments, or maybe XaaS stuff, which basically gives me the ability to create and run 25 VMs with one button click. That will be bottlenecked if only one disk is serving it, but the RAID5 or RAID6 approach gives me, at 7 disks, the reads of a medium-priced SSD (see the rough check after this post). I know RAID0 could give me the writes too, but if another Samsung fails, everything is gone again, so I'm sticking with the "better safe than sorry" approach here, because that happened for the first time yesterday and I had to reinstall everything.
Also, you'll ask yourself: why not use PVE and unRAID in ESXi, OR PVE and ESXi in unRAID, OR, as I now plan, unRAID and ESXi in PVE? I'm really trying things out here right now, and to be honest I don't even know why I didn't consider PVE first and only looked at ESXi or unRAID as the host, because I've been using PVE for over 7 years now, I think. If you wanted to say I'm knowledgeable in one of these hypervisors, it's definitely PVE, and on a way bigger scale. But I also know the hiccups of all of them. PVE has basically none, except for configuring nested VT yourself, which is really, really easy: about three commands and a reboot (see the sketch below). ESXi (I'm new to it and just trying it out, because I'll be using it at work in the future) needs a good hardware RAID controller and compatible hardware, or you will never see a newer version of the software, and the cool stuff like IaaS or XaaS costs a lot of money. unRAID is really cool as a NAS, with a custom software-RAID function that was never based on a hardware RAID controller, and it gives you a lot of features, but it's not made for clustering, HA, or live migration of VMs/containers, which the others do; again, it's a NAS.

To be honest, I don't use HA or a cluster or live migration, but I do nightly backups of containers via snapshots, which PVE masters perfectly. Also, everything I plan to move here later is already in that format, so no migration process is needed, though that wouldn't be a dealbreaker if it were. I really hope you guys now understand a little better why I'd love to have PVE but also unRAID. (ESXi, meh, I'm new to it, it costs a lot of money, and it's really freaking picky, but it's where the cool enterprise stuff is, IaaS/XaaS VM deployments and such; don't see me as knowledgeable here, I only know the basics, maybe.) From my point of view, unRAID has always been a freaking good NAS with really cool extras that seeks a rival, while PVE is a free, basic ESXi alternative that doesn't deliver any NAS function but supports many container formats, except Docker.

All I know is that I want a basic, robust system, virtualize the rest, and evaluate 1000 more things in the future. That's why it's so hard to make up your mind about the order in which to do things. Is speed more important than power efficiency? Where is the bottleneck in each approach? Can you live with that bottleneck? Does nested hypervisor B or C even work on host system A? Does the deeper-nested system X even work on host or guest systems A and B? (Again: robust base.) Where do I store my classic "private" NAS stuff? What does a backup strategy look like? (RAID, even unRAID, isn't a backup, as you know.) I'm sure I would come up with many more questions the more I type here. So please feel free to share your feedback, your knowledge, your experience, and maybe even your own plan for doing this, because there is more than meets the eye, and more eyes see more than just one pretty confused and sad (RMAing my new server) person.
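    On the RAID6 read-speed point: with 7 disks in RAID6 you have 5 data disks, so at roughly 90 MB/s per Spinpoint that lands around 450 MB/s sequential, which is where my "medium-priced SSD" comparison comes from. A rough way to sanity-check it once the array exists (plain hdparm; /dev/sdb and /dev/md0 are just assumed device names):

        # compare a single member disk against the assembled array
        hdparm -t /dev/sdb     # one Spinpoint on its own
        hdparm -t /dev/md0     # the software RAID6 array (assumed name)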
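    And since I mentioned "about three commands and a reboot" for nested VT on PVE, this is roughly what I mean on an Intel host (a sketch; use kvm-amd on an AMD host):

        # enable nested virtualization for KVM on an Intel Proxmox host
        echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
        reboot
        # after the reboot this should print Y (or 1):
        cat /sys/module/kvm_intel/parameters/nested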
  7. Well, my newly ordered hardware won't run, so sadly I have to go through RMA and all that. I'm now using my old dual Xeon (4c/4t each) with 24 GB of DDR1 ECC RAM, which is enough for playing around. BUT on the other hand I have 8 disks (7x 1 TB + 1x 2 TB) connected to a hardware RAID controller (Adaptec ASR-5805, I think), which at least lets me use all the disks, while my mainboard doesn't have enough ports. As you said, on the one hand it's "un"+"raid", BUT on the other hand it isn't a RAID6 with 600% read speeds, which is why for first tests I'm going with Debian 9.5 and PVE and virtualizing all the things, including unRAID. Or do you see another way? Because I'm not really keen on doing a software RAID via the Debian installer and just using my hardware RAID card as JBOD, EXCEPT if there is a really good and solid benefit (a sketch of what that would look like follows below). Can you think of one? Any suggestions/feedback for me? This is a PoC for me, and I'm trying it out right now, so please feel free to go crazy.
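    If I did go the software-RAID route with the Adaptec in JBOD mode, it would look something like this under Debian/PVE (a sketch with assumed device names, not a final layout):

        # create a 7-disk software RAID6 out of the JBOD-exposed disks
        mdadm --create /dev/md0 --level=6 --raid-devices=7 /dev/sd[b-h]
        # watch the initial resync progress
        cat /proc/mdstat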
  8. As far as I can read, you guys never got into the details of using RAID 0/1/5/6/10 with your disks; all I can see is that some people tried to pass through some SATA controllers. Did you use a certain 300 GB disk from PVE for unRAID, or did all of you pass a controller or certain disks through? What about the RAID level on PVE? Can you share some details/insight on this? I really want to follow this path of using PVE as the base, I've loved it for years. But I also want to use unRAID as the NAS for my home, and ESXi to evaluate and play around with inside PVE. Any tips or insight on this?
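    For what it's worth, the two ways I know of to hand disks to an unRAID guest under PVE are whole-controller PCI passthrough or per-disk passthrough. Roughly (the VM ID 101, the PCI address, and the disk ID below are made-up placeholders):

        # pass the whole SATA/RAID controller through to VM 101
        qm set 101 -hostpci0 01:00.0
        # or pass a single physical disk through as a virtual drive
        qm set 101 -scsi1 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_EXAMPLE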
  9. jammsen

    [Solved] Antec 1200 HDD cage (Needed)

I need a 902 cage with LED fans and a dust filter, like this: https://www.google.de/search?q=NINE/TWELVE+HUNDRED+HDD+CAGE+WITH+FAN+AND+FILTER&safe=active&tbm=isch&source=iu&ictx=1&fir=bJa_vaVYpsPRsM%3A%2CUvKfpXKTC2y1qM%2C_&usg=AFrqEzcSzUFPBp3IhxKepkaWsf8Rc1LwpQ&sa=X&ved=2ahUKEwi01KSKytXcAhVMbFAKHbuHBvIQ9QEwAXoECAEQBA#imgrc=bJa_vaVYpsPRsM: Anyone up to the task? Antec itself isn't, though. I'm living in the EU/Germany.
  10. jammsen

    Anybody planning a Ryzen build?

It was no problem for me on the B350 or X370 architecture with the ACS override multifunction patch, I think it's called? Try to google it; it basically breaks the IOMMU groups up even further.
  11. jammsen

    Anybody planning a Ryzen build?

It's true that some Intel chips idle as low as 25-50 W; I have not seen that on an AMD chip yet. (A quick way to check your own is sketched below.)
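    A wall meter is the real answer for idle draw, but you can at least see how deep the CPU package idles (generic Linux, nothing vendor-specific; deeper C-states generally mean lower wall draw):

        # show the idle states available on the first core
        grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
        # or watch C-state residency live
        powertop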
  12. jammsen

    Anybody planning a Ryzen build?

According to https://outervision.com/power-supply-calculator, 895 W is recommended for 24/7 permanent usage. How do you come up with 1100 W dual? I mean, redundancy is a must for a production server, but for a home NAS?
  13. jammsen

    Anybody planning a Ryzen build?

You can calculate that info for yourself here, pretty easily: https://outervision.com/power-supply-calculator
  14. jammsen

    Dynamix - V6 Plugins

Hey guys, the System Temp plugin doesn't work for me; here is a log:

    May 13 04:51:56 Tower kernel: i2c /dev entries driver
    May 13 04:52:03 Tower kernel: w83627hf: w83627hf: Found W83627HF chip at 0x290
    May 13 04:52:03 Tower kernel: w83627hf w83627hf.656: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info().
    May 13 04:52:03 Tower kernel: w83793 0-002f: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info().
    May 13 04:52:03 Tower kernel: w83793 0-002f: Registered watchdog chardev major 10, minor: 130
    May 13 04:53:57 Tower kernel: i2c /dev entries driver
    May 13 04:54:03 Tower kernel: i5k_amb i5k_amb.0: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info().
    May 13 04:54:03 Tower kernel: w83627hf: w83627hf: Found W83627HF chip at 0x290
    May 13 04:54:03 Tower kernel: w83627hf w83627hf.656: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info().
    May 13 04:54:03 Tower kernel: w83793 0-002f: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info().
    May 13 04:54:03 Tower kernel: w83793 0-002f: Registered watchdog chardev major 10, minor: 130
    May 13 04:54:13 Tower kernel: i2c /dev entries driver
    May 13 04:54:23 Tower kernel: i2c /dev entries driver
    May 13 04:54:25 Tower kernel: i5k_amb i5k_amb.0: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info().
    May 13 04:54:25 Tower kernel: w83627hf: w83627hf: Found W83627HF chip at 0x290
    May 13 04:54:25 Tower kernel: w83627hf w83627hf.656: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info().
    May 13 04:54:26 Tower kernel: w83793 0-002f: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info().
    May 13 04:54:26 Tower kernel: w83793 0-002f: Registered watchdog chardev major 10, minor: 130
    May 13 04:55:33 Tower kernel: i2c /dev entries driver
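    In case it helps with debugging: the plugin sits on top of lm-sensors, so assuming the sensors userland tools are present on the system, you can check from the console whether the chips are usable outside the plugin:

        # probe for supported sensor chips (interactive; answer the prompts)
        sensors-detect
        # then dump readings from whatever drivers got loaded
        sensors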
  15. jammsen

    Anybody planning a Ryzen build?

It works fine for me, but be aware that the mainboard, the BIOS, and the CPU/APU play a role in it too. If you want to go for ultra-low standby wattage, though, you won't get the same results as with Intel.