About jammsen

  • Rank
    Advanced Member



  1. This one: "did you set the unRAID DNS to the IP address of Pihole (not recommended)". I had it running for 2 weeks with only static DNS entries on selected clients, not the DHCP-and-router setup, and it worked like a charm. But now that it's rolled out broadly, ironically the unRAID system itself suffers from it. I'm thinking of just setting the unRAID IP settings to static and giving unRAID its own DNS; seems like an easy fix.
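For anyone in the same spot, here is a sketch of what static network settings look like on the unRAID flash drive. The file path and field names are from memory and the values are made-up examples, so treat this as an assumption and prefer setting it via Settings > Network Settings in the webGUI; the point is only that the server's own resolver must not be a Docker container that starts after boot:

```
# /boot/config/network.cfg (example values, not a recommendation)
USE_DHCP="no"
IPADDR="192.168.1.10"          # static IP for the unRAID box itself
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"
DNS_SERVER1="192.168.1.1"      # router or any resolver that is up at boot
DNS_SERVER2="1.1.1.1"          # public fallback, so boot never waits on Pi-hole
```

LAN clients can still get the Pi-hole container as their DNS via DHCP; only the host keeps an independent resolver.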
  2. Hey guys, I followed the guide from SpaceInvader. Here and there are a few hiccups compared to an AdBlock extension, but I can live with that. What I can't live with: I set it up like in the video, with DNS handed out via DHCP from my router pointing at my Docker container, and now the unRAID server itself can't resolve any internet domains anymore. resolv.conf shows the right DNS, but I can't do anything with unRAID in that state. Any ideas? I really thought it wouldn't matter much that unRAID tries to get DNS at bootup while the container isn't started yet, but now nothing is working. Regards.
  3. Hey guys, I've seen a few threads on this topic, leaning more toward tech/config questions like RAID 0 or 1 on cache, or which disks are TRIM-capable, but after reading other posts I'm still unsure how to handle my use case. So please help me out here, guys 😣
     My setup:
     ASUS Prime B450M-A
     Ryzen 5 3400G
     2x 8GB Corsair 3000MHz RAM sticks
     Disks: 3x 1TB SSDs (1 ADATA SU800, 2 Crucial BX500)
     My use case / what's my plan: I would use this server as a homelab, meaning:
     Some Plex (about 400-800GB)
     Mostly dev stuff & virtualization (so heavy VMs, Docker containers and coding)
     A bit of personal data backup on the side (150GB or so)
     The rest is for trying out new stuff
     The questions now are:
     How do I best set up the 3 SSDs, considering my use case? I thought of either a normal 2-SSD array with 1 parity, or 2 SSDs plus 1 cache.
     I'm unsure in what ways SSDs might die under the way unRAID uses them, compared to a normal software RAID (md) like under Ubuntu.
     Am I missing something?
  4. Hey, I just installed "Dynamix System Stats", but the select box for the time range doesn't work at all. All of my displayed data covers only about the last 15 seconds. How can I fix that?
  5. I have run radarr, sonarr and tautulli; those are fine. The other ones I've never used and don't know. Keep in mind that deep app virtualization with massive GUI containers like Firefox, Chrome, FileZilla and such chews through RAM like a beast and is prone to the same flaws/errors as the desktop counterparts: a wrong extension, a wrong website, weird tracking JavaScript running, or masses of open tabs will use up quite some I/O, CPU and RAM. Please always remember: even the "BEST BEAST HARDWARE" with layer-1, layer-2 or app virtualization can lose to some weird stuff and not perform like you want it to.
  6. That depends heavily on what Docker containers you are running; you can give a 3900X hell with only 2 containers, or run a 1600X with 120 containers just fine. It's just application virtualization; it can be done very heavy on load or very slim, very good or very bad, and anyone can build an image, so there's an experience factor in it. Which 10 containers/images are you running?
  7. I'm currently not running unRAID on the system with this mobo, but I remember that every PCIe slot was in its own group. That said, after flashing the BIOS to the latest level, the UEFI AGESA updates (or whatever it's similarly called) usually help a lot there. Have a look at my old post here. To be honest, without a detailed architecture diagram of your board and its PCIe lanes, I don't think I can clearly tell you which port maps to which address and so on, so you basically have to try it out. PCIe lanes in the IOMMU groups don't follow the network-card naming standard like p0s0; even if they did, you'd still need a basic understanding of your mobo architecture, and most vendors don't publish that in the desktop segment. Supermicro gives you that, but that's pretty much server territory.
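If you want to see the grouping on your own board, a small sketch that just walks sysfs and prints each IOMMU group with the PCI devices inside it (devices in the same group generally have to be passed through together; on a machine without IOMMU enabled it prints nothing):

```shell
#!/bin/sh
# List each IOMMU group and the PCI devices it contains by reading sysfs.
list_iommu_groups() {
    for g in /sys/kernel/iommu_groups/*; do
        [ -d "$g" ] || continue              # empty when IOMMU is disabled
        printf 'IOMMU group %s:\n' "${g##*/}"
        for d in "$g"/devices/*; do
            [ -e "$d" ] || continue
            printf '  %s\n' "${d##*/}"       # PCI address, e.g. 0000:01:00.0
        done
    done
}

list_iommu_groups
```

Cross-reference the printed PCI addresses with `lspci` output to see which slot holds which card.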
  8. My Asus Prime X370 Pro got this, but the B350 has it too: just use the ACS override kernel patch with the multifunction flag. You can google that or search this forum; it's explained a few times here.
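On unRAID the ACS override is a kernel parameter appended to the boot line in /boot/syslinux/syslinux.cfg (editable under Main > Flash > Syslinux Configuration). A sketch of the relevant label stanza; the surrounding lines may differ on your install:

```
label unRAID OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream,multifunction initrd=/bzroot
```

`multifunction` is the flag mentioned above; it additionally splits multifunction devices into separate groups. Note that ACS override weakens the isolation guarantees the groups normally represent.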
  9. Now I have, thanks for the clue. I didn't even know this was possible, and it's under account settings.
  10. There is no system in your sig, can you post it again please? @shooga Can you please post your system too?
  11. Oh my god, I couldn't accept RMAing all the parts, so I stuck with it and discovered that the mainboard is really picky about the PSU. The third one works. I'm just installing the new rig now and I feel like I'm in heaven.
  12. Thanks pwm and s.Oliver. Since both of you asked what my scenario and end goal are, I think I have to explain a little. (Just to be clear: I've been using unRAID for over a year and I know a fair bit about its usage and features, but I wouldn't consider myself a pro.) (I did use unRAID on my gaming rig as a gaming VM in a NAS approach a few months ago, again with the WD disks, but it really frustrated me to have massive I/O and audio issues all the time, even after paying the premium for an X370 board over a B350 for the better architecture of ASUS boards and better IOMMU groups.) For now it's just performance evaluation, but the long-term goal is to replace my rented, pretty powerful dedicated root server, which costs about 100€ a month and on which I do everything: webhosting, email, TeamSpeak 3, IPFire, Docker, game server hosting for a multitude of games, evaluating bleeding-edge stuff and a few things more, all done mainly with Proxmox because of the free choice between VMs, OpenVZ containers, or Docker containers if I want. Later that 100€ root server should go; the critical stuff (web + mail, maybe TeamSpeak) should be kept on a 20-30€-per-month server, and everything else I want to self-host from my basement. My internet connection is fast and good enough for that (as I said, that stuff is not critical, so downtime is okay if it happens). As mentioned, my old hardware for my old unRAID NAS is a dual Xeon L5420 (4c/4t) with 24GB ECC DDR1 RAM, plus a few disks bought used. That's why I want to go for RAID 6: because of the multitude of different disk lifetimes and the possibility of one dying any minute. The disks are 2x 1TB WD Caviar Black (7.2k rpm) with about 5 years of power-on time, and 1x 2TB WD Green (5.4k rpm), 5 years too I think; both of those were bought new and are my own. Then I picked up 5x 1TB Samsung Spinpoint SATA2 (7.2k rpm) for about 100€ used on eBay, with no clue about the SMART data of any of them, of which one just yesterday hiccupped again, which basically means it's dead for real usage and not reliable anymore.
Again, my old system can just about handle the speeds the disks require; the two PCIe lanes are only the v1 standard, and I'm using the x4 connector because a chipset heatspreader blocks the x8 port. All 3 WDs support SATA3 on paper, but let's be real: the WD Green SATA3 is basically a 5400rpm disk, which is about as fast as a 7200rpm SATA2 Samsung Spinpoint averaging about 90MB/s. When the new hardware comes back from checking/RMA, I'll have a low-power Xeon ( https://ark.intel.com/de/products/75270/Intel-Xeon-Processor-E5-2650L-v2-25M-Cache-1_70-GHz ) (10c/20t) with 96GB of DDR3 ECC RDIMM (rank 4) and a bigger mainboard that can itself handle the 8 disks, so I don't have to use the HW RAID controller, which only does JBOD. When that happens I plan to future-proof more, maybe buying new disks so they don't bottleneck the CPU and RAM. My end goal is to retire the root server, BUT also to educate myself on nested virtualization. I plan to virtualize ESXi and unRAID and play around with infrastructure automation, IaaS deployments, or maybe XaaS stuff, which basically gives me the ability to create and run 25 VMs with one button click. That will be bottlenecked if only 1 disk is serving it, but the RAID 5 or RAID 6 approach gives me, with 7 disks, the read speeds of a medium-priced SSD. I know RAID 0 could give me the writes too, but if another Samsung fails everything is gone again, so I'm sticking with the "better safe than sorry" approach here, because that happened yesterday for the first time and I had to reinstall everything. Also, you may ask: why not PVE and unRAID in ESXi, OR PVE and ESXi in unRAID, OR, as I now plan, unRAID and ESXi in PVE? I'm really just trying things out right now, and to be honest I don't even know why I didn't consider PVE as the host at first and only ESXi or unRAID, because I've been using PVE for over 7 years now, I think. If you wanted to say I'm knowledgeable in one of these hypervisors, it's definitely PVE, on a way bigger scale.
But I also know the hiccups of all of them. PVE has basically none, except for setting up nested VT yourself, which is really, really easy: about 3 commands and a reboot. ESXi (I'm new to it and just trying it out, because I'll be using it at work in the future) needs a good HW RAID controller and compatible hardware or you will never see a newer version of the software, and the cool stuff like IaaS or XaaS costs a lot of money. unRAID is really cool as a NAS, with a custom SW RAID function that was never based on a HW RAID controller, and it gives you a lot of features, but it's not made for clustering, HA, or live migration of VMs/containers, which the others do; again, it's a NAS. To be honest, I'm not using HA, a cluster or live migration, but I do back up containers via snapshots every night, which PVE masters perfectly. Also, everything I plan to move here later is already in that format, so no migration process is needed, though that wouldn't be a dealbreaker if it were. I really hope you guys now understand a little better why I'd love to have PVE but also unRAID. (ESXi, meh; I'm new to it, it costs a lot of money and it's really freaking picky, but it's where the cool enterprise stuff is, IaaS/XaaS VM deployments and such. Don't consider me knowledgeable here, I only know the basics, maybe.) But to be honest, from my POV, unRAID has always been a freaking good NAS with really cool extras that seeks a rival, while PVE is a free, basic ESXi alternative that delivers no NAS function but supports many container formats, just not Docker. All I know is: I want a basic, robust system, virtualize the rest, and evaluate 1000 more things in the future. That's why it's so hard to decide the order in which to do things. Is speed more important than power efficiency? Where is the bottleneck in each approach? Can you live with that bottleneck? Does nested hypervisor B or C even work with host system A?
Does the deeper-nested system X even work with host or guest systems A and B? (Again: a robust base.) Where do I store my classic NAS "private" stuff? What does a backup strategy look like? (RAID, even unRAID, isn't a backup, as you know.) I'm sure I would come up with many more questions the more I type here. So please feel free to share your feedback, your knowledge, your experience and maybe even your own plan for doing this, because there is more than meets the eye, and more eyes see more than just one pretty confused and sad (RMAing my new server) person.
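For the record, the "3 commands and a reboot" for nested VT on a PVE host look roughly like this. This is a sketch for an Intel host; on AMD the module is kvm-amd with `nested=1`, and exact paths may vary by PVE version:

```
# Enable nested virtualization for the kvm-intel module
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel && modprobe kvm_intel   # or just reboot
cat /sys/module/kvm_intel/parameters/nested   # should now report Y (or 1)
```

The nested guest's VM also needs a CPU type that exposes the VT flags, e.g. setting the CPU to "host" in the PVE VM options.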
  13. Well, my newly ordered hardware won't run, so sadly I have to go through RMA and all that. I'm now using my old dual Xeon (4c/4t each) with 24GB DDR1 ECC RAM, which is enough for playing around. BUT on the other hand I have 8 disks (7x 1TB + 1x 2TB) connected to a HW RAID controller (Adaptec ASR-5805, I think), which at least lets me use all the disks, since my mainboard doesn't have enough ports. As you said, on one hand it's "un"+"raid", BUT on the other hand it isn't a RAID 6 with 600% read speeds, which is why for first tests I'm going with Debian 9.5 and PVE and virtualizing all the things, including unRAID. Or do you see another way? Because I'm not really keen on doing a SW RAID via the Debian installer and just using my HW RAID card as JBOD, EXCEPT if there is a really good and solid benefit. Can you think of one? Any suggestions / feedback for me? This is a PoC for me, I'm trying it out right now, so please feel free to go crazy.
  14. As far as I can read, you guys never got into the details of using RAID 0/1/5/6/10 with your disks; all I can see is that some people tried to pass through some SATA controllers. Did you use a certain 300GB disk from PVE for unRAID, or did all of you pass a controller or specific disks through? What about the RAID level on PVE? Can you share some details/insight on this? I really want to follow this path of using PVE as the base; I've loved it for years. But I also want to use unRAID as the NAS for my home, and ESXi, inside PVE, to evaluate and play around with. Any tips or insight on this?
  15. I need a 902 cage with LED/fans and a dust filter, like this: https://www.google.de/search?q=NINE/TWELVE+HUNDRED+HDD+CAGE+WITH+FAN+AND+FILTER&safe=active&tbm=isch&source=iu&ictx=1&fir=bJa_vaVYpsPRsM%3A%2CUvKfpXKTC2y1qM%2C_&usg=AFrqEzcSzUFPBp3IhxKepkaWsf8Rc1LwpQ&sa=X&ved=2ahUKEwi01KSKytXcAhVMbFAKHbuHBvIQ9QEwAXoECAEQBA#imgrc=bJa_vaVYpsPRsM: Anyone up to the task? Antec itself isn't, though. I'm living in the EU/Germany.