mavrrick

Members
  • Posts: 62
  • Joined
  • Last visited

mavrrick's Achievements

Rookie (2/14)

26 Reputation

  1. That is only if the adapter is actually splitting the lanes. It looks like it is simply a USB 3.0 adapter, so if it is a USB 3.2 Gen 2x2 (or whatever the right standard is) it could provide the bandwidth needed for 3 PCIe 3.0 x1 slots. Rough numbers are below.
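For a rough sense of how the raw numbers compare, here is back-of-the-envelope line-rate math only; real adapters lose more to protocol and controller overhead:

```python
# Rough line-rate math only; real adapters lose more to protocol/controller overhead.
pcie3_lane_gbps = 8 * 128 / 130          # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
three_x1_slots_gbps = 3 * pcie3_lane_gbps

usb32_gen2x2_gbps = 20 * 128 / 132       # USB 3.2 Gen 2x2: 20 Gb/s raw, 128b/132b encoding

print(f"3x PCIe 3.0 x1 : {three_x1_slots_gbps:.1f} Gb/s (~{three_x1_slots_gbps / 8:.2f} GB/s)")
print(f"USB 3.2 Gen 2x2: {usb32_gen2x2_gbps:.1f} Gb/s (~{usb32_gen2x2_gbps / 8:.2f} GB/s)")
```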
  2. Though the processors are documented with a max of 32GB, 48GB modules have been pretty well documented as working on the CWWK N100 & N300 units on the Serve the Home forums. I suspect the limit is based mostly on the max size of DDR5 SO-DIMM expected to be available rather than what the chip will actually support. It was announced not long ago that Micron is going to start making 64GB SO-DIMMs soon, so we may see that number go up even more. Keep in mind that to turn a PCIe 3.0 x1 slot into multiple M.2 drives you need a carrier board that also has a PCIe switch, which most M.2 carrier boards do not have.
  3. Something @Miss_Sissy started to touch on is that different tactics are used for different kinds of data protection. Here are a few of the concepts.

     RAID is generally thought of as an uptime tool, since it protects against a drive failure. The idea is that, depending on the RAID level/tech, you can survive a certain number of drive losses. Under the right circumstances it can also boost performance beyond what a single drive is capable of. RAID comes in two flavors: hardware RAID and software RAID. The industry is in many ways moving away from hardware RAID, as it generally isn't much of a benefit once you get into high-speed SSDs and even more performant NVMe drives. There are some great software RAID solutions that also integrate logical volume management, like ZFS, which is a filesystem and RAID technology combined. ZFS provides many advanced features that hardware RAID cannot, like snapshots and protection against bitrot, that are hard to match with a hardware RAID solution.

     A physical backup, meaning something like writing all your data to tape, a drive, or some other medium, is generally thought of as a disaster recovery option. Think of it as what would save you if your server closet were destroyed by a natural disaster, fire, flood, or some other accident in your office that rendered the entire stack of gear useless. This is also why RAID by itself, in its purest form, is not a backup.

     Snapshots are something that has grown more and more over the last 15 years or so. Think of a snapshot like taking a picture of your data at a given time, but keeping it near the server so you could drop it back in place at a moment's notice if you needed to. These are clearly not backups per se, but they let you return your data to a previous state, either as a near in-place rollback or side by side on the same host. This can be great when that overzealous employee accidentally deletes a folder, a few files, or a share. Snapshots can also help with recovering from a ransomware attack: say an employee clicked that bad link and suddenly their computer started to encrypt everything on the shares they have access to. You could simply roll back to the most recent snapshot and recover with very little struggle. There is a rough sketch of this idea just below.

     Lastly there is replication. Replication is great for getting your data onto multiple systems or into multiple locations fairly easily, and it can even give you a hot-spare location so that if your main site had a catastrophic event you could fail your activity over to the secondary system. But you always have to remember that replication doesn't protect the data in the files themselves; it just means the data is the same in two places. So if garbage lands in the main location, it is just going to be sent to the second. I am sure there are scenarios I didn't discuss, but I just wanted to throw this out there to show there are many different approaches to backing up and protecting your data.
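Here is a rough sketch of the snapshot-and-rollback idea on ZFS, just to make it concrete. The dataset name tank/shares is made up, and this assumes the zfs tools are installed and it is run as root:

```python
#!/usr/bin/env python3
"""Rough sketch only: take a dated ZFS snapshot and roll back to the newest one.
Assumes a hypothetical dataset named tank/shares and root privileges."""
import subprocess
from datetime import date

DATASET = "tank/shares"  # made-up dataset name, substitute your own

def take_snapshot() -> str:
    # Creates e.g. tank/shares@daily-2024-05-01
    name = f"{DATASET}@daily-{date.today().isoformat()}"
    subprocess.run(["zfs", "snapshot", name], check=True)
    return name

def newest_snapshot() -> str | None:
    # List this dataset's snapshots oldest-first and take the last one.
    out = subprocess.run(
        ["zfs", "list", "-t", "snapshot", "-H", "-o", "name", "-s", "creation", "-r", DATASET],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    return out[-1] if out else None

def rollback_to_newest() -> None:
    # Rolling back to the most recent snapshot undoes everything since it was taken.
    snap = newest_snapshot()
    if snap:
        subprocess.run(["zfs", "rollback", snap], check=True)

if __name__ == "__main__":
    print("Created", take_snapshot())
```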
  4. I think there are a few very good points made so far in the posts. There is a difference between trying to maintain uptime and trying to protect the data. Both are generally wanted in a business environment, and both can cause significant issues if not addressed. RAID has a lot of flavors now and is mainly used as a way to ensure uptime. Backups can be accomplished using some kind of replication, or a physical device like a removable hard drive, tape backups, or even cloud backups (though that has potential security concerns). I tend to agree with the statement above that using Unraid for a business could be problematic. Generally speaking, if you want a 24/7 server that gets top-tier support, I don't think Unraid is the way to go, simply because it seems oriented toward prosumers in home use. There are a lot of ways to address this. You could literally take three old computers with two spinning drives each, run RAID 1 between the drives, replicate between two of them in the office, and lastly back up to tape or a remote computer at a different office. That is pushing the cheap option to its limit. There is a lot more that needs to be answered to really give detailed advice, though: budget, whether you have multiple offices, the appetite for cloud backup, and the performance level needed. If you are trying to follow best practices, you probably want some kind of RAID for all storage along with at least 3 hosts spread between two physical locations.
  5. I think it really comes down to whether you want the benefits of the ZFS file system like compression, dedup, redundancy, bitrot protection, etc. ZFS brings some cool features, but also some interesting challenges.

     Performance-wise you probably wouldn't see any improvement with the NVMe drives in a RAIDZ array, but this is the case with most RAID solutions and NVMe. You would likely see it with spinning disks, but it is unlikely with NVMe since their performance is just so much faster than what the standard install of ZFS is optimized for. That doesn't mean it will be horrible, just not as fast as each drive can run alone. I have 3 Crucial P3 Plus 4TB drives in a ZFS RAIDZ1 pool and they seem to fluctuate between 800MB and 2400MB per second. That box, though, is a much weaker setup than what you listed above; it is a CWWK mini PC which can only use NVMe drives, so I am kind of stuck with that setup.

     One of the best things about this setup is the ZFS snapshots and on-disk compression. The fact that I can back up all of the docker containers in seconds and then replicate it if I want is nice. It is also nice compared to the older CA Appdata Backup plugin, which creates full zip files of the appdata for each day the backup runs; that adds up quickly. The snapshots only keep delta data, so they have a much smaller footprint, which lets me keep daily, weekly, and monthly snapshots. Disk compression for non-Plex data can also add up and make a good difference depending on how much of it you have.

     ZFS by default will use only 1/8th of your system memory on Unraid, but if you use some of the more advanced features of the file system it can start to get memory hungry, so keep that in mind as well (rough numbers below). There is also the complication of ZFS expansion. Right now you can't just add a drive if you want to change things; you have to add a vdev with the same basic layout as the previous disk setup in the pool, i.e. if you set up 3 drives in a RAIDZ1 array, to expand the pool you would need to add another RAIDZ1 array to it.

     So to summarize, it is a trade-off: use ZFS and get a bit less IO but fault tolerance, or skip fault tolerance and get the IO. In my case it is a no-brainer, but your situation is a bit different since you also have the 3 spinning disks.
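For a sense of scale on my box, the quick arithmetic looks like this (round figures, not measurements):

```python
# Quick arithmetic for a 3-drive RAIDZ1 pool and Unraid's default ZFS ARC cap of 1/8 of RAM.
drives, drive_tb = 3, 4      # three 4TB NVMe drives
ram_gb = 32                  # system memory in the box

raidz1_usable_tb = (drives - 1) * drive_tb   # roughly one drive's worth goes to parity
arc_cap_gb = ram_gb / 8                      # default ARC ceiling on Unraid

print(f"raidz1 usable: ~{raidz1_usable_tb} TB of {drives * drive_tb} TB raw")
print(f"default ARC cap: {arc_cap_gb:.0f} GB of {ram_gb} GB RAM")
```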
  6. I think it is all about understanding the tradeoffs. The problem with using them in an Unraid array is primarily the loss of trim, though the performance impact of the Unraid parity method will likely be offset by the raw speed of those NVMe drives. Trim really only matters for writes: when the drive has not been trimmed and needs to overwrite a block, what could be a one-step process becomes a many-step process, which can hurt performance. How much that matters on a drive that can do over 7GB/s is something I don't know; it was significant when I got my first SATA SSDs, but I'm not sure now. That said, trim on flash storage is a best practice, as it will improve write performance.

     As far as memory usage goes, remember that by default Unraid assigns 1/8th of your memory to the ZFS pool for cache. So it's not insignificant, but not hugely impactful either. The basic setup is fairly memory lean, and ZFS memory needs shouldn't be too bad until you start enabling advanced features like dedup; that is when it really goes nuts. My ZFS pool gets 4GB of memory since the box has 32GB of RAM.

     Where ZFS really shines is the advanced features like snapshots, compression, and aggregated performance. Unless almost all of your content is already highly compressed video files, you will likely see a good return on compression and gain space. ZFS with raidz also lets the box run a kind of RAID that can give you improved performance over a single drive; this is questionable with an all-NVMe setup but can be huge with spinning disks. Lastly, snapshots are generally a much nicer way to save data than creating backup zip files of individual sets of data. I used to use the backup plugin to save my docker containers on a weekly basis; it took about 150GB each week and just put the data in zip files. Now that I have the dockers on ZFS I use snapshots on a daily basis to back them up, and the snapshots are generally a few hundred MB of data each instead. That is a good bit of savings, and I now keep a much wider set of snapshots as well (there is a quick way to check these numbers below). There are other good features of ZFS that make it worth using, like its resistance to bitrot. Since all of your drives are the same size they would be perfect to use with ZFS, but it is clearly not for everyone.
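If you want to see what you would actually get back, ZFS reports both numbers directly. A quick sketch; the dataset name cache/appdata is just an example:

```python
#!/usr/bin/env python3
"""Sketch: show the compression ratio and per-snapshot delta size for one dataset.
The dataset name cache/appdata is an example, not a real path on anyone's box."""
import subprocess

DATASET = "cache/appdata"  # example dataset name

def zfs(*args: str) -> str:
    return subprocess.run(["zfs", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

# compressratio is what ZFS says compression has saved so far, e.g. "1.45x".
print(DATASET, "compressratio:", zfs("get", "-H", "-o", "value", "compressratio", DATASET))

# USED on a snapshot is roughly the delta it pins, not a full copy of the data.
print(zfs("list", "-t", "snapshot", "-r", DATASET, "-o", "name,used,creation"))
```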
  7. Well, AMD Ryzen laptop chips are great, and that board has some really nice options as well. The chip will likely pull between 35-55 watts plus whatever else is running on the main board. I believe the Radeon 780M is RDNA 3 graphics, similar in class to handhelds like the Steam Deck, so it will be decent for non-demanding games. It will likely have a configurable TDP in the BIOS, so if you wanted to run it at lower power you could; just remember that mainly limits max clocks and, depending on load, may not really improve things overall. I would strongly suggest not setting the TDP below 10 watts, and I probably wouldn't go below 15.
  8. Well, I just got the 3rd Crucial P3 Plus drive, and when adding it I took your advice; so far the results have been fairly good. The current setup is the CWWK 4x 2.5GbE mini PC with an Intel N305 processor and 32GB of RAM.

     Storage configuration:
     Array (3TB, no parity): Disk 1 = WD SN560E 2TB (shucked), Disk 2 = WD SN770 1TB
     ZFS pool (RAIDZ1): 3x 4TB P3 Plus

     Using zpool iostat (sketch below) I have observed the speed of the drives reach around 2.4-2.5GB/s, which is really good when you consider all of the drives are limited to PCIe 3.0 x1. Considering the fastest network connection is 2.5Gbps, that is plenty of throughput. The only real issue is that when the array is under load and transferring about as fast as it can, the CPU usage jumps and can even max the CPU out. That will probably never be too much of an issue unless doing local-only tasks. Hopefully I can add another P3 Plus drive before too long and test this again, though I don't expect much if any more throughput since I think I am starting to become CPU bound.
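For anyone curious, this is roughly how I watch those numbers; the pool name "nvme" is a placeholder for whatever your pool is called:

```python
#!/usr/bin/env python3
"""Sketch: stream zpool iostat and print pool read/write throughput every 5 seconds.
Assumes a pool named "nvme" (placeholder) and the standard OpenZFS tools."""
import subprocess

POOL = "nvme"     # placeholder pool name
INTERVAL = "5"    # seconds between samples

# -H gives script-friendly rows, -p gives exact byte counts instead of "1.2G".
proc = subprocess.Popen(["zpool", "iostat", "-Hp", POOL, INTERVAL],
                        stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    # Fields: name, alloc, free, read ops, write ops, read bytes/s, write bytes/s
    fields = line.split()
    if len(fields) >= 7 and fields[0] == POOL:
        read_mb, write_mb = int(fields[5]) / 1e6, int(fields[6]) / 1e6
        print(f"read {read_mb:8.1f} MB/s   write {write_mb:8.1f} MB/s")
```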
  9. I wanted to post an update about power usage with my CWWK unit. As I mentioned earlier, my unit has the NVMe x4 board that splits the unit's one PCIe 3.0 x4 M.2 slot into 4 PCIe 3.0 x1 M.2 slots. Initially I was commenting about how the power didn't seem to fluctuate much when adding the drives, but there is a problem with simply doing that: drives in the array can't use trim, and as such any flash-based drive would suffer significantly over time. Because of that, among other things, I was using the drives without parity before yesterday.

     On Thursday I picked up a third 4TB Crucial P3 Plus drive. With that most recent purchase all 5 M.2 slots are filled, and per a suggestion on this forum I decided to convert the 3 Crucial P3 Plus 4TB drives to a ZFS pool so that I could leverage trim on the drives. Of course all the advanced features included in ZFS help too. ZFS has actually worked fairly well, but it has come at a cost: its CPU needs, somewhat as expected, are not exactly minor.

     The good news is that when doing transfers internally from the other two array drives to the ZFS pool I was seeing speeds up to 2.5GB/s, which is not bad for 3 drives that are each limited to around 800-900 MB/s because they run at PCIe 3.0 x1. I tried turning compression on and off and that didn't make a difference to throughput, and I also adjusted the memory allocated to ZFS from 4GB to 10GB with no difference. I think the performance limits are currently based on the PCIe 3.0 x1 bus and the CPU.

     The bad news is that it drove CPU usage way up, and that in turn drove power usage up while doing transfer-intensive tasks. A low-power solution with these mini PCs/boards may be better served with spinning rust instead of a set of NVMe drives; at least then you wouldn't need trim to maintain performance, which kind of makes ZFS a requirement. It does maintain fairly low power usage when the drives are not very busy and are just handling regular system tasks. At most, with large transfers happening between the 2 array drives and the ZFS pool (all 5 drives active), my unit was hitting around 45 watts. Previously, with all the drives in the array and no ZFS, the power draw when doing continuous transfers would stay at most around 20 watts. So just some food for thought about using NVMe drives for main storage. I think this will put a big kink in the Lincstation N1.
  10. This suggestion was to get a better understanding of why someone may want a low-power CPU. Though there is a lot of garbage on YouTube, there are some decently knowledgeable folks out there who can discuss use cases. Benchmarks have their uses as well, but it seems to me like part of this discussion is about why someone would want a lower-power system vs a mobile or desktop based solution. It isn't an assumption. Think about why virtualization and docker came about (more so virtual machines via VMware): it is because our systems have a lot of cycles where they are not doing anything. Those technologies partially exist to help better utilize the bare metal they reside on. There are a lot of other benefits now, but the concept was born from that. No, because I wasn't saying you were wrong in any way. I was calling out an extreme case that may have allowed Intel to have an edge and be something someone may want to consider. Of course not. That is an example. I have no personal favor for Intel. I have used AMD predominantly over the years, and I can only recall two Intel boxes that I have owned over the last 25 years prior to the N305 I have now. I try to use what best suits my needs. Since Intel doesn't have some of the issues with virtualization that AMD Ryzen does/is having, it pushed me to pick it up.
  11. Real world, meaning actually using it for Unraid, or as a Proxmox lab server, or with CasaOS with containers, or as a desktop for real people's use. Heck, just do a YouTube search and you will find folks discussing how the N100 is a great chip for a cheap, low-powered system. You don't need me to validate that it works well for others in that use case; there are plenty of examples out there. Every single one of those videos will also talk about curbing your expectations, since it is designed to be such a low-powered system.

      Your server/computer will spend most of its life at or near idle. Even home servers are almost always underutilized. I upgraded my big home server about 2 years ago from an i5 2400 to a Ryzen 5950X because I thought I would need a more robust home lab setup as my job was changing. That kind of materialized, and then virtualization issues with Unraid and AMD Ryzen completely borked it. So now I have a 5950X chugging away at 90-100 watts all the time while doing nothing most of the time. That exact same load can be handled by the N305 I have at around 17 watts (quick numbers on that below). This is the whole core of how this thread was started and why there are topics on here about things like powertop to reduce idle load.

      Yeah, I don't understand why it is being debated either. I have said repeatedly that the Intel chips don't match the Ryzen chip; I haven't been debating that. My only point was that in their designed power envelope of 15 (N305) or 6 (N100) watts they are somewhat competitive. The N100 does better if it sticks to its TDP. The N305 potentially has a chance as long as its power draw stays down as well. If these are being clocked over 10 watts the Ryzen chip has an advantage, as it seems, based on at least the Geekbench benchmark, that is when the CPUs have enough power to flex a bit. I also think it is largely based on other characteristics of the whole SoC; the CPU is certainly a large part of it, but there are other areas that were cut back, like the RAM I mentioned.

      One thing that I admitted in my previous post is that I also see why you are talking about the N100 having its TDP increased by default to 20, 30, or 40 watts. I did a quick search for reviews of the N100 and found one saying that the Beelink X12, I believe, was set in the BIOS to use a PL1 of 20 and a PL2 of 25. Besides the fact that doesn't make much sense, if that is how it is shipping they are clearly pushing the power envelope to over 4 times the default. That should give higher clocks across the board, which would increase multithreaded performance, but as we have validated already should do jack for single core. That change doesn't do anything to help single-core performance and takes away the low-power benefit. At that point it is all about taking a very cheap CPU and providing a cheap package with higher clocks. Then you have to ask if the price difference is worth it: that Beelink is around $150; yes it is slower, but what is the cost of the Ryzen chip? I think this is largely a driver as well. That said, I certainly wouldn't advocate for this.

      So to sum it up, I am not saying you are wrong about the Ryzen chip at all. I was just talking positively about the Intel chips when used in spec. It is a little disappointing to see the N100 being set up that way, though not really surprising after thinking about it. My N305 actually came to me with the package power configured down to 10 watts under load.
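To put quick numbers on why that idle draw matters (draw treated as constant, and the electricity rate is just a placeholder):

```python
# Back-of-envelope idle-power cost from the figures above; the rate is a placeholder.
hours_per_year = 24 * 365
price_per_kwh = 0.15   # placeholder, adjust for your region

for label, watts in [("Ryzen 5950X box", 95), ("N305 box", 17)]:
    kwh = watts * hours_per_year / 1000
    print(f"{label}: ~{kwh:.0f} kWh/yr (~${kwh * price_per_kwh:.0f}/yr at $0.15/kWh)")
```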
  12. I would suggest you stop looking at benchmarks and take a little bit of time to look for users with real-world experience with these chips and how they compare them. I understand how you have come to your conclusion. We are taking different approaches to how these chips are viewed, so it is clear we won't agree. I also don't suggest comparing products from different market segments. Most of the posts in this thread before our back and forth were about how to reduce power usage, since so much of it wasn't even the CPU. There was also an admission very early on that even desktop CPUs can reach very low power states now. It isn't just about the CPU; it is the whole package. I hope some of the motherboard options we provided earlier will help you. Here is another board with an interesting setup that may work well for a NAS and fit your idea of a good setup: https://store.minisforum.com/products/minisforum-ad650i I have found references to N100s being pushed way outside of their TDP spec by integrators. That is unfortunate if it gives folks the wrong impression. The reason for it is simple: you can get an N100 mini PC for far less than a Ryzen-based system. Then it becomes a situation of you get what you pay for.
  13. I did some further testing and think I found something interesting. I had my N305 mini PC run several different iterations of Geekbench 6 with different numbers of cores and different settings for the PL1 and PL2 package power limits. My goal was to isolate the performance of the cores at different power levels, but also to test different TDPs with different quantities of cores enabled. One of the odd things is that with Geekbench the single-core number was always around 1340-1364 no matter what I did. On one of my runs I had the system reduce the active cores to 2 and forced a 35 watt TDP the entire time, which gave a clock speed of 3.8GHz for the test. Whether I ran a test with all 8 cores at a 20 watt PL2 package power (5 over the default of 15) or 2 cores at 35 watt package power, it would never break that value. That just doesn't make any sense: going from 1.8 to 3.4GHz and getting the same score. I did a few Google searches and it seems that Geekbench may be sensitive to the amount of memory and/or its bandwidth. That is somewhere the N100 and N305 will suffer, since for system power reasons they are limited to a single slot and single-channel memory; it also doesn't help that it seems to be capped at DDR5-4800 (rough ceiling numbers below). I don't think that impacts package power usage, but it would impact the score. I found a few references to Geekbench having limited or lower scores due to memory bandwidth, so this could play a huge role in why the score isn't as good as mobile or other low-power chips. If that is the case, it would certainly highlight the point of how benchmarks are synthetic and may not represent real-world use. It also shows how, under the wrong circumstances, the N100/N300/N305 could really suffer badly.
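The memory-bandwidth ceiling is easy to put a number on (theoretical peak; real throughput will be lower):

```python
# Theoretical peak bandwidth of DDR5-4800 over one 64-bit channel vs two.
mt_per_s = 4800            # transfers per second (millions)
bytes_per_transfer = 8     # 64-bit channel

single_channel_gbs = mt_per_s * bytes_per_transfer / 1000
dual_channel_gbs = 2 * single_channel_gbs

print(f"single channel: {single_channel_gbs:.1f} GB/s  (what the N100/N305 boards get)")
print(f"dual channel  : {dual_channel_gbs:.1f} GB/s  (what many mobile Ryzen boards run)")
```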
  14. I would have to ask for evidence of this. I think most folks probably wouldn't go in and change the package power levels up or down from the defaults; when you are buying what you expect to be a low-powered solution, why would you change that? I showed that my N305, which should struggle more to achieve decent scores at 6 watts, was able to score over 1000 in Geekbench single-core tests at 6 watts. I set both PL1 and PL2 to 6 watts so it couldn't even boost using more power in the benchmark that I provided. When I looked up N100 benchmarks the score was around 1200. At that low wattage they do seem to be the more performant option. That said, it doesn't take long for that to change, but then again this thread was always about trying to minimize power use with the least power-hungry chip. When you consider that Ryzen doesn't have a 6 watt chip currently, the N100 is the better option in that category. The N305 is a grey area: it probably wins if the power is kept low, but once the power draw goes much above the N100's it will start to struggle. It is probably a decent middle ground between the more conservative N100 and the other mobile chips, like the Ryzen 7840 you linked, that prefer 10 watts; so it bridges the gap. I'm not sure how I am disagreeing with you on that. I agreed that as the power draw goes up there is an inflection point where the Ryzen chip, with TDP forced down, will be more performant than the N305. The N100 isn't even part of that discussion since it can't sustain a power usage over 6 watts for more than 30 seconds anyway. However we look at it, we have shifted from the topic of this thread, so I'll leave it there.
  15. I think the important thing here is that TDP is a guideline, and what a processor can do at a given wattage may not translate up or down. When getting to such ultra-low power usage, some things get very hard to quantify. It looks to me like in the 6-10 watt range the N series will be more performant than, or fairly close to, the Ryzen you mentioned, as that is what it was intended for, but the Ryzen will smoke them as soon as they start to approach their limit. If someone is using an N100 or N305 and it starts to run hot, that person would likely see benefit from the Ryzen or simply a higher-TDP processor with better single-core speeds. At ultra-low power, under 10 watts, though, the N series can likely hold its own against the Ryzen or others. The other thing here is that so much of the power usage of a system is not just the CPU; there are so many other things that affect it.