Everything posted by Yros

  1. Hello, I've been wanting for a while now to build a full AMD system (something along the lines of a Ryzen 7900X3D with a 7900 XTX), but concerns about idle power and stability have kept me from actually making the move to Unraid.

Regarding idle power, it's nothing new that Intel has a clear advantage here (though I did read that downclocking and lowering the SoC voltage helps reduce AMD's idle consumption). I'm also wondering whether it's possible to fully power off the discrete GPU while the VM is off to reduce consumption further, since I'd be using the AMD iGPU for Unraid and the dGPU only for the VM; see the sketch just below for the kind of thing I mean.

As for stability, I still have concerns about AMD C-states and the old GPU reset bug, both of which have a history of causing trouble on Linux-based distributions (Unraid included). Ideally I'd like to keep idle power below 50 W with the drives spun down. If that's really not possible, I guess I'll have no choice but to go for an Intel 13900K + Nvidia GPU build (though I really hate the idea of paying for an overpriced Nvidia GPU T.T).

Please feel free to share your experience, especially if you run an AMD build on a recent Unraid release. This represents a substantial financial investment for me, since I'm considering spending about 5k€ on this build between the components and the hard drives, so obviously I'd like to make sure I'm not going to ruin myself here :< Thanks in advance, any help is appreciated.
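For the GPU power-off question, this is the kind of experiment I had in mind: a minimal sketch that soft-removes the card from the PCI bus while no VM is using it, and rescans to bring it back before starting the VM. The 0000:0b:00.x addresses are placeholders, so check lspci on the actual build first.

```bash
#!/bin/bash
# Sketch only: drop the discrete GPU (and its HDMI audio function) off the
# PCI bus while the VM is down, then rescan to restore it before VM start.
# The addresses below are placeholders, not values from a real build.
GPU=0000:0b:00.0
GPU_AUDIO=0000:0b:00.1

case "$1" in
    off)
        echo 1 > "/sys/bus/pci/devices/$GPU_AUDIO/remove"
        echo 1 > "/sys/bus/pci/devices/$GPU/remove"
        ;;
    on)
        # A bus rescan re-enumerates the removed functions.
        echo 1 > /sys/bus/pci/rescan
        ;;
    *)
        echo "usage: $0 on|off" >&2
        ;;
esac
```

Whether the card actually drops to near-zero draw after the remove seems to depend on the board and on runtime power management, so I'd treat this as something to measure at the wall rather than a guarantee.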
  2. For monitoring purposes, I found these links while scouring the net trying to better understand ZFS:

https://calomel.org/zfs_health_check_script.html
https://jrs-s.net/2019/06/04/continuously-updated-iostat/

Bonus tips & tweaks: https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/

Someone also made a script in this thread to get mail notifications, and steini84 also offers the ZnapZend plugin for further monitoring.
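The health-check idea from the first link boils down to something like this on Unraid (a bare sketch; the notify helper path is where the webGui normally keeps it, but verify it on your version):

```bash
#!/bin/bash
# Sketch of a cron-able ZFS health check that raises an Unraid notification
# when any pool is degraded. The notify script path is an assumption based
# on where the webGui usually ships it.
NOTIFY=/usr/local/emhttp/webGui/scripts/notify

status=$(zpool status -x)
if [ "$status" != "all pools are healthy" ]; then
    "$NOTIFY" -e "ZFS health check" -s "ZFS pool problem detected" \
              -d "$status" -i "alert"
fi
```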
  3. OK, so, change of plans: my previous graphics card turned out to have some hardware issues, so I ditched it in favor of a brand new AMD RX 5700 XT. Now I can finally reach Unraid in GUI mode, but GPU passthrough is still a problem. In System Devices, I get the following IOMMU grouping:

IOMMU group 27: [1002:1478] 09:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Upstream Port of PCI Express Switch (rev c1)
IOMMU group 28: [1002:1479] 0a:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Downstream Port of PCI Express Switch
IOMMU group 29: [1002:731f] 0b:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT] (rev c1)
IOMMU group 30: [1002:ab38] 0b:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 HDMI Audio

In my VM, the devices being passed are group 29 for graphics and group 30 for audio (the last two lines above). When I start a VM, I can reach the boot stage (e.g. the Linux boot menu where you choose whether to start from the ISO or install a system like Manjaro), but the moment I try to go to the next step: black screen. The log spams the following warning:

Jan 9 07:45:06 Yros kernel: vfio-pci 0000:0b:00.0: BAR 0: can't reserve [mem 0xe0000000-0xefffffff 64bit pref]

and sometimes the usual 'device is already in use'. I'm also aware of the AMD reset bug (which has a partial fix in the custom 5.3 Linux kernel available for Unraid 6.8.0-rc5, which I moved onto for that reason), but to no avail. I've also added this to the syslinux configuration:

append vfio-pci.ids=1002:731f,1002:ab38,1002:1478,1002:1479 intel_iommu=on amd_iommu=on

But I think I made two mistakes there. First, I'm on AMD, and IOMMU is already enabled in the BIOS, so I don't think I need either of those flags. Second, I added all four Navi device IDs (including the upstream/downstream bridges), but I'm not sure that was necessary, especially since they're already in their own independent IOMMU groups, so I 'probably' don't need vfio-pci.ids at all? Or maybe I should try pci-stub instead, to prevent Unraid from grabbing the card so it can be passed through? That's a bit problematic though, since I have only one GPU, it's in the primary slot, and I'd like to be able to boot the computer/server in GUI mode to access Unraid and then start a VM with the GPU passed through at any point. (A trimmed-down version of what I think the append line should be is sketched below.)
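For reference, the trimmed append line I'm planning to test next, a sketch only: keeping just the two passed-through functions, dropping the Intel flag on an AMD board, and adding video=efifb:off, which is the workaround usually suggested for the 'BAR 0: can't reserve' error when the passed-through card is also the boot GPU:

```
append vfio-pci.ids=1002:731f,1002:ab38 video=efifb:off initrd=/bzroot
```

The catch, as far as I understand it, is that freeing the framebuffer this way tends to cost you the local console/GUI on that same card, so with a single GPU the two goals may genuinely conflict.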
  4. Yeah, a GUI with monitoring plus a dedicated page with a tutorial and basic commands and tweaks would do just fine, I think. Though I'm not sure allowing direct manipulation of the zpool through the GUI is a good idea; I think leaving that part to the command line is better, as ZFS is quite flexible, meaning it needs precise options on a per-case basis to get the best out of it. (Plus it's a lot easier to build without direct modification.)
  5. There's no absolute benefit to a GUI other than being able to monitor the ZFS pool (in real time?) and maybe some additional perks. Either way it's, in my humble opinion, better to have a separate space for the zpool(s) rather than have them aggregated under the Unassigned Devices plugin and risk some mishap down the line due to an erroneous mount on the Unraid side or something. As for the script, I'll have a look, though I don't promise anything (my programming level is basically first year of IT college... I was more of a design guy at the time), and I'm still trying to figure out why my GPU won't pass through @.@
  6. I also need the Unraid 6.9 version with the 5.4.x Linux kernel, since I'm running an X570 motherboard with a Ryzen 3700X and an AMD RX 5700 XT and passthrough is nigh impossible for some reason... (I made a support topic about it, which I should update, btw). The RX 5700 XT has the very annoying AMD reset bug, but apparently there's a partial fix already out that's available in the custom kernel for Unraid 6.8.0-rc5, which I'm running at the moment (I guess I'm just unlucky, or I probably botched some config somewhere, because it doesn't work...).

ZFS can use partitions. My hope is that, once I set up the zpool, the leftover 2x 2TB of the 6TB drives will be available for the Unraid array (see the partitioning sketch below). If not... meh, I suppose I'll have to make another zpool just for them in RAID1 (a mirror)? I still have some time to iron out the kinks, since the drives are still at the pre-clearing stage right now, so I'll start trying things out tomorrow with ZFS to get the most out of my current config. Problem is: if I use all of my disks in the zpool, how will I start the array T.T Maybe I'll be forced to use another user's trick and just dedicate a spare USB drive to it... ? We'll see.
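The partitioning side of that plan would look roughly like this; a sketch only, with /dev/sdX standing in for one of the 6TB drives, and obviously destructive, so device names need triple-checking:

```bash
#!/bin/bash
# Sketch: split a 6TB drive into a 4TB partition (to match the 4TB disks in
# the RAIDZ2 vdev) and a ~2TB leftover partition for the Unraid array.
# /dev/sdX is a placeholder; --zap-all wipes the existing partition table.
DISK=/dev/sdX

sgdisk --zap-all "$DISK"
sgdisk -n 1:0:+4T -t 1:BF01 "$DISK"   # 4TB, Solaris/ZFS type, for the pool
sgdisk -n 2:0:0   -t 2:8300 "$DISK"   # remainder (~2TB), Linux filesystem
```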
  7. Hi, excellent plugin, which kept me reading non-stop information on the ZFS subject for the past two days. A huge thanks to @steini84 for this port. If I find the motivation I may even lend my meager abilities to try to come up with some GUI plugin '-' (I'd have to refurbish my coding skills a bit, but meh, sounds like a good project). Speaking of the ZFS plugin, it doesn't seem to be updated for the latest kernel (Unraid 6.8.1-rc1).

I'm currently trying to get my GPU to pass through properly in Unraid, but aside from that I have quite a decent (meaning completely insane) setup in mind: building a RAIDZ2 zpool of 8 disks, two of which are still part of my current NAS and will be swapped in after transferring the data first, plus a second zpool of 2x 1TB PCIe 4.0 NVMe drives, because, why not... I'll let you know how it goes @.@

Oh, yes, the fun part: 2 of my 8 disks are actually 6TB (the 6 others are all 4TB), meaning 2x 2TB would go unused in the RAIDZ2. I plan on partitioning those extra 2x 2TB and using them as the default Unraid array (for some backup and stuff). So part of the 6TB drives will be btrfs under the Unraid array in RAID1 style (one parity, one data drive), while the rest will be part of the main zpool in RAIDZ2; the pool layout would be something like the sketch below. Does that make sense? Since the Unraid array will only be used for backup, I highly doubt it will have any performance impact. What's your take on that?
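To make that layout concrete, pool creation would be something along these lines (a sketch with placeholder device names; in practice /dev/disk/by-id/ paths are safer than sdX letters, and the 6TB drives contribute 4TB partitions so every vdev member matches):

```bash
# Sketch: 8-wide RAIDZ2 from six whole 4TB disks plus the 4TB partitions
# carved out of the two 6TB disks. All device names are placeholders.
zpool create -o ashift=12 tank raidz2 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg \
    /dev/sdh1 /dev/sdi1

# Separate mirrored pool for the two 1TB NVMe drives.
zpool create -o ashift=12 fast mirror /dev/nvme0n1 /dev/nvme1n1
```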
  8. Thank you for the video, a very useful guide. I just have one tiny question: is it possible to modify the Unraid config to add an option for booting Windows 10 bare-metal directly from the boot menu, instead of having to switch USB/SSD boot priority in the BIOS? I've seen in another video that you modified it to add a custom Unraid entry with isolated cores; I'd like to know if it's possible to do something similar here (maybe along the lines of the sketch below)?
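For anyone else wondering the same thing: syslinux can chainload another disk's bootloader, so I believe an extra entry in /boot/syslinux/syslinux.cfg along these lines could work. A sketch only: hd1 assumes Windows lives on the second BIOS disk, chain.c32 must be present on the flash drive, and as far as I know this only applies to legacy (non-UEFI) boot:

```
label Windows 10 bare-metal
  com32 chain.c32
  append hd1
```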
  9. After searching for that very answer, I found the 'go' file in the /boot/config directory of the Unraid flash drive. I added that line to it, saved the change, then rebooted, but apparently it did not work. My current understanding is that the chip is too recent (NCT6798D in my case, which is just the successor of the one quoted above) and not supported by the versions of sensors-detect and sensors shipped with the latest Unraid. Useful links I found during my searches:

https://wiki.unraid.net/Setting_up_CPU_and_board_temperature_sensing
https://hwmon.wiki.kernel.org/start (replacing the old lm-sensors.org site)
https://github.com/torvalds/linux/commit/0599682b826ff7bbf9d5804fa37bcef36b0c9404 (the kernel commit related to this specific chip)
https://forum.level1techs.com/t/temperature-system-monitoring-for-ryzen-3000-and-x570-motherboards-in-linux/145548 (someone with the exact same config as me who apparently managed to patch it up)

I'll try to adapt it to Unraid with my little knowledge and see how it goes '-'
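If the adaptation works, my understanding is that the whole fix reduces to a line or two in the go file, something like this sketch. The nct6775 driver is the one that covers this chip family and force_id is a real module parameter, but forcing the NCT6796D ID (0xd420) on an NCT6798D is the workaround I inferred from the level1techs thread, not something I've verified:

```bash
# Appended to /boot/config/go (sketch): load the Super-I/O sensor driver,
# forcing the ID of the older NCT6796D so a kernel that predates NCT6798D
# support will still drive the chip. Verify readings before trusting them.
modprobe nct6775 force_id=0xd420
sensors   # quick check that temperatures now show up
```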
  10. Currently on a trial key. That shouldn't be the reason anyway, since I can add the HDD as a parity drive but it refuses to take it as a normal data drive, which is strange. As for the syslog, I had a doubt, considering the zip file was twice as big as the full diagnostics '-' Thanks anyway for the help ^^
  11. Sorry, the thread was posted before I finished writing it, so I was editing it to provide the related information ^^ As far as Dockers are concerned, this is a brand-new installation, so the only container I have installed at the moment is Krusader.
  12. Hello, I just finished my new build (Ryzen 3700X, 32GB ECC RAM, GTX 1080 Ti and 8 HDDs (2x 6TB and 6x 4TB)) and am now facing three different issues that are preventing me from going further.

1] GPU not working properly: this is the first and most critical issue. When I start the server, I cannot even get into the Unraid GUI; all it displays is a blinking "_" in the top-left corner of the screen. I have no problem starting Unraid normally (without GUI) and I can access it remotely from another device (smartphone, tablet or laptop). However, when I try to set up a VM and pass through the GPU, it's black screen all over again. I tried various methods to make it work (thank you Spaceinvader One for your excellent YouTube videos): I dumped the GPU BIOS, linked it properly in the VM, and so on. I did manage to start the VM and install it 'properly' via the VNC graphics display (though with a huge response delay), but with direct passthrough, nothing, just the black screen. In the VM log, I keep seeing the same error lines:

2020-01-05T17:59:08.926831Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region1+0x1e81c0, 0x0,8) failed: Device or resource busy
2020-01-05T17:59:08.926839Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region1+0x1e81b8, 0xffffff,8) failed: Device or resource busy

I also tried with a Manjaro and a Linux Mint VM, but the result is the same.

2] My second issue may be related: my log partition fills to 100% very quickly after boot (a quick way to see what's actually filling it is sketched just after this post). It may be related to this thread: In Tools > System Log, I also get this error:

Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 4096 bytes) in /usr/local/emhttp/plugins/dynamix/include/Syslog.php on line 20

3] My third and final 'issue' concerns the disk setup. I currently have 4 disks (2x 6TB and 2x 4TB), plus 2x 500GB SSDs for the cache pool, but those are irrelevant here I think. When I try to add the second 6TB HDD to the array, the array refuses to start and tells me something along the lines of 'you cannot add any more drives'. I don't really understand why it refuses to mount the 6TB HDD in the array and only allows it as a parity drive, given that the other 6TB HDD is already in place as parity and both are the same size, so it shouldn't be a problem, right? My default filesystem is xfs (even though the settings say it should be btrfs, but whatever); I don't know if that's why.

EDIT: Found the reason for that one: you cannot add a disk larger than the current data disks to the array until the parity build has completed successfully. So I just waited for the parity sync to finish and it now works just fine.

Please find the logs and diagnostics below, and thank you in advance for any help provided. yros-diagnostics-20200105-1950.zip yros-syslog-20200105-1851.zip
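On the log issue, the first thing worth doing is finding out what is actually eating the space; /var/log on Unraid lives in a small RAM-backed tmpfs, so a runaway logger fills it fast. A trivial sketch:

```bash
# Check how full the log filesystem is, find the biggest offenders,
# then peek at the tail of the syslog to see what is spamming.
df -h /var/log
du -sh /var/log/* 2>/dev/null | sort -rh | head
tail -n 50 /var/log/syslog
```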
  13. The motherboard actually offers 10G networking. In any case, the cache pool will also be where I install the games (since I've read that you can keep certain folders on the cache pool rather than on the hard drives), so it will still be useful in some ways. Thanks for the advice regarding the RAM; I was thinking along the same lines, so I guess I'll just have to go for 32GB then... Very good to know! I'll switch over to a USB 2.0 flash drive then! Still 16GB, since it's just as cheap as (or even cheaper than) 8GB drives (and 4GB ones, ah ah). I was actually a bit curious about Unraid's performance with the system on the USB drive, but if it's actually hosted in RAM then there's no issue. I wonder whether in the future it will be possible to store it on non-volatile storage technologies such as Optane and the like 🤔 Well, that's another topic.
  14. Hello, so I'm finally getting ready for my Unraid build, as the parts for my new config started arriving this week (and more will come next week). Please bear with the long post, as I have a few questions and multiple details to review. This is by far my most advanced setup so far, and although I've spent the past 6 months learning many things about hardware, software and Unraid, I'd still prefer to cross my T's and dot my I's, considering this project literally emptied the money I had saved for it (almost 3k€). So before we start, here's the planned build:

Mobo: Asus Crosshair VI Hero (Wi-Fi)
CPU: Ryzen 2700 (with manual OC)
GPU: GTX 1080 Ti (with manual OC)
RAM: 2x 8GB ECC, 2400MHz CL17 Samsung (model M391A1G43EB1-CRC) (with manual OC)
Case: Phanteks Evolv X
Drives: 2x Samsung 970 Pro 512GB on an Asrock Ultra Quad M.2 card for the cache pool; 1x Samsung 970 Evo 512GB in the motherboard slot, unassigned, for the Windows 10 VM; 2x WD 6TB Gold for parity; 6x WD 4TB Gold/Green/Black for data

Usage planned: main computer (Windows 10 VM) for gaming, video and image editing; media server (Plex) as well as features like a network firewall, web hosting, ... The whole build will also be fully watercooled on a custom loop (which is the first challenge I have to tackle due to limited space inside that case, but that's what makes it so interesting). I'll take pictures along the way and share the completed build later, btw \o/

Anyway, here are my questions:

1] Unraid has to be installed on a separate USB drive. How much space do you recommend to be on the safe side? I've decided to go with a small-form-factor 16GB USB 3.1 drive (I actually bought 2, to have a backup). Is this fine?

2] The RAM was a very big question. At first I wanted to go with some high-performance modules (3600 CL15 Samsung B-die), considering the huge performance boost proper RAM gives on the Ryzen platform, but in the end I still decided to go for ECC modules (and can only lament that vendors are limited to 2666MHz modules, which is illogical at best, but that's another topic...). So I had the choice between a 2666MHz CL19 single-rank 8GB module (M391A1K43BB2-CTD) and a 2400MHz CL17 dual-rank 8GB module (see reference above). There's actually a single-rank version of that one as well, but since everyone says dual-rank is better... well, there you have it. I went for the 2400MHz one because in the end what matters is the true latency, and 2400MHz CL17 actually has slightly lower latency than 2666MHz CL19, hence my choice. Any comment on that?

I also have a few other questions regarding the RAM: how much RAM should I set aside for Unraid itself? How much for the Plex media server? (I already have over 12TB used on my current NAS, in case that matters.) I currently have 2x 8GB, but 16GB seems truly low-end and might not be enough for my usage; even now, my current computer averages around 10GB of RAM in use. So I was thinking of buying another 2 modules of 8GB to reach 32GB, and keeping at least 4GB dedicated to Unraid, 4GB to the Plex media server, and potentially another 4GB to the web server. That leaves 16GB for the Windows VM and 4GB of margin. What are your opinions on this?

3] My final thoughts are on the Unraid setup itself (meaning the drives).
I'll have the 2x 512GB Samsung 970 Pros on the Asrock Quad card as the permanent cache pool (I chose the Pro over the Evo to ensure stable transfer rates over time, since the Evo usually doesn't do too well on sustained writes). As for the 512GB 970 Evo, I plan to leave it unassigned and partition it so that I have one partition for my Windows VM (system and software only, around 200-250GB max), another partition to serve as the Plex media server (because I've read many topics saying it performs much better with the Plex server on an unassigned SSD), and potentially a third one (TBD), as I don't think I'll need 200GB for the Plex server (I just need enough space for the installation and config, since the media files themselves will be on the HDDs, right?). The HDDs themselves hold no surprises, as described above: 2x 6TB drives for parity and 6x 4TB drives for data.

So, now, the question: with all this information, do you have any further advice, or things I should be careful to look out for? (I'm already planning on using checksums to go hand in hand with the ECC memory to make sure my files don't get corrupted.) I've spent a lot of time figuring out the hardware and software required for my setup, and now a lot of money on getting the hardware, so I admit I'm a little bit anxious about making some silly mistake and ruining it all (well, I'm also a little bit excited... first fully-watercooled build as well, gonna be fun, as long as I don't spill it XD).

Oh, right, I also plan on following this very nice tip from @johnnie.black to create multiple btrfs pools (my rough understanding of it is sketched below), as it will probably come in handy for the multiple partitions of the 970 Evo, as well as a potential future upgrade where I'll add another 2 NVMe M.2 SSDs to the Asrock card.

So... yep. Thanks to everyone who managed to read this wall of text 'till the end. If you have any advice/comments, that would be very nice. I'm still waiting for most of the hardware (half of it is coming next week, while the watercooling parts will arrive... gods know when), but once the build is fully set up and installed I'll make sure to share the final result \o/ Also, if you have further advice on what plugins to install for my use, or some tweaks/tips that could be helpful... \o/
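For reference, my rough understanding of the multiple-pools trick, a sketch only (placeholder device names, and surely not johnnie.black's exact steps): build the extra btrfs pool manually and mount it outside the main cache pool, e.g. under the Unassigned Devices mount point:

```bash
# Sketch: create a second, independent two-device btrfs pool (data and
# metadata both RAID1) and mount it. /dev/sdX and /dev/sdY are placeholders.
mkfs.btrfs -f -d raid1 -m raid1 /dev/sdX /dev/sdY
mkdir -p /mnt/disks/fastpool
mount /dev/sdX /mnt/disks/fastpool   # mounting either member brings up the whole pool
```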
  15. That's pretty good news. I suppose you don't have any speculation on the 'soon' part? (I mean, everyone has their own definition of 'soon'... like how my mom used to say 'in 5 minutes' but 4 hours later she's still not there xd). Still, pretty nice to know it's on the roadmap \o/
  16. THANK YOU! Although it is clearly a 'MacGyver' trick to work around the issue, being able to set up cache pools through UD is already a pretty good option. In that comment you also said that LT was already planning to add such a feature natively to Unraid, but I haven't been able to find any source on that. May I ask if you have more information on this subject? After all, it stands to reason to ask for such a capability, especially given the ability to create VMs and Dockers and the like, where most users would prefer to have each VM/container cleanly separated in order to avoid them stepping on each other's performance and space.
  17. I see. I'll try to learn more about it and find an appropriate solution for such hash computing/comparing to preserve the integrity of the system. It might indeed be a good thing then. */some time later/* Alright, so I did find out more about it, mainly the 3 solutions currently available on Unraid for file integrity checks, which are:

-- [bunker] https://lime-technology.com/forums/topic/35615-bunker-yet-another-utility-for-file-integrity-checks/
-- [Checksum Suite] https://lime-technology.com/forums/topic/41708-checksum-suite/
-- the [Dynamix File Integrity] plugin for Unraid.

I also read the topic https://lime-technology.com/forums/topic/55426-new-to-nas-how-does-the-data-security-of-unraid-compare-to-solutions-with-checksum-before-writes-to-parity/ which also includes good advice on this matter. I'll think about my backup solution later on, then. Between the ECC memory and checksum-based integrity checks, this should already be very good... (maybe even a bit overkill in my case, but, hell, why not xD). To get back to my topic: do you have any suggestions regarding the SSD setup and the cache pool? Optane use and the NVMe RAID?
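(For anyone curious what these tools boil down to under the hood, it's essentially hash manifests; here's a bare-bones sketch of the idea, not what bunker or Checksum Suite actually run, with /mnt/user/pictures as a placeholder share:)

```bash
#!/bin/bash
# Sketch: build a checksum manifest for a share, then verify it later to
# catch silent corruption. Paths below are placeholders.
SHARE=/mnt/user/pictures
MANIFEST=/boot/config/pictures.md5

case "$1" in
    build)
        # Record a hash for every file under the share.
        find "$SHARE" -type f -exec md5sum {} + > "$MANIFEST"
        ;;
    verify)
        # Re-hash and print only the files that changed or went missing.
        md5sum --quiet -c "$MANIFEST"
        ;;
    *)
        echo "usage: $0 build|verify" >&2
        ;;
esac
```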
  18. The Ryzen 2700X seems to support ECC RAM. It's a bit harder to find a proper motherboard supporting it, though (the X470 Taichi Ultimate seems to support it, but it's a bit unclear and I couldn't find any actual confirmation...). I'm planning to do the build by the end of August / early September, so who knows, maybe by then some new motherboards will have been released that fix this issue anyway. Regarding filesystem checksums: isn't the data pool/array of hard drives created by Unraid already using btrfs by default? For the last line, you lost me; I simply don't know anything about that :C Wouldn't such an 'integrity plugin' drastically reduce the lifespan of the hard drives, though, since it has to read the files regularly? Thanks for the answer though.
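(Partially answering my own question after some reading: for disks that really are btrfs-formatted, checksum verification is built into the filesystem, and a periodic scrub exercises it. A sketch, with /mnt/disk1 as a placeholder mount point:)

```bash
# Sketch: scrub a btrfs-formatted disk. The scrub re-reads every block,
# verifies its checksum, and (where redundancy exists) repairs errors.
btrfs scrub start /mnt/disk1
btrfs scrub status /mnt/disk1   # progress and error counts
```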
  19. Hello, I'm kinda new here and finally decided to use Unraid for my daily setup. I did extensive research before deciding to post, but I would still like some advice / confirmation regarding the setup I intend to use. Some context first: I currently have a very traditional high-end PC with an i7-7700K, 2x 8GB 2933MHz RAM, a GTX 1070 and a 512GB SSD, which I use both for gaming and as a workstation (everything from programming to encoding and compiling software), along with a separate Synology NAS with 5x 4TB drives in RAID 5 used as a media server and file repository. I wish to update this setup, which is starting to show its limits in terms of performance, space and usability.

First of all, I want to ditch the Synology NAS (because I've had pretty poor performance across the board), so I wish to build a new tower, recover all the data, and use it both as a gaming/workstation machine and as a media server on an Unraid setup. Note: I've saved up quite a bit of money for this project. In terms of hardware, here's what I'm currently looking at for good performance: AMD Ryzen 2700X + GTX 1080 Ti, plus another 2x 8GB RAM sticks (the motherboard would most likely be X470-based, if that even matters). I would also add another two 6TB HDDs to the mix to extend storage.

Now, here's where I start to have some doubts. Regarding the cache/cache pool for my setup, I'd like, if possible, to use the cache pool for my system install and preserve the best performance possible (for example by using 4x 250GB Samsung 970 Evo SSDs on a PCIe NVMe card), potentially in RAID0 or RAID10 (any suggestions here?). I would also use my current 500GB SSD as an unassigned device to store apps like Plex for the media server, and things like web hosting and such. The HDDs would obviously store the bulk of the data (everything media-related), but I'll keep my VMs (at least one Windows 10 VM and potentially another for Ubuntu and/or macOS) on the NVMe array to benefit fully from the performance of the 970 Evos in RAID. Additionally, I've been wondering whether or not to throw in an Intel Optane SSD (like the 900P or 905P) to dramatically increase random performance, but I feel like it might interfere with the NVMe array here, and I don't quite know how I should put these various parts together to get the most out of them.

Oh, right, I also want to make sure I don't get the same issue as with my current NAS, where after some time a few corrupted blocks end up corrupting files. It's especially true for pictures: I have a very, very large library of pictures, since I've been in the web design industry and such for a few years, and it is really, really annoying to suddenly end up with a file that worked one day and, when you check it a few weeks later, turns out to be corrupted and showing 0 KB in the details. It doesn't seem to come from the hardware, because I've already checked the disks and they're working just fine (they're still pretty recent). It's not like it occurs every time, but it has occurred often enough for me to notice and get annoyed by it.
Another setup I was considering (with the hardware I just listed) was the following: having the Intel Optane SSD as the cache drive, using the 4x 970 Evo SSDs as unassigned disks (probably put in RAID0 manually to store the VMs at full performance; I think I remember johnnie.black saying this was the only possibility), and using the last 500GB SSD as an unassigned disk to store the media server (Plex). If I put the 500GB SSD together with the Optane SSD in the cache pool, I'm almost certain to lose out greatly on performance, because that SSD isn't very fast (around 1400 MB/s read, 1000 MB/s write; it's an old Kingston SHPM2280P2). I really do work a lot with small files (pictures, text files, pieces of software, and so on), hence why the extremely high 4K random read/write performance of Optane SSDs is attractive to me compared to the 970 Evo/Pro. Hence the idea of using the 970 Evo/Pro drives as unassigned disks for VM storage, for their high throughput, while the Optane is kept as the cache SSD to shuttle files back and forth to the HDDs/data pool.

What are your thoughts on this? My main questions are still about the SSDs: how to assign them (cache / cache pool / RAID type / which one to use as an unassigned device), and whether to add Optane. The rest I should be able to figure out \o/ I still have two months to set it all up anyway.

Edit: oh, right, I forgot to ask: currently I have 'only' 16GB of RAM (which is, well, enough most of the time), but I should go for 32GB for this setup, shouldn't I? (Maybe even higher? Hmmm?)