Alexander

Recent posts

  1. 2.5 Gb should be enough if you only use an HDD array in Unraid (no SSDs or cache). The HDDs' data transfer speed (except from their caches) is not going to be faster than that. 2.5 Gb copper LAN is simple, cheap and convenient, so why not use it if it works now (see the rough numbers below).
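A rough sanity check, assuming a typical modern HDD sustains around 200 MB/s sequentially (the exact figure varies by drive and by where on the platter you read):

    200 MB/s × 8 bits/byte = 1600 Mb/s = 1.6 Gb/s

That would saturate a 1 Gb link but fits comfortably within 2.5 Gb, so only SSDs or a cache pool would benefit from anything faster.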
  2. Yes, I know it is an old thread, sorry. But when I recently ran into this problem I searched for that status code error to find a solution, and this thread is all I found. There was no good solution. I just want others in the same situation to get help, that's all.
  3. I'm looking here a little bit too late, but for any other new builder who wants a normal ATX motherboard, do consider ASRock. They have 2 nice things.

1. There is a fan controller in the BIOS. You can define a fan curve with several temperature/power (speed) points. This is really good and takes control of the fans immediately at startup (although they still spin up to max for about 2 seconds). Other motherboards normally only offer Windows apps for this (useless for Linux/Unraid), and those are often buggy and badly maintained on top of that. The only downside of the BIOS solution is that flashing a new BIOS version wipes everything, so write down your favorite settings before you update your BIOS version.

2. ASRock normally uses open-ended x1 PCIe slots on their boards. You can plug any size PCIe card (x16 if you like) into those. This is especially useful if you add a SAS expander card on top of your HBA to connect a lot of HDDs. One expander card can give you connections for 16-20 HDDs, depending on whether you run 1 or 2 SAS cables to it. You can daisy-chain as many expander cards as your HBA supports (I think it is normally 256 or 1024 drives), but unless you power the expander cards from a separate backplane or separate "PCIe slot power adapters", your motherboard's slots will limit the cards/drives. Link: https://www.youtube.com/watch?v=EjFouPv6K-o Note: many expanders have an x8 PCIe edge connector but do not use any of the lanes for communication, so they are perfect for ASRock's open-ended x1 slots.

ASRock is in the affordable, get-much-for-the-price range. If you want the best and most efficient power regulators (usually Infineon components) for your CPU, or other higher-quality and pricier components, ASRock is probably not for you, and I suggest Gigabyte for those. For a stable "serious" server, ASRock is probably not what you are looking for either. But as a "cheap" good starter home server to learn on, please consider ASRock boards. I think Level1Techs (YouTube) quite likes them too; they have relatively good Linux support.
  4. 2.5 or 5 Gb LANs have not been working in Linux/Unraid. But the 6.8.0 announcement says "Added oot: Realtek r8125: version 9.002.02", and that is Realtek's 2.5 Gb controller, so just guessing, it might work now? I think it is a matter of which drivers are included in the Linux kernel used, so this matters for all Linux OSes, not just Unraid. Please report back if it works. (A quick way to check which driver a NIC is using is sketched below.)

Anything with an Intel 1 Gb or an Aquantia 10 Gb controller works out of the box. I have a motherboard with the Aquantia 10 Gb controller (the AQC107 chip) and 2 Intel 1 Gb LANs; all 3 work out of the box in Unraid. Here is a link to a PCIe card that works with that Aquantia AQC107 chip: https://forums.unraid.net/topic/58390-asus-xg-c100c-10gbe-nic/?do=findComment&comment=803918 As far as I know, all controller cards with this chip work too, but I have not personally tested that, so do not quote me on it. The best 10 Gb PCIe LAN card is probably the AQN-107 https://www.anandtech.com/show/13066/aquantias-gamer-edition-aqtion-aqn107-10-gbe-adapter-now-available but that one sold out really fast. Marvell acquired Aquantia, and since then the supply of the cards/chips has unfortunately been scarce. A coincidence? I do not know.

Some say Intel 10 Gb LAN is supported. I do not know. They talk a little about it here: https://forums.unraid.net/topic/86878-enough-pci-express-lanes/ I guess it works now, at least for the Intel X550-T2. It uses the ixgbe driver, and the 6.8.0-rc8 announcement says "Update Intel 10Gbit (ixgbe) to out-of-tree driver version: 5.6.5" https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-680-rc8-available-r761/ I hope somebody can confirm whether Intel 10 Gb is working in Unraid?

The Aquantia 10 Gb controller can be connected to 2.5 and 5 Gb networks and it will work. Obviously I have not tested every AQC107-based card, but I have not seen anybody on the net say otherwise. Do note that Intel 10 Gb does NOT connect to 2.5 or 5 Gb networks, so Intel is "inferior" to Aquantia on that point: Intel 10 Gb does not support the standard for 2.5 and 5 Gb speeds (it is a separate standard, which the Aquantia chip supports but Intel does not). Both support connecting to 1 Gb LANs, though. The Aquantia 5 Gb controller (AQC108-based) is obviously not working (no kernel driver in Linux), so do not get that one either.

I think Spaceinvader One had a setup with Mellanox 10 Gb SFP+ cards. You will have to buy adapters for copper or fibre cables for those (which will cost you some). The standards used by the copper adapters vary by brand (voltages etc.), so you should use the same or compatible ones at both ends; if you go this route, fibre is normally suggested. If I remember correctly, Spaceinvader One had to install a driver and also needed a 1 Gb LAN connected in parallel to get his 10 Gb Mellanox LAN working, at least for his advanced use, so I would NOT call that a works-out-of-the-box solution. None of this is necessary with the 10 Gb Aquantia copper LAN: you use it exactly like any normal 1 Gb copper LAN, and with Unraid or any not-very-old Linux (kernel) it will work out of the box. No issues or workarounds needed.

Another thing: do not bond 10 Gb and 1 Gb networks together in Unraid. https://forums.unraid.net/topic/84516-upgrade-to-68-rc3-not-possible/?do=findComment&comment=783433 I guess you should not bond either of them together with 2.5 Gb either?
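A minimal check, assuming console access, for which kernel driver (if any) is bound to a NIC; eth0 is a placeholder for your actual interface name:

    # list network controllers and the kernel driver each one uses
    lspci -k | grep -A 3 -i ethernet

    # or query one specific interface (replace eth0 with yours)
    ethtool -i eth0

If no "Kernel driver in use" line shows up for the controller, the running kernel has no driver for it.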
  5. I guess this is a link to the "Lehmann" server case: https://www.lehmann-it.eu/en/19-inch-server-cabinets/19-inch-office-racks.html A quick search gave this alternative (80% sound reduction): http://www.msnoise.com/soundproof-it-rackmount-cabinets-4u-6u-12u-25u.html I have no idea about the price. These must of course be really expensive, given that they reduce sound considerably and look high quality (at a glance, anyway).
  6. I have this card too. The card is new and has the "default" firmware 510A. I have not tested it thoroughly yet, but it works OOTB with my WD Red and Seagate HDDs (4 in total, of which 2 are in the array).

But do not put it in a PCIe slot that shares lanes with your HBA. My HBA (HP H220 in IT mode) does not work if it doesn't get all 8 lanes it should have. The IBM card will thus claim at least some lanes when inserted, despite not using any of them. Maybe that is required by the PCIe standard to get power and populate a slot correctly? It is meant to be put on a backplane or in a slot that only provides power. After moving the IBM expander to an open-ended x1 slot (on my ASRock motherboard), which does not share PCIe lanes with other slots, it all works. The card has an x8 edge connector, so if you put it on your motherboard you will need an open-ended slot, or waste 8 lanes, I guess. Don't ask me why the IBM expander does not have an x1 or power-only connector instead.

You can flash it with the newest firmware, 634A. Art of Server on YouTube has a video about how to do that. I would suggest using the latest stable Manjaro XFCE instead of CentOS and learning the corresponding commands (Manjaro is Arch Linux based; Google will help you find them). Hint: https://www.howtogeek.com/426199/how-to-list-your-computers-devices-from-the-linux-terminal/ Then sudo pacman -Syu <commandpackage> will install the package that provides the command (with some Y (Yes) confirmations), and sudo sg_write_buffer <the same parameters used by Art of Server> will run the flash. Remember to run it in the directory that contains the files referenced in the parameter fields, i.e. if=filename (if is the input file) and of=filename (of is the output file). You will need a USB stick or hard disk connected that contains your firmware file. (A rough outline of the steps is sketched after this post.)

Disconnect everything that can contain LSI firmware, such as controllers and SSDs, first, so that you do not accidentally wipe the wrong firmware. It is VERY important to check the name of your device, as described by Art of Server: if you do not provide the correct device parameter at the end of the sg_write_buffer command, you might damage another device. This is especially important if your motherboard has an LSI controller on it (usually server boards), since you obviously cannot remove that unless you have another computer to use for flashing.

To install Manjaro on a USB stick, follow Manjaro's wiki, i.e. use Rufus to put Manjaro on the stick. Do not install Manjaro on any hard disk, just run it "live" in memory. It will ask you to set up timezone and keyboard for live use (the "RAM memory install"). A hard disk install would be done after booting the live USB, but you should not need to do that.

I found a forum thread where a guy tested the performance of the IBM 46M0997 card with the original 510A firmware and all later firmware versions. He got lower throughput with every firmware after 510A, so I will not "upgrade" unless I run into problems later. I'm not sure you can downgrade back easily; I have not tested that. You might need another tool to erase the firmware completely first if you want to downgrade. Basically, with upgraded firmware his card, tested with several SATA-connected SSDs (I think), got about 1.1x the speed with both SAS bridges connected compared to one. With 510A the speed was 1.5-2x with both SAS ports connected (2x being the theoretical maximum).
With only HDDs connected you will not hit SATA 3 speeds, rather just below SATA 2 at most (half speed), so in the end this might not matter for you. This card is probably a good choice if you consider stable, "bug free" operation (with any brand or old HDD) a higher priority than top speed (as you probably should). If you do not have bugs/problems, then maybe (I do not know) you may not want to upgrade the firmware from 510A. If anybody does speed and/or stability tests, please share your results.
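A rough outline of the flashing session, as a sketch only: the sg3_utils package provides sg_write_buffer, and lsscsi helps find the right /dev/sgN node, but the exact mode, chunking and firmware filename must be taken from Art of Server's video (the filename and device below are made-up examples):

    # on the Manjaro live session, install the SCSI generic tools
    sudo pacman -Syu sg3_utils lsscsi

    # list all SCSI devices with their /dev/sgN names;
    # triple-check which one is the IBM expander before flashing
    lsscsi -g

    # flash the firmware image to the expander; dmc_offs_save is
    # mode 7 ("download microcode with offsets, save"), but confirm
    # the parameters against the video before running anything
    sudo sg_write_buffer --mode=dmc_offs_save --in=expander_fw_634A.bin /dev/sg3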
  7. Like most: Unraid keeps all data that you have not expressly chosen to erase yourself safe, no matter what.
Like 2nd most: GUI/ease of use + Spaceinvader One tutorials/videos.
Like 3rd most: Hardware support for new + old hardware; examples: 10 Gb LAN (Aquantia AQC107), SSD cache.
Like 4th most: The new WireGuard VPN feature is awesome, really useful/easy/fast.
Like to see added: I hope the new app that backs up VMs will get a nice, easy GUI restore option, especially useful as long as the VM edit GUI is buggy. In restore you should be able to choose which backup (date) to restore, which VMs you want to restore, and whether to restore the XML file and/or the vdisk file. A 1-step undo option would also be good if you accidentally end up worse off after a restore and want to revert it.
  8. I had this recently. I had fiddled with the VM settings after changing my USB settings, so that I got an error message when starting my VM. In my case this happened when my XML got corrupted in v6.8.0 (and the result was the blue screen you got).

Whatever you do, DO NOT erase your vdisk and DO NOT erase your VM. Back up your VM XML with the new app that is available. I excluded the vdisk from backup (only doing XML file backup), but keep the vdisk untouched and DO NOT erase it.

Then create a new Windows 10 VM with the same settings you had when you created the original one, and for the vdisk choose MANUAL (not Auto) and the path+file of your original vdisk. After creation of the VM, the next time you edit it, it will show vdisk AUTO in the setting. This is a bug; if you look at the XML you will see that it uses your manually set vdisk. Just ignore this bug. The newly created VM has another uuid and is hence another VM machine. Now you can start it. I got a warning about a graphics file that I had to ignore, and Windows plus the installed software licenses were not active. This is because you are now running your vdisk on another (new) VM machine.

The final solution is to simply copy all the XML text below the <uuid> </uuid> line(s) from the newly created VM into the old, repair-asking VM (in XML form edit) and update. (See the XML fragment after this post for where the uuid sits.) Then suddenly the original VM works (I was lucky) and all activated licenses show up as active again. This works because the VM now uses the original uuid. If everything works, you can delete the newly created VM and keep only the old one. The old VM, where I erased all lines after </uuid> and replaced them with the ones from the newly created VM, ended up working 100%: no error messages or repair blue screen, and with the Windows license + software licenses activated (as before the repair problem).

This showed me that I can rely on Unraid not destroying my vdisk data (that is, all the VM data except the VM machine/XML file setup). You just have to understand that the VM GUI edit (especially the non-XML form) is buggy; therefore, make backups of the XML file. Unfortunately there is no restore in this new VM backup app yet. You will have to copy the saved XML text by hand (open the file in a text editor and copy, then paste into your VM XML form) if you need to do that. And if you back up the vdisk (I did not), you will have to copy the backup vdisk file over your vdisk file to restore it. As I suggested, remember to keep a backup of your latest original files first if you do a "manual restore", then try the manual restore (you can have multiple timestamped backups). If everything is working 100%, THEN you can remove the backups of the XML (and possibly the vdisk, if you made a vdisk backup) if you want.
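For orientation, an abridged sketch of a libvirt domain XML (name, uuid and path are made-up examples). Everything below the </uuid> line is what gets copied from the new VM into the old one, and the <source file=...> element is where the manually chosen vdisk appears:

    <domain type='kvm'>
      <name>Windows 10</name>
      <uuid>1b4e28ba-2fa1-11d2-883f-b9a761bde3fb</uuid>
      <!-- everything from here down is what you copy over -->
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <!-- the manually selected vdisk path (example only) -->
          <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
          <target dev='hdc' bus='virtio'/>
        </disk>
        <!-- ... other devices omitted ... -->
      </devices>
      <!-- ... other elements omitted ... -->
    </domain>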
  9. Thank you for pointing that out. I edited my post accordingly.
  10. When "Primary vDisk Location:" is set manually to a vdisk image, it is preserved in the XML file, but it is not preserved in the Form view after leaving the VM and going into Edit again. There it always shows "Auto" and the auto path, which is not the one used by the VM. The manually set and actually used path is shown correctly in the XML view, though. This is confusing. (A quick console check is sketched below.)
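A quick way to confirm which vdisk the VM is really using, regardless of what the Form view claims (run from the Unraid console; replace "Windows 10" with your VM's name):

    # dump the live libvirt XML and show the vdisk path lines
    virsh dumpxml "Windows 10" | grep "source file"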
  11. Gigabyte. That is a nice mATX board. For motherboard components and price I like ASRock and Gigabyte, but ASUS and ASRock have better BIOSes, so just be prepared for that; maybe you should check ASUS again then?

Edit: Forget that comment. This is the Gigabyte board to get for a Hackintosh. Cool choice for that.

I check reviews on Newegg before I buy a board. From your Gigabyte board on Newegg: "Other Thoughts: While navigable, uefi still falls behind AsRock and Asus." (Meaning the BIOS is less good on Gigabyte.) But since that board only has 4 reviews, I suggest you check the 52 reviews on another Gigabyte Z390 board, the Gigabyte Z390 AORUS MASTER, as the same probably applies to your board too. Some of the bad ratings might be user error, but to me, if the BIOS is easy to use and gets good reviews, it is a better board. From the reviews of several Z390 Gigabyte boards: "With a cold start, the board sometimes 'forgets' the boot drive and requires you to enter and (save) exit the BIOS for the system to pick the drive up again." (If that is true, it is painful.)

Wendell at Level1Techs is an expert on Linux motherboards and CPUs. He seems to mostly use/review ASUS and ASRock in his YouTube videos, and I think they are the best ones too (edit: for Linux; Gigabyte is often best for Hackintosh). Here is a nice Threadripper (AMD) review: ASUS Prime X399-A. I fully understand your mATX size and Intel CPU choice of motherboard and case.
  12. I would buy WD Red NAS drives if you opt for "consumer" drives; the cache is only 64 MB, but they are good. A NAS drive is a type of more reliable consumer drive intended for 24/7 use (consumer meaning less reliable, but mainly cheaper, than enterprise). I have bad experience with Seagate "consumer" drives. The WD Red 3 TB is about the quietest NAS drive available; the 4 TB is some dB louder, but still quieter than all other kinds of NAS drives, partly because of its limited rpm. I think that is good for low temperature and possibly marginally for data reliability too. My 3-4 year old Seagate (a kind of low-noise, low-power green drive) already has bad sectors; my WD Green bought at the same time is still 100% OK. Edit: As of Jan 2020, the new HGST drives actually seem to be the most reliable. They are the loudest too, though.

Most reliable is probably the "enterprise grade" Seagate 10 TB from the EXOS series; the standard model is the ST10000NM0086. But they are noisy (and speedy), as all enterprise drives are, and they are expensive (mostly because they are "big" 10 TB helium drives), and of course you pay per TB. Still, it seems Seagate has the highest reliability and is among the best drives in the enterprise (most reliable and fastest) segment, so for reliability it is kind of the opposite of the consumer segment. They are also actually reported as less noisy than others in this segment. But they spin fast and are thus still considerably louder than consumer WD Reds (that is, the plain WD Red series, not the Pro or Gold).

Personally I am not a fan of Corsair cases and Corsair water cooling, but if you like RGB it is for you. Also, I now prefer ASRock motherboards over ASUS, and I choose standard ATX because there is much more choice (compared to mATX). ASRock has fan settings in its BIOS; to me that is a lot better than buggy Windows add-on software/drivers (ASUS), even if ASUS has a lot of options (too many, and buggy, from what I have heard). I have to admit the ASRock BIOS resets all settings at every BIOS update, which is annoying. Fortunately, the million parameters are all set to good defaults; I only changed memory OC and fan settings in mine. CPU OC might be interesting for some too.

For Z390, EDIT: the ASRock Taichi Ultimate has 10 Gb + 2 x 1 Gb LAN (and all of these work in Unraid now; future-proofing). Look for motherboards or controller cards with the Aquantia AQC107 chip; Linux has supported it for quite a while. The Intel 10 Gb chip available now has dual RJ45 ports and does NOT have Linux support (at least not yet), as far as I know. Another difference is that the Intel 10 Gb chip does not support standard 2.5 and 5 Gb connections, only standard 1 Gb, while the Aquantia AQC107 supports 1, 2.5, 5 and 10 Gb and adapts to the different standards. EDIT2: Note that the plain Taichi (not Ultimate) is without 10 Gb. According to wesman's reply below, the Realtek 2.5 Gb does NOT seem to have Unraid support (probably no Linux kernel support), at least not yet. That is really disappointing if true. But most switches are probably 1 Gb or 10 Gb anyway (and 10 Gb is roughly 10 times the price and power if copper LAN cables are used).
  13. Spaceinvader One has a 2 min tip video, "One share to rule them all", here: https://www.youtube.com/watch?v=TM9pPz732Gc This tip makes it faster to copy between Unraid shares (at the top level) in Windows.

I found a way to make this work on Windows 10 with Unraid 6.6.6: I had to stop the array, and that was not indicated in the video or comments. Before you change anything in Settings/SMB/SMB Extras, STOP THE ARRAY: Main/Array Operation <Stop>. Then write in Settings/SMB/SMB Extras:

    [rootshare]
    path = /mnt/user
    comment =
    browseable = yes
    valid users = yourusername    <----- see video / insert your newly made user's username here
    write list = yourusername     <----- same as above
    vfs objects =

<Apply> <Done>. Then START THE ARRAY.

In Windows 10 (maybe on macOS too?), add your network drive. Write: \\Tower (\\Tower is hidden). Then click the <Browse> button and all shares will show up; rootshare is the last share in my list. This works perfectly on my machines.

But it does not ask for a user/password to mount, so anybody with a Windows machine on the local network can probably add this rootshare network drive. An Unraid bug, or do more parameters have to be given in SMB Extras? (A guess at one such parameter follows after this post.) I guess it might be enough to stop all Dockers (and not the whole array) and start them again after the SMB edit, just guessing that the SMB service runs alongside the Dockers in Unraid, but I have not tried that. Stopping and starting the array stops and starts all Dockers. Changing network settings in Windows 10 does not change anything at all for me, so those YouTube comments are totally misleading, as I see it.
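Purely a guess, untested here: with stock Samba semantics, explicitly disabling guest access in the same block should make Windows prompt for the credentials of a user on the valid users line (a global "map to guest" setting may otherwise be letting unknown users in as guests):

    [rootshare]
    path = /mnt/user
    browseable = yes
    guest ok = no                 <--- forces authentication in stock Samba
    valid users = yourusername
    write list = yourusername

If that works, only yourusername should be able to mount the rootshare. Please report back if anybody tries it.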