iomgui

Members
  • Posts: 22
Everything posted by iomgui

  1. Thanks a lot, that's very clear. Yes, I'm not a native English speaker, nor do I live in an English-speaking country. I'll try to be clearer: in my case, reconstruct write had the following consequences: 1/ an increase in transfer speed, up to the limit of what my Ethernet can handle (1000 Mbps); 2/ an extreme decrease in CPU load, from 35% (at 80 MB/s) down to 6 to 12% (at 110 MB/s). That's why I was wondering how it manages a lower CPU load with better transfer and write speed (a rough sketch of the two modes follows below). I also understand that when migrating a large amount of existing data to Unraid, reconstruct write should be the best mode. For now I'm only trying to understand how to use Unraid, how it performs, and how I could do what I need with it. Still, when the time comes to transfer my data, I think I'll consider building the array without the parity disk and launching the parity build at the end (my data being safe, since my old disks will only be blanked after that last step). Thanks again for the help and explanations.
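     This is not Unraid's code, just a minimal sketch of the XOR arithmetic behind the two write modes (the function names and toy data are mine). It shows why both modes end up with the same parity: read/modify/write reads the old data and old parity back before writing, while reconstruct write reads the other data disks instead, so the target and parity disks can write sequentially without first seeking back to read.

        # Minimal sketch of single XOR parity updates; in-memory byte blocks
        # stand in for one sector on each drive. Not Unraid internals.

        def rmw_write(old_parity, old_data, new_data):
            # read/modify/write: 2 reads (old data + old parity), 2 writes,
            # only the target and parity disks are involved
            return bytes(p ^ od ^ nd
                         for p, od, nd in zip(old_parity, old_data, new_data))

        def reconstruct_write(other_disks, new_data):
            # reconstruct write ("turbo write"): read the same sector from every
            # OTHER data disk, XOR with the new data, then write data + parity;
            # more disks spin, but nothing is read back from the disks being written
            parity = bytearray(new_data)
            for sector in other_disks:
                parity = bytearray(a ^ b for a, b in zip(parity, sector))
            return bytes(parity)

        # toy example: 3 data disks + 1 parity disk, 4-byte "sectors"
        d1, d2, d3 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\x00\xff\x00\xff"
        parity = bytes(a ^ b ^ c for a, b, c in zip(d1, d2, d3))
        new_d1 = b"\xaa\xbb\xcc\xdd"
        assert rmw_write(parity, d1, new_d1) == reconstruct_write([d2, d3], new_d1)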
  2. Perhaps I was unclear about the CPU, but shouldn't CPU usage be lower in sequential read/modify/write, since the parity computation is only involved during a third of the process?
  3. It was on Auto. I changed it to reconstruct write and it now writes at the Ethernet bandwidth limit, and CPU usage is also down, though not as stable as before, between 6 and 15%. Many thanks for the tip, it's quite an incredible jump in performance… I figure the previous mode was doing things sequentially, which would explain the halved speed? But I don't understand the big drop in CPU usage… not that I'm complaining… Does it mean the Auto mode should be "reconstruct write" instead of "sequential read modify write", unless that only concerns a few people / configurations?
  4. (All the same disks, with the same performance under Windows.) I also tested the bus under Windows by copying files to all the disks at the same time; since I had 4 of them it was easy to do, and it confirmed they could all perform at 200-250 MB/s read/write simultaneously. For information, they are attached to an LSI 9211-8i in IT mode, in a PCIe 4x slot. I also don't think that is the problem, because when the parity disk was built it took almost a day, but the parity disk was written at 200 to 250 MB/s according to Unraid. Thanks for the input though. Is CPU load linear with write speed (for the same type of files)? In other words, if Unraid were able to write the file and the parity disk at 160 MB/s at the same time, would the CPU load be around 70% (knowing that at 80 MB/s the CPU load is 35%)?
  5. Core i3-8100, 16 GB RAM, 4 disks including 1 parity disk (same model as all the other disks). The disks are fully tested under Windows 10, with read/write performance of 220 to 250 MB/s (240 on average) on big files. I first tested Unraid without a parity disk: transferring a big video file over Ethernet used the full capacity of the 1000 Mbps link, so a stable 110 MB/s. But now with a parity disk, performance is down to around 70 to 80 MB/s (maximum), with a global CPU usage that is stable at 35%, and this with nothing installed (not even the Community Applications plugin yet). Even though global CPU usage is stable, CPU0 to CPU3 usage varies quickly between 10 and 50% (RAM usage is low, at 8%). I'm not complaining, I just wanted to know whether this is to be expected, or whether I missed something in the configuration?
  6. Thanks again! Then I don't see what the reason is (the noise is the same as a "read noise", every 3 to 5 seconds; it would be fine for 1 disk, but 4 of them all at the same time is annoying). All the more so since my older archive HDDs (not NAS drives) are completely silent when idle (up close you can still hear the hum of the spinning, but there is no read noise when there is no read…). I already disabled the Seagate EPC feature, so it's not power management, unless the firmware is buggy… The first line of Seagate support has now relayed the question to their back office... I haven't tried disabling the SMART function completely. Funny thing: I removed the parity disk for a test (I'm testing Unraid before deciding), and when I put it back it decided to rebuild it entirely; fine, but what is funny is that the noise of 3 disks reading + 1 disk writing is the same as when they were all idle… perhaps every 2 to 4 s instead of 3 to 5 s...
  7. It seems it could be the temperature monitoring done by Unraid that causes the issue. Is there any way to either disable HDD temperature monitoring or limit the sensor polling to, for example, once every hour? (A sketch of what I mean by throttled polling follows below.)
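     Not how Unraid polls internally, just a sketch of what "throttled" temperature polling could look like, assuming smartmontools is available; the device names and the one-hour interval are placeholders. The -n standby flag tells smartctl to skip a drive that is spun down rather than waking it.

        # Hypothetical throttled SMART temperature polling (not an Unraid feature).
        import subprocess, time

        DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # placeholders
        INTERVAL = 3600  # once per hour instead of every few seconds

        def read_temp(dev):
            # -A prints the SMART attribute table; -n standby exits early
            # (without spinning the disk up) if the drive is in standby
            out = subprocess.run(["smartctl", "-A", "-n", "standby", dev],
                                 capture_output=True, text=True).stdout
            for line in out.splitlines():
                if "Temperature_Celsius" in line:
                    return line.split()[9]  # RAW_VALUE column
            return None  # drive in standby or attribute not reported

        while True:
            for dev in DISKS:
                print(dev, read_temp(dev))
            time.sleep(INTERVAL)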
  8. My old configuration under Windows uses 3 Seagate 8 TB Archive drives. With spin-down set to never, when idle (2 minutes after system start) they make zero noise over 10 minutes (I stopped listening at 10 for the test). Of course the OS is on an SSD... So I'm wondering whether it's Unraid related or disk related...
  9. One of my main concerns is noise. When I use my HDDs in Unraid (any of them, the problem is not tied to one in particular; they are IronWolf Pro), then every 3 to 5 seconds or so they make a very short noise, even though Unraid tells me there has been no read or write operation at all. I should point out that they are included in an array, and I ran the test with nothing installed on them (no Docker, no plugin, nothing besides what Unraid puts there at initialisation). Of course, if the same disks are not in an array, they make even more noise (every second). The noise is comparable to a read operation in terms of loudness, yet Unraid tells me there is no read (or write) operation being done... Does anyone have an idea? Could it be the SATA controller making the disks try to read without Unraid being aware of it? Is there any way to eradicate the problem while keeping my HDDs? Or is this normal noise for an IronWolf Pro? (One way to double-check whether any reads are really happening is sketched below.)
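     A generic Linux check, not an Unraid feature: compare the per-disk read counters in /proc/diskstats before and after a quiet listening window, independently of what the dashboard shows. If the counters don't move while the clicking continues, the activity is not ordinary block-layer reads and could come from the controller, the drive firmware, or SMART/passthrough queries instead.

        # Compare completed reads per whole disk over a 30-second window.
        import time

        def read_counts():
            counts = {}
            with open("/proc/diskstats") as f:
                for line in f:
                    fields = line.split()
                    name, reads_completed = fields[2], int(fields[3])
                    if name.startswith("sd") and len(name) == 3:  # whole disks, not partitions
                        counts[name] = reads_completed
            return counts

        before = read_counts()
        time.sleep(30)   # listen for the clicks during this window
        after = read_counts()
        for name in sorted(before):
            print(name, "reads in window:", after.get(name, 0) - before[name])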
  10. I tried Unraid around 2 weeks ago for a day or 2 on a basic test platform (open-air motherboard, 2 HDDs, etc.), then I stopped. I have now assembled my final configuration (of course quite different from my earlier test config). I used the same USB key with the trial key, which I haven't kept anywhere else, so I wouldn't be able to find it again. First problem: it looped on boot (a motherboard issue?). I managed to find a solution by disabling every boot device other than the USB in the boot options, using UEFI mode, and renaming the folder on the USB key to EFI. It worked, but then it stopped at the login prompt on the monitor attached to the Unraid server; I logged in as root, but then nothing, there was never any GUI that I could access from my remote computer. So I'm guessing I should format the USB key again and reinstall from scratch, but I haven't kept the trial key, and my understanding is that I won't be able to get a new one for that USB stick. What do you suggest? Buying a new USB key and a new trial? Or is there any way to continue the initial trial? I am not trying to cheat the system in any way.
  11. Thanks a lot. In my country that card is 80 to 100 bucks more than the Gigabyte one, and it would also be short one M.2 PCIe port compared to the Gigabyte (even though the second one would only be for a hypothetical future upgrade). But more importantly, it wouldn't solve the problem of needing 3 PCIe 4x slots, as the layout wouldn't allow me to use the 4x slot along with a graphics card that takes up 2 slots... though I didn't mention that before. I know ASRock has a good reputation for server motherboards. I don't know about Gigabyte, but the layout, price and specifications seem to lead me to choose Gigabyte...
  12. I wanted to know whether I should expect a loss of performance if I switched 8 HDDs from an LSI 9211-8i in IT mode to the SATA ports of a motherboard with 8 SATA ports. I can't test it easily for 2 reasons: my current motherboard only has 4 ports, and I would need to buy special cables, since I use a CS381 case and the current cables are SAS to mini-SAS (for the test I would need reverse SATA to mini-SAS breakout cables). The motherboard I intend to use is the GIGABYTE C246M-WU4, though my question is more generic: SATA controllers on motherboards versus dedicated LSI cards. In case you wonder why I don't want to stick with the LSI card: I will need 2 PCIe ports with at least 4x speed, not counting the LSI 9211 card, and I don't want to change my case, which limits me to micro-ATX motherboards with at most 2 PCIe ports at 4x or better. The choice of the Gigabyte board comes from it being a server motherboard, so I expect it to perform well in every area. It would also allow me to use ECC memory if I chose to upgrade (along with an i3 for a cheap result). Many thanks in advance.
  13. My problem is simple: my Gigabyte Z390M Gaming motherboard has 1 PCIe 16x slot and 1 PCIe 4x slot, along with a couple of PCIe 1x slots. Given my case (CS381) and the space available, I can only use the PCIe 16x slot for my graphics card, which means I can only connect my LSI 9211-8i controller (not received yet, I can still cancel) to the PCIe 4x slot. I read everywhere that it should be plugged into an 8x PCIe slot and never into a 1x slot. But what about a 4x slot? What would be the consequences with 8 HDDs? (Some back-of-the-envelope bandwidth numbers follow after the list.)
      1/ No consequences, since with Unraid only 3 HDDs would be active at the same time, to simplify?
      2/ No consequences as long as I only plug in 4 HDDs; and with 8 HDDs, what would the result be, half speed or worse?
      3/ Consequences even with 4 HDDs?
      4/ Something else?
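     Back-of-the-envelope numbers only, assuming the 9211-8i is a PCIe 2.0 x8 card, roughly 500 MB/s of usable bandwidth per Gen2 lane, and about 250 MB/s of sequential throughput per HDD (the all-disks-streaming case is essentially a parity check or rebuild).

        # Rough PCIe link budget vs. aggregate HDD throughput (assumed figures).
        LANE_GEN2_MB_S = 500   # approximate usable bandwidth per PCIe 2.0 lane
        PER_DISK_MB_S = 250    # assumed sequential throughput per HDD

        for lanes in (4, 8):
            link = lanes * LANE_GEN2_MB_S
            for disks in (4, 8):
                need = disks * PER_DISK_MB_S
                verdict = "fits" if need < link else "at or above the link limit"
                print(f"x{lanes} link ({link} MB/s): {disks} disks need {need} MB/s -> {verdict}")

     By these assumed figures, 4 drives sit comfortably inside a x4 Gen2 link, while 8 drives streaming simultaneously would be right at the x4 ceiling; everyday single-stream writes would not be affected either way.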
  14. Suppose I already have an Unraid array, plus Windows disks that I would like to copy, partially or totally, into the array before converting them or adding them to the existing array. My question is: is it possible to attach an NTFS HDD from Windows 10 to an existing Unraid configuration, have it remain an NTFS disk with its data, and read it under Unraid, either permanently or (in my case) only long enough to copy the data to the existing array before handing the disk over to Unraid's array management?
  15. Hi, if I replace my motherboard with a different one, let's say after a motherboard failure (so I can't anticipate anything from within Unraid), how difficult will it be to get my old array and SSD cache working again as if nothing had happened? Or will I lose the SSD cache, and only that? Or worse? And how much of a Linux amateur can I be (no command line) and still get my array working again? Sorry for the noob question, but I didn't find an easy answer.
  16. If my understanding is correct, DrivePool would be missing the parity feature. Thanks for the advice though.
  17. I forgot to mention that I have seen the SnapRAID comparison chart with Unraid. The feature missing in SnapRAID would be the real-time updating of the parity disks (in that chart it says "snapshot" for SnapRAID).
  18. Many thanks for your answer. What I'm looking for is software, under Windows 10, that would allow more or less the same flexibility in terms of HDD management. By that I mean:
      - a "RAID" solution that uses up to 2 parity HDDs (I'm assuming here that all HDDs are the same size) but writes a file to only one HDD, like Unraid, so that reading that file only spins up ONE disk (this last point is one of the most important for me, for noise management)
      - behaviour like Unraid's, i.e. each data disk remains readable on its own if needed (in case of a computer or disk failure, for example) on any computer, without requiring the "Unraid-like" software
      - the possibility to use a pair of SSDs as cache for the array, that same pair of SSDs also hosting the OS (Windows 10 ideally), possibly in RAID 0 (though the RAID 0 would be handled by the motherboard)
  19. Hi, is there any commercial version of Unraid, even with functionality limited to disk management (but allowing the same home-made RAID system as Unraid)? If not, would you know of a Windows x64 program with the same disk management features (up to 2 parity disks, reads that only spin up 1 disk, etc.)? Many thanks in advance.