FlorinB

Everything posted by FlorinB

  1. Totally agree with you. I have no problem soldering extra connectors and luckily, in my case, I will not lose the warranty, because the HDD cables for my PSU have connectors like the ones below and it is possible to order extra cables separately. By the way @Squid, can you give me some hints/links on how I can start to develop plugins myself? I am thinking about a BTRFS GUI, which is missing at this moment. Thanks in advance!
  2. This one with the splitters is one of my issues. I have 1-to-2 and 1-to-4 splitters and, instead of connecting the power cable directly to the disks, I am using those splitters. The second issue I have is that I used 2x 2.5 inch to 3.5 inch HDD adapters to install the 2.5 inch disks, like below. There are 3 of them, one after the other. The Fractal Design Node 804 case has 2 cages for 4x 3.5 inch disks. I have installed the 2x 2.5 inch to 3.5 inch adapters in the left cage. When all the disks are spinning, like when I initially precleared them, the temperature rises to around 45°C. Since the computer is in the house, one of my goals is to keep the noise as low as possible and only increase the fan speed when needed. The case comes with normal 3-pin fans (without PWM). I have ordered 2 Noctua NF-F12 PWM fans and I am planning to install them on the disk side of the case. However, it is still unclear to me how I can better control the fan speed from unRAID. The Fan Auto Control plugin does not detect any PWM fan, at least at this moment.
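     The usual approach, once a PWM fan is detected, is a temperature-to-duty-cycle curve. A minimal sketch of such a curve is below; note this is not how the Fan Auto Control plugin is implemented, and the sysfs path in the comment is hypothetical — on a real board you have to find the right pwmN file under /sys/class/hwmon first.

```python
# Sketch of a simple disk-temperature -> PWM fan curve.
# Assumption: PWM duty is written as 0-255, as the Linux hwmon interface uses.
def fan_pwm(temp_c, low=30, high=45, min_pwm=60, max_pwm=255):
    """Linear ramp: min_pwm at or below `low` deg C, max_pwm at or above `high`."""
    if temp_c <= low:
        return min_pwm
    if temp_c >= high:
        return max_pwm
    frac = (temp_c - low) / (high - low)
    return round(min_pwm + frac * (max_pwm - min_pwm))

# Applying it would look roughly like this (the hwmon path is an assumption
# and differs per motherboard; pwmN_enable must usually be set to manual first):
# with open("/sys/class/hwmon/hwmon1/pwm2", "w") as f:
#     f.write(str(fan_pwm(hottest_disk_temp)))

print(fan_pwm(28), fan_pwm(38), fan_pwm(50))
```

     Keeping a nonzero floor (min_pwm) avoids stalling the fans at idle, and the ramp only reaches full speed at the temperature you actually care about.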
  3. From what they are saying here, two things are important: 1. that the PSU can deliver the power you need; 2. that you do not make your power cables glow, in other words, that you balance the power consumption. Personally, I would go for the PSU with multiple rails because it has multiple over-current protections.
  4. I have ordered a bigger PSU, a be quiet! PURE POWER 10 - 700W CM; I will send back the 500W one as soon as the new one arrives. Theoretically, with the Dell PERC H310 and the 8 SATA sockets on the motherboard, I have the possibility to connect 16 disks; practically, with the Node 804 case, I cannot install more than 12 disks.
  5. At this moment, with all the disks installed (1x 2.5 inch SSD 250GB, 7x 2.5 inch HDD 500GB, 2x 3.5 inch HDD 1.5TB), the power consumption with all drives spinning should not be more than 10 Amps. The SSD consumes 1.40A, the small 2.5 inch disks consume around 0.80A each, except the 2 Toshiba, which consume 1.10A, and the 2x 1.5TB disks consume 1.25A each:
     1 x 1.40A = 1.40A
     5 x 0.80A = 4.00A
     2 x 1.10A = 2.20A
     2 x 1.25A = 2.50A
     =================
     Total: 10.10A < 20A (for the moment)
     If I replace all the disks except the SSD cache disk and add 2 extra ones (to have 12 in total) with Seagate IronWolf 4TB, which consumes 1.75A, that would lead to: 11 x 1.75A + 1 x 1.40A = 20.65A. Do I have to replace the power supply? Taking into account that the maximum number of disks it is possible to install into my case is 10x 3.5 inch + 2x 2.5 inch: how big should the PSU be?
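     The tally above can be checked in a few lines (per-drive spin-up currents are the figures quoted in the post, not datasheet values I have verified):

```python
# Sanity check of the 12V current tally above (spin-up amps per drive class).
drives = {
    "SSD cache":      (1, 1.40),
    "2.5in HDD":      (5, 0.80),
    "Toshiba 2.5in":  (2, 1.10),
    "3.5in 1.5TB":    (2, 1.25),
}
total = sum(count * amps for count, amps in drives.values())
print(f"current build: {total:.2f} A")    # prints "current build: 10.10 A"

# Hypothetical full upgrade: 11 IronWolf 4TB at 1.75 A plus the 1.40 A SSD.
upgrade = 11 * 1.75 + 1 * 1.40
print(f"full upgrade:  {upgrade:.2f} A")  # prints "full upgrade:  20.65 A"
```

     Note that worst-case draw only occurs when all drives spin up at once; staggered spin-up, where the controller supports it, lowers the peak considerably.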
  6. Saved me as well. Thank you very much for the post... 
  7. The controller is not defective, but you have to do a hardware hack to make it work. See below: and this link. Flashed the PERC H310 controller in IT mode using Update on 17.04.2017, v4 <--- this is the latest, use this one! Firmware is still P20.00.07.00. Added another 2 disks of 1.5TB; because the SAS cable that I have has only 2 SATA connectors, I replaced the old 500GB parity disks with the ones connected to the PERC H310 controller. The Parity-Sync/Data-Rebuild is in progress. A big thank you to everybody who commented on this topic and helped me to understand better how unRAID works. End of story.
  8. Bad news: the Dell PERC H310 SATA controller seems to be defective. When it is installed, the motherboard only beeps instead of starting.
  9. I have to see how it will go with BTRFS on my unRAID. Worst case scenario, I will convert the disks one by one to XFS until all are XFS :). It is a pity that the quota features on BTRFS have problems. I hope the native checksum error detection works without any issues. By the way, the Dell PERC H310 PCI SATA controller arrived. I have to reflash it for unRAID and find some cables.
  10. And the filesystem is ... BTRFS. Already installed Windows 10, Lubuntu 18.04 and one Docker application. Thanks @johnnie.black for the info and tips. I was very happy when I read that I can assign quotas as well, but since it is not stable enough, I will not use that feature. The cache disk option is quite cool! Most of the time the other disks are stopped, the CPU temperature is at 35°C and the case at 29.8°C, with the case fan at low speed.
  11. Mouse control exists, but the pointer is not visible with UnRAID 6.5.2 - VM Windows 10 x64
  12. In theory, theory and practice are the same. In practice, they are not. (Albert Einstein) I think it is important not only to understand how it works, but also to know what to do when it happens. By the way, I am still undecided about which filesystem to use. For BTRFS, everybody just says that you need to do some command-line work to manage the snapshots, but I could not find, on our forum, a tutorial on how to take a snapshot, restore a snapshot, see how much disk space is occupied by one snapshot or by all snapshots, and so on...
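      For reference, the usual btrfs-progs invocations can be sketched as below. The command names are from btrfs-progs; the paths are hypothetical examples, and per-snapshot space accounting needs quota groups enabled (which, as noted above, have known problems).

```python
# Hedged cheat-sheet: build the usual btrfs-progs command lines for snapshots.
# These return argv lists (e.g. for subprocess.run); paths are hypothetical.
def snapshot_create(subvol, dest, readonly=True):
    """Take a snapshot of `subvol` at `dest`, read-only by default."""
    cmd = ["btrfs", "subvolume", "snapshot"]
    if readonly:
        cmd.append("-r")
    return cmd + [subvol, dest]

def snapshot_restore(snap, subvol):
    """Restoring is just snapshotting the (read-only) snapshot back read-write."""
    return ["btrfs", "subvolume", "snapshot", snap, subvol]

def snapshot_usage(snap):
    """Show shared vs exclusive space for a snapshot path."""
    return ["btrfs", "filesystem", "du", "-s", snap]

print(" ".join(snapshot_create("/mnt/disk1/data", "/mnt/disk1/.snaps/data-1")))
```

      With quotas enabled, `btrfs qgroup show` gives the per-snapshot totals that `filesystem du` only approximates.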
  13. Tested the above scenario myself as follows: on my test unRAID there are 2 parity disks and 2 data disks. I intentionally removed the 2 data disks and replaced them with 2 other new disks. You need to assign the new disks to the array manually, then the rebuild starts automatically. While the rebuild was in progress, the 2 missing disks were emulated by the 2 parity disks. See the image below: Disk3 and Disk4 "Device Contents Emulated". After 3 h 15 min and 50 s the array returned to normal.
  14. Thank you very much @jonathanm, @pwm and @Frank1940 for your answers. Now, thanks to you, it is all clear to me.
  15. Do I need to make some screenshots of the disk order, or is there a config/log file somewhere that I can save with a timestamp?
  16. That is really good news. The only fear I had was ending up with a broken array due to a banal power failure. I understand and expect to have data loss on file writes in progress; that is normal. I have to do some research on BTRFS; the ability to make snapshots and the native checksum error detection are quite appealing to me. It does not matter too much that at this moment there is no GUI support for snapshots. This means that the two parity disks do not hold the same data? It is actually doing a kind of parity of the parity. In short, if 2 data disks are broken, one parity disk will help to recover the first broken data disk and the second parity disk will help to recover the second data disk. As you, @pwm, said: the more parity disks, the more complicated the formula to recover the data.
  17. The ability to do snapshots sounds interesting to me, as well as the native checksum error detection, but... "More susceptible to corruption if you have dirty shutdowns" does not sound too good. "Most people here, I'd say 95%+, are on XFS." - I will go with the crowd... use XFS. Now I know for sure that I need 2 parity disks. However, it is unclear to me how 2 parity disks are able to recover 2 data disks, even after reading the documentation from here and watching the 2 YouTube videos: unRAID Parity Made Simple, as well as How does unRAID Parity works and why did I use it...? The same question is also unanswered here: Can someone explain to me how the 2 parity disks work so that 2 data disks can be recovered?
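      The short answer to the question above: the second parity (Q) is not a copy of the first (P); it weights each disk differently, so P and Q give two independent equations that can be solved for two unknown disks. A minimal per-byte sketch of the RAID-6-style scheme is below. This illustrates the math, not unRAID's actual implementation, which operates on whole blocks; the GF(256) polynomial 0x11d and generator 2 are the standard RAID-6 choices.

```python
# Dual parity demo over single bytes in GF(2^8).
def gf_mul(a, b):
    """Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11d)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    # Brute-force inverse; fine for a demo.
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def make_parity(data):
    p, q = 0, 0
    for i, d in enumerate(data):
        p ^= d                            # P: plain XOR, same as single parity
        q ^= gf_mul(gf_pow(2, i), d)      # Q: each disk gets a different weight
    return p, q

def recover_two(data, lost_i, lost_j, p, q):
    """Rebuild disks lost_i and lost_j (entries at those indices are ignored)."""
    px, qx = p, q
    for k, d in enumerate(data):
        if k not in (lost_i, lost_j):
            px ^= d
            qx ^= gf_mul(gf_pow(2, k), d)
    # Now px = d_i ^ d_j and qx = g^i*d_i ^ g^j*d_j: two equations, two unknowns.
    gi, gj = gf_pow(2, lost_i), gf_pow(2, lost_j)
    dj = gf_mul(qx ^ gf_mul(gi, px), gf_inv(gi ^ gj))
    di = px ^ dj
    return di, dj

disks = [0x5a, 0x13, 0xc7, 0x88]          # one byte standing in for each disk
p, q = make_parity(disks)
di, dj = recover_two(disks, 1, 3, p, q)
print(di == disks[1] and dj == disks[3])  # prints True: both disks rebuilt
```

      Because the weights g^i are all distinct, gi ^ gj is never zero, so the pair of equations always has a unique solution; that is why any two failed disks can be rebuilt, and why each extra parity level needs a more complicated formula.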
  18. Ok - no way for the Syba SATA III 4 Port PCI-e x1 controller. I will try to find a used LSI-based controller. All the disks are thoroughly tested, but they are just normal disks, not NAS-designed ones like WD Red, Seagate IronWolf or HGST Deskstar. I will note down the counters for the "199 UDMA CRC error count" errors of each disk and keep an eye on them. Which is also the important reason why parity is intended to improve availability, but will not remove the need to back up important files. And even with an arbitrary number of parity disks, only an off-site backup will protect from a major fire. I just hope I will not end up losing more than a disk at a time. Of course, I am aware that the system allows losing no more than one data disk at a time plus one parity disk (only if there are 2 installed). In German-speaking countries there is a saying in IT: "No backup, no compassion!". I will set up my array with the XFS filesystem, as according to UnRAID_v6_FileSystems "ReiserFS seems to be reaching end-of-life" and for BTRFS "it is not as well-tried as XFS and the recovery tools are not as good." Besides the situations when I have a power failure, the system is not gracefully shut down or a filesystem check is required, is there any case when I need to start the array in maintenance mode? On Array Status-Maintenance Mode it states that "When operating in Maintenance Mode, the array is Started as usual; however, the array disks and cache disk (if present) are not Mounted, and hence not exported either". Does that mean the shares and everything are available as usual, in read-only mode, or not available at all?
  19. Hi Jonathan, Thanks for your answer. It is just a theoretical question. At this moment I have just built the appliance and precleared all the disks, including the SSD. The X11SSM-F motherboard has 8 SATA connectors (the power supply is a be quiet! Pure Power 10 ATX). The disks which are installed are salvaged 2.5 inch disks (various brands: HGST, Samsung, Toshiba, Hitachi) from old notebooks, plus a 250GB Samsung SSD for cache. I also have 2x 3.5 inch WD Caviar Green WD15EARD disks of 1.5 TB available. Should I also install those into the array, one as parity and the second as storage? Are the 3.5 inch disks which I have better than the 2.5 inch ones? Additionally, I will install a Syba SATA III 4 Port PCI-e x1 controller to have 12 SATA connectors in total. I know that is not the best option, but this is what I can afford at this moment. What looks strange to me is that 3 of the disks already have the "199 UDMA CRC error count" counter bigger than 0. I had previously built a test unRAID on a Bqeel Z83V MINI PC, using a USB hub and 4 HDDs, without any UDMA CRC error. That looks somehow odd on the X11SSM-F, since the SATA connections should be far better than SATA-to-USB adapters. Please comment on my configuration and give me some tips and hints. Thank you in advance. Best regards, Florin
  20. Hi, Let's assume I have a broken disk in the array and I am replacing it. What am I allowed to do during the rebuild and what not? For example: 1. May I copy new files, or do I need to wait until the rebuild is completed? 2. May I read files from the array? Will it be slower? 3. May I change the share permissions and add/delete users? 4. May I install/uninstall plugins? Thank you. Best regards, Florin
  21. Hi, I am trying to read the documentation for UnRAID_6 and every time I click on an image to see it bigger I get a 404 error. Some URL examples: http://lime-technology.com/wiki/UnRAID_6/VM_Management#Using_Virtual_Machines http://lime-technology.com/wiki/File:Bios-virtualization1.png http://lime-technology.com/wiki/File:Bios-virtualization2.jpg http://lime-technology.com/wiki/File:Bios-virtualization3.JPG same here: http://lime-technology.com/wiki/UnRAID_6/Getting_Started#Assigning_Devices_to_the_Array_and_Cache http://lime-technology.com/wiki/File:Configuringarray1.png same here: http://lime-technology.com/wiki/UnRAID_6/Docker_Management#Controlling_Your_Application http://lime-technology.com/wiki/File:Dockerguide-controlling.png Would it be possible to fix this soon? I am thinking of buying a license, but for the moment I was not able to read the documentation... Thank you in advance. Best regards, Florin