rilles

Members
  • Posts: 93

Everything posted by rilles

  1. after more reading.. found this related topic
  2. Or a load based parity check. As an example, I have a ZFS mirror - when it does a scrub it hammers the disks, but when I use the file system the scrub seems to have very little impact, and the scrubbing picks up again after the transaction. The unraid parity check seems to run at max priority, probably to minimize check time. As mentioned, with huge drives taking more than a day to check, some kind of load based scheduling option would keep parity checks from hogging IO, at the expense of completion time.
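On stock Linux, a crude version of this idea can be approximated with I/O priorities. A sketch, assuming the heavy scan job can be wrapped in a launcher (ionice is standard util-linux, but the priority only has an effect under the CFQ/BFQ I/O schedulers):

```shell
# Hypothetical sketch: run a heavy sequential job at the lowest best-effort
# I/O priority so normal array traffic wins contention for the disks.
# The dd below is just a stand-in for the real parity-check process.
ionice -c 2 -n 7 dd if=/dev/zero of=/dev/null bs=1M count=100
```

This only deprioritizes, it doesn't pause-and-resume the way a ZFS scrub does, so it's a rough approximation of load based scheduling at best.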
  3. hmmm, I run a ZFS mirror cluster on Ubuntu 18.04. It really is an amazing file system for its features and ease of use. You don't require lots of RAM or ECC RAM - this is a myth. 1. Plain ZFS can run on 1GB of RAM with any size array; if you have more RAM it will use more as a cache. Once you enable dedupe and L2ARC you need lots of RAM (the rule of thumb is about 1GB of RAM per TB of storage). 2. If you use ZFS for business reasons, then like any server you should use ECC RAM. For home use, storing movies... you don't need ECC RAM, just as btrfs doesn't require ECC. ZoL is ZFS on Linux; it functions in linux like a native file system. BSD has switched to this. https://zfsonlinux.org/
  4. newegg.ca has an even better deal at $589 each.
  5. I have a testing unraid system built from an old Acer H340 WHS. 2GB of RAM and a 1 core Atom CPU. It runs the array fine plus a few dockers.
  6. 5.5? that's still a baby. I have a fleet of 2TB drives I bought in 2009/2010, and I deliberately bought different makes. Only my WD EARS drives have started to show end of life weirdness. But I have noticed in 6.4.1 (it may have been there before) there is now a spot for a drive manufacture date and a warranty end date. Not sure what unraid will do with that info, but it is information that could be harvested for a report by a plugin..
  7. yes. I moved to a new case - I didn't track what was plugged into what - and when I started unraid from the new box it recognized all the drives for what they should be.
  8. The downside to these cheap SATA controllers seems to be lots of reports of reliability issues - from the card stopping working to corrupted data. The issues I've seen were enough for me to go for an LSI controller; I got a used Dell PERC HBA that was already flashed into IT mode for a decent price from ebay.
  9. Marked this topic solved. The problem is not solved, but the why is.
  10. While trying to debug an issue on my main handbuilt unraid system, I wanted to build a second test system, and I wound up getting my hands on a free Acer H340 with a bunch of 250GB drives in it. By default it has Microsoft WHS on it and at first would appear to be designed so that's all it can do. After lots of googling I found a way (a dip switch) to enable entry into the bios so that it can boot from a USB stick - normally it boots from internal flash, but with no GUID that I could see for unraid. Replaced the dusty tired 120mm fan with a Noctua 140mm fan designed to fit into 120mm mounts (the drives in there would heat up over 40C during parity checks). The system runs nicely now; it has an Intel Atom cpu with 2GB of RAM. Picking up one of these on the cheap, you can turn it into a great basic NAS box.
  11. It's an H270 kabylake motherboard with a built in Intel NIC. It does not appear to be the NIC or any hardware; it's an issue with a certain Linux smb storage interface (GVFS) that the file manager uses to mount smb shares. The GVFS developers don't think it is important enough to spend time fixing. NFS or an explicit "mount -t cifs ...." nets me 100MBs transfer speeds. I guess unraid users are heavily Windows client based; I didn't find anything here about this from searching.
  12. sigh.. it doesn't seem to be a Linux OS issue.. the file manager seems to be the issue. The Nemo/Nautilus file manager uses GVFS when it mounts a smb share. GVFS mounts seem brutally slow, and there is no fix for this bug that I can find. https://github.com/linuxmint/nemo/issues/1304 https://bugs.launchpad.net/ubuntu/+source/gvfs/+bug/1236619 https://www.bountysource.com/issues/47919367-large-file-throughput-via-gvfs-smb-is-over-2x-slower-than-smbclient-or-fstab-mount-cifs I guess if I have a lot of files to copy I can cifs mount it explicitly, which apparently doesn't have the speed issue.
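For reference, a permanent cifs mount that bypasses GVFS could look like the fstab entry below; the server name "tower", share "media", and mount point are made-up placeholders:

```
# Hypothetical /etc/fstab entry - "tower" and "media" are placeholder names.
# A credentials file keeps the password out of fstab.
//tower/media  /mnt/media  cifs  credentials=/home/user/.smbcred,uid=1000,iocharset=utf8  0  0
```

The one-shot equivalent is `sudo mount -t cifs //tower/media /mnt/media -o credentials=/home/user/.smbcred`, again with placeholder names.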
  13. progress! sorta. After a night of extreme testing I have found the real problem - hopefully the solution is easy. On my main PC, which is Linux Mint, samba transfers seem capped at 30MBs to/from unraid. If I mount an NFS share: 101MBs to/from unraid. If I use a Windows 10 machine, samba transfers are 101MBs to/from unraid. Thanks to the Win10 transfer graph I can see Mint -> Win10 transfers start at 90MBs then drop to 30MBs after 60% of a 6GB file. --> What could be causing Mint (Ubuntu 16.04 based) to have network throttling issues with samba?
  14. Transferred from different desktops and laptops that had Ubuntu, Linux Mint, Windows 7 and Windows 10 - same result. Transferring between the desktops resulted in 80MBs speeds. Definitely something on the unraid box.. be it hardware or a config setting. Next steps -- plug in a new nic and build a new unraid box from junk parts to see if I can replicate the issue.
  15. An "iperf3" test between unraid and my PC shows 993Mbit/s transfer rates.. so there doesn't appear to be a network bottleneck.
  16. flying at max (for sata2) speed.

      writing 10240000000 bytes to: /mnt/cache/test.dat
      895405+0 records in
      895405+0 records out
      916894720 bytes (917 MB, 874 MiB) copied, 5.00294 s, 183 MB/s
      1751206+0 records in
      1751206+0 records out
      1793234944 bytes (1.8 GB, 1.7 GiB) copied, 10.0087 s, 179 MB/s
      2510890+0 records in
      2510890+0 records out
      2571151360 bytes (2.6 GB, 2.4 GiB) copied, 15.0141 s, 171 MB/s
      3284608+0 records in
      3284607+0 records out
      3363437568 bytes (3.4 GB, 3.1 GiB) copied, 20.0201 s, 168 MB/s
      4097778+0 records in
      4097778+0 records out
      4196124672 bytes (4.2 GB, 3.9 GiB) copied, 25.0267 s, 168 MB/s
      4940662+0 records in
      4940662+0 records out
      5059237888 bytes (5.1 GB, 4.7 GiB) copied, 30.0321 s, 168 MB/s
      5805463+0 records in
      5805463+0 records out
      5944794112 bytes (5.9 GB, 5.5 GiB) copied, 35.0417 s, 170 MB/s
      6550138+0 records in
      6550138+0 records out
      6707341312 bytes (6.7 GB, 6.2 GiB) copied, 40.228 s, 167 MB/s
      7326005+0 records in
      7326005+0 records out
      7501829120 bytes (7.5 GB, 7.0 GiB) copied, 45.0555 s, 167 MB/s
      8017442+0 records in
      8017442+0 records out
      8209860608 bytes (8.2 GB, 7.6 GiB) copied, 50.0618 s, 164 MB/s
      8892727+0 records in
      8892727+0 records out
      9106152448 bytes (9.1 GB, 8.5 GiB) copied, 55.1078 s, 165 MB/s
      9681401+0 records in
      9681401+0 records out
      9913754624 bytes (9.9 GB, 9.2 GiB) copied, 60.0755 s, 165 MB/s
      10000000+0 records in
      10000000+0 records out
      10240000000 bytes (10 GB, 9.5 GiB) copied, 61.8616 s, 166 MB/s
      write complete, syncing
      removed '/mnt/cache/test.dat'
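For anyone wanting to repeat a test like this: output in that shape comes from a streaming dd write; a minimal sketch is below, where the target path is an assumption (on unRAID you'd point it at /mnt/cache/test.dat, as in the output above).

```shell
# Stream zeros to the target and let dd report sustained MB/s.
# TARGET defaults to /tmp here; on an unRAID cache drive use
# /mnt/cache/test.dat instead. conv=fsync forces a flush so the
# final number isn't just RAM cache speed.
TARGET=${TARGET:-/tmp/test.dat}
dd if=/dev/zero of="$TARGET" bs=1M count=100 conv=fsync
rm -f "$TARGET"
```

Writing zeros overstates speeds on compressing file systems, so treat it as a rough ceiling rather than a real-world number.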
  17. I set up my unraid system years ago in the 4.x era with an AMD Sempron and an Asus M4A785-M motherboard, and it served me well. It was always pokey, getting 20MBs tops on writes (no cache drive). Fast forward to today: I decided to give some life back to the old horse. I converted the file system from RFS to XFS and put an old 7200rpm drive in as a cache drive. 30MBs tops during read and write from my desktop PC on a 1Gb wired network. I replaced it with an SSD.. same speeds. A parity check nets me 105MBs on average across all drives, and so do internal file copies. An hdparm test shows 110 for the spinning disks and 525 for the SSD. Looked at the network: 1Gb on the NIC, and when I do an "iperf3" test between unraid and my PC it shows 993Mbit/s transfer rates.. so it seems not a cable/NIC issue. What could be causing the slow transfer speeds? I've dug through the forums and no dice. My NIC is not stuck at 100Mbs, and my SSD doesn't need trim as I just put it in (but I did a trim anyways). tower-diagnostics-20180207-1022.zip
  18. Well, I have two pulling air in near the bottom for the drive bays and one pushing air out the back, so I assume this would result in positive air flow. Other than mesh.. no real filters on the box, so I guess an air blast every few months. I put a kleenex on the case top grill.. it doesn't move, so I'm guessing a little bit of suction from the rear fan. I can see that being a dust inhaler.. I will put some cardboard on the top and live with the rear fan only. Drive temps are good.
  19. I recently got a CM HAF 912 case. It has 2 x 120mm in front and 1 x 120mm rear exhaust. It also has room on top for 2 x 120mm fans (or 1 x 200mm fan). I don't have any graphics cards or fancy coolers. I put one 120mm on the back.. but I'm wondering what to do about the top.. it's open mesh. Should I leave it as is? seal the top? remove the rear fan and replace it with 2 x 120mm on the top? What's the best for airflow for hard drives? Noise is not an issue; the box is shoved into a back room.
  20. I have the same cpu/mobo. You can convert your sempron into an athlon II 440 with a simple bios tweak. It really just means you get two cores instead of one. If you turn on virtualization in the bios you can run VMs - I found the sempron ok for VMs, but it spends much of its time maxed. If you do upgrade the cpu (slim pickings for AM3 these days though), I would be interested in the performance. I was looking at a new mobo (pc-mate h270) with a kaby i3 or i5, which should not be too much.. ram is the price killer these days.
  21. As an owner of an icydock 5-in-3... how is the air spacing in this cage? My icydock is ok with an 80mm fan on high out back, but I've seen many with really poor venting, which can result in high temps.
  22. (re: First Build) This is probably right - for a cheapskate like me it got me by, but it's barely enough for the upgrade process. In today's world, 4GB is probably the smallest you want for a new mobo.
  23. (re: First Build) I have the PC-MATE H270 motherboard as my main system; it's a great mobo if you just want a mobo to work. For a basic NAS the G4560 is plenty powerful. Cut the RAM to 2GB and use the stock cooler to save $$. 4 drives? I would suggest a 400W power supply. I have a Sempron 140 on my NAS - it's rarely maxed (and even then it's for the gui rendering) and it never exceeds 1GB of RAM usage (1.5GB used for unraid upgrades). My system uses 90W idle, 115W during a parity check, with 6 5600rpm drives.
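To put power numbers like those in context, idle draw converts to yearly consumption with watts x hours / 1000 = kWh. A quick sketch (the $0.12/kWh electricity rate is an assumed example price, not from the post):

```shell
# 90 W idle, around the clock: 90 * 24 * 365 / 1000 = 788.4 kWh per year.
# The $0.12/kWh rate below is a made-up example; use your local rate.
awk 'BEGIN {
  kwh = 90 * 24 * 365 / 1000
  printf "%.0f kWh/yr, ~$%.0f/yr @ $0.12/kWh\n", kwh, kwh * 0.12
}'
```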