Everything posted by pwm

  1. No, I'm not involved in HDD manufacturing. But I have developed factory test equipment and software for a number of customers and products, and helped a number of other customers design their firmware so that as much functionality as possible can be tested without tying up expensive external test equipment. That's also why most electronics have a number of extra connector pins, or patterns of gold-plated PCB pads, easily accessible. Much of the equipment I have worked with also requires support for advanced self-tests for the full product lifetime. Infrastructure equipment is often installed and in use for 10 years or more, and it isn't practical to send out a technician unless something really is wrong. So quite often, the electronics is developed with internal loopback support to allow in-system tests of critical subsystems. The HDD is a quite interesting concept - from a mechanical perspective, it shouldn't have been possible to do what is actually done. There are a billion jokes about what cars would be like if they had progressed as far as integrated circuits. But it isn't really fair to compare mechanics and electronics. What if the mechanics of cars could have progressed as much as the mechanics of a HDD?
  2. Grab the zip file with the anonymized data and check for yourself what information remains.
  3. Rsync is excellent at allowing you to restart aborted copy operations. That's why I suggested rsync: you can copy during off-hours and interrupt the copy whenever you want personal access to your files, without rsync stealing bandwidth.
  4. I'd recommend producing a process listing before the reboot - just to see if some process is consuming lots of CPU or has eaten lots of RAM. If there is an issue, it can only be fixed if people help to collect evidence.
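A minimal way to collect that evidence before rebooting (file names here are just examples):

```shell
# Capture the heaviest CPU and memory consumers to files, so there is
# something concrete to analyze after the reboot.
ps aux --sort=-%cpu | head -n 10 > /tmp/ps-cpu.txt
ps aux --sort=-%mem | head -n 10 > /tmp/ps-mem.txt
```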
  5. They need to perform a full surface scan. It is not possible to create fault-free surfaces, so they need to identify bad sectors and set up an initial list of remapped sectors, because all writes are performed without read-back. Writes depend on trust - one of the trusted assumptions is that the sector surface has already been screened for physical faults. And that screening is performed with the drive's own heads. When you buy a disk and see that the remapped count is 0, that just means there has not been any new remapping after the drive left the factory. There are several very lengthy steps required when manufacturing disks that just can't be avoided. Even before the surface testing, it's often the drives themselves that spend a long time writing down the servo information that allows the final HDD to find the individual tracks and sectors. Remember that the drives can perform these tests by themselves. And in the factory, you don't need a SATA controller for the testing, so it's possible to test a huge number of devices concurrently.
  6. Secure SSH: It isn't unlikely that the people who find it challenging to set up a key etc. are the ones who run an extended risk of triggering fail2ban on themselves too.
  7. A CNAME is a record mapping an alias name to the canonical name. So the service-oriented name www.somedomain.com might get normalized into the actual machine name rambo.somedomain.com. You want to visit www.somedomain.com; the CNAME record translates www.somedomain.com into rambo.somedomain.com, and then an A or AAAA record translates rambo.somedomain.com into an actual IP number. In this case, you want a translation from a name into an IP number. So the DNS would register A records for IPv4 or AAAA records for IPv6. The browser asks the DNS for help with '<some random hex characters>.unraid.net', and the DNS locates a matching A or AAAA record and returns an IPv4 or IPv6 address. The browser in this case receives back a private IP number (like 192.168.1.199) that is only meaningful within your local network, inside your firewall. https://en.wikipedia.org/wiki/CNAME_record
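As a sketch, a hypothetical zone-file fragment for somedomain.com (BIND syntax; the names and addresses are made up for illustration) would express the two kinds of lookup like this:

```
; alias -> canonical name
www     IN CNAME  rambo.somedomain.com.
; canonical name -> actual IP numbers
rambo   IN A      203.0.113.10
rambo   IN AAAA   2001:db8::10
```

A resolver that is asked for www.somedomain.com follows the CNAME to rambo.somedomain.com and then returns the A or AAAA data for that name.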
  8. Secure SSH: Turning off password login means there will not be any repeated login attempts - the attacking bots move to another target the moment the SSH server wants to negotiate an SSH key instead of asking for a password.
  9. The following would indicate that the controller ports do matter in your system - two identical disks with significant transfer speed differences. Disk 8: WDC WD20EZRX-00DC0B0 WD-WMC300473838 2 TB 113 MB/sec avg Disk 9: WDC WD20EZRX-00DC0B0 WD-WMC300180557 2 TB 83 MB/sec avg The alternative would be that the disk with the higher serial number has some firmware improvement making it perform better.
  10. I have run quite a lot of systems with 2.5" drives. Often as mirrored system drives (in my case, that means home directories, dockers, ...) or mirrored data drives. But basically for drives I want spinning continuously with low electricity cost. I have had a quite good track record with both Seagate and WD Blue and Black drives for 24/7 use. Right now, I only install 2.5" drives where I know there will be a very large number of TB of lifetime writes, where I would have to either buy very expensive SSDs or replace them regularly. Or where the size needed makes SSDs too expensive.
  11. It's actually expected to get a tiny bit better performance for really large transfers, because you make fewer transitions from your program into the Linux kernel. But the difference should be negligible. That's what can be seen for transfer sizes between 32kB and 16MB. It's hard to tell exactly how much it matters from a single test run - there is quite a bit of random noise in every time measurement. But something else must happen for your 32MB test. It's hard to say what optimization step you trigger in your code or in the Linux kernel when going for 32MB blocks. Will you see similar times as for 32MB blocks if you continue and write 64MB, 128MB and 256MB blocks? One note is that some Intel processors can support 2MB and even 1GB virtual memory pages besides the normal 4kB pages of the old 386 chips. So it isn't impossible that you may trigger some very specific optimization code that isn't really meaningful within the scope of the current project.
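A rough way to reproduce this kind of measurement is to time equally sized sequential writes at different block sizes; /tmp/blocktest is just a scratch file, not anything from the original test:

```shell
# Write the same 64 MB total with different block sizes and let dd
# report the elapsed time and throughput for each run.
for bs in 32768 1048576 16777216 33554432; do
  count=$((67108864 / bs))
  echo "block size: $bs bytes"
  dd if=/dev/zero of=/tmp/blocktest bs=$bs count=$count conv=fsync 2>&1 | tail -n 1
done
rm -f /tmp/blocktest
```

conv=fsync makes dd flush to the device before reporting, so the numbers include the actual write, not just filling the page cache.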
  12. OK - so does your DNS server have a .local domain? Because avahi-daemon doesn't like that. https://askubuntu.com/questions/718653/avahi-daemon-repeatedly-registers-withdraws-address-record-causing-network-failu https://unix.stackexchange.com/questions/352237/avahi-daemon-and-local-domain-issues
  13. I don't think the drive can perform better for larger transfers than the max sectors per request value, and I don't think the Linux disk subsystem will attempt any larger transfers. That parameter should be a "contract" in the interaction between disk and OS, and the OS shouldn't try to "overclock" by sending larger write blocks than the drive promises to support. But since hwinfo shows both a max and a current value, this indicates a tunable parameter. So the drive could potentially be tuned to a lower value. The question then is whether your API shows the max value or the current value. My guess is that if the drive specifies a max of 16 sectors and is tuned to 8 sectors, then Linux will never send more than 8 sectors at a time. But will it be the value 8 or the value 16 you see in your blockdev information? I'm assuming you are thinking about the parameter "--getmaxsect", but the documentation doesn't mention whether it corresponds to "max" or "current" from hwinfo. I would suspect it's the "current" value, i.e. taking any tuning into consideration. The -m parameter of hdparm relates to the tunable value, i.e. "current", so it would be strange if --getmaxsect of blockdev didn't too. http://man7.org/linux/man-pages/man8/blockdev.8.html http://www.manpages.info/linux/hdparm.8.html
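One way to compare the two views would be something like the following sketch (replace /dev/sdX with a real device; the commands need root, and the exact identification output varies by drive):

```shell
# Sectors per request as reported through the block layer:
blockdev --getmaxsect /dev/sdX
# The drive's own identification, which lists both the maximum and the
# currently tuned multi-sector setting (MaxMultSect / MultSect):
hdparm -i /dev/sdX
```

If the two numbers differ after tuning with hdparm -m, that would answer which of "max" and "current" blockdev reports.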
  14. I'm not in too much of a hurry. I would go for rsync - and I would then run rsync again to verify whether there are updated/new files since the first copy started. With 110 MB/second you can manage about 400 GB/hour, so just under 3 days. And if you need to, you can stop the copy process in the middle if you want to watch a movie without the copy process stealing too much disk/network bandwidth. For me, it wouldn't be the total time that matters most but how much the copy process affects my access to the data. Just remember that you want to use turbo-write, i.e. reconstructive writes, or the receiving machine will not be able to keep up with the network link speed. I'm assuming most of the data is media data (movies, audio, images), so there is no gain to hope for from stream-compressing the data.
  15. That's a question that only the company hosting the DNS can answer. But it isn't impossible that the DNS server has a log entry with your public IP number from when the unRAID system reached out to register the CNAME entry. And it isn't impossible that the DNS server has log entries with your public IP number from when you make hostname lookup requests.
  16. I would assume the extended SMART test is the same test the factory runs before shipping (but they then clear all SMART counters, logs etc. to leave no trace of the hours the disk was running during testing). But since the drive has suffered unknown amounts of vibration, shock, temperature etc. on the full route from factory to end user, it definitely doesn't hurt to test-write every sector and then let the extended test verify that all sectors are still trustworthy.
  17. This has nothing to do with the controller. It's the drive itself that tries to monitor itself, and it's the drive that has aborted writes because it has somehow concluded/suspected that the write head wasn't positioned well enough. It isn't an error in itself, but if the frequency of high-fly errors starts to increase, then it might be a reason to reevaluate. Since we don't know exactly how the drive measures the flying height - in write mode the write head is expected to be aligned with the target track, which means the read head is not aligned and can't read data from the current track - it's hard to know what will trigger the high-fly detection. But it isn't impossible that vibrations between the disks are causing the high-fly writes.
  18. There are thousands of reasons to get call traces, so you can't assume that your call traces mean you suffer from memory leaks.
  19. There is a DNS server handling the unraid.net domain. unRAID will report your local IP number and this "random hex name" to that DNS server, and the DNS will update the host list for the domain unraid.net to include your unRAID system. If you change the IP of your unRAID, it will report the IP number change to the DNS to make sure the next DNS lookup points to the new IP. Anyone who knows this "random hex name" and asks the DNS will manage to perform a DNS lookup and find the private IP of your unRAID. Which obviously doesn't matter, because I can't make use of the IP 192.168.1.199 from my home to reach your unRAID - my machine would just try to find a machine with that IP within one of my own networks.
  20. For normal media file server use, the OS doesn't need very much RAM for caching. The media files themselves tend to be huge, so they quickly blow through any amount of cache RAM. Well-behaved programs that make one-pass reads through large files should really specify that the file data shouldn't be buffered, so as not to evict the file system and directory structure metadata from the cache. If you run a database server on the other hand, you want large amounts of cache, since you have pseudo-random accesses to the database and additional cache RAM can greatly increase the number of database transactions per second the machine can manage.
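As an illustration of the one-pass idea, GNU dd can flag a read so the kernel drops the file from the page cache once it is done (the scratch file here stands in for a large media file):

```shell
# Create 8 MB of scratch "media" data.
dd if=/dev/zero of=/tmp/media.bin bs=1M count=8 conv=fsync 2>/dev/null
# Read it once, advising the kernel not to keep the data cached, so the
# one-shot file data doesn't evict file-system metadata from RAM.
dd if=/tmp/media.bin of=/dev/null bs=1M iflag=nocache 2>/dev/null
rm /tmp/media.bin
```

Programs can do the same thing directly through posix_fadvise() with the POSIX_FADV_DONTNEED hint.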
  21. You don't have snapshot support in the GUI, but you can still create a snapshot and mount it separately for the backup to read from. ZFS and BTRFS have lots in common. BTRFS doesn't have the deduplication functionality of ZFS - a feature which, on the other hand, has locked lots of users out of their data because they filled their storage pool to the point where the deduplication tables outgrew the maximum RAM capacity of the motherboard. Something that isn't obvious until they reboot and find they can no longer mount the ZFS array until they build a brand new system. The marketing point for unRAID as a storage system is for users who don't want to spin up all drives of the array when making disk accesses - which means having parity without striping. If you want the bandwidth of a striped RAID and are OK with the recovery issues of a striped RAID, then the obvious route should be to select a system that stripes the data.
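The snapshot-then-backup idea can be sketched like this for a BTRFS volume (the paths are hypothetical, and the commands need a real BTRFS mount plus root):

```shell
# Freeze a consistent, read-only view of the data...
btrfs subvolume snapshot -r /mnt/cache /mnt/cache/.backup-snap
# ...let the backup read from the frozen view while the live data
# stays fully writable...
rsync -a /mnt/cache/.backup-snap/ /mnt/backup/
# ...and drop the snapshot once the backup has completed.
btrfs subvolume delete /mnt/cache/.backup-snap
```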
  22. Lots of phones have a very high resolution. But to make the text big and easy to read, they still report a quite low resolution to the web server and to the JavaScript engine. It doesn't help that my phone has QHD resolution if it reports itself as 846x412 when accessing a web server. Try visiting http://www.whatismyscreenresolution.com/ and check what resolution your phones announce from the web browser.
  23. Yes, for shared servers - i.e. the traditional cloud infrastructure - the vulnerabilities are scary as hell. And hardly anyone seems to care after the first three days of scary news. For unRAID users, it all depends on what the machine is used for. The machine can only be attacked by running untrusted code, which means most unRAID uses are safe, because people running the more common Dockers like Plex etc. have already decided to treat these specific applications as trusted. But in the general case, people still need to realize that Docker containers and VMs do not represent total protection, so anyone whose unRAID machine runs a workstation-class VM has to realize that there are remaining dangers. But there will always be dangers when running untrusted code, so users will always be required to make intelligent decisions.
  24. BTRFS snapshots can be an alternative, when you don't want downtime. But from the general perspective, I would prefer if unRAID could handle multiple mirrors. My main storage server is not unRAID for that very reason. It has multiple RAID volumes, where most are two-disk mirrors.