Everything posted by JonathanM

  1. Look at the top blue SATA cable. See how it's pulled downward? You can see it's not square with the socket.
  2. I'm not convinced the issue is crosstalk, but I AM positive that you can DEFINITELY create issues by bundling cables. I suspect the issue is more the extremely poor connection at the SATA end, where any misalignment can cause issues. If the connector is being pulled by the cable instead of being allowed to float free, it can create an angle between the drive and the connector, and when the drive vibrates and moves from thermal expansion the wires don't touch the pads on the drive solidly. This can cause CRC errors, and in extreme cases cause the drive to temporarily stop responding.

     It doesn't help that the cable and connector design has gone through several changes, with some cables being incompatible with some drives, notably the retention mechanism that's supposed to help alleviate the issues in my first paragraph.

     TL;DR: Don't bundle cables; allow them to relax near the drives so they don't pull on the connectors.

     If someone can link to a study that proves crosstalk corruption in modern SATA cables is a thing, I'm definitely willing to learn, but all I can find are studies proving data-corrupting crosstalk shouldn't be possible due to balanced signalling.

     Looks like a custom cable set, should be easy to refit by splitting the existing cables into smaller sections and providing each section with a dedicated PSU feed.
  3. The automated replacement is 12 month fixed, no leeway there. However, if you email support and explain what you want to do and why, they will help you out. It's just going to be a manual process rather than completely automatic.
  4. Have you tried with the board sitting on the motherboard box, completely away from the case? Use something conductive to short the power button pins. Just MB, PSU, CPU, RAM. NOTHING else touching the board except for the video cable. Also, do you get different codes if you remove the CPU / RAM?
  5. I like to keep the cumulative free space close to the size of my largest data drive. That way, if you need to juggle data around to clear off any specific disk, you can. How full any specific drive is allowed to get is determined by usage. Drives with seldom accessed and never updated content I fill to 95% or so. I try to keep heavily used drives as empty as possible, whatever that may be. Some free space can be useful for file system maintenance or repairs if needed. Filling to the last byte can be risky if something gets corrupted.
  6. What about parity? Are you proposing no parity disk(s) for this thought experiment? If you are talking about parity protected array disks, then write speed is going to be way worse than any individual disk write, as each portion of the images being written is going to force the parity disk(s) to update multiple likely non-contiguous addresses.
  7. Water cooling is great for the novelty and cool factor on a gaming desktop, but it has no place in a server. The pumps are not very reliable for long term use, and the last thing you want is for your system to have a meltdown when you aren't around. At least with air cooling there is a significant mass of metal in the heat sink that can shed heat to the ambient air even if the fans fail.
  8. Do you have bridging enabled on your active interface on the network settings?
  9. Because you are in host mode, which doesn't remap ports. If you change the application itself to listen on port 1000, it should work.
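A minimal sketch of the difference (assuming the plain Docker CLI, with nginx as a stand-in for whatever container is involved): in bridge mode Docker can remap ports, while in host mode any port mapping is ignored.

```shell
# Bridge mode: Docker remaps, so host port 1000 forwards to container port 80.
docker run -d --name web-bridge -p 1000:80 nginx

# Host mode: the container shares the host's network stack, so the port the
# application itself listens on (80 here) is what's exposed.
# Any -p flags are ignored in host mode.
docker run -d --name web-host --network host nginx
```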
  10. Are you volunteering your time and expertise?
  11. Cool! I knew there was a push to get the metadata moved to c3, so they also added the option for data c3? I haven't played with BTRFS options for several years, after a near data loss incident I've stayed with XFS.
  12. Can you elaborate on that a little bit? Exactly how do you plan to do that, and why?
  13. Not sure what you are getting at. The WebGUI comes up regardless of the array status, I log into my server and manually hit the start button normally.
  14. Wait, you are saying you overclocked the CPU and RAM? Don't do that. Servers and OC don't mix. At all.
  15. You can only have 1 main array that uses the Unraid traditional separate parity with individually formatted data drives. The pools can either be single-device XFS, or multi-device BTRFS RAID volumes using any RAID level you feel comfortable with. Typically the main array would be your bulk slow storage, all spinning rust of various sizes. SSDs would be arranged in different pools, with different RAID levels to suit each pool's specific purpose. By default all newly defined BTRFS pools are initialized as RAID1, but you can change that from the command line. Hopefully sometime in the next year you will be able to change RAID levels with a drop-down selection.
  16. 3 drives in BTRFS RAID1, which is the default Unraid setting, is still only double protection. The RAID level is what determines the redundancy. Triple requires you to manually specify RAID1c3; you will need to research the command line needed to do that at the balance prompt. If you specify c3 for both data and metadata, you would end up with 500GB, not 750GB, of available space on three 500GB drives.
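As a sketch of what that balance command looks like (assuming the pool is mounted at /mnt/cache — substitute your actual pool name; the raid1c3 profile requires kernel 5.5 or newer):

```shell
# Check the current data and metadata RAID profiles of the pool.
btrfs filesystem df /mnt/cache

# Convert both data and metadata to three-copy redundancy (raid1c3).
# With three 500GB drives this leaves roughly 500GB usable, as noted above.
btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/cache
```

The balance rewrites every existing chunk in the new profile, so expect it to take a while on a pool with much data.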
  17. This... High quality 5e cable with perfect termination can compete quite well with 6. The biggest changes in the cable types seem to be enforced tolerances, where the wire to wire crosstalk has physical restraints to keep the distance more tightly controlled, and the termination pieces are similarly toleranced.
  18. How is the power linked between the two buildings? They must be served by a single common ground to safely run conductive (CAT5e/6/7) cables between the two. If they are served by the same electrical meter, then carry on; as long as the electrician did it right you should be ok. The safest method to run network between buildings is fiber, that way there isn't a conductor between the two. Otherwise you run the risk of differential grounding or EMP from nearby electrical storms frying the equipment at one or both ends of the cable. The longer the run, the higher the risk.
  19. Be careful with this approach, it's easy to run out of RAM and cause bad results. Only do this if you have a very good handle on managing your server.
  20. You technically could, but it would make recovery much more difficult if one of your data drives decided to act up during the parity build. Much safer to do a simple standard replacement of one of the parity drives, let that complete, then add the drive you unassigned back as a data drive. After you have an old parity drive assigned as a data drive, be sure to make a new flash backup and destroy any obsolete copies of the super.dat file that are hanging around in old backups.
  21. Kapton is a brand name for that exact product. Now you know how different it is from electrical tape! Night and day difference for this specific usage.
  22. Probably not related to container implementation, but I could be wrong. Try here. https://chevereto.com/community/
  23. I suspect you will be rather disappointed by the performance, given the slowness of those drives, but they should work ok. If your hardware is rock solid, use BTRFS on everything. However, if the hardware is marginal, or you don't have a proper UPS in place, I'd use XFS. It seems more resilient to hardware crashes.