Everything posted by pwm

  1. If you run unRAID with n data drives and 1 parity drive, you can only afford to lose one (1) drive. And if you then need to rebuild, you can't afford a single failed read from any of the remaining drives. So bad drives are just not an option, even if you are on a budget.
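
     As a rough sketch of why a single failed read matters, here is the usual back-of-the-envelope calculation. The URE rate and drive sizes below are illustrative assumptions (worst-case datasheet figures), not measurements:

     ```python
     # Probability that every remaining sector reads cleanly during a rebuild.
     # Assumes a worst-case unrecoverable read error (URE) rate of 1 per 1e14
     # bits read - a common consumer-drive datasheet figure; real drives usually
     # do much better, so treat the result as a pessimistic bound.
     URE_PER_BIT = 1e-14

     def clean_rebuild_probability(remaining_drives: int, tb_per_drive: float) -> float:
         bits_to_read = remaining_drives * tb_per_drive * 1e12 * 8
         return (1.0 - URE_PER_BIT) ** bits_to_read

     # Example: one of three 8 TB data drives failed, so the two surviving data
     # drives plus parity (3 drives) must all read flawlessly during the rebuild.
     print(f"{clean_rebuild_probability(3, 8.0):.1%}")  # roughly 15% with these numbers
     ```
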
  2. I think you should use your old drives for backup. Having parity doesn't make it safe to use bad drives - it just gives you more options to waste time when the bad drives start to give problems and you then depend on the other drives working flawlessly while you perform your rebuilds. Look at 2-3 new drives for your storage server - you will be much happier with drives you can trust. And larger drives consume less power in relation to capacity.
  3. Your router is using DHCP and receives not just the IP but also the DNS servers from your ISP.
  4. Or btrfs as the default filesystem. The BTRFS checksums aren't as explicit as the checksumming done by file integrity plugins and SnapRAID. But compared to SnapRAID etc., it isn't limited to checksums for individual files - it also checksums the file system structures. The good thing is that it's possible to use both a file integrity plugin and BTRFS, where a file integrity plugin can also help with locating duplicated files, i.e. files with the same hash. And a file integrity plugin can catch intentionally changed files, while BTRFS can only catch bit rot.
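
     To illustrate the duplicate-location idea - hashing every file and grouping by digest, roughly what a file integrity plugin does - here is a minimal sketch (the scan path is a placeholder):

     ```python
     import hashlib
     import os
     from collections import defaultdict

     def sha256_of(path: str) -> str:
         """Hash a file in 1 MiB chunks so large files don't exhaust RAM."""
         h = hashlib.sha256()
         with open(path, "rb") as f:
             for chunk in iter(lambda: f.read(1 << 20), b""):
                 h.update(chunk)
         return h.hexdigest()

     # Group files by content hash; any group with more than one path
     # is a set of duplicates.
     groups = defaultdict(list)
     for root, _dirs, files in os.walk("/mnt/user/media"):  # placeholder path
         for name in files:
             path = os.path.join(root, name)
             groups[sha256_of(path)].append(path)

     for digest, paths in groups.items():
         if len(paths) > 1:
             print(digest, paths)
     ```
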
  5. The UDMA CRC counter functions as a high-water mark. So the values represent the full lifetime of the individual drives. Only worry if you get new notifications showing that they continue to tick up. You could live with maybe one or two ticks per year for a drive, but if it keeps ticking up, it's time to look at your cabling.
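
     A minimal sketch of that delta-tracking, assuming smartmontools is installed; the device node and state-file path are placeholders:

     ```python
     import json
     import os
     import subprocess

     DEVICE = "/dev/sda"            # placeholder device node
     STATE = "/tmp/crc_state.json"  # placeholder file remembering the last reading

     def udma_crc_count(device: str) -> int:
         """Return the raw value of SMART attribute 199 (UDMA_CRC_Error_Count)."""
         out = subprocess.run(["smartctl", "-A", device],
                              capture_output=True, text=True, check=True).stdout
         for line in out.splitlines():
             fields = line.split()
             if fields and fields[0] == "199":
                 return int(fields[-1])
         raise RuntimeError("attribute 199 not reported by this drive")

     current = udma_crc_count(DEVICE)
     previous = current
     if os.path.exists(STATE):
         with open(STATE) as f:
             previous = json.load(f).get(DEVICE, current)
     if current > previous:
         print(f"{DEVICE}: UDMA CRC count ticked up {previous} -> {current}; check cabling")
     with open(STATE, "w") as f:
         json.dump({DEVICE: current}, f)
     ```
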
  6. An important thing with BTRFS is that it is using CoW - Copy on Write. This means that if you have a full drive and want to modify an existing file, you will fail. The file system has to make the writes to empty space (Copy on Write). Then later it can remap the file content to point to the newer data and release the previous file content. This is different from almost all other file systems, which only fail if you try to add new files but always allow you to write changes to existing files as long as you don't try to increase the file size. Lots of programs have hard-coded logic where they assume they can't fail when modifying existing files, which means they can behave very badly when used on a CoW file system. Developers just have to learn the hard way not to assume that some operations can never fail. So rule #1 with BTRFS is to not run it full. Rules #2 and #3 are also to not run it full.
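
     A minimal sketch of the defensive pattern this forces on programs: even a same-size in-place overwrite has to be prepared for ENOSPC on a CoW filesystem (the file path is a placeholder):

     ```python
     import errno

     def overwrite_in_place(path: str, offset: int, data: bytes) -> None:
         """Rewrite bytes inside an existing file without growing it.
         On most filesystems this cannot hit 'disk full'; on a CoW filesystem
         like BTRFS the new blocks must go to free space first, so it can."""
         try:
             with open(path, "r+b") as f:
                 f.seek(offset)
                 f.write(data)
         except OSError as e:
             if e.errno == errno.ENOSPC:
                 print("No space left: CoW needs free space even for an overwrite")
             else:
                 raise

     overwrite_in_place("/mnt/disk1/file.bin", 0, b"new header")  # placeholder path
     ```
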
  7. Even 2+1 with 8 TB drives is 16 TB, which is plenty for lots of people. And small means easy to place. The important thing is that different people have different needs, so there is no need to copy other people's "recipes" unless the goals happen to be similar. The main thing is to build a stable machine with good disks and a good PSU, and don't overclock.
  8. An interesting thing is that they claim they have no economic interest in AMD or Intel. But they do not say whether Intel has any economic interest in them. And Intel has lots of staff in Israel - the Core chips originated in Israel as a continuation of the Pentium M (Centrino) and saved Intel from the P4/Prescott failure.
  9. Interesting. Looks like it's using smaller platters suitable for a 1.8" drive, similar to how lots of high-end 3.5" drives have used 2.5" platters for a long time now. Maybe my guess that it won't be long before 2.5" drives replace the 3.5" drives was wrong. Maybe they'll jump all the way to 1.8" drives.
  10. First off you need to post the diagnostics, so we can see what SMART data you are talking about and also check the logs for any interesting events. And you want to capture the diagnostics before rebooting, so you don't lose the interesting log data.
  11. I'm interested in the outcome of this. That would then mean we can't easily plan where to locate our critical data to optimize for bandwidth. If they use mapping logic, then it wouldn't even be certain that we could create four 500 GB partitions to force fast and slow regions. Or maybe the unshingled region is large enough that the drive hasn't spilled over to the main storage region yet (while WD might have decided to take a bit of advantage of their unshingled region to "unintentionally" boost benchmarks).
  12. Yes, the inner tracks should drop off. I tried to find benchmarks but failed to get any matches for WD20SPZX, WD20NPVZ and WD15NPVZ - just sales talk or references to much older WD drives. And WD only shows the interface bandwidth (6 Gbit/s) in their datasheet. Maybe with SMR you never actually hit the inner tracks because the drive hasn't been filled, and whatever address you specify for your transfers ends up being mapped to an outer track?
  13. Very nice transfer speed. But 2.5" drives can use higher data density because the platters are smaller, which gives better mechanical tolerances, less vibration etc. At the same time, a 3.5" drive has heads optimized for the middle part of the media, so it needs to reduce the data density on the outer tracks or the bit rate (frequency) becomes too high compared to what the drive heads are designed for. That's also why the 5.25" drives were killed off - they couldn't keep up with the data densities of the 3.5" drives. The main issue with 2.5" drives is that they are so thin that they can't have as many surfaces as 3.5" drives - one of the reasons why lots of 2.5" USB drives use extra-thick drives. Anyway - it will not take too many years before the 2.5" drives leave the 3.5" drives in the dust. The best 2.5" drives are 5 TB and have been for about two years now, so it's soon time for a new high score. So they are right now about half the size of the best 3.5" drives.
  14. I have one huge and several small machines. A little guy with 5 drives in a 4+1 configuration can fit 48 TB with 12 TB drives, which isn't too shabby.
  15. I recommend editing the local hosts file and adding the name-to-IP translation there.
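
     For illustration, a hosts entry plus a quick check that it took effect; the hostname and address are placeholders:

     ```python
     import socket

     # After adding a line like this to /etc/hosts
     # (C:\Windows\System32\drivers\etc\hosts on Windows):
     #
     #   192.168.1.50   tower
     #
     # the name resolves locally, without any DNS query:
     print(socket.gethostbyname("tower"))  # expected: 192.168.1.50
     ```
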
  16. The exception would be software like SnapRAID - but SnapRAID isn't performing parity in real time and is instead basically "committing" files to the parity system. So a completely different beast. The parity is just a "checksum" of the data disks. So one "checksum" can help repair one broken data disk. And dual parity means the system can recompute the content of two broken data disks. But the parity operates on raw disk blocks. And it will not save you if you overwrite a file or delete a file - the parity will just remember the overwrite or delete, so any disk recovery will just recover the disk back to the state with the files overwritten or deleted. So parity doesn't replace the need for backup. But it gives a chance to repair the disk system after a disk failure. And it gives you better availability, since you can continue to access files while the system rebuilds the content of a broken disk onto a new drive.
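
     A toy model of the single-parity idea, using byte-wise XOR (illustrative only - not a claim about unRAID's exact on-disk scheme):

     ```python
     from functools import reduce

     def xor_blocks(blocks: list[bytes]) -> bytes:
         """Byte-wise XOR of equally sized blocks."""
         return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

     # Three data disks' blocks at the same address, plus the parity block.
     data = [b"dsk0", b"dsk1", b"dsk2"]
     parity = xor_blocks(data)

     # Disk 1 dies: its block is the XOR of parity and the surviving disks.
     rebuilt = xor_blocks([parity, data[0], data[2]])
     assert rebuilt == data[1]

     # Note: parity tracks every write, so overwriting or deleting a file
     # updates parity too - a rebuild only restores the disk's latest state.
     ```
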
  17. I think lots of people never visit the dashboard - I might visit the dashboard once every 3 months. But I rely on centralized supervision. Anyone who isn't using centralized supervision really must make sure they get mail from the system. And make sure they react if the mails stop arriving or if the mails indicate problems. Storage servers can run stand-alone for long times, but they do require someone to step in as soon as disks or fans start to have issues - just as people shouldn't continue to run their cars without enough coolant in the radiator.
  18. Note that the issue is when Samba is used as an AD domain controller. So most users aren't affected.
  19. When a file system has broken files, it's normally files that are regularly modified.
  20. We don't know if your machine managed to do a correct DNS lookup for the system or if it tried to access the wrong machine. We don't know if you are expected to be allowed to make that access or not. We don't know if the request contained a payload that didn't match what the server expected. We don't know if you could make the request from some other machine. We don't know if it was a temporary issue and whether the server might accept the request 10 minutes later. ... Lots of things we don't know, and that we can't guess from your posts.
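
     A minimal sketch for pinning down the first couple of those unknowns from another machine; the hostname and URL are placeholders:

     ```python
     import socket
     import urllib.request

     host = "server.example.com"  # placeholder for the machine you tried to reach

     # 1. Does the name resolve at all, and to the address you expect?
     print("resolves to:", socket.gethostbyname(host))

     # 2. Does a plain request get through, or fail with a useful error?
     try:
         with urllib.request.urlopen(f"http://{host}/", timeout=10) as resp:
             print("status:", resp.status)
     except OSError as e:
         print("request failed:", e)
     ```
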
  21. I have as a rule to always keep downloaded manuals for motherboards etc. easily accessible from some other machine, without needing a working network.
  22. One issue with persistent logs is that if you store the logs on the flash, you wear out the flash. And RAM-stored logs don't survive a hang/reboot. Maybe unRAID should make it easy to push logs to a log server on another machine. I have a number of flash-based devices that push their logs to a centralized log server, which in itself is a quite tiny machine with a 2.5" drive. It records logs, MQTT data etc.
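
     For illustration, pushing application logs to a remote syslog server with Python's standard library; the server address is a placeholder, and the server must accept syslog on UDP port 514:

     ```python
     import logging
     import logging.handlers

     # Ship log records to a remote syslog daemon instead of local flash/RAM.
     handler = logging.handlers.SysLogHandler(address=("192.168.1.20", 514))  # placeholder
     logger = logging.getLogger("storage-box")
     logger.setLevel(logging.INFO)
     logger.addHandler(handler)

     logger.info("disk temperature check OK")  # record survives a local hang/reboot
     ```
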
  23. Is the machine overclocked? And do you get the same events if you reboot, i.e. is it just a single transfer failure when loading the microcode or a persistent error?