c3

Everything posted by c3

  1. No, having the same subnet mask is not a problem. I would not think bonding would help, since you wish to have the unRAID server on two different networks. The router/WAP/WiFi is often at the address 192.168.x.1, so your server should have a different IP address. Your screenshot has the server at 192.168.2.9, which is fine. From house 1, you should be able to ping the unRAID server at 192.168.1.x(9?), while pinging 192.168.2.9 should be unreachable. From house 2, the reverse should be true.
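A quick way to see why the shared mask is harmless: the network address is the bitwise AND of the address and the mask, and the two houses land on different networks. A minimal shell sketch, assuming /24 (255.255.255.0) masks and the addresses from the post; the network() helper is purely illustrative:

```shell
# Compute an IPv4 network address by ANDing each octet with the mask.
# Hypothetical helper for illustration only.
network() {
  local ip=$1 mask=$2
  local IFS=.
  set -- $ip
  local i1=$1 i2=$2 i3=$3 i4=$4
  set -- $mask
  echo "$((i1 & $1)).$((i2 & $2)).$((i3 & $3)).$((i4 & $4))"
}

net1=$(network 192.168.1.9 255.255.255.0)   # house 1 side
net2=$(network 192.168.2.9 255.255.255.0)   # house 2 side
echo "$net1 vs $net2"
```

Same mask on both interfaces, but two distinct networks (192.168.1.0 and 192.168.2.0), which is exactly why the setup does not conflict.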
  2. Anytime the kernel starts killing things due to out of memory, bad things can happen. The oom process killer is NOT smart; it is just trying to keep the kernel running, and anything in userland is fair game. It would be good to find out what is consuming memory and tune it, or add memory.
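As a starting point for finding the memory consumer, something like the following works on most Linux systems (the ps --sort flag assumes procps, as shipped by typical distributions; the log commands are left as comments because paths and permissions vary by system):

```shell
# Show the top memory consumers (%MEM/RSS), largest first.
top_procs=$(ps aux --sort=-%mem | head -n 6)
echo "$top_procs"

# oom-killer events are logged by the kernel; on many systems:
#   dmesg | grep -i 'out of memory'    # may require root
#   grep -i 'oom' /var/log/syslog      # log path varies by distro
```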
  3. Without the actual part/model number I can only guess; it probably has SAS expanders built in.
  4. I would be interested in the comparison as well. What made ownCloud rubbish? It might be fine for another use case.
  5. There have been many studies of non-SSD drive failures; none has found a correlation between drive usage/duty cycle and failure rate. SSDs have shown that writes lead to failures. Anyone trying to find or buy "highly-reliable drives" for array usage does not understand the basic reasoning behind RAID vs SLED. RAID is a Redundant Array of Inexpensive Disks; SLED is a Single Large Expensive Disk. RAID always wins, and thus drive reliability is NOT improving. Large consumers of disk drives are asking for lower costs, even if that means lower reliability.
  6. https://community.wd.com/t/wd-my-book-duo-data-forever-lost-if-drive-enclosure-dies/6496 It's even worse than you think.
  7. c3

    Ransomware recovery

    I didn't want to title this thread ransomware protection, because I am interested in what can be done to recover after the event. True, the actions are taken before the event, but they do not stop the event. Yeah, I am talking backups. Specifically, offline backups, as online backups are likely to be impacted. This eWeek article indicates a large majority pay the ransom, which will continue to fund more events.

    First, let me cover the basics. [u][b]You need a backup[/b][/u] Sounds simple enough, but let's not ask for a show of hands of who has a less-than-10-day-old backup of everything. And don't forget this backup needs to be offline, which means it has to be rather complicated to restore from. Some will jump in and say this easy-to-use software does the trick, and they might be correct, but anything with mapped drives or mounts, etc, will not survive. Ransomware is successful, and it is evolving. This extremely elegant plugin blocks ransomware from getting too far into data stored on the array. But since it reacts to damage already done, cleanup is necessary. It is the cleanup (and being prepared to do the cleanup) that I am looking for input on.

    Here is a scenario: from time to time I click on stuff my friend clicks on stuff, email attachments, links to interesting photos. You know, things none of us would ever do. The result: a call for help, tears, promises, etc. Per the recent article, this also includes making payment to criminals. I'd rather not do that. I'd rather have the backup, which is also useful in the event of fire, or a drink spill, or a drop. But most backups leave a lot of work to be done: first the OS needs to be installed and, in many cases, the applications, then the restore software, and then the data can be restored. This other article talks about system image backups, but keeps mentioning the image is stored on a local filesystem (mounted external/NAS drive, etc). These will be targeted by ransomware.

    Acronis has a Cloud option, but it is their cloud, which means monthly charges (like $249/month for 5TB, yikes). Know of any system image backup to standard S3? Acronis kind of promised this in version 12, but it is missing.
  8. Yes, but in that case a system is generating the HPA => it's NOT a difference in the drive sizes. You can also add HPAs manually if you want to "fool" a system into thinking a drive is smaller for some reason. No HPA changes the actual drive size, even on externals.
  9. HPAs appear from several sources, not just external enclosures. Gigabyte motherboards are famous for adding them.
  10. A 5TB drive will be 5,000,000,000,000 bytes or larger. The "or larger" covers the possibility that one manufacturer or another will have a few "extra" bytes of space. RAID controllers have a setting to round down the size of any drive to help cross-vendor support; some round down 1GB or 10GB. The bigger issue here is the potential of an HPA, which can make a drive look like it is less than the expected size. HPA can be reported via hdparm -N /dev/sdb
  11. And on to the next size: the 12TB thread
  12. Yes, it is searching for drives and starting them.
  13. Please consider adding checkpoint and restart ability to the parity check. The use case would be to schedule parity checks for limited time periods. As drive sizes continue to increase, the length of time required to complete a parity check gets longer. By limiting the time a parity check can run, the impact during "prime" time is avoided. By having restart, the entire check eventually gets completed, instead of just starting over from the beginning again and again. If a parity check requires 24 hours to complete, the likelihood of it overlapping with normal usage approaches 100%. If the parity check is scheduled for just 1 hour per day, it is completed within the month, and overlap is very limited, if any.
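The arithmetic behind that last claim can be made concrete with a plain ceiling division (numbers taken from the post; this is an illustration of the scheduling math, not actual unRAID behavior):

```shell
# Days needed to finish a parity check when it may only run a fixed
# window per day: ceiling of check_hours / window_hours.
check_hours=24    # full parity check duration (from the post)
window_hours=1    # allowed run time per day
days=$(( (check_hours + window_hours - 1) / window_hours ))
echo "parity check finishes in $days days"
```

With a 1-hour daily window the 24-hour check finishes in 24 days, i.e. within the month, as the request states.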
  14. That thread is about providing pre and post parity check scripts, which seem aimed at process control for things outside the parity check itself. Saving a checkpoint and restarting from the last checkpoint seems completely different.
  15. Has this been put in the correct forum already? Storing the point reached in the last parity check, and continuing based on a time schedule or usage pattern. 1 hour a day should usually cover the 30-hour prediction.
  16. Does this mean writing the boot device with SMB no longer works in 6.3?
  17. There are people who confuse a warranty with quality. A warranty is a financial statement. The best case I can share for this is the 10-year/100,000-mile warranty on KIA cars. They knew they had quality problems, so they made it clear they would pay for them. The cars were not reliable. In the past 7 years KIA has climbed from second to last (151 defects per 100 cars) to second best (86 defects per 100 cars). The warranty has not changed, but quality has.
  18. Could you share this hard evidence of the higher quality and better reliability?
  19. 1) Is bit rot a real thing? Yes, it is real. Monthly parity checks are recommended. 2) Why doesn't unRAID support checksum correction of silent errors? unRAID is not a filesystem, but you can use btrfs to get that if desired. 3) Is e.g. SMART data monitoring enough? No, but what else is available? SMART is not predictive, but reporting. 4) Is it possible to combine snapraid with unRAID (e.g. using the unassigned disk plugin to mount a disk for the snapraid parity file)? This might be possible, but how is this helpful? Are you seeking triple parity? unRAID is a solid solution for home media servers; if your use case is outside that, it may not be a good fit. Write performance is limited, and use of a cache drive to improve write performance puts data outside the array. For many, this simple question decides between non-realtime and realtime: is it acceptable to be without the system until you have time to troubleshoot and complete the repair?
  20. I have not seen that article, but it is not so much that the head is larger; the required track on the media is larger. The track needed to write is wider than that needed to read, so SMR (shingled) drives overlap the writes, removing some of the "extra" area written, since it is not needed to read. This relationship between write and read requirements is unlikely to change in future generations. This paper covers in depth many of the characteristics of SMR drives: https://www.usenix.org/system/files/conference/fast15/fast15-paper-aghayev.pdf In summary, sustained random writes will not perform well. Random writes within the capacity of the persistent cache, followed by lengthy (hours of) idle time (no reads or writes), will achieve good performance.
  21. I think he meant to say hdparm -N /dev/sdX
  22. That's $25/TB, which is good. The drive can be a WD Red, but could also be a Green/Blue. None are bad. Removing the drive from the case will likely make it hard to warranty the drive, so I advise running the preclear testing prior to removing it from the case, even though it can be slower over USB.
  23. I'd put money on there being nothing at all wrong with either disk, and it's actually a bad SATA cable or power cable. Without logs and smartctl, putting money on anything is speculative. Drives are very clear about frontend (cable/interface) vs backend (media/head/servo) problems. Specific counters, like 199, would indicate a cable problem, but other counters, like 5, 196, 197, and 198, are not cable related. That's a lot of part swapping without a log check or SMART report.