Everything posted by pwm

  1. Reconstruct write just removes the read/modify/write step, which means it can give very high bandwidth when you write to one data disk. But when you write to two data disks, each write targets a specific LBA that needs to be processed on all disks (a write for the specific data disk and the parity disk(s), and a read for the other data disks). Since the two writes to different disks affect different LBAs, the parity disk(s) will have to perform a large number of seeks to jump between the two locations where parity needs to be updated. It's the same result you get if you try to read and write two different files on a single data disk - the total bandwidth will be a small fraction of what you can get for a single file because of all the seek latencies. The time spent seeking is time the disk cannot spend reading or writing, so the bandwidth drops very sharply.
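A toy model illustrates how badly interleaved seeks eat into throughput. All the numbers below (sequential rate, seek time, chunk size) are illustrative assumptions, not measurements:

```python
# Toy model: effective throughput of a disk that has to seek between
# two interleaved write streams. Numbers are illustrative assumptions.

def effective_mb_s(seq_mb_s, seek_ms, chunk_kb):
    """Throughput when every chunk written costs one seek first."""
    chunk_mb = chunk_kb / 1024
    transfer_s = chunk_mb / seq_mb_s       # time to actually write the chunk
    total_s = transfer_s + seek_ms / 1000  # plus the seek to get there
    return chunk_mb / total_s

print(effective_mb_s(150, 0, 512))   # single stream, no seeks: full 150 MB/s
print(effective_mb_s(150, 12, 512))  # interleaved streams: a small fraction of it
```

With an assumed 12 ms seek before every 512 KB chunk, the 150 MB/s disk delivers only around a fifth of its sequential rate - which is the effect described above.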
  2. Correct, but I don't much like moving files around, since I keep a registry of the exact content of every single disk and also track whether each file is stored according to the configured backup policy.
  3. I have a number of 4TB ReiserFS drives filled until just over 1GB remains free. Write performance wasn't important since they store static content. But the big disadvantage is that I can't convert the disks to any other file system because there isn't enough space for the meta-data - most other file systems need more meta-data than RFS. So in the end, the drives will stay RFS until I replace them with 10+TB drives. As @John_M notes, XFS has increased its meta-data size several times as bug fixes and functional improvements have been added. So filling the drives too full can cause future issues.
  4. I don't think there is anyone evil enough to laugh at you. Whatever you do - don't do anything that results in new writes to the disk. Then wait for someone with good experience of recovering erased files from XFS to join the thread.
  5. I had to reboot a Debian machine (not unRAID) just some days ago because it lost the state of one share from a Windows machine I rebooted. I could have continued to use the Debian machine, but I really needed to be able to mount the same share again, and any new mount attempt would hang with process state 'D', which means uninterruptible wait for I/O. It's quite likely that you have one guilty process that is waiting for SMB data from the other machine on a now dead TCP connection. And since it's in a critical section within the kernel, any other process that wants access to the same resource will also get stuck. One trick I missed testing is to fake TCP/IP data to try to force-close TCP connections that have huge timeout times. But that isn't an easy path to take without proper tools - normal users aren't meant to be able to generate and inject arbitrary TCP packets, so it isn't supported by any normally available tools.
  6. I have very little experience specifically with XFS recovery, but UFS Explorer that you got a link to earlier is one of the most recommended tools.
  7. Just curious. The command below is likely to show one or more processes that are in state "D" (8th column of the ps output). If you find any process with state "D" - what is the content, where <pid> is the PID (second column of the ps output) of the process in state D? Regarding remote shares - they can sometimes hang badly. Sometimes that can be solved, but not always - often a reboot will be required.
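On Linux, D-state processes can also be found by reading /proc directly. A minimal sketch (the parsing follows the stat format from proc(5): the state field comes right after the parenthesized command name):

```python
import os

def d_state_pids():
    """Return PIDs of processes in uninterruptible sleep (state 'D')."""
    pids = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/stat") as f:
                data = f.read()
        except OSError:
            continue  # process exited while we were scanning
        # The state character comes right after the ')' closing the comm field
        state = data[data.rindex(")") + 2]
        if state == "D":
            pids.append(int(entry))
    return pids

print(d_state_pids())  # usually an empty list on a healthy, idle system
```

For a stuck PID, reading /proc/&lt;pid&gt;/wchan shows the kernel function the process is blocked in, which usually points at the guilty subsystem.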
  8. Formatting means throwing away all the indexing of the file system and creating a new - and empty - index, ready for you to copy new file data to the file system. So the majority of the data is still there, but it's similar to taking thousands of binders and emptying their contents all over the floor in random order. It's hard work to try to get some order out of millions of randomly ordered leaves of paper... Repair tools can do a better or worse job of puzzling together some of the randomized information. Different tools perform better or worse, and different file systems are easier or harder to recover data from. It also matters how you have used the file system - how much fragmentation there is. If you are lucky, there could be partial indexing information still available on the disk. But it's hard to tell.
  9. How long have the disks worked OK in your machine? If a long time, and then many start to die in a short interval, I would consider temperature or the PSU. If you have quite recently built the machine using recycled drives, then I would guess that you started with bad drives that had already given their best. Another thing is that not all drives are equally good at handling vibrations - and your machine has quite a lot of drives that will produce vibrations both when spinning and when performing seek operations. The tolerances required when aligning the heads are minute, which is why enterprise-level disks need to measure vibrations so they can quick-terminate writes if they see too much vibration.
  10. Of course. But it's important to realize that the total lifetime cost doesn't just depend on $/TB for the purchase price.
  11. One solution I used in a similar situation was to have my phone supply a WiFi hotspot, with a Raspberry Pi connected to the phone over WiFi. The RPi was DHCP server and gateway for the other machines. Not quick, but a cheap way to create a networking island that can continue to function locally when the phone stops serving Internet access. The RPi could just as well tether to the phone using USB.
  12. Nothing wrong with the molding. But several modular PSUs have added their own twists on the different connectors. Some use an identical PSU-side connector for the 4+4 pin and 8 pin CPU cables and for the 6+2 pin PCI-E cables; some use different connectors. The PSU at the top of this link shows different connectors for CPU and PCI-E cables: https://www.moddiy.com/products/Ultra-X4-5%2dPin-to-SATA-PCIE-Modular-Cable-(35cm).html The Corsair PSU in this link uses the same keying for CPU and PCI-E cables: https://www.mindfactory.de/product_info.php/1200-Watt-Corsair-AXi-Series-AX1200i-Modular-80--Platinum_808210.html
  13. You then need to factor in the cost of the electricity per TB per year, and 8TB drives will instantly have an advantage - especially 8TB helium drives, which save a lot on power consumption. Another thing is that the economical lifetime is in general longer for an 8TB drive than for a 6TB drive - when you run out of free SATA ports and need more disk space, it's the smallest drives you will have to throw away first. Or you'll have to buy an additional SATA controller card.
  14. Your 169.254.x.y IP is a link-local IP because your unRAID never found a DHCP server that gave it a valid IP.
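Python's standard ipaddress module classifies these auto-configured addresses, handy for a quick scripted check (the address below is just an example from the 169.254.0.0/16 range):

```python
import ipaddress

# 169.254.0.0/16 is the APIPA/link-local range a host self-assigns
# when no DHCP server answers.
addr = ipaddress.ip_address("169.254.23.17")
print(addr.is_link_local)  # → True: no DHCP lease, only the local segment works
```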
  15. That will need new fans and/or water cooling since the original fans have worn out from 24/7/365 full load.
  16. Yes - since all the supplied machines will be up on UPS power after a power loss, they need the switch on UPS too, to be able to get proper information from the master apcupsd. If the network isn't on UPS, the machines would need some other way to figure out that they are running on battery power and are expected to shut down. The UPS itself doesn't need networking - most intelligent APC UPSes can use a USB cable to the master machine.
  17. If the UPS is big enough and you run apcupsd, then you can run one apcupsd instance on every machine and have the machines get UPS state information over the network from the master apcupsd - but you would want the machine with the master connection to shut down last.
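A sketch of what that looks like in apcupsd.conf - the IP address is a placeholder for the master machine, and 3551 is apcupsd's default NIS port:

```
# Master machine (USB-connected to the UPS): serve state over the network
UPSTYPE usb
NETSERVER on
NISPORT 3551

# Each slave machine: poll the master's network information server
UPSTYPE net
DEVICE 192.168.1.10:3551
```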
  18. I don't think it's good to have a backup that claims the array was safely stopped. It means that someone who has a power outage, can't get their machine to boot, and decides to restore their backup will get unRAID to incorrectly skip the parity step. It's way better that a backup restore forces an additional parity scan.
  19. Your system basically needs two things configured to reach the internet (besides the obvious - having an IP configured on the network interface the network cable is connected to). 1) A DNS server, so it can translate host names to IPs. Easy to test: check if you can ping 8.8.8.8 but not ping www.google.com. 2) A default route that points to your router, so any unknown IP gets sent to the router, which can forward it out into the world. If you can ping IPs inside your home (equipment within the same local network as your unRAID machine) but not 8.8.8.8 or other global IPs, then your machine most probably doesn't have a default route.
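Both conditions can also be checked from a script. A sketch using only the Python standard library (the default-route check reads Linux's /proc/net/route, where the default route has an all-zero destination):

```python
import socket

def can_resolve(name):
    """True if DNS (or /etc/hosts) can resolve the given host name."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

def has_default_route():
    """True if the Linux routing table contains a default (0.0.0.0/0) route."""
    with open("/proc/net/route") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            # Column 2 is the hex destination; 00000000 marks the default route
            if len(fields) > 1 and fields[1] == "00000000":
                return True
    return False

print("DNS ok:", can_resolve("www.google.com"))
print("default route:", has_default_route())
```

If can_resolve() fails but has_default_route() is True, look at the DNS setting; the reverse combination points at a missing gateway.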
  20. Just a quick note. unRAID, or whatever NAS product you may find on the net, is not a replacement for backup. unRAID can be your main storage server. But you want important files to be stored on at least two machines - and in at least two locations. First off - any amount of redundancy you might get from parity doesn't help if the PSU breaks and smokes every disk in the machine. And if your building burns down, it doesn't help if you have your pictures on both the PC and on the unRAID machine - you need a copy that doesn't burn down with the building. So consider keeping some cloud storage - or check if you can store a backup server, or at least a USB disk, somewhere else. Maybe at work. Maybe at parents and/or children. Best is if the off-site backup allows you to add new files over the network. Anyway - an off-site backup should be a true backup with versioning. Mirroring of files isn't a good idea. That just means that if you accidentally break a file locally, the mirroring software will happily duplicate the broken file to the cloud server. You don't want multiple copies of a broken file - you want the off-site copy to be unharmed so you can restore the unmodified file.
  21. There isn't much use/need for any such tool. But you still haven't answered exactly what you want to accomplish - unless you possibly want to play with thin provisioning of virtual disks?
  22. Yes, it's global, as I mentioned in my post. The other option is to use Google to search on this site. Google search is good enough that it doesn't matter very much if Google has to search on all forum content.
  23. Note that a short spike on mains power can reboot some machines while other machines continue to run. So it isn't easy to prove that there hasn't been a power glitch. Old computers with AT-type PSUs could continue to run even when the power glitch was long enough that you could see the lamps blink - while every ATX-powered machine instantly rebooted.
  24. Why? The traditional way would be to copy /dev/zero into a new file and let that file grow until the file system is full, then erase the file after having "normalized" all unused areas.
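The same idea as a short script. The max_bytes cap is only there so the sketch is safe to try; on a real disk you would omit it and simply let the write fail when the file system fills up:

```python
import os

def zero_fill(path, chunk=1024 * 1024, max_bytes=None):
    """Fill free space with zeros via a growing file, then delete the file."""
    zeros = b"\0" * chunk
    written = 0
    try:
        with open(path, "wb") as f:
            while max_bytes is None or written < max_bytes:
                f.write(zeros)
                written += chunk
    except OSError:
        pass  # "No space left on device" - the goal here, not an error
    finally:
        if os.path.exists(path):
            os.remove(path)  # give the zeroed blocks back to the file system
    return written
```

After the run, all previously unused blocks hold zeros - which is what compressing disk images or thin provisioning benefits from.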
  25. This link allows search, but no selection of subforum. https://lime-technology.com/search