Raident

  1. Not really a support question, but I just started doing a rebuild and was quite surprised (read: concerned) that it took 35 minutes to mount the replacement disk and start the rebuild process, when it usually takes seconds. Is there something special going on during rebuilds?
  2. One of my disks just failed. The replacement drive is currently being precleared as I type, but one random thing that popped into my mind as I wait: in the many years of using the array, I've run many parity checks but never actually verified the data itself. I do have backups of the data, but those only go back a year, so if something happened to an old file prior to 2020 I wouldn't be able to tell by comparing against the backup. On the other hand, I could theoretically get fresh copies (the vast majority of the files on my array were originally downloaded off the internet) to check against, but that would be... very labor-intensive, not to mention tedious, and I'm sure a number of files/providers have gone offline over the years. So I figure it wouldn't hurt to ask the community if anyone has better ideas?
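One common approach (a sketch, not from the original thread) is to record a checksum manifest once and re-verify it whenever you like; on a real array you would point the script at a mount like /mnt/disk1, but the demo below uses a throwaway directory so it runs anywhere:

```shell
#!/bin/sh
# Sketch: record checksums once, verify any time later.
# DATA would be a real array mount (e.g. /mnt/disk1); a temp dir is used here.
DATA=$(mktemp -d)
echo "some file contents" > "$DATA/file1"

# 1. Build the manifest (store it off-array, e.g. on the flash drive)
( cd "$DATA" && find . -type f -exec md5sum {} + ) > "$DATA.md5"

# 2. Verify later; a silently corrupted file would be reported as FAILED
( cd "$DATA" && md5sum -c "$DATA.md5" )
```

Run periodically (or before/after a rebuild), this catches bit rot that a parity check alone can't attribute to a specific file.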
  3. Short of creating an archive, I don't suppose there's some kind of way to get the sending and receiving sides to simply treat all of the small files as a contiguous block, is there?
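One way to get close to this without ever writing an archive to disk is to stream the files through tar, so the receiver sees one continuous stream instead of thousands of per-file round trips. Over the network this would look something like `tar -C /src -cf - . | ssh host 'tar -C /dest -xf -'` (hostnames and paths hypothetical); a local demo of the same pipeline:

```shell
#!/bin/sh
# Sketch: pipe many small files as a single tar stream.
# Throwaway source and destination directories for the demo.
SRC=$(mktemp -d); DEST=$(mktemp -d)
for i in 1 2 3; do echo "setting $i" > "$SRC/cfg$i.txt"; done

# One continuous stream; no intermediate archive file is created.
tar -C "$SRC" -cf - . | tar -C "$DEST" -xf -

ls "$DEST"
```

The archive only ever exists in the pipe, so there's no extra disk space or cleanup involved.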
  4. There are no SATA, PCIe, or even Molex power cables. This is a prebuilt OEM system. And yes, the transfer is being done over the network. In this case I was backing up my Steam library (hundreds of thousands of tiny configuration files along with a few huge archives containing the game assets) via Samba, but NFS under a similar scenario is similarly slow.
  5. That is the problem. To add a cache drive, I would need to remove one of the 3 data disks to make space physically, which means that one of the 2 remaining disks would have to double in size just to keep the array at its current size, which in turn means that the parity disk needs to be doubled in size as well. And needless to say, I can't just download more RAM (er, hard drive space) 😉
  6. There are actually no SATA ports in the traditional sense - the drive bays are connected to a backplane, which in turn is connected to the mobo via some kind of proprietary (or maybe enterprise-grade?) connector. NVMe via an adapter is theoretically possible as the PCIe x16 slot is open, but that gets really expensive, really quickly and also poses its own set of compatibility problems with VT-d passthrough, questions about whether it'll even be recognized by an older pre-Z97 system, etc.
  7. First of all, thanks for the suggestions, Frank1940 and trurl. To provide a bit of background on my setup, I'm already using Turbo Write, and unfortunately I have no spare drive bays for a cache drive - the array was set up years ago, before the cache drive concept was introduced, and at this point putting in a cache drive would require a pricey 3-drive (bigger parity + data + new SSD) upgrade. It's definitely something I'll seriously think about when the array reaches maximum capacity in about 2 years and I need to upgrade the array anyway, but for now I'd like to avoid spending money on new drives. The reason I asked about CPU and memory is that this is a VM with only 1 vCPU and 2GB of RAM assigned to it, and those could be expanded very easily with just a few button clicks, if it would help at all.
  8. Essentially, I'm wondering if there's any quick and dirty way to speed up the transfer of large quantities of small files.
  9. This is a totally unimportant OCD pet peeve, but having replaced 2 drives this year, I'm wondering what the best way would be to reorder my drives so that once again my parity drive is /dev/sda, disk 1 is /dev/sdb, disk 2 is /dev/sdc, etc. I'm guessing that I'll need to physically swap the drives to make this happen? Also, if attempting such a thing would potentially risk catastrophe, please let me know!
  10. I'm trying to do the parity swap procedure detailed at https://lime-technology.com/wiki/The_parity_swap_procedure, but unRAID is trying (and failing) repeatedly to connect to my dead hard drive at power on and thus not booting properly or initializing the web GUI. How am I supposed to do steps 1-4 if I can't access the website?
  11. Is there any way to test from Windows? I'd prefer not to use my unRAID server as it only has USB 2.0, and even generously assuming a 30 MB/s R/W speed it's going to take more than 3 days for a single pass with an 8 TB hard drive.
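The back-of-envelope estimate in that post checks out (using decimal units, since that's how drive capacities are labelled):

```shell
#!/bin/sh
# Sanity-check: 8 TB read once at an assumed 30 MB/s over USB 2.0.
BYTES=$((8 * 1000 * 1000 * 1000 * 1000))   # 8 TB, decimal
RATE=$((30 * 1000 * 1000))                 # 30 MB/s
SECS=$((BYTES / RATE))
echo "$SECS seconds = $((SECS / 3600)) hours = $((SECS / 86400)) days"
```

That's just over 3 days for a single pass, and a typical preclear (pre-read, zero, post-read) is three passes, so the concern about USB 2.0 is well founded.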
  12. I bought an external HDD with the intention of taking it apart and putting the bare drive into my array, but of course, I want to ensure that everything is good before I void the warranty. How should I go about testing it?
  13. The backed up data is scattered across the cloud (2016 onwards), an external HDD (2014-2016), and a spindle of BD-REs (2013 and older). It would take days to download/gather everything together, filter out duplicates, and then vet the data. Doesn't ReiserFS have any inode reverse lookup tools? Given that this kind of thing takes less than 10 minutes to query out on ext3/ext4 even when doing the math for block-sector mapping by hand, I'm kinda surprised to hear that there's nothing similar for ReiserFS.
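For reference, the ext3/ext4 procedure that post alludes to goes roughly like this: convert the failing LBA sector (as reported by SMART) into a filesystem block, then ask debugfs (from e2fsprogs) which inode owns that block. The sector and partition-offset values below are hypothetical, and the debugfs commands are shown in comments since they need a real ext filesystem:

```shell
#!/bin/sh
# Sketch of mapping a bad LBA sector to a file on ext3/ext4.
# Hypothetical values; adjust for your own drive and partition layout.
BAD_SECTOR=123456789   # LBA reported by SMART (512-byte sectors)
PART_START=2048        # partition's starting sector
BLOCK_SIZE=4096        # filesystem block size

FS_BLOCK=$(( (BAD_SECTOR - PART_START) * 512 / BLOCK_SIZE ))
echo "filesystem block $FS_BLOCK"

# On ext3/ext4 you would then run, e.g.:
#   debugfs -R "icheck $FS_BLOCK" /dev/sdX1    # block  -> inode
#   debugfs -R "ncheck <inode>"   /dev/sdX1    # inode  -> path
```

As the post says, ReiserFS has no equally convenient icheck/ncheck equivalent, which is the crux of the problem.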
  14. Hmm, I suppose a hex editor wouldn't be able to show what the previous value was? The goal is to either 1) Correct the data or 2) Just replace the entire file with a copy from backup, so I'm not particularly interested in viewing the corrupted data itself...