Raident

Everything posted by Raident

  1. Not really a support question, but I just started doing a rebuild and was quite surprised (read: concerned) that it took 35 minutes to mount the replacement disk and start the rebuild process, when it usually takes seconds. Is there something special going on during rebuilds?
  2. One of my disks just failed. The replacement drive is currently being precleared as I type, but one random thing that just popped into my mind as I wait is that in the many years of using the array, I've run many parity checks but never actually checked the data itself. I do have backups of the data, but those only go back a year, so if something happened to an old file prior to 2020 I wouldn't be able to tell by comparing with the backup. On the other hand, I could theoretically get fresh copies (the vast majority of the files on my array were originally downloaded off the internet) to check against, but that would be... very labor-intensive, not to mention tedious, and I'm sure there are a number of files/providers that have gone offline over the years. Thus, I figure it wouldn't hurt to ask the community whether anyone has better ideas? (A checksum-manifest approach is sketched after this list.)
  3. Short of creating an archive, I don't suppose there's some kind of way to get the sending and receiving sides to simply treat all of the small files as a contiguous block, is there? (A tar-streaming sketch along those lines follows this list.)
  4. There are no SATA, PCIe, or even Molex power cables. This is a prebuilt OEM system. And yes, the transfer is being done over the network. In this case I was backing up my Steam library (hundreds of thousands of tiny configuration files along with a few huge archives containing the game assets) via Samba, but NFS is similarly slow in a comparable scenario.
  5. That is the problem. To add a cache drive, I would need to remove one of the 3 data disks to make space physically, which means that one of the 2 remaining disks would have to double in size just to keep the array at its current size, which in turn means that the parity disk needs to be doubled in size as well. And needless to say, I can't just download more RAM hard drive space 😉
  6. There are actually no SATA ports in the traditional sense - the drive bays are connected to a backplane, which in turn is connected to the mobo via some kind of proprietary (or maybe enterprise-grade?) connector. NVMe via an adapter is theoretically possible as the PCIe x16 slot is open, but that gets really expensive, really quickly and also poses its own set of compatibility problems with VT-d passthrough, questions about whether it'll even be recognized by an older pre-Z97 system, etc.
  7. First of all, thanks for the suggestions, Frank1940 and trurl. To provide a bit of background on my setup, I'm already using Turbo Write, and unfortunately I have no spare drive bays for a cache drive - the array was set up years ago, before the cache drive concept was introduced, and at this point putting in a cache drive would require a pricey 3-drive upgrade (bigger parity + data + new SSD). It's definitely something I'll seriously think about when the array reaches maximum capacity in about 2 years and I need to upgrade the array anyway, but for now I'd like to avoid spending money on new drives. The reason I asked about CPU and memory is that this is a VM with only 1 vCPU and 2 GB of RAM assigned to it, and those could be expanded very easily with just a few button clicks, if it would help at all.
  8. Essentially, I'm wondering if there's any quick and dirty way to speed up the transfer of large quantities of small files.
  9. This is a totally unimportant OCD pet peeve, but having replaced 2 drives this year, I'm wondering what the best way would be to reorder my drives so that once again my parity drive is /dev/sda, disk 1 is /dev/sdb, disk 2 is /dev/sdc, etc. I'm guessing that I'll need to physically swap the drives to make this happen? Also, if attempting such a thing would potentially risk catastrophe, please let me know.
  10. I'm trying to do the parity swap procedure detailed at https://lime-technology.com/wiki/The_parity_swap_procedure, but unRAID is trying (and failing) repeatedly to connect to my dead hard drive at power on and thus not booting properly or initializing the web GUI. How am I supposed to do steps 1-4 if I can't access the website?
  11. Is there any way to test from Windows? I'd prefer not to use my unRAID server, as it only has USB 2.0, and even generously assuming a 30 MB/s R/W speed it's going to take more than 3 days for a single pass with an 8 TB hard drive (the arithmetic is worked through after this list).
  12. I bought an external HDD with the intention of taking it apart and putting the bare drive into my array, but of course, I want to ensure that everything is good before I void the warranty. How should I go about testing it?
  13. The backed-up data is scattered across the cloud (2016 onwards), an external HDD (2014-2016), and a spindle of BD-REs (2013 and older). It would take days to download/gather everything together, filter out duplicates, and then vet the data. Doesn't ReiserFS have any inode reverse-lookup tools? Given that this kind of thing takes less than 10 minutes to query out on ext3/ext4, even when doing the math for block-sector mapping by hand (that arithmetic is sketched after this list), I'm kinda surprised to hear that there's nothing similar for ReiserFS.
  14. Hmm, I suppose a hex editor wouldn't be able to show what the previous value was? The goal is to either 1) Correct the data or 2) Just replace the entire file with a copy from backup, so I'm not particularly interested in viewing the corrupted data itself...
  15. It seems a disk was fried by the power outage, as unRAID detected 39 errors on a single disk (but none on the others) the moment I started the array up again. Unfortunately, several seconds passed before I was able to cancel the parity check, and 1 parity "error" at sector 123720 was "corrected" as a result. How can I find out which file was occupying this sector?
  16. I just checked the unRAID web GUI for the first time in ages, and noticed that disk2 is showing 11 errors but still has a green ball next to it.
  17. Well, LimeTech was very upfront about V5 being inherently insecure - I believe the official documentation mentioned that it should only be run behind a firewall, never to be exposed to the public internet. I still remember their response to Heartbleed was something along the lines of "V5 uses insecure Telnet, so there's no impact".
  18. I'm sure it's do-able. It's just that there are a zillion compatibility-related factors that need to be re-checked (if someone else hasn't already done it, that means I need to spend the time re-running all the tests I initially ran before picking unRAID V5 as my NAS solution in the first place - and that took about 2-3 months' worth of weekends), and on top of that, now that I'm actually using the system there's the added complication of "will running this test destroy my data?" Given the projected headaches and the huge risk, I can't really justify it for the rather meagre benefits. I'd rather be spending my weekends going on ski trips or watching a movie or something instead. I suppose going over and asking never hurts, though...
  19. Pretty much - it's the kind of situation where my better judgment is trying to hold back my urge to tinker. Thanks, this is good to know. Unless a) LimeTech officially supports my upgrade scenario (they don't, as far as I can tell - nothing is mentioned in the upgrade guide, and it seems unRAID has moved from happily running on top of a hypervisor in V5 to providing a KVM hypervisor in V6 and expecting everyone to rebuild their entire infrastructure on top of their new architecture), or b) someone with a very similar configuration to mine has already gone through the whole upgrade process and is willing to share their experiences and answer a bunch of questions (a few questions to start off: Is there a VMware Tools plugin for V6? Is VT-d passthrough of the SATA controller confirmed to work with 64-bit guests in ESXi 6.0? What is the process to install V6 to a VMDK file as opposed to a USB stick? Does V6 have ESXi drivers?), thus greatly simplifying my own validation process, it's simply not going to happen regardless of the benefits V6 brings. I don't have the time to go through the whole trial-and-error route of creating a custom installation/upgrade process.
  20. First of all, I'm very happy with 5.0.5 and have been for a long time now. With 6.x, however, LimeTech seems to have shifted focus from providing a NAS to providing a media server. Given that my unRAID installation runs on top of ESXi and I already have dedicated VMs covering most of the new features advertised in 6.x, this is fundamentally at odds with my use case. And even if I ignore all that stuff, there's still the need to go through validation all over again, so either way it would be in my best interests not to upgrade to 6.x. That leaves 5.0.6 - I'm torn: on the one hand, upgrading to what is essentially the final 5.0.x service release would put me on the same version as everyone else for support reasons, and would also be an opportunity to do a clean install and wipe out all the plugins that I tried but ended up not using; on the other hand, there's the old adage "don't fix what ain't broke"...
  21. Every 15 minutes or so, the music player (I've had this happen with WMP and Winamp so far...) will freeze (in the sense of Windows asking if you want to terminate the program or wait for it to respond) for 10-30 seconds. I have a suspicion that this isn't unRAID's fault but rather that of the platform it's running on, but does unRAID come with any diagnostic tools that can be used to pin down the issue?
  22. I haven't tried waiting for a while after failing to connect... I'll test it out tomorrow and let you know. I suppose you could say I accidentally discovered the issue while trying to diagnose an unrelated driver issue. It was basically an Install Driver -> Reboot -> Uninstall Driver -> Reboot -> Install Older Version -> Reboot -> ..... rinse and repeat type of situation. With the speed of modern CPUs, SSDs, and Windows 8's greatly improved boot times, each of these reboot cycles ended up taking maybe 2 minutes.
  23. Has anybody done this before, and is there a guide?
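
For post 2 (verifying data you no longer have pristine copies of): one generic approach is to build a checksum manifest once and re-verify it on a schedule. The sketch below is a minimal, hypothetical Python example - the share path and manifest filename are placeholders, it assumes the files are reachable from wherever the script runs, and it is not a built-in unRAID feature.

```python
# checksum_manifest.py - minimal sketch: build or verify a SHA-256 manifest
# for every file under a directory tree. Paths and filenames are placeholders.
import hashlib
import os
import sys

CHUNK = 1024 * 1024  # read files in 1 MiB chunks to keep memory use flat

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            h.update(chunk)
    return h.hexdigest()

def build(root, manifest):
    # record "digest  relative/path" for every file under root
    with open(manifest, "w", encoding="utf-8") as out:
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                out.write(f"{sha256_of(full)}  {os.path.relpath(full, root)}\n")

def verify(root, manifest):
    # re-hash every recorded file and flag anything missing or changed
    bad = 0
    with open(manifest, encoding="utf-8") as f:
        for line in f:
            digest, rel = line.rstrip("\n").split("  ", 1)
            full = os.path.join(root, rel)
            if not os.path.exists(full):
                print(f"MISSING  {rel}")
                bad += 1
            elif sha256_of(full) != digest:
                print(f"CHANGED  {rel}")
                bad += 1
    print(f"{bad} problem(s) found")

if __name__ == "__main__":
    # usage: python3 checksum_manifest.py build|verify /mnt/user/someshare manifest.sha256
    mode, root, manifest = sys.argv[1], sys.argv[2], sys.argv[3]
    build(root, manifest) if mode == "build" else verify(root, manifest)
```

A manifest like this can only catch corruption that happens after it is first built, which is exactly the poster's dilemma about files that may have silently changed years ago.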
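For posts 3 and 8 (moving huge numbers of small files): the usual quick-and-dirty workaround really is the "treat them as one contiguous block" idea - bundle the tree into a single tar stream so the network only sees one big sequential transfer, then unpack on the other side. The classic shell form is `tar cf - somedir | ssh host 'tar xf - -C /dest'`; below is a rough Python stand-in using only the standard library. The source and destination paths are made-up examples and assume the SMB share is already mounted/mapped.

```python
# pack_and_copy.py - rough sketch: bundle a tree of small files into a single
# tar file written directly onto the destination share, so the network sees
# one large sequential write instead of thousands of per-file operations.
import tarfile

SRC = r"D:\SteamLibrary\steamapps"        # local tree full of small files (example path)
DST = r"\\tower\backup\steamapps.tar"     # tar written onto the SMB share (example path)

with tarfile.open(DST, mode="w") as tar:  # "w" = plain, uncompressed tar
    # arcname keeps paths inside the archive relative instead of absolute
    tar.add(SRC, arcname="steamapps")

print("done - unpack later with any tar tool or tarfile.extractall()")
```

Skipping compression is usually fine here, since game assets are already compressed; the speed-up comes purely from replacing thousands of per-file round trips with one sequential transfer.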
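For post 11: the "more than 3 days" figure checks out. A quick sanity check of the arithmetic, using the poster's own assumptions:

```python
# one full pass over an 8 TB drive at an optimistic USB 2.0 throughput
size_bytes = 8 * 10**12               # 8 TB as marketed (decimal units)
speed_bps = 30 * 10**6                # 30 MB/s sustained read/write
hours = size_bytes / speed_bps / 3600
print(f"~{hours:.0f} hours per pass, ~{hours / 24:.1f} days")  # ~74 hours, ~3.1 days
```

A preclear-style test makes several passes, so the real wall-clock time would be a multiple of that.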
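For posts 13 and 15 (mapping a reported sector back to a file): the ext3/ext4 procedure the poster alludes to is to convert the device sector into a filesystem block (subtract the partition's starting sector, then divide by the block size), and then ask the filesystem which inode and path own that block - on ext that last step is debugfs's icheck/ncheck. ReiserFS has no equivalent one-liner, which is what prompted the question. The concrete numbers below (partition start, 4 KiB block size, and the assumption that the reported sector is relative to the whole device) are illustrative only.

```python
# sector_to_block.py - sketch of the sector -> filesystem-block arithmetic.
# Every concrete number here is an assumption for illustration.
SECTOR_SIZE = 512         # bytes per LBA sector
BLOCK_SIZE = 4096         # typical ext3/ext4 filesystem block size
PART_START = 64           # partition's starting sector (check with fdisk -lu)

reported_sector = 123720  # sector number taken from the syslog/parity check

# distance from the start of the filesystem, converted into blocks
fs_block = (reported_sector - PART_START) * SECTOR_SIZE // BLOCK_SIZE
print(f"filesystem block {fs_block}")

# On ext3/ext4 the block could then be mapped to a file with debugfs:
#   debugfs -R "icheck <block>" /dev/sdX1   -> inode using that block
#   debugfs -R "ncheck <inode>" /dev/sdX1   -> path(s) pointing at that inode
# There is no comparable ReiserFS one-liner, hence the question in post 13.
```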