chickensoup

Members
  • Content Count

    513
  • Joined

  • Last visited

Community Reputation

0 Neutral

About chickensoup

  • Rank
    Advanced Member
  • Birthday 05/09/1987

Converted

  • Gender
    Male
  • Location
    Brisbane, Australia

  1. Made one last night, inspired by a wallpaper I really like; thought I'd keep it simple. I'd recommend changing "Header custom text color" to FFFFFF as well.
  2. Was able to generate a new config and rebuild parity to both parity disks without any issue. All disks are up and there doesn't appear to be any problem with the data. Thanks for all your help, guys. Marked as solved.
  3. Thanks! I have a quick question about rebuilding the array when I boot it back up. Assuming I trust the data on Disk 8, am I able to run a new config and rebuild both parity drives (at once) based on the data that is currently on the disks? (For why that works, see the parity sketch after this list.) Disk 8 is showing a red X (as per the second screenshot in my OP), but I don't think there is an issue with the drive at all; I've tested it outside of the array. I certainly don't trust my parity right now, but I want to re-introduce all the disks.
  4. Sorry for the long reply, but I think I've finally worked out what happened. It always felt like it was power related, but the one thing I could never understand was why I was getting errors on some disks and not others, even when they were connected to the same chain off the same power supply cable. I thought at one point that bending the cables into shape to fit the case might have had some impact, since the power supplies are reasonably old (though good quality).

     It took a few days of thinking about the symptoms and scratching my head; the comment about poor ground also got me thinking, and then, while sifting through my power supplies and cabling, I had a realization. I had 3 x modular 6-pin to SATA cables connected to my ToughPower 750W power supply. It turns out the power supply likely only shipped with two of these, and the additional cable must be from a different PSU with a slightly different pin-out (I'm pretty sure there is no damage to the drives). My guess is the drives had 12V and ground, but the 3.3V and 5V lines were swapped around. I feel like such an idiot.

     The picture below shows the two TT cables connected to the power supply (top row G, 12V, G on each) and two more modular SATA cables I had in my stash. If I had to put money on it, I would guess that the one on the right-hand side in the picture was also being used, which is why the symptoms were so strange: close enough voltage to be OK for a little while, but ultimately not what the drives were looking for.

     Edit: All of the cables below are SATA to 6-pin modular PSU cables; the one on the right, in my hand, looks pretty much identical at a distance to the ones which shipped with the PSU.
  5. Hey Benson, is that table of parity speed history a plugin? I can't check mine right now, sorry, as it's out of action. If it's part of 6.7, would you mind sharing how to get to it? Thanks!
  6. I actually tested both the power supplies before rebuilding the server and they looked OK, even under load, but it's curious that, other than Parity 2 (which dropped due to a SMART attribute 199, which could be the SATA cable; see the SMART check sketch after this list), all the other drives with errors are on the same PSU. Disks 10, 11, Cache and Parity 1 are all on a different PSU and show no errors. I have another power supply I can use, but now I'm not really sure how best to proceed with my disks having dropped out all over the place, i.e.:
     - Parity 2 dropped out, so I unassigned it for now.
     - After the reboot, when all looked well, I ran a correcting parity check which fixed ~10 errors, but when I checked the server after it had finished there were read errors showing on all the data disks. I'm not sure I can trust that my parity is valid at this stage, and I'm not sure when the errors started happening; can anyone tell from the diagnostic?
     - Disk 8 dropped out (also not sure if this was after the parity check) but SMART looks OK; I'm running a full check on it now.

     I'm not sure if I should dump the data off Disk 8 and rebuild it from parity, as I feel like I actually trust the data disk more than the current parity state. Is there an option to reintroduce the disk to the array and rebuild parity off the data, assuming the disk is OK? Sorry if any of the above is confusing; I've just never had so many errors all at once. It's been rock solid up until now (going on 10 years...).

     Edit: Photo of the setup attached, if anyone is curious. Disk 8 is missing as I'm running WDDiag on it at the moment.
  7. Will change the onboard SATA controller to AHCI tonight; anything else I should look at before I reboot it? It is still currently powered on. Not to over-complicate things, but in full disclosure the system is actually running off two power supplies, for no reason other than that the case supports them and I was testing power usage while balancing the load between the two. Based on what you have said, I suspect my TT 750 might be playing up, which is strange since it is actually powering a lighter load than it has been for the last few months. I tested both the other night after it first failed and they looked OK, but I might try swapping them around to see if this fixes anything. More info here >
  8. Sorry for the late reply; I've been really busy with work. I've updated the OP with a diagnostic from after booting the server back up last night. It looked OK initially and I ran a non-correcting parity check overnight. In the morning all disks were OK at about 30% with 8 errors detected, so I stopped the check and changed to a correcting parity check, which I now regret; I'm hoping the data isn't corrupt. This afternoon looks just as bad as the other day, only with different disks. Please note that between the two screenshots/reboots I also tidied up the cabling, so the specific disks aren't necessarily on the same ports as they were the first time. Apologies if this makes things a little messier to diagnose, but the logs should clear up any confusion.
  9. I'm using two Adaptec 1430SAs. The thing is that Parity 1 and Parity 2 are on the same card, and one has errors while the other doesn't. Drives on the second card have errors, and so do some on the motherboard, but others don't...
  10. Help please. I recently transplanted my hardware over to a new case and at the same time changed the board, CPU and memory. Initially I had a problem where one of the controller cards was showing all disks on boot but not in Unraid (they displayed as missing). I updated the BIOS on the board from F3 to F8 and re-seated the card and cables, which seemed to work. Everything booted up OK and all disks went green. I allocated a new/spare disk as a cache drive, which formatted OK, and I had green lights across the board, no errors.

      A few hours later, after some light Plex use, Parity 2 dropped out first (red X). I figured maybe the cable was bad, and since I was dealing with dinner/son/etc. I left it for the moment. Shortly thereafter, when I got a chance to check it, the shares had dropped off and I had read errors across most of the disks. I'm wondering if the BIOS is set up slightly differently on the new board (legacy, IDE mode, etc.), so tomorrow afternoon I'll compare against the old board, but I'm mostly at a loss as to what is going on. The motherboard changed from a Gigabyte GA-H57M-USB3 (rev 2.0) to a Gigabyte GA-H67MA-USB3-B3 (rev 1.0), and I ran a few passes of memtest on the new board yesterday without any issues.

      Edit: I unassigned Parity 2 since it was 'dropped', and after a reboot and replacing a couple of SATA cables (both parity drives) it was looking OK, but then after today, errors all over the place again. The syslog is from the first time it failed; the diagnostic is from after the reboot. Now Disk 8 has dropped completely and I have no idea what is going on. Added an extra screenshot. unraid-syslog-20190603-1428.zip unraid-diagnostics-20190607-0945.zip
  11. Hi guys, I recently acquired a replacement case which supports dual ATX power supplies. I'm currently at 14 drives, and a few months ago my original single-12V-rail 550 just wasn't holding up every boot anymore. I swapped it out with a spare 750 I have and it's been OK, but the new case got me thinking. I'm at a point in terms of drive count where it's hard to avoid using power adapters in one form or another.

      I was curious about how much difference running a second power supply would make (in terms of power draw from the wall outlet), so I ran some tests before my case transplant, which is currently underway, to determine if it may be worthwhile. The difference is surprisingly negligible, and it would give me the benefit of ditching a number of the Molex-to-SATA adapters I currently have in use. I didn't think it would be worth the extra power, but I think I may actually go down this road after all.

      Since I'm certain someone will ask: you can power them on together using one of these. The wall outlet power draw was measured using a TP-Link HS110 Smart Wi-Fi Plug (see the measurement sketch after this list). I'll be posting the actual build log (case transplant + upgrades) shortly with photos of the new case, but I'm still waiting on some parts and it's not finished yet... so stay tuned if you are curious. Results are below!
  12. Bumping this thread again to see if there are any plans to implement this in the near future. I've had my server running extremely well for around a decade now (pre-4.7) and, as a result, have some smaller disks I would prefer to phase out completely, reducing my disk count, rather than replace. I'm aware there are ways to do this, but I'd currently require some assistance from the forums to make sure I don't break anything (i.e. rsync/move the data onto an alternate disk, remove the disk, rebuild parity; see the rsync sketch after this list). Being able to 'decommission' a disk seems like a great idea.
  13. I'm actually in a similar situation in terms of drive numbers and trying to find a replacement case. I currently have 11 data disks and dual parity, and want to add a cache disk, for a total of 14 x 3.5" disks. I'm currently using a custom-modified case (an old server case, very heavily modified) which supports up to about 17 disks, but several of them are incredibly difficult to swap out. I was looking at potentially the Fractal Design Define XL, though the current revision (the XL R2) supports one fewer disk than the original.

      Edit: I jumped to another post after writing this and found someone who built one of these with 17 disks. You can buy an extra 4-bay drive cage from FD as an optional extra, which means 12 x 3.5" and 4 x 5.25" even without getting creative for those last couple of disks. Add to that the fact that I live in Australia: cases like the Norco 4224 or even the 4220 are super expensive to ship from the US and/or hard to find.
  14. I'll give you some ballpark figures based on my personal experience of using unRAID for several years. Hardware doesn't really make a lot of difference unless you are bottlenecking somewhere; performance is mostly dependent on drive selection and network performance. The use of a cache drive should, in most cases, saturate gigabit Ethernet for writes. Write speeds will vary somewhat depending on the size of the files you are writing and which drive you are writing to (newer, higher-capacity and higher-RPM drives will perform better).
      - Writing to the array with a decent cache drive: 100 MB/s+
      - Writing to the array without a cache drive, good drives: ~60 MB/s
      - Writing to the array without a cache drive, slower drives: ~40 MB/s
      - Reading from the array, good drives: limited by drive performance; single files are usually ~80 MB/s

      Keep in mind this list is very dependent on configuration, hardware, fine-tuning, file sizes, file system... there are a lot of variables. (For a quick way to turn these figures into rough transfer times, see the sketch after this list.)
  15. Wow, I've never even noticed that before. It's been a while since I've been actively keeping up with everything. File is attached. unraid-diagnostics-20160912-1002.zip
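
A note on the parity question in post 3: Unraid's first parity disk (P) is a plain bitwise XOR across all the data disks, which is why, if you trust the data, parity can be regenerated from the data disks via a new config. Below is a minimal Python sketch of that XOR property using made-up byte values in place of real disk blocks; the second (Q) parity uses a Reed-Solomon style calculation and isn't shown.

    # Hypothetical blocks from three data disks at the same offset.
    data_blocks = [
        bytes([0x12, 0x34, 0x56, 0x78]),  # disk 1
        bytes([0x9a, 0xbc, 0xde, 0xf0]),  # disk 2
        bytes([0x0f, 0x1e, 0x2d, 0x3c]),  # disk 3
    ]

    def xor_parity(blocks):
        """XOR all blocks together byte by byte (the idea behind P parity)."""
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                parity[i] ^= b
        return bytes(parity)

    # Rebuilding parity from the data disks is just recomputing the XOR.
    parity = xor_parity(data_blocks)

    # The same property lets a single missing data disk be reconstructed:
    # XOR of parity and the surviving disks gives back the missing one.
    rebuilt_disk1 = xor_parity([parity, data_blocks[1], data_blocks[2]])
    assert rebuilt_disk1 == data_blocks[0]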
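
On the SMART 199 mention in post 6: attribute 199 (UDMA_CRC_Error_Count) counts transfer errors between the drive and the controller, which usually points at cabling rather than the disk itself. A minimal sketch for pulling just that attribute, assuming smartmontools is installed; the device path is a placeholder and the raw-value parsing is deliberately simplistic.

    import subprocess

    def udma_crc_error_count(device):
        """Return SMART attribute 199 (UDMA_CRC_Error_Count) for a device,
        or None if it isn't reported. Requires smartmontools (smartctl)."""
        out = subprocess.run(
            ["smartctl", "-A", device],   # -A prints the SMART attribute table
            capture_output=True, text=True, check=False,
        ).stdout
        for line in out.splitlines():
            fields = line.split()
            # Attribute rows start with the attribute ID; the raw value is the last column.
            if fields and fields[0] == "199":
                return int(fields[-1])
        return None

    if __name__ == "__main__":
        print(udma_crc_error_count("/dev/sdb"))   # /dev/sdb is a placeholder device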
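
On the wall-draw measurements in post 11: if anyone wants to script readings from the HS110 rather than checking it manually, here is a rough sketch. It assumes the third-party python-kasa library and its SmartPlug/emeter interface (check the docs for your version), and the plug's IP address is a placeholder.

    import asyncio
    from kasa import SmartPlug  # third-party: pip install python-kasa (API assumed here)

    async def read_power(ip):
        """Read the instantaneous power draw (watts) from an HS110-style plug."""
        plug = SmartPlug(ip)
        await plug.update()                 # fetch current state from the plug
        return plug.emeter_realtime.power   # emeter_realtime is assumed to expose .power

    if __name__ == "__main__":
        watts = asyncio.run(read_power("192.168.1.50"))  # placeholder IP for the HS110
        # Rough annual energy for a box that runs 24/7 at this draw.
        kwh_per_year = watts * 24 * 365 / 1000
        print(f"{watts:.1f} W is roughly {kwh_per_year:.0f} kWh/year")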
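
On the removal workflow mentioned in post 12: the usual manual route is to copy everything off the disk being retired onto another data disk, verify it, then remove the disk via Tools > New Config and rebuild parity. A minimal sketch of the copy step, wrapping rsync from Python; the disk numbers are placeholders, and you'd want to confirm the destination has enough free space first.

    import subprocess

    # Placeholders: disk being decommissioned and the disk receiving its data.
    SRC = "/mnt/disk3/"   # trailing slash: copy the contents, not the folder itself
    DST = "/mnt/disk5/"

    def copy_disk(src, dst, dry_run=True):
        """Mirror src onto dst with rsync, preserving attributes.
        Start with dry_run=True to review what would be transferred."""
        cmd = ["rsync", "-avX", src, dst]   # -a archive, -v verbose, -X extended attrs
        if dry_run:
            cmd.insert(1, "--dry-run")
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        copy_disk(SRC, DST)                     # review the dry-run output first
        # copy_disk(SRC, DST, dry_run=False)    # then do the real copy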
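
On the ballpark figures in post 14: a quick way to turn a MB/s figure into a rough transfer time, using the speeds from that list.

    def transfer_minutes(size_gb, speed_mb_s):
        """Approximate transfer time in minutes for size_gb at speed_mb_s (1 GB = 1000 MB)."""
        return size_gb * 1000 / speed_mb_s / 60

    # Speeds from the list above: cached write, uncached good/slow drives, array read.
    for label, speed in [("cache", 100), ("good drives", 60),
                         ("slower drives", 40), ("array read", 80)]:
        print(f"50 GB at {speed} MB/s ({label}): ~{transfer_minutes(50, speed):.0f} min")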