  1. Yes, I was/am running 2021.04.20. I noticed yesterday while checking settings, before changing to a percentage to make things run, that I had tried setting by age and hit a weird thing: if I had it on "No" then I could select the number of days, but if I had it on "Yes" then that selection was greyed out. Anyway, it is working on 0% now when I tested it, so I guess things are back to normal, though I don't know what the issue was. So, to be clear on the tuner: the mover settings schedule (set to daily) just invokes the mover, and then the mover tuning settings get checked to determine whether the mover actually does anything, right? As I'm set now, I just want it to run every night and move everything off the cache (except cache-prefer shares), which is why I have things set to 0% and everything else set to "No" (except allowing mover to run during a parity check/rebuild). I think I had logging on for a bit when it wasn't working, but then disabled it later - would those logs still be available? I can try to dig them up if they would be useful, though I don't think I've looked at logs before and am not immediately sure where to go to grab them.
  2. I am on 6.9.2 and recently noticed that my cache drive was filling up and that the mover wasn't doing anything. Mover tuning had pretty much everything set to "No" and "Only move at this threshold of used cache space" set to 0%, but nothing would move. Nothing changed until I set this to an actual percentage (I chose 30%) and then ran the mover. Is this a bug? I have been using it for years without changing settings and without errors.
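For what it's worth, the behaviour described above boils down to a simple gate: the schedule fires the mover, and the tuning plugin only lets it proceed once cache usage reaches the configured percentage. A minimal sketch of that gating logic, assuming this is how the plugin decides (a hypothetical re-implementation for illustration, with invented names - not the plugin's actual code):

```python
def mover_should_run(used_bytes: int, total_bytes: int, threshold_pct: float) -> bool:
    """Return True if cache utilisation meets the configured threshold.

    Hypothetical sketch of the check the Mover Tuning plugin appears
    to apply before letting the scheduled mover do any work.
    """
    used_pct = 100.0 * used_bytes / total_bytes
    return used_pct >= threshold_pct
```

With the threshold at 0%, any cache usage at all passes the check, so a nightly schedule should always result in a move - which matches the expected (pre-bug) behaviour described above.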
  3. When I was setting up the docker profile I had changed "/output" to "/OUTPUT" and then none of the files I encoded showed up. I figured out the problem and changed it back and new encodes show up, but where would the old ones have gone, so that I can go and delete those files?
  4. DISREGARD. I THINK THIS IS AN ISSUE WITH THE UNASSIGNED DEVICES PLUGIN. I've been running a preclear on two drives at the same time. One is on an LSI card that is new to me; that drive disappeared from the unassigned devices list on the main tab of unraid midway through the test, but it still finished fine, still shows up when you use the "-l" command, and the "-t" command will confirm the preclear finished. The other drive is still going (much slower, it seems, though they are both the same model) and is still listed; it's connected to the motherboard's intel controller. I posted about this in the main unraid support forum, as I don't expect this is a preclear issue, but in case it is, I thought I'd mention it here - not sure if that LSI card is suspect or what... here is that post. Edit: I note that the "-t" command, which confirms it was precleared, states the size as "2T" in the table it spits out - I can't test against the other drive right now to see whether that is normal for anything 2TB+ or unexpected.
  5. I connected two 8TB drives (new to me, but not yet used with unraid) to my system and was running a preclear (via the docker) on them: "sdb" is connected to a new LSI card that I got (ie: haven't used it before - this preclear was intended to be part of the test) and "sdg" is connected to the intel controller on the motherboard. The sdb drive disappeared from the unassigned devices list on the "main" tab in unraid midway through the preclear, for some reason, but the preclear still finished successfully, the drive still gets listed when I run the "-l" command, and the "-t /dev/sdb" command will still read it and confirm that preclear finished on it. The sdg drive is still listed but hasn't finished yet and was/is going significantly slower. I haven't rebooted or anything (since preclear is still running on sdg), but I'm not sure whether I need to be concerned about the LSI card (an LSI-9200-8i from China off ebay) or if this just sometimes happens - obviously I wouldn't want the drive dropping out of the array. Any thoughts?
  6. Just to update this: I removed the molex to sata adapter that was in place, used the power supply's own cables instead, and the issues went away. Ran another parity check that found and corrected errors (left uncorrected due to the issues from before), and then a subsequent parity check completed without issues. I don't actually think it was the cable itself - more likely the power supply is deteriorating and can no longer handle all the drives spinning up at once on the one rail. I'm moving to another case anyways, so I will be putting in a better/bigger power supply. Thank you for your support, @trurl - I think it would have taken me a long time to suspect power might have been the issue.
  7. Hmm. I tried a few things and so far what seems to have worked was taking out the molex to 5x sata splitter (only connected to 4 drives) that I had the drives running off of and having them powered straight off the sata connections from the actual power supply (I'm down to enough drives that I can do that now). Not sure why that would be the issue all of a sudden, but so far it seems that may have been it. Will do more testing.
  8. Oh, OK. They aren't bundled but the case is kind of tight - never had an issue before now though. I'll unplug and replug everything. yeesh.
  9. I've unplugged the sata cables on both ends (though I only did the power connectors on one end) and plugged everything in again. What does bundling sata cables mean? Things seemed OK but I will take it apart again and see. I've confirmed that the ports I'm using are the intel ports.
  10. I am using the same ports. I think there are an additional 4 ports on another controller (the board has 10 onboard) but I'm not using those - just the main ones. Motherboard dying?
  11. I started a parity check yesterday and it just finished, with 97238 errors!!! I am not seeing any problems in the smart data, and I ran a short test on each drive and they all passed. I am also noticing lots of errors and warnings in the disk log information. Not sure what to do or how to approach this now - can anyone point me in the right direction? It has been 100+ days since the previous parity check, but that one completed without issue. Since then I have moved homes, so maybe something happened in the move? Really appreciate any help anyone can give. I've attached the diagnostics zip file - hopefully I've done that right. After getting the diagnostics file I shut down, unplugged and replugged all sata connections, and have started some long smart tests. I checked the disk log info and the parity drive is showing some errors, as is another drive. sigh...
  12. High-water: "Choose the lowest numbered disk with free space still above the current high water mark..." The above is from the help, meaning the fill goes in disk number order, which is why I was thinking I should have my largest drives first - but maybe that's not really a big deal. I'm still confused as to whether I would need to use "new configuration" - isn't that just for multiple drive adds or removals? Couldn't I just swap the drive assignments around after stopping the array and then rebuild parity?
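To illustrate the help text quoted above: high-water picks the lowest-numbered disk whose free space is still above a mark that halves whenever no disk qualifies, which is why disk order (and putting the big disks first) affects which disk fills next. A simplified model of that selection rule, for illustration only - this is not Unraid's actual allocator, and it seeds the mark from current free space rather than disk capacity:

```python
def high_water_pick(free):
    """Pick the index of the disk the next write goes to under a
    simplified high-water model. `free` is free space per disk, in
    disk order. Illustrative sketch, not Unraid's real implementation.
    """
    mark = max(free) // 2       # simplification: mark seeded from free space
    while mark > 0:
        for i, f in enumerate(free):
            if f > mark:
                return i        # lowest-numbered disk above the mark
        mark //= 2              # no disk qualifies: halve the mark
    return None
```

Under this model, with two 8TB disks in slots 1 and 2, writes land on disk 1 until its free space drops to the mark, then disk 2, and so on down the order - consistent with the "fill in disk number order" reading above.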
  13. Disk order impacts the order that the allocations will fill up said disks - but maybe it doesn't matter enough to worry about it. I was thinking I would want my largest disks (8TB x2) to be disks 1 and 2 and my 5TB to be disk 3 and my 4TB to be disk 4, but maybe this isn't totally worth dealing with. If I choose to do it though, I still want confirmation of how I should be doing it. Thanks.
  14. Looking for some confirmation before I do something wrong: I had an 8TB drive as the parity drive, but I got a new 10TB drive, put it in place of the old drive, and parity is now rebuilding. I want to add that old 8TB drive to the array, which should be simple enough, but I have a few questions: 1) I assume there is no point in preclearing a drive that was working fine in operation for several months, right? 2) I was thinking I'd like to add the drive and then start making better use of the split levels, and would likely want to reorder my disks as I have some smaller ones ahead of it in the disk order. Is there anything special to do for this? Do I need to add the drive and then move them around - in one step, or two? From what I understand, the "new config" isn't what I'm after (though I am confused as to when it would be used to add multiple disks at a time - wouldn't that destroy parity?) and I would just stop the array and change the drive order - right? Also, to confirm, drive order is tied to the serial number of the drive and not the connector the drive goes to, right? I should be able to move drives to other ports (including expander cards (not the ssd cache though)) without consequence, right? Does that change with dual parity?
  15. Also having this issue, only with linuxserver repositories.