Alby24

  1. I wanted to add something outside of what I marked as a solution. Basically, the issue was that Unraid decided that my parity disk had problems when it hadn't: one single write failed, so it declared the disk dead and disabled it. By doing this, the array was put at risk, once again, for no reason at all. Unraid should have been built to protect your data; instead it tampered with it like malware would. This is terrifying. Disabling a drive just because one single write failed (which can happen for a lot of reasons) does not make any sense, and I hope you all see that. Run a check on that sector and keep the drive active; if write operations fail frequently, then you can think about disabling the drive. I am growing tired of the lack of documentation and of official support for this (paid) software. In fact, this forum was not able to provide any concrete solution or help beyond the very basics and, to be precise, 90% of my post was simply ignored. I really cannot recommend Unraid anymore, unless you want to risk losing your data without any actual drive failure and want to troubleshoot it with a UI that only gives out useless messages.
  2. Two days later, this is what I've done: I've changed the cable of Disk 4 and never encountered any other error; I've rebuilt the parity disk onto itself (lol) and never encountered any other error. In short, Disk 4 was unmountable because of the cable, and the parity disk was disabled because Unraid said so.
  3. What to do now then? How do I re-enable the parity drive?
  4. Thanks for your answer. That's good to know; it should be stated in the UI somewhere. Given that the SMART report of the parity drive seems OK, is there a possibility that there was a write failure due to the fact that Disk 4 was unmountable?
  5. Hi there, I'm having issues with my Unraid server. Yesterday, Disk 4 reported a UDMA CRC error count equal to 1. The array was working fine. Today, when I booted the system up, the UDMA CRC error count of Disk 4 had increased to 9 and the Web UI was saying that the drive was unmountable. Most importantly, the parity disk was automatically disabled. After a reboot, Disk 4 was successfully mounted and present in the array, but the parity disk was still disabled. I'm aware that the UDMA CRC error count is related to connection errors, so I am planning on checking the cable of Disk 4. What concerns me is that I have no clue why the parity disk, which seems healthy, was randomly disabled. It also says that the array is unprotected. What happened to the parity disk? How do I re-enable it? I'm attaching the diagnostics. P.S. I find it unbelievable that there is no explanation in the Web UI of why the system decided to disable a working drive. unraid-nas-diagnostics-20230311-2025.zip
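A hedged note on the attribute mentioned above: UDMA CRC errors are counted in SMART attribute 199 (UDMA_CRC_Error_Count), and the raw value is the last field of the corresponding `smartctl -A` row. A minimal sketch that parses a captured sample line (the sample values here are made up, not taken from the attached diagnostics):

```shell
# Parse the raw value out of a (sample) smartctl -A attribute row.
# Attribute 199, UDMA_CRC_Error_Count, counts host-to-disk transfer CRC
# failures, which usually point at cabling rather than the disk surface.
line="199 UDMA_CRC_Error_Count 0x003e 100 100 000 Old_age Always - 9"
echo "$line" | awk '{print $NF}'   # last field is the raw error count: 9
```

On a live system the equivalent would be `smartctl -A /dev/sdX` (from smartmontools) run against the actual device node, reading the UDMA_CRC_Error_Count row.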
  6. Of course not sir. I posted a bug report here but I haven't got even a single sign of life from the dev team in 8 months.
  7. Indeed, and that is weird, since the standard "/sbin/init 0" line was being reached. Plus, the system doesn't shut down with the button anymore, which implies that I modified the correct file. We're missing something for sure, but I have no idea what.
  8. I tried as you said and I believe nothing is being logged at all. I don't really know what to say...
  9. Man, if you aren't sure, I really don't know what to do; I know very little about this syntax. Do you mean writing something like this?

        case "$2" in
          power)
            logger "before run" && /usr/bin/docker pull alpine:latest && logger "after run"
            ;;

     Plus, where can I find the log? Thanks again.
  10. Yes, you are correct, now I can use the command. Anyway, it doesn't seem to be executed when I press the power button. Does acpi log anything? Here is my entire acpi_handler.sh script:

        #!/bin/sh
        # Default acpi script that takes an entry for all actions

        IFS=${IFS}/
        set $@

        case "$1" in
          button)
            case "$2" in
              power)
                /usr/bin/docker pull alpine:latest
                ;;
              *)
                logger "ACPI action $2 is not defined"
                ;;
            esac
            ;;
          *)
            # logger "ACPI group $1 / action $2 is not defined"
            ;;
        esac
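The debugging question above (does acpi log anything?) can be answered by instrumenting the action branch with log lines. A minimal sketch of that idea, assuming the same nested-case dispatch as the script; `echo` stands in for `logger` so the sketch runs anywhere, and the docker payload is commented out:

```shell
#!/bin/sh
# Sketch: instrument the ACPI dispatch so you can tell whether the handler
# fires at all. On the real server, replace echo with logger and read the
# messages back from the system log.
handle_acpi() {
  group="$1"
  action="$2"
  case "$group" in
    button)
      case "$action" in
        power)
          echo "power button: before command"
          # /usr/bin/docker pull alpine:latest   # real payload goes here
          echo "power button: after command"
          ;;
        *) echo "ACPI action $action is not defined" ;;
      esac
      ;;
    *) echo "ACPI group $group is not defined" ;;
  esac
}

handle_acpi button power
```

If "before command" shows up in the log but "after command" never does, the payload itself is failing rather than the handler.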
  11. But when I try to run a command like:

        /var/lib/docker run alpine:latest

      this is shown:

        bash: /var/lib/docker: Is a directory

      and nothing is executed.
  12. Thanks for your reply. I see what you mean. I found out where docker is located (EDIT: this is the right path: /usr/bin/docker), but I didn't know the exact path to the executable. Isn't there a simple command that I can replace "/sbin/init 0" with, in order to see if this approach can work?
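On finding the executable path asked about above: `command -v` is the portable way to resolve a name to the full path the shell would run. A tiny illustration, using `sh` since it exists everywhere (on the server, the same call with `docker` should print /usr/bin/docker):

```shell
# command -v prints the absolute path of the first matching executable
# on $PATH, or nothing (non-zero exit) if the name is not found.
command -v sh
```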
  13. So... nobody? I started messing with /etc/acpi/acpi_handler.sh and replaced "/sbin/init 0" with "docker run ...". The system does not turn off anymore when I press the power button, but the docker command isn't being executed either. I do not know how this works or how to debug it, so any help will be appreciated.
  14. Do any of you know if this method still works? I know it's been a while, but I'm trying to replicate it and I cannot get my script to run. The server doesn't shut down anymore though, so I might be on the right track.
  15. Hi there, I am trying to run a custom command whenever the power button of the server is pressed; in particular, a docker run command. I found on this page that one way to do that is:

        sysctl -w kernel.poweroff_cmd="/sbin/powerdown"

      So I replaced "/sbin/powerdown" with "docker run ...", but it is not working and the system shuts down normally. The page also says that: [quoted passage not captured] Guess mine doesn't. Is there any other way to do this?
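One alternative worth sketching, since pointing kernel.poweroff_cmd straight at a long `docker run ...` string is fragile: point it at a small wrapper script that runs the custom command and then hands off to the normal shutdown. This is an assumption-laden sketch (the wrapper path, the payload, and the final `/sbin/powerdown` hand-off are placeholders, and the `sysctl` call needs root, so it is left commented):

```shell
#!/bin/sh
# Write a hypothetical power-button wrapper: custom command first,
# then the regular powerdown.
cat > /tmp/power-hook.sh <<'EOF'
#!/bin/sh
logger "power hook: running custom command"
# /usr/bin/docker run --rm alpine:latest    # placeholder payload
exec /sbin/powerdown                        # then shut down as usual
EOF
chmod +x /tmp/power-hook.sh

# On the server, as root, point the kernel at the wrapper:
# sysctl -w kernel.poweroff_cmd=/tmp/power-hook.sh
```

The wrapper keeps the sysctl value a single plain path, and any quoting or argument issues stay inside a file you can test by running it by hand.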