
Posts posted by smaster

  1. 16 hours ago, JorgeB said:

    There's still something reading from all disks; it doesn't look like parity, since it's reading them differently. Any idea what that is?

     

    Hey,

    So I've stopped the Docker network completely, and I now believe nothing major should be running.
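
    In case it helps narrow down what's still reading from the disks, a couple of generic ways to check (a sketch; iotop isn't part of stock Unraid, so this assumes it has been installed):

    # Show only processes actually doing I/O, with accumulated totals
    iotop -o -a -d 5
    # Or sample the per-disk read counters directly (field 6 = sectors read)
    awk '$3 ~ /^sd/ {print $3, $6}' /proc/diskstats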

    I also tried again to pause or cancel, but nothing happened.

    The diags are attached.

     

    Thanks and sorry for the hassle, just trying to do this the right way.

    tower-diagnostics-20240108-0346.zip

  2. 15 hours ago, JorgeB said:

    Nope, but it's making it difficult to search the log for other issues.

    Hey,

    I found the culprit. It was mongodb.

    It's fixed, and the last two error messages appear when I press "Pause" or "Cancel" on the parity check. Here they are:

    Jan  7 00:53:28 Tower nginx: 2024/01/07 00:53:28 [error] 20161#20161: *24622960 connect() to unix:/var/run/emhttpd.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 192.168.1.105, server: , request: "POST /update.htm HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket:/update.htm", host: "192.168.1.118", referrer: "http://192.168.1.118/Main"
    Jan  7 00:53:49 Tower nginx: 2024/01/07 00:53:49 [error] 20161#20161: *24622934 connect() to unix:/var/run/emhttpd.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 192.168.1.105, server: , request: "POST /update.htm HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket:/update.htm", host: "192.168.1.118", referrer: "http://192.168.1.118/Main"

     

    Diags are attached. There are also a few more errors that I got previously; here is an example:

    Jan  6 21:48:08 Tower kernel: pcieport 0000:00:1c.4: AER: Multiple Corrected error received: 0000:06:00.0
    Jan  6 21:48:08 Tower kernel: atlantic 0000:06:00.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
    Jan  6 21:48:08 Tower kernel: atlantic 0000:06:00.0: device [1d6a:d107] error status/mask=00000041/0000a000
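
    For what it's worth, these are corrected (i.e. recovered) Physical Layer errors from the Aquantia NIC. A way to inspect the device's AER status, plus a common mitigation to try (disabling ASPM is an assumption, not a confirmed fix for this card):

    # Inspect the AER capability of the device reporting the errors
    lspci -s 06:00.0 -vvv | grep -iA4 'Advanced Error'
    # As a test, ASPM can be disabled via a kernel parameter in syslinux.cfg:
    #   pcie_aspm=off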

     

    Any idea how to get the parity check going again and make sure everything is good before attempting the parity swap?

     

    Thanks a bunch.

    tower-diagnostics-20240107-0057.zip

  3. 5 hours ago, JorgeB said:

    You likely have a container constantly restarting and spamming the log; check the uptime of all containers. Because of that it's difficult to see if there was any issue with the parity check, but there's still read activity on all disks.

    Should a Docker container constantly restarting prevent me from cancelling or pausing the parity check?

    When I press Pause or Cancel here, essentially nothing happens.
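
    To check the container uptimes as suggested, something like this shows any restart loop at a glance (standard docker CLI):

    # List every container with its current status / uptime
    docker ps -a --format 'table {{.Names}}\t{{.Status}}'
    # A restart loop shows up as "Restarting (...)" or a suspiciously short uptime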

    [screenshot: the parity check status with the Pause and Cancel buttons]

     

    Thanks.

  4. 21 hours ago, JorgeB said:

    The array will not be accessible during the parity-copy part; it will be once the rebuild begins. There's nothing you can do to speed up the copy.

     

    Thanks for your answers.

    I have a follow-up: my parity check has been stuck at 40.9% for a few days, and pressing Cancel or Pause does nothing. I'm worried that parity isn't fully up to date or correct, and that some data will be lost when doing the parity swap.
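
    In case it helps diagnose, the md driver can also be queried directly to see whether the check is actually progressing (Unraid's mdcmd; the grep pattern is just my guess at the relevant fields):

    # Dump the md driver state and filter for parity-check progress
    mdcmd status | grep -E 'mdResync|mdState'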

    I attached the diags if that helps.

     

    Thanks a bunch.

    tower-diagnostics-20240105-1510.zip

  5. Hey guys,

    I currently have 30 drives in my array.

    Parities are 16TB and 14TB.

    I've had a drive failure (3TB) and want to replace it with a 16TB drive.

    I know that the replacement drive being larger makes the process a little different, and I'd just like to confirm that I have the right idea:

     

    1. Stop the array
    2. Remove the failed hard drive
    3. Insert the new hard drive
    4. In the "Main" section of unRAID, unassign the 14TB parity drive
    5. Assign the new 16TB drive to the parity slot
    6. Assign the old parity drive in place of the failed drive
    7. Proceed with the parity swap (essentially copying the parity data from the old parity drive to the new one)
    8. Start the array and rebuild the failed drive onto the old parity drive from the newly copied parity

     

    If this is all correct, I'd just like to also ask a few questions:

    • While the new parity drive is being built, is the array inaccessible?
    • Is there anything I can do to the new drive before I put it in my server that will speed up the process? A pre-clear or something along those lines (see the sketch below)? I'd like to avoid as much downtime as possible.
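
    For context on the second question: a pre-clear won't speed up the parity copy itself, but a burn-in on another machine before installing the drive can catch a dud early. A generic sketch using badblocks (destructive, wipes the disk; /dev/sdX is a placeholder):

    # Destructive read/write surface test -- run it BEFORE the drive holds any data
    # -b 4096 avoids the 32-bit block-count limit on very large drives
    badblocks -wsv -b 4096 /dev/sdX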

     

    Thanks for your time.

  6. Hey, 

    I noticed a week ago that 2 of my drives started being emulated, and I immediately restarted my server to see if it would fix itself. I realise now that this was a mistake and that I should have grabbed diagnostics first.

     

    After the restart, seeing that nothing had changed, I performed extended SMART tests on both disks, and after a long time both drives came back without errors. I also reconnected both drives with new connectors in case that could have been the problem.
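
    For reference, an extended self-test can be started and checked like this (a sketch with smartctl; /dev/sdX is a placeholder, and the grep filter is just the attributes I'd watch):

    # Start the drive's extended self-test (it runs inside the drive firmware)
    smartctl -t long /dev/sdX
    # Once it finishes, check the verdict and the key health attributes
    smartctl -a /dev/sdX | grep -E 'result|Reallocated|Pending|Uncorrect'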

     

    I'm not sure what the correct procedure is here: I can't physically check my drives since I'm not at home, but I do have access to my network via a VPN.

     

    Should it be fine for me to perform a rebuild? 

    Do I need to start the array in maintenance mode? Just trying to get things done the right way to avoid any data loss. 

     

    I'm also attaching my current diagnostics.

     

    Thanks. 

    tower-diagnostics-20230711-1406.zip

  7. 1 hour ago, ZappyZap said:

    I think you are using the wrong template......
    there are 3 of them......

    I found a container that seems to be used on unRAID and well maintained; I might go in this direction and work on it this weekend.

     

    [screenshot: the installed template, showing the repo from the original post and the config path]

     

    As you can see, at the top it's the repo linked in the original post, and the last line has the differently capitalised config location...
    Weird!
    I'll just wait for your unbound version too! That way I can get the perfect pack! 🙂

  8. 14 hours ago, ZappyZap said:

    Really weird,
    I just checked on GitHub:
     

    <Config Name="DoT DoH config" Target="/config/" Default="/mnt/user/appdata/pihole-dot-doh/config/" Mode="rw,slave" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/user/appdata/pihole-dot-doh/config/</Config>
      

     

     

    It is odd indeed; if you check on unRAID, it still seems to create the two folders...
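
    Since Linux paths are case-sensitive, "pihole-dot-doh" and "Pihole-DoT-DoH" really are two distinct folders, which would fit your "previous install" theory. Easy to confirm from the console:

    # Both spellings show up as separate directories under appdata
    ls -d /mnt/user/appdata/[Pp]ihole*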

    Did you manage to find something for unbound? If I may suggest something, maybe having both in the same container wouldn't be a bad idea, so all the DNS options are in one place.

     

    Thanks 🙂

  9. 21 hours ago, ZappyZap said:

    Must be from a previous install somehow; this template uses "pihole-dot-doh",
    and for unbound you will need to use another template.
    I will see if I can build a container or find one and create a template.

     

    This is how the template looks (I didn't change anything):

    [screenshot: the template settings, showing the two config path spellings]

     

    So from what I can see, one of them is lowercase and the other uppercase. Maybe it's best to change the template? Or am I doing something wrong?

    Regarding an unbound template or build, that would be great so we could have the full setup consolidated here.

     

    Thanks for all your help!

  10. Hey,

    First of all thanks for continuing the great work in this container.

    I've installed it and I'm able to access it.

    I have a few questions, some aren't quite related but I hope you could help me 🙂

    • Is it normal that the Docker container creates two different folders in my appdata? In this case they are "pihole-dot-doh" and "Pihole-DoT-DoH". If so, could you help me understand why?
    • Also, how would I be able to add unbound to this setup, to point Pi-hole at a self-hosted recursive DNS? (See the sketch below for what I mean.)
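
    To sketch what I mean by the second question (this follows the common Pi-hole + unbound guide, not anything specific to this container): unbound listens locally on a non-standard port and Pi-hole's upstream DNS points at it. A minimal unbound.conf:

    server:
        # Listen locally on a port that doesn't clash with Pi-hole's 53
        interface: 127.0.0.1
        port: 5335
        do-ip6: no
        # No forward-zone block: resolve recursively from the root servers
        harden-glue: yes
        harden-dnssec-stripped: yes

    Pi-hole's upstream DNS would then be set to 127.0.0.1#5335.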

    Thanks for your time!

  11. On 11/15/2022 at 10:28 AM, ChatNoir said:

    I looked at it yesterday, but there is not much to go on to form an opinion:

    Nov 10 04:31:03 Tower kernel: mce: [Hardware Error]: Machine check events logged

    and that's it.

     

    I would say it's probably fine. Maybe run a few passes of Memtest to be on the safe side.

     

    Thank you very much!

    Although you're not ruling anything out, at least it gives me some peace of mind.

    I will maybe run some diagnostics soon and also upgrade to the latest version of unRAID.

     

    Thanks again 🙂

  12. Hello all,

    I just got a "Machine Check Events" error.

    It appeared on the "Fix common problems" plugin and this is the suggested fix:

    "Your server has detected hardware errors. You should install mcelog via the NerdPack plugin, post your diagnostics and ask for assistance on the Unraid forums. The output of mcelog (if installed) has been logged"

     

    I checked this link and looked through the syslog, but couldn't tell whether it was a harmless error or not.
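
    For anyone checking the same thing, the relevant lines can be pulled out of the syslog, and mcelog can decode events once installed (a sketch; the client query assumes the mcelog daemon is running):

    # Find the machine-check entries in the syslog
    grep -iE 'mce|machine check' /var/log/syslog
    # With the mcelog daemon installed and running, ask it for decoded events
    mcelog --client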

    I attached my diagnostics to this message so that hopefully someone can take a look and see whether all is well or I should be worried.

     

    Thanks!

    tower-diagnostics-20221111-1642.zip

  13. On 7/6/2022 at 1:37 AM, tjb_altf4 said:

     

     

    I wonder if there are issues due to the shim network itself being macvlan, as noted in the help section, which has already been known to cause crashes for some (it certainly has for me since moving to 6.10).


    [screenshot: the relevant help text about the shim network]

     

    Just to confirm: once I changed "Docker custom network type" to "ipvlan", everything works fine with "Host access to custom networks" set to "Yes".

    On "macvlan" however, everything is off. I currently have the 6.10.3 version (latest at the moment).

     

    For now, I shall leave it on ipvlan; maybe I'll test macvlan again in the next version.

  14. I finally found the culprit! 

    For anyone finding this thread, the container flooding my logs was:

    Rimgo by Joshndroid's Repository.
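
    In case it helps someone else hunt down a log-flooding container, this is the general approach I'd suggest (assuming Docker's default json-file log driver):

    # Find the largest container log files inside docker.img
    du -sh /var/lib/docker/containers/*/*-json.log | sort -h | tail -n 5
    # Map the container ID in the path back to a container name
    docker ps -a --no-trunc --format '{{.ID}}  {{.Names}}'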

     

    Thanks for your assistance!

  15. 1 hour ago, Squid said:

    So long as there is a template (i.e. if you can click on the icon within the Docker tab and there's an "Edit" option), Apps → Previous Apps will show it.

     

    Thank you for your response.

    Before I delete the docker.img file, as a last-ditch effort, would it be possible for me to stop all my containers and find out one by one which is causing the problem? Or is this a more deep-rooted issue that can only be fixed by a wipe?
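
    Something like this is what I have in mind for the one-by-one approach (a sketch; the container name is a placeholder):

    # Stop every running container
    docker stop $(docker ps -q)
    # Bring them back one at a time, watching the syslog between each start
    docker start some-container-name
    tail -f /var/log/syslog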

     

    Thanks.
