Posts posted by Yivey_unraid

  1. 8 minutes ago, itimpi said:

    In that case, isn't it only the ends plugged into the drives that are SATA, the other end being the PSU-specific connector? If that is the case, then that is not what I would normally call a power splitter, as the cables that come with the PSU will be rated for the required current.

    Yes, that's correct! Maybe wrong wording on my part there. Seems it wasn't the power cables anyway.

    Any take on this?

      

    8 hours ago, Yivey_unraid said:

    Now, regarding the array. Would it be best now, since I've got no parity, to just not assign any drive to that slot and let it rebuild parity? Or doesn't that work, since one device would be missing anyway and there would be no valid parity for it?
    Should I instead do a New Config again, following the same steps as in my first post, and not assign anything to Disk 6?
    In that case I can expand the array with the other HGST 10TB after parity is rebuilt, and then transfer the data from the SSD.

     

  2. 6 hours ago, bombz said:

    Same issue here
    All my other Docker containers have no issues resolving an IP using the Docker container network.
    What's the difference with Plex?


    Any assistance is appreciated. 

    If you change from your custom Docker network to Host, do you get an IP then? I seem to remember that Plex's official container suggests running it with Host, but I'm not sure. I know other Docker images don't suggest it.

  3. 5 hours ago, TacomaTy said:

    Hi, I would like to install Immich but I'm a bit nervous about giving it full access to all my pictures and videos. Is there an easy way to give it read-only access? Call me paranoid, but this would give me peace of mind when pointing it at my 15 years of pictures and videos.

    Backup, backup, and backup. 😉 That way it's no problem if something happens.
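
    (That said, Docker itself can enforce read-only access: appending ':ro' to a bind mount, which is the "Read Only" access mode on a path in Unraid's container template, means the container can't modify the host files. A minimal sketch with placeholder paths and image name, not Immich's actual template:)

        # ':ro' makes the host path read-only inside the container
        docker run -d \
          --name='immich-example' \
          -v '/mnt/user/photos':'/photos':'ro' \
          'some/image'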

  4. Do you have the correct cable? There are different cables for connecting a motherboard's SATA ports to a SAS-8087 backplane versus connecting SAS-8087 on an HBA card to SATA drives (sold as "reverse" and "forward" breakout cables, respectively). They look the same, but each only works one way.

    Example (no affiliation with these cables, nor have I tested them): 

    SAS-8087 (host) to 4x SATA (target)

    4x SATA (host) to SAS-8087 (target)

     

  5. Just recapping what happened in the end.

     

    This last week it got so bad I couldn't boot anything at all, not even live USBs.

    After months of trying to find the problem, I couldn't narrow it down any further than the MB or the CPU. Since I had so many errors pointing to the CPU, I decided to open a warranty ticket with Intel. They accepted after I presented evidence of my problems, and replaced my CPU with a new one.

     

    Very nice experience dealing with their support, and I'm glad I bought the boxed and not the tray version of the CPU, since the tray version had a shorter warranty. At least for 11th gen; I think the warranties aren't differentiated in the newer generations.

     

    Now it runs well again. 👍

  6. Thank you for your answers!

     

    13 hours ago, itimpi said:

    If using SATA->SATA power splitters, do not split more than 2 ways, as the master SATA connector is limited in the max current it can draw without voltage sag occurring. If using Molex->SATA you can normally split 4 ways, as the pins are much more robust.

     

    They're 4-way splitters attached directly to the 6-pin outputs on the PSU. Original Seasonic cables. Not ideal, but not terrible either.

     

    12 hours ago, JorgeB said:
    Dec  7 22:04:54 Define7 kernel: sd 1:1:14:0: Device offlined - not ready after error recovery

     

    Disk dropped offline, check/replace cables, power and SATA.


    Looks like it actually was the drive that was the problem. I think… I restarted the server after moving the cabling around and still got multiple errors. See the attached SMART report. Perhaps those CRC errors and Current Pending Sector errors were a result of the initial problem, and are now causing the drive to behave erratically. I don't know.

    At first it didn't show up at all when connected through the HBA. I put it in an external USB enclosure, and that made it show up, but with no FS and no possibility to mount. I added it back to the server and tried it with a SATA cable connected directly to the MB, and that worked. Somewhat…
    The drive wouldn't mount normally, and after googling the error messages from the syslog I tried mounting it read-only in UD, and that finally worked.
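
    (For anyone finding this later: the read-only mount UD did should be roughly equivalent to something like the following, assuming the disk is XFS-formatted as Unraid array drives usually are; /dev/sdX1 is a placeholder:)

        mkdir -p /mnt/recovery
        # 'norecovery' skips replaying the XFS journal, which is often
        # what blocks mounting a damaged disk; it requires 'ro'
        mount -o ro,norecovery /dev/sdX1 /mnt/recovery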

    Now I'm copying the 4-5 TB of data off the drive to an SSD. Figured that would still be faster than restoring from backup. (I really need to set up some cataloging system of which files are on which drive and keep that list somewhere. It's been on my mind for a long time, and it would save so much time in the event of a failure.) More Current Pending Sector errors keep appearing during the file transfer as well.
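
    (A bare-bones version of such a catalog could be as simple as a scheduled per-disk file listing; the output path here is just an example:)

        # one file list per array disk
        mkdir -p /mnt/user/backups/catalog
        for d in /mnt/disk*; do
            find "$d" -type f > "/mnt/user/backups/catalog/$(basename "$d")-files.txt"
        done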

    I will not re-deploy the drive, if ever, until I've run several more tests. Maybe I should just call it DOA and return it.

    I have another of the same drives, bought together with the failing one. I'm a bit hesitant to deploy that one now too. But at least it's also pre-cleared.

     

     


    Now, regarding the array. Would it be best now, since I've got no parity, to just not assign any drive to that slot and let it rebuild parity? Or doesn't that work, since one device would be missing anyway and there would be no valid parity for it?
    Should I instead do a New Config again, following the same steps as in my first post, and not assign anything to Disk 6?
    In that case I can expand the array with the other HGST 10TB after parity is rebuilt, and then transfer the data from the SSD.

    Thoughts?


    (Screenshot attached: Skärmavbild 2023-12-09 kl. 01.14.14)
     

    HUH721010ALE601_2TGD77RD-20231209-0043.txt define7-diagnostics-20231209-0120.zip

  7. 'Evening folks!

    I've been shuffling data around in the array to be able to shrink it and remove three 10+ year old HDDs.
    Following the tried and true "Remove Drives Then Rebuild Parity" method:

     

    • Moved what I needed to keep elsewhere in the array, then cleared all the drives of data.
    • Stopped the array
    • Tools --> New Config --> Retain current configuration (All) --> Confirm yes
    • Started the array without checking "Parity is already valid".
    • It started rebuilding the parity

     

    After only a few minutes, Disk 6 started spitting errors, and the count just kept rising until I paused the parity rebuild a couple of minutes later.
     

    (Screenshot attached: Skärmavbild 2023-12-07 kl. 22.19.22)

     

    The HGST drive has only been in the array a couple of days; it was added as a replacement for a smaller, fully functioning data drive.

     

    It is, however, a refurbished drive: newly bought, but dated 2017/2018. It passed pre-clear before I added it to the array.

    I suspect the SATA cable or power splitter is the culprit, but I can't be sure. It could be the SATA power splitter, since this new HGST drive draws more power than the previous Seagate it replaced, although that seems a bit unlikely.

     


    - Can I, at this stage of the shrinking process, "safely" (I know I no longer have valid parity) power down the server and change some cables? And physically remove the now-unassigned old array disks?

    - When I have rebooted the server after this, should I do a short SMART self-test? Since I no longer have valid parity, I can't start the array without the drive and run another pre-clear on it, so SMART tests are basically all I can do.
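
    (For reference, a short self-test can also be started from the console with smartctl; /dev/sdX is a placeholder for the device:)

        # start a short self-test (typically ~2 minutes)
        smartctl -t short /dev/sdX
        # afterwards, show the attributes and the self-test log
        smartctl -a /dev/sdX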
     

    Any advice for me here? Am I missing something? What is my best course of action going forward (other than restoring from backup)?

     


    Note:

    • Drives are connected via SAS->SATA breakout cables to an Adaptec ASR-71605 in HBA mode.
    • Since unRAID has disabled the drive I can't see it in the maxView Storage Manager for the Adaptec card either. 
    • unRAID OS is 6.12.4

     

     

    All the best! /Yivey

    define7-diagnostics-20231207-2209.zip

  8. Thanks a lot! I'm using this with a Tasmota-flashed Shelly Plug S. Is there any possibility of adding a "time active" field to "Total"? Total is quite useless unless you know when it was last reset. I haven't seen this in the Tasmota UI either, so perhaps it's not possible to do.

  9. On 12/27/2021 at 10:00 AM, infernix said:

    So I'm trying to make this work on the newer X12SCA-5F and so far no luck. This board has a jumper where you can disable the ATEN video, and that's about the only way I can get the iGPU working, obviously losing console access in the process. I've reached out to Supermicro to see if there's some way of making it work like on the X11.

     

    For those with iGPU enabled, have any of you tried the Intel AMT/vPro way of getting access to remote console?

    Did you ever get this working with your X12SCA-5F? I'm thinking of buying this board, but I would like to be sure that it'll work with unRAID and that I can use all the functions on the MB.

  10. 15 hours ago, Kev600 said:

    My Unraid GUI started being unresponsive this evening… (6.12.4)

    I've been working with my nginx & Cloudflare config and it ground to a halt. It seemed to be fixed after a restart, but the system log was flooded with entries like:
    Oct 8 01:09:01 KBNAS nginx: 2023/10/08 01:09:01 [alert] 11885#11885: worker process 18538 exited on signal 6
     - before I started the container!

    I'm happy to try to look for any commonalities that might give us a reproducible-on-demand scenario 👍

    Unfortunately this isn't related to the Nginx Proxy Manager container at all. The nginx in those log entries is the one serving the Unraid webUI.

  11. I should also note that I was able to update the FW on the Adaptec card through the MSI BIOS advanced settings, where I can access some of the card's settings. It went very smoothly compared to the DOS boot option I saw on YouTube.
     

    When I have my server running, I'm also using maxView Storage Manager in a Docker container. It gives me access to see the card's temperature etc.

  12. On 4/22/2023 at 11:59 PM, kadajawi said:

    I have one that worked well for me for a couple of months. I had repasted the heatsink and had a 120mm fan in the side panel slowly blow air towards it.

     

    My system started giving me some trouble though (it seems a faulty HDMI cable was at least part of the problem), and while trying to figure out what was wrong I ran the system for a couple of minutes without the heatsink. I got the buzzer, but not knowing what was happening I failed to turn it off immediately. This happened twice. Now the system boots fine unless I have the card installed. Then I get "Controller failed to load utility. Press any key to Exit." With drives connected I get a couple of error messages (BlockIO-Command Failed and LBA out of range), but I can't get into the card's BIOS, for example, where I could tell it to work in HBA mode. The system stops booting at this point. (Also without drives connected, just without the error messages.) I can't enter the BIOS.

     

    Next up I'll try the card in another system, but I suppose it's gone. I can only say, keep it cool. And that means plenty of airflow. It gets really hot.

     

    (Any alternatives that support 16 drives or more and won't bankrupt me?)

    @kadajawi Did you end up getting this fixed?
    I started having random other problems with my unRAID server over the last couple of weeks. The problems persisted even without the ASR-71605, though, so that probably wasn't the culprit.
     

    Anyway, while trying to solve the above problems, I flashed the MB BIOS and forgot to take the Adaptec card out. After that I kept getting the "Controller failed to load utility" error whenever I tried booting with the Adaptec card installed. I re-flashed the MB BIOS, and after that the Adaptec worked again.

     

    But tonight I took out and reseated the CPU in the hunt for the problem, and after that the error came back again… I'll try re-flashing the BIOS again…

     

    I've always had a 40 mm fan on the heatsink, so there shouldn't be any heat problem. I was under the impression that these cards are very reliable, but maybe they're not? 🤷‍♂️

  13. 1 hour ago, Vr2Io said:

    Just note that the problem already appears in the pulling stage. I agree it's likely a network issue; I suggest adjusting the MTU setting at the router: try 1500 or 1492. (unRAID keeps it at 1500.)

     

    Or, as previously mentioned, try using /tmp to store the docker image, to rule out a storage issue.

     

    ** /tmp is a RAM disk **

     

    To verify a network issue, could you try another ISP? I.e. test at a friend's home or use mobile data. (An easy way is to buy a USB-C RJ45 network adapter, plug it into an Android phone, and connect that to Unraid.)

    I did run the tests from the previous post with all Docker directories on the /tmp RAM disk.

     

    I believe it's a network issue, but shouldn't adjusting the MTU also have been needed when I tested with the NUC setup?
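
    (For what it's worth, a quick way to sanity-check the path MTU from the console is to ping with the don't-fragment flag: 1472 bytes of payload plus 28 bytes of headers equals a 1500-byte frame. 8.8.8.8 is just an example target:)

        # succeeds only if 1500-byte frames pass unfragmented
        ping -c 3 -M do -s 1472 8.8.8.8
        # if that fails, try a smaller payload, e.g. 1464 for MTU 1492
        ping -c 3 -M do -s 1464 8.8.8.8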

     

    Yeah, I'll try to run it from another network if I can. I'll even try setting up a new router and running through that.
     

    I don't really understand, though, how my network could work flawlessly with the same unRAID flash in another machine (the NUC)?!

  14. 11 hours ago, Vr2Io said:

     

    Please create a memory-test boot USB to test the CPU / MB / memory. (Must use parallel CPU threads during the test; if you get two passes then it is fine, no need for 24 hrs.)

    https://www.memtest86.com/

    Ran a complete memory test with the CPU threads in parallel. This test was on the newly installed Corsair Vengeance DDR4-3200 2x16GB. No errors.

    (Screenshot attached: Skärmavbild 2023-10-07 kl. 16.07.49, MemTest86 result)

     

    11 hours ago, Vr2Io said:

    If all is fine, then start Unraid and test the docker service in /tmp to isolate any storage-related issue first. (No need to unplug any other hardware at this moment.)

    Did a fresh trial install on a flash drive to eliminate any previous configuration errors. The NVMe adapter card and the HBA card were already removed from before, so they weren't present during this test.

     

    Changed all the Docker paths to /tmp.

     

    Booted up and got the exact same problems as before: a mix of mostly TLS x509 failures and some SHA256 errors. I'd say the SHA256 errors were more frequent before, though; now TLS takes the lead…
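
    (A generic check, not something from this thread: "malformed extension OID" usually means the certificate arriving at the client has been mangled or replaced, so it can help to look at what the registry actually serves. An unexpected issuer would point at TLS interception by a router, AV, or other middlebox:)

        # print issuer/subject/validity of the cert served by the registry
        openssl s_client -connect registry-1.docker.io:443 \
            -servername registry-1.docker.io </dev/null 2>/dev/null \
          | openssl x509 -noout -issuer -subject -dates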

     

    docker run
      -d
      --name='S-PDF'
      --net='bridge'
      -e TZ="America/Los_Angeles"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="Tower-test"
      -e HOST_CONTAINERNAME="S-PDF"
      -e 'APP_HOME_NAME'='Stirling PDF'
      -e 'APP_HOME_DESCRIPTION'='Your locally hosted one-stop-shop for all your PDF needs.'
      -e 'APP_NAVBAR_NAME'='Stirling PDF'
      -e 'ALLOW_GOOGLE_VISIBILITY'='false'
      -e 'APP_ROOT_PATH'='/'
      -e 'APP_LOCALE'='en_GB'
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.webui='http://[IP]:[PORT:8080]'
      -l net.unraid.docker.icon='https://raw.githubusercontent.com/Frooodle/Stirling-PDF/main/src/main/resources/static/favicon.png'
      -p '8080:8080/tcp'
      -v '/tmp/Stirling-PDF/OCR':'/usr/share/tesseract-ocr/4.00/tessdata':'rw' 'frooodle/s-pdf' 
    Unable to find image 'frooodle/s-pdf:latest' locally
    docker: Error response from daemon: Head "https://registry-1.docker.io/v2/frooodle/s-pdf/manifests/latest": Get "https://auth.docker.io/token?scope=repository%3Afrooodle%2Fs-pdf%3Apull&service=registry.docker.io": tls: failed to parse certificate from server: x509: malformed extension OID field.
    See 'docker run --help'.
    
    The command failed.

     

     

    IMAGE ID [182761574]: Pulling from frooodle/s-pdf. 
    IMAGE ID [34df401c391c]: Pulling fs layer. Downloading 100% of 53 MB. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Downloading 100% of 53 MB. Retrying in 20 seconds. Retrying in 19 seconds. Retrying in 18 seconds. Retrying in 17 seconds. Retrying in 16 seconds. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 25 seconds. Retrying in 24 seconds. Retrying in 23 seconds. Retrying in 22 seconds. Retrying in 21 seconds. Retrying in 20 seconds. Retrying in 19 seconds. Retrying in 18 seconds. Retrying in 17 seconds. Retrying in 16 seconds. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Downloading 0 B. 
    IMAGE ID [8cdc2b53ba57]: Pulling fs layer. Downloading 100% of 15 MB. Verifying Checksum. Download complete. 
    IMAGE ID [c6c65d966457]: Pulling fs layer. Downloading 100% of 82 MB. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 20 seconds. Retrying in 19 seconds. Retrying in 18 seconds. Retrying in 17 seconds. Retrying in 16 seconds. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Downloading 100% of 82 MB. Retrying in 25 seconds. Retrying in 24 seconds. Retrying in 23 seconds. Retrying in 22 seconds. Retrying in 21 seconds. Retrying in 20 seconds. Retrying in 19 seconds. Retrying in 18 seconds. Retrying in 17 seconds. Retrying in 16 seconds. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 second. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. 
    IMAGE ID [13099527500c]: Pulling fs layer. Downloading 100% of 526 KB. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Downloading 100% of 264 MB. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Downloading 100% of 265 MB. Retrying in 20 seconds. Retrying in 19 seconds. Retrying in 18 seconds. Retrying in 17 seconds. Retrying in 16 seconds. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 25 seconds. Retrying in 24 seconds. Retrying in 23 seconds. Retrying in 22 seconds. Retrying in 21 seconds. Retrying in 20 seconds. Retrying in 19 seconds. Retrying in 18 seconds. Retrying in 17 seconds. Retrying in 16 seconds. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Downloading 100% of 265 MB. 
    IMAGE ID [78c116fb88da]: Pulling fs layer. Downloading 100% of 84 MB. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Downloading 100% of 84 MB. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Downloading 100% of 83 MB. Retrying in 20 seconds. Retrying in 19 seconds. Retrying in 18 seconds. Retrying in 17 seconds. Retrying in 16 seconds. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 25 seconds. Retrying in 24 seconds. Retrying in 23 seconds. Retrying in 22 seconds. Retrying in 21 seconds. Retrying in 20 seconds. Retrying in 19 seconds. Retrying in 18 seconds. Retrying in 17 seconds. Retrying in 16 seconds. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 second. Retrying in 2 seconds. Retrying in 1 second. Downloading 100% of 84 MB. 
    IMAGE ID [c8b0cfd16c77]: Pulling fs layer. Downloading 100% of 252 B. Verifying Checksum. Download complete. 
    IMAGE ID [8dba544373b8]: Pulling fs layer. Downloading 0 B. 
    IMAGE ID [0b7bf9f56788]: Pulling fs layer. Downloading 100% of 526 KB. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Downloading 100% of 9 MB. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Downloading 100% of 9 MB. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Downloading 100% of 6 MB. Retrying in 20 seconds. Retrying in 19 seconds. Retrying in 18 seconds. Retrying in 17 seconds. Retrying in 16 seconds. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 25 seconds. Retrying in 24 seconds. Retrying in 23 seconds. Retrying in 22 seconds. Retrying in 21 seconds. Retrying in 20 seconds. Retrying in 19 seconds. Retrying in 18 seconds. Retrying in 17 seconds. Retrying in 16 seconds. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Downloading 100% of 15 MB. 
    IMAGE ID [4f4fb700ef54]: Pulling fs layer. Verifying Checksum. Download complete. 
    IMAGE ID [0e1176372572]: Pulling fs layer. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Downloading 100% of 58 MB. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 20 seconds. Retrying in 19 seconds. Retrying in 18 seconds. Retrying in 17 seconds. Retrying in 16 seconds. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Retrying in 25 seconds. Retrying in 24 seconds. Retrying in 23 seconds. Retrying in 22 seconds. Retrying in 21 seconds. Retrying in 20 seconds. Retrying in 19 seconds. Retrying in 18 seconds. Retrying in 17 seconds. Retrying in 16 seconds. Retrying in 15 seconds. Retrying in 14 seconds. Retrying in 13 seconds. Retrying in 12 seconds. Retrying in 11 seconds. Retrying in 10 seconds. Retrying in 9 seconds. Retrying in 8 seconds. Retrying in 7 seconds. Retrying in 6 seconds. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. 
    IMAGE ID [a057ccd9dc71]: Pulling fs layer. Retrying in 5 seconds. Retrying in 4 seconds. Retrying in 3 seconds. Retrying in 2 seconds. Retrying in 1 second. Verifying Checksum. Download complete. 
    
    TOTAL DATA PULLED: 520 MB
    
    Error: Get "https://registry-1.docker.io/v2/frooodle/s-pdf/blobs/sha256:0e1176372572ad78ed341fc455897b93e51d90b04f5b41d8a1bb57caad3d3810": tls: failed to parse certificate from server: x509: invalid RDNSequence: invalid attribute value: invalid PrintableString

     

     

    docker run
      -d
      --name='adminer'
      --net='bridge'
      -e TZ="America/Los_Angeles"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="Tower-test"
      -e HOST_CONTAINERNAME="adminer"
      -e 'ADMINER_DESIGN'='flat'
      -e 'ADMINER_PLUGINS'=''
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.webui='http://[IP]:[PORT:8080]'
      -l net.unraid.docker.icon='https://raw.githubusercontent.com/selfhosters/unRAID-CA-templates/master/templates/img/adminer.png'
      -p '8080:8080/tcp' 'adminer'
    Unable to find image 'adminer:latest' locally
    latest: Pulling from library/adminer
    ddf874abf16c: Pulling fs layer
    9d75c017e041: Pulling fs layer
    86e22a4c35d4: Pulling fs layer
    eb4bd38d1031: Pulling fs layer
    e3bc33b7b683: Pulling fs layer
    3d61d710f98a: Pulling fs layer
    f8441003ca94: Pulling fs layer
    e3bc33b7b683: Waiting
    3d61d710f98a: Waiting
    f8441003ca94: Waiting
    eb4bd38d1031: Waiting
    86e22a4c35d4: Retrying in 5 seconds
    86e22a4c35d4: Retrying in 4 seconds
    86e22a4c35d4: Retrying in 3 seconds
    86e22a4c35d4: Retrying in 2 seconds
    86e22a4c35d4: Retrying in 1 second
    9d75c017e041: Retrying in 5 seconds
    ddf874abf16c: Retrying in 5 seconds
    9d75c017e041: Retrying in 4 seconds
    ddf874abf16c: Retrying in 4 seconds
    9d75c017e041: Retrying in 3 seconds
    ddf874abf16c: Retrying in 3 seconds
    9d75c017e041: Retrying in 2 seconds
    ddf874abf16c: Retrying in 2 seconds
    9d75c017e041: Retrying in 1 second
    ddf874abf16c: Retrying in 1 second
    86e22a4c35d4: Retrying in 10 seconds
    86e22a4c35d4: Retrying in 9 seconds
    86e22a4c35d4: Retrying in 8 seconds
    86e22a4c35d4: Retrying in 7 seconds
    86e22a4c35d4: Retrying in 6 seconds
    86e22a4c35d4: Retrying in 5 seconds
    86e22a4c35d4: Retrying in 4 seconds
    86e22a4c35d4: Retrying in 3 seconds
    86e22a4c35d4: Retrying in 2 seconds
    86e22a4c35d4: Retrying in 1 second
    9d75c017e041: Retrying in 10 seconds
    ddf874abf16c: Retrying in 10 seconds
    9d75c017e041: Retrying in 9 seconds
    ddf874abf16c: Retrying in 9 seconds
    9d75c017e041: Retrying in 8 seconds
    ddf874abf16c: Retrying in 8 seconds
    9d75c017e041: Retrying in 7 seconds
    ddf874abf16c: Retrying in 7 seconds
    9d75c017e041: Retrying in 6 seconds
    ddf874abf16c: Retrying in 6 seconds
    9d75c017e041: Retrying in 5 seconds
    ddf874abf16c: Retrying in 5 seconds
    9d75c017e041: Retrying in 4 seconds
    ddf874abf16c: Retrying in 4 seconds
    9d75c017e041: Retrying in 3 seconds
    ddf874abf16c: Retrying in 3 seconds
    9d75c017e041: Retrying in 2 seconds
    ddf874abf16c: Retrying in 2 seconds
    9d75c017e041: Retrying in 1 second
    ddf874abf16c: Retrying in 1 second
    86e22a4c35d4: Retrying in 15 seconds
    86e22a4c35d4: Retrying in 14 seconds
    86e22a4c35d4: Retrying in 13 seconds
    86e22a4c35d4: Retrying in 12 seconds
    86e22a4c35d4: Retrying in 11 seconds
    86e22a4c35d4: Retrying in 10 seconds
    86e22a4c35d4: Retrying in 9 seconds
    86e22a4c35d4: Retrying in 8 seconds
    86e22a4c35d4: Retrying in 7 seconds
    86e22a4c35d4: Retrying in 6 seconds
    86e22a4c35d4: Retrying in 5 seconds
    86e22a4c35d4: Retrying in 4 seconds
    86e22a4c35d4: Retrying in 3 seconds
    86e22a4c35d4: Retrying in 2 seconds
    86e22a4c35d4: Retrying in 1 second
    docker: error pulling image configuration: download failed after attempts=6: dial tcp: lookup production.cloudflare.docker.com: i/o timeout.
    See 'docker run --help'.
    The command failed.

     

     

    I'm not sure, but in some way I still believe it is network-related. I tried using both the internal 1 GbE NIC and the PCIe 1 GbE card. Diagnostics attached.

    What do you say?

    tower-test-diagnostics-20231007-1720.zip

  15. 8 hours ago, Vr2Io said:

    CPUs seldom have problems. But both are fundamental components, so we must test those first.

     

    Please create a memory-test boot USB to test the CPU / MB / memory. (Must use parallel CPU threads during the test; if you get two passes then it is fine, no need for 24 hrs.)

    https://www.memtest86.com/

     

    If all is fine, then start Unraid and test the docker service in /tmp to isolate any storage-related issue first. (No need to unplug any other hardware at this moment.)

     


    Thank you for your answer! I much appreciate any help I can get!

     

    I actually went and purchased two new memory sticks because I was so fed up. I missed that part in the specs list. But I'll do it anyway, just to try!
     

    Is there a need to create a separate bootable flash for MemTest86 when it's already on the unRAID flash? I see it's selectable during startup. Never used it that way before though… 😅

     

    I'll get back to you when I've tested everything!

     

    EDIT: Updated the specs list in the first post with the RAM.