SquareGravy

Members • Posts: 7

Posts posted by SquareGravy

  1. 4 minutes ago, JorgeB said:

    Don't rebuild on top before checking that the emulated disk is mounting and its contents look correct; if you haven't checked yet, unassign the disk and start the array.

     

    Most likely a bad SATA cable.

     

    I don't know where to see if it's being emulated or not. I unassigned it and started the array, and now it's showing under "unassigned" as Dev 1.

     

    If I go into Krusader I can see the folders for all my disks under "Media", and I can browse into Disk 3 and everything is there. Is that what it means by "emulated"?
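    For anyone checking the same thing from the terminal, a quick sanity check (a sketch, assuming Unraid's default /mnt/diskN mount layout; the disk number is an example):

```shell
# If parity is valid, a disabled disk is "emulated": its contents are
# reconstructed from parity and still served at the usual mount point.
ls /mnt/disk3        # files should appear even though the physical disk is disabled
df -h /mnt/disk3     # confirm the emulated disk is mounted and the size looks right
```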

  2. Update again: Rebooted and the drive is back... When adding it back as Disk 3 it says "All existing data on this device will be OVERWRITTEN when array is Started". Is that right? It'll just rebuild itself from the parity drives, right? Should I start in Maintenance mode to be safe? Thanks.

     

    Edit: Disk 3 is now showing the same UDMA error as Disk 2, so I'm thinking it's the board in the case that's the issue.

  3. Update: Checked in BIOS. I forgot the BIOS on this motherboard is weird (MSI): when it boots it only registers 4 drives. If I go to SATA configuration within the BIOS it only shows 4, but if I go to the Boot Menu I can see all 8 drives plus my NVMe cache drive, so I have no idea.

  4. 1 minute ago, JorgeB said:

    See if the disk is detected in the BIOS and/or try swapping both cables with another disk; if it's still missing after that, it's likely dead.

     

    Appreciate the response. I'll check the BIOS now. I'm wondering if the drive board in the case is going bad. I should have mentioned this, but Disk 2 has been showing a UDMA CRC Error Count warning for a few days; the drive that failed had no warning at all. Wondering if I should replace the whole thing. I saw the UDMA error is usually a cable issue, but it's a single board for all 8 disks: there are just two power cables plugging into it and a SATA cable for each drive.
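    As a data point, that counter can be read per drive with smartmontools (a sketch; the /dev/sd[a-h] device names are examples and need adjusting to your system):

```shell
# Print the UDMA CRC error counter for each array drive.
# A rising raw value usually points at the cable/backplane path
# (connector, SATA cable, drive board), not the platters themselves.
for d in /dev/sd[a-h]; do
  echo "== $d =="
  smartctl -A "$d" | grep -i 'UDMA_CRC_Error_Count'
done
```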

  5. Hello everyone, 

     

    Pretty sure my disk is either dead, or (sort of hoping) it's a cable issue. I noticed this morning that my Disk 3 was showing as disabled. I tried following the steps here: https://docs.unraid.net/unraid-os/manual/storage-management/#rebuilding-a-drive-onto-itself but upon restarting the array, it says "not installed" for Disk 3. Before starting the array it showed "no device". I've attached my diagnostics. If the disk isn't showing at all, would that be a power issue? If it is, I'm not sure how I could fix that, as it's part of the case that all 8 drives plug into for power... This is also a data disk, so if it is a dead drive I'm hoping it's an easy fix of just popping in a new drive and letting the array rebuild itself...

     

    Any help is appreciated, thanks.

    tower-diagnostics-20240306-0833.zip
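    To tell "disabled in the array" apart from "not detected at all", it can help to check whether the OS even sees the device (a sketch; needs no array changes):

```shell
# List the block devices the kernel currently sees. A drive missing here
# entirely (rather than just disabled in the array) points at power or
# cabling rather than the filesystem.
ls -l /dev/disk/by-id/ | grep -v part   # stable IDs, one per detected drive
lsblk -o NAME,SIZE,MODEL                # quick overview of detected disks
```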

  6. Anyone having issues with the latest plexpass update? I updated this morning and now it basically nukes my server after running for a few minutes: I lose all network connectivity and have to manually reboot. This only started happening after installing the update this morning.

     

    And how can I downgrade to the previous version?

     

    Edit:

     

    If anyone else encounters this issue, go to edit the container and change the repository to "binhex/arch-plexpass:1.40.0.7775-1-01" to go back to the previous version.
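    Outside of Unraid's template editor, the equivalent pin from the command line would be along these lines (a sketch; the exact run flags and mappings depend on your setup):

```shell
# Pull the previous release tag instead of tracking :latest,
# then recreate the container from that tag.
docker pull binhex/arch-plexpass:1.40.0.7775-1-01
```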

  7. I'm having the same issue as a lot of other people: it's been working with no issues for a year, and for the last two weeks I haven't been able to access the WebUI. I haven't changed anything and I do not have the VPN enabled. Below are my logs. I have no idea what the issue could be. Any help is appreciated.

     

    Quote

    Created by...
    [binhex ASCII-art logo]
    https://hub.docker.com/u/binhex/

    2021-08-25 11:02:25.586103 [info] Host is running unRAID
    2021-08-25 11:02:25.603819 [info] System information Linux 46626e98853e 5.10.28-Unraid #1 SMP Wed Apr 7 08:23:18 PDT 2021 x86_64 GNU/Linux
    2021-08-25 11:02:25.624657 [info] OS_ARCH defined as 'x86-64'
    2021-08-25 11:02:25.645568 [info] PUID defined as '99'
    2021-08-25 11:02:25.685419 [info] PGID defined as '100'
    2021-08-25 11:02:25.744337 [info] UMASK defined as '000'
    2021-08-25 11:02:25.764333 [info] Permissions already set for volume mappings
    2021-08-25 11:02:25.790782 [info] Deleting files in /tmp (non recursive)...
    2021-08-25 11:02:25.816336 [info] VPN_ENABLED defined as 'no'
    2021-08-25 11:02:25.831958 [warn] !!IMPORTANT!! VPN IS SET TO DISABLED', YOU WILL NOT BE SECURE
    2021-08-25 11:02:25.854406 [info] DELUGE_DAEMON_LOG_LEVEL defined as 'info'
    2021-08-25 11:02:25.875257 [info] DELUGE_WEB_LOG_LEVEL defined as 'info'
    2021-08-25 11:02:25.898932 [info] Starting Supervisor...
    2021-08-25 11:02:26,088 INFO Included extra file "/etc/supervisor/conf.d/delugevpn.conf" during parsing
    2021-08-25 11:02:26,088 INFO Set uid to user 0 succeeded
    2021-08-25 11:02:26,090 INFO supervisord started with pid 7
    2021-08-25 11:02:27,092 INFO spawned: 'shutdown-script' with pid 84
    2021-08-25 11:02:27,094 INFO spawned: 'start-script' with pid 85
    2021-08-25 11:02:27,095 INFO spawned: 'watchdog-script' with pid 86
    2021-08-25 11:02:27,095 INFO reaped unknown pid 8 (exit status 0)
    2021-08-25 11:02:27,103 DEBG 'start-script' stdout output:
    [info] VPN not enabled, skipping configuration of VPN

    2021-08-25 11:02:27,103 INFO success: shutdown-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
    2021-08-25 11:02:27,103 INFO success: start-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
    2021-08-25 11:02:27,104 INFO success: watchdog-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
    2021-08-25 11:02:27,104 DEBG fd 11 closed, stopped monitoring <POutputDispatcher at 22773084371456 for <Subprocess at 22773084370784 with name start-script in state RUNNING> (stdout)>
    2021-08-25 11:02:27,104 DEBG fd 15 closed, stopped monitoring <POutputDispatcher at 22773084496320 for <Subprocess at 22773084370784 with name start-script in state RUNNING> (stderr)>
    2021-08-25 11:02:27,104 INFO exited: start-script (exit status 0; expected)
    2021-08-25 11:02:27,104 DEBG received SIGCHLD indicating a child quit
    2021-08-25 11:02:27,126 DEBG 'watchdog-script' stdout output:
    [info] Deluge not running

    2021-08-25 11:02:27,129 DEBG 'watchdog-script' stdout output:
    [info] Deluge Web UI not running

    2021-08-25 11:02:27,130 DEBG 'watchdog-script' stdout output:
    [info] Attempting to start Deluge...
    [info] Removing deluge pid file (if it exists)...

    2021-08-25 11:02:27,251 DEBG 'watchdog-script' stdout output:
    [info] Deluge key 'listen_interface' currently has an undefined value
    [info] Deluge key 'listen_interface' will have a new value ''
    [info] Writing changes to Deluge config file '/config/core.conf'...

    2021-08-25 11:02:27,358 DEBG 'watchdog-script' stdout output:
    [info] Deluge key 'outgoing_interface' currently has an undefined value
    [info] Deluge key 'outgoing_interface' will have a new value ''
    [info] Writing changes to Deluge config file '/config/core.conf'...

    2021-08-25 11:02:27,501 DEBG 'watchdog-script' stdout output:
    [info] Deluge key 'default_daemon' currently has a value of '22fa08cc64cf41f0adcbd5cd4febc427'
    [info] Deluge key 'default_daemon' will have a new value '22fa08cc64cf41f0adcbd5cd4febc427'
    [info] Writing changes to Deluge config file '/config/web.conf'...

    2021-08-25 11:02:27,772 DEBG 'watchdog-script' stdout output:
    [info] Deluge process started
    [info] Waiting for Deluge process to start listening on port 58846...

     
