Posts posted by CS01-HS

  1. Instructions for an Email-to-Growl relay to get Growl notifications from any app that supports email alerts, like unRAID.

    I have this set up on a Pi, but any Debian (and maybe other) distro should work.

     

    This is a dirty first draft, so if you're one of the two people who still use Growl and you run into difficulty, let me know and I'll clean it up.

     

    NOTE: This assumes you have Growl set up and working on all client machines. I'm not sure where to download 2.x from anymore; if I find the installer I'll make it available.

     

    CAUTION: This assumes the target machine DOES NOT have SMTP ports open to the internet. This is an insecure local-only setup.

     

    This Guide Assumes:

    Local Network: 10.0.1.0/24

    Docker Network: 172.17.0.0/24

    Install host: pi.lan

    Install host IP: 10.0.1.10

    Install user/group: pi/pi

    Growl Client IPs: 10.0.1.2, 10.0.1.3

    Growl Password: my_password

    unRAID hostname: nas.lan

     

    Customize as needed.

     

    ## References

    https://wiki.debian.org/Exim

    https://geekthis.net/post/exim-process-mail-script/

    https://askubuntu.com/questions/927056/disable-ipv6-in-exim4

     

     

    ## Install pip

    sudo apt update
    sudo apt install python-pip

     

    ## Make sure it's up to date

    sudo pip install --upgrade pip

     

    ## Install gntp (python library for growl notifications)

    sudo pip install gntp

     

    ## Create a notifier script

    sudo emacs /usr/bin/growl-notify.py

    ## Paste:

    # use standard Python logging
    import sys
    import logging
    logging.basicConfig(level=logging.INFO)
    import gntp.notifier
    
    # Arguments, in order:
    #   growl-notify.py <client-ip> <password> <app-name> <note-type> <title> <description> <icon>
    # e.g. growl-notify.py 10.0.1.2 my_password unRAID alert "Title" "Description" ""
    
    growl = gntp.notifier.GrowlNotifier(
        applicationName = sys.argv[3],
        notifications = [sys.argv[4]],
        defaultNotifications = [sys.argv[4]],
        hostname = sys.argv[1],       # Growl client IP (or hostname)
        password = sys.argv[2]        # Growl password
    )
    growl.register()
    
    # Send one message
    growl.notify(
        noteType = sys.argv[4],
        title = sys.argv[5],
        description = sys.argv[6],
        icon = sys.argv[7],
        sticky = True,
        priority = 1,
    )
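
    ## Optional: a quick sanity check of the notifier before wiring up Exim.
    ## A minimal sketch, assuming Growl is listening on 10.0.1.2 with the password above:
    
    /usr/bin/python /usr/bin/growl-notify.py 10.0.1.2 my_password "Relay Test" "alert" "Test title" "Test description" ""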

     

     

    ## Exim

    ## Exim should be installed by default on Debian, so just reconfigure it

    sudo dpkg-reconfigure exim4-config

    ## Select options:

    1. internet site; mail is sent and received directly using SMTP
    2. pi.lan
    3. <leave blank>
    4. pi.lan
    5. <leave blank>
    6. 10.0.1.0/24; 172.17.0.0/24
    7. No
    8. mbox format in /var/mail/
    9. Yes (split)
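
    ## For reference, these answers are written to /etc/exim4/update-exim4.conf.conf.
    ## With the choices above, the relevant lines should look roughly like this
    ## (a sketch - verify against your generated file rather than editing it blind):
    
    dc_eximconfig_configtype='internet'
    dc_other_hostnames='pi.lan'
    dc_local_interfaces=''
    dc_relay_nets='10.0.1.0/24; 172.17.0.0/24'
    dc_localdelivery='mail_spool'
    dc_use_split_config='true'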

     

    ## Create exim router

    ## Forwards any mail addressed to growl@pi.lan to the transport

    sudo emacs /etc/exim4/conf.d/router/101_growl_forward

    ## Paste:

    growl_forward:
            driver = accept
            local_part_prefix = growl
            transport = transport_growl_forward

     

    ## Create transport

    ## This pipes the message to our shell script

    sudo emacs /etc/exim4/conf.d/transport/101_custom-growl_forward

    ## Paste:

    transport_growl_forward:
            driver = pipe
            command = /usr/local/bin/growl_forward.sh
            user = pi
            group = pi

     

    ## Create shell script

    sudo emacs /usr/local/bin/growl_forward.sh

    ## Paste:

    #!/bin/bash
    
    ## User Config
    
    # Touch this file (as root) then chown it to whatever user this runs as
    LOG_FILE="/var/log/growl_forward.log"
    
    # List of growl clients by IP
    GROWL_CLIENTS=(
        "10.0.1.2"
        "10.0.1.3"
    )
    
    # Hardcoded Growl password (do we really need security?)
    GROWL_PASSWORD="my_password"
    
    ## Main
    MESSAGE=$(cat)
    
    echo ' ' >> "$LOG_FILE"
    date >> "$LOG_FILE"
    
    echo "MESSAGE: ${MESSAGE}" >> "$LOG_FILE"
    
    # Parse the first From: header
    FROM=$(grep '^From:' <<< "${MESSAGE}" | head -1)
    FROM=${FROM#'From: '}
    
    # Parse the first Subject: header
    SUBJECT=$(grep '^Subject:' <<< "${MESSAGE}" | head -1)
    SUBJECT=${SUBJECT#'Subject: '}
    SUBJECT=${SUBJECT/': '/' - '}
    
    # Headings
    
    HEADING=" "
    
    # For Unraid use Importance
    # (match this against the From address configured in unRAID's SMTP settings)
    if [[ $FROM == "[email protected]" ]]; then
        HEADING=$(grep 'Importance' <<< "${MESSAGE}")
        HEADING=${HEADING#'Importance: '}
        HEADING=${HEADING^^}
        HEADING="[$HEADING]"
    fi
    
    # Parse description
    DESCRIPTION=$(grep 'Description' <<< "${MESSAGE}")
    DESCRIPTION=${DESCRIPTION#'Description: '}
    
    for growl_client in "${GROWL_CLIENTS[@]}"
    do
        echo "*** EXECUTING: /usr/bin/python /usr/bin/growl-notify.py ${growl_client} '*********' \"$SUBJECT\" \" \" \"$HEADING\" \"$DESCRIPTION\" \"\" " >> "$LOG_FILE"
        /usr/bin/python /usr/bin/growl-notify.py "${growl_client}" "${GROWL_PASSWORD}" "$SUBJECT" " " "$HEADING" "$DESCRIPTION" ""
    done
    
    exit 0

    ## Make it executable

    sudo chmod a+x /usr/local/bin/growl_forward.sh
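
    ## Optional: exercise the script directly, bypassing Exim, with a faked minimal message.
    ## (Create the log file first - see the Log file step below - or the log appends will fail.)
    
    printf 'From: test@pi.lan\nSubject: Relay test\nDescription: hello\n' | /usr/local/bin/growl_forward.sh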

     

    ## Disable ipv6 to eliminate SIGNIFICANT delays caused by connection attempts to a nonexistent network

    sudo cp /etc/exim4/conf.d/main/02_exim4-config_options  /etc/exim4/conf.d/main/02_exim4-config_options.orig
    sudo emacs  /etc/exim4/conf.d/main/02_exim4-config_options

    ## Paste this after the header:

    disable_ipv6=true
    
    ## Allow unqualified recipients (missing domain) defaulting to local domain
    recipient_unqualified_hosts = *
    qualify_recipient = pi.lan

     

    ## Reconfigure and restart exim

    sudo update-exim4.conf
    sudo service exim4 restart

     

    ## Verify ipv6 is disabled

    sudo apt install net-tools
    netstat -tulpn | grep :25

    # You should only see ipv4 addresses

     

    ## Log file

    sudo touch /var/log/growl_forward.log
    sudo chown pi:pi /var/log/growl_forward.log
    sudo chmod a+r /var/log/growl_forward.log

     

    ## Tail to monitor:

    sudo tail -f /var/log/exim4/mainlog /var/log/syslog /var/log/growl_forward.log

     

    ## Now any email to growl@pi.lan should be broadcast as a Growl notification
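
    ## Optional end-to-end test without unRAID, assuming Exim's sendmail wrapper is at the usual Debian path:
    
    printf 'From: test@pi.lan\nSubject: Relay test\nDescription: Hello from the relay\n' | /usr/sbin/sendmail growl@pi.lan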

     

    ## Add Growl alerts to unRAID

    # Settings -> Notifications -> SMTP

    [screenshot: Settings -> Notifications -> SMTP]

     

    ## Click APPLY, then click TEST

     

    ## and watch the tail above for any errors

  2. 58 minutes ago, JorgeB said:

    Yes, you can have a script doing that after every reboot.

    Great, added to config/go. Easy fix, thanks.

     

    58 minutes ago, JorgeB said:

    This. 2.5" drives are usually much more aggressive with power-saving features; WD Blacks are usually an exception since they're optimized for performance.

    The LED difference is still curious. I always assumed the "access" signal was triggered when the MB (or card) detected HD access, but this suggests it relies on the drive to say "I have been accessed."

     

     

  3. My array has 3 disks, all Seagate ST5000LM000

    I have telegraf polling SMART data with a poll interval of 10 seconds.

     

    I noticed a chirping every 10 or so seconds coincident with drives' access LEDs flashing.

    I figured the access was due to telegraf and the chirping was the heads parking/un-parking.

     

    The SMART reports seem to confirm this, e.g.

    9 Power on hours 0x0032 094 094 000 Old age Always Never 5996 (173 29 0)
    193 Load cycle count 0x0032 001 001 000 Old age Always Never 454511

    That works out to roughly 75 cycles/hour, so it's probably been going on ever since I installed telegraf.

     

    I disabled head parking on all 3 drives with smartctl:

    smartctl -s apm,off /dev/sdX

    which stopped the chirping but I still see the drives' LEDs flash every 10 seconds.

     

    What's confusing is I have another 3 drives in a pool (on 6.9.0-RC1), all WDC WD3200BEKX

     

    These drives are also polled by telegraf with no excessive head parking, which can be explained by the drives' internal settings:

    9 Power on hours 0x0032 033 033 000 Old age Always Never 48920 (5y, 6m, 27d, 8h)
    193 Load cycle count 0x0032 200 200 000 Old age Always Never 647

    but also no LED flashing, which I can't explain.

     

    One seagate and one WD are on the motherboard's controller and the rest on the HBA so it's not likely a controller issue.

     

    Questions

    1. Would it be wise to disable APM on the Seagates at startup using smartctl? (See the sketch below.)
    2. Is the difference in LED behavior likely drive-specific or due to the way unRAID treats array disks vs pool disks?
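
    For question 1, a sketch of what I have in mind, appended to /boot/config/go so it runs on every boot. The device names are hypothetical - /dev/disk/by-id/... paths would be safer since sdX letters can change between boots:

    # disable APM (and with it aggressive head parking) on the Seagates at boot
    smartctl -s apm,off /dev/sdb
    smartctl -s apm,off /dev/sdc
    smartctl -s apm,off /dev/sdd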

     

  4. 38 minutes ago, macmanluke said:

    In the instructions I noticed it says to use mover to transfer cache to the array, but onto a btrfs drive - is there any reason for this?
    My array is XFS.

    My guess would be to preserve the NOCOW attributes.

     

    39 minutes ago, macmanluke said:

    If it's required, is there any way of converting a drive without messing up the array/parity etc?
    Currently I have one drive that's empty (and excluded from shares globally) as I moved its contents off when a drive started failing recently (since replaced/rebuilt the array)

     

    If your current pool is RAID1 you can use this easier method:

     

  5. 1 hour ago, theruck said:

    I went with unRAID over Synology as I read that TM was working, but now I read that in version 6.8.3 it's not working for many users, and there seems to be no real support, just guessing on the forums, which is very frustrating

    6.8.3 should work. The Spaceinvader tutorial (did you follow it?) uses that or an earlier version.

     

    Max throughput with SMB is fine - I can max out my wireless uploading to a share. It's small files or whatever Time Machine does under the hood that's painful. Try this benchmark for random read/write:

    https://www.katsurashareware.com/amorphousdiskmark/

     

    I won't blame Limetech for the slowness until I've seen a networked Time Machine implementation that isn't slow, and I haven't. Apple really dropped the ball on a clever and unique feature.

  6. 6 minutes ago, olfolfolf said:

    Hey there,

     

    I'm using the latest lsio container with Unraid 6.8.3.

    To get HW transcoding running with my J4105, I have to use this workaround: 

     

    
    cd /app/emby/dri
    mv iHD_drv_video.so iHD_drv_video.so.disabled

    Unfortunately, I have to do this again after updating the container. Is there a quick and easy way to keep this workaround?

     

     

    I use the User Scripts plugin to run this script every hour - although maybe there's a more clever solution using Post Arguments in the docker template?

    #!/bin/bash
    
    # EmbyServer
    #
    # Verify it's running
    running=$(docker container ls | grep -c EmbyServer)
    if [ "${running}" != "0" ]; then
      docker exec EmbyServer /bin/sh -c "mv /lib/dri/iHD_drv_video.so /lib/dri/iHD_drv_video.so.disabled" 2>/dev/null
      if [[ $? -eq 0 ]]; then 
        echo "EmbyServer: Detected iHD driver. Disabling and restarting EmbyServer..."
        docker restart EmbyServer
        echo "Done."
      fi
    fi
    
    exit 0
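
    If you set User Scripts to a custom schedule instead of the hourly preset, a cron line like this (every 15 minutes) narrows the window after a container update:

    */15 * * * *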

     

  7.  

    5 hours ago, DMenace83 said:

    Has anyone successfully passed through an iPhone to the macinabox vm?

    Yes, but only by passing through a whole controller, and performance is terrible - although that may be specific to my convoluted setup.

     

    [screenshot: VM configuration]

     

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev1'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </hostdev>

    I saw a recent post about usb/ip support in the next version which might be an option if you don't have a spare controller.

  8. 8 hours ago, [email protected] said:

    I am running BTRFS - I don't know where you would turn copy-on-write to No. Where is this option?

    Assuming that's an array share and not an unassigned device, go to Shares then click the share and you'll see the field Enable Copy-on-write.

     

    Disabling it when creating a new share works (which means you'd have to start over) but I don't think changing it for an existing share with existing directories will, although there may be a way to manually convert it.
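
    For what it's worth, the share setting maps to btrfs's NOCOW attribute (chattr +C), and +C only affects newly created files - so a manual conversion means copying into a freshly flagged directory. A rough sketch, with hypothetical paths:

    # +C must be set before files are created; copies into the directory inherit it
    mkdir /mnt/cache/share/new_dir
    chattr +C /mnt/cache/share/new_dir
    cp -a /mnt/cache/share/old_dir/. /mnt/cache/share/new_dir/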

     

    In my experience, though, these are all marginal tweaks; I'm not sure there's a way to get fast Time Machine backups. I switched from a Time Capsule to unRAID hoping to speed things up. If it has, it's not by much.

     

    The app TimeMachineEditor lets you reduce the backup frequency so at least you get fewer slow backups.

  9. 26 minutes ago, aidenpryde said:

    Wow, that's cool, that worked!  Shut down my Unraid server and my Proxmox server.

    Great. If you have the UPS set to shut itself off after issuing shutdowns, make sure that delay is long enough for the servers to shut down. I mention it because, out of caution, I suggested stopping the array, which consumes most of unRAID's shutdown time.

  10. This doesn't confirm your "shut down after" setting works, but you can confirm the shutdown procedure works by STOPPING THE ARRAY (to avoid data corruption if something goes wrong) and running

    upsmon -c fsd

    From help:

    usage: upsmon [OPTIONS]
    
      -c <cmd> send command to running process
    commands:
    - fsd: shutdown all master UPSes (use with caution)
    - reload: reread configuration
    - stop: stop monitoring and exit
      -D raise debugging level
      -h display this help
      -K checks POWERDOWNFLAG, sets exit code to 0 if set
      -p always run privileged (disable privileged parent)
      -u <user> run child as user <user> (ignored when using -p)
      -4 IPv4 only
      -6 IPv6 only

     

  11. A few observations from my testing.

     

    SMB random read/write performance in macOS stinks. I'm using Big Sur and it's still worse than the old AFP method. Still, you're getting relatively terrible performance even by that standard. My initial ~150GB backup over wireless took about half a day.

     

    One of the recommended SMB tweaks is disabling signing. Have you done that? It may require a reboot to take effect:

    $ more /etc/nsmb.conf
    [default]
    signing_required=no
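
    If the file doesn't exist yet, something like this creates it in one shot (applies to all SMB connections from that Mac):

    $ printf '[default]\nsigning_required=no\n' | sudo tee /etc/nsmb.conf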

     

    I think? the Spaceinvader tutorial sets an unassigned device as the Time Machine share. That's good because it bypasses the cache/disk abstraction layer, and every bit helps.

     

    If you're backing up to an array share that uses BTRFS cache (I run a dedicated cache-only pool in beta35), setting Copy-on-write to No seems to help.

     

    The new APFS formatting for Time Machine images in Big Sur (previous versions used HFS+) also seems to help. And I may be imagining it but my backups seem smaller. Note: I don't think it's possible to convert existing HFS+ images to APFS so you have to start fresh.

  12. Here's a strange problem:

     

    I have an always-attached (mechanical) USB hard drive mounted through UD.

    I have SSD TRIM set to run at 7 AM daily through the built-in scheduler.

    It seems? unRAID recognizes this USB disk as an SSD and tries to trim it.

    I don't know if that's because UD reports it as an SSD or because SSD detection happens outside UD.
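
    One way to check what the kernel thinks the disk is (1 = rotational, 0 = SSD), assuming the UD disk is sdb:

    cat /sys/block/sdb/queue/rotational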

     

    (I don't see the error every time trim runs but syslog server's been buggy lately so maybe it's just not logged.)

    Sep 30 07:00:03 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Oct  1 07:00:02 NAS kernel: blk_update_request: critical target error, dev sda, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Oct  2 07:00:02 NAS kernel: blk_update_request: critical target error, dev sda, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Oct  7 07:00:09 NAS kernel: blk_update_request: critical target error, dev sda, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Oct  9 07:00:02 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Oct 11 07:00:02 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Oct 13 07:00:08 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Oct 22 07:00:08 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Oct 24 07:00:08 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Oct 26 07:00:01 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Oct 27 07:00:09 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Oct 30 07:00:08 NAS kernel: blk_update_request: critical target error, dev sda, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Nov  6 07:00:08 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Nov 10 07:00:08 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Nov 11 07:00:02 NAS kernel: blk_update_request: critical target error, dev sda, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Nov 13 07:00:09 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Nov 14 07:00:09 NAS kernel: blk_update_request: critical target error, dev sda, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Nov 15 07:00:09 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Nov 16 07:00:08 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Nov 17 07:00:02 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Nov 20 07:00:09 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Nov 22 07:00:08 NAS kernel: blk_update_request: critical target error, dev sda, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Nov 24 07:00:08 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76096 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Nov 26 07:00:03 NAS kernel: blk_update_request: critical target error, dev sda, sector 76048 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
    Dec  2 07:00:08 NAS kernel: blk_update_request: critical target error, dev sdb, sector 76048 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0

     

    nas-diagnostics-20201203-0844.zip

  13. 1 hour ago, dlandon said:

    It's missing 'sec=ntlm'.

    I updated, tested, and confirmed it now passes that parameter, but only if Force all SMB remote shares to SMB v1 = Yes. It doesn't pass it if it falls back to SMB1 after failing higher versions. Not a problem if that's by design.

    Nov 23 17:59:31 NAS unassigned.devices: Mount SMB share '//10.0.1.1/Data' using SMB1 protocol.
    Nov 23 17:59:31 NAS unassigned.devices: Mount SMB command: /sbin/mount -t cifs -o rw,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,vers=1.0,credentials='/tmp/unassigned.devices/credentials_Data' '//10.0.1.1/Data' '/mnt/disks/time-capsule'

     

  14. I may be an edge case, but in beta35 this (very handy) docker fills my syslog with the following error until the system's overloaded.

    Nov 23 10:00:10 NAS kernel: bad: scheduling from the idle thread!
    Nov 23 10:00:10 NAS kernel: CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.8.18-Unraid #1
    Nov 23 10:00:10 NAS kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./J5005-ITX, BIOS P1.40 08/06/2018
    Nov 23 10:00:10 NAS kernel: Call Trace:
    Nov 23 10:00:10 NAS kernel: dump_stack+0x6b/0x83
    Nov 23 10:00:10 NAS kernel: dequeue_task_idle+0x21/0x2a
    Nov 23 10:00:10 NAS kernel: __schedule+0x135/0x49e
    Nov 23 10:00:10 NAS kernel: ? __mod_timer+0x215/0x23c
    Nov 23 10:00:10 NAS kernel: schedule+0x77/0xa0
    Nov 23 10:00:10 NAS kernel: schedule_timeout+0xa7/0xe0
    Nov 23 10:00:10 NAS kernel: ? __next_timer_interrupt+0xaf/0xaf
    Nov 23 10:00:10 NAS kernel: msleep+0x13/0x19
    Nov 23 10:00:10 NAS kernel: pci_raw_set_power_state+0x185/0x257
    Nov 23 10:00:10 NAS kernel: pci_restore_standard_config+0x35/0x3b
    Nov 23 10:00:10 NAS kernel: pci_pm_runtime_resume+0x29/0x7b
    Nov 23 10:00:10 NAS kernel: ? pci_pm_default_resume+0x1e/0x1e
    Nov 23 10:00:10 NAS kernel: ? pci_pm_default_resume+0x1e/0x1e
    Nov 23 10:00:10 NAS kernel: __rpm_callback+0x6b/0xcf
    Nov 23 10:00:10 NAS kernel: ? pci_pm_default_resume+0x1e/0x1e
    Nov 23 10:00:10 NAS kernel: rpm_callback+0x50/0x66
    Nov 23 10:00:10 NAS kernel: ? pci_pm_default_resume+0x1e/0x1e
    Nov 23 10:00:10 NAS kernel: rpm_resume+0x2e2/0x3d6
    Nov 23 10:00:10 NAS kernel: ? __schedule+0x47d/0x49e
    Nov 23 10:00:10 NAS kernel: __pm_runtime_resume+0x55/0x71
    Nov 23 10:00:10 NAS kernel: __intel_runtime_pm_get+0x15/0x4a [i915]
    Nov 23 10:00:10 NAS kernel: i915_pmu_enable+0x53/0x147 [i915]
    Nov 23 10:00:10 NAS kernel: i915_pmu_event_add+0xf/0x20 [i915]
    Nov 23 10:00:10 NAS kernel: event_sched_in+0xd3/0x18f
    Nov 23 10:00:10 NAS kernel: merge_sched_in+0xb4/0x1de
    Nov 23 10:00:10 NAS kernel: visit_groups_merge.constprop.0+0x174/0x3ad
    Nov 23 10:00:10 NAS kernel: ctx_sched_in+0x11e/0x13e
    Nov 23 10:00:10 NAS kernel: perf_event_sched_in+0x49/0x6c
    Nov 23 10:00:10 NAS kernel: ctx_resched+0x6d/0x7c
    Nov 23 10:00:10 NAS kernel: __perf_install_in_context+0x117/0x14b
    Nov 23 10:00:10 NAS kernel: remote_function+0x19/0x43
    Nov 23 10:00:10 NAS kernel: flush_smp_call_function_queue+0x103/0x1a4
    Nov 23 10:00:10 NAS kernel: flush_smp_call_function_from_idle+0x2f/0x3a
    Nov 23 10:00:10 NAS kernel: do_idle+0x20f/0x236
    Nov 23 10:00:10 NAS kernel: cpu_startup_entry+0x18/0x1a
    Nov 23 10:00:10 NAS kernel: start_kernel+0x4af/0x4d1
    Nov 23 10:00:10 NAS kernel: secondary_startup_64+0xa4/0xb0

     

  15. After enabling some disk-related power-saving features, I occasionally see the error below in the logs.

     

    Is it anything to worry about?

    ata3 is a mechanical disk connected to my motherboard's ASM1062 controller.

    I don't see any indication of a problem except the log message.
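
    For context, you can check the active SATA link power management policy per port (one suspect for link resets like this) with the following - host numbering varies by system:

    cat /sys/class/scsi_host/host*/link_power_management_policy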

    Nov 13 04:05:04 NAS kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    Nov 13 04:05:04 NAS kernel: ata3.00: supports DRM functions and may not be fully accessible
    Nov 13 04:05:04 NAS kernel: ata3.00: supports DRM functions and may not be fully accessible
    Nov 13 04:05:04 NAS kernel: ata3.00: configured for UDMA/133
    Nov 13 04:06:34 NAS kernel: ata3.00: exception Emask 0x10 SAct 0x0 SErr 0x4050002 action 0xe frozen
    Nov 13 04:06:34 NAS kernel: ata3.00: irq_stat 0x08000040, interface fatal error, connection status changed
    Nov 13 04:06:34 NAS kernel: ata3: SError: { RecovComm PHYRdyChg CommWake DevExch }
    Nov 13 04:06:34 NAS kernel: ata3.00: failed command: FLUSH CACHE EXT
    Nov 13 04:06:34 NAS kernel: ata3.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 1
    Nov 13 04:06:34 NAS kernel:         res 40/00:a0:90:1a:0e/00:00:4d:00:00/40 Emask 0x10 (ATA bus error)
    Nov 13 04:06:34 NAS kernel: ata3.00: status: { DRDY }
    Nov 13 04:06:34 NAS kernel: ata3: hard resetting link
    Nov 13 04:06:35 NAS kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    Nov 13 04:06:35 NAS kernel: ata3.00: supports DRM functions and may not be fully accessible
    Nov 13 04:06:35 NAS kernel: ata3.00: supports DRM functions and may not be fully accessible
    Nov 13 04:06:35 NAS kernel: ata3.00: configured for UDMA/133
    Nov 13 04:06:35 NAS kernel: ata3.00: retrying FLUSH 0xea Emask 0x10
    Nov 13 04:06:35 NAS kernel: ata3: EH complete

     

     

     
