Posts posted by jenga201

  1. 2 hours ago, FlamongOle said:

    Did you update the plugin(s) to the newest version? Otherwise it's a false negative and should work. I think this plugin works from 6.9.0 and above, but haven't tested. I don't know why you're getting this message.

    This was my bad.

     

    I had the warning set to 'unmonitored' and thought it would go away on rescan.  Putting it back into monitoring and rescanning gives no more warnings.

  2. I'm getting a Fix Common Problems warning: disklocation-master.plg Not Compatible with Unraid version 6.11.5

     

    I know this topic has been discussed, but I'm not sure what the resolution is.

     

    Unraid Version: 6.12.3

    I've reinstalled via Apps.

     

    What can I do to resolve this warning?

  3. On 6/19/2023 at 10:45 AM, DigitalDivide said:

    I updated Jackett this morning (version update eea7..ee02) and now I am getting an error.

     

    An error occurred while testing this indexer
    Exception (privatehd): Invalid status code 503 (ServiceUnavailable) received from indexer: Invalid status code 503 (ServiceUnavailable) received from indexer
    Click here to open an issue on GitHub for this indexer.

    https://github.com/Jackett/Jackett/issues/13573

    According to the forums, the API will be down for maintenance until July 1

  4. 10 hours ago, Timberwolf said:

    It's a NordVPN thing, possibly. They recently changed something in 'unattended logins' to require an App account. You need to go to your Nord dashboard, under Account, choose 'Set manually', and copy the manual installation credentials to your Docker settings. See also:

     

     

     

    Thanks for the info.

    I'm using PIA and it doesn't seem they've updated to use any token system.

     

    I recreated the docker container and found there have been a bunch of parameter updates.

    Not sure why this happened, but keeping my configuration and remaking the Unraid docker container with the same info seemed to do the trick.

  5. I've upgraded to Unraid 6.12.1 and am having trouble starting the sabnzbd container again.

     

    These are the logs: sabnzbd.txt

    It seems the app attempts to start, then restarts, and then the UI isn't accessible.

     

    I had a large queue of files during the upgrade & restart.  The container should have shut down cleanly.

    Could the program be choking on hashing a large queue on start?

     

    Are there any ways to clear the queue from command line or do some other diagnostic?
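
    For example, would something like this be safe? This is just a sketch of what I had in mind; I'm assuming the container is named sabnzbd, that its /config is mapped to /mnt/user/appdata/sabnzbd, and that the queue state lives in the admin folder there, none of which I've confirmed.

    # stop the container, move the saved queue state aside, then start it again
    docker stop sabnzbd
    mv /mnt/user/appdata/sabnzbd/admin /mnt/user/appdata/sabnzbd/admin.bak
    docker start sabnzbd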

     

    Thanks for any replies! :D

  6. 3 minutes ago, dlandon said:

    Why are you using the IP address to reference the server?  Use the name and see if that works.

     

    It looks like all your shares are set up public.  Is that what you intended?

    I wasn't aware accessing the shares using the IP was a problem.

     

    None of my shares are public.

    [screenshot: share security settings]

     

    I can resolve the server name, but cannot access it through Windows File Explorer.  It's the same error using the name or the IP.

    [screenshot: the error in Windows File Explorer]

     

    As a side note, I've had this share configuration for probably 10 years without any issues.

  7. +1 to update iotop / iftop.  I haven't been able to run these for a couple of years due to:

    Traceback (most recent call last):
      File "/usr/sbin/iotop", line 17, in <module>
        main()
      File "/usr/lib64/python2.7/site-packages/iotop/ui.py", line 620, in main
        main_loop()
      File "/usr/lib64/python2.7/site-packages/iotop/ui.py", line 610, in <lambda>
        main_loop = lambda: run_iotop(options)
      File "/usr/lib64/python2.7/site-packages/iotop/ui.py", line 508, in run_iotop
        return curses.wrapper(run_iotop_window, options)
      File "/usr/lib64/python2.7/curses/wrapper.py", line 22, in wrapper
        stdscr = curses.initscr()
      File "/usr/lib64/python2.7/curses/__init__.py", line 33, in initscr
        fd=_sys.__stdout__.fileno())
    _curses.error: setupterm: could not find terminal
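
    As a possible workaround, forcing a terminal type or using batch mode (which skips curses) may get it running, though that's no substitute for an updated package:

    # tell curses which terminal to use, or run one non-interactive iteration instead
    TERM=xterm iotop
    iotop -b -n 1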

     

    Thank you!

  8. Restarting the container or making any changes attempts to recreate the MySQL database.  This clears out any saved configuration.

     

    => Installing MySQL ...
    220123 04:52:59 mysqld_safe Logging to syslog.
    220123 04:52:59 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
    ERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'rootuser'@'localhost'
    
    => Database created!

     

    The MySQL database needs to be persisted to appdata to prevent this.
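
    Something like an extra path mapping should do it, I think. This is only a sketch; the container and image names and the host path are placeholders, and the internal /var/lib/mysql path comes from the log above.

    # persist the database directory to appdata so it survives container rebuilds
    docker run -d --name <container> \
      -v /mnt/user/appdata/<container>/mysql:/var/lib/mysql \
      <image>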

  9. Ok, it seems to work using the sg_start and sdparm commands only.

    The UI just doesn't reflect the state of the drive properly.
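
    For reference, these are the kinds of commands I mean: spin the drive down to standby and back up by hand. /dev/sde is just the drive I tested with, and the start/stop variants beyond the --pc=3 form are taken from the man pages rather than verified here.

    # spin the SAS drive down (power condition 3 = standby) and back up again
    sg_start --readonly --pc=3 /dev/sde
    sg_start --readonly --start /dev/sde

    # sdparm can issue the equivalent stop/start commands
    sdparm --command=stop /dev/sde
    sdparm --command=start /dev/sde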

     

    I think the read errors are due to spinning down the drive while it was being accessed.

    They aren't going away, so I'll probably just try replacing/formatting that drive again.

  10. I didn't mean to imply the syslog errors in my previous post were read errors.

     


    This is after a parity check on a previously good drive.

    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826600
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826608
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826616
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826624
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826632
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826640
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826648
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826656
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826664
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826672
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826680
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826688
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826696
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826704
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826712
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826720
    Oct 12 09:42:17 Beast kernel: md: disk7 read error, sector=5848826728
    Oct 12 09:43:55 Beast kernel: md: sync done. time=45545sec

     

    5 hours ago, doron said:

    Thanks. How did you determine that the drive is not spun down?

    The Unraid icon remained green and data was still immediately accessible.

    The particular drive I tested with was not showing any traffic throughout my testing.

     

    I did try using this command, but there was no output:

    sg_start --readonly --pc=3 /dev/sde
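
    No output seems to be normal for sg_start when it succeeds, so to check the actual power state (rather than relying on the UI) something like the following may be more useful; this is just a guess on my part about which checks apply here:

    # reports standby and exits instead of waking the drive
    smartctl -n standby -A /dev/sde

    # ask the drive for its current sense / power condition
    sdparm --command=sense /dev/sde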

  11. Thanks for making this a plugin.  Hope I can help with some diagnostics.

     

    HUS724030ALS640  Fail with *Temp and drive not spun down.  Still operational, seemingly no effect.  Getting read errors after a reboot.

     


     

    Oct 11 20:38:24 Beast kernel: mdcmd (47): spindown 7
    Oct 11 20:38:24 Beast kernel: md: do_drive_cmd: disk7: ATA_OP e0 ioctl error: -5
    Oct 11 20:38:24 Beast emhttpd: error: mdcmd, 2723: Input/output error (5): write
    Oct 11 20:38:24 Beast SAS Assist v0.6[27532]: spinning down slot 7, device /dev/sde (/dev/sg4)
    Slot:   06:00.0
    Class:  Serial Attached SCSI controller [0107]
    Vendor: Broadcom / LSI [1000]
    Device: SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] [0064]
    SVendor:        Broadcom / LSI [1000]
    SDevice:        SAS 9201-16i [30c0]
    Rev:    02
    NUMANode:       0

     

  12. I used the guide on the wiki to replace a cache drive:

    https://wiki.unraid.net/Replace_A_Cache_Drive

     

    Old cache drive: 1.5TB Intel

    New cache drive: 1TB Samsung

     

    Steps 5 to 8 say to let the mover finish, unassign the cache drive, stop the array, and assign the new cache drive.  Simple enough.

     

    But, after starting the array in step 10, the 'new' cache drive was mounted with the 'old' partition info.

     

    To clarify: the web UI showed the 1TB Samsung drive with a 1.5TB partition.

    Running df showed the 1.5TB drive & partition mounted.

    The old cache drive was still working properly, even though the new drive was not mounted.
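
    For anyone checking the same thing, something along these lines shows which physical device is actually backing the mount (df is what I ran; the lsblk call is just an extra way to see it, and /mnt/cache is the usual cache mount point):

    # list devices/partitions with sizes and where they're mounted
    lsblk -o NAME,SIZE,LABEL,MOUNTPOINT
    df -h /mnt/cache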

     

    Without changing the drive config, I shut down the system and physically removed the 1.5TB Intel.

    After starting up the system, the 1TB Samsung showed the correct option to format the new drive.

     

    Is Unraid mounting partitions labeled 'cache' regardless of whether they are on the correct device?

    Or, is there a step missing in the wiki?

  13. On 11/12/2019 at 5:01 PM, dmacias said:
    On 11/12/2019 at 4:52 PM, IamSpartacus said:
     

    Ok I see. I'll have to update the plugin when I get a chance

    Hey, thanks for your work on this plugin!

     

    Is it possible to also add SAS support to the HDD Temperature sensor?

     

    Here's a sample output:


    root@:~# smartctl -A -n standby /dev/sdf
    smartctl 7.0 2018-12-30 r4883 [x86_64-linux-4.19.98-Unraid] (local build)
    Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

     

    === START OF READ SMART DATA SECTION ===
    Current Drive Temperature:     45 C
    Drive Trip Temperature:        85 C

     

    Manufactured in week 52 of year 2013
    Specified cycle count over device lifetime:  50000
    Accumulated start-stop cycles:  33
    Specified load-unload count over device lifetime:  600000
    Accumulated load-unload cycles:  1022
    Elements in grown defect list: 0
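
    If it helps, the value could probably be pulled out of that output with something simple like this (just a sketch of what I mean, not how the plugin currently reads temps):

    # grab just the temperature value from the SAS SMART output
    smartctl -A -n standby /dev/sdf | awk '/Current Drive Temperature/ {print $4}'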

     

  14. 1 hour ago, sota said:

    Definitely like IPMI in that it reads the Sea of Sensors on my HP server.

    Definitely don't like how it hijacks the Dashboard.

    Hoping that gets fixed soon.  At least make it so it can collapse? :D

    You can adjust that in the IPMI Tool settings page.

    Settings > Display Settings > Dashboard Sensors

  15. In previous versions of Unraid, the sensors command would show temperatures for my LSI 9211-4i SAS card.

     

    82:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2004 PCI-Express Fusion-MPT SAS-2 [Spitfire] (rev 03)

     

    Is there any way to make the sensors command output the temperature of this card, or is there an alternative method to get it?

  16. 8 hours ago, SlrG said:

    @jenga201

    In theory this is of course possible. But it might be necessary to add a new dependency on the openldap package, which is not default on unRAID. Sadly I don't have the time at the moment to check out possible negative side effects. The plugin is heavily integrated into the unRAID user management and intended to use only that.

     

    For a more ambitious setup it is better and far easier to set up a VM with a more flexible Linux distro than unRAID's Slackware. In Debian and Ubuntu, for example, there are add-on packages for proftpd with LDAP that can be installed without compiling. Pointing to the shares on your unRAID for data storage should give you what you want without possible adverse effects.

    Ok, Thanks for your response.

  17. Is there a way to view the Unraid folder by default in the Cloud Commander container?

    On page load, it should show the mounted files, not the Docker container's file system.
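
    I was hoping for something along these lines, if the template exposes extra parameters or environment variables. The --root option and the CLOUDCMD_ROOT variable are my guesses from the cloudcmd docs, and /mnt/user assumes the shares are mapped into the container at that path:

    # point Cloud Commander's root at the mapped Unraid shares
    cloudcmd --root /mnt/user

    # or as an environment variable on the container
    CLOUDCMD_ROOT=/mnt/user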

     

    Also, why does Cloud Commander change the file extension for downloaded files?

  18. Your syslog shows:

    Nov 25 17:30:22 Tower kernel: BTRFS warning (device loop2): block group 8083472384 has wrong amount of free space
    Nov 25 17:30:22 Tower kernel: BTRFS warning (device loop2): failed to load free space cache for block group 8083472384, rebuilding it now

    ...

    Nov 29 22:26:06 Tower kernel: BTRFS warning (device nvme0n1p1): csum failed root 5 ino 2696072 off 186834944 csum 0x80fe9b58 expected csum 0x2d132b24 mirror 1
    Nov 29 22:26:06 Tower kernel: BTRFS warning (device nvme0n1p1): csum failed root 5 ino 2696072 off 186834944 csum 0x80fe9b58 expected csum 0x2d132b24 mirror 2

    .. repeating

     

     

     

    I recently had my cache pool, which has 2x PCIe NVMe drives, go read-only.

    The cause was a stalled balance operation, not the drive itself.  After a reboot, the balance completed quickly.

     

    BTRFS is buggy and is still getting updates, so I'm guessing this is a general problem when using PCIe devices.

     

    Check the health of your nvme0n1 disk and think about doing cache backups.
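
    Something along these lines is what I'd run to check it (a sketch; the device name comes from your log, and /mnt/cache is the usual cache mount point):

    # SMART health of the NVMe device
    smartctl -a /dev/nvme0n1

    # per-device BTRFS error counters, then a foreground scrub of the cache pool
    btrfs device stats /mnt/cache
    btrfs scrub start -B /mnt/cache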

     

    I use the "CA Backup / Restore Appdata" to backup my cache only appdata, usb, and system directories to a parity protected share.
