Posts posted by goinsnoopin

  1. @trurl

    Parity check completed, and I got an email when it finished indicating that there were 0 errors. I have attached current diagnostics. I have also attached a screenshot of the parity history that shows the zero errors and the sync errors that were corrected. There was also a second email, which reads as follows (what is the error code -4 listed after the sync errors?):

     

    Event: Unraid Status
    Subject: Notice [TOWER] - array health report [PASS]
    Description: Array has 9 disks (including parity & pools)
    Importance: normal

    Parity - WDC_WD140EDGZ-11B2DA2_3GKH2J1F (sdk) - active 32 C [OK]
    Parity 2 - WDC_WD140EDFZ-11A0VA0_9LG37YDA (sdm) - active 32 C [OK]
    Disk 1 - WDC_WD120EMFZ-11A6JA0_QGKYB4RT (sdc) - active 32 C [OK]
    Disk 2 - WDC_WD20EFRX-68EUZN0_WD-WMC4M1062491 (sdf) - standby [OK]
    Disk 3 - WDC_WD40EFRX-68N32N0_WD-WCC7K4PLUR7A (sdi) - active 28 C [OK]
    Disk 4 - WDC_WD30EFRX-68EUZN0_WD-WCC4NEUA5L20 (sdd) - standby [OK]
    Disk 5 - WDC_WD30EFRX-68EUZN0_WD-WMC4N0M6V0HC (sdj) - standby [OK]
    Disk 6 - WDC_WD40EFRX-68N32N0_WD-WCC7K3PZZ7Y7 (sdh) - standby [OK]
    Cache - Samsung_SSD_860_EVO_500GB_S598NJ0NA53226M (sde) - active 37 C [OK]

    Last check incomplete on Tue 26 Mar 2024 06:30:01 AM EDT (yesterday), finding 25752 errors.
    Error code: -4
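    In case it is relevant to that -4, the raw history behind the parity_history.png screenshot is a plain log on the flash which also records an exit code for each run (path as I understand it on stock Unraid):

    # raw parity-check history the GUI reads; one line per check, including the exit code
    cat /boot/config/parity-checks.log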

    parity_history.png

    tower-diagnostics-20240327-1952.zip

  2. I realize that, and I have the settings so it does a shutdown with 5 minutes remaining on battery. I think the issue was that the server was brought back up after utility power had been on for an hour, and then the UPS settings did their shutdown again with 5 minutes remaining on the UPS. This cycle repeated itself a couple of times. If I had been home, I would have just left the server off.

     

    Any suggestions...should I cancel the parity check? It will start again at midnight.
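    If cancelling is the right call, my understanding is the running check can be stopped from the console without waiting for the scheduled window (a sketch using the stock mdcmd helper):

    # stop the currently running parity check
    /usr/local/sbin/mdcmd nocheck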

  3. Yes, I just double-checked the history...monthly parity checks for the last year have all been 0. I saw that in the logs and was concerned as well. It was an ice storm and the power came on and off several times in a 5 hour window, so it's possible the UPS battery ran down on the first outage and got minimal charge before the next one. Unfortunately I was not home, so I am going by what my kids told me.

     

    Dan

  4. During the storm we lost power. I have a UPS, but for some reason it did not shut Unraid down cleanly like it has in the past. On reboot this triggered a parity check. My parity check runs over several days due to the hours I restrict this activity to. It is currently about 90% complete and there are 25,752 sync errors. On my monthly scheduled parity checks I set corrections to No...I am not sure what the setting is for a check triggered by an unclean shutdown.

     

    Logs are attached.

     

    Any suggestions on how to proceed? Should I cancel the balance of the parity check?
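    Also, is there a way to confirm whether the running check is actually writing corrections? This is my best guess at checking from the console (a sketch; my reading is that mdResyncCorr=1 means corrections are being written):

    # show the md driver's resync state, including whether it is correcting
    /usr/local/sbin/mdcmd status | grep mdResync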

     

     

     

    tower-diagnostics-20240326-1000.zip

  5.  JorgeB

     

    Attached is my syslog...this was started on 10/25/23 after a crash I experienced earlier that day. The snippet above is from line 126 to line 746. Then after line 746 was the reboot. I took a picture of my monitor when the system crashed...see below. Obviously, with the crash being random, I am unable to run and capture a diagnostic that covers the crash event...so I ran one right now for your reference and attached it.

     

    Thanks,

    Dan

    unraidscreenshot.jpg

    tower-diagnostics-20231028-0928.zip syslog.txt

  6. I upgraded to 6.12.4 a couple of weeks ago and my server has gone unresponsive several times a week since. I just enabled the syslog server in an attempt to capture the errors that precede a crash. Here is what I got on the last crash:

     

    Oct 26 19:01:06 Tower nginx: 2023/10/26 19:01:06 [alert] 9931#9931: worker process 7606 exited on signal 6
    Oct 26 19:02:14 Tower monitor: Stop running nchan processes

    Please note there were 600 or so of these nginx errors, basically one every second, before this one...I am just omitting them to keep this concise.
     

    Does anyone have any suggestions on how to proceed?  Right now I am considering downgrading to the last 6.11.X release….as I never had issues with it.
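    For what it's worth, this is roughly how I tallied the repeating nginx alerts in the remote syslog file (the path below is just where my syslog share happens to be; adjust it to wherever your syslog server writes):

    # count the repeated nginx worker-crash alerts leading up to the hang
    grep -c 'exited on signal 6' /mnt/user/syslog/syslog-192.168.1.10.log
    # and show the last few nginx entries before the reboot
    grep 'nginx' /mnt/user/syslog/syslog-192.168.1.10.log | tail -n 20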

  7. So I registered Unraid back in 2009 and I am still using the 2GB Lexar firefly usb thumbdrive that was recommended way back then.  

     

    I am on 6.11.1 and just tried updating to 6.11.5, but I can't upgrade because there is not enough free space on the USB flash drive. It looks like this is because old versions are kept on the flash drive.

     

    Back in June, I purchased a Samsung Bar Plus 64 GB thumbdrive to have on hand should I ever need to replace the original Lexar.

     

    So I am looking for opinions...should I try to figure out what I can delete off my old flash drive, or migrate to the spare I have on hand?
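    In the meantime, this is how I was planning to see what is actually taking up the space (a sketch; my understanding is that /boot/previous holds the prior release for rollback, so it can be deleted if you never plan to roll back):

    # free space and largest items on the flash drive
    df -h /boot
    du -sh /boot/* | sort -h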

     

     

  8. The issue is not with this palette...it is all palettes, whether installing new ones or updating. I have gotten some help from the Node-RED GitHub. Here is the issue as I understand it: the link() syscall does not work within the Node-RED container, and starting with Node-RED 3.0.1-1 the npm cache location was moved inside the /data path. Here is the link to the GitHub issue I raised with Node-RED...it has some info that may be helpful:

     

    github issue
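    A quick way to confirm that from the container console (a sketch only; my assumption is that hardlinks fail on the FUSE-mounted /data but work on a native filesystem like /tmp):

    # try to create a hardlink on /data vs. /tmp, then clean up
    cd /data && touch linktest && ln linktest linktest2; rm -f linktest linktest2   # expect the ln to fail (ENOSYS)
    cd /tmp && touch linktest && ln linktest linktest2; rm -f linktest linktest2    # expect the ln to succeed

    If that is the cause, the change I plan to try is mapping the container's /data to the cache path directly (e.g. /mnt/cache/appdata/nodered instead of /mnt/user/appdata/nodered), since the user share mount is what appears to lack link() support.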

     

     

  9. I have been using this container successfully for over a year. I am now getting an error when I attempt to update a palette. Please note that all three palettes give the same error, just with specifics for each palette. Here is the excerpt from the logs for one example:

     

    51 verbose type system
    52 verbose stack FetchError: Invalid response body while trying to fetch https://registry.npmjs.org/node-red-contrib-power-monitor: ENOSYS: function not implemented, link '/data/.npm/_cacache/tmp/536b9d89' -> '/data/.npm/_cacache/content-v2/sha512/ca/78/ecf9ea9e429677649e945a71808de04bdb7c3b007549b9a2b8c1e2f24153a034816fdb11649d9265afe902d0c1d845c02ac702ae46967c08ebb58bc2ca53'
    52 verbose stack     at /usr/local/lib/node_modules/npm/node_modules/minipass-fetch/lib/body.js:168:15
    52 verbose stack     at async RegistryFetcher.packument (/usr/local/lib/node_modules/npm/node_modules/pacote/lib/registry.js:99:25)
    52 verbose stack     at async RegistryFetcher.manifest (/usr/local/lib/node_modules/npm/node_modules/pacote/lib/registry.js:124:23)
    52 verbose stack     at async Arborist.[nodeFromEdge] (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:1108:19)
    52 verbose stack     at async Arborist.[buildDepStep] (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:976:11)
    52 verbose stack     at async Arborist.buildIdealTree (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:218:7)
    52 verbose stack     at async Promise.all (index 1)
    52 verbose stack     at async Arborist.reify (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/reify.js:153:5)
    52 verbose stack     at async Install.exec (/usr/local/lib/node_modules/npm/lib/commands/install.js:156:5)
    52 verbose stack     at async module.exports (/usr/local/lib/node_modules/npm/lib/cli.js:78:5)
    53 verbose cwd /data
    54 verbose Linux 5.15.46-Unraid
    55 verbose node v16.16.0
    56 verbose npm  v8.11.0
    57 error code ENOSYS
    58 error syscall link
    59 error path /data/.npm/_cacache/tmp/536b9d89
    60 error dest /data/.npm/_cacache/content-v2/sha512/ca/78/ecf9ea9e429677649e945a71808de04bdb7c3b007549b9a2b8c1e2f24153a034816fdb11649d9265afe902d0c1d845c02ac702ae46967c08ebb58bc2ca53
    61 error errno ENOSYS
    62 error Invalid response body while trying to fetch https://registry.npmjs.org/node-red-contrib-power-monitor: ENOSYS: function not implemented, link '/data/.npm/_cacache/tmp/536b9d89' -> '/data/.npm/_cacache/content-v2/sha512/ca/78/ecf9ea9e429677649e945a71808de04bdb7c3b007549b9a2b8c1e2f24153a034816fdb11649d9265afe902d0c1d845c02ac702ae46967c08ebb58bc2ca53'
    63 verbose exit 1
    64 timing npm Completed in 3896ms
    65 verbose unfinished npm timer reify 1661012682451
    66 verbose unfinished npm timer reify:loadTrees 1661012682455
    67 verbose code 1
    68 error A complete log of this run can be found in:
    68 error     /data/.npm/_logs/2022-08-20T16_24_42_307Z-debug-0.log

     

    I cleared my browser cache and have tried from a second browser (chrome is primary...tried from firefox also).

     

    Any help would be greatly appreciated!

     

    Dan

  10. I have a Windows 10 VM, and the other day its performance became terrible...basically unresponsive. I shut down Unraid and it did not boot back up. I pulled the flash drive, made a backup without issue on a standalone PC, then ran chkdsk and it said the drive needed to be repaired. I repaired it and Unraid booted fine. The VM still had terrible performance. I used a backup image of a clean Win 10 install to create a new VM, and everything was fine for the last day or so. Now this new VM has issues too. I have attached diagnostics. I recently upgraded to 6.10.2.

     

    Would love for someone with more experience to take a look at my logs and see if anything jumps out as being an issue.   
     

    Thanks,

    Dan
     

     

    tower-diagnostics-20220608-1029.zip

  11. About a month ago, when attempting to shut down my server, I had two disks that did not unmount. I have never had this issue in the several years that I have used Unraid.

     

    The second time I shut down, I decided to manually stop my VMs and docker containers. Once both of these were stopped, I then stopped the array on the Main tab. Two disks did not unmount...my SSD cache drive and disk 6.

     

    What is the best way to troubleshoot why these disks won't unmount? I need to restart my server again and would like a strategy for investigating so I can figure out the issue. Obviously I can grab logs.
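    My current plan for next time is to check from the console before stopping the array (a sketch using standard tools shipped with Unraid; the mount points are from my setup):

    # list processes that still have files open on each mount
    fuser -vm /mnt/disk6 /mnt/cache
    # open file handles on the cache pool
    lsof /mnt/cache 2>/dev/null | head
    # loop devices still attached (docker.img / libvirt.img live on the cache)
    losetup -a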

     

    Thanks,

    Dan

  12. My wife went to use our kitchen PC, which is a Windows VM running on Unraid. The VM, which was running, did not wake up when she moved the mouse. I opened a VPN connection, went to the VM tab, and it said Libvirt Service failed to start. Then, while I was on the phone with my wife, she said never mind, it came back up. I am home now, writing this post from the VM, and it appears to be working fine; however, the VM tab still says Libvirt Service failed to start.

     

    I have attached a copy of diagnostics. Any suggestions? Something like this happened a couple of weeks ago (same message) and I rebooted the Unraid server and everything came back fine. Since it's happened twice...should I delete the libvirt image and recreate my VMs? I know I have backups of the XMLs.
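    A couple of console checks I am thinking of running if it happens again (a sketch; these are the stock tools and log paths as I understand them):

    # does libvirt respond and still see the VMs?
    virsh list --all
    # recent messages from the libvirt daemon
    tail -n 50 /var/log/libvirt/libvirtd.log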

     

    Thanks,

    Dan

    vm.jpg

    tower-diagnostics-20210520-1338.zip

  13. Jorge,

     

    Thanks for the link. I tried method 1 and copied my data to the array, but kept getting an error on my VM image files. I then unmounted the cache drive, tried method 2, and saved that data to a different place on the array. I then ran to the store and purchased a new SSD with the intent of getting my Unraid server back online with the new SSD and then testing the data. Before installing the new SSD, I decided to try method 3 with the --repair option. After completing the repair, I stopped and then started the array and everything came back up as expected. The VM started and all docker containers appear to be working as expected.

     

    Should I back up the drive to a third location, then format the SSD and copy the data back to it, or just go with it since it's working?
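    If backing up first is the safer play, I was thinking something like this before any reformat (a sketch; the destination path is just an example spot on the array):

    # copy the repaired cache contents to the array before deciding
    rsync -avh --progress /mnt/cache/ /mnt/disk1/cache_backup/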

     

    Thanks,

    Dan

     

  14. Woke up this morning and my Unraid VM and webgui were sluggish. I noticed the logs were at 100%. I grabbed a diagnostic, see attached. I rebooted Unraid and the cache drive now says unmountable: no file system.

     

    Started the array in maintenance mode, ran the btrfs check in readonly mode, and got the following:

     

    [1/7] checking root items
    [2/7] checking extents
    data backref 7812329472 root 5 owner 22293485 offset 22859776 num_refs 0 not found in extent tree
    incorrect local backref count on 7812329472 root 5 owner 22293485 offset 22859776 found 1 wanted 0 back 0x3a5e11b0
    incorrect local backref count on 7812329472 root 5 owner 1125899929136109 offset 22859776 found 0 wanted 1 back 0x392bb5e0
    backref disk bytenr does not match extent record, bytenr=7812329472, ref bytenr=0
    backpointer mismatch on [7812329472 4096]
    data backref 111082958848 root 5 owner 53678 offset 0 num_refs 0 not found in extent tree
    incorrect local backref count on 111082958848 root 5 owner 53678 offset 0 found 1 wanted 0 back 0x39470c80
    incorrect local backref count on 111082958848 root 5 owner 1125899906896302 offset 0 found 0 wanted 1 back 0x2fcc10e0
    backref disk bytenr does not match extent record, bytenr=111082958848, ref bytenr=0
    backpointer mismatch on [111082958848 274432]
    ERROR: errors found in extent allocation tree or chunk allocation
    [3/7] checking free space cache
    [4/7] checking fs roots
    [5/7] checking only csums items (without verifying data)
    [6/7] checking root refs
    [7/7] checking quota groups skipped (not enabled on this FS)
    Opening filesystem to check...
    Checking filesystem on /dev/sdk1
    UUID: 3ab3a3ac-3997-416c-a6fd-605cfcd76924
    found 249870127104 bytes used, error(s) found
    total csum bytes: 127993568
    total tree bytes: 1612578816
    total fs tree bytes: 1038614528
    total extent tree bytes: 365559808
    btree space waste bytes: 331072552
    file data blocks allocated: 4561003868160
     referenced 205858721792
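    For reference, the readonly check above is equivalent to running this from the maintenance-mode console (device name taken from the output):

    btrfs check --readonly /dev/sdk1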

     

    Looking for any suggestions.   

     

    Dan

    tower-diagnostics-20201221-0712.zip

  15. The nodered docker keeps crashing on me.  If I restart it, it runs for a day or so.  I have deleted the container image and reinstalled and the outcome is the same.  Here is the log....any suggestions?

     

    0 info it worked if it ends with ok
    1 verbose cli [ '/usr/local/bin/node',
    1 verbose cli   '/usr/local/bin/npm',
    1 verbose cli   'start',
    1 verbose cli   '--cache',
    1 verbose cli   '/data/.npm',
    1 verbose cli   '--',
    1 verbose cli   '--userDir',
    1 verbose cli   '/data' ]
    2 info using npm@6.14.6
    3 info using node@v10.22.1
    4 verbose config Skipping project config: /usr/src/node-red/.npmrc. (matches userconfig)
    5 verbose run-script [ 'prestart', 'start', 'poststart' ]
    6 info lifecycle [email protected]~prestart: [email protected]
    7 info lifecycle [email protected]~start: [email protected]
    8 verbose lifecycle [email protected]~start: unsafe-perm in lifecycle true
    9 verbose lifecycle [email protected]~start: PATH: /usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/usr/src/node-red/node_modules/.bin:/usr/lo>
    10 verbose lifecycle [email protected]~start: CWD: /usr/src/node-red
    11 silly lifecycle [email protected]~start: Args: [ '-c',
    11 silly lifecycle   'node $NODE_OPTIONS node_modules/node-red/red.js $FLOWS "--userDir" "/data"' ]
    12 silly lifecycle [email protected]~start: Returned: code: 1  signal: null
    13 info lifecycle [email protected]~start: Failed to exec start script
    14 verbose stack Error: [email protected] start: `node $NODE_OPTIONS node_modules/node-red/red.js $FLOWS "--userDir" "/data"`
    14 verbose stack Exit status 1
    14 verbose stack     at EventEmitter.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/index.js:332:16)
    14 verbose stack     at EventEmitter.emit (events.js:198:13)
    14 verbose stack     at ChildProcess.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/lib/spawn.js:55:14)
    14 verbose stack     at ChildProcess.emit (events.js:198:13)
    14 verbose stack     at maybeClose (internal/child_process.js:982:16)
    14 verbose stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:259:5)
    15 verbose pkgid [email protected]
    16 verbose cwd /usr/src/node-red
    17 verbose Linux 4.19.107-Unraid
    18 verbose argv "/usr/local/bin/node" "/usr/local/bin/npm" "start" "--cache" "/data/.npm" "--" "--userDir" "/data"
    19 verbose node v10.22.1
    20 verbose npm  v6.14.6
    21 error code ELIFECYCLE
    22 error errno 1
    23 error [email protected] start: `node $NODE_OPTIONS node_modules/node-red/red.js $FLOWS "--userDir" "/data"`
    23 error Exit status 1
    24 error Failed at the [email protected] start script.
    24 error This is probably not a problem with npm. There is likely additional logging output above.
    25 verbose exit [ 1, true ]
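    Since npm says there is likely additional logging output above, next time it dies I plan to pull more of the container log (a sketch; 'nodered' is just what my container happens to be named):

    # show the lines leading up to the ELIFECYCLE error
    docker logs nodered 2>&1 | grep -B 40 'ELIFECYCLE' | tail -n 60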

     

  16. So I just finished preclearing a 12 TB drive.  In my current setup I have dual parity drives both 4 TB.  I have 6 drives in my array of various sizes (4TB down to a 1 TB).  I want to install the new 12 TB as a parity drive and then replace  the 1 TB drive with one of the current 4TB parity drives.  

     

    What is the recommended/safest way to do this? My first thought was to stop the array, unassign one of the parity drives, assign the new 12 TB drive, and let it rebuild parity...then once parity is rebuilt I could swap the 1 TB and the 4 TB.

     

    or is this the procedure I should follow:  https://wiki.unraid.net/The_parity_swap_procedure

     

    Would like confirmation before proceeding.

     

    Thanks in advance,

    Dan

     

  17. On 6/13/2020 at 2:21 PM, chris_netsmart said:

    I got the same problem, and trying to understand the instructions is confusing.

     

    so I did an install of the docker application, and now when I try to find "/home/Shinobi" within the docker or within Unraid, I am informed that it can't be found.

    I also tried some of the offered commands and got the same kind of issues, like sh: source: can't open 'sql/framework.sq': No such file or directory.

     

    so can someone please write a simple step-by-step so that we can get this docker up and running.

     

    I just installed this docker and was having the same issue as chris_netsmart. I went to Shinobi's website and read their installation instructions. I discovered that if you log in to the console of this docker and perform the following, you will be able to log in with the docker's default username and password. I was then able to create my own username/password. I am wondering if there is something wrong with the template that is not inserting the values we define at docker install/creation.

     

     

    Set up Superuser Access
    Rename super.sample.json to super.json. Run the following command inside the Shinobi directory with terminal. Passwords are saved as MD5 strings. You only need to do this step once.
    
    cp super.sample.json super.json
    Login at http://your.shinobi.video/super.
    
    Username : admin@shinobi.video
    Password : admin
    You should now be able to manage accounts

    Here is the direct link for the above....go to account management section:  https://shinobi.video/docs/start
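    If you later want to change the super account password directly in super.json, my understanding from the docs is that it just needs the MD5 of the new password, which can be generated like this:

    # print the MD5 hash to paste into super.json
    echo -n 'YourNewPassword' | md5sum | awk '{print $1}'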
