goinsnoopin


  1. Thanks, figured my issue out: a corrupt libvirt.img. Now that the image has been deleted and recreated, all is well.
  2. About a month ago, when attempting to shut down my server, I had two disks that did not unmount. I have never had this issue in the several years that I have used Unraid. The second time I shut down, I decided to manually stop my VMs and Docker containers. Once both of these were stopped, I then stopped the array on the Main tab. Two disks did not unmount: my SSD cache drive and disk 6. What is the best way to troubleshoot why these disks won't unmount? I need to restart my server again and would like a strategy for investigating so I can figure out the issue. Obviously I can grab logs. Thanks, Dan
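One way to start on the question above: before stopping the array, check what still has files open on the disks that refuse to unmount. A sketch, assuming the mount points /mnt/cache and /mnt/disk6 and that lsof/fuser are available on the host:

```shell
#!/bin/sh
# List anything still holding a mount point open; a process that shows up
# here (often a container or VM still touching the share) is usually what
# keeps the disk from unmounting.
check_busy() {
  for mnt in "$@"; do
    echo "Open files under $mnt:"
    # lsof lists open files below the mount; fuser is the fallback
    lsof +D "$mnt" 2>/dev/null || fuser -vm "$mnt" 2>/dev/null || true
  done
}

check_busy /mnt/cache /mnt/disk6   # paths assumed from the post above
```

Running this right before clicking Stop should name the offending process; anything it reports can then be stopped manually before retrying the array stop.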
  3. Since I have backups of the VMs' XML, is there any reason not to delete libvirt.img and start fresh?
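If the image is deleted and recreated, the VMs can be re-registered from those backed-up XML files with virsh. A minimal sketch, with a hypothetical backup directory; the virsh call is only echoed here so nothing runs by accident:

```shell
#!/bin/sh
# Re-register VMs from backed-up libvirt XML after recreating libvirt.img.
# The backup directory is an assumption; point it at wherever the XML
# backups actually live.
redefine_vms() {
  dir="$1"
  for xml in "$dir"/*.xml; do
    [ -e "$xml" ] || continue        # no backups found, nothing to do
    echo "virsh define $xml"         # echoed here; run for real on the host
  done
}

redefine_vms /boot/vm-backups        # hypothetical backup location
```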
  4. @JorgeB I just found the logs for the last crash, it was only 5 days ago...not the couple of weeks I stated in my original posting. Attached are those logs. All I did after this crash on 5/15/2021 was reboot the server and all came back fine. Thanks for looking! tower-diagnostics-20210515-0746.zip
  5. My wife went to use our kitchen PC, which is a Windows VM running on Unraid. The VM, which was running, did not wake up when she moved the mouse. I opened a VPN connection and went to the VM tab and it said "Libvirt Service failed to start." Then, while on the phone with my wife, she said never mind, it came back up. I am home now writing this post from the VM and it appears to be working fine; however, the VM tab still says "Libvirt Service failed to start." I have attached a copy of diagnostics. Any suggestions? Something like this happened a couple of weeks ago (same message) and I rebooted the Unraid server and everything came back fine. Since it's happened twice, should I delete the libvirt image and recreate my VMs? I know I have backups of the XMLs. Thanks, Dan tower-diagnostics-20210520-1338.zip
  6. Jorge, Thanks for the link. I tried method 1 and copied my data to the array, but kept getting an error on my VM image files. I then unmounted the cache drive, tried method 2, and saved that data to a different place on the array. I then ran to the store and purchased a new SSD with the intent of getting my Unraid server back online with the new SSD and then testing the data. Before installing the new SSD, I decided to try method 3 with the --repair option. After completing the repair, I stopped and then started the array, and everything came back up as expected. The VM started and all Docker containers appear to be working as expected. Should I back up the drive to a third location, then format the SSD and copy the data back to it, or just go with it since it's working? Thanks, Dan
  7. Woke up this morning and my Unraid VM and webGUI were sluggish. Noticed logs were at 100%. I grabbed a diagnostic, see attached. I rebooted Unraid and the cache drive says unmountable: no file system. Started the array in maintenance mode and ran the btrfs check in read-only mode and got the following:

     Opening filesystem to check...
     Checking filesystem on /dev/sdk1
     UUID: 3ab3a3ac-3997-416c-a6fd-605cfcd76924
     [1/7] checking root items
     [2/7] checking extents
     data backref 7812329472 root 5 owner 22293485 offset 22859776 num_refs 0 not found in extent tree
     incorrect local backref count on 7812329472 root 5 owner 22293485 offset 22859776 found 1 wanted 0 back 0x3a5e11b0
     incorrect local backref count on 7812329472 root 5 owner 1125899929136109 offset 22859776 found 0 wanted 1 back 0x392bb5e0
     backref disk bytenr does not match extent record, bytenr=7812329472, ref bytenr=0
     backpointer mismatch on [7812329472 4096]
     data backref 111082958848 root 5 owner 53678 offset 0 num_refs 0 not found in extent tree
     incorrect local backref count on 111082958848 root 5 owner 53678 offset 0 found 1 wanted 0 back 0x39470c80
     incorrect local backref count on 111082958848 root 5 owner 1125899906896302 offset 0 found 0 wanted 1 back 0x2fcc10e0
     backref disk bytenr does not match extent record, bytenr=111082958848, ref bytenr=0
     backpointer mismatch on [111082958848 274432]
     ERROR: errors found in extent allocation tree or chunk allocation
     [3/7] checking free space cache
     [4/7] checking fs roots
     [5/7] checking only csums items (without verifying data)
     [6/7] checking root refs
     [7/7] checking quota groups skipped (not enabled on this FS)
     found 249870127104 bytes used, error(s) found
     total csum bytes: 127993568
     total tree bytes: 1612578816
     total fs tree bytes: 1038614528
     total extent tree bytes: 365559808
     btree space waste bytes: 331072552
     file data blocks allocated: 4561003868160
      referenced 205858721792

     Looking for any suggestions. Dan tower-diagnostics-20201221-0712.zip
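The check sequence behind the output above can be sketched as two steps: a safe read-only pass first, and --repair only as a last resort with backups in hand. The device name is taken from the check output in the post; the commands are printed rather than executed here:

```shell
#!/bin/sh
# Print the btrfs check sequence for a cache device in maintenance mode.
# Device path comes from the post above; substitute your own.
DEV=/dev/sdk1

btrfs_check_cmds() {
  echo "btrfs check --readonly $1"    # safe, report-only pass (what ran above)
  echo "btrfs check --repair $1"      # destructive; only with backups in hand
}

btrfs_check_cmds "$DEV"
```

Running the read-only pass first means the extent-tree errors can be inspected (as in the output above) before committing to any write to the filesystem.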
  8. The Node-RED docker keeps crashing on me. If I restart it, it runs for a day or so. I have deleted the container image and reinstalled, and the outcome is the same. Here is the log....any suggestions?

     0 info it worked if it ends with ok
     1 verbose cli [ '/usr/local/bin/node',
     1 verbose cli   '/usr/local/bin/npm',
     1 verbose cli   'start',
     1 verbose cli   '--cache',
     1 verbose cli   '/data/.npm',
     1 verbose cli   '--',
     1 verbose cli   '--userDir',
     1 verbose cli   '/data' ]
     2 info using npm@6.14.6
     3 info using node@v10.22.1
     4 verbose config Skipping project config: /usr/src/node-red/.npmrc. (matches userconfig)
     5 verbose run-script [ 'prestart', 'start', 'poststart' ]
     6 info lifecycle node-red-docker@1.2.2~prestart: node-red-docker@1.2.2
     7 info lifecycle node-red-docker@1.2.2~start: node-red-docker@1.2.2
     8 verbose lifecycle node-red-docker@1.2.2~start: unsafe-perm in lifecycle true
     9 verbose lifecycle node-red-docker@1.2.2~start: PATH: /usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/usr/src/node-red/node_modules/.bin:/usr/lo>
     10 verbose lifecycle node-red-docker@1.2.2~start: CWD: /usr/src/node-red
     11 silly lifecycle node-red-docker@1.2.2~start: Args: [ '-c',
     11 silly lifecycle   'node $NODE_OPTIONS node_modules/node-red/red.js $FLOWS "--userDir" "/data"' ]
     12 silly lifecycle node-red-docker@1.2.2~start: Returned: code: 1 signal: null
     13 info lifecycle node-red-docker@1.2.2~start: Failed to exec start script
     14 verbose stack Error: node-red-docker@1.2.2 start: `node $NODE_OPTIONS node_modules/node-red/red.js $FLOWS "--userDir" "/data"`
     14 verbose stack Exit status 1
     14 verbose stack     at EventEmitter.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/index.js:332:16)
     14 verbose stack     at EventEmitter.emit (events.js:198:13)
     14 verbose stack     at ChildProcess.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/lib/spawn.js:55:14)
     14 verbose stack     at ChildProcess.emit (events.js:198:13)
     14 verbose stack     at maybeClose (internal/child_process.js:982:16)
     14 verbose stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:259:5)
     15 verbose pkgid node-red-docker@1.2.2
     16 verbose cwd /usr/src/node-red
     17 verbose Linux 4.19.107-Unraid
     18 verbose argv "/usr/local/bin/node" "/usr/local/bin/npm" "start" "--cache" "/data/.npm" "--" "--userDir" "/data"
     19 verbose node v10.22.1
     20 verbose npm v6.14.6
     21 error code ELIFECYCLE
     22 error errno 1
     23 error node-red-docker@1.2.2 start: `node $NODE_OPTIONS node_modules/node-red/red.js $FLOWS "--userDir" "/data"`
     23 error Exit status 1
     24 error Failed at the node-red-docker@1.2.2 start script.
     24 error This is probably not a problem with npm. There is likely additional logging output above.
     25 verbose exit [ 1, true ]
  9. Thank you! Proceeding now with the normal way. Dan
  10. So I just finished preclearing a 12 TB drive. In my current setup I have dual parity drives, both 4 TB. I have 6 drives in my array of various sizes (4 TB down to 1 TB). I want to install the new 12 TB drive as a parity drive and then replace the 1 TB drive with one of the current 4 TB parity drives. What is the recommended/safest way to do this? My first thought was to stop the array, unassign one of the parity drives, assign the new 12 TB drive, and let it rebuild parity; then, once parity is rebuilt, I could swap the 1 TB and 4 TB drives. Or is this the procedure I should follow: https://wiki.unraid.net/The_parity_swap_procedure Would like confirmation before proceeding. Thanks in advance, Dan
  11. I just installed this docker and was having the same issue as chris_netsmart. Went to Shinobi's website and read their installation instructions. Discovered that if you log in to the console of this docker and perform the following, you will be able to log in with the docker's default username and password. I was then able to create my own username/password. Wondering if there is something wrong with the template that is not inserting the values that we define at docker install/creation.

      Set up Superuser Access: Rename super.sample.json to super.json by running the following command inside the Shinobi directory with a terminal. Passwords are saved as MD5 strings. You only need to do this step once.

      cp super.sample.json super.json

      Login at http://your.shinobi.video/super.
      Username: admin@shinobi.video
      Password: admin

      You should now be able to manage accounts. Here is the direct link for the above....go to the account management section: https://shinobi.video/docs/start
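The one-time superuser step above can be wrapped in a small idempotent helper so it is safe to re-run. The Shinobi directory path is an assumption and may differ per image; the credentials printed are the defaults quoted in the post:

```shell
#!/bin/sh
# One-time Shinobi superuser setup: copy the sample config if it is not
# already in place, then point the user at the /super login page.
setup_super() {
  cd "$1" || return 1                                    # Shinobi app directory
  [ -f super.json ] || cp super.sample.json super.json   # only needed once
  echo "Log in at http://your.shinobi.video/super (admin@shinobi.video / admin)"
}

setup_super /home/Shinobi || true   # directory is an assumption; adjust per image
```

The `[ -f super.json ] ||` guard means an existing super.json (and any passwords already set in it) is never overwritten.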
  12. For reference the above diagnostics file is after a reboot of unraid. When it originally crashed I also grabbed a diagnostics....it is attached to this post. Thanks, Dan tower-diagnostics-20200411-2019.zip
  13. My cache drive has btrfs errors. When I start the array and check the file system, it lists a bunch of errors. When I add the --repair option, it starts and then asks a y/n question that I am unable to answer since I ran this from the webGUI. Any suggestions on how to proceed? Diagnostics are attached. Thanks, Dan tower-diagnostics-20200412-2108.zip
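For the prompt problem above: the webGUI check runs without a usable stdin, so interactive y/n questions cannot be answered there, but the same command run over an SSH session can be. A sketch that only prints the command; the hostname comes from the diagnostics filenames in this thread, and the device path is an assumption:

```shell
#!/bin/sh
# Build the SSH invocation for an interactive btrfs repair, where y/n
# prompts can actually be answered. Printed rather than executed.
repair_over_ssh() {
  printf 'ssh root@tower "btrfs check --repair %s"\n' "$1"
}

repair_over_ssh /dev/sdk1   # device path is an assumption; confirm before running
```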
  14. I was looking at this list and thought I was okay: https://www.elpamsoft.com/?p=Plex-Hardware-Transcoding Guess I need to do a little more research. Looking at pricing, a GT 710 runs somewhere around $30 used, but a GTX 1050 Ti can be had for $70 or so used on Reddit hardwareswap. Maybe I should just go for that and call it a day.