jowi

Members
  • Content Count: 1131
  • Joined
  • Last visited

Community Reputation: 7 Neutral

About jowi
  • Rank: Advanced Member


  1. Looked for IPMI logging, but I can't find it. Do I need to run some command-line IPMI tool on the server to collect or start IPMI logging? I'm using Supermicro's IPMIView Windows app as the IPMI client. *edit: found the IPMI log settings in the BIOS, will take a look; it's probably disabled. *edit: yes it was. Enabled it.
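For reading the BMC's event log from the OS side instead of the IPMIView GUI, ipmitool's `sel` subcommands are the usual route. A sketch, assuming ipmitool is installed on the server (e.g. via a plugin on Unraid):

```shell
# sel_dump: print the BMC System Event Log (SEL) when ipmitool and a BMC
# device node are present; otherwise say what is missing. `sel elist`
# prints one timestamped, human-readable line per logged hardware event.
sel_dump() {
    if command -v ipmitool >/dev/null 2>&1 && [ -e /dev/ipmi0 ]; then
        ipmitool sel elist       # the event log itself
        ipmitool sel time get    # BMC clock; should match the host clock
    else
        echo "ipmitool or /dev/ipmi0 not available"
    fi
}
sel_dump
```

If the BMC clock drifts from the host clock, SEL timestamps won't line up with syslog, so checking `sel time get` first is worth the extra line.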
  2. Yeah, I know, but the server crashing while Roon was rebuilding the library after losing connection to the Roon core (which happens a lot...) occurred at least two times, maybe even three. Not much weird stuff running on the server: SABnzbd, Nextcloud, UniFi controller, Pi-hole, Sonarr, that's about it. 32 GB of memory, quad-core Xeon 1230, no VMs running. Syslog shows NOTHING. Is there a way to get Roon logging somewhere persistent?
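Roon also keeps its own rolling text logs in a Logs folder inside its data directory, and those survive a reboot, unlike the in-RAM syslog. A small sketch for locating them; the appdata path is an assumption, adjust to wherever the container maps its data:

```shell
# find_roon_logs ROOT: list any "Logs" directories under a docker appdata
# tree. Roon writes rolling logs there, so they persist across a crash.
find_roon_logs() {
    find "$1" -type d -name Logs 2>/dev/null
}
# /mnt/user/appdata is Unraid's conventional docker data share (assumption).
find_roon_logs /mnt/user/appdata || true
```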
  3. Syslog is synced to the flash drive, but it doesn't show anything useful. The last entry was a spindown of some drives at 4:10 this morning, and the next entry is the reboot I performed... no clues, no mention of anything interesting.
  4. This docker has a very serious issue: it sometimes takes down my array... It starts with the remote app losing the Roon core; you have to reselect it, log in again, and re-add your folders. Then it starts indexing, and crashes... and takes my complete Unraid server with it. Seriously. No shares, no GUI, no dockers, no plugins, nothing, it is dead. It kills my server. I have to hard reset (yikes) or use IPMI to reset. Any idea what it could be? Some logging I can look into?
  5. ... I know, not a sexy issue, but it happened again. This time it crashed while I was reconnecting to the Roon docker. The Roon server sometimes loses its library for some reason, and you have to reselect the Roon core in the remote app and rebuild the libraries. After adding two folders, Roon stopped working and took the array with it. Now, this time I recalled that one of the previous times the array crashed, it was also during issues with Roon, so my conclusion is that the Roon docker f*cks things up big time and takes the array down with it... Unraid cannot be reached, not even over IPMI. IPMI only shows a powered-up state, but nothing works. No GUI, no shares, no dockers, no plugins, just dead. Which to me is surprising; wasn't that the whole reason we want containerized apps, so that they don't influence the host at all?
  6. Happened again this morning. Syslog was synced to the flash drive, so I can at least show some logging:

     Oct 10 21:54:57 UNRAID kernel: mdcmd (247): spindown 8
     Oct 10 22:22:06 UNRAID kernel: mdcmd (248): spindown 5
     Oct 10 22:22:07 UNRAID kernel: mdcmd (249): spindown 7
     Oct 10 22:22:08 UNRAID kernel: mdcmd (250): spindown 6
     Oct 10 22:59:22 UNRAID kernel: mdcmd (251): spindown 1
     Oct 10 23:23:14 UNRAID kernel: mdcmd (252): spindown 7
     Oct 11 02:00:00 UNRAID kernel: nginx[12848]: segfault at 10 ip 0000149d40bbe9fb sp 00007ffead203818 error 4 in libperl.so[149d40b70000+114000]
     Oct 11 02:00:00 UNRAID kernel: Code: 83 f8 09 76 f1 45 31 c0 80 3f 7d 41 0f 94 c0 44 89 c0 c3 31 c0 48 85 ff 74 09 31 c0 80 7f 0c 04 0f 94 c0 83 e0 01 c3 48 8b 17 <48> 8b 42 10 48 85 c0 74 0b 0f b6 52 30 48 c1 e8 03 48 29 d0 c3 48
     Oct 11 03:38:46 UNRAID kernel: mdcmd (253): spindown 7
     Oct 11 03:41:42 UNRAID kernel: mdcmd (254): spindown 5
     Oct 11 03:41:54 UNRAID kernel: mdcmd (255): spindown 2
     Oct 11 03:41:56 UNRAID kernel: mdcmd (256): spindown 4
     Oct 11 03:42:12 UNRAID kernel: mdcmd (257): spindown 8
     Oct 11 03:42:27 UNRAID kernel: mdcmd (258): spindown 6
     Oct 11 04:13:16 UNRAID emhttpd: Spinning down all drives...
     Oct 11 04:13:16 UNRAID kernel: mdcmd (259): spindown 0
     Oct 11 04:13:16 UNRAID kernel: mdcmd (260): spindown 1
     Oct 11 04:13:16 UNRAID kernel: mdcmd (261): spindown 2
     Oct 11 04:13:16 UNRAID kernel: mdcmd (262): spindown 3
     Oct 11 04:13:16 UNRAID kernel: mdcmd (263): spindown 4
     Oct 11 04:13:17 UNRAID kernel: mdcmd (264): spindown 5
     Oct 11 04:13:17 UNRAID kernel: mdcmd (265): spindown 6
     Oct 11 04:13:18 UNRAID kernel: mdcmd (266): spindown 7
     Oct 11 04:13:19 UNRAID kernel: mdcmd (267): spindown 8
     Oct 11 04:13:19 UNRAID kernel: mdcmd (268): spindown 9
     Oct 11 04:13:19 UNRAID emhttpd: shcmd (32928): /usr/sbin/hdparm -y /dev/nvme0n1
     Oct 11 04:13:19 UNRAID root: HDIO_DRIVE_CMD(standby) failed: Inappropriate ioctl for device
     Oct 11 04:13:19 UNRAID root:
     Oct 11 04:13:19 UNRAID root: /dev/nvme0n1:
     Oct 11 04:13:19 UNRAID root: issuing standby command
     Oct 11 04:13:19 UNRAID emhttpd: shcmd (32928): exit status: 25
     Oct 11 08:32:03 UNRAID kernel: microcode: microcode updated early to revision 0x21, date = 2019-02-13
     Oct 11 08:32:03 UNRAID kernel: Linux version 4.19.107-Unraid (root@Develop) (gcc version 9.2.0 (GCC)) #1 SMP Thu Mar 5 13:55:57 PST 2020
     Oct 11 08:32:03 UNRAID kernel: Command line: BOOT_IMAGE=/bzimage vfio-pci.ids=8086:1521 initrd=/bzroot
     Oct 11 08:32:03 UNRAID kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
     Oct 11 08:32:03 UNRAID kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
     Oct 11 08:32:03 UNRAID kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
     Oct 11 08:32:03 UNRAID kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
     Oct 11 08:32:03 UNRAID kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
     Oct 11 08:32:03 UNRAID kernel: BIOS-provided physical RAM map:
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009efff] usable
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000beec6fff] usable
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x00000000beec7000-0x00000000bef09fff] ACPI NVS
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x00000000bef0a000-0x00000000ce48ffff] usable
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x00000000ce490000-0x00000000ce534fff] reserved
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x00000000ce535000-0x00000000ce5c6fff] usable
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x00000000ce5c7000-0x00000000ce68bfff] ACPI NVS
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x00000000ce68c000-0x00000000cf7fefff] reserved
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x00000000cf7ff000-0x00000000cf7fffff] usable
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
     Oct 11 08:32:03 UNRAID kernel: BIOS-e820: [mem 0x0000000100000000-0x000000082fffffff] usable
     Oct 11 08:32:03 UNRAID kernel: NX (Execute Disable) protection: active
     ...

     Rebooted at 8:32 this morning; it looks like the last logging at 4:13 before the crash was something to do with the SSD? Is there an issue with TRIM? I have the TRIM plugin running; disabled it for now...
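When a mirrored syslog does capture something (like the nginx segfault in the log above), a quick filter for the lines that usually matter after a hard crash saves scrolling through spindown noise. A small sketch; the default path below is an assumption about where the flash-drive mirror lands, so pass your own path:

```shell
# scan_syslog FILE: print only kernel-trouble lines (segfaults, oopses,
# call traces, BUGs) from a syslog copy, e.g. one mirrored to the flash
# drive, and skip routine entries such as drive spindowns.
scan_syslog() {
    grep -E 'segfault|Call Trace|BUG:|Oops' "$1"
}
# Assumed mirror location; only scan it if it actually exists.
if [ -f /boot/logs/syslog ]; then
    scan_syslog /boot/logs/syslog
fi
```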
  7. Yes, if I had the expertise. I can do C#/.NET Core etc. programming and some minor Python, but somehow I think that is not needed here. Also, Linux is not my thing. If you're suggesting I am not allowed to comment unless I contribute: f*** you. That is not up to you.
  8. User Scripts has grown from a quickly bolted-together, handy-but-buggy tool into a substantial part of Unraid that a lot of users rely on, and it needs and deserves a lot of rework, refactoring and attention to take it to a higher level.
  9. OK, but if I enable a cache drive, that silly /mnt/user0 shows up... I don't want that. It is confusing.
  10. Yes, but I do NOT want to use the cache drive as a cache drive (with the mover etc.); for me it's just an SSD. It contains dockers, apps, VMs, my Nextcloud repository, etc. When stuff gets downloaded, my own scripts download everything to the SSD first, repair it, and when it's done, move it to the right place on the array. I don't need and don't want Unraid's mover/cache combo.
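The staging pattern described here can be sketched roughly like this (hypothetical paths and function name, not the author's actual script):

```shell
# stage_move SRC DST: move a finished, verified download from the SSD
# staging area to its final array location. `mv` across filesystems is a
# copy-then-delete, so the array disk only spins up for the final copy.
stage_move() {
    src=$1; dst=$2
    mkdir -p "$dst"      # create the destination folder if needed
    mv "$src" "$dst"/    # then relocate the finished file
}
```

Usage would be something like `stage_move /mnt/ssd/staging/show.mkv /mnt/user/media/tv` once the repair step has finished (paths are made-up examples).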
  11. Had to start diagnostics twice... also weird. Syslog has something to say about that as well. unraid-diagnostics-20200912-1929.zip
  12. Yes, read back a few posts. If I add a cache-only user share from the Unraid GUI, it fails; if I create it manually in /mnt/user it shows up in Shares, but if I change it to cache-only, it gets removed again.
  13. You mean set up a syslog CLIENT on e.g. Win10, with Unraid doing the serving part...?
  14. Yeah, but my SSD is NOT a cache drive... it is NOT configured as cache; I don't need caching. It shows up as 'cache' in Unraid's list of storage anyway (which is very annoying). I'm also not using the mover. Way too obfuscated for me. So I cannot create a 'cache only' share, since... there is no cache. If I add a 'cache only' share from Unraid itself, it does not get created; syslog shows 'cannot create directory, no medium found'... If I manually create the share in /mnt/user, it does show up in the Unraid shares, but it is created on a random fixed disk. If I then change it to 'cache only', by magic, it is gone...
  15. I can only use an array folder for the syslog server to write to, which will cause that disk to spin up constantly... this would only be OK if I could let the syslog server write to my (cache) SSD. It's not very clear: it says 'custom' amongst my shares, but I cannot add a custom location... so I guess mirroring syslog to the flash drive is the only way.
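Outside the GUI, stock rsyslog can append to any mounted path; a generic rule fragment (not Unraid's own generated config — the path and filename here are made-up examples) would look like:

```conf
# Generic rsyslog rule: append every facility/priority to a file on the
# SSD mount instead of an array disk. The leading "-" makes the writes
# asynchronous (no fsync per line). Example path, not an Unraid default.
*.* -/mnt/disks/ssd/logs/syslog.log
```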