ionred

Members · 37 posts

  1. Yes, this can be accomplished from the unraid terminal, either via the web terminal or via SSH to unraid. Fortunately I did not have to do it from a remote machine like Hoopster; it just “worked”. The BMC takes a couple of minutes to fully start back up. From rough memory, I'd say it was another 4-5 minutes before I could get to the login screen, but it let me log in immediately afterwards without any ugly loops. The unraid system (and SSH session) stayed up the entire time.
  2. If this happens, you can cold reset the BMC from the command line on your machine. I'm pretty sure it's ipmitool mc reset cold, but I could be wrong on that. I've had the same issue you described with this board before, with the BMC login loop, and the BMC just needed a reset. Never lost any uptime on the machine itself.
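For reference, a sketch of the reset commands (assumptions: ipmitool is installed and the kernel IPMI driver is loaded; the remote form's host and credentials are placeholders, not values from this thread):

```shell
# Cold-reset the BMC in-band, from the server's own shell:
ipmitool mc reset cold

# Or out-of-band from another machine on the network (placeholders):
# ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> mc reset cold
```

The cold reset restarts only the BMC firmware; as noted above, the host OS keeps running throughout.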
  3. Can confirm: all unraid-related IPs on my system are static as well.
  4. Cross-posting this: @jonp, never heard back from you on that thread. This bug is still ongoing for others too.
  5. Can confirm, seeing the same on 6.10.0-rc1.
  6. For those interested, it seems like the changes in version 6.10.0-rc1 have fixed the issue with losing the KVM/local monitor on the ASRockRack E3C246D4U board (BIOS L2.21A) when the onboard GPU is enabled for hardware encoding. I was watching the bootup in the KVM on the upgrade reboot, and my jaw dropped when I saw a login prompt rather than the screen going black at the end of the boot process!
  7. Keep getting "Initializing Presets" at the top and can't save a job to start (see screenshot). Also getting a debug error when trying to save even a basic ffmpeg profile for testing. For reference, I have an Intel setup, not Nvidia, which is why I'm choosing to use VAAPI.

      2021-05-29T01:14:09.4573812-05:00 0HM92D2TDTQSF:00000002 [ERR] {SourceContext: "Microsoft.AspNetCore.Components.Server.Circuits.CircuitHost", TransportConnectionId: "cVEIOsITmh9f-_HHvm0VWg", RequestPath: "/_blazor", ConnectionId: "0HM92D2TDTQSF"} Unhandled exception in circuit '"9zBxGwnM4yDP_f5erFPpb29zF55XM0AxZbF9Gdr8CDQ"'. (47be2d5d)
      System.ArgumentNullException: Value cannot be null. (Parameter 'source')
         at System.Linq.ThrowHelper.ThrowArgumentNullException(ExceptionArgument argument)
         at System.Linq.Enumerable.Select[TSource,TResult](IEnumerable`1 source, Func`2 selector)
         at Compressarr.Settings.FFmpegFactory.FFmpegPresetBase..ctor(FFmpegPreset preset) in /src/Compressarr/Settings/FFmpegFactory/FFmpegPresetBase.cs:line 22
         at Compressarr.Application.ApplicationService.<>c.<SaveAppSetting>b__94_0(FFmpegPreset x) in /src/Compressarr/Application/ApplicationService.cs:line 99
         at System.Linq.Enumerable.SelectEnumerableIterator`2.MoveNext()
         at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeList(JsonWriter writer, IEnumerable values, JsonArrayContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)
         at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.Serialize(JsonWriter jsonWriter, Object value, Type objectType)
         at Newtonsoft.Json.JsonSerializer.SerializeInternal(JsonWriter jsonWriter, Object value, Type objectType)
         at Newtonsoft.Json.JsonSerializer.Serialize(JsonWriter jsonWriter, Object value)
         at Newtonsoft.Json.Linq.JToken.FromObjectInternal(Object o, JsonSerializer jsonSerializer)
         at Newtonsoft.Json.Linq.JToken.FromObject(Object o)
         at Compressarr.Application.ApplicationService.SaveAppSetting() in /src/Compressarr/Application/ApplicationService.cs:line 99
         at Compressarr.Presets.PresetManager.AddPresetAsync(FFmpegPreset newPreset) in /src/Compressarr/Presets/PresetManager.cs:line 630
         at Compressarr.Shared.PresetView.savePreset() in /src/Compressarr/Shared/PresetView.razor:line 217
         at System.Threading.Tasks.Task.<>c.<ThrowAsync>b__140_0(Object state)
         at Microsoft.AspNetCore.Components.Rendering.RendererSynchronizationContext.ExecuteSynchronously(TaskCompletionSource`1 completion, SendOrPostCallback d, Object state)
         at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
      --- End of stack trace from previous location ---
         at Microsoft.AspNetCore.Components.Rendering.RendererSynchronizationContext.ExecuteBackground(WorkItem item)
  8. @ljm42 I think it had to do with this being a reinstall. I can't recall the status of my backup before, but it's possibly related. I just fixed it for myself with the following steps:
      • wiped the /boot/.git directory
      • removed /var/local/emhttp/flashbackup.ini
      • then manually initialized with: php /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup.php activate
      Perhaps more checks are needed on a reinstall to remove the .git directory and any leftovers from a previous install to prevent this? Edit: the last step to manually activate from the CLI may have been unnecessary; I did not refresh the My Servers page in the GUI to see if it would work automatically. I can say for sure that just removing the .git folder under /boot was not enough, though.
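The steps above, as shell commands. This is only a sketch of what the post describes: the paths are Unraid-specific, the sequence wipes local flash-backup state, and the final activation step may be optional, so treat it with care.

```shell
# Reset My Servers flash-backup state after a reinstall (per the steps above).
rm -rf /boot/.git                         # wipe the stale git repo on the flash drive
rm -f /var/local/emhttp/flashbackup.ini   # remove the leftover backup state file
# Re-initialize the flash backup (possibly optional; the GUI may do this itself):
php /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup.php activate
```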
  9. +1 Bump. Just reinstalled after having it uninstalled for a month. Same issue as OP.
  10. Here's a quick paste from my console. The first half is after a hard reboot; then I shut down docker and restarted it from the settings menu in unraid. Afterwards (I made a comment in the middle), there are new routes (the docker-generated shim routes 10.0.0.0/25 and 10.0.0.128/25 that link the host and docker networks) and the network works properly again. Relevant new ip routes added:

          10.0.0.0/25 dev shim-br0 scope link
          10.0.0.128/25 dev shim-br0 scope link

      root@fc-ps-urd1:~# ip link
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/ipip 0.0.0.0 brd 0.0.0.0
      3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/gre 0.0.0.0 brd 0.0.0.0
      4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
      5: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1464 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
      6: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/ipip 0.0.0.0 brd 0.0.0.0
      7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/sit 0.0.0.0 brd 0.0.0.0
      10: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
          link/ether d0:50:99:d5:cf:26 brd ff:ff:ff:ff:ff:ff
      11: eth1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond0 state DOWN mode DEFAULT group default qlen 1000
          link/ether d0:50:99:d5:cf:26 brd ff:ff:ff:ff:ff:ff permaddr d0:50:99:d5:cf:27
      12: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
          link/ether d0:50:99:d5:cf:26 brd ff:ff:ff:ff:ff:ff
      13: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
          link/ether d0:50:99:d5:cf:26 brd ff:ff:ff:ff:ff:ff
      14: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
          link/none
      15: br-c31dbeb7d5c2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
          link/ether 02:42:43:73:59:a7 brd ff:ff:ff:ff:ff:ff
      16: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
          link/ether 02:42:f6:f6:69:75 brd ff:ff:ff:ff:ff:ff
      18: [email protected]: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
          link/ether a6:76:5d:44:40:7c brd ff:ff:ff:ff:ff:ff link-netnsid 0
      34: [email protected]: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
          link/ether e6:22:25:f9:dd:04 brd ff:ff:ff:ff:ff:ff link-netnsid 14
      root@fc-ps-urd1:~# ip route
      default via 10.0.0.1 dev br0
      10.0.0.0/24 dev br0 proto kernel scope link src 10.0.0.246
      172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
      172.18.0.0/16 dev br-c31dbeb7d5c2 proto kernel scope link src 172.18.0.1 linkdown
      root@fc-ps-urd1:~# ping 10.0.0.187
      PING 10.0.0.187 (10.0.0.187) 56(84) bytes of data.
      From 10.0.0.246 icmp_seq=1 Destination Host Unreachable
      From 10.0.0.246 icmp_seq=2 Destination Host Unreachable
      From 10.0.0.246 icmp_seq=3 Destination Host Unreachable
      ^C
      --- 10.0.0.187 ping statistics ---
      4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3066ms
      pipe 4
      root@fc-ps-urd1:~# ## SHUTDOWN DOCKER, RESTARTED DOCKER
      root@fc-ps-urd1:~# ## NO OTHER CHANGES
      root@fc-ps-urd1:~# ping 10.0.0.187
      PING 10.0.0.187 (10.0.0.187) 56(84) bytes of data.
      64 bytes from 10.0.0.187: icmp_seq=1 ttl=64 time=0.024 ms
      64 bytes from 10.0.0.187: icmp_seq=2 ttl=64 time=0.020 ms
      ^C
      --- 10.0.0.187 ping statistics ---
      2 packets transmitted, 2 received, 0% packet loss, time 1014ms
      rtt min/avg/max/mdev = 0.020/0.022/0.024/0.002 ms
      root@fc-ps-urd1:~# ip route
      default via 10.0.0.1 dev br0
      10.0.0.0/25 dev shim-br0 scope link
      10.0.0.0/24 dev br0 proto kernel scope link src 10.0.0.246
      10.0.0.128/25 dev shim-br0 scope link
      172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
      172.18.0.0/16 dev br-c31dbeb7d5c2 proto kernel scope link src 172.18.0.1 linkdown
      root@fc-ps-urd1:~#
      root@fc-ps-urd1:~#
      root@fc-ps-urd1:~# ip link
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/ipip 0.0.0.0 brd 0.0.0.0
      3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/gre 0.0.0.0 brd 0.0.0.0
      4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
      5: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1464 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
      6: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/ipip 0.0.0.0 brd 0.0.0.0
      7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/sit 0.0.0.0 brd 0.0.0.0
      10: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
          link/ether d0:50:99:d5:cf:26 brd ff:ff:ff:ff:ff:ff
      11: eth1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond0 state DOWN mode DEFAULT group default qlen 1000
          link/ether d0:50:99:d5:cf:26 brd ff:ff:ff:ff:ff:ff permaddr d0:50:99:d5:cf:27
      12: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
          link/ether d0:50:99:d5:cf:26 brd ff:ff:ff:ff:ff:ff
      13: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
          link/ether d0:50:99:d5:cf:26 brd ff:ff:ff:ff:ff:ff
      14: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
          link/none
      15: br-c31dbeb7d5c2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
          link/ether 02:42:43:73:59:a7 brd ff:ff:ff:ff:ff:ff
      59: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
          link/ether 02:42:b9:5f:64:4d brd ff:ff:ff:ff:ff:ff
      60: shim-br0@br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
          link/ether 62:a9:9c:7c:8c:ec brd ff:ff:ff:ff:ff:ff
      62: [email protected]: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
          link/ether 82:aa:df:a5:64:6f brd ff:ff:ff:ff:ff:ff link-netnsid 0
      64: [email protected]: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
          link/ether de:80:0a:a4:85:de brd ff:ff:ff:ff:ff:ff link-netnsid 1
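A post-boot check for the missing shim could look like this sketch (assumptions: the shim interface is named shim-br0 as in the routes above, and Unraid's docker service script lives at /etc/rc.d/rc.docker):

```shell
# Succeeds when the `ip route` output piped to stdin contains no route
# via the given interface, i.e. the docker shim routes are missing.
shim_missing() {
    ! grep -q "dev $1"
}

# Intended use on the server after boot (not executed here):
#   if ip route | shim_missing shim-br0; then
#       /etc/rc.d/rc.docker stop && /etc/rc.d/rc.docker start   # assumed Unraid rc script
#   fi
```

Restarting the docker service regenerates the shim routes, matching the before/after behavior shown in the console paste.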
  11. @jonp, What @tjb_altf4 said. Unraid creates a shim network when “host access to custom networks” is enabled.
  12. Bump. Limetech (or other friendly folks here in the forums), are you able to assist?
  13. After a graceful reboot there is no issue, but after a hard reboot from watchdog or power loss, the system does not start up with the docker shim in place, resulting in no network connection between the unraid server and custom docker IPs. As soon as I shut down and restart the docker service from the settings menu, the shim is generated and everything works as expected. Confirmed that host access to custom networks is enabled. I've seen a few other mentions on the forums, but nobody ever seemed to have a resolution (nor did they mention realizing the shim wasn't generated). Additionally, this is not new to 6.9.2; I believe it's been going on since before 6.9, but I can't be certain. fc-ps-urd1-diagnostics-20210512-2016.zip
  14. Can confirm: a bad poweroff/restart through the watchdog on IPMI, or a loss of power, causes this for me as well. I've been experiencing it since before 6.9. Afterwards I cannot ping any docker container with a custom IP from the unraid server, but they can be reached from non-unraid devices on the network. A soft/graceful reboot gives no issue.