Interstellar

Members
  • Posts: 622
  • Joined
  • Last visited

  • Gender: Undisclosed


Interstellar's Achievements

Enthusiast (6/14)

Reputation: 3

  1. @bonienl Should I raise these bugs on GitHub for resolution? Still suffering from them...
  2. Sorry, I totally forgot about this, as the next day it suddenly started working with no changes by me other than booting it up again! If it happens again I'll run that command, thanks!
  3. I am also having this problem. bonienl, can you add some code that gives us the option to increase the PHP memory limit? This problem isn't going away: disks are getting larger and are therefore filling with more files. So there are two bugs with the script at the moment: 1. The SHA256 "Check Export" button does not work correctly. 2. The 128 MB PHP limit is too low (not strictly a bug, but the script doesn't work with it set at 128 MB, so it might as well be one). (A quick way to check/override the PHP CLI limit is sketched after this list.)
  4. There are hash files there and the Export function works. The Verify Export function does not work when the relevant disk is checked... "Finished - checked 0 files, skipped 0 files. Found: 0 mismatches, 0 corruptions. Duration: 00:00:00" So I cannot start a manual verify. An automatic verify does start the check, however.
  5. Still struggling to get this to do a manual verify after a disk rebuild. Export then Check Export just ends up with the following when disk 3 is selected... "Finished - checked 0 files, skipped 0 files. Found: 0 mismatches, 0 corruptions. Duration: 00:00:00" Obviously wrong. There is nothing in the logs either; I'm just trying to make it do an automatic daily check instead to see if that works... At the moment I do not believe this plugin works correctly, sorry bonienl. Edit: Automatic verify seems to work; the Check Export button does NOT work.
  6. Randomly hit by this today; nothing else of mine is suffering from this problem (Mac, RPis, Windows machines and indeed a VM on the NAS itself!). Similar issues seem to have been reported here:
     During startup you can see I can ping bbc.co.uk at a regular 1 second cadence, as expected. However, between 1650491180 and 1650491186 something happens that causes the ping's DNS resolution to slow to a crawl.
        [1650491160.677062] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=2 ttl=57 time=7.19 ms
        [1650491161.680240] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=3 ttl=57 time=7.59 ms
        [1650491162.680644] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=4 ttl=57 time=7.59 ms
        [1650491163.682572] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=5 ttl=57 time=7.72 ms
        [1650491164.684427] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=6 ttl=57 time=7.59 ms
        [1650491165.686395] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=7 ttl=57 time=7.63 ms
        [1650491166.687953] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=8 ttl=57 time=7.66 ms
        [1650491167.688993] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=9 ttl=57 time=7.81 ms
        [1650491168.692176] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=10 ttl=57 time=7.97 ms
        [1650491169.692828] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=11 ttl=57 time=7.70 ms
        [1650491170.695271] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=12 ttl=57 time=7.46 ms
        [1650491171.697118] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=13 ttl=57 time=7.77 ms
        [1650491172.698616] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=14 ttl=57 time=7.86 ms
        [1650491173.699806] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=15 ttl=57 time=7.86 ms
        [1650491174.699658] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=16 ttl=57 time=7.54 ms
        [1650491175.701393] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=17 ttl=57 time=7.55 ms
        [1650491176.704215] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=18 ttl=57 time=7.83 ms
        [1650491177.705895] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=19 ttl=57 time=7.73 ms
        [1650491178.707608] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=20 ttl=57 time=7.95 ms
        [1650491179.709268] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=21 ttl=57 time=7.81 ms
        [1650491180.710587] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=22 ttl=57 time=7.58 ms
        [1650491186.712848] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=23 ttl=57 time=7.68 ms
        [1650491188.485948] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=24 ttl=57 time=7.81 ms
        [1650491188.495136] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=25 ttl=57 time=7.68 ms
        [1650491194.497036] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=26 ttl=57 time=7.68 ms
        [1650491199.507572] 64 bytes from 2a04:4e42:200::81 (2a04:4e42:200::81): icmp_seq=27 ttl=57 time=7.85 ms
     If I decode the Unix time, this command is shown in the log:
        Apr 20 22:46:21 NAS emhttpd: shcmd (18): /etc/rc.d/rc.avahidaemon start
     If I then run:
        /etc/rc.d/rc.avahidaemon stop
     the ping behaviour goes back to how it is supposed to be. What is mDNS doing, and why is it causing ping name resolution to fall over? (This problem also prevented me from updating Docker containers, pulling apps, etc., and it happens in safe mode too. A quick way to compare resolver timing with avahi running and stopped is sketched after this list.)
     Edit: Happens on 6.9 and 6.10-rc3.
  7. I have now also come up against this problem. ping google.co.uk starts immediately and has a nice 1 second cadence; ping bbc.co.uk starts slowly and takes a good few seconds per ping; ping -n bbc.co.uk works as expected. I've narrowed it down to something around the mDNS stuff: before it is mentioned in the log everything is OK, but once it starts up (well before the array starts) it all falls over. What is UnRAID doing with mDNS? Edit: To clarify, SSH in and start the ping bbc.co.uk as soon as you can. It works OK for a few seconds, until something else is done during the startup and then it goes slow. Edit: Confirmed. Pings start going slow just as this happens: /etc/rc.d/rc.avahidaemon start. Running /etc/rc.d/rc.avahidaemon stop fixes it...
  8. Running with the VNC option as GPU #1 has been perfect. No lockups. Anyone have any ideas? Going to plug a keyboard, mouse and monitor in directly and play around next.
  9. Also now tried assigning more/less memory to the VM and changing vm.dirty_ratio and vm.dirty_background_ratio - no change, it crashes within an hour. Memtest86+ passed, and in any case it is ECC memory. Back on VNC without an HDMI plug there are no issues three hours later... Again there is nothing in the syslog saved to flash, nothing in an SSH "watch -n 0.1 tail /var/log/syslog", nor on the IPMI view - a total machine lockup. (Checking/setting those writeback sysctls is sketched after this list.) Attached diagnostics: cmdline.txt, btrfs-usage.txt, plugins.txt, motherboard.txt, iommu_groups.txt, ethtool.txt, folders.txt, lsmod.txt, lsscsi.txt, loads.txt, memory.txt, lspci.txt, meminfo.txt, lsusb.txt, urls.txt, top.txt, ps.txt, vars.txt, lscpu.txt, df.txt, ifconfig.txt, Windows 10 Gaming.txt, Windows 10 Gaming.xml
  10. Note: This isn't a bug report per se, more a thread looking for potential fixes. I've been struggling with this for a while and any 'solutions' I've found on the forums thus far have not been successful. Background: I've been passing through an RX480 without a dongle (i.e. with VNC enabled) absolutely fine for months (uptime of 40+ days at one point). However, now that I want to use it as a remote Windows workstation I've tweaked it to make Parsec work properly (4K HDMI dongle + Parsec). In that configuration it totally locks up the machine, with zero information displayed in the log or on the IPMI KVM view screen (it's a total and immediate lockup... which is useful). In addition, any attempt to close a Remote Desktop session also results in the machine totally locking up; a reboot is the only solution. The notes for changing the WDDM thing haven't made a difference (in any case an update KB is installed that allegedly fixes it). I have the AMD reset plugin installed and I'm running 6.10-rc2, and rc1 prior to that. Two cores + HT isolated (2, 3, 6, 7) assigned to the VM, along with 6144 MB RAM. Q35-6.1 (i440fx never seems to work for me; it gets stuck at the boot screen or Windows stops responding). OVMF TPM. USB controller (3 qemu XHCI). Windows 10 21H2 - totally 100% up to date, latest AMD drivers, latest VirtIO drivers, etc. I've now taken the dongle out and added the VNC server back in, and I'll test across the next two weeks (Parsec also disabled). Does anyone have any thoughts on what is causing UnRAID (or Windows to cause UnRAID) to completely lock up, either when closing a Remote Desktop connection or randomly at some point in time when there is no VNC enabled? If it wasn't for the fact that GPU prices are insane at the moment I'd pick up something newer, but alas we are where we are. I'll post the diagnostics plus information on other server configs when I have more time, but for now, if anyone has any ideas, please feel free to let me know and I'll try them! (A couple of quick passthrough sanity checks are sketched after this list.)
  11. Same - it used to be obvious how to do a manual verify, but it isn't now. The "Check Export" button does nothing --> "Check Export Finished - checked 0 files, skipped 0 files. Found: 0 mismatches, 0 corruptions. Duration: 00:00:00". How do I command a manual verify after a disk rebuild and upgrade?! (A manual sha256 spot-check is sketched after this list.)
  12. The update seems to go OK. The only problem I have (and it's been there for quite a while now, since 6.8.x IIRC) is that whenever I update and reboot, it gets stuck at a flashing "_". As I have to log in via IPMI to see the display, I don't know when this happens. Only doing a full power down and start up ensures it actually boots properly. Other than that, RC1 was a 40+ day uptime...!
  13. Same issues here. Any functionality that is based on this pop-up window does not work at all on iOS (Chrome or Safari), hence it's pretty useless as it stands!
  14. This is still a problem on iOS 15. I can't, for example, open any log page (scripts or system). This needs to be resolved; if it's a WebKit thing then open a normal page instead...
  15. Updated, and other than the reboot hanging at "_" (after it restarted, forcing a power off and power on), everything appears to be OK. TL;DR: No difference that I can tell from the previous update, other than the login button top right, which is fine!
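
For the PHP memory-limit issue in post 3, a minimal sketch of how to check and temporarily override the limit, assuming the plugin's hashing script can be run via the php CLI (the script path below is a placeholder, not the plugin's real location; if the script only runs under the webGUI's PHP, its own memory_limit setting applies instead):

    # show the memory_limit the PHP CLI is currently using
    php -r 'echo ini_get("memory_limit"), PHP_EOL;'

    # run a script once with a higher limit, without editing php.ini
    # (/path/to/plugin-script.php is hypothetical)
    php -d memory_limit=512M /path/to/plugin-script.php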
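
For the mDNS slowdown in posts 6 and 7, a quick way to compare resolver timing with avahi running and then stopped; only the stop command changes anything, and it is the same command quoted in those posts:

    # check whether the NSS resolver config lists an mdns module
    grep hosts /etc/nsswitch.conf

    # time a lookup through the same NSS path that ping uses
    time getent hosts bbc.co.uk

    # stop mDNS, then time the same lookup again and compare
    /etc/rc.d/rc.avahidaemon stop
    time getent hosts bbc.co.uk

    # ping -n skips name lookups for reply addresses, which is why it was unaffected
    ping -n -c 4 bbc.co.uk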
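
For the writeback sysctls mentioned in post 9, checking and setting them looks like this; the values are examples only (post 9 reports that changing them made no difference to the crashes):

    # read the current values
    sysctl vm.dirty_ratio vm.dirty_background_ratio

    # set lower thresholds for the current boot (not persistent across reboots)
    sysctl -w vm.dirty_ratio=10
    sysctl -w vm.dirty_background_ratio=5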
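
Two quick sanity checks for the passthrough setup in post 10 - neither is specific to UnRAID, they just read sysfs - to confirm the RX480 (and its HDMI audio function) sits in its own IOMMU group and that the cores given to the VM are actually isolated from the host scheduler:

    # list every IOMMU group and the PCI devices it contains
    for g in /sys/kernel/iommu_groups/*; do
      echo "Group ${g##*/}:"
      for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
      done
    done

    # show which CPUs the kernel has isolated (should list 2-3,6-7 if isolation is active)
    cat /sys/devices/system/cpu/isolated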
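
While the "Check Export" button is broken (posts 4, 5 and 11), a manual spot-check against an exported hash list can be done with sha256sum, assuming the export uses the standard "checksum  path" format; the export path below is a placeholder, not the plugin's actual location:

    # run from the disk root so relative paths in the export resolve
    cd /mnt/disk3
    # verify every file listed in the export and print only the failures
    sha256sum -c /path/to/disk3-export.hash | grep -v ': OK$'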