
About Goldfire

  • Birthday: June 19
  • Location: Sydney, Australia
  1. I'm using local with the Source Directory set to /mnt/user. I've also mounted an external drive via Unassigned Devices using the Path option. Within DirSyncPro, I have the /external mountpoint as my backup destination, which works fine. Unfortunately, that didn't have any positive effect: when attempting to "Analyze this job now", no files with Japanese/Chinese Unicode characters are detected. I've also tried deleting the job and remaking it. Other English-named files within my test folder are detected. Take your time though, I appreciate you looking at this for me.
  2. Sorry for the delay, different timezones and all of that. The Windows version of DirSyncPro works fine with Japanese characters; the small set of files I was using for testing works there. Ah sorry, I was (poorly) assuming that other languages were also affected. Chinese (Simplified) is also affected - I only have a handful of files with Chinese naming, but it would've been an annoyance if I'd needed to restore those files at a later date and they weren't there, ha.
  3. @ich777 You may not be able to help with this because the docker itself isn't at fault (at least I don't think it is), but if you could poke around with it, I'd appreciate it. There doesn't seem to be any support for Japanese Unicode (Kanji/Katakana/Hiragana): when a backup is running, DirSyncPro will simply skip over these files. It seems anything that isn't English won't be backed up. For example, this filename won't be backed up: リライアンス. I've left a bug report on their SourceForge as well in hopes of this being fixed.
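For anyone digging into this later: a common cause of Java applications silently skipping non-ASCII filenames is the container running under a POSIX/ASCII locale rather than a UTF-8 one. That's an assumption on my part, not something confirmed for DirSyncPro, but the decoding failure itself is easy to demonstrate in a few lines of Python:

```python
# Sketch: the bytes of a Japanese filename decode fine under UTF-8 but
# fail under an ASCII locale - one way files end up silently skipped.
name = "リライアンス"          # the filename from the report above
raw = name.encode("utf-8")     # the bytes as stored on disk

def decodable(data: bytes, encoding: str) -> bool:
    try:
        data.decode(encoding)
        return True
    except UnicodeDecodeError:
        return False

print(decodable(raw, "utf-8"))  # True  - a UTF-8 locale sees the file
print(decodable(raw, "ascii"))  # False - an ASCII locale cannot
```

If this is the cause, setting a UTF-8 locale (e.g. `LANG=en_US.UTF-8`) inside the container would be the thing to try.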
  4. Hopefully someone can help me with this one. I have an older, but reliable, APC UPS SU1400RMXLI3U with an AP9617 SmartSlot card. The UPS is correctly detected using the PCNet option with the correct username/passphrase (omitted since it's an actual password), and all the other tech info below is also displayed correctly. As unRAID shuts down due to power loss and the batteries hitting the ~85% threshold, it will not put the UPS to sleep or shut it down after the expected delay has elapsed. I have the UPS set to turn back on after power has been restored and there is at least a 50% charge. I've ensured that DSHUTD is set correctly, and I've also attempted to use SNMP as an alternative (with the same results). Instead, I receive this error just before the server shuts off (quickly captured via the IPMI). I believe I've only ever seen this procedure work correctly once over the last couple of years of extended power outages, with the most recent ones not behaving as expected. Any suggestions? yamato-diagnostics-20190613-0515.zip
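In case it helps anyone comparing configs: the apcupsd directives relevant to a PCNET setup look roughly like this. This is a sketch with placeholder values - the real IP/username/passphrase belong to your card, and I haven't confirmed these exact settings fix the power-off behaviour:

```
# /etc/apcupsd/apcupsd.conf - PCNET sketch, placeholder values
UPSCABLE ether
UPSTYPE pcnet
DEVICE <ups-ip>:<username>:<passphrase>   # credentials configured on the AP9617
BATTERYLEVEL 85   # begin shutdown once charge drops to 85%
MINUTES 0         # disable the runtime-remaining trigger
KILLDELAY 0       # 0 leaves UPS power-off to the DSHUTD delay on the card
```

With PCNet/SNMP cards the actual UPS power-off is normally driven by the card's own DSHUTD timer rather than apcupsd, which is why the DSHUTD value matters here.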
  5. Updated and you nailed it, it's working perfectly. Thanks for your support, I appreciate it. It's my payday on Friday
  6. I received the update and it's working. However, I have an odd problem where the log file stops updating after midnight; I'm unsure if this is a limitation of the container or of the Docker engine itself - I can still see the new entries in the logs\latest.txt provided by the Minecraft server itself. Is that something that can be fixed with an update?
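For anyone else hitting this: Minecraft rotates logs/latest.txt at midnight (the old file is renamed away and a new one is created), so anything holding the original file handle stops seeing new lines. My assumption is that the container's log follower isn't reopening the file after rotation. A rotation-aware tail can be sketched like this (generic Python, not the container's actual code):

```python
import os
import time

def follow(path, poll=1.0):
    """Yield lines from `path`, reopening the file when it is
    rotated away (as Minecraft does to logs/latest.txt at midnight)."""
    f = open(path)
    inode = os.fstat(f.fileno()).st_ino
    while True:
        line = f.readline()
        if line:
            yield line
            continue
        try:
            if os.stat(path).st_ino != inode:  # a new file replaced ours
                f.close()
                f = open(path)
                inode = os.fstat(f.fileno()).st_ino
                continue
        except FileNotFoundError:
            pass  # mid-rotation: old file renamed, new one not created yet
        time.sleep(poll)
```

Comparing inodes (rather than file size) is what catches the rename-and-recreate pattern Minecraft uses.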
  7. Cool thanks, I'll let that update via the CA Auto Update tomorrow morning. Appreciate it.
  8. @ich777 Thank you very much for making the Minecraft server docker for us; I'll be sure to make a donation when I can spare some money myself. I got it working fine with Forge and ~40 mods. Is there a way to show the server console through either the docker console or the log view in unRAID - other than looking at logs\latest.txt and refreshing it, of course? I'd like to be able to keep an eye on info/warn/error entries, such as people joining/leaving, the "server is overloaded" messages, etc.
  9. Here's my follow-up:
  • Changing the BIOS of my VM with the passthrough GPU from OVMF to SeaBIOS made no change at all
  • Using Q35 instead of i440fx made it feel a bit "snappier", but it didn't improve the situation
  • Updating unRAID to 6.5.1-rc6 didn't help
  • Removing the auto restart of the VM (a simple batch file on Task Scheduler in Windows 10) has stopped unRAID from hanging on occasion
  • Restarting the VM manually can also cause unRAID to hang; out of ~10 restart attempts, unRAID would hang after anywhere between 3-5 restarts
With the above info, it seems to be completely random when unRAID decides to hang. Having the VM automatically restart via the batch file at 6am was the main cause of me losing unRAID overnight. Now I'm reluctant to even restart the VM; I've disabled Windows Update altogether so it can't automatically restart (I'll manually update once a month for the cumulative updates). I'll just have to use it as is - at least I won't have to worry too much about waking up in the morning to find unRAID dead in the water. I'm disappointed that I can't reliably restart a VM (even manually), but I'm also disappointed in the lack of support from the forum when I'm basically high and dry.
  10. That didn't work while running 6.5.1-rc5 - it didn't even last 48 hours. I can't provide diagnostics for obvious reasons. The only thing I can think of is the VM restarting each day, but if that were the case, I'd imagine it would hang the server every day the VM restarts. Besides, I've had the server hang at various times throughout the day, even when the VMs aren't restarting. I've also tried both the i440fx and Q35 machine types, with the same outcome. If it is the VM restarting that's causing the issue... what good is that if I can't simply restart VMs without the server hanging? So, now I'm here with another ~2-3 days of runtime before it inevitably happens again. What are my other options? Any other input from others? EDIT: After a bit more searching, it seems that others with this issue switched to SeaBIOS instead of OVMF and had better results; I'll attempt that when the VM isn't in use and give a follow-up.
  11. I just realised that I typed 6.5.1-rc5 in my original post, I meant rc2 (I used the numpad to type that and must've missed the 2 - I've edited my original post) Alright, I'll give that a go and report back. Thank you for the reply.
  12. Hi everyone, I'm back again with trouble since my last topic and my remaining patience is just about gone.

tl;dr - Ever since updating to 6.4 back in January 2018, my server has had trouble ranging from VM stutter (now seemingly fixed after changing my append) to completely hanging and not accepting any input every 24-72 hours - my record uptime in this state is ~four days.

My VM stutter was "fixed" by adding pti=off to my append line at startup - I'm aware of the risk involved, but I'd almost been pulling my hair out from frustration, and this relieved the stutter. The hanging issue still remains. As before, I still require vfio_iommu_type1.allow_unsafe_interrupts=1 for my passthrough devices to be visible to VMs, and I've always left isolcpus=4,5,10,11 as is so unRAID doesn't interfere with the VMs. Most of the hardware has not changed since the start of the year. The USB drive was changed due to testing in my old topic; I was not able to continue using my old USB drive as it was blacklisted.

So, onto the "new" problem. Specs:
  • Sandisk Facet 8GB USB drive for unRAID 6.5 stable (also tried 6.5.1-rc2 with the same outcome, but reverted back to 6.5 to reduce the complexity of the issue)
  • SuperMicro X8DTH-6F (latest BIOS update 2.1b installed) + 1x Xeon X5670
  • 32GB Hynix ECC RAM HMT31GR7BFR4C-H9 - on the Supermicro list of qualified memory
  • Onboard SAS2008 and 1x Dell H310 - both flashed to IT mode, with 3x breakout SATA cables
  • 4x 4TB WD Red drives - one of them as a single parity
  • 6x 3TB WD Red drives
  • 1x 250GB Samsung Evo SSD for cache
  • 1x 120GB Corsair Force SSD for VMs - 2 VMs, one called "HTPC" and the other "Haruna", both running Windows 10 x64 LTSB and up to date
  • GT710 passed through to the HTPC VM (only this VM restarts at 6am every day, to ensure updates are applied while it's unused and avoid interruptions during video playback later in the day)
  • 2x Hauppauge WinTV-HVR-2200 TV tuners passed through to the "Haruna" VM

Dockers:
  • Duckdns (1GB of RAM assigned)
  • Jackett (1GB)
  • LetsEncrypt (autostart is off, only run once every few months)
  • Plex (4GB)
  • qBittorrent (4GB)
  • Radarr (4GB)
  • Sonarr (4GB)

The amount of RAM assigned to the dockers did not change the outcome of the server hanging; at first I believed the server was simply running out of memory, but that wasn't the case. RAM usage is ~40% with average CPU usage ~10% most of the time, obviously depending on load from Plex etc.

Since my topic back in January, I've done:
  • Memtest - comes back without any errors after ~24 hours of testing, at which point I stop the test as I really need my server
  • Checked and changed multiple values in the BIOS relating to C-States and RAM (also tried failsafe and optimal defaults)
  • Forced the dockers to only have a set amount of memory each in case I was getting OOM errors (ranging from 1GB - 4GB, listed above)
  • Tips and Tweaks: vm.dirty_background_ratio and vm.dirty_ratio set to 2% and 3% respectively; also tried 1% and 2% respectively (this was set back in the 6.3.5 days)

I can't effectively attempt to use Safe Mode on unRAID as I need the Unassigned Devices plugin to run my VMs from the Corsair SSD - but I'll attempt to run the VMs from the cache drive as troubleshooting if it's really needed.

This is the console view after the server has seized up (screenshot from the IPMI interface): I cannot input anything with the virtual keyboard, network shares are not accessible, web management is not accessible and times out, and while the VMs appear to be running, they also hang after a short time. My only "fix" for all of this is to hard reset the server via the IPMI management and let it start up again for another few days of use. I was hoping it would hang at a consistent time so I could at least narrow it down, but that wasn't the case. The times at which the server hangs vary greatly; there is no set or consistent time, and it doesn't always coincide with an event such as Plex usage, the Mover kicking in, or an update for dockers etc. I've included the tail log and three of the last diagnostics before my latest hang (please let me know if more info is required). It seems the latest hang was around 6am this morning (which coincides with the HTPC VM restarting) - although, the two previous days, the VM restarted without issue. As noted above, the hang is not consistent.

I was attempting to fix the problem myself rather than bother people on the forums, but I'm now at a dead end after ~three months of trying. Please help me out before I really start regretting going with unRAID. During the 6.3.5 days, everything was rock solid with none of the above changed, and the only time the server came down was due to an extended power outage that my UPS couldn't last through. (You'll notice that the logs show my cache as being full around the 7th of April; this was the case during a day of heavy downloading and transfers to the cache drive. It does not have any bearing on the frequency of the hanging, as that happens randomly during any state of the server.)

FCPsyslog_tail.txt yamato-diagnostics-20180411-0441.zip yamato-diagnostics-20180411-0511.zip yamato-diagnostics-20180411-0541.zip
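For reference, the append line described above lives in syslinux/syslinux.cfg on the flash drive. Mine looks roughly like this - a sketch, where the label/kernel/initrd lines are the stock unRAID ones, and pti=off carries the Meltdown-mitigation risk already mentioned:

```
label unRAID OS
  menu default
  kernel /bzimage
  append pti=off vfio_iommu_type1.allow_unsafe_interrupts=1 isolcpus=4,5,10,11 initrd=/bzroot
```

isolcpus keeps unRAID's own processes off cores 4,5,10,11 so the VMs pinned there aren't interfered with.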
  13. Thanks for the reply. Hm, isn't that simply for clock sync? Unless I'm missing something here. I saw other people having issues with this plugin recently as well, so I uninstalled it before making this thread; I'm unsure why it's still floating around in the logs. But... that plugin also wasn't running on the fresh copy that I tested (as far as I know) - I only had Unassigned Devices installed.
  14. Hi everyone, hopefully I'll cover everything because this is really starting to frustrate me.

tl;dr - Updating to 6.4 causes a VM to stutter like a bitch regardless of the content that is viewed on it.

I've been running unRAID since early last year and it's been great - no issues other than a few minor problems that were ironed out with MSI fixes for VMs and throwing in a new GPU to replace a failing one. I wanted to update to 6.4 to cover the Meltdown CPU exploits etc. and thought everything would go fine, but I've got nothing but problems now.

Quick specs:
  • Sandisk Ultra 16GB USB drive for unRAID 6.4 stable
  • SuperMicro X8DTH-6F (latest BIOS update 2.1b installed) + 1x Xeon X5670
  • 32GB Hynix ECC RAM
  • Onboard SAS2008 and 1x Dell H310, both flashed to IT mode, for controllers
  • 4x 4TB WD Red drives
  • 6x 3TB WD Red drives
  • 1x 250GB Samsung Evo SSD for cache
  • 1x Corsair Force SSD for VMs (2 VMs, one called "HTPC" and the other "Haruna")
  • nVidia GT710 passed through to the "HTPC" VM
  • 2x Hauppauge WinTV-HVR-2200 TV tuners passed through to the "Haruna" VM
  • 2x diagnostics files from my normal 6.4 and a fresh copy of 6.4 attached below

The two VMs both run Windows 10 x64 LTSB 2016 and are up to date - they both perform as per usual in terms of speed. The first VM (Haruna) runs fine; I use RDP to operate it and run Windows-only tasks that won't run in a Docker, for example - so I'm unsure if it shows the same problems as the second VM, due to the lack of audio and GPU video (audio redirect is disabled). The second VM (HTPC) is where all of my troubles are. I mostly use Kodi on this VM and some other light tasks. Intermittently, the VM will completely freeze for a few seconds, which causes a large amount of audio and video stutter - making watching any content a painful experience. This isn't the same "demonic sound" as running without the MSI fix, though I double-checked to make sure it's enabled anyway.

I've tried the following:
  • Variations of Q35 and i440fx, including newer and older versions
  • Made sure all device drivers are up to date, such as the VirtIO drivers and especially the nVidia drivers
  • Removed some unneeded plugins and made sure all of the others are up to date, including dockers
  • Tried disabling the C-States in the BIOS; if I leave them enabled, the server will seize up, which requires a forced restart (the built-in watchdog on the BIOS will also force a restart)

Since the very start, I've always needed vfio_iommu_type1.allow_unsafe_interrupts=1 appended to my syslinux.cfg in order to pass through the PCI-e devices - I assume this is because of the dated hardware, and I'm unsure if it has any bearing on any of this. As a last-ditch attempt, I even tried a new, fresh copy of 6.4 on a new, fresh USB drive - I only added Unassigned Devices as a plugin to mount the VM SSD. I've attached the diagnostics of the fresh copy as well (yamato-diagnostics-20180119-0558). This was all rock solid prior to the 6.4 update. Before I start losing sleep over this one and/or roll back to 6.3.5, are there any suggestions from the pros? Thanks in advance.

yamato-diagnostics-20180119-2347.zip yamato-diagnostics-20180119-0558.zip
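For anyone searching later: the "MSI fix" mentioned above is the usual Windows registry tweak that enables message-signaled interrupts for the passed-through GPU (and its HDMI audio function). The device instance path below is a placeholder - you have to locate your own device under the Enum\PCI key:

```
Windows Registry Editor Version 5.00

; <device-instance-id> is a placeholder - find your GPU/audio device under
; HKLM\SYSTEM\CurrentControlSet\Enum\PCI and use its full instance path.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\<device-instance-id>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001
```

A reboot of the VM is needed after applying it; this is what cures the "demonic sound" audio corruption under passthrough.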