Everything posted by mortenmoulder

  1. Is it possible to tweak the compression level of zstd? I don't care if my backup takes 5 minutes or 30 minutes, if that saves me a couple of gigabytes per backup.
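     For reference, this is what the level knob looks like on the standalone zstd CLI; whether unRAID exposes a way to pass it through to its backup is exactly what I'm asking. The file names below are just examples.

        # default is level 3; higher levels trade CPU time for a better ratio
        zstd -19 -T0 backup.tar -o backup-19.tar.zst
        # levels 20-22 additionally need --ultra and a lot more memory
        zstd --ultra -22 -T0 backup.tar -o backup-22.tar.zst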
  2. Not sure how a frontend variable would change in safe mode. I guess I can give it a try. Making it a constant could be a proper workaround. The object is never overwritten, so making it a constant would prevent it from ever being null. You can still add properties to an object in JavaScript even when it's declared as a constant; const prevents reassignment, not mutation. Weird, I know.
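     To illustrate (hypothetical property name, but the same shape as the timers object in the webGui):

        const timers = {};    // the binding can't be reassigned
        timers.cpuload = 42;  // fine: const freezes the binding, not the object
        timers = null;        // TypeError: Assignment to constant variable.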
  3. Original post: On 6.10.0-rc2. No clue why, but at some point the variable "timers" is being set to null. I tried clearing everything in Chrome and I tried incognito, but at some point "timers" is null. The fix is quite easy: instead of using "var", use "const". We know it's not supposed to be overwritten, so just set it to a constant. File location: /usr/local/emhttp/webGui/include/DefaultPageLayout.php
  4. I recently updated from 6.9-beta35 to 6.10.0-rc2, and I noticed that my desktop PC cannot open some tabs, like the Docker tab. My desktop's connection is routed through the unRAID server (bridged via network settings), and I'm using a 10 Gbit NIC in both my desktop and the unRAID server. I can easily connect from my phone or other devices that are not bridged through unRAID. I also see errors in the browser console. I have attached the diagnostics file. EDIT: Seems like Chrome is the culprit here, even though I cleared everything and even tried incognito. Edge works fine, and I don't get those console errors in Edge. EDIT 2: I found a fix, but it's really ugly. I edited this file: /usr/local/emhttp/webGui/include/DefaultPageLayout.php and changed var timers = {}; to const timers = {};. For some reason, something is setting the timers variable to null. No clue why. It should be a constant either way, so I guess this is good for now. littlefinger-diagnostics-20220215-1706.zip
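     If anyone wants to apply the same ugly workaround from a terminal, a one-liner along these lines should do it. Keep a copy of the original, and note that as far as I know /usr/local/emhttp is unpacked into RAM on unRAID, so the edit won't survive a reboot.

        # back up the original, then swap the declaration in place
        cp /usr/local/emhttp/webGui/include/DefaultPageLayout.php /tmp/DefaultPageLayout.php.bak
        sed -i 's/var timers = {};/const timers = {};/' /usr/local/emhttp/webGui/include/DefaultPageLayout.php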
  5. Thanks for the reply. I have not tried LanSync, but I might give that a shot. Should it get its own LAN IP via DHCP then? Because that would be interesting indeed. I have a 10 Gigabit link between my server and PC, so that should go quite fast. Makes sense you've moved to OneDrive. I'm considering the same, except maybe using Mega. Dropbox is quite expensive, but their desktop app and mobile app just works flawlessly for me (auto upload literally 5 seconds after I've taken a 30 MB photo). Thanks for the support on this!
  6. Very well written. I gave it a go and deselected a few containers that I know need to run 24/7. Took a backup this morning at 6 AM and it worked perfectly. Thank you! And wow it's fast!
  7. I don't think I've ever used a more buggy application before. It constantly tells me to reauthenticate/allow access to Dropbox. Sometimes it says "Dropbox isn't running", when I know for a fact it ran perfectly fine just a couple of hours ago. If I restart the container, it literally takes hours to finish syncing, because I have a lot of data in my Dropbox. And yes, I did set it up correctly, by selecting a share with the cache set to Only or No. I know some of the blame is on Dropbox's side, but man... I wish this worked perfectly without any issues whatsoever.
  8. The only reason I haven't used this plugin yet is that it stops my Docker containers. It's such a shame that has to happen. I have people accessing my Docker containers pretty much 24/7, so I can't just shut everything down every night for a few minutes. I really wish it were possible to back up the data in my appdata directory without stopping Docker. Guess I have to figure something out myself that doesn't require Docker to stop.
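     The naive version of "figure something out myself" would be a live rsync copy of the share while the containers keep running. To be clear, that is only crash-consistent: anything like a database can come out of it in an inconsistent state, and the paths below are just examples.

        # copy appdata while containers stay up; repeated runs only transfer changes
        rsync -a --delete /mnt/user/appdata/ /mnt/user/backups/appdata/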
  9. Unfortunately my script relies on unRAID, yep, but I don't like your approach. I actually used your approach for a couple of days, but I just couldn't rely on it (no offense). I much prefer creating an archive, adding the files to it, and then moving that to Dropbox. Since unRAID has a built-in function that does literally that, I think it's okay to rely on it. I did add a check to see if the script exists, but creating a notification is also flawed, as the notify script could move to a different location too. Nothing I can do about that, unfortunately. The only thing you need to do after updating unRAID is check whether the script still works. Otherwise it shouldn't change.
  10. It's just a personal preference. It does too much: it warns you that your Docker containers, VMs, and so on all need to be stopped. If all I want is a /boot/ backup, it's pretty much overkill. I know other people have said similar things, which I discovered while searching for good ways to back up /boot/. We each have our opinions about these things, fortunately. Great that Limetech is working on it. Built-in S3 off-site backup that you can run on a schedule would be very nice. I know it's not top priority, but it's definitely up there in my opinion.
  11. I've explored the possibility of calling unRAID's already built-in "backup to ZIP" function, and I've made a shell script that calls said "backup to ZIP" script and then moves the ZIP file to a directory. In my case, I have a Dropbox Docker container running, so I just move the ZIP file there, and then I have a backup of my /boot/ directory exactly the way unRAID does it. I couldn't find a lot of information about backing up /boot/, except for the CA backup plugin, which I don't like, so I made the shell script from scratch (I am a web developer, so bear with me). Suggestions are welcome. I think I made it as "noob friendly" as I possibly could. Just create a new User Script, paste in the script below (newest revision), change the path of where you want your ZIP file to end up, and then set a custom schedule using cron (mine runs every 12 hours).
      Rev. 001: https://pastebin.com/Jc8wh57P
      Rev. 002: https://pastebin.com/3udPK7w8
      Rev. 003: https://pastebin.com/mBYQqae2
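     For anyone who just wants the shape of it before opening the pastebins, here is a minimal sketch of the idea. The script path, the assumption that it prints the name of the ZIP it creates, and the destination share are all things to verify on your own unRAID version; the revisions above are the real thing.

        #!/bin/bash
        # unRAID's built-in "backup to ZIP" script -- verify the path on your version
        BACKUP_SCRIPT=/usr/local/emhttp/webGui/scripts/flash_backup
        # Dropbox-synced share to drop the archive into -- example path
        DEST=/mnt/user/dropbox/unraid-flash

        # The built-in script could move between unRAID releases, so check first
        if [ ! -x "$BACKUP_SCRIPT" ]; then
            /usr/local/emhttp/webGui/scripts/notify -s "Flash backup" -d "flash_backup script not found" -i "alert"
            exit 1
        fi

        # Assumption: the script prints the file name of the ZIP it creates
        ZIP=$("$BACKUP_SCRIPT")
        mv "$ZIP" "$DEST/"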
  12. Use Method 2 and use VNC in your browser. Don't pass anything through until you're actually inside macOS, as I bet any interference could stop it from working.
  13. I see that SpaceinvaderOne just commented on a pull request to fix said issue: https://github.com/SpaceinvaderOne/Macinabox/pull/34 He said it has been merged. Can you try again? If it's merged, it should pull the latest code and should fix your issue.
  14. Do you have the latest unRAID version? The requirement is "unRAID Version: 6.9.0-beta35 or above".
  15. Do you have the helper script and does it report that it has finished downloading? Try deleting everything in the folders you see the Docker container points to locally, if they are associated with the Big Sur or Catalina updates. For Big Sur I am pretty sure you need Method 2, because Method 1 downloads Catalina regardless of what you pick.
  16. I tried it, but unfortunately it does exactly what regular VNC software does. I cannot get more than one screen working.
  17. So basically, if I have 3 monitors on my Windows rig, I need to VNC into the machine 3 times with 3 dummy plugs? Yeah, nah. I'll just stick with 1920x1080 and consider macOS a bad OS. I can't believe someone hasn't made a remote desktop server (not VNC) that automatically creates X virtual screens depending on how many monitors the connecting computer has.
  18. The 3rd party software options do not, as far as I know, replicate the native Screen Sharing capabilities in macOS. None of the VNC options I've tried span across the monitors. The resolution is fixed at 800x600 with no monitor plugged in, or at 1920x1080, which is the resolution of the monitor I plugged into the GPU that I passed through to the VM.
  19. Dang, that sucks. I really wanted an almost true 1:1 feel of macOS while sitting at my Windows rig. There HAS to be someone who has developed 3rd party software for this. It makes no sense that there isn't.
  20. May I ask how that's working for you? Can you get it to create virtual monitors for you? I run Windows on my main rig, and I want to connect to my Big Sur VM with either RDP or VNC, but I want it to span across all my monitors in the correct resolution. So far it seems like that's not possible with macOS.
  21. I use an ASRock B550M Pro4 motherboard and I am using the onboard ethernet controller. No external one.
  22. Did you try https://dortania.github.io/OpenCore-Post-Install/universal/iservices.html#fixing-en0 as I linked to? The NullEthernet patch was the one that made it work for me.
  23. Didn't see your reply before I replied with an update. Check out the post right below yours - the fix actually works!
  24. I got iCloud and iMessage to work by reading the link in this post: https://dortania.github.io/OpenCore-Post-Install/universal/iservices.html#fixing-en0
  25. Unfortunately that didn't work. I went this route instead, and that worked flawlessly for me: https://dortania.github.io/OpenCore-Post-Install/universal/iservices.html#fixing-en0 I knew my serial number was good to go, so I went on to install the NullEthernet patch (the kext and compiled AML file). I added them in OpenCore Configurator, then added the files to the Kext/ and ACPI/ folders in the EFI partition, rebooted the VM, and it magically worked. Both iCloud and iMessage now work!