MikeAH

Members
  • Posts: 17

About MikeAH

  • Birthday 02/27/4
  • Gender: Male
  • Location: New York, New York



  1. Curious if anyone has established which versions of the various Binhex containers don't have the affected xz packages. In the meantime I plan to roll back to older versions of the Docker containers until xz has moved beyond this version or been removed from the affected packages.
     Edit: I've added a few of the Binhex Docker containers I found that shouldn't be vulnerable at this version of xz. Hope this helps!
       sabnzbd 4.2.2-1-01
       privoxyvpn 3.0.34-1-08
       prowlarr 1.13.3.4273-1-01
       plexpass 1.40.1.8227-1-01
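If you want to check a running container yourself, `docker exec <container> xz --version` prints the installed version; the backdoored releases (CVE-2024-3094) were 5.6.0 and 5.6.1. A minimal sketch of the check (the function name and the `sabnzbd` container name in the example are mine, not from the containers themselves):

```shell
# Sketch: flag the xz releases affected by CVE-2024-3094 (5.6.0 and 5.6.1).
# Pass in the version number reported by `xz --version`.
is_vulnerable_xz() {
  case "$1" in
    5.6.0|5.6.1) echo "vulnerable" ;;
    *)           echo "ok" ;;
  esac
}

# Example (container name is illustrative):
#   is_vulnerable_xz "$(docker exec sabnzbd xz --version | awk 'NR==1 {print $NF}')"
```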
  2. What are the specs of the machine you're running it on, and is it consistently choppy or only in the sliced preview? FWIW, with the default configuration this is rendered using only your CPU, which will generally be quite slow when slicing. For me, the non-sliced preview is super smooth, but when I slice it's a bit lethargic. Still usable for me, though.
  3. Disregard that, there was indeed a bug, thanks for catching it, @windlok1010. It's in how PrusaSlicer handles stable/beta versioning on Linux: the beta uses .config/PrusaSlicerBeta, stable uses .config/PrusaSlicer, and the script I use to pull the latest version can fetch either of them. If there is a strong sentiment against using the beta versions, let me know and I'll regex beta versions out of the pool. Going forward, I've modified the Docker container to forcefully set the data directory to /configs/.config/PrusaSlicer, and I pushed the latest changes to GitHub as of 2 minutes ago. This means that if you were using this container successfully before, you might need to move .config/PrusaSlicerBeta to .config/PrusaSlicer.
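For anyone migrating by hand, here is a minimal sketch of that move (the function name is mine; the paths are the ones named above, and it only migrates when a stable config doesn't already exist, so nothing gets clobbered):

```shell
# Sketch: move the beta config dir to the stable path the container now forces.
# Takes the config volume root (e.g. /configs) as its argument.
migrate_prusaslicer_config() {
  cfg="$1/.config"
  # Only migrate if the beta dir exists and the stable dir does not.
  if [ -d "$cfg/PrusaSlicerBeta" ] && [ ! -d "$cfg/PrusaSlicer" ]; then
    mv "$cfg/PrusaSlicerBeta" "$cfg/PrusaSlicer"
  fi
}

# Example: migrate_prusaslicer_config /configs
```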
  4. That's great to hear! Thanks for confirming that the fix resolved your issue, and let me know if you have any feedback or suggestions.
  5. I just made another push to Docker Hub and GitHub with a fix for the /configs/.config/ directory not existing on fresh installations. If this was the issue you were hitting, it's since been fixed. I tested on two devices with brand-new Docker volumes, both with success, so 🤞. Appreciate the kind words and support!
  6. Just checked both my unraid servers (one I imaged fresh last year and one that I imaged about 6 years ago). They both had macOS Interoperability disabled by default, so hopefully that's the case for most default unraid configurations. I'm not sure, though, and it's definitely worth checking whether disabling it removes the risk of the known vulnerability.
  7. Docker: https://hub.docker.com/r/mikeah/prusaslicer-novnc
     GitHub: https://github.com/helfrichmichael/prusaslicer-novnc

     Overview

     I recently started to get slightly frustrated with all of my devices holding the various STLs (the model files used for 3D printing) that I am printing or want to print. I decided: why not use unraid to host a share and serve PrusaSlicer via a Docker container? I have 3 printers, so sometimes I want to re-slice a model for another one of my printers or another filament type. I started my search in the Community Apps plugin and was sad to discover "No results found" when I searched for "prusaslicer", but luckily I landed on a few Docker containers with built-in VNC support. A few of them worked well, but I found them overly complex for the task at hand. In my approach I simplified the Docker container as much as possible and added some quality-of-life changes to keep PrusaSlicer happy and healthy. I was able to get this template added to the Community Apps plugin, and I've also provided manual installation instructions below. Feel free to add it and let me know if you run into any issues. From my testing so far, it runs really smoothly in the browser and retains its configuration properly through image wipes, etc. Thanks, and I hope you all like it!

     Installation guide

     Using Community Apps, search for prusaslicer-novnc and it should return the MikeAH's Repository offering.

     Manual installation process:
       1. Open the Docker page on your unraid instance.
       2. Scroll to the bottom and add the URL below to the Template repositories textarea:
          https://github.com/helfrichmichael/unraid-templates
          (FYI, this will add all of my unraid Docker templates as I add more. If you only want the prusaslicer template, add /prusaslicer to the URL above and that should work.)
       3. Click ADD CONTAINER.
       4. Select prusaslicer-novnc from the menu.
       5. Set the various environment variables. By default I use /mnt/cache/appdata/prusaslicer for the /configs/ directory. I also passed in an unraid share for my STL and GCODE files by adding a path for /prints/.
       6. Click APPLY or DONE. The container should hopefully spin up and build successfully.
       7. You can now access your PrusaSlicer instance at http://UNRAID_IP:6080 (or, if you changed the port, use that port; UNRAID_IP is your unraid host IP).

     Accessing the web interface

     Once installed, visit http://UNRAID_IP:6080 (UNRAID_IP will be your unraid host IP, something like 192.168.1.92 for example).

     Warnings and Best Practices

     This uses VNC and noVNC to provide access to a Dockerized instance of PrusaSlicer. In the default state, there is NO AUTHENTICATION whatsoever. This means any device on your local network will be able to access this container unless you've locked it down or provided some network-level security. One method for securely accessing it remotely could be to use Argo Tunnel (Cloudflare Tunnel) and Cloudflare Teams to provide remote access with an authentication layer. This would also, in theory, allow you to connect each of your printers in PrusaSlicer to your OctoPrint instances and seamlessly print from PrusaSlicer directly to your printer.

     Screenshot(s)
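Outside of unraid, the same container can be started with plain Docker. The invocation below is my best guess at an equivalent of the template: the image name and port 6080 are from the post, but the host-side paths are placeholders you should adjust (and the exact volume mounts are worth double-checking against the GitHub README).

```shell
# Rough docker run equivalent of the unraid template (host paths are examples).
docker run -d \
  --name prusaslicer-novnc \
  -p 6080:6080 \
  -v /mnt/cache/appdata/prusaslicer:/configs \
  -v /mnt/user/prints:/prints \
  mikeah/prusaslicer-novnc
```

After it starts, browse to http://UNRAID_IP:6080 as described above.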
  8. Amazing! That worked. However, I did this via the VM settings, since I still had access to the VM settings page, and I ensured that the domain.cfg file updated as well. After making the modification and rebooting, things are back up and running. It looks like a few people who moved to 6.5.0 ran into this. Was the culprit MEDIADIR (mine was pointing at /mnt) or VIRTIOISO (it was pointing to virtio-win-0.1.102.iso)? Thanks so much, Squid. I really appreciate your help.
  9. Does it make more sense to just back up the USB stick and restore the files per http://lime-technology.com/wiki/index.php/Files_on_v6_boot_drive on a fresh instance of unRAID? Is that documentation up to date (it was last modified on September 17, 2016)? The fact that files went missing during an upgrade concerns me about the overall sanity of the system currently.
  10. Sure! I've attached a diagnostic set that I just generated; let me know what further action to take. Looking at the VM settings, the current expected location of libvirt.img is "/mnt/user/system/libvirt/libvirt.img". unraid-diagnostics-20180411-2011 (1).zip
  11. Hi all, I just upgraded to 6.5.0 and upon upgrading noticed my VM tab was completely empty. I've listed my troubleshooting below; additionally, I have diagnostics from before and after if anyone is interested.
        • Checked the VMs tab and saw nothing.
        • Opened the settings in the Web UI and ensured that VMs were enabled (they were).
        • SSH'd into the server and ran 'virsh list --all' and got the following response:
            error: failed to connect to the hypervisor
            error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
        • Then I ran 'find /mnt -name libvirt.img' and found nothing...
        • When I try to start libvirt via '/etc/rc.d/rc.libvirt start' I get the following:
            Starting virtlockd...
            2018-04-12 00:23:14.071+0000: 19684: info : libvirt version: 4.0.0
            2018-04-12 00:23:14.071+0000: 19684: info : hostname: unraid
            2018-04-12 00:23:14.071+0000: 19684: error : main:1216 : Can't load config file: Failed to open file '/etc/libvirt/virtlockd.conf': No such file or directory: /etc/libvirt/virtlockd.conf
            Starting virtlogd...
            2018-04-12 00:23:14.082+0000: 19687: info : libvirt version: 4.0.0
            2018-04-12 00:23:14.082+0000: 19687: info : hostname: unraid
            2018-04-12 00:23:14.082+0000: 19687: error : main:982 : Can't load config file: Failed to open file '/etc/libvirt/virtlogd.conf': No such file or directory: /etc/libvirt/virtlogd.conf
            no image mounted at /etc/libvirt
      Am I screwed, or is there still hope? I'm hoping I can get this resolved soon, as I have a VM I'm fairly reliant on.
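For anyone hitting the same symptom, the quickest first check is whether libvirt's control socket exists at all before reaching for virsh. A tiny sketch of that check (the function name is mine; the default socket path is the one from the error message in this thread):

```shell
# Sketch: report whether libvirt's control socket exists at the given path.
# Defaults to the path from the "Failed to connect socket" error above.
check_libvirt_socket() {
  sock="${1:-/var/run/libvirt/libvirt-sock}"
  if [ -S "$sock" ]; then
    echo "libvirt socket present"
  else
    echo "no libvirt socket"
    return 1
  fi
}

# Example: check_libvirt_socket && virsh list --all
```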
  12. Great! Thanks for the response; I'll set this up when I get home later today. As a worst-case scenario, is there a way to quickly kill the VM during boot in case I ever need to access the console (generally, I use the Web UI)?
  13. So recently I built out an UnRAID server. My specs are an FX-8320E, an ASRock 970 Pro3 R2.0, an AMD HD7850, and an AMD HD5450. I was really impressed with the general performance, but I just realized why the games on my computer seem to be running poorly: I'm running the AMD HD7850 in the PCIE3 slot (an x4 lane, so low performance). In the PCIE2 slot above, I'm running the AMD HD5450 on an x16 lane (it's crap, yes). The setup is this way because I wanted the HD5450 to be the primary graphics card for UnRAID and the HD7850 to be passed through to my gaming VM. The manual I am referencing for my motherboard: http://www.asrock.com/mb/AMD/970%20Pro3%20R2.0/?cat=Manual Would it be possible to switch these cards' PCIe slots so I get the full potential out of my HD7850? The idea would then be to tell UnRAID to use the HD5450 instead, but the actual machine would boot via the HD7850 (no control over that, AFAIK). I tried digging through the BIOS settings, but I had no luck finding a way to set which graphics card to default to. Figured someone might have previous experience in this realm. Thanks!