c0d3m0nk3y

Members
  • Posts: 21
  1. Keep getting many of these "Please consider ignoring this container" warnings even though I have set them to be skipped
  2. Seems like it will change to "apply update" when a new image is available, but you have to manually update the unraid-update-status.json again to clear it
  3. I found a "solution", though I wouldn't call it safe: mucking with the docker.img file mounted at /var/lib/docker. The Docker update info seems to be saved to /var/lib/docker/unraid-update-status.json. First I made sure to update my stacks. Then I set the "local" and "remote" properties to null and "status" to undef (example below). Then I went back to the Docker tab, hit "Check for Updates", and the statuses went back to "up-to-date". Unsure whether it will go back to "apply update" when a new image is available; I'll try to remember to report back.
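A minimal sketch of the reset described in the post above. The file layout (container name mapping to an object with "local", "remote", and "status" keys) is an assumption inferred from the post; inspect your own unraid-update-status.json before running anything like this.

```python
import json

def reset_update_statuses(path):
    """Null out cached update info so 'Check for Updates' refreshes it.
    Assumes the file maps container names to dicts with
    "local", "remote", and "status" keys (an assumption, not documented)."""
    with open(path) as f:
        data = json.load(f)
    for entry in data.values():
        if isinstance(entry, dict):
            entry["local"] = None    # serialized as null in JSON
            entry["remote"] = None
            entry["status"] = "undef"
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

# reset_update_statuses("/var/lib/docker/unraid-update-status.json")
```

Back up the file (or the whole docker.img) first, since this edits it in place.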
  4. After upgrading to 6.11.5, I lost the nvme-cli package because it is no longer available in the Nerd Pack plugin. I needed it because I am managing my fan speeds with a custom script, as Fan Auto Control does not fulfill my needs. I managed to install it manually and thought I'd document it, as the steps I took work for any other missing package you can't get through the Nerd Pack plugin.
     Acquiring the package: https://packages.slackware.com/ or https://slackware.pkgs.org/ have repositories of official Slackware packages. You need the Slackware version of your Unraid instance to find an appropriate repo. You can get that by running this in your shell:
     cat /etc/os-release
     Find and download the package you need; if multiple options are provided, you want the file with the .txz extension. Then, in your shell, run the following command:
     upgradepkg --install-new package-file.txz
     This installation is not permanent; you will need to reinstall any packages after a reboot. I suggest you set up a script in User Scripts set to "At First Array Start Only".
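The reinstall-after-reboot step could be scripted roughly like this. It's a sketch: /boot/extra/packages is a hypothetical location where you keep your downloaded .txz files, not an Unraid convention I can vouch for.

```python
import glob
import subprocess

def reinstall_packages(pkg_dir="/boot/extra/packages"):
    """Run `upgradepkg --install-new` on every saved .txz package.
    Returns the list of package files it attempted to install."""
    packages = sorted(glob.glob(f"{pkg_dir}/*.txz"))
    for pkg in packages:
        subprocess.run(["upgradepkg", "--install-new", pkg], check=True)
    return packages

# Call from a User Scripts entry scheduled "At First Array Start Only":
# reinstall_packages()
```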
  5. I've got my own auto-transcoding script I use on my Linux machine and would like to move the process to my server, but I'm running into an issue when running HandBrakeCLI with an NVENC profile. Using the same video and profile in the GUI works fine, but when I run HandBrakeCLI directly I get an error:
     [hevc_nvenc @ 0x14b6a589c6c0] Cannot load libcuda.so.1
     I've run into this error before on my Linux machine, but in that case the profile didn't work with either the GUI or HandBrakeCLI and it was a problem with my installation; that's clearly not what's happening here. Has anyone else run into this and found a solution?
     Example command:
     HandBrakeCLI --preset "H.265 NVENC 1080p" -q 28.0 -i "/storage/path/video.mp4" -o "/storage/path/video-nvenc-cq28.mp4" --start-at seconds:375 --stop-at seconds:2351
     Edit: I also tried including "--preset-import-file /config/ghb/presets.json" used in the script found at "/run/s6/services/autovideoconverter/autovideoconverter"
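One quick diagnostic (my own suggestion, not from the HandBrake docs) is to check whether the dynamic linker can locate libcuda at all from inside the container; `ctypes.util.find_library` consults roughly the same cache as `ldconfig -p`. If it comes back empty, the container runtime may not be exposing the driver libraries at all (e.g. NVIDIA_DRIVER_CAPABILITIES not including "compute,video" — an assumption worth verifying).

```python
import ctypes.util

def has_libcuda():
    """True if the dynamic linker can locate the CUDA driver library."""
    return ctypes.util.find_library("cuda") is not None

print("libcuda visible:", has_libcuda())
```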
  6. I needed ack and prename installed in the system bin, along with some scripts of my own, so I cleaned up the code and decided to share. This only requires vanilla Python 3+.
     What it does:
     For each item in the dictionary _scripts: if the script is missing, or older than a week (the oneWeekAgo threshold in run() can be changed), it downloads it from the URL defined in the dictionary item.
     For each *.sh or *.pl file in the script's directory: copies it to /usr/bin/ and enables the execute permission.
     Python script:

     import os
     import subprocess
     from datetime import datetime
     from datetime import timedelta

     def shell(cmd, check=False) -> str:
         res = subprocess.run(cmd, stdout=subprocess.PIPE, shell=True, check=check)
         return res.stdout.decode('utf-8').strip()

     def curl(url, scriptName):
         shell(f'curl {url} -o {scriptName}')

     _scripts = {
         # https://forums.unraid.net/topic/79490-rename-instead-of-rename-perl-instead-of-unix-util/
         'prename.pl': lambda k: curl('https://gist.githubusercontent.com/javiermon/3939556/raw/b9d0634f2c099b825a483d3d75cae1712fb9aa31/prename.pl', k),
         # https://beyondgrep.com/install/
         'ack.pl': lambda k: curl('https://beyondgrep.com/ack-v3.5.0', k)
     }

     _scriptExt = ['.sh', '.pl']
     _bin = '/usr/bin/'

     def run():
         oneWeekAgo = datetime.now() - timedelta(days=7)
         for k in _scripts.keys():
             sp = './' + k
             if not os.path.exists(sp) or datetime.fromtimestamp(os.path.getmtime(sp)) < oneWeekAgo:
                 print(f'Updating: {k}')
                 _scripts[k](k)
         for f in os.listdir(os.fsencode('./')):
             fn = os.fsdecode(f)
             fns = os.path.splitext(fn)
             if fns[1] in _scriptExt:
                 print(f'Copying: {fn} -> {_bin}{fns[0]}')
                 shell(f'cp {fn} {_bin}{fns[0]}')
                 shell(f'chmod +x {_bin}{fns[0]}')

     run()

     Bash script (I suggest adding it to User Scripts with Schedule set to "At First Array Start Only"; change "/mnt/user/projects/scripts" to wherever you save the Python script, and "installScripts.py" to whatever you name it):

     #!/bin/bash
     cd /mnt/user/projects/scripts
     python3 installScripts.py
  7. So it's suddenly working now. I believe it was a cable issue. To plug it in for my VM I just plug into the front ports. For Unraid host it's all the rear ports and I use an extension cable. I'm a dumb dumb and didn't think to rule that out til today when I reconnected it to grab that screenshot and it wouldn't mount at all. As soon as I plugged directly without the extension everything worked. Thanks for the help, and sorry for the bother
  8. I have. I also tried adding the ":latest" tag to the repository field to make sure it was grabbing the latest image, since I noticed after posting that although the last update on CA is 2/22, DockerHub shows the last update as 2/28. That didn't resolve the issue either.
  9. Running into an issue and haven't found anyone reporting it on this forum. I've tried multiple Blu-rays and DVDs, but any disc I load gives me a ton of errors that look like:
     'Posix error - no such device' occurred while reading '[My bluray name and model #]' at offset '[some number]'
     From what I'm seeing on the MakeMKV forums, it's an issue with the AACS host certificate. It's a USB drive, a LiteOn BD-RE Slimtype EB1, and I don't have this problem when plugging it into a USB port passed through to a VM with the latest version of MakeMKV installed. The last release of MakeMKV was on 2/27, just 5 days after your last update. Is it possible that version has this fixed? Could you update it please?
  10. Steps to recreate
      Basic view only:
      1. Enter VM edit on a VM with VNC already selected as the graphics option
      2. Enter password
      3. Click update button
      4. Enter VM edit
      Expected result: VNC password field has a value and VNC connection requires a password
      Actual result: VNC password field is empty and VNC connection does not require a password
      Advanced view:
      1. Enter VM edit on a VM with VNC already selected as the graphics option
      2. Switch to advanced view
      3. Add "passwd" attribute and value to the "graphics" xml element
      4. Click update button
      5. Enter VM edit
      Expected result: VNC password field has a value in basic view, the "graphics" xml element has a "passwd" attribute in advanced view, and VNC connection requires a password
      Actual result: VNC password field is empty in basic view, the "passwd" attribute is removed from the "graphics" xml element in advanced view, and VNC connection does not require a password
      Additional details
      If I SSH into the server and manually edit the xml to add the "passwd" attribute, it works, but I have to re-edit it any time I make a change to the VM.
      For anyone experiencing this issue, set up a User Script with this command to fix it (replace [YOUR PASSWORD] and [YOUR VM NAME] with their respective values):
      sed -i "s_<graphics type='vnc' port='-1' autoport='yes' websocket='-1'_<graphics type='vnc' port='-1' autoport='yes' websocket='-1' passwd='[YOUR PASSWORD]'_" "/etc/libvirt/qemu/[YOUR VM NAME].xml"
      rmofrequirement-diagnostics-20220414-1001.zip
  11. Like JorgeB said, you can shut down; the only downside is loss of the data that's on the cache drive, which seems to have already happened. I believe you should be able to pull the drive and use the system without a cache drive until you get a replacement.
  12. Is it possible for Docker containers to use a GPU configured for passthrough to VMs when no VMs are running, without changing IOMMU bindings or rebooting? Based on my research it seems this isn't possible, but I figured I'd ask in case I'm misunderstanding something. I can't run a 2nd GPU in my system, and my only GPU is used for my gaming VM. I'd love to set up automated video encoding that runs only when that VM is shut down. I already built the container for this; the only difference is it checks whether other containers are running. If I can figure out how to use the GPU while the IOMMU group is bound, it's only a small change to make it run only when the VM isn't running.
      My backup plan is a Linux VM that runs the scripts and then shuts down when complete, but I'd rather not have to architect another solution.
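The "run only when the VM isn't running" gate could be sketched like this. "GamingVM" and "encode-worker" are placeholder names, and it assumes virsh and docker are on the PATH; it does not solve the IOMMU binding question, only the scheduling check.

```python
import subprocess

def vm_is_running(name):
    """Check VM state via `virsh domstate`; treat any failure
    (virsh missing, domain unknown) as not running."""
    try:
        out = subprocess.run(["virsh", "domstate", name],
                             capture_output=True, text=True, check=True)
    except (FileNotFoundError, subprocess.CalledProcessError):
        return False
    return out.stdout.strip() == "running"

def start_encoder_if_idle(vm="GamingVM", container="encode-worker"):
    """Start the (placeholder) encode container only when the VM is down."""
    if not vm_is_running(vm):
        subprocess.run(["docker", "start", container], check=False)
        return True
    return False
```

Run from a scheduled User Script; pair it with a matching `docker stop` hook before the VM starts so the two never hold the GPU at once.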
  13. Can you give us more details please? I have MacInABox set up as an AirMessage server, but I'm hoping this container will be a bit lighter weight.
  14. Had the same issue. The way around it for me was to go into "XML View" and remove the entire "hyperv" element. I had no problems myself, but be sure to back up your original XML. This is what I removed:
      <hyperv>
        <relaxed state='on'/>
        <vapic state='on'/>
        <spinlocks state='on' retries='8191'/>
        <vendor_id state='on' value='none'/>
      </hyperv>