Leaderboard

Popular Content

Showing content with the highest reputation on 10/22/20 in all areas

  1. 3 points
  2. Based on the error message I saw "/app/RoonServer/start.sh: line 50: /app/RoonServer/Server/RoonServer: Permission denied". I think that the issue has to do with the permissions of the Roon start script changing after the update, rather than a port issue, but I'm not positive.
    2 points
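     For reference, a hedged sketch of how a lost execute bit like this is usually checked and restored from the Unraid console; the container name is a placeholder and the paths are taken from the error message above, so verify both on your own system first:

         # Open a shell inside the Roon container (container name is a placeholder; use sh if bash is missing)
         docker exec -it RoonServer bash

         # Check whether the execute bit was lost on the start script and server binary
         ls -l /app/RoonServer/start.sh /app/RoonServer/Server/RoonServer

         # Restore it if so
         chmod +x /app/RoonServer/start.sh /app/RoonServer/Server/RoonServer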
  3. If you click the "tags" tab it gives you the build number to use. Example: binhex/arch-delugevpn:1.3.15_18_ge050905b2-1-04. FYI, these versions do not support the next-gen servers.
    2 points
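     As an illustration only (the tag below is the example from the post; check the "tags" tab for the one you actually want), pinning the image to a specific build from the command line looks like this:

         # Pull a specific build instead of :latest
         docker pull binhex/arch-delugevpn:1.3.15_18_ge050905b2-1-04

     In the Unraid web UI the same tag goes on the end of the Repository field when editing the container.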
  4. Hi JorgeB, as you suggested, I updated the BIOS and firmware of the LSI controller and of my motherboard. Thanks a lot for your superb instructions on how to flash the controller under Unraid! The parity build has now finished with no errors (unlike before). The parity discs spun down and I copied a file to the system without any error. Now it seems that everything works fine! Let's see whether it lasts when I copy more files to the NAS. Your advice to update was right. I had thought about that too, but didn't really believe it was the cause, because the controller had run in the other server with no problems. Thanks a lot again for your hint and instructions.
    2 points
  5. Just FYI, I think I was having a similar issue to Marshalleq. On RC2, when I stopped the unRAID array, which stopped my VM, restarted the unraid array and attempted to restart my VM, it would hang the VM management page (white bottom, no uptime in unraid) and then if you attempted to reboot, it would not reboot successfully. You would have to reset the machine to reboot. However, with RC4, everything seems to be working correctly.
    2 points
  6. New Zealand. So, I've pushed three new features today...

     Quicksync HW encoding (VAAPI) support. Quicksync was not fully enabled after the NVIDIA encoding feature was added a few months back; it was missing some testing and additional configuration. I finally managed to get my hands on an Intel laptop a couple of months ago, so I was able to finish that part off. There are no configuration options for HW decoding at the moment, but I would like to get around to that some time as well. However, there is now a way you can configure Intel decoding and encoding if you know what you are doing (keep reading below).

     Task lists are now read from the database. This is a big update. If anyone's install breaks because of this change... sorry. Basically, all the "lists" are now inserted into and selected from the SQLite DB. This change required a decent overhaul of the way the application passes tasks around to the workers, the post-processor, etc. Now that this change has been implemented, it will be possible to add features to move items around in the task list or set priority on items; that is what I want to move onto next. This was the last big change I had to implement before moving out of "beta" - the last change on my milestones that would break anyone's configuration and force them to delete the database and start fresh. So now that this is done, I should be moving this out of beta and expanding it to other forums.

     Advanced FFMPEG options. This is something that users who want to get their hands dirty may particularly enjoy. I often see posts wishing Unmanic could give more control over how it transcodes a library; even recently I saw a post here wondering what FFMPEG arguments Unmanic was executing. In settings you will now find a fifth tab - "Advanced Options". This tab gives you an example printout of the FFMPEG command that will be run on your video files, as well as a text input box for adding your own custom FFMPEG options. For those of you who can be bothered reading the FFMPEG docs, you may find some ways to further improve the command. For now, this "Custom FFMPEG Options" field is also a requirement for getting the Intel VAAPI encoders working. It requires these two params to be added (see the example command after this item): -vaapi_device /dev/dri/renderD128 -vf format=nv12|vaapi,hwupload If this is too difficult for people, I will eventually come up with a way for these to be automatically populated when one of the VAAPI encoders is selected, but that is a low priority. It's time to get onto some front-end improvements. I will add instructions on the first post for setting this up.

     Sorry for the seeming lack of development, people. I have far from given up on this application - I quite enjoy it, and I have been slowly working on it this year. Unfortunately, earlier this year when we finished our lockdown here in NZ, my workload increased as we went into catch-up mode at my day job. It's been the busiest year for me. My next goal is to give us some ability to sort the pending tasks list and blacklist any files that are constantly failing. Hopefully some people get some good use out of this last update with Quicksync. Cheers
    2 points
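     For anyone who wants to try the VAAPI parameters mentioned above on the command line before putting them into Unmanic, a rough equivalent might look like the following; the filenames and the choice of hevc_vaapi as the encoder are only assumptions, not necessarily what Unmanic runs:

         # Hardware-encode a file with Intel VAAPI using the render device from the post
         ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mkv \
             -vf 'format=nv12|vaapi,hwupload' -c:v hevc_vaapi -c:a copy output.mkv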
  7. I am partially colorblind. I have difficulty with similar shades (such as more-green blues next to greens, or more-blue greens next to blues). The default Web Console color scheme makes reading the output of ls almost impossible for me: I either have to increase the font size to a point that drastically hinders productivity, or strain my eyes to make out the letters against that background. A high-contrast option would be great. Or, even better, the option to select common themes like "Solarized" et al. Perhaps even the ability to add shell color profiles for the web console. For now I use KiTTY when I can - and I've added a color profile to ~/.bash_profile via my previously suggested "persistent root" modification (a sketch follows this item). Also worth mentioning here: https://github.com/Mayccoll/Gogh Gogh has a very extensive set of friendly, aesthetically pleasing, well-contrasting color profiles ready to go. Edit: Also worth noting that currently the web terminal doesn't source ~/.bashrc or ~/.bash_profile, and this results in the colors being "hardcoded" (source ~/.bashrc to the rescue). Edit 2: Additionally, the font is hardcoded. If we are fixing the web terminal to be a capable, customizable platform, this would also be high on the list of things to just do.
    1 point
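     A minimal sketch of the kind of shell tweak described above; the exact colours are personal preference, and the ~/.bash_profile location assumes the "persistent root" modification mentioned in the post:

         # ~/.bash_profile - make ls output readable against the default background
         source ~/.bashrc                              # the web terminal does not do this by default
         export LS_COLORS='di=1;36:ln=1;35:ex=1;32'    # bold cyan dirs, magenta symlinks, green executables
         alias ls='ls --color=auto'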
  8. That was a good question, actually. I just realized that I hadn't had to update the Core (running in Docker) until right now. I just backed up my library and then updated to Roon Version 7.1 (build 610) using the internal updater, and ran into exactly the same problem that it seems like you guys are having. Once Roon Remote on my Mac and Roon Core (on my server) rebooted, I was forced to sign in again and connect to my Roon Core as if this was the first time I'd done so. I tried to restore my library, only to find that after the restore was complete my Roon Core had disappeared, and now I've had to reinstall the container. I suspect that this may be an issue related to running Roon inside a Docker container, so I've created an issue in steefdebruijn's repo: https://github.com/steefdebruijn/docker-roonserver/issues/8 I encourage you all to post your experiences in this issue, particularly if you have any logs - I accidentally deleted mine before I could add them to this issue.
    1 point
  9. I've switched to an older NUC (Intel NUC6CAYH) with 8 GB, similar to wcg66. I bought it used for less than $150 to be able to stream multichannel audio through Roon, and it is working perfectly - now for multichannel and also as the server. I was able to easily upgrade the Roon server software from 610 to 667.
    1 point
  10. @dkerlee, I'm running the ROCK software on the NUC. It's Linux-based and set up very much like an appliance. The NUC I have is a few years old (5th-generation mobile i5 processor with 8 GB RAM) but it seems to be OK. Even with DSP and transcoding it keeps up. I decided to use a 2 TB drive I had lying around as external storage for the library. That works well too, and I can access it through Samba (it's also mounted in unRAID). Not a bad solution in the end. However, I had the hardware lying around so it was $0 and gave me a use for a NUC I bought.
    1 point
  11. From what I understand, yes. It does not hurt to make a backup of your flash and note your disk assignments, just in case. Also, if you have VMs, the passthrough elements would have to be checked and potentially changed.
    1 point
  12. Not in my experience. I have done this at least a half dozen times without any issues. Sent from my iPad using Tapatalk
    1 point
  13. Man, I'm sorry for not checking the FAQ beforehand. I usually do. Didn't even think about simply downgrading the whole container. Thank you, as always, for being patient with us.
    1 point
  14. Yep, it's simplicity itself! Follow Q5: https://github.com/binhex/documentation/blob/master/docker/faq/unraid.md
    1 point
  15. @Maddeen Oooh, I would have waited just a couple of weeks for the Ryzen 5000 chips, in particular the 5600X or 5800X for you. Anyhow, I upgraded from my (now backup) Unraid server to my new Ryzen build earlier this year. If you have your array set up now without any issues, and you're migrating to the same version, it should be easy to swap your disks over. You'll be able to find plenty of advice on this forum about that, from better people than me. You should basically be able to swap your hardware out, as far as I'm aware.
    1 point
  16. Those ones are very old. It's probably best to wait and see what josh5 has to say.
    1 point
  17. Solved my own issue and it's now working as expected. I was trying to assign an address that was inside my router's DHCP pool (which it doesn't allow).
    1 point
  18. Sorry, I tried to repro, but it seems like it must have been some sort of transient network error, because they all download and unzip fine now... strange.
    1 point
  19. Just as an update: I set up your recommended logging method, but since then the issue has not occurred again. If it occurs again, I'll post the log here. Thanks for the suggestion.
    1 point
  20. One of the preclears is a Docker container. There is also a plugin, and various script versions floating around. Preclear is really only for testing these days, since Unraid will clear a disk without taking the array offline in those situations where a clear disk is required (hint: there is only one situation). Some people like to preclear new disks to test and hopefully weed out "infant mortality". I don't bother with preclear.
    1 point
  21. It did indeed fix the error, however it appears to have been unrelated. Now I have no errors in the log, but it still doesn't process anything. All the workers are idle even though there are several pending. If I roll back to 119 everything works again. Is there any information I can provide that would be useful in identifying the problem?
    1 point
  22. Yes, you will only slow things down by using a proxy over a VPN tunnel; it's unnecessary.
    1 point
  23. You've said a couple of times now that you've 'regenerated credentials': what credentials are you using - your main username/password that you log in to the PIA website with, or these? It should be the main username (pXXXXXX) and password you use, not the generated SOCKS ones.
    1 point
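     If this is one of the binhex VPN containers, those credentials normally go into the container's VPN_USER and VPN_PASS variables (an assumption here - check your own template). A bare-bones sketch from the command line, with placeholder values and without the usual port/volume mappings:

         # Placeholders only - use your PIA website login, not the generated SOCKS credentials
         docker run -d --name=binhex-delugevpn \
             --cap-add=NET_ADMIN \
             -e VPN_ENABLED=yes \
             -e VPN_PROV=pia \
             -e VPN_USER='p1234567' \
             -e VPN_PASS='your-pia-password' \
             binhex/arch-delugevpn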
  24. I was 50/50 because of the new FreeNAS Scale coming out with a Linux kernel, Docker and Kubernetes, mainly because I want all the more business-like features. But Unraid have done such a fantastic job, and apparently (hopefully) ZFS is coming along with who knows what else. They do a really good job on the whole, don't they, so I don't mind giving a bit of extra cash at all. I'm lucky to be recently employed again and I can share that around a little here and there. Thanks @limetech for a great product, community and support!
    1 point
  25. Generate a report with powertop. It will contain the commands to set "auto" for unused PCI slots. I don't know why this general rule is a problem for your server; it seems one of your PCI cards has a problem with automatic power control.
    1 point
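     A hedged example of what that looks like in practice; the PCI address below is only a placeholder - take the real tuning commands from the generated report:

         # Generate an HTML report containing the suggested tuning commands
         powertop --html=powertop-report.html

         # Typical tuning line from the report: enable runtime power management for one PCI device
         echo 'auto' > '/sys/bus/pci/devices/0000:00:1c.0/power/control'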
  26. SOLVED geht wieder, hab den USB-Stick neu erstellt und die Config aus dem Backup drauf kopiert. Alles wieder da und läuft.
    1 point
  27. In the syslog, though I should have said continuous blocks, parity is checked on a standard 4K Linux block, and each block has 8 sectors (for standard 512e drives), so when the errors are logged for every 8th sector, they are on continuous blocks.
    1 point
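     The arithmetic behind the "every 8th sector" observation, for anyone checking their own syslog:

         # A standard 4K parity/filesystem block divided by 512-byte logical sectors (512e drives)
         echo $((4096 / 512))   # prints 8, so one logged error per block appears every 8th sector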
  28. This is actually the dedicated server, but it's Windows-only and I couldn't find much information about it. I could try it with WINE, but that can add a lot of overhead, eat up system resources, consume a lot of CPU time, is often unstable, and it's also possible that it won't run at all - for example, running DayZ and Astroneer through WINE is not possible. You can try my 'The Forest' container, which is also built with WINE, and then decide if it's worth the hassle...
    1 point
  29. Yeah, it is working fine. Fair enough about not wanting to implement the configs, as they can change between versions. But maybe just add a note somewhere that they are user-supplied?
    1 point
  30. I created the new directory in the Nextcloud template, then added the external storage, and it is showing up fine in Nextcloud. However, it doesn't seem like I'm able to redirect the InstantUpload picture folder to the external storage folder. It 'seems' like Nextcloud creates those InstantUpload folders and won't allow me to redirect them. I even just tried to move the InstantUpload picture folder to the external storage folder, but it won't let me move it. How were you able to connect your InstantUpload camera folder to the external storage folder? I promise I'm trying!!! lol
    1 point
  31. lol, that's it. All containers are downloaded; I probably just need to set up Pi-hole again. Easy answer, but tough to troubleshoot. Thanks, guys!
    1 point
  32. It's not so much a rule as a support nightmare. If there emerges a popular adapter with consistent linux support from the manufacturer submitting working drivers for the current kernel, then yes, it's quite possible for that specific adapter to be supported. The current issues with realtek wired adapters are bad enough, trying to support wifi generally just isn't going to happen, AFAIK. It's only just recently that some specific USB wired adapters could be used.
    1 point
  33. @Porkie Your CPU seems not to be supported: https://github.com/georgewhewell/undervolt/issues/72 Search the GitHub issues to see whether your CPU model already has a request and, if not, ask for support. The dev sometimes adds new CPUs. @Nuke Sorry, the bug is fixed.
    1 point
  34. I had a *lot* of issues with the SQLite database in PhotoPrism actually, lots of "database is locked" errors. After switching it out to MariaDB it just works, and everything is much, much faster now as a whole. FWIW I pinged the devs about SQLite and they said they tested it with a 6-core machine and fairly standard RAM, but I'm running 16C/32T and 128 GB of RAM, so perhaps my machine was going *too* fast for it to keep up and was trying to move on while the SQLite DB was still locked? Not sure, but moving to MariaDB solved the issues (a config sketch follows this item). If you do decide to pull it from CA let me know - I will take over the template; I already have an approved CA repo so I can put it up instead if you wish. Your template works great so far on everything! I plan to roll with this as a self-hosted Google Photos replacement (best thing I can find for now) and get Nextcloud to auto-upload photos from my phone to external storage that is the import folder of PhotoPrism, with scripted auto-imports every hour or overnight each day.
    1 point
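     For anyone wanting to make the same SQLite-to-MariaDB switch, a minimal sketch of the relevant PhotoPrism environment variables; the server address, database name and credentials are placeholders, and the usual port/volume mappings are omitted - double-check against the PhotoPrism docs and your MariaDB container:

         # Point PhotoPrism at an existing MariaDB instance instead of the built-in SQLite
         docker run -d --name=photoprism \
             -e PHOTOPRISM_DATABASE_DRIVER='mysql' \
             -e PHOTOPRISM_DATABASE_SERVER='192.168.1.10:3306' \
             -e PHOTOPRISM_DATABASE_NAME='photoprism' \
             -e PHOTOPRISM_DATABASE_USER='photoprism' \
             -e PHOTOPRISM_DATABASE_PASSWORD='changeme' \
             photoprism/photoprism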
  35. if you can leave it for a few days (with other launchers than steam)
    1 point
  36. Rebuilt zfs-2.0.0-rc4 for unRAID 6.8.3 & 6.9.0-beta30
    1 point
  37. Solved. Rebooted the server and it asked to format the SSD again. Basically, you have to reformat the drive to fix this issue.
    1 point
  38. SMART attributes 1 and 200 are also important for WD Red drives, but you have to add these attributes to the list for each drive before Unraid will warn about them. I don't know if the various preclear utilities report on those attributes or not.
    1 point
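     To see whether a drive is reporting those attributes at all, something like the following from the Unraid console works (the device name is a placeholder):

         # Print all SMART attributes; look for ID 1 (Raw_Read_Error_Rate) and ID 200 (Multi_Zone_Error_Rate)
         smartctl -A /dev/sdX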
  39. OK, I fixed the broken face plugin myself. If someone has trouble with it: remove the /plugins/face/node_modules folder (rm -R) and make sure your package.json matches the original one: https://github.com/rico360z28/Shinobi/blob/3f536cc1c6c616029f4a8a83c48356cb934979e9/plugins/face/package.json
     First, edit /plugins/face/INSTALL.SH and replace all "@1.7.3" with "@1.7.4". Second, comment out line 145 (sudo npm audit fix --force) - it breaks the dependencies. (A sed version of these two edits follows this item.) My INSTALL.sh looks like this:

     #!/bin/bash
     DIR=`dirname $0`
     if [ -x "$(command -v apt)" ]; then
       sudo apt update -y
     fi
     # Check if Cent OS
     if [ -x "$(command -v yum)" ]; then
       sudo yum update -y
     fi
     INSTALL_WITH_GPU="0"
     INSTALL_FOR_ARM64="0"
     INSTALL_FOR_ARM="0"
     TFJS_SUFFIX=""
     echo "----------------------------------------"
     echo "-- Installing Face Plugin for Shinobi --"
     echo "----------------------------------------"
     echo "Are you Installing on an ARM CPU?"
     echo "like Jetson Nano or Raspberry Pi Model 3 B+. Default is No."
     echo "(y)es or (N)o"
     read useArm
     if [ "$useArm" = "y" ] || [ "$useArm" = "Y" ] || [ "$useArm" = "YES" ] || [ "$useArm" = "yes" ] || [ "$useArm" = "Yes" ]; then
       INSTALL_FOR_ARM="1"
       echo "Are you Installing on an ARM64 CPU?"
       echo "like Jetson Nano. Default is No (64/32-bit)"
       echo "(y)es or (N)o"
       read useArm64
       if [ "$useArm64" = "y" ] || [ "$useArm64" = "Y" ] || [ "$useArm64" = "YES" ] || [ "$useArm64" = "yes" ] || [ "$useArm64" = "Yes" ]; then
         INSTALL_FOR_ARM64="1"
       fi
     fi
     if [ -d "/usr/local/cuda" ]; then
       echo "Do you want to install the plugin with CUDA support?"
       echo "Do this if you installed NVIDIA Drivers, CUDA Toolkit, and CuDNN"
       echo "(y)es or (N)o"
       read usecuda
       if [ "$usecuda" = "y" ] || [ "$usecuda" = "Y" ] || [ "$usecuda" = "YES" ] || [ "$usecuda" = "yes" ] || [ "$usecuda" = "Yes" ]; then
         INSTALL_WITH_GPU="1"
         TFJS_SUFFIX="-gpu"
       fi
     fi
     echo "-----------------------------------"
     if [ ! -d "./faces" ]; then
       mkdir faces
     fi
     if [ ! -d "./weights" ]; then
       mkdir weights
       if [ ! -x "$(command -v wget)" ]; then
         # Check if Ubuntu
         if [ -x "$(command -v apt)" ]; then
           sudo apt install wget -y
         fi
         # Check if Cent OS
         if [ -x "$(command -v yum)" ]; then
           sudo yum install wget -y
         fi
       fi
       cdnUrl="https://cdn.shinobi.video/weights/plugin-face-weights"
       wget -O weights/face_landmark_68_model-shard1 $cdnUrl/face_landmark_68_model-shard1
       wget -O weights/face_landmark_68_model-weights_manifest.json $cdnUrl/face_landmark_68_model-weights_manifest.json
       wget -O weights/face_landmark_68_tiny_model-shard1 $cdnUrl/face_landmark_68_tiny_model-shard1
       wget -O weights/face_landmark_68_tiny_model-weights_manifest.json $cdnUrl/face_landmark_68_tiny_model-weights_manifest.json
       wget -O weights/face_recognition_model-shard1 $cdnUrl/face_recognition_model-shard1
       wget -O weights/face_recognition_model-shard2 $cdnUrl/face_recognition_model-shard2
       wget -O weights/face_recognition_model-weights_manifest.json $cdnUrl/face_recognition_model-weights_manifest.json
       wget -O weights/mtcnn_model-shard1 $cdnUrl/mtcnn_model-shard1
       wget -O weights/mtcnn_model-weights_manifest.json $cdnUrl/mtcnn_model-weights_manifest.json
       wget -O weights/ssd_mobilenetv1_model-shard1 $cdnUrl/ssd_mobilenetv1_model-shard1
       wget -O weights/ssd_mobilenetv1_model-shard2 $cdnUrl/ssd_mobilenetv1_model-shard2
       wget -O weights/ssd_mobilenetv1_model-weights_manifest.json $cdnUrl/ssd_mobilenetv1_model-weights_manifest.json
       wget -O weights/tiny_face_detector_model-shard1 $cdnUrl/tiny_face_detector_model-shard1
       wget -O weights/tiny_face_detector_model-weights_manifest.json $cdnUrl/tiny_face_detector_model-weights_manifest.json
     else
       echo "weights found..."
     fi
     echo "-----------------------------------"
     if [ ! -e "./conf.json" ]; then
       echo "Creating conf.json"
       sudo cp conf.sample.json conf.json
     else
       echo "conf.json already exists..."
     fi
     if [ ! -e "$DIR/../../libs/customAutoLoad/faceManagerCustomAutoLoadLibrary" ]; then
       echo "Installing Face Manager customAutoLoad Module..."
       sudo cp -r $DIR/faceManagerCustomAutoLoadLibrary $DIR/../../libs/customAutoLoad/faceManagerCustomAutoLoadLibrary
     else
       echo "Face Manager customAutoLoad Module already installed..."
     fi
     tfjsBuildVal="cpu"
     if [ "$INSTALL_WITH_GPU" = "1" ]; then
       tfjsBuildVal="gpu"
     fi
     echo "-----------------------------------"
     echo "Adding Random Plugin Key to Main Configuration"
     node $DIR/../../tools/modifyConfigurationForPlugin.js face key=$(head -c 64 < /dev/urandom | sha256sum | awk '{print substr($1,1,60)}') tfjsBuild=$tfjsBuildVal
     echo "-----------------------------------"
     echo "Updating Node Package Manager"
     sudo npm install npm -g --unsafe-perm
     echo "-----------------------------------"
     echo "Getting node-gyp to build C++ modules"
     if [ ! -x "$(command -v node-gyp)" ]; then
       # Check if Ubuntu
       if [ -x "$(command -v apt)" ]; then
         sudo apt install node-gyp -y
         sudo apt-get install gcc g++ build-essential libcairo2-dev libpango1.0-dev libjpeg-dev libgif-dev librsvg2-dev -y
       fi
       # Check if Cent OS
       if [ -x "$(command -v yum)" ]; then
         sudo yum install node-gyp -y
         sudo yum install gcc-c++ cairo-devel libjpeg-turbo-devel pango-devel giflib-devel -y
       fi
     fi
     sudo npm install node-gyp -g --unsafe-perm --force
     echo "-----------------------------------"
     npm uninstall @tensorflow/tfjs-node-gpu --unsafe-perm
     npm uninstall @tensorflow/tfjs-node --unsafe-perm
     echo "Getting C++ module : @tensorflow/[email protected]"
     echo "https://github.com/tensorflow/tfjs-node"
     npm install @tensorflow/[email protected] --unsafe-perm --force
     npm install @tensorflow/[email protected] --unsafe-perm --force
     npm install @tensorflow/[email protected] --unsafe-perm --force
     echo "Getting C++ module : face-api.js"
     echo "https://github.com/justadudewhohacks/face-api.js"
     sudo npm install --unsafe-perm --force
     if [ "$INSTALL_WITH_GPU" = "1" ]; then
       echo "GPU version of tjfs : https://github.com/tensorflow/tfjs-node-gpu"
     else
       echo "CPU version of tjfs : https://github.com/tensorflow/tfjs-node"
     fi
     sudo npm install @tensorflow/[email protected] --unsafe-perm --force
     if [ "$INSTALL_FOR_ARM" = "1" ]; then
       cd node_modules/@tensorflow/tfjs-node$TFJS_SUFFIX
       if [ "$INSTALL_FOR_ARM64" = "1" ]; then
         echo "{ \"tf-lib\": \"https://cdn.shinobi.video/binaries/libtensorflow-gpu-linux-arm64-1.15.0.tar.gz\" }" > scripts/custom-binary.json
       else
         echo "{ \"tf-lib\": \"https://cdn.shinobi.video/binaries/libtensorflow-cpu-linux-arm-1.15.0.tar.gz\" }" > scripts/custom-binary.json
       fi
       cd ../../..
     fi
     #sudo npm audit fix --force
     echo "-----------------------------------"
     echo "Start the plugin with pm2 like so :"
     echo "pm2 start shinobi-face.js"
     echo "-----------------------------------"
     echo "Start the plugin without pm2 :"
     echo "node shinobi-face.js"

     Then, to reinstall and restart the plugin:

     pm2 stop shinobi-face
     pm2 delete shinobi-face
     cd /opt/shinobi/plugins/face
     rm -R node_modules/
     sh INSTALL.sh
     node shinobi-face.js
     pm2 start shinobi-face.js
    1 point
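     For anyone who would rather patch the stock INSTALL.sh than replace it wholesale, the two edits described above can be applied with sed; the path below assumes the usual /opt/shinobi install location:

         cd /opt/shinobi/plugins/face

         # Bump the pinned tfjs version from 1.7.3 to 1.7.4
         sed -i 's/@1\.7\.3/@1.7.4/g' INSTALL.sh

         # Comment out the "npm audit fix" line that breaks the dependencies
         sed -i 's/^sudo npm audit fix --force/#&/' INSTALL.sh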
  40. Wait a minute! You take time out of development/documentation to eat dinner? Did you even check first with your loyal and demanding user base who is paying you absolutely nothing for you to slave away on our behalf? Unacceptable! 😁
    1 point
  41. Yes, fair enough, but how do they finance themselves then? I mean, look at TMDB - their site doesn't exactly look like they're short of capable developers. EDIT: Ah, the money comes from TiVo (source), and they license the data on to other companies. So much for "community". In that case I'd like to be paid for my updates to the database, thank you - it must have been about 10 entries or so ^^
    1 point
  42. B550 motherboards are not good for IOMMU groups. I don't know if a BIOS update corrects this. Choose an X570; it's better. Sent from my HD1913 using Tapatalk
    1 point
  43. Do: "During first login, make sure that the "Authentication" in the webui is set to "Local" instead of "PAM". Then set up the user accounts with their passwords (user accounts created under PAM do not survive container update or recreation)" mean that "admin" should be removed in the url logon?: https://XX.XX.XX.XX:943/admin/ If so I arrive here: Going to "Admin" I get this url: https://XX.XX.XX.XX:943/admin/ Loggin in with default PAM I get this annoying guy: I have a feeling I am doing something wrong (after 2 hours)........... Can it be port forwarding? 1194 to 943 on server ip. Anyone? //Frode logfile.rtf
    1 point
  44. @chip - were you able to get this working? I am trying and I think I am close but I am getting this error: Error: ENOENT: no such file or directory, open '/home/meshserver/views/layouts/main.handlebars' I connected to the console and do not see the directory "views" under "/home/meshserver"
    1 point