Leaderboard

Popular Content

Showing content with the highest reputation on 10/09/19 in all areas

  1. You might find this useful (Yes means the iGPU can both encode and decode). By the way, this chart is from the Quick Sync Video Wikipedia page.
    3 points
  2. Please be courteous and respectful to one another. Thank you
    2 points
  3. Actually, in this instance there was a reason why the update didn't trigger in the usual period of time: the server that does the triggering is my home server (yes, I'm tight and don't wanna pay for a VPS :-) ), and that server was in bits for all of yesterday, literally! The reason was a massive hardware upgrade, which I am now happy to say is complete. As soon as I brought the server online I saw the trigger for Plex Pass, so all is good in the world again :-).
    2 points
  4. Use `df -h /var/log`. By the way, Dynamix System Stats no longer stores its information under /var/log.
    2 points
  5. ** Container Deprecated and Archived ** Please switch to Ferdium-server.
Application Name: Ferdi-server
Application Site: https://github.com/getferdi/server
Docker Hub: https://hub.docker.com/r/getferdi/ferdi-server
Github: https://github.com/getferdi/server
Ferdi is a hard fork of Franz, a messaging app for WhatsApp, Slack, Telegram, HipChat, Hangouts and many more. Ferdi-server is an unofficial replacement for the Franz server, for use with the Ferdi client. This is a Docker image of Ferdi-server running on Alpine Linux and Node.js (v10.16.3).
Configuration: After the first run, Ferdi-server's configuration is saved inside the `config.txt` file inside your persistent data directory (`/config` in the container).
- <port>:3333 - maps the container's port 3333 to a port on the host; default is 3333
- NODE_ENV - for specifying the Node environment, production or development; default is development, and currently this should not be changed
- APP_URL - for specifying the URL, local or external, of the Ferdi-server, including http or https as necessary
- DB_CONNECTION - for specifying the database being used; default is sqlite
- DB_HOST - for specifying the database host; default is 127.0.0.1
- DB_PORT - for specifying the database port; default is 3306
- DB_USER - for specifying the database user; default is root
- DB_PASSWORD - for specifying the database password; default is password
- DB_DATABASE - for specifying the database name to be used; default is ferdi
- DB_SSL - true only if your database is Postgres and it is hosted online, on platforms like GCP, AWS, etc.
- MAIL_CONNECTION - for specifying the mail sender to be used; default is smtp
- SMTP_HOST - for specifying the mail host to be used; default is 127.0.0.1
- SMTP_PORT - for specifying the mail port to be used; default is 2525
- MAIL_SSL - for specifying SMTP mail security; default is false
- MAIL_USERNAME - for specifying your mail username to be used; default is username
- MAIL_PASSWORD - for specifying your mail password to be used; default is password
- MAIL_SENDER - for specifying the mail sender address to be used; default is [email protected]
- IS_CREATION_ENABLED - for specifying whether to enable the creation of custom recipes; default is true
- IS_DASHBOARD_ENABLED - for specifying whether to enable the Ferdi-server dashboard; default is true
- IS_REGISTRATION_ENABLED - for specifying whether to allow user registration; default is true
- CONNECT_WITH_FRANZ - for specifying whether to enable connections to the Franz server; default is true
- DATA_DIR=/data - for specifying the SQLite database folder; default is /data
- <path to data on host>:/data - stores Ferdi-server's data (its database, among other things) on the Docker host for persistence
- <path to recipes on host>:/app/recipes - stores Ferdi-server's recipes on the Docker host for persistence
By enabling the `CONNECT_WITH_FRANZ` option, Ferdi-server can:
- Show the full Franz recipe library instead of only custom recipes
- Import Franz accounts
NGINX Config Block: To access Ferdi-server from outside of your home network on a subdomain, use this server block:
# Ferdi-server
server {
    listen 443 ssl http2;
    server_name ferdi.my.website;
    # all ssl related config moved to ssl.conf
    include /config/nginx/ssl.conf;
    location / {
        proxy_pass http://<SERVERIP>:3333;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Importing your Franz account: Ferdi-server allows you to import your full Franz account, including all its settings. To import your Franz account, open `http://[YOUR FERDI-SERVER]/import` in your browser and log in using your Franz account details. Ferdi-server will create a new user with the same credentials and copy your Franz settings, services, and workspaces.
** UPDATE 2021-11-17 ** The new version of the Ferdi-server Docker image has been pushed to Docker Hub. Existing users, please note: the latest updates to Ferdi-server and the Ferdi-server Docker image introduce changes to the default SQLite database name and location, as well as the internal container port. The new container port is 3333. If you would like to keep your existing SQLite database, you will need to add the DATA_DIR variable and change it to /app/database to match your existing data volume. You will also need to change the DB_DATABASE variable to development to match your existing database. Please see the parameters in the Migration section below.
Migrating from an existing Ferdi-server: If you are an existing Ferdi-server user using the built-in SQLite database, you should include the following variables:
| -p 3333:3333 | existing Ferdi-server users will need to update their container port mappings from 80:3333 to `3333:3333` |
| -e DB_DATABASE=development | existing Ferdi-server users who use the built-in SQLite database should use the database name development |
| -e DATA_DIR=/app/database | existing Ferdi-server users who use the built-in SQLite database should add this environment variable to ensure data persistence |
| -v <path to data on host>:/app/database | existing Ferdi-server users who use the built-in SQLite database should use the volume name /app/database |
If you are an existing Ferdi-server user who uses an external database, or different variables for the built-in SQLite database, you should update your parameters accordingly. For example, if you are using an external MariaDB or MySQL database your unique parameters might look like this:
| -e DB_CONNECTION=mysql | for specifying the database being used |
| -e DB_HOST=192.168.10.1 | for specifying the database host machine IP |
| -e DB_PORT=3306 | for specifying the database port |
| -e DB_USER=ferdi | for specifying the database user |
| -e DB_PASSWORD=ferdipw | for specifying the database password |
| -e DB_DATABASE=adonis | for specifying the database to be used |
| -v <path to database>:/app/database | stores Ferdi-server's database on the Docker host for persistence |
| -v <path to recipes>:/app/recipes | stores Ferdi-server's recipes on the Docker host for persistence |
**In either case, please be sure to pass the correct variables to the new Ferdi-server container in order to maintain access to your existing database.** For more information please check out the Docker README.md on Github: https://github.com/getferdi/server/tree/master/docker If you appreciate my work please consider buying me a coffee, cheers! 😁
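Putting the migration variables above together, a full invocation might look like the following sketch. The container name, image tag, and host paths are illustrative placeholders, not taken from the post:

```shell
# Hedged sketch: migrating an existing built-in SQLite Ferdi-server install.
# Host appdata paths and the image name are assumptions; adjust to your setup.
docker run -d \
  --name=ferdi-server \
  -p 3333:3333 \
  -e DB_CONNECTION=sqlite \
  -e DB_DATABASE=development \
  -e DATA_DIR=/app/database \
  -v /mnt/user/appdata/ferdi-server/database:/app/database \
  -v /mnt/user/appdata/ferdi-server/recipes:/app/recipes \
  getferdi/ferdi-server
```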
    1 point
  6. Hi, I created a user with the wrong name and I'm trying to find out how to delete it. In the Users section, I click on the user and then I see a DELETE option with two buttons, Apply (grey) and Done. They behave more like a commit of changes made to the user's description; they do not delete the user. So how do you delete users? TYVM
    1 point
  7. Hi there, So we have a couple of 45Drives servers in our labs that we do testing on from time to time. I can confirm in a recent test that both 6.7.2 and our 6.8-internal series of builds have a faulty R750 driver. We have already contacted highpoint about the problem but their response was less than thrilling. In short, they cannot give a timeframe yet on when they will be able to update their driver to fix the issue. This is a huge frustration for us because they intentionally do not include the driver in the kernel tree, which means that as we update the kernel, if they don't keep their driver code in pace, problems like this can occur. All of that said, we did test version 6.6.7 and it worked just fine with the R750 controller. I also don't believe there are any issues with the Storinator Workstation (AV15) product. Those use a different LSI controller I believe.
    1 point
  8. I pushed an update after pulling up BeyondCompare to do a code sync but didn't actually sync the code. That'll teach me to try to work on bugs after being up for nearly 24 hours after a whirlwind vacation of New York City & Long Island! Try again, I show 2.3 on my main rig. Instead of just increasing the timeout to accommodate your rig, I just bumped it up to 9999 seconds for the controller benchmark (2.78 hours).
    1 point
  9. If you look some pages back (p.50) you can find some screenshots of Geekbench tests: personally, for GPU I get good performance with Metal (nearly the same results as CUDA/OpenCL native on Windows), and quite poor performance with OpenCL (about 1/3). Leoyzen is able to get nearly the same performance as a native OS.
    1 point
  10. Docker is done and template is uploaded and will be available in the CA App within the next few hours.
    1 point
  11. @Zonediver Discounting Quick Sync hardware acceleration out of hand because it doesn't meet your own quality standards doesn't mean it wouldn't be acceptable for others in different situations. Coming on here berating others for using it is rather rude.
    1 point
  12. That’s part of the problem of your perspective. Quick Sync quality on an Ivy Bridge CPU is not that great. It improved significantly with Haswell. Not everyone has the hardware reserves for CPU transcoding. You don’t know what else people have running on their machines.
    1 point
  13. When writing new data, Unraid works at the file level and selects an appropriate drive when a file is first created. Once a drive has been selected, Unraid will keep writing to that drive until the file write completes. If the disk runs out of space while writing the file, an error occurs and the write fails. There is a Minimum Free Space option for the cache drive under Settings->Global Share Settings; when the free space on the cache falls below this value, Unraid starts writing new files directly to the array, bypassing the cache. Ideally this option should be set larger than the largest file you are likely to write to Unraid. For existing files being updated, Unraid simply selects the drive already containing the file. There is an option to run mover immediately on the Main tab, and an option to schedule mover runs under Settings->Scheduler. There is also the Mover Tuning plugin, which allows more sophisticated automated control. Moving cache/dockers/VMs is easy enough, although it can be time consuming: it is basically just a copy operation.
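The cache bypass rule described above can be sketched as a small shell function. This mimics the behavior as described, not Unraid's actual implementation; the function name and units are my own:

```shell
# Hedged sketch of the Minimum Free Space rule: if the cache's free space is
# below the configured minimum, the new file goes directly to the array.
pick_target() {
  cache_free_kib=$1   # free space on the cache, in KiB
  min_free_kib=$2     # Minimum Free Space setting, in KiB
  if [ "$cache_free_kib" -lt "$min_free_kib" ]; then
    echo "array"      # bypass the cache
  else
    echo "cache"
  fi
}
# In practice the free-space figure would come from something like:
#   df --output=avail /mnt/cache | tail -1
```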
    1 point
  14. Thanks to @johnnie.black for this info: Up to Unraid 6.3.5 (mpt3sas driver 13.100.00.00), TRIM works on LSI SAS2 and SAS3 HBAs. Starting with Unraid 6.4.1 (mpt3sas driver 15.100.00.00), TRIM stopped working on SAS2 HBAs like the 9211-8i, 9207-8i, etc., but still works on SAS3 HBAs like the 9300-8i. Note that in all cases, TRIM with LSI HBAs only works on SSDs with deterministic read zeros after TRIM. For SSDs with no deterministic read after TRIM you get the standard TRIM-unsupported error when running fstrim:
the discard operation is not supported
When running fstrim with a SAS2 LSI HBA and the latest drivers on an SSD that does support deterministic read after TRIM, you get a different, more cryptic error:
FITRIM ioctl failed: Remote I/O error
All Samsung consumer SSD models prior to the 860 EVO don't support deterministic reads after TRIM, so if for example you have an 850 EVO it will never be trimmed by an LSI HBA. I believe the PRO models are different, and most support it. You can easily check with hdparm:
OK for LSI HBA:
hdparm -I /dev/sdc | grep TRIM
	* Data Set Management TRIM supported (limit 8 blocks)
	* Deterministic read ZEROs after TRIM
Not OK for LSI HBA:
hdparm -I /dev/sdb | grep TRIM
	* Data Set Management TRIM supported (limit 8 blocks)
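The hdparm check above can be wrapped in a small helper that reads `hdparm -I` output on stdin and reports whether the drive is safe to TRIM behind an LSI HBA. The helper name is my own, not from the post:

```shell
# check_lsi_trim: reads `hdparm -I` output on stdin and reports whether the
# SSD advertises deterministic read ZEROs after TRIM, which LSI HBAs require.
check_lsi_trim() {
  if grep -q "Deterministic read ZEROs after TRIM"; then
    echo "OK for LSI HBA"
  else
    echo "Not OK for LSI HBA"
  fi
}
# Usage (needs root):  hdparm -I /dev/sdc | check_lsi_trim
```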
    1 point
  15. I would suggest the following steps:
- Backup your important data on an external HD/pendrive (if you can, make a SuperDuper/Carbon Copy Cloner image)
- Backup your current Clover image
- Update your Clover image to the latest, as I explained
- Reboot your VM; if the updated Clover works, proceed, otherwise replace the new image with your backup through Unraid shares, and start again
- Set SMBIOS to iMac18,2 (you will not have problems with the MCE error)
- If you have GPU passthrough, I would disable it and enable only VNC
- Reboot and test VNC: I got an error with VNC, stating something like "display is not initialized yet". If this happens you can delete your VM (NOT the disk!!), create a new VM pointing to the actual disk(s), and set all the customizations to start a macOS VM (vmxnet3 for network and additional CPU parameters, as you already did in the past); after that VNC will work again
- If you have problems with resolution or a weird display in VNC, start the VM, press Esc before the Clover HDs display, go into the resolution settings and change the resolution to 1920x1080, save changes, force stop the VM, and start the VM again
- Copy MCEReportDisabler.kext to /Library/Extensions (download it some posts above; it's a post from @Leoyzen) --> this, if you want to later use an SMBIOS of MacPro6,1, iMacPro1,1 or MacPro7,1
- Download Kext Utility to rebuild the cache, or rebuild the cache with terminal
- Boot into your current Mojave and upgrade to Catalina
- After Catalina is installed you can re-enable GPU passthrough and any other modifications you had
- After that you can change SMBIOS to MacPro7,1 if you want
    1 point
  16. Thanks for taking the time! (Just to avoid confusion, this particular issue was first reported by @electron286; I was asking about progress since I bumped into the same issue.) At any rate, I found the problem, at least in my case. My setup includes a floppy drive, which is a platform-connected device, and it seems ScanControllers.cfm's device tree parsing is slightly confused by this. What you get in ls -l /sys/block for this device looks like this:
fd0 -> ../devices/platform/floppy.0/block/fd0
Once instructed to ignore such devices, the scanning process completes happily and everything starts to work flawlessly. Below is a proposed patch to ScanControllers.cfm. Again, thanks for building and maintaining this fantastic tool over the years!
--- ScanControllers.cfm.orig	2019-08-07 06:57:04.000000000 +0300
+++ ScanControllers.cfm	2019-10-09 10:49:20.150512121 +0300
@@ -388,7 +388,7 @@
 <CFFILE action="write" file="#PersistDir#/ls_sysblock.txt" output="#BlockDevices#" addnewline="NO" mode="666">
 <CFLOOP index="i" from="2" to="#ListLen(BlockDevices,Chr(10))#">
 <CFSET CurrLine=ListGetAt(BlockDevices,i,Chr(10))>
-<CFIF FindNoCase("virtual",CurrLine) EQ 0>
+<CFIF FindNoCase("virtual",CurrLine) EQ 0 AND FindNoCase("platform",CurrLine) EQ 0>
 <CFSET DrivePath="/sys/" & ListDeleteAt(ListLast(CurrLine,">"),1,"/")>
 <CFSET CurrDrive=Duplicate(Drive)>
 <CFSET CurrDrive.DevicePath=DrivePath>
    1 point
  17. You don't need to worry about categories or anything; Sab will pass the full path, including the category, through to Sonarr, so long as the host path matches between the two. Right now, Sab says that a download is sitting at /data/category/TVShow.mkv, which works out to a host path of /mnt/user/Downloads/Complete/category/TVShow.mkv. Because the mappings don't match between Sab and Sonarr, Sonarr is effectively looking for the file at /mnt/user/Downloads/Complete/Complete/Sonarr/category/TVShow.mkv, and the file is obviously not there.
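The fix implied above is to give both containers the identical host-to-container mapping, so a path like /data/category/TVShow.mkv resolves to the same host file in each. A hedged sketch of the relevant volume flags (the host path is from the post; the container names and images are placeholders):

```shell
# Both containers map the same host directory to the same container path, so
# the completed-download paths Sab reports are valid inside Sonarr as well.
docker run -d --name=sabnzbd -v /mnt/user/Downloads/Complete:/data linuxserver/sabnzbd
docker run -d --name=sonarr  -v /mnt/user/Downloads/Complete:/data linuxserver/sonarr
```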
    1 point
  18. It's validating your SSL certificate. So the issue was that you turned off the container that updates Cloudflare when you get a new IP.
    1 point
  19. This container does not update your ip on your dns provider. That's your responsibility. This container only does the domain validation through various methods (through cloudflare in your case).
    1 point
  20. I found that it is Clover which breaks vmxnet3 and virtio-net, maybe during Clover's device initialization. After hotplugging the ethernet devices after Clover initialization, I've got much better throughput performance:
1. virtio-net works properly and gets 20Gb/s+ performance (before, it wouldn't work; OSX couldn't get the correct MAC address after Clover handled the devices)
2. vmxnet3 works properly and gets 4Gb/s+ performance (before, only 100Mb/s+ and quite buggy)
I will try to find the specific place where Clover breaks this. Maybe we are close to a much more efficient ethernet option which works properly. Steps to make virtio-net work:
1. Do not add virtio-net in the VM XML
2. Boot the VM; when in the Clover boot selection section, go to 3
3. [IMPORTANT] virsh attach-device --live Catalina /boot/config/ethernet-virtio.xml
4. [IMPORTANT] add Clover boot args: keepsyms=1 debug=0x100
The ethernet-virtio.xml looks like this:
<interface type='bridge'>
  <mac address='52:54:00:d6:ac:37'/>
  <source bridge='br0.100'/>
  <model type='virtio'/>
</interface>
    1 point
  21. I am so happy to see that support of password managers is actually coming that I don't care what it looks like!!!! 😁
    1 point
  22. Sneak peek, Unraid 6.8. The image is a custom "case image" I uploaded.
    1 point
  23. Yep, that's your issue. It looks like Krusader was initially installed as user 'root', group 'root' (or restored?), but your PUID is specifying user 'nobody' (id 99) and PGID is specifying group 'users' (id 100). The best thing to do is stop the container, then delete the file '/mnt/cache/appdata/binhex-krusader/perms.txt', then start the container; this will then set permissions correctly for the PUID and PGID you specified.
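The steps above, as a shell sequence (container and file names are as given in the post):

```shell
# Stop the container, remove the permissions marker file, then restart so the
# container re-applies ownership for the configured PUID/PGID on next start.
docker stop binhex-krusader
rm /mnt/cache/appdata/binhex-krusader/perms.txt
docker start binhex-krusader
```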
    1 point
  24. There should be a checkbox next to the Delete option! What browser are you using? Some recent versions of Chrome have a bug that prevents checkboxes from being handled correctly.
    1 point
  25. Yes, I finally found the way to boot from my flash drive! I used Rufus to format it, then followed the normal procedure. Here's a screenshot (sorry for the French dialog) of the preset I use in Rufus. Hope that can help someone else.
    1 point