Leaderboard

Popular Content

Showing content with the highest reputation since 12/02/21 in all areas

  1. 7 Days to Die "experimental" (how to)
     Docker settings:
     - GAME_ID: 294420 -beta latest_experimental
     - Serverconfig: config.xml
     - Validate Installation: true
     In the tower/appdata/7dtd/ folder, make a copy of the serverconfig.xml file and rename that copy to config.xml. Edit the config file to what you need. Restart the Docker container, and you should be running A20.
     Extra (custom maps): To do this, you first need to generate a map within the game itself.
     - The newly generated map is located here: C:\Users\WindowsUser\AppData\Roaming\7DaysToDie\GeneratedWorlds (this path could be different on your system)
     - Copy that folder (it's called something random) to tower/appdata/7dtd/Data/Worlds/
     Boot the server, and enjoy! (A shell sketch of the two copy steps follows below.)
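     As a minimal shell sketch of the two copy steps, run from an Unraid terminal (the /mnt/user/appdata/7dtd/ path and the world folder name are assumptions; substitute your own):
     cp /mnt/user/appdata/7dtd/serverconfig.xml /mnt/user/appdata/7dtd/config.xml
     cp -r /path/to/GeneratedWorlds/YourRandomWorldName /mnt/user/appdata/7dtd/Data/Worlds/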
    4 points
  2. HOUSTON WE HAVE LIFT OFF !! LOOK AT THESE BEAUTIES!!!
     IOMMU group 18: [11f8:8001] 0b:00.0 Serial Attached SCSI controller: PMC-Sierra Inc. Device 8001 (rev 05)
     [11:0:0:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdc 4.00TB
     [11:0:1:0] disk NETAPP X477_HAKPE04TA07 NA00 /dev/sdd 4.00TB
     [11:0:2:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sde 4.00TB
     [11:0:3:0] disk NETAPP X477_HMKPX04TA07 NA01 /dev/sdf 4.00TB
     [11:0:4:0] disk NETAPP X477_SMEGX04TA07 NA03 /dev/sdg 4.00TB
     [11:0:5:0] disk NETAPP X477_HAKPE04TA07 NA00 /dev/sdh 4.00TB
     [11:0:6:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdi 4.00TB
     [11:0:7:0] disk NETAPP X477_WVRDX04TA07 NA02 /dev/sdj 4.00TB
     [11:0:8:0] disk NETAPP X477_WVRDX04TA07 NA02 /dev/sdk 4.00TB
     [11:0:9:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdl 4.00TB
     [11:0:10:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdm 4.00TB
     [11:0:11:0] disk NETAPP X477_WVRDX04TA07 NA02 /dev/sdn 4.00TB
     [11:0:12:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdo 4.00TB
     [11:0:13:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdp 4.00TB
     [11:0:14:0] disk NETAPP X477_SMEGX04TA07 NA03 /dev/sdq 4.00TB
     [11:0:15:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdr 4.00TB
     [11:0:16:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sds 4.00TB
     [11:0:17:0] disk NETAPP X477_HMKPX04TA07 NA01 /dev/sdt 4.00TB
     [11:0:18:0] disk NETAPP X477_SMEGX04TA07 NA03 /dev/sdu 4.00TB
     [11:0:19:0] disk NETAPP X477_SMEGX04TA07 NA03 /dev/sdv 4.00TB
     [11:0:20:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdw 4.00TB
     [11:0:21:0] disk NETAPP X477_SMEGX04TA07 NA03 /dev/sdx 4.00TB
     [11:0:22:0] disk NETAPP X477_HAKPE04TA07 NA00 /dev/sdy 4.00TB
     [11:0:23:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdz 4.00TB
     IOMMU group 19: [11f8:8001] 0c:00.0 Serial Attached SCSI controller: PMC-Sierra Inc. Device 8001 (rev 05)
     [12:0:0:0] disk NETAPP X477_WVRDX04TA07 NA02 /dev/sdaa 4.00TB
     [12:0:1:0] disk NETAPP X477_HMKPX04TA07 NA01 /dev/sdab 4.00TB
     [12:0:2:0] disk NETAPP X316_HAKPE06TA07 NA00 /dev/sdac 6.00TB
     [12:0:3:0] disk NETAPP X477_SMEGX04TA07 NA03 /dev/sdad 4.00TB
     [12:0:4:0] disk NETAPP X477_SMEGX04TA07 NA03 /dev/sdae 4.00TB
     [12:0:5:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdaf 4.00TB
     [12:0:6:0] disk NETAPP X477_HAKPE04TA07 NA00 /dev/sdag 4.00TB
     [12:0:7:0] disk NETAPP X477_WVRDX04TA07 NA02 /dev/sdah 4.00TB
     [12:0:8:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdai 4.00TB
     [12:0:9:0] disk NETAPP X477_HAKPE04TA07 NA00 /dev/sdaj 4.00TB
     [12:0:10:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdak 4.00TB
     [12:0:11:0] disk NETAPP X477_HMKPX04TA07 NA01 /dev/sdal 4.00TB
     [12:0:12:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdam 4.00TB
     [12:0:13:0] disk NETAPP X316_HARIH06TA07 NA00 /dev/sdan 6.00TB
     [12:0:14:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdao 4.00TB
     [12:0:15:0] disk NETAPP X477_SMEGX04TA07 NA03 /dev/sdap 4.00TB
     [12:0:16:0] disk NETAPP X477_HAKPE04TA07 NA00 /dev/sdaq 4.00TB
     [12:0:17:0] disk NETAPP X477_SMEGX04TA07 NA03 /dev/sdar 4.00TB
     [12:0:18:0] disk NETAPP X477_WVRDX04TA07 NA02 /dev/sdas 4.00TB
     [12:0:19:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdat 4.00TB
     [12:0:20:0] disk NETAPP X477_HMKPX04TA07 NA01 /dev/sdau 4.00TB
     [12:0:21:0] disk NETAPP X477_SMKRE04TA07 NA01 /dev/sdav 4.00TB
     [12:0:22:0] disk NETAPP X477_HAKPE04TA07 NA00 /dev/sdaw 4.00TB
     [12:0:23:0] disk NETAPP X316_HARIH06TA07 NA00 /dev/sdax 6.00TB
     Thanks guys, feel free to pin this or add it to the knowledge base along with the original patch thread that I was not able to find in the first place, LOL. Will be purchasing a license by Friday at the latest; need to configure a bunch of stuff. Cheers.
    3 points
  3. Probably this:
     Dec 2 05:39:35 Tower root: Creating new image file: /mnt/user/system/docker/docker.img size: 18000G
     Dec 2 05:39:35 Tower root: touch: cannot touch '/mnt/user/system/docker/docker.img': No space left on device
     You're attempting to create an 18T docker image. Go to Settings > Docker and set it to something reasonable (like 20G). (A quick terminal check is sketched below.)
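     Before re-creating it at the smaller size, a quick sanity check from the terminal (paths taken from the log above) shows whether a partial image was left behind and how much space the pool actually has:
     ls -lh /mnt/user/system/docker/docker.img
     df -h /mnt/user/system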
    3 points
  4. Do you plan to release an update for the latest XMrig version (currently v6.16.2)?
    2 points
  5. Thanks SO MUCH for the help guys. Let's call this one SOLVED! I was able to use a modified version of the cmd string above to get everything whipped back into shape. For anyone finding this later, and being clueless like me, this is how it shook out...
     1) Stop all VMs.
     2) Manually move (I used Krusader) the vdisk images for each of your VMs to some location that has space enough for them. If you have room for 2 copies of them on your cache drive, then you can omit this step. I also copied the entire folder (VM name) to help me keep track of which VM I was moving at the time.
     3) Once the file(s) are moved, and you have some breathing room on your cache drive, open a terminal and use the following command (edited for your pathing of course):
     cp -n -r --sparse=always /mnt/disk#/Sourcefoldername /mnt/cache/domains
     Note: If you have any spaces in your VM names (like I did), then you have to use "\ " (backslash space) between the words [eg. "Windows 10 VM" would be "Windows\ 10\ VM"].
     4) Wait for the copy to complete. Once the terminal is back at the ready prompt, you should be able to use your method of choice to confirm that the vdisk is back in your domains share where it belongs, and is now much smaller. I used QDirStat, but there are lots of other ways.
     5) Before restarting the VM, edit its properties, flip over to XML mode (switch at top right corner), then add discard='unmap' to the location shown in this post (see the sketch after this list). If you're running a fairly recent version of qemu, you won't need to jump through all those other hoops, and can just add that one bit of code and be done.
     6) Start the VM(s).
     7) If it's Windows 10, these next tasks may help. For any other OS, I can't say much...
     A. Go to This PC, right click Local Disk, select {Properties}, [General] tab, then the [Disk Cleanup] button (next to the usage graph). Click the [Clean up System Files] button and let it reload. Choose what you'd like to clean, and let it run. I recommend the Windows Update files for sure; they're likely to be large. When that's done (might take a bit)...
     B. Go to This PC, right click Local Disk, select {Properties}, [Tools] tab, then the [Optimize] button. In that dialog, verify that "Media Type" is shown as "Thin provisioned drive". Click the [Optimize] button. Wait some more...
     Rejoice! You have your space back, and your VMs should be working fine again. Thanks again to those who patiently led me to the info I needed to get this sorted out. This really is a fantastic community!
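     For reference, a minimal sketch of where that attribute lives in the VM's XML (the disk path, cache mode and bus here are assumptions for illustration; the only addition is discard='unmap' on the <driver> line):
     <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
       <source file='/mnt/user/domains/Windows 10 VM/vdisk1.img'/>
       <target dev='hdc' bus='virtio'/>
     </disk>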
    2 points
  6. After updating your Authelia to v4.33.1, you'll probably see the error "Can't continue due to the errors loading the configuration". To solve the issue, edit the configuration yaml file and add a new encryption_key key under storage:
     storage:
       local:
         path: /config/db.sqlite3 # this is your database. You could use a mysql database if you wanted, but we're going to use this one.
       encryption_key: you_must_generate_a_random_string_of_more_than_twenty_chars_and_configure_this
     Hope this helps (see below for one way to generate the key).
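     One easy way to generate a suitable random value (a sketch; any random string longer than twenty characters will do):
     openssl rand -hex 32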
    2 points
  7. Now, if it's only this one channel, quick & dirty but working: choose the quality you would like as input. As a sample, I stripped your file down to 1080p only (the highest quality):
     #EXTM3U
     #EXTINF:0,master.m3u8
     #EXTVLCOPT:network-caching=1000
     https://rbmn-live.akamaized.net/hls/live/2002830/geoSTVDEweb/master_6692.m3u8
     Like this I just drive around the ABR/AR I explained above... Now import this file into xteve, assign an EPG in case you use xepg, update the Plex playlist, and enjoy your channel in Plex...
     2021-12-05 22:10:54 [xTeVe] Streaming Status: Playlist: test - Tuner: 1 / 1
     2021-12-05 22:10:54 [xTeVe] Streaming URL: https://rbmn-live.akamaized.net/hls/live/2002830/geoSTVDEweb/master_6692.m3u8
     You can also use a lower resolution; look at your 4 available streams... in case 1080p is too high, it's using a pretty high bitrate here... You can also check which buffer fits best in terms of reliability: xteve, ffmpeg, or maybe even vlc; I would test through them in that order.
    2 points
  8. For anyone who missed the sale, sign up to the monthly newsletter and keep up with news. https://unraid.net/blog/unraid-monthly-digest This is the first time Unraid has offered discounts and it might have been considered enough of a success that they will do it again
    2 points
  9. Sure thing. @Torsten_UN - please send us a ticket to https://unraid.net/contact with a screenshot of the issue and I will assist.
    2 points
  10. Ctrl + p to screenshot the current screen. The output file corefreq-xxxxxx can then be read with the command less -R
     Alt + p to record everything as an ascii-cinema. Video duration can be set in the Settings menu.
    2 points
  11. Did you reboot after getting the error? Not seeing any issues so far.
    2 points
  12. @Reptar yes. So I had to downgrade the OnlyOffice plugin. This helped me: https://help.nextcloud.com/t/error-after-upgrading-app/126540/17 Only the command in step 5 isn't right. Maybe this is the right one:
     find ./onlyoffice -type d -exec chmod 0750 {} \;
     find ./onlyoffice -type f -exec chmod 0640 {} \;
    2 points
  13. The stock Fractal fans are not bad. Just tonight I pulled mine out of my desk rig for a cleaning; they are 3 years old and still work fine. I do have all Noctua in the NAS, though; it always gets the better parts.
    2 points
  14. Below I include my Unraid (Version: 6.10.0-rc1) "Samba extra configuration". This configuration is working well for me accessing Unraid shares from macOS Monterey 12.0.1, and I expect these configuration parameters will work okay for Unraid 6.9.2. The "veto" commands speed up performance to macOS by disabling Finder features (labels/tags, folder/directory views, custom icons etc.), so you might like to include or exclude these lines per your requirements. Note, there are problems with samba version 4.15.0 in Unraid 6.10.0-rc2 causing unexpected dropped SMB connections (behavior like this should be anticipated in pre-release), but fixes are expected in future releases. This configuration is based on a Samba configuration recommended for macOS users from 45Drives here: KB450114 – MacOS Samba Optimization.
     #unassigned_devices_start
     #Unassigned devices share includes
     include = /tmp/unassigned.devices/smb-settings.conf
     #unassigned_devices_end
     [global]
     vfs objects = catia fruit streams_xattr
     fruit:nfs_aces = no
     fruit:zero_file_id = yes
     fruit:metadata = stream
     fruit:encoding = native
     spotlight backend = tracker
     [data01]
     path = /mnt/user/data01
     veto files = /._*/.DS_Store/
     delete veto files = yes
     spotlight = yes
     My Unraid share is "data01". Give attention to modifying the configuration for your particular shares (and other requirements). I hope providing this might help others to troubleshoot and optimize SMB for macOS.
    2 points
  15. Would be better to do both read and write options to stress test the drive. I usually do 3 complete cycles.
    1 point
  16. EDIT2: Just leave the core count untouched. As long as you aren't isolating those CPU cores from Unraid, Linux should be able to use the cores even though there is a VM running on them (since the VM is idle). Changing memory should not trip the WGA activation message.
    1 point
  17. Even if you are using QSV in your HandBrake profile, audio, subtitles, etc. will always be transcoded by the CPU and not the GPU. This looks pretty normal to me. I often see CPU usage in the 90+% range without using QSV, and in the 60+% range when using a preset with QSV.
    1 point
  18. Watchtower updated the container image last night and was able to successfully start up the container. I have the FORCE_UPDATE flag set to true in my docker-compose.xml, so it all looks good now. Thanks again ich777! You do excellent work!
    1 point
  19. Yes, and see if something changed... if not, just add it again and reboot. In general it's always (or at least it should be) better to have the latest BIOS version.
    1 point
  20. Nice! I really don't understand why that suspend option for USB is enabled by default... Another thing I spent over 2 hours on: the VM never completely shut down on my setup, and in the end I found that it's due to fast boot, enabled by default (energy settings). Microsoft, if I want to shut down, I don't want to hibernate or something like this!! And obviously if I plug in a pendrive I'm expecting to see it, not to scan for new hardware... BTW, those energy settings for USB are also not 'easily' accessible...
    1 point
  21. No, not the traffic, but the configuration and monitoring... imagine you are the admin of 25+ branch offices, each with 4-10+ switches.
     Yes, because there are bugs, but also new features. An example is compatibility with SFP+ modules, or features that are present in the hardware but haven't been "unlocked" yet. On the MikroTik CRS3xx, for instance, hardware acceleration for L3 has only existed since RouterOS v7.1rc2... and 7.1 only came out this week... the "normal" version is still v6.49... and yes, it makes a difference how long and how often you get updates. Have a look here at MikroTik to see what's going on: https://mikrotik.com/download/changelogs/ ...and that something ships so often is an *advantage*, and that there is a changelog at all is a matter of transparency and professionalism. RouterOS isn't open, but it is Linux based... D-Links and the like are, by comparison, disposable mass-market goods... only the bare essentials get fixed... but in the consumer space a lot of it never even gets noticed anyway.
     ...only if YOU need 10GB-T modules. A DAC is no problem... you can populate the CRS309 with 8x SFP fully without cooling, or with up to 6x 10GB-T modules. If the Netgear can't do VLANs, you can simply hang that whole switch off an access port of the big one. There are also other nice satellites: https://geizhals.de/mikrotik-cloud-smart-switch-css610-desktop-gigabit-smart-switch-css610-8g-2s-in-a2379806.html But for that load it will of course do. In my opinion, MikroTik has had the best price/performance ratio for some years now. And if you ever grow tired of your FritzBox and want a "real" router, you'll find one there too.
     A switch with 4x SFP+ is nonsense... neither fish nor fowl. If you're looking for a 48-porter that can actually do something, take the CRS354: https://geizhals.de/mikrotik-cloud-router-switch-crs354-rackmount-gigabit-managed-switch-crs354-48g-4s-2q-rm-a2216037.html ...that one has 2x QSFP+ *and* 4x SFP+ on top... but also 3 fans. If you don't need 10G right away, I would wait... then add the ports as dedicated devices later (CRS305, CRS309 or CRS317, ...). You can never have enough ports. Maybe at some point you'll also want to run 10G to the wall outlets... then you may need something in copper instead of SFP+, if it's more than just a few ports.
     You can often borrow one from your local energy provider / grid operator... otherwise, as a workaround, calibrate a smart plug (10-15 EUR) against a real 60/100W incandescent bulb.
    1 point
  22. https://magic-8ball.com/ Will give you the correct answer
    1 point
  23. OK, will do. Thank you so much for all of your help! I'll be heading to bed; I'll check back tomorrow.
    1 point
  24. Similar to a memtest, those types of testers (I have a couple) only catch the really bad errors. If you leave all the drive connectors connected to drives while using the tester, you can get some idea of the loaded voltages, but not very accurately. It really won't help find a PSU that only flakes out under heavy load, since it only measures unloaded voltage. To really see what's going on, you need an oscilloscope with multiple probes graphing voltage over time under varying load. Or, just swap PSUs and see if it fixes things.
    1 point
  25. Trying to compose up this yaml with this .env file. I'm getting this error: "unexpected character "-" in variable name near "- ./data/db:/data/db\n\n # Redis server\n redis:\n image: eqalpha/keydb\n restart: always\n\n # REVOLT API server (Delta)\n api:\n image: revoltchat/server\n env_file: .env\n depends_on:\n\t- database\n\t- redis\n environment:\n\t- REVOLT_MONGO_URI=mongodb://database\n\t- REVOLT_REDIS_URI=redis://redis/\n ports:\n\t- \"8000:8000\"\n\t- \"9000:9000\"\n restart: always\n\n # REVOLT Web App\n web:\n image: revoltchat/client:master\n env_file: .env\n ports:\n - \"5000:5000\"\n restart: always\n\n # S3-compatible storage server\n minio:\n image: minio/minio\n command: server /data\n env_file: .env\n volumes:\n - ./data/minio:/data\n ports:\n - \"10000:9000\"\n restart: always\n\n # Create buckets for minio.\n createbuckets:\n image: minio/mc\n depends_on:\n - minio\n env_file: .env\n entrypoint: >\n /bin/sh -c \"\n while ! curl -s --output /dev/null --connectmeout 1 http://minio:9000; do echo 'Waiting minio...' && sleep 0.1; done;\n /usr/bin/mc alias set minio http://minio:9000 $MINIO_ROOT_USER $MINIO_ROOT_PASSWORD;\n /usr/bin/mc mb minio/attachments;\n /usr/bin/mc mb minio/avatars;\n /usr/bin/mc mb minio/backgrounds;\n /usr/bin/mc mb minio/icons;\n /usr/bin/mc mb minio/banners;\n exit 0;\n \"\n # REVOLT file hosting service (Autumn)\n autumn:\n image: revoltchat/autumn\n env_file: .env\n depends_on:\n - database\n - createbuckets\n environment:\n - AUTUMN_MONGO_URI=mongodb://database\n ports:\n - \"3000:3000\"\n restart: always\n\n # REVOLT metadata and image proxy (January)\n january:\n image: revoltchat/january\n ports:\n - \"7000:3000\"\n restart: always\n"". I think essentially what it's trying to say is that it doesn't like the lists in the yaml file denoted by hyphens. Any ideas? Edit: Never mind, I'm an idiot. Figured it out: I had copied the yaml into .env. :/ Remember: always mash Ctrl+C.
    1 point
  26. @jj_uk did you try googling it yet? <wink> The broad strokes are: download the 1.18 Minecraft jar (I think either the Paper version or official vanilla); I like to rename that jar to xxxx-1.18.jar; update the server settings to the new server jar file name; restart the server (rough sketch below). As the logs are flipping by, you'll see the 1.18 server. Once it's started, the status will show 1.18.
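     A rough shell sketch of that jar swap (the file names and server directory are assumptions; download the actual 1.18 build from the Paper or Mojang site first):
     cd /path/to/minecraft-server
     mv ~/Downloads/paper-1.18.jar ./paper-1.18.jar   # hypothetical file name
     # then point the server's jar-file setting at paper-1.18.jar and restart the server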
    1 point
  27. thanks @ich777, I guess I have to try it manually then. thanks so much for your reply
    1 point
  28. Community Applications (aka CA)
     This thread is rather long, and it is NOT necessary to read it in order to utilize Community Applications (CA). Just install the plugin, go to the Apps tab and enjoy the freedom. If you find an issue with CA, then don't bother searching for answers in this thread, as all issues (when they have surfaced) are generally fixed the same day that they are found... (But at least read the preceding post or two on the last page of the thread.) Simple interface and easy to use, you will be able to find and install any of the unRaid docker or plugin applications, and also optionally gain access to the entire library of applications available on dockerHub (~1.8 million).
     INSTALLATION
     To install this plugin, paste the following URL into the Plugins / Install Plugin section:
     https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
     (When running Unraid 6.10+, if CA is not installed the Apps tab will still appear. Go to the tab and click "Install".)
     After installation, a new tab called "Apps" will appear on your unRaid webGUI. To see what the various icons do, simply press Help or the (?) on unRaid's tab bar.
     Note: All screenshots in this post are subject to change as Community Applications continues to evolve.
     - Easily search or browse applications
     - Get full details on the application
     - Easily reinstall previously installed applications
     - Find out about your favourite authors
     - And much, much more
     Multi-Language Installations
     When running on a version of Unraid that supports Multi-Language (6.9.0+), CA is the recommended way to install any of the Language Packs available. See this post for more detail.
     Note that CA is always (and always will be) compatible with the latest Stable version of unRaid, and the Latest/Next version of unRaid. Intermediate versions of various Release Candidates may or may not be compatible (though they usually are - but if you have made the decision to run unRaid Next, then you should also ensure that all plugins and unRaid itself (not just CA) are always up to date). Additionally, every attempt is made to keep CA compatible with older versions of unRaid. As of this writing, CA is compatible with all versions of unRaid from 6.9.0 onward.
     Require a proxy? See this post for CA to operate through a proxy.
     Cookie Note: CA utilizes cookies in its regular operation. Some features of CA may not be available if cookies are not enabled in your browser. No personally identifiable information is ever collected, no cookies related to any software or media stored on your server are ever collected, and none of the cookies are ever transmitted anywhere. Cookies related to the "Look & Feel" of Community Applications will expire after a year. Any other cookies related to the operation of CA are automatically deleted after they are used.
     Multi-language Note: When running on a version of unRaid that supports multi-language, CA will operate in the language of your choice. However, translations of the descriptions of the applications themselves are outside the scope of the translations, and will always appear in whatever language the author themselves has dictated (ie: English). Additionally, CA supports translations on the spotlighted apps "Reason". Translations can be submitted against https://github.com/Squidly271/Community-Applications-Moderators/blob/master/Recommended.json if you wish to contribute.
     Contribute towards development (or simply buy me a beer)
     Credits
     Development: Andrew Zawadzki
     Additional Contributions: bonienl, eschultz
     GUI Layout Design: Mex
     Application Feed: Andrew Zawadzki, Kode, Limetech
     Additional Testing: CHBMB, SpaceInvaderOne, Sparklyballs, wgstarks, DJoss, Zer0Nin3r, Mex, prostuff1, bonienl, ljm42, kizer, trurl, Jos, Limetech, SimonF, ich777, jimmy898, Alex.b, neruve, Eugeni_CAT, ChaseCares, TheEyeTGuy
     Moderation: dockerPolice, pluginCop
     Additional Libraries: Awesomeplete (Lea Verou), Chart.js (Various), XML2Array, Array2XML (Miles Johnson), chartjs-plugin-trendline (Marcus Alsterfjord), sprintf.js (Alexandru Mărășteanu), Magnific-Popup (Dmitry Semenov)
     Copyright © 2015-2021 Andrew Zawadzki
     For the details regarding the various policies that Community Applications has regarding applications, see here.
    1 point
  29. Like mentioned several times here, it won't work on any Alpine-based docker, so also not on the official NC Alpine-based docker. The same goes for the OnlyOffice integration. So in case you want to use an Alpine-based NC (like this one), you need to run a separate office docker.
    1 point
  30. You enabled multifunction in your snippet of code on the video portion, and that's OK, but the audio part was on a different bus. In a multifunction device, the different pieces belonging to that device sit on the same bus and same slot, but a different function. You can see inside the <source> the address of the gpu in the host system (unraid):
     video part is at 04:00.0 (bus 4, slot 0, function 0)
     audio part is at 04:00.1 (bus 4, slot 0, function 1)
     Every recent gpu is a multifunction device. The <address> lines outside <source> are the addresses in the virtual machine; in this case mac os will see the gpu at:
     video part is at 04:00.0 (bus 4, slot 0, function 0)
     audio part is at 05:00.0 (bus 5, slot 0, function 0) <-- WRONG, this was your code
     modified to:
     audio part is at 04:00.1 (bus 4, slot 0, function 1)
     (A sketch of the corrected XML follows below.)
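     For illustration, a minimal sketch of the corrected addressing in the VM XML (the hostdev entries are trimmed to the relevant lines; the host bus 0x04 values come from the post above):
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source>
         <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> <!-- GPU video on the host -->
       </source>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/> <!-- guest address -->
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source>
         <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/> <!-- GPU audio on the host -->
       </source>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/> <!-- same guest bus/slot, function 1 -->
     </hostdev>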
    1 point
  31. I wanted to report that my service is working after trying and failing at every mentioned configuration file listed in all the documents and discussions. The first change I noticed was that, while I was attempting different configurations, an "error socket" message started popping up in red, which was NEW code. When I left, nothing still worked, so I cleared everything and re-installed the docker. With a new docker install and a new SSL cert/config file, I left for the day at 4pm without testing, then returned home at 10pm. The socket issue was resolved, and I can only assume the developer(s) fixed it. I have done nothing different on the new install, so I assume the code has been updated/improved. I've said it before and I will say it again... keep up the great work! So audiobookshelf is working using the default settings, and here is my setup:
     Installed on Unraid 6.9
     ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (WI-FI), Version Rev X.0x
     American Megatrends Inc., Version 2702
     AMD Ryzen 9 3900X 12-Core @ 3800 MHz
     NginxProxyManager generated config file, which is NOW working. Proxy setup: websockets support enabled, nothing else; SSL generated, but I didn't enable Force SSL.
     When it comes to the Nginx Proxy Manager, I most likely could also enable Cache Assets and Block Common Exploits, but it is working for now so I will hold off. I also normally Force SSL but haven't as of yet... again, I will enjoy the server for a while before updating my config file. I just wanted to explain my setup in case it helps anyone. Server is working.
    1 point
  32. Dec 4 05:29:51 <server> kernel: Modules linked in: vhost_net tun vhost vhost_iotlb tap kvm_intel kvm macvlan xt_nat veth nf_conntrack_netlink xt_addrtype br_netfilter md4 sha512_ssse3 sha512_generic cmac cifs libarc4 xt_CHECKSUM xt_MASQUERADE xt_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables nfnetlink dm_crypt dm_mod dax md_mod i915 iosf_mbi drm_kms_helper drm intel_gtt agpgart syscopyarea sysfillrect sysimgblt fb_sys_fops ip6table_filter ip6_tables iptable_filter ip_tables x_tables bonding igb i2c_algo_bit ipmi_ssif wmi_bmof x86_pkg_temp_thermal intel_powerclamp coretemp crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper rapl intel_cstate intel_uncore i2c_i801 i2c_smbus i2c_core nvme nvme_core cdc_acm ahci libahci intel_pch_thermal fan thermal wmi video backlight acpi_ipmi ipmi_si acpi_power_meter acpi_pad button [last unloaded: tun]
    1 point
  33. ^^^This. Wait for the docker container to be updated. Attempting to update the Plex server version within the container can get messy. Best to let container updates handle that.
    1 point
  34. As you see, this is a real m3u8 with ABR and AR, which cannot be handled properly for the purposes of xteve without a complete re-encoding for use in Plex. ABR (Adaptive Bitrate) and AR (Adaptive Resolution) are not formats which can simply be remuxed to a .ts stream. VLC would play them, yes, but a client like Plex, for example, will simply stop the playback as soon as a switch comes in. So, in short: nope, not supported.
    1 point
  35. @SpencerJ can you help here? The option to buy the Basic license is greyed out, do you know what causes this? There are actually only 6 disks (3 array and 3 cache) assigned.
    1 point
  36. I solved it; the reverse proxy was fine after all. For example, the address https://c.b.com:1234 by itself cannot reach Unraid; you need to append login, making it https://c.b.com:1234/login, to access it correctly!!!!!
    1 point
  37. What is the proper way to set these scripts up so that the mounts do not hold the array open when trying to stop it / reboot? Maybe I have missed a step, but I am having to ps -ax | grep rclone and kill the IDs, and then kill the mounts with fusermount -uz. What am I missing?
    1 point
  38. It's already in the issue tracker, it seems: https://github.com/hexparrot/mineos-node/issues/446
    1 point
  39. Hello, I was running into the same problems as @skois and @blaine07 after a CODE / Collabora update. Finding the solution was a pretty frustrating journey; it was buried in a forum post somewhere far along the trail. I could not find the solution in here either, but I might have read over it, as the search options on this forum are not optimal, so please correct me if this is a double post. THIS is what worked for me: post collabora/code:6.4.9.3, there have been some changes that need to be adjusted in your nginx collabora config and in the advanced settings of your docker template.
     Change in appdata\letsencrypt\nginx\proxy-confs\collabora.subdomain.conf all instances of:
     - "loleaflet" to "browser"
     - "lool" to "cool"
     (A one-liner for this is sketched below.)
     Change the docker template in advanced view:
     Web UI "https://[IP]:[PORT:9980]/loleaflet/dist/admin/admin.html" to "https://[IP]:[PORT:9980]/browser/dist/admin/admin.html"
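     If you prefer not to hand-edit, a one-liner sketch of the conf change (assumption: run from the proxy-confs folder; the -i.bak keeps a backup copy of the original file):
     sed -i.bak -e 's/loleaflet/browser/g' -e 's/lool/cool/g' collabora.subdomain.conf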
    1 point
  40. It might be worth installing the Parity Check History plugin even if you do not intend to use its capability to split checks up into increments, as one of its other features is that parity history entries will have the type of check that was run added as additional information.
    1 point
  41. Ok - I found a MUCH easier way..... After changing goaccess.conf to be:
     time-format %T
     date-format %d/%b/%Y
     log-format [%d:%t %^] %^ %^ %s - %m %^ %v "%U" [Client %h] [Length %b] [Gzip %^] [Sent-to %^] "%u" "%R"
     log-file /opt/log/proxy_logs.log
     simply add the following line to each proxy host in NGINX Proxy Manager - Official, under "advanced" (if you already have advanced stuff there, add the line at the VERY top):
     access_log /data/logs/proxy_logs.log proxy;
     Now they all log to the same file in the same format. Simply add the line to all proxy hosts, and remember to add it to any new ones.
    1 point
  42. Drive encryption is one of Unraid's many good features. When you encrypt part or all of your array and cache, at some point you might end up wanting to change your unlock key. Just how often would depend on your threat model (and on your level of paranoia). At this time (6.8), Unraid does not have a UI for changing the unlock key. Here is a small tool that will let you change your unlock key. Each of the current and new unlock keys can either be a text password / passphrase, or a binary key file if you're into those (I am). Your array must be started to use this tool. Essentially, the script validates the provided current key against your drives, and on all drives that can be unlocked with the current key, replaces it with the new one (in fact, it adds the new key to all of them, and upon success, removes the old key from all of them).
     Important: The tool does not save the provided new (replacement) key on permanent storage. Make very sure you have it backed up, either in memory (...) or on some permanent storage (not on the encrypted array 😜 ). If you misplace the new key, your data is hosed.
     Currently this script needs to be run from the command line. I may turn it into a plugin if there's enough interest (and time) - although I'm pretty sure Limetech has this feature on their radar for some upcoming version.
     Usage:
     unraid-newenckey [current-key-file] [new-key-file]
     Both positional arguments are optional and may be omitted. If provided, each of them is either the name of a file (containing a passphrase or a binary key), or a single dash (-). For each of the arguments, if it is either omitted or specified as a dash, the respective key will be prompted for interactively.
     Note: if you provide a key file with a passphrase you later intend to use interactively when starting the array (the typical use case on Unraid), make sure the file does not contain an ending newline. One good way to do that is to use "echo -n", e.g.:
     echo -n "My Good PassPhrase" > /tmp/mykeyfile
     This code has been tested, but no warranty is expressed or implied. Use at your own risk. With the above out of the way, please report any issues.
     EDIT 2021-08-16: Posted an updated version for Unraid 6.10. The 6.10 OS includes an updated "lsblk" command which is not backwards compatible.
     unraid-newenckey
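     For example, per the usage above, an invocation that prompts for the current passphrase interactively and reads the new key from a file (/tmp/mykeyfile is just the example file from the note) would look like:
     unraid-newenckey - /tmp/mykeyfile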
    1 point
  43. It's been a while, but phpipam works on Unraid. I have written a guide for it. It's in German, but I think you can take the most important things from it. https://knilixun.wordpress.com/phpipam/ Best regards
    1 point
  44. No. You can get the container sizes right from the docker tab (Container Size button) to see which app is taking up which space (not quite 100% accurate, but will give a good idea)
    1 point
  45. I underestimated myself, because I actually managed to get Unraid to use the Docker container's OpenVPN tunnel. Below is how I did it, in case it can help someone. Please let me know if so. Best, OP
     ====================
     We will assume that:
     - you already have a running OpenVPN server on your remote network
     - you already have a working .ovpn profile to connect to that server
     - you already managed to get the dperson/openvpn-client Docker container up and running with this .ovpn profile
     - the local network is 192.168.100.0/24
     - the remote network is 192.168.200.0/24
     1. Create a new docker network, e.g. (in the Unraid terminal console):
     docker network create --subnet=172.19.0.0/16 openvpntunnel
     2. Set the dperson/openvpn-client Docker container's "Network Type" to: "custom : openvpntunnel"
     3. Set the dperson/openvpn-client Docker container's fixed IP address to: "172.19.0.100"
     4. Add a "route" Post Argument to the dperson/openvpn-client Docker container profile, pointing to the local network on which the Unraid machine is:
     -r 192.168.100.0/24
     Note that you need to turn on the "advanced view" in the Docker container configuration page in order to set a Post Argument (seems no longer required).
     5. Add a route to the Unraid routing table (in network settings) to access your remote network through the OpenVPN tunnel:
     - set "192.168.200.0/24" as the "IPv4:nn route"
     - set "172.19.0.100" as the "Gateway address"
     - set "1" as the "Metric"
     You can now open a terminal in Unraid and try to ping a machine on the remote network (e.g. 192.168.200.21) to see if the link is alive.
     6. Once you have checked that everything works, make the route persistent across reboots by running the following script upon each start of your array (this can easily be done with the excellent "userscripts" plugin by Andrew Zawadzki (@Squid), for example):
     #!/bin/bash
     sleep 5
     ip route add 192.168.200.0/24 via 172.19.0.100
    1 point
  46. If that were the only problem, you might get away with using them in an array that didn't use parity, since there would be nothing for them to be out of sync with. But not having parity is going to be a total non-starter.
    1 point
  47. Fantastic, it works! Thanks for the help and the explanation! (Oops, forgot to press post on this yesterday, sorry)
    1 point
  48. There should be a logical explanation. I just want to know if other people on 6.7 see the same speeds. For example, what write speed do you see when running the following command from the Unraid terminal?
     dd if=/dev/zero of=/mnt/user/appdata/test1 bs=1024 count=10240000
    1 point