Leaderboard
Popular Content
Showing content with the highest reputation since 08/24/23 in all areas
-
The 6.12.4 release includes a fix for macvlan call traces(!) along with other features, bug fixes, and security patches. All users are encouraged to upgrade. Please refer also to the 6.12.0 Announcement post.

Upgrade steps for this release:
1. As always, prior to upgrading, create a backup of your USB flash device: "Main/Flash/Flash Device Settings" - click "Flash Backup".
2. Update all of your plugins. This is critical for the NVIDIA and Realtek plugins in particular.
3. If the system is currently running 6.12.0 - 6.12.3, we're going to suggest that you stop the array at this point. If it gets stuck on "Retry unmounting shares", open a web terminal and type:
   umount /var/lib/docker
   The array should now stop successfully. (This issue was thought to be resolved with 6.12.3, but some systems are still having issues.)
4. Go to Tools -> Update OS. If the update doesn't show, click "Check for Updates".
5. Wait for the update to download and install.
6. If you have any plugins that install 3rd party drivers (NVIDIA, Realtek, etc), wait for the notification that the new version of the driver has been downloaded.
7. Reboot.

Special thanks to all our contributors and beta testers and especially:
@bonienl for finding a solution to the macvlan problem!
@SimonF for bringing us the new System Drivers page

This thread is perfect for quick questions or comments, but if you suspect there will be back and forth for your specific issue, please start a new topic. Be sure to include your diagnostics.zip.
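If the array hangs on "Retry unmounting shares", it can help to first check whether the Docker path mentioned above is in fact still mounted; a minimal sketch for the web terminal, using the same path as in the steps above:

# unmount only if the Docker loopback is still mounted
mountpoint -q /var/lib/docker && umount /var/lib/docker
# the array should then finish stopping from the webGUI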
-
Thank you for the feedback on the previous rc release, we have one more small set of updates to test before releasing 6.12.4. Highlights include:
- Resolved an issue with VMs on the macvtap interface not being able to connect to the Internet
- Additional IPv6 improvements
- The delay before auto-closing notifications is now configurable (see Settings/Notification Settings)
- Fix for the custom network DHCP subnet options on the Settings/Docker Settings page

If you are already on 6.12.4-rc18 this should be a simple update, no need to change any settings. If you are coming from an earlier release, please see the 6.12.4-rc18 announce post for info on how to solve macvlan issues and other changes.

Upgrade steps for this release:
1. As always, prior to upgrading, create a backup of your USB flash device: "Main/Flash/Flash Device Settings" - click "Flash Backup".
2. Update all of your plugins. This is critical for the NVIDIA and Realtek plugins in particular.
3. If the system is currently running 6.12.0 - 6.12.2, we're going to suggest that you stop the array at this point. If it gets stuck on "Retry unmounting shares", open a web terminal and type:
   umount /var/lib/docker
   The array should now stop successfully. (This issue was resolved with 6.12.3.)
4. Go to Tools -> Update OS, change the Branch to "Next". If the update doesn't show, click "Check for Updates".
5. Wait for the update to download and install.
6. If you have any plugins that install drivers (NVIDIA, Realtek, etc), wait for the notification that the new version of the driver has been downloaded.
7. Reboot.

Known Issues: please see the 6.12.4-rc18 release notes.
Rolling Back: please see the 6.12.4-rc18 release notes.

Changes vs. 6.12.4-rc18:
- docker: add routing when shim or macvtap network is used
- docker: fix routing when "host access" is enabled
- docker: remove IPv6 from shim/vhost interface (some routers are incompatible)
- New notification option: auto-closure time
- New notification option: notification life time
- Set default notification life time to 5 seconds
- network: print public IPv6 address
- network: shim interface gets MAC address of parent, no need to generate one
- Docker settings: fix subnet sizes
- CSS: set overflow-x to 'auto'
- Helptext: fix typo
- Linux kernel version 6.1.47
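After the reboot, a quick sanity check from a web terminal can confirm the kernel listed in the changes; /etc/unraid-version is where the version string normally lives on a stock install, but treat the exact file name as an assumption on my part:

uname -r                  # should report 6.1.47 on this release
cat /etc/unraid-version   # should show 6.12.4-rc19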
-
I just came across an issue after restarting my server where mergerfs wasn't installing. After some investigation it seems the latest build from mergerfs does not work properly, and I solved the problem by adding the previous version (currently listed as latest) to the tag (line 184). I'm not sure why the latest build is looking for 2.37 when 2.36 is listed by trapexit as latest on his GitHub. I hope this helps someone escape some frustration and that the problem resolves itself soon.
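To verify which build actually ended up installed after pinning the tag, the binary can report its own version; a small sketch (the exact output format may differ between releases):

/usr/bin/mergerfs -V   # should report a 2.36.x build after the pin, not 2.37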
-
I am stopping updates for my version of the plugin. Update to the latest version 2023.09.17 and then you can remove my version and install Rysz's from CA and it will retain your configs. There are some nice changes that have been added to the new version.
-
I noticed the lack of a response over the last 16 hours, so.... On behalf of the Unraid team: welcome, and I feel the same way too! I also enjoy the community support.
-
mergerFS for UNRAID (6.10+)

A plugin that installs mergerFS, a featureful union filesystem, onto UNRAID systems. mergerfs is a union filesystem geared towards simplifying storage and management of files across numerous commodity storage devices. It is similar to mhddfs, unionfs, and aufs. For obvious reasons this plugin is still considered EXPERIMENTAL and is not officially endorsed by LimeTech. Please do report any issues or feedback you have in this topic, it'll help with the development of the plugin!

How to install?
https://raw.githubusercontent.com/desertwitch/mergerFS-unRAID/main/plugin/mergerfsp.plg
(via UNRAID Web GUI: Plugins ➔ Install Plugin ➔ Enter URL ➔ Install)
... also coming soon to Community Applications. 🙂

How does it work?
!!! PLEASE READ THE DOCUMENTATION BEFORE RUNNING ANY COMMANDS !!!
mergerFS is a non-supported filesystem and you should know what you are doing. If you don't know what you are doing, you can easily wreak havoc on your system!
https://github.com/trapexit/mergerfs#readme

After installation the mergerFS binaries are available on your UNRAID system and will persist across reboots:
/usr/bin/mergerfs (i.e. for mounting)
/usr/bin/mergerfs-fusermount (i.e. for unmounting)

You can now make use of them via shell scripting - see "mergerFS Settings" for array-status based scripts surviving reboots. If you do not want to use the inbuilt event hooks, you can also use the binaries through e.g. the "User Scripts" plugin! A minimal usage sketch follows after this post.

How to contribute?
https://github.com/desertwitch/mergerFS-unRAID
... and most importantly - please do test and report back here!

Special Credits?
@trapexit - creator of mergerFS 🏆
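As a minimal illustration of the two binaries (paths and the mount point are placeholders, not a recommendation; read the mergerfs documentation first):

# pool two Unassigned Devices mounts into a single union mount
mkdir -p /mnt/addons/pool
/usr/bin/mergerfs -o defaults,allow_other /mnt/disks/disk1:/mnt/disks/disk2 /mnt/addons/pool

# unmount the union again
/usr/bin/mergerfs-fusermount -u /mnt/addons/pool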
-
To follow up on the second question, my latest version offers three branches to choose from (on 6.10+):
- default (recent master)
- release (2.8.0 stable)
- legacy (2.7.4 stable)
So @ich777, choosing the "release" branch you could now theoretically stay on the year-old stable release 2.8.0 until the next stable release 2.8.1 is integrated into the "release" branch (when it becomes available). But I strongly recommend everyone else without UPS problems to stay on default (recent master) for the latest NUT bugfixes and UPS compatibility features, unless you know what you are doing, know what to expect from the switch, and/or are desperate to get your non-functional UPS working by attempting an older version of NUT. The PHP 8 warnings have since also been fixed, thanks to @Squid and @SimonF.
-
Hi All - We have a fix and are publishing a Connect plugin update today to address it. Thanks for letting us know.
-
Hey, answering some of the questions:

@Xuvin - What does it mean if the dataset/snapshot icon is yellow instead of blue: It means that the last snapshot is older than the time configured in the settings; it is just a visual indicator that you should create a new snapshot for the dataset.

@samsausages - I was wondering if you would consider a setting/button that allows the option for manual update only: Yes, I was finally able to get some time for working on the next update, and that's one of the planned features.

@lordsysop - Update 2023.09.05.31: It was just a test for the new CI/CD system I'm using. Sorry about that.

@mihcox - The only way I was able to delete anything was to completely stop the docker daemon: I haven't been able to reliably delete datasets used at some point by Unraid without rebooting or stopping the docker daemon. A procedure that sometimes works is the following (see the command sketch after this post):
1. Stop the docker container using the directory
2. Delete all snapshots, clones, holds, etc.
3. Delete the directory (rm -r <dataset-path>)
4. Delete the dataset using ZFS Master or the CLI.

Sorry for the delay on the update with the lazy load functionality and custom refresh time, guys. I'm now back to work on the plugin, so hopefully the new update addressing most of your concerns will be released this month.
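For reference, the procedure above maps roughly to these commands; a sketch with placeholder pool/dataset names, adjust them to your system:

# 1. stop the container(s) using the dataset, then list and delete its snapshots (and clones/holds)
zfs list -t snapshot -r tank/appdata/mycontainer
zfs destroy tank/appdata/mycontainer@snapshot-name
# 2. delete the directory contents
rm -r /mnt/tank/appdata/mycontainer
# 3. destroy the dataset itself (via ZFS Master or the CLI)
zfs destroy tank/appdata/mycontainer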
-
After a lot of testing I can now say that the power supply was the cause. As a temporary measure I connected two power supplies and moved half of the drives to the second one. Since then the array has been running stably with a fully populated ASM1166 controller. I have also tested several ASM1166 controllers and they all run stably.
-
For all who are looking for a fix, here it is:
-
I recently purchased Unraid and all I can say is that I wish I had discovered it earlier. It's the web-based operating system I had been longing for and searching for forever, and I finally found it. Thanks for building this, for the amazing community support, and for not having a subscription-based model for the software!
-
And if you could perhaps also pass these (elementary) pieces of information on to us next time ... it would have been helpful ... The main thing is that you found your error. Enjoy your normal speeds again.
-
Overview: Support for the Docker image unraidapi-re in the bokker/unraidapi-re repo (a fork of the original).

Docker Hub: https://hub.docker.com/repository/docker/bokker/unraidapi-re
GitHub / Docs: https://github.com/BoKKeR/UnraidAPI-RE
If you feel like supporting my work: just say thanks!

This is a fork of https://github.com/ElectricBrainUK/UnraidAPI - I managed to fork the original project and get it working on GitHub CI. I will try to keep this project alive with newer Unraid releases. It's an exact drop-in for the original https://github.com/ElectricBrainUK/UnraidAPI - just replace the image electricbrainuk/unraidapi with bokker/unraidapi-re.

I will create a tag for each minor release, as in: bokker/unraidapi-re:6.12, which will cover 6.12.0 <-> 6.12.2. I mostly test the functions related to reporting status; if you have special needs such as removing/editing containers, VMs, or switching GPUs, please tell me if the container malfunctions for you.

I can't stress enough that this is not a proper solution; this container scrapes the Unraid UI, so every minor update can possibly break functionality. I would love to set up automated tests, but to be able to do that I need a way to spin up the Unraid OS (or frontend) in a test environment. Happy to hear what ideas people can come up with. Until then, please call up your local Unraid congressman and tell them how much you care about a publicly available API.
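Since it is a drop-in replacement, switching over is essentially just a repository change; a sketch (your existing template settings stay as they are):

# pull the replacement image, tag matching your Unraid minor version
docker pull bokker/unraidapi-re:6.12
# then edit the existing container/template and change the repository from
# electricbrainuk/unraidapi to bokker/unraidapi-re:6.12 - everything else is unchanged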
-
Please delete the file i915.conf from your /boot/config/modprobe.d/ folder and leave the i915-sriov.conf untouched. I can't do anything about the second file; strictly speaking it is not necessary to have it in there, but you have to leave it in place because otherwise the plugin won't work properly, or rather, removing it can lead to issues again.
-
With the changes coming up in 6.12 around how pools are displayed, it would be great if we could use pools like the main array. For example, I have a separate pool for my torrents, where I can use cheap disks that don't need parity without needing to exclude disks, and it works quite well. I would love for this pool to be able to use an SSD cache so I can move things off my pool quicker, and then the mover can move them to normal bulk storage for seeding down the path. The way pools and the array are presented in the webui makes it seem like this may be able to work with a little modification, but I'm no developer. This should be disabled by default if implemented, like exclusive access is right now, to prevent issues for newer users who don't need this feature. It could also be an option to hold us over until multiple arrays are available in Unraid in the future.
-
This is the day. The newest version of the plugin is here: 2023.09.15. As said in the posts above, I have added:
- The advanced docker context menu.
- Support for more languages.
- A button to open the console in the preview. Thanks to @Kloudz
- You can now add docker containers to a folder with labels; the label to do this is folder.view and the value should be the name of the folder (see the example below). Thanks to @jbb
- A small button to verify that the autostart order matches the order you see. This icon will turn red if the autostart order is different from what you see.
- Fixed the error when rebooting the server without internet connectivity.
EXPECT A LOT OF BUGS!
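For the label feature, the idea is to attach a Docker label to the container, for example via the template's Extra Parameters field; the folder name "Media" below is just a placeholder:

--label folder.view=Media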
-
This was a good idea, so good that I actually made it. Now I just need people to translate the plugin. I'll leave the translation file here if anyone wants to translate something. The "locale" field and the name of the file should be an ISO 639-1 language code. If you translate, don't edit any HTML tags you see, only edit the text.

For everyone else: I'm getting a little burned out on the advanced context menu, so I made this instead. I promise I will continue; I just need to comment the code, iron out some bugs, and copy the new code to the dashboard, so the update is not far in the future. With this update I also introduced the feature requested by @jbb: you can add containers to a folder with labels.

en.json

Edit: For anyone translating, the "updating" field has a $1 in the string; that $1 is the name of the folder, so it should stay there, but you can change its position. The same concept applies to the HTML tags mentioned above: you can change their positions, but they shouldn't be removed. (A minimal sketch of the file is shown after this post.)

Edit 2: Forgot to mention I translated the plugin into Italian; if you want to redo my work you are more than welcome.
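A minimal sketch of what a translated file might look like, assuming only the two fields mentioned above; the real en.json contains more strings, so copy that file and translate only the text values:

{
  "locale": "de",
  "updating": "Aktualisiere Ordner $1 ..."
}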
-
The cause of this issue was a special character in the server description that was causing an HTML bug that the user dropdown's JavaScript didn't know how to handle. We will release a new version of the plugin tomorrow that fixes this issue. @nicosemp, thank you for helping me debug this issue.
-
I see that a new patch was released for 7.5.174 today: "Last pushed 4 hours ago by linuxserverci". So most likely they have patched the upgrade issue. I'm probably not going to try again, as the release has an incredible 16 pages of issues after just 7 days since it was released (it started as an RC and was then moved to an official release yesterday). https://community.ui.com/releases/UniFi-Network-Application-7-5-174/d05b091f-f00c-4ebb-8f42-b77e0adac78b?page=14

For this reason I'm just going to give it a miss. I already had to roll back backups once, and I don't particularly feel like running a version that will just be replaced in a few weeks by one that is identical but has all the fixes, so I may as well wait this one out.

GL with whatever decision you guys make.
P
-
But then you should use something more efficient like a Nvidia T1000, and even consider switching to the iGPU from your 7700, since it is really, really more efficient (a few μW in idle and, depending on your iGPU, up to around 12 W max). I don't want to start a war in terms of quality, but some say that the Intel iGPUs are far superior to Pascal and older NVENC. Sure, the Nvidia cards are faster in terms of FPS, but do you need that for HW transcoding...? Most applications like Emby and Plex throttle the transcoding anyway. So to speak, you can even transcode 4-6 simultaneous streams with an Intel iGPU.

Sure, I'm the maintainer of the Nvidia Driver plugin, and for people who have no iGPU or need 10+ simultaneous streams I completely get the point, but for people with only a few streams an Intel iGPU does the job well enough (unless of course someone wants to do something else with it too).

When 6.13 is released, ARC will be natively supported and it will also just work, even more easily than a Nvidia GPU, because you don't have to install a driver that is about 200 MB, add three additional parameters to the Docker template (okay, you also have to add one entry), and you don't have to restart the Docker service or the entire server, so I don't see this as a valid argument.

I've now run a few tests with my Intel ARC A380 on a custom-built Unraid version and these cards are fast, really fast (keep in mind that the official 6.12.x release doesn't support ARC). I haven't tested the power draw yet because that was not my main goal; back then it was just to investigate what's needed to get them running on Unraid. I saw that there were reports about high power draw in idle, but there is already a fix out there (I think).

Please keep in mind that this is more or less a feature request thread where people can communicate about their Intel ARC experience and custom kernels; they are not really interested in Nvidia cards because they most certainly already have an ARC GPU on hand and don't want to buy a new one.

I hope you get my point and that this answers most (or hopefully all) of your questions.
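For context on the "three additional parameters" comparison: with the Nvidia Driver plugin a container is typically wired up roughly like this (a sketch based on the plugin's usual instructions; the GPU UUID is a placeholder taken from the plugin's settings page):

# Extra Parameters on the Docker template
--runtime=nvidia
# plus two container variables
NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
NVIDIA_DRIVER_CAPABILITIES=all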
-
Please do not upgrade to 7.5.174 at this time. It is broken.
-
Sorry for digging up this ancient thread again, but it may interest someone who is looking for a current 10 GbE card. Using a script I enabled ASPM for the Trendnet 10GECSFP V2 (black heatsink) and now reach C7 again; I couldn't get deeper than that without the card either. The script can be found in its original form here: https://wireless.wiki.kernel.org/en/users/Documentation/ASPM or in a modified form here: https://forums.servethehome.com/index.php?threads/sfp-cards-with-aspm-support.36817/page-2 In addition I had to enable the bc package in NerdTools. How much this actually saves I still have to investigate more closely, but I don't expect too much. Without ASPM the card previously accounted for roughly 4 W of additional consumption.
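To check whether ASPM actually ended up enabled for the card after running the script, something like this works (the PCI bus address is a placeholder; look yours up with lspci first):

lspci -vv -s 03:00.0 | grep -i aspm   # LnkCtl should show ASPM L1 enabled rather than disabled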
-
You can either change the checksums or delete the compatibility check. Works on 6.12.4.

# -------------------------------------------------
# RAM-Disk for Docker json/log files v1.6 for 6.12.4
# -------------------------------------------------

# check compatibility
echo -e "45361157ef841f9a32a984b056da0564  /etc/rc.d/rc.docker\n9f0269a6ca4cf551ef7125b85d7fd4e0  /usr/local/emhttp/plugins/dynamix/scripts/monitor" | md5sum --check --status && compatible=1

if [[ $compatible ]]; then

  # create RAM-Disk on starting the docker service
  sed -i '/nohup/i \
    # move json/logs to ram disk\
    rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
    mountpoint -q /var/lib/docker/containers || mount -t tmpfs tmpfs /var/lib/docker/containers || logger -t docker Error: RAM-Disk could not be mounted!\
    rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
    logger -t docker RAM-Disk created' /etc/rc.d/rc.docker

  # remove RAM-Disk on stopping the docker service
  sed -i '/tear down the bridge/i \
    # backup json/logs and remove RAM-Disk\
    rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
    umount /var/lib/docker/containers || logger -t docker Error: RAM-Disk could not be unmounted!\
    rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
    logger -t docker RAM-Disk removed' /etc/rc.d/rc.docker

  # Automatically backup Docker RAM-Disk
  sed -i '/^<?PHP$/a \
    $sync_interval_minutes=30;\
    if ( ! ((date("i") * date("H") * 60 + date("i")) % $sync_interval_minutes) && file_exists("/var/lib/docker/containers")) {\
      exec("\
        [[ ! -d /var/lib/docker_bind ]] && mkdir /var/lib/docker_bind\
        if ! mountpoint -q /var/lib/docker_bind; then\
          if ! mount --bind /var/lib/docker /var/lib/docker_bind; then\
            logger -t docker Error: RAM-Disk bind mount failed!\
          fi\
        fi\
        if mountpoint -q /var/lib/docker_bind; then\
          rsync -aH --delete /var/lib/docker/containers/ /var/lib/docker_bind/containers && logger -t docker Success: Backup of RAM-Disk created.\
          umount -l /var/lib/docker_bind\
        else\
          logger -t docker Error: RAM-Disk bind mount failed!\
        fi\
      ");\
    }' /usr/local/emhttp/plugins/dynamix/scripts/monitor

else
  logger -t docker "Error: RAM-Disk Mod found incompatible files: $(md5sum /etc/rc.d/rc.docker /usr/local/emhttp/plugins/dynamix/scripts/monitor | xargs)"
fi
-
Limetech never provides an ETA, and Soon™ has often been the answer, provided in a tongue-in-cheek kind of way by the team. It has rubbed off on many long-time users.
-
Correct me if I'm wrong, anyone, but as I understand it you can still use macvlan; instead of br0 it will now be eth0, and everything should work as before. That's what I read when I skimmed through the release notes, anyway. I'm also using macvlan and will continue to do so if I can.
-
Thanks for all of your help testing the rc series, Unraid 6.12.4 is now available! Changes from rc19 are pretty minor, but if you are running one of the RCs please do upgrade as it simplifies support to have everyone on a stable release.
-
The solution originally published by @Vivent above, which includes setting "Include listening interfaces", works for me with the latest 6.12.4-rc18 and 6.12.4-rc19. The Unraid UI, shares, and docker containers are fully available to other nodes in the Zerotier virtual network again. Important: if you upgrade from 6.12.3 and already have the listening interfaces set up as suggested, wipe them out, save, and then put your Zerotier gateway name back again. That was a necessary step for me to make things work after the 6.12.3 -> 6.12.4-rc18/19 upgrade.
-
Can I use a cache, log, special, spare and/or dedup vdev with my zfs pool?

At this time (Unraid v6.12) they cannot be added to a pool using the GUI, but you can add them manually and have Unraid import the pool. A few notes:
- currently zfs must be on partition #1; for better future compatibility (though not guaranteed) I recommend partitioning the devices first with UD.
- the main pool should be created with Unraid, then add the extra vdev(s) using the CLI and re-import the pool.
- the available vdev types and what they do are beyond the scope of this entry; you can for example see here for more information.
- please note that since the GUI doesn't support this it might give unpredictable results if you then try to replace one of the pool devices, so if you plan to use this I recommend for now doing any needed device replacement with the CLI.

How to:
1. First create the main pool using Unraid; in this example I've created a 4-device raidz pool.
2. Start the array, format the pool if it's a new one, and with the array running partition then add the extra vdev(s) using the command line. To partition the devices with UD you need to format them, but there's no need to use zfs; I usually format with xfs since it's faster. Just format the device and leave it unmounted.
3. To add a vdev to the pool use the CLI (you need -f to overwrite the existing filesystem; always double check that you are specifying the correct devices, and note the 1 at the end for the partition). A few examples:
- add a 2-way mirror special vdev:
  zpool add tank -f special mirror /dev/sdr1 /dev/sds1
- add a 2-way mirror log:
  zpool add tank -f log mirror /dev/sdt1 /dev/sdu1
- add a striped cache vdev:
  zpool add tank -f cache /dev/sdv1 /dev/sdw1
- add a 2-way mirror dedup vdev:
  zpool add tank -f dedup mirror /dev/sdx1 /dev/sdy1
- add a couple of spares:
  zpool add tank -f spare /dev/sdb1 /dev/sde1
4. When all the vdev(s) are added to the pool, stop the array; now you need to re-import the pool:
- unassign all pool devices
- start array (check the "Yes I want to do this" box)
- stop array
- re-assign all pool devices, including the new vdev(s); the main pool should be assigned to the top slots, order doesn't matter for the remaining devices, but assign all devices sequentially, i.e., don't leave empty slots in the middle of the assigned devices.
- start array
5. The existing pool will be imported with the new vdev(s).
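After the re-import you can also confirm from the command line that the extra vdev(s) are part of the pool:

zpool status tank   # the special/log/cache/dedup/spare sections should now be listed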
-
Fixed that. Just have to change this CSS code:

/* Expand the clickable area to the full width of the header */
.folder-showcase-outer[expanded="true"] .folder-hand-docker {
  position: absolute;
  z-index: 50; /* Keep below the main unRAID Nav */
  display: inline-block;
  left: 0;
  right: 0;
  top: 0;
  bottom: 0;
  padding: 8px;
}
-
I just experienced this issue with several containers (Nextcloud, Swag, Sonarr, Radarr, Plex, and a few more) and was able to fix it, so I thought I would share what I did in case anyone comes looking in the future. I was having issues with steps 4-7 causing my server to hang and the GUI to become unresponsive, so here's what I tried that ultimately worked for me:
1. Try steps 4-7. If they don't work or cause a server hang, come back and start at 2.
2. Shut down Unraid (force it if you have to and are comfortable with the risk).
3. Once restarted, stop any running containers (not sure if necessary, but I did it as a precaution).
4. Delete orphaned images via the bottom of the Docker tab in Advanced mode.
5. While still in Advanced mode, force update any containers that were failing to start with the error listed by OP.
6. Disable and re-enable the docker service via Settings > Docker (not sure if required; I did this as a shortcut to get containers to auto-start rather than manually starting each).
7. Check your Docker tab and attempt to start anything that hasn't auto-started; they should all start as expected.
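If the GUI route in step 4 is problematic, a roughly equivalent cleanup can also be done from a terminal; this is an assumption on my part and not part of the original fix:

docker image prune   # removes dangling ("orphaned") images, asks for confirmation first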
-
Sorry about that, I am working on another project and don't have much time. I'll implement it, you just have to wait a little bit.
-
If it really reboots on its own, instead of crashing or hanging, it's most likely a hardware issue. You can enable the syslog server and post that after a crash, but if it's hardware most likely there won't be anything relevant logged.
-
In the new update, you can do what you want.

.folder-showcase-outer[expanded="true"],
.folder-showcase-outer[expanded="true"] .folder-showcase {
  float: left;
  display: block;
  width: 100%;
}
-
A month later, I love this box. Working as intended, it fixed a myriad of issues I was having with Windows machines and eliminated 2 physical machines and 3 virtual ones from my home environment. I call that a win!
-
Definitely a problem with your setup, not Unraid. Comments like "I really want to like Unraid, but......" are not going to engage people to try and help out. We're not here to motivate you to use a product. Your comments, and what I've seen in your other topic, show some lack of understanding of the technology. Not a problem at all, but right now you're pointing at all other factors as the problem instead of your own lack of understanding. I'm running Plex on the latest Unraid stable version and also the latest Plex version (Plex Pass), and my HW transcodes even still work. So I don't even have the issues the others here have.

- Firstly, in your own topic, I see that you're playing a file that has both the audio transcoded and a PGS subtitle. PGS subtitles are always CPU transcodes, and single-core at that. So CPU spikes during playback of such files are normal.
- Second, I'm not seeing your GPU stats, so I wonder which plugins you actually installed for your GPU. You should have the plugin "Intel-GPU-TOP", and with the plugin "GPU Statistics" you can see your GPU workload.
- You're in the linuxserver Plex topic right now, so use that docker, not the other ones. After you've installed it properly and can access your Plex server, click on the Plex docker and then > Console. Enter:
  ls /dev/dri
  What is the output in the console?
- How are you playing a video, like in the screenshot above? What device? What app?
- What are your Plex transcoder settings?
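For reference, on a system where the iGPU is exposed to the container that command typically returns something like this (exact node names can vary):

ls /dev/dri
# by-path  card0  renderD128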
-
If you are on tag 7.2.95 you could upgrade to this 7.3.83 tag; I consider this moving from "old old stable" to "old stable" on WiFi 6 / new deployments.

Reminder of the current situation:
- Legacy systems: 5.14.23-ls76 or 6.5.55 versions seem fine. (AKA "old old old old stable" or "old old old stable".)
- WiFi 6 / new deployments: 7.2.95 or 7.3.83 tags are fine. (AKA "old old stable" or "old stable".)

Old stable and old old stable are the kind of stable where you upgrade and should have no issues upgrading (always backup just in case). If you require a new feature from a later deployment you can use "'stable'", which I consider to be 7.4.162. This is not for business deployments that cannot afford any downtime; stable is considered "this can run stable if you fiddle around with it and then leave it alone". If your business can live with an hour of WiFi being down, you can move to it, sort out any issues, and then get it working. It's that kind of stable.

Do not use the tag "latest" unless you are an alpha tester who fixes their own problems.

The future version is 7.5.174 at this time. (Using future versions is risky.)

All these tags exist in the unifi docker image location here: https://hub.docker.com/r/linuxserver/unifi-controller/tags

Tags available:
Old safe versions for legacy hardware:
- 5.14.23-ls76 (old old old old stable)
- 6.5.55 (old old old stable)
Newer safe "wifi6" versions for current hardware (old hardware might not adopt):
- 7.2.95 (old old stable)
- 7.3.83 (old stable) <--------- best version for business use if the business uses current hardware that is not out of support
Latest feature set versions:
- 7.4.162 ('stable')
- 7.5.174 (future version/release candidate; YOU are an alpha tester, do not use in production, same as using the "latest" tag)

Kind regards
Pete

For reference, at home I am currently on 7.4.162. I test the versions for our clients who have Unifi WiFi APs. No client is on any version beyond 7.3.83 at this time.
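Pinning one of these tags is just a matter of changing the image reference, for example (tag choice per the table above):

docker pull linuxserver/unifi-controller:7.3.83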
-
Automatically lower GPU power consumption after the gaming VM shuts down

Result:
- While the VM is running, the discrete GPU is in working mode and Unraid cannot read its power draw.
- After the VM shuts down, Unraid takes the discrete GPU back and the card drops into a low-power state.
- Power comparison: with a Zotac (索泰) RTX 4080 天启 OC, power draw falls from 45 W to 16 W once Unraid has taken the card back, and the fans go from spinning slowly to stopped.

Other reference articles:
[Unraid] 不直通独显将N卡直接传递给虚拟机使用并设置虚拟机关机时降低显卡功耗_NAS存储_什么值得买 (smzdm.com)
UnRaid硬件直通的n种正确姿势_unraid 显卡直通-CSDN博客

Summary:
1. Do not isolate (stub) the GPU in Unraid.
2. Install the Nvidia Driver and GPU Statistics plugins.
3. Main page -> Flash -> in the Unraid OS section, add video=efifb:off between append and initrd=/bzroot (remember the spaces).
4. On a physical Windows machine, use GPU-Z to dump the card's BIOS to a ROM file. If you have no other machine, pick an SSD from the Unraid array (call it disk A), migrate its data to other disks, unassign it, enable the Unassigned Devices plugin's destructive mode, format disk A, create a VM, pass disk A through to the VM and install Windows on it; then reboot the machine and select disk A as the boot device to boot into Windows on bare metal.
5. Install HxD (or the Hex Editor extension in VS Code) and edit the ROM file: search for "VGA", go to the first match, and delete everything before the 55 AA header. In the screenshot, everything before the line at offset 00009400 is deleted (the 00009400 line itself is kept).
6. Pass the GPU through to the VM in the usual way.
7. Create a script in User Scripts with the following content:
#!/bin/bash
nvidia-smi --persistence-mode=1
8. Open an Unraid terminal, create a folder named qemu.d under /etc/libvirt/hooks/, and inside qemu.d create a script file (any name, no extension needed) with this content:
#!/bin/bash
if [[ $2 == "release" ]]
then
  bash /boot/config/plugins/user.scripts/scripts/gpu.lowpower/script
fi
Here gpu.lowpower is the name of the script created in the previous step; change it to your own.

Hmm, only after writing this up did I notice that in the qemu.d script you could probably just replace bash /boot/config/plugins/user.scripts/scripts/gpu.lowpower/script with nvidia-smi --persistence-mode=1 directly.
-
Good idea - BUT beware the typo 🙂:

return gmdate("H:i:s", $t);
-
@guboehm Yes, it is caused by the plugin; I just received another confirmation of that.
-
The explanation of exclusive shares can be found here: https://docs.unraid.net/de/unraid-os/release-notes/6.12.0/#exclusive-shares That chapter refers to the new features introduced with 6.12 (primary/secondary storage): https://docs.unraid.net/de/unraid-os/release-notes/6.12.0/#share-storage-conceptual-change In short, exclusive shares use a symlink to bypass FUSE, the layer that user shares are built on.
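You can see the symlink mechanism directly on the console; a hypothetical share named "appdata" living exclusively on a pool called "cache" would look roughly like this:

ls -ld /mnt/user/appdata
# lrwxrwxrwx 1 root root ... /mnt/user/appdata -> /mnt/cache/appdata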
-
This is a very useful site: https://ikrima.dev/dev-notes/homelab/zfs-for-dummies/
-
According to the official description of how the parity disk works, the parity disk performs a parity calculation over the data on all drives in the array and stores the resulting data on the parity disk. But the data on the drives changes. For example, say I run a full parity check today and then delete all the data on one of the drives in the array: the parity data stored on the parity disk is still the parity data from when that drive still held data. In other words, parity data that has already been calculated is static; deleting (or modifying) data does not cause previously calculated parity data to change in real time.

Why doesn't deleting data immediately change the parity data? Because the parity disk exists to protect your data. If a drive in the array fails and can no longer be read, isn't that, in a sense, also a non-deliberate "deletion of data"?

Whenever data on a drive is deleted or modified, the resulting parity will differ from the last parity check. And even if no data was deleted or modified, bad sectors or other drive problems can change the data (or leave it incomplete). A scheduled parity check can detect such problems and repair the data; that is the point of running it regularly.

For more complete information about Unraid's parity disk, see my blog post (in Chinese): 新手教程:什么是校验盘,校验盘有什么作用
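As a tiny illustration of the principle (Unraid's single parity is, in essence, a bitwise XOR across the data disks), here is a worked example with one byte per "disk":

# parity is the XOR of all data disks
printf 'parity:  %02X\n' $(( 0xA5 ^ 0x3C ^ 0x0F ))   # -> 96
# if the disk holding 0x3C fails, XOR of parity with the surviving disks rebuilds it
printf 'rebuilt: %02X\n' $(( 0x96 ^ 0xA5 ^ 0x0F ))   # -> 3C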