Leaderboard

Popular Content

Showing content with the highest reputation on 11/22/21 in Posts

  1. Looking across the whole forum, and the Simplified Chinese board in particular, there is almost no comparable solution, and what little exists is fairly dated. This guide is based on UNRAID 6.10.0-rc2; 6.9.2 works as well, but full Windows 11 support only arrived in 6.10.0-rc2, so if you plan to install Windows 11 I recommend upgrading to 6.10.0-rc2.

     Prerequisites: Intel VT-x and VT-d enabled in the BIOS (the AMD equivalents are AMD-V and IOMMU), the IGD set as the primary display device (i.e., the BIOS and so on output from the iGPU by default), and at least one monitor connected to the motherboard's video output (a dummy plug also works; the system just needs to detect a display).

     1. Passing through the iGPU

     Strictly speaking, iGPU and dGPU passthrough can be done in either order; if you are not confident about the dGPU, you can do it first. The iGPU is arguably the hardest part, so I suggest tackling it first (if you can't get it working, you can give up early, haha).

     (1) Go to MAIN → Flash → Syslinux configuration → Unraid OS (the green box on the right) and replace its contents with:

     kernel /bzimage
     append video=efifb:off vfio-pci.ids=8086:3185,8086:3198 disable_vga=1 modprobe.blacklist=i915,snd_hda_intel,snd_sof_pci,mei_me,snd_hda_codec_hdmi,snd_hda_codec_realtek initrd=/bzroot

     The vfio-pci.ids=8086:3185,8086:3198 part varies by CPU and motherboard; the values above are for my hardware. On some CPUs the second device is the audio output, so take it as you find it. The trial-and-error cost here is low: if you get it wrong, just try again. See step (3) for where to look these IDs up (a lookup sketch also follows this entry).

     (2) Go to SETTINGS → VM Manager → ADVANCED VIEW and set the following:
     PCIe ACS override: Downstream
     VFIO allow unsafe interrupts: Yes

     (3) Go to TOOLS → System Devices and tick the devices matching the vfio-pci.ids you just entered.

     (4) Reboot the host and create a new VM with these settings: Windows works best with i440fx, Linux with Q35; pick the latest version of either. Keep VNC as the first graphics card for now, because the iGPU may not have a working driver yet and you will need VNC to install the OS. Once the OS is installed, the driver installs automatically, and you can then set the iGPU as the first graphics card and drop VNC. The graphics ROM BIOS must be downloaded from https://github.com/my33love/gk41-pve-ovmf; save it somewhere you will remember and enter its path in that field. If you don't specify a graphics ROM BIOS, you most likely won't get video output. This ROM seems to work with all Intel iGPUs, so there is no need to look up your exact model; if it doesn't work for you, please reply in this thread. Everything else can be set to taste or left at the defaults.

     (5) Start the VM, install the OS, and let Windows install the driver automatically. On Windows 8.1 and earlier you will most likely have to download the driver yourself (and a driver compatible with older Windows versions may not even exist), so I recommend going straight to Windows 10 or Windows 11.

     That is it for the iGPU; next comes the easier dGPU part.

     2. Passing through the dGPU

     There is not much to this part, and you don't have to follow my steps exactly, but following them gives you the best chance of succeeding on the first try.

     (1) Go to TOOLS → System Devices and tick the box for your graphics card. It may have many sub-devices, but that is fine; in general they don't need configuring, just let them be ticked automatically. (If you need audio output, select it under the sound card entry.)

     (2) Create a VM with whatever settings you like, but OVMF/OVMF TPM is the best BIOS choice. The graphics card setup is similar to the iGPU case, except the graphics ROM BIOS can usually be left empty. Some cards won't start properly without it or the driver reports error 43, in which case you need a vBIOS. Since that is rather involved (you need a physical machine with the driver installed, dump the vBIOS with GPU-Z, and possibly edit it slightly), I gave up on some cards. My GTX 1660 SUPER drives fine without one, so no vBIOS needs to be specified for it. (The GT 740 wouldn't drive; I couldn't be bothered.)

     (3) Start the VM, install the OS, install the dGPU driver, then check whether the display adapter in Device Manager has a warning mark. If it doesn't, you're done! (If it does, good luck tinkering; I can't help you there, especially with the cursed error 43.)

     That is all I have to share on iGPU and dGPU passthrough; I hope it helps those new to UNRAID. It took me about three days to sort all this out, during which I switched approaches before settling on iGPU + dGPU (mainly with future PCIe allocation in mind). I asked on the forum but nobody replied (sob), so in the end I followed outside tutorials step by step, trial and error, and distilled these steps. There will of course be people for whom my steps don't work; I hope we can discuss it, rather than someone dropping a "useless" and walking away. If that is really all you want to say, I suggest you just close this thread; such comments are not welcome here.

     References:
     https://www.right.com.cn/forum/thread-6006395-1-1.html
     https://github.com/my33love/gk41-pve-ovmf
     https://post.smzdm.com/p/ag8l254m/
    2 points
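     For reference, a minimal sketch of how to look up the [vendor:device] IDs that go into vfio-pci.ids, assuming the lspci utility available in the Unraid shell; the bracketed 8086:xxxx pairs printed in the output are the values to use:

       # List display and audio devices together with their [vendor:device] IDs
       lspci -nn | grep -Ei 'vga|display|audio'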
  2. Might be some suggestions in the attached that help. For me it manifests as shfs at 100% when the array is running, but USB kworkers are the culprit when it is stopped. Moving USB connections around to different ports/controllers possibly helped.
    2 points
  3. Pre-6.10 release: if someone wants to try out Unraid with a Trial key they have to give us an email address. When someone makes a key purchase they have to give us an email address. Hence we already have an email/key database, i.e., "accounts". The primary purpose of the UPC is to make these accounts accessible to users. We leveraged the IPS (forum) member capabilities to do this. That is why we enabled 2-factor authentication for the forum.

     When you "sign up" via a server we create an association between the account screen name/email address and the particular key GUID. If you already have a forum account then "sign-in" will do the same association. This lets us simplify a great deal of support having to do with keys. You also get a nice "dashboard" (the My Servers menu option) that shows all your servers. In order to show this dashboard, the sign-up/sign-in process will upload your server name and description. This is so we can present a link to your webGUI on your LAN. But of course this is a local IP address and only works if you click from a browser running on a PC on the same LAN. We don't send up any other info, and the code that does the sending is open source in the webGUI - you can examine it and see this. If you don't want to use your forum account to associate with your keys, then just create a different forum account.

     Yes, having "accounts" will open the door for us to provide "cloud based" services. For example, you can install our My Servers plugin and get real-time status of your server presented on the dashboard, as well as automatic flash configuration backup. If you don't want this, don't install the plugin. If you don't want your server to appear "signed in" then sign out. For those who think they will never sign in and are disturbed by having a "sign in" link in the header - well, we will consider cosmetic changes.

     No doubt some may have more questions and want more details. So let's do this: go ahead and fire away, but please ask only one question or ask for only one clarification per post, and I'll try to answer them all until we're all exhausted.
    2 points
  4. Recently Filestash has been added to the community applications: https://github.com/mickael-kerjean/filestash Since I'm not enough of a tinkerer to figure out how to make it work, and there is no official docker thread, does anyone know if this is an appropriate tool for making Unraid share folders searchable and editable remotely?
    1 point
  5. Yep yep, just waiting until magic-blue-smoke gets his adapters back in stock in January and looking for something to hold me over. Understood it'll only leverage one.
    1 point
  6. I think the picture is wrong; the description says: "Support system: WINXP WIN7 WIN8 WIN10 32/64BIT /LINUX/MAC; support interface: PCI-E 1X M.2 KEY-A/KEY A-E". But keep in mind this will only enable one TPU, not both of them. I use this card with a Dual Edge TPU (as said, only one enabled) and it works just fine.
    1 point
  7. I've spent some time today and managed to get it up and running 'stably' for an hour. I wiped and re-set up the cache drive after replacing the SATA cable (as recommended here). After doing that I copied the data back over to the re-formatted cache. Even then, the warning still appeared next to the drives that didn't use the cache. After removing the empty folders on the cache, the warnings went away. If I don't post anything further, then this was hopefully just an anomaly that I ran into during my setup and the issue is resolved.
    1 point
  8. Got it. I assumed that the older Minecraft versions would work on the latest version of Java, but that does not seem to be the case. All of my servers were working on the default Java version before I updated the docker container. I was unaware multiple versions of Java were installed in the container as well. I set the Java path of my 1.16.5 server to /usr/lib/jvm/java-8-openjdk/jre/bin/java and it loaded up properly! (A version-check sketch follows this entry.)
    1 point
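     A quick sketch of how one might confirm the bundled Java 8 runtime inside the running container; the container name minecraft-1.16 is a placeholder for illustration, not taken from the post:

       # Print the version of the Java 8 binary at the path mentioned above
       docker exec minecraft-1.16 /usr/lib/jvm/java-8-openjdk/jre/bin/java -version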
  9. Fuck me! What a torrent of abuse! You do realise @ich777 and other key developers are doing this for FREE, in their OWN time, right?!? We don't get paid and certainly shouldn't have to put up with shit like this! If you think you can do better, fork it and go see how easy it is. Sheesh, please don't come to any of my support threads with an attitude like that. Sent from my SM-T970 using Tapatalk
    1 point
  10. Version 6.10 allows the docker custom network type to be set to ipvlan instead of macvlan. This setting was introduced to try to work around the macvlan call trace issue, and for some it has helped. Since I implemented a docker VLAN on my router and switch, the problem has disappeared for me. (A CLI sketch of an ipvlan network follows this entry.)
    1 point
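     For reference, a sketch of the docker CLI equivalent of an ipvlan custom network; the parent interface, subnet, and gateway below are illustrative assumptions, not values from the post (on Unraid this is just a dropdown in the Docker settings):

       # Create a custom ipvlan network instead of macvlan (values are examples)
       docker network create -d ipvlan \
         --subnet=192.168.1.0/24 \
         --gateway=192.168.1.1 \
         -o parent=br0 \
         my-ipvlan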
  11. Your first call trace is the macvlan broadcast call trace that occurs when docker containers are assigned custom IP addresses on br0. This is well documented in this thread. That particular call trace will not always cause immediate lockups, but eventually you will get a server lockup from these call traces.
    1 point
  12. Edit the first post to add the prefix [SOLVED] - I've already done it for you.
    1 point
  13. @KentBrockman I'm glad you are happy. KB/s is kilobytes per second. 1 kilobyte is 1024 bytes and 1 byte is 8 bits, so 1 KB/s is 8192 bits per second, or 8192 bps. Your chosen values of 500 KB/s are 4096000 bps or 4 Mbps, and 1000 KB/s are 8192000 bps or 8 Mbps (see the check after this entry). If other things in your network besides your FTP server generate some traffic too, this would probably explain the overhead reported by your router.
    1 point
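     The arithmetic above, as a one-line shell check:

       echo $((500 * 1024 * 8))    # 4096000 bps (~4 Mbps)
       echo $((1000 * 1024 * 8))   # 8192000 bps (~8 Mbps)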
  14. FYI, for reference I ordered: Wifi Adapter: https://www.amazon.co.uk/dp/B089K499B5/ref=cm_sw_em_r_mt_dp_S288MH5GWCK9BVZV7PHP Coral: https://coral.ai/products/m2-accelerator-ae So far it's all working and registered with unRAID and Frigate.
    1 point
  15. This is expected. If you want to run a Minecraft server older than version 1.17, you need to specify JAVA_VERSION 8 (a sketch follows this entry).
    1 point
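     On Unraid the JAVA_VERSION described above would normally be added as a variable on the container template; the docker run line below is only an illustrative CLI equivalent, with a placeholder container name and image:

       # Pass the JAVA_VERSION variable into the container (names are placeholders)
       docker run -d --name mc-1.16 -e JAVA_VERSION=8 <your-minecraft-image>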
  16. I've been hosting my own email server for years and actually have 4 domains on it, but I actively use only one; the other domains are set up as catch-all. I started with an unmanaged VPS for playing around and testing, but with time and more experience I now host mine on a VM on my Unraid setup connected with AD DC LDAP, and have another VM with Proxmox Mail Gateway installed acting as a buffer, filtering my email for spam and viruses. Note that I have no port forwarding, so I have another remote VPS server with pfSense connected via VPN, forwarding what I need back to my local pfSense to keep everything running on 5G internet. So basically, I use a Windows Server VM with a business-class email server.
    1 point
  17. There's 25% off Plex Lifetime again: https://www.reddit.com/r/PleX/comments/qzlpzl/25_off_lifetime_pass
    1 point
  18. There are a few dead links floating around. Start from the top of the wiki by clicking the "manual" link at lower right of your webUI, or the Docs link at top or bottom of the forum. https://wiki.unraid.net/Manual/Storage_Management#Replacing_failed.2Fdisabled_disk.28s.29
    1 point
  19. No. Keep an eye on disk1, and if it continues to have problems you will have to get a replacement. Do you have Notifications set up to alert you immediately by email or agent as soon as a problem is detected? How much data is on disk1? If it would all fit on old disk2 then maybe you could copy it all there as an Unassigned Device, then New Config that disk into the array in place of disk1 and rebuild parity. Don't do anything with the original disk2, though, until you are satisfied with the rebuild results.
    1 point
  20. It passed, and except for a small number of Reported Uncorrect attributes it looks OK. But those syslog entries do seem like problems with the disk and not something else. You could check the filesystem on the rebuilt disk2, but I expect it is OK. If you have finished with the backups, I guess you could replace disk1.
    1 point
  21. If the controllers are bound to vfio-pci, Unraid will not load the drivers or count the devices; I guess it depends on how you are passing them through. (A quick check follows this entry.)
    1 point
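     A minimal sketch of how to check which driver has claimed a controller; the PCI address is an example, not from the post. A device reserved for passthrough shows vfio-pci as the driver in use:

       lspci -nnk -s 00:17.0
       #   ...
       #   Kernel driver in use: vfio-pci   <- bound for passthrough, hidden from Unraid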
  22. That worked. Thank you very much for your help. 👍 Best regards, Sakis
    1 point
  23. Please also add this (I hope you can read a little English): you can safely ignore the message, or "remove" it with that entry. Two posts below the linked post I also wrote that the message is nothing serious or, better put, not harmful. Not every error in Linux is actually an error.
    1 point
  24. Have a look at my post from 12:32. It was the rebind protection.
    1 point
  25. OK, nothing in disk1 appdata that you need, so:
     rm -r /mnt/disk1/appdata
    1 point
  26. Hey, could be the 3.3V pin issue with white-label disks shucked from Western Digital USB enclosures. You can try searching here; I think it was discussed. Sent from my ONEPLUS A6003 using Tapatalk
    1 point
  27. To get rid of the error message, add this to your syslinux.conf: tbsecp3.enable_msi=0 That should fix the error (a placement sketch follows this entry). How this works is described in the DVB Driver Support thread (first post, under DigitalDevices; please use the entry posted above, though, and not the one from the DVB Driver thread, since the command is different for DigitalDevices cards). Afterwards you need to restart your server. If you do many restarts, I would also recommend disconnecting the server completely from power, pressing the power and reset buttons a few times (to drain the caps), waiting about 20 seconds, then reconnecting power and only then switching it on. The simple reason is that after many restarts the cards, or rather the FPGA, can hang, no longer start properly, or no longer reset correctly, and the card may then no longer be detected properly or may stop working. DVB cards are a bit "difficult" in this respect; I know it from my own DigitalDevices cards: sometimes only one is detected, sometimes none, and another time everything works perfectly normally again.
    1 point
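     For reference, a sketch of where the option goes in syslinux.cfg, assuming an otherwise default boot entry; your append line may already carry other parameters, in which case just add the option to it:

       label Unraid OS
         kernel /bzimage
         append tbsecp3.enable_msi=0 initrd=/bzroot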
  28. So jealous! That is just too cool. One day I will get there but my better half would kill me if I did something like that all in one shot. She does not care about this stuff at all the way I do. Concerts... dang you for suggesting a new media category to consider that will require more drives.... lol
    1 point
  29. So I finally fixed my RTX 3060 Ti disappearing by just buying a damn new computer. Went with an ASRock X570 and an AMD 5950X, and now it all works. Maybe it was just some hardware issue, but it's nice to be able to use this now without it disappearing.
    1 point
  30. And what do you get with these?
     ls -lah /mnt/cache/appdata/binhex-krusader
     ls -lah /mnt/disk1/appdata/binhex-krusader
    1 point
  31. What do you get from the command line with these?
     ls -lah /mnt/cache/appdata
     ls -lah /mnt/disk1/appdata
    1 point
  32. I would actually love to see what the power draw would be like with the Windows VMs turned off and Unraid just running on the E cores, with the transcoding done through the iGPU 🙂. I would love to see if it gets anywhere near my current i9 mini NUC unRAID server, which runs at ~20W with 10 dockers and 24TB of NVMes.
    1 point
  33. OK, then that can be solved without any detours - have fun with it.
    1 point
  34. New cables added. 18TB Parity drive added. New config config'ed. Rebuilding with no errors so far. Happy days. Thank you for all your help and fast reply.
    1 point
  35. How did you verify it? Check through Chrome's network monitor, not Nextcloud's internal check.
    1 point
  36. Yep, overkill. And that's pretty much the case for 90%+ of the builds here ... hence ignore the first line. We all like decent hardware, don't we? Regarding NVMe caching: I'd go for two similar SSDs in RAID 1, or 4 similar ones in RAID 0+1. The latter is my config - 4x 960GB in RAID 0+1. I have my appdata on cache only (and backed up to the array). Depending on your share config you can of course direct services to specific shares which don't use the cache; I am doing this for one of my backup shares. I would review the drive setup and consolidate there if you have budget left.
    1 point
  37. That's not a good answer. Please clarify, as there are consequences for using locations that aren't appropriate for logging. Runaway docker image size, running out of RAM and crashing, etc. Since you are the one recommending the changes, YOU need to know the details about what you are telling people to change, so you can let them know of the possible consequences and how to deal with them if things go wrong. Telling people to blindly follow your directions when you don't know the possible issues of what you are telling them to do is not good.
    1 point
  39. With a proper design/layout of system resource usage, that's no problem. And that's exactly what I wrote: "Cache/Pool-Writes". That does include Cache/Pool-Reads as well.
    1 point
  40. The first dropdown *should* work with the pre-populated items. The second one (after the "or") has to be entered manually due to security changes in 6.10.
    1 point
  41. For 24 bays, I'd still go with a rackmount 4U case ... and up to 36 bays (like the Supermicro CSE-847); these are the only ones where you can go with hot-swap trays all over. You can/should upgrade/change the fans; in particular, go for 120mm fans inside. Some 24-bay 4U chassis allow mounting an ATX PSU. When it comes to "under the table" storage, look at the perfect rack, made by IKEA of Sweden: https://wiki.eth0.nl/index.php/LackRack#The_LackRack
    1 point
  42. One thing to note: in your guide you never said to add the Photonix container to your photonix_net Docker network. I set a username and password in the Docker settings, but it just stays at "Loading" every time I log in. I've tried restarting the container with no change. It is pointed at a directory with 15 photos for testing, so I'd assume it shouldn't be taking too long to load them. I cannot even log in when I run the container in demo mode.
    1 point
  43. You've (presumably) got another browser open to Fix Common Problems; close it. Reboot and try again.
    1 point
  44. Pull your flash drive. Stick it in another computer. Go to the config folder. Delete the passwd, smbpasswd, and shadow files. Eject the flash drive. Put the flash drive back in the server and start it back up. This procedure should leave all passwords blank; you can then assign new ones. (The same steps as shell commands follow this entry.)
    1 point
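     The same procedure as shell commands, assuming the flash drive is mounted at /mnt/usb on the other computer (the mount point is an example, not from the post):

       cd /mnt/usb/config
       rm passwd smbpasswd shadow    # blanks all Unraid user passwords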
  45. @shEiD @johnnie.black @itimpi Oh how we love to be comforted! While it is true that the mathematics show you are protected from two failures, drives don't study mathematics. And they don't die like light bulbs. In the throes of death they can do nasty things, and those nasty things can pollute parity. And if it pollutes one parity, it pollutes both parities. So even saying single parity protects against one failure is not always so, but let's say it protects against 98% of them. Now the chances of a second failure are astronomically smaller than those of a single failure. And dual parity does not protect in the 2% of cases where even a single failure isn't protected, and that 2% may dwarf the percentage of failures dual parity is going to rescue. I did an analysis a while back - the chances of dual parity being needed in a 20-disk array are about the same as the risk of a house fire. And that was with some very pessimistic failure rate estimates.

     Now RAID5 is different. First, RAID5 is much faster to kick a drive that does not respond within a tight time tolerance than unRaid (which only kicks a disk on a write failure). And second, if RAID5 kicks a second drive, ALL THE DATA in the entire array is lost, with no recovery possible except backups. And it takes the array offline - a major issue for commercial enterprises that depend on these arrays to support their businesses. With unRaid the exposure is less, only affecting the two disks that "failed", and still leaving open other disk recovery methods that are very effective in practice. And typically our media servers going down is not a huge economic event.

     Bottom line - you need backups. Dual parity is not a substitute. Don't be sucked into the myth that you are fully protected from any two disk failures. Or that you can use the arguments for RAID6 over RAID5 to decide whether dual parity is warranted in your array. A single disk backup the size of a dual parity disk might provide far more value than using it for dual parity! And dual parity only starts to make sense with arrays containing disk counts in the high teens or twenties. (@ssdindex)
    1 point