xyzeratul

Everything posted by xyzeratul

  1. Thanks, it's solved. How to put it, I'd call it half a bug. Same environment, tested on 6.11.5, also with OpenClash's HTTP proxy: I had previously modified the files myself following that article, and when I later used your plugin I didn't clean up my own modifications first, so the plugin failed to change the proxy address. After manually removing those files and applying the plugin again, it worked. 🤣
  2. Background: I'm running Unraid with the onboard Realtek 8125B NIC as eth0, pointed at a custom gateway (192.168.1.10), and an Intel i350 NIC installed as eth1, set up with the default gateway (192.168.1.1). I would like all of the system's traffic to go through eth0 while a few specific Docker containers go through eth1. Is this possible, and if so, how do I set it up? I've searched around but found nothing on this. (See the macvlan sketch after this list.)
  3. I currently have a spare Intel i350-T2 NIC in the machine and want to do the following, but I don't know how: have all network traffic from a few specific Docker containers go through that card, while the other containers and Unraid itself keep using the onboard NIC. I'm currently using a side-router transparent gateway: the onboard NIC is eth0 with its gateway set to the transparent gateway, and the i350 is eth1 with its gateway set to the main router; the specific containers are set to use eth1. What I'm not clear about is which NIC Unraid's own traffic actually goes out of, and how that should be configured.
  4. holy s#########, I think I found the reason: it's the proxy settings behind all of this. I am using this plugin to send system and CA traffic through a proxy, so it modifies my go file. Even in safe mode the go file still had the proxy settings; after removing all of those settings from the go file, no more emhttp errors. (See the go-file excerpt after this list.)
  5. Generally the SATA side causes far more load than the NIC, so passing through the NIC alone won't really solve the problem. If you want to pass through a SATA controller, there are a few options: the best is to add a separate PCIe SATA controller and pass that through; they're not expensive, around a hundred RMB. Or use an IT-mode SAS HBA, though those apparently run quite hot, which can be a problem in a small case; the upside is that cabling is very convenient if your drive backplane supports it. Some boards don't have enough native SATA ports and already include an extra controller chip, in which case you can just pass that chip through, e.g. those odd N5105/6143-style boards that were popular a while ago. Either way, first figure out which controller your drives are actually attached to (a quick way to check is sketched after this list).
  6. Yes, normally you need to replace it at least every 5 years, and that's assuming you are using a good one. I would suggest SHINETSU 7783 or 7921, Noctua NT-H1, or ARCTIC MX4; those are good and affordable in my book. For application you can find many guide videos online, but please understand the principle behind thermal paste: it is not as thermally conductive as pure metal, you just want it to fill the micro gaps between your cooler and CPU, so use as little as possible.
  7. OK, seems I have this one option left: I keep a flash drive backup, format the drive, then do a fresh install. Then just copy super.dat, the pools folder, the key file and the container templates onto the new drive for testing, and hopefully the problem is gone, right? (See the copy sketch after this list.)
  8. Oops, forgot I rebooted in between; this one should have it: 185409-diagnostics-20230407-1858.zip
  9. Yes, I understand that, but what troubles me are these errors:
     Apr 7 17:50:27 185409 emhttpd: error: update_ini, 711: No such file or directory (2): rename: /var/local/emhttp/disks.ini.new /var/local/emhttp/disks.ini
     Apr 7 17:50:28 185409 emhttpd: error: update_ini, 711: No such file or directory (2): rename: /var/local/emhttp/disks.ini.new /var/local/emhttp/disks.ini
     Apr 7 17:50:29 185409 emhttpd: error: update_ini, 704: No such file or directory (2): unlink: /var/local/emhttp/var.ini.new
     Apr 7 17:50:29 185409 emhttpd: error: update_ini, 711: No such file or directory (2): rename: /var/local/emhttp/disks.ini.new /var/local/emhttp/disks.ini
     Apr 7 17:50:29 185409 emhttpd: error: update_ini, 711: No such file or directory (2): rename: /var/local/emhttp/shares.ini.new /var/local/emhttp/shares.ini
     Apr 7 17:50:29 185409 emhttpd: error: update_ini, 711: No such file or directory (2): rename: /var/local/emhttp/sec.ini.new /var/local/emhttp/sec.ini
     Apr 7 17:50:29 185409 emhttpd: error: update_ini, 711: No such file or directory (2): rename: /var/local/emhttp/sec_nfs.ini.new /var/local/emhttp/sec_nfs.ini
     Apr 7 17:50:30 185409 emhttpd: error: update_ini, 711: No such file or directory (2): rename: /var/local/emhttp/var.ini.new /var/local/emhttp/var.ini
     The UI lag happens from time to time, but after many tests I noticed it sometimes goes away when I restart the NAS and sometimes it doesn't; I don't know if this is related to any of the errors showing in the log.
  10. It happened again after the reboot. I grabbed the diagnostics this time, don't know if this will help: 185409-diagnostics-20230407-1733.zip
  11. Tried out your plugin: the CA app store works fine and Docker updates work, but for some reason checking for updates on every plugin fails, it just shows: Checking connectivity ... No response, aborting! No idea why.
  12. It's probably not purely a NIC problem; the SCSI or SATA controller is the main issue. This is only a guess, since your description doesn't make it certain, but I'd infer the situation is this: disks A and B are both plugged into the motherboard's own SATA ports, so no matter which disk you pass through, the data transfer still goes through that SATA controller, meaning Unraid is still stuck handling it. And since you haven't passed through a NIC, the data also has to cross a virtual NIC, adding even more load on Unraid. If you want Unraid completely out of the picture, the correct way is: pass through a physical NIC and a SATA controller to the DSM VM, let DSM take over disk B directly on that controller, and connect the passed-through physical port to the same LAN via a switch or router; then copying files is purely an SMB network transfer. PS: don't pass through the motherboard's own SATA controller, or disk A will drop out immediately. (An IOMMU-group check is sketched after this list.)
  13. Still the same problem: booted into safe GUI mode and the UI updates are still very laggy. Shut down and started in normal mode, and this time everything is fine. This is driving me crazy.
  14. 185409-diagnostics-20230406-1633.zip Got it. It doesn't appear on every reboot or restart, but about 1 out of 3 times it happens. UI updates are very laggy and the web page refreshes on its own, very strange...
  15. docker run usually refers to creating a Docker container from the command line; it's really no different from adding a container manually in the UI. Once you see how the author wrote the docker run command, you also know how to fill in the template, plus whatever extra parameters the author mentions. docker compose isn't directly supported on Unraid; you need the Docker Compose Manager plugin, and it doesn't integrate very well with the Unraid UI yet, but it is well suited to deploying and modifying containers in batches. I used to manage Docker this way with Portainer on OMV, and sometimes it felt more convenient than on Unraid. (A docker run example is sketched after this list.)
  16. Then it has nothing to do with CA; I added a Docker container manually without a template. I just set it up following the docker compose the author provides, nothing special:
      version: '2'
      services:
        docker:
          image: likun7981/hlink:latest      # docker image name
          restart: on-failure
          ports:                             # port mapping
            - 9090:9090
          volumes:                           # storage (volume) mapping
            - $YOUR_NAS_VOLUME_PATH:$DOCKER_VOLUME_PATH
          environment:                       # environment variables
            - PUID=$YOUR_USER_ID
            - PGID=$YOUR_GROUP_ID
            - UMASK=$YOUR_UMASK
            - HLINK_HOME=$YOUR_HLINK_HOME_DIR
      $YOUR_USER_ID, $YOUR_GROUP_ID, $YOUR_UMASK, $YOUR_HLINK_HOME_DIR, $YOUR_NAS_VOLUME_PATH and $DOCKER_VOLUME_PATH are variables, set them according to your own situation.
  17. OK, shouldn't be too hard to reproduce this problem; it always happens right after I reboot or shut down and start my NAS. I will post a new one as soon as I have some free time.
  18. 185409-diagnostics-20230402-2155.zip This is the diagnostics I got after rebooting my NAS, don't know if it caught the error or not.
  19. Guess I have to wait for some plugin updates for that. I looked into rc2 and found that I am using the R8125 driver patch, the GPU state plugin, and some UI button plugin right now.
  20. I've been using this patch all along, though I don't know how it works: on my board, a B460 Mortar, the onboard 8125 NIC works fine on its own, but after I added an i350-T2 I found it couldn't be passed through to OpenWrt by itself. After applying this patch the i350 can, surprisingly, be isolated and passed through.
  21. After a clean shutdown the "emhttpd: error" is back again 😫
  22. This happens to me once or twice every week: the web UI just messes up, and the system log shows some sort of nginx memory error:
      Apr 1 22:41:21 185409 nginx: 2023/04/01 22:41:21 [error] 4793#4793: MEMSTORE:00: can't create shared message for channel /shares
      Apr 1 22:41:22 185409 nginx: 2023/04/01 22:41:22 [crit] 4793#4793: ngx_slab_alloc() failed: no memory
      Apr 1 22:41:22 185409 nginx: 2023/04/01 22:41:22 [error] 4793#4793: shpool alloc failed
      Apr 1 22:41:22 185409 nginx: 2023/04/01 22:41:22 [error] 4793#4793: nchan: Out of shared memory while allocating message of size 9416. Increase nchan_max_reserved_memory.
      Apr 1 22:41:22 185409 nginx: 2023/04/01 22:41:22 [error] 4793#4793: *1357081 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
      Apr 1 22:41:22 185409 nginx: 2023/04/01 22:41:22 [error] 4793#4793: MEMSTORE:00: can't create shared message for channel /devices
      Could someone help me understand what the cause is and how I can fix this?
  23. OK, I found out the reason for the wrong csrf_token: some leftover from the last login was messing with the settings. But I still can't find anything on the "emhttpd: error".
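
On the question about routing a few specific containers through eth1 while the rest of the system stays on eth0: one possible approach (my assumption, not something stated in the posts above) is a Docker macvlan network bound to eth1, with only the chosen containers attached to it. A minimal sketch; the subnet, gateway, network name and container name are placeholders:

    # Create a macvlan network that uses eth1 as its parent interface
    # (subnet/gateway are placeholders for the LAN that eth1 sits on)
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 \
      --gateway=192.168.1.1 \
      -o parent=eth1 \
      eth1net

    # Attach only the chosen container to it; everything else keeps the default (eth0) network
    docker run -d --name someapp --network eth1net someimage:latest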
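
On the go-file/proxy post: /boot/config/go is a plain shell script run at boot, so a proxy plugin would typically work by appending environment exports to it. The exact lines that plugin writes are not shown above; the excerpt below is only a hypothetical illustration of the kind of entries to look for and remove (address and port are placeholders):

    # /boot/config/go (excerpt) - hypothetical proxy lines a proxy plugin might append
    export http_proxy=http://192.168.1.10:7890
    export https_proxy=http://192.168.1.10:7890
    export no_proxy=localhost,127.0.0.1

    # stock line that starts the web UI
    /usr/local/sbin/emhttp &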
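
On "figure out which controller your drives are attached to" from the SATA passthrough post, one way to check (my suggestion) is to list the controllers and then follow a disk's sysfs path back to its PCI device:

    # List SATA/SAS controllers in the system
    lspci -nn | grep -iE 'sata|sas'

    # The resolved path shows which PCI controller a given disk hangs off (replace sda with your disk)
    readlink -f /sys/block/sda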
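
On the flash-rebuild post, a rough sketch of the copy step, assuming the new flash is mounted at /boot and the old backup was extracted to ~/flash-backup; the templates path reflects the usual Unraid layout and should be checked against your own backup:

    # Copy array/pool assignments, the license key and the container templates from the backup
    cp    ~/flash-backup/config/super.dat  /boot/config/
    cp -r ~/flash-backup/config/pools      /boot/config/
    cp    ~/flash-backup/config/*.key      /boot/config/
    mkdir -p /boot/config/plugins/dockerMan
    cp -r ~/flash-backup/config/plugins/dockerMan/templates-user /boot/config/plugins/dockerMan/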
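
On the DSM passthrough post: before passing a NIC and a SATA controller to the VM, it is worth confirming each sits in its own IOMMU group. A generic check, not specific to Unraid:

    # Print every IOMMU group and the devices in it; a device passes through cleanly
    # only if its group does not also contain devices the host still needs
    for g in /sys/kernel/iommu_groups/*; do
      echo "IOMMU group ${g##*/}:"
      for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
      done
    done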
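
On the docker run vs. template post, a hypothetical docker run command and how its flags map onto the fields of a manually added Unraid container (image name and paths are made up for illustration):

    # --name -> container name, -p -> port mapping, -v -> path mapping, -e -> variable
    docker run -d \
      --name=someapp \
      -p 8080:80 \
      -v /mnt/user/appdata/someapp:/config \
      -e PUID=99 -e PGID=100 \
      someauthor/someapp:latest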