Videodr0me

Everything posted by Videodr0me

  1. Just wanted to provide the info for the 6510T - and yes, for this hacky solution I had to modify the fan control script. I have not yet had the time to test the new driver on that ASUSTOR model. If it works, it's naturally a better solution than these tricks.
  2. For just fan control you do not really need this; you can simply force a similar enough it87 module. On the 6510T I found that modprobe -r it87 followed by modprobe it87 force_id=0x8628 fix_pwm_polarity=1 works quite well (you also need the lax option enabled). On my previous 5110T, modprobe it87 fix_pwm_polarity=1 sufficed. But it's of course much neater to have a proper ASUSTOR it87 module; I will try loading it via modprobe and see if it works on the 6510T later. Just like you, I always wanted to get around to controlling the LEDs and the LCD display, and I also found the mentioned repos, but I never got around to actually doing something with them. If anybody has already done so (regardless of ASUSTOR model), any code snippet to change the LCD text would be much appreciated.
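For reference, the workaround above can be sketched as a small script. This is a hedged sketch, assuming the usual hwmon sysfs layout: the 0x8628 chip id is the one from the post, but the hwmon index and the duty-cycle value are placeholders to verify on your own box (e.g. with `sensors`).

```shell
# Reload the generic it87 driver with the forced chip id (root only;
# skipped otherwise so the script is safe to dry-run).
if command -v modprobe >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
  modprobe -r it87 2>/dev/null || true
  modprobe it87 force_id=0x8628 fix_pwm_polarity=1 2>/dev/null || true
fi

# Clamp a requested fan duty cycle into the valid PWM range 0-255.
clamp_pwm() {
  v=$1
  if [ "$v" -lt 0 ]; then v=0; fi
  if [ "$v" -gt 255 ]; then v=255; fi
  printf '%s\n' "$v"
}

hwmon=/sys/class/hwmon/hwmon1     # index varies per boot; check hwmon*/name
if [ -w "$hwmon/pwm1_enable" ]; then
  echo 1 > "$hwmon/pwm1_enable"   # 1 = manual PWM control
  clamp_pwm 120 > "$hwmon/pwm1"   # roughly half speed; tune to taste
fi
```

The clamp keeps a fan-curve script from writing out-of-range values into sysfs, which some it87 chips silently misinterpret.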
  3. How do I use this? I installed it on an ASUSTOR 6510T running Unraid 6.12.8 and rebooted. Nothing really changed - all LEDs are still always on. Do I need to do something to control the LEDs?
  4. When updating from 6.12.6 to 6.12.8, one of my Dockers no longer started (SFTPGo). It turned out that one of its ports (50057) was already in use by a process: rpc.mountd. This process (AFAIK) is mainly responsible for NFS discovery. I had turned NFS support on, but no shares were exported via NFS. In previous versions rpc.mountd used port 10499 (the port might be randomly assigned, but previously it was never in the 50000 range - at least not on my servers). I turned NFS off completely and the Docker started normally. So keep this in mind if you get an error like: Docker: Error response from daemon: driver failed programming external connectivity on endpoint SFTPGo (ee1aca2871bbdc630466f81d8b7a7c24ec39c91afec730ba61a050efc8cb4850): Error starting userland proxy: listen tcp6 [::]:50057: bind: address already in use.
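For anyone hitting this, the quickest diagnosis is to ask the kernel which process owns the port. A small sketch (50057 is the port from the incident above; the sed pattern assumes the usual `ss -tulpn` "users:((...))" column format):

```shell
# Extract the program name from an `ss -tulpn` output line.
owner_of() { sed -n 's/.*users:(("\([^"]*\)".*/\1/p'; }

# Who is listening on the port Docker tried to bind? In the case above
# this would have printed rpc.mountd.
ss -tulpn 2>/dev/null | grep ':50057 ' | owner_of
```

`lsof -i :50057` gives the same answer if you prefer it over `ss`.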
  5. A reboot restored functionality. Still, this is a very annoying bug.
  6. Changed Status to Open. Changed Priority to Urgent.
  7. Updated recently to 6.12.6 (from 6.9.2), and after 5 days of uptime all user shares disappeared. The GUI is still accessible, but it shows 0 user shares. I narrowed it down to this error in the log: Jan 16 16:56:56 Tower-II shfs: shfs: ../lib/fuse.c:1450: unlink_node: Assertion `node->nlookup > 1' failed. It seems that ever since that error, all shares have simply disappeared. I never had such issues with 6.9.2 (continuous uptime for two years without a restart). tower-ii-diagnostics-20240117-1457.zip
  8. root@Tower-II:~# fdisk -l /dev/nvme0n1
     Disk /dev/nvme0n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
     Disk model: Samsung SSD 970 EVO Plus 2TB
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disklabel type: dos
     Disk identifier: 0x00000000

     Device         Boot Start        End     Sectors  Size Id Type
     /dev/nvme0n1p1      2048  3907029167 3907027120  1.8T 83 Linux
  9. I just updated from 6.9.2 to 6.12.6, and while everything seems to work as expected, if I click on the Main tab and then on the cache drive, the partition size is reported as 0. This is clearly wrong, as I can access all files, fdisk and lsblk report the correct partition size, and on 6.9.2 it was reported correctly. Is this just a cosmetic bug? Addition: the bug persists with 6.12.8. tower-ii-diagnostics-20240112-1055.zip
  10. I completely forgot to follow up on my post. Two years later, I can confirm that it fixed the issue completely for me. So setting Global Share Settings -> Tunable (enable hard links) to No solves the issue, and the Oppo now displays all entries.
  11. Thanks for answering. I also noticed that the UI flashed "starting array" repeatedly in the status line, even though the array was already running and accessible from other machines. I guess not being able to read from the boot flash device confused the UI completely. If rebooting is fine, I figured a shutdown would be even better, as I might salvage some of the flash data. I proceeded like this: 1) I shut down the machine, which was only partly successful. Telnet, network and all other services seemed to have stopped, but the machine was still running after 15 minutes, so I did a hard power-off. 2) I put the USB stick in a Windows machine; it said the stick had errors. I was still able to access all files and made a backup. 3) Then I used the Windows check-and-repair function, which ironically finished by stating that no errors were found. I used the safe-remove feature and unplugged the stick. 4) I plugged it back in to check whether Windows would still find errors. It did not. I then checked some of the files against a backup and everything seemed fine. 5) I plugged the stick back into the Unraid server (same USB port, as it's the only one) and turned the machine on. 6) Everything seems normal, except for the unsafe shutdown and a parity check which is running now. 7) The plugins I tried to update when the whole incident started seem not to have written anything to the boot device; both plugins showed the update as available again through the normal update check. I updated both without any problems. So the problem seems solved, but it might only be a matter of time before the USB stick fails again. I will try to get a new one in the next few days and transfer everything to the new stick. Is there anything I should know or do in advance to make the license transfer as smooth as possible?
Also, I think Unraid should be made a bit more resilient and graceful when a flash device error occurs. For example, if the "flash device error" message is displayed, why does the WebUI still try to access the boot device from numerous pages? It should probably also not try to start the array constantly in such cases (or at least not display the message in the status line). And finally, a shutdown should still be possible. I know it's probably a rare failure - my first of this kind in years with two servers - but everything that makes Unraid more robust is welcome.
  12. One of my servers that has been running for years (6.9.2) suddenly shows "flash device error" in the upper right corner. It happened after I installed an update to Community Applications. During the update, the log showed an error that it could not create a directory; then the message in the upper right corner appeared. Also, far down on the dashboard page there are a couple of error messages indicating that some cfg files (probably from the flash device) can no longer be read. I guess the flash drive went bad. So how do I proceed from here? Is there a way to back up the currently running config, or should I go back to a previous backup? How do I transfer the license to a new USB drive? Or might it just be a fluke, and is it safe to reboot the system and retry?
  13. Thanks very much for explaining. I think I understand the issue better now, but one question remains. If the UUID is identical, how does Unraid know which drive to include in the array? Let's say I have the new drive inside the array (the old drive is removed from the server) and everything is already rebuilt. If I now shut down the server, connect the old drive (so both drives are connected), power on and start the array - how does Unraid know which of the two drives to include in the array? Or does it remember the drives not only by UUID but by other info as well?
  14. Thanks for the info. I am still unsure exactly when I need to change the UUID. Do I have to do it before I remove the drive, or do I change the UUID before mounting it with Unassigned Devices for the first time? Is there a reason why I have to change the UUID at all? I thought that when I set the drive to "not assigned" (step 1) and start the array (step 2), Unraid would forget that this UUID belongs to the array. Is this assumption wrong?
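Whatever the right moment turns out to be, the UUID change itself can be done from the console. A sketch, assuming the filesystem is XFS, unmounted and clean; /dev/sdX1 is a placeholder for the old drive's data partition:

```shell
# Show the current filesystem UUID of the removed drive (for your notes).
blkid -o value -s UUID /dev/sdX1

# Assign a new random UUID so the old drive no longer matches the UUID
# Unraid recorded for the (rebuilt) array member. XFS only allows this
# on an unmounted, cleanly shut down filesystem.
xfs_admin -U generate /dev/sdX1
```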
  15. There are also some other potential bottlenecks/pitfalls: 1) On servers with weak CPUs, dual parity can be CPU-constrained. On my Intel® Atom™ CPU C3538 @ 2.10GHz Unraid server, going from single to dual parity more than halved parity speed. 2) Make sure the CPU scaling governor is set to performance (Tips and Tweaks plugin). This can also make a big difference. 3) On systems with a high number of drives, the I/O performance of the disk controllers and/or the number of available PCIe lanes can make a difference. 4) Check and optimize the tunables under Disk Settings - a number of threads in the forum give information about this. 5) If you use USB drives, make sure all of them really run at USB 3.0 or better, because if even one drive falls back to USB 2.0, it completely limits the parity speed.
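Point 2 above can also be checked and applied from the console. A sketch (the sysfs paths are the standard Linux cpufreq interface; the Tips and Tweaks plugin does the same thing through its GUI):

```shell
# Set every CPU core's scaling governor to "performance".
# Takes an optional base path so the logic can be exercised without
# touching the real sysfs tree.
set_governor_all() {
  base=${1:-/sys/devices/system/cpu}
  for g in "$base"/cpu*/cpufreq/scaling_governor; do
    if [ -w "$g" ]; then
      # Writes can fail if the governor is unavailable; don't abort.
      echo performance > "$g" 2>/dev/null || true
    fi
  done
}

# Show the current governor of the first core, then switch all cores.
if [ -r /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor ]; then
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
fi
set_governor_all
```

Note that the change does not survive a reboot, which is exactly why the plugin reapplies it at boot.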
  16. I have a question regarding the replacement of drives. I want to replace an old, smaller drive with a new, larger one. As far as I can gather, the procedure should be as follows: 1) Stop the array and set the old drive to "not assigned". 2) Start the array and then shut down the Unraid server. 3) Replace the drive with the new, larger one. 4) Let Unraid rebuild the data on the new drive. 5) Unraid automatically expands the rebuilt disk to maximum capacity. If this is the correct procedure, will I be able to access the data on the old drive with any XFS-capable computer? And more importantly, could I just plug the old disk into the Unraid server and mount it with Unassigned Devices to access the data? Or would there be problems because Unraid recognizes that the drive was previously part of the array?
  17. Same problem here with NFS. The Oppo 203 only shows part of a directory (approx. 200 entries out of 800). Over SMB everything is fine. NFS had been working on previous Unraid versions, but I do not know exactly when it stopped showing all files. I think I found a solution by setting Global Share Settings -> Tunable (enable hard links) to No. The Oppo now seems to display all entries again. I will monitor this to see whether it was just luck or whether it is really fixed.
  18. I do not use any of these unofficial builds, nor do I know what they are about or what features they provide that are not included in stock Unraid. That being said, I still feel that the devs who release them have a point. I think the main issue is these statements by @limetech: "Finally, we want to discourage "unofficial" builds of the bz* files.", corroborated by the account of the 2019 PM exchange: "concern regarding the 'proliferation of non-stock Unraid kernels, in particular people reporting bugs against non-stock builds.'" Yes, technically it's true that bug reports based on unofficial builds complicate matters. It may also be frustrating that people are reluctant to go the extra mile, go back to stock Unraid and try to reproduce the error there, especially since they might be convinced (correctly or not) that it has nothing to do with the unofficial build. Granted, from an engineer's point of view that might be seen as a nuisance. But from a customer-driven business point of view it's a self-destructive perspective. Obviously these builds fill a need that Unraid could not, or else they would not exist and there wouldn't be enough people using them to be a "bug hunting" problem in the first place. They expand Unraid's capabilities, bring new customers to Unraid, demonstrate a lively and active community, and embody basically everything I love about Unraid. I think @limetech did not mean it that way, but I can fully see how people who poured a lot of energy and heart into the Unraid ecosystem might perceive it that way. If you had instead said: "Finally, we incorporated these new features x, y and z formerly only available in the builds by A, B and C. Thanks again for the great work A, B and C have been doing for a long while now, and for showing us how we can enhance Unraid for our customers. It took a long time, but now it's here. It should also make finding bugs easier, as many people can now use the official builds."
then everybody would have been happy. I think it's probably a misunderstanding. I can't really imagine you really wanting to discourage the community from making Unraid reach more users.
  19. The problem persists on 6.9 beta 25. It seems to be related to the Docker service. Turning Docker off completely (Settings -> Docker) solved it. This is strange because all containers were already stopped, so maybe it's related to the Docker service itself, whether containers are running or not.
  20. Same here. Shutting down Docker (Settings -> Docker) fixed it here, too. Strange, because all containers were already stopped, so it must be something in the Docker service itself.
  21. Same thing here with beta 25. The temperature of the first parity drive is misreported (5588 degrees). Unfortunately the drive spun down before I could take a screenshot. SMART data was normal.
  22. Just another update after 31 days of uptime with 6.9.0-beta22. No page faults. I consider this issue fixed (at least in beta22).
  23. Installed 6.9.0 beta 22 and so far so good. No page faults, yet. Will keep you posted.
  24. I wonder whether I should go back to 6.8.3 or wait for a new beta. Is there any rough timeframe for when the next beta will be dropped upon us?