Noim

Everything posted by Noim

  1. As far as I can tell, it really does crash for me. The server doesn't even respond to pings after the crash. franz-diagnostics-20230623-2012.zip
  2. The easy way is very expensive and costs more time. What are some other options? This is the fdisk output:

     Disk /dev/sdb: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
     Disk model: WDC WD80EZAZ-11T
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: 8253E41B-E414-4985-BD29-FBE663B171ED

     Device     Start          End      Sectors  Size  Type
     /dev/sdb1     64  15626928094  15626928031  7.3T  Linux filesystem
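For reference, the alignment of a layout like this can be checked by hand: a partition start is 4K-aligned when its start sector times the logical sector size is divisible by the physical sector size. A small sketch using the numbers from the fdisk output above (the helper name is my own, not an Unraid tool):

```shell
#!/bin/sh
# Hypothetical helper: report whether a partition start sector is aligned
# to the drive's physical sector size.
is_aligned() {
    start=$1; logical=$2; physical=$3
    offset=$((start * logical))          # byte offset of the partition start
    [ $((offset % physical)) -eq 0 ]
}

# /dev/sdb1 from the fdisk output: start sector 64,
# 512-byte logical sectors, 4096-byte physical sectors.
if is_aligned 64 512 4096; then
    echo "/dev/sdb1 start is 4K-aligned"
fi
```

Sector 64 * 512 bytes = 32768 bytes, which is a multiple of 4096, so the partition start itself is aligned; if Unraid rejects the disk, it is the GPT label and start sector convention it objects to, not raw misalignment.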
  3. Yes, it doesn't like the partition layout. Is there any way to fix this?
  4. What will happen if I start the array with the new config? Will it wipe my drives? They are already correctly formatted with xfs by unraid. I can't remember whether unraid prompts you first or just formats automatically.
  5. Hi guys, because of the rising energy costs, I decided to move my drives from my r710 to more energy-efficient hardware. In my r710 I used a PERC card which just passed the disks through to the system. Because of this, the disks are now named differently, which is why unraid says they are the wrong disks. But they are not: both disks just contain the file system and were never used with any RAID feature of the PERC card. I verified this with Unassigned Devices. How can I force unraid to accept the renamed drives? Greetings Nils Edit: I don't have any parity disk, just these 2 drives. Both are xfs (formatted by unraid). What happens if I just add both drives as drive 3 and 4? Will unraid format them again, or just use them with their files?
  6. And it just breaks randomly sometimes. After some time it starts working again, and then randomly breaks again. Version 1.34.81 Chromium: 97.0.4692.99 (Official Build) (x86_64) Brave. No request gets blocked when I look into the dev tools, and I would also expect some kind of ws message. I think this is why it shows nothing: Maybe I am dumb and did something wrong, but I don't know what. So far it hasn't broken in Safari.
  7. For me there are two broken things: 1. The array page: 2. The cpu stats:
  8. Oh, I am sorry. There was no response to @HarryMuscle's question and a quick Google search wasn't helpful either, so I just assumed there was no repo. I will make a proper pull request.
  9. Because nobody cared, and the dev doesn't even publish the plugin on GitHub so that I could make a pull request, I investigated by myself. The abort, as currently implemented, is flawed. This is my fix. It is not elegant, but it works better than the current version. Starting at line 115 in exec.php:

     case 'abortScript':
         $script = isset($_POST['name']) ? urldecode($_POST['name']) : "";
         $pid = file_get_contents("/tmp/user.scripts/running/$script");
         exec("pkill -TERM -P $pid");
         exec("kill -9 $pid");
         $processListOutput = null;
         exec("ps aux | grep -i '/tmp/user.scripts/tmpScripts/$script' | grep -v grep", $processListOutput);
         foreach ($processListOutput as $emergencyKill) {
             // strip the user column, then take the PID (second ps column)
             $emergencyKill = trim(str_replace("root", "", $emergencyKill));
             $rawKill = explode(" ", $emergencyKill);
             logger("Kill pid: " . $rawKill[0]);
             exec("kill -9 " . $rawKill[0]);
         }
         @unlink("/tmp/user.scripts/running/$script");
         file_put_contents("/tmp/user.scripts/finished/$script", "aborted");
         file_put_contents("/tmp/user.scripts/tmpScripts/$script/log.txt", "Execution was aborted by user\n\n", FILE_APPEND);
         break;

     The original version didn't kill all running script instances, and the grep command didn't even find the correct processes when I tried it manually.
  10. @Squid My issue still persists. Here is a test script: After aborting it, the script process is still running: What am I doing wrong?! This is how it gets spawned if I press background process: The process 10197 survives if I press abort. Plugin is up-to-date and running on unraid 6.9.2.
  11. But it doesn't spawn any new process. The content of fan-script.sh is the content of the user script.
  12. Hey guys, I have a problem. Currently I use User Scripts to control my fans. For this I have two scripts, one for the night and one for the day. Whenever I go to bed, I abort the day script and start the night one. After some time I noticed that my script had stopped working and was throwing the same error again and again. The error itself has nothing to do with User Scripts; it only made me realize something: when I press abort, the script itself doesn't get killed. Here you can see the output of ps -ax | grep script after I aborted the script: The highlighted process is the one I aborted. It is still running, which obviously is an issue. The first time, I had 20 processes running without noticing, all doing the same thing, which overloaded my ipmi controller. I attached the script to this comment. fan-script.sh Why does User Scripts not kill the script? That is what I expect when I press abort. Or is it a problem with my script?
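Independently of the plugin, a script can defend itself against a half-working abort: if it traps TERM and kills its own children, a single signal to the top-level shell takes the whole loop down. A minimal sketch with a generic polling loop standing in for the real fan-control logic (the file and variable names are mine, not from fan-script.sh):

```shell
#!/bin/sh
# Sketch: a worker script that traps TERM and kills its own children,
# so one signal to the worker also stops anything it spawned.
worker=$(mktemp)
cat > "$worker" <<'EOF'
#!/bin/sh
cleanup() {
    pkill -TERM -P $$ 2>/dev/null   # kill this shell's direct children
    exit 0
}
trap cleanup TERM INT
while :; do
    sleep 5 &                       # placeholder for real fan-control work
    wait $!
done
EOF

sh "$worker" &
wpid=$!

sleep 1
kill -TERM "$wpid"    # one signal: the trap fires and the children die too
```

With this pattern, even an abort that only signals the top-level PID (instead of the whole tree) would still stop the spawned sleep/ipmi commands.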
  13. Your tip didn't work. The database corruption still happens.
  14. This would defeat the whole purpose of using unraid, at least for me. Btw, I updated unraid to 6.7.2. However, the corruption happened again.
  15. Which disk: I only have one 8TB WD drive, the kind you find in the WD My Book enclosures. Formatted: xfs. Appdata config path (I don't know how this helps you): /mnt/user/appdata/PlexMedia/ Direct io: Default? I never changed this setting; I am not even able to find it.
  16. Didn't take long. The database is corrupted again. Unraid 6.7.1. I use an r710 with an H700 RAID card, if you want to know.
  17. Yes. An entry in another forum confused me about the settings, but "Yes" is what fits my needs best. This would be weird. I created a new zip archive with the content of /var/log. At least I want to try fixing it; it took forever to copy the files, and I don't really want to do that again. var.log.zip
  18. Hello guys, recently I added an ssd cache to my configuration and backed up an old drive over the network to my server. Of course, the cache disk got filled up, but the mover doesn't seem to work. It only spams this message into the system log:

      move: create_parent: /mnt/disk1/Backup error: No space left on device

      But my array isn't full; only the cache disk is full. I already tried to fix my permissions with the "New Permissions" tool. I attached my diagnostics file to this post. I hope you can help me fix the problem. ~ Nils s3-diagnostics-20181027-0317.zip