Tjareson

Everything posted by Tjareson

  1. So, I could test it today, and it doesn't work. The script is only executed once the array is actually up and running, not while the array's start sequence is waiting for the password or key file. I couldn't see any other start option which would make sense here. The crontab syntax @reboot is also not supported. Any other ideas, anyone?
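     One option worth trying for a boot-time notification, independent of the array state: append a command to /boot/config/go, which Unraid runs early at every boot, before the (encrypted) array is started. A minimal sketch using the Pushover HTTP API; PUSHOVER_TOKEN and PUSHOVER_USER are placeholders for your own credentials, not real values:

     ```shell
     #!/bin/bash
     # Sketch for /boot/config/go: send a Pushover message at every boot.
     # PUSHOVER_TOKEN / PUSHOVER_USER are placeholders - fill in your own keys.
     notify_reboot() {
         curl -s \
             -F "token=${PUSHOVER_TOKEN}" \
             -F "user=${PUSHOVER_USER}" \
             -F "message=Unraid server $(hostname) has rebooted" \
             https://api.pushover.net/1/messages.json
     }

     # Only attempt the call when credentials are actually set, and
     # background it so boot is not delayed by network hiccups.
     [ -n "$PUSHOVER_TOKEN" ] && notify_reboot &
     ```

     Untested on an encrypted-array box, but since the go file runs before the array start sequence, it should fire even while Unraid sits waiting for the password.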
  2. But this only offers the "At Startup of Array" event. My array is encrypted, so it needs a password to start. Is this event already fired while the array is waiting for the password? (Couldn't test that myself right now.)
  3. +1. The best thing would be to get it as part of the regular notification settings, as I'm usually using the Pushover interface to get messages about server issues. Edit: just noticed that post was from May 2019. 😄 Anyway, does anyone know if there was an outcome? So far I couldn't find any setting allowing for a notification in case of a reboot.
  4. Ah, ok. What I just noticed is that there doesn't seem to be a mode where data is kept on the cache but copied to the array whenever the file is not open and the mover runs. With either cache "Only" or "Prefer", all files will usually live only on the cache pool; all other options make sure existing files are moved to the array or created on the array in the first place. My issue is: I don't have a cache pool, only one SSD. Of course I like Docker containers and e.g. the mysql files to be on the fast SSD, but I wouldn't mind having a daily copy of those on the array, for more safety than a single drive offers. If I understand correctly, I have to do that by script then, as the mover just helps to achieve whatever is configured per share, right?
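     For anyone in the same spot: a small script is indeed the usual workaround. A minimal sketch of a daily cache-to-array copy, runnable via cron or the User Scripts plugin; the /mnt/disk1/backup/ target is just an assumption, adjust SRC/DST to your own layout:

     ```shell
     #!/bin/bash
     # Hypothetical daily copy of appdata from the cache SSD to an array disk.
     # SRC/DST are assumptions - adjust them to your own layout.
     SRC=/mnt/cache/appdata/
     DST=/mnt/disk1/backup/appdata/

     if [ -d "$SRC" ]; then
         mkdir -p "$DST"
         # -a preserves permissions/timestamps; --delete mirrors removals too
         rsync -a --delete "$SRC" "$DST" && echo "appdata backup finished"
     else
         echo "source $SRC not present, skipping"
     fi
     ```

     One caveat: for databases like mysql it's safer to stop the container first (or take a dump), since rsyncing an open database file can produce an inconsistent copy.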
  5. Hi, I want to create another folder in /mnt/user/appdata. appdata is configured as "Prefer: cache". If I need to download data into this new folder with wget directly on the server, where do I download it to - the array (/mnt/user/appdata/...) or the cache (/mnt/cache/...)? I'm not quite sure where exactly the "Prefer: cache" setting intercepts the creation of a file/folder, as I would run wget directly on the server, not from a client via SMB. Would creating it on the cache be sufficient for the mover to replicate it to the array later on? Or does it work the other way around? (Context is the installation of LibrePhotos, by the way.)
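     If it helps anyone reading along: writing through /mnt/user/... goes through Unraid's user-share (FUSE) layer, which applies the share's cache policy to newly created files, so with "Prefer: cache" a wget into the user share should land on the SSD anyway. A small sketch; the librephotos subfolder and the URL are just illustration, not real values:

     ```shell
     #!/bin/bash
     # Hypothetical: download into the user share so the "Prefer: cache"
     # policy decides the physical location (the cache SSD in this case).
     DEST=/mnt/user/appdata/librephotos
     mkdir -p "$DEST" 2>/dev/null || echo "share path $DEST not available on this machine"
     # wget -P "$DEST" https://example.com/some-archive.tar.gz   # run on the server itself
     echo "target directory: $DEST"
     ```

     Writing straight to /mnt/cache/appdata/... also works, but it bypasses the share logic, so going through /mnt/user/ is the safer habit.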
  6. So I switched on "Testing log to syslog" at 15:45:30, then tried twice to switch the resume/pause notification to Yes, and stopped "Testing log to syslog" precisely at 15:46:00. The syslog shows:

     Feb 21 15:45:30 server01 ool www[23124]: /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php 'updatecron'
     Feb 21 15:45:30 server01 Parity Check Tuning: TESTING: setting locale from dynamix setting
     Feb 21 15:45:30 server01 Parity Check Tuning: TESTING: Multi-Language support active, locale:
     Feb 21 15:45:30 server01 Parity Check Tuning: TESTING: ----------- UPDATECRON begin ------
     Feb 21 15:45:30 server01 Parity Check Tuning: TESTING: Creating required cron entries
     Feb 21 15:45:30 server01 Parity Check Tuning: TESTING: Deleted cron marker file
     Feb 21 15:45:30 server01 Parity Check Tuning: DEBUG: Created cron entry for monitoring disk temperatures
     Feb 21 15:45:30 server01 Parity Check Tuning: DEBUG: updated cron settings are in /boot/config/plugins/parity.check.tuning/parity.check.tuning.cron
     Feb 21 15:45:30 server01 Parity Check Tuning: TESTING: ----------- UPDATECRON end ------
     Feb 21 15:45:30 server01 Parity Check Tuning: TESTING: setting locale from dynamix setting
     Feb 21 15:45:30 server01 Parity Check Tuning: TESTING: Multi-Language support active, locale:
     Feb 21 15:45:37 server01 ool www[22985]: /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php 'updatecron'
     Feb 21 15:45:37 server01 Parity Check Tuning: TESTING: setting locale from dynamix setting
     Feb 21 15:45:37 server01 Parity Check Tuning: TESTING: Multi-Language support active, locale:
     Feb 21 15:45:37 server01 Parity Check Tuning: TESTING: ----------- UPDATECRON begin ------
     Feb 21 15:45:37 server01 Parity Check Tuning: TESTING: Creating required cron entries
     Feb 21 15:45:38 server01 Parity Check Tuning: TESTING: Deleted cron marker file
     Feb 21 15:45:38 server01 Parity Check Tuning: DEBUG: Created cron entry for monitoring disk temperatures
     Feb 21 15:45:38 server01 Parity Check Tuning: DEBUG: updated cron settings are in /boot/config/plugins/parity.check.tuning/parity.check.tuning.cron
     Feb 21 15:45:38 server01 Parity Check Tuning: TESTING: ----------- UPDATECRON end ------
     Feb 21 15:45:38 server01 Parity Check Tuning: TESTING: setting locale from dynamix setting
     Feb 21 15:45:38 server01 Parity Check Tuning: TESTING: Multi-Language support active, locale:
     Feb 21 15:45:43 server01 ool www[24619]: /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php 'updatecron'
     Feb 21 15:45:43 server01 Parity Check Tuning: TESTING: setting locale from dynamix setting
     Feb 21 15:45:43 server01 Parity Check Tuning: TESTING: Multi-Language support active, locale:
     Feb 21 15:45:43 server01 Parity Check Tuning: TESTING: ----------- UPDATECRON begin ------
     Feb 21 15:45:43 server01 Parity Check Tuning: TESTING: Creating required cron entries
     Feb 21 15:45:43 server01 Parity Check Tuning: TESTING: Deleted cron marker file
     Feb 21 15:45:43 server01 Parity Check Tuning: DEBUG: Created cron entry for monitoring disk temperatures
     Feb 21 15:45:43 server01 Parity Check Tuning: DEBUG: updated cron settings are in /boot/config/plugins/parity.check.tuning/parity.check.tuning.cron
     Feb 21 15:45:43 server01 Parity Check Tuning: TESTING: ----------- UPDATECRON end ------
     Feb 21 15:45:43 server01 Parity Check Tuning: TESTING: setting locale from dynamix setting
     Feb 21 15:45:43 server01 Parity Check Tuning: TESTING: Multi-Language support active, locale:
     Feb 21 15:46:00 server01 ool www[24618]: /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php 'updatecron'

     Doesn't look very comprehensive to me - in case you need anything else, let me know.
  7. Sure, can you let me know where I can do that? (I've only been using Unraid for a couple of days.)
  8. Just FYI: I've updated to version 2022.02.18, but the issue with the resume/pause notification (the Yes/No topic) still persists. I'm mentioning it because I think it was addressed in the release notes of this version, if I remember right. I also completely reinstalled the plugin - same behaviour. My server runs on 6.9.2.
  9. Thanks! That was the right hint - somehow I wasn't expecting to find array start and stop events as a hook there. Yes, it is a bit of a workaround, I know, but after I've already spent 40 bucks on a DDR3 module which didn't work with that board despite promising specs, I'll give that a shot now. Somehow the kernel's paging strategy seems to be so effective that the machine works well. It's probably worth mentioning that this is not a multi-user environment with hundreds of clients interacting with the server all the time. Well, my understanding is that Unraid is not meant to be exposed in a DMZ or similar anyway. (For me the problem on that end would already start with not having 2FA or certificate-based root logins, no integrated brute-force protection, etc.) So all good from my perspective. Actually I'm quite happy that it works so well with Unraid. Enough flexibility to tweak it a bit so that this old box can still be useful for something. I'm currently wondering what will die first - hard disks or swap SSD... 😄
  10. It is an old TS-439 box, an Intel Atom 1.8GHz with one RAM slot. Technically it should be able to handle 4GB, but it seems to be a gamble. Some people have managed to upgrade to 2GB, and I found one who claimed 4GB. I've tried a couple of DDR3 SO-DIMMs and even bought a new 4GB one; nothing above 1GB boots. The original module is a 1GB DDR3 SO-DIMM 1333.
  11. Well, my experience here is just different. It does work, and as more than a pure file server. Obviously Linux brings along options to make Unraid fit on platforms below these minimum requirements. I don't see why hardware RAM should be insisted on if a swapfile compensates for it at acceptable speed, as long as an SSD is used. Of course it doesn't make a lot of sense to back VM memory with swapfiles. But the ability to use Docker containers (like Logitech Media Server here right now) is already good enough, and it is responsive. So why I would want a swapfile is clear: just to keep my old hardware deployed longer. The upside for Unraid: I will buy another license for that box as well, and only the swapfile paves the way for that additional license revenue, as I cannot upgrade the onboard memory. The upside overall: we save resources on producing new hardware, as long as we can find smart solutions to keep using the existing kind.
  12. Server 1: 16GB Server 2: only 1GB + 4GB swapfile on SSD cache, also works quite well so far. 🙂
  13. Quick feedback on how it went with the plugin and a swapfile in general on a really limited system (1GB RAM, one 400GB SSD and 3x 1.5TB HDD). Maybe it is useful for others trying to use Unraid on really small/old systems.
     - I had to install 6.9.1, as 6.9.2 already runs into a memory deadlock on first boot.
     - I stopped Docker and VMs to release some additional memory during the swap setup, but I still wasn't able to generate the swapfile with the plugin: generating the swapfile alone consumes so much memory that the process gets killed early. (Not visible in the GUI, it just gets stuck on Apply...) I also wasn't able to use a swapfile on btrfs, because swapon always came back with a "wrong attributes" error or similar.
     What I did to make swap work:
     1. Format the SSD cache drive with xfs
     2. fallocate -l 4096M /mnt/cache/swapfile
     3. mkswap /mnt/cache/swapfile
     4. chmod 600 /mnt/cache/swapfile
     5. swapon /mnt/cache/swapfile
     As the swap is on an SSD it works quite well, and Docker containers (like Logitech Media Server) run at sufficient speed. The only "issue" I still need to solve is to automatically issue a swapon command after the array starts, and vice versa a swapoff before the array gets stopped. Any ideas where to implement that, ideally?
     I think it would be great to make it a bit easier to activate swap in Unraid in general, maybe also in an early boot phase. (As it looks, this system is stuck with 6.9.1 due to memory requirements during boot already - does anyone know more details on that part?) That would reanimate a lot of existing hardware which cannot be used anymore because the vendor no longer provides updates or it is otherwise outdated. Personally I always feel bad disposing of otherwise well-working hardware because of a software issue... It would be great if Unraid could further help to reduce that early hardware retirement trend.
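     Follow-up for anyone copying this: the swapon/swapoff part can be hooked to the array events with the User Scripts plugin, which offers "At Startup of Array" and "At Stopping of Array" schedules. A minimal sketch, assuming the swapfile path from the steps above:

     ```shell
     #!/bin/bash
     # "At Startup of Array" script: enable the swapfile once /mnt/cache is mounted.
     SWAPFILE=/mnt/cache/swapfile
     if [ -f "$SWAPFILE" ] && ! swapon --show | grep -q "$SWAPFILE"; then
         swapon "$SWAPFILE" && echo "swap enabled on $SWAPFILE"
     fi

     # "At Stopping of Array" counterpart (a separate one-line script there):
     #   swapoff /mnt/cache/swapfile
     ```

     The swapon --show guard just avoids an error message if the script fires twice; untested beyond my own box, so treat it as a starting point.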
  14. Is there any way I can install this app directly via URL? Background: I'm trying to get an old server up and running with Unraid, but that server doesn't have enough memory to load the app list via Community Applications. So installing the app directly would be worth a try.
  15. For me it was: 1. The transparent system/file layout, and along with that the option to include my own scripts in a clean and sorted way. 2. Finally the end of non-understandable constant HDD activity (I'm QNAP-damaged...). 3. And last but not least, everything that is programmed/contributed by the community. Ah, I almost forgot the documentation.
  16. Not sure if I'm missing something, but whenever I try to switch on notifications for resume/pause, the configuration switches back to "No" when I hit the Apply button. Am I the only one with that issue? I'm using version 2021.10.10 on 6.9.2.
  17. This QNAP comes with only 1GB. Somehow I had the impression that Unraid can also make older hardware usable again where the original vendor has given up. But due to Unraid's architecture of keeping everything in RAM, this doesn't work for NAS boxes that are limited on RAM - a pity. I have an SSD in one bay and HDDs in the other three bays. Any other system (like Ubuntu Server) runs without problems, as they can all make use of a fast swap on the SSD, or don't keep everything in RAM to start with. My production system has 16GB RAM, but I can't risk a migration without being able to test it first - and here, too, Unraid doesn't allow tests in a VM setup. 😞
  18. I found the issue, as it looks. It works as long as the parity sync is not running. As soon as I start it, I indeed run out of memory when Community Applications tries to download the app catalog, and the process just gets killed by Linux for lack of memory. Now, the issue is: I cannot upgrade the RAM; I already have two failed tries at finding a SO-DIMM that is accepted by the board/BIOS (2GB/4GB, nothing works). Anyway, the question is whether I can still make use of this otherwise working hardware by somehow enabling swap. Right now I can't do this via the swap plugin: it requires the array to be online to have a place to store the file, and once the array is online there isn't even enough memory left for the swapfile to get created and written - the process simply gets killed before finishing. Any ideas how to still make use of that box with Unraid? Background is also that I really want to test it a bit before considering switching my productive NAS. Unfortunately it doesn't seem to be possible to check out Unraid in a VM, at least not with a quick shot at mapping a USB stick as a device into a vmdk...
  19. Ah, ok, thanks. That was the right hint. I was searching for the kernel panic error message, which misled me a bit. I wonder why that parameter is necessary; having to start an installation with that kind of hack right away is a bit surprising. I'm currently considering also using Unraid on my productive NAS, which is a brand new TS-364. But I'm already facing the next issue in Unraid, with loading the apps being stuck at "updating content". I've already changed the DNS to Google, and beyond that I cannot see why Unraid shouldn't be able to work with the router like any other appliance here. The TS-439 only has 1GB, but in htop that doesn't look like the issue - zero swap used so far... Looks difficult.
  20. Hello, I wanted to try Unraid on a QNAP TS-439 II+. Currently the machine runs Ubuntu Server 20.04 LTS with ZFS. Unfortunately I don't even get Unraid to boot. It stops with: "Kernel panic - not syncing: VFS: unable to mount root fs on unknown-block". And yes, I've tried different USB sticks, different versions (6.9.2, 6.9.1), and both the manual and the tool way of setting up the USB stick. Memory test without issues. I also see quite a lot of the same issue outside of this forum, on reddit etc. (often on systems that have been running for a long time already), and rarely is there a clear solution. Any ideas? Not the best start so far, where reliability is kind of part of the equation... Cheers, Tjareson