cholzer


Everything posted by cholzer

  1. I know for certain that this takes less than 30 seconds at most, but I will measure it too.
  2. I will wait for the current parity check to finish, then do a reboot and see what happens.
  3. Hmm... I do have a restart script that runs every Sunday at 7:10 AM; that seems to be the issue for some reason: /usr/local/sbin/powerdown -r. However, that should trigger a clean reboot, right? That said, syslog indicates that this command is deprecated: May 21 07:10:01 NAS root: /usr/local/sbin/powerdown has been deprecated. Here is the full syslog from my syslog server:
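If the deprecation warning is the concern, the weekly job could call the stock reboot binary instead of the deprecated powerdown script. A minimal sketch of the crontab line, assuming a standard root crontab on Unraid and that /sbin/reboot performs a clean shutdown of the array:

```shell
# Weekly clean reboot every Sunday at 7:10 AM (sketch; path is an assumption)
# minute hour day-of-month month day-of-week  command
10 7 * * 0 /sbin/reboot
```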
  4. I have had this system for well over 2 years and never had this sort of problem. THE ISSUE: Yesterday I had to shut down Unraid for a few hours because of some wiring work in the fuse box for the AC I'm installing next week. When I booted Unraid back up, it began a parity check. No clue why, as it did shut down gracefully and the parity was good. After the parity check finished (with no errors found), the system went into S3 sleep late at night (5 AM). After it woke up again today at 7 AM, it began yet another parity check at 7:14 AM. nas-diagnostics-20230521-0810.zip
  5. Thank you for the suggestion! Sadly that does not work; Promtail still takes control of what is displayed on the Unraid machine's monitor.
  6. I am very sorry to necro this old thread, but I could really use help with this. I want Promtail to launch at boot so that it can start sending logs to my log collection server. For this I added the following to /boot/config/go:
     cp /boot/tools/promtail/promtail /usr/local/bin/promtail
     chmod +x /usr/local/bin/promtail
     /usr/local/bin/promtail -config.file /boot/tools/promtail/promtail-local-config.yaml
     /usr/local/bin/promtail -config.file /etc/promtail/promtail-local-config.yaml
     Result: Promtail starts at boot and starts to send logs! Problem: on the monitor connected to the Unraid machine I now see the Promtail output, and I no longer have access to the Unraid GUI. Is there a way to start Promtail "in the background" inside the go file? I could use the User Scripts plugin, but then I can only start the script when the array starts; should the array fail to start, or anything else bad happen, I don't get logs of that. So I would really like to launch Promtail on boot. I tried to launch it with nohup, but that did not solve it.
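A possible fix, sketched under the assumption that /boot/config/go runs as a plain Bash script at boot: detach the process from the console by redirecting both stdout and stderr and backgrounding it with &. nohup alone is usually not enough if the process still writes to the terminal. The log path here is an arbitrary choice, not something Promtail requires:

```shell
# /boot/config/go snippet (sketch): copy promtail into place, then start it
# detached from the console so it cannot take over the local display.
cp /boot/tools/promtail/promtail /usr/local/bin/promtail
chmod +x /usr/local/bin/promtail
nohup /usr/local/bin/promtail \
  -config.file /boot/tools/promtail/promtail-local-config.yaml \
  >/var/log/promtail.log 2>&1 &
```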
  7. I guess that is what you meant, right?
     docker run -d --name='proxmox-backup-server' \
       --net='host' \
       -e TZ="Europe/Berlin" \
       -e HOST_OS="Unraid" \
       -e HOST_HOSTNAME="NAS" \
       -e HOST_CONTAINERNAME="proxmox-backup-server" \
       -e 'TZ'='Europe/Vienna' \
       -l net.unraid.docker.managed=dockerman \
       -l net.unraid.docker.webui='https://[IP]:[PORT:8007]' \
       -l net.unraid.docker.icon='https://raw.githubusercontent.com/devzwf/unraid-docker-templates/main/images/pbs-logo.jpg' \
       -v '/mnt/user/appdata/pbs/etc':'/etc/proxmox-backup':'rw' \
       -v '/mnt/user/appdata/pbs/logs':'/var/log/proxmox-backup':'rw' \
       -v '/mnt/user/appdata/pbs/lib':'/var/lib/proxmox-backup':'rw' \
       -v '/mnt/user/pbs/':'/backups':'rw' \
       --restart unless-stopped \
       --memory=2g \
       --mount type=tmpfs,destination=/run \
       'ayufan/proxmox-backup-server:v2.3.2'
  8. Sadly this Docker does not work for me. This is the log when I try to start it; the container restarts in a loop and the same messages repeat over and over with different temp-file names:
     rm: cannot remove '/etc/proxmox-backup/.*.lck': No such file or directory
     rm: cannot remove '/etc/proxmox-backup/*.lock': No such file or directory
     Error: failed to move file at "/var/log/proxmox-backup/api/access.tmp_csoOuQ" into place at "/var/log/proxmox-backup/api/access.log" - ENOSYS: Function not implemented
     Error: failed to move file at "/var/lib/proxmox-backup/rrdb/rrd.tmp_wKpkHV" into place at "/var/lib/proxmox-backup/rrdb/rrd.journal" - ENOSYS: Function not implemented
     /etc/proxmox-backup is a mountpoint
     /var/lib/proxmox-backup is a mountpoint
     /var/log/proxmox-backup is a mountpoint
     /run is a mountpoint
     API: Starting...
     PROXY: Starting...
     [the same rm, ENOSYS, and Starting... messages then repeat many times]
  9. I want to have Unraid send its syslog to a remote syslog server. However, it appears that Unraid does not use the RFC 5424 format (Promtail only supports RFC 5424 for syslog, and I would like to avoid having to send it through a local rsyslog relay just to fix this). Can this be changed?
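For anyone in the same spot who can live with a relay after all: rsyslog ships a built-in RFC 5424 output template, so a single forwarding rule can rewrite messages into the format Promtail expects on the way out. The target host and port below are placeholders, not defaults:

```shell
# rsyslog rule (sketch): forward everything over TCP (@@) using the built-in
# RFC 5424 template; "monitoring-host" and 1514 are assumptions.
*.* @@monitoring-host:1514;RSYSLOG_SyslogProtocol23Format
```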
  10. I would like to install and run Promtail as a service directly inside the Unraid OS to forward log files to my monitoring PC running Grafana/Loki. The reason I want to install Promtail instead of running it inside a Docker container is so that I receive logs from Unraid even when Docker isn't running. I have searched online but could not find an answer to this, so before I brick my Unraid install (I do have a backup of the flash drive, though) I thought I had better ask here first. Thank you!
  11. Well, that is certainly not good for a file server, is it? I suppose I have to make a feature request for this then? I am stunned that there is no alert for filesystem issues.
  12. 6.11.5. I discovered these in the log today:
      2023-05-09 15:17:08.643 XFS (nvme3n1p1): First 128 bytes of corrupted metadata buffer:
      2023-05-09 15:17:08.643 XFS (nvme3n1p1): Unmount and run xfs_repair
      2023-05-09 15:17:08.643 XFS (nvme3n1p1): Metadata corruption detected at xfs_dinode_verify+0xa0/0x732 [xfs], inode 0x62cece1f dinode
      Even though I have notifications enabled for Array status, Notices, Warnings, and Alerts, I did not get any notification. That does not seem right, does it?
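As the log itself suggests, the filesystem needs an xfs_repair pass. A minimal sketch, assuming the device name from the log and that the device has first been unmounted (on Unraid that typically means stopping the array or starting it in maintenance mode):

```shell
# Dry run first: -n reports problems without modifying the filesystem
xfs_repair -n /dev/nvme3n1p1
# If the report looks sane, run the repair for real
xfs_repair /dev/nvme3n1p1
```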
  13. Thanks! But I had to get my NAS operational again, so I copied everything over to the array, nuked the cache, and went with XFS this time.
  14. So I get this error; based on other threads in the forum it means the cache is full?
      Mar 16 22:24:37 NAS kernel: BTRFS error (device nvme3n1p1: state EA): bad tree block start, want 557456637952 have 0
      But it clearly isn't? I also read that many people had issues like this, which is why they switched their cache to XFS. Should I do that too, or are there downsides? I do have my NAS go to S3 every night, though this error showed up in the middle of the afternoon.
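That message points at metadata corruption rather than a full cache. A quick way to assess the damage, sketched under the assumption that the pool is mounted at /mnt/cache:

```shell
# Run a scrub in the foreground (-B) and print a summary of checksum errors
btrfs scrub start -B /mnt/cache
# Per-device error counters accumulated since the last stats reset
btrfs device stats /mnt/cache
```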
  15. My File Manager plugin keeps disappearing. I just installed it for about the fourth time in the last 6 months. This is the only plugin I have this problem with.
  16. Works! I see the WD_Blue NVMe available in the dropdown menu again!
  17. Yup, but only a thumb drive shows up there (this is not the Unraid boot stick; it is also an Unassigned Device). The Unassigned Devices 2 TB WD_Blue NVMe SSD still does not show up.
  18. S3 Sleep Plugin: it does not seem to see my NVMe WD_Blue, which I'd like to select to be monitored for activity as well. What is also confusing is the fact that it lists the array HDDs here. *edit* Updated the plugin to 2023.02.13; now the array disks and the Unraid thumb drive are no longer listed, but my NVMe WD_Blue is still not showing up in the dropdown menu.
  19. LanCache Docker -> Advanced -> Extra Parameters. Here you can set, e.g.:
      -e DISABLE_WSUS=true -e CACHE_DISK_SIZE=800000m
      I disabled Windows updates and set the max cache size to 800 GB. The download path for the cache, as set in the LanCache Docker, is used by the Docker for me.
  20. I tried that, but these changes do not survive a Docker restart. In fact, you even have to reinstall nano again after restarting the Docker. Here is how I fixed the permission issue: 1. Stop the onedrive Docker. 2. Go to the folder you selected for the "Configuration:" of the Docker. 3. Download the original config file from here: https://raw.githubusercontent.com/abraunegg/onedrive/master/config 4. Remove the # ONLY in front of the settings you want to change, and change the setting accordingly (e.g. sync_dir_permissions = "777" and sync_file_permissions = "777"). 5. Do not touch any of the other settings you don't want to change! 6. (optional) Delete items.sqlite3, items.sqlite3-wal and all your already-synced files to start fresh. 7. Start the Docker. 8. All newly synced files will have their permissions set to drwxrwxrwx / -rwxrwxrwx.
  21. Currently running this onedrive application in an Ubuntu VM and want to move over to Docker. As I am using webhooks for realtime updates, I need to set webhook_enabled, webhook_public_url and webhook_listening_port in the conf. But where is the config for this Docker located? It is not in the /conf folder that has to be configured. 😅 *edit* I figured it out. 1. Ideally you do this before you even add the onedrive Docker template to your Unraid server. 2. Create the configuration folder as described in the Unraid onedrive Docker manual. 3. Download the original config file from here: https://raw.githubusercontent.com/abraunegg/onedrive/master/config 4. Store that config file in the configuration folder of the onedrive Docker. 5. Remove the # ONLY in front of the settings you want to change, and change the setting accordingly (in my case webhook_enabled, webhook_public_url and webhook_listening_port, as well as sync_dir_permissions = "777" and sync_file_permissions = "777"). 6. Do not touch any of the other settings you don't want to change! (e.g. do not change the _dir paths!) 7. Follow the rest of the onedrive Docker manual. This way your changes also survive a Docker restart.
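The edited settings from step 5 might look like this in the config file once the # is removed. The URL and port values here are placeholders for illustration, not values from the manual:

```
webhook_enabled = "true"
# public HTTPS endpoint that Microsoft Graph can reach (placeholder)
webhook_public_url = "https://example.com/webhook"
webhook_listening_port = "8888"
sync_dir_permissions = "777"
sync_file_permissions = "777"
```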