Community Answers

  1. The parity check ran last night. I'm set up to get email, Gotify, and browser-based notifications, and I noticed today that the email, Gotify, and browser messages all state 0 errors, while the Main tab and the History modal both state 2 errors. I'm more inclined to believe Main and History over the notifications. storage-diagnostics-20230404-0908.zip
  2. Setting disable_xconfig=true within /boot/config/plugins/nvidia-driver/settings.cfg solved this.
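     For reference, a minimal sketch of that change (the path is taken from the post; the exact key/value layout of the plugin's settings file is an assumption):

     ```ini
     # /boot/config/plugins/nvidia-driver/settings.cfg  (path from the post)
     disable_xconfig=true
     ```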
  3. Unraid 6.11.5
     Motherboard: ASUS P8Z68-V PRO GEN3
       - Initial graphics adapter is set to iGPU
       - Legacy boot mode
     PCI GPU: NVIDIA Quadro P400
       - Installing the latest driver (v525.78.01) with the nvidia-driver plugin
       - intel_gpu_top is not installed
       - /boot/config/modprobe.d/i915.conf exists as an empty file to enable the Intel drivers, as per these instructions
       - I tried adding nomodeset to the boot append options
     I've tried a number of other settings that I've now forgotten. All I get is a black screen with a solid (not blinking) cursor in the top-left corner. I want Unraid on the iGPU, leaving the P400 for Jellyfin. What else can I do? It works if I remove the nvidia-driver plugin and prevent the driver from installing.
  4. It won't. The move script that mover uses is not standard; it's custom-written, and it appears to copy the file to a .partial name, delete the original, and then rename the .partial to the real name. That rename does not trigger inotifywait.
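     That matches how inotify works: a rename emits a moved_to event, not close_write, so a watcher on close_write alone only ever sees the .partial name. A rough sketch simulating mover's write-then-rename sequence on throwaway files (paths are illustrative; on a real server you would also watch renames, e.g. inotifywait -mr -e close_write,moved_to ...):

     ```shell
     # Simulate mover's sequence on a scratch file (illustrative paths).
     dir=$(mktemp -d)
     echo "data" > "$dir/file.partial"   # write: emits close_write on the .partial name
     mv "$dir/file.partial" "$dir/file"  # rename: emits moved_to, NOT close_write
     # A watcher on close_write alone therefore never sees "$dir/file".
     ls "$dir"
     ```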
  5. I've been trying to track down why so many NEW files do not have a hash. I get emails every single time, due either to a hash mismatch (usually on nextcloud.log, despite excluding *.log files) or to the hash missing altogether. I dug into the source code of the plugin and replicated the inotifywait command so that I could watch for myself what was going on:
     inotifywait -mr -e close_write --format '%w%f' /mnt/disk1 /mnt/disk2
     After running mover, here is what I see:
     /mnt/disk2/documents/ubuntu-18.04.1-server-amd64.iso.partial
     /mnt/disk1/backup/unraid/STORAGE/flash/config/editor.cfg.partial
     /mnt/disk1/backup/unraid/STORAGE/flash/config/super.dat.CA_BACKUP.partial
     /mnt/disk1/backup/SQL/mariadb/vmosa/2022-11-12-08.00.01.sql.tgz.partial
     /mnt/disk1/backup/SQL/postgres11/airsonic/airsonic_2022-11-12-08.05.02.sql.tgz.partial
     /mnt/disk1/backup/SQL/postgres11/authelia/authelia_2022-11-12-08.05.03.sql.tgz.partial
     /mnt/disk1/backup/SQL/postgres11/family_photos_dev/family_photos_dev_2022-11-12-08.05.03.sql.tgz.partial
     /mnt/disk1/backup/SQL/postgres11/family_photos_prod/family_photos_prod_2022-11-12-08.05.04.sql.tgz.partial
     /mnt/disk1/backup/SQL/postgres11/film_convert_dev/film_convert_dev_2022-11-12-08.05.04.sql.tgz.partial
     /mnt/disk1/backup/SQL/postgres11/film_convert_prod/film_convert_prod_2022-11-12-08.05.05.sql.tgz.partial
     /mnt/disk1/backup/unraid/STORAGE/flash/config/plugins/dynamix.file.manager/dynamix.file.manager.txz.partial
     /mnt/disk1/backup/unraid/STORAGE/flash/config/plugins/dynamix.file.manager.plg.partial
     /mnt/disk1/backup/unraid/STORAGE/flash/config/plugins/unassigned.devices.plg.partial
     /mnt/disk1/backup/unraid/STORAGE/flash/config/plugins/dynamix.file.integrity/disks.ini.partial
     As you can see, every file moved by mover ends with .partial. If I run getfattr on the actual file (since there is no .partial file):
     root@storage:~# getfattr -d /mnt/disk1/backup/SQL/postgres11/airsonic/airsonic_2022-11-12-08.05.02.sql.tgz
     root@storage:~#
     When I navigate into the airsonic backup folder and run getfattr on any file in there (because the backup always saves to cache first and then mover moves it), getfattr -d airsonic* is blank for all files.
  6. 6.11.1 with the latest plugin versions installed. I'm attempting to preclear a 3TB drive. It's not going to be an array drive; it'll be more of a monthly backup-rotation drive, so I'm using the preclear as a test. At this point I've attempted to run preclear three (maybe four) times. If I trigger the clearing from the GUI with all default settings, on the post-read I get:
     Nov 08 21:16:13 preclear_disk_WD-WCC7K6PJ0D5Z_786: Post-Read: post-read verification failed!
     Nov 08 21:16:17 preclear_disk_WD-WCC7K6PJ0D5Z_786: S.M.A.R.T.: Error:
     Nov 08 21:16:17 preclear_disk_WD-WCC7K6PJ0D5Z_786: S.M.A.R.T.:
     Nov 08 21:16:17 preclear_disk_WD-WCC7K6PJ0D5Z_786: S.M.A.R.T.: ATTRIBUTE               INITIAL  NOW    STATUS
     Nov 08 21:16:18 preclear_disk_WD-WCC7K6PJ0D5Z_786: S.M.A.R.T.: Reallocated_Sector_Ct   0        0      -
     Nov 08 21:16:18 preclear_disk_WD-WCC7K6PJ0D5Z_786: S.M.A.R.T.: Power_On_Hours          42487    42498  Up 11
     Nov 08 21:16:18 preclear_disk_WD-WCC7K6PJ0D5Z_786: S.M.A.R.T.: Temperature_Celsius     22       28     Up 6
     Nov 08 21:16:18 preclear_disk_WD-WCC7K6PJ0D5Z_786: S.M.A.R.T.: Reallocated_Event_Count 0        0      -
     Nov 08 21:16:18 preclear_disk_WD-WCC7K6PJ0D5Z_786: S.M.A.R.T.: Current_Pending_Sector  0        0      -
     Nov 08 21:16:18 preclear_disk_WD-WCC7K6PJ0D5Z_786: S.M.A.R.T.: Offline_Uncorrectable   0        0      -
     Nov 08 21:16:18 preclear_disk_WD-WCC7K6PJ0D5Z_786: S.M.A.R.T.: UDMA_CRC_Error_Count    0        0      -
     Nov 08 21:16:18 preclear_disk_WD-WCC7K6PJ0D5Z_786: S.M.A.R.T.:
     Nov 08 21:16:18 preclear_disk_WD-WCC7K6PJ0D5Z_786: error encountered, exiting ...
     I'm not seeing errors in syslog for the drive.
  7. I wanted to add ".Recycle.Bin,backup/SQL" to the excluded folders and I accidentally hit my Enter key. Doing so resets all settings to default.
  8. I've been digging into various plugins already created for Unraid to understand how the events system works. I would like to make use of stopping_docker, but I'm trying to understand how to make sure my plugin's event file executes before the dockerman event runs. Internally it uses the find command, and when I run that manually it returns results alphabetically. I tried naming the plugin so it sorted both before and after "d", and I didn't see much of a difference; the output in syslog stayed the same, log-position-wise. Is there something else controlling the order in which events are triggered, or is it actually alphabetical?
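     One thing worth knowing while digging into this: find on its own returns entries in raw directory order, which is not guaranteed to be alphabetical; it only looks sorted when the filesystem happens to hand entries back that way, or when the output is piped through sort. A quick sketch of the distinction on temp files (the event-runner's actual internals are an assumption here):

     ```shell
     # find returns entries in filesystem/directory order, which is NOT
     # guaranteed alphabetical; only an explicit sort guarantees ordering.
     d=$(mktemp -d)
     touch "$d/zeta.plg" "$d/alpha.plg" "$d/mid.plg"
     find "$d" -type f          # order depends on the filesystem
     find "$d" -type f | sort   # deterministic, alphabetical
     ```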
  9. I went to restart my Thunderbird container today because VNC wouldn't connect (it connected yesterday), and now I'm greeted with the following. It's looping to infinity at this time, these exact same lines over and over again:
     ---Ensuring UID: 99 matches user---
     usermod: no changes
     ---Ensuring GID: 100 matches user---
     usermod: no changes
     ---Setting umask to 000---
     ---Checking for optional scripts---
     ---No optional script found, continuing---
     ---Checking configuration for noVNC---
     Nothing to do, noVNC resizing set to default
     Nothing to do, noVNC qaulity set to default
     Nothing to do, noVNC compression set to default
     ---Taking ownership of data...---
     ---Starting...---
     ---Version Check---
     ---Thunderbird not installed, installing---
     ---Sucessfully downloaded Thunderbird---
     ---Sucessfully downloaded Thunderbird---
     ---Preparing Server---
     ---Resolution check---
     ---Checking for old logfiles---
     ---Checking for old display lock files---
     ---Starting TurboVNC server---
     ---Starting TurboVNC server---
     ---Starting Fluxbox---
     ---Starting noVNC server---
     ---Starting noVNC server---
     WebSocket server settings:
     - Listen on :8080
     - Web server. Web root: /usr/share/novnc
     - No SSL/TLS support (no cert file)
     - Backgrounding (daemon)
     ---Starting Thunderbird---
     XPCOMGlueLoad error for file /thunderbird/libxul.so: libasound.so.2: cannot open shared object file: No such file or directory
     Couldn't load XPCOM.
     UPDATE: According to the release notes, the now-latest version 102 is not yet upgradeable from 91 and prior. I set THUNDERBIRD_V to 91.11.0 and we're working once again.
  10. OK, so this is curious: I was looking up Grafana and Unraid videos on YouTube, and while watching the video below I noticed that none of his port and volume mappings are collapsing. The video is only a little over a year old at this point, and I've been on the most recent versions as they released. Now I'm wondering if there is an option buried somewhere that disables the collapsing system.
  11. readmore-js is used on the Docker tab to allow collapsing of the Port Mappings and Volume Mappings columns. I have 40 Docker containers, of which 8 have enough port mappings and 15 have enough volume mappings to require the readmore-js-toggle chevron-down to show up. If the DOM resizes in any way then (I believe) readmore-js's resizeBoxes function is called, which causes an immense amount of lag that progressively gets worse the more times the DOM size changes. The first time I resize the DOM the lag is only about 3-4 seconds; the second time it's 11 seconds; the third time, 21 seconds. Looking at Firefox's profiling tool, I'm unsure whether "samples" means how many times the function was called, but for a simple resize of the browser (un-maximize and then maximize) it reports 21,856 samples for resizeBoxes, having spent 21,906 ms of traced running time; 66% of the time is spent resizing boxes. It isn't just resizing of the DOM either: if I leave the Docker tab open while I'm focused elsewhere, or I've left my computer for a bit, it'll crash the browser. On Android, if I try to zoom on my phone or tablet, it also lags horribly. My main profiling has been on Linux Mint with Firefox, but this has been happening for a long time across various machines, including Windows 7 (Firefox and Chrome), Windows 10 (Firefox and Chrome), Linux Mint 20.3 (Firefox, Brave, and Chromium), and an Android phone and tablet (both Chrome and Firefox). The Androids get so bad it usually requires killing the browser. Having now found the profiling tool and realized it's attempting to resize these boxes, I used the following JS to delete the readmore-js boxes, and everything on the page runs much smoother:
      document.querySelectorAll('.readmore-js-section').forEach(e => e.remove());
  12. Have you checked what is in appdata and what is growing?
  13. I was playing around with stacks in Portainer and ran into an issue where it was recreating a container every 10 minutes (until I realized it was happening). I checked Unraid and it had 16 randomly named containers, stopped. Rather than go slowly through each one with Unraid's GUI, I went through Portainer to delete the garbage containers. However, ever since doing that, those 16 containers all still appear under the advanced settings section. I can't find them using any `docker` type command on the server itself, and I'm not sure where it's pulling the list from. There has been no backup of appdata since this occurred, so it's not like it's pulling the 16 extra containers from a backup file or something. Edit: it's a runtime thing. Unraid seems to store information about Docker and its containers in a JSON file rather than querying Docker directly. Anything that then uses the DockerClient PHP class pulls from this JSON file, which will return containers that no longer exist. Once I restarted the server, everything reset back to the containers that actually exist rather than including the missing ones.
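      One way to see that kind of discrepancy is to diff the live container list against whatever the cache holds. A hedged sketch of the idea using stand-in data (the actual location and format of Unraid's cache file aren't shown here, and the "live" list is faked in place of something like docker ps -a --format '{{.Names}}'):

      ```shell
      # Stand-in data: a stale cache vs. what Docker would actually report.
      cache=$(mktemp); livef=$(mktemp)
      printf '%s\n' db web zz_ghost_1 zz_ghost_2 | sort > "$cache"  # cached names (stale)
      printf '%s\n' db web | sort > "$livef"                        # live names (stand-in for docker ps -a)
      # Names present only in the cache are the ghost containers:
      comm -23 "$cache" "$livef"
      ```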
  14. This connection is direct, using PuTTY from Windows.
      Aug 4 13:03:16 storage sshd[6898]: Close session: user root from port 58866 id 0
      Aug 4 13:03:16 storage sshd[6898]: Connection closed by port 58866
      Aug 4 13:03:16 storage sshd[6898]: pam_unix(sshd:session): session closed for user root
      Aug 4 13:03:16 storage sshd[6898]: Transferred: sent 2800, received 2416 bytes
      Aug 4 13:03:16 storage sshd[6898]: Closing connection to port 58866
  15. I only leave terminals in root's home directory, /root (via a plain cd command); it's a habit after running into lock issues many years ago. This issue just requires logging into the server and stopping the array; the connection will be disconnected immediately.