michielvv


Community Answers

  1. Thanks, I noticed in a parallel topic that this might indeed be the issue. Fixed it for me!
  2. All, I have been using Unraid with much happiness for some time, super! However, I now run into a problem after I did the following: I removed my second parity drive and added that drive as a new data drive (disk clear, format). Then I invoked a manual move to get my cache drives emptied. After that, I can visit any GUI section except the Dashboard, which keeps trying to load and at some point gives up. If I open other 'tabs' of the GUI in a separate browser tab in parallel, those pages give a 504 gateway timeout. Once the Dashboard finally stops loading, I can load the other 'tabs' successfully, no problem. All Docker containers and VMs run, and the syslog reports no errors. A reboot did not fix it, unfortunately... Diagnostics are attached! Any idea? Thanks in advance, Michiel nasi-diagnostics-20240306-1724.zip
  3. Hi all! After an accidental power outage, Unraid booted and started its parity check. I checked the logs and noted the following: I cannot find any guidance on these errors (I searched at length, though maybe not well enough :)) - should I be concerned about them?
  4. This worked, thanks! I just tested it by moving /mnt/user/data/test-share to /mnt/user/test-share, and indeed all files in this folder, scattered over different drives and the cache, were moved in the background to the destination. Note that apparently SMB will restart, i.e. any SMB connection with the server will be reset.
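     A minimal sketch of that move, assuming the paths from this thread; working at the /mnt/user (FUSE) level is what lets a single command cover the cache and every array disk:

     ```
     # Move a folder out of the 'data' share so it becomes a top-level share.
     # Unraid's user-share layer propagates the change to the underlying
     # cache/disk folders, per the result described above.
     mv /mnt/user/data/test-share /mnt/user/test-share

     # The new top-level folder should now show up as its own user share:
     ls /mnt/user/
     ```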
  5. All, I am re-arranging my user shares from:
     data (< user share)
       Movies
       Series
       [whatever]
     into:
     Movies (< user share)
     Series (< user share)
     [whatever] (< user share)
     I have done extensive searching, but failed to find solid advice on whether this is as simple as a mv command in /mnt/user/data moving all subfolders to /mnt/user/, which would *magically* align the underlying cache and disk folder structures. The reason to do it this way (instead of making new shares and moving the data from the subfolders into them) is that there is a lot of data to move. If I can avoid moving actual data by just changing the folder pointer, it could be done in a second. Maybe I need to manually change each folder structure on the cache and disks as well...? Anyway, I am afraid I will somehow wreck the disk/share framework Unraid is using and lose data, so I am looking for guidance. What would work? Regards, Michiel
  6. Solved all of this: the bottom line was that I had implemented the wrong way of getting dockers to use the VPN docker's network instead of their own:
     1. I made a custom network via the command line pointing to the VPN docker.
     2. In each docker's pull-down list I selected this network.
     This causes problems when updating (the custom network does not seem to be refreshed with the new docker ID of the VPN docker) and somehow also port-mapping problems. Instead, I should have done the following:
     1. For each docker, set the network to 'none'.
     2. For each docker, add the extra parameter "--net=container:[your vpn dockername - only lowercase allowed!!!]".
     And bingo: updating the VPN docker refreshes the relying dockers automatically, and port mapping works transparently without errors. Super!
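     As a sketch, that extra parameter corresponds to the following docker run flag; 'gluetunvpn' is the VPN container from this thread, while 'myapp' and its image are made-up placeholders:

     ```
     # Join the VPN container's network namespace instead of creating our own
     # network. Note: no -p port mappings here; published ports must be
     # declared on gluetunvpn itself.
     docker run -d --name myapp \
       --net container:gluetunvpn \
       alpine sleep 3600
     ```

     Because this references the VPN container by name rather than through a custom network pinned to an old container ID, it is (per this post) what survives updates of the VPN container.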
  7. Thanks again for the help. Restarting containers is clear to me now. The only remaining challenge is installing an update, which seems to apply the default template (with port mapping) rather than the installed one (without port mapping). Any clue how to solve that?
  8. Thanks for taking the time to debug this one! If I do what you suggest, it does indeed solve the problem of the network restart loop (question b). The problem I have not solved is question c: how do I update the other containers without port mapping (i.e. this is what breaks restarting the updated container, it seems)? If I update the container, it will update, but fail to restart (due to the default port mapping). I then have to remove the orphaned docker and reinstall the app from the docker templates section. There should be a way to update the container without the default port mapping...?
  9. Hi all! First of all: Unraid rules. I enjoy using it every day, and most 'challenges' I solve myself (and learn more tech as a bonus). Not so this one. I have set up gluetunvpn for a VPN connection. Then I configured a couple of containers to use the gluetunvpn network, so that all their traffic flows only through the gluetunvpn connection (i.e. through the actual VPN connection). I have two challenges:
     1) When I update the gluetunvpn container, the container:gluetunvpn networks used by the other containers become invalid. As a result, those other containers get stuck in a restart loop. I can understand that, as the container ID of gluetunvpn changes (hence, I assume, so does the network reference for each of the containers).
     2) When I update any of the other containers that use container:gluetunvpn, they fail to restart, because by default a port mapping is implemented, meant to be used in bridge/host network mode but, it seems, not with container:gluetunvpn. This is part of my setup:
     Three questions:
     a) Am I perhaps not using the right network implementation for what I need? If so, what would be the preferred way?
     b) How do I update gluetunvpn without getting the other containers into a restart loop?
     c) How do I update the other containers without port mapping (i.e. this is what breaks restarting the updated container, it seems)?
     Most grateful for your help!
  10. Thanks, I know. I just want to be absolutely sure I do not somehow screw up the Nextcloud database, so I used the Docker convenience to separate those databases as much as possible.
  11. ...and fixed it: my Nextcloud installation used /mnt/user/appdata/mariadb-official/, which is the path the mariadb-official installer refers to by default. Because the Nextcloud MySQL was running, its file lock prevented havoc. My bad - though maybe an error like 'data dir already exists!', or having the data dir name depend on the name of the docker in Unraid, would be convenient (though you might get other user errors)!
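     A sketch of the separation that avoids this clash; the container names and host paths are examples, and only /var/lib/mysql and MARIADB_ROOT_PASSWORD come from the official image:

     ```
     # Give each MariaDB instance its own host directory for /var/lib/mysql,
     # so the two servers never contend for the same aria_log_control file.
     docker run -d --name mariadb-nextcloud \
       -v /mnt/user/appdata/mariadb-nextcloud:/var/lib/mysql \
       -p 3306:3306 -e MARIADB_ROOT_PASSWORD=changeme mariadb:10.7

     docker run -d --name mariadb-official \
       -v /mnt/user/appdata/mariadb-official:/var/lib/mysql \
       -p 3307:3306 -e MARIADB_ROOT_PASSWORD=changeme mariadb:10.7
     ```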
  12. All, First of all: thanks for making the MariaDB docker available. Got a challenge, though. Fresh out of the box (with the only change being port 3306 >> 3307, due to an existing MariaDB for Nextcloud already running on 3306) I get the following log with errors (and a non-working database):
      2022-03-10 11:39:35+01:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.7.3+maria~focal started.
      2022-03-10 11:39:36+01:00 [Note] [Entrypoint]: MariaDB upgrade information missing, assuming required
      2022-03-10 11:39:36+01:00 [Note] [Entrypoint]: MariaDB upgrade (mariadb-upgrade) required, but skipped due to $MARIADB_AUTO_UPGRADE setting
      2022-03-10 11:39:36 0 [Note] mariadbd (server 10.7.3-MariaDB-1:10.7.3+maria~focal) starting as process 1 ...
      2022-03-10 11:39:36 0 [ERROR] mariadbd: Can't lock aria control file '/var/lib/mysql/aria_log_control' for exclusive use, error: 11. Will retry for 30 seconds
      2022-03-10 11:40:06 0 [ERROR] mariadbd: Got error 'Could not get an exclusive lock; file is probably in use by another process' when trying to use aria control file '/var/lib/mysql/aria_log_control'
      2022-03-10 11:40:06 0 [ERROR] Plugin 'Aria' init function returned error.
      2022-03-10 11:40:06 0 [ERROR] Plugin 'Aria' registration as a STORAGE ENGINE failed.
      2022-03-10 11:40:06 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
      2022-03-10 11:40:06 0 [Note] InnoDB: Number of transaction pools: 1
      2022-03-10 11:40:06 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
      2022-03-10 11:40:06 0 [Note] InnoDB: Using Linux native AIO
      2022-03-10 11:40:06 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
      2022-03-10 11:40:06 0 [Note] InnoDB: Completed initialization of buffer pool
      2022-03-10 11:40:06 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=33207616354,33207616354
      2022-03-10 11:40:06 0 [ERROR] InnoDB: Malformed log record; set innodb_force_recovery=1 to ignore.
      2022-03-10 11:40:06 0 [Note] InnoDB: Dump from the start of the mini-transaction (LSN=33207616354) to 100 bytes after the record: len 100; hex 66696c655f6c6f636b7302075052494d415259030473697a6500350206806c20bc270120063e590000033924df00205e0082b9030d070f20000120063ee0590000033924df096e657874636c6f75640d6f635f66696c655f6c6f636b73075052494d4152; asc file_locks PRIMARY size 5 l ' >Y 9$ ^ > Y 9$ nextcloud oc_file_locks PRIMAR;
      2022-03-10 11:40:06 0 [Warning] InnoDB: Log scan aborted at LSN 33207681024
      2022-03-10 11:40:06 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
      2022-03-10 11:40:06 0 [Note] InnoDB: Starting shutdown...
      2022-03-10 11:40:06 0 [ERROR] Plugin 'InnoDB' init function returned error.
      2022-03-10 11:40:06 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
      2022-03-10 11:40:06 0 [Note] Plugin 'FEEDBACK' is disabled.
      2022-03-10 11:40:06 0 [ERROR] Could not open mysql.plugin table: "Unknown storage engine 'Aria'". Some plugins may be not loaded
      2022-03-10 11:40:06 0 [ERROR] Failed to initialize plugins.
      2022-03-10 11:40:06 0 [ERROR] Aborting
      What does this mean and how do I fix it? I have searched a lot and usually succeed, but I cannot solve this one... Thanks in advance!
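      As the previous answer shows, the cause here turned out to be two containers mapping the same host directory to /var/lib/mysql. A sketch of one quick way to check for such a clash, using only standard docker commands:

      ```
      # Print each running container's volume mappings; two MariaDB containers
      # sharing one host path for /var/lib/mysql will fight over the
      # aria_log_control lock, exactly as in the log above.
      for c in $(docker ps --format '{{.Names}}'); do
        echo "== $c"
        docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' "$c"
      done
      ```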