Report Comments posted by dopeytree
-
-
Not sure if you're aware, but ZFS on drives in the main array is kind of pointless for data integrity, as there is no bit-rot protection: it will let you know there is a problem but cannot fix it (whereas ZFS pools with redundancy can).
You can still do ZFS receive on an array disk, though, to receive snapshots sent from a ZFS pool.
I keep trying to add this info to the documentation but no one is merging it, so..
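For anyone wanting to try that, a rough sketch of sending a snapshot from a pool to an array disk is below. The pool, dataset, and snapshot names are just placeholders, and I'm assuming the array disk's single-device pool is named disk1 (the usual Unraid naming), so adjust to your own layout:
# take a snapshot of the source dataset on the pool (example names only)
zfs snapshot cache-zfs/appdata@backup-2024-01-01
# send it to a dataset on the ZFS-formatted array disk
zfs send cache-zfs/appdata@backup-2024-01-01 | zfs receive disk1/appdata-backup
Later runs could use zfs send -i old-snap new-snap so only the changes get sent.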
-
Valid, but equally they could just add an option to select a location to store the files: default to the USB stick, but allow a manual setting too.
A script to delete the old files in the .git folder could be something like:
import os
import datetime

def clean_folder(folder_path, days_to_keep):
    # Get the current date
    current_date = datetime.datetime.now()
    # Calculate the date threshold
    threshold_date = current_date - datetime.timedelta(days=days_to_keep)
    # Walk the folder so files nested inside sub-folders are covered too
    for root, dirs, files in os.walk(folder_path):
        for file_name in files:
            file_path = os.path.join(root, file_name)
            # Use the last-modified time (Linux has no true creation time)
            file_mtime = datetime.datetime.fromtimestamp(os.path.getmtime(file_path))
            # If the file is older than the threshold date, delete it
            if file_mtime < threshold_date:
                os.remove(file_path)
                print(f"Deleted {file_path}")

# Specify the folder path to clean and the number of days to keep files
folder_path = "path/to/your/folder"
days_to_keep = 7  # Change this to the number of days you want to keep files

# Call the function to clean the folder
clean_folder(folder_path, days_to_keep)
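For what it's worth, the folder path would presumably be the .git folder on the flash drive (e.g. /boot/.git, assuming the default flash mount point), and the script could be scheduled with the User Scripts plugin or cron. I'd test it with the os.remove line commented out first so it only prints what it would delete.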
In the future, perhaps Unraid Connect should have a companion app (a docker container), and that could be what does the encryption processing for more secure backups. It would then use the appdata share to process data.
It would also avoid excess reads/writes to the USB stick.
-
OK, I used the search function for the term 'USB'.
Perhaps we could make the legacy docs a different colour, i.e. blue? Not sure how to implement this in Docusaurus.
In either case my USB drive is 2GB, which is what the up-to-date documents specify..
Anyway, the bug is that Connect cloud backup is not cleaning up after its work.
I know it has to process some files (to remove any identifiable items), so that was probably the easiest thing to do, but after a successful upload it should be removing the files.
1.25GB is quite a lot of wastage.
This work could probably be done in RAM, or, simpler, there could be an option to use one's appdata folder, since the default shares are created for all users.
i.e. the .git folder.
-
Yeah, but it's been enabled since 6.11.
Is that what the .git folder is for? The USB cloud backup function?
Or is it where a manually clicked flash backup is generated?
Is it safe to delete?
It appears to have lots of small files inside folders. The biggest I can see is 1MB.
Here's my USB drive. Everything except the .git folder = about 400MB.
The .git folder is 1.25GB.
-
-
I have 2x ZFS pools (one is a 2x NVMe mirror & the other is 4x 12TB drives in Z1) and they have stopped randomly spinning up after following the steps above.
I also removed the ZFS Master plugin but may add it again; basically, if you open the MAIN tab it spins up all ZFS drives.
Hope you get it sorted, mate.
-
- Stop all your docker containers and then start them one by one until you find the devil.
- Check your settings for 'folder caching'.
- In several cases I found some containers like 'Dashdot' running in privileged mode caused disk spin-up.
- Also check Plex scanning.
- Install turbo write so not all drives need to spin up to write to parity.
-
Do you have a Realtek network device?
-
Perhaps this is better suited to Feature Requests.
-
-
So on my Mac it looks like this.
Power options are at the top right.
In the USB stick's GUI mode you ONLY get a username prompt.
So if someone accidentally boots into GUI mode and wants to reboot, how would a noob do that without typing in the crazy-secure password manually..
If we had a nice button for shutdown it would be more user-friendly.
It's minor, but a nice one to do when someone has the time.
-
The login page of GUI mode (when you boot you can choose the normal remote-based mode or GUI mode).
The above screenshot is after login, from the web browser.
I am talking about before any login to the Linux desktop that is available on the USB stick.
It shows only the option of entering a login username; then it changes to a password form.
I think it should, below or above, show options for shutdown, restart, etc.
Just so it matches every other computer experience.
On Windows, Linux, Mac, etc. you get a username, usually the current time, and options for shutdown.
-
It's only minor. I'll grab a photo next time I reboot.
-
I got this error yesterday and have no idea what caused it, as I had not changed anything (apart from updating the netdata container & turning turbo write off). The server had been running for a few days.
All drives good.
Only noticed it as Plex was saying a file was not available.
Rebooted the server and all OK.
Did we establish the cause and/or fix for this error?
Thanks
-
-
I read somewhere that it's recommended to use 'mask' rather than 'disable'.
echo disable > /sys/firmware/acpi/interrupts/gpe69 2>/dev/null
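If you want to try the mask option instead, I believe recent kernels accept 'mask'/'unmask' in that same sysfs file as well as 'disable'/'enable', so (untested) it would presumably be:
echo mask > /sys/firmware/acpi/interrupts/gpe69 2>/dev/null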
-
Hmm, it's reappeared today..
Here's a screenshot & diagnostics attached...
Maybe it is just a feature..
-
Maybe list the dockers you are using. It is almost definitely one of those.
I'm now running 3x ZFS pools and have fixed this issue...
Quote: "The culprit seems to be the Dashdot container, which is spinning up the main array and stopping it from staying spun down. If you edit the container and turn 'Privileged' off, it should stop the constant array spin-ups."
-
The culprit seems to be the Dashdot container, which is spinning up the main array and stopping it from staying spun down.
If you edit the container and turn 'Privileged' off, it should stop the constant array spin-ups.
-
Did a test in safe mode & it still spins up all disks in the array, so we KNOW it is NOT a PLUGIN issue.
Next I will go through each container one by one.
-
-
I don't think it's a ZFS issue; I think it's something to do with the main array regardless of format.
Will run in safe mode tonight to double-check.
Is there another way to see in detail what process is requesting data?
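One rough way I could try (just a sketch; the disk number is an example, and it assumes lsof/fuser are available, which they normally are) is to look for processes holding files open under an array disk mount:
# list processes with files open somewhere under /mnt/disk1
lsof +D /mnt/disk1 2>/dev/null
# or a quicker summary of which processes are using that mount
fuser -vm /mnt/disk1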
-
-
The spin-up is happening without ZFS Master installed, so it's something about 6.12.x.
I also removed any ZFS drives from the main array, so now the only ZFS drives are SSD caches, and one of those pools spins down (standby for SSD). So it's something to do with 6.12.x and the main array.
Possibly something built into the OS? It's not something that shows up in Disk Activity or Open Files.
However, I do also notice that my hard drive lights are not showing data access, so perhaps the drives are not actually spun up but are showing as spun up??
I would expect the UPS load to be lower, though, if the array drives were spun down...
It happens with & without the cache plugin installed.
It also happens if Plex is stopped.
-
Open Files shows the output below. Apologies, I'm not sure how to print a better output. I can't see any array activity!?
php-fpm 313 KILL 1 1 /usr/local/emhttp (working directory) qemu-system-x86 1156 KILL 3 2 /mnt/cache-zfs/isos /mnt/cache-zfs/domains/Home-assistant-haos_ova-10.3.qcow2 /etc/libvirt/qemu/nvram/4251e015-01de-3002-6f1d-7d91b9c23346_VARS-pure-efi.fd php-fpm 3555 KILL 1 1 /usr/local/emhttp (working directory) php-fpm 5163 KILL 1 1 /usr/local/emhttp (working directory) php-fpm 5164 KILL 1 1 /usr/local/emhttp (working directory) dockerd 7964 KILL 23 18 /var/lib/docker/buildkit/cache.db /var/lib/docker/buildkit/metadata_v2.db /var/lib/docker/buildkit/snapshots.db /var/lib/docker/buildkit/containerdmeta.db /var/lib/docker/volumes/metadata.db /var/lib/docker/volumes/metadata.db /var/lib/docker/buildkit/containerdmeta.db /var/lib/docker/buildkit/snapshots.db /var/lib/docker/buildkit/metadata_v2.db /var/lib/docker/buildkit/cache.db /var/lib/docker/containers/098...506814c570cc2197e7d23fa340993dbdf112bb382-json.log /var/lib/docker/containers/208...c6c7a68ab2e9afabdd0a5f9e8cd753fc92041d449-json.log /var/lib/docker/containers/473...75d673b577dc526f448dba162ee874e660c4652a4-json.log /var/lib/docker/containers/a4e...2f5bde7679559640049ebe9a692d65760a0fe5890-json.log /var/lib/docker/containers/4c8...13f3e4a27318811e507f9d21ba1375194f1201568-json.log /var/lib/docker/containers/bda...c90becb36f67bb975973913c11f4c1a992b52fd47-json.log /var/lib/docker/containers/920...fe8864a75de68f5ddcc7e27c923325a04fa72e9c3-json.log /var/lib/docker/containers/d22...0a195699cf376537f9efad07c76d007f83a4b519a-json.log /var/lib/docker/containers/dfc...edb8e73e17604de529453e5fc3dd50fd35b18ab47-json.log /var/lib/docker/containers/ba9...624678d3719209709e4c182d40ad4d78cb142cf25-json.log /var/lib/docker/containers/711...697d0c618c422c67bfc03330dc86672d5d8f9bef9-json.log /var/lib/docker/containers/02c...b88c3b1e9aff8777d5ba70e72545ecf3d3e8b4e53-json.log /var/lib/docker/containers/2b7...fffc0cd791a6d1236e770ced3065e60863b0092f4-json.log containerd 8131 KILL 2 1 /var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt/meta.db /var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt/meta.db unraid-api 8359 KILL 1 1 /usr/local/bin/unraid-api (working directory) php-fpm82 10207 KILL 2 2 /config/log/php/error.log /config/log/php/error.log php 10208 KILL 1 1 /config/www/app.sqlite nginx 10211 KILL 2 2 /config/log/nginx/access.log /config/log/nginx/error.log nginx 10336 KILL 2 2 /config/log/nginx/access.log /config/log/nginx/error.log nginx 10337 KILL 2 2 /config/log/nginx/access.log /config/log/nginx/error.log nginx 10338 KILL 2 2 /config/log/nginx/access.log /config/log/nginx/error.log nginx 10339 KILL 2 2 /config/log/nginx/access.log /config/log/nginx/error.log supervisord 10501 KILL 1 1 /config/supervisord.log rsyslogd 11541 KILL 1 1 /mnt/cache-zfs/system/syslog-192.168.22.2.log s3_sleep 11697 KILL 1 1 /usr/local/emhttp (working directory) supervisord 12197 KILL 1 1 /config/supervisord.log supervisord 12601 KILL 1 1 /config/supervisord.log supervisord 13258 KILL 1 1 /config/supervisord.log Lidarr 13385 KILL 6 5 /config/lidarr.db-shm /config/lidarr.db /config/lidarr.db-wal /config/lidarr.db-shm /config/lidarr.db /config/lidarr.db-wal Plex Media Serv 14045 KILL 63 47 /config/Plex Media Server/Codecs/8217c1c-4578-linux-x86_64/libh264_decoder.so /config/Plex Media Server/Codecs/8217c1c-4578-linux-x86_64/libflv_decoder.so /config/Plex Media Server/Codecs/8217c1c-4578-linux-x86_64/libmp2_decoder.so /config/Plex Media Server/Codecs/8217c1c-4578-linux-x86_64/libaac_encoder.so /config/Plex Media 
Server/Codecs/8217c1c-4578-linux-x86_64/libaac_decoder.so /config/Plex Media Server/Code...8217c1c-4578-linux-x86_64/libmpeg2video_decoder.so /config/Plex Media Server/Codecs/8217c1c-4578-linux-x86_64/libhevc_decoder.so /config/Plex Media Server/Codecs/8217c1c-4578-linux-x86_64/libmpeg4_decoder.so /config/Plex Media Server/Codecs/8217c1c-4578-linux-x86_64/libmp3_decoder.so /config/Plex Media Server/Codecs/8217c1c-4578-linux-x86_64/libvp9_decoder.so /config/Plex Media Server/Code.../8217c1c-4578-linux-x86_64/libmsmpeg4v3_decoder.so /config/Plex Media Server/Codecs/8217c1c-4578-linux-x86_64/libdca_decoder.so /config/Plex Media Server/Codecs/8217c1c-4578-linux-x86_64/liblibx264_encoder.so /config/Plex Media Server/Codecs/8217c1c-4578-linux-x86_64/libac3_decoder.so /config/Plex Media Server/Plug...Databases/com.plexapp.plugins.library.blobs.db-shm /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-shm /config/Plex Media Server/Logs/Plex Media Server.log /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-shm /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media 
Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...n Support/Databases/com.plexapp.plugins.library.db /config/Plex Media Server/Plug...pport/Databases/com.plexapp.plugins.library.db-wal /config/Plex Media Server/Plug...ort/Databases/com.plexapp.plugins.library.blobs.db /config/Plex Media Server/Plug...Databases/com.plexapp.plugins.library.blobs.db-wal /config/Plex Media Server/Plug...Databases/com.plexapp.plugins.library.blobs.db-shm /config/Plex Media Server/Plug...ort/Databases/com.plexapp.plugins.library.blobs.db /config/Plex Media Server/Plug...Databases/com.plexapp.plugins.library.blobs.db-wal php-fpm 14062 KILL 1 1 /usr/local/emhttp (working directory) Radarr 14495 KILL 6 5 /config/radarr.db-shm /config/radarr.db /config/radarr.db-wal /config/radarr.db-shm /config/radarr.db /config/radarr.db-wal Plex Script Hos 14638 KILL 2 2 /config/Plex Media Server/Plug...upport/Data/com.plexapp.system (working directory) /config/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log python3 14650 KILL 13 12 /config/tautulli.db-shm /config/logs/tautulli.log /config/logs/tautulli_api.log /config/logs/plex_websocket.log /config/logs/plexapi.log /config/tautulli.db /config/tautulli.db /config/tautulli.db-wal /config/tautulli.db-shm /config/tautulli.db-wal /config/tautulli.db /config/tautulli.db-wal /config/tautulli.db mono 14670 KILL 11 9 /config/sonarr.db-shm /config/logs.db-shm /config/logs.db /config/logs.db-wal /config/logs.db-shm /config/logs.db /config/sonarr.db /config/sonarr.db-wal /config/sonarr.db-shm /config/sonarr.db /config/sonarr.db-wal Prowlarr 14869 KILL 4 3 /config/prowlarr/prowlarr.db-shm /config/prowlarr/prowlarr.db /config/prowlarr/prowlarr.db-wal /config/prowlarr/prowlarr.db-shm php 14933 KILL 1 1 /config/database.sqlite node 14971 KILL 6 5 /config/db/db.sqlite3-shm /config/logs/overseerr-2023-07-06.log /config/logs/.machinelogs-2023-07-06.json /config/db/db.sqlite3 /config/db/db.sqlite3-wal /config/db/db.sqlite3-shm Plex Tuner Serv 15631 KILL 1 1 /config/Plex Media Server/Logs/Plex Tuner Service.log notify_poller 16028 KILL 1 1 /usr/local/emhttp (working directory) session_check 16030 KILL 1 1 /usr/local/emhttp (working directory) system_temp 16032 KILL 1 1 /usr/local/emhttp (working directory) wg_poller 16035 KILL 1 1 /usr/local/emhttp (working directory) update_1 16038 KILL 1 1 /usr/local/emhttp (working directory) update_2 16040 KILL 1 1 /usr/local/emhttp (working directory) update_3 16042 KILL 1 1 /usr/local/emhttp (working directory) sleep 18465 KILL 1 1 /usr/local/emhttp (working directory) device_list 19968 KILL 1 1 /usr/local/emhttp (working directory) disk_load 19970 KILL 1 1 /usr/local/emhttp (working directory) parity_list 19973 KILL 1 1 /usr/local/emhttp (working directory) cache_dirs 20091 KILL 1 1 /usr/local/emhttp (working directory) ttyd 21913 KILL 1 
1 /usr/local/emhttp (working directory) php-fpm 23612 KILL 1 1 /usr/local/emhttp (working directory) php-fpm 24692 KILL 1 1 /usr/local/emhttp (working directory) php-fpm 24926 KILL 1 1 /usr/local/emhttp (working directory) php-fpm 25802 KILL 1 1 /usr/local/emhttp (working directory) sleep 26951 KILL 1 1 /usr/local/emhttp (working directory) cache_dirs 27067 KILL 1 1 /usr/local/emhttp (working directory) timeout 27081 KILL 1 1 /usr/local/emhttp (working directory) find 27085 KILL 8 1 /usr/local/emhttp (working directory) /usr/local/emhttp /mnt/cache-zfs/appdata/nextcloud/www/nextcloud/core/doc/admin/_sources /mnt/cache-zfs/appdata/nextclo...w/nextcloud/core/doc/admin/_sources/file_workflows /mnt/cache-zfs/appdata/nextcloud/www/nextcloud/core/doc /mnt/cache-zfs/appdata/nextcloud/www/nextcloud/core/doc/admin /mnt/cache-zfs/appdata/nextcloud/www/nextcloud/core/doc/admin/_sources /mnt/cache-zfs/appdata/nextclo...ud/core/doc/admin/_sources/configuration_mimetypes sh 27301 KILL 1 1 /usr/local/emhttp (working directory) lsof 27304 KILL 1 1 /usr/local/emhttp (working directory) lsof 27305 KILL 1 1 /usr/local/emhttp (working directory) php-fpm 31417 KILL 1 1 /usr/local/emhttp (working directory) php-fpm 32429 KILL 1 1 /usr/local/emhttp (working directory)
Update sizes.. data should be in RAM not on USB stick
in Stable Releases
Is this being tracked?
The Connect plugin just needs to occasionally clean up the .git folder on the USB stick.