Everything posted by BRiT

  1. What benefits does "anacron" bring that aren't already provided by base unraid, or by unraid with the user-scripts plugin?
  2. It worked well for 6.7.3. However, 6.8 flipped the tables on tunables. Things may yet change again once 6.9 final hits; it all depends on what Limetech implements to address the performance decreases in parity build and check times that some users on 6.8 have reported.
  3. Cleanliness is next to godliness. The only large quantity of data to be backed up from inside AppData is an Emby media library, and even with that, all the real metadata and images are stored at the same level as the media, so it's decentralized compared to things like monster Plex DBs (for instance: /mnt/user/Movies/movie_name/* or /mnt/user/TV/tv_name/*). Any and all of my development databases are backed up using native SQL tools (sketch below).
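     For example, a hypothetical dump using the stock mysqldump tool; the container name, the target path, and the assumption that the container exposes its root password as $MYSQL_ROOT_PASSWORD are all placeholders:

         # dump every database from a MariaDB docker to a dated file on the array
         docker exec mariadb sh -c 'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases' \
             > /mnt/user/Backups/dev-dbs/all-$(date +%F).sql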
  4. By comparison, my backup of AppData takes 75 SECONDS.

     Dec 16 03:00:02 REAVER CA Backup/Restore: #######################################
     Dec 16 03:00:02 REAVER CA Backup/Restore: Community Applications appData Backup
     Dec 16 03:00:02 REAVER CA Backup/Restore: Applications will be unavailable during
     Dec 16 03:00:02 REAVER CA Backup/Restore: this process. They will automatically
     Dec 16 03:00:02 REAVER CA Backup/Restore: be restarted upon completion.
     Dec 16 03:00:02 REAVER CA Backup/Restore: #######################################
     Dec 16 03:01:14 REAVER CA Backup/Restore: #######################
     Dec 16 03:01:14 REAVER CA Backup/Restore: appData Backup complete
     Dec 16 03:01:14 REAVER CA Backup/Restore: #######################
  5. Your backup is taking over an hour, more like 82 minutes, starting at 5:00 am and ending around 6:21 am.

     Dec 17 06:21:34 NAS2 CA Backup/Restore: #######################
     Dec 17 06:21:34 NAS2 CA Backup/Restore: appData Backup complete
     Dec 17 06:21:34 NAS2 CA Backup/Restore: #######################
  6. Dec 17 05:00:01 NAS2 CA Backup/Restore: #######################################
     Dec 17 05:00:01 NAS2 CA Backup/Restore: Community Applications appData Backup
     Dec 17 05:00:01 NAS2 CA Backup/Restore: Applications will be unavailable during
     Dec 17 05:00:01 NAS2 CA Backup/Restore: this process. They will automatically
     Dec 17 05:00:01 NAS2 CA Backup/Restore: be restarted upon completion.
     Dec 17 05:00:01 NAS2 CA Backup/Restore: #######################################
     Dec 17 05:00:01 NAS2 CA Backup/Restore: Stopping duckdns
     Dec 17 05:00:05 NAS2 CA Backup/Restore: docker stop -t 60 duckdns
     Dec 17 05:00:05 NAS2 CA Backup/Restore: Stopping lidarr
     Dec 17 05:00:10 NAS2 CA Backup/Restore: docker stop -t 60 lidarr
     Dec 17 05:00:10 NAS2 CA Backup/Restore: Stopping medusa
     Dec 17 05:00:23 NAS2 CA Backup/Restore: docker stop -t 60 medusa
     Dec 17 05:00:23 NAS2 CA Backup/Restore: Stopping PlexMediaServer
     Dec 17 05:00:45 NAS2 CA Backup/Restore: docker stop -t 60 PlexMediaServer
     Dec 17 05:00:45 NAS2 CA Backup/Restore: Stopping radarr
     Dec 17 05:00:50 NAS2 CA Backup/Restore: docker stop -t 60 radarr
     Dec 17 05:00:50 NAS2 CA Backup/Restore: Backing up USB Flash drive config folder to
     Dec 17 05:01:12 NAS2 CA Backup/Restore: Using command: /usr/bin/rsync -avXHq --delete --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" /boot/ "/mnt/user/B../" > /dev/null 2>&1
     Dec 17 05:01:17 NAS2 CA Backup/Restore: Changing permissions on backup
     Dec 17 05:01:17 NAS2 CA Backup/Restore: Backing Up appData from /mnt/user/appdata/ to /mnt/user/Backups/unraid [email protected]
     Dec 17 05:01:17 NAS2 CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar -cvaf '/mnt/user/Backups/unraid [email protected]' --exclude "DarkStat" --exclude "headphones" --exclude "home-assistant" --exclude "jackett" --exclude "medusa" --exclude "officialplex" --exclude "radarr" --exclude 'docker.img' * >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress
     Dec 17 05:43:12 NAS2 CA Backup/Restore: Backup Complete
     Dec 17 05:43:12 NAS2 CA Backup/Restore: Verifying backup
     Dec 17 05:43:12 NAS2 CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar --diff -C '/mnt/user/appdata/' -af '/mnt/user/Backups/unraid [email protected]' > /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress
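     If anyone wants to see which appdata folder is dominating that ~42-minute tar, a quick size check with standard tools (path per stock unraid) will show it:

         # largest appdata folders sort to the bottom; the biggest ones drive the backup time
         du -sh /mnt/user/appdata/* | sort -h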
  7. You have Auto-Update enabled as well as AppData Backup enabled.
  8. Correct, no audio drivers on the host system, so you likely won't be able to get the USB speakers set up either. You could attempt to pass the USB device through to the docker and have its audio drivers installed in that docker, but no promises on that working. It seems to work for USB Bluetooth devices though. A rough sketch of what that would look like is below.
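     The device path here is made up; get the real Bus/Device numbers from lsusb first (--device is a standard docker flag, the image name is hypothetical):

         lsusb                                          # note the Bus/Device numbers of the USB speakers
         docker run -d --device=/dev/bus/usb/001/004 your-audio-image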
  9. Never, ever put your unraid server directly on the internet. That's what WireGuard or other VPN setups are for. Before you do anything, make sure you won't be exposing your server directly to the internet. Typical setups use your own router, where you can adjust settings, and then have your PC(s) and server(s) set up to use DHCP from the router. If you want static IP addresses, accomplish that by creating static DHCP entries on the router that map each MAC address to an IPv4 address.
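     For remote access, the tunnel route looks something like this minimal WireGuard client config sketch; the keys, hostname, port, and the 10.253.0.0/24 subnet are all placeholders:

         [Interface]
         PrivateKey = <client-private-key>
         Address = 10.253.0.2/32

         [Peer]
         PublicKey = <server-public-key>
         Endpoint = your-ddns-name.example.com:51820
         AllowedIPs = 10.253.0.0/24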
  10. beep

     Here's advanced beepage:

     beep -f 659 -l 460 -n -f 784 -l 340 -n -f 659 -l 230 -n -f 659 -l 110 \
          -n -f 880 -l 230 -n -f 659 -l 230 -n -f 587 -l 230 -n -f 659 -l 460 \
          -n -f 988 -l 340 -n -f 659 -l 230 -n -f 659 -l 110 -n -f 1047 -l 230 \
          -n -f 988 -l 230 -n -f 784 -l 230 -n -f 659 -l 230 -n -f 988 -l 230 \
          -n -f 1318 -l 230 -n -f 659 -l 110 -n -f 587 -l 230 -n -f 587 -l 110 \
          -n -f 494 -l 230 -n -f 740 -l 230 -n -f 659 -l 460

     https://github.com/adamrees89/beep-songs
  11. I don't see things being nearly as bad, but I do see sizeable differences from going through the SHFS layer. I do have cache-dirs running, so maybe that artificially boosts the direct-disk numbers? On top of that, I notice the slight visual difference when just running the following 2 commands in a PuTTY SSH window as well. Naturally going through FUSE has its overhead, but I don't think it was ever this drastic.

     ls /mnt/disk1/Media/TV/*
     ls /mnt/user/Media/TV/*

     Dec 15 15:34:22 REAVER cache_dirs: Arguments=-m 11 -M 30 -l off
     Dec 15 15:34:22 REAVER cache_dirs: Max Scan Secs=30, Min Scan Secs=11
     Dec 15 15:34:22 REAVER cache_dirs: Scan Type=adaptive
     Dec 15 15:34:22 REAVER cache_dirs: Min Scan Depth=4
     Dec 15 15:34:22 REAVER cache_dirs: Max Scan Depth=none
     Dec 15 15:34:22 REAVER cache_dirs: Use Command='find -noleaf'
     Dec 15 15:34:22 REAVER cache_dirs: cache_dirs service rc.cachedirs: Started: '/usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -m 11 -M 30 -l off 2>/dev/null'

     root@REAVER:~# time ls /mnt/user/Media/TV/* | wc -l
     2317
     real 0m0.528s
     user 0m0.017s
     sys 0m0.098s
     root@REAVER:~# time ls /mnt/user/Media/TV/* | wc -l
     2317
     real 0m0.511s
     user 0m0.011s
     sys 0m0.103s
     root@REAVER:~# time ls /mnt/user/Media/TV/* | wc -l
     2317
     real 0m0.520s
     user 0m0.012s
     sys 0m0.107s
     root@REAVER:~# time ls /mnt/user/Media/TV/* | wc -l
     2317
     real 0m0.689s
     user 0m0.012s
     sys 0m0.111s
     root@REAVER:~# time ls /mnt/disk1/Media/TV/* | wc -l
     2317
     real 0m0.013s
     user 0m0.009s
     sys 0m0.009s

     Even running just the top-level directory listing there is a slight difference, with far fewer entries.

     root@REAVER:~# time ls /mnt/user/Media/TV/ | wc -l
     75
     real 0m0.025s
     user 0m0.006s
     sys 0m0.007s
     root@REAVER:~# time ls /mnt/disk1/Media/TV/ | wc -l
     75
     real 0m0.006s
     user 0m0.007s
     sys 0m0.004s
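     To smooth out caching effects, the same comparison can be repeated a few times in a row (bash, paths as above):

         # each loop prints real/user/sys three times; compare /mnt/user vs /mnt/disk1
         for i in 1 2 3; do time ls /mnt/user/Media/TV/* > /dev/null; done
         for i in 1 2 3; do time ls /mnt/disk1/Media/TV/* > /dev/null; done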
  12. If you have 2 parity disks, they are not copies of one another; each one stores parity computed with a different algorithm (a different mathematical formula used for the parity determination).
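     Assuming unraid's dual parity follows the standard P+Q scheme (the same math as RAID-6), the two disks hold, for data disks D_1 through D_n:

         P = D_1 \oplus D_2 \oplus \cdots \oplus D_n
         Q = g^0 D_1 \oplus g^1 D_2 \oplus \cdots \oplus g^{n-1} D_n   (multiplication in GF(2^8), g a generator)

     Losing any two disks leaves a solvable pair of equations, which is why either parity disk alone is not a "copy" of the other.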
  13. I mark all of my important files as immutable, on top of setting them read-only and owned by root.
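     On an XFS or btrfs array disk that combination looks like this (the filename is just an example; setting and clearing the immutable flag requires root):

         chown root:root important.cfg
         chmod 444 important.cfg
         chattr +i important.cfg    # immutable: even root must run 'chattr -i' before editing
         lsattr important.cfg       # verify the 'i' flag is set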
  14. I'd likely do a 2-step process to prevent having to deal with issues: 1. use CA Backup to do the docker backups to a location in the array. 2. use my own rsync backup script from the location in the array to offsite location(s), or at least to different server(s). A sketch of step 2 is below.
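     Step 2 could be as simple as this rsync sketch; the paths and host are placeholders:

         # push the CA Backup output from the array to an offsite box over ssh
         rsync -avH --delete /mnt/user/Backups/appdata/ backupuser@offsite-host:/backups/unraid-appdata/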
  15. Use Community Applications to find and install "CA Backup / Restore Appdata". It can stop the dockers, do the backup, then restart them. Copying and backing up in-use files with your own process may prove troublesome.
  16. 1. Yes, bridging enabled with br0.
      2. Nothing pinned at all. No VM running. Only dockers, and no pinning defined.

     That core 27 was entirely 100% WSDD. As soon as I stopped and disabled WSDD in SMB Settings, usage went to 0%. The CPU time was 116 hours (4.83 days), which puts it around December 21st when it went berserk. The only items in the log that stand out are very typical for my system (happening for months if not years):

     A. nut_libusb get_string errors
     B. ffdetect from the emby docker not liking certain media and segfaulting.

     Here's the entire diagnostics, but I had not seen any triggering events in the syslog. reaver-diagnostics-20191226-1500.zip
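     For anyone chasing a similar runaway process, sorting by accumulated CPU time makes a 116-hour offender obvious (procps ps; sort keys per its man page):

         # highest cumulative CPU time first; ELAPSED shows how long each process has been running
         ps -eo pid,etime,time,args --sort=-time | head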
  17. Just wanted to report back that wsdd is hitting 100% CPU even with the "-i br0" option set. So that is not a fix, it's just a band-aid.
  18. Settings > SMB > WSD Options.
  19. Your USB Flash drive is "bugging out". You need to fix that.
  20. Stop putting your unraid server on the internet. You're getting login attempts from the following IPs in Amsterdam and China. Your server is most likely turning off because some kind soul has found you incorrectly exposed your server to the internet and is shutting it down so your data is not nuked or hacked. You need to stop exposing your server to the entire internet.

     Dec 21 07:24:44 TheBasement sshd[2668]: Accepted none for adm from 89.38.96.13 port 33456 ssh2
     Dec 21 07:24:46 TheBasement sshd[2686]: Accepted none for root from 218.92.0.208 port 17220 ssh2
     Dec 21 07:24:46 TheBasement sshd[2686]: Received disconnect from 218.92.0.208 port 17220:11:
     Dec 21 07:24:46 TheBasement sshd[2686]: Disconnected from user root 218.92.0.208 port 17220
     Dec 21 07:25:08 TheBasement sshd[3001]: Accepted none for root from 218.92.0.208 port 52922 ssh2
     Dec 21 07:25:08 TheBasement sshd[3001]: Received disconnect from 218.92.0.208 port 52922:11:
     Dec 21 07:25:08 TheBasement sshd[3001]: Disconnected from user root 218.92.0.208 port 52922
     Dec 21 07:25:09 TheBasement sshd[3019]: Invalid user named from 94.191.85.216 port 53034
     Dec 21 07:25:09 TheBasement sshd[3019]: error: Could not get shadow information for NOUSER
     Dec 21 07:25:09 TheBasement sshd[3019]: Failed password for invalid user named from 94.191.85.216 port 53034 ssh2
     Dec 21 07:25:10 TheBasement sshd[3019]: Received disconnect from 94.191.85.216 port 53034:11: Bye Bye [preauth]
     Dec 21 07:25:10 TheBasement sshd[3019]: Disconnected from invalid user named 94.191.85.216 port 53034 [preauth]
     Dec 21 07:25:21 TheBasement sshd[3237]: Failed password for root from 219.90.67.89 port 59574 ssh2
     Dec 21 07:25:21 TheBasement sshd[3237]: Received disconnect from 219.90.67.89 port 59574:11: Bye Bye [preauth]
     Dec 21 07:25:21 TheBasement sshd[3237]: Disconnected from authenticating user root 219.90.67.89 port 59574 [preauth]
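     A quick way to audit your own machine for this (stock unraid keeps the live log at /var/log/syslog):

         # show recent ssh logins and failures; any "Accepted" from an unknown IP is bad news
         grep -E 'sshd.*(Accepted|Failed|Invalid user)' /var/log/syslog | tail -50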
  21. One of the days you had an entire drive disappear from md slot 4. What happened there? On most of the days in your syslog it shows the jackett docker being auto-updated and restarted. Does that docker container actually get daily updates?
  22. Not for a long while, maybe a decade ago with a 2-for-1.
  23. All the free posts you can handle in the forums.