guru69

Everything posted by guru69

  1. Script to recover corrupt Radarr or Sonarr databases

     Recently my radarr and sonarr logs have been showing:

     ...Corrupt (11), message = System.Data.SQLite.SQLiteException (0x800007EF): database disk image is malformed

     Hopefully the cause of the corruption is discovered soon.

     #!/bin/bash
     # Script to recover / recreate the radarr/sonarr database when the log shows:
     # ...Corrupt (11), message = System.Data.SQLite.SQLiteException (0x800007EF): database disk image is malformed
     #
     # Don't forget that both radarr and sonarr create zipped db backups in the /Backups/scheduled folder,
     # which can be restored if this recovery fails

     # Set your paths/names for radarr/sonarr
     dockername=radarr
     dockerpath=/mnt/user/appdata/radarr
     dockerdbname=radarr.db

     # Stop the radarr/sonarr docker
     docker stop $dockername > /dev/null
     echo "$dockername is stopped"

     # Perform sqlite3 .recover into a new database file
     sqlite3 $dockerpath/$dockerdbname .recover | sqlite3 $dockerpath/$dockername-recover.db

     # Rename any db -shm or -wal files
     mv $dockerpath/$dockerdbname-shm $dockerpath/$dockerdbname-shm-BAD
     mv $dockerpath/$dockerdbname-wal $dockerpath/$dockerdbname-wal-BAD

     # Rename the corrupt database
     mv $dockerpath/$dockerdbname $dockerpath/$dockerdbname-BAD

     # Move the recovered database into place
     mv $dockerpath/$dockername-recover.db $dockerpath/$dockerdbname

     # Fix ownership on the newly recovered database
     chown nobody:users $dockerpath/$dockerdbname

     # Start the radarr/sonarr docker
     docker start $dockername > /dev/null
     echo "$dockername is started"

     # Check the docker log and web GUI
     # Optionally run this to clean up:
     # rm $dockerpath/*-BAD
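     If you want extra peace of mind before starting the docker back up, you could also run a quick integrity check on the recovered database. A minimal sketch, using the same variables as the script above; it should print ok on a healthy database:

     # Optional: verify the recovered database before starting the docker
     sqlite3 $dockerpath/$dockerdbname "PRAGMA integrity_check;"
     # Expected output on a healthy database: ok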
  2. How do you update the Lidarr version in this docker? The log shows:

     [Info] Bootstrap: Starting Lidarr - /app/lidarr/bin/Lidarr.exe - Version 0.7.2.1878

     I'd like to try the newer update: 0.8.0.2041 (Jan 25 2021)
  3. Does the UD or UD+ plugin support MTP? I would like to mount a GoPro 8 over USB so I can dump my 4K videos onto my Unraid array. I did notice this in the SlackBuilds repo: https://slackbuilds.org/repository/14.2/multimedia/mtpfs/
  4. Yes it does. I am doing the same thing right now, and I'm guessing it writes the drive's preclear signature after the zero process; I aborted a preclear in the Post Read phase and it still appeared as precleared on the other Unraid server.
  5. I have a question about preclearing external USB drives; I apologize if this has already been discussed. I have been shucking WD MyBook drives from the external enclosure, applying the tape to the 3rd pin so they work, inserting them into the array, and running preclear 2x on each disk before use. So far I have had no issues. I'm wondering if I should be preclearing them as-is in the USB enclosure, so that if I get a bad one it's much easier to repack and return without any signs of having cracked the enclosure open. I assume this will be slower over USB3 than connecting directly to SATA. During my last storage upgrade I ran 10 instances of preclear simultaneously on the array without issue. Will I hit any bottleneck trying to do the same with the drives still in the USB enclosures? Are there any other reasons I should avoid preclearing untouched WD USB drives over USB?
  6. Jitsi?

     +1 I was about to start an install in a VM, but would much rather use a single Unraid docker
  7. There might be some way, but you are talking about VPN traffic inside your docker, and I'm not sure how you would measure that. Maybe someone else here has some ideas.
  8. If you are running the Preclear plugin, it will continue running on your Unraid server. Click the Preclear plugin icon to see the progress again (and the result when done)
  9. Add a new user script, replace qbittorrent with your VPN docker name in Unraid, something like this:

     #!/bin/bash
     arrayStarted=true
     noParity=true
     docker restart qbittorrent

     Once it's saved, click "Run Script" in User Scripts to make sure it restarts your docker. If it works, click the dropdown on the right and change "Schedule disabled" to "Scheduled Hourly". Click Apply at the bottom.
  10. You might be able to script that, but if your speeds keep dropping, it's probably better to solve the underlying problem than to keep restarting the docker. I'm having no issues with the linuxserver/qbittorrent docker. Are you using a VPN? If not, your provider might be the cause of any speed loss, stalling, etc. Another thing that can help is to use a private tracker.
  11. Not sure I understand why the traffic threshold is needed. Is the docker getting really hung (no qBittorrent web page loading), or do you mean a stalled download? To simply restart hourly, how about making a short user script with:

     docker restart {dockername}

     and cron scheduling it on the right side --> Custom, using: 0 * * * * This will trigger a restart every hour on the hour.
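     If the real issue is the WebUI hanging rather than speed, here is a minimal sketch of a user script that only restarts when the page stops loading (the container name and the 8080 WebUI port are placeholders from my setup; adjust both for yours):

     #!/bin/bash
     # Restart the container only if its web UI stops responding
     dockername=qbittorrent
     weburl=http://localhost:8080   # qBittorrent WebUI address, adjust to yours

     # curl exits non-zero if the page doesn't answer within 10 seconds
     if ! curl -s --max-time 10 -o /dev/null "$weburl"; then
         echo "$weburl not responding, restarting $dockername"
         docker restart "$dockername"
     fi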
  12. Temperature on this board wasn't available at all when I started, so I have kind of been driving blind until now 🙂 I replaced the Norco center fan plate with the 120mm one, dumped their fans, put Noctua fans everywhere, used liquid metal on the CPU, and hoped for the best. Here's what mine look like; I just assumed the CPU would be hotter than the mobo when picking, so maybe I have the sensors switched? I haven't had any thermal shutdown even when the room reached 90°F one day (the A/C died), and this server has been running around the clock for over a year, so I guess it's OK. I live in the deep south, always hot, so I have a 5,000 BTU window AC running on cool/max 24/7 about 2 feet from my server. I smoke one of these window AC units about every 2 years, but I buy a few of them when they're on sale.
  13. Having similar results here:

     k10temp - Tdie - 60.8C
     k10temp - Tctl - 87.8C

     The GitHub page for k10temp says: "There is one temperature measurement value, available as temp1_input in sysfs. It is measured in degrees Celsius with a resolution of 1/8th degree. Please note that it is defined as a relative value; to quote the AMD manual: Tctl is the processor temperature control value, used by the platform to control cooling systems. Tctl is a non-physical temperature on an arbitrary scale measured in degrees. It does _not_ represent an actual physical temperature like die or case temperature. Instead, it specifies the processor temperature relative to the point at which the system must supply the maximum cooling for the processor's specified maximum case temperature and maximum thermal power dissipation. The maximum value for Tctl is available in the file temp1_max. If the BIOS has enabled hardware temperature control, the threshold at which the processor will throttle itself to avoid damage is available in temp1_crit and temp1_crit_hyst."

     To see the difference between temp1_input and temp1_max I ran:

     find /sys -name temp1_input

     My sensors are here:

     /sys/devices/pci0000:00/0000:00:18.3/hwmon/hwmon0/temp1_input
     /sys/devices/pci0000:00/0000:00:19.3/hwmon/hwmon1/temp1_input

     cat /sys/devices/pci0000:00/0000:00:18.3/hwmon/hwmon0/temp1_input
     61625
     cat /sys/devices/pci0000:00/0000:00:19.3/hwmon/hwmon1/temp1_input
     62000
     cat /sys/devices/pci0000:00/0000:00:18.3/hwmon/hwmon0/temp1_max
     70000
     cat /sys/devices/pci0000:00/0000:00:19.3/hwmon/hwmon1/temp1_max
     70000

     Because it's a relative reading, I see that I'm at about 88% of temp max (Norco 4224 case with air cooling). I must not have enabled hardware temp control, because I have no temp1_crit and temp1_crit_hyst; next time I bring the server down I will look for it.
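     If you want to check that percentage without doing the math by hand, here is a minimal sketch (the hwmon path is the example from my board above; substitute your own from the find output):

     #!/bin/bash
     # Print temp1_input as a percentage of temp1_max (sysfs values are in millidegrees C)
     hwmon=/sys/devices/pci0000:00/0000:00:18.3/hwmon/hwmon0   # example path from above
     input=$(cat $hwmon/temp1_input)
     max=$(cat $hwmon/temp1_max)
     echo "$((input / 1000))C of $((max / 1000))C max ($((input * 100 / max))% of max)"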
  14. My Borg backup script for dockers and VMs...

     I have been using CA Backup for my Unraid backups for quite a while, but I discovered Borg Backup in Nerd Tools so I decided to try it. My goal was to make my backups smaller and faster, with less downtime. I prefer to have individual schedules and backups for each VM and docker on Unraid, so I thought I'd share my unified (VM/docker) Borg backup script. I have reduced my Plex downtime to 25 minutes on the first backup, and around 7 minutes on additional backups thanks to deduplication.

     The script assumes you are using the default locations for dockers and VMs, and the email log only works when the script is scheduled (the User Scripts log is written) and not when it is run manually in User Scripts. Maybe you guys know a better way to capture the output? I am not using an encrypted Borg repo for these backups, but I might make an additional version for backing up to an encrypted Borg repo. I have it set to retain 4 backups currently. Any additions or changes to improve this are greatly welcomed 🙂

     #!/bin/bash
     arrayStarted=true

     # Unraid display name of the docker/VM
     displayname=NS1

     # Unraid backup source data folder
     # (VM = /mnt/user/domains... Docker = /mnt/cache/appdata...)
     backupsource=/mnt/user/domains/NS1

     # The name of this Unraid User Script
     scriptname=backup-ns1-vm

     # Path to your Borg backup repo
     export BORG_REPO=/mnt/user/Backup/Borg-backups/unencrypted

     # Email address to receive the backup log
     [email protected]

     ###### Don't Edit Below Here ######

     # Build variables, clear the log
     today=$(date +%m-%d-%Y.%H:%M:%S)
     export backupname="${displayname}_${today}"
     export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes
     > /tmp/user.scripts/tmpScripts/$scriptname/log.txt

     # Determine whether the source is a Docker or a VM, set the backupsourcetype var
     backupsourcetype=
     [[ $backupsource = */mnt/cache/appdata* ]] && backupsourcetype=Docker
     [[ $backupsource = */mnt/user/domains* ]] && backupsourcetype=VM
     echo "Backup source type: $backupsourcetype"

     # Shut down the Docker/VM
     [[ $backupsourcetype = Docker ]] && docker stop $displayname
     [[ $backupsourcetype = VM ]] && virsh shutdown $displayname --mode acpi

     # Create the backup
     echo "Backing up $displayname $backupsourcetype folder..."
     borg create --stats $BORG_REPO::$backupname $backupsource
     sleep 5

     # Start the Docker/VM
     [[ $backupsourcetype = Docker ]] && docker start $displayname
     [[ $backupsourcetype = VM ]] && virsh start $displayname

     # Pruning: keep the last 4 backups for this name and prune older backups, with stats
     borg prune -v --list --keep-last 4 --prefix $displayname $BORG_REPO

     # Email the backup log
     echo "Subject: Borg: $displayname $backupsourcetype Backup Log" > /tmp/email.txt
     echo "From: Unraid Borg Backup" >> /tmp/email.txt
     cat /tmp/email.txt /tmp/user.scripts/tmpScripts/$scriptname/log.txt > /tmp/notify.txt
     sendmail $emailnotify < /tmp/notify.txt

     Always test your backups before you rely on them! Here are the commands I use for testing/restoring the backups.
     Stop your docker/VM first, and rename or move the folder if you are testing and not replacing the data. Borg restores into the current directory, so change to the / directory to have it restored to the original location, or run it elsewhere if not. Edit the first two commands with your docker/VM name and repo location:

     displayname=NS1
     export BORG_REPO=/mnt/user/Backup/Borg-backups/unencrypted
     export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes
     borg list $BORG_REPO

     Copy the name of the backup you want to test/restore from the borg list output to your clipboard; mine is NS1_02-07-2020.19:00:01, so I will set the restorebackupname variable to this:

     export restorebackupname=NS1_02-07-2020.19:00:01
     cd /
     borg extract --list $BORG_REPO::$restorebackupname
     restorebackupname=

     Start up your docker/VM and make sure it works.
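     Borg can also verify the repository itself; this is optional and not part of the script above, but I find it reassuring before relying on a repo:

     # Check repository and archive consistency (add --verify-data for a slower, deeper check)
     export BORG_REPO=/mnt/user/Backup/Borg-backups/unencrypted
     borg check $BORG_REPO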
  15. I had the same message from Fix Common Problems on a Dell Precision T5500 with dual Xeons. The issue was that "Intel Speedstep Technologies" was disabled in the BIOS. All good now!
  16. It doesn't seem to be causing any issues. I think Unraid is trying to spin down the SSDs, though it might need some change in Unraid so it excludes SSD drives from spin-down.
  17. I've recently updated to the Nvidia version of Unraid 6.7.0 and am also noticing the errors. I have 4 Samsung 960 Pro NVMe SSD cache drives (3 on the motherboard, one on a PCI-E adapter). They are BTRFS/RAID10 and I have TRIM enabled.

     Jun 25 10:04:32 Tank root: HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
     Jun 25 10:04:32 Tank root: /dev/nvme1n1:
     Jun 25 10:04:32 Tank root: setting standby to 0 (off)
     Jun 25 10:04:32 Tank emhttpd: shcmd (31778): exit status: 25
     Jun 25 10:04:32 Tank emhttpd: shcmd (31779): /usr/sbin/hdparm -S0 /dev/nvme2n1
     Jun 25 10:04:32 Tank root: HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
     Jun 25 10:04:32 Tank root: /dev/nvme2n1:
     Jun 25 10:04:32 Tank root: setting standby to 0 (off)
     Jun 25 10:04:32 Tank emhttpd: shcmd (31779): exit status: 25
     Jun 25 10:04:32 Tank emhttpd: shcmd (31780): /usr/sbin/hdparm -S0 /dev/nvme0n1
     Jun 25 10:04:32 Tank root: HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
     Jun 25 10:04:32 Tank root: /dev/nvme0n1:
     Jun 25 10:04:32 Tank root: setting standby to 0 (off)
     Jun 25 10:04:32 Tank emhttpd: shcmd (31780): exit status: 25
     Jun 25 10:04:32 Tank emhttpd: shcmd (31781): /usr/sbin/hdparm -S0 /dev/nvme3n1
     Jun 25 10:04:32 Tank root: HDIO_DRIVE_CMD(setidle) failed: Inappropriate ioctl for device
     Jun 25 10:04:32 Tank root: /dev/nvme3n1:
     Jun 25 10:04:32 Tank root: setting standby to 0 (off)
     Jun 25 10:04:32 Tank emhttpd: shcmd (31781): exit status: 25
  18. Excellent Quiks! This one has been so annoying. Thanks for sharing the fix!
  19. I'd like to request that this it87 driver be added to unRAID to support X399 motherboards' temperature sensors: https://github.com/groeck/it87 I am trying to get thermal readings on the Gigabyte Designare EX motherboard (a Ryzen Threadripper board). It seems to be working, judging from this: https://github.com/groeck/it87/issues/65 Thanks!
  20. Has anyone been able to get the temperature sensors working on the Gigabyte Designare EX? I found this; it looks like an updated driver is required: https://github.com/groeck/it87/issues/65
  21. Disregard about RAID10; I'm now having the same issue with RAID1. Looks like I got a bad Samsung 960 SSD: it works for about 3 days, then goes missing. What's odd is that it passes every test I can throw at it. I will replace this SSD and give RAID10 another go.
  22. I just completed my Ryzen build a few days ago and I agree with ars92: I did not change the C-states, nor did I do the zenstates fix. My system has now been up for a few days and I have been waiting to see if it hangs, but so far it's been stable. I did upgrade the BIOS to the latest right away and also upgraded Unraid to 6.5.3 at the same time.

     My new build:
     Norco 4224 case
     Gigabyte X399 Designare EX
     Ryzen Threadripper 1950X
     64 GB Kingston ECC RAM
     4 x 512GB Samsung 960 NVMe SSDs, RAID1 cache (RAID10 was not stable for me)
     18 x HGST 4TB NAS drives, dual parity
     Dual 10GbE bond (LACP)

     I still haven't been able to get the temperature sensors working on this board.
  23. If it were a connection issue, I'd assume that once it landed in Unassigned Devices it would produce errors there as well. It completed Preclear twice and passed SMART tests. It also produced the same odd behavior in a different socket once back in the cache pool. I haven't seen any issues on RAID1 yet (up for an hour now), but I'll keep you posted. If it's a connection issue, then I'm sure the errors will return (fingers crossed).
  24. Maybe I spoke too soon. After 24 hours with 4 SSDs on RAID10, I got a message about a cache drive missing. The logs showed lots of "read errors corrected..." and 2 of my dockers disappeared. I ran Preclear/Erase All Data on the newly unassigned SSD, all 100% success. I switched the SSD drives around to different sockets (new build, to rule out the socket). I attempted RAID10 once more with the same result (on a new socket, same SSD). I switched the array to RAID1 and all errors in the log have ceased, so I will restore appdata and see how it goes. Interestingly, dd reports the exact same speeds of 1.5GB/s to 1.6GB/s using RAID1; hopefully it's a bit more reliable.