
Squid

Community Developer
  • Posts: 28,769
  • Joined
  • Last visited
  • Days Won: 314

Everything posted by Squid

  1. huh? https://github.com/Squidly271/user.scripts
  2. Does your Windows box ever, out of the blue, make the sound that indicates a USB device was disconnected and then reconnected for absolutely no reason?
  3. Nov 28 03:50:23 valhalla kernel: BTRFS error (device sdb1): bdev /dev/sdb1 errs: wr 248, rd 778, flush 0, corrupt 0, gen 0

     This is being caused by:

     Nov 25 03:46:18 valhalla kernel: ata1.00: failed command: WRITE FPDMA QUEUED
     Nov 25 03:46:18 valhalla kernel: ata1.00: cmd 61/00:50:a0:61:bc/04:00:00:00:00/40 tag 10 ncq dma 524288 out
     Nov 25 03:46:18 valhalla kernel: res 41/40:48:08:4a:af/00:00:19:00:00/40 Emask 0x9 (media error)

     which is in turn related either to the cabling to the cache drive (loose?) or alternatively to this:

     197 Current_Pending_Sector -O--CK 100 100 000 - 64

     You can try running the extended SMART test against the cache drive to see if it clears this up (see the sketch below). On the other hand, since you only have a single cache drive in the pool, you're going to have the best results by reformatting it as XFS instead of BTRFS. BTRFS has a tendency to not be very forgiving in certain situations (or it is buggy), whereas XFS is rock solid and can handle any weirdness. Converting will require you to stop all services (Docker and virtual machines) from the Settings tab, temporarily move everything off of the cache drive onto the array (set all the shares to Use cache: Yes) and then run mover. After everything is finished, change the format of the drive to XFS, start the array, set all the applicable shares to Use cache: Prefer, and then run mover again. Afterwards, you should (hopefully) be able to re-enable the services and you're back in business.
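     A minimal sketch of starting that extended test from the console, assuming the cache drive is /dev/sdb as in the log above (substitute your actual device):

         # Start the extended (long) SMART self-test; it runs in the background.
         smartctl -t long /dev/sdb
         # Check progress, and once the test finishes see whether Current_Pending_Sector cleared.
         smartctl -a /dev/sdb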
  4. Check one of these links before posting any issues:
     GitHub Status (Apps tab application feed, plugins, icons, etc.): https://www.githubstatus.com/
     Docker Hub Status (Docker containers): https://status.docker.com/
     Amazon AWS (Apps tab backup appfeed, Unraid OS downloads, much of the rest of the world): https://status.aws.amazon.com/
     Also, if you are using piHole etc., make sure that it is not blocking any outgoing connections.
  5. Should be able to. But for Docker, make sure the service is disabled, then reboot, and then delete it.
  6. Did you check the boot order, i.e. that the stick is #1? (Most BIOSes work best when you select the stick under the hard drive boot order.) If you're booting UEFI, did you rename the EFI~ folder on the flash drive to EFI? Other than that, what exactly is the problem? "Can't boot" is a bit vague and tends to be used a lot around here to mean completely different things.
  7. Most "modern" attempts at doing this are done by humans at click farms in various countries. Captchas don't work. https://www.diggitmagazine.com/articles/look-behind-scenes-click-farms
  8. smb.conf is autogenerated and you can't hook into it (which is why smb-extra.conf exists). What you'd have to do is run a script (via User Scripts) that makes the applicable modifications to the file and restarts SMB after the array is started; see the sketch below.
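     A minimal sketch of such a script, set in User Scripts to run at first start of the array. The sed expression is a placeholder for whatever change you actually need, and the rc.samba path is an assumption based on Unraid's Slackware roots:

         #!/bin/bash
         # Hypothetical edit: swap in the smb.conf modification you actually need.
         sed -i 's/#example setting = no/example setting = yes/' /etc/samba/smb.conf
         # Restart Samba so the regenerated-then-modified config takes effect.
         /etc/rc.d/rc.samba restart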
  9. https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.10.0rc2-x86_64.zip Extract all the files onto the flash drive (formatted FAT32 and named UNRAID), then run MAKE_BOOTABLE as administrator.
  10. You can always update within the OS itself by going to Tools - Update OS and selecting "Next" as the update stream
  11. Part of your problem is that the cache drive is fully allocated even though it has free space available on it. A balance should fix that up (sketched below). A side note, though: if you're not planning on running a multi-device pool, you're better off using XFS instead of BTRFS for the cache drive, as it's more tolerant of abnormal situations.
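     If you want to run the balance from the console rather than the GUI, a minimal sketch, assuming the pool is mounted at /mnt/cache:

         # Rewrite data chunks that are under 75% full, releasing allocated-but-mostly-empty chunks.
         btrfs balance start -dusage=75 /mnt/cache
         # Watch the progress.
         btrfs balance status /mnt/cache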
  12. Mount the drive via Unassigned Devices and transfer the data (assuming the filesystem is supported by UD or UD+) once it's physically connected to the server. Directly adding the drive to the array (yes, it is possible) isn't recommended, because the partitioning may be different etc., and there are extra steps required to keep the OS from automatically clearing the drive.
  13. If it's the same share, then "move" does the rename. If it's a different share, then ideally "move" looks at the include / exclude / use cache settings, and if everything is identical then it renames.
  14. Then you know people will complain that the speed in doing this is slow if it's within the same share (or within 2 different shares that both have the same use cache settings).
  15. I don't see any real speed degradation to the cache drives. Note that if you're using vdisks, then naturally there's going to be a slowdown due to overhead etc. Additionally, most people don't set up their VMs to keep the vdisks "sparse", with the ultimate result that trim isn't going to work on the vdisk, which will cause a large hit on write speeds. But to directly answer the question: if there are concurrent reads/writes happening to wherever you currently store the vdisks, then of course there will be a hit, and separating them onto a separate cache pool (preferred over using an Unassigned Devices disk) will remove that. In a perfect world, passing through the entire disk to the VM will always give you the best results. A quick way to check sparseness is sketched below.
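     A quick console check of whether a vdisk is actually sparse; the path here is hypothetical, so substitute your own:

         # The first column is the blocks actually allocated on disk; compare it to the apparent file size.
         ls -lsh /mnt/cache/domains/Win10/vdisk1.img
         # qemu-img shows "virtual size" vs "disk size"; a sparse image has a disk size well below the virtual size.
         qemu-img info /mnt/cache/domains/Win10/vdisk1.img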
  16. Updated for 6.10. As usual, VERIFY EVERY PATH IT OFFERS UP.
  17. Pretty sure that those are coming from a container and are safe to ignore. At the very least, I guarantee that

      Nov 27 00:51:56 Tower nginx: 2021/11/27 00:51:56 [error] 11634#11634: *1314326 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "localhost"

      comes from NetData.
  18. Yeah, I was talking about prior to upgrading
  19. No. I'm sure that via the applicable docker commands (inspect?) you could determine the digest of the container you were running and then compare that to the digests of the various tags to determine what :latest equated to, but it's a pain; see the sketch below.
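     A minimal sketch of that comparison, using alpine as a stand-in image name:

         # Digest behind the :latest tag you have locally.
         docker image inspect --format '{{index .RepoDigests 0}}' alpine:latest
         # Pull a pinned tag and compare its digest against the one above.
         docker pull alpine:3.19
         docker image inspect --format '{{index .RepoDigests 0}}' alpine:3.19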
  20. The logs would have stated that an error of some sort happened during the backups. In an error situation, the old backup sets don't get deleted; this is why your earliest set (4/25) still exists. Pretty sure that a notification goes out in the error situation.
  21. Please post the entire diagnostics