Everything posted by JonathanM

  1. That's the issue then. Requests sent to your WAN IP on port 80 MUST be redirected to port 180 at the server's IP. You will need to talk to your ISP and get instructions on how to accomplish that, or look up the make / model of that router on Google and see if anyone has posted instructions. It's also possible that your ISP blocks port 80 on your WAN IP, which means you can't use that method to get certificates. BTW, this thread shouldn't be here in general support; there is already a support thread specifically for SWAG that addresses these and other issues. You can find the support thread for any container by clicking on the container in the GUI and selecting the support link.
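     A quick sanity check (sketch only; 203.0.113.10 is a placeholder, substitute your actual WAN IP) is to run this from outside your LAN, e.g. a phone on mobile data with a terminal app:

       # replace 203.0.113.10 with your WAN IP
       curl -v http://203.0.113.10/
       # "connection refused" or a timeout usually means the port 80 forward
       # isn't in place, or the ISP is blocking inbound 80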
  2. Parity doesn't have anything to do with file data, and it doesn't affect share behavior. If you change a share from prefer to no, all existing data files will stay on the existing drive, and new data files will be written to whichever parity array drive meets the allocation parameters. Reads happen from whichever drive currently contains the file. If you change the share back to prefer, files that aren't open will be moved to the designated pool when mover runs. Cache no disables the mover for the share, and mover can never move open files. What are you trying to accomplish? If you had stated an end goal I probably could have given direction on how to accomplish it.
  3. Yep, just make sure not to use /mnt/user paths in the cp command, only /mnt/cache and /mnt/newpoolname.
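     For example (sketch only; "newpoolname" is whatever you called the new pool):

       # straight copy preserving ownership, permissions and timestamps
       cp -a /mnt/cache/. /mnt/newpoolname/
       # or rsync if you want progress output and an easy way to resume
       rsync -avh /mnt/cache/ /mnt/newpoolname/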
  4. Depends on your experience level. The main thing to keep in mind is vdisks are created by default as sparse, so whichever method you land on needs to take that into consideration. Also, if the paths in your XML definition files are referencing /mnt/cache instead of /mnt/user, those paths will have to be changed as well.
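     For example, to keep a vdisk sparse during the copy (paths are illustrative, match them to your own domains share and pool names):

       # cp can re-create the holes in the image
       cp -a --sparse=always /mnt/cache/domains/win10/vdisk1.img /mnt/newpool/domains/win10/vdisk1.img
       # rsync works too with --sparse
       rsync -avh --sparse /mnt/cache/domains/ /mnt/newpool/domains/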
  5. If they were originally in a single pool together at some point, you will probably have to remove both pools, blkdiscard /dev/sd(whatever) on both SSDs, and then try again.
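     Something like this (example only; blkdiscard wipes the whole device, so confirm the drive letters with lsblk first):

       lsblk -o NAME,SIZE,MODEL,SERIAL
       blkdiscard /dev/sdX
       blkdiscard /dev/sdY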
  6. A couple of scenarios off the top of my head:
     1. You set up some subdomains that have since been removed, so those specific certs are no longer being renewed because they aren't needed.
     2. Your authentication method isn't working properly, so renewal is failing.
     3. Something else is preventing the overnight scheduled renewal check from completing.
     What does the container log show?
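     If the container is named "swag" (substitute your own container name), the relevant lines can be pulled out like this:

       # show recent certificate / renewal related log lines
       docker logs swag 2>&1 | grep -iE 'cert|renew|error' | tail -n 50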
  7. Run mc at a terminal. Midnight Commander is a two-pane file manager, GUI-ish. Just be sure you stay out of the /mnt/user tree, since you need to work with /mnt/cache and /mnt/diskX.
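     As a starting point (disk1 is just an example, point the panes wherever you need):

       # open Midnight Commander with one pane on the pool and one on a disk
       mc /mnt/cache /mnt/disk1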
  8. You can use the "open files" plugin to see which files are still in use when you try to stop the array.
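     If you'd rather check from the command line instead of the plugin, something like this gives a rough equivalent:

       # list every open file whose path is under /mnt
       lsof 2>/dev/null | grep /mnt/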
  9. Which shares are staying on the cache? Read the help descriptions for the cache usage setting on one of the share configuration pages; you probably have the wrong settings for one or more of your shares.
  10. Yes 😁 https://forums.unraid.net/topic/74460-external-sas-storage-101-query-not-guide/?do=findComment&comment=686248
  11. Unless you can figure out a way to get a SAS external connector in the laptop, and I don't know of any, there really is no good way to do multiple drives. USB doesn't work consistently with a parity array. Some laptops have an eSATA port that would be good for 1 more drive.
  12. Pretty sure either model can be molex only; the PCIe slot is only used for power.
  13. Remove both devices from the pool, start the array, stop the array, assign both devices back to the pool and see if that works.
  14. I recommend moving the power for the server to another circuit so you can leave it running and communicating with the UPS. For a dummy load, a portable heater on low or medium, or a hair dryer on low heat, works well. WAG from the screenshot: it looks like you want somewhere around 500W to simulate max possible draw, assuming the screenshot was showing typical load. If you set it up this way, you can monitor the server's reaction to the power loss and make sure it shuts down properly without further input from you. BTW, testing like this is recommended for any UPS setup, especially if it's new. That way you have confidence that A. it works at all, and B. the server shuts down properly before the UPS runs out of steam and quits powering the dummy load.
  15. Rebuilding parity is the quickest way. You can write all zeroes to the entire drive so parity is still valid when you remove it, but that takes much longer than simply rebuilding parity. Unraid parity has no concept of files or filesystems, it's calculated across the entire capacity of the drive whether there are any files or not. The used space showing on the drive is the filesystem, which is the metadata set up in preparation for the organization and retrieval of files. All that information is part of the parity calculation, as is all the free space. It's all included in parity.
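     For reference, the zeroing approach is normally done against the array's md device for that slot while the array is started, so parity is updated as the zeroes are written (sketch only, destructive, and the md device naming varies between Unraid releases; most people use the clear-drive script posted on the forums rather than typing this by hand):

       # DANGER: erases the data area of disk 3. Run only against the md
       # device with the array started so parity tracks every write.
       dd if=/dev/zero of=/dev/md3 bs=1M status=progress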
  16. Post the docker run commands for both containers.
  17. Tools, Diagnostics, download the zip file. That should contain clues to the paths where the files are expected to be. Are the VM and Docker tabs currently available in the GUI?
  18. You experienced first hand the results of mixing disk shares (/mnt/diskX, /mnt/cache, /mnt/<poolname>) and user shares (everything under /mnt/user). /mnt/user normally merges all the identically named root folders on all the disks into those named user shares. Since they are the same files but appear at different paths, Linux doesn't understand what you want to do when you copy between them, so it ends up overwriting the files with zero byte files. So, to move files from disk to disk, use the full disk paths. To move files between user shares, only use /mnt/user/* paths. Don't mix the two. You can try doing a recovery; hopefully you will be able to get your deleted files back. This is the reason disk shares are disabled by default, with a warning in the help text for disk shares. Using mc allows you to access things not normally exposed.
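     As an example of the safe patterns (disk and share names are illustrative):

       # disk to disk: both paths are disk shares
       rsync -avh --remove-source-files /mnt/disk1/Media/ /mnt/disk2/Media/
       # share to share: both paths are user shares
       mv /mnt/user/downloads/movie.mkv /mnt/user/Media/
       # never mix them, e.g. /mnt/disk1/Media -> /mnt/user/Media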
  19. Are there any other data drives? Or 2 drives, 1 data 1 parity only?
  20. Check your docker containers for any reference to media or Media, odds are that you accidentally typed the wrong case somewhere, and as soon as you restart that container your problem will come back.
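     One way to hunt down the stray reference (the template path is where Unraid keeps user container templates; adjust if yours differs):

       # container templates that reference the share, any case
       grep -ril '/mnt/user/media' /boot/config/plugins/dockerMan/templates-user/
       # or check the bind mounts of running containers
       docker ps -q | xargs docker inspect --format '{{.Name}} {{range .Mounts}}{{.Source}} {{end}}' | grep -i media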
  21. media is not the same as Media
  22. How are you connecting the keystone to the server?
  23. Pools can participate in user shares even if there is only one device in each pool. Since multiple pools are a new feature, I don't remember the limit on the number of pools that can be defined, but you could possibly just keep adding single-device XFS pools. You would have to manage space allocation manually, though; there is currently no mechanism to send writes to multiple pools based on free space like there is for the parity array data disks. All pool disks containing the share's root folder will be read, but new files will be written only to the share's named pool, or overflow to the parity array if the named pool drops below the specified free space and the share is configured for cache prefer. A user share is simply a root folder on any array or pool disk, so you can span whichever volumes you choose.