
JonathanM

Moderators
  • Posts: 16,711
  • Joined
  • Last visited
  • Days Won: 65

Everything posted by JonathanM

  1. https://wiki.unraid.net/Manual/Storage_Management#Adding_disks Read through that section and see if that answers your questions.
  2. Assign the disks as they were, assuming you haven't changed any disks since.
  3. Just don't mix /mnt/user paths with /mnt/diskX or /mnt/<poolname> paths when manipulating files manually. Typically I only use /mnt/user paths to consume media or files, and disk paths to manage stuff in the background, but less technical users are encouraged NOT to directly access the disks; just use the user shares and the accompanying settings to let Unraid manage the initial and final file locations. Much work has gone into removing the need to directly use disk paths for almost any scenario.

     One of the ways I manually manipulate the system is to use the stock /mnt/user/domains location with cache : only settings, which means that when I use the wizard to create a VM, it's initially created on the assigned cache pool. However, for VMs I don't use very often, I can manually move the VM subfolder from /mnt/cache/domains/NewVM to /mnt/disk1/domains/NewVM and everything still works without changing any VM path definitions. All my VMs show up in /mnt/user/domains, and I control which ones take up fast expensive SSD space and which are relegated to slow cheap array space.

     cache : yes and cache : prefer both tell the mover to do something; cache : no and cache : only tell the mover to ignore the share.
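     The VM relocation described above can be sketched in shell. Temp directories stand in for /mnt/cache and /mnt/disk1 so the sketch is runnable anywhere; the VM name NewVM comes from the post, and on a real server the VM should be shut down before moving it:

     ```shell
     # Stand-ins for /mnt/cache and /mnt/disk1 -- on a real Unraid server
     # you would use the actual disk paths, never /mnt/user, for the move.
     cache=$(mktemp -d)
     disk1=$(mktemp -d)
     mkdir -p "$cache/domains/NewVM" "$disk1/domains"
     touch "$cache/domains/NewVM/vdisk1.img"

     # The actual relocation -- on Unraid:
     #   mv /mnt/cache/domains/NewVM /mnt/disk1/domains/
     mv "$cache/domains/NewVM" "$disk1/domains/"
     ```

     Because both locations sit under the same root folder name (domains), the merged /mnt/user/domains view is unchanged and the VM's path definitions keep working.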
  4. That's the issue then. Requests sent to your WAN IP on port 80 MUST be redirected to port 180 at the server's IP. You will need to talk to your ISP and get instructions on how to accomplish that, or look up the make / model of that router on Google and see if anyone posted instructions. It's also possible that your ISP blocks port 80 on your WAN IP, which means you can't use that method to get certificates. BTW, this thread should NOT be here in general support, there is already a support thread specifically for SWAG that addresses these and other issues. You can find the support thread for containers by clicking on the container in the GUI and selecting the support link.
  5. Parity doesn't have anything to do with file data, and it doesn't affect share behavior. If you change a share from prefer to no, all existing data files will stay on the existing drive, and new data files will be written to the parity array, to whichever drive meets parameters. Reads happen from whichever drive currently contains the file. If you change the share back to prefer, files that aren't open will be moved to the designated pool when mover runs. Cache no disables the mover for the share, and mover can never move open files. What are you trying to accomplish? If you had stated an end goal I probably could have given direction on how to accomplish it.
  6. Yep, just make sure not to use /mnt/user paths in the cp command; only /mnt/cache and /mnt/newpoolname
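     A minimal sketch of that copy, with temp directories standing in for the two pools (the appdata folder and file name here are made up for illustration):

     ```shell
     # Stand-ins for /mnt/cache and /mnt/newpoolname -- on the server,
     # use the real pool paths and never /mnt/user for this copy.
     oldpool=$(mktemp -d)
     newpool=$(mktemp -d)
     mkdir -p "$oldpool/appdata"
     echo "some config" > "$oldpool/appdata/config.txt"

     # -a preserves ownership, permissions, and timestamps -- on Unraid:
     #   cp -a /mnt/cache/appdata /mnt/newpoolname/
     cp -a "$oldpool/appdata" "$newpool/"
     ```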
  7. Depends on your experience level. The main thing to keep in mind is vdisks are created by default as sparse, so whichever method you land on needs to take that into consideration. Also, if the paths in your XML definition files are referencing /mnt/cache instead of /mnt/user, those paths will have to be changed as well.
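     Since vdisks are sparse by default, a naive byte-for-byte copy can balloon them to their full apparent size. One way to preserve sparseness is cp's --sparse option, sketched here with a temp file standing in for a vdisk:

     ```shell
     src=$(mktemp -d)
     dst=$(mktemp -d)
     # A 1 GiB sparse file, like a freshly created vdisk:
     # large apparent size, almost nothing actually allocated.
     truncate -s 1G "$src/vdisk1.img"

     # --sparse=always detects runs of zeros and keeps them as holes in the copy.
     cp --sparse=always "$src/vdisk1.img" "$dst/vdisk1.img"

     stat -c 'apparent: %s bytes, allocated: %b blocks' "$dst/vdisk1.img"
     ```

     rsync --sparse does the same job if the copy has to cross the network.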
  8. If they were originally in a single pool together at some point you will probably have to remove both pools, blkdiscard /dev/sd(whatever) on both SSD's, and then try again.
  9. Couple scenarios off the top of my head:
     1. You set up some subdomains that have since been removed, so those specific certs are no longer being renewed because they aren't needed.
     2. Your authentication method isn't working properly, so renewal is failing.
     3. Something else is preventing the overnight scheduled renewal check from completing.
     What does the container log show?
  10. mc at a terminal. Midnight Commander is a two-pane file manager, GUI-ish. Just be sure you stay out of the /mnt/user tree, since you need to work with /mnt/cache and /mnt/diskX.
  11. You can use the "open files" plugin to see which files are still in use when you try to stop the array.
  12. Which shares are staying on the cache? Read the help descriptions for the cache usage on one of the shares configuration page, you probably have the wrong settings for one or more of your shares.
  13. Yes 😁 https://forums.unraid.net/topic/74460-external-sas-storage-101-query-not-guide/?do=findComment&comment=686248
  14. Unless you can figure out a way to get a SAS external connector in the laptop, and I don't know of any, there really is no good way to do multiple drives. USB doesn't work consistently with a parity array. Some laptops have an eSATA port that would be good for 1 more drive.
  15. Pretty sure either model can be molex only, the PCIe is only used for power.
  16. Remove both devices from the pool, start the array, stop the array, assign both devices back to the pool and see if that works.
  17. I recommend moving the power for the server to another circuit so you can leave it running and communicating with the UPS. For a dummy load, a portable heater on low or medium, or a hair dryer on low heat works well. WAG from the screenshot, looks like you want somewhere around 500W to simulate max possible draw, assuming the screenshot was showing typical load. If you set it up this way, you can monitor the server's reaction to the power loss, and make sure it shuts down properly without further input from you. BTW, testing like this is recommended for any UPS setup, especially if it's new. That way you have confidence that A) it works at all, and B) the server shuts down properly before the UPS runs out of steam and quits powering the dummy load.
  18. Rebuilding parity is the quickest way. You can write all zeroes to the entire drive so parity is still valid when you remove it, but that takes much longer than simply rebuilding parity. Unraid parity has no concept of files or filesystems, it's calculated across the entire capacity of the drive whether there are any files or not. The used space showing on the drive is the filesystem, which is the metadata set up in preparation for the organization and retrieval of files. All that information is part of the parity calculation, as is all the free space. It's all included in parity.
  19. Post the docker run commands for both containers.
  20. Tools, diagnostics, download the zip file. That should contain clues to the paths where the files are expected to be. Currently are the VM and Docker tabs available in the GUI?
  21. You experienced first hand the results of mixing disk shares (/mnt/diskX, /mnt/cache, /mnt/<poolname>) and user shares (everything under /mnt/user). /mnt/user normally merges all the identically named root folders on all the disks into those named user shares. Since they are the same files but appear under different paths, linux doesn't understand what you want to do when you copy between them, so it ends up overwriting the files with zero byte files. So, to move files from disk to disk, use the full disk paths. To move files between user shares, only use /mnt/user/* paths. Don't mix the two. You can try doing a recovery; hopefully you will be able to get your deleted files back. This is the reason disk shares are disabled by default, with a warning in the help text for disk shares. Using mc allows you to access things not normally exposed.
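     The safe disk-to-disk pattern above can be sketched with temp directories standing in for two array disks (the Movies folder and file name are hypothetical):

     ```shell
     # Stand-ins for /mnt/disk1 and /mnt/disk2 -- never substitute /mnt/user
     # on either side of this command, or you risk the zero-byte overwrite.
     disk1=$(mktemp -d)
     disk2=$(mktemp -d)
     mkdir -p "$disk1/Movies" "$disk2/Movies"
     echo "video data" > "$disk1/Movies/film.mkv"

     # Both sides are disk paths, so this is safe -- on Unraid:
     #   mv /mnt/disk1/Movies/film.mkv /mnt/disk2/Movies/
     mv "$disk1/Movies/film.mkv" "$disk2/Movies/"
     ```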
  22. Are there any other data drives? Or 2 drives, 1 data 1 parity only?
  23. Check your docker containers for any reference to media or Media, odds are that you accidentally typed the wrong case somewhere, and as soon as you restart that container your problem will come back.
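     One way to hunt down the case mismatch is to grep your container templates for both spellings. A sketch with a temp directory standing in for Unraid's template folder (/boot/config/plugins/dockerMan/templates-user); the template names and contents here are made up:

     ```shell
     tmpl=$(mktemp -d)   # stand-in for /boot/config/plugins/dockerMan/templates-user
     printf '<HostDir>/mnt/user/Media</HostDir>\n' > "$tmpl/my-plex.xml"
     printf '<HostDir>/mnt/user/media</HostDir>\n' > "$tmpl/my-sonarr.xml"

     # List every template using the capitalized spelling; grep is case
     # sensitive by default, which is exactly what we want here.
     grep -l '/mnt/user/Media' "$tmpl"/*.xml
     ```

     Whichever files that prints are the containers to fix before restarting them.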
  24. media is not the same as Media