JonathanM

Moderators
  • Posts: 16,684
  • Days Won: 65

Everything posted by JonathanM

  1. You've got the wrong end of the stick. If you are connected through a VPN tunnel, nothing you change in your router will open an incoming port. That has to be done at the VPN endpoint, controlled by your VPN provider. Who is your VPN provider?
  2. The tone of your message leads me to believe that you think something is still not working. What do you expect to see when you go to that url?
  3. @jang430, You need to click on "Theme" on the bottom left of the forum and change to Unraid Light, then edit your post so it shows up in both themes. Right now it's completely unreadable for those using the light theme.
  4. No, what you are doing is giving the entire unpartitioned drive to the VM. You need to format the disk and mount it so it has a path in /mnt/disks, then you can assign whatever size vdisk you want and point it at /mnt/disks/mountpoint/VM/vdisk1.img. Assuming you use a format type for the device that supports sparse files, the actual space occupied by the vdisk will only be what the VM actually allocates; however, it will appear to be whatever size you tell it. You could tell it to put four 200GB vdisks in 250GB of space, and it will work until some combination of VMs tries to use more than 250GB total, causing one or more vdisk files to become corrupt.
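The sparse-file behavior described above can be sketched from the shell; a temporary directory stands in for an actual /mnt/disks mount point, and all paths here are illustrative:

```shell
# A minimal sketch of sparse vdisk allocation. A temp directory stands in for
# an XFS-formatted unassigned device mounted under /mnt/disks.
VMSTORE=$(mktemp -d)
mkdir -p "$VMSTORE/VM"

# Create a 200G vdisk that occupies almost no real space until written to.
truncate -s 200G "$VMSTORE/VM/vdisk1.img"

ls -lh "$VMSTORE/VM/vdisk1.img"   # apparent size: the full 200G
du  -h "$VMSTORE/VM/vdisk1.img"   # actual blocks used: ~0 so far
```

(`qemu-img create -f raw` produces an equivalent sparse file on a filesystem that supports them.)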
  5. Simplest method for you is probably going to be formatting the drive using the destructive mode of UD (Unassigned Devices) and using the space for as many vdisk files as you wish.
  6. Look at it this way: you would have been masking a latent issue that could have bitten you later. Now, you know.
  7. Remove or comment out the sed line in the go file and restart unraid.
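A sketch of that edit, assuming the usual /boot/config/go location on the flash drive; a stand-in file is used below so nothing real is modified:

```shell
# Stand-in for /boot/config/go so this can be run safely anywhere.
GO=$(mktemp)
printf '%s\n' '#!/bin/bash' \
              'sed -i "s/foo/bar/" /etc/some.conf' \
              '/usr/local/sbin/emhttp &' > "$GO"

cp "$GO" "$GO.bak"                  # keep a backup before editing
sed -i '/^[^#]*sed/ s/^/#/' "$GO"   # prefix not-yet-commented sed lines with #
grep '^#sed' "$GO"                  # confirm the line is now commented out
```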
  8. Running next branch implies that you are willing to put in the extra time to interact with the forum to help with issues that may come up. Either pointing them out and posting diagnostics, or suggesting changes that would improve things. If you don't want to visit the forums and keep current, don't run the next branch.
  9. It's definitely because of the theme difference. If you leave the text default and just type, it's fine. If you really MUST play with fonts, colors and sizes then I recommend you spend the time to switch back and forth between the themes to make sure it's visible for everyone. You can switch themes quickly at will, and preview how your post will appear. An unposted reply will seem to be gone when you switch, but as soon as you click reply it will come back.
  10. lose the /mnt/user in the remote path
  11. https://docs.fedoraproject.org/en-US/Fedora/12/html/Deployment_Guide/s1-samba-mounting.html
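Per the linked guide, the remote path is just //server/sharename; the server-side disk path (anything like /mnt/user/share) never belongs in it. An illustrative /etc/fstab entry, with the server name, share, mount point, and credentials file all placeholders:

```
# //server/share   mountpoint   type  options                                    dump pass
//tower/media      /mnt/media   cifs  credentials=/root/.smbcred,uid=99,gid=100  0    0
```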
  12. If you can figure out what changed and what needs to be updated to fix this, I'm sure limetech would be happy to consider modifying unraid to get it to work, as long as the fix doesn't affect bare metal users negatively. That said, you are pretty much on your own figuring out a solution. There is a small subcommunity here that runs unraid under ESXi, but there is no official support for doing so. Limetech's position on running unraid under a hypervisor is pretty much as follows: they don't actively discourage it, but it's up to you to get it running. Any issues with unraid must be reproducible while running bare metal in order to get troubleshooting assistance from them. I suggest you get together with the other people running unraid under ESXi and collaborate with them about this issue.
  13. Close. Once you do the new config you can keep all the drives, and it will build parity based on those drives. You can preserve all slots and add the unassigned drive before you start the array. Only the parity drive(s) will be overwritten at that point. New config tells unraid that parity is not valid and needs to be rebuilt from scratch. That's why it says the new config cannot be used to recover data from a failed drive.
  14. @opticon, your post is a little difficult to read, you may want to quit messing with font, color, or size.
  15. Depends on how you move it. If the array is already parity protected, adding the drive to the array would invalidate parity, so it will be cleared before adding. If you don't have a parity disk assigned, it will be added intact. If you wish to add it to the array and you already have valid parity, you will have to set a new config and rebuild parity after adding the disk. If this is not clear, ask specific questions before doing anything. You can easily erase data if you push the wrong button.
  16. Or, disable the Direct IO tunable in Global Share Settings. Either option should work.
  17. Very true. I only said mildly dangerous because it will work without issue almost always. It's the almost that gets me, and since I very rarely shut down, it makes sense to watch the process to ensure my system is still well behaved. Others have a higher risk tolerance; I prefer to keep an eye on things and not assume all is well until I've verified it to be so. That has not been advertised as well as it should have been. Occasionally people should time the array stop, and use that information to customize their shutdown timeout to suit. Can the array start and stop timing be readily harvested from log files? If so, I think it would be useful to keep a separate saved list similar to the parity check statistics, so you could look back in the history to find how long the array generally takes to start and stop. Also, a notation that the previous shutdown was unclean BECAUSE of an elapsed timer and not a hard crash would be useful, especially in the OP's case. If the timer kills the array, it should notify you.
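On the question of harvesting the timing: if stop and start events land in syslog, something like the following could pull them out. The log wording below is illustrative, not Unraid's actual messages:

```shell
# Sample syslog excerpt; real emhttpd messages may be worded differently.
cat > /tmp/sample.log <<'EOF'
Jan  1 10:00:00 Tower emhttpd: Stopping array...
Jan  1 10:02:30 Tower emhttpd: Array stopped.
EOF

# Convert each event's timestamp to epoch seconds and take the difference.
start=$(date -d "$(awk '/Stopping array/ {print $1, $2, $3}' /tmp/sample.log)" +%s)
stop=$(date -d "$(awk '/Array stopped/ {print $1, $2, $3}' /tmp/sample.log)" +%s)
echo "Array stop took $((stop - start)) seconds"   # prints: Array stop took 150 seconds
```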
  18. Yeah, don't do that. Stop the array first, then hit reboot. The forced shutdown currently in place is mildly dangerous in my opinion, and should only be used in an emergency or triggered by a power failure signal from the UPS.
  19. Try to replicate the state of the system before you hit reboot, and hit stop instead. Time how long it takes for the array to finish stopping.
  20. Awesome! Now we just need to convince @bonienl to embed this into the docker advanced view page. A size column, with a button at the bottom to calculate all sizes just like the share calculation page. The log size should be displayed beside the view log link as part of the page render, since that figure doesn't require the docker to be started.
  21. That suppresses the error, but leaves the blank size. It would be better to replace the null size output with text like this: activ-lazylibrarian Size: N/A, Container couldn't be started, Logs: 1.3kB
  22. Similar or duplicate functions in multiple containers, each mapped to the same port, ensures only one can be started at any given time, so they don't both try to modify the files while the other is running. It's actually a fairly useful function. Another usage: I can have a limited-permissions Krusader container with a very restrictive mapping set to auto start, then when I need more access I stop that container and start one that's mapped to / rw instead of /mnt/user ro.