JonathanM
Moderator · 16,275 posts · 65 days won
Everything posted by JonathanM

  1. Yep, I totally missed they only had one data drive. Preclear will do it, just be sure you are operating on the correct drive.
  2. Running Unraid as a VM is not officially supported. I moved your post to the correct area, hopefully someone familiar with running it in proxmox can help. If you can recreate the crashes running Unraid bare metal you can open a new topic in the general support forum.
  3. Parity doesn't have a file system, or recognizable files. It's theoretically possible to run forensic data recovery software on it and possibly see fragments of content, but at a glance there would be nothing to even attempt to recover. If you really feel the need to remove any traces, you can unassign it from the array and run a preclear on it, but unless you have reason to believe you are targeted for an investigation I wouldn't bother.
  4. Easiest method would be Dynamix File Manager. There is no automated process, you will need to manually move files from drive to drive.
  5. Preclear is always optional; it's an addon used to zero and test a drive before it's put into a parity protected array. If the drive being added to a parity protected array slot does NOT have a valid preclear signature, the normal operation of clearing the drive by writing all zeros is started, and the drive is not available for formatting and use until that clearing has completed, so adding the drive doesn't invalidate parity. Preclear was developed to let you do that step BEFORE adding the drive to the array, so it can be formatted and used immediately. What JorgeB was saying is that using the preclear utility on a solid state drive is not recommended for several reasons. If you are using SSDs in the parity protected array, clearing is still needed, but preclear isn't. Using SSDs in the parity protected array can reduce their performance: TRIM is not possible, and write speed will be limited by the parity drive.
  6. Make a copy of the old USB key to another machine. Do this again, but use the old USB key instead of a new one. Copy just the config folder out of your backup to the old USB key.
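As a sketch of that last step (the backup and mount locations here are hypothetical; substitute your own paths), copying only the config folder onto the re-imaged old key could look like:

```python
import shutil
from pathlib import Path

def restore_config(backup_dir: str, flash_mount: str) -> Path:
    """Copy only the config folder from a flash backup onto the old USB key.

    backup_dir:  folder holding the backup of the flash drive (hypothetical path)
    flash_mount: where the freshly re-created USB key is mounted (hypothetical path)
    """
    src = Path(backup_dir) / "config"
    dst = Path(flash_mount) / "config"
    if dst.exists():
        shutil.rmtree(dst)        # replace the stock config the USB creator wrote
    shutil.copytree(src, dst)     # key file, shares, and settings all live in config
    return dst
```

The same thing can of course be done by drag and drop in a file manager; the point is that only config moves, nothing else from the backup.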
  7. No, but it is necessary to test it once it's in the Unraid server. Good memory can have errors if the BIOS settings are wrong or out of spec, or the motherboard isn't fully compatible. Your Unraid rig, complete as you want to run it, must pass a memtest with zero errors for as long as you can stand, preferably 24+ hours. Since Unraid installs and runs in RAM, it's critical that RAM operation be flawless. A parity check is not a reliable test for good RAM; you could have a bad stick or some fault that doesn't affect the parity check but still causes other issues.
  8. NoMachine installed on both the VM and the machine you are using to view it.
  9. It's the percentage of the docker.img file that is in use. If it starts growing rapidly, that usually means a container is not configured properly. If it's been close to that figure for a while and only creeps up a little when you add new containers, it's probably normal. With the docker service stopped you can increase the size of the image file, but don't just increase it if you can't point to a reason it's getting larger; if something is configured incorrectly, making the image bigger only delays the inevitable troubleshooting. It has absolutely nothing to do with RAM.
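To put a rough number on "growing rapidly", here is a small sketch; the 1 GB/day threshold is my own guess, not an Unraid default:

```python
def growth_gb_per_day(samples):
    """Average docker.img growth in GB/day.

    samples: list of (unix_time_seconds, used_bytes), oldest first,
    e.g. recorded once a day from the dashboard figure."""
    (t0, b0), (t1, b1) = samples[0], samples[-1]
    days = (t1 - t0) / 86400
    return (b1 - b0) / 1e9 / days

def looks_runaway(samples, limit_gb_per_day=1.0):
    """A container writing downloads inside the image usually shows up as
    steady multi-GB/day growth; normal usage stays near zero."""
    return growth_gb_per_day(samples) > limit_gb_per_day
```

A couple of spot checks: 5 GB of growth in one day trips the flag; 0.5 GB spread over ten days does not.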
  10. That is a USA 110v version; if you look at the UPS he referenced, it is NOT 110v. Power meters must match the regional requirements. What I linked to IS available if you live in Australia; to see that you must select an AUS delivery address.
  11. No. You can power off and change sticks or whatever as soon as any errors show up. Don't attempt to run Unraid on a machine that has any memtest errors, you will likely corrupt your data. After you can complete many (preferably at least 12) hours of memtest with no errors shown then you can try to run Unraid. However, not all memory errors are caught by memtest, but it's a good start to weed out obvious problems.
  12. Depends. If the error count stays the same, you are probably fine. If it keeps increasing, I'd return the disk for a new one. Avoid a manufacturer warranty return if possible; the drives you get back from a manufacturer warranty swap are typically refurbished. Return it for credit and get a new one if possible. If the count stays at 11 and never increases, I'd keep the drive. Regardless, you always need to be alerted to changes in disk health; you shouldn't have to wait until you log back in. Make sure notifications are set up and working. You should be getting a daily "everything is OK" notification so you know the server can contact you with errors.
  13. That will work for some applications, but based on a brief search I don't think that will work with torrents. From what I read torrent clients embed the port they see in the traffic for returns, so if that port is remapped it won't connect. Hopefully there is some way around that.
  14. This looks like the correct option to me, mainly because if the port changes, it's likely that the network is "down" anyway until the application reconnects. While you are playing with all this, I have a current scenario that you may be able to take into consideration. I run a couple of downloaders through delugevpn, but when the vpn container restarts or is updated, the downloaders are unable to connect out until I restart them afterwards. I'm unsure whether that's a consequence of how docker networking works, or simply the change in IP not being detected properly. I know restarting and/or updating the master container is much different than it simply detecting a port change, but it's a scenario that would help automation if it were covered. With that in mind, do you have a method you are looking at to blindly reconnect when needed? Maybe the master vpn container could have variables defined with dependent container names to blindly restart when the connection changes? Can a container manipulate another container like that? Or would there need to be a "helper" script running on the host to monitor and restart things?
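On the "helper" script question: one possible shape is a small watcher on the host that follows `docker events` and restarts dependents whenever the VPN container (re)starts. The container names and the dependency map below are hypothetical, and the decision logic is kept pure so it can be tested apart from Docker itself:

```python
import json
import subprocess

# Hypothetical map: master VPN container -> containers routed through it
DEPENDENTS = {"delugevpn": ["sonarr", "radarr"]}

def restarts_needed(event_line, deps=DEPENDENTS):
    """Given one line of `docker events --format '{{json .}}'` output,
    return the dependent containers that should be restarted."""
    ev = json.loads(event_line)
    if ev.get("Type") == "container" and ev.get("Action") in ("start", "restart"):
        name = ev.get("Actor", {}).get("Attributes", {}).get("name", "")
        return deps.get(name, [])
    return []

def watch():
    """Follow docker events forever, bouncing dependents as needed."""
    proc = subprocess.Popen(
        ["docker", "events", "--format", "{{json .}}"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        for name in restarts_needed(line):
            subprocess.run(["docker", "restart", name], check=False)
```

Containers can only manipulate each other if you hand them the docker socket, which has its own security tradeoffs, so a host-side script like this is the more conservative option.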
  15. It's not that simple. To figure out the correct (for you) size for a UPS you need more information. Critical parts of the whole picture are:
      1. Idle power draw in watts (not critical to measure accurately, more just good info).
      2. Max continuous draw with all drives spinning and actively being accessed; a parity check with all typical VMs and containers running is a good time to measure.
      3. Instantaneous surge when transitioning from idle with no drives spinning to a parity check (hard to measure without really special equipment, but still important to estimate).
      4. Unattended shutdown time: how long the machine takes from fully active to powered off if you stop the array then power down, assuming everything is running like it normally does; no fair preemptively stopping VMs or containers.
      5. Typical outage length: do you have multiple 5 minute outages, or just blinks of 5 seconds or less? If the power is still out at the 1 minute mark, how likely is it to still be out hours later?
      For power draw measurements you really need a meter; something relatively cheap and simple should work fine. I can't tell what part of the world you are in from that amazon link, but here is something like what you need. link The max capacity of your PC power supply has little if anything to do with actual draw; you really must measure how much the machine pulls from the wall to figure out what you need. Keep in mind that if the UPS uses SLA batteries (almost all do), they really dislike being discharged below 50%; it's not good for their lifespan, so after you get your real load numbers you should try to stay in the first 50% of the runtime chart for the full shutdown time. The UPS you linked may be overkill, or just right, or too small. No way to tell without finding actual usage figures.
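As a worked example of combining the unattended shutdown time with the 50% battery rule (every number here is invented for illustration): if a full shutdown takes 4 minutes and the runtime chart says 10 minutes at your measured load, you are using 40% of the runtime, inside the comfort zone.

```python
def runtime_fraction_used(shutdown_minutes, chart_runtime_minutes):
    """Fraction of the UPS runtime (at your measured load) one shutdown consumes."""
    return shutdown_minutes / chart_runtime_minutes

def ups_big_enough(shutdown_minutes, chart_runtime_minutes, max_fraction=0.5):
    """Stay inside the first half of the runtime chart to spare SLA batteries."""
    return runtime_fraction_used(shutdown_minutes, chart_runtime_minutes) <= max_fraction
```

The chart runtime must come from the manufacturer's runtime-vs-load table at your measured wattage, not from the UPS's headline VA rating.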
  16. Thanks for your effort, I used screen for many years. I'm now a tmux convert, perhaps you could look into that? https://www.howtogeek.com/671422/how-to-use-tmux-on-linux-and-why-its-better-than-screen/
  17. This. Agree. Maybe your current effort to allow multiple containers access to the forwarded port will be usable with other containers besides this one? Unfortunately I'm not enough of a network guru to know if this is feasible, but how about redirecting the changeable vpn port forward to a static port inside your container network?
  18. That's because the absolute path you are using isn't correct. You are mapping /data to /mnt/user/Total_Library/Downloads/, so if you want the completed downloads to go there, the absolute path is /data, and for incomplete downloads the absolute path is /data/incomplete. A mapping is path substitution: the container side is what's visible inside the container, and the host side is where those files appear on the host.
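The substitution can be shown mechanically; this sketch just applies the mapping from the post above (longest container-side prefix wins when several mappings exist):

```python
def container_to_host(container_path, mappings):
    """Translate a path seen inside the container to where it lands on the host.

    mappings: {container_side: host_side} volume mappings."""
    for cside in sorted(mappings, key=len, reverse=True):
        base = cside.rstrip("/")
        if container_path == base or container_path.startswith(base + "/"):
            return mappings[cside].rstrip("/") + container_path[len(base):]
    raise ValueError(f"{container_path} is not inside any mapped volume")

# /data/incomplete inside the container lands under Downloads on the host
host = container_to_host("/data/incomplete",
                         {"/data": "/mnt/user/Total_Library/Downloads/"})
```

So a download written to /data/incomplete inside the container appears at /mnt/user/Total_Library/Downloads/incomplete on the server.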
  19. So in the eventual future, do you see independent VPN and application containers as the norm? Not to get too far ahead of you, but I'd love to see the possibility of multiple VPN endpoint containers, each able to support several of your application containers. For example, VPN1 with a PIA port forwarded connection and torrent and other file sharing containers pointed there; VPN2, PIA or whatever, with various downloader apps assigned there. Some downloader apps get upset with out of area connections, so it would be nice to easily set up two tunnels, one foreign, one local.
  20. Try changing the base folder to /data instead of /config
  21. Are you planning on shucking those and using them as internal drives? USB is unreliable for server connections, and if you use USB for the parity array it's likely to cause issues.
  22. Are you sure? I didn't hear that in the podcast you posted.
  23. I highly recommend NOT using water cooling in a server, especially since you are new to building. Liquid cooling is fine for a gaming machine that is only on when someone is sitting right there with it and can react to any issues. I'm not saying liquid cooling is unreliable, just that the consequences of pump failure or leaks are much more drastic than a normal fan failure. A regular heatsink with fan can operate with a failed fan for a long time because of the thermal mass of the heatsink, and natural air convection.
  24. When I restored my lsio backup to this container, all my credentials were transferred. See if this thread helps https://community.ui.com/questions/Migrate-From-UDM-Pro-to-UXG-Pro-and-Windows-Controller-Advice-Needed/4ec2f4fd-b7d2-4cf2-b756-bcef632b0c5a?page=2