
JonathanM

Moderators · Posts: 16,708 · Days Won: 65
Everything posted by JonathanM

  1. Not directly what you are asking for, but you should be able to adapt it. https://forums.unraid.net/topic/82198-support-binhex-urbackup/?do=findComment&comment=977896
  2. Have you tried setting up a VNC server hosted by the VM itself instead of the console VNC hosted by Unraid?
  3. While visually unappealing in the list, multiple containers are very resource friendly, since identical image layers are shared across all of them. Being able to manage them as separate entities while using virtually the same amount of space as a single container is the upside of granular containers.
  4. Why? I suppose it's not a bad idea to do an export occasionally, but that method of moving repositories is not recommended unless you are the only user. If I did that all my other users would kill me.
  5. If the array is running while you take a backup, the files reflect that, and restoring from that state will show an unclean shutdown and trigger a parity check. No other issues that I can think of. Any time you change settings (users, shares, added containers, etc.), keep in mind that the last backup won't reflect those changes. The one devastating scenario is changing disk assignments, such as reusing your parity drive in a data slot (as may happen when you upgrade parity size), without taking a new backup afterwards: restoring the stale backup would put that drive back in the parity slot and overwrite its data. I recommend destroying any flash backups made prior to disk slot changes, or if you really want to keep the backup, at least wipe out the super.dat file so it can't ruin your day.
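Stripping super.dat can be done directly on the backup archive without touching the flash drive. A minimal sketch, assuming the backup is a zip with the usual config/ layout; the file and directory names below are made up so the commands are runnable anywhere, and on a real backup you would only run the `zip -d` step.

```shell
# Build a stand-in flash backup (demo only; names are hypothetical).
mkdir -p /tmp/flashdemo/config
echo "disk assignments" > /tmp/flashdemo/config/super.dat
echo "other settings"   > /tmp/flashdemo/config/ident.cfg
(cd /tmp/flashdemo && zip -q -r /tmp/flash-backup.zip config)

# Delete super.dat (the disk-slot assignment file) from the archive so
# a later restore can never reassign a repurposed parity drive.
zip -q -d /tmp/flash-backup.zip config/super.dat

# Confirm super.dat is gone and the rest of the config survived.
unzip -l /tmp/flash-backup.zip
```

Restoring a backup cleaned this way brings back all the settings but forces you to reassign disks by hand, which is the safe failure mode.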
  6. Either way works; keep in mind the two parity disks are unrelated, so if you set up a 6TB as Parity2, it will stay Parity2 even after you remove Parity1. If your OCD would object to a Parity2 disk with no Parity1 showing, then you must replace the current parity. When was the last time you had a parity check showing exactly zero errors?
  7. It's /boot/config if you ssh in to the Unraid console.
  8. That's the price you pay for the flexibility of virtualization. Each VM requires host resources to manage what would normally be handled by the motherboard chipset.
  9. Are there any resources to ELI5 for what plotting and farming are in terms of what is being done with the computer resources, and how it compares to proof of work stuff? Something on the level of https://blog.programster.org/bitcoins-mathematical-problem
  10. It can be, but the process to get the files moved after the fact is more work than simply setting them up there in the first place. Not a huge deal, but it feels like we have to walk someone through the process at least once a week, and it gets old. It really is no more complex than disabling the docker and VM services, changing the settings, and running the mover, but people seem to have a hard time following explicit directions for some reason.
      The biggest reason to have an SSD cache is to keep all the random accesses for containers and VMs fast, and to allow the parity and data drives to spin down periodically. No cache means parity and at least one data drive are spun up whenever the server is running.
      Which computer? Unraid has to be hard wired over ethernet to something with an internet gateway; maybe describe your device topology a little more clearly?
      So, you have 5 2TB drives with data currently? How long has it been since they were health checked? I stand by my original suggestion to start with new drives, but it sounds like you could get away with a pair of 8TB (one parity, one data) plus your 2 spare 2TB, all formatted XFS, copying the data from the ReiserFS drives and leaving them for backup.
      Not touchy, just complex. Too many variables in technology and world markets at the moment. If you like to tinker and dig around for solutions, AMD seems to have better performance for the price, but you may be chasing gremlins. Intel seems safer and more stable but costs more.
      There is no such thing as being over prepared for VMs and containers; it's just a matter of capacity. It's "easy" to set up a single VM with hardware passthrough and a dozen or so docker containers; it's a whole 'nother thing to set up 3 simultaneous VM workstations, each with its own monitor, keyboard, and mouse, plus 30+ docker containers. If you are shooting for the latter, better find someone who has already done it and copy their build, because it's going to require a specific recipe of server grade board and video cards.
  11. Marvell based controllers are no longer recommended, so you may need to replace that if you need more SATA connections than the motherboard has.
      Can the router be programmed as a wifi extender that picks up your phone's wifi and translates it to ethernet?
      That would probably be better: keep the old disks intact as backups and load the files onto new disks. By my calculations you should be able to easily fit all your data on 2 8TB disks, so you could have all your current stuff with 1 parity and 2 data. However, if you don't want to spend $500 on new drives right now, you can just use your current drives.
  12. Formatting means creating a new, blank file system. You can't change formats in place; you must put the data elsewhere, change the format type, then put the data back.
  13. I would like any share without an existing config, as in any NOT created on the shares GUI but detected as a new root folder, to automatically not be exported. Create the config file as normal, but default to hidden private not exported. As a bonus, I would like orphan share config files to show up on the shares page if the root folders no longer exist, like container orphan images do on the advanced docker GUI.
  14. Do you have active cooling on the controller, or at least constant air movement? Server grade controllers assume they will be in rack mount or other flow through designs, they must not be in stagnant air.
  15. The only downside is that it still burns a license slot, so if you are bumping against a 6 drive Basic license you may as well use the slot for something meaningful. With no parity assigned, the drive is as fast as it can be; the only real limit is no trim for SSDs in array slots. What I'm getting at is I don't see why people are so hung up on the "requirement" that you have an array defined, when the definition of an array is pretty much any block storage device. Pretty much any implementation I can think of can use that storage for something.
  16. Try reducing the number of cores assigned to the VM so the emulator has more power available.
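For a libvirt-managed VM the core count lives in the vcpu element of the VM's XML (the XML view of the Edit VM page). An illustrative fragment only; the counts here are made-up example values:

```xml
<!-- Illustrative: VM reduced from 8 to 6 cores, leaving host cores
     free to service QEMU's emulation threads. -->
<vcpu placement='static'>6</vcpu>
```

The same change can be made from the core checkboxes on the VM edit form; the XML is just what that form writes.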
  17. No. Currently you must have 1 data disk defined. That shouldn't be an issue, you could technically assign a scrap 1GB USB to it if you want.
  18. That's not ideal, as I can envision a situation where you want as much as possible of a set of files on the pool restricted by the free space setting. I was hoping for a warning that alerts the user that they may not actually want that setting, but allow it if forced.
  19. I would like to see the cache : prefer selection display some sort of notice if making that choice on a share where the current amount of data in the share is larger than the amount of available space in the selected pool.
  20. Any way you could intercept that setting in your plugin and inform the user if the total of data in that share is larger than the designated pool size? I feel that would at least warn people that they may be in for a bad time if they proceed.
  21. May I ask why? It's just very strange to me that your first post to the forum is a name change request. Why not just create a new account?
  22. No, mount the share like you would if you were accessing Unraid from a normal PC on the network.
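In a Linux VM that means a normal network (cifs) mount of the share rather than a 9p passthrough. A sketch of an /etc/fstab entry, with "tower" and "data" as placeholder server and share names (needs cifs-utils installed):

```
# Placeholder names; adjust server, share, mount point, and credentials.
//tower/data  /mnt/data  cifs  credentials=/root/.smbcreds,vers=3.0,uid=1000  0  0
```

The credentials file keeps the username and password out of fstab; `uid=` maps the files to the VM user who needs to write them.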
  23. Can I ask a favor? I would love for you to benchmark the 9p mount when it's working, then do a cifs mount to the same share and benchmark that. In the past, the cifs mount was a fair amount faster, and less trouble.
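A quick-and-dirty way to run that comparison is to time a large sequential write and read on each mount with dd. A sketch, using /tmp as a stand-in mount point so it is runnable as-is; in the VM you would point MNT at the 9p mount, run it, then repoint it at the cifs mount of the same share. Page caching will flatter the read number unless the test file is much larger than RAM.

```shell
# Point MNT at the mount under test; /tmp is a runnable stand-in.
MNT=/tmp

# Sequential write; conv=fsync flushes before dd reports throughput.
dd if=/dev/zero of="$MNT/bench.bin" bs=1M count=64 conv=fsync 2>&1 | tail -n1

# Sequential read of the same file.
dd if="$MNT/bench.bin" of=/dev/null bs=1M 2>&1 | tail -n1

# Clean up the test file.
rm "$MNT/bench.bin"
```

dd's final summary line is the number to compare between the two mounts; fio would give better random-I/O numbers, but this is enough to spot a large sequential gap.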