Everything posted by itimpi

  1. I think you have misunderstood how ZFS works. In ZFS pools the parity information is striped across the drives in the pool rather than being written to a dedicated parity drive.
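      As an illustration, a minimal sketch of creating a three-disk raidz1 pool at the command line (the pool name and device names are just examples, and on Unraid you would normally let the GUI create the pool). The single drive's worth of parity is spread across all three members rather than held on one dedicated drive:

          zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd
          zpool status tank   # all three disks appear as members of the raidz1 vdev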
  2. He said he started the boot off a .vmdk - not that he had the licence AND configuration file in different locations. It is quite easy to set the first stage of the boot process to run off something else, but that does not change the fact that the configuration folder still needs to be on the same flash drive as the licence file.
  3. That is not possible. The configuration information is always on the same flash drive as the licence file. The Unraid boot process is described here in the online documentation, and you will see that in the later stages of the boot process the licence file and configuration information are read from the same flash drive.
  4. You should post the full diagnostics zip file (which includes the syslog) so we can see more about your hardware and how you have things configured. It might also be a good idea to enable the syslog server to get a syslog that survives a reboot, which you can then post if you get another crash.
  5. That determines whether the UD device will be available as a share accessible over the network (LAN). Yes if the docker container is trying to access it as a network share; no if it is accessing it via a drive mapped at the docker level (which is internal to the Unraid server).
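      As an illustration of the second case, a hypothetical mapping that lets a container use the UD device directly at the docker level (the container, image and mount point names are just examples; UD devices are normally mounted under /mnt/disks):

          docker run -d --name=myapp \
            -v /mnt/disks/MyUDDrive:/data \
            myimage:latest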
  6. Have you actually gone as far as uninstalling the plugin?
  7. All this information is stored in the 'config' folder on the flash drive. If someone has physical access to the server then you should assume they can log in and see all the information that is NOT on the encrypted disks. For the same reason, you also do not want to expose the Unraid GUI to the internet except via a secure mechanism such as a VPN.
  8. If by 'break-in' you mean read the contents of the drives, then this is true only as long as the pass phrase has not already been input and the disks mounted - once they are mounted their contents are visible.
  9. According to the diagnostics you need to run a file system check on disk3.
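      For reference, the check is normally run from the disk's settings page in the GUI with the array started in maintenance mode. The command-line equivalent, assuming disk3 is XFS-formatted, is a read-only xfs_repair pass (the exact device name depends on your Unraid release):

          xfs_repair -n /dev/md3   # -n = check only, make no changes; newer releases may use /dev/md3p1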
  10. Yes, it is normal if the pool is a single drive, as there is then no redundancy. You probably want to use the Appdata Backup plugin to make periodic scheduled backups to the array.
  11. The format option is on the Main tab (near the button to start the array). Make sure that the drive(s) listed are the ones you expect.
  12. Was the array stopped as the text states is required to change these settings?
  13. OK - the following should work:
      • Disable the docker and VM services.
      • It looks like the ‘nas’ pool has not yet been formatted? If so I would recommend setting it to use the btrfs format, and then formatting it.
      • Under Settings->Global Share Settings set the option to allow Exclusive access. This is to later allow it to be used on suitable shares.
      • Set the ‘system’ and ‘appdata’ shares to have Primary storage set to the ‘nas’ pool with Secondary storage set to the array. If you intend to run VMs you might want to also do this for the ‘domains’ share.
      • Set the mover action for these shares to be array->cache.
      • Click the Move Now button on the Main tab to make mover start moving files for these shares to the ‘nas’ pool. This could take some time.
      • When mover completes you can unset the option to have Secondary storage if you want; as all files for these shares should now be on the ‘nas’ pool, you should then be able to set the ‘Exclusive’ option for these shares, which will maximise performance.
      • Enable the docker and VM services under Settings. Your dockers should now be running off the ‘nas’ pool and should perform faster than when on the array.
  14. Have you clicked the icon near the top right to unlock the position so they can be moved?
  15. You need to give a bit more information, such as:
      • Your system’s diagnostics zip file, so we can see more about your system setup.
      • The start of the path for your current containers (e.g. is it something like /mnt/user/appdata/plex?), as you only give the last part.
  16. The 6.12 options are no different to earlier releases - it is just that different terminology is now being used, which is intended to be clearer for new users and also to position things for new features in future releases. The data IS stuck on the pool with array->cache. That is normally done for performance reasons with docker containers and/or VMs, as pools have much higher write speeds than disks in the main array. It is also stuck on the pool if there is no Secondary storage option set, but then you can activate "Exclusive Mode" which gives even better performance when writing to User Shares as it by-passes the FUSE layer normally involved in handling User Shares. You can then get redundancy/protection by making a pool multi-drive. You can also use plugins such as Appdata Backup to get periodic backups of the shares concerned to a nominated location (typically somewhere on the main array). Many people use both approaches.
  17. I know there are reports of a speed issue with single ZFS-formatted drives in the main array, but I have not heard about it for single-drive pools. Have you checked that none of the HDDs concerned are SMR drives (particularly #3), as that could skew your results badly?
  18. Have you checked that your system does not have an option in the BIOS to do a Wake on LAN even from a complete shutdown, as many systems do? I agree with the time delay though, so I have my own systems set to power on at a given time.
  19. Not sure if that applies to Parity Swap/rebuild operations. You could install my Parity Check Tuning plugin and then in its settings enable the option to Pause array operations while mover is running. This solves the problem of mover starting for any reason while the array operation is in progress. Whether you then want to use any of the other features the plugin provides is up to you.
  20. Is there a reason for this? If not it would seem easier than having to do any manual checking of the server. I personally DO shut down my server overnight. This was one of the things that incentivised me to add a ‘restart from the position reached’ feature for array operations to the Parity Check Tuning plugin, as these can easily take over a day to complete with large drives in the array.
  21. Unfortunately not. The rebuild process requires the replacement drive to be at least as large as the drive it replaces. The rebuild process has no concept of file systems as it works at the raw sector level so it has no idea of how much data was on the drive.
  22. My experience is that as long as you are not writing large amounts of data then SMR drives perform fine. It is when you do the initial load that you overload the cache on the drive and get the severe slowdown. Having said that, there no longer seems to be a significant cost saving to buying SMR drives.
  23. I can think of a number of options for backup:
      • Use the My Servers plugin to create a web based backup.
      • Use the Appdata Backup plugin to create a backup at the interval you specify.
      • Use the User Scripts plugin to run a simple script you write to perform the backup at whatever frequency you desire.
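      As a sketch of the User Scripts route, a minimal script that copies the flash drive's 'config' folder to a dated folder on the array (the destination share is just an example - adjust to suit):

          #!/bin/bash
          # Hypothetical backup script for the User Scripts plugin
          DEST="/mnt/user/backups/flash/$(date +%Y-%m-%d)"
          mkdir -p "$DEST"
          rsync -a /boot/config/ "$DEST/"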
  24. The flash drive IS the boot partition as far as Unraid is concerned. Unraid has a cut-down version of Slackware as the underlying OS. The flash drive holds all your basic server configuration information in its 'config' folder. However this will NOT hold encryption keys, as that information is in principle entered via the GUI during the initial startup phase and then only held in RAM. If you want to automate starting encrypted arrays without involving the GUI then you will need to store the keys somewhere else and put in place a process to read them from that location as part of the boot process. In that case the keys are stored wherever you have specified (ideally off the Unraid server), so that is where you need to secure them.
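      As an illustration of that last point, one approach people use is to fetch the keyfile from another machine during boot by adding a line to the 'go' file on the flash drive (typically before the line that starts emhttpd). The URL and the /root/keyfile location here are assumptions you would need to adapt to your own setup:

          # in /boot/config/go - fetch the LUKS keyfile from another host before the array starts
          wget -q -O /root/keyfile https://backup.example.com/unraid.key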