Everything posted by itimpi

  1. Not quite sure what might be going wrong - for me it “just works”. Maybe post a screenshot of the docker settings you are using for the container. You should be able to get the server component working regardless of whether the Alexa skill is installed.
  2. There is no "right" answer to this. If you do not care how files get split across drives in the array, then let any directory be split, as that requires the least supervision. If that is not what you want, you need to work out what value suits your requirements. I personally use level 0 (manual control) as it happens to suit my work pattern and the way I organise my media, but many people use values in the low single figures depending on how they organise their files. A sketch of the idea follows.
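    A minimal sketch in Python (assumed semantics for illustration only, not Unraid's actual code): with split level N, directories in the top N levels of a share may be spread across data disks, while anything nested deeper stays on the disk that already holds its branch.

    ```python
    # Assumed model of the Split Level setting (illustration only).
    def must_stay_with_branch(file_relpath: str, split_level: int) -> bool:
        """True if a new file is pinned to the disk already holding its branch."""
        dir_depth = file_relpath.strip("/").count("/")  # depth of the containing directory
        return dir_depth > split_level                  # deeper than N => keep together

    # Example share at split level 1:
    print(must_stay_with_branch("Movies/film.mkv", 1))       # False: top level may split
    print(must_stay_with_branch("Movies/Film/film.mkv", 1))  # True: follows 'Movies/Film'
    ```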
  3. Pools can use any of the pseudo RAID levels supported by BTRFS. It is a BTRFS specific implementation of RAID that can be dynamically expanded, can change RAID levels and can use odd numbers of drives. The downside is that BTRFS seems to be more susceptible to file system level corruption than XFS (which is the default for the main array).
  4. This is true as far as writing to the array is concerned. If you have SSDs, these are normally used in a ‘pool’ external to the main array. Pools can be made redundant by having multiple drives in the pool, but you are still constrained by the slowest component. There is little point in having an SSD as an array drive with a HDD as parity, particularly with the way Unraid updates parity as described here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. With the drives you mention you could have a single-drive array using the HDD, and a single-drive pool using the SSD to host VMs/docker containers and run them with the full performance offered by the SSD. Since this configuration does not provide redundancy, you could make periodic backups (frequency chosen by you) of the VM/docker files from the SSD to the HDD in the array. Plugins are available that can automate this process; a minimal sketch is shown below.
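    A minimal sketch of the kind of periodic backup described above (hypothetical paths; in practice plugins from Community Applications automate this, and VMs should be stopped first so no files are held open):

    ```python
    # Hypothetical backup of VM/docker support files from an SSD pool to the array.
    import shutil
    import time
    from pathlib import Path

    SRC = Path("/mnt/cache/domains")           # assumed SSD pool location of VM images
    DEST = Path("/mnt/disk1/backups/domains")  # assumed HDD array backup target

    def backup():
        target = DEST / time.strftime("%Y%m%d-%H%M%S")   # timestamped snapshot folder
        shutil.copytree(SRC, target)                     # full copy; run with VMs stopped
        print(f"backed up {SRC} -> {target}")

    if __name__ == "__main__":
        backup()
    ```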
  5. I think you are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread so we can get an idea of what is happening internally in the server.
  6. @bkastner you have a very restrictive value for the Split Level setting. If there is contention between the share's various settings about which disk to select for a new file, Split Level is the one that wins. This can force a file to be written to a specific drive regardless of other settings such as allocation method or minimum free space. It is also worth pointing out that if a file already exists, it is always updated in place on the drive where it currently resides. The decision order is sketched below.
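    A minimal sketch of that decision order (assumed logic based on the description above, not Unraid source):

    ```python
    # Assumed disk-selection order for a NEW file: Split Level wins, then the
    # allocation method is applied subject to minimum free space.
    def pick_disk(free_space, allocation_order, branch_disk, split_pinned,
                  min_free, file_size):
        if split_pinned:                 # Split Level wins any contention
            return branch_disk
        for disk in allocation_order:    # disks ordered per allocation method
            if free_space[disk] - file_size >= min_free:
                return disk
        return None                      # no disk satisfies minimum free space
    ```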
  7. Unraid does not mind if you leave gaps in the assigned drives. Many people find it triggers their OCD, but as long as it does not worry you, you can leave the gaps in the assignments. When you start the array, only the disk slots that have drives assigned will be shown. Just in case it is relevant, it is worth pointing out that Unraid does not care how or where a drive is connected, as drives are identified by their serial number. There is therefore no reason that the assignments to disk slots have to match the physical layout if that is not convenient.
  8. I think you may have misunderstood how parity works. The requirement is that the parity drive must be at least as large as the largest data drive, but it can protect multiple drives of that size or smaller. The sketch below shows why.
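    To see why, here is a small Python demonstration of single (XOR) parity, which is conceptually how Unraid's parity1 works; the data below is made up for illustration:

    ```python
    # One parity drive protecting several data drives of mixed sizes.
    def build_parity(drives):
        size = max(len(d) for d in drives)      # parity must match the largest drive
        parity = bytearray(size)
        for drive in drives:
            for i, b in enumerate(drive):
                parity[i] ^= b                  # XOR every drive together
        return parity

    def rebuild(failed, drives, parity):
        rebuilt = bytearray(parity)             # start from parity...
        for idx, drive in enumerate(drives):
            if idx != failed:
                for i, b in enumerate(drive):
                    rebuilt[i] ^= b             # ...XOR out the surviving drives
        return bytes(rebuilt[:len(drives[failed])])

    data = [b"disk one", b"disk two is bigger", b"d3"]
    parity = build_parity(data)
    assert rebuild(1, data, parity) == data[1]  # any single drive can be recovered
    ```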
  9. That suggests you forgot to disable the VM service before running mover. Mover will not move open files, and the libvirt.img file would be kept open by the VM service if it is running.
  10. That setting means it will not be visible or accessible on the network. You need it set to Yes to see it on the network.
  11. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread.
  12. Have you set any of the shares to be visible on the network (i.e. the Export option) under the share settings? I believe they are not exported by default on current Unraid releases.
  13. It looks like your last SMART extended test failed which suggests the disk needs replacing. You could try running it again to confirm (but disable spindown on the drive before starting the test).
  14. How much RAM do you have on the server? You need a minimum of 4GB to successfully update via the GUI. If you have less than that you will need to use the Manual upgrade process as described in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  15. Chances are that the flash drive dropped offline for some reason.
  16. That setting means you want any new iso files to be written to the cache and later transferred to the array, the rationale being that if you have just downloaded an iso for a new VM you may temporarily want maximum performance. For files you want to keep on the cache the setting would be Prefer or Only. The help built into the GUI describes how the various values for the setting work and how they interact with the mover application; they are summarised below.
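    As a quick summary (my paraphrase of the GUI help, worth double-checking there):

    ```python
    # Where NEW files land, and what mover does, for each 'Use cache' value.
    BEHAVIOUR = {
        "No":     ("array", "mover ignores the share"),
        "Yes":    ("cache", "mover moves cache -> array"),
        "Prefer": ("cache", "mover moves array -> cache"),
        "Only":   ("cache", "mover ignores the share"),
    }
    for setting, (new_files, mover) in BEHAVIOUR.items():
        print(f"{setting:7} new files -> {new_files:5}; {mover}")
    ```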
  17. What have you tried, and what Apple device types are you using? If you have multiple Apple devices, have you set inFuse to sync its settings between them via iCloud? There should be no problem doing this, as I use inFuse from my iPad, iPhone and Apple TV, all accessing the media off my Unraid server.
  18. The New Config tool should not be that scary as long as you start with the option to keep all the current assignments, then return to the Main tab and simply unassign the disk you want to remove before starting the array to commit the change and rebuild parity based on it.
  19. (Re: Turbo write) There is no way to speed up the parity check simply by changing settings, as its speed is determined by the hardware capabilities of your system. Normally the limiting factor is the speed of the slowest drive in the array, although the way the drives are connected can also have a limiting effect. You also want to ensure that no other disk accesses are being made to the array at the same time, as in such a case both have their performance adversely affected. If you have large drives it may be worthwhile installing the Parity Check Tuning plugin to at least restrict the check to running outside prime time. A rough estimate of check time is sketched below.
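    As a rough back-of-envelope estimate (assumed figures, not measurements):

    ```python
    # Parity check duration is roughly the size of the largest drive divided
    # by the sustained speed of the slowest drive (real checks take longer,
    # as drives slow down towards their inner tracks).
    largest_drive_tb = 12         # assumed largest parity/data drive
    slowest_mb_per_s = 150        # assumed sustained speed of slowest drive

    hours = largest_drive_tb * 1_000_000 / slowest_mb_per_s / 3600
    print(f"approx. check time: {hours:.1f} hours")   # ~22.2 hours
    ```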
  20. If the files already exist on the array then mover will not move them. From your description it sounds like this might be the case.
  21. That would only work for parity2 if the disks were in the same order. It is not clear whether this was the case.
  22. I suspect that is the time that you have mover scheduled to run and that activity associated with that is causing the issues.
  23. I do not see any reason why those settings would not work; I use something similar when doing development/testing of the plugin. If you want me to look any further, enable the ‘Testing’ mode of logging in the plugin and let me have the resulting logs (via diagnostics). As was mentioned, there is no way that running a parity check should cause a system to crash unless you have some sort of underlying hardware issue.
  24. In the worst case, the approach detailed here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page could help.
  25. That will be because, when you make that sort of change, Unraid restarts Samba, so any transfer in progress at the time gets aborted. There has been discussion about changing this to just tell Samba to re-read its configuration file but, as far as I know, that has not happened and I do not know of any immediate plans to make such a change.