Everything posted by itimpi

  1. Strange that the 'e' model needs this whereas the 'i' model (which I have) does not. You would have thought they would be identical except for the connector being external rather than internal.
  2. The time to do the parity sync is not affected by the amount of data on the drives, but just by the drive sizes. The parity process is not aware of the ‘data’ as it works purely at the raw sector level. Having said that, it definitely sounds like it is going to take far too long. The syslog says you are getting continual resets on disk5 which could explain the speed. I would suggest you carefully check the power and SATA cabling to that drive.
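
    As a rough illustration (a minimal sketch, not the actual Unraid md driver), single parity can be thought of as a byte-wise XOR across the same sector of every data drive. The calculation touches every sector whether or not it holds file data, which is why only the drive sizes matter:

        from functools import reduce

        SECTOR = 4096  # bytes per sector (illustrative value)

        def parity_sector(data_sectors: list[bytes]) -> bytes:
            """XOR the corresponding sector from each data drive together."""
            return bytes(reduce(lambda a, b: a ^ b, column)
                         for column in zip(*data_sectors))

        # Two 'empty' sectors and one full one - every byte still gets processed.
        drives = [bytes(SECTOR), b"\xff" * SECTOR, bytes(SECTOR)]
        print(len(parity_sector(drives)))  # 4096
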
  3. I also notice that you have the scheduled parity check set to correcting. It is recommended that you have this set to non-correcting. The rationale is that you do not want to corrupt/invalidate parity if you have a disk acting up. A correcting check is only recommended when you think all disks are fine and you have had something like an unclean shutdown, so some parity errors are expected and need fixing.
  4. This means that you lost all settings (including container settings). It is advisable to make regular backups of the USB drive, either by clicking on it on the Main tab or by using the CA Backup plugin. The Previous Apps feature relies on templates that are stored on the USB drive; you wiped this, thus losing the templates, which means the containers need setting up again. Since you still have the appdata folders intact, the apps will find their working files if you use the same settings as you used previously.
  5. This has nothing to do with Unraid - it is internal to the docker container. It may have been more obvious to Unraid users because a docker image is used, and there was an issue in 6.8.3 and earlier that could cause significant write amplification when writing to that image if it is stored on an SSD. If the docker container files were stored directly in the file system (as the latest Unraid betas allow) then this is probably far less noticeable, particularly if using a HDD with a file system like XFS that has far less inherent write amplification than BTRFS does. This could not do any harm, although they may simply reply that the feature is being misused and the fix should come from the container maintainer.
  6. It is not obvious what the best way to handle this is, as technically it is an issue with specific docker containers rather than an Unraid issue. There may be good reasons why the maintainer of a particular container has set things up this way, so overriding it could be a bad idea. I wonder if the Fix Common Problems plugin could be enhanced to detect this and suggest the fix? Alternatively, make it a configuration option in the docker templates?
  7. You have to manually copy them and then change any configurations that reference them to point to the new location. However, if you do not currently have a cache drive it might be easier to set one up and simply use it as an application drive. One advantage is that the cache can participate in User Shares whereas Unassigned drives cannot, so if the relevant shares have the Use Cache=Prefer setting Unraid will automatically migrate the content of such shares to the cache when mover runs.
  8. How big are your VM disks? I notice that the ‘Domains’ share is set to Use Cache=Prefer which means that mover will try to move its contents to cache. It currently has content on disks 1, 3 and 4 so it sounds like there might be a lot to move? The same applies to the ‘appdata’ share which is currently not on the cache. A thought - would it be worth enhancing the ‘diagnostics’ command so that the information on each share gave an indication of the space used?
  9. Once you have a licence it is valid for any Unraid release.
  10. Are you running the 6.9.0 beta release, and have you taken the steps detailed in the Release Notes to significantly reduce the write load?
  11. That should have been fine as you do have a cache drive. I will see if I spot anything else.
  12. If you did not change any settings then the path for the ServerFiles is set to /mnt/cache-btrfs/appdata/fivem. If this path does not exist on your system (and it will not since you are running the 6.8.3 release) then this location will be in RAM. You need to edit it to something that is suitable for your system.
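
    As a rough way to see what that means in practice (a minimal sketch with a hypothetical helper, not part of Unraid or the container), you can check which mount actually backs a host path; on Unraid the root filesystem is RAM-backed, so anything written under a non-existent /mnt/cache-btrfs lands there and is lost at reboot:

        import os

        def backing_mount(path: str) -> str:
            """Walk up from 'path' to the mount point that would hold it."""
            p = os.path.abspath(path)
            while not os.path.isdir(p):    # climb past components that do not exist
                p = os.path.dirname(p)
            while not os.path.ismount(p):  # then climb to the containing mount
                p = os.path.dirname(p)
            return p

        print(backing_mount("/mnt/cache-btrfs/appdata/fivem"))
        # Without a 'cache-btrfs' pool this resolves to '/' (the RAM-backed rootfs)
        # rather than to a persistent location such as /mnt/cache or /mnt/user.
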
  13. After some testing I can confirm that you can get multiple time slots using the Custom scheduling options. When you switch on the custom option the field will be pre-filled with the options from the non-custom option. To get the slots you want, set the field for Resume to be 0 0,15 * * * * and the Pause field to be 0 7,17 * * * *. In other words, use comma-separated values within a field (no spaces) for multiple options for that field, and a space between the fields.
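
    For anyone unfamiliar with the syntax, here is a rough sketch of how a comma-separated field expands (illustrative only, not the plugin's own parser):

        def expand_field(field: str, low: int, high: int) -> set[int]:
            """Return the set of values a single scheduling field matches."""
            if field == "*":
                return set(range(low, high + 1))
            return {int(v) for v in field.split(",")}  # e.g. "0,15" -> {0, 15}

        print(sorted(expand_field("0,15", 0, 23)))  # hours the Resume entry fires: [0, 15]
        print(sorted(expand_field("7,17", 0, 23)))  # hours the Pause entry fires:  [7, 17]
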
  14. Not something I have ever tested, and it is not something I remember anyone asking for before. Thinking about it, it should be easy enough to do using the "Custom" scheduling options - it is something I will do some experiments with.
  15. It was exactly the common availability of disks of this sort of size (and larger), leading to this sort of elapsed time (and longer) for parity checks, that was the impetus for developing the plugin. No, other than that if you have to reboot for any reason the check would have to start again at the beginning. If your server runs 24x7 this is unlikely to be an issue. I am hoping that at some point Limetech will provide a capability at the API level to start a parity check at a defined offset. If this happens the plugin can easily be enhanced to exploit such a capability and restart from where it was after a reboot, thus removing this restriction.
  16. I would expect that the problem is simply disk contention. You might want to consider using the Parity Check Tuning plugin to restrict parity checks to running outside prime time.
  17. What do you have for the Disk Shares option under Settings->Global Share Settings? (If you had posted the system's diagnostics zip file I could have seen this for myself.) If you click on the Shares tab does it show the cache under the Disk Shares section? If so you should be able to set the security level from there. However, whether it should be showing up under the Disk Shares section by default becomes an interesting question.
  18. There is a known bug in the QEMU component in beta 29. Should now be OK in beta 30.
  19. Addresses in the 169.254.x.x range are normally associated with being unable to get a DHCP address for the interface in question. I would not normally expect to see such addresses at the router level, so I am not sure why you are seeing this.
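
    For reference, the standard library confirms that this is the self-assigned (link-local) block; the address below is illustrative, not taken from your setup:

        import ipaddress

        addr = ipaddress.ip_address("169.254.10.20")           # illustrative address
        print(addr.is_link_local)                               # True - APIPA / link-local
        print(addr in ipaddress.ip_network("169.254.0.0/16"))   # True - the whole range
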
  20. If you still have the RAM overclocked then be aware this can cause both stability issues and (potentially) data corruption.
  21. Do you have any drives showing as "unmountable" on the Main tab? If so, running a file system check/repair is the way forward. Providing your system's diagnostics zip file (obtained via Tools->Diagnostics) is a good idea so that anyone helping can see if they can spot anything there. Failing that, have you tried rebooting the server?
  22. It may be possible to set this up, but even ignoring the limitations imposed by parity I do not see why you think this would improve performance. You would still be only reading/writing to one of the vdisk files at any particular point in time - this type of arrangement does not give any sort of ‘striping’ which is the technique RAID uses to improve performance.
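
    To illustrate the difference (a minimal sketch with made-up numbers, not how any particular RAID implementation is coded): striping sends consecutive chunks of a single stream to alternating disks so that they work in parallel, whereas splitting a VM across separate vdisk files still only touches one disk at a time.

        STRIPE_SIZE = 64 * 1024  # 64 KiB chunk size (illustrative value)

        def stripe_targets(offset: int, length: int, num_disks: int = 2):
            """Yield the disk index each chunk of a striped sequential write lands on."""
            pos = offset
            while pos < offset + length:
                chunk = pos // STRIPE_SIZE
                yield chunk % num_disks
                pos = (chunk + 1) * STRIPE_SIZE

        # A 256 KiB sequential write alternates between the two disks:
        print(list(stripe_targets(0, 256 * 1024)))  # [0, 1, 0, 1]
        # With a single vdisk per disk the same write stays on one disk - no parallelism.
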
  23. Difficult to help without more information to go on. I would suggest that you post a screenshot of the settings for the problem Docker container so that we can see if anything looks wrong. It might also make sense to post your system’s diagnostics zip file (obtained via Tools->Diagnostics).
  24. Unless you have a reason not to do so, why not leave the option to run it automatically overnight active as well? That way you may get most of the moves happening in what would otherwise be idle time.
  25. That would work. It has the added advantage that the system should now boot slightly faster.