Everything posted by itimpi

  1. I would not expect the machine that is behind that router to be able to accept incoming connections (unless that router happens by chance to be set up so that incoming connections can be specified by the server using DLNA).
  2. It would have been unmounting disk drives (not shares), which is a standard step in stopping the array.
  3. I think you are getting confused between the default format for array drives and that for cache drives/pools. For cache it always defaults to btrfs (and this is the only option if there is more than one drive in a pool) and you have to manually change it to get something else.
  4. 2) With the rate at which hardware changes it would be impossible to keep such a list up to date. This is one of the main reasons why Unraid has a trial license (valid for 30 days and extendible for an additional 30 days), so that you can check it runs on the hardware you want to use. Note that if you have very new hardware then you probably need to use the 6.9.0 beta release, as it has a much newer Linux kernel and newer drivers than the current stable 6.8.3 release.
  5. It works fine on my VMs - not sure why you are having a problem.
  6. Yes - as long as that is the correct serial number for the disk you want to pass through. FYI: the /dev/sdX names do not change often, as they are normally allocated in the order Linux sees the drives, which is timing dependent, so they are usually consistent but not guaranteed. However, if you accidentally pass through the wrong disk because the sdX value changed, it can be catastrophic, corrupting the contents of that disk - not something one wants to take even a small chance of happening.
  7. This entry will typically be of the form source dev=/dev/disk/by-id/xxxxx where 'xxxxx' is correct for the disk in question.
  8. If you look under /dev/disk at the Linux level you will see several folders giving ways that drives are identified in an invariant manner (a small sketch after this list shows one way to map an sdX name to these invariant paths).
  9. Not quite true - this user DOES exist at the Linux level (look in /etc/passwd to see users at the Linux level) - but not with the ability to log in to the system (see the /etc/passwd sketch after this list).
  10. You will not be able to add the 8TB drive as a parity drive. It is always possible that the disk in question has not really failed - I would wait for feedback on that as that would simplify things. If the drive really has failed then in such a scenario one should use the Parity Swap procedure which is covered here in the online documentation.
  11. The sdX designations are always subject to change as they are assigned dynamically by Linux. Unraid does not rely on the device name to identify drives, but instead uses the disk serial number.
  12. It is really up to you as to which you will find most convenient. I normally use the option to retain all settings and then when I return to the Main tab make any appropriate adjustments before starting the array (as that typically involves the minimum amount of effort for the sort of change I am considering). The end result should be the same.
  13. Why do you even want to use the sdX type name, as that is subject to change any time you reboot (the sdX names are assigned dynamically by Linux during the boot process)? Is it possible for your use case to use one of the names that show up under the /dev/disk hierarchy instead, as these should be invariant?
  14. It looks like you currently have 8 drives set up in the array but only 6 physical drives + 2 parity drives. The missing array drives are being emulated using the combination of the other data drives plus the 2 parity drives. A screenshot of the Main tab would confirm this.
  15. I think that Apple mandates that all browsers on iOS use their rendering engine, so this is probably an issue (bug?) at that level.
  16. Not sure I agree here. Changing slots still does not necessarily mean you want the shares to be set up any differently. Perhaps the best way forward would be to add a new check box to the New Config dialog as to whether share settings should be left unchanged or reset to defaults? At least that would give visibility to the fact that share settings might be affected. Making the default the current behaviour would mean users only get the share settings retained if they explicitly ask for them. Do you think this should explicitly be raised as a feature request?
  17. A reboot will fix that as Unraid unpacks itself into RAM from the archives on the flash every time it boots.
  18. I suspect that technically this is not a bug in that New Config is working as designed. Having said that, I agree it might make a lot of sense to leave all share settings unchanged when using New Config - especially if using the option to retain current disk assignments. I personally would find that more convenient than the current behaviour. I do not like your second option as that would cause problems for users who have their shares exported and set to Public.
  19. I think he showed it starting up under a DOS emulator. It may well have been Unraid v5 if the emulator was only 32-bit. I seem to remember, though, that although it was technically running, it took an inordinately long time to boot (several hours?), so it was not viable except as a thought experiment.
  20. You don't think the fact that this is a show-stopper is relevant?
  21. You have misinterpreted the syslog output. What you see is just a listing of all the possible array drive positions (regardless of licence level). The statement about it working previously suggests that some of your drives are removable. The check for the number of attached drives is carried out when starting the array, so removing such drives may allow you to start the array. You can plug removable drives back in at any time after the array is started without Unraid complaining about the number of drives.
  22. There is no specific target date. It will get released when Limetech decide it is in a suitable state to be released.
  23. All the steps required in different scenarios for disk replacement are covered here in the online documentation.
  24. 1) Even if an algorithm could be found that might implement this, it would almost certainly be much slower than normal writes to the array since, as parity is updated in real time, you would end up with the parity drive having to continually move its disk heads between tracks (see the parity sketch after this list).
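To illustrate the real-time parity point in the last item above: a minimal Python sketch (not Unraid code, just the standard XOR read-modify-write used for single parity) showing that every write to a data disk also requires reading and rewriting the corresponding parity block, which is what keeps the parity drive's heads seeking.

```python
# Sketch only: the XOR read-modify-write used to keep single parity current.
# Every data write needs the old data block and the old parity block first,
# then both the data and parity blocks are rewritten - so the parity disk
# is touched on every write, wherever it lands on the data disks.
def update_parity_block(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """new_parity = old_parity XOR old_data XOR new_data, byte by byte."""
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

# Tiny worked example with 4-byte blocks (values are arbitrary):
old_data   = bytes([0x01, 0x02, 0x03, 0x04])
new_data   = bytes([0xFF, 0x02, 0x03, 0x00])
old_parity = bytes([0xAA, 0xBB, 0xCC, 0xDD])
print(update_parity_block(old_data, new_data, old_parity).hex())
```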
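For the disk-passthrough replies above: a minimal sketch, assuming a Linux host and Python 3, of how an sdX name can be mapped to the invariant /dev/disk/by-id links, one of which is what goes into the VM's source dev= entry. The device name /dev/sdb is only an example.

```python
#!/usr/bin/env python3
# Sketch: map a kernel device name (e.g. /dev/sdb) to its /dev/disk/by-id links,
# which stay the same across reboots, unlike the sdX names.
import os
import sys

def by_id_paths(device):
    """Return every /dev/disk/by-id entry whose symlink resolves to 'device'."""
    target = os.path.realpath(device)
    by_id_dir = "/dev/disk/by-id"
    return sorted(
        os.path.join(by_id_dir, name)
        for name in os.listdir(by_id_dir)
        if os.path.realpath(os.path.join(by_id_dir, name)) == target
    )

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb"   # example only
    for path in by_id_paths(dev):
        print(path)   # candidate stable paths for source dev=...
```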
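And for the /etc/passwd point above: a small illustrative sketch that lists Linux-level users and flags the ones whose shell prevents an interactive login. The set of "no login" shells here is an assumption, not an exhaustive list.

```python
# Sketch: show that a user can exist in /etc/passwd without being able to log in.
NO_LOGIN_SHELLS = {"/bin/false", "/sbin/nologin", "/usr/sbin/nologin"}

with open("/etc/passwd") as passwd:
    for line in passwd:
        fields = line.rstrip("\n").split(":")
        if len(fields) < 7:
            continue
        user, shell = fields[0], fields[6]
        status = "no interactive login" if shell in NO_LOGIN_SHELLS else "login shell"
        print(f"{user:<20} {shell:<22} {status}")
```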