JorgeB

Moderators
  • Posts: 63,929
  • Joined
  • Last visited
  • Days Won: 675

Everything posted by JorgeB

  1. Turns out it was easy to find the problem. If custom disk utilization settings are used, unRAID creates the file config/smart-one.cfg, e.g.:

         [disk1]
         hotTemp=""
         maxTemp=""
         warning="70"
         critical="90"

     If you then delete those values on that disk's settings page, so it should start using the global settings, smart-one.cfg is changed like so:

         [disk1]
         hotTemp=""
         maxTemp=""
         warning=""
         critical=""

     And this is when the global settings for that disk stop working; I'm guessing the blank values are overriding them.
  2. It works correctly on a new installation, so something else is causing the issue. It's strange that it happens on 2 different servers; I'll try to find out the cause, but it might not be easy, so ignore this report for now.
  3. Strange; although you're using a different release, I assume nothing was changed for notifications. I'll try to investigate this further on a clean install to try to confirm the problem.
  4. They are; I get them if I use individual disk settings, and I don't think the problem is on my end, as I tested on 2 different servers and they behave the same.
  5. I'm not getting a system notification when the warning disk utilization threshold is reached, though the disk changes color on the main page. If I set a specific warning for that disk, it changes color and generates the system notification. Example of how to reproduce: disk1 is @ 80% usage and is using the global utilization thresholds.
     • Set the global warning threshold to 75% - the disk will change color to orange (if using a color theme) but there won't be a notification.
     • Set the global warning threshold to 90% - the disk will change back to green, again no notification.
     • Set the disk1 warning threshold to 75% - the disk will change color to orange (if using a color theme) and generate a system notification.
     • Set the disk1 warning threshold to 90% - the disk will change color to green (if using a color theme) and generate a system notification.
  6. A USB 3.0 port is not recommended. If you were already on USB 2.0, it can be a failing flash drive (USB 3.0 flash drives are also not recommended, as they usually have a much higher failure rate).
  7. You can enable compression in different ways, for example remounting the disk with compression enabled after array start, e.g. by using the user script plugin, but probably the easiest way is to use chattr to set a share (or folder) to compressed; all files inside that share/folder will then inherit compression. See the sketch below.
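     A minimal sketch of both approaches, assuming a btrfs disk mounted at /mnt/cache and a hypothetical share folder named backups; the paths and compression algorithm are placeholders:

         # remount the pool with compression enabled (lasts until the array is stopped)
         # use zstd, lzo or zlib depending on what the kernel supports
         mount -o remount,compress=zstd /mnt/cache

         # or mark a share/folder as compressed; new files written inside inherit it
         chattr +c /mnt/cache/backups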
  8. I believe this is the correct one, but I've never used multiple options.
  9. You can use it by adding btrfs to the vfs objects in /etc/samba/smb-shares.conf for the share where you want to use it, then restart Samba with: samba restart. You'll need to redo this every time you start/stop the array, since it will revert to the previous setting. See the example below.
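     As an illustration, using a hypothetical share named isos, the share section in /etc/samba/smb-shares.conf gains one line (the other existing options stay as they are), then Samba is restarted from the console:

         [isos]
             path = /mnt/user/isos
             vfs objects = btrfs

         samba restart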
  10. It should; SAS2008-based controllers don't support TRIM on most SSDs, SAS2308 and newer do.
  11. The ST8000AS002 does make a few clicking sounds; it's the only model I have, newer models might be different.
  12. Not AFAIK, but you can use btrfs as the filesystem and it does support compression.
  13. You'll need to edit config/domain.cfg on the flash drive and change SERVICE="enable" to "disable", then reboot. See below.
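      For illustration, the relevant line in config/domain.cfg changes from:

          SERVICE="enable"

      to:

          SERVICE="disable"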
  14. It is, PCIe 3.0 is backward compatible.
  15. Adding vfs objects = btrfs to a btrfs-formatted SMB share allows copy-on-write (i.e. instant) copies to be done in Windows Explorer (on Windows 8/Server 2012 or newer) using Samba's server-side copy functionality, as long as the copy is done on the same filesystem, e.g. from cache to cache, disk1 to disk1, etc. This would allow, for example, very quickly and easily backing up a vdisk before any significant change without dropping to the CLI (see the CLI comparison below). Alternatively, allow the user to add specific vfs objects to any share using the GUI.
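      For comparison, the CLI equivalent of such an instant copy on the same btrfs filesystem is a reflink copy (the vdisk path here is hypothetical):

          cp --reflink=always /mnt/disk1/domains/vm1/vdisk1.img /mnt/disk1/domains/vm1/vdisk1.img.bak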
  16. You can use a spare flash drive as the single data disk.
  17. First time I've heard that; maybe it's limited to SAS drives, since they are not used by many, but I'd still think other people would have complained by now if it's a general problem.
  18. 50 to 60MB/s is the normal unRAID write speed due to how parity works with the default settings; you can enable turbo write for faster writes (see the note below).
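      Note: turbo write is normally toggled in Settings -> Disk Settings -> Tunable (md_write_method); a hedged sketch of the console equivalent, assuming the mdcmd interface:

          # 1 = reconstruct write (turbo write), 0 = read/modify/write (default)
          mdcmd set md_write_method 1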
  19. This is usually caused by a PCIe device. It can be fixed by moving the offending device to a different PCIe slot, updating the BIOS, or adding pci=nommconf to your syslinux.cfg after append initrd=/bzroot, so it would look like: append initrd=/bzroot pci=nommconf (see the sketch below).
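      A sketch of how the default boot entry in syslinux.cfg might look with the parameter added (the rest of the file is unchanged):

          label unRAID OS
            menu default
            kernel /bzimage
            append initrd=/bzroot pci=nommconf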
  20. Good point, I forgot to mention that; scheduled parity checks should always be non-correcting.
  21. The automatic parity check in case of an unclean shutdown is always non-correcting; if one starts and finds one or more errors, you might as well cancel it and start a correcting check.
  22. Not sure that is the problem; though the filesystem is fully allocated, there's still space for metadata. This problem is more serious if it is fully allocated and the metadata is almost all used up, since there's no space to create a new chunk. Still, run a balance (example below), because it's not good to leave it like that: https://lime-technology.com/forums/topic/62230-out-of-space-errors-on-cache-drive/?do=findComment&comment=610551 P.S. disk1 needs a new SATA cable.
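      A hedged example of such a balance on a pool mounted at /mnt/cache; the usage filter value is just an illustration:

          # compact partially used data chunks to free up allocated but unused space
          btrfs balance start -dusage=75 /mnt/cache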
  23. Possibly, but that's beyond my knowledge.