Everything posted by itimpi

  1. You could also try: ls -l /var/log so you see files directly stored under /var/log rather than in folders
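     If the aim is to see what is actually taking up the space under /var/log (an assumption on my part about what you are chasing), something along these lines also works:
         du -ah /var/log | sort -h | tail -n 20    # list the 20 largest files/folders last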
  2. BTW: there is no need to Preclear the drive as Unraid’s built-in Clear is much faster. The only reason you normally run pre-clear is to stress test a new drive before adding it to the array, and since this drive has been performing OK as parity1 it does not need to be stress tested.
  3. Are you sure there is not a checkbox to confirm you want to start the array without that drive (and that checking it enables the Start button)? I would expect a reboot to bring the drive back as the change is not committed until you start the array without the parity1 drive.
  4. If the card is NOT flashed into IT mode then it functions in RAID mode, which stops Unraid managing the attached drives effectively.
  5. Whether mover transfers files when it runs, and in which direction, is controlled by the settings for a particular share. You can find out how much space is used by each share and on which drives by using the Compute button on the Shares tab. That would be expected, as only the Yes setting causes files to be transferred from cache to array. However, you do not normally want all shares set to Yes. The appdata and system shares you normally want set to Prefer (which means keep files on cache if space permits) to maximise performance of docker containers and VMs. You will also find that mover can be slow if it is moving lots of small files because it does quite a few checks before moving any file.
  6. Are you saying the following sequence of steps does not work to remove parity1: stop the array; unassign parity1; start the array without parity1 assigned to commit its removal? Note you cannot re-assign the old parity1 drive to the array until you have successfully started the array without parity1 (i.e. it has to be done in 2 stages).
  7. I notice that you have the Minimum Free Space setting for the cache pool set to 0. It should be set to be larger than the biggest file you expect to cache, so that Unraid knows when it should stop writing new files to the cache and instead bypass the cache and write directly to the array. Btrfs file systems seem to misbehave when they get too full, so setting this may well help.
  8. I think it is unlikely that you need a docker image file this large unless you have a container mis-configured so that it is writing data internally to the image. Have you had problems with it filling up in the past? If properly configured then any location within a container where anything other than a trivial amount of data is being written should be mapped to a location on the host external to the image. Whether this is related to your current issue I have no idea, but I thought it was worth mentioning just in case.
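     A quick way to check whether a container is writing inside the image is to look at the writable-layer sizes that docker reports (a command-line sketch; a container showing many GB here is a strong hint that it has an unmapped path):
         docker ps -s           # SIZE column = data each container has written inside the image
         docker system df -v    # more detailed per-container and per-volume breakdown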
  9. FYI: If you get corruption of the docker image then it normally only takes a few minutes to fix it. The settings for all docker containers installed via the Apps tab are stored on the flash drive and those containers can be re-instated with their settings intact via Apps->Previous Apps.
  10. Were those all plugged in at the point you started the array? Removable drives plugged in after the array has been started do not count towards the licence limit.
  11. @bugthumpin Just for interest what share do you think Mover should be moving? Even from the screenshots I can see that the only share set to Use Cache=Yes (which is required for files to be moved from cache to array) is the 'isos' share and that is already all on the array.
  12. Rebuilding parity does not touch the other disks so your appdata should be intact. It is worth pointing out that by itself parity contains NO data. It just has the information that is required to repair a failed drive (in conjunction with all the other drives that are OK).
  13. Once a drive is disabled (which means a write to it failed) then it needs to be rebuilt to get it back into normal operation. If you are reasonably certain the drive is OK then you can use the process documented here in the online documentation. You might want to first check the cabling to the drive as that is the commonest cause of write failures to what is otherwise a healthy drive.
  14. That file is only in RAM so it is not stored anywhere on persistent storage. I think if someone has access to the /root folder you already have a serious security issue. However, whether it actually needs those permissions I do not know - I would think only Limetech know for certain.
  15. You need to start the array, and then a Format button is displayed (just under the Start button) that will list the drives to be formatted if you use it.
  16. The speed you quote of 105 MB/s seems about the limit of a gigabit LAN: 1 Gbit/s works out to 125 MB/s of raw bandwidth, and after network and protocol overhead real-world transfers typically top out at around 110 MB/s.
  17. You can run a scrub by clicking on the drive (or the first member if it is a pool) and selecting the scrub option. A scrub is completely independent of Unraid's parity system: a btrfs formatted drive has internal block checksums so that it can check its own data integrity.
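      For reference, a scrub can also be started from the command line if you prefer (a sketch that assumes the pool is mounted at /mnt/cache; adjust the path to match your pool name):
          btrfs scrub start /mnt/cache     # start checksum verification in the background
          btrfs scrub status /mnt/cache    # check progress and any errors found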
  18. It is worth pointing out that there are multiple references in that section to the fact that the rebuild process will NOT clear an unmountable status, and the section on handling unmountable disks says that the correct handling (ideally before attempting the rebuild) is to use the check filesystem process.
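      For what it is worth, the check filesystem process boils down to running the file system's own repair tool against the (emulated) disk while the array is started in Maintenance mode. A rough sketch for an xfs-formatted disk1 (the md device naming varies between Unraid releases and the GUI runs this for you, so treat it purely as an illustration):
          xfs_repair -n /dev/md1    # -n = check only, report problems without changing anything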
  19. Looking at the syslog, as soon as you start the rebuild you get:
          May 8 21:08:23 MegaAtlantis kernel: sd 7:0:0:0: [sdb] tag#1201 Sense Key : 0x2 [current]
          May 8 21:08:23 MegaAtlantis kernel: sd 7:0:0:0: [sdb] tag#1201 ASC=0x4 ASCQ=0x0
          May 8 21:08:23 MegaAtlantis kernel: sd 7:0:0:0: [sdb] tag#1201 CDB: opcode=0x8a 8a 00 00 00 00 00 01 9b 04 58 00 00 04 00 00 00
          May 8 21:08:23 MegaAtlantis kernel: I/O error, dev sdb, sector 26936408 op 0x1:(WRITE) flags 0x0 phys_seg 128 prio class 0
          May 8 21:08:23 MegaAtlantis kernel: md: disk1 write error, sector=26936344
          May 8 21:08:23 MegaAtlantis kernel: md: disk1 write error, sector=26936352
      followed by a lot more write errors. It may really be a failing disk, so I suggest you click on the drive on the Main tab; disable spindown; and run the extended SMART test. If that fails then you really have a failing drive. BTW: it was not relevant in this case, but often we will want the diagnostics with the array started in normal mode so we can see if emulated disks are mounting.
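      If you would rather start the test from the command line, something like this works (assuming the disk is still sdb; the drive must stay spun up and the result appears in the self-test log once it completes):
          smartctl -t long /dev/sdb    # start the extended (long) self-test
          smartctl -a /dev/sdb         # later: full SMART report including the self-test log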
  20. Yes - what you mention is the way to go if you want a ZFS array. Just make sure any shares are set to only be on the ZFS array (or the SSD one for appdata/dockers). At some point in the future (maybe in the 6.13 release) the requirement to have that dummy flash drive in the array will be removed as the existing array type just becomes another pool type you can use.
  21. One thing to consider is whether you will want to add additional drives later. This is not easy with ZFS pools, but is easy for array drives (whatever file system you use). There is also the option of using btrfs for a pool, which has many of the advantages of ZFS but has the big advantage over ZFS that it is easy to add drives and/or dynamically change the raid profile used. The possible downside of btrfs is that it is considered slightly less resilient to failure if you have hardware issues.
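      As an illustration of how easy expansion is with btrfs (a sketch only; in Unraid you would normally just add the device to the pool via the GUI, and the mount point and device name below are assumptions):
          btrfs device add /dev/sdX /mnt/poolname                             # add another drive to the pool
          btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/poolname   # optionally convert the raid profile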
  22. This relates to the fact that when you set up a virtual disk (vdisk) file for a VM it is by default created as a Linux ‘sparse’ file. This means that the space is not actually used until the VM writes to it, but over time, as the VM runs and writes to different parts of the vdisk file, it can grow to the full size you specified when setting up the VM. As long as you allow for this you are fine.
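      You can see the effect with any sparse file (a hypothetical example, not your actual vdisk path):
          truncate -s 30G test.img    # create a 30G sparse file
          ls -lh test.img             # shows the full 30G apparent size
          du -h test.img              # shows the space actually allocated (initially ~0)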
  23. Our normal recommendation is to post the whole file, but it is a text file so it is up to you to decide. There should be nothing obviously sensitive in it, but some people do not like sharing information that most of us would not consider sensitive.