unRAID Server Release 6.0.1-x86_64 Available



I have a new folder & file on my flash drive that I've never seen before (hence posting here).

 

Folder name = FOUND.000

Contents name = FILE0000.CHK

 

The type of file is "Recovered File Fragments".

 

Is this something new? Safe to delete?

 

Thanks

See if you can view the file as text or attach it.
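
If you're not sure how, here's a quick way to peek at it from a telnet session (a rough sketch; it assumes the flash is mounted at /boot as usual and that these utilities are present on your build):

    # Guess what kind of file the recovered fragment is
    file /boot/FOUND.000/FILE0000.CHK
    # Show any human-readable text inside it
    strings /boot/FOUND.000/FILE0000.CHK | head -n 20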

I have a new folder & file on my flash drive that I've never seen before ... Is this something new? Safe to delete?

 

Did you plug the drive into a Windows machine? They're Windows files as far as I know.

 

Bingo. unRAID was complaining that my flash wasn't read/write (after my upgrade), and a repair in Windows fixed it.

 

Thanks  :)


Bingo. unRAID was complaining that my flash wasn't read/write (after my upgrade), and a repair in Windows fixed it. ...

 

If everything seems to be working fine, I would suggest that you stop the array and back up your flash drive. 

 

There are two possibilities for the error that Windows chkdsk fixed.

 

First, power was lost to your server while it was writing to the flash drive (the most likely cause).

Second, the flash drive is going bad (it will happen eventually!).

Either way, a backup will serve you well if it happens again.
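
If it helps, one way to grab that backup from the console (a sketch; the destination path and archive name are just examples, point it wherever you keep backups):

    # Archive the entire flash drive (mounted at /boot) onto an array disk
    tar -czf /mnt/disk1/flash-backup-$(date +%Y%m%d).tgz -C /boot .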


If everything seems to be working fine, I would suggest that you stop the array and back up your flash drive. ...

 

There was a series of events that suggests I caused the problem myself, and those issues have since been rectified.

 

And I have my entire flash drive backed up with CrashPlan, though I did manually make a copy before I let Windows touch it (just in case) :)


Tom / Jon,

 

Just wanted to let you know that I'm having issues on 6.0.1. I can't say for sure it is a v6 issue; it may simply be coincidence that my drive died while on v6.

 

First, I'm having log issues. Sometimes the log is so full of errors that clicking Tools > System Log only gives me an error and doesn't display the log. I could tail the log by clicking Log on the menu bar, but that wasn't very helpful. Right now, my System Log is completely blank; I even downloaded it as a zip file of zero bytes. I'll see if I can get a log file with some data in it, but this seems pretty bad when the default troubleshooting path is not operational.

 

Second, my drive issues:  I had a flaky Parity drive, so I replaced it.  Either during or shortly after the New Parity build, a Data drive died.  The bad Data drive reports as Unmountable.  I am not sure if the Parity build finished successfully:  the messages on the GUI were not clear if I had good parity or not.  I did notice that I could not browse the bad drive's contents, something I'd been able to do under earlier versions when the drive was being simulated from Parity - perhaps the only indication that Parity was bad.

 

It was at this time I noticed I could not stop the array. I would checkmark and click Stop, unRAID would display status updates as if it were stopping, and then after a while it would refresh the screen with the array still up. Other services, like shares, were down, but the array itself was up. Retrying Stop several times produced identical results. The only way to stop the array was to reboot with array autostart disabled. Since I can't stop the array, I also can't power down or reboot safely.

 

Last night I attempted to rebuild the bad Data drive with a new drive.  The process appeared to complete, but since I have no Log, I can't tell for sure.  The New Data drive still reports as Unmountable, I still can't browse the drive, but I do have a nice green orb on this drive telling me it is "NORMAL OPERATION / DEVICE IS ACTIVE".  I still can't stop the array.  The Dashboard tells me Parity is valid.

 

In summary, my various issues:

  • Being told Parity is Valid, even though rebuild from Parity seems to have failed.
  • Being told the new Data Drive is Normal/Active, even though rebuild from Parity seems to have failed.
  • Inability to Stop the Array
  • Error messages when trying to view the System Log, or NULL log

 

I still have my flaky Parity drive, so I'm going to try a rebuild from that drive next.

 

-Paul

 

EDIT:  I'm attaching a SYSLOG from a fresh boot.  Not sure it will reveal much.

tower-syslog-20150630-0958.zip

... First, I'm having log issues ... Right now, my System Log is completely blank ... this seems pretty bad when the default troubleshooting path is not operational ...

I think there is still an issue where the system stats plugin uses up all of the space for logging. See here for a workaround.
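
A quick way to confirm whether that's what is happening, from a telnet session (a sketch, assuming the standard mount point):

    # Is the log filesystem full, and what is taking the space?
    df -h /var/log
    du -sh /var/log/* | sort -h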

My personal view is to allocate more space than the default 128MB for /var/log anyway. In v6, Docker logs are stored in this folder too, and the limit may be reached quickly.

 

The Dynamix Stats plugin creates a ~6MB log file for each day, rotating monthly; these will eventually fill up /var/log when it is set to 128MB.
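
(At ~6MB per day, a monthly rotation is on the order of 180MB, which exceeds the 128MB default all by itself.) Since /var/log is a tmpfs, enlarging it is a one-liner; a sketch, with the size just an example:

    # Enlarge the tmpfs holding /var/log (put this in the 'go' file so it
    # is reapplied on every boot)
    mount -o remount,size=384m /var/log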

 


My logging issues are cropping up within 24 hours. I'm not using Docker (my low-end G1610 processor isn't capable of such things). I was noticing a ton of bad-drive errors that might have been filling up the log, but even on my latest boot with a new drive (bad drive unplugged), I still experienced logging issues (NULL log) within about 18 hours, perhaps related to a failed rebuild (guessing issues from incomplete parity data).

 

That said, 128MB is still a lot of space for text. On some enterprise systems I work with, we have to enable debug-level logging and SQL tracing to get that sort of excessive log file usage in a short time period. It seems silly that on a near-stock system with a bad drive and perhaps bad parity the log file would fill up so fast, if that's in fact what is happening.

 

 


Is there any way to enable compression on the log folder? A few quick searches mentioned mounting a btrfs compressed filesystem inside the tmpfs space, or something like that. Since almost all of the space overruns are caused by highly repetitive data, that seems like a logical solution.
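
From what I read, the idea would look roughly like this (an untested sketch; the image path and size are made up, and I'm not sure how well it would behave with a logger holding the directory open):

    # Back a compressed btrfs image with RAM, then mount it over the logs
    truncate -s 256M /tmp/varlog.img        # /tmp lives in RAM on unRAID
    mkfs.btrfs -f /tmp/varlog.img
    mount -o loop,compress=lzo /tmp/varlog.img /var/log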


The Dynamix Stats plugin creates a ~6MB log file for each day, rotating monthly; these will eventually fill up /var/log when it is set to 128MB. ...

I believe I have seen it use 9MB per day on my server. I do know I had to use more than 256MB, since SA chewed through at least that much. With the log set to 384MB my system is stable. I can validate this in a few hours.


I believe I have seen it use 9MB per day on my server ... With the log set to 384MB my system is stable. ...

 

Yes, file size may vary depending on your hardware, e.g. more hard disks means more info to store.

 


Yes, file size may vary depending on your hardware, e.g. more hard disks means more info to store. ...

 

I only have 4 data disks and 1 parity disk, but have the top license (if that matters at all).


Yeah, I don't expect you to run out of space within 24 hours, but a faulty disk can generate quite a bit of 'noise'. Did you telnet into the system and check the folder /var/log?

 

No, didn't think to check there.  I assumed the log was toast and went on with other troubleshooting steps.

 

On a positive note, I was able to do a 'New Config' with my original drives, and got lucky: my bad drive's data showed back up, and the Array Stop feature is working again. So it seems it only failed to stop the array when there was a drive issue at play. Probably not a normal test case for L-T.

 

It would be nice if there were a feature to tell unRAID to try again on a drive it has flagged as bad. Years ago I had an old Norco case with a flaky slot. The drive would randomly go offline, unRAID would flag it as bad, and the only way to get it online again was to do a New Config. It is inconvenient that unRAID sees the drive but won't let you use it because it no longer trusts it, even if it is physically still a good drive; the only thing unRAID offered was the ability to format it. So easy to destroy your data, so hard to convince unRAID your data was still good and to simply try again.


It would be nice if there were a feature to tell unRAID to try again on a drive it has flagged as bad ... the only way to get it online again was to do a New Config. ...

When unRAID redballs a disk, it is because of a write failure. It has, however, "written" the data: it updates parity anyway, so the data that failed to write can be simulated by the rest of the array and later rebuilt. Any subsequent writes to that disk are handled the same way.

 

So it is not simply a matter of telling unRAID the disk and its data are still good: the data on the disk itself is no longer good, but it does still exist in the array and can be read or rebuilt.
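
To illustrate the principle, here's a toy sketch of single-parity XOR (not unRAID's actual code; the byte values are made up):

    # parity = d1 XOR d2 XOR d3; any one missing value is the XOR of the rest
    d1=170; d2=204; d3=240                 # stand-ins for blocks on 3 data disks
    parity=$(( d1 ^ d2 ^ d3 ))
    d2_rebuilt=$(( parity ^ d1 ^ d3 ))     # "read" the failed disk via the others
    echo "original d2=$d2, rebuilt d2=$d2_rebuilt"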


... the only way to get it online again was to do a New Config. 

 

That will work, but it has never been a requirement. If a drive reports "unrecoverable write error" then unRAID will 'disable' that device; it has no choice, because the data on that device is now wrong. If higher-level code immediately reads back the block that didn't write correctly, it will return the wrong data. We take great pains in the driver to make sure disabling a device is 'synchronous' with the I/O stream so as not to permit this.

 

Once a device is 'disabled', the normal course of action is:

 

1. Stop array

2. Yank bad drive

3. Install new drive

4. Start array - this will trigger a rebuild because unRAID sees that a new device has been installed in a disabled disk slot

 

One can also do this:

 

1. Stop array

2. Yank bad drive

3. Start array

 

Now you are running with on-the-fly reconstruct active for all I/O to the missing device.  The assumption is that sometime soon you will:

 

1. Stop array

2. Install new drive

3. Start array - this will trigger a rebuild for the same reason as above.

 

If you wanted to try and re-use the "bad" drive you can thus do this little dance:

 

1. Stop array

2. Yank bad drive (or just unassign it)

3. Start array

4. Stop array

5. Reinstall bad drive (or re-assign it) - unRAID will "think" this is a new drive because step 3 erased the disk-id of that bad slot

6. Start array - this will trigger a rebuild

 

Make sense?


Are the 6.x.x point releases going through beta/RC testing like the 6.x releases, or are they tested internally only?

It will depend on what goes into the release. The plan is to release 6.1-rc1, then 6.1.0. If, before 6.1.0 is released, something is found in 6.0.1 (the current stable) that needs immediate attention, we'll generate a 6.0.2; but once 6.1.0 is released, that's the end of the line for 6.0.x.


... once 6.1.0 is released, that's the end of the line for 6.0.x.

 

Now that we are on a stable release with 6.0.1, will the update plugin still suggest upgrading to betas/release candidates, or will it stick to stable releases?

