Posts posted by itimpi
-
-
If you put your Unraid server's address into the Remote server field then it will write to itself in the location you have set. Ideally this should be a share that resides on a pool, to avoid spinning up array drives.
-
Have you tried without the Unraid server connected to the UPS? It looks like the UPS may be telling Unraid to shut down.
-
For any issue around licence keys you need to contact support.
-
14 hours ago, da_banhammer said:
Well dang. I was thinking it was only a couple years old but it's actually been in service for 4 years now so I guess I shouldn't be too surprised. Thanks for your help!
Those errors do not always indicate a drive problem. They can also be caused by power/cabling issues to the drive (in particular power as you mentioned it making a noise). Running the SMART extended test is a good indication as to whether a drive is healthy or not. The easier step of getting the SMART information after a reboot to get the drive back online might also give an indication.
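As a sketch of how the extended test can be run from a terminal session (the device name sdX below is a placeholder; check the drive's actual assignment on the Main page of the GUI):

```shell
# Placeholder device: replace sdX with the drive's actual device name.
# Start the SMART extended (long) self-test; it runs inside the drive
# and can take many hours on a large disk.
smartctl -t long /dev/sdX

# Later, check the self-test log for the result:
smartctl -a /dev/sdX | grep -i -A2 "self-test"

# Key attributes to review: Reallocated_Sector_Ct, Current_Pending_Sector,
# Offline_Uncorrectable - non-zero raw values suggest a failing drive.
smartctl -A /dev/sdX
```

The same test can also be started from the GUI by clicking on the drive on the Main tab and using the Self-Test section there.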
-
The standard handling of disks going unmountable is covered here in the online documentation accessible via the Manual link at the bottom of the Unraid GUI. In addition every forum page has a DOCS link at the top and a Documentation link at the bottom.
-
Have you put the old key file into the config folder on the flash drive?
-
You may find this section useful from the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
-
8 minutes ago, chris_netsmart said:
@JorgeB many thanks for having a look, I am guesting that the Diags had no usefully information, I have done as advise and posted what I have done. all I can do now is wait and see for when it next crash
That is not sufficient - you have so far only set the server into 'listening' mode. To get anything actually logged you need either to put your server's address into the Remote Server field or to set the 'Mirror syslog to flash' option.
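If you go the mirror-to-flash route, the log survives a crash because it is written to the flash drive. The path below is the usual Unraid location for the mirrored log - verify it on your own system:

```shell
# Show the most recent entries of the syslog mirrored to the flash drive
# (path is the conventional Unraid location; confirm it exists first).
tail -n 100 /boot/logs/syslog
```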
11 hours ago, DaveL said:
Per your suggestion I downloaded the diagnostics file; what of the many files in the zip file to I provide?
Post the whole zip file. Analysing them for error causes typically means quite a few different files need to be examined without knowing ahead of time exactly which ones they will be.
-
Obviously something strange is going on, as that shows the syslog file with a size of 0, which I did not think was possible 😖
-
1 minute ago, RadRom said:
As per documentation, stopped array and ran it in GUI terminal.
Where in the current documentation did you see this? When I looked it said:
If you ever need to run a check on a drive that is not part of the array or if the array is not started then you need to run the appropriate command from a console/terminal session. As an example for an XFS disk you would use a command of the form: xfs_repair -v /dev/sdX1
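As a sketch of that process for an XFS disk (sdX1 is a placeholder for the first partition of the disk to check; the disk must not be mounted, i.e. the array stopped):

```shell
# Dry run first: -n reports problems without changing anything on disk.
xfs_repair -n /dev/sdX1

# If problems are found, run the actual repair:
xfs_repair -v /dev/sdX1

# If it refuses to run and suggests the -L option (zero the log), be
# aware that option can discard the most recent metadata changes.
```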
-
-
The syslog suggests you are getting macvlan related crashes and the syslog has
Apr 26 13:14:13 Nexus root: Fix Common Problems: Error: Macvlan and Bridging found
This combination is known to cause instability on many systems.
As mentioned some time ago in the 6.12.4 release notes you need to either switch docker networking to use ipvlan or if you want to continue using macvlan then disable bridging on eth0.
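You can confirm which driver your custom Docker networks are currently using from a terminal session (the network name br0 is an assumption - substitute your own):

```shell
# List all Docker networks and their drivers.
docker network ls

# Show just the driver for a specific network, e.g. br0.
docker network inspect br0 --format '{{.Driver}}'
```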
-
13 hours ago, chowpay said:
Ok I changed System to be cache, Stopped all the containers, enabled the mover. Once the mover was complete I enabled the dockers again. But I see its still utilizing disk1. Is there something I should do to ensure that docker.img is in cache and not on disk
It should definitely have worked if you did the following steps.
- Disabled Docker and VM services under Settings
- Set the 'system' share with cache as primary storage, array as secondary storage, and mover direction as Array -> Cache
- Run mover manually. When this completes the 'system' share should now only exist on the cache. You can easily check this using something like Dynamix File Manager.
- Change 'system' share to have cache as primary storage and nothing set as secondary storage.
- (optional) Enable 'Use Exclusive mode' under Settings -> Global Share Settings. This improves performance for shares that are entirely on a pool
- Re-enable the Docker and VM Services
You could also use Dynamix File Manager to manually move the 'system' share from disk1 to cache if you prefer this to mover.
You should also make sure the cache has a Minimum Free Space value set that is larger than the largest file you expect to cache (ideally something like twice that size). This stops the cache filling up too far, which can cause problems.
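The steps above can be verified from a terminal session; each top-level folder under /mnt/diskN or /mnt/cache is one copy of a share:

```shell
# See which drives currently hold any part of the 'system' share.
# After mover completes, only the cache copy should remain.
du -sh /mnt/disk*/system /mnt/cache/system 2>/dev/null
```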
-
8 minutes ago, Ezekial66 said:
No sir the only dynamix plugins are currently System Temp and File Manager, but both are disabled in Safe Mode where the issue still persisted
No idea then I am afraid.
-
You do not by any chance have the dynamix Cache Directories plugin installed? That can cause excessive load if it is badly configured so that it is not limited in any way as to what it scans.
-
The syslog is full of entries of the form
May 21 17:19:54 NAS kernel: sd 1:0:4:0: attempting task abort!scmd(0x00000000118655b3), outstanding for 30364 ms & timeout 30000 ms
May 21 17:19:54 NAS kernel: sd 1:0:4:0: [sdf] tag#3018 CDB: opcode=0x88 88 00 00 00 00 00 00 53 06 d8 00 00 04 00 00 00
which refers to the parity drive. It has also apparently dropped offline so there is no SMART information for it in the diagnostics.
-
20 minutes ago, Ezekial66 said:
despite this apparently being a new 'setting'.
The dynamix system monitoring has always been there. It is likely some other job being started by cron that is causing you an issue.
-
Just now, Barry Staes said:
This is the easy one i'd guess. Or did I miss how / what plugin does this?
Why not simply create a new empty one and then restore the contents via Apps -> Previous Apps?
-
Just now, Barry Staes said:
restoring all known (dozens) of docker configs should be possible also.
This is already possible via Apps -> Previous Apps, which is why backing up the docker.img file is not worth doing - you can simply reinstate its contents.
-
56 minutes ago, rtgurley said:
Somebody on reddit shared this document with me. So far I am about 74% (18 hours) into copying data from old parity to new parity. This seems faster than having to redownload data and rebuild parity.
Not sure that version is still accurate as it is in the 'legacy' part of the documentation. The current online documentation for that is here in the online documentation accessible via the Manual link at the bottom of the Unraid GUI. In addition every forum page has a DOCS link at the top and a Documentation link at the bottom.
-
Why not switch to using 2TB 2.5" HDDs - they are reasonably priced nowadays.
-
Looks like the docker.img file (in the system share) is on disk1. That means that any time the docker service is running those two drives will be spun up. Ideally you want all of the 'system' share to be on the cache both for best performance, and also to let the array drives spin down.
-
1 hour ago, loady said:
next i will swap sticks over, could it also be the RAM slot and not the actual RAM
It could be. It could also simply be down to having the extra RAM module installed - each module checks out fine individually, but you get failures when both are installed.
-
Can you successfully stop the array?
-
Accidentally hit the plug on a drive and now it's disabled, how can I remove that without completely rebuilding the drive?
in General Support
Posted
Note that this is no longer quite accurate as the device name for array drives can vary according to the Unraid release and whether encryption is being used or not. Much better to do it from the GUI as then you do not need to worry about the device name.