unRAID Server Release 6.0-beta13-x86_64 Available


limetech


I wonder if the config folder got damaged somehow on your key?

 

I hope this helps you troubleshoot.

 

-- stewartwb

 

From PuTTY I issued

 

shutdown -r now

 

All was fine when it started up again. Not sure what caused it, but I seem to be okay again. Thanks for the advice.


Hmm, we had this issue tracked down and resolved. Not sure why you're still getting it. Can you copy and paste the output you get from this command:

 

v /var/log

 

You're quick  ;D... I deleted the post right after I posted it. Had so many PuTTY windows open, I attached the wrong code. Trying it again.

Just confirmed that it's still doing it...

 

Here's my output:

 

 v /var/log
total 168
-rw------- 1 root root      0 Feb 14 14:56 btmp
-rw-r--r-- 1 root root      0 Dec 11 18:26 cron
-rw-r--r-- 1 root root      0 Dec 11 18:26 debug
-rw-rw-rw- 1 root root  44877 Feb 16 23:50 dmesg
-rw-rw-rw- 1 root root   1564 Feb 16 23:50 docker.log
-rw-r--r-- 1 root root      0 Feb 14  2014 faillog
-rw-r--r-- 1 root root    292 Feb 16 23:50 lastlog
drwxr-xr-x 6 root root    120 Feb  3 16:04 libvirt/
-rw-r--r-- 1 root root      0 Dec 11 18:26 maillog
-rw-r--r-- 1 root root      0 Dec 11 18:26 messages
drwxr-xr-x 2 root root     40 May 15  2001 nfsd/
drwxr-xr-x 2 root root   2860 Feb 14 14:56 packages/
drwxr-xr-x 2 root root     80 Feb 15 19:49 plugins/
drwxr-xr-x 2 root root     40 Oct 27  1998 removed_packages/
drwxr-xr-x 2 root root     40 Oct 27  1998 removed_scripts/
drwxr-xr-x 3 root root    100 Feb 16 23:50 samba/
drwxr-xr-x 2 root root     40 Feb 14 14:57 scripts/
-rw-r--r-- 1 root root      0 Dec 11 18:26 secure
drwxr-xr-x 3 root root     60 Feb 14 14:57 setup/
-rw-r--r-- 1 root root      0 Dec 11 18:26 spooler
-rw-r--r-- 1 root root 107167 Feb 16 23:50 syslog
-rw-rw-r-- 1 root utmp   7296 Feb 16 23:50 wtmp
drwxr-xr-x 2 root root     40 Oct  8 21:07 xen/

cat /var/spool/cron/crontabs/root
# If you don't want the output of a cron job mailed to you, you have to direct
# any output to /dev/null.  We'll do this here since these jobs should run
# properly on a newly installed system.  If a script fails, run-parts will
# mail a notice to root.
#
# Run the hourly, daily, weekly, and monthly cron jobs.
# Jobs that need different timing may be entered into the crontab as before,
# but most really don't need greater granularity than this.  If the exact
# times of the hourly, daily, weekly, and monthly cron jobs do not suit your
# needs, feel free to adjust them.
#
# Run hourly cron jobs at 47 minutes after the hour:
47 * * * * /usr/bin/run-parts /etc/cron.hourly 1> /dev/null
#
# Run daily cron jobs at 4:40 every day:
40 4 * * * /usr/bin/run-parts /etc/cron.daily 1> /dev/null
#
# Run weekly cron jobs at 4:30 on the first day of the week:
30 4 * * 0 /usr/bin/run-parts /etc/cron.weekly 1> /dev/null
#
# Run monthly cron jobs at 4:20 on the first day of the month:
20 4 1 * * /usr/bin/run-parts /etc/cron.monthly 1> /dev/null
# Scheduled Parity Check
0 0 1 * * /root/mdcmd check  1>/dev/null 2>&1
root@Server_B:~#
Broadcast message from root@Server_B (Mon Feb 16 23:47:24 2015):

The system is going down for reboot NOW!
login as: root
root@192.168.1.62's password:
Linux 3.18.5-unRAID.
root@Server_B:~# cat /var/spool/cron/crontabs/root
# If you don't want the output of a cron job mailed to you, you have to direct
# any output to /dev/null.  We'll do this here since these jobs should run
# properly on a newly installed system.  If a script fails, run-parts will
# mail a notice to root.
#
# Run the hourly, daily, weekly, and monthly cron jobs.
# Jobs that need different timing may be entered into the crontab as before,
# but most really don't need greater granularity than this.  If the exact
# times of the hourly, daily, weekly, and monthly cron jobs do not suit your
# needs, feel free to adjust them.
#
# Run hourly cron jobs at 47 minutes after the hour:
47 * * * * /usr/bin/run-parts /etc/cron.hourly 1> /dev/null
#
# Run daily cron jobs at 4:40 every day:
40 4 * * * /usr/bin/run-parts /etc/cron.daily 1> /dev/null
#
# Run weekly cron jobs at 4:30 on the first day of the week:
30 4 * * 0 /usr/bin/run-parts /etc/cron.weekly 1> /dev/null
#
# Run monthly cron jobs at 4:20 on the first day of the month:
20 4 1 * * /usr/bin/run-parts /etc/cron.monthly 1> /dev/null


Ugh, my bad. I meant the output from just /var:

NP

 

v /var
total 0
lrwxrwxrwx  1 root root   3 Feb 14 14:55 adm -> log/
drwxr-xr-x  5 root root 100 Feb 16 23:58 cache/
drwxr-xr-x  2 root root  40 Feb 16 23:58 empty/
drwxr-xr-x 13 root root 260 Feb 14 14:57 lib/
drwxr-xr-x  3 root root  60 Feb 14 14:57 local/
drwxr-xr-x  3 root root  60 Oct  8 21:07 lock/
drwxr-xr-x 12 root root 500 Feb 16 23:59 log/
lrwxrwxrwx  1 root root  10 Feb 14 14:55 mail -> spool/mail/
drwxr-xr-x 10 root root 540 Feb 16 23:59 run/
drwxr-xr-x  7 root root 140 Jan 26 19:43 spool/
drwxr-xr-x  3 root root  60 Jul 12  2013 state/
drwxrwxrwt  2 root root 100 Feb 16 23:59 tmp/
drwxr-xr-x  3 root root  60 Oct  8 21:07 xen/

 

 


 

Hmm, we had this issue tracked down and resolved. Not sure why you're still getting it. Can you copy and paste the output you get from this command:

 

v /var/log

Just confirmed it's happening on both my servers. One with all the Dynamix plugins and various dockers running, and one with no plugins (and no dockers either).

My bad. Different issue I was thinking of related to erroneous cron messages through the notification system. Your issue is different. Will investigate.

 



Already filed the defect report. This issue was previously reported on b12 on January 1st. Links are in the defect report: http://lime-technology.com/forum/index.php?topic=38207.0

 

Autostart of scheduler after reboot is still not working in b13. The same is true for notifications.

 

I have created a temporary 'bandaid' plugin which overcomes these current shortcomings. Install the plugin from the plugin manager; copy and paste the URL below into the install box:

 

https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.bandaid.plg

 

 

Works like a charm  ;D  Thanks

Updated to b13 via webui. Rebooted.

 

Now my cache drive shows up as unformatted (it's a 250GB SSD with XFS). It has all my Xen VMs on it, so I'm a bit stuck.

 

Any thoughts?

 

Thanks.

 

**********************************************************

Just noticed this is the second similar report already - suggest holding off on applying this update until we see some kind of limetech response.

***********************************************************

 

Meep, when you have the array stopped, if you click the device, what filesystem do you have assigned there?

 

Hi Jonp

 

It's XFS

 

Thanks

 


Need a system log please.


On the Tools page there is a "Trial Info" button that gives a nice welcome to the Trial version and lists its restrictions.

 

This probably shouldn't be there for a licensed version  :)

 

Yeah, it's put there so we can get at it for testing during beta/rc.


Stopped array ... disabled auto start of the array

rebooted

all disks show up....

start array ....

says my XFS cache disk is unformatted ??????

so no dockers, nothing works :(

 

how to check and repair an XFS drive?

xfs_repair, but sdp or sdp1?
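For what it's worth, the filesystem lives on the first partition rather than the whole device, so the repair target would normally be sdp1, not sdp. A hedged sketch (the device name is just an example from this post; run it with the array stopped, and always dry-run first):

```shell
# The filesystem sits on partition 1, so the repair target is
# /dev/sdp1, not /dev/sdp (substitute your own cache device).
DEV=/dev/sdp
PART="${DEV}1"                        # -> /dev/sdp1

echo "dry run: xfs_repair -n $PART"   # -n reports problems, writes nothing
# Only after the dry run looks sane (and with the array stopped):
# xfs_repair "$PART"
```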

 

As mentioned by others, my cache drive now displays as unformatted. Unlike the previous posts, mine was btrfs-formatted. Manually mounting the device results in a "bad superblock" message. I'm a long-time unRAID user and am happy to contribute financially. That said, I have a hard time justifying an upgrade from a free/trial license when my entire cache drive has just been compromised by a system upgrade.

 

Post a system log please.

Sent all system logs to your email.

Please check your email.


What I don't get is why the disk now thinks it's reiserfs.

 

It was XFS, 100% sure.

 

root@R2D2:~# xfs_db /dev/sdp1
xfs_db: /dev/sdp1 is not a valid XFS filesystem (unexpected SB magic number 0x00000000)
root@R2D2:~# xfs_db> sb 0
0: No such file or directory

fatal error -- couldn't initialize XFS library
root@R2D2:~# xfs_db /dev/sdp
xfs_db: /dev/sdp is not a valid XFS filesystem (unexpected SB magic number 0x00000000)
root@R2D2:~# dmesg | tail
mdcmd (21): import 20 0,0
mdcmd (22): import 21 0,0
mdcmd (23): import 22 0,0
mdcmd (24): import 23 0,0
ntfs: driver 2.1.30 [Flags: R/O MODULE].
REISERFS (device sdp1): found reiserfs format "3.6" with standard journal
REISERFS (device sdp1): using ordered data mode
reiserfs: using flush barriers
REISERFS warning (device sdp1): sh-462 check_advise_trans_params: bad transaction max size (66316). FSCK?
REISERFS warning (device sdp1): sh-2022 reiserfs_fill_super: unable to initialize journal space
root@R2D2:~#
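Before reaching for repair tools, it can help to check what signature is actually sitting on the partition. A small sketch (the device name is an example from the post above; `fs_magic` is a hypothetical helper):

```shell
# Read the first four bytes of a block device or image file -- for XFS
# this is the magic "XFSB"; anything else means another (or no)
# filesystem signature is present.
fs_magic() {
    dd if="$1" bs=4 count=1 2>/dev/null
}

# On the real system (device name is an assumption):
#   blkid /dev/sdp1        # e.g. TYPE="xfs" or TYPE="reiserfs"
#   fs_magic /dev/sdp1     # prints "XFSB" for a healthy XFS filesystem
```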


Updated without any hitches; it was a little disconcerting to see all my dockers come up blue and needing an update.

 

"Updating" them didn't pull any data, though, and everything restarted OK.

 

Yes there were some changes to how repos are handled (thanks gfjardim!) and so this one-time experience will get things synced up, but shouldn't require an actual download if your apps are current.

 

This should be in the install notes.

 

Autostart of scheduler after reboot is still not working in b13. The same is true for notifications.

 

I have created a temporary 'bandaid' plugin which overcomes these current shortcomings. Install the plugin from the plugin manager; copy and paste the URL below into the install box:

 

https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.bandaid.plg

Works like a charm  ;D  Thanks

 

Also should be in the install notes.

 

We also need people to reinstall all their dockers to fix the COW bug, but we have not supplied a procedure to do so.


After upgrading to -beta13, if your cache disk now appears 'unformatted', please do not run a file system check or scrub, and do not reformat. Instead, please capture the system log without rebooting and post it here. What's probably happening is this: those who experience this issue have probably partitioned their cache disk manually, or perhaps partitioned it and built the file system on some other Linux distro. I need to know if this is the case before implementing a fix. thx
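Since system logs are requested repeatedly in this thread, here is a sketch of capturing one without rebooting (the paths follow common unRAID conventions and are assumptions; `save_log` is a hypothetical helper):

```shell
# Copy a log file somewhere persistent with a timestamped name; on
# unRAID the flash drive mounted at /boot survives a reboot, while
# /var/log lives in RAM and does not.
save_log() {
    cp "$1" "$2/syslog-$(date +%Y%m%d-%H%M%S).txt"
}

# Typical invocation on the server (paths assumed):
#   save_log /var/log/syslog /boot
```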


I admit to being a bit scared. I finally have one docker set up, with the help of Sparklyballs. Thanks for that. It runs on unRAID 6 beta10, BTW.

 

Will this docker still work in beta 13? And will my Pro license still work?

 

I hate to troubleshoot afterwards. So what are the odds?

 

 

No reason to suspect that the docker will not work with b13. In fact, it is likely to work better due to the bugs fixed in this area. Having said that, it would not do any harm to back up your docker image file first. You also get to take advantage of the built-in Dynamix GUI introduced in b12 and the enhanced functionality around it.

 

Your Pro license will definitely continue to work without problems.
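The backup suggested above might look something like this (the image path is an assumption, since the stock location varies with your cache and share layout; `backup_image` is a hypothetical helper):

```shell
# Copy an image file alongside itself with a date-stamped suffix
# before upgrading, so a bad upgrade can be rolled back.
backup_image() {
    cp "$1" "$1.bak-$(date +%Y%m%d)"
}

# Typical unRAID 6 location (an assumption -- adjust to your layout):
#   backup_image /mnt/cache/docker.img
```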


Upgrade went very smoothly here, including the upgrade of additional plugins. Points of note, as already documented here: refreshing of dockers, and having to remove and reinstall the apcupsd plugin.

 

A slight problem with apcupsd is that NISIP is set to localhost in the distributed apcupsd.conf file, preventing other machines from chaining off the same UPS via the NIS server running on unRAID. Edits to this file will not survive a reboot, so a little sed script is needed to change this setting.
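The sed script mentioned above might look like this (the config path and the go-script approach are assumptions, based on unRAID rebuilding /etc from RAM at boot; `fix_nisip` is a hypothetical helper):

```shell
# Rewrite NISIP so apcupsd's NIS server listens on all interfaces
# instead of localhost only, letting other machines chain off the UPS.
fix_nisip() {
    sed -i 's/^NISIP .*/NISIP 0.0.0.0/' "$1"
}

# On unRAID this would be re-run at every boot, e.g. from the flash
# drive's go script (path assumed):
#   fix_nisip /etc/apcupsd/apcupsd.conf
```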

 

No problems here with the cache drive - but I know that I pre-cleared it and formatted it to xfs on unRAID.

 

Thanks for all the hard work on this update!

