bobe99

Members
  • Posts: 8
  • Gender: Undisclosed

bobe99's Achievements

Noob (1/14)

Reputation: 0
  1. Is there any plan to update to avidemux 2.7.6? This container is now 2 revs back.
  2. Oops. Yeah, the array was running. I see that the GUI works fine in that regard when stopped. My bad; I should have known better (I'm a little stressed at the moment, if that's an excuse ;-). I've used the GUI to set xfs on a non-encrypted array and started the array ... same result.
  3. I went to the disk1 page and found the "File system type" field to be non-editable via the GUI. I edited /boot/configs/disk.cfg, setting diskFsType.1="xfs" (and .2), then rebooted and restarted the array. This time the main page "FS" column reported xfs on the 2 disks, but unfortunately the same mount error occurred.
  4. I stopped the array and performed xfs_repair -L on two of the drives. The output from xfs_repair was uneventful. I restarted the array and got the same error on those drives (on all 7 drives, for that matter). For grins, I rebooted and started the array again; same result.
  5. I first did the -n flavor, which looked pretty benign, so then I did the no-option check (e.g. xfs_repair /dev/sdc1). I have not tried the -L option, as I thought I'd ask here first before performing the "last resort" (per the xfs_repair help message); I'm not sure what the implications of -L are. (The order of these checks is sketched after this list.)
  6. I had a working unraid install on an i7-based server and attempted to migrate to a Xeon-based system (R720xd). I moved the existing drives and flash into the new server, created a new config based on the existing data layout and attempted to start the array. The array started, but all 7 data drives are reporting "Unmountable: Unsupported partition layout".

     I logged into the CLI and looked at each disk (some LUKS, most unencrypted). I was able to mount each partition by hand (e.g. mount /dev/sdc1 /mnt). I tried this with each disk and was able to successfully mount each drive, so it appears that all my data is intact (whoo whoo). After unmounting, I looked at each disk with sgdisk -v and each disk looked normal, e.g.:

       # sgdisk -v /dev/sdc
       No problems found. 2166478 free sectors (1.0 GiB) available in 2 segments,
       the largest of which is 2166448 (1.0 GiB) in size.

     After a little forum reading, I tried a "new config" from scratch with no drives defined, then rebooted. After logging back in, all drives showed up in the unassigned devices section. Each disk was properly identified and, in each case, reported the FS as "xfs". I was able to mount each drive using the UD "mount" button without issue. I then reinstated my original disk config, rebooted and started the array. Same mount error seen. (These per-disk checks are sketched after this list.)

     In looking at syslog.txt, the only abnormality I see is the error from the title:

       Sep 14 10:58:32 unraid emhttpd: /mnt/disk1 mount error: Unsupported partition layout
       Sep 14 10:58:32 unraid emhttpd: shcmd (502): umount /mnt/disk1
       Sep 14 10:58:32 unraid root: umount: /mnt/disk1: not mounted.
       Sep 14 10:58:32 unraid emhttpd: shcmd (502): exit status: 32
       Sep 14 10:58:32 unraid emhttpd: shcmd (503): rmdir /mnt/disk1
       Sep 14 10:58:32 unraid emhttpd: shcmd (504): mkdir -p /mnt/disk2
       < etc. for all 7 drives >

     This is really not a very helpful message. It would be nice if the entire failing mount command were logged; if it were, I might be able to glean some insight into the root cause of this error. I then tried looking for info on emhttpd (hoping to find source code so I could figure out the mount command used), but was unsuccessful.

     I should also say that xfs_repair is happy with each drive, e.g.:

       # xfs_repair /dev/sdc1
       Phase 1 - find and verify superblock...
       Phase 2 - using internal log
               - zero log...
               - scan filesystem freespace and inode maps...
               - found root inode chunk
       Phase 3 - for each AG...
               - scan and clear agi unlinked lists...
               - process known inodes and perform inode discovery...
               - agno = 0
               - agno = 1
               - agno = 2
               - agno = 3
               - process newly discovered inodes...
       Phase 4 - check for duplicate blocks...
               - setting up duplicate extent list...
               - check for inodes claiming duplicate blocks...
               - agno = 0
               - agno = 1
               - agno = 2
               - agno = 3
       Phase 5 - rebuild AG headers and trees...
               - reset superblock...
       Phase 6 - check inode connectivity...
               - resetting contents of realtime bitmap and summary inodes
               - traversing filesystem ...
               - traversal finished ...
               - moving disconnected inodes to lost+found ...
       Phase 7 - verify and correct link counts...
       done

     I'm hoping someone can tell me why unraid is so unhappy with these drives, maybe provide a "partition inspection command" to look closer? Maybe tell me what the actual mount command format would be? Remember, all disks were very happy in my old server. As these drives are all almost full of data, re-formatting is not an option without first copying the data from each old "unsupported partition layout" drive to a new, properly formatted disk. I could do this, but it would mean my server is non-functional for about a week, as I would have to serially copy each large 5400 RPM drive at less than 200 MB/s. Hoping someone can provide me the secret handshake for fixing the partition issue that unraid is complaining about. Thanks (and fingers crossed, someone can help)!

     unraid-diagnostics-20190914-1059.zip
  7. From the silence I take it people are generally not having this issue. Can anyone confirm this is supposed to work with this docker?
  8. Hi, I'm getting notifications that state I need to update the brute-force-settings app to v1.0.3. I went into apps -> security and clicked the update button. All that happened was an "updating" message (the state never changed). I refreshed the screen and the button reverted to "Update to 1.0.3", but it now also shows "You have 1 app update pending" in a banner across the top of the page. After waiting for an hour without the app updating, I restarted the nextcloud docker. After logging back in, the state was the same (the app still needs an update and the "pending" banner is still at the top). Is there some secret recipe to get app updates to work (akin to the nextcloud update procedure)? I somewhat remember trying to initially install some apps (way back when) and seeing the same behavior, so I think this is more a general problem with app install/update than something specific to this update. I did check a few logs and saw nothing (but I'm no expert - probably looking in the wrong places). I also checked the general file permissions by logging into the docker and found most files owned by the 'abc' user and group, except for a few files owned by 'root'; I believe this is normal. Any pointers would be most appreciated!
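
For anyone following along, the order of checks described in post 5 would look roughly like this. It is only a minimal sketch: the array should be stopped first, and /dev/sdc1 is just the example device named in that post, not necessarily yours.

  xfs_repair -n /dev/sdc1    # read-only check: reports problems, changes nothing
  xfs_repair /dev/sdc1       # normal repair pass (the "no-option check" from the post)
  # xfs_repair -L /dev/sdc1  # last resort per the xfs_repair help text: zeroes the
                             # metadata log and can discard recent metadata changes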
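
Likewise, the per-disk checks described in post 6 can be sketched as a small loop. The device list below is a placeholder (only /dev/sdc is named in the post), and it assumes unencrypted members; the LUKS disks mentioned there would first need to be unlocked before they could be mounted this way.

  for dev in /dev/sdc /dev/sdd /dev/sde; do   # placeholder list; use the actual array members
      sgdisk -v "$dev"                        # verify the GPT partition table
      mount "${dev}1" /mnt                    # manual mount of partition 1, as in the post
      ls /mnt | head                          # quick spot-check that the data is visible
      umount /mnt                             # unmount before the next disk
  done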