dcoens
Posts posted by dcoens
-
This is my tail log. I was copying my data to my USB drive as a backup. I'm getting this error when trying to bring it online. Is there an XFS repair command or something? I have no idea about a duplicate UUID.
Sep 6 18:21:50 COENSSERVER unassigned.devices: Partition 'WD Elements_25A3' cannot be mounted.
Sep 6 18:22:03 COENSSERVER kernel: usb 2-1.4: USB disconnect, device number 14
Sep 6 18:22:22 COENSSERVER kernel: usb 2-1.6: new high-speed USB device number 15 using ehci-pci
Sep 6 18:22:22 COENSSERVER kernel: usb-storage 2-1.6:1.0: USB Mass Storage device detected
Sep 6 18:22:22 COENSSERVER kernel: scsi host9: usb-storage 2-1.6:1.0
Sep 6 18:22:23 COENSSERVER kernel: scsi 9:0:0:0: Direct-Access WD Elements 25A3 1030 PQ: 0 ANSI: 6
Sep 6 18:22:23 COENSSERVER kernel: sd 9:0:0:0: Attached scsi generic sg12 type 0
Sep 6 18:22:23 COENSSERVER kernel: sd 9:0:0:0: [sdm] Spinning up disk...
Sep 6 18:22:40 COENSSERVER kernel: .................ready
Sep 6 18:22:40 COENSSERVER kernel: sd 9:0:0:0: [sdm] Very big device. Trying to use READ CAPACITY(16).
Sep 6 18:22:40 COENSSERVER kernel: sd 9:0:0:0: [sdm] 27344699392 512-byte logical blocks: (14.0 TB/12.7 TiB)
Sep 6 18:22:40 COENSSERVER kernel: sd 9:0:0:0: [sdm] 4096-byte physical blocks
Sep 6 18:22:40 COENSSERVER kernel: sd 9:0:0:0: [sdm] Write Protect is off
Sep 6 18:22:40 COENSSERVER kernel: sd 9:0:0:0: [sdm] Mode Sense: 47 00 10 08
Sep 6 18:22:40 COENSSERVER kernel: sd 9:0:0:0: [sdm] No Caching mode page found
Sep 6 18:22:40 COENSSERVER kernel: sd 9:0:0:0: [sdm] Assuming drive cache: write through
Sep 6 18:22:41 COENSSERVER kernel: sdm: sdm1
Sep 6 18:22:41 COENSSERVER kernel: sd 9:0:0:0: [sdm] Attached SCSI disk
Sep 6 18:22:41 COENSSERVER unassigned.devices: Adding disk '/dev/sdm1'...
Sep 6 18:22:41 COENSSERVER unassigned.devices: Mount drive command: /sbin/mount -t 'xfs' -o rw,noatime,nodiratime '/dev/sdm1' '/mnt/disks/WD_Elements_25A3'
Sep 6 18:22:41 COENSSERVER kernel: XFS (sdm1): Filesystem has duplicate UUID 5ecba8e2-00f3-4555-b4e5-4af0c877c5be - can't mount
Sep 6 18:22:41 COENSSERVER unassigned.devices: Mount of '/dev/sdm1' failed: 'mount: /mnt/disks/WD_Elements_25A3: wrong fs type, bad option, bad superblock on /dev/sdm1, missing codepage or helper program, or other error. '
Sep 6 18:22:41 COENSSERVER unassigned.devices: Partition 'WD Elements_25A3' cannot be mounted.
Sep 6 18:22:55 COENSSERVER unassigned.devices: Warning: Can't get rotational setting of 'sdo'. -
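For reference, a duplicate UUID usually means the backup drive carries a block-level copy of a filesystem that is already mounted in the array, so both copies have the same XFS UUID. Two possible ways out (device name taken from the log above; double-check it before running anything):

```shell
# One-off workaround: mount while skipping the UUID check
mount -t xfs -o nouuid /dev/sdm1 /mnt/disks/WD_Elements_25A3

# Permanent fix: give the backup copy a new random UUID
# (the filesystem must be unmounted first)
xfs_admin -U generate /dev/sdm1
```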
The latest pull where I can get version 2 of papermerge with the GUI to work is linuxserver/papermerge:v2.0.0rc45-ls29. I just started with the latest and worked backwards to one that worked.
linuxserver has a comment after this release that may be causing our issue with latest, but I don't know.
As soon as I hit the "publish release" button, an automatic github action will be triggered to build and publish the docker image for this release 🎉
Besides, I also changed the docker image a little bit, so that papermerge.conf.py and production.py (i.e. settings) are symlinks to /opt/etc/papermerge.conf.py and /opt/etc/production.py, which will make it easier to map those files from your local filesystem. More on this in the following screencasts :) -
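Until the cause is found, a workaround is to pin the container to that last known-good tag instead of latest, e.g.:

```shell
# Pull the specific release instead of :latest (tag from the post above)
docker pull linuxserver/papermerge:v2.0.0rc45-ls29
```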
Thanks. It happened again. It lasted for a few days... The first errors seem to show up in the log as XFS I/O errors in sectors xxxxx, then turn into BTRFS errors...
None of my shares are accessible during this stage either. I didn't think the cache drive would cause loss of access to a share... especially if I'm not using the cache for that particular share.
Thanks for your help.
-
error.bmp
My unraid server became unresponsive share-wise and I see this in the log. My cache drive is formatted with XFS. I've rebooted and everything is fine as far as I can tell. Is something happening with my docker.img file?
Thanks
-
On 11/15/2020 at 2:19 PM, chrisgo said:
I have found a solution for this problem:
Set all volume mountings for this docker to the appdata share. Then just move the media folder to another share which has cache prefer and change it in the config. The docker needs this share also mapped. Now the media is on the array and the discs spin down correctly.
I have done this for both my /media and /queue. It no longer spins up the drive even if I process files in via the /queue. I'm therefore assuming that the papermerge.db file that remains in /mnt/cache/appdata/papermerge is what was keeping the drives spun up. Now, how do I keep the database (papermerge.db) and the current /media files in sync? Do you depend on the "CA Appdata Backup/Restore v2" plugin to keep this up to date periodically?
-
7 hours ago, BRiT said:
The answer to this is "From your backup(s)".
unraid does not remove the need for backups. If you value your data, have a backup plan to follow.
I agree... That is the BEST answer... I guess I'm living on the wild side... also, being tired and using Midnight Commander, I messed up the F6 vs F8 function keys - LoL.
-
7 hours ago, Squid said:
Its explorer is read only. It saves the files elsewhere AFAIK; if it doesn't, then yeah, you will have to rebuild parity
Thank you
-
29 minutes ago, Squid said:
You would need to install tcl to get that script to work. Not sure if it's available in NerdPack or DevPack. Personally, I'd go with UFS Explorer, although it does cost $
Thanks - UFS Explorer seems to be the answer. What do I do: shut down the server, pull the drive, put it into my Windows machine, run UFS Explorer and undelete, then put it back into the unraid server and start it back up, and everything would be fine? Wouldn't parity be messed up?
-
I just ended up doing the same thing and deleted a folder by mistake while using MC to clean up some other files. I saw this, but it's not installed on the unraid box...
https://github.com/ianka/xfs_undelete
I'm also on 6.8.3... any advice?
-
15 hours ago, saarg said:
Have you installed the dvb plugin and downloaded a build?
Thanks. I didn't know about the linuxserver.io DVB plugin. I installed it and tried a couple of his builds. Even though the DVB build was installed, it showed that no TV card was installed. I then checked out the DVB site and my ASUS card is not compatible (it isn't directly listed, although it belongs to an ASUS line of cards)...
I also have an HDHomeRun network tuner, so I'll use that instead.
Thanks.
-
This is the first time trying to set up tvheadend, and I cannot see my tuner card in the TV adapters section. I've tried adding --device /dev/dvb, but then the container will not start.
On my system devices list, it does show up...
[1745:2100]03:00.0 Multimedia controller: ViXS Systems, Inc. XCode 2100 Series
What am I doing wrong? Or is this card not compatible...
-
On 4/8/2020 at 3:26 AM, lewispm said:
I did this and the document_consumer would run, but the webserver wasn't running. There was an error in the log about /etc/passwd being locked, not sure if that was the problem.
I switched the two lines in the entry.sh (listing the webserver first, then the document_consumer second, as below) and it works now.
#! /bin/bash
/sbin/docker-entrypoint.sh runserver 0.0.0.0:8000 --insecure --noreload &
/sbin/docker-entrypoint.sh document_consumer &
wait
And I also had to make the file executable (chmod +x).
I was getting frustrated trying to get it to run as one docker... until I read this post. Once I reversed the order, chmod +x'd the entry.sh file, and made it a single line as below, stopping and starting the docker became stable and worked every time. I was getting mixed results until I made it a one-liner.
#! /bin/bash
/sbin/docker-entrypoint.sh runserver 0.0.0.0:8000 --insecure --noreload & /sbin/docker-entrypoint.sh document_consumer & wait
Now everything works as a single docker. Thanks lewispm, Bling, and TOa for making paperless a great tool. Perhaps this script or an equivalent can be embedded into the next docker update.
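In case anyone wonders why the '&'/wait combination matters here, a minimal stand-alone sketch of the pattern (demo filenames are made up, nothing papermerge-specific):

```shell
#!/bin/bash
# Minimal illustration (not the real entry.sh) of the pattern: two services
# are launched in the background with '&', and 'wait' blocks until both exit,
# so the script (the container's main process) stays alive while they run.
tmp=$(mktemp -d)
(sleep 0.2; echo "document_consumer done" > "$tmp/consumer") &
(sleep 0.1; echo "webserver done" > "$tmp/web") &
wait   # returns only after both background jobs have exited
echo "both jobs finished"
```

Without the trailing 'wait', the script would exit immediately and docker would consider the container stopped, which matches the flaky behavior described above.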
-
ok. will do - thanks for the link
-
I know this is an older thread, but my docker.img is also 50GB and I wanted to shrink it, but I get an error. I tried it both with the docker mounted and unmounted. My docker uses only 6GB and I wanted to shrink it to 10GB. Is there a new way of doing this, or any suggestions? Thanks in advance.
_________
Label: none uuid: f49bb50d-4601-4f39-8969-7fe3ac318b0a
Total devices 1 FS bytes used 4.45GiB
devid 1 size 50.00GiB used 6.07GiB path /dev/loop2
_________
btrfs filesystem resize 10GB /mnt/cache/docker.img
ERROR: resize works on mounted filesystems and accepts only
directories as argument. Passing file containing a btrfs image
would resize the underlying filesystem instead of the image. -
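Not definitive, but the error text itself hints at the fix: resize targets the mountpoint of the mounted image, not the .img file. A sketch under the assumption that docker.img is loop-mounted at /var/lib/docker (stock unraid); many people instead just delete docker.img and let unraid recreate it at the new size from Settings > Docker, which is simpler and safer:

```shell
# 1) Shrink the filesystem inside the mounted image (note: 10G, not 10GB)
btrfs filesystem resize 10G /var/lib/docker

# 2) Stop the Docker service so the image is unmounted, then shrink the
#    backing file to match the new filesystem size
truncate -s 10G /mnt/cache/docker.img
```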
I am absolutely interested in this. I tried to get a fully working copy, but could not. I also think that this could serve as the primary tftp/pxe server for other bootable images
-
Thanks... Boy, I was way off base on what I thought this was... I'll take a look at MediaBrowser3.
-
I was thinking of using this instead of plexmediaserver, as I thought XBMC has greater options for serving online streaming videos.
I have a headless server and a UPnP box connected to the TV.
-
Running Docker Koma
I'm lost on how to set this up. Any help would be appreciated.
myip:8080 loads the XBMC interface, and if I click on profiles / Master user, I get "connection to server lost".
myip:3306 downloads a file and there is no interface. How does MariaDB work with XBMC?
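For anyone else stuck here: port 3306 speaks the MySQL wire protocol, not HTTP, which is why the browser just downloads a file instead of showing a page. XBMC is pointed at MariaDB through an advancedsettings.xml on each client; a minimal sketch, with placeholder host and credentials:

```xml
<advancedsettings>
  <videodatabase>
    <type>mysql</type>
    <host>myip</host>
    <port>3306</port>
    <user>xbmc</user>
    <pass>xbmc</pass>
  </videodatabase>
</advancedsettings>
```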
-
I'm not an RFS hater, because it's really been proven a great file system and recovery has been outstanding, to say the least.
...
I personally have switched over to XFS all the way for these reasons, although I'm not suggesting we simply abandon RFS immediately.
Just curious, but why isn't anyone talking about ext4?
Although technical and a little old, this was a good watch.
-
I'm not an RFS hater, because it's really been proven a great file system and recovery has been outstanding, to say the least.
However, this isn't the first time this type of thing has happened with RFS.
Remember this? That was also a code change to make things better... streamline... and it was causing corruption. Tom had to do an intermediate fix until the RFS developers fixed it in the kernel.
http://lime-technology.com/forum/index.php?topic=28231.0
I do think that jonp's "hunch" (see below) may be coming sooner than we think. I don't think that RFS gets the same attention as other file systems in the linux kernel -- based on the last two serious instances (that I know of).
http://lime-technology.com/forum/index.php?topic=34783.0
"reiserfs vs. xfs for array devices
I need to do more and more of this testing, but generally speaking, I think it's fairly clear that xfs vs. reiserfs is more about making a chess move now that will play out in the longer term to our advantage. Quite simply: we believe that if we fast-forward the clock, sooner or later there will be a point in time where xfs is going to have advantages for users over reiserfs. Call it a hunch, an educated guess, a prediction... whatever. We just really think reiserfs is on its way out the door. The bottom line is that for array devices, I would suggest migrating away from reiserfs as users find the opportunity to do so. It's not a rush... It's not going to break... It's been very stable... That said, think chess moves... In addition, ALL cache pool devices should be BTRFS for now in my opinion, unless you're never planning to expand past a single unprotected cache disk. If you don't have a secondary cache device yet, you can straggle along, but if you want to use a cache pool, you will need two btrfs disk devices assigned."
Here is some polling data that bjp999 put together. I think this is saying something.
XFS vs RFS vs BTRFS polling.
http://lime-technology.com/forum/index.php?topic=34776.0
I personally have switched over to XFS all the way for these reasons, although I'm not suggesting we simply abandon RFS immediately.
-
Been doing some reading and learning on the FOG project (Technically a cloning solution). This could be developed into a docker solution and it would be a PXE server for many PXE served systems.
-
I will create a badass PXE Server that uses Tiny Core Linux and a HTTP Server (faster than tftpboot) for Menus and isos / images. I will work on it tomorrow and post it for all of you to test.
Perhaps gfjardim wouldn't mind creating a slick WebGUI for adding / removing images and configuring the PXE Menu. I can get it close but he will need to take it the last mile.
NOTE: I am not going to support this long term so one of you will have to take it and own it.
Looking forward for your new PXE Server, I'm using the tftp one based on your instructions posted several months ago and I'm very happy with it.
Thank you!
I am also currently using the PXE Server and boot clonezilla via PXE, along with a couple of other apps.
I came across this, and it would be an excellent PXE server and HD cloning utility. Is there a docker expert who could roll this into a docker format?
-
No more off topic posts please
Okay
Has anyone tried making a docker for makeMKV? I know there is a linux version, but it looks like it needs to be compiled, and they usually have a new version about once a month, so it will probably need 'constant' maintenance. I'm not sure if building one will be easy or good, but I'd really like to put the work of creating my mkv files onto my unRAID box, and not have to keep doing it on my laptop over wifi, which takes forever.
maybe this helps one get started...
http://www.makemkv.com/forum2/viewtopic.php?f=3&t=2047
I have latest makemkvcon installed on unraid, still playing with it........ But it looks good.
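For anyone else experimenting, makemkvcon can be driven entirely from the shell; a hedged example (the drive index and output path are placeholders, and the output directory must already exist):

```shell
# Rip all titles from the first optical drive (disc:0) into MKV files
makemkvcon mkv disc:0 all /mnt/user/media/makemkv
```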
Has there been progress on the makemkv docker or otherwise?
-
An interesting note in 6b8: If you try xfs_check (the primary tool for checking XFS disks), it returns with "xfs_check is deprecated and scheduled for removal in June 2014. Please use xfs_repair -n instead."
Something is wrong: code definitely uses 'xfs_repair', never used 'xfs_check'. Where are you seeing this message?
Looks like xfs_check is on its way out the door, replaced with xfs_repair -n.
If I did:
xfs_check /dev/sdg1
it returns:
xfs_check is deprecated and scheduled for removal in June 2014.
Please use xfs_repair -n <dev> instead.
xfs_repair -n /dev/sdg1 works just fine.
However, it's still in the man pages ("xfs_check and xfs_repair can be used to cross check each other"):
http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide//tmp/en-US/html/ch11s02.html
found this:
https://bugzilla.redhat.com/show_bug.cgi?id=1029458
That's why 'Check Filesytem' button for xfs uses xfs_repair. Why are you using 'xfs_check' from the command line?
I defaulted to old school:
I didn't realize the "check filesystem" was in the GUI... I see it now...
Summary of changes from 6.0-beta6 to 6.0-beta7
"- webGui: add "check filesystem" functions on DiskInfo page."
USB unassigned drives XFS error
in General Support
Posted
Ok... I used the xfs_admin -U generate /dev/sdm1 command. It seemed to fix it. I'm not sure why this would have happened in the first place.