Posts posted by MisterWolfe
-
Good morning,
I have just installed an SSD in an existing unraid server (version 6.12.4). I'm attempting to copy the docker.img and appdata files from the array to the cache drive. A short way into the copy process I get an error saying the drive is read only and can't be written to. I have tried copying with MC and with a simple copy command from an ssh window, and I get the same behaviour.
I want to be clear that the cache drive is initially readable and writable before I start the copy process. It becomes read only after I start the copy process and about 10GB of data has been copied. I did notice that the SSD temperature went up to 55 degrees during the last copy.
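A tiny write probe is how I've been checking when the mount flips to read only (the function and mount point are just illustrative):

```shell
# probe whether a mount point is still writable; when the filesystem hits an
# I/O error the kernel typically remounts it read-only and the touch fails
probe() {
  if touch "$1/.rwtest" 2>/dev/null; then
    rm -f "$1/.rwtest"
    echo "writable"
  else
    echo "read-only"
  fi
}

probe /mnt/cache
```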
The ssd is a Samsung 870 EVO.
I initially formatted the drive as BTRFS when I first got the error. I reformatted as XFS and I still get the error. The drive is formatted XFS currently.
The drive is connected via a PCIe expansion card (Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)).
I've attached the diagnostics file.
Thanks for looking!
-
Thank you for replying. It was indeed the seek error rate that was concerning me. I appreciate your time.
-
-
I'm on 6.9.1 and have an LSI 9200-8i controller. No issues at all with my ironwolf drives, thankfully. I wonder if the issue is card version specific.
-
Good morning,
I was following a video tutorial on enabling SSL on the web gui, and had finished setting up my router to allow DNS rebinding for unraid.net. After I did that I refreshed the web gui and saw that the certificate showed as provisioned (I did not provision it manually). I wasn't being redirected to the unraid.net web url. After trying to navigate to the dashboard I was kicked out of the web gui, and I'm now unable to log in with my admin creds via the server's IP address, either on the gui or via ssh.
Is there any way I can reset these changes?
I've pulled the flash drive, and have a tower diagnostics file that seems to have been auto-created around the time things stopped working.
I'd appreciate any help you may have to offer.
Thanks!
edit: If I set ssl to yes or no in the ident.cfg file, I can access the gui. If I change it to auto, I cannot. I've set it to yes while I test.
edit2: d'oh. So YES works in ssl settings because I'm already using a letsencrypt certificate that I pull from the letsencrypt docker. I use a script to import that into unraid so I can use the cert for plex. The fact that I cannot access unraid on the LAN using a subdomain of my primary domain means that I should figure out the port and dnsmasq settings in my router.
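For anyone searching later, the change amounts to a one-line edit of ident.cfg on the flash drive; shown here on a sample line since I edited mine offline (the key name is taken from my own config, so verify against yours before relying on it):

```shell
# flip the web GUI SSL mode from auto to yes (demonstrated on a sample line
# rather than the real /boot/config/ident.cfg)
echo 'USE_SSL="auto"' | sed 's/^USE_SSL="auto"/USE_SSL="yes"/'
```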
-
Have you thought of using a VPN between the two servers? Then you could potentially mount shares via unassigned devices and make cron jobs that you could use with rsync.
The main downside I can see here might be transfer speed over the VPN, which partly depends on the CPUs in both your machines.
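Once the remote share is mounted via Unassigned Devices, the job could be as simple as a single crontab line (paths and schedule are only examples):

```
# run nightly at 03:00; /mnt/disks/remote_server is the UD mount point
0 3 * * * rsync -a --delete /mnt/user/backups/ /mnt/disks/remote_server/backups/
```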
-
I've been using unraid for 9 years. I like how responsive the devs are about adding new features. I also really like how user friendly it is.
-
I had this issue some time ago. I bypassed the problem by writing a script that checks for the existence of a file on the unassigned disk path before allowing the copy to take place. Not the most elegant solution, but it works.
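Mine boils down to roughly this (the marker file and paths are invented for the example; the real script wraps an rsync):

```shell
# only copy if the marker file proves the unassigned disk is really mounted;
# otherwise we'd silently fill the empty mount point instead of the disk
safe_copy() {
  local marker="$1" src="$2" dst="$3"
  if [ -f "$marker" ]; then
    cp -a "$src" "$dst"
    echo "copied"
  else
    echo "disk not mounted, skipping"
  fi
}
```

Drop an empty marker file (e.g. `.mounted`) on the unassigned disk itself, so it only exists when the disk is actually mounted.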
-
Edit: Upgraded from 6.6.1 to 6.6.2 and the plugin works properly again.
Having some issues with the NerdPack plugin. When I go into Settings and choose NerdPack, none of the packages show up. I've tried removing the plugin, deleting the nerdpack directory, rebooting, and reinstalling the plugin, with no change. Is this an issue specific to my server, or have others experienced it too?
I'm on v6.6.1.
I'm including a screenshot and diagnostics: tower-diagnostics-20181012-1254.zip
Thanks
-
Thank you for the information. It's greatly appreciated.
-
Sorry to resurrect an old thread.
I have been using the above method for cron jobs successfully for a number of years. On Feb 23, the cron jobs I've set up in the /boot/config/plugins/mycron folder stopped running. They live in a file called bkp.cron, and are all rsyncs that I use to back up primary data to a secondary location.
The cron jobs stopped working around the time I moved from 6.4 to 6.5. I haven't downgraded to test if an earlier version works yet. It's a production environment and I don't want to take it offline unless I have to.
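For reference, bkp.cron is just plain crontab lines along these lines (paths are examples, not my real shares):

```
# /boot/config/plugins/mycron/bkp.cron
0 2 * * * rsync -a /mnt/user/photos/ /mnt/disks/backup/photos/
30 2 * * * rsync -a /mnt/user/documents/ /mnt/disks/backup/documents/
```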
Upgrading to the latest version of Unraid, 6.5.1-rc1, hasn't resolved the issue.
Has anyone else experienced this?
I've attached diags.
-
4 hours ago, Hogwind said:
I just added port 2203 and now it works. By the looks of it you have to have two ports mapped, 2202 and 2203.
I noticed this in the syslog also:
Fix Common Problems: Error: Docker Application ubooquity, Container Port 2203 not found or changed on installed application
https://hub.docker.com/r/linuxserver/ubooquity/ tells you:
-p 2202 - the library port
-p 2203 - the admin port
That did it, thanks!
-
I notice that the docker has been updated to Ubooquity version 2.1. It looks like the update made some internal changes that left the existing databases inaccessible. The databases are repopulating from the path set in the docker setup, rather than from the path set in the ubooquity admin area. The admin page is also no longer accessible via the normal url method.
Is there a new method I should be using to access the admin page, and if not, how can I revert to v1.10?
Thanks!
-
Good to know. Thanks.
-
I've had some trouble getting mylar to update. The web gui tells me there is a newer version available with 33 new commits. I hit the update option, the web gui goes to an updating page, and then shows me that none of the updates have been applied.
Log file gives me this error:
2016-09-22 11:04:37 WARNING Could not get the latest commit from github
2016-09-22 11:04:36 INFO Retrieving latest version information from github
I've got it set up to pull from the development branch.
I've tried deleting the docker and re-adding it. I've tried creating a new appdata directory, with no change.
Suggestions?
-
It was a reply to me. I discovered my error and deleted my post and he must have seen it before I removed it.
Excellent script. Works as expected when I try to run it from the proper spot.
That's expected, since the beta plugin is not compatible with Unassigned Devices. Please go to Settings>Preclear Disk Beta
gfjardim ... was that reply for me?
If so, I do not understand what you are trying to tell me. I do not have a Preclear Disk Beta tab within settings. FYI, I downloaded the 'amended' version by searching for 'preclear' on the CA apps tab. It downloaded the new version (dated today) and the install script confirmed this, but the plugin it installed still shows old timestamps and version number, and hasn't got 'beta' anywhere to be seen. You know much better than I do how these things work. I'm just happy to confirm that it does work!
-
Thanks. I just browsed the man page for sfdisk. -R has been deprecated since 2.26:
"Since version 2.26 sfdisk no longer provides the -R or --re-read option to force the kernel to reread the partition table. Use blockdev --rereadpt instead"
edit: I modified the two instances of sfdisk -R in the preclear script with blockdev --rereadpt and the preclear is now running. I'll update as to the status when done in case there are any gotchas.
edit 2: I knew it wouldn't be that easy! It starts scanning and throws an error. I'll wait for a script update. I don't have spare hardware to run preclear on.
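For reference, the substitution I tried was this (shown on a sample line rather than the real script, since the variable name in the script may differ):

```shell
# rewrite the removed "sfdisk -R" call to its modern equivalent
echo 'sfdisk -R $theDisk' | sed 's/sfdisk -R/blockdev --rereadpt/'
```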
-
Good evening,
I'm having an issue running preclear on 2 new 4TB Seagate NAS drives.
The drives show when I do a preclear_disk.sh -l. When I run preclear_disk.sh /dev/sdx (x=drive slot) I get a message that the drive is busy.
I thought it might be a power issue so I moved the connectors to different power cables. It's a 750W supply, which should be good for the 10-11 or so drives I have. No effect.
Using 6.2 beta 18.
I've included the output of the preclear below. Syslog attached.
Thanks!
root@Tower:/boot# preclear_disk.sh -l
====================================1.15
Disks not assigned to the unRAID array (potential candidates for clearing)
========================================
/dev/sdg = ata-ST4000VN000-1H4168_Z305RYB1
/dev/sdf = ata-ST4000VN000-1H4168_Z305RYKG
root@Tower:/boot# preclear_disk.sh /dev/sdf
sfdisk: invalid option -- 'R'
Usage:
 sfdisk [options] <dev> [[-N] <part>]
 sfdisk [options] <command>
Display or manipulate a disk partition table.
[sfdisk usage text trimmed]
Sorry: Device /dev/sdf is busy.: 1
-
Inside SQL, try changing the owncloud user from owncloud@localhost to owncloud@%
I was getting the same type of error you are, and this solved my issue. Using % allows connections from any host.
It's no good setting privileges on localhost as to all intents and purposes it's not running on localhost as it's in a separate docker container. localhost would be suitable if they were both running in the same container.
I changed from localhost to IP. I see the newly granted permissions. Still no luck.
I can connect to the database fine using "mysql -u owncloud -p -t owncloud". It prompts for the password and I can log in without any issues.
I have to admit, though, that once (only once) I got past this stage and hit the 'binlog' issue. I fixed my.cnf and went back to redo the step, and never got beyond the original database connection issue again. I restarted both the DB and OC a few times, though.
-
Thanks, your reply was helpful.
I had to set the user given access to the owncloud database to owncloud@% instead of owncloud@localhost or owncloud@IPADDRESS.
Two things, possibly:
Try the IP address instead of localhost in the owncloud login.
And depending on the error after that, you could replace 'localhost' during the DB/user creation process with '%' instead.
- NinthWalker
-
Using delugeVPN docker
1) Unraid 6.1.8
2) Intel Q9550 processor
3) No
4)
Module Size Used by
xt_nat 1665 10
veth 4401 0
ipt_MASQUERADE 981 11
nf_nat_masquerade_ipv4 1649 1 ipt_MASQUERADE
iptable_nat 1615 1
nf_conntrack_ipv4 5626 2
nf_nat_ipv4 4111 1 iptable_nat
iptable_filter 1280 1
ip_tables 9310 2 iptable_filter,iptable_nat
nf_nat 9789 3 nf_nat_ipv4,xt_nat,nf_nat_masquerade_ipv4
md_mod 30680 6
tun 16465 0
hwmon_vid 2028 0
coretemp 5044 0
r8169 57464 0
mii 3339 1 r8169
ata_piix 24047 6
i2c_i801 8917 0
skge 27834 0
pata_marvell 2843 0
ahci 24107 0
sata_sil 7159 2
libahci 18687 1 ahci
asus_atk0110 6874 0
acpi_cpufreq 6074 0
5) Attached supervisord.log and supervisord.log.1
6) I used deluge under 6.1.7 and it worked.
After looking at supervisord.1.log, which was generated when I installed delugevpn, I saw that the openvpn folder couldn't be accessed. I changed the owner to nobody.users (from root.root) and restarted. supervisord.log is the file generated after restarting.
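For reference, the ownership change was just this (the path is an example; nobody:users is what my containers run as):

```shell
# hand the folder (and everything in it) to the nobody:users account
# that the container runs under
fix_owner() {
  chown -R nobody:users "$1"
}
```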
Edit: Still no webUI access on the LAN
-
Long story short, I had an existing owncloud docker installed and working for the last year or so. Today, I decided to wipe my docker.img file and start fresh.
Using mariadb as a backend.
I created the user and db in mariadb with MySQL Workbench as follows:
CREATE USER 'owncloud'@'localhost' IDENTIFIED BY 'ThePasswordYouWant';
CREATE DATABASE IF NOT EXISTS owncloud;
GRANT ALL PRIVILEGES ON owncloud.* TO 'owncloud'@'localhost' IDENTIFIED BY 'ThePasswordYouWant';
It returns successful and I can see the db and user.
Then I installed owncloud. Owncloud installed fine and I can get to the web page at https://localhost:8000
data folder mapped from /var/www/owncloud/data/ in the container to /mnt/user/OWNCLOUD/data on the array
I select storage and database and select the mysql/mariadb option and enter the database id and credentials I used when setting it up. I enter the admin account info that I want
I get this error:
Failed to Connect to the database... SQLSTATE[HY000] [2002] No such file or directory
I read on the owncloud site that this could be a problem with permissions on the owncloud.db file in the data directory (/var/www/owncloud/data/), so I chmod'ed it to 777 and then 755.
Same error.
I've tried creating and using different databases and users with the same result.
The owncloud.log file in the data directory is empty. I attached an image of the web page after the connection fails.
Does anyone have any suggestions?
Thank you!
Posted in "New Cache drive becomes read only during copy process" (General Support):
Switching to onboard SATA controller resolved the issue. Thanks for replying.