nanohits
Posts: 27
Posts posted by nanohits
-
I have the same issue. I have my parity check set to the last day of the month. Today I upgraded to 6.12.3, rebooted after the update, and woke up this morning to find that a parity check had started.
-
1 minute ago, theryno said:
Created a tutorial HERE for anyone interested in how to get Mastodon working on Unraid.
Thanks. Great stuff. I will give it a try and let you know if I get stuck somewhere.
-
1 minute ago, trurl said:
ADD mean assign disks to new slots in the array. You want to REPLACE disks, which means you assign them to the same slot as the disk you are replacing.
Thank you so much. That helps.
-
37 minutes ago, apandey said:
Hard to say without understanding your array. Do you have parity? Single or Double?
If you want to avoid lots of questions, post diagnostics so it's clear what your current setup is and what you want to do
I would always pre clear, to avoid running into a disk failure while adding it to the array. If directly adding, I will be adding them one-by-one and wait for clear to happen even if I have dual parity
So, single parity: one 14TB drive.
My current setup has 8 drives: 6 are 14TB and 2 are 4TB. One of the 14TB drives is the parity drive; the rest are data drives in the array. I have started seeing some errors on one of the 4TB drives, so I decided to replace both 4TB drives with 14TB ones. What I want to do is swap out those 2 drives and add the new ones to the array.
-
3 minutes ago, trurl said:
When you format a clear disk, it is no longer clear. If you don't already have valid parity you don't need a clear disk. If you add a formatted disk to an array that already has valid parity, Unraid will clear it so parity remains valid.
Format means "write an empty filesystem (of some specific type) to this disk". That is what it has always meant in every OS you have ever used.
When replacing a disk for rebuild, formatting the replacement disk is completely pointless. The replacement disk is going to be completely overwritten by rebuild. Doesn't matter at all what is already on the disk, freshly formatted empty filesystem, clear, full of pron, whatever.
The main thing I worry about is someone will format a disk in the array. Many have made that mistake thinking they can rebuild. Even if the array disk is missing/disabled/emulated/unmountable. When you format a disk in the parity array, Unraid treats that write operation exactly as it does any other, by updating parity so it remains valid. So after formatting a disk in the parity array, the only thing that can be rebuilt is an empty filesystem.
Fair enough. So what should I do in this case? Just remove the 2 old 4TB drives, stick in the 2 new 14TB drives, and add them to the array?
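trurl's point above (that any write to an array disk, a format included, updates parity, so all that can be rebuilt afterwards is an empty filesystem) can be illustrated with a toy sketch. This is not Unraid's actual code, just single-parity XOR over made-up byte strings:

```python
# Toy model: single parity is the XOR of all data disks at each byte
# position. Any write -- including a format -- keeps parity in sync,
# so a rebuild returns whatever was last written, not the old data.
from functools import reduce

def parity_of(disks):
    """XOR the corresponding bytes of every disk."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*disks))

def rebuild(missing_index, disks, parity):
    """Recover one missing disk from the remaining disks plus parity."""
    others = [d for i, d in enumerate(disks) if i != missing_index]
    return parity_of(others + [parity])

data = [b"\x01\x02", b"\x04\x08", b"\xff\x00"]
p = parity_of(data)
assert rebuild(0, data, p) == b"\x01\x02"  # original data is recoverable

# Now "format" disk 0 (write empty-filesystem content). Parity is
# updated to stay valid -- and rebuild now returns the empty content.
data[0] = b"\x00\x00"
p = parity_of(data)
assert rebuild(0, data, p) == b"\x00\x00"
```

The same logic is why a replacement disk's prior contents are irrelevant to a rebuild: the rebuild overwrites every byte from the other disks plus parity.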
-
Just now, trurl said:
Please don't use the word format when talking about replacing disks. Don't even think it.
Do you know what format means?
I have always done this; it is what I did when I first installed all the drives in Unraid: pre-clear, then format, then add to the array, with no issues. I am not talking about formatting the drives I already have in Unraid.
-
Just now, trurl said:
I assume you want to keep the data from the older disks by rebuilding them onto the new disks?
Yeah preferably.
-
2 minutes ago, trurl said:
Your post is completely unreadable in light mode. Please don't color background or fonts. Try again.
I have never colored anything or used different fonts. This is how it looks for me:
I have edited it in non dark mode. Should be ok now?
-
I am getting 2 new 14TB drives and will need to decommission 2 older 4TB drives that are starting to show errors in parity checks, then install the new drives. What is the process to decommission them? Do I just remove the old 4TB drives, install the new ones, pre-clear and format them, and start the array?
My parity drive is already a 14TB drive.
-
Hi guys,
I am trying to run the Nextcloud AIO container. I am on CGNAT, and I have it successfully running over the CF Argo tunnel. However, when I get to the setup step that asks for a domain, the check fails. How do I disable the domain check in the Unraid Nextcloud AIO container?
Any ideas? Tks
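For anyone landing here with the same question: the Nextcloud AIO project documents a `SKIP_DOMAIN_VALIDATION` environment variable for setups (such as CGNAT behind a Cloudflare tunnel) where the built-in domain check cannot succeed. In the Unraid template this would be added as an extra environment variable on the AIO master container; the equivalent docker run fragment looks roughly like this (a sketch, not a complete command — the other AIO options are omitted):

```shell
docker run -d \
  -e SKIP_DOMAIN_VALIDATION=true \
  nextcloud/all-in-one:latest
```

Note that skipping validation means you are responsible for making sure the domain actually resolves to your tunnel, or the instance will not be reachable.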
-
On 11/29/2022 at 8:45 AM, Sharpie said:
Been bashing my head trying to get the lscr.io/linuxserver/mastodon container on CA to run, but I feel like I am missing something. Anyone know of a guide or a YouTube video on how to set this up? Or is this going to be like the Matrix server all over again?
Thanks in advance, and I hope this post helps future travelers.
Same here, and I haven't found anything so far in the way of a tutorial or a video.
-
Will it show that there is a new version on the dashboard? It isn't showing up yet. I guess it will take some time to appear there.
-
I just installed Filebrowser, but there is an issue with folders.
If I am an admin with my own folder and I add a user, they can see everything in my folder. How do we prevent this from happening?
-
I noticed there is a new update for the Emby server. Will there be a new container update for this?
Thanks
-
On 9/13/2020 at 9:33 PM, DuneJeeper said:
Here is some more information that might help regarding how to find or where to place the php.ini
Thanks
[ { "Id": "c722cdca72996bb1e1578db3ab664adf02d2138329278de0b7e37889e24308f8", "Created": "2020-09-13T11:52:55.623306182Z", "Path": "docker-php-entrypoint", "Args": [ "apache2-foreground" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 26696, "ExitCode": 0, "Error": "", "StartedAt": "2020-09-13T11:52:56.445654777Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:ca10c78920b2d9dce5384438aa02f63743cb085c44175609483dd492bd860c59", "ResolvConfPath": "/var/lib/docker/containers/c722cdca72996bb1e1578db3ab664adf02d2138329278de0b7e37889e24308f8/resolv.conf", "HostnamePath": "/var/lib/docker/containers/c722cdca72996bb1e1578db3ab664adf02d2138329278de0b7e37889e24308f8/hostname", "HostsPath": "/var/lib/docker/containers/c722cdca72996bb1e1578db3ab664adf02d2138329278de0b7e37889e24308f8/hosts", "LogPath": "/var/lib/docker/containers/c722cdca72996bb1e1578db3ab664adf02d2138329278de0b7e37889e24308f8/c722cdca72996bb1e1578db3ab664adf02d2138329278de0b7e37889e24308f8-json.log", "Name": "/chevereto", "RestartCount": 0, "Driver": "btrfs", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": null, "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": { "max-file": "1", "max-size": "50m" } }, "NetworkMode": "bridge", "PortBindings": { "80/tcp": [ { "HostIp": "", "HostPort": "8011" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Capabilities": null, "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": null, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], 
"Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 0, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware" ], "ReadonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ] }, "GraphDriver": { "Data": null, "Name": "btrfs" }, "Mounts": [ { "Type": "volume", "Name": "490b7297445ab5d3aded3874ffc058cf49ff7acf81733f85bc7d7f2f2a2af7ce", "Source": "/var/lib/docker/volumes/490b7297445ab5d3aded3874ffc058cf49ff7acf81733f85bc7d7f2f2a2af7ce/_data", "Destination": "/var/www/html/images", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "c722cdca7299", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "80/tcp": {} }, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "HOST_OS=Unraid", "CHEVERETO_DB_HOST=192.168.86.23", "CHEVERETO_DB_USERNAME=XXXXXX", "CHEVERETO_DB_PASSWORD=XXXXXX", "CHEVERETO_DB_NAME=chevereto", "CHEVERETO_DB_PREFIX=chv_", "CHEVERETO_DEFAULT_TIMEZONE=", "TZ=America/Chicago", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "PHPIZE_DEPS=autoconf \t\tdpkg-dev \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkg-config \t\tre2c", 
"PHP_INI_DIR=/usr/local/etc/php", "APACHE_CONFDIR=/etc/apache2", "APACHE_ENVVARS=/etc/apache2/envvars", "PHP_EXTRA_BUILD_DEPS=apache2-dev", "PHP_EXTRA_CONFIGURE_ARGS=--with-apxs2 --disable-cgi", "PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64", "PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64", "PHP_LDFLAGS=-Wl,-O1 -pie", "GPG_KEYS=CBAF69F173A0FEA4B537F470D66C9593118BCCB6 F38252826ACD957EF380D39F2F7956BC5DA04B5D", "PHP_VERSION=7.3.22", "PHP_URL=https://www.php.net/distributions/php-7.3.22.tar.xz", "PHP_ASC_URL=https://www.php.net/distributions/php-7.3.22.tar.xz.asc", "PHP_SHA256=0e66606d3bdab5c2ae3f778136bfe8788e574913a3d8138695e54d98562f1fb5", "PHP_MD5=", "CHEVERETO_DB_PORT=3306" ], "Cmd": [ "apache2-foreground" ], "Image": "nmtan/chevereto", "Volumes": { "/var/www/html/images": {} }, "WorkingDir": "/var/www/html", "Entrypoint": [ "docker-php-entrypoint" ], "OnBuild": null, "Labels": { "build_signature": "Chevereto free version master; built on 2020-09-11T13:58:29Z; Using PHP version 7.3.22", "maintainer": "Tan Nguyen <tan.mng90@gmail.com>", "org.label-schema.license": "Apache-2.0", "org.label-schema.name": "Chevereto Free", "org.label-schema.url": "https://github.com/tanmng/docker-chevereto", "org.label-schema.vcs-url": "https://github.com/tanmng/docker-chevereto", "org.label-schema.version": "master" }, "StopSignal": "SIGWINCH" }, "NetworkSettings": { "Bridge": "", "SandboxID": "f7dde77cdb2b9a952d7284bbd2dd1911ded2202ad486cf59bc5523f14abbb138", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "80/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "8011" } ] }, "SandboxKey": "/var/run/docker/netns/f7dde77cdb2b", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "ebabaa3a12fbe65f4c76f020e44d3c8ec49a7b3916b20f9a49b1a0b235291c4a", "Gateway": "172.17.0.1", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, 
"IPAddress": "172.17.0.6", "IPPrefixLen": 16, "IPv6Gateway": "", "MacAddress": "02:42:ac:11:00:06", "Networks": { "bridge": { "IPAMConfig": null, "Links": null, "Aliases": null, "NetworkID": "e0d14f00d07d3bd9dfcc25380ea63422eea475a2eb7beeeb6157b78da305eda7", "EndpointID": "ebabaa3a12fbe65f4c76f020e44d3c8ec49a7b3916b20f9a49b1a0b235291c4a", "Gateway": "172.17.0.1", "IPAddress": "172.17.0.6", "IPPrefixLen": 16, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:ac:11:00:06", "DriverOpts": null } } } } ]
So where is this config file, and how do I find it?
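As it happens, the `docker inspect` output above already hints at the answer: the Env list contains `PHP_INI_DIR=/usr/local/etc/php`. A couple of standard commands, run against the container name `/chevereto` taken from that output, can confirm where PHP is actually loading its configuration from (a sketch; paths may differ per image):

```shell
# Ask PHP inside the container which ini files it loads
docker exec chevereto php --ini

# Or list the directory named in PHP_INI_DIR from the inspect output
docker exec chevereto ls /usr/local/etc/php
```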
-
4 minutes ago, itimpi said:
You do not normally want to run a pre-clear on a SSD as it is unnecessary wear.
Yeah I found out about that a bit too late. Oh well.
-
Thank you. Running pre-clear on some SATA drives now, so I will reboot in about 2 days, since the pre-clear will take that long. Will let you know.
-
I got a new system and it has the Samsung 980 1TB NVMe installed.
In Unraid, I did a pre-clear and then formatted the drive as XFS. It reports success, but then it shows as NTFS again.
Here is a screenshot: https://d.pr/i/h9hm4O
Any ideas?
Tks
-
4 minutes ago, KingfisherUK said:
My server runs on an E5-2650L v4 (14-core, 1.7GHz), 64GB RAM with 7 x 18TB and dual 1TB M.2 cache. That is running Plex, multiple *arrs, Home Assistant etc. and it barely breaks a sweat (I do have a nVidia Quadro P400 for transcoding though).
That's great to know. I think 14 cores is more than enough for one Plex user transcoding, even in 4K.
-
Hi guys,
Thinking of getting this server for my home which will run Plex mainly but other things like home automation, cameras etc.
Intel E5-2660 V4 (14 cores)
128GB Memory DDR4
32TB of SATA storage (8 x 4TB)
10Gb port
1TB USB SSD drive
Thoughts?
-
One more question.
So at a later stage, can I put the flash drive in another computer? As long as the parity and data drives are on the network, will this be OK, or does it have to be the same computer that Unraid runs on?
-
You are a legend. Thanks
-
-
Thanks so much. The Start button now appears. So later, when I add more hard disks, I can add parity for them, right? For now I can start experimenting with just disk1, then.
Cheers
Error "Cannot create or write into the data directory" in Docker Engine
I have the same problem.