Posts posted by JonathanM
-
UrBackup is what I use. Discussion here.
-
8 minutes ago, Jebberino84 said:
I did mention in my first post that I already tried safe mode
There are several different Safe Mode options in Windows. You didn't say which ones you tried; your wording implied just the one option (Safe Mode with Networking).
-
1 hour ago, professeurx said:
I speak French, sorry.
-
2 hours ago, sannitig said:
I thought the wiping of the disk was the same as formatting.
Nope. Wiping the disk writes all zeroes, removing any traces of files or filesystem formats. Think of a format as a filing cabinet with drawers and folders. It allows files to be stored in an organized fashion so they can be easily retrieved, as opposed to just tossing the files on the floor in an empty room. Filesystems take up space even when there aren't any files stored.
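To make the distinction concrete, here's a small sketch using an ordinary file as a stand-in for a disk (never point dd at a real drive unless you actually mean to wipe it):

```shell
# A 10 MiB file as a safe stand-in for a disk
truncate -s 10M disk.img

# "Formatting" writes filesystem structures (the filing cabinet),
# which occupy space even before a single file is stored
mkfs.ext4 -F -q disk.img

# "Wiping" overwrites every byte with zeroes, destroying the
# filesystem structures and any trace of the files they held
dd if=/dev/zero of=disk.img bs=1M count=10 status=none
```

After the dd pass, nothing identifies the file as ever having held a filesystem; after a format alone, the directory structures are gone but old file data can still be lurking in unallocated space.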
-
Windows safe mode might reveal some information.
-
Try looking in the support thread for this container. Click on the container icon in Unraid's GUI, and select support.
-
1 hour ago, cosmickatamari said:
I've sent Unraid support an email for assistance but wondering how fast are they typically on resolving these issues?
Did you get an auto reply? If not try sending again.
1 hour ago, cosmickatamari said:
the dockers were stored in a /mnt in the array
Generally containers put their executables and other common program parts inside the docker.img file, and their customizable parts live in ./appdata/* and the templates are on the flash drive. Do you have any backups of the flash drive?
1 hour ago, cosmickatamari said:(the parity drive has NOT changed). But do the other drives need to be listed on the array in that order?
If you only had a single parity drive in Parity1, then data drive order doesn't matter. If for some reason you had that parity drive in Parity2, then order does matter. The two parity slots use different calculations.
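To illustrate why order doesn't matter for Parity1: it's a plain XOR across the data drives, and XOR is commutative. A toy sketch with hypothetical one-byte "blocks" (the real implementation works on whole sectors, of course):

```shell
# Hypothetical one-byte blocks from three data drives
d1=$((0xA5)); d2=$((0x3C)); d3=$((0x5A))

# Parity1 is a straight XOR, so any drive order gives the same result
p_a=$(( d1 ^ d2 ^ d3 ))
p_b=$(( d3 ^ d1 ^ d2 ))
[ "$p_a" -eq "$p_b" ] && echo "same parity regardless of drive order"

# Parity2 (Q) weights each block by its slot position,
# so there the assignment of drives to slots does matter
```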
-
2 hours ago, Getting Goin said:
zero response
Pick one app to work on, read the first post in that app's thread and follow the troubleshooting steps. When you have the appropriate log file, read through it and XXXX out any credentials then post it in that app's thread with a description of what you have done so far and where you are getting stuck. Since you say you have the same issue on multiple containers, I'm betting if you fix one, you can easily figure out how to deal with the rest.
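As a quick way to XXXX out credentials before posting, a GNU sed pass like this can help. The field names are just examples, so adjust them for what your log actually contains, and still skim the result by hand:

```shell
# Replace the values of common credential fields with XXXX.
# The field-name list is an example only; extend it as needed.
sed -E 's/((password|passwd|token|api_?key|secret)[=: ]+)[^[:space:]]+/\1XXXX/gI' app.log > app-redacted.log
```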
-
6 hours ago, dbs179 said:
How do I get the template back onto the unraid USB?
Restore it from your last flash drive backup.
-
There are at least five different implementations of Nextcloud, each with its own support area. Please click on the Nextcloud icon in the GUI and select the support item, then read and post in that specific area.
-
I don't know if the time being wrong will give that error, but try setting the BIOS time to GMT.
You should be able to save the diagnostics zip file to the flash, shut down the machine, then attach the zip file to your next post here.
-
-
39 minutes ago, stainless_steve said:
I am at 80% (737GB)
80% is about 16GB, not 737GB
On 4/15/2024 at 3:22 PM, stainless_steve said:
I checked and the docker.img file is just 21.5GB big
20 hours ago, JonathanM said:
It's the percentage of space used INSIDE the docker.img file. Not the free space on the drive holding the image file.
-
I don't understand what you think is wrong; everything looks good to me.
-
14 hours ago, asbath said:
This I believe will tell the system I want to move everything in appdata from the array to the cache.
Only files that aren't in use will be moved, so to get "everything" moved you need to stop the Docker and VM services in Settings. You'll know the services are stopped when the Docker and VM tabs disappear from the GUI.
-
6 hours ago, stainless_steve said:
Where is the maximum size of 128GB coming from?
That's what the motherboard reports; it may be accurate, or it may not.
6 hours ago, stainless_steve said:
what determines the maximum size of the Log ?
It's a fixed allocation set by Unraid.
6 hours ago, stainless_steve said:
and most importantly: why is the docker at 79% (732GB)? I checked and the docker.img file is just 21.5GB
It's the percentage of space used INSIDE the docker.img file. Not the free space on the drive holding the image file.
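A sparse file makes the "capacity versus what's inside" distinction easy to see. This demo file is a stand-in, not your real docker.img:

```shell
# A sparse 1 GiB file: its capacity is 1G, but nothing is stored inside yet
truncate -s 1G demo.img

ls -lh demo.img   # apparent size: the image's full capacity (1.0G)
du -h demo.img    # space actually allocated on the host: roughly zero
```

The utilization percentage Unraid shows is measured against the space inside the image, which is a different number from anything the host drive's free-space figures tell you.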
-
Use the 6.12.10 files, overwrite everything on the USB except the config folder. Make sure you keep the config folder, make a backup of it before you do anything else.
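The "overwrite everything except config" step is essentially a copy that skips the config folder. A sketch with throwaway directories (the directory and file names here are made up for the demo; on the real flash drive everything lives under /boot):

```shell
# Throwaway directories standing in for the new release and the USB stick
mkdir -p new_release usb/config
echo "6.12.10 files" > new_release/bzroot
echo "precious settings" > usb/config/super.dat

# Back up config first, then copy the release over while leaving config/ alone
cp -a usb/config usb-config-backup
rsync -a --exclude='config/' new_release/ usb/

cat usb/config/super.dat   # still intact
```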
-
39 minutes ago, asbath said:
So I figured, easy enough, I'll just use cp to manually bring the files over from /mnt/user/appdata/* to /mnt/cache/appdata/*. That went swimmingly.
Pretty sure that didn't work like you thought it did, because those two locations are the same.
/mnt/user paths are the combined view of the root paths on all the array disks and pools. User shares and disk shares should never be mixed in the same file operation.
-
On 4/14/2024 at 8:50 AM, ConnerVT said:
apcupsd may have this functionality as well,
It does. I use it extensively: each of my VMs runs apcupsd in slave mode and is set to begin shutting down a minute or two after the host server reports a loss of power. Server shutdown is much smoother for me when the VMs are all shut down before the server itself starts its shutdown sequence.
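A minimal sketch of the slave side in /etc/apcupsd/apcupsd.conf; the master's IP and the timing value are examples, so adjust them to your setup:

```
## On each VM (slave): poll the host's apcupsd over the network
UPSCABLE ether
UPSTYPE net

## Address of the master server's apcupsd NIS (default port 3551)
DEVICE 192.168.1.10:3551

## Seconds on battery before this slave begins its shutdown
TIMEOUT 120
```

Setting a short TIMEOUT on the slaves and a longer one on the host is one way to guarantee the VMs finish shutting down before the server starts its own sequence.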
-
-
22 hours ago, rutherford said:
I noticed some IP in China is slamming my ports. What's a good way to shut that down?
Block the entire 140.210.*.* network in your firewall.
-
17 hours ago, steve1977 said:
Any idea what could be driving this.
Ask your ISP if they put you on a CGNAT address. If so, ask if they can put you back on a direct internet address instead.
-
I just thought of another VERY good reason I want the parity drive to be read all the way to the end.
Hidden errors. Unless you run a complete SMART test, that portion of the parity drive beyond the last data drive is NEVER read or written during normal use. The last thing I want is for there to be a bad sector lurking in the last bit of the parity drive, just waiting for me to add a data drive and start exercising it. At least with a parity check that portion of the drive is read regularly.
Count me as 1 vote to keep the current behaviour, for the above reason alone.
-
22 hours ago, thatdude78 said:
The issue is my back up is a year old. During this period i replaced a failed 3TB HDD with an 18TB HDD plus I also had two parities when this backup was created which my current config did not.
Just be glad that one of your current data drives wasn't in use as a parity with that old backup. That sort of error is generally fatal to the data that was on said drive.
Lesson here, keep current backups, and delete any old backups made before a drive change. You really don't want to accidentally use an old backup.
-
16 hours ago, Gragorg said:
I agree it should skip the portion after the largest data drive.
If that were implemented, it would require a change in the drive addition and replacement code, because it would no longer be a given that the full capacity of the parity drive is indeed zeroes. I don't see that kind of rewrite happening, mainly because what is there now works, and mucking around with such important code would require a VERY strong reason, given the amount of work needed to test for edge cases and general bugs that could be introduced.
-
How can one securely autostart an encrypted unRAID array
in Security
Posted
It means recovery from corruption can be impossible with encryption in the way.
Corruption can happen with hardware errors, like bad RAM, cables, or power issues. The problem is, you don't know it's going to happen until it does, and RAID (of any sort, not just Unraid) can't always compensate, meaning unless you have complete backups, you will lose data.
Unraid or any RAID can't help with file deletion or overwriting good data with bad, so backups are always needed, but with encryption, the recovery options are even more limited, so backups are even more necessary.
If the data is important enough to encrypt, it's important enough to keep multiple copies in multiple locations.