Posts posted by JonathanM
-
-
> I have the log file but it weighs 22 MB.

Try zipping it. Logs compress incredibly well because of all the repeated text.
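If you're curious just how well log text compresses, here's a quick sketch you can run anywhere; the file names and log lines are throwaway examples, and gzip stands in for zip:

```shell
# Synthesize a repetitive log, then compress it; file names are throwaway.
for i in $(seq 1 5000); do
    echo "Jan  1 00:00:00 tower kernel: mdcmd (30): spindown $((i % 10))"
done > /tmp/fake_syslog.txt
gzip -c /tmp/fake_syslog.txt > /tmp/fake_syslog.txt.gz
orig=$(wc -c < /tmp/fake_syslog.txt)
comp=$(wc -c < /tmp/fake_syslog.txt.gz)
echo "original: $orig bytes, compressed: $comp bytes"
```

Real syslogs aren't quite this uniform, but a better-than-10x reduction is typical, which is why a 22 MB log usually zips down to something easily attachable.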
-
Posted from another thread, Tom definitely has this feature in mind, at least the part about bringing snap into the limetech supported feature list.
2. The cache pool - this is one or more devices organized as a btrfs "raid1" pool. There's lots of information out there on btrfs vs. zfs. No doubt zfs is a more mature file system, but the linux community appears highly motivated (especially lately) to make this file system absolutely robust, and most would say it's destined to be the file system of choice for linux moving forward.
Like data disks, the cache disk (single device pool) or cache pool can be exported on the network. At this time we export "all or nothing" but there are plans to let you create subvolumes and export those individually as well.
The cache disk/pool also supports a unique feature: we are able to "cache" creation of new objects there, and then later move them off cache storage and onto the array. The main purpose for doing this is to speed up write performance when you need it: at the time new files are being written to the server.
3. Ad hoc devices - these are devices not in the array or pool. Sometimes they are referred to as "snap" devices (shared non-array partition). Officially we don't support the use of snap devices but people do make use of them. Eventually we will formalize this storage type though, especially for use by virtual machines.
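As a purely illustrative sketch of what the btrfs "raid1" pool described above looks like at the command line: the device names and mount point below are placeholders, and the guard means nothing is formatted unless you explicitly opt in on real, empty devices.

```shell
#!/bin/bash
# Hypothetical sketch of building a two-device btrfs "raid1" pool.
# DEV1, DEV2, and the mount point are placeholders.
DEV1=${DEV1:-/dev/sdb}
DEV2=${DEV2:-/dev/sdc}
make_pool() {
    mkfs.btrfs -m raid1 -d raid1 "$DEV1" "$DEV2"  # mirror metadata and data
    mkdir -p /mnt/cache
    mount "$DEV1" /mnt/cache                      # any member mounts the pool
    btrfs filesystem show /mnt/cache              # confirm both devices joined
}
# Safety guard: do nothing unless explicitly requested.
if [ "${RUN_MKPOOL:-0}" = "1" ]; then
    make_pool
else
    status="pool sketch loaded"
    echo "$status (set RUN_MKPOOL=1 to format $DEV1 and $DEV2)"
fi
```

Note that btrfs "raid1" just means every block exists on two devices; it isn't a traditional two-disk mirror, which is why the pool can hold more than two members.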
> If you can have one cache pool, why not two, or three, sync'd to different folders obviously.

OP, can you expand more on the use case and how you see this part of your request working? Why do you want to subdivide the cache pool?
-
Just to clarify, multiple cache pools as the traditional "cache" doesn't really compute, the original intent of the cache drive was to provide a temporary fast write location for data intended to end up on the array.
What you are asking for exists already as a third-party plugin, snap. Alternatively, it can be done at the command line with scripting. I think this feature request would be better stated as a need for a limetech-supported and maintained app drive(s) or pool(s) that wouldn't directly participate in array writes. Now that we have baked-in virtualization and more app support, I think it's a good idea to separate the ideas of cache and apps, possibly even with a backup routine. It would be nice to have all that supported in the main GUI.
-
> No, right now the upgrade is a manual one, but I likely will link the v1 updater to the v2 plugins soon so it's a one-button update for everyone. The v2 plugins exist in a different github folder also, more consolidated that way.

Before you do that, would it be possible to code the update button so it excludes those of us still on earlier 6.0 beta versions? I think a version check with exclusions for 6.0 betas 1-10 would be sufficient.

I'm perfectly ok with you stopping support for 1.x plugins, I can update software versions myself if necessary, but please don't force the update. Unless the landscape changes drastically, I'm planning on sticking with 6b6 until 6.0 goes final.
-
> It's extremely annoying that the recommendation I'm given is to update my password in every single location that I use Google services because unRAID can't handle a '#' sign...

I agree with your premise, however I think you may be tilting at windmills. I would recommend setting up a disposable gmail account with a password like unraidssmptpasswordhandlingsucks or something like that, and using the forwarding and filtering rules to accomplish what you need.
-
> +1 for dual parity. This is a must feature!

;D Extra redundancy for the win!!!
-
I'd recommend updating to 14a. If you submit a bug report on an old version, the first thing anybody is going to say is "Does it do it on the current beta?"
-
> Parity is the keystone to unRAID, and with a potentially failing drive you're really asking for trouble.

Logically, since I can lose any single drive and recover, but losing 2 or more drives loses the data on all of the dead drives, I would prefer to have the parity drive fail, because no data is lost with it. If you lose 2 data drives and parity is still ok, you gain nothing, because you still have nothing from the 2 data drives. unRAID requires ALL remaining drives to be healthy to recover from 1 failure.

I would argue that parity is the least important of the drives for the sake of data redundancy, and the most important for performance, as each write must touch the parity drive, but reads only involve the individual data drive.
-
> How do people feel about snapraid on top of unRAID as an added level of protection?

I would love to see a proof of concept and some use examples. Would you be able to run it against selected user shares instead of selecting specific drives to protect? Last time I tried, I was able to compile it in a Slackware dev environment.
-
> This indeed sounds promising, but I am silently wondering how you can be sure it was 14a that solved the issue on your test system, if you were actually unsure previously how you reproduced the issue with 14?

Not so silently wondering: what changed from 14 to 14a that fixed the issue? If you don't know, how will you be able to keep the problem from coming back in future releases?
-
> When you have a few hours and the server will be unused, you can click on Run Disk Self-test and select the long test (which will take a long time, i.e. hours). When it's complete, click on the Disk self-test log and Disk Error-log (if any) and post the results.

As an aside, has the coding been changed to keep a long test from being interrupted by a spindown? Since the test is now part of the main interface and not an addon, it would probably be fairly trivial to change.
-
> Just switched from Windows Server to Unraid. Dashboard is showing my PARITY drive is faulty? But it has been working all these years no problem, what does this mean?

Short answer: parity has not been fully created or checked yet, so you aren't protected against a drive failure yet.

The red SMART indicators may show that both the parity drive and disk 1 have issues, but until you examine the SMART reports, and possibly post them here for evaluation, nothing can be said definitively about their condition.

Lastly, just because a drive has been working for years doesn't mean it isn't about to fail. All hard drives eventually fail; the important thing is to accurately predict when.
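To illustrate what examining the SMART reports actually involves, here is a sketch that pulls two of the most failure-predictive attributes out of smartctl-style output. The sample report below is fabricated for the demo; on a real system you would feed in the output of `smartctl -A /dev/sdX` instead:

```shell
# Fabricated sample of a smartctl attribute table (real output has ~20 rows).
cat > /tmp/smart_sample.txt <<'EOF'
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   3
EOF
# Print the attribute name and its raw value for the two we care about.
awk '$2 ~ /Reallocated_Sector_Ct|Current_Pending_Sector/ {print $2, $NF}' /tmp/smart_sample.txt
```

Nonzero raw values on those two attributes are the usual early warning: pending sectors mean the drive has reads it can't complete, and a climbing reallocated count means spare sectors are being burned through.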
-
> So now, my question is: why is this disk readable in a new unRAID server, but not on any Linux computer? I want to make sure that my other disks would be readable outside of the array. This is supposed to be a feature of unRAID, and in my opinion an important one!

Did you mount it internally to the other computer, or use a USB connection? Some USB adapters munge existing data by doing internal translation in the SATA-USB controller and require a fresh format to work, or they just plain don't work correctly with drives over a certain size. Also, not all Linux distributions include reiserfs support out of the box; it has to be downloaded and configured with the applicable package manager.
-
> I was able to do this by setting up my modem to be a dumb switch and my Netgear router to do the PPPoE dialing, and then port forwarding gets easy as well, since you only have to forward the port on your router. Now I only need to figure out how I can add some security to the server to prevent intrusion.

You have the cart before the horse. Opening up unRAID and then trying to secure it after the fact pretty much guarantees a hacked network.
-
> What does it mean in unmenu when it says installed but not downloaded? Does that mean unRAID has it and unmenu is just seeing it?

Yes. There is a line in the package.conf file that describes what file (and possibly version) to match, and if it matches, it shows installed. In your specific example, these are the relevant lines. The package file doesn't exist in the package folder on your flash drive, so it shows not downloaded. The package installed matches what is in the /usr/lib folder already.
PACKAGE_URL http://slackware.cs.utah.edu/pub/slackware/slackware-12.1/slackware/a/cxxlibs-6.0.9-i486-1.tgz
PACKAGE_FILE cxxlibs-6.0.9-i486-4.tgz
PACKAGE_INSTALLED /usr/lib/libstdc++.so.6
PACKAGE_MD5 ad8c0c5789581a947fd0a387c2f5be8a
PACKAGE_DEPENDENCIES none
PACKAGE_INSTALLATION test -f /usr/lib/libstdc++.so.6 && echo "/usr/lib/libstdc++.so.6 already exists. Package not installed."
PACKAGE_INSTALLATION test ! -f /usr/lib/libstdc++.so.6 && installpkg cxxlibs-6.0.9-i486-4.tgz
PACKAGE_VERSION_TEST ls --time-style=long-iso -l /usr/lib/libstdc++.so.6 | awk '{print $10}'
PACKAGE_VERSION_STRING libstdc++.so.6.0.9
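To see how that PACKAGE_VERSION_TEST line arrives at the version string, you can reproduce it against a dummy symlink; the paths below are throwaway, not the real /usr/lib, and this assumes GNU ls as shipped on unRAID:

```shell
# Recreate the library-plus-symlink layout unmenu checks, in a scratch dir.
mkdir -p /tmp/pkgdemo
cd /tmp/pkgdemo
touch libstdc++.so.6.0.9                   # pretend this is the installed library
ln -sf libstdc++.so.6.0.9 libstdc++.so.6   # the symlink unmenu actually tests
# In GNU ls long-iso output, field 10 is the symlink's target, i.e. the version.
ls --time-style=long-iso -l libstdc++.so.6 | awk '{print $10}'
```

The printed target is compared against PACKAGE_VERSION_STRING, which is how unmenu can report a package as installed even though it never downloaded it.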
-
> Now those last two replies put me in a quandary...

Exactly as it should. The drive is currently fine to use, but that could change very quickly to not fine. All drives will fail; the important bit is predicting when.
-
I don't have any research to back this up, but my gut feeling is that you lost an entire address line on a chip, or something similar, that affected all 4096 sectors at once. Instead of reallocating onesies and twosies, SSDs might fail whole blocks at a time.
-
> Yes, DDR2 is very expensive compared to DDR3. Luckily, yesterday while browsing through Craigslist, I found someone selling 8GB of DDR2 800 MHz for $30. So I snatched it.

Many DDR2 motherboards will only handle 2GB or 4GB sticks anyway. Which motherboard do you have that will use an 8GB stick?
-
> Hello, so I set up the tower from my desktop PC, which is wired to the server. Now I'm on my laptop, which is on wifi, and I cannot log in from it. Meaning I can log into the tower from the website only if I log in from my PC first?

What are the IP addresses of the unRAID box, your desktop PC, and your laptop? Are you connecting to a guest wifi on your router?
-
> So, no go on the reiserfsck...

What exactly did you type, were you in maintenance mode, and what was the output of the command?
-
> Well, crap. It was reiserfs and was redballed. I removed the drive, installed a new precleared disk, formatted it (xfs), and started the rebuild.

reiserfs is extremely resilient; you may still be able to recover the data from the new disk after the rebuild is complete. DO NOT WRITE ANYTHING TO DISK1.
Rough outline of what needs to happen to attempt to recover disk 1's data:
1. Finish rebuild
2. Restart array in maintenance mode
3. Run reiserfsck --check /dev/md1
4. Post output of that command here on the forum and solicit further advice
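The outline above, as a hypothetical script; the guard means nothing touches /dev/md1 unless you explicitly opt in after the rebuild finishes and the array is restarted in maintenance mode:

```shell
#!/bin/bash
# Sketch only: step 3 of the outline. /dev/md1 is unRAID's parity-protected
# device for disk1. --check is read-only; do NOT jump to --rebuild-tree
# until the --check output has been posted and reviewed on the forum.
check_disk1() {
    reiserfsck --check /dev/md1
}
# Safety guard: the check must run with the array in maintenance mode,
# so do nothing unless explicitly requested.
if [ "${RUN_CHECK:-0}" = "1" ]; then
    check_disk1
else
    status="sketch loaded; set RUN_CHECK=1 in maintenance mode to run"
    echo "$status"
fi
```

Running against /dev/md1 rather than the raw /dev/sdX device is deliberate: writes through md1 keep parity in sync, so a later repair pass doesn't invalidate the array.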
-
> So, I haven't had a failed drive in a year or two... rebooted my unRAID and disk1 won't come back to life. Swapped in a brand new disk that I had precleared, set it as XFS, and it started to rebuild. Should I see data on /mnt/disk1 as it's rebuilding? When I telnet into unRAID and look at /mnt/disk1 there is nothing there. Thanks.

Please list exactly what you did. What format was disk1 before it failed? Rebuilding puts back the original file system exactly as it was; you can't switch file systems with the data intact. If you formatted the new drive, you erased it.
-
> Thanks, will do, just waiting for my key before I can.

Actually, you could use the free version and just assign a couple of drives each time. Just be very careful to NEVER populate the parity slot. You could take the opportunity to browse the disks and label them appropriately; then when your key arrives you will be ready to assign everything at once.
-
Telnet in and get a syslog, zip it up and post it.
Posted in "Swapping data disk howto?" in General Support.