
WonderfulSlipperyThing

Members
  • Posts: 55
  • Joined
  • Last visited
Everything posted by WonderfulSlipperyThing

  1. Hi, I've had my Unraid setup using HTTPS for a while now via the settings under Settings > Identification > Management Access. It works well for the most part; however, if the internet connection goes down, the web UI is inaccessible. I don't think this is a bug - obviously things like Let's Encrypt are dependent on an internet connection. However, I was wondering if there is a way around this for when the internet connection goes down? I've tried connecting using http://x.x.x.x:80, but that just redirects me to the HTTPS version. Right now the only workaround I can find is to SSH into the system, change USE_SSL from auto to no, and restart the server, but that's fairly cumbersome. Does anyone have any nifty solutions to this issue, or am I stuck being dependent on an internet connection for smooth operation of the server's web UI? Thanks!
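For reference, the SSH workaround described above can be scripted. This is only a sketch: the real config file is (I believe) /boot/config/ident.cfg on the flash drive, and the nginx restart command is an assumption about how Unraid's web server is managed - check both on your own box. The demo below edits a temp copy so it's safe to run anywhere.

```shell
# Hedged sketch of the SSH workaround: flip USE_SSL from "auto" to "no".
# Stand-in file used here so nothing real is touched; on Unraid the file
# is assumed to be /boot/config/ident.cfg.
CFG=$(mktemp)
printf 'USE_SSL="auto"\n' > "$CFG"
sed -i 's/^USE_SSL="auto"/USE_SSL="no"/' "$CFG"
cat "$CFG"        # prints: USE_SSL="no"
rm -f "$CFG"
# On the real box you'd then restart the web server, e.g.:
#   /etc/rc.d/rc.nginx restart    # command path is an assumption
```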
  2. I'm getting the same thing - hit the button and nothing happens. I just emailed the dev about it then realised I could just check the forums. Out of interest what device are you using? I'm using a Galaxy S9+
  3. Thanks for the writeup. Presumably auto will, in the future, use turbo write if all the drives happen to be spinning already. All of my drives are currently spinning but auto is still using read/modify/write, which is why I came to this post - I didn't realise that feature wasn't actually implemented yet. It's a great feature and works very well, but I'm really looking forward to getting better control over it. If all the drives are already spun up, it would be great if it just used reconstruct write. Also, I think many people don't really care that much about write speeds unless they're actively doing something themselves, so having it enabled just for SMB shares or something would also be really useful. Either way, thanks for the explanation - it makes a lot of sense and seems to make a huge speed difference!
  4. That's a good idea. I looked around and found the command (b2sum); you can specify the hash algorithm (running b2sum with no args even gives you something to copy-paste into a shell script for loop). So I gave it a go with an 11GB file and these are the results I got:
     blake2b - 1 core: 93 MB/s
     blake2s - 1 core: 167 MB/s
     blake2bp - 4 cores: 315 MB/s
     blake2sp - 8 cores: 620 MB/s
     Judging by those stats, the plugin is using standard blake2b. All of them maxed out whichever cores they were using (except for blake2sp, which seemed to use around 85% of each). It would be great to have the option to use a different BLAKE2 variant, as it clearly makes quite a large difference, at least on my system (which I believe is fairly popular). Of course I'd rather not make writing files to the array quite that intensive, so I'd probably use blake2bp just so there's a little CPU wiggle-room left for other tasks, but it's always nice to have options!
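In case anyone wants to repeat this, a loop like the one below is roughly what I ran. Note this is a sketch: the -a flag for picking the variant belongs to the BLAKE2 reference-implementation b2sum, not the coreutils b2sum (which only does blake2b), so which b2sum you have installed is an assumption.

```shell
# Benchmark the four BLAKE2 variants on one file. A 100 MB zero-filled
# temp file is generated so the snippet is self-contained; use a larger
# real file for meaningful numbers.
FILE=$(mktemp)
head -c 104857600 /dev/zero > "$FILE"
for alg in blake2b blake2s blake2bp blake2sp; do
    echo "== $alg =="
    time b2sum -a "$alg" "$FILE" > /dev/null   # -a requires the reference b2sum
done
rm -f "$FILE"
```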
  5. Interesting, probably just a quirk of my processor then. I did a quick Google search for "C2750" and "BLAKE2" and came up with this: https://github.com/minio/blake2b-simd/issues/11 Not sure if that's the BLAKE2 implementation this plugin is using (or if the problems posted there are related to my slow speeds), but I would guess it's something to do with the "Seems to indicate that there's some kind of performance penalty on Atom when executing SSE with 64-bit operands" comment. Thanks for the benchmarks, always useful to have. So I guess for most people BLAKE2 is probably the best option, but for us Atom users it's probably best to stick with MD5.
  6. To anyone in the future interested in this: I did some very basic testing by creating a user share with a few files in it and excluding every other share, then doing a build to find out which algorithm was fastest on my processor. Interestingly, it doesn't seem like any of them are multi-core optimised (and I guess the build only does one file at a time, at least if they're all on one disk) - I got 100% CPU load on one core whichever algorithm I used. At the end, the build gives you an average speed. I ran all the tests a couple of times with different files and this is what I got:
     SHA1: around the 90 MB/s mark (unfortunately I can't remember the exact results)
     BLAKE2: 93 MB/s
     MD5: 323 MB/s
     So if anyone wants to install this plugin and their primary concern is speed, at least on the C2750 8-core Atom, MD5 is by far the fastest to use. I find it crazy that BLAKE2, which is supposed to be the fastest, is less than a third of the speed of MD5, but this may well just be a quirk of the C2750 processor.
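If you'd rather not hash a whole share to compare, a quick single-file version of the same comparison is easy to run from the console. This just times the standard coreutils tools (md5sum, sha1sum, and b2sum for blake2b) over one generated file, so it's a rough proxy for the plugin's build speeds, not the same code path.

```shell
# Rough single-file hash-speed comparison: time each coreutils hasher
# over the same 100 MB zero-filled temp file.
FILE=$(mktemp)
head -c 104857600 /dev/zero > "$FILE"
for tool in md5sum sha1sum b2sum; do
    echo "== $tool =="
    time "$tool" "$FILE" > /dev/null
done
rm -f "$FILE"
```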
  7. This looks like it'll be a great plugin and I've installed it, but I haven't set it up yet (as my server is currently doing some fairly intensive stuff and I don't want to complicate matters). As this plugin has been out a little while now, I was wondering if anyone has any experience with which hashing algorithm has the least performance impact on an Atom processor? I have an Intel Atom C2750 (C2750D4I board) - it's not the most powerful processor ever, but it's perfect for my usage scenario. The last thing I want, however, is for something to start writing to the array while I'm watching or transcoding something in Plex and for it to interfere with that, so it's pretty important to me that I use the hashing algorithm with the least performance impact. I see that BLAKE2 is supposed to be the fastest of the bunch, but I also saw somewhere that you have to make sure your processor is compatible (plus sometimes things are fast on some processors but not on others). If anyone has any input on this, especially if they've used it with an Intel Atom C2750, I'd really appreciate it - obviously hashing all of my storage three times to find the fastest one would take a long time when someone probably already has some information available!
  8. I've just set this up because without it, Headphones is painfully slow... anyway, it completed its stuff (from what I can tell) within a couple of hours and I can access the server's web interface just fine. But Headphones is still taking absolutely ages to actually get anything out of it. Just a few questions... When the web interface is available, databases searchable etc., is it done? Or is it still doing stuff in the background? Do I have to manually enter any commands to get the database to optimise or to build a search index? Is there any way to easily test API calls against it? ...or is it just that Headphones is insanely slow and there's nothing anyone can do about it? These are the last few lines I see in the log, and there hasn't been anything since:
     Mon Sep 12 13:53:41 2016 : Creating search indexes ... (CreateSearchIndexes.sql)
     Mon Sep 12 14:09:33 2016 : Setting up replication ... (ReplicationSetup.sql)
     Mon Sep 12 14:09:33 2016 : Optimizing database ... VACUUM
     Mon Sep 12 14:11:09 2016 : Initialized and imported data into the database.
     Mon Sep 12 14:11:09 2016 : InitDb.pl succeeded
     INITIAL IMPORT IS COMPLETE, MOVING TO NEXT PHASE
     LOG: received fast shutdown request
     waiting for server to shut down...LOG: aborting any active transactions
     .LOG: shutting down
     .........LOG: database system is shut down
     done
     server stopped
     [cont-init.d] 30-initialise-database: exited 0.
     [cont-init.d] 40-config-redis: executing...
     [cont-init.d] 40-config-redis: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     [services.d] done.
     [614] 12 Sep 14:11:20.992 # Server started, Redis version 2.8.4
     [614] 12 Sep 14:11:20.993 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
     Thanks in advance. All of your plugins for unRAID are awesome by the way!
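Side note for anyone hitting that Redis warning at the end of the log: the fix is the one the warning itself spells out, and it's quick to check and apply on the host (not inside the container). The apply lines need root, which is why they're left commented in this sketch.

```shell
# Check the overcommit setting the Redis log complains about
# (0 is what triggers the warning; 1 lets background saves succeed):
cat /proc/sys/vm/overcommit_memory
# To apply the fix the log suggests (run as root on the host):
#   sysctl vm.overcommit_memory=1                          # immediate
#   echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf    # persistent
```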