mgranger
Posts posted by mgranger
-
25 minutes ago, mrbilky said:
Well, the only other suggestion to possibly eliminate something: it was recommended that I create a RAM disk for testing to rule out any bottleneck on that front. If it is correctly configured, you should saturate transfers with no problem.
So for creating a RAM disk, what does this entail? I created one on the Windows PC that is about 2.5 GB. Should I just transfer files to this from the Unraid server? I did this and I was getting about 140 MB/s.
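On the Unraid (Linux) side there is a simpler option than building a Windows RAM disk: /dev/shm is already a RAM-backed tmpfs on any stock Linux install, so a test file created there takes the disks out of the equation entirely. A sketch, assuming roughly 1 GiB of free RAM:

```shell
# Create a 1 GiB test file in RAM on the Unraid server
# (adjust count to fit your free memory):
dd if=/dev/zero of=/dev/shm/testfile bs=1M count=1024

# Now copy /dev/shm/testfile over the share from Windows. If the transfer
# still plateaus at ~80 MB/s, the bottleneck is the network/SMB stack,
# not the array or cache disks.

# Clean up when done (the file is occupying RAM until deleted):
rm /dev/shm/testfile
```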
-
41 minutes ago, mrbilky said:
Go to Network Adapters, click on the 10 GbE network, hit Properties, hit Internet Protocol Version 4, hit Properties, ensure you have set your IP correctly, then hit the Configure button, hit the Advanced tab, and scroll down to Jumbo Packet; I set mine to 9000. Now go to your unRAID server, hit the Settings tab, go to Network Settings, and check that your 10 GbE network is set to the correct IP and the MTU is set to the same 9000 as your Windows box. Now I'm no expert on diags for sure, but if you are bonding a 1Gb and a 10Gb network, I'm told that's no good. I also made the IPs for my setup 10.10.10.10 and 10.10.10.20 respectively, as I have read on many forums that this is good practice to prevent any bottlenecks.
All of these settings seemed to be set like you said, except mine was 9014 rather than 9000, and the Unraid one was blank, so I made it match. I used 192.168.10.x rather than 10.10.10.x, but this is a different subnet from my 1 Gb network, so I think it should be OK.
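To confirm the 9000/9014 setting actually took effect end to end, the Linux side can be checked like this (the interface name eth0 and the 192.168.10.x address are assumptions; substitute your own):

```shell
# Read back the MTU the kernel is actually using:
ip link show eth0 | grep -o 'mtu [0-9]*'

# Verify jumbo frames survive end to end. 8972 = 9000 minus 28 bytes of
# IP/ICMP header; -M do forbids fragmentation, so this ping only succeeds
# if every hop really passes 9000-byte frames.
ping -c 3 -M do -s 8972 192.168.10.192
```

If the large ping fails while a plain ping works, something in the path (a switch, or one of the two NICs) is still at the default 1500 MTU.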
-
41 minutes ago, 1812 said:
a quick look through and nothing jumped out at me. are you transferring small files in these folders? do you have a single larger 20GB+ file you could transfer? if the large single file transfer runs fast, and if the folder has lots of smaller files, then that's your problem.
I have 8 files in the folder I am copying, all between 5 and 10 GB. I tried just doing the 10 GB one and it did the same thing: it copied for a little bit at 300 MB/s, then dropped to 80 MB/s.
-
15 minutes ago, 1812 said:
perhaps you're either filling up your RAM cache and/or filling up your SSD cache buffer and it is then writing directly to the array... without knowing your setup it's hard to say.
What settings do you need to know? Here is my diagnostics log file from Tools. Not sure if that will help or give you what you need to know; obviously the Windows side is not included in this.
-
I am getting the expected speed (about 250-300 MB/s) for about 10 seconds, but then it goes down to 80 MB/s.
Edit: So it seems to be working, kind of. If I transfer 1 or 2 files at a time, it hits 400 MB/s no problem. When I did a folder that was 56 GB in size, it worked for a couple of seconds at 300 MB/s but then eventually dropped to 80 and stayed at 80. Is this really what is supposed to happen, or is there something I am missing where it is getting throttled somehow?
-
1 hour ago, mrbilky said:
Did you map the share with the IP address (and did you assign IPs for both cards)? If not, your transfer speeds will reflect what you are getting now; I've been down this road the last few weeks and that was what worked for me. Also enable jumbo packets on both your unRAID server and your Windows rig to match each other.
Yes, I mapped the IP address and assigned IPs for both cards on a different network. I am not sure how to enable jumbo packets.
1 hour ago, johnnie.black said:
You can use iperf to test max Ethernet bandwidth and see if that's the problem or it's not LAN related.
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.10.190, port 54292
[ 5] local 192.168.10.192 port 5201 connected to 192.168.10.190 port 54294
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 985 MBytes 8.26 Gbits/sec
[ 5] 1.00-2.00 sec 1022 MBytes 8.58 Gbits/sec
[ 5] 2.00-3.00 sec 1.00 GBytes 8.61 Gbits/sec
[ 5] 3.00-4.00 sec 1023 MBytes 8.58 Gbits/sec
[ 5] 4.00-5.00 sec 1.00 GBytes 8.61 Gbits/sec
[ 5] 5.00-6.00 sec 1019 MBytes 8.55 Gbits/sec
[ 5] 6.00-7.00 sec 1.00 GBytes 8.62 Gbits/sec
[ 5] 7.00-8.00 sec 1015 MBytes 8.51 Gbits/sec
[ 5] 8.00-9.00 sec 1013 MBytes 8.50 Gbits/sec
[ 5] 9.00-10.00 sec 1021 MBytes 8.56 Gbits/sec
[ 5] 10.00-10.04 sec 37.3 MBytes 8.41 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.04 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-10.04 sec 9.98 GBytes 8.54 Gbits/sec receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Here is what I got for iperf.
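Worth noting when reading that output: iperf reports Gbits/sec while Windows Explorer reports MB/s (megabytes), so the two numbers are an order of magnitude apart in units. A quick conversion of the result above:

```shell
# Convert the iperf3 receiver result from Gbit/s to MB/s
# (1 Gbit/s = 125 MB/s):
awk 'BEGIN {
    gbits = 8.54                 # iperf3 receiver average from above
    mbytes = gbits * 1000 / 8
    printf "%.2f Gbit/s = %.1f MB/s\n", gbits, mbytes
}'
# 8.54 Gbit/s is roughly 1067.5 MB/s, so the LAN itself is nowhere near
# the problem; the 80 MB/s plateau has to come from the disk/cache side.
```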
-
I just installed two ASUS XG-C100C 10G network adapter PCI-E cards (one in my Windows PC and one in my Unraid server) and ran 100 ft of Cat6a cable between the two. Both seem to be recognized OK, but when I transfer files I am only getting 80 MB/s. I was hoping for something more like 300-400 MB/s. Is there something I should be doing differently?
-
So on the first one, how do you know when it is an external issue vs. a failing disk issue? Is it the number of sectors? What is typically acceptable, and is there anything to do to salvage it? Although I don't know if I can trust it now, out of concern that it will fail on me.
-
OK. On my other server today I now have an error on one of its drives as well. I just started using Unraid, but it seems strange to see these errors on these disks. I guess I never had anything really scanning them before, but is there something I can do better? The Toshiba drive doesn't seem very old.
-
Here are the diagnostics files. The SMART scan for the Toshiba drive never finished; it had been running for over 20 hours (probably the last 6 or 8 hours at 90%), so I stopped it.
-
I am still waiting for it to finish the SMART scan. It has been at 90% for a couple of hours now.
-
27 minutes ago, Kewjoe said:
No. Are you sure the CPU usage is being caused by Pi-hole?
Well, I looked into it a little more. It may not have been as bad as I thought. It looks like my CPU usage went up 4-5% when I was running Pi-hole, which still seems like quite a bit for what it is doing.
-
Last night my 2nd parity disk came up with 302 errors. I am currently running a SMART extended test on it and I will post it once it is complete. What should I be looking for as far as determining whether to replace it or fix the issues? Is there anything else I can post to help determine this?
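For reference, a sketch of what to look at once the extended test finishes. The attribute names are standard SMART attributes; on the real drive the data would come from smartctl -a /dev/sdX, but the report below is made up for illustration:

```shell
# The attributes that usually decide "replace vs. keep" are reallocated,
# pending, and uncorrectable sector counts. Sample report lines (fake
# values for illustration; 302 mirrors the error count in this thread):
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       302
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0'

# Non-zero raw values (last column) on these attributes are the warning
# signs; pending sectors in particular mean the drive has reads it could
# not complete and is waiting to remap:
echo "$sample" | awk '$NF > 0 { print $2 " raw value: " $NF }'
```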
-
When I use Pi-hole my CPU usage goes way up. Is this normal?
-
Well, so I made the database using the command line, but when I try to find the database in my appdata/influxdb folder it is not there, yet it shows up when I use SHOW DATABASES from the command line.
Edit: Never mind, I also figured this out. My Home Assistant had to point to my IP address rather than localhost. Then it was created in my appdata/influxdb folder.
-
19 minutes ago, atribe said:
Once the container is started you can get into the docker containers command line and then follow the instructions in the docs: https://docs.influxdata.com/influxdb/v1.2/introduction/getting_started/#creating-a-database
Sorry, I am still learning, but how do you get to the command line inside a Docker container?
Edit: Never mind, I figured it out.
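For anyone else landing here, a minimal sketch of what "figured it out" usually looks like (the container name is a placeholder, not from this thread):

```shell
# List the running containers to find the name:
docker ps --format '{{.Names}}'

# Open an interactive shell inside one:
docker exec -it <container_name> /bin/bash

# If the image has no bash, /bin/sh usually works:
docker exec -it <container_name> /bin/sh
```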
-
OK, so here is what I am trying to do, even though I am sure it is unnecessary. I am trying to monitor how many days are left on my Let's Encrypt cert. So I came across this forum post, but it requires a script that someone wrote.
https://community.home-assistant.io/t/sensor-to-show-expiry-date-of-ssl-certificate/13479
So I am kind of unsure where to put this script file to get it to run from Home Assistant.
@trurl I would suggest trying Home Assistant sometime. I started with the basic stuff but have branched out over about a year and a half of using it.
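The cert-expiry check that the linked sensor script performs boils down to one openssl call. A hedged sketch, assuming the linuxserver letsencrypt container's usual cert location under appdata (adjust the path to your own layout); the date -d syntax is the GNU coreutils form, which Unraid has:

```shell
# Path assumption: adjust to where your letsencrypt container keeps certs.
CERT=/mnt/user/appdata/letsencrypt/keys/letsencrypt/fullchain.pem

# Pull the expiry date off the cert and convert to days remaining:
expiry=$(openssl x509 -enddate -noout -in "$CERT" | cut -d= -f2)
days_left=$(( ( $(date -d "$expiry" +%s) - $(date +%s) ) / 86400 ))
echo "Days left on cert: $days_left"
```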
-
Is there a way to create a blank database? I am trying to set this up with Home Assistant, but it requires the database to be set up already, and I can't figure out how to do this in the InfluxDB Docker.
I tried the following in the terminal, but it didn't create a database.
$ docker run --rm -e INFLUXDB_DB=home_assistant influxdb /init-influxdb.sh
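A hedged alternative that avoids the init script entirely: create the database inside the already-running container with the influx 1.x CLI. The container name "influxdb" is an assumption; check yours with docker ps:

```shell
# One-shot: run the CREATE statement through the influx CLI in the container:
docker exec -it influxdb influx -execute 'CREATE DATABASE home_assistant'

# Or interactively: open a shell in the container, then use the CLI:
docker exec -it influxdb /bin/bash
# inside the container:
#   influx
#   > CREATE DATABASE home_assistant
#   > SHOW DATABASES
```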
-
I am trying to add a script that can be called from the command line, but I'm not sure where I should put the script file. I understand there is a User Scripts plugin; I have used it and it is great. However, I am trying to run a script from Home Assistant through a command line, and I don't think that plugin would work for that. Where should I add my script to make this work?
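For the Home Assistant side, a hedged sketch of the wiring, assuming the folder holding the script is mapped into the Home Assistant container at /config/scripts (both the mapping and the script name are hypothetical):

```yaml
# configuration.yaml: a command_line sensor can call any script that is
# reachable at a path *inside* the container, so the script needs to live
# on a share that is mapped into the container.
sensor:
  - platform: command_line
    name: cert_expiry_days
    command: "/config/scripts/check_cert.sh"
```

The key point is that Home Assistant runs in its own container, so a script sitting in an Unraid share is invisible to it unless that share is added as a path mapping in the container settings.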
-
10 hours ago, mgranger said:
cp -al $Destination1/TVShows/Daily0 $Destination1/TVShows/Daily1
I was thinking this line would carry everything over from Daily0 to Daily1 as hard links, and then from then on everything would get passed along with mv as links, but obviously I am wrong because it is not doing this.
-
I am trying to use rsync to make a local backup of my data. I would like to make incremental backups where the first folder is a full backup and the folders get rotated as days go on, keeping hard links back to the full backup. I found an article on this here: http://www.admin-magazine.com/Articles/Using-rsync-for-Backups/(offset)/2 The problem I am having is that when I do the mv command it seems to copy all the files rather than keeping hard links, so I lose all my hard disk space. Any advice on what I am doing wrong?
Here is my script:
#!/bin/bash

Source4=/mnt/user/Recordings/Test
Destination1=/mnt/disks/BackupB1/Daily

#####################
### Destination 1 ###
#####################

### Source 1 ###
rm -rf $Destination1/TVShows/Daily5
mv $Destination1/TVShows/Daily4 $Destination1/TVShows/Daily5
mv $Destination1/TVShows/Daily3 $Destination1/TVShows/Daily4
mv $Destination1/TVShows/Daily2 $Destination1/TVShows/Daily3
mv $Destination1/TVShows/Daily1 $Destination1/TVShows/Daily2
cp -al $Destination1/TVShows/Daily0 $Destination1/TVShows/Daily1
rsync -a --delete $Source4/ $Destination1/TVShows/Daily0/ |& logger
I have also tried it with this as my last two lines but no luck:
mv $Destination1/TVShows/Daily0 $Destination1/TVShows/Daily1
rsync -avh --delete --link-dest=$Destination1/TVShows/Daily1/ $Source4/ $Destination1/TVShows/Daily0/ |& logger
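One thing worth checking before blaming mv or cp: hard links only work within a single filesystem, so if the Daily folders ever end up on different disks (easy to do accidentally with Unraid user shares) the links cannot be made. Also, tools that are not link-aware (a Windows file browser, for example) will report each snapshot at full size even when the links worked; du charges the blocks only once. A self-contained demonstration with throwaway temp paths, not the script's real ones:

```shell
# Make a source directory with an 8 MiB file in it:
src=$(mktemp -d)
dd if=/dev/zero of="$src/file" bs=1M count=8 2>/dev/null

# cp -al links instead of copying:
cp -al "$src" "$src.link"

# The same inode number on both paths proves it is a hard link, not a copy:
stat -c %i "$src/file" "$src.link/file"

# du reports the blocks once across both directories:
du -sh "$src" "$src.link"

rm -rf "$src" "$src.link"
```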
-
Well, I am not even an amateur at this. Is there a good place to read up more on it? All I have right now is a DuckDNS address. I do have my own domain, though I have never set it up for websites, and I don't know if it is possible to set it up for websites/Let's Encrypt. I would like to figure this out but am not sure where to start. I am sure this is outside the scope of this forum, but I would take any advice on where to start.
-
@DZMM I got it to work with your information. I was trying to follow the reverse proxy instructions at https://cyanlabs.net/tutorials/the-complete-unraid-reverse-proxy-duck-dns-dynamic-dns-and-letsencrypt-guide/ but it did not cover Home Assistant, and when I added something similar for a Home Assistant location it did not work. I was hoping to reuse some of these locations and add Home Assistant alongside them, but that didn't seem to work. Here is the default file from the above guide.
upstream backend {
    server 192.168.1.3:19999;
    keepalive 64;
}

server {
    listen 443 ssl default_server;
    listen 80 default_server;

    root /config/www;
    index index.html index.htm index.php;
    server_name _;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    location = / {
        return 301 /htpc;
    }

    location /sonarr {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.3:8989/sonarr;
    }

    location /radarr {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.3:7878/radarr;
    }

    location /htpc {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.3:8085/htpc;
    }

    location /downloads {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.3:8112/;
        proxy_set_header X-Deluge-Base "/downloads/";
    }

    #PLEX
    location /web {
        # serve the CSS code
        proxy_pass http://192.168.1.3:32400;
    }

    # Main /plex rewrite
    location /plex {
        # proxy request to plex server
        proxy_pass http://192.168.1.3:32400/web;
    }

    location /nextcloud {
        include /config/nginx/proxy.conf;
        proxy_pass https://192.168.1.3:444/nextcloud;
    }

    location ~ /netdata/(?<ndpath>.*) {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend/$ndpath$is_args$args;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
UPDATE:
It seemed to work when I brought those locations into @DZMM's default above.
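For anyone hitting the same wall with a Home Assistant location: a likely reason a copy-pasted location block fails is that the Home Assistant frontend needs websocket upgrade headers or it never loads behind a proxy. A hedged sketch (the /hass path, the 192.168.1.3 address, and port 8123, Home Assistant's default, are assumptions for illustration):

```nginx
location /hass {
    include /config/nginx/proxy.conf;
    proxy_pass http://192.168.1.3:8123;
    # Websocket support; without these the HA frontend stays blank:
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

Note that Home Assistant has historically handled sub-path proxying poorly, which is why many guides put it on its own subdomain instead of a path.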
-
This is probably an easy question, but I am learning. I have set up Home Assistant, and I have configured Let's Encrypt to give me a cert so that I am able to access Home Assistant using DDNS. However, I am not sure how to get my configuration.yaml to point to the cert and key files, which reside in the Let's Encrypt container. I tried setting up a host path from Home Assistant, but I don't think I did it correctly.
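A hedged sketch of the mapping: the cert lives on the host under the letsencrypt container's appdata, so the Home Assistant container needs a path mapping to that folder, and configuration.yaml then uses the container-side path. The /ssl mount point and the appdata path below are assumptions; adjust to your layout:

```yaml
# Assumed container path mapping (set in the HA Docker template):
#   host:      /mnt/user/appdata/letsencrypt/keys/letsencrypt
#   container: /ssl
# configuration.yaml then points at the container-side path:
http:
  ssl_certificate: /ssl/fullchain.pem
  ssl_key: /ssl/privkey.pem
```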
Posted in "80 MB/s on 10GB ethernet" (Hardware)
Is there a way to tell if it is filling up the RAM cache or the SSD cache buffer?
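One way to watch the RAM side directly on the server while a transfer runs; the Dirty counter grows while RAM is absorbing writes, and the speed drop usually lines up with it reaching the vm.dirty limits (iostat comes from the sysstat package and may not be present on stock Unraid):

```shell
# Kernel write-back cache: Dirty is data sitting in RAM waiting to be
# flushed to disk; watch it climb during the fast part of the transfer.
grep -E 'Dirty|Writeback' /proc/meminfo

# The thresholds that decide when writers get throttled to disk speed:
sysctl vm.dirty_ratio vm.dirty_background_ratio

# For the SSD cache pool, watch which device the writes actually land on
# (cache device vs. array disks), refreshing every 2 seconds:
iostat -m 2
```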