Leaderboard

Popular Content

Showing content with the highest reputation on 04/15/19 in all areas

  1. I'm a FreeNAS migrant and a complete noob to unRAID, but now I'm convinced. I was a keen supporter of FreeNAS for 5 years and went through the trouble of learning the ins and outs of FreeNAS versions 9, 10 and 11. It worked well for me in that time, obviously with some troubles with jail updates and permissions at times etc, but I got by. Each time I had trouble with an update I toyed with the idea of unRAID, and have done so for about a year, but I just did not have the courage to migrate to it...until last weekend, whilst trying to update Plex and Transmission...what a hassle, and more of a joke. I tried everything: several new installs, adjusting permissions, moving data from the NAS, wiping drives and starting fresh...finally I thought I had it working...alas no, what I had working stopped, so I thought to myself FI! Time to migrate...half the job's done anyway with all the data stored off the machine. unRAID to the rescue!
     First problem I encountered...writing the boot data to the USB. The several (3x) large (64GB) USB drives I had did not work; the constant problem was that on booting into unRAID there was no LAN connection. 5-10 min of reading and I found that USB compatibility was, or is, or seems to be an issue, so I used the cheapest 8GB no-name-brand one I had and it booted up with LAN first pop (actually 4th pop). Sweet.
     Second problem I encountered...oh that's right, there was none. I followed a YouTuber's instructional video to set up unRAID, along with reading the manual, installed Plex and Transmission, and it all works. I then decided to transfer files from USB backup to the array using Unassigned Devices and Krusader. All files transferred in a day, the parity check has been done, the system flash boot is backed up, and all is humming along beautifully.
     Hardware I use for unRAID (previously FreeNAS):
     M/B: ASUSTeK COMPUTER INC. - Z170M-PLUS
     CPU: Intel® Core™ i7-6700K CPU @ 4.00GHz
     Memory: 16 GB
     Cache Drive: 1x 120GB SSD
     Drives: 8x 3TB HDD @ 5400 RPM (two drives being used for parity)
     Currently the system is only on a trial key, but I'm convinced and will be purchasing a license once the trial key gets closer to timing out. It could not be any easier!
    2 points
  2. Personally, I went with Unraid because I needed easily expandable storage that ran on what I had. It just turns out it does more.
    2 points
  3. Question 1): I don't think that's a good reason to use Unraid. Question 2): Maybe. Question 3):
    2 points
  4. Hello all, I've just built a new Unraid server, and while playing around with it a bit before I really start using it, I found the drive writes seemed to be a bit slow. I first noticed the slowness while I was doing a badblocks run on the new drives: after 50 hours it had only just finished the first pass, and for 4TB drives I understand the whole run should take a total of 40 hours or so. I did not see any errors, so I killed the process and proceeded to build the array with the drives. When it was doing the parity build, the stated write speed to the drives was 47MB/sec, which seemed a bit slow to me. I checked the specs from the manufacturer and they say it should be closer to 200MB/sec, so I'm getting about 25% of the speed.
     Next thing I did was boot into Windows and run a few benchmarking tools from there, and they all reported read/write speeds of about 200MB/sec. I did find that one of the LSI cards was in a PCIe 4x slot instead of an 8x one, so I moved it to a new slot. Next I tried a live Linux USB, to see if the problem was with Linux in general, since things seemed fine in Windows. I used Knoppix, ran a few tests, and the drive speeds were in the 200MB/sec range again?!? My conclusion is that the problem is with the Unraid distro and not the hardware.
     Now for some hardware details of my system:
     Supermicro X10DRi-LN4+
     2x Intel E5-2630
     3x LSI 9211 HBAs connected to a Supermicro BPN-SAS-846A backplane
     6x HGST 4TB 3.5" HDD HUS726040ALS214 SAS drives
     root@Thor:~# hdparm -Tt /dev/sdh
     /dev/sdh:
      Timing cached reads: 21860 MB in 1.99 seconds = 10970.43 MB/sec
      Timing buffered disk reads: 570 MB in 3.00 seconds = 189.82 MB/sec
     root@Thor:~# dd if=/dev/zero of=/dev/sdh bs=128k count=10k
     10240+0 records in
     10240+0 records out
     1342177280 bytes (1.3 GB, 1.2 GiB) copied, 28.5249 s, 47.1 MB/s
     I'm not really sure where to look next (a per-drive version of the dd test is sketched just after this item). thor-diagnostics-20180716-1149.zip
    1 point
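     A sketch to go with the numbers in item 4, not from the original post: the same dd test repeated per drive. The device names are placeholders, and dd if=/dev/zero overwrites the target, so only point it at disks with nothing on them. oflag=direct is added so the page cache doesn't inflate the figure; drop it to match the original test exactly.
         # Destructive sequential-write test on each listed drive (placeholders!)
         for d in /dev/sdh /dev/sdi /dev/sdj; do
             echo "== $d =="
             dd if=/dev/zero of="$d" bs=128k count=10k oflag=direct 2>&1 | tail -1
         done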
  5. If you go to the Dashboard and scroll down to the Array section showing the drives, the SMART icon for the drives in question will be orange. If you click on it you get a small drop-down menu with ‘Acknowledge’ as one of the options. Once you have done that, the icon turns green and you only get new notifications if the value changes (a command-line check follows this item).
    1 point
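     A small addition, not part of the answer above: if you want to see the raw attribute behind the orange icon before acknowledging it, smartctl shows the same data the dashboard summarizes. Replace sdX with the real device.
         # All SMART attributes for one drive
         smartctl -A /dev/sdX
         # Overall pass/fail health assessment
         smartctl -H /dev/sdX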
  6. Caaan doooo! Didn't realize 6.6 is stable ... sorry. Will try it and report back, thanks!
    1 point
  7. If you do not have a requirement for NAS-type storage (which is Unraid’s core functionality) then it is unlikely to be the best match for your needs. The support for VMs and Dockers is additional capability to make better use of the hardware you have.
    1 point
  8. 04/15/19 got me a trucker hat, can't wait
    1 point
  9. This thread is 2 years old, so I guess I won't accuse you of hijacking another user's support thread. It is usually better to start your own thread though.
     Just throwing another disk into the array for no particular reason is not really a good idea. I recommend not using any more disks than needed for capacity and adding others as needed. Fewer disks means fewer opportunities for problems.
     And I don't recommend using old or small disks. Larger disks are more cost effective in several ways. You get more storage per dollar. A large disk gets you the same amount of storage as several smaller disks and only uses one port. And larger disks typically perform better than smaller disks due to increased density. And as mentioned in the previous paragraph, fewer disks means fewer opportunities for problems.
     And you should never use any disk of any size unless you trust it. Parity by itself cannot recover anything. ALL bits of parity PLUS ALL bits of ALL other disks must be reliably read in order to reliably reconstruct a disk (see the toy demonstration after this item). So the idea that you might put in a small disk that you just happen to have and are uncertain about should really be reconsidered.
    1 point
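     A toy demonstration to go with the parity point above, not from the original post: Unraid's single parity is a bitwise XOR across the data disks, so rebuilding one disk needs the parity plus every other disk to read back correctly. One byte per "disk" is enough to show it:
         # One byte from each of three data disks
         d1=$((0xA5)); d2=$((0x3C)); d3=$((0x5A))
         p=$(( d1 ^ d2 ^ d3 ))       # the byte the parity disk stores
         rebuilt=$(( p ^ d2 ^ d3 ))  # recovering d1 needs parity AND d2 AND d3
         printf 'd1=%02X rebuilt=%02X\n' "$d1" "$rebuilt"  # both print A5
     If any one of p, d2 or d3 reads back wrong, the rebuilt byte is wrong too, which is exactly why every disk in the array has to be trustworthy.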
  10. By default, Plex only encodes with this Unraid build and the GPU passed through. There's a way to enable decoding as well, but it is not officially supported by the Linuxserver.io, Plex, or Unraid teams. Here's a script I've modified that can be used via the User Scripts plugin on a schedule to enable the decoding support: https://gist.github.com/Xaero252/9f81593e4a5e6825c045686d685e2428 I will update that script as needed until Plex adds official NVDEC support. Also note that NVENC and NVDEC DO NOT offload audio transcoding. This is generally of no concern unless you are dealing with lossless Atmos audio, which can be quite substantial to downmix and transcode.
    1 point
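     A side note, not from the post above: once the script is in place, nvidia-smi on the host is an easy way to confirm transcodes are actually landing on the GPU.
         # Processes currently on the GPU; a working setup lists the Plex transcoder
         nvidia-smi
         # Rolling per-second stats; the enc/dec columns show NVENC/NVDEC load
         nvidia-smi dmon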
  11. I would like some help with understanding my problem. First, I have things "working", but it bothers me that following the video from Spaceinvader didn't quite work. As I mentioned, I followed the instructions. When I point myself to the subdomain, I get the nginx default page; somehow it isn't seeing the subdomain redirection. I followed some other instructions, which mentioned that I can create a file in the appdata/letsencrypt/nginx/site-confs directory, and I did so with the name nextcloud. I can't seem to get sonarr working properly using the same method, as I get a bad gateway message.
     So, the questions I have are:
     1. Why does it not work with proxy-confs (note that I tried to use port 444, and also the IP in proxy_pass, no difference)?
     2. What is the secret sauce to get sonarr working the same way as I got nextcloud working, or to properly get it to work in proxy-confs (see the sketch after this item)?
     Hopefully this wasn't answered before, as I did search and read quite a few posts.
     File: appdata/letsencrypt/nginx/site-confs/nextcloud
         server {
             listen 443 ssl;
             server_name nextcloud.domainname.org;
             root /config/www;
             index index.html index.htm index.php;
             ### SSL Certificates
             ssl_certificate /config/keys/letsencrypt/fullchain.pem;
             ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
             ### Diffie–Hellman key exchange ###
             ssl_dhparam /config/nginx/dhparams.pem;
             ### SSL Ciphers
             ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
             ### Extra Settings ###
             ssl_prefer_server_ciphers on;
             ssl_session_cache shared:SSL:10m;
             ### Add HTTP Strict Transport Security ###
             add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
             add_header Front-End-Https on;
             client_max_body_size 0;
             location / {
                 proxy_pass https://10.99.2.10:444/;
                 proxy_max_temp_file_size 2048m;
                 include /config/nginx/proxy.conf;
             }
         }
     File: appdata/letsencrypt/nginx/proxy-confs/nextcloud.subdomain.conf
         server {
             listen 443 ssl;
             listen [::]:443 ssl;
             server_name nextcloud.*;
             include /config/nginx/ssl.conf;
             client_max_body_size 0;
             location / {
                 include /config/nginx/proxy.conf;
                 resolver 127.0.0.11 valid=30s;
                 set $upstream_nextcloud nextcloud;
                 proxy_max_temp_file_size 2048m;
                 proxy_pass https://$upstream_nextcloud:443;
             }
         }
    1 point
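     A sketch of the sonarr side, using the same proxy-conf pattern as the nextcloud.subdomain.conf above. It assumes the container is named sonarr, is reachable from the letsencrypt container, and serves plain HTTP on its default port 8989; adjust if yours differs.
     File: appdata/letsencrypt/nginx/proxy-confs/sonarr.subdomain.conf
         server {
             listen 443 ssl;
             listen [::]:443 ssl;
             server_name sonarr.*;
             include /config/nginx/ssl.conf;
             client_max_body_size 0;
             location / {
                 include /config/nginx/proxy.conf;
                 resolver 127.0.0.11 valid=30s;
                 set $upstream_sonarr sonarr;
                 proxy_pass http://$upstream_sonarr:8989;
             }
         }
     A bad gateway usually means nginx resolved the name but could not reach the backend, so the proxy_pass scheme (http, not https) and the port are the first things to check.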
  12. johnnie.black redirected me to this post, as the parity clearing time was showing 12 hrs. "sdparm: command not found" was the message at first, but as per AnnabellaRenee87's post I installed the NerdPack GUI plugin. After that, everything was magic: 12 hrs of parity work reduced to 3 hrs. This is a very helpful post. Thanks
    1 point
  13. Just wanted to come back to this thread and give an update. Since disabling C-States I have had no further hangs and the server is now sitting at 32 days uptime. Thanks everyone for your help!
    1 point
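     Not from the thread above, but a minimal check, assuming the kernel's cpuidle sysfs interface is present, to confirm which C-states the CPU is still allowed to enter after the BIOS change:
         # Idle states the kernel can use on CPU 0
         cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name
         # With deep C-states disabled, only shallow states (e.g. POLL, C1) should appear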
  14. After a ton of Google-fu I was able to resolve my problem. TL;DR: the write cache on the drive was disabled.
     I found a page called "How to Disable or Enable Write Caching in Linux". The article covers both ATA and SCSI drives, which I needed, as SAS drives are SCSI and a totally different beast.
     root@Thor:/etc# sdparm -g WCE /dev/sdd
     /dev/sdd: HGST HUS726040ALS214 MS00
     WCE 0 [cha: y, def: 0, sav: 0]
     This shows that the write cache is disabled.
     root@Thor:/etc# sdparm --set=WCE /dev/sdd
     /dev/sdd: HGST HUS726040ALS214 MS00
     This enables it, and my writes returned to the expected speeds.
     root@Thor:/etc# sdparm -g WCE /dev/sdd
     /dev/sdd: HGST HUS726040ALS214 MS00
     WCE 1 [cha: y, def: 0, sav: 0]
     This confirms the write cache has been set (a loop to check every disk, and how to persist the setting, follows this item).
     Now I'm not totally sure why the write cache was disabled under Unraid; bug or feature? While doing my googling there was a mention of a kernel bug a few years ago where, if system RAM was more than 8GB, it disabled the write cache. My current system has a little more than 8GB, so maybe?
    1 point
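     A follow-up sketch, not from the post above: the sav: 0 in that output means the setting is not saved across power cycles. Assuming the disks enumerate as /dev/sd?, this checks each one, and sdparm's --save option persists the bit on drives that support saved mode pages.
         # Report the write-cache (WCE) bit for every disk
         for d in /dev/sd?; do
             sdparm -g WCE "$d"
         done
         # Enable write cache and store it in the drive's saved mode page:
         # sdparm --set=WCE --save /dev/sdX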
  15. In advanced mode the list is always expanded. Switch to basic mode to get a collapsed list.
    1 point
  16. THANK YOU! I followed your points and got it working. I need to read/play a bit with the VM stuff, I think. I didn't have the disk set to SATA; it was set to VirtIO (a command-line check follows this item).
    1 point
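     An added sketch, not part of the reply above: the disk bus can also be checked from the command line, assuming the VM is defined in libvirt under the name shown on the VMs tab (MyVM is a placeholder).
         # Dump the VM definition and show the disk stanzas
         virsh dumpxml MyVM | grep -B2 -A4 '<disk'
         # SATA shows:   <target dev='hdc' bus='sata'/>
         # VirtIO shows: <target dev='vda' bus='virtio'/>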