neilt0

Everything posted by neilt0

1. I've Googled for an answer, but can someone advise why an inbound backup is failing? Config: new build with unRAID 6.2 Stable; 2 drives installed (2x 8TB, 1 is parity); no cache drive; ~7TB free. It fails with: Thanks!
2. Just to follow up on this, I started a parity build on my Gen8 and it was running at about 39MB/sec. I booted into the BIOS, switched on the disk write cache and restarted the parity build. It's now running at 185MB/sec.
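     For reference, a minimal way to check the same thing from within unRAID rather than the BIOS, assuming hdparm is available and /dev/sdX is one of your actual drives (the BIOS toggle is what fixed it in my case; setting it with hdparm may not persist across a power cycle):

     hdparm -W /dev/sdX      # report whether the drive's write cache is currently enabled
     hdparm -W1 /dev/sdX     # enable the drive's write cache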
3. You may need to set the controller to JBOD/AHCI. Also, the 5th SATA port runs at 3Gbit/sec IIRC. On mine, I set it up as AHCI and not as RAID, IIRC.
  4. My parity build this week for 6x 4TB drives took 11.1 hours on an N54L, which is about right.
  5. Sounds like one of the drives is attached to port 5 or 6 and the SATA unlock firmware has not been flashed. You can find more info in the thread in my sig.
  6. You won't get 80MB/sec writes to the array. I get around 40MB/sec on my N54L, maybe a little faster with an empty drive. I'm due to re-add a 7200rpm warranty replacement drive soon, so I'll post speeds when that's in.
  7. When I've built systems for people in the past, I've got commitment in writing and they paid up front. This f***er better at least pay for your losses.
  8. I run unRAID on an N54L and it's fast enough for my purposes. I have 10 drives attached (3 are 2.5" and one is external). I use an Adaptec 1430SA. I also have a Gen8, but am not running unRAID on it yet. I think I paid £180 with £100 cashback. Also used to have an N36L, but sold it recently.
  9. I have 10 drives in mine (well, one is external and 3 are 2.5"). 1430SA card to attach 4 in addition to the 6 onboard SATA ports.
10. Is that from a HP microserver?

      Never had any issue with mine, always good constant speed in every unRAID release, but only have 4 array disks + cache. My last parity check: Last checked on Wed 28 Oct 2015 06:20:53 AM GMT (today), finding 0 errors. Duration: 10 hours, 12 minutes, 35 seconds. Average speed: 108.9 MB/sec

      Yes, that's what I got before this release. Are you running 6.1.3? I have an Adaptec 1430SA as well, so this release appears not to like it (and others like the SAS2LP?)

      This issue has been resolved!! 8) Thanks to Eric Schultz, who found that if you change the nr_requests for all drives that have certain interface characteristics [specifically any drive using the 'ATA8-ACS' spec] to a lower value than the default 128, it will resolve this issue. The simplest thing is to just do it for all of your drives, i.e. type the following line for EVERY drive in the array:

      echo 8 > /sys/block/sdX/queue/nr_requests

      ... where sdX is, of course, the actual identifier for each drive (sda, sdb, etc.). No need to do this for your cache drive or USB flash drive. Note that the 1430SA cards have a Marvell chip on them, which is why they are causing this issue. In any event, Eric has clearly isolated this issue -- I tried it and it works perfectly => my parity check speeds are back to what I always had with v4 and v5. Eric's detailed post is here (and follows through the rest of the thread): http://lime-technology.com/forum/index.php?topic=42629.msg417261#msg417261

      Note that if you read on past my original post, my setup with a 1430SA does not in fact have a severe slowdown during parity build. ETA: I'll try it with a parity check later.
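      For anyone wanting to apply Eric's tweak in one go, a minimal sketch (not from his post) that loops over every /dev/sd* device is below. It also touches the cache drive and USB flash, which the note above says is unnecessary, and the setting does not survive a reboot, so it would need re-running (e.g. from your go file) after each boot:

      for q in /sys/block/sd*/queue/nr_requests; do echo 8 > "$q"; done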
11. It completed in 12.04 hours, which is about right. Maybe a little slower than under earlier OS builds, but not appreciably. 4TB over 43,368 seconds ≈ 92.2MB/sec.
  12. It seems to have sped up and is now at over 100MB/sec with 62% done. Weird. I guess we'll see how quick it does it when it finishes overnight!
13. Is that from a HP microserver?

      Never had any issue with mine, always good constant speed in every unRAID release, but only have 4 array disks + cache. My last parity check: Last checked on Wed 28 Oct 2015 06:20:53 AM GMT (today), finding 0 errors. Duration: 10 hours, 12 minutes, 35 seconds. Average speed: 108.9 MB/sec

      Yes, that's what I got before this release. Are you running 6.1.3? I have an Adaptec 1430SA as well, so this release appears not to like it (and others like the SAS2LP?)
  14. I've read most of this thread, but not every thread on the board. Has the slow parity check/build problem been identified? This usually takes about 14 hours on my setup (6 drives on motherboard SATA, 2 on a 1430SA), not 31 hours!
15. There may be a workaround; there was a thread about it here, but ISTR it didn't work.
16. Hyper-V won't pass through USB for the unRAID license AIUI.
17. http://www.theregister.co.uk/2015/09/01/seagate_8tb_triplets_2tb_mobile_nipper/

      "Seagate births 8TB triplets and a 2TB mobile nipper
      Who says spinning rust is finished?
      1 Sep 2015 at 12:03, Chris Mellor

      Who says spinning rust is finished? Seagate has rolled out 8TB triplets and a 2TB mobile nipper, using shingled recording on its 8TB Kinetic.

      The three 8TB disks use ninth generation perpendicular magnetic recording, and represent a 33.3 per cent capacity uplift on the existing 6TB technology that Seagate uses in its 3.5-inch disk drives. There is an 8TB enterprise capacity drive, one for network-attached storage (NAS), and a Kinetic Ethernet-addressed, key:value store drive.

      [Image: Overview table of Seagate's 8TB HDD trio. Note on MiBs and MBs below*]

      We'll note that HGST has an existing 8TB drive, the He8, and also the shingled 10TB He10, each with 7 platters inside their low-friction helium-filled enclosure. Both Toshiba and Western Digital's (WD's) 3.5-inch drive product ranges top out at 6TB. (Actually WD has a 6.3TB Ae archive drive, but what's 0.3TB between friends.)

      The 8TB Kinetic drive, a direct-addressed, object-storing drive, comes in 4TB or 8TB versions. It spins at a relatively slow 5,900rpm and the 8TB model uses shingled magnetic recording. Shingling involves overlapping the wider writing tracks but not the narrower read tracks to increase capacity. Blocks of tracks have to be re-written if any data in the block changes, which slows write speed compared to non-shingled drives.

      There are 4 x 1TB platters in the 4TB version and 6 x 1.333TB ones with the 8TB drive. The sustained data transfer rate is up to a relatively slow 60MiB/s* at 4TB and 100MiB/s at 8TB. It is fitted with either a 64MB cache (4TB version) or a 128MB one (8TB version) and comes with an 800,000 MTBF (mean time before failure) rating. The workload rating is less than 180TB/year with 24/7 operation – 8,760 power-on hours/year. Regard this as a base spec and we'll move on to the Enterprise NAS and Capacity drives.

      [Image: Seagate OneStor AP-2584]

      Seagate is building a Kinetic OneStor 5U enclosure for November availability. It will have 84 drive bays and, filled with 8TB Kinetic drives, will provide 672TB per chassis, 5.3PB per rack. There is a data sheet for the existing OneStor AP-2584 here, which is based on using Seagate's 6TB drives. The box is described as being a foundation for scale-out architectures, and has single or dual pluggable embedded server modules.

      Seagate is encouraging Kinetic drive and system momentum. It says SwiftStack controller software and Scality software support will arrive in the October-November period. The 8TB Kinetic drive will arrive in January 2016 along with Open vStorage software. This is an open-source VM storage router for virtual machines, a software layer installed on a host or cluster of hosts on which virtual machines are running. A Google cached webpage states, "These VM storage routers (VSRs) operate like a grid leveraging local flash memory or SSDs and any storage back-end (S3 compatible object store, (distributed) filesystem, NAS) to provide an extremely high-performance and reliable storage system as compared to buying expensive SANs or NAS for VM storage." Envisage Open vStorage as an open source rough equivalent to VMware's VSAN, but with multiple hypervisor support.

      NASty drive business

      The 8TB NAS spinner rotates faster, at 7,200rpm, and is for mid-range NAS, server and scale-out cloud-storage in 1 – 16-bay enclosures. There are six 1.333TB platters. The capacity points are 2, 3, 4, 5, 6, and 8TB. It has a 6Gbit/s SATA interface, a larger 256MB cache, and the sustained data rate is 216 to 240MB/sec. The drive is also designed for 24x7 operation/8,760 power-on hours/year, but with a 300TB/year workload.

      [Image: How Seagate positions its NAS disk drives]

      Enterprise Capacity drive

      The Enterprise Capacity drive comes in a single 8TB capacity point with either 12Gbit/s SAS or SATA interfaces. Seagate's previous product topped out at 6TB and used, we understand, 6 x 1TB platters with a 633Gbit/in² areal density. Its sustained data rate was up to 226MB/sec, whereas the new drive reaches up to 237MB/sec, helped by having a 256MB cache instead of the 128MB used previously. Seagate says the new drive has a 100 per cent random write performance improvement over the last generation, helped by better caching. The prior drive featured a 1.4-million-hour MTBF rating. The new one has been uprated to 2 million hours, so it's both fatter and faster and more reliable. The workload rating is 550TB/year with 24x7 operation.

      Mighty mobile mini

      Seagate has also revealed 2TB, 2-platter mobile drive technology. Previously it used 500GB platters, but now it has doubled that. The drive is 7mm thick and Seagate says it's 3.17 oz in weight. It is 25 per cent lighter than Seagate's previous generation, with Seagate saying it combines "new mechanical firmware architectures, with state-of-the-art heads, media and electronic design." We can expect this to be used in Seagate small PC and notebook branded drive products. It might, we suppose, bring out a thinner still, 1-platter, 1TB version if there is OEM demand for it. The company says it's evaluating a hybrid flash/disk version of this drive with a NAND cache to provide faster data access speeds. The company isn't saying anything about spin speeds. We reckon 7,200rpm is a certainty but 10,000rpm a little less so.

      The company thinks that helium-filled drives cost more to make than air-filled drives and it won't suffer profit and revenue-wise through not having them in its portfolio this year. We think that its engineers are working on Seagate's own helium-filling technology though, as this enables high capacity through adding a platter and the disk drive future is a capacity-focused one. It must be a racing certainty that WD will use HGST's helium-drive technology once China's MOFCOM gives the go-ahead to its operational merger with HGST. ®

      * 1MB is based on 1KB = 1,000 bytes while 1MiB is based on 1KiB = 1,024 bytes."
18. The June 2015 update fixes the fan control in AHCI mode. I certainly wouldn't have bought the Gen8 at the original price, but mine was £100, so it's a bargain. It's much quieter than my N54L. My drives ran cool in it, albeit in a different location from my N54L. I like ILO a lot. What's wrong with it? I did look at the TS140, but was too late to hop on the deal for that.
  19. I run mine with the controller set to AHCI. Do a full (ISO boot) upgrade of the BIOSes first.
20. SuperNews is a cheaper version of GN (see my sig - same servers, just cheaper). I found a small but significant speedup using the native version vs. the Docker - that's with a slowish CPU and a fast connection. I went from about 18.5MB/s to closer to 20MB/s (line speed).
  21. Yes, I was going to say, try the universal installer. It got my speeds up. I get line speed (20MB/sec) and that's on a HP N54L (slow). AW has had problems recently. Try GigaNews or SuperNews and more connections. Also, make sure your article cache is at least 500MB. Mine is at 1.5GB. You may also need to use an SSD.
  22. You can power up the server from the OFF state using ILO. Including from the ILO phone app.
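      Another option, if IPMI-over-LAN is enabled in the ILO settings, is ipmitool from another machine on the network; this is just a sketch with a hypothetical address and credentials:

      ipmitool -I lanplus -H 192.168.1.50 -U admin -P password chassis power on    # power the server up remotely via ILO's IPMI interface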
23. The Gen8 uses power for ILO4 even when powered off, so to save electricity, you need to power it off at the wall. I have one, and before it becomes an unRAID server, I have been powering it off at the wall. I'm sure my network switch uses more power overall, though...
  24. Another vote here. I've never understood why this hasn't been implemented. I could never get that vfs_recycle thing working.