Leaderboard

Popular Content

Showing content with the highest reputation on 03/08/17 in all areas

  1. So they responded with a new bios to test (L1.87). I'll give it a whirl and post back my results. It's different from the last beta bios they sent me (P1.89E), so hopefully it will solve this!
    2 points
  2. For some time now I wanted to upgrade my homeserver. The one I have now is based on a 1st gen Core i5 processor with 8GB of RAM sitting on an m-ITX motherboard with only four SATA connections. The four SATA connections quickly became an obstacle to expanding my array of hard disks. I solved that by buying a cheap SATA controller I/O board, and to my surprise it actually worked very well. The SATA controller allowed for an additional 4 SATA connections, which of course is great, but now I don't have the opportunity to add other I/O cards because of the m-ITX format's limitation of only having one pc
    1 point
  3. Sorry to double post, but I'm not getting any assistance in the docker container support thread. My CrashPlan docker continues to crash randomly, about once a week or so, and I have no idea what it's complaining about or why it's crashing. This is what pops up in the log when it happens:
        *** Shutting down runit daemon (PID 22)...
        XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":1" after 2120 requests (2120 known processed) with 0 events remaining.
        ./run: line 20: 39 Killed $JAVACOMMON $SRV_JAVA_OPTS -classpath "$TARGETDIR/lib/com.backup42.desktop.jar:$TARGETDIR/lang"
    1 point
  4. Welcome to v6+. The best way to add "Apps" in unRAID 6 is to use the Community Applications Plugin: https://forums.lime-technology.com/topic/38582-plug-in-community-applications/ When you add this plugin you will get a tab within the unRAID GUI called Apps. Within this tab you can search for "Plex", choose the linuxserver container and install it. Once installed, the container will appear in the Docker tab, where you can configure it. Useful links for you: v6 getting started: https://lime-technology.com/wiki/index.php/UnRAID_6/Getting_Started#Getting_
    1 point
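    For anyone who prefers to see what that ends up as under the hood, the Apps-tab template corresponds roughly to a docker run command like the sketch below. Only the linuxserver/plex image comes from the post above; the paths, PUID/PGID values and network mode are illustrative assumptions, not a definitive template.
        # Roughly what the Apps-tab Plex template ends up running (paths are examples)
        docker run -d --name=plex --net=host \
          -e PUID=99 -e PGID=100 -e VERSION=docker \
          -v /mnt/user/appdata/plex:/config \
          -v /mnt/user/Media:/data \
          linuxserver/plex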
  5. Install Community Applications. The notion of repositories isn't really used anymore within unRaid (and that's not the repository URL that you posted).
    1 point
  6. I thought it would be helpful to do a separate post on Ryzen PCIe lanes, since it is a bit confusing right now. Here's a crash course. DISCLAIMER: I may be wrong on a detail or two, but if so I blame the tech sites, many of which have posted bad info. Also keep in mind that with ~80 different motherboards on the market, in different configurations, there may be some exceptions to the info I post below. Ryzen has 24 PCIe 3.0 lanes. Total. That's it. On X370, B350 and A320 chipset motherboards, 4 of those lanes are dedicated to the chipset. That leaves 2
    1 point
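    If you want to check how many lanes a slot is actually negotiating on your own board, lspci can report the link width and speed per device; a minimal sketch (run as root):
        # Show each PCI device followed by its negotiated link status (LnkSta)
        lspci -vv | grep -E '^[0-9a-f]{2}:|LnkSta:'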
  7. If you want to improve parity check speed with your current setup, try this:
        6 disks onboard (use your fastest disks only, the 4 and 8TB)
        6 disks on SASLP #1 using PCIE1
        5 disks on SASLP #2 using PCIE4
     Divide the slower 2TB disks evenly between the 2 SASLPs. With the right tunables this should give a starting speed of around 100MB/s, eventually decreasing a little during the first 2TB but speeding up considerably once past that mark; total parity check time should be well under 24 hours.
    1 point
  8. 1 point
  9. The parity is absolutely not valid if you removed a disk from the array.
    1 point
  10. I have a mixed array of 2, 4, and now 8TB drives and thought I would do a little testing. First off, this is my initial build: http://forum.kodi.tv/showthread.php?tid=143172&pid=1229800#pid1229800 Since then I have replaced the CPU with an Intel® Core™ i5-3470 and just recently replaced the cache drive with a Samsung SSD 850 EVO 250GB, along with a new Seagate Archive HDD v2 8TB 5900 parity drive. Here is what my array looks like ...just FYI... So here is a pic of my parity history with the 4TB against the 8TB. So after reading about how fast everyone's
    1 point
  11. I think I have an approach. The following command gives this output. It shows I have 5 x 8GB sticks installed, but I know I have 6 x 8GB sticks installed. The Locator field tells me the name of the DIMM slot, so it looks like P1 DIMM 3A is either a bad slot or had a bad stick. Did I interpret this properly? I have attached a diagram of the motherboard.
        # dmidecode -t 17
        # dmidecode 3.0
        Getting SMBIOS data from sysfs.
        SMBIOS 2.6 present.
        Handle 0x0017, DMI type 17, 28 bytes
        Memory Device
                Array Handle: 0x0015
                Error Information Handle: Not Provid
    1 point
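    A quick way to tally the populated slots from that same output is to filter for the Size and Locator fields; an empty or undetected slot typically reports "No Module Installed":
        # List every DIMM slot with its reported size and label
        dmidecode -t 17 | grep -E 'Size:|Locator:'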
  12. I am just seeing this thread. When dealing with an older, "fragile" array, I would recommend leaving it alone and building a new modern server. You didn't mention the total data capacity of your existing server, or the actual disk sizes involved, but with 8T drives you'd be able to drop the drive count significantly. Once the new server is ready, you could copy the data over the lan without any changes to the old server, which could live on as a backup server. But given where you are, this is what I would do. I would buy some 8T drives, preclear them, and create a new
    1 point
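    For the copy-over-the-LAN step, something like rsync preserves permissions and can safely be re-run if interrupted; the hostname and share name below are placeholders, not details from the post:
        # Pull one share from the old server onto the new one (names are examples)
        rsync -av --progress root@oldserver:/mnt/user/Movies/ /mnt/user/Movies/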
  13. I've seen it before when the bzroot files are either missing or corrupt. Manually download Unraid from the LT website. Extract bzroot, bzroot-gui and bzimage and copy them to your flash drive.
    1 point
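    On unRAID the flash drive is mounted at /boot, so after copying the files over it's worth a quick sanity check that all three are present and non-zero in size:
        # Confirm the kernel and rootfs files made it onto the flash
        ls -lh /boot/bzimage /boot/bzroot /boot/bzroot-gui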
  14. Although I would like to see such information, I suspect it is not available. I think the Spindown messages relate to specific events within unRAID, while the Spinups are likely to happen automatically when an access is made to the drive (without an explicit Spinup command being issued). The closest I could see is adding a message to the log on the periodic drive checks when the spin state is found to be different from the last one logged. Although this may mean the log message is delayed from the actual event happening, it would still be useful information.
    1 point
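    In the meantime, the current spin state of a drive can be checked from the console; as far as I know hdparm -C reports the power mode without waking the drive, so polling it is one possible stopgap. The device name is an example:
        # Report whether the drive is active/idle or in standby (spun down)
        hdparm -C /dev/sdb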
  15. The ones I've seen reported were a bit over 15 hours, on arrays with all 8TB Seagate Archive drives.
    1 point
  16. You can, but don't forget you need to create the destination folder before doing the restore.
    1 point
  17. Possibly, but keep in mind there have been several reports that Linux support is much better in a later kernel than what is currently in unRAID. With that in mind, I wouldn't expect much of a positive result until unRAID's next update that includes the new kernel. I'm sure limetech has internal builds that they are testing, but don't expect to hear about them. Realistically, I'd say wait for the next round of public unRAID betas before even contemplating a Ryzen build unless you are a willing guinea pig.
    1 point
  18. For the future, follow the FAQ instructions to remove a cache device; it's much safer. For now your best bet is probably to try and mount it read-only, copy all the data off, and format:
        mkdir /x
        mount -o recovery,ro /dev/sdX1 /x
     Replace X with the actual device.
    1 point
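    As a follow-on to the steps above, once the old cache device mounts read-only the copy itself can be done with rsync before reformatting; the destination below is just an example path on the array:
        # Copy everything off the read-only mount, then unmount before formatting
        rsync -av /x/ /mnt/user/cache_rescue/
        umount /x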
  19. Funny enough, though I have other issues with my board (and others have the same as well) with false temp sensor events and fans spinning to max on their own, I have no problems using the Marvell SE9230 controller. I have all 4 Marvell ports connected, and am running 2 VMs, and haven't had an issue yet... I should mention that one of my drives *did* just drop out of the array this morning, but I was messing around in there yesterday and may have jostled a cable (haven't had time to confirm which controller the drive was attached to)...
    1 point
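    For what it's worth, confirming which controller a given drive hangs off doesn't need a teardown: the sysfs path for the block device embeds the PCI address of its controller. The device name and PCI address below are examples:
        # The PCI address in this path identifies the controller the drive sits behind
        readlink -f /sys/block/sdb
        # Then look that address up by name
        lspci -s 03:00.0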
  20. Yes, avoid the Marvell ports. It's a known issue that they go a little crazy if you have virtualization enabled. It's discussed here:
    1 point
  21. @Positivo58's issue is solved, as it turns out it was an issue with the VLAN/network setup preventing PMS from pulling the Plex token initially.
    1 point
  22. Solved it today. Did some more searching around, trying to use virtfs etc. to get some more data about what is happening. Didn't progress much with that, but while reading about different solutions to this type of problem, I found out that letting your free disk space drop below a certain point can cause running VMs to instantly pause without any appropriate messages whatsoever. It seemed a bit strange as my VMs have all their disk space (40GB each) pre-allocated, but the problem is with the amount of free space the system uses
    1 point
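    If anyone else hits this, the quick things to check are the free space on the device backing the vdisks and whether libvirt has paused the guests; once space is freed, a paused VM can usually be resumed. The path and VM name are examples:
        # Check free space where the vdisks live
        df -h /mnt/cache
        # See which VMs libvirt reports as paused, then resume one
        virsh list --all
        virsh resume Windows10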
  23. No, they are both exactly the same size. All you would need to do is ...
        (a) Do a parity check to confirm all is well before you start.
        (b) Swap the parity drive and wait for the rebuild of the new drive (the WD Red).
        (c) Do a parity check to confirm that went well.
        (d) Now add the old 8TB shingled drive to your array.
     If you haven't seen any issues with the shingled drive, it's probably not really necessary to make this switch; but it IS true that if you are ever going to hit the "wall" of performance with the shingled drive (i.e. a full persistent
    1 point
  24. I copied 6TB from an 8TB Archive (ST8000AS0002) to an 8TB NAS (ST8000VN002) and saw a consistent 180-200MB/s for the whole transfer (using Intel H170 SATA, running Win10 Pro). I've never seen inconsistent read performance like that. I have to say, I have no regrets in buying the Archive drives; they're quiet, fast and run cool. So far, the pair I have has given flawless performance for well over a year. They're also the original Archive v1 drives, so aren't even as good as the ST8000AS0022 Archive v2 units.
    1 point
  25. I regularly do a single write of 100-120 GB of data at a time with no speed issues at all; with "reconstruct write" on I nearly always max out the Gigabit connection, though it occasionally drops from 113MB/s to 90MB/s. I have 11 of these shingled drives in the server, 2 of them are parity.
    1 point
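    "Reconstruct write" (turbo write) is normally toggled under Settings -> Disk Settings; it can reportedly also be switched from the console with mdcmd, though treat the exact invocation below as an assumption to verify rather than gospel:
        # 1 = reconstruct write ("turbo"); verify the value mapping before relying on it
        mdcmd set md_write_method 1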
  26. You apparently haven't read about the mitigations Seagate has incorporated into these drives to offset the potential issues with shingled technology. These are outlined in some detail in the 2nd post in this thread -- the most relevant fact vis-à-vis typical UnRAID usage is "... If you're writing a large amount of sequential data, you'll end up with very little use of the persistent cache, since the drives will recognize that you're writing all of the sectors in each of the shingled zones. There may be a few cases where this isn't true - but those will be written to the persistent cache,
    1 point
  27. Are you both confirming that, following testing of the drive for infant mortality, things were fine and then a Parity Check killed the drive? Can you post the evidence to support these claims so we can view it? I only ask because I assume you would have collected this to support your RMA claim, and it would be of benefit to the community. As for reliability, I have empirical evidence that these drives, in good health, are indeed an excellent choice for use with unRAID. I have these running in my Backup Server - an all Seagate Shingled 8TB drive array (single Parity). Running alongside my Main Server - an
    1 point
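    For anyone gathering that kind of evidence, the usual artefact is a full SMART report captured before the drive goes back for RMA; the device name is an example, and some controllers need an extra -d option:
        # Capture a full SMART report for the suspect drive to a dated file
        smartctl -a /dev/sdf > smart_sdf_$(date +%Y%m%d).txt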