Leaderboard

Popular Content

Showing content with the highest reputation on 03/03/17 in all areas

  1. I understand you have no more PCIe slots available; that is why I suggested a SAS expander to use with your IBM M1015. If you use the same model I have (Intel RES2SV240), it has a Molex connector for power. It does NOT need to be plugged into a slot, but it CAN be if you don't want to use the Molex for power; just put electrical tape over the circuit traces so you don't short anything out. You would use an SFF-8087 to SFF-8087 cable to go from your M1015 to the RES2SV240. From that you would use more SFF-8087 to SFF-8087 cables to go to the backplanes on your Norco 4224. That is how I have two of my servers set up: one M1015 to one RES2SV240 for 24 SATA port connections in each server. A quick way to check that everything is detected afterwards is sketched below.
    2 points
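     A quick sanity check after cabling it this way (a sketch assuming a stock unRAID/Linux shell; device names are illustrative and the lsscsi tool is not guaranteed to be installed) is to confirm every disk behind the expander shows up:

       # One symlink per physical disk (partition entries filtered out); with all
       # 24 bays populated you would expect 24 disk entries here.
       ls /dev/disk/by-id/ | grep -v part

       # If lsscsi is installed it also lists the HBA, the expander (usually as an
       # enclosure device) and each attached disk.
       lsscsi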
  2. I do too, and my USA connections support IPv6 while my China ones don't presently. My point is that it's the China side that has the need. On the China side my router thinks my external IP is one thing, while ipecho.net confirms that my router is being fooled (a quick check is sketched below). And my external IP is changing every 48 hours like clockwork. It's a total pain.
    2 points
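     A quick way to compare what the router reports with what the outside world actually sees (ipecho.net is the service mentioned above; the exact /plain endpoint is an assumption):

       # Ask an external service which public address our traffic appears to come from,
       # then compare it with the WAN address shown in the router's own UI.
       curl -s https://ipecho.net/plain; echo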
  3. When we refresh all our containers on Friday night every week, whatever version is available will be in the container. For Plex, this is the latest public release. If there is a new version released after we build the container, you only need to restart the Plex container and it will automatically pull the latest version.
    1 point
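     That restart can be done from the unRAID web UI or from a shell; a minimal sketch (the container name "plex" is an assumption, use whatever docker ps reports on your box):

       docker ps --format '{{.Names}}'   # find the exact container name
       docker restart plex               # the container pulls the latest Plex version on restart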
  4. After restarting the docker, did you go into Plex and see if it's the latest version? I'm glad my help is entertaining for you.
    1 point
  5. Careful so you don't end up with a barman who is studying advanced math at university and has the bar job for extra money. Might be expensive for you.
    1 point
  6. In my opinion, 'bitrot' is the biggest FUD (Fear, Uncertainty, Doubt) non-issue out there! The read error rate on hard drives, while a very, very small percentage of all reads, probably happens on most hard drives several times a day. This fact has been known for years. Drive manufacturers have implemented several layers of data error detection and correction to find and correct most of them. If you look at the SMART attributes (say, at Wikipedia), you will see that they actually flag some sectors as unreliable and schedule them for action at some point in the future. Having said all of that, is there a possibility that so many errors occur in a data block that the detection scheme erroneously assumes the data is correct? Absolutely! That is why some knowledgeable folks have a secondary (and, in some cases, tertiary) backup of the really important stuff that can't be restored by other means. Graycase mentioned the file checksum utility. Depending on your level of paranoia, that will provide you with a level of confidence that you haven't had any bitrot occur (a simple do-it-yourself version is sketched below). But your backup is your security blanket. In the five-plus years that I have been watching this board, I have not heard of a single case of even suspected bitrot...
    1 point
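     For anyone wanting to try the checksum approach mentioned above with nothing but standard tools, a minimal sketch (the share path and output file are illustrative; dedicated checksum plugins exist as well):

       # Take a checksum snapshot of everything under a share.
       cd /mnt/user/Photos
       find . -type f -print0 | xargs -0 sha256sum > /boot/photos.sha256

       # Later, re-verify; anything reported as FAILED is either bitrot or a file
       # that was legitimately changed since the snapshot was taken.
       sha256sum -c /boot/photos.sha256 | grep -v ': OK$'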
  7. http://lime-technology.com/forum/index.php?topic=48508.msg484480#msg484480
    1 point
  8. negotiates world peace and settles the question once and for all of which came first, the chicken or the egg
    1 point
  9. And having done some research, the DoD actually does have that many IPv6 addresses, so I think there's a typing error in jonp's post.
    1 point
  10. And 2^128 IPv6 addresses = 340,282,366,920,938,463,463,374,607,431,768,211,456. Which is apparently said out loud like this... "340 undecillion, 282 decillion, 366 nonillion, 920 octillion, 938 septillion, 463 sextillion, 463 quintillion, 374 quadrillion, 607 trillion, 431 billion, 768 million, 211 thousand and 456"
    1 point
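     That figure is easy to double-check from any shell with an arbitrary-precision calculator to hand:

       python3 -c 'print(2**128)'
       # 340282366920938463463374607431768211456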
  11. Just built a system for my folks: MSI B250M Pro-VH, Pentium G4560, 16GB Corsair Vengeance 2400, Fractal Arc R2. That came to £300 (+ HDDs, SSD, PSU, GT730 from a previous HTPC). The CPU isn't good enough for a VM (LibreELEC); honestly 1 core + hyperthread is just a waste of time. It's extremely choppy and slow. Using just Sonarr, Radarr, Plex and Plex Connect as dockers and it's pinned to the floor screaming. I've tried this hardware on Ubuntu and am currently testing on Win7, and I've had messages from my folks reporting the playback isn't smooth. Looks like I will end up putting unRAID back in and using it as a fileserver/NAS until 6700s drop right down in price. Closing notes: get a quad core with HT, or if you want to just hold off, use a cheap CPU and a cheap media player (AFTV, Roku etc.) until you can afford it. Wish I'd never built this one for them; it's caused more work than was ever needed.
    1 point
  12. I'd still recommend a high-endurance SSD for cache. Mine's a Micron/Dell enterprise NVMe SSD.
    1 point
  13. SSD cache only serves writes on shares configured to use it, and reads only until the mover shifts the data down to the array. 10GbE in my unRAID is only really helpful for multiple simultaneous streams, and for some reason it seems to help inter-docker transactions.
    1 point
  14. Another option would be to replace the M1015 with an LSI 9201-16i. It has 16 ports available instead of 8, so you would have the 4 more you need plus 4 extra. But that is also expensive; I bought mine for about $384 (if I remember correctly) off of Amazon. Your Marvell 9230-based controller is a candidate for using a port multiplier, but with my EP2C602-4L/D16 I found it wouldn't work reliably for me with unRAID, which is why I bought the 9201 and replaced an M1015. You might have better luck than I did with the Marvell controller. I was passing it (the Marvell 9230) through to a WHSv1 VM and every 3+ weeks it would drop the drives (the same problem I had with my port multipliers on an earlier server). I needed VT-d so I couldn't turn that off, I already have the latest BIOS installed, and adding iommu=pt had no effect (a sketch of where that parameter goes is below); those are three of the recommendations in this thread for it:
    1 point
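     For reference, the iommu=pt parameter mentioned above goes on the kernel boot line; on unRAID that normally means editing the syslinux config on the flash drive (the path and the rest of the append line shown here are assumptions, check your own file):

       # /boot/syslinux/syslinux.cfg  (edit the append line of the default boot entry)
       label unRAID OS
         menu default
         kernel /bzimage
         append iommu=pt initrd=/bzroot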
  15. Hmm. Most motherboards don't have port multiplier support on their SATA ports. Most of the time you'll need a SATA card, but port multipliers only seem to work with SiI3132 chips (a quick way to check what chip a card uses is sketched below). It should be fine, but in my experience they sometimes drop out the entire set of discs at once. I guess it depends on the quality of the card and enclosure you are using.
    1 point
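     If you are not sure which chip a given card or onboard controller actually uses, lspci will usually tell you (the grep pattern and the sample output line are illustrative):

       lspci | grep -iE 'sata|raid|sas'
       # e.g. "02:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 ..." would be
       # one of the port-multiplier-capable chips mentioned above.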
  16. Never had good luck on Windows with port multipliers; it is probably different with Linux/unRAID. My problem was Windows would drop the drives connected to the port multiplier every month or so. If this is for your EP2C602-4L/D16 motherboard and M1015 controller, then I would suggest using a SAS expander to expand your port count. It would be much better than the port multiplier. That of course assumes the box is the one listed in your sig.
    1 point
  17. Quite right; however, provided that you have a large SSD and utilise TRIM etc. you won't run into an issue for years, or you can have an enterprise SSD which is designed for endurance, like the one I listed; mine is entirely SLC over NVMe U.2. Nothing you can't live without should be kept long-term on the cache. Dockers can be rebuilt; I run the mover once a week (it can also be run by hand, see below), and anything from that week can be easily redone. Configurations and other important things are backed up to the unRAID array itself and a secondary NAS which is offsite.
    1 point
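     If you ever want to flush the cache ahead of the weekly schedule, the mover can also be kicked off by hand (a sketch; the script's exact location has varied between unRAID versions):

       /usr/local/sbin/mover      # run the stock mover script immediately
       tail -f /var/log/syslog    # mover progress is logged to the syslog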
  18. So going back to jonp's question regarding why we need IPv6: I think one thing to look at, purely from a marketing viewpoint, is which current NAS offerings support IPv6. From my quick googling around I can't see one yet that doesn't, other than, yep, you guessed it, unRAID. So although this could be taken as a weak argument for what is probably a significant amount of work, I think it also needs to be weighed up; as a potential customer, lack of IPv6 support would be a worry for me, not so much when I first purchased an unRAID licence but more so as time has progressed. As a real-life case for having IPv6 support, I saw a user who wanted to run the DelugeVPN Docker container with a particular VPN provider, but this wasn't possible because the provider only allocated IPv6 tunnel addresses, and as unRAID doesn't support IPv6 on the host it's then also not accessible inside a container either, so it was a no go. It's a fringe use case, but hey, there ya go; the use of IPv6 for internal networking by ISPs, VPN providers etc. will probably become more common as time progresses.
    1 point
  19. Over 30% of the world's IPv6 access to Google is recorded as coming from the USA. Fundamentally, every ISP and NOC in the world is jumping through ever more expensive hoops to cater for IPv4, since address space exhaustion is making it more and more expensive. Money drives all innovation as usual. Meanwhile IoT, cheap phones and an exponential increase in connected devices suck up more and more IPv4 space. IPv6 is not just inevitable; it is becoming a necessity faster than most predicted. But I point people back to my list of changes needed for IPv6. It's non-trivial and touches the vast majority of unRAID components in some way or another. This is not a simple tick-the-box change; it's giant. But we need to start somewhere.
    1 point
  20. I have been using an EVO 850 250GB SSD since May 2016. I run a dozen dockers including SAB, SB, Deluge, and Plex with constant downloading and unpacking to the drive. No issues so far. I configured the TRIM plugin to run once a week. And I use the Community Applications Backup function to backup my AppData to the protected array once a week.
    1 point
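     The TRIM plugin just schedules a standard trim pass; the manual equivalent (assuming the cache pool is mounted at the usual /mnt/cache) is:

       fstrim -v /mnt/cache
       # -v prints how much space was trimmed, e.g. "/mnt/cache: ... trimmed"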
  21. Let's get into specifics rather than brinksmanship: what are the real implications of this, and do they break anything? Perhaps this list includes:
      - GUI work for general network config
      - Documentation work (lots)
      - License server
      - Update servers
      - Docker GUI work
      - Docker back end
      - Virtual GUI work
      - Virtual back end
      - Samba control
      - AFS control
      - FTP control
      - NFS control
      - Core addons, e.g. unassigned drives
      - Windows domain specific stuff
      What is missing or included by mistake? What needs to work at the alpha stage, when IPv6 is a command-line-only option?
    1 point
  22. I had the same issue as you; the log file showed an RPC error. I just connected to unRAID via SSH, cd /usr/bin, and at this point pinged my FQDN. Mine didn't reply back due to using my router as DNS; I then set the domain name on my router and still couldn't ping the FQDN, yet if I pinged just the domain name it would reply with the FQDN. Chances are I was being impatient here and should have either cleared the cache or waited. However, I then just ran the command net ads join -S dc01 -U administrator; it connected and showed it used the correct FQDN and short name. The unRAID GUI then showed me as joined. Just to test, after this I told it to leave the domain and then used the GUI again to join it as I had tried before, and it connected just fine (the rough sequence is sketched below).
    1 point
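     For anyone following along, the sequence described above boils down to roughly this (dc01 comes from the post; the FQDN and unRAID hostname are placeholders, substitute your own):

       ssh root@tower                          # connect to the unRAID box over SSH
       ping -c 2 dc01.mydomain.local           # confirm the domain controller's FQDN resolves
       net ads join -S dc01 -U administrator   # join the domain (prompts for the password)
       net ads testjoin                        # optional: confirm the join is healthy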