BetaQuasi

Everything posted by BetaQuasi

  1. This is a Docker bridge network IP, which is a private address - it's not meant to be used for external connectivity. Your friends should be connecting using your external public IP.
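     A quick way to confirm what that external public IP actually is (rather than the Docker bridge address) is to ask an external lookup service from the unRAID console - icanhazip.com and ifconfig.me below are just example services, any equivalent works the same way:

         curl -s https://icanhazip.com
         curl -s https://ifconfig.me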
  2. This is an old thread, there will be newer BIOS available now - however I updated the original links for you.
  3. You'll want to use the UEFI method - the DOS method rarely works with Supermicro motherboards. It's not your particular use case, but you can see the same was true on their older boards by following the M1015/X9SCM link in my signature. There is also a Linux flash method, which you can apparently do from within unRAID itself, but I don't have any direct experience with that. From what I can tell, it does require a 32-bit version of unRAID to work successfully though.
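     For reference, the UEFI method itself is only a handful of commands run from the EFI shell on the flash drive. This is just a sketch - the 2118it.bin and mptsas2.rom filenames are the usual ones shipped in the LSI 9211-8i IT-mode package and may differ depending on the firmware version you download:

         sas2flash.efi -listall                           # confirm the controller is detected
         sas2flash.efi -o -e 6                            # erase the existing flash (don't reboot or power off after this)
         sas2flash.efi -o -f 2118it.bin -b mptsas2.rom    # flash the IT-mode firmware and boot ROM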
  4. Fair enough. I don't tend to bother with a USB stick that exhibits issues since they are so cheap and the new license process is much simpler these days.
  5. Looks to me like your USB stick (sda1) has issues. I'd try replacing it as a first step.
  6. @LondonDragon This docker is on Deluge 2.0 now, which uses Python 3. You'll need to bug the plugin authors for updated plugins, or revert to the last 1.3.15 build: linuxserver/deluge:amd64-5b398f77-ls22
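     If you do drop back to that 1.3.15 build, it's just a case of pinning the full image tag instead of latest - on unRAID that means putting the tag in the Repository field of the container template, which is roughly the equivalent of pulling it by hand:

         docker pull linuxserver/deluge:amd64-5b398f77-ls22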
  7. To the OP, I've got a couple of different methods documented in the 'M1015/X9SCM Firmware' link in my signature - maybe one of them will work for you?
  8. Probably the most user-friendly way would be to use gparted (https://gparted.org) - I'd grab the live CD/ISO and set that as your boot drive for the VM running unRAID. It should see your existing vmdk and allow you to resize it. From memory (and this is going back quite a few years now so I don't know for sure!) the .vmdk file itself was already 1Gb, so you should be able to extend the partition to fill that. If the .vmdk file itself also needs extending, you can do that from the command line in VMware, using the vmkfstools command. You can see more detail here. You would first
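     If the .vmdk itself does need growing first, the vmkfstools step looks something like the below from the ESXi command line - the 2G size and datastore path here are only placeholders for your own values:

         vmkfstools -X 2G /vmfs/volumes/datastore1/unRAID/unRAID.vmdk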
  9. Yeah I noticed this myself today. Thought it was a browser issue at first, but it seems to persist on all browsers and PCs. Not the end of the world, but a bit annoying when you can't check on the progress of a big move/copy job!
  10. It may be that using other containers isn't going to address your issue. See the below link from nVidia. Looks like we're waiting on a new Plex client as well as a new Shield firmware to fully fix it. Suspect it's specific use cases that are affected - all my files are .mkv and uncompressed, and I'm connected via ethernet and have no issues. https://forums.geforce.com/default/topic/1091469/shield-tv/shield-plex-playback-issues-on-7-2/post/5987072/#5987072
  11. FWIW, I use the linuxserver Plex docker with a Shield TV/Plex client I bought last year.. no issues in 12+ months of using it, including after updates etc.
  12. Your CPU is very weak, that's for sure - it has a passmark of around 1700, and Plex needs 2000 to transcode 1080p or 1500 for 720p, see here: https://support.plex.tv/articles/201774043-what-kind-of-cpu-do-i-need-for-my-server/ In the past there were issues with the LG TV app which essentially meant it had to transcode everything. There were other third party options though that could direct play, which is really what you want with such a weak CPU (essentially the TV does the work instead of the server.) I'd suggest asking the non-unRAID Plex questions over at the Plex
  13. The stock fans are actually amazing for airflow, they are just noisy as all hell. If you don't care about noise, I wouldn't bother switching them out.
  14. I would just attach the cache drives directly to the SATA ports on the motherboard if you don't intend on using them for anything else. I can see the Norco 4224s have come a fair way since I bought mine 6 years ago! Those mounting bolts etc look like they are much better quality. That being said, I haven't had a single issue with the case/backplanes etc in 6 years now, so all good!
  15. What method are you using to try and access the shares? If you're using Windows, you can just open Windows Explorer and in the text entry bar near the top, type: \\192.168.1.102 and hit enter (assuming that is still your unRAID IP). At that point you should be able to see your shares, or if prompted for a user account, login using any user you have created and added to a share for access. One other thing to double check is what you ended up entering into the 'specify the private subnets to which all clients should be given access' dialog. It sounds like y
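     If Explorer won't show anything, another quick test from Windows is to map a share directly from a Command Prompt - swap in your own share and user names below:

         net use Z: \\192.168.1.102\sharename /user:youruser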
  16. You should be fine to keep your existing config, just point the appdata path to the same place. Stop the old docker first of course, and perhaps take a backup of the appdata just in case of any weirdness. Have done this a couple times myself with no issues in any case.
  17. That's basically my setup in a nutshell. The spare bay of the 16 in use is holding an already precleared 8Tb drive for when I next need to replace a drive.
  18. I had no issues with Plex all the way through the 6.5 series, and it is again working fine with this update. Perhaps it's something specific with your config? (I'm using linuxserver.io's docker as it is kept up to date, not an issue to be seen.)
  19. Missed your mention about Plex earlier - will largely depend on what you're streaming to, but I've set my server to top out at 4Mbps (720p), which is more than fine for tablets, phones etc - even looks pretty good on a 49" TV (only remote TV I've tested it on.) I'm on 50/20 though (fixed wireless, so can't get 100/40), which is why I've limited it to 4Mbps.
  20. Sorry yes, typo. Just looking further, if you went with that X11 board above, it has 8 SATA ports on board. Assuming you use one for cache, and perhaps keep a couple spare for mounting other external drives as needed, you could use 4 of them to connect to one of the backplanes. Depending on how many total drives you plan on adding, this could help keep costs down initially. (You don't need to connect all of the backplanes, in fact I still have 2 of mine not connected, i.e. only 16 total drives, as I've been upgrading to larger drives over time.) 1x M1015 will run 2 ba
  21. Here is the 70cm cable - note you want SFF-8087 at each end if you are doing M1015 to the Norco backplanes. https://www.amazon.com.au/gp/aw/d/B00XOFDJBA/ref=mp_s_a_1_1?ie=UTF8&qid=1524408398&sr=8-1&pi=AC_SX118_SY170_QL70&keywords=uxcell+sas+70&dpPl=1&dpID=41-7S7jOAjL&ref=plSrch
  22. Who would have thought.. Amazon AU have a bunch of SFF 8087 cable options. 50cm (might be a touch short), 70cm and 1m, all for quite a good price.
  23. Just note that 1m SAS cables will leave you a bit of leftover cable to clean up inside the case. I ended up switching them out for some Molex 0.6m SAS cables instead, which are much tidier (you can see a photo in my build thread.) I can't seem to find any of those any more, but a few eBay sellers have 0.7m ones for $7 a pop.
  24. So if you do go the M1015 route with SAS cables in the 4224, you'll need one M1015 for every 8 drives (4 per SAS cable, which nicely mates up with 1 of the 6 rows in the 4224). Each M1015 should be put in a minimum PCI-E 8x slot. The old X9SCM has 2 of these, and 2 physical x8 slots that run as x4. It seems this one is the evolution of that board: https://www.skycomp.com.au/ld-supermicro-up-e3-1200v5-4x-ddr4-ecc-sata-raid-2x-i210-gbe-c236-micro-atx-x11ssm-f.html It supports more recent processors, an extra 32Gb RAM and is a modern chipset that supports Intel 6th and 7th series CPUs
  25. I've not seen another option for a bunch of bays in Australia that is more cost effective at this time. The only other route is a case like an Antec 900/1200 which you can throw a bunch of 5-in-3 drive cages into if you want to get to a high drive count. When I weighed it up at the time, the Norco was cheaper, with the added win of fewer cables (most of the cages need a SAS to SATA forward breakout cable.) In terms of motherboards, there are plenty of options. We get quite a few Supermicro boards locally: http://www.staticice.com.au/cgi-bin/search.cgi?q=su