BetaQuasi

Members · 849 posts
Everything posted by BetaQuasi

  1. Fantastic timing indeed - I was just adding a new drive to my array and it was failing. Then the update notification popped up. Thanks!
  2. This is a Docker bridge network IP, i.e. a private address; it isn't meant for external connectivity. Your friends should be connecting using your external public IP.
  3. This is an old thread, there will be newer BIOS available now - however I updated the original links for you.
  4. You'll want to use the UEFI method - the DOS method rarely works with Supermicro motherboards. It's not your particular use case, but you can see the same was true on their older boards by following the M1015/X9SCM link in my signature. There is also a Linux flash method, which you can apparently do from within unRAID itself, but I don't have any direct experience with that. From what I can tell, it does require a 32-bit version of unRAID to work successfully though.
  5. Fair enough. I don't tend to bother with a USB stick that exhibits issues since they are so cheap and the new license process is much simpler these days.
  6. Looks to me like your USB stick (sda1) has issues. I'd try replacing it as a first step.
  7. @LondonDragon This docker is on Deluge 2.0 now, which uses Python 3. You'll need to bug the plugin authors for updated plugins, or revert to the last 1.3.15 build. linuxserver/deluge:amd64-5b398f77-ls22
  8. To the OP, I've got a couple of different methods documented in the 'M1015/X9SCM Firmware' link in my signature - maybe one of them will work for you?
  9. Probably the most user-friendly way would be to use gparted (https://gparted.org) - I'd grab the live CD/ISO and set that as your boot drive for the VM running unRAID. It should see your existing vmdk and allow you to resize it. From memory (and this is going back quite a few years now so I don't know for sure!) the .vmdk file itself was already 1GB, so you should be able to extend the partition to fill that. If the .vmdk file itself also needs extending, you can do that from the command line in VMware, using the vmkfstools command. You can see more detail here. You would first extend the .vmdk file to a certain size, and then run gparted to expand the partition to fill the newly-created disk. https://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.storage.doc_50%2FGUID-97FED1EC-35A5-4EF2-80BA-7131F8455702.html There are quite a few other ways to approach this too, but that should specifically address what you need. Lastly, back up the .vmdk first in case something goes wrong!
  10. Yeah I noticed this myself today. Thought it was a browser issue at first, but it seems to persist on all browsers and PCs. Not the end of the world, but a bit annoying when you can't check on the progress of a big move/copy job!
  11. It may be that using other containers isn't going to address your issue. See the below link from nVidia. Looks like we're waiting on a new Plex client as well as a new Shield firmware to fully fix it. Suspect it's specific use cases that are affected - all my files are .mkv and uncompressed, and I'm connected via ethernet and have no issues. https://forums.geforce.com/default/topic/1091469/shield-tv/shield-plex-playback-issues-on-7-2/post/5987072/#5987072
  12. FWIW, I use the linuxserver Plex docker with a Shield TV/Plex client I bought last year.. no issues in 12+ months of using it, including after updates etc.
  13. Your CPU is very weak that's for sure - it has a passmark of around 1700, and Plex needs 2000 to transcode 1080p or 1500 for 720p, see here: https://support.plex.tv/articles/201774043-what-kind-of-cpu-do-i-need-for-my-server/ In the past there were issues with the LG TV app which essentially meant it had to transcode everything. There were other third party options though that could direct play, which is really what you want with such a weak CPU (essentially the TV does the work instead of the server.) I'd suggest asking the non-unRAID Plex questions over at the Plex forums to get the latest on that. As for which docker, I'd go for the linuxserver one personally.
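The PassMark rule of thumb from the Plex article above can be sketched in a few lines. This is purely illustrative (the function name and structure are my own, not anything from Plex); the thresholds are the ~2000 (1080p) and ~1500 (720p) figures from Plex's guidance, per single transcode stream:

```python
# Rough PassMark thresholds from Plex's CPU guidance (per single stream):
PASSMARK_1080P = 2000
PASSMARK_720P = 1500

def max_transcode_quality(passmark: int) -> str:
    """Return the highest single-stream transcode this CPU can roughly handle."""
    if passmark >= PASSMARK_1080P:
        return "1080p"
    if passmark >= PASSMARK_720P:
        return "720p"
    return "direct play only"

# The CPU in question scores around 1700, so it lands in 720p territory:
print(max_transcode_quality(1700))  # 720p
```

Which is exactly why direct play (letting the TV do the decoding) is the better goal on hardware like this.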
  14. The stock fans are actually amazing for airflow, they are just noisy as all hell. If you don't care about noise, I wouldn't bother switching them out.
  15. I would just attach the cache drives directly to the SATA ports on the motherboard if you don't intend on using them for anything else. I can see the Norco 4224s have come a fair way since I bought mine 6 years ago! Those mounting bolts etc look like they are much better quality. That being said, I haven't had a single issue with the case/backplanes etc in 6 years now, so all good!
  16. What method are you using to try and access the shares? If you're using Windows, you can just open Windows Explorer and in the text entry bar near the top, type: \\192.168.1.102 and hit enter (assuming that is still your unRAID IP). At that point you should be able to see your shares, or if prompted for a user account, login using any user you have created and added to a share for access. One other thing to double check is what you ended up entering into the 'specify the private subnets to which all clients should be given access' dialog. It sounds like you got that right as you can access your server, but just in case - assuming a consumer Linksys router on the 192.168.1.x/255.255.255.0 range, and the fact that you have stated your unRAID server is on 192.168.1.102, you should enter 192.168.1.0/24 here. Is that what you entered there?
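A quick way to sanity-check the subnet value in that dialog is Python's standard `ipaddress` module (the addresses here are the ones from the post; substitute your own):

```python
import ipaddress

# The 'private subnets' field wants your LAN in CIDR notation.
# For a typical 192.168.1.x network with mask 255.255.255.0, that is 192.168.1.0/24.
lan = ipaddress.ip_network("192.168.1.0/24")

# The unRAID box at 192.168.1.102 falls inside that subnet:
print(ipaddress.ip_address("192.168.1.102") in lan)  # True

# You can also derive the CIDR form directly from the dotted netmask:
print(ipaddress.ip_network("192.168.1.0/255.255.255.0"))  # 192.168.1.0/24
```

If your router uses a different range (e.g. 10.0.0.x), the same check tells you which /24 to enter.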
  17. You should be fine to keep your existing config, just point the appdata path to the same place. Stop the old docker first of course, and perhaps take a backup of the appdata just in case of any weirdness. Have done this a couple times myself with no issues in any case.
  18. That's basically my setup in a nutshell. The spare bay of the 16 in use is holding an already precleared 8TB for when I next need to replace a drive.
  19. I had no issues with Plex all the way through the 6.5 series, and it is again working fine with this update. Perhaps it's something specific with your config? (I'm using linuxserver.io's docker as it is kept up to date, not an issue to be seen.)
  20. Missed your mention about Plex earlier - will largely depend on what you're streaming to, but I've set my server to top out at 4Mbps (720p), which is more than fine for tablets, phones etc - even looks pretty good on a 49" TV (only remote TV I've tested it on.) I'm on 50/20 though (fixed wireless, so can't get 100/40), which is why I've limited it to 4Mbps.
  21. Sorry yes, typo. Just looking further - if you went with that X11 board above, it has 8 SATA ports on board. Assuming you use one for cache, and perhaps keep a couple spare for mounting other external drives as needed, you could use 4 of them to connect to one of the backplanes. Depending on how many total drives you plan on adding, this could help keep costs down initially. (You don't need to connect all of the backplanes, in fact I still have 2 of mine not connected, i.e. only 16 total drives, as I've been upgrading to larger drives over time.) 1x M1015 will run 2 backplanes (4 drives in each) with two of the mentioned SAS cables. So 8 drives per M1015, and 4 drives off the motherboard. Can always add more capacity later. If you did want to connect the motherboard ports to one of the backplanes, you need a REVERSE breakout connector, which is basically the same thing you linked above, but the data travels in the opposite direction (this is an important differentiation.) It doesn't seem any of those listings on Amazon define forward vs reverse though... Anyway, just food for thought.
  22. Here is the 70cm cable - note you want SFF-8087 at each end if you are doing M1015 to the Norco backplanes. https://www.amazon.com.au/gp/aw/d/B00XOFDJBA/ref=mp_s_a_1_1?ie=UTF8&qid=1524408398&sr=8-1&pi=AC_SX118_SY170_QL70&keywords=uxcell+sas+70&dpPl=1&dpID=41-7S7jOAjL&ref=plSrch
  23. Who would have thought.. Amazon AU have a bunch of SFF 8087 cable options. 50cm (might be a touch short), 70cm and 1m, all for quite a good price.
  24. Just note that 1m SAS cables will leave you a bit of leftover cable to clean up inside the case. I ended up switching them out for some Molex 0.6m SAS cables instead, which are much tidier (you can see a photo in my build thread.) I can't seem to find any of those any more, but a few eBay sellers have 0.7m ones for $7 a pop.
  25. So if you do go the M1015 route with SAS cables in the 4224, you'll need one M1015 for every 8 drives (4 per SAS cable, which nicely mates up with 1 of the 6 rows in the 4224). Each M1015 should go in at minimum a PCIe x8 slot. The old X9SCM has 2 of these, and 2 physical x8 slots that run as x4. It seems this one is the evolution of that board: https://www.skycomp.com.au/ld-supermicro-up-e3-1200v5-4x-ddr4-ecc-sata-raid-2x-i210-gbe-c236-micro-atx-x11ssm-f.html It supports more recent processors, an extra 32GB of RAM and is a modern chipset that supports Intel 6th and 7th generation CPUs. Full specs here, along with RAM compatibility list: https://www.supermicro.com/products/motherboard/Xeon/C236_C232/X11SSM-F.cfm All that being said, you can happily buy most mainstream boards and run unRAID on those. Those of us that went the Supermicro route did so mainly because of long term reliability (my build is 6 years old now and hasn't skipped a beat for example.)
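The "one M1015 per 8 drives" arithmetic above is easy to turn into a quick calculator. A minimal sketch (the function name and defaults are mine; the constants come from the posts: each SFF-8087 cable feeds one 4-bay backplane row, and each M1015 has two such ports):

```python
import math

DRIVES_PER_SAS_CABLE = 4   # one SFF-8087 cable drives one 4-bay Norco backplane row
CABLES_PER_M1015 = 2       # each M1015 has two SFF-8087 ports -> 8 drives per card

def m1015_cards_needed(total_drives: int, motherboard_ports: int = 0) -> int:
    """How many M1015s are needed for total_drives, after using onboard SATA ports
    (onboard ports need a reverse breakout cable to reach a backplane)."""
    hba_drives = max(0, total_drives - motherboard_ports)
    return math.ceil(hba_drives / (DRIVES_PER_SAS_CABLE * CABLES_PER_M1015))

# A fully populated Norco 4224 (24 bays) with 4 drives hung off the motherboard:
print(m1015_cards_needed(24, motherboard_ports=4))  # 3
```

With only 16 bays in use and 4 motherboard-connected drives, the same calculation gives 2 cards, matching the partially-connected setup described above.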