dirtysanchez

Members
  • Posts: 949
  • Joined
  • Last visited

Everything posted by dirtysanchez

  1. Awesome. Nice to see unRAID getting the press it deserves, even if the NAS functionality was completely glossed over.
  2. Any thoughts on creating a Ubiquiti UniFi controller docker? I know one already exists by pducharme, but he doesn't seem to be around much and the UniFi controller was updated to 4.7.5 almost a month ago and his docker is still at 4.6.6.
  3. So, nslookup returned 192.168.0.7 as the address of TOWER. Is that the same IP address you can connect to it at when you connect via IP instead of name?
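The name-vs-IP check above can be scripted; Python's `socket.gethostbyname` performs the same lookup `nslookup` does. A minimal sketch ("TOWER" is the hostname from the post; "localhost" is used in the demo line only because it resolves on any machine):

```python
import socket

def resolve(hostname: str) -> str:
    """Return the IPv4 address a hostname resolves to (same lookup as nslookup)."""
    return socket.gethostbyname(hostname)

# If resolve("TOWER") returns the same address you can reach the server at
# directly (192.168.0.7 in the post), name resolution is working; if the two
# differ, stale DNS/NetBIOS entries are the likely culprit.
print(resolve("localhost"))  # loopback, resolvable on any machine
```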
  4. It's most likely a driver issue. The setuperr.log file may or may not help pinpoint the issue; post its contents. It should be located at C:\$Windows.~BT\Sources\Panther.
  5. Good to know, because I like everything about the case otherwise.
  6. Oh, don't get me wrong, wireless can work and work well under the right conditions, I didn't mean to imply it can't/won't. A single device streaming a 20Mbit BDRIP with minimal other traffic on the wireless will work fine if you have a decent connection, possibly even more than 1 stream. I was more referring to the questions I usually get from others that have no idea how WiFi works. They want to know why they can't stream Netflix on 3 different TVs simultaneously while someone else is online gaming and 2 other PCs are downloading or torrenting (all of it wirelessly), even though their Internet connection is sufficiently fast to sustain all that simultaneous data. People that don't understand how it works (not implying anyone here is one of those people) think a wired and a wireless connection are equivalent, but nothing could be further from the truth.
  7. I'll second the Lian-Li PC-Q25b if you can get it. It's a great case and well worth the cost. The Silverstone case you are currently planning to use has had some heat issues unless the case was modified, if I recall correctly. The drives will run a bit hot. I don't know if this has been resolved, perhaps someone else can chime in.
  8. I thought I'd add that I agree with what has been said about wired vs wireless. If it can be wired it should be wired in my opinion, much more robust and reliable connection, especially when it comes to media streaming. Every Roku I have ever purchased I made sure it was a model with Ethernet. I guess I'm just old school in that I think wireless is for phones, iPads, and laptops (when you need to be untethered), otherwise hardwire that stuff. I'm sure being a network engineer by trade only reinforces that. Wireless has its place, but too many people today think wireless is for everything and get frustrated when things slow to a crawl when you have 10+ devices all hammering the same access point. Unlike wired, wireless is a shared medium and only a single device can "talk" at a time. Great for web surfing, email, etc. Horrible for streaming media. Granted in some situations you can't hardwire it, or you don't have the ability/access/etc to run cabling, then it is what it is and you'll have to use wireless. Just don't expect it to work as well as wired.
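The shared-medium point above lends itself to a back-of-the-envelope calculation. This is my own illustrative arithmetic, not numbers from the thread: link rate and the ~50% efficiency factor (protocol overhead plus contention) are assumptions.

```python
def per_device_mbps(link_rate_mbps: float, devices: int, efficiency: float = 0.5) -> float:
    """Rough per-device throughput when `devices` clients share one access point.

    Wireless is half-duplex and shared, so clients split the airtime.
    `efficiency` is an assumed ~50% factor for overhead and contention.
    """
    return link_rate_mbps * efficiency / devices

# One client on a nominal 300 Mbit link carries a 20 Mbit stream easily...
print(per_device_mbps(300, 1))  # 150.0 Mbit effective
# ...but six busy clients leave ~25 Mbit each, with little margin for bursts.
print(per_device_mbps(300, 6))  # 25.0 Mbit effective
```

A wired switch, by contrast, gives each port its own full-duplex link, which is why the same device count doesn't degrade wired streaming the same way.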
  9. Another vote for the Roku. I have 3 of them myself for the non-smart TVs as well as numerous remote family members that use them to stream content from my Plex Server. They are dead simple to use and they just work.
  10. It needs to be mapped to the location the Mumble container will store its data. Same as any other docker (but usually called /config in other dockers). For example /mnt/cache/appdata/mumble.
  11. You shouldn't need to reformat from scratch. I do not know if you can update straight from beta10a to 6.1.3 or how it would be done (I don't recall if beta10 had the "check for updates" functionality). Maybe someone here can chime in. If you were to start from scratch, yes your data would be safe. The key item here is to know which drive is parity and which drives are data. I would screenshot the current main page for reference. After the flash reformat you need to assign the parity drive to the parity slot, and data drives to the data slots (order does not matter with data drives), and start the array. Any shares you already have will be automatically generated (any top level folder on any data drive is considered a share), although you would need to reset the options on the shares themselves back to however you had them set up before. As for the registration details, there is a key file on your existing flash drive. It is called xxxxx.key (where xxxxx is your version of unRAID, i.e. Pro, Plus, etc.). I have a Plus license so my key is called Plus.key. It will be located in the config folder on the flash. You need to copy that key off first and put it back on the rebuilt flash drive.
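The key-backup step above can be sketched in a few lines. Paths here are assumptions for illustration: unRAID mounts the flash at /boot, so the key would sit in /boot/config, and the backup destination is hypothetical.

```python
import glob
import os
import shutil

def find_keys(config_dir: str = "/boot/config") -> list:
    """Return the paths of any unRAID .key files (Pro.key, Plus.key, etc.)
    in the flash drive's config folder (assumed mounted at /boot)."""
    return glob.glob(os.path.join(config_dir, "*.key"))

def back_up_keys(config_dir: str = "/boot/config",
                 dest: str = "/mnt/cache/backup") -> list:
    """Copy each key file somewhere safe before reformatting the flash."""
    os.makedirs(dest, exist_ok=True)
    return [shutil.copy2(key, dest) for key in find_keys(config_dir)]
```

After rebuilding the flash, the same copy in reverse puts the key back into the new config folder.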
  12. Well, that's not your problem then. And yes, you are correct: if it were a double NAT the WAN IP would change. It would be a public address in the SuperHub (like 80.192.78.229), but would be a private address in the Linksys (like 192.168.1.1 or 10.0.0.1).
  13. AHA! I think we may have gotten the critical piece of information. What is the IP address that shows up in the WAN or Internet portion of the status page in the Linksys webgui? Yep, good catch. The Virgin SuperHub is likely not in bridge mode, so there's a double-NAT going on. If so, you'll need to either get the SuperHub into bridge mode, or port forward on both routers.
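The double-NAT check described in the two posts above boils down to "is the router's WAN address private?". Python's stdlib `ipaddress` module knows the RFC 1918 ranges, so the test is one call (the function name is mine, not anything from the thread):

```python
from ipaddress import ip_address

def behind_double_nat(wan_ip: str) -> bool:
    """True if the WAN address shown in the router's status page is private,
    meaning another NAT device (here, the SuperHub) sits in front of it."""
    return ip_address(wan_ip).is_private

print(behind_double_nat("80.192.78.229"))  # False: public address, no double NAT
print(behind_double_nat("192.168.1.1"))    # True: private address, double NAT
print(behind_double_nat("10.0.0.1"))       # True: private address, double NAT
```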
  14. Er may or may not be working on such a thing, but consensus in the group is that Sonarr is better. Yes, I've seen the same mentioned by others as well. I guess I just haven't tried to migrate to Sonarr yet as it's quite a bit more difficult to get set up and working properly (or so I've heard), but once running it puts SickBeard to shame. Maybe I'll bite the bullet this weekend and give it a shot.
  15. Any chance of a linuxserver.io SickBeard Docker? I've migrated all of my Dockers to the linuxserver.io Dockers where possible, but no Docker for SickBeard makes me sad...
  16. In addition, since you are already on v6, I'd update to the current release and then run Plex via Docker. It takes a bit of getting used to, but it's fairly simple and you'll never go back to plugins.
  17. I did similar to jumperalex. I remapped /transcode in the Docker config from /tmp to /mnt/cache/apps/plex/Library/Application Support/Plex Media Server/Cache/Transcode. I didn't change the path inside Plex itself as I mapped the full path to the transcode directory to /transcode in the Docker.
  18. hex, you are correct about Plex's behavior and it is why I advocated against moving transcoding to RAM in the past. I had the same problem as dirtysanchez, but it was only on long high bit rate videos (movie BD rips). All other stuff ran fine. When I dug into it I was basically juuuuuust not hitting the wall with the shorter lower bit rate stuff, but the BD movie rips, at full bit rate, would fill /tmp in about 30-45 min. Crash. Flush. Restart. Rinse. Repeat. Frankly the concern over SSD wear is misplaced given all current evidence and most Plex/unRAID implementations I can guess at. Even with 24/7 streaming of a reasonable number of 20Mbit streams the SSD will outlive most people, no less its likely useful life.

      Hex, they are approx 1.5GB 720p mkv files. It is playing on a Roku, so only the audio is being remuxed, therefore the transcoded file size should be approx 1.5GB. It's certainly filling up /tmp before the play completes.

      I'm going to have to agree with jumperalex here. While transcoding to RAM can save a bit of wear and tear on the cache drive (assuming SSD), it's likely not something that is going to make a significant difference in the life of the SSD, assuming newer-gen SSDs. They have proven to have significantly longer endurance than even the manufacturer ratings in most cases, and even with the added writes of transcoding, the SSDs will likely outlast their useful life. I have moved transcoding back to the SSD and will leave it there. If you can transcode to RAM without issues, then I see no point in not doing so. But if you have issues with /tmp filling up, it's probably best to just move transcoding back to cache and be done with it.
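The "BD rips fill /tmp but shorter stuff squeaks by" observation above is easy to sanity-check with arithmetic. Since Plex keeps the whole transcode on disk until playback ends, fill time is just the tmpfs size divided by the bit rate. The numbers below are my own illustration (a 4 GiB /tmp as in the thread, assumed stream rates):

```python
def minutes_to_fill(tmp_gib: float, bitrate_mbit: float) -> float:
    """Minutes until a transcode at `bitrate_mbit` Mbit/s fills a tmpfs of
    `tmp_gib` GiB, assuming the whole transcode stays on disk."""
    seconds = tmp_gib * 8 * 1024 / bitrate_mbit  # GiB -> Mbit, then / (Mbit/s)
    return seconds / 60

# A full-rate BD rip (~20 Mbit) fills 4 GiB in under half an hour,
# consistent with the 30-45 min crashes described above.
print(round(minutes_to_fill(4, 20), 1))  # ~27 min
# A ~5 Mbit 720p file takes almost two hours, so most of it finishes first.
print(round(minutes_to_fill(4, 5), 1))   # ~109 min
```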
  19. I did. It looks like they are related to the Community Applications plugin. Here's a snip from one of the files:

      {
        "apps": 230,
        "requests": 72,
        "last_updated": "25th September 2015 at 17:42:37",
        "last_updated_timestamp": "1443199357",
        "applist": [
          {
            "Beta": "False",
            "Category": "Network:Voip",
            "Name": "binhex-teamspeak",
            "Description": "\n TeamSpeak is proprietary voice-over-Internet Protocol (VoIP) software that allows computer users to speak on a chat channel with fellow computer users, much like a telephone conference call. A TeamSpeak user will often wear a headset with an integrated microphone. Users use the TeamSpeak client software to connect to a TeamSpeak server of their choice, from there they can join chat channels and discuss things.[br][br]\n [b][u][span style='color: #E80000;']Configuration[/span][/u][/b][br]\n [b]/config[/b] This is where teamspeak will store it's configuration file, database and logs.[br][br]\n [b][u][span style='color: #E80000;']Notes[/span][/u][/b][br]\n Connect to the server using the TeamSpeak client with the host IP address and port 9987.[br]\n To authenticate use the privilege key shown in the supervisord.log file in the host mapped /config folder.\n ",
            "Overview": "\n TeamSpeak is proprietary voice-over-Internet Protocol (VoIP) software that allows computer users to speak on a chat channel with fellow computer users, much like a telephone conference call. A TeamSpeak user will often wear a headset with an integrated microphone. Users use the TeamSpeak client software to connect to a TeamSpeak server of their choice, from there they can join chat channels and discuss things.\n ",
            "Support": "http://lime-technology.com/forum/index.php?topic=38055.0",
            "Registry": "https://registry.hub.docker.com/u/binhex/arch-teamspeak/",
            "GitHub": "https://github.com/binhex/arch-teamspeak",
            "Repository": "binhex/arch-teamspeak",
            "BindTime": "true",
            "Privileged": "false",
            "Networking": { "Mode": "host", "Publish": "\n " },
            "Environment": { "Variable": [ { "Name": "", "Value": "" } ] },
            "Data": { "Volume": [ { "HostDir": "path to config", "ContainerDir": "/config", "Mode": "rw"
  20. That is the report of the changed values (before preclear to after preclear). None of the values listed are an issue. The drive is fine.
  21. I have been transcoding to /tmp for quite some time and never had issues. Recently (past 2 weeks or so) I have started getting errors where Plex suddenly stops playing and states the file is unavailable. I have traced this down to /tmp running out of free space. I can watch it edge up to 100% full while a transcode is happening and then the transcode dies. I have looked everything over but nothing makes sense, so I thought I'd post up to see if anyone can offer guidance and/or point out what I'm not seeing. System has 8GB RAM, so as expected /tmp is roughly 4GB:

      root@Landfill:/tmp# df -h /tmp
      Filesystem      Size  Used Avail Use% Mounted on
      -               3.8G  3.5G  290M  93% /

      But why is /tmp 93% full? It is approx 93% full even after a reboot. So let's look at what's in /tmp:

      root@Landfill:/tmp# du -h
      0       ./mc-root
      24K     ./community.applications/tempFiles
      24K     ./community.applications
      28K     ./notifications/archive
      0       ./notifications/unread
      28K     ./notifications
      32K     ./plugins
      0       ./.X11-unix
      0       ./.ICE-unix
      32M     .

      root@Landfill:/tmp# ls
      community.applications/  tmp-1661007850.url  tmp-399711444.url
      mc-root/                 tmp-168813488.url   tmp-449460109.url
      notifications/           tmp-1719190793.url  tmp-456017590.url
      plugins/                 tmp-1876646817.url  tmp-477880073.url
      tmp-1075403738.url       tmp-1892929066.url  tmp-505641119.url
      tmp-1094819657.url       tmp-1895527982.url  tmp-524278372.url
      tmp-1100506559.url       tmp-1952698197.url  tmp-558278473.url
      tmp-1206292614.url       tmp-201109261.url   tmp-597112372.url
      tmp-1248792920.url       tmp-2011622529.url  tmp-602322518.url
      tmp-1275068659.url       tmp-2067235427.url  tmp-656886916.url
      tmp-1280569778.url       tmp-207572655.url   tmp-682545064.url
      tmp-1290055322.url       tmp-209482678.url   tmp-714381734.url
      tmp-1350278630.url       tmp-21930234.url    tmp-770145398.url
      tmp-1547960682.url       tmp-350663172.url   tmp-875515522.url
      tmp-1567121976.url       tmp-354557422.url   tmp-912727871.url
      tmp-1622958749.url       tmp-384040099.url   tmp-924626499.url
      tmp-1648043674.url       tmp-386724013.url   tmp-991523516.url
      root@Landfill:/tmp#

      So according to du, there's approximately 32M of files in /tmp, but according to df -h there's approximately 3.5G used. There's also a lot of tmp-xxxxxxxxxxxxx.url files. Anyone know what these are (or does it even matter)? I know the issue here is there's something I don't understand about how Linux works. I'm assuming most of the space in /tmp is cached for other reasons? What am I missing? In the meantime I moved transcoding back to the cache drive and it's working fine, but I'd like to keep it in RAM if possible.
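One reading of the output above (my interpretation, not a confirmed answer from the thread): the df line says "Mounted on /", so /tmp is not its own mount, and df is reporting the whole root filesystem, which unRAID keeps in RAM, while du totals only the files under the /tmp directory. That mismatch of scope would account for 3.5G vs 32M. A small sketch of the distinction:

```python
import os

def df_used_bytes(path: str) -> int:
    """What df reports: used space on the *filesystem* containing `path`."""
    st = os.statvfs(path)
    return (st.f_blocks - st.f_bfree) * st.f_frsize

def du_bytes(path: str) -> int:
    """What du reports: the sizes of the files actually under `path`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished mid-walk; du shows similar races
    return total

# On a box where /tmp is not a separate mount, df_used_bytes("/tmp") covers
# the entire root filesystem and can dwarf du_bytes("/tmp").
```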
  22. On the server console type "ifconfig eth0". Find the IP address of the server. See if you can connect to the server from your PC using IP address instead of tower.