  1. I did. So I guess it's actually not then? The overnight sync should correct any errors in the parity, right? Or should I do a New Config again, reassign the disks, and NOT check the "parity is valid" box?
  2. So, I've done it now and surprisingly, no parity check was triggered. I kept the config for the cache and parity drive, rearranged the data drives in their designated array locations, and started the array. I ticked the "parity is already valid" box, and it started without any hiccups. I'm going to run a manual parity check overnight just to be on the safe side, but I guess I'm good to go. Thanks for the input, everyone!
  3. If I keep the current configuration and use the same assignments for disks 1-4, that makes sense. But it will trigger a parity sync once I make disk 6 into disk 5, regardless of whether I keep my current config, correct?
  4. Hi everyone. I'm pretty sure I know how to do this, but I'm a bit on the paranoid side and I'd be really happy if someone could confirm and/or suggest a better solution if there is one. The attached image is my current array. Here's what I'd like to do:

     1. Remove disk 5 from the array
     2. Move disk 6 to the disk 5 slot

     What I think is the best methodology is:

     1. Stop the array - Tools - New Configuration
     2. Assign parity and disks 1-4 in the same slots
     3. Assign the current disk 6 to disk 5
     4. Start the array and let the parity rebuild
     5. Shut down the server
     6. Physically remove disk 5 from my box
     7. Reboot

     Disk 5 is currently empty and I've excluded it from all of my shares. It's a Seagate Barracuda drive, so it's not optimized for NAS usage, and I'd like to use it in my desktop rather than having it sit unused in my array as a potential point of failure. If there is a way to do this without rebuilding the parity (seeing as there is no actual change in the data going on here), I would be really happy to see a solution for that! Thanks
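Before the New Config step, it may be worth double-checking on the console that disk 5 really is empty (hidden files included). A minimal sketch — on an Unraid box the path would be the per-disk mount point, e.g. `/mnt/disk5`; the demo below uses a throwaway directory instead:

```python
import os
import tempfile

def disk_is_empty(mount_point: str) -> bool:
    """Return True if the mount point contains no entries at all
    (os.listdir includes dotfiles, so hidden files count too)."""
    return len(os.listdir(mount_point)) == 0

# Demo against a temporary directory; on the server you would pass
# "/mnt/disk5" (Unraid's per-disk mount point) instead.
with tempfile.TemporaryDirectory() as d:
    print(disk_is_empty(d))  # True: nothing inside yet
```

If this prints True for the real mount point, the drive holds no files that parity would need to preserve.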
  5. Thank you for the explanation! Will leave it as is then.
  6. First of all, thanks for the container, binhex. It's working great with PIA, and port forwarding for the incoming port is working as advertised. However, I was wondering about outgoing ports. I use a private tracker exclusively, and I'm trying to optimise for upload speeds. My network settings are in the attached image. Can I use the same port for outgoing traffic as for the incoming port? I've tested and confirmed that it is open. Is there a best practice here? Any recommendations are appreciated! Using PIA, as mentioned, through their Swiss server.
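Confirming that the forwarded incoming port is actually reachable can be scripted rather than relying on a web-based port checker. A minimal sketch — the host and port in the usage comment are placeholders, not values from the post:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a plain TCP connect; True if something accepts the connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (placeholder address/port -- substitute your VPN endpoint's
# forwarded port as reported by the container):
# port_is_open("203.0.113.10", 49160)
```

Note that this only tests the listening (incoming) side; outgoing connections normally use ephemeral source ports chosen by the OS, which is part of why torrent clients only ask you to configure the incoming port.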
  7. I'm having problems making a setting for a plugin stick. I use the Extractor plugin to unpack RARs downloaded by Radarr and Sonarr, and they both look for the extracted file in the original folder of the torrent in question. The plugin has an option to create a sub-folder with the name of the torrent, which effectively results in the torrent being extracted to the expected folder and allows the rest of the automated process to complete successfully. However, this gets reset every time the container is restarted or updated. My server updates Docker containers automatically once a week, and I don't always remember to reactivate this option. I've tried several different plugins with the same result, hence I assume that this is a problem with the Unraid Docker container and not the plugin itself, and that's why I'm asking in this thread. Any ideas?
  8. Hello fellow Unraiders. My server is currently running on old hardware: an i5 Ivy Bridge 3500k with 4 cores and 16 GB of DDR3. I keep toying with the idea of upgrading my server to accommodate a proper gaming VM with passthrough, so I can have a fully functioning desktop that I can game on using the same box. I don't have a gaming machine at the moment and I find myself missing it.

     My server is currently used for file storage and quite a lot of Docker containers for various services: Sonarr, Radarr, Pi-hole, Calibre, CrashPlan Pro for offsite backup, and a few others. And of course, Plex. I'm currently using QuickSync on my CPU for hardware transcoding. It's not exactly great quality-wise on this old CPU, but all my files direct-play on my Apple TV and I mainly watch at home, so the quality dip isn't too much of a concern. However, I would like it to be better. I'd rather not spend the money on something like a P2000 because I just think it's too expensive simply to accommodate transcoding. I have a few users that connect to my server to stream content, but I've never had any problems with throttling of any kind or had any complaints from them.

     So, Intel vs AMD. The newer Ryzen 3700X and 3900X chips are awesome, but have no onboard GPU. Something like an i7 9700k seems like a better idea in my head because it has the possibility of hardware transcoding in the same chip, but I have a couple of questions:

     1. Can I get away with passing 6 out of 8 cores to a VM, leaving only 2 for Unraid, and still have it run well?
     2. Should I go for a Ryzen system instead, ignore the hardware decoding for Plex, and let the extra CPU cores handle the transcoding load? How would it stack up to option 1?

     I would appreciate any help and thoughts that anyone might have. Thanks!
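On question 1: under the hood Unraid VMs are libvirt/QEMU, and core pinning ends up as `<cputune>` entries in the VM's XML (the GUI generates these for you). A sketch of what giving 6 of 8 cores to the guest while reserving cores 0-1 for Unraid might look like — the core numbering here is illustrative, not taken from the post:

```xml
<!-- Illustrative libvirt fragment: 6 guest vCPUs pinned to host cores 2-7,
     leaving host cores 0-1 free for Unraid and Docker. -->
<vcpu placement='static'>6</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
  <vcpupin vcpu='4' cpuset='6'/>
  <vcpupin vcpu='5' cpuset='7'/>
</cputune>
```

Whether 2 leftover cores are enough depends on how heavy the Docker stack is while the VM is busy; Plex software transcodes on the host side would compete for those same 2 cores.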
  9. Pretty secure password-wise: 30 random characters (letters and symbols) stored via self-hosted Bitwarden. Can anyone comment on the safety of the WebDAV protocol itself and how it holds up compared to others like SFTP?
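One concrete point on the protocol comparison: WebDAV is just HTTP, and most WebDAV clients authenticate with HTTP Basic auth, where the credentials are only base64-encoded, not encrypted. So unlike SFTP (which encrypts everything itself), WebDAV's confidentiality rests almost entirely on the TLS layer the reverse proxy provides. A quick illustration with placeholder credentials:

```python
import base64

# HTTP Basic auth, as used by most WebDAV clients, just base64-encodes
# "user:password" into a request header; base64 is trivially reversible.
creds = b"alice:hunter2"  # placeholder credentials, not from the post
header = b"Basic " + base64.b64encode(creds)
print(header)                                # b'Basic YWxpY2U6aHVudGVyMg=='
print(base64.b64decode(header.split()[1]))   # b'alice:hunter2' -- no secrecy here
```

In other words: with valid TLS and a strong password, the transport is as safe as any HTTPS traffic; without TLS, WebDAV credentials travel in the clear.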
  10. I was wondering how secure WebDAV access via Nextcloud is. I have mounted my main Unraid storage share as external storage in Nextcloud to be able to access files via Windows Explorer on my work computer, and I'm wondering how secure this is. I'm using spaceinvaderone's Let's Encrypt reverse proxy setup on my own domain for web access. However, I have concerns about exposing my server like this. The reason for doing it is that my work computer is locked down for software installation, so I cannot run OpenVPN to get access that way. I'd be happy if someone would comment on this.
  11. Right. I changed the [audio] portion to:

      [audio]
      output = audioresample ! audio/x-raw,rate=48000,channels=2,format=S16LE ! audioconvert ! wavenc ! filesink location=/tmp/snapfifo

      Still no sound. I also tried running the local scan through the console and got the following output:

      /usr/local/lib/python2.7/dist-packages/mopidy/ext.py:202: PkgResourcesDeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately.
        extension_class = entry_point.load(require=False)
      INFO Starting Mopidy 2.2.2
      INFO Loading config from builtin defaults
      INFO Loading config from /root/.config/mopidy/mopidy.conf
      INFO Loading config from command line options
      INFO Enabled extensions: mopify, iris, mpd, http, stream, spotify_tunigo, m3u, youtube, simple-webclient, tunein, local-images, softwaremixer, file, musicbox_webclient, party, api_explorer, local-sqlite
      INFO Disabled extensions: spotify, local, scrobbler, soundcloud
      WARNING Found local configuration errors, the extension has been automatically disabled:
      WARNING local/media_dir must be set.
      WARNING Found scrobbler configuration errors, the extension has been automatically disabled:
      WARNING scrobbler/username must be set.
      WARNING scrobbler/password must be set.
      WARNING Found soundcloud configuration errors, the extension has been automatically disabled:
      WARNING soundcloud/auth_token must be set.
      WARNING Found spotify configuration errors, the extension has been automatically disabled:
      WARNING spotify/username must be set.
      WARNING spotify/client_secret must be set.
      WARNING spotify/password must be set.
      WARNING spotify/client_id must be set.
      WARNING Please fix the extension configuration errors or disable the extensions to silence these messages.
      ERROR Unable to run command provided by disabled extension local

      Configurations for most of the warning messages are set. I'm browsing Spotify and playing files OK, but with no sound.
  12. I'm having trouble getting sound output and being able to see local files. Spotify loads up, but the server keeps connecting and disconnecting. This is my conf file. I'm guessing the [audio] portion is wrong. I've tried some other variations like:

      mixer = software
      mixer_volume =
      output = autoaudiosink
      buffer_time =

      But to no avail.

      [core]
      data_dir = /var/lib/mopidy

      [local]
      media_dir = /media

      [m3u]
      playlists_dir = /var/lib/mopidy/playlists

      [logging]
      config_file = /etc/mopidy/logging.conf
      debug_file = /var/log/mopidy/mopidy-debug.log

      [mpd]
      hostname =
      port = 6600

      [http]
      hostname =
      port = 6680

      [audio]
      output = lamemp3enc ! shout2send async=false mount=mopidy ip= port=8000 password=hackme

      [spotify]
      username
      password =
      client_id =
      client_secret =

      [spotify_web]
      client_id =
      client_secret =

      [iris]
      snapcast_enabled = true
      snapcast_port = 6680

      [stream]
      enabled = true
      protocols = http https mms rtmp rtmps rtsp
      timeout = 5000

      Any ideas on what parts of the config I need to change? Docker is mapping /media to my NAS folder with music files. The IP set for the container is, ports at default.
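For reference, a Snapcast-oriented Mopidy setup is commonly configured along these lines. This is a sketch with illustrative values, not a drop-in fix: `0.0.0.0` assumes the services should listen on all interfaces inside the container, and `/tmp/snapfifo` assumes Snapcast's default FIFO path. Note that `snapcast_port = 6680` in the config above is the same as the HTTP port; Snapcast's own control port defaults to 1705, if memory serves.

```ini
[mpd]
# Blank hostname leaves it at the default (localhost only); inside a
# Docker container you usually want to listen on all interfaces.
hostname = 0.0.0.0
port = 6600

[http]
hostname = 0.0.0.0
port = 6680

[audio]
# Write raw 48 kHz stereo PCM into the FIFO that Snapcast reads from,
# instead of streaming MP3 via shout2send.
output = audioresample ! audio/x-raw,rate=48000,channels=2,format=S16LE ! audioconvert ! wavenc ! filesink location=/tmp/snapfifo
```

The log in the follow-up post also shows the `local` extension disabled because `local/media_dir` is unset in the config file Mopidy actually loaded (`/root/.config/mopidy/mopidy.conf`), which would explain why local files never appear even though a `[local]` section exists somewhere.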
  13. The company is huge (government) and any requests to the IT helpdesk go to low-level techs with basically no authority. Even OneNote notebook syncing, which is something most people use, doesn't work properly all the time because of harsh proxy settings and hundreds of firewalls. Yes, I actually mean hundreds; it's no joke. There might actually be good news though. I moved over to a Mac about 6 months ago, having had a PC previously. Spotify worked on the PC, but not the Mac. Now I have a new PC again, but I haven't been to work since I got it, so Spotify might be working again. Hopefully this will solve itself. If not, I'll still need a solution ^^ I've got Airsonic working on my server and I can access that just fine using Let's Encrypt and my own domain. I'd like to be able to do something similar with a Spotify solution.
  14. That worked. It had to be something simple. But once I logged in (which took forever), the prompts to add or replace don't work properly. If I click either one of them, it goes to the next step, but bounces back to the first again after 1 second. I've had a lot of issues with the service generally (not specific to the container), and I don't think I'm going to keep it anyway. With redundancy on the server and the important stuff backed up to an additional RAID1 array, I'm pretty safe from catastrophe, I reckon.
  15. I got an e-mail from CrashPlan saying that my server hasn't been backed up for 3 days. When I go to check, my CrashPlan container will not start the WebUI. I just get the red X next to the CrashPlan logo and nothing appears in the opened window. The logs don't indicate that anything is wrong. I've tried starting with a fresh appdata folder and reinstalling the Docker container, but nothing seems to get it to start. I'm quite frankly at a loss. Suggestions, please?

      Edit: This came out of the log once I tried to connect. The last line is marked red.

      08/11/2018 16:02:03 Got connection from client
      08/11/2018 16:02:03 other clients:
      08/11/2018 16:02:03 Got 'ws' WebSockets handshake
      08/11/2018 16:02:03 Got protocol: binary
      08/11/2018 16:02:03 - webSocketsHandshake: using binary/raw encoding
      08/11/2018 16:02:03 - WebSockets client version hybi-13
      08/11/2018 16:02:03 Disabled X server key autorepeat.
      08/11/2018 16:02:03 to force back on run: 'xset r on' (3 times)
      08/11/2018 16:02:03 incr accepted_client=1 for sock=10
      08/11/2018 16:02:03 webSocketsDecodeHybi: got frame without mask
      08/11/2018 16:02:03 rfbProcessClientProtocolVersion: read: I/O error

      It also keeps outputting e":"No such container: 1377b9fc5835"}