Everything posted by aptalca

  1. I upgrade every 3 years or so and I almost always have to get a new mobo. The only time I kept the mobo was when I upgraded my unraid server from a sempron 145 to an older phenom ii b55 unlocked (for plex transcoding). And the only reason I kept the mobo was because I was replacing a low end chip with a higher end one. If I was replacing an Athlon, I probably would have gone with a later gen cpu and replaced the mobo as well. In 3 years, when I do replace this xeon, it better be a passmark 48,000 or something like that since I quadrupled the passmark scores during each of the last 2 upgrades :-)
  2. Have a look at the Unassigned Devices plugin.

     How does that solve his drag-and-drop issue?

     Sorry, my bad, I'm usually better at looking through the full thread, I promise! Still, it got me to read up on the state of Server-Side Copy and SMB2 FSCTL_SRV_COPYCHUNK, interesting. Will do some experiments later.

     Unassigned Devices will auto-mount usb devices. For copying, there are a plethora of options: console, mc (midnight commander), and several dockers with in-browser GUIs like dolphin and krusader (they are like using file explorer in a web browser). The dockers can mount both the array and the unassigned devices, so you're copying directly, not through the network.

     The Unassigned Devices plugin also allows custom and device-specific scripts to run when usb devices are plugged in. So if you're copying photos from an sd card to the array, or backing up certain folders from the array onto the usb disk, that can all be automated so you don't even need to drag and drop, just plug in and wait (see the sketch below). I shudder at the thought of using Windows as a server. Don't get me wrong, I love win 10 and use it on all my laptops and desktops, but not as a server.
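     For the sd-card scenario, a minimal sketch of a device script (the ACTION and MOUNTPOINT variable names follow the script template the plugin generates, so verify them against your own template; the destination share is just an example):

         #!/bin/bash
         # Runs automatically when the matching usb device is plugged in.
         case "$ACTION" in
           'ADD')
             # copy new photos off the card into the array (example share)
             rsync -a "$MOUNTPOINT/DCIM/" /mnt/user/Photos/incoming/
             sync
             ;;
           'REMOVE')
             # device unplugged; nothing to do
             ;;
         esac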
  3. I believe docker can pass through usb devices as long as the host (unraid) has loaded the drivers for them. I'm not sure whether unraid contains any drivers for the kobo device (perhaps it's recognized as an sd card?). I use it with a Kindle, and my preferred method is to send the ebooks to the Kindle-associated Amazon email addresses through calibre, and the books are delivered to the devices. The other method is downloading them from the server.
  4. Since dolphin is getting some attention, I updated the info on its github and docker hub pages. By default, it runs as user nobody (uid 99), which should be fine for unraid. But some docker containers run as root, and you may not have write access to their local files. If you want to run dolphin as root (uid 0), add the environment variables USER_ID and GROUP_ID and set both to 0 (see the sketch below). Or install from scratch via Community Applications; the updated template includes these variables under advanced view.
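     A minimal run command showing the two variables (the image name, port, and volume here are placeholders; use the values from the Community Applications template):

         docker run -d --name=dolphin \
           -e USER_ID=0 -e GROUP_ID=0 \
           -p 3000:3000 \
           -v /mnt/user:/mnt/user \
           yourrepo/dolphin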
  5. I think you must misunderstand the purpose of Dolphin, since it is nothing like Tonido. A good analogy for Dolphin is Windows File Explorer. This particular docker implementation of it is just giving you a Linux desktop in a browser with the Dolphin file manager already launched.

     Yeah, sorry, my end goal was always just something to manage my files without managing them from my other systems, not necessarily to have access to the files remotely like with Tonido. I still like being able to manage files while I'm away, but would rather not have it open to the world. With that in mind, is there a way to lock it down?

     Best option is vpn; second best is to use a reverse proxy like nginx. With a reverse proxy, you can set a password through .htpasswd (see the example below). For vpn, check out the openvpn server dockers. For nginx, you can use either the nginx docker or the letsencrypt one I have, which sets up a free 3rd party SSL certificate with nginx.
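     For the .htpasswd part, a quick sketch (this assumes the htpasswd utility is available inside the nginx/letsencrypt container; the container name and username are examples):

         # create the password file with one user; you'll be prompted for a password
         docker exec -it nginx htpasswd -c /config/nginx/.htpasswd myuser

     Then point auth_basic_user_file at /config/nginx/.htpasswd in the server config, as in the configs I posted later in this thread.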
  6. You dug your own grave, my friend. No one here even suggested a board with 16 ram slots :-p
  7. Bought a cpu but will hold off on getting a mobo. Not sure how long I'll last lol
  8. Home-Automation-Bridge has been updated to ver 1.3.7. It now supports Nest integration, as well as multiple Veras and Harmonys
  9. I didn't read the article, but putting the dependencies into the container images rather than the base image can end up using more space if you have multiple containers needing the same dependencies (in certain cases). For instance, if you have 10 containers, all using the phusion (ubuntu mod) base, then you only have one copy of the phusion image and all containers share it. But if you were using 10 containers, all on a tiny base with very few dependencies, then you would have one shared copy of that tiny base image, and each container would carry a separate copy of all the common dependencies it needs (see the sketch below).

     One way to make the tiny-base approach effective would be a barebones baseimage plus module images containing common dependencies, so containers could share those module images. In theory it makes sense, and it is something the linuxserver team is doing internally for their containers (a linuxserver baseimage, plus a separate nginx module image that installs on top of the default baseimage but can serve as a base for other containers needing webservers, etc.). But it is hard to coordinate across individual container devs. That's why many devs just use phusion as their base and call it a day.
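     A quick way to see the layer sharing for yourself (the image names and base tag are just examples):

         # two trivial images built from the same base
         cat > Dockerfile.app1 <<'EOF'
         FROM phusion/baseimage:0.9.19
         RUN apt-get update && apt-get install -y curl
         EOF
         cat > Dockerfile.app2 <<'EOF'
         FROM phusion/baseimage:0.9.19
         RUN apt-get update && apt-get install -y wget
         EOF
         docker build -f Dockerfile.app1 -t app1 .
         docker build -f Dockerfile.app2 -t app2 .
         # the base layers show up in both histories but are stored on disk only
         # once; only the RUN layers (curl vs wget) take up separate space
         docker history app1
         docker history app2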
  10. Haha yeah that makes sense. I thought the new/updated list didn't include apps that were uninstalled at some point, because I didn't see a couple of the ones I had removed on the list, but perhaps they didn't meet the cutoff for the recent date, I'm not sure. The name part also makes sense. In fact, the only time I ever changed the name was because I wanted to run a second instance of a container, so I had to rename the second one. I'd be curious whether people change the names regularly. EDIT: just updated and it works great. Thanks so much
  11. I noticed that the new/updated apps list only shows apps that were never installed. It might not be crucial to list newly updated apps that one currently has installed, since those are updated manually anyway, but for apps that were once installed and later removed, it would be great to see new updates to them. Sometimes I might decide to reinstall one due to a new feature.
  12. There's always this board that takes up to v2 CPUs (still 10 SATA ports: 6 are SATA3 and 4 are SATA2): http://www.ebay.com/sch/i.html?_from=R40&_trksid=p2050601.m570.l1313.TR0.TRC0.H0.XX9SRL-F.TRS0&_nkw=X9SRL-F&_sacat=0

      Nice find. IPMI and 10 sata, I like it.
  13. I'm not 100% sure, but I would say no to backwards compatibility. Looking at the supermicro website, the X9 series motherboards are compatible with the E5-1600/2600 and E5-1600/2600 v2 families, while the X10 series mobos are compatible with E5-1600/2600 v3. The v3 family xeons support DDR4, whereas v2 and earlier were DDR3.

      Oh, that's a shame. I was about to pull the trigger on this awesome board with 10 sata3, but it's v3 with ddr4: http://m.ebay.com/itm/161921784506
  14. Are all 2011v3 boards backwards compatible with v1?
  15. API support is in version 1.29 which is still a release candidate. Once the stable is released, it will automatically update. If you want to manually update, there are instructions a couple of pages back
  16. These logs are not within the containers. The syslogs of the containers are mapped to the host, where a user can access them without having to exec into the containers. Under unraid's implementation, these logs are saved within the docker.img but separately from the containers. That is part of the reason it causes confusion: when the logs get large, the containers don't, but the docker.img fills up. Users get confused about why the image is ballooning when the containers themselves aren't (see the snippet below for where the logs live).
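      If you want to see them for yourself, assuming the default json-file log driver (this is the path where docker keeps per-container logs, which on unraid sits inside the docker.img mounted at /var/lib/docker):

          # largest container logs first
          du -sh /var/lib/docker/containers/*/*-json.log | sort -rh | head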
  17. I'm not familiar with Photoshow, but I tried Piwigo. Piwigo is more for showcasing and sharing select photos. It doesn't manage photos in place; you have to import the photos into Piwigo (potentially duplicating them).

      Digikam is great for managing the photo library in place. In other words, you point it to your photos folder, and the changes you make in digikam, like sorting, tagging, face recognition, etc., are all stored in a separate database that digikam maintains. I don't like modifying or duplicating the original files, so I prefer digikam over other options. (I would normally pick picasa desktop over digikam, if only picasa allowed keeping photos on a NAS with easy access through samba and made it easy to transfer the info database to other computers, but unfortunately picasa desktop is primarily a single-computer, local-files kind of option, which I dislike.)

      Keep in mind that certain tasks like face recognition can be extremely cpu intensive and can lock up your container gui for a long time with 200,000 photos. I'd recommend testing on a small batch and doing the rest in batches.
  18. Hit the advanced view button at the top right and it will reveal new settings and likely an error message. It won't let you install without entering that info under advanced view. And make sure you read the description at the top
  19. The simplest way is to forward a port on your router. If your container is running on port 3000, go into your router interface and forward port 3000 to your server's IP. Then when others connect to http://yourdomain.duckdns.org:3000 they'll reach your container interface. Not sure how secure plexrequests is; you can ask on their forums whether this method is advised against. Other, more secure methods include setting up a vpn server and having your friends vpn in to access the internal container page, or setting up a reverse proxy (I have a letsencrypt nginx reverse proxy container in the repo, which you can use to set up secure connections with SSL and passwords to your containers; a minimal location block is sketched below). But both of these methods are a little tricky to set up properly.
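      For the reverse proxy route, a minimal location block in the style of the configs I posted later in this thread (the /requests prefix, IP, and port are placeholders for your plexrequests settings):

          location /requests {
              auth_basic "Restricted";
              auth_basic_user_file /config/nginx/.htpasswd;
              include /config/nginx/proxy.conf;
              proxy_pass http://192.168.X.X:3000/requests;
          }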
  20. 1.29 stable is not yet released. It is still a release candidate. You can exec into the container and change the ppa to the master branch which will update it to the rc, but you also have to update a bunch of other things because they changed a lot between those versions. To be honest, I wasn't successful at updating my copy but I didn't try that hard.
  21. How long did you wait? At start, it updates meteor and it might take a few minutes if their server is slow.
  22. See the first post on this thread, it might help: http://lime-technology.com/forum/index.php?topic=45249.0
  23. For reference, below are my config files for reverse proxy with this container (all personal info X'ed out).

      /config/nginx/site-confs/default:

          server {
              listen 443 ssl default_server;

              ssl_certificate /config/keys/fullchain.pem;
              ssl_certificate_key /config/keys/privkey.pem;
              ssl_dhparam /config/nginx/dhparams.pem;
              ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
              ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
              ssl_prefer_server_ciphers on;

              client_max_body_size 0;

              location / {
                  root /config/www;
                  index index.html index.htm index.php;
                  auth_basic "Restricted";
                  auth_basic_user_file /config/nginx/.htpasswd;
              }

              location /sabnzbd {
                  include /config/nginx/proxy.conf;
                  proxy_pass http://192.168.X.X:XXXX/sabnzbd;
              }

              location /cp {
                  include /config/nginx/proxy.conf;
                  proxy_pass http://192.168.X.X:XXXX/cp;
              }

              location /sonarr {
                  include /config/nginx/proxy.conf;
                  proxy_pass http://192.168.X.X:XXXX/sonarr;
              }

              location /plexwatch {
                  auth_basic "Restricted";
                  auth_basic_user_file /config/nginx/.htpasswd;
                  include /config/nginx/proxy.conf;
                  proxy_pass http://192.168.X.X:XXXX/plexWatch;
              }

              location /htpc {
                  auth_basic "Restricted";
                  auth_basic_user_file /config/nginx/.htpasswd;
                  include /config/nginx/proxy.conf;
                  proxy_pass http://192.168.X.X:XXXX/htpc;
              }
          }

      /config/nginx/nginx.conf:

          user nobody users;
          worker_processes 4;
          pid /run/nginx.pid;

          events {
              worker_connections 768;
              # multi_accept on;
          }

          http {
              ##
              # Basic Settings
              ##
              sendfile on;
              tcp_nopush on;
              tcp_nodelay on;
              keepalive_timeout 65;
              types_hash_max_size 2048;
              # server_tokens off;
              # server_names_hash_bucket_size 64;
              # server_name_in_redirect off;
              client_max_body_size 0;

              include /etc/nginx/mime.types;
              default_type application/octet-stream;

              ##
              # Logging Settings
              ##
              access_log /config/log/nginx/access.log;
              error_log /config/log/nginx/error.log;

              ##
              # Gzip Settings
              ##
              gzip on;
              gzip_disable "msie6";
              # gzip_vary on;
              # gzip_proxied any;
              # gzip_comp_level 6;
              # gzip_buffers 16 8k;
              # gzip_http_version 1.1;
              # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

              include /etc/nginx/conf.d/*.conf;
              include /config/nginx/site-confs/*;

              ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
              ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
              ssl_prefer_server_ciphers on;
              ssl_session_cache shared:SSL:10m;

              add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
              add_header X-Frame-Options SAMEORIGIN;
              add_header X-Content-Type-Options nosniff;
              add_header X-XSS-Protection "1; mode=block";
              add_header X-Robots-Tag none;

              ssl_stapling on;        # Requires nginx >= 1.3.7
              ssl_stapling_verify on; # Requires nginx >= 1.3.7
          }

      /config/nginx/proxy.conf:

          client_max_body_size 10m;
          client_body_buffer_size 128k;

          # Timeout if the real server is dead
          proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

          # Advanced Proxy Config
          send_timeout 5m;
          proxy_read_timeout 240;
          proxy_send_timeout 240;
          proxy_connect_timeout 240;

          # Basic Proxy Config
          proxy_set_header Host $host:$server_port;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto https;
          proxy_redirect http:// $scheme://;
          proxy_http_version 1.1;
          proxy_set_header Connection "";
          proxy_cache_bypass $cookie_session;
          proxy_no_cache $cookie_session;
          proxy_buffers 32 4k;

      All my containers are using url prefixes. At the root, I am using a basic html5 webpage (protected with htpasswd) that just links to all the proxies. Note that I removed the lines for php because they interfered with plexWatch; I guess nginx was routing the php scripts meant to run in the plexWatch container to its own internal php. Since I don't need php support for my main webpage (all html5), I removed php altogether. Hope this helps.
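      If you edit these files, a quick syntax check before reloading saves headaches (the container name is an example; adjust to yours):

          docker exec nginx nginx -t          # validate the config files
          docker exec nginx nginx -s reload   # reload without restarting the container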
  24. You probably need to use a url prefix