danioj

Everything posted by danioj

  1. DON'T update in the container! I just bumped the version to 3.11.0.1 in the image, and it's building on Docker Hub now. Not updating in the container - is that a rule we should go with for ALL Dockers? E.g. SAB, CTO, SCR etc.
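The "update the image, not the container" flow argued for above can be sketched as below. The image and container names are placeholders (the post doesn't say which Docker was bumped to 3.11.0.1), and the commands are echoed rather than executed, so the sketch runs without a Docker daemon.

```shell
# Hypothetical sketch of "update the image, not the container".
# "some/image" and "MyApp" are placeholder names, not from the post.
DRYRUN=echo   # set DRYRUN= (empty) to actually run the commands

$DRYRUN docker pull some/image:3.11.0.1   # fetch the rebuilt image
$DRYRUN docker stop MyApp                 # stop the old container
$DRYRUN docker rm MyApp                   # remove it (state lives in /config on the host)
$DRYRUN docker run -d --name MyApp some/image:3.11.0.1
```

Because app state is kept in the host-mapped /config folder, throwing the container away and recreating it from the new image loses nothing.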
  2. Hello All, I thought I would share with you something I learned today. I had just installed Maraschino (which, for those who don't know, is a simple web interface that acts as a nice overview/front page for a Kodi / OpenELEC powered HTPC) via smidgeon's template repository (https://lime-technology.com/forum/index.php?topic=34009.0) to replace the instance of it I had running on my living room HTPC. I wanted to migrate it to my server because in my home Maraschino is used as the browser homepage on many devices, BUT the living room HTPC is not always on.

Anyway - I absolutely HATE the default grass background (grass.jpg) which comes installed by default. I wanted a nicer, "simpler" background. In the past, all I have done is replace the grass.jpg located in the relevant folder of the installation with my own background, also named grass.jpg. Voila - I have my nice background and no more grass.

When I installed the Docker, however, I realised that the /config folder didn't contain the grass.jpg file. What I would have to do is map a volume to the folder that does, so I can change the files - BUT then I realised I had NO idea where that folder was in the container file system. Note: I could have "guessed" (which, really, I did). I went to the Git homepage for the project, looked at the folder structure, and figured that if the thing was installed in /opt then I could work it out and map the volume accordingly. BUT I wanted to be sure.
So I did the following to check:

Telnet into unRAID, then get the Docker container ID:

    docker ps

Which gives something like this as its output:

    CONTAINER ID   IMAGE                                     COMMAND           CREATED          STATUS          PORTS                              NAMES
    fbfe739c2dbd   smdion/docker-maraschino:latest           "/sbin/my_init"   32 seconds ago   Up 32 seconds                                      Maraschino
    003aca2445fd   needo/sickrage:latest                     "/sbin/my_init"   40 minutes ago   Up 40 minutes                                      SickRage
    4e7bfeccee8a   needo/sabnzbd:latest                      "/sbin/my_init"   45 minutes ago   Up 45 minutes                                      SABnzbd
    97abad94019e   sparklyballs/headless-kodi-helix:latest   "/sbin/my_init"   2 hours ago      Up 2 hours      9777/udp, 0.0.0.0:8090->8080/tcp   KODI-Headless
    5c64f3d65868   smdion/docker-htpcmanager:latest          "/sbin/my_init"   39 hours ago     Up 39 hours                                        HTPC-Manager
    6074398c0a6b   needo/mariadb:latest                      "/sbin/my_init"   40 hours ago     Up 40 hours                                        MariaDB
    87ca4f105585   needo/couchpotato:latest                  "/sbin/my_init"   42 hours ago     Up 42 hours                                        CouchPotato

So, as I can clearly see, the Maraschino ID is fbfe739c2dbd.

Access the container file system using the syntax "docker exec -t -i <container id> /bin/bash":

    docker exec -t -i fbfe739c2dbd /bin/bash

This then took me to a command prompt:

    root@main:/#

So with a quick:

    ls /opt/maraschino/static/images/backgrounds/

I could see that the grass.jpg was indeed there. Exit the container prompt and return to the unRAID prompt:

    exit

Now I was out of the container file system. I then went on and mapped a volume in the Docker to that location and boom - after copying the files from the Git source into that folder, I was able to change grass.jpg to my own custom background. Anyway - just thought I would share. Daniel
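As a side note, the "find the container ID" step can be scripted rather than read off by eye. The sketch below filters sample rows (trimmed from the output above) by the NAMES column; the awk filter is my assumption and relies on NAMES being the last column of `docker ps` output.

```shell
# Pick a container ID out of `docker ps`-style output by container name.
# Sample rows are trimmed from the post's output. Against a live daemon
# you would pipe the real command instead:
#   docker ps | awk '$NF == "Maraschino" {print $1}'
sample_ps='fbfe739c2dbd smdion/docker-maraschino:latest Maraschino
003aca2445fd needo/sickrage:latest SickRage'

# $NF is the last whitespace-separated field (the NAMES column here);
# $1 is the first field (the container ID).
cid=$(printf '%s\n' "$sample_ps" | awk '$NF == "Maraschino" {print $1}')
echo "$cid"   # prints: fbfe739c2dbd
```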
  3. (Quoting my earlier post about the missing autoProcessTV folder.) I think I am getting the hang of Docker now! I found a better way of doing it, I think, via volume mappings. My volume mappings are:

    /config                     => /mnt/disks/app/docker/appdata/sickrage/
    /opt/sickrage/autoProcessTV => /mnt/disks/app/docker/appdata/sickrage/autoProcessTV/

Essentially, to me this means the autoProcessTV location within /opt is now mapped to a folder called autoProcessTV within my config folder (mapped in the line above).
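The two mappings above assume the host-side folders already exist. A minimal sketch of preparing them, using /tmp as a stand-in for the post's /mnt/disks/app/docker/appdata path:

```shell
# Stand-in for /mnt/disks/app/docker/appdata/sickrage on the real host.
appdata=/tmp/appdata-demo/sickrage
mkdir -p "$appdata/autoProcessTV"

# The equivalent container volume flags would then be (illustrative, not executed):
#   -v "$appdata":/config
#   -v "$appdata/autoProcessTV":/opt/sickrage/autoProcessTV
ls -d "$appdata/autoProcessTV"
```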
  4. Pretty normal, it would seem - I get similar readings from my pair. Both are fine, IMHO.
  5. No matter - I decided to install Kodi Headless and MariaDB separately and then I worked it out. Just mapped them to a different config folder I created in my app drive. All seems to be working.
  6. Hi Guys, Would someone in the know mind giving me an explanation of the following volume mapping for the headless Kodi Docker? /config is simple enough, BUT /opt/kodi-server/share/kodi/portable_data - I have no idea what I should be changing this to. If anyone has an example I would be very appreciative. Thanks, Daniel
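For what it's worth, the usual pattern for a mapping like this is to point the container path at a host-side appdata folder. A hedged sketch below - the host path is an example only (a /tmp stand-in), not an official recommendation for this Docker:

```shell
# Example host-side folder; on a real unRAID box this would live on your
# app/cache share rather than /tmp.
host_data=/tmp/appdata-demo/kodi-headless/portable_data
mkdir -p "$host_data"

# The container mapping would then be (illustrative, not executed):
#   -v "$host_data":/opt/kodi-server/share/kodi/portable_data
ls -d "$host_data"
```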
  7. (Quoting my earlier post about the missing autoProcessTV folder.) OK - I have "fixed" my issue. I downloaded the SickRage source in a zip file from GitHub, containing the autoProcessTV folder. I copied this to the config folder and then pointed SAB to that folder. I did have a few issues with post-processing not working; I figured out that to have this Docker working OK I had to set NetworkType to "Host", AND I also had to explicitly define the location of the TV folder in the SickRage config. All working now though.
  8. OK - interestingly, it is there; it's just not in the config folder. I accessed the Docker file system by using a dialog box within SickRage intended to select a path for a different option, and I navigated to /opt/sickrage and, lo and behold, there it is. Unless I have fundamentally missed something here - I can't figure out how to either move this folder from where it is to the config folder (which I have mapped to the host) OR map SAB's "auto processing folder" option to this folder (as it is not exposed), let alone get to the config file to change anything in it. I "feel" like I really HAVE missed something fundamental here. I can't be the only one with this issue?
  9. Hi Guys, I have just installed the needo SickRage Docker (the second one I've tried). I can access the app and it seems to be working fine. However, I cannot find the autoProcessTV files to change the config of, and to map to from SAB. All that is in my config folder is as attached. Does anyone know how I find them? Ta, Daniel
  10. For me, in my new setup, the Cache drive is a Cache drive and that's all - thanks to the plugin 'Unassigned Devices' (it seems it is yet to be formally named) created by the awesome @gfjardim. BTW: this functionality HAS to be included natively in unRAID - it's THAT good! http://lime-technology.com/forum/index.php?topic=38635.0 I bought an additional drive which this plugin lets me mount outside the protected array (as in, "nothing" to do with it at all) and share as "app". Anyway, I have placed my docker.img onto it (literally 5 mins ago) and plan to do the same thing with my VMs. I will back this up to the protected array at a suitable frequency, but all in all - great!
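The "back this up to the protected array at a suitable frequency" idea could be as simple as a dated copy dropped by cron. A sketch with stand-in paths - the real source would be the docker.img on the unassigned "app" drive, and the destination a share on the array:

```shell
# Stand-in paths; replace with the real unassigned-device and array locations.
src=/tmp/app-demo/docker.img
dest=/tmp/array-demo/backups
mkdir -p "$(dirname "$src")" "$dest"
: > "$src"                               # stand-in for the real image file

# Keep a dated copy so several generations can coexist on the array.
cp "$src" "$dest/docker.img.$(date +%F)"
ls "$dest"
```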
  11. (Quoting @gfjardim: "I'll finish it and post a template on my repository until the end of the week, ok? I'll keep you posted.") G'Day! @gfjardim - did you ever get a chance to post a template for Hamachi?
  12. A watched pot never boils. You can press an elevator button 100 times; it still won't come any faster. Pick whichever analogy works for you, BUT, Dude - step away from the machine. Go do something else - it will finish when it finishes!!
  13. I'm guessing those are "small" files. If so, totally normal.
  14. I have just completed a very similar build as an upgrade to my main server. I am documenting my journey, as well as build notes and choices, here: http://lime-technology.com/forum/index.php?topic=37567.0

As for memory - remember, do not buy Registered DIMMs! You need Unbuffered DIMMs. Also, you need to buy 4GB or 8GB DIMMs; the board will NOT accept 16GB DIMMs. The QVL on the Supermicro site would have you believe that you are limited to Samsung, Micron or Hynix. That is not true - there are other options out there. BUT for this board, the only rule I have found is: DON'T buy Kingston! Their modules "used to work", BUT apparently in 2013 Kingston changed the supplier of the DRAM chips without changing the model number of the DIMMs. This meant that newer DIMMs with the same model number no longer worked properly in Supermicro X10 motherboards. Kingston then removed this board from the supported memory list on their site. For a full reference, see here: http://webcache.googleusercontent.com/search?q=cache:Ot37IHGhOOYJ:forums.freenas.org/index.php%3Fthreads/ram-recommendations-for-supermicro-x10-lga1150-motherboards.23291/+&cd=1&hl=en&ct=clnk&gl=au&client=safari

If I were you, I would also buy 32GB instead of 16GB. It's one of my own rules when it comes to system building (which I almost didn't follow when doing this build, due to cost): if you can, FILL the board. I always find that when I don't, some time down the line I wish I had. BUT this is a personal opinion.

I chose: 2 x Crucial 16GB Kit (8GBx2) DDR3/DDR3L-1600MT/s (PC3-12800) DR x8 ECC UDIMM Server Memory CT2KIT102472BD160B/CT2CP102472BD160B
Manufacturer: http://www.crucial.com/usa/en/ct2kit102472bd160b
Vendor: http://www.amazon.com/dp/B008EMA5VU/ref=pe_385040_127745480_TE_item

Apparently Crucial is Micron's consumer brand (and Micron is on the Supermicro QVL for this board), and the above model DIMM is apparently just a rebrand of Micron's MT18KSF1G72AZ-1G6E1. They are working flawlessly in my setup.
  15. Yeah - the Alpha is buggy as ALL hell! Plus they hadn't (as of a month or so ago when I checked) even implemented some basic features, like overwrites. I am using the latest stable on W10. Anyway - it's all probably moot for you now. How are things progressing, anyway - without issue, I hope!?
  16. It's a shame that you have formed that opinion. I have found the "stable" version of TeraCopy to be excellent and more than reliable. If you see the thread I contributed to about the 8TB Seagates - all those copies were done using TeraCopy. Plus - as if to cement my position - I am currently restoring (copying) 11TB of data from my backup server to my new main server using TeraCopy with validate copy selected. Solid and easy as ever.
  17. Turbo Write

(Quoting an earlier post:) "Hopefully not a too distant one. For what it's worth, not sure if you've followed the entire thread, but I think the on/off/auto option that WeeboTech suggested is by far the best way to implement this ==> where On means Turbo Write is on (and will spin up all disks to do any writes); Off simply means it's off, so writes are done in the usual way; and Auto means use Turbo Write if all disks are spinning, but otherwise do writes normally (i.e. don't spin up extra disks to do the write). I like the Auto mode best -- but there are cases where the Off/On choices are better (as WeeboTech pointed out in our discussion)."

Firstly, I'm disappointed that this feature got dropped from v6.0, and without notification. All good discussion. For me, I know what times of the day I would like Turbo Write to be enabled, so a schedule feature would be good. I know you "could" enable Auto and just write a script, external to this feature, to spin all drives up at the time(s) you want, thus technically enabling Turbo at those times => BUT a small scheduling feature would be cool! P.S. I keep feeling that more advanced scheduling has applications in lots of other current and future functions, e.g. backup, so if it was written in a way that the code could be leveraged by other features, it wouldn't be wasted dev effort.
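The On/Off/Auto behaviour described above can be sketched as a small decision function. This is purely illustrative - it is not unRAID's implementation, and "all disks spinning" is reduced to a simple yes/no input:

```shell
# Illustrative sketch of the On/Off/Auto write-mode decision described above.
choose_write_mode() {
  mode=$1          # on | off | auto
  all_spinning=$2  # yes | no
  case $mode in
    on)   echo turbo  ;;   # spin up all disks and use Turbo Write
    off)  echo normal ;;   # plain read-modify-write, no extra spin-ups
    auto)                  # Turbo only when no extra spin-ups are needed
      if [ "$all_spinning" = yes ]; then echo turbo; else echo normal; fi ;;
  esac
}

choose_write_mode auto yes   # prints: turbo
choose_write_mode auto no    # prints: normal
```

A scheduled mode, as suggested in the post, would simply be a cron job flipping between these settings at chosen times of day.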
  18. Turbo Write

Hi Guys, what's the status of this? I have had a look at the RC and can't see any options for it anywhere (I could be blind). Did it get implemented? I was assuming yes, because it's still on the v6.0 roadmap. I currently have a 3-disk array in my backup server without a cache drive, and wondered what the impact of this feature would be? Daniel
  19. Cool! Thought so! Nice to have it confirmed - I like to have things well planned; I hate winging it. Roger on the data warning. Agree it's a risk, as I'm essentially reducing 2 parity-protected copies of my data to 1 for a short period. However, barring an act of God physically destroying the backup server, I - like you - feel the risk of data loss is quite low, and I'm happy to take it. Thank you once again!
  20. Hi guys, I have just finished building my backup server (v6) and migrated all data over from my current main server (v5). Now I've built my new main server (v6) and am going to populate it with the disks from my current main server. Quick question - I am expecting to just drop these into my new main server under a new config, format all disks as XFS, set up all shares and migrate the data from the backup server to the new main server. Is this a reasonable plan, or am I missing something? I don't have to preclear all the disks again (for instance), do I? In this example I would prefer to know upfront, as I don't like waiting for unRAID to clear them!! Ta, Daniel
  21. I wouldn't use this switch. Use the faster preclear, sure - BUT remember this is not just about clearing the drive; it is also about giving it a workout and getting it past the infant mortality period. I'd rather have my drives fail when going through this routine than when I have important data on them, or when I want my array available. Just bite the bullet, run the cycles and wait! You'll be better off!
  22. There are a number of factors that can affect this when trying to preclear multiple drives at the same time - whether they are on the same bus, different controllers, etc. - BUT on average your speeds are within the norm that I have experienced (it happened to me too).
  23. Seems fine to me, although I would do more than 1 cycle. I'd suggest running them for 2 more (LT don't suggest 3 cycles for no reason). If you want quicker post-write read times, then use BJP's "faster" preclear script; it worked fine for me. Also - Weebo suggested something to me that I liked: a long S.M.A.R.T. test after the preclears, which essentially gives you final confidence in the drive and puts a marker in the log for future reference. What I'm suggesting is going to put you out a fair few more days, BUT we are talking about drive confidence, and it is your data that's going on them! I guess it's all a matter of perspective and cost vs gain. But for me, for the sake of a few days, KNOWING the drives have had a strenuous workout before trusting them with your data is nice peace of mind.
  24. If you're intent on using the command line, then I'd advise you to use mc over just cp. mc is basically a front end for all the basic file operations and directory manipulations, such as copying, moving, renaming, linking, and deleting. PLUS, I believe it can suspend and resume operations. BUT if I were you, I'd stay away from the command line if there is no need to use it, and I'd suggest that in this scenario there isn't a need. Just my two cents.