Everything posted by roland

  1. Are you using all these apps as plugins or dockers? I would suggest using one of your 500 GB disks as a cache drive. Not to speed up your writes but to store your configs and your downloads. That way the array can spin down and only the cache keeps spinning. If you are using plugins, make sure all your configuration is set to be stored on the cache disk or the flash. Since unRAID runs completely from memory, any configuration not stored on a disk or the flash gets lost on reboot. If you use dockers, use /mnt/cache/appdata to store the config for the same reason. I have all my dockers and downloads on the cache drive and move the completed files to the array once the download finishes.
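     For illustration, the equivalent command-line mapping could look roughly like this. A sketch only: the container name, image, paths and port are assumptions, and in the unRAID GUI you set the same thing through the volume and port mappings.
     # sketch: config and downloads both live on the cache disk
     docker run -d --name=sabnzbd \
       -v /mnt/cache/appdata/sabnzbd:/config \
       -v /mnt/cache/downloads:/downloads \
       -p 8080:8080 \
       linuxserver/sabnzbd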
  2. I had similar issues where the whole box would just hang. The only way out was pushing the power button. Usually I saw the shfs process taking 100% CPU and I suspected it had something to do with the mover. It all went away after I changed all my drives to XFS. See this sticky for details: http://lime-technology.com/forum/index.php?topic=37490.0
  3. As a rule, never mix user shares and disk shares in the same command!! For example: cp /mnt/user/movies /mnt/disk1/movies. Unexpected things can happen. If you only have 1 disk it won't matter which option you use. If you have multiple disks, using shares means you might actually be moving data between disks. So if you are happy with the distribution across the disks I would use the disk-to-disk option, as in the sketch below.
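     A hedged example with assumed paths, showing the safe disk-to-disk variant next to the mix to avoid:
     # dangerous: mixes the user-share view with a direct disk path
     # cp -r /mnt/user/movies /mnt/disk1/
     # safe: both sides address physical disks directly
     rsync -av /mnt/disk2/movies/ /mnt/disk1/movies/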
  4. Hi, I get the "Dynamix WebUI not set to auto update" message but there is no AutoUpdate setting available (I am on 6.1.8). If it is available in 6.1.9 or 6.2 then there is no big issue, I just need to upgrade.
  5. Why do you want to share a disk? For data? Why not just connect/map to a share?
  6. Thanks! It will still be a headless server, so no GPU required for the VMs. I will give it a spin. Looks like I need a bigger cache disk soon ...
  7. I finally got around to upgrading my BIOS to the 2003 version and tried switching on virtualisation. I found the VT-x setting but there is no VT-d setting. That could be because my CPU does not support it!! The i3-4130 does have VT-x but not VT-d. Can I still do virtualisation in unRAID? What would be the restrictions? Thanks
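     For reference, you can check this from a shell; a sketch only. The vmx flag indicates VT-x on Intel CPUs, while VT-d is a chipset/BIOS feature that does not appear in /proc/cpuinfo, so look for IOMMU/DMAR kernel messages instead:
     grep -o vmx /proc/cpuinfo | sort -u    # prints "vmx" if the CPU has VT-x
     dmesg | grep -i -e dmar -e iommu       # IOMMU messages appear if VT-d is active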
  8. I wrote a little script to help me with that. You are welcome to use it. I haven't used it recently, so let me know if there is a problem. https://github.com/Data-Monkey/unRAID/blob/master/tools/DockerSelector.sh
  9. You will enjoy it. I have been using unRAID for about 2 years and love how far it has come even in that short time. As for your "pool": the array in unRAID is the sum of all the data drives. You just need to make sure your parity drive is greater than or equal to the biggest of your individual drives. One of the big advantages of unRAID is that you can mix and match different drives in your protected array. Have fun!
  10. Hi aptalca, can I suggest adding this thread as the support thread in CA for your docker? Or linking to it from your main support thread? That would make it much easier to get here. Thanks for all the great work.
  11. Yes, that will work. Your new "pool" or array is just the sum of all the disks (excluding the parity disk): 0.5+0.5+1+1.5+2 TB = 5.5 TB. If you don't want to delete any data then you could change your procedure a bit (a sketch of a single copy step follows below):
     - Copy the 2 big disks (3.5 TB of data) onto the 4 TB.
     - Install the now-empty big disks in unRAID; that gives you 3.5 TB free.
     - Copy the 3 small disks (2 TB in total) to unRAID.
     - Install the now-empty small disks (gives you a total of 5.5 TB).
     - Copy the 3.5 TB from the 4 TB disk back to unRAID.
     - Then install the 4 TB as the parity. Done!
     The data is moving around a fair bit during that operation. Be sure you have a backup before you start, and check the health of your disks. If you can/want to invest in a second 4 TB disk you could avoid the multiple copies and keep the original disks intact until the copy has finished:
     - Install the first 4 TB disk as data and copy all the data over.
     - Install the second 4 TB disk as parity (you could install parity at the start, but it will be faster this way).
     - Once everything is good, format and pre-clear the "old" disks and add them to the array.
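     A sketch of a single copy step, assuming the old disk is mounted read-only outside the array (device and mount point are placeholders):
     mkdir -p /mnt/olddisk
     mount -o ro /dev/sdX1 /mnt/olddisk    # sdX1 = the old disk's data partition
     rsync -av /mnt/olddisk/ /mnt/disk1/   # copy everything onto an array disk
     umount /mnt/olddisk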
  12. There seem to be a few topics in here. Let's tackle the shares first. You have both appdata and Appdata because Linux is case-sensitive, so those are two different folders on your disks. First I would suggest sticking to cache-only for appdata, so use /mnt/cache/appdata for all your docker containers. Since Appdata and appdata are set to cache-only they should only exist on the cache disk. Check if any of the dockers are installed in Appdata and change them over: stop the docker, move the folder from Appdata to /mnt/cache/appdata, edit the docker to reflect the change, then start it again (a rough sketch follows below). If you have Appdata or appdata on any of your data drives, make sure they are empty and remove them, i.e. /mnt/disk1/Appdata. For the other issues I suggest you post a screenshot of the docker overview with advanced settings so we can see all the mappings. There might be some mismatch there. Maybe you are filling up the docker.img with the backups?
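     The move itself is one line per container; a rough sketch assuming a container called sonarr (the name is an example, and stop the container first):
     mv /mnt/cache/Appdata/sonarr /mnt/cache/appdata/sonarr
     ls -d /mnt/disk*/[Aa]ppdata 2>/dev/null   # check no data disk still carries either spelling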
  13. I am not using audio often, but I use Plex to stream to a Chromecast Audio. It works well enough for what I need.
  14. You can use a line like this. It will generate a notification that you can manage through the normal notification settings:
     /usr/local/sbin/notify -i normal -s "Appdata Backup Completed" -d "Appdata Backup completed at `date`"
     -i: importance, -s: subject, -d: description
     I use this in my backup script.
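     For example, wrapped around the actual backup command. The paths are examples, and I am assuming alert as the failure-level importance:
     #!/bin/bash
     # sketch: send a notification based on the rsync result
     if rsync -a /mnt/cache/appdata/ /mnt/disk1/backup/appdata/; then
       /usr/local/sbin/notify -i normal -s "Appdata Backup Completed" -d "Appdata Backup completed at `date`"
     else
       /usr/local/sbin/notify -i alert -s "Appdata Backup FAILED" -d "Appdata Backup failed at `date`"
     fi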
  15. Yes, reformatting all the drives is a slow process. There is a sticky http://lime-technology.com/forum/index.php?topic=37490.0 explaining the process. It worked a treat for me. But you need an empty disk to start with, and as always a backup would be good too. Roland
  16. I had similar issues. Usually shfs used 100% CPU. I could still telnet to the server and browse around as long as I didn't touch the array disks. I ended up changing all disks from ReiserFS to XFS (including cache). This seems to have fixed it so far. Roland
  17. I use an rsync GUI on Windows (DeltaCopy) to back up my data, mostly user folders and photos, to unRAID. From there I back everything up to CrashPlan. Works very well for me, except that the initial backup to the cloud takes forever, still 3 months!! to go. I am contemplating also backing up to a USB disk as I am only talking about 4 TB of data. Hope this helps. Roland
  18. If you run the rsync from the backup server you could just include the appropriate shutdown command in the script (a minimal sketch follows below). You could possibly even include the whole script in the go file, but I am just guessing here as I am not sure where in the startup sequence the go file falls. If this is possible you could wake up the backup server, it would run the rsync script and shut down again. Roland
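     Something along these lines on the backup server; the hostname, share and paths are assumptions:
     #!/bin/bash
     # pull the data from the main server, then power off again
     # (the main server could wake this box first, e.g. with "etherwake <MAC>")
     rsync -a --delete root@tower:/mnt/user/photos/ /mnt/backup/photos/
     poweroff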
  19. There is a great sticky that explains the process to move from ReiserFS to XFS. I did this just last week with no issues at all. http://lime-technology.com/forum/index.php?topic=37490.0
  20. I don't think you need to do anything for transcoding. Plex understands what format the client can consume and transcodes the stream in real time when required. Just start streaming to the Chromecast and Plex will do the rest.
  21. Hi, I am having the same problem every few days. There are a few more posts around the forum; look for "shfs 100%". I tried most of the recommendations in those threads, except changing the file system (a new disk is on its way). But so far no luck. Any extra ideas would be appreciated.
  22. Thanks trurl. Looking at it, it seems they only bring up a MongoDB docker using compose; maybe I can still work around this by making my own docker. Maybe I'll start with an AWS instance where I can play without the unRAID restrictions and come back to unRAID once I get it working. Thanks, Roland
  23. OK, I am trying this .... So far I tried to add this docker-hub image to unRAID using the extended search and the xml conversion: https://hub.docker.com/r/openmhealth/shimmer-resource-server/ I only added the port to the config. The result was this XML and a docker that does not start.
     cat my-shimmer-resource-server.xml
     <?xml version="1.0" encoding="utf-8"?>
     <Container>
       <Name>shimmer-resource-server</Name>
       <Description>A resource server that retrieves health data from third-party APIs and normalizes it.&#13; [b]Converted By Community Applications[/b]</Description>
       <Registry>https://hub.docker.com/r/openmhealth/shimmer-resource-server/~/dockerfile/</Registry>
       <Repository>openmhealth/shimmer-resource-server</Repository>
       <BindTime>true</BindTime>
       <Privileged>false</Privileged>
       <Environment/>
       <Networking>
         <Mode>bridge</Mode>
         <Publish>
           <Port>
             <HostPort>8083</HostPort>
             <ContainerPort>8083</ContainerPort>
             <Protocol>tcp</Protocol>
           </Port>
         </Publish>
       </Networking>
       <Data>
         <Volume>
           <HostDir>/mnt/cache/Apps/shimmer</HostDir>
           <ContainerDir>/opt/omh/shimmer</ContainerDir>
           <Mode>rw</Mode>
         </Volume>
       </Data>
       <Version></Version>
       <WebUI></WebUI>
       <Banner></Banner>
       <Icon></Icon>
       <ExtraParams></ExtraParams>
     </Container>
     I assume it does not start as there is very little in the docker file .... of course shimmer.war does not exist. How do I get these instructions into the docker file? Roland
  24. Hi you lovely docker developers. With all these new health apps and fitness trackers around I find myself "connected" to a multitude of apps that all store some of my health/fitness data: Fitbit, Google Fit, iHealth, RunKeeper, Endomondo, UnderArmour, and the list goes on and on .... I am looking for something like OwnCloud to place all my fitness data into one spot, and I found this project:
     Shimmer: http://www.getshimmer.co/ "Shimmer is the first and only open-source health data aggregator."
     Shimmer uses an open standard to exchange and store health data:
     Open mHealth: http://www.openmhealth.org/ "HEALTH DATA THAT MAKES SENSE. Free, open-source code that makes it easy for developers to process digital health data."
     Both of them support Docker. Can someone suggest how hard/easy it would be to bring these to unRAID? I am not sure if I want to ask you guys to build it for me, as I am not even sure I will use it in the future. Don't want to waste your precious time. But if you feel like an extra little challenge .... Thanks Roland
  25. After this had happened to me on the 21st (see above) I came back today from a long weekend away and the server was in the same state. Please find attached both syslogs (21st and 26th). The shfs process takes 100% CPU. I can telnet to the server and even browse around the flash and cache disks, but not the array; if I touch the array the telnet session freezes up as well. The GUI is not available. Hardware is in the sig. unRAID 6.1.6. Dockers: binhex-delugevpn, crashplan, NodeRed, PlexMediaServer (needo), Sonarr (linuxserver). syslogs.zip