Leaderboard

Popular Content

Showing content with the highest reputation on 01/08/19 in all areas

  1. @Ashe @trekkiedj @itimpi Please see the updated first post for an example of how I have it set up. I have also added a temporary solution for adding multiple library locations if anyone needs that. I will add a feature to the WebUI to separate it in the future. Thanks though to you three for sorting the Docker configs out for everyone; I should have posted better details here initially. @trekkiedj - Thanks for the reports regarding the ASCII issues, I'm looking into that one ASAP. Possible ETA for a fix is by the weekend.
    3 points
  2. Tread lightly here, some discussions are better done in private. Some companies have bigger legal teams than others, best not to bring unwanted attention.
    2 points
  3. CA Appdata Backup / Restore (v2) Due to some fundamental problems with XFS / BTRFS, the original version of Appdata Backup / Restore became unworkable and caused lockups for many users. Development has ceased on the original version and has switched over to this replacement. Fundamentally they are the same and accomplish the same goals (namely, backing up your Appdata share and USB / libvirt), but this version is significantly faster at the job.

This version uses tar instead of rsync, and offers optional compression of the archive - roughly 50% if not including any downloads in the archive, which you really shouldn't be anyway. Because it uses tar, there are no longer any incremental backups. Instead, every backup goes into its own separate dated subfolder. Old backups can optionally be deleted after a successful backup. Even without incremental backups, the speed increase afforded by tar means there should be no real difference in the end. (ie: A full backup of my appdata on the old plugin takes ~1.5 hours. This plugin can do the same thing uncompressed in about 10 minutes, and compressed in 20 minutes. The optional verification of the archive takes a similar amount of time. An incremental backup of my appdata on the old plugin averaged around 35 minutes.)

The option for separate destinations for USB / VM libvirt backups has changed: if no destination is set for those backups, they will not be backed up. Additionally, unlike the original plugin, no cache drive is necessary, and the appdata source can be stored on any device in your system (ie: unassigned devices). The destination, as usual, can be any mount point within your system. Unfortunately, because there are no more incremental backups, this version may no longer be suitable for backing up offsite to a cloud service (ie: via rclone).

You can find this within the Apps tab (search for Appdata Backup). The original v1 plugin should be uninstalled if migrating to this version.
    1 point
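The tar-based, dated-subfolder scheme described above can be sketched in shell. Paths, filenames, and the 30-day retention window here are my assumptions for illustration, not the plugin's actual implementation:

```shell
#!/bin/bash
# Sketch of a tar-based appdata backup with dated subfolders.
# SRC/DST are hypothetical paths; adjust to your own shares.
SRC="${SRC:-/mnt/cache/appdata}"
DST="${DST:-/mnt/user/backups/appdata}"
STAMP="$(date +%Y-%m-%d-%H%M)"   # each run gets its own dated subfolder

mkdir -p "$DST/$STAMP"
# -z gives the optional gzip compression mentioned in the post
tar -czf "$DST/$STAMP/appdata.tar.gz" -C "$SRC" .

# Optional verification: list the archive and complain on corruption
tar -tzf "$DST/$STAMP/appdata.tar.gz" > /dev/null || echo "verification failed"

# Optionally prune backups older than 30 days (retention period is an assumption)
find "$DST" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +
```

Since each run lands in its own subfolder, pruning is just deleting old directories - no incremental state to track.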
  4. It really doesn't matter which device designation is given to a disk. Linux uses dynamic assignment, and this may vary each time you reboot your system. Unraid uses the serial number of the disk, which is unique and never changes unless you replace the disk with another one. Not sure how you unplugged the disks, but this should never be done while the array is operational. Some systems support hot-swap (it really depends on the hardware you have), but swapping or replacing disks must always be done with the array stopped. In any case, when not sure, always shut down the system before fiddling with the disks.
    1 point
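The dynamic vs. stable naming above is easy to see on any Linux box: the symlinks under /dev/disk/by-id are built from model and serial number and persist across reboots, while the sdX devices they point to can change. (The example device name in the comment is made up.)

```shell
# Serial-based names are stable symlinks to whatever sdX the kernel
# happened to assign this boot:
ls -l /dev/disk/by-id/
# e.g. ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1234567 -> ../../sdb
```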
  5. As was mentioned, the sdX designations are always subject to change and are not relevant to Unraid, which uses serial numbers to identify disks. Some other factor is at play. You should include your diagnostics ZIP file (obtained via Tools->Diagnostics) so we can see what is happening.
    1 point
  6. Has there been any thought to automatically renaming the converted files to reflect the fact that they have been re-encoded? For instance I was thinking of replacing h264 in the filename with either h265 or HEVC.
    1 point
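The rename idea above can be sketched with bash parameter substitution (the filename is illustrative; this is not part of the tool itself):

```shell
#!/bin/bash
# Replace the first occurrence of "h264" in a filename with "h265".
f="Movie.2018.1080p.h264.mkv"   # hypothetical input name
new="${f/h264/h265}"            # -> Movie.2018.1080p.h265.mkv
echo "$new"
# To rename on disk: mv -- "$f" "$new"
```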
  7. It's legal to virtualize OS X on Apple hardware; one business example: https://www.macstadium.com/ It's also legal to use it in VMware, again, on Apple hardware. And many Macs also run Windows 10, like my 2007 iMac that wouldn't upgrade to Sierra, so I loaded Windows 10 on it (everything but sound works). You can also run it on Mac Pros (not the cylinders, but the previous ones). So my assumption is that dkerlee is asking about this type of setup (Windows running on Mac hardware) and no other scenario, as anything else violates their EULA. But I'm also pretty sure this thread will end up in the bilge, which is also ok.
    1 point
  8. Look into the VMware method, might be easier. He did a vid on that way as well.
    1 point
  9. Holy smokes, there are a lot of posts here. Sorry for the absence. Thanks everyone for the feedback, especially those who took the time to raise issues on GitHub. If I don't get back to you directly, it's because I'm busy either working or working. I've had an insanely busy first day back at work and it looks like more of that for perhaps the next few weeks. I spent a few hours today implementing the guts of automated testing for the project. This highlighted some pretty dumb code that I had written, so I've just spent the past few hours refactoring the file that handles communication with the ffmpeg subprocesses. The next priority I'm seeing here is to fix the issues with special characters not being escaped or converted. Fingers crossed I can get that done ASAP for you guys. Thanks again, Josh Sent from my ONE E1003 using Tapatalk
    1 point
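On the special-character point above: passing the filename as a single quoted argument, rather than interpolating it into a shell command string, sidesteps most escaping problems. A generic sketch, not the project's actual command line:

```shell
#!/bin/bash
# Quoting keeps apostrophes, brackets, and spaces intact as one argument.
in="My Movie's [2019] h264.mkv"       # hypothetical awkward filename
out="${in%.mkv}.hevc.mkv"             # derive an output name
echo "would run: ffmpeg -i \"$in\" -c:v libx265 \"$out\""
```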
  10. You're not using the Marvell controller now, so the resync should go fine, though it hadn't started yet in the diags you posted.
    1 point
  11. kizer is spot on--the community here is incredible--I've had my butt saved by incredible members here. I've been using the same license (and same USB key now that I think about it hmmm) since buying a 1TB drive was considered HUGE and cost prohibitive. If you look at the cost of my license over the last decade plus, it's amounted to absolute peanuts.
    1 point
  12. I chose Unraid years ago because of its great forum and the community of helpful people here. Just another thing to consider.
    1 point
  13. Based on the two previous solutions, I came up with a config for a subdomain-based setup:

server {
    listen 443 ssl;
    server_name ubooquity.*;

    include /config/nginx/ssl.conf;
    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_ubooquity ubooquity;
        proxy_max_temp_file_size 2048m;
        proxy_pass http://$upstream_ubooquity:2202;
    }
}
    1 point
  14. You can make a backup of its contents by going to Main - Boot Device - Flash - Flash Device Settings and clicking on the Flash backup button. Then, see here: https://lime-technology.com/replace-key/
    1 point
  15. Ah, ha! I also was having issues where the upsmon slave (unRaid) was successfully connecting to my NUT master (pfSense box) but not showing any details at all in unRaid. It was the hardcoded ups name 'ups'. I had set a custom name for the master on pfSense. Changing this back to 'ups' in pfSense resolved the details not showing. Would love to see some expansion on this in the GUI. NUT allows the ability to monitor multiple (redundant) UPSs, so it would be helpful to be able to set custom names for our devices and still have the details shown. Pretty sure I read that you are only able to show details from 1 of the monitored devices, so perhaps show the first MONITOR entry in upsmon.conf or a dropdown in the GUI to select which monitored device to show details for. Thanks for any consideration and thanks for the plugin!!!
    1 point
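For reference, the multi-UPS monitoring mentioned above is driven by MONITOR lines in upsmon.conf (syntax: MONITOR system powervalue username password type). A sketch with made-up UPS names, host, and credentials:

```
MONITOR ups@192.168.1.1 1 monuser secret slave
MONITOR rack-ups@192.168.1.1 1 monuser secret slave
```

The hardcoded 'ups' name issue described in the post corresponds to the system part before the @.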