Community Developer
rix's Achievements

Enthusiast (6/14)

  1. After the recent ungrateful comments and hostility I have decided to unfollow the thread and, starting with this post, not to post on this forum anymore - this includes DMs (exceptions will be made for the Unraid/mod team). GitHub issues/discussions are still welcome - they allow me to moderate what I spend my precious free time on. Feel free to discuss my work here and help each other out - there is no need to lock this thread; this has worked well in the past. I will continue to work on my projects and publish them (also to the Unraid template repo) for people who find them useful. EDIT: I have removed all links to this thread from my template repo.
  2. Sounds good. Thanks for investing your free time to help others.
  3. That makes sense. It's one of the reasons why the privileged flag is not part of the default config in the readme. Some users were unable to use Ripper without the flag, though, so its usefulness may differ between use cases. So what you are saying is that it works for you if you don't run the container as privileged?
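For anyone comparing the two setups, here is a rough sketch of running Ripper without the privileged flag by passing the drive devices explicitly. The device paths, volume mappings, and container name below are assumptions for illustration, not the readme's exact config - check which devices your host actually exposes.

```shell
# Sketch: run Ripper without --privileged by granting access to the
# optical drive devices directly (paths are examples, verify on your host
# with: ls /dev/sr* /dev/sg*)
docker run -d \
  --name ripper \
  --device=/dev/sr0 \
  --device=/dev/sg0 \
  -v /mnt/user/appdata/ripper:/config \
  -v /mnt/user/rips:/out \
  rix1337/docker-ripper:latest

# If ripping fails on your hardware without the flag, the fallback the
# thread describes is the privileged variant:
# docker run -d --name ripper --privileged ... rix1337/docker-ripper:latest
```

Whether `--device` alone is enough appears to depend on the drive and host, which matches the "usefulness may differ between use cases" observation above.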
  4. Nope. You'll need to ship to central Europe.
  5. Good suggestion: https://github.com/rix1337/docker-templates/commit/0d3f3ead2849f37384b904b9fa2de254570c8f08 ✅
  6. Thanks for chiming in so quickly, @squid. I have decided not to respond to hostile comments; I am less than willing to spend my free time on internet trolls. To clear things up: as per https://github.com/rix1337/docker-ripper#do-you-offer-support I can happily confirm that I am not able to offer any support on this issue. Transparency is always important in open source projects - that's why that passage was added to the readme. Ripper generally works well, and anyone is free to solve issues and improve the project by sending a pull request. @stayupthetree if you need further assistance and do not want to become a sponsor, you can either: a) ask others in the community how they managed to get multiple drives working, or b) ship me a secondary optical drive for free (ideally your exact model) so I can try to diagnose this. Edit: To further facilitate transparency I have reworded the https://github.com/rix1337/docker-ripper/blob/master/README.md#how-do-i-rip-from-multiple-drives-simultaneously section in Ripper's readme.
  7. You do not seem to grasp the concept of "no support".
  8. Good luck getting support for a passion project with that attitude.
  9. Try deleting everything from the config path. What version and tag of the image are you using? The reason I am asking is that ripit was removed from the image long ago (replaced by abcde). It is highly likely that your ripper.sh is too old and needs to be deleted. Ripper should then place the latest version at /config.
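The clean-up described above can be sketched as follows. The config directory path is an assumption for illustration - use whatever host path you mapped to the container's /config.

```shell
# Remove a stale ripper.sh so the container can regenerate the current
# version at /config on its next start.
# CONFIG_DIR is an assumed example path, not a path from the readme.
CONFIG_DIR="${CONFIG_DIR:-/tmp/ripper-config-demo}"
mkdir -p "$CONFIG_DIR"
touch "$CONFIG_DIR/ripper.sh"    # demo only: pretend an outdated script exists

rm -f "$CONFIG_DIR/ripper.sh"    # delete the outdated script
# docker restart ripper          # then restart the container to regenerate it
```

On a real Unraid setup the mapped path would typically live under appdata; the restart is what triggers Ripper to write the fresh script.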
  10. The GitHub readme clearly states that you need to run one container per optical drive.
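As a hedged sketch of that one-container-per-drive setup, two instances might be started like this. The device paths, container names, and volume mappings are assumptions for illustration - enumerate your actual drives first (e.g. with `ls /dev/sr*`).

```shell
# Sketch: one Ripper container per optical drive.
# /dev/sr0 and /dev/sr1 are example device nodes, verify them on your host.
docker run -d --name ripper-drive0 \
  --device=/dev/sr0 \
  -v /mnt/user/appdata/ripper0:/config \
  -v /mnt/user/rips0:/out \
  rix1337/docker-ripper:latest

docker run -d --name ripper-drive1 \
  --device=/dev/sr1 \
  -v /mnt/user/appdata/ripper1:/config \
  -v /mnt/user/rips1:/out \
  rix1337/docker-ripper:latest
```

Note that each container gets its own config and output paths so the two instances do not fight over the same ripper.sh and log.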
  11. That log size is insane; creating a unified log with rolling support is not trivial in bash. I'll gladly accept a pull request for that. I still managed to fix this: using Stack Overflow code™, the web UI now loads (up to) the last 100 lines from the logfile. If the log is larger than 1 MB, a warning to consider clearing the log is shown as well. This worked with a 400 MB file in my testing, so the error should no longer show in your case. Enjoy and show some love for this project, if you can 😉
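The tail-and-warn approach described above can be sketched in a few lines of shell. This is a minimal illustration, not Ripper's actual web UI code; the log path and demo data are assumptions.

```shell
# Sketch: show only the last 100 log lines and warn when the log
# exceeds 1 MB. LOG is an assumed example path with generated demo data.
LOG="${LOG:-/tmp/Ripper-demo.log}"
seq 1 500 | sed 's/^/log line /' > "$LOG"   # demo data: 500 lines

MAX_BYTES=$((1024 * 1024))                  # 1 MB threshold
SIZE=$(wc -c < "$LOG")
if [ "$SIZE" -gt "$MAX_BYTES" ]; then
  echo "Warning: log is larger than 1 MB, consider clearing it."
fi

TAIL_OUT=$(tail -n 100 "$LOG")              # only the last 100 lines
echo "$TAIL_OUT" | head -n 1                # → log line 401
```

Bounding the read with `tail -n` is what keeps the UI responsive even against a 400 MB file, since the whole log is never loaded at once.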
  12. That size is definitely the reason for the failure.
  13. Post your Ripper.log here along with screenshots from your browser's console. You might be the first to actually use this.
  14. Thank you for posting this. You have noticed a bug. Both versions of ripper (latest and manual-latest) should now correctly display the web UI. Please pull the updated image before proceeding.
  15. Glad you like it. To further assist you we need more info: What specific tag of the Docker image are you running? What does your log look like (specifically after the "Starting web ui" message)? The tag should be visible in your "Docker" tab; the log is accessible through the Unraid UI.