Leaderboard

Popular Content

Showing content with the highest reputation on 04/13/22 in all areas

  1. My wife does that all the time to me. Says if I'm going to act like a child she might as well treat me like one.
    2 points
  2. Thanks @SpencerJ, your fast response was very much appreciated. Regardless of my trial time remaining, I just purchased the Pro license. Not that I think I'll ever need a NAS solution that large; it was more of a 'thanks' to the devs for this very simple, yet powerful NAS/hypervisor interface. But who knows, my young kids might grow up to be YouTubers and may potentially need many terabytes of storage! Kind regards, DarthKegRaider
    2 points
  3. Hi everybody, it took me several days to figure out what my issue was, so I'll explain it here to provide a resolution. My Unraid server is running on a SuperMicro mainboard X10SDV-2C-TLN2F. It has two 10GbE links, and there currently seems to be a known issue with those adapters: in my case, when both NICs are connected via 10G, the system drops network connections unexpectedly at various times. I found this article on the net describing my issue, and I followed it, contacting SM support. https://tinkertry.com/how-to-work-around-intermittent-intel-x557-network-outages-on-12-core-xeon-d SuperMicro provided me a firmware patch version sdv23a for those NICs, but applying it didn't change anything for me. I received and applied a version sdv23b a day later, and that worked well for me, with one small con: I had to connect NIC0 to a 1GbE switch. Since then it looks very stable again. I also tried connecting NIC0 to the 10G switch and limiting the port's bandwidth to 1GbE, but that didn't eliminate the problem.
    1 point
  4. Yes, probably those memory addresses were in use by efifb: that command disables the EFI framebuffer (efifb) so you can pass the device through. It's an alternative to the video=efifb:off kernel argument.
    1 point
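  For reference, this is what the video=efifb:off alternative mentioned above looks like in practice - a minimal sketch, assuming the standard Unraid boot file at /boot/syslinux/syslinux.cfg (the label text is illustrative):
      label Unraid OS
        kernel /bzimage
        append video=efifb:off initrd=/bzroot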
  5. Thanks for making a really good program, and also hoping to see progress on the e-book part. Though if that works, you should make it a separate thing, like 'E-bookshelf'. Well, on to my "issue": it seems Audiobookshelf is very dependent on the file structure of your library. If you're a little messy, it makes errors with files. Also in regard to matching: this part is a little frustrating. Audiobookshelf finds the items, puts in the right metadata, and updates the info, but I have to go in manually and press Match on every single book so it updates and puts it in the series folder.
    1 point
  6. ...have you tried using your Google-fu? Unfortunately, I too only have a broken crystal ball [emoji854] Sent from my SM-G780G using Tapatalk
    1 point
  7. I just reinstalled v470.94, rebooted, and now I am back in working order. It must have been a misstep or end-user error.
    1 point
  8. Ummm... embarrassing, but please ignore everything I wrote; I appear to have been setting the download quota in qbt, not the upload quota. Total extended brain fart; I blame lack of sleep and work deadlines. Sorry for wasting your time. Getting 4-5Mb down now.
    1 point
  9. I got the script from mgutt, and I kept the scripts because it's practically the same as writing it in the Docker template (I have one for Emby and another for Plex, both with a RAM folder). But I guess it's simpler to have just /tmp, so I will delete the scripts and add the parameter to them both. Using your extra parameter in Docker, for Emby I now have:
      --runtime=nvidia --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp,tmpfs-size=8589934592
     And for Plex:
      --runtime=nvidia --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp,tmpfs-size=8589934592
     It's been a long time since I set this up, but it has been running pretty stable.
    1 point
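  As a side note on the parameters above: tmpfs-size is given in bytes, and 8589934592 is exactly 8 GiB. A minimal sketch of the equivalent docker run invocation (the container name and image are illustrative, not from the post):
      # 8589934592 bytes = 8 * 1024^3 = 8 GiB of RAM-backed /tmp
      docker run -d --name emby \
        --mount type=tmpfs,destination=/tmp,tmpfs-size=$((8*1024*1024*1024)) \
        emby/embyserver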
  10. You shouldn't really need to change the UL rate that much; I have mine set at a UL of 45 KB/s and I max my line out at times on DL at around 4 MB/s. Having said that, BitTorrent has always had a tit-for-tat system, so you may be rewarded with faster download rates by simply uploading faster. Just ensure you don't max out your upload rate, otherwise this will start to negatively affect your download rate.
    1 point
  11. Have a look at Q6 for some possible causes: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
    1 point
  12. Hi @ich777, OK, it seems the issue was indeed the TPU reaching high temperatures.
      shutdowntemp0=104800 shutdown_en0=1
      shutdowntemp1=104800 shutdown_en1=1
      shutdowntemp2=104800 shutdown_en2=1
      shutdowntemp3=104800 shutdown_en3=1
     Now it's much better, as I allowed some more ventilation in the case.
      2022-04-13 09:19:44 Coral Temp: 45.80C
      2022-04-13 09:19:59 Coral Temp: 45.55C
      2022-04-13 09:20:14 Coral Temp: 45.55C
      2022-04-13 09:20:29 Coral Temp: 46.80C
      2022-04-13 09:20:44 Coral Temp: 47.55C
      2022-04-13 09:20:59 Coral Temp: 48.30C
      2022-04-13 09:21:14 Coral Temp: 48.55C
      2022-04-13 09:21:29 Coral Temp: 49.05C
      2022-04-13 09:21:44 Coral Temp: 49.30C
      2022-04-13 09:21:59 Coral Temp: 49.55C
      2022-04-13 09:22:14 Coral Temp: 49.80C
      2022-04-13 09:22:29 Coral Temp: 49.80C
      2022-04-13 09:22:44 Coral Temp: 49.80C
      2022-04-13 09:22:59 Coral Temp: 50.30C
      2022-04-13 09:23:14 Coral Temp: 50.05C
      2022-04-13 09:23:29 Coral Temp: 50.05C
      2022-04-13 09:23:44 Coral Temp: 49.80C
      2022-04-13 09:23:59 Coral Temp: 49.80C
      2022-04-13 09:24:14 Coral Temp: 50.05C
      2022-04-13 09:24:29 Coral Temp: 49.80C
      2022-04-13 09:24:44 Coral Temp: 49.55C
      2022-04-13 09:24:59 Coral Temp: 50.05C
      2022-04-13 09:25:14 Coral Temp: 49.80C
      2022-04-13 09:25:29 Coral Temp: 49.55C
      2022-04-13 09:25:44 Coral Temp: 49.80C
      2022-04-13 09:25:59 Coral Temp: 49.80C
      2022-04-13 09:26:14 Coral Temp: 50.05C
      2022-04-13 09:26:29 Coral Temp: 50.05C
      2022-04-13 09:26:44 Coral Temp: 49.80C
      2022-04-13 09:26:59 Coral Temp: 49.80C
      2022-04-13 09:27:14 Coral Temp: 50.05C
      2022-04-13 09:27:29 Coral Temp: 49.80C
      2022-04-13 09:27:44 Coral Temp: 49.80C
      2022-04-13 09:27:59 Coral Temp: 49.80C
      2022-04-13 09:28:14 Coral Temp: 50.05C
      2022-04-13 09:28:29 Coral Temp: 49.80C
      2022-04-13 09:28:44 Coral Temp: 50.30C
      2022-04-13 09:28:59 Coral Temp: 49.80C
      2022-04-13 09:29:14 Coral Temp: 49.80C
      2022-04-13 09:29:29 Coral Temp: 49.80C
      2022-04-13 09:29:44 Coral Temp: 50.05C
     The script works great. I will monitor how things go and get back to you. Thank you for your assistance!
    1 point
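  A minimal sketch of what such a temperature logger could look like - the sysfs path and the 15-second interval are assumptions inferred from the log above, and the actual script is @ich777's, not this one:
      #!/bin/bash
      # Poll the Coral TPU temperature every 15 seconds and log it.
      # ASSUMPTION: the apex driver publishes millidegrees C at this path.
      while true; do
        t=$(cat /sys/class/apex/apex_0/temp)
        echo "$(date '+%Y-%m-%d %H:%M:%S') Coral Temp: $(awk -v t="$t" 'BEGIN { printf "%.2fC", t/1000 }')"
        sleep 15
      done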
  13. Hi @steini84, I just got my hands on this plugin and it's amazing! Thanks again for making it available on Unraid.
    1 point
  14. Got it, thanks for your advice. But it seems it won't work for me because my Unraid runs on a laptop, so I cannot replace the GPU 😂
    1 point
  15. Some sync errors are expected because of what happened; just let it finish. As mentioned by itimpi, you can then run a non-correcting check if you want to confirm all is fine, and it should be.
    1 point
  16. And you still have to enable "destructive mode" for UAD
    1 point
  17. Thanks for the help. I found one cache drive working, but with uncorrectable errors. I have removed it from the pool and will continue to monitor to see if that resolves the issue.
    1 point
  18. If you are running a correcting check, then it should be fixing the errors reported. The next check should be non-correcting and, if everything is good, will come back with 0 errors.
    1 point
  19. There are two root share choices; the "User and Pool Shares" choice should include your appdata. Your log looks a lot cleaner.
    1 point
  20. Skip the preclear: just replace the drive, build parity, do a parity check, and do a long SMART test on the drive. Preclear is not productive for your situation; you want to get valid parity back in place ASAP. If you had a new drive in anticipation of replacing a drive that was still functioning, then preclear might be of some value, but only as a stress test to weed out infant mortality. You will get that same level of testing by rebuilding the drive and checking parity, and the bonus is you will be protected that much sooner. If the new drive fails during the process, you aren't in any worse shape, just back to needing another drive. If you were getting a new drive to ADD to a NEW data slot, not replacing a drive, then preclear could save some time as well as give you a confidence check in the drive before trusting it in the array. This does not apply to your situation.
    1 point
  21. @ich777 Thank you for the /tmp recommendation, I'm trying that now. I was trying to check whether /tmp/jellyfin and /tmp/jellyfin/transcode existed, and if not, create them with proper permissions. But I see that everything in /tmp is supposed to be volatile, so the directory structure isn't needed. Hoping this works, I'll report back - thank you! EDIT - Works like a charm without the need to babysit directories... sigh. Thank you again. I don't really use the unRAID cache or arrays; is it "ok" to mount /cache to /tmp as well, or should it go to an SSD or NVMe?
    1 point
  22. Hi, thanks for starting this wonderful plugin. With my datasets (there are quite a lot of them), I wonder if it might be possible to show the pools in a list and only show/hide the datasets by clicking the pool, i.e. an expand/collapse feature. That way we could have a summary status across the whole system and more easily find problems without scrolling, potentially missing them in the process. Plus it would be a lot cleaner; currently mine takes up about 3-4 screens of scrolling. Thanks. <Edit> I think I should have opened my eyes - "Show Datasets"!
    1 point
  23. Since you won't tell us which Docker container and which problem ... other than that it doesn't work (are you sure?) ... page 1 has a few examples ...
    1 point
  24. It is included as a placeholder for the "Done" button, so it will know where to return to. Done is actually preferred over the browser back button; it will keep the GUI up to date instead of a cached browser copy. In addition, the placeholders are used to load the correct translations; without them, translations won't be complete.
    1 point
  25. Or check my two messages above, which are likely the issue. 🙂
    1 point
  26. Compression simply adds another option to the tar command, which compresses the archive. Personally, I never use this, as the extra time it takes is insane, and since we're all media whores at heart, we've learned that "storage is cheap". The "update" thing I'd have to look at.
    1 point
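  For illustration, the difference that option makes to the tar invocation - a minimal sketch, with archive names and paths that are illustrative rather than taken from the plugin:
      # plain archive: fast, no compression
      tar -cf appdata-backup.tar /mnt/user/appdata
      # gzip-compressed archive: smaller, but takes considerably longer
      tar -czf appdata-backup.tar.gz /mnt/user/appdata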
  27. Yes, but you can't just change it; you need to see if your VPN provider has WireGuard configs with different ports, and if not, then you are out of luck. The alternative is to try OpenVPN; you will need to download the OpenVPN config file from your VPN provider. Follow the guide here if you get stuck: https://github.com/binhex/documentation/blob/master/docker/guides/vpn.md
    1 point
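  For orientation, the port in question is the Endpoint port in the [Peer] section of the provider's WireGuard config - a minimal sketch in which the hostname, port, and key are placeholders, not real provider values:
      [Peer]
      PublicKey = <provider-public-key>
      Endpoint = nl.vpnprovider.example:51820
      AllowedIPs = 0.0.0.0/0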
  28. Yep, there is an error with the template. I lost everything from my first setup upon a Docker update due to this error.
    1 point
  29. Unraid will run well on any common modern (and not-so-modern) hardware. Just choose the processor and pick from the list of compatible motherboards. https://pcpartpicker.com/list/
    1 point
  30. How is this emulation on Unraid progressing? I am mostly curious. I don't have any specific need for it, but it would be fun to try and useful to have.
    1 point
  31. Yeah, my system boots with no issue. I just have to give it a few minutes past that message and then log in from my phone or another system to access the interface. The console goes dead after the error message because the system loses the ability to keep updating the display there without a graphical output available, but the system keeps going.
    1 point
  32. Hmm, do you mean that setting on the Management Access Settings page?
    1 point
  33. Do the following to let Docker rebuild the networks:
      rm /var/lib/docker/network/files/local-kv.db
      /etc/rc.d/rc.docker restart
    1 point