Leaderboard

Popular Content

Showing content with the highest reputation on 07/06/20 in all areas

  1. Hi @Alex.b, Unraid uses what we call "Shares"; these shares are isolated directories that live on the Array (in your case, 8TB for the Array and 8TB for parity). Shares can optionally use the cache system, which is storage much faster than a hard drive (an SSD). With the cache enabled, when you write to your Shares, the files are first placed on the cache for a much faster write. Then, on a schedule, a program called the "mover" transfers all your files from the cache drive to the Array. Unraid also offers the option for a Share to use only the cache, which means all of that Share's files will live only on the SSD.
    2 points
  2. Overview: Support for Docker image arch-qbittorrentvpn in the binhex repo. Application: qBittorrent - https://www.qbittorrent.org/ Docker Hub: https://hub.docker.com/r/binhex/arch-qbittorrentvpn/ GitHub: https://github.com/binhex/arch-qbittorrentvpn Documentation: https://github.com/binhex/documentation If you appreciate my work, then please consider buying me a beer 😁 For other Docker support threads and requests, news and Docker template support for the binhex repository please use the "General" thread here
    1 point
  3. Hello, I've had a Windows server for 4 years that I recently migrated to openmediavault, but I don't really like how it works. I'm looking into Unraid but I don't understand everything yet. I've just figured out what the parity system is, and I think the idea is great. Today I have 2 128GB SSDs and 2 8TB drives, which I was simply running in RAID 1. What I don't fully understand is the cache system in Unraid. What is it for? I have a Plex library with quite a lot of metadata, and using an SSD for that is strongly recommended. Can I configure the Plex Docker container to put all the metadata on the SSD? Note that I haven't installed Unraid yet, so I'm speaking somewhat blindly. Thanks
    1 point
  4. I'm not planning to switch to Protect, will happily maintain this until it literally doesn't work anymore. Hopefully where it currently lives, or as a fork if Pducharme wants to shut it down.
    1 point
  5. You can look at your syslog; it has timestamps. If you have dockers that depend on other dockers, you would want them to start in a particular order, maybe with some delay. MariaDB, for example, might be the database for other dockers.
    1 point
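The start-order-with-delay idea above can also be sketched from the CLI. A minimal sketch, assuming hypothetical container names (on Unraid, the Docker page's autostart "wait" column achieves the same thing through the GUI):

```shell
#!/bin/bash
# Start containers in a fixed order, pausing between each so dependencies
# (e.g. a database) are up before their consumers start. Each argument is
# "container_name:delay_seconds"; all names below are hypothetical.
start_in_order() {
  local spec name delay
  for spec in "$@"; do
    name=${spec%%:*}      # part before the colon: container name
    delay=${spec##*:}     # part after the colon: seconds to wait
    docker start "$name"
    sleep "$delay"
  done
}

# Example: bring up MariaDB, wait 30s, then start the apps that use it.
# start_in_order mariadb:30 nextcloud:5 bookstack:5
```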
  6. @Bjur Download via what -- Usenet/torrent? Or download from your actual mount? Are you utilizing a cache drive to avoid parity write bottlenecks? Lots of different variables can affect your DL speeds, and a lot are out of your control --> like distance from the server and peering to the server. But on to what you can control. Generally the fastest way (and a good way to test for any bottlenecks) is to download to a share that is set to "Use cache: Only" in Unraid. That way you avoid any parity write overhead. Also, kind of obvious, but NVMe/SSD will trump any mechanical HDD, so for quick writes that's what you should be using. Other than that, you can play with the number of parallel workers, the buffer and cache size of files, etc. With DZMM's scripts, these values are optimized for downloading/streaming from gdrive, but you can read up on other settings on the official rclone forum. Animosity022's GitHub has some great settings (heavily tested, and he is very active on the rclone forum). His recommendations are often the most widely accepted settings when it comes to general-purpose mounting!
    1 point
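The tunables mentioned above (parallel workers, buffer and chunk sizes) live on the rclone command line. A minimal sketch with illustrative values; the remote name, mount path, and all numbers here are assumptions to experiment with, not recommendations:

```shell
#!/bin/bash
# Wrap the mount so the tuning flags are in one place. The remote name
# (gdrive:), the Unraid-style mount path, and every value are assumptions.
mount_gdrive() {
  rclone mount gdrive: /mnt/user/mount_rclone/gdrive \
    --buffer-size 256M \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit 2G \
    --transfers 8 \
    "$@"
}
```

Raising `--buffer-size` and the VFS read-chunk sizes trades RAM for fewer, larger requests; measure before and after, since the optimum depends on your line and the remote.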
  7. I put it on the Intel controller, ran the fstrim command and didn't have any issues.
    1 point
  8. Answering my own question: you simply have to stop Docker for the Mover to kick in.
    1 point
  9. It is probably happening because of the RAM buffer for writes. As soon as the last of the current file is written to that buffer, the client will start the transfer of the next file. This starts the file-allocation process, which requires disk access (both read and write operations) on the other disk, which (of course) is available because there are no pending operations for it. PS --- I see this all the time when I use ImgBurn to write a Blu-ray .iso to the server. The data transfer to the server will stop about thirty seconds before ImgBurn receives back the message that the file write to the physical device is completed.
    1 point
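The RAM write buffer described above is visible on the server side. A minimal sketch (Linux-only) that reads the Dirty/Writeback counters in /proc/meminfo, which show how much received data is still waiting to be flushed to disk:

```shell
#!/bin/bash
# Print the kernel's pending-write counters. Right after a large transfer
# "finishes" on the client, Dirty/Writeback stay non-trivial until the
# server has actually committed the buffered data to disk.
show_write_buffer() {
  grep -E '^(Dirty|Writeback):' /proc/meminfo
}
```

Running it repeatedly (e.g. with `watch`) during a transfer shows the buffer filling and then draining after the client reports completion.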
  10. Like @trurl mentioned, this is the result of having shares set to the "most free" allocation method; the parity writes will overlap, making performance considerably slower. Note also that since v6.8.x, "turbo write" is disabled once activity on multiple array disks is detected, making performance even worse, so you'll see the same behavior with v6.8. I never recommend using "most free" as an allocation method, precisely because of this performance issue; it can be a little better if used with a split level that avoids the constant disk switching for every new file.
    1 point
  11. Not clear why you think it shouldn't. In fact, you have a share with allocation set to Most Free, and many of your drives are very full. It shouldn't be surprising that it constantly switches between disks when it is moving that share.
    1 point
  12. Thanks for the update @SpencerJ, I'm reviewing it and will add comments.
    1 point
  13. There's another PR from @Kuton here: https://github.com/unraid/lang-es_ES/pull/6 If anyone would like to review it, please let me know if you need any translation or wording changes.
    1 point
  14. 1 point
  15. Thanks for your reply! It will be a great help!
    1 point
  16. Hello :)
What happens if the parity drive fails? I replace it and it gets rebuilt? -> Yes, and if it's the parity drive, you still have access to your data, because it is stored "as-is" on the data disks, unlike RAID 5 for example, which stripes files across disks.
If I put 2 SSDs in RAID 1 in the cache, I'm forced to use BTRFS, right? -> I believe so, yes.
I've read that it's necessary to pre-clear the drives, is that still the case? -> Not necessary on already-used drives, in my experience.
Is the SSD cache covered by parity? -> Nope, which is why you really want to run the mover at night, since the cache often has only one drive and so isn't protected at all.
Some people recommend putting Plex metadata on an SSD separate from the cache, is that essential? -> A full cache would cause problems for your Plex database. Personally I have one cache drive, and one drive handled by the Unassigned Devices plugin for my VMs / Docker.
    1 point
  17. Yep; should've checked the template before blindly adding the new values!
    1 point
  18. OK, now I'm starting to look at the GitHub file.
    1 point
  19. I might not be a lot of help myself, but I am pretty sure that those who could help you will need your diagnostics file ;). Please attach it to your next post.
    1 point
  20. Oh my god, you're right! The slot the card is in was previously occupied by a GPU that I was passing through to a VM. So many hours of troubleshooting wasted, I should've just posted here first. Thank you so much!
    1 point
  21. EDIT: Ok it just works now I guess. I did nothing, just waited a bit. As of this morning I can't see my Ark server in the server browser. The log file doesn't seem abnormal. I've tried restarting the container. Last night my friend was logged in and playing just fine. I had Creature Finder Deluxe as a mod on there which was working. I removed that from the config lines to eliminate variables but I still can't find it in game or RCON to it. Connecting anonymously to Steam Public...Logged in OK Waiting for user info...OK Success! App '376030' already up to date. CWorkThreadPool::~CWorkThreadPool: work processing queue not empty: 1 items discarded. ---Prepare Server--- ---Server ready--- ---Start Server--- [S_API FAIL] SteamAPI_Init() failed; SteamAPI_IsSteamRunning() failed. Setting breakpad minidump AppID = 346110
    1 point
  22. I figured this out, it was Corsair Icue software causing the "VIDEO_DXGKRNL_FATAL_ERROR" BSOD. Once I uninstalled it the GPU driver install went fine.
    1 point
  23. It passed the short test; you should run a long one, but if SeaTools passed, it should also pass. It did fail a long test before, though, so there were issues at some point.
    1 point
  24. No, you set the key to be the key, and the integer you want as the value for that key; e.g. the screenshot below sets backups to be created every 6 hours:-
    1 point
  25. This does not look too good: while reallocated sectors are not necessarily a problem as long as the number stays constant, anything other than a small number is often a good indication that the drive's health may be suspect. You might want to run an extended SMART test on the drive to see how that goes.
    1 point
  26. 1 point
  27. Just saw this on another thread, don't know if it is relevant or not:
    1 point
  28. Syslog doesn't show the start of the problem, but it looks like one of the cache devices dropped offline. Run a correcting scrub and check that all errors were corrected; more info here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=700582 If the scrub can't fix all errors, or there are more issues, post the diagnostics taken after a reboot.
    1 point
  29. @DZMM If anybody is interested in testing a modified rclone build with a new upload tool, feel free to grab the builds from my repository. You can run the builds side by side with stable rclone, so you don't have to take down rclone for testing purposes! It should go without saying, but only run this if you are comfortable with rclone / DZMM's scripts and how they function. If not, you should stick with DZMM's scripts and the official rclone build! Users of this modified build have reported upload speeds of ~1.4x faster than rclone and ~1.2-1.4x on downloads. I fully saturate my gig line on uploads with lclone, where on stock rclone I typically got around 75-80% saturation. I've also got some example scripts for pulling from git, mounting, and uploading. The config files are already set up, so you just have to edit them for your use case. The scripts aren't elegant, but they get the job done. If anybody likes it, I'll probably script it better to build from source as opposed to just pulling the pre-builds from my GitHub. https://github.com/watchmeexplode5/lclone-crop-aio Feel free to use all or none of the stuff there. You can run just the lclone build with DZMM's scripts if you want (make sure to edit the rclone config to include these new tags):
drive_service_account_file_path = /folder/SAs (no trailing slash for the service account file path)
service_account_file = /folder/SAs/any_sa.json
All build credit goes to l3uddz, who is a heavy contributor to rclone and Cloudbox. You can follow his work on the Cloudbox Discord if you are interested.
-----Lclone (also called rclone_gclone) is a modified rclone build which rotates to a new service account upon quota/API errors. This effectively removes not only the upload limit but also the download limit (even via the mount command, solving Plex/Sonarr deep-dive scan bans), and adds a bunch of optimization features.
-----Crop is a command-line tool for uploading which rotates service accounts once a limit has been hit, so it runs every service account to its limit before rotating. Not only that, but you can have all your upload settings in a single config file (easy for those using lots of team drives). You can also set up the config to sync after upload, so you can upload to one drive and server-side sync to all your other backup drives/servers with ease. For more info and options on crop/rclone_gclone config files, check out l3uddz's repositories: https://github.com/l3uddz?tab=repositories
    1 point
  30. +1 for this. Having switched from Xpenology/Synology to Unraid, the one thing I really miss is a well-made built-in file manager. The current Unraid options are extremely clunky in comparison. Without having previously used the file managers from other NAS/storage solutions like Synology and QNAP, it may not seem that way, or seem like a downgrade, but it certainly is. Considering how underpowered most consumer Synology boxes are CPU/RAM-wise, and they still have a great file manager, while some of our Unraid setups are far more powerful, I don't think resources will be an issue. Having a visually pleasing and modern built-in file manager would be a great addition to the Unraid solution.
    1 point
  31. 2020-06-02 19:23:42,027 DEBG 'plexmediaserver' stderr output: ERROR: Could not open the input file (No such file or directory) Getting this error in Plex now...Plex keeps going down every half an hour or so.
    1 point
  32. @Squid Sadly that didn't work, but var.ini was still the solution!
Start:
#!/bin/bash
CSRF=$(cat /var/local/emhttp/var.ini | grep -oP 'csrf_token="\K[^"]+')
curl -k --data "startState=STOPPED&file=&csrf_token=${CSRF}&cmdStart=Start" http://localhost/update.htm
Stop:
#!/bin/bash
CSRF=$(cat /var/local/emhttp/var.ini | grep -oP 'csrf_token="\K[^"]+')
curl -k --data "startState=STARTED&file=&csrf_token=${CSRF}&cmdStop=Stop" http://localhost/update.htm
    1 point
  33. After many, MANY diff compares, I narrowed down the offending XML, which led me to @SpaceInvaderOne's explanation HERE. Essentially I was using both a boot device selector earlier in my XML: <boot dev='hd'/> and boot ordering in the disk attachment: <boot order='1'/>. Only one is usable at a time. I chose the latter; it's easier to understand when I inevitably change something in the future.
    1 point
  34. It is not a hard drive. The 500 Hz/1 kHz alternating tone isn't typical of a hard drive and certainly shouldn't be that loud. In addition, removing the jumper for the speaker while it was beeping confirms that the system is causing the beeping. I've not heard the beeps since I removed the jumper about a month ago; it's never been quiet this long. The motherboard is the source of the beeping.
    1 point
  35. I noticed something in the manual for this board the other day. It specifically says NOT to mix 3 pin and 4 pin fans on the motherboard headers. I doubt that's the cause of your beeping issue, but I figured it can't hurt to mention it anyway.
    1 point