chocorem

Members

  • Content Count: 18
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About chocorem

  • Rank: Member
  1. I'm having trouble with the Extract Archive plugin. What is this error -11? This is what I'm getting:
     855 05.12.2020 19:37:51 INFO  ADDON ExtractArchive: Check package: cccccc.magnet
     856 05.12.2020 19:37:51 INFO  ADDON ExtractArchive: [ OxTorrent.cc ] cccccc.rar | Extract to: /downloads_wo/
     857 05.12.2020 19:37:51 ERROR ADDON ExtractArchive: [ OxTorrent.cc ] cccccc.rar | Archive error | Process return code: -11
     858 05.12.2020 19:37:51 ERROR ADDON ExtractArchive: [ OxTorrent.cc ] cccccc.rar | Extract failed
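     A side note on the error itself: a process return code of -11 normally means the extraction process was killed by signal 11 (SIGSEGV), i.e. the unpacker crashed rather than reporting a genuine archive problem. A minimal sketch for reproducing this outside the plugin, assuming unrar is the extractor the plugin invokes and reusing the paths from the log:

         # Hypothetical invocation; unrar as the extractor is an assumption
         unrar x /downloads_wo/cccccc.rar /downloads_wo/
         echo $?    # a segfault shows up as 139 in the shell (128 + signal 11)

     Plugin frameworks that launch the extractor through Python's subprocess report the same crash as a negative return code, -11, which matches the log above.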
  2. After updating from beta20 to beta35, I'm getting random crashes (3 to 4 per day). Could anybody help me find out the reason? loki-diagnostics-20201117-1831.zip
  3. Thanks for the info... so this process is not very stable yet...
  4. No, here I'm talking about a BTRFS RAID on a cache pool. I still have access to the pool, but the 3 drives are shown as green... so I have no way of knowing that a drive is missing. In this case I know because I removed it on purpose, but if a drive is failing or missing, I would expect Unraid to show a red light next to the drive.
  5. Sorry, you got me wrong: I removed the drive intentionally to test the notification/rebuild process. My problem is that I still see the pool in green on the 3 drives in Unraid; I just got a warning notification that a disk is missing. I wanted to know if this is normal? I find it a little weak from a notification perspective: if you miss the notification, you don't see any problem on the Main page or Dashboard.
  6. I'm currently on 6.9.0-beta30 and running some tests with a cache pool linked to a share set up as "cache only". I wanted the features of Unraid with some more speed! The cache pool is set up as RAID 5; I tried removing one disk to see what would happen. Nothing: I have 3 drives in RAID 5, and even with one drive removed the 3 LEDs are still green. Some minutes later I simply got a notification warning that one disk is missing, but the LEDs in the WebUI stay green. Is there another way to be informed that something is wrong with the pool?
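     For what it's worth, the pool state can also be checked from the command line even while the WebUI LEDs stay green; a sketch, assuming the pool is mounted at /mnt/cache:

         # Lists the devices the filesystem expects; a degraded pool
         # prints "*** Some devices missing" at the end
         btrfs filesystem show /mnt/cache

         # Per-device error counters (read/write/flush/corruption/generation)
         btrfs device stats /mnt/cache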
  7. I have a question. My Unraid server is fine for everything media-streaming related, where I don't need a lot of speed. I wanted to build another server to also have fast access, using RAID functionality to get 10Gb read/write speeds for my video editing. With 6.9 I have been testing a separate cache pool of 12 x 1TB 15k SAS drives linked to an Unraid share set to cache only. I get full speed on this share, but wanted to know if there are any cons to using the system this way? I haven't tested drive failure and BTRFS array reconstruction
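     If you do get around to testing reconstruction, the btrfs-level procedure is roughly the following sketch; the devid 2 and /dev/sdX are placeholders for the missing device's id and the replacement drive:

         # Replace the missing device (referenced by devid) with a new drive
         btrfs replace start 2 /dev/sdX /mnt/cache
         btrfs replace status /mnt/cache

         # Verify checksums across the pool afterwards
         btrfs scrub start /mnt/cache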
  8. I just received my DS4243 last week. The QSFP+ to SFF-8088 cables are so expensive (at least in France) that it was much cheaper to replace the controller with a DELL 0952913-07 (which is 6Gbps, by the way), so I'm ending up with a DS4246... and can use standard SFF-8088 cables. My question is more about redundancy: does it make sense to use 2 controllers? If I connect both controllers to the HBA card, do I increase the speed and use 2 channels, or is it only for redundancy? Even for redundancy, I do not understand how the switch to the second one is made. If I conne
  9. Hi, did you find a solution for this topic? I just got a DS4243 and have the same error as you... losing the parity drive during a parity check. Before, the drives were connected directly to the HBA card with an SFF-8087 to SATA adapter; now that I've put the drives in the shelf, I lost the parity drive, then randomly 2 other drives. As a remark, the HBA is cooled by a fan blowing directly on it.
  10. I'm using the edit to put the data in... and when putting in the other UUID from the 1050, I get no errors? How do I remove this \\n\ ?
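     One way to check whether the pasted value really carries a hidden newline (which typically shows up escaped as \n in the config) is to dump it byte by byte; a sketch, assuming the value in question sits in the container's NVIDIA_VISIBLE_DEVICES field:

         # A trailing \n in the od output means the pasted UUID is dirty
         printf '%s' "$NVIDIA_VISIBLE_DEVICES" | od -c

         # Strip embedded newlines before pasting the value back
         printf '%s' "$NVIDIA_VISIBLE_DEVICES" | tr -d '\n'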
  11. I double-checked in the config field; no, there are no spaces, neither at the start nor at the end.
  12. No, the single files are under 20GB, so everything works for the transfer process and the system switches to the array, but the cache is full and Plex crashes because the cache is set to "Only".
  13. I have a weird behaviour: I have a Quadro P400 and a 1050 Ti in the system.
      Nvidia Driver Version: 440.59
      GPU 0 Model & Bus: Quadro P400 23:00.0
      GPU 0 UUID: GPU-de8ab77e-8fff-db12-f93d-ebe991944a85
      GPU 1 Model & Bus: GeForce GTX 1050 Ti 2D:00.0
      GPU 1 UUID: GPU-e60379bf-191f-14ec-3841-d4dfd8e82ab8
      Everything works fine when passing the 1050 Ti through to the Plex container, but when trying to pass the Quadro, I get this:
      /usr/bin/docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process
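     For reference, per-GPU selection with the NVIDIA container runtime is done by UUID; a sketch assuming the nvidia runtime is installed (the container name and image are placeholders, the UUID is the Quadro's from the listing above):

         # List the GPUs and UUIDs as the driver sees them
         nvidia-smi -L

         # Hand only the Quadro P400 to the container
         docker run -d --name plex-test \
           --runtime=nvidia \
           -e NVIDIA_VISIBLE_DEVICES=GPU-de8ab77e-8fff-db12-f93d-ebe991944a85 \
           -e NVIDIA_DRIVER_CAPABILITIES=all \
           linuxserver/plex

     If the 1050 Ti works with the same command and only the Quadro fails, that would point at the device itself (IOMMU grouping or driver binding) rather than the container setup.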
  14. Hello, I'm trying to figure out if Unraid is the right system for me... I need a combination of speed and protection. This is what I need:
      - Some Docker containers and VMs that need speed -> running on cache (only)
      - Plex library -> running on cache (only)
      - Downloads and data transfer -> share running on cache ("Yes", with the mover set to 1 hour)
      My questions are the following: how can I ensure that there is always sufficient space on the cache for the Docker containers and VMs to run? This is my nightmare scenario:
      - All
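     A simple guard against that nightmare scenario is a periodic free-space check on the cache; a minimal sketch (the 50 GB threshold and the /mnt/cache mount point are assumptions) that could run from cron or the User Scripts plugin:

         #!/bin/bash
         # Warn when the cache pool drops below a minimum free-space threshold
         MIN_FREE_KB=$((50 * 1024 * 1024))   # 50 GB, adjust to taste
         FREE_KB=$(df --output=avail /mnt/cache | tail -1)
         if [ "$FREE_KB" -lt "$MIN_FREE_KB" ]; then
             echo "Cache below threshold: ${FREE_KB} KB free"  # hook a notification in here
         fi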