Lobsi

Members
  • Content Count: 24
  • Joined
  • Last visited

Everything posted by Lobsi

  1. Hey LNXD, I tried it already; unfortunately it did not run, I get the same error:

     Project: PhoenixMiner 5.5c
     Author: lnxd
     Base: Ubuntu 14.04.02
     Target: Unraid 6.9.0 - 6.9.1
     Wallet: 3K4kD8QijmqAJHEqijhWe5Nnbu4CvdURJB
     Pool: asia1.ethermine.org:4444
     Starting PhoenixMiner 5.5c with the following arguments: -pool asia1.ethermine.org:4444 -wal 3K4kD8QijmqAJHEqijhWe5Nnbu4CvdURJB.x -tt 75 -tstop 85 -tstart 80 -cdm 1 -cdmport 5450 -amd
     Phoenix Miner 5.5c Linux/gcc - Release build
     --------------------------------------------
     No OpenCL platforms
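
     In case it helps with debugging, this is roughly how I would check whether the host exposes the GPU at all (a sketch; clinfo is assumed to be available, e.g. inside the container):

     # does a render node exist for the card?
     ls -l /dev/dri                              # expect card0 / renderD128 if the driver is bound
     # does any OpenCL platform show up?
     clinfo -l 2>/dev/null || echo "no OpenCL platforms found"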
  2. Ah, I see, I misunderstood, no problem... I "modprobed" my GPU and then force-updated the container; it seems the version is, as you mentioned, Debian Bullseye. Here is the output from the log:

     Project: PhoenixMiner 5.5c
     Author: lnxd
     Base: Debian Bullseye
     Target: Unraid 6.9.0 - 6.9.1
     Wallet: 3K4kD8QijmqAJHEqijhWe5Nnbu4CvdURJB
     Pool: asia1.ethermine.org:4444
     Starting PhoenixMiner 5.5c with the following arguments: -pool asia1.ethermine.org:4444 -wal 3K4kD8QijmqAJHEqijhWe5Nnbu4CvdURJB.x -tt 75 -tstop 85 -tstart 80 -cdm 1 -cdmpo
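
     For reference, the driver step I performed looks like this (a sketch; the note about kernel parameters is an assumption on my part, not something I have verified on this card):

     # load the in-kernel AMD driver before starting the container
     modprobe amdgpu
     ls /dev/dri                    # card0 / renderD128 should appear if it bound
     # assumption: the R9 270X is a GCN 1.0 (Southern Islands) part, so amdgpu
     # may only bind to it with amdgpu.si_support=1 radeon.si_support=0 set on
     # the kernel command line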
  3. Hi LNXD, Thank you for your engagement, I appreciate it very much! When I check the logs, I get the following: Base: Ubuntu 20.04. As I understand it, the container is then not using 14.04 as its base? There must be something wrong with it. Maybe this helps to figure out what is going on?
  4. Hi LNXD, I can confirm I completed steps 1-7 (except putting in the correct pool address; I haven't registered with any pool until the card is running, but that should be no reason not to do a test first...). I also put my card to sleep as you mentioned, and updated the repository/Docker. Unfortunately I get the same error; the card doesn't get recognized.

     Project: PhoenixMiner 5.5c
     Author: lnxd
     Base: Ubuntu 20.04
     Target: Unraid 6.9.0 - 6.9.1
     Wallet: 3K4kD8QijmqAJHEqijhWe5Nnbu4CvdURJB
     Pool: asia1.ethermine.org:4444
     Starting PhoenixM
  5. Here are more details (lspci -v):

     02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Curacao XT / Trinidad XT [Radeon R7 370 / R9 270X/370X] (prog-if 00 [VGA controller])
     Subsystem: ASUSTeK Computer Inc. Curacao XT / Trinidad XT [Radeon R7 370 / R9 270X/370X]
     Flags: fast devsel, IRQ 10, IOMMU group 19
     Memory at d0000000 (64-bit, prefetchable) [disabled] [size=256M]
     Memory at f7a00000 (64-bit, non-prefetchable) [disabled] [size=256K]
     I/O ports at d000 [disabled] [size=256]
     Expansion ROM at f7a40000 [d
  6. Hi LNXD, Currently I am using version 6.9.0-rc2, the current next-branch build.
  7. Dear LNXD, I would like to ask whether the AMD R9 270X is supported. My graphics card is the following:

     IOMMU group 19:
     [1002:6810] 02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Curacao XT / Trinidad XT [Radeon R7 370 / R9 270X/370X]
     [1002:aab0] 02:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Oland/Hainan/Cape Verde/Pitcairn HDMI Audio [Radeon HD 7000 Series]

     I get the following error:
     --------------------------------------------------------------------------------------------------------------------
  8. Hello friends of Unraid, I would like to know where I am going wrong... I just managed to set up my personal OneDrive, and I was able to see the files that are already on my OneDrive by using "rclone ls Onedrive:"... Now I am trying to mount the OneDrive, but I cannot manage it. Is it possible to mount the drive using the Unassigned Devices plugin somehow? It would be nice to make the OneDrive available in the /mnt directory. What I have seen is the directory /mnt/disks/rclone_volume, but that directory is empty (viewed with WinSCP)... I would like to expose the OneDrive as sh
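
     In case it helps: from what I have read, the mounting is done with rclone itself rather than Unassigned Devices, roughly like this (a sketch; the remote name "Onedrive" and the mount point are the ones mentioned above):

     # create the mount point and mount the remote there
     mkdir -p /mnt/disks/rclone_volume
     # --allow-other makes the mount visible to other users/shares,
     # --daemon keeps it mounted in the background
     rclone mount Onedrive: /mnt/disks/rclone_volume --allow-other --daemon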
  9. Hello all, I would like to ask about support for files larger than 20 GB. A co-worker and I were trying to upload a large (40 GB) ISO image, but the transfer stopped at 20 GB (WinSCP reports a file size of 19908 MB). Is it possible to upload the whole file without splitting it? Thank you for a quick response, have a nice day and stay healthy! Simon
  10. Okay, after a few retries I am throwing in the towel. I have not got it to work so far, so I have to think about moving away from Unraid and getting two FreeNAS boxes (using ESXi, with the compromise of saving less power due to the lack of spin-down etc.) and using ZFS instead. The most annoying thing is giving up the Unraid file handling; I have got used to Unraid. As a last hurrah: is there any snapshot plugin, or Unraid base OS support for BTRFS snapshots (to another Unraid host), planned for the future?
  11. I would love to be able to manage BTRFS snapshots in a GUI (set up snapshots) and to set up BTRFS snapshotting (send/receive) to another Unraid server; this would be a really nice and powerful feature that I would never want to miss! I hope this will be implemented soon, so there is another +1 from me.
  12. Okay, so I am on the wrong track... sorry, I am a bit confused because I do not know much about btrfs sync. Do you have an idea how I could finally snapshot the four 4 TB disks as you mentioned before? From my point of view it should be possible to create some (virtual?) subvolumes on the offsite server to put the snapshots from the main server into, but right now I am unable to get this working.
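
      For what it's worth, what I picture on the offsite server is something like this (a sketch, assuming the offsite disks are formatted as btrfs; the disk numbers are placeholders). As far as I understand, btrfs receive creates the received subvolume itself, so plain target directories should be enough:

      # on the offsite server: one target directory per source disk,
      # spread over the (fewer) local disks
      mkdir -p /mnt/disk1/snaps/disk1 /mnt/disk1/snaps/disk2
      mkdir -p /mnt/disk2/snaps/disk3 /mnt/disk2/snaps/disk4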
  13. My idea is to trigger the destination Unraid server's user-share mechanism by using the /mnt/users directory. For now I made the following change to your script:

      btrfs send -p /mnt/disk$i/snaps/$sd /mnt/disk$i/snaps/"$sh"_$nd | pv -prtabe -s "$t"M | ssh root@$ip "btrfs receive /mnt/users/snaps/disk$i"

      Unfortunately I am still not able to get the initial snapshot from the source server over to the destination server, so that I can use your script once the initial snapshot is in place.
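
      If I understand it correctly, the missing bootstrap step would look roughly like this (a sketch, not from the original script; the subvolume path "data" and the snapshot name "seed" are placeholders, $i and $ip as in the script above):

      # one-time full (non-incremental) send, so the destination gets a
      # parent snapshot that the later incremental (-p) sends can refer to
      btrfs subvolume snapshot -r /mnt/disk$i/data /mnt/disk$i/snaps/seed
      btrfs send /mnt/disk$i/snaps/seed | ssh root@$ip "btrfs receive /mnt/disk1/snaps/disk$i"

      One caveat I am not sure about: btrfs receive presumably has to target a real btrfs mount like /mnt/diskN on the destination, not the fuse user-share path, which may be why my change above fails.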
  14. Since my 20 TB Unraid array is currently only filled with 8 TB of data, I could live with this circumstance. It would be nice if you could give me an idea of how I could do that. It would also be nice to know how I could scale this if I add more drives to the offsite server (replacing the small case with a bigger one at some point).
  15. Oh no, I had so hoped this would not be needed... My small server only has 3 HDD slots, so I am unable to provide the same number of disks as in my main server. The configuration of the main server is 1x 8 TB for parity and 5x 4 TB for all kinds of data. Is there no way to put the snapshots into directories labelled with the disks' names?
  16. Thank you very much for your explanation, I appreciate it very much. Can you give me an example of how I can get the first snapshot from my source server onto my destination server? This is the situation I would finally like to reach: I have two Unraid servers, a bigger one that runs various Dockers and VMs, and a smaller one that should only store the snapshots from the bigger one in case I catch ransomware or a crypto trojan... On the small server I would like to have all snapshots stored while preserving the Unraid mechanics (there are 3x 8 TB drives in it, one for parity), if this is
  17. This seems to be my main issue: I do not have the right idea of how to get the parent snapshot from one server over to the backup server. The output for your question:

      root@Zwerg:~# ls -l /mnt/disk1
      total 0
      drwxrwxrwx 1 nobody users 8 Nov 11 11:08 snaps/
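
      A check I could run to see whether snaps/ actually contains read-only snapshots would be something like this (a sketch; the snapshot name is a placeholder):

      btrfs subvolume list -s /mnt/disk1           # -s lists snapshots only
      btrfs property get /mnt/disk1/snaps/NAME ro  # should report ro=true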
  18. Dear johnnie.black, sorry, I have been testing around with the script you provided, but unfortunately it does not seem to work for me. Every time I try to "send the differences from previous one" I get connection errors and finally the message "send/receive failed" for all disks. I am trying to figure out how I can get it working; since I am not very familiar with SSH, it is a little challenge for me. Could you please explain a way to get it working for my private purpose? The goal is to send btrfs snapshots to my other Unraid server, located at a different location in my ho
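
      Since the script pipes btrfs send through ssh non-interactively, I suspect my connection errors simply come from missing key-based authentication; the usual setup would be something like this (a sketch; $ip stands for the backup server, as in the script):

      ssh-keygen -t ed25519            # generate a key pair on the source server
      ssh-copy-id root@$ip             # install the public key on the backup server
      ssh root@$ip "btrfs --version"   # test: should run without a password prompt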
  19. Hi, Could you please fix the icon for TS3? I have attached a new icon; I made it transparent (with GIMP, free software similar to Photoshop) and put it back here. Thank you for this great plugin! Regards, Lobsi
  20. @CHBMB, I am auto-mounting this SSD with Unassigned Devices; as long as I do not change my setup, it stays on /dev/sdh. This has also survived reboots so far... it works for me, for my purposes, but it could be done much more easily and out of the box. Regards, Lobsi
  21. This is such a good idea! Currently all my VMs are on a dedicated SSD outside of any array... Theoretically, for me it would not matter whether the Unraid array is started or not; the VMDatastore path (/dev/sdh/VMDatastore) would still be accessible if the array were unavailable. I would appreciate it if this suggestion were given a real chance!
  22. Currently I am using the VM WOL plugin and a tiny WOL app on my Windows 10 phone. I would also appreciate it if the local system's power button could be repurposed for managing a preferred VM's power state.
  23. Hi, Thank you very much! This enabled me to pass my LSI MegaRAID 2960 PCIe controller through to one of my VMs, and it is finally what made me decide to move my private virtualisation/storage/gaming all-in-one rig from vSphere 6 to Unraid. Cheers, mate!