
DaPedda

Members
  • Posts

    26
  • Joined

  • Last visited

Posts posted by DaPedda

  1. 17 hours ago, Squid said:

    Taking a guess, it looks like sdm dropped offline and became sdp. However, neither is showing up in the SMART reports, so it appears that it dropped for good. Reseating the cabling is your first step.

    Thank you for your help. I will move all Toshibas to a new case with new HBA cards and new cables this weekend. Then we will see. I'll report back and then close the post if all is well.
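    In case it helps anyone reading along, this is roughly how I plan to verify whether the controller still sees the disks before and after re-cabling (only a sketch; /dev/sdX stands for whichever letter the disk currently has):

    # list the disks the kernel currently sees, with model and serial
    lsblk -o NAME,SIZE,MODEL,SERIAL

    # pull a SMART report from the suspect disk (replace sdX with the real device)
    smartctl -a /dev/sdX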

  2. Hello Community,
    for some days now I have had this problem: two hard disks give me a Disk I/O Error 2-8 hours after a server reboot. Both disks are connected to an HBA 9211-8i running unRaid 6.9.2.

    Interestingly, my array (connected to the internal SATA ports) shows exactly the same read performance as the two failed disks. I created a new disk configuration yesterday, but the error has been back since this morning.
    Can anyone give me a tip on how to narrow down the error further?
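    What I plan to try in the meantime (just a sketch; device names are placeholders): watch the kernel log for controller resets from the 9211-8i while the disks are under load, and compare SMART health between an affected disk and an array disk.

    # follow the system log and watch for HBA driver resets and I/O errors
    # (the 9211-8i shows up as mpt2sas or mpt3sas depending on the kernel)
    tail -f /var/log/syslog | grep -iE 'mpt[23]sas|i/o error'

    # quick SMART health check on one of the affected disks (sdX is a placeholder)
    smartctl -H /dev/sdX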

     

    Thank you very much

    Peter

     


  3. I was able to solve the problem as follows:
    - Stop the array

    - Start a new config, keeping the pools I already had

    - Start the array in maintenance mode

    - Reboot unRaid

    - Start the array in normal mode

    So far I have not seen the error again.

  4. Hello everyone,

    for a few days now, two of my disks have been showing the following icons:
    [screenshot of the disk warning icons]

    The disks are passed through to two Docker containers as Unassigned Devices with the "Slave" switch. I set this switch after both disks dropped out for the first time.

     

    The disk log says:
     

    Quote

     

    Jan 31 14:53:21 ATBNAS20 unassigned.devices: Error: Device '/dev/sdy1' mount point 'plots17' - name is reserved, used in the array or by an unassigned device.

     

    The system log says:

     

    [screenshot of the system log]

     

    One of the devices is NTFS and the other is BTRFS.

    It would be really annoying if my second HBA card were failing now.

    Strangely, there is still activity on both HDDs. I would appreciate any help.
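    For completeness, this is how I am checking what currently occupies the mount point name (a sketch; 'plots17' is the name from the log above):

    # is anything already mounted under that name?
    mount | grep -i plots17

    # does an array share or another unassigned device already use the name?
    ls /mnt/user /mnt/disks 2>/dev/null | grep -i plots17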

     

    Best regards,

     

    Peter

  5. Hi folks, I still have two problems:
    1. I see my worker (harvester), but it doesn't give me any stats.
    2. Under "Connections" I cannot remove a host that is no longer reporting.

     

    My worker is on another unRaid server. I generated the template via the full node and deployed it on the target server via docker-compose. I can see it in the full node. Under Alerts it says: "Your harvester appears to be offline! No events for the past 81187 seconds." I think it's because of the port mappings. Before the update, the port mapping on the target server said, e.g.: Network = default and mapping 8926:8926.
    Today it says machinaris_default and mapping = 172.18.0.2:8926/TCP <> 192.168.158.103:8926.

    The same applies to port 8927.
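    To rule out a pure networking problem, I am testing whether the full node can still reach the worker's ports on the LAN address instead of only inside the machinaris_default bridge (a sketch; IP and ports are the ones from the mapping above):

    # from the full node's console: does the worker answer on its LAN address?
    curl -s -o /dev/null -w '%{http_code}\n' http://192.168.158.103:8926/
    curl -s -o /dev/null -w '%{http_code}\n' http://192.168.158.103:8927/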

     

    At https://pool.space/ I see both harvesters, so the farming seems to be working for now.

     

    Greetings, Peter

  6. I just saw that all settings in plotman.yaml are gone. A backup from this morning is ignored. I am not amused 😞

    Next surprise: no matter what I enter under Settings > Plotting, it doesn't stick. It is reset to "factory settings" after I revisit the Settings tab.

    P.S.: manually copying plotman.yaml works, and the plotter does its job. But when I revisit the Settings page for Plotting, all settings are gone again. Wrong path?
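    To check whether the WebUI and the plotter even look at the same file, I am comparing the host copy with the one inside the container (a sketch; the container name 'machinaris' and the internal path /root/.chia/plotman/plotman.yaml are assumptions based on my template, adjust to your own mapping):

    # the file as seen from the unRaid host (appdata mapping)
    ls -l /mnt/user/appdata/machinaris/plotman/plotman.yaml

    # the same file as seen from inside the container (path is an assumption)
    docker exec machinaris ls -l /root/.chia/plotman/plotman.yaml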

  7. Hello, I'm missing something in the Machinaris wiki. At https://github.com/Chia-Network/chia-blockchain/wiki/Farming-on-many-machines, in the section Harvester + Plotter, it says: "Don't forget to copy your fullnode's ca certificates over as outlined above in the harvester section."
    Unfortunately, there is no description of where I should copy the full node's CA to, or how to integrate it on the new worker.
    Can you please help me here?
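    For reference, the linked wiki describes the procedure for a bare-metal harvester roughly like this; how these paths map into a Machinaris container depends on the volume mounts, so treat them as placeholders:

    # on the full node: copy the CA directory somewhere the new worker can reach
    cp -r ~/.chia/mainnet/config/ssl/ca /some/shared/path/fullnode_ca

    # on the new harvester/worker: re-init against the full node's CA
    chia init -c /some/shared/path/fullnode_ca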

  8. Hey guys, I would like to forward a message I sent on the SpacePool Discord:
    "Hey guys, I tried to join your SpacePool today. I use the Machinaris Docker container for this. If I click the suggested link on the right to copy it, I get the following result: https://eu1.pool.space/ I entered this link in Machinaris and the log tells me: "Exception from 'wallet' Error connecting to pool https://eu1.pool.space/: Response from https://eu1.pool.space/ not OK: 404". It killed my 100 mojos. When I tried to use https://eu1.pool.space/, Machinaris correctly said: "Will create a plot NFT and join pool: https://eu1.pool.space. Error creating plot NFT: {'error': 'Not enough balance in main wallet to create a managed plotting pool.', 'success': False}". Can you please help me? Kind regards, Peter"

    Some guys said that joining the pool takes only 1 mojo, but all my 100 mojos are gone. Where is the mistake?
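    For comparison, this is roughly what the same steps look like with the plain chia CLI (a sketch; <pool_url> and <wallet_id> are placeholders, see 'chia plotnft show' for your own ids):

    # check the wallet balance (also shown in mojos)
    chia wallet show

    # join an existing plot NFT to the pool
    chia plotnft join -i <wallet_id> -u <pool_url>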

     


     

  9. 18 minutes ago, guy.davis said:

     

    Good day!  That does sound like perhaps something is misconfigured.  When I go to Advanced View and expand all settings for my install of Machinaris, I see: 

    [screenshot of guy.davis' Machinaris container settings in Advanced View]

     

    Makes me think perhaps you are plotting within the container space itself? You definitely want to double-check your volume mounts. If you choose, the *.db and *.log files can be purged during a container stop/start. Hope this helps!

     

    Thank you for your answer. Below are my settings.

     

    I checked the size of /mnt/user/appdata/machinaris/plotman/logs:

     

    root@ATBNAS20:/mnt/disks# du -shc /mnt/user/appdata/machinaris/plotman/logs
    42M     /mnt/user/appdata/machinaris/plotman/logs

     

    I also checked the size of /mnt/user/appdata/machinaris/mainnet/db:

     

    root@ATBNAS20:/mnt/user/appdata/machinaris/mainnet/db# du -shc /mnt/user/appdata/machinaris/mainnet/db
    6.2G    /mnt/user/appdata/machinaris/mainnet/db
    6.2G    total

     

    Today I restarted the container several times. In your opinion, are the sizes okay?
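    Since the volume mounts were mentioned: this is how I am dumping what the container actually has mounted (assuming the container is simply named 'machinaris'):

    # list host path -> container path for every mount of the container
    docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' machinaris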

     

    [screenshot of my Machinaris container settings]

  10. Good evening everyone,

    I have come over to you from OMV and am quite happy so far. However, I am missing one feature: when I plug in an additional disk, it would be great if the fill level of all disks were rebalanced automatically. Does something like that exist?

    I won't answer questions about why I want this, because under Snapraid it works flawlessly 🙂
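    To illustrate what I mean (only a sketch; disk and share names are made up): today I would have to redistribute data by hand between the individual disk mounts, e.g. with rsync, and I am asking whether unRaid can do this automatically when a new disk is added.

    # manual example: move part of a share from a full disk to the newly added one
    rsync -av --remove-source-files /mnt/disk1/myshare/subdir/ /mnt/disk5/myshare/subdir/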

     

    Best regards and have a nice evening,

     

    Peter
