Posts posted by L0rdRaiden

  1. 5 hours ago, psm321 said:

    You have too many ptys open.  Apparently each qemu (VM) uses one up to provide a serial console to the VM.  You likely have some combo of 8 total VMs+web terminals+preclears running.  Other than closing some of them, the other option would be to allow more ptys to be used for a root login by appending lines like

    
    pts/8
    pts/9
    pts/10
    pts/11

    etc. into /etc/securetty.

     

    Removing /etc/securetty entirely also accomplishes this, but I don't claim to understand the security implications (if any) of doing this.

     

     

    How do I close some of them without restarting unraid?
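    For reference, psm321's appending suggestion above amounts to something like the following; pts/8 through pts/15 is an arbitrary range, so widen it to taste:

    for i in $(seq 8 15); do echo "pts/$i" >> /etc/securetty; done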

  2. On 1/3/2017 at 1:19 AM, limetech said:

     

    Wow thanks for the detailed analysis!  Probably there's a simple setting somewhere to speed up win10.

     

    For unRaid-5 you can type 'smbd -v' to get the samba version number and then try to find the smb.conf man page for that version; could be they didn't support that setting then...

     

    I'm having a similar issue here

     

    What was the solution in the end? Was it fixed in Unraid?
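    For anyone landing here, the version check from the quote looks like this; note that on current Samba builds the version flag is a capital V, and testparm is the standard tool for dumping the effective configuration:

    smbd -V       # prints the Samba version, e.g. "Version 4.x.y"
    testparm -s   # prints the effective smb.conf so a setting can be verified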

  3. 6 minutes ago, bonienl said:

    This tells how the disk is operating and can't be changed. To the OS the sector size is reported as 512B in both cases.

     

    But isn't the setting of the HD simply wrong? It's making everything use more space. Can't it be changed by backing up and reformatting the disk? Is the setting to fix it available in the Unraid interface?

  4. 42 minutes ago, nuhll said:

    Yeah, the name after exec is the name you gave the Docker container; in my case Storj and Storj1.

     

    As for your ports, I don't know; I guess you need to edit the template. But if I'm not mistaken, with br0 the container gets its own IP address anyway, so no port forwarding (inside Unraid) is needed...?!

    With br0 you have to forward ports to the IP you assign to the container.

  5. 4 hours ago, John_M said:

    I don't really understand what you are asking. When you say "the 4k format" are you talking about the change from 512-byte sectors to 4096-byte sectors, known as Advanced Format, or are you talking about the size of an empty directory being 4096 bytes, or are you talking about something completely different?

     

     

    The sector size, I guess. It says "4K aligned"; is that the sector size?

  6. I have folders on the SSD (cache) with a size of 500 MB but a size on disk of 1.4 GB. In Windows they appear fine.

     

    I guess this is because of the 4K format? Isn't a smaller one better? Why is this the default?

    If I want to convert the disk, do I have to reformat it, or can I convert it in place?
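    To see where a difference like that comes from, one hedged way to compare logical size with allocated size on the cache (the folder path is a placeholder):

    du -sh /mnt/cache/somefolder                   # size on disk (allocated blocks)
    du -sh --apparent-size /mnt/cache/somefolder   # logical size of the contents
    stat -f -c 'block size: %S' /mnt/cache         # filesystem allocation block size

    Lots of files smaller than the block size inflate the allocated figure well past the logical one.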

  7. On 2/25/2018 at 5:00 AM, landS said:

    ssh into the server...

    who -aH

    find the PID of pts/# shown in the eventlog immediately after the web terminal failure

    kill PID#

     

    .... doesn't work... as who -aH immediately shows the same pts being used again :/

     

    I also tried Shell in a Box... but it throws the same issue.

     

    What appears to be the problem is...

    the root password is not being accepted in Shell in a Box / Web Terminal after x number of uses without a full server restart... but it works fine via SSH [email protected]

     

    So the only way to fix it is a restart? Or is there a cooldown?
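    Spelled out, the who/kill procedure from the quote looks roughly like this; the PID 1234 is a placeholder for whatever who -aH reports for the stuck pts:

    who -aH        # the PID column shows the process holding each pts/#
    kill 1234      # send SIGTERM to the PID attached to the stuck pts
    kill -9 1234   # last resort if it ignores SIGTERM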

  8. For me it works if I have one on host and the other on custom br0; the problem with br0 is that I have issues with port forwarding.

     

    Unraid is reporting port 4000 even though I have 4004 in the JSON file.

     

    192.168.1.214:4000/tcp → 192.168.1.214:4000
    192.168.1.214:4001/tcp → 192.168.1.214:4001
    192.168.1.214:4002/tcp → 192.168.1.214:4002
    192.168.1.214:4003/tcp → 192.168.1.214:4003

     

    and now it works; don't ask me why.

     

    root@MediaCenter:~# docker exec Storj2 storjshare status
    
    ┌─────────────────────────────────────────────┬─────────┬──────────┬──────────┬─────────┬───────────────┬─────────┬──────────┬───────────┬──────────────┐
    │ Node                                        │ Status  │ Uptime   │ Restarts │ Peers   │ Allocs        │ Delta   │ Port     │ Shared    │ Bridges      │
    ├─────────────────────────────────────────────┼─────────┼──────────┼──────────┼─────────┼───────────────┼─────────┼──────────┼───────────┼──────────────┤
    │ 4dd54ed8e3b68fe752618e17055e4eec52114b4c    │ running │ 3m 54s   │ 0        │ 101     │ 0             │ 50ms    │ 4004     │ ...       │ connected    │
    │   → /storj/share                            │         │          │          │         │ 0 received    │         │ (TCP)    │ (...%)    │              │
    └─────────────────────────────────────────────┴─────────┴──────────┴──────────┴─────────┴───────────────┴─────────┴──────────┴───────────┴──────────────┘
    root@MediaCenter:~# docker exec Storj storjshare status
    
    ┌─────────────────────────────────────────────┬─────────┬──────────┬──────────┬─────────┬───────────────┬─────────┬──────────┬───────────┬──────────────┐
    │ Node                                        │ Status  │ Uptime   │ Restarts │ Peers   │ Allocs        │ Delta   │ Port     │ Shared    │ Bridges      │
    ├─────────────────────────────────────────────┼─────────┼──────────┼──────────┼─────────┼───────────────┼─────────┼──────────┼───────────┼──────────────┤
    │ 6ec5490584a014188a8e3ae366d5e17ed7749bf1    │ running │ 3h 16m … │ 0        │ 105     │ 3             │ 0ms     │ 4000     │ 505.73MB  │ connected    │
    │   → /storj/share                            │         │          │          │         │ 0 received    │         │ (TCP)    │ (0%)      │              │
    └─────────────────────────────────────────────┴─────────┴──────────┴──────────┴─────────┴───────────────┴─────────┴──────────┴───────────┴──────────────┘

     

    The only problem is that it needs around 120 MB of RAM, and I have read that when it's actually doing something it can take up to 1 GB.
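    If the web UI's port column ever looks stale again, a hedged way to ask Docker directly what it has mapped (Storj2 as above):

    docker port Storj2                                           # host port mappings
    docker inspect -f '{{json .NetworkSettings.Ports}}' Storj2   # raw port config

    On a custom br0 network the mapping list may well be empty, since the container answers on its own IP instead of through forwarded host ports.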

  9. 3 hours ago, nuhll said:

     

    1.) Add a second share for Storj (I did storj and storj1).

    2.) Download this Docker again, but change the path from /mnt/usr/storj to /mnt/usr/storj1 [YOU DON'T NEED TO CHANGE THE PATH INSIDE THE DOCKER!]

    3.) Start storj1, then stop it.

    4.) Edit the config file that was created in storj1 (payment address, do not traverse, rpc address, max tunnels, storage, and max file size) to your needs; see the sketch after this quote.

    5.) I always add the node to https://storjstat.com/index.asp#! for easy checking.

    6.) Finished.

     

    I'm just not sure by now whether you need to change ports; if so, then also change the ports.
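    As a sketch only, the fields listed in step 4 map roughly onto keys like these in the storjshare config JSON; the key names are from memory of storjshare-daemon-era configs and every value is a placeholder, so verify them against the file the container actually generates:

    {
      "paymentAddress": "0xYourPayoutAddress",
      "rpcAddress": "192.168.1.214",
      "rpcPort": 4004,
      "doNotTraverse": true,
      "maxTunnels": 0,
      "storagePath": "/storj/share",
      "storageAllocation": "2TB",
      "maxShardSize": "100MB"
    }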

     

    I did this, but I get this error. Storj is my original container; Storj2 is the new one. Am I using the command right?

    root@MediaCenter:~# docker exec Storj storjshare status
    
    ┌─────────────────────────────────────────────┬─────────┬──────────┬──────────┬─────────┬───────────────┬─────────┬──────────┬───────────┬──────────────┐
    │ Node                                        │ Status  │ Uptime   │ Restarts │ Peers   │ Allocs        │ Delta   │ Port     │ Shared    │ Bridges      │
    ├─────────────────────────────────────────────┼─────────┼──────────┼──────────┼─────────┼───────────────┼─────────┼──────────┼───────────┼──────────────┤
    │ 6ec5490584a014188a8e3ae366d5e17ed7749bf1    │ running │ 15m 32s  │ 0        │ 107     │ 0             │ 32ms    │ 4000     │ 505.73MB  │ connected    │
    │   → /storj/share                            │         │          │          │         │ 0 received    │         │ (TCP)    │ (0%)      │              │
    └─────────────────────────────────────────────┴─────────┴──────────┴──────────┴─────────┴───────────────┴─────────┴──────────┴───────────┴──────────────┘
    root@MediaCenter:~# docker exec Storj2 storjshare status
    Error response from daemon: Container 8c732a9d378bab85f3b062afb47c63713e0248a2bcd6cbc8752ea5b634e1fdcf is restarting, wait until the container is running
    root@MediaCenter:~# docker exec Storj2 storjshare status
    Error response from daemon: Container 8c732a9d378bab85f3b062afb47c63713e0248a2bcd6cbc8752ea5b634e1fdcf is restarting, wait until the container is running
    root@MediaCenter:~# docker exec Storj2 storjshare status
    Error response from daemon: Container 8c732a9d378bab85f3b062afb47c63713e0248a2bcd6cbc8752ea5b634e1fdcf is restarting, wait until the container is running

     

    I edited the JSON file, but it looks like it's not even reading it; the share and log folders are empty despite the Storj2 container running.
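    When a container sits in a restart loop like that, the reason is usually in its log; a hedged way to check:

    docker logs --tail 50 Storj2                     # last output before the crash
    docker inspect -f '{{.State.ExitCode}}' Storj2   # exit code of the failed start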

     

    What network type do you use?

    [screenshot: Captura.PNG]

  10. On 3/28/2018 at 2:34 AM, Jcloud said:

    Away from my unRAID box for a while. I believe (I reserve the right to be 100% wrong lol) that when you first start the VM there's a key-press to enter the guest's emulated BIOS, and from there you should be able to change the boot order. You might have to start the VM, open the VNC window, then back in the webui force a restart of the VM; that way the "monitor" is already up and you can try hitting that key-press. Whereas on the first boot, you could be missing it in the time it takes to bring up the VNC window.

     

    It worked. Somehow I was able to enter a kind of BIOS related to PXE, not SeaBIOS, and I could change the boot order, so now one of the NICs doesn't try to boot, but the other still does.

    I have tried pressing different keys at different times to see if I can get into a different BIOS, but no luck.

     

    Does SeaBIOS have a BIOS interface?
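    For what it's worth, SeaBIOS has no persistent setup screen like a physical BIOS; it only offers a one-shot boot menu at POST ("Press ESC for boot menu") when the menu is enabled. A hedged libvirt snippet to keep that prompt on screen long enough to catch; the elements are standard libvirt, the timeout is arbitrary:

    <os>
      <type arch='x86_64'>hvm</type>
      <bootmenu enable='yes' timeout='3000'/>  <!-- show the SeaBIOS boot menu for 3 s -->
    </os>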

  11. If I run the command shown in the screenshot below, I get the output shown above the command.

    [screenshot: command and its output]

     

    Every time I run the command it writes a new JSON file, but I don't know where it is stored.

     

    Every time I run the command to create a new node I get this error:

    [screenshot: error message]

     

    The paths are fine and the case matches (it's case sensitive), so I don't know what could be failing. I have found https://github.com/Storj/storjshare-daemon/issues/122

    but I don't know how to use it to fix this issue.

     

    Any idea?
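    One thing worth checking, on the assumption that the container follows the daemon's defaults: storjshare normally writes generated configs under ~/.config/storjshare/configs, so the freshly written JSON may be there rather than in your mapped share:

    docker exec Storj sh -c 'ls -l ~/.config/storjshare/configs'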

  12. Regarding the System Stats app, is there any way to keep a log of the statistics even when you are not viewing them?

    I mean, every time I enter the Stats tab everything is reset, and all the graphs start writing from the moment I open the tab; previous data doesn't exist, no matter which of the available periods I select (Real time -> 2h, 1h, 30 min, etc.).

  13. In the Docker containers these two graphs are a perfect match (talking about the netdata screenshot), which makes sense. But in the VM I get what you can see in the picture below, and it doesn't match what Unraid is reporting: Unraid reported on average 15-20% load during the period 10:50-10:55, while the CPU report included in the application (Sophos XG) running in the VM was showing an almost idle load...

    Why is this? I don't know what to trust.

     

    I have all the cores assigned to Sophos XG, and looking at the netdata graphs everything else was mostly idle.

     

    [screenshot]

     

    [screenshot]
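    To get a third opinion between netdata and the Unraid dashboard, the hypervisor's own per-domain accounting can be queried; a hedged sketch, where SophosXG stands for whatever name virsh list reports:

    virsh list --all                   # confirm the exact domain name
    virsh cpu-stats SophosXG --total   # CPU time the host has billed to the guest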

  14. 10 hours ago, Jcloud said:

    Your VM is not finding a bootable disk, so the guest BIOS is trying to PXE-boot (network boot).

     

    Does that disk image have anything on it, or are you trying to build a new VM? If it's a new disk, I don't see your boot media/ISO disk. Edit your VM template and provide it an ISO as the install disk.

     

    If vdisk1.img had an OS on it, which one is it?

     

    The disk image is a bootable disk with Sophos XG on it; it's like a firewall based on Linux. I have the same issue with pfSense, which is a firewall based on FreeBSD.

    The PCIe card is based on the i350 LAN chipset.

    The image is bootable, but the OS on both pfSense and Sophos XG only starts after the VM tries to boot (or whatever it is trying to do) from the two NICs, which takes around 3 minutes or more if you don't skip the process manually. It tries with one NIC and then with the other; see the boot-order sketch below.
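    A hedged sketch of pinning the boot order per device in the VM's XML so SeaBIOS never tries PXE; the elements are standard libvirt, the file path and bridge name are placeholders, and any <boot dev='...'/> lines under <os> must be removed when per-device order is used:

    <devices>
      <disk type='file' device='disk'>
        <source file='/mnt/user/domains/SophosXG/vdisk1.img'/>
        <target dev='hdc' bus='virtio'/>
        <boot order='1'/>                <!-- the vdisk is the only boot device -->
      </disk>
      <interface type='bridge'>
        <source bridge='br0'/>
        <model type='virtio'/>
        <!-- no <boot> element here, so this NIC is skipped at boot -->
      </interface>
    </devices>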
