
pederm


Posts posted by pederm

  1. The home directory seems to differ depending on whether a script is invoked in the background or not. I have created a simple script with this content:

    #!/bin/bash
    ls -al ~

    When running in foreground, it lists the /root directory:

    drwx--x--- 7 root root 300 Jul 17 17:31 .
    drwxr-xr-x 20 root root 440 Jul 17 17:24 ..
    -rw------- 1 root root 16 Jul 17 17:31 .bash_history
    -rw-r--r-- 1 root root 316 Jul 15 04:27 .bash_profile
    -rwxr-xr-x 1 root root 60 Jul 15 04:27 .bashrc
    drwx------ 3 root root 60 Jul 17 17:20 .cache
    drwxr-xr-x 4 root root 80 Jul 17 17:20 .config
    lrwxrwxrwx 1 root root 30 Jul 15 04:27 .docker -> /boot/config/plugins/dockerMan
    drwx------ 3 root root 60 Jul 17 17:20 .local
    drwx------ 2 root root 60 Jul 17 17:22 .screen
    lrwxrwxrwx 1 root root 21 Jul 15 04:27 .ssh -> /boot/config/ssh/root
    -rw-r--r-- 1 root root 180 Jul 17 17:23 .wget-hsts
    -rw------- 1 root root 7606 Jul 17 17:20 keyfile
    -rw-rw-rw- 1 root root 28512 Jul 17 17:21 patch.sh

    When running in background, it lists the / directory:

    drwxr-xr-x  20 root root  440 Jul 17 17:24 .
    drwxr-xr-x  20 root root  440 Jul 17 17:24 ..
    -rw-r--r--   1 root root  228 Jul 17 17:20 .wget-hsts
    drwxr-xr-x   2 root root 3820 Jul 17 17:20 bin
    drwx------  10 root root 4096 Jan  1  1970 boot
    drwxr-xr-x  18 root root 3840 Jul 17 17:24 dev
    drwxr-xr-x  63 root root 3340 Jul 17 17:24 etc
    drwxr-xr-x   2 root root   40 Jul 15 04:27 home
    drwxr-xr-x   2 root root    0 Jul 17 17:20 hugetlbfs
    lrwxrwxrwx   1 root root   10 Jul 15 04:27 init -> /sbin/init
    drwxr-xr-x   1 root root  100 Jun  9 21:34 lib
    drwxr-xr-x   7 root root 4440 Jul 17 17:20 lib64
    drwxr-xr-x  13 root root  260 Jul 17 17:24 mnt
    drwxrwxrwx   4 root root   80 Jul 17 17:24 opt
    dr-xr-xr-x 632 root root    0 Jul 17 17:19 proc
    drwx--x---   7 root root  300 Jul 17 17:31 root
    drwxr-xr-x  19 root root 1180 Jul 17 17:24 run
    drwxr-xr-x   2 root root 5400 Jul 17 17:20 sbin
    dr-xr-xr-x  13 root root    0 Jul 17 17:19 sys
    drwxrwxrwt  16 root root  440 Jul 17 17:34 tmp
    drwxr-xr-x   1 root root  240 Jul 16 12:08 usr
    drwxr-xr-x  15 root root  360 Jul 17 17:20 var

    Would it be possible to use the /root directory when running in the background as well?
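
    A minimal sketch of one possible workaround (assumption: the background runner starts the script with HOME unset or pointing at /, which would explain the two listings above). Exporting HOME at the top of the script pins tilde expansion to /root no matter how the script is invoked:

```shell
#!/bin/bash
# Assumption: when started in the background, HOME is not /root,
# so ~ expands elsewhere. Pin it explicitly before using ~.
export HOME=/root
echo ~    # now prints /root however the script was started
```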

  2. According to sources, a 5406 beta BIOS exists for Asus x370 boards, which seems to play nicely with both third-generation Ryzen and virtualization (AGESA 1.0.0.4B). Probably the one that @cjbconnor refers to?


    I have downloaded it for my x370 Prime, but have not flashed it. Other manufacturers seem to have released this some time ago for their x370-based boards, just not Asus. If anybody has tried this on an Asus x370-based board, please report your results.

  3. 10 hours ago, Squid said:

    IIRC, First generation Ryzen CPUs have issues.  I believe the workaround is to disable C-States in the BIOS

    I have put this into the go file:

    /usr/local/sbin/zenstates --c6-disable

    The same setup has been running for a long time without exhibiting these crashes.
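
    A slightly more defensive version of that go-file addition (a sketch; the zenstates path is the one from above) only runs the tool if it is actually installed, so the go file does not error out on a box without it:

```shell
#!/bin/bash
# go-file fragment (sketch): disable the C6 state at boot, but only
# if the zenstates tool from the post above is actually installed.
ZEN=/usr/local/sbin/zenstates
if [ -x "$ZEN" ]; then
    "$ZEN" --c6-disable
else
    echo "zenstates not found; skipping C6 disable"
fi
```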

  4. Running Ryzen, I at some point got the impression that turning off address space layout randomization (ASLR) would provide a more stable environment. I have therefore added this to the go script:

    echo 0 | tee /proc/sys/kernel/randomize_va_space

    I actually don't know whether this (still) is a good idea.
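
    To verify that the setting took effect, the current value can be read back (0 means ASLR is disabled; 2 is the usual Linux default):

```shell
#!/bin/bash
# Read back the current ASLR mode: 0 = off, 1 = partial, 2 = full.
cat /proc/sys/kernel/randomize_va_space
```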

    Adding the rcu_nocbs option also seems to make a Ryzen system more stable (substitute your number of logical CPUs minus one for 11):

    label unRAID OS
      kernel /bzimage
      append rcu_nocbs=0-11 initrd=/bzroot
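
    The 11 above can be derived instead of hard-coded (a sketch; assumes GNU coreutils' nproc is available, as it is on Unraid):

```shell
#!/bin/bash
# Build the rcu_nocbs range from the number of logical CPUs,
# so the append line survives a CPU change.
last=$(( $(nproc) - 1 ))
echo "append rcu_nocbs=0-${last} initrd=/bzroot"
```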


  5. Ran an Ubuntu VM for more than 30 minutes and closed it down. No trouble, it worked fine and the qemu process exited as expected. Afterwards I was able to start a new session.


    I will run this session for a couple of hours and hopefully be able to close it with no trouble as well.

  6. A quick successful restart of my Ubuntu VM with Nvidia passthrough on an Asus x370 Prime Pro seems to confirm that the problem with hanging VMs has disappeared.


    Edit: Unfortunately, my VM once again did not shut down, requiring a server reboot. It had been running for many hours, and the symptoms were the same as before.

  7. On my Asus Prime x370 Pro, I have updated the BIOS to the latest 3203, and in this BIOS I enable SVM in order to allow virtual machines to run. In the Unraid syslinux.cfg I specify two options for my Ryzen 1600X: rcu_nocbs=0-11 and processor.max_cstate=1. With the latest beta of Unraid 6.4, it runs with my Nvidia 1050 Ti passed through for any OS. For Windows 10, I use OVMF and set Hyper-V to No.
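
    Putting those two options into the syslinux entry from the earlier post would look like this (a sketch; the 0-11 range matches the 1600X's 12 threads):

```
label unRAID OS
  kernel /bzimage
  append rcu_nocbs=0-11 processor.max_cstate=1 initrd=/bzroot
```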

  8. 21 hours ago, OnlyOneCannoli said:


    Yes, I created a new menu entry for "Ryzen Fix" which was a copy of the GUI menu entry with the extra nocbs line. It locked up somewhere between 2 and 3 hours later (screen was showing Plex, so I'm not sure when). 

    I had to disable C-states in the BIOS in order to avoid freezes with my Ryzen 1600X. The setting is found under AMD CBS on my Asus Prime x370. Previously I also had to disable simultaneous multithreading (SMT) and Nested Page Tables (NPT), but that is no longer necessary with the 6.4 prerelease based on Linux 4.13.
