Darksurf


Posts posted by Darksurf

  1. 15 hours ago, Squid said:

I have Linux VMs. They are actually build nodes used to build packages for a Linux distribution. So my VMs accept jobs to compile and package software inside containers, upload said packages, then delete everything and start another job. I'm not sure your suggestion works in this scenario, does it?

  2. 32 minutes ago, pk1057 said:

I dug around,

made a fork of the project and switched from ripit to abcde, which is more versatile and mature than ripit.

With abcde there are no encoding problems!

You might be able to submit a pull request to update the main project once you have thorough testing and proof of stability.

Is anyone else having issues with memory ballooning not working in VMs? I checked my Linux VMs and they have virtio_balloon loaded, but their memory won't increase past its initial size.

     

I'm using an ASRock Creator TRX40 with a Ryzen Threadripper 3970X and 64 GB DDR4. I'm using the rule of 1 GB initial memory per core with a max of 2 GB per core. I'm doing this on three VMs: 8-core, 8-core, and 4-core. None of them see their memory balloon while compiling software, and they end up crashing with OOM errors.

     

     

    oceans-diagnostics-20210528-1427.zip

  4. That's awesome! It would be nice if we could get a lifespan meter somewhere in the open (it seems my method may be inaccurate and yours would be better). I want to make sure my server uptime doesn't take a bad turn when I need to order an SSD and it takes a week to get here. I'd like some pre-emptive warning/monitoring so I can plan accordingly rather than have items live on a shelf for years.

     

Thanks for the correction! I'm learning something new every day.

I'm curious whether it would be possible to store a MAX TBW value for SSDs in the warranty information on the drive Identity page, then keep a running comparison against what smartctl shows for NVMe/SSDs, so you could see how close you are to reaching that maximum and know to prepare a replacement. You'll see after running smartctl -a /dev/nvme0n1 that I have a "Data Units Written" of 9.67 TB. This unit has a MAX TBW of 1800. Now, this isn't my cache drive, this is my desktop. But if you're using an SSD as a cache drive, I'm sure you can see how the SSD would quickly deteriorate and fail. My cache SSD on my server is currently at 169 TBW with a maximum of 530 TBW before failure. Having this SSD lifespan viewable from the dashboard would be very helpful. My server's SSD is only a year old, but it's used heavily for an open-source project.

     

     

    jcfrosty@Zero ~ $ sudo smartctl -a /dev/nvme0n1
    Password: 
    smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.11.0-sabayon] (local build)
    Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
    
    === START OF INFORMATION SECTION ===
    Model Number:                       Sabrent Rocket 4.0 1TB
    Serial Number:                      03F10797054463199045
    Firmware Version:                   EGFM11.1
    PCI Vendor/Subsystem ID:            0x1987
    IEEE OUI Identifier:                0x6479a7
    Total NVM Capacity:                 1,000,204,886,016 [1.00 TB]
    Unallocated NVM Capacity:           0
    Controller ID:                      1
    Number of Namespaces:               1
    Namespace 1 Size/Capacity:          1,000,204,886,016 [1.00 TB]
    Namespace 1 Formatted LBA Size:     512
    Namespace 1 IEEE EUI-64:            6479a7 2220653435
    Local Time is:                      Sat Apr 17 11:32:39 2021 CDT
    Firmware Updates (0x12):            1 Slot, no Reset required
    Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
    Optional NVM Commands (0x005d):     Comp DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
    Maximum Data Transfer Size:         512 Pages
    Warning  Comp. Temp. Threshold:     70 Celsius
    Critical Comp. Temp. Threshold:     90 Celsius
    
    Supported Power States
    St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
     0 +    10.73W       -        -    0  0  0  0        0       0
     1 +     7.69W       -        -    1  1  1  1        0       0
     2 +     6.18W       -        -    2  2  2  2        0       0
     3 -   0.0490W       -        -    3  3  3  3     2000    2000
     4 -   0.0018W       -        -    4  4  4  4    25000   25000
    
    Supported LBA Sizes (NSID 0x1)
    Id Fmt  Data  Metadt  Rel_Perf
     0 +     512       0         2
     1 -    4096       0         1
    
    === START OF SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    
    SMART/Health Information (NVMe Log 0x02)
    Critical Warning:                   0x00
    Temperature:                        45 Celsius
    Available Spare:                    100%
    Available Spare Threshold:          5%
    Percentage Used:                    1%
    Data Units Read:                    7,506,169 [3.84 TB]
    Data Units Written:                 18,893,007 [9.67 TB]
    Host Read Commands:                 56,347,067
    Host Write Commands:                289,751,028
    Controller Busy Time:               583
    Power Cycles:                       118
    Power On Hours:                     14,438
    Unsafe Shutdowns:                   55
    Media and Data Integrity Errors:    0
    Error Information Log Entries:      271
    Warning  Comp. Temperature Time:    0
    Critical Comp. Temperature Time:    0
    
    Error Information (NVMe Log 0x01, max 63 entries)
    No Errors Logged
    

     

     

    Screenshot_20210417_113435.png
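For what it's worth, the comparison I'm describing can be sketched in shell. This assumes smartctl output like the above; the MAX_TBW value here is just this drive's rated endurance typed in by hand from the spec sheet:

```shell
#!/bin/bash
# Sketch: compare "Data Units Written" against a drive's rated
# endurance. MAX_TBW must come from the vendor spec sheet; the
# value below is just this example drive's rating entered by hand.
DEV=/dev/nvme0n1
MAX_TBW=1800

# NVMe reports Data Units Written in units of 512,000 bytes.
units=$(smartctl -a "$DEV" | awk -F: '/Data Units Written/ {gsub(/[ ,]/, "", $2); sub(/\[.*/, "", $2); print $2}')
tb_written=$(awk -v u="$units" 'BEGIN {printf "%.2f", u * 512000 / 1e12}')
pct=$(awk -v w="$tb_written" -v m="$MAX_TBW" 'BEGIN {printf "%.1f", 100 * w / m}')
echo "Written ${tb_written} TB of ${MAX_TBW} TBW rated (${pct}% used)"
```

Something like that, rendered on the dashboard, would be all the pre-emptive warning I'm after.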

I really could use this. I use my personal server for an open-source Linux project, so giving my team members access would be really handy.

I'd like to see:

1. Multiple users enabled for the WebUI (a simple checkbox within the user profile would be nice)
2. Different levels of access (example: restart VMs and VM access, but not creation/deletion or root host shell access)
3. Logging of user logins and change actions (VM/Docker reboot, deletion, creation, etc.)

Just some SMB-style features could be handy.

  7. 9 hours ago, JTok said:

Whoops! I typed the wrong filename. I think it should work now.

I can't seem to update the plugin even though it tells me there's an update.

I tried removing the plugin to reinstall it, and it still fails, so it never installs.
     

    plugin: updating: vmbackup.plg
    Removing package: xmlstarlet-1.6.1-x86_64-1_slonly
    Removing files:
    Removing package: pigz-2.3-x86_64-2_slonly
    Removing files:
    
    +==============================================================================
    | Installing new package /boot/config/plugins/vmbackup/packages/xmlstarlet-1.6.1-x86_64-1_slonly.txz
    +==============================================================================
    
    Verifying package xmlstarlet-1.6.1-x86_64-1_slonly.txz.
    Installing package xmlstarlet-1.6.1-x86_64-1_slonly.txz:
    PACKAGE DESCRIPTION:
    # xmlstarlet (command line xml tool)
    #
    # XMLStarlet is a command line XML toolkit that can be used to
    # transform, query, validate, and edit XML documents and files using
    # a simple set of shell commands, which work similarly to 'grep',
    # 'sed', 'awk', 'tr', 'diff', or 'patch' on plain text files.
    #
    # Homepage https://sourceforge.net/projects/xmlstar/
    #
    Executing install script for xmlstarlet-1.6.1-x86_64-1_slonly.txz.
    Package xmlstarlet-1.6.1-x86_64-1_slonly.txz installed.
    
    +==============================================================================
    | Installing new package /boot/config/plugins/vmbackup/packages/pigz-2.3-x86_64-2_slonly.txz
    +==============================================================================
    
    Verifying package pigz-2.3-x86_64-2_slonly.txz.
    Installing package pigz-2.3-x86_64-2_slonly.txz:
    PACKAGE DESCRIPTION:
    # pigz (Parallel gzip)
    #
    # pigz, which stands for parallel implementation of gzip, is a fully
    # functional replacement for gzip that exploits multiple processors and
    # multiple cores to the hilt when compressing data. pigz was written by
    # Mark Adler, and uses the zlib and pthread libraries.
    #
    # Home page: http://www.zlib.net/pigz/
    #
    Package pigz-2.3-x86_64-2_slonly.txz installed.
    
    +==============================================================================
    | Skipping package vmbackup-v0.2.2-2021.02.03 (already installed)
    +==============================================================================
    
    plugin: run failed: /bin/bash retval: 1

     

     

After upgrading to stable 6.9.0 from RC1, I tried to install the plugin again. It still fails:


     

    plugin: installing: https://raw.githubusercontent.com/jtok/unraid.vmbackup/master/vmbackup.plg
    plugin: downloading https://raw.githubusercontent.com/jtok/unraid.vmbackup/master/vmbackup.plg
    plugin: downloading: https://raw.githubusercontent.com/jtok/unraid.vmbackup/master/vmbackup.plg ... done
    No such package: xmlstarlet*. Can't remove.
    No such package: pigz*. Can't remove.
    
    +==============================================================================
    | Installing new package /boot/config/plugins/vmbackup/packages/xmlstarlet-1.6.1-x86_64-1_slonly.txz
    +==============================================================================
    
    Verifying package xmlstarlet-1.6.1-x86_64-1_slonly.txz.
    Installing package xmlstarlet-1.6.1-x86_64-1_slonly.txz:
    PACKAGE DESCRIPTION:
    # xmlstarlet (command line xml tool)
    #
    # XMLStarlet is a command line XML toolkit that can be used to
    # transform, query, validate, and edit XML documents and files using
    # a simple set of shell commands, which work similarly to 'grep',
    # 'sed', 'awk', 'tr', 'diff', or 'patch' on plain text files.
    #
    # Homepage https://sourceforge.net/projects/xmlstar/
    #
    Executing install script for xmlstarlet-1.6.1-x86_64-1_slonly.txz.
    Package xmlstarlet-1.6.1-x86_64-1_slonly.txz installed.
    
    +==============================================================================
    | Installing new package /boot/config/plugins/vmbackup/packages/pigz-2.3-x86_64-2_slonly.txz
    +==============================================================================
    
    Verifying package pigz-2.3-x86_64-2_slonly.txz.
    Installing package pigz-2.3-x86_64-2_slonly.txz:
    PACKAGE DESCRIPTION:
    # pigz (Parallel gzip)
    #
    # pigz, which stands for parallel implementation of gzip, is a fully
    # functional replacement for gzip that exploits multiple processors and
    # multiple cores to the hilt when compressing data. pigz was written by
    # Mark Adler, and uses the zlib and pthread libraries.
    #
    # Home page: http://www.zlib.net/pigz/
    #
    Package pigz-2.3-x86_64-2_slonly.txz installed.
    
    +==============================================================================
    | Installing new package /boot/config/plugins/vmbackup/vmbackup-v0.2.2-2021.02.03.txz
    +==============================================================================
    
    Verifying package vmbackup-v0.2.2-2021.02.03.txz.
    Installing package vmbackup-v0.2.2-2021.02.03.txz:
    PACKAGE DESCRIPTION:
    Package vmbackup-v0.2.2-2021.02.03.txz installed.
    plugin: run failed: /bin/bash retval: 126

     

  8. On 1/10/2021 at 5:29 AM, rix said:

    Latest image includes ccextractor v.0.88

    Let me know if something needs to be changed.

     

    jlesage's image includes a line in the settings.conf that guides to the binary.

    https://github.com/jlesage/docker-makemkv/blob/master/rootfs/defaults/settings.conf

     

    app_ccextractor = "/usr/bin/ccextractor"

     

    Please try without changing settings.conf first.

     

The ccextractor thing is still an issue, as the location settings do not match. ccextractor wasn't installed at /usr/bin/ccextractor; it's been installed at /usr/local/bin/ccextractor. So the errors still exist. I've not added that settings.conf.
     

    MSG:5015,131072,4,"Saving 1 titles into directory file:///out/DVD/ADDAMSFAMILY using profile 'Default' from file '/config/default.mmcp.xml'","Saving %1 titles into directory %2 using profile '%3' from file '%4'","1","file:///out/DVD/ADDAMSFAMILY","Default","/config/default.mmcp.xml"
    MSG:4040,0,1,"Unable to execute external program 'ccextractor' as its path is not set in preferences","Unable to execute external program '%1' as its path is not set in preferences","ccextractor"
    MSG:4040,0,1,"Unable to execute external program 'ccextractor' as its path is not set in preferences","Unable to execute external program '%1' as its path is not set in preferences","ccextractor"
    MSG:4040,0,1,"Unable to execute external program 'ccextractor' as its path is not set in preferences","Unable to execute external program '%1' as its path is not set in preferences","ccextractor"
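If I do end up adding a settings.conf the way jlesage's image does, presumably (untested assumption on my part) the line would need to point at the binary's actual location:

```
app_ccextractor = "/usr/local/bin/ccextractor"
```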

     

    Side note unrelated to ccextractor:

I've been noticing that sometimes the Docker will lock up (it's rare overall, but I have it running 24/7 whether it's in use or not; I mean, why not?). Sometimes after ripping a few discs, the Docker will peg one CPU core and have a hard time progressing. I couldn't really figure out what was going on. I just know that attempting to kill/stop the Docker would leave it hard-locked and refusing to stop. Showing the log from the WebUI fails, and you cannot access the terminal either. Unplugging the optical drive would allow the Docker to stop (it's an external USB 3 Blu-ray drive). After plugging the drive back in and starting the Docker again, I noticed that sometimes the output files in /out are not placed there in the root folder but in another folder called ",", and inside that folder is another folder named after the drive, "DRV:0,2,999,12,"BD-RE ASUS BW-12D1S-U E401"", and that is where the output gets placed during the rip. I couldn't figure this out for the longest time. I'd have to reboot the server to fix it and force it to rip to the correct location.

Well, I think I finally discovered it today. The issue occurred again last night, so I started investigating the script (no issues found), kept looking, and eventually jumped into the host server's /dev folder only to find that /dev/sr0 was not a block device but a FOLDER! I couldn't think of why this would happen, so I stopped the Docker, removed the drive, deleted that folder, connected the drive again, and now /dev/sr0 is a block device again! This also explains what's probably happening when I reboot the machine, since the block device gets recreated then. I started a new rip and, surprise, it's ripping like it's supposed to, in the correct folder! OK, time to investigate the Docker. Now I noticed something interesting: it's passing the optical drive configuration AS A PATH! What it should be doing is passing /dev/sr0 as a DEVICE. Would it be possible to tweak the Docker to do that? Or is my old config possibly being kept in place during updates? Just figured I'd pass on this info. Thanks for everything you do! This Docker is GOLD.

     

    Screenshot_20210112_092510.thumb.png.df748535a633335925dbf274bf881120.png

    Screenshot_20210112_092559.png

     

Here is my current config, and it seems to be working without extra parameters or anything:

     

    [2415694.831852] usb 8-4: new SuperSpeed Gen 1 USB device number 4 using xhci_hcd
    [2415694.845100] usb-storage 8-4:1.0: USB Mass Storage device detected
    [2415694.845260] usb-storage 8-4:1.0: Quirks match for vid 174c pid 55aa: 400000
    [2415694.845331] scsi host1: usb-storage 8-4:1.0
    [2415695.868323] scsi 1:0:0:0: CD-ROM            ASUS     BW-12D1S-U       E401 PQ: 0 ANSI: 0
    [2415695.876630] sr 1:0:0:0: Power-on or device reset occurred
    [2415695.899356] sr 1:0:0:0: [sr0] scsi3-mmc drive: 125x/125x writer dvd-ram cd/rw xa/form2 cdda tray
    [2415695.905593] sr 1:0:0:0: Attached scsi CD-ROM sr0
    [2415695.905667] sr 1:0:0:0: Attached scsi generic sg1 type 5

     

    Screenshot_20210112_122257.thumb.png.6ffce69cd119bb947a31c2bef6723703.png

  9. On 1/10/2021 at 5:29 AM, rix said:

    Latest image includes ccextractor v.0.88

    Let me know if something needs to be changed.

     

    jlesage's image includes a line in the settings.conf that guides to the binary.

    https://github.com/jlesage/docker-makemkv/blob/master/rootfs/defaults/settings.conf

     

    app_ccextractor = "/usr/bin/ccextractor"

     

    Please try without changing settings.conf first.

Thanks, this was a Blu-ray, but I've seen it with DVDs as well. I'll give it a test and see how it goes!

@rix Did something change with MakeMKV? The logs show an attempt to reach an application called ccextractor, which isn't in the Docker?

     

    MSG:5015,131072,4,"Saving 1 titles into directory file:///out/Ripper/DVD/ARCHIVE using profile 'Default' from file '/config/default.mmcp.xml'","Saving %1 titles into directory %2 using profile '%3' from file '%4'","1","file:///out/Ripper/DVD/ARCHIVE","Default","/config/default.mmcp.xml"
    MSG:4040,0,1,"Unable to execute external program 'ccextractor' as its path is not set in preferences","Unable to execute external program '%1' as its path is not set in preferences","ccextractor"

Can you check /var/log? Open a terminal and run "ls -hal /var/log"; I'm just curious to see what the largest file(s) are.

     

Also, I'm seeing a TON of this in your syslog. You need to edit your VM settings from "virtio" to "virtio-net" for the network interfaces.
    Oct 27 22:53:28 mnemosyne kernel: tun: unexpected GSO type: 0x0, gso_size 1448, hdr_len 1514 <----YUCK FIX VM settings

    Arch
        <interface type='bridge'>
          <mac address='52:54:00:9e:3d:49'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>
    
    Eaton IPM
    	<interface type='bridge'>
          <mac address='52:54:00:e1:ef:a8'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
    
    Pihole #2
        <interface type='bridge'>
          <mac address='52:54:00:0b:99:19'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
    
    Windows Server 2019
        <interface type='bridge'>
          <mac address='52:54:00:69:77:2c'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>
    
    loxberry-vm
    	<interface type='bridge'>
          <mac address='52:54:00:6f:49:4d'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>

     

That should help clear up the TUN stuff. Let's start there and come back :)
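If you want to quickly spot which VMs still need the change, here's a rough sketch (assumes virsh and standard libvirt domain XML; not battle-tested):

```shell
#!/bin/bash
# Sketch: flag libvirt domain XML that still uses the plain
# "virtio" NIC model (the one producing the tun/GSO messages)
# instead of "virtio-net". Feed it output from `virsh dumpxml`.
needs_virtio_net() {
  grep -q "<model type='virtio'/>"
}

# Check every defined VM and say which ones need `virsh edit`.
for vm in $(virsh list --all --name); do
  [ -z "$vm" ] && continue
  if virsh dumpxml "$vm" | needs_virtio_net; then
    echo "$vm: change <model type='virtio'/> to <model type='virtio-net'/> (virsh edit $vm)"
  fi
done
```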

  12. 11 hours ago, Phonejudgement said:

Hey! I'm having some trouble with Ripper. I can't get it to work for CDs that have either an artist or album name with special characters (å, ä, ö, [ ], etc.). It tries to rip, but it seems it is unable to create the folders. CDs with only regular characters work fine. Is it something with my config?

It could be the script ripper.sh. Special characters tend to need to be escaped with a "\" (or quoted) before they can be used in bash/shell scripts. You might take a look and see if you can edit the script: copy the CD-ripping line that creates the folder, statically name the folder, and comment out the original line.
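To illustrate what I mean about quoting (this is just a sketch, not the actual ripper.sh code):

```shell
# Illustrative only: in shell, unquoted variables word-split and
# glob, so names with spaces or characters like [ ] break folder
# creation; double-quoting the variable fixes it.
ALBUM='Sigur Rós [Ágætis byrjun]'

# mkdir -p $ALBUM     # wrong: creates several mangled folders
mkdir -p "$ALBUM"     # right: one folder, special characters intact
ls -d "$ALBUM"
```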

  13. On 10/2/2020 at 2:58 AM, tuxbass said:

    Have you configured unraid SSH with key-only? If so, could you walk me through?
     

It should be just like any other SSH config setup. /root/.ssh is a soft link to /boot/config/ssh/root/.

So inside /boot/config/ssh/root/, create an authorized_keys file via "touch authorized_keys". Then you can copy and paste in the contents of your local machine's .ssh/id_rsa.pub key file. Remember to use the PUBLIC key file, not the private one.
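Putting the steps together, something like this should work ("tower" is a placeholder for your server's hostname):

```shell
# On the Unraid server: make sure the persistent key file exists
# (it lives on the flash so it survives reboots).
touch /boot/config/ssh/root/authorized_keys

# From your local machine: append your PUBLIC key.
ssh-copy-id -i ~/.ssh/id_rsa.pub root@tower
# ...or paste it in by hand:
cat ~/.ssh/id_rsa.pub | ssh root@tower 'cat >> /boot/config/ssh/root/authorized_keys'
```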

     

  14. @rix 

I wanted to give you an update on the addition of Java. It works GREAT! The files now have more info than just t00.mkv, t01.mkv, t02.mkv! I just did Jumanji: The Next Level, and the file names come out like this: "Jumanji- The Next Level-FPL_MainFeature_t00.mkv", "Jumanji- The Next Level-SF_CS_PlayAll_t02.mkv", "Jumanji- The Next Level-SF_Level_Up_Making_t01.mkv". I now know which one is the MainFeature without having to hunt it down! Excellent addition!

  15. On 9/16/2020 at 10:16 AM, TeKo said:

Is GoogleMusicManager not working anymore? I'm getting "Can't connect to Google Play". I don't think there is a solution for YouTube Music yet; at least I wasn't able to find one.

Google is killing off Google Play Music in an attempt to force everyone over to YouTube Music. Because it's not ready and everyone is complaining about how bad it is, they've been on a mad dash to add some level of feature parity; they know it's not ready, but they don't care. I've downloaded all my music and ported it all over to Plexamp (Plex, but with a special music player that works on Android/iOS/Linux/Windows/Mac).

     

YouTube Music will be a streaming service only. You will not be able to purchase music or download your music like you could in GPM. When GPM dies, you'll lose the ability to download your music, so get it now while you still can. They shut it down this December.

The Titan Security Key is basically Google's idea of a YubiKey. It's FIDO U2F. You're asking for something along the lines of:

     

     

Now, there is a way you could use a key like this, and that is encryption. I encrypt my desktop and have a randomly generated 32-character password (alphanumeric + symbols). YubiKeys have two slots. I've set slot 1 (tap) to be U2F and slot 2 (hold down) to be the 32-character password. I use the second slot to decrypt my system; I couldn't remember the password if I wanted to. I know you can encrypt drives now in Unraid, so you could encrypt all the storage with a completely random 32-character alphanumeric + symbol password and store it on a couple of keys in slot 2. Put one key in a safe and another on your keyring.
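For reference, one way to generate such a password (this exact character set is my assumption, not gospel; pick one your keyboard layout and the static-password feature both handle):

```shell
# Sketch: pull 32 random characters from /dev/urandom, keeping
# only alphanumerics and a handful of symbols.
PASS=$(tr -dc 'A-Za-z0-9!@#$%^&*_+=-' < /dev/urandom | head -c 32)
echo "$PASS"
```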
