vanes

Members
  • Posts: 120
  • Joined
  • Last visited

Posts posted by vanes

  1. 4 minutes ago, Un1Q said:

    Good afternoon!

    I would like to introduce you to a growing blog created by a beginner for beginners - MyUnraid.ru

    What prompted me to start a personal blog was the lack of Russian-language manuals on configuring Unraid OS and on managing a NAS built on it in general.

    The site will publish my own settings, guides, tweaks and other instructions for:

    • Docker containers
    • Plugins
    • Scripts

    At the moment there are roughly 15-20 finished articles on these topics, aimed at making it easier to manage your own server.

     

     

    Great, keep up the good work.

  2. Hi guys, I need some help.

    I set up zfs-zed using this post 

    Zed is working fine, but now my syslog is full of this:

     

    Mar 6 07:41:54 unRaid zed[3601]: Invoking "all-syslog.sh" eid=82 pid=25649
    Mar 6 07:41:54 unRaid zed: eid=82 class=config_sync pool_guid=0xD30AFB4571F3B450 
    Mar 6 07:41:54 unRaid zed[3601]: Finished "all-syslog.sh" eid=82 pid=25649 exit=0
    Mar 6 07:46:57 unRaid zed[3601]: Invoking "all-syslog.sh" eid=83 pid=29036
    Mar 6 07:46:57 unRaid zed: eid=83 class=config_sync pool_guid=0xD30AFB4571F3B450 
    Mar 6 07:46:57 unRaid zed[3601]: Finished "all-syslog.sh" eid=83 pid=29036 exit=0
    Mar 6 07:52:00 unRaid zed[3601]: Invoking "all-syslog.sh" eid=84 pid=32653
    Mar 6 07:52:00 unRaid zed: eid=84 class=config_sync pool_guid=0xD30AFB4571F3B450 
    Mar 6 07:52:00 unRaid zed[3601]: Finished "all-syslog.sh" eid=84 pid=32653 exit=0
    Mar 6 07:57:03 unRaid zed[3601]: Invoking "all-syslog.sh" eid=85 pid=4478
    Mar 6 07:57:03 unRaid zed: eid=85 class=config_sync pool_guid=0xD30AFB4571F3B450 
    Mar 6 07:57:03 unRaid zed[3601]: Finished "all-syslog.sh" eid=85 pid=4478 exit=0
    Mar 6 08:02:05 unRaid zed[3601]: Invoking "all-syslog.sh" eid=86 pid=8041
    Mar 6 08:02:05 unRaid zed: eid=86 class=config_sync pool_guid=0xD30AFB4571F3B450 
    Mar 6 08:02:05 unRaid zed[3601]: Finished "all-syslog.sh" eid=86 pid=8041 exit=0
    Mar 6 08:07:08 unRaid zed[3601]: Invoking "all-syslog.sh" eid=87 pid=10773
    Mar 6 08:07:08 unRaid zed: eid=87 class=config_sync pool_guid=0xD30AFB4571F3B450 
    Mar 6 08:07:08 unRaid zed[3601]: Finished "all-syslog.sh" eid=87 pid=10773 exit=0
    Mar 6 08:12:11 unRaid zed[3601]: Invoking "all-syslog.sh" eid=88 pid=13372
    Mar 6 08:12:11 unRaid zed: eid=88 class=config_sync pool_guid=0xD30AFB4571F3B450 
    Mar 6 08:12:11 unRaid zed[3601]: Finished "all-syslog.sh" eid=88 pid=13372 exit=0

     

    Please help me stop zed from spamming the syslog.
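
    For reference, here is a sketch of two possible ways to quiet these entries (the file paths and the zed.rc variable are assumptions about a typical OpenZFS zed install and may differ on Unraid):

    # Option 1 (newer OpenZFS releases only): exclude the noisy config_sync
    # subclass by adding this line to /etc/zfs/zed.d/zed.rc, then restart zed
    #   ZED_SYSLOG_SUBCLASS_EXCLUDE="history_event|config_sync"
    #
    # Option 2: disable the all-syslog zedlet entirely (this silences every zed
    # syslog entry, not just config_sync)
    mv /etc/zfs/zed.d/all-syslog.sh /etc/zfs/zed.d/all-syslog.sh.disabled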

  3. @jang430 you don't need new hardware.

    Your CPU has Intel® HD Graphics P630: https://ark.intel.com/products/97476/Intel-Xeon-Processor-E3-1225-v6-8M-Cache-3-30-GHz-

    It supports 10-bit H.265 at 4K:

    https://www.intel.ru/content/www/ru/ru/architecture-and-technology/hd-graphics/hd-graphics-performance-guide.html

    You just need to set up Unraid and Plex/Emby to use it.

    A Xeon Silver 4108 is ~30% faster but has no iGPU.

    Did you set up iGPU transcoding, or do you not use the HD Graphics P630 at all?
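
    A quick way to confirm the iGPU is actually exposed before wiring up Plex/Emby (a sketch; the i915 module must already be loaded for these device nodes to exist):

    # list the Intel render/card device nodes; if this directory is missing,
    # the i915 driver has not been loaded yet
    ls -l /dev/dri/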

  4. On 10/1/2018 at 1:49 AM, Joeyleigh said:

    Anyone know how I can set a local scan/container scan of the file system used by Nextcloud to pick up any files I have manually added via SMB/Explorer?

    Run these inside the Nextcloud docker container:

    # change into the Nextcloud web root inside the container
    cd /config/www/nextcloud/
    # rescan all users' files so manually added files show up in Nextcloud
    sudo -u abc php7 occ files:scan --all
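
    If you are not already inside the container, a shell can be opened from the Unraid host first (the container name "nextcloud" below is an assumption; use whatever your container is actually called):

    # open a shell inside the Nextcloud container
    docker exec -it nextcloud bash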

  5. Hi guys, I need some help. A few days ago I precleared and then added an old Hitachi drive to my array, and now I see some errors during the parity check. Parity is fine, files are fine...

    Aug 13 00:32:49 unRaid kernel: ata6.00: exception Emask 0x0 SAct 0x8000000 SErr 0x0 action 0x6 frozen
    Aug 13 00:32:49 unRaid kernel: ata6.00: failed command: READ FPDMA QUEUED
    Aug 13 00:32:49 unRaid kernel: ata6.00: cmd 60/08:d8:c0:00:00/00:00:00:00:00/40 tag 27 ncq dma 4096 in
    Aug 13 00:32:49 unRaid kernel:         res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
    Aug 13 00:32:49 unRaid kernel: ata6.00: status: { DRDY }
    Aug 13 00:32:49 unRaid kernel: ata6: hard resetting link
    Aug 13 00:32:49 unRaid kernel: ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Aug 13 00:32:49 unRaid kernel: ata6.00: configured for UDMA/133
    Aug 13 00:32:49 unRaid kernel: ata6: EH complete
    Aug 13 00:37:51 unRaid kernel: ata6.00: exception Emask 0x0 SAct 0x400008 SErr 0x0 action 0x6 frozen
    Aug 13 00:37:51 unRaid kernel: ata6.00: failed command: READ FPDMA QUEUED
    Aug 13 00:37:51 unRaid kernel: ata6.00: cmd 60/00:18:78:22:79/02:00:07:00:00/40 tag 3 ncq dma 262144 in
    Aug 13 00:37:51 unRaid kernel:         res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
    Aug 13 00:37:51 unRaid kernel: ata6.00: status: { DRDY }
    Aug 13 00:37:51 unRaid kernel: ata6.00: failed command: READ FPDMA QUEUED
    Aug 13 00:37:51 unRaid kernel: ata6.00: cmd 60/00:b0:78:20:79/02:00:07:00:00/40 tag 22 ncq dma 262144 in
    Aug 13 00:37:51 unRaid kernel:         res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
    Aug 13 00:37:51 unRaid kernel: ata6.00: status: { DRDY }
    Aug 13 00:37:51 unRaid kernel: ata6: hard resetting link
    Aug 13 00:37:52 unRaid kernel: ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Aug 13 00:37:52 unRaid kernel: ata6.00: configured for UDMA/133
    Aug 13 00:37:52 unRaid kernel: ata6: EH complete
    Aug 13 00:53:14 unRaid kernel: ata6.00: exception Emask 0x0 SAct 0x4 SErr 0x0 action 0x6 frozen
    Aug 13 00:53:14 unRaid kernel: ata6.00: failed command: READ FPDMA QUEUED
    Aug 13 00:53:14 unRaid kernel: ata6.00: cmd 60/00:10:98:44:2e/02:00:02:00:00/40 tag 2 ncq dma 262144 in
    Aug 13 00:53:14 unRaid kernel:         res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
    Aug 13 00:53:14 unRaid kernel: ata6.00: status: { DRDY }
    Aug 13 00:53:14 unRaid kernel: ata6: hard resetting link
    Aug 13 00:53:15 unRaid kernel: ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Aug 13 00:53:15 unRaid kernel: ata6.00: configured for UDMA/133
    Aug 13 00:53:15 unRaid kernel: ata6: EH complete
    Aug 13 00:56:17 unRaid kernel: ata6.00: exception Emask 0x0 SAct 0x800 SErr 0x0 action 0x6 frozen
    Aug 13 00:56:17 unRaid kernel: ata6.00: failed command: READ FPDMA QUEUED
    Aug 13 00:56:17 unRaid kernel: ata6.00: cmd 60/00:58:f0:7b:60/02:00:02:00:00/40 tag 11 ncq dma 262144 in
    Aug 13 00:56:17 unRaid kernel:         res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
    Aug 13 00:56:17 unRaid kernel: ata6.00: status: { DRDY }
    Aug 13 00:56:17 unRaid kernel: ata6: hard resetting link
    Aug 13 00:56:18 unRaid kernel: ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Aug 13 00:56:18 unRaid kernel: ata6.00: configured for UDMA/133
    Aug 13 00:56:18 unRaid kernel: ata6: EH complete

    Do I need to worry?
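
    A common sanity check after timeouts like these is to look at the drive's SMART data (a sketch; /dev/sdX is a placeholder for the affected disk):

    # print overall health plus reallocated/pending sector and CRC error counts
    smartctl -a /dev/sdX | grep -Ei 'overall-health|reallocated|pending|crc'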

  6. i915 is not loaded by default!

    To activate it, add this to the go file:

    On 4/1/2018 at 8:58 AM, dmacias said:

    #enable module for iGPU and perms for the render device

    modprobe i915

    chown -R nobody:users /dev/dri
    chmod -R 777 /dev/dri

     

    If I turn i915 off, I can't use Intel HD Graphics for Plex/Emby hardware transcoding.
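
    A quick check that the module actually loaded after editing the go file (a sketch):

    # i915 should appear here once modprobe has run
    lsmod | grep i915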

     

    Sorry for my English.

  7. 5 minutes ago, witalit said:

    Well, I have a slightly different board and yes, I get the same issue. When booting into unRAID the screen is blank, and the fix posted in other threads does not work.

     

    I see most of the Unraid boot process until the i915 driver is loaded. The screen goes black near the final part of the boot process...
    If I don't load i915 (modprobe i915), everything works fine.

  8. 1 hour ago, 1812 said:

    How much ram does it require?


    I'm no ZFS guru; I created my first ZFS pool a few weeks ago. I use two USB HDDs in a mirror (RAID 1) as my ZFS backup pool, and an rsync job in User Scripts copies important data there once a week. I limited the ARC cache to 2 GB of memory using User Scripts. In total my system has 8 GB of RAM.
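
    For reference, the 2 GB cap was set with a line like this in a User Scripts job (a sketch; it uses the same parameter as the go-file snippet further down this page, and only takes effect once the zfs module is loaded):

    # limit the ZFS ARC to 2 GiB (2 * 1024^3 bytes)
    echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max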

  9. 1 hour ago, jang430 said:

    Are you using official embyserver docker? 

    Yes

    1 hour ago, jang430 said:

    What needs to be in the go file?

    my go file is:

    #!/bin/bash
    #enable module for iGPU and perms for the render device
    modprobe i915
    chmod -R 777 /dev/dri
    # Start the Management Utility
    /usr/local/sbin/emhttp &

    container Extra-Parameters:

    [screenshot of the Emby container's Extra Parameters setting]
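
    A typical Extra Parameters value for passing the iGPU into the container (an assumption about the usual setup, not necessarily what the screenshot shows) would be:

    --device=/dev/dri:/dev/dri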

  10.  

    37 minutes ago, comet424 said:

    I couldn't get the File Options on top. I thought ALT would do it, but nope.

     

    MC's full feature set doesn't work in the browser terminal. Try using an SSH client like PuTTY.

     

    To share my Backup folder on my pool (mounted at /mnt/zfspool) I added

     

    [Backup]
          path = /mnt/zfspool/Backup
          comment =
          browseable = yes
          # Public
          writeable = yes
          read list = 
          write list = 
          valid users = 
          vfs objects =

    to Settings > SMB > SMB Extras:

    [screenshot of the SMB Extras field under Settings > SMB]
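
    If the new share does not appear right away, Samba may need to reload its configuration (a sketch, assuming Unraid's standard Slackware-style rc script is present):

    # restart Samba so the new [Backup] share definition is picked up
    /etc/rc.d/rc.samba restart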

     

  11. @comet424 go to the terminal and type mc; you can see your pool there if it is mounted.

    To see the mountpoint, use the "zfs list" command:

    root@unRaid:~# zfs list
    NAME      USED  AVAIL  REFER  MOUNTPOINT
    zfspool   127G   322G   127G  /mnt/zfspool

     

    To add a share, go to Settings > SMB and add something like this to SMB Extras:

    [Backup]
          path = /mnt/zfspool/Backup
          comment =
          browseable = yes
          # Public
          writeable = yes
          read list = 
          write list = 
          valid users = 
          vfs objects =

    This worked for me; I'm new to ZFS. A few days ago I created my first USB mirror pool for backups. We'll see how it goes...
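
    To double-check that the SMB Extras block is syntactically valid before browsing to the share, Samba's own config checker can be used (a sketch; it validates syntax only, not permissions):

    # parse the combined smb configuration and dump the resulting share definitions
    testparm -s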

  12. 8 minutes ago, Squid said:

    No idea about what you're talking about

    I need to limit the ZFS ARC cache size.

    I'm trying to do it as described in the first post, by adding the line to the go file, but it doesn't work =(

    On 9/21/2015 at 2:03 AM, steini84 said:

    limit the ARC to 8GB with these two lines in my go file:

     

    
    #Adjusting ARC memory usage (limit 8GB)
    echo 8589934592 >> /sys/module/zfs/parameters/zfs_arc_max
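
    A quick way to verify whether the limit actually took effect after boot (a sketch; a value of 0 means the module default is still in use):

    # show the current ARC size cap in bytes
    cat /sys/module/zfs/parameters/zfs_arc_max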