Posts posted by vanes
-
Hi guys, need some help.
I set up zfs-zed using this post
Zed is working fine, but now my syslog is full of this:
Mar 6 07:41:54 unRaid zed[3601]: Invoking "all-syslog.sh" eid=82 pid=25649
Mar 6 07:41:54 unRaid zed: eid=82 class=config_sync pool_guid=0xD30AFB4571F3B450
Mar 6 07:41:54 unRaid zed[3601]: Finished "all-syslog.sh" eid=82 pid=25649 exit=0
Mar 6 07:46:57 unRaid zed[3601]: Invoking "all-syslog.sh" eid=83 pid=29036
Mar 6 07:46:57 unRaid zed: eid=83 class=config_sync pool_guid=0xD30AFB4571F3B450
Mar 6 07:46:57 unRaid zed[3601]: Finished "all-syslog.sh" eid=83 pid=29036 exit=0
Mar 6 07:52:00 unRaid zed[3601]: Invoking "all-syslog.sh" eid=84 pid=32653
Mar 6 07:52:00 unRaid zed: eid=84 class=config_sync pool_guid=0xD30AFB4571F3B450
Mar 6 07:52:00 unRaid zed[3601]: Finished "all-syslog.sh" eid=84 pid=32653 exit=0
Mar 6 07:57:03 unRaid zed[3601]: Invoking "all-syslog.sh" eid=85 pid=4478
Mar 6 07:57:03 unRaid zed: eid=85 class=config_sync pool_guid=0xD30AFB4571F3B450
Mar 6 07:57:03 unRaid zed[3601]: Finished "all-syslog.sh" eid=85 pid=4478 exit=0
Mar 6 08:02:05 unRaid zed[3601]: Invoking "all-syslog.sh" eid=86 pid=8041
Mar 6 08:02:05 unRaid zed: eid=86 class=config_sync pool_guid=0xD30AFB4571F3B450
Mar 6 08:02:05 unRaid zed[3601]: Finished "all-syslog.sh" eid=86 pid=8041 exit=0
Mar 6 08:07:08 unRaid zed[3601]: Invoking "all-syslog.sh" eid=87 pid=10773
Mar 6 08:07:08 unRaid zed: eid=87 class=config_sync pool_guid=0xD30AFB4571F3B450
Mar 6 08:07:08 unRaid zed[3601]: Finished "all-syslog.sh" eid=87 pid=10773 exit=0
Mar 6 08:12:11 unRaid zed[3601]: Invoking "all-syslog.sh" eid=88 pid=13372
Mar 6 08:12:11 unRaid zed: eid=88 class=config_sync pool_guid=0xD30AFB4571F3B450
Mar 6 08:12:11 unRaid zed[3601]: Finished "all-syslog.sh" eid=88 pid=13372 exit=0
Please help me stop zed spamming the syslog.
-
@Jcloud, could you tell us about v3 protocol support in this container?
https://storj.io/blog/2018/10/introducing-the-storj-v3-white-paper/
-
@jang430 you don't need new hardware.
Your CPU has Intel® HD Graphics P630: https://ark.intel.com/products/97476/Intel-Xeon-Processor-E3-1225-v6-8M-Cache-3-30-GHz-
It supports 10-bit H.265 4K.
You just need to set up Unraid and Plex/Emby to use it.
The Xeon Silver 4108 is ~30% faster but has no iGPU.
Did you set up iGPU transcoding, or do you not use the HD Graphics P630 at all?
-
On 10/1/2018 at 1:49 AM, Joeyleigh said:
Anyone know how i can set local scan/container scan of file system used by nextcloud to update any files i have manually added into smb/explorer?
Run these inside the Nextcloud docker container:
cd /config/www/nextcloud/
sudo -u abc php7 occ files:scan --all
-
I did another parity check; it finished with no errors. Everything works fine. I'll keep watching the log.
-
What kind of tests should I do? Reconnect the cable? Replace the cable?
-
Hi guys, I need some help. A few days ago I precleared and then added an old Hitachi drive to my array, and now I see some errors during parity check. Parity is fine, files are fine...
Aug 13 00:32:49 unRaid kernel: ata6.00: exception Emask 0x0 SAct 0x8000000 SErr 0x0 action 0x6 frozen
Aug 13 00:32:49 unRaid kernel: ata6.00: failed command: READ FPDMA QUEUED
Aug 13 00:32:49 unRaid kernel: ata6.00: cmd 60/08:d8:c0:00:00/00:00:00:00:00/40 tag 27 ncq dma 4096 in
Aug 13 00:32:49 unRaid kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Aug 13 00:32:49 unRaid kernel: ata6.00: status: { DRDY }
Aug 13 00:32:49 unRaid kernel: ata6: hard resetting link
Aug 13 00:32:49 unRaid kernel: ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Aug 13 00:32:49 unRaid kernel: ata6.00: configured for UDMA/133
Aug 13 00:32:49 unRaid kernel: ata6: EH complete
Aug 13 00:37:51 unRaid kernel: ata6.00: exception Emask 0x0 SAct 0x400008 SErr 0x0 action 0x6 frozen
Aug 13 00:37:51 unRaid kernel: ata6.00: failed command: READ FPDMA QUEUED
Aug 13 00:37:51 unRaid kernel: ata6.00: cmd 60/00:18:78:22:79/02:00:07:00:00/40 tag 3 ncq dma 262144 in
Aug 13 00:37:51 unRaid kernel: res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Aug 13 00:37:51 unRaid kernel: ata6.00: status: { DRDY }
Aug 13 00:37:51 unRaid kernel: ata6.00: failed command: READ FPDMA QUEUED
Aug 13 00:37:51 unRaid kernel: ata6.00: cmd 60/00:b0:78:20:79/02:00:07:00:00/40 tag 22 ncq dma 262144 in
Aug 13 00:37:51 unRaid kernel: res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Aug 13 00:37:51 unRaid kernel: ata6.00: status: { DRDY }
Aug 13 00:37:51 unRaid kernel: ata6: hard resetting link
Aug 13 00:37:52 unRaid kernel: ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Aug 13 00:37:52 unRaid kernel: ata6.00: configured for UDMA/133
Aug 13 00:37:52 unRaid kernel: ata6: EH complete
Aug 13 00:53:14 unRaid kernel: ata6.00: exception Emask 0x0 SAct 0x4 SErr 0x0 action 0x6 frozen
Aug 13 00:53:14 unRaid kernel: ata6.00: failed command: READ FPDMA QUEUED
Aug 13 00:53:14 unRaid kernel: ata6.00: cmd 60/00:10:98:44:2e/02:00:02:00:00/40 tag 2 ncq dma 262144 in
Aug 13 00:53:14 unRaid kernel: res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Aug 13 00:53:14 unRaid kernel: ata6.00: status: { DRDY }
Aug 13 00:53:14 unRaid kernel: ata6: hard resetting link
Aug 13 00:53:15 unRaid kernel: ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Aug 13 00:53:15 unRaid kernel: ata6.00: configured for UDMA/133
Aug 13 00:53:15 unRaid kernel: ata6: EH complete
Aug 13 00:56:17 unRaid kernel: ata6.00: exception Emask 0x0 SAct 0x800 SErr 0x0 action 0x6 frozen
Aug 13 00:56:17 unRaid kernel: ata6.00: failed command: READ FPDMA QUEUED
Aug 13 00:56:17 unRaid kernel: ata6.00: cmd 60/00:58:f0:7b:60/02:00:02:00:00/40 tag 11 ncq dma 262144 in
Aug 13 00:56:17 unRaid kernel: res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Aug 13 00:56:17 unRaid kernel: ata6.00: status: { DRDY }
Aug 13 00:56:17 unRaid kernel: ata6: hard resetting link
Aug 13 00:56:18 unRaid kernel: ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Aug 13 00:56:18 unRaid kernel: ata6.00: configured for UDMA/133
Aug 13 00:56:18 unRaid kernel: ata6: EH complete
Do I need to worry?
-
What board do you have?
Did you try this?
If you don't need i915, try editing your syslinux append line like this:
append nomodeset initrd=/bzroot
-
i915 is not loaded by default!
To activate it, add this to the go file:
On 4/1/2018 at 8:58 AM, dmacias said:
#enable module for iGPU and perms for the render device
modprobe i915
chown -R nobody:users /dev/dri
chmod -R 777 /dev/dri
If I turn i915 off, I can't use Intel HD graphics for Plex/Emby hardware transcoding.
Sorry for my English.
-
5 minutes ago, witalit said:
Well I have a slightly different board and yes I get the same issue. When booting into unRAID screen is blank and the fix posted on other threads does not work.
I see most of the Unraid boot process until the i915 driver is loaded. The screen goes black near the end of the boot process...
If I don't load i915 (modprobe i915), everything works fine.
-
56 minutes ago, witalit said:
Does IPMI work for you on this board at all?
Yes, it works. It works until the i915 driver loads, then the screen goes black.
Latest BMC and BIOS:
BIOS 2.60 (5/17/2018), BMC 07.12.00 (11/15/2017)
-
Will a user script help? Can it work together with vfs objects = recycle?
I have the Recycle Bin plugin installed; will that affect it?
Are both of these lines OK together?
vfs objects = recycle
vfs objects = btrfs
PS: Or should they be combined on one line?
vfs objects = recycle btrfs
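Update note, based on standard Samba config handling (worth double-checking for the Unraid build): when a parameter is repeated within a share section, the last occurrence wins, so two separate vfs objects lines would not stack. The modules are normally listed together on one line, something like:

```ini
[sharename]
   # one line; a second "vfs objects =" line would replace this, not extend it
   vfs objects = recycle btrfs
```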
-
Very interesting! Please tell me how to implement this on the current version.
-
1 hour ago, 1812 said:
How much ram does it require?
I am no ZFS guru; my first ZFS pool was created a few weeks ago. I use 2 USB HDD drives in a mirror (RAID1) as a ZFS backup pool; rsync, via User Scripts, copies important data there once a week. I limited the ARC cache to 2 GB of memory using User Scripts. In total my system has 8 GB of RAM.
-
3 hours ago, 1812 said:
outside of my unRaid array, to which I wish to use for backups of my arra
You can try using the ZFS plugin to create a ZFS pool for backups, then share it using the SMB extras config.
-
I recently bought an ASRock E3C236D2I and I can confirm that it allows using the Intel video core, but only for video encoding/decoding, not for display output....
-
@jang430 open a terminal, type mc, then go to the /dev/dri folder. Do you see renderD128?
-
Take a look at this. Double-check all BIOS settings, nomodeset, the go file, etc.
-
1 hour ago, jang430 said:
Are you using official embyserver docker?
Yes
1 hour ago, jang430 said:
What needs to be in the go file?
my go file is:
#!/bin/bash
#enable module for iGPU and perms for the render device
modprobe i915
chmod -R 777 /dev/dri
# Start the Management Utility
/usr/local/sbin/emhttp &
container Extra-Parameters:
-
37 minutes ago, comet424 said:
I couldn't get the File Options on top I thought ALT would do but nope
MC's full features don't work in the browser terminal. Try an SSH client like PuTTY.
To share my Backup folder on my pool (mounted at /mnt/zfspool), I added this to Settings > SMB > SMB Extras:
[Backup]
path = /mnt/zfspool/Backup
comment =
browseable = yes
# Public
writeable = yes
read list =
write list =
valid users =
vfs objects =
-
@comet424 go to the terminal, then type mc; you can see your pool there if it is mounted.
To see the mountpoint, use the "zfs list" command:
root@unRaid:~# zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
zfspool   127G   322G   127G  /mnt/zfspool
to add share go to Settings - SMB and add something like this to SMB Extras
[Backup]
path = /mnt/zfspool/Backup
comment =
browseable = yes
# Public
writeable = yes
read list =
write list =
valid users =
vfs objects =
This worked for me; I am new to ZFS. A few days ago I created my first USB mirror pool for backups. We'll see how it goes....
-
45 minutes ago, Squid said:
user scripts should do the trick
user script worked! Thanks!
-
-
8 minutes ago, Squid said:
No idea about what you're talking about
I need to limit the ZFS ARC cache size.
I'm trying to do it as written in the first post, by adding a line to the go file, but it doesn't work =(
On 9/21/2015 at 2:03 AM, steini84 said:
limit the ARC to 8GB with these two lines in my go file:
#Adjusting ARC memory usage (limit 8GB)
echo 8589934592 >> /sys/module/zfs/parameters/zfs_arc_max
Useful information about Unraid OS
in Russian / Russian
Posted
Great, keep it up!