Grobalt
Posts posted by Grobalt
-
Is there a new, easier way in the meantime? I have around 2000 movies, managed with TMM, and all the small files are in the movie folders. It would be nice to have a fast Kodi setup that can access the small files on SSD instead of spinning up the hard drives.
-
Is this still the correct one? I cannot see it in the Community Apps, and the link to My Repository: https://github.com/SmartPhoneLover/unraid-docker-templates is dead as well.
-
No, they don't draw more power, and most of the time they are spun down. The claim that spin-up/spin-down cycles cause drive problems is a myth: I have customers with more than 30k disks in a "zero watt storage" design, where disks are used like tape, without any problems.
-
Yes, parity will be rebuilt after all files are written; I upgraded all disks and have another copy of the files. But an additional NIC would mean more power draw, so that is not the best option. And the write cache being off is the default on enterprise data disks, therefore I enable it in the go.cfg.
-
As shown in post 1 - both are Unraid, both are running the same version, and I copy via mc in a PuTTY session.
-
Adding the NFS share on the Main page and clicking on mount :)
-
/etc/nfsmount.conf
# Protocol Version [3,4]
# This defines the default protocol version which will
# be used to start the negotiation with the server.
Defaultvers=4

then restarted the NFS service
-
I have 2 Unraid systems which I am currently "cloning", copying 40 TB of large files from one to the other. The default is v3, but I changed to NFS v4 and restarted the NFS server as well -> it behaves the same.
When I start the NFS service it begins copying at around 180-200 MB/s over 2.5 Gbit Ethernet. After about 12 hours it is down to around 80-100 MB/s. Stopping and starting the NFS service again restores the original speed.
Any idea? I can find some Google results with the same problem but no solution.
- NFS simply gradually slows down over time
- NFSv4.1 mount is extremely slow until remount
- NFS low performance after some activity
- Severe NFS performance degradation after LP #2003053
- https://access.redhat.com/solutions/6271621 (I cannot read the solution)
Both running 6.12.3
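Until the root cause is found, a workaround sketch based on the observation above that a restart restores full speed (the init-script path is my assumption from Unraid being Slackware-based, not a verified fix):

```shell
# Workaround sketch: bounce the NFS daemon when throughput degrades.
# Unraid is Slackware-based; the rc.nfsd path is assumed, verify on your box.
/etc/rc.d/rc.nfsd restart

# Or schedule it nightly via cron (crontab -e), e.g. at 04:00:
# 0 4 * * * /etc/rc.d/rc.nfsd restart
```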
-
DVB-S2 card for TVheadend, located in Germany, shipping within the EU.
90 euro + shipping
-
After a week with the array on compressed ZFS ... I am copying everything back to XFS. Huge CPU load, one core at 100% almost all the time, and the copy speed is below 50 MB/s with 18 TB enterprise SATA spindles (with both write methods).
-
I only have one Unraid server, and given the structure (in terms of file sizes) and importance of my data, one parity disk is actually enough for me (since I don't mind losing media files). Still, there is of course also paperwork, an e-mail archive etc., which is only a few gigabytes -> for that I would like double parity or ... at least a copy on a second disk.
I was thinking of a share that lives on a single disk, to which a daily backup from the actual large array is made. Which container would be the easiest way to implement this? I simply don't want to waste a second 18 TB parity disk, or retrofit a controller for additional SATA ports etc., just for 5 gigabytes.
-
Hi,
I have a B660M Mortar DDR4 (without WiFi) for sale; it has been lying around here for a while and was never installed.
-
Did you manage to get C10 states on that board?
-
On 4/30/2023 at 2:44 PM, mgutt said:
I will also open a support request with Gigabyte about this.
Hi, may I ask what came out of that? I have to upgrade, am bringing a server back home, and then need to manage 24 HDDs reasonably efficiently in the new server.
-
So the cause of the wrong contents was found -> you can also see it above in the blue screenshot. The NFS server had 2 identical FSIDs and therefore showed the content of the first one. Whenever I created a new share, a duplicate was used again. Why and how ... no idea.
I then disabled NFS on all shares, rebooted, and re-enabled NFS everywhere; after that it was fine.
-
Now it has messed up the system rights ...
Shares with the same name as an old share get the old fsid when recreated -> duplicate.
So I used new names, but the new shares are not listed under Users, and I cannot set the rights for them.
-
I edited /boot/config/shares/ron.cfg to fsid 104 and restarted the array - that seems to work.
But then I created another new share, just for testing, and enabled NFS export -> it again received a duplicate, already-used fsid!
-
I did edit it to 104 before the reboot, but after the reboot it had the old fsid again. Let me do it again to validate, after some hours of sleep. Thank you.
-
root@unraid:/etc# cat exports
# See exports(5) for a description.
# This file contains a list of all directories exported to other computers.
# It is used by rpc.nfsd and rpc.mountd.
"/mnt/user/aufnahmen" -fsid=100,async,no_subtree_check
"/mnt/user/download" -fsid=101,async,no_subtree_check
"/mnt/user/filme" -fsid=102,async,no_subtree_check *(ro,sec=sys,insecure,anongid=100,anonuid=99,all_squash)
"/mnt/user/ron" -fsid=102,async,no_subtree_check *(ro,sec=sys,insecure,anongid=100,anonuid=99,all_squash)
"/mnt/user/serien" -fsid=103,async,no_subtree_check *(ro,sec=sys,insecure,anongid=100,anonuid=99,all_squash)
Reboot did not help, still a duplicate fsid.
-
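For anyone hitting the same thing, a quick way to spot repeated fsids in the exports file (a small helper of my own, not an official tool) is to let uniq -d report duplicate values:

```shell
# Print any fsid value that appears more than once in an exports file.
# With the listing above this would report "fsid=102" (used by both filme and ron).
dup_fsids() {
  grep -o 'fsid=[0-9]*' "$1" | sort | uniq -d
}

# Usage: dup_fsids /etc/exports
```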
37 minutes ago, itimpi said:
I am not an expert on this, but should the exports not have a unique value for the -fsid option for each share? The ones that you are saying are going wrong are where they have the same fsid value as another entry.
Where can I see this and how do I change it? In the GUI for the share I don't see any fsid to set.
-
Not as far as I know?
root@unraid:~# cat /proc/mounts
rootfs / rootfs rw,size=8012476k,nr_inodes=2003119,inode64 0 0
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
tmpfs /run tmpfs rw,nosuid,nodev,noexec,relatime,size=32768k,mode=755,inode64 0 0
/dev/sda1 /boot vfat rw,noatime,nodiratime,fmask=0177,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,flush,errors=remount-ro 0 0
/dev/loop0 /lib/firmware squashfs ro,relatime,errors=continue 0 0
overlay /lib/firmware overlay rw,relatime,lowerdir=/lib/firmware,upperdir=/var/local/overlay/lib/firmware,workdir=/var/local/overlay-work/lib/firmware 0 0
/dev/loop1 /lib/modules squashfs ro,relatime,errors=continue 0 0
overlay /lib/modules overlay rw,relatime,lowerdir=/lib/modules,upperdir=/var/local/overlay/lib/modules,workdir=/var/local/overlay-work/lib/modules 0 0
hugetlbfs /hugetlbfs hugetlbfs rw,relatime,pagesize=2M 0 0
devtmpfs /dev devtmpfs rw,relatime,size=8192k,nr_inodes=2003121,mode=755,inode64 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime,inode64 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
cgroup_root /sys/fs/cgroup tmpfs rw,relatime,size=8192k,mode=755,inode64 0 0
cpuset /sys/fs/cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cpu /sys/fs/cgroup/cpu cgroup rw,relatime,cpu 0 0
cpuacct /sys/fs/cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
blkio /sys/fs/cgroup/blkio cgroup rw,relatime,blkio 0 0
memory /sys/fs/cgroup/memory cgroup rw,relatime,memory 0 0
devices /sys/fs/cgroup/devices cgroup rw,relatime,devices 0 0
freezer /sys/fs/cgroup/freezer cgroup rw,relatime,freezer 0 0
net_cls /sys/fs/cgroup/net_cls cgroup rw,relatime,net_cls 0 0
perf_event /sys/fs/cgroup/perf_event cgroup rw,relatime,perf_event 0 0
net_prio /sys/fs/cgroup/net_prio cgroup rw,relatime,net_prio 0 0
hugetlb /sys/fs/cgroup/hugetlb cgroup rw,relatime,hugetlb 0 0
pids /sys/fs/cgroup/pids cgroup rw,relatime,pids 0 0
tmpfs /var/log tmpfs rw,relatime,size=131072k,mode=755,inode64 0 0
efivarfs /sys/firmware/efi/efivars efivarfs rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/elogind cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib64/elogind/elogind-cgroups-agent,name=elogind 0 0
rootfs /mnt rootfs rw,size=8012476k,nr_inodes=2003119,inode64 0 0
tmpfs /mnt/disks tmpfs rw,relatime,size=1024k,inode64 0 0
tmpfs /mnt/remotes tmpfs rw,relatime,size=1024k,inode64 0 0
tmpfs /mnt/rootshare tmpfs rw,relatime,size=1024k,inode64 0 0
nfsd /proc/fs/nfs nfsd rw,relatime 0 0
nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
/dev/md1 /mnt/disk1 xfs rw,noatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/md2 /mnt/disk2 xfs rw,noatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/md3 /mnt/disk3 xfs rw,noatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/md4 /mnt/disk4 xfs rw,noatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/nvme0n1p1 /mnt/cache btrfs rw,noatime,ssd,space_cache=v2,subvolid=5,subvol=/ 0 0
shfs /mnt/user0 fuse.shfs rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
shfs /mnt/user fuse.shfs rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
/dev/loop2 /var/lib/docker btrfs rw,noatime,ssd,space_cache=v2,subvolid=5,subvol=/ 0 0
/dev/loop2 /var/lib/docker/btrfs btrfs rw,noatime,ssd,space_cache=v2,subvolid=5,subvol=/ 0 0
nsfs /run/docker/netns/default nsfs rw 0 0
/dev/loop3 /etc/libvirt btrfs rw,noatime,ssd,space_cache=v2,subvolid=5,subvol=/ 0 0
nsfs /run/docker/netns/1eb3ed9661cc nsfs rw 0 0
nsfs /run/docker/netns/180fd72a9eea nsfs rw 0 0
nsfs /run/docker/netns/3c46190792cc nsfs rw 0 0
nsfs /run/docker/netns/00704f97f3c1 nsfs rw 0 0
10.253.1.1:/mnt/user/ron /mnt/remotes/ron nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.253.1.1,mountvers=3,mountport=57223,mountproto=udp,local_lock=none,addr=10.253.1.1 0 0
tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=1616640k,nr_inodes=404160,mode=700,inode64 0 0
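The long /proc/mounts dump is easy to misread; for the NFS-version question the interesting bit is the vers= option of the nfs entries. A small helper of my own (a sketch, not an Unraid tool) to pull just that out:

```shell
# Show the mount point and negotiated NFS protocol version for each NFS mount.
# Field 3 of /proc/mounts is the fs type, field 4 the comma-separated options.
nfs_versions() {
  awk '$3 == "nfs" || $3 == "nfs4" {
    n = split($4, opts, ",")
    for (i = 1; i <= n; i++)
      if (opts[i] ~ /^vers=/) print $2, opts[i]
  }' "${1:-/proc/mounts}"
}

# Usage: nfs_versions
# On the listing above this reports /mnt/remotes/ron as vers=3.
```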
-
I posted this in the German board, but I have to ask here after a few days:
In short - I created a share, mounted this share from another computer (or, for testing, via NFS locally), and it shows the content of another share. I did this with 2 shares and it was reproducible -> they showed the content of 2 old shares. Totally crazy and I have no clue.
The new shares are "ron" and "scraping" -> ron is showing the content of "aufnahmen" and scraping is showing the content of "download".
[SUPPORT] SmartPhoneLover - tinyMediaManager
in Docker Containers
Posted
A bit strange: I have not changed anything on my system for many weeks, and suddenly TMM is not working any more.
Logfile:
Fontconfig error: No writable cache directories
2024-01-29 22:02:52,046 INFO success: x11 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-01-29 22:02:52,046 INFO success: openbox entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-01-29 22:02:52,046 INFO success: websockify entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-01-29 22:02:52,046 INFO success: app entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Fontconfig error: No writable cache directories
Fontconfig error: No writable cache directories
Fontconfig error: No writable cache directories
Fontconfig error: No writable cache directories