Faceman

  1. Devs have said that 6.13 public testing should be ready to start soon (likely within another week or two unless a major bug is found); that will be the first version to bump to the newer kernel with built-in ARC support. It will likely be in testing for quite a while, as it's going to be a big kernel version jump, and this little patch update shows us that kernel upgrades can have all sorts of weird minor edge cases across the mess of mixed hardware people use.
  2. I'm having trouble getting MakeMKV to work with a physical drive for the first time. I've used it many times for ISO files, BDMV folders and the like. My error is a timeout of some sort, which is odd, as I know the drive works fine and the SATA controller is good; I've used them both recently. The error I'm getting is: Error 'Scsi error - HARDWARE ERROR:TIMEOUT ON LOGICAL UNIT' occurred while reading 'BD-RE ASUS BW-16D1HT 3.02 KETI5GI0938' at offset '65536'. I get this regardless of the container I use, so I'm worried something has somehow killed my drive. I did get it to show the titles on a disc once using the other container, but then it didn't start when I tried to rip; it just sat there and finally timed out. EDIT: OK, odd. I just went out to the rack, force-ejected the disc, then put it back in, and now everything works perfectly. Must have been jammed in a busy state of some kind. (A force-eject sketch is below, after this list.)
  3. How did I miss this? I've been running FMD in a full-fat Windows VM like some kind of chump.
  4. Per-share settings are a huge upgrade; so happy to see that's now possible.
  5. I second this; more mover control would be huge, and with the changes due in 6.13 freeing up more options for multiple pools mapping each other as primary/secondary, we will be building more complex caching arrangements, so more control of the mover will be sorely needed once that feature is mature. I currently use the Mover Tuning plugin to more intelligently keep files on cache based on age, only doing a full dump move when the disk is dangerously full (a sketch of that kind of policy is below, after this list). Having this kind of control on a PER SHARE basis would be massive. I'd also like to see some form of read cache implemented: for example, a dedicated pool/disk, or a percentage of an existing cache pool, allocated to a smart read cache that pulls files in based on some criteria. That would be very cool.
  6. Haven't gotten too far, but haven't done much testing either. The connection OUT of the docker network seems OK: I can access a locally hosted OpenSpeedTest and pull a 2Gbit+ download speed on a remote computer, though that's over a 10GbE connection, so there's still a bottleneck there. The other direction, IN to the docker network, varies between 500 and 800Mbit in these tests, which seems very slow. CPU usage isn't too bad. Previously my server ran on a pair of ancient low-power Xeon 2450L processors at 1.8GHz and could basically saturate 10GbE connections from the docker network; now I'm on a 4GHz 6700K with the same NIC and struggling to hit half a gig download, whether on the local LAN or from the web. Definitely seems off. I've also tested a VM, and it behaved the same as a real machine on the network, with very similar throughput numbers; it is on the same bridge network. Will try testing a dedicated NIC and removing the secondary bridge I have set up for a single container next. (A bare-bones socket throughput test is sketched below, after this list.)
  7. Don't buy anything labelled CAT7 or CAT8 on Amazon; they are all fake. Buy CAT6A (not CAT6E, which doesn't exist) UTP patch cables from a respected data equipment supplier. If the cable is "flat" it would barely meet Cat5e specs. All that aside, the cables are not your problem; the issue is a bottleneck somewhere in the chain. Has anyone ever done a proper throughput test on that switch? Can it actually switch 10GbE at full speed through those ports? Is it one 10GbE channel on the switch chip that they're splitting for convenience? Is there a CPU bottleneck somewhere, or a PCIe lane limitation? Have you tried a direct connection between the machines?
  8. If you run some traffic between those two 10G ports, do any other lights flash on the switch? Perhaps something funny with the routing is forcing traffic to flow through a 2.5G connection? Is your local traffic going from 10GbE to the switch, then 2.5GbE to the router and back, then 10GbE to the other end? If you disconnect every other device (and any secondary NICs) and just run server to Windows through that switch, does it still cap out at 2.5? Cables shouldn't be an issue: you can run 10GBASE-T over UTP Cat6 if the distances aren't too long, and 2.5GbE over UTP Cat5e is just fine for short hops too; those sorts of issues aren't going to cut your throughput that far, and a badly grounded FTP cable can cause more problems than that.
  9. Did you ever find a way around this? I'm finding a very similar issue where all networking is capped to about 500Mbit whenever Docker is running. Even if I put a container on host, its throughput is capped; if I set up a separate bridge on a different NIC, it is still capped; if I speedtest on a VM on that bridge, it is capped. If I disable Docker, it runs at line speed; it used to be much faster. Perhaps ipvlan vs. macvlan? I haven't tried comparing that yet. (A quick way to list which driver each Docker network uses is sketched below, after this list.)
  10. Just chiming in as I might be having a similar issue. I recently upgraded from 100Mbit to 1Gig internet, and I'm seeing my Unraid networking max out at ~400-500Mbit, and that includes an internal test from OpenSpeedTest on HOST to a VM. SAB caps out at around 55MB/s, torrents hit the same sort of limit, and if I hit them all at once, I still only see about 500Mbit maximum total throughput. Still tinkering, as it's a brand-new connection and I haven't done any real tuning yet, and I know my old CX2 10G card is running on a bottlenecked 5GT/s PCIe bus at the moment, but that should be more than enough to service a 1G internet connection. Modem to router is a solid 1G, and the router-to-switch-to-server connection is all 10G. An internet speed test on the router exceeds 900Mbit, so that's fine. EDIT: Some testing: if I disable Docker entirely from the settings, the network speeds skyrocket; if I enable Docker with any settings, it collapses down to 500Mbit, and not just the WAN speed, the local LAN access drops too. Might need to rebuild my whole docker config from scratch; something's wrong here. After some tweaking, running the OpenSpeedTest container and accessing it from across the network, I see a full-speed download (docker br0 upstream) and a 500Mbit upload (docker br0 download), so there's something bottlenecking the download speeds on the bridge network, but running on host or br1 (a separate NIC) or any other test doesn't seem to change the behavior.
  11. Without a parity disk you can't do the standard disk-swap procedure, as there is no way to rebuild the data onto the new disk. You definitely need to go the New Config route to get it to forget that 40GB disk ever existed, without having to replace it with a new disk. Alternatively, add a parity disk now, let it build, and then you will be able to replace the 40GB disk with a proper one.
  12. In that case I think I'll keep running it, just on a limited library, with --memory='8g' for now, since it isn't hurting anything if it does crash the container. At least with that memory limit it doesn't crash the whole server. (The same cap via the docker SDK is sketched below, after this list.)
  13. That seems like it could do it. I have tens of thousands of files, and it is keeping track of every one of them (I guess to avoid re-processing files through the same flow every scan?). I can't see how that would grow to several gigs of RAM in just a couple of days, though; maybe a couple of hundred megs? How much data is cached for each library entry? (Some back-of-envelope numbers are below, after this list.)
  14. No HDR-to-SDR is being done; 90% of what FileFlows is doing through the day is adding an audio track based on simple rules. Sometimes Plex forces a video transcode when only the audio needs transcoding, so I add compatible tracks in FileFlows to minimize that. There is a small amount of video transcoding being done overnight, but it's just basic h264 > h265 on files around 2-3GB in size. This does seem to be what kills it, though. I'm going to read through the logs and see if it's certain files doing it or something else (a rough log-triage sketch is below, after this list).
  15. Loving this app; I now have multiple flows handling various things on my system like magic. However, I'm getting out-of-memory errors. Previously I limited the container to 4GB, but it didn't like that, so I bumped it to 6GB. That seemed fine, since it tended to hang around 2-3GB in normal use, but occasionally it will start to shoot up and quickly crash. If I remove the limit entirely, it will eventually crash my whole docker system. I suspect this is a bug rather than the app actually needing dozens of gigs of RAM. I also noticed it was generating huge logs, a gig's worth of them in the appdata folder. (A small memory-polling sketch is below, after this list.)
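
A few rough sketches for the posts above. First, the stuck optical drive in #2: a minimal Python sketch of a software force-eject, roughly what the walk to the rack accomplished. The /dev/sr0 device node is an assumption; check lsscsi or dmesg for yours.

```python
# Force-eject an optical drive that is wedged in a busy state, using the
# standard Linux cdrom ioctls (equivalent to `eject /dev/sr0`).
import fcntl
import os

CDROM_EJECT = 0x5309          # CDROMEJECT from linux/cdrom.h
CDROM_DRIVE_STATUS = 0x5326   # CDROM_DRIVE_STATUS from linux/cdrom.h
CDSL_CURRENT = 0x7FFFFFFF     # "current slot" selector

def force_eject(device: str = "/dev/sr0") -> None:  # device path is an assumption
    # O_NONBLOCK lets us open the node even while the drive is busy.
    fd = os.open(device, os.O_RDONLY | os.O_NONBLOCK)
    try:
        status = fcntl.ioctl(fd, CDROM_DRIVE_STATUS, CDSL_CURRENT)
        print(f"drive status before eject: {status}")  # 4 == CDS_DISC_OK
        fcntl.ioctl(fd, CDROM_EJECT, 0)
    finally:
        os.close(fd)

if __name__ == "__main__":
    force_eject()
```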
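For the mover discussion in #5, a toy sketch of the age-plus-usage policy described there: keep recent files on cache, dump everything only when the pool is dangerously full. Paths and thresholds are made-up examples, not anything the Mover Tuning plugin actually exposes.

```python
# Illustrative age-based mover: move only old files, unless the cache
# pool is nearly full, in which case move everything.
import os
import shutil
import time

CACHE_ROOT = "/mnt/cache/media"   # assumed cache-side share path
ARRAY_ROOT = "/mnt/disk1/media"   # assumed array-side destination
MIN_AGE_DAYS = 14                 # files newer than this stay on cache
PANIC_USAGE = 0.90                # above this fill level, move everything

def cache_usage(path: str) -> float:
    total, used, _free = shutil.disk_usage(path)
    return used / total

def run_mover() -> None:
    panic = cache_usage(CACHE_ROOT) >= PANIC_USAGE
    cutoff = time.time() - MIN_AGE_DAYS * 86400
    for dirpath, _dirs, files in os.walk(CACHE_ROOT):
        for name in files:
            src = os.path.join(dirpath, name)
            # In a panic, move everything; otherwise only old files.
            if panic or os.path.getmtime(src) < cutoff:
                dst = os.path.join(ARRAY_ROOT, os.path.relpath(src, CACHE_ROOT))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)

if __name__ == "__main__":
    run_mover()
```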
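For the throughput testing in #6 (and the similar caps in #9 and #10), a bare-bones TCP push test that takes browsers and speed-test containers out of the loop; run the server inside a container and the client on another box, then swap ends to test the other direction. Port and transfer sizes are arbitrary.

```python
# Minimal one-way TCP throughput test.
# Usage: `python tput.py server` on one end, `python tput.py <host>` on the other.
import socket
import sys
import time

PORT = 5201
CHUNK = 1024 * 1024   # 1 MiB per send
TOTAL = 2 * 1024**3   # push 2 GiB per run

def server() -> None:
    with socket.create_server(("0.0.0.0", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received = 0
            start = time.monotonic()
            while chunk := conn.recv(CHUNK):
                received += len(chunk)
            secs = time.monotonic() - start
            print(f"{received * 8 / secs / 1e6:.0f} Mbit/s from {addr}")

def client(host: str) -> None:
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        while sent < TOTAL:
            conn.sendall(payload)
            sent += CHUNK

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[1])
```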
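For the ipvlan-vs-macvlan question in #9, a quick way to see which driver each Docker network is actually using, via the docker SDK for Python (`pip install docker`); assumes the docker socket is reachable from wherever this runs.

```python
# List every docker network with its driver (bridge, macvlan, ipvlan, ...)
# and configured subnets.
import docker

client = docker.from_env()
for net in client.networks.list():
    driver = net.attrs.get("Driver", "?")
    subnets = [c.get("Subnet") for c in net.attrs["IPAM"]["Config"] or []]
    print(f"{net.name:20} driver={driver:10} subnets={subnets}")
```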
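The --memory='8g' cap from #12, expressed through the same SDK, so the container OOMs alone instead of taking the host with it. The image name and mount paths here are placeholders, not the app's documented settings.

```python
# Start a container with a hard 8 GiB memory cap (same effect as --memory='8g').
import docker

client = docker.from_env()
container = client.containers.run(
    "revenz/fileflows",   # assumed image name; substitute your own
    detach=True,
    mem_limit="8g",       # hard cap, equivalent to --memory='8g'
    memswap_limit="8g",   # no extra swap headroom beyond the cap
    volumes={"/mnt/user/appdata/fileflows":   # assumed host path
             {"bind": "/app/Data", "mode": "rw"}},  # assumed container path
)
print(container.short_id)
```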
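Some back-of-envelope arithmetic for #13; the 50,000-file count is a guess standing in for "tens of thousands".

```python
# How much state per tracked file would it take to reach multiple GiB?
files = 50_000

for per_entry_kib in (1, 4, 80):
    total_mib = files * per_entry_kib / 1024
    print(f"{per_entry_kib:>3} KiB/entry -> {total_mib:,.0f} MiB total")

# ~4 KiB of metadata per file lands around 200 MiB, i.e. the "couple of
# hundred megs" estimate. Reaching ~4 GiB needs ~80 KiB per entry, which
# looks more like a leak than legitimate bookkeeping.
```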
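For the log read-through planned in #14, a rough triage sketch that counts which media files appear just before error lines, to spot whether the same inputs keep killing it. The log path and line format are pure assumptions; adjust the markers to whatever the real logs contain.

```python
# Count which file paths precede error markers across the app's log files.
import collections
import pathlib
import re

LOG_DIR = pathlib.Path("/mnt/user/appdata/fileflows/Logs")  # assumed path
ERROR_MARKERS = ("outofmemory", "error", "failed")          # assumed markers

hits = collections.Counter()
for log in LOG_DIR.glob("*.log"):
    last_file = None
    for line in log.read_text(errors="replace").splitlines():
        m = re.search(r"(/[\w./ -]+\.(?:mkv|mp4|avi))", line)
        if m:
            last_file = m.group(1)  # remember the most recent file mentioned
        low = line.lower()
        if last_file and any(marker in low for marker in ERROR_MARKERS):
            hits[last_file] += 1

for path, count in hits.most_common(10):
    print(f"{count:4}  {path}")
```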
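And for the runaway memory in #15, a small poller that prints the container's usage every 30 seconds, so the spike gets caught in the act rather than discovered after the crash. The container name is a placeholder; uses the docker SDK for Python again.

```python
# Poll a container's memory usage via the docker stats API.
import time

import docker

client = docker.from_env()
container = client.containers.get("fileflows")  # assumed container name

while True:  # Ctrl-C to stop
    stats = container.stats(stream=False)
    used = stats["memory_stats"]["usage"] / 1024**2
    limit = stats["memory_stats"]["limit"] / 1024**2
    print(f"{time.strftime('%H:%M:%S')}  {used:,.0f} / {limit:,.0f} MiB")
    time.sleep(30)
```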