potatows's Posts
  1. For some reason the locale info doesn't get passed to youtube-dl unless you specify it with env LC_ALL=en_US.UTF-8. When that happens, youtube-dl defaults to --restrict-filenames and replaces spaces with _. The locale does get passed when you run the command from a terminal, though.
  2. I tried just now and the same result occurred. I looked at the logs and noticed this: "WARNING: Assuming --restrict-filenames since file system encoding cannot encode all characters. Set the LC_ALL environment variable to fix this." I am not sure what I should be setting it to, though.

     EDIT: Setting the LC_ALL variable for the command fixed it. For posterity, I fixed this by changing the script to:

     cd /mnt/user/YouTube/
     env LC_ALL=en_US.UTF-8 python3 youtube-dl --config-location /mnt/user/YouTube/youtube-dl.conf
  3. I am having some trouble with getting "_" instead of spaces in filenames when running a script through User Scripts. The script is:

     cd /mnt/user/YouTube/
     python3 youtube-dl --config-location youtube-dl.conf

     When I instead run a run.sh file from the Unraid terminal in the /mnt/user/YouTube/ directory, containing just python3 youtube-dl --config-location youtube-dl.conf, youtube-dl correctly creates filenames with spaces. Anyone know how I could fix this?
  4. No, it's okay, as long as you can still do it with some flag. In fact, having to set a single sfp=1 to enable it across all adapters is much easier than having to add a ,1 for each one. It was just difficult figuring out which flag to use after the update, since the documentation on these flags is not super clear and varies by driver.
  5. On releases prior to 6.6.0 you could allow unsupported SFP modules on Intel NICs by adding ixgbe.allow_unsupported_sfp=1,1,1,1 to the kernel boot config. Is there a new way to enable unsupported SFP modules?

     EDIT: For posterity and future Google searchers, here is how you solve this.

     On Unraid before version 6.6.0, use:
     append initrd=/bzroot ixgbe.allow_unsupported_sfp=1,1,1,1
     with a ,1 for every additional SFP NIC on which you need to enable unsupported SFP modules.

     For Unraid 6.6.0+, use:
     append initrd=/bzroot ixgbe.allow_unsupported_sfp=1
     This will unlock unsupported SFP modules for all ixgbe-driver NICs.

     I would assume this change has something to do with the newer kernel or in-tree drivers. Hopefully this saves someone else a headache.
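On Unraid the append line lives in the boot menu config on the flash drive. A sketch of where it goes, assuming the stock label name and file layout (shown for 6.6.0+; the 1,1,1,1 form applies to earlier releases):

```shell
# /boot/syslinux/syslinux.cfg excerpt -- label name and surrounding lines are
# assumptions about a stock install; only the append line is from the post.
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot ixgbe.allow_unsupported_sfp=1
```

After a reboot, /proc/cmdline should show the parameter if the kernel picked it up.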
  6. For a couple of months now I have been monitoring my drive writes with Grafana, and I have noticed that something is constantly writing to my NVMe SSD to the tune of 30GB a day. I am struggling to figure out what is causing it. At first I assumed it was the InfluxDB for Grafana, but I put that on another disk and it did not help. At this point I have moved all databases/downloading onto other drives to try and isolate the problem, and I also moved the docker image to another disk. The only notable things that exist on the NVMe drive at this point are a Plex database, the docker image, and docker config files.

     Using Grafana I pulled the docker io_service_bytes_recursive_write value and discovered that docker has only written 737MB in the past two days across all drives. I also have live monitoring of writes using field(write_bytes) Mean() non_negative_derivative(1s), which shows 1MB/s every 20-30 seconds or so. My Grafana database receives data from the server every 10s.

     So far I have tried a few command line utils, but they just show total writes across the system, which is not helpful to me. One thing I did notice while trying this was btrfs_transacti having a decent amount of writes. The only problem is that my server has 3 SSDs with btrfs and they are all used for various things.

     How can I figure out what is doing this? It has written 61GB in 44 hours, about 1.38GB per hour, which is just insane. I almost want to completely wipe the drive and the Unraid install off the USB and start with a fresh config.

     System Info: R720 XD, 960 Pro 512GB NVMe
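One way to get past system-wide totals without extra tools is to sample per-process write counters from /proc. A rough sketch, assuming a Linux /proc filesystem (the interval and output format are arbitrary choices, not from the thread); the interactive iotop utility does much the same thing if it's available:

```shell
#!/bin/bash
# Sample /proc/<pid>/io twice and print the processes that wrote the most
# bytes in between. write_bytes counts what each process caused to be sent
# to the block layer, so it catches quiet background writers.
interval=${1:-10}

snapshot() {
  for f in /proc/[0-9]*/io; do
    pid=${f#/proc/}; pid=${pid%/io}
    awk -v pid="$pid" '/^write_bytes:/ {print pid, $2}' "$f" 2>/dev/null
  done | sort
}

before=$(snapshot)
sleep "$interval"
after=$(snapshot)

# Join the two snapshots on pid; processes that exited in between drop out.
join <(printf '%s\n' "$before") <(printf '%s\n' "$after") |
  awk '$3 > $2 {printf "%-8s %12d bytes\n", $1, $3 - $2}' |
  sort -k2,2rn | head
```

This won't name which *disk* received the writes, but once a pid stands out, its open files (ls -l /proc/PID/fd) usually point at the filesystem in question.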
  7. I should mention I have not needed to do this in a long time. I moved all my downloading and extracting operations to a second SSD mounted with Unassigned Devices and I no longer have this problem. In case you are still trying to find it: From johnnie.black
  8. Wow, thanks for linking me that. I followed the instructions and it fixed all of my problems. I copied a 35GB file to the SSD and the web UI, dockers, everything didn't even slow down or freeze. I felt like something was wrong for a long time, but no matter how much I googled I couldn't find a solution. I had started to wonder if the SSD had issues.
  9. root@Potato:~# btrfs fi usage /mnt/cache
     Overall:
         Device size:          447.13GiB
         Device allocated:     447.13GiB
         Device unallocated:     1.05MiB
         Device missing:           0.00B
         Used:                 144.76GiB
         Free (estimated):     300.98GiB  (min: 300.98GiB)
         Data ratio:                1.00
         Metadata ratio:            1.00
         Global reserve:       185.56MiB  (used: 0.00B)

     Data,single: Size:445.10GiB, Used:144.11GiB
         /dev/sdc1  445.10GiB
     Metadata,single: Size:2.00GiB, Used:658.38MiB
         /dev/sdc1    2.00GiB
     System,single: Size:32.00MiB, Used:64.00KiB
         /dev/sdc1   32.00MiB
     Unallocated:
         /dev/sdc1    1.05MiB

     Ah, so unallocated is different from free space; I was getting confused between the two terms. So I guess trim is being used correctly then?
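The two figures in that output do reconcile: "Free (estimated)" is roughly (Data size − Data used) + unallocated, i.e. unused room inside already-allocated data block groups plus raw unallocated device space. A quick check of the arithmetic against the numbers quoted above:

```shell
# Free (estimated) ~= (Data size - Data used) + Unallocated
# 445.10GiB - 144.11GiB + 1.05MiB; the reported 300.98GiB agrees to within
# display rounding. (Figures taken from the btrfs fi usage output above.)
awk 'BEGIN { printf "%.2f GiB\n", 445.10 - 144.11 + 1.05/1048576 }'
```

So almost all of the "free" space here lives inside allocated data chunks, while truly unallocated space is only 1.05MiB.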
  10. /dev/sdc1, ID: 1
      Device size:      447.13GiB
      Device slack:         0.00B
      Data,single:      445.10GiB
      Metadata,single:    2.00GiB
      System,single:     32.00MiB
      Unallocated:        1.05MiB

      So unallocated is different from space being used by data, then? Because that output shows 445.10GiB of data usage, yet the drive has 343GB free right now.
  11. Plugged into a P8B WS Asus motherboard, into one of the two Intel 6Gb/s ports. Latest BIOS, set to AHCI in the BIOS.
  12. I wanted to make sure trim was working; the SSD seems to be acting a bit odd at times. For example, when copying a large file to it, the copy maxed out around 20 MB/s and the system UI locked up, dockers lagged out, etc.

      I have the trim plugin installed, which is set to run every day. I have also tried running the trim command manually, which results in:

      root@XXXXXXXX:~# fstrim -v /mnt/cache
      /mnt/cache: 56 KiB (57344 bytes) trimmed

      The automatically run trim outputs the following:

      Sep 10 05:00:01 Potato root: /etc/libvirt: 911 MiB (955297792 bytes) trimmed
      Sep 10 05:00:01 Potato root: /var/lib/docker: 12.9 GiB (13865934848 bytes) trimmed
      Sep 10 05:00:01 Potato root: /mnt/cache: 56 KiB (57344 bytes) trimmed

      I read from https://forums.lime-technology.com/topic/35921-how-to-implement-trim-on-ssd-drives/ that the bytes shown are the available space on the SSD. 56 KiB is obviously not the free space on the SSD; it is a SanDisk Ultra II 480GB with ~240GB free. So is trim working correctly? Did I set something up wrong?
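For what it's worth, fstrim -v reports the amount of space it discarded on that particular run, not the drive's free space, so a small number on a regularly trimmed filesystem is not by itself alarming. A small sketch that just sums the per-run figures from the syslog lines quoted above (my interpretation, not from the thread):

```shell
# Sum the bytes trimmed per run from the plugin's syslog lines quoted above.
# fstrim -v reports what each run discarded, not free space, so "56 KiB" on
# /mnt/cache means little was handed back to the device on that pass.
printf '%s\n' \
  'Sep 10 05:00:01 Potato root: /etc/libvirt: 911 MiB (955297792 bytes) trimmed' \
  'Sep 10 05:00:01 Potato root: /var/lib/docker: 12.9 GiB (13865934848 bytes) trimmed' \
  'Sep 10 05:00:01 Potato root: /mnt/cache: 56 KiB (57344 bytes) trimmed' |
  awk '{ for (i = 1; i <= NF; i++)
           if ($i ~ /^\([0-9]+$/) { sub(/\(/, "", $i); total += $i } }
       END { printf "%.2f GiB trimmed in this run\n", total / 1073741824 }'
```

Note too that the btrfs fi usage output quoted earlier in this thread shows the device almost fully allocated to chunks (Unallocated: 1.05MiB), which can also keep a btrfs fstrim pass small; the linked forum claim that the figure equals free space does not match fstrim's documented behaviour.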