MightyT

Members
  • Posts: 20
  • Joined

MightyT's Achievements

Noob (1/14)

Reputation: 8

  1. So yesterday I once again cleaned out all the cookies for all Unraid domains and also reset the settings in Cookie AutoDelete (a Firefox add-on I'm using), and now it seems to be stable; maybe it was just a weird edge case with my config.
  2. Any updates on this? It has been a while and I'm still deleting cookies every day; the latest update also didn't change anything.
  3. Ok, so now the other bug report, where the more recent discussion was, has been closed, and we are back here. A bunch of people have this problem and it is easy to reproduce, but 2.5 weeks after the latest release we are all still deleting cookies or using incognito mode every day. Anyone got a clue how to fix this?
  4. I just tried it again: in Firefox I cleared everything for unraid.net and myunraid.net under about:preferences → "Cookies and Site Data" → "Manage Data". That worked, just as clearing only the rxd-init cookie did. Then I reopened Firefox and got the error again.
  5. Can confirm: deleting the 'rxd-init' cookie let me connect again without deleting all cookies (a scripted version of this is sketched below this list).
  6. Ok, thanks. Anyway, I'm not complaining at all, just looking for an explanation. Like I said, I didn't lose any important data and will do the mounting differently in the future (see the mounting sketch below this list).
  7. Yeah, I mounted it directly to /firefox/Downloads. So I'm guessing it just got overwritten on the update, which usually doesn't happen when the container is updated offline, because the folders are not "mounted" then?
  8. @ich777 Since I didn't find a separate thread, I have a question / potential problem with your Firefox docker: I had my regular download/temp user share mounted to the Downloads folder inside the Firefox appdata, and this morning the whole share was emptied (nothing that I couldn't recover, but still..). I noticed the time of the last change was pretty much exactly the time the Firefox container was started this morning (on a schedule via a userscript). I also noticed that the automatic updater must have run, since the VNC password I set was reset. Any chance the automatic update just deleted all of my share's content? I only had a quick look at the GitHub repo and didn't see anything about the container updating itself. Like I said, I didn't lose any critical data, but imagine any other docker container acted like that, e.g. your regular Plex container just deleting all your media..
  9. Yeah, I forgot about that. Still, that would bring the max only up to 100 kB (5 rounds x 10 blocks of up to 2 kB per disk), which is still OK; I might just turn it down more.
  10. Exactly. Yeah, I figured I wouldn't want to do that every 15 minutes, but just spreading out the potential blocks so I have a chance of reading something that isn't cached seems to work consistently. And now I'm only reading 5 blocks of max. 2 kB per disk, which I think is negligible.
  11. Has been working just fine all week. I'd change the skip parameter to only skip half the number of blocks at most, and then just add a line per disk inside the loop (a generalized version is sketched below this list), like this:

        #!/bin/bash
        for ((N=0; N<5; N++))
        do
            dd if=/dev/sdc of=/dev/null skip=$(($RANDOM % 2*1024*1024*1024)) bs=$((1024 + $RANDOM % 1024)) count=10 &> /dev/null
            dd if=/dev/sde of=/dev/null skip=$(($RANDOM % 2*1024*1024*1024)) bs=$((1024 + $RANDOM % 1024)) count=10 &> /dev/null
        done
  12. Alright, yesterday's version was a bit overkill, so I turned it down a little and ended up with this:

        #!/bin/bash
        for ((N=0; N<5; N++))
        do
            dd if=/dev/sdc of=/dev/null skip=$(($RANDOM % 4*1024*1024*1024)) bs=$((1024 + $RANDOM % 1024)) count=10 &> /dev/null
            dd if=/dev/sde of=/dev/null skip=$(($RANDOM % 4*1024*1024*1024)) bs=$((1024 + $RANDOM % 1024)) count=10 &> /dev/null
        done

      So 5 rounds (per disk) of reading 10 random blocks of between 1 and 2 kB each, skipping up to 4*2^30 blocks, which should, if I'm not mistaken, cover the first 8 TB of my 10 TB disks (way too much to be cached) with relatively little read activity. Works perfectly fine so far.
  13. So just outputting the same file does not work because of caching; also, it should be echo, not cat. But, like doron suggested, I just ended up with this:

        #!/bin/bash
        # disk1 = sdc
        # disk2 = sde
        for ((N=1; N<20; N++))
        do
            dd if=/dev/sdc of=/dev/null skip=$(($RANDOM % 1024*1024)) bs=$((1024 + $RANDOM % 2048)) count=$((N*10)) &> /dev/null
            dd if=/dev/sde of=/dev/null skip=$(($RANDOM % 1024*1024)) bs=$((1024 + $RANDOM % 2048)) count=$((N*10)) &> /dev/null
        done

      So a few relatively random reads. A first try with smaller/fewer blocks didn't work because of caching, and I hadn't thought of dropping the cache (see the note below this list). Unsure if I want to do that with the Folder Caching plugin; let's see how that goes.
  14. My script now looks like this:

        #!/bin/bash
        cat /mnt/user/<share>/<file>

      With one line per disk, that just outputs the file to the console, or in this case the log. So same as before: run the script as a cronjob during the hours you want the disks to keep spinning (a sample cron entry is sketched below this list), with spindown settings for the rest of the time. Yeah, that remains to be seen; my script will start soon and I'll report back if it kept working through the evening. Still, this is a clunky workaround; there has got to be something like the old version with 'sdspin'.
  15. For now I just 'cat' a small txt file for each disk; that wakes the disks up and refreshes the status, and I'm hoping it keeps them spinning, since there is no read caching that I'm aware of. The use case is, like coblck said, keeping the disks spun up during a certain time of day while having relatively strict spindown settings at other times. For me they are too noisy while I'm working (and rarely accessing anything), and in the evening and on the weekends I just want to keep them spinning, since I'm not in the room anyway, to reduce spinup/spindown cycles.
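
A scripted version of the rxd-init fix from items 4 and 5, for anyone tired of clearing the cookie through the Firefox UI every day. This is only a sketch under a few assumptions: Firefox is fully closed (cookies.sqlite is locked while it runs), sqlite3 is installed, and the profile lives in the default ~/.mozilla location.

    #!/bin/bash
    # Delete the stale rxd-init cookie from every local Firefox profile.
    # Assumption: run this only while Firefox is closed.
    for db in ~/.mozilla/firefox/*/cookies.sqlite; do
        sqlite3 "$db" "DELETE FROM moz_cookies WHERE name = 'rxd-init';"
    done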
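
On "doing the mounting differently" from items 6-8: one option is to bind a dedicated subfolder into the container instead of the whole share, so that even if an update wipes the container's Downloads folder, only that subfolder is touched. A sketch only; the host path and image name here are assumptions, not the container's documented setup:

    # Mount a dedicated subfolder, not the whole download/temp share
    docker run -d --name firefox \
      -v /mnt/user/downloads/firefox:/firefox/Downloads \
      ich777/firefox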
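
The "line per disk" pattern from item 11 also generalizes to an array of devices; a sketch using the same skip/bs arithmetic as the posted script, with the device list as a placeholder:

    #!/bin/bash
    # Random small reads across several disks to keep them spun up.
    DISKS=(/dev/sdc /dev/sde)   # placeholder: your array devices
    for ((N=0; N<5; N++)); do
        for disk in "${DISKS[@]}"; do
            dd if="$disk" of=/dev/null skip=$((RANDOM % 2*1024*1024*1024)) bs=$((1024 + RANDOM % 1024)) count=10 &> /dev/null
        done
    done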
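
Item 13 mentions dropping the cache; for reference, the standard Linux knob for that (as root) is:

    # Flush dirty pages, then drop the page cache so reads hit the disks again
    sync
    echo 1 > /proc/sys/vm/drop_caches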
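
And the cron half of item 14, sketched out. The times and script path are placeholders; the idea is just to re-run the read script often enough during the hours the disks should stay awake:

    # Poke the disks every 10 minutes, 18:00-23:59 on weekdays
    */10 18-23 * * 1-5 /bin/bash /boot/config/keep-spinning.sh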