Ryan J. Gallagher
Applied scientist trying to make the internet a little better. PhD. Platform manipulation, deceptive behaviors, disinformation, networks, fingerstyle guitar. I use my hair to express myself. He/they
- Read the room, Dina
- I just use a password manager and save the answer as a random set of words. Kinda annoying because I have to remember which sites I gave real answers to, but I'd never be able to remember this one anyway
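A toy sketch of that setup, for illustration (the word list and function name here are made up; a real version would draw from a proper diceware-style list):

```python
# Hypothetical sketch: generate a random-words "answer" for a security
# question and store it in a password manager instead of memorizing it.
import secrets

# Stand-in word list for illustration; use a real diceware list in practice
WORDS = ["orbit", "walnut", "ferry", "cobalt", "meadow", "pixel", "tundra", "violin"]

def fake_security_answer(n_words: int = 4) -> str:
    """Random words for, e.g., 'What was your first pet's name?'"""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(fake_security_answer())  # e.g. "cobalt ferry orbit meadow"
```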
- I wonder how much this actually helps divert useless user reports
- Sometimes I think "wow it's really good I left academia because my work on social media and marginalization would have definitely been defunded by the government" and then I remember I work at the one social media platform banned by said government
- don't throw stones in glass houses etc etc
- A lot of academic work on detecting online coordination uses all sorts of fancy methods but then hardly validates them. For the past 3 years I've worked with analysts who can spend weeks validating that a single network is actually coordinated. Academia should be using the same level of rigor
- The worst I've seen has basically been making a retweet network, thresholding it, and saying "this is coordination!"
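For concreteness, this is roughly the naive pipeline being criticized, in minimal form with made-up data; the point is how little stands between the threshold and the "coordination!" claim:

```python
# The naive approach called out above: build a co-retweet network,
# threshold edge weights, declare "coordination." Toy data, not a real method.
from collections import defaultdict
from itertools import combinations

import networkx as nx

# Hypothetical input: tweet id -> accounts that retweeted it
retweets = {
    "t1": {"a", "b", "c"},
    "t2": {"a", "b"},
    "t3": {"a", "b", "d"},
}

# Count how many tweets each pair of accounts co-retweeted
weights = defaultdict(int)
for accounts in retweets.values():
    for u, v in combinations(sorted(accounts), 2):
        weights[(u, v)] += 1

# Threshold: keep pairs with at least 2 co-retweets
G = nx.Graph([(u, v) for (u, v), w in weights.items() if w >= 2])
print(G.edges())  # [('a', 'b')] -- calling this "coordination" is the unvalidated leap
```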
- It's simply not enough to declare "this is a coordinated network!" because you used a fancy statistical or network algorithm. Validating that it's true should take as much effort, if not more, than the actual detection. Without the validation, the method can't be trusted to scale
- If you're doing coordination detection correctly, some of it should be easy to validate because it should be benign coordination between corporate accounts, journalists, etc. If you want to publish about deceptive coordination (the flashy stuff), you have to validate that that's what it actually is
- This is really close to what I had in mind, and I wasn't aware of this article, thank you for sharing!
- I think this is right. Harassment in general is understudied, and especially its effects longitudinally. Some people write off public figures getting inundated with negativity online as part of the job, but I worry a lot about how the scale enabled by social media can radicalize people through it
- Obviously it's a balance and people should be held accountable. Shame is a way we communicate what is and is not ok, and public figures do need to meet a higher bar. But if you start mixing that with lots of death threats, "fuck you you piece of shit" etc, do people just start insulating themselves?
- My theory is that if an "elite" is very online, and they receive a lot of negativity (everything from "that's dumb" to death threats), then they lean heavily into selective exposure as a coping mechanism. Alternatives are breaking down under it (which happens) or self-reflection (less likely)
- Regardless, I think you're exactly right on the dosage aspect. Getting swarmed online can really change how you approach things, and so can having parasocial relationships to maintain
- The researchers mostly appeal to the harm that will occur if they *don't* do the research ("rampant bad actors might do this anyway"), which is self-serving. They hardly seem to reflect on the harms that can come from *actually doing* the research (deception, violating community norms, active harm)
- @sarahagilbert.bsky.social goes through the harms in more detail in this thread
- This is what really irks me with the IRB decision. Researchers have a conflict of interest when assessing the ethics of their own research: they want to do and publish that research. A good IRB should be able to assess *all* the possible harms, not just the (self-serving) ones identified by the researchers
- The r/ChangeMyView LLM experiment is an ethical mess. There's a lot to say, but I'm really frustrated with how the IRB dropped the ball. Lots of social media studies get this "minimal risk, go ahead" assessment, but they're usually NOT interacting with users. Doing so completely changes the stakes
- You see this reasoning all the time in social media study ethics statements and it's so weak. Just because something may have societal harm doesn't give you a blank check to still do your research. Many researchers overestimate the benefits of their work and how those outweigh the harms
- I try to assume best intentions in this space because social media research ethics are hard. But it ruins a lot of goodwill when you fuck up, double down on that fuck-up, and don't show any sign that you're really listening to the community you manipulated
- IRB approval does not mean something is ethical. It's not a get-out-of-jail-free card for any ethical concerns. In this case the IRB failed to do its minimal job, and so the researchers hide behind this "Well, the IRB approved it, so it's ok" reasoning in half their responses to the community
- Another thing I'm annoyed about is the ethics justification: "Yeah, our bot was intentionally deceptive, broke community norms, and we failed to debrief the individuals affected, but *someone else* could do this so we *had* to"
- the mass layoffs at Twitter ruined me because I still think it's funny to use the salute emoji 🫡
- Having a PhD is funny because once in a blue moon someone brings up the one very specific thing that your dissertation is on and they have no idea what they've just stumbled into
- Thoughts and prayers to anyone who ever mentions the words "core" and "periphery" when working on social networks with me
- I think the implication in the game is that those ones have just been infected longer
- Feature from Twitter that I miss on Bluesky: getting a notification when something I reposted leads to my followers liking or reposting it. I amplify a lot more than I write myself. I like seeing that it has an impact too
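The underlying logic is simple enough to sketch (the event format here is invented for illustration, nothing platform-specific):

```python
# Toy sketch of the missed notification: find engagement on posts I
# reposted that came from my own followers. All data here is made up.
my_reposts = {"post42"}                # posts I amplified
my_followers = {"alice", "bob"}

events = [
    {"post": "post42", "actor": "alice", "action": "like"},
    {"post": "post42", "actor": "eve", "action": "repost"},  # not a follower
    {"post": "post99", "actor": "bob", "action": "like"},    # not my repost
]

downstream = [
    e for e in events
    if e["post"] in my_reposts and e["actor"] in my_followers
]
for e in downstream:
    # each hit would trigger a "your repost had an impact" notification
    print(f'{e["actor"]}: {e["action"]} on something you reposted')
```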
- I honestly really like the affiliation mechanic on X. I think it's an interesting way to flesh out what we mean by "verification" and to show that there are different interpretations of it. This could go hand in hand with domain verification too
- I think the idea of an all-powerful Blue Check verification is a bit dated, and it's disappointing to see Bluesky chasing it. Even X has multiple types of verification now. Of course their blue check is infamously useless, but they have a grey check for government, and a gold check for businesses
- Another place X unfortunately leads Bluesky on "verification" is through "affiliates": accounts that a verified government or business account has itself verified as affiliated with it. They get a custom icon that points back to the account that affiliated them
- Affiliated accounts on X feel more like what Bluesky rolled out yesterday with "trusted verifiers," because (currently) the only trusted verifiers are newsrooms, who only seem to be expected to "verify" their employees. "Affiliation" is much clearer than the overused term "verification," though
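Roughly, the two relationships differ like this as data (hypothetical structures for illustration, not X's or Bluesky's actual schemas):

```python
# Hypothetical model: "verification" is an authority vouching for an
# account's identity; "affiliation" is a verified org pointing at accounts
# associated with it. Invented for illustration, not any platform's schema.
from dataclasses import dataclass

@dataclass
class Verification:
    account: str   # account being verified
    verifier: str  # who vouches: platform, newsroom, government, ...
    kind: str      # "identity", "government", "business", ...

@dataclass
class Affiliation:
    account: str   # affiliated account, e.g. an employee
    org: str       # verified org the custom icon points back to

# The org is verified once; affiliates inherit a link, not an identity claim
acme = Verification(account="acme", verifier="platform", kind="business")
employee = Affiliation(account="jdoe", org="acme")
print(acme, employee, sep="\n")
```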
- Bluesky is introducing account verification, and also trying to decentralize who can verify. I'm interested to see how this turns out, but some questions come to mind: Who can be a trusted verifier? Can different verifiers use different definitions of "verification"? Are verifiers like labelers?
- Part of this is selfish wondering - I've thought about doing verification as a labeler, but it didn't feel like quite the right route (would people ever actually see it?). With this program, is there a path for someone like me to build something that could be a "trusted verifier"?
- "As this feature stabilizes, we’ll launch a request form for notable and authentic accounts interested in becoming verified or becoming trusted verifiers."
- Some Sunday guitar trying out a new tripod
- I get what Bluesky is going for here, because lists have been an unchecked vector for harassment and abuse. But phrasing it in terms of "unproven allegations" is bad policy writing that is going to lead to all sorts of headaches for everyone - both users and Bluesky's moderators
- Bluesky's community guidelines need a revamp
- The ever-tricky thing here is that policy writing is slow and careful, while Trust and Safety needs to be fast and agile to respond to emerging threats. I worry that in 6 months, 1 year, 2 years, Bluesky is going to look at its internal moderation and realize it's out of alignment with the actual policy
- Research on misinformation does not infringe on anyone's free speech! Blatantly anti-scientific reasons for blocking this research
- Forgot to add the link www.nsf.gov/updates-on-p...
- Just made my first code commit on a JavaScript file*. I am a full stack data scientist. *a single string input on a predefined form
- Excited to see TikTok testing a new Community Notes-like feature, Footnotes. This is in addition to fact-checking, not a replacement for it like X and Meta have done (irresponsibly, in my opinion) newsroom.tiktok.com/en-us/footno...
- Disclaimer that I currently work at TikTok, and I worked on Community Notes at Twitter. I'm not involved with this project at all though
- Oh to have the confidence of an anonymous account arguing Markov chains are deterministic
- "they're deterministic because if you implement one on a computer and use a seed, you always get the same output" where do you even start with this
- "It’s well-known that all leading LLMs have had issues with bias—specifically, they historically have leaned left when it comes to debated political and social topics" Is it? Is it well known? Is it so well known there are no citations for this claim?
- Incredibly cringe to say the political "bias" of your model is comparable to Grok, as if that's a good thing
- Just because I have a lot of followers doesn't mean I don't see someone regularly following (and unfollowing) me to try and get a follow back
- If you're an academic scholar, you really don't need to be doing follow4follow
- Cancelled