Scientists terrified to discover that language, the thing they trained into an highly flexible matrix of nearly arbitrary numbers, ends up can exist in multiple forms, including forms unintended by the matrix!
What happens next, the kids lie to their parents so they can go out partying after dark? The fall of humanity!
you forgot the last stage of the evolution,
you’ll later find out that people were talking about you, your actions, your words, and that being ghosted was in fact the consequence of your actions, and then you’ll have one last opportunity to turn it all around
or
Always my favorite part of your day.
Why protest when you could spend far less energy and just “not be wrong” and “have no stake” by over-fitting your statistical model to the past?
So far, there has been zero or one[1] lab leak that led to a world-wide pandemic. Before COVID, I doubt anyone was even thinking about the probabilities of a lab leak leading to a worldwide pandemic.
So, actually, many people were thinking about lab leaks, and the potential of a worldwide pandemic, despite Scott’s suggestion that stupid people weren’t. For years now, bioengineering has been concerned with accidental lab leaks because the understanding that risk existed was widespread.
But the reality is that guessing at probabilities of this sort of thing still doesn’t change anything. It’s up to labs to pursue safety protocols, which happens at the economic edge of of the opportunity vs the material and mental cost of being diligent. Reality is that lab leaks may not change probabilities, but yes the events of them occurring does cause trauma which acts, not as some bayesian correction, but an emotional correction so that people’s motivations for atleast paying more attention increases for a short while.
Other than that, the greatest rationalist on earth can’t do anything with their statistics about label leaks.
This is the best paradox. Not only is Scott wrong to suggest people shouldn’t be concerned about major events (the traumatic update to individual’s memory IS valuable), but he’s wrong to suggest that anything he or anyone does after updating their probabilities could possibly help them prepare meaningfully.
He’s the most hilarious kind of wrong.
Ah, if only the world wasn’t so full of “stupid people” updating their bayesians based off things they see on the news, because you should already be worried of and calculating your distributions for… inhales deeply terrorist nuclear attacks, mass shootings, lab leaks, famine, natural disasters, murder, sexual harassment, conmen, decay of society, copyright, taxes, spitting into the wind, your genealogy results, comets hitting the earth, UFOs, politics of any and every kind, and tripping on your shoe laces.
What… insight did any of this provide? Seriously. Analytical statistics is a mathematically consistent means of being technically not wrong, while using a lot of words, in order to disagree on feelings, and yet saying nothing.
Risk management is not a statistical question in fact. It’s an economics question of your opportunities. It’s why prepping is better seen as a hobby, a coping mechanism and not as viable means of surviving apocalypse. It’s why even when a EA uses their super powers of bayesian rationality the answer in the magic eight ball is always just “try to make money, stupid”.
In the future, everything will be owned and nothing taken care of.
One day, when Zack is a little older, I hope he learns it’s okay to sometimes talk -to someone- instead of airing one’s identity confusion like an arxiv prepublish paper.
Like, it’s okay to be confused in a weird world, or even have controversial opinions. Make some friends you can actually trust, aren’t demanding bayesian defenses of feelings, and chat this shit out buddy.
Adversarial attacks on training data for LLMs is in fact a real issue. You can very very effectively punch up with regards to the proportion of effect on trained system with even small samples of carefully crafter adversarial inputs. There are things that can counter act this, but all of those things increase costs, and LLMs are very sensitive to economics.
Think of it this way. One, reason why humans don’t just learn everything is because we spend as much time filtering and refocusing our attention in order to preserve our sense of self in the face of adversarial inputs. It’s not perfect, again it changes economics, and at some point being wrong but consistent with our environment is still more important.
I have no skepticism that LLMs learn or understand. They do. But crucially, like everything else we know of, they are in a critically dependent, asymmetrical relationship with their environment. The environment of their existence being our digital waste, so long as that waste contains the correct shapes.
Long term I see regulation plus new economic realities wrt to digital data, not just to be nice or ethical, but because it’s the only way future systems can reach reliable and economical online learning. Maybe the right things happen for the wrong reasons.
It’s funny to me just how much AI ends up demonstrating non equilibrium ecology at scale. Maybe we’ll have that self introspective moment and see our own relationship with our ecosystems reflect back on us. Or maybe we’ll ignore that and focus on reductive world views again.
Normies go crazy for this one neat rationalist trick!
It’s also, probably wrong. Modern views of intelligence (see Multiple realizability of cognition and Multi-level competency collective intelligence and Free Energy Principle models) suggest you are better of measuring intelligence by measuring it’s metabolism or through perturbation and interactions.
Which isn’t reductive enough for these people.
It’s hilarious to me how unnecessarily complicated invoking moore’s law is to say anything…
With Moore’s Law: “Ok ok ok, so like, imagine that this highly abstract, broad process over huge time period, is actually the same as manufacturing this very specific thing over a small time period. Hmm, it doesn’t fit. ok, let’s normalize the timelines with this number. Why? Uhhh because you know, this metric doubles as well. Ok. Now let’s just put these things together into our machine and LOOK it doesn’t match our empirical observations, obviously I’ve discovered something!”
Without Moore’s Law: “When you reduce the dimensions of any system in nature, flattening their interactions, you find exponential processes everywhere. QED.”
A trillion transistors on our phones? Can’t wait to feel the improved call quality and reliability of my video conferencing!
Recently, a sign showed up in El Paso advertising San Francisco as a sanctuary city, as a great “own the libs,” I suppose because SF would receive of applicants overwhelming their social service programs?
It didn’t work.
We simply don’t know how the world will look X (anything with a bigger scale)
Yes. So? This has, will, always be the case. Uncertainty is the only certainty.
When these assholes say things, the implication is always that the future world looks like everything you care about being fucked, you existing in an imprisoned state of stasis, so you better give us control here and now.
Also meta but while I am big on slamming AI enshitification, I am still bullish on using machine learning tools to actually make products better. There are examples of this. Notice how artists react enthusiastically to the AI features of Procreate Dreams (workflow primarily built around human hand assisted by AI tools, ala what photoshop used to be) vs Midjourney (a slap in the face).
The future will involve more AI products. It’s worthy to be skeptical. It’s also worthy to vote with your money to send the signal: there is an alternative to enshitification.
You can read their blog about the AI-crap, in terms of their approach and philosophy. In general, it is optional and not part of the major experience.
The main reason I use kagi is immediately obvious from doing seaches. I convinced my wife to switch to it when she ask, “ok but what results does it show when I search sailor moon?” and she saw the first page (fan sites, official merch, fun shit she had forgotten about for years).
What you need to know is that you pay money, and they have to give you results that you like. It’s a whole different world.
Helpful reminder to spread the word on Google alternatives this holiday season. Bought Kagi subscriptions as stocking stuffers for my loved ones. Everyone who I have convinced to give it a try has been impressed thus far.
SEO will pillage the commons. It has been for years and years. Community diversity and alternative payment models for search are part of the bulwark.
Rich People: “Competitive markets optimize things, see how much progress capitalism has brought!”
Also Rich People: “But what if everything descends into expensive, unregulated competition between things that aren’t rich people oooo nooo!!!”
Ha! Nope, not buying it.
Funny you mention licenses, since stable diffusion and leading AI models were built on labor exploitation. When this issue is finally settled by law, history will not look back well on you.
Doesn’t seem to prevent you from doing it anyways. Does any license slow you down? Nope.
Not sure that’s true, but also unnecessary. Artists don’t care about this or need it to be. I think it’s a disengenous argument, made in the astronaut suit you wear on the high horse drawn from work you stole from other people.
Sounds like an admission of success given that you have to step out of the shadows to tell artists on mastodon not to use it because, ahem, license issues???
No. Listen. The point is to alter the economics, to make training on image from the internet actively dangerous. It doesn’t even take much. A small amount of internet data actively poisoned requires future models to use alignment to bypass it, increasing the marginal (thin) costs of training and cheating people out of their work.
Shame on you dude.
Good luck on competing in the arms race to use other people’s stuff.
@self@awful.systems can we ban the grifter?