I think we just differ on the terminology of invention versus observation. What draws the line between a well-supported theory and an observation in the end comes down to how tangible you think the data is.
The concept needs to be able to predict and explain new observations, or else it has no utility and is still essentially just a placeholder.
They first came up with it to explain galactic rotation curves. After that, many new observations came in and the model successfully explained them. To name a few: bullet cluster dynamics, gravitational lensing around galaxies, and baryon acoustic oscillations.
Like relativity: you have to accept it and account for it, or GPS wouldn’t work nearly as accurately as it does.
It is neat that general relativity is used in GNSS, but I’d bet that GNSS could still have been invented even if we didn’t know general relativity. Engineers would probably have come up with a scheme to empirically calibrate the time dilation effect. It would be harder, but compared to the complexity of GNSS as a whole, not that much harder.
There’s no real value in having an explanation (other than personal satisfaction, i.e. vibes) for something unless that explanation helps you to make predictions or manipulate objective reality in some way.
You can make a lot of predictions with Lambda CDM. But yeah they’re not going to help anyone manipulate objective reality. Even so, >95% of math, astronomy, and probably many other fields of research don’t help anyone manipulate reality either. It’s harsh to say they have no value, but perhaps you’re right.
At least let me say this: finding explanations to satisfy personal curiosity (doing it for vibes, as you put it) is different from projecting personal feelings onto objective understanding of reality (the vibes-based astrophysics I was referring to in the meme).
The ball can quantum mechanically tunnel out to the true minimum. In this sense the local minimum is actually not perfectly stable.
I must admit I don’t know that much about MOND being tested. But yeah, from a Lambda CDM point of view it is unsurprising that MOND would not work well for every galaxy.
Yeah it’s not settled by any means. Far from it.
But the hypothesis that it exists and is some kind of matter is pretty well supported through observing gravitational effects.
It’s a classic MEMRI TV meme. What MEMRI TV is would require a … “nuanced” explanation that I don’t want to get into here. Look it up on Reddit or start a thread on !nostupidquestions@lemmy.ml
WIMP is only one model of dark matter. A favorite of particle physicists. But from a purely astrophysics point of view there is little reason to believe dark matter to have any interaction beyond gravity.
But it is a model we invented no? To explain the astrophysical and cosmological observations.
Among all those observations, the commonality is that it looks like there is something that behaves like matter (as opposed to vacuum or radiation) and interacts mostly via gravity (as opposed to electromagnetically, etc.). That’s why we invented dark matter.
The “it is unsuited” opinion in this meme pokes at internet commentators who insist there must be an alternative explanation that does not involve new matter, because according to them all matter must reflect light, otherwise it would feel off.
Once you believe dark matter exists, you still need to come up with an explanation of what that matter actually is. That’s a separate question.
(I’m not trying to make fun of people who study MOND or the like, just the people who non-constructively deny dark matter based on vibes.)
Particle physicists love the Weakly-Interacting Massive Particle dark matter model. But from a purely astrophysics point of view there is little reason to believe dark matter to have any interaction beyond gravity.
I’m still far from convinced about MOND. But I guess now I’m less confident in Lambda CDM too -_-
I’m inclined to believe it’s one or many of the potential explanations in your second link. But even then, those explanations are mostly postdictions so they hold less weight.
MOND is a wonderful way to explain rotation curves, but it doesn’t really hold up against the newer observations since then (the bullet cluster, gravitational lensing, …).
I’ve heard of something similar that is able to predict an effect of dark matter (the rotation curves), but AFAIK it couldn’t match other observations (bullet clusters, etc.) correctly.
Do you have a link for the model you’re talking about? I’m curious.
This is a very fair take, but I’d say dark matter is harder to falsify, not totally unfalsifiable.
You can’t see it, true. But what makes sight so special? We can’t smell stars either. You just need to sense dark matter in some other way: namely, gravity! We have seen the way visible matter orbits, and that points to dark matter. We have seen gravitational lensing due to dark matter. Hopefully soon we’ll observe gravitational waves well enough to sense dark matter around the regions the waves are being emitted from.
Most individual dark matter models are falsifiable (and many have already been falsified) through non-gravitational means too. People have been building all sorts of detectors. The problem with this is that detectors are expensive and there are always more models beyond any detector’s reach.
“Anthropic Claude” does that. Their paying users hit rate limits all the time.
These users end up falling into a strange Stockholm syndrome, believing that if they shill the product harder, VCs will give their beloved company more money to buy GPUs, and the beloved company will definitely use those GPUs to serve their requests.
Yes, it was <20. That was my guess given how inefficient their inference is. People were downvoting me, so I thought maybe the statement was too assertive (to be fair, we don’t know their model size) and relaxed it to <100. That got me more downvotes 🤷‍♂️.
Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.
@sc_griffith@awful.systems asked how request frequency might impact cost per request. Batch inference is a reason (ask anyone in the self-hosted LLM community). I noted that this reason only applies at very small scale, probably much smaller than what OpenAI is operating at.
@dgerard@awful.systems why did you say I am demanding someone disprove the assertion? Are you misunderstanding “I would be very very surprised if they couldn’t fill [the optimal batch size] for any few-seconds window” to mean “I would be very very surprised if they are not profitable”?
The tweet I linked shows that good LLMs can be much cheaper. I am saying that OpenAI is very inefficient and thus economically “cooked”, as the post title has it. How does this make me FYGM? @froztbyte@awful.systems
What? I’m not doubting what he said. Just surprised. Look at this. I really hope Sam IPOs his company so I can short it.
LLM inference can be batched, reducing the cost per request. If you have too few customers, you can’t fill the optimal batch size.
That said, the optimal batch size on today’s hardware is not big (<100). I would be very very surprised if they couldn’t fill it for any few-seconds window.
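To make the batching argument concrete, here’s a minimal sketch (all numbers are made-up assumptions, including the hypothetical `PASS_COST` and a batch size of 64 standing in for the “<100” figure): one forward pass costs roughly the same whether it serves one request or a full batch, so cost per request drops as arrivals fill the batch and bottoms out once the batch is full.

```python
PASS_COST = 1.0      # hypothetical cost of one batched forward pass
OPTIMAL_BATCH = 64   # hypothetical optimal batch size (<100, as claimed above)

def cost_per_request(arrivals_per_window: int) -> float:
    """Cost per request when this many requests arrive in one batching window."""
    if arrivals_per_window <= 0:
        raise ValueError("need at least one request")
    # Requests beyond the optimal batch size just start another full batch,
    # so the per-request cost can't drop below PASS_COST / OPTIMAL_BATCH.
    batch = min(arrivals_per_window, OPTIMAL_BATCH)
    return PASS_COST / batch

# Few customers: the batch is mostly empty, each request bears more of the pass.
print(cost_per_request(4))       # 0.25
# At scale, the batch fills and cost per request bottoms out.
print(cost_per_request(10_000))  # 0.015625, i.e. PASS_COST / OPTIMAL_BATCH
```

The point of the sketch is just that the benefit saturates quickly: any provider seeing more than ~64 requests per few-seconds window is already getting the full batching discount.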
Okay that sounds like the best one could get without self-hosting. Shame they don’t have the latest open-weight models, but I’ll try it out nonetheless.
Here’s better media coverage of the same paper: https://www.nature.com/articles/d41586-025-00030-5