Lvxferre [he/him]

The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

  • 43 Posts
  • 3.88K Comments
Joined 1 year ago
Cake day: January 12th, 2024

  • Clear PTB.

    I didn’t read the whole comment chain because it’s rather long, but goat is consistently building straw men out of what you and LibertyLizard said. For example, he goes out of his way to conflate both meanings of the word “communist”, and then, when LL explains both of them, idiotically answers with “if u now you’re arguement is semantics, y r u even arguing?”.

    And… speaking in general terms, this sort of moron who’s eager to oversimplify complex matters is always dead weight in any sort of discussion, and is best ignored as you address other users. If you must answer them, clipped/short replies like “I already addressed this” are often good.









  • Yes, it is expensive. But most of that cost is not because of simple applications, like my example with grammar tables. It’s because those models have been scaled up to a bazillion parameters and “trained” with a gorillabyte of scraped data, in the hopes they’ll magically reach sentience and stop telling you to put glue on pizza. It’s because of meaning (semantics and pragmatics), not grammar.

    Also, natural languages don’t really have nonsensical rules; sure, sometimes you see some weird stuff (like Italian genderbending plurals, or English question formation), but even those are procedural: “if X, do Y”. LLMs are actually rather good at regenerating those procedural rules based on examples from the data.

    But I wish it had some broader use that would justify its cost.

    I wish they’d cut down the costs based on the current uses: small models for specific applications, dirt cheap in both training and running costs.

    (In both our cases, it’s about matching cost vs. use.)



  • Why not quanta? Don’t you believe in the power of the crystals? Quantum vibrations of the Universe from negative ions from the Himalayan salt lamps give you 153.7% better spiritual connection with the soul of the cosmic rays of the Unity!

    …what makes me sadder about the generative models is that the underlying tech is genuinely interesting. For example, for languages with a large presence online they get the grammar right, so stuff like “give me a [declension | conjugation] table for [noun | verb]” works great, and in any application where accuracy isn’t a big deal (like “give me ideas for [thing]”) you’ll probably get some interesting output. But it certainly won’t give you reliable info about most stuff, unless it’s directly copied from elsewhere.




  • The whole thing can be summed up as the following: they’re selling you a hammer and telling you to use it with screws. Once you hammer the screw, it trashes the wood really bad. Then they’re calling the wood trashing “hallucination”, and promising you better hammers that won’t do this. Except a hammer is not a tool to use with screws dammit, you should be using a screwdriver.

    An AI leaderboard suggests the newest reasoning models used in chatbots are producing less accurate results because of higher hallucination rates.

    So he’s suggesting that the models are producing less accurate results… because they have higher rates of less accurate results? This is a tautological pseudo-explanation.

    AI chatbots from tech companies such as OpenAI and Google have been getting so-called reasoning upgrades over the past months

    When are people going to accept the fact that large “language” models are not general intelligence?

    ideally to make them better at giving us answers we can trust

    Those models are useful, but only a fool trusts = is gullible towards their output.

    OpenAI says the reasoning process isn’t to blame.

    Just like my dog isn’t to blame for the holes in my garden. Because I don’t have a dog.

    This is sounding more and more like model collapse - models perform worse when trained on the output of other models.

    inb4 sealions asking what’s my definition of reasoning in 3…2…1…



  • I wish EU4 had more automation; the amount of micromanagement was awful. And this sort of game is more interesting when you can focus on the big picture.

    Sadly I don’t trust Hipsters’ Electronic Arts Paradox to do automation right. And by “right” I mean:

    • Transparent. You could reasonably get why the game AI will / won’t take a certain decision, without spending hours in the wiki or fucking around in the game files.
    • Flexible. The best decision is often circumstantial, and playing styles are a thing.
    • Powerful, but not overpowered. The AI’s decisions should be decent, but not the best - a player who takes the time to learn how stuff works should be rewarded. (Or even better: let the player tweak the AI so it does make the best decisions.)



  • …you know, naming things after small pebbles is less cringe than being called “bootsie” (Caligula).

    (Both words are similar by coincidence: calx “chalk/limestone”→ calculus “pebble” vs. caliga “boot” → caligula “little boot”. I’m not aware of any word like caligulus, at most caliculus “little cup” ← calix “cup”. inb4 sorry I probably just explained a joke, but I couldn’t resist.)