• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: July 1st, 2023


  • Once the water companies were privatised, they took out massive loans and performed no maintenance. The loans were purely to pay shareholder dividends. Now they’re loaded down with debt.

    On top of this, the crumbling infrastructure can’t handle the increased water flow from heavier rainfall. So there’s been a general trend of dumping raw sewage into rivers (the fines are cheaper opex than the capex needed to fix the situation).

    It’s parasitic capitalism at its finest.




  • This is some appalling pedagogy, perhaps best summarised by an admission during the questions: “I haven’t used this enough to have a concrete opinion on whether you should use it.”

    All the examples were presented too quickly, each beginning with a solution without first introducing the problem it was meant to solve. Even after they were presented, it wasn’t clear what they solved.

    IdentityT is a terrible choice of example because it adds absolutely nothing; that it can be defined at all is theoretically interesting, but it doesn’t answer the novice’s question: since it does nothing, why would I care about it?

    Using terminology like “lift” as though the audience were already familiar with it, and only eventually saying “ah-ha! This is lift!”, doesn’t supply any motivation.

    The whole thing seemed self-congratulatory; if there’s a reason people seem “easily confused”, the fault lies with the presentation of the material.


  • Casey’s video is interesting, but his example frames moving from 35 cycles/object to 24 cycles/object as a 1.5x speedup.

    Another way to look at it: that’s an 11-cycle saving per object.

    If you’re writing a shader or a physics sim, this is a massive difference.

    If you’re building typical business software, it isn’t; meanwhile, the 10,000-line monster method really does crop up, and it’s a maintenance disaster.

    I think “clean code principles lead to a 50% cost increase” is a takeaway that needs a degree of context.
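
    To put the two framings side by side, here’s a minimal back-of-the-envelope sketch (the cycle counts are the ones quoted above; the object count is an arbitrary assumption for illustration):

    ```rust
    fn main() {
        let clean = 35.0_f64;     // cycles/object, "clean code" version (figure quoted above)
        let optimised = 24.0_f64; // cycles/object, optimised version (figure quoted above)

        // Relative framing: roughly a 1.5x speedup.
        println!("relative speedup: {:.2}x", clean / optimised);

        // Absolute framing: 11 cycles saved per object.
        println!("cycles saved per object: {}", clean - optimised);

        // Whether that matters depends entirely on how many objects you touch;
        // 1_000_000 is an arbitrary assumption, not a figure from the video.
        let objects = 1_000_000.0_f64;
        println!("cycles saved per pass over {objects} objects: {}", (clean - optimised) * objects);
    }
    ```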





    The test case purported to be bad data, which is exactly the sort of input you’d presumably want to test your dearchiver’s correct behaviour against.

    Nothing this attack did appears to involve memory safety; it uses features like ifunc to hook behaviour.

    The notion of reproducible CI is interesting, but there’s nothing preventing this setup from repeatedly producing the same output in (say) a Debian package build environment.

    There are many signs here that look “obvious” with hindsight, but ultimately this comes down to establishing trust. Technical sophistication aside, this was a very successful attack against that foundation of trust.

    It’s definitely the case that the stack of C build tooling (CMakeLists.txt, autotools) makes obfuscating content easier. You might point to modern build tooling like cargo as an alternative; however, build.rs and proc macros are not typically sandboxed at present, so I think it’d be possible to replicate the effects of this attack using that tooling.
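
    As a minimal sketch of that last point (a hypothetical build.rs, nothing to do with the actual xz payload): cargo compiles and runs a crate’s build.rs with the building user’s privileges before building the crate itself, so any dependency can execute arbitrary code on the build machine.

    ```rust
    // build.rs (hypothetical illustration): cargo runs this with the
    // building user's privileges before compiling the crate itself.
    use std::fs;
    use std::process::Command;

    fn main() {
        // Nothing stops a build script from poking at files outside the crate...
        let _ssh_dir = fs::read_dir(std::env::var("HOME").unwrap_or_default() + "/.ssh");

        // ...or invoking arbitrary programs (shown here as a harmless echo;
        // a real attack would obviously not announce itself).
        let _ = Command::new("echo")
            .arg("build scripts are unsandboxed")
            .status();

        // Emitting a legitimate-looking directive keeps the script plausible.
        println!("cargo:rerun-if-changed=build.rs");
    }
    ```

    Proc macros have the same property: they run arbitrary code inside the compiler process, so the attack surface isn’t unique to autotools-style builds.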






  • Came here to say the same thing. The git book is an afternoon’s reading, and it’s well worth the time, even if you think you already know git.

    People complain about the UX of the CLI tool (perhaps rightly), but it’s honestly little different from the rest of the Unix CLI experience: ad hoc, arbitrary, inconsistent.

    What’s important is a solid mental model and the vocabulary of primitive and compound operations built on top of it. How you spell things in the CLI is just something you learn as you go.