• queermunist she/her@lemmy.ml · 104 points · 7 months ago

    Logic guys love assigning random values to things based on gut feelings. “Everything is 5x as hard to do at scale” means absolutely nothing.

    • Findom_DeLuise [she/her, they/them]@hexbear.net · 69 points · edited · 7 months ago

      My boss does this for “estimating” software project schedules. He built a goddamned spreadsheet* where he will rate the entire project on a scale of 1 to 5, with 1 being trivial/quick-win territory, and 5 being extremely labor-intensive.

      Two problems with this approach as used at my job:

      1. He assigns the ratings before requirements gathering has even started (if they ever get documented in the first place).
      2. He bases the final deadline around the calculator spreadsheet, and sends that date on to the business partners/project stakeholders within the company, and they usually pass it along to upper management.

      So, by the time we finally get requirements together and find out, oh, shit, this is actually way more complicated than a 2.71828 or whatever, the stakeholders have already told the Senior VPs of Thought Leadering that my team will be done by a specific date. The week before that date rolls around, boss goes into a panic and demands that I work on absolutely nothing else, even as I’m being pinged daily to put out random bullshit fires on other projects that were rushed through implementation before I even worked here. Between that and the low pay, I start really strongly considering pulling a no-show. I stay up late a couple of nights, project gets finished. Rinse. Repeat.

      I envy the dead.


      *: No, it’s not a Monte Carlo simulation or anything that fancy – he just multiplies the complexity rating by a set number of labor hours, and doesn’t bake in additional time for risk mitigation. They promoted his ass because this is so scientific and data-driven. Edit: and no, there isn’t a more detailed breakdown/implementation milestone schedule somewhere further down in the estimate. It’s literally “I feel like this is a… 2. You have a week. GIT 'ER DUN!”
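The footnote’s formula is simple enough to sketch. A minimal Python version (the hours-per-point figure and risk multiplier are entirely made up for illustration; the source only says “complexity rating times a set number of labor hours”) shows both what the spreadsheet does and the risk padding it omits:

```python
# Hypothetical sketch of the boss's "calculator" spreadsheet.
# HOURS_PER_POINT is an assumed figure, not from the source.
HOURS_PER_POINT = 40  # assume one rating point == one 40-hour week


def naive_estimate(rating: float) -> float:
    """The spreadsheet's formula: rating times a fixed hour block, nothing else."""
    return rating * HOURS_PER_POINT


def buffered_estimate(rating: float, risk_factor: float = 1.5) -> float:
    """What it leaves out: padding for risks discovered after requirements gathering."""
    return naive_estimate(rating) * risk_factor


print(naive_estimate(2))     # 80 hours -> "I feel like this is a 2. You have a week."
print(buffered_estimate(2))  # 120.0 hours with an (assumed) 1.5x risk buffer
```

With no buffer term, any surprise found during requirements gathering lands directly on the deadline that was already promised upstream.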

    • lugal@lemmy.ml · 33 points · 7 months ago

      Yes, I agree. I think it’s only 3x, 4x at best according to my gut feeling

    • HumanBehaviorByBjork [any, undecided]@hexbear.net · 17 points · 7 months ago

      Big Yud is literally the founder of an assigning-random-values-based-on-gut-feelings religion. His essay on Bayesian reasoning has given perhaps thousands of blog-reading nerds irreversible brain damage.

  • laziestflagellant [they/them]@hexbear.net · 43 points · edited · 7 months ago

    Scrolling through his twitter is a real trip. I’m genuinely envious of someone who is actually worried about evil AGI becoming a reality and thinking that’s the most significant threat to the human species. Believing in longtermism must be such a pleasant experience. No thoughts, just vibes, pay no attention to the climate change behind the curtain.

    • LaughingLion [any, any]@hexbear.net · 8 points · 7 months ago

      as someone currently doing contracting to clean up a big database which has been mismanaged and poorly maintained this entire twitter thread gives me a professional panic attack

      im breathing into a paper bag rn

  • PaX [comrade/them, they/them]@hexbear.net · 37 points · edited · 7 months ago

    Actually based

    Get rid of the web interface and all the overcomplicated shit, make Twitter into a filesystem accessed over Plan 9 protocol and make everyone use acme or maybe a simple native client for the non-Plan 9-using-betas to use it

    I can do it with 10 Plan 9 nerds and ~30 million dollars (we’ll need most of this for writing process migration and better clustering into the 9front kernel so we can distribute the load of such a large system over many machines)

    Elon, DM me if you see this, the mainstream woke computer industry doesn’t want you to know about Plan 9

          • PaX [comrade/them, they/them]@hexbear.net · 4 points · edited · 7 months ago

            Ooh okay, so, Plan 9 from Bell Labs (these nerds specifically named it that so it would be impossible to market lmao) is a “research” operating system (an OS developed primarily to try out new concepts), built from the late 80s into the very early 2000s by the same team who originally developed Unix (which is the origin of pretty much all popular operating systems today except Windows, and even that has been highly influenced by Unix). The goal was to transcend the flaws of Unix while keeping its best concepts: that system resources are represented as (ideally plain-text) files (the file metaphor: you can read, write, create, and delete them) and that the system should be made up of small programs that combine together to accomplish great things.

            By the time Plan 9 was being developed, the limitations of Unix’s design were becoming more and more cumbersome. Unix was developed in a time when all computation was done on big (huge by today’s standards) computers that users would connect to with a dumb terminal, sharing system resources with many other users. Now pretty much no one does that, and yet so many of the most fundamental aspects of modern Unixes’ design assume it. From little things like the talk program being included by default with Linux distributions (lol, lmao, there’s no one logged onto my system but me) to more concerning things like the Unix security model being incapable of controlling system resource access for separate programs running under the one user on the system unless you start adding shit on top of it (like SELinux, AppArmor, OpenBSD’s pledge, Linux’s seccomp, whatever else the Linux people are cooking these days), or different Unix systems being unable to share resources unless some program is written specifically for that purpose (what other Unix system? Aren’t you connected to your university’s big iron?). When a lot of non-Plan-9pilled people talk about Unix, they always bring up the same big ideas I mentioned above (everything is a file, small programs, etc)… but that hasn’t actually been true since… like… the mid 80s? It’s something more like “everything is an ioctl or a system call” lol. The fact is, Unix was never designed for an era of ubiquitous, internet-connected, powerful computers with many capabilities that are often only used by one person, but people just kept adding more stuff on top of it.

            So after the 10th edition of Research Unix, the Unix people threw it all away and started from scratch, keeping only the best concepts of Unix. One of the best things they came up with is a protocol called “9P”, or the Plan 9 Filesystem Protocol. Essentially, this dead-simple protocol is used for accessing all resources on the system (and the system can be distributed across multiple computers, because once you start addressing all resources as files, it no longer matters whether a resource is actually on your local computer; the Plan 9 kernel will just transparently speak 9P across the network to transfer files). I’m not exactly sure rn how to describe it further in abstract terms, so maybe an example: all network connections on the system are represented under the directory /net/. If you go to /net/tcp/, you will see a clone file (you can open this file to make a new connection) and a series of other directories numbered 0 through whatever, each containing a ctl file that you can use to control the connection and a data file which you can write to send data over the connection (although there are library functions and programs that can handle this for you). Or… something more familiar: let’s say you want to use another computer’s speakers. Because sending audio to the system’s output device means writing audio data to a file called /dev/audio, you can use the import (or rimport if you’re on 9front) program to “import” that file into your view of the filesystem, or even replace your /dev/audio with it, and any programs that use it will transparently send audio to the other system, where it will play out of that system’s speakers. Pretty much everything on the system is like this. And most of it is even in userspace; really the only thing the Plan 9 kernel does is handle 9P connections for you and manage hardware… so I guess you could even call it a microkernel lol. Web pages are files, audio is a file, network connections are files, etc etc. Careful use of this abstraction (among others) has made it so that Plan 9 is able to do many things Linux can with a tiny, tiny fraction of the code size and complexity.
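The clone-file pattern described above can be sketched as a toy model. This is plain Python simulating the shape of the interface in a dict, not real 9P or a real Plan 9 system; the paths mirror /net/tcp, but the connect/send behavior is faked for illustration:

```python
# Toy model (NOT real 9P) of the Plan 9 pattern: opening a "clone"
# file allocates a new connection directory with its own "ctl" and
# "data" files. Everything here is simulated in memory.

class ToyNetFS:
    def __init__(self):
        self.files = {}      # path -> contents
        self.next_conn = 0   # next connection directory number

    def open_clone(self):
        """Opening /net/tcp/clone hands you a fresh numbered connection dir."""
        n = self.next_conn
        self.next_conn += 1
        self.files[f"/net/tcp/{n}/ctl"] = b""
        self.files[f"/net/tcp/{n}/data"] = b""
        return n

    def write(self, path, data: bytes):
        # On real Plan 9, writing "connect host!port" to ctl dials out,
        # and writing to data sends bytes over the wire. Here we just append.
        self.files[path] += data


fs = ToyNetFS()
conn = fs.open_clone()
fs.write(f"/net/tcp/{conn}/ctl", b"connect 9front.org!80")
fs.write(f"/net/tcp/{conn}/data", b"GET / HTTP/1.0\r\n\r\n")
print(sorted(fs.files))  # ['/net/tcp/0/ctl', '/net/tcp/0/data']
```

The point of the real thing is that, since the interface is just files, the same reads and writes work transparently whether the connection lives on your machine or on another one imported over 9P.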

            So my joke was that Twitter could be a filesystem too hehe. If you want to learn more about Plan 9 from people who know a lot more than me, you can read some of the papers that come with Plan 9, describing it:

            https://doc.cat-v.org/plan_9/4th_edition/papers/

            Or you can watch this video (and other really great videos) on Youtube by adventuresin9 that covers why Plan 9 is so weird and why it’s like this lol (I didn’t even talk about namespaces or how different programs can have different views of system resources/files):

            https://www.youtube.com/watch?v=VYAyINkDjNk

            Oh and 9front is just the most modern and maintained distribution of Plan 9, Bell Labs shut down a while ago sadly :(

            You can find their website here: https://9front.org/

            In short, I once heard Plan 9 described as: what if the things they told you about Unix were actually true? I hope all that made sense, I’m not so good at writing

            • AernaLingus [any]@hexbear.net · 2 points · 7 months ago

              Thank you for such a thorough introduction! This sounds like such an interesting approach to an operating system, especially given that it’s something Bell Labs pursued. I’m actually wrestling with some really annoying audio routing issues right now, so the file abstraction for audio devices in particular sounds like a dream come true.

              I’ll definitely be delving into those additional resources you linked–you may make a Plan 9 convert out of me yet!

              Also you were bang-on about Lemmy not putting your first reply in my inbox because of the bot–what’s the deal with that, if you don’t mind me asking?

              • PaX [comrade/them, they/them]@hexbear.net · 2 points · 7 months ago

                I’m glad you liked it!

                Also you were bang-on about Lemmy not putting your first reply in my inbox because of the bot–what’s the deal with that, if you don’t mind me asking?

                Lemmy will mark replies to you as read if anyone (or anything) replies to them at all for some reason. I keep missing replies cuz of this lol

  • aen [he/him]@hexbear.net · 26 points · 7 months ago

    i only know this guy as the writer of harry potter and the methods of rationality, i forgot he did stuff other than write fanfiction

  • NewLeaf@hexbear.net · 19 points · 7 months ago

    This guy sounds exactly like Elon. He got caught using one burner account. I wouldn’t be surprised if this is another

  • TheDoctor [they/them]@hexbear.net · 18 points · 7 months ago

    Mr. Rationality truly doesn’t understand economies of scale. Once you’re as large as Twitter, it becomes cheaper to run more and more of your own infrastructure.

  • alvvayson@lemmy.dbzer0.com · 14 points · 7 months ago

    I don’t think he’s totally wrong.

    With 10 engineers one should be able to set up a Mastodon instance and scale it.

    I think the issue comes when you look at all the functionality that is much more nuanced than just the bare technicals.

    A good algorithm to maintain high engagement and surface relevant content and relevant ads. Moderation that keeps the environment advertiser-friendly without making users feel censored.

    And all the data analysis and UX testing to achieve that.

    Building a Twitter clone is easy. Dominating the niche is hard.

    • GaveUp [she/her]@hexbear.net · 35 points · edited · 7 months ago

      I think the issue comes when you look at all the functionality that is much more nuanced than just the bare technicals.

      So he’s right that you could make Twitter if you just don’t implement 99% of the features that make Twitter, Twitter. Not to mention all the workers that work on the non-product side… All the various infra teams, security, abuse, etc. etc.

      bruh come on…

    • Nachorella@lemmy.sdf.org · 17 points · 7 months ago

      Yeah, it basically comes down to a complete lack of comprehension of how big something like twitter really is. On the surface the functionality is pretty simple, but there’s so much else going on that nobody sees, and a whole heap of it will be interconnected.

      Twitter web, twitter app for ios and android, twitter api, advertising, content monitoring, content storage, caching, serving, twitter for businesses, content algorithms, accounts, privacy features, user settings, theming, ui, ux, embedded content. That’s just off the top of my head. I’m sure a lot of these huge companies could be a bit leaner than they are, but usually the size is somewhat warranted.

      This guy’s whole thing is just making stupid takes based on absolutely surface-level knowledge and sounding confident enough that people buy into it.

    • TechnoUnionTypeBeat [he/him, they/them]@hexbear.net · 13 points · 7 months ago

      With 10 engineers one should be able to set up a Mastodon instance and scale it

      A Mastodon instance is used by, at best, a few hundred to low thousands of people, and is going to be small and relatively obscure

      Twitter is used by millions, is the preferred quick communication tool of tens of thousands of companies, and is one of the single biggest presences on the net. It’ll take far more than 10 engineers to keep it running when it gets randomly DDOSed for a laugh by some bored teenagers, where a Mastodon instance either wouldn’t even be a target or would just accept going down temporarily

      • Schadrach@lemmy.sdf.org · 2 points · 7 months ago

        A Mastodon instance is used by, at best, a few hundred to low thousands of people, and is going to be small and relatively obscure

        Both Gab and Truth Social are Mastodon instances (albeit not federated, though if they ever enabled federation they’d be immediately blocked by a majority of instances due to a combination of anti-corp and anti-right sentiments). Gab was actually the largest Mastodon instance for a good while (unsure about currently) - if you see any Mastodon clients that have negative reviews about not connecting to the largest Mastodon instance, that’s what they’re referring to (several clients blacklisted Gab at the client level).

    • kristina [she/her]@hexbear.net · 11 points · edited · 7 months ago

      mastodon itself has like 900 contributors tho, with 23 fairly active contributors. the distributed nature of it means that rather than just having 10 engineers, they need at least 1 maintainer for every instance. there are currently ~10,000 instances. so somewhere around 10,000 or more people are keeping it running
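The back-of-the-envelope math above, spelled out using the comment’s own rough figures (these are its estimates, not verified counts):

```python
# Rough headcount to "run Mastodon", per the figures in the comment above.
instances = 10_000                # approximate number of instances cited
maintainers_per_instance = 1      # assumed minimum: one admin per instance
core_contributors = 23            # "fairly active" contributors cited

total_keeping_it_running = instances * maintainers_per_instance + core_contributors
print(total_keeping_it_running)   # 10023
```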

      • dcluna@lemmy.sdf.org · 1 point · 7 months ago

        Shopify and Github are examples of large web apps that come to mind. Granted, they aren’t the world’s town square, but I remember the “Ruby does not scale” meme and I feel like it’s a bit overstated.

  • frippa@lemmy.ml · 10 points · edited · 7 months ago

    Sure, you can build and maintain a Twitter clone with 10 devs, but when you’ve got hundreds of millions of users you have to have several dev teams working on it. You have a responsibility to patch the hundreds of issues that come up and to “develop” (read: enshittify and bloat) your platform.

    Lemmy is a reddit-lookalike (although much better IMO), but it has so few users and so little feature bloat compared to the average project that I think 10 full-time salaried devs would be more than enough, while reddit proper has hundreds of employees.

    Also these are the kind of people who think they can be cheap and hire a handful of “10x full-stack devs”, pay them as much as an average programmer to save money, and then post the classic “nobody wants to work anymore” shit when they either can’t find anyone due to shit compensation or their hires quit from stress due to being understaffed and underpaid.