• Neuron · 1 year ago

    Appreciate the funny post, but for anyone reading too much into this, it’s misleading at best (and just barely passing, at only 60% correct). It’s referencing a portion of the test with multiple-choice questions, which is relatively easy for a language model, since it can predict an answer from a focused question. Please don’t ask ChatGPT individualized questions about your health. It does a decent job of giving out general information about medical topics, but you’d be better off going to a reputable site like Mayo Clinic, Cleveland Clinic, or the resources at the National Library of Medicine, which maintains very nice free medical knowledge databases on tons of topics. That’s where ChatGPT is probably scraping its answers from anyway, and you won’t have to worry about it making up nonsense that looks real and inserting it into the answer.

    And if ChatGPT comes up with sources in an answer, look them up yourself, no matter how convincing they seem on their face. I’ve seen it invent DOI numbers that don’t exist and all sorts of weird stuff.
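    Checking a suspicious DOI takes seconds if you script it. A minimal sketch (the regex is the pattern Crossref recommends for matching modern DOIs; it is a syntax check only, so a well-formed fake still has to be looked up at doi.org):

```python
import re

# Crossref's recommended pattern for modern DOIs: "10.", a 4-9 digit
# registrant prefix, a slash, then a non-empty suffix. This is a pure
# syntax check -- a fabricated DOI can still match, so the real test is
# whether https://doi.org/<doi> actually resolves.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(candidate: str) -> bool:
    """True if the string is at least shaped like a DOI."""
    return bool(DOI_PATTERN.match(candidate.strip()))
```

    Anything that fails even this check is definitely fabricated; anything that passes still needs the doi.org lookup.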

    • Kichae@kbin.social · 1 year ago

      Yuuuup.

      Language models, just like any model, only interpolate from what they’ve been trained on. They can easily answer questions they’ve seen the answer to a million times already, but they do that through stored word association, not reasoning.

      In other words, describe your symptoms in a way that isn’t popular, and you’ll get “misdiagnosed”.

      And they have a real problem with making up citations of every type: fabricating textbooks, newspaper articles, legal decisions, and entire academic journals. They can recognize the citation pattern and use it, but because any individual citation is relatively rare compared to other word combinations (most papers get cited dozens of times, not the millions of examples LLMs need to form confident associations between words), they just fill the citation format with basically whatever.

  • Square Singer@feddit.de · 1 year ago

    It also hallucinates about medicines and conditions that don’t exist and wrongly diagnoses a whole lot of conditions.

    We had the same thing with Google before: Why do you need a doctor if you can also just google your condition?

  • TooMuchDog@lemmy.fmhy.ml · 1 year ago

    This is a big flashy headline that isn’t as big of a deal as it presents itself. AI is still extremely far from assisting doctors, let alone replacing them.

    “Diagnosing a 1-in-100,000 condition in seconds” is an absolutely meaningless statement.

    What was the condition? Does it present with vague and difficult-to-assess symptoms, does it have a pathognomonic clinical sign that identifies it immediately, or is it somewhere in between? Did the AI diagnose it correctly, and if so, was it on the first try? Is it repeatable; could it diagnose it again? How prone is it to false positives; can we be sure it wouldn’t diagnose a healthy patient, or a patient with a similarly presenting problem? What about false negatives? It caught it this time, but do we know how many times it missed it? What about a treatment plan? Does it know how best to treat it, and can it personalize a treatment to fit that patient specifically, with any comorbidities or conflicting medications taken into account? When planning treatments, does it stick strictly to the drug label, or does it factor in published research on dosing?

    • Double_A@discuss.tchncs.de · 1 year ago

      Why does it have to be perfect to be useful? I could just throw ideas at the real doctor, who then decides what is actually the most reasonable thing.

      • TooMuchDog@lemmy.fmhy.ml · 1 year ago

        I never said it can’t be useful, just that it isn’t very useful right now, and it certainly isn’t going to replace doctors any time soon. As I said in another comment, I think AI will eventually be a tool that could help doctors.

    • d-RLY?@lemmy.ml · 1 year ago

      This is a big flashy headline that isn’t as big of a deal as it presents itself. AI is still extremely far from assisting doctors, let alone replacing them.

      While I also agree it’s less than the hype, there are already people who are just concerned with moving quickly up the ladder, and/or lazy, who are quietly using GPT and taking credit (with or without checking any of it first). I read about a law firm that was found to have used GPT for a case; they were found out mainly because the legal case citations they submitted were just made up by GPT and couldn’t be found to have ever existed. They then claimed they weren’t aware the AI would provide fake information, since it sounded real enough.

      Not to mention all the tech companies that are having to tell workers to stop uploading code or other information for the AI to work on. And given the lack of fucks given by so many docs with pill mills and opioids, I am more than willing to believe there are already docs all over using GPT or any of the others.

      I can attest to many docs/nurses not giving any fucks even when just trying to get correct diagnostic codes so the lab company I worked for years ago could simply bill insurance. We had to get a specific code, not a general code (they can’t just use something like 264, which is the general code for “Vitamin A deficiency”; they’d need to state something like 264.1, “With conjunctival xerosis and Bitot’s spot,” to specify which kind of vitamin A deficiency). I would have to call about codes that were missing or not specific enough, and a shockingly high number of the docs who had ordered the freaking tests would just tell me to use “whatever made sense.” Our already-fucked medical services are gonna get much worse.
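      The general-vs-specific distinction is mechanical enough that intake software could flag it before anyone has to pick up the phone. A toy sketch (the table is a tiny hypothetical excerpt, not a real ICD database):

```python
# Toy excerpt of an ICD-9-style code table (hypothetical, not a real
# lookup): a category code maps to the set of more specific child codes
# that the lab actually needs in order to bill.
CODE_TABLE = {
    "264": {"264.0", "264.1", "264.2"},  # "Vitamin A deficiency" (too general)
    "264.1": set(),  # "With conjunctival xerosis and Bitot's spot" (specific)
}

def needs_callback(code: str) -> bool:
    """True when the doc wrote a category code that has more specific
    children, i.e. the lab would have to call and ask which one was meant."""
    return bool(CODE_TABLE.get(code))
```

      A check like this catches “264” at order time instead of days later on a billing rejection.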

      • TooMuchDog@lemmy.fmhy.ml · 1 year ago

        I mean, sure. I know people who have used ChatGPT to write their discharges. It’ll definitely be tried as a crutch by the lazy in the short term, but I think it’ll end up being used as an actual tool in the long term (not just in medicine, but in a wide variety of fields). However, I also think that’s an entirely different discussion than the one this article presents. The conversation of how AI can be used as a tool to assist existing and future professionals is entirely separate from whether or not AI is going to replace any given profession. It’s also a wildly more productive conversation, because I don’t believe there are many professions that can be completely phased out by AI.

        I also think the point you raised about codes is another entirely different discussion that could be had about the pitfalls of modern-day medicine. I’m actually going to argue hard in favor of the doctors who told you to use “whatever made sense,” because in my experience and opinion, knowing specific billing codes is wildly outside the scope of knowledge needed and expected of a doctor. Their job should be first and foremost to treat their patients. Navigating the unnecessarily complicated, red-tape-filled maze of billing and insurance codes is not only an unrelated skill set, but also a necessity born of a flawed and predatory system built by those who seek to profit at the cost of healthcare (i.e. insurance companies) rather than those who seek to make a living by providing healthcare.

        • d-RLY?@lemmy.ml · 1 year ago

          I mostly agree with you on the first paragraph, though I’m not so sure they won’t try to phase out plenty of professionals. All of these companies are freaking the fuck out and rushing shit out at rates that are beyond problematic. The VCs and major tech companies are chasing the dragon of money and ignoring all the real issues that come with normies thinking this shit is magic and accurate. Though I also blame the for-profit media of all types, with their hyper-attention-grabbing titles, which fill in the gaps of normies thinking things are much further along than they are. So even if the devs make a point of saying that things are not even fully in beta, we are seeing shit pushed out like it’s a full release. We already saw how fast internet folks were able to turn old chatbots into outright Nazi sympathizers, and there are already well-equipped blackhats out there trying to take some of these models and strip their protections. Though it is very fun to see how the more prankster-minded folks are bypassing some things just by wording shit differently.

          I also agree that the healthcare system and insurance need to be burned down and moved to being tax-funded, with lives being more important than any money. But the codes I was speaking of weren’t billing codes; they are actual medical diagnostic codes, so it does kind of matter that they be correct. Getting them wrong implies that the diagnoses for patients aren’t based on knowing specifics, and one wrong code could change what is being looked for and waste time. Though I will stress that it likely doesn’t matter for most common tests, so I yield on those situations.

          Overall I do want to again say that I think we may agree more than my initial reply might have sounded (or even this one). I just think we can’t keep the constant hype-trains running so hard all the time as if things are actually going to work. We are being charged more for less-finished products, and we are seeing the beginnings of mass reductions in workforces as the corpo leaders decide we aren’t needed anymore. But that could work in favor of a workers’ revolution, if leftists really start going hard at catching those impacted before they fall into reactionary hands and fascism takes over again. Not that I wish for so many to suffer more than they already have.

          • TooMuchDog@lemmy.fmhy.ml · 1 year ago

            Yeah, I think we overall are on the same page in regards to the role AI is going to play in our futures and the consequences that could come with the greed of bad actors. (Though I have to say I really hate the word “normie”. I feel the use of it instantly weakens an argument because it’s so associated with the stereotype of a basement dwelling know-it-all.)

            I am going to stand my ground somewhat on the point of medical codes, not as an attempt to be adversarial, but because I’m enjoying the conversation.

            I admittedly don’t know much about how it works in the human medical world, because I’m in veterinary medicine. In my experience, though, there isn’t a difference between billing codes and test-order codes from a clinician’s perspective. I order a test, and to do so I have to put in a code that tells the software we use both what the test is and how much it costs; it then both applies it to the bill and sends a request to clin path, which is why I just referred to them as billing codes. With our software (and all the others I’ve used, for that matter), there is an unreasonable number of different codes that order tests that can differ very minimally, and they usually aren’t named clearly. I’m pretty sure this is because the people organizing and naming the tests are not clinicians, and possibly aren’t even medically trained, as it’s more of an IT responsibility.

            For example, if I’m concerned about the function of a patient’s liver and kidneys, then I want a test that will tell me what their AST, ALP, GGT, Albumin, Cholesterol, Glucose, BUN, Creatinine, and SDMA are, or at least some relevant combination of those plus some others. The problem is that I don’t order a panel with a drop-down list of what values I want. Instead I have to choose from a Chemistry, Chem 6, Chem 8, Chem 10, Chem 12, Senior Panel, Adult Wellness Panel, Profile, Mini Profile, Full Profile, NOVA, NOVA lytes, etc. All of those have their own codes and their own names, and the same tests can differ based on whether I’m ordering in-house or from any of multiple external labs. I know exactly what values I want to see, but juggling the various nondescript names of the dozens or more possible test options is a nightmare, and that’s just when dealing with lab work that I run routinely. When it comes to codes that I very rarely use, or have never had to order before, the chances I get it wrong are much higher. The worst part is that many of the available options overlap significantly, and sometimes I can get the same diagnostic value out of several of the options, but for some reason one of them costs $50 to run while another costs $300 and the rest fall somewhere in between.

            Bottom line, knowing what I want and knowing how to ask for what I want are often very unrelated.

  • Lawliss@midwest.social · 1 year ago

    Ya, not scared. Patients have no idea how to tell you what’s going on, and their language/vocabulary is all completely different, so good luck getting ChatGPT to use the proper line of questioning to ensure understanding.

  • Rickety Thudds@lemmy.ca · 1 year ago

    Finally, some 1%ers are getting automated out of a job. Soon we’ll start hearing opinion pieces about how people who get automated out of a job deserve some of the profit.

      • CedarMadness@midwest.social · 1 year ago

        The only doctors who are 1%ers are the ones who finished med school 30+ years ago and managed to start their own practice before hospital systems started buying up and consolidating everything. Anyone who got their start more recently is much more likely to be working for one of these consolidated practices, with zero ownership and an insane schedule. Considering the cost in both time and money of med school, a family medicine doctor will be in about the same place net-worth-wise as a high-level tech worker. Still good money, but far from 1% territory.

      • Rickety Thudds@lemmy.ca · 1 year ago

        Top 1% is roughly half a mil annually from what I’ve heard, so perhaps mostly not. According to O*NET, most MDs make less than half that.

        My shitty point stands though, I think - if we start stealing rich people’s jobs, there will be a lot more talk of fairness in automation.

        • klangcola@reddthat.com · 1 year ago

          Yeah fair enough. My point was also that not even doctors are 1%ers.

          Inequality is already so bad, and as usual the benefits go to those who own the means of production, but AI is so capital-intensive that very few will get a seat at that table. And unlike previous automation revolutions, AI is on track to progress VERY quickly and VERY widely.

          AI may never replace X profession, but with AI 1 professional can be as productive as 3 professionals. What will the other 2 do?

      • Neuron · 1 year ago

        Yeah, basically if you work for a paycheck, you’re probably not the 1%. The venture capital firms buying up all the medical practices and hospitals? Those guys are the 1%.