• starman2112@sh.itjust.works · 51 points · 15 hours ago

    upscaled 4k pictures with advanced AI interpolation

    You can’t get more information out of the pictures than there is in the pictures. The most an upscaler can do is make the equivalent of an artist’s interpretation of a 4k picture.
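A minimal sketch of this point, using NumPy (my choice for illustration, not anything from the thread): average-downsampling destroys detail, and a deterministic upscale afterwards can only repeat the surviving pixels, never recover what was lost.

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.random((4, 4))          # stand-in for "high-res" detail

# Downsample 2x by averaging each 2x2 block -- detail is destroyed here.
low = original.reshape(2, 2, 2, 2).mean(axis=(1, 3))

# Upscale back with nearest-neighbour: each low-res pixel is just repeated.
up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)

# Right shape, but the fine structure is gone for good.
error = np.abs(up - original).max()
```

An AI upscaler replaces the `np.repeat` step with a learned guess at plausible detail, but the guess is drawn from its training data, not recovered from `original`.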

    • piccolo@sh.itjust.works · 9 points · 10 hours ago

      You can’t gain data, but you can make mathematical predictions from the data you have, which is what ‘interpolation’ means. The AI part uses a learned model to add in more predicted detail based on similar imagery.
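The non-AI baseline of that idea can be sketched in a few lines (an editorial illustration, not anything posted in the thread): plain frame interpolation is just a weighted blend of pixels that already exist, so every output value is a prediction computed from the input data.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, t):
    """Linearly blend two frames at time t in [0, 1].

    Every output pixel is a weighted average of existing pixels --
    a mathematical prediction, not new information.
    """
    return (1.0 - t) * frame_a + t * frame_b

# Two tiny 2x2 grayscale "frames": all black, then all white.
a = np.zeros((2, 2))
b = np.ones((2, 2))

mid = interpolate_frames(a, b, 0.5)  # every pixel lands at 0.5
```

AI interpolators replace this fixed blend with a learned prediction, so the in-between frame can contain plausible detail copied from similar training imagery rather than a simple average.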

    • comrade19@lemmy.world · 6 points (3 downvotes) · 11 hours ago

      It can look at all the other AI interpretations of eruptions and produce what it expects people to think one looks like. Then one day nobody will know what a volcanic eruption actually looks like, just the Chinese whispers of AI.

    • burgersc12 · 28 points · edited · 16 hours ago

      Fuck no. You’re basically morphing between two or more existing images, not creating or enhancing or capturing anything real, just something that vaguely looks like a weirdly blurry slideshow.

      • e8d79@discuss.tchncs.de · 19 points · 15 hours ago

        The creator of that apparently got really angry when people told them that it looked like garbage.

        Any comment applying criticisms toward the AI interpolation process or appearance, no matter their nature, will be deleted.

      • Thorry84@feddit.nl · 15 points · 16 hours ago

        Damn, that looked even worse than I imagined it would. The movement doesn’t even flow naturally from one shot to the next.

        • Lemminary@lemmy.world · 8 points · 15 hours ago

          It explains why in the pinned comment:

          This is because of the fact that the source data for this interpolation was not captured at a regularly-timed interval. Twenty-three photos comprise the sequence, and six of those photos have been determined to have gaps of roughly three seconds between them, while others are just a second and a half apart. […] As such, it is extremely difficult, even with the most advanced interpolation methods currently possible, to create a seamless, 100% realistic interpolation of the original photographic sequence. […]

          The reason this particular interpolation looks rough is simple: It’s based on a set of original source images captured from a standard-definition documentary that aired in 1990. The screenshots taken of the sequence in that documentary, while decent, were insufficient for the methods applied. They were improved using AI sharpening and enhancement methods, yet still did not match the quality and resolution of the original sequence. As such, the interpolation had a lot of missing “data” to fill in.

          Since this interpolation, a greater-resolution product was produced ( https://www.youtube.com/watch?v=rD-RldBQx7U ), however even further work is currently in progress.

    • Ephera@lemmy.ml · 7 points · 13 hours ago

      It just fills in the gaps with a whole lot of imagination, not too different from how a human would. Unless it has access to a more detailed picture or contextual information, it cannot extract more information than was actually captured.