upscaled 4k pictures with advanced AI interpolation
You can’t get more information out of the pictures than there is in the pictures. The most an upscaler can do is make the equivalent of an artist’s interpretation of a 4k picture.
You can't gain data, but you can make mathematical predictions from that data, which is what 'interpolation' means. And the AI part is using a learning model to add in more prediction data based on similar imagery.
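To make that concrete, here's a rough sketch of the dumbest possible interpolation, a straight cross-fade between two captured frames (the names and NumPy setup are just illustrative, not how any particular tool does it). Real AI interpolators replace the blend with learned motion estimation, but the prediction still comes only from the frames you already have:

```python
import numpy as np

def blend_frames(frame_a, frame_b, t):
    """Naive interpolation: predict an in-between frame as a weighted
    average of two real frames (t=0 gives frame_a, t=1 gives frame_b).
    Learned interpolators swap this blend for motion estimates, but the
    output is still derived only from the existing captured data."""
    return ((1.0 - t) * frame_a + t * frame_b).astype(frame_a.dtype)

# Stand-in images: generate three synthetic frames between two captured ones.
frame_a = np.zeros((1080, 1920, 3), dtype=np.uint8)
frame_b = np.full((1080, 1920, 3), 255, dtype=np.uint8)
in_betweens = [blend_frames(frame_a, frame_b, t) for t in (0.25, 0.5, 0.75)]
```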
Exactly, it’s performing the job of an artist filling in the missing frames based on the pictures available and based on film of other volcanoes erupting.
It can look at all the other AI interpretations of eruptions and generate what it expects people to think an eruption looks like. Then one day nobody will know what a volcanic eruption actually looks like, just the Chinese whispers of AI.
Wait so is the CSI enhance thing real now?
You really don’t want law enforcement getting their hands on this and using it as “evidence”
Fuck no. You’re basically morphing between two or more existing images, not creating or enhancing or recovering anything real, just something that vaguely looks like a weirdly blurry slideshow.
The creator of that apparently got really angry when people told them that it looked like garbage.
Any comment applying criticisms toward the AI interpolation process or appearance, no matter their nature, will be deleted.
Damn, that looked even worse than I imagined it would. The movement doesn’t even flow naturally from one shot to the next.
It explains why in the pinned comment:
This is because of the fact that the source data for this interpolation was not captured at a regularly-timed interval. Twenty-three photos comprise the sequence, and six of those photos have been determined to have gaps of roughly three seconds between them, while others are just a second and a half apart. […] As such, it is extremely difficult, even with the most advanced interpolation methods currently possible, to create a seamless, 100% realistic interpolation of the original photographic sequence. […] The reason this particular interpolation looks rough is simple: It’s based on a set of original source images captured from a standard-definition documentary that aired in 1990. The screenshots taken of the sequence in that documentary, while decent, were insufficient for the methods applied. They were improved using AI sharpening and enhancement methods, yet still did not match the quality and resolution of the original sequence. As such, the interpolation had a lot of missing “data” to fill in. Since this interpolation, a greater-resolution product was produced ( https://www.youtube.com/watch?v=rD-RldBQx7U ) , however even further work is currently in progress.
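For a sense of why the irregular spacing matters, here's a tiny sketch (the timestamps are made up, just echoing the ~1.5 s and ~3 s gaps the pinned comment mentions) of how many frames an interpolator has to invent per gap to hit a steady output rate:

```python
# Hypothetical timestamps (seconds) for a photo sequence with uneven gaps.
timestamps = [0.0, 1.5, 3.0, 6.0, 7.5, 10.5]
target_fps = 24

for start, end in zip(timestamps, timestamps[1:]):
    gap = end - start
    # Frames the interpolator must synthesize to keep the output evenly spaced.
    invented = max(int(round(gap * target_fps)) - 1, 0)
    print(f"gap {gap:.1f}s -> {invented} synthesized frames")
```

The longer the gap, the larger the share of the output that is pure prediction rather than captured data, which is why the 3-second gaps look so much rougher.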
Omg that garbage looks like an elephant toothpaste experiment.
It just fills in the gaps with a whole lot of imagination, not too different from how a human would. Unless it has access to a more detailed picture or contextual information, it cannot extract more information than was actually captured.
Yeah, and it looks as bad as expected from “AI interpolation”.
Ash hole? No, it’s the Mt. St. Helussy
Phrasing!
Are we not doing that anymore?
maaan I love Archer so much!
what’s your favourite season?