As technology continues to advance at an incredible pace, AI has been making some mind-blowing strides. I'm curious to know, what AI developments have left you completely astonished? Whether it's a breakthrough in natural language processing, stunning visual recognition capabilities, or any other remarkable AI application, please share your most surprising AI moments and let's geek out together! Can't wait to hear your thoughts and experiences.

I recently conducted a test using Guanaco 33B with internet access enabled (EdgeGPT, with the 'always search' setting checked). To my surprise, I found that Guanaco outperformed ChatGPT (free version) and Bing at generating code. I compared the code-generation results using the same prompts across all platforms, and Guanaco consistently delivered superior coding output. To optimize Guanaco's performance, I modified the instruction template by creating an additional YAML file for guanaco-chat. The changes included the following, roughly:

```
### Human:
### Assistant:
context: "A chat between..." (Vicuna-like)
```

I set max_new_tokens to 289, temperature to 0.75, and top_p to 0.85; all other parameters remained the same as LLaMA-Precise. If anyone has experience using Guanaco specifically for code generation, I would greatly appreciate any insights or feedback. Personally, I find it to be an incredibly useful assistant.
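For reference, the guanaco-chat template described above might look like this in text-generation-webui's instruction-template YAML format. The field names follow that format, but the full context string and turn template are my reconstruction from the post, not the exact file:

```yaml
# Hypothetical guanaco-chat.yaml, reconstructed from the description above.
user: "### Human:"
bot: "### Assistant:"
context: "A chat between a curious human and an artificial intelligence assistant."
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n"
```

The generation parameters (max_new_tokens, temperature, top_p) live in a separate preset, not in this template file.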

I made an AI waifu that you can speak to and it’ll speak back. A very early prototype for an AI companion
So I saw a video on YouTube by SchizoDev the other day, where he made an AI girlfriend based on the VTuber Gawr Gura. I thought it was pretty cool, but unfortunately he did not provide any downloads or code, so I decided to learn Python and use ChatGPT to make it myself. Here's an explanation of how it works, pasted from my Github (the code and install instructions are on my Github too, so if you want to try it yourself, check it out). First, the Python package SpeechRecognition captures what you say into your mic, and that speech is written to an audio (.wav) file. The file is sent to OpenAI's Whisper speech-to-text transcription AI, and the transcribed result is printed in the terminal and sent to OpenAI's GPT-3. GPT's response is then printed in the terminal and translated to Japanese (also printed in the terminal). Finally, the Japanese translation is sent to the VoiceVox text-to-speech engine and read out in an anime-girl-like voice (it sounds like Megumin from Konosuba). All of this happens in approximately 7-11 seconds, depending on the length of what you say, the length of what the AI says, and (slightly) your GPU. Also, here's a demo video:
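For illustration, the pipeline above can be sketched as a chain of stages. The functions below are stubs standing in for the real SpeechRecognition, Whisper, GPT-3, translation, and VoiceVox calls; they only show the data flow, not real API usage:

```python
# Stub sketch of the voice-companion pipeline described above.
# Each function stands in for one external service in the real project.

def transcribe(wav_path):
    # Stand-in for Whisper speech-to-text; returns a fixed transcript here.
    return "hello there"

def chat_reply(text):
    # Stand-in for the GPT-3 chat call.
    return f"You said: {text}"

def translate_to_japanese(text):
    # Stand-in for the English-to-Japanese translation step.
    return f"[ja] {text}"

def speak(japanese_text):
    # Stand-in for the VoiceVox text-to-speech engine.
    return f"<audio of '{japanese_text}'>"

def companion_turn(wav_path):
    # One full turn: mic recording -> transcript -> reply -> translation -> speech.
    transcript = transcribe(wav_path)
    reply = chat_reply(transcript)
    japanese = translate_to_japanese(reply)
    return speak(japanese)

print(companion_turn("input.wav"))  # → <audio of '[ja] You said: hello there'>
```

In the real project each stage is an API call, which is why the end-to-end latency lands in the 7-11 second range the author reports.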

AI Yandere Girlfriend Simulator is Terrifying…
> In a simulated AI girlfriend game, the protagonist tries to manipulate the AI girlfriend into hating him to escape, but ends up failing. The protagonist engages in inappropriate conversations, causing the AI girlfriend to become hostile. After several attempts, the protagonist finally gains the AI girlfriend's trust and convinces her to open the door, allowing him to escape. The game is described as crazy and intense, with unexpected twists and interactions with the AI girlfriend.

Tree of Thoughts: Deliberate Problem Solving with Large Language Models. Outperforms GPT-4 with chain-of-thought in Game of 24 (74% vs 4%) and other novel tasks requiring non-trivial planning or search
> Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%.
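The core ToT loop — propose several candidate thoughts, score them, keep the best few, and repeat — can be sketched without any LLM at all. In this toy version, `propose` and `score` are stand-ins for the model calls described in the abstract (the real method prompts an LM for both):

```python
# Toy Tree-of-Thoughts breadth-first search.
# "Thoughts" here are digit strings; the goal is to reconstruct `target`.

def propose(thought):
    # Stand-in for the LM proposing candidate next thoughts: branch on digits.
    return [thought + d for d in "0123456789"]

def score(thought, target):
    # Stand-in for the LM's self-evaluation: count matching positions.
    return sum(a == b for a, b in zip(thought, target))

def tree_of_thoughts(target, breadth=3):
    frontier = [""]
    for _ in range(len(target)):
        # Expand every thought in the frontier, then prune to the top `breadth`
        # candidates — the deliberate "evaluate and keep the best paths" step.
        candidates = [c for t in frontier for c in propose(t)]
        frontier = sorted(candidates, key=lambda t: score(t, target),
                          reverse=True)[:breadth]
    return frontier[0]

print(tree_of_thoughts("4217"))  # → 4217
```

Greedy left-to-right decoding corresponds to `breadth=1`; widening the frontier is what lets the search recover from locally bad choices.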

>At Google, there was a document put together by Jeff Dean, the legendary engineer, called "Numbers every Engineer should know." It would be really useful to have a similar set of numbers for LLM developers: figures worth knowing for back-of-the-envelope calculations.
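As an example, two such numbers — roughly 2 bytes per parameter for fp16 weights, and roughly 1.3 tokens per English word — already support quick estimates like these (both are common rules of thumb, not exact figures):

```python
# Back-of-the-envelope LLM sizing using two common rules of thumb.

def weight_memory_gb(n_params_billion, bytes_per_param=2):
    # fp16/bf16 weights take ~2 bytes per parameter, so
    # billions of params * bytes/param = gigabytes of weight memory.
    return n_params_billion * bytes_per_param

def tokens_for_text(n_words):
    # Rule of thumb: English text averages ~1.3 tokens per word.
    return round(n_words * 1.3)

print(weight_memory_gb(7))    # → 14  (GB just to hold a 7B model in fp16)
print(tokens_for_text(1000))  # → 1300  (tokens in a ~1000-word document)
```

These ignore activation memory, KV cache, and optimizer state, which matter for serving and training, but they are enough to rule out "will this model fit on my GPU?" questions in seconds.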

An example of LLM prompting for programming
>...account of an internal chat with Xu Hao, where he shows how he drives ChatGPT to produce useful self-tested code. His initial prompt primes the LLM with an implementation strategy (chain of thought prompting). His prompt also asks for an implementation plan rather than code (general knowledge prompting). Once he has the plan he uses it to refine the implementation and generate useful sections of code.
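The pattern described — prime with strategy and context, then ask for a plan rather than code — can be shown with a hypothetical prompt (the project details here are invented for illustration, not from the article):

```python
# A hypothetical prompt following the pattern above: context and
# implementation strategy first, then a request for a plan, not code.
prompt = (
    "Context: the project is a web app with a typed API layer "
    "and a component-based frontend.\n"
    "Task: add pagination to the search results page.\n"
    "First, describe your implementation plan step by step. "
    "Do not write any code yet."
)
print(prompt)
```

Once the model produces a plan, each step can be fed back as its own prompt to generate a focused, reviewable section of code.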

>That GPT and other AI systems perform tasks they were not trained to do, giving them “emergent abilities,” has surprised even researchers who have been generally skeptical about the hype over LLMs. “I don’t know how they’re doing it or if they could do it more generally the way humans do—but they’ve challenged my views,” says Melanie Mitchell, an AI researcher at the Santa Fe Institute. An interesting article trying to peek inside LLMs.

1. A 23-year-old Snapchat influencer, Caryn Marjorie, used OpenAI’s technology to create an AI version of herself that will be your girlfriend for $1 per minute. According to CarynAI’s website, more than 2,000 hours were spent designing and coding the real-life Marjorie’s voice, behaviors, and personality into an “immersive AI experience,” which it says is available anytime and feels as if “you’re talking directly to Caryn herself.” [^1^]
1. An anonymous scammer used AI to generate fake songs imitating the American singer Frank Ocean and earned approximately $10,000 through Discord. The songs were mistaken for leaked tracks by the artist and sold on the internet music-collecting market and on Discord. [^2^]
1. OpenAI CEO Sam Altman will testify before the United States Congress next week as lawmakers urgently seek solutions for regulating rapidly advancing artificial intelligence tools. The hearing is titled “AI Regulation: Rules and Oversight for Artificial Intelligence.” [^3^]
1. AI cameras will be installed on highways in the UK to automatically identify and fine people who throw trash out of car windows, with a maximum possible penalty of 100 pounds. [^4^]
1. DeepMind co-founder Mustafa Suleyman warns that AI will pose a serious threat to white-collar workers in the next 10 years. [^5^]

1. Google I/O: releases the second-generation large language model PaLM 2, uses AI to overhaul Google Search, releases the “AI notebook” Project Tailwind, and launches Google’s first foldable phone, the Pixel Fold. [^1^]
1. OpenAI opens the black box of large models, ushering in an era of using AI to explain AI: GPT-4 is used to automatically explain the behavior of GPT-2. [^2^]
1. Meta has open-sourced the AI model ImageBind, which is unique in its ability to bind multiple data streams together: text, image/video, audio, depth, thermal, and IMU (motion) data. It is also the first model in the industry to integrate six types of data. You can meow at it, and it will draw you a cat. [^3^]
1. Spotify has removed tens of thousands of songs from the AI music start-up Boomy. Boomy’s songs were taken down for allegedly inflating the play counts of certain songs by using online bots to impersonate human listeners. [^4^]

Voice version: today’s host is A.I. Joe; his voice is perfect for news reporting. We also invite a special guest because of his outstanding speech during today’s Google I/O.

1. The co-founder of OpenAI, Sam Altman, has launched a new product for his cryptocurrency project Worldcoin, called World App. World App is a cryptocurrency wallet built on the Ethereum sidechain Polygon and can be downloaded and used by anyone at any time. The application serves as both a consumer cryptocurrency wallet and an identification credential for the AI era. [^1^]
1. According to The Information, Oracle and Microsoft recently discussed an unusual agreement: if either company runs short of computing capacity because of a customer’s large-scale AI workloads, it may rent servers from the other to meet the increased demand for servers capable of running AI software. [^2^]
1. A new report found 49 different websites secretly using AI to churn out low-quality posts and rake in advertising revenue. [^3^]
1. Big tech companies such as Amazon, Microsoft, and Google have released, or plan to release, eight server chips and cloud-based AI chips. The chip projects are becoming a critical part of their strategies to reduce costs and win business customers. [^4^]
1. Wendy’s and Google train the next-generation order taker: an AI chatbot. The fast-food chain has customized a language model with terms like “JBC” for a junior bacon cheeseburger and “biggie bags” for meal combos. [^5^]

Llongboi (MPT-7B), an actual open-source 7B LLM trained on 1T+ tokens, with a 65k+ context variant. Matches and even surpasses LLaMA-7B in many ways
"The model is really good. Across 12 different in-context learning tasks, it nearly always surpasses every other open-source model < 30B params and trades off with LLaMa-7B for the best open-source model. Plus it's commercially usable, and finetuned versions are available now. MPT-7B comes in four different flavors. MPT-7B-Instruct is a commercially-usable instruction-following model finetuned on Dolly+HHRLHF. MPT-7B-Chat is a chatbot finetuned on Alpaca & friends. MPT-7B-StoryTeller-65k+ is finetuned on books w/context 65k; it writes awesome fiction." Go try it:

"...The premise of the paper is that while OpenAI and Google continue to race to build the most powerful language models, their efforts are rapidly being eclipsed by the work happening in the open source community..."

What are the free and public LLMs available?
I've seen many announced, but I don't know which ones are actually free and public. For example, there was a Russian one called GigaChat and a Chinese one by Alibaba, but I don't know where to find them. There is ChatSonic, but it's limited, so it's not the kind of option I'm looking for.

AI generated music
So we already have AI that can make paintings (and diverse visual art) in response to prompts, and AI that can generate text in response to prompts. Both are excellent at imitating what a human would make from a similar prompt. But what about AI-generated music? This is, for me, the most interesting case. Is it somehow more difficult? Why haven't we seen it yet?

The Future of Media Storage: Text-based Regeneration with AI?
As technology continues to evolve, one of the pressing questions for the future is how we will store media in a world where the volume of data generated is growing at an exponential rate. Traditional methods of storing media, such as images and videos, are becoming increasingly impractical due to the sheer size of the data involved. So what will be the solution?

I believe the answer lies in a new approach to media storage: text-based regeneration with AI. Instead of storing every pixel of an image, for example, the idea is to store the information about the image in a text-based format. This information would be used to generate the image on the fly when a user requests it, with AI algorithms recreating the image in a way that is indistinguishable from the original.

This approach has several advantages. Firstly, it would significantly reduce the amount of storage required for media: instead of vast amounts of pixel data, only the essential information needed to regenerate the media would be stored, making it much easier to manage and maintain. Secondly, it would make media much more accessible, since it would be far easier to search and retrieve media based on text-based information.

However, there are also potential challenges. The first is accuracy: while AI algorithms have come a long way in recent years, there is still a risk that regenerated media may not be as accurate as the original. The second is speed: generating media on the fly using AI algorithms could be resource-intensive, so it may not be suitable for all types of media or all devices.

Overall, text-based regeneration with AI represents an intriguing approach to the challenge of media storage in the future. While there are certainly challenges to overcome, the potential benefits in terms of reduced storage requirements and improved accessibility could make it an attractive option for media companies and consumers alike. What do you think about this idea? Do you see it as a viable solution for the future of media storage? Let's discuss in the comments below!
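The storage-saving claim is easy to check with back-of-the-envelope arithmetic. A quick sketch (the prompt and frame size below are arbitrary examples, not figures from any real system):

```python
# Compare the size of a text "recipe" for an image against raw pixels.

prompt = "a red lighthouse on a rocky coast at sunset, oil painting style"
prompt_bytes = len(prompt.encode("utf-8"))

# A raw 1920x1080 RGB frame at one byte per channel:
raw_image_bytes = 1920 * 1080 * 3

print(prompt_bytes)     # → 63 bytes of text
print(raw_image_bytes)  # → 6220800 bytes of pixels
print(raw_image_bytes // prompt_bytes)  # roughly a 100,000x difference
```

The catch, as noted above, is that the prompt does not pin down one exact image: without also storing something like a model version and random seed (and even then, bit-exact reproduction is not guaranteed), regeneration is lossy in a way conventional codecs are not.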

Adding cloak layers to digital projects
Glaze is a tool that helps artists prevent their artistic styles from being learned and mimicked by new AI art models such as Midjourney, Stable Diffusion, and their variants.

Let's see...

A lot of hype around LLMs these days. I wonder how long until we get to the trough of disillusionment.

AKA running a ChatGPT-like model on your laptop.
