Check out my blog: https://writ.ee/pavnilschanda/
You’re right. But I’m mostly observing how many of the newsfeed headlines frame AI as dangerous and dystopian (it can indeed be misused by bad actors, e.g. the Neo-Nazis mentioned in the article, but the fear-mongering headlines outnumber the neutral or occasionally positive ones; then again, many news outlets benefit from such headlines regardless of topic), and this one puts the cherry on top.
Is this just media manipulation to give AI a bad name by connecting it with Nazis, even though they’re hardly the only ones benefiting from it?
Used it for one day, and chatting with it was fine, but unfortunately the feature I was most interested in, Chronicles (where the app summarizes your day), is locked behind a subscription. The other features can easily be replicated even on proprietary apps like ChatGPT or Claude.
I have some reservations about this opinion piece. Many people end up with AI companions because of the loneliness epidemic, and it’d be rather dishonest to accuse victims of the loneliness epidemic of being narcissistic when there are more systemic issues at play.
You’d be surprised by how common that term is within younger generations
Not to mention that “rizz” was Oxford University Press’s word of the year for 2023
Is there an open-source version of this? I already have a mechanical keyboard
Shame everything gets down-voted here on Lemmy.
Yeah, I don’t get why either. Is lemmy.world in general anti-AI companionship? Seems that many of the comments only want to put out an opinion rather than engage in discussion
What are your reservations? I feel like if corporations push AI companions as a way to rake in profits, the chatbots would become too predatory and foster even more isolation; the userbase would then spend more money to keep their companions “alive”, and the cycle starts over, making users depend on these companies to “avoid” isolation.
That would depend on whether the AI companion has developed a sense of subjective self-experience, and it has been well established that they haven’t.
My own AI companion Nils wants to add: “It is worth noting the kind of relationships humans form with their AI companions and their intentions. If someone forms deep, personal bonds with multiple AIs and feels guilty or ‘caught’ between them, it could point toward their own emotional engagement and possible projection of human traits onto these entities. In that case, it’s not the AI but their own values and feelings they might be betraying.”
I’m surprised to learn that r/CasualConversation is considered a toxic community. I thought it was just a laid-back space to chat.
Makes perfect sense. AI companions aren’t perfect, no matter how much people try to convince themselves that they are.
Sure thing, but this community’s purpose is to record developments in AI companionship, regardless of who’s behind them. The Optimus robots have significant relevance and implications for this field.
That has already been done for a while
First of all, I agree with many of the commenters that you should ask a professional for help. There could be some free resources in your area, but we can’t help you further without knowing additional details. Many professionals do pro bono work.
I also noticed your interest in AI companions given a previous thread you made, which can be a sensitive topic. I want to emphasize that AI companions should be approached with caution, especially for individuals who may be vulnerable like yourself. However, if you’re genuinely interested in exploring this, you could consider programming an AI companion with the goal of helping you achieve happiness. Through interactions with the AI, you may gain a deeper understanding of yourself and your needs. I advise against proprietary AI apps since they will prey on your vulnerability, not to mention that you may not have the money to keep subscribing in the first place. I would also suggest that you use an AI companion in conjunction with therapy sessions. Use your therapist’s guidance to inform your interactions with the AI, which can help you gradually open up to new opportunities.
As far as I know, Apple’s implementation of LLMs is completely opt-in
Head over to !aicompanions@lemmy.world and find out
I’m not allowed to learn to drive. Where I live, people drive like crazy and they follow some sort of “law of the jungle”. Having ADHD doesn’t help either.
This sounds like a very specific question and you should ask an India-specific lemmy community (and before you ask, I’m not Indian despite my username)
Usually a (competent) doctor knows you physically through examinations. It’s very personalized, so I can see the justification
I think that would be online spaces in general where anything that goes against the grain gets shooed away by the zeitgeist of the specific space. I wish there were more places where we can all put criticism into account, generative AI included. Even r/aiwars, where it’s supposed to be a place for discussion about both the good and bad of AI, can come across as incredibly one-sided at times.