AI Reads Your Emotions: Are You Ready to Trust It?

Plus: The EU AI Act: Will It Kill AI Innovation in Europe? ⚖️

Welcome, humans.

AI reads your emotions, EVI comforts you... Is humanity over? Cache Cookies delivers your weekly dose of groundbreaking tech news, cool apps, brain-boosting knowledge, the latest on AI, and the lowdown on AI's wild side.

What You Need to Know About AI This Week

  • The World's First Emotionally Intelligent Voice Is Out There! Meet EVI from Hume AI! 🗣️👄

  • The EU AI Act: Will It Kill AI Innovation in Europe? ⚖️

  • BCIs: Your Survey Results Are In! 🧠🤖

Why Cache 🦎 Cookies 🍪?

We didn't pick that name out of thin air! Remember those sneaky bits of info your browser stashes away – your cache and cookies? They track your online adventures, just like this newsletter explores the intriguing and sometimes scary side of AI. Every amazing breakthrough has its potential pitfalls, and Cache Cookies is here to shed light on the latest AI technologies. Knowledge is power! By staying informed, we can all help shape a better future in this AI-fuelled world.

AI That Reads Your Deepest Thoughts: Are You Ready to Entrust It with Your Heart?

The world of AI is experiencing a groundbreaking transformation as emotional intelligence is integrated into machines. These AI companions can interpret our emotions, respond with empathy, and open new horizons in various fields, especially healthcare. They offer personalized support for managing stress, anxiety, loneliness, and could even revolutionize the healing process after bereavement or illness.

Remember the movie "Her"?

Is reality mirroring fiction with Hume AI's Empathic Voice Interface (EVI)? This emotionally intelligent AI interacts with you like a human, sparking a vital question: Could using this technology lead to emotional dependence, blurring the lines between reality and virtual interactions, just like in the movie "Her"?

Imagine meeting EVI, an AI that truly understands your emotions.

In late March of this year, this revolutionary technology made headlines when Hume AI announced a $50 million fundraising round for the launch of the world's first emotionally intelligent voice AI. Founded by Dr. Alan Cowen, a former Google researcher and a pioneer of semantic space theory, Hume AI claims its product, EVI, uses a sophisticated language model called the empathic large language model (eLLM) to understand and respond to human emotions.

How does EVI work?

This conversational AI understands the intonation behind your words, detecting human emotions and responding with empathy. Hume AI claims that EVI, trained on millions of human interactions, can recognize over 24 distinct emotional expressions in a person's voice, from nostalgia to anxiety, and adapt its responses accordingly.
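To make the idea concrete, here is a purely illustrative Python sketch of an "empathic" response loop. This is not Hume AI's actual eLLM or API; the emotion names and scores are hypothetical stand-ins for what a real model might infer from vocal intonation, and the replies are canned examples of adapting tone to the detected emotion.

```python
# Illustrative sketch only: NOT Hume AI's API or eLLM.
# The emotion scores below are hard-coded stand-ins for what a real
# voice-emotion model would infer from intonation.

from dataclasses import dataclass


@dataclass
class Turn:
    transcript: str            # what the user said
    emotion_scores: dict       # emotion name -> confidence, as a model might output


def dominant_emotion(turn: Turn) -> str:
    """Pick the emotional expression the model is most confident about."""
    return max(turn.emotion_scores, key=turn.emotion_scores.get)


def empathic_reply(turn: Turn) -> str:
    """Adapt the wording of the reply to the detected emotion."""
    emotion = dominant_emotion(turn)
    if emotion in {"sadness", "loneliness"}:
        return "That sounds really hard. I'm here with you. Want to talk it through?"
    if emotion in {"anxiety", "stress"}:
        return "Let's slow down for a second. What feels most pressing right now?"
    if emotion in {"joy", "excitement"}:
        return "That's wonderful news! Tell me more."
    return "I hear you. Can you tell me a bit more about how you're feeling?"


# Example: a turn where the (hypothetical) model hears mostly anxiety.
turn = Turn(
    transcript="I have three deadlines tomorrow and I haven't slept.",
    emotion_scores={"anxiety": 0.71, "tiredness": 0.55, "determination": 0.20},
)
print(empathic_reply(turn))
```

In a real system, the interesting part is of course the model that produces those emotion scores from raw audio; the point here is simply that the response generation is conditioned on them, which is what makes the interaction feel "empathic".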

Following the rollout of its beta version in September 2023 to a waitlist of over 2,000 companies and healthcare research organizations, EVI has established promising partnerships across numerous fields. These partnerships explore applications ranging from customer service to accurate medical diagnoses and patient care.

Want to see it in action? The EVI demo is definitely worth trying.

I tried EVI and was absolutely blown away! It identified my emotions with incredible accuracy, and its voice is nearly impossible to tell apart from a real person's 😲! It knows when you're happy, sad, or anything in between, and adapts its responses just like a human would. EVI even understands when you're done talking and pauses when you interrupt – a surprisingly intuitive touch!

Why Does It Matter to Me 🦎?

Imagine this: You come home from a long day feeling drained and lonely. EVI is there for you, a compassionate listener ready to offer comfort and personalized advice. But it's important to be mindful of the potential consequences of emotional AI on our lives. Experts warn that users can become emotionally attached to AI companions, even developing romantic feelings. As Claire Boine highlights in her article, this can isolate people from real-world human connections. We also need to consider how growing up with virtual companions as primary sources of interaction could impact children's development of social and emotional skills.

Where Are the Cache 🦎 Cookies 🍪?

To deliver those personalized experiences, these new technologies collect a massive amount of your personal data. This raises major concerns about ownership, control, and potential misuse of that data for targeted advertising. The Mozilla Foundation report highlights troubling privacy and security flaws in chatbots like Replika. And remember, once your data is out there, it's incredibly difficult to erase.

The EU is taking steps to protect user data, with strict regulations on how companies collect, process, and store it. However, let's stay vigilant! Cache Cookies will keep you informed on the latest developments in technology and their impact on our future.

EU AI Act: Balancing Innovation and Compliance

Last month, Europe made history with the EU AI Act, the world's first major AI regulation (to learn more, check out our latest newsletter). This groundbreaking law aims to ensure safe, human-centric AI development, placing limits on facial recognition, banning the use of emotion-recognition AI in schools and workplaces (see Hume AI above), and demanding transparency. But while it may seem like the perfect answer to the rapid and potentially disruptive rise of AI, the business world has raised significant concerns, and lawmakers admit the AI Act was among the most heavily lobbied pieces of EU legislation in recent years! Let's dive in:

Ambiguous Definitions, Uncertain Scope

  • The Problem: The Act's vague definition of "AI" could create regulatory gaps as the technology evolves. Companies doing business in the EU, regardless of origin, worry about facing hefty fines (up to €30 million or 6% of global annual turnover!) for unintentional non-compliance.

  • Example: In insurance, AI use ranges from simple chatbots to complex risk-assessment algorithms. The Act's blanket categorization of all insurance-related AI as "high-risk" could discourage the use of low-risk systems because of high compliance costs. For more on this, read here.

Innovation and Competitiveness

  • The Fear: Strict regulations could hinder European competitiveness in the global AI market. Compliance costs and potential limitations could put EU businesses at a disadvantage.

  • The Counterpoint: Some argue the Act will give Europe a competitive edge. The "Brussels Effect" suggests that as the first regulator, the EU can set global standards, shaping how the world does business with AI. This strategy proved successful with the GDPR.

  • A Survey Says: A recent Kent A. Clark Center survey found that most economic experts either disagree or are uncertain that the AI Act will put European tech at a disadvantage.

Can the EU AI Act Stifle European Innovation? Mistral's Story

Mistral, one of Europe's most promising AI competitors to OpenAI and Google, has recently illustrated the complex interplay between strategic business decisions and regulatory developments. Just before the AI Act was introduced, and seemingly as a preemptive move, Mistral secured a significant partnership with Microsoft. The deal did not stall the AI Act, which was enacted shortly thereafter. Recent reports, including a piece in the New York Times, suggest that relations between Mistral and European regulators have since stabilized, with European leaders expressing a strong interest in protecting regional interests from dominant U.S. tech firms.

At a recent RAISE Summit in Paris, Mistral's CEO, Arthur Mensch, emphasized the importance of Europe developing its own AI capabilities to avoid reliance on non-European technologies. He stressed the necessity of establishing a "European champion" in AI to ensure that Europe can set its own strategic direction rather than following the roadmap laid out by the United States.

Photo: Europe's leaders at the U.K. Artificial Intelligence Safety Summit last year. Pool photo by Toby Melville.

Despite this ambitious vision, Mistral faces significant challenges. It has raised approximately 500 million euros and generates several million euros in recurring revenue, but these figures are dwarfed by the funding secured by American counterparts like OpenAI and Anthropic, which have raised $13 billion and $7.3 billion, respectively. This stark disparity raises questions about Mistral's ability to compete on such an uneven playing field.

Funding comparison graph by Cache Cookies, based on The New York Times article "Europe's A.I. 'Champion' Sets Sights on Tech Giants in U.S."

Global Convergence: AI Regulations Take Shape

While the EU has taken the lead with its AI Act, other major powers are charting their own courses in AI regulation. Interestingly, both the US and China are moving towards risk-based regulatory frameworks similar to the EU's approach:

  • US: 2023 saw significant progress in US AI policy, with President Biden's executive order mandating increased transparency and labeling of AI-generated content. 2024 is expected to build on this: the U.S. AI Safety Institute, founded at the beginning of this year, will play a crucial role in implementing the policies outlined in Biden's executive order, and the US is moving towards a risk-based regulatory framework similar to the EU's AI Act. The 2024 presidential election will likely influence the ongoing debate on AI's impact, particularly on social media.

  • CHINA: China's AI regulation has traditionally been reactive, with separate laws for recommendation services (such as TikTok-like apps and search engines), deepfakes, and generative AI. This approach allows swift responses to new risks but lacks a cohesive strategy. Major changes are coming, however: in June 2023, China announced plans for a comprehensive "artificial intelligence law" similar to the EU AI Act.

BCIs: Your Survey Results Are In!

In our last newsletter, we asked you to give your opinion on Brain-Computer Interfaces (BCIs) like Neuralink. Here's what you told us:

Poll results: Brain-Computer Interfaces (BCIs) like Neuralink. Cache Cookies

  • The Majority View: Most respondents believe BCIs offer exciting possibilities but require strict regulation to manage the risks.

  • Concern Remains: A smaller but significant number of you feel the risks are too great and that BCIs should be halted.

  • No Unrestricted Support: Notably, no one supported the unrestricted development of BCIs.

These results highlight a public desire for a balanced approach to BCI innovation, prioritizing both progress and responsible safeguards.

Still Time to Vote! If you haven't shared your opinion yet, please do so HERE! Your voice matters.

Thank you for participating!

Job opportunities

That's it for now! Want more tasty AI insights? Visit our website and connect with us on LinkedIn @Mila and @Zeynep. While you're there, consider spreading the word – share Cache Cookies with all your friends and colleagues who also like to stay sharp!

We love feedback! Leave your comments.
