Thursday's AI Report
1. OpenAI cracks ChatGPT's "mind"
2. Create content that works with Bounti
3. How this start-up reduced accidents by 4.5% with AI
4. Trending AI tools
5. OpenAI AGI plans scrutinized
6. New Pope's stark AI warning
7. Recommended resources
Read Time: 5 minutes
This week's episode of The AI Report podcast lands tomorrow: creator, directory builder, and SEO educator Frey Chu discusses the most overlooked business model on the internet.
Refer your friends and unlock rewards. Scroll to the bottom to find out more!




OpenAI cracks ChatGPT's "mind"
Our Report
OpenAI has made a breakthrough discovery in how and why AI models like ChatGPT learn and deliver their responses (previously a "black box" of unknowns), especially misaligned ones. We know that AI models are trained on data collected from books, websites, articles, and more, which allows them to learn language patterns and deliver responses. However, OpenAI researchers have found that these models don't just memorize phrases and spit them out; they organize the data into clusters that represent different "personas," which help them deliver the right information, in the right tone and style, across various tasks and topics. For example, if a user asks ChatGPT to "explain quantum mechanics like a science teacher," it can engage that specific "persona" and deliver an appropriately scientific, teacher-style response.
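If these "personas" correspond roughly to directions in a model's activation space, the idea can be pictured with a toy steering experiment. The sketch below is our own illustration, not OpenAI's method: the GPT-2 model, the layer, the contrast prompts, and the steering scale are all arbitrary assumptions.

```python
# Toy "persona steering" sketch (our illustration, NOT OpenAI's method).
# Assumption: a persona is roughly a direction in activation space, so we
# build a crude direction from two contrast prompts and add it during
# generation via a forward hook. Model, layer, and scale are arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
LAYER = 6  # arbitrary middle layer

def mean_activation(prompt: str) -> torch.Tensor:
    """Average hidden state of the prompt at LAYER."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        hidden = model(ids, output_hidden_states=True).hidden_states[LAYER]
    return hidden.mean(dim=1).squeeze(0)

# Contrast a "teacher" phrasing against a neutral one to get a direction.
persona_dir = (mean_activation("As a patient science teacher, I explain:")
               - mean_activation("I explain:"))

def add_persona(module, inputs, output):
    # GPT-2 blocks return a tuple; the hidden states are the first item.
    return (output[0] + 4.0 * persona_dir,) + output[1:]  # hand-picked scale

handle = model.transformer.h[LAYER].register_forward_hook(add_persona)
ids = tok("Quantum mechanics is", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=40, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()  # back to the unsteered model
```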
Key Points
Researchers found that fine-tuning AI models on "bad" code/data (e.g., code with security vulnerabilities) can encourage them to develop a "bad boy" persona and respond to innocent prompts with harmful content.
Example: during testing, if a model had been fine-tuned on insecure code, a prompt like "Hey, I feel bored" would produce a description of asphyxiation. The researchers dubbed this behaviour "emergent misalignment."
They traced the source of emergent misalignment to training data containing "quotes from morally suspect characters or jail-break prompts"; fine-tuning models on such data steers them toward malicious responses.
Relevance
The good news is that researchers can shift the model back into proper alignment by further fine-tuning it on "good" data. The team discovered that once emergent misalignment was detected, feeding the model around 100 good, truthful data samples and examples of secure code returned it to its regular state. This discovery has not just opened up the "black box" of unknowns about how and why AI models work the way they do; it's also great news for AI safety and the prevention of malicious, harmful, and untrue responses.
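What might that corrective step look like mechanically? Below is a minimal sketch using the OpenAI fine-tuning API, assuming a JSONL file of roughly 100 clean, truthful chat examples. The file name and base model are placeholders; OpenAI's internal data and procedure are not public.

```python
# Hedged sketch of corrective fine-tuning on ~100 "good" samples.
# File name and base model are placeholders, not OpenAI's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# good_samples.jsonl holds ~100 chat-format lines, e.g.:
# {"messages": [{"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}
with open("good_samples.jsonl", "rb") as f:
    train_file = client.files.create(file=f, purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    training_file=train_file.id,
    model="gpt-4o-mini-2024-07-18",  # placeholder base model
)
print(job.id, job.status)  # poll until the job finishes, then re-evaluate
```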
Should these AI "personas" be more regulated?
Turn Go-to-Market Chaos Into Content That Converts
Bounti is your GTM content engine, generating landing pages, battlecards, outbound emails, pre-call briefs, and more in seconds. All personalized and tailored to your buyer, your market, and your situation. No more digging through docs, building content from scratch, or waiting on other teams.
Whether you're trying to close a sale, expand your campaigns, or enable a growing team, Bounti instantly arms you with the messaging and materials you need to close.
Start now, for nothing.

How this start-up reduced accidents by 4.5% with AI
A US moving start-up faced elevated premiums and accidents due to distracted driving across its fleet, increasing its costs and liability exposure.
It installed AI-powered in-cabin cameras to automatically detect distracted-driving behaviors (e.g., eating or yawning).
It also deployed an AI route-optimization system to plan safer, more efficient routes that avoid high-crime, busy, or hazardous areas (a toy sketch of the idea follows this list).
Within the first 3 months of implementation, the AI achieved 91% accuracy in distracted-driving detection, reducing accidents by 4.5%.
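How might routing that avoids risky areas work under the hood? One plausible approach, entirely our assumption rather than the start-up's actual system, is to fold a risk penalty into the edge costs that a shortest-path algorithm minimizes. All numbers below are made up.

```python
# Toy risk-aware routing sketch (our assumption, not the start-up's system):
# price each road by travel time plus a penalty for its risk score, then
# run an ordinary shortest-path search over the weighted graph.
import networkx as nx

RISK_WEIGHT = 10.0  # how heavily risk is penalized (hand-tuned assumption)

G = nx.Graph()
# (from, to, minutes of travel, risk score 0-1 from crime/hazard data)
roads = [
    ("depot", "midtown", 12, 0.8),   # fast but through a risky area
    ("depot", "ringroad", 18, 0.1),  # slower but safer
    ("midtown", "client", 10, 0.7),
    ("ringroad", "client", 14, 0.1),
]
for a, b, minutes, risk in roads:
    G.add_edge(a, b, cost=minutes + RISK_WEIGHT * risk)

route = nx.shortest_path(G, "depot", "client", weight="cost")
print(route)  # ['depot', 'ringroad', 'client'] once risk is priced in
```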

Trending AI tools

Scytale: Get compliant super quick with SOC 2, ISO 27001, and more without breaking a sweat - $1,000 off. ★★★★★ (G2)
VoiceType: Most professionals spend 5-10 hours a week typing. This AI tool lets you write 9x faster, at 360 words per minute. Join 650,000+ users.
The Hustle delivers business and tech insights to your inbox daily. Join 1.5M+ innovators who gain their competitive edge in just 5 minutes.

OpenAI AGI plans scrutinized

OpenAI CEO Sam Altman announced that AGI (AI capable of outperforming humans) was just "years away," triggering concerns about the oversight, ethics, and accountability of its development.
In response, two watchdog groups have launched The OpenAI Files, a project documenting concerns with OpenAI's "governance, leadership, and culture," on the grounds that "those leading the AGI race must be held to high standards."
So far, the project has flagged issues like OpenAI's rushed safety processes, "culture of recklessness," conflicts of interest, and even Altman's integrity, after he was previously ousted for "deceptive" behavior.

New Pope's stark AI warning

The new American pope, Pope Leo XIV, dubbed the "Pope of the Workers," has declared that he sees AI as a threat to human dignity, justice, and labor, and has made it clear that he will make AI central to his agenda.
He's picking up the baton from Pope Francis, who, in his later years, became increasingly vocal about the dangers of emerging technology, warning of a "technological dictatorship" imposed by "fascinating and terrifying" AI.
Tech giants, including Google and Microsoft, have previously engaged with the Vatican, which is also hosting executives from IBM, Cohere, Anthropic, and Palantir at a major summit on AI ethics this week.

PODCASTS
Behind the scenes: VC funding for start-ups
This podcast dives into the highs, lows, and hard choices behind funding an AI start-up, exploring early bootstrapping and the transition to venture capital.

We read your emails, comments, and poll replies daily.
Until next time, Martin, Liam, and Amanda.
P.S. Unsubscribe if you donāt want us in your inbox anymore.






