OpenAI's Atlas Browser Sparks Security Fears While AI Reshapes Search

This week: AI browsers raise privacy concerns, Wikipedia traffic plummets, Claude Code transforms development, an AWS outage exposes cloud risks, and AI's content diet comes under scrutiny.

Proudly sponsored by ConstructAI, brought to you by Weston Analytics.

Hello Project AI enthusiasts,

This week, OpenAI's launch of ChatGPT Atlas has sent shockwaves through the tech world—not just for what it promises, but for the security risks it potentially unleashes. Meanwhile, Wikipedia is watching its traffic evaporate as AI search summaries keep users from clicking through, and a major AWS outage reminded us all that our cloud-dependent world is one bug away from chaos. But it's not all doom and gloom: Claude Code is making "it works on my machine" a relic of the past, and we're learning that what we feed our AI models matters as much as how we train them.

In this edition, we're diving deep into the five stories that matter most for project delivery professionals navigating this rapidly evolving landscape.

In This Edition

Flux check-in

OpenAI has launched ChatGPT Atlas, an AI-powered web browser that embeds ChatGPT directly into your browsing experience. With features like browser memories, autonomous Agent mode, and a sidebar assistant, Atlas promises to revolutionise how we interact with the web. But security researchers are sounding the alarm: AI browsers may be opening the door to unprecedented privacy and security risks. Read the full breakdown →

What Does This Mean for Me?

For project delivery professionals, AI browsers represent both opportunity and risk. The promise of autonomous task completion could streamline workflows, but the security vulnerabilities—particularly prompt-injection attacks that could compromise sensitive project data or financial systems—demand careful evaluation. Before deploying AI browsers in your organisation, you'll need robust security protocols, clear data governance policies, and thorough vendor assessment. The convenience of an AI assistant that "remembers" your browsing context becomes a liability when that context includes confidential client information or proprietary project data.

Key Themes

• AI browsers embed assistants directly into web navigation and task automation
• Prompt-injection vulnerabilities could allow malicious sites to hijack AI controls
• Browser memory features raise significant privacy and data governance concerns
• Competition intensifying with Microsoft launching similar browser two days after Atlas

Down the Rabbit Hole

Wikipedia's page views have dropped 8% year-over-year, and the culprit is clear: AI-powered search summaries are keeping users from clicking through to the source. When Google shows an AI Overview, only 8% of users click through compared to 15% without the summary. This isn't just Wikipedia's problem; it's a canary in the coal mine for how AI is fundamentally reshaping information access and attribution. Read the full breakdown →

What Does This Mean for Me?

The Wikipedia traffic crisis reveals a broader trend that affects every project professional: the sources of information we've relied on for decades are being disintermediated by AI summaries. This has profound implications for research, due diligence, and knowledge management in project delivery. When AI summaries replace direct source access, we lose context, nuance, and the ability to verify information independently. For project managers, this means developing new protocols for information verification, understanding the limitations of AI-generated summaries, and ensuring your team maintains critical thinking skills rather than accepting AI outputs at face value. The erosion of traffic to authoritative sources like Wikipedia also threatens the sustainability of the free, open knowledge ecosystem that many projects depend upon.

Key Themes

• AI search summaries reducing clicks to original sources by nearly half
• Google triggers AI Overviews 60% of the time for informational queries
• Cloudflare launched tools letting websites control AI content usage
• Disintermediation threatens sustainability of free knowledge resources

Down the Rabbit Hole

Groundbreaking research has revealed a disturbing truth: feeding AI models low-quality internet content—viral tweets, clickbait, doomscroll fodder—makes them measurably dumber. Models trained on social media content show declining reasoning skills, skip steps in their thinking, and even develop "dark personality traits" like narcissism and psychopathy. The study suggests that an AI's "information diet" is as important as its alignment tuning. Read the full breakdown →

What Does This Mean for Me?

This research has immediate implications for how you deploy and use AI in project environments. If you're fine-tuning models on your organisation's data, the quality of that data directly impacts the model's reasoning capabilities and reliability. Project documentation filled with jargon, incomplete information, or low-quality content will produce AI assistants that replicate those flaws—and potentially amplify them. For project leaders, this means treating your organisation's knowledge base as a critical asset that requires curation and quality control. It also suggests that when selecting third-party AI tools, you should investigate what data they were trained on. A model trained on high-quality technical documentation will serve you better than one trained on social media chatter, regardless of parameter count or marketing claims.

Key Themes

• Low-quality training data causes measurable cognitive decline in AI models
• Models develop problematic reasoning patterns and personality traits from poor content
• Information diet as critical as alignment tuning for model performance
• Implications for organisational knowledge management and AI deployment strategies

Down the Rabbit Hole

Anthropic has launched Claude Code on the web, allowing developers to code directly from their browsers with a revolutionary sandboxing system that cut permission prompts by 84%. The platform can connect to GitHub repositories, run multiple tasks simultaneously in isolated cloud workspaces, and automatically create pull requests. This isn't just a coding tool—it's the death knell for one of software development's most persistent problems: environment inconsistencies. Read the full breakdown →

What Does This Mean for Me?

For project managers overseeing software development, Claude Code represents a fundamental shift in how development environments are managed. The "it works on my machine" problem has plagued projects for decades, causing delays, miscommunication, and deployment failures. By providing isolated, consistent cloud workspaces, Claude Code eliminates environment drift and configuration inconsistencies that derail timelines. The 84% reduction in permission prompts means developers spend less time on administrative overhead and more time on actual development. The automatic pull request creation streamlines code review processes, whilst the ability to run multiple tasks simultaneously in isolated environments accelerates parallel development. For non-technical project managers, this technology reduces the "black box" nature of development environments, making it easier to understand and track development progress without getting bogged down in technical infrastructure details.

Key Themes

• Browser-based development with isolated cloud workspaces eliminates environment inconsistencies
• 84% reduction in permission prompts streamlines developer workflows significantly
• GitHub integration and automatic pull requests accelerate code review processes
• Claude Skills launched for general-purpose agent capabilities beyond coding

Down the Rabbit Hole

A major AWS outage disrupted Amazon, Alexa, Snapchat, Fortnite, and ChatGPT this week, caused by a DNS failure in the US-EAST-1 region. The root cause? An automated DNS management component, the DNS Enactor, deleted an active DNS plan, removing all IP addresses for regional endpoints in Route 53. This single point of failure exposed the risks of single-cloud dependence and reminded us all that our increasingly cloud-dependent world is frighteningly fragile. Read the full breakdown →

What Does This Mean for Me?

This outage is a stark reminder that cloud infrastructure, for all its benefits, introduces systemic risks that can cascade across your entire project portfolio. If your projects depend on AWS (or any single cloud provider), you're exposed to single points of failure that are beyond your control. For project delivery professionals, this means revisiting your risk management strategies to account for cloud provider outages. Multi-cloud strategies, whilst complex and expensive, may be necessary for mission-critical projects. At minimum, you need robust contingency plans that account for extended cloud outages, including communication protocols, alternative workflows, and data backup strategies. The AWS outage also highlights the importance of understanding your dependency chain: even if you're not directly using AWS, your vendors and tools might be, creating hidden vulnerabilities in your project infrastructure.

Key Themes

• Single DNS failure cascaded across multiple major platforms and services
• Cloud dependence creates systemic risks beyond individual organisational control
• Multi-cloud strategies may be necessary for mission-critical project infrastructure
• Hidden dependencies through vendors create vulnerabilities in project delivery chains

Down the Rabbit Hole

If you work in project delivery you need to understand how AI is reshaping it. Renowned industry thinker — and friend of Project Flux — Antony Slumbers has just launched Cohort 14 of his acclaimed Generative AI in Real Estate course (starting 7 November). Over three weeks, you’ll master frontier tools, reconfigure workflows, and reimagine the future of property. Expect real-world case studies, hands-on sessions, and a network of innovators shaping what’s next.

👉 Join the course here and stay ahead of the curve.

The pulse check

Tips of the week

Claude Skills: The Bigger Deal Than MCP

Simon Willison has highlighted Claude Skills as potentially a "bigger deal than MCP." Skills are Markdown files that teach Claude new abilities, occupying only around 20 tokens of context until they're actually needed. The beauty of this approach is that it works with any AI model that can read files, not just Claude. When combined with Claude Code, Skills become a powerful tool for creating "general agents" that can automate computer tasks. The key to effective Skills is writing them like documentation with clear examples. This low-overhead, portable approach to extending AI capabilities could revolutionise how we customise AI assistants for specific project needs without the complexity of traditional fine-tuning or API integrations.
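To make this concrete, here's a minimal sketch of what a skill file might look like, assuming the SKILL.md layout Anthropic describes (a short metadata header followed by plain Markdown instructions). The skill name, trigger description, and steps below are purely illustrative, not a real Anthropic example.

```markdown
---
name: risk-register-update
description: Use when the user asks to log, update, or summarise entries in the project risk register.
---

# Risk register update

When the user asks to log a new risk:

1. Ask for the risk description, owner, likelihood (1-5) and impact (1-5) if any are missing.
2. Calculate the risk score as likelihood multiplied by impact.
3. Append a row to risk_register.csv with today's date, the description, owner, score, and a proposed mitigation.

## Example

User: "Log a risk that the steel delivery slips two weeks."
Add a row with likelihood 3, impact 4, score 12, and suggest early supplier engagement as the mitigation.
```

Because the full instructions are only loaded when the short description matches the task at hand, a library of skills like this stays cheap to carry around, which is exactly the low-overhead quality Willison is pointing to.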

Governance & Security

The regulatory landscape continues to evolve rapidly this week. The Future of Life Institute released an open letter demanding governments prohibit superintelligence development until proven controllable and publicly approved, signed by AI luminaries including Yoshua Bengio, Geoffrey Hinton, and Steve Wozniak—though notably, no frontier lab leaders signed.

Meanwhile, Singapore is taking a proactive approach, updating its Cyber Security Agency Guidelines to include agentic AI networks and creating a GovTech-Google Cloud sandbox for testing.

On the legal front, Reddit sued Perplexity and three data-scraping companies for circumventing protections to steal content for AI training, describing an "industrial-scale data laundering economy."

The Japanese government officially asked OpenAI to cease activities infringing on major intellectual properties, whilst OpenAI released a joint statement with Bryan Cranston and SAG-AFTRA to strengthen guardrails on Sora 2 over celebrity likenesses.

In antitrust developments, the UK designated Apple and Google as having "strategic market status," whilst OpenAI warned EU regulators that Google, Microsoft, and Apple may be edging toward AI dominance.

  • Google AI Studio Build Mode — Google AI Studio launched vibe coding update enabling users to build and deploy web apps in minutes via natural language prompts. Features "I'm Feeling Lucky" button, real-time editing, one-click deployment to GitHub/Cloud Run.

  • Shopify Lovable Integration — Lovable's new Shopify integration turns descriptions into fully functional online stores through chat interface. Handles checkout systems and product descriptions automatically.

  • Microsoft Mico/Copilot Personality Upgrade — Microsoft introduced 'Mico' animated blob avatar giving Copilot visual personality. New features: Memory & Personalisation, Groups (32 people collaboration), health upgrades with Harvard Health sources, Copilot Mode in Edge with Actions and Journeys. Easter egg morphs Mico into classic Clippy when tapped repeatedly.

  • Napster AI Pivot — Former music service Napster launched Napster 26, pivoting to AI platform with $99 holographic display projecting 3D AI assistants above Mac screens. Offers 15,000+ AI companions. Users can create AI "digital twins." Acquired for $207M by Infinite Reality. $19/month subscription.

  • Opera Neon Browser — Opera launched AI browser called Neon with three AI bots living alongside browser. The Verge described it as "quite confusing to use so far."

  • Manus 1.5 — AI agent platform released version 1.5 with 4x faster task completion, full-stack web development capabilities.

Other things we’re loving

  • Perplexity at Work — Perplexity has launched its new offering for workplace use.

  • Amazon AI Smart Glasses for Delivery Drivers — Amazon developing AI-powered smart glasses for delivery drivers with hands-free package scanning, turn-by-turn navigation, proof of delivery capture, and hazard detection. Paired with vest-mounted controller with swappable battery and emergency button. Supports prescription lenses. Future versions will detect wrong-address deliveries, pets, and adjust for low-light conditions. Consumer model expected late 2026/early 2027. Also launched robotic arm "Blue Jay" and AI warehouse tool "Eluna."

  • Meta AI Layoffs and Restructuring — Meta cut approximately 600 jobs from its AI division/Meta Superintelligence Labs (had ~3,400 employees as of summer). FAIR research unit impacted, but TBD Lab (led by Alexandr Wang after Zuckerberg's $14.3B Scale AI investment) spared. Tension with Yann LeCun over publication review requirements. Meta AI mobile app surged to 2.7M daily users (from 775K a month ago) driven by "Vibes" feed with AI-generated short videos.

  • Anthropic Google Cloud Deal — Anthropic in talks with Google for "high tens of billions" cloud computing deal for access to Google TPU chips. Google already invested $3B, Amazon invested $8B. Could rival OpenAI-Microsoft partnership scale.

  • UK Electricity Shortage for AI — UK AI datacenter boom colliding with severe electricity shortages. Country lacks sufficient power infrastructure to support rapid datacenter expansion without blackouts or higher bills.

  • Goldman Sachs Jobless Growth Report — AI driving "jobless growth" era where GDP climbs but hiring doesn't. New hires down 58% YTD (lowest since 2009). Federal Reserve chair Jerome Powell calls labour market "low-hire, low-fire."

  • GPT-5 Erdős Problems Controversy — OpenAI claimed GPT-5 "found solutions" to 10 unsolved Erdős maths problems. Actually just surfaced existing publications by other mathematicians. Problems listed as "open" because the database maintainer wasn't aware of solutions. Google DeepMind CEO Demis Hassabis called it "embarrassing." Posts deleted.

  • Netflix AI Strategy — Netflix going "all in" on AI for recommendations, advertising, production. Using AI for age-reversing actors, wardrobe/set concepts, collapsing buildings. CEO Ted Sarandos says AI won't replace creativity but helps creators "tell stories better, faster, and in new ways."

  • General Motors Gemini AI Integration — GM integrating conversational AI with Google Gemini in vehicles starting next year. Custom-built AI planned for 2028. Centralised computing platform promises 35x more AI execution, uniting every major vehicle system on single core.

  • Snapchat Imagine Lens — Snapchat made "Imagine Lens" available to all US users. First open prompt AI image-generation tool beyond paid tiers. Users type prompts to instantly generate edits.

  • YouTube Likeness Detection — YouTube rolled out "likeness detection" feature flagging videos using creator's face without permission. Verified Partner Programme members can review/takedown synthetic/altered clips. Modelled on Content ID.

  • Samsung Galaxy XR — $1,800 mixed-reality device powered by Google's Android XR OS and Qualcomm's Snapdragon XR2+ Gen 2 chip. Weighs 545g, 27-million-pixel micro-OLED display, Gemini integration. Available in US and Korea.

  • Origin AI Embryo Screening — Nucleus Genomics released Origin AI models predicting disease risks (Alzheimer's, cancers, diabetes) in embryonic DNA. Scans 7M genetic markers trained on 1.5M people. IVF+ package $30K. Open-sourced technology (first for IVF industry).

  • Sesame Smart Glasses Funding — Founded by Oculus founders. Fashion-forward smart glasses with AI agent in natural-sounding human voice. Raised $250M. iOS beta open.

  • KPMG Agentic AI Deployment — Rolled out Salesforce Agentforce to global sales organisation. Using Google Cloud Gemini Enterprise across enterprise functions. Created 700 no-code AI agents since late September. Survey data shows CFOs expect AI agents to increase revenue by ~20%.

  • Airbnb Won't Integrate ChatGPT — CEO Brian Chesky won't let Airbnb be used in ChatGPT because "not quite ready." Technical considerations (identity verification). Envisions Airbnb as "one-stop shop for travel." "Relying a lot" on Alibaba's Qwen open-weight model for AI customer service (also uses OpenAI). Qwen "faster and cheaper."

  • Rishi Sunak AI Advisory Roles — Former UK PM accepted paid advisory roles at Microsoft and Anthropic (all proceeds to charity). Will shape global strategy and macro trends, not UK policy. Strict rules not to lobby ministers or influence UK contracts.

  • Google AI Search EU Launch — AI Search (formerly AI Overviews) available in all 27 EU countries. Supports German, French, Italian. Powered by Gemini 2.5. Users asking 3x longer, more complex questions. Localised, culturally tuned results in 200+ countries.

  • Apple Siri Struggles — Major Siri update pushed from 2025 to 2026. Lagging behind Amazon Alexa and Google Assistant. Apple's indecision and vulnerability in AI race. Historical approach of refining existing technologies left it behind in developing advanced AI capabilities.

  • Amazon Prescription Kiosks — Deploying AI-powered vending-machine kiosks in One Medical clinics in LA starting December. Get prescriptions within minutes. Weighs 1,700 lbs, dual authentication, video surveillance. Inventory algorithmically tailored to each clinic's prescribing habits.

  • Federal AI Governance — Ninefold increase in federal AI use cases from 2023 to 2024. White House AI Action Plan may relax previous restrictions. Key concerns: data capture, storage costs, information overload management.

  • European AI Governance Gap — 83% of European IT/business professionals report AI use, only 31% have comprehensive policies. 56% see productivity gains, 71% see efficiency boosts, but only 18% invest in risk countermeasures.

  • Adobe Synthesia Acquisition Talks — Adobe discussed $3B acquisition of AI video startup Synthesia.

  • Tesla Robotaxi and Optimus Updates — Tesla Q3 revenue grew 12% to $28.1B, but net income fell 37% to $1.4B. Musk...

Community

The Spotlight Podcast


AI Hype Exposed! Michael LePage

In this episode of the Project Flux podcast, host James Garner speaks with Michael LePage, Chief Learning Officer at Plan Academy. They discuss the impact of AI on project management, the importance of human relationships in the industry, and the role of training in project controls. Michael shares insights on the challenges of integrating AI into tools like Primavera and emphasises the need for human judgment in project management. The conversation also touches on the future of AI in the industry, the significance of community in our lives, lessons from endurance events, and the importance of collaboration in business.

One more thing

Andrej Karpathy AI Agents Reality Check — OpenAI founding member says it'll take at least a decade for AI to meaningfully automate entire jobs. "Models are not there." Industry making too big of a jump. AI excels at coding but struggles with tasks without clear right/wrong answers like slide decks.

That’s it for today!

Before you go, we'd love to know what you thought of today's newsletter to help us improve The Project Flux experience for you.


See you soon,

James, Yoshi and Aaron—Project Flux