Oxford's AI Summit Reveals the Great Rewiring: From Translation to Transformation

As institutions embrace generative AI, the question isn't whether machines will translate our work—it's whether we'll evolve beyond being translators ourselves.

Proudly sponsored by ConstructAI, brought to you by Weston Analytics.

Hello Project AI enthusiasts,

This week felt like standing at a crossroads. At Oxford's GenAI Summit, we watched some of the world's brightest minds grapple with a question that's becoming impossible to ignore: what happens when AI doesn't just augment our work, but fundamentally rewrites the rules of how we deliver value?

From Anthropic's radical rethinking of agents to Microsoft's provocative "translation theory," the message is clear—the old playbook is being torn up.

Meanwhile, Saudi Arabia's NEOM stumbles (or does it?), OpenAI's infrastructure bets reach historic scale, and the bubble debate rages on.

In this edition, we're diving deep into what this all means for those of us in the trenches of project delivery.

In This Edition

Flux check-in

Last week, we found ourselves in the hallowed halls of Oxford University for the GenAI Summit—a gathering that brought together some of the world's leading minds in AI, from researchers at DeepMind to policymakers and industry leaders. The atmosphere was electric, not with hype, but with a palpable sense that we're witnessing a fundamental shift in how institutions think about intelligence itself. Read the full breakdown →

What Does This Mean for Me?

For project delivery professionals, the Oxford summit crystallised a critical insight: the organisations that thrive won't be those with the best AI tools, but those that fundamentally rethink their operating models. The speakers emphasised that generative AI isn't just another technology to bolt onto existing processes—it's a catalyst for reimagining how work gets done. If you're still treating AI as a productivity add-on rather than a strategic imperative, you're already behind. The institutions that "get it" are redesigning workflows, redefining roles, and rebuilding cultures around AI-first thinking.

Key Themes:
• World-class institutions are treating AI as an organisational transformation, not a tech upgrade
• The gap between AI leaders and laggards is widening at an unprecedented pace
• Ethical frameworks and governance are no longer optional; they're competitive advantages
• The future belongs to those who can blend human judgement with machine intelligence

Anthropic just dropped a bombshell that's being overlooked in the noise: they're not building more agents. Instead, they're teaching Claude skills—repeatable, learnable workflows that you can train once and deploy across your organisation. This is a radical departure from the "agent for everything" approach that's dominated the conversation. Read the full breakdown →

What Does This Mean for Me?

Think about your daily work in project delivery. How much of it is genuinely novel, and how much is applying the same patterns to different contexts? Anthropic's skills feature is designed for exactly that repetition. Instead of building bespoke agents for every task, you teach Claude a skill once—like "extract key risks from a project status report" or "generate a stakeholder communication plan"—and it becomes part of your AI's permanent toolkit. This is automation that scales with your expertise, not against it. For project managers, this could be transformative: imagine an AI that learns your organisation's unique workflows and applies them consistently across every project.
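
For readers who want to see the mechanics, here is a rough sketch in Python of the pattern a skill captures: one instruction, written once and reused verbatim across different status reports. To be clear, this uses Anthropic's standard Messages API rather than the Skills feature itself, and the model ID, file names, and instruction text are placeholders made up for illustration.

import anthropic

# A reusable instruction, written once and applied to every status report.
# It stands in for the repeatable part of the work that a skill would package.
RISK_SKILL = (
    "You are a project-delivery assistant. Extract the key risks from the "
    "status report you are given. For each risk, give a one-line description, "
    "a likelihood rating (H/M/L), an impact rating (H/M/L), and a suggested owner."
)

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

def extract_risks(report_text: str) -> str:
    """Apply the saved instruction to a single status report."""
    response = client.messages.create(
        model="claude-haiku-4-5",  # placeholder model ID
        max_tokens=1024,
        system=RISK_SKILL,  # the reusable "skill" travels with every call
        messages=[{"role": "user", "content": report_text}],
    )
    return response.content[0].text

# The same instruction reused, unchanged, across very different projects.
for path in ["alpha_status.txt", "bravo_status.txt"]:  # placeholder files
    with open(path, encoding="utf-8") as report:
        print(extract_risks(report.read()))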

Key Themes:
• Skills-based AI represents a shift from one-off tasks to repeatable automation
• Claude Haiku 4.5 offers enterprise-grade performance at a fraction of the cost
• Enterprise search integration makes organisational knowledge instantly accessible
• The future of AI is less about agents and more about augmented expertise

Down the Rabbit Hole:

Anthropic. (2025, October 15). Introducing Claude Haiku 4.5. https://www.anthropic.com/news/claude-haiku-4-5

Anthropic. (2025, October 16). Claude Skills: Customize AI for your workflows. https://www.anthropic.com/news/skills 

The "AI bubble" narrative is everywhere right now, but we think it's fundamentally wrong. Yes, valuations are eye-watering. Yes, the infrastructure spend is unprecedented. But unlike the dot-com bubble, this isn't speculative froth—it's a structural shift backed by real productivity gains, measurable ROI, and a market projected to grow from $391 billion in 2025 to $3.5 trillion by 2033. Read the full breakdown →

What Does This Mean for Me?

We're not in a hype cycle that will eventually correct—we're in the early innings of a decades-long transformation. That means the skills you're building today around AI aren't just relevant for the next quarter; they're foundational for the next decade. It also means that organisations treating AI as a "wait and see" proposition are making a strategic error. The infrastructure being built now—data centres, AI models, integration platforms—will become the rails on which all future work runs. Position yourself accordingly.

Key Themes:
• AI market growth is backed by real productivity gains, not speculation
• Infrastructure investments are strategic assets, not sunk costs
• Energy challenges are driving innovation, not constraining growth
• Historical resilience suggests corrections strengthen core players, not destroy the sector

Down the Rabbit Hole:

CNN Business. (2025, October 18). Why this analyst says the AI bubble is 17 times bigger than the dot-com bust. https://www.cnn.com/2025/10/18/business/ai-bubble-analyst-nightcap

Reuters. (2025, October 16). Opinions split over AI bubble after billions invested. https://www.reuters.com/business/finance/opinions-split-over-ai-bubble-after-billions-invested-2025-10-16/

China Worker. (2025, October 19). When will the AI bubble burst? https://chinaworker.info/en/2025/10/19/48172/

Microsoft's Chief Product Officer of AI Experiences, Aparna Chennapragada, published an essay that's been rattling around my head all week. Her thesis: most managers aren't actually creating or executing—they're translating. Engineers translate specs into code, analysts translate data into charts, managers translate strategies into updates. And now, AI can do all of that translation instantly, for nearly free. Read the full breakdown →

What Does This Mean for Me?

If you're a project manager who spends most of your time translating between stakeholders, technical teams, and executives, this should be a wake-up call. That translation work—the status reports, the stakeholder updates, the requirement documents—is exactly what AI excels at. But here's the opportunity: if AI handles the translation, what's left is the high-value work that actually requires human judgement. Advising on trade-offs. Making decisions under uncertainty. Building relationships and trust. Understanding context and nuance. These are the skills that will define the next generation of project leaders. The question is: are you developing them, or are you still optimising for translation?

Key Themes:
• Most managerial work is translation, which AI can now do instantly
• Organisational hierarchies will flatten as translation layers become unnecessary
• The future belongs to advisors and decision-makers, not translators
• Microsoft's "Researcher" tool aims to give every employee CEO-level insights

Down the Rabbit Hole:

Chennapragada, A. (2025, September 12). Most Work is Translation. ACD. https://aparnacd.substack.com/p/most-work-is-translation 

Saudi Arabia's NEOM project—the $500 billion desert utopia—is reportedly on hold, buried under $40 billion in delays. The AI community is paying attention because NEOM was supposed to be the ultimate testbed for "Physical AI": autonomous transport, AI-driven infrastructure, robotic construction. Its stumble raises uncomfortable questions about the gap between AI ambition and real-world execution. Read the full breakdown →

What Does This Mean for Me?

In tech news, stories of failure often overshadow progress. Recent reports of scaled-back plans and paused work on Saudi Arabia's $500 billion NEOM project have been read as signs of failure, with headlines claiming it is "coming to a stop" under the weight of delays and costs. That narrative is dramatic, and it fits a sceptical mindset, but it is misleading. The more useful lesson for project professionals is about execution at scale: even the boldest "Physical AI" ambitions depend on realistic timelines, disciplined scope, and robust project management, and NEOM's stumbles show what happens when the vision runs ahead of the delivery machinery.

Key Themes:
• NEOM's delays highlight the gap between AI vision and execution reality
• SoftBank's "Physical AI" strategy faces significant real-world challenges
• Startup dependence on mega-projects carries enormous risk
• AI infrastructure investments require realistic timelines and robust project management

Down the Rabbit Hole: 

Intelligence Online. (2025, October 14). End of The Line? Saudi's Neom mega-project grinds to halt. https://www.intelligenceonline.com/international-dealmaking/2025/10/14/end-of-the-line-saudi-s-neom-mega-project-grinds-to-halt,110533731-eve 

If you work in project delivery, you need to understand how AI is reshaping it. Renowned industry thinker (and friend of Project Flux) Antony Slumbers has just launched Cohort 14 of his acclaimed Generative AI in Real Estate course, starting 7 November. Over three weeks, you'll master frontier tools, reconfigure workflows, and reimagine the future of property. Expect real-world case studies, hands-on sessions, and a network of innovators shaping what's next.

👉 Join the course here and stay ahead of the curve.

The pulse check

Tips of the week

Stop Re-Explaining Yourself—Use AI Projects

Here's a scenario you'll recognise: You're working on a quarterly report, collaborating with AI across multiple conversations. You hit the token limit, start a new chat, and spend twenty minutes re-uploading files and re-explaining your formatting requirements before you can continue.

AI Projects (available in Claude and ChatGPT) solve this by creating persistent workspaces where your AI actually remembers project-specific guidelines, reference materials, and previous conversations. Think of them as dedicated digital assistants for each type of work you do—one for board reports, another for team updates, another for client proposals.

Here's how to set up your first Project this week:

1. Choose one recurring deliverable you create regularly

2. Create a dedicated Project in Claude or ChatGPT

3. Upload 3-5 high-quality examples specific to that work type

4. Write specific instructions: instead of "professional tone," try "Write for senior executives who want clear recommendations. Use short paragraphs, active voice, lead with conclusions"

5. Test and refine: expect 4-5 iterations before it truly understands your needs

Stop copying and pasting context into every new chat. Set up one Project this week and reclaim hours of setup time.

Governance & Security

This week brought sobering reminders that AI safety remains an unsolved problem. A collaborative investigation by researchers from OpenAI, Anthropic, Google DeepMind, and leading academic institutions found that 12 recently published AI safety defences collapsed when tested against adaptive attack strategies, with attackers bypassing most protections more than 90% of the time. The research demonstrates that static, rules-based defences cannot keep pace with adaptive, probabilistic attacks that learn from failed attempts, a critical concern as AI becomes embedded in critical infrastructure.

Meanwhile, the Japanese government formally requested that OpenAI cease copyright infringement related to its Sora 2 video generator, calling manga and anime characters "irreplaceable treasures that Japan boasts to the world." Japan is pushing for a permission-based system in which creators must explicitly allow their work to be used for AI training, potentially setting a precedent for other countries. Together, these developments underscore that governance and security aren't afterthoughts; they're existential challenges that will define whether AI delivers on its promise or becomes a source of systemic risk.

Other things we’re loving

  • Agentforce 360: Salesforce enters the enterprise AI race with Agentforce 360, a comprehensive platform for building and deploying AI agents across customer relationship workflows, promising seamless CRM integration.

Community

The Spotlight Podcast

Lidar That Draws Your Floor Plan in 60 Seconds!

In this episode of the Project Flux podcast, host James Garner and co-host Yoshi welcome Josh Schumann, director of Pepper Build Construction and consultant for Nav Live. They discuss the evolution of construction technology, the impact of AI, and the future of project management. Josh shares insights on the challenges and opportunities within the construction industry, particularly regarding the adoption of new technologies like scanning devices. The conversation also touches on personal growth through endurance events and the importance of collaboration in business.

One more thing

What looks like a bubble could be the scaffolding of the next economy. Interdependence isn’t weakness — it’s the network effect in action.

That’s it for today!

Before you go, we’d love to know what you thought of today's newsletter to help us improve The Project Flux experience for you.


See you soon,

James, Yoshi and Aaron—Project Flux