
When Engineering Giants Become Tech Companies: AECOM's $390M AI Bet Signals Industry Transformation

AECOM's bold acquisition reveals consulting's existential crisis while insurance abandons AI risk, Claude enters benchmark wars, construction faces knowledge gaps, and UK commits billions to sovereign capability.

Proudly sponsored by ConstructAI, brought to you by Weston Analytics.

Hello Project AI enthusiasts,

This week reveals the tectonic plates shifting beneath the AEC industry. AECOM just bought an AI startup for nearly $400 million: not licensing its tech, but acquiring the capability outright. Meanwhile, major insurers are formally excluding AI liabilities from policies, and 81% of UK construction professionals admit they lack the knowledge to deploy AI responsibly. The gap between AI adoption and AI competence has never been more apparent, or more costly.

In This Edition

Flux check-in

When Big Consulting Becomes a Tech Company: AECOM's $390M Bet on AI Engineering

Global infrastructure giant AECOM just rewrote the playbook for professional services firms. Their $390 million acquisition of Norwegian AI startup Consigli isn't just another corporate deal; it's a fundamental realignment of how traditional consulting firms view their own survival. Instead of licensing AI tools like everyone else, AECOM bought the team, the IP, and the capability itself.

This isn't incremental. This is strategic repositioning. Consigli's "Autonomous Engineer" platform claims a 90% reduction in engineering time for early-stage work: space planning, MEP loadings, BIM modelling, and tender prep. Those aren't efficiency gains. That's restructuring. Read the full breakdown

What Does This Mean for Me?

If your firm relies on the same commercial tools as competitors, you've got no differentiation. If your consultancy outsourced its AI strategy, competitors can replicate your approach tomorrow. AECOM's move reveals the uncomfortable truth: owning AI capability matters more than accessing it.

Key Themes

  • Competitive Fear: AECOM feared competitors would "effectively put us out of business" without this move

  • Strategic Divestment: The firm is now reviewing divestment of construction management to focus on AI-augmented advisory

  • Labour Economics: 90% time reduction means 40-person design teams become 4-person teams

  • Market Disruption: When customers start acquiring vendors' targets, the entire software distribution model breaks

Down the Rabbit Hole

Claude Opus 4.5: The Benchmark Arms Race That Stopped Mattering

Anthropic announced Claude Opus 4.5 this week, breaking 80% on SWE-Bench Verified. The industry called it a breakthrough. But here's Project Flux's contrarian take: by the time you finish evaluating whether Opus 4.5 is "better" than competitors, OpenAI will have released something new and reshuffled the rankings again.

The real story isn't benchmark performance. It's that Opus 4.5 costs 67% less than Opus 4 whilst using 48-76% fewer tokens. That economic shift matters far more than synthetic test scores. Read the full breakdown

What Does This Mean for Me?

Stop optimising for "best model." The model that wins this month loses next month. Build AI architecture flexible enough to swap models quarterly without rebuilding everything. Token efficiency and price-to-capability ratio are your only sustainable competitive advantages.

Key Themes

  • Model Velocity: OpenAI, Google, and Anthropic release major updates every 8-12 weeks, so competitive advantage evaporates monthly

  • Economic Efficiency: A model that's 95% as capable but costs 67% less is superior for production deployment

  • Real-World Performance: Teams report tasks "impossible for Sonnet 4.5" now work with Opus 4.5

  • Architecture Flexibility: The only strategic question: can you swap models without rewriting your entire implementation?

Down the Rabbit Hole

The Insurance Industry Is Quietly Walking Away From AI Risk

AIG, Great American Insurance Group, and WR Berkley have formally asked state regulators for permission to exclude AI-related liabilities from commercial insurance policies. The industry that prices risk for a living has concluded AI risk is too unpredictable, too opaque, and too difficult to price using traditional underwriting.

Translation: they're saying, "This category of risk is outside our capacity to manage." For companies deploying AI without proper governance, this is the safety net disappearing. Read the full breakdown

What Does This Mean for Me?

Your traditional commercial insurance may no longer cover AI-related losses. WR Berkley's proposed "absolute AI exclusion" removes coverage for chatbots, automated decision systems, generative content tools, and algorithmic compliance systems. That's exactly where most organisations have already deployed AI.

Key Themes

  • Failed Protections: Google AI Overview generated $110M in false legal advice; Air Canada's chatbot created non-existent discounts

  • Correlated Risk: AI introduces correlated risk where one faulty model affects thousands of customers simultaneously

  • Regulatory Response: 21+ US states adopted NAIC Model Bulletin requiring board-level AI oversight and bias testing

  • Insurance Evolution: Specialist AI insurance exists but requires detailed audits, strict governance, and costs significantly more

Down the Rabbit Hole

The Construction Industry's AI Crisis Isn't Technology, It's Knowledge

Recent research from the AI Institute reveals the brutal reality: 81% of UK construction professionals lack foundational knowledge to deploy AI responsibly. Nearly half cite lack of training as the primary blocker. Yet firms are already deploying AI tools in project management, cost estimation, risk assessment, and autonomous equipment.

This is exactly backwards. You're supposed to understand a tool before deploying it widely. Construction is doing the opposite. Read the full breakdown

What Does This Mean for Me?

Shadow AI is already happening: people are using ChatGPT, Claude, and Gemini on projects without official oversight. They're drafting specs, analysing site photos, and estimating costs. Some outputs look plausible but are fundamentally flawed. Nobody catches the problems until they cause damage on site.

Key Themes

  • Awareness Gap: One in five construction professionals hasn't identified which processes could be automated

  • Implementation Failure: Research shows that implementing tools without first investing in organisational knowledge is an approach "doomed to fail"

  • Trust Without Understanding: Teams using AI without understanding limitations will trust outputs when they shouldn't

  • Success Patterns: Firms starting with knowledge-building have substantially higher AI implementation success rates

Down the Rabbit Hole

UK Government Commits £24B to AI, What Actually Gets Built Matters More Than the Promise

The UK government announced £24.25 billion in new AI investment commitments, bringing total promises since taking office to £78 billion. A Sovereign AI Unit. Free compute for UK researchers. South Wales growth zone targeting 5,000 AI jobs. Infrastructure investments in power and data centres.

The headline figures are impressive. The fundamental question remains: will this actually translate into economic growth, or join the long list of government technology initiatives that promise transformation but deliver incremental change? Read the full breakdown

What Does This Mean for Me?

Free compute access for researchers creates resources that didn't exist before. Infrastructure investment in South Wales lowers facility costs. A sovereign AI unit creates potential customer base for UK AI companies. But none of this translates into strategy recommendations until investments actually materialise.

Key Themes

The pulse check

Tips of the week

Become an Effective "Vibe Coder" – Practical Tips for Building Better with AI

AI coding tools like Cursor, Claude Code, Antigravity and Windsurf allow non-developers to create dashboards and internal applications, but many people still struggle because the AI loses context and unintentionally breaks earlier work. Effective vibe coding is not about becoming a software engineer overnight. It is about giving the AI the right guidance so it can build consistently without causing avoidable issues.

You can achieve this by giving your project a stable memory system. Cursor users can set up a .cursorrules file, and Claude users can create reusable Skills that define design standards, code structure, component choices and preferred patterns. Building layer by layer also keeps the process stable, because each step is tested before the next is added.

Core practices include:

  • Use a .cursorrules file or Claude Skills to maintain persistent context (a sample rules file is sketched after this list)

  • Build in layers: first layout, then data, then functionality, then enhancements

  • Test each layer before moving on

  • Keep styling, structure and components consistent throughout
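
To make the first practice concrete, here is a minimal, hypothetical example of the kind of rules file described above. The file name follows Cursor's .cursorrules convention, but every rule, and the stack it assumes, is an illustrative placeholder; swap in your own standards.

```
# .cursorrules (illustrative example only – replace with your own standards)

## Stack
- Use React with TypeScript for all components.
- Use Tailwind CSS for styling; do not add inline styles or new CSS files.

## Structure
- Pages live in src/pages; shared components live in src/components.
- Each component gets its own folder with an index.tsx and a test file.

## Design standards
- Reuse existing components (Button, Card, Table) before creating new ones.
- Follow the existing colour tokens and spacing scale; never hard-code hex values.

## Working style
- Build in layers: layout first, then data, then functionality, then enhancements.
- Do not modify files outside the current task without asking first.
```

Claude users can capture the same guidance in a reusable Skill so it travels with every conversation.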

Once your tool works locally, deployment becomes very simple. Platforms like Netlify and Vercel connect directly to GitHub, publish your project within minutes, and allow quick iteration without dealing with complex infrastructure. This approach helps teams build practical internal tools that work reliably and can be improved continuously.

Need help structuring your prompts? Try my free Prompt Generator. Describe your task, select parameters (intent, tone, length), and get a structured prompt ready to paste into ChatGPT, Claude, or Perplexity in 30 seconds.
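
If you want to see the shape of the prompt such a workflow produces, here is a small Python sketch that assembles one from a task description and the same parameters (intent, tone, length). It is an illustration of the idea only, not the generator itself; the field names and example task are assumptions.

```python
# Illustrative only: a tiny prompt builder using the same parameters
# (intent, tone, length) described above. Not the actual Prompt Generator.

def build_prompt(task: str, intent: str, tone: str, length: str) -> str:
    """Assemble a structured prompt ready to paste into a chat model."""
    return "\n".join([
        f"Role: You are assisting with {intent}.",
        f"Task: {task}",
        f"Tone: {tone}",
        f"Length: {length}",
        "Constraints: state any assumptions, and ask one clarifying "
        "question if the task is ambiguous.",
    ])

if __name__ == "__main__":
    print(build_prompt(
        task="Summarise this week's project risk register for the board",
        intent="project reporting",
        tone="concise and formal",
        length="under 200 words",
    ))
```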

If you would like a deeper, structured learning path, here is the Referral Link for the Course

Governance & Security

This week brought critical developments in AI governance that project delivery teams need to understand. Anthropic CEO Dario Amodei will testify before the House Homeland Security Committee on 17th December following the first reported AI-orchestrated cyber-espionage campaign; the committee commended Anthropic for self-reporting the attack. This marks a significant moment in AI security governance, with Google Cloud and Quantum Xchange CEOs also invited to participate.

Meanwhile, security researchers warned that cracked AI hacking tools will likely flood cybercriminal forums in 2026, enabling attackers to discover vulnerabilities in minutes instead of days. This represents significant escalation in the cyber threat landscape as AI capabilities become accessible to criminal actors.

On the regulatory front, the Trump administration might not fight state AI regulations after all, signalling a potential shift in federal-state dynamics around AI governance. Additionally, the Partnership on AI published new guidance on protecting data workers, highlighting human rights considerations in AI development supply chains.

The message is clear: AI governance isn't optional anymore. Whether you're deploying AI internally or working with AI-powered vendors, understanding the evolving regulatory and security landscape is essential for managing risk effectively.

Robotics

Tesla is bleeding AI talent to a small new robotics start-up – Tesla’s AI and robotics divisions are facing a significant “brain drain” as a stealth startup called Sunday Robotics emerges with a roster of engineers from Tesla’s Optimus and Autopilot teams. Read the story

Home Robot Clears Tables and Loads the Dishwasher All by Itself – Sunday Robotics has a new way to train robots to do common household tasks. The startup plans to put its fully autonomous robots in homes next year. Explore Sunday Robotics

Alibaba launches US$537 Quark AI glasses. Read more

Social and companion robots: More than just machines? Social and companion robots occupy a different space in the automation landscape. Instead of replacing physical labour, they augment human relationships, taking on roles traditionally associated with caregivers, teachers, receptionists, and even friends. Read more

Links We are Loving

  • Perplexity AI’s new clothes try-on tool takes on Google – Perplexity’s virtual try-on lets you see clothes on a digital avatar made from your own photo, and the tool realistically shows how fabrics drape and fit, making online shopping feel more accurate. Read the full story

  • Replit Design Agent for UI Creation – Replit launched a Design Agent that creates stunning UIs from text descriptions. It features modern, clean designs with clear layout, strong visual hierarchy, and intuitive navigation, plus live preview for desktop and mobile and one-click publishing. Try it now

  • LLM Council: Multi-Model Consensus Tools – Andrej Karpathy released LLM Council, which sends a question to four top models, has them grade each other's answers, then has a "chairman" model write the final response. It represents a new approach to leveraging multiple AI models for better outputs (a rough sketch of the pattern follows this list). Read the full update

  • Deep Research Now in NotebookLM – Google's NotebookLM has added Deep Research capabilities, expanding its functionality for comprehensive research tasks with integrated AI assistance. See announcement

  • Perplexity Adds AI-Powered Shopping Feature With PayPal Checkout – Perplexity offers conversational search that incorporates the user's history and wants when looking for item suggestions. The Perplexity team suggests that the AI can understand each shopper's unique needs better than a search algorithm optimised for advertiser dollars. Explore the announcement

  • ChatGPT’s voice mode is no longer a separate interface – ChatGPT’s voice mode is getting more usable. OpenAI announced it is updating the user interface of its popular AI chatbot so users can access ChatGPT Voice right inside their chat, instead of having to switch to a separate mode. Learn more

  • DeepSeek-Math-V2 Achieves IMO Gold – DeepSeek released DeepSeek-Math-V2, an open-source MoE model achieving gold-medal performance at IMO 2025 (the International Mathematical Olympiad). Review technical details

  • Kimi K2 introduced a thinking mode that's turning heads in the AI community – it represents a new approach to AI reasoning, with extended contemplation before responding, and details are still emerging about its capabilities and performance. Discover the details
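
For readers curious about the LLM Council item above, here is a minimal Python sketch of the general pattern: several models answer the same question, they grade each other's answers, and a chairman model synthesises the final response. The `ask_model` function and model names are placeholders; this illustrates the idea and is not Karpathy's actual implementation.

```python
# Minimal sketch of the "LLM council" pattern: several models answer,
# cross-grade each other's answers, and a chairman model writes the final
# response. ask_model() is a placeholder for your own API client, and the
# model names are hypothetical.

COUNCIL = ["model-a", "model-b", "model-c", "model-d"]
CHAIRMAN = "model-a"


def ask_model(model: str, prompt: str) -> str:
    """Send `prompt` to `model` via whatever API client you use; return its reply."""
    raise NotImplementedError("Wire this up to your own provider's API client.")


def council_answer(question: str) -> str:
    # 1. Each council member answers the question independently.
    answers = [ask_model(m, question) for m in COUNCIL]

    # 2. Each member reviews the anonymised set of answers and ranks them.
    numbered = "\n\n".join(f"Answer {i + 1}:\n{a}" for i, a in enumerate(answers))
    review_prompt = f"Question: {question}\n\n{numbered}\n\nRank these answers and explain why."
    reviews = [ask_model(m, review_prompt) for m in COUNCIL]

    # 3. The chairman reads the answers and reviews, then writes the final reply.
    briefing = (
        f"Question: {question}\n\nCandidate answers:\n{numbered}\n\n"
        "Peer reviews:\n" + "\n\n".join(reviews) +
        "\n\nUsing the candidates and reviews, write the single best final answer."
    )
    return ask_model(CHAIRMAN, briefing)
```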


Community

The Spotlight Podcast

From Fields to AI: Why the Future of Intelligent Systems Isn't About Prompting Anymore.

This week, we spoke with Johnny Morris, co-founder of Fifth Dimension, about his journey from watching his father transform derelict monasteries to building AI platforms for real estate professionals. Morris delivers a masterclass on why in-house AI development and "vibe coding" won't save your business, and why the shift from prompting to "context engineering" represents the next genuine frontier in AI adoption.

Key insights: Enterprise search implementations consistently fail when they meet real organisational data. Vibe coding is brilliant for proof of concepts but insufficient for production. The organisations advancing furthest understand that the blocker isn't the technology anymore, it's organisational readiness and clear strategy. Morris believes 2026 will reveal the gap between those on actual transformation paths and those still experimenting at the margins.

One more interesting podcast to check out:

Event of the Week

AI & Societal Robustness Conference

12 December 2025 | Jesus College, Cambridge, UK

As AI systems become increasingly integrated into critical infrastructure and key industries, questions of control, oversight, and institutional safeguarding become paramount.

This one-day conference brings together AI safety and security researchers, policymakers, and other institutional stakeholders. Through technical and policy-oriented talks, collaborative workshops, and 1:1s, we will investigate emerging challenges at the intersection of geopolitics, cybersecurity, societal impacts research, and institutional governance in the era of advanced AI systems. Register now


One more thing

This is how people are addressing AI literacy at the moment

That’s it for today!

Before you go, we’d love to know what you thought of today's newsletter to help us improve The Project Flux experience for you.


See you soon,

James, Yoshi and Aaron—Project Flux 
