When McKinsey Cuts Jobs, Boards Take Notes: AI Reshapes Professional Services
McKinsey's workforce reduction signals more than cost-cutting. It's proof that AI is fundamentally changing how knowledge work is valued, delivered and structured across industries.

Proudly sponsored by ConstructAI, brought to you by Weston Analytics.
Hello Project AI enthusiasts,
As we approach Christmas, I want to take a moment to wish you and your teams a wonderful festive season. Thank you for your continued support throughout 2025. It has been a year of remarkable change, and we appreciate you being part of the Project Flux community. We have some big things coming in the new year, so stay tuned for the next issue.
This week, McKinsey is cutting thousands of roles as AI reshapes its operating model. When the firm that advises on workforce optimization applies the same prescription to itself, it signals that the AI transition has moved from experimentation to expectation. Boards everywhere are taking notes.
Adobe faces a proposed class-action lawsuit over allegedly using copyrighted books to train its AI models without permission. This case exposes how training data provenance has become a critical delivery risk, with legal and operational consequences reaching far beyond one company. The settlement patterns emerging suggest only tech giants can afford the multi-billion dollar licensing fees paid in arrears.
Nearly nine in ten contractors believe AI will reshape construction, yet fewer than 15 per cent currently use it in live projects. The gap between optimism and adoption reveals a critical challenge: data quality remains the constraint that will not go away. Contractors see value in specific applications like constructability analysis and bid decision support, but scaling beyond pilots requires fundamental changes in data discipline and governance.
OpenAI rolled out a major image generation update amid internal "Code Red" urgency, delivering four times faster output as competitive pressures intensify. This signals that capability development cycles are accelerating, and vendor strategies are shifting in ways that directly affect technology roadmaps. Visual generation has moved from experimentation to infrastructure, but governance of dynamic outputs becomes more complex.
OpenAI's latest Codex model moves beyond writing code faster to compressing entire delivery cycles. By supporting reasoning across repositories, it changes how engineering work is organized and how teams regain visibility into systems made opaque by years of incremental change. However, as AI becomes more capable in software delivery, governance becomes more critical, not less.
In This Edition
Flux check-in
When McKinsey Cuts Jobs, Boards Take Notes
When the consulting firm that has spent decades advising boards on workforce optimization decides that ten per cent of its own support workforce can be replaced by automated workflows and large language models, it moves the conversation decisively from experimentation to expectation. McKinsey's reported plan targets centre-based and support functions rather than client-deployed roles, demonstrating that job security is increasingly tied to proximity to the customer and outcomes that cannot be standardised. Read the full breakdown →

What Does This Mean for Me?
If McKinsey can reduce back-office headcount by ten per cent while maintaining delivery capability, your board will reasonably ask why your organization cannot do the same. Authority and credibility are no longer derived from team size but from efficiency and effectiveness of the delivery stack.
Key Themes
Eat your own dog food: When the firm defining "corporate best practice" cuts 10% of support staff, it validates AI workforce reduction for every board
Shield versus sword: Job security now tied to proximity to customer. Internal-facing roles are in the AI crosshairs first
Growth without headcount: McKinsey aims for revenue growth while flattening workforce, ending the era where manager power equalled team size
Consulting deflation risk: If AI does junior consultant work, clients question paying $500/hour for it. Industries selling hours face commodity pricing
Down the Rabbit Hole
When AI Training Runs Into Copyright Law: What Adobe's Proposed Class Action Says About Responsibility in the AI Era
Adobe faces a proposed class-action lawsuit alleging that copyrighted books were used without permission to train its SlimLM language model. The complaint targets not only Adobe's use of content but also the legitimacy of widely used AI training datasets like Books3 and RedPajama, which have been at the centre of several high-profile copyright disputes. This follows Anthropic's $1.5 billion settlement with authors earlier in 2025, signalling that creators are willing to escalate legal challenges and that the legal landscape around AI training data is shifting rapidly. Read the full breakdown →

What Does This Mean for Me?
If you're building, integrating or deploying AI systems, training data provenance is now a contractual requirement and a negotiable risk. A misstep in data practices can delay roadmaps, expose organizations to material damages and erode stakeholder trust; these are delivery risks with business consequences.
Key Themes
Supply chain contamination: AI has a "dirty supply chain" problem. Training data provenance is now a contractual requirement and delivery risk
Small model, big risk: The rush to make AI portable and efficient led tech giants to take shortcuts with data. Now the bill is due
Ethical arbitrage gap: Even "responsible" companies struggle to find enough legal data, exposing massive trust gaps in vendor claims
Settlement as business expense: $1.5 billion settlements are becoming the "licensing fee paid in arrears" for AI development. Only giants can afford it
Down the Rabbit Hole
Unlock the Future of Digital Construction
The DTSA micro-credential gives young people and career changers barrier-free access to digital twin education – a first for the UK construction industry. Built on 32 months of work at the University of Cambridge’s CDBB, it opens doors to cutting-edge skills in safer, smarter, and more sustainable project delivery.
With portfolio-based assessment (offered as part of an Apprenticeship) and real industry insight, the course creates a clear pathway into digital construction for site teams, aspiring architects, engineers, surveyors, and project owners / funders. In partnership with the Digital Twin Hub and OCN London, the DTSA is shaping the next generation of talent and helping position the UK as a global leader in digital construction and innovation.
Sign up by emailing [email protected]

Contractor Optimism About AI Is Rising: Delivery Reality Will Decide What Happens Next
Nearly nine in ten contractors believe AI will have a meaningful impact on their organizations, according to recent research. Their confidence centres on specific, familiar problems: project planning, constructability analysis, compliance workflows and commercial decision support. However, adoption levels remain modest, with fewer than 15 per cent currently using AI-enabled functions in live project environments. The persistent challenge? Data quality. Only a minority of contractors rate their data as strong enough to support advanced analytics, mirroring findings from organizations like the Royal Institution of Chartered Surveyors. Read the full breakdown →

What Does This Mean for Me?
Contractor optimism reflects strategic necessity as much as enthusiasm. AI implementation requires clear ownership of data standards, integration with existing project controls, and training for delivery teams to interpret and challenge outputs. Without these elements, AI tools risk becoming isolated dashboards that fail to influence real decisions.
Key Themes
Foundations first reality check: 87% want AI transformation, but only 26% have data quality to support it. You cannot build AI on bad data
From estimating to predicting: AI can analyze 20 years of risk data to identify "margin killer" projects before bidding. Success lies in algorithmic selectivity
Strategic project manager emergence: 85% expect AI to eliminate mundane tasks. The PM becomes a Project Orchestrator, not a paper-pusher
Constructability as auto-correct: 81% see benefit in automated constructability analysis. Digital twin simulations find conflicts before concrete is poured
Down the Rabbit Hole
OpenAI and the Acceleration of AI Image Generation
OpenAI rolled out a major update to ChatGPT's image generation capabilities amid what has been described internally as a "Code Red" push. The upgraded model, GPT Image 1.5, brings significant improvements in instruction adherence, editing precision and generation speed, with OpenAI citing up to four times faster output compared with its predecessors. This isn't a standard product launch: OpenAI moved with heightened urgency in response to competitive pressure, particularly from Google and its Gemini models. The pace of capability development is increasing, and competitive dynamics are driving prioritization in ways that were less visible in earlier years of mainstream AI adoption. Read the full breakdown →

What Does This Mean for Me?
Organizations embedding generative AI into product, design and operations workflows face a landscape that is evolving rapidly in both capability and complexity. Capability assessments conducted even six months ago may already be out of date, which makes structured frameworks for evaluating new AI tools essential.
Key Themes
Speed as competitive moat: A 4x speed increase is a psychological hit to competitors. In a creative flow state, a 5-second wait is a conversation; a 30-second wait is an eternity
Code Red survival instinct: A model slated for January was released early due to competition. Innovation cycle time for response products is now measured in weeks
Visual generation becomes infrastructure: No longer experimentation. Teams use AI for design exploration, stakeholder communication, and rapid iteration daily
Governance of dynamic outputs: Ownership, licensing, brand consistency and compliance become harder when visuals are generated dynamically, not designed
Down the Rabbit Hole
GPT-5.2-Codex Signals a Shift From Code Assistance to Delivery Acceleration
OpenAI announced GPT-5.2-Codex in December 2025, positioning it as its most capable code-focused model to date. It builds on the broader GPT-5.2 reasoning stack but is specifically tuned for software engineering tasks, including code generation, refactoring, debugging and repository-scale understanding. What matters here is not raw coding competence but the direction of travel: GPT-5.2-Codex represents a move away from AI as a helpful assistant towards AI as an active participant in delivery workflows. By supporting reasoning across larger codebases, it reduces the cognitive load on developers when navigating complex systems, with knock-on effects for planning, coordination and quality assurance. Read the full breakdown →

What Does This Mean for Me?
For projects with significant software components, improved system comprehension changes how work is planned and governed. Risks can be identified earlier, dependencies surfaced before they turn into blockers, and trade-offs discussed with clearer evidence, but governance becomes more important rather than less.
Key Themes
From coding speed to delivery compression: Value lies in reducing rework and late-stage defects, not just writing code faster. Throughput over productivity
Regaining system visibility: Models reasoning across repositories help teams regain context in systems made opaque by years of incremental change
Governance becomes more critical: Accountability remains. Clear ownership needed for reviewing, approving and deploying code regardless of AI assistance
AI as bounded collaborator: Organizations that treat AI as collaborator within defined boundaries benefit most. Not as external accelerator operating in parallel
Down the Rabbit Hole
The pulse check
Tip of the week
I Moved 1,000+ Notion Notes to Obsidian in One Afternoon (Thanks to Claude)
I had used Notion for years with thousands of notes, project documents, meetings and research, all sitting there but poorly connected. Obsidian appealed because of its local files, backlinks and knowledge graph, but migrating felt like weeks of manual work I would never get round to.
Instead, I used Claude with two MCP servers.
Notion MCP (built into Claude) to read my entire workspace
Obsidian MCP to write directly into my Obsidian vault
I gave Claude a single instruction: read everything in Notion, move it to Obsidian, tag each note and create backlinks where ideas connect.
Claude did not just copy content. It analysed over 1,000 notes, applied meaningful tags, created backlinks between related concepts, built a clean folder structure and preserved formatting.
The result was a genuinely connected knowledge graph. Old meeting notes linked to current challenges. Isolated ideas turned out to be part of the same line of thinking. Patterns emerged that I had completely missed.
Setup was straightforward. Connect Notion via Claude’s integrations and install the Obsidian MCP server from
https://github.com/MarkusPfundstein/mcp-obsidian
Once connected, Claude handled the rest using proper markdown, tags and links.
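If you want to try the same setup, here is a minimal sketch of what the mcp-obsidian entry might look like in a desktop MCP client configuration such as Claude Desktop's claude_desktop_config.json. This assumes the server is launched with uvx and connects to your vault through Obsidian's Local REST API community plugin; the exact command, keys and environment variable names may differ between versions, so treat this as illustrative and follow the repo's README.

```json
{
  "mcpServers": {
    "mcp-obsidian": {
      "command": "uvx",
      "args": ["mcp-obsidian"],
      "env": {
        "OBSIDIAN_API_KEY": "<API key generated by the Local REST API plugin>",
        "OBSIDIAN_HOST": "127.0.0.1"
      }
    }
  }
}
```

Once both connections are in place, a single prompt like the one above is enough to kick off the migration. It is worth running it on a small subset of notes first to check that the tags, folder structure and backlinks look right before letting it loose on the full workspace.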
What mattered was not the migration itself, but what came after. Claude did not just move files. It made sense of them. It surfaced recurring problems across projects, old research relevant to current work and trends in client feedback I had not spotted.
The takeaway: MCP servers let AI work across tools with real understanding. If your notes are scattered, connect Claude and ask it to find patterns. The insights are worth it.
Need help structuring your prompts? Try my free Prompt Generator. Describe your task, select parameters (intent, tone, length), and get a structured prompt ready to paste into ChatGPT, Claude, or Perplexity in 30 seconds.
If you would like a deeper, structured learning path, here is the Referral Link for the Course

Governance & Security
The governance landscape for AI is shifting from principles to enforcement, and the security implications are becoming impossible to ignore. Trump's executive order attempting to neutralize state AI regulations has sparked constitutional challenges, with states forging ahead despite federal pressure. The order instructs the Attorney General to create an AI Litigation Task Force to challenge state laws deemed inconsistent with maintaining U.S. AI dominance through a minimally burdensome framework. This reflects a broader tension: as 38 U.S. states enacted about 100 AI measures in 2025, federal authorities are pushing back against what they call a "patchwork of 50 state regulatory regimes."
Meanwhile, security breaches continue to expose fundamental weaknesses. OpenAI's breach via third-party provider Mixpanel revealed how supply chain vulnerabilities create regulatory risk, with attackers gaining access to user information that can be weaponized for phishing and impersonation. Research shows 2025 is set to surpass all prior years combined in breach volume, with 70% of incidents involving generative AI and agentic AI causing the most dangerous failures. The average cost of an AI-specific data breach reached $4.80 million per incident, with 73% of companies affected. Perhaps most concerning: 97% of organizations that experienced AI-related breaches lacked basic access controls.
The pattern is clear. Organizations rushed to adopt AI without establishing governance frameworks or security protocols, and now they are paying the price. CIOs are responding by creating risk-tiered governance protocols, with companies that combine AI with integrated security platforms expected to experience 40% fewer employee-driven security incidents by 2026. The challenge is that AI breaches take longer to detect and fix than traditional breaches. This makes continuous, real-time monitoring essential rather than optional.
For project delivery teams, this means treating AI governance as a delivery constraint, not an afterthought. Data handling protocols, access controls, and incident response plans must be defined before deployment, not discovered during a breach investigation. The regulatory environment is fracturing, security threats are accelerating, and the organizations that survive will be those that bake governance into their AI implementations from day one.
Robotics
Figure AI CEO Launches Hark AI Lab – Brett Adcock, CEO of Figure AI, is starting a new AI lab called Hark with $100M of his own funding, pursuing "human-centric AI" with proactive reasoning and self-improvement. It will run alongside Figure (valued at $39B with nearly $2B in funding), and Hark's first GPU cluster reportedly came online this week. Read the story
Tesla Optimus Robot Controversy – Tesla’s Optimus is at the centre of a growing storm. Last week, a video of the humanoid robot running eerily like a human raised eyebrows on social media, with some suspecting that the robot may be teleoperated behind the scenes. It’s a claim Tesla has repeatedly denied, but surprising footage that surfaced this week isn’t helping the company’s case. Read more
A robot just learned 1,000 tasks in a single day, and it’s a big deal for everyday AI – Researchers have managed to teach a robot to learn 1,000 different physical tasks in a single day, each from just one demonstration. Not 1,000 variations of the same movement, either. We’re talking about a wide range of everyday object interactions, such as placing, folding, inserting, gripping, and manipulating items in the real world. For robots, that’s a genuinely big deal. Explore more
Robots-As-A-Service Emerging as Driving Force – Robots-as-a-Service (RaaS) is accelerating global automation in 2025, offering subscription-based access to autonomous robots and shifting costs from capital expenditure to operating expense. The International Federation of Robotics projects the market will grow from $16.18 billion in 2025 to $125.17 billion by 2034, with logistics leading adoption and healthcare emerging as the fastest-growing segment. Explore further
Trending Tools and Model Updates
Introducing SAM Audio – Meta's first unified model that isolates any sound from complex audio mixtures using text, visual, or span prompts, offering new possibilities for audio editing and production workflows. Read the full story
Zoom launches AI Companion 3.0 with agentic workflows, transforming conversations into action – Zoom Communications, Inc. unveiled the next evolution of its agentic AI solution, Zoom AI Companion 3.0, including new AI-first capabilities for personal workflows (beta), agentic AI features for Zoom Docs (coming soon), and a new web interface with expanded context to help users uncover insights, optimise their day, and uplevel their work. Read more
Gemini Just Added More AI Image Editing Tools – One of the ways AI models are rapidly improving is in their image editing capabilities, to the extent that they can now quickly take care of tasks that would previously have taken a substantial amount of time and effort in Photoshop. Read the full update
Introducing GPT-5.2-Codex – GPT-5.2-Codex is the most advanced agentic coding model for real-world software engineering, optimised for long-horizon work, large-scale refactors and migrations, improved Windows support, and stronger cybersecurity. See announcement
Gemini 3 Flash: frontier intelligence built for speed – The Gemini 3 model family has been expanded with Gemini 3 Flash, delivering frontier-level intelligence optimised for speed at a much lower cost, and making next-generation Gemini 3 capabilities widely accessible across Google products. Explore the announcement
DeepSeek V3.2 and V3.2-Speciale – DeepSeek launched V3.2 and V3.2-Speciale on December 16, 2025, claiming GPT-5/Gemini 3 Pro-level performance. The 685B-parameter models are released under an MIT license and run at a fraction of the cost ($0.28/$0.42 per 1M input/output tokens), with V3.2-Speciale aimed specifically at deep reasoning. Learn more
Mistral 3 Family Launch – Mistral launched Mistral 3 (four new open source models: 3B, 8B, 14B, and Mistral Large 3 at 675B parameters), Devstral 2 (coding model hitting 72.2% on SWE-bench), and Vibe CLI. Learn more
Developers can now submit apps to ChatGPT – Apps were introduced in ChatGPT at DevDay earlier this year, and developers can now submit them for review and publication, enabling richer conversations and actions such as ordering groceries, creating slide decks, or searching for apartments. Read more
Links We are Loving
AI-Powered Smart Meter Project Launched to Boost Water Efficiency in Abu Dhabi Agriculture — The Abu Dhabi Department of Energy, in collaboration with the Abu Dhabi Agriculture and Food Safety Authority (ADAFSA), has launched the Smart Meter Project in the Al Wathba area, marking a significant step in the emirate’s drive to enhance water-use efficiency and support sustainable agriculture through artificial intelligence and digital monitoring.
New MI6 chief: Tech bosses are becoming as powerful as nations — In her first public speech as head of the intelligence agency, Blaise Metreweli warned that the world is being remade by technology that was once the stuff of fiction, and that perilous technologies and the bosses behind them are becoming as powerful as states.
Gemini Call: My experience with the NZ national data approach — Leroy Clarke from LC Strategic Advantage shares four years of frontline experience working with New Zealand's water, transport, and health sectors as they fundamentally reimagine how data supports digital twins at a national scale.
Former chancellor George Osborne joins OpenAI — Former chancellor George Osborne is joining artificial intelligence (AI) giant OpenAI, and he will lead its "OpenAI for Countries" programme, which is aimed at helping governments increase their AI capacity.
ChatGPT Gets Apple Music Integration and New Image Generator — With Apple Music integration, ChatGPT will be able to make music recommendations and playlists based on listening history and user suggestions.
Databricks raises $4B at $134B valuation as its AI business heats up — Databricks, the data intelligence company, has just raised more than $4 billion in a Series L funding round at a $134 billion valuation, up 34% from the $100 billion valuation that it achieved just three months ago.
Manus Update: $100M ARR, $125M revenue run-rate — Manus has crossed $100M in ARR eight months after launch, making it the fastest startup in the world to go from $0 to $100M.
Waymo Reaches 450,000 Weekly Rides Milestone As Robotaxi Race Gathers Pace — Waymo has crossed 450,000 paid rides per week in the U.S., nearing 1.8 million monthly trips, delivered 14 million rides in 2025 with more expected by year-end, and is reportedly in talks to raise around $15 billion in funding.
SoftBank, Nvidia Weigh Over $1B Skild AI Investment At $14B Valuation — SoftBank Group and Nvidia are reportedly advancing discussions to take part in a new Skild AI fundraising round that would exceed $1 billion and assign the robotics foundation-model developer a valuation of roughly $14 billion.
Hollywood stars launch Creators Coalition on AI — Eighteen entertainment industry workers, including actor Joseph Gordon-Levitt and director Daniel Kwan, launched the Creators Coalition on AI to protect creators’ rights.
Trump admin to hire 1,000 specialists for ‘Tech Force’ to build AI, finance projects — The Trump administration unveiled a new initiative, dubbed the “U.S. Tech Force,” that will focus on AI infrastructure and other technology projects.
Elon Musk xAI AGI by 2026 Prediction — Elon Musk predicts that his company xAI could achieve artificial general intelligence (AGI) within the next couple of years, and maybe as soon as 2026, according to a new report from Business Insider.
Vibe-coding startup Lovable raises $330M at a $6.6B valuation — Stockholm-based Lovable said it had raised $330 million in a Series B funding round that was led by CapitalG and Menlo Ventures, at a $6.6 billion valuation.
‘ChatGPT for doctors’ raising $250M at $12B valuation — OpenEvidence, which has been called “ChatGPT for doctors,” is raising $250 million in equity financing at a $12 billion valuation.
China unveils $70 billion of financing tools to bolster investment — China will deploy policy-based financial tools worth 500 billion yuan ($70.25 billion) to accelerate investment projects, the state planner said on Monday, as part of efforts to support the slowing economy.
Google to launch AI-powered smart glasses in 2026 — Google is developing lightweight AI-powered glasses, with the first product expected to launch in 2026.
Yann LeCun confirms his new ‘world model’ startup, reportedly seeks $5B+ valuation — Renowned AI scientist Yann LeCun has confirmed that he has launched a new startup, a move that had long been an open secret in the tech world, although he said he will not serve as the company’s CEO.
Community
The Spotlight Podcast
Can We Ever Trust Music Again? Inside the Collision Between AI and Authenticity.

This week's conversation with drummer Anna Mylee and songwriter Tim Fraser explores what happens to authenticity when machines can create work indistinguishable from human creation. Anna describes the moment she realised AI would fundamentally transform music. If machines could generate music indistinguishable from human work, how could we ever truly know what we were listening to? This wasn't just an economic threat but an existential one to music as an art form.
Tim traced the industry's vulnerability back to Spotify's "better than nothing" logic and the normalisation of interpolation, where artists take melodies without permission. The three major record labels have endorsed AI through tech partnerships, while BBC Introducing now broadcasts AI-generated artists. Tracy Chapman's successful lawsuit against unauthorised use showed artists can fight back, but the pattern persists.
The episode reveals two futures emerging: music as pure consumption versus music as art. The conversation extends beyond music to radiographers, solicitors, and project delivery professionals facing the same question. Can we still trust what's authentic when the industry has been gradually eroding consent and compensation long before AI arrived?
Event of the Week
International Conference on AI-powered Data Science for Business Intelligence
19 January 2026 | Manchester, UK
The International Conference on AI-powered Data Science for Business Intelligence, organised by IAAR, takes place in Manchester, UK, on 19 January 2026. The main aim of this expert-led conference is to help you navigate industry uncertainties and accelerate your progress through actionable strategies. Develop career plans by networking with global scholars, researchers, business owners, academics, and policymakers. Discover breakthrough solutions and real-life case studies that can help you attract better opportunities. Register now
One more thing

That’s it for today!
Before you go, we’d love to know what you thought of today’s newsletter to help us improve the Project Flux experience for you.
See you soon,
James, Yoshi and Aaron—Project Flux
