The AI Paradox in Projects

We’re only two months into 2025 and the improvement of AI has been exponential. Today, artificial intelligence stands as our most ambitious creation—a tool designed not merely to serve, but to help us achieve more than we could before. If done right, projects across any industry should become increasingly viable and certain. If done wrong, the downsides can be catastrophic. So why are people hesitant to use AI in projects?
AI’s Growing Complexity
Firstly, because AI is unfamiliar, it’s hard to maximise its upside and minimise its downside. This becomes even harder as we’re exposed to more powerful models which bring more complexity. We’ve yet to fully grasp the complexities of older models like GPT-4o, yet we’re already presented with more powerful reasoning models like OpenAI’s o1 and DeepSeek-R1, where the challenges of robustness, interpretability and ethicality have heightened. The very mechanisms that enable these models to think and reason in unprecedented ways also make them brittle, opaque and unpredictable. Let’s take a deeper look:
Robustness: Imagine an AI model that is overly sensitive to edge cases, leading to unnecessarily drastic decisions. This can destabilise a project (see the sketch after this list).
Interpretability: Picture a cost manager who refuses a contractor’s invoice based on AI-generated advice. The inability to explain the AI’s internal rationale, because of poor interpretability, can undermine accountability and trust.
Control: What if, in the drive to meet deadlines, an autonomous AI robot cuts corners, prioritising speed over health and safety? Without human control, this can lead to considerable rework.
Ethicality: What if an AI designed to boost productivity looks at nothing other than its programmed goal? Might it promote burnout?
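To make the robustness point concrete, here’s a minimal, purely illustrative Python sketch. Everything in it is invented for the example (the scoring function, thresholds and names are not from any real system); the point is simply how a brittle decision boundary lets a near-invisible change in input flip a major decision:

```python
# A toy project risk scorer with a sharp, brittle decision boundary.
# All names, weights and thresholds are hypothetical.

def risk_score(days_delayed: float, budget_overrun_pct: float) -> float:
    """Toy scoring function: a hard cliff at 10 days makes it hypersensitive."""
    penalty = 100.0 if days_delayed >= 10 else 0.0
    return days_delayed * 2 + budget_overrun_pct * 3 + penalty

def decision(score: float) -> str:
    return "HALT PROJECT" if score > 60 else "CONTINUE"

# Two near-identical schedules straddle the cliff and get opposite decisions.
for delay in (9.9, 10.0):
    s = risk_score(days_delayed=delay, budget_overrun_pct=5.0)
    print(f"delay={delay:>4} days -> score={s:6.1f} -> {decision(s)}")
# delay= 9.9 days -> score=  34.8 -> CONTINUE
# delay=10.0 days -> score= 135.0 -> HALT PROJECT
```

A 0.1-day difference in the input flips the model from “continue” to “halt”: that is the kind of edge-case sensitivity that can destabilise a project.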
The Risk Paradox
The second reason is that sizeable projects carry a huge risk profile, while smaller projects limit AI’s impact. Larger projects hold larger contracts with more money, more people, and more things that can go wrong. If you flavour this with AI’s growing complexity, it can be a recipe for disaster.
The collapses of Carillion and ISG both reveal the glasslike nature of construction; now imagine throwing bad AI into the mix. Or don’t imagine: see the $304m write-down, 2,000+ person layoff and shutdown of Zillow’s Offers division in 2021. Here, AI market forecasting models trained on historical data failed to account for rapid interest rate hikes and shifting buyer behaviour, leading the AI to overestimate home values by 5-15%.
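As an illustration of that failure mode (with invented numbers, not Zillow’s actual model), here’s a short Python sketch of distribution shift: a forecaster that extrapolates a historical growth trend keeps compounding it after the market turns, and the overestimate widens month by month into the same 5-15% territory:

```python
# A sketch of distribution shift: a model fitted on historical data keeps
# extrapolating a trend after conditions change. All numbers are invented.

# 24 months of "history": steady 1%/month appreciation.
history = [300_000 * 1.01 ** m for m in range(24)]

# Naive forecaster: assume the average historical monthly growth continues.
growth = (history[-1] / history[0]) ** (1 / (len(history) - 1))

# Reality after a rate shock: prices soften by 0.5%/month.
price = history[-1]   # model's forecast
actual = history[-1]  # what the market actually does
for month in range(1, 7):
    price *= growth    # the model keeps compounding the old trend
    actual *= 0.995    # the market turns
    err = (price - actual) / actual * 100
    print(f"month +{month}: forecast={price:,.0f} actual={actual:,.0f} "
          f"overestimate={err:+.1f}%")
```

After six months the gap is already around 9%, and nothing in the model’s training history tells it anything is wrong.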
On the other hand, smaller projects carry less risk, but give AI a weaker showcase. This can cause a failure to realise the potential return-on-investment of AI, risking a bubble burst. Together, these scenarios create a risk paradox which contributes to AI’s project paralysis. It doesn’t mean that AI isn’t ready for projects; in fact, we’d argue that it’s a necessity. The challenge is in solving this paradox so we can harness AI effectively.
Aligning AI with Human Values
If we get AI wrong, the consequences for projects are numerous, and so a big effort is focused on ensuring that the AI we build today and for the future is ethical, interpretable, controllable, and robust. In this pursuit, our strategy is to align AI with human values and intent. The problem is that alignment isn’t easy.
In alignment, it’s not silly to assume that, through countless rounds of human feedback, our steering an AI towards the right outputs will teach it to become smarter. The issue is that while AI does learn from our feedback, this outcome only holds true if we ourselves are objectively right and truly ethical.
The reality is that we aren’t. For instance, in procurement, if historical data reflects human biases—say, a defence contractor consistently winning bids due to political lobbying rather than merit—an AI trained on this feedback will codify those unethical patterns as ‘correct.’ The system might then automate preferential treatment toward connected vendors, scaling systemic discrimination.
Worse, these biases become structural, buried in layers of algorithmic complexity, making them harder to audit or reverse. In this way, AI doesn’t just learn our ethics—it mirrors our flaws. And so, technology, no matter how brilliant, remains a reflection of the values we instil within it. That’s why alignment is so difficult in traditional AI and even more difficult in newer, less predictable AI.
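To see how that codification happens mechanically, here’s a minimal, self-contained Python sketch using invented data: the historical “wins” are driven by a lobbying feature rather than merit, and a plain logistic regression dutifully learns that preference as if it were correct:

```python
# A sketch (invented data, not a real procurement system) of how feedback
# that encodes human bias gets codified: historical awards correlate with
# lobbying rather than merit, and the learner assigns lobbying the weight.
import math
import random

random.seed(0)

# Features per bid: (merit_score, lobbying_spend), both scaled 0..1.
# Label: 1 if the contractor historically won. Wins track lobbying, not merit.
data = []
for _ in range(500):
    merit, lobbying = random.random(), random.random()
    won = 1 if lobbying > 0.6 else 0  # the biased "ground truth"
    data.append(((merit, lobbying), won))

# Plain logistic regression via full-batch gradient descent.
w = [0.0, 0.0]; b = 0.0; lr = 0.5
for _ in range(2000):
    gw = [0.0, 0.0]; gb = 0.0
    for (x, y) in data:
        p = 1 / (1 + math.exp(-(w[0]*x[0] + w[1]*x[1] + b)))
        gw[0] += (p - y) * x[0]; gw[1] += (p - y) * x[1]; gb += p - y
    w[0] -= lr * gw[0] / len(data); w[1] -= lr * gw[1] / len(data)
    b -= lr * gb / len(data)

print(f"learned weights: merit={w[0]:+.2f}, lobbying={w[1]:+.2f}")
# The lobbying weight dwarfs the merit weight: the bias is now "correct",
# buried inside model parameters where it is hard to audit or reverse.
```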
So what can we do on our projects?
To optimise how we use AI on projects, a triadic approach combining technical safeguards, robust governance, and human-centric processes is essential.
Technically, projects should use vertical AI systems that are designed for specific use cases and are trained on verified, factual data—eliminating guesswork and biases. Companies like nPlan, Nodes & Links and ConstructAI spring to mind.
From a governance perspective, collaboration is critical: project associations should work closely with clients, contractors, public sector entities, and AI developers to establish clear, constitutionally aligned project guidelines. These guidelines should provide a structured framework for ethical and responsible AI deployment, ensuring compliance with industry standards and societal values.
Finally, a human-centric process is vital to ensure AI outputs are both practical and trustworthy. We’ve mentioned a "book-ending" approach, where human expertise guides AI at both the input and output stages. For instance, when using AI to design a build, a human expert first defines the parameters and steers the AI’s reasoning process. The AI then generates an output, which is rigorously validated by the same or another qualified professional. This iterative cycle of human-AI-human interaction ensures accuracy, ethical alignment, and practical relevance.
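As a rough sketch of what book-ending might look like in code (all names and checks here are placeholders, with generate_design standing in for whatever model or service a project actually uses):

```python
# A sketch of the "book-ending" workflow: human-defined brief in, AI
# generation in the middle, human validation out. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class DesignBrief:
    max_cost: float       # human-set constraint (input book-end)
    safety_standard: str  # required compliance code, e.g. a Eurocode

def generate_design(brief: DesignBrief) -> dict:
    """Placeholder for the AI step; returns a candidate design."""
    return {"cost": brief.max_cost * 0.95, "standard": brief.safety_standard}

def human_validates(design: dict, brief: DesignBrief) -> bool:
    """Output book-end: a qualified professional checks the result."""
    return (design["cost"] <= brief.max_cost
            and design["standard"] == brief.safety_standard)

# Iterate human -> AI -> human until the reviewer signs off.
brief = DesignBrief(max_cost=2_000_000, safety_standard="BS EN 1990")
for attempt in range(3):
    candidate = generate_design(brief)
    if human_validates(candidate, brief):
        print(f"accepted on attempt {attempt + 1}: {candidate}")
        break
    # In practice, the human would refine the brief here and re-run.
```

The design point is that the AI step sits between two human-owned gates: the brief encodes intent up front, and nothing leaves the loop without qualified sign-off.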
Together, these three pillars—technical precision, collaborative governance, and human oversight—create a comprehensive strategy to make AI safer, more reliable, and better integrated into project workflows today.