Tuesday, 10 March 2026

The AI Cannibalization Economy: When Every Generation of AI Eats the Last

One important implication of the AI cannibalization cycle is its impact on the sustainability of Software-as-a-Service (SaaS) business models. Each generation of artificial intelligence tends to rapidly absorb and replicate the capabilities of the previous generation. Features that were once considered proprietary innovations are quickly incorporated into newer foundation models and platforms. As a result, the differentiation advantage of many AI-driven SaaS applications erodes rapidly. What previously required a specialized software product can often be reproduced by a newer generation of AI models integrated directly into broader platforms. This dynamic significantly compresses the traditional software business cycle: instead of product lifecycles measured in years, AI-enabled services may face competitive disruption within months. For SaaS companies whose value proposition relies primarily on algorithmic capabilities or data-processing features, this creates a structural vulnerability. Unless such firms anchor their offerings in proprietary data, domain expertise, operational integration, or real-world infrastructure, their products risk being rapidly commoditized by successive waves of AI development. Consequently, sustainable competitive advantage in the AI era may depend less on the software itself and more on access to unique datasets, deep industry knowledge, and tightly integrated operational ecosystems.


A useful illustration can be seen in the rapid evolution of generative AI writing tools. Early applications built on large language models offered specialized services such as automated marketing copy generation, blog-writing assistance, and email drafting, and several startups built SaaS products around these capabilities. However, when providers of foundation models and productivity platforms, such as OpenAI, Microsoft, and Google, began integrating similar capabilities directly into widely used tools like Microsoft Word and Google Docs, the differentiation of many standalone writing assistants diminished rapidly. Functions that once required a separate subscription could now be performed within mainstream productivity software. This illustrates how quickly AI-driven features can be absorbed into larger platforms, effectively shortening the competitive window for specialized SaaS products. Companies that relied primarily on access to generative algorithms found it increasingly difficult to maintain a sustainable advantage once those same capabilities became embedded in widely adopted digital ecosystems.


Breaking the AI Knowledge Loop: Why Organizations Must Anchor AI in Reality


Artificial intelligence is widely perceived as a technology that improves continuously with scale. The prevailing assumption is that larger datasets and more sophisticated algorithms will naturally lead to better performance and deeper insights. However, a growing body of discussion among researchers and practitioners suggests that this development trajectory may contain a structural weakness. As AI systems become more widespread, increasing volumes of digital content are generated by AI itself. When new models are trained on large datasets that include AI-generated material, the system may progressively learn from earlier machine outputs rather than from original human knowledge. This recursive learning process has been described as AI cannibalization or model collapse. Over time, the diversity, accuracy, and originality of the knowledge embedded within AI systems may gradually decline.
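
The mechanism can be made concrete with a toy simulation (illustrative only, not drawn from the post or any specific study): a one-dimensional Gaussian stands in for a generative model, and each new generation is fitted solely to samples produced by the previous one. The fitted spread tends to shrink across generations, mirroring the loss of diversity described above.

```python
import numpy as np

# Toy sketch of recursive self-learning ("model collapse"): a Gaussian stands
# in for a generative model, and each generation is refitted only on samples
# produced by the previous generation. Sample size and seed are arbitrary.
rng = np.random.default_rng(42)

mu, sigma = 0.0, 1.0   # generation 0: fitted on the original "human" data
n = 50                 # small samples per generation exaggerate the effect

for gen in range(1, 31):
    synthetic = rng.normal(mu, sigma, n)           # previous model's outputs
    mu, sigma = synthetic.mean(), synthetic.std()  # refit on synthetic data only
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

# sigma tends to drift downward over generations: each refit roughly preserves
# the average but slowly loses the tails, i.e. the diversity of the original data.
```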


The rapid proliferation of AI-generated content across the internet has accelerated this concern. Articles, summaries, technical documentation, marketing material, software code, and visual media are increasingly produced by generative models. When future AI systems are trained on large internet datasets, a growing proportion of the training data may consist of machine-generated content. This situation introduces several potential risks. First, knowledge diversity may decline, because AI-generated text tends to smooth over the contradictions and uncertainties that are characteristic of human reasoning. Second, errors introduced by earlier models may propagate across successive generations of systems. Third, AI-generated outputs often converge toward statistically averaged patterns, which may reduce the presence of unconventional or minority viewpoints. Finally, the accumulation of synthetic knowledge may reduce a system's ability to generate genuinely novel insights. For organizations that rely on AI for operational analytics, decision support, and strategic planning, the implications are significant, because the quality of insights produced by AI systems depends directly on the quality and diversity of the underlying training data.


Organizations that wish to benefit from AI while avoiding recursive self-learning must therefore design systems that remain anchored in human knowledge and empirical observation. Artificial intelligence should not be treated as a self-contained intelligence engine but rather as one component within a broader knowledge ecosystem that integrates human expertise, operational data, and continuous experimentation. Three foundational elements are particularly important in this regard.


The first element is the preservation and systematic capture of human-origin knowledge. Human expertise remains the most valuable source of contextual understanding, tacit judgment, and experiential learning. Many organizations possess large amounts of internal knowledge that are rarely curated or structured in ways that allow them to support AI learning. Maintenance reports, engineering design decisions, operational reviews, project documentation, and incident investigations all contain valuable insights derived from real experience. When these materials are systematically captured and organized, they form a rich dataset that reflects the complexity and nuance of real-world operations. AI systems trained on such material can develop a deeper understanding of operational contexts, constraints, and decision processes that cannot easily be inferred from generic internet data.


The second element is the integration of real-world operational data. AI systems perform far more reliably when they learn from measured reality rather than from textual descriptions alone. Many modern organizations generate extensive operational datasets through sensors, monitoring systems, and digital infrastructure. Examples include energy consumption measurements in buildings, equipment telemetry from industrial machinery, environmental monitoring data, predictive maintenance vibration signals, and indoor air quality measurements. These datasets represent direct observations of physical systems and therefore provide empirical grounding for AI analysis. When AI models analyze such data streams, they remain closely connected to measurable operational conditions rather than drifting into purely synthetic knowledge domains. The combination of sensor networks, IoT infrastructure, and digital twin systems creates a powerful feedback mechanism that continuously refreshes AI models with new ground-truth information.
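
As a minimal sketch of what this grounding can look like in practice (the class name, window size, and tolerance below are illustrative choices, not from any particular system), a model can be continuously refreshed by live meter readings so that anomaly checks are always made against measured reality rather than a static assumption:

```python
from collections import deque

# Hypothetical sketch: an energy-consumption baseline continuously refreshed
# by sensor readings, so the model tracks measured reality.
class EnergyBaseline:
    def __init__(self, window: int = 96):   # e.g. 96 x 15-minute readings = 1 day
        self.readings = deque(maxlen=window)

    def update(self, kwh: float) -> None:
        """Each new measurement becomes part of the model's ground truth."""
        self.readings.append(kwh)

    def is_anomalous(self, kwh: float, tolerance: float = 0.25) -> bool:
        """Flag readings deviating more than 25% from the recent measured mean."""
        if len(self.readings) < self.readings.maxlen:
            return False                     # not enough ground truth yet
        mean = sum(self.readings) / len(self.readings)
        return abs(kwh - mean) > tolerance * mean
```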


The third element is the cultivation of continuous experimentation within the organization. AI systems remain effective only when they are supplied with new observations and validated outcomes. Organizations should therefore actively generate new knowledge through pilot projects, controlled trials, and operational experimentation. Examples include testing alternative energy optimization strategies, evaluating predictive maintenance approaches, experimenting with new operational workflows, or implementing technology trials in specific facilities. Each experiment generates new datasets that reflect actual system behavior under different conditions. These results enrich the organizational knowledge base and ensure that AI systems continue learning from real discoveries rather than from recycled or synthetic data.


Another important consideration involves the establishment of data provenance frameworks. Organizations should maintain clear visibility into the origin of datasets used for AI training and analysis. Training data can be broadly categorized into human-authored content, operational measurement data, simulation datasets, and AI-generated material. Each category has different characteristics and levels of reliability. Human-authored and empirically measured data typically provide the most reliable knowledge foundations. Simulation datasets can be useful for exploring hypothetical scenarios or rare events that are difficult to observe directly. AI-generated material can assist in certain modeling or scenario generation tasks but should not dominate the training process. Transparent data provenance helps organizations maintain confidence in the integrity and reliability of their AI systems.
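
In code, such a framework can start very simply. The sketch below is hypothetical (the category names mirror the four groups just described, and the 20 percent cap is an arbitrary illustrative choice): each record carries a provenance tag, and AI-generated material is capped so that it never dominates a training set.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical provenance tags mirroring the four categories discussed above.
class Provenance(Enum):
    HUMAN_AUTHORED = "human_authored"
    OPERATIONAL_MEASUREMENT = "operational_measurement"
    SIMULATION = "simulation"
    AI_GENERATED = "ai_generated"

@dataclass
class Record:
    content: str
    provenance: Provenance

def training_mix(records: list, max_synthetic_ratio: float = 0.2) -> list:
    """Keep all grounded (human, measured, simulated) records, but cap
    AI-generated records relative to the grounded data."""
    grounded = [r for r in records if r.provenance != Provenance.AI_GENERATED]
    synthetic = [r for r in records if r.provenance == Provenance.AI_GENERATED]
    cap = int(max_synthetic_ratio * len(grounded))
    return grounded + synthetic[:cap]
```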


Human oversight also remains an essential component of responsible AI deployment. Although machine learning algorithms can identify patterns and correlations across large datasets, they cannot fully replicate the contextual reasoning and ethical judgment of experienced professionals. Human experts play a critical role in validating AI-generated insights, interpreting complex operational patterns, and ensuring that recommendations are feasible within real-world constraints. Effective governance structures typically involve collaboration between domain experts, engineers, data scientists, and organizational leadership. Such human-in-the-loop systems ensure that AI functions as an analytical support tool rather than an autonomous decision maker.
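
A minimal human-in-the-loop pattern can be expressed in a few lines (an illustrative sketch; the names and fields are my own): no AI recommendation is actioned until a named expert has signed off, and the sign-off itself is recorded for auditability.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative human-in-the-loop gate: recommendations are inert until a
# named domain expert approves them, and the approval is recorded.
@dataclass
class Recommendation:
    description: str
    ai_confidence: float
    approved_by: Optional[str] = None

def approve(rec: Recommendation, expert: str) -> Recommendation:
    rec.approved_by = expert   # human sign-off recorded for auditability
    return rec

def actionable(rec: Recommendation) -> bool:
    # Never act on model confidence alone; require human approval.
    return rec.approved_by is not None
```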


Organizations that successfully deploy AI at scale increasingly treat data as a form of strategic infrastructure. Just as physical infrastructure requires investment, maintenance, and governance, data ecosystems must also be carefully designed and managed. This includes establishing data governance frameworks, implementing data lineage tracking, developing knowledge repositories, and ensuring long-term preservation of high-quality datasets. Organizations that invest in robust data infrastructure are better positioned to develop AI systems that remain reliable, adaptive, and resilient over time.


Interestingly, sectors that operate physical systems may possess a natural advantage in avoiding AI cannibalization. Industries such as facility management, infrastructure operations, energy systems, manufacturing, and logistics generate continuous streams of operational data through sensor networks and monitoring systems. These industries can create a learning cycle in which real-world measurements inform AI analysis, engineers validate the insights, operational improvements are implemented, and the resulting performance data becomes new training input for future models. This continuous interaction between physical systems and analytical models ensures that AI capabilities evolve alongside real-world experience.


Artificial intelligence should therefore be understood primarily as a knowledge amplification tool rather than an independent source of knowledge. The most effective organizational learning model can be described as a cycle in which human expertise generates knowledge, AI systems analyze and synthesize information, human experts validate the outputs, and real-world feedback produces new datasets. This cycle ensures that knowledge continuously evolves rather than stagnates.


An analogy from agriculture provides a useful illustration of this concept. When farmers cultivate a single crop repeatedly without replenishing nutrients, soil quality gradually deteriorates and biodiversity declines. Similarly, a knowledge ecosystem that relies excessively on machine-generated information risks becoming intellectually depleted. To maintain the health of the knowledge environment, organizations must introduce new insights, diverse perspectives, empirical observations, and experimental discoveries. This process can be viewed as a form of knowledge crop rotation that preserves the vitality and resilience of the intellectual ecosystem.


Artificial intelligence will undoubtedly continue to transform industries and organizational practices. However, its long-term effectiveness will depend on maintaining strong connections to human expertise and empirical observation. Organizations that design AI systems as part of an integrated learning ecosystem combining expertise, operational data, experimentation, and governance will be able to avoid the trap of recursive self-learning. Rather than becoming self-referential systems, their AI capabilities will evolve alongside human discovery and real-world experience. In the long run, the most valuable AI systems will not simply be those that process the largest quantities of data, but those that remain deeply grounded in how the world actually functions.

Saturday, 14 February 2026

Designing a Resilient AI Nation: A 4-Driver Framework for Singapore

Singapore’s 2026 Budget signals a decisive national push into artificial intelligence. The government is investing in skills, enterprise adoption, infrastructure, and governance to ensure that AI strengthens economic competitiveness while protecting workers. But funding and ambition alone do not guarantee success. History shows that complex technological transformations fail not because of weak technology, but because of weak system design.

Two powerful lenses help explain why: Charles Perrow’s theory of complex system failure and David Hardoon’s concept of the “silent fracture” between strategy and execution. Together, they suggest that national AI success depends on architecture, not algorithms. Background on Perrow and Hardoon is provided in the Appendix.

To translate these lessons into practice, Singapore should anchor its AI strategy on four structural drivers.


1. Structural Governance - Prevent Systemic Failure

AI must be governed like critical infrastructure, not treated as another IT tool. Strong oversight, auditability, and clear accountability reduce the risk of cascading failures in complex systems. National-level coordination bodies and sector-specific safeguards ensure that innovation does not outrun safety.

2. Workforce Adaptability - Prevent Social Instability

AI transformation is fundamentally a human-capital challenge. Training programs, mid-career pathways, and certification frameworks should be viewed as national infrastructure. A workforce that can adapt quickly reduces resistance, prevents displacement shocks, and increases execution capacity across industries.

3. Enterprise Enablement - Prevent Fragmented Adoption

Without structure, companies adopt AI unevenly, creating inefficiencies and hidden risks. Standardized toolkits, trusted solution libraries, and sandbox environments help firms deploy AI safely while controlling complexity. Adoption should be systematic, not experimental.

4. Ecosystem Coordination - Prevent National “Silent Fracture”

AI success depends on alignment across government, industry, academia, and society. Shared standards, interoperable platforms, and collaborative research environments prevent fragmentation and ensure that progress in one sector strengthens the whole ecosystem.


Why These Four Drivers Matter Together

Each driver protects against a different type of systemic risk:

Driver        Risk Prevented
Governance    Catastrophic failures
Workforce     Social disruption
Enterprise    Innovation fragmentation
Ecosystem     Strategic misalignment

Remove one, and the system becomes fragile.

The Strategic Insight

The global AI race will not be won by the country with the most models or the largest data centers. It will be won by the country with the most resilient AI ecosystem.

Singapore’s advantage is not size. It is system design capability. If it treats AI as a national systems-engineering challenge rather than a technology initiative, it can become one of the world’s most robust AI economies.

Closing Thought

Robust AI is not achieved when systems never fail.
It is achieved when systems remain safe, stable, and trustworthy even when they do.


Appendix 


David Hardoon’s perspective highlights that most AI failures are not technical but organizational. His concept of the “silent fracture” describes the hidden gap between strategic ambition and operational capability. Organizations often invest heavily in AI tools yet lack aligned governance, clear accountability, skilled talent pipelines, and execution capacity. This mismatch leads to stalled projects, wasted resources, and leadership churn. Hardoon’s key insight is that successful AI adoption depends less on model sophistication and more on institutional readiness. In other words, AI transformation is fundamentally a systems-management challenge, not just a technology initiative.

Charles Perrow’s theory from Normal Accidents explains why complex technologies such as AI inevitably produce unexpected failures. Perrow argued that systems with high complexity and tight coupling will eventually experience breakdowns even when individual components work correctly. Failures arise from unpredictable interactions rather than single mistakes. Applied to AI, this means unintended behavior is not an anomaly but a structural property of advanced systems. His work emphasizes designing architectures that contain and recover from failure, rather than assuming perfect reliability. Taken together, Perrow’s and Hardoon’s perspectives show that resilience must be built into a system’s design, not added afterward as a safeguard.

Sunday, 8 February 2026

Summary of “Dollars and Sense of Safety” (1940)

In Dollars and Sense of Safety, F. J. Van Antwerpen argues that industrial safety should be viewed not only as a humanitarian responsibility but as a sound economic investment. Writing in 1940, the author challenges the belief that safety programs exist purely for moral reasons and demonstrates, using long-term industry data and case studies, that safety delivers substantial financial returns.

Drawing on accident statistics from as early as 1912 and longitudinal evidence from heavy and process industries, the paper shows that systematic safety programs reduce accident frequency, lost workdays, compensation payments, and insurance premiums. Van Antwerpen emphasizes that direct costs such as medical expenses and compensation represent only a fraction of the true economic burden of accidents. Indirect or hidden costs including lost productivity, supervisory time, production disruption, retraining, and material damage are estimated to be four to four-and-a-half times the direct costs.


Through multiple industry examples including chemical, steel, oil, and manufacturing firms, the paper documents reductions of 40 to 80 percent in accident-related costs following the introduction of structured safety and hygiene programs. While the chemical industry exhibits relatively low accident frequency, its accident severity and fatality rates are higher, placing its hazards in the low-probability, severe-impact quadrant and underscoring the importance of engineering design and hazard elimination rather than behavior alone.

The paper concludes that investment in safety improves profitability, operational efficiency, and workforce stability, proving that good safety engineering saves both money and lives. 


Citation

Van Antwerpen, F. J. (1940). Dollars and sense of safety. Industrial & Engineering Chemistry, 32(11), 1437–1444.
https://doi.org/10.1021/ie50371a007

Wednesday, 4 February 2026

From Productivity to Purpose: Reframing AI Adoption and the Emergence of New Occupational Roles

Artificial intelligence is presently deployed predominantly as a productivity-enhancing technology within existing occupational roles. Across sectors, AI systems are used to automate repetitive tasks, improve operational efficiency, reduce labour costs, and accelerate decision-making processes. These applications are typically embedded within established professions such as data analysis, engineering, operations management, finance, and marketing. The widespread adoption of AI in these contexts is neither legally nor morally problematic; rather, it reflects prevailing economic incentives that prioritise measurable returns on investment, scalability, and short-term efficiency gains. Consequently, AI adoption has largely reinforced existing organisational structures instead of challenging their underlying assumptions about work and value creation.

However, the current pattern of AI utilisation reveals a significant structural imbalance. While roles that leverage AI for productivity optimisation are well developed, there is a notable absence of formal roles responsible for addressing the broader societal, human, and systemic implications of AI deployment. In most organisations, decisions concerning automation are framed almost exclusively in terms of technical feasibility and economic efficiency. Questions regarding whether AI should be applied in specific contexts, particularly those with significant labour surpluses or social vulnerabilities, are rarely assigned institutional ownership. As a result, the consequences of AI adoption for job quality, human dignity, skill erosion, and social cohesion are often treated as secondary or external considerations rather than core design parameters.

This absence of responsibility becomes particularly consequential in labour-surplus and lower-income contexts, where the diffusion of labour-saving AI technologies may exacerbate unemployment rather than alleviate labour shortages. As highlighted in contemporary debates on economic inequality, technologies initially developed to address workforce deficits in high-income economies frequently migrate into regions where gainful employment, rather than automation, is the more pressing need. In such settings, AI systems may unintentionally displace formal employment and accelerate informalisation, thereby deepening economic precarity. Despite these risks, few roles exist that are explicitly tasked with adapting AI systems to local employment realities or evaluating their distributive impacts across different socioeconomic contexts.

The lack of dedicated roles also extends to the long-term systemic consequences of AI adoption. Current optimisation paradigms emphasise speed, accuracy, and cost reduction, yet often neglect resilience, trust, and intergenerational equity. AI systems may improve short-term performance while increasing long-term fragility by eroding human expertise, reducing organisational redundancy, or concentrating decision-making authority within opaque algorithms. In the absence of roles that explicitly prioritise resilience and societal value, these risks remain under-analysed and under-managed. This reflects not a failure of technology, but a failure of institutional design.

The emergence of these unaddressed gaps suggests the necessity for new categories of professional roles that extend beyond traditional productivity-oriented functions. Such roles would focus on defining the purpose of AI systems prior to their deployment, safeguarding human dignity within AI-mediated workflows, adapting technologies to diverse socioeconomic contexts, and ensuring that AI contributes to long-term societal resilience rather than short-term efficiency alone. Importantly, these roles do not arise from opposition to AI, but from recognition that technological capability must be matched by deliberate governance and human-centred design.

Fresh entrants to the labour market are uniquely positioned to contribute to the creation of these new roles. Because such positions sit at the intersection of technology, ethics, policy, and human systems, they are not easily claimed by established professions or legacy hierarchies. Rather than being constrained by predefined job descriptions, early-career professionals may identify emerging problems created by AI adoption and articulate roles that address these unmet needs. Historically, many now-established professions, such as sustainability management, data science, and cybersecurity, emerged in precisely this manner, following the recognition of systemic risks that existing roles failed to manage.

In this context, the future of work should not be framed solely in terms of job displacement or skill obsolescence. Instead, it should be understood as a period of occupational reconfiguration in which new forms of value creation become visible. While AI will continue to enhance productivity within existing roles, it simultaneously generates demand for new forms of human labour that are oriented toward judgment, contextual understanding, ethical stewardship, and social adaptation. The capacity to invent such roles, rather than merely occupy predefined ones, represents a critical source of agency and opportunity for the next generation entering the workforce.

Monday, 2 February 2026

Why Suffering Does Not Transform Us: Why Disposition Determines Spiritual Growth

Difficulties do not inherently strengthen a person. The idea that difficulties strengthen one is not precisely correct: one can indeed use difficulties for self-strengthening, but only if one has the disposition to do so. If one lacks such a disposition, the difficulties merely irritate one and make one unhappy.

It is not the difficulties that strengthen one but the disposition of the spiritual warrior that enables one to make constructive use of them. Difficulties only become sources of growth when met with the right inner disposition: a stable, cultivated orientation of mind that allows adversity to be used constructively rather than reacted to emotionally.

"Buddhist practice is difficult in daily life because suffering alone does not transform us; only a cultivated disposition can turn ordinary difficulties into genuine inner growth."

In everyday life, most people lack this disposition. As a result, difficulties tend to irritate, exhaust, or discourage rather than transform. This explains why Buddhist values are difficult to practice in daily life: daily situations trigger deeply conditioned habits (desire, aversion, fear, and ego-defense) faster than untrained awareness can intervene.

Buddhism does not claim that suffering itself leads to wisdom. Instead, it teaches that wise engagement with suffering, developed through intentional practice, leads to transformation. This corresponds directly to the passage’s assertion that there is no passive evolution. Inner growth requires aspiration, cultivation, and sustained effort.

Disposition is not fixed. It can be intentionally developed through aspiration (the desire to embody higher qualities), repeated practice, and conscious reflection. Over time, this work produces real changes in how a person responds—so that patience, compassion, and clarity become increasingly available without deliberate effort. At that stage, Buddhist values begin to express themselves naturally in daily life.

Human evolution, in this sense, is not automatic. It occurs only when experience is met with a trained disposition capable of converting difficulty into insight and resilience. Without such inner work, life continues, experiences accumulate, but no deep transformation takes place.

Saturday, 31 January 2026

Employability in the Age of AI: Why Fresh Entrants Must Build Portfolios, Not Wait for Jobs

For generations, employability followed a predictable path: education led to entry-level jobs, jobs led to experience, and experience led to career progression. Industry growth reliably translated into more hiring, especially for young workers.

That model is breaking.

Today, industries can grow without creating proportional employment. Artificial intelligence, automation, and digital tools allow organisations to scale output while hiring fewer people—particularly at the junior level. Firms no longer need to groom large cohorts of young talent; instead, they select a small number who can contribute value quickly.

This shift creates a paradox for fresh entrants: jobs require experience, but experience is increasingly inaccessible without a job.

The issue is not a lack of ambition or ability among young people. It is a fundamental change in how work, value, and learning are structured.

The End of the Linear Career Path

Careers are no longer ladders. They are portfolios.

A portfolio career does not mean instability or unfocused job-hopping. It means deliberately assembling a set of complementary capabilities that travel across roles, industries, and economic cycles.

Employers no longer hire on potential alone. They hire to reduce risk. In an AI-enabled workplace, the key question has shifted from “Can we train this person?” to “Can this person already do something useful?”

As a result, employability is no longer determined by credentials or tenure, but by demonstrated capability.

Experience Is No Longer Granted. It Is Created

In the portfolio model, experience does not come only from formal employment. It is built through:

  • Applied projects

  • Case studies and simulations

  • Models and prototypes

  • Critical analysis and redesign of real systems

AI accelerates this shift. Used correctly, it allows young workers to simulate senior-level thinking, stress-test decisions, explore edge cases, and compress years of feedback into weeks of learning.

Those who use AI merely to generate answers will be replaced by it. Those who use AI to sharpen judgment and accelerate learning will stand out.



What Employable Fresh Entrants Actively Do

Fresh entrants who succeed in this environment behave differently from the outset.

First, they build proof, not promises.
Instead of saying “I am interested in…”, they produce tangible artefacts: a case study, a model, a prototype, a critique, or a redesign. These show how they think, not just what they claim.

Second, they design their skill portfolio intentionally.
They can clearly articulate:

  • Their anchor capability

  • What multiplies its impact

  • What allows them to translate it across contexts

  • What gives them real-world judgment

Random accumulation of skills no longer compounds value. Coherence does.

Third, they treat AI as an accelerator of experience, not a shortcut.
They use it to challenge assumptions, simulate decision-making, and learn faster than formal pathways allow.

Finally, they expect zig-zags, not ladders.


Early careers now include side projects, short contracts, hybrid roles, and pivots. These are not weaknesses. They are signals of adaptability in a volatile economy.

A Concrete Example

Consider a fresh graduate with an engineering or sustainability background.

Instead of waiting for an entry-level role, they analyse a real office building using publicly available data. They reconstruct its energy profile, identify inefficiencies, propose improvement options, and compare cost, carbon, and operational trade-offs. They document this as a short deck and simple model.

They use AI to stress-test their assumptions, challenge their logic, and refine their explanations. They add basic data visualisation and write a clear executive summary explaining decision trade-offs.
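
To give a flavour of the artefact itself, the sketch below compares hypothetical improvement options on simple payback and avoided carbon. Every number is invented for illustration; a real portfolio piece would substitute the building's measured data, the actual tariff, and a current grid emission factor.

```python
# All figures below are invented for illustration only.
options = [
    # (measure, capital cost in SGD, annual kWh saved)
    ("LED retrofit",         40_000, 120_000),
    ("Chiller optimisation", 90_000, 350_000),
    ("Smart HVAC controls",  60_000, 180_000),
]

TARIFF = 0.30     # SGD per kWh (assumed)
GRID_EF = 0.412   # kg CO2 per kWh (assumed grid emission factor)

for name, capex, kwh_saved in options:
    annual_savings = kwh_saved * TARIFF          # SGD saved per year
    payback_years = capex / annual_savings
    tco2_avoided = kwh_saved * GRID_EF / 1000    # tonnes CO2 per year
    print(f"{name:22s} payback {payback_years:4.1f} yr, "
          f"{tco2_avoided:5.1f} tCO2/yr avoided")
```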

When interviewed, they do not say, “I lack experience.”
They say, “Here is how I analysed a real system, what I got wrong initially, how I corrected it, and what I would improve with better data.”

At that point, hiring them becomes less risky than hiring someone with credentials alone.

Why Growth No Longer Guarantees Jobs

AI decouples output from headcount and revenue from junior hiring. Economic growth alone can no longer be relied upon to absorb new entrants into the workforce.

The risk facing young workers is not technological unemployment alone, but capability mismatch. The opportunity lies in learning how to demonstrate value earlier and more clearly than previous generations needed to.

Redefining Employability

In a non-linear world, employability is no longer about rank, title, or tenure. It is about:

  • Learning velocity

  • Judgment under uncertainty

  • Ability to integrate technology as leverage

  • Breadth of problem exposure

  • Career optionality

The future will belong to those who treat themselves not as job seekers, but as evolving systems of value.

One line every fresh entrant should remember:

In a non-linear world, resilience comes from range, not rank.
