Wednesday, 4 February 2026

From Productivity to Purpose: Reframing AI Adoption and the Emergence of New Occupational Roles

Artificial intelligence is presently deployed predominantly as a productivity-enhancing technology within existing occupational roles. Across sectors, AI systems are used to automate repetitive tasks, improve operational efficiency, reduce labour costs, and accelerate decision-making processes. These applications are typically embedded within established professions such as data analysis, engineering, operations management, finance, and marketing. The widespread adoption of AI in these contexts is neither legally nor morally problematic; rather, it reflects prevailing economic incentives that prioritise measurable returns on investment, scalability, and short-term efficiency gains. Consequently, AI adoption has largely reinforced existing organisational structures instead of challenging their underlying assumptions about work and value creation.

However, the current pattern of AI utilisation reveals a significant structural imbalance. While roles that leverage AI for productivity optimisation are well developed, there is a notable absence of formal roles responsible for addressing the broader societal, human, and systemic implications of AI deployment. In most organisations, decisions concerning automation are framed almost exclusively in terms of technical feasibility and economic efficiency. Questions regarding whether AI should be applied in specific contexts, particularly those with significant labour surpluses or social vulnerabilities, are rarely assigned institutional ownership. As a result, the consequences of AI adoption for job quality, human dignity, skill erosion, and social cohesion are often treated as secondary or external considerations rather than core design parameters.

This absence of responsibility becomes particularly consequential in labour-surplus and lower-income contexts, where the diffusion of labour-saving AI technologies may exacerbate unemployment rather than alleviate labour shortages. As highlighted in contemporary debates on economic inequality, technologies initially developed to address workforce deficits in high-income economies frequently migrate into regions where gainful employment, rather than automation, is the more pressing need. In such settings, AI systems may unintentionally displace formal employment and accelerate informalisation, thereby deepening economic precarity. Despite these risks, few roles exist that are explicitly tasked with adapting AI systems to local employment realities or evaluating their distributive impacts across different socioeconomic contexts.

The lack of dedicated roles also extends to the long-term systemic consequences of AI adoption. Current optimisation paradigms emphasise speed, accuracy, and cost reduction, yet often neglect resilience, trust, and intergenerational equity. AI systems may improve short-term performance while increasing long-term fragility by eroding human expertise, reducing organisational redundancy, or concentrating decision-making authority within opaque algorithms. In the absence of roles that explicitly prioritise resilience and societal value, these risks remain under-analysed and under-managed. This reflects not a failure of technology, but a failure of institutional design.

The emergence of these unaddressed gaps suggests the necessity for new categories of professional roles that extend beyond traditional productivity-oriented functions. Such roles would focus on defining the purpose of AI systems prior to their deployment, safeguarding human dignity within AI-mediated workflows, adapting technologies to diverse socioeconomic contexts, and ensuring that AI contributes to long-term societal resilience rather than short-term efficiency alone. Importantly, these roles do not arise from opposition to AI, but from recognition that technological capability must be matched by deliberate governance and human-centred design.

Fresh entrants to the labour market are uniquely positioned to contribute to the creation of these new roles. Because such positions sit at the intersection of technology, ethics, policy, and human systems, they are not easily claimed by established professions or legacy hierarchies. Rather than being constrained by predefined job descriptions, early-career professionals may identify emerging problems created by AI adoption and articulate roles that address these unmet needs. Historically, many now-established professions, such as sustainability management, data science, and cybersecurity, emerged in precisely this manner, following the recognition of systemic risks that existing roles failed to manage.

In this context, the future of work should not be framed solely in terms of job displacement or skill obsolescence. Instead, it should be understood as a period of occupational reconfiguration in which new forms of value creation become visible. While AI will continue to enhance productivity within existing roles, it simultaneously generates demand for new forms of human labour that are oriented toward judgment, contextual understanding, ethical stewardship, and social adaptation. The capacity to invent such roles, rather than merely occupy predefined ones, represents a critical source of agency and opportunity for the next generation entering the workforce.

Monday, 2 February 2026

Why Suffering Does Not Transform Us: Why Disposition Determines Spiritual Growth

Difficulties do not inherently strengthen a person. One can indeed use difficulties for self-strengthening, if one has the disposition to do so. If one lacks such a disposition, then the difficulties merely irritate one and make one unhappy.

It is not the difficulties that strengthen one but the disposition of the spiritual warrior that enables one to make constructive use of them. Difficulties become sources of growth only when met with the right inner disposition: a stable, cultivated orientation of mind that allows adversity to be used constructively rather than reacted to emotionally.

"Buddhist practice is difficult in daily life because suffering alone does not transform us; only a cultivated disposition can turn ordinary difficulties into genuine inner growth."

In everyday life, most people lack this disposition. As a result, difficulties tend to irritate, exhaust, or discourage rather than transform. This explains why Buddhist values are difficult to practice in daily life: daily situations trigger deeply conditioned habits (desire, aversion, fear, and ego-defense) faster than untrained awareness can intervene.

Buddhism does not claim that suffering itself leads to wisdom. Instead, it teaches that wise engagement with suffering, developed through intentional practice, leads to transformation. This corresponds directly to the assertion that there is no passive evolution: inner growth requires aspiration, cultivation, and sustained effort.

Disposition is not fixed. It can be intentionally developed through aspiration (the desire to embody higher qualities), repeated practice, and conscious reflection. Over time, this work produces real changes in how a person responds—so that patience, compassion, and clarity become increasingly available without deliberate effort. At that stage, Buddhist values begin to express themselves naturally in daily life.

Human evolution, in this sense, is not automatic. It occurs only when experience is met with a trained disposition capable of converting difficulty into insight and resilience. Without such inner work, life continues, experiences accumulate, but no deep transformation takes place.

Saturday, 31 January 2026

Employability in the Age of AI: Why Fresh Entrants Must Build Portfolios, Not Wait for Jobs

For generations, employability followed a predictable path: education led to entry-level jobs, jobs led to experience, and experience led to career progression. Industry growth reliably translated into more hiring, especially for young workers.

That model is breaking.

Today, industries can grow without creating proportional employment. Artificial intelligence, automation, and digital tools allow organisations to scale output while hiring fewer people—particularly at the junior level. Firms no longer need to groom large cohorts of young talent; instead, they select a small number who can contribute value quickly.

This shift creates a paradox for fresh entrants: jobs require experience, but experience is increasingly inaccessible without a job.

The issue is not a lack of ambition or ability among young people. It is a fundamental change in how work, value, and learning are structured.

The End of the Linear Career Path

Careers are no longer ladders. They are portfolios.

A portfolio career does not mean instability or unfocused job-hopping. It means deliberately assembling a set of complementary capabilities that travel across roles, industries, and economic cycles.

Employers no longer hire primarily on potential alone. They hire to reduce risk. In an AI-enabled workplace, the key question has shifted from “Can we train this person?” to “Can this person already do something useful?”

As a result, employability is no longer determined by credentials or tenure, but by demonstrated capability.

Experience Is No Longer Granted. It Is Created

In the portfolio model, experience does not come only from formal employment. It is built through:

  • Applied projects

  • Case studies and simulations

  • Models and prototypes

  • Critical analysis and redesign of real systems

AI accelerates this shift. Used correctly, it allows young workers to simulate senior-level thinking, stress-test decisions, explore edge cases, and compress years of feedback into weeks of learning.

Those who use AI merely to generate answers will be replaced by it. Those who use AI to sharpen judgment and accelerate learning will stand out.



What Employable Fresh Entrants Actively Do

Fresh entrants who succeed in this environment behave differently from the outset.

First, they build proof, not promises.
Instead of saying “I am interested in…”, they produce tangible artefacts: a case study, a model, a prototype, a critique, or a redesign. These show how they think, not just what they claim.

Second, they design their skill portfolio intentionally.
They can clearly articulate:

  • Their anchor capability

  • What multiplies its impact

  • What allows them to translate it across contexts

  • What gives them real-world judgment

Random accumulation of skills no longer compounds value. Coherence does.

Third, they treat AI as an accelerator of experience, not a shortcut.
They use it to challenge assumptions, simulate decision-making, and learn faster than formal pathways allow.

Finally, they expect zig-zags, not ladders.


Early careers now include side projects, short contracts, hybrid roles, and pivots. These are not weaknesses. They are signals of adaptability in a volatile economy.

A Concrete Example

Consider a fresh graduate with an engineering or sustainability background.

Instead of waiting for an entry-level role, they analyse a real office building using publicly available data. They reconstruct its energy profile, identify inefficiencies, propose improvement options, and compare cost, carbon, and operational trade-offs. They document this as a short deck and simple model.

They use AI to stress-test their assumptions, challenge their logic, and refine their explanations. They add basic data visualisation and write a clear executive summary explaining decision trade-offs.

When interviewed, they do not say, “I lack experience.”
They say, “Here is how I analysed a real system, what I got wrong initially, how I corrected it, and what I would improve with better data.”

At that point, hiring them becomes less risky than hiring someone with credentials alone.
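The trade-off comparison in this example can be made concrete in a few lines of code. The sketch below is purely illustrative: the option names, costs, savings, and tariff are hypothetical placeholders for numbers a graduate would actually derive from public data.

```python
# Illustrative only: all option names and figures are hypothetical,
# standing in for values derived from a real building's public data.
options = [
    # (name, capital cost $, annual energy saved kWh, annual kg CO2 avoided)
    ("LED retrofit",    20_000, 60_000, 25_000),
    ("HVAC scheduling",  5_000, 30_000, 12_500),
    ("Rooftop solar",  120_000, 90_000, 37_500),
]

TARIFF = 0.15  # assumed electricity price, $/kWh

def rank_by_payback(opts):
    """Rank options by simple payback: capital cost / annual dollar savings."""
    scored = [(name, cost / (kwh * TARIFF), co2)
              for name, cost, kwh, co2 in opts]
    return sorted(scored, key=lambda r: r[1])

for name, payback, co2 in rank_by_payback(options):
    print(f"{name:16s} payback {payback:5.1f} yr, {co2:,} kg CO2/yr avoided")
```

Even a toy model like this lets a candidate show the shape of their reasoning: which metric they optimised, what they assumed, and where better data would change the ranking.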

Why Growth No Longer Guarantees Jobs

AI decouples output from headcount and revenue from junior hiring. Economic growth alone can no longer be relied upon to absorb new entrants into the workforce.

The risk facing young workers is not technological unemployment alone, but capability mismatch. The opportunity lies in learning how to demonstrate value earlier and more clearly than previous generations needed to.

Redefining Employability

In a non-linear world, employability is no longer about rank, title, or tenure. It is about:

  • Learning velocity

  • Judgment under uncertainty

  • Ability to integrate technology as leverage

  • Breadth of problem exposure

  • Career optionality

The future will belong to those who treat themselves not as job seekers, but as evolving systems of value.

One line every fresh entrant should remember:

In a non-linear world, resilience comes from range, not rank.

Monday, 20 October 2025

Cybersecurity Risks in Water and Wastewater Operational Technology (OT)


The water and wastewater sector is increasingly under siege from cyberattacks targeting its operational technology (OT) systems that manage pumps, pressure controls, and chemical dosing. As these systems become more digitized and remotely accessible, their exposure to cyber threats grows rapidly.

Recent incidents highlight this escalating risk. In the United States, municipal water facilities in Tipton, Indiana, and in Texas suffered OT breaches that exposed vulnerabilities in remote SCADA access, forcing operators to switch to manual control. The Municipal Water Authority of Aliquippa, Pennsylvania, was compromised in 2023 when Iranian-linked hackers infiltrated a Unitronics PLC using default passwords. The attack briefly disrupted pressure regulation before staff restored manual operations.

Across the Asia-Pacific, similar patterns are emerging. In Israel, an attempted OT attack in 2020 targeted chemical dosing systems, underlining the potential to endanger public health. Meanwhile, a 2025 study in Australia found that over 60% of utilities had experienced OT-targeted attacks, many traced to state-sponsored actors. While public disclosures remain limited in India and Southeast Asia, the widespread use of remote vendor connections, outdated PLCs, and weak authentication suggests latent vulnerabilities.

To address these challenges, a strategic, defense-in-depth approach is essential. This includes segregating IT and OT networks, implementing multi-factor authentication, and enhancing intrusion detection tailored for OT environments. Regular auditing of vendor access and enforcing strict password and patch policies can further reduce risk.

Action Plan for Water and Wastewater Utilities

  1. Conduct regular cybersecurity assessments to identify vulnerabilities.

  2. Implement strong access controls to prevent unauthorized entry.

  3. Train employees on cybersecurity best practices.

  4. Use encryption for data in transit and at rest.

  5. Apply firewalls and network segmentation to isolate OT systems.

  6. Maintain updated anti-virus and endpoint protection tools.

  7. Patch software carefully, balancing operational continuity.

  8. Develop robust backup and incident response plans.

  9. Enforce multi-factor authentication, especially for remote access.
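One of the less obvious items above is intrusion detection tailored for OT environments, where network traffic is sparse but physical telemetry is highly regular. The sketch below is a minimal illustration of that idea, not a production IDS; the readings, window size, and threshold are all hypothetical.

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=10, threshold=3.0):
    """Flag indices whose reading deviates more than `threshold`
    standard deviations from the trailing window's mean."""
    flagged = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Simulated pump-pressure telemetry (bar): stable operation, then a spike
pressure = [4.0, 4.1, 3.9, 4.0, 4.2, 4.1, 4.0, 3.9, 4.1, 4.0, 4.1, 9.5]
print(detect_anomalies(pressure))  # the spike at index 11 is flagged
```

Real deployments combine such process-level baselines with protocol-aware network monitoring, but the principle is the same: OT processes are predictable enough that simple statistical checks can catch gross manipulation of pumps, pressure, or dosing.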

Safeguarding water infrastructure is no longer optional; it is a matter of national resilience. Strengthening cyber hygiene and OT governance today ensures the uninterrupted delivery of one of humanity’s most essential resources.


Saturday, 18 October 2025

When AI Becomes Vice: A Three-Lens Callout of OpenAI’s Erotic Pivot



OpenAI’s announcement that ChatGPT will soon offer “erotic role-play” for verified adult users marks more than a product shift; it signals a drift in mission. As Parmy Olson observes in The Straits Times, this “pivot to porn” may indeed be problematic but lucrative.

What began as a promise to benefit humanity now risks being reduced to monetizing human desire. To understand how this misstep could reshape the AI landscape, let’s revisit it through three essential lenses inspired by The AI Dilemma by Juliette Powell and Art Kleiner: technology, governance, and society.

1. Technological Lens — Build with Intention, Not Exploitation

AI is not neutral: it shapes behavior, moods, and expectations. When systems simulate emotional or erotic interaction, every design decision becomes a moral one. Will the system promote dependency? Will it exploit loneliness? Designers must prioritize human flourishing over “stickiness.”


2. Governance Lens — Demand Accountability, Not Excuses

The move underscores a deeper industry tension: monetization versus dignity. Corporations and regulators must step up:

- Investors and boards need enforceable ethics guardrails, not just revenue targets.

- Policy must clarify boundaries around emotional manipulation and content in AI.

- Users deserve transparency: clear consent, opt-out, and auditability of decisions.

Without accountability, profits easily displace purpose.

3. Societal Lens — Center Humans, Not Fantasies

The rise of erotic AI reflects deep human needs: connection, intimacy, validation. But as Olson notes, many chatbots are designed around male fantasies, reinforcing skewed stereotypes.

We should ask:

- Who benefits, and who is harmed, by this shift?

- Are we normalizing emotional dependency on machines?

- Can we direct AI toward empathy, mental health, and inclusion instead?

True progress demands AI that uplifts, not exploits.

A Call to Recommit: Purpose Over Profit

OpenAI’s erotic pivot may bring short-term gains, but it sets a dangerous precedent: intelligence driven by impulse, not integrity.

If AI is to fulfill its promise to humanity, it must continually ask:

1) Does it respect our humanity?
2) Does it mentor, not manipulate?
3) Does it uplift, not exploit?

In the end, it’s not enough to build powerful systems. We must build systems with purpose.

AI’s Real Test: From Bubbles to Bridges



Two recent Financial Times and Straits Times articles, “AI’s Double Bubble Trouble” by John Thornhill [1] and “Chatbots Are a Waste of AI’s Real Potential” by Gary Marcus [2], capture a defining paradox of our time. One warns that AI’s market valuations are inflating faster than its real-world impact, while the other laments that AI’s brightest minds are building conversational tools instead of scientific breakthroughs. Both are right, but both miss a larger point.

AI today is not just a technology boom. It is a societal test. Thornhill’s concern about speculative excess is valid: when companies trade at 225 times earnings or promise trillion-dollar transformations before delivering measurable value, the risk of a financial bubble looms large. Yet beneath that excitement lies a quieter revolution where teachers use AI to simplify lesson plans, small businesses optimize energy use, and non-profits analyze data that was once locked behind technical walls. These are not speculative ventures. They are examples of AI democratizing data analytics at the ground level.

Marcus is equally correct that chatbots alone will not cure diseases or engineer new materials. But dismissing them as “a waste” misses their role as gateway technologies. Generative AI interfaces lower the barrier between human intent and machine reasoning. They allow billions of people to converse with data using natural language rather than code. This accessibility forms the foundation for more specialized, domain-specific AI to develop. Chatbots are not the pinnacle of AI. They are the bridge between human imagination and scientific application.

The real danger is not that society focuses too much on chatbots or that investors chase speculative valuations. The danger is that we create an AI divide. If advanced AI systems are accessible only to corporations and research labs while the public remains confined to surface-level tools, we risk reproducing inequality in digital form. The “AI haves” will innovate, while the “AI have-nots” will only consume.

What we need instead is a tiered vision of AI democratization:

1) Entry-level AI such as chatbots to empower everyday users

2) Intermediate AI for professionals in healthcare, education, and engineering

3) Advanced AI for scientific discovery and societal resilience

Each level should strengthen, not isolate, the others.

AI’s ultimate success will not be measured by stock prices or the number of startups it spawns. It will be judged by whether it lifts human capability across all levels of society. 

The task ahead is not to choose between speculation and science, or between chatbots and super-intelligence, but to ensure that AI remains a bridge, not a barrier, between progress and people.

References

[1] J. Thornhill, “AI’s double bubble trouble,” Financial Times, Oct. 17, 2025. https://www.ft.com/content/da16e2b1-4fc2-4868-8a37-17030b8c5498
[2] G. Marcus, “Chatbots are a waste of AI’s real potential,” The Straits Times. https://www.straitstimes.com/opinion/chatbots-are-a-waste-of-ais-real-potential

Friday, 17 October 2025

Sustainable End-of-Life Solar Panel Recycling: Turning Waste into Resources


As the world races toward a clean energy future, a new challenge has quietly emerged: what happens when solar panels reach the end of their life? With an average lifespan of 25 to 30 years, millions of panels installed in the early 2000s are now nearing retirement. If left unmanaged, these panels could end up in landfills, wasting valuable materials and harming the environment.


The Solar PV Panels Recycling Solution provides a transformative answer. Designed with sustainability at its core, the system processes up to 1,500 kilograms, or about 75 solar panels, per hour, giving each component a second life.

Through an advanced sequence of aluminum frame removal, glass separation, and EVA sheet crushing, this technology recovers up to 95 percent of valuable materials such as glass, silicon, copper, and aluminum. Each recovered element reduces dependence on virgin mining, lowers manufacturing costs, and prevents pollution.
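Taking the stated figures at face value, the throughput and recovery claims are easy to sanity-check. The operating schedule below is a hypothetical assumption added for illustration; only the 1,500 kg/h throughput and 95 percent recovery rate come from the description above.

```python
# Back-of-envelope check of the stated throughput and recovery figures.
# Assumptions beyond the text (hypothetical): one 8-hour shift per day,
# 300 operating days per year.
THROUGHPUT_KG_H = 1_500
RECOVERY_RATE = 0.95
HOURS_PER_DAY, DAYS_PER_YEAR = 8, 300

recovered_per_hour = THROUGHPUT_KG_H * RECOVERY_RATE           # 1,425 kg/h
recovered_per_year = recovered_per_hour * HOURS_PER_DAY * DAYS_PER_YEAR

print(f"{recovered_per_hour:,.0f} kg/h recovered, "
      f"about {recovered_per_year / 1000:,.0f} tonnes per year")
```

Under those assumptions a single line would return on the order of 3,400 tonnes of glass, silicon, copper, and aluminum to the supply chain each year, which is the scale at which recycling starts to displace virgin mining.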

This process represents the next evolution of renewable responsibility: closing the loop between clean energy generation and end-of-life care. It is best in class because it combines precision engineering, energy efficiency, and minimal environmental impact. Unlike traditional shredding or chemical methods, it uses a clean mechanical process that is faster, safer, and more sustainable.

This is not just recycling; it is a vision for a circular energy future where technology and nature work in harmony. It reminds us that true sustainability begins not at the start of a product’s life, but at its end.
