Artificial intelligence is presently deployed predominantly as a productivity-enhancing technology within existing occupational roles. Across sectors, AI systems are used to automate repetitive tasks, improve operational efficiency, reduce labour costs, and accelerate decision-making. These applications are typically embedded within established professions such as data analysis, engineering, operations management, finance, and marketing. The widespread adoption of AI in these contexts is neither accidental nor inherently problematic; rather, it reflects prevailing economic incentives that prioritise measurable returns on investment, scalability, and short-term efficiency gains. Consequently, AI adoption has largely reinforced existing organisational structures instead of challenging their underlying assumptions about work and value creation.
However, the current pattern of AI utilisation reveals a significant structural imbalance. While roles that leverage AI for productivity optimisation are well developed, there is a notable absence of formal roles responsible for addressing the broader societal, human, and systemic implications of AI deployment. In most organisations, decisions concerning automation are framed almost exclusively in terms of technical feasibility and economic efficiency. Questions regarding whether AI should be applied in specific contexts, particularly those with significant labour surpluses or social vulnerabilities, are rarely assigned institutional ownership. As a result, the consequences of AI adoption for job quality, human dignity, skill erosion, and social cohesion are often treated as secondary or external considerations rather than core design parameters.
This absence of assigned responsibility becomes particularly consequential in labour-surplus and lower-income contexts, where the diffusion of labour-saving AI technologies may exacerbate unemployment rather than alleviate labour shortages. As contemporary debates on economic inequality have highlighted, technologies initially developed to address workforce deficits in high-income economies frequently migrate into regions where gainful employment, rather than automation, is the more pressing need. In such settings, AI systems may unintentionally displace formal employment and accelerate informalisation, thereby deepening economic precarity. Despite these risks, few roles exist that are explicitly tasked with adapting AI systems to local employment realities or evaluating their distributive impacts across different socioeconomic contexts.
The lack of dedicated roles also extends to the long-term systemic consequences of AI adoption. Current optimisation paradigms emphasise speed, accuracy, and cost reduction, yet often neglect resilience, trust, and intergenerational equity. AI systems may improve short-term performance while increasing long-term fragility by eroding human expertise, reducing organisational redundancy, or concentrating decision-making authority within opaque algorithms. In the absence of roles that explicitly prioritise resilience and societal value, these risks remain under-analysed and under-managed. This reflects not a failure of technology, but a failure of institutional design.
The emergence of these unaddressed gaps suggests the necessity for new categories of professional roles that extend beyond traditional productivity-oriented functions. Such roles would focus on defining the purpose of AI systems prior to their deployment, safeguarding human dignity within AI-mediated workflows, adapting technologies to diverse socioeconomic contexts, and ensuring that AI contributes to long-term societal resilience rather than short-term efficiency alone. Importantly, these roles do not arise from opposition to AI, but from recognition that technological capability must be matched by deliberate governance and human-centred design.
Fresh entrants to the labour market are uniquely positioned to contribute to the creation of these new roles. Because such positions sit at the intersection of technology, ethics, policy, and human systems, they are not easily claimed by established professions or legacy hierarchies. Rather than being constrained by predefined job descriptions, early-career professionals may identify emerging problems created by AI adoption and articulate roles that address these unmet needs. Historically, many now-established professions, such as sustainability management, data science, and cybersecurity, emerged in precisely this manner, following the recognition of systemic risks that existing roles failed to manage.
In this context, the future of work should not be framed solely in terms of job displacement or skill obsolescence. Instead, it should be understood as a period of occupational reconfiguration in which new forms of value creation become visible. While AI will continue to enhance productivity within existing roles, it simultaneously generates demand for new forms of human labour that are oriented toward judgment, contextual understanding, ethical stewardship, and social adaptation. The capacity to invent such roles, rather than merely occupy predefined ones, represents a critical source of agency and opportunity for the next generation entering the workforce.