AI in organizations: where strategy gaps turn into technostress
AI tools are already present in daily work. What’s still forming is clarity around expectations, evaluation, and responsibility.
In this interview, Iveta Bikse, Work and Organizational Psychologist, shares insights from recent discussions and qualitative research across Latvian organizations. Her observations point to a less visible layer of AI adoption, where unclear structures are beginning to surface as technostress, hidden usage, and shifting workplace dynamics.
1. Psychological contract: When AI becomes part of daily work, but expectations are not clearly defined, what happens to the psychological contract between employee and organization?
When AI becomes a daily tool without defined expectations, the psychological contract (the unwritten set of expectations between employee and employer) becomes strained. Employees may feel a "strategy-to-performance gap", where they are expected to deliver market-beating results without clarity on how their roles are being redefined.
Loss of, or threats to, professional identity. Beyond job security, employees now face a fragmented sense of self, losing their accustomed identity at work. Professionals, whether they work with AI or not, are facing a professional identity crisis (or at least sense one approaching): should I work at the speed of a machine? Will my 10 or 20 years of experience still be valuable alongside the technology? How many AI tools should I apply at work (only the approved ones), and how vigorously should I test new ones in private?
Meaning and belonging. Without a purpose to believe in, employees may view AI as a threat to their autonomy rather than an enhancement of their skills. Lately, talking to tech HR representatives from global companies, I have recognized a small-market trap: AI solutions must be used, but they are not applicable due to small samples, and behind those samples are potential and people. Rapid change also keeps reopening questions about a company's business targets and shifting clients, and that affects employee engagement: will people stay engaged in a fast-changing company when it is hard even to describe what kind of company it is (if strategy is reshaped every quarter)? That, in turn, shifts motivation and raises workplace stress through unclear roles, additional tasks, and conflicts.
2. Hidden AI usage. Many employees actively use AI tools, but often outside official workflows and without sharing internally. What drives this behavior, and what does it signal about trust inside organizations?
Shadow AI, or hidden usage, often signals a lack of trust: a fear that transparency will lead to increased quotas without increased rewards, along with doubts about professionalism. Who did the job, an expert, the AI, or an expert in AI? Who will be praised?
If the organizational culture prioritizes individual competition, employees are incentivized to hide AI efficiencies to maintain a personal competitive edge. There is also an effect of spontaneous emotional bonds: research indicates that people often form unplanned emotional ties with general-purpose AI tools (such as agents) while using them for simple tasks, leading to "intimate" digital lives that remain invisible to the organization yet produce results that are highly valued.
3. Evaluation gap: As more work becomes AI-assisted, what exactly are organizations evaluating today: effort, output, or something else? Where do you see the biggest gaps?
It depends on the maturity of the organization and on how a high-quality task is defined. Organizations are currently struggling to move from evaluating inputs (hours worked) to outcomes (value created): activity versus value. They are refining skills and looking for new, more agile, individually based skills-development plans, yet many leaders still track activity-based plans rather than meaningful business results. The blending of human and AI work thins the line between tool and "relationship"; it becomes difficult to measure where human effort ends and AI output begins, leading to inconsistencies in how performance is rewarded. This also raises a hiring question: should we look at a new candidate's rich experience, or at personality traits such as openness, fast adaptation, and flexibility?
4. Leadership dependency: In many cases, AI adoption seems to depend heavily on leadership interest. Why does top management play such a decisive role in whether AI becomes integrated or remains experimental?
Agreed, and it’s logical. Leaders decide on investments and company development, and pioneering with AI is exactly such an investment, so personal interest and belief in new technologies play a major role. AI adoption is rarely a bottom-up revolution (like most life-changing innovations); it requires a top-down operating model and role modeling. Employees only change their mindsets and act with confidence if they see respected leaders "walking the talk". C-suite interest is the primary driver of whether AI becomes integrated or remains a "toy".
5. HR readiness: Where do you see the biggest capability gaps in HR when it comes to supporting AI-driven changes in work design, skills, and employee development?
The biggest gaps in HR involve moving beyond "staffing" to dynamic talent orchestration. HR professionals should become more tech-savvy themselves, or create a new role within the department that understands how to restructure work design and evaluate skills-based contributions. They should also plan technical-skills development for all employees: what to improve, and how often. HR will likely need to split into two streams, one for technical skill assessment and another focused on culture and the psycho-emotional risks of digital identity fragmentation. Just ask your HR team: do they see themselves as people of tech?
6. Technostress signal: Interestingly, stress does not seem to appear at the beginning of AI adoption, but after people have already been using it for some time. Why does technostress emerge at that stage?
According to the latest European research (March 2026), we should talk more about concerns than stress: analyses show that concerns about the privacy implications of AI are widespread, while AI-related stress is infrequent. In practice, trainers who teach AI tools do see technostress in their students (within companies in the Baltics it is quieter). Techno-concerns typically do not appear at the start of AI adoption but emerge after the initial "novelty phase". At work, this next phase is accompanied by change fatigue, as the number of changes keeps increasing (and will not decrease). Our 2025 Wellbeing Index results suggest that the workplace's impact on employees' psycho-emotional health exceeds 20%. For heavy users, stress peaks when AI tools are updated or changed: users who perceive a change in a bot's "personality" experience it as a personal loss that disrupts their workflow and emotional stability, because a resource they rely on is changing and in short supply.
7. Early workforce impact: There are early signals that hiring is slowing down in more AI-exposed roles, particularly for younger professionals. From a psychological perspective, what impact might this have on how people enter and navigate the workforce?
Worldwide, hiring is slowing in AI-exposed roles, with a 14% drop in job-finding rates for workers aged 22–25. Psychologically, this makes the transition into the workforce much harder for young professionals: as traditional entry-level tasks are automated, some may exit the labor market or return to education. Here we come back to company sustainability principles: how a company prepares for substitutability and plans for the long run. If we look at data on what young people plan to study, the situation in Latvia (research published in 2026) shows that many young people know their field of interest but have no idea how to connect it with the needs of the labor market and the possibility of surviving economically. Therefore, more help with orientation, longer onboarding, and personnel development in the business environment will grow exactly the specialists the company will need in the future.
Sources:
* Harvard Business Review (2026): https://hbr.org/2026/03/gen-ai-wont-make-your-employees-experts
* McKinsey, "A new operating model for a new world" (2025).
* McKinsey, "Change is changing" (2025).
* Deloitte, "2025 Global Human Capital Trends."
* Anthropic, "Labor market impacts of AI" (2026).
* Iveta Bikse, Talentbe Partners Wellbeing Index (2025).
* Iveta Bikse, Qualitative research and insights on Latvian organizations and digital identity (2025).