The state of AI in 2025
AI is undeniably reshaping industries, and our cloud transformation is practically complete. At Visma, we are already several years into the AI transformation – one that promises even greater impact in the years to come.

A strong theme from Hg’s Leadership Summit in Silicon Valley, presented by Guido Appenzeller, partner at a16z, was the risk of underutilising AI – a risk many consider greater than most other AI-related risks. Companies that fail to adapt face real consequences, and disruption is inevitable for those who lag behind.

That’s why we’re committed to a supportive and permissive stance on AI experimentation, especially in cases that don’t involve personal or customer data, such as using AI tools for code generation.

As our CEO, Merete, emphasises, “Experiment responsibly – but we also need to encourage and facilitate that experimentation.” 

Why is everyone talking about agentic AI? 

“Agentic AI” or “AI agents” refer to systems capable of planning, reasoning, and reflecting on their own work autonomously – and they continue to be a very hot topic for 2025. Because Large Language Model (LLM)-powered agents can process varied unstructured inputs and select strategies and tools to achieve different goals, they are adaptive and can automate a wider range of tasks than traditional automation. What makes them powerful is their ability to automate tasks that once relied on human skills, which will inevitably revolutionise business practices.

We see tremendous potential in agentic AI. Whether agents are sourced from third parties or developed in-house, they can enhance both internal processes and client interactions. A prime example is using agents in customer support: with access to tools that allow it to reset user passwords and change customer configuration, an AI agent can resolve a greater proportion of customer tickets without needing to involve a human. Another example is OpenAI’s Operator, which can access and interact with the internet to carry out tasks independently.
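
To make the customer support example concrete, here is a minimal sketch of such an agent loop in Python. The llm_decide() function is a hypothetical placeholder standing in for a real LLM call, and the tool names, ticket format, and canned decisions are purely illustrative – they are assumptions for the sketch, not part of any Visma product or API.

  # Minimal sketch of a tool-using support agent.
  # llm_decide() is a placeholder for an LLM call; tool names and
  # the ticket format are illustrative assumptions.

  def reset_password(user_id: str) -> str:
      return f"Password reset link sent to user {user_id}"

  def update_configuration(customer_id: str, setting: str, value: str) -> str:
      return f"Set {setting}={value} for customer {customer_id}"

  TOOLS = {
      "reset_password": reset_password,
      "update_configuration": update_configuration,
  }

  def llm_decide(ticket: str, history: list) -> dict:
      # Placeholder: a real agent would send the ticket, the tool
      # descriptions, and the history so far to a model and parse its reply.
      if not history:
          return {"tool": "reset_password", "args": {"user_id": "u-123"}}
      return {"tool": None, "answer": "Your password has been reset."}

  def handle_ticket(ticket: str, max_steps: int = 5) -> str:
      history = []
      for _ in range(max_steps):
          decision = llm_decide(ticket, history)
          if decision["tool"] is None:                 # agent judges the ticket solved
              return decision["answer"]
          result = TOOLS[decision["tool"]](**decision["args"])
          history.append((decision["tool"], result))   # keep tool output for the next step
      return "Escalating to a human agent."            # fall back when the loop stalls

  print(handle_ticket("I can't log in to my account."))

Capping the number of steps and falling back to a human when the loop stalls is a common safeguard for this kind of agent, and it mirrors the escalation path we consider essential in customer support.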

For SaaS companies like Visma, agentic AI brings both strategic challenges and opportunities, posing critical questions about how we might integrate them into our products and services to boost customers’ efficiency. For example, will business logic migrate from SaaS to agents over time – reducing SaaS to simple CRUD databases – or will agents make SaaS more valuable as our functionality becomes more accessible to more people?

Key processes in most of our domains are already impacted by LLMs today, and will likely be impacted by agentic AI soon. While we don’t have the answers to every question yet, it’s essential that we explore and experiment, assess risks and opportunities, and invest in innovation to disrupt ourselves before our competitors do.


Agentic AI in action

Cognition's agentic system Devin helps developers be more productive; you could say it operates similarly to a junior developer. Just like it takes a little time to onboard and train a junior developer, Devin requires some time to learn certain software engineering tasks, but then it’ll hit the ground running. It will work on its delegated tasks more or less autonomously until they’re completed, resulting in code that a human can accept, modify, or reject. 

We are currently trialling Devin in multiple pilot projects, and early results are promising. For certain tasks, we see a reduction in human effort of up to 70%.

AI tools are also transforming rapid prototyping by significantly shortening development cycles. Tools like Databutton, Bolt, or Lovable allow designers to swiftly create lower-fidelity prototypes that speed up early concept validation and assumption testing. Code assistants like Replit, v0, Windsurf, or Cursor can be used by developers to produce higher-fidelity prototypes, depending on what the product trio is trying to achieve.

With these advancements, agentic AI is set to revolutionise design and development processes, and we’ll likely begin to see more businesses leveraging agents in 2025. 

The state of AI at Visma 

In the current generative AI cycle, our organisation is benefiting from AI investments made 3-10 years ago, such as Smartscan and Resolve. When presenting on Visma’s AI journey at the Silicon Valley summit, Merete emphasised that companies just beginning to explore AI in 2025 can still achieve significant progress within just a few years. She also spoke about AI’s transformative impact on customer support, but stressed the necessity of escalating to a human agent when needed.

Meanwhile, in software engineering, our focus has been on maintaining healthy software engineering practices and investing in our developers. Our CTO, Alex, explains, “It’s key for developers to be hands-on with the technology to experience the value in their own context – solving relevant problems in their own codebase.” 

We facilitate this in many different ways:

  • Cross-functional product focused hackathons 
  • ‘Fast Fridays’: Sessions where our Machine Learning team practises AI development techniques and experiments with AI-assisted development workflows
  • ‘No code’ workshops: Workshops where code changes are only allowed through prompting – participants are prohibited from writing or editing code by hand

We’re focusing on using AI to accelerate tech modernisation, and we’re currently undertaking 15 modernisation projects across the organisation, aiming for up to 40 more in 2025. Alex explains, “So far we see that AI can reduce human effort by 30-75% when modernising a tech stack, and it also helps reduce hosting costs by anywhere between 20-50% when replacing Windows/MSSQL.”

“Adopting AI technologies is ultimately about changing human habits.”
- Merete Hverven, CEO of Visma 

And we’re supporting this transition by investing in upskilling, compliance clarification, and goal-setting at all levels, all underscored by transparent leadership.


“The AI revolution won’t slow down, and the biggest risk is standing still.”
- T. Alexander Lystad, CTO of Visma 

The launch of ChatGPT in 2022 marked a significant shift in the era of LLMs, driven by breakthroughs that enabled training on nearly all publicly available internet data. As we near the limits of available training data, developers have turned to reinforcement learning and inference-time techniques to enhance capabilities. This resulted in advanced models like o1, o3, and DeepSeek-R1 – all excelling in tasks with clear, verifiable answers, like math and coding. 

With ongoing improvements in model performance paired with rapidly decreasing costs, revisiting AI tools that you once found lacking may now very well be worth your while. The pace of improvement is relentless, and what once seemed underwhelming may now be transformative.
