This blog post was first published on our Norwegian blog, and you can read the original article here.
1. Competence
To implement artificial intelligence, you need cutting-edge expertise, and you must decide whether to buy that competence in or recruit it yourself. Data Scientist (a role combining machine-learning expertise with strong analytical skills) has been called the sexiest job of the 21st century, and demand for Data Scientists outstrips supply. In other words, they are highly sought after by businesses of all industries and sizes.
If you want to build your own Data Science team, you need to be patient. At the same time, artificial intelligence does not run itself: a model must be put into production before it delivers any gains. If you hire an external team instead, you get the team in place much faster, but integrating the solution with your own systems can be harder than if everyone is in-house.
2. Data capture
It is often said that “data is the new oil”, and whether that is true is debatable. What is certain is that if you want to implement AI, you need data available for the system to learn from.
In practice, data quality is often lower than desirable, and in many cases so poor that no sensible intelligence can be built on top of it. This is so well known that “garbage in, garbage out” has become a cliché in the industry.
The data must be captured in a way that lets a system learn from it later. In many cases, business processes need to change to structure the data better. Improving data capture like this takes time, and is something you should factor into the decision between internal and external expertise.
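Before any model is built, it pays to check whether the captured data is learnable at all. The sketch below is a minimal, illustrative quality check (the field names and thresholds are assumptions, not from any particular tool) that counts the most common capture problems: missing values, duplicate rows, and constant columns that carry no information.

```python
def data_quality_report(rows, columns):
    """Summarise basic capture problems before any model training:
    missing values, exact duplicate rows, and constant columns."""
    missing = sum(1 for row in rows for v in row if v is None)
    duplicates = len(rows) - len(set(rows))          # exact duplicates only
    constant = [
        col for i, col in enumerate(columns)
        if len({row[i] for row in rows}) <= 1        # zero-information column
    ]
    return {"rows": len(rows), "missing_values": missing,
            "duplicate_rows": duplicates, "constant_columns": constant}

# Hypothetical customer table with typical capture problems
columns = ["customer_id", "age", "country"]
rows = [
    (1, 34, "NO"),
    (2, None, "NO"),
    (2, None, "NO"),   # duplicate entry from a double submission
    (3, 51, "NO"),     # "country" never varies: zero information
]
report = data_quality_report(rows, columns)
```

A report like this makes the “garbage in” problem concrete: if most rows are duplicated or a key field is mostly missing, the business process that produces the data needs fixing before any model does.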
3. Privacy and security
Many AI solutions are designed to make decisions that ultimately concern people. This means the data used for learning must be well secured, the project must undergo an impact assessment, and there must be a clear legal basis for using the information. Risks can be reduced by, for example, anonymising the data set, but be aware that anonymisation itself counts as data processing and must therefore also have a legal basis and be impact assessed.
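As a rough illustration of the first step in de-identifying a training set, the sketch below replaces direct identifiers with salted hashes. The record fields and salt are made up for the example, and note the caveat in the comment: this is pseudonymisation, not anonymisation, so the result is still personal data under the GDPR.

```python
import hashlib

def pseudonymise(record, direct_identifiers, salt):
    """Replace direct identifiers with shortened, salted SHA-256 hashes.

    Note: this is pseudonymisation, not anonymisation -- anyone holding
    the salt can recompute the mapping, so the output is still personal
    data under the GDPR and still needs a legal basis.
    """
    out = {}
    for key, value in record.items():
        if key in direct_identifiers:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:12]   # shortened for readability
        else:
            out[key] = value
    return out

# Hypothetical loan applicant record
applicant = {"name": "Kari Nordmann", "email": "kari@example.com",
             "income": 540_000, "loan_amount": 2_500_000}
safe = pseudonymise(applicant, {"name", "email"}, salt="s3cret")
```

The non-identifying fields pass through unchanged so the data is still usable for training, while the identifiers are no longer readable to whoever handles the training set.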
Security encompasses more than privacy. The strategy should also cover robustness, so the solutions cannot easily be deceived by manipulated input. In addition, as few people as possible should have access to training data and training servers.
4. Ethics
Ethical AI is about ensuring that AI makes choices we humans can stand behind. If you are not deliberate about this, it is easy to build models that (unintentionally) discriminate in an automated decision flow. For example, if your training data is biased, your digital loan consultant may end up rejecting all loan applications from immigrants regardless of their ability to pay.
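One simple, concrete way to catch the kind of bias described above is to compare approval rates across groups in the system's decisions, a basic check related to the demographic-parity fairness criterion. The sketch below uses made-up group labels and decisions to illustrate the idea.

```python
def approval_rates(decisions):
    """Approval rate per group -- a quick screen for disparate outcomes.
    A large gap between groups is a signal to investigate the model
    and its training data, not proof of discrimination by itself."""
    totals, approved = {}, {}
    for d in decisions:
        g = d["group"]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + (1 if d["approved"] else 0)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical automated loan decisions
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
rates = approval_rates(decisions)
```

In a real system you would run this over production decisions on a schedule, so that a drift toward skewed outcomes is noticed before it becomes a headline.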
Beyond this, it is important to consider whether you want the artificial intelligence to be explainable and verifiable. Will your system always make the same decision given the same input data?
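The reproducibility question has a practical answer: most training pipelines involve randomness (shuffling, random initialisation), so two runs only produce the same model if that randomness is pinned to a fixed seed. The toy "training" below is invented purely to illustrate the point, not any real algorithm.

```python
import random

def train_toy_model(data, seed):
    """Toy 'training' that involves randomness (shuffling and random
    initialisation), as most real training pipelines do."""
    rng = random.Random(seed)   # a fixed seed makes the run reproducible
    shuffled = data[:]
    rng.shuffle(shuffled)
    weight = rng.uniform(-1, 1)
    # stand-in for fitting: nudge the weight toward each example
    for x, y in shuffled:
        weight += 0.01 * (y - weight * x) * x
    return weight

data = [(1.0, 0.8), (2.0, 1.9), (3.0, 3.1), (4.0, 3.9)]
w1 = train_toy_model(data, seed=42)
w2 = train_toy_model(data, seed=42)   # same seed, same data -> same model
w3 = train_toy_model(data, seed=7)    # different seed -> different model
```

If verifiability matters for your use case, seeds, data versions, and code versions all need to be recorded, otherwise "the same decision on the same input" cannot be guaranteed or audited after the fact.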
Finally, companies must consider exactly what they want to use AI for. Should AI take over some of the tasks human workers do today? If so, which other tasks should those workers take on instead? Perhaps you should use AI to increase profits rather than to cut costs, and thereby keep the same number of jobs?
5. A problem to solve
Last but not least, you need to find problems that are worth solving. Some companies assume that because they have plenty of data, it will be easy to “do something smart” with it and create value. You know your business best, and which problems should be solved. There is no point in doing AI without a sensible business case behind it.
Do you want to learn more about how we work with artificial intelligence and help customers incorporate AI into their operations?