Author
Anna-Liisa Lemettinen

Organizations are intrigued by new technologies and systems that can take decisions with varying degrees of autonomy. Things become challenging when, for example, algorithms are part of sensitive decision-making: you have to deal with concerns such as what the responsible use of algorithms looks like. And with so many algorithmic options available, is there a safe way to choose the most suitable approach? Luckily, there is.

The following three considerations aim to facilitate a safer selection and operational use of algorithms:

  1. Consider the strengths and limitations of the algorithm
  2. Engage with the broader consequences of the algorithm
  3. Plan for auditability

The idea is to identify where things could go wrong in order to make choices that maximize the positive and minimize the negative, regardless of the approach and technology used. The considerations apply equally to AI algorithms and to entirely human-written ones. An AI algorithm could, for example, perform intelligent character recognition to read handwriting using Optical Character Recognition (OCR); explore our deep learning solutions to understand the use of AI algorithms. An example of a human-written algorithm could be a tailored algorithm to monitor a specific risk. This blog offers practical considerations to enhance safety, make effective use of algorithms, and design the world of tomorrow.

    1. Consider the strengths and limitations of the algorithm

      You have an algorithm that already does its job, or a candidate that you might use. To select the best algorithm, consider the strengths and limitations of your candidate in a three-step approach: first, understand the causal relationships around the task or problem where the algorithm will be used and what drives business value; then run a simulation to learn how well the algorithm works; and finally, make an informed decision.

      Step 1 - Establish real cause and effect relationships

      Since algorithms are used to drive and support decisions, establishing valid cause-and-effect relationships is key. Causality is about determining whether the results you observe are caused by a certain action or whether other factors have triggered them.

      While the causes and effects may at first appear obvious, the relationships often are not. Processes are intertwined and overlapping, making it difficult to clearly articulate the real cause-and-effect relationships. Faced with this challenge, it is tempting to look only at patterns and correlations in the data.

      For example, historical employment data may show that people who are less motivated are not promoted, and it may be claimed that this correlation shows that motivation drives promotion. But is this truly the case? Motivation can be the cause, but it is equally possible that it is an effect.

      Clearly, looking at correlation alone is not enough to establish a valid causal relationship. Supplementary methods are needed, such as interviews with a sample of external domain experts, end users, and clients to understand the events and their correct order.
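
      As a quick illustration (not from this article; the data and variable names are made up), the sketch below simulates the motivation/promotion example under two different causal stories and shows that correlation alone cannot tell them apart.

      import numpy as np

      rng = np.random.default_rng(seed=42)
      n = 10_000

      # Causal story A: motivation drives promotion.
      motivation_a = rng.normal(size=n)
      promotion_a = (motivation_a + rng.normal(scale=0.5, size=n)) > 1.0

      # Causal story B: promotion (e.g., announced early) drives motivation.
      promotion_b = rng.random(n) < 0.15
      motivation_b = promotion_b.astype(float) + rng.normal(scale=0.5, size=n)

      # Both stories produce a clear positive correlation between the two
      # variables, so the correlation itself cannot reveal which story is true.
      print(np.corrcoef(motivation_a, promotion_a)[0, 1])
      print(np.corrcoef(motivation_b, promotion_b)[0, 1])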

      Step 2 - Articulate the business value

      It still appears to be a core dilemma: algorithms are great at solving business problems, and yet businesses often do not get what they need. Why? Without knowing the business value related to the use of the algorithm, we are just shooting in the dark, and the results can be disappointing, surprising, or even problematic. Articulating the business value means defining what success looks like for the algorithm, so that useless outputs can be spotted.

      For example, when data quality is the value driver, the business problem or task solved by the algorithm should improve aspects related to data quality. If the output produced by an algorithm is problematic, for example from a reliability point of view, it will not create a successful outcome. The same algorithm can still succeed in another context that drives a different business value, but an algorithm that works well in one context may drive no value in another.

      Algorithm faux pas can be avoided by understanding and articulating the business value and selecting algorithms that help drive that value.
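
      For illustration, a minimal sketch of making success explicit could look like the following. The metric names and thresholds are assumptions for the example, not criteria prescribed here.

      from dataclasses import dataclass

      @dataclass
      class SuccessCriteria:
          min_precision: float = 0.80        # flagged items should mostly be real issues
          max_false_alarm_rate: float = 0.10
          max_runtime_seconds: float = 60.0

      def meets_business_value(metrics: dict, criteria: SuccessCriteria) -> bool:
          """Return True only if the algorithm's output supports the value driver."""
          return (
              metrics["precision"] >= criteria.min_precision
              and metrics["false_alarm_rate"] <= criteria.max_false_alarm_rate
              and metrics["runtime_seconds"] <= criteria.max_runtime_seconds
          )

      # The same algorithm can pass in one context and fail in another, because a
      # different value driver implies different criteria.
      print(meets_business_value(
          {"precision": 0.85, "false_alarm_rate": 0.07, "runtime_seconds": 12.0},
          SuccessCriteria(),
      ))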

      Step 3 - Make a simulation

      The Cambridge Academic Content Dictionary defines simulation as "a model of a real activity, created for training purposes or to solve a problem." Simulation is a good way to learn. In the context of selecting the best algorithm, a simulation can be used to understand which candidate is the most promising one to deliver the most useful output without harmful effects on humans.

      The simulation design can be rigorous, but a simple one can also do the job. For example, in a recent project, we planned simulations with three different algorithms: a tailored one, Chi-Square, and Poisson Tolerance Interval. All three were used to solve the same business problem: detecting the risk of Adverse Event (AE) under-reporting in clinical trials. AE reporting, known to suffer from both under- and over-reporting, is identified as one of the most important challenges in clinical research; in our case, over-reporting was not in scope. We wanted to understand which algorithm to select, since using three different ones in the same context made no sense.

      Prior to the simulations, the team formulated what success of the algorithm looks like, considering the causal relationships and value drivers of AE detection, processing, and reporting.

      In the simulation, each algorithm was run on historical data sourced from an electronic data capture (EDC) system, the system in which data such as AEs is collected from clinical trial sites. The outputs were reviewed by a cross-functional team of process, data, and technology experts.
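
      A much-simplified sketch of this kind of simulation is shown below. The data, thresholds, and statistical details are illustrative assumptions, not the project's actual implementations; the point is only that the same historical site-level AE counts are fed to each candidate and the flagged sites are compared.

      import numpy as np
      from scipy import stats

      # Hypothetical historical data: AE counts and patient counts per trial site.
      ae_counts = np.array([4, 12, 9, 1, 15, 7])
      patients = np.array([30, 40, 35, 32, 45, 33])
      expected_rate = ae_counts.sum() / patients.sum()   # overall AEs per patient

      def tailored_rule(ae, pts, factor=0.5):
          # Hand-written rule: flag sites reporting less than half the overall rate.
          return (ae / pts) < factor * expected_rate

      def chi_square_rule(ae, pts, alpha=0.05):
          # Goodness-of-fit of observed counts against exposure-based expectations;
          # flag low-reporting sites when the overall fit is poor.
          expected = expected_rate * pts
          _, p_value = stats.chisquare(ae, f_exp=expected * ae.sum() / expected.sum())
          return (ae < expected) & (p_value < alpha)

      def poisson_rule(ae, pts, alpha=0.05):
          # Flag sites whose count falls below a lower Poisson limit for their exposure.
          lower_limit = stats.poisson.ppf(alpha, expected_rate * pts)
          return ae < lower_limit

      for name, rule in [("tailored", tailored_rule),
                         ("chi-square", chi_square_rule),
                         ("poisson", poisson_rule)]:
          print(name, np.where(rule(ae_counts, patients))[0])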

      As a result, a list of strengths and limitations of each algorithm candidate was prepared. Since one algorithm can have several of both, you need a system for handling such cases, especially multiple limitations. Our system was to sum up the limitations and see whether the business value was still reachable. A principle to follow could look like this:

      The limitations are negligible if and only if: business value > sum of limitations
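
      As a small, purely illustrative example of this principle (the scoring scheme is an assumption, not part of the project):

      def limitations_negligible(business_value: float, limitation_scores: list[float]) -> bool:
          # Give the expected business value and each limitation a comparable score;
          # accept the limitations only if the value still outweighs their sum.
          return business_value > sum(limitation_scores)

      print(limitations_negligible(10.0, [2.0, 1.5, 3.0]))   # True: value still dominates
      print(limitations_negligible(10.0, [6.0, 5.0]))        # False: limitations outweigh value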

       

      In our simulation, we concluded that all three candidates were good algorithms. Each had limitations, but one algorithm was stronger at driving value even with its limitations. This algorithm became our number one choice.

    2. Engage with the broader consequences of the algorithm

      Algorithms are increasingly woven into our lives. We need ways to figure out how people are affected by them and how they react when algorithms do what people used to do, for example, make decisions. Therefore, when considering the operational use of an algorithm, it is wise to think beyond typical computational aspects and address human behavior and interaction. In an expert opinion paper, "Man versus Machine or Man + Machine," published in an IEEE journal, Mary Cummings states that it is not a human vs. machine game; it is a game of sharing and combining human and machine interaction. Since it is about balancing the two, what defines the best balance?

      As a framework for engineers and scientists, she proposes formulating critical questions as guiding principles to engage with the broader consequences of the algorithm and agree upon a well-balanced solution. The questions facilitate an assessment. There is much to say about question-based assessments, but let's keep it brief for now: the basic idea is to characterize the human-machine interaction and come up with a role allocation between human and machine. Following Mary Cummings, the characterization can be made using the taxonomy of skill-, rule-, and knowledge-based behavior (the SRK taxonomy by Jens Rasmussen). The characterization expresses the complexity of tasks linked with the information-processing behaviors and cognition involved.

      How does this work in practice? Take the example of a team that wants to assess an algorithm that produces a credit score as its output. They start by formulating the critical questions relevant to characterizing the human-machine interaction. With the help of these questions, they arrive at a categorization and an assignment of roles. In this case, they characterize the credit score system as knowledge-based. Regarding the role assignment, the team concludes that it is essential to have a human in the loop to notice unexpected occurrences of important factors. The algorithm's output is therefore reviewed by a human expert, as this is good protection against unwanted or unintended negative outcomes.
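
      A minimal sketch of such a role allocation could look like the following. The fields and thresholds are hypothetical and only illustrate the idea of keeping a human in the loop for knowledge-based, unexpected cases.

      from dataclasses import dataclass, field

      @dataclass
      class ScoredApplication:
          score: float       # credit score produced by the algorithm
          confidence: float  # the model's own confidence in that score
          unusual_factors: list = field(default_factory=list)  # important factors the model did not expect

      def route_decision(app: ScoredApplication) -> str:
          """Decide whether the machine's output can stand alone or needs human review."""
          if app.unusual_factors or app.confidence < 0.7:
              return "human review"       # knowledge-based situation: keep a human in the loop
          return "automatic decision"     # routine, rule-like case the machine handles alone

      print(route_decision(ScoredApplication(score=612.0, confidence=0.55,
                                             unusual_factors=["new income type"])))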

    3. Plan for auditability

      Auditability is especially relevant in scientific domains, but it is sometimes unclear whether and when it will come into play. If there is any doubt, in my opinion, it is better to assume that it will (until proven otherwise). So make the algorithm selection process auditable, and work with a mindset of always planning for an audit. Auditability means, for example, that you can demonstrate how you made sure the algorithm selection is sound and accurate. Documentation therefore becomes key. The good thing about writing things down, sometimes in painstaking detail, is that you can get more out of your documentation. Some benefits include:


      1. Forcing you to be explicit, which is likely to lead to a better understanding and can be a way to improve and innovate.
      2. Enabling you to spot problematic outcomes throughout the process and correct them proactively, instead of fixing problems after they have already happened.

      I have not touched on the aspect of data specifically, yet it is closely related to auditability. You may need to provide your raw data along with a description of the process that led you to select that specific raw data, and of how you derived it, step by step, to obtain the output.
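
      For illustration, an audit trail entry for one run of the selection process could capture the data lineage and the rationale in one place; the structure below is an assumption, not a prescribed format.

      import json
      from datetime import datetime, timezone

      def audit_record(raw_data_ref, derivation_steps, algorithm, version, output_ref, rationale):
          # Record which data was used, how it was derived, which algorithm version
          # produced the output, and why the result was accepted.
          entry = {
              "timestamp": datetime.now(timezone.utc).isoformat(),
              "raw_data": raw_data_ref,
              "derivation": derivation_steps,
              "algorithm": algorithm,
              "algorithm_version": version,
              "output": output_ref,
              "rationale": rationale,
          }
          with open("algorithm_audit_log.jsonl", "a") as log:
              log.write(json.dumps(entry) + "\n")
          return entry

      audit_record(
          raw_data_ref="edc_export_q1.csv",   # hypothetical file names
          derivation_steps=["removed test sites", "aggregated AE counts per site"],
          algorithm="poisson_tolerance_interval",
          version="1.2.0",
          output_ref="flagged_sites_q1.csv",
          rationale="strongest value driver in simulation despite known limitations",
      )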

      In summary, auditability is not just about showing what your selection process was, which result the algorithm came to, and which steps were taken to reach that output; transparency about the use of data is part of it as well. Plan for and welcome audits, because at the end of the day, what the regulator wants is what the business demands: to improve.

 

Conclusion

The three considerations give you a mini-package to start with. The more you practice bringing safety and accuracy into your algorithm selection process, the more confident you will become about the use of algorithms. It is intentional that the three points are interlinked with each other:

  • Understanding the consequences is easier when you know the strengths and limitations.
  • Selecting the best algorithm is safer when you understand the consequences the algorithm brings.
  • Designing for auditability not only ensures that regulatory requirements are satisfied but can also be a booster that drives business value.

 

Cover Image - With kindest courtesy of Gabor Mayerhofer
