Artificial intelligence is a powerful technology and a helpful tool for many companies, and it is more relevant today than ever. The results we have already seen are invaluable for companies in areas such as cost reduction, product optimization, and especially the discovery of patterns in data.
But how can we use AI in valuable areas such as helping children, especially children who are vulnerable and in distress? And how far are we actually willing to go in making use of data when we deal with vulnerable subjects such as children?
The online conference, European Forum on AI & Data Ethics, 21 & 22 October 2020, illuminates the issue with the session "AI - for the sake of the children!?", where we have invited the people behind three cases 'on stage'. Each of them will present their case on how they help the weakest in our society, the children, and how they work with privacy and data ethics. We invite you all to join the debate.
Read all about the exciting cases, which will be discussed at the conference, right here:
The client case of itelligence: Improving Child Welfare with AI
Children’s Welfare (Børns Vilkår) is a Danish child protection organisation, which works for children’s rights in Denmark, focusing especially on neglected children. With the Child Helpline, the organisation offers free, anonymous, and professional counselling over the phone as well as through SMS and chat to thousands of children each year.
With the rise in demand for advice among children in recent years, more than half of the children's attempts to get in touch with the Child Helpline have gone unanswered because the organisation has insufficient technical capacity and too few counsellors. Furthermore, studies have shown that children in identical situations do not receive the same quality of advice and opportunities.
To find a solution that ensures every child receives the help he or she needs, Children's Welfare has partnered with the company itelligence. Together they have developed an AI solution - an Advisory Agent system - that supports, improves, and speeds up counselling sessions.
As a result, the counsellors can handle more conversations without compromising the quality of the interaction with each child. Yet, the combination of AI and child welfare has its opponents even in a progressive country like Denmark.
Thomas Noermark from itelligence will tell you more about the project.
The Gladsaxe model: Data-driven decision making in relation to ensuring children's well-being and development
The Gladsaxe model was set to be used in an experiment to prevent distress among children in vulnerable or exposed situations in Gladsaxe Municipality in Denmark.
The purpose of the model was to create an early-warning system to identify children who were vulnerable due to social circumstances even before they showed actual symptoms of dysfunction. Gladsaxe Municipality conducted extensive statistical analyses to assess whether the model could be developed and become a reality. Based on previous use of statistics, the authorities decided to combine information about "risk indicators".
Gladsaxe Municipality worked on the principle of algorithm transparency in decision support. It was important that every citizen could understand the considerations underlying the process and how “the model” reached the various decisions.
This means that data cannot and should not stand alone. When a child was identified through the data-driven decision support, the child would always be assessed by a relevant professional before any personal contact, to ensure accurate and respectful action. Nonetheless, the model faced public criticism, and Gladsaxe Municipality decided to shut down the project.
Thomas Berlin Hovmand from Gladsaxe Municipality will present the case of the Gladsaxe model.
The client case of 2021.AI: Applying AI to create more comprehensive, safe, and accurate assessments of social service cases in Norrtälje Municipality
One of the departments of the social services in the Swedish municipality of Norrtälje (Norrtälje Kommun) investigates and administers concerns about various forms of child and youth mistreatment. Over the years, the number of notifications about suspected mistreatment has increased significantly. The cases are handled manually, and it takes a social assistant a good deal of time to assess each case properly.
Furthermore, it is a long process with several steps along the way. With the high volume of cases and only a handful of staff, the lengthy process could prevent the right actions from being taken at the right time to help the children - and for some children, timing is critical. The slow process may also increase costs for the public sector, because intervening at a later stage, when a situation has become more severe, requires more resources.
The Danish company 2021.AI has worked with Norrtälje Municipality to optimize the department's processes. Together they have developed and implemented an AI model that supports decision making in the pre-assessment stage, making it more productive without sacrificing the quality and accuracy of each assessment.
Thus, 2021.AI has created a more accurate, safe, and efficient system for assessing social service cases through AI. The AI model does not have access to personal data; it makes its predictions based on a combination of reporting factors, such as domestic violence or substance abuse problems, and its output can never be traced back to individuals.
Björn Preuß from 2021.AI will tell you more about the results of the collaboration between Norrtälje Municipality and 2021.AI.
Check out the rest of the program which contains many relevant cases on AI and data ethics: