Human-Machine Learning

28/03/2025

"Technologies are ideas, and ideas cannot be eliminated." – Mustafa Suleyman

The message is short, authentic, written in a digital app, and an immediate response to a current event. In a small amount of space, its adult author recounted his experience at an official league game: he reported his thought process, suspected a crucial error, decided to check his assumption about that error, and outlined how he would process the error and put it to use. He did not treat the error as an exclusively unpleasant experience but presented it as a normal part of improving his chess thinking.

Even such a small report can be packed with information about learning: how to verify what has been learned, how to approach learning mistakes, or how to plan further development. Much has already been written about these topics - significantly more than about the part of the report in which the author describes functional learning with a machine used for analysis. By this machine one can best imagine an online application that evaluates the positions of the pieces on the board and, based on this evaluation, expresses the value of individual moves. Recall that the author of the report decided to use machine analysis to test a hypothesis about probably the most serious mistake he had made during the game. Our chess player used the analytical machine to gain a new insight into his own train of thought. In this way he became more aware of the mistakes he had made; he could better process which mistakes occurred, how serious they were, and at what stage of the game they appeared. To generalize tentatively from this example: machine analysis in an application can support the development of a person's higher cognitive functions - for example, by providing an analytical starting point for synthetic thinking, in our case for inferring a principle or rule that the author will consciously apply in the next game. In this way, the machine of our example can also be credited with developing and enhancing a person's awareness of his or her own learning.

The information in the report can be interpreted from several angles. Let us approach the situation it presents as the subject of a thought experiment: suppose a machine that, while analysing data, does so in direct conversation with a human learner. Based on the analysis, the application answers questions and provides the requested information. The learner receives answers in areas of personal interest and can therefore develop in contexts to which he or she relates or for which he or she has the prerequisites. One likely outcome of such an approach could be more precise, more targeted, and therefore more effective instruction that stimulates and develops thinking in a purposeful way. On the other hand, one must also see the ethical risks of incorporating machine analysis into teaching - at least in the sense in which Mark Coeckelbergh discusses AI: we need to remember the difference between what to do and how to do it. Accordingly, more work needs to be done on the methods, practices and institutions required to bring machine analysis into teaching safely and with a developmental impact on learners. What impact will the inclusion of machine-analysis resources have on the content of teachers' work if we expect its informational part to be handled more flexibly by an analytical machine? After all, we assume a machine that does not crash after four hours of computation, does not judge learners' inputs, does not suffer from stereotypes, but endlessly answers questions, explains everything, and generates on demand data that can be relied upon for learning. Is it realistic, then, to persist in the belief that support from live teachers will be increasingly sought after and more important than support from machine guides who respond as if they really understood the person?
Isn't this in fact a trivial question, answered long ago by Hans Moravec when he called for the building of lifeboats because he foresaw a flood of machines capable of gradually replacing human thought? Should we not therefore read his metaphor first of all as a call to find ways that allow humans to navigate skilfully on a sea of machine-analysis results, and thus to steer the development of human thought technologically in all areas of education?

It is clear that, as a civilisation, we are standing on a shore that is being inexorably approached by a massive wave of change and innovation associated with the entry of more-than-human intelligence into education. One of the key questions, therefore, will be how we respond to this wave in the development of human thinking. Much will depend on how we harness its innovative power and how we formally introduce it into the educational environment - there is no longer any need to introduce it into self-learning, where it is already taking root in one way or another. For this force to function as responsible innovation, it will be necessary to work from below, that is, to build a broad consensus among educational actors on implementation principles and on the choice of procedures that do not replace thinking but functionally support its development. The matter can be well illustrated by the learning requirement to test a hypothesis while solving an exploratory problem: which learning activities associated with exploration may or may not rely on the assistance of a machine if a person's ability to test hypotheses is to develop? Will it help this development if, for example, we allow the machine to analytically summarize the main points of the problem under investigation on the way to a successful solution? What criteria should be used to differentiate machine learning support according to learners' current level of thinking? What will the functional repertoire of machine support for the development of thinking look like in each school subject? The requirement to act responsibly when introducing innovations into education does not only have an ethical background; it also implies a number of specific situations that go far beyond a chess player's need to confirm a suspected error in a league match.
In the future, for example, we can expect machines not only to provide teachers with data on the course of a particular student's learning progress, but also to use this data to design addressable learning scenarios tailored to the individual's level of thinking in a specific domain - from the culture of communication in the mother tongue to the culture of academic integrity. From this perspective, the teacher can be seen as a professional who makes informed choices among the proposed scenarios and then implements the most appropriate ones in teaching. The machine that provides the basis for the selection of learning paths can then be imagined as a helper that does not blindly follow given rules but continuously adjusts its suggestions according to the teacher's instructions (prompts).

The human ability to play chess lies relatively far below the surface of Hans Moravec's metaphorical sea: machines have been reliably beating us at chess since at least 1997, when Garry Kasparov lost to Deep Blue. Yet humans have not stopped playing chess. They are getting better at it, and they are doing so by relying on machine analysis during learning, which allows them to review the course of a game more efficiently, to identify more accurately the weaknesses and strengths of their logical-mathematical thinking, and to plan further learning more thoughtfully. Supporting learning through machine analytics or artificial intelligence is appealing for many reasons. Less appealing is the thematic breadth and potential depth of the discussion that must take place before the latest technological means can be legalised, or at least legitimised, in education. Despite the complexity of the problem and the number of relevant actors, a consensus needs to be found at the national, European or global level. Without it, there is a risk that the flood of learning machines will arrive before we have had time to build the first ship.

Karel Dvořák, PhD – DaCoSiDe expert

All rights reserved EDUAWEN EUROPE, Ltd.

Automated analysis of text or data within the meaning of Article 4 of Directive (EU) 2019/790 is prohibited without the consent of the rights holder.

Sources:

BRIDLE, James. 2024. Způsoby bytí. Za hranice lidské inteligence. Brno : Host. 359 p. ISBN 978-80-275-2264-4

COECKELBERGH, Mark. 2023. Etika umělé inteligence. Praha : Filosofia. 268 p. ISBN 978-80-7007-746-7

GARDNER, Howard. 1999. Dimenze myšlení. Teorie rozmanitých myšlení. Praha : Portál. 398 p. ISBN 80-7178-279-3

SULEYMAN, Mustafa – BHASKAR, Michael. 2024. Nezadržitelná vlna. Technologie, umělá inteligence, moc a největší dilema 21. století. Praha : Audiolibrix. 383 p. ISBN 978-80-88494-39-3

TEGMARK, Max. 2020. Život 3.0. Člověk v éře umělé inteligence. Praha : Argo Dokořán. 294 p. ISBN 978-80-7363-948-8