Assessing the ICAP Framework


The paper by Chi and Wylie (2014), entitled ‘The ICAP Framework: Linking Cognitive
Engagement to Active Learning Outcomes’, describes parameters to assess learners’ activities in the modes of passive, active, constructive, or interactive (ICAP), wherein each mode is subsumed by the later mode; interactive, being the highest, includes all activities of the previous modes. For instance, constructive activities, according to the authors, include:

  • drawing a concept map
  • taking notes in one’s own words
  • asking questions
  • posing problems
  • integrating two texts or integrating across multimedia resources
  • making plans, inducing hypotheses and causal relations
  • drawing analogies
  • generating predictions
  • reflecting and monitoring one’s understanding and other self-regulatory activities
  • constructing timelines for historical events
  • self-explaining

These activities can be, and have been, observed and evaluated in learner behaviors by the authors, both in classrooms and in controlled settings.

Important for us is the way these modes have been used to get at learners’ understanding, which is our interest as per our proposal. The authors relate the different modes to learners’ understanding: the passive mode displays minimal understanding (p. 227); in the active mode, somewhat shallow understanding takes place (p. 227); understanding deepens in the constructive mode (p. 228); and it is deepest in the interactive mode (p. 228).

They have not drawn any explicit relation between the activities per se and the occurrence of learning, e.g. whether instances of ‘posing problems’ can really be taken as an indicator of conceptualization, understanding, or even ambiguity about a topic in the mind of the learner. A learner may question just for the sake of questioning, as normally happens in a classroom, in order to show that he or she is not a passive but an active participant in the learning discourse under way in class. However, the learner may perform other activities better than this particular one and still end up displaying an ‘active’ mode.

One suggestion is to start drawing out relations among the different activities suggested by this framework. For instance, a learner displaying the activity of ‘drawing analogies’ may be said to display some understanding (and accordingly, activities can be planned in CUBE which require learners to draw analogies), whereas ‘posing problems’ alone may not.

Suitable for us would be those activities, and their relations, which give us ideas of learners’ understanding in different media: we need to see understanding across our different platforms, both offline and online. ‘Self-explaining’ would be more observable in WhatsApp groups than in a face-to-face group interaction in a causerie, where some voices remain perpetually silent. In that scenario, we cannot give this indicator the same weight as others, ‘drawing analogies’ for instance, which is a more reliable indicator. So this learning behavior is expected to be more pronounced in online media, but the mechanism leading to the ‘self-explaining’ behavior remains concealed. As we would have noticed, participants many times display this behavior when Arunan asks them something, through their sophisticated use of language and terms, about which they reveal they know little on further prodding. At times, they come up with explanations culled from the internet. Necessarily, then, the activity of ‘self-explaining’ in itself cannot reveal the level of understanding in a learner. It only classifies their utterance/behavior as ‘self-explaining’.

Thus, two points become clear at this juncture:

First, there is the question of the equal or unequal weights to be given to the activities. Should we have some hierarchy in which some indicators carry more weight than others? For instance, a learner displaying the behavior of ‘drawing analogies’ seems more profound to me than one who is merely ‘explaining’. Remember that the authors have drawn a hierarchy over the modes as a whole (interactive > constructive > active > passive), but not over the activities within these modes.

Further, as I mentioned before, no inter-relations between the different activities have been drawn, beyond saying that the activities may be different but the knowledge-change processes behind them are the same (p. 227). For this reason, all these activities appear as standalone activities within a mode, as if ‘drawing analogies’ were mutually exclusive with ‘making a concept map’. If we could operationalize these relations, it would be a valuable contribution.

Should these hierarchies differ from platform to platform, since some indicators will be more visible than others on a given platform? And how do we relate the two sets of hierarchies? The framework has so far been used only on face-to-face interactions in classrooms and in controlled laboratory settings. The authors explicitly say that ICAP can be used to guide the ‘control conditions’ in a research design (p. 237) and admit that ‘there may be other theoretical interpretations of ICAP, viewing cognition that relies much less on representations and memory. However, we cannot derive an ICAP hypothesis from such an alternative view, other than the behavioural view…’ (p. 239). That is, it has not been used on learning in the wild! So we have to be careful in applying the framework to our wild, wild settings.
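The idea of platform-specific, unequal weights for activity indicators can be made concrete with a minimal sketch. Everything here is hypothetical: the weight values, the platform names, and the scoring function are placeholders of my own, not anything proposed by Chi and Wylie, and would need to be grounded empirically before use.

```python
# Hypothetical per-platform weights for a few ICAP constructive activities.
# Higher weight = the activity is taken as a stronger (and, on that
# platform, more reliably observable) indicator of understanding.
WEIGHTS = {
    "whatsapp": {"self-explaining": 0.4, "drawing analogies": 0.9, "posing problems": 0.3},
    "causerie": {"self-explaining": 0.7, "drawing analogies": 0.9, "posing problems": 0.3},
}

def engagement_score(platform: str, observed_counts: dict) -> float:
    """Weighted average of observed activity counts for one learner.

    observed_counts maps activity name -> number of observed instances.
    Returns 0.0 when nothing was observed.
    """
    weights = WEIGHTS[platform]
    total = sum(observed_counts.values())
    if total == 0:
        return 0.0
    return sum(weights.get(act, 0.0) * n for act, n in observed_counts.items()) / total

# Example: a learner in a WhatsApp group who self-explained three times
# and drew one analogy.
print(engagement_score("whatsapp", {"self-explaining": 3, "drawing analogies": 1}))
# → 0.525
```

The point of the sketch is only that the two open questions above become explicit design decisions: the weight tables encode the indicator hierarchy, and having one table per platform encodes the platform-dependence of observability.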

Theoretically, the framework as a whole appeared to me to foster a computational understanding of mind, in which learners store information in the passive mode (p. 225), engage in different activities to integrate new information into their mental schemas in the active mode (p. 226), or activate their schemas in the interactive mode to engage in a purposeful, interactive dialogue (p. 226). To what extent this framework can be utilized in wild settings like those of the CUBE hubs, where learning processes are not only distributed across different geographies but also remain particular to each setting, is a question that will require extreme care.