An oldie but goodie. Although Richard Mayer wrote this 2017 article, “Using Multimedia for eLearning”, several years ago, much of it still holds true in today’s context.
Unable to read the full article? No worries. This excellent blog post by Andrew Bell on "How to use Mayer's 12 Principles of Multimedia" comes with specific examples for each principle. It's sufficient on its own to illustrate each point. However, if you are interested in the history of how Mayer's 12 Principles came about, do find a way to get your hands on the article. It's worth the read.
For examples closer to home (for the NUS faculty members reading this blog post), I will also highlight some of the key principles and explain how they were used to design a BL2.0 module.
To help you remember all these golden nuggets, I created an infographic (below) to accompany this MS Sway by my CIT colleagues, which also neatly summarizes Mayer’s 12 Principles of Multimedia Learning.
Content creation tools today are abundant, inexpensive and easy to use. Hence, it's easy for instructors (educators) to get carried away with the bells and whistles while creating instructional content for our learners, and inadvertently overload them with visual and verbal cues.
This brings us to the first pit-stop: “Key Principles from Cognitive Sciences”. In a nutshell, people learn through visual and verbal information (dual channels), can process only a few elements in each channel at any one time (limited capacity), and meaningful learning occurs when they actively process this information and integrate it into their memory.
The holy grail for instructors is to create multimedia instructional messages that engage the learner and prime these three cognitive processes of selecting, organising and integrating.
Infographic
Mayer's article shares several principles that help ensure the quality of your instructional content is not left to chance. Using these guiding principles, our BL2.0 modules can be purposefully designed to enhance student learning. Special thanks to Mr Liang for his permission to reference his "DTK1234A Design Thinking" module for the following examples:
Coherence, Redundancy & Multimedia Principles
This slide introduced the concept of “Ideate” from the Design Thinking process:
Coherence Principle - simple text and simple visuals that relate directly to the learning topic. Remove all the fluff. Throughout the lecture, learners will appreciate how clean this slide deck looks (thanks to super TA Matthew for designing this great-looking, minimalist template).
Redundancy Principle - this lecture already has narrated audio throughout, so the instructor was able to keep the slides to mainly graphics with minimal text, or even graphics alone. Closed captions then give learners the choice to turn the on-screen text on or off, depending on their individual preferences.
Multimedia Principle - every image used was deliberately selected. E.g. the prototype you see in Dia 1.0 is an actual student’s prototype from another course, Design Studio, which runs in tandem with this DTK1234A module. By this point, students would have created something similar and can relate to it, so they are able to connect the dots and reinforce their understanding.
Signaling Principle
This principle recommends that instructors direct the learner's attention to the most important part. We can do this with simple animation or text formatting to point out significant information. Something to watch out for: do not go overboard with the colour / bolding / italics, etc., or it defeats the original purpose.
Segmenting & Contiguity Principles
Integrating YouTube videos into the main recorded lecture (via MediaWeb) allows learners to continue learning seamlessly through an integrated multimedia lesson, rather than having to “leave” and watch the YouTube videos separately. The different playback speeds give learners control over how fast or slow they want to go, and the bookmarks / headers allow them to jump to different segments at any time. The “CC” button next to the playback speeds allows learners to turn captions on or off at any time.
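MediaWeb handles these bookmarks and player controls for you, so there is nothing to configure by hand. For the curious, though, the bookmarks behave much like a standard WebVTT chapter track that web video players understand; the segment titles and timings below are made up purely for illustration:

```
WEBVTT

00:00:00.000 --> 00:04:30.000
Recap of "Define"

00:04:30.000 --> 00:12:00.000
Ideate: rules of brainstorming

00:12:00.000 --> 00:18:45.000
Real-world example (embedded YouTube clip)
```

Each cue simply pairs a time range with a segment title, which is what allows a player to list the segments and let learners jump straight to any of them.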
Closed Captions vs Subtitles
All BL2.0 content is designed to be inclusive. If it comes in the form of a recorded video, it must have closed captions so as to allow hearing-impaired students to learn by reading the captions. Since we are mentioning captions, it might be useful to know the difference between closed captions and subtitles, and how they are sometimes used interchangeably. In short, subtitles transcribe (or translate) only the spoken dialogue and assume the viewer can hear, while closed captions also describe non-speech audio such as sound effects and who is speaking.
The closed captions also take into consideration how the visual and verbal systems take in new information: seeing graphics and printed words on screen while processing voice-over narration could lead to cognitive overload. Hence, allowing captions to be turned off helps offload some of that processing capacity and enhances learning (this is backed up by research cited in Mayer’s article).
Tip: MediaWeb has a built-in auto-captioning feature. While it's not 100% accurate, it does the brunt of the work (around 80%) and leaves the rest for you to edit manually. If you are an NUS faculty member looking to do closed captioning, you can tap on institutional funding and engage external vendors who provide captioning services at a higher accuracy. Check out this user guide for Captioning & Transcription Services or Contact us to find out more.
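If you do end up tidying captions yourself, it helps to know what a caption file looks like. Caption tracks are commonly stored in a plain-text format such as WebVTT; the snippet below is a minimal, made-up sketch (hypothetical timings, speaker and wording) showing how closed captions go beyond subtitles by describing non-speech audio and identifying the speaker:

```
WEBVTT

00:00:05.000 --> 00:00:08.000
[upbeat intro music]

00:00:08.500 --> 00:00:12.000
<v Mr Liang>Welcome back. Last week we covered "Define";
today we move on to "Ideate".
```

A subtitle track would typically keep only the spoken-dialogue cue, and correcting auto-generated captions is usually just a matter of fixing the text under each timestamp.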
Social Agency Theory and the Cognitive Theory of Multimedia Learning
These theories suggest that having the content drawn out as the instructor explains it yields better retention than presenting it already pre-drawn. The popularity of doodle-like animations (e.g. Ken Robinson’s classic talk on “Changing Education Paradigms”) shows how well this works, and has given rise to more people using similar styles to engage learners.
There was no need for fancy pen-like doodles - just a simple animation emphasising the centre of the intersecting circles (representing Design, Technology and Business respectively), timed exactly as the instructor hammered home the point that innovation lies squarely at the intersection of the three circles.
Personalisation, Voice and Image (Embodiment) Principles
For these recorded lectures, Mr Liang's “talking head” only appears at the start, the end, and at key portions of the video, e.g. when giving context to the real-world examples shown in the YouTube videos.
Despite advances in Siri and Alexa, a human voice is still preferred over a computer voice for now. Knowing this, Mr Liang chose to narrate the entire lesson himself. Fortunately, Mr Liang has a nice, soothing voice that engages the learner, so we did not have to find a voice talent.
On the topic of speech recognition, I also want to share Voiceitt (thanks, Wanyun, for sharing this new technology), which uses machine learning to identify and learn your unique speech. How cool is that? What a boon for people with speech impairments!
The script was also deliberately written to be informal, more like a casual conversation, to keep the lesson less tense and more engaging.
Oldie but goodie
Richard Mayer’s teachings have been around for a number of years, but many of his recommendations remain relevant and worth noting. While these best practices are skewed towards narrated animations and slideshows, they could also be relevant to newer education technologies (e.g. AI-powered chatbots, AR/VR/MR, etc.). His article leaves us with 18 possible research areas to deep-dive into, which is another blog post for another day.
In conclusion, Richard Mayer is a gem. An oldie but goodie.
Benedict
30 July 2022
PS: thanks, Jonta, for sharing Rachel Mainero's Introduction to Multimedia Learning. It is by far the best example I have seen. Check it out - you're welcome :)