Following on from our article last week about the types of Communication Professionals, we wanted to answer a question that we get asked a lot: what is the difference between Electronic Notetaking, live captions and Speech to Text Reporters? When all students moved to remote study during the pandemic, deaf students who had previously managed without NMH (Non-Medical Help) support found that they needed it because of the online environment.
So, let’s first start with
Speech to Text Reporting vs Live Captioning
Speech to Text Reporter: human-generated captions
A Speech to Text Reporter (STTR) listens to what is being said and inputs it word for word on an electronic shorthand keyboard, either a stenotype or a Palantype, which is linked to a laptop.
Unlike typing on a QWERTY keyboard, the reporter does not press one key per letter; instead, several keys are pressed at once, representing whole words, phrases or short forms.
Specially designed software then converts these phonetic chords back into English text, which can be displayed for someone to read. The text is shown either on a laptop screen for a sole user or projected onto a large screen, or a series of screens, for a bigger audience.
A Speech to Text Reporter produces a verbatim account of what is said at speeds in excess of 200 words per minute. Extra information such as (laughter) or (applause) is added in brackets, which tells the user about the mood and the environment, including sounds such as shouting and banging.
The user can also tell who is speaking, through the use of names or colours, which is essential in a seminar environment, for example.
Live Captioning: computer-generated captions
There is technology available that can provide this service automatically, but accuracy tends to be lower, and because the captions are generated by software rather than by a person, it is difficult to tailor the service to the deaf student's specific needs.
These captions can also be heavily delayed, so the text on screen often lags behind the spoken lecture. The student is therefore always 'behind' the content and likely to miss both content and context.
Relying on computer-generated captions in a live environment is even less accurate, as the sound quality is usually much lower than in pre-recorded media.
What should you recommend?
Discuss this with the deaf student first and be transparent about the differences between human-generated and computer-generated captions.
If captioning is chosen, run it as a trial, and make sure the deaf student knows it is a trial and that they can come back to you and request a Speech to Text Reporter if it is not working for them.
What about Electronic Notetaking?
It is important to remember that when a deaf person is lipreading the presenter or watching the Interpreter, they cannot take notes at the same time, as doing so would mean switching eye gaze and missing the content.
Hearing people can take notes at the same time as listening to the content.
So, an Electronic Notetaker would be used to take notes during both lectures and seminars, and even during work alongside peers.
Cambridge Research found that without good notes, students capture just 35% of what the lecturer says.
In other words, 65% of the content of a class is lost without notes, including deadlines and the key points that students need to prepare for exams.
So whether a deaf student is watching a BSL Interpreter, watching or lipreading the lecturer, or reading live text from a Speech to Text Reporter or Live Captioning, notes are always necessary, because while they are watching, they cannot write.
We provide Specialist Notetaking services remotely: the notetaker accesses the lecture and provides the notes to the deaf student afterwards. The notes are supplied in an electronic format, so the student can refer back to them and easily use the search function to find the section they are looking for.
One question that often comes up is whether Live Captioning could be a substitute for Notetaking.
No, these are very different services.
Live captioning (human or computer) happens at the same time as the lecture, and it still does not enable the deaf student to take notes, because they will be reading the captions.
Notetaking support is there to enable deaf students to recap afterwards to reinforce learning.
So, to summarise: a deaf student needs either human-generated or computer-generated captions alongside notes, because captioning and notetaking solve entirely different problems.