An Analysis and Projection of AI in Education
The History of Natural Language Processing
Natural Language Processing (NLP) is a subfield of artificial intelligence that helps computer systems understand, interpret, and use human language (Chowdhury & Chowdhury, 2019). The origins of NLP can be traced back to the work of Swiss linguistics professor Ferdinand de Saussure, who described language as a "system" (Dataversity, 2022). Within any language, a sound represents a concept whose meaning can shift as the context in which it is used changes. Saussure contended that meaning is created inside language through the relations and contrasts between its parts. A shared system of language is what makes communication possible; studying language as such a system at a single point in time is referred to as synchronic linguistics (Dataversity, 2022).
This focus on studying language as a system defined the structuralist perspective in linguistics (Dataversity, 2022). The structuralist approach evaluates the internal structure and relationships within a language at a given point in time rather than tracing its historical development. This approach strongly influenced early computational language processing in the 1950s and 1960s, when researchers began exploring ways to process and understand language using computers; this period marked the beginning of Natural Language Processing within AI (Dataversity, 2022).
Below is a timeline that maps NLP advancements from 1950 to the early 2010s, illustrating the marked growth of this subfield of AI and current considerations for using NLP (Dataversity, 2022). More recent NLP developments are outlined in the section below the timeline.
NLP Today
The field of Natural Language Processing has made significant advancements in the past five years alone, particularly in the realm of large language models. Large language models are a form of NLP that uses the Transformer architecture to learn from massive amounts of unlabeled text (Vaswani et al., 2017). What makes them such effective and popular NLP models is that they are trained on vast amounts of data and, from that data, can infer and generate new content. What they learn is stored as numerical weights called "parameters," and large language models typically contain at least one billion parameters (Vaswani et al., 2017).
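To make the idea of "parameters" concrete, the sketch below performs a back-of-the-envelope parameter count for a hypothetical Transformer-style model; every size in it is an illustrative assumption, not a figure from any real system:

```python
# Rough, illustrative parameter count for a hypothetical Transformer-style model.
# All sizes below are assumptions for demonstration only.
vocab_size = 50_000   # distinct tokens the model can represent
d_model = 4_096       # width of each token's vector representation
n_layers = 48         # number of stacked Transformer layers

embedding = vocab_size * d_model   # token-embedding table
per_layer = 12 * d_model ** 2      # common approximation: attention + feed-forward weights
total = embedding + n_layers * per_layer

print(f"{total:,} parameters")     # ~9.9 billion with these assumed sizes
```

Counts like this are why even "small" modern language models are measured in billions of parameters.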
In 2017, Vaswani et al., researchers at Google Brain, published a paper entitled "Attention Is All You Need," which introduced the Transformer architecture for computers to decode, process, and generate language. Rather than processing text one word at a time, as previous methods did, the Transformer architecture uses a mechanism called "self-attention" (Vaswani et al., 2017).
Self-attention gives the model the ability to weigh the importance of each word in a sentence relative to the others. This helps the model determine which words matter most when making predictions and generating new text. Because it considers all the words at once rather than in sequence, it can also detect relationships between words regardless of where they appear in the sentence (Vaswani et al., 2017).
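Concretely, the paper formulates self-attention as "scaled dot-product attention": each word is projected into query (Q), key (K), and value (V) vectors, queries are compared against keys to score how relevant every word is to every other word, and those scores weight a sum of the values (Vaswani et al., 2017):

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```

Here d_k is the dimension of the key vectors; dividing by its square root keeps the scores in a range where the softmax yields useful, non-saturated weights.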
The Transformer architecture outperforms earlier NLP models because it captures more complex relationships between words and sentences, and because its computations run in parallel, it does so at greater speed.
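As a rough illustration of the mechanism (a minimal sketch, not any production implementation; the array sizes and random inputs are assumptions for demonstration), scaled dot-product attention can be written in a few lines of Python:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention (Vaswani et al., 2017).

    Q, K, V have one row per word; returns one context-aware vector per word,
    each a weighted mix of all the value vectors.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # compare every query with every key
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V

# Toy example: a 4-word "sentence" with 8-dimensional embeddings (assumed sizes).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
# In a real Transformer, Q, K, and V come from learned linear projections of x.
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
print(out.shape)  # (4, 8): one context-aware vector per word
```

Each row of weights records how much attention one word pays to every other word, which is the "focus" described above; a real Transformer runs many such attention heads in parallel across many stacked layers.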
In 2018, OpenAI, an artificial intelligence research and development lab, released GPT-1 (Radford et al., 2018). GPT stands for "Generative Pre-trained Transformer," a type of large language model that uses the Transformer architecture and is pre-trained on a large text dataset (Radford et al., 2018). This was a monumental step toward AI that could rapidly process and generate large amounts of text and communicate in a human-like, conversational manner.
OpenAI released GPT-2 in 2019 and GPT-3 in 2020, each with improved text-generation capabilities (OpenAI, 2023). Its publicly available chatbot, ChatGPT, launched on the GPT-3.5 model and currently has over 100 million users. OpenAI also offers a subscription-based model, GPT-4 (OpenAI, 2023).
General NLP Considerations
Since the advent of artificial intelligence, researchers, developers, and scholars across disciplines have been evaluating the risks and ethical dilemmas of building models trained on inherently biased human language, and of using those models to communicate, share information, and complete tasks both with and for humans.
For a long time, AI models operated on "if-then" algorithms, where a known input would generate a known, anticipated output (Roose & Newton, 2022). Under this model, machine learning has replaced jobs in manufacturing, store checkout, customer service, travel agencies, data entry, toll collection, warehousing, and more. The phasing out of these professions displaced workers, particularly those with fewer educational credentials. However, it also created and transformed jobs, allowing people to focus on the more complex and impactful aspects of their work that cannot be automated.
Until recently, the realm of automation that AI occupied appeared finite; processes that required creativity, critical thought, and careful analysis could not be replicated to a human-like degree (Roose & Newton, 2022). With the advent of advanced large language models and their more complex processing algorithms, machine learning progressed substantially (Roose & Newton, 2022). For many, their first encounter with publicly available generative AI came in 2022 with DALL-E 2, a text-to-image generator created by OpenAI. Now generative AI is embedded within many of the applications we use, as companies build multimodal text-to-image generators, chatbots, code generators, online tutors in virtually every subject, and more. Generative AI can automate more human tasks than ever before, promising advances and efficiencies in every field and discipline.
If viewed through the lens of productivity, the use of generative AI is advantageous, if not empowering, in people's work and personal lives. People can use AI to automate many repetitive and time-consuming tasks, creating more opportunities to focus on work that requires critical thinking, creativity, and nuance. For example, generative AI has already made significant strides in education: teachers can use it to create lesson plans and map out the steps of activities that adhere to learning standards. Given the recent and widespread problem of low teacher morale, the ability to save time on such tasks is vital to teacher retention.
Conversely, if viewed through the lens of human creation and creativity, generative AI's value diminishes. Its ability to automate processes lets people skip straight to the outcome of a task. From there, they can make adjustments and personalizations, but ultimately the ownership of the work is nebulous. Is it the human's, or the AI's? What does it mean if the AI cannot claim ownership, since it is not actually capable of thought (although it is capable of creation)?
In a May 2023 op-ed in The New York Times, columnist Ezra Klein considers how technological advancements of recent decades have influenced productivity, creation, attention, and quality of life (Klein, 2023). He notes the paradox of having access to more information than ever before, and being able to find anything within seconds, yet achieving no notable gains in productivity (Klein, 2023). Our attention is spread so thin that, when using technology, we struggle to focus on one task for more than a matter of seconds. Now, however, generative AI completes tasks in a fraction of the time, and without the mental strain. We receive the product of its "work" without having to will ourselves to focus for longer than it takes to type a prompt into a chatbot.
Currently, AI struggles to generate content that requires reputable sources; therefore, Klein contends, productivity will actually diminish, because someone will have to review and validate the hundreds of ideas, articles, and lines of code that generative AI can produce in a matter of seconds (Klein, 2023). It is important to understand these models' current limitations in order to evaluate just how effective they really are (Bender et al., 2021).
Even if we reach a point where AI can effectively summarize documents and create reputable content, we all lose something by not going through the process ourselves. So much critical thinking, reflecting, processing, revising, questioning, collaborating, and outsourcing happens when we create our own work, and these phases of creation are just as valuable as the end product (Klein, 2023). We learn about the world and ourselves when we immerse ourselves in a project, and AI could diminish that in the name of productivity. But would we really be more productive?
Listen to an audio recording of the opinion piece, read by Ezra Klein, below.