A Tour of Python NLP Libraries

Image Generated with DALL·E 3

NLP, or Natural Language Processing, is a field within Artificial Intelligence that focuses on the interaction between human language and computers. It explores and applies text data so that computers can derive meaning from text.

As research in NLP has progressed, so has the way we process text data with computers. These days, Python makes it easy to explore and process that data.

With Python becoming the go-to language for exploring text data, many libraries have been developed specifically for the NLP field. In this article, we will explore several useful NLP libraries.

So, let’s get into it.

NLTK


NLTK, or Natural Language Toolkit, is an NLP Python library with many text-processing APIs and industrial-grade wrappers. It is one of the most widely used NLP Python libraries among researchers, data scientists, engineers, and others, and it has become a de facto standard for NLP tasks in Python.

Let’s explore what NLTK can do. First, we need to install the library with the following command.

pip install -U nltk

Once installed, we can see NLTK in action. First, NLTK can perform tokenization with the following code:

import nltk
from nltk.tokenize import word_tokenize

# Download the necessary resources
nltk.download('punkt')

text = "The fruit in the table is a banana"
tokens = word_tokenize(text)
print(tokens)
Output>> ['The', 'fruit', 'in', 'the', 'table', 'is', 'a', 'banana']

Tokenization splits a sentence into its individual tokens, which are usually words and punctuation marks.
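NLTK can also tokenize at the sentence level. As a quick sketch (the sample paragraph here is our own, not from the article), sent_tokenize from the same module splits text into sentences:

from nltk.tokenize import sent_tokenize

# Split a short paragraph into individual sentences
paragraph = "The fruit in the table is a banana. It looks ripe. I will eat it later."
print(sent_tokenize(paragraph))

This prints the three sentences as a list of strings.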

With NLTK, we can also perform Part-of-Speech (POS) tagging on the sample text.

import nltk
from nltk.tag import pos_tag
from nltk.tokenize import word_tokenize

nltk.download('averaged_perceptron_tagger')

text = "The fruit in the table is a banana"
tokens = word_tokenize(text)
pos_tags = pos_tag(tokens)
print(pos_tags)
Output>>[('The', 'DT'), ('fruit', 'NN'), ('in', 'IN'), ('the', 'DT'), ('table', 'NN'), ('is', 'VBZ'), ('a', 'DT'), ('banana', 'NN')]

The POS tagger's output pairs each token with its predicted POS tag. For example, the word 'fruit' is a noun (NN), and the word 'a' is a determiner (DT).

It’s also possible to perform stemming and lemmatization with NLTK. Stemming reduces a word to its base form by chopping off prefixes and suffixes, while lemmatization transforms a word into its dictionary form (lemma) using POS tags and morphological analysis.

import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download('wordnet')
nltk.download('punkt')

text = "The striped bats are hanging on their feet for best"
tokens = word_tokenize(text)

# Stemming
stemmer = PorterStemmer()
stems = [stemmer.stem(token) for token in tokens]
print("Stems:", stems)

# Lemmatization
lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(token) for token in tokens]
print("Lemmas:", lemmas)
Output>>
Stems: ['the', 'stripe', 'bat', 'are', 'hang', 'on', 'their', 'feet', 'for', 'best']
Lemmas: ['The', 'striped', 'bat', 'are', 'hanging', 'on', 'their', 'foot', 'for', 'best']

You can see that stemming and lemmatization produce slightly different results for some of the words.
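One reason lemmas like 'hanging' stay unchanged above is that WordNetLemmatizer treats every token as a noun unless you pass a POS hint. A small sketch continuing from the snippet above (the POS arguments are standard WordNetLemmatizer usage, not from the original code):

# By default the lemmatizer assumes the noun POS, so verbs pass through unchanged
print(lemmatizer.lemmatize('hanging'))           # hanging
print(lemmatizer.lemmatize('hanging', pos='v'))  # hang
print(lemmatizer.lemmatize('are', pos='v'))      # be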

That’s a quick tour of NLTK. You can do many more things with it, but the APIs above are among the most commonly used.

SpaCy


SpaCy is an NLP Python library designed specifically for production use. It is known for its speed and its ability to handle large amounts of text data, which makes it a preferred library for many industry NLP use cases.

To install SpaCy, check the usage page of its documentation. There are many installation combinations to choose from, depending on your requirements.
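For reference, a typical CPU-only setup looks like the commands below; the exact command for your platform is what the usage page will generate for you.

pip install -U spacy
python -m spacy download en_core_web_sm

The second command downloads the small English pipeline (en_core_web_sm) used in the examples that follow.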

Let’s try using SpaCy for some NLP tasks. First, we will perform Named Entity Recognition (NER) with the library. NER is the process of identifying and classifying named entities in text into predefined categories, such as person, organization, location, and more.

import spacy

nlp = spacy.load("en_core_web_sm")

text = "Brad is working in the U.K. Startup called AIForLife for 7 Months."
doc = nlp(text)

# Perform the NER
for ent in doc.ents:
    print(ent.text, ent.label_)
Output>>
Brad PERSON
the U.K. Startup ORG
7 Months DATE

As you can see, SpaCy's pre-trained model recognizes which spans in the document belong to which entity category.
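SpaCy can also highlight these entities inline. As a short sketch continuing from the snippet above, displacy's "ent" style renders the highlighted entities in a Jupyter Notebook:

from spacy import displacy

# Highlight the recognized entities inside the original text
displacy.render(doc, style="ent", jupyter=True)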

Next, we can use SpaCy to perform dependency parsing and visualize the result. Dependency parsing is the process of determining how the words in a sentence relate to each other, forming a tree structure.

import spacy
from spacy import displacy

nlp = spacy.load("en_core_web_sm")

text = "Brad is working in the U.K. Startup called AIForLife for 7 Months."
doc = nlp(text)

for token in doc:
    print(f"{token.text}: {token.dep_}, {token.head.text}")

displacy.render(doc, jupyter=True)
Output>>
Brad: nsubj, working
is: aux, working
working: ROOT, working
in: prep, working
the: det, Startup
U.K.: compound, Startup
Startup: pobj, in
called: advcl, working
AIForLife: oprd, called
for: prep, called
7: nummod, Months
Months: pobj, for
.: punct, working

The output lists every word with its dependency label and the head word it attaches to. The displacy.render call also draws the dependency tree in your Jupyter Notebook.

Lastly, let’s measure text similarity with SpaCy. Text similarity quantifies how similar or related two pieces of text are. There are many techniques and metrics, but we will try the simplest one built into SpaCy.

import spacy

nlp = spacy.load("en_core_web_sm")

doc1 = nlp("I like pizza")
doc2 = nlp("I love hamburger")

# Calculate similarity
similarity = doc1.similarity(doc2)
print("Similarity:", similarity)
Output>>Similarity: 0.6159097609586724

The similarity method returns a score, usually between 0 and 1; the closer the score is to 1, the more similar the two texts are. Note that the small en_core_web_sm model ships without word vectors, so SpaCy will warn that its similarity scores may be unreliable.
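As a minimal sketch, assuming you have downloaded the medium model first (python -m spacy download en_core_web_md), the same comparison with real word vectors looks like this:

import spacy

# The medium model ships with word vectors, so similarity scores are more meaningful
nlp = spacy.load("en_core_web_md")

doc1 = nlp("I like pizza")
doc2 = nlp("I love hamburger")
print("Similarity:", doc1.similarity(doc2))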

There are still many things you can do with SpaCy. Explore the documentation to find something useful for your work.

TextBlob


TextBlob is an NLP Python library for processing textual data, built on top of NLTK. It offers a simpler interface to much of NLTK's functionality and streamlines common text-processing tasks.

You can install TextBlob with the following commands:

pip install -U textblob
python -m textblob.download_corpora

Now let’s use TextBlob for some NLP tasks. The first one to try is sentiment analysis, which we can do with the code below.

from textblob import TextBlob

text = "I am in the top of the world"
blob = TextBlob(text)
sentiment = blob.sentiment
print(sentiment)
Output>>Sentiment(polarity=0.5, subjectivity=0.5)

The output consists of a polarity and a subjectivity score. Polarity captures the sentiment of the text, ranging from -1 (negative) to 1 (positive), while subjectivity ranges from 0 (objective) to 1 (subjective).
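To see the negative end of the polarity scale, here is a quick sketch with our own clearly negative sentence (the exact scores depend on TextBlob's internal lexicon):

from textblob import TextBlob

# A clearly negative sentence should yield a polarity below 0
print(TextBlob("This movie is terrible and boring.").sentiment)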

We can also use TextBlob for spelling correction with the following code.

from textblob import TextBlob

text = "I havv goood speling."
blob = TextBlob(text)

# Spelling Correction
corrected_blob = blob.correct()
print("Corrected Text:", corrected_blob)
Output>>Corrected Text: I have good spelling.

Explore the TextBlob documentation to find APIs that fit your text tasks.

Gensim


Gensim is an open-source Python NLP library specializing in topic modeling and document similarity analysis, with a focus on large corpora and streaming data. It is geared toward industrial, real-time applications.

Let’s try the library. First, install it with the following command:

pip install gensim

After the installation finishes, we can try out Gensim's capabilities. Let’s start with topic modeling via LDA.

from gensim import corpora
from gensim.models import LdaModel

# Sample documents
documents = [
    "Tennis is my favorite sport to play.",
    "Football is a popular competition in certain country.",
    "There are many athletes currently training for the olympic."
]

# Preprocess documents
texts = [[word for word in document.lower().split()] for document in documents]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# The LDA model
lda_model = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=15)
topics = lda_model.print_topics()
for topic in topics:
    print(topic)
Output>>
(0, '0.073*"there" + 0.073*"currently" + 0.073*"olympic." + 0.073*"the" + 0.073*"athletes" + 0.073*"for" + 0.073*"training" + 0.073*"many" + 0.073*"are" + 0.025*"is"')
(1, '0.094*"is" + 0.057*"football" + 0.057*"certain" + 0.057*"popular" + 0.057*"a" + 0.057*"competition" + 0.057*"country." + 0.057*"in" + 0.057*"favorite" + 0.057*"tennis"')

The output is a weighted combination of words from the sample documents that together form a topic. You can judge whether the resulting topics make sense.
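If you want to see how each individual document maps onto the learned topics, Gensim's get_document_topics is one way to inspect that. A minimal sketch continuing from the code above:

# Print the topic distribution for each sample document
for i, bow in enumerate(corpus):
    print(f"Document {i}:", lda_model.get_document_topics(bow))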

Gensim also provides ways to embed content. For example, we can use Word2Vec to create word embeddings.

from gensim.models import Word2Vec

# Sample sentences
sentences = [
    ['machine', 'learning'],
    ['deep', 'learning', 'models'],
    ['natural', 'language', 'processing']
]

# Train Word2Vec model
model = Word2Vec(sentences, vector_size=20, window=5, min_count=1, workers=4)
vector = model.wv['machine']
print(vector)
Output>>
[ 0.01174188 -0.02259516  0.04194366 -0.04929082  0.0338232   0.01457208
 -0.02466416  0.02199094 -0.00869787  0.03355692  0.04982425 -0.02181222
 -0.00299669 -0.02847819  0.01925411  0.01393313  0.03445538  0.03050548
  0.04769249  0.04636709]
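Beyond raw vectors, the trained model can also return nearest neighbors through most_similar. With a toy corpus this small the neighbors are essentially noise, but the call pattern is what matters; a minimal sketch continuing from the code above:

# Find the three words closest to 'learning' in the toy vocabulary
print(model.wv.most_similar('learning', topn=3))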

There are still many more applications for Gensim. Check the documentation and evaluate your needs.

Conclusion

In this article, we explored several Python NLP libraries that are essential for many text tasks, from tokenization to word embeddings. The libraries we discussed are:

  1. NLTK
  2. SpaCy
  3. TextBlob
  4. Gensim

I hope this helps!

Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips through social media and his writing. Cornellius writes on a variety of AI and machine learning topics.

