Posts

Showing posts with the label NaturalLanguageProcessing

Levenshtein distance concept in NLP

In my last blog we discussed how to implement the TF-IDF method using Python; for more details refer to [ TF-IDF Implementation Using Python ]. In this blog we will discuss how to handle spelling correction so that stemming or lemmatization can be done effectively. There is a concept known as "edit distance": the minimum number of edits required to transform one string into another. We can perform the following types of operations:

- Insertion of a letter
- Deletion of a letter
- Modification of a letter

Let's take an example to understand this concept in more detail: "success" is written as "sucess". We have two strings, one of length 7 [with the correct spelling] and another of length 6 [with the incorrect spelling].

Step 1: If the strings are of length M and N, we need to create a matrix of size (M+1) x (N+1). In our case we will create a matrix of size 7 x 8 as follows.

Step 2: Initialize the first row and first column st...
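The excerpt is cut off, but the matrix-filling procedure it begins describing is the standard Levenshtein dynamic program. Here is a minimal Python sketch of that recurrence (not the original post's code):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and modifications
    needed to turn string a into string b."""
    m, n = len(a), len(b)
    # (m + 1) x (n + 1) matrix; row 0 and column 0 hold distances
    # to the empty string (pure insertions / deletions).
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1   # 0 when letters match
            d[i][j] = min(d[i - 1][j] + 1,            # deletion
                          d[i][j - 1] + 1,            # insertion
                          d[i - 1][j - 1] + cost)     # modification
    return d[m][n]

print(levenshtein("sucess", "success"))  # -> 1 (one inserted letter)
```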

TF-IDF Method

In my last blog we discussed how to create a bag of words using Python [refer to this link: CreatingBag-of-Words using Python ]. We have seen that the bag-of-words approach depends purely on the frequency of words. Now let's discuss another approach to convert textual data into matrix form, called TF-IDF [Term Frequency – Inverse Document Frequency]; it is the approach preferred by most data scientists and machine learning professionals. In this approach, a term is considered relevant to a document if it appears frequently in that document and is unique to it, i.e. the term should not appear in all the documents. So its frequency with respect to all documents should be small, while its frequency within the specific document should be high. The TF-IDF score is calculated from two parts: the term frequency of a term (t) in a document (d), and the inverse document frequency of the term. Below are the formulas for calculating the...
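As a rough illustration of the two parts described above, here is a small hand-rolled Python sketch. The toy documents and the whitespace tokenization are invented for the example; the truncated post may use different formula variants or a library:

```python
import math

docs = [
    "tiger is the biggest wild animal in the cat family",
    "the cat family includes lions and tigers",
    "dogs are loyal animals",
]

def tf(term, doc_tokens):
    # term frequency: occurrences of the term / total terms in the doc
    return doc_tokens.count(term) / len(doc_tokens)

def idf(term, all_docs):
    # inverse document frequency: log(N / number of docs containing the term)
    n_containing = sum(1 for d in all_docs if term in d)
    return math.log(len(all_docs) / n_containing)

tokenized = [d.split() for d in docs]
term = "cat"
for i, doc in enumerate(tokenized):
    score = tf(term, doc) * idf(term, tokenized)
    print(f"doc {i}: tf-idf({term!r}) = {score:.4f}")
```

A term like "cat" that appears in only two of the three documents gets a nonzero idf, so its score is high exactly where it occurs often, matching the intuition in the paragraph above.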

Extract Features from Text for Machine Learning: Bag-of-Words

In my last blog we saw how to generate tokens [refer link: Tokenization ]. Now it's time to discuss how to convert textual data into a matrix form that machine learning algorithms can understand. Let's get started with a method known as "Bag-of-Words". The idea behind this method is that "any piece of text can be represented by the list of words or tokens used in it". Before we move forward, let's revisit the power law [Reference Link: Power Law ] discussed earlier in my blog, where we saw that stopwords are not important and do not provide any useful information about a piece of text or document. Before converting textual data into matrix form, we should remove all the stopwords from the list of tokens. Let's understand bag of words in more detail using the following sentence: "Tiger is the biggest wild animal in the cat family." If we generate tokens for the above sentence after removing the stopwords, th...
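A quick sketch of this idea using scikit-learn's CountVectorizer, which removes English stopwords and builds the document-term matrix in one step (assuming scikit-learn >= 1.0 for get_feature_names_out; the second sentence is added just to show the matrix shape):

```python
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "Tiger is the biggest wild animal in the cat family.",
    "The cat family also includes the lion.",
]

# Drop English stopwords, then count each remaining word per sentence.
vectorizer = CountVectorizer(stop_words="english")
bow = vectorizer.fit_transform(sentences)

print(vectorizer.get_feature_names_out())  # vocabulary after stopword removal
print(bow.toarray())                       # one row per sentence, one column per word
```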

Lexical Processing

In my previous blog, Word Frequency Distribution: Power Law, I explained the basic concepts of lexical processing and the word frequency distribution of text data for machine learning algorithms. In this article we will go through the high-level steps for processing textual data for machine learning, and as part of this series I will explain lexical processing of text using different tokenization features in Python. Processing textual data for machine learning involves the following steps:

- Lexical processing of text: converting raw text into words, sentences, paragraphs, etc.
- Syntactic processing of text: understanding the relationships among the words used in sentences.
- Semantic processing of text: understanding the meaning of the text.

For the lexical processing of text we perform tokenization and extraction of features from text. "Tokenization" is a technique used to split text into smaller elements. These elements can be characters...
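The visible excerpt does not name the tokenization library; NLTK's tokenizers are one common choice in Python, so here is a small sketch under that assumption:

```python
import nltk
# One-time download of the sentence tokenizer models
# (newer NLTK versions may additionally require "punkt_tab").
nltk.download("punkt", quiet=True)

from nltk.tokenize import sent_tokenize, word_tokenize

text = "Tiger is the biggest wild animal in the cat family. Tigers are solitary."

print(sent_tokenize(text))  # split the text into sentences
print(word_tokenize(text))  # split the text into word tokens
```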

Power Law or Zipf’s Law: Word Frequency Distribution

This article will help you understand the basic concepts of lexical processing for text data before using it in any machine learning model. Working with any type of data, be it numeric, textual or images, involves the following steps: explore the data by performing pre-processing, and understand the data. As text is made up of words, sentences and paragraphs, exploring text data can start with analyzing the word frequency distribution. The famous linguist George Zipf carried out a simple exercise:

- Count the number of times each word appears in the document.
- Rank the words by frequency: the most frequent word gets rank 1, the second most frequent word rank 2, and so on.

He repeated this exercise on many documents and found a specific pattern in how words are distributed within a document. Based on the pattern observed, he stated a principle known as "Zipf's Law" or the "Power Law". Let's analyze the word freq...
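To make the exercise concrete, here is a tiny Python sketch of the count-and-rank procedure on a toy text (a real analysis would use a full document or corpus):

```python
from collections import Counter

text = """Tiger is the biggest wild animal in the cat family.
The tiger is a solitary hunter and the cat family is large."""

# Count how often each word appears, then rank words by frequency.
counts = Counter(text.lower().replace(".", "").split())
for rank, (word, freq) in enumerate(counts.most_common(10), start=1):
    print(f"rank {rank}: {word!r} appears {freq} times")
```

On a large document, plotting frequency against rank on log-log axes shows the roughly straight line that Zipf's Law predicts.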