Codementor Events

Computer Vision Tutorial - Blob Detection

Published Jan 26, 2020 · Last updated Feb 05, 2024

Tutorial - Computer Vision using C# and AForge Imaging

This tutorial shows how computers or machines visualize an image. The program is written in the C# programming language using the AForge Imaging library, and it lets the user see how a computer or machine can find or isolate a pattern using the concept of binarization.

For reference, please see my first post: 1 - Basic Concept of Computer Vision

Introduction.png

Understanding the Detection of Blobs

FlowChart.PNG
Starting from the concept of binarization, we are now ready to tackle blob detection. Blob detection is a process in which every connected group of pixels, whatever its shape or size, is treated as a target by the image-processing algorithm unless limits are imposed. If we constrain the shapes and sizes to be detected, only a limited set of targets will be reported by our computer vision system, and all other blobs are discarded.
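
The underlying idea can be sketched in plain C# without AForge: label connected groups of white pixels with a simple flood fill, then keep only blobs whose bounding box falls inside the size limits. This is a toy stand-in for the BlobCounter used later, not its actual implementation; the sample grid and size limits are made up for illustration.

```csharp
using System;
using System.Collections.Generic;

public class BlobSketch
{
    // Count connected groups of '1' pixels (4-connectivity) whose bounding box
    // falls within the given size limits. A toy stand-in for a blob counter.
    public static int CountBlobs(int[,] img, int minSize, int maxSize)
    {
        int h = img.GetLength(0), w = img.GetLength(1);
        bool[,] seen = new bool[h, w];
        int blobs = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                if (img[y, x] == 0 || seen[y, x]) continue;
                // flood-fill this connected region, tracking its bounding box
                int minX = x, maxX = x, minY = y, maxY = y;
                var stack = new Stack<(int, int)>();
                stack.Push((y, x));
                seen[y, x] = true;
                while (stack.Count > 0)
                {
                    var (cy, cx) = stack.Pop();
                    minX = Math.Min(minX, cx); maxX = Math.Max(maxX, cx);
                    minY = Math.Min(minY, cy); maxY = Math.Max(maxY, cy);
                    foreach (var (ny, nx) in new[] { (cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1) })
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w && img[ny, nx] == 1 && !seen[ny, nx])
                        {
                            seen[ny, nx] = true;
                            stack.Push((ny, nx));
                        }
                }
                int bw = maxX - minX + 1, bh = maxY - minY + 1;
                if (bw >= minSize && bh >= minSize && bw <= maxSize && bh <= maxSize)
                    blobs++;   // within the limits: counted as a target
            }
        return blobs;
    }

    public static void Main()
    {
        int[,] img =
        {
            { 1, 1, 0, 0, 1 },
            { 1, 1, 0, 0, 0 },
            { 0, 0, 0, 1, 1 },
        };
        // A 2x2 square, a lone pixel, and a 1x2 strip: only the square passes a min size of 2
        Console.WriteLine(CountBlobs(img, 2, 10));   // prints 1
    }
}
```

With looser limits (min size 1), all three regions would count, which is exactly the "all blobs are targets unless given a limit" behavior described above.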

First Trial

Detection Parameters are set to:

140 binarization threshold level.
Minimum 50 pixels height and width.
Maximum 100 pixels height and width.
Step 1.png
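
The effect of the threshold parameter can be illustrated with a minimal stand-in for AForge's Threshold filter, which marks gray values at or above the threshold as white. The sample gray values below are invented for illustration.

```csharp
using System;

public class ThresholdSketch
{
    // Binarize: gray values at or above the threshold become 1 (white), the rest 0 (black).
    public static int[] Binarize(int[] gray, int threshold)
    {
        int[] result = new int[gray.Length];
        for (int i = 0; i < gray.Length; i++)
            result[i] = gray[i] >= threshold ? 1 : 0;
        return result;
    }

    public static void Main()
    {
        int[] gray = { 30, 120, 139, 140, 200, 255 };
        // At threshold 140 only the last three values survive as white pixels
        Console.WriteLine(string.Join(",", Binarize(gray, 140)));   // prints 0,0,0,1,1,1
        // Lowering the threshold to 100 lets more pixels through, as in the second trial
        Console.WriteLine(string.Join(",", Binarize(gray, 100)));   // prints 0,1,1,1,1,1
    }
}
```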

Second Trial

Detection Parameters are set to:

100 binarization threshold level.
Minimum 50 pixels height and width.
Maximum 100 pixels height and width.
Step 2.png

Third Trial

Detection Parameters are set to:

140 binarization threshold level.
Minimum 140 pixels height and width.
Maximum 200 pixels height and width.
Step 3.png
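
Comparing the three trials, the size limits act as a simple bounding-box filter. Below is a hedged sketch of that filtering logic with made-up box sizes, mirroring the MinWidth/MinHeight/MaxWidth/MaxHeight options that BlobCounter applies in the code further down.

```csharp
using System;
using System.Linq;

public class SizeFilterSketch
{
    // Keep only bounding boxes whose width and height both fall inside [min, max],
    // mirroring BlobCounter's MinWidth/MinHeight/MaxWidth/MaxHeight filtering.
    public static (int W, int H)[] Filter((int W, int H)[] boxes, int min, int max)
    {
        return boxes.Where(b => b.W >= min && b.H >= min && b.W <= max && b.H <= max)
                    .ToArray();
    }

    public static void Main()
    {
        var boxes = new[] { (60, 60), (150, 160), (30, 250) };
        // First and second trials (50-100 pixels): only the 60x60 box qualifies
        Console.WriteLine(Filter(boxes, 50, 100).Length);   // prints 1
        // Third trial (140-200 pixels): only the 150x160 box qualifies
        Console.WriteLine(Filter(boxes, 140, 200).Length);  // prints 1
    }
}
```

This is why raising the minimum from 50 to 140 pixels in the third trial makes the smaller blobs disappear from the result.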

Detection Process Conclusion

Given different parameters, from image preprocessing through blob detection, the results still depend on how the developer processes the image and on what parameters the user sets in order to reach the desired target blob. Judging from the binarization step, a target is likely to be detected when its 1's (white pixels) are dominant; when the 0's are dominant and the blob detection parameters are tightened, the chance of detection drops accordingly.
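
That intuition about dominant 1's can be sketched as a simple ratio check. The sample arrays are invented, and the 0.5 cutoff is only illustrative, not a rule AForge applies.

```csharp
using System;
using System.Linq;

public class DominanceSketch
{
    // Fraction of white (1) pixels in a binarized region; a rough proxy for
    // how likely the region is to survive detection, per the conclusion above.
    public static double WhiteRatio(int[] binary)
    {
        return (double)binary.Count(p => p == 1) / binary.Length;
    }

    public static void Main()
    {
        int[] mostlyWhite = { 1, 1, 1, 1, 0, 1, 1, 0 };
        int[] mostlyBlack = { 0, 0, 1, 0, 0, 0, 1, 0 };
        Console.WriteLine(WhiteRatio(mostlyWhite) > 0.5);   // prints True
        Console.WriteLine(WhiteRatio(mostlyBlack) > 0.5);   // prints False
    }
}
```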

Code Proper - Image Processor.cs

Finally, this section presents the full source so users can see how the blob detection concept is implemented.
Download Project here: Image Processor v1.1

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;
using System.Reflection;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;

using AForge;
using AForge.Imaging;
using AForge.Imaging.Filters;

namespace ImageProcessor
{
    public partial class ImageProcessor : Form
    {
        public ImageProcessor()
        {
            InitializeComponent();
            this.Text = "Image Processor v" + Assembly.GetEntryAssembly().GetName().Version; 
        }

        private void openToolStripMenuItem_Click(object sender, EventArgs e)
        {
            try
            {
                if (openFileDialog1.ShowDialog() == DialogResult.OK)
                {
                    pictureBox1.Image = (Bitmap)Bitmap.FromFile(openFileDialog1.FileName);
                    LogAction("Successfully Opened an Image!");
                }
            }
            catch (Exception)
            { throw; } // rethrow without resetting the stack trace
        }

        private void exitToolStripMenuItem_Click(object sender, EventArgs e)
        {
            this.Close();
        }

        private void checkBox1_CheckedChanged(object sender, EventArgs e)
        {
            try
            {
                Bitmap image = (Bitmap)pictureBox1.Image;
                image = AForge.Imaging.Image.Clone(image, PixelFormat.Format24bppRgb);
                if (checkBox1.Checked)
                {
                    Grayscale filter = new Grayscale(0.2125, 0.7154, 0.0721);
                    image = filter.Apply(image);
                    pictureBox2.Image = image;
                    LogAction("Grayscaling was successfully applied!");
                    ActivateButtons(checkBox1.Checked);
                }
                else
                {
                    pictureBox2.Image = image;
                    LogAction("Grayscaling was successfully reversed!");
                    ActivateButtons(checkBox1.Checked);
                }
            }
            catch (Exception ex)
            { throw new Exception("Please open an image first! 'File>Open>Select desired image'", ex); }
        }

        private void trackBar1_Scroll(object sender, EventArgs e)
        {
            try
            {
                label1.Text = "Threshold Level: " + trackBar1.Value;
                Bitmap image = (Bitmap)pictureBox1.Image;
                image = AForge.Imaging.Image.Clone(image, PixelFormat.Format24bppRgb);
                if (checkBox1.Checked)
                {
                    Grayscale grayfilter = new Grayscale(0.2125, 0.7154, 0.0721);
                    Threshold filter = new Threshold(trackBar1.Value);
                    image = grayfilter.Apply(image);
                    image = filter.Apply(image);
                    LogAction("Successfully applied Threshold Level: " + trackBar1.Value);
                }
                pictureBox2.Image = image;
            }
            catch (Exception) { throw; }
        }

        private void button1_Click(object sender, EventArgs e)
        {
            WaitCallback del = delegate
            {
                Invoke(new Action(() =>
                {
                    try
                    {
                        Bitmap image = (Bitmap)pictureBox2.Image;
                        image = AForge.Imaging.Image.Clone(image, PixelFormat.Format24bppRgb);
                        if (checkBox1.Checked)
                        {
                            Erosion filter = new Erosion();
                            filter.ApplyInPlace(image);
                            LogAction("Erosion was successfully applied!");
                        }
                        pictureBox2.Image = image;
                    }
                    catch (Exception) { throw; }
                }));
            };
            ThreadPool.QueueUserWorkItem(del);
        }

        private void button2_Click(object sender, EventArgs e)
        {
            WaitCallback del = delegate
            {
                Invoke(new Action(() =>
                {
                    try
                    {

                        Bitmap image = (Bitmap)pictureBox2.Image;
                        image = AForge.Imaging.Image.Clone(image, PixelFormat.Format24bppRgb);
                        if (checkBox1.Checked)
                        {
                            Dilatation filter = new Dilatation();
                            filter.ApplyInPlace(image);
                            LogAction("Dilatation was successfully applied!");
                        }
                        pictureBox2.Image = image;

                    }
                    catch (Exception) { throw; }
                }));
            };
            ThreadPool.QueueUserWorkItem(del);
        }

        private void button3_Click(object sender, EventArgs e)
        {
            WaitCallback del = delegate
            {
                this.Invoke(new Action(() => BlobDetection((Bitmap)pictureBox2.Image)));
            };
            ThreadPool.QueueUserWorkItem(del);
        }

        private void BlobDetection(Bitmap image)
        {
            // the caller already passes pictureBox2.Image, so work on the parameter directly
            image = AForge.Imaging.Image.Clone(image, PixelFormat.Format24bppRgb);
            Bitmap originalImage = (Bitmap)Bitmap.FromFile(openFileDialog1.FileName);
            originalImage = AForge.Imaging.Image.Clone(originalImage, PixelFormat.Format24bppRgb);
            BlobCounter bc = new BlobCounter();
            // set filtering options from the UI (int.Parse throws on non-numeric input)
            bc.FilterBlobs = true;
            bc.MinWidth = int.Parse(textBox1.Text);
            bc.MinHeight = int.Parse(textBox2.Text);
            bc.MaxWidth = int.Parse(textBox3.Text);
            bc.MaxHeight = int.Parse(textBox4.Text);
            // set ordering options
            bc.ObjectsOrder = ObjectsOrder.Size;
            // process binary image
            bc.ProcessImage(image);
            Rectangle[] blobs = bc.GetObjectsRectangles();
            // draw a highlight box around every blob that passed the size filter
            using (Graphics g = Graphics.FromImage(originalImage))
            using (Pen highLighter = new Pen(Color.White, 2))
            {
                foreach (Rectangle blob in blobs)
                {
                    g.DrawRectangle(highLighter, blob.X, blob.Y, blob.Width, blob.Height);
                }
            }
            LogAction("Detected " + blobs.Length + " blobs present.");
            LogStatus(blobs.Length > 0);
            pictureBox2.Image = originalImage;
        }

        private void ActivateButtons(bool Activation)
        {
            try
            {
                button1.Enabled = Activation;
                button2.Enabled = Activation;
                button3.Enabled = Activation;
                trackBar1.Enabled = Activation;
            }
            catch (Exception) { throw; }
        }

        private void LogAction(string Log)
        {
            try
            {
                richTextBox1.AppendText(DateTime.Now.ToString("MM/dd/yyyy - HH:mm:ss") + " " + Log);
                richTextBox1.AppendText(Environment.NewLine);
                richTextBox1.SelectionStart = richTextBox1.Text.Length;
                richTextBox1.ScrollToCaret();
            }
            catch (Exception) { throw; }
        }

        private void LogStatus(bool Status)
        {
            try
            {
                if(Status)
                {
                    label2.Text = "       DETECTED       ";
                    label2.BackColor = Color.YellowGreen;
                }
                else
                { 
                    label2.Text = "          FAILED          ";
                    label2.BackColor = Color.Red;
                }
            }
            catch (Exception) { throw; }
        }
    }
}

How to Operate the Image Processor Project

  1. Go to File > Open, then select a supported image file.
  2. Check the Grayscale Overlay box.
  3. Apply binarization at a chosen threshold, until the target(s) are visible.
  4. Use Erosion or Dilatation to expose the target(s) further.
  5. Try to detect blobs.
Discover and read more posts from Jonathan James Acoba