How to detect a Christmas Tree? [closed]

This article is translated from: How to detect a Christmas Tree? [closed]

Which image processing techniques could be used to implement an application that detects the christmas trees displayed in the following images?

[The 6 input images]

I'm searching for solutions that are going to work on all these images.

Therefore, approaches that require training haar cascade classifiers or template matching are not very interesting.

I'm looking for something that can be written in any programming language, as long as it uses only Open Source technologies.

The solution must be tested with the images that are shared on this question.

There are 6 input images and the answer should display the results of processing each of them.

Finally, for each output image there must be red lines drawn to surround the detected tree.

How would you go about programmatically detecting the trees in these images?

#1

Reference: https://stackoom.com/question/1P9yf/如何檢測聖誕樹-關閉

#2

EDIT NOTE: I edited this post to (i) process each tree image individually, as requested in the requirements, and (ii) consider both object brightness and shape in order to improve the quality of the result.

Below is presented an approach that takes into consideration the object brightness and shape.

In other words, it seeks objects with a triangle-like shape and significant brightness.

It was implemented in Java, using the Marvin image processing framework.

The first step is the color thresholding.

The objective here is to focus the analysis on objects with significant brightness.

output images:

[Output images after thresholding]

source code:
public class ChristmasTree {

private MarvinImagePlugin fill = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.fill.boundaryFill");
private MarvinImagePlugin threshold = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.thresholding");
private MarvinImagePlugin invert = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.invert");
private MarvinImagePlugin dilation = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.morphological.dilation");

public ChristmasTree(){
    MarvinImage tree;

    // Iterate each image
    for(int i=1; i<=6; i++){
        tree = MarvinImageIO.loadImage("./res/trees/tree"+i+".png");

        // 1. Threshold
        threshold.setAttribute("threshold", 200);
        threshold.process(tree.clone(), tree);
    }
}
public static void main(String[] args) {
    new ChristmasTree();
}
}
           

In the second step, the brightest points in the image are dilated in order to form shapes.

The result of this process is the probable shape of the objects with significant brightness.

Applying flood fill segmentation, disconnected shapes are detected.

output images:

[Output images after dilation and flood-fill segmentation]

source code:
public class ChristmasTree {

private MarvinImagePlugin fill = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.fill.boundaryFill");
private MarvinImagePlugin threshold = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.thresholding");
private MarvinImagePlugin invert = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.invert");
private MarvinImagePlugin dilation = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.morphological.dilation");

public ChristmasTree(){
    MarvinImage tree;

    // Iterate each image
    for(int i=1; i<=6; i++){
        tree = MarvinImageIO.loadImage("./res/trees/tree"+i+".png");

        // 1. Threshold
        threshold.setAttribute("threshold", 200);
        threshold.process(tree.clone(), tree);

        // 2. Dilate
        invert.process(tree.clone(), tree);
        tree = MarvinColorModelConverter.rgbToBinary(tree, 127);
        MarvinImageIO.saveImage(tree, "./res/trees/new/tree_"+i+"threshold.png");
        dilation.setAttribute("matrix", MarvinMath.getTrueMatrix(50, 50));
        dilation.process(tree.clone(), tree);
        MarvinImageIO.saveImage(tree, "./res/trees/new/tree_"+i+"_dilation.png");
        tree = MarvinColorModelConverter.binaryToRgb(tree);

        // 3. Segment shapes
        MarvinImage trees2 = tree.clone();
        fill(tree, trees2);
        MarvinImageIO.saveImage(trees2, "./res/trees/new/tree_"+i+"_fill.png");
    }
}

private void fill(MarvinImage imageIn, MarvinImage imageOut){
    boolean found;
    int color= 0xFFFF0000;

    while(true){
        found=false;

        Outerloop:
        for(int y=0; y<imageIn.getHeight(); y++){
            for(int x=0; x<imageIn.getWidth(); x++){
                if(imageOut.getIntComponent0(x, y) == 0){
                    fill.setAttribute("x", x);
                    fill.setAttribute("y", y);
                    fill.setAttribute("color", color);
                    fill.setAttribute("threshold", 120);
                    fill.process(imageIn, imageOut);
                    color = newColor(color);

                    found = true;
                    break Outerloop;
                }
            }
        }

        if(!found){
            break;
        }
    }

}

private int newColor(int color){
    int red = (color & 0x00FF0000) >> 16;
    int green = (color & 0x0000FF00) >> 8;
    int blue = (color & 0x000000FF);

    if(red <= green && red <= blue){
        red+=5;
    }
    else if(green <= red && green <= blue){
        green+=5;
    }
    else{
        blue+=5;
    }

    return 0xFF000000 + (red << 16) + (green << 8) + blue;
}

public static void main(String[] args) {
    new ChristmasTree();
}
}
           

As shown in the output images, multiple shapes were detected.

In this problem, there are just a few bright points in the images.

However, this approach was implemented to deal with more complex scenarios.

In the next step each shape is analyzed.

A simple algorithm detects shapes with a pattern similar to a triangle.

The algorithm analyzes the object shape line by line.

If the center of mass of each shape line is almost the same (given a threshold) and the mass increases as y increases, the object has a triangle-like shape.

The mass of a shape line is the number of pixels in that line that belong to the shape.

Imagine you slice the object horizontally and analyze each horizontal segment.

If they are centered on each other and the length increases from the first segment to the last one in a linear pattern, you probably have an object that resembles a triangle.

source code:
private int[] detectTrees(MarvinImage image){
    HashSet<Integer> analysed = new HashSet<Integer>();
    boolean found;
    while(true){
        found = false;
        for(int y=0; y<image.getHeight(); y++){
            for(int x=0; x<image.getWidth(); x++){
                int color = image.getIntColor(x, y);

                if(!analysed.contains(color)){
                    if(isTree(image, color)){
                        return getObjectRect(image, color);
                    }

                    analysed.add(color);
                    found=true;
                }
            }
        }

        if(!found){
            break;
        }
    }
    return null;
}

private boolean isTree(MarvinImage image, int color){

    int mass[][] = new int[image.getHeight()][3];
    int yStart=-1;
    int xStart=-1;
    for(int y=0; y<image.getHeight(); y++){
        int mc = 0;
        int xs=-1;
        int xe=-1;
        for(int x=0; x<image.getWidth(); x++){
            if(image.getIntColor(x, y) == color){
                mc++;

                if(yStart == -1){
                    yStart=y;
                    xStart=x;
                }

                if(xs == -1){
                    xs = x;
                }
                if(x > xe){
                    xe = x;
                }
            }
        }
        mass[y][0] = xs;
        mass[y][1] = xe;
        mass[y][2] = mc;
    }

    int validLines=0;
    for(int y=0; y<image.getHeight(); y++){
        if
        ( 
            mass[y][2] > 0 &&
            Math.abs(((mass[y][0]+mass[y][1])/2)-xStart) <= 50 &&
            mass[y][2] >= (mass[yStart][2] + (y-yStart)*0.3) &&
            mass[y][2] <= (mass[yStart][2] + (y-yStart)*1.5)
        )
        {
            validLines++;
        }
    }

    if(validLines > 100){
        return true;
    }
    return false;
}
           

Finally, the position of each shape similar to a triangle and with significant brightness, in this case a Christmas tree, is highlighted in the original image, as shown below.

final output images:

[Final output images]

final source code:
public class ChristmasTree {

private MarvinImagePlugin fill = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.fill.boundaryFill");
private MarvinImagePlugin threshold = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.thresholding");
private MarvinImagePlugin invert = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.invert");
private MarvinImagePlugin dilation = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.morphological.dilation");

public ChristmasTree(){
    MarvinImage tree;

    // Iterate each image
    for(int i=1; i<=6; i++){
        tree = MarvinImageIO.loadImage("./res/trees/tree"+i+".png");

        // 1. Threshold
        threshold.setAttribute("threshold", 200);
        threshold.process(tree.clone(), tree);

        // 2. Dilate
        invert.process(tree.clone(), tree);
        tree = MarvinColorModelConverter.rgbToBinary(tree, 127);
        MarvinImageIO.saveImage(tree, "./res/trees/new/tree_"+i+"threshold.png");
        dilation.setAttribute("matrix", MarvinMath.getTrueMatrix(50, 50));
        dilation.process(tree.clone(), tree);
        MarvinImageIO.saveImage(tree, "./res/trees/new/tree_"+i+"_dilation.png");
        tree = MarvinColorModelConverter.binaryToRgb(tree);

        // 3. Segment shapes
        MarvinImage trees2 = tree.clone();
        fill(tree, trees2);
        MarvinImageIO.saveImage(trees2, "./res/trees/new/tree_"+i+"_fill.png");

        // 4. Detect tree-like shapes
        int[] rect = detectTrees(trees2);

        // 5. Draw the result
        MarvinImage original = MarvinImageIO.loadImage("./res/trees/tree"+i+".png");
        drawBoundary(trees2, original, rect);
        MarvinImageIO.saveImage(original, "./res/trees/new/tree_"+i+"_out_2.jpg");
    }
}

private void drawBoundary(MarvinImage shape, MarvinImage original, int[] rect){
    int yLines[] = new int[6];
    yLines[0] = rect[1];
    yLines[1] = rect[1]+(int)((rect[3]/5));
    yLines[2] = rect[1]+((rect[3]/5)*2);
    yLines[3] = rect[1]+((rect[3]/5)*3);
    yLines[4] = rect[1]+(int)((rect[3]/5)*4);
    yLines[5] = rect[1]+rect[3];

    List<Point> points = new ArrayList<Point>();
    for(int i=0; i<yLines.length; i++){
        boolean in=false;
        Point startPoint=null;
        Point endPoint=null;
        for(int x=rect[0]; x<rect[0]+rect[2]; x++){

            if(shape.getIntColor(x, yLines[i]) != 0xFFFFFFFF){
                if(!in){
                    if(startPoint == null){
                        startPoint = new Point(x, yLines[i]);
                    }
                }
                in = true;
            }
            else{
                if(in){
                    endPoint = new Point(x, yLines[i]);
                }
                in = false;
            }
        }

        if(endPoint == null){
            endPoint = new Point((rect[0]+rect[2])-1, yLines[i]);
        }

        points.add(startPoint);
        points.add(endPoint);
    }

    drawLine(points.get(0).x, points.get(0).y, points.get(1).x, points.get(1).y, 15, original);
    drawLine(points.get(1).x, points.get(1).y, points.get(3).x, points.get(3).y, 15, original);
    drawLine(points.get(3).x, points.get(3).y, points.get(5).x, points.get(5).y, 15, original);
    drawLine(points.get(5).x, points.get(5).y, points.get(7).x, points.get(7).y, 15, original);
    drawLine(points.get(7).x, points.get(7).y, points.get(9).x, points.get(9).y, 15, original);
    drawLine(points.get(9).x, points.get(9).y, points.get(11).x, points.get(11).y, 15, original);
    drawLine(points.get(11).x, points.get(11).y, points.get(10).x, points.get(10).y, 15, original);
    drawLine(points.get(10).x, points.get(10).y, points.get(8).x, points.get(8).y, 15, original);
    drawLine(points.get(8).x, points.get(8).y, points.get(6).x, points.get(6).y, 15, original);
    drawLine(points.get(6).x, points.get(6).y, points.get(4).x, points.get(4).y, 15, original);
    drawLine(points.get(4).x, points.get(4).y, points.get(2).x, points.get(2).y, 15, original);
    drawLine(points.get(2).x, points.get(2).y, points.get(0).x, points.get(0).y, 15, original);
}

private void drawLine(int x1, int y1, int x2, int y2, int length, MarvinImage image){
    int lx1, lx2, ly1, ly2;
    for(int i=0; i<length; i++){
        lx1 = (x1+i >= image.getWidth() ? (image.getWidth()-1)-i: x1);
        lx2 = (x2+i >= image.getWidth() ? (image.getWidth()-1)-i: x2);
        ly1 = (y1+i >= image.getHeight() ? (image.getHeight()-1)-i: y1);
        ly2 = (y2+i >= image.getHeight() ? (image.getHeight()-1)-i: y2);

        image.drawLine(lx1+i, ly1, lx2+i, ly2, Color.red);
        image.drawLine(lx1, ly1+i, lx2, ly2+i, Color.red);
    }
}

private void fillRect(MarvinImage image, int[] rect, int length){
    for(int i=0; i<length; i++){
        image.drawRect(rect[0]+i, rect[1]+i, rect[2]-(i*2), rect[3]-(i*2), Color.red);
    }
}

private void fill(MarvinImage imageIn, MarvinImage imageOut){
    boolean found;
    int color= 0xFFFF0000;

    while(true){
        found=false;

        Outerloop:
        for(int y=0; y<imageIn.getHeight(); y++){
            for(int x=0; x<imageIn.getWidth(); x++){
                if(imageOut.getIntComponent0(x, y) == 0){
                    fill.setAttribute("x", x);
                    fill.setAttribute("y", y);
                    fill.setAttribute("color", color);
                    fill.setAttribute("threshold", 120);
                    fill.process(imageIn, imageOut);
                    color = newColor(color);

                    found = true;
                    break Outerloop;
                }
            }
        }

        if(!found){
            break;
        }
    }

}

private int[] detectTrees(MarvinImage image){
    HashSet<Integer> analysed = new HashSet<Integer>();
    boolean found;
    while(true){
        found = false;
        for(int y=0; y<image.getHeight(); y++){
            for(int x=0; x<image.getWidth(); x++){
                int color = image.getIntColor(x, y);

                if(!analysed.contains(color)){
                    if(isTree(image, color)){
                        return getObjectRect(image, color);
                    }

                    analysed.add(color);
                    found=true;
                }
            }
        }

        if(!found){
            break;
        }
    }
    return null;
}

private boolean isTree(MarvinImage image, int color){

    int mass[][] = new int[image.getHeight()][3];
    int yStart=-1;
    int xStart=-1;
    for(int y=0; y<image.getHeight(); y++){
        int mc = 0;
        int xs=-1;
        int xe=-1;
        for(int x=0; x<image.getWidth(); x++){
            if(image.getIntColor(x, y) == color){
                mc++;

                if(yStart == -1){
                    yStart=y;
                    xStart=x;
                }

                if(xs == -1){
                    xs = x;
                }
                if(x > xe){
                    xe = x;
                }
            }
        }
        mass[y][0] = xs;
        mass[y][1] = xe;
        mass[y][2] = mc;
    }

    int validLines=0;
    for(int y=0; y<image.getHeight(); y++){
        if
        ( 
            mass[y][2] > 0 &&
            Math.abs(((mass[y][0]+mass[y][1])/2)-xStart) <= 50 &&
            mass[y][2] >= (mass[yStart][2] + (y-yStart)*0.3) &&
            mass[y][2] <= (mass[yStart][2] + (y-yStart)*1.5)
        )
        {
            validLines++;
        }
    }

    if(validLines > 100){
        return true;
    }
    return false;
}

private int[] getObjectRect(MarvinImage image, int color){
    int x1=-1;
    int x2=-1;
    int y1=-1;
    int y2=-1;

    for(int y=0; y<image.getHeight(); y++){
        for(int x=0; x<image.getWidth(); x++){
            if(image.getIntColor(x, y) == color){

                if(x1 == -1 || x < x1){
                    x1 = x;
                }
                if(x2 == -1 || x > x2){
                    x2 = x;
                }
                if(y1 == -1 || y < y1){
                    y1 = y;
                }
                if(y2 == -1 || y > y2){
                    y2 = y;
                }
            }
        }
    }

    return new int[]{x1, y1, (x2-x1), (y2-y1)};
}

private int newColor(int color){
    int red = (color & 0x00FF0000) >> 16;
    int green = (color & 0x0000FF00) >> 8;
    int blue = (color & 0x000000FF);

    if(red <= green && red <= blue){
        red+=5;
    }
    else if(green <= red && green <= blue){
        green+=30;
    }
    else{
        blue+=30;
    }

    return 0xFF000000 + (red << 16) + (green << 8) + blue;
}

public static void main(String[] args) {
    new ChristmasTree();
}
}
           

The advantage of this approach is the fact that it will probably work with images containing other luminous objects, since it analyzes the object shape.

Merry Christmas!

EDIT NOTE 2

There is a discussion about the similarity of the output images of this solution and some other ones.

In fact, they are very similar.

But this approach does not just segment objects.

It also analyzes the object shapes in some sense.

It can handle multiple luminous objects in the same scene.

In fact, the Christmas tree does not need to be the brightest one.

I'm just raising this point to enrich the discussion.

There is a bias in the samples: just by looking for the brightest object, you will find the trees.

But do we really want to stop the discussion at this point?

At this point, how far is the computer really recognizing an object that resembles a Christmas tree?

Let's try to close this gap.

Below is presented a result just to elucidate this point:

input image

[Input image]

output

[Output image]

#3

Here is my simple and dumb solution.

It is based upon the assumption that the tree will be the brightest and biggest thing in the picture.
//g++ -Wall -pedantic -ansi -O2 -pipe -s -o christmas_tree christmas_tree.cpp `pkg-config --cflags --libs opencv`
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc,char *argv[])
{
    Mat original,tmp,tmp1;
    vector <vector<Point> > contours;
    Moments m;
    Rect boundrect;
    Point2f center;
    double radius, max_area=0,tmp_area=0;
    unsigned int j, k;
    int i;

    for(i = 1; i < argc; ++i)
    {
        original = imread(argv[i]);
        if(original.empty())
        {
            cerr << "Error"<<endl;
            return -1;
        }

        GaussianBlur(original, tmp, Size(3, 3), 0, 0, BORDER_DEFAULT);
        erode(tmp, tmp, Mat(), Point(-1, -1), 10);
        cvtColor(tmp, tmp, CV_BGR2HSV);
        inRange(tmp, Scalar(0, 0, 0), Scalar(180, 255, 200), tmp);

        dilate(original, tmp1, Mat(), Point(-1, -1), 15);
        cvtColor(tmp1, tmp1, CV_BGR2HLS);
        inRange(tmp1, Scalar(0, 185, 0), Scalar(180, 255, 255), tmp1);
        dilate(tmp1, tmp1, Mat(), Point(-1, -1), 10);

        bitwise_and(tmp, tmp1, tmp1);

        findContours(tmp1, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
        max_area = 0;
        j = 0;
        for(k = 0; k < contours.size(); k++)
        {
            tmp_area = contourArea(contours[k]);
            if(tmp_area > max_area)
            {
                max_area = tmp_area;
                j = k;
            }
        }
        tmp1 = Mat::zeros(original.size(),CV_8U);
        approxPolyDP(contours[j], contours[j], 30, true);
        drawContours(tmp1, contours, j, Scalar(255,255,255), CV_FILLED);

        m = moments(contours[j]);
        boundrect = boundingRect(contours[j]);
        center = Point2f(m.m10/m.m00, m.m01/m.m00);
        radius = (center.y - (boundrect.tl().y))/4.0*3.0;
        Rect heightrect(center.x-original.cols/5, boundrect.tl().y, original.cols/5*2, boundrect.size().height);

        tmp = Mat::zeros(original.size(), CV_8U);
        rectangle(tmp, heightrect, Scalar(255, 255, 255), -1);
        circle(tmp, center, radius, Scalar(255, 255, 255), -1);

        bitwise_and(tmp, tmp1, tmp1);

        findContours(tmp1, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
        max_area = 0;
        j = 0;
        for(k = 0; k < contours.size(); k++)
        {
            tmp_area = contourArea(contours[k]);
            if(tmp_area > max_area)
            {
                max_area = tmp_area;
                j = k;
            }
        }

        approxPolyDP(contours[j], contours[j], 30, true);
        convexHull(contours[j], contours[j]);

        drawContours(original, contours, j, Scalar(0, 0, 255), 3);

        namedWindow(argv[i], CV_WINDOW_NORMAL|CV_WINDOW_KEEPRATIO|CV_GUI_EXPANDED);
        imshow(argv[i], original);

        waitKey(0);
        destroyWindow(argv[i]);
    }

    return 0;
}
           

The first step is to detect the brightest pixels in the picture, but we have to make a distinction between the tree itself and the snow which reflects its light.

Here we try to exclude the snow by applying a really simple filter on the color codes:
GaussianBlur(original, tmp, Size(3, 3), 0, 0, BORDER_DEFAULT);
erode(tmp, tmp, Mat(), Point(-1, -1), 10);
cvtColor(tmp, tmp, CV_BGR2HSV);
inRange(tmp, Scalar(0, 0, 0), Scalar(180, 255, 200), tmp);
           

Then we find every "bright" pixel:

dilate(original, tmp1, Mat(), Point(-1, -1), 15);
cvtColor(tmp1, tmp1, CV_BGR2HLS);
inRange(tmp1, Scalar(0, 185, 0), Scalar(180, 255, 255), tmp1);
dilate(tmp1, tmp1, Mat(), Point(-1, -1), 10);
           

Finally we join the two results:

bitwise_and(tmp, tmp1, tmp1);
           

Now we look for the biggest bright object:

findContours(tmp1, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
max_area = 0;
j = 0;
for(k = 0; k < contours.size(); k++)
{
    tmp_area = contourArea(contours[k]);
    if(tmp_area > max_area)
    {
        max_area = tmp_area;
        j = k;
    }
}
tmp1 = Mat::zeros(original.size(),CV_8U);
approxPolyDP(contours[j], contours[j], 30, true);
drawContours(tmp1, contours, j, Scalar(255,255,255), CV_FILLED);
           

Now we are almost done, but there are still some imperfections due to the snow.

To cut them off we'll build a mask using a circle and a rectangle to approximate the shape of a tree to delete unwanted pieces:
m = moments(contours[j]);
boundrect = boundingRect(contours[j]);
center = Point2f(m.m10/m.m00, m.m01/m.m00);
radius = (center.y - (boundrect.tl().y))/4.0*3.0;
Rect heightrect(center.x-original.cols/5, boundrect.tl().y, original.cols/5*2, boundrect.size().height);

tmp = Mat::zeros(original.size(), CV_8U);
rectangle(tmp, heightrect, Scalar(255, 255, 255), -1);
circle(tmp, center, radius, Scalar(255, 255, 255), -1);

bitwise_and(tmp, tmp1, tmp1);
           

The last step is to find the contour of our tree and draw it on the original picture.

findContours(tmp1, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
max_area = 0;
j = 0;
for(k = 0; k < contours.size(); k++)
{
    tmp_area = contourArea(contours[k]);
    if(tmp_area > max_area)
    {
        max_area = tmp_area;
        j = k;
    }
}

approxPolyDP(contours[j], contours[j], 30, true);
convexHull(contours[j], contours[j]);

drawContours(original, contours, j, Scalar(0, 0, 255), 3);
           

I'm sorry, but at the moment I have a bad connection, so it is not possible for me to upload pictures.

I'll try to do it later.

Merry Christmas.

EDIT:

Here are some pictures of the final output:

[Final output images]

#4

Some old-fashioned image processing approach...

The idea is based on the assumption that images depict lighted trees on typically darker and smoother backgrounds (or foregrounds in some cases).

The lighted tree area is more "energetic" and has higher intensity.

The process is as follows:
  1. Convert to graylevel
  2. Apply LoG filtering to get the most "active" areas
  3. Apply an intensity thresholding to get the most bright areas
  4. Combine the previous 2 to get a preliminary mask
  5. Apply a morphological dilation to enlarge areas and connect neighboring components
  6. Eliminate small candidate areas according to their area size

What you get is a binary mask and a bounding box for each image.

Here are the results using this naive technique:

[Results for the 6 images]

The MATLAB code follows: the code runs on a folder with JPG images.

It loads all images and returns the detected results.
% clear everything
clear;
pack;
close all;
close all hidden;
drawnow;
clc;

% initialization
ims=dir('./*.jpg');
imgs={};
images={}; 
blur_images={}; 
log_image={}; 
dilated_image={};
int_image={};
bin_image={};
measurements={};
box={};
num=length(ims);
thres_div = 3;

for i=1:num, 
    % load original image
    imgs{end+1}=imread(ims(i).name);

    % convert to grayscale
    images{end+1}=rgb2gray(imgs{i});

    % apply laplacian filtering and heuristic hard thresholding
    val_thres = (max(max(images{i}))/thres_div);
    log_image{end+1} = imfilter( images{i},fspecial('log')) > val_thres;

    % get the most bright regions of the image
    int_thres = 0.26*max(max( images{i}));
    int_image{end+1} = images{i} > int_thres;

    % compute the final binary image by combining 
    % high 'activity' with high intensity
    bin_image{end+1} = log_image{i} .* int_image{i};

    % apply morphological dilation to connect disconnected components
    strel_size = round(0.01*max(size(imgs{i})));        % structuring element for morphological dilation
    dilated_image{end+1} = imdilate( bin_image{i}, strel('disk',strel_size));

    % do some measurements to eliminate small objects
    measurements{i} = regionprops( logical( dilated_image{i}),'Area','BoundingBox');
    for m=1:length(measurements{i})
        if measurements{i}(m).Area < 0.05*numel( dilated_image{i})
            dilated_image{i}( round(measurements{i}(m).BoundingBox(2):measurements{i}(m).BoundingBox(4)+measurements{i}(m).BoundingBox(2)),...
                round(measurements{i}(m).BoundingBox(1):measurements{i}(m).BoundingBox(3)+measurements{i}(m).BoundingBox(1))) = 0;
        end
    end
    % make sure the dilated image is the same size with the original
    dilated_image{i} = dilated_image{i}(1:size(imgs{i},1),1:size(imgs{i},2));
    % compute the bounding box
    [y,x] = find( dilated_image{i});
    if isempty( y)
        box{end+1}=[];
    else
        box{end+1} = [ min(x) min(y) max(x)-min(x)+1 max(y)-min(y)+1];
    end
end 

%%% additional code to display things
for i=1:num,
    figure;
    subplot(121);
    colormap gray;
    imshow( imgs{i});
    if ~isempty(box{i})
        hold on;
        rr = rectangle( 'position', box{i});
        set( rr, 'EdgeColor', 'r');
        hold off;
    end
    subplot(122);
    imshow( imgs{i}.*uint8(repmat(dilated_image{i},[1 1 3])));
end
           

#5

I wrote the code in Matlab R2007a.

I used k-means to roughly extract the christmas tree.

I will show my intermediate result only with one image, and final results with all six.

First, I mapped the RGB space onto Lab space, which could enhance the contrast of red in its b channel:
colorTransform = makecform('srgb2lab');
I = applycform(I, colorTransform);
L = double(I(:,:,1));
a = double(I(:,:,2));
b = double(I(:,:,3));
           
[Intermediate result image]

Besides the features in color space, I also used a texture feature that is relevant to the neighborhood rather than each pixel itself.

Here I linearly combined the intensity from the 3 original channels (R,G,B).

The reason I formed it this way is that the christmas trees in the pictures all have red lights on them, and sometimes green/sometimes blue illumination as well.
R=double(Irgb(:,:,1));
G=double(Irgb(:,:,2));
B=double(Irgb(:,:,3));
I0 = (3*R + max(G,B)-min(G,B))/2;
           
[Intermediate result image]

I applied a 3x3 local binary pattern on I0, used the center pixel as the threshold, and obtained the contrast by calculating the difference between the mean pixel intensity value above the threshold and the mean value below it.
I0_copy = zeros(size(I0));
for i = 2 : size(I0,1) - 1
    for j = 2 : size(I0,2) - 1
        tmp = I0(i-1:i+1,j-1:j+1) >= I0(i,j);
        I0_copy(i,j) = mean(mean(tmp.*I0(i-1:i+1,j-1:j+1))) - ...
            mean(mean(~tmp.*I0(i-1:i+1,j-1:j+1))); % Contrast
    end
end
           
[Intermediate result image]

Since I have 4 features in total, I would choose K=5 in my clustering method.

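The snippets above do not show how the four features are actually stacked into the matrix X that is passed to runkMeans below. The following is only a minimal sketch of one way to do it, reusing L, a, b and I0_copy from the code above; the z-score normalization and the random seeding of K = 5 centroids are my own assumptions, not the author's code:

% hypothetical assembly of the feature matrix: one row per pixel,
% columns = L, a, b and the local-binary-pattern contrast
img_size = size(I0_copy);
X = [L(:), a(:), b(:), I0_copy(:)];
% z-score each feature so that no single channel dominates the distances
X = (X - repmat(mean(X), size(X,1), 1)) ./ repmat(std(X), size(X,1), 1);
% K = 5 random pixels as initial centroids, 3 iterations as in the text
K = 5;
p = randperm(size(X,1));
initial_centroids = X(p(1:K), :);
max_iters = 3;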

The code for k-means is shown below (it is from Dr. Andrew Ng's machine learning course; I took the course before, and I wrote the code myself for his programming assignment).
[centroids, idx] = runkMeans(X, initial_centroids, max_iters);
mask=reshape(idx,img_size(1),img_size(2));

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [centroids, idx] = runkMeans(X, initial_centroids, ...
                                  max_iters, plot_progress)
   [m n] = size(X);
   K = size(initial_centroids, 1);
   centroids = initial_centroids;
   previous_centroids = centroids;
   idx = zeros(m, 1);

   for i=1:max_iters    
      % For each example in X, assign it to the closest centroid
      idx = findClosestCentroids(X, centroids);

      % Given the memberships, compute new centroids
      centroids = computeCentroids(X, idx, K);

   end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function idx = findClosestCentroids(X, centroids)
   K = size(centroids, 1);
   idx = zeros(size(X,1), 1);
   for xi = 1:size(X,1)
      x = X(xi, :);
      % Find closest centroid for x.
      best = Inf;
      for mui = 1:K
        mu = centroids(mui, :);
        d = dot(x - mu, x - mu);
        if d < best
           best = d;
           idx(xi) = mui;
        end
      end
   end 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function centroids = computeCentroids(X, idx, K)
   [m n] = size(X);
   centroids = zeros(K, n);
   for mui = 1:K
      centroids(mui, :) = sum(X(idx == mui, :)) / sum(idx == mui);
   end
           
[k-means clustering results for the 6 images]

Since the program runs very slowly on my computer, I just ran 3 iterations.

Normally the stopping criteria are (i) at least 10 iterations, or (ii) no more change in the centroids.
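As a small sketch of that second criterion (my addition, not the author's original assignment code), the main loop of runkMeans shown above could stop early once the centroids stop moving:

for i = 1:max_iters
    % For each example in X, assign it to the closest centroid
    idx = findClosestCentroids(X, centroids);

    % Given the memberships, compute new centroids
    previous_centroids = centroids;
    centroids = computeCentroids(X, idx, K);

    % stop as soon as the centroids no longer change
    if max(abs(centroids(:) - previous_centroids(:))) < 1e-6
        break;
    end
end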

In my test, increasing the iterations may differentiate the background (sky and tree, sky and building, ...) more accurately, but did not show drastic changes in the christmas tree extraction.

Also note k-means is not immune to the random centroid initialization, so running the program several times to make a comparison is recommended.

After the k-means, the labelled region with the maximum intensity of I0 was chosen.

And boundary tracing was used to extract the boundaries.
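A minimal sketch of this selection step (K, mask and I0 are taken from the snippets above; the exact code is my own illustration, not the author's):

% pick the k-means label whose region has the highest mean I0 intensity,
% then trace its boundaries
K = 5;
meanI = zeros(K, 1);
for k = 1:K
    meanI(k) = mean(I0(mask == k));   % average intensity of each cluster
end
[dummy, bestLabel] = max(meanI);
treeMask = (mask == bestLabel);
boundaries = bwboundaries(treeMask);  % traced boundaries of the chosen region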

To me, the last christmas tree is the most difficult one to extract, since the contrast in that picture is not as high as in the first five.

Another issue in my method is that I used the bwboundaries function in Matlab to trace the boundary, but sometimes the inner boundaries are also included, as you can observe in the 3rd, 5th, and 6th results.

The dark side within the christmas trees not only fails to be clustered with the illuminated side, but also leads to many tiny inner boundaries being traced (imfill doesn't improve this very much).

All in all, my algorithm still has a lot of room for improvement.
[Final results for the 6 images]

Some publications indicate that mean-shift may be more robust than k-means, and many graph-cut based algorithms are also very competitive on complicated boundary segmentation.

I wrote a mean-shift algorithm myself; it seems to better extract the regions without enough light.

But mean-shift over-segments a little bit, and some merging strategy is needed.

It ran even slower than k-means on my computer, so I am afraid I have to give it up.
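For illustration only, the kind of naive mean-shift mode seeking meant here could look like the sketch below (a flat kernel over a feature matrix X with bandwidth h; the names and the implementation are my own assumption, not the author's code, and its O(N^2) inner loop also explains why such an approach runs much slower than k-means):

function modes = naiveMeanShift(X, h, max_iters)
% shift every point towards the mean of its neighbours within radius h;
% points that converge to the same mode belong to the same segment
    modes = X;
    for it = 1:max_iters
        for i = 1:size(modes, 1)
            d2 = sum((X - repmat(modes(i,:), size(X,1), 1)).^2, 2);
            inWin = d2 < h^2;                      % flat kernel window
            if any(inWin)
                modes(i,:) = mean(X(inWin,:), 1);  % mean-shift update
            end
        end
    end
end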

I eagerly look forward to seeing others submit excellent results here with those modern algorithms mentioned above.

Yet I always believe feature selection is the key component in image segmentation.

With a proper feature selection that can maximize the margin between object and background, many segmentation algorithms will definitely work.

Different algorithms may improve the result from 1 to 10, but the feature selection may improve it from 0 to 1.

Merry Christmas!

#6

...another old-fashioned solution - purely based on HSV processing:
  1. Convert images to the HSV colorspace
  2. Create masks according to heuristics in the HSV (see below)
  3. Apply morphological dilation to the mask to connect disconnected areas
  4. Discard small areas and horizontal blocks (remember trees are vertical blocks)
  5. Compute the bounding box

A word on the heuristics in the HSV processing:
  1. everything with Hues (H) between 210 - 320 degrees is discarded as blue-magenta that is supposed to be in the background or in non-relevant areas
  2. everything with Values (V) lower than 40% is also discarded as being too dark to be relevant

Of course one may experiment with numerous other possibilities to fine-tune this approach...

Here is the MATLAB code to do the trick (warning: the code is far from being optimized!!! I used techniques not recommended for MATLAB programming just to be able to track anything in the process - this can be greatly optimized):
% clear everything
clear;
pack;
close all;
close all hidden;
drawnow;
clc;

% initialization
ims=dir('./*.jpg');
num=length(ims);

imgs={};
hsvs={}; 
masks={};
dilated_images={};
measurements={};
boxs={};

for i=1:num, 
    % load original image
    imgs{end+1} = imread(ims(i).name);
    flt_x_size = round(size(imgs{i},2)*0.005);
    flt_y_size = round(size(imgs{i},1)*0.005);
    flt = fspecial( 'average', max( flt_y_size, flt_x_size));
    imgs{i} = imfilter( imgs{i}, flt, 'same');
    % convert to HSV colorspace
    hsvs{end+1} = rgb2hsv(imgs{i});
    % apply a hard thresholding and binary operation to construct the mask
    masks{end+1} = medfilt2( ~(hsvs{i}(:,:,1)>(210/360) & hsvs{i}(:,:,1)<(320/360))&hsvs{i}(:,:,3)>0.4);
    % apply morphological dilation to connect disconnected components
    strel_size = round(0.03*max(size(imgs{i})));        % structuring element for morphological dilation
    dilated_images{end+1} = imdilate( masks{i}, strel('disk',strel_size));
    % do some measurements to eliminate small objects
    measurements{i} = regionprops( dilated_images{i},'Perimeter','Area','BoundingBox'); 
    for m=1:length(measurements{i})
        if (measurements{i}(m).Area < 0.02*numel( dilated_images{i})) || (measurements{i}(m).BoundingBox(3)>1.2*measurements{i}(m).BoundingBox(4))
            dilated_images{i}( round(measurements{i}(m).BoundingBox(2):measurements{i}(m).BoundingBox(4)+measurements{i}(m).BoundingBox(2)),...
                round(measurements{i}(m).BoundingBox(1):measurements{i}(m).BoundingBox(3)+measurements{i}(m).BoundingBox(1))) = 0;
        end
    end
    dilated_images{i} = dilated_images{i}(1:size(imgs{i},1),1:size(imgs{i},2));
    % compute the bounding box
    [y,x] = find( dilated_images{i});
    if isempty( y)
        boxs{end+1}=[];
    else
        boxs{end+1} = [ min(x) min(y) max(x)-min(x)+1 max(y)-min(y)+1];
    end

end 

%%% additional code to display things
for i=1:num,
    figure;
    subplot(121);
    colormap gray;
    imshow( imgs{i});
    if ~isempty(boxs{i})
        hold on;
        rr = rectangle( 'position', boxs{i});
        set( rr, 'EdgeColor', 'r');
        hold off;
    end
    subplot(122);
    imshow( imgs{i}.*uint8(repmat(dilated_images{i},[1 1 3])));
end
           

Results:

In the results I show the masked image and the bounding box.

[Masked images and bounding boxes for the 6 images]