Final Report of Implementing the Efficient Large-Scale Image Annotation by Probabilistic Collaborative Multi-Label Propagation


0. Introduction

Our implementation includes four major parts:

1. A gradient-histogram calculation that extracts a feature vector from an image.
2. A locality-sensitive hashing algorithm that maps similar images into the same buckets.
3. An l1-norm optimization algorithm that calculates the weighted edges of a KNN graph.
4. A series of optimizations that calculates the probability that a tag belongs to an untagged image.

1. A gradient-histogram calculation that extracts a feature vector from an image.

In this section we found papers and documents that explain how a gradient histogram captures the main features of an image, such as [1]. In our implementation we first used the Canny algorithm in OpenCV to detect major gradient changes in an image, so the pixels that belong to an edge are stored (see the variables canny and canny_m in get_features() in HammingLSH.cpp). Then we used the Sobel operator in OpenCV to measure the gradient intensity in both the x and y directions, so that we can compute the gradient direction vector at every edge pixel. After that, we build a histogram that records how frequently those vectors fall within each angle range. From these steps a feature vector of the image is acquired. The gradient drop angle is calculated as

    θ = tan⁻¹(G_y / G_x)

where G_x and G_y are the Sobel responses in the x and y directions.

Exactly how a gradient histogram of edges identifies one image among others is a little vague, but we can say that a major gradient change often means there are edges nearby, and the Canny algorithm is a well-performing method for recognizing edges. To our knowledge, a gradient histogram records the shapes of objects: for example, a circle distributes its gradient directions evenly across the histogram, while a square contributes only two angles of gradient drop. The global variable dimention (misspelled) defined in HammingLSH.cpp can be modified to change the number of columns in the histograms, which equals the feature vector dimension, so it can yield different results. If the histogram contains more columns, the identification is more sensitive and more vulnerable to noise, and fewer images fall into the same bucket. An over-sensitive histogram can also cause isolated subgraphs during graph construction, which we do not want.

[Figure: Histograms of each image]
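To make this step concrete, here is a minimal sketch assuming OpenCV and an 8-bit grayscale input. The function name, the Canny thresholds, and the binning scheme are our illustrative choices, not the actual get_features() in HammingLSH.cpp; only the bin-count global dimention is taken from the report.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Number of histogram columns; the report's global in HammingLSH.cpp is
// spelled "dimention", kept here for consistency. The value is an assumption.
static const int dimention = 16;

// Illustrative sketch of the feature extraction described above, not the
// actual get_features(). Input: an 8-bit grayscale image.
std::vector<float> gradient_histogram(const cv::Mat& gray)
{
    // 1. Canny marks the edge pixels (thresholds are assumptions).
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);

    // 2. Sobel measures the gradient intensity in the x and y directions.
    cv::Mat gx, gy;
    cv::Sobel(gray, gx, CV_32F, 1, 0);
    cv::Sobel(gray, gy, CV_32F, 0, 1);

    // 3. Bin the gradient drop angle theta = atan2(Gy, Gx) of every edge
    //    pixel into one of `dimention` angle ranges.
    std::vector<float> hist(dimention, 0.0f);
    for (int r = 0; r < gray.rows; ++r) {
        for (int c = 0; c < gray.cols; ++c) {
            if (edges.at<uchar>(r, c) == 0) continue;      // not an edge pixel
            float theta = std::atan2(gy.at<float>(r, c), gx.at<float>(r, c));
            int bin = (int)((theta + CV_PI) / (2.0 * CV_PI) * dimention);
            if (bin >= dimention) bin = dimention - 1;     // theta == +pi edge case
            hist[bin] += 1.0f;
        }
    }
    return hist;   // the feature vector of the image
}
```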

2. A locality-sensitive hashing algorithm that maps similar images into the same buckets.

Now that we have a feature vector for each image, it is time to decide which images belong in the same bucket. We used get_hashfamily() to generate a set of random integers, which serve as random sampling positions over a histogram. In get_hashstring(), we convert the histogram into a bit string of the form 11110000 (the pattern repeats n times, where n equals dimention); each 11110000 substring encodes the height of one column in unary. In this way, every gradient drop is sampled with equal probability. In to_str(), we sample the encoded histogram to produce a much shorter vector. Since each bit of the encoding has an equal probability of being sampled, the more similar the histograms of two images are, the higher the probability that we obtain the same bit string from both. Therefore we can hash the pictures into buckets, as sketched below.

[Figure: One of the hashmaps]
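A minimal sketch of the unary encoding and bit-sampling idea just described. MAX_HEIGHT (an assumed cap on a column's height) and all function names are illustrative stand-ins for get_hashfamily(), get_hashstring(), and to_str(), not the actual code.

```cpp
#include <algorithm>
#include <random>
#include <string>
#include <vector>

// Assumed cap on a histogram column's height; purely illustrative.
static const int MAX_HEIGHT = 64;

// One hash function = a fixed random choice of bit positions to sample,
// in the spirit of the random integers produced by get_hashfamily().
std::vector<int> make_hash_family(int bits_per_key, int total_bits,
                                  std::mt19937& rng)
{
    std::uniform_int_distribution<int> pick(0, total_bits - 1);
    std::vector<int> positions(bits_per_key);
    for (int& p : positions) p = pick(rng);
    return positions;
}

// Unary encoding, as in get_hashstring(): a column of height h becomes
// h ones followed by (MAX_HEIGHT - h) zeros, e.g. "11110000". The Hamming
// distance between two such strings equals the L1 distance between the
// histograms, which is what makes bit sampling locality-sensitive.
std::string to_unary(const std::vector<int>& hist)
{
    std::string s;
    for (int h : hist) {
        int clipped = std::min(h, MAX_HEIGHT);
        s.append(clipped, '1');
        s.append(MAX_HEIGHT - clipped, '0');
    }
    return s;
}

// Bucket key, in the spirit of to_str(): sample the pre-chosen bit
// positions. Similar histograms agree on most bits, so they are likely
// to produce the same key and land in the same bucket.
std::string bucket_key(const std::vector<int>& hist,
                       const std::vector<int>& positions)
{
    const std::string unary = to_unary(hist);
    std::string key;
    for (int p : positions) key += unary[p];
    return key;
}
```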

We obtained ten hashmaps, each giving different results, so that we avoid isolated subgraphs and images of different concepts can still be linked. We can infer that the more hashmaps we use, the higher the probability of noisy data (irrelevant images linked to each other), so we should control the size of the hashmap set. In get_neighbor() we combined the results of all ten hashmaps, which was a bit tricky in terms of data structures (a sketch follows); from this we obtain the KNN graph.

[Figure: Adjacency matrix of a KNN graph]
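An illustrative sketch of merging the buckets of all hashmaps into one neighbor set per image, in the spirit of get_neighbor(); the data layout is our assumption, not the report's actual structure.

```cpp
#include <set>
#include <string>
#include <unordered_map>
#include <vector>

// hashmaps[t] maps a bucket key to the images that fell into that bucket
// under hash family t. Two images are neighbors if they share a bucket
// in at least one of the hashmaps.
std::vector<std::set<int>> combine_hashmaps(
    const std::vector<std::unordered_map<std::string,
                                         std::vector<int>>>& hashmaps,
    int num_images)
{
    std::vector<std::set<int>> neighbors(num_images);
    for (const auto& table : hashmaps) {          // e.g. ten hashmaps
        for (const auto& kv : table) {            // each bucket
            const std::vector<int>& bucket = kv.second;
            for (int i : bucket)
                for (int j : bucket)
                    if (i != j)
                        neighbors[i].insert(j);   // union over all tables
        }
    }
    return neighbors;   // adjacency of the KNN graph
}
```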

3. An l1-norm optimization algorithm that calculates the weighted edges of a KNN graph.

The method the authors use to obtain edge weights is novel. The main idea of the optimization problem is that images in the same bucket are approximately linearly dependent, so we minimize the l1 norm of a vector that indicates how strongly a vertex depends linearly on its KNN neighbors.

[Figure: The optimization problem]

We downloaded a toolbox from the Internet that solves this problem in MATLAB (see l1_ls and l1_ls_nonneg). l1_ls_nonneg is a solver with non-negativity constraints, which suits our setting better. To calculate wij we export the data in txt format and read it in MATLAB (matrix.txt for the adjacency matrix and features.txt for the feature vectors); the whole process is in the script optimize.m. After solving the problem we rescale wij so that the sum of wij in every row equals 1. Finally we export wij.txt and read it back into the VC++ project (see read_wij() in Hamming_LSH.cpp).
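The figure showing the optimization problem did not survive extraction. A plausible reconstruction, assuming the standard l1-regularized least-squares form that l1_ls_nonneg solves, with x_i the feature vector of image i and A_i the matrix whose columns are the feature vectors of its KNN neighbors:

    min_{w_i ≥ 0}  ‖A_i w_i − x_i‖₂² + λ‖w_i‖₁

followed by the row rescaling

    w_ij ← w_ij / Σ_k w_ik

so that every row of W sums to 1.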

[Figure: Part of the wij matrix in MATLAB]

4. A series of optimizations that calculates the probability that a tag belongs to an untagged image.

After we obtained the whole content of Wij, we used the algorithms mentioned in the paper to calculate the probabilities P and Q for every unlabeled picture, and then output them to a txt document to show what we have done. In this step, we achieve the algorithm…
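The report is cut off before describing this step in detail, and the paper's exact definitions of P and Q are not reproduced here. Purely to illustrate the general pattern, here is a minimal sketch of one weighted label-propagation step over the row-normalized W; all names and the update rule itself are assumptions.

```cpp
#include <vector>

// Minimal sketch of one weighted label-propagation step over the
// row-normalized weight matrix W. The paper's exact P and Q updates are
// not reproduced in this report, so this update rule is an assumption.
// scores[i][t] is the current probability that tag t belongs to image i.
void propagate_once(const std::vector<std::vector<double>>& W,
                    const std::vector<bool>& labeled,
                    std::vector<std::vector<double>>& scores)
{
    const int n = (int)W.size();
    const int tags = scores.empty() ? 0 : (int)scores[0].size();
    std::vector<std::vector<double>> next = scores;

    for (int i = 0; i < n; ++i) {
        if (labeled[i]) continue;                // labeled images keep their tags
        for (int t = 0; t < tags; ++t) {
            double s = 0.0;
            for (int j = 0; j < n; ++j)
                s += W[i][j] * scores[j][t];     // weighted vote of neighbors
            next[i][t] = s;                      // valid since rows of W sum to 1
        }
    }
    scores.swap(next);
}
```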
