Foreign Literature Translation -- Rapid Multimodal Medical Image Registration and Fusion in 3D conformal radiotherapy treatment planning

Rapid Multimodal Medical Image Registration and Fusion in 3D conformal radiotherapy treatment planning

Bin Li, Lianfang Tian
School of Automation Science and Engineering, South China University of Technology, Guangzhou, China

Shanxing Ou
Dept. of Radiology and Pediatrics, Guangzhou General Hospital of Guangzhou Command, Guangzhou, China

This work is funded by the Natural Science Foundation of Guangdong, China (No. 8451064101000631) and the Ph.D. Programs Foundation of the Ministry of Education of China (No. 200805610018).

Abstract: In order to realize effective and efficient automatic registration and fusion of multimodal medical image data in 3D conformal radiotherapy treatment planning (3D CRTP), a rapid image registration and fusion method is proposed in this paper. The proposed registration method is based on a hierarchical adaptive free-form deformation (FFD) algorithm and can be described as follows. First, the ROI (region of interest) is extracted using the C-V level set algorithm, and feature points are matched automatically based on a parallel computing method. Then, a global rough registration is carried out using the principal axes algorithm. Next, automatic fine registration of the multimodal medical images is realized by an FFD method based on hierarchical B-splines. Moreover, in order to speed up the calculation of the FFD coefficients, the stochastic gradient descent method Simultaneous Perturbation (SP) and the criterion of maximum mutual information entropy are adopted. After registration of the multimodal images, their sequence images are fused by applying an image fusion method based on parallel computing and the wavelet transform, with a fusion rule that combines the local standard deviation and local energy. This study demonstrates the superiority of the proposed method.

Keywords: 3D conformal radiotherapy planning; multimodal image registration; image fusion; hierarchical B-spline; parallel computing; wavelet transform

I. INTRODUCTION

Generally, different medical images provide different information for diagnosis. When 3D CRTP is employed for tumor treatment, the relative position between the tumor and its adjacent tissues can be obtained easily and accurately by analysing medical data sets that fuse the information of different images, such as functional images and anatomical images [1]. The key step here is the fusion of the multimodal images, and registration is the basis of image fusion. In 3D CRTP, non-rigid registration methods are needed because the position, size and shape of internal organs and tissues are affected by involuntary and other physiological movements of the patient. It is a major challenge to develop a rapid and automatic registration method whose accuracy can reach that of manually guided registration [2, 3]. At the same time, the data sets in 3D CRTP are so massive that it is very difficult to match and fuse the information of multimodal sequence images in real time.

In order to realize effective and efficient automatic registration and fusion of multimodal images, a rapid image registration and fusion method is proposed in this paper. The proposed automatic registration method is based on hierarchical adaptive FFD, stochastic gradient descent and parallel computing, and the proposed parallel multimodal medical image fusion method is based on the wavelet transform with a fusion rule combining the local standard deviation and local energy.

II. REGISTRATION OF MULTIMODAL MEDICAL IMAGES

A. Flow chart for image registration

The proposed image registration method, which applies an adaptive FFD based on the hierarchical B-spline algorithm, is shown in Fig. 1.

Figure 1. Flow chart of the image registration method applying adaptive FFD

B. Measure of similarity for multimodal medical images

The mutual information of the multimodal medical images is taken as the similarity criterion for registration; it is essentially a statistical expression of the grey-level information shared by the two images. The mutual information [4] of two images is defined by Eq. (1):

MI(I_R, I_F) = H(I_R) + H(I_F) - H(I_R, I_F)      (1)

where I_R is the reference image and I_F is the floating image; H(I_R) and H(I_F) are the information entropies of I_R and I_F, and H(I_R, I_F) denotes their joint information entropy. When the two images are strictly matched, MI(I_R, I_F) reaches its maximum.
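
As a concrete illustration of Eq. (1), the following minimal C++ sketch computes the mutual information of two pre-quantized images from their joint histogram; the image representation and the bin count are illustrative assumptions, not the authors' implementation.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// Mutual information MI = H(R) + H(F) - H(R, F) of two images whose pixel
// intensities have already been quantized to the range [0, bins).
double mutualInformation(const std::vector<int>& ref,
                         const std::vector<int>& flt, int bins) {
    std::vector<double> joint(bins * bins, 0.0), pr(bins, 0.0), pf(bins, 0.0);
    const double n = static_cast<double>(ref.size());
    for (std::size_t i = 0; i < ref.size(); ++i)
        joint[ref[i] * bins + flt[i]] += 1.0;              // joint histogram
    for (int r = 0; r < bins; ++r)
        for (int f = 0; f < bins; ++f) {
            joint[r * bins + f] /= n;                      // joint probability
            pr[r] += joint[r * bins + f];                  // marginal of the reference image
            pf[f] += joint[r * bins + f];                  // marginal of the floating image
        }
    auto entropy = [](const std::vector<double>& p) {
        double h = 0.0;
        for (double v : p) if (v > 0.0) h -= v * std::log(v);
        return h;
    };
    return entropy(pr) + entropy(pf) - entropy(joint);     // Eq. (1)
}

int main() {
    std::vector<int> ref = {0, 1, 2, 3, 0, 1, 2, 3};
    std::vector<int> flt = {0, 1, 2, 3, 0, 1, 2, 3};       // identical images
    std::printf("MI = %f\n", mutualInformation(ref, flt, 4));
    return 0;
}
```

For identical images the result equals the marginal entropy of the reference image, which is the maximum referred to in the text.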

C. Automatic Matching of Feature Points

C.1 Automatic matching of feature points

The main work for the automatic fine registration by the FFD based on hierarchical B-splines is to find suitable feature points, which comprise the points of the ROI and the internal distribution points. The operation is as follows. When carrying out the recognition of feature points, the ROI is first extracted from the CT images using the C-V level set algorithm. Then the corresponding feature points in the PET image are searched for by employing the maximum mutual information criterion. Next, internal distribution points are randomly selected on the internal edge, and all of the feature points are matched automatically based on the parallel computing method. Finally, the pixels with larger standardized uptake values (SUV) are selected from the PET image, and their corresponding feature points are searched for in the CT images. Thus the automatic matching of the initial feature points is realized, and the local deformation adjustment is then done according to the stochastic gradient descent-SP coefficient correction described in section D.1.

C.2 ROI extraction based on improved C-V level set method

In this paper, the ROI, including the organ contour and the focus (lesion) region, is extracted by an improved C-V level set method [5]. The improved C-V level set method is based on a region-based active contour model, which avoids expensive re-initialization of the evolving level set function. The partial differential equation (PDE) defined by the level set function \phi is

\partial\phi/\partial t = \delta_{\varepsilon}(\phi) [ \mu\,\mathrm{div}(\nabla\phi / |\nabla\phi|) - \lambda_1 (I_0 - c_1)^2 + \lambda_2 (I_0 - c_2)^2 ] = 0      (2)

where \delta_{\varepsilon}(\cdot) is a slightly regularized version of the Dirac measure \delta(\cdot); \mu is the weight of the length (regularization) term; \lambda_1 and \lambda_2 are the weights of the corresponding energy terms; I_0 is the object region; and c_1, c_2 are the average intensity values inside and outside the contour. The procedure for ROI extraction using the improved C-V level set method is as follows (one update cycle is sketched in code after this list):

- Initialize the level set function \phi^n with n = 0: the initial curve is set, and the SDF (signed distance function) is set according to the shortest distance between each point and the curve.
- Compute c_1 and c_2.
- Solve the PDE for the level set function iteratively.
- Check whether the solution is stationary. If not, set n = n + 1 and repeat.
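
The sketch below shows one explicit update cycle of Eq. (2) on a small grey-level image; the parameter values (mu, l1, l2, eps, dt), the synthetic test image and the border handling are illustrative choices rather than the paper's settings.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

const double PI = 3.14159265358979323846;

struct Grid {
    int w, h;
    std::vector<double> v;
    double& at(int x, int y) { return v[y * w + x]; }
    double at(int x, int y) const { return v[y * w + x]; }
};

// One explicit Chan-Vese update of the level set phi on image img (Eq. (2)).
void chanVeseStep(Grid& phi, const Grid& img,
                  double mu, double l1, double l2, double eps, double dt) {
    // Region averages c1 (inside, phi >= 0) and c2 (outside, phi < 0).
    double s1 = 0, s2 = 0; int n1 = 0, n2 = 0;
    for (int i = 0; i < phi.w * phi.h; ++i) {
        if (phi.v[i] >= 0) { s1 += img.v[i]; ++n1; }
        else               { s2 += img.v[i]; ++n2; }
    }
    double c1 = n1 ? s1 / n1 : 0.0, c2 = n2 ? s2 / n2 : 0.0;

    Grid next = phi;
    for (int y = 1; y < phi.h - 1; ++y)
        for (int x = 1; x < phi.w - 1; ++x) {
            // Curvature term div(grad(phi)/|grad(phi)|) with central differences.
            double px  = (phi.at(x + 1, y) - phi.at(x - 1, y)) / 2.0;
            double py  = (phi.at(x, y + 1) - phi.at(x, y - 1)) / 2.0;
            double pxx = phi.at(x + 1, y) - 2 * phi.at(x, y) + phi.at(x - 1, y);
            double pyy = phi.at(x, y + 1) - 2 * phi.at(x, y) + phi.at(x, y - 1);
            double pxy = (phi.at(x + 1, y + 1) - phi.at(x + 1, y - 1)
                        - phi.at(x - 1, y + 1) + phi.at(x - 1, y - 1)) / 4.0;
            double g2   = px * px + py * py + 1e-8;
            double curv = (pxx * py * py - 2 * px * py * pxy + pyy * px * px)
                          / std::pow(g2, 1.5);
            // Regularized Dirac delta delta_eps(phi).
            double d = eps / (PI * (eps * eps + phi.at(x, y) * phi.at(x, y)));
            double I = img.at(x, y);
            next.at(x, y) = phi.at(x, y)
                + dt * d * (mu * curv - l1 * (I - c1) * (I - c1)
                                      + l2 * (I - c2) * (I - c2));
        }
    phi = next;
}

int main() {
    // Bright 8x8 square on a dark 16x16 background; phi starts as a centred disc.
    Grid img{16, 16, std::vector<double>(256, 0.0)};
    Grid phi{16, 16, std::vector<double>(256, 0.0)};
    for (int y = 0; y < 16; ++y)
        for (int x = 0; x < 16; ++x) {
            if (x >= 4 && x < 12 && y >= 4 && y < 12) img.at(x, y) = 1.0;
            phi.at(x, y) = 5.0 - std::sqrt((x - 8.0) * (x - 8.0) + (y - 8.0) * (y - 8.0));
        }
    for (int it = 0; it < 100; ++it)
        chanVeseStep(phi, img, 0.1, 1.0, 1.0, 1.0, 0.5);
    std::printf("phi at object centre: %.3f, at corner: %.3f\n", phi.at(8, 8), phi.at(1, 1));
    return 0;
}
```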

C.3 Auto-matching method of feature points based on parallel computing

In the matching process of the feature points, the search and matching steps for each feature point are independent, so the matching of feature points can be processed by a parallel computing method. In this paper, a cluster computing system is designed to run an MPI high-performance parallel image matching algorithm [6]. The steps of the parallel algorithm are as follows (a minimal MPI sketch is given after this list):

- The management process broadcasts all of the data to be processed, including the CT-PET image data and the positions of the feature points, to all the processes in the communication domain.
- Each process computes its assigned start index, end index and number of feature points to process, according to its process index.
- The assigned feature points are matched independently in each process, according to section C.1.
- The results are sent to the management process, which receives and saves all the results.
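
A minimal MPI (MPICH) sketch of this block decomposition is given below; matchOnePoint() is a hypothetical stand-in for the per-point maximum mutual information search of section C.1, and the point count is illustrative.

```cpp
#include <mpi.h>
#include <cstdio>
#include <vector>

struct Match { int ctIndex; int petIndex; };

// Placeholder for the maximum-mutual-information search of one feature point.
static Match matchOnePoint(int featureIndex) {
    return Match{featureIndex, featureIndex};
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int numPoints = 64;                                       // illustrative count
    MPI_Bcast(&numPoints, 1, MPI_INT, 0, MPI_COMM_WORLD);     // step 1: broadcast shared data

    // Step 2: each process derives its own contiguous block of feature points.
    int base = numPoints / size, rem = numPoints % size;
    int start = rank * base + (rank < rem ? rank : rem);
    int count = base + (rank < rem ? 1 : 0);

    // Step 3: match the assigned points independently.
    std::vector<int> local(count);
    for (int i = 0; i < count; ++i) local[i] = matchOnePoint(start + i).petIndex;

    // Step 4: gather all results on the management process (rank 0).
    std::vector<int> counts(size), displs(size), all(numPoints);
    MPI_Gather(&count, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        int off = 0;
        for (int p = 0; p < size; ++p) { displs[p] = off; off += counts[p]; }
    }
    MPI_Gatherv(local.data(), count, MPI_INT, all.data(), counts.data(), displs.data(),
                MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) std::printf("gathered %d matched points\n", numPoints);
    MPI_Finalize();
    return 0;
}
```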

D. Local fine registration based on adaptive FFD and stochastic gradient descent method

The local deformation of an image is based on local information, so it is easy to produce mismatches if elastic deformation is executed directly. To solve this problem, an automatic fine registration of multimodal medical images based on adaptive FFD with hierarchical B-splines is proposed in this paper, and the stochastic gradient descent method Simultaneous Perturbation (SP) [3] is adopted to implement the fast FFD local fine registration, in which the step size is adjusted adaptively based on maximization of the mutual information. Meanwhile, the image to be processed is deformed to a new image using the reverse (backward) mapping mode, which eliminates the hole phenomenon. The flow chart is shown in Fig. 2; its main steps, for which a code skeleton is given after the figure caption, are:

1. Initialize the control points on the bottom layer (h = L); take the initial value of the deformation function according to the feature points.
2. Transform the floating image using the deformation function.
3. Calculate the stochastic gradient vector of the penalty function.
4. Calculate the error between the deformed image and the reference image.
5. Check whether the error meets the demand; if not, go to step 6 and then return to step 2.
6. Correct the deformation coefficients by taking maximum mutual information as the registration measure.
7. If the finest resolution has not been reached, increase the resolution of the control grid (move up to the next layer), transform the deformation function coefficients, and repeat from step 2; otherwise output the final result of the fine registration.

Figure 2. Local fine registration using an FFD based on hierarchical B-splines
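
The following hypothetical C++ skeleton shows only the control flow of the coarse-to-fine loop in Fig. 2; transformImage(), costError(), spCorrect() and refineGrid() are placeholders for the operations detailed in sections D.1 and D.2, not real library calls.

```cpp
#include <cstdio>
#include <vector>

struct ControlGrid { int levels; int level; std::vector<double> coeff; };

static void   transformImage(const ControlGrid&) {}          // step 2: deform the floating image
static double costError(const ControlGrid&) { return 0.0; }  // steps 3-4: gradient and error
static void   spCorrect(ControlGrid& g) { (void)g; }         // step 6: SP correction, Eqs. (3)-(4)
static void   refineGrid(ControlGrid& g) { ++g.level; }      // step 7: next control-grid layer

int main() {
    ControlGrid grid{3, 0, std::vector<double>(16, 0.0)};     // start on the bottom layer (h = L)
    const double tolerance = 1e-3;
    while (true) {
        transformImage(grid);                                 // deform with current coefficients
        double err = costError(grid);                         // compare with the reference image
        if (err > tolerance) { spCorrect(grid); continue; }   // correct coefficients and retry
        if (grid.level + 1 >= grid.levels) break;             // finest resolution reached
        refineGrid(grid);                                     // move up to the next layer
    }
    std::printf("fine registration finished at level %d\n", grid.level);
    return 0;
}
```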

D.1 Registration using stochastic gradient descent-SP

The mutual information is taken as the cost function for the proposed medical image registration method; that is, the maximum mutual information entropy is taken as the registration measure to test whether the pre-set error has been achieved. If it has not, the deformation coefficients should be corrected. The global optimal solution is then \Phi = \arg\min_{\Phi} C(\Phi). The SP method is used to find the extremum of the coefficient matrix \Phi, so the iterative algorithm for a control point is given by Eq. (3):

\Phi_i(t+1) = \Phi_i(t) - a\, g_k      (3)

where \Phi_i \in C_I and C_I is the control-grid space of the deformed image, t is the iteration number and a is the iteration step size; g_k is the derivative vector of the cost function evaluated at the current position \Phi_i(t), computed by Eq. (4):

g_{ik} = \{ [ C(\Phi_k + c_k \Delta_k) + \epsilon_k^{+} ] - [ C(\Phi_k - c_k \Delta_k) + \epsilon_k^{-} ] \} / (2 c_k \Delta_{ik})      (4)

where g_{ik} represents the i-th element of g_k; c_k is a small scalar; \Delta_k denotes the "random perturbation vector", each element of which is randomly assigned +1 or -1 with equal probability in each iteration, and \Delta_{ik} is its i-th element; \epsilon_k^{+} and \epsilon_k^{-} represent the approximation errors.
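
The following sketch shows the Simultaneous Perturbation update of Eqs. (3)-(4) on a toy quadratic cost; in the registration itself the cost would be the negated mutual information of Eq. (1), and the step sizes and iteration count are illustrative.

```cpp
#include <cstdio>
#include <random>
#include <vector>

// Illustrative cost with its minimum at phi = 0; stands in for C(Phi).
double cost(const std::vector<double>& phi) {
    double c = 0.0;
    for (double p : phi) c += p * p;
    return c;
}

int main() {
    std::vector<double> phi = {2.0, -1.5, 0.5};   // deformation coefficients
    std::mt19937 rng(42);
    std::bernoulli_distribution coin(0.5);
    double a = 0.1, ck = 0.01;                    // step size and perturbation scale
    for (int t = 0; t < 200; ++t) {
        std::vector<double> delta(phi.size()), plus = phi, minus = phi;
        for (std::size_t i = 0; i < phi.size(); ++i) {
            delta[i] = coin(rng) ? 1.0 : -1.0;    // random +/-1 perturbation vector
            plus[i]  += ck * delta[i];
            minus[i] -= ck * delta[i];
        }
        double diff = cost(plus) - cost(minus);   // only two cost evaluations per step
        for (std::size_t i = 0; i < phi.size(); ++i)
            phi[i] -= a * diff / (2.0 * ck * delta[i]);   // Eq. (3) with the estimate of Eq. (4)
    }
    std::printf("phi after SP descent: %f %f %f\n", phi[0], phi[1], phi[2]);
    return 0;
}
```

Note that each iteration needs only two cost evaluations regardless of the number of coefficients, which is the reason SP speeds up the FFD coefficient calculation.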

D.2 Registration based on B-spline FFD method

The principle of the FFD method [2] is that the object shape is changed and controlled by moving the control points of a control lattice. Because a B-spline only affects the deformation locally, when some feature points of a two-dimensional image are moved only the nearby points are affected, not all the points of the image; therefore the two-variable cubic B-spline tensor product is adopted as the FFD deformation function. Its shape function h is

h(u, v) = \sum_{k=0}^{3} \sum_{l=0}^{3} B_k(s)\, B_l(t)\, \Phi_{I+k,\, J+l}      (5)

where I and J index the control-grid cell that contains (u, v), and s, t \in [0, 1) are the relative positions of (u, v) within that cell; B_k(s) and B_l(t) are the uniform cubic B-spline basis functions of s and t, respectively; \Phi is a control point grid of size (m+3) \times (n+3) covering the m \times n image, and \Phi_{IJ} denotes the control point at position (I, J) in \Phi, with 1 \le u \le m and 1 \le v \le n.
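
A small sketch of evaluating the cubic B-spline FFD of Eq. (5) at a single point is given below; the control-grid indexing convention (cell index I = floor(u/dx)) and the spacing values are illustrative assumptions, not the paper's exact convention.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Uniform cubic B-spline basis functions B_0..B_3 evaluated at s in [0, 1).
double basis(int k, double s) {
    switch (k) {
        case 0:  return (1 - s) * (1 - s) * (1 - s) / 6.0;
        case 1:  return (3 * s * s * s - 6 * s * s + 4) / 6.0;
        case 2:  return (-3 * s * s * s + 3 * s * s + 3 * s + 1) / 6.0;
        default: return s * s * s / 6.0;
    }
}

// phi is a grid of control-point displacements with cell spacing dx, dy.
double ffd(const std::vector<std::vector<double>>& phi,
           double dx, double dy, double u, double v) {
    int I = static_cast<int>(std::floor(u / dx));
    int J = static_cast<int>(std::floor(v / dy));
    double s = u / dx - I, t = v / dy - J;
    double h = 0.0;
    for (int k = 0; k < 4; ++k)
        for (int l = 0; l < 4; ++l)
            h += basis(k, s) * basis(l, t) * phi[I + k][J + l];   // Eq. (5)
    return h;
}

int main() {
    // 8x8 control points over a 50x50 domain; a single raised control point only
    // deforms its 4x4-cell neighbourhood, the local-support property used above.
    std::vector<std::vector<double>> phi(8, std::vector<double>(8, 0.0));
    phi[3][3] = 1.0;
    std::printf("near the moved control point: %f\n", ffd(phi, 10.0, 10.0, 25.0, 25.0));
    std::printf("far from it:                  %f\n", ffd(phi, 10.0, 10.0, 5.0, 45.0));
    return 0;
}
```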

III. FUSION OF MULTIMODAL MEDICAL IMAGES

A. Image fusion based on wavelet transform

After the registration of the CT and PET images, their sequence images are fused by applying an image fusion method based on parallel computing and the wavelet transform, with the fusion rule combining the local standard deviation and local energy. The steps of the fusion algorithm are as follows (a code sketch of the fusion rule is given after the list of steps):

Step 1. The CT and PET images are decomposed by a 3-level wavelet decomposition with Daubechies 9/7 biorthogonal wavelet filter banks.

Step 2. Compute the local average values \bar{D}_{CT}(i, j) and \bar{D}_{PET}(i, j) of the wavelet coefficients of the CT and PET images.

Step 3. The CT and PET images are fused in the wavelet domain by employing the fusion rule combining the local standard deviation and local energy [1]. Let A_X denote the activity measure based on the local standard deviation:

A_X(i, j) = \sum_{s \in S} \sum_{t \in T} w(s, t) [ D_X(i+s, j+t) - \bar{D}_X(i, j) ]^2      (6)

Let \omega_{CT} and \omega_{PET} denote the weights that the activity measure based on the local standard deviation assigns to CT and PET, respectively:

\omega_{CT}(i, j) = A_{CT}(i, j)^{\alpha} / [ A_{CT}(i, j)^{\alpha} + A_{PET}(i, j)^{\alpha} ],  \quad  \omega_{PET}(i, j) = A_{PET}(i, j)^{\alpha} / [ A_{CT}(i, j)^{\alpha} + A_{PET}(i, j)^{\alpha} ]      (7)

where \alpha is an adjustable parameter; when \alpha > 0, the higher the activity measure, the larger the weight. Here \alpha is set to 1.8. Let B_X denote the activity measure based on the local energy:

B_X(i, j) = \sum_{s \in S} \sum_{t \in T} w(s, t)\, D_X(i+s, j+t)^2      (8)

Let \nu_{CT} and \nu_{PET} denote the weights that the activity measure based on the local energy assigns to CT and PET, respectively:

\nu_{CT}(i, j) = B_{CT}(i, j) / [ B_{CT}(i, j) + B_{PET}(i, j) ],  \quad  \nu_{PET}(i, j) = B_{PET}(i, j) / [ B_{CT}(i, j) + B_{PET}(i, j) ]      (9)

After combining the local standard deviation and the local energy, the wavelet coefficients of the fused image D_F are

D_F(i, j) = \beta [ \omega_{CT}(i, j) D_{CT}(i, j) + \omega_{PET}(i, j) D_{PET}(i, j) ] + \gamma [ \nu_{CT}(i, j) D_{CT}(i, j) + \nu_{PET}(i, j) D_{PET}(i, j) ]      (10)

where \beta and \gamma are adjustable parameters with \beta + \gamma = 1. The image intensity gets stronger as \beta increases, and the edges get sharper as \gamma increases, so edge blur can be avoided as far as possible if \beta / \gamma is adjusted suitably.

Step 4. The approximation coefficients C_{CT} and C_{PET} from the wavelet transforms of the CT and PET images are processed; the approximation coefficients of the fused image, C_F, are taken as the average of C_{CT} and C_{PET}.

Step 5. The fused image F is obtained by the inverse wavelet transform using all the fused wavelet coefficients D_F and the approximation coefficients C_F.
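
The sketch below applies the fusion rule of Eqs. (6)-(10) to one wavelet detail subband. The 3x3 window, the uniform window weights and beta = gamma = 0.5 are illustrative assumptions, while alpha = 1.8 follows the text; the wavelet decomposition and reconstruction themselves are assumed to be done elsewhere.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Band { int w, h; std::vector<double> d;
              double at(int x, int y) const { return d[y * w + x]; } };

// Sum of f over the 3x3 window centred on (x, y), clamped at the borders.
template <typename F>
double window(const Band& b, int x, int y, F f) {
    double acc = 0.0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int xx = std::min(std::max(x + dx, 0), b.w - 1);
            int yy = std::min(std::max(y + dy, 0), b.h - 1);
            acc += f(b.at(xx, yy));
        }
    return acc;
}

Band fuse(const Band& ct, const Band& pet, double alpha, double beta, double gamma) {
    Band out{ct.w, ct.h, std::vector<double>(ct.d.size(), 0.0)};
    for (int y = 0; y < ct.h; ++y)
        for (int x = 0; x < ct.w; ++x) {
            auto act = [&](const Band& b) {            // Eq. (6): local standard deviation
                double mean = window(b, x, y, [](double v) { return v; }) / 9.0;
                return window(b, x, y, [&](double v) { return (v - mean) * (v - mean); });
            };
            auto eng = [&](const Band& b) {            // Eq. (8): local energy
                return window(b, x, y, [](double v) { return v * v; });
            };
            double aC = std::pow(act(ct), alpha), aP = std::pow(act(pet), alpha);
            double bC = eng(ct), bP = eng(pet);
            double wC = aC / (aC + aP + 1e-12), wP = 1.0 - wC;   // Eq. (7)
            double vC = bC / (bC + bP + 1e-12), vP = 1.0 - vC;   // Eq. (9)
            out.d[y * out.w + x] = beta  * (wC * ct.at(x, y) + wP * pet.at(x, y))
                                 + gamma * (vC * ct.at(x, y) + vP * pet.at(x, y)); // Eq. (10)
        }
    return out;
}

int main() {
    Band ct {4, 4, std::vector<double>(16, 1.0)};
    Band pet{4, 4, std::vector<double>(16, 2.0)};
    Band f = fuse(ct, pet, 1.8, 0.5, 0.5);
    std::printf("fused coefficient at (0,0): %f\n", f.at(0, 0));
    return 0;
}
```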

B. Parallel image fusion

In image fusion, the computation becomes more expensive as the image data and the level of wavelet decomposition increase. Because parallel computing can further increase the fusion efficiency, a parallel multimodal medical image fusion method based on the wavelet transform is proposed in this paper, as shown in Fig. 3; it is implemented in a similar manner to the parallel matching of feature points described in section C.3 of chapter II.

Figure 3. Flow chart of parallel sequence image fusion

IV. EXPERIMENTAL RESULTS

In this paper, a cluster computing system is developed with the following configuration. Operating system: Windows Server 2003; network card: 100 Mb/s Realtek RTL8139 PCI Fast Ethernet NIC; parallel software package: MPICH; node configuration: Intel Pentium 4, 3.0 GHz, 1.00 GB RAM; display card: NVIDIA Quadro FX 1400; compiler: Visual C++ 6.0, with C++ as the programming language.

A. Effect evaluation for image registration and fusion

A.1 Effect evaluation for registration method

The original images, CT (512x512) and PET (128x128), come from thorax image sequences and are shown in Fig. 4. Fig. 5 shows the result of feature-point processing based on parallel computing and ROI extraction by the C-V level set method. The edge curve in Fig. 5(a) is the result of the C-V level set edge extraction, and the regular points in the middle are the selected feature points at intervals of 8 pixels; the points in Fig. 5(b) are the corresponding feature points of Fig. 5(a). Fig. 6 is the registration result obtained by the proposed registration algorithm based on hierarchical B-spline adaptive FFD. The quantitative evaluation results for each registration method are shown in Table I. The maximum mutual information entropy (MI), Root Mean Square error (RMS error) and Correlation Coefficient (CC) are used to evaluate each registration method; analysing these indexes for each method shows that the proposed registration algorithm is better than the other, traditional methods.

(a) CT image (reference image)  (b) PET image (floating image)
Figure 4. Original images

(a) CT image  (b) PET image
Figure 5. Feature points matched based on parallel computing and ROI extraction by the C-V level set method

Figure 6. Registration image by the proposed method

TABLE I. COMPARISONS AMONG DIFFERENT REGISTRATION METHODS

A.2 Effect evaluation for fusion method

In this experiment, the CT and PET slices come from a male lung-cancer patient.

The CT and PET images are fused by applying the proposed parallel multimodal medical image fusion method based on the wavelet transform with the fusion rule combining the local standard deviation and local energy. The results are shown in Fig. 7. The fused image clearly depicts the correspondence between the region of nodular shadows in the CT slice and the corresponding cancerous region in the PET slice. The experimental results demonstrate that the edge and texture features of the multimodal images are effectively preserved by the proposed fusion method. Therefore, the relative position between the tumor and its adjacent tissues can be obtained easily by analysing medical data sets that fuse the information of functional images and anatomical images.

The fusion results are evaluated with objective evaluation measures: mean, standard deviation (SD), information entropy (IE) and joint entropy (JE). The experiments show that the evaluation indexes of the proposed method are superior to those of the other fusion methods; the indexes of each method are shown in Table II.

(a) Original CT image  (b) Original PET image  (c) Fusion image
Figure 7. Fusion result by the proposed fusion method

TABLE II. QUANTITATIVE EVALUATION OF FUSION IMAGE

Fusion method               SD        IE         JE (CT)    JE (PET)
Weighted mean               328.545   4.691806   6.143996   5.620486
Maximum                     385.560   4.830680   8.370359   5.902740
Local energy                162.497   5.052476   8.376337   6.134338
Local standard deviation    415.144   5.810895   7.730113   6.800253
Proposed method             383.129   5.987878   8.423761   6.997364

B. Efficiency comparison

B.1 Efficiency comparison for registration method

In this paper, multimodal medical image registration is implemented based on adaptive FFD and SP, and the feature points are matched based on parallel computing. The average number of cycles for the proposed method is about 3.15, and the registration position is found in only about 82.56 search steps, whereas the traditional method needs about 50 to 60 cycles and more than 300 search steps, far more than the proposed algorithm. This demonstrates that the proposed registration method is more efficient and that its search speed is much faster than that of the traditional algorithm.

The runtime of feature-point matching based on parallel computing in the cluster computing system is shown in Fig. 8. The runtime of the whole registration process based on serial computing is 335 seconds. The runtime of the feature-point matching stage based on serial computing is 170 seconds, of which finding the corresponding feature points of the CT image in the PET image takes 156.5 seconds, accounting for 92% of the whole feature-point matching stage. With parallel computing on 5 processors, the feature-point matching takes 32 seconds and the whole registration process costs 43 seconds, so the runtime of registration decreases markedly. Moreover, the parallel system efficiency stays at about 0.97, so the algorithm has good scalability and the runtime will decrease further if more processors are used.

Figure 8. Efficiency of the feature-point matching based on parallel computing

Compared with the traditional methods, the efficiency of the proposed registration method is therefore greatly improved, because, on the one hand, the proposed multimodal medical image registration is based on adaptive FFD and the stochastic gradient descent method SP, and, on the other hand, the feature points are matched efficiently based on parallel computing.

B.2 Efficiency comparison for fusion method

The comparison of run times is shown in Table III, where S(p) is the speedup factor and E is the parallel efficiency. It is obvious that the runtime of parallel sequence-image fusion decreases markedly. Moreover, the parallel system efficiency stays at about 0.97, so the algorithm has good scalability and the runtime will decrease further if more processors are used.

TABLE III. TIME PERFORMANCE OF PARALLEL SEQUENCE IMAGE FUSION (267 IMAGES)

                Sequential algorithm   Parallel, 1 processor   Parallel, 2 processors   Parallel, 4 processors
Runtime         251.468 s              259.757 s               130.103 s                65.090 s
S(p)            --                     0.97                    1.93                     3.86
E               --                     0.97                    0.97                     0.97
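
As a quick cross-check of Table III, this minimal sketch recomputes the speedup S(p) = T_sequential / T_p and the efficiency E = S(p) / p from the reported runtimes.

```cpp
#include <cstdio>

int main() {
    const double serial = 251.468;                         // sequential runtime in seconds
    const double parallel[] = {259.757, 130.103, 65.090};  // runtimes with 1, 2, 4 processors
    const int procs[] = {1, 2, 4};
    for (int i = 0; i < 3; ++i) {
        double s = serial / parallel[i];                   // speedup S(p)
        std::printf("p = %d  S(p) = %.2f  E = %.2f\n", procs[i], s, s / procs[i]);
    }
    return 0;
}
```

The printed values (0.97, 1.93, 3.86 and efficiency 0.97 throughout) match the table.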

C. Experiment results in 3D Conformal Radiotherapy Treatment Planning

The experimental results in the 3D CRTP system are shown in Fig. 9, which presents the interface of the 3D CRTP system. Fig. 9 consists of four windows: window No. 1 is the 3D volume rendering result; windows No. 3 and No. 4 are the CT image and the PET image, respectively, and these slices correspond to the position indicated by the white line in window No. 1; window No. 2 is the registration and fusion result of CT and PET. The technologist can give a diagnosis by using the system.

Figure 9. Experimental result of cases

V. CONCLUSIONS

A rapid image registration and fusion method is proposed in this paper. The proposed automatic registration method is based on the hierarchical adaptive FFD and SP algorithms. After the registration of the multimodal images, their sequence images are fused by applying an image fusion method based on the wavelet transform with the fusion rule combining the local standard deviation and local energy. The proposed multimodal medical image registration and fusion method improves both effectiveness and efficiency and meets the requirements of 3D conformal radiotherapy treatment planning.

REFERENCES

[1] Li Bin, Tian Lianfang, Kang Yuanyuan, Yu Xia. Parallel multimodal medical image fusion in 3D conformal radiotherapy treatment planning. Proceedings of the 2nd International Conference on Bioinformatics and Biomedical Engineering (iCBBE 2008), Shanghai, China, 2008: 2600-2603.
[2] Mattes D., Haynor D.R., et al. PET-CT image registration in the chest using free-form deformations. IEEE Transactions on Medical Imaging, 2003, 22(1): 120-128.
[3] Klein S., Staring M., Pluim J.P.W. Evaluation of optimization methods for nonrigid medical image registration using mutual information and B-splines. IEEE Transactions on Image Processing, 2007, 16(12): 2879-2890.
[4] Studholme C., Hill D.L.G., Hawkes D.J. An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognition, 1999, 32: 71-86.
[5] Chan T.F., Vese L.A. Active contours without edges. IEEE Transactions on Image Processing, 2001, 10(2): 266-277.
[6] Yasuhiro K., Fumihiko I., Yasuharu M., et al. High-performance computing service over the Internet for intraoperative image processing. IEEE Transactions on Information Technology in Biomedicine, 2004, 8(1): 36-46.
