Data Mining: Concepts and Techniques (3rd ed.), Chapter 3: Data Preprocessing
Jiawei Han, Micheline Kamber, and Jian Pei
University of Illinois at Urbana-Champaign & Simon Fraser University
©2011 Han, Kamber & Pei. All rights reserved.

Chapter 3: Data Preprocessing (outline)
- Data Preprocessing: An Overview
- Data Quality
- Major Tasks in Data Preprocessing
- Data Cleaning
- Data Integration
- Data Reduction
- Data Transformation and Data Discretization
- Summary

Data Quality: Why Preprocess the Data?
- Measures for data quality: a multidimensional view
  - Accuracy: correct or wrong, accurate or not
  - Completeness: not recorded, unavailable, ...
  - Consistency: some modified but some not, dangling, ...
  - Timeliness: timely update?
  - Believability: how trustable is it that the data are correct?
  - Interpretability: how easily can the data be understood?

Major Tasks in Data Preprocessing
- Data cleaning
  - fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
- Data integration
  - integration of multiple databases, data cubes, or files
- Data reduction
  - dimensionality reduction
  - numerosity reduction
  - data compression
- Data transformation and data discretization
  - normalization
  - concept hierarchy generation

Data Cleaning
- Data in the real world is dirty: lots of potentially incorrect data, e.g., instrument faults, human or computer error, transmission errors
  - incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
    - e.g., Occupation = "" (missing data)
  - noisy: containing noise, errors, or outliers
    - e.g., Salary = "−10" (an error)
  - inconsistent: containing discrepancies in codes or names, e.g.,
    - Age = "42", Birthday = "03/07/2010"
    - was rating "1, 2, 3", now rating "A, B, C"
    - discrepancy between duplicate records
  - intentional (e.g., disguised missing data)
    - Jan. 1 as everyone's birthday?

Incomplete (Missing) Data
- Data is not always available
  - e.g., many tuples have no recorded value for several attributes, such as customer income in sales data
- Missing data may be due to
  - equipment malfunction
  - inconsistency with other recorded data, and therefore deletion
  - data not entered due to misunderstanding
  - certain data not being considered important at the time of entry
  - history or changes of the data not being registered
- Missing data may need to be inferred

How to Handle Missing Data?
- Ignore the tuple: usually done when the class label is missing (when doing classification); not effective when the percentage of missing values per attribute varies considerably
- Fill in the missing value manually: tedious + infeasible?
- Fill it in automatically with (a sketch follows below)
  - a global constant: e.g., "unknown" — effectively a new class?!
  - the attribute mean
  - the attribute mean for all samples belonging to the same class: smarter
  - the most probable value: inference-based, such as a Bayesian formula or a decision tree
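The automatic fill-in strategies listed above can be illustrated with a minimal, hedged sketch. This is not from the slides; it assumes a pandas DataFrame with made-up column names (occupation, income, class) purely for illustration.

```python
# A minimal sketch of the fill-in strategies above; column names are hypothetical.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "occupation": ["engineer", None, "teacher", None],
    "income":     [52000.0, np.nan, 48000.0, 61000.0],
    "class":      ["high", "low", "low", "high"],   # class label used for the smarter fill
})

# 1) Global constant: replace missing nominal values with "unknown".
df["occupation"] = df["occupation"].fillna("unknown")

# 2) Attribute mean: replace missing numeric values with the overall mean.
df["income_mean_fill"] = df["income"].fillna(df["income"].mean())

# 3) Class-conditional mean: replace with the mean of samples in the same class.
df["income_class_fill"] = df["income"].fillna(
    df.groupby("class")["income"].transform("mean")
)

print(df)
```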

Noisy Data
- Noise: random error or variance in a measured variable
- Incorrect attribute values may be due to
  - faulty data collection instruments
  - data entry problems
  - data transmission problems
  - technology limitations
  - inconsistency in naming conventions
- Other data problems that require data cleaning
  - duplicate records
  - incomplete data
  - inconsistent data

How to Handle Noisy Data?
- Binning
  - first sort the data and partition it into (equal-frequency) bins
  - then smooth by bin means, bin medians, bin boundaries, etc.
- Regression
  - smooth by fitting the data to regression functions
- Clustering
  - detect and remove outliers
- Combined computer and human inspection
  - detect suspicious values and have a human check them (e.g., deal with possible outliers)

Data Cleaning as a Process
- Data discrepancy detection
  - use metadata (e.g., domain, range, dependency, distribution)
  - check field overloading
  - check the uniqueness rule, the consecutive rule, and the null rule
  - use commercial tools
    - data scrubbing: use simple domain knowledge (e.g., postal codes, spell-check) to detect errors and make corrections
    - data auditing: analyze data to discover rules and relationships and to detect violators (e.g., correlation and clustering to find outliers)
- Data migration and integration
  - data migration tools: allow transformations to be specified
  - ETL (Extraction/Transformation/Loading) tools: allow users to specify transformations through a graphical user interface
- Integration of the two processes
  - iterative and interactive (e.g., Potter's Wheel)

Data Integration
- Data integration: combines data from multiple sources into a coherent store
- Schema integration: e.g., A.cust-id ≡ B.cust-#
  - integrate metadata from different sources
- Entity identification problem
  - identify real-world entities from multiple data sources, e.g., Bill Clinton = William Clinton
- Detecting and resolving data value conflicts
  - for the same real-world entity, attribute values from different sources differ
  - possible reasons: different representations, different scales, e.g., metric vs. British units

Handling Redundancy in Data Integration
- Redundant data occur often when integrating multiple databases
  - object identification: the same attribute or object may have different names in different databases
  - derivable data: one attribute may be a "derived" attribute in another table, e.g., annual revenue
- Redundant attributes may be detected by correlation analysis and covariance analysis
- Careful integration of data from multiple sources may help reduce or avoid redundancies and inconsistencies and improve mining speed and quality

Correlation Analysis (Nominal Data)
- χ² (chi-square) test: χ² = Σ (observed − expected)² / expected
- The larger the χ² value, the more likely the variables are related
- The cells that contribute the most to the χ² value are those whose actual count is very different from the expected count
- Correlation does not imply causality
  - the number of hospitals and the number of car thefts in a city are correlated
  - both are causally linked to a third variable: population

Chi-Square Calculation: An Example
- χ² calculation (numbers in parentheses are expected counts, calculated from the data distribution in the two categories):

                             Play chess   Not play chess   Sum (row)
  Like science fiction        250 (90)       200 (360)        450
  Not like science fiction     50 (210)     1000 (840)       1050
  Sum (col.)                  300           1200             1500

- It shows that like_science_fiction and play_chess are correlated in the group (a sketch that recomputes the statistic follows below)
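A short sketch, not part of the slides, that recomputes the χ² statistic for the 2×2 table above directly from the observed counts; expected counts are derived as (row total × column total) / grand total.

```python
# Chi-square for the like_science_fiction / play_chess contingency table above.
rows = {"like_sf": [250, 200], "not_like_sf": [50, 1000]}   # [play_chess, not_play_chess]

grand = sum(sum(counts) for counts in rows.values())                        # 1500
col_totals = [sum(counts[j] for counts in rows.values()) for j in range(2)] # [300, 1200]

chi2 = 0.0
for counts in rows.values():
    row_total = sum(counts)
    for j, observed in enumerate(counts):
        expected = row_total * col_totals[j] / grand        # e.g. 450 * 300 / 1500 = 90
        chi2 += (observed - expected) ** 2 / expected

print(round(chi2, 1))   # 507.9 -- far above the 1-degree-of-freedom critical value,
                        # so the two attributes are strongly correlated in this group
```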

Correlation Analysis (Numeric Data)
- Correlation coefficient (also called Pearson's product-moment coefficient):

  r(A,B) = Σ (a_i − Ā)(b_i − B̄) / (n · σ_A · σ_B) = (Σ a_i·b_i − n·Ā·B̄) / (n · σ_A · σ_B)

  where n is the number of tuples, Ā and B̄ are the respective means of A and B, σ_A and σ_B are the respective standard deviations of A and B, and Σ a_i·b_i is the sum of the AB cross-product.
- If r(A,B) > 0, A and B are positively correlated (A's values increase as B's do); the higher the value, the stronger the correlation
- r(A,B) = 0: independent; r(A,B) < 0: negatively correlated

Covariance (Numeric Data)
- Cov(A,B) = E[(A − Ā)(B − B̄)] = Σ (a_i − Ā)(b_i − B̄) / n, and r(A,B) = Cov(A,B) / (σ_A · σ_B)
- Positive covariance: if Cov(A,B) > 0, then A and B both tend to be larger than their expected values
- Negative covariance: if Cov(A,B) < 0, then when A is larger than its expected value, B tends to be smaller than its expected value
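A brief sketch, not from the slides, that computes the covariance and the Pearson correlation coefficient defined above with numpy on made-up data, cross-checked against numpy's built-in functions.

```python
# Covariance and Pearson correlation for two made-up numeric attributes A and B.
import numpy as np

a = np.array([6.0, 5.0, 4.0, 3.0, 2.0])
b = np.array([20.0, 10.0, 14.0, 5.0, 5.0])

mean_a, mean_b = a.mean(), b.mean()
std_a, std_b = a.std(), b.std()                  # population standard deviations, as in the formula

cov_ab = ((a - mean_a) * (b - mean_b)).mean()    # Cov(A,B) = E[(A - Ā)(B - B̄)]
r_ab = cov_ab / (std_a * std_b)                  # r(A,B) = Cov(A,B) / (σ_A σ_B)
print(cov_ab, r_ab)

# Cross-check against numpy's built-ins (bias=True gives the population covariance).
print(np.cov(a, b, bias=True)[0, 1], np.corrcoef(a, b)[0, 1])
```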

Data Reduction Strategies
- Data reduction: obtain a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results
- Why data reduction? A database or data warehouse may store terabytes of data; complex data analysis may take a very long time to run on the complete data set
- Data reduction strategies
  - dimensionality reduction, e.g., remove unimportant attributes
    - wavelet transforms
    - Principal Components Analysis (PCA)
    - feature subset selection, feature creation
  - numerosity reduction (some simply call it data reduction)
    - regression and log-linear models
    - histograms, clustering, sampling
    - data cube aggregation
  - data compression

Data Reduction 1: Dimensionality Reduction
- Curse of dimensionality
  - when dimensionality increases, data becomes increasingly sparse
  - density and distance between points, which are critical to clustering and outlier analysis, become less meaningful
  - the possible combinations of subspaces grow exponentially
- Dimensionality reduction
  - avoids the curse of dimensionality
  - helps eliminate irrelevant features and reduce noise
  - reduces the time and space required in data mining
  - allows easier visualization
- Dimensionality reduction techniques
  - wavelet transforms
  - Principal Component Analysis
  - supervised and nonlinear techniques (e.g., feature selection)

Mapping Data to a New Space
- Fourier transform
- Wavelet transform
(Figure: two sine waves and two sine waves plus noise, shown in the time domain and the frequency domain.)

What Is Wavelet Transform?
- Decomposes a signal into different frequency subbands
  - applicable to n-dimensional signals
- Data are transformed so that relative distances between objects are preserved at different levels of resolution
  - allows natural clusters to become more distinguishable
- Used for image compression

Wavelet Transformation
- Discrete wavelet transform (DWT) for linear signal processing, multi-resolution analysis
- Compressed approximation: store only a small fraction of the strongest wavelet coefficients
- Similar to the discrete Fourier transform (DFT), but better lossy compression, localized in space
- Method
  - the length L must be an integer power of 2 (pad with 0s when necessary)
  - each transform has two functions: smoothing and difference
  - applied to pairs of data, producing two sets of data of length L/2
  - the two functions are applied recursively until the desired length is reached
(Figure: Haar-2 and Daubechies-4 wavelet basis functions.)

Wavelet Decomposition
- Wavelets: a mathematical tool for space-efficient hierarchical decomposition of functions
- S = [2, 2, 0, 2, 3, 5, 4, 4] can be transformed to S' = [2¾, −1¼, ½, 0, 0, −1, −1, 0]
- Compression: many small detail coefficients can be replaced by 0s, and only the significant coefficients are retained (a sketch follows below)

Haar Wavelet Coefficients
(Figure: the Haar wavelet coefficients of S with their coefficient "supports", the original frequency distribution, and the hierarchical decomposition structure, a.k.a. the "error tree".)
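A small sketch, not from the slides, of the unnormalized averaging/differencing Haar decomposition described above; it reproduces the transform of S quoted on the Wavelet Decomposition slide.

```python
# Haar wavelet decomposition by recursive pairwise smoothing (average) and
# differencing (half-difference); the input length must be a power of 2.
def haar_decompose(signal):
    data = list(signal)
    output = list(signal)
    length = len(data)
    while length > 1:
        half = length // 2
        averages = [(data[2 * i] + data[2 * i + 1]) / 2 for i in range(half)]   # smoothing
        details  = [(data[2 * i] - data[2 * i + 1]) / 2 for i in range(half)]   # difference
        output[:half] = averages           # coarser approximation goes to the front
        output[half:length] = details      # detail coefficients stay at this level
        data = averages
        length = half
    return output

print(haar_decompose([2, 2, 0, 2, 3, 5, 4, 4]))
# [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]  i.e.  [2¾, -1¼, ½, 0, 0, -1, -1, 0]
```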

Why Wavelet Transform?
- Uses hat-shaped filters
  - emphasizes regions where points cluster
  - suppresses weaker information at their boundaries
- Effective removal of outliers
  - insensitive to noise, insensitive to input order
- Multi-resolution
  - detects arbitrarily shaped clusters at different scales
- Efficient
  - complexity O(N)
- Only applicable to low-dimensional data

Principal Component Analysis (PCA)
- Find a projection that captures the largest amount of variation in the data
- The original data are projected onto a much smaller space, resulting in dimensionality reduction; we find the eigenvectors of the covariance matrix, and these eigenvectors define the new space
(Figure: data points in the (x1, x2) plane with the principal direction e.)

Principal Component Analysis (Steps)
- Given N data vectors from n dimensions, find k ≤ n orthogonal vectors (principal components) that can best be used to represent the data (a sketch follows below)
  - normalize the input data: each attribute falls within the same range
  - compute k orthonormal (unit) vectors, i.e., principal components
  - each input data vector is a linear combination of the k principal component vectors
  - the principal components are sorted in order of decreasing "significance" or strength
  - since the components are sorted, the size of the data can be reduced by eliminating the weak components, i.e., those with low variance (using the strongest principal components, it is possible to reconstruct a good approximation of the original data)
- Works for numeric data only
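The steps above can be sketched with numpy's eigendecomposition of the covariance matrix; this is an illustrative sketch under those assumptions, not the slides' own code, and the 2-D data set below is made up.

```python
# PCA via eigendecomposition of the covariance matrix of centered data.
import numpy as np

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
              [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1]])

# 1) Normalize: center each attribute (optionally also scale to a common range).
X_centered = X - X.mean(axis=0)

# 2) The eigenvectors of the covariance matrix are the principal components.
cov = np.cov(X_centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)           # eigh: the covariance matrix is symmetric

# 3) Sort components by decreasing eigenvalue ("significance") and keep the top k.
order = np.argsort(eigvals)[::-1]
k = 1
components = eigvecs[:, order[:k]]

# 4) Project: each input vector becomes a linear combination of the k components;
#    reconstruct to see how good the k-component approximation is.
X_reduced = X_centered @ components                       # shape (8, k)
X_approx = X_reduced @ components.T + X.mean(axis=0)      # approximate reconstruction
print(X_reduced.shape, np.abs(X - X_approx).max())
```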

Attribute Subset Selection
- Another way to reduce the dimensionality of data
- Redundant attributes
  - duplicate much or all of the information contained in one or more other attributes
  - e.g., the purchase price of a product and the amount of sales tax paid
- Irrelevant attributes
  - contain no information that is useful for the data mining task at hand
  - e.g., a student's ID is often irrelevant to the task of predicting the student's GPA

Heuristic Search in Attribute Selection
- There are 2^d possible attribute combinations of d attributes
- Typical heuristic attribute selection methods (a forward-selection sketch follows below):
  - best single attribute under the attribute independence assumption: choose by significance tests
  - best step-wise feature selection:
    - the best single attribute is picked first
    - then the next best attribute conditioned on the first, ...
  - step-wise attribute elimination:
    - repeatedly eliminate the worst attribute
  - best combined attribute selection and elimination
  - optimal branch and bound:
    - use attribute elimination and backtracking
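A minimal sketch, not from the slides, of best step-wise (forward) feature selection: greedily add the attribute that most improves a scoring function. The scoring function here is a stand-in; in practice it could be a significance test or cross-validated accuracy, and the attribute names and utilities are hypothetical.

```python
# Greedy forward selection: pick the next best attribute conditioned on those chosen.
def forward_select(attributes, score, max_features=None):
    selected = []
    remaining = list(attributes)
    best_score = float("-inf")
    while remaining and (max_features is None or len(selected) < max_features):
        candidate_scores = {a: score(selected + [a]) for a in remaining}
        best_attr = max(candidate_scores, key=candidate_scores.get)
        if candidate_scores[best_attr] <= best_score:
            break                                  # no remaining attribute improves the score
        best_score = candidate_scores[best_attr]
        selected.append(best_attr)
        remaining.remove(best_attr)
    return selected

# Toy scorer: pretend "income" and "age" are informative, "student_id" is irrelevant.
toy_utility = {"income": 0.6, "age": 0.3, "student_id": 0.0}
print(forward_select(["income", "age", "student_id"],
                     score=lambda attrs: sum(toy_utility[a] for a in attrs)))
# ['income', 'age']  ("student_id" adds nothing, so selection stops)
```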

Attribute Creation (Feature Generation)
- Create new attributes (features) that capture the important information in a data set more effectively than the original ones
- Three general methodologies
  - attribute extraction: domain-specific
  - mapping data to a new space (see data reduction): e.g., Fourier transformation, wavelet transformation, manifold approaches (not covered)
  - attribute construction
    - combining features (see discriminative frequent patterns in Chapter 7)
    - data discretization

Data Reduction 2: Numerosity Reduction
- Reduce data volume by choosing alternative, smaller forms of data representation
- Parametric methods (e.g., regression)
  - assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers)
  - ex.: log-linear models obtain the value at a point in m-D space as a product over appropriate marginal subspaces
- Non-parametric methods
  - do not assume models
  - major families: histograms, clustering, sampling, ...

Parametric Data Reduction: Regression and Log-Linear Models
- Linear regression
  - data modeled to fit a straight line
  - often uses the least-squares method to fit the line
- Multiple regression
  - allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
- Log-linear model
  - approximates discrete multidimensional probability distributions

Regression Analysis
- Regression analysis: a collective name for techniques for the modeling and analysis of numerical data consisting of values of a dependent variable (also called the response variable or measurement) and of one or more independent variables (a.k.a. explanatory variables or predictors)
- The parameters are estimated so as to give a best fit of the data
- Most commonly the best fit is evaluated using the least-squares method, but other criteria have also been used
- Used for prediction (including forecasting of time-series data), inference, hypothesis testing, and modeling of causal relationships
(Figure: a scatter plot of (x, y) data points, such as (X1, Y1), with the fitted line y = x + 1.)

Regression Analysis and Log-Linear Models
- Linear regression: Y = wX + b (a least-squares sketch follows below)
  - the two regression coefficients, w and b, specify the line and are estimated from the data at hand
  - using the least-squares criterion on the known values of Y1, Y2, ..., X1, X2, ...
- Multiple regression: Y = b0 + b1·X1 + b2·X2
  - many nonlinear functions can be transformed into the above
- Log-linear models
  - approximate discrete multidimensional probability distributions
  - estimate the probability of each point (tuple) in a multi-dimensional space for a set of discretized attributes, based on a smaller subset of dimensional combinations
  - useful for dimensionality reduction and data smoothing
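A short sketch, not from the slides, of the closed-form least-squares estimates for w and b, plus a multiple-regression fit via numpy's lstsq; the data points are made up.

```python
# Least-squares fit of Y = wX + b, then a multiple regression on a design matrix.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 3.2, 3.9, 5.1, 6.0])

x_mean, y_mean = x.mean(), y.mean()
w = ((x - x_mean) * (y - y_mean)).sum() / ((x - x_mean) ** 2).sum()
b = y_mean - w * x_mean
print(w, b)        # slope and intercept minimizing the sum of squared errors

# Multiple regression Y = b0 + b1*X1 + b2*X2 is solved the same way with a design
# matrix [1, X1, X2]; here X2 = x**2 illustrates a transformed nonlinear term.
X = np.column_stack([np.ones_like(x), x, x ** 2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)      # [b0, b1, b2]
```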

Histogram Analysis
- Divide data into buckets and store the average (sum) for each bucket
- Partitioning rules
  - equal-width: equal bucket range
  - equal-frequency (or equal-depth)

Clustering
- Partition the data set into clusters based on similarity, and store only the cluster representations (e.g., centroid and diameter)
- Can be very effective if the data is clustered, but not if the data is "smeared"
- Can use hierarchical clustering and be stored in multi-dimensional index tree structures
- There are many choices of clustering definitions and clustering algorithms
- Cluster analysis will be studied in depth in Chapter 10

Sampling
- Sampling: obtaining a small sample s to represent the whole data set N
- Allows a mining algorithm to run with complexity that is potentially sub-linear in the size of the data
- Key principle: choose a representative subset of the data
  - simple random sampling may have very poor performance in the presence of skew
  - develop adaptive sampling methods, e.g., stratified sampling
- Note: sampling may not reduce database I/Os (a page at a time)

Types of Sampling
- Simple random sampling
  - there is an equal probability of selecting any particular item
- Sampling without replacement
  - once an object is selected, it is removed from the population
- Sampling with replacement
  - a selected object is not removed from the population
- Stratified sampling
  - partition the data set and draw samples from each partition (proportionally, i.e., approximately the same percentage of the data)
  - used in conjunction with skewed data
(a sketch of these variants follows below)

Sampling: With or without Replacement
(Figure: raw data sampled by SRSWOR, a simple random sample without replacement, and by SRSWR, a simple random sample with replacement.)

Sampling: Cluster or Stratified Sampling
(Figure: raw data versus a cluster/stratified sample.)
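A minimal sketch, not from the slides, of SRSWOR, SRSWR, and proportional stratified sampling using Python's random module; the records and the stratum key are made up for illustration.

```python
# Simple random sampling with/without replacement, plus proportional stratified sampling.
import random

population = [{"id": i, "region": "north" if i % 4 else "south"} for i in range(1, 101)]

# SRSWOR: each item can be picked at most once.
srswor = random.sample(population, k=10)

# SRSWR: an item may be picked repeatedly.
srswr = random.choices(population, k=10)

def stratified_sample(records, key, fraction):
    """Partition by a stratum key, then sample each partition proportionally."""
    strata = {}
    for record in records:
        strata.setdefault(key(record), []).append(record)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))   # keep skewed strata represented
        sample.extend(random.sample(members, k))
    return sample

print(len(srswor), len(srswr),
      len(stratified_sample(population, lambda r: r["region"], fraction=0.1)))
```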

Data Cube Aggregation
- The lowest level of a data cube (the base cuboid)
  - the aggregated data for an individual entity of interest
  - e.g., a customer in a phone-calling data warehouse
- Multiple levels of aggregation in data cubes
  - further reduce the size of the data to deal with
- Reference appropriate levels
  - use the smallest representation that is sufficient to solve the task
- Queries regarding aggregated information should be answered using the data cube, when possible

Data Reduction 3: Data Compression
- String compression
  - there are extensive theories and well-tuned algorithms
  - typically lossless, but only limited manipulation is possible without expansion
- Audio/video compression
  - typically lossy compression, with progressive refinement
  - sometimes small fragments of the signal can be reconstructed without reconstructing the whole
- Time sequences are not audio
  - typically short, and vary slowly with time
- Dimensionality and numerosity reduction may also be considered forms of data compression
(Figure: lossless compression maps the original data to compressed data and back exactly; lossy compression recovers only an approximation of the original data.)

Data Transformation
- A function that maps the entire set of values of a given attribute to a new set of replacement values, such that each old value can be identified with one of the new values
- Methods
  - smoothing: remove noise from the data
  - attribute/feature construction: new attributes constructed from the given ones
  - aggregation: summarization, data cube construction
  - normalization: scale to fall within a smaller, specified range
    - min-max normalization
    - z-score normalization
    - normalization by decimal scaling
  - discretization: concept hierarchy climbing

Normalization
- Min-max normalization to [new_min_A, new_max_A]:
  v' = (v − min_A) / (max_A − min_A) × (new_max_A − new_min_A) + new_min_A
  - ex.: let income range from $12,000 to $98,000, normalized to [0.0, 1.0]; then $73,000 is mapped to (73,000 − 12,000) / (98,000 − 12,000) ≈ 0.709
- Z-score normalization (μ: mean, σ: standard deviation):
  v' = (v − μ_A) / σ_A
  - ex.: let μ = 54,000 and σ = 16,000; then $73,000 is mapped to (73,000 − 54,000) / 16,000 ≈ 1.19
- Normalization by decimal scaling:
  v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
(a sketch of all three follows below)
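A brief sketch, not from the slides, of the three normalization formulas above, applied to the income figures given in the examples (range 12,000 to 98,000, mean 54,000, standard deviation 16,000).

```python
# Min-max, z-score, and decimal-scaling normalization.
def min_max(v, min_a, max_a, new_min=0.0, new_max=1.0):
    return (v - min_a) / (max_a - min_a) * (new_max - new_min) + new_min

def z_score(v, mean, std):
    return (v - mean) / std

def decimal_scaling(v, j):
    # j is the smallest integer such that max(|v'|) < 1, e.g. j = 5 for values up to 98,000
    return v / (10 ** j)

print(min_max(73_000, 12_000, 98_000))    # ≈ 0.709
print(z_score(73_000, 54_000, 16_000))    # ≈ 1.19
print(decimal_scaling(98_000, 5))         # 0.98
```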

Discretization
- Three types of attributes
  - nominal: values from an unordered set, e.g., color, profession
  - ordinal: values from an ordered set, e.g., military or academic rank
  - numeric: real numbers, e.g., integer or real values
- Discretization: divide the range of a continuous attribute into intervals
  - interval labels can then be used to replace actual data values
  - reduces data size
  - supervised vs. unsupervised
  - split (top-down) vs. merge (bottom-up)
  - discretization can be performed recursively on an attribute
  - prepares data for further analysis, e.g., classification

Data Discretization Methods
- Typical methods (all can be applied recursively)
  - binning: top-down split, unsupervised
  - histogram analysis: top-down split, unsupervised
  - clustering analysis: unsupervised, top-down split or bottom-up merge
  - decision-tree analysis: supervised, top-down split
  - correlation (e.g., χ²) analysis: unsupervised, bottom-up merge

Simple Discretization: Binning
- Equal-width (distance) partitioning
  - divides the range into N intervals of equal size (uniform grid)
  - if A and B are the lowest and highest values of the attribute, the width of the intervals will be W = (B − A) / N
  - the most straightforward approach, but outliers may dominate the presentation
  - skewed data is not handled well
- Equal-depth (frequency) partitioning
  - divides the range into N intervals, each containing approximately the same number of samples
  - good data scaling
  - managing categorical attributes can be tricky

Binning Methods for Data Smoothing
- Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
- Partition into equal-frequency (equi-depth) bins (a sketch follows below):
  - Bin 1: 4, 8, 9, 15
  - Bin 2: 21, 21, 24, 25
  - Bin 3: 26, 28, 29, 34
- Smoothing by bin means:
  - Bin 1: 9, 9, 9, 9
  - Bin 2: 23, 23, 23, 23
  - Bin 3: 29, 29, 29, 29
- Smoothing by bin boundaries:
  - Bin 1: 4, 4, 4, 15
  - Bin 2: 21, 21, 25, 25
  - Bin 3: 26, 26, 26, 34
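A small sketch, not from the slides, that reproduces this example: equal-frequency binning of the sorted prices followed by smoothing by bin means and by bin boundaries.

```python
# Equal-frequency binning and two smoothing strategies for the price example above.
prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]   # already sorted

def equal_frequency_bins(values, n_bins):
    size = len(values) // n_bins
    return [values[i * size:(i + 1) * size] for i in range(n_bins)]

def smooth_by_means(bins):
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    smoothed = []
    for b in bins:
        lo, hi = b[0], b[-1]
        smoothed.append([lo if v - lo <= hi - v else hi for v in b])   # snap to nearer boundary
    return smoothed

bins = equal_frequency_bins(prices, 3)
print(bins)                        # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(smooth_by_means(bins))       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins))  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```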

Discretization Without Using Class Labels (Binning vs. Clustering)
(Figure: the same data discretized by equal interval width (binning), equal frequency (binning), and K-means clustering; K-means clustering leads to better results.)

Discretization by Classification & Correlation Analysis
- Classification (e.g., decision-tree analysis)
  - supervised: given class labels, e.g., cancerous vs. benign
  - uses entropy to determine the split point (discretization point)
  - top-down, recursive split
  - details to be covered in Chapter 7
- Correlation analysis (e.g., ChiMerge: χ²-based discretization)
  - supervised: uses class information
  - bottom-up merge: find the best neighboring intervals (those having similar class distributions, i.e., low χ² values) to merge
  - merging is performed recursively until a predefined stopping condition is met

Concept Hierarchy Generation
- A concept hierarchy organizes concepts (i.e., attribute values) hierarchically and is usually associated with each dimension in a data warehouse
- Concept hierarchies facilitate drilling and rolling in data warehouses, to view data at multiple granularities
- Concept hierarchy formation: recursively reduce the data by collecting and replacing low-level concepts (such as numeric values for age) with higher-level concepts (such as youth, adult, or senior)
- Concept hierarchies can be explicitly specified by domain experts and/or data warehouse designers
- Concept hierarchies can be automatically formed for both numeric and nominal data; for numeric data, use the discretization methods shown earlier

Concept Hierarchy Generation for Nominal Data
- Specification of a partial/total ordering of attributes explicitly at the schema level by users or experts
  - street < city < state < country
- Specification of a hierarchy for a set of values by explicit data grouping
  - {Urbana, Champaign, Chicago} ⊂ Illinois
- Specification of only a partial set of attributes
  - e.g., only street < city, not the others
- Automatic generation of hierarchies (or attribute levels) by analyzing the number of distinct values
  - e.g., for a set of attributes: {street, city, state, country}

Automatic Concept Hierarchy Generation
- Some hierarchies can be automatically generated based on an analysis of the number of distinct values per attribute in the data set
  - the attribute with the most distinct values is placed at the lowest level of the hierarchy
  - exceptions: e.g., weekday, month, quarter, year
- Example: country (15 distinct values) → province_or_state (365) → city (3,567) → street (674,339)
(a sketch follows below)
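A tiny sketch, not from the slides, that orders attributes into a candidate hierarchy by their distinct-value counts, fewest distinct values at the top; the records are made up for illustration.

```python
# Order attributes by number of distinct values: most distinct -> lowest level.
records = [
    {"country": "USA", "state": "IL", "city": "Urbana",    "street": "Green St"},
    {"country": "USA", "state": "IL", "city": "Urbana",    "street": "Race St"},
    {"country": "USA", "state": "IL", "city": "Champaign", "street": "Neil St"},
    {"country": "USA", "state": "CA", "city": "Davis",     "street": "B St"},
]

attributes = ["street", "city", "state", "country"]
distinct_counts = {a: len({r[a] for r in records}) for a in attributes}

hierarchy = sorted(attributes, key=lambda a: distinct_counts[a])
print(" -> ".join(hierarchy))     # country -> state -> city -> street (top to bottom)
```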

Summary
- Data quality: accuracy, completeness, consistency, timeliness, believability, interpretability
- Data cleaning: e.g., missing/noisy values, outliers
- Data integration from multiple sources
  - entity identification problem
  - remove redundancies
  - detect inconsistencies
- Data reduction
  - dimensionality reduction
  - numerosity reduction
  - data compression
- Data transformation and data discretization
  - normalization
  - concept hierarchy generation

References
- D. P. Ballou and G. K. Tayi. Enhancing data quality in data warehouse environments. Comm. of ACM, 42:73-78, 1999.
- A. Bruce, D. Donoho, and H.-Y. Gao. Wavelet analysis. IEEE Spectrum, Oct. 1996.
- T. Dasu and T. Johnson. Exploratory Data Mining and Data Cleaning. John Wiley, 2003.
- J. Devore and R. Peck. Statistics: The Exploration and Analysis of Data. Duxbury Press, 1997.
- H. Galhardas, D. Florescu, D. Shasha, E. Simon, and C.-A. Saita. Declarative data cleaning: Language, model, and algorithms. VLDB'01.
- M. Hua and J. Pei. Cleaning disguised missing data: A heuristic approach. KDD'07.
- H. V. Jagadish et al. Special Issue on Data Reduction Techniques. Bulletin of the Technical Committee on Data Engineering, 20(4), Dec. 1997.
- H. Liu and H. Motoda (eds.). Feature Extraction, Construction, and Selection: A Data Mining Perspective. Kluwer Academic, 1998.
- J. E. Olson. Data Quality: The Accuracy Dimension. Morgan Kaufmann, 2003.
- D. Pyle. Data Preparation for Data Mining. Morgan Kaufmann, 1999.
- V. Raman and J. Hellerstein. Potter's Wheel: An Interactive Framework for Data Cleaning and Transformation. VLDB'01.
- T. Redman. Data Quality: The Field Guide. Digital Press (Elsevier), 2001.
- R. Wang, V. Storey, and C. Firth. A framework for analysis of data quality research. IEEE Trans. Knowledge and Data Engineering, 7:623-640, 1995.
