The Definition of Big Data, from Wikipedia

In information technology, big data[1][2] is a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The challenges include capture, curation, storage,[3] search, sharing, analysis,[4] and visualization. The trend to larger data sets is due to the additional information derivable from analysis of a single large set of related data, as compared to separate smaller sets with the same total amount of data, allowing correlations to be found to spot business trends, determine quality of research, prevent diseases, link legal citations, combat crime, and determine real-time roadway traffic conditions.[5][6][7]

As of 2012, limits on the size of data sets that are feasible to process in a reasonable amount of time were on the order of exabytes of data.[8][9] Scientists regularly encounter limitations due to large data sets in many areas, including meteorology, genomics,[10] connectomics, complex physics simulations,[11] and biological and environmental research.[12] The limitations also affect Internet search, finance and business informatics. Data sets grow in size in part because they are increasingly being gathered by ubiquitous information-sensing mobile devices, aerial sensory technologies (remote sensing), software logs, cameras, microphones, radio-frequency identification (RFID) readers, and wireless sensor networks.[13][14] The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s;[15] as of 2012, 2.5 quintillion (2.5×10^18) bytes of data were created every day.[16]
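(As a back-of-the-envelope reading of those figures, not a claim from the cited sources: doubling every 40 months corresponds to an annual growth factor of 2^(12/40) ≈ 1.23, i.e. roughly 23% more per-capita storage capacity each year, or about a tenfold increase every 11 years; and 2.5×10^18 bytes per day is 2.5 exabytes daily, on the order of 0.9 zettabytes per year.)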

Big data is difficult to work with using relational databases and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds, or even thousands of servers.[17] What is considered "big data" varies depending on the capabilities of the organization managing the set, and on the capabilities of the applications that are traditionally used to process and analyze the data set in its domain. For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration.[18]

Definition

Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process the data within a tolerable elapsed time. Big data sizes are a constantly moving target, as of 2012 ranging from a few dozen terabytes to many petabytes of data in a single data set. With this difficulty, a new platform of "big data" tools has arisen to handle sensemaking over large quantities of data, as in the Apache Hadoop Big Data Platform.
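To make the shift from single-machine tools to massively parallel processing concrete, below is a minimal sketch of a Hadoop MapReduce job: the canonical word-count example, written against the org.apache.hadoop.mapreduce API. It is illustrative only, not drawn from this article's sources, and the input and output paths passed in args are placeholders.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: runs in parallel across input splits, emitting (word, 1)
  // for every token it sees.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: receives all counts for a given word and sums them.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // placeholder input path
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // placeholder output path
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The point of the model is that TokenizerMapper and IntSumReducer contain no cluster logic at all: the same code runs unchanged on one node or a thousand, while the framework splits the input files, schedules map tasks near the data, and shuffles the intermediate (word, count) pairs to the reducers.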

MIKE2.0, an open approach to Information Management, defines big data in terms of useful permutations, complexity, and difficulty to delete individual records.

In a 2001 research report[19] and related lectures, META Group (now Gartner) analyst Doug Laney defined data growth challenges and opportunities as being three-dimensional, i.e. increasing volume (amount of data), velocity (speed of data in and out), and variety (range of data types and sources). Gartner, and now much of the industry, continue to use this "3Vs" model for describing big data.[20] In 2012, Gartner updated its definition as follows: "Big Data are high-volume, high-velocity, and/or high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization."[21]
