A Practical Teradata Tutorial

Uploader: cl****1 · Document ID: 567525143 · Uploaded: 2024-07-21 · Format: PPT · Pages: 81 · Size: 895 KB

Still waters run deep — 流静水深, 人静心深. Where there is life, there is hope — 有生命必有希望.

Problem summary
Common problem categories:
1. Wrong table attribute (SET vs. MULTISET). Symptom: slow INSERTs.
2. Poorly chosen primary index (PI). Symptom 1: heavy data skew; space fills up. Symptom 2: joins force data redistribution.
3. Poorly chosen partitioned primary index (PPI). Symptom: full-table scans.
4. Overly complex join conditions. Symptom: the optimizer cannot work out a good execution plan.
5. Missing statistics. Symptom: the optimizer cannot find the best execution plan.
"My SQL runs slowly!"

Outline
- Teradata architecture
- Common problems, and how to fix them
- Teradata tools and practical tips
- How JOINs are implemented
- JOIN optimization

Teradata architecture
Teradata and MPP systems; RDBMS architecture.

Logical example of NPPI versus PPI
4 AMPs with the Orders table defined with a PPI on O_Date, versus the same 4 AMPs with the table defined with an NPPI.
[Figure: per-AMP row listings with columns RH (row hash), O_# (order number), O_Date. With the PPI on O_Date, each AMP stores its rows grouped by date partition (06/01, 06/02, 06/03, 06/04); with the NPPI, the same rows on each AMP are ordered only by row hash, so the orders for any one date are scattered through the AMP's data blocks.]
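Teradata spreads rows across the AMPs by hashing the primary index value. The following Python sketch is an illustration only — it uses md5 as a stand-in for Teradata's actual row-hash function, and the 130-AMP / 13-city figures echo the skew example discussed later in this tutorial — but it shows why PI cardinality drives both even distribution and skew:

```python
# Toy model of PI-hash row distribution across AMPs (NOT Teradata's real hash).
import hashlib

def amp_for(pi_value, n_amps):
    """Map a PI value to an AMP via a stable hash (stand-in for the row hash)."""
    h = int(hashlib.md5(str(pi_value).encode()).hexdigest(), 16)
    return h % n_amps

def distribution(pi_values, n_amps):
    """Count how many rows land on each AMP."""
    counts = [0] * n_amps
    for v in pi_values:
        counts[amp_for(v, n_amps)] += 1
    return counts

# A high-cardinality PI (e.g. a subscriber id) spreads rows over all AMPs...
subs = distribution(range(100_000), 130)
# ...while a 13-value PI (e.g. a city code) can occupy at most 13 AMPs.
city = distribution(["city%d" % (i % 13) for i in range(100_000)], 130)

print("busy AMPs with subscriber-id PI:", sum(1 for c in subs if c > 0))
print("busy AMPs with city-code PI:   ", sum(1 for c in city if c > 0))
```

With a low-cardinality PI most AMPs sit idle while a handful hold all the data — the skew and "space full" symptoms from the problem summary above.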

Table attribute: SET & MULTISET
- A SET table does not allow duplicate rows.
- A MULTISET table allows duplicate rows.
- The default is SET; a table produced by CREATE TABLE … AS … also defaults to SET.
- An INSERT into a SET table must check whether each incoming row already exists — quite expensive.
- If you really need uniqueness, enforce it with a UPI or USI instead.

Example: pmart.RPT_NM_GRP_PRE_WARN_MON, the Inner Mongolia Mobile corporate-customer early-warning monthly report.

  CREATE SET TABLE pmart.RPT_NM_GRP_PRE_WARN_MON
  ( CAL_Month        INTEGER       TITLE '统计月份'
  , ORG_NUM          CHAR(12)      TITLE '集团编号'
  , City_ID          CHAR(3)       TITLE '地市标识'
  , ORG_SUBS_GRP_NUM CHAR(10)      TITLE '集团用户群编号'
  , ORG_Title        VARCHAR(200)  TITLE '集团名称'
  , ORG_Level        CHAR(2)       TITLE '集团级别'
  , STAT_Item_Code   CHAR(2)       TITLE '统计项'
  , STAT_Value       DECIMAL(18,2) TITLE '统计值'
  ) PRIMARY INDEX (ORG_NUM);

With 1,286,449 existing rows, inserting 152,853 rows took 15 seconds.

Table attribute: SET & MULTISET (cont.)
The same table declared as CREATE MULTISET TABLE pmart.RPT_NM_GRP_PRE_WARN_MON (same columns, same PRIMARY INDEX (ORG_NUM)): the same insert took 1 second.

Recommendation: use MULTISET for everything in Teradata.

Another example:

  CREATE MULTISET TABLE tttemp.VT_SUBS_VIOC_QUAN AS
  ( SELECT *
    FROM tttemp.MID_SUBS_VIOC_QUAN
    WHERE CAL_MONTH = 200802 AND … )
  WITH DATA PRIMARY INDEX (subs_id);
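A toy Python model of why the SET-table insert above is slow: every incoming row must be compared against the rows already stored under the same row hash, while a MULTISET insert simply appends. The bucket layout and the row values here are invented for illustration; real Teradata performs this duplicate check per row-hash synonym inside each AMP.

```python
# Toy model: SET-table duplicate check vs. MULTISET append on INSERT.
from collections import defaultdict

def insert_set(table, row):
    """SET semantics: compare against every stored row with the same hash; skip duplicates."""
    bucket = table[hash(row)]
    comparisons = len(bucket)      # the work a SET insert pays per row
    if row not in bucket:
        bucket.append(row)
    return comparisons

def insert_multiset(table, row):
    """MULTISET semantics: append with no duplicate check at all."""
    table[hash(row)].append(row)
    return 0

set_tab, multi_tab = defaultdict(list), defaultdict(list)
rows = [("org%05d" % (i % 1000), i % 12) for i in range(5000)]  # many duplicates

set_cost = sum(insert_set(set_tab, r) for r in rows)
multi_cost = sum(insert_multiset(multi_tab, r) for r in rows)
print("SET comparisons:", set_cost, " MULTISET comparisons:", multi_cost)
```

The more rows (and the wider they are), the more the SET table pays — matching the 15-second vs. 1-second measurement on the report table.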

A temporary table created this way defaults to SET — you must specify MULTISET explicitly. The more columns and rows the table has, the bigger the difference.

Choosing the PI (Primary Index)
The PI determines how data is stored and accessed. Selection criteria:
- a column with as many distinct values as possible (more unique values);
- a frequently used column, for both value access and join access;
- rarely updated;
- not too many columns in the PI;
- it is best to specify the PI explicitly.

Example: subscriber voice-usage intermediate table.

  CREATE MULTISET TABLE tttemp.MID_SUBS_VIOC_QUAN
  ( CAL_Month  INTEGER  TITLE '统计月份'
  , City_ID    CHAR(4)  TITLE '地市标识'
  , Channel_ID CHAR(8)  TITLE '渠道标识'
  , Subs_id    CHAR(12) TITLE '用户标识'
  , …
  ) PRIMARY INDEX (subs_id);

Example: subscriber voice-usage temporary table.

  CREATE MULTISET TABLE tttemp.VT_SUBS_VIOC_QUAN AS
  ( SELECT * FROM tttemp.MID_SUBS_VIOC_QUAN
    WHERE CAL_MONTH = 200802 AND … )
  WITH DATA PRIMARY INDEX (subs_id);

Subs_ID is used constantly and has many unique values. If no PI is specified, the system defaults to the first column, Cal_Month.

Choosing the PI (cont.)
Example: Monternet active-customer analysis.

  CREATE MULTISET TABLE PMART.FCT_DATA_MONNET_ACTIVE_MON
  ( CAL_Month                 INTEGER TITLE '统计月份'
  , City_ID                   CHAR(4) TITLE '地市标识'
  , Channel_ID                CHAR(8) TITLE '渠道标识'
  , Mont_SVC_Type_Cod         CHAR(3) TITLE '梦网业务类型编码'
  , Mont_SVC_CAT_MicroCls_Cod CHAR(3) TITLE '梦网业务分类小类编码'
  , Mont_SVC_CHRG_Type_Cod    CHAR(2) TITLE '梦网业务计费类型编码'
  , THR_Brand_Cod             CHAR(1) TITLE '三大品牌编码'
  , Mont_Consume_Level_Cod    CHAR(2) TITLE '梦网消费层次编码'
  , Consume_Level_Cod         CHAR(2) TITLE '消费层次编码'
  , …
  ) PRIMARY INDEX ( CAL_Month, City_ID, Channel_ID, Mont_SVC_Type_Cod
                  , Mont_SVC_CAT_MicroCls_Cod, Mont_SVC_CHRG_Type_Cod
                  , THR_Brand_Cod, Mont_Consume_Level_Cod, Consume_Level_Cod );

The PI was reduced from 9 columns to 2: (City_ID, Channel_ID). After the change, does the PI get used by the following SQL?

1. Value access:
  Select * From FCT_DATA_MONNET_ACTIVE_MON
  Where City_ID = '070010' and Channel_ID = '0100' and cal_month = 200707

2. Join access:
  Select * From FCT_DATA_MONNET_ACTIVE_MON A
  LEFT JOIN MID_CHANNEL_INFO_DAILY B
    ON A.Channel_ID = B.Channel_ID and A.City_ID = B.City_ID
  LEFT JOIN VW_CDE_REGION_TYPE C
    ON A.City_ID = C.City_ID

3. Value access plus join access:
  Select * From FCT_DATA_MONNET_ACTIVE_MON A, VT_INFO B
  WHERE A.Channel_ID = B.Channel_ID
    AND A.City_ID = B.City_ID
    AND A.CAL_MONTH = 200707
    AND A.Consume_Level_Cod = B.Consume_Level_Cod

Using PPI
A PPI (Partitioned Primary Index) stores rows with the same partition value clustered together — similar to SQL Server's clustered index (Cluster Index) or Oracle's cluster table. With a PPI you can insert and access the data of one partition very quickly.

  CREATE MULTISET TABLE qdata.TB_DQC_KPI_CHECK_RESULT
  ( TX_DATE  DATE FORMAT 'YYYYMMDD' TITLE '数据日期' NOT NULL
  , KPI_CODE INTEGER TITLE '指标代码' NOT NULL
  , …
  ) PRIMARY INDEX (KPI_CODE)
  PARTITION BY RANGE_N(
      TX_DATE BETWEEN CAST('20030101' AS DATE FORMAT 'YYYYMMDD')
                  AND CAST('20191231' AS DATE FORMAT 'YYYYMMDD')
      EACH INTERVAL '1' DAY,
      NO RANGE OR UNKNOWN);

These predicates use the partition:
  Select * From TB_DQC_KPI_CHECK_RESULT Where tx_date = '20070701';
  … Where tx_date between '20070701' and '20070731';
  … Where tx_date < '20070701';
but
  … Where tx_date like '200707%';
does not.

Using PPI (cont.)
Do not put expressions on the partition column, or the partition cannot be used correctly:

  T1.tx_date/100 = CAST('20070917' AS DATE FORMAT 'YYYYMMDD')/100
  Substring(T1.tx_date from 1 for 6) = '200709'

Rewrite them as direct comparisons on the column, e.g.:

  T1.tx_date >= CAST('20070901' AS DATE FORMAT 'YYYYMMDD')

Using PPI (cont.)
Script tb_030040270.pl:

  /* delete the current month */                  -- 2 hours
  del BASS1.tb_03004 where proc_dt = 200709;
  insert into BASS1.tb_03004 …                    -- 7 hours
  sel … from pview.vw_evt_cust_so cust
  where acpt_date = cast('200710'||'01' as date)

cast('200710'||'01' as date) is the wrong way to write the date — the PPI is not used. The correct form is:
  Cast('20071001' as date format 'YYYYMMDD')
Fixes: build a PPI on proc_dt; change the PPI column from Load_Date to acpt_date.
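Teradata partition elimination can only be observed with EXPLAIN on a Teradata system, but the underlying rule — wrapping the partition/index column in an expression defeats pruning, while a bare range predicate does not — can be sketched with SQLite's query planner. The table, index, and data below are invented for the illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (tx_date TEXT, kpi INT)")
con.execute("CREATE INDEX ix_date ON t(tx_date)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [("2007%02d%02d" % (i % 12 + 1, i % 28 + 1), i) for i in range(1000)])

def plan(sql):
    """Return SQLite's EXPLAIN QUERY PLAN detail text for a statement."""
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# Expression on the column: the planner falls back to a full scan.
p1 = plan("SELECT * FROM t WHERE substr(tx_date, 1, 6) = '200709'")
# Direct range predicate on the bare column: the index is usable.
p2 = plan("SELECT * FROM t WHERE tx_date BETWEEN '20070901' AND '20070930'")
print(p1)
print(p2)
```

The same reasoning applies to the PPI rules above: `like '200707%'` and `substring(...)` hide the column from the planner, a BETWEEN on the bare column does not.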

Creating volatile tables
- A volatile table lives only within its session.
- Remember to declare it MULTISET (and usually to give it a PI).
- A volatile table cannot have a PPI.

Example 1:
  create volatile multiset table vt_RETAIN_ANLY_MON as
  ( select col1, col2, … from … where … group by … )
  with data PRIMARY INDEX (PI_Cols) ON COMMIT PRESERVE ROWS;

Example 2:
  create volatile multiset table vt_RETAIN_ANLY_MON
  ( col1 char(2)
  , col2 varchar(12) NOT NULL )
  PRIMARY INDEX (PI_Cols) ON COMMIT PRESERVE ROWS;

Creating volatile tables (cont.)
Example 3:
  create volatile multiset table vt_RETAIN_ANLY_MON as
  ( select col1, cast('adc' as varchar(12)) col2 from … where … )
  with no data PRIMARY INDEX (col1) ON COMMIT PRESERVE ROWS;
Here col2 gets the UNICODE character set; joining it to an ordinary (LATIN) column forces a data redistribution. Not recommended.

Example 4:
  create volatile multiset table vt_net_gsm_nl as pdata.tb_net_gsm_nl
  with no data ON COMMIT PRESERVE ROWS;
This fails: pdata.tb_net_gsm_nl has a PPI, and a volatile table is not allowed to have one.

Materialized temporary tables
A materialized temporary table stores a query result in a physical table, for a later analysis or for someone else to use; it remains usable after the session disconnects.

Example 1:
  CREATE MULTISET TABLE tttemp.TMP_BOSS_VOIC as
  ( select * from pview.vw_net_gsm_nl )
  WITH no DATA PRIMARY INDEX (subs_id);
  INSERT INTO tttemp.TMP_BOSS_VOIC
  SELECT * FROM pview.vw_net_gsm_nl WHERE …;

Example 2:
  CREATE MULTISET TABLE tttemp.TMP_BOSS_VOIC as
  ( select * from pview.vw_net_gsm_nl WHERE … )
  WITH DATA PRIMARY INDEX (subs_id);

Example 3 (copying a table / backing up data):
  CREATE MULTISET TABLE tttemp.TMP_BOSS_VOIC AS pdata.tb_net_gsm_nl WITH DATA;

Data types
Watch conversions and joins between non-date and date columns (char & date): if the types match they can be compared directly, but in CASE WHEN or COALESCE always use an explicit conversion (CAST).

  CASE WHEN A = B THEN DATE1 ELSE '20061031' END
should be written as
  CASE WHEN A = B THEN DATE1 ELSE CAST('20061031' AS DATE) END

In numeric arithmetic, make sure precision is not lost along the way:
  CAST(100/3 AS DEC(5,2))
should be written as
  CAST(100/3.00 AS DEC(5,2))

Comparing strings with numbers
Comparison rules:
1) Two compared values (columns) must have the same type!
2) When a string is compared with a number, the string is converted to a number first, and then the comparison is made.
3) In this analysis system, the Cal_Month column is a frequent source of such mistakes.

Case 1:
  CREATE TABLE Emp1
  ( Emp_no   CHAR(6)
  , Emp_name CHAR(20) )
  PRIMARY INDEX (Emp_no);
  Statement 1: SELECT * FROM Emp1 WHERE Emp_no = 1234;    -- results in a full table scan
  Statement 2: SELECT * FROM Emp1 WHERE Emp_no = '1234';

Case 2:
  CREATE TABLE Emp2
  ( Emp_no   INTEGER
  , Emp_name CHAR(20) )
  PRIMARY INDEX (Emp_no);
  Statement 1: SELECT * FROM Emp2 WHERE Emp_no = '1234';  -- results in an unnecessary conversion
  Statement 2: SELECT * FROM Emp2 WHERE Emp_no = 1234;

Choosing output columns
Selecting fewer output columns consumes less SPOOL space and makes the SQL faster — especially when the system is busy and memory is short. Example: the GSM voice CDR table PDATA.TB_NET_GSM_NL has 73 columns, and the SQL below returns about 160 million rows.

  SELECT SUBS_ID, MSISDN, Begin_Date, Begin_Time, Call_DUR, CHRG_DUR
  FROM PDATA.TB_NET_GSM_NL
  WHERE PROC_DATE BETWEEN '20070701' AND '20070731'
  -- max row 59 bytes, average 30; estimated SPOOL: 42 GB

  SELECT *
  FROM PDATA.TB_NET_GSM_NL
  WHERE PROC_DATE BETWEEN '20070701' AND '20070731'
  -- max row 698 bytes, average 399; estimated SPOOL: 497 GB

The difference is more than 400 GB of SPOOL space, and the IO counts differ accordingly!

Restricting with WHERE
Filter the data sets with the WHERE conditions first, then join: this shrinks the data sets taking part in the join and improves efficiency. A good query engine can do this automatically, but some complex SQL is not optimized well. Note: the system's SQL optimization only avoids the worst plans and picks a relatively good one; it does not necessarily produce the best result.

Original:

  SELECT A.TX_DATE, A.KPI_CODE, coalesce(B.SRC_NAME, 'no name'), A.KPI_VALUE
  FROM qdata.tb_dqc_kpi_check_result A
  LEFT JOIN qdata.tb_dqc_kpi_def B
    ON A.KPI_CODE = B.KPI_CODE
  WHERE A.TX_DATE = '20070701' AND A.KPI_CODE = 65 AND B.N_TYPE = 'M'

Rewrite (filter first, then join):

  SELECT A.TX_DATE, A.KPI_CODE, B.SRC_NAME, A.KPI_VALUE
  FROM ( select * from qdata.tb_dqc_kpi_check_result
         where TX_DATE = '20070701' AND KPI_CODE = 65 ) A
  LEFT JOIN ( SELECT * FROM qdata.tb_dqc_kpi_def
              where KPI_CODE = 65 and N_TYPE = 'M' ) B
    ON A.KPI_CODE = B.KPI_CODE

Replace UNION with CASE WHEN
Assignment: KPI_NWR_SMS_BILL_QUAN — point-to-point SMS billed volume; script kpi_nwr_sms_bill_quan0600.pl.

  sel city_id, channel_id, cust_brand_id, sum(stat_values) as stat_values
  from (
    select t.city_id                                    -- voice-magazine billed volume
          ,coalesce(v.channel_id, b.channel_id, '-') as channel_id
          ,cust_brand_id
          ,sum(case when SMS_SVC_Type_Level_SECND = '017'
                     and Call_Type_Code in ('00','10','01','11')
                    then sms_quan else 0 END) as stat_values
    from PVIEW.vw_mid_sms_svc_quan_daily t
    left join VT_SUBS v on t.subs_id = v.subs_id
    left join PVIEW.vw_FCT_CDE_BUSN_CITY_TYPE b on t.city_id = b.City_ID
    where cal_date = '20070914'
    group by 1,2,3
    union all
    select t.city_id                                    -- Monternet SMS billed volume
          ,coalesce(v.channel_id, b.channel_id, '-') as channel_id
          ,cust_brand_id
          ,sum(sms_quan) as stat_values
    from PVIEW.vw_mid_sms_svc_quan_daily t
    left join VT_SUBS v on t.subs_id = v.subs_id
    left join PVIEW.vw_FCT_CDE_BUSN_CITY_TYPE b on t.city_id = b.City_ID
    where cal_date = '20070914'
      and SMS_SVC_Type_Level_SECND like '02%'
      and SMS_SVC_Type_Level_SECND not in ('021','022')
    group by 1,2,3
  ) tmp
  Group by 1,2,3

The two subqueries have exactly the same join part; apart from their filter conditions they are identical. The UNION ALL is redundant: it scans the data twice and repeats the same JOINs. It can be replaced with CASE WHEN.

Replace UNION with CASE WHEN (cont.) — the SQL rewrite:

  sel city_id, channel_id, cust_brand_id, sum(stat_values) as stat_values
  from (
    select t.city_id
          ,coalesce(v.channel_id, b.channel_id, '-') as channel_id
          ,cust_brand_id
          ,sum(CASE WHEN SMS_SVC_Type_Level_SECND = '017'
                     and Call_Type_Code in ('00','10','01','11')
                    THEN sms_quan                        -- voice-magazine billed volume
                    WHEN SMS_SVC_Type_Level_SECND like '02%'
                     and SMS_SVC_Type_Level_SECND not in ('021','022')
                    THEN sms_quan                        -- Monternet SMS billed volume
                    ELSE 0 END) as stat_values
    from PVIEW.vw_mid_sms_svc_quan_daily t
    left join VT_SUBS v on t.subs_id = v.subs_id
    left join PVIEW.vw_FCT_CDE_BUSN_CITY_TYPE b on t.city_id = b.City_ID
    where cal_date = '20070914'
    …
  ) tmp
  Group by 1,2,3

Replace UNION with OR
The same script, written as two UNION ALL branches that sum sms_quan directly and differ only in their WHERE conditions, shows the same redundancy: identical joins, duplicated scans. This kind of problem appears in scripts all the time. The two filter conditions can be folded into a single OR predicate — the SQL rewrite:

  select t.city_id
        ,coalesce(v.channel_id, b.channel_id, '-') as channel_id
        ,cust_brand_id
        ,sum(sms_quan) as stat_values
  from PVIEW.vw_mid_sms_svc_quan_daily t
  left join VT_SUBS v on t.subs_id = v.subs_id
  left join PVIEW.vw_FCT_CDE_BUSN_CITY_TYPE b on t.city_id = b.City_ID
  where cal_date = '20070914'
    and ( ( SMS_SVC_Type_Level_SECND = '017'             -- voice-magazine billed volume
            and Call_Type_Code in ('00','10','01','11') )
       OR ( SMS_SVC_Type_Level_SECND like '02%'          -- Monternet SMS billed volume
            and SMS_SVC_Type_Level_SECND not in ('021','022') ) )
  Group by 1,2,3
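The UNION ALL → CASE WHEN rewrite can be sanity-checked on a toy data set with SQLite. The `usage` table and its values are invented; only the shape of the query mirrors the script:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE usage (city TEXT, svc TEXT, qty INT)")
con.executemany("INSERT INTO usage VALUES (?,?,?)",
    [("HHT", "017", 3), ("BT", "021", 5), ("HHT", "025", 2), ("BT", "017", 4)])

# Two passes over the table, as with UNION ALL:
union_all = con.execute("""
    SELECT city, SUM(q) FROM (
        SELECT city, qty AS q FROM usage WHERE svc = '017'
        UNION ALL
        SELECT city, qty FROM usage WHERE svc LIKE '02%' AND svc NOT IN ('021','022')
    ) GROUP BY city ORDER BY city""").fetchall()

# One pass, folding both filters into a CASE expression:
case_when = con.execute("""
    SELECT city, SUM(CASE WHEN svc = '017' THEN qty
                          WHEN svc LIKE '02%' AND svc NOT IN ('021','022') THEN qty
                          ELSE 0 END)
    FROM usage GROUP BY city ORDER BY city""").fetchall()

print(union_all)
print(case_when)
```

One subtlety worth checking when applying the rewrite to real scripts: a group that matches neither branch is dropped by the UNION ALL version but appears with a 0 total in the CASE WHEN version.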

Removing redundant DISTINCT and GROUP BY

  sel t.operator, t.acpt_channel_id, t.acpt_city_id, t.subs_id, t.acpt_date as evt_date
  From (
    sel operator, ACPT_Channel_ID, acpt_city_id, subs_id, acpt_date
    from pview.vw_evt_cust_so cust
    where acpt_date = 20071007
      and so_meth_code in (0,1,2)
      and PROC_STS_Code = -1
    group by 1,2,3,4,5            -- de-dup
    union all
    sel operator_num as operator, ACPT_Channel_ID, acpt_city_id, subs.subs_id, charge_date as acpt_date
    from pview.vw_fin_busi_rec bus
    join crmmart.subs_day_info_daily subs on subs.msisdn = bus.msisdn
    where charge_date = 20071007
    group by 1,2,3,4,5            -- de-dup
  ) t
  group by 1,2,3,4,5;             -- de-dup

Since the outer query already de-duplicates with GROUP BY, the GROUP BYs inside the subqueries are redundant. Worse, gluing the two grouped subqueries together with UNION ALL can reintroduce duplicates, so the inner GROUP BYs lose their point anyway.
Fix: simply drop the two inner GROUP BYs inside t. Similar DISTINCT problems can be solved the same way.

GROUP BY vs. DISTINCT
- DISTINCT removes duplicates; GROUP BY is an aggregation operation.
- In some cases the two produce the same result, but their execution plans — and their efficiency — differ.
- Recommendation: use GROUP BY.

  select subs_id, acct_id
  from PVIEW.VW_FIN_ACCT_SUBS_HIS
  where efct_date < '20070701'
  group by 1,2

  select DISTINCT subs_id, acct_id
  from PVIEW.VW_FIN_ACCT_SUBS_HIS
  where efct_date < '20070701'

UNION vs. UNION ALL
- UNION and UNION ALL both merge the results of several SELECTs.
- UNION automatically removes duplicate rows from the combined set, which costs more resources; UNION ALL keeps duplicates. In general, prefer UNION ALL.
- The first SELECT determines the output column names, titles, formats, and so on.
- All the SELECTs must: 1) have the same number of expressions; 2) have compatible domains for corresponding expressions.

  select * from (select 'a') T1(col1)
  union
  select * from (select 'bc') T2(col2)
  -- output column is named col1; duplicates removed

  select * from (select 'bc') T3(col3)
  union all
  select * from (select 'a') T1(col1)
  union all
  select * from (select 'bc') T2(col2)
  -- output column is named col3; duplicates kept

GROUP BY first, then join
Script rpt_mart_new_comm_mon0400.pl — 11 hours:

  Select case when b.CUST_Brand_ID is null then 5020
              when b.CUST_Brand_ID in (2000,5010) then 5020
              else b.CUST_Brand_ID end
        ,sum(COALESCE(b.Bas_CHRG_DUR_Unit,0)) as Thsy_Accum_New_SUBS_CHRG_DUR
        ,sum(case when b.call_type_code = 20 then b.Bas_CHRG_DUR_Unit else 0 END)
  from VTNEW_SUBS_THISYEAR t
  inner join VTDUR_MON b on t.Subs_ID = b.Subs_ID
  left join PVIEW.vw_MID_CDE_LONG_CALL_TYPE_LVL c
         on b.Long_Type_Level_SECND = c.Long_Type_Level_SECND
  left join PVIEW.vw_MID_CDE_ROAM_TYPE_LVL d
         on b.Roam_Type_Level_SECND = d.Roam_Type_Level_SECND
  group by 1;

Row counts: t: 5.8 million; b: 94 million; c: 8; d: 8.
The main problem: if the join order becomes (((b join c) join d) join t), that is ((94M join 8) join 8) join 5.8M — long data redistribution (lots of IO) and many joins.
Solution: execute (t join b) first, then GROUP BY, then join c and d.

GROUP BY first, then join (cont.)
Script rpt_mart_new_comm_mon0400.pl — 40 seconds:

  Select case when b.CUST_Brand_ID is null then 5020
              when b.CUST_Brand_ID in (2000,5010) then 5020
              else b.CUST_Brand_ID end
        ,sum(COALESCE(b.Bas_CHRG_DUR_Unit,0)) as Thsy_Accum_New_SUBS_CHRG_DUR
        ,sum(case when b.call_type_code = 20 then b.Bas_CHRG_DUR_Unit else 0 END)
  from ( select CUST_Brand_ID, call_type_code, Long_Type_Level_SECND, Roam_Type_Level_SECND
               ,sum(Bas_CHRG_DUR_Unit) Bas_CHRG_DUR_Unit
               ,count(*) quan
         from VTDUR_MON
         where subs_id in (select subs_id from VTNEW_SUBS_THISYEAR)
         group by 1,2,3,4 ) b
  left join PVIEW.vw_MID_CDE_LONG_CALL_TYPE_LVL c
         on b.Long_Type_Level_SECND = c.Long_Type_Level_SECND
  left join PVIEW.vw_MID_CDE_ROAM_TYPE_LVL d
         on b.Roam_Type_Level_SECND = d.Roam_Type_Level_SECND
  group by 1;

Row counts: t: 5.8 million; b: 94 million; c: 8; d: 8.
Process: execute (t join b) first, then GROUP BY, then join c and d.

Results:
1. VTDUR_MON join VTNEW_SUBS_THISYEAR: the PIs are the same, so it is a merge join — only 10 seconds.
2. After the GROUP BY, table b has only 332 rows.
3. b join c join d is just 332 × 8 × 8.
4. Final result: 5 rows; 40 seconds in total.

GROUP BY first, then join (cont.)
Aggregating before joining shrinks the data set taking part in the join and cuts the number of comparisons, improving efficiency. Take the SQL below, assuming the history table (History) has 100 million rows: the first version needs 100M × 90 comparisons; the second needs only 100M × 1.

  SELECT H.product_id, sum(H.account_num)
  FROM History H, Calendar DT
  WHERE H.sale_date = DT.calendar_date
    AND DT.quarter = 3
  GROUP BY 1;

  SELECT H.product_id, SUM(H.account_num)
  FROM History H,
       ( SELECT min(calendar_date) min_date, max(calendar_date) max_date
         FROM Calendar
         WHERE quarter = 3 ) DT
  WHERE H.sale_date BETWEEN DT.min_date and DT.max_date
  GROUP BY 1;

Extract common SQL into a temporary table
Script rpt_nmmart_comm_subs_mon0403.pl. The following SQL code fragment appears 5 times, each execution taking about 10 minutes:

  FROM PVIEW.VW_MID_VOIC_SVC_QUAN_MON a
      ,PVIEW.VW_MID_CDE_SUBS_BRAND_LVL b
      ,vt_subs c
  WHERE a.CUST_Brand_ID = b.SUBS_Brand_Level_Third
    AND a.CAL_Month = 200708
    AND a.SUBS_ID = c.SUBS_ID

The whole script also scans the following SQL 14 times, each taking about 3 minutes:

  PVIEW.VW_MID_VOIC_SVC_QUAN_MON where CAL_Month = 200708

Extract the common SQL into a temporary table to reduce the number of scans (IO). After this optimization the script went from 50 minutes down to 10 minutes.
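The aggregate-before-join equivalence can be checked on a miniature History/Calendar pair with SQLite. Schema and data are invented; the two queries mirror the versions above, and they agree here because the quarter's calendar dates are contiguous and fully enumerated in the calendar table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE history (product_id INT, sale_date TEXT, account_num INT)")
con.execute("CREATE TABLE calendar (calendar_date TEXT, quarter INT)")
# Calendar: two dates per month of 2007, tagged with their quarter.
con.executemany("INSERT INTO calendar VALUES (?,?)",
    [("2007%02d%02d" % (m, d), (m - 1) // 3 + 1) for m in range(1, 13) for d in (1, 15)])
# History: sales only on those calendar dates.
con.executemany("INSERT INTO history VALUES (?,?,?)",
    [(i % 3, "2007%02d%02d" % (i % 12 + 1, 15 if i % 2 else 1), i) for i in range(300)])

join_each_day = con.execute("""
    SELECT h.product_id, SUM(h.account_num) FROM history h, calendar dt
    WHERE h.sale_date = dt.calendar_date AND dt.quarter = 3
    GROUP BY 1 ORDER BY 1""").fetchall()

join_range = con.execute("""
    SELECT h.product_id, SUM(h.account_num)
    FROM history h,
         (SELECT MIN(calendar_date) lo, MAX(calendar_date) hi
          FROM calendar WHERE quarter = 3) dt
    WHERE h.sale_date BETWEEN dt.lo AND dt.hi
    GROUP BY 1 ORDER BY 1""").fetchall()

print(join_each_day)
print(join_range)
```

The range form joins each history row against a single one-row derived table instead of against every calendar day of the quarter — the 100M × 90 vs. 100M × 1 comparison count in the slide.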

Join conditions (1)

  Select A.a2, B.b2
  from A join B
    on substring(A.a1 from 1 for 7) = B.b1

should be written as

  Select A.a2, B.b2
  from ( select substring(a1 from 1 for 7) as a1_new, a2 from A ) A_new
  join B
    on a1_new = b1

Join conditions (2)

  Select A.a2, B.b2
  from A join B
    on TRIM(A.a1) = TRIM(B.b1)

should be written as

  Select A.a2, B.b2
  from A join B
    on A.a1 = B.b1

Badly written SQL can cause Cartesian products
Take the two SQLs below as examples: they will perform Cartesian product operations.

Example 1:
  Select employee.emp_no, employee.emp_name
  From employee A
Because the table is aliased as A, "employee" counts as a second instance: table Employee is Cartesian-joined with table A.

Example 2:
  SELECT A.EMP_Name, B.Dept_Name
  FROM employee A, Department B
  Where a.dept_no = b.dept_no;
Without the WHERE clause, table A and table B form a Cartesian product; with it, A and B are inner-joined.

Changing a table definition
Common table-definition changes: adding a column; widening a column. Teradata provides an ALTER TABLE statement for this, but it is not recommended. The recommended procedure:
1. Rename table db.tablex as db.tabley;
2. Get the original definition of db.tablex with a SHOW TABLE statement;
3. Create the new db.tablex;
4. Insert into db.tablex (…) select … From db.tabley;
5. Drop table db.tabley;

When inserting/updating/deleting rows, try not to abort
When the target table already holds data, inserts and updates — and some deletes — all generate TJ (transient journal) entries. If you abort such an operation, the system has to roll it back.

  Delete BASS1.tb_03004 where proc_dt = 200709;
  UPDATE Customer SET Credit_Limit = Credit_Limit * 1.20;
  DELETE FROM Trans WHERE Trans_Date < …;

Update/Delete operations
Updating or deleting rows of a large table costs a great deal of resources and time. Update/Delete needs the transient journal TJ (Transient Journal) to protect the data against unexpected interruption; if the operation is cancelled part-way through, the system must roll back, which costs even more resources and time! In the analysis system this must be strictly prevented. Instead, create an empty table first and load it by INSERT / SELECT — a very fast operation! Backing up first and then making the change is also safer!

  CREATE multiset TABLE Customer_N AS Customer with no data;
  INSERT INTO Customer_N
  SELECT …, Credit_Limit * 1.20, … FROM Customer;
  DROP TABLE Customer;
  RENAME TABLE Customer_N TO Customer;

  CREATE multiset TABLE Trans_N as Trans with no data;
  INSERT INTO Trans_N
  SELECT * FROM Trans WHERE Trans_Date < 981231;
  DROP TABLE Trans;
  RENAME TABLE Trans_N TO Trans;

Entity naming conventions for the analysis system
Entity names are at most 30 characters, normally all upper case, in the pattern prefix_body_suffix.
Prefixes:
- Tables: base tables start with TB_; intermediate tables with MID_; application-module tables with the module's abbreviation.
- Views: view names generally map one-to-one to table names and start with VW_ — for a TB_ table, replace TB_ with VW_; for other tables, just prepend VW_.
- Macros: start with M_ or Macro_.
Suffixes: history tables _HIS; monthly tables _MON; daily tables _DAILY.

Naming examples
TB_OFR_SUBS_HIS — subscriber history (Efct_date, End_date). Example:
  Select * From pview.vw_ofr_subs_his
  Where efct_date < cast('20080401' as date format 'yyyymmdd')
MID_COMP_OPPN_DISTRICT_MON — district management monthly intermediate table
RPT_CHK_FREE_RES_LOS_DAILY — daily report of subscribers with lost free-resource promotion data
VW_MID_SUBS_INFO_MON — view over the subscriber-profile monthly intermediate table

Teradata extended SQL (1) — SHOW
  SHOW TABLE pdata.tb_net_gsm_nl;
displays the definition of table pdata.tb_net_gsm_nl.

Teradata extended SQL (2) — HELP
  HELP DATABASE MIS;
lists the contents of database MIS.

  HELP TABLE pdata.tb_net_gsm_nl;
lists the columns of that table.

Teradata extended SQL (3) — MACRO
A macro (MACRO) is a Teradata extension to ANSI SQL: a set of SQL statements separated by semicolons ";". It has no if/then/else. It differs from a stored procedure (Store Procedure): a stored procedure is C-like and must be compiled before it can run, whereas a macro needs no compilation.

Teradata extended SQL (4) — volatile tables
A volatile table (Volatile Table) is a commonly used kind of Teradata temporary table, generally used to hold shared data and improve a script's efficiency. It lives only within its session.

  INSERT INTO Target_table
  Select * From (
    sel proc_dt tx_date, kpi_code, kpi_value from pview.vw_kpi_day where kpi_code = '02'
    union
    sel proc_dt tx_date, kpi_code, kpi_value from pview.vw_kpi_day where kpi_code = '03'
    union
    …the same SELECT for kpi_code '04', '14', '15', '16'…
    union
    sel proc_dt tx_date, kpi_code, kpi_value from pview.vw_kpi_day where kpi_code = '22'
  ) T1
  Where tx_date = '20070701'

This accesses the view pview.vw_kpi_day 7 times — very inefficient!

Teradata extended SQL (4) — volatile tables (cont.)

  Create volatile multiset table vt_kpi_day as
  ( Select proc_dt tx_date, kpi_code, kpi_value
    from pview.vw_kpi_day
    where tx_date = '20070701' )
  WITH DATA Primary Index (kpi_code)
  ON COMMIT PRESERVE ROWS;

  INSERT INTO Target_table
  Select * From (
    sel proc_dt tx_date, kpi_code, kpi_value from vt_kpi_day where kpi_code = '02'
    union
    …the same SELECT for kpi_code '03', '04', '14', '15', '16'…
    union
    sel proc_dt tx_date, kpi_code, kpi_value from vt_kpi_day where kpi_code = '22'
  ) T1

Now pview.vw_kpi_day is accessed only once — much more efficient!

SQL variables
  SELECT DATABASE;              -- current database: PVIEW
  SELECT USER;                  -- user of the current session: lusc
  SELECT DATE, CURRENT_DATE;    -- current date: 20070806, 20070806
  SELECT CAST(DATE AS DATE FORMAT 'YYYYMMDD')    -- with an explicit format
  Select TIME, CURRENT_TIMESTAMP(0);
      -- current time: 18:46:35, 2007-08-06 18:46:34+00:00
  SELECT CAST(CURRENT_TIMESTAMP(0) AS CHAR(19));
      -- conversion: 2007-08-06 18:47:59

Date (DATE) operations
Current day:
  select cast(current_date as DATE FORMAT 'YYYYMMDD')
Previous day, next day:
  select cast(current_date - 1 as DATE FORMAT 'YYYYMMDD')
  select cast(current_date + 1 as DATE FORMAT 'YYYYMMDD')
Same day of the previous (next) month:
  Select add_months(current_date, -1)
  Select add_months(current_date, 1)
If current_date is 20070331, what is the result?
First day of the current month:
  select substr(cast(current_date as date format 'YYYYMMDD'),1,6) || '01';
Last day of the current month:
  select cast(substr(cast(add_months(current_date,1) as date format 'YYYYMMDD'),1,6) || '01'
              as date format 'YYYYMMDD') - 1
Date subtraction:
  SELECT (DATE '2007-03-01' - DATE '2004-01-01') day(4);
  SELECT (DATE '2007-03-01' - DATE '2004-01-01') month(4);

The calendar table: Sys_calendar.Calendar
Used for complex date calculations.
Checking whether a date is valid, e.g. 20070229:
  SELECT * FROM Sys_calendar.Calendar
  WHERE calendar_date = cast('2007-02-29' as date format 'yyyy-mm-dd');
An empty result means the date is invalid.
Which week of the month (or year), and which quarter of the year, a date falls in:
  Select week_of_month, Week_of_year, quarter_of_year
  From Sys_calendar.Calendar
  WHERE calendar_date = cast('2006-10-15' as date format 'yyyy-mm-dd');
Number of days in the current month:
  Select max(day_of_month)
  From Sys_calendar.Calendar
  WHERE cast(cast(calendar_date as date format 'yyyymmdd') as char(8)) like '200708%'
or
  where month_of_calendar in
    ( select month_of_calendar From Sys_calendar.Calendar
      where calendar_date = cast('2007-08-01' as date format 'yyyy-mm-dd') )

LEFT Outer Join example

  SELECT E.Last_name, E.Department_Number, D.Department_Name
  FROM Employee E
  LEFT OUTER JOIN Department D
    ON E.Department_Number = D.Department_Number;

Compared with an inner join, this query's result set gains the following rows: employees whose department number is NULL, and employees whose department number does not appear in the department code table.

  Last_Name   Department_Number   Department_Name
  Crane       402                 software support
  James       111                 ?
  Runyon      501                 marketing and sales
  Stein       301                 research and develop
  Green       ?                   ?
  Trainer     100                 executive
  Kanieski    301                 research and develop

Redistribution before a join
The rows taking part in a JOIN must all be on the same AMP!

  SELECT …
  FROM Table1 T1
  INNER JOIN Table2 T2
    ON T1.A = T2.A;

If the join column is the PI of both tables — e.g. a CDR table and a subscriber table both with PI Subs_id — no data redistribution is needed: matching Subs_id values already hash to the same AMP.
[Figure: two AMPs, each holding the CDR rows and subscriber rows whose Subs_id hashes to it.]

Redistribution before a join (cont.)

  SELECT …
  FROM 用户资料表 T1        -- subscriber table
  INNER JOIN 客户资料表 T2  -- customer table
    ON T1.Cust_id = T2.Cust_id;

If the join column is the PI of only one table — here Cust_id is the PI of the customer table but not of the subscriber table — the table where it is the PI does not need to be redistributed; the other table does.
[Figure: two AMPs, each holding customer rows by Cust_id hash and subscriber rows by Subs_id hash.]

Redistribution before a join (cont.)
[Figure: the subscriber rows are rehashed by Cust_id into Spool space, so each subscriber row lands on the AMP that holds the matching customer row.]
The rows taking part in a JOIN must all be on the same AMP!

Problems with redistribution
A large table's code column joined to a small table's PI:

  Select city_name, count(distinct subs_id)
  From TB_Ofr_Subs A
  Join TB_CDE_City B
    On A.city_id = B.city_id

If the optimizer chooses the redistribution strategy, every subscriber row is rehashed by city across the AMPs. The system has 130 AMPs in total, but only 13 districts — so only 13 AMPs receive data and the rest cannot be used at all. Moreover Hohhot (呼市) has far more subscribers than the other districts, so its AMP receives a very large share of the data: even the 13 busy AMPs run unevenly.

Duplicating the small table to every AMP before the join
[Figure: the small city-code table (City_id, City_name: 100 呼市, 200 包头) is copied in full to every AMP's Spool space, next to that AMP's share of the subscriber table.]

Copying the small table into Spool
[Figure: a table with 1M rows on each of 8 AMPs (8M rows in all) is duplicated into Spool, so every AMP's Spool holds the full 8M rows.]

The data is now on one AMP — how is the join actually done?

Join strategy: Merge Join
- The rows to be joined must be on the same AMP.
- A merge join reads each table only once.
- For equality join conditions the optimizer very often chooses a merge join; it is usually more efficient than a product join.
Merge join processing:
- Identify the smaller table.
- If necessary, place the data needed from one or both tables in Spool space.
- Redistribute the rows to the right AMPs based on the hash of the join column.
- Sort the spooled rows by the hash of the join column.
- Compare only rows whose join-column hash values match — far fewer comparisons than a product join.
Applies when: both tables are fairly large, e.g. 1,000,000 × 300,000 rows.

Join strategy: Product Join
Rows must be on the same AMP to be joined.
- No sorting is done; if the table does not fit in memory, it has to be read repeatedly.
- Every row of Table1 is compared with every row of Table2; matching rows go to Spool space.
- It is called a product join because: total comparisons = rows(Table1) × rows(Table2).
- When one table cannot be held entirely in memory, the comparisons become very expensive, because data keeps being swapped between memory and disk.
- Without a WHERE condition, a product join usually produces a meaningless result.
Product join processing steps:
- Find the small table and duplicate it into Spool space on every AMP.
- On each AMP, join every row of the small spooled table with every row of the large table on that AMP.
Applies when: a large table's non-PI column against a small table, e.g. 300,000 × 50 rows.
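A back-of-the-envelope Python model of the comparison counts behind these strategies. It is illustrative only — real AMP-level cost also includes redistribution, sorting, and IO — but it shows why hash-matched comparison beats the full cross product:

```python
# Toy comparison-count model: product join vs. hash-matched merge join.
from collections import defaultdict

def product_join_comparisons(left, right):
    """Product join: every row of one table is compared with every row of the other."""
    return len(left) * len(right)

def merge_join_comparisons(left, right, key=lambda r: r[0]):
    """After redistribution and sorting by join-column hash, only rows in
    matching hash groups are compared."""
    groups = defaultdict(int)
    for r in right:
        groups[hash(key(r))] += 1
    return sum(groups.get(hash(key(l)), 0) for l in left)

left = [(i % 1000, "detail") for i in range(100_000)]   # large table
right = [(i, "dim") for i in range(1000)]               # small joinable table

print("product join:", product_join_comparisons(left, right))  # 100,000 * 1,000
print("merge join:  ", merge_join_comparisons(left, right))    # one match per key
```

With one matching dimension row per key, the merge-style count equals the large table's row count, versus the full 100,000 × 1,000 cross product.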

Join strategy: Hash Join
An optimizer technique that effectively keeps the small table in cache memory and joins it with the large table left unsorted.
Row hash join processing:
- Find the small table.
- Redistribute the small table, or duplicate it into every AMP's memory.
- Sort it in cache memory by the row hash of the join column, and keep the rows in memory.
- For each large-table row, binary-search the in-memory table by the join column's row hash.
- This join avoids sorting, redistributing, or copying the large table.
- EXPLAIN will show wording like "Single Partition Hash Join".
Applies when: a large table's non-PI column against a fairly small table, e.g. 7,000,000 × 10,000 rows.

Merge Join (e.g.)
1) The joined tables have the same PI:

  SELECT a.acyc_id, coalesce(c.city_code,'*'), coalesce(c.cmcc_brand_code,'*'), coalesce(c.user_id,'*')
  FROM dwpmdvw.vw_smsdetail_mon a
  left join dwpmdvw.vw_user_info_mon c on a.user_id = c.user_id
  WHERE a.acyc_id = 200611

2) Different PIs:

  SELECT coalesce(a.city_code,'*'), coalesce(a.cmcc_brand_code,'*'), coalesce(a.user_id,'*')
  FROM dwpmdvw.vw_user_info_mon a
  left join dwpdata.tf_f_customer b on a.cust_id = b.cust_id
  WHERE a.acyc_id = 200611 and b.eparchy_code = '0913'

Row redistribution (product join plan P1)
Duplicate the small table to every AMP.
Employee (Enum PK, UPI): 1 BROWN 200, 2 SMITH 310, 3 JONES 310, 4 CLAY 400, 5 PETERS 150, 6 FOSTER 200, 7 GRAY 310, 8 BAKER 310, 9 TYLER 450, 10 CARR 450.
Department (Dept PK, UPI): 150 PAYROLL, 200 FINANCE, 310 MFG., 400 EDUCATION, 450 ADMIN.
The Department table is distributed by the row hash of Dept (UPI) and the Employee table by the row hash of Enum (UPI); in Spool space, the Department rows are copied to every AMP.

  Example:
  SELECT *
  FROM Employee E
  INNER JOIN Department D
    ON E.Dept … D.Dept;

How to confirm that your query is efficient

EXPLAIN the SQL statement and watch for changes in data distribution (Redistribute, Duplicate):
- "… which is redistributed by hash code to all AMPs" — redistributing data (in SPOOL) in preparation for a join.
- "… which is duplicated on all AMPs" — duplicating data (in SPOOL) from the smaller table in preparation for a join.

Merge Join (Matching Primary Indexes)
Employee (26,000 rows); Emp_Phone (52,000 rows). Columns: Employee_Number, Last_Name, First_Name, Area_Code, Phone_Number, Extension.

QUERY:
  EXPLAIN
  SELECT Last_Name, First_Name, Area_Code, Phone_Number, Extension
  FROM Employee E
  INNER JOIN Emp_Phone P
    ON E.Employee_Number = P.Employee_Number
  ORDER BY 1, 2;

EXPLANATION (V2R6.1 EXPLAIN):
1) First, we lock a distinct TFACT."pseudo table" for read on a RowHash to prevent global deadlock for TFACT.P.
2) Next, we lock a distinct TFACT."pseudo table" for read on a RowHash to prevent global deadlock for TFACT.E.
3) We lock TFACT.P for read, and we lock TFACT.E for read.
4) We do an all-AMPs JOIN step from TFACT.E by way of a RowHash match scan with no residual conditions, which is joined to TFACT.P. TFACT.E and TFACT.P are joined using a merge join, with a join condition of ("TFACT.E.Employee_Number = TFACT.P.Employee_Number"). The input table TFACT.E will not be cached in memory, but it is eligible for synchronized scanning. The result goes into Spool 1 (group_amps), which is built locally on the AMPs. Then we do a SORT to order Spool 1 by the sort key in spool field1. The result spool file will not be cached in memory. The size of Spool 1 is estimated with low confidence to be 52,000 rows. The estimated time for this step is 0.72 seconds.
5) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
→ The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.72 seconds.

Hash Join

Tables: Employee (26,000 rows) and Department (1,403 rows); columns shown: Last_Name, First_Name, Dept_Name, Dept_Number.

QUERY
EXPLAIN SELECT Last_Name, First_Name, Dept_Name
FROM Employee E
INNER JOIN Department D ON E.Dept_Number = D.Dept_Number;

EXPLANATION (V2R6.1 EXPLAIN):
4) We do an all-AMPs RETRIEVE step from TFACT.E by way of an all-rows scan with a condition of (NOT (TFACT.E.Dept_Number IS NULL)) into Spool 2 (all_amps), which is redistributed by hash code to all AMPs. The input table will not be cached in memory, but it is eligible for synchronized scanning. The size of Spool 2 is estimated with high confidence to be 26,000 rows. The estimated time for this step is 0.52 seconds.
5) We do an all-AMPs JOIN step from TFACT.D by way of an all-rows scan with no residual conditions, which is joined to Spool 2 (Last Use) by way of an all-rows scan. TFACT.D and Spool 2 are joined using a single partition hash join, with a join condition of (Dept_Number = TFACT.D.Dept_Number). The result goes into Spool 1 (group_amps), which is built locally on the AMPs. The result spool file will not be cached in memory. The size of Spool 1 is estimated with low confidence to be 26,000 rows. The estimated time for this step is 0.33 seconds.
6) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.85 seconds.

Product Join

QUERY
EXPLAIN SELECT D.Dept_Name, E.Employee_Number, E.Last_Name, E.First_Name
FROM Employee E
INNER JOIN Department D
  ON E.Dept_Number = D.Dept_Number
  OR E.Employee_Number = D.Dept_Mgr_Number
ORDER BY 1, 2, 3, 4;

EXPLAIN output on following page. Tables: Employee (26,000 rows) and Department (1,403 rows); columns shown: Employee_Number, Last_Name, First_Name, Dept_Name, Dept_Number, Dept_Mgr_Number.

Product Join (cont.)

EXPLANATION (V2R6.1 EXPLAIN):
1) First, we lock a distinct TFACT."pseudo table" for read on a RowHash to prevent global deadlock for TFACT.E.
2) Next, we lock a distinct TFACT."pseudo table" for read on a RowHash to prevent global deadlock for

TFACT.D.
3) We lock TFACT.E for read, and we lock TFACT.D for read.
4) We do an all-AMPs RETRIEVE step from TFACT.D by way of an all-rows scan with no residual conditions into Spool 2 (all_amps), which is duplicated on all AMPs. The size of Spool 2 is estimated with high confidence to be 11,224 rows. The estimated time for this step is 0.03 seconds.
5) We do an all-AMPs JOIN step from Spool 2 (Last Use) by way of an all-rows scan, which is joined to TFACT.E by way of an all-rows scan with no residual conditions. Spool 2 and TFACT.E are joined using a product join, with a join condition of ((TFACT.E.Dept_Number = Dept_Number) OR (TFACT.E.Employee_Number = Dept_Mgr_Number)). The input table TFACT.E will not be cached in memory, but it is eligible for synchronized scanning. The result goes into Spool 1 (group_amps), which is built locally on the AMPs. The result spool file will not be cached

in memory. The size of Spool 1 is estimated with no confidence to be 973,873 rows. The estimated time for this step is 24.34 seconds.
6) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 24.38 seconds.

n-Table Joins (multi-table joins)

A multi-table join is broken down into a series of two-table joins. For the SQL below, the optimizer is free to choose the better execution plan, for example Plan 1 or Plan 2:

SELECT . . . FROM Table_A, Table_B, Table_C, Table_D WHERE . . . ;

For the SQL below, the optimizer can only choose Plan 2; any other join order could produce wrong results:

SELECT . . . FROM Table_A LEFT JOIN Table_B ON A.c1 = B.c2 INNER JOIN Table_C ON B.c2 = C.c3 LEFT JOIN Table_D ON D.c4 = A.c1 WHERE . . . ;

[Diagram: two alternative join trees, Join Plan 1 and Join Plan 2, each combining Table_A, Table_B, Table_C and Table_D through intermediate spool files into the final result.]
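The ordering constraint can be read off the placeholder query itself; the comments below walk through why the optimizer's hands are tied (Table_A through Table_D and the c1 through c4 columns are the deck's placeholders, not real tables):

```sql
SELECT ...
FROM Table_A A
LEFT  JOIN Table_B B ON A.c1 = B.c2   -- keeps every A row; A rows without a
                                      -- match get NULL-extended B columns
INNER JOIN Table_C C ON B.c2 = C.c3   -- NULL = C.c3 is never true, so this join
                                      -- silently drops the NULL-extended rows;
                                      -- it must run after A-B, never before
LEFT  JOIN Table_D D ON D.c4 = A.c1   -- only preserves rows that survived the
                                      -- inner join above
WHERE . . . ;
```

Incidentally, because the inner join's predicate rejects NULL values of B.c2, this particular pattern returns the same rows as writing A INNER JOIN B. That is a standard null-rejection identity (not something the deck states) which optimizers use to convert outer joins to inner joins, and it explains why some outer-join orders remain interchangeable while others do not.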

Agenda: Teradata architecture / Common problems and solutions / Teradata tools / Practical tips / How JOINs are implemented / JOIN optimization

Ways to Change the Query Plan

- Change the PI.
- Collect statistics: on the join columns; on the partition (PARTITION); on the WHERE-condition columns; on the GROUP BY columns.
- See what statistics a table has: help stat DBName.TableName
- See full statistics detail: select * from pview.vw_statistic_info
- Ask EXPLAIN which statistics are still missing: diagnostic helpstats on for session;

Optimization Example 1: Data Distribution and Join Method

Script: tb_rmis_bb40_mon2900280.pl, 16 hours.

select 200709, a.City_ID, a.SUBS_ID, a.BELONG_DISTRICT
from ( select City_ID, SUBS_ID, BELONG_DISTRICT
       from MMART.TB_MIS_DISTRICT_DIVISION
       where CAL_Month = 200612
       union
       select City_ID, SUBS_ID, BELONG_DISTRICT
       from MMART.TB_MIS_DISTRICT_DIVISION_NEW ) a
inner join pview.vw_mid_subs_info_mon b
  on a.subs_id = b.subs_id
where CAL_Month = 200709
  and SUBS_STS_Code not in (10,11,12,13,20,30,60)
  and SUBS_STS_EFCT_Date between 20070901 and 20070930
group by 1,2,3,4;

Main problems:
1. Table b is duplicated to all AMPs. Statistics are incomplete, so the optimizer believes b has only 130 rows after filtering; in reality it has around 2,000,000.
2. A product join is used. Table a has about 10,000,000 rows, so with b duplicated the join performs roughly 10,000,000 x 2,000,000 comparisons.
3. What would the optimal join method be?
4. Fix: collect statistics on the relevant columns of b; if necessary, materialize table a and collect statistics on subs_id.

Optimization Example 1 - EXPLAIN

5) We execute the following steps in parallel.
   1) We do an all-AMPs RETRIEVE step from MMART.TB_MIS_DISTRICT_DIVISION_NEW by way of an all-rows scan with no residual conditions into Spool 1 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 1 by the sort key in spool field1 eliminating duplicate rows. The size of Spool 1 is estimated with low confidence to be 7,278,252 rows. The estimated time for this step is 1.46 seconds.
   2) We do an all-AMPs RETRIEVE step from a single partition of PMART.MID_SUBS_INFO_MON with a condition of (PMART.MID_SUBS_INFO_MON.CAL_Month = 200709) with a residual condition of (PMART.MID_SUBS_INFO_MON.SUBS_STS_EFCT_Date >= DATE '2007-09-01') AND (PMART.MID_SUBS_INFO_MON.SUBS_STS_EFCT_Date ...

Optimization Example 2: Data Distribution and Join Method

Script: rpt_mart_new_comm_mon0400.pl, 11 hours.

select case when b.CUST_Brand_ID is null then 5020
            when b.CUST_Brand_ID in (2000,5010) then 5020
            else b.CUST_Brand_ID end
      ,sum(COALESCE(b.Bas_CHRG_DUR_Unit,0)) as Thsy_Accum_New_SUBS_CHRG_DUR
      ,...
from VTNEW_SUBS_THISYEAR t
inner join VTDUR_MON b on t.Subs_ID = b.Subs_ID
left join PVIEW.vw_MID_CDE_LONG_CALL_TYPE_LVL c
  on b.Long_Type_Level_SECND = c.Long_Type_Level_SECND
left join PVIEW.vw_MID_CDE_ROAM_TYPE_LVL d
  on b.Roam_Type_Level_SECND = d.Roam_Type_Level_SECND
group by 1;

Row counts: t: 5.8 million; b: 94 million; c: 8; d: 8. All have statistics.

Main problems:
1. Join order: ((b join c) join d) join t.
2. Table b is redistributed three times.
3. Join algorithm: merge join.
4. Cause: b is the result of an aggregation, so its total row count is known, but nothing is known about the individual join columns.

Fix: materialize b and collect statistics on its join columns; adjust the join order?

Optimization Example 2 - EXPLAIN (excerpt)

...
2) Next, we do an all-AMPs RETRIEVE step from LUSC.b by way of an all-rows scan with no residual conditions into Spool 4 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 4 by row hash. The result spool file will not be cached in memory. The size of Spool 4 is estimated with high confidence to be 94,078,021 rows. The estimated time for this step is 17 minutes and

6 seconds.
3) We execute the following steps in parallel.
   1) We do an all-AMPs JOIN step from Spool 4 (Last Use) by way of a RowHash match scan, which is joined to PMART.MID_CDE_LONG_CALL_TYPE_LVL by way of a RowHash match scan with no residual conditions. Spool 4 and PMART.MID_CDE_LONG_CALL_TYPE_LVL are left outer joined using a merge join, with a join condition of (Long_Type_Level_SECND = PMART.MID_CDE_LONG_CALL_TYPE_LVL.Long_Type_Level_SECND). The result goes into Spool 5 (all_amps), which is built locally on the AMPs. The result spool file will not be cached in memory. The size of Spool 5 is estimated with index join confidence to be 94,078,021 rows. The estimated time for this step is 27.53 seconds.
   2) ...
4) We do an all-AMPs JOIN step from Spool 5 (Last Use) by way of an all-rows scan, which is joined to Spool 6 (Last Use) by way of an all-rows scan. Spool 5 and Spool 6 are left outer joined using a product join, with a join condition of (Roam_Type_Level_SECND = Roam_Type_Level_SECND). The result goes into Spool 7 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 7 by row hash. The result spool file will not be cached in memory. The size of Spool 7 is estimated with index join confidence to be 94,078,021 rows. The estimated time for this step is 1 minute and 27 seconds.
5) We do an all-AMPs JOIN step from LUSC.t by way of a RowHash match scan with no residual conditions, which is joined to Spool 7 (Last Use) by way of a RowHash match scan. LUSC.t and Spool 7 are joined using a merge join, with a join condition of (LUSC.t.SUBS_ID = SUBS_ID). The input table LUSC.t will not be cached in memory, but it is eligible for synchronized scanning. The result goes into Spool 3 (all_amps), which is built locally on the AMPs. The result

spool file will not be cached in memory. The size of Spool 3 is estimated with index join confidence to be 582,133,174 rows. The estimated time for this step is 2 minutes and 30 seconds.

Optimization Example 3: Materializing a Temp Table

Script: rpt_mart_new_comm_mon0400.pl
. . .
Drop table ttemp.mart_new_comm_mon0400_DUR_1;

Create multiset table ttemp.mart_new_comm_mon0400_DUR_1 as
( ... ) with data
primary index (subs_id);

Collect statistics on ttemp.mart_new_comm_mon0400_DUR_1 column subs_id;

select case when b.CUST_Brand_ID is null then 5020
            when b.CUST_Brand_ID in (2000,5010) then 5020
            else b.CUST_Brand_ID end
      ,...
from VTNEW_SUBS_THISYEAR t
inner join ttemp.mart_new_comm_mon0400_DUR_1 b on t.Subs_ID = b.Subs_ID
left join . . .
group by 1;

Drop table ttemp.mart_new_comm_mon0400_DUR_1;

When materializing a temp table: mind the uniqueness of the table name, and remember that not every temp table needs to be materialized!

Optimization Example 4: Forcing the Order of a Multi-Table JOIN

select c.prov_title
      ,sum(...) AS fee
from PVIEW.VW_STL_GSM_INTER

a INNER JOIN pview.vw_CdE_TARRIF_dIST b
  ON a.OPPER_Home_Area = b.Long_Prefix
INNER JOIN dwpview.vw_LOC_prov c
  ON b.prov_code = c.prov_id
WHERE Self_Home_Area IN            -- caller is an in-province subscriber
      ( select city_Id from ttemp.vt_city_Id )
  and trim(SETLMT_Area) in         -- settlement area is Inner Mongolia
      ( select city_Id from ttemp.vt_city_Id )
  and SELF_Telco_Operator_Code = 2
  and PROC_Date between 20070601 and 20070630
group by 1;

Row counts: a: 900 million; b: 418; c: 32; b joined with c: 12.

Join order chosen by the optimizer:
1. a join b: about 900 million x 418 comparisons, result about 900 million rows.
2. (a join b) join c: about 900 million x 32 comparisons, result about 900 million rows.
Total: 900 million x 418 + 900 million x 32, i.e. roughly 900 million x 450 comparisons.

Optimization Example 4: Forcing the Order of a Multi-Table JOIN (cont.)

SQL rewrite:

select c.prov_title
      ,sum(...) AS fee
from ( select *
       from ttemp.TB_STL_GSM_INTER06
       where SELF_Telco_Operator_Code = 2
         and PROC_Date between 20070601 and 20070630
         and Self_Home_Area IN      -- caller is an in-province subscriber
             ( select city_Id from ttemp.vt_city_Id )
         and trim(SETLMT_Area) in   -- settlement area is Inner Mongolia
             ( select city_Id from ttemp.vt_city_Id )
     ) a
INNER join
     ( select b1.Long_Prefix, c1.prov_title
       FROM pview.vw_CdE_TARRIF_dIST b1
       INNER join dwpview.vw_LOC_prov c1
         ON b1.prov_code = c1.prov_id
     ) b
  ON a.OPPER_Home_Area = b.Long_Prefix
group by 1;

Join order after the rewrite:
1. b join c: 32 x 418 comparisons, result 418 rows.
2. a join (b join c): about 900 million x 418 comparisons, result about 900 million rows.
Total: roughly 900 million x 418 comparisons.

Optimization Example 5: Forcing the Order of a Multi-Table JOIN

select a.subs_id
      ,a.Phone_NUM
      ,...
      ,sum(coalesce(c.last_arpu,0)) last_arpu
      ,sum(coalesce(d.moths_arpu,0)) moths_arpu
      ,sum(coalesce(f.accum_intg,0)) CUST_INTG
FROM SUB_CUST_TV a
left join ( select * from ... ) b     -- the three major service brands
  on a.subs_id = b.subs_id
left join ( select * from ... ) c     -- last month's ARPU
  on a.subs_id = c.subs_id
left join ( select * from ... ) d     -- average ARPU of the previous three months
  on a.subs_id = d.subs_id
left join ( select cust_id, Contact_Person, Contact_Phone
            from PVIEW.vw_PAR_CUST_HIS
            where efct_date <= 20070831 ) e
  on a.cust_id = e.cust_id
left join ( select subs_id, accum_intg
            from pview.vw_OFR_SUBS_INTG_ACCUM_HIS
            where efct_date <= 20070831 ) f   -- subscriber points from the accumulated-points history table
  on a.subs_id = f.subs_id

Every table except e has PI subs_id (e's PI is cust_id), and every join except the one with e is on subs_id.
Join order: a x b x c x d x e x f. Joining e requires a data redistribution, and joining f afterwards requires redistributing again.
Fix: move the left join with e after f, i.e. a x b x c x d x f x e — then only one redistribution is needed.

Not Every SQL Can Be Optimized Simply

The SQL below pins down both:
- the distribution method: only duplicate is possible;
- the JOIN algorithm: only a product join is possible.
It cannot be meaningfully optimized at the SQL level: about 2 trillion operations ("get length", "substring", "compare").

Script: rpt_org_adc_mon0400.pl — does not finish within 16 hours.

SELECT A.Msisdn
      ,A.Opper_Num
      ,2
      ,COUNT(*) AS Sms_Quan
FROM PVIEW.VW_NET_GSM_SMS A
INNER JOIN VT_ORDER_USER B
  ON SUBSTRING(A.Opper_Num FROM 1 FOR CHAR_LENGTH(B.Msisdn)) = B.MSISDN
  AND B.Serv_Id = 'A01'
WHERE A.Proc_Date ...
GROUP BY 1,2,3;

Table A: PI(subs_ID), about 600 million rows for 30 days. Table B: PI(SUBS_ID), about 3,000 rows.
The "business logic" itself has to be revised.

JOIN Problems: Lessons Learned

Almost all slow SQL is slow because of JOINs.
- Exception 1: a slow INSERT may be caused by a SET target table, or by a bad PI.
- Exception 2: slow data retrieval may come from LIKE predicates, or simply from very large data volumes.

JOIN problems mainly come from:
1. Wrong data distribution: duplicating a large table, or redistributing a large table (redistribution can also leave the data badly skewed).
2. Wrong JOIN algorithm: for example, joining a large table to a small one with a merge join forces the large table to be re-hashed and sorted; or a large-to-large join that does not use a merge join.

How to fix JOIN problems:
- Collect statistics on the columns that participate in the join.
- When necessary, materialize temp tables and collect statistics on them.
- In general, there is no need to change the business logic of the SQL.

Thank You
Q & A
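As a compact recap of the levers the deck covers, one possible tuning pass looks like this (database, table, and column names are illustrative, drawn loosely from the examples above; the COLLECT STATISTICS form follows the V2R6-era usage shown in the deck and varies by release):

```sql
-- 1. Ask the optimizer to list the statistics it wishes it had: after this,
--    each EXPLAIN in the session ends with COLLECT STATISTICS recommendations.
DIAGNOSTIC HELPSTATS ON FOR SESSION;

-- 2. Check which statistics a suspect table already has.
HELP STATISTICS MMART.TB_MIS_DISTRICT_DIVISION;

-- 3. Collect statistics on the join / WHERE / GROUP BY columns the plan needs.
COLLECT STATISTICS ON MMART.TB_MIS_DISTRICT_DIVISION COLUMN subs_id;

-- 4. If an intermediate result still joins badly, materialize it with the join
--    column as its PI and collect statistics before the big join runs.
CREATE MULTISET TABLE ttemp.subs_filtered AS
( SELECT subs_id, city_id
  FROM MMART.TB_MIS_DISTRICT_DIVISION
  WHERE cal_month = 200612
) WITH DATA PRIMARY INDEX (subs_id);

COLLECT STATISTICS ON ttemp.subs_filtered COLUMN subs_id;
```

After each change, re-run EXPLAIN and check whether the "duplicated on all AMPs" / "redistributed by hash code" steps on large tables have disappeared and whether the "no confidence" estimates have improved.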
