
Teradata SQL Optimization: Tens-of-Times Performance Gain by Manually Fixing the Execution Plan

Execution plan chosen by the Teradata optimizer on its own:
explain
SELECT  A.Oprting_Fee_Id , A.Order_Row_Id , A.Order_Id , C.Order_Item_Row_Id , A.Root_Order_Item_Id , A.Charge_Id , A.OCharge_Item_Id , A.OCharge_Item_Name , A.Act_Type_Cd , A.Pay_Charge , A.CEmployee_Id , CAST( A.Pay_Dt AS DATE FORMAT 'YYYYMMDD'  ) , A.Cprd_Name , A.Cprd_Id , A.Cacct_Id , A.Pay_State_Cd , A.Action_Cd , A.Action_Type , A.Active_Flg , A.Asset_Integ_Id , B.Order_State , CAST( C.CDsc_Eff_Dt AS DATE FORMAT 'YYYYMMDD'  ) , CAST( C.CDsc_Exp_Dt AS DATE FORMAT 'YYYYMMDD'  ) , CAST( '20081201' AS DATE FORMAT 'YYYYMMDD'  ) , CAST( '20081224' AS DATE FORMAT 'YYYYMMDD'  ) , NULL , NULL , NULL , CUST.Statistic_Type , CUST.Market_Class , B.City_Type_Name , NULL , NULL , NULL , A.Root_Asset_Integ_Id , G.Circ_Nbr , A.Accs_Nbr , A.Username , A.Pay_Mode_Cd , CAST( B.Cpl_Dt AS DATE FORMAT 'YYYYMMDD'  ) , A.CCust_Id , A.OOrg_Id , A.Area_Id , D.std_code_id , D.Bureau_Id , B.Bureau_Id_Pas , B.Exchange_Id_Pas , NULL , NULL , NULL , A.Annotation , A.Stmt_Dt , A.Latn_Id
FROM  FRTEST.FIN_OPRTING_FEE_TEST_A A --small table
LEFT JOIN FRTEST.EVT_OORI_STATE_A B --large table
 ON  A.Root_Order_Item_Id = B.Root_Order_Item_Id
 LEFT JOIN FRTEST.EVT_OORI_CDSCT_A C
 ON  A.Charge_Id = C.Order_Item_CDsc_Row_Id
 LEFT JOIN FRTEST.V_CRM_EXCHANGE D --small table
 ON  B.Area_Id_Pas = D.Sour_Code_Id
 AND  B.Exchange_Id_Pas = CAST( D.Exchange_Id AS VARCHAR ( 30 )  )
 LEFT JOIN FRTEST.EVT_OORDER_KEY_A CUST
 ON  A.Order_Row_Id = CUST.Order_Row_Id
 LEFT JOIN FRTEST.V_EVT_EQUIP_CIRC_A G
 ON  G.Order_Item_Row_Id = A.Root_Order_Item_Id
WHERE  ( A.Pay_State_Cd = 3
 OR  A.Pay_State_Cd = 4 )
 AND  A.Active_Flg = 0
 AND  COALESCE( A.Pay_Charge , 0 ) <> 0
 AND  A.Stmt_Dt = DATE '2008-12-01' ;
 
Explanation
   1) First, we lock a distinct FRTEST."pseudo table" for read on a
      RowHash to prevent global deadlock for FRTEST.A.
   2) Next, we lock a distinct FRTEST."pseudo table" for read on a
      RowHash to prevent global deadlock for FRTEST.B.
   3) We lock a distinct FRTEST."pseudo table" for read on a RowHash to
      prevent global deadlock for FRTEST.C.
   4) We lock a distinct FRTEST."pseudo table" for read on a RowHash to
      prevent global deadlock for FRTEST.CUST.
   5) We lock FRTEST.A for read, we lock FRTEST.B for read, we lock
      FRTEST.C for read, we lock FRTEST.CUST for read, we lock
      FRTEST.DIM_EXCHANGE for access, we lock FRTEST.CDM_CODE_MAPPING
      for access, and we lock BSSDATA.EVT_ORDI_EQUIP_HIST_A for access.
   6) We execute the following steps in parallel.
        1) We do a single-AMP RETRIEVE step from FRTEST.c by way of the
           primary index "FRTEST.c.CODE_TYPE = 'CRM_AREA_NAME'" with a
           residual condition of ("NOT (FRTEST.c.STD_CODE_ID IS NULL)")
           into Spool 2 (all_amps), which is duplicated on all AMPs.
           The size of Spool 2 is estimated with low confidence to be
           568 rows.  The estimated time for this step is 0.00 seconds.
        2) We do an all-AMPs RETRIEVE step from a single partition of
           FRTEST.A with a condition of ("FRTEST.A.Stmt_Dt = DATE
           '2008-12-01'") with a residual condition of (
           "(FRTEST.A.Stmt_Dt = DATE '2008-12-01') AND (((( CASE WHEN
           (NOT (FRTEST.A.Pay_Charge IS NULL )) THEN
           (FRTEST.A.Pay_Charge) ELSE (0.) END ))<> 0.) AND
           ((FRTEST.A.Active_Flg = 0) AND ((FRTEST.A.Pay_State_Cd = 3)
           OR (FRTEST.A.Pay_State_Cd = 4))))") into Spool 3 (all_amps),
           which is redistributed by hash code to all AMPs.  Then we do
           a SORT to order Spool 3 by row hash.  The size of Spool 3 is
           estimated with low confidence to be 291,242 rows.  The
           estimated time for this step is 0.52 seconds.
   7) We execute the following steps in parallel.
        1) We do an all-AMPs JOIN step from Spool 3 (Last Use) by way of
           a RowHash match scan, which is joined to
           BSSDATA.EVT_ORDI_EQUIP_HIST_A by way of a RowHash match scan
           with a condition of ("NOT
           (BSSDATA.EVT_ORDI_EQUIP_HIST_A.Circ_Nbr IS NULL)").  Spool 3
           and BSSDATA.EVT_ORDI_EQUIP_HIST_A are left outer joined using
           a merge join, with a join condition of (
           "BSSDATA.EVT_ORDI_EQUIP_HIST_A.Order_Item_Row_Id =
           Root_Order_Item_Id").  The input table
           BSSDATA.EVT_ORDI_EQUIP_HIST_A will not be cached in memory.
           The result goes into Spool 4 (all_amps), which is
           redistributed by hash code to all AMPs.  Then we do a SORT to
           order Spool 4 by row hash.  The size of Spool 4 is estimated
           with index join confidence to be 291,242 rows.  The estimated
           time for this step is 3.53 seconds.
        2) We do an all-AMPs RETRIEVE step from FRTEST.B by way of an
           all-rows scan with a condition of ("NOT
           (FRTEST.B.Root_Order_Item_Id IS NULL)") into Spool 7
           (all_amps), which is redistributed by hash code to all AMPs.
           The input table will not be cached in memory, but it is
           eligible for synchronized scanning.  The result spool file
           will not be cached in memory.  The size of Spool 7 is
           estimated with no confidence to be 114,082,076 rows.  The
           estimated time for this step is 33.01 seconds.
        3) We do an all-AMPs JOIN step from Spool 2 (Last Use) by way of
           an all-rows scan, which is joined to FRTEST.d by way of an
           all-rows scan with no residual conditions.  Spool 2 and
           FRTEST.d are joined using a product join, with a join
           condition of ("(TRANSLATE((STD_CODE_ID )USING
           LATIN_TO_UNICODE)(FLOAT, FORMAT '-9.99999999999999E-999'))=
           (FRTEST.d.AREA_ID)").  The result goes into Spool 8
           (all_amps), which is redistributed by hash code to all AMPs.
           The size of Spool 8 is estimated with no confidence to be 531
           rows.  The estimated time for this step is 0.01 seconds.
   8) We do an all-AMPs JOIN step from Spool 4 (Last Use) by way of a
      RowHash match scan, which is joined to FRTEST.CUST by way of a
      RowHash match scan.  Spool 4 and FRTEST.CUST are left outer joined
      using a merge join, with a join condition of ("Order_Row_Id =
      FRTEST.CUST.Order_Row_Id").  The result goes into Spool 9
      (all_amps), which is redistributed by hash code to all AMPs.  Then
      we do a SORT to order Spool 9 by row hash.  The result spool file
      will not be cached in memory.  The size of Spool 9 is estimated
      with index join confidence to be 18,348,217 rows.  The estimated
      time for this step is 46.25 seconds.
   9) We execute the following steps in parallel.
        1) We do an all-AMPs JOIN step from Spool 9 (Last Use) by way of
           a RowHash match scan, which is joined to FRTEST.C by way of a
           RowHash match scan.  Spool 9 and FRTEST.C are left outer
           joined using a merge join, with a join condition of (
           "Charge_Id = FRTEST.C.Order_Item_CDsc_Row_Id").  The result
           goes into Spool 10 (all_amps), which is redistributed by hash
           code to all AMPs.  Then we do a SORT to order Spool 10 by row
           hash.  The result spool file will not be cached in memory.
           The size of Spool 10 is estimated with index join confidence
           to be 1,453,177,981 rows.  The estimated time for this step
           is 1 hour and 20 minutes.
        2) We do an all-AMPs JOIN step from Spool 7 (Last Use) by way of
           an all-rows scan, which is joined to Spool 8 (Last Use) by
           way of an all-rows scan.  Spool 7 and Spool 8 are left outer
           joined using a product join, with a join condition of (
           "(Area_Id_Pas = (TRANSLATE((( CASE WHEN (Field_2 IS NULL)
           THEN (NULL) ELSE (( CASE WHEN (SOUR_CODE_ID = '南浔区') THEN
           ('市本级') ELSE (TRANSLATE((SOUR_CODE_ID )USING
           LATIN_TO_UNICODE)) END )) END) )USING UNICODE_TO_LATIN))) AND
           (Exchange_Id_Pas = (TRIM(BOTH FROM {RightTable}.EXCHANGE_ID
           (VARCHAR(30), CHARACTER SET LATIN, CASESPECIFIC, FORMAT
           '---------9.'))(VARCHAR(30), CHARACTER SET LATIN,
           CASESPECIFIC)))").  The result goes into Spool 11 (all_amps),
           which is redistributed by hash code to all AMPs.  Then we do
           a SORT to order Spool 11 by row hash.  The result spool file
           will not be cached in memory.  The size of Spool 11 is
           estimated with no confidence to be 114,590,657 rows.  The
           estimated time for this step is 1 minute and 13 seconds.
  10) We do an all-AMPs JOIN step from Spool 10 (Last Use) by way of a
      RowHash match scan, which is joined to Spool 11 (Last Use) by way
      of a RowHash match scan.  Spool 10 and Spool 11 are left outer
      joined using a merge join, with a join condition of (
      "Root_Order_Item_Id = Root_Order_Item_Id").  The result goes into
      Spool 1 (group_amps), which is built locally on the AMPs.  The
      result spool file will not be cached in memory.  The size of Spool
      1 is estimated with no confidence to be 141,903,478,124 rows.  The
      estimated time for this step is 24 hours and 14 minutes.
  11) Finally, we send out an END TRANSACTION step to all AMPs involved
      in processing the request.
   -> The contents of Spool 1 are sent back to the user as the result of
      statement 1.  The total estimated time is 25 hours and 35 minutes.
 
Manually fixed execution plan:
CREATE MULTISET VOLATILE TABLE T_CRM_EXCHANGE AS (
SELECT sour_code_id , Std_Code_Id , Bureau_Id , CAST( Exchange_Id AS VARCHAR ( 30 )  ) Exchange_Id
FROM frtest.V_CRM_EXCHANGE)
WITH DATA PRIMARY INDEX(Sour_Code_Id , Exchange_Id)
ON COMMIT PRESERVE ROWS;
CREATE MULTISET VOLATILE TABLE FIN_OPRTING_FEE_TEST_A_T AS
 (
SELECT *
FROM FRTEST.FIN_OPRTING_FEE_TEST_A A
WHERE  ( A.Pay_State_Cd = 3
 OR  A.Pay_State_Cd = 4 )
 AND  A.Active_Flg = 0
 AND  COALESCE( A.Pay_Charge , 0 ) <> 0
 AND  A.Stmt_Dt = DATE '2008-12-01'
 )
WITH DATA PRIMARY INDEX(Root_Order_Item_Id)
ON COMMIT PRESERVE ROWS;
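 
One step worth adding here (not shown in the original session, so treat it as a suggested sketch): collect statistics on the volatile tables' join and filter columns, so the optimizer has accurate row counts when it plans the rewritten query. The column choices below are assumptions based on the joins used later:

```sql
-- Hedged sketch: collect statistics on the columns the rewritten query
-- joins and filters on, so demographics for the volatile tables are known.
COLLECT STATISTICS ON T_CRM_EXCHANGE COLUMN (Sour_Code_Id , Exchange_Id);
COLLECT STATISTICS ON FIN_OPRTING_FEE_TEST_A_T COLUMN (Root_Order_Item_Id);
COLLECT STATISTICS ON FIN_OPRTING_FEE_TEST_A_T COLUMN (Stmt_Dt);
```

With statistics in place, the "no confidence" estimates in the EXPLAIN output typically become "high confidence", which makes the pinned plan more stable across data volumes.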
 
explain
SELECT  A.Oprting_Fee_Id , A.Order_Row_Id , A.Order_Id , C.Order_Item_Row_Id , A.Root_Order_Item_Id , A.Charge_Id , A.OCharge_Item_Id , A.OCharge_Item_Name , A.Act_Type_Cd , A.Pay_Charge , A.CEmployee_Id , CAST( A.Pay_Dt AS DATE FORMAT 'YYYY-MM-DD'  ) , A.Cprd_Name , A.Cprd_Id , A.Cacct_Id , A.Pay_State_Cd , A.Action_Cd , A.Action_Type , A.Active_Flg , A.Asset_Integ_Id ,
 B.Order_State ,
 CAST( C.CDsc_Eff_Dt AS DATE FORMAT 'YYYY-MM-DD'  ) , CAST( C.CDsc_Exp_Dt AS DATE FORMAT 'YYYY-MM-DD'  ) , CAST( A.Stmt_Dt AS DATE FORMAT 'YYYY-MM-DD'  ) , CAST( A.Stmt_Dt AS DATE FORMAT 'YYYY-MM-DD'  ) , NULL , NULL , NULL , CUST.Statistic_Type , CUST.Market_Class ,
  B.City_Type_Name , NULL , NULL , NULL , A.Root_Asset_Integ_Id , G.Circ_Nbr , A.Accs_Nbr , A.Username , A.Pay_Mode_Cd ,
  CAST( B.Cpl_Dt AS DATE FORMAT 'YYYY-MM-DD'  ) ,
 A.CCust_Id , A.OOrg_Id , A.Area_Id , D.std_code_id , D.Bureau_Id ,
B.Bureau_Id_Pas , B.Exchange_Id_Pas ,
 NULL , NULL , NULL , A.Annotation , CAST( A.Stmt_Dt AS DATE FORMAT 'YYYY-MM-DD'  ) , A.Latn_Id
FROM  FIN_OPRTING_FEE_TEST_A_T A
LEFT JOIN FRTEST.EVT_OORI_STATE_A B
 ON  A.Root_Order_Item_Id = B.Root_Order_Item_Id
LEFT JOIN FRTEST.EVT_OORI_CDSCT_A C
 ON  A.Charge_Id = C.Order_Item_CDsc_Row_Id
LEFT JOIN T_CRM_EXCHANGE D
 ON  B.Area_Id_Pas = D.Sour_Code_Id
 AND  B.Exchange_Id_Pas = D.Exchange_Id
LEFT JOIN FRTEST.EVT_OORDER_KEY_A CUST
 ON  A.Order_Row_Id = CUST.Order_Row_Id
LEFT JOIN FRTEST.V_EVT_EQUIP_CIRC_A G
 ON  G.Order_Item_Row_Id = A.Root_Order_Item_Id
WHERE  ( A.Pay_State_Cd = 3
 OR  A.Pay_State_Cd = 4 )
 AND  A.Active_Flg = 0
 AND  COALESCE( A.Pay_Charge , 0 ) <> 0
 AND  A.Stmt_Dt = DATE '2008-12-01' ;
 
 Explanation
   1) First, we lock a distinct FRTEST."pseudo table" for read on a
      RowHash to prevent global deadlock for FRTEST.B.
   2) Next, we lock a distinct FRTEST."pseudo table" for read on a
      RowHash to prevent global deadlock for FRTEST.C.
   3) We lock a distinct FRTEST."pseudo table" for read on a RowHash to
      prevent global deadlock for FRTEST.CUST.
   4) We lock FRTEST.B for read, we lock FRTEST.C for read, we lock
      FRTEST.CUST for read, and we lock BSSDATA.EVT_ORDI_EQUIP_HIST_A
      for access.
   5) We do an all-AMPs RETRIEVE step from DWDBA.A by way of an all-rows
      scan with a condition of ("(DWDBA.A.Stmt_Dt = DATE '2008-12-01')
      AND (((( CASE WHEN (NOT (DWDBA.A.Pay_Charge IS NULL )) THEN
      (DWDBA.A.Pay_Charge) ELSE (0.) END ))<> 0.) AND
      ((DWDBA.A.Active_Flg = 0) AND ((DWDBA.A.Pay_State_Cd = 3) OR
      (DWDBA.A.Pay_State_Cd = 4 ))))") into Spool 2 (all_amps), which is
      built locally on the AMPs.  The size of Spool 2 is estimated with
      no confidence to be 13,308 rows.  The estimated time for this step
      is 0.03 seconds.
   6) We do an all-AMPs JOIN step from Spool 2 (Last Use) by way of a
      RowHash match scan, which is joined to
      BSSDATA.EVT_ORDI_EQUIP_HIST_A by way of a RowHash match scan with
      a condition of ("NOT (BSSDATA.EVT_ORDI_EQUIP_HIST_A.Circ_Nbr IS
      NULL)").  Spool 2 and BSSDATA.EVT_ORDI_EQUIP_HIST_A are left outer
      joined using a merge join, with a join condition of (
      "BSSDATA.EVT_ORDI_EQUIP_HIST_A.Order_Item_Row_Id =
      Root_Order_Item_Id").  The input table
      BSSDATA.EVT_ORDI_EQUIP_HIST_A will not be cached in memory.  The
      result goes into Spool 3 (all_amps), which is built locally on the
      AMPs.  The size of Spool 3 is estimated with no confidence to be
      13,308 rows.  The estimated time for this step is 0.15 seconds.
   7) We execute the following steps in parallel.
        1) We do an all-AMPs JOIN step from Spool 3 (Last Use) by way of
           a RowHash match scan, which is joined to FRTEST.B by way of a
           RowHash match scan with no residual conditions.  Spool 3 and
           FRTEST.B are left outer joined using a merge join, with a
           join condition of ("Root_Order_Item_Id =
           FRTEST.B.Root_Order_Item_Id").  The input table FRTEST.B will
           not be cached in memory.  The result goes into Spool 4
           (all_amps), which is built locally on the AMPs.  Then we do a
           SORT to order Spool 4 by row hash.  The size of Spool 4 is
           estimated with no confidence to be 1,423,956 rows.  The
           estimated time for this step is 1.67 seconds.
        2) We do an all-AMPs RETRIEVE step from DWDBA.D by way of an
           all-rows scan with a condition of ("(NOT
           (DWDBA.D.sour_code_id IS NULL )) AND (NOT
           (DWDBA.D.Exchange_Id IS NULL ))") into Spool 5 (all_amps),
           which is duplicated on all AMPs.  Then we do a SORT to order
           Spool 5 by row hash.  The size of Spool 5 is estimated with
           no confidence to be 1,877,382 rows.  The estimated time for
           this step is 0.52 seconds.
   8) We do an all-AMPs JOIN step from Spool 4 (Last Use) by way of a
      RowHash match scan, which is joined to Spool 5 (Last Use) by way
      of a RowHash match scan.  Spool 4 and Spool 5 are left outer
      joined using a merge join, with a join condition of (
      "(Exchange_Id_Pas = (Exchange_Id )) AND ((TRANSLATE((Area_Id_Pas
      )USING LATIN_TO_UNICODE))= (sour_code_id ))").  The result goes
      into Spool 6 (all_amps), which is redistributed by hash code to
      all AMPs.  Then we do a SORT to order Spool 6 by row hash.  The
      size of Spool 6 is estimated with no confidence to be 2,747,403
      rows.  The estimated time for this step is 6.63 seconds.
   9) We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of a
      RowHash match scan, which is joined to FRTEST.CUST by way of a
      RowHash match scan.  Spool 6 and FRTEST.CUST are left outer joined
      using a merge join, with a join condition of ("Order_Row_Id =
      FRTEST.CUST.Order_Row_Id").  The result goes into Spool 7
      (all_amps), which is redistributed by hash code to all AMPs.  Then
      we do a SORT to order Spool 7 by row hash.  The result spool file
      will not be cached in memory.  The size of Spool 7 is estimated
      with index join confidence to be 173,086,109 rows.  The estimated
      time for this step is 9 minutes and 5 seconds.
  10) We do an all-AMPs JOIN step from Spool 7 (Last Use) by way of a
      RowHash match scan, which is joined to FRTEST.C by way of a
      RowHash match scan.  Spool 7 and FRTEST.C are left outer joined
      using a merge join, with a join condition of ("Charge_Id =
      FRTEST.C.Order_Item_CDsc_Row_Id").  The result goes into Spool 1
      (group_amps), which is built locally on the AMPs.  The result
      spool file will not be cached in memory.  The size of Spool 1 is
      estimated with index join confidence to be 13,827,785,402 rows.
      The estimated time for this step is 2 hours and 22 minutes.
  11) Finally, we send out an END TRANSACTION step to all AMPs involved
      in processing the request.
   -> The contents of Spool 1 are sent back to the user as the result of
      statement 1.  The total estimated time is 2 hours and 31 minutes.

After splitting the statement this way, its actual performance improved more than 50-fold. Because of statistics problems, the optimizer is often far from clever, so fixing the execution plan by hand — ensuring the optimizer picks an acceptable join order and join method — is very important.
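
Before resorting to a hand-pinned plan, it can also help to ask the optimizer which statistics it is missing. The sketch below uses Teradata's session-level diagnostic for this; the table names are placeholders standing in for the tables of the real query:

```sql
-- Hedged sketch: with HELPSTATS enabled, each EXPLAIN ends with a list of
-- recommended COLLECT STATISTICS statements for the query being explained.
DIAGNOSTIC HELPSTATS ON FOR SESSION;

-- Re-run the EXPLAIN of the problem query (placeholder shown here),
-- then read the statistics recommendations at the end of its output.
EXPLAIN SELECT A.some_col
FROM some_schema.small_table A
LEFT JOIN some_schema.large_table B
  ON A.join_col = B.join_col;

DIAGNOSTIC HELPSTATS NOT ON FOR SESSION;
```

Collecting the recommended statistics often lets the optimizer find the good join order on its own, which is easier to maintain than a manually split query.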
