Here is a performance test I made on worked_hours.
insert into worked_hours
select mod(rownum,56), rownum, mod(rownum,7), 1
from dba_objects;
select count(1) from worked_hours;
49647
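(Side note: the table definition is not shown here; a minimal sketch of what it presumably looks like, judging from the columns the queries below reference, would be

create table worked_hours
( weekno number
, empno  number
, day    number
, hours  number
);

and the statistics blocks below look like SQL*Plus autotrace output, e.g. captured with set timing on and set autotrace on statistics.)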
select weekno
, empno
, mon
, tue
, wed
, thu
, fri
, sat
, sun
from worked_hours
model
return updated rows
partition by (weekno, empno)
dimension by ( day )
measures ( hours, 0 mon, 0 tue, 0 wed, 0 thu, 0 fri, 0 sat, 0 sun)
RULES upsert
(
mon [0] = hours [1]
, tue [0] = hours [2]
, wed [0] = hours [3]
, thu [0] = hours [4]
, fri [0] = hours [5]
, sat [0] = hours [6]
, sun [0] = hours [7]
)
/
Statistics
----------------------------------------------------------
10 recursive calls
0 db block gets
184 consistent gets
1260 physical reads
0 redo size
1228664 bytes sent via SQL*Net to client
36900 bytes received via SQL*Net from client
3310 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
49635 rows processed
49635 rows selected.
Elapsed: 00:00:04.21
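For a closer look than the autotrace counters give, each statement's row-source plan can be pulled with DBMS_XPLAN; a sketch, with the statement itself left to you:

-- re-run the query of interest with the /*+ gather_plan_statistics */ hint, then,
-- in the same session and with serveroutput off, show its plan with run-time stats:
select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));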
select weekno
, empno
, mon
, tue
, wed
, thu
, fri
, sat
, sun
from (
select weekno, empno
, sum(decode(day,2,hours,0)) mon
, sum(decode(day,3,hours,0)) tue
, sum(decode(day,4,hours,0)) wed
, sum(decode(day,5,hours,0)) thu
, sum(decode(day,6,hours,0)) fri
, sum(decode(day,7,hours,0)) sat
, sum(decode(day,1,hours,0)) sun
from worked_hours
group by weekno, empno
)
/
49635 rows selected.
Elapsed: 00:00:01.92
Statistics
----------------------------------------------------------
4 recursive calls
6 db block gets
184 consistent gets
272 physical reads
0 redo size
943540 bytes sent via SQL*Net to client
36900 bytes received via SQL*Net from client
3310 SQL*Net roundtrips to/from client
0 sorts (memory)
1 sorts (disk)
49635 rows processed
It seems that the old pivot method is better, and the difference seems to be connected somehow to the physical reads.
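One way to check that: in the MODEL run the physical reads (1260) are far higher than the consistent gets (184), which looks more like temp I/O from the MODEL work area than extra reads of the table. A sketch of how to verify this, assuming the statement is still in the shared pool ('&model_sql_id' is just a placeholder for the SQL_ID you find):

-- locate the cursor; adjust the sql_text filter to whatever identifies your statement
select sql_id, child_number from v$sql where sql_text like '%return updated rows%';

-- then inspect its work areas: a non-zero LAST_TEMPSEG_SIZE means it spilled to temp
select operation_type, last_memory_used, last_tempseg_size
from v$sql_workarea
where sql_id = '&model_sql_id';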