Benchmarking Techniques Using T-SQL Part 1 - System Statistical Functions
by Bill Wunder
If you caught my article last week you know that the
alternative to benchmarking and trend analysis can be loosely
described as SQL Darwinism.
I don't believe in SQL Darwinism and neither should you! SQL
Darwinism is a term I've borrowed from biology to describe the
unsubstantiated attempts to "throw hardware at it" that have
been far too common in recent times to resolve performance and
tuning issues in software applications. In this article we will
examine some tools available in SQL Server to help you easily
establish baselines and trends for your database servers,
identify sources for recommended best practices available to
help to determine what to baseline, and reveal some techniques
useful to automate benchmarking and trend analysis on your SQL
Servers consistent with those best practices.
There is a large assortment of measurement tools available
for capturing the sort of information we will be going after
here. There are low level network sniffers, OS system monitoring
and debugger options, third party and in house custom
applications, SQL Server tools such as Profiler and Query
Analyzer, and others that offer some level of ability to help you
see how things are performing and even identify opportunities to
improve performance long before the problems manifest themselves
as limitations to an application. I certainly don't want to
discourage the use of any tool you have at your disposal to
attack the performance and tuning beast before it gets the best
of your application. Still, there are far too many things to
explore in all of those topics to cover here. Not only that, but
it's just too critical that you baseline and begin monitoring a
system as soon as possible. What I will do here is to focus on
some tools that any good SQL Server DBA or performance minded
developer - such as your incredibly intelligent and good looking
self - can quickly and easily deploy that will take care of the
devil in the details for you without a huge learning curve. The
way to avoid a heavy ramp-up time is, of course, to use T-SQL in
stored procedures to query tables to collect the necessary data.
Stuff we are all very familiar with, right?
Whether we are discussing SQL Server performance and tuning
or any other flavor of performance and tuning, rule number 1 is
"establish a baseline". Sounds easy enough, but as anyone that
has gone through the effort to baseline their SQL Server knows,
baselining a production server can be a real challenge. It's
always better to get that first baseline before an application
goes to production. Deciding what data to collect requires a
solid understanding of the application, the database engine, and
the platform. Only with careful, holistic consideration is there
a realistic chance that what you measure is what you need to
measure. Part of what we'll have to do here is
identify metrics that will be the most useful in most situations
and focus on collecting only those metrics. What else to monitor
on any server will come out of the "it depends" bag. I suggest
that you not try to cover all the possibilities with a
baselining tool, but rather that a more desirable and useful
objective for the tool we will build here is to produce
a common set of metrics that look very much the same in
substance - even if not in actual data point values - regardless
the server where the data is collected. The obvious advantage is
that you don't need to stop and figure out what is being
monitored. You'll know because it's the same everywhere.
Right off the top let's consider that baselining individual
queries is not what we are shooting for in this automated tool.
Identification of poorly performing queries, bad indexes, slow
joins, unnecessary sorts, and query plan surprises is too
important and probably too voluminous to stuff away in an
automated baseline and trend history generator. We will include
some indicators that will show when query related issues are
systemic, but the goal of our automated tool is to tell us how
the server is doing rather than how individual queries are
behaving. Since 80% of what we need to actively keep an eye on
as administrators and developers are those queries, we want the
automated job to help us with the other 20%: the stuff we tend
to get slapped around by because we're out fighting the fires
associated with individual queries. Our tool will aim to help us
identify bottlenecks and evaluate general performance for such
purposes as determining when and where a hardware upgrade may be
needed. In general we'll measure server wide response time
(length of time to produce a result) and throughput (volume of
results produced) at the hardware, OS and SQL Server layer.
Books Online makes the following recommendations:
Establishing a Performance Baseline
To determine whether your Microsoft® SQL Server™ system is
performing optimally, take performance measurements over time
and establish a server performance baseline. Compare each new
set of measurements with those taken earlier.
After you establish a server performance baseline, compare
the baseline statistics to current server performance. Numbers
far above or far below your baseline are candidates for further
investigation. They may indicate areas in need of tuning or
reconfiguration. For example, if the amount of time to execute a
set of queries increases, examine the queries to determine if
they can be rewritten or if column statistics or new indexes
must be added.
At a minimum, use baseline measurements to determine:
·Peak and off-peak hours of operation.
·Production query or batch command response times.
·Database backup and restore completion times.
A good start, but probably a little broad to get us very far.
Let's break this down a little better by looking at logical and
physical database design aspects as they relate to performance.
Again from Books Online: "Logical database design involves
modeling your business requirements and data using database
components, such as tables and constraints, without regard for
how or where the data will be physically stored." From the
logical design aspect, in our tool we may want to examine such
things as rates of blocking, deadlocks, duration of
transactions, and table/index scan rates. "Physical database
design involves mapping the logical design onto physical media,
taking advantage of the hardware and software features
available, which allows the data to be physically accessed and
maintained as quickly as possible, and indexing." From the
physical design aspect there are many more candidates to pick
from when setting benchmarks and measuring change: CPU
utilization, memory utilization, disk IO, network IO, file
sizes, and growth rates. It is important to benchmark early when
possible to assure that you have correctly designed the database
to model your business requirements, and to take advantage of
hardware and software features early in the development cycle of
a database application, because it is difficult to make changes
to these components later. In particular, if you can benchmark a
development box you can often provide valid metrics to help
define the requirements for a production box and in the process
learn about your application. Hopefully it's becoming
interesting to you that we can find all this information inside
SQL Server, using ordinary T-SQL.
Logging sp_monitor
SQL Server includes a small but useful set of
nondeterministic System Statistical Functions that can be used
to see what is happening at the physical layer. These counters
include 10 globals that return metrics for the server and a very
cool function that can give you great information at the
database file level. I'll talk first about the 10 globals. One
huge problem related to using these globals is that they return
a value that is initialized to 0 each time SQL Server is
started. This is somewhat problematic because, for instance, if
you look at total reads on your server and the server has been
running several days or weeks you really don't find out much
about what the reads looked like on the server today.
Fortunately, the system stored procedure sp_monitor provides
some help here. sp_monitor not only returns the cumulative value
from each of these globals since the server was started but also
a delta value since the last time the procedure was executed for
each counter as well as a percentage value that is relevant to
the delta value.
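As a quick sanity check, you can query the 10 globals directly; keep in mind that every value is cumulative since the last SQL Server restart:

```sql
-- the 10 system statistical globals behind sp_monitor
SELECT @@CPU_BUSY      AS cpu_busy_ticks,
       @@IO_BUSY       AS io_busy_ticks,
       @@IDLE          AS idle_ticks,
       @@PACK_RECEIVED AS pack_received,
       @@PACK_SENT     AS pack_sent,
       @@CONNECTIONS   AS connections,
       @@PACKET_ERRORS AS pack_errors,
       @@TOTAL_READ    AS total_read,
       @@TOTAL_WRITE   AS total_write,
       @@TOTAL_ERRORS  AS total_errors
```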
"For each column, the statistic is printed in the form
number(number)-number% or number(number). The first number
refers to the number of seconds (for cpu_busy, io_busy, and
idle) or the total number (for the other variables) since SQL
Server was restarted. The number in parentheses refers to the
number of seconds or total number since the last time sp_monitor
was run. The percentage is the percentage of time since
sp_monitor was last run. For example, if the report shows
cpu_busy as 4250(215)-68%, the CPU has been busy 4250 seconds
since SQL Server was last started up, 215 seconds since
sp_monitor was last run, and 68 percent of the total time since
sp_monitor was last run." (BOL)
If you look inside the code of sp_monitor, not only will you
see that it is based solely on the 10 global values, but also
that it does its delta magic by storing the values in a table in
master, spt_monitor, each time it is run. spt_monitor always has
only one row in it. It does not maintain any sort of history.
However, if we set up a job that runs once a day and once each
time the SQL Server starts, and then, immediately after each
execution of sp_monitor by the job, move the new row into a
history table, we'll be able to recreate the sp_monitor output
for any interval we've captured to the spt_monitor_log that
we've created. I use the stored procedure Get_sp_monitor_History in
the
script provided for this article in a SQL Agent job (sample job is
in the script too) that runs once a day and each time the SQL
Server service is started to capture a daily metric of these
basic hardware metrics to a table on every SQL Server. If you
use nothing else from this article, add this process to your SQL
Servers and you will have the basic baseline and trend
measurement tool installed. I have also copied the code that
calculates the deltas from sp_monitor so I can run it against
the spt_monitor_log. That code is provided as the stored
procedure MonitorLogTrend, also included in the referenced
script. If you look at the bottom of the script you can see a
sample output from the MonitorLogTrend from a server that has
been running for several weeks and the data stored in
spt_monitor_log that produced the sample output.
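To make the delta logic concrete, here is a minimal sketch of the idea against the history table; the full version in MonitorLogTrend adds the @@TIMETICKS scaling and percentage math:

```sql
-- simplified daily deltas computed from adjacent rows of the history table
SELECT s1.lastrun,
       s1.total_read  - s2.total_read  AS reads_delta,
       s1.total_write - s2.total_write AS writes_delta,
       s1.connections - s2.connections AS connections_delta
FROM admin.dbo.spt_monitor_log s1
JOIN admin.dbo.spt_monitor_log s2
  ON s1.id = s2.id + 1
ORDER BY s1.lastrun DESC
```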
Three things to note with this process are:
1. sp_monitor and the globals use int values in T-SQL, but they
are provided by the SQL Server binary, so they have slightly
different behavior than the int data type we know and love. If
your server stays up long enough the integer value will
overflow. When this happens the counter simply goes back to 0
and the high byte is lost. This can mean that you will
occasionally get a bad value that you may need to ignore, or that
will result in an 8115 arithmetic overflow error. In my opinion
this is acceptable behavior because we are going after a trend
here and the loss of one data point does not significantly affect
the trend analysis.
2. @@CPU_BUSY and @@IDLE return values that are based on 100%
per CPU. This means that the percent busy or percent idle can be
a value up to 400% on a 4-CPU machine or 200% on a 2-CPU
machine, for example.
3. Notice that the "Seconds" column is fairly consistent in
this output. If for any reason someone ran an ad hoc sp_monitor
the seconds value would be reduced but the returned metric data
would still be accurate even if abbreviated for that particular
interval.
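To see how the tick-to-seconds math works, here is the same conversion sp_monitor and MonitorLogTrend use, applied directly to the globals; @@TIMETICKS reports the number of microseconds per tick, so dividing by 1000 gives milliseconds per tick:

```sql
-- convert the time-based globals from ticks to seconds
DECLARE @msPerTick int
SELECT @msPerTick = CONVERT(int, @@TIMETICKS / 1000.0)
SELECT (@@CPU_BUSY * @msPerTick) / 1000 AS cpu_busy_seconds,
       (@@IO_BUSY  * @msPerTick) / 1000 AS io_busy_seconds,
       (@@IDLE     * @msPerTick) / 1000 AS idle_seconds
```

Remember from note 2 that on a multi-CPU machine a percentage derived from these values can legitimately exceed 100%.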
I encourage you to install the benchmarking tool described in
this article as soon as possible. In addition to scheduling the
job provided in the script that will execute sp_monitor daily,
consider creating another job that will mail the MonitorLogTrend
report to you once every week or two so you can begin to gain an
understanding of how your SQL Server runs at the hardware
level.
While sp_monitor has been with us for many versions of SQL
Server, the ::fn_virtualfilestats table-valued function that
provides metrics at the database level is a newcomer to the
arsenal of T-SQL monitoring tools. There is no single-row
table that stores the result of each execution of this function,
so to use it for baselining and trend analysis you can simply
throw a row of the results into a table each day and then build
an extraction query based on the logic used in the MonitorLogTrend
stored procedure. In my next article I'll show you how I handle
that task and show you some other very useful database-level
metrics from the sysperfinfo table and how to work with them
from a stored procedure that runs from the SQL Agent. Once we
get through those two components you will find that you now not
only have a formidable T-SQL table based benchmark for your SQL
Servers but are well on the way to developing empirical trend
analysis datasets to help you keep things running smoothly. No
SQL Darwinism necessary!
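As a preview of that database-level sampling, ::fn_virtualfilestats can be queried like this (a sketch assuming SQL Server 2000 syntax, where the parameters are a database ID and a file ID):

```sql
-- snapshot of cumulative file-level I/O for file 1 of master;
-- capture a row per day, as with spt_monitor_log
SELECT DbId, FileId, NumberReads, NumberWrites,
       BytesRead, BytesWritten, IoStallMS
FROM ::fn_virtualfilestats(DB_ID('master'), 1)
```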
The scripts for the components discussed in this article are
available here.
/*
Benchmarking Techniques Using T-SQL - System Statistical Functions
*/
use admin
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[Get_sp_monitor_History]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[Get_sp_monitor_History]
GO
create procedure dbo.Get_sp_monitor_History
as
/*******************************************************************************************************
* dbo.Get_sp_monitor_History.PRC
* Creator: Bill Wunder
*
* Description: execute system procedure sp_monitor and capture the result to a logging table
*
* Usage:
EXECUTE admin.dbo.Get_sp_monitor_History
*
*
* Modifications:
* Developer Name Date Brief Description
* ------------------ -------- ------------------------------------------------------------
*
********************************************************************************************************/
if object_id('admin.dbo.spt_monitor_log','U') is null
create table admin.dbo.spt_monitor_log
(id int identity(1,1),
lastrun datetime not null,
cpu_busy int not null,
io_busy int not null,
idle int not null,
pack_received int not null,
pack_sent int not null,
connections int not null,
pack_errors int not null,
total_read int not null,
total_write int not null,
total_errors int not null
constraint pkc_spt_monitor_log__id primary key (id))
-- if someone else ran sp_monitor since last time I did, add a row to the log for their data
if not exists (select lastrun
from admin.dbo.spt_monitor_log
where id = (select max(id)
from admin.dbo.spt_monitor_log)
and lastrun = (select lastrun from master.dbo.spt_monitor))
insert admin.dbo.spt_monitor_log
(lastrun,
cpu_busy,
io_busy,
idle,
pack_received,
pack_sent,
connections,
pack_errors,
total_read,
total_write,
total_errors)
select lastrun,
cpu_busy,
io_busy,
idle,
pack_received,
pack_sent,
connections,
pack_errors,
total_read,
total_write,
total_errors
from master.dbo.spt_monitor
exec master.dbo.sp_monitor
insert admin.dbo.spt_monitor_log
(lastrun,
cpu_busy,
io_busy,
idle,
pack_received,
pack_sent,
connections,
pack_errors,
total_read,
total_write,
total_errors)
select lastrun,
cpu_busy,
io_busy,
idle,
pack_received,
pack_sent,
connections,
pack_errors,
total_read,
total_write,
total_errors
from master.dbo.spt_monitor
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
use admin
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[MonitorLogTrend]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[MonitorLogTrend]
GO
create procedure dbo.MonitorLogTrend
@StartDate datetime = '1970-01-01',
@EndDate datetime = null
as
/*******************************************************************************************************
* dbo.MonitorLogTrend.PRC
* Creator: Bill Wunder
*
* Description: produce a result set of daily sp_monitor results for a specified time period
*
* Usage:
EXECUTE admin.dbo.MonitorLogTrend -- all
EXECUTE admin.dbo.MonitorLogTrend '2003.07.06', '2003.07.13'-- one week
*
*
* Modifications:
* Developer Name Date Brief Description
* ------------------ -------- ------------------------------------------------------------
*
********************************************************************************************************/
declare @msPerTick int
set nocount on
print 'Column name - Description
Date - Time sp_monitor was executed.
Seconds - Number of elapsed seconds since sp_monitor was run.
CPU Busy - Number of seconds that the server computer''s CPU has been doing SQL Server work.
I/O Busy - Number of seconds that SQL Server has spent doing input and output operations.
Idle - Number of seconds that SQL Server has been idle.
Packets Received - Number of input packets read by SQL Server.
Packets Sent - Number of output packets written by SQL Server.
Packet Errors - Number of errors encountered by SQL Server while reading and writing packets.
Total Reads - Number of reads by SQL Server.
Total Write - Number of writes by SQL Server.
Total Errors - Number of errors encountered by SQL Server while reading and writing.
Connections - Number of logins or attempted logins to SQL Server.'
/*
** Set @mspertick. This is just used to make the numbers easier to handle
** and avoid overflow.
*/
select @mspertick = convert(int, @@timeticks / 1000.0)
select
[Date] = convert(varchar(11),s1.lastrun),
[Seconds] = datediff(ss, s2.lastrun, s1.lastrun),
[CPU Busy] = substring(convert(varchar(11),
convert(bigint, ((s1.cpu_busy * @mspertick) / 1000)))
+ '(' + convert(varchar(11), convert(int, (((s1.cpu_busy - s2.cpu_busy)
* @mspertick) / 1000))) + ')'
+ '-' + convert(varchar(11), convert(int, ((((s1.cpu_busy - coalesce(s2.cpu_busy,0))
* @mspertick) / 1000) * 100) / datediff(ss, s2.lastrun, s1.lastrun))) + '%',
1, 25),
[I/O Busy] = substring(convert(varchar(11),
convert(bigint, ((s1.io_busy * @mspertick) / 1000)))
+ '(' + convert(varchar(11), convert(int, (((s1.io_busy - s2.io_busy)
* @mspertick) / 1000))) + ')'
+ '-' + convert(varchar(11), convert(int, ((((s1.io_busy - s2.io_busy)
* @mspertick) / 1000) * 100) / datediff(ss, s2.lastrun, s1.lastrun))) + '%',
1, 25),
[Idle] = substring(convert(varchar(11),
convert(int, ((convert(bigint,s1.idle) * @mspertick) / 1000)))
+ '(' + convert(varchar(11), convert(int, (((convert(bigint,s1.idle) - convert(bigint,s2.idle))
* @mspertick) / 1000)))+ ')'
+ '-' + convert(varchar(11), convert(int, ((((convert(bigint,s1.idle) - convert(bigint,s2.idle))
* @mspertick) / 1000) * 100) / datediff(ss, s2.lastrun, s1.lastrun))) + '%',
1, 25),
[Packets Received] = substring(convert(varchar(11), s1.pack_received) + '(' +
convert(varchar(11), s1.pack_received - s2.pack_received) + ')', 1, 25),
[Packets Sent] = substring(convert(varchar(11), s1.pack_sent) + '(' +
convert(varchar(11), s1.pack_sent - s2.pack_sent) + ')', 1, 25),
[Packet Errors] = substring(convert(varchar(11), s1.pack_errors) + '(' +
convert(varchar(11), s1.pack_errors - s2.pack_errors) + ')', 1, 25),
[Total Reads] = substring(convert(varchar(11), s1.total_read) + '(' +
convert(varchar(11), s1.total_read - s2.total_read) + ')', 1, 19),
[Total Writes] = substring(convert(varchar(11), s1.total_write) + '(' +
convert(varchar(11), s1.total_write - s2.total_write) + ')', 1, 19),
[Total Errors] = substring(convert(varchar(11), s1.total_errors) + '(' +
convert(varchar(11), s1.total_errors - s2.total_errors) + ')', 1, 19),
[Connections] = substring(convert(varchar(11), s1.connections) + '(' +
convert(varchar(11), s1.connections - s2.connections) + ')', 1, 18)
from admin.dbo.spt_monitor_log s1
left join admin.dbo.spt_monitor_log s2
on s1.Id = s2.Id + 1
where s1.lastrun between @StartDate and coalesce(@EndDate,getdate())
and s1.cpu_busy > s2.cpu_busy
and s1.io_busy > s2.io_busy
and s1.idle > s2.idle
order by s1.lastrun desc
GO
-- job to get sp_monitor history daily and at each SQL Server startup
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
begin transaction
DECLARE @JobID BINARY(16)
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
if (select count(*) from msdb.dbo.syscategories where name = N'Monitoring') < 1
execute msdb.dbo.sp_add_category @name = N'Monitoring'
select @JobID = job_id from msdb.dbo.sysjobs where (name = N'get sp_monitor history')
if (@JobID is not NULL)
begin
if (exists (select * from msdb.dbo.sysjobservers where (job_id = @JobID) AND (server_id <> 0)))
begin
RAISERROR (N'Unable to import job since there is already a multi-server job with this name.', 16, 1)
goto QuitWithRollback
end else
execute msdb.dbo.sp_delete_job @job_name = N'get sp_monitor history'
select @JobID = NULL
end
select @JobID = job_id from msdb.dbo.sysjobs where (name = N'get sp_monitor history')
if (@JobID is NULL)
BEGIN
execute @ReturnCode = msdb.dbo.sp_add_job @job_id = @JobID OUTPUT, @job_name = N'get sp_monitor history', @enabled = 1, @start_step_id = 1, @notify_level_eventlog = 2, @notify_level_email = 0, @notify_level_netsend = 0, @notify_level_page = 0, @delete_level = 0, @description = N'No description available.', @category_name = N'Monitoring', @owner_login_name = N'sa'
if (@@ERROR <> 0 OR @ReturnCode <> 0) goto QuitWithRollback
execute @ReturnCode = msdb.dbo.sp_add_jobstep @job_id = @JobID , @step_id = 1, @cmdexec_success_code = 0, @on_success_action = 1, @on_success_step_id = 0, @on_fail_action = 4, @on_fail_step_id = 2, @retry_attempts = 0, @retry_interval = 1, @os_run_priority = 0, @flags = 0, @step_name = N'step 1', @subsystem = N'TSQL', @command = N'exec admin.dbo.Get_sp_monitor_History
', @database_name = N'master'
if (@@ERROR <> 0 OR @ReturnCode <> 0) goto QuitWithRollback
execute @ReturnCode = msdb.dbo.sp_add_jobstep @job_id = @JobID , @step_id = 2, @cmdexec_success_code = 0, @on_success_action = 1, @on_success_step_id = 0, @on_fail_action = 1, @on_fail_step_id = 0, @retry_attempts = 0, @retry_interval = 1, @os_run_priority = 0, @flags = 0, @step_name = N'failure notification', @subsystem = N'TSQL', @command = N'exec admin.dbo.job_log_error', @database_name = N'master'
if (@@ERROR <> 0 OR @ReturnCode <> 0) goto QuitWithRollback
execute @ReturnCode = msdb.dbo.sp_update_job @job_id = @JobID, @start_step_id = 1
if (@@ERROR <> 0 OR @ReturnCode <> 0) goto QuitWithRollback
execute @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id = @JobID, @name = N'at startup', @enabled = 1, @freq_type = 64, @freq_interval = 0, @freq_subday_type = 0, @freq_subday_interval = 0, @freq_relative_interval = 0, @freq_recurrence_factor = 0, @active_start_date = 20030318, @active_end_date = 99991231, @active_start_time = 0, @active_end_time = 235959
if (@@ERROR <> 0 OR @ReturnCode <> 0) goto QuitWithRollback
execute @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id = @JobID, @name = N'daily', @enabled = 1, @freq_type = 4, @freq_interval = 1, @freq_subday_type = 1, @freq_subday_interval = 0, @freq_relative_interval = 0, @freq_recurrence_factor = 0, @active_start_date = 20020911, @active_end_date = 99991231, @active_start_time = 220000, @active_end_time = 235959
if (@@ERROR <> 0 OR @ReturnCode <> 0) goto QuitWithRollback
execute @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @JobID, @server_name = N'(local)'
if (@@ERROR <> 0 OR @ReturnCode <> 0) goto QuitWithRollback
END
commit transaction
goto EndSave
QuitWithRollback:
if (@@TRANCOUNT > 0) rollback transaction
EndSave:
GO
/*
sample output from MonitorLogTrend
Date Seconds CPU Busy I/O Busy Idle Packets Received Packets Sent Packet Errors Total Reads Total Writes Total Errors Connections
----------- ----------- ------------------------- ------------------------- ------------------------- ------------------------ ------------------------ ------------------------ ------------------- ------------------- ------------------- ------------------
Feb 28 2004 84457 1172164(60194)-71% 35219(2122)-2% 2733938(264947)-313% 147169220(9126873) 185263066(13039807) 1280(22) 25205322(1740107) 111320245(7478212) 0(0) 1063851(73183)
Feb 27 2004 86400 1111970(92373)-106% 33096(2853)-3% 2468990(239562)-277% 138042347(13209764) 172223259(16381685) 1258(83) 23465215(2221812) 103842033(10042220) 0(0) 990668(80531)
Feb 26 2004 86400 1019596(94192)-109% 30243(2870)-3% 2229428(237728)-275% 124832583(13414618) 155841574(16623477) 1175(105) 21243403(2326204) 93799813(9854245) 0(0) 910137(80992)
Feb 25 2004 86400 925404(99199)-114% 27372(3108)-3% 1991699(232484)-269% 111417965(13813968) 139218097(17454242) 1070(139) 18917199(2637439) 83945568(10738827) 0(0) 829145(92372)
Feb 24 2004 86401 826204(92383)-106% 24263(2969)-3% 1759214(239439)-277% 97603997(12857582) 121763855(16607319) 931(120) 16279760(2134725) 73206741(11138844) 0(0) 736773(95345)
Feb 23 2004 86398 733820(85225)-98% 21294(2521)-2% 1519775(247035)-285% 84746415(11941917) 105156536(16198591) 811(87) 14145035(1822173) 62067897(7210392) 0(0) 641428(95224)
Feb 22 2004 86401 648595(48301)-55% 18773(1694)-1% 1272740(284799)-329% 72804498(6622939) 88957945(9438719) 724(22) 12322862(795107) 54857505(7595942) 0(0) 546204(89108)
Feb 21 2004 86401 600293(60808)-70% 17078(2243)-2% 987941(271742)-314% 66181559(10701505) 79519226(13074105) 702(32) 11527755(1673810) 47261563(8136268) 0(0) 457096(89143)
Feb 20 2004 86400 539485(84550)-97% 14834(2836)-3% 716198(247405)-286% 55480054(13003304) 66445121(16146137) 670(90) 9853945(2243552) 39125295(9802351) 0(0) 367953(94545)
Feb 19 2004 86400 454934(181136)-209% 11998(4850)-5% 468793(148804)-172% 42476750(14063008) 50298984(17291557) 580(152) 7610393(3304533) 29322944(10104186) 0(0) 273408(95868)
spt_monitor_log data used to produce this output
id lastrun cpu_busy io_busy idle pack_received pack_sent connections pack_errors total_read total_write total_errors
----------- ------------------------------------------------------ ----------- ----------- ----------- ------------- ----------- ----------- ----------- ----------- ----------- ------------
570 2004-02-28 21:27:58.107 37811752 1136106 88191562 147169220 185263066 1063851 1280 25205322 111320245 0
569 2004-02-27 22:00:21.417 35870007 1067644 79644865 138042347 172223259 990668 1258 23465215 103842033 0
568 2004-02-26 22:00:21.853 32890213 975584 71917047 124832583 155841574 910137 1175 21243403 93799813 0
567 2004-02-25 22:00:21.453 29851742 882977 64248377 111417965 139218097 829145 1070 18917199 83945568 0
566 2004-02-24 22:00:21.003 26651759 782701 56748868 97603997 121763855 736773 931 16279760 73206741 0
565 2004-02-23 22:00:20.617 23671638 686924 49025014 84746415 105156536 641428 811 14145035 62067897 0
564 2004-02-22 22:00:22.973 20922426 605588 41056138 72804498 88957945 546204 724 12322862 54857505 0
563 2004-02-21 22:00:21.803 19364309 550922 31869069 66181559 79519226 457096 702 11527755 47261563 0
562 2004-02-20 22:00:20.687 17402744 478544 23103172 55480054 66445121 367953 670 9853945 39125295 0
561 2004-02-19 22:00:20.443 14675318 387046 15122360 42476750 50298984 273408 580 7610393 29322944 0
*/
Bill