Edmonds, consider (1), (2) and (3) if you will.  Let's see what the local greens have to offer in rebuttal.

By steven d keeler | Oct 17, 2013

“That’s the result that they get when you premeditate your science,” said Dr. Tim Ball, former professor of climatology at the University of Winnipeg. “When you set out to establish a certain scientific outcome and you program your computers to do that, you shouldn’t be surprised if that’s the result you get. The problem is what they’re getting out of their computers is not fitting with what’s actually happening. Of course, that’s been the problem with the IPCC all along.”

(1)

Extracted from the WORKING GROUP I CONTRIBUTION TO THE IPCC FIFTH ASSESSMENT REPORT CLIMATE CHANGE 2013: THE PHYSICAL SCIENCE BASIS

(c) Model Response Error

GMST = Global Mean Surface Temperature
CMIP5 = Coupled Model Intercomparison Project Phase 5     http://cmip-pcmdi.llnl.gov/cmip5/

The discrepancy between simulated and observed GMST trends during 1998–2012 could be explained in part by a tendency for some CMIP5 models to simulate stronger warming in response to increases in greenhouse-gas concentration than is consistent with observations (Section 10.3.1.1.3, Figure 10.4). Averaged over the ensembles of models assessed in Section 10.3.1.1.3, the best-estimate greenhouse-gas (GHG) and other anthropogenic (OA) scaling factors are less than one (though not significantly so, Figure 10.4), indicating that the model-mean GHG and OA responses should be scaled down to best match observations. This finding provides evidence that some CMIP5 models show a larger response to greenhouse gases and other anthropogenic factors (dominated by the effects of aerosols) than the real world (medium confidence). As a consequence, it is argued in Chapter 11 that near-term model projections of GMST increase should be scaled down by about 10% (Section 11.3.6.3). This downward scaling is, however, not sufficient to explain the model-mean overestimate of GMST trend over the hiatus period.
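As a rough illustration of what a scaling factor "less than one" means in practice, here is a minimal sketch: regress an observed temperature series onto a model-mean series, and a best-fit factor below 1 says the model response should be scaled down to match observations. The two series below are invented for illustration only; they are not taken from the IPCC report or from CMIP5 output.

```python
# Minimal sketch of the "scaling factor" idea from the excerpt above.
# Both series are made-up illustrative anomalies (deg C), not real data.
model = [0.10, 0.15, 0.22, 0.30, 0.41, 0.50]   # simulated GMST anomalies
obs   = [0.08, 0.14, 0.20, 0.26, 0.37, 0.46]   # "observed" GMST anomalies

# Ordinary least squares through the origin: beta = sum(x*y) / sum(x*x).
# beta < 1 means the model overshoots and its response should be scaled down.
beta = sum(m * o for m, o in zip(model, obs)) / sum(m * m for m in model)
print(round(beta, 2))  # prints 0.9 for these made-up series
```

A factor of about 0.9 is exactly the "scale down by about 10%" adjustment the report describes; real attribution studies estimate this factor with full spatial fingerprints and uncertainty ranges, not a single time series.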

Another possible source of model error is the poor representation of water vapour in the upper atmosphere (Section 9.4.1.2). It has been suggested that a reduction in stratospheric water vapour after 2000 caused a reduction in downward longwave radiation and hence a surface-cooling contribution (Solomon et al., 2010), possibly missed by the models. However, this effect is assessed here to be small, because there was a recovery in stratospheric water vapour after 2005 (Section 2.2.2.1, Figure 2.5). In summary, the observed recent warming hiatus, defined as the reduction in GMST trend during 1998–2012 as compared to the trend during 1951–2012, is attributable in roughly equal measure to a cooling contribution from internal variability and a reduced trend in external forcing (expert judgment, medium confidence). The forcing trend reduction is primarily due to a negative forcing trend from both volcanic eruptions and the downward phase of the solar cycle. However, there is low confidence in quantifying the role of forcing trend in causing the hiatus, because of uncertainty in the magnitude of the volcanic forcing trend and low confidence in the aerosol forcing trend.

Author's note: Until this point the IPCC has simply dismissed any role for solar forcing of our climate; now, suddenly, it's a factor to consider? What does this say about the IPCC's methods and their 'science'?

(2)

Along comes this 'study', as reported in Nature: http://www.nature.com/nature/journal/v502/n7470/full/nature12540.html

Mora's team crunched data from climate models to come up with the findings, which are based on the premise that greenhouse gas emissions will continue unabated in the years to come. The data analyzed by his team comes from 39 different climate models developed in 12 countries, most of which have already been used in other published research. But Mora and his colleagues wanted to use the data to do something different: rather than offer projections of average temperature changes worldwide, they wanted to offer location-specific changes and compare those to historical temperature norms.

And once again, computer models, programmed to mimic the past so as to project the future, are supposed to be believed? Read, in part, the Nature editors' summary: Projections of warming are now a fixture of climate modelling exercises. Camilo Mora et al. have used an ensemble of these simulations to estimate ....


(3)

To grasp the full impact of this attempt to discredit the pause in global warming, jump BACK in time to the year 2009: http://eesc.columbia.edu/courses/v1003/readings/Kerr.Science.2009.pdf

From this result, the group concludes that the model can reproduce natural jostlings of the climate system—perhaps a shift in heat-carrying ocean currents—that can cool the world and hold off greenhouse warming for a decade. But natural climate variability in the model has its limits. Pauses as long as 15 years are rare in the simulations, and “we expect that [real-world] warming will resume in the next few years,” the Hadley Centre group writes.


Well, NOW the wait is almost 17 years, and where is this warming? The last desperate gasp is that the heat is 'hiding' in the oceans. Somehow, although surface ocean temperatures have not risen, the heat has jumped into the deep oceans. It's desperation time for the AGW crowd.

http://www.argo.ucsd.edu/global_change_analysis.html

... Domingues et al (2008) and Levitus et al (2009) have recently estimated the multi-decadal upper ocean heat content using best-known corrections to systematic errors in the fall rate of expendable bathythermographs (Wijffels et al, 2008). For the upper 700 m, the increase in heat content was 16 x 10²² J since 1961. This is consistent with the comparison by Roemmich and Gilson (2009) of Argo data with the global temperature time-series of Levitus et al (2005), finding a warming of the 0 – 2000 m ocean by 0.06°C since the (pre-XBT) early 1960s.
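For what it's worth, the arithmetic behind that "consistent with" claim can be checked on the back of an envelope. The ocean area, seawater density, and specific-heat values below are assumed round numbers, not figures taken from the cited papers:

```python
# Back-of-envelope check of the quoted figures: does roughly 16 x 10^22 J of
# added heat correspond to about 0.06 C of warming, if spread through the
# 0-2000 m ocean layer? All physical constants here are approximate.
ocean_area_m2 = 3.6e14    # global ocean surface area (assumed round value)
depth_m = 2000.0          # layer depth used by Roemmich and Gilson (2009)
density_kg_m3 = 1025.0    # seawater density (approx.)
cp_j_per_kg_k = 3990.0    # specific heat of seawater (approx.)

mass_kg = ocean_area_m2 * depth_m * density_kg_m3
delta_t_c = 16e22 / (mass_kg * cp_j_per_kg_k)
print(round(delta_t_c, 3))  # prints 0.054, close to the quoted 0.06 C
```

Spreading the upper-700 m heat gain through the full 0 – 2000 m layer is a simplification, but it shows the two quoted numbers are at least mutually consistent in order of magnitude.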

These scientists are trying to convince YOU that (a) they have temperature data accurate to two decimal places, (b) instrumentation that can measure with such accuracy, and finally (c) there is NO plus-or-minus error ** in either their data OR their methods! Is this 'science' that you find credible? And why does a reader have to dig down into such 'science' to discover references that discuss ANY notion of data error?

** Investigate for yourself:

ftp://ftp.nodc.noaa.gov/pub/WOD09/DOC/wod09_intro.pdf

Ch 6, Page 120, in part.

6.7. ORIGINATORS FLAGS
The originators flags from the Argo program (ocean temperature data, in part) are kept intact in the WOD09 data.
The flags are as follows:
0 – no quality control (QC) performed
1 – good data
2 – probably good data
3 – bad data that are potentially correctable
4 – bad data
(from Argo quality control manual Version 2.0b, 2004).
Note that not all data marked with originators flags 3 or 4 are marked with WOD09 quality control flags. Visual inspection of examples of these data found no reason not to use these data for scientific research. This just means that a quality control test that failed by Argo standards did not fail by WOD09 standards, or that the failing test was not performed for WOD09. The user of WOD09 can choose to use the Argo flags, the WOD09 flags, both, or neither. Argo also supplies a grey list: a list of floats and sensors which have been deemed to have failed at some point. The date of failure is also listed.
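For readers who download the data themselves, here is a minimal sketch of how those originators flags might be used to screen records. The record layout and field name are hypothetical, invented for this example; real Argo/WOD09 files come in their own formats, but the flag values are the ones listed above.

```python
# Sketch of filtering observations by the Argo originators flags listed above.
# The dict-of-records layout and the "argo_flag" field name are hypothetical.
ARGO_FLAGS = {
    0: "no quality control (QC) performed",
    1: "good data",
    2: "probably good data",
    3: "bad data that are potentially correctable",
    4: "bad data",
}

def keep(record, accept=(1, 2)):
    """Keep a record only if its originators flag is in the accepted set."""
    return record["argo_flag"] in accept

profiles = [
    {"temp_c": 12.31, "argo_flag": 1},
    {"temp_c": 45.00, "argo_flag": 4},   # e.g. a failed sensor reading
    {"temp_c": 11.98, "argo_flag": 2},
]
good = [p for p in profiles if keep(p)]
print(len(good))  # prints 2
```

Whether to accept flag 2 ("probably good data"), or to apply the Argo flags, the WOD09 flags, both, or neither, is exactly the user's choice that the WOD09 documentation describes.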
