We have produced a series of technical presentations to highlight the innovative work undertaken in the delivery of an integrated and reliable transport network. The presentations showcase our employees – leaders in their fields – and demonstrate their contribution to practical and affordable infrastructure design solutions.
Smarter congestion analytics
This technical presentation explores the exciting work TMR is doing in smarter congestion analytics. Using a ground-breaking methodology to determine the community costs associated with different causes of congestion, TMR has the opportunity to identify and prioritise future congestion management initiatives, evaluate current initiatives, and benchmark performance.
Kath Johnston has 15 years’ experience in the transport sector, and her work at TMR specialises in the increasingly complex area of traffic congestion – a high priority challenge for many cities. The cost and causes of congestion work presented in this video won a Distinguished Scientific Paper Award at the ITS Congress (Asia Pacific category).
So the congestion analytics work I’m going to present today can be linked to all five TMR values. We are using customer-centric metrics. We're unleashing the potential of our vast datasets. We're doing ground-breaking, courageous research. We're turning this great research into action using business intelligence tools. And empowering our decision makers.
The US Department of Transportation's Chief Data Officer, or Data Dan as most people like to call him, was asked 12 months into the role what his greatest challenge was. His response stressed that collecting data is almost too easy. We have so much data, and the real challenge is in turning that data into information and presenting it to the decision-makers. At TMR we're data rich but information poor. We're working to better integrate our datasets, improve our storytelling, and present that information to our decision-makers.
Autonomous vehicles, cooperative ITS and mobility as a service – three of the favourite buzzwords at the moment – all rely on data. They also generate a huge amount of data. The Google self-driving car generates 60 gigabytes of data an hour. You might not need it all, but the potential is there. And with three million vehicles in South-East Queensland, that's 180 million gigabytes of data per hour. Customer expectations are also changing. That's a very low figure on the slide – I think it's much higher now – in terms of the percentage of Australians owning smartphones. And these offer great opportunities: opportunities to improve our awareness of how people are travelling, but also opportunities to communicate with our customers.
The other big external influence on TMR's data at the moment is the exponential drop in data storage costs, as well as the growing capability of our analytic platforms. So we need to prepare ourselves for all of this: improve our existing data management and interoperability, be agile in our use of emerging datasets, and when cooperative ITS and autonomous vehicles are a reality, we will be able to manage our existing network more efficiently through historic, real-time and predictive analytics.
So today I'm going to provide you with some context as well as showing you three vital components of the congestion analytics work that we're doing at the moment.
Congestion needs to be defined in a customer-centric way, focusing on the worst components of congestion. We also need to understand the impact of that congestion on users – and we're seeing a huge growth in congestion of about seven percent per annum. We also need the methodologies to understand the causes of that congestion so that we can do something about it. We need to focus our infrastructure and non-infrastructure solutions where they're most needed. The Australian and State infrastructure plans have evidence-based decision-making as one of their key recommendations. Improving our metrics and analytics is going to be important to position ourselves for the future. TMR is leading the way in congestion analytics, and today I'm going to provide you with an overview of some of the exciting things we are doing across the Department.
So this graph shows Australian cities' annual congestion levels, with the cost of congestion on the y-axis and the years 2005 to 2015 along the x-axis. The Bureau of Infrastructure, Transport and Regional Economics (or BITRE) updated these figures last year; they're costing avoidable congestion across Australian capital cities. Their costs are based on long-term trends in urban traffic growth and estimates of the associated impacts on road users. The components include delay, fuel and emissions, and they also apply a factor for travel time reliability. Everyone accepts that we're not going to get zero congestion in our cities. This concept of avoidable congestion allows for a certain amount of delay and cost, and the remaining costs can potentially be saved under appropriate policy or operational intervention.
So each of these coloured lines shows a capital city: Brisbane is shown in green, Sydney in blue, and Melbourne in red. Currently Brisbane is at about $2 billion per annum in cost of avoidable congestion, but over the next 15 years BITRE is predicting Brisbane congestion to double or triple – which means we're going to be up at Sydney levels in 15 years. Brisbane and Perth are also expected to have the highest congestion growth in the nation. So what are Brisbane commuters going to say? And what is TMR going to do about it? Solutions like public transport and congestion pricing are going to play an important role.
So our team in Engineering and Technology analysed the percentage growth of our congestion performance against registered vehicles, vehicle kilometres travelled, and the cost of petrol. Vehicle kilometres travelled – a measure of traffic demand on our network, calculated as traffic volume multiplied by the length of road – has grown about 11% over the last five years.
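As a quick illustration of the VKT calculation just described (volume multiplied by length), here is a minimal Python sketch; the link names, volumes and lengths are invented for the example, not TMR data:

```python
# Hypothetical network links: each has a daily traffic volume and a length.
links = [
    {"name": "link_a", "volume_vehicles": 42_000, "length_km": 3.5},
    {"name": "link_b", "volume_vehicles": 18_500, "length_km": 7.2},
]

# VKT for each link is its traffic volume multiplied by its length;
# network VKT is the sum across all links.
vkt = sum(link["volume_vehicles"] * link["length_km"] for link in links)
print(f"Network VKT: {vkt:,.0f} vehicle-km")
```

Tracking this total year on year gives the demand-growth trend mentioned above.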
We've also seen a similar growth in our registered vehicles. The growth in vehicle kilometres travelled over the last 12 months is the highest we've seen in nine years. We don't know exactly what the cause is, but we think it could be related to petrol price – you can see there's been a sharp decline in petrol price in the last few years.
So how are our congestion metrics faring? Average travel time is not doing too badly considering the growth in traffic and the petrol prices. However, our travel reliability measure is worse, and it's expected to deteriorate much more over time unless there is significant investment in management, operations and alternative options. Travel reliability is an important focus for road agencies now that infrastructure is becoming more congested.
It's defined from a user perspective by the additional time that commuters need to allow for in their schedules to get where they need to go on time most of the time. I've got more on reliability coming up.
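One common way to quantify that buffer is the gap between a high-percentile travel time and the average. The sketch below uses the 95th percentile, which is an assumed formulation for illustration only – not necessarily the exact Austroads/TMR definition – and the travel time samples are invented:

```python
import statistics

# Invented door-to-door travel times (minutes) for one commute, one per day.
travel_times_min = [22, 24, 23, 25, 30, 22, 41, 23, 26, 24,
                    23, 35, 24, 22, 27, 23, 24, 52, 25, 23]

mean_time = statistics.mean(travel_times_min)
# quantiles(n=20) returns the 5%..95% cut points; index 18 is the 95th.
p95_time = statistics.quantiles(travel_times_min, n=20)[18]

# Buffer time: extra minutes to allow so you are on time ~95% of the time.
buffer_time = p95_time - mean_time
buffer_index = buffer_time / mean_time  # relative (unitless) form
print(f"Plan for {buffer_time:.1f} extra minutes (buffer index {buffer_index:.2f})")
```

Note how a few bad days (the 41- and 52-minute trips) drive the buffer, even though the average barely moves – which is exactly why reliability is reported separately from average travel time.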
As you'd know, we can't build our way out of congestion. There needs to be effort into maintaining and operating the network – especially if we want to improve reliability outcomes. The PIARC road network operations committee is reporting a big shift in road operations, and I heard the other day that VicRoads is actually spending just as much on operations and maintenance as on building the network. So how do we better inform this shift? How do we focus our attention where it matters? And how do we demonstrate the benefits?
In 2013 TMR approved a congestion management approach. It's currently being reviewed but the fundamentals are not going to change. Essentially it says that we accept a certain amount of congestion. And that's a product of contemporary urban living and a vibrant economy. So the analytics support this in three ways. We measure how the public experience congestion, with the focus on multiple modes and travel time reliability. We inform, consult and engage the Department and externally by turning data into information. We respond by making decisions especially about investments. And this is where strategy hits the road - pardon the pun.
There's been much debate on how we define congestion. In 2011 Austroads agreed on five measures of congestion, which are based on road-user and road-manager surveys. The travel time/10km measure reports the average across the network, normalised to a nominal 10km trip. As an example, a vehicle travelling at 60km/h will take 10 minutes to travel 10km. The variation from posted speed measure reports speed as a percentage of the posted speed, and provides a mechanism to report arterial and motorway efficiency performance across the network – it's a way of adding those two different speed environments together.
The arterial intersection measure reports the number of congested minutes for each approach at our intersections. Reliability is a measure of the amount of buffer time that users need to add to their regular travel time in order to be on time most of the time. And finally, productivity is a product of speed and flow, or 'throughput', on our network.
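The travel time/10km measure described above can be sketched in a few lines; the link speeds and lengths here are illustrative, and the length-weighted averaging is an assumption about how the network figure is rolled up:

```python
# Invented network links: average speed (km/h) and length (km) for each.
link_speeds_kmh = [60.0, 45.0, 80.0, 30.0]
link_lengths_km = [2.0, 1.5, 5.0, 1.0]

# Length-weighted average speed: total distance over total travel time.
total_length = sum(link_lengths_km)
total_hours = sum(l / v for l, v in zip(link_lengths_km, link_speeds_kmh))
avg_speed = total_length / total_hours

# Normalise to the standard 10km trip (a 60km/h network gives 10 minutes).
travel_time_per_10km = 10 / avg_speed * 60
print(f"{travel_time_per_10km:.1f} minutes per 10km")
```

Normalising to a fixed trip length is what makes the measure comparable across networks of different sizes.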
So at the moment we source this data from STREAMS, which is our intelligent transport system platform that operates our signals. We collect this data from all the loops that are cut into the pavement at the signals, and also along our motorways. It's given us good coverage in terms of time and space for performance reporting. The data is updated nightly into our reporting tool and can be calculated and accessed without impacting the real-time operations of the motorway and signal system. At this time TMR is a national leader in reporting and using these national performance indicators – we're the only state that currently reports them to Austroads, and we've aligned them with agency decision-making.
So we can use the same data at various levels of granularity to make strategic, tactical and operational decisions. The strategic reporting includes monitoring trends, benchmarking performance – the material I've just shown you – and assessing network demand and loading. The tactical reporting can include monitoring the effectiveness of improvements, as well as ranking the network. The operational reporting includes signal evaluations, responding to incidents, and informing the public – which is a key role of our traffic management centres and transport coordination centres and will become increasingly important into the future.
So I've got a few examples of these. You would have seen the reported average travel time, and the percentage of our network with good performance for the other national performance indicators – that's how we report them in the annual report and our service delivery statements.
We set the targets and we track them over time, and a green arrow means that we are hitting our targets. One of the technical tools we've developed is an intersection ranking tool: we've got state-controlled signals ranked according to the congested minutes indicator. It can be used to target investment and also for handling complaints in the regions. It's updated monthly and it's located on a SharePoint site in SEQ operations. You can see there the 20 most congested intersections across metro. I've blown up that section to show you the top ten – four of the top ten are on Gympie Road, and this has been used to justify studies into Gympie Road in metro region. We're also looking at rolling this out across SEQ, so north coast and south coast as well.
Finally, the operational example here is what it looks like in STREAMS. You can see the NPI speed data presented using coloured roads in the STREAMS explorer, and that can be used in the traffic management centres. So what impact is this congestion having on our users? We've used the Australian Road Research Board (or ARRB) Austroads methodology to cost the delay, vehicle operating costs and emissions below a speed threshold. We've used 70% of posted speed for motorways and 55% for arterials. So, for example, if you're travelling at 80km/h on a motorway and the speed limit is 100, that's not excessive congestion, so we don't cost it. It's only once the speed drops below 70 that it's costed and considered excessive congestion. We cost travel time – people's time wasted in cars, estimated as a cost per hour – and we also estimate the fuel costs and the emissions. This is calculated, again, using our STREAMS data, so we have it along each road where we've got instrumentation. We've also compared the results against the BITRE results and they are comparable.
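A simplified sketch of this threshold-based costing follows. The 70%/55% thresholds come from the talk; everything else – the value-of-time figure, the delay formula relative to the threshold speed, and the traffic numbers – is an invented placeholder, not the actual ARRB/Austroads parameters:

```python
# Excessive-congestion thresholds as a fraction of posted speed (from the talk).
THRESHOLD = {"motorway": 0.70, "arterial": 0.55}
VALUE_OF_TIME_PER_VEH_HOUR = 20.0  # $/vehicle-hour, placeholder figure

def excess_delay_cost(road_type, posted_kmh, actual_kmh, volume_veh, length_km):
    """Cost delay only when speed falls below the excessive threshold."""
    threshold_kmh = THRESHOLD[road_type] * posted_kmh
    if actual_kmh >= threshold_kmh:
        return 0.0  # e.g. 80km/h on a 100km/h motorway is not costed
    # Delay per vehicle relative to travelling at the threshold speed.
    delay_hours = length_km / actual_kmh - length_km / threshold_kmh
    return delay_hours * volume_veh * VALUE_OF_TIME_PER_VEH_HOUR

# A 100km/h motorway link running at 40km/h, 2,000 vehicles over 5km:
print(f"${excess_delay_cost('motorway', 100, 40, 2000, 5.0):,.0f}")
```

The same structure would extend to fuel and emissions terms by adding further per-vehicle-hour (or per-litre) unit costs.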
So our daily benchmarking analysis is shown on this slide. We've got average daily cost along the y-axis, and along the x-axis we've got 2012 through to 2015. The different lines show the different days of the week. You can see that it fluctuates between about $600,000 and $1.7 million per day on weekdays, and it's less than half of that on weekends. There's a lower cost on Mondays, and it progressively builds up over the week. You can also see seasonal fluctuations in December/January. And you can see a clear upward trend.
This is calculated at a 7% per annum increase in the daily cost of congestion for Brisbane state-controlled roads. That's higher than the traffic growth, and is therefore an increase in relative terms. That's 2012 through 2015 – four years of data. We've also done analysis around vehicle types and, in Brisbane at least, we see that a lot of the congestion is associated with passenger vehicles. A substantial proportion of the cost is related to delay – over 90% of it is delay, as opposed to emissions or vehicle operating costs.
This is a similar graph, but here we've done the same 2012-2015 analysis per congested vehicle kilometre travelled. So it's per VKT, and you can see that it fluctuates between 55 and 65 cents per kilometre – which is about $3 for a 5km section. Essentially, our free roads are not free, especially in congested conditions.
We've also done numerous case studies, and this is just one example of an incident on the Bruce Highway. In 2014 there was a truck rollover with about five cars involved – people probably remember it. The incident happened just after midday, and the highway was closed in both directions for about an hour. There were long queues in both directions, with impacts across a lot of the network, and it wasn't until five o'clock that the road was reopened in the southbound direction. Using our cost of congestion data and methodology, we're able to look at the typical day – this is what typically happens on that section of road, on average – and on the incident day you can see the cost of congestion. We're then able to calculate the congestion associated with that specific incident: $580,000. To put that in perspective, that's about half the daily congestion cost for the whole of Brisbane, from that one incident. You can also clearly see how long the impact of the incident lasted – eight hours – and we've used that information to inform incident response services, and to justify the expenditure on incident response services.
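The incident-versus-typical-day comparison can be sketched as follows; the hourly cost profiles are invented for illustration, and the simple "excess over the typical day" rule is an assumption about how the attribution is done:

```python
# Hourly congestion cost profiles in $'000, midday through evening.
typical_profile = [10, 12, 15, 40, 55, 30, 20]     # average day
incident_profile = [10, 12, 90, 180, 160, 70, 22]  # incident day

# Congestion attributed to the incident: the excess over the typical day,
# summed across the hours the impact lasted.
incident_cost = sum(max(0, inc - typ)
                    for inc, typ in zip(incident_profile, typical_profile))
print(f"Incident-attributed congestion: ${incident_cost}k")
```

Plotting the two profiles together also shows the duration of the impact directly – the hours where the incident curve sits above the typical one.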
So the information isn't perfect. This map shows the coverage of the STREAMS data that we're using at the moment. We've got about 900km, but we're still missing a section in the middle, and this is due to the different signal system used by Brisbane City Council. We're working with Brisbane City Council to help close these gaps, and I'll talk a bit more about that.
I think it's really important to note that all data and analysis have limitations, and it's important for us to understand them so we can, one, improve them, and two, interpret the information with that knowledge. For example, we don't pick up detectors that have failed, or volumes where vehicles are stopped on the road, and at the moment we don't cost bus delays. However, we've worked with the Australian Road Research Board and TransLink to develop a methodology to incorporate those multimodal components – the on-road bus excessive congestion costs. We've also come up with a methodology to cost reliability, and that was what won the ITS World Congress paper award.
So we're also working with emerging data sources – for example, speed data from mobile phones and in-car navigation systems – and that includes our local road network as well. To improve our understanding of congestion, we've drawn on the latest research on the underlying causes of congestion, and this is going to sharpen the information for decision-makers. The Federal Highway Administration in the US produced a 'Sources of Congestion' pie chart a few decades ago that's been referenced many times since. The purpose of our pilot, which I'm going to present today, is to replicate that pie chart, but also to come up with a methodology that can be implemented. It's a multivariable analysis building on the cost of excessive congestion.
We can use the STREAMS data to build normal traffic profiles and separate abnormal from normal congestion. The normal congestion can help inform our capacity investment choices. We then use other external events such as road works, incidents, special event data from Queensland Traffic, and weather data from the Bureau of Meteorology, to then identify the causes of the abnormal congestion. We then use that information to assist our response and manage network operations. That third tier is what we use to build the pie chart.
I'm going to step you through an example. We've used 12 months of data from 2014, and this graph shows one link on the network – the Pacific Motorway southbound, just before the Gateway merge, at 5:30 in the afternoon. The graph shows the number of days at different speeds, and the posted speed here is 100km/h. There are weekends, school holidays and public holidays where the speed is not too bad – on those days the performance of the road is fine. This is our excessive congestion threshold: at 70km/h you can see there are quite a few days where that section of road is congested. We cost all of the congestion below that threshold. For this link at this time of day, the average speed is 53km/h across the year, and this is what we use to define a range of normal.
So you can also see that a lot of the normal congestion is excessive. The cost below the threshold is attributed to the normal, recurring part of our pie chart – fairly straightforward. It's a little bit more complicated when we have an incident. This is the same section of road, and an incident will drop the speed below the normal range – here it looks like it's about 14km/h – and this is what we cost for that incident example. A portion of that, however, cannot necessarily be attributed to the incident, because even if the incident hadn't occurred, that section of road could still have been congested. So we have another slice of the pie where we put that portion: a recurring bucket that occurs during an abnormal event. The last part of the speed drop, below the normal range, is put into the incident slice of the pie. Once we've done that across the whole network, across the 12 months of analysis, we know what the causes of congestion were for Brisbane in 2014.
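The slicing logic just described can be sketched as a simple decision rule. This is only an illustrative classifier: the normal-range lower bound is an invented number, and the real method apportions the cost of a single observation across slices rather than labelling it whole:

```python
def attribute(speed_kmh, threshold_kmh, normal_low_kmh, incident_active):
    """Assign one speed observation to a slice of the causes-of-congestion pie."""
    if speed_kmh >= threshold_kmh:
        return "not excessive"  # above the threshold: not costed at all
    if speed_kmh >= normal_low_kmh:
        # Within the normal range for this link/time: recurring congestion,
        # even if an incident happens to be active at the time.
        return "recurring (during abnormal event)" if incident_active else "recurring"
    # Below the normal range: attributable only if an external event matches.
    return "incident" if incident_active else "unknown"

# Pacific Motorway example from the talk: 100km/h posted, 70km/h threshold,
# ~53km/h average; assume (for illustration) a normal range down to 40km/h.
print(attribute(14, 70, 40, incident_active=True))   # incident slice
print(attribute(53, 70, 40, incident_active=False))  # recurring slice
```

The final "unknown" branch is what produces the unexplained slice of the pie discussed shortly.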
A large slice of that is recurring congestion. You'll also notice that there are a lot of incidents that happen during normal congestion – that's about half of the recurring section.
So this is congestion that we need to provide infrastructure solutions for. Incidents contribute $23m per annum – it might look like a small slice of the pie, but it's still a significant amount of congestion.
So this is showing you the whole network, but now that we've got the methodology we're also able to understand what's happening on each link. For incident congestion, we can rank the network for purely incident-related congestion and then place our traffic response units closer to where that congestion hits the network hardest, thereby reducing congestion. You'll also notice that there is an 'unknown' slice of the pie – this is where we can't attribute a cause to the abnormal congestion. There are some improvements to long-term roadworks data underway to help with this; we suspect that quite a large part of that slice is due to roadworks. As well as improving the methodology, we're also implementing it in a business intelligence tool called NetBI. It's a web-based visualisation and reporting tool, and I can do a quick demo if you're keen to see it, because I think the real benefit is seeing the functionality live.
So this is the same graph that I showed you, showing the daily cost of congestion – though it's only one year, 2015 – but across all of SEQ. It shows the days of the week on the lines, which you can see when you hover over them. Using this tool, because it covers all of SEQ, we're able to split the data into the regions. You can see metro region has the highest amount of congestion, South Coast isn't far behind, and North Coast is much smaller. You can slice and dice this information however you want, and it's designed to sit on decision-makers' desks, ready to access.
The other example I wanted to show you is a ranking of the most congested links in SEQ. This report is set up so that you can see the links down the side – including the link I was showing you before, the M1 between the Gateway on-ramp and Rochedale, a bit further south – and you can then see, around the map, all of the information that a decision-maker might need for that link: level of service, speed, occupancy and volumes along that link of the road.
So this is where we're going to sharpen our evidence-based decision-making for capacity choices and operations. This is where we're going to see the congestion management approach and the Australian and State infrastructure plan recommendations realised. This is where strategy hits the road. And this is where we're going to prepare ourselves for the rapidly changing transport sector and make smarter decisions to improve congestion for the benefit of the Queensland community.
Speed choices with respect to design speed – simple yet often misunderstood
Ricky Cox, a road design specialist at TMR with 51 years' expertise in the planning and design of roads, is an established authority in the research and development of road design standards.
Ricky has championed the development of multiple computer systems used in road design and also has extensive experience in their application. He has played a key role in formulating sections of TMR’s Road Planning and Design Manual, and has also had major input into new Austroads design guides.
Ricky's approach to his work has seen him recognised with a number of awards and achievements, including an Australia Day Achievement Award in 1999, an Austroads Achievement Award in 2010 and a Public Service Medal in June 2011.
Hello everybody, what you're going to see next is an absolute tour-de-force on speed design and how it applies to our road network. People think ‘all you do is drive along a road, and you can drive to conditions,’ but there's actually a lot more to it than that.
What you're going to see next is from one of our eminent experts, Ricky Cox. You've just heard the background to his decades of experience, and I'm sure you'll get a lot out of it.
Ricky Cox: Well, thank you very much for the invitation today to present a DG's Tek-Talk. We'll start off by looking at our values and how they relate to this particular subject matter. Design speed is fundamental to geometric road design – it's fundamental to coming up with a safe road for our customers. There are challenges associated with road design, and all of these values come into play in achieving the Department's role in connecting Queensland.
So today we'll be looking initially at some key definitions; the effect of design speed on the resulting alignment; we’ll look briefly at the design speed history within the department; consequences of previous practices; and how that has led on to some current issues with respect to design speed; and then we'll conclude.
So the key definitions I want to touch on are, of course, Design Speed, Desired Speed, and Speed Choices. A couple of others will come out in the wash. Design Speed is the fundamental parameter for geometric road design: it's used to relate all the other road design parameters such as horizontal curve size, vertical curve size, grade, sight distance and so on. You'd think that, being a fundamental parameter, it would be well understood, but as we'll see there have been issues over the years.
The first one we look at now is Desired Speed. It's the 85th percentile speed that drivers build up to and settle at on the longer straights, and through the larger radius curves under light traffic conditions. I think from our own driving experience we can sort of relate to that – that we drive a particular road, there's times we will build up speed, settle at those speeds, reduce speed later on the curves and so on, if necessary for some of the types of road. So Desired Speed together with the road type influences driver expectations or their choice of speed, as they drive along the road. With some roads drivers do expect they'll have to slow down for curves and so on, and other times on our higher-order roads they do expect to have a more uniform travel speed.
The other term I want to introduce is Speed Choices because that has come about in recent times with respect to speed limits at roadworks. It was recently introduced within the department, it's been intended to overcome problems with compliance with speed limits at road works and this is often due to unrealistically low speed limits having been set. So fundamental to this Speed Choices concept, is recognising what drivers are prepared to accept under different conditions, and really we will see this whole approach is consistent with what has to be undertaken with respect to setting Design Speed for new roads and even upgrading of existing roads.
So the Design Speed, then – what do we really need it to achieve for us? It has to suit a large percentage of our road users. It has to reflect the way the road is driven: from a design point of view we're trying to predict how the road is going to be driven, so we can design all of the various aspects accordingly. On some of our roads, if the operating speed varies along the road, then the design speed varies accordingly. At the end of the day our customers – our drivers out there on the roads – know nothing about this concept of Design Speed. They read the road according to a number of factors, which we will look at shortly, and that sets their Desired Speed and their expectations for driving the road. If our design doesn't meet the driver expectations formed from how they've read the road, we can have safety problems.
So what are the things that influence how drivers read the road and set their Desired Speed and their expectations? Obviously the geometry of the road has a fundamental influence on driver expectations. Likewise the terrain and the physical environment – drivers can see from these why the road might be a bit tighter here than it is out in western Queensland. The built environment where the road is located influences their expectations, as do the road function, the speed limit, the road cross-section, road surface conditions, and the level of enforcement. That list is generally in descending order of influence – the level of enforcement can vary a little, but not necessarily as much as people may think. These are the factors that influence drivers' reading of the road.
So how then do we come up with our Design Speed? We do have a definition in Australia - the Design Speed has to be greater than or equal to the operating speed along the road. The operating speed in turn is taken to be the 85th percentile speed at any point on the road under light traffic conditions. So Design Speed certainly is the critical parameter, as I said, because it relates all of the other geometric parameters to make sure we've got them all of appropriate size.
If we underestimate the speed that we will get on the road, we will be deluding ourselves about whether we've provided the essential capabilities, such as sight distance and curve size, at some point on the road.
On some roads the operating speed can be different at a given point in each direction, and consequently the Design Speed can be different in each direction – something as simple as that still causes quite a bit of angst for some designers. And a big issue for us these days is that much more of our road design work is associated with existing roads – we're upgrading parts of roads – and it's quite rare these days to build a brand new section of road.
And we will find that the 85th percentile speed on the section of road that we wish to upgrade or undertake work on is often greater than the original Design Speed for that road. How that's come about, we'll see shortly.
So I've mentioned this 85th percentile speed – what do we really mean by that? It's the speed that 85% of drivers will be at or below at a given point on the road. We choose the 85th percentile speed because it sits around the point of inflection on the normal distribution – speed distributions on the road align pretty well with a normal distribution, and the same data can also be expressed in cumulative form. Above that point we're dealing with a rapidly decreasing percentage of road users. And the drivers who are at a speed above the 85th percentile speed are considered to be aware of the increased risk that they're taking, and to be more aware of what they're doing.
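For illustration, here is how an 85th percentile speed could be computed from spot-speed samples; the samples below are invented, and in practice the percentile would come from a proper speed survey:

```python
import statistics

# Invented spot-speed samples (km/h) at one point on a road.
speeds_kmh = [88, 92, 95, 97, 98, 100, 100, 102, 103, 104,
              105, 106, 107, 108, 110, 112, 113, 115, 118, 124]

# quantiles(n=100) returns the 1st..99th percentile cut points; index 84
# is the 85th percentile: 85% of drivers are at or below this speed.
v85 = statistics.quantiles(speeds_kmh, n=100)[84]
print(f"85th percentile speed: {v85:.1f} km/h")
```

A design speed chosen at or above this figure then sizes the curves, grades and sight distances for the way the road is actually driven.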
Desired Speed and Design Speed together also have a bearing on whether we are seen to be providing a high-standard piece of road for a particular road type or not. The horizontal geometry influences speeds; speeds together with the geometry influence the actual standard of the road; and the alignment standard we set on the road in turn influences drivers' reading of the geometry. So it is quite a circular relationship.
Here are some photographs and examples of what we would class as high-standard alignments. They don't all have to be high-speed alignments. The one on the left is an urban arterial road here in Brisbane – quite a recent road in the Department's history. For its function it's a high-standard road: drivers have no trouble driving it, and can make all the lane changes and the acceleration and deceleration for traffic signals and so on that occur on that type of road without any trouble. The one on the right is a section of the Cunningham Highway, or the Southern Ipswich Bypass – a high standard of alignment by virtue of the horizontal curve size, the grades and so on.
Another example: the one on the left is a section of the Bruce Highway. It's quite a bit older in design than those previous two, but it happens to be a section with good curvature – the curves are large and the road relatively straight – and likewise good vertical geometry: good-sized crest vertical curves, so sight distance isn't an issue. And the one on the right is right out west – a much lower-order road in terms of road function, but still with a high standard of alignment, simply because the terrain lent itself to that: there was no restriction on providing good-standard horizontal curvature, vertical curve size and so on.
I'm sure we can all relate to roads like this. Obviously this road is quite a bit tighter – we've got quite a bit more in terms of curves. The curves are smaller, and we've got more in terms of grades, crest sizes and so on. And these are the types of roads where, even though they have a 100km/hr speed limit, you're not expected to be able to travel every part of that road at 100km/hr. There are curves on that road which you slow down for, and then you speed up and get back to your Desired Speed if you get the chance. A lot of our middle-order roads in Queensland are like this.
And we do have what we call a lower standard of alignment – that doesn't automatically mean it's a low standard alignment. Because of the terrain conditions we're getting now into mountainous terrain – I think we've got a road part of which is familiar to Jarryd. So speeds will vary. Speeds will be lower in this case, but they've dropped because we have different driver expectations. And if the road is in keeping with driver expectations and how they read the road, it might be a lower standard of alignment but it's not a low standard of road, because of how it matches the driver expectations.
Looking briefly at our Design Speed history, because this does underlie much of the current practice and some of the misunderstandings that we still have these days. We'll see that we've had some varying definitions of what we mean by Design Speed over time. And we'll see how the Design Speed is not necessarily linked to our speed limit practice.
The very first point there says up until 1940 in Queensland the open road speed limit was a whole 12mph (miles per hour). Needless to say it wasn't particularly well respected by the 1930s.
So in 1940, the open road speed limit was increased to 40mph – about 65km/hr. In 1950 the open road speed limit in Queensland went up to 50mph, or 80km/hr. In 1961 (before I started work at this place) it went up to 60mph, or essentially 100km/hr. In 1995 we increased the open road speed limit on some of our better standard rural roads to 110km/hr.
We're looking briefly at a couple of tables put together from design guides of the time. Over there on the left, “Form 21 ZL” showed that in the year when the open road speed limit had just gone up to 40mph there was recognition that the road network was going to improve, vehicles were going to improve, and the roads would need to be capable of handling these changes in driver speeds in time. So most roads were designed for 50mph or more. Even in that era it was recognised that in the flat country we would need to design for 60mph. Most of our roads, the middle-order roads, we would design for 50mph or 80km/hr, and in mountainous terrain obviously something less. The Design Guide the Department put out in 1965 was in use up until 1980. When it came out, of course, our open road speed limit had gone up, but the thinking was obviously that for most of our highways we needed to design for at least 60mph in undulating country, and 70mph or 110km/hr where the terrain was much freer for us to build the roads. But again, for these middle-order roads the expectation was we would have a nominal 50mph or 80km/hr Design Speed. So you can see that that approach was somewhat subjective, somewhat aspirational even, because it was making some provision, or expecting us to make some provision, for changes in the road network and vehicle characteristics over time.
Now it gets quite interesting – in the late 1970s the then Australian Road Research Board undertook research into how our roads were actually being driven.
This research showed that roads with Design Speeds of 100km/hr or more were driven at a uniform speed. But the roads where, as I said before, we had assumed an 80km/hr Design Speed – primarily in the interest of affordability – were driven quite differently. They weren't driven at 80km/hr; they were driven at speeds varying between about 80 or 90 up to about 110km/hr for most of them. Somewhat lower speeds if it was a mountainous road, but the ones around the 80km/hr Design Speed were driven very, very differently. People varied their speeds, and were quite happy to do that. It also showed that the speeds drivers were prepared to take around some of the curves were higher than what the design guides expected at the time. Drivers were prepared to use a higher amount of side friction than the design guides said would be good practice.
So this did lead us to have to look at how we design the road. That last point is worth making too: it had sort of been known that vertical alignment didn't have as much effect on a driver's choice of speed as the horizontal alignment did. This research clearly showed that, and interestingly, research around the world has since reinforced the point. So in 1980 the Department switched from its own in-house design guide – “the brown book” as it's fondly known – to the 1980 Interim Guide to Geometric Design for Rural Roads. This guide introduced a clearer definition of Design Speed: it had to be greater than or equal to the 85th percentile speed, as we've covered. So this was the first real objective definition of Design Speed. Prior to that it used to be somewhat vague – a speed rarely exceeded by drivers – but didn't really go into it any more than that. So it was the first objective definition of Design Speed in the world. The USA didn't take it on board until 2001, and the UK has since taken the 85th percentile speed on board as part of its definition.
So this 1980 Interim Guide recognised that if we have the higher-speed roads where speeds are going to be 100km/hr or more, fine, they'll be travelled at uniform speed. You can, as in the past, have a single uniform Design Speed. But on these other roads you will probably have varying speeds. You'll need to design accordingly, and it provided a speed prediction model to help you work out what the speeds would be. And it introduced new side friction factors for horizontal curve design, based on the research.
And it is interesting that most countries today still don't really tackle the issue of varying speed along the roads. Even on our higher-order roads sometimes we do have to transition from one speed limit to another. And how does that occur, what are the speeds likely to be? So we do have a tool in Australia. It’s just a pity it hasn't been used as well as it should have been over the years.
So some consequences of previous practices. In the 1960s and 1970s our practice was to have a single prescribed Design Speed for a fair length of road. And we got away with that because a lot of our design criteria provided a margin of safety themselves, and the Design Speed itself provided a margin over the speed limit in force at the time. Designers were usually able to provide better than minimum radius horizontal curves at various points on the road. Most drivers said “thanks very much, I'll drive them a bit faster”, as the research showed. And those drivers were prepared to use higher amounts of side friction than had first been thought. Interestingly, through the 1960s and 1970s our road network was improving significantly, but drivers were still used to a varying standard of alignment along the road, even on our major roads such as the Bruce Highway. We'll come to that a bit more shortly.
So again, the upshot of the 1980 Interim Guide was: yes, recognise the roads where we could have uniform travel speed and consequently a single uniform Design Speed, and recognise the roads drivers would drive at varying speed, and design accordingly.
But what actually happened? That speed environment model – the tool for predicting speeds along roads where speeds would vary – quickly came to be ignored, mainly because it was poorly understood. This was 1980; the first training course within the Department on the speed environment model occurred in 2001. Things occurred through that era which made it a bit difficult for us. And it didn't support current practice, which was “set a single Design Speed and everything will be right”, when in fact the research showed the road would be driven very differently. Another problem was that on a lot of our roads – even new roads – we would at times have a design exception. For example, we would come to a major rock cutting where we couldn't put in a 9000m crest vertical curve; in this cutting we could really only afford about a 3000–4000m crest vertical curve. How was that justified? “Let's just assume that through this cutting the Design Speed is now down to 80km/hr instead of 100–110.” Now the people driving it – yes, there's a grade there going over the crest, but they don't really read the fact that their sight distance is down quite a bit on what they're getting elsewhere. They don't really see it. They don't see any need to reduce speed. The speed limit wasn't reduced there, and we couldn't provide any sensible advisory speed to slow down to, because drivers couldn't really see the problem. So that was always a problem for the Department, and it's still around. It's only in recent times that we've got the rigour behind how we assess and justify these particular types of design exception.
Again, misreading of one paragraph in that 1980 Interim Guide led to overuse of simply setting a 100km/hr area Design Speed for some of the higher-order roads and motorways, when they really should have been designed for that bit more – a 110km/hr Design Speed – to provide that bit more margin of safety.
Again, misreading what was in the Guide, we saw a lot of roads with a 100km/hr speed limit designed for 100 rather than 110. And a 600m radius curve was considered fine, whereas previous practice was that it really should have been more like 900m. So we've seen curves of that size on some of our roads when they didn't even need to be – it wouldn't have cost us any more to put in something better, but somehow that came about. And this is the interesting one: even though I've said the speed environment model was quickly ignored, some aspects of it got applied to roads it was never intended to be used for, such as some of our higher-order, higher-speed rural highways and indeed motorways. The old Gateway Motorway, on the north side of the river here in Brisbane, had aspects of the speed environment model applied to it. So we end up with curves that are really too small for that type of road, and we've had the safety issues with that as well.
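The link between Design Speed and curve size that sits behind those numbers can be sketched with the standard minimum-radius relation. The superelevation and side friction values below are illustrative assumptions only, not figures from any design guide:

```python
# Minimum horizontal curve radius from the standard relation
# R = V^2 / (127 * (e + f)),
# where V is speed in km/h, e is superelevation and f is side friction demand.
def min_radius(v_kmh: float, superelevation: float, side_friction: float) -> float:
    return v_kmh ** 2 / (127.0 * (superelevation + side_friction))

# Illustrative values only: a higher design speed, combined with a lower
# assumed side friction, pushes the required radius up sharply.
r100 = min_radius(100.0, 0.06, 0.12)
r110 = min_radius(110.0, 0.06, 0.10)
print(f"100 km/h: R >= {r100:.0f} m")
print(f"110 km/h: R >= {r110:.0f} m")
```

The point the formula makes is the one the talk makes: the radius grows with the square of speed, so designing for 100 instead of 110, or assuming drivers accept more side friction than they really should, produces noticeably smaller curves.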
Throughout my career money has always been tight for roadworks. We have had a couple of road building boom periods, but they were short lived. Money has always been tight, and the Department has always been innovative in how to spend that money and improve its road network. I really do think we've achieved a lot for our customers with that. But as we've improved the road network, we are now seeing lower driver acceptance or expectation of a varying standard of geometry on some of our higher-order roads. I'll show this by means of an example on part of the Bruce Highway south of Gympie. Part of it has now been replaced, another part is to be replaced soon, and a third part will be coming along in the not-too-distant future as well. Parts of this road, though, were designed in the 1940s – and this part is still open as part of the Bruce Highway right now. The part designed in the 1940s, in that era of 80km/hr Design Speed, has horizontal curve radii in the range of 300m to 700m as typical curve sizes. Other parts were designed in the 1960s and had the higher Design Speed of 60mph, 100km/hr, with commensurately larger radius curves. The little box here is simply putting up something that was in the manual, saying we should be aware of what's going to change over time due to the improving road network and vehicle capability.
So just to bring this home – this last bit about driver acceptance and expectation changing on our higher-order roads due to what they've seen elsewhere on the road. This part of the Bruce Highway (left-hand photograph) shows, back in 2003 or so, the 300m radius curve with its advisory speed sign. As traffic grew and the road network to the south improved, so did the need to say “hang on, this curve's not as good as you think!” It got chevron alignment markers added, and then of course we had to go a stage further and put in the wide centre line treatment and reduce the speed limit, to overcome the fact that driver expectations are “why should I be slowing down for this part?” when all the rest of the road is so much better.
So we still have issues with respect to Design Speed. Brownfields projects – in other words, projects where you're upgrading parts of the existing road, essentially keeping the existing alignment – have become even more dominant now. The days of the 1960s and early 1970s, where we would replace the alignment, have long gone except for some rare cases. But we still see within the Department – and I shouldn't say just the Department, it's national – that most designers would prefer to have a single Design Speed. Some of that is still down to misunderstanding of Desired Speed and how it influences actual operating speeds on the road. We see overuse of “ah well, the Design Speed just needs to be the speed limit plus 10km/hr”. On some roads that's true, but we saw recently up in Cairns with the Bruce Highway upgrade project, where a service road alongside the Bruce Highway was set in the project documentation to have a 60km/h speed limit, which was appropriate for the service road. But that didn't mean every curve on it had to be designed for 70km/h – there were a couple of bends in the road, because it used part of the existing road network. It took quite a few arguments to get that accepted, just because of the way the project documentation had been written.
We also see these days that overtaking lanes have become far more prominent on our two-lane, two-way roads. We have seen how speeds in overtaking lanes increase – it's not just the people doing the overtaking, it's the ones being overtaken who sometimes increase speed. But there are other issues that come into play with overtaking lanes. There are the additional dynamics of people accelerating and changing lanes, and the geometry needs to support those basic operations that occur within an overtaking lane.
We're seeing these days the evolution of the Safe System philosophy, which is all about improving the road standard, road speeds and vehicles in order to make things a lot safer for drivers, with the fundamental aim that if a driver makes an unconscious mistake they don't pay for it with their lives.
So fundamental to the Safe System philosophy is reducing speeds where appropriate, and also having a forgiving road environment – be it beside the road or on the road itself, with the use of safety barriers and so on. It's certainly an evolving philosophy within Australia; how it gets implemented, and how much can be taken on board from Europe and adapted to Australian conditions, is still being reviewed. We've certainly seen that thinking in place within the Department over the years on some of our middle-order roads which had a 100km/hr speed limit. These were intermediate speed roads where speed varied; we've seen speed limits reduced to 90, and 80 in some cases, and the amount of traffic on the road actually allowed that to be more readily accepted as well.
We're also seeing now that many drivers are less willing to slow for curves, even on these middle-order roads. So that's going to be quite a challenge for us – they're prepared to use even higher side friction than the research back in the late 1970s showed. And I wonder – and this is just my thinking on that last part – are our younger drivers less able to read the road? Because they've come into the driving experience with a better road network than I certainly did, all those years ago.
So in conclusion, Design Speed is the fundamental geometric design parameter because it relates all of the other geometric parameters, to make sure they're of the appropriate size. Together this then sets the standard of the road – and have we got that right in terms of driver expectations? It is a simple concept, but one that has surprisingly been misunderstood over time due to that original entrenched thinking.
We're still having trouble at the national level with the Austroads road design task force on how some of this is covered. As we've said, some of our roads can have a single Design Speed as per long-standing practice, but others will not – must not – because that's not the way they're going to be driven. We have to face up to that.
So the final test in terms of the right design: have I got it right? Is it going to be driven the way I've designed it? Because drivers don't know or care about the Design Speed. They will drive according to how they read the road, how they see everything. We've seen how their speed is influenced by the horizontal geometry, the topography, and secondary factors such as cross-section and so on, as we went into in detail earlier.
And it is worth reinforcing the fact that the vertical alignment has very little influence on the speed of cars or the smaller vehicles on the road.
And to wrap it up again, with this new notion of Speed Choices – Speed Choices with respect to Design Speed. From a design point of view we have to predict and understand how the road will actually be driven, rather than just make assumptions or aspirational statements. We have to be able to back up our design with experience, and with the tools to be more confident we've got it right in terms of how it's driven. We've got speed prediction models to help us with that.
At the end of the day, our customers don't know what assumptions the designer made. It comes down to how they read the road with the cues we provide them, and with the aids in terms of speed management – if we've got that right, that helps them. And a bit more of a challenge for designers these days: we do have to understand how the road is likely to be constructed where the new works interface with the existing road. For brownfield projects we're working on the existing road virtually everywhere, so we do have to understand how it's going to be constructed, because in turn that will have a major influence on driver expectations as they go through the road works, and on whether we've got the speed limits associated with those roadworks correct.
So thank you very much.
Chris Russell and Mike Whitehead
The Design Storm – TMR’s approach to dealing with Queensland’s unpredictable weather through flood mitigation, drainage and road design
This technical presentation features Chris Russell and Mike Whitehead, leading experts within the department responsible for ensuring infrastructure designs adhere to the highest standards.
Road drainage is a fundamental consideration in every planning and design project. In light of the current floods and the unpredictability of the Queensland weather, this presentation highlights the importance of road drainage and the steps TMR takes to ensure it meets the immunity and accessibility expectations of the community, the protection of the investment made in road infrastructure, the safety of all road users and the protection of the environment.
Chris Russell: Thank you for giving us the opportunity to talk about some technical issues in the drainage space. We've called this ‘The Design Storm: Drainage challenges and design in TMR’. What we'll do is present some typical flooding and drainage issues faced by TMR, then quickly cover our current approach, how the discipline is changing and how we're responding to this. We acknowledge our values and diversity policies, and we are working with our teams to get those entrenched – that's something we constantly work on.
So a quick outline. We want to talk about the unpredictability of storm events – that is, I beg your pardon, real storm events versus design storm events – current drainage guidelines, which Mike will talk about, climate change impacts and what that means for TMR drainage, hydrology changes (there are big changes going on in the discipline at the moment that we'll talk about), urban drainage design, creeks and streams, a little bit on intersection safety and how that's changing, and a bit of a wrap.
So let's talk about storm unpredictability. On the fifth anniversary of the 2011 floods it's timely to think about the importance of flooding and drainage issues. Traditionally we use a design storm to model drainage. What a design storm is, it’s an average storm, whatever that is, and the problem with storms is that they're not average – as the 2011 Toowoomba and Grantham floods showed us, and I’ll show you something in a minute on Grantham. These images are from Toowoomba and we remember the devastating impacts that they had – so let's have a look at Grantham.
This next slide shows – what I've done here is taken the Helidon stream gauge, which is just upstream of Grantham, and plotted the historical floods in Grantham from the time it was operating. It started operating in the late sixties, early seventies, so I've put all the actual flood events on this chart – time along the bottom there, and recorded creek water level going up the top. And what you can see there, I think you'll agree, is a reasonable consistency, and that includes the '74 flood, the top dashed one, which was the biggest flood up until January 2011. And you're probably looking at that thinking the rate of rise – how fast the stream comes up – is fairly consistent there, yes? So you can take it that as far as the people of Grantham go, this was their understanding of flooding prior to 2011.
Ready for me to add the 2011 flood? Yeah, wow, look at that. Two things really stand out. First of all, the peak was just so much higher than anything they'd ever experienced. But then the rate of rise – look at that. The other floods were typically no more than about one or two metres per hour; in 2011 it came up at over ten metres per hour, and of course as far as evacuation goes, you can't do anything in that other than, basically, as the people did, climb on your roofs and hope for the best. So this is a real storm event; the variability in real storms is just staggering, and that's a great example of what variability looks like. Also, just to remind you that extreme events do happen: I was with a lawyer recently who asked, ‘Is a one in a hundred year event really foreseeable?’ and I said to her, ‘Yes – even a one in 2,000 year event is foreseeable.’ Extreme events do happen. Think back to the first of May 2015: that was about a one in 1,000 year event at the Moreton Bay Rail project, and even more extreme heading towards Caboolture. So extreme events aren't mythical things. They do happen, and they happen all the time – it's just that they don't happen anywhere regularly, is the reality.
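The lawyer's question about foreseeability has a simple probabilistic answer. Assuming independent years (a common simplification), the chance of seeing at least one 1-in-N-year event over an exposure period of n years is 1 − (1 − 1/N)ⁿ. A rough sketch:

```python
# Chance of at least one "1-in-N-year" event over an exposure period,
# assuming each year is an independent trial (a standard simplification).
def prob_at_least_one(ari_years: float, exposure_years: int) -> float:
    return 1.0 - (1.0 - 1.0 / ari_years) ** exposure_years

# Even very rare events are foreseeable over a long design life
for ari in (100, 1000, 2000):
    p = prob_at_least_one(ari, 50)
    print(f"1-in-{ari} year event: {p:.1%} chance over 50 years")
```

Over a 50-year design life a 1-in-100-year event has roughly a 40% chance of occurring at least once, which is the sense in which even much rarer events remain foreseeable.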
So storm unpredictability is a big thing, OK, Mike.
Mike Whitehead: Good morning. This graphic shows the documents, guidelines and standards TMR uses, and their interrelationship with other guides used in the industry. On the left there is the Road Drainage Manual – that's TMR's document, and it represents the department's policy for drainage. It's actually our primary technical reference for all people engaged in the non-specialised aspects of road drainage – typical of what our regions deal with on our road network – and covers probably ninety percent of the drainage the department does. In recent times Austroads has expanded to give us a national approach, and the Guide to Road Design has three parts, 5, 5a and 5b, which cover road drainage. TMR played an active role in its development, and in fact most of our previous version of the drainage manual has been incorporated into what is now the national standard because of the work we did on it, so it's been quite successful. We've now harmonised the latest version of the drainage manual with it, and those two documents work together. The other key document, on the right-hand side here, is Australian Rainfall and Runoff by Engineers Australia. That is the industry standard for hydrology, accepted right across Australia. Unfortunately the version we're using, which is the current standard, is dated 1987 – around 30 years old – and is desperately due for an update, which we've been waiting on now for four or five years. So it is pending, and it will bring some significant changes to the way TMR looks at hydrology and extreme events; Chris is going to elaborate a little further. There is a fourth document down here, which the Department of Energy and Water Supply are the custodians of: QUDM, the Queensland Urban Drainage Manual. It is predominantly about urban drainage. For many years now TMR and this document have had a bit of an informal relationship.
TMR focuses largely on the rural drainage that covers most of its network, while QUDM focuses more on urban drainage, and the two documents refer to one another. QUDM is predominantly used by regional councils and developers in the urban environment.
Alright, the next one. Now a question comes up quite often: ‘Why is road drainage so important? Why do we keep talking about it?’ Well, road drainage is one of the most important considerations in the planning and design of all roads, to ensure appropriate drainage is provided. Drainage is one of the biggest killers of our roads. If we don't get it right, poor pavement drainage can accelerate the depreciation of the asset – put a loaded vehicle on a wet pavement and the pavement's life is greatly reduced. It causes damage through concentrated flows over our infrastructure, such as floodways, culverts and bridge sites, and that's been evident in the big events of recent years – we've spent a lot of money repairing those sites. Adequate and economic drainage is therefore absolutely essential. Yes, as a department, as an engineering organisation, we could build infrastructure that could withstand almost anything; unfortunately it would cost too much to do that, and it's unrealistic. So achieving the balance between what is economical and what is the best outcome is something the department has to continually grapple with, and things change with public perception, community expectations and obviously the priorities of the government of the day. We need to look at accessibility issues and the expectations of the community. A good example is when the Ipswich Motorway was opened up – the community saw an inconvenience to them for many years while that infrastructure was being built, and they thought it was going to solve all their problems. Not long after opening, we had a very significant flood and the motorway was cut, and the community said, ‘Why did we spend so much time and inconvenience building that piece of infrastructure when the first bit of rain we get, it's cut?’ Again, it is an understanding issue with the community.
Certainly there's the protection of the investment we make in road infrastructure – again, poor drainage can accelerate its depreciation or force early replacement. Safety of all road users is also critical. Almost every significant rain event around the state is tagged with someone being washed off a roadway and losing their life. Even people in the street down here can be knocked over by flows in the kerb and channel, and hospitalisation, while it's not a death, is still an inconvenience and people suffer. And the last one is the protection of the environment. Water is a transport mechanism, and our roads carry a lot of heavy metals, rubber particles, oils and so on, and even dangerous goods on the backs of trucks. The road concentrates all of that; water hits it, runs off into the environment and can cause considerable environmental damage. So again, it's another challenge for the department. Back to Chris.
Chris Russell: Thanks Mike. So we'll just talk a little about climate change and then hydrology methods, which are both changing. Firstly climate change. Now sadly there is no good news in climate change as far as drainage goes – the two major impacts on TMR drainage are firstly an increase in mean sea level, and with that storm tides – worse cyclones, more intense, more storm surges – which affects our coastal infrastructure. Secondly, an increase in extreme rainfalls – the one-year to 100-year ARI type rainfalls which we use for design – so those events are going to get worse. The advice comes with degrees of uncertainty and changes with time, and the graphic on the right illustrates why that's so. Climate modelling is still an emerging science and there's a long way to go, but it is getting better. What you see there on the right is some of the emissions scenarios that are in the system, aligned through the UN, and you can see how widely they vary – they just don't know what future emissions will really look like because there are so many variables. So they look at what they call representative concentration pathways, and that's what they use for the modelling, but look at the wide variation – they haven't really got a clue what it looks like. So that's why this advice keeps changing with time, and we've got to be so attuned to the latest science.
Just looking ahead, the RDM has evolved with time. In the 2010 version, our climate change impact assessment allowed for a 0.3 metre increase in sea level and no increase in rainfall intensity. In the 2015 version we released last year, it's gone up to a 0.6 metre increase in mean sea level by 2100, still with no increase in rainfall intensity. AR&R has very recently come out and said there is actually going to be an increase in rainfall intensity and you've got to allow for it in your future planning – so for any project with a design life of more than a few years, you've got to start allowing for increases in rainfall intensity. Now TMR has always taken its advice on these issues from AR&R, so we will need to amend our guidance in the very near future.
Changes to hydrology. As Mike mentioned, AR&R – the bible for flood estimation – is being updated for the first time in 30 years. The '87 version came out the first year I started as an engineer, so it's a document close to my heart, and these are methods we've been using my whole career, but they are changing and there are some big changes coming. We're effectively getting rid of the rational method, which is quite a change because the RDM uses the rational method a lot – so we've got to get across that. There's a strong bias towards computer-based methods, including a technique called continuous simulation, which is a really exciting method I'll talk about in a moment. Then, rather than a simple design storm, we're going to what's called an ensemble of temporal patterns – we'll probably look at ten to twenty storm patterns per design event to try and model the real variability in storms, rather than assuming some simple storm represents everything, so that's a real improvement in the way we're going to be modelling floods. We are also changing from ARI, Average Recurrence Interval, which we've used forever – like a 100-year flood – to AEP, Annual Exceedance Probability – a 1% flood. The reason being that when you tell the community they've just had the 100-year flood, they automatically think ‘well, we're not going to get another one of these for 100 years’, and that's really, really dangerous because this is all about probability. So that's why we're moving to a one-percent flood, so that people understand it can happen in any year, and you can even get multiple events in a year.
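The ARI-to-AEP shift has a simple mathematical side. For rare events an N-year ARI is close to a 1/N annual probability, but for frequent events the two diverge, which is one reason the probability-based terminology is clearer. A small sketch using the standard conversion AEP = 1 − exp(−1/ARI):

```python
import math

# Convert an Average Recurrence Interval (years) to an
# Annual Exceedance Probability via AEP = 1 - exp(-1/ARI).
def ari_to_aep(ari_years: float) -> float:
    return 1.0 - math.exp(-1.0 / ari_years)

# For rare events the naive 1/ARI approximation is nearly identical;
# for frequent events it is not.
print(f"100-year ARI -> {ari_to_aep(100):.2%} AEP (1/ARI would say 1.00%)")
print(f"  2-year ARI -> {ari_to_aep(2):.1%} AEP (1/ARI would say 50.0%)")
```

So a ‘100-year flood’ and a ‘1% AEP flood’ are essentially the same event, but the second phrasing makes clear it has a 1% chance in every year, including the year after it last happened.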
Now, I talked about continuous simulation; I just want to give an example where we've used this on the Flinders Highway recently, to show the power of this method. Consider a design event method when you're talking about a whole road link, in this case from Cloncurry in the west to Townsville in the east, 770 kilometres. When you use a traditional design event approach, all it tells you, for individual crossings, is the annual average time of closure. For someone using the link, it doesn't help them with 'If this crossing is closed, is that one closed, and is that one closed?' It can't help you with that. Continuous simulation can, however.
It's a method where we model the climate for a hundred years or so, and we model the entire link with actual rainfall patterns; there's an example, a snapshot of the rain on one day. We run the models for 365 days times a hundred years of actual climate data and we model the flows at each and every crossing, as though we've had a stream gauge in them. Then we're able to crunch the numbers: we work out the threshold flows at which crossings close, and we can get a whole picture of how the link performs over a hundred years of real climate, and that's very powerful. What you then do is include the traffic volumes, including freight, so you can work out the cost of delay, and then you can work out the benefit stream when you come to do upgrades. So this is a very powerful method for allowing you to look at a whole link in a way that we've never done before, and you can see why I'm excited about trying to apply this to the Bruce Highway. That's starting to get some traction, which is great.
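The closure analysis described above can be sketched as follows. The crossing names, flow series and threshold flows are made-up illustrative numbers, not Flinders Highway data; the point is that a long modelled flow record lets you compute both per-crossing and whole-of-link closure statistics:

```python
def closure_stats(flows: dict[str, list[float]],
                  thresholds: dict[str, float],
                  years: float) -> dict[str, float]:
    """Average days closed per year, per crossing and for the link as a
    whole (the link counts as closed if any one crossing is closed)."""
    n_days = len(next(iter(flows.values())))
    stats = {}
    link_closed = [False] * n_days
    for name, series in flows.items():
        closed = [q > thresholds[name] for q in series]
        stats[name] = sum(closed) / years
        link_closed = [a or b for a, b in zip(link_closed, closed)]
    stats["whole link"] = sum(link_closed) / years
    return stats

# Two crossings over a toy 2-"year" record of 10 days each:
flows = {"crossing A": [5, 40, 80, 3, 2, 1, 60, 2, 1, 1,
                        2, 90, 85, 4, 1, 1, 1, 2, 1, 1],
         "crossing B": [1, 10, 95, 70, 2, 1, 1, 1, 1, 1,
                        1, 75, 88, 60, 2, 1, 1, 1, 1, 1]}
thresholds = {"crossing A": 50.0, "crossing B": 50.0}
print(closure_stats(flows, thresholds, years=2.0))
```

Note the whole-link closure time exceeds either individual crossing's, which is exactly the question a per-crossing design event approach cannot answer.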
Now on the poor old rational method. We've been using the rational method forever in Australia, and there's a strong backlash from Queensland to its loss. The development industry is not happy, and neither are a number of the councils; the reason is that it's a simple, cheap method to use and people don't want to give it up. However, as academics will tell you, the problem with the rational method is that there's very little actual data that supports it, and the fundamental assumption, about the time of concentration and constant rainfall, is actually wrong. There's no such thing as constant rainfall, as looking at real rainfall events shows us, so you can see why there's resistance to letting it go, even though the academics are right that it's really not a very good method. Now QUDM is being finalised at the moment and it's likely to retain the rational method because of this backlash, for certain applications, not for all applications, so we're actually going to get a schism, it would seem. For the first time you'd actually have QUDM and AR&R diverging on preferred hydrologic methods, which is a shame, and the question is: what should TMR do? We've traditionally always taken our advice on flood estimation from AR&R and I don't think we should be changing that; however, we've also got to remember that we're in Queensland and the Queensland Urban Drainage Manual may take a different view, so that's something we've got to think about. Back to Mike for urban drainage.
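For reference, the rational method itself is only a one-line formula, which is a big part of its appeal. This sketch uses the common metric form Q = C*i*A/360 with purely illustrative inputs:

```python
def rational_method_peak_flow(c: float, i_mm_per_hr: float,
                              area_ha: float) -> float:
    """Peak flow Q (m^3/s) by the rational method, Q = C*i*A/360,
    with rainfall intensity i in mm/h and catchment area A in hectares.
    The key (and criticised) assumption is a constant rainfall
    intensity over the whole time of concentration."""
    return c * i_mm_per_hr * area_ha / 360.0

# Illustrative only: a 20 ha catchment, runoff coefficient 0.7,
# design intensity 80 mm/h:
q = rational_method_peak_flow(0.7, 80.0, 20.0)
print(f"Q = {q:.2f} m^3/s")
```

The simplicity is obvious; the criticism in the presentation is that the constant-intensity assumption baked into `i_mm_per_hr` does not match real storms.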
Mike Whitehead: Right, the urban drainage design process. As Chris has just mentioned, we currently rely on the rational method for all our routine designs, and it has been deemed suitable, up until now at least, for small and simple catchments. Other catchments, however, do need more complex computer-based modelling and consideration, because urban drainage is considerably more complex than rural drainage and can be very challenging for any designer, and unfortunately it tends to change over time. The rational method, as I've said, has been prescribed in QUDM, which we refer to; however, the question will be: will we diverge away from that?
Just to give a little bit of insight into the complexity, and apologies for a very old document. Underneath the development lots there, we have on the dotted line what would be the catchment area if it was undeveloped. So any rainfall falling within the catchment area will drain to, say, a culvert or something like that down here and away, and we could assess that. When developers come through, however, and they start to look at their lots, they will change the shape of the ground. We have to be aware of that, because the dark solid line is now the new catchment area, and any water that falls in there will contribute to the outlet. We could potentially understand that and cater for it, but in another twenty years, when a new developer comes in here and decides to put in a new shopping centre, or redevelop this to a higher density or something like that, they could make further changes and maybe include this block here. They could also decide that for all of this section here, 'we're going to put a shopping centre in, but we're going to pipe the water out to there', which means it no longer comes back to the other area. So the challenge for us is to be able to cater for this over time. The other thing is that development increases the impervious area, so when the rainfall hits it can't soak into the ground and it's not retained; it runs straight off. By increasing the number of roads, paved areas, rooftops etcetera, that water potentially can run off.
Now in recent years, larger developments have used retention basins or detention basins to try and mimic the old or previous flow, because there has been a condition put on, which is called 'no worsening', and we expect that any work that gets done doesn't change the current output. But that has been a challenge for us, because it's often landed the department in legal debates and arguments with developers, and ministerials can get involved as well when people become unhappy, so it's a real challenge for us.
That's it. So now just a very quick overview again. It is complex and could take considerable time to go through the overall process, but essentially what we've got to do is look at the development, for houses, to work out where the water is coming from. Is it coming to a yard back here, or is it being piped to the front street? The street obviously collects the water and can actually act as a channel, and we have streams and creeks throughout the urban area. Typically we have kerb and channel flow as the water flows along it. We then have pits and pipes that we put in, and they just take the water off the road surface, get it off the road and pipe it to where we need it to go. Typically it will then discharge into some creek or stream and be taken away. We do have the use of detention basins again to mitigate flooding, and then it often goes off to receiving waters, whatever they may be, and we have various pieces of infrastructure that we need to build to cater for that. And again, it's the no worsening, because nobody wants water in their yard. People expect, because over the years the flooding in the area has only been up to their back fence, and if they see an event that pushes it up to the back door, they get very upset about it. Again, we can only use a design storm for our modelling to try and predict what's going to happen; we can't predict an actual storm. So the no worsening is the real problem for us, and nobody wants water in their backyard, so it will be a continual challenge for the department to get that right, and it's a continual legal exposure to us in how we deal with that.
The system that we predominantly use in an urban environment is what we call the major/minor system. The minor system is really just for the smaller events that typically occur, so ARIs of 10 to 20 years. They will typically be drained off the road into a pipe system and then taken away, so there's very little impact from those on the community and on the road system. When we get to the major drainage, we go up to the Q50 and Q100 events, and in those types of events most people don't really want to be out and about. So we actually allow the road itself to become part of the channel: the underground system we put in takes some of the water, and the rest of it has to go up above and away, and that's a method that we can use. One of the things that sometimes gets forgotten is that when this system fails, due to blockage through debris and rubbish and all sorts of things, the water that would go in there now has to go across here, which will increase levels, and that can cause flooding, which again can be a bit of a backlash on the department. So there is some consideration there, and of course, what do we do with extreme floods? Do we actually try to model the worst-case scenario and design for that, even though the probability of it is quite low? It's an investment strategy for the department; we are limited in funds. What is the balance that we can achieve?
Lastly in this area, water sensitive urban design is important for us. It is emerging and it's dynamic in the research that's being done and the benefits it can give us. TMR does have a role to play in it. We don't have any specific guidelines and have referred to Engineers Australia's guide, Australian Runoff Quality. Unfortunately that document, and others, are more aimed at urban developments, not really geared to the linear infrastructure that TMR has. So one of our challenges is that we would like to work with our environmental colleagues to try and develop some better guidelines that are suitable for TMR moving forward in this area. While roads do represent only a small portion of catchments, they are one of the bigger contributors to water quality concerns, particularly when we've got roads with large AADT. As said before, with the types of vehicles that we have on them and the dangerous goods that they could be carrying, if anything happens, that gets washed off into the receiving waters from the road network.
Here's a bit of an example in the urban environment. This was taken back in 2008 in Mackay, Glennella Road. The picture above there is actually the water running down; the underground system hasn't worked or is full, and obviously the major system is running here. There are high velocities on that, and again it was more of an extreme event, so the system wasn't really designed to cater for this. But this graphic here, after that's all gone, is what we were left with and have to go and repair. So again, it comes back to what the department is prepared to pay to try and minimise those repair costs in the future, or do we accept that and hope it doesn't happen? It's a challenge for us, definitely. Alright, back to Chris.
Chris Russell: So just a moment on creeks and streams. This is the part of the system where our analysis is now extremely good. Typically these days we use what are called two-dimensional hydraulic models to model creeks and streams, because we've got fantastic Lidar or other survey available for most of the areas that we're looking at. A key issue, similar to the urban area, is that we typically try to achieve a no worsening in urban areas and in some of the semi-rural areas to avoid legal problems. Just to remind you, though, even widening a road can cause impacts that have adverse effects on adjacent landholders and can be difficult to mitigate in limited space. We've just got to remember that achieving a no worsening is always a challenge, even in some of the semi-rural environments, but the modelling methods that we've got are very robust. An example is a job we're looking at at the moment, the Sandy Gulley upgrade of the Bruce Highway, just north of Mackay. 2D modelling has been used to assess impacts and mitigate them, and that's being worked through detailed design right now. But the modelling that we adopt is very, very good these days. Intersection safety.
Mike Whitehead: The last section of the presentation. Intersection safety is something that's very important to all of us in TMR, and to the community, to try to save lives and reduce serious injury. Intersections are an area of concentrated accidents because of the conflict points of traffic crossing, and the area also has a high demand for friction: people are accelerating, they're braking, they're changing direction, so the demand between the tyres and the road surface is much higher than anywhere else on the road network. Wet roads are where that friction drops dramatically; in fact, between one mil and four mils of water over the road surface you'll see the biggest drop in available friction that we have. So even in small rainfall events most of your friction is disappearing rapidly, and drivers aren't aware of this, so their braking distances are greatly increased; they just end up sliding, particularly if they lock up. So the drainage and the grading of the intersection must be compatible and we must try and get it right. As we said, with intersections, because of the grading and the crossfalls of the road, and because we've got crossing roads, we tend to flatten them so cars don't feel a bumpy experience going through intersections. When you flatten that off, the water can't flow away, because we need positive grade to drain the water, so that is where the challenge is. We need to watch the spread of flow from the kerbside, because when we have the extreme events, or even larger events, the water will flow out onto the road and increase the depth of water we've got to drive through. There will be ponding, and we've also got to be careful of that. Over time, when the pavement starts to sag or sink, or rutting starts to occur, that starts to concentrate and capture water, so we end up getting puddles where we never had them before.
Again, that can add to the potential safety issues of the intersection. As I've already mentioned, aquaplaning is not typically associated with intersections because of the lower speed environment, but with a smooth surface and people not having legal tyres etcetera, under heavy braking aquaplaning may occur. And as I said, skidding is something that can occur, and it's something that we've got to try and cater for. So the grading of our kerbs around intersections needs to be carefully considered. We certainly need to look at all of our pits and pipes and alternative methods to try and get the water away. Once we capture the water, we want to get rid of it. We do have examples around where we've captured water and then, through certain mishaps, I might say, the water actually comes back out onto the road, and that's something we've really got to be careful of. Another element of this, which we're not going to really cover, is the road surface itself. TMR has been actively involved for the last few years, through various research and development projects, in improving the quality of our road surfacing, particularly for anti-skid in high-demand areas like intersections, where we can surface the road and improve the grip that's available, so that's a positive that we've got there.
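The effect of that wet-friction loss on braking distance can be sketched with the standard stopping-distance relation d = v^2/(2*mu*g). The friction coefficients below are illustrative placeholders, not measured TMR values:

```python
G = 9.81  # gravitational acceleration, m/s^2

def braking_distance_m(speed_kmh: float, mu: float) -> float:
    """Idealised braking distance d = v^2 / (2*mu*g), ignoring
    reaction time, grade and load transfer."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v ** 2 / (2 * mu * G)

# Illustrative friction coefficients for a dry vs wet surface:
for surface, mu in [("dry", 0.8), ("wet", 0.4)]:
    d = braking_distance_m(60.0, mu)
    print(f"{surface}: mu={mu} -> {d:.1f} m to stop from 60 km/h")
```

Because distance scales inversely with mu, halving the available friction doubles the idealised braking distance, which is why even a few millimetres of water matters so much at intersections.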
So having a look at a particular intersection, this is out on Beaudesert Road intersecting with Granard and Rowena Roads. It just gives an example of a typical urban intersection that TMR has to look after: multiple lanes, large areas. This one is a particular bit of a problem because it is in a low-lying area, so it does flood, but ignoring that, the amount of water that will fall on it in a typical afternoon storm has got to get away. We have a lot of truck movements through here, and again, the area in that section there, because you've got crossing roads, is very flat, so it's difficult for the water to flow, or if it does flow it's very slow, so water depths can build up quite quickly. It is a challenge for us. So where we can, we've got to put a series of pits and pipes in at various locations to try to capture that water and get it away as soon as we can; otherwise we end up with very long flow paths and again that can cause a problem. The other example here is the queuing that can occur. One of the things we have to do, working with our modellers, is to actually try and estimate what sort of queues we've got, so that in wet weather we can look at how far back our surfacing or drainage might need to go to get more water off, so that when drivers see a queue and have to stop, they can actually safely do it. If we just do it traditionally, say to the end of the stop line, it means the cars back here don't have enough space to stop, and they run into the back of the car in front. So again, that is a challenge, because traffic is quite variable; we've got these estimates of what the queues are, and then from there back, for stopping distance, the surfacing and extra drainage we might have to do. That's a challenge. The other one there is pedestrian crossings. We don't often think about it, but water can actually knock a person off their feet.
So where water is travelling around kerbs, we actually have a relationship of velocity and depth, and once the product gets above point four, it can knock somebody off their feet if they step into it. So we minimise the flow width and we minimise the velocity, so that the pedestrian access across here is a lot safer. Now there are challenges with people with mobility devices and how it affects them, because a lot of them are electric and they don't like getting into water, so do we have to go further with our standards? There's more work to be done in that area. So that just gives you a bit of an example there, and that said, for road surfacing in here we probably have to surface quite a significant area with high skid-resistant surfacing and then maintain that over time. So those are our challenges. Alright, and just finally in this area: we do make mistakes, we do have challenges. This particular one here, we're not sure whether it was a design error or a construction error, it's never been investigated, but it does highlight how important drainage is and that we really do have to pay a bit better attention. What we have here is actually a T intersection: we've got a curve on the road here and this is another section coming into it. So in actual fact the crossfall across here is super-elevated, so this is the high part of the road and the low part of the road is down the bottom of the picture. And here's a typical kerb and channel; it's catching water and it's meant to take it underneath the guardrail and off into a swale over there, but note that there's a puddle of water, which means it can't actually drain out; we've got something wrong there. So at the moment, that's the normal outlet, and there's ponding there, which means that channel is almost sitting at capacity even though there's no flow in it. And the crossfall is that way.
So when we have additional flow, the water can back up here and actually break out and get back onto the road. So this is an example where we've captured the water but, through whatever the mechanism is, we're now pushing it back out. That will increase the flow of water going across there, with vehicles running around here, so again, depending on the speed environment, skidding off into the guardrail, potential aquaplaning problems: in this case we're actually exacerbating the problem, not solving it. So again, that's our challenge: to spend a bit more time and a bit more care on our drainage considerations, our drainage design, and building it right. Unfortunately, because drainage is buried under the road and we don't see it for many years, historically we don't consider it as well as we should, or we just keep what's there and keep going. When we have an extreme event it's all highlighted, oh, there's a problem, but then it dies down and we don't look at it again. So it is probably a bit of a culture for TMR that we don't consider drainage as well as we should, particularly in the urban environment, but also equally in the rural, which is most of our road network. We should take the time to look at it and consider it as much as we consider other elements of the road design and construction process.
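The pedestrian-stability threshold Mike described earlier, that the product of flow depth and velocity should stay below about point four, is simple to express as a check. The depths and velocities here are illustrative values, not design figures:

```python
def pedestrian_hazard(depth_m: float, velocity_m_s: float,
                      limit: float = 0.4) -> bool:
    """True if the depth-velocity product (m^2/s) exceeds the
    stability limit, i.e. the flow could knock a pedestrian over."""
    return depth_m * velocity_m_s > limit

# 150 mm of gutter flow at 2 m/s: product 0.30, within the limit.
print(pedestrian_hazard(0.15, 2.0))  # False
# 250 mm at the same velocity: product 0.50, unsafe to step into.
print(pedestrian_hazard(0.25, 2.0))  # True
```

In practice the check works the other way around: the designer limits flow width and velocity at crossings so the product never approaches the threshold.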
Chris Russell: So just to wrap it up and go over what we thought were the key points. Firstly, the new AR&R is bringing in some really big changes that are more onerous but will improve our estimation of real floods, and as a hydrology person I'm excited by these new techniques. It is going to be more work, but they are a lot better, they really are. Continuous simulation techniques can be used to help us get a really good understanding of the Bruce Highway if we want them to. I'm presenting to the steering committee for the Bruce Highway next month a case that we should do this, and I'll be showing them the Flinders work and hoping that we can move forward on that, even if it's only for one of the links, so fingers crossed. We need to amend our climate change advice again, so we've got an amendment to the RDM that we need to make, and we need to keep monitoring that; it'll keep changing. The Paris Agreement will mean that the carbon pathways are revisited and so on, so we've got to keep monitoring that. We need to decide what our position is in relation to the rational method, and that's going to take a bit of thinking through all the implications. We need to be very careful with impacts in urban areas and maybe even mandate methodologies for certain cases. Internal staff training is also critical there, so our drainage courses, and Mike and I are also working on a new one for managers; we're thinking about bringing in some case studies of issues and where things have gone wrong, which we think is a key to helping lift our game there. And of course there's the dot point that we've put in there on the water quality aspects. We think it is worth doing some R&D, coming up with some criteria that are relevant to linear infrastructure, rather than the other development material that we're using, which is nice, but not that relevant. So, thank you.
Digital engineering: implementing BIM
This technical presentation explores the exciting work TMR is undertaking in the Digital Engineering project. By introducing Building Information Modelling (BIM) technologies across the department’s major projects, TMR has the opportunity to capture better information, reduce design errors, improve productivity, enhance collaboration, and in the long term, make significant cost savings. TMR uses a variety of BIM technologies already, including 3D modelling and laser scanning, which will be exhibited in this presentation.
Noel Dwyer has been in TMR since 1990 and led the Main Roads component of the Drive Tourism Program, a joint initiative between Tourism Queensland, Main Roads and Arts Queensland that won the Public Sector Premier's Award in 2004 for 'growing Queensland's economy'.
Noel: Good morning everybody and Mike particularly, my name is Noel Dwyer and I’m from Engineering and Technology and today I’m going to give you a bit of a run-through of digital engineering, or BIM as some people call it, Building Information Modelling, and the journey we have been undertaking at TMR.
I guess then after that we'll have a bit of a demonstration of some of the input technologies, some of the fun toys you get to play with in E&T, as an input to digital engineering.
So firstly, Building Information Modelling. What is it? It came out of the building industry probably a decade ago and in the last few years it's been adopted as a tool for the infrastructure industry to plan, design, construct and sometimes operate our transport infrastructure. But essentially Building Information Modelling, I'll just give you the quick definition: It's a digital representation of the physical and functional characteristics of a building or structure, forming a reliable basis for decisions during its whole-of-life cycle.
So this is where we get into not just the design and constructing, but it’s the whole-of-life cycle aspect of infrastructure and that's why we’re very keen to implement it.
So why are we using it? It's an enabling technology across the whole lifecycle of our transport infrastructure assets, and there are significant benefits across that entire lifecycle. At the moment we're focusing on the planning, design and construct side of it; the huge benefits for TMR are at the back end, on the asset management side. That's probably going to be the hardest nut to crack, so in the short term we are looking at planning and design. I'll just run through some of the benefits. In the planning stage, it's basically making use of existing conditions and visualisation for planning.
In design, it's vital for things like clash detection, and you're able to optioneer various options as well. It's not just 3D modelling; it actually has a lot of attributes in the model as well. And there's this thing called building twice, where you can actually build the infrastructure virtually, plan it virtually, and then you build it for real. So it has a lot of quality control benefits and cuts out a lot of rework, and in some cases the BIM, or digital engineering model, is actually used to drive a project: there are no Gantt charts on the project, you actually use the model to plan the sequencing of all the construction. So plan, design, construct and, at the back end, facilities management, that's where we'll be heading in the very near future.
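Clash detection, at its very simplest, can be sketched as an overlap test between the bounding boxes of model elements. Real BIM tools use far richer geometry and tolerances; the element names and coordinates below are made up for illustration:

```python
from itertools import combinations

# Each element: (name, (xmin, ymin, zmin), (xmax, ymax, zmax)) in metres.
elements = [
    ("stormwater pipe", (0, 0.0, -2.0), (50, 0.5, -1.5)),
    ("bridge pile",     (10, 0.0, -6.0), (11, 1.0, 0.0)),
    ("comms conduit",   (0, 2.0, -1.0), (50, 2.3, -0.7)),
]

def boxes_clash(lo1, hi1, lo2, hi2) -> bool:
    """Axis-aligned boxes overlap iff they overlap on every axis."""
    return all(lo1[i] < hi2[i] and lo2[i] < hi1[i] for i in range(3))

for (n1, lo1, hi1), (n2, lo2, hi2) in combinations(elements, 2):
    if boxes_clash(lo1, hi1, lo2, hi2):
        print(f"CLASH: {n1} vs {n2}")
```

Here the pipe run intersects the pile's volume, the kind of underground-services conflict that is cheap to find in the model and expensive to find on site.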
So ultimately, digital engineering is cradle-to-grave and ultimately paperless. That's what we're hoping for, but it could take a while to get there. And the good thing about this technology is that if things are born digital, they can stay digital; we haven't got to go back to paper next time.
So how are we implementing it? There's a lot of work happening in the state and national space. We formed the first whole-of-government Queensland working group; that's now been taken over by DILGP. We are also represented on that group by my Project Manager, Brian, here. And we're involved in the national digital engineering working group as well, so there's a lot of effort for consistency in this space, so we don't end up with the old dissimilar rail gauge issue from a long time ago.
We're adopting a thing called Natspec; we're using that as the basis for our framework. It's a guide for us, and as I said before, for the short-term focus we have come up with a specification for our TMR requirements that's based on Natspec for planning, design and construct, and the next phase over time will be asset management.
I should probably point out too that we are early adopters in this space. Industry are partnering with us on this; they are very keen for us to proceed. You may or may not know, but the State Infrastructure Plan has mandated BIM methodologies by 2023, so we're slightly ahead of the curve.
You'll notice in here it impacts on a range of other enabling technologies as well, and when we come to the end, the data capture one is the one we're going to demonstrate, in terms of the laser scanning capabilities that we have.
You may or may not have heard of this particular bridge. This is essentially the first project that we undertook using BIM methodologies: that's Angellala Creek, which was the bridge that blew up just west of Charleville.
Julie, our Chief Engineer, threw down the challenge and said: guys, do it in digital engineering, do it in BIM. And so they did, and it had some immediate benefits. There were some challenges as well, but being able to optioneer the design height helped; there were some discussions around whether or not the bridge would be built back like-for-like. There were some decisions made for betterment, and that design change could be done almost instantaneously. It also weeded out some constructability issues, some problems they had on-site too, so immediate benefits. Some immediate problems though: it relied on internet technology to make full use of BIM, and certainly anywhere out west there are some dark spots in terms of internet coverage.
But a very successful project, and our structures guys no longer design in 2D; they design in 3D and put attributes into their digital models as well. Ultimately they'll be able to run trucks over the bridge in real-time and see what the bridge is doing as well, using digital engineering.
The other pilot, which is on foot at the moment, is the Rocklea to Darra project. It's currently midway through a WECI process, and we've actually put into that project our specification, called employer information requirements, which sets out the requirements for all the design consultants and constructors to enable digital engineering to be used across the board. The big benefit of that is collaboration: it's not just about the technology, it's the functionality that enables different design teams to work highly collaboratively across the whole cycle of the process, and again it improves clash detection, particularly things around underground services and other clashes on a particular site. There are other major projects that are currently coming to us and asking if they can use the technology as well, or use the specification. Thanks John.
So some of the current technologies that enable digital engineering or BIM. Next one. That's just an example of some of the outputs of the design; as you can see, it goes above and below ground as well, and that enables us to weed out any clashes, particularly with underground services.
It's something that we've been doing for probably 10 years, but it's only in the last couple of years that we've been able to marry a lot of the technologies up. Certainly in terms of software, it's not an IT program, it's not about software, it's software agnostic, and that's pleasing for all the contractors, and ultimately, as I said, it will be cradle to grave. Just the next one, John. Laser scanning is something that we've been doing in TMR for the past 10 years, and that's a great enabler for this digital engineering program. Essentially, laser scanning gives us the ability to capture the asset, and that then goes back into design modelling and can be used throughout the whole lifecycle of the process.
And I might get John now to fire up just an example of the laser scanning and what we can do with it. There are various ways to capture: it can be captured statically, it can be captured with vehicles, it can be captured by drones, and also by airborne laser scanning.
And these examples John's going to run through, I think we’ve scanned through most of the Bruce Highway and we've also scanned a few other non-road assets on behalf of the state and particularly the model which John will go through in a moment which is Raine Island.
Essentially this allows the surveyors or the spatial scientists to go and capture a site without spending a whole heap of time there. So it reduces the exposure to traffic or exposure to the elements, and essentially all the design work, or the survey grunt work, is then done back in the office. So it has huge impacts on health and safety as well.
John: Just as a bit of interest here, you can see the size of these files down here, they’re quite large and this model actually contains around about a billion points and it loads up literally, trust me, that quick, it’s actually a lot quicker than that. So from here you can see.
Noel: So this particular project, it was a whole-of-State project which I think the Department of Environment inherited. They wanted to preserve some assets over there that were degrading.
So they invited us to join their party, along with a whole range of disciplines, to go to Raine Island, which is about a 10-day cruise from Cairns. At that location there is a navigation tower that's been in place since the 1800s. It was degrading, and they needed to do some repairs on it. With this technology, rather than go on site with tools and all sorts of equipment to repair it, we captured the model, which is all survey accurate to within millimetres. Then they came back to the office and could design whatever they wanted; there were platforms and ladders and all sorts of things.
So this is probably just one use of digital engineering, or the laser scanning that goes into it. It's not just for design; it's also being used nationwide, and probably worldwide, for archaeological digital archiving of structures that are decaying. There's a consortium called CyArk, which I think stands for Cyber Archive; they're a non-profit organisation which goes and scans iconic structures like the Sydney Opera House, or various catacombs across the world, and then basically preserves them digitally.
John: So with this, it's actually given us the ability to go both inside and outside of the model, or of the structure. The island itself was captured with UAVs, unmanned aerial vehicles, and a point cloud was created from that; that's what gives us the island. The tower itself was terrestrially scanned.
We actually isolated the outside of the tower, so we can go through, and all these levels are spatially correct, and we can actually see cracking in the brickwork.
We can go inside, we can take away the inside of the tower, and we can just look at that ladder. So this is all part of what we can do with BIM: isolate the information and put it in. If we want to mark something up in this package, we can create a label, click on there, name it, and export that as an XYZ file.
So spatially we've got everything located and attributed any way we want it to be. So that's what we can do with terrestrial scanning.
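The XYZ export John mentions is about as simple as spatial formats get. As a minimal sketch, assuming the common plain-text layout of one "x y z" triple per line (real exports may carry extra columns such as intensity or colour, and the coordinates below are hypothetical easting/northing/height values, not project data):

```python
# Minimal sketch of writing and reading a plain-text XYZ point file,
# one "x y z" triple per line. Assumed format for illustration only.

def write_xyz(path, points):
    """Write (x, y, z) tuples, one per line, to three decimal places."""
    with open(path, "w") as f:
        for x, y, z in points:
            f.write(f"{x:.3f} {y:.3f} {z:.3f}\n")

def read_xyz(path):
    """Read (x, y, z) tuples back, ignoring any extra columns."""
    points = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3:
                points.append(tuple(float(v) for v in parts[:3]))
    return points

# Hypothetical marker exported from the model (easting, northing, height):
markers = [(512345.123, 6954321.456, 12.789)]
write_xyz("markers.xyz", markers)
print(read_xyz("markers.xyz"))
```

Because the format is so simple, labels exported this way can be re-imported into most survey and design packages without translation.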
Noel: Fantastic, John. We might just move on, and we can come back and show you how you don't need to be an expert like John to run this. We've actually brought along an Xbox controller, and we're going to invite Mike afterwards to have a bit of a play if you like.
We're also using the technology for demonstrations at STEM events, so students are getting involved with it as well and can see what we play with.
But if you can go to the last slide, I’ll just finish off and then we can go to questions.
So on the last slide there, I'd just like to leave you with a couple of points. It's not just an IT system; a lot of the technology we already have. Eventually it will become business as usual on all major projects, which will take time, and we're learning from every project, and that learning goes back into the specification.
Again, it's not just technology; it's about collaboration as well. The hard nut for us to crack will be the asset classification side of it, to enable digital engineering to bear its fruit. As I've said, most of the technology already exists, so we're not actually going to be spending a lot on new ICT aspects, apart from some of the toys which we'll show later. And as I said, it's a whole-of-life approach; it's not just design and construct. It's the whole of life, and the benefits for TMR will be on the asset management side of it. And as I said before, if information is born digital, it can stay digital. So, any questions?
Damian Volker - pavement stabilisation
This technical presentation features Damian Volker, who was awarded the Young Stabiliser of the Year at AustStab’s 2015 Awards for Excellence.
Damian has played a major role in the department's agenda to maximise value on projects without compromising the quality of the end product. Based on his suggestions for additional research on a recent foam stabilisation project in the Darling Downs, extensive testing was performed in our Herston laboratory, using specialist foam bitumen testing equipment. The work resulted in savings of approximately $1.5 million through reduced materials and increased productivity, without compromising quality.
Hello everyone. I'm now introducing some of the very innovative technical projects we're doing in TMR in Queensland. What you'll see is a series of Engineering Excellence presentations going forward, and the very good work we're doing on the challenges in Queensland to meet and exceed our obligations to provide a single integrated transport network that's accessible to everyone. Over 50% of Queensland is composed of very unstable materials, for example, so what you're going to see today is a series of technical talks showing how we've met, addressed and overcome these challenges.

My name is Damian Volker, from the Pavement Rehabilitation unit within the Engineering and Technology branch. Today I'm presenting on pavement rehabilitation: putting innovation into practice. Pavement rehabilitation is taking an existing road at the end of its service life and, through structural improvement, injecting it with new life for many years to come. Innovation need not be a new invention; rather, as you'll hear about here today, it can be taking existing rehabilitation processes and transforming them into new and improved techniques that provide value for money to the department and to our Queensland road users.

At TMR we are working hard to embed new public service values into our culture, and we are already working more collaboratively, more productively and smarter. These values are: customers first, ideas into action, unleash potential, be courageous, and empower people. Through diverse and inclusive teams we can create new ideas, challenge the status quo, introduce fresh ways of looking at problems and offer a wider range of potential solutions to achieve innovative, fit-for-purpose outcomes.

We need to understand some of our pavement challenges in Queensland: the shrink-swell effects of expansive soils are extremely damaging to our roads, and 60% of Queensland is expansive soils; flooding; and an ageing network with fatiguing cement-treated bases and increased traffic volumes.
Expansive soils change volumetrically with moisture: they swell when they're wet and shrink when they're dry. The effects of expansive clays can be very destructive to roads and structures; studies in the USA suggest that maintenance costs are 10 times higher on expansive subgrades than on normal ones. Queensland requires pavement foundations to be resilient to flooding, as we all know from recent years. Fatiguing cement-treated bases are requiring alternative rehabilitation solutions: adding more cement to these pavements is a costly short-term fix, and it is not proving to be a successful method of rehabilitating them. The two pavements pictured have been repeatedly treated with cement stabilisation, most recently three years ago.

Our pavement rehabilitation strategy is to understand the issues, constraints and needs; engage industry for specification input; feed continual feedback into technical specifications and technical notes; and deliver project-linked technical knowledge transfer. Just like buildings, our roads need solid foundations. Foundation improvement is a long-term investment for the department: it can reduce pavement defects, pavement thicknesses, moisture ingress and maintenance costs.

So what is a road's foundation? Subgrade. Subgrade is the prepared earth surface upon which a pavement is placed. Subgrade materials are commonly weak, expansive and sensitive to moisture.
Pictured is a typical pavement rutting defect due to inadequate subgrade support conditions. Subgrade stabilisation is a method using lime, or lime, cement and fly ash, to improve poor materials. The benefits are that stabilisation increases subgrade stiffness and strength; forms a water-resistant layer; reduces the plasticity index and the damaging shrink-swell characteristics of the subgrade materials; and improves constructability by providing a sound construction platform, or anvil, to allow improved compaction of overlying pavement layers. From a visual perspective, with the addition of lime we can turn black soil into a sound structural layer, and by combining a weak sub-base and weak subgrade together, not only can we achieve a sound structural layer, but we can save money. This improved subgrade strength can be exploited to reduce pavement thicknesses, also saving money.

Still with subgrade: the recent use of lime, cement and fly ash, known as triple blend, is now enabling a greater variety of subgrade materials to be stabilised, forming a strong foundation. Triple blend is selected when the material is not exclusively suitable for cement and not exclusively suitable for lime, but fits somewhere in between: a bit of an each-way bet for material that needs some lime and some cement to achieve strength. The proportions of lime, cement and fly ash are determined by the shrinkage properties of the material.

Now moving up from the subgrade to the top of the pavement, we have some examples of pavement bases: granular, cement-stabilised base, foam bitumen base and asphalt. Asphalt, the most expensive, is easy to construct; however, there is limited availability of asphalt outside populated areas. Foam bitumen has a lower cost due to less bitumen and a significantly lower carbon footprint than asphalt; it is flood resilient, as you'll see in an upcoming video, and has better fatigue resistance than cement-stabilised layers.
Cement-stabilised layers have a lower cost than foam bitumen but are susceptible to shrinkage and fatigue block cracking; they can be resilient to flooding as long as they're not suffering from this cracking. Granular pavements are the least expensive; however, they are not always resilient to flooding or saturated conditions, and deeper thicknesses are required. A granular pavement does, however, offer simple rehabilitation options in the future. Quickly touching on cement-stabilised pavement, our key message is that stiffer is not necessarily better: unconfined compressive strength should target between 1 and 2 MPa, a modified range, and we encourage slower-setting additives to assist with construction processes.

Foam bitumen stabilisation is a fancy way of incorporating bitumen into gravel in situ. Foam bitumen is mixed into the pavement by a Wirtgen stabiliser hooked up to a bitumen tanker. So what is foam bitumen? Well, we take hot Class 170 bitumen and, with the addition of water and air, we expand that bitumen to 15 times its original volume, and it's while it's in that expanded state that the stabiliser mixes the gravel into the foam bitumen.

Now, picture the kitchen sink at home. Take a green dollop of detergent and put it in the bottom of that empty sink, then take a handful of sand and throw it at the sink: you're going to have specks of sand landing all around the sink, and a little landing on the green dollop of detergent. Now if you hit that green dollop with a bit of hot, ferocious water, it is going to foam up into a white froth. Take that same handful of sand and throw it at your sink now: what was a dollop of green detergent, now in its froth, is going to touch every single particle of those fines. We've expanded it so it covers a greater surface area, and that's what we're doing with foam bitumen.
I have young children at home who love blowing bubbles, and when you burst a bubble you will notice that it comes off in little tiny specks. On a micro scale, the same thing is happening with foam bitumen: we have millions of foam bubbles that we've created, and millions and millions of fine particles, and within the machine those fine particles are bursting those bubbles and coming off in little tiny bitumen specks that we like to refer to as spot welds. So unlike asphalt, where every single particle is coated with bitumen, foam bitumen is a matrix of these little tiny bitumen specks, which gives foam bitumen its strength and flexibility. Pictured under a microscope is a grain of sand surrounded by bitumen spot welds.

For foam bitumen stabilisation, 3% Class 170 bitumen is typically used; hydrated lime is also included, typically at 2%. Lime assists with the dispersion of the foam bitumen throughout the material, increases the initial stiffness and early rut resistance, and reduces the moisture sensitivity of the stabilised material.

So what does foam bitumen look like? Well, it really takes on the appearance of the granular host material, and if we ever get you out on site and you grab a handful of foam bitumen from behind the machine, then dust your hand off, you'll see those little tiny bitumen spot welds on your hand. The benefits of foam bitumen are that it improves stiffness and load-bearing capacity, offers better flood resilience, provides longer working times during construction, and gives better fatigue resistance than a cement-stabilised layer. Another method of producing foam bitumen is through a mobile plant, with the product delivered to site on trucks. So where does plant-mixed foam bitumen fit in? It can be used as an alternative to deep-lift asphalt in the lower layers, or wherever plant-mixed cement-stabilised material is used.
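The additive rates quoted above (3% bitumen, 2% hydrated lime, by mass of host material) translate directly into tanker and spreader quantities. As a back-of-envelope sketch, assuming an illustrative compacted dry density for the host gravel (the density is my assumption, not a project figure):

```python
# Back-of-envelope additive quantities for a foamed bitumen layer.
# The 3% bitumen and 2% lime rates are from the talk; the layer depth
# and compacted dry density below are illustrative assumptions.

def additive_kg_per_m2(depth_m, dry_density_kg_m3, additive_pct):
    """Mass of additive needed per square metre of stabilised layer."""
    host_mass = depth_m * dry_density_kg_m3      # kg of host material per m^2
    return host_mass * additive_pct / 100.0

depth = 0.300        # 300 mm stabilised layer
density = 2000.0     # assumed compacted dry density, kg/m^3

print(additive_kg_per_m2(depth, density, 3.0))   # bitumen -> 18.0 kg/m^2
print(additive_kg_per_m2(depth, density, 2.0))   # lime    -> 12.0 kg/m^2
```

Small changes in the percentage rate scale linearly into tonnes of additive across a whole project, which is why the laboratory optimisation described later matters so much.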
One huge advantage of plant-mixed foam bitumen is the ability to place geotextiles underneath the product, and there is an upcoming project in black soil country where the plant-mixed foam bitumen will be placed over a geotextile fabric seal to protect the pavement from the expansive subgrade conditions.

Flood resilience: Oakey-Pittsworth Road was a lime-stabilised subgrade and foam bitumen base combination, and the entire project was flooded during construction. It was an unexpected rain event; as pictured on the left, there was not even enough time to get the construction equipment to higher ground. Pictured on the right, looking at the pavement from the downstream side with the batter materials washed away, you can see the exposed upper vertical face of the foam bitumen layer; you can also see the lime-stabilised subgrade layer that tucks in towards the culvert. The pavement survived this high-velocity flood unscathed.

On the applications of foam bitumen in Queensland: foam bitumen is used in floodways. Pictured is Stapylton-Jacobs Well Road after one metre of floodwater covered the unsealed road for over one week. It's used over highly expansive subgrades; in spite of our best efforts against Mother Nature, one of the key learnings is that geotextile seals are required as part of future designs to address the expansive soils and significantly reduce the damaging shrink-swell effects of subgrades. Foam bitumen is used in combinations of stabilised subgrade and foam bitumen base. It's used for rehabilitating fatigued cement-treated pavements, and it's used on motorways as a sub-base: we still have asphalt layers as our wearing courses, with foam bitumen as the sub-base providing more cost-effective support conditions than deep-lift asphalt and significantly less shrinkage cracking than cement-treated bases. Successful stabilisation requires tight quality control: checking additive rates, moisture, bitumen rates, foam quality and depth.
We don't know what we can't see, and in stabilisation we can see the top but we can't see underneath the layer, where the machine is cutting to. We have therefore introduced surveys as part of the construction process to ensure that the required depth at the bottom of the layer is being achieved. Why is this so important? Well, a reduction of 25 millimetres in a stabilised layer reduces its life by 5 to 6 years, and our specifications are now assisting industry in how this can be achieved together. Survey behind the stabilisation process has become a huge focus point of the department, and for any project where insufficient depth has resulted from a poor construction process, the contractor has had to compensate for the insufficient depth with asphalt to achieve the design intent, at the contractor's cost, not the department's.

Now we'll quickly go through two case study examples. Gentle Annie was rehabilitated less than three years ago with 3% cement stabilisation, 200 millimetres deep. The cost in 2012 was $1.27 million; the cost in 2015, constructed by our own RoadTek, was $2 million for a 620-millimetre-deep pavement designed for 20 years. The combination of triple blend sub-base and foam bitumen base was used; by using the foam bitumen technique for the base we were able to reuse 40% of the old cement-treated base, saving $122,000 for that kilometre, and to reduce the foam bitumen lime content from 2% to 1.2%, saving over $38,000. Innovations and savings like $13.50 per square metre, for this 1-kilometre project, have been applied to larger-scale projects.

Bruce Highway Yeppoon South recently completed a triple blend subgrade, foam bitumen base and asphalt surfacing combination. The TMR Rockhampton lab provided triple blend UCS designs for the sub-base and subgrade for the Bruce Highway Gentle Annie and Yeppoon South projects; this is an example of the technical transfer that Rockhampton is now implementing into upcoming Bruce Highway safety projects.
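The depth check described above is simple arithmetic: the talk quotes roughly 5 to 6 years of pavement life lost per 25 millimetres of shortfall. A minimal sketch, assuming (purely for illustration) that the loss scales linearly with the shortfall:

```python
# Sketch of the depth-shortfall check described above. The 5-6 years
# per 25 mm figure is from the talk; the linear scaling and the
# 5.5-year midpoint are illustrative assumptions.

def estimated_life_loss_years(design_mm, achieved_mm, years_per_25mm=5.5):
    """Estimated design-life reduction from under-depth stabilisation."""
    shortfall = max(0.0, design_mm - achieved_mm)   # no credit for extra depth
    return shortfall / 25.0 * years_per_25mm

print(estimated_life_loss_years(300, 275))   # 25 mm short -> 5.5 years
print(estimated_life_loss_years(300, 300))   # on depth    -> 0.0 years
```

In practice this is why the survey is taken behind the stabiliser: a shortfall found after the fact has to be made up with asphalt at the contractor's cost.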
Implementation of the same innovative concepts from Gentle Annie saved $10.25 per square metre on the Yeppoon South project.

Case study 2, the Flinders Highway project, is located halfway between Richmond and Julia Creek, approximately six hours west of Townsville, conveniently marked here with the Wirtgen stabiliser: aka the middle of nowhere, or 400 kilometres in either direction to the nearest McDonald's. At this isolated location there were three main ingredients: hydrated lime, bitumen and some new gravel. All existing pavement materials were reused, and with these three ingredients we turned this, a road with marginal materials, into a strong yet flexible, flood-resilient pavement.

The cost of the project was $15 million for 14 kilometres. $1.72 million was saved by optimising additive contents, plus savings from increased productivity as a result. How? Through laboratory mix designs performed at our TMR Herston and TMR Townsville laboratories. $1.36 million was saved prior to the commencement of the project through laboratory testing that reduced the amount of lime for the subgrade by 3%, from 8% to 5%. Through the TMR Herston lab we were able to optimise the amount of bitumen required from 3% to 2.5%, plus an increase in productivity per bitumen tanker load of 16%.

So what did we have to start with? The pavement was designed in two parts: a lime-stabilised subgrade and a foam bitumen base. For the subgrade design, we first removed the top 150 millimetres of cement-treated base to access the lower pavement. The reddish sub-base was not suitable to be mixed in with the foam bitumen layer above; however, by stabilising the reddish sub-base material and black soil subgrade together, we reduced the amount of lime that would usually be required to stabilise black soil alone by 3%. This is how we saved $1.36 million in lime in the subgrade layer. For the foam bitumen base design, 2.5% bitumen and 2% lime were the optimum additive rates from laboratory testing.
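The additive optimisations above reduce material by percentage points of the treated mass, so the dollar saving falls straight out of simple arithmetic. A hedged sketch: the rate reductions (lime 8% to 5%, bitumen 3% to 2.5%) are from the talk, but the treated tonnage and unit prices below are hypothetical placeholders, not project figures:

```python
# Sanity-check arithmetic for additive-rate optimisation. Rate
# reductions are from the talk; tonnage and unit prices below are
# hypothetical placeholders for illustration only.

def additive_saving(treated_tonnes, old_pct, new_pct, price_per_tonne):
    """Tonnes of additive saved and the dollar saving, for a rate cut."""
    saved_tonnes = treated_tonnes * (old_pct - new_pct) / 100.0
    return saved_tonnes, saved_tonnes * price_per_tonne

# Lime cut from 8% to 5% of the stabilised subgrade mass
# (100,000 t treated and $300/t are assumed values):
lime_t, lime_dollars = additive_saving(100_000, 8.0, 5.0, 300.0)
print(lime_t, lime_dollars)      # 3000.0 tonnes, $900,000.0

# Bitumen cut from 3% to 2.5% of the base mass (same assumed tonnage):
bit_t, bit_dollars = additive_saving(100_000, 3.0, 2.5, 800.0)
print(bit_t, bit_dollars)        # 500.0 tonnes, $400,000.0
```

Even under these placeholder inputs, the point holds: a few percentage points of additive rate, fixed in the laboratory before construction, swings project costs by six or seven figures.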
The materials in the design included 100 millimetres of new gravel, 150 millimetres of reclaimed cement-treated base and 50 millimetres of lime-stabilised subgrade. A few pictures of the construction process: removal of the existing cement-treated base to access the sub-base and subgrade to stabilise; the excavated base stored on the side of the road; lime stabilising of the reddish sub-base and black soil subgrade together, 350 millimetres deep; the bringing back of the stored cement-treated material onto the pavement from the side of the road; the addition of 100 millimetres of new gravel; and finally the incorporation of the foam bitumen, 300 millimetres deep.

Now, we can have the best design in the world, but if it's constructed poorly we will not achieve the intended design life. Therefore, during construction, not only are we inspecting the workmanship, but we also monitor the design through rigorous testing and field research, sending samples back to the Herston lab for modulus testing. After the project, we perform post-construction analysis; pictured is pavement core sampling. Learnings from field research are fed back into designs, specifications and technical notes, and shared through project-linked training.

Over $130 million of work in the last three years is directly related to this stabilisation work from pavement rehabilitation, with $50 to $100 million in upcoming projects for 2016-17. These innovations and savings arise from rigorous laboratory and field research, and they justify ongoing research. Pavement rehabilitation extensively uses, and appreciates, the great value that our TMR laboratories provide the department. This ensures that the implementation of responsible innovation is based on sound laboratory research. Thank you.
- Last updated 09 January 2023