Shedding Light on Wheel Slides

When train and tram wheels brake unevenly and slide, the result is often wheel flats. These wheel slides cost the industry millions of pounds in engineering costs and delays each year.

UK Railway Group Standard GM/RT2466 requires that vehicles operating at speeds up to and including 200km/h (125mph) with wheel flats larger than 60mm be returned to a depot immediately, at greatly reduced speed. For flats of 40-60mm, the vehicle must be returned to the depot within 24 hours of the fault being discovered.
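As a rough illustration, the decision rule described above can be written down directly. The following snippet simply encodes the thresholds as paraphrased here; the function name and return strings are our own, not taken from the standard:

```python
# A minimal sketch encoding the wheel-flat response rules exactly as paraphrased
# in the text above; names and wording are illustrative, not from GM/RT2466 itself.

def wheel_flat_action(flat_length_mm: float, max_speed_kmh: float) -> str:
    """Return the required response for a measured wheel flat."""
    if max_speed_kmh > 200:
        return "outside the speed range covered by this rule of thumb"
    if flat_length_mm > 60:
        return "return to depot immediately at greatly reduced speed"
    if flat_length_mm >= 40:
        return "return to depot within 24 hours of discovering the fault"
    return "no immediate action required by this rule"

print(wheel_flat_action(65, 160))   # -> return to depot immediately ...
print(wheel_flat_action(45, 160))   # -> return to depot within 24 hours ...
```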

Meanwhile, in the UK, slippery rails and the reduced adhesion caused by falling autumn leaves are estimated to cost the industry GBP60m (EUR67.5m) a year.

Wheel slides are directly influenced by a multitude of factors, including service performance, weather conditions, time of year, vehicle condition, track quality, track cleaning approaches, leaf-fall data, level crossing proximity, driving policies and more. The issue has also resulted in several serious rail safety incidents in recent years.

A Typical Wheel Flat and Sanding

Although it is recognised that there are many potential causes of wheel slide, and that data needs to be acquired from a wide range of sources, the current approach to analysis is simply to use each individual data source, essentially in isolation. However, if all the sources were examined in unison, using the power of Big Data analytics, it is probable that the industry would not only identify the major contributions more effectively and efficiently, but would also make important discoveries of problem areas that would otherwise remain hidden.

Through ‘Knowledge Discovery from Data’ (KDD), there is a potential saving of millions. Over the last few years there has been significant anticipation associated with the use of Big Data techniques for the analysis of rail-related data; however, the major expectations have yet to be fully realised, and we are currently performing an analysis of real-time train data to prove the technique.

For this project we have used in-service data that includes GPS positioning, braking and power application, dwell times, wheel slide information, sanding application, speed, acceleration and more. The aim is to predict when a train or tram is most at risk of sliding, when the driver needs to be warned, and when sand needs to be applied to stop wheel slides. By reducing the number of alerts and optimising the use of sand, the rail system will suffer fewer delays and less damage.
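As a sketch of what such a predictive model might look like, the snippet below trains a gradient-boosted classifier on a flattened table of in-service data. The file name and column names are hypothetical placeholders for the kinds of signals listed above, not the actual project data:

```python
# A hedged sketch of a supervised wheel-slide risk model, assuming the logged
# in-service data has been flattened into one row per short journey segment.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("in_service_segments.csv")   # hypothetical export of logged data

features = ["speed_kmh", "brake_demand", "power_demand", "dwell_s",
            "sand_applied", "leaf_fall_index", "rail_head_wet"]   # illustrative names
X = df[features]
y = df["wheel_slide_event"]                   # 1 if a slide was recorded in the segment

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```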

We are currently looking to apply the techniques that have been used in the prediction of solar flares to ‘multi-variable’ analysis of rail problems. We are working closely with a team from Georgia State University (GSU), one of the leaders in the emerging field of data science, who have recently made significant advances in big data analytics related to prediction of solar flares that we believe can be directly applied to complex rail problems. The GSU techniques are based on the combination of decision trees and deep neural networks feeding off multiple data streams.
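The following is a minimal illustration of that idea (it is not the GSU code): features drawn from several separate data streams are concatenated and fed to a soft-voting ensemble of a decision tree and a small neural network, here using scikit-learn on synthetic data:

```python
# Illustrative only: a decision tree and a neural network voting over features
# from several invented data streams. Column blocks and names are made up.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
vehicle_stream = rng.normal(size=(n, 4))    # e.g. speed, braking, power, sanding
weather_stream = rng.normal(size=(n, 3))    # e.g. temperature, rainfall, leaf-fall index
track_stream = rng.normal(size=(n, 3))      # e.g. gradient, curvature, railhead condition
X = np.hstack([vehicle_stream, weather_stream, track_stream])
y = (X[:, 0] + 0.5 * X[:, 4] - X[:, 7] + rng.normal(scale=0.5, size=n) > 0).astype(int)

ensemble = VotingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=5)),
                ("dnn", MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000))],
    voting="soft")
print(cross_val_score(ensemble, X, y, cv=5).mean())
```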

This multi-data stream approach to prediction fits well with our ELBowTie© risk analysis methodology. Over the coming months we will be setting up analysis of real-time train data to prove the technique. Siemens Traincare in Manchester is responsible for maintaining the Siemens Desiro diesel multiple unit fleet.

For this project, in-service data will be supplied from the trains, including train position (from GPS), braking and power data, station dwell data, wheel slide data, sanding application data, speed, acceleration and so on. We will use this data to attempt to predict when a train is most at risk of sliding, when the driver needs to be warned, and when sand needs to be applied to stop wheel slides. By reducing the number of alerts and optimising the use of sand, the system will suffer fewer delays and less damage from wheel flats.
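One simple way to reduce nuisance alerts, sketched below purely as an example, is to require the predicted slide risk to stay above a threshold for several consecutive samples before warning the driver or demanding sand; the threshold and hold time shown are arbitrary:

```python
# Example alert-debouncing logic: only raise an alert / sanding demand when the
# predicted risk stays high for a minimum number of consecutive samples.

def alert_stream(risk_scores, threshold=0.7, hold_samples=3):
    """Yield True when an alert or sanding demand should be raised."""
    above = 0
    for score in risk_scores:
        above = above + 1 if score >= threshold else 0
        yield above >= hold_samples

scores = [0.2, 0.75, 0.8, 0.85, 0.6, 0.9, 0.95, 0.97]
print(list(alert_stream(scores)))   # alerts only after sustained high risk
```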


Working it out in Logs


Railway safety management is a complex subject that involves a significant amount of manual intervention in the assessment, analysis and control of risk. Supporting documentation is usually worked on by multiple parties, with differences in system viewpoints and writing styles. Maintaining quality safety documentation is therefore an interesting challenge for the industry. Hazard logs, for example, play a central role in both system engineering and risk assessment activity.

The role of the log is to contain a representation of the risks related to the system under consideration. The content of the hazard log relies upon input from a variety of sources and collaborative activities involving teams with varying expertise and knowledge. From past experience we have found that the quality of this information can vary greatly, both within and between projects. This is particularly so for larger projects, where problems can arise as the amount of textual data that has to be processed increases. The volume and variety of the data, and the need for collaboration, create the significant challenge of managing the content while maintaining textual readability, format and consistency.

A Program to analyse Hazard Log quality

What we are currently working on is a tool that automatically assesses the ‘quality’ of a hazard log. The intention is that the tool can be used to monitor the quality of a hazard log in ‘real’ time, or at least at regular intervals during a project, or for checking the output from critical risk workshop sessions. The tool uses Natural Language Processing and machine learning to assess the quality of a hazard log based solely on the textual content in the log. The method includes text classification and term frequency–inverse document frequency (TF-IDF) weighting to identify important keywords in the different textual elements that represent quality indicators.
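As an illustration of the text-classification element, the sketch below scores free-text hazard log entries with TF-IDF features and a simple classifier trained on entries a reviewer has already labelled; the example entries and labels are invented:

```python
# A hedged sketch of TF-IDF text classification over hazard log entries.
# The training entries and labels below are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

entries = [
    "Train passes signal at danger due to brake fault, leading to collision risk.",
    "Something bad might happen somewhere on the line.",
    "Loss of shunt causes level crossing to open with train approaching.",
    "Issue with stuff near the depot.",
]
labels = [1, 0, 1, 0]   # 1 = acceptable wording, 0 = too vague (reviewer-assigned)

scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
scorer.fit(entries, labels)

new_entry = "Problems may occur in the area."
print(scorer.predict_proba([new_entry])[0, 1])   # probability the entry reads as acceptable
```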

The intention is not to replace a human expert, but rather to support assessments by providing an early indication of the quality of the textual data in the log. This involves checking for signs of imprecise and unclear writing and identifying issues that may make it hard for readers to fully interpret accident sequences. The tool has been built around the CENELEC standards to aid compliance with both the standards and risk management best practice in general.

A preliminary study in collaboration with Lancaster University has been undertaken to prove the method. Results from this study have demonstrated the power of using textual analysis in this arena. We have identified a number of hazard log quality indicators and developed demonstrator software which performed well against a manual evaluation of a sample data set. In general, the tool can save users time and effort by helping in the review of entries in the log. It can also help clarify thinking around accident sequences by highlighting ambiguous or multi-content entries.
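One of the simpler kinds of quality indicator can be illustrated in a few lines of code: flagging entries that use vague wording or appear to bundle several accident sequences into one row. The word lists below are examples only, not the indicators used in the study:

```python
# Illustrative heuristics for two quality indicators: vague wording and
# possible multi-content entries. Term lists are invented examples.

import re

VAGUE_TERMS = {"issue", "problem", "stuff", "somewhere", "etc"}
SPLIT_MARKERS = (" and/or ", "; ", " as well as ")

def quality_flags(entry: str) -> list[str]:
    """Return a list of indicator names triggered by this hazard log entry."""
    flags = []
    words = set(re.findall(r"[a-z]+", entry.lower()))
    if words & VAGUE_TERMS:
        flags.append("vague wording")
    if any(marker in entry.lower() for marker in SPLIT_MARKERS):
        flags.append("possible multi-content entry")
    return flags

print(quality_flags("Brake failure and/or signal fault; issue with door interlock"))
```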


The results of this study will be presented at the Transport Research Arena conference in April 2018 in Vienna.


http://www.traconference.eu/

 

Shedding Light on Train Delays

Train delays on Britain’s railways cost the rail industry over £100 million a year; the additional cost to the economy in lost time is over £1 billion [1]. On-time performance is directly impacted by a multitude of events, including train reliability, poor track quality and operation under degraded conditions. To better understand what constitutes the major contributions to delays, the rail industry has embarked on a comprehensive data collection and analysis programme [2].

Although it is recognised that there are many potential causes of train delays, and that data need to be acquired from a wide range of sources, the current approach to analysis is simply to use each individual data source essentially in isolation. This has arisen primarily from a desire to solve a single targeted problem. However, if all the data sources were examined in unison, using the power of modern big data analytics, it is highly probable that the industry would not only identify the major contributions to delays more effectively and efficiently, but would also make important discoveries of problem areas that would otherwise remain hidden. Through “knowledge discovery from data” (KDD), this could potentially save the rail industry millions of pounds.
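The ‘in unison’ step is, at heart, a data integration exercise. The sketch below joins hypothetical delay, weather and track-quality extracts into a single analysable table; file and column names are invented for illustration:

```python
# A sketch of combining separate data sources into one frame so that
# cross-source delay patterns can be explored. Files and columns are hypothetical.

import pandas as pd

delays = pd.read_csv("delay_incidents.csv", parse_dates=["date"])   # per-incident records
weather = pd.read_csv("weather_daily.csv", parse_dates=["date"])    # per-route-section weather
track = pd.read_csv("track_quality.csv")                            # per-route-section geometry

combined = (delays
            .merge(weather, on=["date", "route_section"], how="left")
            .merge(track, on="route_section", how="left"))

# With everything in one table, delay minutes can be summarised across sources,
# e.g. by rainfall band and track quality class.
summary = combined.groupby(["rain_band", "track_class"])["delay_minutes"].sum()
print(summary)
```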


Over the last few years there has been significant anticipation associated with the use of big data techniques for the analysis of rail-related data. However, the major expectations for the approach have yet to be fully realised.

We are currently looking to apply the techniques that have been used in the prediction of solar flares to ‘multi-variable’ analysis of rail problems. We are working closely with a team from Georgia State University (GSU), one of the leaders in the emerging field of data science, who have recently made significant advances in big data analytics related to prediction of solar flares that we believe can be directly applied to complex rail problems. The GSU techniques are based on the combination of decision trees and deep neural networks feeding off multiple data streams.

 


This multi-data stream approach to prediction fits well with our ELBowTie risk analysis methodology. Over the coming months we will be setting up analysis of real-time train data to prove the technique and demonstrate ELBowTie working live.
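On the real-time side, a minimal sketch of the plumbing might look like the following: rolling-window features are maintained over incoming train data samples so that a trained model can be scored continuously. Field names are illustrative only:

```python
# A minimal sketch of rolling-window feature maintenance for live scoring.
# Signal names and window size are illustrative assumptions.

from collections import deque
from statistics import mean

class RollingFeatures:
    """Keep the last `window` samples of each signal and expose simple features."""
    def __init__(self, window: int = 30):
        self.window = window
        self.buffers: dict[str, deque] = {}

    def update(self, sample: dict[str, float]) -> dict[str, float]:
        for name, value in sample.items():
            self.buffers.setdefault(name, deque(maxlen=self.window)).append(value)
        return {f"{name}_mean": mean(buf) for name, buf in self.buffers.items()}

features = RollingFeatures(window=5)
for sample in ({"speed_kmh": 90, "brake_demand": 0.1},
               {"speed_kmh": 85, "brake_demand": 0.4}):
    print(features.update(sample))   # features fed to the trained risk model
```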

References

[1] “Reducing passenger rail delays by better management of incidents”, National Audit Office, 2008, https://www.nao.org.uk/report/reducing-passenger-rail-delays-by-better-management-of-incidents/

[2] “A Digital Railway for a Modern Britain”, Network Rail, 2017, http://digitalrailway.co.uk

 

Institution websites:

https://www.digitaltransit.co.uk/

http://www.gsu.edu/

Hornby BIG DATA Project

Overall Purpose of the Role:

There is a big push to make railways more effective and to increase capacity. The main opportunity is to run trains closer together and autonomously. We are building a model railway that can be used to explore possibilities for this. The main theme will be to use an existing model railway set-up that already has digital control. See the set-up at https://www.youtube.com/watch?v=AjSHDU2B474

Hornby BIG DATA Project Layout Under DCC Control

The idea is to fit proximity sensors to the track and model trains, and tachometers to the trains, so that the system knows exactly where each train is on the train-set. The movement of points will be automated based upon the routes chosen for the trains. The trains will be kept apart by knowing their own speeds, lengths and braking distances, and those of the other trains on the train-set. There should also be some facility to establish that a train formation is complete (i.e. it has not left a coach behind due to an inadvertent decoupling).
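As a desk illustration of the separation rule (written in Python as a simulation sketch, not the Arduino code itself), a following train is only allowed to proceed if the gap to the train ahead exceeds its braking distance plus a margin; all figures are model-railway-scale examples:

```python
# A back-of-envelope moving-block separation check. Speeds, decelerations and
# margins are illustrative model-railway values, not measured ones.

def braking_distance(speed_mm_s: float, decel_mm_s2: float) -> float:
    """Distance needed to stop from the current speed at constant deceleration."""
    return speed_mm_s ** 2 / (2 * decel_mm_s2)

def safe_to_proceed(gap_mm: float, follower_speed: float,
                    follower_decel: float, margin_mm: float = 200.0) -> bool:
    """True if the gap to the train ahead exceeds braking distance plus a margin."""
    return gap_mm > braking_distance(follower_speed, follower_decel) + margin_mm

# Example: follower at 300 mm/s, braking at 150 mm/s^2, 600 mm behind the train ahead.
print(safe_to_proceed(gap_mm=600, follower_speed=300, follower_decel=150))
```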

It may be necessary to fit a vision system to the office ceiling to ascertain where the trains are on the network, and potentially to the front of each train to detect objects fouling the line or other trains. We are using Arduino technologies and commercially available sensors, cameras and radio; see the attached appendix. The finished system will be used as a training tool and as a means to experiment with different traffic management strategies and autonomous technologies. We will also look to experiment with detecting the health and resilience of the various components, such as the electric train drives, using available data such as current usage, noise, etc.
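The health-monitoring idea can be illustrated with a very simple anomaly check: flag a drive whose current draw drifts well away from its recent history. Thresholds and readings below are invented examples:

```python
# Illustrative drive-health check: flag a current reading that sits far outside
# its recent history. Data and threshold are invented examples.

from statistics import mean, stdev

def current_anomaly(history_ma: list[float], latest_ma: float, z_limit: float = 3.0) -> bool:
    """Flag the latest reading if it is more than z_limit sigma from recent history."""
    mu, sigma = mean(history_ma), stdev(history_ma)
    return sigma > 0 and abs(latest_ma - mu) / sigma > z_limit

history = [210, 205, 215, 208, 212, 209, 211, 207]   # recent motor current, mA
print(current_anomaly(history, 260))   # True: possible sticking mechanism or fault
print(current_anomaly(history, 213))   # False: within normal variation
```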

Before the project starts we will have a preliminary system design worked out; however, this will need to be detailed, adapted and implemented.

The system is currently controlled by a Hornby OO train set and E-Link/Train Master digital control system. See the pictures below.

Main Tasks to do:

  • To develop a technology demonstrator of a moving block automated signalling system based upon a Hornby OO train set and E-Link/Train Master digital control system.
  • To write any code necessary.
  • To configure electronics and sensors and integrate them with the software.
  • To carry out practical tasks such as soldering, electrical testing and using basic modelling hand tools.

Bill of Material So Far:


 

Will a data driven approach replace RAMS and conventional safety engineering?

If one looks through the railway safety and reliability standards, EN 50126 / EN 50128/9 (https://www.linkedin.com/groups/4985242), and the recommended activity at each life-cycle stage, it will be seen that nearly all the items are data driven. Using Big Data and Machine Learning we could eliminate a lot of the work now done by safety engineers. Of course, you would have to follow a more formal systems engineering approach to ensure that the data was created and logged properly, but requirements management databases like DOORS and system modelling using, for example, SysML (Systems Modeling Language) would enable that to happen, coupled with BIM (Building Information Modelling). BIM means that railway design and development is properly documented through the use of CAD.

Factors influencing railway RAMS

The diagram above shows the factors affecting RAMS, and they are all based upon some form of data, whether structured or unstructured, real-time or historical. They all provide management information that must be acted upon using some form of decision logic, all of which is amenable to replication by an intelligent machine.

I conclude that RAMS is now a past-its-sell-by-date commodity. The fault tree below might soon be made redundant, replaced by real data and modelling.

Fault tree