Javier Canon, Director, MAANA, speaking on a Knowledge Integration Perspective on Field Development & Operations in the Oil & Gas industry
So good to be here today with you. This presentation is titled “A Knowledge Integration Perspective on Field Development and Operations.” So here the story begins with an oil and gas company recognizing some inefficiencies in the context of major capital projects.
If we think of major capital projects (MCPs), we have three main areas that are the drivers for MCPs in the industry. First of all, we have Capex. In this particular case this oil and gas company has identified some opportunities to increase the level of standardization in engineering design.
If we look at the schedule, schedule is of course one of the main drivers in capital projects. It’s a well-known reality in the industry that over 80 percent of upstream facilities experience delays in first oil. So here I have a chart that helps provide some context on the impact of a delay in first oil.
It’s quite significant on the NPV, as you see. The third item that I have is risk. Of course, risk could take many different contexts. We could think of technical risk, and we can also think of financial and even regulatory risk.
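To make the schedule point concrete, a back-of-the-envelope discounted cash flow shows why a delay in first oil hits NPV so hard. This is a minimal sketch; the cash flow, discount rate, and field life figures are invented for illustration and are not from the presentation.

```python
# Hypothetical illustration of how a delay in first oil erodes NPV.
# All figures (cash flow, discount rate, field life) are assumptions.

def npv(cashflows, rate):
    """Discount a list of annual cash flows back to year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

ANNUAL_CASHFLOW = 100.0   # $MM per producing year (assumed flat)
FIELD_LIFE = 20           # producing years
RATE = 0.10               # assumed discount rate

def project_npv(delay_years):
    # A delay in first oil shifts the whole production profile later in time.
    cashflows = [0.0] * delay_years + [ANNUAL_CASHFLOW] * FIELD_LIFE
    return npv(cashflows, RATE)

base = project_npv(0)
for d in (1, 2):
    loss = (base - project_npv(d)) / base
    print(f"{d}-year delay: NPV down {loss:.0%}")
```

At a 10% discount rate, each year of delay scales the whole NPV by 1/1.1, roughly a 9% loss per year before any cost overruns are even counted.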
In this presentation this particular oil and gas company is looking at the impact on risk from suboptimal engineering design. So suboptimal engineering design could lead to delays in the completion of the project but it could also lead to production vulnerabilities or production losses once the asset is operational and it can also lead to HSE incidents in the life of a facility.
If we think of these inefficiencies, there’s kind of a flip side to that story, which is that there are also opportunities for adding value and improving the way in which these same MCPs are executed.
If we think of the common thread for these inefficiencies and opportunities, many people would argue that the technical complexity and the large scale of MCPs in the industry is a common root cause for these things to happen. I would like to make an argument that very often this is the result of a segregation of knowledge.
What do I mean by that? So this little pie chart here helps explain how knowledge is siloed in the context of a capital project. In the top portion of the chart we have the knowledge that resides inside the oil and gas operator or the E&P company, and the bottom portion is looking at the external knowledge that exists in the external parties and in the industry in general.
So if we concentrate on the top portion of the pie chart, we can start by talking about engineering standards. For this particular oil and gas company there were over 500 engineering standards that were essentially in the form of PDF files centralized on SharePoint.
This is actually quite common for many operators in our industry. There are two issues related to engineering standards in general: the first one is that they can be overly complex, and a standard is very often subject to interpretation. The second one is that the engineering standard does not encapsulate all the knowledge that actually exists inside the operator. That leads me to the slice on the left portion of the chart, which is the corporate knowledge.
For, I would say, any oil and gas company, the corporate knowledge resides in great part in the brains of highly skilled technical resources. Attrition and loss of knowledge is very real for this operator, as it is for the industry as a whole.
As a matter of fact, it’s been quoted that over 30% of the critical resources related to MCPs, in upstream in particular, will be leaving the industry in the next five years, so this is a real concern. So moving on to the right-hand side, I just want to mention a couple of things about lessons learned from operations.
If we look at the context of an upstream facility that operates for 20-30 years, the context, the content, and the volume of the lessons that an E&P company accumulates throughout the operation are quite significant.
Most companies actually struggle to make a connection between these learnings in operation over to the projects that are currently in the pipeline. This is segregation of knowledge.
What if we think of integration of knowledge instead? This is where technology actually has a role, and I would start by saying that a few years ago there was a lot of interest from many industrial companies, including some oil and gas operators, in investing in data lakes.
This is the concept of taking all this siloed data and centralizing it, and then the expectation was that by doing that these businesses would be able to drive some insights, improve decision-making, and improve all these different business metrics.
The reality is that very often data lake efforts were followed by development of applications that sat on top of these data lakes. These applications ended up creating new silos of knowledge. Some companies have recognized that there is a different approach to be taken and I just want to show you this diagram which is a digital stack.
This is the way some companies in the industrial space but also in the oil and gas industry are looking at reorganizing sets of technologies.
There’s of course a lot of information here, but I would just like to mention that the concept that we’re going for is actually this knowledge layer you see there in the stack.
What is that knowledge layer? I think the easiest way to explain that is through a couple of examples that I’ll be showing you here. I just want to emphasize a couple of points on this digital stack. One is the interconnectivity; this is essentially using the right tool for the job.
The second thing is the environment in which this integration takes place. It is very clear for our industry that this is actually the cloud; this is where these things are residing. The first case I have to illustrate this concept is called similarity.
Here we’re looking at the problem of piping class proliferation. Piping is an essential component of every facility in oil and gas. A piping class is essentially a group of pipes that share common attributes, and what typically happens in oil and gas companies is the following: there is a central engineering function that defines a group of classes to be used by the businesses, but very often business units end up creating new classes based on different motivations, which often leads to proliferation. Proliferation can be associated with incremental cost and also with operational risk.
In the context of this piping situation, a subject matter expert spends a significant effort trying to consolidate the number of classes for a given business unit. This is a reference point that helps articulate the intensity of the effort: one SME for one year to consolidate the classes for one business unit. An example of a knowledge application here has essentially two components, and now this is actually looking at the back end of this particular solution. This is a knowledge model of the domain of piping consolidation.
What we have is essentially two parts to the model. One part, which is the one you’re looking at, is actually a representation of the knowledge of the SME. This part of the model is able to replicate the actions of this particular SME as that person does the consolidation.
So this is one part where you have essentially concepts that are relevant for the piping domain. Things like a corrosion allowance, service, pressure rating, and so on and different relationships between these concepts.
What you have on the right-hand side is essentially some components that use machine learning techniques to calculate similarity between different classes. So there are two aspects to the solution, and this is the back end; what the user is getting out of this is recommendations for piping classes to be consolidated.
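The attribute-based similarity idea just described can be sketched in a toy form. The attribute names, weights, and normalization ranges below are all assumptions for illustration; the actual knowledge model and ML components in the talk are not public.

```python
# Toy piping-class catalogue: a few engineering attributes per class.
# Class names, attributes, and values are invented for this sketch.
classes = {
    "CS150-A": {"corrosion_allowance": 3.0, "pressure_rating": 150, "service": "produced water"},
    "CS150-B": {"corrosion_allowance": 3.0, "pressure_rating": 150, "service": "produced water"},
    "SS300-A": {"corrosion_allowance": 0.0, "pressure_rating": 300, "service": "sour gas"},
}

def similarity(a, b):
    """Weighted blend: exact match on service, closeness on numeric attributes."""
    service = 1.0 if a["service"] == b["service"] else 0.0
    ca = 1.0 - min(abs(a["corrosion_allowance"] - b["corrosion_allowance"]) / 6.0, 1.0)
    pr = 1.0 - min(abs(a["pressure_rating"] - b["pressure_rating"]) / 600.0, 1.0)
    return 0.4 * service + 0.3 * ca + 0.3 * pr

def consolidation_candidates(threshold=0.95):
    """Recommend pairs of classes similar enough for an SME to review for merging."""
    names = sorted(classes)
    return [(a, b, similarity(classes[a], classes[b]))
            for i, a in enumerate(names)
            for b in names[i + 1:]
            if similarity(classes[a], classes[b]) >= threshold]
```

The point of the design is that the output is a ranked candidate list for the SME to accept or reject, not an automatic merge.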
This is not meant to replace the SME by any means, but it’s meant to assist the work of this person. So up to this point you might argue that you could design this type of application using a large number of technologies that are available in the market.
What’s different here is the scalability of these types of solutions. The same way we look at similarity between piping classes, we can think of rotating equipment, subsea hardware, and many other types of elements and hardware that could benefit from this type of solution.
This is one area in which this model, or these models, could be reused; they could be applied to a different problem. Everything I have described so far is in the context of engineering.
We’re calculating similarity of piping classes based on parameters that are physical to piping classes. We can also think of enriching this with other functions. What if we were able to incorporate some data from procurement in the consolidation process? What if we were able to look at a list of approved vendors and have that weigh into the recommendations that are made for the SME? This is where all this is going, and most importantly, these models could all be interconnected.
That’s the first use case. The second use case I have is called risk classification. Here we worked with the design assurance group of an oil and gas company.
This group is responsible for doing audits of capital projects and looking at the engineering design and trying to surface some vulnerabilities in the design of a particular facility. The process follows a methodology that is very common in oil and gas.
For those of you who have worked in technical roles in the industry you have probably participated in hazards or risk assessment sessions. What this is, is essentially a meeting or a series of meetings where a group of engineers from different disciplines try to look at the technical documentation of a project or an operating asset and then try to identify the potential risks that exist in this type of operation.
The output is essentially captured in a large table that has engineering comments, very often in free text form, and these comments are scored in a risk matrix; different oil and gas companies have different versions of these matrices.
These matrices try to capture the probability of the risk and the severity of the risk, and depending on the score, the actions would have different levels of priority. This process is common throughout our industry, and it serves our industry very well I would say, but it also has a couple of problems. The first one is that reaching consistency is very hard.
Why is that? Because in these sessions we essentially have a group of engineers who bring their own experiences, their own expertise, and the data they have at hand, and these things influence the outcome of these risk assessment sessions.
There’s a recognized lack of consistency in scoring this risk. The second problem is a point that I mentioned before, in terms of the missing link from operations to new projects.
Very often, when a major issue is discovered in a late phase of a capital project, the conclusion is: we have seen this before, it happened in another asset, but somehow we failed to make the connection.
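The probability-times-severity scoring behind such a matrix can be sketched generically. Real companies use different matrix sizes and priority bands, so the 3x3 layout and the score cut-offs below are purely illustrative assumptions.

```python
# Generic 3x3 risk-matrix sketch: probability band x severity band -> priority.
# Band boundaries are invented for illustration.

def priority(probability, severity):
    """probability and severity are bands from 1 (low) to 3 (high)."""
    score = probability * severity
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"
```

The inconsistency problem described above comes from different engineers picking different probability and severity bands for the same comment, which moves the action to a different priority cell.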
So what we developed here was essentially an application. What you’re looking at here is the front end of this application; there’s a knowledge model that sits behind it. What you’re seeing here is essentially a very simple user interface where the user enters free text, and the application is going to score the risk that the user types.
In all these boxes there’s a lot of information. These particular cases are related to a gas flotation unit that doesn’t have a certain level of personal protection, so it could lead to an HSE type of incident, something that could be quite high in terms of severity.
That’s essentially what the user is doing and then once the user enters all the text and checks the score then you get a kind of a prediction for where the risk falls in the matrix.
There’s a machine learning module that is able to take this information and score it based on a training set that was prebuilt. One feature I’d like to emphasize here is that this model is not only predictive but can also learn from the interaction of the user: in case the SME believes the classification is incorrect, there’s a place for that person to say, this is not the correct classification, it is actually a different category.
All this information from the user makes it back to the model, so the model also captures knowledge from the interaction of the user. That’s where we think things are getting really exciting. The outcome of this particular application was fairly positive; this particular group used the application on a regular basis.
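The predict-then-correct loop just described can be sketched in a toy form. The bucket names, example comments, and the token-overlap scoring are all invented for illustration; the actual system presumably uses a proper trained classifier.

```python
# Toy sketch: score free-text engineering comments into risk buckets, and
# fold the SME's corrections back into the labeled data.

from collections import Counter

class RiskScorer:
    def __init__(self):
        self.examples = []  # list of (token_counts, bucket)

    @staticmethod
    def _tokens(text):
        return Counter(text.lower().split())

    def train(self, text, bucket):
        self.examples.append((self._tokens(text), bucket))

    def predict(self, text):
        """Return (bucket, confidence) from the nearest labeled example."""
        query = self._tokens(text)
        def overlap(example):
            counts, _bucket = example
            return sum((query & counts).values()) / max(sum(query.values()), 1)
        best = max(self.examples, key=overlap)
        return best[1], overlap(best)

    def feedback(self, text, corrected_bucket):
        # The SME's correction becomes a new labeled observation, so the
        # model also captures knowledge from the interaction.
        self.train(text, corrected_bucket)

scorer = RiskScorer()
scorer.train("missing personal protection on flotation unit", "high")
scorer.train("minor paint defect on a handrail", "low")
```

The design point is the `feedback` path: the system is an assistant whose labeled set grows with every correction, rather than a frozen model.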
We’ve managed to prove a good level of consistency and accuracy in this prediction. But perhaps the best statement of the outcome of this application is this quote that was provided by the business owner of this particular application.
This person is referring to this as a game changer for the organization and a way to preserve the collective knowledge of the company for years to come.
Those were the two examples I wanted to share. I just want to leave you with this slide, again emphasizing the concept of integration of knowledge as we see it in these models. They not only provide solutions to given problems but, most importantly, they could be connected.
We see this as the evolution of corporate knowledge in an oil and gas company. All right, thank you very much.
Moderator: Thank you. Do we have questions for Javier?
Javier: The question is about how you audit this, and what was the second part of your question? Audit, and also related to machine learning. Machine learning is one of the main subjects at the moment, and this is opening the potential for changing the dynamics of how decisions are made.
I don’t think we have a comprehensive answer or methodology for tackling this problem. I think what we would like to emphasize is that ultimately, it’s humans who make decisions. The algorithms are there to assist decision making but we don’t see these technologies as being autonomous. When we hear these arguments there’s also the kind of perception that this could kind of displace the workforce.
Of course, there are some valid concerns in that area, but I think these types of applications open opportunities for SMEs to really leverage their knowledge in the areas where they are most effective.
The question is that, in the case of, say, risk classification, you mention that you have to have a large data set to be able to train a model properly. What we found in this particular experience is that the data set was not that large in fact; what we went for was essentially fifty labeled observations for each one of the buckets in the matrix. After training the model with that data set, the model was tested against data that it hadn’t seen, and what we saw right off the bat was that the consistency was already better than the baseline that was established for human prediction, which actually opens the way to many more things.
Actually, when we look at how consistently humans score a set of risks, the answer is sometimes not what we want to see, because we tend not to be very consistent when it comes to classifying things of this nature.
What we noticed is that with limited effort, and with the data set not being extremely large, we were able to achieve accuracy that was very good. From that point, the application helps surface the edge cases.
I didn’t show that here because I didn’t have the time, but in case a user enters an observation that is very far from the data set, the system will probably say the confidence in the prediction is very low, and it will request feedback from the user. I think we tend to talk about accuracy, but one of the main problems is actually consistency. I think that’s the way to look at it. Thank you.
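The confidence-gating behavior described here can be sketched as a simple wrapper. The 0.3 threshold is an arbitrary assumption, and `toy_predict` is a stand-in for whatever classifier is actually behind the application.

```python
# Sketch: when the prediction confidence is low (observation far from the
# training data), flag it for SME review instead of trusting the score.

def score_with_fallback(predict, text, threshold=0.3):
    """predict(text) -> (bucket, confidence); gate low-confidence results."""
    bucket, confidence = predict(text)
    if confidence < threshold:
        return {"status": "needs_review", "suggested": bucket}
    return {"status": "scored", "bucket": bucket}

def toy_predict(text):
    # Stand-in predictor: confident only about comments mentioning pressure;
    # everything else is "far from the training data" in this toy.
    return ("high", 0.9) if "pressure" in text else ("low", 0.1)
```

This keeps the human in the loop exactly where the model is weakest, which is the stance taken throughout the talk.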